Proceedings of ACL-08: HLT, pages 37–45, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics PDT 2.0 Requirements on a Query Language Jiří Mírovský Institute of Formal and Applied Linguistics Charles University in Prague Malostranské nám. 25, 118 00 Prague 1, Czech Republic [email protected] Abstract Linguistically annotated treebanks play an essential part in the modern computational linguistics. The more complex the treebanks become, the more sophisticated tools are required for using them, namely for searching in the data. We study linguistic phenomena annotated in the Prague Dependency Treebank 2.0 and create a list of requirements these phenomena set on a search tool, especially on its query language. 1 Introduction Searching in a linguistically annotated treebank is a principal task in the modern computational linguistics. A search tool helps extract useful information from the treebank, in order to study the language, the annotation system or even to search for errors in the annotation. The more complex the treebank is, the more sophisticated the search tool and its query language needs to be. The Prague Dependency Treebank 2.0 (Hajič et al. 2006) is one of the most advanced manually annotated treebanks. We study mainly the tectogrammatical layer of the Prague Dependency Treebank 2.0 (PDT 2.0), which is by far the most advanced and complex layer in the treebank, and show what requirements on a query language the annotated linguistic phenomena bring. We also add requirements set by lower layers of annotation. In section 1 (after this introduction) we mention related works on search languages for various types of corpora. Afterwards, we very shortly introduce PDT 2.0, just to give a general picture of the principles and complexion of the annotation scheme. In section 2 we study the annotation manual for the tectogrammatical layer of PDT 2.0 (t-manual, Mikulová et al. 2006) and collect linguistic phenomena that bring special requirements on the query language. We also study lower layers of annotation and add their requirements. In section 3 we summarize the requirements in an extensive list of features required from a search language. We conclude in section 4. 1.1 Related Work In Lai, Bird 2004, the authors name seven linguistic queries they consider important representatives for checking a sufficiency of a query language power. They study several query tools and their query languages and compare them on the basis of their abilities to express these seven queries. In Bird et al. 2005, the authors use a revised set of seven key linguistic queries as a basis for forming a list of three expressive features important for linguistic queries. The features are: immediate precedence, subtree scoping and edge alignment. In Bird et al. 2006, another set of seven linguistic queries is used to show a necessity to enhance XPath (a standard query language for XML, Clark, DeRose 1999) to support linguistic queries. Cassidy 2002 studies adequacy of XQuery (a search language based on XPath, Boag et al. 1999) for searching in hierarchically annotated data. Re37 quirements on a query language for annotation graphs used in speech recognition is also presented in Bird et al. 2000. A description of linguistic phenomena annotated in the Tiger Treebank, along with an introduction to a search tool TigerSearch, developed especially for this treebank, is given in Brants et al. 2002, nevertheless without a systematic study of the required features. 
Laura Kallmeyer (Kallmeyer 2000) studies requirements on a query language based on two examples of complex linguistic phenomena taken from the NEGRA corpus and the Penn Treebank, respectively. To handle alignment information, Merz and Volk 2005 study requirements on a search tool for parallel treebanks. All the work mentioned above can be used as an ample source of inspiration, though it cannot be applied directly to PDT 2.0. A thorough study of the PDT 2.0 annotation is needed to form conclusions about requirements on a search tool for this dependency tree-based corpus, consisting of several layers of annotation and having an extremely complex annotation scheme, which we shortly describe in the next subsection. 1.2 The Prague Dependency Treebank 2.0 The Prague Dependency Treebank 2.0 is a manually annotated corpus of Czech. The texts are annotated on three layers – morphological, analytical and tectogrammatical. On the morphological layer, each token of every sentence is annotated with a lemma (attribute m/lemma), keeping the base form of the token, and a tag (attribute m/tag), which keeps its morphological information. The analytical layer roughly corresponds to the surface syntax of the sentence; the annotation is a single-rooted dependency tree with labeled nodes. Attribute a/afun describes the type of dependency between a dependent node and its governor. The order of the nodes from left to right corresponds exactly to the surface order of tokens in the sentence (attribute a/ord). The tectogrammatical layer captures the linguistic meaning of the sentence in its context. Again, the annotation is a dependency tree with labeled nodes (Hajičová 1998). The correspondence of the nodes to the lower layers is often not 1:1 (Mírovský 2006). Attribute functor describes the dependency between a dependent node and its governor. A tectogrammatical lemma (attribute t_lemma) is assigned to every node. 16 grammatemes (prefixed gram) keep additional annotation (e.g. gram/verbmod for verbal modality). Topic and focus (Hajičová et al. 1998) are marked (attribute tfa), together with so-called deep word order reflected by the order of nodes in the annotation (attribute deepord). Coreference relations between nodes of certain category types are captured. Each node has a unique identifier (attribute id). Attributes coref_text.rf and coref_gram.rf contain ids of coreferential nodes of the respective types. 2 Phenomena and Requirements We make a list of linguistic phenomena that are annotated in PDT 2.0 and that determine the necessary features of a query language. Our work is focused on two structured layers of PDT 2.0 – the analytical layer and the tectogrammatical layer. For using the morphological layer exclusively and directly, a very good search tool Manatee/Bonito (Rychlý 2000) can be used. We intend to access the morphological information only from the higher layers, not directly. Since there is relation 1:1 among nodes on the analytical layer (but for the technical root) and tokens on the morphological layer, the morphological information can be easily merged into the analytical layer – the nodes only get additional attributes. The tectogrammatical layer is by far the most complex layer in PDT 2.0, therefore we start our analysis with a study of the annotation manual for the tectogrammatical layer (t-manual, Mikulová et al. 2006) and focus also on the requirements on accessing lower layers with non-1:1 relation. 
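To make the annotation scheme above easier to picture, the following is a minimal sketch of an in-memory representation of the three layers. It is illustrative only: the class and field names are ours, not part of PDT 2.0 or of any existing tool, and the link from a tectogrammatical node to its analytical counterparts is kept as a plain list to reflect the non-1:1 correspondence.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class MNode:
    """Morphological layer: one token with its lemma and positional tag."""
    form: str            # original token (kept on the technical w-layer)
    m_lemma: str         # attribute m/lemma
    m_tag: str           # attribute m/tag, a 15-character positional tag

@dataclass
class ANode:
    """Analytical layer: surface-syntax dependency node (1:1 with tokens)."""
    a_afun: str          # type of surface dependency on the governor
    a_ord: int           # surface word order
    morph: MNode         # morphological attributes merged into the a-layer
    children: List["ANode"] = field(default_factory=list)

@dataclass
class TNode:
    """Tectogrammatical layer: deep-syntax dependency node."""
    id: str                                                 # unique within PDT 2.0
    t_lemma: str
    functor: str                                            # e.g. ACT, PAT, PRED, COMPL
    tfa: Optional[str] = None                               # contextual boundness
    deepord: float = 0.0                                    # deep word order
    gram: Dict[str, str] = field(default_factory=dict)      # the 16 grammatemes
    coref_text_rf: List[str] = field(default_factory=list)  # textual coreference (ids)
    coref_gram_rf: List[str] = field(default_factory=list)  # grammatical coreference (ids)
    compl_rf: List[str] = field(default_factory=list)       # dual dependency of a complement
    a_nodes: List[ANode] = field(default_factory=list)      # non-1:1 link to the a-layer
    children: List["TNode"] = field(default_factory=list)
```

The sketches in the following subsections reuse these record types.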
Afterwards, we add some requirements on a query language set by the annotation of the lower layers – the analytical layer and the morphological layer. During the studies, we have to keep in mind that we do not only want to search for a phenomenon, but also need to study it, which can be a much more complex task. Therefore, it is not sufficient e.g. to find a predicative complement, which is a trivial task, since attribute functor of the complement is set to value COMPL. In this particular example, we also need to be able to specify in the 38 query properties of the node the second dependency of the complement goes to, e.g. that it is an Actor. A summary of the required features on a query language is given in the subsequent section. 2.1 The Tectogrammatical Layer First, we focus on linguistic phenomena annotated on the tectogrammatical layer. T-manual has more than one thousand pages. Most of the manual describes the annotation of simple phenomena that only require a single-node query or a very simple structured query. We mostly focus on those phenomena that bring a special requirement on the query language. 2.1.1 Basic Principles The basic unit of annotation on the tectogrammatical layer of PDT 2.0 is a sentence. The representation of the tectogrammatical annotation of a sentence is a rooted dependency tree. It consists of a set of nodes and a set of edges. One of the nodes is marked as a root. Each node is a complex unit consisting of a set of pairs attributevalue (t-manual, page 1). The edges express dependency relations between nodes. The edges do not have their own attributes; attributes that logically belong to edges (e.g. type of dependency) are represented as node-attributes (t-manual, page 2). It implies the first and most basic requirement on the query language: one result of the search is one sentence along with the tree belonging to it. Also, the query language should be able to express node evaluation and tree dependency among nodes in the most direct way. 2.1.2 Valency Valency of semantic verbs, valency of semantic verbal nouns, valency of semantic nouns that represent the nominal part of a complex predicate and valency of some semantic adverbs are annotated fully in the trees (t-manual, pages 162-3). Since the valency of verbs is the most complete in the annotation and since the requirements on searching for valency frames of nouns are the same as of verbs, we will (for the sake of simplicity in expressions) focus on the verbs only. Every verb meaning is assigned a valency frame. Verbs usually have more than one meaning; each is assigned a separate valency frame. Every verb has as many valency frames as it has meanings (t-manual, page 105). Therefore, the query language has to be able to distinguish valency frames and search for each one of them, at least as long as the valency frames differ in their members and not only in their index. (Two or more identical valency frames may represent different verb meanings (t-manual, page 105).) The required features include a presence of a son, its non-presence, as well as controlling number of sons of a node. 2.1.3 Coordination and Apposition Tree dependency is not always linguistic dependency (t-manual, page 9). Coordination and apposition are examples of such a phenomenon (t-manual, page 282). If a Predicate governs two coordinated Actors, these Actors technically depend on a coordinating node and this coordinating node depends on the Predicate. the query language should be able to skip such a coordinating node. 
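The "skip a coordinating node" requirement can be stated as a traversal that treats the members of a coordination as if they depended directly on the coordination's parent. A minimal sketch, reusing the illustrative TNode type introduced above; the set of functors to skip is an assumption for illustration, not an exhaustive list from the t-manual.

```python
SKIPPABLE = {"CONJ", "DISJ", "APPS"}   # assumed coordination/apposition functors

def effective_children(node, skippable=SKIPPABLE):
    """Yield dependents of `node`, descending through coordination-like nodes.

    A Predicate governing two Actors via a CONJ node thus "sees" both Actors
    directly, which is what a query for a Predicate with an Actor son needs.
    """
    for child in node.children:
        if child.functor in skippable:
            yield from effective_children(child, skippable)
        else:
            yield child

# e.g. actors = [c for c in effective_children(predicate) if c.functor == "ACT"]
```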
In general, there should be a possibility to skip any type of node. Skipping a given type of node helps but is not sufficient. The coordinated structure can be more complex, for example the Predicate itself can be coordinated too. Then, the Actors do not even belong to the subtree of any of the Predicates. In the following example, the two Predicates (PRED) are coordinated with conjunction (CONJ), as well as the two Actors (ACT). The linguistic dependencies go from each of the Actors to each of the Predicates but the tree dependencies are quite different: In Czech: S čím mohou vlastníci i nájemci počítat, na co by se měli připravit? In English: What can owners and tenants expect, what they should get ready for? 39 The query language should therefore be able to express the linguistic dependency directly. The information about the linguistic dependency is annotated in the treebank by the means of references, as well as many other phenomena (see below). 2.1.4 Idioms (Phrasemes) etc. Idioms/phrasemes (idiomatic/phraseologic constructions) are combinations of two or more words with a fixed lexical content, which together constitute one lexical unit with a metaphorical meaning (which cannot be decomposed into meanings of its parts) (t-manual, page 308). Only expressions which are represented by at least two auto-semantic nodes in the tectogrammatical tree are captured as idioms (functor DPHR). One-node (one-auto-semantic-word) idioms are not represented as idioms in the tree. For example, in the combination “chlapec k pohledání” (“a boy to look for”), the prepositional phrase gets functor RSTR, and it is not indicated that it is an idiom. Secondary prepositions are another example of a linguistic phenomenon that can be easily recognized in the surface form of the sentence but is difficult to find in the tectogrammatical tree. Therefore, the query language should offer a basic searching in the linear form of the sentence, to allow searching for any idiom or phraseme, regardless of the way it is or is not captured in the tectogrammatical tree. It can even help in a situation when the user does not know how a certain linguistic phenomenon is annotated on the tectogrammatical layer. 2.1.5 Complex Predicates A complex predicate is a multi-word predicate consisting of a semantically empty verb which expresses the grammatical meanings in a sentence, and a noun (frequently denoting an event or a state of affairs) which carries the main lexical meaning of the entire phrase (t-manual, page 345). Searching for a complex predicate is a simple task and does not bring new requirements on the query language. It is valency of complex predicates that requires our attention, especially dual function of a valency modification. The nominal and verbal components of the complex predicate are assigned the appropriate valency frame from the valency lexicon. By means of newly established nodes with t_lemma substitutes, those valency modification positions not present at surface layer are filled. There are problematic cases where the expressed valency modification occurs in the same form in the valency frames of both components of the complex predicate (t-manual, page 362). To study these special cases of valency, the query language has to offer a possibility to define that a valency member of the verbal part of a complex predicate is at the same time a valency member of the nominal part of the complex predicate, possibly with a different function. 
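For the surface-search requirement raised in the idioms subsection above, even a plain regular-expression match over the reconstructed sentence is enough to find a fixed expression regardless of how it is (or is not) captured in the tree. A minimal sketch under the illustrative types from above:

```python
import re

def surface_sentence(a_nodes):
    """Rebuild the surface sentence from analytical nodes ordered by a/ord."""
    return " ".join(n.morph.form for n in sorted(a_nodes, key=lambda n: n.a_ord))

def matches_surface(a_nodes, pattern):
    """True if the surface form matches `pattern`, e.g. the idiom "k pohledani"
    or a secondary preposition, independently of its tectogrammatical treatment."""
    return re.search(pattern, surface_sentence(a_nodes)) is not None
```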
The identity of valency members is annotated again by the means of references, which is explained later. 2.1.6 Predicative Complement (Dual Dependency) On the tectogrammatical layer, also cases of the so-called predicative complement are represented. The predicative complement is a non-obligatory free modification (adjunct) which has a dual semantic dependency relation. It simultaneously modifies a noun and a verb (which can be nominalized). These two dependency relations are represented by different means (t-manual, page 376): ● the dependency on a verb is represented by means of an edge (which means it is represented in the same way like other modifications), ● the dependency on a (semantic) noun is represented by means of attribute compl.rf, the value of which is the identifier of the modified noun. In the following example, the predicative complement (COMPL) has one dependency on a verb (PRED) and another (dual) dependency on a noun (ACT): 40 In Czech: Ze světové recese vyšly jako jednička Spojené státy. In English: The United States emerged from the world recession as number one. The second form of dependency, represented once again with references (still see below), has to be expressible in the query language. 2.1.7 Coreferences Two types of coreferences are annotated on the tectogrammatical layer: ● grammatical coreference ● textual coreference The current way of representing coreference uses references (t-manual, page 996). Let us finally explain what references are. References make use of the fact that every node of every tree has an identifier (the value of attribute id), which is unique within PDT 2.0. If coreference, dual dependency, or valency member identity is a link between two nodes (one node referring to another), it is enough to specify the identifier of the referred node in the appropriate attribute of the referring node. Reference types are distinguished by different referring attributes. Individual reference subtypes can be further distinguished by the value of another attribute. The essential point in references (for the query language) is that at the time of forming a query, the value of the reference is unknown. For example, in the case of dual dependency of predicative complement, we know that the value of attribute compl.rf of the complement must be the same as the value of attribute id of the governing noun, but the value itself differs tree from tree and therefore is unknown at the time of creating the query. The query language has to offer a possibility to bind these unknown values. 2.1.8 Topic-Focus Articulation On the tectogrammatical layer, also the topic-focus articulation (TFA) is annotated. TFA annotation comprises two phenomena: ● contextual boundness, which is represented by values of attribute tfa for each node of the tectogrammatical tree. ● communicative dynamism, which is represented by the underlying order of nodes. Annotated trees therefore contain two types of information - on the one hand the value of contextual boundness of a node and its relative ordering with respect to its brother nodes reflects its function within the topic-focus articulation of the sentence, on the other hand the set of all the TFA values in the tree and the relative ordering of subtrees reflect the overall functional perspective of the sentence, and thus enable to distinguish in the sentence the complex categories of topic and focus (however, these are not annotated explicitly) (t-manual, page 1118). 
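The reference mechanism described in the coreference subsection can be emulated by binding node identifiers per tree at match time, which is exactly the binding a query language must offer. A sketch for the predicative-complement case, again over the illustrative TNode type; the optional functor test on the referred node stands for the extra conditions a user typically needs to state.

```python
def find_dual_dependencies(root, target_functor=None):
    """Return (complement, referred_noun) pairs linked via compl.rf references."""
    nodes_by_id = {}

    def index(node):
        nodes_by_id[node.id] = node
        for child in node.children:
            index(child)

    index(root)
    pairs = []
    for node in nodes_by_id.values():
        if node.functor != "COMPL":
            continue
        for ref in node.compl_rf:
            target = nodes_by_id.get(ref)
            if target and (target_functor is None or target.functor == target_functor):
                pairs.append((node, target))
    return pairs

# e.g. find_dual_dependencies(tree_root, target_functor="ACT")
```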
While contextual boundness does not bring any new requirement on the query language, communicative dynamism requires that the relative order of nodes in the tree from left to right can be expressed. The order of nodes is controlled by attribute deepord, which contains a non-negative real (usually natural) number that sets the order of the nodes from left to right. Therefore, we will again need to refer to a value of an attribute of another node but this time with relation other than “equal to”. 2.1.8.1 Focus Proper Focus proper is the most dynamic and communicatively significant contextually non-bound part of the sentence. Focus proper is placed on the rightmost path leading from the effective root of the tectogrammatical tree, even though it is at a different position in the surface structure. The node representing this expression will be placed rightmost in the tectogrammatical tree. If the focus proper is constituted by an expression represented as the effective root of the tectogrammatical tree (i.e. the governing predicate is the focus proper), there is no right path leading from the effective root (tmanual, page 1129). 2.1.8.2 Quasi-Focus Quasi-focus is constituted by (both contrastive and non-contrastive) contextually bound expressions, on which the focus proper is dependent. The focus proper can immediately depend on the quasi-focus, or it can be a more deeply embedded expression. In the underlying word order, nodes representing the quasi-focus, although they are contextually bound, are placed to the right from their governing node. Nodes representing the quasi-focus are therefore contextually bound nodes on the rightmost 41 path in the tectogrammatical tree (t-manual, page 1130). The ability of the query language to distinguish the rightmost node in the tree and the rightmost path leading from a node is therefore necessary. 2.1.8.3 Rhematizers Rhematizers are expressions whose function is to signal the topic-focus articulation categories in the sentence, namely the communicatively most important categories - the focus and contrastive topic. The position of rhematizers in the surface word order is quite loose, however they almost always stand right before the expressions they rhematize, i.e. the expressions whose being in the focus or contrastive topic they signal (t-manual, pages 1165-6). The guidelines for positioning rhematizers in tectogrammatical trees are simple (t-manual, page 1171): ● a rhematizer (i.e. the node representing the rhematizer) is placed as the closest left brother (in the underlying word order) of the first node of the expression that is in its scope. ● if the scope of a rhematizer includes the governing predicate, the rhematizer is placed as the closest left son of the node representing the governing predicate. ● if a rhematizer constitutes the focus proper, it is placed according to the guidelines for the position of the focus proper - i.e. on the rightmost path leading from the effective root of the tectogrammatical tree. Rhematizers therefore bring a further requirement on the query language – an ability to control the distance between nodes (in the terms of deep word order); at the very least, the query language has to distinguish an immediate brother and relative horizontal position of nodes. 
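The rightmost path and the quasi-focus discussed above can be stated directly in terms of deepord. A minimal sketch over the illustrative TNode type; it assumes that the tfa values "t" and "c" mark non-contrastive and contrastive contextually bound nodes.

```python
def rightmost_path(node):
    """Follow, at each step, the son with the greatest deepord (the rightmost path)."""
    path = [node]
    while node.children:
        node = max(node.children, key=lambda c: c.deepord)
        path.append(node)
    return path

def quasi_focus_candidates(effective_root):
    """Contextually bound nodes lying on the rightmost path from the effective root."""
    return [n for n in rightmost_path(effective_root) if n.tfa in ("t", "c")]
```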
2.1.8.4 (Non-)Projectivity Projectivity of a tree is defined as follows: if two nodes B and C are connected by an edge and C is to the left from B, then all nodes to the right from B and to the left from C are connected with the root via a path that passes through at least one of the nodes B or C. In short: between a father and its son there can only be direct or indirect sons of the father (t-manual, page 1135). The relative position of a node (node A) and an edge (nodes B, C) that together cause a non-projectivity forms four different configurations: (“B is on the left from C” or “B is on the right from C”) x (“A is on the path from B to the root” or “it is not”). Each of the configurations can be searched for using properties of the language that have been required so far by other linguistic phenomena. Four different queries search for four different configurations. To be able to search for all configurations in one query, the query language should be able to combine several queries into one multi-query. We do not require that a general logical expression can be set above the single queries. We only require a general OR combination of the single queries. 2.1.9 Accessing Lower Layers Studies of many linguistic phenomena require a multilayer access. In Czech: Byl by šel do lesa. In English (lit.): He would have gone to the forest. 42 For example, the query “find an example of Patient that is more dynamic than its governing Predicate (with greater deepord) but on the surface layer is on the left side from the Predicate” requires information both from the tectogrammatical layer and the analytical layer. The picture above is taken from PDT 2.0 guide and shows the typical relation among layers of annotation for the sentence (the lowest w-layer is a technical layer containing only the tokenized original data). The information from the lower layers can be easily compressed into the analytical layer, since there is relation 1:1 among the layers (with some rare exceptions like misprints in the w-layer). The situation between the tectogrammatical layer and the analytical layer is much more complex. Several nodes from the analytical layer may be (and often are) represented by one node on the tectogrammatical layer and new nodes without an analytical counterpart may appear on the tectogrammatical layer. It is necessary that the query language addresses this issue and allows access to the information from the lower layers. 2.2 The Analytical and Morphological Layer The analytical layer is much less complex than the tectogrammatical layer. The basic principles are the same – the representation of the structure of a sentence is rendered in the form of a tree – a connected acyclic directed graph in which no more than one edge leads into a node, and whose nodes are labeled with complex symbols (sets of attributes). The edges are not labeled (in the technical sense). The information logically belonging to an edge is represented in attributes of the depending node. One node is marked as a root. Here, we focus on linguistic phenomena annotated on the analytical and morphological layer that bring a new requirement on the query language (that has not been set in the studies of the tectogrammatical layer). 2.2.1 Morphological Tags In PDT 2.0, morphological tags are positional. They consist of 15 characters, each representing a certain morphological category, e.g. 
the first position represents part of speech, the third position represents gender, the fourth position represents number, the fifth position represents case. The query language has to offer a possibility to specify a part of the tag and leave the rest unspecified. It has to be able to set such conditions on the tag like “this is a noun”, or “this is a plural in fourth case”. Some conditions might include negation or enumeration, like “this is an adjective that is not in fourth case”, or “this is a noun either in third or fourth case”. This is best done with some sort of wild cards. The latter two examples suggest that such a strong tool like regular expressions may be needed. 2.2.2 Agreement There are several cases of agreement in Czech language, like agreement in case, number and gender in attributive adjective phrase, agreement in gender and number between predicate and subject (though it may be complex), or agreement in case in apposition. To study agreement, the query language has to allow to make a reference to only a part of value of attribute of another node, e.g. to the fifth position of the morphological tag for case. 2.2.3 Word Order Word order is a linguistic phenomenon widely studied on the analytical layer, because it offers a perfect combination of a word order (the same like in the sentence) and syntactic relations between the words. The same technique like with the deep word order on the tectogrammatical layer can be used here. The order of words (tokens) ~ nodes in the analytical tree is controlled by attribute ord. Non-projective constructions are much more often and interesting here than on the tectogrammatical layer. Nevertheless, they appear also on the tectogrammatical layer and their contribution to the requirements on the query language has already been mentioned. The only new requirement on the query language is an ability to measure the horizontal distance between words, to satisfy linguistic queries like “find trees where a preposition and the head of the noun phrase are at least five words apart”. 3 Summary of the Features Here we summarize what features the query language has to have to suit PDT 2.0. We list the features from the previous section and also add some 43 obvious requirements that have not been mentioned so far but are very useful generally, regardless of a corpus. 3.1 Complex Evaluation of a Node ● multiple attributes evaluation (an ability to set values of several attributes at one node) ● alternative values (e.g. to define that functor of a node is either a disjunction or a conjunction) ● alternative nodes (alternative evaluation of the whole set of attributes of a node) ● wild cards (regular expressions) in values of attributes (e.g. m/tag=”N...4.*” defines that the morphological tag of a node is a noun in accusative, regardless of other morphological categories) ● negation (e.g. to express “this node is not Actor”) ● relations less than (<=) , greater than (>=) (for numerical attributes) 3.2 Dependencies Between Nodes (Vertical Relations) ● immediate, transitive dependency (existence, non-existence) ● vertical distance (from root, from one another) ● number of sons (zero for lists) 3.3 Horizontal Relations ● precedence, immediate precedence, horizontal distance (all both positive, negative) ● secondary edges, secondary dependencies, coreferences, long-range relations 3.4 Other Features ● multiple-tree queries (combined with general OR relation) ● skipping a node of a given type (for skipping simple types of coordination, apposition etc.) 
● skipping multiple nodes of a given type (e.g. for recognizing the rightmost path) ● references (for matching values of attributes unknown at the time of creating the query) ● accessing several layers of annotation at the same time with non-1:1 relation (for studying relation between layers) ● searching in the surface form of the sentence 4 Conclusion We have studied the Prague Dependency Treebank 2.0 tectogrammatical annotation manual and listed linguistic phenomena that require a special feature from any query tool for this corpus. We have also added several other requirements from the lower layers of annotation. We have summarized these features, along with general corpus-independent features, in a concise list. Acknowledgment This research was supported by the Grant Agency of the Academy of Sciences of the Czech Republic, project IS-REST (No. 1ET101120413). References Bird et al. 2000. Towards A Query Language for Annotation Graphs. In: Proceedings of the Second International Language and Evaluation Conference, Paris, ELRA, 2000. Bird et al. 2005. Extending Xpath to Support Linguistc Queries. In: Proceedings of the Workshop on Programming Language Technologies for XML, California, USA, 2005. . Bird et al. 2006. Designing and Evaluating an XPath Dialect for Linguistic Queries. In: Proceedings of the 22nd International Conference on Data Engineering (ICDE), pp 52-61, Atlanta, USA, 2006. Boag et al. 1999. XQuery 1.0: An XML Query Language. IW3C Working Draft, http://www.w3.org/TR/xpath, 1999. Brants S. et al. 2002. The TIGER Treebank. In: Proceedings of TLT 2002, Sozopol, Bulgaria, 2002. Cassidy S. 2002. XQuery as an Annotation Query Language: a Use Case Analysis. In: Proceedings of the Third International Conference on Language Resources and Evaluation, Canary Islands, Spain, 2002 Clark J., DeRose S. 1999. XML Path Language (XPath). http://www.w3.org/TR/xpath, 1999. Hajič J. et al. 2006. Prague Dependency Treebank 2.0. CD-ROM LDC2006T01, LDC, Philadelphia, 2006. 44 Hajičová E. 1998. Prague Dependency Treebank: From analytic to tectogrammatical annotations. In: Proceedings of 2nd TST, Brno, Springer-Verlag Berlin Heidelberg New York, 1998, pp. 45-50. Hajičová E., Partee B., Sgall P. 1998. Topic-Focus Articulation, Tripartite Structures and Semantic Content. Dordrecht, Amsterdam, Kluwer Academic Publishers, 1998. Havelka J. 2007. Beyond Projectivity: Multilingual Evaluation of Constraints and Measures on Non-Projective Structures. In Proceedings of ACL 2007, Prague, pp. 608-615. Kallmeyer L. 2000: On the Complexity of Queries for Structurally Annotated Linguistic Data. In Proceedings of ACIDCA'2000, Corpora and Natural Language Processing, Tunisia, 2000, pp. 105-110. Lai C., Bird S. 2004. Querying and updating treebanks: A critical survey and requirements analysis. In: Proceedings of the Australasian Language Technology Workshop, Sydney, Australia, 2004 Merz Ch., Volk M. 2005. Requirements for a Parallel Treebank Search Tool. In: Proceedings of GLDVConference, Bonn, Germany, 2005. Mikulová et al. 2006. Annotation on the Tectogrammatical Level in the Prague Dependency Treebank (Reference Book). ÚFAL/CKL Technical Report TR-2006-32, Charles University in Prague, 2006. Mírovský J. 2006. Netgraph: a Tool for Searching in Prague Dependency Treebank 2.0. In Proceedings of TLT 2006, Prague, pp. 211-222. Rychlý P. 2000. Korpusové manažery a jejich efektivní implementace. PhD. Thesis, Brno, 2000. 45
Proceedings of ACL-08: HLT, pages 434–442, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Which Are the Best Features for Automatic Verb Classification Jianguo Li Department of Linguistics The Ohio State University Columbus Ohio, USA [email protected] Chris Brew Department of Linguistics The Ohio State University Columbus Ohio, USA [email protected] Abstract In this work, we develop and evaluate a wide range of feature spaces for deriving Levinstyle verb classifications (Levin, 1993). We perform the classification experiments using Bayesian Multinomial Regression (an efficient log-linear modeling framework which we found to outperform SVMs for this task) with the proposed feature spaces. Our experiments suggest that subcategorization frames are not the most effective features for automatic verb classification. A mixture of syntactic information and lexical information works best for this task. 1 Introduction Much research in lexical acquisition of verbs has concentrated on the relation between verbs and their argument frames. Many scholars hypothesize that the behavior of a verb, particularly with respect to the expression of arguments and the assignment of semantic roles is to a large extent driven by deep semantic regularities (Dowty, 1991; Green, 1974; Goldberg, 1995; Levin, 1993). Thus measurements of verb frame patterns can perhaps be used to probe for linguistically relevant aspects of verb meanings. The correspondence between meaning regularities and syntax has been extensively studied in Levin (1993) (hereafter Levin). Levin’s verb classes are based on the ability of a verb to occur or not occur in pairs of syntactic frames that are in some sense meaning preserving (diathesis alternation). The focus is on verbs for which distribution of syntactic frames is a useful indicator of class membership, and, correspondingly, on classes which are relevant for such verbs. By using Levin’s classification, we obtain a window on some (but not all) of the potentially useful semantic properties of verbs. Levin’s verb classification, like others, helps reduce redundancy in verb descriptions and enables generalizations across semantically similar verbs with respect to their usage. When the information about a verb type is not available or sufficient for us to draw firm conclusions about its usage, the information about the class to which the verb type belongs can compensate for it, addressing the pervasive problem of data sparsity in a wide range of NLP tasks, such as automatic extraction of subcategorization frames (Korhonen, 2002), semantic role labeling (Swier and Stevenson, 2004; Gildea and Jurafsky, 2002), natural language generation for machine translation (Habash et al., 2003), and deriving predominant verb senses from unlabeled data (Lapata and Brew, 2004). Although there exist several manually-created verb lexicons or ontologies, including Levin’s verb taxonomy, VerbNet, and FrameNet, automatic verb classification (AVC) is still necessary for extending existing lexicons (Korhonen and Briscoe, 2004), building and tuning lexical information specific to different domains (Korhonen et al., 2006), and bootstrapping verb lexicons for new languages (Tsang et al., 2002). AVC helps avoid the expensive hand-coding of such information, but appropriate features must be identified and demonstrated to be effective. 
In this work, our primary goal is not necessarily to obtain the optimal classification, but rather to investigate 434 the linguistic conditions which are crucial for lexical semantic classification of verbs. We develop feature sets that combine syntactic and lexical information, which are in principle useful for any Levinstyle verb classification. We test the general applicability and scalability of each feature set to the distinctions among 48 verb classes involving 1,300 verbs, which is, to our knowledge, the largest investigation on English verb classification by far. To preview our results, a feature set that combines both syntactic information and lexical information works much better than either of them used alone. In addition, mixed feature sets also show potential for scaling well when dealing with larger number of verbs and verb classes. In contrast, subcategorization frames, at least on their own, are largely ineffective for AVC, despite their evident effectiveness in supporting Levin’s initial intuitions. 2 Related Work Earlier work on verb classification has generally adopted one of the two approaches for devising statistical, corpus-based features. Subcategorization frame (SCF): Subcategorization frames are obviously relevant to alternation behaviors. It is therefore unsurprising that much work on verb classification has adopted them as features (Schulte im Walde, 2000; Brew and Schulte im Walde, 2002; Korhonen et al., 2003). However, relying solely on subcategorization frames also leads to the loss of semantic distinctions. Consider the frame NP-V-PPwith. The semantic interpretation of this frame depends to a large extent on the NP argument selected by the preposition with. In (1), the same surface form NP-V-PPwith corresponds to three different underlying meanings. However, such semantic distinctions are totally lost if lexical information is disregarded. (1) a. I ate with a fork. [INSTRUMENT] b. I left with a friend. [ACCOMPANIMENT] c. I sang with confidence. [MANNER] This deficiency of unlexicalized subcategorization frames leads researchers to make attempts to incorporate lexical information into the feature representation. One possible improvement over subcategorization frames is to enrich them with lexical information. Lexicalized frames are usually obtained by augmenting each syntactic slot with its head noun (2). (2) a. NP(I)-V-PP(with:fork) b. NP(I)-V-PP(with:friend) c. NP(I)-V-PP(with:confidence) With the potentially improved discriminatory power also comes increased exposure to sparse data problems. Trying to overcome the problem of data sparsity, Schulte im Walde (2000) explores the additional use of selectional preference features by augmenting each syntactic slot with the concept to which its head noun belongs in an ontology (e.g. WordNet). Although the problem of data sparsity is alleviated to certain extent (3), these features do not generally improve classification performance (Schulte im Walde, 2000; Joanis, 2002). (3) a. NP(PERSON)-V-PP(with:ARTIFACT) b. NP(PERSON)-V-PP(with:PERSON) c. NP(PERSON)-V-PP(with:FEELING) JOANIS07: Incorporating lexical information directly into subcategorization frames has proved inadequate for AVC. Other methods for combining syntactic information with lexical information have also been attempted (Merlo and Stevenson, 2001; Joanis et al., 2007). These studies use a small collection of features that require some degree of expert linguistic analysis to devise. 
The deeper linguistic analysis allows their feature set to cover a variety of indicators of verb semantics, beyond that of frame information. Joanis et al. (2007) reports an experiment that involves 15 Levin verb classes. They define a general feature space that is supposed to be applicable to all Levin classes. The features they use fall into four different groups: syntactic slots, slot overlaps, tense, voice and aspect, and animacy of NPs. • Syntactic slots: They encode the frequency of the syntactic positions (e.g. SUBJECT, OBJECT, PPat). They are considered approximation to subcategorization frames. • Slot overlaps: They are supposed to capture the properties of alternation by identifying if a given noun can occur in different syntactic positions relative to a particular verb. For instance, in the alternation The ice melted and 435 The sun melted the ice, ice occurs in the subject position in the first sentence but in the object position in the second sentence. An overlap feature records that there is a subject-object alternation for melt. • Tense, voice and aspect: Verb meaning and alternations also interact in interesting ways with tense, voice, and aspect. For example, middle construction is usually used in present tense (e.g. The bread cuts easily). • Animacy of NPs: The animacy of the semantic role corresponding to the head noun in each syntactic slot can also distinguish classes of verbs. Joanis et al. (2007) demonstrates that the general feature space they devise achieves a rate of error reduction ranging from 48% to 88% over a chance baseline accuracy, across classification tasks of varying difficulty. However, they also show that their general feature space does not generally improve the classification accuracy over subcategorization frames (see table 1). Experimental Task All Features SCF Average 2-way 83.2 80.4 Average 3-way 69.6 69.4 Average (≥6)-way 61.1 62.8 Table 1: Results from Joanis et al. (2007) (%) 3 Integration of Syntactic and Lexical Information In this study, we explore a wider range of features for AVC, focusing particularly on various ways to mix syntactic with lexical information. Dependency relation (DR): Our way to overcome data sparsity is to break lexicalized frames into lexicalized slots (a.k.a. dependency relations). Dependency relations contain both syntactic and lexical information (4). (4) a. SUBJ(I), PP(with:fork) b. SUBJ(I), PP(with:friend) c. SUBJ(I), PP(with:confidence) However, augmenting PP with nouns selected by the preposition (e.g. PP(with:fork)) still gives rise to data sparsity. We therefore decide to break it into two individual dependency relations: PP(with), PP-fork. Although dependency relations have been widely used in automatic acquisition of lexical information, such as detection of polysemy (Lin, 1998) and WSD (McCarthy et al., 2004), their utility in AVC still remains untested. Co-occurrence (CO): CO features mostly convey lexical information only and are generally considered not particularly sensitive to argument structures (Rohde et al., 2004). Nevertheless, it is worthwhile testing whether the meaning components that are brought out by syntactic alternations are also correlated to the neighboring words. In other words, Levin verbs may be distinguished on the dimension of neighboring words, in addition to argument structures. A test on this claim can help answer the question of whether verbs in the same Levin class also tend to share their neighboring words. 
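To make the DR construction concrete, the sketch below (an illustration, not the feature extractor actually used in the experiments) turns a lexicalized frame into dependency-relation features, splitting each lexicalized PP slot into a preposition relation and a separate head-noun relation to limit sparsity:

```python
def dependency_relation_features(slots):
    """Convert lexicalized slots into DR features.

    `slots` is a list such as [("SUBJ", "I"), ("PP", ("with", "fork"))];
    PP fillers are (preposition, head-noun) pairs.
    """
    features = []
    for label, filler in slots:
        if label == "PP":
            prep, head = filler
            features.append(f"PP({prep})")   # syntactic part
            features.append(f"PP-{head}")    # lexical part
        else:
            features.append(f"{label}({filler})")
    return features

# dependency_relation_features([("SUBJ", "I"), ("PP", ("with", "fork"))])
# -> ['SUBJ(I)', 'PP(with)', 'PP-fork']
```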
Adapted co-occurrence (ACO): Conventional CO features generally adopt a stop list to filter out function words. However, some of the functions words, prepositions in particular, are known to carry great amount of syntactic information that is related to lexical meanings of verbs (Schulte im Walde, 2003; Brew and Schulte im Walde, 2002; Joanis et al., 2007). In addition, whereas most verbs tend to put a strong selectional preference on their nominal arguments, they do not care much about the identity of the verbs in their verbal arguments. Based on these observations, we propose to adapt the conventional CO features by (1) keeping all prepositions (2) replacing all verbs in the neighboring contexts of each target verb with their part-of-speech tags. ACO features integrate at least some degree of syntactic information into the feature space. SCF+CO: Another way to mix syntactic information with lexical information is to use subcategorization frames and co-occurrences together in hope that they are complementary to each other, and therefore yield better results for AVC. 4 Experiment Setup 4.1 Corpus To collect each type of features, we use the Gigaword Corpus, which consists of samples of recent newswire text data collected from four distinct in436 ternational sources of English newswire. 4.2 Feature Extraction We evaluate six different feature sets for their effectiveness in AVC: SCF, DR, CO, ACO, SCF+CO, and JOANIS07. SCF contains mainly syntactic information, whereas CO lexical information. The other four feature sets include both syntactic and lexical information. SCF and DR: These more linguistically informed features are constructed based on the grammatical relations generated by the C&C CCG parser (Clark and Curran, 2007). Take He broke the door with a hammer as an example. The grammatical relations generated are given in table 2. he broke the door with a hammer. (det door 3 the 2) (dobj broke 1 door 3) (det hammer 6 a 5) (dobj with 4 hammer 6) (iobj broke 1 with 4) (ncsubj broke 1 He 0 ) Table 2: grammatical relations generated by the parser We first build a lexicalized frame for the verb break: NP1(he)-V-NP2(door)-PP(with:hammer). This is done by matching each grammatical label onto one of the traditional syntactic constituents. The set of syntactic constituents we use is summarized in table 3. constituent remark NP1 subject of the verb NP2 object of the verb NP3 indirect object of the verb PPp prepositional phrase TO infinitival clause GER gerund THAT sentential complement headed by that WH sentential complement headed by a wh-word ADJP adjective phrase ADVP adverb phrase Table 3: Syntactic constituents used for building SCFs Based on the lexicalized frame, we construct an SCF NP1-NP2-PPwith for break. The set of DRs generated for break is [SUBJ(he), OBJ(door), PP(with), PP-hammer]. CO: These features are collected using a flat 4word window, meaning that the 4 words to the left/right of each target verb are considered potential CO features. However, we eliminate any CO features that are in a stopword list, which consists of about 200 closed class words including mainly prepositions, determiners, complementizers and punctuation. We also lemmatize each word using the English lemmatizer as described in Minnen et al. (2000), and use lemmas as features instead of words. ACO: As mentioned before, we adapt the conventional CO features by (1) keeping all prepositions (2) replacing all verbs in the neighboring contexts of each target verb with their part-of-speech tags. 
(3) keeping words in the left window only if they are tagged as a nominal. SCF+CO: We combine the SCF and CO features. JOANIS07: We use the feature set proposed in Joanis et al. (2007), which consists of 224 features. We extract features on the basis of the output generated by the C&C CCG parser. 4.3 Verb Classes Our experiments involve two separate sets of verb classes: Joanis15: Joanis et al. (2007) manually selects pairs, or triples of classes to represent a range of distinctions that exist among the 15 classes they investigate. For example, some of the pairs/triples are syntactically dissimilar, while others show little syntactic distinction across the classes. Levin48: Earlier work has focused only on a small set of verbs or a small number of verb classes. For example, Schulte im Walde (2000) uses 153 verbs in 30 classes, and Joanis et al. (2007) takes on 835 verbs and 15 verb classes. Since one of our primary goals is to identify a general feature space that is not specific to any class distinctions, it is of great importance to understand how the classification accuracy is affected when attempting to classify more verbs into a larger number of classes. In our automatic verb classification, we aim for a larger scale experiment. We select our experimental verb classes and verbs as follows: We start with all Levin 197 verb classes. We first remove all verbs that belong to at least two Levin classes. Next, we remove any verb that does not occur at least 100 times in the English Gigaword Corpus. All classes that are left with at least 10 verbs are chosen for our experi437 ment. This process yields 48 classes involving about 1,300 verbs. In our automatic verb classification experiment, we test the applicability of each feature set to distinctions among up to 48 classes 1. To our knowledge, this is, by far, the largest investigation on English verb classification. 5 Machine Learning Method 5.1 Preprocessing Data We represent the semantic space for verbs as a matrix of frequencies, where each row corresponds to a Levin verb and each column represents a given feature. We construct a semantic space with each feature set. Except for JONAIS07 which only contains 224 features, all the other feature sets lead to a very high-dimensional space. For instance, the semantic space with CO features contains over one million columns, which is too huge and cumbersome. One way to avoid these high-dimensional spaces is to assume that most of the features are irrelevant, an assumption adopted by many of the previous studies working with high-dimensional semantic spaces (Burgess and Lund, 1997; Pado and Lapata, 2007; Rohde et al., 2004). Burgess and Lund (1997) suggests that the semantic space can be reduced by keeping only the k columns (features) with the highest variance. However, Rohde et al. (2004) have found it is simpler and more effective to discard columns on the basis of feature frequency, with little degradation in performance, and often some improvement. Columns representing low-frequency features tend to be noisier because they only involve few examples. We therefore apply a simple frequency cutoff for feature selection. We only use features that occur with a frequency over some threshold in our data. In order to reduce undue influence of outlier features, we employ the four normalization strategies in table 4, which help reduce the range of extreme values while having little effect on others (Rohde et al., 2004). 
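Of the four variants in Table 4, the correlation normalization is the one used for the reported results. Since its formula is hard to render in plain text, here is an illustrative reimplementation following Rohde et al. (2004), not the authors' code: w_{v,f} is the raw count of verb v with feature f, and T is the total count over the whole matrix.

```python
import numpy as np

def correlation_normalize(W):
    """Correlation normalization of a verb-by-feature count matrix W:

        w'_{v,f} = (T*w_{v,f} - rowsum_v*colsum_f)
                   / sqrt(rowsum_v*(T - rowsum_v)*colsum_f*(T - colsum_f))
    """
    W = np.asarray(W, dtype=float)
    T = W.sum()
    row = W.sum(axis=1, keepdims=True)   # per-verb totals
    col = W.sum(axis=0, keepdims=True)   # per-feature totals
    num = T * W - row * col
    den = np.sqrt(row * (T - row) * col * (T - col))
    return np.divide(num, den, out=np.zeros_like(W), where=den > 0)
```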
The raw frequency (wv,f) of a verb v occurring with a feature f is replaced with the normal1In our experiment, we only use monosemous verbs from these 48 verb classes. Due to the space limit, we do not list the 48 verb classes. The size of the most classes falls in the range between 10 to 30, with a couple of classes having a size over 100. ized value (w′ v,f), according to each normalization method. Our experiments show that using correlation for normalization generally renders the best results. The results reported below are obtained from using correlation for normalization. w′ v,f = row wv,f P j wv,j column wv,f P i wi,f length wv,f P j w2 v,j 1/2 correlation T wv,f −P j wv,j P i wi,f (P j wv,j(T −P j wv,j)P i wi,f (T −P i wi,f ))1/2 T = P i P j wi,j Table 4: Normalization techniques To preprocess data, we first apply a frequency cutoff to our data set, and then normalize it using the correlation method. To find the optimal threshold for frequency cut, we consider each value between 0 and 10,000 at an interval of 500. In our experiments, results on training data show that performance declines more noticeably when the threshold is lower than 500 or higher than 10,000. For each task and feature set, we select the frequency cut that offers the best accuracy on the preprocessed training set according to k-fold stratified cross validation 2. 5.2 Classifier For all of our experiments, we use the software that implements the Bayesian multinomial logistic regression (a.k.a BMR). The software performs the socalled 1-of-k classification (Madigan et al., 2005). BMR is similar to Maximum Entropy. It has been shown to be very efficient with handling large numbers of features and extremely sparsely populated matrices, which characterize the data we have for AVC 3. To begin, let x = [x1, ..., xj, ..., xd]T be a vector of feature values characterizing a verb to be classified. We encode the fact that a verb belongs to a class k ∈1, ..., K by a K-dimensional 0/1 valued vector y = (y1, ..., yK)T , where yk = 1 and all other coordinates are 0. Multinomial logistic regres210-fold for Joanis15 and 9-fold for Levin48. We use a balanced training set, which contains 20 verbs from each class in Joanis15, but only 9 verbs from each class in Levin48. 3We also tried Chang and Lin (2001)’s LIBSVM library for Support Vector Machines (SVMs), however, BMR generally outperforms SVMs. 438 sion is a conditional probability model of the form, parameterized by the matrix β = [β1, ..., βK]. Each column of β is a parameter vector corresponding to one of the classes: βk = [βk1, ..., βkd]T . P(yk = 1|βk, x) = exp(βT k x)/ X ki exp(βT kix) 6 Results and Discussion 6.1 Evaluation Metrics Following Joanis et al. (2007), we adopt a single evaluation measure - macro-averaged recall - for all of our classification tasks. As discussed below, since we always use balanced training sets for each individual task, it makes sense for our accuracy metric to give equal weight to each class. Macro-averaged recall treats each verb class equally, so that the size of a class does not affect macro-averaged recall. It usually gives a better sense of the quality of classification across all classes. To calculate macro-averaged recall, the recall value for each individual verb class has to be computed first. recall = no. of test verbs in class c correctly labeled no. 
of test verbs in class c With a recall value computed for each verb class, the macro-averaged recall can be defined by: macro-averaged recall = 1 |C| X c∈C recall for class c C : a set of verb classes c : an individual verb class |C| : the number of verb classes 6.2 Joanis15 With those manually-selected 15 classes, Joanis et al. (2007) conducts 11 classification tasks including six 2-way classifications, two 3-way classifications, one 6-way classification, one 8-way classification, and one 14-way classification. In our experiments, we replicate these 11 classification tasks using the proposed six different feature sets. For each classification task in this task set, we randomly select 20 verbs from each class as the training set. We repeat this process 10 times for each task. The results reported for each task is obtained by averaging the results of the 10 trials. Note that for each trial, each feature set is trained and tested on the same training/test split. The results for the 11 classification tasks are summarized in table 5. We provide a chance baseline and the accuracy reported in Joanis et al. (2007) 4 for comparison of our results. A few points are worth noting: • Although widely used for AVC, SCF, at least when used alone, is not the most effective feature set. Our experiments show that the performance achieved by using SCF is generally worse than using the feature sets that mix syntactic and lexical information. As a matter of fact, it even loses to the simplest feature set CO on 4 tasks, including the 14-way task. • The two feature sets (DR, SCF+CO) we propose that combine syntactic and lexical information generally perform better than those feature sets (SCF, CO) that only include syntactic or lexical information. Although there is not a clear winner, DR and SCF+CO generally outperform other feature sets, indicating that they are effective ways for combining syntactic and lexical information. In particular, these two feature sets perform comparatively well on the tasks that involve more classes (e.g. 14-way), exhibiting the tendency to scale well with larger number of verb classes and verbs. Another feature set that combines syntactic and lexical information, ACO, which keeps function words in the feature space to preserve syntactic information, outperforms the conventional CO on the majority of tasks. All these observations suggest that how to mix syntactic and lexical information is one of keys to an improved verb classification. • Although JOANIS07 also combines syntactic and lexical information, its performance is not comparable to that of other feature sets that mix syntactic and lexical information. In fact, SCF 4Joanis et al. (2007) is different from our experiments in that they use a chunker for feature extraction and the Support Vector Machine for classification. 439 Experimental Task Random As Reported in Feature Set Baseline Joanis et al. 
(2007) SCF DR CO ACO SCF+CO JOANIS07 1) Benefactive/Recipient 50 86.4 88.6 88.4 88.2 89.1 90.7 88.9 2) Admire/Amuse 50 93.9 96.7 97.5 92.1 90.5 96.4 96.6 3) Run/Sound 50 86.8 85.4 89.6 91.8 90.2 90.5 87.1 4) Light/Sound 50 75.0 74.8 90.8 86.9 89.7 88.8 82.1 5) Cheat/Steal 50 76.5 77.6 80.6 72.1 75.5 77.8 76.4 6) Wipe/Steal 50 80.4 84.8 80.6 79.0 79.4 84.4 83.9 7) Spray/Fill/Putting 33.3 65.6 73.0 72.8 59.6 66.6 73.8 69.6 8) Run/State Change/Object drop 33.3 74.2 74.8 77.2 76.9 77.6 80.5 75.5 9) Cheat/Steal/Wipe/Spray/Fill/Putting 16.7 64.3 64.9 65.1 54.8 59.1 65.0 64.3 10) 9)/Run/Sound 12.5 61.7 62.3 65.8 55.7 60.8 66.9 63.1 11) 14-way (all except Benefactive) 7.1 58.4 56.4 65.7 57.5 59.6 66.3 57.2 Table 5: Experimental results for Joanis15 (%) and JOANIS07 yield similar accuracy in our experiments, which agrees with the findings in Joanis et al. (2007) (compare table 1 and 5). 6.3 Levin48 Recall that one of our primary goals is to identify the feature set that is generally applicable and scales well while we attempt to classify more verbs into a larger number of classes. If we could exhaust all the possible n-way (2 ≤n ≤48) classification tasks with the 48 Levin classes we will investigate, it will allow us to draw a firmer conclusion about the general applicability and scalability of a particular feature set. However, the number of classification tasks grows really huge when n takes on certain value (e.g. n = 20). For our experiments, we set n to be 2, 5, 10, 20, 30, 40, or 48. For the 2-way classification, we perform all the possible 1,028 tasks. For the 48way classification, there is only one possible task. We randomly select 100 n-way tasks each for n = 5, 10, 20, 30, 40. We believe that this series of tasks will give us a reasonably good idea of whether a particular feature set is generally applicable and scales well. The smallest classes in Levin48 have only 10 verbs. We therefore reduce the number of training verbs to 9 for each class. For each n = 2, 5, 10, 20, 30, 40, 48, we will perform certain number of n-way classification tasks. For each n-way task, we randomly select 9 verbs from each class as training data, and repeat this process 10 times. The accuracy for each n-way task is then computed by averaging the results from these 10 trials. The accuracy reported for the overall n-way classification for each selected n, is obtained by averaging the results from each individual n-way task for that particular n. Again, for each trial, each feature set is trained and tested on the same training/test split. The results for Levin48 are presented in table 6, which clearly reveals the general applicability and scalability of each feature set. • Results from Levin48 reconfirm our finding that SCF is not the most effective feature set for AVC. Although it achieves the highest accuracy on the 2-way classification, its accuracy drops drastically as n gets bigger, indicating that SCF does not scale as well as other feature sets when dealing with larger number of verb classes. On the other hand, the co-occurrence feature (CO), which is believed to convey only lexical information, outperforms SCF on every n-way classification when n ≥10, suggesting that verbs in the same Levin classes tend to share their neighboring words. • The three feature sets we propose that combine syntactic and lexical information generally scale well. Again, DR and SCF+CO generally outperform all other feature sets on all nway classifications, except the 2-way classification. 
In addition, ACO achieves a better performance on every n-way classification than CO. Although SCF and CO are not very effective when used individually, they tend to yield the best performance when combined together. • Again, JOANIS07 does not match the performance of other feature sets that combine both syntactic and lexical information, but yields similar accuracy as SCF. 440 Experimental Task No of Tasks Random Baseline Feature Set SCF DR CO ACO SCF+CO JOANIS07 2-way 1,028 50 84.0 83.4 77.8 80.9 82.9 82.4 5-way 100 20 71.9 76.4 70.4 73.0 77.3 72.2 10-way 100 10 65.8 73.7 68.8 71.2 72.8 65.9 20-way 100 5 51.4 65.1 58.8 60.1 65.8 50.7 30-way 100 3.3 46.7 56.9 48.6 51.8 57.8 47.1 40-way 100 2.5 43.6 54.8 47.3 49.9 55.1 44.2 48-way 1 2.2 39.1 51.6 42.4 46.8 52.8 38.9 Table 6: Experimental results for Levin48 (%) 6.4 Further Discussion Previous studies on AVC have focused on using SCFs. Our experiments reveal that SCFs, at least when used alone, compare poorly to the feature sets that mix syntactic and lexical information. One explanation for the poor performance could be that we use all the frames generated by the CCG parser in our experiment. A better way of doing this would be to use some expert-selected SCF set. Levin classifies English verbs on the basis of 78 SCFs, which should, at least in principle, be good at separating verb classes. To see if Levin-selected SCFs are more effective for AVC, we match each SCF generated by the C&C CCG parser (CCG-SCF) to one of 78 Levin-defined SCFs, and refer to the resulting SCF set as unfiltered-Levin-SCF. Following studies on automatic SCF extraction (Brent, 1993), we apply a statistical test (Binomial Hypothesis Test) to the unfiltered-Levin-SCF to filter out noisy SCFs, and denote the resulting SCF set as filtered-LevinSCF. We then perform the 48-way task (one of Levin48) with these two different SCF sets. Recall that using CCG-SCF gives us a macro-averaged recall of 39.1% on the 48-way task. Our experiments show that using unfiltered-Levin-SCF and filteredLevin-SCF raises the accuracy to 39.7% and 40.3% respectively. Although a little performance gain has been obtained by using expert-defined SCFs, the accuracy level is still far below that achieved by using a feature set that combines syntactic and semantic information. In fact, even the simple co-occurrence feature (CO) yields a better performance (42.4%) than these Levin-selected SCF sets. 7 Conclusion and Future Work We have performed a wide range of experiments to identify which features are most informative in AVC. Our conclusion is that both syntactic and lexical information are useful for verb classification. Although neither SCF nor CO performs well on its own, a combination of them proves to be the most informative feature for this task. Other ways of mixing syntactic and lexical information, such as DR, and ACO, work relatively well too. What makes these mixed feature sets even more appealing is that they tend to scale well in comparison to SCF and CO. In addition, these feature sets are devised on a general level without relying on any knowledge about specific classes, thus potentially applicable to a wider range of class distinctions. Assuming that Levin’s analysis is generally applicable across languages in terms of the linking of semantic arguments to their syntactic expressions, these mixed feature sets are potentially useful for building verb classifications for other languages. 
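As a concrete rendering of the evaluation protocol behind these results (Sections 6.1 to 6.3), the following minimal Python sketch computes the macro-averaged recall of Section 6.1, that is, recall per class followed by an unweighted mean over classes, and averages it over repeated random training/test splits of a single n-way task. The function names and the classify callable are illustrative placeholders; the actual experiments train SVMs via LIBSVM, which is not reproduced here.

from collections import defaultdict
import random

def macro_averaged_recall(gold, predicted):
    # Recall for class c = correctly classified test verbs of c / total test verbs of c;
    # macro-averaged recall = (1 / |C|) * sum of the per-class recalls (Section 6.1).
    correct, total = defaultdict(int), defaultdict(int)
    for g, p in zip(gold, predicted):
        total[g] += 1
        if g == p:
            correct[g] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)

def average_over_trials(verbs_by_class, train_size, classify, n_trials=10, seed=0):
    # One n-way task: sample train_size training verbs per class (20 for Joanis15,
    # 9 for Levin48), test on the remaining verbs, repeat n_trials times, average.
    rng = random.Random(seed)
    scores = []
    for _ in range(n_trials):
        train, test = [], []
        for label, verbs in verbs_by_class.items():
            shuffled = rng.sample(verbs, len(verbs))
            train += [(v, label) for v in shuffled[:train_size]]
            test += [(v, label) for v in shuffled[train_size:]]
        predicted = classify(train, [v for v, _ in test])
        scores.append(macro_averaged_recall([label for _, label in test], predicted))
    return sum(scores) / len(scores)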
For our future work, we aim to test whether an automatically created verb classification can be beneficial to other NLP tasks. One potential application of our verb classification is parsing. Lexicalized PCFGs (where head words annotate phrasal nodes) have proved a key tool for high performance PCFG parsing, however its performance is hampered by the sparse lexical dependency exhibited in the Penn Treebank. Our experiments on verb classification have offered a class-based approach to alleviate data sparsity problem in parsing. It is our goal to test whether this class-based approach will lead to an improved parsing performance. 8 Acknowledgments This study was supported by NSF grant 0347799. We are grateful to Eric Fosler-Lussier, Detmar Meurers, Mike White and Kirk Baker for their valuable comments. 441 References Brent, M. (1993). From grammar to lexicon: Unsupervised learning of lexical syntax. Computational Linguistics, 19(3):243–262. Brew, C. and Schulte im Walde, S. (2002). Spectral clustering for German verbs. In Proccedings of the 2002 Conference on EMNLP, pages 117–124. Burgess, C. and Lund, K. (1997). Modelling parsing constraints with high-dimentional context space. Language and Cognitive Processes, 12(3):177–210. Chang, C. and Lin, C. (2001). LIBSVM: A library for support vector machines. http://www.csie.ntu.edu.tw. cjlin/libsvm. Clark, S. and Curran, J. (2007). Formalism-independent parser evaluation with CCG and Depbank. In Proceedings of the 45th Annual Meeting of ACL, pages 248– 255. Dowty, D. (1991). Thematic proto-roles and argument selection. Language, 67:547–619. Gildea, D. and Jurafsky, D. (2002). Automatic labeling of semantic role. Computational Linguistics, 28(3):245– 288. Goldberg, A. (1995). Constructions. University of Chicago Press, Chicago, 1st edition. Green, G. (1974). Semantics and Syntactic Regularity. Indiana University Press, Bloomington. Habash, N., Dorr, B., and Traum, D. (2003). Hybrid natural language generation from lexical conceptual structures. Machine Translation, 18(2):81–128. Joanis, E. (2002). Automatic verb classification using a general feature space. Master’s thesis, University of Toronto. Joanis, E., Stevenson, S., and James, D. (2007). A general feature space for automatic verb classification. Natural Language Engineering, 1:1–31. Korhonen, A. (2002). Subcategorization Acquisition. PhD thesis, Cambridge University. Korhonen, A. and Briscoe, T. (2004). Extended lexicalsemantic classification of english verbs. In Proceedings of the 2004 HLT/NAACL Workshop on Computational Lexical Semantics, pages 38–45, Boston, MA. Korhonen, A., Krymolowski, Y., and Collier, N. (2006). Automatic classification of verbs in biomedical texts. In Proceedings of the 21st International Conference on COLING and 44th Annual Meeting of ACL, pages 345–352, Sydney, Australia. Korhonen, A., Krymolowski, Y., and Marx, Z. (2003). Clustering polysemic subcategorization frame distributions semantically. In Proceedings of the 41st Annual Meeting of ACL, pages 48–55, Sapparo, Japan. Lapata, M. and Brew, C. (2004). Verb class disambiguation using informative priors. Computational Linguistics, 30(1):45–73. Levin, B. (1993). English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago, 1st edition. Lin, D. (1998). Automatic retrieval and clustering of similar words. In Proceedings of the 17th Internation Conference on COLING and 36th Annual Meeting of ACL. Madigan, D., Genkin, A., Lewis, D., and Fradkin, D. (2005). 
Bayesian Multinomial Logistic Regression for Author Identification. DIMACS Technical Report. McCarthy, D., Koeling, R., Weeds, J., and Carroll, J. (2004). Finding predominant senses in untagged text. In Proceedings of the 42nd Annual Meeting of ACL, pages 280–287. Merlo, P. and Stevenson, S. (2001). Automatic verb classification based on statistical distribution of argument structure. Computational Linguistics, 27(3):373–408. Minnen, G., Carroll, J., and Pearce, D. (2000). Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. Pado, S. and Lapata, M. (2007). Dependency-based construction of semantic space models. Computional Linguistics, 33(2):161–199. Rohde, D., Gonnerman, L., and Plaut, D. (2004). An improved method for deriving word meaning from lexical co-occurrence. http://dlt4.mit.edu/ dr/COALS. Schulte im Walde, S. (2000). Clustering verbs semantically according to alternation behavior. In Proceedings of the 18th International Conference on COLING, pages 747–753. Schulte im Walde, S. (2003). Experiments on the choice of features for learning verb classes. In Proceedings of the 10th Conference of EACL, pages 315–322. Swier, R. and Stevenson, S. (2004). Unsupervised semantic role labelling. In Proceedings of the 2004 Conference on EMNLP, pages 95–102. Tsang, V., Stevenson, S., and Merlo, P. (2002). Crosslinguistic transfer in automatic verb classification. In Proceedings of the 19th International Conference on COLING, pages 1023–1029, Taiwan, China. 442
Proceedings of ACL-08: HLT, pages 443–451, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Collecting a Why-question corpus for development and evaluation of an automatic QA-system Joanna Mrozinski Edward Whittaker Department of Computer Science Tokyo Institute of Technology 2-12-1-W8-77 Ookayama, Meguro-ku Tokyo 152-8552 Japan {mrozinsk,edw,furui}@furui.cs.titech.ac.jp Sadaoki Furui Abstract Question answering research has only recently started to spread from short factoid questions to more complex ones. One significant challenge is the evaluation: manual evaluation is a difficult, time-consuming process and not applicable within efficient development of systems. Automatic evaluation requires a corpus of questions and answers, a definition of what is a correct answer, and a way to compare the correct answers to automatic answers produced by a system. For this purpose we present a Wikipedia-based corpus of Whyquestions and corresponding answers and articles. The corpus was built by a novel method: paid participants were contacted through a Web-interface, a procedure which allowed dynamic, fast and inexpensive development of data collection methods. Each question in the corpus has several corresponding, partly overlapping answers, which is an asset when estimating the correctness of answers. In addition, the corpus contains information related to the corpus collection process. We believe this additional information can be used to post-process the data, and to develop an automatic approval system for further data collection projects conducted in a similar manner. 1 Introduction Automatic question answering (QA) is an alternative to traditional word-based search engines. Instead of returning a long list of documents more or less related to the query parameters, the aim of a QA system is to isolate the exact answer as accurately as possible, and to provide the user only a short text clip containing the required information. One of the major development challenges is evaluation. The conferences such as TREC1, CLEF2 and NTCIR3 have provided valuable QA evaluation methods, and in addition produced and distributed corpora of questions, answers and corresponding documents. However, these conferences have focused mainly on fact-based questions with short answers, so called factoid questions. Recently more complex tasks such as list, definition and discoursebased questions have also been included in TREC in a limited fashion (Dang et al., 2007). More complex how- and why-questions (for Asian languages) were also included in the NTCIR07, but the provided data comprised only 100 questions, of which some were also factoids (Fukumoto et al., 2007). Not only is the available non-factoid data quite limited in size, it is also questionable whether the data sets are usable in development outside the conferences. Lin and Katz (2006) suggest that training data has to be more precise, and, that it should be collected, or at least cleaned, manually. Some corpora of why-questions have been collected manually: corpora described in (Verberne et al., 2006) and (Verberne et al., 2007) both comprise fewer than 400 questions and corresponding answers (one or two per question) formulated by native speakers. However, we believe one answer per question is not enough. 
Even with factoid questions it is sometimes difficult to define what is a correct 1http://trec.nist.gov/ 2http://www.clef-campaign.org/ 3http://research.nii.ac.jp/ntcir/ 443 answer, and complex questions result in a whole new level of ambiguity. Correctness depends greatly on the background knowledge and expectations of the person asking the question. For example, a correct answer to the question “Why did Mr. X take Ms. Y to a coffee shop?” could be very different depending on whether we knew that Mr. X does not drink coffee or that he normally drinks it alone, or that Mr. X and Ms. Y are known enemies. The problem of several possible answers and, in consequence, automatic evaluation has been tackled for years within another field of study: automatic summarisation (Hori et al., 2003; Lin and Hovy, 2003). We believe that the best method of providing “correct” answers is to do what has been done in that field: combine a multitude of answers to ensure both diversity and consensus among the answers. Correctness of an answer is also closely related to the required level of detail. The Internet FAQ pages were successfully used to develop QA-systems (Jijkoun and de Rijke, 2005; Soricut and Brill, 2006), as have the human-powered question sites such as Answers.com, Yahoo Answers and Google Answers, where individuals can post questions and receive answers from peers (Mizuno et al., 2007). Both resources can be assumed to contain adequately errorfree information. FAQ pages are created so as to answer typical questions well enough that the questions do not need to be repeated. Question sites typically rank the answers and offer bonuses for people providing good ones. However, both sites suffer from excess of information. FAQ-pages tend to also answer questions which are not asked, and also contain practical examples. Human-powered answers often contain unrelated information and discourselike elements. Additionally, the answers do not always have a connection to the source material from which they could be extracted. One purpose of our project was to take part in the development of QA systems by providing the community with a new type of corpus. The corpus includes not only the questions with multiple answers and corresponding articles, but also certain additional information that we believe is essential to enhance the usability of the data. In addition to providing a new QA corpus, we hope our description of the data collection process will provide insight, resources and motivation for further research and projects using similar collection methods. We collected our corpus through Amazon Mechanical Turk service 4 (MTurk). The MTurk infrastructure allowed us to distribute our tasks to a multitude of workers around the world, without the burden of advertising. The system also allowed us to test the workers suitability, and to reward the work without the bureaucracy of employment. To our knowledge, this is the first time that the MTurk service has been used in equivalent purpose. We conducted the data collection in three steps: generation, answering and rephrasing of questions. The workers were provided with a set of Wikipedia articles, based on which the questions were created and the answers determined by sentence selection. The WhyQA-corpus consists of three parts: original questions along with their rephrased versions, 8-10 partly overlapping answers for each question, and the Wikipedia articles including the ones corresponding to the questions. 
The WhyQA-corpus is in XML-format and can be downloaded and used under the GNU Free Documentation License from www.furui.cs.titech.ac.jp/ . 2 Setup Question-answer pairs have previously been generated for example by asking workers to both ask a question and then answer it based on a given text (Verberne et al., 2006; Verberne et al., 2007). We decided on a different approach for two reasons. Firstly, based on our experience such an approach is not optimal in the MTurk framework. The tasks that were welcomed by workers required a short attention span, and reading long texts was negatively received with many complaints, sloppy work and slow response times. Secondly, we believe that the aforementioned approach can produce unnatural questions that are not actually based on the information need of the workers. We divided the QA-generation task into two phases: question-generation (QGenHIT) and answering (QAHIT). We also trimmed the amount of the text that the workers were required to read to create the questions. These measures were taken both in order to lessen the cognitive burden of the task 4http://www.mturk.com 444 and to produce more natural questions. In the first phase the workers generated the questions based on a part of Wikipedia article. The resulting questions were then uploaded to the system as new HITs with the corresponding articles, and answered by available (different) workers. Our hypothesis is that the questions are more natural if their answer is not known at the time of the creation. Finally, in an additional third phase, 5 rephrased versions of each question were created in order to gain variation (QRepHIT). The data quality was ensured by requiring the workers to achieve a certain result from a test (or a Qualification) before they could work on the aforementioned tasks. Below we explain the MTurk system, and then our collection process in detail. 2.1 Mechanical Turk Mechanical Turk is a Web-based service, offered by Amazon.com, Inc. It provides an API through which employers can obtain a connection to people to perform a variety of simple tasks. With tools provided by Amazon.com, the employer creates tasks, and uploads them to the MTurk Web-site. Workers can then browse the tasks and, if they find them profitable and/or interesting enough, work on them. When the tasks are completed, the employer can download the results, and accept or reject them. Some key concepts of the system are listed below, with short descriptions of the functionality. • HIT Human Intelligence Task, the unit of a payable chore in MTurk. • Requester An “employer”, creates and uploads new HITs and rewards the workers. Requesters can upload simple HITs through the MTurk Requester web site, and more complicated ones through the MTurk Web Service APIs. • Worker An “employee”, works on the hits through the MTurk Workers’ web site. • Assignment. One HIT consists of one or more assignments. One worker can complete a single HIT only once, so if the requester needs multiple results per HIT, he needs to set the assignment-count to the desired figure. A HIT is considered completed when all the assignments have been completed. • Rewards At upload time, each HIT has to be assigned a fixed reward, that cannot be changed later. Minimum reward is $0.01. Amazon.com collects a 10% (or a minimum of $0.05) service fee per each paid reward. • Qualifications To improve the data quality, a HIT can also be attached to certain tests, “qualifications” that are either system-provided or created by the requester. 
An example of a system-provided qualification is the average approval ratio of the worker. Even if it is possible to create tests that workers have to pass before being allowed to work on a HIT so as to ensure the worker’s ability, it is impossible to test the motivation (for instance, they cannot be interviewed). Also, as they are working through the Web, their working conditions cannot be controlled. 2.2 Collection process The document collection used in our research was derived from the Wikipedia XML Corpus by Denoyer and Gallinari (2006). We selected a total of 84 articles, based on their length and contents. A certain length was required so that we could expect the article to contain enough interesting material to produce a wide selection of natural questions. The articles varied in topic, degree of formality and the amount of details; from ”Horror film” and ”Christmas worldwide” to ”G-Man (Half-Life)” and ”History of London”. Articles consisting of bulleted lists were removed, but filtering based on the topic of the article was not performed. Essentially, the articles were selected randomly. 2.2.1 QGenHIT The first phase of the question-answer generation was to generate the questions. In QGenHIT we presented the worker with only part of a Wikipedia article, and instructed them to think of a why-question that they felt could be answered based on the original, whole article which they were not shown. This approach was expected to lead to natural curiosity and questions. Offering too little information would have lead to many questions that would finally be left unanswered, and it also did not give the workers enough to work on. Giving too much information 445 Qualification The workers were required to pass a test before working on the HITs. QGenHIT Questions were generated based on partial Wikipedia articles. These questions were then used to create the QAHITs. QAHIT Workers were presented with a question and a corresponding article. The task was to answer the questions (if possible) through sentence selection. QRepHIT To ensure variation in the questions, each question was rephrased by 5 different workers. Table 1: Main components of the corpus collection process. Article topic: Fermi paradox Original question Why is the moon crucial to the rare earth hypothesis? Rephrased Q 1 How does the rare earth theory depend upon the moon? Rephrased Q 2 What makes the moon so important to rare earth theory? Rephrased Q 3 What is the crucial regard for the moon in the rare earth hypothesis? Rephrased Q 4 Why is the moon so important in the rare earth hypothesis? Rephrased Q 5 What makes the moon necessary, in regards to the rare earth hypothesis? Answer 1. Sentence ids: 20,21. Duplicates: 4. The moon is important because its gravitational pull creates tides that stabilize Earth’s axis. Without this stability, its variation, known as precession of the equinoxes, could cause weather to vary so dramatically that it could potentially suppress the more complex forms of life. Answer 2. Sentence ids: 18,19,20. Duplicates: 2. The popular Giant impact theory asserts that it was formed by a rare collision between the young Earth and a Mars-sized body, usually referred to as Orpheus or Theia, approximately 4.45 billion years ago. The collision had to occur at a precise angle, as a direct hit would have destroyed the Earth, and a shallow hit would have deflected the Mars-sized body. The moon is important because its gravitational pull creates tides that stabilize Earth’s axis. Answer 3. 
Sentence ids: 20,21,22. Duplicates: 2. The moon is important because its gravitational pull creates tides that stabilize Earth’s axis. Without this stability, its variation, known as precession of the equinoxes, could cause weather to vary so dramatically that it could potentially suppress the more complex forms of life. The heat generated by the Earth/Theia impact, as well as subsequent Lunar tides, may have also significantly contributed to the total heat budget of the Earth’s interior, thereby both strengthening and prolonging the life of the dynamos that generate Earth’s magnetic field Dynamo 1. Answer 4. Sentence ids: 18,20,21. No duplicates. The popular Giant impact theory asserts that it was formed by a rare collision between the young Earth and a Mars-sized body, usually referred to as Orpheus or Theia, approximately 4.45 billion years ago. The moon is important because its gravitational pull creates tides that stabilize Earth’s axis. Without this stability, its variation, known as precession of the equinoxes, could cause weather to vary so dramatically that it could potentially suppress the more complex forms of life. Answer 5. Sentence ids: 18,21. No duplicates. The popular Giant impact theory asserts that it was formed by a rare collision between the young Earth and a Mars-sized body, usually referred to as Orpheus or Theia, approximately 4.45 billion years ago. Without this stability, its variation, known as precession of the equinoxes, could cause weather to vary so dramatically that it could potentially suppress the more complex forms of life. Table 2: Data example: Question with rephrased versions and answers. 446 (long excerpts from the articles) was severely disliked among the workers simply because it took a long time to read. We finally settled on a solution where the partial content consisted of the title and headers of the article, along with the first sentences of each paragraph. The instructions to the questions demanded rigidly that the question starts with the word “Why”, as it was surprisingly difficult to explain what we meant by why-questions if the question word was not fixed. The reward per HIT was $0.04, and 10 questions were collected for each article. We did not force the questions to be different, and thus in the later phase some of the questions were removed manually as they were deemed to mean exactly the same thing. However, there were less than 30 of these duplicate questions in the whole data set. 2.2.2 QAHIT After generating the questions based on partial articles, the resulting questions were uploaded to the system as HITs. Each of these QAHITs presented a single question with the corresponding original article. The worker’s task was to select either 1-3 sentences from the text, or a No-answer-option (NoA). Sentence selection was conducted with Javascript functionality, so the workers had no chance to include freely typed information within the answer (although a comment field was provided). The reward per HIT was $0.06. At the beginning, we collected 10 answers per question, but we cut that down to 8 because the HITs were not completed fast enough. The workers for QAHITs were drawn from the same pool as the workers for QGenHIT, and it was possible for the workers to answer the questions they had generated themselves. 2.2.3 QRepHIT As the final step 5 rephrased versions of each question were generated. This was done to compensate the rigid instructions of the QGenHIT and to ensure variation in the questions. 
We have not yet measured how well the rephrased questions match the answers of their original versions. In the final QRepHIT questions were grouped into groups of 5. Each HIT consisted of 5 assignments, and a $0.05 reward was offered for each HIT. QRepHIT required the least amount of design and trials, and workers were delighted with the task. The HITs were completed fast and well even in the case when we accidentally uploaded a set of HITs with no reward. As with QAHIT, the worker pool for creating and rephrasing questions was the same. The questions were rephrased by their creator in 4 cases. 2.3 Qualifications To improve the data quality, we used the qualifications to test the workers. For the QGenHITs we only used the system-provided “HIT approval rate”qualification. Only workers whose previous work had been approved in 80% of the cases were able to work on our HITs. In addition to the system-provided qualification, we created a why-question-specific qualification. The workers were presented with 3 questions, and they were to answer each by either selecting 13 most relevant sentences from a list of about 10 sentences, or by deciding that there is no answer present. The possible answer-sentences were divided into groups of essential, OK and wrong, and one of the questions did quite clearly have no answer. The scoring was such that it was impossible to get approved results if not enough essential sentences were included. Selecting sentences from the OK-group only was not sufficient, and selecting sentences from the wrong-group was penalized. A minimum score per question was required, but also the total score was relevant – component scores could compensate each other up to a point. However, if the question with no answer was answered, the score could not be of an approvable level. This qualification was, in addition to the minimum HIT approval rate of 80%, a prerequisite for both the QRepHITs and the QAHITs. A total of 2355 workers took the test, and 1571 (67%) of them passed it, thus becoming our available worker pool. However, in the end the actual number of different workers was only 173. Examples of each HIT, their instructions and the Qualification form are included in the final corpus. The collection process is summarised in Table 1. 447 3 Corpus description The final corpus consists of questions with their rephrased versions and answers. There are total of 695 questions, of which 159 were considered unanswerable based on the articles, and 536 that have 810 answers each. The total cost of producing the corpus was about $350, consisting of $310 paid in workers rewards and $40 in Mechanical Turk fees, including all the trials conducted during the development of the final system. Also included is a set of Wikipedia documents (WikiXML, about 660 000 articles or 670MB in compressed format), including the ones corresponding to the questions (84 documents). The source of WikiXML is the English part of the Wikipedia XML Corpus by Denoyer and Gallinari (2006). In the original data some of the HTML-structures like lists and tables occurred within sentences. Our sentenceselection approach to QA required a more finegrained segmentation and for our purpose, much of the HTML-information was redundant anyway. Consequently we removed most of the HTMLstructures, and the table-cells, list-items and other similar elements were converted into sentences. Apart from sentence-information, only the sectiontitle information was maintained. Example data is shown in Table 2. 
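The duplicate counts shown with each distinct answer in Table 2 can be reproduced by grouping the answers of a question by the exact set of sentences they select. The short sketch below assumes that each answer is available as a list of selected sentence ids, with an empty list standing for a NoA; this representation is an assumption made for illustration, not the corpus schema.

from collections import Counter

def collapse_answers(answers_sentence_ids):
    # Group the answers collected for one question by the exact sentences they
    # selected and count how often each distinct selection occurs, which is the
    # duplicate information reported alongside the answers in Table 2.
    return Counter(tuple(sorted(ids)) for ids in answers_sentence_ids)

For the Fermi paradox example of Table 2, all answers selecting sentences 20 and 21 would fall under the single key (20, 21).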
3.1 Task-related information Despite the Qualifications and other measures taken in the collection phase of the corpus, we believe the quality of the data remains open to question. However, the Mechanical Turk framework provided additional information for each assignment, for example the time workers spent on the task. We believe this information can be used to analyse and use our data better, and have included it in the corpus to be used in further experiments. • Worker Id Within the MTurk framework, each worker is assigned a unique id. Worker id can be used to assign a reliability-value to the workers, based on the quality of their previous work. It was also used to examine whether the same workers worked on the same data in different phases: Of the original questions, only 7 were answered and 4 other rephrased by the same worker they were created by. However, it has to be acknowledged that it is also possible for one worker to have had several accounts in the system, and thus be working under several different worker ids. • Time On Task The MTurk framework also provides the requester the time it took for the worker to complete the assignment after accepting it. This information is also included in the corpus, although it is impossible to know precisely how much time the workers actually spent on each task. For instance, it is possible that one worker had several assignments open at the same time, or that they were not concentrating fully on working on the task. A high value of Time On Task thus does not necessarily mean that the worker actually spent a long time on it. However, a low value indicates that he/she did only spend a short time on it. • Reward Over the period spent collecting the data, we changed the reward a couple of times to speed up the process. The reward is reported per HIT. • Approval Status Within the collection process we encountered some clearly unacceptable work, and rejected it. The rejected work is also included in the corpus, but marked as rejected. The screening process was by no means perfect, and it is probable that some of the approved work should have been rejected. • HIT id, Assignment id, Upload Time HIT and assignment ids and original upload times of the HITs are provided to make it possible to retrace the collection steps if needed. • Completion Time Completion time is the timestamp of the moment when the task was completed by a worker and returned to the system. The time between the completion time and the upload time is presumably highly dependent on the reward, and on the appeal of the task in question. 3.2 Quality experiments As an example of the post-processing of the data, we conducted some preliminary experiments on the answer agreement between workers. 448 Out of the 695 questions, 159 were filtered out in the first part of QAHIT. We first uploaded only 3 assignments, and the questions that 2 out of 3 workers deemed unanswerable were filtered out. This left 536 questions which were considered answered, each one having 8-10 answers from different workers. Even though in the majority of cases (83% of the questions) one of the workers replied with the NoA, the ones that answered did agree up to a point: of all the answers, 72% were such that all of their sentences were selected by at least two different workers. On top of this, an additional 17% of answers shared at least one sentence that was selected by more than one worker. 
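Before turning to those agreement experiments, it is worth noting that the assignment-level fields listed above already support the kind of simple automatic screening discussed in Section 4. The sketch below is purely illustrative: the dictionary keys, the time threshold and the NoA-rate heuristic are assumptions rather than part of the corpus specification, and the only signal the data itself guarantees is that a very low Time On Task does indicate little time spent on the assignment.

def flag_suspect_assignments(assignments, min_seconds=20, max_noa_rate=0.8):
    # Each assignment is assumed to be a dict with 'worker_id', 'time_on_task'
    # (in seconds) and 'is_noa' keys; these names are illustrative only.
    noa, total = {}, {}
    for a in assignments:
        w = a['worker_id']
        total[w] = total.get(w, 0) + 1
        noa[w] = noa.get(w, 0) + (1 if a['is_noa'] else 0)
    flagged = []
    for a in assignments:
        w = a['worker_id']
        too_fast = a['time_on_task'] < min_seconds
        # Marking nearly every question unanswerable is the typical cheating
        # pattern noted in Section 3.2.1; require a few assignments per worker
        # before applying this heuristic.
        noa_heavy = total[w] >= 5 and noa[w] / total[w] > max_noa_rate
        if too_fast or noa_heavy:
            flagged.append(a)
    return flagged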
To understand the agreement better, we also calculated the average agreement of selected sentences based on sentence ids and N-gram overlaps between the answers. In both of these experiments, only those 536 questions that were considered answerable were included. 3.2.1 Answer agreement on sentence ids As the questions were answered by means of sentence selection, the simplest method to check the agreement between the workers was to compare the ids of the selected sentences. The agreement was calculated as follows: each answer was compared to all the other answers for the same question. For each case, the agreement was defined as Agreement = CommonIds AllIds , where CommonIds is the number of sentence ids that existed in both answers, and AllIds is the number of different ids in both answers. We calculated the overall average agreement ratio (Total Avg) and the average of the best matches between two assignments within one HIT (Best Match). We ran the test for two data sets: The most typical case of the workers cheating was to mark the question unaswerable. Because of this the first data set included only the real answers, and the NoAs were removed (NoA not included, 3872 answers). If an answer was compared with a NoA, the agreement was 0, and if two NoAs were compared, the agreement was 1. We did, however, also include the figures for the whole data set (NoA included, 4638 answers). The results are shown in Table 3. The Best Match -results were quite high compared to the Total Avg. From this we can conclude Total Avg Best Match NoA not included 0.39 0.68 NoA included 0.34 0.68 Table 3: Answer agreement based on sentence ids. that in the majority of cases, there was at least one quite similar answer among those for that HIT. However, comparing the sentence ids is only an indicative measure, and it does not tell the whole story about agreement. For each document there may exist several separate sentences that contain the same kind of information, and so two answers can be alike even though the sentence ids do not match. 3.2.2 Answer agreement based on ROUGE Defining the agreement over several passages of texts has for a long time been a research problem within the field of automatic summarisation. For each document it is possible to create several summarisations that can each be considered correct. The problem has been approached by using the ROUGE-metric: calculating the N-gram overlap between manual, “correct” summaries, and the automatic summaries. ROUGE has been proven to correlate well with human evaluation (Lin and Hovy, 2003). Overlaps of higher order N-grams are more usable within speech summarisation as they take the grammatical structure and fluency of the summary into account. When selecting sentences, this is not an issue, so we decided to use only unigram and bigram counts (Table 4: R-1, R2), as well as the skip-bigram values (R-SU) and the longest common N-gram metric R-L. We calculated the figures for two data sets in the same way as in the case of sentence id agreement. Finally, we set a lower bound for the results by comparing the answers to each other randomly (the NoAs were also included). The final F-measures of the ROUGE results are presented in Table 4. The figures vary from 0.37 to 0.56 for the first data set, and from 0.28 to 0.42 to the second. It is debatable how the results should be interpreted, as we have not defined a theoretical upper bound to the values, but the difference to the randomised results is substantial. 
In the field of automatic summarisation, the overlap of the automatic 449 results and corresponding manual summarisations is generally much lower than the overlap between our answers (Chali and Kolla, 2004). However, it is difficult to draw detailed conclusions based on comparison between these two very different tasks. R-1 R-2 R-SU R-L NoA not included 0.56 0.46 0.37 0.52 NoA included 0.42 0.35 0.28 0.39 Random Answers 0.13 0.01 0.02 0.09 Table 4: Answer agreement: ROUGE-1, -2, -SU and -L. The sentence agreement and ROUGE-figures do not tell us much by themselves. However, they are an example of a procedure that can be used to postprocess the data and in further projects of similar nature. For example, the ROUGE similarity could be used in the data collection phase as a tool of automatic approval and rejection of workers’ assignments. 4 Discussion and future work During the initial trials of data collection we encountered some unexpected phenomena. For example, increasing the reward did have a positive effect in reducing the time it took for HITs to be completed, however it did not correlate in desirable way with data quality. Indeed the quality actually decreased with increasing reward. We believe that this unexpected result is due to the distributed nature of the worker pool in Mechanical Turk. Clearly the motivation of some workers is other than monetary reward. Especially if the HIT is interesting and can be completed in a short period of time, it seems that there are people willing to work on them even for free. MTurk requesters cannot however rely on this voluntary workforce. From MTurk Forums it is clear that some of the workers rely on the money they get from completing the HITs. There seems to be a critical reward-threshold after which the “real workforce”, i.e. workers who are mainly interested in performing the HITs as fast as possible, starts to participate. When the motivation changes from voluntary participation to maximising the monetary gain, the quality of the obtained results often understandably suffers. It would be ideal if a requester could rely on the voluntary workforce alone for results, but in many cases this may result either in too few workers and/or too slow a rate of data acquisition. Therefore it is often necessary to raise the reward and rely on efficient automatic validation of the data. We have looked into the answer agreement of the workers as an experimental post-processing step. We believe that further work in this area will provide the tools required for automatic data quality control. 5 Conclusions In this paper we have described a dynamic and inexpensive method of collecting a corpus of questions and answers using the Amazon Mechanical Turk framework. We have provided to the community a corpus of questions, answers and corresponding documents, that we believe can be used in the development of QA-systems for why-questions. We propose that combining several answers from different people is an important factor in defining the “correct” answer to a why-question, and to that goal have included several answers for each question in the corpus. We have also included data that we believe is valuable in post-processing the data: the work history of a single worker, the time spent on tasks, and the agreement on a single HIT between a set of different workers. 
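Returning to the agreement measures of Section 3.2, the sketch below renders the sentence-id agreement CommonIds / AllIds with the stated NoA conventions (two NoAs agree with score 1, a NoA against a real answer scores 0), the Total Avg and Best Match aggregates, and a simplified unigram F-measure in the spirit of ROUGE-1. It is a from-scratch illustration and not the ROUGE implementation behind Table 4.

from collections import Counter

def id_agreement(ids_a, ids_b):
    # Agreement = CommonIds / AllIds (Section 3.2.1); an empty set represents a NoA.
    a, b = set(ids_a), set(ids_b)
    if not a and not b:
        return 1.0
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def total_avg_and_best_match(answers):
    # Compare every answer for a question against every other answer; return the
    # overall average agreement and the average of each answer's best match.
    all_scores, best_scores = [], []
    for i, x in enumerate(answers):
        scores = [id_agreement(x, y) for j, y in enumerate(answers) if j != i]
        all_scores += scores
        best_scores.append(max(scores))
    return sum(all_scores) / len(all_scores), sum(best_scores) / len(best_scores)

def unigram_overlap_f(candidate_tokens, reference_tokens):
    # A simplified ROUGE-1-style F-measure over token unigrams.
    c, r = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum(min(c[t], r[t]) for t in c)
    if overlap == 0:
        return 0.0
    p, rec = overlap / sum(c.values()), overlap / sum(r.values())
    return 2 * p * rec / (p + rec)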
We believe that this information, especially the answer agreement of workers, can be successfully used in post-processing and analysing the data, as well as automatically accepting and rejecting workers’ submissions in similar future data collection exercises. Acknowledgments This study was funded by the Monbusho Scholarship of Japanese Government and the 21st Century COE Program ”Framework for Systematization and Application of Large-scale Knowledge Resources (COE-LKR)” References Yllias Chali and Maheedhar Kolla. 2004. Summarization Techniques at DUC 2004. In DUC2004. Hoa Trang Dang, Diane Kelly, and Jimmy Lin. 2007. Overview of the TREC 2007 Question Answering 450 Track. In E. Voorhees and L. P. Buckland, editors, Sixteenth Text REtrieval Conference (TREC), Gaithersburg, Maryland, November. Ludovic Denoyer and Patrick Gallinari. 2006. The Wikipedia XML Corpus. SIGIR Forum. Junichi Fukumoto, Tsuneaki Kato, Fumito Masui, and Tsunenori Mori. 2007. An Overview of the 4th Question Answering Challenge (QAC-4) at NTCIR workshop 6. In Proceedings of the Sixth NTCIR Workshop Meeting, pages 433–440. Chiori Hori, Takaaki Hori, and Sadaoki Furui. 2003. Evaluation Methods for Automatic Speech Summarization. In In Proc. EUROSPEECH, volume 4, pages 2825–2828, Geneva, Switzerland. Valentin Jijkoun and Maarten de Rijke. 2005. Retrieving Answers from Frequently Asked Questions Pages on the Web. In CIKM ’05: Proceedings of the 14th ACM international conference on Information and knowledge management, pages 76–83, New York, NY, USA. ACM Press. Chin-Yew Lin and Eduard Hovy. 2003. Automatic Evaluation of Summaries Using N-gram Co-occurrence Statistics. In Human Technology Conference (HLTNAACL), Edmonton, Canada. Jimmy Lin and Boris Katz. 2006. Building a Reusable Test Collection for Question Answering. J. Am. Soc. Inf. Sci. Technol., 57(7):851–861. Junta Mizuno, Tomoyosi Akiba, Atsushi Fujii, and Katunobu Itou. 2007. Non-factoid Question Answering Experiments at NTCIR-6: Towards Answer Type Detection for Realworld Questions. In Proceedings of the 6th NTCIR Workshop Meeting on Evaluation of Information Access Technologies, pages 487–492. Radu Soricut and Eric Brill. 2006. Automatic Question Answering Using the Web: Beyond the Factoid. Inf. Retr., 9(2):191–206. Suzan Verberne, Lou Boves, Nelleke Oostdijk, and PeterArno Coppen. 2006. Data for Question Answering: the Case of Why. In LREC. Susan Verberne, Lou Boves, Nelleke Oostdijk, and PeterArno Coppen. 2007. Discourse-based Answering of Why-questions. Traitement Automatique des Langues, 47(2: Discours et document: traitements automatiques):21–41. 451
Proceedings of ACL-08: HLT, pages 452–460, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Solving Relational Similarity Problems Using the Web as a Corpus Preslav Nakov∗ EECS, CS division University of California at Berkeley Berkeley, CA 94720, USA [email protected] Marti A. Hearst School of Information University of California at Berkeley Berkeley, CA 94720, USA [email protected] Abstract We present a simple linguistically-motivated method for characterizing the semantic relations that hold between two nouns. The approach leverages the vast size of the Web in order to build lexically-specific features. The main idea is to look for verbs, prepositions, and coordinating conjunctions that can help make explicit the hidden relations between the target nouns. Using these features in instance-based classifiers, we demonstrate state-of-the-art results on various relational similarity problems, including mapping noun-modifier pairs to abstract relations like TIME, LOCATION and CONTAINER, characterizing noun-noun compounds in terms of abstract linguistic predicates like CAUSE, USE, and FROM, classifying the relations between nominals in context, and solving SAT verbal analogy problems. In essence, the approach puts together some existing ideas, showing that they apply generally to various semantic tasks, finding that verbs are especially useful features. 1 Introduction Despite the tremendous amount of work on word similarity (see (Budanitsky and Hirst, 2006) for an overview), there is surprisingly little research on the important related problem of relational similarity – semantic similarity between pairs of words. Students who took the SAT test before 2005 or who ∗After January 2008 at the Linguistic Modeling Department, Institute for Parallel Processing, Bulgarian Academy of Sciences, [email protected] are taking the GRE test nowadays are familiar with an instance of this problem – verbal analogy questions, which ask whether, e.g., the relationship between ostrich and bird is more similar to that between lion and cat, or rather between primate and monkey. These analogies are difficult, and the average test taker gives a correct answer 57% of the time (Turney and Littman, 2005). Many NLP applications could benefit from solving relational similarity problems, including but not limited to question answering, information retrieval, machine translation, word sense disambiguation, and information extraction. For example, a relational search engine like TextRunner, which serves queries like “find all X such that X causes wrinkles”, asking for all entities that are in a particular relation with a given entity (Cafarella et al., 2006), needs to recognize that laugh wrinkles is an instance of CAUSE-EFFECT. While there are not many success stories so far, measuring semantic similarity has proven its advantages for textual entailment (Tatu and Moldovan, 2005). In this paper, we introduce a novel linguisticallymotivated Web-based approach to relational similarity, which, despite its simplicity, achieves stateof-the-art performance on a number of problems. Following Turney (2006b), we test our approach on SAT verbal analogy questions and on mapping noun-modifier pairs to abstract relations like TIME, LOCATION and CONTAINER. We further apply it to (1) characterizing noun-noun compounds using abstract linguistic predicates like CAUSE, USE, and FROM, and (2) classifying the relation between pairs of nominals in context. 
452 2 Related Work 2.1 Characterizing Semantic Relations Turney and Littman (2005) characterize the relationship between two words as a vector with coordinates corresponding to the Web frequencies of 128 fixed phrases like “X for Y ” and “Y for X” instantiated from a fixed set of 64 joining terms like for, such as, not the, is *, etc. These vectors are used in a nearest-neighbor classifier to solve SAT verbal analogy problems, yielding 47% accuracy. The same approach is applied to classifying noun-modifier pairs: using the Diverse dataset of Nastase and Szpakowicz (2003), Turney&Littman achieve F-measures of 26.5% with 30 fine-grained relations, and 43.2% with 5 course-grained relations. Turney (2005) extends the above approach by introducing the latent relational analysis (LRA), which uses automatically generated synonyms, learns suitable patterns, and performs singular value decomposition in order to smooth the frequencies. The full algorithm consists of 12 steps described in detail in (Turney, 2006b). When applied to SAT questions, it achieves the state-of-the-art accuracy of 56%. On the Diverse dataset, it yields an F-measure of 39.8% with 30 classes, and 58% with 5 classes. Turney (2006a) presents an unsupervised algorithm for mining the Web for patterns expressing implicit semantic relations. For example, CAUSE (e.g., cold virus) is best characterized by “Y * causes X”, and “Y in * early X” is the best pattern for TEMPORAL (e.g., morning frost). With 5 classes, he achieves F-measure=50.2%. 2.2 Noun-Noun Compound Semantics Lauer (1995) reduces the problem of noun compound interpretation to choosing the best paraphrasing preposition from the following set: of, for, in, at, on, from, with or about. He achieved 40% accuracy using corpus frequencies. This result was improved to 55.7% by Lapata and Keller (2005) who used Web-derived n-gram frequencies. Barker and Szpakowicz (1998) use syntactic clues and the identity of the nouns in a nearest-neighbor classifier, achieving 60-70% accuracy. Rosario and Hearst (2001) used a discriminative classifier to assign 18 relations for noun compounds from biomedical text, achieving 60% accuracy. Rosario et al. (2002) reported 90% accuracy with a “descent of hierarchy” approach which characterizes the relationship between the nouns in a bioscience noun-noun compound based on the MeSH categories the nouns belong to. Girju et al. (2005) apply both classic (SVM and decision trees) and novel supervised models (semantic scattering and iterative semantic specialization), using WordNet, word sense disambiguation, and a set of linguistic features. They test their system against both Lauer’s 8 prepositional paraphrases and another set of 21 semantic relations, achieving up to 54% accuracy on the latter. In a previous work (Nakov and Hearst, 2006), we have shown that the relationship between the nouns in a noun-noun compound can be characterized using verbs extracted from the Web, but we provided no formal evaluation. Kim and Baldwin (2006) characterized the semantic relationship in a noun-noun compound using the verbs connecting the two nouns by comparing them to predefined seed verbs. Their approach is highly resource intensive (uses WordNet, CoreLex and Moby’s thesaurus), and is quite sensitive to the seed set of verbs: on a collection of 453 examples and 19 relations, they achieved 52.6% accuracy with 84 seed verbs, but only 46.7% with 57 seed verbs. 
2.3 Paraphrase Acquisition Our method of extraction of paraphrasing verbs and prepositions is similar to previous paraphrase acquisition approaches. Lin and Pantel (2001) extract paraphrases from dependency tree paths whose ends contain semantically similar sets of words by generalizing over these ends. For example, given “X solves Y”, they extract paraphrases like “X finds a solution to Y”, “X tries to solve Y”, “X resolves Y”, “Y is resolved by X”, etc. The approach is extended by Shinyama et al. (2002), who use named entity recognizers and look for anchors belonging to matching semantic classes, e.g., LOCATION, ORGANIZATION. The idea is further extended by Nakov et al. (2004), who apply it in the biomedical domain, imposing the additional restriction that the sentences from which the paraphrases are extracted cite the same target paper. 453 2.4 Word Similarity Another important group of related work is on using syntactic dependency features in a vector-space model for measuring word similarity, e.g., (Alshawi and Carter, 1994), (Grishman and Sterling, 1994), (Ruge, 1992), and (Lin, 1998). For example, given a noun, Lin (1998) extracts verbs that have that noun as a subject or object, and adjectives that modify it. 3 Method Given a pair of nouns, we try to characterize the semantic relation between them by leveraging the vast size of the Web to build linguistically-motivated lexically-specific features. We mine the Web for sentences containing the target nouns, and we extract the connecting verbs, prepositions, and coordinating conjunctions, which we use in a vector-space model to measure relational similarity. The process of extraction starts with exact phrase queries issued against a Web search engine (Google) using the following patterns: “infl1 THAT * infl2” “infl2 THAT * infl1” “infl1 * infl2” “infl2 * infl1” where: infl1 and infl2 are inflected variants of noun1 and noun2 generated using the Java WordNet Library1; THAT is a complementizer and can be that, which, or who; and * stands for 0 or more (up to 8) instances of Google’s star operator. The first two patterns are subsumed by the last two and are used to obtain more sentences from the search engine since including e.g. that in the query changes the set of returned results and their ranking. For each query, we collect the text snippets from the result set (up to 1,000 per query). We split them into sentences, and we filter out all incomplete ones and those that do not contain the target nouns. We further make sure that the word sequence following the second mentioned target noun is nonempty and contains at least one nonnoun, thus ensuring the snippet includes the entire noun phrase: snippets representing incomplete sentences often end with a period anyway. We then perform POS tagging using the Stanford POS tagger (Toutanova et al., 2003) 1JWNL: http://jwordnet.sourceforge.net Freq. Feature POS Direction 2205 of P 2 →1 1923 be V 1 →2 771 include V 1 →2 382 serve on V 2 →1 189 chair V 2 →1 189 have V 1 →2 169 consist of V 1 →2 148 comprise V 1 →2 106 sit on V 2 →1 81 be chaired by V 1 →2 78 appoint V 1 →2 77 on P 2 →1 66 and C 1 →2 66 be elected V 1 →2 58 replace V 1 →2 48 lead V 2 →1 47 be intended for V 1 →2 45 join V 2 →1 . . . . . . . . . . . . 4 be signed up for V 2 →1 Table 1: The most frequent Web-derived features for committee member. 
Here V stands for verb (possibly +preposition and/or +particle), P for preposition and C for coordinating conjunction; 1 →2 means committee precedes the feature and member follows it; 2 →1 means member precedes the feature and committee follows it. and shallow parsing with the OpenNLP tools2, and we extract the following types of features: Verb: We extract a verb if the subject NP of that verb is headed by one of the target nouns (or an inflected form), and its direct object NP is headed by the other target noun (or an inflected form). For example, the verb include will be extracted from “The committee includes many members.” We also extract verbs from relative clauses, e.g., “This is a committee which includes many members.” Verb particles are also recognized, e.g., “The committee must rotate off 1/3 of its members.” We ignore modals and auxiliaries, but retain the passive be. Finally, we lemmatize the main verb using WordNet’s morphological analyzer Morphy (Fellbaum, 1998). Verb+Preposition: If the subject NP of a verb is headed by one of the target nouns (or an inflected form), and its indirect object is a PP containing an NP which is headed by the other target noun (or an inflected form), we extract the verb and the preposi2OpenNLP: http://opennlp.sourceforge.net 454 tion heading that PP, e.g., “The thesis advisory committee consists of three qualified members.” As in the verb case, we extract verb+preposition from relative clauses, we include particles, we ignore modals and auxiliaries, and we lemmatize the verbs. Preposition: If one of the target nouns is the head of an NP containing a PP with an internal NP headed by the other target noun (or an inflected form), we extract the preposition heading that PP, e.g., “The members of the committee held a meeting.” Coordinating conjunction: If the two target nouns are the heads of coordinated NPs, we extract the coordinating conjunction. In addition to the lexical part, for each extracted feature, we keep a direction. Therefore the preposition of represents two different features in the following examples “member of the committee” and “committee of members”. See Table 1 for examples. We use the above-described features to calculate relational similarity, i.e., similarity between pairs of nouns. In order to downweight very common features like of, we use TF.IDF-weighting: w(x) = TF(x) × log  N DF(x)  (1) In the above formula, TF(x) is the number of times the feature x has been extracted for the target noun pair, DF(x) is the total number of training noun pairs that have that feature, and N is the total number of training noun pairs. Given two nouns and their TF.IDF-weighted frequency vectors A and B, we calculate the similarity between them using the following generalized variant of the Dice coefficient: Dice(A, B) = 2 × Pn i=1 min(ai, bi) Pn i=1 ai + Pn i=1 bi (2) Other variants are also possible, e.g., Lin (1998). 4 Relational Similarity Experiments 4.1 SAT Verbal Analogy Following Turney (2006b), we use SAT verbal analogy as a benchmark problem for relational similarity. We experiment with the 374 SAT questions collected by Turney and Littman (2005). Table 2 shows two sample questions: the top word pairs ostrich:bird palatable:toothsome (a) lion:cat (a) rancid:fragrant (b) goose:flock (b) chewy:textured (c) ewe:sheep (c) coarse:rough (d) cub:bear (d) solitude:company (e) primate:monkey (e) no choice Table 2: SAT verbal analogy: sample questions. The stem is in bold, the correct answer is in italic, and the distractors are in plain text. 
are called stems, the ones in italic are the solutions, and the remaining ones are distractors. Turney (2006b) achieves 56% accuracy on this dataset, which matches the average human performance of 57%, and represents a significant improvement over the 20% random-guessing baseline. Note that the righthand side example in Table 2 is missing one distractor; so do 21 questions. The dataset also mixes different parts of speech: while solitude and company are nouns, all remaining words are adjectives. Other examples contain verbs and adverbs, and even relate pairs of different POS. This is problematic for our approach, which requires that both words be nouns3. After having filtered all examples containing nonnouns, we ended up with 184 questions, which we used in the evaluation. Given a verbal analogy example, we build six feature vectors – one for each of the six word pairs. We then calculate the relational similarity between the stem of the analogy and each of the five candidates, and we choose the pair with the highest score; we make no prediction in case of a tie. The evaluation results for a leave-one-out crossvalidation are shown in Table 3. We also show 95%confidence intervals for the accuracy. The last line in the table shows the performance of Turney’s LRA when limited to the 184 noun-only examples. Our best model v + p + c performs a bit better, 71.3% vs. 67.4%, but the difference is not statistically significant. However, this “inferred” accuracy could be misleading, and the LRA could have performed better if it was restricted to solve noun-only analogies, which seem easier than the general ones, as demonstrated by the significant increase in accuracy for LRA when limited to nouns: 67.4% vs. 56%. 3It can be extended to handle adjective-noun pairs as well, as demonstrated in section 4.2 below. 455 Model ✓ × ∅ Accuracy Cover. v + p + c 129 52 3 71.3±7.0 98.4 v 122 56 6 68.5±7.2 96.7 v + p 119 61 4 66.1±7.2 97.8 v + c 117 62 5 65.4±7.2 97.3 p + c 90 90 4 50.0±7.2 97.8 p 84 94 6 47.2±7.2 96.7 baseline 37 147 0 20.0±5.2 100.0 LRA 122 59 3 67.4±7.1 98.4 Table 3: SAT verbal analogy: 184 noun-only examples. v stands for verb, p for preposition, and c for coordinating conjunction. For each model, the number of correct (✓), wrong (×), and nonclassified examples (∅) is shown, followed by accuracy and coverage (in %s). Model ✓ × ∅ Accuracy Cover. v + p 240 352 8 40.5±3.9 98.7 v + p + c 238 354 8 40.2±3.9 98.7 v 234 350 16 40.1±3.9 97.3 v + c 230 362 8 38.9±3.8 98.7 p + c 114 471 15 19.5±3.0 97.5 p 110 475 15 19.1±3.0 97.5 baseline 49 551 0 8.2±1.9 100.0 LRA 239 361 0 39.8±3.8 100.0 Table 4: Head-modifier relations, 30 classes: evaluation on the Diverse dataset, micro-averaged (in %s). 4.2 Head-Modifier Relations Next, we experiment with the Diverse dataset of Barker and Szpakowicz (1998), which consists of 600 head-modifier pairs: noun-noun, adjective-noun and adverb-noun. Each example is annotated with one of 30 fine-grained relations, which are further grouped into the following 5 coarse-grained classes (the fine-grained relations are shown in parentheses): CAUSALITY (cause, effect, purpose, detraction), TEMPORALITY (frequency, time at, time through), SPATIAL (direction, location, location at, location from), PARTICIPANT (agent, beneficiary, instrument, object, object property, part, possessor, property, product, source, stative, whole) and QUALITY (container, content, equative, material, measure, topic, type). 
For example, exam anxiety is classified as effect and therefore as CAUSALITY, and blue book is property and therefore also PARTICIPANT. Some examples in the dataset are problematic for our method. First, in three cases, there are two modifiers, e.g., infectious disease agent, and we had to ignore the first one. Second, seven examples have an adverb modifier, e.g., daily exercise, and 262 examples have an adjective modifier, e.g., tiny cloud. We treat them as if the modifier was a noun, which works in many cases, since many adjectives and adverbs can be used predicatively, e.g., ‘This exercise is performed daily.’ or ‘This cloud looks very tiny.’ For the evaluation, we created a feature vector for each head-modifier pair, and we performed a leaveone-out cross-validation: we left one example for testing and we trained on the remaining 599 ones, repeating this procedure 600 times so that each example be used for testing. Following Turney and Littman (2005) we used a 1-nearest-neighbor classifier. We calculated the similarity between the feature vector of the testing example and each of the training examples’ vectors. If there was a unique most similar training example, we predicted its class, and if there were ties, we chose the class predicted by the majority of tied examples, if there was a majority. The results for the 30-class Diverse dataset are shown in Table 4. Our best model achieves 40.5% accuracy, which is slightly better than LRA’s 39.8%, but the difference is not statistically significant. Table 4 shows that the verbs are the most important features, yielding about 40% accuracy regardless of whether used alone or in combination with prepositions and/or coordinating conjunctions; not using them results in 50% drop in accuracy. The reason coordinating conjunctions do not help is that head-modifier relations are typically expressed with verbal or prepositional paraphrases. Therefore, coordinating conjunctions only help with some infrequent relations like equative, e.g., finding player and coach on the Web suggests an equative relation for player coach (and for coach player). As Table 3 shows, this is different for SAT verbal analogy, where verbs are still the most important feature type and the only whose presence/absence makes a statistical difference. However, this time coordinating conjunctions (with prepositions) do help a bit (the difference is not statistically significant) since SAT verbal analogy questions ask for a broader range of relations, e.g., antonymy, for which coordinating conjunctions like but are helpful. 456 Model Accuracy v + p + c + sent + query (type C) 68.1±4.0 v 67.9±4.0 v + p + c 67.8±4.0 v + p + c + sent (type A) 67.3±4.0 v + p 66.9±4.0 sent (sentence words only) 59.3±4.2 p 58.4±4.2 Baseline (majority class) 57.0±4.2 v + p + c + sent + query (C), 8 stars 67.0±4.0 v + p + c + sent (A), 8 stars 65.4±4.1 Best type C on SemEval 67.0±4.0 Best type A on SemEval 66.0±4.1 Table 5: Relations between nominals: evaluation on the SemEval dataset. Accuracy is macro-averaged (in %s), up to 10 Google stars are used unless otherwise stated. 
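To make this procedure concrete, the sketch below implements the TF.IDF weighting of Equation (1), the generalized Dice coefficient of Equation (2), and the leave-one-out 1-nearest-neighbor classification with majority tie-breaking described above. It is an illustrative reconstruction rather than the system's actual code: the noun pairs, directional feature keys, and counts are invented, and abstentions (ties with no majority) simply count as errors here, whereas the tables above report accuracy and coverage separately.

# Minimal sketch (not the system's actual code): TF.IDF weighting (Equation 1),
# the generalized Dice coefficient (Equation 2), and leave-one-out
# 1-nearest-neighbor classification with majority tie-breaking.
import math
from collections import Counter

def tfidf_weight(raw_vectors):
    """raw_vectors: {noun_pair: Counter(feature -> extraction count)}."""
    n_pairs = len(raw_vectors)
    df = Counter()
    for counts in raw_vectors.values():
        df.update(counts.keys())          # DF(x): number of pairs with feature x
    return {pair: {f: tf * math.log(n_pairs / df[f]) for f, tf in counts.items()}
            for pair, counts in raw_vectors.items()}

def dice(a, b):
    """Generalized Dice coefficient over TF.IDF-weighted feature vectors."""
    shared = sum(min(a[f], b[f]) for f in a.keys() & b.keys())
    total = sum(a.values()) + sum(b.values())
    return 2.0 * shared / total if total > 0 else 0.0

def leave_one_out_1nn(weighted, labels):
    """Predict each pair's class from its most similar other pair; ties among
    equally similar neighbors are broken by majority vote (no majority means
    no prediction, counted as an error in this toy accuracy)."""
    correct = 0
    for test_pair, test_vec in weighted.items():
        sims = sorted(((dice(test_vec, weighted[p]), p)
                       for p in weighted if p != test_pair), reverse=True)
        best = sims[0][0]
        votes = Counter(labels[p] for s, p in sims if s == best).most_common()
        if len(votes) == 1 or votes[0][1] > votes[1][1]:
            correct += votes[0][0] == labels[test_pair]
    return correct / len(weighted)

# Invented noun pairs, directional feature keys, and counts, for illustration:
raw = {("exam", "anxiety"): Counter({"V:cause:1->2": 3, "P:of:2->1": 1}),
       ("tear", "gas"):     Counter({"V:cause:1->2": 4, "P:in:1->2": 1}),
       ("blue", "book"):    Counter({"V:describe:1->2": 2, "P:of:2->1": 2})}
labels = {("exam", "anxiety"): "CAUSALITY", ("tear", "gas"): "CAUSALITY",
          ("blue", "book"): "PARTICIPANT"}
print(leave_one_out_1nn(tfidf_weight(raw), labels))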
4.3 Relations Between Nominals We further experimented with the SemEval’07 task 4 dataset (Girju et al., 2007), where each example consists of a sentence, a target semantic relation, two nominals to be judged on whether they are in that relation, manually annotated WordNet senses, and the Web query used to obtain the sentence: "Among the contents of the <e1>vessel</e1> were a set of carpenter’s <e2>tools</e2>, several large storage jars, ceramic utensils, ropes and remnants of food, as well as a heavy load of ballast stones." WordNet(e1) = "vessel%1:06:00::", WordNet(e2) = "tool%1:06:00::", Content-Container(e2, e1) = "true", Query = "contents of the * were a" The following nonexhaustive and possibly overlapping relations are possible: Cause-Effect (e.g., hormone-growth), Instrument-Agency (e.g., laser-printer), Theme-Tool (e.g., workforce), Origin-Entity (e.g., grain-alcohol), Content-Container (e.g., bananas-basket), Product-Producer (e.g., honey-bee), and Part-Whole (e.g., leg-table). Each relation is considered in isolation; there are 140 training and at least 70 test examples per relation. Given an example, we reduced the target entities e1 and e2 to single nouns by retaining their heads only. We then mined the Web for sentences containing these nouns, and we extracted the abovedescribed feature types: verbs, prepositions and coordinating conjunctions. We further used the following problem-specific contextual feature types: Sentence words: after stop words removal and stemming with the Porter (1980) stemmer; Entity words: lemmata of the words in e1 and e2; Query words: words part of the query string. Each feature type has a specific prefix which prevents it from mixing with other feature types; the last feature type is used for type C only (see below). The SemEval competition defines four types of systems, depending on whether the manually annotated WordNet senses and the Google query are used: A (WordNet=no, Query=no), B (WordNet=yes, Query=no), C (WordNet=no, Query=yes), and D (WordNet=yes, Query=yes). We experimented with types A and C only since we believe that having the manually annotated WordNet sense keys is an unrealistic assumption for a real-world application. As before, we used a 1-nearest-neighbor classifier with TF.IDF-weighting, breaking ties by predicting the majority class on the training data. The evaluation results are shown in Table 5. We studied the effect of different subsets of features and of more Google star operators. As the table shows, using up to ten Google stars instead of up to eight (see section 3) yields a slight improvement in accuracy for systems of both type A (65.4% vs. 67.3%) and type C (67.0% vs. 68.1%). Both results represent a statistically significant improvement over the majority class baseline and over using sentence words only, and a slight improvement over the best type A and type C systems on SemEval’07, which achieved 66% and 67% accuracy, respectively.4 4.4 Noun-Noun Compound Relations The last dataset we experimented with is a subset of the 387 examples listed in the appendix of (Levi, 1978). Levi’s theory is one of the most important linguistic theories of the syntax and semantics of complex nominals – a general concept grouping 4The best type B system on SemEval achieved 76.3% accuracy using the manually-annotated WordNet senses in context for each example, which constitutes an additional data source, as opposed to an additional resource. 
The systems that used WordNet as a resource only, i.e., ignoring the manually annotated senses, were classified as type A or C. (Girju et al., 2007) 457 USING THAT NOT USING THAT Model Accuracy Cover. ANF ASF Accuracy Cover. ANF ASF Human: all v 78.4±6.0 99.5 34.3 70.9 – – – Human: first v from each worker 72.3±6.4 99.5 11.6 25.5 – – – – v + p + c 50.0±6.7 99.1 216.6 1716.0 49.1±6.7 99.1 206.6 1647.6 v + p 50.0±6.7 99.1 208.9 1427.9 47.6±6.6 99.1 198.9 1359.5 v + c 46.7±6.6 99.1 187.8 1107.2 43.9±6.5 99.1 177.8 1038.8 v 45.8±6.6 99.1 180.0 819.1 42.9±6.5 99.1 170.0 750.7 p 33.0±6.0 99.1 28.9 608.8 33.0±6.0 99.1 28.9 608.8 p + c 32.1±5.9 99.1 36.6 896.9 32.1±5.9 99.1 36.6 896.9 Baseline 19.6±4.8 100.0 – – – – – – Table 6: Noun-noun compound relations, 12 classes: evaluation on Levi-214 dataset. Shown are micro-averaged accuracy and coverage in %s, followed by average number of features (ANF) and average sum of feature frequencies (ASF) per example. The righthand side reports the results when the query patterns involving THAT were not used. For comparison purposes, the top rows show the performance with the human-proposed verbs used as features. together the partially overlapping classes of nominal compounds (e.g., peanut butter), nominalizations (e.g., dream analysis), and nonpredicate noun phrases (e.g., electric shock). In Levi’s theory, complex nominals can be derived from relative clauses by removing one of the following 12 abstract predicates: CAUSE1 (e.g., tear gas), CAUSE2 (e.g., drug deaths), HAVE1 (e.g., apple cake), HAVE2 (e.g., lemon peel), MAKE1 (e.g., silkworm), MAKE2 (e.g., snowball), USE (e.g., steam iron), BE (e.g., soldier ant), IN (e.g., field mouse), FOR (e.g., horse doctor), FROM (e.g., olive oil), and ABOUT (e.g., price war). In the resulting nominals, the modifier is typically the object of the predicate; when it is the subject, the predicate is marked with the index 2. The second derivational mechanism in the theory is nominalization; it produces nominals whose head is a nominalized verb. Since we are interested in noun compounds only, we manually cleansed the set of 387 examples. We first excluded all concatenations (e.g., silkworm) and examples with adjectival modifiers (e.g., electric shock), thus obtaining 250 noun-noun compounds (Levi-250 dataset). We further filtered out all nominalizations for which the dataset provides no abstract predicate (e.g., city planner), thus ending up with 214 examples (Levi-214 dataset). As in the previous experiments, for each of the 214 noun-noun compounds, we mined the Web for sentences containing both target nouns, from which we extracted paraphrasing verbs, prepositions and coordinating conjunctions. We then performed leave-one-out cross-validation experiments with a 1-nearest-neighbor classifier, trying to predict the correct predicate for the testing example. The results are shown in Table 6. As we can see, using prepositions alone yields about 33% accuracy, which is a statistically significant improvement over the majority-class baseline. Overall, the most important features are the verbs: they yield 45.8% accuracy when used alone, and 50% together with prepositions. Adding coordinating conjunctions helps a bit with verbs, but not with prepositions. Note however that none of the differences between the different feature combinations involving verbs are statistically significant. The righthand side of the table reports the results when the query patterns involving THAT (see section 3) were not used. 
We can observe a small 1-3% drop in accuracy for all models involving verbs, but it is not statistically significant. We also show the average number of distinct features and sum of feature counts per example: as we can see, there is a strong positive correlation between number of features and accuracy. 5 Comparison to Human Judgments Since in all above tasks the most important features were the verbs, we decided to compare our Web-derived verbs to human-proposed ones for all noun-noun compounds in the Levi-250 dataset. We asked human subjects to produce verbs, possibly 458 followed by prepositions, that could be used in a paraphrase involving that. For example, olive oil can be paraphrased as ‘oil that comes from olives’, ‘oil that is obtained from olives’ or ‘oil that is from olives’. Note that this implicitly allows for prepositional paraphrases – when the verb is to be and is followed by a preposition, as in the last paraphrase. We used the Amazon Mechanical Turk Web service5 to recruit human subjects, and we instructed them to propose at least three paraphrasing verbs per noun-noun compound, if possible. We randomly distributed the noun-noun compounds into groups of 5 and we requested 25 different human subjects per group. Each human subject was allowed to work on any number of groups, but not on the same one twice. A total of 174 different human subjects produced 19,018 verbs. After filtering the bad submissions and normalizing the verbs, we ended up with 17,821 verbs. See (Nakov, 2007) for further details on the process of extraction and cleansing. The dataset itself is freely available (Nakov, 2008). We compared the human-proposed and the Webderived verbs for Levi-214, aggregated by relation. Given a relation, we collected all verbs belonging to noun-noun compounds from that relation together with their frequencies. From a vector-space model point of view, we summed their corresponding frequency vectors. We did this separately for the human- and the program-generated verbs, and we compared the resulting vectors using Dice coefficient with TF.IDF, calculated as before. Figure 1 shows the cosine correlations using all humanproposed verbs and the first verb from each judge. We can see a very-high correlation (mid-70% to mid-90%) for relations like CAUSE1, MAKE1, BE, but low correlations of 11-30% for reverse relations like HAVE2 and MAKE2. Interestingly, using the first verb only improves the results for highly-correlated relations, but negatively affects low-correlated ones. Finally, we repeated the cross-validation experiment with the Levi-214 dataset, this time using the human-proposed verbs6 as features. As Table 6 shows, we achieved 78.4% accuracy using all verbs (and and 72.3% with the first verb from each worker), which is a statistically significant improve5http://www.mturk.com 6Note that the human subjects proposed their verbs without any context and independently of our Web-derived sentences. Figure 1: Cosine correlation (in %s) between the human- and the program- generated verbs by relation: using all human-proposed verbs vs. the first verb. ment over the 50% of our best Web-based model. This result is strong for a 12-way classification problem, and confirms our observation that verbs and prepositions are among the most important features for relational similarity problems. It further suggests that the human-proposed verbs might be an upper bound on the accuracy that could be achieved with automatically extracted features. 
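The aggregation behind this comparison can be sketched as follows. This is an illustrative reconstruction, not the code used in our experiments: the compounds, relation labels, and verb counts are invented, and, for brevity, the sketch compares the summed frequency vectors with a plain cosine rather than the TF.IDF-weighted Dice coefficient used above.

# Illustrative sketch (not the code used in our experiments): sum verb
# frequency vectors per relation for the human-proposed and Web-derived
# paraphrasing verbs, then compare the two aggregates per relation. For
# brevity a plain cosine is used here instead of the TF.IDF-weighted Dice.
import math
from collections import Counter, defaultdict

def aggregate_by_relation(verbs_per_compound, relation_of):
    """verbs_per_compound: {compound: Counter(verb -> frequency)}."""
    aggregated = defaultdict(Counter)
    for compound, verb_counts in verbs_per_compound.items():
        aggregated[relation_of[compound]].update(verb_counts)
    return aggregated

def cosine(a, b):
    dot = sum(a[v] * b[v] for v in a.keys() & b.keys())
    norm = math.sqrt(sum(x * x for x in a.values())) * \
           math.sqrt(sum(x * x for x in b.values()))
    return dot / norm if norm else 0.0

# Invented compounds, relation labels, and verb counts, for illustration:
relation_of = {("olive", "oil"): "FROM", ("tear", "gas"): "CAUSE1"}
human = {("olive", "oil"): Counter({"come from": 12, "be made from": 7}),
         ("tear", "gas"):  Counter({"cause": 15, "produce": 4})}
web =   {("olive", "oil"): Counter({"be extracted from": 9, "come from": 5}),
         ("tear", "gas"):  Counter({"cause": 11, "induce": 3})}
human_agg = aggregate_by_relation(human, relation_of)
web_agg = aggregate_by_relation(web, relation_of)
for relation in sorted(human_agg):
    print(relation, round(cosine(human_agg[relation], web_agg[relation]), 3))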
6 Conclusions and Future Work We have presented a simple approach for characterizing the relation between a pair of nouns in terms of linguistically-motivated features which could be useful for many NLP tasks. We found that verbs were especially useful features for this task. An important advantage of the approach is that it does not require knowledge about the semantics of the individual nouns. A potential drawback is that it might not work well for low-frequency words. The evaluation on several relational similarity problems, including SAT verbal analogy, headmodifier relations, and relations between complex nominals has shown state-of-the-art performance. The presented approach can be further extended to other combinations of parts of speech: not just nounnoun and adjective-noun. Using a parser with a richer set of syntactic dependency features, e.g., as proposed by Pad´o and Lapata (2007), is another promising direction for future work. Acknowledgments This research was supported in part by NSF DBI0317510. 459 References Hiyan Alshawi and David Carter. 1994. Training and scaling preference functions for disambiguation. Computational Linguistics, 20(4):635–648. Ken Barker and Stan Szpakowicz. 1998. Semi-automatic recognition of noun modifier relationships. In Proc. of Computational linguistics, pages 96–102. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating wordnet-based measures of lexical semantic relatedness. Computational Linguistics, 32(1):13–47. Michael Cafarella, Michele Banko, and Oren Etzioni. 2006. Relational Web search. Technical Report 200604-02, University of Washington, Department of Computer Science and Engineering. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press. Roxana Girju, Dan Moldovan, Marta Tatu, and Daniel Antohe. 2005. On the semantics of noun compounds. Journal of Computer Speech and Language - Special Issue on Multiword Expressions, 4(19):479–496. Roxana Girju, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic relations between nominals. In Proceedings of SemEval, pages 13–18, Prague, Czech Republic. Ralph Grishman and John Sterling. 1994. Generalizing automatically generated selectional patterns. In Proceedings of the 15th conference on Computational linguistics, pages 742–747. Su Nam Kim and Timothy Baldwin. 2006. Interpreting semantic relations in noun compounds via verb semantics. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 491–498. Mirella Lapata and Frank Keller. 2005. Web-based models for natural language processing. ACM Trans. Speech Lang. Process., 2(1):3. Mark Lauer. 1995. Designing Statistical Language Learners: Experiments on Noun Compounds. Ph.D. thesis, Dept. of Computing, Macquarie University, Australia. Judith Levi. 1978. The Syntax and Semantics of Complex Nominals. Academic Press, New York. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question-answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998. An information-theoretic definition of similarity. In Proceedings of ICML, pages 296–304. Preslav Nakov and Marti Hearst. 2006. Using verbs to characterize noun-noun relations. In AIMSA, volume 4183 of LNCS, pages 233–244. Springer. Preslav Nakov, Ariel Schwartz, and Marti Hearst. 2004. Citances: Citation sentences for semantic analysis of bioscience text. 
In Proceedings of SIGIR’04 Workshop on Search and Discovery in Bioinformatics, pages 81– 88, Sheffield, UK. Preslav Nakov. 2007. Using the Web as an Implicit Training Set: Application to Noun Compound Syntax and Semantics. Ph.D. thesis, EECS Department, University of California, Berkeley, UCB/EECS-2007-173. Preslav Nakov. 2008. Paraphrasing verbs for noun compound interpretation. In Proceedings of the LREC’08 Workshop: Towards a Shared Task for Multiword Expressions (MWE’08), Marrakech, Morocco. Vivi Nastase and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Fifth International Workshop on Computational Semantics (IWCS5), pages 285–301, Tilburg, The Netherlands. Sebastian Pad´o and Mirella Lapata. 2007. Dependencybased construction of semantic space models. Computational Linguistics, 33(2):161–199. Martin Porter. 1980. An algorithm for suffix stripping. Program, 14(3):130–137. Barbara Rosario and Marti Hearst. 2001. Classifying the semantic relations in noun compounds via a domainspecific lexical hierarchy. In Proceedings of EMNLP, pages 82–90. Barbara Rosario, Marti Hearst, and Charles Fillmore. 2002. The descent of hierarchy, and selection in relational semantics. In Proceedings of ACL, pages 247– 254. Gerda Ruge. 1992. Experiment on linguistically-based term associations. Inf. Process. Manage., 28(3):317– 332. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of HLT, pages 313–318. Marta Tatu and Dan Moldovan. 2005. A semantic approach to recognizing textual entailment. In Proceedings of HLT, pages 371–378. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL, pages 252–259. Peter Turney and Michael Littman. 2005. Corpus-based learning of analogies and semantic relations. Machine Learning Journal, 60(1-3):251–278. Peter Turney. 2005. Measuring semantic similarity by latent relational analysis. In Proceedings of IJCAI, pages 1136–1141. Peter Turney. 2006a. Expressing implicit semantic relations without supervision. In Proceedings of ACL, pages 313–320. Peter Turney. 2006b. Similarity of semantic relations. Computational Linguistics, 32(3):379–416. 460
Proceedings of ACL-08: HLT, pages 461–469, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Combining Speech Retrieval Results with Generalized Additive Models J. Scott Olsson∗and Douglas W. Oard† UMIACS Laboratory for Computational Linguistics and Information Processing University of Maryland, College Park, MD 20742 Human Language Technology Center of Excellence John Hopkins University, Baltimore, MD 21211 [email protected], [email protected] Abstract Rapid and inexpensive techniques for automatic transcription of speech have the potential to dramatically expand the types of content to which information retrieval techniques can be productively applied, but limitations in accuracy and robustness must be overcome before that promise can be fully realized. Combining retrieval results from systems built on various errorful representations of the same collection offers some potential to address these challenges. This paper explores that potential by applying Generalized Additive Models to optimize the combination of ranked retrieval results obtained using transcripts produced automatically for the same spoken content by substantially different recognition systems. Topic-averaged retrieval effectiveness better than any previously reported for the same collection was obtained, and even larger gains are apparent when using an alternative measure emphasizing results on the most difficult topics. 1 Introduction Speech retrieval, like other tasks that require transforming the representation of language, suffers from both random and systematic errors that are introduced by the speech-to-text transducer. Limitations in signal processing, acoustic modeling, pronunciation, vocabulary, and language modeling can be accommodated in several ways, each of which make different trade-offs and thus induce different ∗Dept. of Mathematics/AMSC, UMD † College of Information Studies, UMD error characteristics. Moreover, different applications produce different types of challenges and different opportunities. As a result, optimizing a single recognition system for all transcription tasks is well beyond the reach of present technology, and even systems that are apparently similar on average can make different mistakes on different sources. A natural response to this challenge is to combine retrieval results from multiple systems, each imperfect, to achieve reasonably robust behavior over a broader range of tasks. In this paper, we compare alternative ways of combining these ranked lists. Note, we do not assume access to the internal workings of the recognition systems, or even to the transcripts produced by those systems. System combination has a long history in information retrieval. Most often, the goal is to combine results from systems that search different content (“collection fusion”) or to combine results from different systems on the same content (“data fusion”). When working with multiple transcriptions of the same content, we are again presented with new opportunities. In this paper we compare some well known techniques for combination of retrieval results with a new evidence combination technique based on a general framework known as Generalized Additive Models (GAMs). We show that this new technique significantly outperforms several well known information retrieval fusion techniques, and we present evidence that it is the ability of GAMs to combine inputs non-linearly that at least partly explains our improvements. The remainder of this paper is organized as follows. 
We first review prior work on evidence com461 bination in information retrieval in Section 2, and then introduce Generalized Additive Models in Section 3. Section 4 describes the design of our experiments with a 589 hour collection of conversational speech for which information retrieval queries and relevance judgments are available. Section 5 presents the results of our experiments, and we conclude in Section 6 with a brief discussion of implications of our results and the potential for future work on this important problem. 2 Previous Work One approach for combining ranked retrieval results is to simply linearly combine the multiple system scores for each topic and document. This approach has been extensively applied in the literature (Bartell et al., 1994; Callan et al., 1995; Powell et al., 2000; Vogt and Cottrell, 1999), with varying degrees of success, owing in part to the potential difficulty of normalizing scores across retrieval systems. In this study, we partially abstract away from this potential difficulty by using the same retrieval system on both representations of the collection documents (so that we don’t expect score distributions to be significantly different for the combination inputs). Of course, many fusion techniques using more advanced score normalization methods have been proposed. Shaw and Fox (1994) proposed a number of such techniques, perhaps the most successful of which is known as CombMNZ. CombMNZ has been shown to achieve strong performance and has been used in many subsequent studies (Lee, 1997; Montague and Aslam, 2002; Beitzel et al., 2004; Lillis et al., 2006). In this study, we also use CombMNZ as a baseline for comparison, and following Lillis et al. (2006) and Lee (1997), compute it in the following way. First, we normalize each score si as norm(si) = si−min(s) max(s)−min(s), where max(s) and min(s) are the maximum and minimum scores seen in the input result list. After normalization, the CombMNZ score for a document d is computed as CombMNZd = L X ℓ Ns,d × |Nd > 0|. Here, L is the number of ranked lists to be combined, Nℓ,d is the normalized score of document d in ranked list ℓ, and |Nd > 0| is the number of nonzero normalized scores given to d by any result set. Manmatha et al. (2001) showed that retrieval scores from IR systems could be modeled using a Normal distribution for relevant documents and exponential distribution for non-relevant documents. However, in their study, fusion results using these comparatively complex normalization approaches achieved performance no better than the much simpler CombMNZ. A simple rank-based fusion technique is interleaving (Voorhees et al., 1994). In this approach, the highest ranked document from each list is taken in turn (ignoring duplicates) and placed at the top of the new, combined list. Many probabilistic combination approaches have also been developed, a recent example being Lillis et al. (2006). Perhaps the most closely related proposal, using logistic regression, was made first by Savoy et al. (1988). Logistic regression is one example from the broad class of models which GAMs encompass. Unlike GAMs in their full generality however, logistic regression imposes a comparatively high degree of linearity in the model structure. 2.1 Combining speech retrieval results Previous work on single-collection result fusion has naturally focused on combining results from multiple retrieval systems. 
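(As a concrete point of reference, the following sketch shows the min–max normalization and CombMNZ fusion just described. It is an illustrative reimplementation rather than code from any of the cited systems, and the two toy runs and document identifiers are invented.)

# Illustrative reimplementation (not code from the cited systems) of min-max
# score normalization followed by CombMNZ fusion of several result lists.
def minmax_normalize(results):
    """results: {doc_id: raw retrieval score} for one input run."""
    lo, hi = min(results.values()), max(results.values())
    if hi == lo:                              # degenerate run: all scores equal
        return {d: 0.0 for d in results}
    return {d: (s - lo) / (hi - lo) for d, s in results.items()}

def comb_mnz(runs):
    """runs: list of {doc_id: raw score}. Fused score = sum of normalized
    scores times the number of nonzero normalized scores for the document."""
    normalized = [minmax_normalize(r) for r in runs]
    docs = set().union(*(r.keys() for r in normalized))
    fused = {}
    for d in docs:
        scores = [r[d] for r in normalized if d in r]
        fused[d] = sum(scores) * sum(1 for s in scores if s > 0)
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Two invented runs over invented document identifiers:
run_a = {"doc1": 12.0, "doc2": 7.5, "doc3": 3.0}
run_b = {"doc2": 0.9, "doc4": 0.4}
print(comb_mnz([run_a, run_b]))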
In this case, the potential for performance improvements depends critically on the uniqueness of the different input systems being combined. Accordingly, small variations in the same system often do not combine to produce results better than the best of their inputs (Beitzel et al., 2004). Errorful document collections such as conversational speech introduce new difficulties and opportunities for data fusion. This is so, in particular, because even the same system can produce drastically different retrieval results when multiple representations of the documents (e.g., multiple transcript hypotheses) are available. Consider, for example, Figure 1 which shows, for each term in each of our title queries, the proportion of relevant documents containing that term in only one of our two transcript hypotheses. Critically, by plotting this proportion against the term’s inverse document frequency, we observe that the most discriminative query terms are often not available in both document represen462 1 2 3 4 5 0.0 0.2 0.4 0.6 0.8 1.0 Inverse Document Frequency Proportion of relevant docs with term in only one transcript source Figure 1: For each term in each query, the proportion of relevant documents containing the term vs. inverse document frequency. For increasingly discriminative terms (higher idf), we observe that the probability of only one transcript containing the term increases dramatically. tations. As these high-idf terms make large contributions to retrieval scores, this suggests that even an identical retrieval system may return a large score using one transcript hypothesis, and yet a very low score using another. Accordingly, a linear combination of scores is unlikely to be optimal. A second example illustrates the difficulty. Suppose recognition system A can recognize a particular high-idf query term, but system B never can. In the extreme case, the term may simply be out of vocabulary, although this may occur for various other reasons (e.g., poor language modeling or pronunciation dictionaries). Here again, a linear combination of scores will fail, as will rank-based interleaving. In the latter case, we will alternate between taking a plausible document from system A and an inevitably worse result from the crippled system B. As a potential solution for these difficulties, we consider the use of generalized additive models for retrieval fusion. 3 Generalized Additive Models Generalized Additive Models (GAMs) are a generalization of Generalized Linear Models (GLMs), while GLMs are a generalization of the well known linear model. In a GLM, the distribution of an observed random variable Yi is related to the linear predictor ηi through a smooth monotonic link function g, g(µi) = ηi = Xiβ. Here, Xi is the ith row of the model matrix X (one set of observations corresponding to one observed yi) and β is a vector of unknown parameters to be learned from the data. If we constrain our link function g to be the identity transformation, and assume Yi is Normal, then our GLM reduces to a simple linear model. But GLMs are considerably more versatile than linear models. First, rather than only the Normal distribution, the response Yi is free to have any distribution belonging to the exponential family of distributions. This family includes many useful distributions such as the Binomial, Normal, Gamma, and Poisson. Secondly, by allowing non-identity link functions g, some degree of non-linearity may be incorporated in the model structure. 
A well known GLM in the NLP community is logistic regression (which may alternatively be derived as a maximum entropy classifier). In logistic regression, the response is assumed to be Binomial and the chosen link function is the logit transformation, g(µi) = logit(µi) = log  µi 1 −µi  . Generalized additive models allow for additional model flexibility by allowing the linear predictor to now also contain learned smooth functions fj of the covariates xk. For example, g(µi) = X∗ i θ + f1(x1i) + f2(x2i) + f3(x3i, x4i). As in a GLM, µi ≡E(Yi) and Yi belongs to the exponential family. Strictly parametric model components are still permitted, which we represent as a row of the model matrix X∗ i (with associated parameters θ). GAMs may be thought of as GLMs where one or more covariate has been transformed by a basis expansion, f(x) = Pq j bj(x)βj. Given a set of q basis functions bj spanning a q-dimensional space 463 of smooth transformations, we are back to the linear problem of learning coefficients βj which “optimally” fit the data. If we knew the appropriate transformation of our covariates (say the logarithm), we could simply apply it ourselves. GAMs allow us to learn these transformations from the data, when we expect some transformation to be useful but don’t know it’s form a priori. In practice, these smooth functions may be represented and the model parameters may be learned in various ways. In this work, we use the excellent open source package mgcv (Wood, 2006), which uses penalized likelihood maximization to prevent arbitrarily “wiggly” smooth functions (i.e., overfitting). Smooths (including multidimensional smooths) are represented by thin plate regression splines (Wood, 2003). 3.1 Combining speech retrieval results with GAMs The chief difficulty introduced in combining ranked speech retrieval results is the severe disagreement introduced by differing document hypotheses. As we saw in Figure 1, it is often the case that the most discriminative query terms occur in only one transcript source. 3.1.1 GLM with factors Our first new approach for handling differences in transcripts is an extension of the logistic regression model previously used in data fusion work, (Savoy et al., 1988). Specifically, we augment the model with the first-order interaction of scores x1x2 and the factor αi, so that logit{E(Ri)} = β0 +αi +x1β1 +x2β2 +x1x2β3, where the relevance Ri ∼Binomial. A factor is essentially a learned intercept for different subsets of the response. In this case, αi =    βBOTH if both representations matched qi βIBM only di,IBM matched qi βBBN only di,BBN matched qi where αi corresponds to data row i, with associated document representations di,source and query qi. The intuition is simply that we’d like our model to have different biases for or against relevance based on which transcript source retrieved the document. This is a small-dimensional way of dampening the effects of significant disagreements in the document representations. 3.1.2 GAM with multidimensional smooth If a document’s score is large in both systems, we expect it to have high probability of relevance. However, as a document’s score increases linearly in one source, we have no reason to expect its probability of relevance to also increase linearly. Moreover, because the most discriminative terms are likely to be found in only one transcript source, even an absent score for a document does not ensure a document is not relevant. 
It is clear then that the mapping from document scores to probability of relevance is in general a complex nonlinear surface. The limited degree of nonlinear structure afforded to GLMs by non-identity link functions is unlikely to sufficiently capture this intuition. Instead, we can model this non-linearity using a generalized additive model with multidimensional smooth f(xIBM, xBBN), so that logit{E(Ri)} = β0 + f(xIBM, xBBN). Again, Ri ∼Binomial and β0 is a learned intercept (which, alternatively, may be absorbed by the smooth f). Figure 2 shows the smoothing transformation f learned during our evaluation. Note the small decrease in predicted probability of relevance as the retrieval score from one system decreases, while the probability curves upward again as the disagreement increases. This captures our intuition that systems often disagree strongly because discriminative terms are often not recognized in all transcript sources. We can think of the probability of relevance mapping learned by the factor model of Section 3.1.1 as also being a surface defined over the space of input document scores. That model, however, was constrained to be linear. It may be visualized as a collection of affine planes (with common normal vectors, but each shifted upwards by their factor level’s weight and the common intercept). 464 4 Experiments 4.1 Dataset Our dataset is a collection of 272 oral history interviews from the MALACH collection. The task is to retrieve short speech segments which were manually designated as being topically coherent by professional indexers. There are 8,104 such segments (corresponding to roughly 589 hours of conversational speech) and 96 assessed topics. We follow the topic partition used for the 2007 evaluation by the Cross Language Evaluation Forum’s cross-language speech retrieval track (Pecina et al., 2007). This gives us 63 topics on which to train our combination systems and 33 topics for evaluation. 4.2 Evaluation 4.2.1 Geometric Mean Average Precision Average precision (AP) is the average of the precision values obtained after each document relevant to a particular query is retrieved. To assess the effectiveness of a system across multiple queries, a commonly used measure is mean average precision (MAP). Mean average precision is defined as the arithmetic mean of per-topic average precision, MAP = 1 n P n APn. A consequence of the arithmetic mean is that, if a system improvement doubles AP for one topic from 0.02 to 0.04, while simultaneously decreasing AP on another from 0.4 to 0.38, the MAP will be unchanged. If we prefer to highlight performance differences on the lowest performing topics, a widely used alternative is the geometric mean of average precision (GMAP), first introduced in the TREC 2004 robust track (Voorhees, 2006). GMAP = n sY n APn Robertson (2006) presents a justification and analysis of GMAP and notes that it may alternatively be computed as an arithmetic mean of logs, GMAP = exp 1 n X n log APn. 4.2.2 Significance Testing for GMAP A standard way of measuring the significance of system improvements in MAP is to compare average precision (AP) on each of the evaluation queries using the Wilcoxon signed-rank test. This test, while not requiring a particular distribution on the measurements, does assume that they belong to an interval scale. Similarly, the arithmetic mean of MAP assumes AP has interval scale. As Robertson (2006) has pointed out, it is in no sense clear that AP (prior to any transformation) satisfies this assumption. 
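To illustrate the structure of the two models, the following sketch approximates them in Python; our actual models are fit with the R package mgcv (Section 3), roughly gam(rel ~ s(x.ibm, x.bbn), family=binomial) for the multidimensional smooth. The sketch is only a stand-in: it uses scikit-learn logistic regression, replaces the penalized thin plate regression spline with a fixed radial-basis-function expansion of the two scores, and is run on synthetic scores and relevance labels rather than our collection.

# Rough Python stand-in (our models are fit with the R package mgcv):
# (1) the factor GLM of Section 3.1.1 -- logistic regression on the two scores,
#     their interaction, and a categorical "which source matched" factor; and
# (2) an approximation of the 2-D smooth of Section 3.1.2, here a logistic
#     regression over radial-basis-function features of the two scores instead
#     of a penalized thin plate regression spline.
# All data below is synthetic; this is not the implementation evaluated here.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# A zero score stands in for "term not recognized in this transcript source".
x_ibm = np.where(rng.uniform(size=n) < 0.2, 0.0, rng.uniform(1.0, 10.0, n))
x_bbn = np.where(rng.uniform(size=n) < 0.2, 0.0, rng.uniform(1.0, 10.0, n))
true_logit = 0.3 * x_ibm + 0.3 * x_bbn - 4.0
relevant = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(int)

# (1) Factor GLM: per-match-pattern intercepts, the two scores, interaction.
both = ((x_ibm > 0) & (x_bbn > 0)).astype(float)
ibm_only = ((x_ibm > 0) & (x_bbn == 0)).astype(float)
bbn_only = ((x_ibm == 0) & (x_bbn > 0)).astype(float)
X_glm = np.column_stack([both, ibm_only, bbn_only, x_ibm, x_bbn, x_ibm * x_bbn])
factor_glm = LogisticRegression(max_iter=1000).fit(X_glm, relevant)

# (2) GAM stand-in: fixed grid of RBF features over the 2-D score space.
centers = np.array([(i, j) for i in range(0, 11, 2) for j in range(0, 11, 2)],
                   dtype=float)

def rbf_features(a, b, width=2.0):
    pts = np.column_stack([a, b])[:, None, :]                  # (n, 1, 2)
    sq_dist = ((pts - centers[None, :, :]) ** 2).sum(axis=2)   # (n, n_centers)
    return np.exp(-sq_dist / (2.0 * width ** 2))

gam_like = LogisticRegression(C=0.5, max_iter=1000).fit(
    rbf_features(x_ibm, x_bbn), relevant)

# Predicted relevance when one source scores highly and the other returns nothing:
print(gam_like.predict_proba(rbf_features(np.array([8.0]), np.array([0.0])))[0, 1])

In practice the spline basis and smoothing parameter selection in mgcv matter; the RBF expansion here only conveys why a nonlinear surface over the pair of scores can capture the disagreement pattern described above.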
This becomes an argument for GMAP, since it may also be defined using an arithmetic mean of logtransformed average precisions. That is to say, the logarithm is simply one possible monotonic transformation which is arguably as good as any other, including the identify transform, in terms of whether the transformed value satisfies the interval assumption. This log transform (and hence GMAP) is useful simply because it highlights improvements on the most difficult queries. We apply the same reasoning to test for statistical significance in GMAP improvements. That is, we test for significant improvements in GMAP by applying the Wilcoxon signed rank test to the paired, transformed average precisions, log AP. We handle tied pairs and compute exact p-values using the Streitberg & R¨ohmel Shift-Algorithm (1990). For topics with AP = 0, we follow the Robust Track convention and add ϵ = 0.00001. The authors are not aware of significance tests having been previously reported on GMAP. 4.3 Retrieval System We use Okapi BM25 (Robertson et al., 1996) as our basic retrieval system, which defines a document D’s retrieval score for query Q as s(D, Q) = n X i=1 idf(qi) (k3+1)qfi k3+qfi )f(qi, D)(k1 + 1) f(qi, D) + k1(1 −b + b |D| avgdl) , where the inverse document frequency (idf) is defined as idf(qi) = log N −n(qi) + 0.5 n(qi) + 0.5 , N is the size of the collection, n(qi) is the document frequency for term qi, qfi is the frequency of term qi in query Q, f(qi, D) is the term frequency of query term qi in document D, |D| is the length of the matching document, and avgdl is the average length of a document in the collection. We set the 465 BBN Score IBM Score linear predictor Figure 2: The two dimensional smooth f(sIBM, sBBN) learned to predict relevance given input scores from IBM and BBN transcripts. parameters to k1 = 1, k3 = 1, b = .5, which gave good results on a single transcript. 4.4 Speech Recognition Transcripts Our first set of speech recognition transcripts was produced by IBM for the MALACH project, and used for several years in the CLEF cross-language speech retrieval (CL-SR) track (Pecina et al., 2007). The IBM recognizer was built using a manually produced pronunciation dictionary and 200 hours of transcribed audio. The resulting interview transcripts have a reported mean word error rate (WER) of approximately 25% on held out data, which was obtained by priming the language model with metadata available from pre-interview questionnaires. This represents significant improvements over IBM transcripts used in earlier CL-SR evaluations, which had a best reported WER of 39.6% (Byrne et al., 2004). This system is reported to have run at approximately 10 times real time. 4.4.1 New Transcripts for MALACH We were graciously permitted to use BBN Technology’s speech recognition system to produce a second set of ASR transcripts for our experiments (Prasad et al., 2005; Matsoukas et al., 2005). We selected the one side of the audio having largest RMS amplitude for training and decoding. This channel was down-sampled to 8kHz and segmented using an available broadcast news segmenter. Because we did not have a pronunciation dictionary which covered the transcribed audio, we automatically generated pronunciations for roughly 14k words using a rulebased transliterator and the CMU lexicon. Using the same 200 hours of transcribed audio, we trained acoustic models as described in (Prasad et al., 2005). We use a mixture of the training transcripts and various newswire sources for our language model training. 
We did not attempt to prime the language model for particular interviewees or otherwise utilize any interview metadata. For decoding, we ran a fast (approximately 1 times real time) system, as described in (Matsoukas et al., 2005). Unfortunately, as we do not have the same development set used by IBM, a direct comparison of WER is not possible. Testing on a small held out set of 4.3 hours, we observed our system had a WER of 32.4%. 4.5 Combination Methods For baseline comparisons, we ran our evaluation on each of the two transcript sources (IBM and our new transcripts), the linear combination chosen to optimize MAP (LC-MAP), the linear combination chosen to optimize GMAP (LC-GMAP), interleaving (IL), and CombMNZ. We denote our additive factor model as Factor GLM, and our multidimensional smooth GAM model as MD-GAM. Linear combination parameters were chosen to optimize performance on the training set, sweeping the weight for each source at intervals of 0.01. For the generalized additive models, we maximized the penalized likelihood of the training examples under our model, as described in Section 3. 5 Results Table 1 shows our complete set of results. This includes baseline scores from our new set of transcripts, each of our baseline combination approaches, and results from our proposed combination models. Although we are chiefly interested in improvements on difficult topics (i.e., GMAP), we present MAP for comparison. Results in bold indicate the largest mean value of the measure (either AP or log AP), while daggers (†) indicate the 466 Type Model MAP GMAP T IBM 0.0531 (-.2) 0.0134 (-11.8) BBN 0.0532 0.0152 LC-MAP 0.0564 (+6.0) 0.0158 (+3.9) LC-GMAP 0.0587 (+10.3) 0.0154 (+1.3) IL 0.0592 (+11.3) 0.0165 (+8.6) CombMNZ 0.0550 (+3.4) 0.0150 (-1.3) Factor GLM 0.0611 (+14.9)† 0.0161 (+5.9) MD-GAM 0.0561 (+5.5)† 0.0180 (+18.4)† TD IBM 0.0415 (-15.1) 0.0173 (-9.9) BBN 0.0489 0.0192 LC-MAP 0.0519 (+6.1)† 0.0201 (+4.7)† LC-GMAP 0.0531 (+8.6)† 0.0200 (+4.2) IL 0.0507 (+3.7) 0.0210 (+9.4) CombMNZ 0.0495 (+1.2)† 0.0196 (+2.1) Factor GLM 0.0526 (+7.6)† 0.0198 (+3.1) MD-GAM 0.0529 (+8.2)† 0.0223 (+16.2)† Table 1: MAP and GMAP for each combination approach, using the evaluation query set from the CLEF2007 CL-SR (MALACH) collection. Shown in parentheses is the relative improvement in score over the best single transcripts results (i.e., using our new set of transcripts). The best (mean) score for each condition is in bold. combination is a statistically significant improvement (α = 0.05) over our new transcript set (that is, over the best single transcript result). Tests for statistically significant improvements in GMAP are computed using our paired log AP test, as discussed in Section 4.2.2. First, we note that the GAM model with multidimensional smooth gives the largest GMAP improvement for both title and title-description runs. Secondly, it is the only combination approach able to produce statistically significant relative improvements on both measures for both conditions. For GMAP, our measure of interest, these improvements are 18.4% and 16.2% respectively. One surprising observation from Table 1 is that the mean improvement in log AP for interleaving is fairly large and yet not statistically significant (it is in fact a larger mean improvement than several other baseline combination approaches which are significant improvements. This may suggest that interleaving suffers from a large disparity between its best and worst performance on the query set. 
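To make these measures concrete, the sketch below computes MAP and GMAP from per-topic average precision (using the ϵ = 0.00001 convention for zero AP) and applies a paired Wilcoxon signed-rank test to log AP, as in Section 4.2.2. It is illustrative only: the per-topic AP values are invented, and it relies on scipy's generic Wilcoxon implementation rather than the exact Streitberg–Röhmel shift algorithm used for the results reported above.

# Illustrative sketch: MAP, GMAP (with the epsilon convention for AP = 0), and
# a paired Wilcoxon signed-rank test on log AP, approximating Section 4.2.2.
# scipy's generic implementation is used instead of the exact shift algorithm.
import numpy as np
from scipy.stats import wilcoxon

EPSILON = 0.00001   # Robust Track convention for topics with AP = 0

def map_score(ap):
    return float(np.mean(ap))

def gmap_score(ap):
    ap = np.maximum(np.asarray(ap, dtype=float), EPSILON)
    return float(np.exp(np.mean(np.log(ap))))    # geometric mean of per-topic AP

def gmap_significance(ap_baseline, ap_system):
    a = np.maximum(np.asarray(ap_baseline, dtype=float), EPSILON)
    b = np.maximum(np.asarray(ap_system, dtype=float), EPSILON)
    return wilcoxon(np.log(a), np.log(b))        # paired test on log AP

# Invented per-topic AP values for two hypothetical runs over 10 topics:
baseline = [0.02, 0.00, 0.11, 0.31, 0.05, 0.07, 0.18, 0.01, 0.09, 0.04]
system   = [0.04, 0.01, 0.10, 0.33, 0.09, 0.08, 0.21, 0.03, 0.12, 0.06]
print("MAP:  %.4f -> %.4f" % (map_score(baseline), map_score(system)))
print("GMAP: %.4f -> %.4f" % (gmap_score(baseline), gmap_score(system)))
print(gmap_significance(baseline, system))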
Figure 3: The proportion of relevant documents returned in IBM and BBN transcripts for discriminative title words (title words occurring in less than .01 of the collection). Point size is proportional to the improvement in average precision using (1) the best linear combination chosen to optimize GMAP (△) and (2) the combination using MD-GAM (○). [Axes: term recall in IBM transcripts vs. term recall in BBN transcripts, log-scaled; labeled points include impact, guilt, attitud, zionism, previou, and assembl.]

Figure 3 examines whether our improvements come systematically from only one of the transcript sources. It shows the proportion of relevant documents in each transcript source containing the most discriminative title words (words occurring in less than .01 of the collection). Each point represents one term for one topic. The size of the point is proportional to the difference in AP observed on that topic by using MD-GAM and by using LC-GMAP. If the difference is positive (MD-GAM wins), we plot ○, otherwise △. First, we observe that, when it wins, MD-GAM tends to increase AP much more than when LC-GMAP wins. While there are many wins also for LC-GMAP, the effects of the larger MD-GAM improvements will dominate for many of the most difficult queries. Secondly, there does not appear to be any evidence that one transcript source has much higher term-recall than the other.

5.1 Oracle linear combination

A chief advantage of our MD-GAM combination model is that it is able to map input scores nonlinearly onto a probability of document relevance.

Type   Model             GMAP
T      Oracle-LC-GMAP    0.0168
       MD-GAM            0.0180 (+7.1)
TD     Oracle-LC-GMAP    0.0222
       MD-GAM            0.0223 (+0.5)

Table 2: GMAP results for an oracle experiment in which MD-GAM was fairly trained and LC-GMAP was unfairly optimized on the test queries.

To make an assessment of how much this capability helps the system, we performed an oracle experiment where we again constrained MD-GAM to be fairly trained but allowed LC-GMAP to cheat and choose the combination optimizing GMAP on the test data. Table 2 lists the results. While the improvement with MD-GAM is now not statistically significant (primarily because of our small query set), we found it still outperformed the oracle linear combination. For title-only queries, this improvement was surprisingly large at 7.1% relative.

6 Conclusion

While speech retrieval is one example of retrieval under errorful document representations, other similar tasks may also benefit from these combination models. This includes the task of cross-language retrieval, as well as the retrieval of documents obtained by optical character recognition. Within speech retrieval, further work also remains to be done. For example, various other features are likely to be useful in predicting optimal system combination. These might include, for example, confidence scores, acoustic confusability, or other strong cues that one recognition system is unlikely to have properly recognized a query term. We look forward to investigating these possibilities in future work.

The question of how much a system should expose its internal workings (e.g., its document representations) to external systems is a long-standing problem in meta-search. We’ve taken the rather narrow view that systems might only expose the list of scores they assigned to retrieved documents, a plausible scenario considering the many systems now emerging which are effectively doing this already.
Some examples include EveryZing,1 the MIT Lec1http://www.everyzing.com/ ture Browser,2 and Comcast’s video search.3 This trend is likely to continue as the underlying representations of the content are themselves becoming increasingly complex (e.g., word and subword level lattices or confusion networks). The cost of exposing such a vast quantity of such complex data rapidly becomes difficult to justify. But if the various representations of the content are available, there are almost certainly other combination approaches worth investigating. Some possible approaches include simple linear combinations of the putative term frequencies, combinations of one best transcript hypotheses (e.g., using ROVER (Fiscus, 1997)), or methods exploiting word-lattice information (Evermann and Woodland, 2000). Our planet’s 6.6 billion people speak many more words every day than even the largest Web search engines presently index. While much of this is surely not worth hearing again (or even once!), some of it is surely precious beyond measure. Separating the wheat from the chaff in this cacophony is the raison d’etre for information retrieval, and it is hard to conceive of an information retrieval challenge with greater scope or greater potential to impact our society than improving our access to the spoken word. Acknowledgements The authors are grateful to BBN Technologies, who generously provided access to their speech recognition system for this research. References Brian T. Bartell, Garrison W. Cottrell, and Richard K. Belew. 1994. Automatic combination of multiple ranked retrieval systems. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 173–181. Steven M. Beitzel, Eric C. Jensen, Abdur Chowdhury, David Grossman, Ophir Frieder, and Nazli Goharian. 2004. Fusion of effective retrieval strategies in the same information retrieval system. J. Am. Soc. Inf. Sci. Technol., 55(10):859–868. W. Byrne, D. Doermann, M. Franz, S. Gustman, J. Hajic, D.W. Oard, M. Picheny, J. Psutka, B. Ramabhadran, 2http://web.sls.csail.mit.edu/lectures/ 3http://videosearch.comcast.net 468 D. Soergel, T. Ward, and Wei-Jing Zhu. 2004. Automatic recognition of spontaneous speech for access to multilingual oral history archives. IEEE Transactions on Speech and Audio Processing, Special Issue on Spontaneous Speech Processing, 12(4):420–435, July. J. P. Callan, Z. Lu, and W. Bruce Croft. 1995. Searching Distributed Collections with Inference Networks . In E. A. Fox, P. Ingwersen, and R. Fidel, editors, Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 21–28, Seattle, Washington. ACM Press. G. Evermann and P.C. Woodland. 2000. Posterior probability decoding, confidence estimation and system combination. In Proceedings of the Speech Transcription Workshop, May. Jonathan G. Fiscus. 1997. A Post-Processing System to Yield Reduced Word Error Rates: Recogniser Output Voting Error Reduction (ROVER). In Proceedings of the IEEE ASRU Workshop, pages 347–352. Jong-Hak Lee. 1997. Analyses of multiple evidence combination. In SIGIR Forum, pages 267–276. David Lillis, Fergus Toolan, Rem Collier, and John Dunnion. 2006. Probfuse: a probabilistic approach to data fusion. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 139–146, New York, NY, USA. ACM. R. Manmatha, T. Rath, and F. Feng. 2001. 
Modeling score distributions for combining the outputs of search engines. In SIGIR ’01: Proceedings of the 24th annual international ACM SIGIR conference on Research and development in information retrieval, pages 267–275, New York, NY, USA. ACM. Spyros Matsoukas, Rohit Prasad, Srinivas Laxminarayan, Bing Xiang, Long Nguyen, and Richard Schwartz. 2005. The 2004 BBN 1xRT Recognition Systems for English Broadcast News and Conversational Telephone Speech. In Interspeech 2005, pages 1641–1644. Mark Montague and Javed A. Aslam. 2002. Condorcet fusion for improved retrieval. In CIKM ’02: Proceedings of the eleventh international conference on Information and knowledge management, pages 538–548, New York, NY, USA. ACM. Pavel Pecina, Petra Hoffmannova, Gareth J.F. Jones, Jianqiang Wang, and Douglas W. Oard. 2007. Overview of the CLEF-2007 Cross-Language Speech Retrieval Track. In Proceedings of the CLEF 2007 Workshop on Cross-Language Information Retrieval and Evaluation, September. Allison L. Powell, James C. French, James P. Callan, Margaret E. Connell, and Charles L. Viles. 2000. The impact of database selection on distributed searching. In Research and Development in Information Retrieval, pages 232–239. R. Prasad, S. Matsoukas, C.L. Kao, J. Ma, D.X. Xu, T. Colthurst, O. Kimball, R. Schwartz, J.L. Gauvain, L. Lamel, H. Schwenk, G. Adda, and F. Lefevre. 2005. The 2004 BBN/LIMSI 20xRT English Conversational Telephone Speech Recognition System. In Interspeech 2005. S. Robertson, S. Walker, S. Jones, and M. HancockBeaulieu M. Gatford. 1996. Okapi at TREC-3. In Text REtrieval Conference, pages 21–30. Stephen Robertson. 2006. On GMAP: and other transformations. In CIKM ’06: Proceedings of the 15th ACM international conference on Information and knowledge management, pages 78–83, New York, NY, USA. ACM. J. Savoy, A. Le Calv´e, and D. Vrajitoru. 1988. Report on the TREC-5 experiment: Data fusion and collection fusion. Joseph A. Shaw and Edward A. Fox. 1994. Combination of multiple searches. In Proceedings of the 2nd Text REtrieval Conference (TREC-2). Bernd Streitberg and Joachim R¨ohmel. 1990. On tests that are uniformly more powerful than the WilcoxonMann-Whitney test. Biometrics, 46(2):481–484. Christopher C. Vogt and Garrison W. Cottrell. 1999. Fusion via a linear combination of scores. Information Retrieval, 1(3):151–173. Ellen M. Voorhees, Narendra Kumar Gupta, and Ben Johnson-Laird. 1994. The collection fusion problem. In D. K. Harman, editor, The Third Text REtrieval Conference (TREC-3), pages 500–225. National Institute of Standards and Technology. Ellen M. Voorhees. 2006. Overview of the TREC 2005 robust retrieval track. In Ellem M. Voorhees and L.P. Buckland, editors, The Fourteenth Text REtrieval Conference, (TREC 2005), Gaithersburg, MD: NIST. Simon N. Wood. 2003. Thin plate regression splines. Journal Of The Royal Statistical Society Series B, 65(1):95–114. Simon Wood. 2006. Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC. 469
Proceedings of ACL-08: HLT, pages 470–478, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Critical Reassessment of Evaluation Baselines for Speech Summarization Gerald Penn and Xiaodan Zhu University of Toronto 10 King’s College Rd. Toronto M5S 3G4 CANADA gpenn,xzhu  @cs.toronto.edu Abstract We assess the current state of the art in speech summarization, by comparing a typical summarizer on two different domains: lecture data and the SWITCHBOARD corpus. Our results cast significant doubt on the merits of this area’s accepted evaluation standards in terms of: baselines chosen, the correspondence of results to our intuition of what “summaries” should be, and the value of adding speechrelated features to summarizers that already use transcripts from automatic speech recognition (ASR) systems. 1 Problem definition and related literature Speech is arguably the most basic, most natural form of human communication. The consistent demand for and increasing availability of spoken audio content on web pages and other digital media should therefore come as no surprise. Along with this availability comes a demand for ways to better navigate through speech, which is inherently more linear or sequential than text in its traditional delivery. Navigation connotes a number of specific tasks, including search, but also browsing (Hirschberg et al., 1999) and skimming, which can involve far more analysis and manipulation of content than the spoken document retrieval tasks of recent NIST fame (1997 2000). These would include time compression of the speech signal and/or “dichotic” presentations of speech, in which a different audio track is presented to either ear (Cherry and Taylor, 1954; Ranjan et al., 2006). Time compression of speech, on the other hand, excises small slices of digitized speech data out of the signal so that the voices speak all of the content but more quickly. The excision can either be fixed rate, for which there have been a number of experiments to detect comprehension limits, or variable rate, where the rate is determined by pause detection and shortening (Arons, 1992), pitch (Arons, 1994) or longer-term measures of linguistic salience (Tucker and Whittaker, 2006). A very short-term measure based on spectral entropy can also be used (Ajmal et al., 2007), which has the advantage that listeners cannot detect the variation in rate, but they nevertheless comprehend better than fixed-rate baselines that preserve pitch periods. With or without variable rates, listeners can easily withstand a factor of two speed-up, but Likert response tests definitively show that they absolutely hate doing it (Tucker and Whittaker, 2006) relative to word-level or utterance-level excisive methods, which would include the summarization-based strategy that we pursue in this paper. The strategy we focus on here is summarization, in its more familiar construal from computational linguistics and information retrieval. We view it as an extension of the text summarization problem in which we use automatically prepared, imperfect textual transcripts to summarize speech. Other details are provided in Section 2.2. Early work on speech summarization was either domainrestricted (Kameyama and Arima, 1994), or prided itself on not using ASR at all, because of its unreliability in open domains (Chen and Withgott, 1992). Summaries of speech, however, can still be delivered audially (Kikuchi et al., 2003), even when (noisy) transcripts are used. 
The purpose of this paper is not so much to introduce a new way of summarizing speech, as to critically reappraise how well the current state of the art really works. The earliest work to consider open-domain speech summarization seriously from the standpoint of text summarization technology (Valenza et al., 1999; Zechner and Waibel, 2000) approached the task as one of speech transcription followed by text summarization of the resulting transcript (weighted by confidence scores from the ASR system), with the very interesting result that transcription and summarization errors in such systems tend to offset one another in overall performance. In the years following this work, however, some research by others on speech summarization (Maskey and Hirschberg, 2005; Murray et al., 2005; Murray et al., 2006, inter alia) has focussed de rigueur on striving for and measuring the improvements attainable over the transcribe-then-summarize baseline with features available from non-transcriptional sources (e.g., pitch and energy of the acoustic signal) or those, while evident in textual transcripts, not germane to texts other than spoken language transcripts (e.g., speaker changes or question-answer pair boundaries). These "novel" features do indeed seem to help, but not by nearly as much as some of this recent literature would suggest. The experiments and the choice of baselines have largely been framed to illuminate the value of various knowledge sources ("prosodic features," "named entity features" etc.), rather than to optimize performance per se, although the large-dimensional pattern recognition algorithms and classifiers that they use are inappropriate for descriptive hypothesis testing. First, most of the benefit attained by these novel sources can be captured simply by measuring the lengths of candidate utterances. Only one paper we are aware of (Christensen et al., 2004) has presented the performance of length on its own, although the objective there was to use length, position and other simple textual feature baselines (no acoustics) to distinguish the properties of various genres of spoken audio content, a topic that we will return to in Section 2.1. (Length features are often mentioned in the text of other work as the most beneficial single features in more heterogeneous systems, but without indicating their performance on their own.) Second, maximal marginal relevance (MMR) has also fallen by the wayside, although it too performs very well. Again, only one paper that we are aware of (Murray et al., 2005) provides an MMR baseline, and there MMR significantly outperforms an approach trained on a richer collection of features, including acoustic features. MMR was the method of choice for utterance selection in Zechner and Waibel (2000) and their later work, but it is often eschewed perhaps because textbook MMR does not directly provide a means to incorporate other features. There is a simple means of doing so (Section 2.3), and it is furthermore very resilient even at high word-error rates (WERs, Section 3.3). Third, as inappropriate uses of optimization methods go, the one comparison that has not made it into print yet is that of the more traditional "what-is-said" features (MMR, length in words and named-entity features) vs. the avant-garde "how-it-is-said" features (structural, acoustic/prosodic and spoken-language features). Maskey & Hirschberg (2005) divide their features into these categories, but only to compute a correlation coefficient between them (0.74).

The former in aggregate still performs significantly better than the latter in aggregate, even if certain members of the latter do outperform certain members of the former. This is perhaps the most reassuring comparison we can offer to text summarization and ASR enthusiasts, because it corroborates the important role that ASR still plays in speech summarization in spite of its imperfections. Finally, and perhaps most disconcertingly, we can show that current speech summarization performs just as well, and in some respects even better, with SWITCHBOARD dialogues as it does with more coherent spoken-language content, such as lectures. This is not a failing of automated systems themselves: even humans exhibit the same tendency under the experimental conditions that most researchers have used to prepare evaluation gold standards. What this means is that, while speech summarization systems may arguably be useful and are indeed consistent with whatever it is that humans are doing when they are enlisted to rank utterances, this evaluation regime simply does not reflect how well the "summaries" capture the goal-orientation or higher-level purpose of the data that they are trained on. As a community, we have been optimizing an utterance excerpting task, we have been moderately successful at it, but this task in at least one important respect bears no resemblance to what we could convincingly call speech summarization. These four results provide us with valuable insight into the current state of the art in speech summarization: it is not summarization, the aspiration to measure the relative merits of knowledge sources has masked the prominence of some very simple baselines, and the Zechner & Waibel pipe-ASR-output-into-text-summarizer model is still very competitive; what seems to matter more than having access to the raw spoken data is simply knowing that it is spoken data, so that the most relevant, still textually available features can be used. Section 2 describes the background and further details of the experiments that we conducted to arrive at these conclusions. Section 3 presents the results that we obtained. Section 4 concludes by outlining an ecologically valid alternative for evaluating real summarization in light of these results.

2 Setting of the experiment

2.1 Provenance of the data

Speech summarizers are generally trained to summarize either broadcast news or meetings. With the exception of one paper that aspires to compare the "styles" of spoken and written language ceteris paribus (Christensen et al., 2004), the choice of broadcast news as a source of data in more recent work is rather curious. Broadcast news, while open in principle in its range of topics, typically has a range of closely parallel, written sources on those same topics, which can either be substituted for spoken source material outright, or at the very least be used corroboratively alongside them. Broadcast news is also read by professional news readers, using high quality microphones and studio equipment, and as a result has very low WER; some even call ASR a solved problem on this data source. Broadcast news is also very text-like at a deeper level. Relative position within a news story or dialogue, the dreaded baseline of text summarization, works extremely well in spoken broadcast news summarization, too.
Within the operating region of the receiver operating characteristic (ROC) curve most relevant to summarizers (0.1–0.3), Christensen et al. (2004) showed that position was by far the best feature in a read broadcast news system with high WER, and that position and length of the extracted utterance were the two best with low WER. Christensen et al. (2004) also distinguished read news from "spontaneous news," broadcasts that contain interviews and/or man-in-the-field reports, and showed that in the latter variety position is not at all prominent at any level of WER, but length is. Maskey & Hirschberg's (2005) broadcast news is a combination of read news and spontaneous news. Spontaneous speech, in our view, particularly in the lecture domain, is our best representative of what needs to be summarized. Here, the positional baseline performs quite poorly (although length does extremely well, as discussed below), and ASR performance is far from perfect. In the case of lectures, there are rarely exact transcripts available, but there are bulleted lines from presentation slides, related research papers on the speaker's web page and monographs on the same topic that can be used to improve the language models for speech recognition systems. Lectures have just the right amount of props for realistic ASR, but still very open domain vocabularies and enough spontaneity to make this a problem worth solving. As discussed further in Section 4, the classroom lecture genre also provides us with a task that we hope to use to conduct a better grounded evaluation of real summarization quality. To this end, we use a corpus of lectures recorded at the University of Toronto to train and test our summarizer. Only the lecturer is recorded, using a head-worn microphone, and each lecture lasts 50 minutes. The lectures in our experiments are all undergraduate computer science lectures. The results reported in this paper used four different lectures, each from a different course and spoken by a different lecturer. We used a leave-one-out cross-validation approach by iteratively training on three lectures' worth of material and testing on the one remaining. We combine these iterations by averaging. The lectures were divided at random into 8–15 minute intervals, however, in order to provide a better comparison with the SWITCHBOARD dialogues. Each interval was treated as a separate document and was summarized separately. So the four lectures together actually provide 16 SWITCHBOARD-sized samples of material, and our cross-validation leaves on average four of them out in a turn.

We also use part of the SWITCHBOARD corpus in one of our comparisons. SWITCHBOARD is a collection of telephone conversations, in which two participants have been told to speak on a certain topic, but with no objective or constructive goal to proceed towards. While the conversations are locally coherent, this lack of goal-orientation is acutely apparent in all of them: they may be as close as any speech recording can come to being about nothing. (It should be noted that the meandering style of SWITCHBOARD conversations does have correlates in text processing, particularly in the genres of web blogs and newsgroup- or wiki-based technical discussions.) We randomly selected 27 conversations, containing a total of 3665 utterances (identified by pause length), and had three human annotators manually label each utterance as in- or out-of-summary. Interestingly, the interannotator agreement on SWITCHBOARD is higher than on the lecture corpus (κ = 0.372) and higher than the κ-score reported by Galley (2006) for the ICSI meeting data used by Murray et al. (2005; 2006), in spite of the fact that Murray et al. (2005) primed their annotators with a set of questions to consider when annotating the data. (Although we did define what a summary was to each annotator beforehand, we did not provide questions or suggestions on content for either corpus.) This does not mean that the SWITCHBOARD summaries are qualitatively better, but rather that annotators are apt to agree more on which utterances to include in them.

2.2 Summarization task

As with most work in speech summarization, our strategy involves considering the problem as one of utterance extraction, which means that we are not synthesizing new text or speech to include in summaries, nor are we attempting to extract small phrases to sew together with new prosodic contours. Candidate utterances are identified through pause-length detection, and the length of these pauses has been experimentally calibrated to 200 msec, which results in roughly sentence-sized utterances. Summarization then consists of choosing the best N% of these utterances for the summary, where N is typically between 10 and 30. We will provide ROC curves to indicate performance as a function over all N. An ROC is plotted along an x-axis of specificity (true-negative rate) and a y-axis of sensitivity (true-positive rate). A larger area under the ROC corresponds to better performance.

2.3 Utterance isolation

The framework for our extractive summarization experiments is depicted in Figure 1. With the exception of disfluency removal, it is very similar in its overall structure to that of Zechner's (2001). The summarizer takes as input either manual or automatic transcripts together with an audio file, and has three modules to process disfluencies and extract features important to identifying sentences. (Figure 1: Experimental framework for summarizing spontaneous conversations.) During sentence boundary detection, words that are likely to be adjacent to an utterance boundary are determined. We call these words trigger words. False starts are very common in spontaneous speech. According to Zechner's (2001) statistics on the SWITCHBOARD corpus, they occur in 10–15% of all utterances. A decision tree (C4.5, Release 8) is used to detect false starts, trained on the POS tags and trigger-word status of the first and last four words of sentences from a training set. Once false starts are detected, these are removed. We also identify repetitions as a sequence of between 1 and 4 words which is consecutively repeated in spontaneous speech. Generally, repetitions are discarded. Repetitions of greater length are extremely rare statistically and are therefore ignored. Question-answer pairs are also detected and linked. Question-answer detection is a two-stage process. The system first identifies the questions and then finds the corresponding answer. For (both WH- and Yes/No) question identification, another C4.5 classifier was trained on 2,000 manually annotated sentences using utterance length, POS bigram occurrences, and the POS tags and trigger-word status of the first and last five words of an utterance. After a question is identified, the immediately following sentence is labelled as the answer.
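As a rough illustration of the false-start detector described above, the sketch below trains a decision tree on the POS tags and trigger-word status of the first and last four words of an utterance. It is only a minimal sketch under stated assumptions: scikit-learn's DecisionTreeClassifier stands in for C4.5, and the trigger-word list and the toy training utterances are invented placeholders rather than the resources the authors actually used.

```python
# Sketch of a false-start detector: a decision tree over the POS tags and
# trigger-word status of the first and last four words of each utterance.
# DecisionTreeClassifier is a stand-in for C4.5; trigger words and training
# data are illustrative placeholders only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier

TRIGGER_WORDS = {"well", "so", "okay", "yeah", "i", "and"}  # hypothetical list

def edge_features(tokens, pos_tags, n=4):
    """POS tag and trigger-word status of the first and last n words."""
    feats = {}
    for i in range(n):
        first_w = tokens[i] if i < len(tokens) else "<PAD>"
        last_w = tokens[-(i + 1)] if i < len(tokens) else "<PAD>"
        feats[f"first_pos_{i}"] = pos_tags[i] if i < len(pos_tags) else "<PAD>"
        feats[f"last_pos_{i}"] = pos_tags[-(i + 1)] if i < len(pos_tags) else "<PAD>"
        feats[f"first_trig_{i}"] = first_w.lower() in TRIGGER_WORDS
        feats[f"last_trig_{i}"] = last_w.lower() in TRIGGER_WORDS
    return feats

# Toy training set: (tokens, POS tags, has_false_start)
train = [
    (["i", "i", "think", "we", "should", "go"],
     ["PRP", "PRP", "VBP", "PRP", "MD", "VB"], 1),
    (["we", "met", "on", "tuesday"],
     ["PRP", "VBD", "IN", "NNP"], 0),
]
vec = DictVectorizer()
X = vec.fit_transform(edge_features(t, p) for t, p, _ in train)
y = [label for _, _, label in train]
clf = DecisionTreeClassifier().fit(X, y)

test = (["so", "so", "what", "happened"], ["RB", "RB", "WP", "VBD"])
print(clf.predict(vec.transform([edge_features(*test)])))
```

The same feature-extraction pattern, with utterance length and POS bigrams added, could serve for the question-identification classifier mentioned above; the real systems were of course trained on the annotated SWITCHBOARD data rather than toy examples.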
2.4 Utterance selection

To obtain a trainable utterance selection module that can utilize and compare rich features, we formulated utterance selection as a standard binary classification problem, and experimented with several state-of-the-art classifiers, including linear discriminant analysis (LDA), support vector machines with a radial basis kernel (SVM), and logistic regression (LR), as shown in Figure 2 (computed on SWITCHBOARD data). MMR, Zechner's (2001) choice, is provided as a baseline. MMR linearly interpolates a relevance component and a redundancy component that balances the need for new vs. salient information. These two components can just as well be mixed through LR, which admits the possibility of adding more features and the benefit of using LR over held-out estimation. (Figure 2: Precision-recall curve for several classifiers on the utterance selection task; the plotted curves are LR, LDA, and SVM with the full feature set, LR restricted to the MMR features, and MMR.) As Figure 2 indicates, there is essentially no difference in performance among the three classifiers we tried, nor between MMR and LR restricted to the two MMR components. This is important, since we will be comparing MMR to LR-trained classifiers based on other combinations of features below. The ROC curves in the remainder of this paper have been prepared using the LR classifier.
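To make the interplay between MMR and LR concrete, the sketch below computes the two MMR components (relevance to the whole conversation and redundancy against already-selected utterances) and then mixes them either with a fixed interpolation weight or as two features of a logistic regression. The tf-idf representation, the interpolation weight, and the toy utterances and labels are assumptions for illustration, not the system's actual configuration.

```python
# Sketch of the two MMR components and of mixing them through logistic
# regression instead of a fixed interpolation weight.  All data below are
# invented for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

def mmr_components(utterances, selected_idx):
    vec = TfidfVectorizer()
    U = vec.fit_transform(utterances)
    doc = np.asarray(U.mean(axis=0))          # conversation centroid
    relevance = cosine_similarity(U, doc).ravel()
    if selected_idx:
        redundancy = cosine_similarity(U, U[selected_idx]).max(axis=1)
    else:
        redundancy = np.zeros(len(utterances))
    return relevance, redundancy

utts = ["we talked about the project deadline",
        "the deadline for the project is friday",
        "um so yeah anyway",
        "we should also book the meeting room"]
rel, red = mmr_components(utts, selected_idx=[1])

# Textbook MMR: score = lambda * relevance - (1 - lambda) * redundancy
lam = 0.7
print(lam * rel - (1 - lam) * red)

# LR mixing: the same two components become features of a binary
# in-summary/out-of-summary classifier (labels here are made up),
# which makes it trivial to append further feature columns.
X = np.column_stack([rel, red])
y = [1, 1, 0, 0]
print(LogisticRegression().fit(X, y).predict_proba(X)[:, 1])
```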
“Spoken language” features disfluencies given/new information question/answer pair identification Figure 3: Features available for utterance selection by knowledge source. Features in bold type quantify length. In our experiments, we exclude these from their knowledge sources, and study them as a separate length category. and F-measure are both widely used in speech summarization, and they have been shown by others to be broadly consistent on speech summarization tasks (Zhu and Penn, 2005). 3 Results and analysis 3.1 Lecture corpus The results of our evaluation on the lecture data appear in Figure 4. As is evident, there is very little difference among the combinations of features with this data source, apart from the positional baseline, “lead,” which simply chooses the first N% of the utterances. This performs quite poorly. The best performance is achieved by using all of the features together, but the length baseline, which uses only those features in bold type from Figure 3, is very close (no statistically significant difference), as is MMR.6 4When evaluated on its own, the MMR interpolating parameter is set through experimentation on a held-out dataset, as in Zechner (2001). When combined with other features, its relevance and redundancy components are provided to the classifier separately. 5All of these features are calculated on the word level and normalized by speaker. 6We conducted the same evaluation without splitting the lectures into 8–15 minute segments (so that the summaries summarize an entire lecture), and although space here precludes the presentation of the ROC curves, they are nearly identical Figure 4: ROC curve for utterance selection with the lecture corpus with several feature combinations. 3.2 SWITCHBOARD corpus The corresponding results on SWITCHBOARD are shown in Figure 5. Again, length and MMR are very close to the best alternative, which is again all of features combined. The difference with respect to either of these baselines is statistically significant within the popular 10–30% compression range, as is the classifier trained on all features but acoustic to those on the segments shown here. 475 Figure 5: ROC curve for SWITCHBOARD utterance selection with several feature combinations. (not shown). The classifier trained on all features but spoken language features (not shown) is not significantly better, so it is the spoken language features that make the difference, not the acoustic features. The best score is also significantly better than on the lecture data, however, particularly in the 10– 30% range. Our analysis of the difference suggests that the much greater variance in utterance length in SWITCHBOARD is what accounts for the overall better performance of the automated system as well as the higher human interannotator agreement. This also goes a long way to explaining why the length baseline is so good. Still another perspective is to classify features as either “what-is-said” (MMR, length and NE features) or “how-it-is-said” (structural, acoustic and spoken-language features), as shown in Figure 6. What-is-said features are better, but only barely so within the usual operating region of summarizers. 3.3 Impact of WER Word error rates (WERs) arising from speech recognition are usually much higher in spontaneous conversations than in read news. Having trained ASR models on SWITCHBOARD section 2 data with our sample of 27 conversations removed, the WER on that sample is 46%. 
3.3 Impact of WER

Word error rates (WERs) arising from speech recognition are usually much higher in spontaneous conversations than in read news. Having trained ASR models on SWITCHBOARD section 2 data with our sample of 27 conversations removed, the WER on that sample is 46%. We then train a language model on SWITCHBOARD section 2 without removing the 27-conversation sample so as to deliberately overfit the model. This pseudo-WER is then 39%. We might be able to get a lower WER by tuning the ASR models or by using more training data, but that is not the focus here. Summarizing the automatic transcripts generated from both of these systems using our LR-based classifier with all features, as well as manual (perfect) transcripts, we obtain the ROUGE–1 scores in Table 1.

Table 1: ROUGE–1 of the LR system with all features under different WERs (columns give summary lengths from 10% to 30%).
WER     10%    15%    20%    25%    30%
0.46    .615   .591   .556   .519   .489
0.39    .615   .591   .557   .526   .491
0       .619   .600   .566   .530   .492

Table 1 shows that WERs do not impact summarization performance significantly. One reason is that the acoustic and structural features are not affected by word errors, although WERs can affect the MMR, spoken language, length and NE features. Figures 7 and 8 present the ROC curves of the MMR and spoken language features, respectively, under different WERs. (Figure 7: ROC curves for the effectiveness of MMR scores on transcripts under different WERs. Figure 8: ROC curves for the effectiveness of spoken language features on transcripts under different WERs.) MMR is particularly resilient, even on SWITCHBOARD. Keywords are still often correctly recognized, even in the presence of high WER, although possibly because the same topic is discussed in many SWITCHBOARD conversations. When some keywords are misrecognized (e.g., hat), furthermore, related words (e.g., dress, wear) still may identify important utterances. As a result, a high WER does not necessarily mean a worse transcript for bag-of-keywords applications like summarization and classification, regardless of the data source. Utterance length does not change very much when WERs vary, and in addition, it is often a latent variable that underlies some other features' role, e.g., a long utterance often has a higher MMR score than a short utterance, even when the WER changes. Note that the effectiveness of spoken language features varies most between manually and automatically generated transcripts just at around the typical operating region of most summarization systems. The features of this category that respond most to WER are disfluencies. Disfluency detection is also at its most effective in this same range with respect to any transcription method.

4 Future Work

In terms of future work in light of these results, clearly the most important challenge is to formulate an experimental alternative to measuring against a subjectively classified gold standard in which annotators are forced to commit to relative salience judgements with no attention to goal orientation and no requirement to synthesize the meanings of larger units of structure into a coherent message. It is here that using the lecture domain offers us some additional assistance. Once these data have been transcribed and outlined, we will be able to formulate examinations for students that test their knowledge of the topics being lectured upon: both their higher-level understanding of goals and conceptual themes, as well as factoid questions on particular details. A group of students can be provided with access to a collection of entire lectures to establish a theoretical limit.
Experimental and control groups can then be provided with access only to summaries of those lectures, prepared using different sets of features, or different modes of delivery (text vs. speech), for example. This task-based protocol involves quite a bit more work, and at our university, at least, there are regulations that preclude us placing a group of students in a class at a disadvantage with respect to an examination for credit that need to be dealt with. It is, however, a far better means of assessing the quality of summaries in an ecologically valid context. It is entirely possible that, within this protocol, the baselines that have performed so well in our experiments, such as length or, in read news, position, will utterly fail, and that less traditional acoustic or spoken language features will genuinely, and with statistical significance, add value to a purely transcriptbased text summarization system. To date, however, that case has not been made. He et al. (1999) conducted a study very similar to the one suggested above and found no significant difference between using pitch and using slide transition boundaries. No ASR transcripts or length features were used. 477 References M. Ajmal, A. Kushki, and K. N. Plataniotis. 2007. Timecompression of speech in informational talks using spectral entropy. In Proceedings of the 8th International Workshop on Image Analysis for Multimedia Interactive Services (WIAMIS-07). B Arons. 1992. Techniques, perception, and applications of time-compressed speech. In American Voice I/O Society Conference, pages 169–177. B. Arons. 1994. Speech Skimmer: Interactively Skimming Recorded Speech. Ph.D. thesis, MIT Media Lab. E. Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543–565. F. Chen and M. Withgott. 1992. The use of emphasis to automatically summarize a spoken discourse. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 1, pages 229–232. E. Cherry and W. Taylor. 1954. Some further experiments on the recognition of speech, with one and two ears. Journal of the Acoustic Society of America, 26:554–559. H. Christensen, B. Kolluru, Y. Gotoh, and S. Renals. 2004. From text summarisation to style-specific summarisation for broadcast news. In Proceedings of the 26th European Conference on Information Retrieval (ECIR-2004), pages 223–237. M. Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006). L. He, E. Sanocki, A. Gupta, and J. Grudin. 1999. Autosummarization of audio-video presentations. In MULTIMEDIA ’99: Proceedings of the seventh ACM international conference on Multimedia (Part 1), pages 489–498. J. Hirschberg, S. Whittaker, D. Hindle, F. Pereira, and A. Singhal. 1999. Finding information in audio: A new paradigm for audio browsing and retrieval. In Proceedings of the ESCA/ETRW Workshop on Accessing Information in Spoken Audio, pages 117–122. M. Kameyama and I. Arima. 1994. Coping with aboutness complexity in information extraction from spoken dialogues. In Proceedings of the 3rd International Conference on Spoken Language Processing (ICSLP), pages 87–90. T. Kikuchi, S. Furui, and C. Hori. 2003. Two-stage automatic speech summarization by sentence extraction and compaction. 
In Proceedings of the ISCA/IEEE Workshop on Spontaneous Speech Processing and Recognition (SSPR), pages 207–210. S. Maskey and J. Hirschberg. 2005. Comparing lexial, acoustic/prosodic, discourse and structural features for speech summarization. In Proceedings of the 9th European Conference on Speech Communication and Technology (Eurospeech), pages 621–624. G. Murray, S. Renals, and J. Carletta. 2005. Extractive summarization of meeting recordings. In Proceedings of the 9th European Conference on Speech Communication and Technology (Eurospeech), pages 593–596. G. Murray, S. Renals, J. Moore, and J. Carletta. 2006. Incorporating speaker and discourse features into speech summarization. In Proceedings of the Human Language Technology Conference - Annual Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), pages 367–374. National Institute of Standards. 1997–2000. Proceedings of the Text REtrieval Conferences. http://trec.nist.gov/pubs.html. Abhishek Ranjan, Ravin Balakrishnan, and Mark Chignell. 2006. Searching in audio: the utility of transcripts, dichotic presentation, and time-compression. In CHI ’06: Proceedings of the SIGCHI conference on Human Factors in computing systems, pages 721–730, New York, NY, USA. ACM Press. S. Tucker and S. Whittaker. 2006. Time is of the essence: an evaluation of temporal compression algorithms. In CHI ’06: Proceedings of the SIGCHI conference on Human Factors in computing systems, pages 329–338, New York, NY, USA. ACM Press. R. Valenza, T. Robinson, M. Hickey, and R. Tucker. 1999. Summarization of spoken audio through information extraction. In Proceedings of the ESCA/ETRW Workshop on Accessing Information in Spoken Audio, pages 111–116. K. Zechner and A. Waibel. 2000. Minimizing word error rate in textual summaries of spoken language. In Proceedings of the 6th Applied Natural Language Processing Conference and the 1st Meeting of the North American Chapter of the Association for Computational Linguistics (ANLP/NAACL), pages 186–193. K. Zechner. 2001. Automatic Summarization of Spoken Dialogues in Unrestricted Domains. Ph.D. thesis, Carnegie Mellon University. X. Zhu and G. Penn. 2005. Evaluation of sentence selection for speech summarization. In Proceedings of the RANLP workshop on Crossing Barriers in Text Summarization Research, pages 39–45. 478
Proceedings of ACL-08: HLT, pages 479–487, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Intensional Summaries as Cooperative Responses in Dialogue: Automation and Evaluation Joseph Polifroni Nokia Research Center 3 Cambridge Center Cambridge, MA 02142, USA [email protected] Marilyn Walker Department of Computer Science University of Sheffield Sheffield, S1 4DP, United Kingdom [email protected] Abstract Despite its long history, and a great deal of research producing many useful algorithms and observations, research in cooperative response generation has had little impact on the recent commercialization of dialogue technologies, particularly within the spoken dialogue community. We hypothesize that a particular type of cooperative response, intensional summaries, are effective for when users are unfamiliar with the domain. We evaluate this hypothesis with two experiments with cruiser, a DS for in-car or mobile users to access restaurant information. First, we compare cruiser with a baseline system-initiative DS, and show that users prefer cruiser. Then, we experiment with four algorithms for constructing intensional summaries in cruiser, and show that two summary types are equally effective: summaries that maximize domain coverage and summaries that maximize utility with respect to a user model. 1 Introduction Research in cooperative response generation has had a long history and produced many useful algorithms and observations (Mays 1980; Pollack et al.1982; Joshi et al., 1986; Kalita et al., 1986) inter alia. However, it has had little impact on the recent commercialization of dialogue technologies, particularly within the spoken dialogue community. We believe that this lack of interest in cooperative response generation arises from two limitations of previous work: (1) There has been relatively little empirical research showing that cooperative responses lead U1: Tell me about restaurants in London. SI1: What kind of cuisine are you interested in? C1: I know of 596 restaurants in London. I know of 3 inexpensive vegetarian restaurants and 14 inexpensive Chinese restaurants. I also know of 4 inexpensive Greek restaurants. U2: Chinese. SI2: Do you have a price range in mind? C2: I know of 27 restaurants in London that serve Chinese food. There are 8 inexpensive Chinese restaurants in Chinatown and 2 inexpensive Chinese restaurants in Hampstead/Kilburn. I also know of 1 inexpensive Chinese restaurant in Soho. U3: How about a cheap one? SI3: What neighborhood would you like? C3: I know of 1 inexpensive Chinese restaurant in Hampstead/Kilburn with very good food quality and 1 in Bayswater with good food quality. I also know of 2 in Chinatown with medium food quality. Figure 1: Intensional summaries (C = cruiser) as compared with a system initiative (SI) strategy in the London restaurant domain. U = User to more natural, effective, or efficient dialogues (Litman et al.1998; Demberg and Moore, 2006); and (2) Previous work has hand-crafted such responses, or hand-annotated the database to support them (Kaplan, 1984; Kalita et al., 1986; Cholvy, 1990; Polifroni et al., 2003; Benamara, 2004), which has made it difficult to port and scale these algorithms. Moreover, we believe that there is an even greater need today for cooperative response generation. Larger and more complex datasets are daily being created on the Web, as information 479 is integrated across multiple sites and vendors. 
Many users will want to access this information from a mobile device and will have little knowledge of the domain. We hypothesize that these users will need cooperative responses that select and generalize the information provided. In particular, we hypothesize that a particular type of cooperative response, intensional summaries, when provided incrementally during a dialogue, are effective for large or complex domains, or when users are unfamiliar with the domain. These intensional summaries have the ability to describe the data that forms the knowledge base of the system, as well as relationships among the components of that database. We have implemented intensional summaries in cruiser (Cooperative Responses Using Intensional Summaries of Entities and Relations), a DS for in-car or mobile users to access restaurant information (Becker et al.2006; Weng et al.2005; Weng et al.2006). Figure 1 contrasts our proposed intensional summary strategy with the system initiative strategy used in many dialogue systems (Walker et al., 2002; VXML, 2007). Previous research on cooperative responses has noted that summary strategies should vary according to the context (Sparck Jones, 1993), and the interests and preferences of the user (Gaasterland et al., 1992; Carenini and Moore, 2000; Demberg and Moore, 2006). A number of proposals have emphasized the importance of making generalizations (Kaplan, 1984; Kalita et al., 1986; Joshi et al., 1986). In this paper we explore different methods for constructing intensional summaries and investigate their effectiveness. We present fully automated algorithms for constructing intensional summaries using knowledge discovery techniques (Acar, 2005; Lesh and Mitzenmacher, 2004; Han et al., 1996), and decisiontheoretic user models (Carenini and Moore, 2000). We first explain in Sec. 2 our fully automated, domain-independent algorithm for constructing intensional summaries. Then we evaluate our intensional summary strategy with two experiments. First, in Sec. 3, we test the hypothesis that users prefer summary responses in dialogue systems. We also test a refinement of that hypothesis, i.e., that users prefer summary type responses when they are unfamiliar with a domain. We compare several versions of cruiser with the system-initiative strategy, exemplified in Fig. 1, and show that users prefer cruiser. Then, in Sec. 4, we test four different algorithms for constructing intensional summaries, and show in Sec. 4.1 that two summary types are equally effective: summaries that maximize domain coverage and summaries that maximize utility with respect to a user model. We also show in Sec. 4.2 that we can predict with 68% accuracy which summary type to use, a significant improvement over the majority class baseline of 47%. We sum up in Sec. 5. 2 Intensional Summaries This section describes algorithms which result in the four types of intensional summaries shown in Fig. 2. We first define intensional summaries as follows. Let D be a domain comprised of a set R of database records {ri, ...rn}. Each record consists of a set of attributes {Aj, ..., An}, with associated values v: D(Ai)={vi,1, vi,2, ..., vi,n}. In a dialogue system, a constraint is a value introduced by a user with either an explicit or implied associated attribute. A constraint c is a function over records in D such that cj(R) returns a record r if r ⊆D and r : Ai = c. The set of all dialogue constraints {ci, ..., cn} is the context C at any point in the dialogue. 
The set of records R in D that satisfy C is the focal information: R is the extension of C in D. For example, the attribute cuisine in a restaurant domain has values such as "French" or "Italian". A user utterance instantiating a constraint on cuisine, e.g., "I'm interested in Chinese food", results in a set of records for restaurants serving Chinese food. Intensional summaries as shown in Fig. 2 are descriptions of the focal information that highlight particular subsets of the focal information and make generalizations over these subsets. The algorithm for constructing intensional summaries takes as input the focal information R, and consists of the following steps:
• Rank attributes in context C, using one of two ranking methods (Sec. 2.1);
• Select top-N attributes and construct clusters using selected attributes (Sec. 2.2);
• Score and select top-N clusters (Sec. 2.3);
• Construct frames for generation, perform aggregation and generate responses.

Figure 2: Four intensional summary types for a task specifying restaurants with Indian cuisine in London.
Ref-Sing (Refiner ranking, 3 attributes, single-value clusters, scored by size): I know of 35 restaurants in London serving Indian food. All price ranges are represented. Some of the neighborhoods represented are Mayfair, Soho, and Chelsea. Some of the nearby tube stations are Green Park, South Kensington and Piccadilly Circus.
Ref-Assoc (Refiner ranking, 2 attributes, associative clusters, scored by size): I know of 35 restaurants in London serving Indian food. There are 3 medium-priced restaurants in Mayfair and 3 inexpensive ones in Soho. There are also 2 expensive ones in Chelsea.
UM-Sing (user model ranking, 3 attributes, single-value clusters, scored by utility): I know of 35 restaurants in London serving Indian food. There are 6 with good food quality. There are also 12 inexpensive restaurants and 4 with good service quality.
UM-Assoc (user model ranking, 2 attributes, associative clusters, scored by utility): I know of 35 restaurants in London serving Indian food. There are 4 medium-priced restaurants with good food quality and 10 with medium food quality. There are also 4 that are inexpensive but have poor food quality.

2.1 Attribute Ranking

We explore two candidates for attribute ranking: User model and Refiner.

User model: The first algorithm utilizes decision-theoretic user models to provide an attribute ranking specific to each user (Carenini and Moore, 2000). The database contains 596 restaurants in London, with up to 19 attributes and their values. To utilize a user model, we first elicit users' ranked preferences for domain attributes. Attributes that are unique across all entities, or missing for many entities, are automatically excluded, leaving six attributes: cuisine, decor quality, food quality, price, service, and neighborhood. These are ranked using the SMARTER procedure (Edwards and Barron, 1994). Rankings are converted to weights (w) for each attribute, with a formula which guarantees that the weights sum to 1:

$w_k = \frac{1}{K} \sum_{i=k}^{K} \frac{1}{i}$

where K equals the number of attributes in the ranking. The absolute rankings are used to select attributes. The weights are also used for cluster scoring in Sec. 2.3. User model ranking is used to produce UM-Sing and UM-Assoc in Fig. 2.
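As a small worked example of the weight formula above, the sketch below computes the SMARTER (rank-order) weights for the six attributes. The particular ordering used is a hypothetical user ranking that mirrors the UM-Assoc ranking mentioned in Sec. 2.4 (price, food, cuisine, neighborhood, service, decor); the code simply instantiates the given formula.

```python
# A minimal sketch of the SMARTER rank-order weights defined above:
# w_k = (1/K) * sum_{i=k}^{K} 1/i for the attribute ranked k-th out of K.
# The attribute ordering below is one hypothetical user's ranking.
def smarter_weights(ranked_attributes):
    K = len(ranked_attributes)
    return {a: sum(1.0 / i for i in range(k, K + 1)) / K
            for k, a in enumerate(ranked_attributes, start=1)}

weights = smarter_weights(
    ["price", "food quality", "cuisine", "neighborhood", "service", "decor quality"])
for attr, w in weights.items():
    print(f"{attr:15s} {w:.3f}")
print("sum =", round(sum(weights.values()), 3))   # the weights sum to 1
```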
Refiner method: The second attribute ranking method is based on the Refiner algorithm for summary construction (Polifroni et al., 2003). The Refiner returns values for every attribute in the focal information in frames ordered by frequency. If the counts for the top-N (typically 4) values for a particular attribute, e.g., cuisine, exceed M% (typically 80%) of the total counts for all values, then that attribute is selected. For example, 82% of Indian restaurants in the London database are in the neighborhoods Mayfair, Soho, and Chelsea. Neighborhood would, therefore, be chosen as an attribute to speak about for Indian restaurants. The thresholds M and N in the original Refiner were set a priori, so it was possible that no attribute met or exceeded the thresholds for a particular subset of the data. In addition, some entities could have many unknown values for some attributes. Thus, to ensure that all user queries result in some summary response, we modify the Refiner method to include a ranking function for attributes. This function favors attributes that contain fewer unknown values but always returns a ranked set of attributes. Refiner ranking is used to produce Ref-Sing and Ref-Assoc in Fig. 2.

2.2 Subset Clustering

Because the focal information is typically too large to be enumerated, a second parameter attempts to find interesting clusters representing subsets of the focal information to use for the content of intensional summaries. We assume that the coverage of the summary is important, i.e., the larger the cluster, the more general the summary. The simplest algorithm for producing clusters utilizes a specified number of the top-ranked attributes to define a cluster. Single attributes, as in the Ref-Sing and UM-Sing examples in Fig. 2, typically produce large clusters. Thus one algorithm uses the top three attributes to produce clusters, defined by either a single value (e.g., UM-Sing) or by the set of values that comprise a significant portion of the total (e.g., Ref-Sing).

(Figure 3: A partial tree for Indian restaurants in London, using price range as the predictor variable and food quality as the dependent variable; the tree branches on price range (medium, inexpensive) and then on food quality, with leaf clusters of size 4, 10, and 4. The numbers in parentheses are the size of the clusters described by the path from the root.)

However, we hypothesize that more informative and useful intensional summaries might be constructed from clusters of discovered associations between attributes. For example, associations between price and cuisine produce summaries such as "There are 49 medium-priced restaurants that serve Italian cuisine." We apply C4.5 decision tree induction to compute associations among attributes (Kamber et al., 1997; Quinlan, 1993). Each attribute in turn is designated as the dependent variable, with other attributes used as predictors. Thus, each branch in the tree represents a cluster described by the attribute/value pairs that predict the leaf node. Fig. 3 shows clusters of different sizes induced from Indian restaurants in London. The cluster size is determined by the number of attributes used in tree induction. With two attributes, the average cluster size at the leaf node is 60.4, but drops to 4.2 with three attributes. Thus, we use two attributes to produce associative clusters, as shown in Fig. 2 (i.e., the Ref-Assoc and UM-Assoc responses), to favor larger clusters.

2.3 Cluster Scoring

The final parameter scores the clusters. One scoring metric is based on cluster size. Single attributes produce large clusters, while association rules produce smaller clusters. The second scoring method selects clusters of high utility according to a user model. We first assign scalar values to the six ranked attributes (Sec. 2.1), using clustering methods as described in Polifroni et al. (2003). The weights from the user model and the scalar values for the attributes in the user model yield an overall utility U for a cluster h, similar to utilities as calculated for individual entities (Edwards and Barron, 1994; Carenini and Moore, 2000):

$U_h = \sum_{k=1}^{K} w_k(x_{hk})$

We use cluster size scoring with Refiner ranking and utility scoring with user model ranking. For conciseness, all intensional summaries are based on the three highest scoring clusters.
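The sketch below illustrates the utility-based cluster scoring just defined: each cluster's scalar attribute values are combined with the user-model weights as U_h = Σ_k w_k(x_hk), and the three highest-scoring clusters are kept. The weights repeat the SMARTER sketch above, and the clusters and their scalar attribute values are invented for illustration; the paper derives the scalar values from the data rather than by hand.

```python
# Sketch of cluster utility scoring: U_h = sum_k w_k * x_hk, where x_hk is the
# scalar value of attribute k for cluster h.  Weights come from the user model
# (see the SMARTER sketch above); clusters and scalar values are made up.
def cluster_utility(cluster_values, weights):
    return sum(weights.get(attr, 0.0) * x for attr, x in cluster_values.items())

weights = {"price": 0.408, "food quality": 0.242, "cuisine": 0.158,
           "neighborhood": 0.103, "service": 0.061, "decor quality": 0.028}

clusters = {
    "medium-priced, good food (4 restaurants)":     {"price": 0.6, "food quality": 0.8},
    "medium-priced, medium food (10 restaurants)":  {"price": 0.6, "food quality": 0.5},
    "inexpensive, poor food (4 restaurants)":       {"price": 1.0, "food quality": 0.2},
}
ranked = sorted(clusters, key=lambda h: cluster_utility(clusters[h], weights),
                reverse=True)
for h in ranked[:3]:   # keep the three highest-scoring clusters, as in the paper
    print(f"{cluster_utility(clusters[h], weights):.3f}  {h}")
```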
2.4 Summary

The algorithms for attribute selection and cluster generation and scoring yield the four summary types in Fig. 2. Summary Ref-Sing is constructed using (1) the Refiner attribute ranking; and (2) no association rules. (The quantifier (e.g., some, many) is based on the coverage.) Summary Ref-Assoc is constructed using (1) the Refiner attribute ranking; and (2) association rules for clustering. Summary UM-Sing is constructed using (1) a user model with ranking as above; and (2) no association rules. Summary UM-Assoc is constructed using (1) a user model with ranking of price, food, cuisine, location, service, and decor; and (2) association rules.

3 Experiment One

This experiment asks whether subjects prefer intensional summaries to a baseline system-initiative strategy. We compare two types of intensional summary responses from Fig. 2, Ref-Assoc and UM-Assoc, to system-initiative. The 16 experimental subjects are asked to assume three personas, in random order, chosen to typify a range of user types, as in (Demberg and Moore, 2006). Subjects were asked to read the descriptions of each persona, which were available for reference, via a link, throughout the experiment. The first persona is the Londoner, representing someone who knows London and its restaurants quite well. The Londoner persona typically knows the specific information s/he is looking for. We predict that the system-initiative strategy in Fig. 1 will be preferred by this persona, since our hypothesis is that users prefer intensional summaries when they are unfamiliar with the domain. The second persona is the Generic tourist (GT), who doesn't know London well and does not have strong preferences when it comes to selecting a restaurant. The GT may want to browse the domain, i.e., to learn about the structure of the domain and retrieve information by recognition rather than specification (Belkin et al., 1994). We hypothesize that the Ref-Assoc strategy in Fig. 2 will best fit the GT, since the corresponding clusters have good domain coverage. The third persona is the UM tourist (UMT). This persona may also want to browse the database, since they are unfamiliar with London. However, this user has expressed preferences about restaurants through a previous interaction. The UMT in our experiment is concerned with price and food quality (in that order), and prefers restaurants in Central London. After location, the UMT is most concerned with cuisine type. The intensional summary labelled UM-Assoc in Fig. 2 is based on this user model, and is computed from discovered associations among preferred attributes. In the role of each persona, subjects rate responses on a Likert scale from 1 to 7, for each of four dialogues, each containing between three and four query/response pairs. We do not allow tie votes among the three choices.

3.1 Experimental results

The primary hypothesis of this work is that users prefer summary responses in dialogue systems, without reference to the context.
To test this hypothesis, we first compare Londoner responses (average rating 4.64) to the most highly rated of the two intensional summaries (average rating 5.29) for each query/response pair. This difference is significant (df = 263, p < .0001), confirming that over users prefer an intensional summary strategy to a system-initiative strategy. Table 1 shows ratings as a function of persona and response type. Overall, subjects preferred the responses tailored to their persona. The Londoner persona signifcantly preferred Londoner over UMT responses (df = 95, p < .05), but not more than GT responses. This confirms our hypothesis that users prefer incremental summaries in dialogue systems. Further, it disconfirms our refinement of that hypothesis, that users prefer summaries only when they are unfamiliar with the domain. The fact that no difference was found between Londoner and GT responses indicates that GT responses contain information that is perceived as useful even when users are familiar with the domain. The Generic Tourist persona also preferred the GT responses, significantly more than the Londoner responses (df = 95, p < .05), but not significantly more than the UMT responses. We had hypothesized that the optimal summary type for users completely new to a domain would describe attributes that have high coverage of the focal information. This hypothesis is disconfirmed by these findings, that indicate that user 483 Response Type Persona London GT UMT London 5.02 4.55 4.32 GT 4.14 4.67 4.39 UM tourist 3.68 4.86 5.23 Table 1: Ratings by persona assumed. London = Londoner persona, GT = Generic tourist, UMT = User Model tourist model information is helpful when constructing summaries for any user interested in browsing. Finally, the UM Tourist persona overwhelmingly preferred UMT responses over Londoner responses (df = 95, p < .0001). However, UMT responses were not significantly preferred to GT responses. This confirms our hypothesis that users prefer summary responses when they are unfamiliar with the domain, but disconfirms the hypothesis that users will prefer summaries based on a user model. The results for both the Generic Tourist and the UM Tourist show that both types of intensional summaries contain useful information. 4 Experiment Two The first experiment shows that users prefer intensional summaries; the purpose of the second experiment is to investigate what makes a good intensional summary. We test the different ways of constructing such summaries described in Sec. 2, and illustrated in Fig. 2. Experimental subjects were 18 students whose user models were collected as described in Sec. 2.3. For each user, the four summary types were constructed for eight tasks in the London restaurant domain, where a task is defined by a query instantiating a particular attribute/value combination in the domain (e.g., I’m interested in restaurants in Soho). The tasks were selected to utilize a range of attributes. The focal information for four of the tasks (large set tasks) were larger than 100 entities, while the focal information for the other four tasks were smaller than 100 entities (small set tasks). Each task was presented to the subject on its own web page with the four intensional summaries presented as text on the web page. 
Each subject was asked to carefully read and rate each alUser model Refiner Association rules 3.4 2.9 Single attributes 3.0 3.4 User model Refiner Small dataset 3.1 3.4 Large dataset 3.2 2.9 Table 2: User ratings showing the interaction between clustering method, attribute ranking, and dataset size in summaries. ternative summary response on a Likert scale of 1 . . . 5 in response to the statement, This response contains information I would find useful when choosing a restaurant. The subjects were also asked to indicate which response they considered the best and the worst, and to provide free-text comments about each response. 4.1 Hypothesis Testing Results We performed an analysis of variance with attribute ranking (user model vs. refiner), clustering method (association rules vs. single attributes), and set size (large vs. small) as independent variables and user ratings as the dependent variable. There was a main effect for set size (df = 1, f = 6.7, p < .01), with summaries describing small datasets (3.3 average rating) rated higher than those for large datasets (3.1 average rating). There was also a significant interaction between attribute ranking and clustering method (df = 1, f = 26.8, p < .001). Table 2 shows ratings for the four summary types. There are no differences between the two highest rated summaries: Ref-Sing (average 3.4) and UMAssoc (average 3.4). See Fig. 2. This suggests that discovered associations provide useful content for intensional summaries, but only for attributes ranked highly by the user model. In addition, there was another significant interaction between ranking method and setsize (df = 1, f = 11.7, p < .001). The ratings at the bottom of Table 2 shows that overall, users rate summaries of small datasets higher, but users rate summaries higher for large datasets when a user model is used. With small datasets, users prefer summaries that don’t utilize user model information. 484 We also calculate the average utility for each response (Sec. 2.1) and find a strong correlation between the rating and its utility (p < .005). When considering this correlation, it is important to remember that utility can be calculated for all responses, and there are cases where the Refiner responses have high utility scores. 4.2 Summary Type Prediction Our experimental data suggest that characteristics associated with the set of restaurants being described are important, as well as utility information derived from application of a a user model. The performance of a classifier in predicting summary type will indicate if trends we discovered among user judgements carry over to an automated means of selecting which response type to use in a given context. In a final experiment, for each task, we use the highest rated summary as a class to be predicted using C4.5 (Quinlan, 1993). Thus we have 4 classes: Ref-Sing, Ref-Assoc, UM-Sing, and UM-Assoc. We derive two types of feature sets from the responses: features derived from each user model and features derived from attributes of the query/response pair itself. The five feature sets for the user model are: • umInfo: 6 features for the rankings for each attribute for each user’s model, e.g. a summary whose user had rated food quality most highly would receive a ’5’ for the feature food quality; • avgUtility: 4 features representing an average utility score for each alternative summary response, based on its clusters (Sec. 2.3). 
• hiUtility: 4 features representing the highest utility score among the three clusters selected for each response; • loUtility: 4 features representing the lowest utility score among the three clusters selected for each response; • allUtility: 12 features consisting of the high, low, and average utility scores from the previous three feature sets. Three feature sets are derived from the query and response pair: • numRests: 4 features for the coverage of each response. For summary Ref-Assoc in Table 2, numRests is 43; for summary UMAssoc, numrests is 53.; Sys Feature Sets Acc(%) S1 allUtility 47.1 S2 task, numRests 51.5 S3 allUtility,umInfo 62.3∗ S4 allUtility,umInfo,numRests,task 63.2∗ S5 avgUtility,umInfo,numRests,task62.5∗ S6 hiUtility,umInfo,numRests,task 66.9∗ S7 hiUtility,umInfo,numRests,task,dataset 68.4∗ S8 loUtility,umInfo,numRests,task 60.3∗ S9 hiUtility,umInfo 64.0∗ Table 3: Accuracy of feature sets for predicting preferred summary type. ∗= p < .05 as compared to the Baseline (S1)). • task: A feature for the type of constraint used to generate the focal information (e.g., cuisine, price range). • dataset: A feature for the size of the focal information subset (i.e., big, small), for values greater and less than 100. Table 3 shows the relative strengths of the two types of features on classification accuracy. The majority class baseline (System S1) is 47.1%. The S2 system uses only features associated with the query/response pair, and its accuracy (51.5%) is not significantly higher than the baseline. User model features perform better than the baseline (S3 in Table 3), and combining features from the query/response pair and the user model significantly increases accuracy in all cases. We experimented with using all the utility scores (S4), as well as with using just the average (S5), the high (S6), and the low (S8). The best performance (68.4%)is for the (S7) system combination of features. The classification rules in Table 4 for the best system (S7) suggests some bases for users’ decisions. The first rule is very simple, simply stating that, if the highest utility value of the RefSing response is lower than a particular threshold, then use the UM-Assoc response. In other words, if one of the two highest scoring response types has a low utility, use the other. The second rule in Table 4 shows the effect that the number of restaurants in the response has on summary choice. In this rule, the RefSing response is preferred when the highest util485 IF (HighestUtility: Ref-Sing) < 0.18 THEN USE UM-Assoc ---------------------------------------IF (HighestUtility: Ref-Assoc) > 0.18) && (NumRestaurants: UM-Assoc < 400) && (HighestUtility: UM-Assoc < .47) THEN USE Ref-Sing ---------------------------------------IF (NumRestaurants: UM-Assoc < 400) && (HighestUtility: UM-Assoc < .57) && (HighestUtility: Ref-Assoc > .2) THEN USE Ref-Assoc Table 4: Example classification rules from System 7 in Table 3. ity value of that response is over a particular threshold. The final rule in Table 4 predicts Ref-Assoc, the lowest overall scoring response type. When the number of restaurants accounted for by UM-Assoc, as well as the highest utility for that response, are both below a certain threshold, and the highest utility for the Ref-Assoc response is above a certain threshold, then use Ref-Assoc. The utility for any summary type using the Refiner method is usually lower than those using the user model, since overall utility is not taken into account in summary construction. 
The utility for any summary type using the Refiner method is usually lower than for those using the user model, since overall utility is not taken into account in summary construction. However, even low utility summaries may mention attributes the user finds important. That, combined with higher coverage, could make that summary type preferable over one constructed to maximize user model utility.

5 Conclusion

We first compared intensional summary cooperative responses against a system initiative dialogue strategy in cruiser. Subjects assumed three "personas": a native Londoner, a tourist who was interacting with the system for the first time (GT), or a tourist for whom the system has a user model (UMT). The personas were designed to reflect differing ends of the spectra defined by Belkin to characterize information-seeking strategies (Belkin et al., 1994). There was a significant preference for intensional summaries across all personas, but especially when the personas were unfamiliar with the domain. This preference indicates that the benefits of intensional summaries outweigh the increase in verbosity.

We then tested four algorithms for summary construction. Results show that intensional summaries based on a user model with association rules, or on the Refiner method (Polifroni et al., 2003), are equally effective. While (Demberg and Moore, 2006) found that their user model stepwise refinement (UMSR) method was superior to the Refiner method, they also found many situations (70 out of 190) in which the Refiner method was preferred. Our experiment was structured differently, but it suggests that, in certain circumstances, or within certain domains, users may wish to hear about choices based on an analysis of focal information, irrespective of user preferences.

Our intensional summary algorithms automatically construct summaries from a database, along with user models collected via a domain-independent method; thus we believe that the methods described here are domain-independent. Furthermore, in tests to determine whether a classifier can predict the best summary type to use in a given context, we achieved an accuracy of 68% as compared to a majority class baseline of 47%, using dialogue context features. Both of these results point hopefully towards a different way of automating dialogue design, one based on a combination of user modelling and an analysis of contextual information. In future work we hope to test these algorithms in other domains, and show that intensional summaries can not only be automatically derived but also lead to reduced task times and increased task success.

References

A. C. Acar and A. Motro. 2005. Intensional Encapsulations of Database Subsets via Genetic Programming. Proc. 16th Int. Conf. on Database and Expert Systems Applications. Copenhagen.

Tilman Becker, Nate Blaylock, Ciprian Gerstenberger, Ivana Kruijff-Korbayová, Andreas Korthauer, Manfred Pinkal, Michael Pitz, Peter Poller, and Jan Schehl. Natural and intuitive multimodal dialogue for in-car applications: The SAMMIE system. In ECAI, pages 612–616, 2006.

N. J. Belkin, C. Cool, A. Stein and U. Thiel. 1994. Cases, Scripts, and Information Seeking Strategies: On the Design of Interactive Information Retrieval Systems. Expert Systems and Applications, 9(3):379–395.

F. Benamara. 2004. Generating Intensional Answers in Intelligent Question Answering Systems. Proc. 3rd Int. Conf. on Natural Language Generation INLG.

G. Carenini and J. Moore. 2000. A Strategy for Generating Evaluative Arguments. Proc. First Int'l Conf. on Natural Language Generation. 1307–1314.

Brant Cheikes and Bonnie Webber. Elements of a computational model of cooperative response generation. In Proc.
Speech and Natural Language Workshop, pages 216–220, Philadelphia, 1989.

X. Chen and Y-F. Wu. 2006. Personalized Knowledge Discovery: Mining Novel Association Rules from Text. Proc., SIAM Conference on Data Mining.

L. Cholvy. 1990. Answering Queries Addressed to a Rule Base. Revue d'Intelligence Artificielle. 1(1):79–98.

V. Demberg and J. Moore. 2006. Information Presentation in Spoken Dialogue Systems. Proc. 11th Conf. EACL.

W. Edwards and F. Hutton Barron. 1994. Smarts and smarter: Improved simple methods for multiattribute utility measurement. Organizational Behavior and Human Decision Processes. 60:306–325.

T. Gaasterland and P. Godfrey and J. Minker. 1992. An Overview of Cooperative Answering. Journal of Intelligent Information Systems. 1(2):387–416.

J. Han, Y. Huang and N. Cercone. 1996. Intelligent Query Answering by Knowledge Discovery Techniques. IEEE Transactions on Knowledge and Data Engineering. 8(3):373–390.

Aravind Joshi, Bonnie Webber, and Ralph M. Weischedel. Living up to expectations: computing expert responses. In HLT '86: Proceedings of the workshop on Strategic computing natural language, pages 179–189, Morristown, NJ, USA, 1986. Association for Computational Linguistics.

J. Kalita and M. J. Colburn and G. McCalla. 1984. A response to the need for summary responses. COLING-84. 432–436.

M. Kamber, L. Winstone, W. Gong, S. Cheng and J. Han. 1997. Generalization and decision tree induction: efficient classification in data mining. Proc. 7th Int. Workshop on Research Issues in Data Engineering (RIDE '97). 111–121.

S. J. Kaplan. 1984. Designing a Portable Natural Language Database Query System. ACM Transactions on Database Systems, 9(1):1–19.

N. Lesh and M. Mitzenmacher. Interactive data summarization: an example application. Proc., Working Conference on Advanced Visual Interfaces. Gallipoli, Italy. pages 183–187.

Diane J. Litman, Shimei Pan, and Marilyn A. Walker. Evaluating response strategies in a web-based spoken dialogue agent. In COLING-ACL, pages 780–786, 1998.

J. Polifroni, G. Chung, and S. Seneff. 2003. Towards the Automatic Generation of Mixed-Initiative Dialogue Systems from Web Content. Proc. Eurospeech. 2721–2724.

E. Mays. Correcting misconceptions about database structure. In Proceedings of the CSCSI '80, 1980.

Martha E. Pollack, Julia Hirschberg, and Bonnie L. Webber. User participation in the reasoning processes of expert systems. In AAAI, pages 358–361, 1982.

J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann. San Mateo, CA.

K. Sparck Jones. 1998. Automatic summarising: factors and directions. I. Mani and M. Maybury, eds. Advances in Automatic Text Summarization. MIT Press.

M. Walker, A. Rudnicky, J. Aberdeen, E. Bratt, J. Garofolo, H. Hastie, A. Le, B. Pellom, A. Potamianos, R. Passonneau, R. Prasad, S. Roukos, G. Sanders, S. Seneff and D. Stallard. 2002. DARPA Communicator Evaluation: Progress from 2000 to 2001. Proc. ICSLP 2002.

F. Weng, L. Cavedon, B. Raghunathan, D. Mirkovic, H. Cheng, H. Schmidt, H. Bratt, R. Mishra, S. Peters, L. Zhao, S. Upson, E. Shriberg, and C. Bergmann. Developing a conversational dialogue system for cognitively overloaded drivers. In Proceedings, International Congress on Intelligent Transportation Systems, 2005.

F. Weng, S. Varges, B. Raghunathan, F. Ratiu, H. Pon-Barry, B. Lathrop, Q. Zhang, T. Scheideck, H. Bratt, K. Xu, M. Purver, R. Mishra, M. Raya, S. Peters, Y. Meng, L. Cavedon, and L. Shriberg. Chat: A conversational helper for automotive tasks.
In Proceedings, Interspeech: International Conference on Spoken Language Processing, 2006.

Voxeo. VoiceXML Development Guide. http://voicexml.org.
Proceedings of ACL-08: HLT, pages 488–495, Columbus, Ohio, USA, June 2008. © 2008 Association for Computational Linguistics

Word Clustering and Word Selection based Feature Reduction for MaxEnt based Hindi NER

Sujan Kumar Saha
Indian Institute of Technology
Kharagpur, West Bengal, India - 721302
[email protected]

Pabitra Mitra
Indian Institute of Technology
Kharagpur, West Bengal, India - 721302
[email protected]

Sudeshna Sarkar
Indian Institute of Technology
Kharagpur, West Bengal, India - 721302
[email protected]

Abstract

Statistical machine learning methods are employed to train a Named Entity Recognizer from annotated data. Methods like Maximum Entropy and Conditional Random Fields make use of features for the training purpose. These methods tend to overfit when the available training corpus is limited, especially if the number of features is large or the number of values for a feature is large. To overcome this we propose two techniques for feature reduction based on word clustering and selection. A number of word similarity measures are proposed for clustering words for the Named Entity Recognition task. A few corpus based statistical measures are used for important word selection. The feature reduction techniques lead to a substantial performance improvement over the baseline Maximum Entropy technique.

1 Introduction

Named Entity Recognition (NER) involves locating and classifying the names in a text. NER is an important task, having applications in information extraction, question answering, machine translation and in most other Natural Language Processing (NLP) applications. NER systems have been developed for English and a few other languages with high accuracy. These belong to two main categories, based on machine learning (Bikel et al., 1997; Borthwick, 1999; McCallum and Li, 2003) and on language or domain specific rules (Grishman, 1995; Wakao et al., 1996).

In English, the names are usually capitalized, which is an important clue for identifying a name. Absence of capitalization makes the Hindi NER task difficult. Also, person names are more diverse in Indian languages, many common words being used as names.

A pioneering work on Hindi NER is by Li and McCallum (2003), where they used Conditional Random Fields (CRF) and feature induction to automatically construct only the features that are important for recognition. In an effort to reduce overfitting, they use a combination of a Gaussian prior and early-stopping. In their Maximum Entropy (MaxEnt) based approach for Hindi NER development, Saha et al. (2008) also observed that the performance of the MaxEnt based model often decreases when a huge number of features is used in the model. This is due to overfitting, which is a serious problem in most of the NLP tasks in resource poor languages where annotated data is scarce.

This paper is a study on the effectiveness of word clustering and selection as feature reduction techniques for MaxEnt based NER. For clustering we use a number of word similarities, like cosine similarity among words and co-occurrence, along with the k-means clustering algorithm. The clusters are then used as features instead of words. For important word selection we use corpus based statistical measurements to find the importance of the words in the NER task. A significant performance improvement over baseline MaxEnt was observed after using the above feature reduction techniques.

The paper is organized as follows. The MaxEnt based NER system is described in Section 2.
Various approaches for word clustering are discussed in Section 3. The next section presents the procedure for selecting the important words. In Section 5 experimental results and related discussions are given. Finally Section 6 concludes the paper.

2 Maximum Entropy Based Model for Hindi NER

The Maximum Entropy (MaxEnt) principle is a commonly used technique which provides the probability of belongingness of a token to a class. MaxEnt computes the probability p(o|h) for any o from the space of all possible outcomes O, and for every h from the space of all possible histories H. In NER, history can be viewed as all information derivable from the training corpus relative to the current token. The computation of the probability (p(o|h)) of an outcome for a token in MaxEnt depends on a set of features that are helpful in making predictions about the outcome. The features may be binary-valued or multi-valued. Given a set of features and a training corpus, the MaxEnt estimation process produces a model in which every feature f_i has a weight α_i. We can compute the conditional probability as (Berger et al., 1996):

p(o|h) = \frac{1}{Z(h)} \prod_i \alpha_i^{f_i(h,o)}    (1)

Z(h) = \sum_o \prod_i \alpha_i^{f_i(h,o)}    (2)

The conditional probability of the outcome is the product of the weights of all active features, normalized over the products of all the features. For our development we have used a Java based open-nlp MaxEnt toolkit.1 A beam search algorithm is used to get the most probable class from the probabilities.

1 http://sourceforge.net/projects/maxent/

2.1 Training Corpus

The training data for the Hindi NER task is composed of about 243K words which is collected from the popular daily Hindi newspaper "Dainik Jagaran". This corpus has been manually annotated and contains about 16,491 Named Entities (NEs). In this study we have considered 4 types of NEs: Person (Per), Location (Loc), Organization (Org) and Date (Dat). To recognize entity boundaries each name class N has 4 types of labels: N Begin, N Continue, N End and N Unique. For example, Kharagpur is annotated as Loc Unique and Atal Bihari Vajpeyi is annotated as Per Begin Per Continue Per End. Hence, there are a total of 17 classes including one class for not-name. The corpus contains 6298 person, 4696 location, 3652 organization and 1845 date entities.

Table 1: Features used in the MaxEnt based Hindi NER system

  Type                Features
  Word                wi, wi−1, wi−2, wi+1, wi+2
  NE Tag              ti−1, ti−2
  Digit information   Contains digit, Only digit, Four digit, Numerical word
  Affix information   Fixed length suffix, Suffix list, Fixed length prefix
  POS information     POS of words, Coarse-grained POS, POS based binary features

2.2 Feature Description

We have identified a number of candidate features for the Hindi NER task. Several experiments were conducted with the identified features, individually and in combination. Some of the features are mentioned below. They are summarized in Table 1.

Static Word Feature: Recognition of NEs is highly dependent on contexts. So the surrounding words of a particular word (wi) are used as features. During our experiments different combinations of the previous 3 words (wi−3...wi−1) to the next 3 words (wi+1...wi+3) are treated as features. This is represented by L binary features where L is the size of the lexicon.

Dynamic NE tag: NE tags of the previous words (ti−m...ti−1) are used as features. During decoding, the value of this feature for a word (wi) is obtained only after the computation of the NE tag for the previous word (wi−1).
Digit Information: If a word (wi) contains digit(s) then the feature ContainsDigit is set to 1. This feature is used with some modifications also. OnlyDigit, which is set to 1 if the word contains only digits, 4Digit, which is set to 1 if the word contains only 4 digits, etc. are some modifications of the feature which are helpful.

Numerical Word: For a word (wi), if it is a numerical word, i.e. a word denoting a number (e.g. eka2 (one), do (two), tina (three) etc.), then the feature NumWord is set to 1.

2 All Hindi words are written in italics using the 'Itrans' transliteration.

Word Suffix: Word suffix information is helpful to identify the NEs. Two types of suffix features have been used. Firstly, a fixed length word suffix (set of characters occurring at the end of the word) of the current and surrounding words is used as a feature. Secondly, we compiled a list of common suffixes of place names in Hindi. For example, pura, bAda, nagara etc. are location suffixes. We used a binary feature corresponding to the list - whether a given word has a suffix from the list.

Word Prefix: Prefix information of a word may also be helpful in identifying whether it is a NE. A fixed length word prefix (set of characters occurring at the beginning of the word) of the current and surrounding words is treated as a feature. Lists of important prefixes, which are used frequently in the NEs, are also effective.

Parts-of-Speech (POS) Information: The POS of the current word and the surrounding words are used as features for NER. We have used a Hindi POS tagger developed at IIT Kharagpur, India, which has an accuracy of about 90%. We have used the POS values of the current and surrounding words as features. We realized that the detailed POS tagging is not very relevant. Since NEs are noun phrases, the noun tag is very relevant. Further, the postposition following a name may give a clue to the NE type. So we decided to use a coarse-grained tagset with only three tags - nominal (Nom), postposition (PSP) and other (O). The POS information is also used by defining several binary features. An example is the NomPSP binary feature. The value of this feature is defined to be 1 if the current word is nominal and the next word is a PSP.

2.3 Performance of Hindi NER using MaxEnt Method

The performance of the MaxEnt based Hindi NER using the above mentioned features is reported here as a baseline. We have evaluated the system using a blind test corpus of 25K words. The test corpus contains 521 person, 728 location, 262 organization and 236 date entities.

Table 2: F-values for different features in the MaxEnt based Hindi NER system

  Feature Id  Feature                                                          Per    Loc    Org    Dat    Total
  F1          wi, wi−1, wi+1                                                   61.36  68.29  52.12  88.9   67.26
  F2          wi, wi−1, wi−2, wi+1, wi+2                                       64.10  67.81  58     92.30  69.09
  F3          wi, wi−1, wi−2, wi−3, wi+1, wi+2, wi+3                           60.42  67.81  51.48  90.18  66.84
  F4          wi, wi−1, wi−2, wi+1, wi+2, ti−1, ti−2, Suffix                   66.67  73.36  58.58  89.09  71.2
  F5          wi, wi−1, wi+1, ti−1, Suffix                                     69.65  75.8   59.31  89.09  73.42
  F6          wi, wi−1, wi+1, ti−1, Prefix                                     66.67  71     58.58  87.8   70.02
  F7          wi, wi−1, wi+1, ti−1, Prefix, Suffix                             70.61  71     59.31  89.09  72.5
  F8          wi, wi−1, wi+1, ti−1, Suffix, Digit                              70.61  75.8   60.54  93.8   74.26
  F9          wi, wi−1, wi+1, ti−1, POS (28 tags)                              64.25  71     60.54  89.09  70.39
  F10         wi, wi−1, wi+1, ti−1, POS (coarse grained)                       69.65  75.8   59.31  92.82  74.16
  F11         wi, wi−1, wi+1, Ti−1, Suffix, Digit, NomPSP                      72.26  78.6   61.36  92.82  75.6
  F12         wi, wi−1, wi+1, wi−2, wi+2, Ti−1, Prefix, Suffix, Digit, NomPSP  65.26  78.01  52.12  93.33  72.65
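As a concrete illustration of the features summarised in Table 1 and described in Section 2.2, the sketch below extracts window, dynamic-tag, affix and digit features for one token. It is a minimal sketch under stated assumptions, not the exact implementation: the feature names, the sample location-suffix list and the token/tag interfaces are all illustrative.

```python
# Minimal sketch of the feature extraction of Section 2.2 (word window,
# dynamic NE tags, affixes, digit information).  Names are illustrative.

LOC_SUFFIXES = {"pura", "bAda", "nagara"}   # assumed sample of the compiled list

def extract_features(tokens, i, prev_tags, window=2, affix_len=3):
    w = tokens[i]
    feats = {"w0=" + w}
    # surrounding words w_{i-window} .. w_{i+window}
    for d in range(1, window + 1):
        if i - d >= 0:
            feats.add(f"w-{d}=" + tokens[i - d])
        if i + d < len(tokens):
            feats.add(f"w+{d}=" + tokens[i + d])
    # dynamic NE tags of the preceding words (available during decoding)
    for d, tag in enumerate(prev_tags[-2:][::-1], start=1):
        feats.add(f"t-{d}=" + tag)
    # affix and digit information
    feats.add("suf=" + w[-affix_len:])
    feats.add("pre=" + w[:affix_len])
    if any(ch.isdigit() for ch in w):
        feats.add("ContainsDigit")
    if w.isdigit() and len(w) == 4:
        feats.add("4Digit")
    if any(w.endswith(s) for s in LOC_SUFFIXES):
        feats.add("LocSuffix")
    return feats
```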
The accuracies are measured in terms of the f-measure, which is the weighted harmonic mean of precision and recall. Precision is the fraction of the correct annotations and recall is the fraction of the total NEs that are successfully annotated. The general formula for measuring the f-measure or f-value is

F_\beta = \frac{(1 + \beta^2) \cdot (precision \cdot recall)}{\beta^2 \cdot precision + recall}

Here the value of β is taken as 1.

In Table 2 we have shown the accuracy values for a few feature sets. While experimenting with static word features, we have observed that a window of the previous and next two words (wi−2...wi+2) gives the best result (69.09) using the word features only. But when wi−3 and wi+3 are added to it, the f-value is reduced to 66.84. Again, when wi−2 and wi+2 are removed from the feature set (i.e. only wi−1 and wi+1 as features), the f-value is reduced to 67.26. This demonstrates that wi−2 and wi+2 are helpful features in NE identification.

When suffix, prefix and digit information are added to the feature set, the f-value is increased up to 74.26. The value is obtained using the feature set F8 [wi, wi−1, wi+1, ti−1, Suffix, Digit]. It is observed that when wi−2 and wi+2 are added to this feature set, the accuracy decreases by 2%. This contradicts the results using the word features only. Another interesting observation is that prefix information provides helpful features in NE identification, as it increases accuracy when separately added to the word features (F6). Similarly the suffix information helps in increasing the accuracy. But when both the suffix and prefix information are used in combination along with the word features, the f-value decreases. From Table 2, an f-value of 73.42 is obtained using F5 [wi, wi−1, wi+1, ti−1, Suffix], but when prefix information is added to it (F7), the f-value is reduced to 72.5.

POS information provides important features in NER. In general it is observed that coarse-grained POS information performs better than the finer-grained POS information. The best accuracy (75.6 f-value) of the baseline system is obtained using the binary NomPSP feature along with word features (wi−1, wi+1), suffix and digit information. It is noted that when wi−2, wi+2 and prefix information are added to this best feature set, the f-value is reduced to 72.65.

From the above discussion it is clear that the system suffers from overfitting if a large number of features are used to train the system. Note that the surrounding word features (wi−2, wi−1, wi+1, wi+2 etc.) can take any value from the lexicon and hence are of high dimensionality. These cause the degradation of the performance of the system. However it is obvious that only a few words in the lexicon are important in the identification of NEs. To solve the problem of high dimensionality we use clustering to group the words present in the corpus into a much smaller number of clusters. Then the word clusters are used as features instead of the word features (for surrounding words). For example, our Hindi corpus contains 17,456 different words, which are grouped into N (say 100) clusters. A particular word is then assigned to a cluster and the corresponding cluster-id is used as a feature. Hence the number of features is reduced to 100 instead of 17,456. Similarly, selection of important words can also solve the problem of high dimensionality. As some of the words in the lexicon play an important role in the NE identification process, we aim to select these particular words.
Only these important words are used in NE identification instead of all words in the corpus.

3 Word Clustering

Clustering is the process of grouping together objects based on their similarity. The measure of similarity is critical for good quality clustering. We have experimented with some approaches to compute word-word similarity. These are described in detail in the following sections.

3.1 Cosine Similarity based on Sentence Level Co-occurrence

A word is represented by a binary vector of dimension same as the number of sentences in the corpus. A component of the vector is 1 if the word occurs in the corresponding sentence and zero otherwise. Then we measure the cosine similarity between the word vectors. The cosine similarity between two word vectors A and B with dimension d is measured as:

CosSim(\vec{A}, \vec{B}) = \frac{\sum_d A_d B_d}{(\sum_d A_d^2)^{\frac{1}{2}} \times (\sum_d B_d^2)^{\frac{1}{2}}}    (3)

This measures the number of co-occurring sentences.

3.2 Cosine Similarity based on Proximal Words

In this measure a word is represented by a vector having dimension same as the lexicon size. For ease of implementation we have taken a dimension of 2 × 200, where each component of the vector corresponds to one of the 200 most frequent preceding and following words of a token word. List_Prev contains the most frequent (top 200) previous words (wi−1 or wi−2 if wi is the first word of a NE) and List_Next contains the 200 most frequent next words (wi+1 or wi+2 if wi is the last word of a NE). A particular word wk may occur several times (say n) in the corpus. For each occurrence of wk, find whether its previous word (wk−1 or wk−2) matches any element of List_Prev. If it matches, then set the corresponding position of the vector to 1 and set all other positions related to List_Prev to zero. Similarly check the next word (wk+1 or wk+2) against List_Next and find the values of the corresponding positions. The final word vector Wk is obtained by taking the average over all occurrences of wk. Then the cosine similarity is measured between the word vectors. This measures the similarity of the contexts of the occurrences of the word in terms of the proximal words.

3.3 Similarity based on Proximity to NE Categories

Here, for each word (wi) in the corpus four binary vectors are defined corresponding to two preceding and two following positions (i−1, i−2, i+1, i+2). Each binary vector is of dimension five, corresponding to four NE classes (Cj) and one for the not-name class. For a particular word wk, find all the words that occur in a particular position (say, +1). Measure the fraction (Pj(wk)) of these words belonging to a class Cj. The component of the word vector Wk for the position corresponding to Cj is Pj(wk):

P_j(w_k) = \frac{\text{No. of times } w_{k+1} \text{ is a NE of class } C_j}{\text{Total occurrence of } w_k \text{ in corpus}}

The Euclidean distance between the above word vectors is used as a similarity measure. Some of the word vectors for the +1 position are given in Table 3. In this table we have given the word vectors for a few Hindi words, which are: sthita (located), shahara (city), jAkara (go), nagara (township), gA.nva (village), nivAsI (resident), mishrA (a surname) and limiTeDa (ltd.). From the table we observe that the word vectors are close for sthita [0 0.478 0 0 0.522], shahara [0 0.585 0.001 0.024 0.39], nagara [0 0.507 0.019 0 0.474] and gA.nva [0 0.551 0 0 0.449]. So these words are considered as close.
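The position-wise vectors just described can be computed directly from the annotated corpus. The following is a minimal sketch of the +1-position vectors of Section 3.3 and of the Euclidean distance used to compare them; it assumes each sentence is a list of (word, class) pairs with class labels collapsed to Per/Loc/Org/Dat/Not, which is an assumption about the data format rather than the system's actual representation.

```python
from collections import defaultdict

# Sketch of the NE-proximity vectors of Section 3.3 for the +1 position:
# for each word, the fraction of its occurrences whose following word is
# annotated with each NE class (or the not-name class).
CLASSES = ["Per", "Loc", "Org", "Dat", "Not"]   # assumed collapsed tagset

def proximity_vectors(tagged_sents):
    counts = defaultdict(lambda: [0] * len(CLASSES))
    totals = defaultdict(int)
    for sent in tagged_sents:
        for i, (word, _) in enumerate(sent):
            totals[word] += 1
            if i + 1 < len(sent):
                nxt = sent[i + 1][1]
                if nxt in CLASSES:
                    counts[word][CLASSES.index(nxt)] += 1
    return {w: [c / totals[w] for c in counts[w]] for w in counts}

def euclid(u, v):
    """Euclidean distance between two such vectors."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```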
Table 3: Example of some word vectors for the next (+1) position (see text for glosses)

  Word      Per    Loc    Org    Dat    Not
  sthita    0      0.478  0      0      0.522
  shahara   0      0.585  0.001  0.024  0.39
  jAkara    0      0.22   0      0      0.88
  nagara    0      0.507  0.019  0      0.474
  gA.nva    0      0.551  0      0      0.449
  nivAsI    0.108  0.622  0      0      0.27
  mishrA    0.889  0      0      0      0.111
  limiTeDa  0      0      1      0      0

3.4 K-means Clustering

Using the above similarity measures we have used the k-means algorithm. The seeds were randomly selected. The value of k (number of clusters) was varied until the best result was obtained.

4 Important Word Selection

It is noted that not all words are equally important in determining the NE category. Some of the words in the lexicon are typically associated with a particular NE category and hence have an important role to play in the classification process. We describe below a few statistical techniques that have been used to identify the important words.

4.1 Class Independent Important Word Selection

We define context words as those which occur in proximity of a NE. In other words, context words are the words present in the wi−2, wi−1, wi+1 or wi+2 position if wi is a NE. Note that only a subset of the lexicon are context words. For each context word, its N_weight is calculated as the ratio between the occurrences of the word as a context word and its total number of occurrences in the corpus:

N\_weight(w_i) = \frac{\text{Occurrence of } w_i \text{ as context word}}{\text{Total occurrence of } w_i \text{ in corpus}}

The context words having the highest N_weight are considered as important words for NER. For our experiments we have considered the top 500 words as important words.

4.2 Important Words for Each Class

Similar to the class independent important word selection from the contexts, important words are selected for individual classes also. This is an extension of the previous approach, considering only NEs of a particular class. For the person, location, organization and date classes we have considered the top 150, 120, 50 and 50 words respectively as important words. Four binary features are also defined for these four classes. These are defined as having value 1 if any of the context words belongs to the important words list for a particular class.

4.3 Important Words for Each Position

Position based important words are also selected from the corpus. Here, instead of the context, particular positions are considered. Four lists are compiled for the two preceding and two following positions (−2, −1, +1 and +2).

5 Evaluation of NE Recognition

The following subsections contain the experimental results using word clustering and important word selection. The results demonstrate the effectiveness of word clustering and important word selection over the baseline MaxEnt model.

Table 4: Variation of MaxEnt based system accuracy depending on the number of clusters (k)

  k     Per    Loc    Org    Dat    Total
  20    66.33  74.57  43.64  91.30  69.54
  50    64.13  76.35  52     93.62  71.7
  80    66.33  74.57  53.85  93.62  72.08
  100   70.1   73.1   57.7   96.62  72.78
  120   66.15  73.43  54.9   93.62  71.52
  150   66.88  74.94  53.06  95.65  72.33
  200   66.09  73.82  52     92     71.13

5.1 Using Word Clusters

To evaluate the effectiveness of the clustering approaches in Hindi NER, we have used cluster features instead of word features. For the surrounding words, the corresponding cluster-ids are used as features.

Choice of k: We have already mentioned that for k-means clustering the number of clusters (k) should be determined initially. To find a suitable k we conducted the following experiments.
We have selected a feature set F1 (mentioned in Table 2) and applied the clusters with different k as features replacing the word features. In Table 4 we have summarized the experimental results, in order to find a suitable k for clustering the word vectors obtained using the procedure described in Section 3.3. From the table we observe that the best result is obtained when k is 100. We have used k = 100 for the subsequent experiments for comparing the effectiveness of the features. Similarly, when we deal with all the words in the corpus (17,465 words), we got the best results when the words are clustered into 1100 clusters.

Table 5: F-values for different features in a MaxEnt based Hindi NER with clustering based feature reduction [window(−m, +n) refers to the cluster or word features corresponding to the previous m positions and next n positions; C1 denotes the clusters which use sentence level co-occurrence based cosine similarity (3.1), C2 denotes the clusters which use proximal word based cosine similarity (3.2), C3 denotes the clusters for each position related to NEs (3.3)]

  Feature                                             Word Features  C1     C2     C3
  wi, window(-1, +1)                                  67.26          69.67  72.05  72.78
  wi, window(-2, +2)                                  69.09          71.52  72.65  74.26
  wi, window(-1, +1), Suffix                          73.42          74.24  75.44  75.84
  wi, window(-1, +1), Prefix, Suffix                  72.5           74.76  75.7   76.33
  wi, window(-1, +1), Prefix, Suffix, Digit           74.26          75.09  75.91  76.41
  wi, window(-1, +1), Prefix, Suffix, Digit, NomPSP   75.6           77.2   77.39  77.61
  wi, window(-2, +2), Prefix, Suffix, Digit, NomPSP   72.65          77.86  78.61  79.03

The details of the comparison between the baseline word features and the reduced features obtained using clustering are given in Table 5. In general it is observed that clustering has improved the performance over the baseline features. Using only cluster features the system provides a maximum f-value of 74.26 where the corresponding word features give an f-value of 69.09.

Among the various similarity measures for clustering, improved results are obtained using the clusters which use the similarity measurement based on proximity of the words to NE categories (defined in Section 3.3). Using clustering features the best f-value (79.03) is obtained using clusters for the previous two and next two words along with the suffix, prefix, digit and POS information. It is observed that the prefix information increases the accuracy if applied along with suffix information when cluster features are used. More interestingly, the addition of cluster features for positions −2 and +2 over the feature set [window(-1, +1), Suffix, Prefix, Digit, NomPSP] increases the f-value from 77.61 to 79.03. But in the baseline system the addition of word features (wi−2 and wi+2) over the same feature set decreases the f-value from 75.6 to 72.65.

5.2 Using Important Word Selection

The details of the comparison between the word features and the reduced features based on important word selection are given in Table 6. For the surrounding word features, we find whether the particular word (e.g. at position -1, -2 etc.) is present in the important words list (corresponding to the particular position if position based important words are considered). If the word occurs in the list then the word is used as a feature; a minimal sketch of this lookup, together with the cluster-id substitution of Section 5.1, is given below.
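The sketch applies the two reduction variants to one surrounding word at a given relative position. The dictionary names word2cluster and important are assumptions; in the actual system they would come from the k-means output and from the N_weight-style selection of Section 4.

```python
# Sketch of the feature reduction variants of Sections 5.1 and 5.2, applied
# to one surrounding word at relative position pos (e.g. -2 .. +2).

def cluster_feature(word, pos, word2cluster):
    """Section 5.1: the surrounding word is replaced by its cluster id."""
    return f"c{pos:+d}={word2cluster.get(word, 'UNK')}"

def important_word_feature(word, pos, important):
    """Section 5.2: the word is used only if it is in the important list."""
    if word in important.get(pos, set()):
        return f"w{pos:+d}={word}"
    return None  # unimportant surrounding words contribute no feature
```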
Table 6: F-values for different features in a MaxEnt based Hindi NER with important word based feature reduction [window(−m, +n) refers to the important word or baseline word features corresponding to the previous m positions and next n positions; I1 denotes the class independent important words (4.1), I2 denotes the important words for each class (4.2), I3 denotes the important words for each position (4.3)]

  Feature                                             Word Features  I1     I2     I3
  wi, window(-1, +1)                                  67.26          66.31  67.53  66.8
  wi, window(-2, +2)                                  69.09          72.04  72.9   73.34
  wi, window(-1, +1), Suffix                          73.42          73.85  73.12  74.61
  wi, window(-1, +1), Prefix, Suffix                  72.5           73.52  73.94  74.87
  wi, window(-1, +1), Prefix, Suffix, Digit           74.26          73.97  74.13  74.7
  wi, window(-1, +1), Prefix, Suffix, Digit, NomPSP   75.6           75.84  76.6   77.22
  wi, window(-2, +2), Prefix, Suffix, Digit, NomPSP   72.65          76.69  77.42  79.85

In general it is observed that word selection also improves performance over the baseline features. Among the different approaches, the best result is obtained when important words for the two preceding and two following positions (defined in Section 4.3) are selected. Using important word based features, the highest f-value of 79.85 is obtained by using the important words for the previous two and next two positions along with the suffix, prefix, digit and POS information.

5.3 Relative Effectiveness of Clustering and Word Selection

In most of the cases clustering based features perform better than the important word based feature reduction. But the best f-value (79.85) of the system (using the clustering based and important word based features separately) is obtained by using important word based features. Next we have made an experiment by considering both the clusters and important words combined. We have defined the combined feature as follows: if the word (wi) is in the corresponding important word list then the word is used as the feature, otherwise the id of the cluster to which wi belongs is used as the feature. Using the combined feature, we have achieved further improvement. Here we are able to achieve the highest f-value of 80.01.

6 Conclusion

A hierarchical word clustering technique, where clusters are derived automatically from a large unannotated corpus, is used by Miller et al. (2004) for augmenting annotated training data. Note that our clustering approach is different, where the clusters are obtained using statistics derived from the annotated corpus, and also the purpose is different, as we have used the clusters for feature reduction.

In this paper we propose two feature reduction techniques for Hindi NER based on word clustering and word selection. A number of word similarity measures are used for clustering. A few statistical approaches are used for the selection of important words. It is observed that a significant enhancement of accuracy over the baseline system, which uses word features, is obtained. This is probably due to the reduction of overfitting. This is more important for resource-poor languages like Hindi where there is a scarcity of annotated training data and other NER resources (like gazetteer lists).

7 Acknowledgement

The work is partially funded by Microsoft Research India.

References

Berger A L, Pietra S D and Pietra V D. 1996. A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39–71.

Bikel D M, Miller S, Schwartz R and Weischedel R. 1997. Nymble: A High Performance Learning Name-finder. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 194–201.

Borthwick A. 1999. A Maximum Entropy Approach to Named Entity Recognition. Ph.D. thesis, Computer Science Department, New York University.

Grishman R. 1995. The New York University System MUC-6 or Where's the syntax?
In Proceedings of the Sixth Message Understanding Conference.

Li W and McCallum A. 2003. Rapid Development of Hindi Named Entity Recognition using Conditional Random Fields and Feature Induction. ACM Transactions on Asian Language Information Processing (TALIP), 2(3):290–294.

McCallum A and Li W. 2003. Early Results for Named Entity Recognition with Conditional Random Fields, Feature Induction and Web-enhanced Lexicons. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL.

Miller S, Guinness J and Zamanian A. 2004. Name Tagging with Word Clusters and Discriminative Training. In Proceedings of HLT-NAACL 2004, pages 337–342.

Saha S K, Sarkar S and Mitra P. 2008. A Hybrid Feature Set based Maximum Entropy Hindi Named Entity Recognition. In Proceedings of the Third International Joint Conference on Natural Language Processing (IJCNLP-08), pages 343–349.

Wakao T, Gaizauskas R and Wilks Y. 1996. Evaluation of an algorithm for the recognition and classification of proper names. In Proceedings of COLING-96.
Proceedings of ACL-08: HLT, pages 496–504, Columbus, Ohio, USA, June 2008. © 2008 Association for Computational Linguistics

Combining EM Training and the MDL Principle for an Automatic Verb Classification incorporating Selectional Preferences

Sabine Schulte im Walde, Christian Hying, Christian Scheible, Helmut Schmid
Institute for Natural Language Processing
University of Stuttgart, Germany
{schulte,hyingcn,scheibcn,schmid}@ims.uni-stuttgart.de

Abstract

This paper presents an innovative, complex approach to semantic verb classification that relies on selectional preferences as verb properties. The probabilistic verb class model underlying the semantic classes is trained by a combination of the EM algorithm and the MDL principle, providing soft clusters with two dimensions (verb senses and subcategorisation frames with selectional preferences) as a result. A language-model-based evaluation shows that after 10 training iterations the verb class model results are above the baseline results.

1 Introduction

In recent years, the computational linguistics community has developed an impressive number of semantic verb classifications, i.e., classifications that generalise over verbs according to their semantic properties. Intuitive examples of such classifications are the MOTION WITH A VEHICLE class, including verbs such as drive, fly, row, etc., or the BREAK A SOLID SURFACE WITH AN INSTRUMENT class, including verbs such as break, crush, fracture, smash, etc. Semantic verb classifications are of great interest to computational linguistics, specifically regarding the pervasive problem of data sparseness in the processing of natural language. Up to now, such classifications have been used in applications such as word sense disambiguation (Dorr and Jones, 1996; Kohomban and Lee, 2005), machine translation (Prescher et al., 2000; Koehn and Hoang, 2007), document classification (Klavans and Kan, 1998), and in statistical lexical acquisition in general (Rooth et al., 1999; Merlo and Stevenson, 2001; Korhonen, 2002; Schulte im Walde, 2006).

Given that the creation of semantic verb classifications is not an end task in itself, but depends on the application scenario of the classification, we find various approaches to an automatic induction of semantic verb classifications. For example, Siegel and McKeown (2000) used several machine learning algorithms to perform an automatic aspectual classification of English verbs into event and stative verbs. Merlo and Stevenson (2001) presented an automatic classification of three types of English intransitive verbs, based on argument structure and heuristics to thematic relations. Pereira et al. (1993) and Rooth et al. (1999) relied on the Expectation-Maximisation algorithm to induce soft clusters of verbs, based on the verbs' direct object nouns. Similarly, Korhonen et al. (2003) relied on the Information Bottleneck (Tishby et al., 1999) and subcategorisation frame types to induce soft verb clusters.

This paper presents an innovative, complex approach to semantic verb classes that relies on selectional preferences as verb properties. The underlying linguistic assumption for this verb class model is that verbs which agree on their selectional preferences belong to a common semantic class. The model is implemented as a soft-clustering approach, in order to capture the polysemy of the verbs.
The training procedure uses the Expectation-Maximisation (EM) algorithm (Baum, 1972) to iteratively improve the probabilistic parameters of the model, and applies the Minimum Description Length (MDL) principle (Rissanen, 1978) to induce WordNet-based selectional preferences for arguments within subcategorisation frames. Our model is potentially useful for lexical induction (e.g., verb senses, subcategorisation and selectional preferences, collocations, and verb alternations), and for NLP applications in sparse data situations. In this paper, we provide an evaluation based on a language model.

The remainder of the paper is organised as follows. Section 2 introduces our probabilistic verb class model, the EM training, and how we incorporate the MDL principle. Section 3 describes the clustering experiments, including the experimental setup, the evaluation, and the results. Section 4 reports on related work, before we close with a summary and outlook in Section 5.

2 Verb Class Model

2.1 Probabilistic Model

This paper suggests a probabilistic model of verb classes that groups verbs into clusters with similar subcategorisation frames and selectional preferences. Verbs may be assigned to several clusters (soft clustering), which allows the model to describe the subcategorisation properties of several verb readings separately. The number of clusters is defined in advance, but the assignment of the verbs to the clusters is learnt during training. It is assumed that all verb readings belonging to one cluster have similar subcategorisation and selectional properties. The selectional preferences are expressed in terms of semantic concepts from WordNet, rather than a set of individual words. Finally, the model assumes that the different arguments are mutually independent for all subcategorisation frames of a cluster. From the last assumption, it follows that any statistical dependency between the arguments of a verb has to be explained by multiple readings. The statistical model is characterised by the following equation which defines the probability of a verb v with a subcategorisation frame f and arguments a_1, ..., a_{n_f}:

p(v, f, a_1, \ldots, a_{n_f}) = \sum_c p(c)\, p(v|c)\, p(f|c) \prod_{i=1}^{n_f} \sum_{r \in R} p(r|c, f, i)\, p(a_i|r)

The model describes a stochastic process which generates a verb-argument tuple like ⟨speak, subj-pp.to, professor, audience⟩ by

1. selecting some cluster c, e.g. c3 (which might correspond to a set of communication verbs), with probability p(c3),

2. selecting a verb v, here the verb speak, from cluster c3 with probability p(speak|c3),

3. selecting a subcategorisation frame f, here subj-pp.to, with probability p(subj-pp.to|c3); note that the frame probability only depends on the cluster, and not on the verb,

4. selecting a WordNet concept r for each argument slot, e.g. person for the first slot with probability p(person|c3, subj-pp.to, 1) and social group for the second slot with probability p(social group|c3, subj-pp.to, 2),

5. selecting a word a_i to instantiate each concept as argument i; in our example, we might choose professor for person with probability p(professor|person) and audience for social group with probability p(audience|social group).

The model contains two hidden variables, namely the clusters c and the selectional preferences r. In order to obtain the overall probability of a given verb-argument tuple, we have to sum over all possible values of these hidden variables.
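This sum over the hidden variables can be written down directly once the path sums of the WordNet HMMs are available. The sketch below is a minimal illustration, not the trained system: it assumes the probabilities have already been collapsed into plain nested tables p_c, p_v, p_f, p_r and p_a (all names and data structures are assumptions), and simply evaluates the model equation for one tuple.

```python
# Minimal sketch of the model equation above, assuming the WordNet-HMM path
# sums are precomputed as plain tables:
#   p_c[c], p_v[c][verb], p_f[c][frame], p_r[c][frame][i][concept]  (p(r|c,f,i))
#   p_a[concept][word]                                              (p(a|r))

def tuple_prob(v, f, args, p_c, p_v, p_f, p_r, p_a):
    total = 0.0
    for c in p_c:                                   # sum over clusters
        prob = p_c[c] * p_v[c].get(v, 0.0) * p_f[c].get(f, 0.0)
        if prob == 0.0:
            continue
        for i, a in enumerate(args):                # independent argument slots
            sel = p_r[c][f][i]                      # selectional preferences of slot i
            prob *= sum(p_sel * p_a.get(r, {}).get(a, 0.0)   # sum over concepts r
                        for r, p_sel in sel.items())
        total += prob
    return total
```

For the example above, tuple_prob('speak', 'subj-pp.to', ['professor', 'audience'], ...) would sum exactly the cluster and concept combinations enumerated in steps 1-5.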
The assumption that the arguments are independent of the verb given the cluster is essential for obtaining a clustering algorithm because it forces the EM algorithm to make the verbs within a cluster as similar as possible.1 The assumption that the different arguments of a verb are mutually independent is important to reduce the parameter set to a tractable size. The fact that verbs select for concepts rather than individual words also reduces the number of parameters and helps to avoid sparse data problems. The application of the MDL principle guarantees that no important information is lost.

1 The EM algorithm adjusts the model parameters in such a way that the probability assigned to the training tuples is maximised. Given the model constraints, the data probability can only be maximised by making the verbs within a cluster as similar to each other as possible, regarding the required arguments.

The probabilities p(r|c, f, i) and p(a|r) mentioned above are not represented as atomic entities. Instead, we follow an approach by Abney and Light (1999) and turn WordNet into a Hidden Markov model (HMM). We create a new pseudo-concept for each WordNet noun and add it as a hyponym to each synset containing this word. In addition, we assign a probability to each hypernymy–hyponymy transition, such that the probabilities of the hyponymy links of a synset sum up to 1. The pseudo-concept nodes emit the respective word with a probability of 1, whereas the regular concept nodes are non-emitting nodes. The probability of a path in this (a priori) WordNet HMM is the product of the probabilities of the transitions within the path. The probability p(a|r) is then defined as the sum of the probabilities of all paths from the concept r to the word a. Similarly, we create a partial WordNet HMM for each argument slot ⟨c, f, i⟩ which encodes the selectional preferences. It contains only the WordNet concepts that the slot selects for, according to the MDL principle (cf. Section 2.3), and the dominating concepts. The probability p(r|c, f, i) is the total probability of all paths from the top-most WordNet concept entity to the terminal node r.

2.2 EM Training

The model is trained on verb-argument tuples of the form described above, i.e., consisting of a verb and a subcategorisation frame, plus the nominal2 heads of the arguments. The tuples may be extracted from parsed data, or from a treebank. Because of the hidden variables, the model is trained iteratively with the Expectation-Maximisation algorithm (Baum, 1972). The parameters are randomly initialised and then re-estimated with the Inside-Outside algorithm (Lari and Young, 1990), which is an instance of the EM algorithm for training Probabilistic Context-Free Grammars (PCFGs). The PCFG training algorithm is applicable here because we can define a PCFG for each of our models which generates the same verb-argument tuples with the same probability. The PCFG is defined as follows:

2 Arguments with lexical heads other than nouns (e.g., subcategorised clauses) are not included in the selectional preference induction.

(1) The start symbol is TOP.

(2) For each cluster c, we add a rule TOP → Vc Ac whose probability is p(c).

(3) For each verb v in cluster c, we add a rule Vc → v with probability p(v|c).

(4) For each subcategorisation frame f of cluster c with length n, we add a rule Ac → f Rc,f,1,entity ... Rc,f,n,entity with probability p(f|c).
(5) For each transition from a node r to a node r′ in the selectional preference model for slot i of the subcategorisation frame f of cluster c, we add a rule Rc,f,i,r → Rc,f,i,r′ whose probability is the transition probability from r to r′ in the respective WordNet HMM.

(6) For each terminal node r in the selectional preference model, we add a rule Rc,f,i,r → Rr whose probability is 1. With this rule, we "jump" from the selectional restriction model to the corresponding node in the a priori model.

(7) For each transition from a node r to a node r′ in the a priori model, we add a rule Rr → Rr′ whose probability is the transition probability from r to r′ in the a priori WordNet HMM.

(8) For each word node a in the a priori model, we add a rule Ra → a whose probability is 1.

Based on the above definitions, a partial "parse" for ⟨speak subj-pp.to professor audience⟩, referring to cluster 3 and one possible WordNet path, is shown in Figure 1. The connections within R3 (R3,...,entity–R3,...,person/group) and within R (Rperson/group–Rprofessor/audience) refer to sequential applications of rule types (5) and (7), respectively.

Figure 1: Example parse tree.

  TOP
    V3 → speak
    A3 → subj-pp.to
      R3,subj-pp.to,1,entity → R3,subj-pp.to,1,person → Rperson → Rprofessor → professor
      R3,subj-pp.to,2,entity → R3,subj-pp.to,2,group → Rgroup → Raudience → audience

The EM training algorithm maximises the likelihood of the training data.

2.3 MDL Principle

A model with a large number of fine-grained concepts as selectional preferences assigns a higher likelihood to the data than a model with a small number of general concepts, because in general a larger number of parameters is better in describing training data. Consequently, the EM algorithm a priori prefers fine-grained concepts but – due to sparse data problems – tends to overfit the training data. In order to find selectional preferences with an appropriate granularity, we apply the Minimum Description Length principle, an approach from Information Theory. According to the MDL principle, the model with minimal description length should be chosen. The description length itself is the sum of the model length and the data length, with the model length defined as the number of bits needed to encode the model and its parameters, and the data length defined as the number of bits required to encode the training data with the given model. According to coding theory, an optimal encoding uses −log_2 p bits, on average, to encode data whose probability is p. Usually, the model length increases and the data length decreases as more parameters are added to a model. The MDL principle finds a compromise between the size of the model and the accuracy of the data description.

Our selectional preference model relies on Li and Abe (1998), applying the MDL principle to determine selectional preferences of verbs and their arguments, by means of a concept hierarchy ordered by hypernym/hyponym relations. Given a set of nouns within a specific argument slot as a sample, the approach finds the cut3 in a concept hierarchy which minimises the sum of encoding both the model and the data. The model length (ML) is defined as

ML = \frac{k}{2} \cdot \log_2 |S|

with k the number of concepts in the partial hierarchy between the top concept and the concepts in the cut, and |S| the sample size, i.e., the total frequency of the data set. The data length (DL) is defined as

DL = - \sum_{n \in S} \log_2 p(n)
3 A cut is defined as a set of concepts in the concept hierarchy that defines a partition of the "leaf" concepts (the lowest concepts in the hierarchy), viewing each concept in the cut as representing the set of all leaf concepts it dominates.

The probability of a noun p(n) is determined by dividing the total probability of the concept class the noun belongs to, p(concept), by the size of that class, |concept|, i.e., the number of nouns that are dominated by that concept:

p(n) = \frac{p(concept)}{|concept|}

The higher the concept within the hierarchy, the more nouns receive an equal probability, and the greater is the data length. The probability of the concept class in turn is determined by dividing the frequency of the concept class f(concept) by the sample size:

p(concept) = \frac{f(concept)}{|S|}

where f(concept) is calculated by upward propagation of the frequencies of the nominal lexemes from the data sample through the hierarchy. For example, if the nouns coffee, tea, milk appeared with frequencies 25, 50, 3, respectively, within a specific argument slot, then their hypernym concept beverage would be assigned a frequency of 78, and these 78 would be propagated further upwards to the next hypernyms, etc. As a result, each concept class is assigned a fraction of the frequency of the whole data set (and the top concept receives the total frequency of the data set). For calculating p(concept) (and the overall data length), though, only the concept classes within the cut through the hierarchy are relevant.

Our model uses WordNet 3.0 as the concept hierarchy, and comprises one (complete) a priori WordNet model for the lexical head probabilities p(a|r) and one (partial) model for each selectional probability distribution p(r|c, f, i), cf. Section 2.1.

2.4 Combining EM and MDL

The training procedure that combines the EM training with the MDL principle can be summarised as follows.

1. The probabilities of a verb class model with c classes and a pre-defined set of verbs and frames are initialised randomly. The selectional preference models start out with the most general WordNet concept only, i.e., the partial WordNet hierarchies underlying the probabilities p(r|c, f, i) initially only contain the concept r for entity.

2. The model is trained for a pre-defined number of iterations. In each iteration, not only the model probabilities are re-estimated and maximised (as done by EM), but also the cuts through the concept hierarchies that represent the various selectional preference models are re-assessed. In each iteration, the following steps are performed.

(a) The partial WordNet hierarchies that represent the selectional preference models are expanded to include the hyponyms of the respective leaf concepts of the partial hierarchies. I.e., in the first iteration, all models are expanded towards the hyponyms of entity, and in subsequent iterations each selectional preference model is expanded to include the hyponyms of the leaf nodes in the partial hierarchies resulting from the previous iteration. This expansion step allows the selection models to become more and more detailed, as the training proceeds and the verb clusters (and their selectional restrictions) become increasingly specific.

(b) The training tuples are processed: For each tuple, a PCFG parse forest as indicated by Figure 1 is built, and the Inside-Outside algorithm is applied to estimate the frequencies of the "parse tree rules", given the current model probabilities.
(c) The MDL principle is applied to each selectional preference model: Starting from the respective leaf concepts in the partial hierarchies, MDL is calculated to compare each set of hyponym concepts that share a hypernym with the respective hypernym concept. If the MDL is lower for the set of hyponyms than for the hypernym, the hyponyms are left in the partial hierarchy. Otherwise the expansion of the hypernym towards the hyponyms is undone and we continue recursively up the hierarchy, calculating MDL to compare the former hypernym and its co-hyponyms with the next upper hypernym, etc. The recursion allows the training algorithm to remove nodes which were added in earlier iterations and are no longer relevant. It stops if the MDL is lower for the hyponyms than for the hypernym. This step results in selectional preference models that minimally contain the top concept entity, and maximally contain the partial WordNet hierarchy between entity and the concept classes that have been expanded within this iteration.

(d) The probabilities of the verb class model are maximised based on the frequency estimates obtained in step (b).

3 Experiments

The model is generally applicable to all languages for which a WordNet exists, and for which the WordNet functions provided by Princeton University are available. For the purposes of this paper, we choose English as a case study.

3.1 Experimental Setup

The input data for training the verb class models were derived from Viterbi parses of the whole British National Corpus, using the lexicalised PCFG for English by Carroll and Rooth (1998). We took only active clauses into account, and disregarded auxiliary and modal verbs as well as particle verbs, leaving a total of 4,852,371 Viterbi parses. Those input tuples were then divided into 90% training data and 10% test data, providing 4,367,130 training tuples (over 2,769,804 types), and 485,241 test tuples (over 368,103 types).

As we wanted to train and assess our verb class model under various conditions, we used different fractions of the training data in different training regimes. Because of time and memory constraints, we only used training tuples that appeared at least twice. (For the sake of comparison, we also trained one model on all tuples.) Furthermore, we disregarded tuples with personal pronoun arguments; they are not represented in WordNet, and even if they are added (e.g. to general concepts such as person, entity) they have a rather destructive effect. We considered two subsets of the subcategorisation frames with 10 and 20 elements, which were chosen according to their overall frequency in the training data; for example, the 10 most frequent frame types were subj:obj, subj, subj:ap, subj:to, subj:obj:obj2, subj:obj:pp-in, subj:adv, subj:pp-in, subj:vbase, subj:that.4 When relying on these 10/20 subcategorisation frames, plus including the above restrictions, we were left with 39,773/158,134 and 42,826/166,303 training tuple types/tokens, respectively. The overall number of training tuples was therefore much smaller than the generally available data. The corresponding numbers including tuples with a frequency of one were 478,717/597,078 and 577,755/701,232. The number of clusters in the experiments was either 20 or 50, and we used up to 50 iterations over the training tuples. The model probabilities were output after each 5th iteration.

4 A frame lists its arguments, separated by ':'. Most arguments within the frame types should be self-explanatory; ap is an adjectival phrase.
The output comprises all model probabilities introduced in Section 2.1. The following sections describe the evaluation of the experiments, and the results.

3.2 Evaluation

One of the goals in the development of the presented verb class model was to obtain an accurate statistical model of verb-argument tuples, i.e. a model which precisely predicts the tuple probabilities. In order to evaluate the performance of the model in this respect, we conducted an evaluation experiment in which we computed the probability which the verb class model assigns to our test tuples and compared it to the corresponding probability assigned by a baseline model. The model with the higher probability is judged the better model. We expected that the verb class model would perform better than the baseline model on tuples where one or more of the arguments were not observed with the respective verb, because either the argument itself or a semantically similar argument (according to the selectional preferences) was observed with verbs belonging to the same cluster. We also expected that the verb class model assigns a lower probability than the baseline model to test tuples which frequently occurred in the training data, since the verb class model fails to describe precisely the idiosyncratic properties of verbs which are not shared by the other verbs of its cluster.

The Baseline Model: The baseline model decomposes the probability of a verb-argument tuple into a product of conditional probabilities:5

p(v, f, a_1^{n_f}) = p(v)\, p(f|v) \prod_{i=1}^{n_f} p(a_i | a_1^{i-1}, \langle v, f \rangle, f_i)

The probability of our example tuple ⟨speak, subj-pp.to, professor, audience⟩ in the baseline model is then p(speak) p(subj-pp.to|speak) p(professor|⟨speak, subj-pp.to⟩, subj) p(audience|professor, ⟨speak, subj-pp.to⟩, pp.to). The model contains no hidden variables. Thus the parameters can be directly estimated from the training data with relative frequencies. The parameter estimates are smoothed with modified Kneser-Ney smoothing (Chen and Goodman, 1998), such that the probability of each tuple is positive.

5 f_i is the label of the i-th slot. The verb and the subcategorisation frame are enclosed in angle brackets because they are treated as a unit during smoothing.

Smoothing of the Verb Class Model: Although the verb class model has a built-in smoothing capacity, it needs additional smoothing for two reasons: Firstly, some of the nouns in the test data did not occur in the training data. The verb class model assigns a zero probability to such nouns. Hence we smoothed the concept instantiation probabilities p(noun|concept) with Witten-Bell smoothing (Chen and Goodman, 1998). Secondly, we smoothed the probabilities of the concepts in the selectional preference models where zero probabilities may occur. The smoothing ensures that the verb class model assigns a positive probability to each verb-argument tuple with a known verb, a known subcategorisation frame, and arguments which are in WordNet. Other tuples were excluded from the evaluation because the verb class model cannot deal with them.

3.3 Results

The evaluation results of our classification experiments are presented in Table 1, for 20 and 50 clusters, with 10 and 20 subcategorisation frame types. The table cells provide the log_e of the probabilities per tuple token. The probabilities increase with the number of iterations, flattening out after approx. 25 iterations, as illustrated by Figure 2.
Both for 10 and 20 frames, the results are better for 50 than for 20 clusters, with small differences between 10 and 20 frames. The results vary between -11.850 and -10.620 (for 5-50 iterations), in comparison to baseline values of -11.546 and -11.770 for 10 and 20 frames, respectively. The results thus show that our verb class model results are above the baseline results after 10 iterations; this means that our statistical model then assigns higher probabilities to the test tuples than the baseline model. 501 No. of Iteration Clusters 5 10 15 20 25 30 35 40 45 50 10 frames 20 -11.770 -11.408 -10.978 -10.900 -10.853 -10.841 -10.831 -10.823 -10.817 -10.812 50 -11.850 -11.452 -11.061 -10.904 -10.730 -10.690 -10.668 -10.628 -10.625 -10.620 20 frames 20 -11.769 -11.430 -11.186 -10.971 -10.921 -10.899 -10.886 -10.875 -10.873 -10.869 50 -11.841 -11.472 -11.018 -10.850 -10.737 -10.728 -10.706 -10.680 -10.662 -10.648 Table 1: Clustering results – BNC tuples. Figure 2: Illustration of clustering results. Including input tuples with a frequency of one in the training data with 10 subcategorisation frames (as mentioned in Section 3.1) decreases the loge per tuple to between -13.151 and -12.498 (for 5-50 iterations), with similar training behaviour as in Figure 2, and in comparsion to a baseline of -17.988. The differences in the result indicate that the models including the hapax legomena are worse than the models that excluded the sparse events; at the same time, the differences between baseline and clustering model are larger. In order to get an intuition about the qualitative results of the clusterings, we select two example clusters that illustrate that the idea of the verb class model has been realised within the clusters. According to our own intuition, the clusters are overall semantically impressive, beyond the examples. Future work will assess by semantics-based evaluations of the clusters (such as pseudo-word disambiguation, or a comparison against existing verb classifications), whether this intuition is justified, whether it transfers to the majority of verbs within the cluster analyses, and whether the clusters capture polysemic verbs appropriately. The two examples are taken from the 10 frame/50 cluster verb class model, with probabilities of 0.05 and 0.04. The ten most probable verbs in the first cluster are show, suggest, indicate, reveal, find, imply, conclude, demonstrate, state, mean, with the two most probable frame types subj and subj:that, i.e., the intransitive frame, and a frame that subcategorises a that clause. As selectional preferences within the intransitive frame (and quite similarly in the subj:that frame), the most probable concept classes6 are study, report, survey, name, research, result, evidence. The underlined nouns represent specific concept classes, because they are leaf nodes in the selectional preference hierarchy, thus referring to very specific selectional preferences, which are potentially useful for collocation induction. The ten most probable verbs in the second cluster are arise, remain, exist, continue, need, occur, change, improve, begin, become, with the intransitive frame being most probable. The most probable concept classes are problem, condition, question, natural phenomenon, situation. The two examples illustrate that the verbs within a cluster are semantically related, and that they share obvious subcategorisation frames with intuitively plausible selectional preferences. 
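The evaluation criterion reported in Table 1, the average loge probability per test tuple token, reduces to a few lines once per-tuple probabilities are available from the two models; the prob callables below stand in for the (smoothed) verb class and baseline models and are assumptions of this sketch.

```python
import math

def avg_log_prob_per_token(test_tuples, prob):
    """Average natural-log probability per test tuple token.

    `prob(t)` is assumed to return the smoothed probability a model assigns
    to tuple t; tuples the verb class model cannot handle (unknown verb or
    frame, arguments not in WordNet) should be filtered out beforehand.
    """
    total = sum(math.log(prob(t)) for t in test_tuples)
    return total / len(test_tuples)

# The model with the higher (less negative) value is judged the better one:
# avg_log_prob_per_token(test, class_model_prob) vs
# avg_log_prob_per_token(test, baseline_prob)
```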
4 Related Work Our model is an extension of and thus most closely related to the latent semantic clustering (LSC) model (Rooth et al., 1999) for verb-argument pairs ⟨v, a⟩ which defines their probability as follows: p(v, a) = X c p(c) p(v|c) p(a|c) In comparison to our model, the LSC model only considers a single argument (such as direct objects), 6For readability, we only list one noun per WordNet concept. 502 or a fixed number of arguments from one particular subcategorisation frame, whereas our model defines a probability distribution over all subcategorisation frames. Furthermore, our model specifies selectional preferences in terms of general WordNet concepts rather than sets of individual words. In a similar vein, our model is both similar and distinct in comparison to the soft clustering approaches by Pereira et al. (1993) and Korhonen et al. (2003). Pereira et al. (1993) suggested deterministic annealing to cluster verb-argument pairs into classes of verbs and nouns. On the one hand, their model is asymmetric, thus not giving the same interpretation power to verbs and arguments; on the other hand, the model provides a more fine-grained clustering for nouns, in the form of an additional hierarchical structure of the noun clusters. Korhonen et al. (2003) used verb-frame pairs (instead of verbargument pairs) to cluster verbs relying on the Information Bottleneck (Tishby et al., 1999). They had a focus on the interpretation of verbal polysemy as represented by the soft clusters. The main difference of our model in comparison to the above two models is, again, that we incorporate selectional preferences (rather than individual words, or subcategorisation frames). In addition to the above soft-clustering models, various approaches towards semantic verb classification have relied on hard-clustering models, thus simplifying the notion of verbal polysemy. Two large-scale approaches of this kind are Schulte im Walde (2006), who used k-Means on verb subcategorisation frames and verbal arguments to cluster verbs semantically, and Joanis et al. (2008), who applied Support Vector Machines to a variety of verb features, including subcategorisation slots, tense, voice, and an approximation to animacy. To the best of our knowledge, Schulte im Walde (2006) is the only hard-clustering approach that previously incorporated selectional preferences as verb features. However, her model was not soft-clustering, and she only used a simple approach to represent selectional preferences by WordNet’s top-level concepts, instead of making use of the whole hierarchy and more sophisticated methods, as in the current paper. Last but not least, there are other models of selectional preferences than the MDL model we used in our paper. Most such models also rely on the WordNet hierarchy (Resnik, 1997; Abney and Light, 1999; Ciaramita and Johnson, 2000; Clark and Weir, 2002). Brockmann and Lapata (2003) compared some of the models against human judgements on the acceptability of sentences, and demonstrated that the models were significantly correlated with human ratings, and that no model performed best; rather, the different methods are suited for different argument relations. 5 Summary and Outlook This paper presented an innovative, complex approach to semantic verb classes that relies on selectional preferences as verb properties. 
The probabilistic verb class model underlying the semantic classes was trained by a combination of the EM algorithm and the MDL principle, providing soft clusters with two dimensions (verb senses and subcategorisation frames with selectional preferences) as a result. A language model-based evaluation showed that after 10 training iterations the verb class model results are above the baseline results. We plan to improve the verb class model with respect to (i) a concept-wise (instead of a cut-wise) implementation of the MDL principle, to operate on concepts instead of combinations of concepts; and (ii) variations of the concept hierarchy, using e.g. the sense-clustered WordNets from the Stanford WordNet Project (Snow et al., 2007), or a WordNet version improved by concepts from DOLCE (Gangemi et al., 2003), to check on the influence of conceptual details on the clustering results. Furthermore, we aim to use the verb class model in NLP tasks, (i) as resource for lexical induction of verb senses, verb alternations, and collocations, and (ii) as a lexical resource for the statistical disambiguation of parse trees. References Steven Abney and Marc Light. 1999. Hiding a Semantic Class Hierarchy in a Markow Model. In Proceedings of the ACL Workshop on Unsupervised Learning in Natural Language Processing, pages 1–8, College Park, MD. Leonard E. Baum. 1972. An Inequality and Associated Maximization Technique in Statistical Estimation for Probabilistic Functions of Markov Processes. Inequalities, III:1–8. 503 Carsten Brockmann and Mirella Lapata. 2003. Evaluating and Combining Approaches to Selectional Preference Acquisition. In Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, pages 27–34, Budapest, Hungary. Glenn Carroll and Mats Rooth. 1998. Valence Induction with a Head-Lexicalized PCFG. In Proceedings of the 3rd Conference on Empirical Methods in Natural Language Processing, Granada, Spain. Stanley Chen and Joshua Goodman. 1998. An Empirical Study of Smoothing Techniques for Language Modeling. Technical Report TR-10-98, Center for Research in Computing Technology, Harvard University. Massimiliano Ciaramita and Mark Johnson. 2000. Explaining away Ambiguity: Learning Verb Selectional Preference with Bayesian Networks. In Proceedings of the 18th International Conference on Computational Linguistics, pages 187–193, Saarbr¨ucken, Germany. Stephen Clark and David Weir. 2002. Class-Based Probability Estimation using a Semantic Hierarchy. Computational Linguistics, 28(2):187–206. Bonnie J. Dorr and Doug Jones. 1996. Role of Word Sense Disambiguation in Lexical Acquisition: Predicting Semantics from Syntactic Cues. In Proceedings of the 16th International Conference on Computational Linguistics, pages 322–327, Copenhagen, Denmark. Aldo Gangemi, Nicola Guarino, Claudio Masolo, and Alessandro Oltramari. 2003. Sweetening WordNet with DOLCE. AI Magazine, 24(3):13–24. Eric Joanis, Suzanne Stevenson, and David James. 2008? A General Feature Space for Automatic Verb Classification. Natural Language Engineering. To appear. Judith L. Klavans and Min-Yen Kan. 1998. The Role of Verbs in Document Analysis. In Proceedings of the 17th International Conference on Computational Linguistics and the 36th Annual Meeting of the Association for Computational Linguistics, pages 680–686, Montreal, Canada. Philipp Koehn and Hieu Hoang. 2007. Factored Translation Models. 
In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 868–876, Prague, Czech Republic. Upali S. Kohomban and Wee Sun Lee. 2005. Learning Semantic Classes for Word Sense Disambiguation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 34–41, Ann Arbor, MI. Anna Korhonen, Yuval Krymolowski, and Zvika Marx. 2003. Clustering Polysemic Subcategorization Frame Distributions Semantically. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 64–71, Sapporo, Japan. Anna Korhonen. 2002. Subcategorization Acquisition. Ph.D. thesis, University of Cambridge, Computer Laboratory. Technical Report UCAM-CL-TR-530. Karim Lari and Steve J. Young. 1990. The Estimation of Stochastic Context-Free Grammars using the InsideOutside Algorithm. Computer Speech and Language, 4:35–56. Hang Li and Naoki Abe. 1998. Generalizing Case Frames Using a Thesaurus and the MDL Principle. Computational Linguistics, 24(2):217–244. Paola Merlo and Suzanne Stevenson. 2001. Automatic Verb Classification Based on Statistical Distributions of Argument Structure. Computational Linguistics, 27(3):373–408. Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional Clustering of English Words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183–190, Columbus, OH. Detlef Prescher, Stefan Riezler, and Mats Rooth. 2000. Using a Probabilistic Class-Based Lexicon for Lexical Ambiguity Resolution. In Proceedings of the 18th International Conference on Computational Linguistics. Philip Resnik. 1997. Selectional Preference and Sense Disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What, and How?, Washington, DC. Jorma Rissanen. 1978. Modeling by Shortest Data Description. Automatica, 14:465–471. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a Semantically Annotated Lexicon via EM-Based Clustering. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, Maryland, MD. Sabine Schulte im Walde. 2006. Experiments on the Automatic Induction of German Semantic Verb Classes. Computational Linguistics, 32(2):159–194. Eric V. Siegel and Kathleen R. McKeown. 2000. Learning Methods to Combine Linguistic Indicators: Improving Aspectual Classification and Revealing Linguistic Insights. Computational Linguistics, 26(4):595–628. Rion Snow, Sushant Prakash, Daniel Jurafsky, and Andrew Y. Ng. 2007. Learning to Merge Word Senses. In Proceedings of the joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Prague, Czech Republic. Naftali Tishby, Fernando Pereira, and William Bialek. 1999. The Information Bottleneck Method. In Proceedings of the 37th Annual Conference on Communication, Control, and Computing, Monticello, IL. 504
Proceedings of ACL-08: HLT, pages 505–513, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Randomized Language Models via Perfect Hash Functions David Talbot∗ School of Informatics University of Edinburgh 2 Buccleuch Place, Edinburgh, UK [email protected] Thorsten Brants Google Inc. 1600 Amphitheatre Parkway Mountain View, CA 94303, USA [email protected] Abstract We propose a succinct randomized language model which employs a perfect hash function to encode fingerprints of n-grams and their associated probabilities, backoff weights, or other parameters. The scheme can represent any standard n-gram model and is easily combined with existing model reduction techniques such as entropy-pruning. We demonstrate the space-savings of the scheme via machine translation experiments within a distributed language modeling framework. 1 Introduction Language models (LMs) are a core component in statistical machine translation, speech recognition, optical character recognition and many other areas. They distinguish plausible word sequences from a set of candidates. LMs are usually implemented as n-gram models parameterized for each distinct sequence of up to n words observed in the training corpus. Using higher-order models and larger amounts of training data can significantly improve performance in applications, however the size of the resulting LM can become prohibitive. With large monolingual corpora available in major languages, making use of all the available data is now a fundamental challenge in language modeling. Efficiency is paramount in applications such as machine translation which make huge numbers of LM requests per sentence. To scale LMs to larger corpora with higher-order dependencies, researchers ∗Work completed while this author was at Google Inc. have considered alternative parameterizations such as class-based models (Brown et al., 1992), model reduction techniques such as entropy-based pruning (Stolcke, 1998), novel represention schemes such as suffix arrays (Emami et al., 2007), Golomb Coding (Church et al., 2007) and distributed language models that scale more readily (Brants et al., 2007). In this paper we propose a novel randomized language model. Recent work (Talbot and Osborne, 2007b) has demonstrated that randomized encodings can be used to represent n-gram counts for LMs with signficant space-savings, circumventing information-theoretic constraints on lossless data structures by allowing errors with some small probability. In contrast the representation scheme used by our model encodes parameters directly. It can be combined with any n-gram parameter estimation method and existing model reduction techniques such as entropy-based pruning. Parameters that are stored in the model are retrieved without error; however, false positives may occur whereby n-grams not in the model are incorrectly ‘found’ when requested. The false positive rate is determined by the space usage of the model. Our randomized language model is based on the Bloomier filter (Chazelle et al., 2004). We encode fingerprints (random hashes) of n-grams together with their associated probabilities using a perfect hash function generated at random (Majewski et al., 1996). Lookup is very efficient: the values of 3 cells in a large array are combined with the fingerprint of an n-gram. This paper focuses on machine translation. However, many of our findings should transfer to other applications of language modeling. 
505 2 Scaling Language Models In statistical machine translation (SMT), LMs are used to score candidate translations in the target language. These are typically n-gram models that approximate the probability of a word sequence by assuming each token to be independent of all but n−1 preceding tokens. Parameters are estimated from monolingual corpora with parameters for each distinct word sequence of length l ∈[n] observed in the corpus. Since the number of parameters grows somewhat exponentially with n and linearly with the size of the training corpus, the resulting models can be unwieldy even for relatively small corpora. 2.1 Scaling Strategies Various strategies have been proposed to scale LMs to larger corpora and higher-order dependencies. Model-based techniques seek to parameterize the model more efficiently (e.g. latent variable models, neural networks) or to reduce the model size directly by pruning uninformative parameters, e.g. (Stolcke, 1998), (Goodman and Gao, 2000). Representationbased techniques attempt to reduce space requirements by representing the model more efficiently or in a form that scales more readily, e.g. (Emami et al., 2007), (Brants et al., 2007), (Church et al., 2007). 2.2 Lossy Randomized Encodings A fundamental result in information theory (Carter et al., 1978) states that a random set of objects cannot be stored using constant space per object as the universe from which the objects are drawn grows in size: the space required to uniquely identify an object increases as the set of possible objects from which it must be distinguished grows. In language modeling the universe under consideration is the set of all possible n-grams of length n for given vocabulary. Although n-grams observed in natural language corpora are not randomly distributed within this universe no lossless data structure that we are aware of can circumvent this space-dependency on both the n-gram order and the vocabulary size. Hence as the training corpus and vocabulary grow, a model will require more space per parameter. However, if we are willing to accept that occasionally our model will be unable to distinguish between distinct n-grams, then it is possible to store each parameter in constant space independent of both n and the vocabulary size (Carter et al., 1978), (Talbot and Osborne, 2007a). The space required in such a lossy encoding depends only on the range of values associated with the n-grams and the desired error rate, i.e. the probability with which two distinct n-grams are assigned the same fingerprint. 2.3 Previous Randomized LMs Recent work (Talbot and Osborne, 2007b) has used lossy encodings based on Bloom filters (Bloom, 1970) to represent logarithmically quantized corpus statistics for language modeling. While the approach results in significant space savings, working with corpus statistics, rather than n-gram probabilities directly, is computationally less efficient (particularly in a distributed setting) and introduces a dependency on the smoothing scheme used. It also makes it difficult to leverage existing model reduction strategies such as entropy-based pruning that are applied to final parameter estimates. In the next section we describe our randomized LM scheme based on perfect hash functions. This scheme can be used to encode any standard n-gram model which may first be processed using any conventional model reduction technique. 3 Perfect Hash-based Language Models Our randomized LM is based on the Bloomier filter (Chazelle et al., 2004). 
We assume the n-grams and their associated parameter values have been precomputed and stored on disk. We then encode the model in an array such that each n-gram’s value can be retrieved. Storage for this array is the model’s only significant space requirement once constructed.1 The model uses randomization to map n-grams to fingerprints and to generate a perfect hash function that associates n-grams with their values. The model can erroneously return a value for an n-gram that was never actually stored, but will always return the correct value for an n-gram that is in the model. We will describe the randomized algorithm used to encode n-gram parameters in the model, analyze the probability of a false positive, and explain how we construct and query the model in practice. 1Note that we do not store the n-grams explicitly and therefore that the model’s parameter set cannot easily be enumerated. 506 3.1 N-gram Fingerprints We wish to encode a set of n-gram/value pairs S = {(x1, v(x1)), (x2, v(x2)), . . . , (xN, v(xN))} using an array A of size M and a perfect hash function. Each n-gram xi is drawn from some set of possible n-grams U and its associated value v(xi) from a corresponding set of possible values V. We do not store the n-grams and their probabilities directly but rather encode a fingerprint of each n-gram f(xi) together with its associated value v(xi) in such a way that the value can be retrieved when the model is queried with the n-gram xi. A fingerprint hash function f : U →[0, B −1] maps n-grams to integers between 0 and B −1.2 The array A in which we encode n-gram/value pairs has addresses of size ⌈log2 B⌉hence B will determine the amount of space used per n-gram. There is a trade-off between space and error rate since the larger B is, the lower the probability of a false positive. This is analyzed in detail below. For now we assume only that B is at least as large as the range of values stored in the model, i.e. B ≥|V|. 3.2 Composite Perfect Hash Functions The function used to associate n-grams with their values (Eq. (1)) combines a composite perfect hash function (Majewski et al., 1996) with the fingerprint function. An example is shown in Fig. 1. The composite hash function is made up of k independent hash functions h1, h2, . . . , hk where each hi : U →[0, M −1] maps n-grams to locations in the array A. The lookup function is then defined as g : U →[0, B −1] by3 g(xi) = f(xi) ⊗ k O i=1 A[hi(xi)] ! (1) where f(xi) is the fingerprint of n-gram xi and A[hi(xi)] is the value stored in location hi(xi) of the array A. Eq. (1) is evaluated to retrieve an n-gram’s parameter during decoding. To encode our model correctly we must ensure that g(xi) = v(xi) for all n-grams in our set S. Generating A to encode this 2The analysis assumes that all hash functions are random. 3We use ⊗to denote the exclusive bitwise OR operator. Figure 1: Encoding an n-gram’s value in the array. function for a given set of n-grams is a significant challenge described in the following sections. 3.3 Encoding n-grams in the model All addresses in A are initialized to zero. The procedure we use to ensure g(xi) = v(xi) for all xi ∈S updates a single, unique location in A for each ngram xi. This location is chosen from among the k locations given by hj(xi), j ∈[k]. Since the composite function g(xi) depends on the values stored at all k locations A[h1(xi)], A[h2(xi)], . . . 
, A[hk(xi)] in A, we must also ensure that once an n-gram xi has been encoded in the model, these k locations are not subsequently changed since this would invalidate the encoding; however, n-grams encoded later may reference earlier entries and therefore locations in A can effectively be ‘shared’ among parameters. In the following section we describe a randomized algorithm to find a suitable order in which to enter n-grams in the model and, for each n-gram xi, determine which of the k hash functions, say hj, can be used to update A without invalidating previous entries. Given this ordering of the n-grams and the choice of hash function hj for each xi ∈S, it is clear that the following update rule will encode xi in the array A so that g(xi) will return v(xi) (cf. Eq.(1)) A[hj(xi)] = v(xi) ⊗f(xi) ⊗ k O i=1∩i̸=j A[hi(xi)]. (2) 3.4 Finding an Ordered Matching We now describe an algorithm (Algorithm 1; (Majewski et al., 1996)) that selects one of the k hash 507 functions hj, j ∈[k] for each n-gram xi ∈S and an order in which to apply the update rule Eq. (2) so that g(xi) maps xi to v(xi) for all n-grams in S. This problem is equivalent to finding an ordered matching in a bipartite graph whose LHS nodes correspond to n-grams in S and RHS nodes correspond to locations in A. The graph initially contains edges from each n-gram to each of the k locations in A given by h1(xi), h2(xi), . . . , hk(xi) (see Fig. (2)). The algorithm uses the fact that any RHS node that has degree one (i.e. a single edge) can be safely matched with its associated LHS node since no remaining LHS nodes can be dependent on it. We first create the graph using the k hash functions hj, j ∈[k] and store a list (degree one) of those RHS nodes (locations) with degree one. The algorithm proceeds by removing nodes from degree one in turn, pairing each RHS node with the unique LHS node to which it is connected. We then remove both nodes from the graph and push the pair (xi, hj(xi)) onto a stack (matched). We also remove any other edges from the matched LHS node and add any RHS nodes that now have degree one to degree one. The algorithm succeeds if, while there are still n-grams left to match, degree one is never empty. We then encode n-grams in the order given by the stack (i.e., first-in-last-out). Since we remove each location in A (RHS node) from the graph as it is matched to an n-gram (LHS node), each location will be associated with at most one n-gram for updating. Moreover, since we match an n-gram to a location only once the location has degree one, we are guaranteed that any other ngrams that depend on this location are already on the stack and will therefore only be encoded once we have updated this location. Hence dependencies in g are respected and g(xi) = v(xi) will remain true following the update in Eq. (2) for each xi ∈S. 3.5 Choosing Random Hash Functions The algorithm described above is not guaranteed to succeed. Its success depends on the size of the array M, the number of n-grams stored |S| and the choice of random hash functions hj, j ∈[k]. Clearly we require M ≥|S|; in fact, an argument from Majewski et al. (1996) implies that if M ≥1.23|S| and k = 3, the algorithm succeeds with high probabilFigure 2: The ordered matching algorithm: matched = [(a, 1), (b, 2), (d, 4), (c, 5)] ity. We use 2-universal hash functions (L. Carter and M. Wegman, 1979) defined for a range of size M via a prime P ≥M and two random numbers 1 ≤aj ≤P and 0 ≤bj ≤P for j ∈[k] as hj(x) ≡ajx + bj mod P taken modulo M. 
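Putting Sections 3.1 to 3.5 together, the toy sketch below illustrates the whole construction: random fingerprint and location hashes, the ordered matching of Section 3.4 (Algorithm 1), the XOR update rule of Eq. (2) applied in stack order, and the lookup function of Eq. (1). Python's built-in hash() stands in for a proper n-gram hash, the k locations of an n-gram are required to be distinct, and values are plain integers in [0, |V|); these simplifications are assumptions of the sketch, not part of the paper's implementation.

```python
import random

class RandomizedLM:
    """Toy encoding of n-gram/value pairs following Sections 3.1-3.5."""

    def __init__(self, items, num_values, error_bits=12, k=3, load=1.23):
        # items: list of (ngram, value) pairs with distinct n-grams and
        # integer values in [0, num_values).
        self.num_values = num_values
        self.B = num_values << error_bits          # fingerprint range [0, B)
        self.k = k
        self.M = int(load * len(items)) + 1        # array size, M >= 1.23|S|
        self.P = (1 << 61) - 1                     # prime for 2-universal hashing
        while True:                                # re-sample hashes until Algorithm 1 succeeds
            rng = random.Random()
            self.h_params = [(rng.randrange(1, self.P), rng.randrange(self.P))
                             for _ in range(k)]
            self.f_params = (rng.randrange(1, self.P), rng.randrange(self.P))
            matched = self._ordered_matching([x for x, _ in items])
            if matched is not None:
                break
        self.A = [0] * self.M
        self._encode(dict(items), matched)

    def _hash(self, params, key, m):
        a, b = params
        x = hash(key) & self.P                     # hash() is only a stand-in here
        return ((a * x + b) % self.P) % m

    def _locations(self, ngram):
        return [self._hash(p, ngram, self.M) for p in self.h_params]

    def _fingerprint(self, ngram):
        return self._hash(self.f_params, ngram, self.B)

    def _ordered_matching(self, ngrams):
        # Algorithm 1: repeatedly match a degree-one location to its n-gram.
        l2r = {x: self._locations(x) for x in ngrams}
        if any(len(set(locs)) < self.k for locs in l2r.values()):
            return None                            # keep the k locations distinct
        r2l = {}
        for x, locs in l2r.items():
            for r in locs:
                r2l.setdefault(r, set()).add(x)
        degree_one = [r for r, xs in r2l.items() if len(xs) == 1]
        matched = []                               # stack of (n-gram, location) pairs
        while degree_one:
            r = degree_one.pop()
            if len(r2l.get(r, ())) != 1:
                continue                           # stale entry
            x = r2l.pop(r).pop()
            matched.append((x, r))
            for r2 in l2r[x]:
                if r2 != r and r2 in r2l:
                    r2l[r2].discard(x)
                    if len(r2l[r2]) == 1:
                        degree_one.append(r2)
        return matched if len(matched) == len(ngrams) else None

    def _encode(self, values, matched):
        # Update rule Eq. (2), applied in first-in-last-out (stack) order.
        for x, r in reversed(matched):
            acc = values[x] ^ self._fingerprint(x)
            for r2 in self._locations(x):
                if r2 != r:
                    acc ^= self.A[r2]
            self.A[r] = acc

    def lookup(self, ngram):
        # Eq. (1); a result outside [0, |V|) means the n-gram was not stored.
        g = self._fingerprint(ngram)
        for r in self._locations(ngram):
            g ^= self.A[r]
        return g if g < self.num_values else None
```

A usage sketch, with 8-bit quantized values (|V| = 256) and a deliberately generous load factor so that this tiny toy example succeeds quickly:

```python
items = [(('the', 'cat', 'sat'), 97), (('cat', 'sat', 'on'), 64),
         (('sat', 'on', 'the'), 12), (('on', 'the', 'mat'), 200)]
lm = RandomizedLM(items, num_values=256, error_bits=12, load=3.0)
assert all(lm.lookup(x) == v for x, v in items)     # stored values are exact
```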
We generate a set of k hash functions by sampling k pairs of random numbers (aj, bj), j ∈[k]. If the algorithm does not find a matching with the current set of hash functions, we re-sample these parameters and re-start the algorithm. Since the probability of failure on a single attempt is low when M ≥1.23|S|, the probability of failing multiple times is very small. 3.6 Querying the Model and False Positives The construction we have described above ensures that for any n-gram xi ∈S we have g(xi) = v(xi), i.e., we retrieve the correct value. To retrieve a value given an n-gram xi we simply compute the fingerprint f(xi), the hash functions hj(xi), j ∈[k] and then return g(xi) using Eq. (1). Note that unlike the constructions in (Talbot and Osborne, 2007b) and (Church et al., 2007) no errors are possible for ngrams stored in the model. Hence we will not make errors for common n-grams that are typically in S. 508 Algorithm 1 Ordered Matching Input : Set of n-grams S; k hash functions hj, j ∈[k]; number of available locations M. Output : Ordered matching matched or FAIL. matched ⇐[ ] for all i ∈[0, M −1] do r2li ⇐∅ end for for all xi ∈S do l2ri ⇐∅ for all j ∈[k] do l2ri ⇐l2ri ∪hj(xi) r2lhj(xi) ⇐r2lhj(xi) ∪xi end for end for degree one ⇐{i ∈[0, M −1] | |r2li| = 1} while |degree one| ≥1 do rhs ⇐POP degree one lhs ⇐POP r2lrhs PUSH (lhs, rhs) onto matched for all rhs′ ∈l2rlhs do POP r2lrhs′ if |r2lrhs′| = 1 then degree one ⇐degree one ∪rhs′ end if end for end while if |matched| = |S| then return matched else return FAIL end if On the other hand, querying the model with an ngram that was not stored, i.e. with xi ∈U \ S we may erroneously return a value v ∈V. Since the fingerprint f(xi) is assumed to be distributed uniformly at random (u.a.r.) in [0, B −1], g(xi) is also u.a.r. in [0, B−1] for xi ∈U \S. Hence with |V| values stored in the model, the probability that xi ∈U \ S is assigned a value in v ∈V is Pr{g(xi) ∈V|xi ∈U \ S} = |V|/B. We refer to this event as a false positive. If V is fixed, we can obtain a false positive rate ϵ by setting B as B ≡|V|/ϵ. For example, if |V| is 128 then taking B = 1024 gives an error rate of ϵ = 128/1024 = 0.125 with each entry in A using ⌈log2 1024⌉= 10 bits. Clearly B must be at least |V| in order to distinguish each value. We refer to the additional bits allocated to each location (i.e. ⌈log2 B⌉−log2 |V| or 3 in our example) as error bits in our experiments below. 3.7 Constructing the Full Model When encoding a large set of n-gram/value pairs S, Algorithm 1 will only be practical if the raw data and graph can be held in memory as the perfect hash function is generated. This makes it difficult to encode an extremely large set S into a single array A. The solution we adopt is to split S into t smaller sets S′ i, i ∈[t] that are arranged in lexicographic order.4 We can then encode each subset in a separate array A′ i, i ∈[t] in turn in memory. Querying each of these arrays for each n-gram requested would be inefficient and inflate the error rate since a false positive could occur on each individual array. Instead we store an index of the final n-gram encoded in each array and given a request for an n-gram’s value, perform a binary search for the appropriate array. 3.8 Sanity Checks Our models are consistent in the following sense (w1, w2, . . . , wn) ∈S =⇒(w2, . . . , wn) ∈S. Hence we can infer that an n-gram can not be present in the model, if the n −1-gram consisting of the final n −1 words has already tested false. 
Following (Talbot and Osborne, 2007a) we can avoid unnecessary false positives by not querying for the longer n-gram in such cases. Backoff smoothing algorithms typically request the longest n-gram supported by the model first, requesting shorter n-grams only if this is not found. In our case, however, if a query is issued for the 5-gram (w1, w2, w3, w4, w5) when only the unigram (w5) is present in the model, the probability of a false positive using such a backoff procedure would not be ϵ as stated above, but rather the probability that we fail to avoid an error on any of the four queries performed prior to requesting the unigram, i.e. 1−(1−ϵ)4 ≈4ϵ. We therefore query the model first with the unigram working up to the full n-gram requested by the decoder only if the preceding queries test positive. The probability of returning a false positive for any ngram requested by the decoder (but not in the model) will then be at most ϵ. 4In our system we use subsets of 5 million n-grams which can easily be encoded using less than 2GB of working space. 509 4 Experimental Set-up 4.1 Distributed LM Framework We deploy the randomized LM in a distributed framework which allows it to scale more easily by distributing it across multiple language model servers. We encode the model stored on each languagage model server using the randomized scheme. The proposed randomized LM can encode parameters estimated using any smoothing scheme (e.g. Kneser-Ney, Katz etc.). Here we choose to work with stupid backoff smoothing (Brants et al., 2007) since this is significantly more efficient to train and deploy in a distributed framework than a contextdependent smoothing scheme such as Kneser-Ney. Previous work (Brants et al., 2007) has shown it to be appropriate to large-scale language modeling. 4.2 LM Data Sets The language model is trained on four data sets: target: The English side of Arabic-English parallel data provided by LDC (132 million tokens). gigaword: The English Gigaword dataset provided by LDC (3.7 billion tokens). webnews: Data collected over several years, up to January 2006 (34 billion tokens). web: The Web 1T 5-gram Version 1 corpus provided by LDC (1 trillion tokens).5 An initial experiment will use the Web 1T 5-gram corpus only; all other experiments will use a loglinear combination of models trained on each corpus. The combined model is pre-compiled with weights trained on development data by our system. 4.3 Machine Translation The SMT system used is based on the framework proposed in (Och and Ney, 2004) where translation is treated as the following optimization problem ˆe = arg max e M X i=1 λiΦi(e, f). (3) Here f is the source sentence that we wish to translate, e is a translation in the target language, Φi, i ∈ [M] are feature functions and λi, i ∈[M] are weights. (Some features may not depend on f.) 5N-grams with count < 40 are not included in this data set. Full Set Entropy-Pruned # 1-grams 13,588,391 13,588,391 # 2-grams 314,843,401 184,541,402 # 3-grams 977,069,902 439,430,328 # 4-grams 1,313,818,354 407,613,274 # 5-grams 1,176,470,663 238,348,867 Total 3,795,790,711 1,283,522,262 Table 1: Num. of n-grams in the Web 1T 5-gram corpus. 
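Before turning to the experiments, the bottom-up query order of Section 3.8 can be made concrete with a short sketch; lm.lookup is assumed to behave like the lookup sketch above, returning None when an n-gram is reported as unseen.

```python
def query_with_sanity_check(lm, ngram):
    """Query suffixes of `ngram` from shortest to longest (Section 3.8).

    Returns the value of the longest suffix that tests positive, together
    with its length, as a sketch of what a backoff scorer might use.
    """
    value, length = None, 0
    for i in range(len(ngram) - 1, -1, -1):      # (w_n), (w_{n-1}, w_n), ...
        suffix = ngram[i:]
        v = lm.lookup(suffix)
        if v is None:                            # consistency: longer n-grams are absent too
            break
        value, length = v, len(suffix)
    return value, length
```

Stopping at the first negative answer means an n-gram requested but not stored triggers at most one opportunity for a false positive, giving the at-most-epsilon bound stated above.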
5 Experiments This section describes three sets of experiments: first, we encode the Web 1T 5-gram corpus as a randomized language model and compare the resulting size with other representations; then we measure false positive rates when requesting n-grams for a held-out data set; finally we compare translation quality when using conventional (lossless) languages models and our randomized language model. Note that the standard practice of measuring perplexity is not meaningful here since (1) for efficient computation, the language model is not normalized; and (2) even if this were not the case, quantization and false positives would render it unnormalized. 5.1 Encoding the Web 1T 5-gram corpus We build a language model from the Web 1T 5-gram corpus. Parameters, corresponding to negative logarithms of relative frequencies, are quantized to 8-bits using a uniform quantizer. More sophisticated quantizers (e.g. (S. Lloyd, 1982)) may yield better results but are beyond the scope of this paper. Table 1 provides some statistics about the corpus. We first encode the full set of n-grams, and then a version that is reduced to approx. 1/3 of its original size using entropy pruning (Stolcke, 1998). Table 2 shows the total space and number of bytes required per n-gram to encode the model under different schemes: “LDC gzip’d” is the size of the files as delivered by LDC; “Trie” uses a compact trie representation (e.g., (Clarkson et al., 1997; Church et al., 2007)) with 3 byte word ids, 1 byte values, and 3 byte indices; “Block encoding” is the encoding used in (Brants et al., 2007); and “randomized” uses our novel randomized scheme with 12 error bits. The latter requires around 60% of the space of the next best representation and less than half of the com510 size (GB) bytes/n-gram Full Set LDC gzip’d 24.68 6.98 Trie 21.46 6.07 Block Encoding 18.00 5.14 Randomized 10.87 3.08 Entropy Pruned Trie 7.70 6.44 Block Encoding 6.20 5.08 Randomized 3.68 3.08 Table 2: Web 1T 5-gram language model sizes with different encodings. “Randomized” uses 12 error bits. monly used trie encoding. Our method is the only one to use the same amount of space per parameter for both full and entropy-pruned models. 5.2 False Positive Rates All n-grams explicitly inserted into our randomized language model are retrieved without error; however, n-grams not stored may be incorrectly assigned a value resulting in a false positive. Section (3) analyzed the theoretical error rate; here, we measure error rates in practice when retrieving n-grams for approx. 11 million tokens of previously unseen text (news articles published after the training data had been collected). We measure this separately for all n-grams of order 2 to 5 from the same text. The language model is trained on the four data sources listed above and contains 24 billion ngrams. With 8-bit parameter values, the model requires 55.2/69.0/82.7 GB storage when using 8/12/16 error bits respectively (this corresponds to 2.46/3.08/3.69 bytes/n-gram). Using such a large language model results in a large fraction of known n-grams in new text. Table 3 shows, e.g., that almost half of all 5-grams from the new text were seen in the training data. Column (1) in Table 4 shows the number of false positives that occurred for this test data. Column (2) shows this as a fraction of the number of unseen n-grams in the data. This number should be close to 2−b where b is the number of error bits (i.e. 0.003906 for 8 bits and 0.000244 for 12 bits). 
The error rates for bigrams are close to their expected values. The numbers are much lower for higher n-gram orders due to the use of sanity checks (see Section 3.8). total seen unseen 2gms 11,093,093 98.98% 1.02% 3gms 10,652,693 91.08% 8.92% 4gms 10,212,293 68.39% 31.61% 5gms 9,781,777 45.51% 54.49% Table 3: Number of n-grams in test set and percentages of n-grams that were seen/unseen in the training data. (1) (2) (3) false pos. false pos unseen false pos total 8 error bits 2gms 376 0.003339 0.000034 3gms 2839 0.002988 0.000267 4gms 6659 0.002063 0.000652 5gms 6356 0.001192 0.000650 total 16230 0.001687 0.000388 12 error bits 2gms 25 0.000222 0.000002 3gms 182 0.000192 0.000017 4gms 416 0.000129 0.000041 5gms 407 0.000076 0.000042 total 1030 0.000107 0.000025 Table 4: False positive rates with 8 and 12 error bits. The overall fraction of n-grams requested for which an error occurs is of most interest in applications. This is shown in Column (3) and is around a factor of 4 smaller than the values in Column (2). On average, we expect to see 1 error in around 2,500 requests when using 8 error bits, and 1 error in 40,000 requests with 12 error bits (see “total” row). 5.3 Machine Translation We run an improved version of our 2006 NIST MT Evaluation entry for the Arabic-English “Unlimited” data track.6 The language model is the same one as in the previous section. Table 5 shows baseline translation BLEU scores for a lossless (non-randomized) language model with parameter values quantized into 5 to 8 bits. We use MT04 data for system development, with MT05 data and MT06 (“NIST” subset) data for blind testing. As expected, results improve when using more bits. There seems to be little benefit in going beyond 6See http://www.nist.gov/speech/tests/mt/2006/doc/ 511 dev test test bits MT04 MT05 MT06 5 0.5237 0.5608 0.4636 6 0.5280 0.5671 0.4649 7 0.5299 0.5691 0.4672 8 0.5304 0.5697 0.4663 Table 5: Baseline BLEU scores with lossless n-gram model and different quantization levels (bits). 0.554 0.556 0.558 0.56 0.562 0.564 0.566 0.568 0.57 8 9 10 11 12 13 14 15 16 MT05 BLEU Number of Error Bits 8 bit values 7 bit values 6 bit values 5 bit values Figure 3: BLEU scores on the MT05 data set. 8 bits. Overall, our baseline results compare favorably to those reported on the NIST MT06 web site. We now replace the language model with a randomized version. Fig. 3 shows BLEU scores for the MT05 evaluation set with parameter values quantized into 5 to 8 bits and 8 to 16 additional ‘error’ bits. Figure 4 shows a similar graph for MT06 data. We again see improvements as quantization uses more bits. There is a large drop in performance when reducing the number of error bits from 10 to 8, while increasing it beyond 12 bits offers almost no further gains with scores that are almost identical to the lossless model. Using 8-bit quantization and 12 error bits results in an overall requirement of (8+12)×1.23 = 24.6 bits = 3.08 bytes per n-gram. All runs use the sanity checks described in Section 3.8. Without sanity checks, scores drop, e.g. by 0.002 for 8-bit quantization and 12 error bits. Randomization and entropy pruning can be combined to achieve further space savings with minimal loss in quality as shown in Table (6). The BLEU score drops by between 0.0007 to 0.0018 while the 0.454 0.456 0.458 0.46 0.462 0.464 0.466 0.468 8 9 10 11 12 13 14 15 16 MT06 (NIST) BLEU Number of Error Bits 8 bit values 7 bit values 6 bit values 5 bit values Figure 4: BLEU scores on MT06 data (“NIST” subset). 
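The space and error figures quoted in this section follow from simple arithmetic, sketched below. The 2^-b rate is the idealised bound for b error bits (the observed rates are lower because of the sanity checks), and the 1.23 factor is the array load used in Section 3.5; treating storage as array cells only is a simplifying assumption of this sketch.

```python
def space_and_error(value_bits, error_bits, n_grams, load=1.23):
    """Back-of-the-envelope space and error-rate figures."""
    bits_per_cell = value_bits + error_bits
    bytes_per_ngram = load * bits_per_cell / 8.0
    return {
        'false_positive_rate': 2.0 ** -error_bits,
        'bytes_per_ngram': bytes_per_ngram,
        'total_gb': n_grams * bytes_per_ngram / 2.0 ** 30,
    }

# 8-bit values with 12 error bits: (8 + 12) * 1.23 = 24.6 bits, i.e. about
# 3.08 bytes per n-gram and roughly 69 GB for the 24 billion n-grams above.
# space_and_error(8, 12, 24e9)
```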
size dev test test LM GB MT04 MT05 MT06 unpruned block 116 0.5304 0.5697 0.4663 unpruned rand 69 0.5299 0.5692 0.4659 pruned block 42 0.5294 0.5683 0.4665 pruned rand 27 0.5289 0.5679 0.4656 Table 6: Combining randomization and entropy pruning. All models use 8-bit values; “rand” uses 12 error bits. model is reduced to approx. 1/4 of its original size. 6 Conclusions We have presented a novel randomized language model based on perfect hashing. It can associate arbitrary parameter types with n-grams. Values explicitly inserted into the model are retrieved without error; false positives may occur but are controlled by the number of bits used per n-gram. The amount of storage needed is independent of the size of the vocabulary and the n-gram order. Lookup is very efficient: the values of 3 cells in a large array are combined with the fingerprint of an n-gram. Experiments have shown that this randomized language model can be combined with entropy pruning to achieve further memory reductions; that error rates occurring in practice are much lower than those predicted by theoretical analysis due to the use of runtime sanity checks; and that the same translation quality as a lossless language model representation can be achieved when using 12 ‘error’ bits, resulting in approx. 3 bytes per n-gram (this includes one byte to store parameter values). 512 References B. Bloom. 1970. Space/time tradeoffs in hash coding with allowable errors. CACM, 13:422–426. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of EMNLPCoNLL 2007, Prague. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Classbased n-gram models of natural language. Computational Linguistics, 18(4):467–479. Peter Brown, Stephen Della Pietra, Vincent Della Pietra, and Robert Mercer. 1993. The mathematics of machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Larry Carter, Robert W. Floyd, John Gill, George Markowsky, and Mark N. Wegman. 1978. Exact and approximate membership testers. In STOC, pages 59– 65. L. Carter and M. Wegman. 1979. Universal classes of hash functions. Journal of Computer and System Science, 18:143–154. Bernard Chazelle, Joe Kilian, Ronitt Rubinfeld, and Ayellet Tal. 2004. The Bloomier Filter: an efficient data structure for static support lookup tables. In Proc. 15th ACM-SIAM Symposium on Discrete Algoritms, pages 30–39. Kenneth Church, Ted Hart, and Jianfeng Gao. 2007. Compressing trigram language models with golomb coding. In Proceedings of EMNLP-CoNLL 2007, Prague, Czech Republic, June. P. Clarkson and R. Rosenfeld. 1997. Statistical language modeling using the CMU-Cambridge toolkit. In Proceedings of EUROSPEECH, vol. 1, pages 2707–2710, Rhodes, Greece. Ahmad Emami, Kishore Papineni, and Jeffrey Sorensen. 2007. Large-scale distributed language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2007, Hawaii, USA. J. Goodman and J. Gao. 2000. Language model size reduction by pruning and clustering. In ICSLP’00, Beijing, China. S. Lloyd. 1982. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129– 137. B.S. Majewski, N.C. Wormald, G. Havas, and Z.J. Czech. 1996. A family of perfect hashing methods. British Computer Journal, 39(6):547–554. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. 
Computational Linguistics, 30(4):417–449. Andreas Stolcke. 1998. Entropy-based pruning of backoff language models. In Proc. DARPA Broadcast News Transcription and Understanding Workshop, pages 270–274. D. Talbot and M. Osborne. 2007a. Randomised language modelling for statistical machine translation. In 45th Annual Meeting of the ACL 2007, Prague. D. Talbot and M. Osborne. 2007b. Smoothed Bloom filter language models: Tera-scale LMs on the cheap. In EMNLP/CoNLL 2007, Prague. 513
Proceedings of ACL-08: HLT, pages 514–522, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Applying Morphology Generation Models to Machine Translation Kristina Toutanova Microsoft Research Redmond, WA, USA [email protected] Hisami Suzuki Microsoft Research Redmond, WA, USA [email protected] Achim Ruopp Butler Hill Group Redmond, WA, USA [email protected] Abstract We improve the quality of statistical machine translation (SMT) by applying models that predict word forms from their stems using extensive morphological and syntactic information from both the source and target languages. Our inflection generation models are trained independently of the SMT system. We investigate different ways of combining the inflection prediction component with the SMT system by training the base MT system on fully inflected forms or on word stems. We applied our inflection generation models in translating English into two morphologically complex languages, Russian and Arabic, and show that our model improves the quality of SMT over both phrasal and syntax-based SMT systems according to BLEU and human judgements. 1 Introduction One of the outstanding problems for further improving machine translation (MT) systems is the difficulty of dividing the MT problem into sub-problems and tackling each sub-problem in isolation to improve the overall quality of MT. Evidence for this difficulty is the fact that there has been very little work investigating the use of such independent subcomponents, though we started to see some successful cases in the literature, for example in word alignment (Fraser and Marcu, 2007), target language capitalization (Wang et al., 2006) and case marker generation (Toutanova and Suzuki, 2007). This paper describes a successful attempt to integrate a subcomponent for generating word inflections into a statistical machine translation (SMT) system. Our research is built on previous work in the area of using morpho-syntactic information for improving SMT. Work in this area is motivated by two advantages offered by morphological analysis: (1) it provides linguistically motivated clustering of words and makes the data less sparse; (2) it captures morphological constraints applicable on the target side, such as agreement phenomena. This second problem is very difficult to address with wordbased translation systems, when the relevant morphological information in the target language is either non-existent or implicitly encoded in the source language. These two aspects of morphological processing have often been addressed separately: for example, morphological pre-processing of the input data is a common method of addressing the first aspect, e.g. (Goldwater and McClosky, 2005), while the application of a target language model has almost solely been responsible for addressing the second aspect. Minkov et al. (2007) introduced a way to address these problems by using a rich featurebased model, but did not apply the model to MT. In this paper, we integrate a model that predicts target word inflection in the translations of English into two morphologically complex languages (Russian and Arabic) and show improvements in the MT output. We study several alternative methods for integration and show that it is best to propagate uncertainty among the different components as shown by other research, e.g. 
(Finkel et al., 2006), and in some cases, to factor the translation problem so that the baseline MT system can take advantage of the reduction in sparsity by being able to work on word stems. We also demonstrate that our independently trained models are portable, showing that they can improve both syntactic and phrasal SMT systems. 514 2 Related work There has been active research on incorporating morphological knowledge in SMT. Several approaches use pre-processing schemes, including segmentation of clitics (Lee, 2004; Habash and Sadat, 2006), compound splitting (Nießen and Ney, 2004) and stemming (Goldwater and McClosky, 2005). Of these, the segmentation approach is difficult to apply when the target language is morphologically rich as the segmented morphemes must be put together in the output (El-Kahlout and Oflazer, 2006); and in fact, most work using pre-processing focused on translation into English. In recent work, Koehn and Hoang (2007) proposed a general framework for including morphological features in a phrase-based SMT system by factoring the representation of words into a vector of morphological features and allowing a phrase-based MT system to work on any of the factored representations, which is implemented in the Moses system. Though our motivation is similar to that of Koehn and Hoang (2007), we chose to build an independent component for inflection prediction in isolation rather than folding morphological information into the main translation model. While this may lead to search errors due to the fact that the models are not integrated as tightly as possible, it offers some important advantages, due to the very decoupling of the components. First, our approach is not affected by restrictions on the allowable context size or a phrasal segmentation that are imposed by current MT decoders. This also makes the model portable and applicable to different types of MT systems. Second, we avoid the problem of the combinatorial expansion in the search space which currently arises in the factored approach of Moses. Our inflection prediction model is based on (Minkov et al., 2007), who build models to predict the inflected forms of words in Russian and Arabic, but do not apply their work to MT. In contrast, we focus on methods of integration of an inflection prediction model with an MT system, and on evaluation of the model’s impact on translation. Other work closely related to ours is (Toutanova and Suzuki, 2007), which uses an independently trained case marker prediction model in an English-Japanese translation system, but it focuses on the problem of generating a small set of closed class words rather than generating inflected forms for each word in translation, and proposes different methods of integration of the components. 3 Inflection prediction models This section describes the task and our model for inflection prediction, following (Minkov et al., 2007). We define the task of inflection prediction as the task of choosing the correct inflections of given target language stems, given a corresponding source sentence. The stemming and inflection operations we use are defined by lexicons. 3.1 Lexicon operations For each target language we use a lexicon L which determines the following necessary operations: Stemming: returns the set of possible morphological stems Sw = {s1, ..., sl} for the word w according to L. 1 Inflection: returns the set of surface word forms Iw = {i1, ..., im} for the stems Sw according to L. 
Morphological analysis: returns the set of possible morphological analyses Aw = {a1, ..., av} for w. A morphological analysis a is a vector of categorical values, where each dimension and its possible values are defined by L. For the morphological analysis operation, we used the same set of morphological features described in (Minkov et al., 2007), that is, seven features for Russian (POS, Person, Number, Gender, Tense, Mood and Case) and 12 for Arabic (POS, Person, Number, Gender, Tense, Mood, Negation, Determiner, Conjunction, Preposition, Object and Possessive pronouns). Each word is factored into a stem (uninflected form) and a subset of these features, where features can have either binary (as in Determiner in Arabic) or multiple values. Some features are relevant only for a particular (set of) partof-speech (POS) (e.g., Gender is relevant only in nouns, pronouns, verbs, and adjectives in Russian), while others combine with practically all categories (e.g., Conjunction in Arabic). The number of possible inflected forms per stem is therefore quite large: as we see in Table 1 of Section 3, there are on average 14 word forms per stem in Russian and 24 in 1Alternatively, stemming can return a disambiguated stem analysis; in which case the set Sw consists of one item. The same is true with the operation of morphological analysis. 515 Arabic for our dataset. This makes the generation of correct forms a challenging problem in MT. The Russian lexicon was obtained by intersecting a general domain lexicon with our training data (Table 2), and the Arabic lexicon was obtained by running the Buckwalter morphological analyser (Buckwalter, 2004) on the training data. Contextual disambiguation of morphology was not performed in either of these languages. In addition to the forms supposed by our lexicon, we also treated capitalization as an inflectional feature in Russian, and defined all true-case word variants as possible inflections of its stem(s). Arabic does not use capitalization. 3.2 Task More formally, our task is as follows: given a source sentence e, a sequence of stems in the target language S1, . . . St, . . . Sn forming a translation of e, and additional morpho-syntactic annotations A derived from the input, select an inflection yt from its inflection set It for every stem set St in the target sentence. 3.3 Models We built a Maximum Entropy Markov model for inflection prediction following (Minkov et al., 2007). The model decomposes the probability of an inflection sequence into a product of local probabilities for the prediction for each word. The local probabilities are conditioned on the previous k predictions (k is set to four in Russian and two in Arabic in our experiments). The probability of a predicted inflection sequence, therefore, is given by: p(y | x) = n Y t=1 p(yt | yt−1...yt−k, xt), yt ∈It, where It is the set of inflections corresponding to St, and xt refers to the context at position t. The context available to the task includes extensive morphological and syntactic information obtained from the aligned source and target sentences. Figure 1 shows an example of an aligned English-Russian sentence pair: on the source (English) side, POS tags and word dependency structure are indicated by solid arcs. The alignments between English and Russian words are indicated by the dotted lines. 
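A small sketch of the decomposition just given: the probability of an inflection sequence is a product of local probabilities over the candidate sets I_t, and inflections can be chosen left to right. The local_log_prob callable stands in for the locally normalised maximum-entropy classifier (its feature set is described below), and greedy decoding is an assumption made only to keep the sketch short.

```python
def sequence_log_prob(inflections, context, local_log_prob, k=4):
    """log p(y | x) = sum_t log p(y_t | y_{t-1}..y_{t-k}, x_t); k=4 for Russian."""
    total = 0.0
    for t, y_t in enumerate(inflections):
        prev = tuple(inflections[max(0, t - k):t])
        total += local_log_prob(y_t, prev, context[t])
    return total

def greedy_inflect(inflection_sets, context, local_log_prob, k=4):
    """Pick an inflection y_t from each candidate set I_t, left to right.

    A beam or exact search over the sequence could be substituted; greedy
    decoding is used here only for brevity.
    """
    chosen = []
    for t, candidates in enumerate(inflection_sets):
        prev = tuple(chosen[max(0, t - k):t])
        chosen.append(max(candidates,
                          key=lambda y: local_log_prob(y, prev, context[t])))
    return chosen
```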
The dependency structure on the Russian side, indicated by solid arcs, is given by a treelet MT system (see Section 4.1), projected from the word dependency strucNN+sg+nom+neut the DET allocation of resources has completed NN+sg PREP NN+pl AUXV+sg VERB+pastpart распределение NN+pl+gen+masc ресурсов VERB+perf+pass+neut+sg завершено raspredelenie resursov zaversheno Figure 1: Aligned English-Russian sentence pair with syntactic and morphological annotation. ture of English and word alignment information. The features for our inflection prediction model are binary and pair up predicates on the context (¯x, yt−1...yt−k) and the target label (yt). The features at a certain position t can refer to any word in the source sentence, any word stem in the target language, or any morpho-syntactic information in A. This is the source of the power of a model used as an independent component – because it does not need to be integrated in the main search of an MT decoder, it is not subject to the decoder’s locality constraints, and can thus make use of more global information. 3.4 Performance on reference translations Table 1 summarizes the results of applying the inflection prediction model on reference translations, simulating the ideal case where the translations input to our model contain correct stems in correct order. We stemmed the reference translations, predicted the inflection for each stem, and measured the accuracy of prediction, using a set of sentences that were not part of the training data (1K sentences were used for Arabic and 5K for Russian).2 Our model performs significantly better than both the random and trigram language model baselines, and achieves an accuracy of over 91%, which suggests that the model is effective when its input is clean in its stem choice and order. Next, we apply our model in the more noisy but realistic scenario of predicting inflections of MT output sentences. 2The accuracy is based on the words in our lexicon. We define the stem of an out-of-vocabulary (OOV) word to be itself, so in the MT scenario described below, we will not predict the word forms for an OOV item, and will simply leave it unchanged. 516 Russian Arabic Random 16.4 8.7 LM 81.0 69.4 Model 91.6 91.0 Avg | I | 13.9 24.1 Table 1: Results on reference translations (accuracy, %). 4 Machine translation systems and data We integrated the inflection prediction model with two types of machine translation systems: systems that make use of syntax and surface phrase-based systems. 4.1 Treelet translation system This is a syntactically-informed MT system, designed following (Quirk et al., 2005). In this approach, translation is guided by treelet translation pairs, where a treelet is a connected subgraph of a syntactic dependency tree. Translations are scored according to a linear combination of feature functions. The features are similar to the ones used in phrasal systems, and their weights are trained using max-BLEU training (Och, 2003). There are nine feature functions in the treelet system, including log-probabilities according to inverted and direct channel models estimated by relative frequency, lexical weighting channel models following Vogel et al. (2003), a trigram target language model, two order models, word count, phrase count, and average phrase size functions. The treelet translation model is estimated using a parallel corpus. 
First, the corpus is word-aligned using an implementation of lexicalized-HMMs (He, 2007); then the source sentences are parsed into a dependency structure, and the dependency is projected onto the target side following the heuristics described in (Quirk et al., 2005). These aligned sentence pairs form the training data of the inflection models as well. An example was given in Figure 1. 4.2 Phrasal translation system This is a re-implementation of the Pharaoh translation system (Koehn, 2004). It uses the same lexicalized-HMM model for word alignment as the treelet system, and uses the standard extraction heuristics to extract phrase pairs using forward and backward alignments. In decoding, the system uses a linear combination of feature functions whose weights are trained using max-BLEU training. The features include log-probabilities according to inverted and direct channel models estimated by relative frequency, lexical weighting channel models, a trigram target language model, distortion, word count and phrase count. 4.3 Data sets For our English-Russian and English-Arabic experiments, we used data from a technical (computer) domain. For each language pair, we used a set of parallel sentences (train) for training the MT system submodels (e.g., phrase tables, language model), a set of parallel sentences (lambda) for training the combination weights with max-BLEU training, a set of parallel sentences (dev) for training a small number of combination parameters for our integration methods (see Section 5), and a set of parallel sentences (test) for final evaluation. The details of these sets are shown in Table 2. The training data for the inflection models is always a subset of the training set (train). All MT systems for a given language pair used the same datasets. Dataset sent pairs word tokens (avg/sent) English-Russian English Russian train 1,642K 24,351K (14.8) 22,002K (13.4) lambda 2K 30K (15.1) 27K (13.7) dev 1K 14K (13.9) 13K (13.5) test 4K 61K (15.3) 60K(14.9) English-Arabic English Arabic train 463K 5,223K (11.3) 4,761K (10.3) lambda 2K 22K (11.1) 20K (10.0) dev 1K 11K (11.1) 10K (10.0) test 4K 44K (11.0) 40K (10.1) Table 2: Data set sizes, rounded up to the nearest 1000. 5 Integration of inflection models with MT systems We describe three main methods of integration we have considered. The methods differ in the extent to which the factoring of the problem into two subproblems — predicting stems and predicting inflections — is reflected in the base MT systems. In the first method, the MT system is trained to produce fully inflected target words and the inflection model can change the inflections. In the other two methods, the 517 MT system is trained to produce sequences of target language stems S, which are then inflected by the inflection component. Before we motivate these methods, we first describe the general framework for integrating our inflection model into the MT system. For each of these methods, we assume that the output of the base MT system can be viewed as a ranked list of translation hypotheses for each source sentence e. More specifically, we assume an output {S1,S2,.. . ,Sm} of m-best translations which are sequences of target language stems. The translations further have scores {w1,w2,. . . ,wm} assigned by the base MT system. We also assume that each translation hypothesis Si together with source sentence e can be annotated with the annotation A, as illustrated in Figure 1. 
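The m-best input that the integration framework expects from the base MT system — stem sequences S_i, their model scores w_i, and the annotations A — can be captured in a small container like the following. This is only a sketch; the field names and toy stem strings are illustrative.

```python
from dataclasses import dataclass, field
from typing import List, Dict, Any

@dataclass
class StemHypothesis:
    """One of the m-best outputs S_i handed to the inflection component."""
    stems: List[str]                    # target-language stem sequence S_i
    score: float                        # base MT model score w_i
    annotations: Dict[str, Any] = field(default_factory=dict)
    # e.g. annotations["alignment"], annotations["dep_heads"], ...  (A)

nbest = [StemHypothesis(["raspredelenie", "resurs", "zavershit"], -10.2),
         StemHypothesis(["raspredelenie", "resurs", "konchit"], -11.7)]
```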
We discuss how we convert the output of the base MT systems to this form in the subsections below. Given such a list of candidate stem sequences, the base MT model together with the inflection model and a language model choose a translation Y∗as follows: (1) Yi = arg maxY ′ i ∈Infl(Si)λ1logPIM(Y ′ i |Si)+ λ2logPLM(Y ′ i ), i = 1 . . . n (2) Y ∗ = arg maxi=1...n λ1logPIM(Yi|Si) + λ2logPLM(Yi) + λ3wi In these formulas, the dependency on e and A is omitted for brevity in the expression for the probability according to the inflection model PIM. PLM(Y ′ i ) is the joint probability of the sequence of inflected words according to a trigram language model (LM). The LM used for the integration is the same LM used in the base MT system that is trained on fully inflected word forms (the base MT system trained on stems uses an LM trained on a stem sequence). Equation (1) shows that the model first selects the best sequence of inflected forms for each MT hypothesis Si according to the LM and the inflection model. Equation (2) shows that from these n fully inflected hypotheses, the model then selects the one which has the best score, combined with the base MT score wi for Si. We should note that this method does not represent standard n-best reranking because the input from the base MT system contains sequences of stems, and the model is generating fully inflected translations from them. Thus the chosen translation may not be in the provided nbest list. This method is more similar to the one used in (Wang et al., 2006), with the difference that they use only 1-best input from a base MT system. The interpolation weights λ in Equations (1) and (2) as well as the optimal number of translations n from the base MT system to consider, given a maximum of m=100 hypotheses, are trained using a separate dataset. We performed a grid search on the values of λ and n, to maximize the BLEU score of the final system on a development set (dev) of 1000 sentences (Table 2). The three methods of integration differ in the way the base MT engine is applied. Since we always discard the choices of specific inflected forms for the target stems by converting candidate translations to sequences of stems, it is interesting to know whether we need a base MT system that produces fully inflected translations or whether we can do as well or better by training the base MT systems to produce sequences of stems. Stemming the target sentences is expected to be helpful for word alignment, especially when the stemming operation is defined so that the word alignment becomes more one-toone (Goldwater and McClosky, 2005). In addition, stemming the target sentences reduces the sparsity in the translation tables and language model, and is likely to impact positively the performance of an MT system in terms of its ability to recover correct sequences of stems in the target. Also, machine learning tells us that solving a more complex problem than we are evaluated on (in our case for the base MT, predicting stems together with their inflections instead of just predicting stems) is theoretically unjustified (Vapnik, 1995). However, for some language pairs, stemming one language can make word alignment worse, if it leads to more violations in the assumptions of current word alignment models, rather than making the source look more like the target. In addition, using a trigram LM on stems may lead to larger violations of the Markov independence assumptions, than using a trigram LM on fully inflected words. 
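Equations (1) and (2) above correspond to the following two-step selection procedure. This is a simplified sketch that enumerates inflection sequences exhaustively; the real system scores sequences with the trigram LM and inflection model directly, and λ1, λ2, λ3 and n are set by the grid search described above.

```python
import itertools

def best_inflection_seq(stems, infl_sets, log_p_im, log_p_lm, l1, l2):
    """Equation (1): for one stem hypothesis S_i, choose the inflected
    sequence Y_i maximizing l1*log P_IM(Y|S_i) + l2*log P_LM(Y)."""
    candidates = itertools.product(*(infl_sets[s] for s in stems))
    return max(candidates,
               key=lambda y: l1 * log_p_im(y, stems) + l2 * log_p_lm(y))

def select_translation(nbest_stems, base_scores, infl_sets,
                       log_p_im, log_p_lm, l1, l2, l3, n):
    """Equation (2): re-inflect the top-n stem hypotheses and return the one
    with the best combined inflection-model + LM + base-MT score."""
    best_y, best_score = None, float("-inf")
    for stems, w in list(zip(nbest_stems, base_scores))[:n]:
        y = best_inflection_seq(stems, infl_sets, log_p_im, log_p_lm, l1, l2)
        score = l1 * log_p_im(y, stems) + l2 * log_p_lm(y) + l3 * w
        if score > best_score:
            best_y, best_score = y, score
    return best_y

# Toy usage with trivial component models (stems and forms are illustrative):
infl_sets = {"resurs": ["resursov", "resursy"], "zavershit": ["zaversheno"]}
print(select_translation([["resurs", "zavershit"]], [-5.0], infl_sets,
                         log_p_im=lambda y, s: 0.0,
                         log_p_lm=lambda y: -len(" ".join(y)),
                         l1=1.0, l2=0.5, l3=1.0, n=1))
```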
Thus, if we apply the exact same base MT system to use stemmed forms in alignment and/or translation, it is not a priori clear whether we would get a better result than if we apply the system to use fully inflected forms. 518 5.1 Method 1 In this method, the base MT system is trained in the usual way, from aligned pairs of source sentences and fully inflected target sentences. The inflection model is then applied to re-inflect the 1-best or m-best translations and to select an output translation. The hypotheses in the m-best output from the base MT system are stemmed and the scores of the stemmed hypotheses are assumed to be equal to the scores of the original ones.3 Thus we obtain input of the needed form, consisting of m sequences of target language stems along with scores. For this and other methods, if we are working with an m-best list from the treelet system, every translation hypothesis contains the annotations A that our model needs, because the system maintains the alignment, parse trees, etc., as part of its search space. Thus we do not need to do anything further to obtain input of the form necessary for application of the inflection model. For the phrase-based system, we generated the annotations needed by first parsing the source sentence e, aligning the source and candidate translations with the word-alignment model used in training, and projected the dependency tree to the target using the algorithm of (Quirk et al., 2005). Note that it may be better to use the word alignment maintained as part of the translation hypotheses during search, but our solution is more suitable to situations where these can not be easily obtained. For all methods, we study two settings for integration. In the first, we only consider (n=1) hypotheses from the base MT system. In the second setting, we allow the model to use up to 100 translations, and to automatically select the best number to use. As seen in Table 3, (n=16) translations were chosen for Russian and as seen in Table 5, (n=2) were chosen for Arabic for this method. 5.2 Method 2 In this method, the base MT system is trained to produce sequences of stems in the target language. The most straightforward way to achieve this is to stem the training parallel data and to train the MT system using this input. This is our Method 3 described 3It may be better to take the max of the scores for a stem sequence occurring more than once in the list, or take the logsum-exp of the scores. below. We formulated Method 2 as an intermediate step, to decouple the impact of stemming at the alignment and translation stages. In Method 2, word alignment is performed using fully inflected target language sentences. After alignment, the target language is stemmed and the base MT systems’ sub-models are trained using this stemmed input and alignment. In addition to this word-aligned corpus the MT systems use another product of word alignment: the IBM model 1 translation tables. Because the trained translation tables of IBM model 1 use fully inflected target words, we generated stemmed versions of the translation tables by applying the rules of probability. 5.3 Method 3 In this method the base MT system produces sequences of target stems. It is trained in the same way as the baseline MT system, except its input parallel training data are preprocessed to stem the target sentences. In this method, stemming can impact word alignment in addition to the translation models. 
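The stemming of the IBM Model 1 translation tables in Method 2 ("applying the rules of probability") presumably amounts to summing p(w|e) over all inflected forms w that share a stem. The following sketch works under that assumption, with a crude prefix truncation standing in for the real stemmer.

```python
from collections import defaultdict

def stem_translation_table(t_table, stem_of):
    """Collapse p(target_word | source_word) into p(target_stem | source_word):
    p(s | e) = sum over inflected w with stem(w) = s of p(w | e)."""
    stemmed = defaultdict(float)
    for (src, tgt), prob in t_table.items():
        stemmed[(src, stem_of(tgt))] += prob
    return dict(stemmed)

# Toy table with inflected variants of one Russian stem (illustrative only).
t = {("allocation", "raspredelenie"): 0.4,
     ("allocation", "raspredeleniya"): 0.3,
     ("allocation", "raspredeleniyu"): 0.1,
     ("allocation", "raspredelenii"): 0.05}
print(stem_translation_table(t, lambda w: w[:12]))  # crude prefix "stemmer"
```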
6 MT performance results Before delving into the results for each method, we discuss our evaluation measures. For automatically measuring performance, we used 4-gram BLEU against a single reference translation. We also report oracle BLEU scores which incorporate two kinds of oracle knowledge. For the methods using n=1 translation from a base MT system, the oracle BLEU score is the BLEU score of the stemmed translation compared to the stemmed reference, which represents the upper bound achievable by changing only the inflected forms (but not stems) of the words in a translation. For models using n > 1 input hypotheses, the oracle also measures the gain from choosing the best possible stem sequence in the provided (m=100-best) hypothesis list, in addition to choosing the best possible inflected forms for these stems. For the models in the tables, even if, say, n=16 was chosen in parameter fitting, the oracle is measured on the initially provided list of 100-best. 6.1 English-Russian treelet system Table 3 shows the results of the baseline and the model using the different methods for the treelet MT system on English-Russian. The baseline is the 519 Model BLEU Oracle BLEU Base MT (n=1) 29.24 Method 1 (n=1) 30.44 36.59 Method 1 (n=16) 30.61 45.33 Method 2 (n=1) 30.79 37.38 Method 2 (n=16) 31.24 48.48 Method 3 (n=1) 31.42 38.06 Method 3 (n=32) 31.80 49.19 Table 3: Test set performance for English-to-Russian MT (BLEU) results by model using a treelet MT system. treelet system described in Section 4.1 and trained on the data in Table 2. We can see that Method 1 results in a good improvement of 1.2 BLEU points, even when using only the best (n = 1) translation from the baseline. The oracle improvement achievable by predicting inflections is quite substantial: more than 7 BLEU points. Propagating the uncertainty of the baseline system by using more input hypotheses consistently improves performance across the different methods, with an additional improvement of between .2 and .4 BLEU points. From the results of Method 2 we can see that reducing sparsity at translation modeling is advantageous. Both the oracle BLEU of the first hypothesis and the achieved performance of the model improved; the best performance achieved by Method 2 is .63 points higher than the performance of Method 1. We should note that the oracle performance for Method 2, n > 1 is measured using 100-best lists of target stem sequences, whereas the one for Method 1 is measured using 100-best lists of inflected target words. This can be a disadvantage for Method 1, because a 100-best list of inflected translations actually contains about 50 different sequences of stems (the rest are distinctions in inflections). Nevertheless, even if we measure the oracle for Method 2 using 40-best, it is higher than the 100-best oracle of Method 1. In addition, it appears that using a hypothesis list larger than n > 1=100 is not be helpful for our method, as the model chose to use only up to 32 hypotheses. Finally, we can see that using stemming at the word alignment stage further improved both the oracle and the achieved results. The performance of the best model is 2.56 BLEU points better than the baseline. Since stemming in Russian for the most part removes properties of words which are not expressed in English at the word level, these results are consistent with previous results using stemming to improve word alignment. 
From these results, we also see that about half of the gain from using stemming in the base MT system came from improving word alignment, and half came from using translation models operating at the less sparse stem level. Overall, the improvement achieved by predicting morphological properties of Russian words with a feature-rich component model is substantial, given the relatively large size of the training data (1.6 million sentences), and indicates that these kinds of methods are effective in addressing the problems in translating morphology-poor to morphology-rich languages. 6.2 English-Russian phrasal system For the phrasal system, we performed integration only with Method 1, using the top 1 or 100best translations. This is the most straightforward method for combining with any system, and we applied it as a proof-of-concept experiment. Model BLEU Oracle BLEU Base MT (n=1) 36.00 Method 1 (n=1) 36.43 42.33 Method 1 (n=100) 36.72 55.00 Table 4: Test set performance for English-to-Russian MT (BLEU) results by model using a phrasal MT system. The phrasal MT system is trained on the same data as the treelet system. The phrase size and distortion limit were optimized (we used phrase size of 7 and distortion limit of 3). This system achieves a substantially better BLEU score (by 6.76) than the treelet system. The oracle BLEU score achievable by Method 1 using n=1 translation, though, is still 6.3 BLEU point higher than the achieved BLEU. Our model achieved smaller improvements for the phrasal system (0.43 improvement for n=1 translations and 0.72 for the selected n=100 translations). However, this improvement is encouraging given the large size of the training data. One direction for potentially improving these results is to use word alignments from the MT system, rather than using an alignment model to predict them. 520 Model BLEU Oracle BLEU Base MT (n=1) 35.54 Method 1 (n=1) 37.24 42.29 Method 1 (n=2) 37.41 52.21 Method 2 (n=1) 36.53 42.46 Method 2 (n=4) 36.72 54.74 Method 3 (n=1) 36.87 42.96 Method 3 (n=2) 36.92 54.90 Table 5: Test set performance for English-to-Arabic MT (BLEU) results by model using a treelet MT system. 6.3 English-Arabic treelet system The Arabic system also improves with the use of our mode: the best system (Method 1, n=2) achieves the BLEU score of 37.41, a 1.87 point improvement over the baseline. Unlike the case of Russian, Method 2 and 3 do not achieve better results than Method 1, though the oracle BLEU score improves in these models (54.74 and 54.90 as opposed to 52.21 of Method 1). We do notice, however, that the oracle improvement for the 1-best analysis is much smaller than what we obtained in Russian. We have been unable to closely diagnose why performance did not improve using Methods 2 and 3 so far due to the absence of expertise in Arabic, but one factor we suspect is affecting performance the most in Arabic is the definition of stemming: the effect of stemming is most beneficial when it is applied specifically to normalize the distinctions not explicitly encoded in the other language; it may hurt performance otherwise. We believe that in the case of Arabic, this latter situation is actually happening: grammatical properties explicitly encoded in English (e.g., definiteness, conjunction, pronominal clitics) are lost when the Arabic words are stemmed. This may be having a detrimental effect on the MT systems that are based on stemmed input. Further investigation is necessary to confirm this hypothesis. 
6.4 Human evaluation In this section we briefly report the results of human evaluation on the output of our inflection prediction system, as the correlation between BLEU scores and human evaluation results is not always obvious. We compared the output of our component against the best output of the treelet system without our component. We evaluated the following three scenarios: (1) Arabic Method 1 with n=1, which corresponds to the best performing system in BLEU according to Table 5; (2) Russian, Method 1 with n=1; (3) Russian, Method 3 with n=32, which corresponds to the best performing system in BLEU in Table 3. Note that in (1) and (2), the only differences in the compared outputs are the changes in word inflections, while in (3) the outputs may differ in the selection of the stems. In all scenarios, two human judges (native speakers of these languages) evaluated 100 sentences that had different translations by the baseline system and our model. The judges were given the reference translations but not the source sentences, and were asked to classify each sentence pair into three categories: (1) the baseline system is better (score=-1), (2) the output of our model is better (score=1), or (3) they are of the same quality (score=0). human eval score BLEU diff Arabic Method 1 0.1 1.9 Russian Method 1 0.255 1.2 Russian Method 3 0.26 2.6 Table 6: Human evaluation results Table 6 shows the results of the averaged, aggregated score across two judges per evaluation scenario, along with the BLEU score improvements achieved by applying our model. We see that in all cases, the human evaluation scores are positive, indicating that our models produce translations that are better than those produced by the baseline system. 4 We also note that in Russian, the human evaluation scores are similar for Method 1 and 3 (0.255 and 0.26), though the BLEU score gains are quite different (1.2 vs 2.6). This may be attributed to the fact that human evaluation typically favors the scenario where only word inflections are different (Toutanova and Suzuki, 2007). 7 Conclusion and future work We have shown that an independent model of morphology generation can be successfully integrated with an SMT system, making improvements in both phrasal and syntax-based MT. In the future, we would like to include more sophistication in the design of a lexicon for a particular language pair based on error analysis, and extend our pre-processing to include other operations such as word segmentation. 4However, the improvement in Arabic is not statistically significant on this 100 sentence set. 521 References Tim Buckwalter. 2004. Buckwalter arabic morphological analyzer version 2.0. Ilknur Durgar El-Kahlout and Kemal Oflazer. 2006. Initial explorations in English to Turkish statistical machine translation. In NAACL workshop on statistical machine translation. Jenny Finkel, Christopher Manning, and Andrew Ng. 2006. Solving the problem of cascading errors: approximate Bayesian inference for linguistic annotation pipelines. In EMNLP. Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3):293–303. Sharon Goldwater and David McClosky. 2005. Improving statistical MT through morphological analysis. In EMNLP. Nizar Habash and Fatiha Sadat. 2006. Arabic preprocessing schemes for statistical machine translation. In HLT-NAACL. Xiaodong He. 2007. Using word-dependent transition models in HMM based word alignment for statistical machine translation. 
In ACL Workshop on Statistical Machine Translation. Philipp Koehn and Hieu Hoang. 2007. Factored translation models. In EMNLP-CoNNL. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. In AMTA. Young-Suk Lee. 2004. Morphological analysis for statistical machine translation. In HLT-NAACL. Einat Minkov, Kristina Toutanova, and Hisami Suzuki. 2007. Generating complex morphology for machine translation. In ACL. Sonja Nießen and Hermann Ney. 2004. Statistical machine translation with scarce resources using morphosyntactic information. Computational Linguistics, 30(2):181–204. Franz Och. 2003. Minimum error rate training for statistical machine translation. In ACL. Chris Quirk, Arul Menezes, and Colin Cherry. 2005. Dependency tree translation: Syntactically informed phrasal SMT. In ACL. Kristina Toutanova and Hisami Suzuki. 2007. Generating case markers in machine translation. In NAACL-HLT. Vladimir Vapnik. 1995. The nature of Statistical Learning Theory. Springer-Verlag. Wei Wang, Kevin Knight, and Daniel Marcu. 2006. Capitalizing machine translation. In HLT-NAACL. 522
Proceedings of ACL-08: HLT, pages 46–54, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Task-oriented Evaluation of Syntactic Parsers and Their Representations Yusuke Miyao† Rune Sætre† Kenji Sagae† Takuya Matsuzaki† Jun’ichi Tsujii†‡∗ †Department of Computer Science, University of Tokyo, Japan ‡School of Computer Science, University of Manchester, UK ∗National Center for Text Mining, UK {yusuke,rune.saetre,sagae,matuzaki,tsujii}@is.s.u-tokyo.ac.jp Abstract This paper presents a comparative evaluation of several state-of-the-art English parsers based on different frameworks. Our approach is to measure the impact of each parser when it is used as a component of an information extraction system that performs protein-protein interaction (PPI) identification in biomedical papers. We evaluate eight parsers (based on dependency parsing, phrase structure parsing, or deep parsing) using five different parse representations. We run a PPI system with several combinations of parser and parse representation, and examine their impact on PPI identification accuracy. Our experiments show that the levels of accuracy obtained with these different parsers are similar, but that accuracy improvements vary when the parsers are retrained with domain-specific data. 1 Introduction Parsing technologies have improved considerably in the past few years, and high-performance syntactic parsers are no longer limited to PCFG-based frameworks (Charniak, 2000; Klein and Manning, 2003; Charniak and Johnson, 2005; Petrov and Klein, 2007), but also include dependency parsers (McDonald and Pereira, 2006; Nivre and Nilsson, 2005; Sagae and Tsujii, 2007) and deep parsers (Kaplan et al., 2004; Clark and Curran, 2004; Miyao and Tsujii, 2008). However, efforts to perform extensive comparisons of syntactic parsers based on different frameworks have been limited. The most popular method for parser comparison involves the direct measurement of the parser output accuracy in terms of metrics such as bracketing precision and recall, or dependency accuracy. This assumes the existence of a gold-standard test corpus, such as the Penn Treebank (Marcus et al., 1994). It is difficult to apply this method to compare parsers based on different frameworks, because parse representations are often framework-specific and differ from parser to parser (Ringger et al., 2004). The lack of such comparisons is a serious obstacle for NLP researchers in choosing an appropriate parser for their purposes. In this paper, we present a comparative evaluation of syntactic parsers and their output representations based on different frameworks: dependency parsing, phrase structure parsing, and deep parsing. Our approach to parser evaluation is to measure accuracy improvement in the task of identifying protein-protein interaction (PPI) information in biomedical papers, by incorporating the output of different parsers as statistical features in a machine learning classifier (Yakushiji et al., 2005; Katrenko and Adriaans, 2006; Erkan et al., 2007; Sætre et al., 2007). PPI identification is a reasonable task for parser evaluation, because it is a typical information extraction (IE) application, and because recent studies have shown the effectiveness of syntactic parsing in this task. Since our evaluation method is applicable to any parser output, and is grounded in a real application, it allows for a fair comparison of syntactic parsers based on different frameworks. 
Parser evaluation in PPI extraction also illuminates domain portability. Most state-of-the-art parsers for English were trained with the Wall Street Journal (WSJ) portion of the Penn Treebank, and high accuracy has been reported for WSJ text; however, these parsers rely on lexical information to attain high accuracy, and it has been criticized that these parsers may overfit to WSJ text (Gildea, 2001; 46 Klein and Manning, 2003). Another issue for discussion is the portability of training methods. When training data in the target domain is available, as is the case with the GENIA Treebank (Kim et al., 2003) for biomedical papers, a parser can be retrained to adapt to the target domain, and larger accuracy improvements are expected, if the training method is sufficiently general. We will examine these two aspects of domain portability by comparing the original parsers with the retrained parsers. 2 Syntactic Parsers and Their Representations This paper focuses on eight representative parsers that are classified into three parsing frameworks: dependency parsing, phrase structure parsing, and deep parsing. In general, our evaluation methodology can be applied to English parsers based on any framework; however, in this paper, we chose parsers that were originally developed and trained with the Penn Treebank or its variants, since such parsers can be re-trained with GENIA, thus allowing for us to investigate the effect of domain adaptation. 2.1 Dependency parsing Because the shared tasks of CoNLL-2006 and CoNLL-2007 focused on data-driven dependency parsing, it has recently been extensively studied in parsing research. The aim of dependency parsing is to compute a tree structure of a sentence where nodes are words, and edges represent the relations among words. Figure 1 shows a dependency tree for the sentence “IL-8 recognizes and activates CXCR1.” An advantage of dependency parsing is that dependency trees are a reasonable approximation of the semantics of sentences, and are readily usable in NLP applications. Furthermore, the efficiency of popular approaches to dependency parsing compare favorable with those of phrase structure parsing or deep parsing. While a number of approaches have been proposed for dependency parsing, this paper focuses on two typical methods. MST McDonald and Pereira (2006)’s dependency parser,1 based on the Eisner algorithm for projective dependency parsing (Eisner, 1996) with the secondorder factorization. 1http://sourceforge.net/projects/mstparser Figure 1: CoNLL-X dependency tree Figure 2: Penn Treebank-style phrase structure tree KSDEP Sagae and Tsujii (2007)’s dependency parser,2 based on a probabilistic shift-reduce algorithm extended by the pseudo-projective parsing technique (Nivre and Nilsson, 2005). 2.2 Phrase structure parsing Owing largely to the Penn Treebank, the mainstream of data-driven parsing research has been dedicated to the phrase structure parsing. These parsers output Penn Treebank-style phrase structure trees, although function tags and empty categories are stripped off (Figure 2). While most of the state-of-the-art parsers are based on probabilistic CFGs, the parameterization of the probabilistic model of each parser varies. In this work, we chose the following four parsers. NO-RERANK Charniak (2000)’s parser, based on a lexicalized PCFG model of phrase structure trees.3 The probabilities of CFG rules are parameterized on carefully hand-tuned extensive information such as lexical heads and symbols of ancestor/sibling nodes. 
RERANK Charniak and Johnson (2005)’s reranking parser. The reranker of this parser receives nbest4 parse results from NO-RERANK, and selects the most likely result by using a maximum entropy model with manually engineered features. BERKELEY Berkeley’s parser (Petrov and Klein, 2007).5 The parameterization of this parser is op2http://www.cs.cmu.edu/˜sagae/parser/ 3http://bllip.cs.brown.edu/resources.shtml 4We set n = 50 in this paper. 5http://nlp.cs.berkeley.edu/Main.html#Parsing 47 Figure 3: Predicate argument structure timized automatically by assigning latent variables to each nonterminal node and estimating the parameters of the latent variables by the EM algorithm (Matsuzaki et al., 2005). STANFORD Stanford’s unlexicalized parser (Klein and Manning, 2003).6 Unlike NO-RERANK, probabilities are not parameterized on lexical heads. 2.3 Deep parsing Recent research developments have allowed for efficient and robust deep parsing of real-world texts (Kaplan et al., 2004; Clark and Curran, 2004; Miyao and Tsujii, 2008). While deep parsers compute theory-specific syntactic/semantic structures, predicate argument structures (PAS) are often used in parser evaluation and applications. PAS is a graph structure that represents syntactic/semantic relations among words (Figure 3). The concept is therefore similar to CoNLL dependencies, though PAS expresses deeper relations, and may include reentrant structures. In this work, we chose the two versions of the Enju parser (Miyao and Tsujii, 2008). ENJU The HPSG parser that consists of an HPSG grammar extracted from the Penn Treebank, and a maximum entropy model trained with an HPSG treebank derived from the Penn Treebank.7 ENJU-GENIA The HPSG parser adapted to biomedical texts, by the method of Hara et al. (2007). Because this parser is trained with both WSJ and GENIA, we compare it parsers that are retrained with GENIA (see section 3.3). 3 Evaluation Methodology In our approach to parser evaluation, we measure the accuracy of a PPI extraction system, in which 6http://nlp.stanford.edu/software/lex-parser. shtml 7http://www-tsujii.is.s.u-tokyo.ac.jp/enju/ This study demonstrates that IL-8 recognizes and activates CXCR1, CXCR2, and the Duffy antigen by distinct mechanisms. The molar ratio of serum retinol-binding protein (RBP) to transthyretin (TTR) is not useful to assess vitamin A status during infection in hospitalised children. Figure 4: Sentences including protein names ENTITY1(IL-8) SBJ −→recognizes OBJ ←−ENTITY2(CXCR1) Figure 5: Dependency path the parser output is embedded as statistical features of a machine learning classifier. We run a classifier with features of every possible combination of a parser and a parse representation, by applying conversions between representations when necessary. We also measure the accuracy improvements obtained by parser retraining with GENIA, to examine the domain portability, and to evaluate the effectiveness of domain adaptation. 3.1 PPI extraction PPI extraction is an NLP task to identify protein pairs that are mentioned as interacting in biomedical papers. Because the number of biomedical papers is growing rapidly, it is impossible for biomedical researchers to read all papers relevant to their research; thus, there is an emerging need for reliable IE technologies, such as PPI identification. Figure 4 shows two sentences that include protein names: the former sentence mentions a protein interaction, while the latter does not. 
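A dependency path such as the one in Figure 5 can be read off a dependency tree with a routine along these lines. This is only a sketch: the token indices, head assignments, and output format are illustrative, not the system's internal representation.

```python
def dependency_path(heads, labels, tokens, a, b):
    """Path between tokens a and b in a dependency tree.

    `heads[i]` is the index of token i's head (-1 for the root) and
    `labels[i]` the relation to that head.  Returns an arrow string in the
    spirit of Figure 5; the tree below is hand-built, not real parser output."""
    def chain(i):
        nodes = [i]
        while heads[i] != -1:
            i = heads[i]
            nodes.append(i)
        return nodes
    up, down = chain(a), chain(b)
    pivot = next(i for i in up if i in down)        # lowest common ancestor
    left = "".join(f"{tokens[i]} -{labels[i]}-> " for i in up[:up.index(pivot)])
    right = "".join(f" <-{labels[i]}- {tokens[i]}"
                    for i in reversed(down[:down.index(pivot)]))
    return left + tokens[pivot] + right

tokens = ["ENTITY1", "recognizes", "and", "activates", "ENTITY2"]
heads  = [1, -1, 1, 1, 1]                  # hypothetical head assignments
labels = ["SBJ", "ROOT", "COORD", "CONJ", "OBJ"]
print(dependency_path(heads, labels, tokens, 0, 4))
# -> ENTITY1 -SBJ-> recognizes <-OBJ- ENTITY2
```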
Given a protein pair, PPI extraction is a task of binary classification; for example, ⟨IL-8, CXCR1⟩is a positive example, and ⟨RBP, TTR⟩is a negative example. Recent studies on PPI extraction demonstrated that dependency relations between target proteins are effective features for machine learning classifiers (Katrenko and Adriaans, 2006; Erkan et al., 2007; Sætre et al., 2007). For the protein pair IL-8 and CXCR1 in Figure 4, a dependency parser outputs a dependency tree shown in Figure 1. From this dependency tree, we can extract a dependency path shown in Figure 5, which appears to be a strong clue in knowing that these proteins are mentioned as interacting. 48 (dep_path (SBJ (ENTITY1 recognizes)) (rOBJ (recognizes ENTITY2))) Figure 6: Tree representation of a dependency path We follow the PPI extraction method of Sætre et al. (2007), which is based on SVMs with SubSet Tree Kernels (Collins and Duffy, 2002; Moschitti, 2006), while using different parsers and parse representations. Two types of features are incorporated in the classifier. The first is bag-of-words features, which are regarded as a strong baseline for IE systems. Lemmas of words before, between and after the pair of target proteins are included, and the linear kernel is used for these features. These features are commonly included in all of the models. Filtering by a stop-word list is not applied because this setting made the scores higher than Sætre et al. (2007)’s setting. The other type of feature is syntactic features. For dependency-based parse representations, a dependency path is encoded as a flat tree as depicted in Figure 6 (prefix “r” denotes reverse relations). Because a tree kernel measures the similarity of trees by counting common subtrees, it is expected that the system finds effective subsequences of dependency paths. For the PTB representation, we directly encode phrase structure trees. 3.2 Conversion of parse representations It is widely believed that the choice of representation format for parser output may greatly affect the performance of applications, although this has not been extensively investigated. We should therefore evaluate the parser performance in multiple parse representations. In this paper, we create multiple parse representations by converting each parser’s default output into other representations when possible. This experiment can also be considered to be a comparative evaluation of parse representations, thus providing an indication for selecting an appropriate parse representation for similar IE tasks. Figure 7 shows our scheme for representation conversion. This paper focuses on five representations as described below. CoNLL The dependency tree format used in the 2006 and 2007 CoNLL shared tasks on dependency parsing. This is a representation format supported by several data-driven dependency parsers. This repreFigure 7: Conversion of parse representations Figure 8: Head dependencies sentation is also obtained from Penn Treebank-style trees by applying constituent-to-dependency conversion8 (Johansson and Nugues, 2007). It should be noted, however, that this conversion cannot work perfectly with automatic parsing, because the conversion program relies on function tags and empty categories of the original Penn Treebank. PTB Penn Treebank-style phrase structure trees without function tags and empty nodes. This is the default output format for phrase structure parsers. 
We also create this representation by converting ENJU’s output by tree structure matching, although this conversion is not perfect because forms of PTB and ENJU’s output are not necessarily compatible. HD Dependency trees of syntactic heads (Figure 8). This representation is obtained by converting PTB trees. We first determine lexical heads of nonterminal nodes by using Bikel’s implementation of Collins’ head detection algorithm9 (Bikel, 2004; Collins, 1997). We then convert lexicalized trees into dependencies between lexical heads. SD The Stanford dependency format (Figure 9). This format was originally proposed for extracting dependency relations useful for practical applications (de Marneffe et al., 2006). A program to convert PTB is attached to the Stanford parser. Although the concept looks similar to CoNLL, this representa8http://nlp.cs.lth.se/pennconverter/ 9http://www.cis.upenn.edu/˜dbikel/software. html 49 Figure 9: Stanford dependencies tion does not necessarily form a tree structure, and is designed to express more fine-grained relations such as apposition. Research groups for biomedical NLP recently adopted this representation for corpus annotation (Pyysalo et al., 2007a) and parser evaluation (Clegg and Shepherd, 2007; Pyysalo et al., 2007b). PAS Predicate-argument structures. This is the default output format for ENJU and ENJU-GENIA. Although only CoNLL is available for dependency parsers, we can create four representations for the phrase structure parsers, and five for the deep parsers. Dotted arrows in Figure 7 indicate imperfect conversion, in which the conversion inherently introduces errors, and may decrease the accuracy. We should therefore take caution when comparing the results obtained by imperfect conversion. We also measure the accuracy obtained by the ensemble of two parsers/representations. This experiment indicates the differences and overlaps of information conveyed by a parser or a parse representation. 3.3 Domain portability and parser retraining Since the domain of our target text is different from WSJ, our experiments also highlight the domain portability of parsers. We run two versions of each parser in order to investigate the two types of domain portability. First, we run the original parsers trained with WSJ10 (39832 sentences). The results in this setting indicate the domain portability of the original parsers. Next, we run parsers re-trained with GENIA11 (8127 sentences), which is a Penn Treebankstyle treebank of biomedical paper abstracts. Accuracy improvements in this setting indicate the possibility of domain adaptation, and the portability of the training methods of the parsers. Since the parsers listed in Section 2 have programs for the training 10Some of the parser packages include parsing models trained with extended data, but we used the models trained with WSJ section 2-21 of the Penn Treebank. 11The domains of GENIA and AImed are not exactly the same, because they are collected independently. with a Penn Treebank-style treebank, we use those programs as-is. Default parameter settings are used for this parser re-training. In preliminary experiments, we found that dependency parsers attain higher dependency accuracy when trained only with GENIA. We therefore only input GENIA as the training data for the retraining of dependency parsers. For the other parsers, we input the concatenation of WSJ and GENIA for the retraining, while the reranker of RERANK was not retrained due to its cost. 
Since the parsers other than NO-RERANK and RERANK require an external POS tagger, a WSJ-trained POS tagger is used with WSJtrained parsers, and geniatagger (Tsuruoka et al., 2005) is used with GENIA-retrained parsers. 4 Experiments 4.1 Experiment settings In the following experiments, we used AImed (Bunescu and Mooney, 2004), which is a popular corpus for the evaluation of PPI extraction systems. The corpus consists of 225 biomedical paper abstracts (1970 sentences), which are sentence-split, tokenized, and annotated with proteins and PPIs. We use gold protein annotations given in the corpus. Multi-word protein names are concatenated and treated as single words. The accuracy is measured by abstract-wise 10-fold cross validation and the one-answer-per-occurrence criterion (Giuliano et al., 2006). A threshold for SVMs is moved to adjust the balance of precision and recall, and the maximum f-scores are reported for each setting. 4.2 Comparison of accuracy improvements Tables 1 and 2 show the accuracy obtained by using the output of each parser in each parse representation. The row “baseline” indicates the accuracy obtained with bag-of-words features. Table 3 shows the time for parsing the entire AImed corpus, and Table 4 shows the time required for 10-fold cross validation with GENIA-retrained parsers. When using the original WSJ-trained parsers (Table 1), all parsers achieved almost the same level of accuracy — a significantly better result than the baseline. To the extent of our knowledge, this is the first result that proves that dependency parsing, phrase structure parsing, and deep parsing perform 50 CoNLL PTB HD SD PAS baseline 48.2/54.9/51.1 MST 53.2/56.5/54.6 N/A N/A N/A N/A KSDEP 49.3/63.0/55.2 N/A N/A N/A N/A NO-RERANK 50.7/60.9/55.2 45.9/60.5/52.0 50.6/60.9/55.1 49.9/58.2/53.5 N/A RERANK 53.6/59.2/56.1 47.0/58.9/52.1 48.1/65.8/55.4 50.7/62.7/55.9 N/A BERKELEY 45.8/67.6/54.5 50.5/57.6/53.7 52.3/58.8/55.1 48.7/62.4/54.5 N/A STANFORD 50.4/60.6/54.9 50.9/56.1/53.0 50.7/60.7/55.1 51.8/58.1/54.5 N/A ENJU 52.6/58.0/55.0 48.7/58.8/53.1 57.2/51.9/54.2 52.2/58.1/54.8 48.9/64.1/55.3 Table 1: Accuracy on the PPI task with WSJ-trained parsers (precision/recall/f-score) CoNLL PTB HD SD PAS baseline 48.2/54.9/51.1 MST 49.1/65.6/55.9 N/A N/A N/A N/A KSDEP 51.6/67.5/58.3 N/A N/A N/A N/A NO-RERANK 53.9/60.3/56.8 51.3/54.9/52.8 53.1/60.2/56.3 54.6/58.1/56.2 N/A RERANK 52.8/61.5/56.6 48.3/58.0/52.6 52.1/60.3/55.7 53.0/61.1/56.7 N/A BERKELEY 52.7/60.3/56.0 48.0/59.9/53.1 54.9/54.6/54.6 50.5/63.2/55.9 N/A STANFORD 49.3/62.8/55.1 44.5/64.7/52.5 49.0/62.0/54.5 54.6/57.5/55.8 N/A ENJU 54.4/59.7/56.7 48.3/60.6/53.6 56.7/55.6/56.0 54.4/59.3/56.6 52.0/63.8/57.2 ENJU-GENIA 56.4/57.4/56.7 46.5/63.9/53.7 53.4/60.2/56.4 55.2/58.3/56.5 57.5/59.8/58.4 Table 2: Accuracy on the PPI task with GENIA-retrained parsers (precision/recall/f-score) WSJ-trained GENIA-retrained MST 613 425 KSDEP 136 111 NO-RERANK 2049 1372 RERANK 2806 2125 BERKELEY 1118 1198 STANFORD 1411 1645 ENJU 1447 727 ENJU-GENIA 821 Table 3: Parsing time (sec.) equally well in a real application. Among these parsers, RERANK performed slightly better than the other parsers, although the difference in the f-score is small, while it requires much higher parsing cost. When the parsers are retrained with GENIA (Table 2), the accuracy increases significantly, demonstrating that the WSJ-trained parsers are not sufficiently domain-independent, and that domain adaptation is effective. 
It is an important observation that the improvements by domain adaptation are larger than the differences among the parsers in the previous experiment. Nevertheless, not all parsers had their performance improved upon retraining. Parser CoNLL PTB HD SD PAS baseline 424 MST 809 N/A N/A N/A N/A KSDEP 864 N/A N/A N/A N/A NO-RERANK 851 4772 882 795 N/A RERANK 849 4676 881 778 N/A BERKELEY 869 4665 895 804 N/A STANFORD 847 4614 886 799 N/A ENJU 832 4611 884 789 1005 ENJU-GENIA 874 4624 895 783 1020 Table 4: Evaluation time (sec.) retraining yielded only slight improvements for RERANK, BERKELEY, and STANFORD, while larger improvements were observed for MST, KSDEP, NORERANK, and ENJU. Such results indicate the differences in the portability of training methods. A large improvement from ENJU to ENJU-GENIA shows the effectiveness of the specifically designed domain adaptation method, suggesting that the other parsers might also benefit from more sophisticated approaches for domain adaptation. While the accuracy level of PPI extraction is the similar for the different parsers, parsing speed 51 RERANK ENJU CoNLL HD SD CoNLL HD SD PAS KSDEP CoNLL 58.5 (+0.2) 57.1 (−1.2) 58.4 (+0.1) 58.5 (+0.2) 58.0 (−0.3) 59.1 (+0.8) 59.0 (+0.7) RERANK CoNLL 56.7 (+0.1) 57.1 (+0.4) 58.3 (+1.6) 57.3 (+0.7) 58.7 (+2.1) 59.5 (+2.3) HD 56.8 (+0.1) 57.2 (+0.5) 56.5 (+0.5) 56.8 (+0.2) 57.6 (+0.4) SD 58.3 (+1.6) 58.3 (+1.6) 56.9 (+0.2) 58.6 (+1.4) ENJU CoNLL 57.0 (+0.3) 57.2 (+0.5) 58.4 (+1.2) HD 57.1 (+0.5) 58.1 (+0.9) SD 58.3 (+1.1) Table 5: Results of parser/representation ensemble (f-score) differs significantly. The dependency parsers are much faster than the other parsers, while the phrase structure parsers are relatively slower, and the deep parsers are in between. It is noteworthy that the dependency parsers achieved comparable accuracy with the other parsers, while they are more efficient. The experimental results also demonstrate that PTB is significantly worse than the other representations with respect to cost for training/testing and contributions to accuracy improvements. The conversion from PTB to dependency-based representations is therefore desirable for this task, although it is possible that better results might be obtained with PTB if a different feature extraction mechanism is used. Dependency-based representations are competitive, while CoNLL seems superior to HD and SD in spite of the imperfect conversion from PTB to CoNLL. This might be a reason for the high performances of the dependency parsers that directly compute CoNLL dependencies. The results for ENJUCoNLL and ENJU-PAS show that PAS contributes to a larger accuracy improvement, although this does not necessarily mean the superiority of PAS, because two imperfect conversions, i.e., PAS-to-PTB and PTB-toCoNLL, are applied for creating CoNLL. 4.3 Parser ensemble results Table 5 shows the accuracy obtained with ensembles of two parsers/representations (except the PTB format). Bracketed figures denote improvements from the accuracy with a single parser/representation. The results show that the task accuracy significantly improves by parser/representation ensemble. Interestingly, the accuracy improvements are observed even for ensembles of different representations from the same parser. This indicates that a single parse representation is insufficient for expressing the true Bag-of-words features 48.2/54.9/51.1 Yakushiji et al. (2005) 33.7/33.1/33.4 Mitsumori et al. (2006) 54.2/42.6/47.7 Giuliano et al. (2006) 60.9/57.2/59.0 Sætre et al. 
(2007) 64.3/44.1/52.0 This paper 54.9/65.5/59.5 Table 6: Comparison with previous results on PPI extraction (precision/recall/f-score) potential of a parser. Effectiveness of the parser ensemble is also attested by the fact that it resulted in larger improvements. Further investigation of the sources of these improvements will illustrate the advantages and disadvantages of these parsers and representations, leading us to better parsing models and a better design for parse representations. 4.4 Comparison with previous results on PPI extraction PPI extraction experiments on AImed have been reported repeatedly, although the figures cannot be compared directly because of the differences in data preprocessing and the number of target protein pairs (Sætre et al., 2007). Table 6 compares our best result with previously reported accuracy figures. Giuliano et al. (2006) and Mitsumori et al. (2006) do not rely on syntactic parsing, while the former applied SVMs with kernels on surface strings and the latter is similar to our baseline method. Bunescu and Mooney (2005) applied SVMs with subsequence kernels to the same task, although they provided only a precision-recall graph, and its f-score is around 50. Since we did not run experiments on protein-pair-wise cross validation, our system cannot be compared directly to the results reported by Erkan et al. (2007) and Katrenko and Adriaans 52 (2006), while Sætre et al. (2007) presented better results than theirs in the same evaluation criterion. 5 Related Work Though the evaluation of syntactic parsers has been a major concern in the parsing community, and a couple of works have recently presented the comparison of parsers based on different frameworks, their methods were based on the comparison of the parsing accuracy in terms of a certain intermediate parse representation (Ringger et al., 2004; Kaplan et al., 2004; Briscoe and Carroll, 2006; Clark and Curran, 2007; Miyao et al., 2007; Clegg and Shepherd, 2007; Pyysalo et al., 2007b; Pyysalo et al., 2007a; Sagae et al., 2008). Such evaluation requires gold standard data in an intermediate representation. However, it has been argued that the conversion of parsing results into an intermediate representation is difficult and far from perfect. The relationship between parsing accuracy and task accuracy has been obscure for many years. Quirk and Corston-Oliver (2006) investigated the impact of parsing accuracy on statistical MT. However, this work was only concerned with a single dependency parser, and did not focus on parsers based on different frameworks. 6 Conclusion and Future Work We have presented our attempts to evaluate syntactic parsers and their representations that are based on different frameworks; dependency parsing, phrase structure parsing, or deep parsing. The basic idea is to measure the accuracy improvements of the PPI extraction task by incorporating the parser output as statistical features of a machine learning classifier. Experiments showed that state-of-theart parsers attain accuracy levels that are on par with each other, while parsing speed differs significantly. We also found that accuracy improvements vary when parsers are retrained with domainspecific data, indicating the importance of domain adaptation and the differences in the portability of parser training methods. Although we restricted ourselves to parsers trainable with Penn Treebank-style treebanks, our methodology can be applied to any English parsers. 
Candidates include RASP (Briscoe and Carroll, 2006), the C&C parser (Clark and Curran, 2004), the XLE parser (Kaplan et al., 2004), MINIPAR (Lin, 1998), and Link Parser (Sleator and Temperley, 1993; Pyysalo et al., 2006), but the domain adaptation of these parsers is not straightforward. It is also possible to evaluate unsupervised parsers, which is attractive since evaluation of such parsers with goldstandard data is extremely problematic. A major drawback of our methodology is that the evaluation is indirect and the results depend on a selected task and its settings. This indicates that different results might be obtained with other tasks. Hence, we cannot conclude the superiority of parsers/representations only with our results. In order to obtain general ideas on parser performance, experiments on other tasks are indispensable. Acknowledgments This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan), Genome Network Project (MEXT, Japan), and Grant-in-Aid for Young Scientists (MEXT, Japan). References D. M. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4):479–511. T. Briscoe and J. Carroll. 2006. Evaluating the accuracy of an unlexicalized statistical parser on the PARC DepBank. In COLING/ACL 2006 Poster Session. R. Bunescu and R. J. Mooney. 2004. Collective information extraction with relational markov networks. In ACL 2004, pages 439–446. R. C. Bunescu and R. J. Mooney. 2005. Subsequence kernels for relation extraction. In NIPS 2005. E. Charniak and M. Johnson. 2005. Coarse-to-fine nbest parsing and MaxEnt discriminative reranking. In ACL 2005. E. Charniak. 2000. A maximum-entropy-inspired parser. In NAACL-2000, pages 132–139. S. Clark and J. R. Curran. 2004. Parsing the WSJ using CCG and log-linear models. In 42nd ACL. S. Clark and J. R. Curran. 2007. Formalism-independent parser evaluation with CCG and DepBank. In ACL 2007. A. B. Clegg and A. J. Shepherd. 2007. Benchmarking natural-language parsers for biological applications using dependency graphs. BMC Bioinformatics, 8:24. 53 M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In ACL 2002. M. Collins. 1997. Three generative, lexicalised models for statistical parsing. In 35th ACL. M.-C. de Marneffe, B. MacCartney, and C. D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In LREC 2006. J. M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In COLING 1996. G. Erkan, A. Ozgur, and D. R. Radev. 2007. Semisupervised classification for extracting protein interaction sentences using dependency parsing. In EMNLP 2007. D. Gildea. 2001. Corpus variation and parser performance. In EMNLP 2001, pages 167–202. C. Giuliano, A. Lavelli, and L. Romano. 2006. Exploiting shallow linguistic information for relation extraction from biomedical literature. In EACL 2006. T. Hara, Y. Miyao, and J. Tsujii. 2007. Evaluating impact of re-training a lexical disambiguation model on domain adaptation of an HPSG parser. In IWPT 2007. R. Johansson and P. Nugues. 2007. Extended constituent-to-dependency conversion for English. In NODALIDA 2007. R. M. Kaplan, S. Riezler, T. H. King, J. T. Maxwell, and A. Vasserman. 2004. Speed and accuracy in shallow and deep stochastic parsing. In HLT/NAACL’04. S. Katrenko and P. Adriaans. 2006. Learning relations from biomedical corpora using dependency trees. In KDECB, pages 61–80. J.-D. Kim, T. 
Ohta, Y. Teteisi, and J. Tsujii. 2003. GENIA corpus — a semantically annotated corpus for bio-textmining. Bioinformatics, 19:i180–182. D. Klein and C. D. Manning. 2003. Accurate unlexicalized parsing. In ACL 2003. D. Lin. 1998. Dependency-based evaluation of MINIPAR. In LREC Workshop on the Evaluation of Parsing Systems. M. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic CFG with latent annotations. In ACL 2005. R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In EACL 2006. T. Mitsumori, M. Murata, Y. Fukuda, K. Doi, and H. Doi. 2006. Extracting protein-protein interaction information from biomedical text with SVM. IEICE - Trans. Inf. Syst., E89-D(8):2464–2466. Y. Miyao and J. Tsujii. 2008. Feature forest models for probabilistic HPSG parsing. Computational Linguistics, 34(1):35–80. Y. Miyao, K. Sagae, and J. Tsujii. 2007. Towards framework-independent evaluation of deep linguistic parsers. In Grammar Engineering across Frameworks 2007, pages 238–258. A. Moschitti. 2006. Making tree kernels practical for natural language processing. In EACL 2006. J. Nivre and J. Nilsson. 2005. Pseudo-projective dependency parsing. In ACL 2005. S. Petrov and D. Klein. 2007. Improved inference for unlexicalized parsing. In HLT-NAACL 2007. S. Pyysalo, T. Salakoski, S. Aubin, and A. Nazarenko. 2006. Lexical adaptation of link grammar to the biomedical sublanguage: a comparative evaluation of three approaches. BMC Bioinformatics, 7(Suppl. 3). S. Pyysalo, F. Ginter, J. Heimonen, J. Bj¨orne, J. Boberg, J. J¨arvinen, and T. Salakoski. 2007a. BioInfer: a corpus for information extraction in the biomedical domain. BMC Bioinformatics, 8(50). S. Pyysalo, F. Ginter, V. Laippala, K. Haverinen, J. Heimonen, and T. Salakoski. 2007b. On the unification of syntactic annotations under the Stanford dependency scheme: A case study on BioInfer and GENIA. In BioNLP 2007, pages 25–32. C. Quirk and S. Corston-Oliver. 2006. The impact of parse quality on syntactically-informed statistical machine translation. In EMNLP 2006. E. K. Ringger, R. C. Moore, E. Charniak, L. Vanderwende, and H. Suzuki. 2004. Using the Penn Treebank to evaluate non-treebank parsers. In LREC 2004. R. Sætre, K. Sagae, and J. Tsujii. 2007. Syntactic features for protein-protein interaction extraction. In LBM 2007 short papers. K. Sagae and J. Tsujii. 2007. Dependency parsing and domain adaptation with LR models and parser ensembles. In EMNLP-CoNLL 2007. K. Sagae, Y. Miyao, T. Matsuzaki, and J. Tsujii. 2008. Challenges in mapping of syntactic representations for framework-independent parser evaluation. In the Workshop on Automated Syntatic Annotations for Interoperable Language Resources. D. D. Sleator and D. Temperley. 1993. Parsing English with a Link Grammar. In 3rd IWPT. Y. Tsuruoka, Y. Tateishi, J.-D. Kim, T. Ohta, J. McNaught, S. Ananiadou, and J. Tsujii. 2005. Developing a robust part-of-speech tagger for biomedical text. In 10th Panhellenic Conference on Informatics. A. Yakushiji, Y. Miyao, Y. Tateisi, and J. Tsujii. 2005. Biomedical information extraction with predicateargument structure patterns. In First International Symposium on Semantic Mining in Biomedicine. 54
Proceedings of ACL-08: HLT, pages 523–531, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Multilingual Harvesting of Cross-Cultural Stereotypes Tony Veale School of Computer Science University College Dublin Belfield, Dublin 4, Ireland [email protected] Yanfen Hao School of Computer Science University College Dublin Belfield, Dublin 4, Ireland [email protected] Guofu Li School of Computer Science University College Dublin Belfield, Dublin 4, Ireland [email protected] Abstract People rarely articulate explicitly what a native speaker of a language is already assumed to know. So to acquire the stereotypical knowledge that underpins much of what is said in a given culture, one must look to what is implied by language rather than what is overtly stated. Similes are a convenient vehicle for this kind of knowledge, insofar as they mark out the most salient aspects of the most frequently evoked concepts. In this paper we perform a multilingual exploration of the space of common-place similes, by mining a large body of Chinese similes from the web and comparing these to the English similes harvested by Veale and Hao (2007). We demonstrate that while the simile-frame is inherently leaky in both languages, a multilingual analysis allows us to filter much of the noise that otherwise hinders the knowledge extraction process. In doing so, we can also identify a core set of stereotypical descriptions that exist in both languages and accurately map these descriptions onto a multilingual lexical ontology like HowNet. Finally, we demonstrate that conceptual descriptions that are derived from common-place similes are extremely compact and predictive of ontological structure. 1 Introduction Direct perception of our environment is just one of the ways we can acquire knowledge of the world. Another, more distinctly human approach, is through the comprehension of linguistic descriptions of another person’s perceptions and beliefs. Since computers have limited means of human-like perception, the latter approach is also very much suited to the automatic acquisition of world knowledge by a computer (see Hearst, 1992; Charniak and Berland, 1999; Etzioni et al., 2004; V¨olker et al., 2005; Almuhareb and Poesio, 2005; Cimiano and Wenderoth, 2007; Veale and Hao, 2007). Thus, by using the web as a distributed text corpus (see Keller et al., 2002), a multitude of facts and beliefs can be extracted, for purposes ranging from questionanswering to ontology population. The possible configurations of different concepts can also be learned from how the words denoting these concepts are distributed; thus, a computer can learn that coffee is a beverage that can be served hot or cold, white or black, strong or weak and sweet or bitter (see Almuhareb and Poesio, 2005). But it is difficult to discern from these facts the idealized or stereotypical states of the world, e.g., that one expects coffee to be hot and beer to be cold, so that if one spills coffee, we naturally infer the possibilities of scalding and staining without having to be told that the coffee was hot or black; the assumptions of hotness and blackness are just two stereotypical facts about coffee that we readily take for granted. Lenat and Guha (1990) describe these assumed facts as residing in the white space of a text, in the body of common-sense assumptions that are rarely articulated as explicit statements. 
These culturally-shared common-sense beliefs cannot be harvested directly from a single web resource or document set, but must be gleaned indirectly, from telling phrases that are scattered across the many texts of the web. Veale and Hao (2007) argue that the most pivotal 523 reference points of this world-view can be detected in common-place similes like “as lazy as a dog”, “as fat as a hippo” or “as chaste as a nun”. To the extent that this world-view is ingrained in and influenced by how we speak, it can differ from culture to culture and language to language. In English texts, for example, the concept Tortoise is stereotypically associated with the properties slowness, patience and wrinkled, but in Chinese texts, we find that the same animal is a model of slowness, ugliness, and nutritional value. Likewise, because Chinese “wine” has a high alcohol content, the dimension of Strength is much more salient to a Chinese speaker than an English speaker, as reflected in how the word 酒is used in statements such as 像酒一样浓重, which means “as strong as wine”, or literally, “as wine equally strong”. In this paper, we compare the same web-based approach to acquiring stereotypical concept descriptions from text using two very different languages, English and Chinese, to determine the extent to which the same cross-cultural knowledge is unearthed for each. In other words, we treat the web as a large parallel corpus (e.g., see Resnick and Smith, 2003), though not of parallel documents in different languages, but of corresponding translationequivalent phrases. By seeking translation equivalence between different pieces of textually-derived knowledge, this paper addresses the following questions: if a particular syntagmatic pattern is useful for mining knowledge in English, can its translated form be equally useful for Chinese? To what extent does the knowledge acquired using different source languages overlap, and to what extent is this knowledge language- (and culture-) specific? Given that the syntagmatic patterns used in each language are not wholly unambiguous or immune to noise, to what extent should finding the same beliefs expressed in two different languages increase our confidence in the acquired knowledge? Finally, what representational synergies arise from finding these same facts expressed in two different languages? Given these goals, the rest of the paper assumes the following structure: in section 2, we summarize related work on syntagmatic approaches to knowledge-acquisition; in section 3, we describe our multilingual efforts in English and Chinese to acquire stereotypical or generic-level facts from the web, by using corresponding translations of the commonplace stereotype-establishing pattern “as ADJ as a NOUN”; and in section 4, we describe how these English and Chinese data-sets can be unified using the bilingual ontology HowNet (Dong and Dong, 2006). This mapping allows us to determine the meaning overlap in both data sets, the amount of noise in each data set, and the degree to which this noise is reduced when parallel translations can be identified. In section 5 we demonstrate the overall usefulness of stereotype-based knowledgerepresentation by replicating the clustering experiments of Almuhareb and Poesio (2004, 2005) and showing that stereotype-based representations are both compact and predictive of ontological classification. We conclude the paper with some final remarks in section 6. 
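To make the two-phase harvesting procedure of section 3 concrete, the following Python sketch shows how the English and Chinese query strings could be generated. The function names are ours, and the web-search call and snippet scanning are only indicated in comments; this illustrates the simile frames rather than reproducing the authors' implementation.

def english_queries(adjectives, nouns=None):
    """Phase 1 expands WordNet adjectives; phase 2 reuses the harvested nouns."""
    if nouns is None:
        return ['"as %s as *"' % adj for adj in adjectives]   # phase 1 queries
    return ['"as * as a %s"' % noun for noun in nouns]        # phase 2 queries

def chinese_queries(adjectives, nouns=None):
    """Chinese simile frame: 像 NOUN 一样 ADJ ('as-NOUN-equally-ADJ')."""
    if nouns is None:
        return ['"像*一样%s"' % adj for adj in adjectives]     # phase 1 queries
    return ['"像%s一样*"' % noun for noun in nouns]            # phase 2 queries

# Each query string would be sent to the web search API and the first 200
# returned snippets scanned for fillers of the * wildcard (nouns in phase 1,
# adjectives in phase 2); that step is omitted here.
print(english_queries(["hot", "cold"]))       # ['"as hot as *"', '"as cold as *"']
print(chinese_queries(None, nouns=["王八"]))   # ['"像王八一样*"']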
2 Related Work Text-based approaches to knowledge acquisition range from the ambitiously comprehensive, in which an entire text or resource is fully parsed and analyzed in depth, to the surgically precise, in which highly-specific text patterns are used to eke out correspondingly specific relationships from a large corpus. Endeavors such as that of Harabagiu et al. (1999), in which each of the textual glosses in WordNet (Fellbaum, 1998) is linguistically analyzed to yield a sense-tagged logical form, is an example of the former approach. In contrast, foundational efforts such as that of Hearst (1992) typify the latter surgical approach, in which one fishes in a large text for word sequences that strongly suggest a particular semantic relationship, such as hypernymy or, in the case of Charniak and Berland (1999), the partwhole relation. Such efforts offer high precision but low recall, and extract just a tiny (but very useful) subset of the semantic content of a text. The KnowItAll system of Etzioni et al. (2004) employs the same generic patterns as Hearst ( e.g., “NPs such as NP1, NP2, ...”), and more besides, to extract a whole range of facts that can be exploited for webbased question-answering. Cimiano and Wenderoth (2007) also use a range of Hearst-like patterns to find text sequences in web-text that are indicative of the lexico-semantic properties of words; in particular, these authors use phrases like “to * a new 524 NOUN” and “the purpose of NOUN is to *” to identify the agentive and telic roles of given nouns, thereby fleshing out the noun’s qualia structure as posited by Pustejovsky’s (1990) theory of the generative lexicon. The basic Hearst approach has even proven useful for identifying the meta-properties of concepts in a formal ontology. V¨olker et al. (2005) show that patterns like “is no longer a|an NOUN” can identify, with reasonable accuracy, those concepts in an ontology that are not rigid, which is to say, concepts like Teacher and Student whose instances may at any point stop being instances of these concepts. Almuhareb and Poesio (2005) use patterns like “a|an|the * C is|was” and “the * of the C is|was” to find the actual properties of concepts as they are used in web texts; the former pattern is used to identify value features like hot, red, large, etc., while the latter is used to identify the attribute features that correspond to these values, such as temperature, color and size. Almuhareb and Poesio go on to demonstrate that the values and attributes that are found for word-concepts on the web yield a sufficiently rich representation for these word-concepts to be automatically clustered into a form resembling that assigned by WordNet (see Fellbaum, 1998). Veale and Hao (2007) show that the pattern “as ADJ as a|an NOUN” can also be used to identify the value feature associated with a given concept, and argue that because this pattern corresponds to that of the simile frame in English, the adjectival features that are retrieved are much more likely to be highly salient of the noun-concept (the simile vehicle) that is used. Whereas Almuhareb and Poesio succeed in identifying the range of potential attributes and values that may be possessed by a particular concept, Veale and Hao succeed in identifying the generic properties of a concept as it is conceived in its stereotypical form. As noted by the latter authors, this results in a much smaller yet more diagnostic feature set for each concept. 
However, because the simile frame is often exploited for ironic purposes in web texts (e.g., “as meaty as a skeleton”), and because irony is so hard to detect, Veale and Hao suggest that the adjective:noun pairings found on the web should be hand-filtered to remove such examples. Given this onerous requirement for hand-filtering, and the unique, culturallyloaded nature of the noise involved, we use the work of Veale and Hao as the basis for the cross-cultural investigation in this paper. 3 Harvesting Knowledge from Similes: English and Chinese Because similes are containers of culturallyreceived knowledge, we can reasonably expect the most commonly used similes to vary significantly from language to language, especially when those languages correspond to very different cultures. These similes form part of the linguistic currency of a culture which must be learned by a speaker, and indeed, some remain opaque even to the most educated native speakers. In “A Christmas Carol”, for instance, Dickens (1943/1984) questions the meaning of “as dead as a doornail”, and notes: “I might have been inclined, myself, to regard a coffin-nail as the deadest piece of ironmongery in the trade. But the wisdom of our ancestors is in the simile”. Notwithstanding the opacity of some instances of the simile form, similes are very revealing about the concepts one most encounters in everyday language. In section 5 we demonstrate that concept descriptions which are harvested from similes are both extremely compact and highly predictive of ontological structure. For now, we turn to the process by which similes can be harvested from the text of the web. In section 3.1 we summarize the efforts of Veale and Hao, whose database of English similes drives part of our current investigation. In section 3.2 we describe how a comparable database of Chinese similes can be harvested from the web. 3.1 Harvesting English Similes Veale and Hao (2007) use the Google API in conjunction with Princeton WordNet (Fellbaum, 1998) as the basis of their harvesting system. They first extracted a list of antonymous adjectives, such as “hot” or “cold”, from WordNet, the intuition being that explicit similes will tend to exploit properties that occupy an exemplary point on a scale. For every adjective ADJ on this list, they then sent the query “as ADJ as *” to Google and scanned the first 200 snippets returned for different noun values for the wildcard *. The complete set of nouns extracted in this way was then used to drive a sec525 ond harvesting phase, in which the query “as * as a NOUN” was used to collect similes that employ different adjectives or which lie beyond the 200snippet horizon of the original search. Based on this wide-ranging series of core samples (of 200 hits each) from across the web, Veale and Hao report that both phases together yielded 74,704 simile instances (of 42,618 unique types, or unique adjective:noun pairings), relating 3769 different adjectives to 9286 different nouns. As often noted by other authors, such as V¨olker et al. (2005), a patternoriented approach to knowledge mining is prone to noise, not least because the patterns used are rarely leak-free (inasmuch as they admit word sequences that do not exhibit the desired relationship), and because these patterns look at small text sequences in isolation from their narrative contexts. Veale and Hao (2007) report that when the above 42,618 simile types are hand-annotated by a native speaker, only 12,259 were judged as non-ironic and meaningful in a null context. 
In other words, just 29% of the retrieved pairings conform to what one would consider a well-formed and reusable simile that conveys some generic aspect of cultural knowledge. Of those deemed invalid, 2798 unique pairings were tagged as ironic, insofar as they stated precisely the opposite of what is stereotypically believed to be true. 3.2 Harvesting Chinese Similes To harvest a comparable body of Chinese similes from the web, we also use the Google API, in conjunction with both WordNet and HowNet (Dong and Dong, 2006). HowNet is a bilingual lexical ontology that associates English and Chinese word labels with an underlying set of approximately 100,000 lexical concepts. While each lexical concept is defined using a unique numeric identifier, almost all of HowNet’s concepts can be uniquely identified by a pairing of English and Chinese labels. For instance, the word “王八” can mean both Tortoise and Cuckold in Chinese, but the combined label tortoise|王八 uniquely picks out the first sense while cuckold|王 八uniquely picks out the second. Though Chinese has a large number of figurative expressions, the yoking of English to Chinese labels still serves to identify the correct sense in almost every case. For instance, “绿帽子” is another word for Cuckold in Chinese, but it can also translate as “green hat” and “green scarf”. Nonetheless, green hat|绿 帽子uniquely identifies the literal sense of “绿帽 子” (a green covering) while green scarf|绿帽子 and cuckold|绿帽子both identify the same human sense, the former being a distinctly culture-specific metaphor for cuckolded males (in English, a dispossessed lover “wears the cuckold’s horns”; in Chinese, one apparently “wears a green scarf”). We employ the same two-phase design as Veale and Hao: an initial set of Chinese adjectives are extracted from HowNet, with the stipulation that their English translations (as given by HowNet) are also categorized as adjectives in WordNet. We then use the Chinese equivalent of the English simile frame “像* 一样ADJ” (literally, “as-NOUNequally-ADJ”) to retrieve a set of noun values that stereotypically embody these adjectival features. Again, a set of 200 snippets is analyzed for each query, and only those values of the Google * wildcard that HowNet categorizes as nouns are accepted. In a second phase, these nouns are used to create new queries of the form “像Noun一样*” and the resulting Google snippets are now scanned for adjectival values of *. In all, 25,585 unique Chinese similes (i.e., pairings of an adjective to a noun) are harvested, linking 3080 different Chinese adjectives to 4162 Chinese nouns. When hand-annotated by a native Chinese speaker, the Chinese simile frame reveals itself to be considerably less leaky than the corresponding English frame. Over 58% of these pairings (14,867) are tagged as well-formed and meaningful similes that convey some stereotypical element of world knowledge. The Chinese pattern “像*一 样*” is thus almost twice as reliable as the English ”as * as a *” pattern. In addition, Chinese speakers exploit the simile frame much less frequently for ironic purposes, since just 185 of the retrieved similes (or 0.7%) are tagged as ironic, compared with ten times as many (or 7%) retrieved English similes. In the next section we consider the extent to which these English and Chinese similes convey the same information. 4 Tagging and Mapping of Similes In each case, the harvesting processes for English and for Chinese allow us to acquire stereotypi526 cal associations between words, not word senses. 
Nonetheless, the frequent use of synonymous terms introduces a substantial degree of redundancy in these associations, and this redundancy can be used to perform sense discrimination. In the case of English similes, Veale and Hao (2007) describe how two English similes “as A as N1” and “as A as N2” will be mutually disambiguating if N1 and N2 are synonyms in WordNet, or if some sense of N1 is a hypernym or hyponym of some sense of N2 in WordNet. This heuristic allows Veale and Hao to automatically sense-tag 85%, or 10,378, of the unique similes that are annotated as valid. We apply a similar intuition to the disambiguation of Chinese similes: though HowNet does not support the notion of a synset, different word-senses that have the same meaning will be associated with the same logical definition. Thus, the Chinese word “著名” can translate as “celebrated”, “famous”, “well-known” and “reputable”, but all four of these possible senses, given by celebrated|著名, famous|著名, well-known|著名and reputable|著 名, are associated with the same logical form in HowNet, which defines them as a specialization of ReputationValue|名声值. This allows us to safely identify “著名” with this logical form. Overall, 69% of Chinese similes can have both their adjective and noun assigned to specific HowNet meanings in this way. 4.1 Translation Equivalence Among Similes Since HowNet represents an integration of English and Chinese lexicons, it can easily be used to connect the English and Chinese data-sets. For while the words used in any given simile are likely to be ambiguous (in the case of one-character Chinese words, highly so), it would seem unlikely that an incorrect translation of a web simile would also be found on the web. This is an intuition that we can now use the annotated data-sets to evaluate. For every English simile of the form <Ae as Ne>, we use HowNet to generate a range of possible Chinese variations <Ac0 as Nc0>, <Ac1 as Nc0>, <Ac0 as Nc1>, <Ac1 as Nc1>, ... by using the HowNet lexical entries Ae|Ac0, Ae|Ac1, ..., Ne|Nc0, Ne|Nc1, ... as a translation bridge. If the variation <Aci as Ncj> is found in the Chinese data-set, then translation equivalence is assumed between <Ae as Language Precision Recall F1 English 0.76 0.25 0.38 Chinese 0.82 0.27 0.41 Table 1: Automatic filtering of similes using Translation Equivalence. Ne> and <Aci as Ncj>; furthermore, Ae|Aci is assumed to be the HowNet sense of the adjectives Ae and Aci while Ncj is assumed to be the HowNet sense of the nouns Ne and Ncj. Sense-tagging is thus a useful side-effect of simile-mapping with a bilingual lexicon. We attempt to find Chinese translation equivalences for all 42,618 of the English adjective:noun pairings harvested by Veale and Hao; this includes both the 12,259 pairings that were hand-annotated as valid stereotypical facts, and the remaining 30,359 that were dismissed as noisy or ironic. Using HowNet, we can establish equivalences from 4177 English similes to 4867 Chinese similes. In those mapped, we find 3194 English similes and 4019 Chinese similes that were hand-annotated as valid by their respective native-speaker judges. In other words, translation equivalence can be used to separate well-formed stereotypical beliefs from illformed or ironic beliefs with approximately 80% precision. The precise situation is summarized in Table 1. As noted in section 3, just 29% of raw English similes and 58% of raw Chinese similes that are harvested from web-text are judged as valid stereotypical statements by a native-speaking judge. 
For the task of filtering irony and noise from raw data sets, translation equivalence thus offers good precision but poor recall, since most English similes appear not to have a corresponding Chinese variant on the web. Nonetheless, this heuristic allows us to reliably identify a sizeable body of cross-cultural stereotypes that hold in both languages. 4.1.1 Error Analysis Noisy propositions may add little but empty content to a representation, but ironic propositions will actively undermine a representation from within, leading to inferences that are not just unlikely, but patently false (as is generally the intention of irony). Since Veale and Hao (2007) annotate their data527 set for irony, this allows us to measure the number of egregious mistakes made when using translation equivalence as a simile filter. Overall, we see that 1% of Chinese similes that are accepted via translation equivalence are ironic, accounting for 9% of all errors made when filtering Chinese similes. Likewise, 1% of the English similes that are accepted are ironic, accounting for 5% of all errors made when filtering English similes. 4.2 Representational Synergies By mapping WordNet-tagged English similes onto HowNet-tagged Chinese similes, we effectively obtain two representational viewpoints onto the same shared data set. For instance, though HowNet has a much shallower hierarchical organization than WordNet, it compensates by encapsulating the meaning of different word senses using simple logical formulae of semantic primitives, or sememes, that are derived from the meaning of common Chinese characters. WordNet and HowNet thus offer two complementary levels or granularities of generalization that can be exploited as the context demands. 4.2.1 Adjective Organization Unlike WordNet, HowNet organizes its adjectival senses hierarchically, allowing one to obtain a weaker form of a given description by climbing the hierarchy, or to obtain a stronger form by descending the hierarchy from a particular sense. Thus, one can go up from kaleidoscopic|斑驳陆 离to colored|彩, or down from colored|彩to any of motley|斑驳, dappled|斑驳, prismatic|斑驳 陆离and even gorgeous|斑斓. Once stereotypical descriptions have been sense-tagged relative to HowNet, they can easily be further enhanced or bleached to suit the context of their use. For example, by allowing a Chinese adjective to denote any of the senses above it or below in the HowNet hierarchy, we can extend the mapping of English to Chinese similes so as to achieve an improved recall of .36 (though we note that this technique reduces the precision of the translation-equivalence heuristic to .75). As demonstrated by Almuhareb and Poesio (2004), the best conceptual descriptions combine adjectival values with the attributes that they fill. Because adjectival senses hook into HowNet’s upper ontology via a series of abstract taxonyms like TasteValue|美丑值, ReputationValue|名声值and AmountValue|多少值, a taxonym of the form AttributeValue can be identified for every adjective sense in HowNet. For example, the English adjective ”beautiful” can denote either beautiful|美, organized by HowNet under BeautyValue|美丑 值, or beautiful|婉, organized by HowNet under gracious|雅which in turn is organized under GraceValue|典雅值. The adjective “beautiful” can therefore specify either the Grace or Beauty attributes of a concept. Once similes have been sensetagged, we can build up a picture of most salient attributes of our stereotypical concepts. 
For instance, “peacock” similes yield the following attributes via HowNet: Beauty, Appearance, Color, Pride, Behavior, Resplendence, Bearing and Grace; likewise “demon” similes yield the following: Morality, Behavior, Temperament, Ability and Competence. 4.2.2 Orthographic Form The Chinese data-set lacks counterparts to many similes that one would not think of as culturallydetermined, such “as red as a ruby”, “as cruel as a tyrant” and “as smelly as a skunk”. One significant reason for this kind of omission is not cultural difference, but obviousness: many Chinese words are multi-character gestalts of different ideas (see Packard, 2000), so that these ideas form an explicit part of the orthography of a lexical concept. For instance, using HowNet, we can see that skunk|臭鼬 is actually a gestalt of the concepts smelly|臭and weasel|鼬, so the simile “as smelly as a skunk” is already somewhat redundant in Chinese (somewhat akin to the English similes “as hot as a hotdog” or “as hard as a hardhat”). Such decomposition can allow us to find those English similes that are already orthographically explicit in Chinese word-forms. We simply look for pairs of HowNet senses of the form Noun|XYZ and Adj|X, where X and XYZ are Chinese words and the simile “as Adj as a|an Noun” is found in the English simile set. When we do so, we find that 648 English similes, from “as meaty as a steak” to “as resonant as a cello”, are already fossilized in the orthographic realization of the corresponding Chinese concepts. When fossilized similes are uncovered in this way, 528 the recall of translation equivalence as a noise filter rises to .29, while its precision rises to .84 (see Table 1) 5 Empirical Evaluation: Simile-derived Representations Stereotypes persist in language and culture because they are, more often than not, cognitively useful: by emphasizing the most salient aspects of a concept, a stereotype acts as a dense conceptual description that is easily communicated, widely shared, and which supports rapid inference. To demonstrate the usefulness of stereotype-based concept descriptions, we replicate here the clustering experiments of Almuhareb and Poesio (2004, 2005), who in turn demonstrated that conceptual features that are mined from specific textual patterns can be used to construct WordNet-like ontological structures. These authors used different text patterns for mining feature values (like hot) and attributes (like temperature), and their experiments evaluated the relative effectiveness of each as a means of ontological clustering. Since our focus in this paper is on the harvesting of feature values, we replicate here only their experiments with values. Almuhareb and Poesio (2004) used as their experimental basis a sampling of 214 English nouns from 13 of WordNet’s upper-level semantic categories, and proceeded to harvest adjectival features for these noun-concepts from the web using the textual pattern “[a | an | the] * C [is | was]”. This pattern yielded a combined total of 51,045 value features for these 214 nouns, such as hot, black, etc., which were then used as the basis of a clustering algorithm in an attempt to reconstruct the WordNet classifications for all 214 nouns. Clustering was performed by the CLUTO-2.1 package (Karypis, 2003), which partitioned the 214 nouns in 13 categories on the basis of their 51,045 web-derived features. Comparing these clusters with the original WordNet-based groupings, Almuhareb and Poesio report a clustering accuracy of 71.96%. 
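The clustering evaluation that these experiments rely on can be summarised by the following sketch. The noun-by-adjective feature matrix is assumed to have been built from the harvested descriptions; KMeans merely stands in for the CLUTO algorithms actually used, and the purity function shown corresponds to the cluster purity measure reported for the second experiment below, not the exact accuracy measure of the first.

import numpy as np
from sklearn.cluster import KMeans

def cluster_purity(pred, gold):
    # fraction of nouns whose cluster's majority gold class matches their own class
    correct = 0
    for c in np.unique(pred):
        correct += np.bincount(gold[pred == c]).max()
    return correct / len(gold)

def evaluate(features, gold, n_classes):
    # features: (num_nouns x num_adjectives) 0/1 matrix of harvested descriptions
    pred = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(features)
    return cluster_purity(pred, gold)

# toy example: 6 nouns, 4 adjective features, 2 gold classes
X = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 0],
              [0, 0, 1, 1], [0, 1, 1, 1], [0, 0, 0, 1]])
y = np.array([0, 0, 0, 1, 1, 1])
print(evaluate(X, y, n_classes=2))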
In a second, larger experiment, Almuhareb and Poesio (2005) sampled 402 nouns from 21 different semantic classes in WordNet, and harvested 94,989 feature values from the web using the same textual pattern. They then applied the repeated bisections clustering algorithm to Approach accuracy features Almuhareb + Poesio 71.96% 51,045 Simile-derived stereotypes 70.2% 2,209 Table 2: Results for experiment 1 (214 nouns, 13 WN categories). Approach Cluster Cluster features purity entropy Almu. + Poesio (no filtering) 56.7% 38.4% 94,989 Almu. + Poesio (with filtering) 62.7% 33.8% 51345 Simile-derived stereotypes (no filtering) 64.3% 33% 5,547 Table 3: Results for experiment 2 (402 nouns, 21 WN categories). this larger data set, and report an initial cluster purity measure of 56.7%. Suspecting that a noisy feature set had contributed to the apparent drop in performance, these authors then proceed to apply a variety of noise filters to reduce the set of feature values to 51,345, which in turn leads to an improved cluster purity measure of 62.7%. We replicated both of Almuhareb and Poesio’s experiments on the same experimental data-sets (of 214 and 402 nouns respectively), using instead the English simile pattern “as * as a NOUN” to harvest features for these nouns from the web. Note that in keeping with the original experiments, no handtagging or filtering of these features is performed, so that every raw match with the simile pattern is used. Overall, we harvest just 2209 feature values for the 214 nouns of experiment 1, and 5547 features for the 402 nouns of experiment 2. A comparison of both sets of results for experiment 1 is shown is Table 2, while a comparison based on experiment 2 is shown is Table 3. While Almuhareb and Poesio achieve marginally higher clustering on the 214 nouns of experiment 1, they do so by using over 20 times as many features. 529 In experiment 2, we see a similar ratio of feature quantities before filtering; after some initial filtering, Almuhareb and Poesio reduce their feature set to just under 10 times the size of the simile-derived feature set. These experiments demonstrate two key points about stereotype-based representations. First, the feature representations do not need to be handfiltered and noise-free to be effective; we see from the above results that the raw values extracted from the simile pattern prove slightly more effective than filtered feature sets used by Almuhareb and Poesio. Secondly, and perhaps more importantly, stereotype-based representations prove themselves a much more compact means (by factor of 10 to 20 times) of achieving the same clustering goals. 6 Conclusions Knowledge-acquisition from texts can be a process fraught with complexity: such texts - especially web-based texts - are frequently under-determined and vague; highly ambiguous, both lexically and structurally; and dense with figures of speech, hyperbolae and irony. None of the syntagmatic frames surveyed in section 2, from the “NP such as NP1, NP2 ...” pattern of Hearst (1992) and Etzioni et al. (2004) to the “no longer NOUN” pattern of V¨olker et al. (2005), are leak-free and immune to noise. 
Cimiano and Wenderoth (2007) mitigate this problem somewhat by performing part-of-speech analysis on all extracted text sequences, but the problem remains: the surgical, pattern-based approach offers an efficient and targeted means of knowledgeacquisition from corpora because it largely ignores the context in which these patterns occur; yet one requires this context to determine if a given text sequence really is a good exemplar of the semantic relationship that is sought. In this paper we have described how stereotypical associations between adjectival properties and noun concepts can be mined from similes in web text. When harvested in both English and Chinese, these associations exhibit two kinds of redundancy that can mitigate the problem of noise. The first kind, within-language redundancy, allows us to perform sense-tagging of the adjectives and nouns that are used in similes, by exploiting the fact that the same stereotypical association can occur in a variety of synonymous forms. By recognizing synonymy between the elements of different similes, we can thus identify the underlying senses (or WordNet synsets) in these similes. The second kind, between-language redundancy, exploits the fact that the same associations can occur in different languages, allowing us to exploit translationequivalence to pin these associations to particular lexical concepts in a multilingual lexical ontology like HowNet. While between-language redundancy is a limited phenomenon, with just 26% of Veale and Hao’s annotated English similes having Chinese translations on the web, this phenomenon does allow us to identify a significant core of shared stereotypical knowledge across these two very different languages. Overall, our analysis suggests that a comparable number of well-formed Chinese and English similes can be mined from the web (our exploration finds approx. 12,000 unique examples of each). This demonstrates that harvesting stereotypical knowledge from similes is a workable strategy in both languages. Moreover, Chinese simile usage is characterized by two interesting facts that are of some practical import: the simile frame “像NOUN 一样ADJ” is a good deal less leaky and prone to noise than the equivalent English frame, “as ADJ as a NOUN”; and Chinese speakers appear less willing to subvert the stereotypical norms of similes for ironic purposes. Further research is needed to determine whether these observations generalize to other knowledgemining patterns. References A. Almuhareb and M. Poesio. 2004. Attribute-Based and Value-Based Clustering: An Evaluation. In proceedings of EMNLP 2004, pp 158–165. Barcelona, Spain. A. Almuhareb and M. Poesio. 2005. Concept Learning and Categorization from the Web. In proceedings of CogSci 2005, the 27th Annual Conference of the Cognitive Science Society. New Jersey: Lawrence Erlbaum. C. Dickens. 1843/1981. A Christmas Carol. Puffin Books, Middlesex, UK. C. Fellbaum. 1998. WordNet, an electronic lexical database. MIT Press. E. Charniak and M. Berland. 1999. Finding parts in 530 very large corpora. In proceedings of the 37th Annual Meeting of the ACL, pp 57-64. F. Keller, M. Lapata, and O. Ourioupina. 2002. Using the web to overcome data sparseness. In proceedings of EMNLP-02, pp 230-237. F. Keller, M. Lapata, and O. Ourioupina. 1990. Building large knowledge-based systems: representation and inference in the Cyc project. Addison-Wesley. G. Karypis. 2003. CLUTO: A clustering toolkit. University of Minnesota. J. L. Packard. 2000. 
The Morphology of Chinese: A Linguistic and Cognitive Approach. Cambridge University Press, UK. J. Pustejovsky. 1991. The generative lexicon. Computational Linguistics 17(4), pp 209-441. J. V¨olker, D. Vrandecic and Y. Sure. 2005. Automatic Evaluation of Ontologies (AEON). In Y. Gil, E. Motta, V. R. Benjamins, M. A. Musen, Proceedings of the 4th International Semantic Web Conference (ISWC2005), volume 3729 of LNCS, pp. 716-731. Springer Verlag Berlin-Heidelberg. M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In proceedings of the 14th intenatinal conference on Computational Linguistics, pp 539-545. O. Etzioni, S. Kok, S. Soderland, M. Cafarella, A-M. Popescu, D. Weld, D. Downey, T. Shaked and A. Yates. 2004. Web-scale information extraction in KnowItAll (preliminary results). In proceedings of the 13th WWW Conference, pp 100-109. P. Cimiano and J. Wenderoth. 2007. Automatic Acquisition of Ranked Qualia Structures from the Web. In proceedings of the 45th Annual Meeting of the ACL, pp 888–895. P. Resnik and N. A. Smith. 2003. The Web as a parallel corpus. Computational Linguistics, 29(3),pp 349-380. S. Harabagiu, G. Miller and D. Moldovan. 1999. WordNet2 - a morphologically and semantically enhanced resource. In proceedings of SIGLEX-99, pp 1-8, University of Maryland. T. Veale and Y. Hao. 2007. Making Lexical Ontologies Functional and Context-Sensitive. In proceedings of the 45th Annual Meeting of the ACL, pp 57-64. Z. Dong and Q. Dong. 2006. HowNet and the Computation of Meaning. World Scientific: Singapore. 531
Proceedings of ACL-08: HLT, pages 532–540, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Semi-supervised Convex Training for Dependency Parsing Qin Iris Wang Department of Computing Science University of Alberta Edmonton, AB, Canada, T6G 2E8 [email protected] Dale Schuurmans Department of Computing Science University of Alberta Edmonton, AB, Canada, T6G 2E8 [email protected] Dekang Lin Google Inc. 1600 Amphitheatre Parkway Mountain View, CA, USA, 94043 [email protected] Abstract We present a novel semi-supervised training algorithm for learning dependency parsers. By combining a supervised large margin loss with an unsupervised least squares loss, a discriminative, convex, semi-supervised learning algorithm can be obtained that is applicable to large-scale problems. To demonstrate the benefits of this approach, we apply the technique to learning dependency parsers from combined labeled and unlabeled corpora. Using a stochastic gradient descent algorithm, a parsing model can be efficiently learned from semi-supervised data that significantly outperforms corresponding supervised methods. 1 Introduction Supervised learning algorithms still represent the state of the art approach for inferring dependency parsers from data (McDonald et al., 2005a; McDonald and Pereira, 2006; Wang et al., 2007). However, a key drawback of supervised training algorithms is their dependence on labeled data, which is usually very difficult to obtain. Perceiving the limitation of supervised learning—in particular, the heavy dependence on annotated corpora—many researchers have investigated semi-supervised learning techniques that can take both labeled and unlabeled training data as input. Following the common theme of “more data is better data” we also use both a limited labeled corpora and a plentiful unlabeled data resource. Our goal is to obtain better performance than a purely supervised approach without unreasonable computational effort. Unfortunately, although significant recent progress has been made in the area of semi-supervised learning, the performance of semi-supervised learning algorithms still fall far short of expectations, particularly in challenging real-world tasks such as natural language parsing or machine translation. A large number of distinct approaches to semisupervised training algorithms have been investigated in the literature (Bennett and Demiriz, 1998; Zhu et al., 2003; Altun et al., 2005; Mann and McCallum, 2007). Among the most prominent approaches are self-training, generative models, semisupervised support vector machines (S3VM), graphbased algorithms and multi-view algorithms (Zhu, 2005). Self-training is a commonly used technique for semi-supervised learning that has been ap532 plied to several natural language processing tasks (Yarowsky, 1995; Charniak, 1997; Steedman et al., 2003). The basic idea is to bootstrap a supervised learning algorithm by alternating between inferring the missing label information and retraining. Recently, McClosky et al. (2006a) successfully applied self-training to parsing by exploiting available unlabeled data, and obtained remarkable results when the same technique was applied to parser adaptation (McClosky et al., 2006b). More recently, Haffari and Sarkar (2007) have extended the work of Abney (2004) and given a better mathematical understanding of self-training algorithms. They also show connections between these algorithms and other related machine learning algorithms. 
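For readers unfamiliar with the technique, the self-training scheme sketched above can be summarised in a few lines of Python; train and predict are placeholder interfaces rather than any particular parser's API, and practical systems typically add confidence filtering when selecting the automatically labeled examples.

def self_train(labeled, unlabeled, train, predict, rounds=5):
    # labeled: list of (x, y) pairs; unlabeled: list of x only
    model = train(labeled)                                    # initial supervised model
    for _ in range(rounds):
        pseudo = [(x, predict(model, x)) for x in unlabeled]  # guess the missing labels
        model = train(labeled + pseudo)                       # retrain on the union
    return model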
Another approach, generative probabilistic models, are a well-studied framework that can be extremely effective. However, generative models use the EM algorithm for parameter estimation in the presence of missing labels, which is notoriously prone to getting stuck in poor local optima. Moreover, EM optimizes a marginal likelihood score that is not discriminative. Consequently, most previous work that has attempted semi-supervised or unsupervised approaches to parsing have not produced results beyond the state of the art supervised results (Klein and Manning, 2002; Klein and Manning, 2004). Subsequently, alternative estimation strategies for unsupervised learning have been proposed, such as Contrastive Estimation (CE) by Smith and Eisner (2005). Contrastive Estimation is a generalization of EM, by defining a notion of learner guidance. It makes use of a set of examples (its neighborhood) that are similar in some way to an observed example, requiring the learner to move probability mass to a given example, taking only from the example’s neighborhood. Nevertheless, CE still suffers from shortcomings, including local minima. In recent years, SVMs have demonstrated state of the art results in many supervised learning tasks. As a result, many researchers have put effort on developing algorithms for semi-supervised SVMs (S3VMs) (Bennett and Demiriz, 1998; Altun et al., 2005). However, the standard objective of an S3VM is non-convex on the unlabeled data, thus requiring sophisticated global optimization heuristics to obtain reasonable solutions. A number of researchers have proposed several efficient approximation algorithms for S3VMs (Bennett and Demiriz, 1998; Chapelle and Zien, 2005; Xu and Schuurmans, 2005). For example, Chapelle and Zien (2005) propose an algorithm that smoothes the objective with a Gaussian function, and then performs a gradient descent search in the primal space to achieve a local solution. An alternative approach is proposed by Xu and Schuurmans (2005) who formulate a semi-definite programming (SDP) approach. In particular, they present an algorithm for multiclass unsupervised and semi-supervised SVM learning, which relaxes the original non-convex objective into a close convex approximation, thereby allowing a global solution to be obtained. However, the computational cost of SDP is still quite expensive. Instead of devising various techniques for coping with non-convex loss functions, we approach the problem from a different perspective. We simply replace the non-convex loss on unlabeled data with an alternative loss that is jointly convex with respect to both the model parameters and (the encoding of) the self-trained prediction targets. More specifically, for the loss on the unlabeled data part, we substitute the original unsupervised structured SVM loss with a least squares loss, but keep constraints on the inferred prediction targets, which avoids trivialization. Although using a least squares loss function for classification appears misguided, there is a precedent for just this approach in the early pattern recognition literature (Duda et al., 2000). This loss function has the advantage that the entire training objective on both the labeled and unlabeled data now becomes convex, since it consists of a convex structured large margin loss on labeled data and a convex least squares loss on unlabeled data. 
As we will demonstrate below, this approach admits an efficient training procedure that can find a global minimum, and, perhaps surprisingly, can systematically improve the accuracy of supervised training approaches for learning dependency parsers. Thus, in this paper, we focus on semi-supervised language learning, where we can make use of both labeled and unlabeled data. In particular, we investigate a semi-supervised approach for structured large margin training, where the objective is a combination of two convex functions, the structured large margin loss on labeled data and the least squares loss on unlabeled data. We apply the result533 funds Investors continue to pour cash into money Figure 1: A dependency tree ing semi-supervised convex objective to dependency parsing, and obtain significant improvement over the corresponding supervised structured SVM. Note that our approach is different from the self-training technique proposed in (McClosky et al., 2006a), although both methods belong to semi-supervised training category. In the remainder of this paper, we first review the supervised structured large margin training technique. Then we introduce the standard semisupervised structured large margin objective, which is non-convex and difficult to optimize. Next we present a new semi-supervised training algorithm for structured SVMs which is convex optimization. Finally, we apply this algorithm to dependency parsing and show improved dependency parsing accuracy for both Chinese and English. 2 Dependency Parsing Model Given a sentence X = (x1, ..., xn) (xi denotes each word in the sentence), we are interested in computing a directed dependency tree, Y , over X. As shown in Figure 1, in a dependency structure, the basic units of a sentence are the syntactic relationships (aka. head-child or governor-dependent or regent-subordinate relations) between two individual words, where the relationships are expressed by drawing links connecting individual words (Manning and Schutze, 1999). The direction of each link points from a head word to a child word, and each word has one and only one head, except for the head of the sentence. Thus a dependency structure is actually a rooted, directed tree. We assume that a directed dependency tree Y consists of ordered pairs (xi →xj) of words in X such that each word appears in at least one pair and each word has in-degree at most one. Dependency trees are assumed to be projective here, which means that if there is an arc (xi →xj), then xi is an ancestor of all the words between xi and xj.1 Let Φ(X) denote the set of all the directed, projective trees that span on X. The parser’s goal is then to find the most preferred parse; that is, a projective tree, Y ∈Φ(X), that obtains the highest “score”. In particular, one would assume that the score of a complete spanning tree Y for a given sentence, whether probabilistically motivated or not, can be decomposed as a sum of local scores for each link (a word pair) (Eisner, 1996; Eisner and Satta, 1999; McDonald et al., 2005a). Given this assumption, the parsing problem reduces to find Y ∗ = arg max Y ∈Φ(X) score(Y |X) (1) = arg max Y ∈Φ(X) X (xi→xj)∈Y score(xi →xj) where the score(xi →xj) can depend on any measurable property of xi and xj within the sentence X. This formulation is sufficiently general to capture most dependency parsing models, including probabilistic dependency models (Eisner, 1996; Wang et al., 2005) as well as non-probabilistic models (McDonald et al., 2005a). 
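The decomposition in (1) can be made explicit with a short sketch: the score of a candidate tree is the sum of its individual head-to-child link scores, and the search for the best projective tree is carried out by a dynamic program (Section 7 notes that a standard CKY parser is used in the experiments) rather than the naive enumeration shown here. The link_score function is only a placeholder; the next paragraph instantiates it as a weighted feature sum.

def tree_score(sentence, heads, link_score):
    # heads[j] gives the index of the head of word j; the root position holds None
    return sum(link_score(sentence, h, j)
               for j, h in enumerate(heads) if h is not None)

def best_tree(sentence, candidate_trees, link_score):
    # illustrative only: real decoding searches the projective trees with a dynamic program
    return max(candidate_trees,
               key=lambda heads: tree_score(sentence, heads, link_score))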
For standard scoring functions, particularly those used in non-generative models, we further assume that the score of each link in (1) can be decomposed into a weighted linear combination of features score(xi →xj) = θ · f(xi →xj) (2) where f(xi →xj) is a feature vector for the link (xi →xj), and θ are the weight parameters to be estimated during training. 3 Supervised Structured Large Margin Training Supervised structured large margin training approaches have been applied to parsing and produce promising results (Taskar et al., 2004; McDonald et al., 2005a; Wang et al., 2006). In particular, structured large margin training can be expressed as minimizing a regularized loss (Hastie et al., 2004), as shown below: 1We assume all the dependency trees are projective in our work (just as some other researchers do), although in the real word, most languages are non-projective. 534 min θ β 2 θ⊤θ + (3) X i max Li,k (∆(Li,k, Yi) −diff(θ, Yi, Li,k)) where Yi is the target tree for sentence Xi; Li,k ranges over all possible alternative k trees in Φ(Xi); diff(θ, Yi, Li,k) = score(θ, Yi) −score(θ, Li,k); score(θ, Yi) = P (xm→xn)∈Yi θ · f(xm →xn), as shown in Section 2; and ∆(Li,k, Yi) is a measure of distance between the two trees Li,k and Yi. This is an application of the structured large margin training approach first proposed in (Taskar et al., 2003) and (Tsochantaridis et al., 2004). Using the techniques of Hastie et al. (2004) one can show that minimizing the objective (3) is equivalent to solving the quadratic program min θ,ξ β 2 θ⊤θ + e⊤ξ subject to ξi,k ≥∆(Li,k, Yi) −diff(θ, Yi, Li,k) ξi,k ≥0 for all i, Li,k ∈Φ(Xi) (4) where e denotes the vector of all 1’s and ξ represents slack variables. This approach corresponds to the training problem posed in (McDonald et al., 2005a) and has yielded the best published results for English dependency parsing. To compare with the new semi-supervised approach we will present in Section 5 below, we reimplemented the supervised structured large margin training approach in the experiments in Section 7. More specifically, we solve the following quadratic program, which is based on Equation (3) min θ α 2 θ⊤θ + X i max L k X m=1 k X n=1 ∆(Li,m,n, Yi,m,n) −diff(θ, Yi,m,n, Li,m,n) (5) where diff(θ, Yi,m,n, Li,m,n) = score(θ, Yi,m,n) − score(θ, Li,m,n) and k is the sentence length. We represent a dependency tree as a k × k adjacency matrix. In the adjacency matrix, the value of Yi,m,n is 1 if the word m is the head of the word n, 0 otherwise. Since both the distance function ∆(Li, Yi) and the score function decompose over links, solving (5) is equivalent to solve the original constrained quadratic program shown in (4). 4 Semi-supervised Structured Large Margin Objective The objective of standard semi-supervised structured SVM is a combination of structured large margin losses on both labeled and unlabeled data. It has the following form: min θ α 2 θ⊤θ + N X i=1 structured loss (θ, Xi, Yi) + min Yj U X j=1 structured loss (θ, Xj, Yj) (6) where structured loss (θ, Xi, Yi) = max L k X m=1 k X n=1 ∆(Li,m,n, Yi,m,n) (7) −diff(θ, Yi,m,n, Li,m,n) N and U are the number of labeled and unlabeled training sentences respectively, and Yj ranges over guessed targets on the unsupervised data. In the second term of the above objective shown in (6), both θ and Yj are variables. The resulting loss function has a hat shape (usually called hat-loss), which is non-convex. Therefore the objective as a whole is non-convex, making the search for global optimal difficult. 
Note that the root of the optimization difficulty for S3VMs is the non-convex property of the second term in the objective function. We will propose a novel approach which can deal with this problem. We introduce an efficient approximation— least squares loss—for the structured large margin loss on unlabeled data below. 5 Semi-supervised Convex Training for Structured SVM Although semi-supervised structured SVM learning has been an active research area, semi-supervised structured SVMs have not been used in many real applications to date. The main reason is that most available semi-supervised large margin learning approaches are non-convex or computationally expensive (e.g. (Xu and Schuurmans, 2005)). These techniques are difficult to implement and extremely hard to scale up. We present a semi-supervised algorithm 535 for structured large margin training, whose objective is a combination of two convex terms: the supervised structured large margin loss on labeled data and the cheap least squares loss on unlabeled data. The combined objective is still convex, easy to optimize and much cheaper to implement. 5.1 Least Squares Convex Objective Before we introduce the new algorithm, we first introduce a convex loss which we apply it to unlabeled training data for the semi-supervised structured large margin objective which we will introduce in Section 5.2 below. More specifically, we use a structured least squares loss to approximate the structured large margin loss on unlabeled data. The corresponding objective is: min θ,Yj α 2 θ⊤θ + (8) λ 2 U X j=1 k X m=1 k X n=1  θ⊤f(Xj,m →Xj,n) −Yj,m,n 2 subject to constraints on Y (explained below). The idea behind this objective is that for each possible link (Xj,m →Xj,n), we intend to minimize the difference between the link and the corresponding estimated link based on the learned weight vector. Since this is conducted on unlabeled data, we need to estimate both θ and Yj to solve the optimization problem. As mentioned in Section 3, a dependency tree Yj is represented as an adjacency matrix. Thus we need to enforce some constraints in the adjacency matrix to make sure that each Yj satisfies the dependency tree constraints. These constraints are critical because they prevent (8) from having a trivial solution in Y. More concretely, suppose we use rows to denote heads and columns to denote children. Then we have the following constraints on the adjacency matrix: • (1) All entries in Yj are between 0 and 1 (convex relaxation of discrete directed edge indicators); • (2) The sum over all the entries on each column is equal to one (one-head rule); • (3) All the entries on the diagonal are zeros (no self-link rule); • (4) Yj,m,n + Yj,n,m ≤1 (anti-symmetric rule), which enforces directedness. One final constraint that is sufficient to ensure that a directed tree is obtained, is connectedness (i.e. acyclicity), which can be enforced with an additional semidefinite constraint. Although convex, this constraint is more expensive to enforce, therefore we drop it in our experiments below. (However, adding the semidefinite connectedness constraint appears to be feasible on a sentence by sentence level.) Critically, the objective (8) is jointly convex in both the weights θ and the edge indicator variables Y. This means, for example, that there are no local minima in (8)—any iterative improvement strategy, if it converges at all, must converge to a global minimum. 
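As an illustration, the relaxed edge variables and the four constraints just listed can be written down directly with an off-the-shelf convex modeling package. The sketch below handles the Y-update for a single unlabeled sentence with the weights held fixed; it uses cvxpy purely for illustration (Section 6 below solves this step with CPLEX), and it assumes the current link scores S[m, n] = θ⊤f(Xj,m →Xj,n) have already been computed.

import numpy as np
import cvxpy as cp

def solve_edge_variables(S):
    # S[m, n] is the current model score for the link (word m -> word n)
    k = S.shape[0]
    Y = cp.Variable((k, k))
    constraints = [
        Y >= 0, Y <= 1,           # (1) relaxed 0/1 edge indicators
        cp.sum(Y, axis=0) == 1,   # (2) one-head rule: each column sums to one
        cp.diag(Y) == 0,          # (3) no self-links
        Y + Y.T <= 1,             # (4) anti-symmetry (directedness)
    ]
    # up to the constant lambda/2, this is the least squares term of Eq. (8) in Y;
    # the semidefinite connectedness constraint is omitted, as in the paper
    cp.Problem(cp.Minimize(cp.sum_squares(S - Y)), constraints).solve()
    return Y.value

print(solve_edge_variables(np.random.randn(4, 4)))  # toy 4-word sentence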
5.2 Semi-supervised Convex Objective By combining the convex structured SVM loss on labeled data (shown in Equation (5)) and the convex least squares loss on unlabeled data (shown in Equation (8)), we obtain a semi-supervised structured large margin loss min θ,Yj α 2 θ⊤θ + N X i=1 structured loss (θ, Xi, Yi) + U X j=1 least squares loss (θ, Xj, Yj) (9) subject to constraints on Y (explained above). Since the summation of two convex functions is also convex, so is (9). Replacing the two losses with the terms shown in Equation (5) and Equation (8), we obtain the final convex objective as follows: min θ,Yj α 2N θ⊤θ + N X i=1 max L k X m=1 k X n=1 ∆(Li,m,n, Yi,m,n) − diff(θ, Yi,m,n, Li,m,n) + α 2U θ⊤θ + (10) λ 2 U X j=1 k X m=1 k X n=1  θ⊤f(Xj,m →Xj,n) −Yj,m,n 2 subject to constraints on Y (explained above), where diff(θ, Yi,m,n, Li,m,n) = score(θ, Yi,m,n) − 536 score(θ, Li,m,n), N and U are the number of labeled and unlabeled training sentences respectively, as we mentioned before. Note that in (10) we have split the regularizer into two parts; one for the supervised component of the objective, and the other for the unsupervised component. Thus the semi-supervised convex objective is regularized proportionally to the number of labeled and unlabeled training sentences. 6 Efficient Optimization Strategy To solve the convex optimization problem shown in Equation (10), we used a gradient descent approach which simply uses stochastic gradient steps. The procedure is as follows. • Step 0, initialize the Yj variables of each unlabeled sentence as a right-branching (leftheaded) chain model, i.e. the head of each word is its left neighbor. • Step 1, pass through all the labeled training sentences one by one. The parameters θ are updated based on each labeled sentence. • Step 2, based on the learned parameter weights from the labeled data, update θ and Yj on each unlabeled sentence alternatively: – treat Yj as a constant, update θ on each unlabeled sentence by taking a local gradient step; – treat θ as a constant, update Yj by calling the optimization software package CPLEX to solve for an optimal local solution. • Repeat the procedure of step 1 and step 2 until maximum iteration number has reached. This procedure works efficiently on the task of training a dependency parser. Although θ and Yj are updated locally on each sentence, progress in minimizing the total objective shown in Equation (10) is made in each iteration. In our experiments, the objective usually converges within 30 iterations. 7 Experimental Results Given a convex approach to semi-supervised structured large margin training, and an efficient training algorithm for achieving a global optimum, we now investigate its effectiveness for dependency parsing. In particular, we investigate the accuracy of the results it produces. We applied the resulting algorithm to learn dependency parsers for both English and Chinese. 7.1 Experimental Design Data Sets Since we use a semi-supervised approach, both labeled and unlabeled training data are needed. For experiment on English, we used the English Penn Treebank (PTB) (Marcus et al., 1993) and the constituency structures were converted to dependency trees using the same rules as (Yamada and Matsumoto, 2003). The standard training set of PTB was spit into 2 parts: labeled training data—the first 30k sentences in section 2-21, and unlabeled training data—the remaining sentences in section 2-21. 
For Chinese, we experimented on the Penn Chinese Treebank 4.0 (CTB4) (Palmer et al., 2004) and we used the rules in (Bikel, 2004) for conversion. We also divided the standard training set into 2 parts: sentences in section 400-931 and sentences in section 1-270 are used as labeled and unlabeled data respectively. For both English and Chinese, we adopted the standard development and test sets throughout the literature. As listed in Table 1 with greater detail, we experimented with sets of data with different sentence length: PTB-10/CTB4-10, PTB-15/CTB4-15, PTB-20/CTB4-20, CTB4-40 and CTB4, which contain sentences with up to 10, 15, 20, 40 and all words respectively. Features For simplicity, in current work, we only used two sets of features—word-pair and tag-pair indicator features, which are a subset of features used by other researchers on dependency parsing (McDonald et al., 2005a; Wang et al., 2007). Although our algorithms can take arbitrary features, by only using these simple features, we already obtained very promising results on dependency parsing using both the supervised and semi-supervised approaches. Using the full set of features described in (McDonald et al., 2005a; Wang et al., 2007) and comparing the corresponding dependency parsing 537 English PTB-10 Training(l/ul) 3026/1016 Dev 163 Test 270 PTB-15 Training 7303/2370 Dev 421 Test 603 PTB-20 Training 12519/4003 Dev 725 Test 1034 Chinese CTB4-10 Training(l/ul) 642/347 Dev 61 Test 40 CTB4-15 Training 1262/727 Dev 112 Test 83 CTB4-20 Training 2038/1150 Dev 163 Test 118 CTB4-40 Training 4400/2452 Dev 274 Test 240 CTB4 Training 5314/2977 Dev 300 Test 289 Table 1: Size of Experimental Data (# of sentences) results with previous work remains a direction for future work. Dependency Parsing Algorithms For simplicity of implementation, we use a standard CKY parser in the experiments, although Eisner’s algorithm (Eisner, 1996) and the Spanning Tree algorithm (McDonald et al., 2005b) are also applicable. 7.2 Results We evaluate parsing accuracy by comparing the directed dependency links in the parser output against the directed links in the treebank. The parameters α and λ which appear in Equation (10) were tuned on the development set. Note that, during training, we only used the raw sentences of the unlabeled data. As shown in Table 2 and Table 3, for each data set, the semi-supervised approach achieves a significant improvement over the supervised one in dependency parsing accuracy on both Chinese and English. These positive results are somewhat surprising since a very simple loss function was used on Training Test length Supervised Semi-sup Train-10 ≤10 82.98 84.50 Train-15 ≤10 84.80 86.93 ≤15 76.96 80.79 Train-20 ≤10 84.50 86.32 ≤15 78.77 80.57 ≤20 74.89 77.85 Train-40 ≤10 84.19 85.71 ≤15 78.03 81.21 ≤20 76.25 77.79 ≤40 68.17 70.90 Train-all ≤10 82.67 84.80 ≤15 77.92 79.30 ≤20 77.30 77.24 ≤40 70.11 71.90 all 66.30 67.35 Table 2: Supervised and Semi-supervised Dependency Parsing Accuracy on Chinese (%) Training Test length Supervised Semi-sup Train-10 ≤10 87.77 89.17 Train-15 ≤10 88.06 89.31 ≤15 81.10 83.37 Train-20 ≤10 88.78 90.61 ≤15 83.00 83.87 ≤20 77.70 79.09 Table 3: Supervised and Semi-supervised Dependency Parsing Accuracy on English (%) 538 the unlabeled data. A key benefit of the approach is that a straightforward training algorithm can be used to obtain global solutions. 
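That straightforward training algorithm (Section 6) can be sketched compactly. The following illustrative Python outlines the alternating updates; `supervised_subgradient` and `solve_relaxed_Y` are hypothetical placeholders for the structured large margin update of Equation (5) and for the constrained solve that the paper delegates to CPLEX, and the sentence attributes are assumptions of this sketch rather than the authors' data structures.

import numpy as np

def train_semi_supervised(labeled, unlabeled, dim,
                          alpha=1.0, lam=1.0, lr=0.01, max_iter=30):
    """Sketch of the alternating optimization in Section 6 (names illustrative).

    Each sentence object is assumed to carry `length` and `feats`, where
    feats has shape (k, k, d).
    """
    theta = np.zeros(dim)

    # Step 0: initialize each unlabeled Y as a right-branching chain
    # (the head of every word is its left neighbour).
    Ys = []
    for sent in unlabeled:
        Y = np.zeros((sent.length, sent.length))
        for n in range(1, sent.length):
            Y[n - 1, n] = 1.0
        Ys.append(Y)

    for _ in range(max_iter):
        # Step 1: stochastic (sub)gradient steps on the labeled loss.
        for sent in labeled:
            theta -= lr * supervised_subgradient(theta, sent, alpha)

        # Step 2: alternate theta / Y updates on each unlabeled sentence.
        for sent, Y in zip(unlabeled, Ys):
            # (a) Y fixed: local gradient step on the least squares term.
            scores = sent.feats @ theta
            grad = lam * np.einsum('mn,mnd->d', scores - Y, sent.feats) + alpha * theta
            theta -= lr * grad
            # (b) theta fixed: re-solve for Y under constraints (1)-(4)
            # (a QP/LP solver stands in for the CPLEX call).
            Y[:] = solve_relaxed_Y(theta, sent)

    return theta, Ys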
Note that the results of our model are not directly comparable with previous parsing results shown in (McClosky et al., 2006a), since the parsing accuracy is measured in terms of dependency relations while their results are f-score of the bracketings implied in the phrase structure. 8 Conclusion and Future Work In this paper, we have presented a novel algorithm for semi-supervised structured large margin training. Unlike previous proposed approaches, we introduce a convex objective for the semi-supervised learning algorithm by combining a convex structured SVM loss and a convex least square loss. This new semisupervised algorithm is much more computationally efficient and can easily scale up. We have proved our hypothesis by applying the algorithm to the significant task of dependency parsing. The experimental results show that the proposed semi-supervised large margin training algorithm outperforms the supervised one, without much additional computational cost. There remain many directions for future work. One obvious direction is to use the whole Penn Treebank as labeled data and use some other unannotated data source as unlabeled data for semi-supervised training. Next, as we mentioned before, a much richer feature set can be used in our model to get better dependency parsing results. Another direction is to apply the semi-supervised algorithm to other natural language problems, such as machine translation, topic segmentation and chunking. In these areas, there are only limited annotated data available. Therefore semi-supervised approaches are necessary to achieve better performance. The proposed semi-supervised convex training approach can be easily applied to these tasks. Acknowledgments We thank the anonymous reviewers for their useful comments. Research is supported by the Alberta Ingenuity Center for Machine Learning, NSERC, MITACS, CFI and the Canada Research Chairs program. The first author was also funded by the Queen Elizabeth II Graduate Scholarship. References S. Abney. 2004. Understanding the yarowsky algorithm. Computational Linguistics, 30(3):365–395. Y. Altun, D. McAllester, and M. Belkin. 2005. Maximum margin semi-supervised learning for structured variables. In Proceedings of Advances in Neural Information Processing Systems 18. K. Bennett and A. Demiriz. 1998. Semi-supervised support vector machines. In Proceedings of Advances in Neural Information Processing Systems 11. D. Bikel. 2004. Intricacies of Collins’ parsing model. Computational Linguistics, 30(4). O. Chapelle and A. Zien. 2005. Semi-supervised classification by low density separation. In Proceedings of the Tenth International Workshop on Artificial Inteligence and Statistics. E. Charniak. 1997. Statistical parsing with a contextfree grammar and word statistics. In Proceedings of the Association for the Advancement of Artificial Intelligence, pages 598–603. R. Duda, P. Hart, and D. Stork. 2000. Pattern Classification. Wiley, second edition. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head-automaton grammars. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. J. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the International Conference on Computational Linguistics. G. Haffari and A. Sarkar. 2007. Analysis of semisupervised learning with the yarowsky algorithm. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. T. Hastie, S. Rosset, R. 
Tibshirani, and J. Zhu. 2004. The entire regularization path for the support vector machine. Journal of Machine Learning Research, 5:1391–1415. D. Klein and C. Manning. 2002. A generative constituent-context model for improved grammar induction. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. D. Klein and C. Manning. 2004. Corpus-based induction of syntactic structure: Models of dependency and constituency. In Proceedingsof the Annual Meeting of the Association for Computational Linguistics. G. S. Mann and A. McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In Proceedings of International Conference on Machine Learning. C. Manning and H. Schutze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. 539 M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313–330. D. McClosky, E. Charniak, and M. Johnson. 2006a. Effective self-training for parsing. In Proceedings of the Human Language Technology: the Annual Conference of the North American Chapter of the Association for Computational Linguistics. D. McClosky, E. Charniak, and M. Johnson. 2006b. Reranking and self-training for parser adaptation. In Proceedings of the International Conference on Computational Linguistics and the Annual Meeting of the Association for Computational Linguistics. R. McDonald and F. Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of European Chapter of the Annual Meeting of the Association for Computational Linguistics. R. McDonald, K. Crammer, and F. Pereira. 2005a. Online large-margin training of dependency parsers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. R. McDonald, F. Pereira, K. Ribarov, and J. Hajic. 2005b. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of Human Language Technologies and Conference on Empirical Methods in Natural Language Processing. M. Palmer et al. 2004. Chinese Treebank 4.0. Linguistic Data Consortium. N. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. M. Steedman, M. Osborne, A. Sarkar, S. Clark, R. Hwa, J. Hockenmaier, P. Ruhlen, S. Baker, and J. Crim. 2003. Bootstrapping statistical parsers from small datasets. In Proceedings of the European Chapter of the Annual Meeting of the Association for Computational Linguistics, pages 331–338. B. Taskar, C. Guestrin, and D. Koller. 2003. Maxmargin Markov networks. In Proceedings of Advances in Neural Information Processing Systems 16. B. Taskar, D. Klein, M. Collins, D. Koller, and C. Manning. 2004. Max-margin parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. I. Tsochantaridis, T. Hofmann, T. Joachims, and Y. Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of International Conference on Machine Learning. Q. Wang, D. Schuurmans, and D. Lin. 2005. Strictly lexical dependency parsing. In Proceedings of the International Workshop on Parsing Technologies, pages 152–159. Q. Wang, C. Cherry, D. Lizotte, and D. Schuurmans. 2006. Improved large margin dependency parsing via local constraints and Laplacian regularization. 
In Proceedings of The Conference on Computational Natural Language Learning, pages 21–28. Q. Wang, D. Lin, and D. Schuurmans. 2007. Simple training of dependency parsers via structured boosting. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1756–1762. L. Xu and D. Schuurmans. 2005. Unsupervised and semi-supervised multi-class support vector machines. In Proceedings the Association for the Advancement of Artificial Intelligence. H. Yamada and Y. Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of the International Workshop on Parsing Technologies. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, pages 189–196, Cambridge, Massachusetts. X. Zhu, Z. Ghahramani, and J. Lafferty. 2003. Semisupervised learning using Gaussian fields and harmonic functions. In Proceedings of International Conference on Machine Learning. X. Zhu. 2005. Semi-supervised learning literature survey. Technical report, Computer Sciences, University of Wisconsin-Madison. 540
Proceedings of ACL-08: HLT, pages 541–549, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Chinese-English Backward Transliteration Assisted with Mining Monolingual Web Pages Fan Yang, Jun Zhao, Bo Zou, Kang Liu, Feifan Liu National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China {fyang,jzhao,bzou,kliu,ffliu}@nlpr.ia.ac.cn Abstract In this paper, we present a novel backward transliteration approach which can further assist the existing statistical model by mining monolingual web resources. Firstly, we employ the syllable-based search to revise the transliteration candidates from the statistical model. By mapping all of them into existing words, we can filter or correct some pseudo candidates and improve the overall recall. Secondly, an AdaBoost model is used to rerank the revised candidates based on the information extracted from monolingual web pages. To get a better precision during the reranking process, a variety of web-based information is exploited to adjust the ranking score, so that some candidates which are less possible to be transliteration names will be assigned with lower ranks. The experimental results show that the proposed framework can significantly outperform the baseline transliteration system in both precision and recall. 1 Introduction* The task of Name Entity (NE) translation is to translate a name entity from source language to target language, which plays an important role in machine translation and cross-language information retrieval (CLIR). Transliteration is a subtask in NE translation, which translates NEs based on the phonetic similarity. In NE translation, most person names are transliterated, and some parts of location names or organization names also need to be transliterated. Transliteration has two directions: forward transliteration which transforms an original name into target language, and backward transliteration which recovers a name back to its original expression. For instance, the original English per *Contact: Jun ZHAO, [email protected]. son name “Clinton” can be forward transliterated to its Chinese expression “克/ke 林/lin顿/dun” and the backward transliteration is the inverse processing. In this paper, we focus on backward transliteration from Chinese to English. Many previous researches have tried to build a transliteration model using statistical approach [Knight and Graehl, 1998; Lin and Chen, 2002; Virga and Khudanpur, 2003; Gao, 2004]. There are two main challenges in statistical backward transliteration: First, statistical transliteration approach selects the most probable translations based on the knowledge learned from the training data. This approach, however, does not work well when there are multiple standards [Gao, 2004]. Second, backward transliteration is more challenging than forward transliteration as it is required to disambiguate the noises introduced in the forward transliteration and estimate the original name as close as possible [Lin and Chen, 2002]. One of the most important causes in introducing noises is that: some silent syllables in original names have been missing when they are transliterated to target language. For example, when “Campbell” is transliterated into “坎/kan贝/bei尔/er”, the “p” is missing. In order to make up the disadvantages of statistical approach, some researchers have been seeking for the assistance of web resource. 
[Wang et al., 2004; Cheng et al., 2004; Nagata et al., 2001; Zhang et al, 2005] used bilingual web pages to extract translation pairs. Other efforts have been made to combine a statistical transliteration model with web mining [Al-Onaizan and Knight, 2002; Long Jiang et al, 2007]. Most of these methods need bilingual resources. However, those kinds of resources are not readily available in many cases. Moreover, to search for bilingual pages, we have to depend on the performance of search engines. We can’t get Chinese-English bilingual pages when the input is a Chinese query. Therefore, the existing 541 assistance approaches using web-mining to assist transliteration are not suitable for Chinese to English backward transliteration. Thus in this paper, we mainly focus on the following two problems to be solved in transliteration. Problem I: Some silent syllables are missing in English-Chinese forward transliteration. How to recover them effectively and efficiently in backward transliteration is still an open problem. Problem II: Statistical transliteration always chooses the translations based on probabilities. However, in some cases, the correct translation may have lower probability. Therefore, more studies are needed on combination with other techniques as supplements. Aiming at these two problems, we propose a method which mines monolingual web resources to assist backward transliteration. The main ideas are as follows. We assume that for every Chinese entity name which needs to be backward transliterated to an English original name, the correct transliteration exists somewhere in the web. What we need to do is to find out the answers based on the clues given by statistical transliteration results. Different from the traditional methods which extract transliteration pairs from bilingual pages, we only use monolingual web resources. Our method has two advantages. Firstly, there are much more monolingual web resources available to be used. Secondly, our method can revise the transliteration candidates to the existing words before the subsequent re-ranking process, so that we can better mine the correct transliteration from the Web. Concretely, there are two phases involved in our approach. In the first phase, we split the result of transliteration into syllables, and then a syllablebased searching processing can be employed to revise the result in a word list generated from web pages, with an expectation of higher recall of transliteration. In the second phase, we use a revised word as a search query to get its contexts and hit information, which are integrated into the AdaBoost classifier to determine whether the word is a transliteration name or not with a confidence score. This phase can readjust the candidate’s score to a more reasonable point so that precision of transliteration can be improved. Table 1 illustrates how to transliterate the Chinese name “阿/a加/jia 西/xi” back to “Agassi”. Chinese name Transliteration results Revised Candidate Re-rank Results 阿加西 a jia xi Agassi aggasi agahi agacy agasie … agasi agathi agathe agassi … agassi agasi agache agga … Table 1. An example of transliteration flow The experimental results show that our approach improves the recall from 41.73% to 59.28% in open test when returning the top-100 results, and the top-5 precision is improved from 19.69% to 52.19%. The remainder of the paper is structured as follows. Section 2 presents the framework of our system. We discuss the details of our statistical transliteration model in Section 3. 
In Section 4, we introduce the approach of revising and re-ranking the results of transliteration. The experiments are reported in Section 5. The last section gives the conclusion and the prediction of future work. 2 System Framework Our system has three main modules. Figure 1. System framework 1) Statistical transliteration: This module receives a Chinese Pinyin sequence as its input, and output the N-best results as the transliteration candidates. 2) Candidate transliteration revision through syllable-based searching: In the module, a transliteration candidate is transformed into a syllable query. We use a syllable-based searching strategy to select the revised candidate from a huge word list. Each word in the list is indexed by syllables, and the similarity between the word and the query is calculated. The most similar words are returned as the revision results. This module guarMonolingual web pages Words list Chinese name Statistical model Transliteration candidates Syllable-based search Revised candidates Re-ranking phase Final results Search engine 542 antees the transliteration candidates are all existing words. 3) Revised candidate re-ranking in web pages: In the module, we search the revised candidates to get their contexts and hit information which we can use to score the probability of being a transliteration name. This phase doesn’t generate new candidates, but re-rank the revised candidate set to improve the performance in top-5. Under this framework, we can solve the two problems of statistical model mentioned above. (1) The silent syllables will be given lower weights in syllable-based search, so the missing syllables will be recovered through selecting the most similar existing words which can contain some silent syllables. (2) The query expansion technology can recall more potential transliteration candidates by expanding syllables to their “synonymies”. So the mistakes introduced when selecting syllables in statistical transliteration will be corrected through giving suitable weights to synonymies. Through the revision phase, the results of statistical model which may have illegal spelling will be mapped to its most similar existing words. That can improve the recall. In re-ranking phase, the revised candidate set will be re-ranked to put the right answer on the top using hybrid information got from web resources. So the precision of transliteration will be improved. 3 Statistical Transliteration Model We use syllables as translation units to build a statistical Chinese-English backward transliteration model in our system. 3.1 Traditional Statistical Translation Model [P. Brown et al., 1993] proposed an IBM sourcechannel model for statistical machine translation (SMT). When the channel output f= f1,f2 …. fn observed, we use formula (1) to seek for the original sentence e=e1,e2 …. en with the most likely posteriori. ' argmax ( | ) argmax ( | ) ( ) e e e P e f P f e P e = = (1) The translation model ( | ) P f e is estimated from a paired corpus of foreign-language sentences and their English translations. The language model ( ) P e is trained from English texts. 3.2 Our Transliteration Model The alignment method is the base of statistical transliteration model. There are mainly two kinds of alignment methods: phoneme-based alignment [Knight and Graehl, 1998; Virga and Khudanpur, 2003] and grapheme-based alignment [Long Jiang, 2007]. 
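To make the source–channel decomposition in formula (1) concrete, the following hedged Python sketch scores one candidate syllable sequence against a pinyin sequence, assuming precomputed translation and syllable language model tables and a one-to-one monotone alignment; these are simplifying assumptions of the sketch, not the paper's exact decoder.

import math

def channel_score(pinyin_seq, syllable_seq, p_py_given_es, p_es_bigram):
    """Log-score of an English syllable sequence for a Chinese pinyin sequence
    under a source-channel decomposition (illustrative sketch)."""
    assert len(pinyin_seq) == len(syllable_seq)
    score = 0.0
    prev = '<s>'
    for py, es in zip(pinyin_seq, syllable_seq):
        score += math.log(p_py_given_es.get((py, es), 1e-12))   # translation model P(py|es)
        score += math.log(p_es_bigram.get((prev, es), 1e-12))   # syllable LM P(es|prev)
        prev = es
    return score

# A decoder would enumerate candidate syllable sequences and keep the N best,
# e.g. max(candidates, key=lambda es_seq: channel_score(py_seq, es_seq, tm, lm)).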
In our system, we adopt the syllable-based alignment from Chinese pinyin to English syllables, where the syllabication rules mentioned in [Long Jiang et al., 2007] are used. For example, Chinese name “希/xi 尔/er 顿 /dun” and its backward transliteration “Hilton” can be aligned as follows. “Hilton” is split into syllable sequence as “hi/l/ton”, and the alignment pairs are “xi-hi”, “er-l”, “dun-ton”. Based on the above alignment method, we can get our statistical Chinese-English backward transliteration model as, arg max ( | ) ( ) E E p PY ES p ES = (2) Where, PY is a Chinese Pinyin sequence, ES is a English syllables sequence, ( | ) p PY ES is the probability of translating ES into PY, ( ) p ES is the generative probability of a English syllable language model. 3.3 The Difference between Backward Transliteration and Traditional Translation Chinese-English backward transliteration has some differences from traditional translation. 1) We don’t need to adjust the order of syllables when transliteration. 2) The language model in backward transliteration describes the relationship of syllables in words. It can’t work as well as the language model describing the word relationship in sentences. We think that the crucial problem in backward transliteration is selecting the right syllables at every step. It’s very hard to obtain the exact answer only based on the statistical transliteration model. We will try to improve the statistical model performance with the assistance of mining web resources. 4 Mining Monolingual Web Pages to Assist Backward Transliteration In order to get assistance from monolingual Web resource to improve statistical transliteration, our 543 method contains two main phases: “revision” and “re-ranking”. In the revision phase, transliteration candidates are revised using syllable-based search in the word list, which are generated by collecting the existing words in web pages. Because the process of named entity recognition may lose some NEs, we will reserve all the words in web corpus without any filtering. The revision process can improve the recall through correcting some mistakes in the transliteration results of statistical model. In the re-ranking phase, we search every revised candidate on English pages, score them according to their contexts and hit information so that the right answer will be given a higher rank. 4.1 Using Syllable-based Retrieval to Revise Transliteration Candidates In this section, we will propose two methods respectively for the two problems of statistical model mentioned in section 1. 4.1.1 Syllable-based retrieval model When we search a transliteration candidate tci in the word list, we firstly split it into syllables {es1,es2,…..esn}. Then this syllable sequence is used as a query for syllable-based searching. We define some notions here.  Term set T={t1,t2….tk} is an orderly set of all syllables which can be viewed as terms.  Pinyin set P={py1,py2….pyk} is an orderly set of all Pinyin.  An input word can be represented by a vector of syllables {es1,es2,…..esn}. We calculate the similarity between a transliteration result and each word in the list to select the most similar words as the revised candidates. The {es1,es2,…..,esn} will be transformed into a vector Vquery={t1,t2….tk} where ti represents the ith term in T. The value of ti is equal to 0 if the ith term doesn’t appear in query. In the same way, the word in list can also be transformed into vector representation. 
So the similarity can be calculated as the inner product between these two vectors. We don’t use tf and idf conceptions as traditional information retrieval (IR) to calculate the terms’ weight. We use the weight of ti to express the expectation probability of ith term having pronunciation. If the term has a lower probability of having pronunciation, its weight is low. So when we searching, the missing silent syllables in the results of statistical transliteration model can be recovered because such syllables have little impact on similarity measurement. The formula we used is as follows. ( , ) / query word word py V V Sim query word L L ! = (3) The numerator is the inner product of two vectors. The denominator is the length of word Lword divided by the length of Chinese pinyin sequence Lpy. In this formula, the more syllables in one word, the higher score of inner production it may get, but the word will get a loss for its longer length. The word which has the shortest length and the highest syllable hitting ratio will be the best. Another difference from traditional IR is how to deal with the order of the words in a query. According to transliteration, the similarity must be calculated under the limitation of keeping order, which can’t be satisfied by current methods. We use the algorithm like calculating the edit distance between two words. The syllables are viewed as the units which construct a word. The edit distance calculation finds the best matching with the least operation cost to change one word to another word by using deletion/addition/insertion operations on syllables. But the complexity will be too high to afford if we calculate the edit distance between a query and each word in the list. So, we just calculate the edit distance for the words which get high score without the order limitation. This trade off method can save much time but still keep performance. 4.1.2 Mining the Equivalent through Syllable Expansion In most collections, the same concept may be referred to using different words. This issue, known as synonymy, has an impact on the recall of most information retrieval systems. In this section, we try to use the expansion technology to solve problem II. There are three kinds of expansions to be explained below. Syllable expansion based on phonetic similarity: The syllables which correspond to the same Chinese pinyin can be viewed as synonymies. For example, the English syllables “din” and “tin” can be aligned to the same Chinese pinyin “ding”. Given a Chinese pinyin sequence {py1,py2,…..pyn} as the input of transliteration model, for every pyi, there are a set of syllables 544 {es1, es2 ….. esk} which can be selected as its translation. The statistical model will select the most probable one, while others containing the right answer are discarded. To solve this problem, we expand the query to take the synonymies of terms into consideration. We create an expansion set for each Chinese pinyin. A syllable esi will be selected into the expansion set of pyj based on the alignment probability P(esi|pyj) which can be extracted from the training corpus. The phonetic similarity expansion is based on the input Chinese Pinyin sequence, so it’s same for all candidates. Syllable expansion based on syllable similarity: If two syllables have similar alignment probability with every pinyin, we can view these two syllables as synonymy. Therefore, if a syllable is in the query, its synonymies should be contained too. For example, “fea” and “fe” can replace each other. 
To calculate the similarity, we first obtain the alignment probability P(pyj|esk) of every syllable. Then the distance between any two syllables will be calculated using formula (4). 1 1 ( , ) ( | ) ( | ) N j k i j i k i Sim es es P py es P py es N = = ! (4) This formula is used to evaluate the similarity of two syllables in alignment. The expansion set of the ith syllable can be generated by selecting the most similar N syllables. This kind of expansion is conducted upon the output of statistical transliteration model. Syllable expansion based on syllable edit distance: The disadvantage of last two expansions is that they are entirely dependent on the training set. In other word, if some syllables haven’t appeared in the training corpus, they will not be expanded. To solve the problem, we use the method of expansion based on edit distance. We use edit distance to measure the similarity between two syllables, one is in training set and the other is absent. Because the edit distance expansion is not very relevant to pronunciation, we will give this expansion method a low weight in combination. It works when new syllables arise. Combine the above three strategies: We will combine the three kinds of expansion method together. We use the linear interpolation to integrate them. The formulas are follows. (1 ) pre sy ed S S S S ! ! " = # + + (5) (1 ) pre py ed S S S S ! ! " = # + + (6) where Spre is the score of exact matching, Ssy is the score of expansion based on syllables similarity and Spy based on phonetic similarity. We will adjust these parameters to get the best performance. The experimental results and analysis will be reported in section 5.3. 4.2 Re-Ranking the Revised Candidates Set using the Monolingual Web Resource In the first phase, we have generated the revised candidate set {rc1,rc2,…,rcn} from the word list using the transliteration results as clues. The objective is to improve the overall recall. In the second phase, we try to improve the precision, i.e. we wish to re-rank the candidate set so that the correct answer will be put in a higher rank. [Al-Onaizan et al., 2002] has proposed some methods to re-score the transliteration candidates. The limitation of their approach is that some candidates are propbale not existing words, with which we will not get any information from web. So it can only re-rank the transliteration results to improve the precision of top-5. In our work, we can improve the recall of transliteration through the revising process before re-ranking. In this section, we employ the AdaBoost framework which integrates several kinds of features to re-rank the revised candidate set. The function of the AdaBoost classifier is to calculate the probability of the candidate being a NE. Then we can rerank the revised candidate set based on the score. The features used in our system are as follows. NE or not: Using rci as query to search for monolingual English Web Pages, we can get the context set {Ti1, Ti2……Tin} of rci. Then for every Tik, we use the named entity recognition (NER) software to determine whether rci is a NE or not. If rci is recognized as a NE in some Tik, rci will get a score. If rci can’t be recognized as NE in any contexts, it will be pruned. The hit of the revised candidate: We can get the hit information of rci from search engine. It is used to evaluate the importance of rci. Unlike [AlOnaizan et al., 2002], in which the hit can be used to eliminate the translation results which contain illegal spelling, we just use hit number as a feature. 
The limitation of compound NEs: When transliterating a compound NE, we always split them into several parts, and then combine their transliteration results together. But in this circumstance, 545 every part can add a limitation in the selection of the whole NE. For example: “希/xi拉/la里/li ⋅ 克 /ke林/lin顿/dun” is a compound name. “希/xi拉/la 里/li” can be transliterate to “Hilary” or “Hilaly” and “克/ke林/lin顿/dun” can be transliterate to “Clinton” or “Klinton”. But the combination of “Hilary⋅Clinton” will be selected for it is the most common combination. So the hit of combination query will be extracted as a feature in classifier. Hint words around the NE: We can take some hint words around the NE into the query, in order to add some limitations to filter out noisy words. For example: “总统 (president)” can be used as hint word for “克林顿 (Clinton)”. To find the hint words, we first search the Chinese name in Chinese web pages. The frequent words can be extracted as hint words and they will be translated to English using a bilingual dictionary. These hint words are combined with the revised candidates to search English web pages. So, the hit of the query will be extracted as feature. The formula of AdaBoost is as follow. 1 ( ) ( ( )) T t t t H x sign h x ! = = " (7) Where t ! is the weight for the ith weak classifier ( ) th x . t ! can be calculated based on the precision of its corresponding classifier. 5 Experiments We carry out experiments to investigate how much the revision process and the re-ranking process can improve the performance compared with the baseline of statistical transliteration model. We will also evaluate to which extents we can solve the two problems mentioned in section 1 with the assistance of Web resources. 5.1 Experimental data The training corpus for statistical transliteration model comes from the corpus of Chinese <-> English Name Entity Lists v 1.0 (LDC2005T34). It contains 565,935 transliteration pairs. Ruling out those pairs which are not suitable for the research on Chinese-English backward transliteration, such as Chinese-Japanese, we select a training set which contains 14,443 pairs of Chinese-European & American person names. In the training set, 1,344 pairs are selected randomly as the close test data. 1,294 pairs out of training set are selected as the open test data. To set up the word list, a 2GB-sized collection of web pages is used. Since 7.42% of the names in the test data don’t appear in the list, we use Google to get the web page containing the absent names and add these pages into the collection. The word list contains 672,533 words. 5.2 Revision phase vs. statistical approach Using the results generated from statistical model as baseline, we evaluate the revision module in recall first. The statistical transliteration model works in the following 4 steps: 1) Chinese name are transformed into pinyin representation and the English names are split into syllables. 2) The GIZA++1 tool is invoked to align pinyin to syllables, and the alignment probabilities ( | ) P py es are obtained. 3) Those frequent sequences of syllables are combined as phrases. For example, “be/r/g””berg”, “s/ky””sky”. 4) Camel 2 decoder is executed to generate 100-best candidates for every name. We compare the statistical transliteration results with the revised results in Table 2. From Table 2 we can find that the recall of top-100 after revision is improved by 13.26% in close test set and 17.55% in open test set. 
It proves that the revision module is effective for correcting the mistakes made in statistical transliteration model. Transliteration results Revised results close open close open Top1 33.64% 9.41% 27.15% 11.04% Top5 40.37% 13.38% 42.83% 19.69% Top10 47.79% 17.56% 56.98% 26.52% Top20 61.88% 25.44% 71.05% 37.81% Top50 66.49% 36.19% 82.16% 46.22% Top100 72.52% 41.73% 85.78% 59.28% Table 2. Statistical model vs. Revision module To show the effects of the revision on the two above-mentioned problems in which the statistical model does not solve well: the losing of silent syllables and the selection bias problem, we make a statistics of the improvements with a measurement of “correction time”. For a Chinese word whose correct transliteration appears in top-100 candidates only if it has been 1 http://www.fjoch.com/GIZA++.html 2 http://www.nlp.org.cn 546 revised, we count the “correction time”. For example, when “Argahi” is revised to “Agassi” the correction time is “1” for Problem II and “1” for Problem I, because in “hi” “si” the syllable is expanded, and in “si” ”ssi” an “s” is added. Close test Open test Problem I 0.6931 0.7853 Problem II 0.9264 1.1672 Table 3. Average time of correction This measurement reflects the efficiency of the revision of search strategy, in contrast to those spelling correction techniques in which several operations of “add” and “expand” are inevitable. It has proved that the more an average correction time is, the more efficient our strategy is. ! !"# !"$ !"% !"& !"' !"( !") !"* !"+ # # $ % & ' ( ) * + ,-./0-1 0232/02/4 Figure 2. Length influence in recall comparison The recall of the statistical model relies on the length of English name in some degree. It is more difficult to obtain an absolutely correct answer for longer names, because they may contain more silent and confused syllables. However, through the revision phase, this tendency can be effectively alleviated. In Figure 2, we make a comparison between the results of the statistical model and the revision module with the changing of syllable’s length in open test. The curves demonstrate that the revision indeed prevents the decrease of recall for longer names. 5.3 Parameter setting in the revision phase We will show the experimental results when setting different parameters for query expansion. In the expansion based on phonetic similarity, for every Chinese pinyin, we select at most 20 syllables to create an expansion set. We set 0.1 ! = in formula (5). The results are shown in the columns labeled “exp1” in Table 4. From the results we can conclude that, we get the best performance when 0.4 ! = . That means the performance is best when the weight of exact matching is a little larger than the weight of fuzzy matching. We can also see that, higher weight of exact matching will lead to low recall, while higher weight of fuzzy matching will bring noise in. The expansion method based on syllable similarity is also evaluated. For every syllable, we select at most 15 syllables to create the expansion set. We set 0.1 ! = . The results are shown in the columns labeled “exp2” in Table 4. From the results we can conclude that, we get the best performance when 0.5 ! = . It means that we can’t put emphasis on any matching methods. Comparison with the expansion based on phonetic similarity, the performance is poorer. It means that the expansion based on phonetic similarity is more suitable for revising transliteration candidates. 5.4 Revision phase vs. 
re-ranking phase After the phase of revising transliteration candidates, we re-rank the revised candidate set with the assistance of monolingual web resources. In this section, we will show the improvement in precision after re-ranking. We have selected four kinds of features to integrate in the AdaBoost framework. To determine whether the candidate is NE or not in its context, we use the software tool Lingpipe3. The queries are sent to google, so that we can get the hit of queries and the top-10 snippets will be extracted as context. The comparison of revision results and reranking results is shown as follows. Revised results Re-ranked results close open close open Top1 27.15% 11.04% 58.08% 38.63% Top5 42.83% 19.69% 76.35% 52.19% Top10 56.98% 26.52% 83..92% 54.33% Top20 71.05% 37.81% 83.92% 57.61% Top50 82.16% 46.22% 83.92% 57.61% Top100 85.78% 59.28% 85.78% 59.28% Table 5. Revision results vs. Re-ranking results From these results we can conclude that, after re-ranking phase, the noisy words will get a lower 3 http://www.alias-i.com/lingpipe/ 547 0.2 ! = 0.3 ! = 0.4 ! = 0.5 ! = 0.6 ! = 0.7 ! = 0.8 ! = exp1 exp2 exp1 exp2 exp1 exp2 exp1 exp2 exp1 exp2 exp1 exp2 exp1 exp2 Top1 13.46 13.32 13.79 13.61 11.04 12.70 11.65 10.93 10.83 11.25 9.62 10.63 8.73 10.18 Top5 21.58 19.59 23.27 20.17 19.69 18.28 21.07 17.25 22.05 16.84 17.90 16.26 17.38 15.34 Top10 27.39 22.71 28.41 24.73 26.52 22.93 26.83 21.81 27.26 20.39 24.38 21.20 25.42 18.20 Top20 35.23 34.88 35.94 29.49 37.81 31.57 38.59 33.04 36.52 31.72 35.25 29.75 34.65 27.62 Top50 43.91 40.63 43.75 40.85 46.22 41.46 48.72 42.79 45.48 40.49 41.57 39.94 42.81 38.07 Top100 53.76 48.47 54.38 52.04 59.28 53.15 57.36 53.46 55.19 51.83 55.63 49.52 53.41 47.15 Table 4. Parameters Experiment rank. Through the revision module, we get both higher recall and higher precision than statistical transliteration model when at most 5 results are returned. We also use the average rank and average reciprocal rank (ARR) [Voorhees and Tice, 2000] to evaluate the improvement. ARR is calculated as 1 1 1 ( ) M i ARR M R i = = ! (8) where ( ) R i is the rank of the answer of ith test word. M is the size of test set. The higher of ARR, the better the performance is. The results are shown as Table 6. Statistical model Revision module Re-rank Module close open close open close open Average rank 37.63 70.94 24.52 58.09 16.71 43.87 ARR 0.3815 0.1206 0.3783 0.1648 0.6519 0.4492 Table 6. ARR and AR evaluation The ARR after revision phase is lower than the statistical model. Because the goal of revision module is to improve the recall as possible as we can, some noisy words will be introduced in. The noisy words will be pruned in re-ranking module. That is why we get the highest ARR value at last. So we can conclude that the revision module improves recall and re-ranking module improves precision, which help us get a better performance than pure statistical transliteration model 6 Conclusion In this paper, we present a new approach which can revise the results generated from statistical transliteration model with the assistance of monolingual web resource. Through the revision process, the recall of transliteration results has been improved from 72.52% to 85.78% in the close test set and from 41.73% to 59.28% in open test set, respectively. We improve the precision in re-ranking phase, the top-5 precision can be improved to 76.35% in close test and 52.19% in open test. The promising results show that our approach works pretty well in the task of backward transliteration. 
In the future, we will try to improve the similarity measurement in the revision phase. And we also wish to develop a new approach using the transliteration candidates to search for their right answer more directly and effectively. Acknowledgments The work is supported by the National High Technology Development 863 Program of China under Grants no. 2006AA01Z144, the National Natural Science Foundation of China under Grants No. 60673042, the Natural Science Foundation of Beijing under Grants no. 4073043. References Yaser Al-Onaizan and Kevin Knight. 2002. Translating named entities using monolingual and bilingual resources. In Proc.of ACL-02. Kevin Knight and Jonathan Graehl. 1998. Machine Transliteration. Computational Linguistics 24(4). Wei-Hao Lin and Hsin-His Chen. 2002 Backward Machine Transliteration by Learning Phonetic Similarity. In Proc. Of the 6th CoNLL Donghui Feng, Yajuan Lv, and Ming Zhou. 2004. A New Approach for English-Chinese Named Entity Alignment. In Proc. of EMNLP-2004. Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu, 2007. Named Entity Translation with Web Mining and Transliteration. In Proc. of IJCAI-2007. Wei Gao. 2004. Phoneme-based Statistical Transliteration of Foreign Name for OOV Problem. A thesis of Master. The Chinese University of Hong Kong. Ying Zhang, Fei Huang, Stephan Vogel. 2005. Mining translations of OOV terms from the web through cross-lingual query expansion. SIGIR 2005. Pu-Jen Cheng, Wen-Hsiang Lu, Jer-Wen Teng, and Lee-Feng Chien. 2004 Creating Multilingual Translation Lexicons with Regional Variations Using Web Corpora. In Proc. of ACL-04 Masaaki Nagata, Teruka Saito, and Kenji Suzuki. 2001. Using the Web as a Bilingual Dictionary. In Proc. of ACL 2001 Workshop on Data-driven Methods in Machine Translation. 548 Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-lingual information retrieval. In Proc. of the ACL workshop on Multilingual Named Entity Recognition. Jenq-Haur Wang, Jei-Wen Teng, Pu-Jen Cheng, WenHsiang Lu, Lee-Feng Chien. 2004. Translating unknown cross-lingual queries in digital libraries using a web-based approach. In Proc. of JCDL 2004. E.M.Voorhees and D.M.Tice. 2000. The trec-8 question answering track report. In Eighth Text Retrieval Conference (TREC-8) 549
Proceedings of ACL-08: HLT, pages 550–558, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Robustness and Generalization of Role Sets: PropBank vs. VerbNet Be˜nat Zapirain and Eneko Agirre IXA NLP Group University of the Basque Country {benat.zapirain,e.agirre}@ehu.es Llu´ıs M`arquez TALP Research Center Technical University of Catalonia [email protected] Abstract This paper presents an empirical study on the robustness and generalization of two alternative role sets for semantic role labeling: PropBank numbered roles and VerbNet thematic roles. By testing a state–of–the–art SRL system with the two alternative role annotations, we show that the PropBank role set is more robust to the lack of verb–specific semantic information and generalizes better to infrequent and unseen predicates. Keeping in mind that thematic roles are better for application needs, we also tested the best way to generate VerbNet annotation. We conclude that tagging first PropBank roles and mapping into VerbNet roles is as effective as training and tagging directly on VerbNet, and more robust for domain shifts. 1 Introduction Semantic Role Labeling is the problem of analyzing clause predicates in open text by identifying arguments and tagging them with semantic labels indicating the role they play with respect to the verb. Such sentence–level semantic analysis allows to determine “who” did “what” to “whom”, “when” and “where”, and, thus, characterize the participants and properties of the events established by the predicates. This kind of semantic analysis is very interesting for a broad spectrum of NLP applications (information extraction, summarization, question answering, machine translation, etc.), since it opens the door to exploit the semantic relations among linguistic constituents. The properties of the semantically annotated corpora available have conditioned the type of research and systems that have been developed so far. PropBank (Palmer et al., 2005) is the most widely used corpus for training SRL systems, probably because it contains running text from the Penn Treebank corpus with annotations on all verbal predicates. Also, a few evaluation exercises on SRL have been conducted on this corpus in the CoNLL-2004 and 2005 conferences. However, a serious criticisms to the PropBank corpus refers to the role set it uses, which consists of a set of numbered core arguments, whose semantic translation is verb-dependent. While Arg0 and Arg1 are intended to indicate the general roles of Agent and Theme, other argument numbers do not generalize across verbs and do not correspond to general semantic roles. This fact might compromise generalization and portability of SRL systems, especially when the training corpus is small. More recently, a mapping from PropBank numbered arguments into VerbNet thematic roles has been developed and a version of the PropBank corpus with thematic roles has been released (Loper et al., 2007). Thematic roles represent a compact set of verb-independent general roles widely used in linguistic theory (e.g., Agent, Theme, Patient, Recipient, Cause, etc.). We foresee two advantages of using such thematic roles. On the one hand, statistical SRL systems trained from them could generalize better and, therefore, be more robust and portable, as suggested in (Yi et al., 2007). On the other hand, roles in a paradigm like VerbNet would allow for inferences over the assigned roles, which is only possible in a more limited way with PropBank. 
In a previous paper (Zapirain et al., 2008), we presented a first comparison between the two previous role sets on the SemEval-2007 Task 17 corpus (Pradhan et al., 2007). The SemEval-2007 corpus only 550 comprised examples about 50 different verbs. The results of that paper were, thus, considered preliminary, as they could depend on the small amount of data (both in training data and number of verbs) or the specific set of verbs being used. Now, we extend those experiments to the entire PropBank corpus, and we include two extra experiments on domain shifts (using the Brown corpus as test set) and on grouping VerbNet labels. More concretely, this paper explores two aspects of the problem. First, having in mind the claim that general thematic roles should be more robust to changing domains and unseen predicates, we study the performance of a state-of-the-art SRL system trained on either codification of roles and some specific settings, i.e. including/excluding verb-specific information, labeling unseen verb predicates, or domain shifts. Second, assuming that application scenarios would prefer dealing with general thematic role labels, we explore the best way to label a text with thematic roles, namely, by training directly on VerbNet roles or by using the PropBank SRL system and perform a posterior mapping into thematic roles. The results confirm our preliminary findings (Zapirain et al., 2008). We observe that the PropBank roles are more robust in all tested experimental conditions, i.e., the performance decrease is more severe for VerbNet. Besides, tagging first PropBank roles and then mapping into VerbNet roles is as effective as training and tagging directly on VerbNet, and more robust for domain shifts. The rest of the paper is organized as follows: Section 2 contains some background on PropBank and VerbNet role sets. Section 3 presents the experimental setting and the base SRL system used for the role set comparisons. In Section 4 the main comparative experiments on robustness are described. Section 5 is devoted to analyze the posterior mapping of PropBank outputs into VerbNet thematic roles, and includes results on domain–shift experiments using Brown as test set. Finally, Sections 6 and 7 contain a discussion of the results. 2 Corpora and Semantic Role Sets The PropBank corpus is the result of adding a semantic layer to the syntactic structures of Penn Treebank II (Palmer et al., 2005). Specifically, it provides information about predicate-argument structures to all verbal predicates of the Wall Street Journal section of the treebank. The role set is theory– neutral and consists of a set of numbered core arguments (Arg0, Arg1, ..., Arg5). Each verb has a frameset listing its allowed role labels and mapping each numbered role to an English-language description of its semantics. Different senses for a polysemous verb have different framesets, but the argument labels are semantically consistent in all syntactic alternations of the same verb–sense. For instance in “Kevin broke [the window]Arg1” and in “[The door]Arg1 broke into a million pieces”, for the verb broke.01, both Arg1 arguments have the same semantic meaning, that is “broken entity”. Nevertheless, argument labels are not necessarily consistent across different verbs (or verb senses). For instance, the same Arg2 label is used to identify the Destination argument of a proposition governed by the verb send and the Beneficiary argument of the verb compose. 
This fact might compromise generalization of systems trained on PropBank, which might be focusing too much on verb– specific knowledge. It is worth noting that the two most frequent arguments, Arg0 and Arg1, are intended to indicate the general roles of Agent and Theme and are usually consistent across different verbs. However, this correspondence is not total. According to the study by (Yi et al., 2007), Arg0 corresponds to Agent 85.4% of the time, but also to Experiencer (7.2%), Theme (2.1%), and Cause (1.9%). Similarly, Arg1 corresponds to Theme in 47.0% of the occurrences but also to Topic (23.0%), Patient (10.8%), and Product (2.9%), among others. Contrary to core arguments, adjuncts (Temporal and Location markers, etc.) are annotated with a closed set of general and verb-independent labels. VerbNet (Kipper et al., 2000) is a computational verb lexicon in which verbs are organized hierarchically into classes depending on their syntactic/semantic linking behavior. The classes are based on Levin’s verb classes (Levin, 1993) and each contains a list of member verbs and a correspondence between the shared syntactic frames and the semantic information, such as thematic roles and selectional constraints. There are 23 thematic roles (Agent, Patient, Theme, Experiencer, Source, Beneficiary, Instrument, etc.) which, unlike the Prop551 Bank numbered arguments, are considered as general verb-independent roles. This level of abstraction makes them, in principle, better suited (compared to PropBank numbered arguments) for being directly exploited by general NLP applications. But, VerbNet by itself is not an appropriate resource to train SRL systems. As opposed to PropBank, the number of tagged examples is far more limited in VerbNet. Fortunately, in the last years a twofold effort has been made in order to generate a large corpus fully annotated with thematic roles. Firstly, the SemLink1 resource (Loper et al., 2007) established a mapping between PropBank framesets and VerbNet thematic roles. Secondly, the SemLink mapping was applied to a representative portion of the PropBank corpus and manually disambiguated (Loper et al., 2007). The resulting corpus is currently available for the research community and makes possible comparative studies between role sets. 3 Experimental Setting 3.1 Datasets The data used in this work is the benchmark corpus provided by the SRL shared task of CoNLL-2005 (Carreras and M`arquez, 2005). The dataset, of over 1 million tokens, comprises PropBank sections 02– 21 for training, and sections 24 and 23 for development and test, respectively. From the input information, we used part of speech tags and full parse trees (generated using Charniak’s parser) and discarded named entities. Also, we used the publicly available SemLink mapping from PropBank into VerbNet roles (Loper et al., 2007) to generate a replicate of the CoNLL-2005 corpus containing also the VerbNet annotation of roles. Unfortunately, SemLink version 1.0 does not cover all propositions and arguments in the PropBank corpus. In order to have an homogeneous corpus and not to bias experimental evaluation, we decided to discard all incomplete examples and keep only those propositions that were 100% mapped into VerbNet roles. The resulting corpus contains 56% of the original propositions, that is, over 50,000 propositions in the training set. This subcorpus is much larger than the SemEval-2007 Task 17 dataset used 1http://verbs.colorado.edu/semlink/ in our previous experimental work (Zapirain et al., 2008). 
The difference is especially noticeable in the diversity of predicates represented. In this case, there are 1,709 different verbs (1,505 lemmas) compared to the 50 verbs of the SemEval corpus. We believe that the size and richness of this corpus is enough to test and extract reliable conclusions on the robustness and generalization across verbs of the role sets under study. In order to study the behavior of both role sets in out–of–domain data, we made use of the PropBanked Brown corpus (Marcus et al., 1994) for testing, as it is also mapped into VerbNet thematic roles in the SemLink resource. Again, we discarded those propositions that were not entirely mapped into thematic roles (45%). 3.2 SRL System Our basic Semantic Role Labeling system represents the tagging problem as a Maximum Entropy Markov Model (MEMM). The system uses full syntactic information to select a sequence of constituents from the input text and tags these tokens with Begin/Inside/Outside (BIO) labels, using state-of-theart classifiers and features. The system achieves very good performance in the CoNLL-2005 shared task dataset and in the SRL subtask of the SemEval-2007 English lexical sample task (Zapirain et al., 2007). Check this paper for a complete description of the system. When searching for the most likely state sequence, the following constraints are observed2: 1. No duplicate argument classes for Arg0–Arg5 PropBank (or VerbNet) roles are allowed. 2. If there is a R-X argument (reference), then there has to be a X argument before (referent). 3. If there is a C-X argument (continuation), then there has to be a X argument before. 4. Before a I-X token, there has to be a B-X or I-X token. 5. Given a predicate, only the arguments described in its PropBank (or VerbNet) lexical entry (i.e., the verbal frameset) are allowed. 2Note that some of the constraints are dependent of the role set used, i.e., PropBank or VerbNet 552 Regarding the last constraint, the lexical entries of the verbs were constructed from the training data itself. For instance, the verb build appears with four different PropBank core roles (Arg0–3) and five VerbNet roles (Product, Material, Asset, Attribute, Theme), which are the only ones allowed for that verb at test time. Note that in the cases where the verb sense was known we could constraint the possible arguments to those that appear in the lexical entry of that sense, as opposed of using the arguments that appear in all senses. 4 On the Generalization of Role Sets We first seek a basic reference of the comparative performance of the classifier on each role set. We devised two settings based on our dataset. In the first setting (‘SemEval’) we use all the available information provided in the corpus, including the verb senses in PropBank and VerbNet. This information was available both in the training and test, and was thus used as an additional feature by the classifier and to constrain further the possible arguments when searching for the most probable Viterbi path. We call this setting ‘SemEval’ because the SemEval-2007 competition (Pradhan et al., 2007) was performed using this configuration. Being aware that, in a real scenario, the sense information will not be available, we devised the second setting (‘CoNLL’), where the hand-annotated verb sense information was discarded. This is the setting used in the CoNLL 2005 shared task (Carreras and M`arquez, 2005). The results for the first setting are shown in the ‘SemEval setting’ rows of Table 1. 
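These constraints can be operationalized as a filter over candidate BIO sequences during the Viterbi search. The following minimal Python sketch is illustrative only (the label conventions and the function are assumptions, not the system's actual code): it rejects a sequence as soon as one of constraints 1–5 is violated.

def satisfies_constraints(bio_labels, frameset_roles):
    """Check a BIO role sequence against constraints 1-5 above.

    Illustrative sketch: labels look like 'B-Arg0', 'I-Arg0', 'B-R-Arg0',
    'B-C-Arg1', 'B-AM-TMP', 'O'; frameset_roles is the set of core roles
    licensed by the predicate's lexical entry (constraint 5).
    """
    begun = set()        # argument classes already started
    prev = 'O'
    for label in bio_labels:
        if label != 'O':
            bio, arg = label.split('-', 1)
            if bio == 'I' and prev not in ('B-' + arg, 'I-' + arg):
                return False                                  # constraint 4
            if bio == 'B':
                is_ref, is_cont = arg.startswith('R-'), arg.startswith('C-')
                base = arg[2:] if (is_ref or is_cont) else arg
                if (is_ref or is_cont) and base not in begun:
                    return False                              # constraints 2 and 3
                if not (is_ref or is_cont):
                    is_adjunct = arg.startswith('AM')
                    if not is_adjunct and arg in begun:
                        return False                          # constraint 1
                    if not is_adjunct and arg not in frameset_roles:
                        return False                          # constraint 5
                    begun.add(arg)
        prev = label
    return True

# e.g. satisfies_constraints(['B-Arg0', 'I-Arg0', 'O', 'B-Arg1'], {'Arg0', 'Arg1'})
# returns True, while a second 'B-Arg0' or an 'I-Arg1' right after 'O' would fail.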
The correct, excess, missed, precision, recall and F1 measures are reported, as is customary. The significance intervals for F1 are also reported; they were obtained with bootstrap resampling (Noreen, 1989). F1 scores outside of these intervals are assumed to be significantly different from the related F1 score (p < 0.05). The results for PropBank are slightly better, which is reasonable, as the larger number of labels that the classifier has to learn in the case of VerbNet should make the task harder. In fact, given the small difference, one could think that the VerbNet labels, despite being more numerous, are relatively easy to learn, perhaps because they are more consistent across verbs. In the second setting (‘CoNLL setting’ row in the same table) the PropBank classifier degrades slightly, but the difference is not statistically significant. On the contrary, the drop of 1.6 points for VerbNet is significant, and shows a greater sensitivity to the absence of verb sense information. One possible reason could be that the VerbNet classifier is more dependent on the argument filter (i.e., the 5th constraint in Section 3.2, which only allows roles that occur in the verbal frameset) used in the Viterbi search, and lacking the sense information makes the filter less useful. In fact, we verified that the 5th constraint discards more than 60% of the possible candidates for VerbNet, making the task of the classifier easier. In order to test this hypothesis, we ran the CoNLL setting with the 5th constraint disabled (that is, allowing any argument). The results in the ‘CoNLL setting (no 5th)’ rows of Table 1 show that the drop for PropBank is negligible and not significant, while the drop for VerbNet is larger and statistically significant. Another view of the data is obtained if we compute the F1 scores for core arguments and adjuncts separately (last two columns in Table 1). The performance drop for PropBank in the first three rows is equally distributed over core arguments and adjuncts. On the contrary, the drop for VerbNet roles is more acute for core arguments (3.7 points), while adjuncts with the 5th constraint disabled get results close to the SemEval setting. These results confirm that the information in the verbal frameset is more important in VerbNet than in PropBank, as only core arguments are constrained by the verbal framesets. The explanation could stem from the fact that current SRL systems rely more on syntactic information than on pure semantic knowledge. While the PropBank arguments Arg0–5 are easier to distinguish on syntactic grounds alone, it seems quite difficult to distinguish among roles like Theme and Topic unless we have access to the specific verbal frameset. This corresponds nicely with the performance drop for VerbNet when there is less information about the verb in the algorithm (i.e., sense or frameset). We further analyzed the results by looking at each of the individual core arguments and adjuncts. Table 2 shows these results on the CoNLL setting. The performance for the most frequent roles is similar PropBank Experiment correct excess missed precision recall F1 F1 core F1 adj.
SemEval setting 6,022 1,378 1,722 81.38 77.76 79.53 ±0.9 82.25 72.48 CoNLL setting 5,977 1,424 1,767 80.76 77.18 78.93 ±0.9 81.64 71.90 CoNLL setting (no 5th) 5,972 1,434 1,772 80.64 77.12 78.84 ±0.9 81.49 71.50 No verbal features 5,557 1,828 2,187 75.25 71.76 73.46 ±1.0 74.87 70.11 Unseen verbs 267 89 106 75.00 71.58 73.25 ±4.0 76.21 64.92 VerbNet Experiment correct excess missed precision recall F1 F1 core F1 adj. SemEval setting 5,927 1,409 1,817 80.79 76.54 78.61 ±0.9 81.28 71.83 CoNLL setting 5,816 1,548 1,928 78.98 75.10 76.99 ±0.9 79.44 70.20 CoNLL setting (no 5th) 5,746 1,669 1,998 77.49 74.20 75.81 ±0.9 77.60 71.67 No verbal features 4,679 2,724 3,065 63.20 60.42 61.78 ±0.9 59.19 69.95 Unseen verbs 207 136 166 60.35 55.50 57.82 ±4.3 55.04 63.41 Table 1: Basic results using PropBank (top) and VerbNet (bottom) role sets on different settings. for both. Arg0 gets 88.49, while Agent and Experiencer get 87.31 and 87.76 respectively. Arg2 gets 79.91, but there is more variation on Theme, Topic and Patient (which get 75.46, 85.70 and 78.64 respectively). Finally, we grouped the results according to the frequency of the verbs in the training data. Table 3 shows that both PropBank and VerbNet get decreasing results for less frequent verbs. PropBank gets better results in all frequency ranges, except for the most frequent, which contains a single verb (say). Overall, the results on this section point out at the weaknesses of the VerbNet role set regarding robustness and generalization. The next sections examine further its behavior. 4.1 Generalization to Unseen Predicates In principle, the PropBank core roles (Arg0–4) get a different interpretation depending of the verb, that is, the meaning of each of the roles is described separately for each verb in the PropBank framesets. Still, the annotation criteria used with PropBank tried to make the two main roles (Arg0 and Arg1, which account for most of the occurrences) consistent across verbs. On the contrary, in VerbNet all roles are completely independent of the verb, in the sense that the interpretation of the role does not vary across verbs. But, at the same time, each verbal entry lists the possible roles it accepts, and the combinations allowed. This experiment tests the sensitivity of the two approaches when the SRL system encounters a verb which does not occur in the training data. In principle, we would expect the VerbNet semantic labels, which are more independent across verbs, to be more robust at tagging new predicates. It is worth noting that this is a realistic scenario, even for the verb-specific PropBank labels. Predicates which do not occur in the training data, but do have a PropBank lexicon entry, could appear quite often in the text to be analyzed. For this experiment, we artificially created a test set for unseen verbs. We chose 50 verbs at random, and split them into 40 verbs for training and 10 for testing (yielding 13,146 occurrences for training and 2,723 occurrences for testing; see Table 4). The results obtained after training and testing the classifier are shown in the last rows in Table 1. Note that they are not directly comparable to the other results mentioned so far, as the train and test sets are smaller. Figures indicate that the performance of the PropBank argument classifier is considerably higher than the VerbNet classifier, with a ∼15 point gap. This experiment shows that lacking any information about verbal head, the classifier has a hard time to distinguish among VerbNet roles. 
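A held-out split of this kind is straightforward to reproduce. The sketch below partitions propositions by verb lemma so that the test verbs never occur in training; the variable and field names are illustrative only.

import random

def split_by_verb(propositions, n_verbs=50, n_test=10, seed=0):
    # Hold out whole predicates: sample verbs, then route every proposition
    # of a test verb to the test set (cf. Table 4).
    rng = random.Random(seed)
    verbs = sorted({p["lemma"] for p in propositions})
    chosen = rng.sample(verbs, n_verbs)
    test_verbs = set(rng.sample(chosen, n_test))
    train = [p for p in propositions if p["lemma"] in set(chosen) - test_verbs]
    test = [p for p in propositions if p["lemma"] in test_verbs]
    return train, test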
In order to confirm this, we performed the following experiment. 4.2 Sensitivity to Verb-dependent Features In this experiment we want to test the sensitivity of the role sets when the classifier does not have any information of the verb predicate. We removed from the training and testing data all the features which make any reference to the verb, including, among others: the surface form, lemma and POS of the verb, and all the combined features that include the verb form (please, refer to (Zapirain et al., 2007) for a complete description of the feature set). The results are shown in the ‘No verbal features’ 554 CoNLL setting No verb features PBank VNet PBank VNet corr. F1 corr. F1 F1 F1 Overall 5977 78.93 5816 76.99 73.46 61.78 Arg0 1919 88.49 84.02 Arg1 2240 79.81 73.29 Arg2 303 65.44 48.58 Arg3 10 52.63 14.29 Actor1 44 85.44 0.00 Actor2 10 71.43 25.00 Agent 1603 87.31 77.21 Attribut. 25 71.43 50.79 Cause 51 62.20 5.61 Experien. 215 87.76 86.69 Location 31 64.58 25.00 Patient1 38 67.86 5.71 Patient 208 78.64 25.06 Patient2 21 67.74 43.33 Predicate 83 62.88 28.69 Product 44 61.97 2.44 Recipient 85 79.81 62.73 Source 29 60.42 30.95 Stimulus 39 63.93 13.70 Theme 1021 75.46 52.14 Theme1 20 57.14 4.44 Theme2 21 70.00 23.53 Topic 683 85.70 73.58 ADV 132 53.44 129 52.12 52.67 53.31 CAU 13 53.06 13 52.00 53.06 45.83 DIR 22 53.01 27 56.84 40.00 46.34 DIS 133 77.78 137 79.42 77.25 78.34 LOC 126 61.76 126 61.02 59.56 57.34 MNR 109 58.29 111 54.81 52.99 51.49 MOD 249 96.14 248 95.75 96.12 95.57 NEG 124 98.41 124 98.80 98.41 98.01 PNC 26 44.07 29 44.62 38.33 41.79 TMP 453 75.00 450 73.71 73.06 73.89 Table 2: Detailed results on the CoNLL setting. Reference arguments and verbs have been omitted for brevity, as well as those with less than 10 occ. The last two columns refer to the results on the CoNLL setting with no verb features. Freq. PBank VNet Freq. PBank VNet 0-50 74,21 71,11 500-900 77,97 75,77 50-100 74,79 71,83 > 900 91,83 92,23 100-500 77,16 75,41 Table 3: F1 results split according to the frequency of the verb in the training data. Train affect, announce, ask, attempt, avoid, believe, build, care, cause, claim, complain, complete, contribute, describe, disclose, enjoy, estimate, examine, exist, explain, express, feel, fix, grant, hope, join, maintain, negotiate, occur, prepare, promise, propose, purchase, recall, receive, regard, remember, remove, replace, say Test allow, approve, buy, find, improve, kill, produce, prove, report, rush Table 4: Verbs used in the unseen verb experiment rows of Table 1. The performance drops more than 5 points in PropBank, but the drop for VerbNet is dramatic, with more than 15 points. A closer look at the detailed role-by-role performances can be done if we compare the F1 rows in the CoNLL setting and in the ‘no verb features’ setting in Table 2. Those results show that both Arg0 and Arg1 are quite robust to the lack of target verb information, while Arg2 and Arg3 get more affected. Given the relatively low number of Arg2 and Arg3 arguments, their performance drop does not affect so much the overall PropBank performance. In the case of VerbNet, the picture is very different. Focusing on the most frequent roles first, while the performance drop for Experiencer, Agent and Topic is of 1, 10 and 12 points respectively, the other roles get very heavy losses (e.g. Theme and Patient drop 23 and 50 points), and the rest of roles are barely found. It is worth noting that the adjunct labels get very similar performances in both PropBank and VerbNet cases. 
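The ablation described above amounts to discarding every feature whose template mentions the target verb. A minimal sketch follows, assuming features are stored as 'template=value' strings with hypothetical template names; the real feature set of Zapirain et al. (2007) differs in detail.

# Hypothetical verb-dependent template names.
VERB_TEMPLATES = ("verb_form=", "verb_lemma=", "verb_pos=",
                  "path+verb=", "subcat+verb=")

def drop_verb_features(features):
    # Remove any feature generated from a template that refers to the verb.
    return [f for f in features if not f.startswith(VERB_TEMPLATES)]

def ablate(dataset):
    # Apply the filter to every (features, label) training/test instance.
    return [(drop_verb_features(feats), label) for feats, label in dataset]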
In fact, Table 1 in the last two rows shows very clearly that the performance drop is caused by the core arguments. The better robustness of the PropBank roles can be explained by the fact that, when creating PropBank, the human PropBank annotators tried to be consistent when tagging Arg0 and Arg1 across verbs. We also think that both Arg0 and Arg1 can be detected quite well relying on unlexicalized syntactic features only, that is, not knowing which are the verbal and nominal heads. On the other hand, distinguishing between Arg2–4 is more dependant on the subcategorization frame of the verb, and thus more sensitive to the lack of verbal information. In the case of VerbNet, the more fine-grained distinction among roles seems to depend more on the meaning of the predicate. For instance, distinguishing between Agent–Experiencer, or Theme–Topic– Patient. The lack of the verbal head makes it much more difficult to distinguish among those roles. The same phenomena can be observed among the roles not typically realized as Subject or Object such as Recipient, Source, Product, or Stimulus. 5 Mapping into VerbNet Thematic Roles As mentioned in the introduction, the interpretation of PropBank roles depends on the verb, and that 555 Test on WSJ all core adj. PropBank to VerbNet (hand) 79.17 ±0.9 81.77 72.50 VerbNet (SemEval setting) 78.61 ±0.9 81.28 71.84 PropBank to VerbNet (MF) 77.15 ±0.9 79.09 71.90 VerbNet (CoNLL setting) 76.99 ±0.9 79.44 70.88 Test on Brown PropBank to VerbNet (MF) 64.79 ±1.0 68.93 55.94 VerbNet (CoNLL setting) 62.87 ±1.0 67.07 54.69 Table 5: Results on VerbNet roles using two different strategies. Topmost 4 rows for the usual test set (WSJ), and the 2 rows below for the Brown test set. makes them less suitable for NLP applications. On the other hand, VerbNet roles have a direct interpretation. In this section, we test the performance of two different approaches to tag input sentences with VerbNet roles: (1) train on corpora tagged with VerbNet, and tag the input directly; (2) train on corpora tagged with PropBank, tag the input with PropBank roles, and use a PropBank to VerbNet mapping to output VerbNet roles. The results for the first approach are already available (cf. Table 1). For the second approach, we just need to map PropBank roles into VerbNet roles using SemLink (Loper et al., 2007). We devised two experiments. In the first one we use the handannotated verb class in the test set. For each predicate we translate PropBank roles into VerbNet roles making use of the SemLink mapping information corresponding to that verb lemma and its verbal class. For instance, consider an occurrence of allow in a test sentence. If the occurrence has been manually annotated with the VerbNet class 29.5, we can use the following entry in SemLink to add the VerbNet role Predicate to the argument labeled with Arg1, and Agent to the Arg0 argument. <predicate lemma="allow"> <argmap pb-roleset="allow.01" vn-class="29.5"> <role pb-arg="1" vn-theta="Predicate" /> <role pb-arg="0" vn-theta="Agent" /> </argmap> </predicate> The results obtained using the hand-annotated VerbNet classes (and the SemEval setting for PropBank), are shown in the first row of Table 5. If we compare these results to those obtained by VerbNet in the SemEval setting (second row of Table 5), they are 0.5 points better, but the difference is not statistically significant. experiment corr. F1 Grouped (CoNLL Setting) 5,951 78.11±0.9 PropBank to VerbNet to Grouped 5,970 78.21±0.9 Table 6: Results for VerbNet grouping experiments. 
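In code, applying a SemLink entry like the allow fragment shown above reduces to a lookup keyed by the PropBank roleset, the VerbNet class and the numbered argument. The sketch below makes simplifying assumptions about the surrounding XML layout and is not the official SemLink API.

import xml.etree.ElementTree as ET

def load_semlink(xml_text):
    # Build {(pb_roleset, vn_class, 'ArgN') -> vn_theta} from SemLink-style XML.
    mapping = {}
    root = ET.fromstring(xml_text)
    for argmap in root.iter("argmap"):
        roleset, vn_class = argmap.get("pb-roleset"), argmap.get("vn-class")
        for role in argmap.iter("role"):
            key = (roleset, vn_class, "Arg" + role.get("pb-arg"))
            mapping[key] = role.get("vn-theta")
    return mapping

def map_to_verbnet(pb_labels, roleset, vn_class, mapping):
    # Translate PropBank labels (e.g. 'Arg0') into VerbNet roles;
    # adjunct labels (AM-*) are passed through unchanged.
    return [mapping.get((roleset, vn_class, lab), lab) for lab in pb_labels]

For the allow example above, mapping ['Arg0', 'Arg1'] with roleset 'allow.01' and class '29.5' yields ['Agent', 'Predicate'].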
In a second experiment, we discarded the sense annotations from the dataset, and tried to predict the VerbNet class of the target verb using the most frequent class for the verb in the training data. Surprisingly, the accuracy of choosing the most frequent class is 97%. In the case of allow the most frequent class is 29.5, so we would use the same SemLink entry as above. The third row in Table 5 shows the results using the most frequent VerbNet class (and the CoNLL setting for PropBank). The performance drop compared to the use of the handannotated VerbNet class is of 2 points and statistically significant, and 0.2 points above the results obtained using VerbNet directly on the same conditions (fourth row of the same Table). The last two rows in table 5 show the results when testing on the the Brown Corpus. In this case, the difference is larger, 1.9 points, and statistically significant in favor of the mapping approach. These results show that VerbNet roles are less robust to domain shifts. The performance drop when moving to an out–of–domain corpus is consistent with previously published results (Carreras and M`arquez, 2005). 5.1 Grouping experiments VerbNet roles are more numerous than PropBank roles, and that, in itself, could cause a drop in performance. Motivated by the results in (Yi et al., 2007), we grouped the 23 VerbNet roles in 7 coarser role groups. Note that their groupings are focused on the roles which map to PropBank Arg2. In our case we are interested in a more general grouping which covers all VerbNet roles, so we added two additional groups (Agent-Experiencer and ThemeTopic-Patient). We re-tagged the roles in the datasets with those groups, and then trained and tested our SRL system on those grouped labels. The results are shown in the first row of Table 6. In order to judge if our groupings are easier to learn, we can see that he performance gain with respect to the ungrouped roles (fourth row of Table 5) is small (76.99 556 vs. 78.11) but significant. But if we compare them to the results of the PropBank to VerbNet mapping, where we simply substitute the fine-grained roles by their corresponding groups, we see that they still lag behind (second row in Table 6). Although one could argue that better motivated groupings could be proposed, these results indicate that the larger number of VerbNet roles does not explain in itself the performance difference when compared to PropBank. 6 Related Work As far as we know, there are only two other works performing comparisons of alternative role sets on a common test data. Gildea and Jurafsky (2002) mapped FrameNet frame elements into a set of abstract thematic roles (i.e., more general roles such as Agent, Theme, Location), and concluded that their system could use these thematic roles without degradation in performance. (Yi et al., 2007) is a closely related work. They also compare PropBank and VerbNet role sets, but they focus on the performance of Arg2. They show that splitting Arg2 instances into subgroups based on VerbNet thematic roles improves the performance of the PropBank-based classifier. Their claim is that since VerbNet uses argument labels that are more consistent across verbs, they would provide more consistent training instances which would generalize better, especially to new verbs and genres. In fact they get small improvements in PropBank (WSJ) and a large improvement when testing on Brown. An important remark is that Yi et al. 
use a combination of grouped VerbNet roles (for Arg2) and PropBank roles (for the rest of arguments). In contrast, our study compares both role sets as they stand, without modifications or mixing. Another difference is that they compare the systems based on the PropBank roles —by mapping the output VerbNet labels back to PropBank Arg2— while in our case we decided to do just the contrary (i.e., mapping PropBank output into VerbNet labels and compare there). As we already said, we think that VerbNet–based labels can be more useful for NLP applications, so our target is to have a SRL system that provides VerbNet annotations. While not in direct contradiction, both studies show different angles of the complex relation between the two role sets. 7 Conclusion and Future work In this paper we have presented a study of the performance of a state-of-the-art SRL system trained on two alternative codifications of roles (PropBank and VerbNet) and some particular settings, e.g., including/excluding verb–specific information in features, labeling of infrequent and unseen verb predicates, and domain shifts. We observed that PropBank labeling is more robust in all previous experimental conditions, showing less performance drops than VerbNet labels. Assuming that application-based scenarios would prefer dealing with general thematic role labels, we explore the best way to label a text with VerbNet thematic roles, namely, by training directly on VerbNet roles or by using the PropBank SRL system and performing a posterior mapping into thematic roles. While results are similar and not statistically significant in the WSJ test set, when testing on the Brown out–of–domain test set the difference in favor of PropBank plus mapping step is statistically significant. We also tried to map the fine-grained VerbNet roles into coarser roles, but it did not yield better results than the mapping from PropBank roles. As a side-product, we show that a simple most frequent sense disambiguation strategy for verbs is sufficient to provide excellent results in the PropBank to VerbNet mapping. Regarding future work, we would like to explore ways to improve the performance on VerbNet roles, perhaps using selectional preferences. We also want to work on the adaptation to new domains of both roles sets. Acknowledgements We are grateful to Martha Palmer and Edward Loper for kindly providing us with the SemLink mappings. This work has been partially funded by the Basque Government (IT-397-07) and by the Ministry of Education (KNOW TIN2006-15049, OpenMT TIN2006-15307-C03-02). Be˜nat is supported by a PhD grant from the University of the Basque Country. 557 References Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 shared task: Semantic role labeling. In Ido Dagan and Daniel Gildea, editors, Proceedings of the Ninth Conference on Computational Natural Language Learning (CoNLL-2005), pages 152– 164, Ann Arbor, Michigan, USA, June. Association for Computational Linguistics. Karin Kipper, Hoa Trang Dang, and Martha Palmer. 2000. Class based construction of a verb lexicon. In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI-2000), Austin, TX, July. Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. The University of Chicago Press, Chicago. Edward Loper, Szu-Ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between propbank and verbnet. 
In Proceedings of the 7th International Workshop on Computational Linguistics, Tilburg, the Netherlands. Mitchell Marcus, Grace Kim, Mary Ann Marcinkiewicz, Robert MacIntyre, Ann Bies, Mark Ferguson, Karen Katz, and Britta Schasberger. 1994. The penn treebank: annotating predicate argument structure. In HLT ’94: Proceedings of the workshop on Human Language Technology, pages 114–119, Morristown, NJ, USA. Association for Computational Linguistics. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. John Wiley & Sons. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71– 105. Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. Semeval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of the Fourth International Workshop on Semantic Evaluations (SemEval-2007), pages 87–92, Prague, Czech Republic, June. Association for Computational Linguistics. Szu-Ting Yi, Edward Loper, and Martha Palmer. 2007. Can semantic roles generalize across genres? In Proceedings of the Human Language Technology Conferences/North American Chapter of the Association for Computational Linguistics Annual Meeting (HLT/NAACL-2007). Be˜nat Zapirain, Eneko Agirre, and Llu´ıs M`arquez. 2007. Sequential SRL Using Selectional Preferences. An Approach with Maximum Entropy Markov Models. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 354–357. Be˜nat Zapirain, Eneko Agirre, and Llu´ıs M`arquez. 2008. A Preliminary Study on the Robustness and Generalization of Role Sets for Semantic Role Labeling. In Proceedings of the 9th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing-2008), pages 219–230, Haifa, Israel, February. 558
Proceedings of ACL-08: HLT, pages 559–567, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A Tree Sequence Alignment-based Tree-to-Tree Translation Model Min Zhang1 Hongfei Jiang2 Aiti Aw1 Haizhou Li1 Chew Lim Tan3 and Sheng Li2 1Institute for Infocomm Research 2Harbin Institute of Technology 3National University of Singapore [email protected] [email protected] [email protected] [email protected] [email protected] [email protected] Abstract This paper presents a translation model that is based on tree sequence alignment, where a tree sequence refers to a single sequence of subtrees that covers a phrase. The model leverages on the strengths of both phrase-based and linguistically syntax-based method. It automatically learns aligned tree sequence pairs with mapping probabilities from word-aligned biparsed parallel texts. Compared with previous models, it not only captures non-syntactic phrases and discontinuous phrases with linguistically structured features, but also supports multi-level structure reordering of tree typology with larger span. This gives our model stronger expressive power than other reported models. Experimental results on the NIST MT-2005 Chinese-English translation task show that our method statistically significantly outperforms the baseline systems. 1 Introduction Phrase-based modeling method (Koehn et al., 2003; Och and Ney, 2004a) is a simple, but powerful mechanism to machine translation since it can model local reorderings and translations of multiword expressions well. However, it cannot handle long-distance reorderings properly and does not exploit discontinuous phrases and linguistically syntactic structure features (Quirk and Menezes, 2006). Recently, many syntax-based models have been proposed to address the above deficiencies (Wu, 1997; Chiang, 2005; Eisner, 2003; Ding and Palmer, 2005; Quirk et al, 2005; Cowan et al., 2006; Zhang et al., 2007; Bod, 2007; Yamada and Knight, 2001; Liu et al., 2006; Liu et al., 2007; Gildea, 2003; Poutsma, 2000; Hearne and Way, 2003). Although good progress has been reported, the fundamental issues in applying linguistic syntax to SMT, such as non-isomorphic tree alignment, structure reordering and non-syntactic phrase modeling, are still worth well studying. In this paper, we propose a tree-to-tree translation model that is based on tree sequence alignment. It is designed to combine the strengths of phrase-based and syntax-based methods. The proposed model adopts tree sequence 1 as the basic translation unit and utilizes tree sequence alignments to model the translation process. Therefore, it not only describes non-syntactic phrases with syntactic structure information, but also supports multi-level tree structure reordering in larger span. These give our model much more expressive power and flexibility than those previous models. Experiment results on the NIST MT-2005 ChineseEnglish translation task show that our method significantly outperforms Moses (Koehn et al., 2007), a state-of-the-art phrase-based SMT system, and other linguistically syntax-based methods, such as SCFG-based and STSG-based methods (Zhang et al., 2007). 
In addition, our study further demonstrates that 1) structure reordering rules in our model are very useful for performance improvement while discontinuous phrase rules have less contribution and 2) tree sequence rules are able to model non-syntactic phrases with syntactic structure information, and thus contribute much to the performance improvement, but those rules consisting of more than three sub-trees have almost no contribution. The rest of this paper is organized as follows: Section 2 reviews previous work. Section 3 elabo 1 A tree sequence refers to an ordered sub-tree sequence that covers a phrase or a consecutive tree fragment in a parse tree. It is the same as the concept “forest” used in Liu et al (2007). 559 rates the modelling process while Sections 4 and 5 discuss the training and decoding algorithms. The experimental results are reported in Section 6. Finally, we conclude our work in Section 7. 2 Related Work Many techniques on linguistically syntax-based SMT have been proposed in literature. Yamada and Knight (2001) use noisy-channel model to transfer a target parse tree into a source sentence. Eisner (2003) studies how to learn non-isomorphic tree-to-tree/string mappings using a STSG. Ding and Palmer (2005) propose a syntax-based translation model based on a probabilistic synchronous dependency insertion grammar. Quirk et al. (2005) propose a dependency treelet-based translation model. Cowan et al. (2006) propose a featurebased discriminative model for target language syntactic structures prediction, given a source parse tree. Huang et al. (2006) study a TSG-based tree-to-string alignment model. Liu et al. (2006) propose a tree-to-string model. Zhang et al. (2007b) present a STSG-based tree-to-tree translation model. Bod (2007) reports that the unsupervised STSG-based translation model performs much better than the supervised one. The motivation behind all these work is to exploit linguistically syntactic structure features to model the translation process. However, most of them fail to utilize non-syntactic phrases well that are proven useful in the phrase-based methods (Koehn et al., 2003). The formally syntax-based model for SMT was first advocated by Wu (1997). Xiong et al. (2006) propose a MaxEnt-based reordering model for BTG (Wu, 1997) while Setiawan et al. (2007) propose a function word-based reordering model for BTG. Chiang (2005)’s hierarchal phrase-based model achieves significant performance improvement. However, no further significant improvement is achieved when the model is made sensitive to syntactic structures by adding a constituent feature (Chiang, 2005). In the last two years, many research efforts were devoted to integrating the strengths of phrasebased and syntax-based methods. In the following, we review four representatives of them. 1) Hassan et al. (2007) integrate supertags (a kind of lexicalized syntactic description) into the target side of translation model and language model under the phrase-based translation framework, resulting in good performance improvement. However, neither source side syntactic knowledge nor reordering model is further explored. 2) Galley et al. (2006) handle non-syntactic phrasal translations by traversing the tree upwards until a node that subsumes the phrase is reached. This solution requires larger applicability contexts (Marcu et al., 2006). However, phrases are utilized independently in the phrase-based method without depending on any contexts. 3) Addressing the issues in Galley et al. (2006), Marcu et al. 
(2006) create an xRS rule headed by a pseudo, non-syntactic non-terminal symbol that subsumes the phrase and its corresponding multi-headed syntactic structure, and one sibling xRS rule that explains how the pseudo symbol can be combined with other genuine non-terminals for acquiring the genuine parse trees. The name of the pseudo non-terminal is designed to reflect the full realization of the corresponding rule. The problem with this method is that it neglects alignment consistency when creating sibling rules, and the naming mechanism faces challenges in describing more complicated phenomena (Liu et al., 2007). 4) Liu et al. (2006) treat all bilingual phrases as lexicalized tree-to-string rules, including the non-syntactic phrases in the training corpus. Although the solution proves effective empirically, it only utilizes the source-side syntactic phrases of the input parse tree during decoding. Furthermore, the translation probabilities of the bilingual phrases and of the other tree-to-string rules are not compatible, since they are estimated independently and thus have different parameter spaces. To address the above problems, Liu et al. (2007) propose to use forest-to-string rules to enhance the expressive power of their tree-to-string model. As is inherent in a tree-to-string framework, Liu et al.'s method defines a kind of auxiliary rule to integrate forest-to-string rules into tree-to-string models. One problem of this method is that the auxiliary rules are not described by probabilities, since they are constructed during decoding rather than learned from the training corpus. So, to balance the usage of different kinds of rules, they use a very simple feature counting the number of auxiliary rules used in a derivation to penalize the use of forest-to-string and auxiliary rules. In this paper, an alternative solution is presented to combine the strengths of phrase-based and syntax-based methods.
[Figure 1: A word-aligned parse tree pair of a Chinese sentence and its English translation]
[Figure 2: Two examples of tree sequences]
[Figure 3: Two examples of translation rules]
Unlike previous work, our solution neither requires larger applicability contexts (Galley et al., 2006), nor depends on pseudo nodes (Marcu et al., 2006) or auxiliary rules (Liu et al., 2007). We go beyond the single sub-tree mapping model to propose a tree sequence alignment-based translation model. To the best of our knowledge, this is the first attempt to empirically explore a tree sequence alignment-based model in SMT.

3 Tree Sequence Alignment Model

3.1 Tree Sequence Translation Rule

The leaf nodes of a sub-tree in a tree sequence can be either non-terminal symbols (grammar tags) or terminal symbols (lexical words). Given a pair of source and target parse trees T(f_1^J) and T(e_1^I) in Fig. 1, Fig. 2 illustrates two examples of tree sequences derived from the two parse trees. A tree sequence translation rule r is a pair of aligned tree sequences r = < TS(f_{j_1}^{j_2}), TS(e_{i_1}^{i_2}), \tilde{A} >, where:
- TS(f_{j_1}^{j_2}) is a source tree sequence, covering the span [j_1, j_2] in T(f_1^J);
- TS(e_{i_1}^{i_2}) is a target tree sequence, covering the span [i_1, i_2] in T(e_1^I); and
- \tilde{A} is the set of alignments between leaf nodes of the two tree sequences, satisfying the condition \forall (i, j) \in \tilde{A}: i_1 \le i \le i_2 \leftrightarrow j_1 \le j \le j_2.
Fig. 3 shows two rules extracted from the tree pair shown in Fig. 1, where r1 is a tree-to-tree rule and r2 is a tree sequence-to-tree sequence rule.
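A direct transcription of this definition into code is a useful way to see what a rule stores. The sketch below uses invented class and field names; it keeps the two tree sequences, their spans and the leaf alignments, and checks the span-consistency condition stated above against a sentence-level word alignment.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TreeSequenceRule:
    # r = <TS(f_{j1..j2}), TS(e_{i1..i2}), A~>
    src_trees: List[object]                 # sub-trees covering the source span
    tgt_trees: List[object]                 # sub-trees covering the target span
    src_span: Tuple[int, int]               # (j1, j2)
    tgt_span: Tuple[int, int]               # (i1, i2)
    leaf_alignments: List[Tuple[int, int]]  # (i, j) links between leaf nodes

def spans_consistent(word_alignment, src_span, tgt_span):
    # For every sentence-level link (i, j), the target index i falls inside
    # [i1, i2] if and only if the source index j falls inside [j1, j2].
    (j1, j2), (i1, i2) = src_span, tgt_span
    return all((i1 <= i <= i2) == (j1 <= j <= j2) for (i, j) in word_alignment)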
Obviously, tree sequence rules are more powerful than phrases or tree rules, as they can capture all phrases (both syntactic and non-syntactic) with syntactic structure information and allow any tree node operations over a longer span. We expect these properties to address well the issues of non-isomorphic structure alignment, structure reordering, non-syntactic phrases and discontinuous phrase translation.

3.2 Tree Sequence Translation Model

Given the source and target sentences f_1^J and e_1^I and their parse trees T(f_1^J) and T(e_1^I), the tree sequence-to-tree sequence translation model is formulated as:

P(e_1^I | f_1^J) = \sum_{T(e_1^I), T(f_1^J)} P(e_1^I, T(e_1^I), T(f_1^J) | f_1^J)
                 = \sum_{T(e_1^I), T(f_1^J)} \big( P_r(T(f_1^J) | f_1^J) \cdot P_r(T(e_1^I) | T(f_1^J), f_1^J) \cdot P_r(e_1^I | T(e_1^I), T(f_1^J), f_1^J) \big)    (1)

In our implementation, we have:
1) P_r(T(f_1^J) | f_1^J) \equiv 1, since we only use the best source and target parse tree pairs in training.
2) P_r(e_1^I | T(e_1^I), T(f_1^J), f_1^J) \equiv 1, since we just output the leaf nodes of T(e_1^I) to generate e_1^I regardless of source-side information.
Since T(f_1^J) contains the information of f_1^J, we now have:

P(e_1^I | f_1^J) = P_r(T(e_1^I) | T(f_1^J), f_1^J) = P_r(T(e_1^I) | T(f_1^J))    (2)

By Eq. (2), translation becomes a tree structure mapping issue. We model it using our tree sequence-based translation rules. Given the source parse tree T(f_1^J), there are multiple derivations that could lead to the same target tree T(e_1^I); the mapping probability P_r(T(e_1^I) | T(f_1^J)) is obtained by summing over the probabilities of all derivations. The probability of each derivation \theta is given as the product of the probabilities of all the rules p(r_i) used in the derivation (here we assume that a rule is applied independently in a derivation):

P(e_1^I | f_1^J) = P_r(T(e_1^I) | T(f_1^J)) = \sum_{\theta} \prod_{r_i \in \theta} p(r_i : < TS(e_{i_1}^{i_2}), TS(f_{j_1}^{j_2}), \tilde{A} >)    (3)

Eq. (3) formulates the tree sequence alignment-based translation model. Figs. 1 and 3 show how the proposed model works. First, the source sentence is parsed into a source parse tree. Next, the source parse tree is detached into two source tree sequences (the left-hand sides of the rules in Fig. 3). Then the two rules in Fig. 3 are used to map the two source tree sequences to two target tree sequences, which are then combined to generate a target parse tree. Finally, a target translation is yielded from the target tree. Our model is implemented under the log-linear framework (Och and Ney, 2002). We use seven basic features that are analogous to the commonly used features in phrase-based systems (Koehn, 2004): 1) bidirectional rule mapping probabilities; 2) bidirectional lexical rule translation probabilities; 3) the target language model; 4) the number of rules used; and 5) the number of target words. In addition, we define two new features: 1) the number of lexical words in a rule, to control the model's preference for lexicalized rules over un-lexicalized rules, and 2) the average tree depth in a rule, to balance the usage of hierarchical rules and flat rules. Note that we do not distinguish between larger (taller) and shorter source-side tree sequences, i.e., we let these rules compete directly with each other.
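As a sketch of how one derivation is scored under this kind of log-linear setup, the snippet below combines per-rule probabilities with the global features described above; the feature names, rule fields and weights are illustrative assumptions, not the system's exact parameterization.

import math

# Illustrative weights; in practice these are tuned (e.g., with MERT).
WEIGHTS = {"p_s2t": 0.2, "p_t2s": 0.2, "lex_s2t": 0.1, "lex_t2s": 0.1,
           "lm": 0.3, "rule_count": -0.05, "word_count": 0.1,
           "lex_word_count": 0.02, "avg_tree_depth": 0.03}

def score_derivation(rules, lm_logprob, target_length):
    # Log-linear score of a derivation theta = (r_1, ..., r_n): a weighted sum
    # of summed log rule probabilities plus the sentence-level features.
    feats = {
        "p_s2t": sum(math.log(r["p_s2t"]) for r in rules),
        "p_t2s": sum(math.log(r["p_t2s"]) for r in rules),
        "lex_s2t": sum(math.log(r["lex_s2t"]) for r in rules),
        "lex_t2s": sum(math.log(r["lex_t2s"]) for r in rules),
        "lm": lm_logprob,
        "rule_count": len(rules),
        "word_count": target_length,
        "lex_word_count": sum(r["n_lexical_leaves"] for r in rules),
        "avg_tree_depth": sum(r["avg_depth"] for r in rules) / len(rules),
    }
    return sum(WEIGHTS[name] * value for name, value in feats.items())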
4 Rule Extraction Rules are extracted from word-aligned, bi-parsed sentence pairs 1 1 ( ), ( ), J I T f T e A < > , which are classified into two categories: z initial rule, if all leaf nodes of the rule are terminals (i.e. lexical word), and z abstract rule, otherwise, i.e. at least one leaf node is a non-terminal (POS or phrase tag). Given an initial rule 2 2 1 1 ( ), ( ), j i j i TS f TS e A < > % , its sub initial rule is defined as a triple 4 4 3 3 ˆ ( ), ( ), j i j i TS f TS e A < > if and only if: z 4 4 3 3 ˆ ( ), ( ), j i j i TS f TS e A < > is an initial rule. z 3 4 3 4 ( , ) : i j A i i i j j j ∀ ∈ ≤≤ ↔ ≤ ≤ % , i.e. ˆA A ⊆% z 4 3 ( ) j j TS f is a sub-graph of 2 1 ( ) j j TS f while 4 3 ( ) i i TS e is a sub-graph of 2 1 ( ) i i TS e . Rules are extracted in two steps: 1) Extracting initial rules first. 2) Extracting abstract rules from extracted initial rules with the help of sub initial rules. It is straightforward to extract initial rules. We first generate all fully lexicalized source and target tree sequences using a dynamic programming algorithm and then iterate over all generated source and target tree sequence pairs 2 2 1 1 ( ), ( ) j i j i TS f TS e < > . If the condition “ ( , ) i j ∀ 1 2 1 2 : A i i i j j j ∈ ≤≤ ↔ ≤ ≤ ” is satisfied, the triple 2 2 1 1 ( ), ( ), j i j i TS f TS e A < > % is an initial rule, where A% are alignments between leaf nodes of 2 1 ( ) j j TS f and 2 1 ( ) i i TS e . We then derive abstract rules from initial rules by removing one or more of its sub initial rules. The abstract rule extraction algorithm presented next is implemented using dynamic programming. Due to space limitation, we skip the details here. In order to control the number of rules, we set three constraints for both finally extracted initial and abstract rules: 1) The depth of a tree in a rule is not greater 562 than h . 2) The number of non-terminals as leaf nodes is not greater thanc . 3) The tree number in a rule is not greater than d. In addition, we limit initial rules to have at most seven lexical words as leaf nodes on either side. However, in order to extract long-distance reordering rules, we also generate those initial rules with more than seven lexical words for abstract rules extraction only (not used in decoding). This makes our abstract rules more powerful in handling global structure reordering. Moreover, by configuring these parameters we can implement other translation models easily: 1) STSG-based model when 1 d = ; 2) SCFG-based model when 1 d = and 2 h = ; 3) phrase-based translation model only (no reordering model) when 0 c = and 1 h = . Algorithm 1: abstract rules extraction Input: initial rule set ini r Output: abstract rule set abs r 1: for each i ini r r ∈ , do 2: put all sub initial rules of ir into a set subini ir 3: for each subset subini ir Θ ⊆ do 4: if there are spans overlapping between any two rules in the subsetΘ then 5: continue //go to line 3 6: end if 7: generate an abstract rule by removing the portions covered byΘ from ir and co-indexing the pairs of non-terminals that rooting the removed source and target parts 8: add them into the abstract rule set abs r 9: end do 10: end do 5 Decoding Given 1 ( ) J T f , the decoder is to find the best derivation θ that generates < 1 ( ) J T f , 1 ( ) I T e >. 
1 1 1 1 , ˆ arg max ( ( ) | ( )) arg max ( ) I I i I J e i e r r e P T e T f p r θ θ ∈ = ≈ ∏ (4) Algorithm 2: Tree Sequence-based Decoder Input: 1 ( ) J T f Output: 1 ( ) I T e Data structures: 1 2 [ , ] h j j To store translations to a span 1 2 [ , ] j j 1: for s = 0 to J -1 do // s: span length 2: for 1j = 1 to J - s , 2j = 1j + s do 3: for each rule r spanning 1 2 [ , ] j j do 4: if r is an initial rule then 5: insert r into 1 2 [ , ] h j j 6: else 7: generate new translations from r by replacing non-terminal leaf nodes of r with their corresponding spans’ translations that are already translated in previous steps 8: insert them into 1 2 [ , ] h j j 9: end if 10: end for 11: end for 12: end for 13: output the hypothesis with the highest score in [1, ] h J as the final best translation The decoder is a span-based beam search together with a function for mapping the source derivations to the target ones. Algorithm 2 illustrates the decoding algorithm. It translates each span iteratively from small one to large one (lines 1-2). This strategy can guarantee that when translating the current span, all spans smaller than the current one have already been translated before if they are translatable (line 7). When translating a span, if the usable rule is an initial rule, then the tree sequence on the target side of the rule is a candidate translation (lines 4-5). Otherwise, we replace the nonterminal leaf nodes of the current abstract rule with their corresponding spans’ translations that are already translated in previous steps (line 7). To speed up the decoder, we use several thresholds to limit search beams for each span: 1) α , the maximal number of rules used 2) β , the minimal log probability of rules 3) γ , the maximal number of translations yield It is worth noting that the decoder does not force a complete target parse tree to be generated. If no rules can be used to generate a complete target parse tree, the decoder just outputs whatever have 563 been translated so far monotonically as one hypothesis. 6 Experiments 6.1 Experimental Settings We conducted Chinese-to-English translation experiments. We trained the translation model on the FBIS corpus (7.2M+9.2M words) and trained a 4gram language model on the Xinhua portion of the English Gigaword corpus (181M words) using the SRILM Toolkits (Stolcke, 2002) with modified Kneser-Ney smoothing. We used sentences with less than 50 characters from the NIST MT-2002 test set as our development set and the NIST MT2005 test set as our test set. We used the Stanford parser (Klein and Manning, 2003) to parse bilingual sentences on the training set and Chinese sentences on the development and test sets. The evaluation metric is case-sensitive BLEU-4 (Papineni et al., 2002). We used GIZA++ (Och and Ney, 2004) and the heuristics “grow-diag-final” to generate m-to-n word alignments. For the MER training (Och, 2003), we modified Koehn’s MER trainer (Koehn, 2004) for our tree sequence-based system. For significance test, we used Zhang et al’s implementation (Zhang et al, 2004). We set three baseline systems: Moses (Koehn et al., 2007), and SCFG-based and STSG-based treeto-tree translation models (Zhang et al., 2007). For Moses, we used its default settings. For the SCFG/STSG and our proposed model, we used the same settings except for the parameters d and h ( 1 d = and 2 h = for the SCFG; 1 d = and 6 h = for the STSG; 4 d = and 6 h = for our model). We optimized these parameters on the training and development sets: c =3, α =20, β =-100 and γ =100. 
6.2 Experimental Results We carried out a number of experiments to examine the proposed tree sequence alignment-based translation model. In this subsection, we first report the rule distributions and compare our model with the three baseline systems. Then we study the model’s expressive ability by comparing the contributions made by different kinds of rules, including strict tree sequence rules, non-syntactic phrase rules, structure reordering rules and discontinuous phrase rules2. Finally, we investigate the impact of maximal sub-tree number and sub-tree depth in our model. All of the following discussions are held on the training and test data. Rule Initial Rules Abstract Rules L P U Total BP 322,965 0 0 322,965 TR 443,010 144,459 24,871 612,340 TSR 225,570 103,932 714 330,216 Table 1: # of rules used in the testing ( 4 d = , h = 6) (BP: bilingual phrase (used in Moses), TR: tree rule (only 1 tree), TSR: tree sequence rule (> 1 tree), L: fully lexicalized, P: partially lexicalized, U: unlexicalized) Table 1 reports the statistics of rules used in the experiments. It shows that: 1) We verify that the BPs are fully covered by the initial rules (i.e. lexicalized rules), in which the lexicalized TSRs model all non-syntactic phrase pairs with rich syntactic information. In addition, we find that the number of initial rules is greater than that of bilingual phrases. This is because one bilingual phrase can be covered by more than one initial rule which having different sub-tree structures. 2) Abstract rules generalize initial rules to unseen data and with structure reordering ability. The number of the abstract rule is far less than that of the initial rules. This is because leaf nodes of an abstract rule can be non-terminals that can represent any sub-trees using the non-terminals as roots. Fig. 4 compares the performance of different models. It illustrates that: 1) Our tree sequence-based model significantly outperforms (p < 0.01) previous phrase-based and linguistically syntax-based methods. This empirically verifies the effect of the proposed method. 2) Both our method and STSG outperform Moses significantly. Our method also clearly outperforms STSG. These results suggest that: z The linguistically motivated structure features are very useful for SMT, which can be cap 2 To be precise, we examine the contributions of strict tree sequence rules and single tree rules separately in this section. Therefore, unless specified, the term “tree sequence rules” used in this section only refers to the strict tree sequence rules, which must contain at least two sub-trees on the source side. 564 tured by the two syntax-based models through tree node operations. z Our model is much more effective in utilizing linguistic structures than STSG since it uses tree sequence as basic translation unit. This allows our model not only to handle structure reordering by tree node operations in a larger span, but also to capture non-syntactic phrases, which circumvents previous syntactic constraints, thus giving our model more expressive power. 3) The linguistically motivated SCFG shows much lower performance. This is largely because SCFG only allows sibling nodes reordering and fails to utilize both non-syntactic phrases and those syntactic phrases that cannot be covered by a single CFG rule. It thereby suggests that SCFG is less effective in modelling parse tree structure transfer between Chinese and English when using Penn Treebank style linguistic grammar and under wordalignment constraints. 
However, formal SCFG show much better performance in the formally syntax-based translation framework (Chiang, 2005). This is because the formal syntax is learned from phrases directly without relying on any linguistic theory (Chiang, 2005). As a result, it is more robust to the issue of non-syntactic phrase usage and non-isomorphic structure alignment. 24.71 26.07 23.86 22.72 21.5 22.5 23.5 24.5 25.5 26.5 SCFG Moses STSG Ours BLEU(%) Figure 4: Performance comparison of different methods Rule Type TR (STSG) TR +TSR_L TR+TSR_L +TSR_P TR +TSR BLEU(%) 24.71 25.72 25.93 26.07 Table 2: Contributions of TSRs (see Table 1 for the definitions of the abbreviations used in this table) Table 2 measures the contributions of different kinds of tree sequence rules. It suggests that: 1) All the three kinds of TSRs contribute to the performance improvement and their combination further improves the performance. It suggests that they are complementary to each other since the lexicalized TSRs are used to model non-syntactic phrases while the other two kinds of TSRs can generalize the lexicalized rules to unseen phrases. 2) The lexicalized TSRs make the major contribution since they can capture non-syntactic phrases with syntactic structure features. Rule Type BLEU (%) TR+TSR 26.07 (TR+TSR) w/o SRR 24.62 (TR+TSR) w/o DPR 25.78 Table 3: Effect of Structure Reordering Rules (SRR: refers to the structure reordering rules that have at least two non-terminal leaf nodes with inverted order in the source and target sides, which are usually not captured by phrase-based models. Note that the reordering between lexical words and non-terminal leaf nodes is not considered here) and Discontinuous Phrase Rules (DPR: refers to these rules having at least one non-terminal leaf node between two lexicalized leaf nodes) in our tree sequence-based model ( 4 d = and 6 h = ) Rule Type # of rules # of rules overlapped (Intersection) SRR 68,217 18,379 (26.9%) DPR 57,244 18,379 (32.1%) Table 4: numbers of SRR and DPR rules Table 3 shows the contributions of SRR and DPR. It clearly indicates that SRRs are very effective in reordering structures, which improve performance by 1.45 (26.07-24.62) BLEU score. However, DPRs have less impact on performance in our tree sequence-based model. This seems in contradiction to the previous observations3 in literature. However, it is not surprising simply because we use tree sequences as the basic translation units. Thereby, our model can capture all phrases. In this sense, our model behaves like a phrasebased model, less sensitive to discontinuous phras 3 Wellington et al. (2006) reports that discontinuities are very useful for translational equivalence analysis using binarybranching structures under word alignment and parse tree constraints while they are almost of no use if under word alignment constraints only. Bod (2007) finds that discontinues phrase rules make significant performance improvement in linguistically STSG-based SMT models. 565 es (Wellington et al., 2006). Our additional experiments also verify that discontinuous phrase rules are complementary to syntactic phrase rules (Bod, 2007) while non-syntactic phrase rules may compromise the contribution of discontinuous phrase rules. Table 4 reports the numbers of these two kinds of rules. It shows that around 30% rules are shared by the two kinds of rule sets. These overlapped rules contain at least two non-terminal leaf nodes plus two terminal leaf nodes, which implies that longer rules do not affect performance too much. 
22.07 25.28 26.14 25.94 26.02 26.07 21.5 22.5 23.5 24.5 25.5 26.5 1 2 3 4 5 6 BLEU(%) Figure 5: Accuracy changing with different maximal tree depths ( h = 1 to 6 when 4 d = ) 22.72 24.71 26.05 26.03 26.07 25.74 25.29 25.28 25.26 24.78 21.5 22.5 23.5 24.5 25.5 26.5 1 2 3 4 5 BLEU(%) Figure 6: Accuracy changing with the different maximal number of trees in a tree sequence (d =1 to 5), the upper line is for 6 h = while the lower line is for 2 h = . Fig. 5 studies the impact when setting different maximal tree depth ( h ) in a rule on the performance. It demonstrates that: 1) Significant performance improvement is achieved when the value of h is increased from 1 to 2. This can be easily explained by the fact that when h = 1, only monotonic search is conducted, while h = 2 allows non-terminals to be leaf nodes, thus introducing preliminary structure features to the search and allowing non-monotonic search. 2) Internal structures and large span (due to h increasing) are also useful as attested by the gain of 0.86 (26.14-25.28) Blue score when the value of h increases from 2 to 4. Fig. 6 studies the impact on performance by setting different maximal tree number (d) in a rule. It further indicates that: 1) Tree sequence rules (d >1) are useful and even more helpful if we limit the tree depth to no more than two (lower line, h=2). However, tree sequence rules consisting of more than three subtrees have almost no contribution to the performance improvement. This is mainly due to data sparseness issue when d >3. 2) Even if only two-layer sub-trees (lower line) are allowed, our method still outperforms STSG and Moses when d>1. This further validates the effectiveness of our design philosophy of using multi-sub-trees as basic translation unit in SMT. 7 Conclusions and Future Work In this paper, we present a tree sequence alignment-based translation model to combine the strengths of phrase-based and syntax-based methods. The experimental results on the NIST MT2005 Chinese-English translation task demonstrate the effectiveness of the proposed model. Our study also finds that in our model the tree sequence rules are very useful since they can model non-syntactic phrases and reorderings with rich linguistic structure features while discontinuous phrases and tree sequence rules with more than three sub-trees have less impact on performance. There are many interesting research topics on the tree sequence-based translation model worth exploring in the future. The current method extracts large amount of rules. Many of them are redundant, which make decoding very slow. Thus, effective rule optimization and pruning algorithms are highly desirable. Ideally, a linguistically and empirically motivated theory can be worked out, suggesting what kinds of rules should be extracted given an input phrase pair. For example, most function words and headwords can be kept in abstract rules as features. In addition, word alignment is a hard constraint in our rule extraction. We will study direct structure alignments to reduce the impact of word alignment errors. We are also interested in comparing our method with the forestto-string model (Liu et al., 2007). Finally, we would also like to study unsupervised learningbased bilingual parsing for SMT. 566 References Rens Bod. 2007. Unsupervised Syntax-Based Machine Translation: The Contribution of Discontinuous Phrases. MT-Summmit-07. 51-56. David Chiang. 2005. A hierarchical phrase-based model for SMT. ACL-05. 263-270. Brooke Cowan, Ivona Kucerova and Michael Collins. 2006. 
A discriminative model for tree-to-tree translation. EMNLP-06. 232-241. Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. ACL-05. 541-548. Jason Eisner. 2003. Learning non-isomorphic tree mappings for MT. ACL-03 (companion volume). Michel Galley, Mark Hopkins, Kevin Knight and Daniel Marcu. 2004. What’s in a translation rule? HLTNAACL-04. Michel Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang and I. Thayer. 2006. Scalable Inference and Training of Context-Rich Syntactic Translation Models. COLING-ACL-06. 961-968 Daniel Gildea. 2003. Loosely Tree-Based Alignment for Machine Translation. ACL-03. 80-87. Jonathan Graehl and Kevin Knight. 2004. Training tree transducers. HLT-NAACL-2004. 105-112. Mary Hearne and Andy Way. 2003. Seeing the wood for the trees: data-oriented translation. MT Summit IX, 165-172. Liang Huang, Kevin Knight and Aravind Joshi. 2006. Statistical Syntax-Directed Translation with Extended Domain of Locality. AMTA-06 (poster). Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. ACL-03. 423-430. Philipp Koehn, F. J. Och and D. Marcu. 2003. Statistical phrase-based translation. HLT-NAACL-03. 127133. Philipp Koehn. 2004. Pharaoh: a beam search decoder for phrase-based statistical machine translation models. AMTA-04, 115-124 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. ACL-07 (poster) 77-180. Yang Liu, Qun Liu and Shouxun Lin. 2006. Tree-toString Alignment Template for Statistical Machine Translation. COLING-ACL-06. 609-616. Yang Liu, Yun Huang, Qun Liu and Shouxun Lin. 2007. Forest-to-String Statistical Translation Rules. ACL-07. 704-711. Daniel Marcu, W. Wang, A. Echihabi and K. Knight. 2006. SPMT: Statistical Machine Translation with Syntactified Target Language Phrases. EMNLP-06. 44-52. I. Dan Melamed. 2004. Statistical machine translation by parsing. ACL-04. 653-660. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. ACL-02. 295-302. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. ACL-03. 160-167. Franz J. Och and Hermann Ney. 2004a. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449. Kishore Papineni, Salim Roukos, ToddWard and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. ACL-02. 311-318. Arjen Poutsma. 2000. Data-oriented translation. COLING-2000. 635-641 Chris Quirk and Arul Menezes. 2006. Do we need phrases? Challenging the conventional wisdom in SMT. COLING-ACL-06. 9-16. Chris Quirk, Arul Menezes and Colin Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. ACL-05. 271-279. Stefan Riezler and John T. Maxwell III. 2006. Grammatical Machine Translation. HLT-NAACL-06. 248-255. Hendra Setiawan, Min-Yen Kan and Haizhou Li. 2007. Ordering Phrases with Function Words. ACL-7. 712-719. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. ICSLP-02. 901-904. Benjamin Wellington, Sonjia Waxmonsky and I. Dan Melamed. 2006. Empirical Lower Bounds on the Complexity of Translational Equivalence. COLINGACL-06. 977-984. Dekai Wu. 1997. 
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403. Deyi Xiong, Qun Liu and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for SMT. COLING-ACL-06. 521-528. Kenji Yamada and Kevin Knight. 2001. A syntax-based statistical translation model. ACL-01. 523-530. Min Zhang, Hongfei Jiang, Ai Ti Aw, Jun Sun, Sheng Li and Chew Lim Tan. 2007. A Tree-to-Tree Alignment-based Model for Statistical Machine Translation. MT-Summit-07. 535-542. Ying Zhang, Stephan Vogel and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? LREC-04. 2051-2054.
Proceedings of ACL-08: HLT, pages 568–576, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Automatic Syllabification with Structured SVMs for Letter-To-Phoneme Conversion Susan Bartlett† Grzegorz Kondrak† Colin Cherry‡ †Department of Computing Science ‡Microsoft Research University of Alberta One Microsoft Way Edmonton, AB, T6G 2E8, Canada Redmond, WA, 98052 {susan,kondrak}@cs.ualberta.ca [email protected] Abstract We present the first English syllabification system to improve the accuracy of letter-tophoneme conversion. We propose a novel discriminative approach to automatic syllabification based on structured SVMs. In comparison with a state-of-the-art syllabification system, we reduce the syllabification word error rate for English by 33%. Our approach also performs well on other languages, comparing favorably with published results on German and Dutch. 1 Introduction Pronouncing an unfamiliar word is a task that is often accomplished by breaking the word down into smaller components. Even small children learning to read are taught to pronounce a word by “sounding out” its parts. Thus, it is not surprising that Letter-to-Phoneme (L2P) systems, which convert orthographic forms of words into sequences of phonemes, can benefit from subdividing the input word into smaller parts, such as syllables or morphemes. Marchand and Damper (2007) report that incorporating oracle syllable boundary information improves the accuracy of their L2P system, but they fail to emulate that result with any of their automatic syllabification methods. Demberg et al. (2007), on the other hand, find that morphological segmentation boosts L2P performance in German, but not in English. To our knowledge, no previous English orthographic syllabification system has been able to actually improve performance on the larger L2P problem. In this paper, we focus on the task of automatic orthographic syllabification, with the explicit goal of improving L2P accuracy. A syllable is a subdivision of a word, typically consisting of a vowel, called the nucleus, and the consonants preceding and following the vowel, called the onset and the coda, respectively. Although in the strict linguistic sense syllables are phonological rather than orthographic entities, our L2P objective constrains the input to orthographic forms. Syllabification of phonemic representation is in fact an easier task, which we plan to address in a separate publication. Orthographic syllabification is sometimes referred to as hyphenation. Many dictionaries provide hyphenation information for orthographic word forms. These hyphenation schemes are related to, and influenced by, phonemic syllabification. They serve two purposes: to indicate where words may be broken for end-of-line divisions, and to assist the dictionary reader with correct pronunciation (Gove, 1993). Although these purposes are not always consistent with our objective, we show that we can improve L2P conversion by taking advantage of the available hyphenation data. In addition, automatic hyphenation is a legitimate task by itself, which could be utilized in word editors or in synthesizing new trade names from several concepts. We present a discriminative approach to orthographic syllabification. We formulate syllabification as a tagging problem, and learn a discriminative tagger from labeled data using a structured support vector machine (SVM) (Tsochantaridis et al., 2004). 
With this approach, we reduce the error rate for English by 33%, relative to the best existing system. Moreover, we are also able to improve a state-of-theart L2P system by incorporating our syllabification models. Our method is not language specific; when applied to German and Dutch, our performance is 568 comparable with the best existing systems in those languages, even though our system has been developed and tuned on English only. The paper is structured as follows. After discussing previous computational approaches to the problem (Section 2), we introduce structured SVMs (Section 3), and outline how we apply them to orthographic syllabification (Section 4). We present our experiments and results for the syllabification task in Section 5. In Section 6, we apply our syllabification models to the L2P task. Section 7 concludes. 2 Related Work Automatic preprocessing of words is desirable because the productive nature of language ensures that no finite lexicon will contain all words. Marchand et al. (2007) show that rule-based methods are relatively ineffective for orthographic syllabification in English. On the other hand, few data-driven syllabification systems currently exist. Demberg (2006) uses a fourth-order Hidden Markov Model to tackle orthographic syllabification in German. When added to her L2P system, Demberg’s orthographic syllabification model effects a one percent absolute improvement in L2P word accuracy. Bouma (2002) explores syllabification in Dutch. He begins with finite state transducers, which essentially implement a general preference for onsets. Subsequently, he uses transformation-based learning to automatically extract rules that improve his system. Bouma’s best system, trained on some 250K examples, achieves 98.17% word accuracy. Daelemans and van den Bosch (1992) implement a backpropagation network for Dutch orthography, but find it is outperformed by less complex look-up table approaches. Marchand and Damper (2007) investigate the impact of syllabification on the L2P problem in English. Their Syllabification by Analogy (SbA) algorithm is a data-driven, lazy learning approach. For each input word, SbA finds the most similar substrings in a lexicon of syllabified words and then applies these dictionary syllabifications to the input word. Marchand and Damper report 78.1% word accuracy on the NETtalk dataset, which is not good enough to improve their L2P system. Chen (2003) uses an n-gram model and Viterbi decoder as a syllabifier, and then applies it as a preprocessing step in his maximum-entropy-based English L2P system. He finds that the syllabification pre-processing produces no gains over his baseline system. Marchand et al. (2007) conduct a more systematic study of existing syllabification approaches. They examine syllabification in both the pronunciation and orthographic domains, comparing their own SbA algorithm with several instance-based learning approaches (Daelemans et al., 1997; van den Bosch, 1997) and rule-based implementations. They find that SbA universally outperforms these other approaches by quite a wide margin. Syllabification of phonemes, rather than letters, has also been investigated (M¨uller, 2001; Pearson et al., 2000; Schmid et al., 2007). In this paper, our focus is on orthographic forms. However, as with our approach, some previous work in the phonetic domain has formulated syllabification as a tagging problem. 
3 Structured SVMs A structured support vector machine (SVM) is a large-margin training method that can learn to predict structured outputs, such as tag sequences or parse trees, instead of performing binary classification (Tsochantaridis et al., 2004). We employ a structured SVM that predicts tag sequences, called an SVM Hidden Markov Model, or SVM-HMM. This approach can be considered an HMM because the Viterbi algorithm is used to find the highest scoring tag sequence for a given observation sequence. The scoring model employs a Markov assumption: each tag’s score is modified only by the tag that came before it. This approach can be considered an SVM because the model parameters are trained discriminatively to separate correct tag sequences from incorrect ones by as large a margin as possible. In contrast to generative HMMs, the learning process requires labeled training data. There are a number of good reasons to apply the structured SVM formalism to this problem. We get the benefit of discriminative training, not available in a generative HMM. Furthermore, we can use an arbitrary feature representation that does not require 569 any conditional independence assumptions. Unlike a traditional SVM, the structured SVM considers complete tag sequences during training, instead of breaking each sequence into a number of training instances. Training a structured SVM can be viewed as a multi-class classification problem. Each training instance xi is labeled with a correct tag sequence yi drawn from a set of possible tag sequences Yi. As is typical of discriminative approaches, we create a feature vector Ψ(x, y) to represent a candidate y and its relationship to the input x. The learner’s task is to weight the features using a vector w so that the correct tag sequence receives more weight than the competing, incorrect sequences: ∀i∀y∈Yi,y̸=yi [Ψ(xi, yi) · w > Ψ(xi, y) · w] (1) Given a trained weight vector w, the SVM tags new instances xi according to: argmaxy∈Yi [Ψ(xi, y) · w] (2) A structured SVM finds a w that satisfies Equation 1, and separates the correct taggings by as large a margin as possible. The argmax in Equation 2 is conducted using the Viterbi algorithm. Equation 1 is a simplification. In practice, a structured distance term is added to the inequality in Equation 1 so that the required margin is larger for tag sequences that diverge further from the correct sequence. Also, slack variables are employed to allow a trade-off between training accuracy and the complexity of w, via a tunable cost parameter. For most structured problems, the set of negative sequences in Yi is exponential in the length of xi, and the constraints in Equation 1 cannot be explicitly enumerated. The structured SVM solves this problem with an iterative online approach: 1. Collect the most damaging incorrect sequence y according to the current w. 2. Add y to a growing set ¯Yi of incorrect sequences. 3. Find a w that satisfies Equation 1, using the partial ¯Yi sets in place of Yi. 4. Go to next training example, loop to step 1. This iterative process is explained in far more detail in (Tsochantaridis et al., 2004). 4 Syllabification with Structured SVMs In this paper we apply structured SVMs to the syllabification problem. Specifically, we formulate syllabification as a tagging problem and apply the SVM-HMM software package1 (Altun et al., 2003). We use a linear kernel, and tune the SVM’s cost parameter on a development set. 
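To make the tagging step concrete, the argmax in Equation 2 is simply a Viterbi search over tag sequences once a weight vector w has been learned. The sketch below assumes the per-position emission scores and the tag-pair transition scores (the dot products of w with the corresponding parts of Ψ) have already been computed; the function and variable names are ours and are not part of the SVM-HMM package's interface.

```python
def viterbi(emit, trans, tags):
    """Highest-scoring tag sequence under summed emission + transition scores
    (the argmax of Equation 2), recovered with back-pointers."""
    n = len(emit)
    # best[i][t]: best score of any tag sequence for positions 0..i ending in tag t
    best = [{t: emit[0][t] for t in tags}]
    back = [{}]
    for i in range(1, n):
        best.append({})
        back.append({})
        for t in tags:
            prev = max(tags, key=lambda s: best[i - 1][s] + trans[s][t])
            best[i][t] = best[i - 1][prev] + trans[prev][t] + emit[i][t]
            back[i][t] = prev
    last = max(tags, key=lambda t: best[n - 1][t])
    seq = [last]
    for i in range(n - 1, 0, -1):
        last = back[i][last]
        seq.append(last)
    return list(reversed(seq))

# Toy usage with the NB tag set and made-up scores:
tags = ["N", "B"]
emit = [{"N": 1.0, "B": 0.2}, {"N": 0.1, "B": 0.9}, {"N": 0.8, "B": 0.3}]
trans = {"N": {"N": 0.0, "B": 0.1}, "B": {"N": 0.2, "B": -0.5}}
print(viterbi(emit, trans, tags))   # -> ['N', 'B', 'N']
```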
The feature representation Ψ consists of emission features, which pair an aspect of x with a single tag from y, and transition features, which count tag pairs occurring in y. With SVM-HMM, the crux of the task is to create a tag scheme and feature set that produce good results. In this section, we discuss several different approaches to tagging for the syllabification task. Subsequently, we outline our emission feature representation. While developing our tagging schemes and feature representation, we used a development set of 5K words held out from our CELEX training data. All results reported in this section are on that set. 4.1 Annotation Methods We have employed two different approaches to tagging in this research. Positional tags capture where a letter occurs within a syllable; Structural tags express the role each letter is playing within the syllable. Positional Tags The NB tag scheme simply labels every letter as either being at a syllable boundary (B), or not (N). Thus, the word im-mor-al-ly is tagged ⟨N B N N B N B N N⟩, indicating a syllable boundary after each B tag. This binary classification approach to tagging is implicit in several previous implementations (Daelemans and van den Bosch, 1992; Bouma, 2002), and has been done explicitly in both the orthographic (Demberg, 2006) and phoneme domains (van den Bosch, 1997). A weakness of NB tags is that they encode no knowledge about the length of a syllable. Intuitively, we expect the length of a syllable to be valuable information — most syllables in English contain fewer than four characters. We introduce a tagging scheme that sequentially numbers the N tags to impart information about syllable length. Under the Numbered 1http://svmlight.joachims.org/svm struct.html 570 NB tag scheme, im-mor-al-ly is annotated as ⟨N1 B N1 N2 B N1 B N1 N2⟩. With this tag set, we have effectively introduced a bias in favor of shorter syllables: tags like N6, N7. . . are comparatively rare, so the learner will postulate them only when the evidence is particularly compelling. Structural Tags Numbered NB tags are more informative than standard NB tags. However, neither annotation system can represent the internal structure of the syllable. This has advantages: tags can be automatically generated from a list of syllabified words without even a passing familiarity with the language. However, a more informative annotation, tied to phonotactics, ought to improve accuracy. Krenn (1997) proposes the ONC tag scheme, in which phonemes of a syllable are tagged as an onset, nucleus, or coda. Given these ONC tags, syllable boundaries can easily be generated by applying simple regular expressions. Unfortunately, it is not as straightforward to generate ONC-tagged training data in the orthographic domain, even with syllabified training data. Silent letters are problematic, and some letters can behave differently depending on their context (in English, consonants such as m, y, and l can act as vowels in certain situations). Thus, it is difficult to generate ONC tags for orthographic forms without at least a cursory knowledge of the language and its principles. For English, tagging the syllabified training set with ONC tags is performed by the following simple algorithm. In the first stage, all letters from the set {a, e, i, o, u} are marked as vowels, while the remaining letters are marked as consonants. Next, we examine all the instances of the letter y. If a y is both preceded and followed by a consonant, we mark that instance as a vowel rather than a consonant. 
In the second stage, the first group of consecutive vowels in each syllable is tagged as nucleus. All letters preceding the nucleus are then tagged as onset, while all letters following the nucleus are tagged as coda. Our development set experiments suggested that numbering ONC tags increases their performance. Under the Numbered ONC tag scheme, the singlesyllable word stealth is labeled ⟨O1 O2 N1 N2 C1 C2 C3⟩. A disadvantage of Numbered ONC tags is that, unlike positional tags, they do not represent syllable breaks explicitly. Within the ONC framework, we need the conjunction of two tags (such as an N1 tag followed by an O1 tag) to represent the division between syllables. This drawback can be overcome by combining ONC tags and NB tags in a hybrid Break ONC tag scheme. Using Break ONC tags, the word lev-i-ty is annotated as ⟨O N CB NB O N⟩. The ⟨NB⟩tag indicates a letter is both part of the nucleus and before a syllable break, while the ⟨N⟩ tag represents a letter that is part of a nucleus but in the middle of a syllable. In this way, we get the best of both worlds: tags that encapsulate information about syllable structure, while also representing syllable breaks explicitly with a single tag. 4.2 Emission Features SVM-HMM predicts a tag for each letter in a word, so emission features use aspects of the input to help predict the correct tag for a specific letter. Consider the tag for the letter o in the word immorally. With a traditional HMM, we consider only that it is an o being emitted, and assess potential tags based on that single letter. The SVM framework is less restrictive: we can include o as an emission feature, but we can also include features indicating that the preceding and following letters are m and r respectively. In fact, there is no reason to confine ourselves to only one character on either side of the focus letter. After experimenting with the development set, we decided to include in our feature set a window of eleven characters around the focus character, five on either side. Figure 1 shows that performance gains level off at this point. Special beginning- and end-of-word characters are appended to words so that every letter has five characters before and after. We also experimented with asymmetric context windows, representing more characters after the focus letter than before, but we found that symmetric context windows perform better. Because our learner is effectively a linear classifier, we need to explicitly represent any important conjunctions of features. For example, the bigram bl frequently occurs within a single English syllable, while the bigram lb generally straddles two syllables. Similarly, a fourgram like tion very often 571 Figure 1: Word accuracy as a function of the window size around the focus character, using unigram features on the development set. forms a syllable in and of itself. Thus, in addition to the single-letter features outlined above, we also include in our representation any bigrams, trigrams, four-grams, and five-grams that fit inside our context window. As is apparent from Figure 2, we see a substantial improvement by adding bigrams to our feature set. Higher-order n-grams produce increasingly smaller gains. Figure 2: Word accuracy as a function of maximum ngram size on the development set. In addition to these primary n-gram features, we experimented with linguistically-derived features. Intuitively, basic linguistic knowledge, such as whether a letter is a consonant or a vowel, should be helpful in determining syllabification. 
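As a concrete illustration of how training tags can be derived from syllabified dictionary entries, here is a small sketch covering the positional schemes and the two-stage English ONC heuristic described above. It reproduces the paper's own examples (im-mor-al-ly and stealth); the function names are ours, and the sketch assumes every syllable contains at least one vowel.

```python
VOWELS = set("aeiou")

def nb_tags(hyphenated, numbered=False):
    """NB / Numbered NB tags: B marks a letter followed by a syllable break,
    N (or N1, N2, ...) marks every other letter."""
    syllables = hyphenated.split("-")
    tags = []
    for s, syl in enumerate(syllables):
        last_syl = (s == len(syllables) - 1)
        n = 0
        for i in range(len(syl)):
            if not last_syl and i == len(syl) - 1:
                tags.append("B")
            else:
                n += 1
                tags.append("N%d" % n if numbered else "N")
    return tags

def is_vowel(word, i):
    """Stage 1 of the ONC heuristic: a, e, i, o, u are vowels; a y flanked by
    consonants is re-marked as a vowel."""
    if word[i] in VOWELS:
        return True
    if word[i] == "y" and 0 < i < len(word) - 1:
        return word[i - 1] not in VOWELS and word[i + 1] not in VOWELS
    return False

def onc_tags(hyphenated, numbered=True):
    """Stage 2: within each syllable, the first run of vowels is the nucleus;
    letters before it are onset, letters after it are coda."""
    word = hyphenated.replace("-", "")
    vowel = [is_vowel(word, i) for i in range(len(word))]
    tags, pos = [], 0
    for syl in hyphenated.split("-"):
        v = vowel[pos:pos + len(syl)]
        start = v.index(True)                      # first vowel of the syllable
        end = start
        while end + 1 < len(v) and v[end + 1]:
            end += 1                               # extend over consecutive vowels
        for i in range(len(syl)):
            if i < start:
                role, k = "O", i + 1
            elif i <= end:
                role, k = "N", i - start + 1
            else:
                role, k = "C", i - end
            tags.append(role + str(k) if numbered else role)
        pos += len(syl)
    return tags

print(" ".join(nb_tags("im-mor-al-ly")))                  # N B N N B N B N N
print(" ".join(nb_tags("im-mor-al-ly", numbered=True)))   # N1 B N1 N2 B N1 B N1 N2
print(" ".join(onc_tags("stealth")))                      # O1 O2 N1 N2 C1 C2 C3
```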
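The emission feature set just described can likewise be sketched directly: an eleven-character window around the focus letter, padding symbols at the word edges, and every n-gram of length one to five that fits inside the window. The feature-string format and the padding symbols below are our own choices, not the paper's.

```python
def emission_features(word, focus, window=5, max_n=5):
    """All n-grams (n = 1..max_n) inside a (2*window + 1)-character window
    centred on the focus letter, keyed by n-gram length and offset from the focus."""
    padded = "<" * window + word + ">" * window   # '<' / '>' stand in for the special
    center = focus + window                       # beginning- and end-of-word characters
    left, right = center - window, center + window
    feats = []
    for n in range(1, max_n + 1):
        for start in range(left, right - n + 2):  # every n-gram fully inside the window
            feats.append("%d_%d_%s" % (n, start - center, padded[start:start + n]))
    return feats

# Features for the focus letter 'o' (index 3) in "immorally"; the unigrams at
# offsets -1 and +1 are the preceding 'm' and following 'r' mentioned above.
print(len(emission_features("immorally", 3)))    # 11 + 10 + 9 + 8 + 7 = 45 features
```

Linguistically derived indicators, such as a consonant/vowel flag for each window position, could be added to the same representation.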
However, our experiments suggested that including features like these has no significant effect on performance. We believe that this is caused by the ability of the SVM to learn such generalizations from the n-gram features alone. 5 Syllabification Experiments In this section, we will discuss the results of our best emission feature set (five-gram features with a context window of eleven letters) on held-out unseen test sets. We explore several different languages and datasets, and perform a brief error analysis. 5.1 Datasets Datasets are especially important in syllabification tasks. Dictionaries sometimes disagree on the syllabification of certain words, which makes a gold standard difficult to obtain. Thus, any reported accuracy is only with respect to a given set of data. In this paper, we report the results of experiments on two datasets: CELEX and NETtalk. We focus mainly on CELEX, which has been developed over a period of years by linguists in the Netherlands. CELEX contains English, German, and Dutch words, and their orthographic syllabifications. We removed all duplicates and multipleword entries for our experiments. The NETtalk dictionary was originally developed with the L2P task in mind. The syllabification data in NETtalk was created manually in the phoneme domain, and then mapped directly to the letter domain. NETtalk and CELEX do not provide the same syllabification for every word. There are numerous instances where the two datasets differ in a perfectly reasonable manner (e.g. for-ging in NETtalk vs. forg-ing in CELEX). However, we argue that NETtalk is a vastly inferior dataset. On a sample of 50 words, NETtalk agrees with Merriam-Webster’s syllabifications in only 54% of instances, while CELEX agrees in 94% of cases. Moreover, NETtalk is riddled with truly bizarre syllabifications, such as be-aver, dis-hcloth and som-ething. These syllabifications make generalization very hard, and are likely to complicate the L2P task we ultimately want to accomplish. Because previous work in English primarily used NETtalk, we report our results on both datasets. Nevertheless, we believe NETtalk is unsuitable for building a syllabification model, and that results on CELEX are much more indicative of the efficacy of our (or any other) approach. At 20K words, NETtalk is much smaller than CELEX. For NETtalk, we randomly divide the data into 13K training examples and 7K test words. We 572 randomly select a comparably-sized training set for our CELEX experiments (14K), but test on a much larger, 25K set. Recall that 5K training examples were held out as a development set. 5.2 Results We report the results using two metrics. Word accuracy (WA) measures how many words match the gold standard. Syllable break error rate (SBER) captures the incorrect tags that cause an error in syllabification. Word accuracy is the more demanding metric. We compare our system to Syllabification by Analogy (SbA), the best existing system for English (Marchand and Damper, 2007). For both CELEX and NETtalk, SbA was trained and tested with the same data as our structured SVM approach. Data Set Method WA SBER CELEX NB tags 86.66 2.69 Numbered NB 89.45 2.51 Numbered ONC 89.86 2.50 Break ONC 89.99 2.42 SbA 84.97 3.96 NETtalk Numbered NB 81.75 5.01 SbA 75.56 7.73 Table 1: Syllabification performance in terms of word accuracy and syllable break error percentage. Table 1 presents the word accuracy and syllable break error rate achieved by each of our tag sets on both the CELEX and NETtalk datasets. 
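For reference, both metrics can be computed directly from gold and predicted syllabifications. Word accuracy is exact match over whole words; the paper does not spell out the SBER denominator, so the per-position normalization in this sketch is our assumption.

```python
def breaks(hyphenated):
    """Indices of letters that are followed by a syllable break."""
    idx, pos = set(), 0
    for syl in hyphenated.split("-")[:-1]:
        pos += len(syl)
        idx.add(pos - 1)
    return idx

def evaluate(gold, pred):
    """Word accuracy, plus an assumed syllable-break error rate: break positions
    that disagree with the gold standard, over all internal letter positions."""
    correct = sum(g == p for g, p in zip(gold, pred))
    errors = slots = 0
    for g, p in zip(gold, pred):
        errors += len(breaks(g) ^ breaks(p))          # missed + spurious breaks
        slots += len(g.replace("-", "")) - 1
    return 100.0 * correct / len(gold), 100.0 * errors / slots

wa, sber = evaluate(["im-mor-al-ly", "stealth"], ["im-mor-al-ly", "ste-alth"])
print(wa, sber)   # 50.0 word accuracy; one spurious break out of 14 positions (~7.1)
```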
Of our four tag sets, NB tags perform noticeably worse. This is an important result because it demonstrates that it is not sufficient to simply model a syllable’s boundaries; we must also model a syllable’s length or structure to achieve the best results. Given the similarity in word accuracy scores, it is difficult to draw definitive conclusions about the remaining three tags sets, but it does appear that there is an advantage to modeling syllable structure, as both ONC tag sets score better than the best NB set. All variations of our system outperform SbA on both datasets. Overall, our best tag set lowers the error rate by one-third, relative to SbA’s performance. Note that we employ only numbered NB tags for the NETtalk test; we could not apply structural tag schemes to the NETtalk training data because of its bizarre syllabification choices. Our higher level of accuracy is also achieved more efficiently. Once a model is learned, our system can syllabify 25K words in about a minute, while SbA requires several hours (Marchand, 2007). SVM training times vary depending on the tag set and dataset used, and the number of training examples. On 14K CELEX examples with the ONC tag set, our model trained in about an hour, on a singleprocessor P4 3.4GHz processor. Training time is, of course, a one-time cost. This makes our approach much more attractive for inclusion in an actual L2P system. Figure 3 shows our method’s learning curve. Even small amounts of data produce adequate performance — with only 2K training examples, word accuracy is already over 75%. Using a 60K training set and testing on a held-out 5K set, we see word accuracies climb to 95.65%. Figure 3: Word accuracy as function of the size of the training data. 5.3 Error Analysis We believe that the reason for the relatively low performance of unnumbered NB tags is the weakness of the signal coming from NB emission features. With the exception of q and x, every letter can take on either an N tag or a B tag with almost equal probability. This is not the case with Numbered NB tags. Vowels are much more likely to have N2 or N3 tags (because they so often appear in the middle of a syllable), while consonants take on N1 labels with greater probability. The numbered NB and ONC systems make many of the same errors, on words that we might expect to 573 cause difficulty. In particular, both suffer from being unaware of compound nouns and morphological phenomena. All three systems, for example, incorrectly syllabify hold-o-ver as hol-dov-er. This kind of error is caused by a lack of knowledge of the component words. The three systems also display trouble handling consecutive vowels, as when co-ad-jutors is syllabified incorrectly as coad-ju-tors. Vowel pairs such as oa are not handled consistently in English, and the SVM has trouble predicting the exceptions. 5.4 Other Languages We take advantage of the language-independence of Numbered NB tags to apply our method to other languages. Without even a cursory knowledge of German or Dutch, we have applied our approach to these two languages. # Data Points Dutch German ∼50K 98.20 98.81 ∼250K 99.45 99.78 Table 2: Syllabification performance in terms of word accuracy percentage. We have randomly selected two training sets from the German and Dutch portions of CELEX. Our smaller model is trained on ∼50K words, while our larger model is trained on ∼250K. Table 2 shows our performance on a 30K test set held out from both training sets. Results from both the small and large models are very good indeed. 
Our performance on these language sets is clearly better than our best score for English (compare at 95% with a comparable amount of training data). Syllabification is a more regular process in German and Dutch than it is in English, which allows our system to score higher on those languages. Our method’s word accuracy compares favorably with other methods. Bouma’s finite state approach for Dutch achieves 96.49% word accuracy using 50K training points, while we achieve 98.20%. With a larger model, trained on about 250K words, Bouma achieves 98.17% word accuracy, against our 99.45%. Demberg (2006) reports that her HMM approach for German scores 97.87% word accuracy, using a 90/10 training/test split on the CELEX dataset. On the same set, Demberg et al. (2007) obtain 99.28% word accuracy by applying the system of Schmid et al. (2007). Our score using a similar split is 99.78%. Note that none of these scores are directly comparable, because we did not use the same train-test splits as our competitors, just similar amounts of training and test data. Furthermore, when assembling random train-test splits, it is quite possible that words sharing the same lemma will appear in both the training and test sets. This makes the problem much easier with large training sets, where the chance of this sort of overlap becomes high. Therefore, any large data results may be slightly inflated as a prediction of actual out-of-dictionary performance. 6 L2P Performance As we stated from the outset, one of our primary motivations for exploring orthographic syllabification is the improvements it can produce in L2P systems. To explore this, we tested our model in conjunction with a recent L2P system that has been shown to predict phonemes with state-of-the-art word accuracy (Jiampojamarn et al., 2007). Using a model derived from training data, this L2P system first divides a word into letter chunks, each containing one or two letters. A local classifier then predicts a number of likely phonemes for each chunk, with confidence values. A phoneme-sequence Markov model is then used to select the most likely sequence from the phonemes proposed by the local classifier. Syllabification English Dutch German None 84.67 91.56 90.18 Numbered NB 85.55 92.60 90.59 Break ONC 85.59 N/A N/A Dictionary 86.29 93.03 90.57 Table 3: Word accuracy percentage on the letter-tophoneme task with and without the syllabification information. To measure the improvement syllabification can effect on the L2P task, the L2P system was trained with syllabified, rather than unsyllabified words. Otherwise, the execution of the L2P system remains unchanged. Data for this experiment is again drawn 574 from the CELEX dictionary. In Table 3, we report the average word accuracy achieved by the L2P system using 10-fold cross-validation. We report L2P performance without any syllabification information, with perfect dictionary syllabification, and with our small learned models of syllabification. L2P performance with dictionary syllabification represents an approximate upper bound on the contributions of our system. Our syllabification model improves L2P performance. In English, perfect syllabification produces a relative error reduction of 10.6%, and our model captures over half of the possible improvement, reducing the error rate by 6.0%. To our knowledge, this is the first time a syllabification model has improved L2P performance in English. Previous work includes Marchand and Damper (2007)’s experiments with SbA and the L2P problem on NETtalk. 
Although perfect syllabification reduces their L2P relative error rate by 18%, they find that their learned model actually increases the error rate. Chen (2003) achieved word accuracy of 91.7% for his L2P system, testing on a different dictionary (Pronlex) with a much larger training set. He does not report word accuracy for his syllabification model. However, his baseline L2P system is not improved by adding a syllabification model. For Dutch, perfect syllabification reduces the relative L2P error rate by 17.5%; we realize over 70% of the available improvement with our syllabification model, reducing the relative error rate by 12.4%. In German, perfect syllabification produces only a small reduction of 3.9% in the relative error rate. Experiments show that our learned model actually produces a slightly higher reduction in the relative error rate. This anomaly may be due to errors or inconsistencies in the dictionary syllabifications that are not replicated in the model output. Previously, Demberg (2006) generated statistically significant L2P improvements in German by adding syllabification pre-processing. However, our improvements are coming at a much higher baseline level of word accuracy – 90% versus only 75%. Our results also provide some evidence that syllabification preprocessing may be more beneficial to L2P than morphological preprocessing. Demberg et al. (2007) report that oracle morphological annotation produces a relative error rate reduction of 3.6%. We achieve a larger decrease at a higher level of accuracy, using an automatic pre-processing technique. This may be because orthographic syllabifications already capture important facts about a word’s morphology. 7 Conclusion We have applied structured SVMs to the syllabification problem, clearly outperforming existing systems. In English, we have demonstrated a 33% relative reduction in error rate with respect to the state of the art. We used this improved syllabification to increase the letter-to-phoneme accuracy of an existing L2P system, producing a system with 85.5% word accuracy, and recovering more than half of the potential improvement available from perfect syllabification. This is the first time automatic syllabification has been shown to improve English L2P. Furthermore, we have demonstrated the languageindependence of our system by producing competitive orthographic syllabification solutions for both Dutch and German, achieving word syllabification accuracies of 98% and 99% respectively. These learned syllabification models also improve accuracy for German and Dutch letter-to-phoneme conversion. In future work on this task, we plan to explore adding morphological features to the SVM, in an effort to overcome errors in compound words and inflectional forms. We would like to experiment with performing L2P and syllabification jointly, rather than using syllabification as a pre-processing step for L2P. We are also working on applying our method to phonetic syllabification. Acknowledgements Many thanks to Sittichai Jiampojamarn for his help with the L2P experiments, and to Yannick Marchand for providing the SbA results. This research was supported by the Natural Sciences and Engineering Research Council of Canada and the Alberta Informatics Circle of Research Excellence. References Yasemin Altun, Ioannis Tsochantaridis, and Thomas Hofmann. 2003. Hidden Markov support vector ma575 chines. Proceedings of the 20th International Conference on Machine Learning (ICML), pages 3–10. Susan Bartlett. 2007. 
Discriminative approach to automatic syllabification. Master’s thesis, Department of Computing Science, University of Alberta. Gosse Bouma. 2002. Finite state methods for hyphenation. Natural Language Engineering, 1:1–16. Stanley Chen. 2003. Conditional and joint models for grapheme-to-phoneme conversion. Proceedings of the 8th European Conference on Speech Communication and Technology (Eurospeech). Walter Daelemans and Antal van den Bosch. 1992. Generalization performance of backpropagation learning on a syllabification task. Proceedings of the 3rd Twente Workshop on Language Technology, pages 27– 38. Walter Daelemans, Antal van den Bosch, and Ton Weijters. 1997. IGTree: Using trees for compression and classification in lazy learning algorithms. Artificial Intelligence Review, pages 407–423. Vera Demberg, Helmust Schmid, and Gregor M¨ohler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL). Vera Demberg. 2006. Letter-to-phoneme conversion for a German text-to-speech system. Master’s thesis, University of Stuttgart. Philip Babcock Gove, editor. 1993. Webster’s Third New International Dictionary of the English Language, Unabridged. Merriam-Webster Inc. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden Markov models to letter-to-phoneme conversion. Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics HLTNAACL, pages 372–379. Brigitte Krenn. 1997. Tagging syllables. Proceedings of Eurospeech, pages 991–994. Yannick Marchand and Robert Damper. 2007. Can syllabification improve pronunciation by analogy of English? Natural Language Engineering, 13(1):1–24. Yannick Marchand, Connie Adsett, and Robert Damper. 2007. Evaluation of automatic syllabification algorithms for English. In Proceedings of the 6th International Speech Communication Association (ISCA) Workshop on Speech Synthesis. Yannick Marchand. 2007. Personal correspondence. Karin M¨uller. 2001. Automatic detection of syllable boundaries combining the advantages of treebank and bracketed corpora training. Proceedings on the 39th Meeting of the Association for Computational Linguistics (ACL), pages 410–417. Steve Pearson, Roland Kuhn, Steven Fincke, and Nick Kibre. 2000. Automatic methods for lexical stress assignment and syllabification. In Proceedings of the 6th International Conference on Spoken Language Processing (ICSLP), pages 423–426. Helmut Schmid, Bernd M¨obius, and Julia Weidenkaff. 2007. Tagging syllable boundaries with joint N-gram models. Proceedings of Interspeech. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. Proceedings of the 21st International Conference on Machine Learning (ICML), pages 823– 830. Antal van den Bosch. 1997. Learning to pronounce written words: a study in inductive language learning. Ph.D. thesis, Universiteit Maastricht. 576
Proceedings of ACL-08: HLT, pages 577–585, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics A New String-to-Dependency Machine Translation Algorithm with a Target Dependency Language Model Libin Shen BBN Technologies Cambridge, MA 02138, USA [email protected] Jinxi Xu BBN Technologies Cambridge, MA 02138, USA [email protected] Ralph Weischedel BBN Technologies Cambridge, MA 02138, USA [email protected] Abstract In this paper, we propose a novel string-todependency algorithm for statistical machine translation. With this new framework, we employ a target dependency language model during decoding to exploit long distance word relations, which are unavailable with a traditional n-gram language model. Our experiments show that the string-to-dependency decoder achieves 1.48 point improvement in BLEU and 2.53 point improvement in TER compared to a standard hierarchical string-tostring system on the NIST 04 Chinese-English evaluation set. 1 Introduction In recent years, hierarchical methods have been successfully applied to Statistical Machine Translation (Graehl and Knight, 2004; Chiang, 2005; Ding and Palmer, 2005; Quirk et al., 2005). In some language pairs, i.e. Chinese-to-English translation, state-ofthe-art hierarchical systems show significant advantage over phrasal systems in MT accuracy. For example, Chiang (2007) showed that the Hiero system achieved about 1 to 3 point improvement in BLEU on the NIST 03/04/05 Chinese-English evaluation sets compared to a start-of-the-art phrasal system. Our work extends the hierarchical MT approach. We propose a string-to-dependency model for MT, which employs rules that represent the source side as strings and the target side as dependency structures. We restrict the target side to the so called wellformed dependency structures, in order to cover a large set of non-constituent transfer rules (Marcu et al., 2006), and enable efficient decoding through dynamic programming. We incorporate a dependency language model during decoding, in order to exploit long-distance word relations which are unavailable with a traditional n-gram language model on target strings. For comparison purposes, we replicated the Hiero decoder (Chiang, 2005) as our baseline. Our stringto-dependency decoder shows 1.48 point improvement in BLEU and 2.53 point improvement in TER on the NIST 04 Chinese-English MT evaluation set. In the rest of this section, we will briefly discuss previous work on hierarchical MT and dependency representations, which motivated our research. In section 2, we introduce the model of string-to-dependency decoding. Section 3 illustrates of the use of dependency language models. In section 4, we describe the implementation details of our MT system. We discuss experimental results in section 5, compare to related work in section 6, and draw conclusions in section 7. 1.1 Hierarchical Machine Translation Graehl and Knight (2004) proposed the use of targettree-to-source-string transducers (xRS) to model translation. In xRS rules, the right-hand-side(rhs) of the target side is a tree with non-terminals(NTs), while the rhs of the source side is a string with NTs. Galley et al. (2006) extended this string-to-tree model by using Context-Free parse trees to represent the target side. A tree could represent multi-level transfer rules. The Hiero decoder (Chiang, 2007) does not require explicit syntactic representation on either side of the rules. Both source and target are strings with NTs. Decoding is solved as chart parsing. 
Hiero can be viewed as a hierarchical string-to-string model. Ding and Palmer (2005) and Quirk et al. (2005) 577 it will find boy the interesting Figure 1: The dependency tree for sentence the boy will find it interesting followed the tree-to-tree approach (Shieber and Schabes, 1990) for translation. In their models, dependency treelets are used to represent both the source and the target sides. Decoding is implemented as tree transduction preceded by source side dependency parsing. While tree-to-tree models can represent richer structural information, existing tree-totree models did not show advantage over string-totree models on translation accuracy due to a much larger search space. One of the motivations of our work is to achieve desirable trade-off between model capability and search space through the use of the so called wellformed dependency structures in rule representation. 1.2 Dependency Trees Dependency trees reveal long-distance relations between words. For a given sentence, each word has a parent word which it depends on, except for the root word. Figure 1 shows an example of a dependency tree. Arrows point from the child to the parent. In this example, the word find is the root. Dependency trees are simpler in form than CFG trees since there are no constituent labels. However, dependency relations directly model semantic structure of a sentence. As such, dependency trees are a desirable prior model of the target sentence. 1.3 Motivations for Well-Formed Dependency Structures We restrict ourselves to the so-called well-formed target dependency structures based on the following considerations. Dynamic Programming In (Ding and Palmer, 2005; Quirk et al., 2005), there is no restriction on dependency treelets used in transfer rules except for the size limit. This may result in a high dimensionality in hypothesis representation and make it hard to employ shared structures for efficient dynamic programming. In (Galley et al., 2004), rules contain NT slots and combination is only allowed at those slots. Therefore, the search space becomes much smaller. Furthermore, shared structures can be easily defined based on the labels of the slots. In order to take advantage of dynamic programming, we fixed the positions onto which another another tree could be attached by specifying NTs in dependency trees. Rule Coverage Marcu et al. (2006) showed that many useful phrasal rules cannot be represented as hierarchical rules with the existing representation methods, even with composed transfer rules (Galley et al., 2006). For example, the following rule • <(hong)Chinese, (DT(the) JJ(red))English> is not a valid string-to-tree transfer rule since the red is a partial constituent. A number of techniques have been proposed to improve rule coverage. (Marcu et al., 2006) and (Galley et al., 2006) introduced artificial constituent nodes dominating the phrase of interest. The binarization method used by Wang et al. (2007) can cover many non-constituent rules also, but not all of them. For example, it cannot handle the above example. DeNeefe et al. (2007) showed that the best results were obtained by combing these methods. In this paper, we use well-formed dependency structures to handle the coverage of non-constituent rules. The use of dependency structures is due to the flexibility of dependency trees as a representation method which does not rely on constituents (Fox, 2002; Ding and Palmer, 2005; Quirk et al., 2005). 
The well-formedness of the dependency structures enables efficient decoding through dynamic programming. 2 String-to-Dependency Translation 2.1 Transfer Rules with Well-Formed Dependency Structures A string-to-dependency grammar G is a 4-tuple G = <R, X, Tf, Te>, where R is a set of transfer rules. X is the only non-terminal, which is similar to the Hiero system (Chiang, 2007). Tf is a set of terminals in the source language, and Te is a set of terminals in the target language. (We ignore the left-hand side here because there is only one non-terminal X; of course, this formalism can be extended to have multiple NTs.) A string-to-dependency transfer rule R ∈ R is a 4-tuple R = <Sf, Se, D, A>, where Sf ∈ (Tf ∪ {X})+ is a source string, Se ∈ (Te ∪ {X})+ is a target string, D represents the dependency structure for Se, and A is the alignment between Sf and Se. Non-terminal alignments in A must be one-to-one. In order to exclude undesirable structures, we only allow Se whose dependency structure D is well-formed, which we will define below. In addition, the same well-formedness requirement will be applied to partial decoding results. Thus, we will be able to employ shared structures to merge multiple partial results. Based on the results in previous work (DeNeefe et al., 2007), we want to keep two kinds of dependency structures. In one kind, we keep dependency trees with a sub-root, where all the children of the sub-root are complete. We call them fixed dependency structures because the head is known or fixed. In the other, we keep dependency structures of sibling nodes of a common head, but the head itself is unspecified or floating. Each of the siblings must be a complete constituent. We call them floating dependency structures. Floating structures can represent many linguistically meaningful non-constituent structures: for example, like the red, a modifier of a noun. Only those two kinds of dependency structures are well-formed structures in our system. Furthermore, we operate over well-formed structures in a bottom-up style in decoding. However, the description given above does not provide a clear definition of how to combine those two types of structures. In the rest of this section, we will provide formal definitions of well-formed structures and combinatory operations over them, so that we can easily manipulate well-formed structures in decoding. Formal definitions also allow us to easily extend the framework to incorporate a dependency language model in decoding. Examples will be provided along with the formal definitions. Consider a sentence S = w1w2...wn. Let d1d2...dn represent the parent word IDs for each word; for example, d4 = 2 means that w4 depends on w2. If wi is a root, we define di = 0. Figure 2: Fixed dependency structures. Figure 3: Floating dependency structures. Definition 1 A dependency structure di..j is fixed on head h, where h ∈ [i, j], or fixed for short, if and only if it meets the following conditions: • dh ∉ [i, j] • ∀k ∈ [i, j] and k ≠ h, dk ∈ [i, j] • ∀k ∉ [i, j], dk = h or dk ∉ [i, j]. In addition, we say the category of di..j is (−, h, −), where − means this field is undefined.
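Definition 1 translates directly into a check over the parent array d1...dn. Below is a small sketch (1-based word indices, 0 for the root); the example array encodes the dependency tree of Figure 1, the boy will find it interesting.

```python
def is_fixed(d, i, j, h):
    """Definition 1: span [i, j] is fixed on head h iff the head's parent lies
    outside the span, every other word in the span depends inside the span, and
    any outside word that points into the span points at the head."""
    if not (i <= h <= j) or i <= d[h - 1] <= j:
        return False
    for k in range(i, j + 1):
        if k != h and not (i <= d[k - 1] <= j):
            return False
    for k in range(1, len(d) + 1):
        if (k < i or k > j) and i <= d[k - 1] <= j and d[k - 1] != h:
            return False
    return True

# "the boy will find it interesting": the -> boy; boy, will, it, interesting -> find
d = [2, 4, 4, 0, 4, 4]
print(is_fixed(d, 1, 2, 2))   # True: "the boy" is fixed on "boy"
print(is_fixed(d, 1, 6, 4))   # True: the whole tree is fixed on "find"
print(is_fixed(d, 2, 4, 4))   # False: "boy will find" leaves out boy's child "the"
```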
Definition 2 A dependency structure di...dj is floating with children C, for a non-empty set C ⊆ {i, ..., j}, or floating for short, if and only if it meets the following conditions • ∃h /∈[i, j], s.t.∀k ∈C, dk = h • ∀k ∈[i, j] and k /∈C, dk ∈[i, j] • ∀k /∈[i, j], dk /∈[i, j] We say the category of di..j is (C, −, −) if j < h, or (−, −, C) otherwise. A category is composed of the three fields (A, h, B), where h is used to represent the head, and A and B are designed to model left and right dependents of the head respectively. A dependency structure is well-formed if and only if it is either fixed or floating. Examples We can represent dependency structures with graphs. Figure 2 shows examples of fixed structures, Figure 3 shows examples of floating structures, and Figure 4 shows ill-formed dependency structures. It is easy to verify that the structures in Figures 2 and 3 are well-formed. 4(a) is ill-formed because 579 interesting will find find boy (a) (b) Figure 4: Ill-formed dependency structures boy does not have its child word the in the tree. 4(b) is ill-formed because it is not a continuous segment. As for the example the red mentioned above, it is a well-formed floating dependency structure. 2.2 Operations on Well-Formed Dependency Structures and Categories One of the purposes of introducing floating dependency structures is that siblings having a common parent will become a well-defined entity, although they are not considered a constituent. We always build well-formed partial structures on the target side in decoding. Furthermore, we combine partial dependency structures in a way such that we can obtain all possible well-formed but no ill-formed dependency structures during bottom-up decoding. The solution is to employ categories introduced above. Each well-formed dependency structure has a category. We can apply four combinatory operations over the categories. If we can combine two categories with a certain category operation, we can use a corresponding tree operation to combine two dependency structures. The category of the combined dependency structure is the result of the combinatory category operations. We first introduce three meta category operations. Two of them are unary operations, left raising (LR) and right raising (RR), and one is the binary operation unification (UF). First, the raising operations are used to turn a completed fixed structure into a floating structure. It is easy to verify the following theorem according to the definitions. Theorem 1 A fixed structure with category (−, h, −) for span [i, j] is also a floating structure with children {h} if there are no outside words depending on word h. ∀k /∈[i, j], dk ̸= h. (1) Therefore we can always raise a fixed structure if we assume it is complete, i.e. (1) holds. it will find boy the interesting LA LA LA RA RA LC RC Figure 5: A dependency tree with flexible combination Definition 3 Meta Category Operations • LR((−, h, −)) = ({h}, −, −) • RR((−, h, −)) = (−, −, {h}) • UF((A1, h1, B1), (A2, h2, B2)) = NORM((A1 ⊔ A2, h1 ⊔h2, B1 ⊔B2)) Unification is well-defined if and only if we can unify all three elements and the result is a valid fixed or floating category. For example, we can unify a fixed structure with a floating structure or two floating structures in the same direction, but we cannot unify two fixed structures. 
h1 ⊔h2 =    h1 if h2 = − h2 if h1 = − undefined otherwise A1 ⊔A2 =    A1 if A2 = − A2 if A1 = − A1 ∪A2 otherwise NORM((A, h, B)) =        (−, h, −) if h ̸= − (A, −, −) if h = −, B = − (−, −, B) if h = −, A = − undefined otherwise Next we introduce the four tree operations on dependency structures. Instead of providing the formal definition, we use figures to illustrate these operations to make it easy to understand. Figure 1 shows a traditional dependency tree. Figure 5 shows the four operations to combine partial dependency structures, which are left adjoining (LA), right adjoining (RA), left concatenation (LC) and right concatenation (RC). Child and parent subtrees can be combined with adjoining which is similar to the traditional dependency formalism. We can either adjoin a fixed structure or a floating structure to the head of a fixed structure. Complete siblings can be combined via concatenation. We can concatenate two fixed structures, one fixed structure with one floating structure, or two floating structures in the same direction. The flexibility of the order of operation allows us to take ad580 will find boy the LA LA LA will find boy the LA LA LC 2 3 2 1 1 3 (b) (a) Figure 6: Operations over well-formed structures vantage of various translation fragments encoded in transfer rules. Figure 6 shows alternative ways of applying operations on well-formed structures to build larger structures in a bottom-up style. Numbers represent the order of operation. We use the same names for the operations on categories for the sake of convenience. We can easily use the meta category operations to define the four combinatory operations. The definition of the operations in the left direction is as follows. Those in the right direction are similar. Definition 4 Combinatory category operations LA((A1, −, −), (−, h2, −)) = UF((A1, −, −), (−, h2, −)) LA((−, h1, −), (−, h2, −)) = UF(LR((−, h1, −)), (−, h2, −)) LC((A1, −, −), (A2, −, −)) = UF((A1, −, −), (A2, −, −)) LC((A1, −, −), (−, h2, −)) = UF((A1, −, −), LR((−, h2, −))) LC((−, h1, −), (A2, −, −)) = UF(LR((−, h1, −)), (A2, −, −)) LC((−, h1, −), (−, h2, −)) = UF(LR((−, h1, −)), LR((−, h2, −))) It is easy to verify the soundness and completeness of category operations based on one-to-one mapping of the conditions in the definitions of corresponding operations on dependency structures and on categories. Theorem 2 (soundness and completeness) Suppose X and Y are well-formed dependency structures. OP(cat(X), cat(Y )) is well-defined for a given operation OP if and only if OP(X, Y ) is well-defined. Furthermore, cat(OP(X, Y )) = OP(cat(X), cat(Y )) Suppose we have a dependency tree for a red apple, where both a and red depend on apple. There are two ways to compute the category of this string from the bottom up. cat(Da red apple) = LA(cat(Da), LA(cat(Dred), cat(Dapple))) = LA(LC(cat(Da), cat(Dred)), cat(Dapple)) Based on Theorem 2, it follows that combinatory operation of categories has the confluence property, since the result dependency structure is determined. Corollary 1 (confluence) The category of a wellformed dependency tree does not depend on the order of category calculation. With categories, we can easily track the types of dependency structures and constrain operations in decoding. For example, we have a rule with dependency structure find ←X, where X right adjoins to find. 
Suppose we have two floating structures2, cat(X1) = ({he, will}, −, −) cat(X2) = (−, −, {it, interesting}) We can replace X by X2, but not by X1 based on the definition of category operations. 2.3 Rule Extraction Now we explain how we get the string-todependency rules from training data. The procedure is similar to (Chiang, 2007) except that we maintain tree structures on the target side, instead of strings. Given sentence-aligned bi-lingual training data, we first use GIZA++ (Och and Ney, 2003) to generate word level alignment. We use a statistical CFG parser to parse the English side of the training data, and extract dependency trees with Magerman’s rules (1995). Then we use heuristic rules to extract transfer rules recursively based on the GIZA alignment and the target dependency trees. The rule extraction procedure is as follows. 1. Initialization: All the 4-tuples (P i,j f , P m,n e , D, A) are valid phrase alignments, where source phrase P i,j f is 2Here we use words instead of word indexes in categories to make the example easy to understand. 581 it find interesting (D1) (D2) it X find interesting (D’) Figure 7: Replacing it with X in D1 aligned to target phrase P m,n e under alignment3 A, and D, the dependency structure for P m,n e , is well-formed. All valid phrase templates are valid rules templates. 2. Inference: Let (P i,j f , P m,n e , D1, A) be a valid rule template, and (P p,q f , P s,t e , D2, A) a valid phrase alignment, where [p, q] ⊂[i, j], [s, t] ⊂[m, n], D2 is a sub-structure of D1, and at least one word in P i,j f but not in P p,q f is aligned. We create a new valid rule template (P ′ f, P ′ e, D′, A), where we obtain P ′ f by replacing P p,q f with label X in P i,j f , and obtain P ′ e by replacing P s,t e with X in P m,n e . Furthermore, We obtain D′ by replacing sub-structure D2 with X in D14. An example is shown in Figure 7. Among all valid rule templates, we collect those that contain at most two NTs and at most seven elements in the source as transfer rules in our system. 2.4 Decoding Following previous work on hierarchical MT (Chiang, 2005; Galley et al., 2006), we solve decoding as chart parsing. We view target dependency as the hidden structure of source fragments. The parser scans all source cells in a bottom-up style, and checks matched transfer rules according to the source side. Once there is a completed rule, we build a larger dependency structure by substituting component dependency structures for corresponding NTs in the target dependency structure of rules. Hypothesis dependency structures are organized in a shared forest, or AND-OR structures. An AND3By P i,j f aligned to P m,n e , we mean all words in P i,j f are either aligned to words in P m,n e or unaligned, and vice versa. Furthermore, at least one word in P i,j f is aligned to a word in P m,n e . 4If D2 is a floating structure, we need to merge several dependency links into one. structure represents an application of a rule over component OR-structures, and an OR-structure represents a set of alternative AND-structures with the same state. A state means a n-tuple that characterizes the information that will be inquired by up-level AND-structures. Supposing we use a traditional tri-gram language model in decoding, we need to specify the leftmost two words and the rightmost two words in a state. Since we only have a single NT X in the formalism described above, we do not need to add the NT label in states. 
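Since the decoder tracks categories rather than whole dependency structures, the category bookkeeping of Definitions 3 and 4 is cheap to carry in states. The following is a minimal sketch, representing a category as a triple (A, h, B) with None for an undefined field; it reproduces the a red apple example and its confluence. The function names mirror the operation names above, but the code is our illustration, not the system's implementation.

```python
def join_head(h1, h2):
    if h1 is None or h2 is None:
        return h1 if h2 is None else h2
    raise ValueError("cannot unify two fixed structures")

def join_side(a1, a2):
    if a1 is None or a2 is None:
        return a1 if a2 is None else a2
    return a1 | a2                                # union of the two child sets

def norm(A, h, B):
    if h is not None:
        return (None, h, None)                    # fixed category
    if A is not None and B is None:
        return (A, None, None)                    # floating left
    if B is not None and A is None:
        return (None, None, B)                    # floating right
    raise ValueError("not a valid fixed or floating category")

def UF(c1, c2):
    return norm(join_side(c1[0], c2[0]), join_head(c1[1], c2[1]), join_side(c1[2], c2[2]))

def LR(c): return ({c[1]}, None, None)            # raise a complete fixed structure leftwards
def RR(c): return (None, None, {c[1]})

def LA(c1, c2):                                   # left-adjoin c1 to the head of fixed c2
    return UF(c1 if c1[1] is None else LR(c1), c2)

def LC(c1, c2):                                   # left-concatenate two complete structures
    return UF(c1 if c1[1] is None else LR(c1), c2 if c2[1] is None else LR(c2))

# The "a red apple" example: both orders of combination yield the same category.
a, red, apple = (None, "a", None), (None, "red", None), (None, "apple", None)
print(LA(a, LA(red, apple)))    # (None, 'apple', None)
print(LA(LC(a, red), apple))    # (None, 'apple', None), as Corollary 1 predicts
```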
However, we need to specify one of the three types of the dependency structure: fixed, floating on the left side, or floating on the right side. This information is encoded in the category of the dependency structure. In the next section, we will explain how to extend categories and states to exploit a dependency language model during decoding. 3 Dependency Language Model For the dependency tree in Figure 1, we calculate the probability of the tree as follows Prob = PT (find) ×PL(will|find-as-head) ×PL(boy|will, find-as-head) ×PL(the|boy-as-head) ×PR(it|find-as-head) ×PR(interesting|it, find-as-head) Here PT (x) is the probability that word x is the root of a dependency tree. PL and PR are left and right side generative probabilities respectively. Let wh be the head, and wL1wL2...wLn be the children on the left side from the nearest to the farthest. Suppose we use a tri-gram dependency LM, PL(wL1wL2...wLn|wh-as-head) = PL(wL1|wh-as-head) ×PL(wL2|wL1, wh-as-head) ×... × PL(wLn|wLn−1, wLn−2) (2) wh-as-head represents wh used as the head, and it is different from wh in the dependency language model. The right side probability is similar. In order to calculate the dependency language model score, or depLM score for short, on the fly for 582 partial hypotheses in a bottom-up decoding, we need to save more information in categories and states. We use a 5-tuple (LF, LN, h, RN, RF) to represent the category of a dependency structure. h represents the head. LF and RF represent the farthest two children on the left and right sides respectively. Similarly, LN and RN represent the nearest two children on the left and right sides respectively. The three types of categories are as follows. • fixed: (LF, −, h, −, RF) • floating left: (LF, LN, −, −, −) • floating right: (−, −, −, RN, RF) Similar operations as described in Section 2.2 are used to keep track of the head and boundary child nodes which are then used to compute depLM scores in decoding. Due to the limit of space, we skip the details here. 4 Implementation Details Features 1. Probability of the source side given the target side of a rule 2. Probability of the target side given the source side of a rule 3. Word alignment probability 4. Number of target words 5. Number of concatenation rules used 6. Language model score 7. Dependency language model score 8. Discount on ill-formed dependency structures We have eight features in our system. The values of the first four features are accumulated on the rules used in a translation. Following (Chiang, 2005), we also use concatenation rules like X →XX for backup. The 5th feature counts the number of concatenation rules used in a translation. In our system, we allow substitutions of dependency structures with unmatched categories, but there is a discount for such substitutions. Weight Optimization We tune the weights with several rounds of decoding-optimization. Following (Och, 2003), the k-best results are accumulated as the input of the optimizer. Powell’s method is used for optimization with 20 random starting points around the weight vector of the last iteration. Rescoring We rescore 1000-best translations (Huang and Chiang, 2005) by replacing the 3-gram LM score with the 5-gram LM score computed offline. 5 Experiments We carried out experiments on three models. • baseline: replication of the Hiero system. • filtered: a string-to-string MT system as in baseline. However, we only keep the transfer rules whose target side can be generated by a well-formed dependency structure. 
• str-dep: a string-to-dependency system with a dependency LM. We take the replicated Hiero system as our baseline because it is the closest to our string-todependency model. They have similar rule extraction and decoding algorithms. Both systems use only one non-terminal label in rules. The major difference is in the representation of target structures. We use dependency structures instead of strings; thus, the comparison will show the contribution of using dependency information in decoding. All models are tuned on BLEU (Papineni et al., 2001), and evaluated on both BLEU and Translation Error Rate (TER) (Snover et al., 2006) so that we could detect over-tuning on one metric. We used part of the NIST 2006 ChineseEnglish large track data as well as some LDC corpora collected for the DARPA GALE program (LDC2005E83, LDC2006E34 and LDC2006G05) as our bilingual training data. It contains about 178M/191M words in source/target. Hierarchical rules were extracted from a subset which has about 35M/41M words5, and the rest of the training data were used to extract phrasal rules as in (Och, 2003; Chiang, 2005). The English side of this subset was also used to train a 3-gram dependency LM. Traditional 3-gram and 5-gram LMs were trained on a corpus of 6G words composed of the LDC Gigaword corpus and text downloaded from Web (Bulyko et al., 2007). We tuned the weights on NIST MT05 and tested on MT04. 5It includes eight corpora: LDC2002E18, LDC2003E07, LDC2004T08 HK News, LDC2005E83, LDC2005T06, LDC2005T10, LDC2006E34, and LDC2006G05 583 Model #Rules baseline 140M filtered 26M str-dep 27M Table 1: Number of transfer rules Model BLEU% TER% lower mixed lower mixed Decoding (3-gram LM) baseline 38.18 35.77 58.91 56.60 filtered 37.92 35.48 57.80 55.43 str-dep 39.52 37.25 56.27 54.07 Rescoring (5-gram LM) baseline 40.53 38.26 56.35 54.15 filtered 40.49 38.26 55.57 53.47 str-dep 41.60 39.47 55.06 52.96 Table 2: BLEU and TER scores on the test set. Table 1 shows the number of transfer rules extracted from the training data for the tuning and test sets. The constraint of well-formed dependency structures greatly reduced the size of the rule set. Although the rule size increased a little bit after incorporating dependency structures in rules, the size of string-to-dependency rule set is less than 20% of the baseline rule set size. Table 2 shows the BLEU and TER scores on MT04. On decoding output, the string-todependency system achieved 1.48 point improvement in BLEU and 2.53 point improvement in TER compared to the baseline hierarchical stringto-string system. After 5-gram rescoring, it achieved 1.21 point improvement in BLEU and 1.19 improvement in TER. The filtered model does not show improvement on BLEU. The filtered string-to-string rules can be viewed the string projection of stringto-dependency rules. It means that just using dependency structure does not provide an improvement on performance. However, dependency structures allow the use of a dependency LM which gives rise to significant improvement. 6 Discussion The well-formed dependency structures defined here are similar to the data structures in previous work on mono-lingual parsing (Eisner and Satta, 1999; McDonald et al., 2005). However, here we have fixed structures growing on both sides to exploit various translation fragments learned in the training data, while the operations in mono-lingual parsing were designed to avoid artificial ambiguity of derivation. Charniak et al. 
(2003) described a two-step stringto-CFG-tree translation model which employed a syntax-based language model to select the best translation from a target parse forest built in the first step. Only translation probability P(F|E) was employed in the construction of the target forest due to the complexity of the syntax-based LM. Since our dependency LM models structures over target words directly based on dependency trees, we can build a single-step system. This dependency LM can also be used in hierarchical MT systems using lexicalized CFG trees. The use of a dependency LM in MT is similar to the use of a structured LM in ASR (Xu et al., 2002), which was also designed to exploit long-distance relations. The depLM is used in a bottom-up style, while SLM is employed in a left-to-right style. 7 Conclusions and Future Work In this paper, we propose a novel string-todependency algorithm for statistical machine translation. For comparison purposes, we replicated the Hiero system as described in (Chiang, 2005). Our string-to-dependency system generates 80% fewer rules, and achieves 1.48 point improvement in BLEU and 2.53 point improvement in TER on the decoding output on the NIST 04 Chinese-English evaluation set. Dependency structures provide a desirable platform to employ linguistic knowledge in MT. In the future, we will continue our research in this direction to carry out translation with deeper features, for example, propositional structures (Palmer et al., 2005). We believe that the fixed and floating structures proposed in this paper can be extended to model predicates and arguments. Acknowledgments This work was supported by DARPA/IPTO Contract No. HR0011-06-C-0022 under the GALE program. We are grateful to Roger Bock, Ivan Bulyko, Mike Kayser, John Makhoul, Spyros Matsoukas, AnttiVeikko Rosti, Rich Schwartz and Bing Zhang for their help in running the experiments and constructive comments to improve this paper. 584 References I. Bulyko, S. Matsoukas, R. Schwartz, L. Nguyen, and J. Makhoul. 2007. Language model adaptation in machine translation from speech. In Proceedings of the 32nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). E. Charniak, K. Knight, and K. Yamada. 2003. Syntaxbased language models for statistical machine translation. In Proceedings of MT Summit IX. D. Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL). D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2). S. DeNeefe, K. Knight, W. Wang, and D. Marcu. 2007. What can syntax-based mt learn from phrase-based mt? In Proceedings of the 2007 Conference of Empirical Methods in Natural Language Processing. Y. Ding and M. Palmer. 2005. Machine translation using probabilistic synchronous dependency insertion grammars. In Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL), pages 541–548, Ann Arbor, Michigan, June. J. Eisner and G. Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL). H. Fox. 2002. Phrasal cohesion and statistical machine translation. In Proceedings of the 2002 Conference of Empirical Methods in Natural Language Processing. M. Galley, M. Hopkins, K. Knight, and D. Marcu. 2004. What’s in a translation rule? 
In Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. M. Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefea, W. Wang, and I. Thayer. 2006. Scalable inference and training of context-rich syntactic models. In COLINGACL ’06: Proceedings of 44th Annual Meeting of the Association for Computational Linguistics and 21st Int. Conf. on Computational Linguistics. J. Graehl and K. Knight. 2004. Training tree transducers. In Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. L. Huang and D. Chiang. 2005. Better k-best parsing. In Proceedings of the 9th International Workshop on Parsing Technologies. D. Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. D. Marcu, W. Wang, A. Echihabi, and K. Knight. 2006. SPMT: Statistical machine translation with syntactified target language phraases. In Proceedings of the 2006 Conference of Empirical Methods in Natural Language Processing. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL). F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1). F. J. Och. 2003. Minimum error rate training for statistical machine translation. In Erhard W. Hinrichs and Dan Roth, editors, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 160–167, Sapporo, Japan, July. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1). K. Papineni, S. Roukos, and T. Ward. 2001. Bleu: a method for automatic evaluation of machine translation. IBM Research Report, RC22176. C. Quirk, A. Menezes, and C. Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In Proceedings of the 43th Annual Meeting of the Association for Computational Linguistics (ACL), pages 271–279, Ann Arbor, Michigan, June. S. Shieber and Y. Schabes. 1990. Synchronous tree adjoining grammars. In Proceedings of COLING ’90: The 13th Int. Conf. on Computational Linguistics. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of Association for Machine Translation in the Americas. W. Wang, K. Knight, and D. Marcu. 2007. Binarizing syntax trees to improve syntax-based machine translation accuracy. In Proceedings of the 2007 Conference of Empirical Methods in Natural Language Processing. P. Xu, C. Chelba, and F. Jelinek. 2002. A study on richer syntactic dependencies for structured language modeling. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL). 585
2008
66
Proceedings of ACL-08: HLT, pages 586–594, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Forest Reranking: Discriminative Parsing with Non-Local Features∗ Liang Huang University of Pennsylvania Philadelphia, PA 19104 [email protected] Abstract Conventional n-best reranking techniques often suffer from the limited scope of the nbest list, which rules out many potentially good alternatives. We instead propose forest reranking, a method that reranks a packed forest of exponentially many parses. Since exact inference is intractable with non-local features, we present an approximate algorithm inspired by forest rescoring that makes discriminative training practical over the whole Treebank. Our final result, an F-score of 91.7, outperforms both 50-best and 100-best reranking baselines, and is better than any previously reported systems trained on the Treebank. 1 Introduction Discriminative reranking has become a popular technique for many NLP problems, in particular, parsing (Collins, 2000) and machine translation (Shen et al., 2005). Typically, this method first generates a list of top-n candidates from a baseline system, and then reranks this n-best list with arbitrary features that are not computable or intractable to compute within the baseline system. But despite its apparent success, there remains a major drawback: this method suffers from the limited scope of the nbest list, which rules out many potentially good alternatives. For example 41% of the correct parses were not in the candidates of ∼30-best parses in (Collins, 2000). This situation becomes worse with longer sentences because the number of possible interpretations usually grows exponentially with the ∗Part of this work was done while I was visiting Institute of Computing Technology, Beijing, and I thank Prof. Qun Liu and his lab for hosting me. I am also grateful to Dan Gildea and Mark Johnson for inspirations, Eugene Charniak for help with his parser, and Wenbin Jiang for guidance on perceptron averaging. This project was supported by NSF ITR EIA-0205456. local non-local conventional reranking only at the root DP-based discrim. parsing exact N/A this work: forest-reranking exact on-the-fly Table 1: Comparison of various approaches for incorporating local and non-local features. sentence length. As a result, we often see very few variations among the n-best trees, for example, 50best trees typically just represent a combination of 5 to 6 binary ambiguities (since 25 < 50 < 26). Alternatively, discriminative parsing is tractable with exact and efficient search based on dynamic programming (DP) if all features are restricted to be local, that is, only looking at a local window within the factored search space (Taskar et al., 2004; McDonald et al., 2005). However, we miss the benefits of non-local features that are not representable here. Ideally, we would wish to combine the merits of both approaches, where an efficient inference algorithm could integrate both local and non-local features. Unfortunately, exact search is intractable (at least in theory) for features with unbounded scope. So we propose forest reranking, a technique inspired by forest rescoring (Huang and Chiang, 2007) that approximately reranks the packed forest of exponentially many parses. The key idea is to compute non-local features incrementally from bottom up, so that we can rerank the n-best subtrees at all internal nodes, instead of only at the root node as in conventional reranking (see Table 1). 
This method can thus be viewed as a step towards the integration of discriminative reranking with traditional chart parsing. Although previous work on discriminative parsing has mainly focused on short sentences (≤15 words) (Taskar et al., 2004; Turian and Melamed, 2007), our work scales to the whole Treebank, where 586 VP1,6 VBD1,2 blah NP2,6 NP2,3 blah PP3,6 b e2 e1 Figure 1: A partial forest of the example sentence. we achieved an F-score of 91.7, which is a 19% error reduction from the 1-best baseline, and outperforms both 50-best and 100-best reranking. This result is also better than any previously reported systems trained on the Treebank. 2 Packed Forests as Hypergraphs Informally, a packed parse forest, or forest in short, is a compact representation of all the derivations (i.e., parse trees) for a given sentence under a context-free grammar (Billot and Lang, 1989). For example, consider the following sentence 0 I 1 saw 2 him 3 with 4 a 5 mirror 6 where the numbers between words denote string positions. Shown in Figure 1, this sentence has (at least) two derivations depending on the attachment of the prep. phrase PP3,6 “with a mirror”: it can either be attached to the verb “saw”, VBD1,2 NP2,3 PP3,6 VP1,6 , (*) or be attached to “him”, which will be further combined with the verb to form the same VP as above. These two derivations can be represented as a single forest by sharing common sub-derivations. Such a forest has a structure of a hypergraph (Klein and Manning, 2001; Huang and Chiang, 2005), where items like PP3,6 are called nodes, and deductive steps like (*) correspond to hyperedges. More formally, a forest is a pair ⟨V, E⟩, where V is the set of nodes, and E the set of hyperedges. For a given sentence w1:l = w1 . . . wl, each node v ∈V is in the form of Xi,j, which denotes the recognition of nonterminal X spanning the substring from positions i through j (that is, wi+1 . . . wj). Each hyperedge e ∈E is a pair ⟨tails(e), head(e)⟩, where head(e) ∈V is the consequent node in the deductive step, and tails(e) ∈V ∗is the list of antecedent nodes. For example, the hyperedge for deduction (*) is notated: e1 = ⟨(VBD1,2, NP2,3, PP3,6), VP1,6⟩ We also denote IN (v) to be the set of incoming hyperedges of node v, which represent the different ways of deriving v. For example, in the forest in Figure 1, IN (VP1,6) is {e1, e2}, with e2 = ⟨(VBD1,2, NP2,6), VP1,6⟩. We call |e| the arity of hyperedge e, which counts the number of tail nodes in e. The arity of a hypergraph is the maximum arity over all hyperedges. A CKY forest has an arity of 2, since the input grammar is required to be binary branching (cf. Chomsky Normal Form) to ensure cubic time parsing complexity. However, in this work, we use forests from a Treebank parser (Charniak, 2000) whose grammar is often flat in many productions. For example, the arity of the forest in Figure 1 is 3. Such a Treebank-style forest is easier to work with for reranking, since many features can be directly expressed in it. There is also a distinguished root node TOP in each forest, denoting the goal item in parsing, which is simply S0,l where S is the start symbol and l is the sentence length. 3 Forest Reranking 3.1 Generic Reranking with the Perceptron We first establish a unified framework for parse reranking with both n-best lists and packed forests. 
For a given sentence s, a generic reranker selects the best parse ˆy among the set of candidates cand(s) according to some scoring function: ˆy = argmax y∈cand(s) score(y) (1) In n-best reranking, cand(s) is simply a set of n-best parses from the baseline parser, that is, cand(s) = {y1, y2, . . . , yn}. Whereas in forest reranking, cand(s) is a forest implicitly representing the set of exponentially many parses. As usual, we define the score of a parse y to be the dot product between a high dimensional feature representation and a weight vector w: score(y) = w · f(y) (2) 587 where the feature extractor f is a vector of d functions f = (f1, . . . , fd), and each feature fj maps a parse y to a real number fj(y). Following (Charniak and Johnson, 2005), the first feature f1(y) = log Pr(y) is the log probability of a parse from the baseline generative parser, while the remaining features are all integer valued, and each of them counts the number of times that a particular configuration occurs in parse y. For example, one such feature f2000 might be a question “how many times is a VP of length 5 surrounded by the word ‘has’ and the period? ” which is an instance of the WordEdges feature (see Figure 2(c) and Section 3.2 for details). Using a machine learning algorithm, the weight vector w can be estimated from the training data where each sentence si is labelled with its correct (“gold-standard”) parse y∗ i . As for the learner, Collins (2000) uses the boosting algorithm and Charniak and Johnson (2005) use the maximum entropy estimator. In this work we use the averaged perceptron algorithm (Collins, 2002) since it is an online algorithm much simpler and orders of magnitude faster than Boosting and MaxEnt methods. Shown in Pseudocode 1, the perceptron algorithm makes several passes over the whole training data, and in each iteration, for each sentence si, it tries to predict a best parse ˆyi among the candidates cand(si) using the current weight setting. Intuitively, we want the gold parse y∗ i to be picked, but in general it is not guaranteed to be within cand(si), because the grammar may fail to cover the gold parse, and because the gold parse may be pruned away due to the limited scope of cand(si). So we define an oracle parse y+ i to be the candidate that has the highest Parseval F-score with respect to the gold tree y∗ i :1 y+ i ≜ argmax y∈cand(si) F(y, y∗ i ) (3) where function F returns the F-score. Now we train the reranker to pick the oracle parses as often as possible, and in case an error is made (line 6), perform an update on the weight vector (line 7), by adding the difference between two feature representations. 1If one uses the gold y∗ i for oracle y+ i , the perceptron will continue to make updates towards something unreachable even when the decoder has picked the best possible candidate. Pseudocode 1 Perceptron for Generic Reranking 1: Input: Training examples {cand(si), y+ i }N i=1 ⊲y+ i is the oracle tree for si among cand(si) 2: w ←0 ⊲initial weights 3: for t ←1 . . . T do ⊲T iterations 4: for i ←1 . . . N do 5: ˆy = argmaxy∈cand(si) w · f(y) 6: if ˆy ̸= y+ i then 7: w ←w + f(y+ i ) −f(ˆy) 8: return w In n-best reranking, since all parses are explicitly enumerated, it is trivial to compute the oracle tree.2 However, it remains widely open how to identify the forest oracle. We will present a dynamic programming algorithm for this problem in Sec. 4.1. 
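As a concrete illustration of Pseudocode 1, the following is a minimal Python sketch of the plain perceptron reranking loop, assuming each candidate parse comes with a pre-computed feature vector and the index of its oracle parse is known; the dense numpy vectors here merely stand in for the sparse feature representation actually used, and the averaging refinement is described next.

import numpy as np

def perceptron_rerank(examples, dim, T=5):
    # examples: list of (feats, oracle_idx) pairs, where feats is a list of
    # feature vectors f(y), one per candidate parse in cand(s_i), and
    # oracle_idx points to the oracle parse y+_i among those candidates
    w = np.zeros(dim)
    for _ in range(T):                       # T passes over the training data
        for feats, oracle_idx in examples:
            scores = [float(np.dot(w, fy)) for fy in feats]
            pred = int(np.argmax(scores))    # best candidate under current w
            if pred != oracle_idx:           # mistake: update the weights
                w += feats[oracle_idx] - feats[pred]
    return w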
We also use a refinement called “averaged parameters” where the final weight vector is the average of weight vectors after each sentence in each iteration over the training data. This averaging effect has been shown to reduce overfitting and produce much more stable results (Collins, 2002). 3.2 Factorizing Local and Non-Local Features A key difference between n-best and forest reranking is the handling of features. In n-best reranking, all features are treated equivalently by the decoder, which simply computes the value of each one on each candidate parse. However, for forest reranking, since the trees are not explicitly enumerated, many features can not be directly computed. So we first classify features into local and non-local, which the decoder will process in very different fashions. We define a feature f to be local if and only if it can be factored among the local productions in a tree, and non-local if otherwise. For example, the Rule feature in Fig. 2(a) is local, while the ParentRule feature in Fig. 2(b) is non-local. It is worth noting that some features which seem complicated at the first sight are indeed local. For example, the WordEdges feature in Fig. 2(c), which classifies a node by its label, span length, and surrounding words, is still local since all these information are encoded either in the node itself or in the input sentence. In contrast, it would become non-local if we replace the surrounding words by surrounding POS 2In case multiple candidates get the same highest F-score, we choose the parse with the highest log probability from the baseline parser to be the oracle parse (Collins, 2000). 588 VP VBD NP PP S VP VBD NP PP VP VBZ has NP |←5 words →| . . VP VBD saw NP DT the ... (a) Rule (local) (b) ParentRule (non-local) (c) WordEdges (local) (d) NGramTree (non-local) ⟨VP →VBD NP PP ⟩ ⟨VP →VBD NP PP | S ⟩ ⟨NP 5 has . ⟩ ⟨VP (VBD saw) (NP (DT the)) ⟩ Figure 2: Illustration of some example features. Shaded nodes denote information included in the feature. tags, which are generated dynamically. More formally, we split the feature extractor f = (f1, . . . , fd) into f = (fL; fN) where fL and fN are the local and non-local features, respectively. For the former, we extend their domains from parses to hyperedges, where f(e) returns the value of a local feature f ∈fL on hyperedge e, and its value on a parsey factors across the hyperedges (local productions), fL(y) = X e∈y fL(e) (4) and we can pre-compute fL(e) for each e in a forest. Non-local features, however, can not be precomputed, but we still prefer to compute them as early as possible, which we call “on-the-fly” computation, so that our decoder can be sensitive to them at internal nodes. For instance, the NGramTree feature in Fig. 2 (d) returns the minimum tree fragement spanning a bigram, in this case “saw” and “the”, and should thus be computed at the smallest common ancestor of the two, which is the VP node in this example. Similarly, the ParentRule feature in Fig. 2 (b) can be computed when the S subtree is formed. In doing so, we essentially factor non-local features across subtrees, where for each subtree y′ in a parse y, we define a unit feature ˚ f(y′) to be the part of f(y) that are computable within y′, but not computable in any (proper) subtree of y′. Then we have: fN(y) = X y′∈y ˚fN(y′) (5) Intuitively, we compute the unit non-local features at each subtree from bottom-up. For example, for the binary-branching node Ai,k in Fig. 3, the Ai,k Bi,j wi . . . wj−1 Cj,k wj . . . 
wk−1 Figure 3: Example of the unit NGramTree feature at node Ai,k: ⟨A (B . . . wj−1) (C . . . wj) ⟩. unit NGramTree instance is for the pair ⟨wj−1, wj⟩ on the boundary between the two subtrees, whose smallest common ancestor is the current node. Other unit NGramTree instances within this span have already been computed in the subtrees, except those for the boundary words of the whole node, wi and wk−1, which will be computed when this node is further combined with other nodes in the future. 3.3 Approximate Decoding via Cube Pruning Before moving on to approximate decoding with non-local features, we first describe the algorithm for exact decoding when only local features are present, where many concepts and notations will be re-used later. We will use D(v) to denote the top derivations of node v, where D1(v) is its 1-best derivation. We also use the notation ⟨e, j⟩to denote the derivation along hyperedge e, using the jith subderivation for tail ui, so ⟨e, 1⟩is the best derivation along e. The exact decoding algorithm, shown in Pseudocode 2, is an instance of the bottom-up Viterbi algorithm, which traverses the hypergraph in a topological order, and at each node v, calculates its 1-best derivation using each incoming hyperedge e ∈IN (v). The cost of e, c(e), is the score of its 589 Pseudocode 2 Exact Decoding with Local Features 1: function VITERBI(⟨V, E⟩) 2: for v ∈V in topological order do 3: for e ∈IN (v) do 4: c(e) ←w · fL(e) + P ui∈tails(e) c(D1(ui)) 5: if c(e) > c(D1(v)) then ⊲better derivation? 6: D1(v) ←⟨e, 1⟩ 7: c(D1(v)) ←c(e) 8: return D1(TOP) Pseudocode 3 Cube Pruning for Non-local Features 1: function CUBE(⟨V, E⟩) 2: for v ∈V in topological order do 3: KBEST(v) 4: return D1(TOP) 5: procedure KBEST(v) 6: heap ←∅; buf ←∅ 7: for e ∈IN (v) do 8: c(⟨e, 1⟩) ←EVAL(e, 1) ⊲extract unit features 9: append ⟨e, 1⟩to heap 10: HEAPIFY(heap) ⊲prioritized frontier 11: while |heap| > 0 and |buf | < k do 12: item ←POP-MAX(heap) ⊲extract next-best 13: append item to buf 14: PUSHSUCC(item, heap) 15: sort buf to D(v) 16: procedure PUSHSUCC(⟨e, j⟩, heap) 17: e is v →u1 . . . u|e| 18: for i in 1 . . . |e| do 19: j′ ←j + bi ⊲bi is 1 only on the ith dim. 20: if |D(ui)| ≥j′ i then ⊲enough sub-derivations? 21: c(⟨e, j′⟩) ←EVAL(e, j′) ⊲unit features 22: PUSH(⟨e, j′⟩, heap) 23: function EVAL(e, j) 24: e is v →u1 . . . u|e| 25: return w · fL(e) + w ·˚fN(⟨e, j⟩) + P i c(Dji(ui)) (pre-computed) local features w · fL(e). This algorithm has a time complexity of O(E), and is almost identical to traditional chart parsing, except that the forest might be more than binary-branching. For non-local features, we adapt cube pruning from forest rescoring (Chiang, 2007; Huang and Chiang, 2007), since the situation here is analogous to machine translation decoding with integrated language models: we can view the scores of unit nonlocal features as the language model cost, computed on-the-fly when combining sub-constituents. Shown in Pseudocode 3, cube pruning works bottom-up on the forest, keeping a beam of at most k derivations at each node, and uses the k-best parsing Algorithm 2 of Huang and Chiang (2005) to speed up the computation. When combining the subderivations along a hyperedge e to form a new subtree y′ = ⟨e, j⟩, we also compute its unit non-local feature values˚fN(⟨e, j⟩) (line 25). A priority queue (heap in Pseudocode 3) is used to hold the candidates for the next-best derivation, which is initialized to the set of best derivations along each hyperedge (lines 7 to 9). 
Then at each iteration, we pop the best derivation (lines 12), and push its successors back into the priority queue (line 14). Analogous to the language model cost in forest rescoring, the unit feature cost here is a non-monotonic score in the dynamic programming backbone, and the derivations may thus be extracted out-of-order. So a buffer buf is used to hold extracted derivations, which is sorted at the end (line 15) to form the list of top-k derivations D(v) of node v. The complexity of this algorithm is O(E + V k log kN) (Huang and Chiang, 2005), where O(N) is the time for on-the-fly feature extraction for each subtree, which becomes the bottleneck in practice. 4 Supporting Forest Algorithms 4.1 Forest Oracle Recall that the Parseval F-score is the harmonic mean of labelled precision P and labelled recall R: F(y, y∗) ≜ 2PR P + R = 2|y ∩y∗| |y| + |y∗| (6) where |y| and |y∗| are the numbers of brackets in the test parse and gold parse, respectively, and |y ∩y∗| is the number of matched brackets. Since the harmonic mean is a non-linear combination, we can not optimize the F-scores on sub-forests independently with a greedy algorithm. In other words, the optimal F-score tree in a forest is not guaranteed to be composed of two optimal F-score subtrees. We instead propose a dynamic programming algorithm which optimizes the number of matched brackets for a given number of test brackets. For example, our algorithm will ask questions like, “when a test parse has 5 brackets, what is the maximum number of matched brackets?” More formally, at each node v, we compute an oracle function ora[v] : N 7→N, which maps an integer t to ora[v](t), the max. number of matched brackets 590 Pseudocode 4 Forest Oracle Algorithm 1: function ORACLE(⟨V, E⟩, y∗) 2: for v ∈V in topological order do 3: for e ∈BS(v) do 4: e is v →u1u2 . . . u|e| 5: ora[v] ←ora[v] ⊕(⊗iora[ui]) 6: ora[v] ←ora[v] ⇑(1, 1v∈y∗) 7: return F(y+, y∗) = maxt 2·ora[TOP](t) t+|y∗| ⊲oracle F1 for all parses yv of node v with exactly t brackets: ora[v](t) ≜ max yv:|yv|=t |yv ∩y∗| (7) When node v is combined with another node u along a hyperedge e = ⟨(v, u), w⟩, we need to combine the two oracle functions ora[v] and ora[u] by distributing the test brackets of w between v and u, and optimize the number of matched bracktes. To do this we define a convolution operator ⊗between two functions f and g: (f ⊗g)(t) ≜ max t1+t2=t f(t1) + g(t2) (8) For instance: t f(t) 2 1 3 2 ⊗ t g(t) 4 4 5 4 = t (f ⊗g)(t) 6 5 7 6 8 6 The oracle function for the head node w is then ora[w](t) = (ora[v] ⊗ora[u])(t −1) + 1w∈y∗(9) where 1 is the indicator function, returning 1 if node w is found in the gold tree y∗, in which case we increment the number of matched brackets. We can also express Eq. 9 in a purely functional form ora[w] = (ora[v] ⊗ora[u]) ⇑(1, 1w∈y∗) (10) where ⇑is a translation operator which shifts a function along the axes: (f ⇑(a, b))(t) ≜f(t −a) + b (11) Above we discussed the case of one hyperedge. If there is another hyperedge e′ deriving node w, we also need to combine the resulting oracle functions from both hyperedges, for which we define a pointwise addition operator ⊕: (f ⊕g)(t) ≜max{f(t), g(t)} (12) Shown in Pseudocode 4, we perform these computations in a bottom-up topological order, and finally at the root node TOP, we can compute the best global F-score by maximizing over different numbers of test brackets (line 7). The oracle tree y+ can be recursively restored by keeping backpointers for each ora[v](t), which we omit in the pseudocode. 
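To make the three operators concrete, here is a small Python sketch that represents each oracle function ora[v] as a dict from a bracket count t to the maximum number of matched brackets; it is only an illustration of Equations 8-12, and the backpointers needed to recover the oracle tree y+ are omitted.

def convolve(f, g):
    # (f ⊗ g)(t) = max over t1 + t2 = t of f(t1) + g(t2)
    out = {}
    for t1, m1 in f.items():
        for t2, m2 in g.items():
            out[t1 + t2] = max(out.get(t1 + t2, -1), m1 + m2)
    return out

def translate(f, a, b):
    # (f ⇑ (a, b))(t) = f(t - a) + b
    return {t + a: m + b for t, m in f.items()}

def pointwise(f, g):
    # (f ⊕ g)(t) = max(f(t), g(t)); combines alternative incoming hyperedges
    out = dict(f)
    for t, m in g.items():
        out[t] = max(out.get(t, -1), m)
    return out

# the worked example from the text, for a head node w found in the gold tree
f = {2: 1, 3: 2}
g = {4: 4, 5: 4}
print(convolve(f, g))                    # {6: 5, 7: 6, 8: 6}
print(translate(convolve(f, g), 1, 1))   # {7: 6, 8: 7, 9: 7}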
The time complexity of this algorithm for a sentence of l words is O(|E| · l2(a−1)) where a is the arity of the forest. For a CKY forest, this amounts to O(l3 · l2) = O(l5), but for general forests like those in our experiments the complexities are much higher. In practice it takes on average 0.05 seconds for forests pruned by p = 10 (see Section 4.2), but we can pre-compute and store the oracle for each forest before training starts. 4.2 Forest Pruning Our forest pruning algorithm (Jonathan Graehl, p.c.) is very similar to the method based on marginal probability (Charniak and Johnson, 2005), except that ours prunes hyperedges as well as nodes. Basically, we use an Inside-Outside algorithm to compute the Viterbi inside cost β(v) and the Viterbi outside cost α(v) for each node v, and then compute the merit αβ(e) for each hyperedge: αβ(e) = α(head(e)) + X ui∈tails(e) β(ui) (13) Intuitively, this merit is the cost of the best derivation that traverses e, and the difference δ(e) = αβ(e) −β(TOP) can be seen as the distance away from the globally best derivation. We prune away all hyperedges that have δ(e) > p for a threshold p. Nodes with all incoming hyperedges pruned are also pruned. The key difference from (Charniak and Johnson, 2005) is that in this algorithm, a node can “partially” survive the beam, with a subset of its hyperedges pruned. In practice, this method prunes on average 15% more hyperedges than their method. 5 Experiments We compare the performance of our forest reranker against n-best reranking on the Penn English Treebank (Marcus et al., 1993). The baseline parser is the Charniak parser, which we modified to output a 591 Local instances Non-Local instances Rule 10, 851 ParentRule 18, 019 Word 20, 328 WProj 27, 417 WordEdges 454, 101 Heads 70, 013 CoLenPar 22 HeadTree 67, 836 Bigram⋄ 10, 292 Heavy 1, 401 Trigram⋄ 24, 677 NGramTree 67, 559 HeadMod⋄ 12, 047 RightBranch 2 DistMod⋄ 16, 017 Total Feature Instances: 800, 582 Table 2: Features used in this work. Those with a ⋄ are from (Collins, 2000), and others are from (Charniak and Johnson, 2005), with simplifications. packed forest for each sentence.3 5.1 Data Preparation We use the standard split of the Treebank: sections 02-21 as the training data (39832 sentences), section 22 as the development set (1700 sentences), and section 23 as the test set (2416 sentences). Following (Charniak and Johnson, 2005), the training set is split into 20 folds, each containing about 1992 sentences, and is parsed by the Charniak parser with a model trained on sentences from the remaining 19 folds. The development set and the test set are parsed with a model trained on all 39832 training sentences. We implemented both n-best and forest reranking systems in Python and ran our experiments on a 64bit Dual-Core Intel Xeon with 3.0GHz CPUs. Our feature set is summarized in Table 2, which closely follows Charniak and Johnson (2005), except that we excluded the non-local features Edges, NGram, and CoPar, and simplified Rule and NGramTree features, since they were too complicated to compute.4 We also added four unlexicalized local features from Collins (2000) to cope with data-sparsity. Following Charniak and Johnson (2005), we extracted the features from the 50-best parses on the training set (sec. 02-21), and used a cut-off of 5 to prune away low-count features. 
There are 0.8M features in our final set, considerably fewer than that of Charniak and Johnson which has about 1.3M fea3This is a relatively minor change to the Charniak parser, since it implements Algorithm 3 of Huang and Chiang (2005) for efficient enumeration of n-best parses, which requires storing the forest. The modified parser and related scripts for handling forests (e.g. oracles) will be available on my homepage. 4In fact, our Rule and ParentRule features are two special cases of the original Rule feature in (Charniak and Johnson, 2005). We also restricted NGramTree to be on bigrams only. 89.0 91.0 93.0 95.0 97.0 99.0 0 500 1000 1500 2000 Parseval F-score (%) average # of hyperedges or brackets per sentence p=10 p=20 n=10 n=50 n=100 1-best forest oracle n-best oracle Figure 4: Forests (shown with various pruning thresholds) enjoy higher oracle scores and more compact sizes than n-best lists (on sec 23). tures in the updated version.5 However, our initial experiments show that, even with this much simpler feature set, our 50-best reranker performed equally well as theirs (both with an F-score of 91.4, see Tables 3 and 4). This result confirms that our feature set design is appropriate, and the averaged perceptron learner is a reasonable candidate for reranking. The forests dumped from the Charniak parser are huge in size, so we use the forest pruning algorithm in Section 4.2 to prune them down to a reasonable size. In the following experiments we use a threshold of p = 10, which results in forests with an average number of 123.1 hyperedges per forest. Then for each forest, we annotate its forest oracle, and on each hyperedge, pre-compute its local features.6 Shown in Figure 4, these forests have an forest oracle of 97.8, which is 1.1% higher than the 50-best oracle (96.7), and are 8 times smaller in size. 5.2 Results and Analysis Table 3 compares the performance of forest reranking against standard n-best reranking. For both systems, we first use only the local features, and then all the features. We use the development set to determine the optimal number of iterations for averaged perceptron, and report the F1 score on the test set. With only local features, our forest reranker achieves an F-score of 91.25, and with the addition of non5http://www.cog.brown.edu/∼mj/software.htm. We follow this version as it corrects some bugs from their 2005 paper which leads to a 0.4% increase in performance (see Table 4). 6A subset of local features, e.g. WordEdges, is independent of which hyperedge the node takes in a derivation, and can thus be annotated on nodes rather than hyperedges. We call these features node-local, which also include part of Word features. 592 baseline: 1-best Charniak parser 89.72 n-best reranking features n pre-comp. training F1% local 50 1.7G / 16h 3 × 0.1h 91.28 all 50 2.4G / 19h 4 × 0.3h 91.43 all 100 5.3G / 44h 4 × 0.7h 91.49 forest reranking (p = 10) features k pre-comp. training F1% local 1.2G / 2.9h 3 × 0.8h 91.25 all 15 4 × 6.1h 91.69 Table 3: Forest reranking compared to n-best reranking on sec. 23. The pre-comp. column is for feature extraction, and training column shows the number of perceptron iterations that achieved best results on the dev set, and average time per iteration. local features, the accuracy rises to 91.69 (with beam size k = 15), which is a 0.26% absolute improvement over 50-best reranking.7 This improvement might look relatively small, but it is much harder to make a similar progress with n-best reranking. 
For example, even if we double the size of the n-best list to 100, the performance only goes up by 0.06% (Table 3). In fact, the 100best oracle is only 0.5% higher than the 50-best one (see Fig. 4). In addition, the feature extraction step in 100-best reranking produces huge data files and takes 44 hours in total, though this part can be parallelized.8 On two CPUs, 100-best reranking takes 25 hours, while our forest-reranker can also finish in 26 hours, with a much smaller disk space. Indeed, this demonstrates the severe redundancies as another disadvantage of n-best lists, where many subtrees are repeated across different parses, while the packed forest reduces space dramatically by sharing common sub-derivations (see Fig. 4). To put our results in perspective, we also compare them with other best-performing systems in Table 4. Our final result (91.7) is better than any previously reported system trained on the Treebank, although 7It is surprising that 50-best reranking with local features achieves an even higher F-score of 91.28, and we suspect this is due to the aggressive updates and instability of the perceptron, as we do observe the learning curves to be non-monotonic. We leave the use of more stable learning algorithms to future work. 8The n-best feature extraction already uses relative counts (Johnson, 2006), which reduced file sizes by at least a factor 4. type system F1% D Collins (2000) 89.7 Henderson (2004) 90.1 Charniak and Johnson (2005) 91.0 updated (Johnson, 2006) 91.4 this work 91.7 G Bod (2003) 90.7 Petrov and Klein (2007) 90.1 S McClosky et al. (2006) 92.1 Table 4: Comparison of our final results with other best-performing systems on the whole Section 23. Types D, G, and S denote discriminative, generative, and semi-supervised approaches, respectively. McClosky et al. (2006) achieved an even higher accuarcy (92.1) by leveraging on much larger unlabelled data. Moreover, their technique is orthogonal to ours, and we suspect that replacing their n-best reranker by our forest reranker might get an even better performance. Plus, except for n-best reranking, most discriminative methods require repeated parsing of the training set, which is generally impratical (Petrov and Klein, 2008). Therefore, previous work often resorts to extremely short sentences (≤15 words) or only looked at local features (Taskar et al., 2004; Henderson, 2004; Turian and Melamed, 2007). In comparison, thanks to the efficient decoding, our work not only scaled to the whole Treebank, but also successfully incorporated non-local features, which showed an absolute improvement of 0.44% over that of local features alone. 6 Conclusion We have presented a framework for reranking on packed forests which compactly encodes many more candidates than n-best lists. With efficient approximate decoding, perceptron training on the whole Treebank becomes practical, which can be done in about a day even with a Python implementation. Our final result outperforms both 50-best and 100-best reranking baselines, and is better than any previously reported systems trained on the Treebank. We also devised a dynamic programming algorithm for forest oracles, an interesting problem by itself. We believe this general framework could also be applied to other problems involving forests or lattices, such as sequence labeling and machine translation. 593 References Sylvie Billot and Bernard Lang. 1989. The structure of shared forests in ambiguous parsing. In Proceedings of ACL ’89, pages 143–151. Rens Bod. 2003. 
An efficient implementation of a new DOP model. In Proceedings of EACL. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine-grained n-best parsing and discriminative reranking. In Proceedings of the 43rd ACL. Eugene Charniak. 2000. A maximum-entropyinspired parser. In Proceedings of NAACL. David Chiang. 2007. Hierarchical phrasebased translation. Computational Linguistics, 33(2):201–208. Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of ICML, pages 175–182. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proceedings of ACL. Liang Huang and David Chiang. 2005. Better kbest Parsing. In Proceedings of the Ninth International Workshop on Parsing Technologies (IWPT2005). Liang Huang and David Chiang. 2007. Forest rescoring: Fast decoding with integrated language models. In Proceedings of ACL. Mark Johnson. 2006. Features of statistical parsers. Talk given at the Joint Microsoft Research and Univ. of Washington Computational Linguistics Colloquium. http://www.cog.brown.edu/∼mj/papers/msuw06talk.pdf. Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. In Proceedings of the Seventh International Workshop on Parsing Technologies (IWPT-2001), 17-19 October 2001, Beijing, China. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313–330. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of the HLT-NAACL, New York City, USA, June. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd ACL. Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of HLT-NAACL. Slav Petrov and Dan Klein. 2008. Discriminative log-linear grammars with latent variables. In Proceedings of NIPS 20. Libin Shen, Anoop Sarkar, and Franz Josef Och. 2005. Discriminative reranking for machine translation. In Proceedings of HLT-NAACL. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Chris Manning. 2004. Max-margin parsing. In Proceedings of EMNLP. Joseph Turian and I. Dan Melamed. 2007. Scalable discriminative learning for natural language parsing and translation. In Proceedings of NIPS 19. 594
2008
67
Proceedings of ACL-08: HLT, pages 595–603, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Simple Semi-supervised Dependency Parsing Terry Koo, Xavier Carreras, and Michael Collins MIT CSAIL, Cambridge, MA 02139, USA {maestro,carreras,mcollins}@csail.mit.edu Abstract We present a simple and effective semisupervised method for training dependency parsers. We focus on the problem of lexical representation, introducing features that incorporate word clusters derived from a large unannotated corpus. We demonstrate the effectiveness of the approach in a series of dependency parsing experiments on the Penn Treebank and Prague Dependency Treebank, and we show that the cluster-based features yield substantial gains in performance across a wide range of conditions. For example, in the case of English unlabeled second-order parsing, we improve from a baseline accuracy of 92.02% to 93.16%, and in the case of Czech unlabeled second-order parsing, we improve from a baseline accuracy of 86.13% to 87.13%. In addition, we demonstrate that our method also improves performance when small amounts of training data are available, and can roughly halve the amount of supervised data required to reach a desired level of performance. 1 Introduction In natural language parsing, lexical information is seen as crucial to resolving ambiguous relationships, yet lexicalized statistics are sparse and difficult to estimate directly. It is therefore attractive to consider intermediate entities which exist at a coarser level than the words themselves, yet capture the information necessary to resolve the relevant ambiguities. In this paper, we introduce lexical intermediaries via a simple two-stage semi-supervised approach. First, we use a large unannotated corpus to define word clusters, and then we use that clustering to construct a new cluster-based feature mapping for a discriminative learner. We are thus relying on the ability of discriminative learning methods to identify and exploit informative features while remaining agnostic as to the origin of such features. To demonstrate the effectiveness of our approach, we conduct experiments in dependency parsing, which has been the focus of much recent research—e.g., see work in the CoNLL shared tasks on dependency parsing (Buchholz and Marsi, 2006; Nivre et al., 2007). The idea of combining word clusters with discriminative learning has been previously explored by Miller et al. (2004), in the context of namedentity recognition, and their work directly inspired our research. However, our target task of dependency parsing involves more complex structured relationships than named-entity tagging; moreover, it is not at all clear that word clusters should have any relevance to syntactic structure. Nevertheless, our experiments demonstrate that word clusters can be quite effective in dependency parsing applications. In general, semi-supervised learning can be motivated by two concerns: first, given a fixed amount of supervised data, we might wish to leverage additional unlabeled data to facilitate the utilization of the supervised corpus, increasing the performance of the model in absolute terms. Second, given a fixed target performance level, we might wish to use unlabeled data to reduce the amount of annotated data necessary to reach this target. 
We show that our semi-supervised approach yields improvements for fixed datasets by performing parsing experiments on the Penn Treebank (Marcus et al., 1993) and Prague Dependency Treebank (Hajiˇc, 1998; Hajiˇc et al., 2001) (see Sections 4.1 and 4.3). By conducting experiments on datasets of varying sizes, we demonstrate that for fixed levels of performance, the cluster-based approach can reduce the need for supervised data by roughly half, which is a substantial savings in data-annotation costs (see Sections 4.2 and 4.4). The remainder of this paper is divided as follows: 595 Ms. Haag plays Elianti . * obj p root nmod sbj Figure 1: An example of a labeled dependency tree. The tree contains a special token “*” which is always the root of the tree. Each arc is directed from head to modifier and has a label describing the function of the attachment. Section 2 gives background on dependency parsing and clustering, Section 3 describes the cluster-based features, Section 4 presents our experimental results, Section 5 discusses related work, and Section 6 concludes with ideas for future research. 2 Background 2.1 Dependency parsing Recent work (Buchholz and Marsi, 2006; Nivre et al., 2007) has focused on dependency parsing. Dependency syntax represents syntactic information as a network of head-modifier dependency arcs, typically restricted to be a directed tree (see Figure 1 for an example). Dependency parsing depends critically on predicting head-modifier relationships, which can be difficult due to the statistical sparsity of these word-to-word interactions. Bilexical dependencies are thus ideal candidates for the application of coarse word proxies such as word clusters. In this paper, we take a part-factored structured classification approach to dependency parsing. For a given sentence x, let Y(x) denote the set of possible dependency structures spanning x, where each y ∈ Y(x) decomposes into a set of “parts” r ∈y. In the simplest case, these parts are the dependency arcs themselves, yielding a first-order or “edge-factored” dependency parsing model. In higher-order parsing models, the parts can consist of interactions between more than two words. For example, the parser of McDonald and Pereira (2006) defines parts for sibling interactions, such as the trio “plays”, “Elianti”, and “.” in Figure 1. The Carreras (2007) parser has parts for both sibling interactions and grandparent interactions, such as the trio “*”, “plays”, and “Haag” in Figure 1. These kinds of higher-order factorizations allow dependency parsers to obtain a limited form of context-sensitivity. Given a factorization of dependency structures into parts, we restate dependency parsing as the folapple pear Apple IBM bought run of in 01 100 101 110 111 000 001 010 011 00 0 10 1 11 Figure 2: An example of a Brown word-cluster hierarchy. Each node in the tree is labeled with a bit-string indicating the path from the root node to that node, where 0 indicates a left branch and 1 indicates a right branch. lowing maximization: PARSE(x; w) = argmax y∈Y(x) X r∈y w · f(x, r) Above, we have assumed that each part is scored by a linear model with parameters w and featuremapping f(·). For many different part factorizations and structure domains Y(·), it is possible to solve the above maximization efficiently, and several recent efforts have concentrated on designing new maximization algorithms with increased contextsensitivity (Eisner, 2000; McDonald et al., 2005b; McDonald and Pereira, 2006; Carreras, 2007). 
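A minimal sketch of this part-factored score in Python is given below; the part_feats feature function and the explicit candidate set are placeholders for illustration, since in practice the argmax is computed with the dynamic-programming algorithms cited above rather than by enumeration.

def part_score(w, x, r, part_feats):
    # w: weight vector indexed by feature id; part_feats(x, r) yields the
    # active feature indices of f(x, r) for part r (e.g. a head-modifier arc)
    return sum(w[i] for i in part_feats(x, r))

def tree_score(w, x, y, part_feats):
    # the score of a dependency structure y decomposes over its parts
    return sum(part_score(w, x, r, part_feats) for r in y)

def parse(w, x, candidates, part_feats):
    # brute-force argmax over an explicit candidate set Y(x); real parsers
    # replace this with efficient maximization (Eisner, Carreras, etc.)
    return max(candidates, key=lambda y: tree_score(w, x, y, part_feats))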
2.2 Brown clustering algorithm In order to provide word clusters for our experiments, we used the Brown clustering algorithm (Brown et al., 1992). We chose to work with the Brown algorithm due to its simplicity and prior success in other NLP applications (Miller et al., 2004; Liang, 2005). However, we expect that our approach can function with other clustering algorithms (as in, e.g., Li and McCallum (2005)). We briefly describe the Brown algorithm below. The input to the algorithm is a vocabulary of words to be clustered and a corpus of text containing these words. Initially, each word in the vocabulary is considered to be in its own distinct cluster. The algorithm then repeatedly merges the pair of clusters which causes the smallest decrease in the likelihood of the text corpus, according to a class-based bigram language model defined on the word clusters. By tracing the pairwise merge operations, one obtains a hierarchical clustering of the words, which can be represented as a binary tree as in Figure 2. Within this tree, each word is uniquely identified by its path from the root, and this path can be compactly represented with a bit string, as in Figure 2. In order to obtain a clustering of the words, we select all nodes at a certain depth from the root of the 596 hierarchy. For example, in Figure 2 we might select the four nodes at depth 2 from the root, yielding the clusters {apple,pear}, {Apple,IBM}, {bought,run}, and {of,in}. Note that the same clustering can be obtained by truncating each word’s bit-string to a 2-bit prefix. By using prefixes of various lengths, we can produce clusterings of different granularities (Miller et al., 2004). For all of the experiments in this paper, we used the Liang (2005) implementation of the Brown algorithm to obtain the necessary word clusters. 3 Feature design Key to the success of our approach is the use of features which allow word-cluster-based information to assist the parser. The feature sets we used are similar to other feature sets in the literature (McDonald et al., 2005a; Carreras, 2007), so we will not attempt to give a exhaustive description of the features in this section. Rather, we describe our features at a high level and concentrate on our methodology and motivations. In our experiments, we employed two different feature sets: a baseline feature set which draws upon “normal” information sources such as word forms and parts of speech, and a cluster-based feature set that also uses information derived from the Brown cluster hierarchy. 3.1 Baseline features Our first-order baseline feature set is similar to the feature set of McDonald et al. (2005a), and consists of indicator functions for combinations of words and parts of speech for the head and modifier of each dependency, as well as certain contextual tokens.1 Our second-order baseline features are the same as those of Carreras (2007) and include indicators for triples of part of speech tags for sibling interactions and grandparent interactions, as well as additional bigram features based on pairs of words involved these higher-order interactions. Examples of baseline features are provided in Table 1. 1We augment the McDonald et al. (2005a) feature set with backed-off versions of the “Surrounding Word POS Features” that include only one neighboring POS tag. We also add binned distance features which indicate whether the number of tokens between the head and modifier of a dependency is greater than 2, 5, 10, 20, 30, or 40 tokens. 
Baseline Cluster-based ht,mt hc4,mc4 hw,mw hc6,mc6 hw,ht,mt hc*,mc* hw,ht,mw hc4,mt ht,mw,mt ht,mc4 hw,mw,mt hc6,mt hw,ht,mw,mt ht,mc6 · · · hc4,mw hw,mc4 · · · ht,mt,st hc4,mc4,sc4 ht,mt,gt hc6,mc6,sc6 · · · ht,mc4,sc4 hc4,mc4,gc4 · · · Table 1: Examples of baseline and cluster-based feature templates. Each entry represents a class of indicators for tuples of information. For example, “ht,mt” represents a class of indicator features with one feature for each possible combination of head POS-tag and modifier POStag. Abbreviations: ht = head POS, hw = head word, hc4 = 4-bit prefix of head, hc6 = 6-bit prefix of head, hc* = full bit string of head; mt,mw,mc4,mc6,mc* = likewise for modifier; st,gt,sc4,gc4,. . . = likewise for sibling and grandchild. 3.2 Cluster-based features The first- and second-order cluster-based feature sets are supersets of the baseline feature sets: they include all of the baseline feature templates, and add an additional layer of features that incorporate word clusters. Following Miller et al. (2004), we use prefixes of the Brown cluster hierarchy to produce clusterings of varying granularity. We found that it was nontrivial to select the proper prefix lengths for the dependency parsing task; in particular, the prefix lengths used in the Miller et al. (2004) work (between 12 and 20 bits) performed poorly in dependency parsing.2 After experimenting with many different feature configurations, we eventually settled on a simple but effective methodology. First, we found that it was helpful to employ two different types of word clusters: 1. Short bit-string prefixes (e.g., 4–6 bits), which we used as replacements for parts of speech. 2One possible explanation is that the kinds of distinctions required in a named-entity recognition task (e.g., “Alice” versus “Intel”) are much finer-grained than the kinds of distinctions relevant to syntax (e.g., “apple” versus “eat”). 597 2. Full bit strings,3 which we used as substitutes for word forms. Using these two types of clusters, we generated new features by mimicking the template structure of the original baseline features. For example, the baseline feature set includes indicators for word-to-word and tag-to-tag interactions between the head and modifier of a dependency. In the cluster-based feature set, we correspondingly introduce new indicators for interactions between pairs of short bit-string prefixes and pairs of full bit strings. Some examples of cluster-based features are given in Table 1. Second, we found it useful to concentrate on “hybrid” features involving, e.g., one bit-string and one part of speech. In our initial attempts, we focused on features that used cluster information exclusively. While these cluster-only features provided some benefit, we found that adding hybrid features resulted in even greater improvements. One possible explanation is that the clusterings generated by the Brown algorithm can be noisy or only weakly relevant to syntax; thus, the clusters are best exploited when “anchored” to words or parts of speech. Finally, we found it useful to impose a form of vocabulary restriction on the cluster-based features. Specifically, for any feature that is predicated on a word form, we eliminate this feature if the word in question is not one of the top-N most frequent words in the corpus. When N is between roughly 100 and 1,000, there is little effect on the performance of the cluster-based feature sets.4 In addition, the vocabulary restriction reduces the size of the feature sets to managable proportions. 
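As a rough sketch of how such templates could be instantiated for a single head-modifier dependency, the function below assumes a dictionary mapping words to their full Brown bit strings and a top-N word set for the vocabulary restriction; the feature-string naming and the exact template inventory are illustrative, not the system's actual feature code.

def cluster_feats(words, tags, bits, head, mod, top_words, k=4):
    hw, mw, ht, mt = words[head], words[mod], tags[head], tags[mod]
    hc = bits.get(hw, "UNK")          # full bit string of the head word
    mc = bits.get(mw, "UNK")
    hk, mk = hc[:k], mc[:k]           # short prefixes used like coarse POS tags
    feats = [
        "hc4,mc4=%s_%s" % (hk, mk),   # prefix-prefix, cf. Table 1
        "hc*,mc*=%s_%s" % (hc, mc),   # full bit string pair
        "hc4,mt=%s_%s" % (hk, mt),    # hybrid: cluster prefix + POS tag
        "ht,mc4=%s_%s" % (ht, mk),
    ]
    # word-based hybrid features only fire for the top-N most frequent words
    if hw in top_words:
        feats.append("hw,mc4=%s_%s" % (hw, mk))
    if mw in top_words:
        feats.append("hc4,mw=%s_%s" % (hk, mw))
    return feats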
4 Experiments In order to evaluate the effectiveness of the clusterbased feature sets, we conducted dependency parsing experiments in English and Czech. We test the features in a wide range of parsing configurations, including first-order and second-order parsers, and labeled and unlabeled parsers.5 3As in Brown et al. (1992), we limit the clustering algorithm so that it recovers at most 1,000 distinct bit-strings; thus full bit strings are not equivalent to word forms. 4We used N = 800 for all experiments in this paper. 5In an “unlabeled” parser, we simply ignore dependency label information, which is a common simplification. The English experiments were performed on the Penn Treebank (Marcus et al., 1993), using a standard set of head-selection rules (Yamada and Matsumoto, 2003) to convert the phrase structure syntax of the Treebank to a dependency tree representation.6 We split the Treebank into a training set (Sections 2–21), a development set (Section 22), and several test sets (Sections 0,7 1, 23, and 24). The data partition and head rules were chosen to match previous work (Yamada and Matsumoto, 2003; McDonald et al., 2005a; McDonald and Pereira, 2006). The part of speech tags for the development and test data were automatically assigned by MXPOST (Ratnaparkhi, 1996), where the tagger was trained on the entire training corpus; to generate part of speech tags for the training data, we used 10-way jackknifing.8 English word clusters were derived from the BLLIP corpus (Charniak et al., 2000), which contains roughly 43 million words of Wall Street Journal text.9 The Czech experiments were performed on the Prague Dependency Treebank 1.0 (Hajiˇc, 1998; Hajiˇc et al., 2001), which is directly annotated with dependency structures. To facilitate comparisons with previous work (McDonald et al., 2005b; McDonald and Pereira, 2006), we used the training/development/test partition defined in the corpus and we also used the automatically-assigned part of speech tags provided in the corpus.10 Czech word clusters were derived from the raw text section of the PDT 1.0, which contains about 39 million words of newswire text.11 We trained the parsers using the averaged perceptron (Freund and Schapire, 1999; Collins, 2002), which represents a balance between strong performance and fast training times. To select the number 6We used Joakim Nivre’s “Penn2Malt” conversion tool (http://w3.msi.vxu.se/ nivre/research/Penn2Malt.html). Dependency labels were obtained via the “Malt” hard-coded setting. 7For computational reasons, we removed a single 249-word sentence from Section 0. 8That is, we tagged each fold with the tagger trained on the other 9 folds. 9We ensured that the sentences of the Penn Treebank were excluded from the text used for the clustering. 10Following Collins et al. (1999), we used a coarsened version of the Czech part of speech tags; this choice also matches the conditions of previous work (McDonald et al., 2005b; McDonald and Pereira, 2006). 11This text was disjoint from the training and test corpora. 598 Sec dep1 dep1c MD1 dep2 dep2c MD2 dep1-L dep1c-L dep2-L dep2c-L 00 90.48 91.57 (+1.09) — 91.76 92.77 (+1.01) — 90.29 91.03 (+0.74) 91.33 92.09 (+0.76) 01 91.31 92.43 (+1.12) — 92.46 93.34 (+0.88) — 90.84 91.73 (+0.89) 91.94 92.65 (+0.71) 23 90.84 92.23 (+1.39) 90.9 92.02 93.16 (+1.14) 91.5 90.32 91.24 (+0.92) 91.38 92.14 (+0.76) 24 89.67 91.30 (+1.63) — 90.92 91.85 (+0.93) — 89.55 90.06 (+0.51) 90.42 91.18 (+0.76) Table 2: Parent-prediction accuracies on Sections 0, 1, 23, and 24. 
Abbreviations: dep1/dep1c = first-order parser with baseline/cluster-based features; dep2/dep2c = second-order parser with baseline/cluster-based features; MD1 = McDonald et al. (2005a); MD2 = McDonald and Pereira (2006); suffix -L = labeled parser. Unlabeled parsers are scored using unlabeled parent predictions, and labeled parsers are scored using labeled parent predictions. Improvements of cluster-based features over baseline features are shown in parentheses. of iterations of perceptron training, we performed up to 30 iterations and chose the iteration which optimized accuracy on the development set. Our feature mappings are quite high-dimensional, so we eliminated all features which occur only once in the training data. The resulting models still had very high dimensionality, ranging from tens of millions to as many as a billion features.12 All results presented in this section are given in terms of parent-prediction accuracy, which measures the percentage of tokens that are attached to the correct head token. For labeled dependency structures, both the head token and dependency label must be correctly predicted. In addition, in English parsing we ignore the parent-predictions of punctuation tokens,13 and in Czech parsing we retain the punctuation tokens; this matches previous work (Yamada and Matsumoto, 2003; McDonald et al., 2005a; McDonald and Pereira, 2006). 4.1 English main results In our English experiments, we tested eight different parsing configurations, representing all possible choices between baseline or cluster-based feature sets, first-order (Eisner, 2000) or second-order (Carreras, 2007) factorizations, and labeled or unlabeled parsing. Table 2 compiles our final test results and also includes two results from previous work by McDonald et al. (2005a) and McDonald and Pereira (2006), for the purposes of comparison. We note a few small differences between our parsers and the 12Due to the sparsity of the perceptron updates, however, only a small fraction of the possible features were active in our trained models. 13A punctuation token is any token whose gold-standard part of speech tag is one of {‘‘ ’’ : , .}. parsers evaluated in this previous work. First, the MD1 and MD2 parsers were trained via the MIRA algorithm (Crammer and Singer, 2003; Crammer et al., 2004), while we use the averaged perceptron. In addition, the MD2 model uses only sibling interactions, whereas the dep2/dep2c parsers include both sibling and grandparent interactions. There are some clear trends in the results of Table 2. First, performance increases with the order of the parser: edge-factored models (dep1 and MD1) have the lowest performance, adding sibling relationships (MD2) increases performance, and adding grandparent relationships (dep2) yields even better accuracies. Similar observations regarding the effect of model order have also been made by Carreras (2007). Second, note that the parsers using cluster-based feature sets consistently outperform the models using the baseline features, regardless of model order or label usage. Some of these improvements can be quite large; for example, a first-order model using cluster-based features generally performs as well as a second-order model using baseline features. Moreover, the benefits of cluster-based feature sets combine additively with the gains of increasing model order. 
For example, consider the unlabeled parsers in Table 2: on Section 23, increasing the model order from dep1 to dep2 results in a relative reduction in error of roughly 13%, while introducing clusterbased features from dep2 to dep2c yields an additional relative error reduction of roughly 14%. As a final note, all 16 comparisons between cluster-based features and baseline features shown in Table 2 are statistically significant.14 14We used the sign test at the sentence level. The comparison between dep1-L and dep1c-L is significant at p < 0.05, and all other comparisons are significant at p < 0.0005. 599 Tagger always trained on full Treebank Tagger trained on reduced dataset Size dep1 dep1c ∆ dep2 dep2c ∆ 1k 84.54 85.90 1.36 86.29 87.47 1.18 2k 86.20 87.65 1.45 87.67 88.88 1.21 4k 87.79 89.15 1.36 89.22 90.46 1.24 8k 88.92 90.22 1.30 90.62 91.55 0.93 16k 90.00 91.27 1.27 91.27 92.39 1.12 32k 90.74 92.18 1.44 92.05 93.36 1.31 All 90.89 92.33 1.44 92.42 93.30 0.88 Size dep1 dep1c ∆ dep2 dep2c ∆ 1k 80.49 84.06 3.57 81.95 85.33 3.38 2k 83.47 86.04 2.57 85.02 87.54 2.52 4k 86.53 88.39 1.86 87.88 89.67 1.79 8k 88.25 89.94 1.69 89.71 91.37 1.66 16k 89.66 91.03 1.37 91.14 92.22 1.08 32k 90.78 92.12 1.34 92.09 93.21 1.12 All 90.89 92.33 1.44 92.42 93.30 0.88 Table 3: Parent-prediction accuracies of unlabeled English parsers on Section 22. Abbreviations: Size = #sentences in training corpus; ∆= difference between cluster-based and baseline features; other abbreviations are as in Table 2. 4.2 English learning curves We performed additional experiments to evaluate the effect of the cluster-based features as the amount of training data is varied. Note that the dependency parsers we use require the input to be tagged with parts of speech; thus the quality of the part-ofspeech tagger can have a strong effect on the performance of the parser. In these experiments, we consider two possible scenarios: 1. The tagger has a large training corpus, while the parser has a smaller training corpus. This scenario can arise when tagged data is cheaper to obtain than syntactically-annotated data. 2. The same amount of labeled data is available for training both tagger and parser. Table 3 displays the accuracy of first- and secondorder models when trained on smaller portions of the Treebank, in both scenarios described above. Note that the cluster-based features obtain consistent gains regardless of the size of the training set. When the tagger is trained on the reduced-size datasets, the gains of cluster-based features are more pronounced, but substantial improvements are obtained even when the tagger is accurate. It is interesting to consider the amount by which cluster-based features reduce the need for supervised data, given a desired level of accuracy. Based on Table 3, we can extrapolate that cluster-based features reduce the need for supervised data by roughly a factor of 2. For example, the performance of the dep1c and dep2c models trained on 1k sentences is roughly the same as the performance of the dep1 and dep2 models, respectively, trained on 2k sentences. This approximate data-halving effect can be observed throughout the results in Table 3. When combining the effects of model order and cluster-based features, the reductions in the amount of supervised data required are even larger. 
For example, in scenario 1 the dep2c model trained on 1k sentences is close in performance to the dep1 model trained on 4k sentences, and the dep2c model trained on 4k sentences is close to the dep1 model trained on the entire training set (roughly 40k sentences). 4.3 Czech main results In our Czech experiments, we considered only unlabeled parsing,15 leaving four different parsing configurations: baseline or cluster-based features and first-order or second-order parsing. Note that our feature sets were originally tuned for English parsing, and except for the use of Czech clusters, we made no attempt to retune our features for Czech. Czech dependency structures may contain nonprojective edges, so we employ a maximum directed spanning tree algorithm (Chu and Liu, 1965; Edmonds, 1967; McDonald et al., 2005b) as our firstorder parser for Czech. For the second-order parsing experiments, we used the Carreras (2007) parser. Since this parser only considers projective dependency structures, we “projectivized” the PDT 1.0 training set by finding, for each sentence, the projective tree which retains the most correct dependencies; our second-order parsers were then trained with respect to these projective trees. The development and test sets were not projectivized, so our secondorder parser is guaranteed to make errors in test sentences containing non-projective dependencies. To overcome this, McDonald and Pereira (2006) use a 15We leave labeled parsing experiments to future work. 600 dep1 dep1c dep2 dep2c 84.49 86.07 (+1.58) 86.13 87.13 (+1.00) Table 4: Parent-prediction accuracies of unlabeled Czech parsers on the PDT 1.0 test set, for baseline features and cluster-based features. Abbreviations are as in Table 2. Parser Accuracy Nivre and Nilsson (2005) 80.1 McDonald et al. (2005b) 84.4 Hall and Nov´ak (2005) 85.1 McDonald and Pereira (2006) 85.2 dep1c 86.07 dep2c 87.13 Table 5: Unlabeled parent-prediction accuracies of Czech parsers on the PDT 1.0 test set, for our models and for previous work. Size dep1 dep1c ∆ dep2 dep2c ∆ 1k 72.79 73.66 0.87 74.35 74.63 0.28 2k 74.92 76.23 1.31 76.63 77.60 0.97 4k 76.87 78.14 1.27 78.34 79.34 1.00 8k 78.17 79.83 1.66 79.82 80.98 1.16 16k 80.60 82.44 1.84 82.53 83.69 1.16 32k 82.85 84.65 1.80 84.66 85.81 1.15 64k 84.20 85.98 1.78 86.01 87.11 1.10 All 84.36 86.09 1.73 86.09 87.26 1.17 Table 6: Parent-prediction accuracies of unlabeled Czech parsers on the PDT 1.0 development set. Abbreviations are as in Table 3. two-stage approximate decoding process in which the output of their second-order parser is “deprojectivized” via greedy search. For simplicity, we did not implement a deprojectivization stage on top of our second-order parser, but we conjecture that such techniques may yield some additional performance gains; we leave this to future work. Table 4 gives accuracy results on the PDT 1.0 test set for our unlabeled parsers. As in the English experiments, there are clear trends in the results: parsers using cluster-based features outperform parsers using baseline features, and secondorder parsers outperform first-order parsers. Both of the comparisons between cluster-based and baseline features in Table 4 are statistically significant.16 Table 5 compares accuracy results on the PDT 1.0 test set for our parsers and several other recent papers. 16We used the sign test at the sentence level; both comparisons are significant at p < 0.0005. 
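The text reports only the outcomes of the significance tests; for concreteness, the following is one standard way to implement the sentence-level sign test named in footnotes 14 and 16, assuming paired per-sentence accuracies for the two parsers being compared. The authors' exact procedure is not specified beyond the test's name, so this is an assumption-laden sketch rather than a reconstruction of their code.

```python
from scipy.stats import binom

def sign_test(scores_a, scores_b):
    """Two-sided sign test over paired per-sentence scores.
    Ties are discarded, as is conventional."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    losses = sum(a < b for a, b in zip(scores_a, scores_b))
    n = wins + losses
    if n == 0:
        return 1.0
    k = min(wins, losses)
    # Two-sided p-value under the null Binomial(n, 0.5) distribution.
    return min(1.0, 2.0 * binom.cdf(k, n, 0.5))
```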
N dep1 dep1c dep2 dep2c 100 89.19 92.25 90.61 93.14 200 90.03 92.26 91.35 93.18 400 90.31 92.32 91.72 93.20 800 90.62 92.33 91.89 93.30 1600 90.87 — 92.20 — All 90.89 — 92.42 — Table 7: Parent-prediction accuracies of unlabeled English parsers on Section 22. Abbreviations: N = threshold value; other abbreviations are as in Table 2. We did not train cluster-based parsers using threshold values larger than 800 due to computational limitations. dep1-P dep1c-P dep1 dep2-P dep2c-P dep2 77.19 90.69 90.89 86.73 91.84 92.42 Table 8: Parent-prediction accuracies of unlabeled English parsers on Section 22. Abbreviations: suffix -P = model without POS; other abbreviations are as in Table 2. 4.4 Czech learning curves As in our English experiments, we performed additional experiments on reduced sections of the PDT; the results are shown in Table 6. For simplicity, we did not retrain a tagger for each reduced dataset, so we always use the (automatically-assigned) part of speech tags provided in the corpus. Note that the cluster-based features obtain improvements at all training set sizes, with data-reduction factors similar to those observed in English. For example, the dep1c model trained on 4k sentences is roughly as good as the dep1 model trained on 8k sentences. 4.5 Additional results Here, we present two additional results which further explore the behavior of the cluster-based feature sets. In Table 7, we show the development-set performance of second-order parsers as the threshold for lexical feature elimination (see Section 3.2) is varied. Note that the performance of cluster-based features is fairly insensitive to the threshold value, whereas the performance of baseline features clearly degrades as the vocabulary size is reduced. In Table 8, we show the development-set performance of the first- and second-order parsers when features containing part-of-speech-based information are eliminated. Note that the performance obtained by using clusters without parts of speech is close to the performance of the baseline features. 601 5 Related Work As mentioned earlier, our approach was inspired by the success of Miller et al. (2004), who demonstrated the effectiveness of using word clusters as features in a discriminative learning approach. Our research, however, applies this technique to dependency parsing rather than named-entity recognition. In this paper, we have focused on developing new representations for lexical information. Previous research in this area includes several models which incorporate hidden variables (Matsuzaki et al., 2005; Koo and Collins, 2005; Petrov et al., 2006; Titov and Henderson, 2007). These approaches have the advantage that the model is able to learn different usages for the hidden variables, depending on the target problem at hand. Crucially, however, these methods do not exploit unlabeled data when learning their representations. Wang et al. (2005) used distributional similarity scores to smooth a generative probability model for dependency parsing and obtained improvements in a Chinese parsing task. Our approach is similar to theirs in that the Brown algorithm produces clusters based on distributional similarity, and the clusterbased features can be viewed as being a kind of “backed-off” version of the baseline features. However, our work is focused on discriminative learning as opposed to generative models. Semi-supervised phrase structure parsing has been previously explored by McClosky et al. 
(2006), who applied a reranked parser to a large unsupervised corpus in order to obtain additional training data for the parser; this self-training appraoch was shown to be quite effective in practice. However, their approach depends on the usage of a high-quality parse reranker, whereas the method described here simply augments the features of an existing parser. Note that our two approaches are compatible in that we could also design a reranker and apply self-training techniques on top of the clusterbased features. 6 Conclusions In this paper, we have presented a simple but effective semi-supervised learning approach and demonstrated that it achieves substantial improvement over a competitive baseline in two broad-coverage dependency parsing tasks. Despite this success, there are several ways in which our approach might be improved. To begin, recall that the Brown clustering algorithm is based on a bigram language model. Intuitively, there is a “mismatch” between the kind of lexical information that is captured by the Brown clusters and the kind of lexical information that is modeled in dependency parsing. A natural avenue for further research would be the development of clustering algorithms that reflect the syntactic behavior of words; e.g., an algorithm that attempts to maximize the likelihood of a treebank, according to a probabilistic dependency model. Alternately, one could design clustering algorithms that cluster entire head-modifier arcs rather than individual words. Another idea would be to integrate the clustering algorithm into the training algorithm in a limited fashion. For example, after training an initial parser, one could parse a large amount of unlabeled text and use those parses to improve the quality of the clusters. These improved clusters can then be used to retrain an improved parser, resulting in an overall algorithm similar to that of McClosky et al. (2006). Setting aside the development of new clustering algorithms, a final area for future work is the extension of our method to new domains, such as conversational text or other languages, and new NLP problems, such as machine translation. Acknowledgments The authors thank the anonymous reviewers for their insightful comments. Many thanks also to Percy Liang for providing his implementation of the Brown algorithm, and Ryan McDonald for his assistance with the experimental setup. The authors gratefully acknowledge the following sources of support. Terry Koo was funded by NSF grant DMS-0434222 and a grant from NTT, Agmt. Dtd. 6/21/1998. Xavier Carreras was supported by the Catalan Ministry of Innovation, Universities and Enterprise, and a grant from NTT, Agmt. Dtd. 6/21/1998. Michael Collins was funded by NSF grants 0347631 and DMS-0434222. 602 References P.F. Brown, V.J. Della Pietra, P.V. deSouza, J.C. Lai, and R.L. Mercer. 1992. Class-Based n-gram Models of Natural Language. Computational Linguistics, 18(4):467–479. S. Buchholz and E. Marsi. 2006. CoNLL-X Shared Task on Multilingual Dependency Parsing. In Proceedings of CoNLL, pages 149–164. X. Carreras. 2007. Experiments with a Higher-Order Projective Dependency Parser. In Proceedings of EMNLP-CoNLL, pages 957–961. E. Charniak, D. Blaheta, N. Ge, K. Hall, and M. Johnson. 2000. BLLIP 1987–89 WSJ Corpus Release 1, LDC No. LDC2000T43. Linguistic Data Consortium. Y.J. Chu and T.H. Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396– 1400. M. Collins, J. Hajiˇc, L. Ramshaw, and C. Tillmann. 1999. A Statistical Parser for Czech. 
In Proceedings of ACL, pages 505–512. M. Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Proceedings of EMNLP, pages 1–8. K. Crammer and Y. Singer. 2003. Ultraconservative Online Algorithms for Multiclass Problems. Journal of Machine Learning Research, 3:951–991. K. Crammer, O. Dekel, S. Shalev-Shwartz, and Y. Singer. 2004. Online Passive-Aggressive Algorithms. In S. Thrun, L. Saul, and B. Sch¨olkopf, editors, NIPS 16, pages 1229–1236. J. Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B:233– 240. J. Eisner. 2000. Bilexical Grammars and Their CubicTime Parsing Algorithms. In H. Bunt and A. Nijholt, editors, Advances in Probabilistic and Other Parsing Technologies, pages 29–62. Kluwer Academic Publishers. Y. Freund and R. Schapire. 1999. Large Margin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277–296. J. Hajiˇc, E. Hajiˇcov´a, P. Pajas, J. Panevova, and P. Sgall. 2001. The Prague Dependency Treebank 1.0, LDC No. LDC2001T10. Linguistics Data Consortium. J. Hajiˇc. 1998. Building a Syntactically Annotated Corpus: The Prague Dependency Treebank. In E. Hajiˇcov´a, editor, Issues of Valency and Meaning. Studies in Honor of Jarmila Panevov´a, pages 12–19. K. Hall and V. Nov´ak. 2005. Corrective Modeling for Non-Projective Dependency Parsing. In Proceedings of IWPT, pages 42–52. T. Koo and M. Collins. 2005. Hidden-Variable Models for Discriminative Reranking. In Proceedings of HLTEMNLP, pages 507–514. W. Li and A. McCallum. 2005. Semi-Supervised Sequence Modeling with Syntactic Topic Models. In Proceedings of AAAI, pages 813–818. P. Liang. 2005. Semi-Supervised Learning for Natural Language. Master’s thesis, Massachusetts Institute of Technology. M.P. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. T. Matsuzaki, Y. Miyao, and J. Tsujii. 2005. Probabilistic CFG with Latent Annotations. In Proceedings of ACL, pages 75–82. D. McClosky, E. Charniak, and M. Johnson. 2006. Effective Self-Training for Parsing. In Proceedings of HLT-NAACL, pages 152–159. R. McDonald and F. Pereira. 2006. Online Learning of Approximate Dependency Parsing Algorithms. In Proceedings of EACL, pages 81–88. R. McDonald, K. Crammer, and F. Pereira. 2005a. Online Large-Margin Training of Dependency Parsers. In Proceedings of ACL, pages 91–98. R. McDonald, F. Pereira, K. Ribarov, and J. Hajiˇc. 2005b. Non-Projective Dependency Parsing using Spanning Tree Algorithms. In Proceedings of HLT-EMNLP, pages 523–530. S. Miller, J. Guinness, and A. Zamanian. 2004. Name Tagging with Word Clusters and Discriminative Training. In Proceedings of HLT-NAACL, pages 337–342. J. Nivre and J. Nilsson. 2005. Pseudo-Projective Dependency Parsing. In Proceedings of ACL, pages 99–106. J. Nivre, J. Hall, S. K¨ubler, R. McDonald, J. Nilsson, S. Riedel, and D. Yuret. 2007. The CoNLL 2007 Shared Task on Dependency Parsing. In Proceedings of EMNLP-CoNLL 2007, pages 915–932. S. Petrov, L. Barrett, R. Thibaux, and D. Klein. 2006. Learning Accurate, Compact, and Interpretable Tree Annotation. In Proceedings of COLING-ACL, pages 433–440. A. Ratnaparkhi. 1996. A Maximum Entropy Model for Part-Of-Speech Tagging. In Proceedings of EMNLP, pages 133–142. I. Titov and J. Henderson. 2007. Constituent Parsing with Incremental Sigmoid Belief Networks. In Proceedings of ACL, pages 632–639. Q.I. Wang, D. 
Schuurmans, and D. Lin. 2005. Strictly Lexical Dependency Parsing. In Proceedings of IWPT, pages 152–159. H. Yamada and Y. Matsumoto. 2003. Statistical Dependency Analysis With Support Vector Machines. In Proceedings of IWPT, pages 195–206.
Proceedings of ACL-08: HLT, pages 604–612, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Optimal k-arization of Synchronous Tree-Adjoining Grammar Rebecca Nesson School of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 [email protected] Giorgio Satta Department of Information Engineering University of Padua I-35131 Padova, Italy [email protected] Stuart M. Shieber School of Engineering and Applied Sciences Harvard University Cambridge, MA 02138 [email protected] Abstract Synchronous Tree-Adjoining Grammar (STAG) is a promising formalism for syntaxaware machine translation and simultaneous computation of natural-language syntax and semantics. Current research in both of these areas is actively pursuing its incorporation. However, STAG parsing is known to be NP-hard due to the potential for intertwined correspondences between the linked nonterminal symbols in the elementary structures. Given a particular grammar, the polynomial degree of efficient STAG parsing algorithms depends directly on the rank of the grammar: the maximum number of correspondences that appear within a single elementary structure. In this paper we present a compile-time algorithm for transforming a STAG into a strongly-equivalent STAG that optimally minimizes the rank, k, across the grammar. The algorithm performs in O(|G| + |Y | · L3 G) time where LG is the maximum number of links in any single synchronous tree pair in the grammar and Y is the set of synchronous tree pairs of G. 1 Introduction Tree-adjoining grammar is a widely used formalism in natural-language processing due to its mildlycontext-sensitive expressivity, its ability to naturally capture natural-language argument substitution (via its substitution operation) and optional modification (via its adjunction operation), and the existence of efficient algorithms for processing it. Recently, the desire to incorporate syntax-awareness into machine translation systems has generated interest in the application of synchronous tree-adjoining grammar (STAG) to this problem (Nesson, Shieber, and Rush, 2006; Chiang and Rambow, 2006). In a parallel development, interest in incorporating semantic computation into the TAG framework has led to the use of STAG for this purpose (Nesson and Shieber, 2007; Han, 2006b; Han, 2006a; Nesson and Shieber, 2006). Although STAG does not increase the expressivity of the underlying formalisms (Shieber, 1994), STAG parsing is known to be NPhard due to the potential for intertwined correspondences between the linked nonterminal symbols in the elementary structures (Satta, 1992; Weir, 1988). Without efficient algorithms for processing it, its potential for use in machine translation and TAG semantics systems is limited. Given a particular grammar, the polynomial degree of efficient STAG parsing algorithms depends directly on the rank of the grammar: the maximum number of correspondences that appear within a single elementary structure. This is illustrated by the tree pairs given in Figure 1 in which no two numbered links may be isolated. (By “isolated”, we mean that the links can be contained in a fragment of the tree that contains no other links and dominates only one branch not contained in the fragment. A precise definition is given in section 3.) An analogous problem has long been known to exist for synchronous context-free grammars (SCFG) (Aho and Ullman, 1969). 
The task of producing efficient parsers for SCFG has recently been addressed by binarization or k-arization of SCFG grammars that produce equivalent grammars in which the rank, k, has been minimized (Zhang 604 A B C D w A B C D E F G 1 2 3 4 A B C D E F G A B C D 2 3 1 4 1 2 3 4 2 4 3 1 w′ w w′ x x′ y′ y z z′ A B C D 1 w 3 4 E 2 x 5 A B C D 1 3 4 E 2 5 w′ x′ γ1 : γ2 : γ3 : Figure 1: Example of intertwined links that cannot be binarized. No two links can be isolated in both trees in a tree pair. Note that in tree pair γ1, any set of three links may be isolated while in tree pair γ2, no group of fewer than four links may be isolated. In γ3 no group of links smaller than four may be isolated. S V P V likes red candies aime les bonbons rouges Det NP↓ S V P V NP↓ NP N NP N N∗ N Adj N∗ N Adj S NP V P John V likes Jean aime S NP V P V les Det NP NP red N Adj candies N bonbons N rouges N Adj 2 1 2 1 Jean NP NP John NP↓1 NP↓1 likes John candies red 1 2 1 (a) (b) (c) Figure 2: An example STAG derivation of the English/French sentence pair “John likes red candies”/“Jean aime les bonbons rouges”. The figure is divided as follows: (a) the STAG grammar, (b) the derivation tree for the sentence pair, and (c) the derived tree pair for the sentences. and Gildea, 2007; Zhang et al., 2006; Gildea, Satta, and Zhang, 2006). The methods for k-arization of SCFG cannot be directly applied to STAG because of the additional complexity introduced by the expressivity-increasing adjunction operation of TAG. In SCFG, where substitution is the only available operation and the depth of elementary structures is limited to one, the k-arization problem reduces to analysis of permutations of strings of nonterminal symbols. In STAG, however, the arbitrary depth of the elementary structures and the lack of restriction to contiguous strings of nonterminals introduced by adjunction substantially complicate the task. In this paper we offer the first algorithm addressing this problem for the STAG case. We present a compile-time algorithm for transforming a STAG into a strongly-equivalent STAG that optimally minimizes k across the grammar. This is a critical minimization because k is the feature of the grammar that appears in the exponent of the complexity of parsing algorithms for STAG. Following the method of Seki et al. (1991), an STAG parser can be implemented with complexity O(n4·(k+1) · |G|). By minimizing k, the worst-case complexity of a parser instantiated for a particular grammar is optimized. The karization algorithm performs in O(|G| + |Y | · L3 G) time where LG is the maximum number of links in any single synchronous tree pair in the grammar and Y is the set of synchronous tree pairs of G. By comparison, a baseline algorithm performing exhaustive search requires O(|G| + |Y | · L6 G) time.1 The remainder of the paper proceeds as follows. In section 2 we provide a brief introduction to the STAG formalism. We present the k-arization algorithm in section 3 and an analysis of its complexity in section 4. We prove the correctness of the algorithm in section 5. 1In a synchronous tree pair with L links, there are O(L4) pairs of valid fragments. It takes O(L) time to check if the two components in a pair have the same set of links. Once the synchronous fragment with the smallest number of links is excised, this process iterates at most L times, resulting in time O(L6 G). 
605 D E F A B C 1 2 3 4 y z 5 H I J 2 3 1 N M 4 w′ x′ 5 L y′ K γ : x G z′ n1 : n2 : n3 : n4 : n5 : Figure 3: A synchronous tree pair containing fragments αL = γL(n1, n2) and αR = γR(n3). Since links(n1, n2) = links(n3) = { 2, 4, 5}, we can define synchronous fragment α = ⟨αL, αR⟩. Note also that node n3 is a maximal node and node n5 is not. σ(n1) = 2 5 5 3 3 2 4 4; σ(n3) = 2 5 5 4 4 2. 2 Synchronous Tree-Adjoining Grammar A tree-adjoining grammar (TAG) consists of a set of elementary tree structures of arbitrary depth, which are combined by substitution, familiar from contextfree grammars, or an operation of adjunction that is particular to the TAG formalism. Auxiliary trees are elementary trees in which the root and a frontier node, called the foot node and distinguished by the diacritic ∗, are labeled with the same nonterminal A. The adjunction operation involves splicing an auxiliary tree in at an internal node in an elementary tree also labeled with nonterminal A. Trees without a foot node, which serve as a base for derivations, are called initial trees. For further background, refer to the survey by Joshi and Schabes (1997). We depart from the traditional definition in notation only by specifying adjunction and substitution sites explicitly with numbered links. Each link may be used only once in a derivation. Operations may only occur at nodes marked with a link. For simplicity of presentation we provisionally assume that only one link is permitted at a node. We later drop this assumption. In a synchronous TAG (STAG) the elementary structures are ordered pairs of TAG trees, with a linking relation specified over pairs of nonterminal nodes. Each link has two locations, one in the left tree in a pair and the other in the right tree. An example of an STAG derivation including both substitution and adjunction is given in Figure 2. For further background, refer to the work of Shieber and Schabes (1990) and Shieber (1994). 3 k-arization Algorithm For a synchronous tree pair γ = ⟨γL, γR⟩, a fragment of γL (or γR) is a complete subtree rooted at some node n of γL, written γL(n), or else a subtree rooted at n with a gap at node n′, written γL(n, n′); see Figure 3 for an example. We write links(n) and links(n, n′) to denote the set of links of γL(n) and γL(n, n′), respectively. When we do not know the root or gap nodes of some fragment αL, we also write links(αL). We say that a set of links Λ from γ can be isolated if there exist fragments αL and αR of γL and γR, respectively, both with links Λ. If this is the case, we can construct a synchronous fragment α = ⟨αL, αR⟩. The goal of our algorithm is to decompose γ into synchronous fragments such that the maximum number of links of a synchronous fragment is kept to a minimum, and γ can be obtained from the synchronous fragments by means of the usual substitution and adjunction operations. In order to simplify the presentation of our algorithm we assume, without any loss of generality, that all elementary trees of the source STAG have nodes with at most two children. 3.1 Maximal Nodes A node n of γL (or γR) is called maximal if (i) links(n) ̸= ∅, and (ii) it is either the root node of γL or, for its parent node n′, we have links(n′) ̸= links(n). Note that for every node n′ of γL such that links(n′) ̸= ∅there is always a unique maximal node n such that links(n′) = links(n). Thus, for the purpose of our algorithm, we need only look at maximal nodes as places for excising tree fragments. 
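To illustrate the definition just given, the sketch below identifies the maximal nodes of a single tree under the provisional one-link-per-node assumption; the node class and function names are inventions of this sketch, not the paper's data structures.

```python
class Node:
    """Minimal tree node for one side of a synchronous tree pair; `link`
    is the link number impinging on the node, or None."""
    def __init__(self, link=None, children=()):
        self.link = link
        self.children = list(children)

def links(n):
    """Set of links occurring in the complete subtree rooted at n."""
    s = {n.link} if n.link is not None else set()
    for c in n.children:
        s |= links(c)
    return s

def maximal_nodes(root):
    """Nodes with a non-empty link set whose parent (if any) carries a
    different link set.  The direct recursion recomputes link sets and is
    quadratic; computing the link counts bottom-up, as in the complexity
    analysis of Section 4, avoids the repeated traversals."""
    found = []
    def visit(n, parent_links):
        ln = links(n)
        if ln and ln != parent_links:
            found.append(n)
        for c in n.children:
            visit(c, ln)
    visit(root, None)
    return found
```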
We can show that the number of maximal nodes Mn in a subtree γL(n) always satisfies |links(n)| ≤Mn ≤2 × |links(n)| −1. Let n be some node of γL, and let l(n) be the (unique) link impinging on n if such a link exists, and l(n) = ε otherwise. We associate n with a string σ(n), defined by a pre- and post-order traversal of fragment γL(n). The symbols of σ(n) are the links in links(n), viewed as atomic symbols. Given a node n with p children n1, . . . , np, 0 ≤p ≤2, we define σ(n) = l(n) σ(n1) · · · σ(np) l(n). See again Figure 3 for an example. Note that |σ(n)| = 2 × |links(n)|. 606 3 1 1 1 1 2 2 2 2 X X X X R R R R R R G G G G G G X′ X′ X′ ∗ X′ X′ excise adjoin transform γL : n1 : n2 : Figure 4: A diagram of the tree transformation performed when fragment γL(n1, n2) is removed. In this and the diagrams that follow, patterned or shaded triangles represent segments of the tree that contain multiple nodes and at least one link. Where the pattern or shading corresponds across trees in a tree pair, the set of links contained within those triangles are equivalent. 3.2 Excision of Synchronous Fragments Although it would be possible to excise synchronous fragments without creating new nonterminal nodes, for clarity we present a simple tree transformation when a fragment is excised that leaves existing nodes intact. A schematic depiction is given in Figure 4. In the figure, we demonstrate the excision process on one half of a synchronous fragment: γL(n1, n2) is excised to form two new trees. The excised tree is not processed further. In the excision process the root and gap nodes of the original tree are not altered. The material between them is replaced with a single new node with a fresh nonterminal symbol and a fresh link number. This nonterminal node and link form the adjunction or substitution site for the excised tree. Note that any link impinging on the root node of the excised fragment is by our convention included in the fragment and any link impinging on the gap node is not. To regenerate the original tree, the excised fragment can be adjoined or substituted back into the tree from which it was excised. The new nodes that were generated in the excision may be removed and the original root and gap nodes may be merged back together retaining any impinging links, respectively. Note that if there was a link on either the root or gap node in the original tree, it is not lost or duplicated 1 1 0 0 0 0 0 0 0 0 1 2 0 1 0 0 0 0 1 0 1 0 5 0 0 1 1 0 0 0 0 0 0 5 0 0 1 1 0 0 0 0 0 0 3 0 0 0 0 0 0 0 1 1 0 3 0 0 0 0 0 0 0 1 1 0 2 0 1 0 0 0 0 1 0 0 0 4 0 0 0 0 1 1 0 0 0 0 4 0 0 0 0 1 1 0 0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 2 5 5 4 4 2 3 3 1 0 Figure 5: Table π with synchronous fragment ⟨γL(n1, n2), γR(n3)⟩from Figure 3 highlighted. in the process. 3.3 Method Let nL and nR be the root nodes of trees γL and γR, respectively. We know that links(nL) = links(nR), and |σ(nL)| = |σ(nR)|, the second string being a rearrangement of the occurrences of symbols in the first one. The main data structure of our algorithm is a Boolean matrix π of size |σ(nL)|×|σ(nL)|, whose rows are addressed by the occurrences of symbols in σ(nL), in the given order, and whose columns are similarly addressed by σ(nR). For occurrences of links x1 , x2 , the element of π at a row addressed by x1 and a column addressed by x2 is 1 if x1 = x2, and 0 otherwise. Thus, each row and column of π has exactly two non-zero entries. See Figure 5 for an example. 
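The matrix can be built directly from the two traversal strings. In the sketch below the strings are represented simply as lists of link identifiers; the example strings are hypothetical (loosely modeled on Figures 3 and 5), and the dense list-of-lists layout is chosen only for readability, whereas the implementation analyzed in Section 4 stores π sparsely.

```python
def permutation_matrix(sigma_left, sigma_right):
    """Direct transcription of the definition of pi: rows are addressed
    by the occurrences in sigma(nL), columns by those in sigma(nR), and
    an entry is 1 exactly when the two occurrences name the same link."""
    assert len(sigma_left) == len(sigma_right)
    return [[1 if x == y else 0 for y in sigma_right] for x in sigma_left]

# Hypothetical root strings over links 1-5; every link occurs exactly
# twice in each string, so each row and column of pi has two 1-entries.
sigma_L = [1, 2, 5, 5, 3, 3, 2, 4, 4, 1]
sigma_R = [1, 2, 5, 5, 4, 4, 2, 3, 3, 1]
pi = permutation_matrix(sigma_L, sigma_R)
```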
For a maximal node n1 of γL, we let π(n1) denote the stripe of adjacent rows of π addressed by substring σ(n1) of σ(nL). If n1 dominates n2 in γL, we let π(n1, n2) denote the rows of π addressed by σ(n1) but not by σ(n2). This forms a pair of horizontal stripes in π. For nodes n3, n4 of γR, we similarly define π(n3) and π(n3, n4) as vertical stripes of adjacent columns. See again Figure 5. Our algorithm is reported in Figure 6. For each synchronous tree pair γ = ⟨γL, γR⟩from the input grammar, we maintain an agenda B with all candidate fragments αL from γL having at least two links. These fragments are processed greedily in order of increasing number of links. The function ISOLATE(), described in more detail be607 1: Function KARIZE(G) {G a binary STAG} 2: G′ ←STAG with empty set of synch trees; 3: for all γ = ⟨γL, γR⟩in G do 4: init π and B; 5: while B ̸= ∅do 6: αL ←next fragment from B; 7: αR ←ISOLATE(αL, π, γR); 8: if αR ̸= null then 9: add ⟨αL, αR⟩to G′; 10: γ ←excise ⟨αL, αR⟩from γ; 11: update π and B; 12: add γ to G′; 13: return G′ Figure 6: Main algorithm. low, looks for a right fragment αR with the same links as αL. Upon success, the synchronous fragment α = ⟨αL, αR⟩is added to the output grammar. Furthermore, we excise α from γ and update data structures π and B. The above process is iterated until B becomes empty. We show in section 5 that this greedy strategy is sound and complete. The function ISOLATE() is specified in Figure 7. We take as input a left fragment αL, which is associated with one or two horizontal stripes in π, depending on whether αL has a gap node or not. The left boundary of αL in π is the index x1 of the column containing the leftmost occurrence of a 1 in the horizontal stripes associated with αL. Similarly, the right boundary of αL in π is the index x2 of the column containing the rightmost occurrence of a 1 in these stripes. We retrieve the shortest substring σ(n) of σ(nR) that spans over indices x1 and x2. This means that n is the lowest node from γR such that the links of αL are a subset of the links of γR(n). If the condition at line 3 is satisfied, all of the matrix entries of value 1 that are found from column x1 to column x2 fall within the horizontal stripes associated with αL. In this case we can report the right fragment αR = γR(n). Otherwise, we check whether the entries of value 1 that fall outside of the two horizontal stripes in between columns x1 and x2 occur within adjacent columns, say from column x3 ≥x1 to column x4 ≤x2. In this case, we check whether there exists some node n′ such that the substring of σ(n) from position x3 to x4 is 1: Function ISOLATE(αL, π, γR) 2: select n ∈γR such that σ(n) is the shortest string within σ(nR) including left/right boundaries of αL in π; 3: if |σ(n)| = 2 × |links(αL)| then 4: return γR(n); 5: select n′ ∈γR such that σ(n′) is the gap string within σ(n) for which links(n) −links(n′) = links(αL); 6: if n′ is not defined then 7: return null; {more than one gap} 8: return γR(n, n′); Figure 7: Find synchronous fragment. an occurrence of string σ(n′). This means that n′ is the gap node, and we report the right fragment αL = γR(n, n′). See again Figure 5. We now drop the assumption that only one link may impinge on a node. When multiple links impinge on a single node n, l(n) is an arbitrary order over those links. In the execution of the algorithm, any stripe that contains one link in l(n) it must include every link in l(n). This prevents the excision of a proper subset of the links at any node. 
This preserves correctness because excising any proper subset would impose an order over the links at n that is not enforced in the input grammar. Because the links at a node are treated as a unit, the complexity of the algorithm is not affected. 4 Complexity We discuss here an implementation of the algorithm of section 3 resulting in time complexity O(|G| + |Y | · L3 G), where Y is the set of synchronous tree pairs of G and LG is the maximum number of links in a synchronous tree pair in Y . Consider a synchronous tree pair γ = ⟨γL, γR⟩ with L links. If M is the number of maximal nodes in γL or γR, we have M = Θ(L) (Section 3.1). We implement the sparse table π in O(L) space, recording for each row and column the indices of its two non-zero entries. We also assume that we can go back and forth between maximal nodes n and strings σ(n) in constant time. Here each σ(n) is represented by its boundary positions within σ(nL) or σ(nR), nL and nR the root nodes of γL and γR, respectively. 608 At line 2 of the function ISOLATE() (Figure 7) we retrieve the left and right boundaries by scanning the rows of π associated with input fragment αL. We then retrieve node n by visiting all maximal nodes of γL spanning these boundaries. Under the above assumptions, this can be done in time O(L). In a similar way we can implement line 5, resulting in overall run time O(L) for function ISOLATE(). In the function KARIZE() (Figure 6) we use buckets Bi, 1 ≤i ≤L, where each Bi stores the candidate fragments αL with |links(αL)| = i. To populate these buckets, we first process fragments γL(n) by visiting bottom up the maximal nodes of γL. The quantity |links(n)| is computed from the quantities |links(ni)|, where ni are the highest maximal nodes dominated by n. (There are at most two such nodes.) Fragments γL(n, n′) can then be processed using the relation |links(n, n′)| = |links(n)| −|links(n′)|. In this way each fragment is processed in constant time, and population of all the buckets takes O(L2) time. We now consider the while loop at lines 5 to 11 in function KARIZE(). For a synchronous tree pair γ, the loop iterates once for each candidate fragment αL in some bucket. We have a total of O(L2) iterations, since the initial number of candidates in the buckets is O(L2), and the possible updating of the buckets after a synchronous fragment is removed does not increase the total size of all the buckets. If the links in αL cannot be isolated, one iteration takes time O(L) (the call to function ISOLATE()). If the links in αL can be isolated, then we need to restructure π and to repopulate the buckets. The former can be done in time O(L) and the latter takes time O(L2), as already discussed. Crucially, the updating of π and the buckets takes place no more than L −1 times. This is because each time we excise a synchronous fragment, the number of links in γ is reduced by at least one. We conclude that function KARIZE() takes time O(L3) for each synchronous tree γ, and the total running time is O(|G| + |Y | · L3 G), where Y is the set of synchronous tree pairs of G. The term |G| accounts for the reading of the input, and dominates the complexity of the algorithm only in case there are very few links in each synchronous tree pair. A B C D 1 w 3 4 E 2 x 5 B D 1 w 3 6 n1 : n2 : n3 : n4 : γ : γ′ : A′ A Figure 8: In γ links 3 and 5 cannot be isolated because the fragment would have to contain two gaps. However, after the removal of fragment γ(n1, n2), an analogous fragment γ′(n3, n4) may be removed. 
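Figures 6 and 7 specify the algorithm at the level of pseudocode; the fragment below is a minimal, unoptimized sketch of the ISOLATE step under the provisional one-link-per-node setting. The input conventions are assumptions made for the sketch rather than the data structures of the actual implementation: a link set for the candidate left fragment, the right tree's root traversal string given as a list of link identifiers, and a precomputed map from each maximal right-tree node to the (start, end) slice of that string spanned by its traversal string.

```python
def isolate(left_links, sigma_R, spans):
    """Return (root, None) for a gapless right fragment, (root, gap) for
    a fragment with one gap, or None if the links cannot be isolated."""
    hit = [i for i, link in enumerate(sigma_R) if link in left_links]
    x1, x2 = hit[0], hit[-1]                 # left/right boundaries in pi

    # Line 2 of Figure 7: lowest node whose string covers both boundaries.
    covering = [(e - s, node) for node, (s, e) in spans.items()
                if s <= x1 and x2 < e]
    if not covering:
        return None
    _, n = min(covering, key=lambda pair: pair[0])
    s, e = spans[n]

    # Line 3: sigma(n) consists exactly of the left fragment's links.
    if e - s == 2 * len(left_links):
        return (n, None)

    # Line 5: the leftover occurrences inside sigma(n) must be contiguous
    # and coincide with sigma(n') for some gap node n'.
    leftover = [i for i in range(s, e) if sigma_R[i] not in left_links]
    x3, x4 = leftover[0], leftover[-1]
    if len(leftover) != x4 - x3 + 1:
        return None                          # would require more than one gap
    for node, span in spans.items():
        if span == (x3, x4 + 1):
            return (n, node)
    return None
```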
5 Proof of Correctness The algorithm presented in the previous sections produces an optimal k-arization for the input grammar. In this section we sketch a proof of correctness of the strategy employed by the algorithm.2 The k-arization strategy presented above is greedy in that it always chooses the excisable fragment with the smallest number of links at each step and does not perform any backtracking. We must therefore show that this process cannot result in a non-optimal solution. If fragments could not overlap each other, this would be trivial to show because the excision process would be confluent. If all overlapping fragments were cases of complete containment of one fragment within another, the proof would also be trivial because the smallest-to-largest excision order would guarantee optimality. However, it is possible for fragments to partially overlap each other, meaning that the intersection of the set of links contained in the two fragments is non-empty and the difference between the set of links in one fragment and the other is also non-empty. Overlapping fragment configurations are given in Figure 9 and discussed in detail below. The existence of partially overlapping fragments complicates the proof of optimality for two reasons. First, the excision of a fragment α that is partially overlapped with another fragment β necessarily precludes the excision of β at a later stage in the ex2Note that the soundness of the algorithm can be easily verified from the fact that the removal of fragments can be reversed by performing standard STAG adjunction and substitution operations until a single STAG tree pair is produced. This tree pair is trivially homomorphic to the original tree pair and can easily be mapped to the original tree pair. 609 (1, 1′) ! ! A B C D n1 : n2 : n3 : n4 : A B C n5 : n6 : n7 : A B C D n8 : n9 : n10 : n11 : (2) (3) Figure 9: The four possible configurations of overlapped fragments within a single tree. For type 1, let α = γ(n1, n3) and β = γ(n2, n4). The roots and gaps of the fragments are interleaved. For type 1′, let α = γ(n1, n3) and β = γ(n2). The root of β dominates the gap of α. For type 2, let α = γ(n5, n6) and β = γ(n5, n7). The fragments share a root and have gap nodes that do not dominate each other. For type 3 let α = γ(n8, n10) and β = γ(n9, n11). The root of α dominates the root of β, both roots dominate both gaps, but neither gap dominates the other. cision process. Second, the removal of a fragment may cause a previously non-isolatable set of links to become isolatable, effectively creating a new fragment that may be advantageous to remove. This is demonstrated in Figure 8. These possibilities raise the question of whether the choice between removing fragments α and β may have consequences at a later stage in the excision process. We demonstrate that this choice cannot affect the k found for a given grammar. We begin by sketching the proof of a lemma that shows that removal of a fragment β that partially overlaps another fragment α always leaves an analogous fragment that may be removed. 5.1 Validity Preservation Consider a STAG tree pair γ containing the set of links Λ and two synchronous fragments α and β with α containing links links(α) and β containing links(β) (links(α), links(β) ⊊Λ). If α and β do not overlap, the removal of β is defined as validity preserving with respect to α. 
If α and β overlap, removal of β from γ is validity preserving with respect to α if after the removal there exists a valid synchronous fragment (containing at most one gap on each side) that contains all and only the links (links(α)−links(β))∪{ x} where x is the new link added to γ. remove α remove β A B C D E F G n1 : n2 : n3 : n4 : n5 : n6 : n7 : A n1 : E n5 : C n3 : x x D n4 : F n6 : H I A n1 : B n2 : J x D n4 : E n5 : K x D n4 : Figure 10: Removal from a tree pair γ containing type 1– type 2 fragment overlap. The fragment α is represented by the horizonal-lined pieces of the tree pair. The fragment β is represented by the vertical-lined pieces of the tree pair. Cross-hatching indicates the overlapping portion of the two fragments. We prove a lemma that removal of any synchronous fragment from an STAG tree pair is validity preserving with respect to all of the other synchronous fragments in the tree pair. It suffices to show that for two arbitrary synchronous fragments α and β, the removal of β is validity preserving with respect to α. We show this by examination of the possible configurations of α and β. Consider the case in which β is fully contained within α. In this case links(β) ⊊links(α). The removal of β leaves the root and gap of α intact in both trees in the pair, so it remains a valid fragment. The new link is added at the new node inserted where β was removed. Since β is fully contained within α, this node is below the root of α but not below its gap. Thus, the removal process leaves α with the links (links(α)−links(β))∪{ x}, where x is the link added in the removal process; the removal is validity preserving. Synchronous fragments may partially overlap in several different ways. There are four possible configurations for an overlapped fragment within a single tree, depicted in Figure 9. These different singletree overlap types can be combined in any way to form valid synchronous fragments. Due to space constraints, we consider two illustrative cases and leave the remainder as an exercise to the reader. An example of removing fragments from a tree set containing type 1–type 2 overlapped fragments is given in Figure 10. Let α = ⟨γL(n1, n3), γR(n5, n6)⟩. Let 610 β = ⟨γL(n2, n4), γR(n5, n7)⟩. If α is removed, the validity preserving fragment for β is ⟨γ′ L(n1, n4), γ′ R(n5)⟩. It contains the links in the vertical-lined part of the tree and the new link x. This forms a valid fragment because both sides contain at most one gap and both contain the same set of links. In addition, it is validity preserving for β because it contains exactly the set of links that were in links(β) and not in links(α) plus the new link x. If we instead choose to remove β, the validity preserving fragment for α is ⟨γ′ L(n1, n4), γ′ R(n5)⟩. The links in each side of this fragment are the same, each side contains at most one gap, and the set of links is exactly the set left over from links(α) once links(β) is removed plus the newly generated link x. An example of removing fragments from a tree set containing type 1′–type 3 (reversed) overlapped fragments is given in Figure 11. If α is removed, the validity preserving fragment for β is ⟨γ′ L(n1), γ′ R(n4)⟩. If β is removed, the validity preserving fragment for α is ⟨γ′ L(n1, n8), γ′ R(n4)⟩. Similar reasoning follows for all remaining types of overlapped fragments. 5.2 Proof Sketch We show that smallest-first removal of fragments is optimal. Consider a decision point at which a choice is made about which fragment to remove. 
Call the size of the smallest fragments at this point m, and let the set of fragments of size m be X with α, β ∈X. There are two cases to consider. First, consider two partially overlapped fragments α ∈X and δ /∈X. Note that |links(α)| < |links(δ)|. Validity preservation of α with respect to δ guarantees that δ or its validity preserving analog will still be available for excision after α is removed. Excising δ increases k more than excising α or any fragment that removal of α will lead to before δ is considered. Thus, removal of δ cannot result in a smaller value for k if it is removed before α rather than after α. Second, consider two partially overlapped fragments α, β ∈X. Due to the validity preservation lemma, we may choose arbitrarily between the fragments in X without jeopardizing our ability to later remove other fragments (or their validity preserving analogs) in that set. Removal of fragment α cannot increase the size of any remaining fragment. Removal of α or β may generate new fragments remove α remove β A B C n1 : n2 : n3 : E F G n5 : n6 : n7 : D n4 : A n1 : C n3 : x H E n5 : x F n6 : I D n4 : A n1 : B n2 : x J↓ D n4 : K x G n7 : n8 : Figure 11: Removal from a tree pair γ containing a type 1′–type 3 (reversed) fragment overlap. The fragment α is represented by the horizontal lined pieces of the tree pair. The fragment β is represented by the vertical-lined pieces of the tree pair. Cross-hatching indicates the overlapping portion of the two fragments. that were not previously valid and may reduce the size of existing fragments that it overlaps. In addition, removal of α may lead to availability of smaller fragments at the next removal step than removal of β (and vice versa). However, since removal of either α or β produces a k of size at least m, the later removal of fragments of size less than m cannot affect the k found by the algorithm. Due to validity preservation, removal of any of these smaller fragments will still permit removal of all currently existing fragments or their analogs at a later step in the removal process. If the removal of α generates a new fragment δ of size larger than m all remaining fragments in X (and all others smaller than δ) will be removed before δ is considered. Therefore, if removal of β generates a new fragment smaller than δ, the smallest-first strategy will properly guarantee its removal before δ. 6 Conclusion In order for STAG to be used in machine translation and other natural-language processing tasks it must be possible to process it efficiently. The difficulty in parsing STAG stems directly from the factor k that indicates the degree to which the correspondences are intertwined within the elementary structures of the grammar. The algorithm presented in this paper is the first method available for k-arizing a synchronous TAG grammar into an equivalent grammar with an optimal value for k. The algorithm operates offline and requires only O(|G| + |Y | · L3 G) time. Both the derivation trees and derived trees produced are trivially homomorphic to those that are produced by the original grammar. 611 References Aho, Alfred V. and Jeffrey D. Ullman. 1969. Syntax directed translations and the pushdown assembler. Journal of Computer and System Sciences, 3(1):37–56. Chiang, David and Owen Rambow. 2006. The hidden TAG model: synchronous grammars for parsing resource-poor languages. In Proceedings of the 8th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+ 8), pages 1–8. Gildea, Daniel, Giorgio Satta, and Hao Zhang. 
2006. Factoring synchronous grammars by sorting. In Proceedings of the International Conference on Computational Linguistics and the Association for Computational Linguistics (COLING/ACL-06), July. Han, Chung-Hye. 2006a. Pied-piping in relative clauses: Syntax and compositional semantics based on synchronous tree adjoining grammar. In Proceedings of the 8th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+ 8), pages 41–48, Sydney, Australia. Han, Chung-Hye. 2006b. A tree adjoining grammar analysis of the syntax and semantics of it-clefts. In Proceedings of the 8th International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+ 8), pages 33–40, Sydney, Australia. Joshi, Aravind K. and Yves Schabes. 1997. Treeadjoining grammars. In G. Rozenberg and A. Salomaa, editors, Handbook of Formal Languages. Springer, pages 69–124. Nesson, Rebecca and Stuart M. Shieber. 2006. Simpler TAG semantics through synchronization. In Proceedings of the 11th Conference on Formal Grammar, Malaga, Spain, 29–30 July. Nesson, Rebecca and Stuart M. Shieber. 2007. Extraction phenomena in synchronous TAG syntax and semantics. In Proceedings of Syntax and Structure in Statistical Translation (SSST), Rochester, NY, April. Nesson, Rebecca, Stuart M. Shieber, and Alexander Rush. 2006. Induction of probabilistic synchronous tree-insertion grammars for machine translation. In Proceedings of the 7th Conference of the Association for Machine Translation in the Americas (AMTA 2006), Boston, Massachusetts, 8-12 August. Satta, Giorgio. 1992. Recognition of linear context-free rewriting systems. In Proceedings of the 10th Meeting of the Association for Computational Linguistics (ACL92), pages 89–95, Newark, Delaware. Seki, H., T. Matsumura, M. Fujii, and T. Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88:191–229. Shieber, Stuart M. 1994. Restricting the weak-generative capacity of synchronous tree-adjoining grammars. Computational Intelligence, 10(4):371–385, November. Shieber, Stuart M. and Yves Schabes. 1990. Synchronous tree adjoining grammars. In Proceedings of the 13th International Conference on Computational Linguistics (COLING ’90), Helsinki, August. Weir, David. 1988. Characterizing mildly contextsensitive grammar formalisms. PhD Thesis, Department of Computer and Information Science, University of Pennsylvania. Zhang, Hao and Daniel Gildea. 2007. Factorization of synchronous context-free grammars in linear time. In NAACL Workshop on Syntax and Structure in Statistical Translation (SSST), April. Zhang, Hao, Liang Huang, Daniel Gildea, and Kevin Knight. 2006. Synchronous binarization for machine translation. In Proceedings of the Human Language Technology Conference/North American Chapter of the Association for Computational Linguistics (HLT/NAACL). 612
2008
69
Proceedings of ACL-08: HLT, pages 55–62, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics MAXSIM: A Maximum Similarity Metric for Machine Translation Evaluation Yee Seng Chan and Hwee Tou Ng Department of Computer Science National University of Singapore Law Link, Singapore 117590 {chanys, nght}@comp.nus.edu.sg Abstract We propose an automatic machine translation (MT) evaluation metric that calculates a similarity score (based on precision and recall) of a pair of sentences. Unlike most metrics, we compute a similarity score between items across the two sentences. We then find a maximum weight matching between the items such that each item in one sentence is mapped to at most one item in the other sentence. This general framework allows us to use arbitrary similarity functions between items, and to incorporate different information in our comparison, such as n-grams, dependency relations, etc. When evaluated on data from the ACL-07 MT workshop, our proposed metric achieves higher correlation with human judgements than all 11 automatic MT evaluation metrics that were evaluated during the workshop. 1 Introduction In recent years, machine translation (MT) research has made much progress, which includes the introduction of automatic metrics for MT evaluation. Since human evaluation of MT output is time consuming and expensive, having a robust and accurate automatic MT evaluation metric that correlates well with human judgement is invaluable. Among all the automatic MT evaluation metrics, BLEU (Papineni et al., 2002) is the most widely used. Although BLEU has played a crucial role in the progress of MT research, it is becoming evident that BLEU does not correlate with human judgement well enough, and suffers from several other deficiencies such as the lack of an intuitive interpretation of its scores. During the recent ACL-07 workshop on statistical MT (Callison-Burch et al., 2007), a total of 11 automatic MT evaluation metrics were evaluated for correlation with human judgement. The results show that, as compared to BLEU, several recently proposed metrics such as Semantic-role overlap (Gimenez and Marquez, 2007), ParaEval-recall (Zhou et al., 2006), and METEOR (Banerjee and Lavie, 2005) achieve higher correlation. In this paper, we propose a new automatic MT evaluation metric, MAXSIM, that compares a pair of system-reference sentences by extracting n-grams and dependency relations. Recognizing that different concepts can be expressed in a variety of ways, we allow matching across synonyms and also compute a score between two matching items (such as between two n-grams or between two dependency relations), which indicates their degree of similarity with each other. Having weighted matches between items means that there could be many possible ways to match, or link items from a system translation sentence to a reference translation sentence. To match each system item to at most one reference item, we model the items in the sentence pair as nodes in a bipartite graph and use the Kuhn-Munkres algorithm (Kuhn, 1955; Munkres, 1957) to find a maximum weight matching (or alignment) between the items in polynomial time. The weights (from the edges) of the resulting graph will then be added to determine the final similarity score between the pair of sentences. 
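For illustration, this matching step can be realised with any implementation of the Kuhn-Munkres (Hungarian) algorithm; the paper does not name a particular one, so the Python sketch below, which assumes SciPy is available, is only one possible realisation. It takes a precomputed similarity matrix between system items and reference items, pads it with zero-weight dummy nodes, and returns the total weight of a maximum weight matching.

import numpy as np
from scipy.optimize import linear_sum_assignment

def max_matching_weight(sim):
    """Total weight of a maximum weight matching over a similarity matrix.

    sim[i][j] is the similarity between system item i and reference item j.
    """
    sim = np.asarray(sim, dtype=float)
    n = max(sim.shape)
    padded = np.zeros((n, n))                    # dummy nodes contribute zero weight
    padded[:sim.shape[0], :sim.shape[1]] = sim
    rows, cols = linear_sum_assignment(padded, maximize=True)
    return padded[rows, cols].sum()

# Hypothetical similarities between three system items and two reference items.
sim = [[0.9, 0.1],
       [0.4, 0.8],
       [0.3, 0.2]]
print(max_matching_weight(sim))                  # 1.7: item 1 -> ref 1, item 2 -> ref 2

The example similarity values are invented purely for illustration; in MAXSIM they would be the item-pair similarity scores defined in Section 4.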
55 Although a maximum weight bipartite graph was also used in the recent work of (Taskar et al., 2005), their focus was on learning supervised models for single word alignment between sentences from a source and target language. The contributions of this paper are as follows. Current metrics (such as BLEU, METEOR, Semantic-role overlap, ParaEval-recall, etc.) do not assign different weights to their matches: either two items match, or they don’t. Also, metrics such as METEOR determine an alignment between the items of a sentence pair by using heuristics such as the least number of matching crosses. In contrast, we propose weighting different matches differently, and then obtain an optimal set of matches, or alignments, by using a maximum weight matching framework. We note that this framework is not used by any of the 11 automatic MT metrics in the ACL-07 MT workshop. Also, this framework allows for defining arbitrary similarity functions between two matching items, and we could match arbitrary concepts (such as dependency relations) gathered from a sentence pair. In contrast, most other metrics (notably BLEU) limit themselves to matching based only on the surface form of words. Finally, when evaluated on the datasets of the recent ACL07 MT workshop (Callison-Burch et al., 2007), our proposed metric achieves higher correlation with human judgements than all of the 11 automatic MT evaluation metrics evaluated during the workshop. In the next section, we describe several existing metrics. In Section 3, we discuss issues to consider when designing a metric. In Section 4, we describe our proposed metric. In Section 5, we present our experimental results. Finally, we outline future work in Section 6, before concluding in Section 7. 2 Automatic Evaluation Metrics In this section, we describe BLEU, and the three metrics which achieved higher correlation results than BLEU in the recent ACL-07 MT workshop. 2.1 BLEU BLEU (Papineni et al., 2002) is essentially a precision-based metric and is currently the standard metric for automatic evaluation of MT performance. To score a system translation, BLEU tabulates the number of n-gram matches of the system translation against one or more reference translations. Generally, more n-gram matches result in a higher BLEU score. When determining the matches to calculate precision, BLEU uses a modified, or clipped n-gram precision. With this, an n-gram (from both the system and reference translation) is considered to be exhausted or used after participating in a match. Hence, each system n-gram is “clipped” by the maximum number of times it appears in any reference translation. To prevent short system translations from receiving too high a score and to compensate for its lack of a recall component, BLEU incorporates a brevity penalty. This penalizes the score of a system if the length of its entire translation output is shorter than the length of the reference text. 2.2 Semantic Roles (Gimenez and Marquez, 2007) proposed using deeper linguistic information to evaluate MT performance. For evaluation in the ACL-07 MT workshop, the authors used the metric which they termed as SR-Or-*1. This metric first counts the number of lexical overlaps SR-Or-t for all the different semantic roles t that are found in the system and reference translation sentence. A uniform average of the counts is then taken as the score for the sentence pair. 
In their work, the different semantic roles t they considered include the various core and adjunct arguments as defined in the PropBank project (Palmer et al., 2005). For instance, SR-Or-A0 refers to the number of lexical overlaps between the A0 arguments. To extract semantic roles from a sentence, several processes such as lemmatization, partof-speech tagging, base phrase chunking, named entity tagging, and finally semantic role tagging need to be performed. 2.3 ParaEval The ParaEval metric (Zhou et al., 2006) uses a large collection of paraphrases, automatically extracted from parallel corpora, to evaluate MT performance. To compare a pair of sentences, ParaEval first locates paraphrase matches between the two 1Verified through personal communication as this is not evident in their paper. 56 sentences. Then, unigram matching is performed on the remaining words that are not matched using paraphrases. Based on the matches, ParaEval will then elect to use either unigram precision or unigram recall as its score for the sentence pair. In the ACL-07 MT workshop, ParaEval based on recall (ParaEval-recall) achieves good correlation with human judgements. 2.4 METEOR Given a pair of strings to compare (a system translation and a reference translation), METEOR (Banerjee and Lavie, 2005) first creates a word alignment between the two strings. Based on the number of word or unigram matches and the amount of string fragmentation represented by the alignment, METEOR calculates a score for the pair of strings. In aligning the unigrams, each unigram in one string is mapped, or linked, to at most one unigram in the other string. These word alignments are created incrementally through a series of stages, where each stage only adds alignments between unigrams which have not been matched in previous stages. At each stage, if there are multiple different alignments, then the alignment with the most number of mappings is selected. If there is a tie, then the alignment with the least number of unigram mapping crosses is selected. The three stages of “exact”, “porter stem”, and “WN synonymy” are usually applied in sequence to create alignments. The “exact” stage maps unigrams if they have the same surface form. The “porter stem” stage then considers the remaining unmapped unigrams and maps them if they are the same after applying the Porter stemmer. Finally, the “WN synonymy” stage considers all remaining unigrams and maps two unigrams if they are synonyms in the WordNet sense inventory (Miller, 1990). Once the final alignment has been produced, unigram precision P (number of unigram matches m divided by the total number of system unigrams) and unigram recall R (m divided by the total number of reference unigrams) are calculated and combined into a single parameterized harmonic mean (Rijsbergen, 1979): Fmean = P · R αP + (1 −α)R (1) To account for longer matches and the amount of fragmentation represented by the alignment, METEOR groups the matched unigrams into as few chunks as possible and imposes a penalty based on the number of chunks. The METEOR score for a pair of sentences is: score = " 1 −γ no. of chunks m β# Fmean where γ no. of chunks m β represents the fragmentation penalty of the alignment. Note that METEOR consists of three parameters that need to be optimized based on experimentation: α, β, and γ. 3 Metric Design Considerations We first review some aspects of existing metrics and highlight issues that should be considered when designing an MT evaluation metric. 
• Intuitive interpretation: To compensate for the lack of recall, BLEU incorporates a brevity penalty. This, however, prevents an intuitive interpretation of its scores. To address this, standard measures like precision and recall could be used, as in some previous research (Banerjee and Lavie, 2005; Melamed et al., 2003). • Allowing for variation: BLEU only counts exact word matches. Languages, however, often allow a great deal of variety in vocabulary and in the ways concepts are expressed. Hence, using information such as synonyms or dependency relations could potentially address the issue better. • Matches should be weighted: Current metrics either match, or don’t match a pair of items. We note, however, that matches between items (such as words, n-grams, etc.) should be weighted according to their degree of similarity. 4 The Maximum Similarity Metric We now describe our proposed metric, Maximum Similarity (MAXSIM), which is based on precision and recall, allows for synonyms, and weights the matches found. 57 Given a pair of English sentences to be compared (a system translation against a reference translation), we perform tokenization2, lemmatization using WordNet3, and part-of-speech (POS) tagging with the MXPOST tagger (Ratnaparkhi, 1996). Next, we remove all non-alphanumeric tokens. Then, we match the unigrams in the system translation to the unigrams in the reference translation. Based on the matches, we calculate the recall and precision, which we then combine into a single Fmean unigram score using Equation 1. Similarly, we also match the bigrams and trigrams of the sentence pair and calculate their corresponding Fmean scores. To obtain a single similarity score scores for this sentence pair s, we simply average the three Fmean scores. Then, to obtain a single similarity score sim-score for the entire system corpus, we repeat this process of calculating a scores for each system-reference sentence pair s, and compute the average over all |S| sentence pairs: sim-score = 1 |S| |S| X s=1 " 1 N N X n=1 Fmeans,n # where in our experiments, we set N=3, representing calculation of unigram, bigram, and trigram scores. If we are given access to multiple references, we calculate an individual sim-score between the system corpus and each reference corpus, and then average the scores obtained. 4.1 Using N-gram Information In this subsection, we describe in detail how we match the n-grams of a system-reference sentence pair. Lemma and POS match Representing each ngram by its sequence of lemma and POS-tag pairs, we first try to perform an exact match in both lemma and POS-tag. In all our n-gram matching, each ngram in the system translation can only match at most one n-gram in the reference translation. Representing each unigram (lipi) at position i by its lemma li and POS-tag pi, we count the number matchuni of system-reference unigram pairs where both their lemma and POS-tag match. To find matching pairs, we proceed in a left-to-right fashion 2http://www.cis.upenn.edu/ treebank/tokenizer.sed 3http://wordnet.princeton.edu/man/morph.3WN r1 r2 r3 0 0.5 0.75 0.75 0.75 1 1 1 s3 s2 s1 0.5 r1 r2 r3 0.75 1 1 s3 s1 s2 Figure 1: Bipartite matching. (in both strings). We first compare the first system unigram to the first reference unigram, then to the second reference unigram, and so on until we find a match. If there is a match, we increment matchuni by 1 and remove this pair of system-reference unigrams from further consideration (removed items will not be matched again subsequently). 
Then, we move on to the second system unigram and try to match it against the reference unigrams, once again proceeding in a left-to-right fashion. We continue this process until we reach the last system unigram. To determine the number matchbi of bigram matches, a system bigram (lsipsi, lsi+1psi+1) matches a reference bigram (lripri, lri+1pri+1) if lsi = lri, psi = pri, lsi+1 = lri+1, and psi+1 = pri+1. For trigrams, we similarly determine matchtri by counting the number of trigram matches. Lemma match For the remaining set of n-grams that are not yet matched, we now relax our matching criteria by allowing a match if their corresponding lemmas match. That is, a system unigram (lsipsi) matches a reference unigram (lripri) if lsi = lri. In the case of bigrams, the matching conditions are lsi = lri and lsi+1 = lri+1. The conditions for trigrams are similar. Once again, we find matches in a left-to-right fashion. We add the number of unigram, bigram, and trigram matches found during this phase to matchuni, matchbi, and matchtri respectively. Bipartite graph matching For the remaining ngrams that are not matched so far, we try to match them by constructing bipartite graphs. During this phase, we will construct three bipartite graphs, one 58 each for the remaining set of unigrams, bigrams, and trigrams. Using bigrams to illustrate, we construct a weighted complete bipartite graph, where each edge e connecting a pair of system-reference bigrams has a weight w(e), indicating the degree of similarity between the bigrams connected. Note that, without loss of generality, if the number of system nodes and reference nodes (bigrams) are not the same, we can simply add dummy nodes with connecting edges of weight 0 to obtain a complete bipartite graph with equal number of nodes on both sides. In an n-gram bipartite graph, the similarity score, or the weight w(e) of the edge e connecting a system n-gram (ls1ps1, . . . , lsnpsn) and a reference n-gram (lr1pr1, . . . , lrnprn) is calculated as follows: Si = I(psi, pri) + Syn(lsi, lri) 2 w(e) = 1 n n X i=1 Si where I(psi, pri) evaluates to 1 if psi = pri, and 0 otherwise. The function Syn(lsi, lri) checks whether lsi is a synonym of lri. To determine this, we first obtain the set WNsyn(lsi) of WordNet synonyms for lsi and the set WNsyn(lri) of WordNet synonyms for lri. Then, Syn(lsi, lri) =    1, WNsyn(lsi) ∩WNsyn(lri) ̸= ∅ 0, otherwise In gathering the set WNsyn for a word, we gather all the synonyms for all its senses and do not restrict to a particular POS category. Further, if we are comparing bigrams or trigrams, we impose an additional condition: Si ̸= 0, for 1 ≤i ≤n, else we will set w(e) = 0. This captures the intuition that in matching a system n-gram against a reference ngram, where n > 1, we require each system token to have at least some degree of similarity with the corresponding reference token. In the top half of Figure 1, we show an example of a complete bipartite graph, constructed for a set of three system bigrams (s1, s2, s3) and three reference bigrams (r1, r2, r3), and the weight of the connecting edge between two bigrams represents their degree of similarity. Next, we aim to find a maximum weight matching (or alignment) between the bigrams such that each system (reference) bigram is connected to exactly one reference (system) bigram. This maximum weighted bipartite matching problem can be solved in O(n3) time (where n refers to the number of nodes, or vertices in the graph) using the KuhnMunkres algorithm (Kuhn, 1955; Munkres, 1957). 
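For concreteness, the edge weight w(e) can be computed directly from these definitions. The sketch below uses NLTK's WordNet interface for the Syn function; the paper specifies WordNet but not a particular toolkit, so NLTK is an assumption, and n-grams are represented as lists of (lemma, POS) pairs to mirror the notation above.

from nltk.corpus import wordnet as wn            # requires the WordNet data files

def syn(lemma_s, lemma_r):
    """Syn(l_s, l_r): 1 if the WordNet synonym sets of the two lemmas intersect."""
    syn_s = {l.name() for s in wn.synsets(lemma_s) for l in s.lemmas()}
    syn_r = {l.name() for s in wn.synsets(lemma_r) for l in s.lemmas()}
    return 1 if syn_s & syn_r else 0             # lemmas unknown to WordNet yield 0

def edge_weight(sys_ngram, ref_ngram):
    """w(e) between two n-grams given as lists of (lemma, POS) pairs."""
    n = len(sys_ngram)
    scores = []
    for (ls, ps), (lr, pr) in zip(sys_ngram, ref_ngram):
        s_i = (int(ps == pr) + syn(ls, lr)) / 2.0
        if n > 1 and s_i == 0.0:                 # bigrams/trigrams: every position must show some similarity
            return 0.0
        scores.append(s_i)
    return sum(scores) / n

These weights would then fill the similarity matrix passed to the matching step sketched earlier.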
The bottom half of Figure 1 shows the resulting maximum weighted bipartite graph, where the alignment represents the maximum weight matching, out of all possible alignments. Once we have solved and obtained a maximum weight matching M for the bigram bipartite graph, we sum up the weights of the edges to obtain the weight of the matching M: w(M) = P e∈M w(e), and add w(M) to matchbi. From the unigram and trigram bipartite graphs, we similarly calculate their respective w(M) and add to the corresponding matchuni and matchtri. Based on matchuni, matchbi, and matchtri, we calculate their corresponding precision P and recall R, from which we obtain their respective Fmean scores via Equation 1. Using bigrams for illustration, we calculate its P and R as: P = matchbi no. of bigrams in system translation R = matchbi no. of bigrams in reference translation 4.2 Dependency Relations Besides matching a pair of system-reference sentences based on the surface form of words, previous work such as (Gimenez and Marquez, 2007) and (Rajman and Hartley, 2002) had shown that deeper linguistic knowledge such as semantic roles and syntax can be usefully exploited. In the previous subsection, we describe our method of using bipartite graphs for matching of ngrams found in a sentence pair. This use of bipartite graphs, however, is a very general framework to obtain an optimal alignment of the corresponding “information items” contained within a sentence pair. Hence, besides matching based on n-gram strings, we can also match other “information items”, such as dependency relations. 59 Metric Adequacy Fluency Rank Constituent Average MAXSIMn+d 0.780 0.827 0.875 0.760 0.811 MAXSIMn 0.804 0.845 0.893 0.766 0.827 Semantic-role 0.774 0.839 0.804 0.742 0.790 ParaEval-recall 0.712 0.742 0.769 0.798 0.755 METEOR 0.701 0.719 0.746 0.670 0.709 BLEU 0.690 0.722 0.672 0.603 0.672 Table 1: Overall correlations on the Europarl and News Commentary datasets. The “Semantic-role overlap”metric is abbreviated as “Semantic-role”. Note that each figure above represents 6 translation tasks: the Europarl and News Commentary datasets each with 3 language pairs (German-English, Spanish-English, French-English). In our work, we train the MSTParser4 (McDonald et al., 2005) on the Penn Treebank Wall Street Journal (WSJ) corpus, and use it to extract dependency relations from a sentence. Currently, we focus on extracting only two relations: subject and object. For each relation (ch, dp, pa) extracted, we note the child lemma ch of the relation (often a noun), the relation type dp (either subject or object), and the parent lemma pa of the relation (often a verb). Then, using the system relations and reference relations extracted from a system-reference sentence pair, we similarly construct a bipartite graph, where each node is a relation (ch, dp, pa). We define the weight w(e) of an edge e between a system relation (chs, dps, pas) and a reference relation (chr, dpr, par) as follows: Syn(chs, chr) + I(dps, dpr) + Syn(pas, par) 3 where functions I and Syn are defined as in the previous subsection. Also, w(e) is non-zero only if dps = dpr. After solving for the maximum weight matching M, we divide w(M) by the number of system relations extracted to obtain a precision score P, and divide w(M) by the number of reference relations extracted to obtain a recall score R. P and R are then similarly combined into a Fmean score for the sentence pair. 
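The relation weight and the combination of P and R are equally direct; the following sketch, again only illustrative and reusing the syn function from the n-gram sketch above, implements the weight for a pair of (child, dependency type, parent) triples and Equation 1 with the recall-heavy setting α = 0.9 used in the experiments below.

def relation_weight(sys_rel, ref_rel):
    """w(e) between two dependency relations (child, dep_type, parent)."""
    (ch_s, dp_s, pa_s), (ch_r, dp_r, pa_r) = sys_rel, ref_rel
    if dp_s != dp_r:                             # weight is non-zero only for identical relation types
        return 0.0
    return (syn(ch_s, ch_r) + 1 + syn(pa_s, pa_r)) / 3.0

def fmean(p, r, alpha=0.9):
    """Parameterized harmonic mean of precision and recall (Equation 1)."""
    if p == 0.0 and r == 0.0:
        return 0.0
    return (p * r) / (alpha * p + (1.0 - alpha) * r)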
To compute the similarity score when incorporating dependency relations, we average the Fmean scores for unigrams, bigrams, trigrams, and dependency relations. 5 Results To evaluate our metric, we conduct experiments on datasets from the ACL-07 MT workshop and NIST 4Available at: http://sourceforge.net/projects/mstparser Europarl Metric Adq Flu Rank Con Avg MAXSIMn+d 0.749 0.786 0.857 0.651 0.761 MAXSIMn 0.749 0.786 0.857 0.651 0.761 Semantic-role 0.815 0.854 0.759 0.612 0.760 ParaEval-recall 0.701 0.708 0.737 0.772 0.730 METEOR 0.726 0.741 0.770 0.558 0.699 BLEU 0.803 0.822 0.699 0.512 0.709 Table 2: Correlations on the Europarl dataset. Adq=Adequacy, Flu=Fluency, Con=Constituent, and Avg=Average. News Commentary Metric Adq Flu Rank Con Avg MAXSIMn+d 0.812 0.869 0.893 0.869 0.861 MAXSIMn 0.860 0.905 0.929 0.881 0.894 Semantic-role 0.734 0.824 0.848 0.871 0.819 ParaEval-recall 0.722 0.777 0.800 0.824 0.781 METEOR 0.677 0.698 0.721 0.782 0.720 BLEU 0.577 0.622 0.646 0.693 0.635 Table 3: Correlations on the News Commentary dataset. MT 2003 evaluation exercise. 5.1 ACL-07 MT Workshop The ACL-07 MT workshop evaluated the translation quality of MT systems on various translation tasks, and also measured the correlation (with human judgement) of 11 automatic MT evaluation metrics. The workshop used a Europarl dataset and a News Commentary dataset, where each dataset consisted of English sentences (2,000 English sentences for Europarl and 2,007 English sentences for News Commentary) and their translations in various languages. As part of the workshop, correlations of the automatic metrics were measured for the tasks 60 of translating German, Spanish, and French into English. Hence, we will similarly measure the correlation of MAXSIM on these tasks. 5.1.1 Evaluation Criteria For human evaluation of the MT submissions, four different criteria were used in the workshop: Adequacy (how much of the original meaning is expressed in a system translation), Fluency (the translation’s fluency), Rank (different translations of a single source sentence are compared and ranked from best to worst), and Constituent (some constituents from the parse tree of the source sentence are translated, and human judges have to rank these translations). During the workshop, Kappa values measured for inter- and intra-annotator agreement for rank and constituent are substantially higher than those for adequacy and fluency, indicating that rank and constituent are more reliable criteria for MT evaluation. 5.1.2 Correlation Results We follow the ACL-07 MT workshop process of converting the raw scores assigned by an automatic metric to ranks and then using the Spearman’s rank correlation coefficient to measure correlation. During the workshop, only three automatic metrics (Semantic-role overlap, ParaEval-recall, and METEOR) achieve higher correlation than BLEU. We gather the correlation results of these metrics from the workshop paper (Callison-Burch et al., 2007), and show in Table 1 the overall correlations of these metrics over the Europarl and News Commentary datasets. In the table, MAXSIMn represents using only n-gram information (Section 4.1) for our metric, while MAXSIMn+d represents using both ngram and dependency information. We also show the breakdown of the correlation results into the Europarl dataset (Table 2) and the News Commentary dataset (Table 3). 
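The correlation computation itself is standard; a minimal sketch, assuming SciPy and taking one hypothetical metric score and one hypothetical human score per participating system for a single task and criterion, is:

from scipy.stats import spearmanr

metric_scores = [0.41, 0.37, 0.52, 0.33, 0.47]   # hypothetical per-system metric scores
human_scores  = [3.1, 2.8, 3.6, 2.5, 3.4]        # hypothetical per-system human judgements

rho, _ = spearmanr(metric_scores, human_scores)  # converts scores to ranks and correlates
print(rho)                                       # 1.0 for these invented values

The numbers here are invented purely to show the call; the workshop correlations themselves are those reported in Tables 1 to 3.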
In all our results for MAXSIM in this paper, we follow METEOR and use α=0.9 (weighing recall more than precision) in our calculation of Fmean via Equation 1, unless otherwise stated. The results in Table 1 show that MAXSIMn and MAXSIMn+d achieve overall average (over the four criteria) correlations of 0.827 and 0.811 respectively. Note that these results are substantially Metric Adq Flu Avg MAXSIMn+d 0.943 0.886 0.915 MAXSIMn 0.829 0.771 0.800 METEOR (optimized) 1.000 0.943 0.972 METEOR 0.943 0.886 0.915 BLEU 0.657 0.543 0.600 Table 4: Correlations on the NIST MT 2003 dataset. higher than BLEU, and in particular higher than the best performing Semantic-role overlap metric in the ACL-07 MT workshop. Also, Semantic-role overlap requires more processing steps (such as base phrase chunking, named entity tagging, etc.) than MAXSIM. For future work, we could experiment with incorporating semantic-role information into our current framework. We note that the ParaEvalrecall metric achieves higher correlation on the constituent criterion, which might be related to the fact that both ParaEval-recall and the constituent criterion are based on phrases: ParaEval-recall tries to match phrases, and the constituent criterion is based on judging translations of phrases. 5.2 NIST MT 2003 Dataset We also conduct experiments on the test data (LDC2006T04) of NIST MT 2003 Chinese-English translation task. For this dataset, human judgements are available on adequacy and fluency for six system submissions, and there are four English reference translation texts. Since implementations of the BLEU and METEOR metrics are publicly available, we score the system submissions using BLEU (version 11b with its default settings), METEOR, and MAXSIM, showing the resulting correlations in Table 4. For METEOR, when used with its originally proposed parameter values of (α=0.9, β=3.0, γ=0.5), which the METEOR researchers mentioned were based on some early experimental work (Banerjee and Lavie, 2005), we obtain an average correlation value of 0.915, as shown in the row “METEOR”. In the recent work of (Lavie and Agarwal, 2007), the values of these parameters were tuned to be (α=0.81, β=0.83, γ=0.28), based on experiments on the NIST 2003 and 2004 Arabic-English evaluation datasets. When METEOR was run with these new parameter values, it returned an average correlation value of 61 0.972, as shown in the row “METEOR (optimized)”. MAXSIM using only n-gram information (MAXSIMn) gives an average correlation value of 0.800, while adding dependency information (MAXSIMn+d) improves the correlation value to 0.915. Note that so far, the parameters of MAXSIM are not optimized and we simply perform uniform averaging of the different n-grams and dependency scores. Under this setting, the correlation achieved by MAXSIM is comparable to that achieved by METEOR. 6 Future Work In our current work, the parameters of MAXSIM are as yet un-optimized. We found that by setting α=0.7, MAXSIMn+d could achieve a correlation of 0.972 on the NIST MT 2003 dataset. Also, we have barely exploited the potential of weighted similarity matching. Possible future directions include adding semantic role information, using the distance between item pairs based on the token position within each sentence as additional weighting consideration, etc. Also, we have seen that dependency relations help to improve correlation on the NIST dataset, but not on the ACL-07 MT workshop datasets. 
Since the accuracy of dependency parsers is not perfect, a possible future work is to identify when best to incorporate such syntactic information. 7 Conclusion In this paper, we present MAXSIM, a new automatic MT evaluation metric that computes a similarity score between corresponding items across a sentence pair, and uses a bipartite graph to obtain an optimal matching between item pairs. This general framework allows us to use arbitrary similarity functions between items, and to incorporate different information in our comparison. When evaluated for correlation with human judgements, MAXSIM achieves superior results when compared to current automatic MT evaluation metrics. References S. Banerjee and A. Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, ACL05, pages 65–72. C. Callison-Burch, C. Fordyce, P. Koehn, C. Monz, and J. Schroeder. 2007. (meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, ACL07, pages 136– 158. J. Gimenez and L. Marquez. 2007. Linguistic features for automatic evaluation of heterogenous MT systems. In Proceedings of the Second Workshop on Statistical Machine Translation, ACL07, pages 256–264. H. W. Kuhn. 1955. The hungarian method for the assignment problem. Naval Research Logistic Quarterly, 2(1):83–97. A. Lavie and A. Agarwal. 2007. METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments. In Proceedings of the Second Workshop on Statistical Machine Translation, ACL07, pages 228–231. R. McDonald, K. Crammer, and F. Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of ACL05, pages 91–98. I. D. Melamed, R. Green, and J. P. Turian. 2003. Precision and recall of machine translation. In Proceedings of HLT-NAACL03, pages 61–63. G. A. Miller. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235–312. J. Munkres. 1957. Algorithms for the assignment and transportation problems. Journal of the Society for Industrial and Applied Mathematics, 5(1):32–38. M. Palmer, D. Gildea, and P. Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of ACL02, pages 311–318. M. Rajman and A. Hartley. 2002. Automatic ranking of MT systems. In Proceedings of LREC02, pages 1247– 1253. A. Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of EMNLP96, pages 133–142. C. Rijsbergen. 1979. Information Retrieval. Butterworths, London, UK, 2nd edition. B. Taskar, S. Lacoste-Julien, and D. Klein. 2005. A discriminative matching approach to word alignment. In Proceedings of HLT/EMNLP05, pages 73–80. L. Zhou, C. Y. Lin, and E. Hovy. 2006. Re-evaluating machine translation results with paraphrase support. In Proceedings of EMNLP06, pages 77–84. 62
2008
7
Proceedings of ACL-08: HLT, pages 613–621, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Enhancing Performance of Lexicalised Grammars Rebecca Dridan†, Valia Kordoni†, Jeremy Nicholson†‡ †Dept of Computational Linguistics, Saarland University and DFKI GmbH, Germany ‡Dept of Computer Science and Software Engineering and NICTA, University of Melbourne, Australia {rdrid,kordoni}@coli.uni-sb.de, [email protected] Abstract This paper describes how external resources can be used to improve parser performance for heavily lexicalised grammars, looking at both robustness and efficiency. In terms of robustness, we try using different types of external data to increase lexical coverage, and find that simple POS tags have the most effect, increasing coverage on unseen data by up to 45%. We also show that filtering lexical items in a supertagging manner is very effective in increasing efficiency. Even using vanilla POS tags we achieve some efficiency gains, but when using detailed lexical types as supertags we manage to halve parsing time with minimal loss of coverage or precision. 1 Introduction Heavily lexicalised grammars have been used in applications such as machine translation and information extraction because they can produce semantic structures which provide more information than less informed parsers. In particular, because of the structural and semantic information attached to lexicon items, these grammars do well at describing complex relationships, like non-projectivity and center embedding. However, the cost of this additional information sometimes makes deep parsers that use these grammars impractical. Firstly because, if the information is not available, the parsers may fail to produce an analysis, a failure of robustness. Secondly, the effect of analysing the extra information can slow the parser down, causing efficiency problems. This paper describes experiments aimed at improving parser performance in these two areas, by annotating the input given to one such deep parser, the PET parser (Callmeier, 2000), which uses lexicalised grammars developed under the HPSG formalism (Pollard and Sag, 1994). 2 Background In all heavily lexicalised formalisms, such as LTAG, CCG, LFG and HPSG, the lexicon plays a key role in parsing. But a lexicon can never hope to contain all words in open domain text, and so lexical coverage is a central issue in boosting parser robustness. Some systems use heuristics based on numbers, capitalisation and perhaps morphology to guess the category of the unknown word (van Noord and Malouf, 2004), while others have focused on automatically expanding the lexicon (Baldwin, 2005; Hockenmaier et al., 2002; O’Donovan et al., 2005). Another method, described in Section 4, uses external resources such as part-of-speech (POS) tags to select generic lexical entries for out-of-vocabulary words. In all cases, we lose some of the depth of information the hand-crafted lexicon would provide, but an analysis is still produced, though possibly less than fully specified. The central position of these detailed lexicons causes problems, not only of robustness, but also of efficiency and ambiguity. Many words may have five, six or more lexicon entries associated with them, and this can lead to an enormous search space for the parser. Various means of filtering this search space have been attempted. Kiefer et al. 
(1999) describes a method of filtering lexical items by specifying and checking for required prefixes and particles 613 which is particularly effective for German, but also applicable to English. Other research has looked at using dependencies to restrict the parsing process (Sagae et al., 2007), but the most well known filtering method is supertagging. Originally described by Bangalore and Joshi (1994) for use in LTAG parsing, it has also been used very successfully for CCG (Clark, 2002). Supertagging is the process of assigning probable ‘supertags’ to words before parsing to restrict parser ambiguity, where a supertag is a tag that includes more specific information than the typical POS tags. The supertags used in each formalism differ, being elementary trees in LTAG and CCG categories for CCG. Section 3.2 describes an experiment akin to supertagging for HPSG, where the supertags are HPSG lexical types. Unlike elementary trees and CCG categories, which are predominantly syntactic categories, the HPSG lexical types contain a lot of semantic information, as well as syntactic. In the case study we describe here, the tools, grammars and treebanks we use are taken from work carried out in the DELPH-IN1 collaboration. This research is based on using HPSG along with Minimal Recursion Semantics (MRS: Copestake et al. (2001)) as a platform to develop deep natural language processing tools, with a focus on multilinguality. The grammars are designed to be bidirectional (used for generation as well as parsing) and so contain very specific linguistic information. In this work, we focus on techniques to improve parsing, not generation, but, as all the methods involve pre-processing and do not change the grammar itself, we do not affect the generation capabilities of the grammars. We use two of the DELPHIN wide-coverage grammars: the English Resource Grammar (ERG: Copestake and Flickinger (2000)) and a German grammar, GG (M¨uller and Kasper, 2000; Crysmann, 2003). We also use the PET parser, and the [incr tsdb()] system profiler and treebanking tool (Oepen, 2001) for evaluation. 3 Parser Restriction An exhaustive parser, such as PET, by default produces every parse licensed by the grammar. However, in many application scenarios, this is unnecessary and time consuming. The benefits of us1http://wiki.delph-in.net/ ing a deep parser with a lexicalised grammar are the precision and depth of the analysis produced, but this depth comes from making many fine distinctions which greatly increases the parser search space, making parsing slow. By restricting the lexical items considered during parsing, we improve the efficiency of a parser with a possible trade-off of losing correct parses. For example, the noun phrase reading of The dog barks is a correct parse, although unlikely. By blocking the use of barks as a noun in this case, we lose this reading. This may be an acceptable trade-off in some applications that can make use of the detailed information, but only if it can be delivered in reasonable time. An example of such an application is the real-time speech translation system developed in the Verbmobil project (Wahlster, 2000), which integrated deep parsing results, where available, into its appointment scheduling and travel planning dialogues. In these experiments we look at two methods of restricting the parser, first by using POS tags and then using lexical types. 
To control the trade-off between efficiency and precision, we vary which lexical items are restricted according to a likelihood threshold from the respective taggers. Only open class words are restricted, since it is the gross distinctions between, for instance, noun and verb that we would like to utilise. Any differences between categories for closed class words are more subtle and we feel the parser is best left to make these distinctions without restriction. The data set used for these experiments is the jh5 section of the treebank released with the ERG. This text consists of edited written English in the domain of Norwegian hiking instructions from the LOGON project (Oepen et al., 2004). 3.1 Part of Speech Tags We use TreeTagger (Schmid, 1994) to produce POS tags and then open class words are restricted if the POS tagger assigned a tag with a probability over a certain threshold. A lower threshold will lead to faster parsing, but at the expense of losing more correct parses. We experiment with various thresholds, and results are shown in Table 1. Since a gold standard treebank for our data set was available, it was possible to evaluate the accuracy of the parser. Evaluation of deep parsing results is often reported only in terms of coverage (number of sentences which re614 Threshold Coverage Precision Time gold 93.5% 92.2% N/A unrestricted 93.3% 92.4% 0.67s 1.00 90.7% 91.9% 0.59s 0.98 88.8% 89.3% 0.49s 0.95 88.4% 89.5% 0.48s 0.90 86.4% 88.5% 0.44s 0.80 84.3% 87.0% 0.43s 0.60 81.5% 87.3% 0.39s Table 1: Results obtained when restricting the parser lexicon according to the POS tag, where words are restricted according to a threshold of POS probabilities. ceive an analysis), because, since the hand-crafted grammars are optimised for precision over coverage, the analyses are assumed to be correct. However, in this experiment, we are potentially ‘diluting’ the precision of the grammar by using external resources to remove parses and so it is important that we have some idea of how the accuracy is affected. In the table, precision is the percentage of sentences that, having produced at least one parse, produced a correct parse. A parse was judged to be correct if it exactly matched the gold standard tree in all aspects, syntactic and semantic. The results show quite clearly how the coverage drops as the average parse time per sentence drops. In hybrid applications that can back-off to less informative analyses, this may be a reasonable trade-off, enabling detailed analyses in shorter times where possible, and using the shallower analyses otherwise. 3.2 Lexical Types Another option for restricting the parser is to use the lexical types used by the grammar itself, in a similar method to that described by Prins and van Noord (2003). This could be considered a form of supertagging as used in LTAG and CCG. Restricting by lexical types should have the effect of reducing ambiguity further than POS tags can do, since one POS tag could still allow the use of multiple lexical items with compatible lexical types. On the other hand, it could be considered more difficult to tag accurately, since there are many more lexical types than POS tags (almost 900 in the ERG) and less training data is available. 
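Conceptually, the filtering step in both setups reduces to a small decision per token; the sketch below is purely illustrative (PET's actual interface is not shown here), with the test against the tagger's prediction left as an abstract predicate.

def filter_lexical_items(items, tag, prob, threshold, is_open_class, compatible):
    """Return the lexical items the parser may use for one token.

    items         -- entries the grammar's lexicon offers for the token
    tag, prob     -- the tagger's best POS tag or lexical type and its probability
    is_open_class -- closed-class words are never restricted
    compatible    -- predicate testing whether an entry is consistent with tag;
                     grammar-specific, so left abstract here
    """
    if not is_open_class or prob < threshold:
        return list(items)                       # tagger not confident enough: no filtering
    return [item for item in items if compatible(item, tag)]

Lowering threshold filters more tokens and so speeds up parsing, at the cost of occasionally discarding the lexical item needed for the correct analysis, which is exactly the trade-off reported in Tables 1 and 2.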
Configuration Coverage Precision Time gold 93.5% 92.2% N/A unrestricted 93.3% 92.4% 0.67s 0.98 with POS 93.5% 91.9% 0.63s 0.95 with POS 93.1% 92.4% 0.48s 0.90 with POS 92.9% 92.3% 0.37s 0.80 with POS 91.8% 91.8% 0.31s 0.60 with POS 86.2% 93.5% 0.21s 0.98 no POS 92.9% 92.3% 0.62s 0.95 no POS 90.9% 91.0% 0.48s 0.90 no POS 87.7% 89.2% 0.42s 0.80 no POS 79.7% 84.6% 0.33s 0.60 no POS 67.0% 84.2% 0.23s Table 2: Results obtained when restricting the parser lexicon according to the predicted lexical type, where words are restricted according to a threshold of tag probabilities. Two models, with and without POS tags as features, were used. While POS taggers such as TreeTagger are common, and there some supertaggers are available, notably that of Clark and Curran (2007) for CCG, no standard supertagger exists for HPSG. Consequently, we developed a Maximum Entropy model for supertagging using the OpenNLP implementation.2 Similarly to Zhang and Kordoni (2006), we took training data from the gold–standard lexical types in the treebank associated with ERG (in our case, the July-07 version). For each token, we extracted features in two ways. One used features only from the input string itself: four characters from the beginning and end of the target word token, and two words of context (where available) either side of the target. The second used the features from the first, along with POS tags given by TreeTagger for the context tokens. We held back the jh5 section of the treebank for testing the Maximum Entropy model. Again, the lexical items that were to be restricted were controlled by a threshold, in this case the probability given by the maximum entropy model. Table 2 shows the results achieved by these two models, with the unrestricted results and the gold standard provided for comparison. Here we see the same trends of falling coverage 2http://maxent.sourceforge.net/ 615 with falling time for both models, with the POS tagged model consistently outperforming the wordform model. To give a clearer picture of the comparative performance of all three experiments, Figure 1 shows how the results vary with time for both models, and for the POS tag restricted experiment. Here we can see that the coverage and precision of the lexical type restriction experiment that uses the word-form model is just above that of the POS restricted one. However the POS tagged model clearly outperforms both, showing minimal loss of coverage or precision at a threshold which halved the average parsing time. At the lowest parsing time, we see that precision of the POS tagged model even goes up. This can be explained by noting that coverage here goes down, and obviously we are losing more incorrect parses than correct parses. This echoes the main result from Prins and van Noord (2003), that filtering the lexical categories used by the parser can significantly reduce parsing time, while maintaining, or even improving, precision. The main differences between our method and that of Prins and van Noord are the training data and the tagging model. The key feature of their experiment was the use of ‘unsupervised’ training data, that is, the uncorrected output of their parser. In this experiment, we used gold standard training data, but much less of it (just under 200 000 words) and still achieved a very good precision. It would be interesting to see what amount of unsupervised parser output we would require to achieve the same level of precision. 
The other difference was the tagging model, maximum entropy versus Hidden Markov Model (HMM). We selected maximum entropy because Zhang and Kordoni (2006) had shown that they got better results using a maximum entropy tagger instead of a HMM one when predicting lexical types, albeit for a slightly different purpose. It is not possible to directly compare results between our experiments and those in Prins and van Noord, because of different languages, data sets and hardware, but it is worth noting that parsing times are much lower in our setup, perhaps more so than can be attributed to 4 years hardware improvement. While the range of sentence lengths appears to be very similar between the data sets, one possible reason for this could be the very large number of lexical categories used in their ALPINO system. 65 70 75 80 85 90 95 0.2 0.3 0.4 0.5 0.6 0.7 Average time per sentence (seconds) Coverage Gold standard POS tags 3 33 33 3 3 Lexical types (no POS model) + + + + + + Lexical types (with POS model) 2 2 2 2 2 2 Unrestricted ⋆ ⋆ 75 80 85 90 95 0.2 0.3 0.4 0.5 0.6 0.7 Average time per sentence (seconds) Precision Gold standard POS tags 3 33 33 3 3 Lexical types (no POS model) + + + + + + Lexical types (with POS model) 2 2 2 2 2 2 Unrestricted ⋆ ⋆ Figure 1: Coverage and precision varying with time for the three restriction experiments. Gold standard and unrestricted results shown for comparison. While this experiment is similar to that of Clark and Curran (2007), it differs in that their supertagger assign categories to every word, while we look up every word in the lexicon and the tagger is used to filter what the lexicon returns, only if the tagger confidence is sufficiently high. As Table 2 shows, when we use the tags for which the tagger had a low confidence, we lose significant coverage. In order to run as a supertagger rather than a filter, the tagger would need to be much more accurate. While we can look at multi-tagging as an option, we believe much more training data would be needed to achieve a sufficient level of tag accuracy. Increasing efficiency is important for enabling these heavily lexicalised grammars to bring the benefits of their deep analyses to applications, but simi616 larly important is robustness. The following section is aimed at addressing this issue of robustness, again by using external information. 4 Unknown Word Handling The lexical information available to the parser is what makes the depth of the analysis possible, and the default configuration of the parser uses an allor-nothing approach, where a parse is not produced if all the lexical information is not available. However, in order to increase robustness, it is possible to use underspecified lexical information where a fully specified lexical item is not available. One method of doing this, built in to the PET parser, is to use POS tags to select generic lexical items, and hence allow a (less than fully specified) parse to be built. The six data sets used for these experiments were chosen to give a range of languages and genres. Four sets are English text: jh5 described in Section 3; trec consisting of questions from TREC and included in the treebanks released with the ERG; a00 which is taken from the BNC and consists of factsheets and newsletters; and depbank, the 700 sentences of the Briscoe and Carroll version of DepBank (Briscoe and Carroll, 2006) taken from the Wall Street Journal. 
The last two data sets are German text: clef700 consisting of German questions taken from the CLEF competition and eiche564 a sample of sentences taken from a treebank parsed with the German HPSG grammar, GG and consisting of transcribed German speech data concerning appointment scheduling from the Verbmobil project. Vital statistics of these data sets are described in Table 3. We used TreeTagger to POS tag the six data sets, with the tagger configured to assign multiple tags, where the probability of the less likely tags was at least half that of the most likely tag. The data was input using a PET input chart (PIC), which allows POS tags to be assigned to each token, and then parsed each with the PET parser.3 All English data sets used the July-07 CVS version of the ERG and the German sets used the September 2007 version of GG. Unlike the experiments described in Section 3, adding POS tags in this way will have no effect on sentences which the parser is already able 3Subversion revision 384 Language Number of Sentences Ave. Sentence Length jh5 English 464 14.2 trec English 693 6.9 a00 English 423 17.2 depbank English 700 21.5 clef German 700 7.5 eiche564 German 564 11.5 Table 3: Data sets used in input annotation experiments. to parse. The POS tags will only be considered when the parser has no lexicon entry for a given word, and hence can only increase coverage. Results are shown in Table 4, comparing the coverage over each set to that obtained without using POS tags to handle unknown words. Coverage here is defined as the percentage of sentences with at least one parse. These results show very clearly one of the potential drawbacks of using a highly lexicalised grammar formalism like HPSG: unknown words are one of the main causes of parse failure, as quantified in Baldwin et al. (2004) and Nicholson et al. (2008). In the results here, we see that for jh5, trec and eiche564, adding unknown word handling made almost no difference, since the grammars (specifically the lexicons) have been tuned for these data sets. On the other hand, over unseen texts, adding unknown word handling made a dramatic difference to the coverage. This motivates strategies like the POS tag annotation used here, as well as the work on deep lexical acquisition (DLA) described in Zhang and Kordoni (2006) and Baldwin (2005), since no grammar could ever hope to cover all words used within a language. As mentioned in Section 3, coverage is not the only evaluation metric that should be considered, particularly when adding potentially less precise information to the parsing process (in this case POS tags). Since the primary effect of adding POS tags is shown with those data sets for which we do not have gold standard treebanks, evaluating accuracy in this case is more difficult. However, in order to give some idea of the effects on precision, a sample of 100 sentences from the a00 data set was evaluated for accuracy, for this and the following experiments. 617 In this instance, we found there was only a slight drop in precision, where the original analyses had a precision of 82% and the precision of the analyses when POS tags were used was 80%. Since the parser has the means to accept named entity (NE) information in the input, we also experimented with using generic lexical items generated from NE data. We used SProUT (Becker et al., 2002) to tag the data sets and used PET’s inbuilt NE handling mechanism to add NE items to the input, associated with the appropriate word tokens. 
This works slightly differently from the POS annotation mechanism, in that NE items are considered by the parser, even when the associated words are in the lexicon. This has the effect of increasing the number of analyses produced for sentences that already have a full lexical span, but could also increase coverage by enabling parses to be produced where there is no lexical span, or where no parse was possible because a token was not recognised as part of a name. In order to isolate the effect of the NE data, we ran one experiment where the input was annotated only with the SProUT data, and another where the POS tags were also added. These results are also in Table 4. Again, we see coverage increases in the three unseen data sets, a00, depbank and clef, but not to the same extent as the POS tags. Examining the results in more detail, we find that the increases come almost exclusively from sentences without lexical span, rather than in sentences where a token was previously not recognised as part of a name. This means that the NE tagger is operating almost like a POS tagger that only tags proper nouns, and as the POS tagger tags proper nouns quite accurately, we find the NE tagger gives no benefit here. When examining the precision over our sample evaluation set from a00, we find that using the NE data alone adds no correct parses, while using NE data with POS tags actually removes correct parses when compared with POS alone, since the (in these cases, incorrect) NE data is preferred over the POS tags. It is possible that another named entity tagger would give better results, and this may be looked at in future experiments. Other forms of external information might also be used to increase lexical coverage. Zhang and Kordoni (2006) reported a 20% coverage increase over baseline using a lexical type predictor for unknown words, and so we explored this avenue. The same maximum entropy tagger used in Section 3 was used and each open class word was tagged with its most likely lexical type, as predicted by the maximum entropy model. Table 5 shows the results, with the baseline and POS annotated results for comparison. As with the previous experiments, we see a coverage increase in those data sets which are considered unseen text for these grammars. Again it is clear that the use of POS tags as features obviously improves the maximum entropy model, since this second model has almost 10% better coverage on our unseen texts. However, lexical types do not appear to be as effective for increasing lexical coverage as the POS tags. One difference between the POS and lexical type taggers is that the POS tagger could produce multiple tags per word. Therefore, for the next experiment, we altered the lexical type tagger so it could also produce multiple tags. As with the TreeTagger configuration we used for POS annotation, extra lexical type tags were produced if they were at least half as probable as the most likely tag. A lower probability threshold of 0.01 was set, so that hundreds of tags of equal likelihood were not produced in the case where the tagger was unable to make an informed prediction. The results with multiple tagging are also shown in Table 5. The multiple tagging version gives a coverage increase of between 2 and 10% over the single tag version of the tagger, but, at least for the English data sets, it is still less effective than straight-forward POS tagging. For the German unseen data set, clef, we do start getting above what the POS tagger can achieve. 
This may be in part because of the features used by the lexical type tagger — German, being a more morphologically rich language, may benefit more from the prefix and suffix features used in the tagger. In terms of precision measured on our sample evaluation set, the single tag version of the lexical type tagger which used POS tag features achieved a very good precision of 87% where, of all the extra sentences that could now be parsed, only one did not have a correct parse. In an application where precision is considered much more important than coverage, this would be a good method of increasing coverage without loss of accuracy. The single tag version that did not use POS tags in the model achieved 618 Baseline with POS NE only NE+POS jh5 93.1% 93.3% 93.1% 93.3% trec 97.1% 97.5% 97.4% 97.7% a00 50.1% 83.9% 53.0% 85.8% depbank 36.3% 76.9% 51.1% 80.4% clef 22.0% 67.7% 42.3% 75.3% eiche564 63.8% 63.8% 64.0% 64.0% Table 4: Parser coverage with baseline using no unknown word handling and unknown word handling using POS tags, SProUT named entity data as the only annotation, or SProUT tags in addition to POS annotation. Single Lexical Types Multiple Lexical Types Baseline POS -POS +POS -POS +POS jh5 93.1% 93.3% 93.3% 93.3% 93.5% 93.5% trec 97.1% 97.5% 97.3% 97.4% 97.3% 97.4% a00 50.1% 83.9% 63.8% 72.6% 65.7% 78.5% depbank 36.3% 76.9% 51.7% 64.4% 53.9% 69.7% clef 22.0% 67.7% 59.9% 66.8% 69.7% 76.9% eiche564 63.8% 63.8% 63.8% 63.8% 63.8% 63.8% Table 5: Parser coverage using a lexical type predictor for unknown word handling. The predictor was run in single tag mode, and then in multi-tag mode. Two different tagging models were used, with and without POS tags as features. the same precision as with using only POS tags, but without the same increase in coverage. On the other hand, the multiple tagging versions, which at least started approaching the coverage of the POS tag experiment, dropped to a precision of around 76%. From the results of Section 3, one might expect that at least the lexical type method of handling unknown words might at least lead to quicker parsing than when using POS tags, however POS tags are used differently in this situation. When POS tags are used to restrict the parser, any lexicon entry that unifies with the generic part-of-speech lexical category can be used by the parser. That is, when the word is restricted to, for example, a verb, any lexical item with one of the numerous more specific verb categories can be used. In contrast, in these experiments, the lexicon plays no part. The POS tag causes one underspecified lexical item (per POS tag) to be considered in parsing. While these underspecified items may allow more analyses to be built than if the exact category was used, the main contribution to parsing time turned out to be the number of tags assigned to each word, whether that was a POS tag or a lexical type. The POS tagger assigned multiple tags much less frequently than the multiple tagging lexical type tagger and so had a faster average parsing time. The single tagging lexical type tagger had only slightly fewer tags assigned overall, and hence was slightly faster, but at the expense of a significantly lower coverage. 5 Conclusion The work reported here shows the benefits that can be gained by utilising external resources to annotate parser input in highly lexicalised grammar formalisms. Even something as simple and readily available (for languages likely to have lexicalised grammars) as a POS tagger can massively increase the parser coverage on unseen text. 
While annotating with named entity data or a lexical type supertagger were also found to increase coverage, the POS tagger had the greatest effect with up to 45% coverage increase on unseen text. In terms of efficiency, POS tags were also shown to speed up parsing by filtering unlikely lexicon items, but better results were achieved in this case by using a lexical type supertagger. Again encouraging the use of external resources, the supertagging was found to be much more effective when POS tags 619 were used to train the tagging model, and in this configuration, managed to halve the parsing time with minimal effect on coverage or precision. 6 Further Work A number of avenues of future research were suggested by the observations made during this work. In terms of robustness and increasing lexical coverage, more work into using lexical types for unknown words could be explored. In light of the encouraging results for German, one area to look at is the effect of different features for different languages. Use of back-off models might also be worth considering when the tagger probabilities are low. Different methods of using the supertagger could also be explored. The experiment reported here used the single most probable type for restricting the lexicon entries used by the parser. Two extensions of this are obvious. The first is to use multiple tags over a certain threshold, by either inputting multiple types as was done for the unknown word handling, or by using a generic type that is compatible with all the predicted types over a certain threshold. The other possible direction to try is to not check the predicted type against the lexicon, but to simply construct a lexical item from the most likely type, given a (high) threshold probability. This would be similar to the CCG supertagging mechanism and is likely to give generous speedups at the possible expense of precision, but it would be illuminating to discover how this trade-off plays out in our setup. References Timothy Baldwin, Emily M. Bender, Dan Flickinger, Ara Kim, and Stephan Oepen. 2004. Road-testing the English Resource Grammar over the British National Corpus. In Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), pages 2047–50, Lisbon, Portugal. Timothy Baldwin. 2005. Bootstrapping deep lexical resources: Resources for courses. In Proceedings of the ACL-SIGLEX 2005 Workshop on Deep Lexical Acquisition, pages 67–76, Ann Arbor, USA. Srinivas Bangalore and Aravind K. Joshi. 1994. Disambiguation of super parts of speech (or supertags): Almost parsing. In Proceedings of the 15th COLING Conference, pages 154–160, Kyoto, Japan. Markus Becker, Witold Drozdzynski, Hans-Ulrich Krieger, Jakub Piskorski, Ulrich Sch¨afer, and Feiyu Xu. 2002. SProUT - Shallow Processing with Typed Feature Structures and Unification. In Proceedings of the International Conference on NLP (ICON 2002), Mumbai, India. Ted Briscoe and John Carroll. 2006. Evaluating the accuracy of an unlexicalised statistical parser on the PARC DepBank. In Proceedings of the 44th Annual Meeting of the ACL, pages 41–48, Sydney, Australia. Ulrich Callmeier. 2000. PET - a platform for experimentation with efficient HPSG processing techniques. Natural Language Engineering, 6(1):99–107. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Stephen Clark. 2002. Supertagging for combinatory categorical grammar. 
In Proceedings of the 6th International Workshop on Tree Adjoining Grammar and Related Frameworks, pages 101–106, Venice, Italy. Ann Copestake and Dan Flickinger. 2000. An opensource grammar development environment and broadcoverage English grammar using HPSG. In Proceedings of the Second conference on Language Resources and Evaluation (LREC-2000), Athens, Greece. Ann Copestake, Alex Lascarides, and Dan Flickinger. 2001. An algebra for semantic construction in constraint-based grammars. In Proceedings of the 39th Annual Meeting of the ACL and 10th Conference of the EACL (ACL-EACL 2001), Toulouse, France. Berthold Crysmann. 2003. On the efficient implementation of German verb placement in HPSG. In Proceedings of RANLP 2003, pages 112–116, Borovets, Bulgaria. Julia Hockenmaier, Gann Bierner, and Jason Baldridge. 2002. Extending the coverage of a CCG system. Research in Language and Computation. Bernd Kiefer, Hans-Ulrich Krieger, John Carroll, and Rob Malouf. 1999. A bag of useful techniques for efficient and robust parsing. In Proceedings of the 37th Annual Meeting of the ACL, pages 473–480, Maryland, USA. Stefan M¨uller and Walter Kasper. 2000. HPSG analysis of German. In Verbmobil: Foundations of Speech-toSpeech Translation, pages 238–253. Springer, Berlin, Germany. Jeremy Nicholson, Valia Kordoni, Yi Zhang, Timothy Baldwin, and Rebecca Dridan. 2008. Evaluating and extending the coverage of HPSG grammars. In Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Marrakech, Morocco. 620 Ruth O’Donovan, Michael Burke, Aoife Cahill, Josef van Genabith, and Andy Way. 2005. Large-scale induction and evaluation of lexical resources from the PennII and Penn-III treebanks. Computational Linguistics, 31:pp 329–366. Stephan Oepen, Helge Dyvik, Jan Tore Lønning, Erik Velldal, Dorothee Beermann, John Carroll, Dan Flickinger, Lars Hellan, Janne Bondi Johannessen, Paul Meurer, Torbjørn Nordg˚ard, and Victoria Ros´en. 2004. Som˚a kapp-ete med trollet? Towards MRSbased Norwegian—English machine translation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation, Baltimore, USA. Stephan Oepen. 2001. [incr tsdb()] – competence and performance laboratory. User manual, Computational Linguistics, Saarland University, Saarbr¨ucken, Germany. Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago, USA. Robbert Prins and Gertjan van Noord. 2003. Reinforcing parser preferences through tagging. Traitement Automatique des Langues, 44(3):121–139. Kenji Sagae, Yusuke Miyao, and Jun’ichi Tsujii. 2007. HPSG parsing with shallow dependency constraints. In Proceedings of the 45th Annual Meeting of the ACL, pages 624–631, Prague, Czech Republic. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, Manchester, UK. Gertjan van Noord and Robert Malouf. 2004. Wide coverage parsing with stochastic attribute value grammars. In IJCNLP-04 Workshop Beyond Shallow Analyses – Formalisms and statistical modelling for deep analyses. Wolfgang Wahlster, editor. 2000. Verbmobil: Foundations of Speech-to-Speech Translation. SpringerVerlag, Berlin. Yi Zhang and Valia Kordoni. 2006. Automated deep lexical acquisition for robust open texts processing. 
In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006), pages 275–280, Genoa, Italy.
2008
70
Proceedings of ACL-08: HLT, pages 622–629, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Assessing Dialog System User Simulation Evaluation Measures Using Human Judges Hua Ai University of Pittsburgh Pittsburgh PA, 15260, USA [email protected] Diane J. Litman University of Pittsburgh Pittsburgh, PA 15260, USA [email protected] Abstract Previous studies evaluate simulated dialog corpora using evaluation measures which can be automatically extracted from the dialog systems’ logs. However, the validity of these automatic measures has not been fully proven. In this study, we first recruit human judges to assess the quality of three simulated dialog corpora and then use human judgments as the gold standard to validate the conclusions drawn from the automatic measures. We observe that it is hard for the human judges to reach good agreement when asked to rate the quality of the dialogs from given perspectives. However, the human ratings give consistent ranking of the quality of simulated corpora generated by different simulation models. When building prediction models of human judgments using previously proposed automatic measures, we find that we cannot reliably predict human ratings using a regression model, but we can predict human rankings by a ranking model. 1 Introduction User simulation has been widely used in different phases in spoken dialog system development. In the system development phase, user simulation is used in training different system components. For example, (Levin et al., 2000) and (Scheffler, 2002) exploit user simulations to generate large corpora for using Reinforcement Learning to develop dialog strategies, while (Chung, 2004) implement user simulation to train the speech recognizer and understanding components. While user simulation is considered to be more low-cost and time-efficient than experiments with human subjects, one major concern is how well the state-of-the-art user simulations can mimic human user behaviors and how well they can substitute for human users in a variety of tasks. (Schatzmann et al., 2005) propose a set of evaluation measures to assess the quality of simulated corpora. They find that these evaluation measures are sufficient to discern simulated from real dialogs. Since this multiple-measure approach does not offer a easily reportable statistic indicating the quality of a user simulation, (Williams, 2007) proposes a single measure for evaluating and rank-ordering user simulations based on the divergence between the simulated and real users’ performance. This new approach also offers a lookup table that helps to judge whether an observed ordering of two user simulations is statistically significant. In this study, we also strive to develop a prediction model of the rankings of the simulated users’ performance. However, our approach use human judgments as the gold standard. Although to date there are few studies that use human judges to directly assess the quality of user simulation, we believe that this is a reliable approach to assess the simulated corpora as well as an important step towards developing a comprehensive set of user simulation evaluation measures. First, we can estimate the difficulty of the task of distinguishing real and simulated corpora by knowing how hard it is for human judges to reach an agreement. Second, human judgments can be used as the gold standard of the automatic evaluation measures. 
Third, we can validate the automatic 622 measures by correlating the conclusions drawn from the automatic measures with the human judgments. In this study, we recruit human judges to assess the quality of three user simulation models. Judges are asked to read the transcripts of the dialogs between a computer tutoring system and the simulation models and to rate the dialogs on a 5-point scale from different perspectives. Judges are also given the transcripts between human users and the computer tutor. We first assess human judges’ abilities in distinguishing real from simulated users. We find that it is hard for human judges to reach good agreement on the ratings. However, these ratings give consistent ranking on the quality of the real and the simulated user models. Similarly, when we use previously proposed automatic measures to predict human judgments, we cannot reliably predict human ratings using a regression model, but we can consistently mimic human judges’ rankings using a ranking model. We suggest that this ranking model can be used to quickly assess the quality of a new simulation model without manual efforts by ranking the new model against the old models. 2 Related Work A lot of research has been done in evaluating different components of Spoken Dialog Systems as well as overall system performance. Different evaluation approaches are proposed for different tasks. Some studies (e.g., (Walker et al., 1997)) build regression models to predict user satisfaction scores from the system log as well as the user survey. There are also studies that evaluate different systems/system components by ranking the quality of their outputs. For example, (Walker et al., 2001) train a ranking model that ranks the outputs of different language generation strategies based on human judges’ rankings. In this study, we build both a regression model and a ranking model to evaluate user simulation. (Schatzmann et al., 2005) summarize some broadly used automatic evaluation measures for user simulation and integrate several new automatic measures to form a comprehensive set of statistical evaluation measures. The first group of measures investigates how much information is transmitted in the dialog and how active the dialog participants are. The second group of measures analyzes the style of the dialog and the last group of measures examines the efficiency of the dialogs. While these automatic measures are handy to use, these measures have not been validated by humans. There are well-known practices which validate automatic measures using human judgments. For example, in machine translation, BLEU score (Papineni et al., 2002) is developed to assess the quality of machine translated sentences. Statistical analysis is used to validate this score by showing that BLEU score is highly correlated with the human judgment. In this study, we validate a subset of the automatic measures proposed by (Schatzmann et al., 2005) by correlating the measures with human judgments. We follow the design of (Linguistic Data Consortium, 2005) in obtaining human judgments. We call our study an assessment study. 3 System and User Simulation Models In this section, we describe our dialog system (ITSPOKE) and the user simulation models which we use in the assessment study. ITSPOKE is a speech-enabled Intelligent Tutoring System that helps students understand qualitative physics questions. In the system, the computer tutor first presents a physics question and the student types an essay as the answer. 
Then, the tutor analyzes the essay and initiates a tutoring dialog to correct misconceptions and to elicit further explanations. A corpus of 100 tutoring dialogs was collected between 20 college students (solving 5 physics problems each) and the computer tutor, yielding 1388 student turns. The correctness of student answers is automatically judged by the system and kept in the system’s logs. Our previous study manually clustered tutor questions into 20 clusters based on the knowledge (e.g., acceleration, Newton’s 3rd Law) that is required to answer each question (Ai and Litman, 2007). We train three simulation models from the real corpus: the random model, the correctness model, and the cluster model. All simulation models generate student utterances on the word level by picking out the recognized student answers (with potential speech recognition errors) from the human subject experiments with different policies. The random model (ran) is a simple unigram model which randomly picks a student’s utterance from the real cor623 pus as the answer to a tutor’s question, neglecting which question it is. The correctness model (cor) is designed to give a correct/incorrect answer with the same probability as the average of real students. For each tutor’s question, we automatically compute the average correctness rate of real student answers from the system logs. Then, a correct/incorrect answer is randomly chosen from the correct/incorrect answer sets for this question. The cluster model (clu) tries to model student learning by assuming that a student will have a higher chance to give a correct answer to the question of a cluster in which he/she mostly answers correctly before. It computes the conditional probability of whether a student answer is correct/incorrect given the content of the tutor’s question and the correctness of the student’s answer to the last previous question that belongs to the same question cluster. We also refer to the real student as the real student model (real) in the paper. We hypothesize that the ranking of the four student models (from the most realistic to the least) is: real, clu, cor, and ran. 4 Assessment Study Design 4.1 Data We decided to conduct a middle-scale assessment study that involved 30 human judges. We conducted a small pilot study to estimate how long it took a judge to answer all survey questions (described in Section 4.2) in one dialog because we wanted to control the length of the study so that judges would not have too much cognitive load and would be consistent and accurate on their answers. Based on the pilot study, we decided to assign each judge 12 dialogs which took about an hour to complete. Each dialog was assigned to two judges. We used three out of the five physics problems from the original real corpus to ensure the variety of dialog contents while keeping the corpus size small. Therefore, the evaluation corpus consisted of 180 dialogs, in which 15 dialogs were generated by each of the 4 student models on each of the 3 problems. 4.2 Survey Design 4.2.1 Survey questions We designed a web survey to collect human judgments on a 5-point scale on both utterance and diFigure 1: Utterance level questions. alog levels. Each dialog is separated into pairs of a tutor question and the corresponding student answer. Figure 1 shows the three questions which are asked for each tutor-student utterance pair. 
The three questions assess the quality of the student answers from three aspects of Grice’s Maxim (Grice, 1975): Maxim of Quantity (u QNT), Maxim of Relevance (u RLV), and Maxim of Manner (u MNR). We do not include the Maxim of Quality because in our task domain the correctness of the student answers depends largely on students’ physics knowledge, which is not a factor we would like to consider when evaluating the realness of the students’ dialog behaviors. In Figure 2, we show the three dialog level questions which are asked at the end of each dialog. The first question (d TUR) is a Turing test type of question which aims to obtain an impression of the student’s overall performance. The second question (d QLT) assesses the dialog quality from a tutoring perspective. The third question (d PAT) sets a higher standard on the student’s performance. Unlike the first two questions which ask whether the student “looks” good, this question further asks whether the judges would like to partner with the particular student. 4.2.2 Survey Website We display one tutor-student utterance pair and the three utterance level questions on each web page. After the judges answer the three questions, he/she will be led to the next page which displays the next pair of tutor-student utterances in the dialog with the same three utterance level questions. The judge 624 Figure 2: Dialog level questions. reads through the dialog in this manner and answers all utterance level questions. At the end of the dialog, three dialog level questions are displayed on one webpage. We provide a textbox under each dialog level question for the judge to type in a brief explanation on his/her answer. After the judge completes the three dialog level questions, he/she will be led to a new dialog. This procedure repeats until the judge completes all of the 12 assigned dialogs. 4.3 Assessment Study 30 college students are recruited as human judges via flyers. Judges are required to be native speakers of American English to make correct judgments on the language use and fluency of the dialog. They are also required to have taken at least one course on Newtonian physics to ensure that they can understand the physics tutoring dialogs and make judgments about the content of the dialogs. We follow the same task assigning procedure that is used in (Linguistic Data Consortium, 2005) to ensure a uniform distribution of judges across student models and dialogs while maintaining a random choice of judges, models, and dialogs. Judges are instructed to work as quickly as comfortably possible. They are encouraged to provide their intuitive reactions and not to ponder their decisions. 5 Assessment Study Results In the initial analysis, we observe that it is a difficult task for human judges to rate on the 5-point scale and the agreements among the judges are fairly low. Table 1 shows for each question, the percentages of d TUR d QLT d PAT u QNT u RLV u MNR 22.8% 27.8% 35.6% 39.2% 38.4% 38.7% Table 1: Percent agreements on 5-point scale pairs of judges who gave the same ratings on the 5point scale. For the rest of the paper, we collapse the “definitely” types of answers with its adjacent “probably” types of answers (more specifically, answer 1 with 2, and 4 with 5). We substitute scores 1 and 2 with a score of 1.5, and scores 4 and 5 with a score of 4.5. A score of 3 remains the same. 5.1 Inter-annotator agreement Table 2 shows the inter-annotator agreements on the collapsed 3-point scale. The first column presents the question types. 
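The agreement statistics reported in the next two tables can be recomputed directly from a confusion matrix of paired ratings such as the one in Table 3. The short sketch below, with a helper name of our own choosing, computes unweighted and linear weighted kappa in plain Python; run on the d_TUR counts of Table 3 it should reproduce the 0.022 and 0.079 values reported for d_TUR in Table 2.

```python
def kappa(matrix, weighted=False):
    """Cohen's kappa for a k x k confusion matrix of paired ratings.

    With weighted=True, linear weights are used: disagreement between
    adjacent categories counts half, i.e. w[i][j] = 1 - |i - j| / (k - 1).
    """
    k = len(matrix)
    n = float(sum(sum(row) for row in matrix))
    row_tot = [sum(row) for row in matrix]
    col_tot = [sum(matrix[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        return 1.0 - abs(i - j) / (k - 1.0) if weighted else float(i == j)

    p_obs = sum(w(i, j) * matrix[i][j]
                for i in range(k) for j in range(k)) / n
    p_exp = sum(w(i, j) * row_tot[i] * col_tot[j]
                for i in range(k) for j in range(k)) / (n * n)
    return (p_obs - p_exp) / (1.0 - p_exp)

# d_TUR confusion matrix from Table 3 (rows/columns: scores 1.5, 3, 4.5).
# Raw agreement is 63/180 = 35.0%, matching the diff=0 column of Table 2.
d_tur = [[20, 26, 20],
         [17, 11, 19],
         [15, 20, 32]]
print(round(kappa(d_tur), 3))                  # 0.022 (unweighted)
print(round(kappa(d_tur, weighted=True), 3))   # 0.079 (linear weighted)
```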
In the first row, “diff” stands for the differences between human judges’ ratings. The column “diff=0” shows the percent agreements on the 3-point scale. We can see the improvements from the original 5-point scale when comparing with Table 1. The column “diff=1” shows the percentages of pairs of judges who agree with each other on a weaker basis in that one of the judges chooses “cannot tell”. The column “diff=2” shows the percentages of pairs of judges who disagree with each other. The column “Kappa” shows the un-weighted kappa agreements and the column “Kappa*” shows the linear weighted kappa. We construct the confusion matrix for each question to compute kappa agreements. Table 3 shows the confusion matrix for d TUR. The first three rows of the first three columns show the counts of judges’ ratings on the 3-point scale. For example, the first cell shows that there are 20 cases where both judges give 1.5 to the same dialog. When calculating the linear weighted kappa, we define the distances between the adjacent categories to be one1. Note that we randomly picked two judges to rate each dialog so that different dialogs are rated by different pairs of judges and one pair of judges only worked on one dialog together. Thus, the kappa agreements here do not reflect the agreement of one pair of judges. Instead, the kappa agreements show the overall observed agreement among every pair of 1We also calculated the quadratic weighted kappa in which the distances are squared and the kappa results are similar to the linear weighted ones. For calculating the two weighted kappas, see http://faculty.vassar.edu/lowry/kappa.html for details. 625 Q diff=0 diff=1 diff=2 Kappa Kappa* d TUR 35.0% 45.6% 19.4% 0.022 0.079 d QLT 46.1% 28.9% 25.0% 0.115 0.162 d PAT 47.2% 30.6% 22.2% 0.155 0.207 u QNT 66.8% 13.9% 19.3% 0.377 0.430 u RLV 66.6% 17.2% 16.2% 0.369 0.433 u MNR 67.5% 15.4% 17.1% 0.405 0.470 Table 2: Agreements on 3-point scale score=1.5 score=3 score=4.5 sum score=1.5 20 26 20 66 score=3 17 11 19 47 score=4.5 15 20 32 67 sum 52 57 71 180 Table 3: Confusion Matrix on d TUR judges controlling for the chance agreement. We observe that human judges have low agreement on all types of questions, although the agreements on the utterance level questions are better than the dialog level questions. This observation indicates that assessing the overall quality of simulated/real dialogs on the dialog level is a difficult task. The lowest agreement appears on d TUR. We investigate the low agreements by looking into judges’ explanations on the dialog level questions. 21% of the judges find it hard to rate a particular dialog because that dialog is too short or the student utterances mostly consist of one or two words. There are also some common false beliefs among the judges. For example, 16% of the judges think that humans will say longer utterances while 9% of the judges think that only humans will admit the ignorance of an answer. 5.2 Rankings of the models In Table 4, the first column shows the name of the questions; the second column shows the name of the models; the third to the fifth column present the percentages of judges who choose answer 1 and 2, can’t tell, and answer 4 and 5. 
For example, when looking at the column “1 and 2” for d TUR, we see that 22.2% of the judges think a dialog by a real student is generated probably or definitely by a computer; more judges (25.6%) think a dialog by the cluster model is generated by a computer; even more judges (32.2%) think a dialog by the correctness model is generated by a computer; and even Question model 1 and 2 can’t tell 4 and 5 d TUR real 22.2% 28.9% 48.9% clu 25.6% 31.1% 43.3% cor 32.2% 26.7% 41.1% ran 51.1% 28.9% 20.0% d QLT real 20.0% 10.0% 70.0% clu 21.1% 20.0% 58.9% cor 24.4% 15.6% 60.0% ran 60.0% 18.9% 21.1% d PAT real 28.9% 21.1% 50.0% clu 41.1% 17.8% 41.1% cor 43.3% 18.9% 37.8% ran 82.2% 14.4% 3.4% Table 4: Rankings on Dialog Level Questions more judges (51.1%) think a dialog by the random model is generated by a computer. When looking at the column “4 and 5” for d TUR, we find that most of the judges think a dialog by the real student is generated by a human while the fewest number of judges think a dialog by the random model is generated by a human. Given that more human-like is better, both rankings support our hypothesis that the quality of the models from the best to the worst is: real, clu, cor, and ran. In other words, although it is hard to obtain well-agreed ratings among judges, we can combine the judges’ ratings to produce the ranking of the models. We see consistent ranking orders on d QLT and d PAT as well, except for a disorder of cluster and correctness model on d QLT indicated by the underlines. When comparing two models, we can tell which model is better from the above rankings. Nevertheless, we also want to know how significant the difference is. We use t-tests to examine the significance of differences between every two models. We average the two human judges’ ratings to get an averaged score for each dialog. For each pair of models, we compare the two groups of the averaged scores for the dialogs generated by the two models using 2-tail t-tests at the significance level of p < 0.05. In Table 5, the first row presents the names of the models in each pair of comparison. Sig means that the t-test is significant after Bonferroni correction; question mark (?) means that the t-test is significant before the correction, but not significant afterwards, we treat this situation as a trend; not means that the t-test is not significant at all. The table shows 626 realrealrealranrancorran cor clu cor clu clu d TUR sig not not sig sig not d QLT sig not not sig sig not d PAT sig ? ? sig sig not u QNT sig not not sig sig not u RLV sig not not sig sig not u MNR sig not not sig sig not Table 5: T-Tests Results that only the random model is significantly different from all other models. The correctness model and the cluster model are not significantly different from the real student given the human judges’ ratings, neither are the two models significantly different from each other. 5.3 Human judgment accuracy on d TUR We look further into d TUR in Table 4 because it is the only question that we know the ground truth. We compute the accuracy of human judgment as (number of ratings 4&5 on real dialogs + number of ratings of 1&2 on simulated dialogs)/(2*total number of dialogs). The accuracy is 39.44%, which serves as further evidence that it is difficult to discern human from simulated users directly. A weaker accuracy is calculated to be 68.35% when we treat “cannot tell” as a correct answer as well. 
6 Validating Automatic Measures Since it is expensive to use human judges to rate simulated dialogs, we are interested in building prediction models of human judgments using automatic measures. If the prediction model can reliably mimic human judgments, it can be used to rate new simulation models without collecting human ratings. In this section, we use a subset of the automatic measures proposed in (Schatzmann et al., 2005) that are applicable to our data to predict human judgments. Here, the human judgment on each dialog is calculated as the average of the two judges’ ratings. We focus on predicting human judgments on the dialog level because these ratings represent the overall performance of the student models. We use six high-level dialog feature measures including the number of student turns (Sturn), the number of tutor turns (Tturn), the number of words per student turn (Swordrate), the number of words per tutor turn (Twordrate), the ratio of system/user words per dialog (WordRatio), and the percentage of correct answers (cRate). 6.1 The Regression Model We use stepwise multiple linear regression to model the human judgments using the set of automatic features we listed above. The stepwise procedure automatically selects measures to be included in the model. For example, d TUR is predicted as 3.65 − 0.08 ∗WordRatio −3.21 ∗Swordrate, with an R-square of 0.12. The prediction models for d QLT and d PAT have similar low R-square values of 0.08 and 0.17, respectively. This result is not surprising because we only include the surface level automatic measures here. Also, these measures are designed for comparison between models instead of prediction. Thus, in Section 6.2, we build a ranking model to utilize the measures in their comparative manner. 6.2 The Ranking Model We train three ranking models to mimic human judges’ rankings of the real and the simulated student models on the three dialog level questions using RankBoost, a boosting algorithm for ranking ((Freund et al., 2003), (Mairesse et al., 2007)). We briefly explain the algorithm using the same terminologies and equations as in (Mairesse et al., 2007), by building the ranking model for d TUR as an example. In the training phase, the algorithm takes as input a group of dialogs that are represented by values of the automatic measures and the human judges’ ratings on d TUR. The RankBoost algorithm treats the group of dialogs as ordered pairs: T = {(x, y)| x, y are two dialog samples, x has a higher human rated score than y } Each dialog x is represented by a set of m indicator functions hs(x) (1 ≤s ≤m). For example: hs(x) = ½ 1 if WordRatio(x) ≥0.47 0 otherwise Here, the threshold of 0.47 is calculated by RankBoost. α is a parameter associated with each indicator function. For each dialog, a ranking score is 627 calculated as: F(x) = X s αshs(x) (1) In the training phase, the human ratings are used to set α by minimizing the loss function: LOSS = 1 |T| X (x,y)∈T eval(F(x) ≤F(y)) (2) The eval function returns 0 if (x, y) pair is ranked correctly, and 1 otherwise. In other words, LOSS score is the percentage of misordered pairs where the order of the predicted scores disagree with the order indicated by human judges. In the testing phase, the ranking score for every dialog is calculated by Equation 1. A baseline model which ranks dialog pairs randomly produces a LOSS of 0.5 (lower is better). 
While LOSS indicates how many pairs of dialogs are ranked correctly, our main focus here is to rank the performance of the four student models instead of individual dialogs. Therefore, we propose another Averaged Model Ranking (AMR) score. AMR is computed as the sum of the ratings of all the dialogs generated by one model averaged by the number of the dialogs. The four student models are then ranked based on their AMR scores. The chance to get the right ranking order of the four student models by random guess is 1/(4!). Table 6 shows a made-up example to illustrate the two measures. real 1 and real 2 are two dialogs generated by the real student model; ran 1 and ran 2 are two dialogs by the random model. The second and third column shows the human-rated score as the gold standard and the machine-predicted score in the testing phase respectively. The LOSS in this example is 1/6, because only the pair of real 2 and ran 1 is misordered out of all the 6 possible pair combinations. We then compute the AMR of the two models. According to human-rated scores, the real model is scored 0.75 (=(0.9+0.6)/2) while the random model is scored 0.3. When looking at the predicted scores, the real model is scored 0.65, which is also higher than the random model with a score of 0.4. We thus conclude that the ranking model ranks the two student models correctly according to the overall rating measure. We use both LOSS and AMR to evaluate the ranking models. Dialog Human-rated Score Predicted Score real 1 0.9 0.9 real 2 0.6 0.4 ran 1 0.4 0.6 ran 2 0.2 0.2 Table 6: A Made-up Example of the Ranking Model Cross Validation d TUR d QLT d PAT Regular 0.176 0.155 0.151 Minus-one-model 0.224 0.180 0.178 Table 7: LOSS scores for Regular and Minus-one-model (during training) Cross Validations First, we use regular 4-fold cross validation where we randomly hold out 25% of the data for testing and train on the remaining 75% of the data for 4 rounds. Both the training and the testing data consist of dialogs equally distributed among the four student models. However, since the practical usage of the ranking model is to rank a new model against several old models without collecting additional human ratings, we further test the algorithm by repeating the 4 rounds of testing while taking turns to hold out the dialogs from one model in the training data, assuming that model is the new model that we do not have human ratings to train on. The testing corpus still consists of dialogs from all four models. We call this approach the minus-one-model cross validation. Table 7 shows the LOSS scores for both cross validations. Using 2-tailed t-tests, we observe that the ranking models significantly outperforms the random baseline in all cases after Bonferroni correction (p < 0.05). When comparing the two cross validation results for the same question, we see more LOSS in the more difficult minus-one-model case. However, the LOSS scores do not offer a direct conclusion on whether the ranking model ranks the four student models correctly or not. To address this question, we use AMR scores to re-evaluate all cross validation results. Table 8 shows the humanrated and predicted AMR scores averaged over four rounds of testing on the regular cross validation results. We see that the ranking model gives the same rankings of the student models as the human judges on all questions. 
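The two measures can be made concrete with the made-up example of Table 6 above. The sketch below, with our own helper names, computes LOSS as the fraction of misordered dialog pairs and AMR as the per-model average score; on the Table 6 data it reproduces the 1/6 LOSS and the model scores of 0.75 versus 0.3 (human-rated) and 0.65 versus 0.4 (predicted).

```python
from itertools import combinations

# (dialog id, model, human-rated score, predicted score), as in Table 6.
dialogs = [("real_1", "real", 0.9, 0.9),
           ("real_2", "real", 0.6, 0.4),
           ("ran_1",  "ran",  0.4, 0.6),
           ("ran_2",  "ran",  0.2, 0.2)]

def loss(dialogs):
    """Fraction of ordered pairs (x rated higher than y by the human judges)
    whose predicted scores fail to preserve that order; predicted ties count
    as errors, following eval(F(x) <= F(y))."""
    ordered = [(x, y) if x[2] > y[2] else (y, x)
               for x, y in combinations(dialogs, 2) if x[2] != y[2]]
    bad = sum(1 for x, y in ordered if x[3] <= y[3])
    return bad / len(ordered)

def amr(dialogs, score_index):
    """Averaged Model Ranking: mean score of the dialogs of each model
    (score_index 0 = human-rated, 1 = predicted)."""
    per_model = {}
    for _, model, *scores in dialogs:
        per_model.setdefault(model, []).append(scores[score_index])
    return {m: round(sum(v) / len(v), 2) for m, v in per_model.items()}

print(loss(dialogs))    # 0.1666... = 1/6 (only the real_2 / ran_1 pair is misordered)
print(amr(dialogs, 0))  # {'real': 0.75, 'ran': 0.3}
print(amr(dialogs, 1))  # {'real': 0.65, 'ran': 0.4}
```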
When applying AMR on the minus-one-model cross validation results, we see similar results that the ranking model reproduces hu628 real clu cor ran human predicted human predicted human predicted human predicted d TUR 0.68 0.62 0.65 0.59 0.63 0.52 0.51 0.49 d QLT 0.75 0.71 0.71 0.63 0.69 0.61 0.48 0.50 d PAR 0.66 0.65 0.60 0.60 0.58 0.57 0.31 0.32 Table 8: AMR Scores for Regular Cross Validation man judges’ rankings. Therefore, we suggest that the ranking model can be used to evaluate a new simulation model by ranking it against several old models. Since our testing corpus is relatively small, we would like to confirm this result on a large corpus and on other dialog systems in the future. 7 Conclusion and Future Work Automatic evaluation measures are used in evaluating simulated dialog corpora. In this study, we investigate a set of previously proposed automatic measures by comparing the conclusions drawn by these measures with human judgments. These measures are considered as valid if the conclusions drawn by these measures agree with human judgments. We use a tutoring dialog corpus with real students, and three simulated dialog corpora generated by three different simulation models trained from the real corpus. Human judges are recruited to read the dialog transcripts and rate the dialogs by answering different utterance and dialog level questions. We observe low agreements among human judges’ ratings. However, the overall human ratings give consistent rankings on the quality of the real and simulated user models. Therefore, we build a ranking model which successfully mimics human judgments using previously proposed automatic measures. We suggest that the ranking model can be used to rank new simulation models against the old models in order to assess the quality of the new model. In the future, we would like to test the ranking model on larger dialog corpora generated by more simulation models. We would also want to include more automatic measures that may be available in the richer corpora to improve the ranking and the regression models. Acknowledgments This work is supported by NSF 0325054. We thank J. Tereault, M. Rotaru, K. Forbes-Riley and the anonymous reviewers for their insightful suggestions, F. Mairesse for helping with RankBoost, and S. Silliman for his help in the survey experiment. References H. Ai and D. Litman. 2007. Knowledge Consistent User Simulations for Dialog Systems. In Proc. of Interspeech 2007. G. Chung. 2004. Developing a Flexible Spoken Dialog System Using Simulation. In Proc. of ACL 04. Y. Freund, R. Iyer, R.E. Schapire, and Y. Singer. 2003. An Efficient Boosting Algorithm for Combining Preferences. Journal of Machine Learning Research. H. P. Grice 1975. Logic and Conversation. Syntax and Semantics III: Speech Acts, 41-58. E. Levin, R. Pieraccini, and W. Eckert. 2000. A Stochastic Model of Human-Machine Interaction For learning Dialog Strategies. IEEE Trans. On Speech and Audio Processing, 8(1):11-23. Linguistic Data Consortium. 2005. Linguistic Data Annotation Specification: Assessment of Fluency and Adequacy in Translations. F. Mairesse, M. Walker, M. Mehl and R. Moore. 2007. Using Linguistic Cues for the Automatic Recognition of Personality in Conversation and Text. Journal of Artificial Intelligence Research, Vol 30, pp 457-501. K.A. Papineni, S. Roukos, R.T. Ward, and W-J. Zhu. 2002. Bleu: A Method for Automatic Evaluation of Machine Translation. In Proc. of 40th ACL. J. Schatzmann, K. Georgila, and S. Young. 2005. 
Quantitative Evaluation of User Simulation Techniques for Spoken Dialog Systems. In Proc. of 6th SIGdial. K. Scheffler. 2002. Automatic Design of Spoken Dialog Systems. Ph.D. diss., Cambridge University. J. D. Williams. 2007. A Method for Evaluating and Comparing User Simulations: The Cramer-von Mises Divergence. Proc IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). M. Walker, D. Litman, C. Kamm, and A. Abella. 1997. PARADISE: A Framework for Evaluating Spoken Dialog Agents. In Proc. of ACL 97. M. Walker, O. Rambow, and M. Rogati. 2001. SPoT: A Trainable Sentence Planner. In Proc. of NAACL 01.
2008
71
Proceedings of ACL-08: HLT, pages 630–637, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Robust Dialog Management with N-best Hypotheses Using Dialog Examples and Agenda Cheongjae Lee, Sangkeun Jung and Gary Geunbae Lee Pohang University of Science and Technology Department of Computer Science and Engineering Pohang, Republic of Korea {lcj80,hugman,gblee}@postech.ac.kr Abstract This work presents an agenda-based approach to improve the robustness of the dialog manager by using dialog examples and n-best recognition hypotheses. This approach supports n-best hypotheses in the dialog manager and keeps track of the dialog state using a discourse interpretation algorithm with the agenda graph and focus stack. Given the agenda graph and n-best hypotheses, the system can predict the next system actions to maximize multi-level score functions. To evaluate the proposed method, a spoken dialog system for a building guidance robot was developed. Preliminary evaluation shows this approach would be effective to improve the robustness of example-based dialog modeling. 1 Introduction Development of spoken dialog systems involves human language technologies which must cooperate to answer user queries. Since the performance in human language technologies such as Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU)1 have been improved, this advance has made it possible to develop spoken dialog systems for many different application domains. Nevertheless, there are major problems for practical spoken dialog systems. One of them which must be considered by the Dialog Manager (DM) is the error propagation from ASR and NLU modules. In 1Through this paper, we will use the term natural language to include both spoken language and written language general, errors in spoken dialog systems are prevalent due to errors in speech recognition or language understanding. These errors can cause the dialog system to misunderstand a user and in turn lead to an inappropriate response. To avoid these errors, a basic solution is to improve the accuracy and robustness of the recognition and understanding processes. However, it has been impossible to develop perfect ASR and NLU modules because of noisy environments and unexpected input. Therefore, the development of robust dialog management has also been one of the most important goals in research on practical spoken dialog systems. In the dialog manager, a popular method to deal with these errors is to adopt dialog mechanisms for detecting and repairing potential errors at the conversational level (McTear et al., 2005; Torres et al., 2005; Lee et al., 2007). In human-computer communication, the goal of error recovery strategy is to maximize the user’s satisfaction of using the system by guiding for the repair of the wrong information by human-computer interaction. On the other hand, there are different approaches to improve the robustness of dialog management using n-best hypotheses. Rather than Markov Decision Processes (MDPs), partially observable MDPs (POMDPs) potentially provide a much more powerful framework for robust dialog modeling since they consider nbest hypotheses to estimate the distribution of the belief state (Williams and Young, 2007). In recent, we proposed another data-driven approach for the dialog modeling called Examplebased Dialog Modeling (EBDM) (Lee et al., 2006a). 
However, difficulties occur when attempting to de630 ploy EBDM in practical spoken dialog systems in which ASR and NLU errors are frequent. Thus, this paper proposes a new method to improve the robustness of the EBDM framework using an agendabased approach and n-best recognition hypotheses. We consider a domain-specific agenda to estimate the best dialog state and example because, in taskoriented systems, a current dialog state is highly correlated to the previous dialog state. We have also used the example-based error recovery approach to handle exceptional cases due to noisy input or unexpected focus shift. This paper is organized as follows. Previous related work is described in Section 2, followed by the methodology and problems of the example-based dialog modeling in Section 3. An agenda-based approach for heuristics is presented in Section 4. Following that, we explain greedy selection with n-best hypotheses in Section 5. Section 6 describes the error recovery strategy to handle unexpected cases. Then, Section 7 provides the experimental results of a real user evaluation to verify our approach. Finally, we draw conclusions and make suggestions for future work in Section 8. 2 Related Work In many spoken dialog systems that have been developed recently, various knowledge sources are used. One of the knowledge sources, which are usually application-dependent, is an agenda or task model. These are powerful representations for segmenting large tasks into more reasonable subtasks (Rich and Sidner, 1998; Bohus and Rudnicky, 2003; Young et al., 2007). These are manually designed for various purposes including dialog modeling, search space reduction, domain knowledge, and user simulation. In Collagen (Rich and Sidner, 1998), a plan tree, which is an approximate representation of a partial SharedPlan, is composed of alternating act and plan recipe nodes for internal discourse state representation and discourse interpretation. In addition, Bohus and Rudnicky (2003) have presented a RavenClaw dialog management which is an agenda-based architecture using hierarchical task decomposition and an expectation agenda. For modeling dialog, the domain-specific dialog control is represented in the Dialog Task Specification layer using a tree of dialog agents, with each agent handling a certain subtask of the dialog task. Recently, the problem of a large state space in POMDP framework has been solved by grouping states into partitions using user goal trees and ontology rules as heuristics (Young et al., 2007). In this paper, we are interested in exploring algorithms that would integrate this knowledge source for users to achieve domain-specific goals. We used an agenda graph whose hierarchy reflects the natural order of dialog control. This graph is used to both keep track of the dialog state and to select the best example using multiple recognition hypotheses for augmenting previous EBDM framework. 3 Example-based Dialog Modeling Our approach is implemented based on ExampleBased Dialog Modeling (EBDM) which is one of generic dialog modelings. We begin with a brief overview of the EBDM framework in this section. EBDM was inspired by Example-Based Machine Translation (EBMT) (Nagao, 1984), a translation system in which the source sentence can be translated using similar example fragments within a large parallel corpus, without knowledge of the language’s structure. The idea of EBMT can be extended to determine the next system actions by finding similar dialog examples within the dialog corpus. 
The system action can be predicted by finding semantically similar user utterances with the dialog state. The dialog state is defined as the set of relevant internal variables that affect the next system action. EBDM needs to automatically construct an example database from the dialog corpus. Dialog Example DataBase (DEDB) is semantically indexed to generalize the data in which the indexing keys can be determined according to state variables chosen by a system designer for domain-specific applications (Figure 1). Each turn pair (user turn, system turn) in the dialog corpus is mapped to semantic instances in the DEDB. The index constraints represent the state variables which are domain-independent attributes. To determine the next system action, there are three processes in the EBDM framework as follows: • Query Generation: The dialog manager makes Structured Query Language (SQL) 631 Figure 1: Indexing scheme for dialog example database on building guidance domain statement using discourse history and NLU results. • Example Search: The dialog manager searches for semantically similar dialog examples in the DEDB given the current dialog state. If no example is retrieved, some state variables can be ignored by relaxing particular variables according to the level of importance given the dialog’s genre and domain. • Example Selection: The dialog manager selects the best example to maximize the utterance similarity measure based on lexicosemantic similarity and discourse history similarity. Figure 2 illustrates the overall strategy of EBDM framework for spoken dialog systems. The EBDM framework is a simple and powerful approach to rapidly develop natural language interfaces for multi-domain dialog processing (Lee et al., 2006b). However, in the context of spoken dialog system for domain-specific tasks, this framework must solve two problems: (1) Keeping track of the dialog state with a view to ensuring steady progress towards task completion, (2) Supporting n-best recognition hypotheses to improve the robustness of dialog manager. Consequently, we sought to solve these probFigure 2: Strategy of the Example-Based Dialog Modeling (EBDM) framework. lems by integrating the agenda graph as a heuristic which reflects the natural hierarchy and order of subtasks needed to complete the task. 4 Agenda Graph In this paper, agenda graph G is simply a way of encoding the domain-specific dialog control to complete the task. An agenda is one of the subtask flows, which are possible paths from root node to terminal node. G is composed of nodes (v) which correspond to possible intermediate steps in the process of completing the specified task, and edges (e) which con632 Figure 3: Example of an agenda graph for a building guidance. nect nodes. In other words, v corresponds to user goal state to achieve domain-specific subtask in its expected agenda. Each node includes three different components: (1) A precondition that must be true before the subtask is executed; (2) A description of the node that includes its label and identifier; and (3) Links to nodes that will be executed at the subsequent turn. For every edge eij = (vi, vj), we defined a transition probability based on prior knowledge of dialog flows. This probability can be assigned based on empirical analysis of human-computer conversations, assuming that the users behave in consistent, goal-directed ways. Alternatively, it can be assigned manually at the discretion of the system developer to control the dialog flow. 
This heuristic has advantages for practical spoken dialog system because a key condition for successful task-oriented dialog system is that the user and system know which task or subtask is currently being executed. To exemplify, Figure 3 illustrates part of the agenda graph for PHOPE, a building guidance robot using the spoken dialog system. In Figure 3, G is represented by a Directed Acyclic Graph (DAG), where each link in the graph reflects a transition between one user goal state and the next. The set of paths in G represent an agenda designed by the system developer. We adapted DAG representation because it is more intuitive and flexible than hierarchical tree representation. The syntax for graph representation in our system is described by an XML schema (Figure 4). 4.1 Mapping Examples to Nodes In the agenda graph G, each node v should hold relevant dialog examples corresponding to user goal states. Therefore, the dialog examples in DEDB are Figure 4: XML description for the agenda graph mapped to a user goal state when a precondition of the node is true. Initially, the root node of the DAG is the starting state, where there is no dialog example. Then, the attributes of each dialog example are examined via the preconditions of each user goal node by breadth-first traversal. If the precondition is true, the node holds relevant that may appear in the user’s goal state. The method of selecting the best of these examples will be described in 5. 4.2 Discourse Interpretation Inspired by Collagen (Rich and Sidner, 1998; Lesh et al., 2001), we investigated a discourse interpretation algorithm to consider how the current user’s goal can contribute to the current agenda in a focus stack according to Lochbaum’s discourse interpretation algorithm (Lochbaum, 1998). The focus stack takes into account the discourse structure by keeping track of discourse states. In our system, the focus stack is a set of user goal nodes which lead to completion of the subtask. The top on the focus stack is the previous node in this set. The focus stack is updated after every utterance. To interpret the type of the discourse state, this breaks down into five main cases of possible current node for an observed user’s goal: • NEW TASK: Starting a new task to complete a new agenda (Child of the root). • NEW SUB TASK: Starting a new subtask to partially shift focus (A different child of the parent). 633 • NEXT TASK: Working on the next subtask contributing to current agenda (Its child node). • CURRENT TASK: Repeating or modifying the observed goal on the current subtask (Current node). • PARENT TASK: Modifying the observation on the previous subtask (Parent node). Nodes in parentheses denote the topological position of the current node relative to the top node on the focus stack. If NEXT TASK is selected, the current node is pushed to the focus stack. NEXT TASK covers totally focused behavior, i.e., when there are no unexpected focus shifts. This occurs when the current user utterance is highly correlated to the previous system utterance. The remaining four cases cover various types of discourse state. For example, NEW SUB TASK involves starting a new subtask to partially shift focus, thereby popping the previous goal off the focus stack and pushing a new user goal for the new subtask. NEW TASK, which is placed on the node linked to root node, involves starting a new task to complete a new agenda. 
Therefore, a dialog is re-started and the current node is pushed onto the focus stack with the current user goal as its first element. If none of the above cases holds, the discourse interpretation concludes that the current input should be rejected because we expect user utterances to be correlated to the previous turn in a task-oriented domain. Therefore, this interpretation does not contribute to the current agenda on the focus stack due to ASR and NLU errors that are due to noisy environments and unexpected input. These cases can be handled by using an error recovery strategy in Section 6. Figure 5 shows some examples of pseudo-codes used in the discourse interpretation algorithm to select the best node among possible next nodes. S,H,and G denote the focus stack, hypothesis, and agenda graph, respectively. The INTERPRET algorithm is initially called to interpret the current discourse state. Furthermore, the essence of a discourse interpretation algorithm is to find candidate nodes of possible next subtask for an observed user goal, expressed in the definition of GENERATE. The SELECT algorithm selects the best node to maximize Figure 5: Pseudo-codes for the discourse interpretation algorithm the score function based on current input and discourse structure given the focus stack. The details of how the score of candidate nodes are calculated are explained in Section 5. 5 Greedy Selection with n-best Hypotheses Many speech recognizers can generate a list of plausible hypotheses (n-best list) but output only the most probable one. Examination of the n-best list reveals that the best hypothesis, the one with the lowest word error rate, is not always in top-1 position but sometimes in the lower rank of the n-best list. Therefore, we need to select the hypothesis that maximizes the scoring function among a set of n-best hypotheses of each utterance. The role of agenda graph is for a heuristic to score the discourse state to successfully complete the task given the focus stack. The current system depends on a greedy policy which is based on immediate transitions rather than full transitions from the initial state. The greedy selection with n-best hypotheses is implemented as follows. Firstly, every hypothesis hi is scanned and all possible nodes are generated using the discourse interpretation. Secondly, the multi-level score functions are computed for each candidate node ci given a hypothesis hi. Using the greedy algorithm, the node with the highest score is selected as the user goal state. Finally, the system actions are predicted by the dialog example to maximize the example score in the best node. The generation of candidate nodes is based on multiple hypotheses from the previous EBDM 634 framework. This previous EBDM framework chose a dialog example to maximize the utterance similarity measure. However, our system generates a set of multiple dialog examples with each utterance similarity over a threshold given a specific hypothesis. Then, the candidate nodes are generated by matching to each dialog example bound to the node. If the number of matching nodes is exactly one, that node is selected. Otherwise, the best node which would be pushed onto the focus stack must be selected using multi-level score functions. 5.1 Node Selection The node selection is determined by calculating some score functions. We defined multi-level score functions that combine the scores of ASR, SLU, and DM modules, which range from 0.00 to 1.00. 
The best node is selected by greedy search with multiple hypotheses H and candidate nodes C as follows: c∗= arg max hi∈H,ci∈C ωSH(hi) + (1 −ω)SD(ci|S) where H is a list of n-best hypotheses and C is a set of nodes to be generated by the discourse interpretation. For the node selection, we divided the score function into two functions SH(hi), hypothesis score, and SD(ci|S), discourse score, where ci is the focus node to be generated by single hypothesis hi. We defined the hypothesis score at the utterance level as SH(hi) = αSrec(hi) + βScont(hi) where Srec(hi) denotes the recognition score which is a generalized confidence score over the confidence score of the top-rank hypothesis. Scont(hi) is the content score in the view of content management to access domain-specific contents. For example, in the building guidance domain, theses contents would be a building knowledge database including room name, room number, and room type. The score is defined as: Scont(hi) =    N(Chi) N(Cprev) if Chi ⊆Cprev N(Chi) N(Ctotal) if Chi ⊈Cprev where Cprev is a set of contents at the previous turn and Ctotal is a set of total contents in the content database. Chi denotes a set of focused contents by hypothesis hi at the current turn. N(C) represents the number of contents C. This score reflects the degree of content coherence because the number of contents of interest has been gradually reduced without any unexpected focus shift. In the hypothesis score, α and β denote weights which depend on the accuracy of speech recognition and language understanding, respectively. In addition to the hypothesis score, we defined the discourse score SD at the discourse level to consider the discourse structure between the previous node and current node given the focus stack S. This score is the degree to which candidate node ci is in focus with respect to the previous user goal and system utterance. In the agenda graph G, each transition has its own probability as prior knowledge. Therefore, when ci is NEXT TASK, the discourse score is computed as SD(ci|S) = P(ci|c = top(S)) where P(ci|c = top(S)) is a transition probability from the top node c on the focus stack S to the candidate node ci. However, there is a problem for cases other than NEXT TASK because the graph has no backward probability. To solve this problem, we assume that the transition probability may be lower than that of the NEXT TASK case because a user utterance is likely to be influenced by the previous turn. Actually, when using the task-oriented dialog system, typical users stay focused most of the time during imperfect communication (Lesh et al., 2001). To assign the backward transition probability, we obtain the minimum transition probability Pmin(S) among from the top node on the focus stack S to its children. Then, the discourse score SD can be formalized when the candidate node ci does not correspond to NEXT TASK as follows: SD(ci|S) = max{Pmin(S) −λDist(ci, c), 0} where λ is a penalty of distance between candidate node and previous node, Dist(ci, c), according to type of candidate node such as NEW TASK and NEW SUB TASK. The simplest case is to uniformly assign λ to a specific value. To select the best node using the node score, we use ω (0 ≤ω ≤1) as an interpolation weight 635 between the hypothesis score Sh and the discourse score SD. This weight is empirically assigned according to the characteristics of the dialog genre and task. 
For example, ω can set lower to manage the transactional dialog in which the user utterance is highly correlated to the previous system utterance, i.e., a travel reservation task, because this task usually has preference orders to fill slots. 5.2 Example Selection After selecting the best node, we use the example score to select the best dialog example mapped into this node. e∗= arg max ej∈E(c∗) ωSutter(h∗, ej)+(1−ω)Ssem(h∗, ej) where h∗is the best hypothesis to maximize the node score and ej is a dialog example in the best node c∗. Sutter(h, ej) denotes the value of the utterance similarity of the user’s utterances between the hypothesis h and dialog example ej in the best node c∗(Lee et al., 2006a). To augment the utterance similarity used in the EBDM framework, we also defined the semantic score for example selection, Ssem(h, ej): Ssem(h, ej) = # of matching index keys # of total index keys The semantic score is the ratio of matching index keys to the number of total index keys between hypothesis h and example record ej. This score reflects that a dialog example is semantically closer to the current utterance if the example is selected with more index keys. After processing of the node and example selection, the best example is used to predict the system actions. Therefore, the dialog manager can predict the next actions with the agenda graph and n-best recognition hypotheses. 6 Error Recovery Strategy As noted in Section 4.2, the discourse interpretation sometimes fails to generate candidate nodes. In addition, the dialog manager should confirm the current information when the score falls below some threshold. For these cases, we adapt an examplebased error recovery strategy (Lee et al., 2007). In this approach, the system detects that something is wrong in the user’s utterance and takes immediate steps to address the problem using some help messages such as UtterHelp, InfoHelp, and UsageHelp in the example-based error recovery strategies. We also added a new help message, AgendaHelp, that uses the agenda graph and the label of each node to tell the user which subtask to perform next such as ”SYSTEM: Next, you can do the subtask 1)Search Location with Room Name or 2)Search Location with Room Type”. 7 Experiment & Result First we developed the spoken dialog system for PHOPE in which an intelligent robot can provide information about buildings (i.e., room number, room location, room name, room type) and people (i.e., name, phone number, e-mail address, cellular phone number). If the user selects a specific room to visit, then the robot takes the user to the desired room. For this system, ten people used the WOZ method to collect a dialog corpus of about 500 utterances from 100 dialogs which were based on a set of pre-defined 10 subjects relating to domain-specific tasks. Then, we designed an agenda graph and integrated it into the EBDM framework. In an attempt to quantify the impact of our approach, five Korean users participated in a preliminary evaluation. We provided them with pre-defined scenarios and asked them to collect test data from 50 dialogs, including about 150 utterances. After processing each dialog, the participants completed a questionnaire to assess their satisfaction with aspects of the performance evaluation. The speech recognition hypotheses are obtained by using the Hidden Markov model Toolkit (HTK) speech recognizer adapted to our application domain in which the word error rate (WER) is 21.03%. The results of the Task Completion Rate (TCR) are shown in Table 1. 
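The example-selection step of Section 5.2 admits an equally small sketch; the dialog-example records and their index keys are hypothetical stand-ins, and reading the total index keys as the union of the two key sets is an assumption.

# S_sem(h, e) = (# matching index keys) / (# total index keys), with keys taken as sets
def semantic_score(hyp_keys, example_keys):
    total = hyp_keys | example_keys                        # assumption: total = union of both key sets
    return len(hyp_keys & example_keys) / len(total) if total else 0.0

# e* = argmax_e  omega * S_utter(h*, e) + (1 - omega) * S_sem(h*, e)
def select_example(best_hyp_keys, best_hyp_text, examples, utter_sim, omega=0.5):
    return max(examples,
               key=lambda e: omega * utter_sim(best_hyp_text, e["utterance"])
                             + (1 - omega) * semantic_score(best_hyp_keys, e["keys"]))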
We explored the effects of our agenda-based approach with n-best hypotheses compared to the previous EBDM framework which has no agenda graph and supports only 1-best hypothesis. Note that using 10-best hypotheses and the agenda graph increases the TCR from 84.0% to 90.0%, that is, 45 out of 50 dialogs were completed successfully. The average number of turns (#AvgTurn) to completion was also shorter, which 636 shows 4.35 turns per a dialog using the agenda graph and 10-best hypotheses. From these results, we conclude that the the use of the n-best hypotheses with the agenda graph is helpful to improve the robustness of the EBDM framework against noisy inputs. System #AvgTurn TCR (%) 1-best(-AG) 4.65 84.0 10-best(+AG) 4.35 90.0 Table 1: Task completion rate according to using the AG (Agenda Graph) and n-best hypotheses for n=1 and n=10. 8 Conclusion & Discussion This paper has proposed a new agenda-based approach with n-best recognition hypotheses to improve the robustness of the Example-based Dialog Modeling (EBDM) framework. The agenda graph can be thought of as a hidden cost of applying our methodology. However, an explicit agenda is necessary to successfully achieve the purpose of using spoken dialog system. Our preliminary results indicate this fact that the use of agenda graph as heuristics can increase the TCR. In addition, our approach is robust to recognition errors because it maintains multiple hypotheses for each user utterance. There are several possible subjects for further research on our approach. First, the optimal interpolation weights should be determined. This task will require larger dialog corpora by using user simulation. Second, the cost of designing the agenda graph should be reduced. We have focused on developing a system to construct this graph semi-automatically by applying dialog state clustering and utterance clustering to achieve hierarchical clustering of dialog examples. Finally, future work will include expanding our system to other applications, such as navigation systems for automobiles. Acknowledgement This work was supported by grant No. RTI04-02-06 from the Regional Technology Innovation Program and by the Intelligent Robotics Development Program, one of the 21st Century Frontier R&D Programs funded by the Ministry of Commerce, Industry and Energy (MOICE) of Korea. References Bohus, B. and Rudnicky A. 2003. RavenClaw: Dialog Management Using Hierarchical Task Decomposition and an Expectation Agenda. Proceedings of the European Conference on Speech, Communication and Technology, 597–600. Grosz, B.J. and Kraus, S. 1996. Collaborative Plans for Complex Group Action. Artificial Intelligence, 86(2):269–357. Lee, C., Jung, S., Eun, J., Jeong, M., and Lee, G.G. 2006. A Situation-based Dialogue Management using Dialogue Examples. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 69–72. Lee, C., Jung, S., Jeong, M., and Lee, G.G. 2006. Chat and Goal-oriented Dialog Together: A Unified Example-based Architecture for Multi-domain Dialog Management. Proceedings of the IEEE Spoken Language Technology Workshop, 194-197. Lee, C., Jung, S., and Lee, G.G. 2007. Example-based Error Reocvery Strategy For Spoken Dialog System. Proceedings of the IEEE Automatic Speech Recognition and Understanding Workshop, 538–543. Lesh, N., Rich, C., and Sidner, C. 2001. Collaborating with focused and unfocused users under imperfect communication. Proceedings of the International Conference on User Modeling, 63–74. Lochbaum, K.E. 1998. 
A Collaborative Planning Model of Intentional Structure. Computational Linguistics, 24(4):525–572. McTear, M., O’Neil, I., Hanna, P., and Liu, X. 2005. Handling errors and determining confirmation strategies-An object-based approach. Speech Communication, 45(3):249–269. Nagao, M. 1984. A Frame Work of a Mechnical Translatino between Japanese and English by Analogy Principle. Proceedings of the international NATO symposium on artificial and human intelligence, 173–180. Rich, C. and Sidner, C.. 1998. Collagen: A Collaboration Agent for Software Interface Agents. Journal of User Modeling and User-Adapted Interaction, 8(3):315–350. Torres, F., Hurtado, L.F., Garcia, F., Sanchis, E., and Segarra, E. 2005. Error Handling in a Stochastic Dialog System through Confidence Measure. Speech Communication, 45(3):211–229. Williams, J.D. and Young, S. 2007. Partially Observable Markov Decision Processes for Spoken Dialog Systems. Computer Speech Language, 21(2):393-422. Young, S., Schatzmann, J., Weilhammer, K., and Ye, H.. 2007. The Hidden Information State Approach to Dialog Management. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 149–152. 637
Proceedings of ACL-08: HLT, pages 638–646, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Learning Effective Multimodal Dialogue Strategies from Wizard-of-Oz data: Bootstrapping and Evaluation Verena Rieser School of Informatics University of Edinburgh Edinburgh, EH8 9LW, GB [email protected] Oliver Lemon School of Informatics University of Edinburgh Edinburgh, EH8 9LW, GB [email protected] Abstract We address two problems in the field of automatic optimization of dialogue strategies: learning effective dialogue strategies when no initial data or system exists, and evaluating the result with real users. We use Reinforcement Learning (RL) to learn multimodal dialogue strategies by interaction with a simulated environment which is “bootstrapped” from small amounts of Wizard-of-Oz (WOZ) data. This use of WOZ data allows development of optimal strategies for domains where no working prototype is available. We compare the RL-based strategy against a supervised strategy which mimics the wizards’ policies. This comparison allows us to measure relative improvement over the training data. Our results show that RL significantly outperforms Supervised Learning when interacting in simulation as well as for interactions with real users. The RL-based policy gains on average 50-times more reward when tested in simulation, and almost 18-times more reward when interacting with real users. Users also subjectively rate the RL-based policy on average 10% higher. 1 Introduction Designing a spoken dialogue system is a timeconsuming and challenging task. A developer may spend a lot of time and effort anticipating the potential needs of a specific application environment and then deciding on the most appropriate system action (e.g. confirm, present items,. . . ). One of the key advantages of statistical optimisation methods, such as Reinforcement Learning (RL), for dialogue strategy design is that the problem can be formulated as a principled mathematical model which can be automatically trained on real data (Lemon and Pietquin, 2007; Frampton and Lemon, to appear). In cases where a system is designed from scratch, however, there is often no suitable in-domain data. Collecting dialogue data without a working prototype is problematic, leaving the developer with a classic chicken-and-egg problem. We propose to learn dialogue strategies by simulation-based RL (Sutton and Barto, 1998), where the simulated environment is learned from small amounts of Wizard-of-Oz (WOZ) data. Using WOZ data rather than data from real HumanComputer Interaction (HCI) allows us to learn optimal strategies for domains where no working dialogue system already exists. To date, automatic strategy learning has been applied to dialogue systems which have already been deployed using handcrafted strategies. In such work, strategy learning was performed based on already present extensive online operation experience, e.g. (Singh et al., 2002; Henderson et al., 2005). In contrast to this preceding work, our approach enables strategy learning in domains where no prior system is available. Optimised learned strategies are then available from the first moment of online-operation, and tedious handcrafting of dialogue strategies is omitted. This independence from large amounts of in-domain dialogue data allows researchers to apply RL to new application areas beyond the scope of existing dialogue systems. We call this method ‘bootstrapping’. 
In a WOZ experiment, a hidden human operator, the so called “wizard”, simulates (partly or com638 pletely) the behaviour of the application, while subjects are left in the belief that they are interacting with a real system (Fraser and Gilbert, 1991). That is, WOZ experiments only simulate HCI. We therefore need to show that a strategy bootstrapped from WOZ data indeed transfers to real HCI. Furthermore, we also need to introduce methods to learn useful user simulations (for training RL) from such limited data. The use of WOZ data has earlier been proposed in the context of RL. (Williams and Young, 2004) utilise WOZ data to discover the state and action space for MDP design. (Prommer et al., 2006) use WOZ data to build a simulated user and noise model for simulation-based RL. While both studies show promising first results, their simulated environment still contains many hand-crafted aspects, which makes it hard to evaluate whether the success of the learned strategy indeed originates from the WOZ data. (Schatzmann et al., 2007) propose to ‘bootstrap’ with a simulated user which is entirely hand-crafted. In the following we propose an entirely data-driven approach, where all components of the simulated learning environment are learned from WOZ data. We also show that the resulting policy performs well for real users. 2 Wizard-of-Oz data collection Our domains of interest are information-seeking dialogues, for example a multimodal in-car interface to a large database of music (MP3) files. The corpus we use for learning was collected in a multimodal study of German task-oriented dialogues for an incar music player application by (Rieser et al., 2005). This study provides insights into natural methods of information presentation as performed by human wizards. 6 people played the role of an intelligent interface (the “wizards”). The wizards were able to speak freely and display search results on the screen by clicking on pre-computed templates. Wizards’ outputs were not restricted, in order to explore the different ways they intuitively chose to present search results. Wizard’s utterances were immediately transcribed and played back to the user with Text-To-Speech. 21 subjects (11 female, 10 male) were given a set of predefined tasks to perform, as well as a primary driving task, using a driving simulator. The users were able to speak, as well as make selections on the screen. We also introduced artificial noise in the setup, in order to closer resemble the conditions of real HCI. Please see (Rieser et al., 2005) for further detail. The corpus gathered with this setup comprises 21 sessions and over 1600 turns. Example 1 shows a typical multimodal presentation sub-dialogue from the corpus (translated from German). Note that the wizard displays quite a long list of possible candidates on an (average sized) computer screen, while the user is driving. This example illustrates that even for humans it is difficult to find an “optimal” solution to the problem we are trying to solve. (1) User: Please search for music by Madonna . Wizard: I found seventeen hundred and eleven items. The items are displayed on the screen. [displays list] User: Please select ‘Secret’. For each session information was logged, e.g. the transcriptions of the spoken utterances, the wizard’s database query and the number of results, the screen option chosen by the wizard, and a rich set of contextual dialogue features was also annotated, see (Rieser et al., 2005). 
Of the 793 wizard turns 22.3% were annotated as presentation strategies, resulting in 177 instances for learning, where the six wizards contributed about equal proportions. Information about user preferences was obtained, using a questionnaire containing similar questions to the PARADISE study (Walker et al., 2000). In general, users report that they get distracted from driving if too much information is presented. On the other hand, users prefer shorter dialogues (most of the user ratings are negatively correlated with dialogue length). These results indicate that we need to find a strategy given the competing trade-offs between the number of results (large lists are difficult for users to process), the length of the dialogue (long dialogues are tiring, but collecting more information can result in more precise results), and the noise in the speech recognition environment (in high noise conditions accurate information is difficult to obtain). In the following we utilise the ratings from the user questionnaires to optimise a presentation strategy using simulation-based RL. 639   acquisition action:   askASlot implConfAskASlot explConf presentInfo  state:   filledSlot 1 | 2 | 3 | 4 | : n 0,1 o confirmedSlot 1 | 2 | 3 | 4 | : n 0,1 o DB: n 1--438 o   presentation action: " presentInfoVerbal presentInfoMM # state:   DB low: n 0,1 o DB med: n 0,1 o DB high n 0,1 o     Figure 1: State-Action space for hierarchical Reinforcement Learning 3 Simulated Learning Environment Simulation-based RL (also know as “model-free” RL) learns by interaction with a simulated environment. We obtain the simulated components from the WOZ corpus using data-driven methods. The employed database contains 438 items and is similar in retrieval ambiguity and structure to the one used in the WOZ experiment. The dialogue system used for learning comprises some obvious constraints reflecting the system logic (e.g. that only filled slots can be confirmed), implemented as Information State Update (ISU) rules. All other actions are left for optimisation. 3.1 MDP and problem representation The structure of an information seeking dialogue system consists of an information acquisition phase, and an information presentation phase. For information acquisition the task of the dialogue manager is to gather ‘enough’ search constraints from the user, and then, ‘at the right time’, to start the information presentation phase, where the presentation task is to present ‘the right amount’ of information in the right way– either on the screen or listing the items verbally. What ‘the right amount’ actually means depends on the application, the dialogue context, and the preferences of users. For optimising dialogue strategies information acquisition and presentation are two closely interrelated problems and need to be optimised simultaneously: when to present information depends on the available options for how to present them, and vice versa. We therefore formulate the problem as a Markov Decision Process (MDP), relating states to actions in a hierarchical manner (see Figure 1): 4 actions are available for the information acquisition phase; once the action presentInfo is chosen, the information presentation phase is entered, where 2 different actions for output realisation are available. The state-space comprises 8 binary features representing the task for a 4 slot problem: filledSlot indicates whether a slots is filled, confirmedSlot indicates whether a slot is confirmed. 
We also add features that human wizards pay attention to, using the feature selection techniques of (Rieser and Lemon, 2006b). Our results indicate that wizards only pay attention to the number of retrieved items (DB). We therefore add the feature DB to the state space, which takes integer values between 1 and 438, resulting in 28 × 438 = 112, 128 distinct dialogue states. In total there are 4112,128 theoretically possible policies for information acquisition. 1 For the presentation phase the DB feature is discretised, as we will further discuss in Section 3.6. For the information presentation phase there are 223 = 256 theoretically possible policies. 3.2 Supervised Baseline We create a baseline by applying Supervised Learning (SL). This baseline mimics the average wizard behaviour and allows us to measure the relative improvements over the training data (cf. (Henderson et al., 2005)). For these experiments we use the WEKA toolkit (Witten and Frank, 2005). We learn with the decision tree J4.8 classifier, WEKA’s implementation of the C4.5 system (Quinlan, 1993), and rule induc1In practise, the policy space is smaller, as some of combinations are not possible, e.g. a slot cannot be confirmed before being filled. Furthermore, some incoherent action choices are excluded by the basic system logic. 640 baseline JRip J48 timing 52.0(± 2.2) 50.2(± 9.7) 53.5(±11.7) modality 51.0(± 7.0) 93.5(±11.5)* 94.6(± 10.0)* Table 1: Predicted accuracy for presentation timing and modality (with standard deviation ±), * denotes statistically significant improvement at p < .05 tion JRIP, the WEKA implementation of RIPPER (Cohen, 1995). In particular, we learn models which predict the following wizard actions: • Presentation timing: when the ‘average’ wizard starts the presentation phase • Presentation modality: in which modality the list is presented. As input features we use annotated dialogue context features, see (Rieser and Lemon, 2006b). Both models are trained using 10-fold cross validation. Table 1 presents the results for comparing the accuracy of the learned classifiers against the majority baseline. For presentation timing, none of the classifiers produces significantly improved results. Hence, we conclude that there is no distinctive pattern the wizards follow for when to present information. For strategy implementation we therefore use a frequency-based approach following the distribution in the WOZ data: in 0.48 of cases the baseline policy decides to present the retrieved items; for the rest of the time the system follows a hand-coded strategy. For learning presentation modality, both classifiers significantly outperform the baseline. The learned models can be rewritten as in Algorithm 1. Note that this rather simple algorithm is meant to represent the average strategy as present in the initial data (which then allows us to measure the relative improvements of the RL-based strategy). Algorithm 1 SupervisedStrategy 1: if DB ≤3 then 2: return presentInfoVerbal 3: else 4: return presentInfoMM 5: end if 3.3 Noise simulation One of the fundamental characteristics of HCI is an error prone communication channel. Therefore, the simulation of channel noise is an important aspect of the learning environment. Previous work uses dataintensive simulations of ASR errors, e.g. (Pietquin and Dutoit, 2006). We use a simple model simulating the effects of non- and misunderstanding on the interaction, rather than the noise itself. This method is especially suited to learning from small data sets. 
From our data we estimate a 30% chance of user utterances to be misunderstood, and 4% to be complete non-understandings. We simulate the effects noise has on the user behaviour, as well as for the task accuracy. For the user side, the noise model defines the likelihood of the user accepting or rejecting the system’s hypothesis (for example when the system utters a confirmation), i.e. in 30% of the cases the user rejects, in 70% the user agrees. These probabilities are combined with the probabilities for user actions from the user simulation, as described in the next section. For non-understandings we have the user simulation generating Out-of-Vocabulary utterances with a chance of 4%. Furthermore, the noise model determines the likelihood of task accuracy as calculated in the reward function for learning. A filled slot which is not confirmed by the user has a 30% chance of having been mis-recognised. 3.4 User simulation A user simulation is a predictive model of real user behaviour used for automatic dialogue strategy development and testing. For our domain, the user can either add information (add), repeat or paraphrase information which was already provided at an earlier stage (repeat), give a simple yes-no answer (y/n), or change to a different topic by providing a different slot value than the one asked for (change). These actions are annotated manually (κ = .7). We build two different types of user simulations, one is used for strategy training, and one for testing. Both are simple bi-gram models which predict the next user action based on the previous system action (P(auser|asystem)). We face the problem of learning such models when training data is sparse. For training, we therefore use a cluster-based user simulation method, see (Rieser 641 and Lemon, 2006a). For testing, we apply smoothing to the bi-gram model. The simulations are evaluated using the SUPER metric proposed earlier (Rieser and Lemon, 2006a), which measures variance and consistency of the simulated behaviour with respect to the observed behaviour in the original data set. This technique is used because for training we need more variance to facilitate the exploration of large state-action spaces, whereas for testing we need simulations which are more realistic. Both user simulations significantly outperform random and majority class baselines. See (Rieser, 2008) for further details. 3.5 Reward modelling The reward function defines the goal of the overall dialogue. For example, if it is most important for the dialogue to be efficient, the reward penalises dialogue length, while rewarding task success. In most previous work the reward function is manually set, which makes it “the most hand-crafted aspect” of RL (Paek, 2006). In contrast, we learn the reward model from data, using a modified version of the PARADISE framework (Walker et al., 2000), following pioneering work by (Walker et al., 1998). In PARADISE multiple linear regression is used to build a predictive model of subjective user ratings (from questionnaires) from objective dialogue performance measures (such as dialogue length). We use PARADISE to predict Task Ease (a variable obtained by taking the average of two questions in the questionnaire) 2 from various input variables, via stepwise regression. The chosen model comprises dialogue length in turns, task completion (as manually annotated in the WOZ data), and the multimodal user score from the user questionnaire, as shown in Equation 2. 
TaskEase = −20.2 ∗dialogueLength + 11.8 ∗taskCompletion + 8.7 ∗multimodalScore; (2) This equation is used to calculate the overall reward for the information acquisition phase. During learning, Task Completion is calculated online according to the noise model, penalising all slots which are filled but not confirmed. 2“The task was easy to solve.”, “I had no problems finding the information I wanted.” For the information presentation phase, we compute a local reward. We relate the multimodal score (a variable obtained by taking the average of 4 questions) 3 to the number of items presented (DB) for each modality, using curve fitting. In contrast to linear regression, curve fitting does not assume a linear inductive bias, but it selects the most likely model (given the data points) by function interpolation. The resulting models are shown in Figure 3.5. The reward for multimodal presentation is a quadratic function that assigns a maximal score to a strategy displaying 14.8 items (curve inflection point). The reward for verbal presentation is a linear function assigning negative scores to all presented items ≤4. The reward functions for information presentation intersect at no. items=3. A comprehensive evaluation of this reward function can be found in (Rieser and Lemon, 2008a). -80 -70 -60 -50 -40 -30 -20 -10 0 10 0 10 20 30 40 50 60 70 user score no. items reward function for information presentation intersection point turning point:14.8 multimodal presentation: MM(x) verbal presentation: Speech(x) Figure 2: Evaluation functions relating number of items presented in different modalities to multimodal score 3.6 State space discretisation We use linear function approximation in order to learn with large state-action spaces. Linear function approximation learns linear estimates for expected reward values of actions in states represented as feature vectors. This is inconsistent with the idea 3“I liked the combination of information being displayed on the screen and presented verbally.”, “Switching between modes did not distract me.”, “The displayed lists and tables contained on average the right amount of information.”, “The information presented verbally was easy to remember.” 642 of non-linear reward functions (as introduced in the previous section). We therefore quantise the state space for information presentation. We partition the database feature into 3 bins, taking the first intersection point between verbal and multimodal reward and the turning point of the multimodal function as discretisation boundaries. Previous work on learning with large databases commonly quantises the database feature in order to learn with large state spaces using manual heuristics, e.g. (Levin et al., 2000; Heeman, 2007). Our quantisation technique is more principled as it reflects user preferences for multi-modal output. Furthermore, in previous work database items were not only quantised in the state-space, but also in the reward function, resulting in a direct mapping between quantised retrieved items and discrete reward values, whereas our reward function still operates on the continuous values. In addition, the decision when to present a list (information acquisition phase) is still based on continuous DB values. In future work we plan to engineer new state features in order to learn with nonlinear rewards while the state space is still continuous. 
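As a concrete illustration, Equation 2 and the resulting quantisation of the DB feature can be written down directly. The bin boundaries below use the reported intersection point (3 items) and turning point (14.8 items); whether each boundary is inclusive is an assumption, as are the function names.

# Equation 2: global reward for the information acquisition phase
def task_ease_reward(dialogue_length, task_completion, multimodal_score):
    return (-20.2 * dialogue_length
            + 11.8 * task_completion
            + 8.7 * multimodal_score)

# Quantisation of the DB feature into the three presentation-phase bins of Figure 1
def discretise_db(num_items):
    if num_items <= 3:        # below the verbal/multimodal intersection point
        return "DB_low"
    elif num_items <= 14.8:   # up to the turning point of the multimodal reward
        return "DB_med"
    else:
        return "DB_high"

Only the presentation phase uses these bins; the decision of when to present (acquisition phase) still operates on the continuous DB values, as stated above.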
A continuous representation of the state space allows learning of more fine-grained local trade-offs between the parameters, as demonstrated by (Rieser and Lemon, 2008b). 3.7 Testing the Learned Policies in Simulation We now train and test the multimodal presentation strategies by interacting with the simulated learning environment. For the following RL experiments we used the REALL-DUDE toolkit of (Lemon et al., 2006b). The SHARSHA algorithm is employed for training, which adds hierarchical structure to the well known SARSA algorithm (Shapiro and Langley, 2002). The policy is trained with the cluster-based user simulation over 180k system cycles, which results in about 20k simulated dialogues. In total, the learned strategy has 371 distinct state-action pairs (see (Rieser, 2008) for details). We test the RL-based and supervised baseline policies by running 500 test dialogues with a smoothed user simulation (so that we are not training and testing on the same simulation). We then compare quantitative dialogue measures performing a paired t-test. In particular, we compare mean values of the final rewards, number of filled and confirmed slots, dialog length, and items presented multimodally (MM items) and items presented verbally (verbal items). RL performs significantly better (p < .001) than the baseline strategy. The only non-significant difference is the number of items presented verbally, where both RL and SL strategy settled on a threshold of less than 4 items. The mean performance measures for simulationbased testing are shown in Table 2 and Figure 3. The major strength of the learned policy is that it learns to keep the dialogues reasonably short (on average 5.9 system turns for RL versus 8.4 turns for SL) by presenting lists as soon as the number of retrieved items is within tolerance range for the respective modality (as reflected in the reward function). The SL strategy in contrast has not learned the right timing nor an upper bound for displaying items on the screen. The results show that simulationbased RL with an environment bootstrapped from WOZ data allows learning of robust strategies which significantly outperform the strategies contained in the initial data set. One major advantage of RL is that it allows us to provide additional information about user preferences in the reward function, whereas SL simply mimics the data. In addition, RL is based on delayed rewards, i.e. the optimisation of a final goal. For dialogue systems we often have measures indicating how successful and/or satisfying the overall performance of a strategy was, but it is hard to tell how things should have been exactly done in a specific situation. This is what makes RL specifically attractive for dialogue strategy learning. In the next section we test the learned strategy with real users. 4 User Tests 4.1 Experimental design For the user tests the RL policy is ported to a working ISU-based dialogue system via table look-up, which indicates the action with the highest expected reward for each state (cf. (Singh et al., 2002)). The supervised baseline is implemented using standard threshold-based update rules. The experimental conditions are similar to the WOZ study, i.e. we ask the users to solve similar tasks, and use similar questionnaires. Furthermore, we decided to use typed user input rather than ASR. The use of text input 643 Measure SL baseline RL Strategy SIM REAL SIM REAL av. turns 8.42(±3.04) 5.86(±3.2) 5.9(±2.4)*** 5.07(±2.9)*** av. speech items 1.04(±.2) 1.29(±.4) 1.1(±.3) 1.2(±.4) av. 
MM items 61.37(±82.5) 52.2(±68.5) 11.2(±2.4)*** 8.73(±4.4)*** av. reward -1741.3(±566.2) -628.2(±178.6) 44.06(±51.5)*** 37.62(±60.7)*** Table 2: Comparison of results obtained in simulation (SIM) and with real users (REAL) for SL and RL-based strategies; *** denotes significant difference between SL and RL at p < .001 Figure 3: Graph comparison of objective measures: SLs = SL policy in simulation; SLr = SL policy with real users; RLs = RL policy in simulation; RLr = RL policy with real users. allows us to target the experiments to the dialogue management decisions, and block ASR quality from interfering with the experimental results (Hajdinjak and Mihelic, 2006). 17 subjects (8 female, 9 male) are given a set of 6×2 predefined tasks, which they solve by interaction with the RL-based and the SLbased system in controlled order. As a secondary task users are asked to count certain objects in a driving simulation. In total, 204 dialogues with 1,115 turns are gathered in this setup. 4.2 Results In general, the users rate the RL-based significantly higher (p < .001) than the SL-based policy. The results from a paired t-test on the user questionnaire data show significantly improved Task Ease, better presentation timing, more agreeable verbal and multimodal presentation, and that more users would use the RL-based system in the future (Future Use). All the observed differences have a medium effects size (r ≥|.3|). We also observe that female participants clearly favour the RL-based strategy, whereas the ratings by male participants are more indifferent. Similar gender effects are also reported by other studies on multimodal output presentation, e.g. (Foster and Oberlander, 2006). Furthermore, we compare objective dialogue performance measures. The dialogues of the RL strategy are significantly shorter (p < .005), while fewer items are displayed (p < .001), and the help function is used significantly less (p < .003). The mean performance measures for testing with real users are shown in Table 2 and Figure 3. However, there is no significant difference for the performance of the secondary driving task. 5 Comparison of Results We finally test whether the results obtained in simulation transfer to tests with real users, following (Lemon et al., 2006a). We evaluate the quality of the simulated learning environment by directly comparing the dialogue performance measures between simulated and real interaction. This comparison enables us to make claims regarding whether a policy which is ‘bootstrapped’ from WOZ data is transferable to real HCI. We first evaluate whether objective dialogue measures are transferable, using a paired t-test. For the RL policy there is no statistical difference in overall performance (reward), dialogue length (turns), and the number of presented items (verbal and multimodal items) between simulated 644 Measure WOZ SL RL av. Task Ease .53±.14 .63±.26 .79±.21*** av. Future Use .56±.16 .55±.21 .67±.20*** Table 3: Improved user ratings over the WOZ study where *** denotes p < .001 and real interaction (see Table 2, Figure 3). This indicates that the learned strategy transfers well to real settings. For the SL policy the dialogue length for real users is significantly shorter than in simulation. From an error analysis we conclude that real users intelligently adapt to poor policies, e.g. by changing topic, whereas the simulated users do not react in this way. Furthermore, we want to know whether the subjective user ratings for the RL strategy improved over the WOZ study. 
We therefore compare the user ratings from the WOZ questionnaire to the user ratings of the final user tests using a independent t-test and a Wilcoxon Signed Ranks Test. Users rate the RL-policy on average 10% higher. We are especially interested in the ratings for Task Ease (as this was the ultimate measure optimised with PARADISE) and Future Use, as we believe this measure to be an important indicator of acceptance of the technology. The results show that only the RL strategy leads to significantly improved user ratings (increasing average Task Ease by 49% and Future Use by 19%), whereas the ratings for the SL policy are not significantly better than those for the WOZ data, see Table 3. 4 This indicates that the observed difference is indeed due to the improved strategy (and not to other factors like the different user population or the embedded dialogue system). 6 Conclusion We addressed two problems in the field of automatic optimization of dialogue strategies: learning effective dialogue strategies when no initial data or system exists, and evaluating the result with real users. We learned optimal strategies by interaction with a simulated environment which is bootstrapped from 4The ratings are normalised as some of the questions were on different scales. a small amount of Wizard-of-Oz data, and we evaluated the result with real users. The use of WOZ data allows us to develop optimal strategies for domains where no working prototype is available. The developed simulations are entirely data driven and the reward function reflects real user preferences. We compare the Reinforcement Learning-based strategy against a supervised strategy which mimics the (human) wizards’ policies from the original data. This comparison allows us to measure relative improvement over the training data. Our results show that RL significantly outperforms SL in simulation as well as in interactions with real users. The RL-based policy gains on average 50-times more reward when tested in simulation, and almost 18-times more reward when interacting with real users. The human users also subjectively rate the RL-based policy on average 10% higher, and 49% higher for Task Ease. We also show that results obtained in simulation are comparable to results for real users. We conclude that a strategy trained from WOZ data via bootstrapping is transferable to real Human-ComputerInteraction. In future work will apply similar techniques to statistical planning for Natural Language Generation in spoken dialogue (Lemon, 2008; Janarthanam and Lemon, 2008), (see the EC FP7 CLASSiC project: www.classic-project.org). Acknowledgements The research leading to these results has received funding from the European Community’s 7th Framework Programme (FP7/2007-2013) under grant agreement no. 216594 (CLASSiC project www.classic-project.org), the EC FP6 project “TALK: Talk and Look, Tools for Ambient Linguistic Knowledge (IST 507802, www. talk-project.org), from the EPSRC, project no. EP/E019501/1, and from the IRTG Saarland University. 645 References W. W. Cohen. 1995. Fast effective rule induction. In Proc. of the 12th ICML-95. M. E. Foster and J. Oberlander. 2006. Data-driven generation of emphatic facial displays. In Proc. of EACL. M. Frampton and O. Lemon. (to appear). Recent research advances in Reinforcement Learning in Spoken Dialogue Systems. Knowledge Engineering Review. N. M. Fraser and G. N. Gilbert. 1991. Simulating speech systems. Computer Speech and Language, 5:81–99. M. Hajdinjak and F. Mihelic. 2006. 
The PARADISE evaluation framework: Issues and findings. Computational Linguistics, 32(2):263–272. P. Heeman. 2007. Combining reinforcement learning with information-state update rules. In Proc. of NAACL. J. Henderson, O. Lemon, and K. Georgila. 2005. Hybrid Reinforcement/Supervised Learning for Dialogue Policies from COMMUNICATOR data. In Proc. of IJCAI workshop on Knowledge and Reasoning in Practical Dialogue Systems, pages 68–75. S. Janarthanam and O. Lemon. 2008. User simulations for online adaptation and knowledge-alignment in Troubleshooting dialogue systems. In Proc. of the 12th SEMDIAL Workshop (LONdial). O. Lemon and O. Pietquin. 2007. Machine learning for spoken dialogue systems. In Proc. of Interspeech. O. Lemon, K. Georgila, and J. Henderson. 2006a. Evaluating Effectiveness and Portability of Reinforcement Learned Dialogue Strategies with real users: the TALK TownInfo Evaluation. In Proc. of IEEE/ACL workshop on Spoken Language Technology (SLT). O. Lemon, X. Liu, D. Shapiro, and C. Tollander. 2006b. Hierarchical reinforcement learning of dialogue policies in a development environment for dialogue systems: REALL-DUDE. In Proc. of the 10th SEMDIAL Workshop (BRANdial). O. Lemon. 2008. Adaptive Natural Language Generation in Dialogue using Reinforcement Learning. In Proc. of the 12th SEMDIAL Workshop (LONdial). E. Levin, R. Pieraccini, and W. Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1). T. Paek. 2006. Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment. In Proc. Dialog-on-Dialog Workshop, Interspeech. O. Pietquin and T. Dutoit. 2006. A probabilistic framework for dialog simulation and optimal strategy learnin. IEEE Transactions on Audio, Speech and Language Processing, 14(2):589–599. T. Prommer, H. Holzapfel, and A. Waibel. 2006. Rapid simulation-driven reinforcement learning of multimodal dialog strategies in human-robot interaction. In Proc. of Interspeech/ICSLP. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann. V. Rieser and O. Lemon. 2006a. Cluster-based user simulations for learning dialogue strategies. In Proc. of Interspeech/ICSLP. V. Rieser and O. Lemon. 2006b. Using machine learning to explore human multimodal clarification strategies. In Proc. of ACL. V. Rieser and O. Lemon. 2008a. Automatic learning and evaluation of user-centered objective functions for dialogue system optimisation. In LREC. V. Rieser and O. Lemon. 2008b. Does this list contain what you were searching for? Learning adaptive dialogue strategies for interactive question answering. Journal of Natural Language Engineering (special issue on Interactive Question answering, to appear). V. Rieser, I. Kruijff-Korbayov´a, and O. Lemon. 2005. A corpus collection and annotation framework for learning multimodal clarification strategies. In Proc. of the 6th SIGdial Workshop. V. Rieser. 2008. Bootstrapping Reinforcement Learningbased Dialogue Strategies from Wizard-of-Oz data (to appear). Ph.D. thesis, Saarland University. J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. 2007. Agenda-based user simulation for bootstrapping a POMDP dialogue system. In Proc. of HLT/NAACL. D. Shapiro and P. Langley. 2002. Separating skills from preference: Using learning to program by reward. In Proc. of the 19th ICML. S. Singh, D. Litman, M. Kearns, and M. Walker. 2002. 
Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. JAIR, 16. R. Sutton and A. Barto. 1998. Reinforcement Learning. MIT Press. M. Walker, J. Fromer, and S. Narayanan. 1998. Learning optimal dialogue strategies: A case study of a spoken dialogue agent for email. In Proceedings of ACL/COLING. M. Walker, C. Kamm, and D. Litman. 2000. Towards developing general models of usability with PARADISE. Journal of Natural Language Engineering, 6(3). J. Williams and S. Young. 2004. Using Wizard-of-Oz simulations to bootstrap reinforcement-learning-based dialog management systems. In Proc. of the 4th SIGdial Workshop. I. Witten and E. Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques (2nd Edition). Morgan Kaufmann. 646
Proceedings of ACL-08: HLT, pages 647–655, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Phrase Chunking using Entropy Guided Transformation Learning Ruy L. Milidi´u Departamento de Inform´atica PUC-Rio Rio de Janeiro, Brazil [email protected] C´ıcero Nogueira dos Santos Departamento de Inform´atica PUC-Rio [email protected] Julio C. Duarte Centro Tecnol´ogico do Ex´ercito Rio de Janeiro, Brazil [email protected] Abstract Entropy Guided Transformation Learning (ETL) is a new machine learning strategy that combines the advantages of decision trees (DT) and Transformation Based Learning (TBL). In this work, we apply the ETL framework to four phrase chunking tasks: Portuguese noun phrase chunking, English base noun phrase chunking, English text chunking and Hindi text chunking. In all four tasks, ETL shows better results than Decision Trees and also than TBL with hand-crafted templates. ETL provides a new training strategy that accelerates transformation learning. For the English text chunking task this corresponds to a factor of five speedup. For Portuguese noun phrase chunking, ETL shows the best reported results for the task. For the other three linguistic tasks, ETL shows state-of-theart competitive results and maintains the advantages of using a rule based system. 1 Introduction Phrase Chunking is a Natural Language Processing (NLP) task that consists in dividing a text into syntactically correlated parts of words. Theses phrases are non-overlapping, i.e., a word can only be a member of one chunk (Sang and Buchholz, 2000). It provides a key feature that helps on more elaborated NLP tasks such as parsing and information extraction. Since the last decade, many high-performance chunking systems were proposed, such as, SVMbased (Kudo and Matsumoto, 2001; Wu et al., 2006), Winnow (Zhang et al., 2002), votedperceptrons (Carreras and M`arquez, 2003), Transformation-Based Learning (TBL) (Ramshaw and Marcus, 1999; Megyesi, 2002) and Hidden Markov Model (HMM) (Molina and Pla, 2002), Memory-based (Sang, 2002). State-of-the-art systems for English base noun phrase chunking and text chunking are based in statistical techniques (Kudo and Matsumoto, 2001; Wu et al., 2006; Zhang et al., 2002). TBL is one of the most accurate rule-based techniques for phrase chunking tasks (Ramshaw and Marcus, 1999; Ngai and Florian, 2001; Megyesi, 2002). On the other hand, TBL rules must follow patterns, called templates, that are meant to capture the relevant feature combinations. The process of generating good templates is highly expensive. It strongly depends on the problem expert skills to build them. Even when a template set is available for a given task, it may not be effective when we change from a language to another (dos Santos and Oliveira, 2005). In this work, we apply Entropy Guided Transformation Learning (ETL) for phrase chunking. ETL is a new machine learning strategy that combines the advantages of Decision Trees (DT) and TBL (dos Santos and Milidi´u, 2007a). The ETL key idea is to use decision tree induction to obtain feature combinations (templates) and then use the TBL algorithm to generate transformation rules. ETL produces transformation rules that are more effective than decision trees and also eliminates the need of a problem domain expert to build TBL templates. We evaluate the performance of ETL over four 647 phrase chunking tasks: (1) English Base Noun Phrase (NP) chunking; (2) Portuguese NP chunking; (3) English Text Chunking; and (4) Hindi Text Chunking. 
Base NP chunking consists in recognizing non-overlapping text segments that contain NPs. Text chunking consists in dividing a text into syntactically correlated parts of words. For these four tasks, ETL shows state-of-the-art competitive results and maintains the advantages of using a rule based system. The remainder of the paper is organized as follows. In section 2, the ETL strategy is described. In section 3, the experimental design and the corresponding results are reported. Finally, in section 4, we present our concluding remarks. 2 Entropy Guided Transformation Learning Entropy Guided Transformation Learning (ETL) is a new machine learning strategy that combines the advantages of Decision Trees (DT) and Transformation-Based Learning (TBL) (dos Santos and Milidi´u, 2007a). The key idea of ETL is to use decision tree induction to obtain templates. Next, the TBL strategy is used to generate transformation rules. The proposed method is illustrated in the Fig. 1. Figure 1: ETL - Entropy Guided Transformation Learning. A combination of DT and TBL is presented in (Corston-Oliver and Gamon, 2003). The main difference between Corston-Oliver & Gamon work and the ETL strategy is that they extract candidate rules directly from the DT, and then use the TBL strategy to select the appropriate rules. Another difference is that they use a binary DT, whereas ETL uses a DT that is not necessarily binary. An evolutionary approach based on Genetic Algorithms (GA) to automatically generate TBL templates is presented in (Milidi´u et al., 2007). Using a simple genetic coding, the generated template sets have efficacy near to the handcrafted templates for the tasks: English Base Noun Phrase Identification, Text Chunking and Portuguese Named Entities Recognition. The main drawback of this strategy is that the GA step is computationally expensive. If we need to consider a large context window or a large number of features, it can be infeasible. The remainder of this section is organized as follows. In section 2.1, we describe the DT learning algorithm. In section 2.2, the TBL algorithm is depicted. In section 2.3, we depict the process of obtaining templates from a decision tree decomposition. Finally, in section 2.4, we present a template evolution scheme that speeds up the TBL step. 2.1 Decision Trees Decision tree learning is one of the most widely used machine learning algorithms. It performs a partitioning of the training set using principles of Information Theory. The learning algorithm executes a general to specific search of a feature space. The most informative feature is added to a tree structure at each step of the search. Information Gain Ratio, which is based on the data Entropy, is normally used as the informativeness measure. The objective is to construct a tree, using a minimal set of features, that efficiently partitions the training set into classes of observations. After the tree is grown, a pruning step is carried out in order to avoid overfitting. One of the most used algorithms for induction of a DT is the C4.5 (Quinlan, 1993). We use Quinlan’s C4.5 system throughout this work. 2.2 Transformation-Based Learning Transformation Based error-driven Learning (TBL) is a successful machine learning algorithm introduced by Eric Brill (Brill, 1995). 
It has since been used for several Natural Language Processing tasks, such as part-of-speech (POS) tagging (Brill, 1995), English text chunking (Ramshaw and Marcus, 1999; dos Santos and Milidi´u, 2007b), spelling correc648 tion (Mangu and Brill, 1997), Portuguese appositive extraction (Freitas et al., 2006), Portuguese named entity extraction (Milidi´u et al., 2006) and Portuguese noun-phrase chunking (dos Santos and Oliveira, 2005), achieving state-of-the-art performance in many of them. TBL uses an error correcting strategy. Its main scheme is to generate an ordered list of rules that correct classification mistakes in the training set, which have been produced by an initial classifier. The requirements of the algorithm are: • two instances of the training set, one that has been correctly labeled, and another that remains unlabeled; • an initial classifier, the baseline system, which classifies the unlabeled training set by trying to apply the correct class for each sample. In general, the baseline system is based on simple statistics of the labeled training set; and • a set of rule templates, which are meant to capture the relevant feature combinations that would determine the sample’s classification. Concrete rules are acquired by instantiation of this predefined set of rule templates. • a threshold value, that is used as a stopping criteria for the algorithm and is needed to avoid overfitting to the training data. The learning method is a mistake-driven greedy procedure that iteratively acquires a set of transformation rules. The TBL algorithm can be depicted as follows: 1. Starts applying the baseline system, in order to guess an initial classification for the unlabeled version of the training set; 2. Compares the resulting classification with the correct one and, whenever a classification error is found, all the rules that can correct it are generated by instantiating the templates. This template instantiation is done by capturing some contextual data of the sample being corrected. Usually, a new rule will correct some errors, but will also generate some other errors by changing correctly classified samples; 3. Computes the rules’ scores (errors repaired - errors created). If there is not a rule with a score above an arbitrary threshold, the learning process is stopped; 4. Selects the best scoring rule, stores it in the set of learned rules and applies it to the training set; 5. Returns to step 2. When classifying a new sample item, the resulting sequence of rules is applied according to its generation order. 2.3 DT Template Extraction There are many ways to extract feature combinations from decision trees. In an path from the root to the leaves, more informative features appear first . Since we want to generate the most promising templates only, we just combine the more informative ones. The process we use to extract templates from a DT includes a depth-first traversal of the DT. For each visited node, we create a new template that combines its parent node template with the feature used to split the data at that node. This is a very simple decomposition scheme. Nevertheless, it results into extremely effective templates. We also use pruned trees in all experiments shown in section 3. Fig. 2 shows an excerpt of a DT generated for the English text chunking task1. Using the described method to extract templates from the DT shown in Fig. 2, we obtain the template set listed in the left side of Table 1. 
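A minimal sketch of this traversal is given below. The tree encoding (each internal node stores the feature it tests plus its children) and the small example tree, reconstructed so that it yields the templates of Table 1, are illustrative assumptions; the actual Fig. 2 additionally branches on feature values, which play no role in template extraction.

# Depth-first template extraction: each visited node contributes its parent's template
# extended with the feature tested at that node.
def extract_templates(node, parent_template=(), templates=None):
    if templates is None:
        templates = []
    if not node or node.get("feature") is None:            # leaves carry class labels only
        return templates
    template = parent_template + (node["feature"],)
    templates.append(template)
    for child in node.get("children", []):
        extract_templates(child, template, templates)
    return templates

# A tree consistent with the Fig. 2 excerpt (root CK[0]):
tree = {"feature": "CK[0]", "children": [
          {"feature": "CK[1]", "children": [
             {"feature": "WRD[0]", "children": [{"feature": "CK[-1]", "children": []}]},
             {"feature": "POS[0]", "children": []}]},
          {"feature": "CK[-1]", "children": []}]}

extract_templates(tree)
# -> CK[0]; CK[0] CK[1]; CK[0] CK[1] WRD[0]; CK[0] CK[1] WRD[0] CK[-1];
#    CK[0] CK[1] POS[0]; CK[0] CK[-1]   (the left column of Table 1)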
In order to generate more feature combinations, without largely increasing the number of templates, we extend the template set by including templates that do not have the root node feature. The extended template set for the DT shown in Fig. 2 is listed in the right side of the Table 1. We have also tried some other strategies that extract a larger number of templates from a DT. However, the efficacy of the learned rules is quite similar to the one generated by the first method. This reinforces the conjecture that a DT generates informative feature combinations. 1CK[0] = Chunk tag of the current word (initial classifier result); CK[–1] = previous word Chunk tag; CK[1] = next word Chunk tag; POS[0] = current word POS tag; WRD[0] = current word. 649 Table 1: Text chunking DT Template set example Template set Extended template set CK[0] CK[0] CK[0] CK[1] CK[0] CK[1] CK[1] CK[0] CK[1] WRD[0] CK[0] CK[1] WRD[0] CK[1] WRD[0] CK[0] CK[1] WRD[0] CK[–1] CK[0] CK[1] WRD[0] CK[–1] CK[1] WRD[0] CK[–1] CK[0] CK[1] POS[0] CK[0] CK[1] POS[0] CK[1] POS[0] CK[0] CK[–1] CK[0] CK[–1] CK[–1] Figure 2: Text chunking decision tree excerpt. 2.4 Template Evolution Speedup TBL training time is highly sensitive to the number and complexity of the applied templates. In (Curran and Wong, 2000), it is argued that we can better tune the training time vs. templates complexity trade-off by using an evolutionary template approach. The main idea is to apply only a small number of templates that evolve throughout the training. When training starts, templates are short, consisting of few feature combinations. As training proceeds, templates evolve to more complex ones that contain more feature combinations. In this way, only a few templates are considered at any point in time. Nevertheless, the descriptive power is not significantly reduced. The template evolution approach can be easily implemented by using template sets extracted from a DT. We implement this idea by successively training TBL models. Each model uses only the templates that contain feature combinations up to a given tree level. For instance, using the tree shown in Fig. 2, we have the following template sets for the three first training rounds2: 1. CK[0] CK[1]; CK[0] CK[–1] 2. CK[0] CK[1] WRD[0]; CK[0] CK[1] POS[0] 3. CK[0] CK[1] WRD[0] CK[–1] Using the template evolution strategy, the training time is decreased by a factor of five for the English text chunking task. This is a remarkable reduction, since we use an implementation of the fastTBL algorithm (Ngai and Florian, 2001) that is already a very fast TBL version. The efficacy of the rules generated by the sequential training is quite similar to the one obtained by training with all the templates at the same time. 3 Experiments This section presents the experimental setup and results of the application of ETL to four phrase chunking tasks. ETL results are compared with the results of DT and TBL using hand-crafted templates. In the TBL step, for each one of the four chunking tasks, the initial classifier assigns to each word the chunk tag that was most frequently associated with the part-of-speech of that word in the training set. The DT learning works as a feature selector and is not affected by irrelevant features. We have tried several context window sizes when training the classifiers. Some of the tested window sizes would be very hard to be explored by a domain expert using 2We ignore templates composed of only one feature test. 650 TBL alone. 
The corresponding huge number of possible templates would be very difficult to be managed by a template designer. For the four tasks, the following experimental setup provided us our best results. ETL in the ETL learning, we use the features word, POS and chunk. In order to overcome the sparsity problem, we only use the 200 most frequent words to induce the DT. In the DT learning, the chunk tag of the word is the one applied by the initial classifier. On the other hand, the chunk tag of neighbor words are the true ones. We report results for ETL trained with all the templates at the same time as well as using template evolution. TBL the results for the TBL approach refers to TBL trained with the set of templates proposed in (Ramshaw and Marcus, 1999). DT the best result for the DT classifier is shown. The features word, POS and chunk are used to generate the DT classifier. The chunk tag of a word and its neighbors are the ones guessed by the initial classifier. Using only the 100 most frequent words gives our best results. In all experiments, the term WS=X subscript means that a window of size X was used for the given model. For instance, ETLWS=3 corresponds to ETL trained with window of size three, that is, the current token, the previous and the next one. 3.1 Portuguese noun phrase chunking For this task, we use the SNR-CLIC corpus described in (Freitas et al., 2005). This corpus is tagged with both POS and NP tags. The NP tags are: I, for in NP; O, for out of NP; and B for the leftmost word of an NP beginning immediately after another NP. We divided the corpus into 3514sentence (83346 tokens) training set and a 878sentence (20798 tokens) test set. In Table 2 we compare the results3 of ETL with DT and TBL. We can see that ETL, even with a small window size, produces better results than DT and TBL. The Fβ=1 of the ETLWS=7 classifier is 1.8% higher than the one of TBL and 2.6% higher than the one of the DT classifier. 3#T = Number of templates. Table 2: Portuguese noun phrase chunking. Acc. Prec. Rec. Fβ=1 # T (%) (%) (%) (%) BLS 96.57 62.69 74.45 68.06 – DTWS=13 97.35 83.96 87.27 85.58 – TBL 97.45 85.48 87.32 86.39 100 ETLWS=3 97.61 86.12 87.24 86.67 21 ETLWS=5 97.68 86.85 87.49 87.17 35 ETLWS=7 97.82 88.15 88.20 88.18 34 ETLWS=9 97.82 88.02 88.34 88.18 40 Table 3 shows the results4 of ETL using template evolution. As we can see, for the task of Portuguese noun phrase chunking, the template evolution strategy reduces the average training time in approximately 35%. On the other hand, there is a decrease of the classifier efficacy in some cases. Table 3: Portuguese noun phrase chunking using ETL with template evolution. Acc. Prec. Rec. Fβ=1 TTR (%) (%) (%) (%) (%) ETLWS=3 97.61 86.22 87.27 86.74 20.7 ETLWS=5 97.56 86.39 87.10 86.74 38.2 ETLWS=7 97.69 87.35 87.89 87.62 37.0 ETLWS=9 97.76 87.55 88.14 87.85 41.9 In (dos Santos and Oliveira, 2005), a special set of six templates is shown. These templates are designed to reduce classification errors of preposition within the task of Portuguese noun phrase chunking. These templates use very specific domain knowledge and are difficult to DT and TBL to extract. Table 4 shows the results of an experiment where we include these six templates into the Ramshaw&Marcus template set and also into the template sets generated by ETL. Again, ETL produces better results than TBL. Table 5 shows the results of using a committee composed by the three best ETL classifiers. 
The classification is done by selecting the most popular tag among all the three committee members. The achieved Fβ=1, 89.14% is the best one ever reported for the SNR-CLIC corpus. 4TTR = Training time reduction. 651 Table 4: Portuguese noun phrase chunking using six additional hand-crafted templates. Acc. Prec. Rec. Fβ=1 # T (%) (%) (%) (%) BLS 96.57 62.69 74.45 68.06 – TBL 97.60 86.79 88.12 87.45 106 ETLWS=3 97.73 86.95 88.40 87.67 27 ETLWS=5 97.87 88.35 89.02 88.68 41 ETLWS=7 97.91 88.12 89.22 88.67 40 ETLWS=9 97.93 88.53 89.11 88.82 46 Table 5: Committee with the classifiers ETLW S=5, ETLW S=7 and ETLW S=9, shown in Table 4. Results (%) Accuracy 97.97 Precision 88.62 Recall 89.67 Fβ=1 89.14 3.2 English base noun phrase chunking The data used in the base NP chunking experiments is the one by Ramshaw & Marcus (Ramshaw and Marcus, 1999). This corpus contains sections 1518 and section 20 of the Penn Treebank, and is predivided into 8936-sentence (211727 tokens) training set and a 2012-sentence (47377 tokens) test. This corpus is tagged with both POS and chunk tags. Table 6 compares the results of ETL with DT and TBL for the base NP chunking. We can see that ETL, even using a small window size, produces better results than DT and TBL. The Fβ=1 of the ETLWS=9 classifier is 0.87% higher than the one of TBL and 2.31% higher than the one of the DT classifier. Table 7 shows the results of ETL using template evolution. The template evolution strategy reduces the average training time in approximately 62%. Differently from the Portuguese NP chunking, we observe an increase of the classifier efficacy in almost all the cases. Table 8 shows the results of using a committee composed by the eight ETL classifiers reported in this section. Table 8 also shows the results for a committee of SVM models presented in (Kudo and Matsumoto, 2001). SVM’s results are the state-ofTable 6: Base NP chunking. Acc. Prec. Rec. Fβ=1 # T (%) (%) (%) (%) BLS 94.48 78.20 81.87 79.99 – DTWS=11 97.03 89.92 91.16 90.53 – TBL 97.42 91.68 92.26 91.97 100 ETLWS=3 97.54 91.93 92.78 92.35 68 ETLWS=5 97.55 92.43 92.77 92.60 85 ETLWS=7 97.52 92.49 92.70 92.59 106 ETLWS=9 97.63 92.62 93.05 92.84 122 Table 7: Base NP chunking using ETL with template evolution. Acc. Prec. Rec. Fβ=1 TTR (%) (%) (%) (%) (%) ETLWS=3 97.58 92.07 92.74 92.41 53.9 ETLWS=5 97.63 92.66 93.16 92.91 57.9 ETLWS=7 97.61 92.56 93.04 92.80 65.1 ETLWS=9 97.59 92.50 93.01 92.76 69.4 the-art for the Base NP chunking task. On the other hand, using a committee of ETL classifiers, we produce very competitive results and maintain the advantages of using a rule based system. Table 8: Base NP chunking using a committee of eight ETL classifiers. Accuracy Precision Recall Fβ=1 (%) (%) (%) (%) ETL 97.72 92.87 93.34 93.11 SVM – 94.15 94.29 94.22 3.3 English text chunking The data used in the English text chunking experiments is the CoNLL-2000 corpus, which is described in (Sang and Buchholz, 2000). It is composed by the same texts as the Ramshaw & Marcus (Ramshaw and Marcus, 1999) corpus. Table 9 compares the results of ETL with DTs and TBL for English text chunking. ETL, even using a small window size, produces better results than DTs and TBL. The Fβ=1 of the ETLWS=3 classifier is 0.28% higher than the one of TBL and 2.17% higher than the one of the DT classifier. It is an interesting linguistic finding that the use of a window of size 3 652 (the current token, the previous token and the next token) provides the current best results for this task. Table 9: English text Chunking. 
Acc. Prec. Rec. Fβ=1 # T (%) (%) (%) (%) BLS 77.29 72.58 82.14 77.07 – DTWS=9 94.29 89.55 91.00 90.27 – TBL 95.12 92.05 92.28 92.16 100 ETLWS=3 95.24 92.32 92.56 92.44 105 ETLWS=5 95.12 92.19 92.27 92.23 167 ETLWS=7 95.13 92.24 92.32 92.28 183 ETLWS=9 95.07 92.10 92.27 92.19 205 Table 10 shows the results of ETL using template evolution. The template evolution strategy reduces the average training time by approximately 81%. On the other hand, there is a small decrease of the classifier efficacy in all cases. Table 10: English text chunking using ETL with template evolution. Acc. Prec. Rec. Fβ=1 TTR (%) (%) (%) (%) (%) ETLWS=3 95.21 92.14 92.53 92.34 77.2 ETLWS=5 94.98 91.84 92.25 92.04 80.8 ETLWS=7 95.03 91.89 92.28 92.09 83.0 ETLWS=9 95.01 91.87 92.21 92.04 84.5 Table 11 shows the results of using a committee composed by the eight ETL classifiers reported in this section. Table 11 also shows the results for a SVM model presented in (Wu et al., 2006). SVM’s results are the state-of-the-art for the Text chunking task. On the other hand, using a committee of ETL classifiers, we produce very competitive results and maintain the advantages of using a rule based system. Table 11: English text Chunking using a committee of eight ETL classifiers. Accuracy Precision Recall Fβ=1 (%) (%) (%) (%) ETL 95.50 92.63 92.96 92.79 SVM – 94.12 94.13 94.12 Table 12 shows the results, broken down by chunk type, of using a committee composed by the eight ETL classifiers reported in this section. Table 12: English text chunking results, broken down by chunk type, for the ETL committee. Precision Recall Fβ=1 (%) (%) (%) ADJP 75.59 72.83 74.19 ADVP 82.02 79.56 80.77 CONJP 35.71 55.56 43.48 INTJ 00.00 00.00 00.00 LST 00.00 00.00 00.00 NP 92.90 93.08 92.99 PP 96.53 97.63 97.08 PRT 66.93 80.19 72.96 SBAR 86.50 85.05 85.77 VP 92.84 93.58 93.21 Overall 92.63 92.96 92.79 3.4 Hindi text chunking The data used in the Hindi text chunking experiments is the SPSAL-2007 corpus, which is described in (Bharati and Mannem, 2007). This corpus is pre-divided into a 20000-tokens training set, a 5000-tokens development set and a 5000-tokens test set. This corpus is tagged with both POS and chunk tags. To fairly compare our approach with the ones presented in the SPSAL-2007, the POS tags of the test corpus were replaced by the ones predicted by an ETL-based Hindi POS Tagger. The description of our ETL pos tagger is beyond the scope of this work. Since the amount of training data is very small (20000 tokens), the accuracy of the ETL Hindi POS tagger is low, 77.50% for the test set. The results are reported in terms of chunking accuracy, the same performance measure used in the SPSAL-2007. Table 13 compares the results of ETL with DT and TBL for Hindi text chunking. ETL produces better results than DT and achieves the same performance of TBL using 60% less templates. We believe that ETL performance is not as good as in the other tasks mainly because of the small amount of training data, which increases the sparsity problem. We do not use template evolution for Hindi text 653 chunking. Since the training corpus is very small, the training time reduction is not significant. Table 13: Hindi text Chunking. Accuracy # Templates (%) BLS 70.05 – DTWS=5 78.20 – TBL 78.53 100 ETLWS=5 78.53 30 Table 14 compares the results of ETL with the two best Hindi text chunkers at SPSAL-2007 (Bharati and Mannem, 2007). The first one is a combination of Hidden Markov Models (HMM) and Conditional Random Fields (CRF) (PVS and Gali, 2007). 
The second is based in Maximum Entropy Models (MaxEnt) (Dandapat, 2007). ETL performs better than MaxEnt and worst than HMM+CRF. It is important to note that the accuracy of the POS tagger used by (PVS and Gali, 2007) (78.66%) is better than ours (77.50%). The POS tagging quality directly affects the chunking accuracy. Table 14: Comparison with best systems of SPSAL-2007 Accuracy (%) HMM + CRF 80.97 ETLWS=5 78.53 MaxEnt 74.92 4 Conclusions In this paper, we approach the phrase chunking task using Entropy Guided Transformation Learning (ETL). We carry out experiments with four phrase chunking tasks: Portuguese noun phrase chunking, English base noun phrase chunking, English text chunking and Hindi text chunking. In all four tasks, ETL shows better results than Decision Trees and also than TBL with hand-crafted templates. ETL provides a new training strategy that accelerates transformation learning. For the English text chunking task this corresponds to a factor of five speedup. For Portuguese noun phrase chunking, ETL shows the best reported results for the task. For the other three linguistic tasks, ETL shows competitive results and maintains the advantages of using a rule based system. References Akshar Bharati and Prashanth R. Mannem. 2007. Introduction to shallow parsing contest on south asian languages. In Proceedings of the IJCAI and the Workshop On Shallow Parsing for South Asian Languages (SPSAL), pages 1–8. Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Comput. Linguistics, 21(4):543–565. Xavier Carreras and Llu´ıs M`arquez. 2003. Phrase recognition by filtering and ranking with perceptrons. In Proceedings of RANLP-2003, Borovets, Bulgaria. Simon Corston-Oliver and Michael Gamon. 2003. Combining decision trees and transformation-based learning to correct transferred linguistic representations. In Proceedings of the Ninth Machine Tranlsation Summit, pages 55–62, New Orleans, USA. Association for Machine Translation in the Americas. J. R. Curran and R. K. Wong. 2000. Formalisation of transformation-based learning. In Proceedings of the Australian Computer Science Conference - ACSC, pages 51–57, Canberra, Australia. Sandipan Dandapat. 2007. Part of speech tagging and chunking with maximum entropy model. In Proceedings of the IJCAI and the Workshop On Shallow Parsing for South Asian Languages (SPSAL), pages 29–32. C´ıcero N. dos Santos and Ruy L. Milidi´u. 2007a. Entropy guided transformation learning. Technical Report 29/07, Departamento de Informtica, PUC-Rio. C´ıcero N. dos Santos and Ruy L. Milidi´u. 2007b. Probabilistic classifications with tbl. In Proceedings of Eighth International Conference on Intelligent Text Processing and Computational Linguistics – CICLing, pages 196–207, Mexico City, Mexico, February. C´ıcero N. dos Santos and Claudia Oliveira. 2005. Constrained atomic term: Widening the reach of rule templates in transformation based learning. In EPIA, pages 622–633. M. C. Freitas, M. Garrao, C. Oliveira, C. N. dos Santos, and M. Silveira. 2005. A anotac¸˜ao de um corpus para o aprendizado supervisionado de um modelo de sn. In Proceedings of the III TIL / XXV Congresso da SBC, S˜ao Leopoldo - RS - Brasil. M. C. Freitas, J. C. Duarte, C. N. dos Santos, R. L. Milidi´u, R. P. Renteria, and V. Quental. 2006. A machine learning approach to the identification of appos654 itives. In Proceedings of Ibero-American AI Conference, Ribeir˜ao Preto, Brazil, October. T. Kudo and Y. Matsumoto. 2001. 
Chunking with support vector machines. In Proceedings of the NAACL2001. Lidia Mangu and Eric Brill. 1997. Automatic rule acquisition for spelling correction. In Proceedings of The Fourteenth International Conference on Machine Learning, ICML 97. Morgan Kaufmann. Be´ata Megyesi. 2002. Shallow parsing with pos taggers and linguistic features. Journal of Machine Learning Research, 2:639–668. Ruy L. Milidi´u, Julio C. Duarte, and Roberto Cavalcante. 2006. Machine learning algorithms for portuguese named entity recognition. In Proceedings of Fourth Workshop in Information and Human Language Technology (TIL’06), Ribeir˜ao Preto, Brazil. Ruy L. Milidi´u, Julio C. Duarte, and C´ıcero N. dos Santos. 2007. Tbl template selection: An evolutionary approach. In Proceedings of Conference of the Spanish Association for Artificial Intelligence - CAEPIA, Salamanca, Spain. Antonio Molina and Ferran Pla. 2002. Shallow parsing using specialized hmms. J. Mach. Learn. Res., 2:595– 613. Grace Ngai and Radu Florian. 2001. Transformationbased learning in the fast lane. In Proceedings of North Americal ACL, pages 40–47, June. Avinesh PVS and Karthik Gali. 2007. Part-of-speech tagging and chunking using conditional random fields and transformation based learning. In Proceedings of the IJCAI and the Workshop On Shallow Parsing for South Asian Languages (SPSAL), pages 21–24. J. Ross Quinlan. 1993. C4.5: programs for machine learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. Lance Ramshaw and Mitch Marcus. 1999. Text chunking using transformation-based learning. In S. Armstrong, K.W. Church, P. Isabelle, S. Manzi, E. Tzoukermann, and D. Yarowsky, editors, Natural Language Processing Using Very Large Corpora. Kluwer. Erik F. Tjong Kim Sang and Sabine Buchholz. 2000. Introduction to the conll-2000 shared task: chunking. In Proceedings of the 2nd workshop on Learning language in logic and the 4th CONLL, pages 127–132, Morristown, NJ, USA. Association for Computational Linguistics. Erik F. Tjong Kim Sang. 2002. Memory-based shallow parsing. J. Mach. Learn. Res., 2:559–594. Yu-Chieh Wu, Chia-Hui Chang, and Yue-Shi Lee. 2006. A general and multi-lingual phrase chunking model based on masking method. In Proceedings of 7th International Conference on Intelligent Text Processing and Computational Linguistics, pages 144–155. Tong Zhang, Fred Damerau, and David Johnson. 2002. Text chunking based on a generalization of winnow. J. Mach. Learn. Res., 2:615–637. 655
Proceedings of ACL-08: HLT, pages 656–664, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Learning Bigrams from Unigrams Xiaojin Zhu† and Andrew B. Goldberg† and Michael Rabbat‡ and Robert Nowak§ †Department of Computer Sciences, University of Wisconsin-Madison ‡Department of Electrical and Computer Engineering, McGill University §Department of Electrical and Computer Engineering, University of Wisconsin-Madison {jerryzhu, goldberg}@cs.wisc.edu, [email protected], [email protected] Abstract Traditional wisdom holds that once documents are turned into bag-of-words (unigram count) vectors, word orders are completely lost. We introduce an approach that, perhaps surprisingly, is able to learn a bigram language model from a set of bag-of-words documents. At its heart, our approach is an EM algorithm that seeks a model which maximizes the regularized marginal likelihood of the bagof-words documents. In experiments on seven corpora, we observed that our learned bigram language models: i) achieve better test set perplexity than unigram models trained on the same bag-of-words documents, and are not far behind “oracle bigram models” trained on the corresponding ordered documents; ii) assign higher probabilities to sensible bigram word pairs; iii) improve the accuracy of ordereddocument recovery from a bag-of-words. Our approach opens the door to novel phenomena, for example, privacy leakage from index files. 1 Introduction A bag-of-words (BOW) is a basic document representation in natural language processing. In this paper, we consider a BOW in its simplest form, i.e., a unigram count vector or word histogram over the vocabulary. When performing the counting, word order is ignored. For example, the phrases “really neat” and “neat really” contribute equally to a BOW. Obviously, once a set of documents is turned into a set of BOWs, the word order information within them is completely lost—or is it? In this paper, we show that one can in fact partly recover the order information. Specifically, given a set of documents in unigram-count BOW representation, one can recover a non-trivial bigram language model (LM)1, which has part of the power of a bigram LM trained on ordered documents. At first glance this seems impossible: How can one learn bigram information from unigram counts? However, we will demonstrate that multiple BOW documents enable us to recover some higher-order information. Our results have implications in a wide range of natural language problems, in particular document privacy. With the wide adoption of natural language applications like desktop search engines, software programs are increasingly indexing computer users’ personal files for fast processing. Most index files include some variant of the BOW. As we demonstrate in this paper, if a malicious party gains access to BOW index files, it can recover more than just unigram frequencies: (i) the malicious party can recover a higher-order LM; (ii) with the LM it may attempt to recover the original ordered document from a BOW by finding the most-likely word permutation2. Future research will quantify the extent to which such a privacy breach is possible in theory, and will find solutions to prevent it. There is a vast literature on language modeling; see, e.g., (Rosenfeld, 2000; Chen and Goodman, 1999; Brants et al., 2007; Roark et al., 2007). How1A trivial bigram LM is a unigram LM which ignores history: P(v|u) = P(v). 
2It is possible to use a generic higher-order LM, e.g., a trigram LM trained on standard English corpora, for this purpose. However, incorporating a user-specific LM helps. 656 ever, to the best of our knowledge, none addresses this reverse direction of learning higher-order LMs from lower-order data. This work is inspired by recent advances in inferring network structure from co-occurrence data, for example, for computer networks and biological pathways (Rabbat et al., 2007). 2 Problem Formulation and Identifiability We assume that a vocabulary of size W is given. For notational convenience, we include in the vocabulary a special “begin-of-document” symbol ⟨d⟩ which appears only at the beginning of each document. The training corpus consists of a collection of n BOW documents {x1, . . . , xn}. Each BOW xi is a vector (xi1, . . . , xiW ) where xiu is the number of times word u occurs in document i. Our goal is to learn a bigram LM θ, represented as a W ×W transition matrix with θuv = P(v|u), from the BOW corpus. Note P(v|⟨d⟩) corresponds to the initial state probability for word v, and P(⟨d⟩|u) = 0, ∀u. It is worth noting that traditionally one needs ordered documents to learn a bigram LM. A natural question that arises in our problem is whether or not a bigram LM can be recovered from the BOW corpus with any guarantee. Let X denote the space of all possible BOWs. As a toy example, consider W = 3 with the vocabulary {⟨d⟩, A, B}. Assuming all documents have equal length |x| = 4 (including ⟨d⟩), then X = {(⟨d⟩:1, A:3, B:0), (⟨d⟩:1, A:2, B:1), (⟨d⟩:1, A:1, B:2), (⟨d⟩:1, A:0, B:3)}. Our training BOW corpus, when sufficiently large, provides the marginal distribution ˆp(x) for x ∈X. Can we recover a bigram LM from ˆp(x)? To answer this question, we first need to introduce a generative model for the BOWs. We assume that the BOW corpus is generated from a bigram LM θ in two steps: (i) An ordered document is generated from the bigram LM θ; (ii) The document’s unigram counts are collected to produce the BOW x. Therefore, the probability of a BOW x being generated by θ can be computed by marginalizing over unique orderings z of x: P(x|θ) = X z∈σ(x) P(z|θ) = X z∈σ(x) |x| Y j=2 θzj−1,zj, where σ(x) is the set of unique orderings, and |x| is the document length. For example, if x =(⟨d⟩:1, A:2, B:1) then σ(x) = {z1, z2, z3} with z1 = “⟨d⟩A A B”, z2 = “⟨d⟩A B A”, z3 = “⟨d⟩B A A”. Bigram LM recovery then amounts to finding a θ that satisfies the system of marginal-matching equations P(x|θ) = ˆp(x) , ∀x ∈X. (1) As a concrete example where one can exactly recover a bigram LM from BOWs, consider our toy example again. We know there are only three free variables in our 3×3 bigram LM θ: r = θ⟨d⟩A, p = θAA, q = θBB, since the rest are determined by normalization. Suppose the documents are generated from a bigram LM with true parameters r = 0.25, p = 0.9, q = 0.5. If our BOW corpus is very large, we will observe that 20.25% of the BOWs are (⟨d⟩:1, A:3, B:0), 37.25% are (⟨d⟩:1, A:2, B:1), and 18.75% are (⟨d⟩:1, A:0, B:3). These numbers are computed using the definition of P(x|θ). We solve the reverse problem of finding r, p, q from the system of equations (1), now explicitly written as        rp2 = 0.2025 rp(1 −p) + r(1 −p)(1 −q) +(1 −r)(1 −q)p = 0.3725 (1 −r)q2 = 0.1875. The above system has only one valid solution, which is the correct set of bigram LM parameters (r, p, q) = (0.25, 0.9, 0.5). 
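As an illustration only (not the paper's implementation), the brute-force sketch below computes the marginal P(x|θ) = Σ_{z∈σ(x)} Π_j θ_{z_{j−1}z_j} for the toy example above and reproduces the quoted proportions: 20.25%, 37.25%, 18.75%, plus the remaining 23.75% for (⟨d⟩:1, A:1, B:2). Such enumeration is only workable for tiny vocabularies and document lengths.

```python
from itertools import permutations

# Toy bigram LM from the example; entries not listed in the text follow
# from normalization (and P(<d>|u) = 0 for every u).
r, p, q = 0.25, 0.9, 0.5
theta = {
    "<d>": {"A": r,     "B": 1 - r},
    "A":   {"A": p,     "B": 1 - p},
    "B":   {"A": 1 - q, "B": q},
}

def bow_probability(bow, theta):
    """P(x | theta): sum, over the unique orderings z in sigma(x), of the
    product of bigram probabilities along z (every z starts with <d>)."""
    words = [w for w, c in bow.items() for _ in range(c)]
    total = 0.0
    for z in set(permutations(words)):        # unique orderings only
        z = ("<d>",) + z
        prob = 1.0
        for u, v in zip(z, z[1:]):
            prob *= theta[u][v]
        total += prob
    return total

for bow in ({"A": 3}, {"A": 2, "B": 1}, {"A": 1, "B": 2}, {"B": 3}):
    print(bow, round(bow_probability(bow, theta), 4))
# -> 0.2025, 0.3725, 0.2375, 0.1875
```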
However, if the true parameters were (r, p, q) = (0.1, 0.2, 0.3) with proportions of BOWs being 0.4%, 19.8%, 8.1%, respectively, it is easy to verify that the system would have multiple valid solutions: (0.1, 0.2, 0.3), (0.8819, 0.0673, 0.8283), and (0.1180, 0.1841, 0.3030). In general, if ˆp(x) is known from the training BOW corpus, when can we guarantee to uniquely recover the bigram LM θ? This is the question of identifiability, which means the transition matrix θ satisfying (1) exists and is unique. Identifiability is related to finding unique solutions of a system of polynomial equations since (1) is such a system in the elements of θ. The details are beyond the scope of this paper, but applying the technique in (Basu and Boston, 2000), it is possible to show that for W = 3 (including ⟨d⟩) we need longer documents (|x| ≥5) to ensure identifiability. The identifiability of more general cases is still an open research question. 657 3 Bigram Recovery Algorithm In practice, the documents are not truly generated from a bigram LM, and the BOW corpus may be small. We therefore seek a maximum likelihood estimate of θ or a regularized version of it. Equivalently, we no longer require equality in (1), but instead find θ that makes the distribution P(x|θ) as close to ˆp(x) as possible. We formalize this notion below. 3.1 The Objective Function Given a BOW corpus {x1, . . . , xn}, its normalized log likelihood under θ is ℓ(θ) ≡ 1 C Pn i=1 log P(xi|θ), where C = Pn i=1(|xi| −1) is the corpus length excluding ⟨d⟩’s. The idea is to find θ that maximizes ℓ(θ). This also brings P(x|θ) closest to ˆp(x) in the KL-divergence sense. However, to prevent overfitting, we regularize the problem so that θ prefers to be close to a “prior” bigram LM φ. The prior φ is also estimated from the BOW corpus, and is discussed in Section 3.4. We define the regularizer to be an asymmetric dissimilarity D(φ, θ) between the prior φ and the learned model θ. The dissimilarity is 0 if θ = φ, and increases as they diverge. Specifically, the KLdivergence between two word distributions conditioned on the same history u is KL(φu·∥θu·) = PW v=1 φuv log φuv θuv . We define D(φ, θ) to be the average KL-divergence over all histories: D(φ, θ) ≡ 1 W PW u=1 KL(φu·∥θu·), which is convex in θ (Cover and Thomas, 1991). We will use the following derivative later: ∂D(φ, θ)/∂θuv = −φuv/(Wθuv). We are now ready to define the regularized optimization problem for recovering a bigram LM θ from the BOW corpus: max θ ℓ(θ) −λD(φ, θ) subject to θ1 = 1, θ ≥0. (2) The weight λ controls the strength of the prior. The constraints ensure that θ is a valid bigram matrix, where 1 is an all-one vector, and the non-negativity constraint is element-wise. Equivalently, (2) can be viewed as the maximum a posteriori (MAP) estimate of θ, with independent Dirichlet priors for each row of θ: p(θu·) = Dir(θu·|αu·) and hyperparameters αuv = λC W φuv + 1. The summation over hidden ordered documents z in P(x|θ) couples the variables and makes (2) a non-concave problem. We optimize θ using an EM algorithm. 3.2 The EM Algorithm We derive the EM algorithm for the optimization problem (2). Let O(θ) ≡ℓ(θ) −λD(φ, θ) be the objective function. Let θ(t−1) be the bigram LM at iteration t −1. We can lower-bound O as follows: O(θ) = 1 C n X i=1 log X z∈σ(xi) P(z|θ(t−1), x) P(z|θ) P(z|θ(t−1), x) −λD(φ, θ) ≥ 1 C n X i=1 X z∈σ(xi) P(z|θ(t−1), x) log P(z|θ) P(z|θ(t−1), x) −λD(φ, θ) ≡ L(θ, θ(t−1)). We used Jensen’s inequality above since log() is concave. 
The lower bound L involves P(z|θ(t−1), x), the probability of hidden orderings of the BOW under the previous iteration’s model. In the E-step of EM we compute P(z|θ(t−1), x), which will be discussed in Section 3.3. One can verify that L(θ, θ(t−1)) is concave in θ, unlike the original objective O(θ). In addition, the lower bound “touches” the objective at θ(t−1), i.e., L(θ(t−1), θ(t−1)) = O(θ(t−1)). The EM algorithm iteratively maximizes the lower bound, which is now a concave optimization problem: maxθ L(θ, θ(t−1)), subject to θ1 = 1. The non-negativity constraints turn out to be automatically satisfied. Introducing Lagrange multipliers βu for each history u = 1 . . . W, we form the Lagrangian ∆: ∆≡L(θ, θ(t−1)) − W X u=1 βu W X v=1 θuv −1 ! . Taking the partial derivative with respect to θuv and setting it to zero: ∂∆/∂θuv = 0, we arrive at the following update: θuv ∝ n X i=1 X z∈σ(xi) P(z|θ(t−1), x)cuv(z) + λC W φuv. (3) 658 Input: BOW documents {x1, . . . , xn}, a prior bigram LM φ, weight λ. 1. t = 1. Initialize θ(0) = φ. 2. Repeat until the objective O(θ) converges: (a) (E-step) Compute P(z|θ(t−1), x) for z ∈ σ(xi), i = 1, . . . , n. (b) (M-step) Compute θ(t) using (3). Let t = t + 1. Output: The recovered bigram LM θ. Table 1: The EM algorithm The normalization is over v = 1 . . . W. We use cuv(z) to denote the number of times the bigram “uv” appears in the ordered document z. This is the M-step of EM. Intuitively, the first term counts how often the bigram “uv” occurs, weighing each ordering by its probability under the previous model; the second term pulls the parameter towards the prior. If the weight of the prior λ →∞, we would have θuv = φuv. The update is related to the MAP estimate for a multinomial distribution with a Dirichlet prior, where we use the expected counts. We initialize the EM algorithm with θ(0) = φ. The EM algorithm is summarized in Table 1. 3.3 Approximate E-step The E-step needs to compute the expected bigram counts of the form X z∈σ(x) P(z|θ, x)cuv(z). (4) However, this poses a computational problem. The summation is over unique ordered documents. The number of unique ordered documents can be on the order of |x|!, i.e., all permutations of the BOW. For a short document of length 15, this number is already 1012. Clearly, brute-force enumeration is only feasible for very short documents. Approximation is necessary to handle longer ones. A simple Monte Carlo approximation to (4) would involve sampling ordered documents z1, z2, . . . , zL according to zi ∼P(z|θ, x), and replacing (4) with PL i=1 cuv(zi)/L. This estimate is unbiased, and the variance decreases linearly with the number of samples, L. However, sampling directly from P is difficult. Instead, we sample ordered documents zi ∼ R(zi|θ, x) from a distribution R which is easy to generate, and construct an approximation using importance sampling (see, e.g., (Liu, 2001)). With each sample, zi, we associate a weight wi ∝ P(zi|θ, x)/R(zi|θ, x). The importance sampling approximation to (4) is then given by (PL i=1 wicuv(zi))/(PL i=1 wi). Re-weighting the samples in this fashion accounts for the fact that we are using a sampling distribution R which is different the target distribution P, and guarantees that our approximation is asymptotically unbiased. The quality of an importance sampling approximation is closely related to how closely R resembles P; the more similar they are, the better the approximation, in general. 
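Given sampled orderings and their importance weights, the estimate of the expected bigram counts in (4) can be sketched as below; how the samples and weights are actually produced from the proposal R is described next. The sample orderings and weight values in the example call are placeholders for illustration only, not output of the paper's system.

```python
from collections import defaultdict

def expected_bigram_counts(samples, weights):
    """Importance-sampling estimate of (4): for each bigram (u, v), return
    (sum_i w_i * c_uv(z_i)) / (sum_i w_i), where c_uv(z_i) is the number of
    times "u v" occurs in the sampled ordering z_i."""
    totals = defaultdict(float)
    weight_sum = sum(weights)
    for z, w in zip(samples, weights):
        for u, v in zip(z, z[1:]):            # walk the bigrams of z_i
            totals[(u, v)] += w
    return {uv: s / weight_sum for uv, s in totals.items()}

# Illustrative call with two hypothetical sampled orderings of the same BOW:
samples = [("<d>", "A", "A", "B"), ("<d>", "B", "A", "A")]
weights = [0.4, 0.6]
print(expected_bigram_counts(samples, weights))
```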
Given a BOW x and our current bigram model estimate, θ, we generate one sample (an ordered document zi) by sequentially drawing words from the bag, with probabilities proportional to θ, but properly normalized to form a distribution based on which words remain in the bag. For example, suppose x = (⟨d⟩:1, A:2, B:1, C:1). Then we set zi1 = ⟨d⟩, and sample zi2 = A with probability 2θ⟨d⟩A/(2θ⟨d⟩A + θ⟨d⟩B + θ⟨d⟩C). Similarly, if zi(j−1) = u and if v is in the original BOW that hasn’t been sampled yet, then we set the next word in the ordered document zij equal to v with probability proportional to cvθuv, where cv is the count of v in the remaining BOW. For this scheme, one can verify (Rabbat et al., 2007) that the importance weight corresponding to a sampled ordered document zi = (zi1, . . . , zi|x|) is given by wi = Q|x| t=2 P|x| i=t θzt−1zi. In our implementation, the number of importance samples used for a document x is 10|x|2 if the length of the document |x| > 8; otherwise we enumerate σ(x) without importance sampling. 3.4 Prior Bigram LM φ The quality of the EM solution θ can depend on the prior bigram LM φ. To assess bigram recoverability from a BOW corpus alone, we consider only priors estimated from the corpus itself3. Like θ, φ is a W ×W transition matrix with φuv = P(v|u). When 3Priors based on general English text or domain-specific knowledge could be used in specific applications. 659 appropriate, we set the initial probability φ⟨d⟩v proportional to the number of times word v appears in the BOW corpus. We consider three prior models: Prior 1: Unigram φunigram. The most na¨ıve φ is a unigram LM which ignores word history. The probability for word v is estimated from the BOW corpus frequency of v, with add-1 smoothing: φunigram uv ∝1 + Pn i=1 xiv. We should point out that the unigram prior is an asymmetric bigram, i.e., φunigram uv ̸= φunigram vu . Prior 2: Frequency of Document Cooccurrence (FDC) φfdc. Let δ(u, v|x) = 1 if words u ̸= v co-occur (regardless of their counts) in BOW x, and 0 otherwise. In the case u = v, δ(u, u|x) = 1 only if u appears at least twice in x. Let cfdc uv = Pn i=1 δ(u, v|xi) be the number of BOWs in which u, v co-occur. The FDC prior is φfdc uv ∝cfdc uv + 1. The co-occurrence counts cfdc are symmetric, but φfdc is asymmetric because of normalization. FDC captures some notion of potential transitions from u to v. FDC is in spirit similar to Kneser-Ney smoothing (Kneser and Ney, 1995) and other methods that accumulate indicators of document membership. Prior 3: Permutation-Based (Perm) φperm. Recall that cuv(z) is the number of times the bigram “uv” appears in an ordered document z. We define cperm uv = Pn i=1 Ez∈σ(xi)[cuv(z)], where the expectation is with respect to all unique orderings of each BOW. We make the zero-knowledge assumption of uniform probability over these orderings, rather than P(z|θ) as in the EM algorithm described above. EM will refine these estimates, though, so this is a natural starting point. Space precludes a full discussion, but it can be proven that cperm uv = Pn i=1 xiuxiv/|xi| if u ̸= v, and cperm uu = Pn i=1 xiu(xiu −1)/|xi|. Finally, φperm uv ∝cperm uv + 1. 3.5 Decoding Ordered Documents from BOWs Given a BOW x and a bigram LM θ, we formulate document recovery as the problem z∗= argmaxz∈σ(x)P(z|θ). In fact, we can generate the top N candidate ordered documents in terms of P(z|θ). We use A∗search to construct such an N-best list (Russell and Norvig, 2003). Each state is an ordered, partial document. 
Its successor states append one more unused word in x to the partial document. The actual cost g from the start (empty document) to a state is the log probability of the partial document under bigram θ. We design a heuristic cost h from the state to the goal (complete document) that is admissible: the idea is to over-use the best bigram history for the remaining words in x. Let the partial document end with word we. Let the count vector for the remaining BOW be (c1, . . . , cW ). One admissible heuristic is h = log QW u=1 P(u|bh(u); θ)cu, where the “best history” for word type u is bh(u) = argmaxvθvu, and v ranges over the word types with non-zero counts in (c1, . . . , cW ), plus we. It is easy to see that h is an upper bound on the bigram log probability that the remaining words in x can achieve. We use a memory-bounded A∗search similar to (Russell, 1992), because long BOWs would otherwise quickly exhaust memory. When the priority queue grows larger than the bound, the worst states (in terms of g + h) in the queue are purged. This necessitates a double-ended priority queue that can pop either the maximum or minimum item. We use an efficient implementation with Splay trees (Chong and Sahni, 2000). We continue running A∗after popping the goal state from its priority queue. Repeating this N times gives the N-best list. 4 Experiments We show experimentally that the proposed algorithm is indeed able to recover reasonable bigram LMs from BOW corpora. We observe: 1. Good test set perplexity: Using test (heldout) set perplexity (PP) as an objective measure of LM quality, we demonstrate that our recovered bigram LMs are much better than na¨ıve unigram LMs trained on the same BOW corpus. Furthermore, they are not far behind the “oracle” bigram LMs trained on ordered documents that correspond to the BOWs. 2. Sensible bigram pairs: We inspect the recovered bigram LMs and find that they assign higher probabilities to sensible bigram pairs (e.g., “i mean”, “oh boy”, “that’s funny”), and lower probabilities to nonsense pairs (e.g., “i yep”, “you let’s”, “right lot”). 3. Document recovery from BOW: With the bigram LMs, we show improved accuracy in recovering ordered documents from BOWs. We describe these experiments in detail below. 660 Corpus |V | # Docs # Tokens |x| SV10 10 6775 7792 1.2 SV25 25 9778 13324 1.4 SV50 50 12442 20914 1.7 SV100 100 14602 28611 2.0 SV250 250 18933 51950 2.7 SV500 500 23669 89413 3.8 SumTime 882 3341 68815 20.6 Table 2: Corpora statistics: vocabulary size, document count, total token count, and mean document length. 4.1 Corpora and Protocols We note that although in principle our algorithm works on large corpora, the current implementation does not scale well (Table 3 last column). We therefore experimented on seven corpora with relatively small vocabulary sizes, and with short documents (mostly one sentence per document). Table 2 lists statistics describing the corpora. The first six contain text transcripts of conversational telephone speech from the small vocabulary “SVitchboard 1” data set. King et al. constructed each corpus from the full Switchboard corpus, with the restriction that the sentences use only words in the corresponding vocabulary (King et al., 2005). We refer to these corpora as SV10, SV25, SV50, SV100, SV250, and SV500. The seventh corpus comes from the SumTime-Meteo data set (Sripada et al., 2003), which contains real weather forecasts for offshore oil rigs in the North Sea. 
For the SumTime corpus, we performed sentence segmentation to produce documents, removed punctuation, and replaced numeric digits with a special token. For each of the seven corpora, we perform 5-fold cross validation. We use four folds other than the k-th fold as the training set to train (recover) bigram LMs, and the k-th fold as the test set for evaluation. This is repeated for k = 1 . . . 5, and we report the average cross validation results. We distinguish the original ordered documents (training set z1, . . . zn, test set zn+1, . . . , zm) and the corresponding BOWs (training set x1 . . . xn, test set xn+1 . . . xm). In all experiments, we simply set the weight λ = 1 in (2). Given a training set and a test set, we perform the following steps: 1. Build prior LMs φX from the training BOW corpus x1, . . . xn, for X = unigram, fdc, perm. 2. Recover the bigram LMs θX with the EM algorithm in Table 1, from the training BOW corpus x1, . . . xn and using the prior from step 1. 3. Compute the MAP bigram LM from the ordered training documents z1, . . . zn. We call this the “oracle” bigram LM because it uses order information (not available to our algorithm), and we use it as a lower-bound on perplexity. 4. Test all LMs on zn+1, . . . , zm by perplexity. 4.2 Good Test Set Perplexity Table 3 reports the 5-fold cross validation mean-testset-PP values for all corpora, and the run time per EM iteration. Because of the long running time, we adopt the rule-of-thumb stopping criterion of “two EM iterations”. First, we observe that all bigram LMs perform better than unigram LMs φunigram even though they are trained on the same BOW corpus. Second, all recovered bigram LMs θX improved upon their corresponding baselines φX. The difference across every row is statistically significant according to a two-tailed paired t-test with p < 0.05. The differences among PP(θX) for the same corpus are also significant (except between θunigram and θperm for SV500). Finally, we observe that θperm tends to be best for the smaller vocabulary corpora, whereas θfdc dominates as the vocabulary grows. To see how much better we could do if we had ordered training documents z1, . . . , zn, we present the mean-test-set-PP of “oracle” bigram LMs in Table 4. We used three smoothing methods to obtain oracle LMs: absolute discounting using a constant of 0.5 (we experimented with other values, but 0.5 worked best), Good-Turing, and interpolated Witten-Bell as implemented in the SRILM toolkit (Stolcke, 2002). We see that our recovered LMs (trained on unordered BOW documents), especially for small vocabulary corpora, are close to the oracles (trained on ordered documents). For the larger datasets, the recovery task is more difficult, and the gap between the oracle LMs and the θ LMs widens. Note that the oracle LMs do much better than the recovered LMs on the SumTime corpus; we suspect the difference is due to the larger vocabulary and significantly higher average sentence length (see Table 2). 
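For reference, test-set perplexity here follows the usual convention of exponentiating the negative average log-probability per predicted token; the sketch below uses that standard definition (the paper does not spell out its exact normalization, so treat the details, e.g. excluding ⟨d⟩ from the token count, as assumptions).

```python
import math

def bigram_perplexity(test_docs, theta, bos="<d>"):
    """Standard bigram test-set perplexity: exp of the negative average
    log-probability per predicted token (the <d> symbol is conditioned on
    but not itself predicted). Normalization details are assumed."""
    log_prob, n_tokens = 0.0, 0
    for doc in test_docs:                     # each doc is a list of words
        prev = bos
        for w in doc:
            log_prob += math.log(theta[prev][w])
            n_tokens += 1
            prev = w
    return math.exp(-log_prob / n_tokens)

# Toy usage with the 3-word-vocabulary LM of Section 2:
theta = {"<d>": {"A": 0.25, "B": 0.75},
         "A":   {"A": 0.9,  "B": 0.1},
         "B":   {"A": 0.5,  "B": 0.5}}
print(bigram_perplexity([["A", "A", "B"], ["B", "B", "A"]], theta))
```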
4.3 Sensible Bigram Pairs The next set of experiments compares the recovered bigram LMs to their corresponding prior LMs 661 Corpus X PP(φX) PP(θX) Time/ Iter SV10 unigram 7.48 6.95 < 1s fdc 6.52 6.47 < 1s perm 6.50 6.45 < 1s SV25 unigram 16.4 12.8 0.1s fdc 12.3 11.8 0.1s perm 12.2 11.7 0.1s SV50 unigram 29.1 19.7 2s fdc 19.6 17.8 4s perm 19.5 17.7 5s SV100 unigram 45.4 27.8 7s fdc 29.5 25.3 11s perm 30.0 25.6 11s SV250 unigram 91.8 51.2 5m fdc 60.0 47.3 8m perm 65.4 49.7 8m SV500 unigram 149.1 87.2 3h fdc 104.8 80.1 3h perm 123.9 87.4 3h SumTime unigram 129.7 81.8 4h fdc 103.2 77.7 4h perm 187.9 85.4 3h Table 3: Mean test set perplexities of prior LMs and bigram LMs recovered after 2 EM iterations. in terms of how they assign probabilities to word pairs. One naturally expects probabilities for frequently occurring bigrams to increase, while rare or nonsensical bigrams’ probabilities should decrease. For a prior-bigram pair (φ, θ), we evaluate the change in probabilities by computing the ratio ρhw = P(w|h,θ) P(w|h,φ) = θhw φhw . For a given history h, we sort words w by this ratio rather than by actual bigram probability because the bigrams with the highest and lowest probabilities tend to stay the same, while the changes accounting for differences in PP scores are more noticeable by considering the ratio. Due to space limitation, we present one specific result (FDC prior, fold 1) for the SV500 corpus in Table 5. Other results are similar. The table lists a few most frequent unigrams as history words h (left), and the words w with the smallest (center) and largest (right) ρhw ratio. Overall we see that our EM algorithm is forcing meaningless bigrams (e.g., “i goodness”, “oh thing”) to have lower probabilities, while assigning higher probabilities to sensible bigram pairs (e.g., “really good”, “that’s funny”). Note that the reverse of some common expressions (e.g., “right that’s”) also rise in probability, suggesting the algorithm detects that the two words are ofCorpus Absolute Discount GoodTuring WittenBell θ∗ SV10 6.27 6.28 6.27 6.45 SV25 10.5 10.6 10.5 11.7 SV50 14.8 14.9 14.8 17.7 SV100 20.0 20.1 20.0 25.3 SV250 33.7 33.7 33.8 47.3 SV500 50.9 50.9 51.3 80.1 SumTime 10.8 10.5 10.6 77.7 Table 4: Mean test set perplexities for oracle bigram LMs trained on z1, . . . , zn and tested on zn+1, . . . , zm. For reference, the rightmost column lists the best result using a recovered bigram LM (θperm for the first three corpora, θfdc for the latter four). ten adjacent, but lacks sufficient information to nail down the exact order. 4.4 Document Recovery from BOW We now play the role of the malicious party mentioned in the introduction. We show that, compared to their corresponding prior LMs, our recovered bigram LMs are better able to reconstruct ordered documents out of test BOWs xn+1, . . . , xm. We perform document recovery using 1-best A∗decoding. We use “document accuracy” and “n-gram accuracy” (for n = 2, 3) as our evaluation criteria. We define document accuracy (Accdoc) as the fraction of documents4 for which the decoded document matches the true ordered document exactly. Similarly, n-gram accuracy (Accn) measures the fraction of all n-grams in test documents (with n or more words) that are recovered correctly. For this evaluation, we compare models built for the SV500 corpus. Table 6 presents 5-fold cross validation average test-set accuracies. For each accuracy measure, we compare the prior LM with the recovered bigram LM. 
It is interesting to note that the FDC and Perm priors reconstruct documents surprisingly well, but we can always improve them by running our EM algorithm. The accuracies obtained by θ are statistically significantly better (via twotailed paired t-tests with p < 0.05) than their corresponding priors φ in all cases except Accdoc for θperm versus φperm. Furthermore, θfdc and θperm are significantly better than all other models in terms of all three reconstruction accuracy measures. 4We omit single-word documents from these computations. 662 h w (smallest ρhw) w (largest ρhw) i yep, bye-bye, ah, goodness, ahead mean, guess, think, bet, agree you let’s, us, fact, such, deal thank, bet, know, can, do right as, lot, going, years, were that’s, all, right, now, you’re oh thing, here, could, were, doing boy, really, absolutely, gosh, great that’s talking, home, haven’t, than, care funny, wonderful, true, interesting, amazing really now, more, yep, work, you’re sad, neat, not, good, it’s Table 5: The recovered bigram LM θfdc decreases nonsense bigram probabilities (center column) and increases sensible ones (right column) compared to the prior φfdc on the SV500 corpus. φperm reconstructions of test BOWs θperm reconstructions of test BOWs just it’s it’s it’s just going it’s just it’s just it’s going it’s probably out there else something it’s probably something else out there the the have but it doesn’t but it doesn’t have the the you to talking nice was it yes yes it was nice talking to you that’s well that’s what i’m saying well that’s that’s what i’m saying a little more here home take a little more take home here and they can very be nice too and they can be very nice too i think well that’s great i’m well i think that’s great i’m but was he because only always but only because he was always that’s think i don’t i no no i don’t i think that’s that in and it it’s interesting and it it’s interesting that in that’s right that’s right that’s difficult right that’s that’s right that’s difficult so just not quite a year so just not a quite year well it is a big dog well it is big a dog so do you have a car so you do have a car Table 7: Subset of SV500 documents that only φperm or θperm (but not both) reconstructs correctly. The correct reconstructions are in bold. Accdoc Acc2 Acc3 X φX θX φX θX φX θX unigram 11.1 26.8 17.7 32.8 2.7 11.8 fdc 30.2 31.0 33.0 35.1 11.4 13.3 perm 30.9 31.5 32.7 34.8 11.5 13.1 Table 6: Percentage of correctly reconstructed documents, 2-grams and 3-grams from test BOWs in SV500, 5-fold cross validation. The same trends continue for 4grams and 5-grams (not shown). We conclude our experiments with a closer look at some BOWs for which φ and θ reconstruct differently. As a representative example, we compare θperm to φperm on one test set of the SV500 corpus. There are 92 documents that are correctly reconstructed by θperm but not by φperm. In contrast, only 65 documents are accurately reordered by φperm but not by θperm. Table 7 presents a subset of these documents with six or more words. Overall, we conclude that the recovered bigram LMs do a better job at reconstructing BOW documents. 5 Conclusions and Future Work We presented an algorithm that learns bigram language models from BOWs. 
We plan to: i) investigate ways to speed up our algorithm; ii) extend it to trigram and higher-order models; iii) handle the mixture of BOW documents and some ordered documents (or phrases) when available; iv) adapt a general English LM to a special domain using only BOWs from that domain; and v) explore novel applications of our algorithm. Acknowledgments We thank Ben Liblit for tips on doubled-ended priority queues, and the anonymous reviewers for valuable comments. This work is supported in part by the Wisconsin Alumni Research Foundation, NSF CCF-0353079 and CCF-0728767, and the Natural Sciences and Engineering Research Council (NSERC) of Canada. 663 References Samit Basu and Nigel Boston. 2000. Identifiability of polynomial systems. Technical report, University of Illinois at Urbana-Champaign. Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL). Stanley F. Chen and Joshua T. Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech and Language, 13(4):359–393. Kyun-Rak Chong and Sartaj Sahni. 2000. Correspondence-based data structures for doubleended priority queues. The ACM Journal of Experimental Algorithmics, 5(2). Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons, Inc. Simon King, Chris Bartels, and Jeff Bilmes. 2005. SVitchboard 1: Small vocabulary tasks from Switchboard 1. In Interspeech 2005, Lisbon, Portugal. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for M-gram language modeling. In ICASSP. Jun S. Liu. 2001. Monte Carlo Strategies in Scientific Computing. Springer. Michael Rabbat, M´ario Figueiredo, and Robert Nowak. 2007. Inferring network structure from cooccurrences. In Advances in Neural Information Processing Systems (NIPS) 20. Brian Roark, Murat Saraclar, and Michael Collins. 2007. Discriminative n-gram language modeling. Computer Speech and Language, 21(2):373–392. Ronald Rosenfeld. 2000. Two decades of statistical language modeling: Where do we go from here? Proceedings of the IEEE, 88(8). Stuart Russell and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ, second edition. Stuart Russell. 1992. Efficient memory-bounded search methods. In The 10th European Conference on Artificial Intelligence. Somayajulu G. Sripada, Ehud Reiter, Jim Hunter, and Jin Yu. 2003. Exploiting a parallel TEXT-DATA corpus. In Proceedings of Corpus Linguistics, pages 734–743, Lancaster, U.K. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of International Conference on Spoken Language Processing, Denver, Colorado. 664
Proceedings of ACL-08: HLT, pages 665–673, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Semi-Supervised Sequential Labeling and Segmentation using Giga-word Scale Unlabeled Data Jun Suzuki and Hideki Isozaki NTT Communication Science Laboratories, NTT Corp. 2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0237 Japan {jun, isozaki}@cslab.kecl.ntt.co.jp Abstract This paper provides evidence that the use of more unlabeled data in semi-supervised learning can improve the performance of Natural Language Processing (NLP) tasks, such as part-of-speech tagging, syntactic chunking, and named entity recognition. We first propose a simple yet powerful semi-supervised discriminative model appropriate for handling large scale unlabeled data. Then, we describe experiments performed on widely used test collections, namely, PTB III data, CoNLL’00 and ’03 shared task data for the above three NLP tasks, respectively. We incorporate up to 1G-words (one billion tokens) of unlabeled data, which is the largest amount of unlabeled data ever used for these tasks, to investigate the performance improvement. In addition, our results are superior to the best reported results for all of the above test collections. 1 Introduction Today, we can easily find a large amount of unlabeled data for many supervised learning applications in Natural Language Processing (NLP). Therefore, to improve performance, the development of an effective framework for semi-supervised learning (SSL) that uses both labeled and unlabeled data is attractive for both the machine learning and NLP communities. We expect that such SSL will replace most supervised learning in real world applications. In this paper, we focus on traditional and important NLP tasks, namely part-of-speech (POS) tagging, syntactic chunking, and named entity recognition (NER). These are also typical supervised learning applications in NLP, and are referred to as sequential labeling and segmentation problems. In some cases, these tasks have relatively large amounts of labeled training data. In this situation, supervised learning can provide competitive results, and it is difficult to improve them any further by using SSL. In fact, few papers have succeeded in showing significantly better results than state-of-theart supervised learning. Ando and Zhang (2005) reported a substantial performance improvement compared with state-of-the-art supervised learning results for syntactic chunking with the CoNLL’00 shared task data (Tjong Kim Sang and Buchholz, 2000) and NER with the CoNLL’03 shared task data (Tjong Kim Sang and Meulder, 2003). One remaining question is the behavior of SSL when using as much labeled and unlabeled data as possible. This paper investigates this question, namely, the use of a large amount of unlabeled data in the presence of (fixed) large labeled data. To achieve this, it is paramount to make the SSL method scalable with regard to the size of unlabeled data. We first propose a scalable model for SSL. Then, we apply our model to widely used test collections, namely Penn Treebank (PTB) III data (Marcus et al., 1994) for POS tagging, CoNLL’00 shared task data for syntactic chunking, and CoNLL’03 shared task data for NER. We used up to 1G-words (one billion tokens) of unlabeled data to explore the performance improvement with respect to the unlabeled data size. In addition, we investigate the performance improvement for ‘unseen data’ from the viewpoint of unlabeled data coverage. 
Finally, we compare our results with those provided by the best current systems. The contributions of this paper are threefold. First, we present a simple, scalable, but powerful task-independent model for semi-supervised sequential labeling and segmentation. Second, we report the best current results for the widely used test 665 collections described above. Third, we confirm that the use of more unlabeled data in SSL can really lead to further improvements. 2 Conditional Model for SSL We design our model for SSL as a natural semisupervised extension of conventional supervised conditional random fields (CRFs) (Lafferty et al., 2001). As our approach for incorporating unlabeled data, we basically follow the idea proposed in (Suzuki et al., 2007). 2.1 Conventional Supervised CRFs Let x ∈X and y ∈Y be an input and output, where X and Y represent the set of possible inputs and outputs, respectively. C stands for the set of cliques in an undirected graphical model G(x, y), which indicates the interdependency of a given x and y. yc denotes the output from the corresponding clique c. Each clique c∈C has a potential function Ψc. Then, the CRFs define the conditional probability p(y|x) as a product of Ψcs. In addition, let f =(f1, . . ., fI) be a feature vector, and λ = (λ1, . . ., λI) be a parameter vector, whose lengths are I. p(y|x; λ) on a CRF is defined as follows: p(y|x; λ) = 1 Z(x) Y c Ψc(yc, x; λ), (1) where Z(x) = P y∈Y Q c∈C Ψc(yc, x; λ) is the partition function. We generally assume that the potential function is a non-negative real value function. Therefore, the exponentiated weighted sum over the features of a clique is widely used, so that, Ψc(yc, x; λ)=exp(λ · f c(yc, x)) where f c(yc, x) is a feature vector obtained from the corresponding clique c in G(x, y). 2.2 Semi-supervised Extension for CRFs Suppose we have J kinds of probability models (PMs). The j-th joint PM is represented by pj(xj, y; θj) where θj is a model parameter. xj = Tj(x) is simply an input x transformed by a predefined function Tj. We assume xj has the same graph structure as x. This means pj(xj, y) can be factorized by the cliques c in G(x, y). That is, pj(xj, y; θj)=Q c pj(xjc, yc; θj). Thus, we can incorporate generative models such as Bayesian networks including (1D and 2D) hidden Markov models (HMMs) as these joint PMs. Actually, there is a difference in that generative models are directed graphical models while our conditional PM is an undirected. However, this difference causes no violations when we construct our approach. Let us introduce λ′=(λ1, . . ., λI, λI+1, . . ., λI+J), and h = (f1, . . ., fI, log p1, . . ., log pJ), which is the concatenation of feature vector f and the loglikelihood of J-joint PMs. Then, we can define a new potential function by embedding the joint PMs; Ψ′ c(yc, x; λ′, Θ) = exp(λ · f c(yc, x)) · Y j pj(xjc, yc; θj)λI+j = exp(λ′ · hc(yc, x)). where Θ = {θj}J j=1, and hc(yc, x) is h obtained from the corresponding clique c in G(x, y). Since each pj(xjc, yc) has range [0, 1], which is nonnegative, Ψ′ c can also be used as a potential function. Thus, the conditional model for our SSL can be written as: P(y|x; λ′, Θ) = 1 Z′(x) Y c Ψ′ c(yc, x; λ′, Θ), (2) where Z′(x)=P y∈Y Q c∈C Ψ′ c(yc, x; λ′, Θ). Hereafter in this paper, we refer to this conditional model as a ‘Joint probability model Embedding style SemiSupervised Conditional Model’, or JESS-CM for short. 
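A minimal sketch of the JESS-CM clique potential underlying Equation 2, showing the equivalence of the two forms given above: the product of exponentiated CRF features and powered joint-PM likelihoods versus exp(λ′ · h_c). The dense vector representation and the example numbers are assumptions for illustration, not the authors' implementation.

```python
import math

def clique_potential(f_c, log_p_c, lam, lam_pm):
    """JESS-CM clique potential:
         exp(lam . f_c) * prod_j p_j(x_jc, y_c)^(lam_pm[j])
       = exp(lam' . h_c),  with h_c = (f_c, log p_1, ..., log p_J)
    f_c: clique feature vector; log_p_c: per-model clique log-likelihoods;
    lam, lam_pm: the corresponding sub-vectors of lam'."""
    h_c = list(f_c) + list(log_p_c)
    lam_all = list(lam) + list(lam_pm)
    return math.exp(sum(w * h for w, h in zip(lam_all, h_c)))

# Illustrative numbers only (two CRF features, one embedded joint PM):
f_c     = [1.0, 0.0]
log_p_c = [math.log(0.3)]
lam     = [0.7, -0.2]
lam_pm  = [1.5]
print(clique_potential(f_c, log_p_c, lam, lam_pm))
# equals exp(0.7 * 1.0) * 0.3 ** 1.5
```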
Given labeled data, Dl={(xn, yn)}N n=1, the MAP estimation of λ′ under a fixed Θ can be written as: L1(λ′|Θ) = X n log P(yn|xn; λ′, Θ) + log p(λ′), where p(λ′) is a prior probability distribution of λ′. Clearly, JESS-CM shown in Equation 2 has exactly the same form as Equation 1. With a fixed Θ, the log-likelihood, log pj, can be seen simply as the feature functions of JESS-CM as with fi. Therefore, embedded joint PMs do not violate the global convergence conditions. As a result, as with supervised CRFs, it is guaranteed that λ′ has a value that achieves the global maximum of L1(λ′|Θ). Moreover, we can obtain the same form of gradient as that of supervised CRFs (Sha and Pereira, 2003), that is, ∇L1(λ′|Θ) = E ˜ P (Y,X;λ′,Θ) £ h(Y, X) ¤ − X n EP (Y|xn;λ′,Θ) £ h(Y, xn) ¤ +∇log p(λ′). Thus, we can easily optimize L1 by using the forward-backward algorithm since this paper solely 666 focuses on a sequence model and a gradient-based optimization algorithm in the same manner as those used in supervised CRF parameter estimation. We cannot naturally incorporate unlabeled data into standard discriminative learning methods since the correct outputs y for unlabeled data are unknown. On the other hand with a generative approach, a well-known way to achieve this incorporation is to use maximum marginal likelihood (MML) parameter estimation, i.e., (Nigam et al., 2000). Given unlabeled data Du = {xm}M m=1, MML estimation in our setting maximizes the marginal distribution of a joint PM over a missing (hidden) variable y, namely, it maximizes P m log P y∈Y p(xm, y; θ). Following this idea, there have been introduced a parameter estimation approach for non-generative approaches that can effectively incorporate unlabeled data (Suzuki et al., 2007). Here, we refer to it as ‘Maximum Discriminant Functions sum’ (MDF) parameter estimation. MDF estimation substitutes p(x, y) with discriminant functions g(x, y). Therefore, to estimate the parameter Θ of JESS-CM by using MDF estimation, the following objective function is maximized with a fixed λ′: L2(Θ|λ′) = X m log X y∈Y g(xm, y; λ′, Θ) + log p(Θ), where p(Θ) is a prior probability distribution of Θ. Since the normalization factor does not affect the determination of y, the discriminant function of JESS-CM shown in Equation 2 is defined as g(x, y; λ′, Θ) = Q c∈C Ψ′ c(yc, x; λ′, Θ). With a fixed λ′, the local maximum of L2(Θ|λ′) around the initialized value of Θ can be estimated by an iterative computation such as the EM algorithm (Dempster et al., 1977). 2.3 Scalability: Efficient Training Algorithm A parameter estimation algorithm of λ′ and Θ can be obtained by maximizing the objective functions L1(λ′|Θ) and L2(Θ|λ′) iteratively and alternately. Figure 1 summarizes an algorithm for estimating λ′ and Θ for JESS-CM. This paper considers a situation where there are many more unlabeled data M than labeled data N, that is, N << M. This means that the calculation cost for unlabeled data is dominant. Thus, in order to make the overall parameter estimation procedure Input: training data D = {Dl, Du} where labeled data Dl = {(xn, yn)}N n=1, and unlabeled data Du = {xm}M m=1 Initialize: Θ(0) ←uniform distribution, t ←0 do 1. t ←t + 1 2. (Re)estimate λ′: maximize L1(λ′|Θ) with fixed Θ←Θ(t−1) using Dl. 3. Estimate Θ(t): (Initial values = Θ(t−1)) update one step toward maximizing L2(Θ|λ′) with fixed λ′ using Du. do until |Θ(t)−Θ(t−1)| |Θ(t−1)| < ϵ. Reestimate λ′: perform the same procedure as 1. Output: a JESS-CM, P(y|x, λ′, Θ(t)). 
Figure 1: Parameter estimation algorithm for JESS-CM. scalable for handling large scale unlabeled data, we only perform one step of MDF estimation for each t as explained on 3. in Figure 1. In addition, the calculation cost for estimating parameters of embedded joint PMs (HMMs) is independent of the number of HMMs, J, that we used (Suzuki et al., 2007). As a result, the cost for calculating the JESS-CM parameters, λ′ and Θ, is essentially the same as executing T iterations of the MML estimation for a single HMM using the EM algorithm plus T + 1 time optimizations of the MAP estimation for a conventional supervised CRF if it converged when t = T. In addition, our parameter estimation algorithm can be easily performed in parallel computation. 2.4 Comparison with Hybrid Model SSL based on a hybrid generative/discriminative approach proposed in (Suzuki et al., 2007) has been defined as a log-linear model that discriminatively combines several discriminative models, pD i , and generative models, pG j , such that: R(y|x; Λ, Θ, Γ) = Q i pD i (y|x; λi)γi Q j pG j (xj, y; θj)γj P y Q i pD i (y|x; λi)γi Q j pG j (xj, y; θj)γj , where Λ={λi}I i=1, and Γ={{γi}I i=1, {γj}I+J j=I+1}. With the hybrid model, if we use the same labeled training data to estimate both Λ and Γ, γjs will become negligible (zero or nearly zero) since pD i is already fitted to the labeled training data while pG j are trained by using unlabeled data. As a solution, a given amount of labeled training data is divided into two distinct sets, i.e., 4/5 for estimating Λ, and the 667 remaining 1/5 for estimating Γ (Suzuki et al., 2007). Moreover, it is necessary to split features into several sets, and then train several corresponding discriminative models separately and preliminarily. In contrast, JESS-CM is free from this kind of additional process, and the entire parameter estimation procedure can be performed in a single pass. Surprisingly, although JESS-CM is a simpler version of the hybrid model in terms of model structure and parameter estimation procedure, JESS-CM provides F-scores of 94.45 and 88.03 for CoNLL’00 and ’03 data, respectively, which are 0.15 and 0.83 points higher than those reported in (Suzuki et al., 2007) for the same configurations. This performance improvement is basically derived from the full benefit of using labeled training data for estimating the parameter of the conditional model while the combination weights, Γ, of the hybrid model are estimated solely by using 1/5 of the labeled training data. These facts indicate that JESS-CM has several advantageous characteristics compared with the hybrid model. 3 Experiments In our experiments, we report POS tagging, syntactic chunking and NER performance incorporating up to 1G-words of unlabeled data. 3.1 Data Set To compare the performance with that of previous studies, we selected widely used test collections. For our POS tagging experiments, we used the Wall Street Journal in PTB III (Marcus et al., 1994) with the same data split as used in (Shen et al., 2007). For our syntactic chunking and NER experiments, we used exactly the same training, development and test data as those provided for the shared tasks of CoNLL’00 (Tjong Kim Sang and Buchholz, 2000) and CoNLL’03 (Tjong Kim Sang and Meulder, 2003), respectively. The training, development and test data are detailed in Table 11 . The unlabeled data for our experiments was taken from the Reuters corpus, TIPSTER corpus (LDC93T3C) and the English Gigaword corpus, third edition (LDC2007T07). 
As regards the TIP1The second-order encoding used in our NER experiments is the same as that described in (Sha and Pereira, 2003) except removing IOB-tag of previous position label. (a) POS-tagging: (WSJ in PTB III) # of labels 45 Data set (WSJ sec. IDs) # of sent. # of words Training 0–18 38,219 912,344 Development 19–21 5,527 131,768 Test 22–24 5,462 129,654 (b) Chunking: (WSJ in PTB III: CoNLL’00 shared task data) # of labels 23 (w/ IOB-tagging) Data set (WSJ sec. IDs) # of sent. # of words Training 15–18 8,936 211,727 Development N/A N/A N/A Test 20 2,012 47,377 (c) NER: (Reuters Corpus: CoNLL’03 shared task data) # of labels 29 (w/ IOB-tagging+2nd-order encoding) Data set (time period) # of sent. # of words Training 22–30/08/96 14,987 203,621 Development 30–31/08/96 3,466 51,362 Test 06–07/12/96 3,684 46,435 Table 1: Details of training, development, and test data (labeled data set) used in our experiments data abbr. (time period) # of sent. # of words Tipster wsj 04/90–03/92 1,624,744 36,725,301 Reuters reu 09/96–08/97* 13,747,227 215,510,564 Corpus *(excluding 06–07/12/96) English afp 05/94–12/96 5,510,730 135,041,450 Gigaword apw 11/94–12/96 7,207,790 154,024,679 ltw 04/94–12/96 3,094,290 72,928,537 nyt 07/94–12/96 15,977,991 357,952,297 xin 01/95–12/96 1,740,832 40,078,312 total all 48,903,604 1,012,261,140 Table 2: Unlabeled data used in our experiments STER corpus, we extracted all the Wall Street Journal articles published between 1990 and 1992. With the English Gigaword corpus, we extracted articles from five news sources published between 1994 and 1996. The unlabeled data used in this paper is detailed in Table 2. Note that the total size of the unlabeled data reaches 1G-words (one billion tokens). 3.2 Design of JESS-CM We used the same graph structure as the linear chain CRF for JESS-CM. As regards the design of the feature functions fi, Table 3 shows the feature templates used in our experiments. In the table, s indicates a focused token position. Xs−1:s represents the bi-gram of feature X obtained from s −1 and s positions. {Xu}B u=A indicates that u ranges from A to B. For example, {Xu}s+2 u=s−2 is equal to five feature templates, {Xs−2, Xs−1, Xs, Xs+1, Xs+2}. ‘word type’ or wtp represents features of a word such as capitalization, the existence of digits, and punctuation as shown in (Sutton et al., 2006) without regular expressions. 
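As a rough illustration of how such templates expand into concrete features at a focused position s, consider the sketch below (our own simplification; the actual template set is the one listed in Table 3, and every feature is additionally conjoined with the label unigram y_s or bigram y_{s−1:s}):

```python
def word_type(w):
    """A rough stand-in for the 'wtp' feature: capitalization, digits, punctuation.
    Illustrative only -- the paper follows (Sutton et al., 2006) for the exact definition."""
    return ("C" if w[:1].isupper() else "c") + \
           ("D" if any(ch.isdigit() for ch in w) else "-") + \
           ("P" if any(not ch.isalnum() for ch in w) else "-")

def window_features(words, s):
    """Expand window templates such as {wd_u} for u = s-2..s+2 and the bigram
    templates wd_{u-1:u} for u = s-1..s+2 into concrete feature strings."""
    feats = []
    for u in range(s - 2, s + 3):                       # unigram templates
        if 0 <= u < len(words):
            feats.append("wd[%+d]=%s" % (u - s, words[u]))
            feats.append("wtp[%+d]=%s" % (u - s, word_type(words[u])))
    for u in range(s - 1, s + 3):                       # bigram templates wd_{u-1:u}
        if 1 <= u < len(words):
            feats.append("wd[%+d:%+d]=%s_%s" % (u - 1 - s, u - s, words[u - 1], words[u]))
    return feats
```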
Although it is common to use external 668 (a) POS tagging:(total 47 templates) [ys], [ys−1:s], {[ys, pf-Ns], [ys, sf-Ns]}9 N=1, {[ys, wdu], [ys, wtpu], [ys−1:s, wtpu]}s+2 u=s−2, {[ys, wdu−1:u], [ys, wtpu−1:u], [ys−1:s, wtpu−1:u]}s+2 u=s−1 (b) Syntactic chunking: (total 39 templates) [ys], [ys−1:s], {[ys, wdu], [ys, posu], [ys, wdu, posu], [ys−1:s, wdu], [ys−1:s, posu]}s+2 u=s−2, {[ys, wdu−1:u], [ys, posu−1:u], {[ys−1:s, posu−1:u]}s+2 u=s−1, (c) NER: (total 79 templates) [ys], [ys−1:s], {[ys, wdu], [ys, lwdu], [ys, posu], [ys, wtpu], [ys−1:s, lwdu], [ys−1:s, posu], [ys−1:s, wtpu]}s+2 u=s−2, {[ys, lwdu−1:u], [ys, posu−1:u], [ys, wtpu−1:u], [ys−1:s, posu−1:u], [ys−1:s, wtpu−1:u]}s+2 u=s−1, [ys, poss−1:s:s+1], [ys, wtps−1:s:s+1], [ys−1:s, poss−1:s:s+1], [ys−1:s, wtps−1:s:s+1], [ys, wd4ls], [ys, wd4rs], {[ys, pf-Ns], [ys, sf-Ns], [ys−1:s, pf-Ns], [ys−1:s, sf-Ns]}4 N=1 wd: word, pos: part-of-speech lwd : lowercase of word, wtp: ‘word type’, wd4{l,r}: words within the left or right 4 tokens {pf,sf}-N: N character prefix or suffix of word Table 3: Feature templates used in our experiments                                                          (a) Influence of η (b) Changes in performance in Dirichlet prior and convergence property Figure 2: Typical behavior of tunable parameters resources such as gazetteers for NER, we used none. All our features can be automatically extracted from the given training data. 3.3 Design of Joint PMs (HMMs) We used first order HMMs for embedded joint PMs since we assume that they have the same graph structure as JESS-CM as described in Section 2.2. To reduce the required human effort, we simply used the feature templates shown in Table 3 to generate the features of the HMMs. With our design, one feature template corresponded to one HMM. This design preserves the feature whereby each HMM emits a single symbol from a single state (or transition). We can easily ignore overlapping features that appear in a single HMM. As a result, 47, 39 and 79 distinct HMMs are embedded in the potential functions of JESS-CM for POS tagging, chunking and NER experiments, respectively. 3.4 Tunable Parameters In our experiments, we selected Gaussian and Dirichlet priors as the prior distributions in L1 and L2, respectively. This means that JESS-CM has two tunable parameters, σ2 and η, in the Gaussian and Dirichlet priors, respectively. The values of these tunable parameters are chosen by employing a binary line search. We used the value for the best performance with the development set2. However, it may be computationally unrealistic to retrain the entire procedure several times using 1G-words of unlabeled data. Therefore, these tunable parameter values are selected using a relatively small amount of unlabeled data (17M-words), and we used the selected values in all our experiments. The left graph in Figure 2 shows typical η behavior. The left end is equivalent to optimizing L2 without a prior, and the right end is almost equivalent to considering pj(xj, y) for all j to be a uniform distribution. This is why it appears to be bounded by the performance obtained from supervised CRF. We omitted the influence of σ2 because of space constraints, but its behavior is nearly the same as that of supervised CRF. Unfortunately, L2(Θ|λ′) may have two or more local maxima. Our parameter estimation procedure does not guarantee to provide either the global optimum or a convergence solution in Θ and λ′ space. An example of non-convergence is the oscillation of the estimated Θ. 
That is, Θ traverses two or more local maxima. Therefore, we examined its convergence property experimentally. The right graph in Figure 2 shows a typical convergence property. Fortunately, in all our experiments, JESS-CM converged in a small number of iterations. No oscillation is observed here. 4 Results and Discussion 4.1 Impact of Unlabeled Data Size Table 4 shows the performance of JESS-CM using 1G-words of unlabeled data and the performance gain compared with supervised CRF, which is trained under the same conditions as JESS-CM except that joint PMs are not incorporated. We emphasize that our model achieved these large improvements solely using unlabeled data as additional resources, without introducing a sophisticated model, deep feature engineering, handling external hand2Since CoNLL’00 shared task data has no development set, we divided the labeled training data into two distinct sets, 4/5 for training and the remainder for the development set, and determined the tunable parameters in preliminary experiments. 669 (a) POS tagging (b) Chunking (c) NER measures label accuracy entire sent. acc. Fβ=1 sent. acc. Fβ=1 entire sent. acc. eval. data dev. test dev. test test test dev. test dev. test JESS-CM (CRF/HMM) 97.35 97.40 56.34 57.01 95.15 65.06 94.48 89.92 91.17 85.12 (gain from supervised CRF) (+0.17) (+0.19) (+1.90) (+1.63) (+1.27) (+4.92) (+2.74) (+3.57) (+3.46) (+3.96) Table 4: Results for POS tagging (PTB III data), syntactic chunking (CoNLL’00 data), and NER (CoNLL’03 data) incorporated with 1G-words of unlabeled data, and the performance gain from supervised CRF                                                                                                             (a) POS tagging (b) Syntactic chunking (c) NER Figure 3: Performance changes with respect to unlabeled data size in JESS-CM crafted resources, or task dependent human knowledge (except for the feature design). Our method can greatly reduce the human effort needed to obtain a high performance tagger or chunker. Figure 3 shows the learning curves of JESS-CM with respect to the size of the unlabeled data, where the x-axis is on the logarithmic scale of the unlabeled data size (Mega-word). The scale at the top of the graph shows the ratio of the unlabeled data size to the labeled data size. We observe that a small amount of unlabeled data hardly improved the performance since the supervised CRF results are competitive. It seems that we require at least dozens of times more unlabeled data than labeled training data to provide a significant performance improvement. The most important and interesting behavior is that the performance improvements against the unlabeled data size are almost linear on a logarithmic scale within the size of the unlabeled data used in our experiments. Moreover, there is a possibility that the performance is still unsaturated at the 1G-word unlabeled data point. This suggests that increasing the unlabeled data in JESS-CM may further improve the performance. Suppose J=1, the discriminant function of JESSCM is g(x, y) = A(x, y)p1(x1, y; θ1)λI+1 where A(x, y) = exp(λ · P c f c(yc, x)). Note that both A(x, y) and λI+j are given and fixed during the MDF estimation of joint PM parameters Θ. Therefore, the MDF estimation in JESS-CM can be regarded as a variant of the MML estimation (see Section 2.2), namely, it is MML estimation with a bias, A(x, y), and smooth factors, λI+j. 
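Written out for J = 1 (our rearrangement, with the prior over Θ omitted), the two objectives being compared are:

```latex
% J = 1 case of L_2 versus plain MML estimation (prior over \Theta omitted):
\mathcal{L}_2(\theta_1 \mid \lambda')
   = \sum_m \log \sum_{y \in \mathcal{Y}}
     A(x_m, y)\, p_1\bigl(x_{1m}, y; \theta_1\bigr)^{\lambda_{I+1}},
\qquad
\mathcal{L}_{\mathrm{MML}}(\theta_1)
   = \sum_m \log \sum_{y \in \mathcal{Y}} p_1\bigl(x_{1m}, y; \theta_1\bigr).
```

Setting A(x, y) ≡ 1 and λ_{I+1} = 1 recovers plain MML estimation of the single HMM, so the CRF-derived bias and the smoothing exponent are the only differences.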
MML estimation can be seen as modeling p(x) since it is equivalent to maximizing P m log p(xm) with marginalized hidden variables y, where P y∈Y p(x, y) = p(x). Generally, more data will lead to a more accurate model of p(x). With our method, as with modeling p(x) in MML estimation, more unlabeled data is preferable since it may provide more accurate modeling. This also means that it provides better ‘clusters’ over the output space since Y is used as hidden states in HMMs. These are intuitive explanations as to why more unlabeled data in JESS-CM produces better performance. 4.2 Expected Performance for Unseen Data We try to investigate the impact of unlabeled data on the performance of unseen data. We divide the test set (or the development set) into two disjoint sets: L.app and L.neg app. L.app is a set of sentences constructed by words that all appeared in the Labeled training data. L.¬app is a set of sentences that have at least one word that does not appear in the Labeled training data. Table 5 shows the performance with these two sets obtained from both supervised CRF and JESSCM with 1G-word unlabeled data. As the supervised CRF results, the performance of the L.¬app sets is consistently much lower than that of the cor670 (a) POS tagging (b) Chunking (c) NER eval. data development test test development test L.¬app L.app L.¬app L.app L.¬app L.app L.¬app L.app L.¬app L.app rates of sentences (46.1%) (53.9%) (40.4%) (59.6%) (70.7%) (29.3%) (54.3%) (45.7%) (64.3%) (35.7%) supervised CRF (baseline) 46.78 60.99 48.57 60.01 56.92 67.91 79.60 97.35 75.69 91.03 JESS-CM (CRF/HMM) 49.02 62.60 50.79 61.24 62.47 71.30 85.87 97.47 80.84 92.85 (gain from supervised CRF) (+2.24) (+1.61) (+2.22) (+1.23) (+5.55) (+3.40) (+6.27) (+0.12) (+5.15) (+1.82) U.app 83.7% 96.3% 84.3% 95.8% 89.5% 99.2% 95.3% 99.8% 94.9% 100.0% Table 5: Comparison with L.¬app and L.app sets obtained from both supervised CRF and JESS-CM with 1G-word unlabeled data evaluated by the entire sentence accuracies, and the ratio of U.app. unlab. data dev (Aug. 30-31) test (Dec. 06-07) (period) #sent. #wds Fβ=1 U.app Fβ=1 U.app reu(Sep.) 1.0M 17M 93.50 82.0% 88.27 69.7% reu(Oct.) 1.3M 20M 93.04 71.0% 88.82 72.0% reu(Nov.) 1.2M 18M 92.94 68.7% 89.08 74.3% reu(Dec.)* 9M 15M 92.91 67.0% 89.29 84.4% Table 6: Influence of U.app in NER experiments: *(excluding Dec. 06-07) responding L.app sets. Moreover, we can observe that the ratios of L.¬app are not so small; nearly half (46.1% and 40.4%) in the PTB III data, and more than half (70.7%, 54.3% and 64.3%) in CoNLL’00 and ’03 data, respectively. This indicates that words not appearing in the labeled training data are really harmful for supervised learning. Although the performance with L.¬app sets is still poorer than with L.app sets, the JESS-CM results indicate that the introduction of unlabeled data effectively improves the performance of L.¬app sets, even more than that of L.app sets. These improvements are essentially very important; when a tagger and chunker are actually used, input data can be obtained from anywhere and this may mostly include words that do not appear in the given labeled training data since the labeled training data is limited and difficult to increase. This means that the improved performance of L.¬app can link directly to actual use. Table 5 also shows the ratios of sentences that are constructed from words that all appeared in the 1G-word Unlabeled data used in our experiments (U.app) in the L.¬app and L.app. 
This indicates that most of the words in the development or test sets are covered by the 1G-word unlabeled data. This may be the main reason for JESS-CM providing large performance gains for both the overall and L.¬app set performance of all three tasks. Table 6 shows the relation between JESS-CM performance and U.app in the NER experiments. The development data and test data were obtained from system dev. test additional resources JESS-CM (CRF/HMM) 97.35 97.40 1G-word unlabeled data (Shen et al., 2007) 97.28 97.33 – (Toutanova et al., 2003) 97.15 97.24 crude company name detector [sup. CRF (baseline)] 97.18 97.21 – Table 7: POS tagging results of the previous top systems for PTB III data evaluated by label accuracy system test additional resources JESS-CM (CRF/HMM) 95.15 1G-word unlabeled data 94.67 15M-word unlabeled data (Ando and Zhang, 2005) 94.39 15M-word unlabeled data (Suzuki et al., 2007) 94.36 17M-word unlabeled data (Zhang et al., 2002) 94.17 full parser output (Kudo and Matsumoto, 2001) 93.91 – [supervised CRF (baseline)] 93.88 – Table 8: Syntactic chunking results of the previous top systems for CoNLL’00 shared task data (Fβ=1 score) 30-31 Aug. 1996 and 6-7 Dec. 1996 Reuters news articles, respectively. We find that temporal proximity leads to better performance. This aspect can also be explained as U.app. Basically, the U.app increase leads to improved performance. The evidence provided by the above experiments implies that increasing the coverage of unlabeled data offers the strong possibility of increasing the expected performance of unseen data. Thus, it strongly encourages us to use an SSL approach that includes JESS-CM to construct a general tagger and chunker for actual use. 5 Comparison with Previous Top Systems and Related Work In POS tagging, the previous best performance was reported by (Shen et al., 2007) as summarized in Table 7. Their method uses a novel sophisticated model that learns both decoding order and labeling, while our model uses a standard first order Markov model. Despite using such a simple model, our method can provide a better result with the help of unlabeled data. 671 system dev. test additional resources JESS-CM (CRF/HMM) 94.48 89.92 1G-word unlabeled data 93.66 89.36 37M-word unlabeled data (Ando and Zhang, 2005) 93.15 89.31 27M-word unlabeled data (Florian et al., 2003) 93.87 88.76 own large gazetteers, 2M-word labeled data (Suzuki et al., 2007) N/A 88.41 27M-word unlabeled data [sup. CRF (baseline)] 91.74 86.35 – Table 9: NER results of the previous top systems for CoNLL’03 shared task data evaluated by Fβ=1 score As shown in Tables 8 and 9, the previous best performance for syntactic chunking and NER was reported by (Ando and Zhang, 2005), and is referred to as ‘ASO-semi’. ASO-semi also incorporates unlabeled data solely as additional information in the same way as JESS-CM. ASO-semi uses unlabeled data for constructing auxiliary problems that are expected to capture a good feature representation of the target problem. As regards syntactic chunking, JESS-CM significantly outperformed ASO-semi for the same 15M-word unlabeled data size obtained from the Wall Street Journal in 1991 as described in (Ando and Zhang, 2005). Unfortunately with NER, JESS-CM is slightly inferior to ASO-semi for the same 27M-word unlabeled data size extracted from the Reuters corpus. In fact, JESS-CM using 37M-words of unlabeled data provided a comparable result. 
We observed that ASOsemi prefers ‘nugget extraction’ tasks to ’field segmentation’ tasks (Grenager et al., 2005). We cannot provide details here owing to the space limitation. Intuitively, their word prediction auxiliary problems can capture only a limited number of characteristic behaviors because the auxiliary problems are constructed by a limited number of ‘binary’ classifiers. Moreover, we should remember that ASOsemi used the human knowledge that ‘named entities mostly consist of nouns or adjectives’ during the auxiliary problem construction in their NER experiments. In contrast, our results require no such additional knowledge or limitation. In addition, the design and training of auxiliary problems as well as calculating SVD are too costly when the size of the unlabeled data increases. These facts imply that our SSL framework is rather appropriate for handling large scale unlabeled data. On the other hand, ASO-semi and JESS-CM have an important common feature. That is, both methods discriminatively combine models trained by using unlabeled data in order to create informative feature representation for discriminative learning. Unlike self/co-training approaches (Blum and Mitchell, 1998), which use estimated labels as ‘correct labels’, this approach automatically judges the reliability of additional features obtained from unlabeled data in terms of discriminative training. Ando and Zhang (2007) have also pointed out that this methodology seems to be one key to achieving higher performance in NLP applications. There is an approach that combines individually and independently trained joint PMs into a discriminative model (Li and McCallum, 2005). There is an essential difference between this method and JESSCM. We categorize their approach as an ‘indirect approach’ since the outputs of the target task, y, are not considered during the unlabeled data incorporation. Note that ASO-semi is also an ‘indirect approach’. On the other hand, our approach is a ‘direct approach’ because the distribution of y obtained from JESS-CM is used as ‘seeds’ of hidden states during MDF estimation for join PM parameters (see Section 4.1). In addition, MDF estimation over unlabeled data can effectively incorporate the ‘labeled’ training data information via a ‘bias’ since λ included in A(x, y) is estimated from labeled training data. 6 Conclusion We proposed a simple yet powerful semi-supervised conditional model, which we call JESS-CM. It is applicable to large amounts of unlabeled data, for example, at the giga-word level. Experimental results obtained by using JESS-CM incorporating 1Gwords of unlabeled data have provided the current best performance as regards POS tagging, syntactic chunking, and NER for widely used large test collections such as PTB III, CoNLL’00 and ’03 shared task data, respectively. We also provided evidence that the use of more unlabeled data in SSL can lead to further improvements. Moreover, our experimental analysis revealed that it may also induce an improvement in the expected performance for unseen data in terms of the unlabeled data coverage. Our results may encourage the adoption of the SSL method for many other real world applications. 672 References R. Ando and T. Zhang. 2005. A High-Performance Semi-Supervised Learning Method for Text Chunking. In Proc. of ACL-2005, pages 1–9. R. Ando and T. Zhang. 2007. Two-view Feature Generation Model for Semi-supervised Learning. In Proc. of ICML-2007, pages 25–32. A. Blum and T. Mitchell. 1998. 
Combining Labeled and Unlabeled Data with Co-Training. In Conference on Computational Learning Theory 11.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum Likelihood from Incomplete Data via the EM Algorithm. Journal of the Royal Statistical Society, Series B, 39:1–38.
R. Florian, A. Ittycheriah, H. Jing, and T. Zhang. 2003. Named Entity Recognition through Classifier Combination. In Proc. of CoNLL-2003, pages 168–171.
T. Grenager, D. Klein, and C. Manning. 2005. Unsupervised Learning of Field Segmentation Models for Information Extraction. In Proc. of ACL-2005, pages 371–378.
T. Kudo and Y. Matsumoto. 2001. Chunking with Support Vector Machines. In Proc. of NAACL-2001, pages 192–199.
J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proc. of ICML-2001, pages 282–289.
W. Li and A. McCallum. 2005. Semi-Supervised Sequence Modeling with Syntactic Topic Models. In Proc. of AAAI-2005, pages 813–818.
M. P. Marcus, B. Santorini, and M. A. Marcinkiewicz. 1994. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330.
K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. 2000. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 39:103–134.
F. Sha and F. Pereira. 2003. Shallow Parsing with Conditional Random Fields. In Proc. of HLT-NAACL-2003, pages 213–220.
L. Shen, G. Satta, and A. Joshi. 2007. Guided Learning for Bidirectional Sequence Classification. In Proc. of ACL-2007, pages 760–767.
C. Sutton, M. Sindelar, and A. McCallum. 2006. Reducing Weight Undertraining in Structured Discriminative Learning. In Proc. of HLT-NAACL 2006, pages 89–95.
J. Suzuki, A. Fujino, and H. Isozaki. 2007. Semi-Supervised Structured Output Learning Based on a Hybrid Generative and Discriminative Approach. In Proc. of EMNLP-CoNLL, pages 791–800.
E. F. Tjong Kim Sang and S. Buchholz. 2000. Introduction to the CoNLL-2000 Shared Task: Chunking. In Proc. of CoNLL-2000 and LLL-2000, pages 127–132.
E. F. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of CoNLL-2003, pages 142–147.
K. Toutanova, D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In Proc. of HLT-NAACL-2003, pages 252–259.
T. Zhang, F. Damerau, and D. Johnson. 2002. Text Chunking based on a Generalization of Winnow. Journal of Machine Learning Research, 2:615–637.
2008
76
Proceedings of ACL-08: HLT, pages 674–682, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Large Scale Acquisition of Paraphrases for Learning Surface Patterns Rahul Bhagat∗ Information Sciences Institute University of Southern California Marina del Rey, CA [email protected] Deepak Ravichandran Google Inc. 1600 Amphitheatre Parkway Mountain View, CA [email protected] Abstract Paraphrases have proved to be useful in many applications, including Machine Translation, Question Answering, Summarization, and Information Retrieval. Paraphrase acquisition methods that use a single monolingual corpus often produce only syntactic paraphrases. We present a method for obtaining surface paraphrases, using a 150GB (25 billion words) monolingual corpus. Our method achieves an accuracy of around 70% on the paraphrase acquisition task. We further show that we can use these paraphrases to generate surface patterns for relation extraction. Our patterns are much more precise than those obtained by using a state of the art baseline and can extract relations with more than 80% precision for each of the test relations. 1 Introduction Paraphrases are textual expressions that convey the same meaning using different surface words. For example consider the following sentences: Google acquired YouTube. (1) Google completed the acquisition of YouTube. (2) Since they convey the same meaning, sentences (1) and (2) are sentence level paraphrases, and the phrases “acquired” and “completed the acquisition of” in (1) and (2) respectively are phrasal paraphrases. Paraphrases provide a way to capture the variability of language and hence play an important ∗Work done during an internship at Google Inc. role in many natural language processing (NLP) applications. For example, in question answering, paraphrases have been used to find multiple patterns that pinpoint the same answer (Ravichandran and Hovy, 2002); in statistical machine translation, they have been used to find translations for unseen source language phrases (Callison-Burch et al., 2006); in multi-document summarization, they have been used to identify phrases from different sentences that express the same information (Barzilay et al., 1999); in information retrieval they have been used for query expansion (Anick and Tipirneni, 1999). Learning paraphrases requires one to ensure identity of meaning. Since there are no adequate semantic interpretation systems available today, paraphrase acquisition techniques use some other mechanism as a kind of “pivot” to (help) ensure semantic identity. Each pivot mechanism selects phrases with similar meaning in a different characteristic way. A popular method, the so-called distributional similarity, is based on the dictum of Zelig Harris “you shall know the words by the company they keep”: given highly discriminating left and right contexts, only words with very similar meaning will be found to fit in between them. For paraphrasing, this has been often used to find syntactic transformations in parse trees that preserve (semantic) meaning. Another method is to use a bilingual dictionary or translation table as pivot mechanism: all source language words or phrases that translate to a given foreign word/phrase are deemed to be paraphrases of one another. In this paper we call the paraphrases that contain only words as surface paraphrases and those 674 that contain paths in a syntax tree as syntactic paraphrases. 
We here, present a method to acquire surface paraphrases from a single monolingual corpus. We use a large corpus (about 150GB) to overcome the data sparseness problem. To overcome the scalability problem, we pre-process the text with a simple parts-of-speech (POS) tagger and then apply locality sensitive hashing (LSH) (Charikar, 2002; Ravichandran et al., 2005) to speed up the remaining computation for paraphrase acquisition. Our experiments show results to verify the following main claim: Claim 1: Highly precise surface paraphrases can be obtained from a very large monolingual corpus. With this result, we further show that these paraphrases can be used to obtain high precision surface patterns that enable the discovery of relations in a minimally supervised way. Surface patterns are templates for extracting information from text. For example, if one wanted to extract a list of company acquisitions, “⟨ACQUIRER⟩acquired ⟨ACQUIREE⟩” would be one surface pattern with “⟨ACQUIRER⟩” and “⟨ACQUIREE⟩” as the slots to be extracted. Thus we can claim: Claim 2: These paraphrases can then be used for generating high precision surface patterns for relation extraction. 2 Related Work Most recent work in paraphrase acquisition is based on automatic acquisition. Barzilay and McKeown (2001) used a monolingual parallel corpus to obtain paraphrases. Bannard and Callison-Burch (2005) and Zhou et al. (2006) both employed a bilingual parallel corpus in which each foreign language word or phrase was a pivot to obtain source language paraphrases. Dolan et al. (2004) and Barzilay and Lee (2003) used comparable news articles to obtain sentence level paraphrases. All these approaches rely on the presence of parallel or comparable corpora and are thus limited by their availability and size. Lin and Pantel (2001) and Szpektor et al. (2004) proposed methods to obtain entailment templates by using a single monolingual resource. While both differ in their approaches, they both end up finding syntactic paraphrases. Their methods cannot be used if we cannot parse the data (either because of scale or data quality). Our approach on the other hand, finds surface paraphrases; it is more scalable and robust due to the use of simple POS tagging. Also, our use of locality sensitive hashing makes finding similar phrases in a large corpus feasible. Another task related to our work is relation extraction. Its aim is to extract instances of a given relation. Hearst (1992) the pioneering paper in the field used a small number of hand selected patterns to extract instances of hyponymy relation. Berland and Charniak (1999) used a similar method for extracting instances of meronymy relation. Ravichandran and Hovy (2002) used seed instances of a relation to automatically obtain surface patterns by querying the web. But their method often finds patterns that are too general (e.g., X and Y), resulting in low precision extractions. Rosenfeld and Feldman (2006) present a somewhat similar web based method that uses a combination of seed instances and seed patterns to learn good quality surface patterns. Both these methods differ from ours in that they learn relation patterns on the fly (from the web). Our method however, pre-computes paraphrases for a large set of surface patterns using distributional similarity over a large corpus and then obtains patterns for a relation by simply finding paraphrases (offline) for a few seed patterns. 
Using distributional similarity avoids the problem of obtaining overly general patterns and the pre-computation of paraphrases means that we can obtain the set of patterns for any relation instantaneously. Romano et al. (2006) and Sekine (2006) used syntactic paraphrases to obtain patterns for extracting relations. While procedurally different, both methods depend heavily on the performance of the syntax parser and require complex syntax tree matching to extract the relation instances. Our method on the other hand acquires surface patterns and thus avoids the dependence on a parser and syntactic matching. This also makes the extraction process scalable. 3 Acquiring Paraphrases This section describes our model for acquiring paraphrases from text. 675 3.1 Distributional Similarity Harris’s distributional hypothesis (Harris, 1954) has played an important role in lexical semantics. It states that words that appear in similar contexts tend to have similar meanings. In this paper, we apply the distributional hypothesis to phrases i.e. word ngrams. For example, consider the phrase “acquired” of the form “X acquired Y ”. Considering the context of this phrase, we might find {Google, eBay, Yahoo,...} in position X and {YouTube, Skype, Overture,...} in position Y . Now consider another phrase “completed the acquisition of”, again of the form “X completed the acquisition of Y ”. For this phrase, we might find {Google, eBay, Hilton Hotel corp.,...} in position X and {YouTube, Skype, Bally Entertainment Corp.,...} in position Y . Since the contexts of the two phrases are similar, our extension of the distributional hypothesis would assume that “acquired” and “completed the acquisition of” have similar meanings. 3.2 Paraphrase Learning Model Let p be a phrase (n-gram) of the form X p Y , where X and Y are the placeholders for words occurring on either side of p. Our first task is to find the set of phrases that are similar in meaning to p. Let P = {p1, p2, p3, ..., pl} be the set of all phrases of the form X pi Y where pi ∈P. Let Si,X be the set of words that occur in position X of pi and Si,Y be the set of words that occur in position Y of pi. Let Vi be the vector representing pi such that Vi = Si,X ∪Si,Y . Each word f ∈Vi has an associated score that measures the strength of the association of the word f with phrase pi; as do many others, we employ pointwise mutual information (Cover and Thomas, 1991) to measure this strength of association. pmi(pi; f) = log P (pi,f) P (pi)P (f) (1) The probabilities in equation (1) are calculated by using the maximum likelihood estimate over our corpus. Once we have the vectors for each phrase pi ∈P, we can find the paraphrases for each pi by finding its nearest neighbors. We use cosine similarity, which is a commonly used measure for finding similarity between two vectors. If we have two phrases pi ∈P and pj ∈P with the corresponding vectors Vi and Vj constructed as described above, the similarity between the two phrases is calculated as: sim(pi; pj) = Vi!Vj |Vi|∗|Vj| (2) Each word in Vi (and Vj) has with it an associated flag which indicates weather the word came from Si,X or Si,Y . Hence for each phrase pi of the form X pi Y , we have a corresponding phrase −pi that has the form Y pi X. This is important to find certain kinds of paraphrases. The following example will illustrate. Consider the sentences: Google acquired YouTube. (3) YouTube was bought by Google. (4) From sentence (3), we obtain two phrases: 1. 
pi = acquired which has the form “X acquired Y ” where “X = Google” and “Y = YouTube” 2. −pi = −acquired which has the form “Y acquired X” where “X = YouTube” and “Y = Google” Similarly, from sentence (4) we obtain two phrases: 1. pj = was bought by which has the form “X was bought by Y ” where “X = YouTube” and “Y = Google” 2. −pj = −was bought by which has the form “Y was bought by X” where “X = Google” and “Y = YouTube” The switching of X and Y positions in (3) and (4) ensures that “acquired” and “−was bought by” are found to be paraphrases by the algorithm. 3.3 Locality Sensitive Hashing As described in Section 3.2, we find paraphrases of a phrase pi by finding its nearest neighbors based on cosine similarity between the feature vector of pi and other phrases. To do this for all the phrases in the corpus, we’ll have to compute the similarity between all vector pairs. If n is the number of vectors and d is the dimensionality of the vector space, finding cosine similarity between each pair of vectors has time complexity O(n2d). This computation is infeasible for our corpus, since both n and d are large. 676 To solve this problem, we make use of Locality Sensitive Hashing (LSH). The basic idea behind LSH is that a LSH function creates a fingerprint for each vector such that if two vectors are similar, they are likely to have similar fingerprints. The LSH function we use here was proposed by Charikar (2002). It represents a d dimensional vector by a stream of b bits (b ≪d) and has the property of preserving the cosine similarity between vectors, which is exactly what we want. Ravichandran et al. (2005) have shown that by using the LSH nearest neighbors calculation can be done in O(nd) time.1. 4 Learning Surface Patterns Let r be a target relation. Our task is to find a set of surface patterns S = {s1, s2, ..., sn} that express the target relation. For example, consider the relation r = “acquisition”. We want to find the set of patterns S that express this relation: S = {⟨ACQUIRER⟩ acquired ⟨ACQUIREE⟩, ⟨ACQUIRER⟩bought ⟨ACQUIREE⟩, ⟨ACQUIREE⟩ was bought by ⟨ACQUIRER⟩,...}. The remainder of the section describes our model for learning surface patterns for target relations. 4.1 Model Assumption Paraphrases express the same meaning using different surface forms. So if one knew a pattern that expresses a target relation, one could build more patterns for that relation by finding paraphrases for the surface phrase(s) in that pattern. This is the basic assumption of our model. For example, consider the seed pattern “⟨ACQUIRER⟩ acquired ⟨ACQUIREE⟩” for the target relation “acquisition”. The surface phrase in the seed pattern is “acquired”. Our model then assumes that we can obtain more surface patterns for “acquisition” by replacing “acquired” in the seed pattern with its paraphrases i.e. {bought, −was bought by2,...}. The resulting surface patterns are: 1The details of the algorithm are omitted, but interested readers are encouraged to read Charikar (2002) and Ravichandran et al. (2005) 2The “−” in “−was bought by” indicates that the ⟨ACQUIRER⟩and ⟨ACQUIREE⟩arguments of the input phrase “acquired” need to be switched for the phrase “was bought by”. {⟨ACQUIRER⟩bought ⟨ACQUIREE⟩, ⟨ACQUIREE⟩ was bought by ⟨ACQUIRER⟩,...} 4.2 Surface Pattern Model Let r be a target relation. Let SEED = {seed1, seed2,..., seedn} be the set of seed patterns that express the target relation. For each seedi ∈SEED, we obtain the corresponding set of new patterns PATi in two steps: 1. 
We find the surface phrase, pi, using a seed and find the corresponding set of paraphrases, Pi = {pi,1, pi,2, ..., pi,m}. Each paraphrase, pi,j ∈Pi, has with it an associated score which is similarity between pi and pi,j. 2. In seed pattern, seedi, we replace the surface phrase, pi, with its paraphrases and obtain the set of new patterns PATi = {pati,1, pati,2, ..., pati,m}. Each pattern has with it an associated score, which is the same as the score of the paraphrase from which it was obtained3 . The patterns are ranked in the decreasing order of their scores. After we obtain PATi for each seedi ∈SEED, we obtain the complete set of patterns, PAT, for the target relation r as the union of all the individual pattern sets, i.e., PAT = PAT1 ∪PAT2 ∪... ∪ PATn. 5 Experimental Methodology In this section, we describe experiments to validate the main claims of the paper. We first describe paraphrase acquisition, we then summarize our method for learning surface patterns, and finally describe the use of patterns for extracting relation instances. 5.1 Paraphrases Finding surface variations in text requires a large corpus. The corpus needs to be orders of magnitude larger than that required for learning syntactic variations, since surface phrases are sparser than syntactic phrases. For our experiments, we used a corpus of about 150GB (25 billion words) obtained from Google News4 . It consists of few years worth of news data. 3If a pattern is generated from more than one seed, we assign it its average score. 4The corpus was cleaned to remove duplicate articles. 677 We POS tagged the corpus using Tnt tagger (Brants, 2000) and collected all phrases (n-grams) in the corpus that contained at least one verb, and had a noun or a noun-noun compound on either side. We restricted the phrase length to at most five words. We build a vector for each phrase as described in Section 3. To mitigate the problem of sparseness and co-reference to a certain extent, whenever we have a noun-noun compound in the X or Y positions, we treat it as bag of words. For example, in the sentence “Google Inc. acquired YouTube”, “Google” and “Inc.” will be treated as separate features in the vector5. Once we have constructed all the vectors, we find the paraphrases for every phrase by finding its nearest neighbors as described in Section 3. For our experiments, we set the number of random bits in the LSH function to 3000, and the similarity cut-off between vectors to 0.15. We eventually end up with a resource containing over 2.5 million phrases such that each phrase is connected to its paraphrases. 5.2 Surface Patterns One claim of this paper is that we can find good surface patterns for a target relation by starting with a seed pattern. To verify this, we study two target relations6: 1. Acquisition: We define this as the relation between two companies such that one company acquired the other. 2. Birthplace: We define this as the relation between a person and his/her birthplace. For “acquisition” relation, we start with the surface patterns containing only the words buy and acquire: 1. “⟨ACQUIRER⟩bought ⟨ACQUIREE⟩” (and its variants, i.e. buy, buys and buying) 2. “⟨ACQUIRER⟩acquired ⟨ACQUIREE⟩” (and its variants, i.e. acquire, acquires and acquiring) 5This adds some noise in the vectors, but we found that this results in better paraphrases. 6Since we have to do all the annotations for evaluations on our own, we restricted our experiments to only two commonly used relations. This results in a total of eight seed patterns. 
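Before the processing details, the scoring machinery of Section 3.2 can be summarized in a short Python sketch (hypothetical data structures; the paper replaces the exhaustive pairwise cosine computation with LSH to make it feasible at this scale):

```python
import math
from collections import defaultdict

def pmi_vectors(cooc, phrase_count, filler_count, total):
    """Build the PMI-weighted context vector V_i for each phrase p_i (Eq. 1).
    cooc[p][f]      -- count of filler word f in the X or Y slot of phrase p
                       (f carries a slot flag, e.g. ("X", "Google"))
    phrase_count[p] -- count of phrase p; filler_count[f] -- count of filler f
    total           -- total number of (phrase, filler) observations
    """
    vectors = defaultdict(dict)
    for p, fillers in cooc.items():
        for f, n in fillers.items():
            vectors[p][f] = math.log((n / total) /
                                     ((phrase_count[p] / total) * (filler_count[f] / total)))
    return vectors

def cosine(v1, v2):
    """sim(p_i; p_j) = V_i . V_j / (|V_i| |V_j|)  (Eq. 2)."""
    dot = sum(w * v2.get(f, 0.0) for f, w in v1.items())
    n1 = math.sqrt(sum(w * w for w in v1.values()))
    n2 = math.sqrt(sum(w * w for w in v2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Nearest neighbors of a phrase under this cosine score are its candidate paraphrases; the locality sensitive hashing of Section 3.3 is what makes that neighbor search tractable over millions of vectors.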
For “birthplace” relation, we start with two seed patterns: 1. “⟨PERSON⟩was born in ⟨LOCATION⟩” 2. “⟨PERSON⟩was born at ⟨LOCATION⟩”. We find other surface patterns for each of these relations by replacing the surface words in the seed patterns by their paraphrases, as described in Section 4. 5.3 Relation Extraction The purpose of learning surface patterns for a relation is to extract instances of that relation. We use the surface patterns obtained for the relations “acquisition” and “birthplace” to extract instances of these relations from the LDC North American News Corpus. This helps us to extrinsically evaluate the quality of the surface patterns. 6 Experimental Results In this section, we present the results of the experiments and analyze them. 6.1 Baselines It is hard to construct a baseline for comparing the quality of paraphrases, as there isn’t much work in extracting surface level paraphrases using a monolingual corpus. To overcome this, we show the effect of reduction in corpus size on the quality of paraphrases, and compare the results informally to the other methods that produce syntactic paraphrases. To compare the quality of the extraction patterns, and relation instances, we use the method presented by Ravichandran and Hovy (2002) as the baseline. For each of the given relations, “acquisition” and “birthplace”, we use 10 seed instances, download the top 1000 results from the Google search engine for each instance, extract the sentences that contain the instances, and learn the set of baseline patterns for each relation. We then apply these patterns to the test corpus and extract the corresponding baseline instances. 6.2 Evaluation Criteria Here we present the evaluation criteria we used to evaluate the performance on the different tasks. 678 Paraphrases We estimate the quality of paraphrases by annotating a random sample as correct/incorrect and calculating the accuracy. However, estimating the recall is difficult given that we do not have a complete set of paraphrases for the input phrases. Following Szpektor et al. (2004), instead of measuring recall, we calculate the average number of correct paraphrases per input phrase. Surface Patterns We can calculate the precision (P) of learned patterns for each relation by annotating the extracted patterns as correct/incorrect. However calculating the recall is a problem for the same reason as above. But we can calculate the relative recall (RR) of the system against the baseline and vice versa. The relative recall RRS|B of system S with respect to system B can be calculated as: RRS|B = CS∩CB CB where CS is the number of correct patterns found by our system and CB is the number of correct patterns found by the baseline. RRB|S can be found in a similar way. Relation Extraction We estimate the precision (P) of the extracted instances by annotating a random sample of instances as correct/incorrect. While calculating the true recall here is not possible, even calculating the true relative recall of the system against the baseline is not possible as we can annotate only a small sample. However, following Pantel et al. (2004), we assume that the recall of the baseline is 1 and estimate the relative recall RRS|B of the system S with respect to the baseline B using their respective precision scores PS and PB and number of instances extracted by them |S| and |B| as: RRS|B = PS∗|S| PB∗|B| 6.3 Gold Standard In this section, we describe the creation of gold standard for the different tasks. 
Paraphrases We created the gold standard paraphrase test set by randomly selecting 50 phrases and their corresponding paraphrases from our collection of 2.5 million phrases. For each test phrase, we asked two annotators to annotate its paraphrases as correct/incorrect. The annotators were instructed to look for strict paraphrases i.e. equivalent phrases that can be substituted for each other. To obtain the inter-annotator agreement, the two annotators annotated the test set separately. The kappa statistic (Siegal and Castellan Jr., 1988) was κ = 0.63. The interesting thing is that the annotators got this respectable kappa score without any prior training, which is hard to achieve when one annotates for a similar task like textual entailment. Surface Patterns For the target relations, we asked two annotators to annotate the patterns for each relation as either “precise” or “vague”. The annotators annotated the system as well as the baseline outputs. We consider the “precise” patterns as correct and the “vague” as incorrect. The intuition is that applying the vague patterns for extracting target relation instances might find some good instances, but will also find many bad ones. For example, consider the following two patterns for the “acquisition” relation: ⟨ACQUIRER⟩acquired ⟨ACQUIREE⟩ (5) ⟨ACQUIRER⟩and ⟨ACQUIREE⟩ (6) Example (5) is a precise pattern as it clearly identifies the “acquisition” relation while example (6) is a vague pattern because it is too general and says nothing about the “acquisition” relation. The kappa statistic between the two annotators for this task was κ = 0.72. Relation Extraction We randomly sampled 50 instances of the “acquisition” and “birthplace” relations from the system and the baseline outputs. We asked two annotators to annotate the instances as correct/incorrect. The annotators marked an instance as correct only if both the entities and the relation between them were correct. To make their task easier, the annotators were provided the context for each instance, and were free to use any resources at their disposal (including a web search engine), to verify the correctness of the instances. The annotators found that the annotation for this task was much easier than the previous two; the few disagreements they had were due to ambiguity of some of the instances. The kappa statistic for this task was κ = 0.91. 679 Annotator Accuracy Average # correct paraphrases Annotator 1 67.31% 4.2 Annotator 2 74.27% 4.28 Table 1: Quality of paraphrases are being distributed to approved a revision to the have been distributed to unanimously approved a new are being handed out to approved an annual were distributed to will consider adopting a −are handing out approved a revised will be distributed to all approved a new Table 2: Example paraphrases 6.4 Result Summary Table 1 shows the results of annotating the paraphrases test set. We do not have a baseline to compare against but we can analyze them in light of numbers reported previously for syntactic paraphrases. DIRT (Lin and Pantel, 2001) and TEASE (Szpektor et al., 2004) report accuracies of 50.1% and 44.3% respectively compared to our average accuracy across two annotators of 70.79%. The average number of paraphrases per phrase is however 10.1 and 5.5 for DIRT and TEASE respectively compared to our 4.2. One reason why this number is lower is that our test set contains completely random phrases from our set (2.5 million phrases): some of these phrases are rare and have very few paraphrases. 
Table 2 shows some paraphrases generated by our system for the phrases “are being distributed to” and “approved a revision to the”. Table 3 shows the results on the quality of surface patterns for the two relations. It can be observed that our method outperforms the baseline by a wide margin in both precision and relative recall. Table 4 shows some example patterns learned by our system. Table 5 shows the results of the quality of extracted instances. Our system obtains very high precision scores but suffers in relative recall given that the baseline with its very general patterns is likely to find a huge number of instances (though a very small portion of them are correct). Table 6 shows some example instances we extracted. acquisition birthplace X agreed to buy Y X , who was born in Y X , which acquired Y X , was born in Y X completed its acquisition of Y X was raised in Y X has acquired Y X was born in NNNNa in Y X purchased Y X , born in Y aEach “N” here is a placeholder for a number from 0 to 9. Table 4: Example extraction templates acquisition birthplace 1. Huntington Bancshares Inc. agreed to acquire Reliance Bank 1. Cyril Andrew Ponnamperuma was born in Galle 2. Sony bought Columbia Pictures 2. Cook was born in NNNN in Devonshire 3. Hanson Industries buys Kidde Inc. 3. Tansey was born in Cincinnati 4. Casino America inc. agreed to buy Grand Palais 4. Tsoi was born in NNNN in Uzbekistan 5. Tidewater inc. acquired Hornbeck Offshore Services Inc. 5. Mrs. Totenberg was born in San Francisco Table 6: Example instances 6.5 Discussion and Error Analysis We studied the effect of the decrease in size of the available raw corpus on the quality of the acquired paraphrases. We used about 10% of our original corpus to learn the surface paraphrases and evaluated them. The precision, and the average number of correct paraphrases are calculated on the same test set, as described in Section 6.2. The performance drop on using 10% of the original corpus is significant (11.41% precision and on an average 1 correct paraphrase per phrase), which shows that we indeed need a large amount of data to learn good quality surface paraphrases. One reason for this drop is also that when we use only 10% of the original data, for some of the phrases from the test set, we do not find any paraphrases (thus resulting in 0% accuracy for them). This is not unexpected, as the larger resource would have a much larger recall, which again points at the advantage of using a large data set. Another reason for this performance drop could be the parameter settings: We found that the quality of learned paraphrases depended greatly on the various cut-offs used. While we adjusted our model 680 Relation Method # Patterns Annotator 1 Annotator 2 P RR P RR Acquisition Baseline 160 55% 13.02% 60% 11.16% Paraphrase Method 231 83.11% 28.40% 93.07% 25% Birthplace Baseline 16 31.35% 15.38% 31.25% 15.38% Paraphrase Method 16 81.25% 40% 81.25% 40% Table 3: Quality of Extraction Patterns Relation Method # Patterns Annotator 1 Annotator 2 P RR P RR Acquisition Baseline 1, 261, 986 6% 100% 2% 100% Paraphrase Method 3875 88% 4.5% 82% 12.59% Birthplace Baseline 979, 607 4% 100% 2% 100% Paraphrase Method 1811 98% 4.53% 98% 9.06% Table 5: Quality of instances parameters for working with smaller sized data, it is conceivable that we did not find the ideal setting for them. So we consider these numbers to be a lower bound. But even then, these numbers clearly indicate the advantage of using more data. We also manually inspected our paraphrases. 
We found that the problem of “antonyms” was somewhat less pronounced due to our use of a large corpus, but they still were the major source of error. For example, our system finds the phrase “sell” as a paraphrase for “buy”. We need to deal with this problem separately in the future (may be as a postprocessing step using a list of antonyms). Moving to the task of relation extraction, we see from table 5 that our system has a much lower relative recall compared to the baseline. This was expected as the baseline method learns some very general patterns, which are likely to extract some good instances, even though they result in a huge hit to its precision. However, our system was able to obtain this performance using very few seeds. So an increase in the number of input seeds, is likely to increase the relative recall of the resource. The question however remains as to what good seeds might be. It is clear that it is much harder to come up with good seed patterns (that our system needs), than seed instances (that the baseline needs). But there are some obvious ways to overcome this problem. One way is to bootstrap. We can look at the paraphrases of the seed patterns and use them to obtain more patterns. Our initial experiments with this method using handpicked seeds showed good promise. However, we need to investigate automating this approach. Another method is to use the good patterns from the baseline system and use them as seeds for our system. We plan to investigate this approach as well. One reason, why we have seen good preliminary results using these approaches (for improving recall), we believe, is that the precision of the paraphrases is good. So either a seed doesn’t produce any new patterns or it produces good patterns, thus keeping the precision of the system high while increasing relative recall. 7 Conclusion Paraphrases are an important technique to handle variations in language. Given their utility in many NLP tasks, it is desirable that we come up with methods that produce good quality paraphrases. We believe that the paraphrase acquisition method presented here is a step towards this very goal. We have shown that high precision surface paraphrases can be obtained by using distributional similarity on a large corpus. We made use of some recent advances in theoretical computer science to make this task scalable. We have also shown that these paraphrases can be used to obtain high precision extraction patterns for information extraction. While we believe that more work needs to be done to improve the system recall (some of which we are investigating), this seems to be a good first step towards developing a minimally supervised, easy to implement, and scalable relation extraction system. 681 References P. G. Anick and S. Tipirneni. 1999. The paraphrase search assistant: terminological feedback for iterative information seeking. In ACM SIGIR, pages 153–159. C. Bannard and C. Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Association for Computational Linguistics, pages 597–604. R. Barzilay and L. Lee. 2003. Learning to paraphrase: an unsupervised approach using multiple-sequence alignment. In In Proceedings North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 16–23. R. Barzilay and K. R. McKeown. 2001. Extracting paraphrases from a parallel corpus. In In Proceedings of Association for Computational Linguistics, pages 50– 57. R. Barzilay, K. R. McKeown, and M. Elhadad. 1999. 
Information fusion in the context of multi-document summarization. In Association for Computational Linguistics, pages 550–557. M. Berland and E. Charniak. 1999. Finding parts in very large corpora. In In Proceedings of Association for Computational Linguistics, pages 57–64. T. Brants. 2000. Tnt – a statistical part-of-speech tagger. In In Proceedings of the Applied NLP Conference (ANLP). C. Callison-Burch, P. Koehn, and M. Osborne. 2006. Improved statistical machine translation using paraphrases. In Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 17–24. M. S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In In Proceedings of the thiry-fourth annual ACM symposium on Theory of computing, pages 380–388. T.M. Cover and J.A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons. B. Dolan, C. Quirk, and C. Brockett. 2004. Unsupervised construction of large paraphrase corpora: exploiting massively parallel news sources. In In Proceedings of the conference on Computational Linguistics (COLING), pages 350–357. Z. Harris. 1954. Distributional structure. Word, pages 10(23):146–162. M. A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the conference on Computational linguistics, pages 539–545. D. Lin and P. Pantel. 2001. Dirt: Discovery of inference rules from text. In ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323–328. P. Pantel, D. Ravichandran, and E.H. Hovy. 2004. Towards terascale knowledge acquisition. In In Proceedings of the conference on Computational Linguistics (COLING), pages 771–778. D. Ravichandran and E.H. Hovy. 2002. Learning surface text for a question answering system. In Association for Computational Linguistics (ACL), Philadelphia, PA. D. Ravichandran, P. Pantel, and E.H. Hovy. 2005. Randomized algorithms and nlp: using locality sensitive hash function for high speed noun clustering. In In Proceedings of Association for Computational Linguistics, pages 622–629. L. Romano, M. Kouylekov, I. Szpektor, I. Dagan, and A. Lavelli. 2006. Investigating a generic paraphrasebased approach for relation extraction. In In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). B. Rosenfeld and R. Feldman. 2006. Ures: an unsupervised web relation extraction system. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 667–674. S. Sekine. 2006. On-demand information extraction. In In Proceedings of COLING/ACL, pages 731–738. S. Siegal and N.J. Castellan Jr. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill. I. Szpektor, H. Tanev, I. Dagan, and B. Coppola. 2004. Scaling web-based acquisition of entailment relations. In In Proceedings of Empirical Methods in Natural Language Processing, pages 41–48. L. Zhou, C.Y. Lin, D. Munteanu, and E.H. Hovy. 2006. Paraeval: using paraphrases to evaluate summaries automatically. In In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 447–454. 682
Proceedings of ACL-08: HLT, pages 683–691, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Contextual Preferences Idan Szpektor, Ido Dagan, Roy Bar-Haim Department of Computer Science Bar-Ilan University Ramat Gan, Israel {szpekti,dagan,barhair}@cs.biu.ac.il Jacob Goldberger School of Engineering Bar-Ilan University Ramat Gan, Israel [email protected] Abstract The validity of semantic inferences depends on the contexts in which they are applied. We propose a generic framework for handling contextual considerations within applied inference, termed Contextual Preferences. This framework defines the various context-aware components needed for inference and their relationships. Contextual preferences extend and generalize previous notions, such as selectional preferences, while experiments show that the extended framework allows improving inference quality on real application data. 1 Introduction Applied semantic inference is typically concerned with inferring a target meaning from a given text. For example, to answer “Who wrote Idomeneo?”, Question Answering (QA) systems need to infer the target meaning ‘Mozart wrote Idomeneo’ from a given text “Mozart composed Idomeneo”. Following common Textual Entailment terminology (Giampiccolo et al., 2007), we denote the target meaning by h (for hypothesis) and the given text by t. A typical applied inference operation is matching. Sometimes, h can be directly matched in t (in the example above, if the given sentence would be literally “Mozart wrote Idomeneo”). Generally, the target meaning can be expressed in t in many different ways. Indirect matching is then needed, using inference knowledge that may be captured through rules, termed here entailment rules. In our example, ‘Mozart wrote Idomeneo’ can be inferred using the rule ‘X compose Y →X write Y ’. Recently, several algorithms were proposed for automatically learning entailment rules and paraphrases (viewed as bi-directional entailment rules) (Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Shinyama et al., 2002; Szpektor et al., 2004; Sekine, 2005). A common practice is to try matching the structure of h, or of the left-hand-side of a rule r, within t. However, context should be considered to allow valid matching. For example, suppose that to find acquisitions of companies we specify the target template hypothesis (a hypothesis with variables) ‘X acquire Y ’. This h should not be matched in “children acquire language quickly”, because in this context Y is not a company. Similarly, the rule ‘X charge Y →X accuse Y ’ should not be applied to “This store charged my account”, since the assumed sense of ‘charge’ in the rule is different than its sense in the text. Thus, the intended contexts for h and r and the context within the given t should be properly matched to verify valid inference. Context matching at inference time was often approached in an application-specific manner (Harabagiu et al., 2003; Patwardhan and Riloff, 2007). Recently, some generic methods were proposed to handle context-sensitive inference (Dagan et al., 2006; Pantel et al., 2007; Downey et al., 2007; Connor and Roth, 2007), but these usually treat only a single aspect of context matching (see Section 6). We propose a comprehensive framework for handling various contextual considerations, termed Contextual Preferences. It extends and generalizes previous work, defining the needed contextual components and their relationships. 
We also present and implement concrete representation models and un683 supervised matching methods for these components. While our presentation focuses on semantic inference using lexical-syntactic structures, the proposed framework and models seem suitable for other common types of representations as well. We applied our models to a test set derived from the ACE 2005 event detection task, a standard Information Extraction (IE) benchmark. We show the benefits of our extended framework for textual inference and present component-wise analysis of the results. To the best of our knowledge, these are also the first unsupervised results for event argument extraction in the ACE 2005 dataset. 2 Contextual Preferences 2.1 Notation As mentioned above, we follow the generic Textual Entailment (TE) setting, testing whether a target meaning hypothesis h can be inferred from a given text t. We allow h to be either a text or a template, a text fragment with variables. For example, “The stock rose 8%” entails an instantiation of the template hypothesis ‘X gain Y ’. Typically, h represents an information need requested in some application, such as a target predicate in IE. In this paper, we focus on parse-based lexicalsyntactic representation of texts and hypotheses, and on the basic inference operation of matching. Following common practice (de Salvo Braz et al., 2005; Romano et al., 2006; Bar-Haim et al., 2007), h is syntactically matched in t if it can be embedded in t’s parse tree. For template hypotheses, the matching induces a mapping between h’s variables and their instantiation in t. Matching h in t can be performed either directly or indirectly using entailment rules. An entailment rule r: ‘LHS →RHS’ is a directional entailment relation between two templates. h is matched in t using r if LHS is matched in t and h matches RHS. In the example above, r: ‘X rise Y →X gain Y ’ allows us to entail ‘X gain Y ’, with “stock” and “8%” instantiating h’s variables. We denote vars(z) the set of variables of z, where z is a template or a rule. 2.2 Motivation When matching considers only the structure of hypotheses, texts and rules it may result in incorrect inference due to contextual mismatches. For example, an IE system may identify mentions of public demonstrations using the hypothesis h: ‘X demonstrate’. However, h should not be matched in “Engineers demonstrated the new system”, due to a mismatch between the intended sense of ‘demonstrate’ in h and its sense in t. Similarly, when looking for physical attack mentions using the hypothesis ‘X attack Y ’, we should not utilize the rule r: ‘X accuse Y →X attack Y ’, due to a mismatch between a verbal attack in r and an intended physical attack in h. Finally, r: ‘X produce Y →X lay Y ’ (applicable when X refers to poultry and Y to eggs) should not be matched in t: “Bugatti produce the fastest cars”, due to a mismatch between the meanings of ‘produce’ in r and t. Overall, such incorrect inferences may be avoided by considering contextual information for t, h and r during their matching process. 2.3 The Contextual Preferences Framework We propose the Contextual Preferences (CP) framework for addressing context at inference time. In this framework, the representation of an object z, where z may be a text, a template or an entailment rule, is enriched with contextual information denoted cp(z). This information helps constraining or disambiguating the meaning of z, and is used to validate proper matching between pairs of objects. 
We consider two components within cp(z): (a) a representation for the global (“topical”) context in which z typically occurs, denoted cpg(z); (b) a representation for the preferences and constraints (“hard” preferences) on the possible terms that can instantiate variables within z, denoted cpv(z). For example, cpv(‘X produce Y →X lay Y ’) may specify that X’s instantiations should be similar to “chicken” or “duck”. Contextual Preferences are used when entailment is assessed between a text t and a hypothesis h, either directly or by utilizing an entailment-rule r. On top of structural matching, we now require that the Contextual Preferences of the participants in the inference will also match. When h is directly matched in t, we require that each component in cp(h) will be matched with its counterpart in cp(t). When r is utilized, we additionally require that cp(r) will be matched with both cp(t) and cp(h). Figure 1 summarizes the matching relationships between the CP 684 Figure 1: The directional matching relationships between a hypothesis (h), an entailment rule (r) and a text (t) in the Contextual Preferences framework. components of h, t and r. Like Textual Entailment inference, Contextual Preferences matching is directional. When matching h with t we require that the global context preferences specified by cpg(h) would subsume those induced by cpg(t), and that the instantiations of h’s variables in t would adhere to the preferences in cpv(h) (since t should entail h, but not necessarily vice versa). For example, if the preferred global context of a hypothesis is sports, it would match a text that discusses the more specific topic of basketball. To implement the CP framework, concrete models are needed for each component, specifying its representation, how it is constructed, and an appropriate matching procedure. Section 3 describes the specific CP models that were implemented in this paper. The CP framework provides a generic view of contextual modeling in applied semantic inference. Mapping from a specific application to the generic framework follows the mappings assumed in the Textual Entailment paradigm. For example, in QA the hypothesis to be proved corresponds to the affirmative template derived from the question (e.g. h: ‘X invented the PC’ for “Who invented the PC?”). Thus, cpg(h) can be constructed with respect to the question’s focus while cpv(h) may be generated from the expected answer type (Moldovan et al., 2000; Harabagiu et al., 2003). Construction of hypotheses’ CP for IE is demonstrated in Section 4. 3 Contextual Preferences Models This section presents the current models that we implemented for the various components of the CP framework. For each component type we describe its representation, how it is constructed, and a corresponding unsupervised match score. Finally, the different component scores are combined to yield an overall match score, which is used in our experiments to rank inference instances by the likelihood of their validity. Our goal in this paper is to cover the entire scope of the CP framework by including specific models that were proposed in previous work, where available, and elsewhere propose initial models to complete the CP scope. 3.1 Contextual Preferences for Global Context To represent the global context of an object z we utilize Latent Semantic Analysis (LSA) (Deerwester et al., 1990), a well-known method for representing the contextual-usage of words based on corpus statistics. 
We use LSA analysis of the BNC corpus1, in which every term is represented by a normalized vector of the top 100 SVD dimensions, as described in (Gliozzo, 2005). To construct cpg(z) we first collect a set of terms that are representative for the preferred general context of z. Then, the (single) vector which is the sum of the LSA vectors of the representative terms becomes the representation of cpg(z). This LSA vector captures the “average” typical contexts in which the representative terms occur. The set of representative terms for a text t consists of all the nouns and verbs in it, represented by their lemma and part of speech. For a rule r: ‘LHS →RHS’, the representative terms are the words appearing in LHS and in RHS. For example, the representative terms for ‘X divorce Y →X marry Y ’ are {divorce:v, marry:v}. As mentioned earlier, construction of hypotheses and their contextual preferences depends on the application at hand. In our experiments these are defined manually, as described in Section 4, derived from the manual definitions of target meanings in the IE data. The score of matching the cpg components of two objects, denoted by mg(·, ·), is the Cosine similarity of their LSA vectors. Negative values are set to 0. 3.2 Contextual Preferences for Variables 3.2.1 Representation For comparison with prior work, we follow (Pantel et al., 2007) and represent preferences for vari1http://www.natcorp.ox.ac.uk/ 685 able instantiations using a distributional approach, and in addition incorporate a standard specification of named-entity types. Thus, cpv is represented by two lists. The first list, denoted cpv:e, contains examples for valid instantiations of that variable. For example, cpv:e(X kill Y →Y die of X) may be [X: {snakebite, disease}, Y : {man, patient}]. The second list, denoted cpv:n, contains the variable’s preferred named-entity types (if any). For example, cpv:n(X born in Y ) may be [X: {Person}, Y : {Location}]. We denote cpv:e(z)[j] and cpv:n(z)[j] as the lists for a specific variable j of the object z. For a text t, in which a template p is matched, the preference cpv:e(t) for each template variable is simply its instantiation in t. For example, when ‘X eat Y ’ is matched in t: “Many Americans eat fish regularly”, we construct cpv:e(t) = [X: {Many Americans}, Y : {fish}]. Similarly, cpv:n(t) for each variable is the named-entity type of its instantiation in t (if it is a named entity). We identify entity types using the default Lingpipe2 Named-Entity Recognizer (NER), which recognizes the types Location, Person and Organization. In the above example, cpv:n(t)[X] would be {Person}. For a rule r: LHS →RHS, we automatically add to cpv:e(r) all the variable instantiations that were found common for both LHS and RHS in a corpus (see Section 4), as in (Pantel et al., 2007; Pennacchiotti et al., 2007). To construct cpv:n(r), we currently use a simple approach where each individual term in cpv:e(r) is analyzed by the NER system, and its type (if any) is added to cpv:n(r). For a template hypothesis, we currently represent cpv(h) only by its list of preferred named-entity types, cpv:n. Similarly to cpg(h), the preferred types for each template variable were adapted from those defined in our IE data (see Section 4). To allow compatible comparisons with previous work (see Sections 5 and 6), we utilize in this paper only cpv:e when matching between cpv(r) and cpv(t), as only this representation was examined in prior work on context-sensitive rule applications. 
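As an illustration of the global-context component just described (not the authors' implementation), the following sketch builds a cpg vector by summing the LSA vectors of the representative terms and matches two such vectors by cosine similarity with negative values clipped to zero. Here lsa_vectors, load_lsa_vectors and nouns_and_verbs_of are assumed placeholders for the underlying LSA resource and term extraction.

import numpy as np

def build_cpg(representative_terms, lsa_vectors):
    """cpg(z): sum of the LSA vectors of z's representative terms (lemma:pos keys).
    lsa_vectors is an assumed dict mapping a term to its 100-dimensional LSA vector."""
    dims = len(next(iter(lsa_vectors.values())))
    vec = np.zeros(dims)
    for term in representative_terms:
        if term in lsa_vectors:
            vec += lsa_vectors[term]
    return vec

def match_global(cpg_a, cpg_b):
    """m_g: cosine similarity of two cpg vectors, with negative values set to 0."""
    denom = np.linalg.norm(cpg_a) * np.linalg.norm(cpg_b)
    if denom == 0:
        return 0.0
    return max(0.0, float(np.dot(cpg_a, cpg_b) / denom))

# Hypothetical usage for the rule 'X divorce Y -> X marry Y' against a text t:
# lsa = load_lsa_vectors(...)                          # assumed BNC-based LSA resource
# cpg_rule = build_cpg(['divorce:v', 'marry:v'], lsa)
# cpg_text = build_cpg(nouns_and_verbs_of(t), lsa)     # assumed helper
# score = match_global(cpg_rule, cpg_text)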
cpv:n is utilized for context matches involving cpv(h). We denote the score of matching two cpv components by mv(·, ·). 2http://www.alias-i.com/lingpipe/ 3.2.2 Matching cpv:e Our primary matching method is based on replicating the best-performing method reported in (Pantel et al., 2007), which utilizes the CBC distributional word clustering algorithm (Pantel, 2003). In short, this method extends each cpv:e list with CBC clusters that contain at least one term in the list, scoring them according to their “relevancy”. The score of matching two cpv:e lists, denoted here SCBC(·, ·), is the score of the highest scoring member that appears in both lists. We applied the final binary match score presented in (Pantel et al., 2007), denoted here binaryCBC: mv:e(r, t) is 1 if SCBC(r, t) is above a threshold and 0 otherwise. As a more natural ranking method, we also utilize SCBC directly, denoted rankedCBC, having mv:e(r, t) = SCBC(r, t). In addition, we tried a simpler method that directly compares the terms in two cpv:e lists, utilizing the commonly-used term similarity metric of (Lin, 1998a). This method, denoted LIN, uses the same raw distributional data as CBC but computes only pair-wise similarities, without any clustering phase. We calculated the scores of the 1000 most similar terms for every term in the Reuters RVC1 corpus3. Then, a directional similarity of term a to term b, s(a, b), is set to be their similarity score if a is in b’s 1000 most similar terms and 0 otherwise. The final score of matching r with t is determined by a nearest-neighbor approach, as the score of the most similar pair of terms in the corresponding two lists of the same variable: mv:e(r, t) = maxj∈vars(r)[maxa∈cpv:e(t)[j],b∈cpv:e(r)[j][s(a, b)]]. 3.2.3 Matching cpv:n We use a simple scoring mechanism for comparing between two named-entity types a and b, s(a, b): 1 for identical types and 0.8 otherwise. A variable j has a single preferred entity type in cpv:n(t)[j], the type of its instantiation in t. However, it can have several preferred types for h. When matching h with t, j’s match score is that of its highest scoring type, and the final score is the product of all variable scores: mv:n(h, t) = Q j∈vars(h)(maxa∈cpv:n(h)[j][s(a, cpv:n(t)[j])]). Variable j may also have several types in r, the 3http://about.reuters.com/researchandstandards/corpus/ 686 types of the common arguments in cpv:e(r). When matching h with r, s(a, cpv:n(t)[j]) is replaced with the average score for a and each type in cpv:n(r)[j]. 3.3 Overall Score for a Match A final score for a given match, denoted allCP, is obtained by the product of all six matching scores of the various CP components (multiplying by 1 if a component score is missing). The six scores are the results of matching any of the two components of h, t and r: mg(h, t), mv(h, t), mg(h, r), mv(h, r), mg(r, t) and mv(r, t) (as specified above, mv(r, t) is based on matching cpv:e while mv(h, r) and mv(h, t) are based on matching cpv:n). We use rankedCBC for calculating mv(r, t). Unlike previous work (e.g. (Pantel et al., 2007)), we also utilize the prior score of a rule r, which is provided by the rule-learning algorithm (see next section). We denote by allCP+pr the final match score obtained by the product of the allCP score with the prior score of the matched rule. 
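A minimal sketch of the scoring just described is given below (again only illustrative): the named-entity-type match mv:n as a product over variables, and the overall allCP / allCP+pr score as a product of the available component scores, with missing components counting as 1. The dictionaries and values in the usage comment are hypothetical.

def type_score(a, b):
    # Section 3.2.3: identical named-entity types score 1, different types 0.8.
    return 1.0 if a == b else 0.8

def match_vn(cpv_n_h, cpv_n_t):
    """m_v:n(h, t): product over h's variables of the best-matching preferred type.
    cpv_n_h maps each variable to a set of preferred types; cpv_n_t maps each
    variable to the type of its instantiation in t (if it is a named entity)."""
    score = 1.0
    for var, preferred_types in cpv_n_h.items():
        t_type = cpv_n_t.get(var)
        if t_type is None or not preferred_types:
            continue  # a missing component is treated as a neutral factor of 1
        score *= max(type_score(a, t_type) for a in preferred_types)
    return score

def overall_score(component_scores, rule_prior=None):
    """allCP: product of the (up to six) pairwise CP match scores; missing scores
    count as 1. allCP+pr additionally multiplies by the rule's prior score."""
    score = 1.0
    for s in component_scores:
        if s is not None:
            score *= s
    if rule_prior is not None:
        score *= rule_prior
    return score

# Hypothetical usage for a hypothesis matched in a text via some rule r:
# m = overall_score([mg_ht, mv_ht, mg_hr, mv_hr, mg_rt, mv_rt], rule_prior=0.41)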
4 Experimental Settings Evaluating the contribution of Contextual Preferences models requires: (a) a sample of test hypotheses, and (b) a corresponding corpus that contains sentences which entail these hypotheses, where all hypothesis matches (either direct or via rules) are annotated. We found that the available event mention annotations in the ACE 2005 training set4 provide a useful test set that meets these generic criteria, with the added value of a standard real-world dataset. The ACE annotation includes 33 types of events, for which all event mentions are annotated in the corpus. The annotation of each mention includes the instantiated arguments for the predicates, which represent the participants in the event, as well as general attributes such as time and place. ACE guidelines specify for each event type its possible arguments, where all arguments are optional. Each argument is associated with a semantic role and a list of possible named-entity types. For instance, an Injure event may have the arguments {Agent, Victim, Instrument, Time, Place}, where Victim should be a person. For each event type we manually created a small set of template hypotheses that correspond to the 4http://projects.ldc.upenn.edu/ace/ given event predicate, and specified the appropriate semantic roles for each variable. We considered only binary hypotheses, due to the type of available entailment rules (see below). For Injure, the set of hypotheses included ‘A injure V’ and ‘injure V in T’ where role(A)={Agent, Instrument}, role(V)={Victim}, and role(T)={Time, Place}. Thus, correct match of an argument corresponds to correct role identification. The templates were represented as Minipar (Lin, 1998b) dependency parse-trees. The Contextual Preferences for h were constructed manually: the named-entity types for cpv:n(h) were set by adapting the entity types given in the guidelines to the types supported by the Lingpipe NER (described in Section 3.2). cpg(h) was generated from a short list of nouns and verbs that were extracted from the verbal event definition in the ACE guidelines. For Injure, this list included {injure:v, injury:n, wound:v}. This assumes that when writing down an event definition the user would also specify such representative keywords. Entailment-rules for a given h (rules in which RHS is equal to h) were learned automatically by the DIRT algorithm (Lin and Pantel, 2001), which also produces a quality score for each rule. We implemented a canonized version of DIRT (Szpektor and Dagan, 2007) on the Reuters corpus parsed by Minipar. Each rule’s arguments for cpv(r) were also collected from this corpus. We assessed the CP framework by its ability to correctly rank, for each predicate (event), all the candidate entailing mentions that are found for it in the test corpus. Such ranking evaluation is suitable for unsupervised settings, with a perfect ranking placing all correct mentions before any incorrect ones. The candidate mentions are found in the parsed test corpus by matching the specified event hypotheses, either directly or via the given set of entailment rules, using a syntactic matcher similar to the one in (Szpektor and Dagan, 2007). Finally, the mentions are ranked by their match scores, as described in Section 3.3. As detailed in the next section, those candidate mentions which are also annotated as mentions of the same event in ACE are considered correct. The evaluation aims to assess the correctness of inferring a target semantic meaning, which is de687 noted by a specific predicate. 
Therefore, we eliminated four ACE event types that correspond to multiple distinct predicates. For instance, the TransferMoney event refers to both donating and lending money, which are not distinguished by the ACE annotation. We also omitted three events with less than 10 mentions and two events for which the given set of learned rules could not match any mention. We were left with 24 event types for evaluation, which amount to 4085 event mentions in the dataset. Out of these, our binary templates can correctly match only mentions with at least two arguments, which appear 2076 times in the dataset. Comparing with previous evaluation methodologies, in (Szpektor et al., 2007; Pantel et al., 2007) proper context matching was evaluated by post-hoc judgment of a sample of rule applications for a sample of rules. Such annotation needs to be repeated each time the set of rules is changed. In addition, since the corpus annotation is not exhaustive, recall could not be computed. By contrast, we use a standard real-world dataset, in which all mentions are annotated. This allows immediate comparison of different rule sets and matching methods, without requiring any additional (post-hoc) annotation. 5 Results and Analysis We experimented with three rule setups over the ACE dataset, in order to measure the contribution of the CP framework. In the first setup no rules are used, applying only direct matches of template hypotheses to identify event mentions. In the other two setups we also utilized DIRT’s top 50 or 100 rules for each hypothesis. A match is considered correct when all matched arguments are extracted correctly according to their annotated event roles. This main measurement is denoted All. As an additional measurement, denoted Any, we consider a match as correct if at least one argument is extracted correctly. Once event matches are extracted, we first measure for each event its Recall, the number of correct mentions identified out of all annotated event mentions5 and Precision, the number of correct matches out of all extracted candidate matches. These figures 5For Recall, we ignored mentions with less than two arguments, as they cannot be correctly matched by binary templates. quantify the baseline performance of the DIRT rule set used. To assess our ranking quality, we measure for each event the commonly used Average Precision (AP) measure (Voorhees and Harmann, 1998), which is the area under the non-interpolated recallprecision curve, while considering for each setup all correct extracted matches as 100% Recall. Overall, we report Mean Average Precision (MAP), macro average Precision and macro average Recall over the ACE events. Tables 1 and 2 summarize the main results of our experiments. As far as we know, these are the first published unsupervised results for identifying event arguments in the ACE 2005 dataset. Examining Recall, we see that it increases substantially when rules are applied: by more than 100% for the top 50 rules, and by about 150% for the top 100, showing the benefit of entailment-rules to covering language variability. The difference between All and Any results shows that about 65% of the rules that correctly match one argument also match correctly both arguments. We use two baselines for measuring the CP ranking contribution: Precision, which corresponds to the expected MAP of random ranking, and MAP of ranking using the prior rule score provided by DIRT. 
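The ranking measures can be made concrete with a short sketch of non-interpolated Average Precision and its macro average (MAP); the toy ranking in the example is hypothetical.

def average_precision(ranked_correctness):
    """Non-interpolated AP for one event: ranked_correctness is a list of booleans
    ordered by descending match score; all correct extracted matches are treated
    as 100% recall, as in the evaluation above."""
    hits, precisions = 0, []
    for i, correct in enumerate(ranked_correctness, start=1):
        if correct:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(per_event_rankings):
    """MAP: macro average of per-event AP values."""
    aps = [average_precision(r) for r in per_event_rankings]
    return sum(aps) / len(aps) if aps else 0.0

# Hypothetical toy ranking: True marks a correctly extracted event mention.
print(average_precision([True, False, True, True, False]))  # approx. 0.806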
Without rules, the baseline All Precision is 34.1%, showing that even the manually constructed hypotheses, which correspond directly to the event predicate, extract event mentions with limited accuracy when context is ignored. When rules are applied, Precision is very low. But ranking is considerably improved using only the prior score (from 1.4% to 22.7% for 50 rules), showing that the prior is an informative indicator for valid matches. Our main result is that the allCP and allCP+pr methods rank matches statistically significantly better than the baselines in all setups (according to the Wilcoxon double-sided signed-ranks test at the level of 0.01 (Wilcoxon, 1945)). In the All setup, ranking is improved by 70% for direct matching (Table 1). When entailment-rules are also utilized, prior-only ranking is improved by about 35% and 50% when using allCP and allCP+pr, respectively (Table 2). Figure 2 presents the average Recall-Precision curve of the ‘50 Rules, All’ setup for applying allCP or allCP+pr, compared to prior-only ranking baseline (other setups behave similarly). The improvement in ranking is evident: the drop in precision is signif688 R P MAP (%) (%) (%) cpv cpg allCP All 14.0 34.1 46.5 52.2 60.2 Any 21.8 66.0 72.2 80.5 84.1 Table 1: Recall (R), Precision (P) and Mean Average Precision (MAP) when only matching template hypotheses directly. # R P MAP (%) Rules (%) (%) prior allCP allCP+pr All 50 29.6 1.4 22.7 30.6 34.1 100 34.9 0.7 20.5 26.3 30.2 Any 50 46.5 3.5 41.2 43.7 48.6 100 52.9 1.8 35.5 35.1 40.8 Table 2: Recall (R), Precision (P) and Mean Average Precision (MAP) when also using rules for matching. icantly slower when CP is used. The behavior of CP with and without the prior is largely the same up to 50% Recall, but later on our implemented CP models are noisier and should be combined with the prior rule score. Templates are incorrectly matched for several reasons. First, there are context mismatches which are not scored sufficiently low by our models. Another main cause is incorrect learned rules in which LHS and RHS are topically related, e.g. ‘X convict Y → X arrest Y ’, or rules that are used in the wrong entailment direction, e.g. ‘X marry Y →X divorce Y ’ (DIRT does not learn rule direction). As such rules do correspond to plausible contexts of the hypothesis, their matches obtain relatively high CP scores. In addition, some incorrect matches are caused by our syntactic matcher, which currently does not handle certain phenomena such as co-reference, modality or negation, and due to Minipar parse errors. 5.1 Component Analysis Table 3 displays the contribution of different CP components to ranking, when adding only that component’s match score to the baselines, and under ablation tests, when using all CP component scores except the tested component, with or without the prior. As it turns out, matching h with t (i.e. cp(h, t), which combines cpg(h, t) and cpv(h, t)) is most useful. With our current models, using only cp(h, t) along with the prior, while ignoring cp(r), achieves 50 Rules - All 0 10 20 30 40 50 60 70 80 90 100 0 10 20 30 40 50 60 70 80 90 100 Relative Recall Precision baseline CP CP + prior Figure 2: Recall-Precision curves for ranking using: (a) only the prior (baseline); (b) allCP; (c) allCP+pr. the highest score in the table. The strong impact of matching h and t’s preferences is also evident in Table 1, where ranking based on either cpg or cpv substantially improves precision, while their combination provides the best ranking. 
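As a side note on the significance testing used throughout this section, a paired comparison of per-event AP scores along the lines of the Wilcoxon double-sided signed-ranks test could be run as in the following sketch; this assumes SciPy, the example values are hypothetical, and it is not the authors' code.

from scipy.stats import wilcoxon

def significantly_better(ap_system, ap_baseline, alpha=0.01):
    """Paired, two-sided Wilcoxon signed-ranks test over per-event AP scores.
    Returns True when the difference is significant at the given level."""
    stat, p_value = wilcoxon(ap_system, ap_baseline)
    return p_value < alpha

# Hypothetical per-event AP values for some of the 24 ACE events:
# print(significantly_better([.42, .31, .55, .60, .28, .47, .39, .51],
#                            [.20, .25, .30, .41, .22, .33, .18, .29]))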
These results indicate that the two CP components capture complementary information and both are needed to assess the correctness of a match. When ignoring the prior rule score, cp(r, t) is the major contributor over the baseline Precision. For cpv(r, t), this is in synch with the result in (Pantel et al., 2007), which is based on this single model without utilizing prior rule scores. On the other hand, cpv(r, t) does not improve the ranking when the prior is used, suggesting that this contextual model for the rule’s variables is not stronger than the context-insensitive prior rule score. Furthermore, relative to this cpv(r, t) model from (Pantel et al., 2007), our combined allCP model, with or without the prior (first row of Table 2), obtains statistically significantly better ranking (at the level of 0.01). Comparing between the algorithms for matching cpv:e (Section 3.2.2) we found that while rankedCBC is statistically significantly better than binaryCBC, rankedCBC and LIN generally achieve the same results. When considering the tradeoffs between the two, LIN is based on a much simpler learning algorithm while CBC’s output is more compact and allows faster CP matches. 689 Addition To Ablation From P prior allCP allCP+pr Baseline 1.4 22.7 30.6 34.1 cpg(h, t) ∗10.4 ∗35.4 32.4 33.7 cpv(h, t) ∗11.0 29.9 27.6 32.9 cp(h, t) ∗8.9 ∗37.5 28.6 30.0 cpg(r, t) ∗4.2 ∗30.6 32.5 35.4 cpv(r, t) ∗21.7 21.9 ∗12.9 33.6 cp(r, t) ∗26.0 ∗29.6 ∗17.9 36.8 cpg(h, r) ∗8.1 22.4 31.9 34.3 cpv(h, r) ∗10.7 22.7 ∗27.9 34.4 cp(h, r) ∗16.5 22.4 ∗29.2 34.4 cpg(h, r, t) ∗7.7 ∗30.2 ∗27.5 ∗29.2 cpv(h, r, t) ∗27.5 29.2 ∗7.7 30.2 ∗Indicates statistically significant changes compared to the baseline, according to the Wilcoxon test at the level of 0.01. Table 3: MAP(%), under the ‘50 rules, All’ setup, when adding component match scores to Precision (P) or prioronly MAP baselines, and when ranking with allCP or allCP+pr methods but ignoring that component scores. Currently, some models do not improve the results when the prior is used. Yet, we would like to further weaken the dependency on the prior score, since it is biased towards frequent contexts. We aim to properly identify also infrequent contexts (or meanings) at inference time, which may be achieved by better CP models. More generally, when used on top of all other components, some of the models slightly degrade performance, as can be seen by those figures in the ablation tests which are higher than the corresponding baseline. However, due to their different roles, each of the matching components might capture some unique preferences. For example, cp(h, r) should be useful to filter out rules that don’t match the intended meaning of the given h. Overall, this suggests that future research for better models should aim to obtain a marginal improvement by each component. 6 Related Work Context sensitive inference was mainly investigated in an application-dependent manner. For example, (Harabagiu et al., 2003) describe techniques for identifying the question focus and the answer type in QA. (Patwardhan and Riloff, 2007) propose a supervised approach for IE, in which relevant text regions for a target relation are identified prior to applying extraction rules. Recently, the need for context-aware inference was raised (Szpektor et al., 2007). (Pantel et al., 2007) propose to learn the preferred instantiations of rule variables, termed Inferential Selectional Preferences (ISP). Their clustering-based model is the one we implemented for mv(r, t). 
A similar approach is taken in (Pennacchiotti et al., 2007), where LSA similarity is used to compare between the preferred variable instantiations for a rule and their instantiations in the matched text. (Downey et al., 2007) use HMM-based similarity for the same purpose. All these methods are analogous to matching cpv(r) with cpv(t) in the CP framework. (Dagan et al., 2006; Connor and Roth, 2007) proposed generic approaches for identifying valid applications of lexical rules by classifying the surrounding global context of a word as valid or not for that rule. These approaches are analogous to matching cpg(r) with cpg(t) in our framework. 7 Conclusions We presented the Contextual Preferences (CP) framework for assessing the validity of inferences in context. CP enriches the representation of textual objects with typical contextual information that constrains or disambiguates their meaning, and provides matching functions that compare the preferences of objects involved in the inference. Experiments with our implemented CP models, over realworld IE data, show significant improvements relative to baselines and some previous work. In future research we plan to investigate improved models for representing and matching CP, and to extend the experiments to additional applied datasets. We also plan to apply the framework to lexical inference rules, for which it seems directly applicable. Acknowledgements The authors would like to thank Alfio Massimiliano Gliozzo for valuable discussions. This work was partially supported by ISF grant 1095/05, the IST Programme of the European Community under the PASCAL Network of Excellence IST-2002-506778, the NEGEV project (www.negev-initiative.org) and the FBK-irst/Bar-Ilan University collaboration. 690 References Roy Bar-Haim, Ido Dagan, Iddo Greental, and Eyal Shnarch. 2007. Semantic inference at the lexicalsyntactic level. In Proceedings of AAAI. Michael Connor and Dan Roth. 2007. Context sensitive paraphrasing with a global unsupervised classifier. In Proceedings of the European Conference on Machine Learning (ECML). Ido Dagan, Oren Glickman, Alfio Gliozzo, Efrat Marmorshtein, and Carlo Strapparava. 2006. Direct word sense matching for lexical substitution. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of ACL. Rodrigo de Salvo Braz, Roxana Girju, Vasin Punyakanok, Dan Roth, and Mark Sammons. 2005. An inference model for semantic entailment in natural language. In Proceedings of the National Conference on Artificial Intelligence (AAAI). Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society of Information Science, 41(6):391–407. Doug Downey, Stefan Schoenmackers, and Oren Etzioni. 2007. Sparse information extraction: Unsupervised language models to the rescue. In Proceedings of the 45th Annual Meeting of ACL. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third pascal recognizing textual entailment challenge. In Proceedings of the ACLPASCAL Workshop on Textual Entailment and Paraphrasing. Alfio Massimiliano Gliozzo. 2005. Semantic Domains in Computational Linguistics. Ph.D. thesis. AdvisorCarlo Strapparava. Sanda M. Harabagiu, Steven J. Maiorano, and Marius A. Pas¸ca. 2003. Open-domain textual question answering techniques. Nat. Lang. Eng., 9(3):231–267. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question answering. 
In Natural Language Engineering, volume 7(4), pages 343–360. Dekang Lin. 1998a. Automatic retrieval and clustering of similar words. In Proceedings of COLING-ACL. Dekang Lin. 1998b. Dependency-based evaluation of minipar. In Proceedings of the Workshop on Evaluation of Parsing Systems at LREC. Dan Moldovan, Sanda Harabagiu, Marius Pasca, Rada Mihalcea, Roxana Girju, Richard Goodrum, and Vasile Rus. 2000. The structure and performance of an open-domain question answering system. In Proceedings of the 38th Annual Meeting of ACL. Patrick Pantel, Rahul Bhagat, Bonaventura Coppola, Timothy Chklovski, and Eduard Hovy. 2007. ISP: Learning inferential selectional preferences. In Human Language Technologies 2007: The Conference of NAACL; Proceedings of the Main Conference. Patrick Andre Pantel. 2003. Clustering by committee. Ph.D. thesis. Advisor-Dekang Lin. Siddharth Patwardhan and Ellen Riloff. 2007. Effective information extraction with semantic affinity patterns and relevant regions. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). Marco Pennacchiotti, Roberto Basili, Diego De Cao, and Paolo Marocco. 2007. Learning selectional preferences for entailment or paraphrasing rules. In Proceedings of RANLP. Deepak Ravichandran and Eduard Hovy. 2002. Learning surface text patterns for a question answering system. In Proceedings of the 40th Annual Meeting of ACL. Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase-based approach for relation extraction. In Proceedings of the 11th Conference of the EACL. Satoshi Sekine. 2005. Automatic paraphrase discovery based on context and keywords between ne pairs. In Proceedings of IWP. Yusuke Shinyama, Satoshi Sekine, Sudo Kiyoshi, and Ralph Grishman. 2002. Automatic paraphrase acquisition from news articles. In Proceedings of Human Language Technology Conference. Idan Szpektor and Ido Dagan. 2007. Learning canonical forms of entailment rules. In Proceedings of RANLP. Idan Szpektor, Hristo Tanev, Ido Dagan, and Bonaventura Coppola. 2004. Scaling web-based acquisition of entailment relations. In Proceedings of EMNLP 2004, pages 41–48, Barcelona, Spain. Idan Szpektor, Eyal Shnarch, and Ido Dagan. 2007. Instance-based evaluation of entailment rule acquisition. In Proceedings of the 45th Annual Meeting of ACL. Ellen M. Voorhees and Donna Harmann. 1998. Overview of the seventh text retrieval conference (trec–7). In The Seventh Text Retrieval Conference. Frank Wilcoxon. 1945. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80–83. 691
Proceedings of ACL-08: HLT, pages 692–700, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Unsupervised Discovery of Generic Relationships Using Pattern Clusters and its Evaluation by Automatically Generated SAT Analogy Questions Dmitry Davidov ICNC Hebrew University of Jerusalem [email protected] Ari Rappoport Institute of Computer Science Hebrew University of Jerusalem [email protected] Abstract We present a novel framework for the discovery and representation of general semantic relationships that hold between lexical items. We propose that each such relationship can be identified with a cluster of patterns that captures this relationship. We give a fully unsupervised algorithm for pattern cluster discovery, which searches, clusters and merges highfrequency words-based patterns around randomly selected hook words. Pattern clusters can be used to extract instances of the corresponding relationships. To assess the quality of discovered relationships, we use the pattern clusters to automatically generate SAT analogy questions. We also compare to a set of known relationships, achieving very good results in both methods. The evaluation (done in both English and Russian) substantiates the premise that our pattern clusters indeed reflect relationships perceived by humans. 1 Introduction Semantic resources can be very useful in many NLP tasks. Manual construction of such resources is labor intensive and susceptible to arbitrary human decisions. In addition, manually constructed semantic databases are not easily portable across text domains or languages. Hence, there is a need for developing semantic acquisition algorithms that are as unsupervised and language independent as possible. A fundamental type of semantic resource is that of concepts (represented by sets of lexical items) and their inter-relationships. While there is relatively good agreement as to what concepts are and which concepts should exist in a lexical resource, identifying types of important lexical relationships is a rather difficult task. Most established resources (e.g., WordNet) represent only the main and widely accepted relationships such as hypernymy and meronymy. However, there are many other useful relationships between concepts, such as noun-modifier and inter-verb relationships. Identifying and representing these explicitly can greatly assist various tasks and applications. There are already applications that utilize such knowledge (e.g., (Tatu and Moldovan, 2005) for textual entailment). One of the leading methods in semantics acquisition is based on patterns (see e.g., (Hearst, 1992; Pantel and Pennacchiotti, 2006)). The standard process for pattern-based relation extraction is to start with hand-selected patterns or word pairs expressing a particular relationship, and iteratively scan the corpus for co-appearances of word pairs in patterns and for patterns that contain known word pairs. This methodology is semi-supervised, requiring prespecification of the desired relationship or handcoding initial seed words or patterns. The method is quite successful, and examining its results in detail shows that concept relationships are often being manifested by several different patterns. In this paper, unlike the majority of studies that use patterns in order to find instances of given relationships, we use sets of patterns as the definitions of lexical relationships. 
We introduce pattern clusters, a novel framework in which each cluster corresponds to a relationship that can hold between the lexical items that fill its patterns’ slots. We present a fully unsupervised algorithm to compute pat692 tern clusters, not requiring any, even implicit, prespecification of relationship types or word/pattern seeds. Our algorithm does not utilize preprocessing such as POS tagging and parsing. Some patterns may be present in several clusters, thus indirectly addressing pattern ambiguity. The algorithm is comprised of the following stages. First, we randomly select hook words and create a context corpus (hook corpus) for each hook word. Second, we define a meta-pattern using high frequency words and punctuation. Third, in each hook corpus, we use the meta-pattern to discover concrete patterns and target words co-appearing with the hook word. Fourth, we cluster the patterns in each corpus according to co-appearance of the target words. Finally, we merge clusters from different hook corpora to produce the final structure. We also propose a way to label each cluster by word pairs that represent it best. Since we are dealing with relationships that are unspecified in advance, assessing the quality of the resulting pattern clusters is non-trivial. Our evaluation uses two methods: SAT tests, and comparison to known relationships. We used instances of the discovered relationships to automatically generate analogy SAT tests in two languages, English and Russian1. Human subjects answered these and real SAT tests. English grades were 80% for our test and 71% for the real test (83% and 79% for Russian), showing that our relationship definitions indeed reflect human notions of relationship similarity. In addition, we show that among our pattern clusters there are clusters that cover major known noun-compound and verb-verb relationships. In the present paper we focus on the pattern cluster resource itself and how to evaluate its intrinsic quality. In (Davidov and Rappoport, 2008) we show how to use the resource for a known task of a totally different nature, classification of relationships between nominals (based on annotated data), obtaining superior results over previous work. Section 2 discusses related work, and Section 3 presents the pattern clustering and labeling algorithm. Section 4 describes the corpora we used and the algorithm’s parameters in detail. Sections 5 and 1Turney and Littman (2005) automatically answers SAT tests, while our focus is on generating them. 6 present SAT and comparison evaluation results. 2 Related Work Extraction of relation information from text is a large sub-field in NLP. Major differences between pattern approaches include the relationship types sought (including domain restrictions), the degrees of supervision and required preprocessing, and evaluation method. 2.1 Relationship Types There is a large body of related work that deals with discovery of basic relationship types represented in useful resources such as WordNet, including hypernymy (Hearst, 1992; Pantel et al., 2004; Snow et al., 2006), synonymy (Davidov and Rappoport, 2006; Widdows and Dorow, 2002) and meronymy (Berland and Charniak, 1999; Girju et al., 2006). Since named entities are very important in NLP, many studies define and discover relations between named entities (Hasegawa et al., 2004; Hassan et al., 2006). Work was also done on relations between verbs (Chklovski and Pantel, 2004). 
There is growing research on relations between nominals (Moldovan et al., 2004; Girju et al., 2007). 2.2 Degree of Supervision and Preprocessing While numerous studies attempt to discover one or more pre-specified relationship types, very little previous work has directly attempted the discovery of which main types of generic relationships actually exist in an unrestricted domain. Turney (2006) provided a pattern distance measure that allows a fully unsupervised measurement of relational similarity between two pairs of words; such a measure could in principle be used by a clustering algorithm in order to deduce relationship types, but this was not discussed. Unlike (Turney, 2006), we do not perform any pattern ranking. Instead we produce (possibly overlapping) hard clusters, where each pattern cluster represents a relationship discovered in the domain. Banko et al. (2007) and Rosenfeld and Feldman (2007) find relationship instances where the relationships are not specified in advance. They aim to find relationship instances rather than identify generic semantic relationships. Thus, their representation is very different from ours. In addition, (Banko et al., 2007) utilize supervised tools such 693 as a POS tagger and a shallow parser. Davidov et al. (2007) proposed a method for unsupervised discovery of concept-specific relations. That work, like ours, relies on pattern clusters. However, it requires initial word seeds and targets the discovery of relationships specific for some given concept, while we attempt to discover and define generic relationships that exist in the entire domain. Studying relationships between tagged named entities, (Hasegawa et al., 2004; Hassan et al., 2006) proposed unsupervised clustering methods that assign given sets of pairs into several clusters, where each cluster corresponds to one of a known set of relationship types. Their classification setting is thus very different from our unsupervised discovery one. Several recent papers discovered relations on the web using seed patterns (Pantel et al., 2004), rules (Etzioni et al., 2004), and word pairs (Pasca et al., 2006; Alfonseca et al., 2006). The latter used the notion of hook which we also use in this paper. Several studies utilize some preprocessing, including parsing (Hasegawa et al., 2004; Hassan et al., 2006) and usage of syntactic (Suchanek et al., 2006) and morphological (Pantel et al., 2004) information in patterns. Several algorithms use manuallyprepared resources, including WordNet (Moldovan et al., 2004; Costello et al., 2006) and Wikipedia (Strube and Ponzetto, 2006). In this paper, we do not utilize any language-specific preprocessing or any other resources, which makes our algorithm relatively easily portable between languages, as we demonstrate in our bilingual evaluation. 2.3 Evaluation Method Evaluation for hypernymy and synonymy usually uses WordNet (Lin and Pantel, 2002; Widdows and Dorow, 2002; Davidov and Rappoport, 2006). For more specific lexical relationships like relationships between verbs (Chklovski and Pantel, 2004), nominals (Girju et al., 2004; Girju et al., 2007) or meronymy subtypes (Berland and Charniak, 1999) there is still little agreement which important relationships should be defined. Thus, there are more than a dozen different type hierarchies and tasks proposed for noun compounds (and nominals in general), including (Nastase and Szpakowicz, 2003; Girju et al., 2005; Girju et al., 2007). There are thus two possible ways for a fair evaluation. 
A study can develop its own relationship definitions and dataset, like (Nastase and Szpakowicz, 2003), thus introducing a possible bias; or it can accept the definition and dataset prepared by another work, like (Turney, 2006). However, this makes it impossible to work on new relationship types. Hence, when exploring very specific relationship types or very generic, but not widely accepted, types (like verb strength), many researchers resort to manual human-based evaluation (Chklovski and Pantel, 2004). In our case, where relationship types are not specified in advance, creating an unbiased benchmark is very problematic, so we rely on human subjects for relationship evaluation. 3 Pattern Clustering Algorithm Our algorithm first discovers and clusters patterns in which a single (‘hook’) word participates, and then merges the resulting clusters to form the final structure. In this section we detail the algorithm. The algorithm utilizes several parameters, whose selection is detailed in Section 4. We refer to a pattern contained in our clusters (a pattern type) as a ‘pattern’ and to an occurrence of a pattern in the corpus (a pattern token) as a ‘pattern instance’. 3.1 Hook Words and Hook Corpora As a first step, we randomly select a set of hook words. Hook words were used in e.g. (Alfonseca et al., 2006) for extracting general relations starting from given seed word pairs. Unlike most previous work, our hook words are not provided in advance but selected randomly; the goal in those papers is to discover relationships between given word pairs, while we use hook words in order to discover relationships that generally occur in the corpus. Only patterns in which a hook word actually participates will eventually be discovered. Hence, in principle we should select as many hook words as possible. However, words whose frequency is very high are usually ambiguous and are likely to produce patterns that are too noisy, so we do not select words with frequency higher than a parameter FC. In addition, we do not select words whose frequency is below a threshold FB, to avoid selection of typos and other noise that frequently appear on the web. We also limit the total number N of hook words. 694 Our algorithm merges clusters originating from different hook words. Using too many hook words increases the chance that some of them belong to a noisy part in the corpus and thus lowers the quality of our resulting clusters. For each hook word, we now create a hook corpus, the set of the contexts in which the word appears. Each context is a window containing W words or punctuation characters before and after the hook word. We avoid extracting text from clearly unformatted sentences and our contexts do not cross paragraph boundaries. The size of each hook corpus is much smaller than that of the whole corpus, easily fitting into main memory; the corpus of a hook word occurring h times in the corpus contains at most 2hW words. Since most operations are done on each hook corpus separately, computation is very efficient. Note that such context corpora can in principle be extracted by focused querying on the web, making the system dynamically scalable. It is also possible to restrict selection of hook words to a specific domain or word type, if we want to discover only a desired subset of existing relationships. Thus we could sample hook words from nouns, verbs, proper names, or names of chemical compounds if we are only interested in discovering relationships between these. 
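A rough sketch of this step (a token-level approximation, not the actual implementation) is given below: hook words are sampled from the frequency band [FB, FC], and a hook corpus is built from windows of W tokens around each occurrence; the paragraph-boundary and formatting filters described above are omitted.

import random
from collections import Counter

def select_hook_words(tokens, fb, fc, n, seed=0):
    """Randomly sample up to n hook words whose corpus frequency lies in [FB, FC]."""
    freq = Counter(tokens)
    candidates = [w for w, c in freq.items() if fb <= c <= fc]
    random.seed(seed)
    random.shuffle(candidates)
    return candidates[:n]

def hook_corpus(tokens, hook, window):
    """All contexts of the hook word: W tokens before and after each occurrence."""
    contexts = []
    for i, tok in enumerate(tokens):
        if tok == hook:
            contexts.append(tokens[max(0, i - window): i + window + 1])
    return contexts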
Selecting hook words randomly allows us to avoid using any language-specific data at this step. 3.2 Pattern Specification In order to reduce noise and to make the computation more efficient, we did not consider all contexts of a hook word as pattern candidates, only contexts that are instances of a specified meta-pattern type. Following (Davidov and Rappoport, 2006), we classified words into high-frequency words (HFWs) and content words (CWs). A word whose frequency is more (less) than FH (FC) is considered to be a HFW (CW). Unlike (Davidov and Rappoport, 2006), we consider all punctuation characters as HFWs. Our patterns have the general form [Prefix] CW1 [Infix] CW2 [Postfix] where Prefix, Infix and Postfix contain only HFWs. To reduce the chance of catching CWi’s that are parts of a multiword expression, we require Prefix and Postfix to have at least one word (HFW), while Infix is allowed to contain any number of HFWs (but recall that the total length of a pattern is limited by window size). A pattern example is ‘such X as Y and’. During this stage we only allow single words to be in CW slots2. 3.3 Discovery of Target Words For each of the hook corpora, we now extract all pattern instances where one CW slot contains the hook word and the other CW slot contains some other (‘target’) word. To avoid the selection of common words as target words, and to avoid targets appearing in pattern instances that are relatively fixed multiword expressions, we sort all target words in a given hook corpus by pointwise mutual information between hook and target, and drop patterns obtained from pattern instances containing the lowest and highest L percent of target words. 3.4 Local Pattern Clustering We now have for each hook corpus a set of patterns. All of the corresponding pattern instances share the hook word, and some of them also share a target word. We cluster patterns in a two-stage process. First, we group in clusters all patterns whose instances share the same target word, and ignore the rest. For each target word we have a single pattern cluster. Second, we merge clusters that share more than S percent of their patterns. A pattern can appear in more than a single cluster. Note that clusters contain pattern types, obtained through examining pattern instances. 3.5 Global Cluster Merging The purpose of this stage is to create clusters of patterns that express generic relationships rather than ones specific to a single hook word. In addition, the technique used in this stage reduces noise. For each created cluster we will define core patterns and unconfirmed patterns, which are weighed differently during cluster labeling (see Section 3.6). We merge clusters from different hook corpora using the following algorithm: 1. Remove all patterns originating from a single hook corpus. 2While for pattern clusters creation we use only single words as CWs, later during evaluation we allow multiword expressions in CW slots of previously acquired patterns. 695 2. Mark all patterns of all present clusters as unconfirmed. 3. While there exists some cluster C1 from corpus DX containing only unconfirmed patterns: (a) Select a cluster with a minimal number of patterns. (b) For each corpus D different from DX: i. Scan D for clusters C2 that share at least S percent of their patterns, and all of their core patterns, with C1. ii. Add all patterns of C2 to C1, setting all shared patterns as core and all others as unconfirmed. iii. Remove cluster C2. (c) If all of C1’s patterns remain unconfirmed remove C1. 4. 
If several clusters have the same set of core patterns merge them according to rules (i,ii). We start from the smallest clusters because we expect these to be more precise; the best patterns for semantic acquisition are those that belong to small clusters, and appear in many different clusters. At the end of this algorithm, we have a set of pattern clusters where for each cluster there are two subsets, core patterns and unconfirmed patterns. 3.6 Labeling of Pattern Clusters To label pattern clusters we define a HITS measure that reflects the affinity of a given word pair to a given cluster. For a given word pair (w1, w2) and cluster C with n core patterns Pcore and m unconfirmed patterns Punconf, Hits(C, (w1, w2)) = |{p; (w1, w2) appears in p ∈Pcore}| /n+ α × |{p; (w1, w2) appears in p ∈Punconf}| /m. In this formula, ‘appears in’ means that the word pair appears in instances of this pattern extracted from the original corpus or retrieved from the web during evaluation (see Section 5.2). Thus if some pair appears in most of patterns of some cluster it receives a high HITS value for this cluster. The top 5 pairs for each cluster are selected as its labels. α ∈(0..1) is a parameter that lets us modify the relative weight of core and unconfirmed patterns. 4 Corpora and Parameters In this section we describe our experimental setup, and discuss in detail the effect of each of the algorithms’ parameters. 4.1 Languages and Corpora The evaluation was done using corpora in English and Russian. The English corpus (Gabrilovich and Markovitch, 2005) was obtained through crawling the URLs in the Open Directory Project (dmoz.org). It contains about 8.2G words and its size is about 68GB of untagged plain text. The Russian corpus was collected over the web, comprising a variety of domains, including news, web pages, forums, novels and scientific papers. It contains 7.5G words of size 55GB untagged plain text. Aside from removing noise and sentence duplicates, we did not apply any text preprocessing or tagging. 4.2 Parameters Our algorithm uses the following parameters: FC, FH, FB, W, N, L, S and α. We used part of the Russian corpus as a development set for determining the parameters. On our development set we have tested various parameter settings. A detailed analysis of the involved parameters is beyond the scope of this paper; below we briefly discuss the observed qualitative effects of parameter selection. Naturally, the parameters are not mutually independent. FC (upper bound for content word frequency in patterns) influences which words are considered as hook and target words. More ambiguous words generally have higher frequency. Since content words determine the joining of patterns into clusters, the more ambiguous a word is, the noisier the resulting clusters. Thus, higher values of FC allow more ambiguous words, increasing cluster recall but also increasing cluster noise, while lower ones increase cluster precision at the expense of recall. FH (lower bound for HFW frequency in patterns) influences the specificity of patterns. Higher values restrict our patterns to be based upon the few most common HFWs (like ‘the’, ‘of’, ‘and’) and thus yield patterns that are very generic. Lowering the values, we obtain increasing amounts of pattern clusters for more specific relationships. 
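Since FC and FH jointly determine which tokens count as content words and high-frequency words, a small sketch of how the meta-pattern of Section 3.2 could then be matched may be useful. This is an illustrative reading of the pattern definition, not the authors' code: it keeps exactly one HFW of prefix and postfix (the paper only requires at least one), and the hook-word and mutual-information filters of Section 3.3 are left out.

```python
def extract_pattern_instances(window, is_hfw):
    """Match the meta-pattern [Prefix] CW1 [Infix] CW2 [Postfix] inside a
    context window.  The sketch keeps exactly one HFW of prefix/postfix
    and allows any number of HFWs in the infix.
    Returns (pattern type, CW1, CW2) triples."""
    instances = []
    cw_positions = [i for i, w in enumerate(window) if not is_hfw(w)]
    for a, b in zip(cw_positions, cw_positions[1:]):
        infix = window[a + 1:b]          # tokens strictly between the CWs
        if a == 0 or b == len(window) - 1:
            continue                     # no room for a prefix or postfix HFW
        if is_hfw(window[a - 1]) and is_hfw(window[b + 1]):
            pattern = [window[a - 1], "X", *infix, "Y", window[b + 1]]
            instances.append((" ".join(pattern), window[a], window[b]))
    return instances

# toy usage: punctuation counts as an HFW, as in the paper
hfws = {"such", "as", "and", ",", "."}
window = ", pets such as dogs and cats .".split()
print(extract_pattern_instances(window, lambda w: w in hfws))
# [(', X such as Y and', 'pets', 'dogs'), ('as X and Y .', 'dogs', 'cats')]
```

Scanning only adjacent content-word positions suffices here, because any tokens between two adjacent CWs are by construction HFWs, so the infix constraint holds automatically.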
The value we use for FH is lower than that used for FC, in order to allow as HFWs function words of relatively low frequency (e.g., ‘through’), while allowing as content words some frequent words that participate in meaningful relationships (e.g., ‘game’). However, this way we may also introduce more noise. 696 FB (lower bound for hook words) filters hook words that do not appear enough times in the corpus. We have found that this parameter is essential for removing typos and other words that do not qualify as hook words. N (number of hook words) influences relationship coverage. With higher N values we discover more relationships roughly of the same specificity level, but computation becomes less efficient and more noise is introduced. W (window size) determines the length of the discovered patterns. Lower values are more efficient computationally, but values that are too low result in drastic decrease in coverage. Higher values would be more useful when we allow our algorithm to support multiword expressions as hooks and targets. L (target word mutual information filter) helps in avoiding using as targets common words that are unrelated to hooks, while still catching as targets frequent words that are related. Low L values decrease pattern precision, allowing patterns like ‘give X please Y more’, where X is the hook (e.g., ‘Alex’) and Y the target (e.g., ‘some’). High values increase pattern precision at the expense of recall. S (minimal overlap for cluster merging) is a clusters merge filter. Higher values cause more strict merging, producing smaller but more precise clusters, while lower values start introducing noise. In extreme cases, low values can start a chain reaction of total merging. α (core vs. unconfirmed weight for HITS labeling) allows lower quality patterns to complement higher quality ones during labeling. Higher values increase label noise, while lower ones effectively ignore unconfirmed patterns during labeling. In our experiments we have used the following values (again, determined using a development set) for these parameters: FC: 1, 000 words per million (wpm); FH: 100 wpm; FB: 1.2 wpm; N: 500 words; W: 5 words; L: 30%; S: 2/3; α: 0.1. 5 SAT-based Evaluation As discussed in Section 2, the evaluation of semantic relationship structures is non-trivial. The goal of our evaluation was to assess whether pattern clusters indeed represent meaningful, precise and different relationships. There are two complementary perspectives that a pattern clusters quality assessment needs to address. The first is the quality (precision/recall) of individual pattern clusters: does each pattern cluster capture lexical item pairs of the same semantic relationship? does it recognize many pairs of the same semantic relationship? The second is the quality of the cluster set as whole: does the pattern clusters set allow identification of important known semantic relationships? do several pattern clusters describe the same relationship? Manually examining the resulting pattern clusters, we saw that the majority of sampled clusters indeed clearly express an interesting specific relationship. Examples include familiar hypernymy clusters such as3 {‘such X as Y’, ‘X such as Y’, ‘Y and other X’,} with label (pets, dogs), and much more specific clusters like { ‘buy Y accessory for X!’, ‘shipping Y for X’, ‘Y is available for X’, ‘Y are available for X’, ‘Y are available for X systems’, ‘Y for X’ }, labeled by (phone, charger). 
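Labels such as (pets, dogs) and (phone, charger) are the top-scoring pairs under the Hits measure of Section 3.6, which reduces to a few lines of code. In the sketch below, the `instances` mapping is a hypothetical stand-in for the pattern-instance lookup (corpus plus web queries) described in the paper, and the function is an illustration of the formula rather than the authors' implementation.

```python
def hits(core, unconfirmed, instances, pair, alpha=0.1):
    """Affinity of a word pair to a cluster, following Section 3.6.

    `core` and `unconfirmed` are the two pattern subsets of the cluster;
    `instances[p]` is assumed to hold the word pairs observed in
    instances of pattern p (from the corpus or from web queries)."""
    in_core = sum(1 for p in core if pair in instances.get(p, ()))
    in_unconf = sum(1 for p in unconfirmed if pair in instances.get(p, ()))
    score = in_core / len(core) if core else 0.0
    if unconfirmed:
        score += alpha * (in_unconf / len(unconfirmed))
    return score

# toy usage: (pets, dogs) appears in both core patterns of this cluster
core = {"such X as Y", "Y and other X"}
unconfirmed = {"X like Y"}
instances = {"such X as Y": {("pets", "dogs")},
             "Y and other X": {("pets", "dogs"), ("cars", "toyotas")}}
print(hits(core, unconfirmed, instances, ("pets", "dogs")))   # 1.0
```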
Some clusters contain overlapping patterns, like ‘Y for X’, but represent different relationships when examined as a whole. We addressed the evaluation questions above using a SAT-like analogy test automatically generated from word pairs captured by our clusters (see below in this section). In addition, we tested coverage and overlap of pattern clusters with a set of 35 known relationships, and we compared our patterns to those found useful by other algorithms (the next section). Quantitatively, the final number of clusters is 508 (470) for English (Russian), and the average cluster size is 5.5 (6.1) pattern types. 55% of the clusters had no overlap with other clusters. 5.1 SAT Analogy Choice Test Our main evaluation method, which is also a useful application by itself, uses our pattern clusters to automatically generate SAT analogy questions. The questions were answered by human subjects. We randomly selected 15 clusters. This allowed us to assess the precision of the whole cluster set as well as of the internal coherence of separate clusters (see below). For each cluster, we constructed a SAT analogy question in the following manner. The header of the question is a word pair that is one of the label pairs of the cluster. The five multiple 3For readability, we omit punctuations in Prefix and Postfix. 697 choice items include: (1) another label of the cluster (the ‘correct’ answer); (2) three labels of other clusters among the 15; and (3) a pair constructed by randomly selecting words from those making up the various cluster labels. In our sample there were no word pairs assigned as labels to more than one cluster4. As a baseline for comparison, we have mixed these questions with 15 real SAT questions taken from English and Russian SAT analogy tests. In addition, we have also asked our subjects to write down one example pair of the same relationship for each question in the test. As an example, from one of the 15 clusters we have randomly selected the label (glass, water). The correct answer selected from the same cluster was (schoolbag, book). The three pairs randomly selected from the other 14 clusters were (war, death), (request, license) and (mouse, cat). The pair randomly selected from a cluster not among the 15 clusters was (milk, drink). Among the subjects’ proposals for this question were (closet, clothes) and (wallet, money). We computed accuracy of SAT answers, and the correlation between answers for our questions and the real ones (Table 1). Three things are demonstrated about our system when humans are capable of selecting the correct answer. First, our clusters are internally coherent in the sense of expressing a certain relationship, because people identified that the pairs in the question header and in the correct answer exhibit the same relationship. Second, our clusters distinguish between different relationships, because the three pairs not expressing the same relationship as the header were not selected by the evaluators. Third, our cluster labeling algorithm produces results that are usable by people. The test was performed in both English and Russian, with 10 (6) subjects for English (Russian). The subjects (biology and CS students) were not involved with the research, did not see the clusters, and did not receive any special training as preparation. Inter-subject agreement and Kappa were 0.82, 0.72 (0.9, 0.78) for English (Russian). As reported in (Turney, 2005), an average high-school SAT grade is 57. 
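The question-construction procedure just described can be sketched as follows, assuming each sampled cluster carries a list of label pairs. The paper samples 15 clusters with five labels each; the toy example below uses fewer, all names are illustrative, and the fifth choice follows the textual description (random words drawn from the pool of label words).

```python
import random

def build_sat_question(cluster_labels, idx, rng):
    """Construct one five-choice analogy question from cluster `idx`,
    following Section 5.1: the header and the correct answer are two
    labels of the same cluster, three distractors are labels of other
    sampled clusters, and one choice is a pair of words drawn at random
    from the pool of all label words."""
    header, correct = rng.sample(cluster_labels[idx], 2)
    others = [labels for i, labels in enumerate(cluster_labels) if i != idx]
    distractors = [rng.choice(labels) for labels in rng.sample(others, 3)]
    word_pool = [w for labels in cluster_labels for pair in labels for w in pair]
    random_pair = (rng.choice(word_pool), rng.choice(word_pool))
    choices = [correct] + distractors + [random_pair]
    rng.shuffle(choices)
    return header, choices, correct

rng = random.Random(0)
cluster_labels = [
    [("glass", "water"), ("schoolbag", "book")],
    [("war", "death"), ("gun", "injury")],
    [("request", "license"), ("application", "visa")],
    [("mouse", "cat"), ("gazelle", "lion")],
]
print(build_sat_question(cluster_labels, 0, rng))
```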
Table 1 shows the final English and Rus4But note that a pair can certainly obtain a positive HITS value for several clusters. Our method Real SAT Correlation English 80% 71% 0.85 Russian 83% 79% 0.88 Table 1: Pattern cluster evaluation using automatically generated SAT analogy choice questions. sian grade average for ours and real SAT questions. We can see that for both languages, around 80% of the choices were correct (the random choice baseline is 20%). Our subjects are university students, so results higher than 57 are expected, as we can see from real SAT performance. The difference in grades between the two languages might be attributed to the presence of relatively hard and uncommon words. It also may result from the Russian test being easier because there is less verb-noun ambiguity in Russian. We have observed a high correlation between true grades and ours, suggesting that our automatically generated test reflects the ability to recognize analogies and can be potentially used for automated generation of SAT-like tests. The results show that our pattern clusters indeed mirror a human notion of relationship similarity and represent meaningful relationships. They also show that as intended, different clusters describe different relationships. 5.2 Analogy Invention Test To assess recall of separate pattern clusters, we have asked subjects to provide (if possible) an additional pair for each SAT question. On each such pair we have automatically extracted a set of pattern instances that capture this pair by using automated web queries. Then we calculated the HITS value for each of the selected pairs and assigned them to clusters with highest HITS value. The numbers of pairs provided were 81 for English and 43 for Russian. We have estimated precision for this task as macro-average of percentage of correctly assigned pairs, obtaining 87% for English and 82% for Russian (the random baseline of this 15-class classification task is 6.7%). It should be noted however that the human-provided additional relationship examples in this test are not random so it may introduce bias. Nevertheless, these results confirm that our pattern clusters are able to recognize new in698 30 Noun Compound Relationships Avg. num Overlap of clusters Russian 1.8 0.046 English 1.7 0.059 5 Verb Verb Relationships Russian 1.4 0.01 English 1.2 0 Table 2: Patterns clusters discovery of known relationships. stances of relationships of the same type. 6 Evaluation Using Known Information We also evaluated our pattern clusters using relevant information reported in related work. 6.1 Discovery of Known Relationships To estimate recall of our pattern cluster set, we attempted to estimate whether (at least) a subset of known relationships have corresponding pattern clusters. As a testing subset, we have used 35 relationships for both English and Russian. 30 relations are noun compound relationships as proposed in the (Nastase and Szpakowicz, 2003) classification scheme, and 5 relations are verb-verb relations proposed by (Chklovski and Pantel, 2004). We have manually created sets of 5 unambiguous sample pairs for each of these 35 relationships. For each such pair we have assigned the pattern cluster with best HITS value. The middle column of Table 2 shows the average number of clusters per relationship. Ideally, if for each relationship all 5 pairs are assigned to the same cluster, the average would be 1. In the worst case, when each pair is assigned to a different cluster, the average would be 5. 
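A sketch of the bookkeeping behind this metric, assuming each sample pair is assigned to its best-scoring cluster (a trivial scorer is stubbed in where the Hits measure of Section 3.6 would be used); the data and names below are purely illustrative.

```python
def best_cluster(pair, clusters, score):
    """Index of the cluster whose score (e.g. the Hits measure) is highest."""
    return max(range(len(clusters)), key=lambda i: score(clusters[i], pair))

def avg_clusters_per_relationship(relationships, clusters, score):
    """For each relationship, count how many distinct clusters its sample
    pairs are assigned to, then average over relationships (1.0 is ideal,
    5.0 the worst case for five sample pairs)."""
    counts = []
    for pairs in relationships.values():
        assigned = {best_cluster(p, clusters, score) for p in pairs}
        counts.append(len(assigned))
    return sum(counts) / len(counts)

# toy usage with a trivial stand-in scorer: count patterns whose instance
# set contains the pair
clusters = [{"such X as Y": {("pets", "dogs"), ("tools", "hammers")}},
            {"Y is part of X": {("car", "wheel")}}]
score = lambda cluster, pair: sum(pair in inst for inst in cluster.values())
relationships = {"hypernymy": [("pets", "dogs"), ("tools", "hammers")],
                 "meronymy": [("car", "wheel")]}
print(avg_clusters_per_relationship(relationships, clusters, score))  # 1.0
```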
We can see that most of the pairs indeed fall into one or two clusters, successfully recognizing that similarly related pairs belong to the same cluster. The column on the right shows the overlap between different clusters, measured as the average number of shared pairs in two randomly selected clusters. The baseline in this case is essentially 5, since there are more than 400 clusters for 5 word pairs. We see a very low overlap between assigned clusters, which shows that these clusters indeed separate well between defined relations. 6.2 Discovery of Known Pattern Sets We compared our clusters to lists of patterns reported as useful by previous papers. These lists included patterns expressing hypernymy (Hearst, 1992; Pantel et al., 2004), meronymy (Berland and Charniak, 1999; Girju et al., 2006), synonymy (Widdows and Dorow, 2002; Davidov and Rappoport, 2006), and verb strength + verb happensbefore (Chklovski and Pantel, 2004). In all cases, we discovered clusters containing all of the reported patterns (including their refinements with domainspecific prefix or postfix) and not containing patterns of competing relationships. 7 Conclusion We have proposed a novel way to define and identify generic lexical relationships as clusters of patterns. Each such cluster is set of patterns that can be used to identify, classify or capture new instances of some unspecified semantic relationship. We showed how such pattern clusters can be obtained automatically from text corpora without any seeds and without relying on manually created databases or languagespecific text preprocessing. In an evaluation based on an automatically created analogy SAT test we showed on two languages that pairs produced by our clusters indeed strongly reflect human notions of relation similarity. We also showed that the obtained pattern clusters can be used to recognize new examples of the same relationships. In an additional test where we assign labeled pairs to pattern clusters, we showed that they provide good coverage for known noun-noun and verb-verb relationships for both tested languages. While our algorithm shows good performance, there is still room for improvement. It utilizes a set of constants that affect precision, recall and the granularity of the extracted cluster set. It would be beneficial to obtain such parameters automatically and to create a multilevel relationship hierarchy instead of a flat one, thus combining different granularity levels. In this study we applied our algorithm to a generic domain, while the same method can be used for more restricted domains, potentially discovering useful domain-specific relationships. 699 References Alfonseca, E., Ruiz-Casado, M., Okumura, M., Castells, P., 2006. Towards large-scale non-taxonomic relation extraction: estimating the precision of rote extractors. COLING-ACL ’06 Ontology Learning & Population Workshop. Banko, M., Cafarella, M. J. , Soderland, S., Broadhead, M., and Etzioni, O., 2007. Open information extraction from the Web. IJCAI ’07. Berland, M., Charniak, E., 1999. Finding parts in very large corpora. ACL ’99. Chklovski, T., Pantel, P., 2004. VerbOcean: mining the web for fine-grained semantic verb relations. EMNLP ’04. Costello, F., Veale, T. Dunne, S., 2006. Using WordNet to automatically deduce relations between words in noun-noun compounds. COLING-ACL ’06. Davidov, D., Rappoport, A., 2006. Efficient unsupervised discovery of word categories using symmetric patterns and high frequency words. COLING-ACL ’06. Davidov, D., Rappoport, A. 
and Koppel, M., 2007. Fully unsupervised discovery of concept-specific relationships by Web mining. ACL ’07. Davidov, D., Rappoport, A., 2008. Classification of relationships between nominals using pattern clusters. ACL ’08. Etzioni, O., Cafarella, M., Downey, D., Popescu, A., Shaked, T., Soderland, S., Weld, D., and Yates, A., 2004. Methods for domain-independent information extraction from the web: An experimental comparison. AAAI 04 Gabrilovich, E., Markovitch, S., 2005. Feature generation for text categorization using world knowledge. IJCAI 2005. Girju, R., Giuglea, A., Olteanu, M., Fortu, O., Bolohan, O., and Moldovan, D., 2004. Support vector machines applied to the classification of semantic relations in nominalized noun phrases. HLT/NAACL Workshop on Computational Lexical Semantics. Girju, R., Moldovan, D., Tatu, M., and Antohe, D., 2005. On the semantics of noun compounds. Computer Speech and Language, 19(4):479-496. Girju, R., Badulescu, A., and Moldovan, D., 2006. Automatic discovery of part-whole relations. Computational Linguistics, 32(1). Girju, R., Hearst, M., Nakov, P., Nastase, V., Szpakowicz, S., Turney, P., and Yuret, D., 2007. Task 04: Classification of semantic relations between nominal at SemEval 2007. ACL ’07 SemEval Workshop. Hasegawa, T., Sekine, S., and Grishman, R., 2004. Discovering relations among named entities from large corpora. ACL ’04. Hassan, H., Hassan, A. and Emam, O., 2006. Unsupervised information extraction approach using graph mutual reinforcement. EMNLP ’06. Hearst, M., 1992. Automatic acquisition of hyponyms from large text corpora. COLING ’92 Lin, D., Pantel, P., 2002. Concept discovery from text. COLING 02. Moldovan, D., Badulescu, A., Tatu, M., Antohe, D.,Girju, R., 2004. Models for the semantic classification of noun phrases. HLT-NAACL ’04 Workshop on Computational Lexical Semantics. Nastase, V., Szpakowicz, S., 2003. Exploring noun modifier semantic relations. IWCS-5. Pantel, P., Pennacchiotti, M., 2006. Espresso: leveraging generic patterns for automatically harvesting semantic relations. COLING-ACL 2006. Pantel, P., Ravichandran, D. and Hovy, E.H., 2004. Towards terascale knowledge acquisition. COLING ’04. Pasca, M., Lin, D., Bigham, J., Lifchits A., Jain, A., 2006. Names and similarities on the web: fact extraction in the fast lane. COLING-ACL ’06. Rosenfeld, B., Feldman, R., 2007. Clustering for unsupervised relation identification. CIKM ’07. Snow, R., Jurafsky, D., Ng, A.Y., 2006. Semantic taxonomy induction from heterogeneous evidence. COLING-ACL ’06. Strube, M., Ponzetto, S., 2006. WikiRelate! computing semantic relatedness using Wikipedia. AAAI ’06. Suchanek, F., Ifrim, G., and Weikum, G., 2006. LEILA: learning to extract information by linguistic analysis. COLING-ACL ’06 Ontology Learning & Population Workshop. Tatu, M., Moldovan, D., 2005. A semantic approach to recognizing textual entailment. HLT/EMNLP 2005. Turney, P., 2005. Measuring semantic similarity by latent relational analysis. IJCAI ’05. Turney, P., Littman, M., 2005. Corpus-based learning of analogies and semantic selations. Machine Learning(60):1–3:251–278. Turney, P., 2006. Expressing implicit semantic relations without supervision. COLING-ACL ’06. Widdows, D., Dorow, B., 2002. A graph model for unsupervised lexical acquisition. COLING ’02. 700
Proceedings of ACL-08: HLT, pages 63–71, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Contradictions and Justifications: Extensions to the Textual Entailment Task Ellen M. Voorhees National Institute of Standards and Technology Gaithersburg, MD 20899-8940, USA [email protected] Abstract The third PASCAL Recognizing Textual Entailment Challenge (RTE-3) contained an optional task that extended the main entailment task by requiring a system to make three-way entailment decisions (entails, contradicts, neither) and to justify its response. Contradiction was rare in the RTE-3 test set, occurring in only about 10% of the cases, and systems found accurately detecting it difficult. Subsequent analysis of the results shows a test set must contain many more entailment pairs for the three-way decision task than the traditional two-way task to have equal confidence in system comparisons. Each of six human judges representing eventual end users rated the quality of a justification by assigning “understandability” and “correctness” scores. Ratings of the same justification across judges differed significantly, signaling the need for a better characterization of the justification task. 1 Introduction The PASCAL Recognizing Textual Entailment (RTE) workshop series (see www.pascal-network. org/Challenges/RTE3/) has been a catalyst for recent research in developing systems that are able to detect when the content of one piece of text necessarily follows from the content of another piece of text (Dagan et al., 2006; Giampiccolo et al., 2007). This ability is seen as a fundamental component in the solutions for a variety of natural language problems such as question answering, summarization, and information extraction. In addition to the main entailment task, the most recent Challenge, RTE-3, contained a second optional task that extended the main task in two ways. The first extension was to require systems to make three-way entailment decisions; the second extension was for systems to return a justification or explanation of how its decision was reached. In the main RTE entailment task, systems report whether the hypothesis is entailed by the text. The system responds with YES if the hypothesis is entailed and NO otherwise. But this binary decision conflates the case when the hypothesis actually contradicts the text—the two could not both be true— with simple lack of entailment. The three-way entailment decision task requires systems to decide whether the hypothesis is entailed by the text (YES), contradicts the text (NO), or is neither entailed by nor contradicts the text (UNKNOWN). The second extension required a system to explain why it reached its conclusion in terms suitable for an eventual end user (i.e., not system developer). Explanations are one way to build a user’s trust in a system, but it is not known what kinds of information must be conveyed nor how best to present that information. RTE-3 provided an opportunity to collect a diverse sample of explanations to begin to explore these questions. This paper analyzes the extended task results, with the next section describing the three-way decision subtask and Section 3 the justification subtask. Contradiction was rare in the RTE-3 test set, occurring in only about 10% of the cases, and systems found accurately detecting it difficult. 
While the level of agreement among human annotators as to 63 the correct answer for an entailment pair was within expected bounds, the test set was found to be too small to reliably distinguish among systems’ threeway accuracy scores. Human judgments of the quality of a justification varied widely, signaling the need for a better characterization of the justification task. Comments from the judges did include some common themes. Judges prized conciseness, though they were uncomfortable with mathematical notation unless they had a mathematical background. Judges strongly disliked being shown system internals such as scores reported by various components. 2 The Three-way Decision Task The extended task used the RTE-3 main task test set of entailment pairs as its test set. This test set contains 800 text and hypothesis pairs, roughly evenly split between pairs for which the text entails the hypothesis (410 pairs) and pairs for which it does not (390 pairs), as defined by the reference answer key released by RTE organizers. RTE uses an “ordinary understanding” principle for deciding entailment. The hypothesis is considered entailed by the text if a human reading the text would most likely conclude that the hypothesis were true, even if there could exist unusual circumstances that would invalidate the hypothesis. It is explicitly acknowledged that ordinary understanding depends on a common human understanding of language as well as common background knowledge. The extended task also used the ordinary understanding principle for deciding contradictions. The hypothesis and text were deemed to contradict if a human would most likely conclude that the text and hypothesis could not both be true. The answer key for the three-way decision task was developed at the National Institute of Standards and Technology (NIST) using annotators who had experience as TREC and DUC assessors. NIST assessors annotated all 800 entailment pairs in the test set, with each pair independently annotated by two different assessors. The three-way answer key was formed by keeping exactly the same set of YES answers as in the two-way key (regardless of the NIST annotations) and having NIST staff adjudicate assessor differences on the remainder. This resulted in a three-way answer key containing 410 (51%) Reference Systems’ Responses Answer YES UNKN NO Totals YES 2449 2172 299 4920 UNKN 929 2345 542 3816 NO 348 415 101 864 Totals 3726 4932 942 9600 Table 1: Contingency table of responses over all 800 entailment pairs and all 12 runs. YES answers, 319 (40%) UNKNOWN answers, and 72 (9%) NO answers. 2.1 System results Eight different organizations participated in the three-way decision subtask submitting a total of 12 runs. A run consists of exactly one response of YES, NO, or UNKNOWN for each of the 800 test pairs. Runs were evaluated using accuracy, the percentage of system responses that match the reference answer. Figure 1 shows both the overall accuracy of each of the runs (numbers running along the top of the graph) and the accuracy as conditioned on the reference answer (bars). The conditioned accuracy for YES answers, for example, is accuracy computed using just those test pairs for which YES is the reference answer. The runs are sorted by decreasing overall accuracy. Systems were much more accurate in recognizing entailment than contradiction (black bars are greater than white bars). 
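Both the overall accuracy and the accuracies conditioned on the reference answer can be recomputed directly from the contingency counts; the following sketch uses the aggregate numbers from Table 1 and is an illustration of the measures rather than the official evaluation code.

```python
# rows = reference answer, columns = system response; aggregate counts
# over all 12 runs, copied from Table 1
table = {
    "YES":     {"YES": 2449, "UNKNOWN": 2172, "NO": 299},
    "UNKNOWN": {"YES": 929,  "UNKNOWN": 2345, "NO": 542},
    "NO":      {"YES": 348,  "UNKNOWN": 415,  "NO": 101},
}

total = sum(sum(row.values()) for row in table.values())        # 9600
overall_accuracy = sum(table[a][a] for a in table) / total      # ~0.51

# accuracy conditioned on the reference answer: no penalty for
# over-generating a response
conditioned = {a: table[a][a] / sum(table[a].values()) for a in table}
print(overall_accuracy)   # 0.509...
print(conditioned)        # YES ~0.50, UNKNOWN ~0.61, NO ~0.12
```

The conditioned figures reproduce the 50%, 61%, and 12% rates cited in the surrounding discussion.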
Since conditioned accuracy does not penalize for overgeneration of a response, the conditioned accuracy for UNKNOWN is excellent for those systems that used UNKNOWN as their default response. Run H never concluded that a pair was a contradiction, for example. Table 1 gives another view of the relative difficulty of detecting contradiction. The table is a contingency table of the systems’ responses versus the reference answer summed over all test pairs and all runs. A reference answer is represented as a row in the table and a system’s response as a column. Since there are 800 pairs in the test set and 12 runs, there is a total of 9600 responses. As a group the systems returned NO as a response 942 times, approximately 10% of the time. While 10% is a close match to the 9% of the test set for which NO is the reference answer, the systems detected contradictions for the wrong pairs: the table’s 64 A B C D E F G H I J K L 0.0 0.2 0.4 0.6 0.8 1.0 Conditioned Accuracy YES UNKNOWN NO 0.731 0.713 0.591 0.569 0.494 0.471 0.454 0.451 0.436 0.425 0.419 0.365 Figure 1: Overall accuracy (top number) and accuracy conditioned by reference answer for three-way runs. diagonal entry for NO is the smallest entry in both its row and its column. The smallest row entry means that systems were more likely to respond that the hypothesis was entailed than that it contradicted when it in fact contradicted. The smallest column entry means than when the systems did respond that the hypothesis contradicted, it was more often the case that the hypothesis was actually entailed than that it contradicted. The 101 correct NO responses represent 12% of the 864 possible correct NO responses. In contrast, the systems responded correctly for 50% (2449/4920) of the cases when YES was the reference answer and for 61% (2345/3816) of the cases when UNKNOWN was the reference answer. 2.2 Human agreement Textual entailment is evaluated assuming that there is a single correct answer for each test pair. This is a simplifying assumption used to make the evaluation tractable, but as with most NLP phenomena it is not actually true. It is quite possible for two humans to have legitimate differences of opinions (i.e., to differ when neither is mistaken) about whether a hypothesis is entailed or contradicts, especially given annotations are based on ordinary understanding. Since systems are given credit only when they respond with the reference answer, differences in annotators’ opinions can clearly affect systems’ accuracy scores. The RTE main task addressed this issue by including a candidate entailment pair in the test set only if multiple annotators agreed on its disposition (Giampiccolo et al., 2007). The test set also Main Task NIST Judge 1 YES UNKN NO YES 378 27 5 NO 48 242 100 conflated agreement = .90 Main Task NIST Judge 2 YES UNKN NO YES 383 23 4 NO 46 267 77 conflated agreement = .91 Table 2: Agreement between NIST judges (columns) and main task reference answers (rows). contains 800 pairs so an individual test case contributes only 1/800 = 0.00125 to the overall accuracy score. To allow the results from the two- and three-way decision tasks to be comparable (and to leverage the cost of creating the main task test set), the extended task used the same test set as the main task and used simple accuracy as the evaluation measure. The expectation was that this would be as effective an evaluation design for the three-way task as it is for the two-way task. Unfortunately, subsequent analysis demonstrates that this is not so. 
Recall that NIST judges annotated all 800 entailment pairs in the test set, with each pair independently annotated twice. For each entailment pair, one of the NIST judges was arbitrarily assigned as the first judge for that pair and the other as the second judge. The agreement between NIST and RTE annotators is shown in Table 2. The top half of 65 the table shows the agreement between the two-way answer key and the annotations of the set of first judges; the bottom half is the same except using the annotations of the set of second judges. The NIST judges’ answers are given in the columns and the two-way reference answers in the rows. Each cell in the table gives the raw count before adjudication of the number of test cases that were assigned that combination of annotations. Agreement is then computed as the percentage of matches when a NIST judge’s NO or UNKNOWN annotation matched a NO two-way reference answer. Agreement is essentially identical for both sets of judges at 0.90 and 0.91 respectively. Because the agreement numbers reflect the raw counts before adjudication, at least some of the differences may be attributable to annotator errors that were corrected during adjudication. But there do exist legitimate differences of opinion, even for the extreme cases of entails versus contradicts. Typical disagreements involve granularity of place names and amount of background knowledge assumed. Example disagreements concerned whether Hollywood was equivalent to Los Angeles, whether East Jerusalem was equivalent to Jerusalem, and whether members of the same political party who were at odds with one another were ‘opponents’. RTE organizers reported an agreement rate of about 88% among their annotators for the two-way task (Giampiccolo et al., 2007). The 90% agreement rate between the NIST judges and the twoway answer key probably reflects a somewhat larger amount of disagreement since the test set already had RTE annotators’ disagreements removed. But it is similar enough to support the claim that the NIST annotators agree with other annotators as often as can be expected. Table 3 shows the threeway agreement between the two NIST annotators. As above, the table gives the raw counts before adjudication and agreement is computed as percentage of matching annotations. Three-way agreement is 0.83—smaller than two-way agreement simply because there are more ways to disagree. Just as annotator agreement declines as the set of possible answers grows, the inherent stability of the accuracy measure also declines: accuracy and agreement are both defined as the percentage of exact matches on answers. The increased uncertainty YES UNKN NO YES 381 UNKN 82 217 NO 11 43 66 three-way agreement = .83 Table 3: Agreement between NIST judges. when moving from two-way to three-way decisions significantly reduces the power of the evaluation. With the given level of annotator agreement and 800 pairs in the test set, in theory accuracy scores could change by as much as 136 (the number of test cases for which annotators disagreed) ×0.00125 = .17 by using a different choice of annotator. The maximum difference in accuracy scores actually observed in the submitted runs was 0.063. Previous analyses of other evaluation tasks such as document retrieval and question answering demonstrated that system rankings are stable despite differences of opinion in the underlying annotations (Voorhees, 2000; Voorhees and Tice, 2000). 
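The agreement figures and the worst-case score swing above reduce to simple ratios over the annotation tables; the sketch below uses the raw counts from Tables 2 and 3 and is an illustration only, not the scripts used for the track.

```python
def conflated_agreement(counts):
    """Two-way agreement: a NIST NO or UNKNOWN annotation is counted as
    matching a two-way NO reference answer.  `counts[(ref, judge)]` are
    raw pre-adjudication counts."""
    matches = sum(c for (ref, judge), c in counts.items()
                  if (ref == "YES" and judge == "YES")
                  or (ref == "NO" and judge in ("UNKNOWN", "NO")))
    return matches / sum(counts.values())

# raw counts for the first set of NIST judges (Table 2, top half)
judge1 = {("YES", "YES"): 378, ("YES", "UNKNOWN"): 27, ("YES", "NO"): 5,
          ("NO", "YES"): 48, ("NO", "UNKNOWN"): 242, ("NO", "NO"): 100}
print(round(conflated_agreement(judge1), 2))     # 0.9

# three-way agreement between the two NIST judges (Table 3)
three_way_matches = 381 + 217 + 66
print(round(three_way_matches / 800, 2))         # 0.83

# worst-case accuracy swing from the 136 disagreements, each pair being
# worth 1/800 of the accuracy score
print(round((800 - three_way_matches) / 800, 2)) # 0.17
```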
The differences in accuracy observed for the threeway task are large enough to affect system rankings, however. Compared to the system ranking of ABCDEFGHIJKL induced by the official three-way answer key, the ranking induced by the first set of judges’ raw annotations is BADCFEGKHLIJ. The ranking induced by the second set of judges’ raw annotations is much more similar to the official results, ABCDEFGHKIJL. How then to proceed? Since the three-way decision task was motivated by the belief that distinguishing contradiction from simple non-entailment is important, reverting back to a binary decision task is not an attractive option. Increasing the size of the test set beyond 800 test cases will result in a more stable evaluation, though it is not known how big the test set needs to be. Defining new annotation rules in hopes of increasing annotator agreement is a satisfactory option only if those rules capture a characteristic of entailment that systems should actually embody. Reasonable people do disagree about entailment and it is unwise to enforce some arbitrary definition in the name of consistency. Using UNKNOWN as the reference answer for all entailment pairs on which annotators disagree may be a reasonable strategy: the disagreement itself is strong evidence that 66 neither of the other options holds. Creating balanced test sets using this rule could be difficult, however. Following this rule, the RTE-3 test set would have 360 (45%) YES answers, 64 (8%) NO answers, and 376 (47%) UNKNOWN answers, and would induce the ranking ABCDEHIJGKFL. (Runs such as H, I, and J that return UNKNOWN as a default response are rewarded using this annotation rule.) 3 Justifications The second part of the extended task was for systems to provide explanations of how they reached their conclusions. The specification of a justification for the purposes of the task was deliberately vague— a collection of ASCII strings with no minimum or maximum size—so as to not preclude good ideas by arbitrary rules. A justification run contained all of the information from a three-way decision run plus the rationale explaining the response for each of the 800 test pairs in the RTE-3 test set. Six of the runs shown in Figure 1 (A, B, C, D, F, and H) are justification runs. Run A is a manual justification run, meaning there was some human tweaking of the justifications (but not the entailment decisions). After the runs were submitted, NIST selected a subset of 100 test pairs to be used in the justification evaluation. The pairs were selected by NIST staff after looking at the justifications so as to maximize the informativeness of the evaluation set. All runs were evaluated on the same set of 100 pairs. Figure 2 shows the justification produced by each run for pair 75 (runs D and F were submitted by the same organization and contained identical justifications for many pairs including pair 75). The text of pair 75 is Muybridge had earlier developed an invention he called the Zoopraxiscope., and the hypothesis is The Zoopraxiscope was invented by Muybridge. The hypothesis is entailed by the text, and each of the systems correctly replied that it is entailed. Explanations for why the hypothesis is entailed differ widely, however, with some rationales of dubious validity. Each of the six different NIST judges rated all 100 justifications. For a given justification, a judge first assigned an integer score between 1–5 on how understandable the justification was (with 1 as unintelligible and 5 as completely understandable). 
If the understandability score assigned was 3 or greater, the judge then assigned a correctness score, also an integer between 1–5 with 5 the high score. This second score was interpreted as how compelling the argument contained in the justification was rather than simple correctness because justifications could be strictly correct but immaterial. 3.1 System results The motivation for the justification subtask was to gather data on how systems might best explain themselves to eventual end users. Given this goal and the exploratory nature of the exercise, judges were given minimal guidance on how to assign scores other than that it should be from a user’s, not a system developer’s, point of view. Judges used a system that displayed the text, hypothesis, and reference answer, and then displayed each submission’s justification in turn. The order in which the runs’ justifications were displayed was randomly selected for each pair; for a given pair, each judge saw the same order. Figure 2 includes the scores assigned to each of the justifications of entailment pair 75. Each pair of numbers in brackets is a score pair assigned by one judge. The first number in the pair is the understandability score and the second the correctness score. The correctness score is omitted (‘–’) when the understandability score is 1 or 2 because no correctness score was assigned in that case. The scores from the different judges are given in the same order for each justification. With 100 entailment pairs evaluated by each of 6 judges assigning 2 separate scores, each run had a total of 1200 numbers assigned to it. Figure 3 shows two views of these numbers: a histogram of the number of justifications in the run that were assigned a given score value summed over all judges and all test pairs, and the overall mean score for the run1. A correctness score that was not assigned because understandability was too poor is displayed as a score of 0 in the histogram and treated as a 0 in the computation of the mean. Understandability scores are shown toward the left in the figure and correctness scores toward the right. The mean scores for correctness are fairly low 1Using the mean treats a score as an interval variable rather than an ordinal variable, a questionable treatment given a 5point scale. A mean is a convenient summary, however. 67 Run A: There is a relationship between Zoopraxiscope and Muybridge in both the text and hypothesis. The term "invention" is morphologically similar to "invented". Scores: [4,4] [4,4] [5,4] [4,1] [5,4], [3,2] Run B: The text mentions ’Muybridge’. We can infer that Muybridge is inventor. From the fact that Muybridge is inventor, we can infer that Muybridge invented. We can conclude that the Zoopraxiscope was invented by Muybridge. Scores:[4,3] [3,3] [5,4] [5,1] [5,3] [3,2] Run C: 1: The Hypothesis could be precisely matched with content in the Text, with allowance for polarity and embedded contexts. 2: Hypothesis words match well with words in the Text. 3: text adjunct "called" of "invention" dropped on aligned hyp word "invented" Scores: [3,3] [4,4] [4,4] [2,–] [1,–] [2,–] Runs D and F: The words in the hypothesis are all found, with the approximately all the exact same syntactic dependencies, also in the text. Therefore, I concluded that the given hypothesis: The Zoopraxiscope was invented by Muybridge. is entailed by the given text Muybridge had earlier developed an invention he called the Zoopraxiscope. Scores: [3,3] [4,3] [4,3] [5,1] [4,3] [2,–] Run H: Yes! 
I have general knowledge that: IF Y is developed by X THEN Y is manufactured by X Here: X = Muybridge, Y = the invention Thus, here: We are told in T: the invention is developed by Muybridge Thus it follows that: the invention is manufactured by Muybridge In addition, I know: "manufacture" and "invent" mean roughly the same thing Hence: The Zoopraxiscope was invented by Muybridge. Scores: [2,–] [4,1] [3,3] [3,1] [2,–] [1,–] Figure 2: Justification for entailment pair 75 from each justification run. Brackets contain the pair of scores assigned to the justification by one of the six human judges; the first number in the pair is the understandability score and the second is the correctness score. for all runs. Recall, however, that the ‘correctness’ score was actually interpreted as compellingness. There were many justifications that were strictly correct but not very informative, and they received low correctness scores. For example, the low correctness scores for the justification from run A in Figure 2 were given because those judges did not feel that the fact that “invention and inventor are morphologically similar”was enough of an explanation. Mean correctness scores were also affected by understandability. Since an unassigned correctness score was treated as a zero when computing the mean, systems with low understandability scores must have lower correctness scores. Nonetheless, it is also true that systems reached the correct entailment decision by faulty reasoning uncomfortably often, as illustrated by the justification from run H in Figure 2. 68 0 100 200 300 400 Run A* [4.27 2.75] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness 0 100 200 300 400 Run B [4.11 2.00] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness 0 100 200 300 400 Run C [2.66 1.23] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness 0 100 200 300 400 Run D [3.15 1.54] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness 0 100 200 300 400 Run F [3.11 1.47] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness 0 100 200 300 400 Run H [4.09 1.49] 0 1 1 2 2 3 3 4 4 5 5 Understandability Correctness Figure 3: Number of justifications in a run that were assigned a particular score value summed over all judges and all test pairs. Brackets contain the overall mean understandability and correctness scores for the run. The starred run (A) is the manual run. 3.2 Human agreement The most striking feature of the system results in Figure 3 is the variance in the scores. Not explicit in that figure, though illustrated in the example in Figure 2, is that different judges often gave widely different scores to the same justification. One systematic difference was immediately detected. The NIST judges have varying backgrounds with respect to mathematical training. Those with more training were more comfortable with, and often preferred, justifications expressed in mathematical notation; those with little training strongly disliked any mathematical notation in an explanation. This preference affected both the understandability and the correctness scores. Despite being asked to assign two separate scores, judges found it difficult to separate understandability and correctness. As a result, correctness scores were affected by presentation. The scores assigned by different judges were sufficiently different to affect how runs compared to one another. This effect was quantified in the following way. 
For each entailment pair in the test set, the set of six runs was ranked by the scores assigned by one assessor, with rank one assigned to the best run and rank six the worst run. If several systems had the same score, they were each assigned the mean rank for the tied set. (For example, if two systems had the same score that would rank them second and third, they were each assigned rank 2.5.) A run was then assigned its mean rank over the 100 justifications. Figure 4 shows how the mean rank of the runs varies by assessor. The x-axis in the figure shows the judge assigning the score and the y-axis the mean rank (remember that rank one is best). A run is plotted using its letter name consistent with previous figures, and lines connect the same system across different judges. Lines intersect demonstrating that different judges prefer different justifications. After rating the 100 justifications, judges were asked to write a short summary of their impression of the task and what they looked for in a justification. These summaries did have some common themes. Judges prized conciseness and specificity, and expected (or at least hoped for) explanations in fluent English. Judges found “chatty” templates such as the one used in run H more annoying than engaging. Verbatim repetition of the text and hypothesis within 69 Judge1 Judge2 Judge3 Judge4 Judge5 Judge6 1 2 3 4 5 Mean Rank Understandabilty B B B B B B A A A A A A C C C C C C D D D D D D F F F F F F H H H H H H Judge1 Judge2 Judge3 Judge4 Judge5 Judge6 1 2 3 4 5 Mean Rank Correctness B B B B B B A A A A A A C C C C C C D D D D D D F F F F F F H H H H H H Figure 4: Relative effectiveness of runs as measured by mean rank. the justification (as in runs D and F) was criticized as redundant. Generic phrases such as “there is a relation between” and “there is a match” were worse than useless: judges assigned no expository value to such assertions and penalized them as clutter. Judges were also adverse to the use of system internals and jargon in the explanations. Some systems reported scores computed from WordNet (Fellbaum, 1998) or DIRT (Lin and Pantel, 2001). Such reports were penalized since the judges did not care what WordNet or DIRT are, and if they had cared, had no way to calibrate such a score. Similarly, linguistic jargon such as ‘polarity’ and ‘adjunct’ and ‘hyponym’ had little meaning for the judges. Such qualitative feedback from the judges provides useful guidance to system builders on ways to explain system behavior. A broader conclusion from the justifications subtask is that it is premature for a quantitative evaluation of system-constructed explanations. The community needs a better understanding of the overall goal of justifications to develop a workable evaluation task. The relationships captured by many RTE entailment pairs are so obvious to humans (e.g., an inventor creates, a niece is a relative) that it is very unlikely end users would want explanations that include this level of detail. Having a true user task as a target would also provide needed direction as to the characteristics of those users, and thus allow judges to be more effective surrogates. 4 Conclusion The RTE-3 extended task provided an opportunity to examine systems’ abilities to detect contradiction and to provide explanations of their reasoning when making entailment decisions. 
True contradiction was rare in the test set, accounting for approximately 10% of the test cases, though it is not possible to say whether this is a representative fraction for the text sources from which the test was drawn or simply a chance occurrence. Systems found detecting contradiction difficult, both missing it when it was present and finding it when it was not. Levels of human (dis)agreement regarding entailment and contradiction are such that test sets for a three-way decision task need to be substantially larger than for binary decisions for the evaluation to be both reliable and sensitive. The justification task as implemented in RTE-3 is too abstract to make an effective evaluation task. Textual entailment decisions are at such a basic level of understanding for humans that human users don’t want explanations at this level of detail. User backgrounds have a profound effect on what presentation styles are acceptable in an explanation. The justification task needs to be more firmly situated in the context of a real user task so the requirements of the user task can inform the evaluation task. Acknowledgements The extended task of RTE-3 was supported by the Disruptive Technology Office (DTO) AQUAINT program. Thanks to fellow coordinators of the task, Chris Manning and Dan Moldovan, and to the participants for making the task possible. 70 References Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The PASCAL recognising textual entailment challenge. In Lecture Notes in Computer Science, volume 3944, pages 177–190. Springer-Verlag. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. The MIT Press. Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In Proceedings of the ACLPASCAL Workshop on Textual Entailment and Paraphrasing, pages 1–9. Association for Computational Linguistics. Dekang Lin and Patrick Pantel. 2001. DIRT —Discovery of inference rules from text. In Proceedings of the ACM Conference on Knowledge Discovery and Data Mining (KDD-01), pages 323–328. Ellen M. Voorhees and Dawn M. Tice. 2000. Building a question answering test collection. In Proceedings of the Twenty-Third Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 200–207, July. Ellen M. Voorhees. 2000. Variations in relevance judgments and the measurement of retrieval effectiveness. Information Processing and Management, 36:697– 716. 71
Proceedings of ACL-08: HLT, pages 701–709, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Improving Search Results Quality by Customizing Summary Lengths Michael Kaisser University of Edinburgh 2 Buccleuch Place Edinburgh EH8 9LW [email protected] Marti A. Hearst UC Berkeley 102 South Hall Berkeley, CA 94705 [email protected] John B. Lowe Powerset, Inc. 475 Brannan St. San Francisco, CA 94107 [email protected] Abstract Web search engines today typically show results as a list of titles and short snippets that summarize how the retrieved documents are related to the query. However, recent research suggests that longer summaries can be preferable for certain types of queries. This paper presents empirical evidence that judges can predict appropriate search result summary lengths, and that perceptions of search result quality can be affected by varying these result lengths. These findings have important implications for search results presentation, especially for natural language queries. 1 Introduction Search results listings on the web have become standardized as a list of information summarizing the retrieved documents. This summary information is often referred to as the document’s surrogate (Marchionini et al., 2008). In older search systems, such as those used in news and legal search, the document surrogate typically consisted of the title and important metadata, such as date, author, source, and length of the article, as well as the document’s manually written abstract. In most cases, the full text content of the document was not available to the search engine and so no extracts could be made. In web search, document surrogates typically show the web page’s title, a URL, and information extracted from the full text contents of the document. This latter part is referred to by several different names, including summary, abstract, extract, and snippet. Today it is standard for web search engines to show these summaries as one or two lines of text, often with ellipses separating sentence fragments. However, there is evidence that the ideal result length is often longer than the standard snippet length, and that furthermore, result length depends on the type of answer being sought. In this paper, we systematically examine the question of search result length preference, comparing different result lengths for different query types. We find evidence that desired answer length is sensitive to query type, and that for some queries longer answers are judged to be of higher quality. In the following sections we summarize the related work on result length variation and on query topic classification. We then describe two studies. In the first, judges examined queries and made predictions about the expected answer types and the ideal answer lengths. In the second study, judges rated answers of different lengths for these queries. The studies find evidence supporting the idea that different query types are best answered with summaries of different lengths. 2 Related Work 2.1 Query-biased Summaries In the early days of the web, the result summary consisted of the first few lines of text, due both to concerns about intellectual property, and because often that was the only part of the full text that the search engines retained from their crawls. Eventually, search engines started showing what are known variously as query-biased summaries, keyword-in701 context (KWIC) extractions, and user-directed summaries (Tombros and Sanderson, 1998). 
In these summaries, sentence fragments, full sentences, or groups of sentences that contain query terms are extracted from the full text. Early versions of this idea were developed in the Snippet Search tool (Pedersen et al., 1991) and the Superbook tool's Table-of-Contents view (Egan et al., 1989).

A query-biased summary shows sentences that summarize the ways the query terms are used within the document. In addition to showing which subsets of query terms occur in a retrieved document, this display also exposes the context in which the query terms appear with respect to one another.

Research suggests that query-biased summaries are superior to showing the first few sentences from documents. Tombros & Sanderson (1998), in a study with 20 participants using TREC ad hoc data, found higher precision and recall and higher subjective preferences for query-biased summaries over summaries showing the first few sentences. Similar results for timing and subjective measurements were found by White et al. (2003) in a study with 24 participants. White et al. (2003) also describe experiments with different sentence selection mechanisms, including giving more weight to sentences that contained query words along with text formatting.

There are significant design questions surrounding how best to formulate and display query-biased summaries. As with standard document summarization and extraction, there is an inherent trade-off between showing long, informative summaries and minimizing the screen space required by each search hit. There is also a tension between showing short snippets that contain all or most of the query terms and showing coherent stretches of text. If the query terms do not co-occur near one another, then the extract has to become very long if full sentences and all query terms are to be shown. Many web search engine snippets compromise by showing fragments instead of sentences.

2.2 Studies Comparing Results Lengths

Recently, a few studies have analyzed the results of varying search summary length. In the question-answering context (as opposed to general web search), Lin et al. (2003) conducted a usability study with 32 computer science students comparing four types of answer context: exact answer, answer-in-sentence, answer-in-paragraph, and answer-in-document. To remove effects of incorrect answers, they used a system that produced only correct answers, drawn from an online encyclopedia. Participants viewed answers for 8 question scenarios. Lin et al. (2003) found no significant differences in task completion times, but they did find differences in subjective responses. Most participants (53%) preferred paragraph-sized chunks, noting that a sentence wasn't much more information beyond the exact answer, and a full document was oftentimes too long. That said, 23% preferred full documents, 20% preferred sentences, and one participant preferred exact answer, thus suggesting that there is considerable individual variation.

Paek et al. (2004) experimented with showing differing amounts of summary information in results listings, controlling the study design so that only one result in each list of 10 was relevant. For half the test questions, the target information was visible in the original snippet, and for the other half, the participant needed to use their mouse to view more information from the relevant search result.
They compared three interface conditions: (i) a standard search results listing, in which a mouse click on the title brings up the full text of the web page, (ii) "instant" view, for which a mouse click expanded the document summary to show additional sentences from the document, and those sentences contained query terms and the answer to the search task, and (iii) a "dynamic" view that responded to a mouse hover, and dynamically expanded the summary with a few words at a time.

Eleven out of 18 participants preferred instant view over the other two views, and on average all participants produced faster and more accurate results with this view. Seven participants preferred dynamic view over the others, but many others found this view disruptive. The dynamic view suffered from the problem that, as the text expanded, the mouse no longer covered the selected results, and so an unintended, different search result sometimes started to expand. Notably, none of the participants preferred the standard results listing view.

Cutrell & Guan (2007) compared search summaries of varying length: short (1 line of text), medium (2-3 lines) and long (6-7 lines) using search engine-produced snippets (it is unclear if the summary text was contiguous or included ellipses). They also compared 6 navigational queries (where the goal is to find a website's homepage) with 6 informational queries (e.g., "find when the Titanic set sail for its only voyage and what port it left from," "find out how long the Las Vegas monorail is"). In a study with 22 participants, they found that participants were 24 seconds faster on average with the long view than with the short and medium view. They also found that participants were 10 seconds slower on average with the long view for the navigational tasks. They present eye tracking evidence which suggests that on the navigational task, the extra text distracts the eye from the URL. They did not report on subjective responses to the different answer lengths.

Rose et al. (2007) varied search results summaries along several dimensions, finding that text choppiness and sentence truncation had negative effects, and genre cues had positive effects. They did not find effects for varying summary length, but they only compared relatively similar summary lengths (2 vs. 3 vs. 4 lines long).

2.3 Categorizing Questions by Expected Answer Types

In the field of automated question-answering, much effort has been expended on automatically determining the kind of answer that is expected for a given question. The candidate answer types are often drawn from the types of questions that have appeared in the TREC Question Answering track (Voorhees, 2003). For example, the Webclopedia project created a taxonomy of 180 types of question targets (Hovy et al., 2002), and the FALCON project (Harabagiu et al., 2003) developed an answer taxonomy with 33 top level categories (such as PERSON, TIME, REASON, PRODUCT, LOCATION, NUMERICAL VALUE, QUOTATION), and these were further refined into an unspecified number of additional categories. Ramakrishnan et al. (2004) show an automated method for determining expected answer types using syntactic information and mapping query terms to WordNet.

2.4 Categorizing Web Queries

A different line of research is the query log categorization problem. In query logs, the queries are often much more terse and ill-defined than in the TREC QA track, and, accordingly, the taxonomies used to classify what is called the query intent have been much more general.
In an attempt to demonstrate how information needs for web search differ from the assumptions of pre-web information retrieval systems, Broder (2002) created a taxonomy of web search goals, and then estimated the frequency of such goals by a combination of an online survey (3,200 responses, 10% response rate) and a manual analysis of 1,000 queries from the AltaVista query logs. This taxonomy has been heavily influential in discussions of query types on the Web.

Rose & Levinson (2004) followed up on Broder's work, again using web query logs, but developing a taxonomy that differed somewhat from Broder's. They manually classified a set of 1,500 AltaVista search engine log queries. For two sets of 500 queries, the labeler saw just the query and the retrieved documents; for the third set the labeler also saw information about which item(s) the searcher clicked on. They found that the classifications that used the extra information about clickthrough did not change the proportions of assignments to each category. Because they did not directly compare judgments with and without click information on the same queries, this is only weak evidence that query plus retrieved documents is sufficient to classify query intent.

Alternatively, queries from web query logs can be classified according to the topic of the query, independent of the type of information need. For example, a search involving the topic of weather can consist of the simple information need of looking at today's forecast, or the rich and complex information need of studying meteorology. Over many years, Spink & Jansen et al. (2006; 2007) have manually analyzed samples of query logs to track a number of different trends. One of the most notable is the change in topic mix. As an alternative to manual classification of query topics, Shen et al. (2005) described an algorithm for automatically classifying web queries into a set of pre-defined topics. More recently, Broder et al. (2007) presented a highly accurate method (around .7 F-score) for classifying short, rare queries into a taxonomy of 6,000 categories.

3 Study Goals

Related work suggests that longer results are preferable, but not for all query types. The goal of our efforts was to determine preferred result length for search results, depending on type of query. To do this, we performed two studies:

1. We asked a set of judges to categorize a large set of web queries according to their expected preferred response type and expected preferred response length.

2. We then developed high-quality answer passages of different lengths for a subset of these queries by selecting appropriate passages from the online encyclopedia Wikipedia, and asked judges to rate the quality of these answers.

The results of this study should inform search interface designers about what the best presentation format is.

3.1 Using Mechanical Turk

For these studies, we make use of a web service offered by Amazon.com called Mechanical Turk, in which participants (called "turkers") are paid small sums of money in exchange for work on "Human Intelligence tasks" (HITs).1 These HITs are generated from an XML description of the task created by the investigator (called a "requester"). The participants can come from any walk of life, and their identity is not known to the requesters. We have in past work found the results produced by these judges to be of high quality, and have put into place various checks to detect fraudulent behavior.
Other researchers have investigated the efficacy of language annotation using this service and have found that the results are of high quality (Su et al., 2007).

1 Website: http://www.mturk.com. For experiment 1, approximately 38,000 HITs were completed at a cost of about $1,500. For experiment 2, approximately 7,300 HITs were completed for about $170. Turkers were paid between $.01 and $.05 per HIT depending on task complexity; Amazon imposes additional charges.

Table 1: Allowable responses to the question: "What sort of result or results does the query ask for?" in the first experiment.
1. Person(s)
2. Organization(s)
3. Time(s) (date, year, time span etc.)
4. Number or Quantity
5. Geographic Location(s) (e.g., city, lake, address)
6. Place(s) (e.g., "the White House", "at a supermarket")
7. Obtain resource online (e.g., movies, lyrics, books, magazines, knitting patterns)
8. Website or URL
9. Purchase and product information
10. Gossip and celebrity information
11. Language-related (e.g., translations, definitions, crossword puzzle answers)
12. General information about a topic
13. Advice
14. Reason or Cause, Explanation
15. Yes/No, with or without explanation or evidence
16. Other
17. Unjudgable

Table 2: Allowable responses to the question: "How long is the best result for this query?" in the first experiment.
1. A word or short phrase
2. A sentence
3. One or more paragraphs (i.e. at least several sentences)
4. An article or full document
5. A list
6. Other, or some combination of the above

3.2 Estimating Ideal Answer Length and Type

We developed a set of 12,790 queries, drawn from Powerset's in-house query database which contains representative subsets of queries from different search engines' query logs, as well as hand-edited query sets used for regression testing. There are a disproportionally large number of natural language queries in this set compared with query sets from typical keyword engines. Such queries are often complete questions and are sometimes grammatical fragments (e.g., "date of next US election") and so are likely to be amenable to interesting natural language processing algorithms, which is an area of interest of our research.

Figure 1: Results of the first experiment. The y-axis shows the semantic type of the predicted answer, in the same order as listed in Table 1; the x-axis shows the preferred length as listed in Table 2. Three bars with length greater than 1,500 are trimmed to the maximum size to improve readability (GeneralInfo/Paragraphs, GeneralInfo/Article, and Number/Phrase).

The average number of words per query (as determined by white space separation) was 5.8 (sd. 2.9) and the average number of characters (including punctuation and white space) was 32.3 (14.9). This is substantially longer than the current average for web search queries, which was approximately 2.8 in 2005 (Jansen et al., 2007); this is due to the existence of natural language queries.

Judges were asked to classify each query according to its expected response type into one of 17 categories (see Table 1). These categories include answer types used in question answering research as well as (to better capture the diverse nature of web queries) several more general response types such as Advice and General Information. Additionally, we asked judges to anticipate what the best result length would be for the query, as shown in Table 2. Each of the 12,790 queries received three assessments by MTurk judges.
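As an illustration of how such three-judge assessments can be tallied into the agreement figures reported next, here is a minimal sketch; the judgment tuples are invented and the snippet is not part of the authors' pipeline.

```python
from collections import Counter

# Hypothetical per-query labels from three MTurk judges (answer-type categories).
judgments = [
    ("Person(s)", "Person(s)", "Person(s)"),       # all three agree
    ("Advice", "General information", "Advice"),   # exactly two agree
    ("Time(s)", "Number or Quantity", "Advice"),   # no agreement
]

def agreement(labels):
    """Size of the largest group of judges giving the same label (3, 2, or 1)."""
    return max(Counter(labels).values())

tally = Counter(agreement(j) for j in judgments)
total = len(judgments)
for level, name in [(3, "all three agree"), (2, "two agree"), (1, "none agree")]:
    print(f"{name}: {tally[level]} ({tally[level] / total:.1%})")
```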
For answer types, the number of times all three judges agreed was 4537 (35.4%); two agreed 6030 times (47.1%), and none agreed 2223 times (17.4%). Not surprisingly, there was significant overlap between the label GeneralInfo and the other categories. For answer length estimations, all three judges agreed in 2361 cases (18.5%), two agreed in 7210 cases (56.4%) and none 3219 times (25.2%).

Figure 1 summarizes expected length judgments by estimated answer category. Distribution of the length categories differs a great deal across the individual expected response categories. In general, the results are intuitive: judges preferred short responses for "precise queries" (e.g., those asking for numbers) and they preferred longer responses for queries in broad categories like Advice or GeneralInfo. But some results are less intuitive: for example, judges preferred different response lengths for queries categorized as Person and Organization – in fact for the latter the largest single selection made was List.

Reviewing the queries for these two categories, we note that most queries about organizations in our collection asked for companies (e.g. "around the world travel agency") and for these there usually is more than one correct answer, whereas the queries about persons ("CEO of microsoft") typically only had one relevant answer. The results of this table show that there are some trends but not definitive relationships between query type (as classified in this study) and expected answer length. More detailed classifications might help resolve some of the conflicts.

Table 3: Average number of characters for each answer length type for the stimuli used in the second experiment.
  length type       average    std dev
  Word or Phrase       38.1       25.8
  Sentence            148.1       71.4
  Paragraph           490.5      303.1
  Section            1574.2     1251.1

3.3 Result Length Study

The purpose of the second study was twofold: first, to see if doing a larger study confirms what is hinted at in the literature: that search result lengths longer than the standard snippet may be desirable for at least a subset of queries. Second, we wanted to see if judges' predictions of desirable results lengths would be confirmed by other judges' responses to search results of different lengths.

3.3.1 Method

It has been found that obtaining judges' agreement on intent of a query from a log can be difficult (Rose and Levinson, 2004; Kellar et al., 2007). In order to make the task of judging query relevance easier, for the next phase of the study we focused on only those queries for which all three assessors in the first experiment agreed both on the category label and on the estimated ideal length. There were 1099 such high-confidence queries, whose average number of words was 6.3 (2.9) and average number of characters was 34.5 (14.3).

We randomly selected a subset of the high-agreement queries from the first experiment and manually excluded queries for which it seemed obvious that no responses could be found in Wikipedia. These included queries about song lyrics, since intellectual property restrictions prevent these being posted, and crossword puzzle questions such as "a four letter word for water." The remaining set contained 170 queries. MTurk annotators were asked to find one good text passage (in English) for each query from the Wikipedia online encyclopedia.
They were also asked to subdivide the text of this answer into each of the following lengths: a word or phrase, a sentence, a paragraph, a section or an entire article.2 Thus, the shorter answer passages are subsumed by the longer ones. Table 3 shows the average lengths and standard deviations of each result length type. Table 4 contains sample answers for the shorter length formats for one query.

2 Note the slight difference between the length categories in the first and second experiment: The List and Other options were dropped for the second experiment because we wanted to concentrate on textual length. Additionally, to provide more than one option between Sentence and Article, the category One or more paragraphs was split up into two: (One) Paragraph and (One) Section.

Table 4: Sample answers of differing lengths used as input for the second study. Note that the shorter answers are contained in the longer ones. For the full article case, judges were asked to follow a hyperlink to an article.
  query: Who was the first person to scale K2?
  Paragraph: An Italian expedition finally succeeded in ascending to the summit of K2 on July 31, 1954. The expedition was led by Ardito Desio, although the two climbers who actually reached the top were Lino Lacedelli and Achille Compagnoni. The team included a Pakistani member, Colonel Muhammad Ata-ullah. He had been a part of an earlier 1953 American expedition which failed to make the summit because of a storm which killed a key climber, Art Gilkey. On the expedition also was the famous Italian climber Walter Bonatti. He proved vital to the expedition's success in that he carried vital oxygen to 26,600ft for Lacedelli and Compagnoni. His dramatic bivouac, at that altitude with the equipment, wrote another chapter in the saga of Himalayan climbing.
  Sentence: The expedition was led by Ardito Desio, although the two climbers who actually reached the top were Lino Lacedelli and Achille Compagnoni.
  Phrase: Lino Lacedelli and Achille Compagnoni

For 24 of the 170 queries the annotators could not find a suitable response in Wikipedia, e.g., "How many miles between NY and Milwaukee?" We collected two to five results for each of the remaining 146 queries and manually chose the best of these answer passages. Note that, by design, all responses were factually correct; they only differed in their length.

Ten MTurk judges saw each query/answer length pair, and for each of these, were told: "Below you see a search engine query and a possible response. We would like you to give us your opinion about the response. We are especially interested in the length of the response. Is it suitable for the query? Is there too much or not enough information? Please rate the response on a scale from 0 (very bad response) to 10 (very good response)." There were 124 judges in total; of these, 16 did more than 146 HITs, meaning they saw the same query more than one time (but with different lengths). Upon examination of the results, we determined that two of these high-volume judges were not trying to do the task properly, and so we dropped their judgments from the final analysis.

3.3.2 Results

Our results show that judges prefer results of different lengths, depending on the query. The results also suggest that judges' estimates of a preferred result length in the first experiment are accurate predictors when there is strong agreement among them. Figure 2 shows in four diagrams
how queries assigned by judges to one of the four length categories from the first experiment were judged when presented with responses of the five answer lengths from the second experiment. The graphs show the means and standard error of the judges' scores across all queries for each predicted-length/presented-length combination.

Figure 2: Results of the second experiment, where each query/answer-length pair was assessed by 8–10 judges using a scale of 0 ('very bad') to 10 ('very good'). Marks indicate means and standard errors. The top left graph shows responses of different lengths for queries that were classified as best answered with a phrase in the first experiment. The upper right shows responses for queries predicted to be best answered with a sentence, lower left for best answered with one or more paragraphs and lower right for best answered with an article.

In order to test whether these results are significant we performed four separate linear regressions, one for each of the predicted preferred length categories (see the sketch at the end of this section). The snippet length, the independent variable, was coded as 1-5, shortest to longest. The score for each query-snippet pair is the dependent variable. Table 5 shows that for each group there is evidence to reject the null hypothesis that the slope is equal to 0 at the 99% confidence level. High scores are associated with shorter snippet lengths for queries with predicted preferred length phrase or sentence and also with longer snippet lengths for queries with predicted preferred length paragraphs or article. These associations are strongest for the queries with the most extreme predicted preferred lengths (phrase and article).

Table 5: Results of unweighted linear regression on the data for the second experiment, which was separated into four groups based on the predicted preferred length.
              Slope   Std. Error   p-value
  Phrase     -0.850        0.044   < 0.0001
  Sentence   -0.550        0.050   < 0.0001
  Paragraph   0.328        0.049   < 0.0001
  Article     0.856        0.053   < 0.0001

Our results also suggest the intuition that the best answer lengths do not form strictly distinct classes, but rather lie on a continuum. If the ideal response is from a certain category (e.g., a sentence), returning a result from an adjacent category (a phrase or a paragraph) is not strongly penalized by judges, whereas returning a result from a category further up or down the scale (an article) is.

One potential drawback of this study format is that we do not show judges a list of results for queries, as is standard in search engines, and so they do not experience the tradeoff effect of longer results requiring more scrolling if the desired answer is not shown first. However, the earlier results of Cutrell & Guan (2007) and Paek et al. (2004) suggest that the preference for longer results occurs even in contexts that require looking through multiple results. Another potential drawback of the study is that judges only view one relevant result; the effects of showing a list of long non-relevant results may be more negative than that of showing short non-relevant results; this study would not capture that effect.
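For concreteness, the shape of the per-group regression reported in Table 5 can be sketched as follows; the scores are invented, and scipy's linregress is our stand-in for whatever statistics package was actually used.

```python
from scipy.stats import linregress

# Snippet lengths coded 1-5 (phrase, sentence, paragraph, section, article)
# paired with invented 0-10 scores for queries whose predicted preferred
# length was "phrase"; in the study each pair had 8-10 such scores.
lengths = [1, 1, 2, 2, 3, 3, 4, 4, 5, 5]
scores = [9, 8, 8, 7, 6, 5, 4, 3, 2, 2]

result = linregress(lengths, scores)
print(f"slope={result.slope:.3f}  stderr={result.stderr:.3f}  p={result.pvalue:.2g}")
# A significantly negative slope corresponds to the Phrase row of Table 5:
# shorter snippets score higher when a phrase-length answer is expected.
```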
4 Conclusions and Future Work

Our studies suggest that different queries are best served with different response lengths (Experiment 1), and that for a subset of especially clear queries, human judges can predict the preferred result lengths (Experiment 2). The results furthermore support the contention that standard results listings are too short in many cases, at least assuming that the summary shows information that is relevant for the query. These findings have important implications for the design of search results presentations, suggesting that as user queries become more expressive, search engine results should become more responsive to the type of answer desired. This may mean showing more context in the results listing, or perhaps using more dynamic tools such as expand-on-mouseover to help answer the query in place.

The obvious next step is to determine how to automatically classify queries according to their predicted result length and type. For classifying according to expected length, we have run some initial experiments based on unigram word counts which correctly classified 78% of 286 test queries (on 805 training queries) into one of three length bins. We plan to pursue this further in future work. For classifying according to type, as discussed above, most automated query classification for web logs has been based on the topic of the query rather than on the intended result type, but the question answering literature has intensively investigated how to predict appropriate answer types. It is likely that the techniques from these two fields can be productively combined to address this challenge.
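The unigram-count length classifier mentioned above is not described in detail; purely as an illustrative stand-in (the example queries, bin names, and the Naive Bayes choice are all assumptions of ours), such a classifier could be set up like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training queries labelled with one of three length bins.
queries = [
    "height of mount everest",           # short answer
    "ceo of microsoft",                  # short answer
    "how do i apply for a uk visa",      # paragraph-length answer
    "advice on travelling with a dog",   # paragraph-length answer
    "history of the roman empire",       # article-length answer
    "overview of meteorology",           # article-length answer
]
bins = ["short", "short", "paragraph", "paragraph", "article", "article"]

# Unigram counts feeding a multinomial Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(queries, bins)
print(model.predict(["president of france", "tips for visiting hong kong"]))
```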
Acknowledgments. This work was supported in part by Powerset, Inc., and in part by Microsoft Research through the MSR European PhD Scholarship Programme. We would like to thank Susan Gruber and Bonnie Webber for their helpful comments and suggestions.

References

A. Broder, M. Fontoura, E. Gabrilovich, A. Joshi, V. Josifovski, and T. Zhang. 2007. Robust classification of rare queries using web knowledge. Proceedings of SIGIR 2007.

A. Broder. 2002. A taxonomy of web search. ACM SIGIR Forum, 36(2):3–10.

E. Cutrell and Z. Guan. 2007. What Are You Looking For? An Eye-tracking Study of Information Usage in Web Search. Proceedings of ACM SIGCHI 2007.

D.E. Egan, J.R. Remde, L.M. Gomez, T.K. Landauer, J. Eberhardt, and C.C. Lochbaum. 1989. Formative design evaluation of Superbook. ACM Transactions on Information Systems (TOIS), 7(1):30–57.

S.M. Harabagiu, S.J. Maiorano, and M.A. Pasca. 2003. Open-domain textual question answering techniques. Natural Language Engineering, 9(03):231–267.

E. Hovy, U. Hermjakob, and D. Ravichandran. 2002. A question/answer typology with surface text patterns. Proceedings of the second international conference on Human Language Technology Research, pages 247–251.

B.J. Jansen and Spink. 2006. How are we searching the World Wide Web? A comparison of nine search engine transaction logs. Information Processing and Management, 42(1):248–263.

B.J. Jansen, A. Spink, and S. Koshman. 2007. Web searcher interaction with the Dogpile.com metasearch engine. Journal of the American Society for Information Science and Technology, 58(5):744–755.

M. Kellar, C. Watters, and M. Shepherd. 2007. A Goal-based Classification of Web Information Tasks. JASIST, 43(1).

J. Lin, D. Quan, V. Sinha, K. Bakshi, D. Huynh, B. Katz, and D.R. Karger. 2003. What Makes a Good Answer? The Role of Context in Question Answering. Human-Computer Interaction (INTERACT 2003).

G. Marchionini, R.W. White, and Marchionini. 2008. Find What You Need, Understand What You Find. Journal of Human-Computer Interaction (to appear).

T. Paek, S.T. Dumais, and R. Logan. 2004. WaveLens: A new view onto internet search results. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, pages 727–734.

J. Pedersen, D. Cutting, and J. Tukey. 1991. Snippet search: A single phrase approach to text access. Proceedings of the 1991 Joint Statistical Meetings.

G. Ramakrishnan and D. Paranjpe. 2004. Is question answering an acquired skill? Proceedings of the 13th international conference on World Wide Web, pages 111–120.

D.E. Rose and D. Levinson. 2004. Understanding user goals in web search. Proceedings of the 13th international conference on World Wide Web, pages 13–19.

D.E. Rose, D. Orr, and R.G.P. Kantamneni. 2007. Summary attributes and perceived search quality. Proceedings of the 16th international conference on World Wide Web, pages 1201–1202.

D. Shen, R. Pan, J.T. Sun, J.J. Pan, K. Wu, J. Yin, and Q. Yang. 2005. Q2C@UST: our winning solution to query classification in KDDCUP 2005. ACM SIGKDD Explorations Newsletter, 7(2):100–110.

Q. Su, D. Pavlov, J. Chow, and W. Baker. 2007. Internet-Scale Collection of Human-Reviewed Data. Proceedings of WWW 2007.

A. Tombros and M. Sanderson. 1998. Advantages of query biased summaries in information retrieval. Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval, pages 2–10.

E.M. Voorhees. 2003. Overview of the TREC 2003 Question Answering Track. Proceedings of the Twelfth Text REtrieval Conference (TREC 2003).

R.W. White, J. Jose, and I. Ruthven. 2003. A task-oriented study on the influencing effects of query-biased summarisation in web searching. Information Processing and Management, 39(5):707–733.
Proceedings of ACL-08: HLT, pages 710–718, Columbus, Ohio, USA, June 2008. © 2008 Association for Computational Linguistics

Using Conditional Random Fields to Extract Contexts and Answers of Questions from Online Forums

Shilin Ding †∗, Gao Cong §†, Chin-Yew Lin ‡, Xiaoyan Zhu †
† Department of Computer Science and Technology, Tsinghua University, Beijing, China
§ Department of Computer Science, Aalborg University, Denmark
‡ Microsoft Research Asia, Beijing, China
[email protected] [email protected] [email protected] [email protected]

∗ This work was done when Shilin Ding was a visiting student at Microsoft Research Asia.
† This work was done when Gao Cong worked as a researcher at Microsoft Research Asia.

Abstract

Online forum discussions often contain vast amounts of questions that are the focuses of discussions. Extracting contexts and answers together with the questions will yield not only a coherent forum summary but also a valuable QA knowledge base. In this paper, we propose a general framework based on Conditional Random Fields (CRFs) to detect the contexts and answers of questions from forum threads. We improve the basic framework by Skip-chain CRFs and 2D CRFs to better accommodate the features of forums for better performance. Experimental results show that our techniques are very promising.

1 Introduction

Forums are web virtual spaces where people can ask questions, answer questions and participate in discussions. The availability of vast amounts of thread discussions in forums has promoted increasing interests in knowledge acquisition and summarization for forum threads. A forum thread usually consists of an initiating post and a number of reply posts. The initiating post usually contains several questions and the reply posts usually contain answers to the questions and perhaps new questions. Forum participants are not physically co-present, and thus a reply may not happen immediately after questions are posted. The asynchronous nature and multiple participants make multiple questions and answers interweaved together, which makes it more difficult to summarize.

Figure 1: An example thread with question-context-answer annotated.
<context id=1>S1: Hi I am looking for a pet friendly hotel in Hong Kong because all of my family is going there for vacation. S2: my family has 2 sons and a dog.</context>
<question id=1>S3: Is there any recommended hotel near Sheung Wan or Tsing Sha Tsui?</question>
<context id=2,3>S4: We also plan to go shopping in Causeway Bay.</context>
<question id=2>S5: What's the traffic situation around those commercial areas?</question>
<question id=3>S6: Is it necessary to take a taxi?</question>
S7: Any information would be appreciated.
<answer qid=1>S8: The Comfort Lodge near Kowloon Park allows pet as I know, and usually fits well within normal budget. S9: It is also conveniently located, nearby the Kowloon railway station and subway.</answer>
<answer qid=2,3>S10: It's very crowd in those areas, so I recommend MTR in Causeway Bay because it is cheap to take you around</answer>

In this paper, we address the problem of detecting the contexts and answers from forum threads for the questions identified in the same threads. Figure 1 gives an example of a forum thread with questions, contexts and answers annotated. It contains three question sentences, S3, S5 and S6. Sentences S1 and S2 are contexts of question 1 (S3). Sentence S4 is the context of questions 2 and 3, but not 1. Sentence S8 is the answer to question 1.
(S4-S5-S10) is one example of a question-context-answer triple that we want to detect in the thread. As shown in the example, a forum question usually requires contextual information to provide background or constraints. Moreover, it sometimes needs contextual information to provide an explicit link to its answers. For example, S8 is an answer of question 1, but they cannot be linked with any common word. Instead, S8 shares the word pet with S1, which is a context of question 1, and thus S8 could be linked with question 1 through S1. We call such contextual information the context of a question in this paper.

A summary of forum threads in the form of question-context-answer can not only highlight the main content, but also provide a user-friendly organization of threads, which will make the access to forum information easier.

Another motivation for detecting contexts and answers of the questions in forum threads is that they could be used to enrich the knowledge base of community-based question answering (CQA) services such as Live QnA and Yahoo! Answers, where context is comparable with the question description while question corresponds to the question title. For example, there were about 700,000 questions in the Yahoo! Answers travel category as of January 2008. We extracted about 3,000,000 travel related questions from six online travel forums. One would expect that a CQA service with large QA data will attract more users to the service. To enrich the knowledge base, not only the answers, but also the contexts are critical; otherwise the answer to a question such as "How much is the taxi" would be useless without context in the database.

However, it is challenging to detect contexts and answers for questions in forum threads. We assume the questions have been identified in a forum thread using the approach in (Cong et al., 2008). Although identifying questions in a forum thread is also nontrivial, it is beyond the focus of this paper.

First, detecting contexts of a question is important and non-trivial. We found that 74% of questions in our corpus, which contains 1,064 questions from 579 forum threads about travel, need contexts. However, relative position information is far from adequate to solve the problem. For example, in our corpus 63% of sentences preceding questions are contexts, and they only represent 34% of all correct contexts. To effectively detect contexts, the dependency between sentences is important. For example, in Figure 1, both S1 and S2 are contexts of question 1. S1 could be labeled as context based on word similarity, but it is not easy to link S2 with the question directly. S1 and S2 are linked by the common word family, and thus S2 can be linked with question 1 through S1. The challenge here is how to model and utilize the dependency for context detection.

Second, it is difficult to link answers with questions. In forums, multiple questions and answers can be discussed in parallel and are interweaved together, while the reply relationship between posts is usually unavailable. To detect answers, we need to handle two kinds of dependencies. One is the dependency relationship between contexts and answers, which should be leveraged especially when questions alone do not provide sufficient information to find answers; the other is the dependency between answer candidates (similar to the sentence dependency described above). The challenge is how to model and utilize these two kinds of dependencies.
In this paper we propose a novel approach for detecting contexts and answers of the questions in forum threads. To our knowledge this is the first work on this problem. We make the following contributions:

First, we employ Linear Conditional Random Fields (CRFs) to identify contexts and answers, which can capture the relationships between contiguous sentences.

Second, we also found that context is very important for answer detection. To capture the dependency between contexts and answers, we introduce the Skip-chain CRF model for answer detection. We also extend the basic model to 2D CRFs to model the dependency between contiguous questions in a forum thread for context and answer identification.

Finally, we conducted experiments on forum data. Experimental results show that 1) Linear CRFs outperform SVM and decision tree in both context and answer detection; 2) Skip-chain CRFs outperform Linear CRFs for answer finding, which demonstrates that context improves answer finding; 3) the 2D CRF model improves the performance of Linear CRFs, and the combination of 2D CRFs and Skip-chain CRFs achieves better performance for context detection.

The rest of this paper is organized as follows: The next section discusses related work. Section 3 presents the proposed techniques. We evaluate our techniques in Section 4. Section 5 concludes this paper and discusses future work.

2 Related Work

There is some research on summarizing discussion threads and emails. Zhou and Hovy (2005) segmented internet relay chat, clustered segments into subtopics, and identified responding segments of the first segment in each sub-topic by assuming the first segment to be the focus. In (Nenkova and Bagga, 2003; Wan and McKeown, 2004; Rambow et al., 2004), email summaries were organized by extracting overview sentences as discussion issues. Carenini et al. (2007) leveraged both quotation relations and clue words for email summarization. In contrast, given a forum thread, we extract questions, their contexts, and their answers as summaries. Shrestha and McKeown (2004)'s work on email summarization is closer to our work. They used RIPPER as a classifier to detect interrogative questions and their answers and used the resulting question and answer pairs as summaries. However, it did not consider contexts of questions and the dependency between answer sentences.

We also note the existing work on extracting knowledge from discussion threads. Huang et al. (2007) used SVM to extract input-reply pairs from forums for chatbot knowledge. Feng et al. (2006a) used cosine similarity to match students' queries with reply posts for a discussion-bot. Feng et al. (2006b) identified the most important message in an online classroom discussion board. Our problem is quite different from the above work.

Detecting contexts for questions in forums is related to the context detection problem raised in the QA roadmap paper commissioned by ARDA (Burger et al., 2006). To our knowledge, none of the previous work addresses the problem of context detection. The method of finding follow-up questions (Yang et al., 2006) from the TREC context track could be adapted for context detection. However, the follow-up relationship is limited to questions while context is not. In our other work (Cong et al., 2008), we proposed a supervised approach for question detection and an unsupervised approach for answer detection, without considering context detection.

Extensive research has been done in question-answering, e.g.
(Berger et al., 2000; Jeon et al., 2005; Cui et al., 2005; Harabagiu and Hickl, 2006; Dang et al., 2007). They mainly focus on constructing answers for certain types of questions from a large document collection, and usually apply sophisticated linguistic analysis to both questions and the documents in the collection. Soricut and Brill (2006) used a statistical translation model to find the appropriate answers from their QA pair collections from FAQ pages for the posted question. In our scenario, we not only need to find answers for various types of questions in forum threads but also their contexts.

3 Context and Answer Detection

A question is a linguistic expression used by a questioner to request information in the form of an answer. The sentence containing the request focus is called the question. Contexts are the sentences containing constraints or background information for the question, while answers are the sentences that provide solutions. In this paper, we use sentences as the detection segment though the approach is applicable to other kinds of segments. Given a thread and a set of m detected questions {Q_i}_{i=1}^{m}, our task is to find the contexts and answers for each question. We first discuss using Linear CRFs for context and answer detection, and then extend the basic framework to Skip-chain CRFs and 2D CRFs to better model our problem. Finally, we will briefly introduce CRF models and the features that we used for the CRF model.

3.1 Using Linear CRFs

For ease of presentation, we focus on detecting contexts using Linear CRFs. The model could be easily extended to answer detection.

Context detection. As discussed in the Introduction, context detection cannot be trivially solved by position information (see Section 4.2 for details), and the dependency between sentences is important for context detection. Recall that in Figure 1, S2 could be labeled as a context of Q1 if we consider the dependency between S2 and S1, and that between S1 and Q1, while it is difficult to establish a connection between S2 and Q1 without S1. Table 1 shows that the correlation between the labels of contiguous sentences is significant (a quick check of the χ² statistic in its caption is sketched below). In other words, when a sentence Y_t's previous sentence Y_{t-1} is not a context (Y_{t-1} ≠ C), then it is very likely that Y_t is also not a context (Y_t ≠ C).

Table 1: Contingency table (χ² = 9,386, p-value < 0.001)
                 y_t = C    y_t ≠ C
  y_{t-1} = C        901      1,081
  y_{t-1} ≠ C      1,081     47,190

It is clear that the candidate contexts are not independent and there are strong dependency relationships between contiguous sentences in a thread. Therefore, a desirable model should be able to capture the dependency.

Context detection can be modeled as a classification problem. Traditional classification tools, e.g. SVM, can be employed, where each pair of question and candidate context will be treated as an instance. However, they cannot capture the dependency relationship between sentences. To this end, we proposed a general framework to detect contexts and answers based on Conditional Random Fields (CRFs) (Lafferty et al., 2001), which are able to model the sequential dependencies between contiguous nodes. A CRF is an undirected graphical model G of the conditional distribution P(Y|X), where Y are the random variables over the labels of the nodes that are globally conditioned on X, the random variables of the observations (see Section 3.4 for more about CRFs). The Linear CRF model has been successfully applied in NLP and text mining tasks (McCallum and Li, 2003; Sha and Pereira, 2003).
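The χ² statistic quoted in the caption of Table 1 can be re-derived directly from its four cell counts. A minimal check using scipy (our choice of library, not the authors'); note that Yates' continuity correction has to be switched off to reproduce the uncorrected value:

```python
from scipy.stats import chi2_contingency

# Cell counts from Table 1: rows are y_{t-1} = C / y_{t-1} != C,
# columns are y_t = C / y_t != C.
observed = [[901, 1081],
            [1081, 47190]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi2 = {chi2:.0f}, dof = {dof}, p = {p:.3g}")
# chi2 comes out near 9,386, matching the caption: the labels of
# contiguous sentences are far from independent.
```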
However, our problem cannot be modeled with Linear CRFs in the same way as other NLP tasks, where one node has a unique label. In our problem, each node (sentence) might have multiple labels since one sentence could be the context of multiple questions in a thread. Thus, it is difficult to find a solution that tags context sentences for all questions in a thread in a single pass. Here we assume that the questions in a given thread are independent and have already been found, and then we can label a thread with m questions one-by-one in m passes. In each pass, one question Q_i is selected as the focus and each other sentence in the thread will be labeled as a context C of Q_i or not using the Linear CRF model.

The graphical representation of Linear CRFs is shown in Figure 2(a). The linear-chain edges can capture the dependency between two contiguous nodes. The observation sequence x = <x_1, x_2, ..., x_t>, where t is the number of sentences in a thread, represents predictors (to be described in Section 3.5), and the tag sequence y = <y_1, ..., y_t>, where y_i ∈ {C, P}, determines whether a sentence is plain text P or a context C of question Q_i.

Answer detection. Answers usually appear in the posts after the post containing the question. There are also strong dependencies between contiguous answer segments. Thus, position and similarity information alone are not adequate here. To cope with the dependency between contiguous answer segments, the Linear CRF model is employed as in context detection.

3.2 Leveraging Context for Answer Detection Using Skip-chain CRFs

We observed that in our corpus 74% of questions lack constraints or background information, which are very useful to link questions and answers as discussed in the Introduction. Therefore, contexts should be leveraged to detect answers. The Linear CRF model can capture the dependency between contiguous sentences. However, it cannot capture the long distance dependency between contexts and answers.

One straightforward method of leveraging context is to detect contexts and answers in two phases, i.e. to first identify contexts, and then label answers using both the context and question information (e.g. the similarity between context and answer can be used as features in CRFs). The two-phase procedure, however, still cannot capture the non-local dependency between contexts and answers in a thread.

To model the long distance dependency between contexts and answers, we will use the Skip-chain CRF model to detect contexts and answers together. The Skip-chain CRF model has been applied to entity extraction and meeting summarization (Sutton and McCallum, 2006; Galley, 2006). The graphical representation of a Skip-chain CRF, given in Figure 2(b), consists of two types of edges: linear-chain edges (y_{t-1} to y_t) and skip-chain edges (y_i to y_j). Ideally, the skip-chain edges will establish the connection between candidate pairs with a high probability of being the context and answer of a question.

Figure 2: CRF Models. (a) Linear CRFs; (b) Skip-chain CRFs; (c) 2D CRFs.

Table 2: Contingency table (χ² = 615.8, p-value < 0.001)
               y_v = A    y_v ≠ A
  y_u = C        4,105      5,314
  y_u ≠ C        3,744      9,740

To introduce skip-chain edges between any pairs of non-contiguous sentences would be computationally expensive, and would also introduce noise. To make the cardinality and number of cliques in the graph manageable and also eliminate noisy edges, we would like to generate edges only for sentence pairs with a high possibility of being context and answer. This is achieved as follows.
Given a question Q_i in post P_j of a thread with n posts, its contexts usually occur within post P_j or before P_j, while answers appear in the posts after P_j. We will establish an edge between each candidate answer v and the one candidate context u in {P_k}_{k=1}^{j} such that they have the highest possibility of being a context-answer pair of question Q_i:

  u = argmax_{u ∈ {P_k}_{k=1}^{j}} sim(x_u, Q_i) · sim(x_v, {x_u, Q_i})

Here, we use the product of sim(x_u, Q_i) and sim(x_v, {x_u, Q_i}) to estimate the possibility of being a context-answer pair for (u, v), where sim(·, ·) is the semantic similarity calculated on WordNet as described in Section 3.5. Table 2 shows that y_u and y_v in the skip chain generated by our heuristics influence each other significantly. Skip-chain CRFs improve the performance of answer detection due to the introduced skip-chain edges that represent the joint probability conditioned on the question, which is exploited by the skip-chain feature function f(y_u, y_v, Q_i, x).

3.3 Using the 2D CRF Model

Both Linear CRFs and Skip-chain CRFs label the contexts and answers for each question in separate passes by assuming that the questions in a thread are independent. Actually the assumption does not hold in many cases. Let us look at an example. As in Figure 1, sentence S10 is an answer for both questions Q2 and Q3. S10 could be recognized as the answer of Q2 due to the shared words areas and Causeway Bay (in Q2's context, S4), but there is no direct relation between Q3 and S10. To label S10, we need to consider the dependency relation between Q2 and Q3. In other words, the question-answer relation between Q3 and S10 can be captured by a joint modeling of the dependency among S10, Q2 and Q3. The labels of the same sentence for two contiguous questions in a thread would be conditioned on the dependency relationship between the questions. Such a dependency cannot be captured by either Linear CRFs or Skip-chain CRFs.

To capture the dependency between contiguous questions, we employ 2D CRFs to help context and answer detection. The 2D CRF model is used in (Zhu et al., 2005) to model the neighborhood dependency in blocks within a web page. As shown in Figure 2(c), the 2D CRF models the labeling task for all questions in a thread. For each thread, there are m rows in the grid, where the ith row corresponds to one pass of the Linear CRF model (or Skip-chain model) which labels contexts and answers for question Q_i. The vertical edges in the figure represent the joint probability conditioned on the contiguous questions, which will be exploited by the 2D feature function f(y_{i,j}, y_{i+1,j}, Q_i, Q_{i+1}, x). Thus, the information generated in a single CRF chain can be propagated over the whole grid. In this way, context and answer detection for all questions in the thread can be modeled together.

3.4 Conditional Random Fields (CRFs)

The Linear, Skip-chain and 2D CRFs can be generalized as pairwise CRFs, which have two kinds of cliques in graph G: 1) node y_t and 2) edge (y_u, y_v). The joint probability is defined as:

  p(y|x) = (1/Z(x)) exp{ Σ_{k,t} λ_k f_k(y_t, x) + Σ_{k,(u,v)} μ_k g_k(y_u, y_v, x) }

where Z(x) is the normalization factor, f_k is a feature on nodes, g_k is a feature on the edge between u and v, and λ_k and μ_k are parameters. Linear CRFs are based on the first-order Markov assumption that contiguous nodes are dependent. The pairwise edges in Skip-chain CRFs represent the long distance dependency between the skipped nodes, while the ones in 2D CRFs represent the dependency between the neighboring nodes.
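To make the pairwise-CRF probability above concrete, the following toy computation enumerates every label sequence for a three-sentence thread with one node feature and one edge feature; the features and weights are invented for illustration and are far simpler than the real feature set of Section 3.5.

```python
import math
from itertools import product

# Toy linear-chain pairwise CRF over labels {C, P} for a 3-sentence thread.
# Node feature f_k: the sentence shares a word with the question and is labelled C.
# Edge feature g_k: two neighbouring sentences carry the same label.
shares_word = [True, True, False]   # invented observations x
lam, mu = 1.5, 0.8                  # invented weights lambda_k, mu_k

def score(y):
    node = sum(lam for t, label in enumerate(y) if label == "C" and shares_word[t])
    edge = sum(mu for t in range(1, len(y)) if y[t] == y[t - 1])
    return node + edge

sequences = list(product("CP", repeat=3))
Z = sum(math.exp(score(y)) for y in sequences)           # normalization factor Z(x)
probs = {y: math.exp(score(y)) / Z for y in sequences}   # p(y|x)
best = max(probs, key=probs.get)
print(best, round(probs[best], 3))
```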
Inference and Parameter Estimation. For Linear CRFs, dynamic programming is used to compute the maximum a posteriori (MAP) assignment of y given x. However, for more complicated graphs with cycles, exact inference needs the junction tree representation of the original graph and the algorithm is exponential in the treewidth. For fast inference, loopy Belief Propagation (Pearl, 1988) is implemented. Given the training data D = {x^{(i)}, y^{(i)}}_{i=1}^{n}, parameter estimation determines the parameters by maximizing the log-likelihood L_λ = Σ_{i=1}^{n} log p(y^{(i)}|x^{(i)}). In the Linear CRF model, dynamic programming and L-BFGS (limited memory Broyden-Fletcher-Goldfarb-Shanno) can be used to optimize the objective function L_λ, while for more complicated CRFs, loopy BP is used instead to calculate the marginal probability.

3.5 Features used in CRF models

The main features used in the Linear CRF models for context detection are listed in Table 3. The similarity features are to capture the word similarity and semantic similarity between candidate contexts and answers. The word similarity is based on the cosine similarity of TF/IDF weighted vectors. The semantic similarity between words is computed based on Wu and Palmer's measure (Wu and Palmer, 1994) using WordNet (Fellbaum, 1998).1 The similarity between contiguous sentences will be used to capture the dependency for CRFs. In addition, to bridge the lexical gaps between question and context, we learned the top-3 context terms for each question term from 300,000 question-description pairs obtained from Yahoo! Answers using mutual information (Berger et al., 2000) (the question description in Yahoo! Answers is comparable to contexts in forums), and then use them to expand the question and compute cosine similarity.

1 The semantic similarity between sentences is calculated as in (Yang et al., 2006).

Table 3: Features for Linear CRFs. Unless otherwise mentioned, we refer to features of the sentence whose label is to be predicted.
Similarity features:
  · Cosine similarity with the question
  · Similarity with the question using WordNet
  · Cosine similarity between contiguous sentences
  · Similarity between contiguous sentences using WordNet
  · Cosine similarity with the expanded question using the lexical matching words
Structural features:
  · The relative position to the current question
  · Is its author the same as that of the question?
  · Is it in the same paragraph as its previous sentence?
Discourse and lexical features:
  · The number of pronouns in the question
  · The presence of fillers, fluency devices (e.g. "uh", "ok")
  · The presence of acknowledgment tokens
  · The number of non-stopwords
  · Whether the question has a noun or not
  · Whether the question has a verb or not

The structural features of forums provide strong clues for contexts. For example, contexts of a question usually occur in the post containing the question or preceding posts. We extracted the discourse features from a question, such as the number of pronouns in the question. A more useful feature would be to find the entity in surrounding sentences referred to by a pronoun. We tried GATE (Cunningham et al., 2002) for anaphora resolution of the pronouns in questions, but the performance became worse with the feature, which is probably due to the difficulty of anaphora resolution in forum discourse. We also observed that questions often need context if the question does not contain a noun or a verb. In addition, we use similarity features between skip-chain sentences for Skip-chain CRFs and similarity features between questions for 2D CRFs.
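As a rough illustration of how per-sentence features of this kind feed a linear-chain CRF trained with L-BFGS, here is a sketch using the sklearn-crfsuite toolkit; the toolkit choice, the tiny feature set, and the toy thread are all assumptions of ours, and only the basic Linear CRF (not the Skip-chain or 2D variants) is covered.

```python
import sklearn_crfsuite

def features(thread, question, t):
    """A heavily truncated stand-in for the Table 3 feature set of sentence t."""
    sent = thread[t]
    prev = thread[t - 1] if t > 0 else ""
    q_words = set(question.lower().split())
    return {
        "overlap_with_question": len(q_words & set(sent.lower().split())),
        "overlap_with_prev": len(set(prev.lower().split()) & set(sent.lower().split())),
        "relative_position": t,
        "ends_with_question_mark": float(sent.strip().endswith("?")),
    }

# One toy thread labelled C (context) / P (plain) with respect to one question.
question = "Is there any recommended hotel near Sheung Wan?"
thread = [
    "Hi I am looking for a pet friendly hotel in Hong Kong.",
    "My family has 2 sons and a dog.",
    "Any information would be appreciated.",
]
X_train = [[features(thread, question, t) for t in range(len(thread))]]
y_train = [["C", "C", "P"]]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X_train, y_train)
print(crf.predict(X_train))
```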
4 Experiments

4.1 Experimental setup

Corpus. We obtained about 1 million threads from the TripAdvisor forum; we randomly selected 591 threads and removed 22 threads which have more than 40 sentences and 6 questions; the remaining 579 forum threads form our corpus.2 Each thread in our corpus contains at least two posts and on average each thread consists of 3.87 posts. Two annotators were asked to tag questions, their contexts, and answers in each thread. The kappa statistic for identifying questions is 0.96, for linking context and question given a question is 0.75, and for linking answer and question given a question is 0.69. We conducted experiments on both the union and the intersection of the two annotated data sets. The experimental results on both are qualitatively comparable. We only report results on the union data due to space limitations. The union data contains 1,064 questions, 1,458 contexts and 3,534 answers.

2 TripAdvisor (http://www.tripadvisor.com/ForumHome) is one of the most popular travel forums; the list of 579 urls is given in http://homepages.inf.ed.ac.uk/gcong/acl08/. Removing the 22 long threads can greatly reduce the training and test time.

Metrics. We calculated precision, recall, and F1-score for all tasks. All the experimental results are obtained through the average of 5 trials of 5-fold cross validation.

4.2 Experimental results

Linear CRFs for Context and Answer Detection. This experiment is to evaluate the Linear CRF model (Section 3.1) for context and answer detection by comparing it with SVM and C4.5 (Quinlan, 1993). For SVM, we use SVMlight (Joachims, 1999). We tried linear, polynomial and RBF kernels and report the results on the polynomial kernel using default parameters since it performs the best in the experiment. SVM and C4.5 use the same set of features as Linear CRFs. As shown in Table 4, the Linear CRF model outperforms SVM and C4.5 for both context and answer detection. The main reason for the improvement is that CRF models can capture the sequential dependency between segments in forums, as discussed in Section 3.1.

Table 4: Context and Answer Detection
  Model             Prec(%)   Rec(%)   F1(%)
  Context Detection
    SVM               75.27    68.80    71.32
    C4.5              70.16    64.30    67.21
    L-CRF             75.75    72.84    74.45
  Answer Detection
    SVM               73.31    47.35    57.52
    C4.5              65.36    46.55    54.37
    L-CRF             63.92    58.74    61.22

We next report a baseline for context detection using the previous sentences in the same post as the question, since contexts often occur in the question post or preceding posts. Similarly, we report a baseline for answer detection using the segments following a question as answers. The results given in Table 5 show that location information is far from adequate to detect contexts and answers.

Table 5: Using position information for detection
  Position          Prec(%)   Rec(%)   F1(%)
  Context Detection
    Previous One      63.69    34.29    44.58
    Previous All      43.48    76.41    55.42
  Answer Detection
    Following One     66.48    19.98    30.72
    Following All     31.99   100       48.48
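The position baselines of Table 5 amount to a few lines of code. The sketch below (with a made-up thread and gold labels) also shows the precision/recall/F1 bookkeeping; note how a "Following All" rule trivially reaches 100% recall at low precision, as in the table.

```python
def previous_one(question_idx, n_sentences):
    """'Previous One': the single sentence before the question is predicted as context."""
    return {question_idx - 1} if question_idx > 0 else set()

def following_all(question_idx, n_sentences):
    """'Following All': every sentence after the question is predicted as an answer."""
    return set(range(question_idx + 1, n_sentences))

def prf(predicted, gold):
    tp = len(predicted & gold)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy thread with 6 sentences: the question is sentence 2,
# gold contexts are sentences {0, 1}, gold answers are {4}.
print("Previous One:", prf(previous_one(2, 6), {0, 1}))
print("Following All:", prf(following_all(2, 6), {4}))
```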
This experiment is to evaluate the usefulness of contexts in answer detection, by adding the similarity between the context (obtained with different methods) and candidate answer as an extra feature for CRFs. Table 6 shows the impact of context on answer detection using Linear CRFs. Linear CRFs with contextual information perform better than those without context. L-CRF+context is close to that using real context, while it is better than CRFs using the previous sentence as context. The results clearly shows that contextual information greatly improves the performance of answer detection. Improved Models. This experiment is to evaluate the effectiveness of Skip-Chain CRFs (Section 3.2) and 2D CRFs (Section 3.3) for our tasks. The results are given in Table 7 and Table 8. In context detection, Skip-Chain CRFs have simi716 Model Prec(%) Rec(%) F1(%) L-CRF+Context 75.75 72.84 74.45 Skip-chain 74.18 74.90 74.42 2D 75.92 76.54 76.41 2D+Skip-chain 76.27 78.25 77.34 Table 7: Skip-chain and 2D CRFs for context detection lar results as Linear CRFs, i.e. the inter-dependency captured by the skip chains generated using the heuristics in Section 3.2 does not improve the context detection. The performance of Linear CRFs is improved in 2D CRFs (by 2%) and 2D+Skip-chain CRFs (by 3%) since they capture the dependency between contiguous questions. In answer detection, as expected, Skip-chain CRFs outperform L-CRF+context since Skip-chain CRFs can model the inter-dependency between contexts and answers while in L-CRF+context the context can only be reflected by the features on the observations. We also observed that 2D CRFs improve the performance of L-CRF+context due to the dependency between contiguous questions. In contrast with our expectation, the 2D+Skip-chain CRFs does not improve Skip-chain CRFs in terms of answer detection. The possible reason could be that the structure of the graph is very complicated and too many parameters need to be learned on our training data. Evaluating Features. We also evaluated the contributions of each category of features in Table 3 to context detection. We found that similarity features are the most important and structural feature the next. We also observed the same trend for answer detection. We omit the details here due to space limitation. As a summary, 1) our CRF model outperforms SVM and C4.5 for both context and answer detections; 2) context is very useful in answer detection; 3) the Skip-chain CRF method is effective in leveraging context for answer detection; and 4) 2D CRF model improves the performance of Linear CRFs for both context and answer detection. 5 Discussions and Conclusions We presented a new approach to detecting contexts and answers for questions in forums with good performance. We next discuss our experience not covered by the experiments, and future work. Model Prec(%) Rec(%) F1(%) L-CRF+context 65.51 63.13 64.06 Skip-chain 67.59 71.06 69.40 2D 65.77 68.17 67.34 2D+Skip-chain 66.90 70.56 68.89 Table 8: Skip-chain and 2D CRFs for answer detection Since contexts of questions are largely unexplored in previous work, we analyze the contexts in our corpus and classify them into three categories: 1) context contains the main content of question while question contains no constraint, e.g. “i will visit NY at Oct, looking for a cheap hotel but convenient. Any good suggestion? ”; 2) contexts explain or clarify part of the question, such as a definite noun phrase, e.g. ‘We are going on the Taste of Paris. 
Does anyone know if it is advisable to take a suitcase with us on the tour., where the first sentence is to describe the tour; and 3) contexts provide constraint or background for question that is syntactically complete, e.g. “We are interested in visiting the Great Wall(and flying from London). Can anyone recommend a tour operator.” In our corpus, about 26% questions do not need context, 12% questions need Type 1 context, 32% need Type 2 context and 30% Type 3. We found that our techniques often do not perform well on Type 3 questions. We observed that factoid questions, one of focuses in the TREC QA community, take less than 10% question in our corpus. It would be interesting to revisit QA techniques to process forum data. Other future work includes: 1) to summarize multiple threads using the triples extracted from individual threads. This could be done by clustering question-context-answer triples; 2) to use the traditional text summarization techniques to summarize the multiple answer segments; 3) to integrate the Question Answering techniques as features of our framework to further improve answer finding; 4) to reformulate questions using its context to generate more user-friendly questions for CQA services; and 5) to evaluate our techniques on more online forums in various domains. Acknowledgments We thank the anonymous reviewers for their detailed comments, and Ming Zhou and Young-In Song for their valuable suggestions in preparing the paper. 717 References A. Berger, R. Caruana, D. Cohn, D. Freitag, and V. Mittal. 2000. Bridging the lexical chasm: statistical approaches to answer-finding. In Proceedings of SIGIR. J. Burger, C. Cardie, V. Chaudhri, R. Gaizauskas, S. Harabagiu, D. Israel, C. Jacquemin, C. Lin, S. Maiorano, G. Miller, D. Moldovan, B. Ogden, J. Prager, E. Riloff, A. Singhal, R. Shrihari, T. Strzalkowski16, E. Voorhees, and R. Weishedel. 2006. Issues, tasks and program structures to roadmap research in question and answering (qna). ARAD: Advanced Research and Development Activity (US). G. Carenini, R. Ng, and X. Zhou. 2007. Summarizing email conversations with clue words. In Proceedings of WWW. G. Cong, L. Wang, C.Y. Lin, Y.I. Song, and Y. Sun. 2008. Finding question-answer pairs from online forums. In Proceedings of SIGIR. H. Cui, R. Sun, K. Li, M. Kan, and T. Chua. 2005. Question answering passage retrieval using dependency relations. In Proceedings of SIGIR. H. Cunningham, D. Maynard, K. Bontcheva, and V. Tablan. 2002. Gate: A framework and graphical development environment for robust nlp tools and applications. In Proceedings of ACL. H. Dang, J. Lin, and D. Kelly. 2007. Overview of the trec 2007 question answering track. In Proceedings of TREC. C. Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database (Language, Speech, and Communication). The MIT Press, May. D. Feng, E. Shaw, J. Kim, and E. Hovy. 2006a. An intelligent discussion-bot for answering student queries in threaded discussions. In Proceedings of IUI. D. Feng, E. Shaw, J. Kim, and E. Hovy. 2006b. Learning to detect conversation focus of threaded discussions. In Proceedings of HLT-NAACL. M. Galley. 2006. A skip-chain conditional random field for ranking meeting utterances by importance. In Proceedings of EMNLP. S. Harabagiu and A. Hickl. 2006. Methods for using textual entailment in open-domain question answering. In Proceedings of ACL. J. Huang, M. Zhou, and D. Yang. 2007. Extracting chatbot knowledge from online discussion forums. In Proceedings of IJCAI. J. Jeon, W. Croft, and J. Lee. 2005. 
Finding similar questions in large question and answer archives. In Proceedings of CIKM. T. Joachims. 1999. Making large-scale support vector machine learning practical. MIT Press, Cambridge, MA, USA. J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. A. McCallum and W. Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of CoNLL-2003. A. Nenkova and A. Bagga. 2003. Facilitating email thread access by extractive summary generation. In Proceedings of RANLP. J. Pearl. 1988. Probabilistic reasoning in intelligent systems: networks of plausible inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. J. Quinlan. 1993. C4.5: programs for machine learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA. O. Rambow, L. Shrestha, J. Chen, and C. Lauridsen. 2004. Summarizing email threads. In Proceedings of HLT-NAACL. F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In HLT-NAACL. L. Shrestha and K. McKeown. 2004. Detection of question-answer pairs in email conversations. In Proceedings of COLING. R. Soricut and E. Brill. 2006. Automatic question answering using the web: Beyond the Factoid. Information Retrieval, 9(2):191–206. C. Sutton and A. McCallum. 2006. An introduction to conditional random fields for relational learning. In Lise Getoor and Ben Taskar, editors, Introduction to Statistical Relational Learning. MIT Press. To appear. S. Wan and K. McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING. Z. Wu and M. S. Palmer. 1994. Verb semantics and lexical selection. In Proceedings of ACL. F. Yang, J. Feng, and G. Fabbrizio. 2006. A data driven approach to relevancy recognition for contextual question answering. In Proceedings of the Interactive Question Answering Workshop at HLT-NAACL 2006. L. Zhou and E. Hovy. 2005. Digesting virtual ”geek” culture: The summarization of technical internet relay chats. In Proceedings of ACL. J. Zhu, Z. Nie, J. Wen, B. Zhang, and W. Ma. 2005. 2d conditional random fields for web information extraction. In Proceedings of ICML. 718
Proceedings of ACL-08: HLT, pages 719–727, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Learning to Rank Answers on Large Online QA Collections Mihai Surdeanu, Massimiliano Ciaramita, Hugo Zaragoza Barcelona Media Innovation Center, Yahoo! Research Barcelona [email protected], {massi,hugo}@yahoo-inc.com Abstract This work describes an answer ranking engine for non-factoid questions built using a large online community-generated question-answer collection (Yahoo! Answers). We show how such collections may be used to effectively set up large supervised learning experiments. Furthermore we investigate a wide range of feature types, some exploiting NLP processors, and demonstrate that using them in combination leads to considerable improvements in accuracy. 1 Introduction The problem of Question Answering (QA) has received considerable attention in the past few years. Nevertheless, most of the work has focused on the task of factoid QA, where questions match short answers, usually in the form of named or numerical entities. Thanks to international evaluations organized by conferences such as the Text REtrieval Conference (TREC)1 or the Cross Language Evaluation Forum (CLEF) Workshop2, annotated corpora of questions and answers have become available for several languages, which has facilitated the development of robust machine learning models for the task. The situation is different once one moves beyond the task of factoid QA. Comparatively little research has focused on QA models for non-factoid questions such as causation, manner, or reason questions. Because virtually no training data is available for this problem, most automated systems train either 1http://trec.nist.gov 2http://www.clef-campaign.org Q: How do you quiet a squeaky door? A: Spray WD-40 directly onto the hinges of the door. Open and close the door several times. Remove hinges if the door still squeaks. Remove any rust, dirt or loose paint. Apply WD-40 to High removed hinges. Put the hinges back, Quality open and close door several times again. Q: How to extract html tags from an html Low documents with c++? Quality A: very carefully Table 1: Sample content from Yahoo! Answers. on small hand-annotated corpora built in house (Higashinaka and Isozaki, 2008) or on question-answer pairs harvested from Frequently Asked Questions (FAQ) lists or similar resources (Soricut and Brill, 2006). None of these situations is ideal: the cost of building the training corpus in the former setup is high; in the latter scenario the data tends to be domain-specific, hence unsuitable for the learning of open-domain models. On the other hand, recent years have seen an explosion of user-generated content (or social media). Of particular interest in our context are communitydriven question-answering sites, such as Yahoo! Answers3, where users answer questions posed by other users and best answers are selected manually either by the asker or by all the participants in the thread. The data generated by these sites has significant advantages over other web resources: (a) it has a high growth rate and it is already abundant; (b) it covers a large number of topics, hence it offers a better 3http://answers.yahoo.com 719 approximation of open-domain content; and (c) it is available for many languages. Community QA sites, similar to FAQs, provide large number of questionanswer pairs. 
Nevertheless, this data has a significant drawback: it has high variance of quality, i.e., answers range from very informative to completely irrelevant or even abusive. Table 1 shows some examples of both high and low quality content. In this paper we address the problem of answer ranking for non-factoid questions from social media content. Our research objectives focus on answering the following two questions: 1. Is it possible to learn an answer ranking model for complex questions from such noisy data? This is an interesting question because a positive answer indicates that a plethora of training data is readily available to QA researchers and system developers. 2. Which features are most useful in this scenario? Are similarity models as effective as models that learn question-to-answer transformations? Does syntactic and semantic information help? For generality, we focus only on textual features extracted from the answer text and we ignore all meta data information that is not generally available. Notice that we concentrate on one component of a possible social-media QA system. In addition to answer ranking, a complete system would have to search for similar questions already answered (Jeon et al., 2005), and rank content quality using ”social” features such as the authority of users (Jeon et al., 2006; Agichtein et al., 2008). This is not the focus of our work: here we investigate the problem of learning an answer ranking model capable of dealing with complex questions, using a large number of, possible noisy, question-answer pairs. By focusing exclusively on textual content we increase the portability of our approach to other collections where “social” features might not available, e.g., Web search. The paper is organized as follows. We describe our approach, including all the features explored for answer modeling, in Section 2. We introduce the corpus used in our empirical analysis in Section 3. We detail our experiments and analyze the results in Section 4. We overview related work in Section 5 and conclude the paper in Section 6. Answer Collection Answers Translation Features Web Correlation Features Features Similarity Answer Ranking Q Answer Retrieval (unsupervised) (discriminative learning) (class−conditional learning) Features Density/Frequency Figure 1: System architecture. 2 Approach The architecture of the QA system analyzed in the paper, summarized in Figure 1, follows that of the most successful TREC systems. The first component, answer retrieval, extracts a set of candidate answers A for question Q from a large collection of answers, C, provided by a communitygenerated question-answering site. The retrieval component uses a state-of-the-art information retrieval (IR) model to extract A given Q. Since our focus is on exploring the usability of the answer content, we do not perform retrieval by finding similar questions already answered (Jeon et al., 2005), i.e., our answer collection C contains only the site’s answers without the corresponding questions answered. The second component, answer ranking, assigns to each answer Ai ∈A a score that represents the likelihood that Ai is a correct answer for Q, and ranks all answers in descending order of these scores. The scoring function is a linear combination of four different classes of features (detailed in Section 2.2). This function is the focus of the paper. 
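As a rough illustration of this two-stage architecture, the sketch below first retrieves the top-N candidates with an IR scorer and then re-ranks them with a linear scoring function over extracted features; `retrieve`, `extract_features` and `weights` are placeholders for the BM25 retrieval component, the four feature groups, and the learned model respectively (illustrative names, not the authors' code).

```python
def answer_pipeline(question, collection, retrieve, extract_features, weights, n=15):
    """Two-stage QA pipeline: unsupervised retrieval followed by feature-based re-ranking."""
    # Stage 1: retrieve the top-N candidate answers A for question Q from collection C.
    candidates = sorted(collection, key=lambda a: retrieve(question, a), reverse=True)[:n]
    # Stage 2: score each candidate with a linear combination of its features and re-rank.
    scored = [(sum(w * f for w, f in zip(weights, extract_features(question, a))), a)
              for a in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [answer for _, answer in scored]
```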
To answer our first research objective we will compare the quality of the rankings provided by this component against the rankings generated by the IR model used for answer retrieval. To answer the second research objective we will analyze the contribution of the proposed feature set to this function. Again, since our interest is in investigating the utility of the answer textual content, we use only information extracted from the answer text when learning the scoring function. We do not use any meta information (e.g., answerer credibility, click counts, etc.) (Agichtein et al., 2008; Jeon et al., 2006). Our QA approach combines three types of machine learning methodologies (as highlighted in Figure 1): the answer retrieval component uses un720 supervised IR models, the answer ranking is implemented using discriminative learning, and finally, some of the ranking features are produced by question-to-answer translation models, which use class-conditional learning. 2.1 Ranking Model Learning with user-generated content can involve arbitrarily large amounts of data. For this reason we choose as a ranking algorithm the Perceptron which is both accurate and efficient and can be trained with online protocols. Specifically, we implement the ranking Perceptron proposed by Shen and Joshi (2005), which reduces the ranking problem to a binary classification problem. The general intuition is to exploit the pairwise preferences induced from the data by training on pairs of patterns, rather than independently on each pattern. Given a weight vector α, the score for a pattern x (a candidate answer) is simply the inner product between the pattern and the weight vector: fα(x) = ⟨x, α⟩ (1) However, the error function depends on pairwise scores. In training, for each pair (xi, xj) ∈A, the score fα(xi −xj) is computed; note that if f is an inner product fα(xi −xj) = fα(xi) −fα(xj). Given a margin function g(i, j) and a positive rate τ, if fα(xi −xj) ≤g(i, j)τ, an update is performed: αt+1 = αt + (xi −xj)τg(i, j) (2) By default we use g(i, j) = (1 i −1 j ), as a margin function, as suggested in (Shen and Joshi, 2005), and find τ empirically on development data. Given that there are only two possible ranks in our setting, this function only generates two possible values. For regularization purposes, we use as a final model the average of all Perceptron models posited during training (Freund and Schapire, 1999). 2.2 Features In the scoring model we explore a rich set of features inspired by several state-of-the-art QA systems. We investigate how such features can be adapted and combined for non-factoid answer ranking, and perform a comparative feature analysis using a significant amount of real-world data. For clarity, we group the features into four sets: features that model the similarity between questions and answers (FG1), features that encode question-to-answer transformations using a translation model (FG2), features that measure keyword density and frequency (FG3), and features that measure the correlation between question-answer pairs and other collections (FG4). Wherever applicable, we explore different syntactic and semantic representations of the textual content, e.g., extracting the dependency-based representation of the text or generalizing words to their WordNet supersenses (WNSS) (Ciaramita and Altun, 2006). We detail each of these feature groups next. 
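A minimal sketch of the ranking Perceptron of equations (1) and (2) is given below, assuming each training question contributes pairs in which x_i is the feature vector of the best answer (rank 1) and x_j that of a retrieved non-best answer (rank 2); variable names are illustrative.

```python
def train_ranking_perceptron(pairs, dim, tau, epochs=10):
    """pairs: list of (x_correct, x_incorrect) feature-vector pairs (rank 1 vs. rank 2).
    Implements equations (1)-(2) with g(i, j) = 1/i - 1/j and model averaging."""
    alpha = [0.0] * dim
    summed = [0.0] * dim
    n_models = 0                      # number of posited models, for averaging
    g = 1.0 / 1 - 1.0 / 2             # with only two ranks, the margin g(1, 2) is constant
    for _ in range(epochs):
        for x_i, x_j in pairs:
            diff = [a - b for a, b in zip(x_i, x_j)]
            score = sum(a * d for a, d in zip(alpha, diff))   # f_alpha(x_i - x_j)
            if score <= g * tau:
                alpha = [a + d * tau * g for a, d in zip(alpha, diff)]
            summed = [s + a for s, a in zip(summed, alpha)]
            n_models += 1
    return [s / n_models for s in summed]  # averaged Perceptron (Freund and Schapire, 1999)
```

At test time the candidate answers of a question are simply sorted by the inner product of their feature vectors with the averaged weight vector.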
FG1: Similarity Features We measure the similarity between a question Q and an answer A using the length-normalized BM25 formula (Robertson and Walker, 1997). We chose this similarity formula because, out of all the IR models we tried, it provided the best ranking at the output of the answer retrieval component. For completeness we also include in the feature set the value of the tf ·idf similarity measure. For both formulas we use the implementations available in the Terrier IR platform4 with the default parameters. To understand the contribution of our syntactic and semantic processors we compute the above similarity features for five different representations of the question and answer content: Words (W) - this is the traditional IR view where the text is seen as a bag of words. Dependencies (D) - the text is represented as a bag of binary syntactic dependencies. The relative syntactic processor is detailed in Section 3. Dependencies are fully lexicalized but unlabeled and we currently extract dependency paths of length 1, i.e., direct head-modifier relations (this setup achieved the best performance). Generalized dependencies (Dg) - same as above, but the words in dependencies are generalized to their WNSS, if detected. Bigrams (B) - the text is represented as a bag of bigrams (larger n-grams did not help). We added this view for a fair analysis of the above syntactic views. Generalized bigrams (Bg) - same as above, but the words are generalized to their WNSS. 4http://ir.dcs.gla.ac.uk/terrier 721 In all these representations we skip stop words and normalize all words to their WordNet lemmas. FG2: Translation Features Berger et al. (2000) showed that similarity-based models are doomed to perform poorly for QA because they fail to “bridge the lexical chasm” between questions and answers. One way to address this problem is to learn question-to-answer transformations using a translation model (Berger et al., 2000; Echihabi and Marcu, 2003; Soricut and Brill, 2006; Riezler et al., 2007). In our model, we incorporate this approach by adding the probability that the question Q is a translation of the answer A, P(Q|A), as a feature. This probability is computed using IBM’s Model 1 (Brown et al., 1993): P(Q|A) = Y q∈Q P(q|A) (3) P(q|A) = (1 −λ)Pml(q|A) + λPml(q|C) (4) Pml(q|A) = X a∈A (T(q|a)Pml(a|A)) (5) where the probability that the question term q is generated from answer A, P(q|A), is smoothed using the prior probability that the term q is generated from the entire collection of answers C, Pml(q|C). λ is the smoothing parameter. Pml(q|C) is computed using the maximum likelihood estimator. Pml(q|A) is computed as the sum of the probabilities that the question term q is a translation of an answer term a, T(q|a), weighted by the probability that a is generated from A. The translation table for T(q|a) is computed using the EM-based algorithm implemented in the GIZA++ toolkit5. Similarly with the previous feature group, we add translation-based features for the five different text representations introduced above. By moving beyond the bag-of-word representation we hope to learn relevant transformations of structures, e.g., from the “squeaky” →“door” dependency to “spray” ←“WD-40” in the Table 1 example. FG3: Density and Frequency Features These features measure the density and frequency of question terms in the answer text. Variants of these features were used previously for either answer or passage ranking in factoid QA (Moldovan et al., 1999; Harabagiu et al., 2000). 
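Returning to the translation features of FG2, equations (3)–(5) translate directly into a scoring routine once GIZA++ has produced the translation table T(q|a). The sketch below assumes the table and the collection unigram model are given as dictionaries and works in log space for numerical stability; the names are illustrative, not the authors' implementation.

```python
import math

def translation_log_prob(question, answer, T, p_collection, lam):
    """log P(Q|A) under IBM Model 1 with linear smoothing, following Eqs. (3)-(5).
    T[(q, a)] = T(q|a); p_collection[q] = P_ml(q|C); lam = smoothing parameter lambda."""
    log_p = 0.0
    for q in question:
        # P_ml(q|A) = sum over answer terms a of T(q|a) * P_ml(a|A),
        # where P_ml(a|A) is the relative frequency of a in the answer.
        p_ml_q_a = sum(T.get((q, a), 0.0) * answer.count(a) / len(answer)
                       for a in set(answer))
        p = (1.0 - lam) * p_ml_q_a + lam * p_collection.get(q, 0.0)
        log_p += math.log(p) if p > 0.0 else float("-inf")
    return log_p
```

The same routine applies unchanged to the other text representations (bigrams, dependencies, and their WNSS-generalized variants) by treating each bigram or dependency as a "term".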
5http://www.fjoch.com/GIZA++.html Same word sequence - computes the number of nonstop question words that are recognized in the same order in the answer. Answer span - the largest distance (in words) between two non-stop question words in the answer. Same sentence match - number of non-stop question terms matched in a single sentence in the answer. Overall match - number of non-stop question terms matched in the complete answer. These last two features are computed also for the other four text representations previously introduced (B, Bg, D, and Dg). Counting the number of matched dependencies is essentially a simplified tree kernel for QA (e.g., see (Moschitti et al., 2007)) matching only trees of depth 2. Experiments with full dependency tree kernels based on several variants of the convolution kernels of Collins and Duffy (2001) did not yield improvements. We conjecture that the mistakes of the syntactic parser may be amplified in tree kernels, which consider an exponential number of sub-trees. Informativeness - we model the amount of information contained in the answer by counting the number of non-stop nouns, verbs, and adjectives in the answer text that do not appear in the question. FG4: Web Correlation Features Previous work has shown that the redundancy of a large collection (e.g., the web) can be used for answer validation (Brill et al., 2001; Magnini et al., 2002). In the same spirit, we add features that measure the correlation between question-answer pairs and large external collections: Web correlation - we measure the correlation between the question-answer pair and the web using the Corrected Conditional Probability (CCP) formula of Magnini et al. (2002): CCP(Q, A) = hits(Q + A)/(hits(Q) hits(A)2/3) where hits returns the number of page hits from a search engine. When a query returns zero hits we iteratively relax it by dropping the keyword with the smallest priority. Keyword priorities are assigned using the heuristics of Moldovan et al. (1999). Query-log correlation - as in (Ciaramita et al., 2008) we also compute the correlation between questionanswer pairs and a search-engine query-log corpus of more than 7.5 million queries, which shares 722 roughly the same time stamp with the communitygenerated question-answer corpus. We compute the Pointwise Mutual Information (PMI) and Chi square (χ2) association measures between each questionanswer word pair in the query-log corpus. The largest and the average values are included as features, as well as the number of QA word pairs which appear in the top 10, 5, and 1 percentile of the PMI and χ2 word pair rankings. 3 The Corpus The corpus is extracted from a sample of the U.S. Yahoo! Answers logs. In this paper we focus on the subset of advice or “how to” questions due to their frequency and importance in social communities.6 To construct our corpus, we implemented the following successive filtering steps: Step 1: from the full corpus we keep only questions that match the regular expression: how (to|do|did|does|can|would|could|should) and have an answer selected as best either by the asker or by the participants in the thread. The outcome of this step is a set of 364,419 question-answer pairs. Step 2: from the above corpus we remove the questions and answers of obvious low quality. We implement this filter with a simple heuristic by keeping only questions and answers that have at least 4 words each, out of which at least 1 is a noun and at least 1 is a verb. 
This step filters out questions like “How to be excellent?” and answers such as “I don’t know”. The outcome of this step forms our answer collection C. C contains 142,627 question-answer pairs.7. Arguably, all these filters could be improved. For example, the first step can be replaced by a question classifier (Li and Roth, 2005). Similarly, the second step can be implemented with a statistical classifier that ranks the quality of the content using both the textual and non-textual information available in the database (Jeon et al., 2006; Agichtein et al., 2008). We plan to further investigate these issues which are not the main object of this work. 6Nevertheless, the approach proposed here is independent of the question type. We will explore answer ranking for other non-factoid question types in future work. 7The data will be available through the Yahoo! Webscope program ([email protected]). The data was processed as follows. The text was split at the sentence level, tokenized and PoS tagged, in the style of the Wall Street Journal Penn TreeBank (Marcus et al., 1993). Each word was morphologically simplified using the morphological functions of the WordNet library8. Sentences were annotated with WNSS categories, using the tagger of Ciaramita and Altun (2006)9, which annotates text with a 46-label tagset. These tags, defined by WordNet lexicographers, provide a broad semantic categorization for nouns and verbs and include labels for nouns such as food, animal, body and feeling, and for verbs labels such as communication, contact, and possession. Next, we parsed all sentences with the dependency parser of Attardi et al. (2007)10. It is important to realize that the output of all mentioned processing steps is noisy and contains plenty of mistakes, since the data has huge variability in terms of quality, style, genres, domains etc., and domain adaptation for the NLP tasks involved is still an open problem (Dredze et al., 2007). We used 60% of the questions for training, 20% for development, and 20% for test. The candidate answer set for a given question is composed by one positive example, i.e., its corresponding best answer, and as negative examples all the other answers retrieved in the top N by the retrieval component. 4 Experiments We evaluate our results using two measures: mean Precision at rank=1 (P@1) – i.e., the percentage of questions with the correct answer on the first position – and Mean Reciprocal Rank (MRR) – i.e., the score of a question is 1/k, where k is the position of the correct answer. We use as baseline the output of our answer retrieval component (Figure 1). This component uses the BM25 criterion, the highest performing IR model in our experiments. Table 2 lists the results obtained using this baseline and our best model (“Ranking” in the table) on the testing partition. Since we are interested in the performance of the ranking model, we evaluate on the subset of questions where the correct answer is retrieved by answer retrieval in the top N answers (similar to Ko et al. (2007)). 
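Both evaluation measures follow directly from the rank assigned to the correct answer of each question; a small sketch (illustrative, not the authors' evaluation script):

```python
def evaluate(ranks):
    """ranks: for each evaluated question, the 1-based position of the correct
    answer in the system ranking. Returns (MRR, P@1) as percentages."""
    mrr = sum(1.0 / k for k in ranks) / len(ranks)
    p_at_1 = sum(1 for k in ranks if k == 1) / len(ranks)
    return 100.0 * mrr, 100.0 * p_at_1

# e.g., correct answers ranked 1st, 3rd and 2nd for three questions:
# evaluate([1, 3, 2]) -> (61.11..., 33.33...)
```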
In the table we report 8http://wordnet.princeton.edu 9sourceforge.net/projects/supersensetag 10http://sourceforge.net/projects/desr 723 MRR P@1 N = 10 N = 15 N = 25 N = 50 N = 10 N = 15 N = 25 N = 50 recall@N 26.25% 29.04% 32.81% 38.09% 26.25% 29.04% 32.81% 38.09% Baseline 61.33 56.12 50.31 43.74 45.94 41.48 36.74 31.66 Ranking 68.72±0.01 63.84±0.01 57.76±0.07 50.72±0.01 54.22±0.01 49.59±0.03 43.98±0.09 37.99±0.01 Improvement +12.04% +13.75% +14.80% +15.95% +18.02% +19.55% +19.70% +19.99% Table 2: Overall results for the test partition. results for several N values. For completeness, we show the percentage of questions that match this criterion in the “recall@N” row. Our ranking model was tuned strictly on the development set (i.e., feature selection and parameters of the translation models). During training, the presentation of the training instances is randomized, which generates a randomized ranking algorithm. We exploit this property to estimate the variance in the results produced by each model and report the average result over 10 trials together with an estimate of the standard deviation. The baseline result shows that, for N = 15, BM25 alone can retrieve in first rank 41% of the correct answers, and MRR tells us that the correct answer is often found within the first three answers (this is not so surprising if we remember that in this configuration only questions with the correct answer in the first 15 were kept for the experiment). The baseline results are interesting because they indicate that the problem is not hopelessly hard, but it is far from trivial. In principle, we see much room for improvement over bag-of-word methods. Next we see that learning a weighted combination of features yields consistently marked improvements: for example, for N = 15, the best model yields a 19% relative improvement in P@1 and 14% in MRR. More importantly, the results indicate that the model learned is stable: even though for the model analyzed in Table 2 we used N = 15 in training, we measure approximately the same relative improvement as N increases during evaluation. These results provide robust evidence that: (a) we can use publicly available online QA collections to investigate features for answer ranking without the need for costly human evaluation, (b) we can exploit large and noisy online QA collections to improve the accuracy of answer ranking systems and (c) readily available and scalable NLP technology can be used Iter. Feature Set MRR P@1 0 BM25(W) 56.06 41.12% 1 + translation(Bg) 61.13 46.24% 2 + overall match(D) 62.50 48.34% 3 + translation(W) 63.00 49.08% 4 + query-log avg(χ2) 63.50 49.63% 5 + answer span normalized by A size 63.71 49.84% 6 + query-log max(PMI) 63.87 50.09% 7 + same word sequence 63.99 50.23% 8 + translation(B) 64.03 50.30% 9 + tfidf(W) 64.08 50.42% 10 + same sentence match(W) 64.10 50.42% 11 + informativeness: verb count 64.18 50.36% 12 + tfidf(B) 64.22 50.36% 13 + same word sequence normalized by Q size 64.33 50.54% 14 + query-log max(χ2) 64.46 50.66% 15 + same sentence match(W) normalized by Q size 64.55 50.78% 16 + query-log avg(PMI) 64.60 50.88% 17 + overall match(W) 64.65 50.91% Table 3: Summary of the model selection process. to improve lexical matching and translation models. In the remaining of this section we analyze the performance of the different features. Table 3 summarizes the outcome of our automatic greedy feature selection process on the development set. 
Where applicable, we show within parentheses the text representation for the corresponding feature. The process is initialized with a single feature that replicates the baseline model (BM25 applied to the bag-of-words (W) representation). The algorithm incrementally adds to the feature set the feature that provides the highest MRR improvement in the development partition. The process stops when no features yield any improvement. The table shows that, while the features selected span all the four feature groups introduced, the lion’s share is taken by the translation features: approximately 60% of the MRR 724 W B Bg D Dg W + W + W + B + W + B + Bg B B + Bg Bg + D D + Dg FG1 (Similarity) 0 +1.06 -2.01 +0.84 -1.75 +1.06 +1.06 +1.06 +1.06 FG2 (Translation) +4.95 +4.73 +5.06 +4.63 +4.66 +5.80 +6.01 +6.36 +6.36 FG3 (Frequency) +2.24 +2.33 +2.39 +2.27 +2.41 +3.56 +3.56 +3.62 +3.62 Table 4: Contribution of NLP processors. Scores are MRR improvements on the development set. improvement is achieved by these features. The frequency/density features are responsible for approximately 23% of the improvement. The rest is due to the query-log correlation features. This indicates that, even though translation models are the most useful, it is worth exploring approaches that combine several strategies for answer ranking. Note that if some features do not appear in Table 3 it does not necessarily mean that they are useless. In some cases such features are highly correlated with features previously selected, which already exploited their signal. For example, most similarity features (FG1) are correlated. Because BM25(W) is part of the baseline model, the selection process chooses another FG1 feature only much later (iteration 9) when the model is significantly changed. On the other hand, some features do not provide a useful signal at all. A notable example in this class is the web-based CCP feature, which was designed originally for factoid answer validation and does not adapt well to our problem. Because the length of non-factoid answers is typically significantly larger than in the factoid QA task, we have to discard a large part of the query when computing hits(Q+A) to reach non-zero counts. This means that the final hit counts, hence the CCP value, are generally uncorrelated with the original (Q,A) tuple. One interesting observation is that the first two features chosen by our model selection process use information from the NLP processors. The first chosen feature is the translation probability computed between the Bg question and answer representations (bigrams with words generalized to their WNSS tags). The second feature selected measures the number of syntactic dependencies from the question that are matched in the answer. These results provide empirical evidence that coarse semantic disambiguation and syntactic parsing have a positive contribution to non-factoid QA, even in broad-coverage noisy settings based on Web data. The above observation deserves a more detailed analysis. Table 4 shows the performance of our first three feature groups when they are applied to each of the five text representations or incremental combinations of representations. For each model corresponding to a table cell we use only the features from the corresponding feature group and representation to avoid the correlation with features from other groups. We generate each best model using the same feature selection process described above. 
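The greedy procedure behind Tables 3 and 4 can be sketched as forward selection over development-set MRR; `train_and_score` stands in for training the ranking Perceptron with a candidate feature set and evaluating it on the development partition (illustrative names).

```python
def greedy_feature_selection(all_features, baseline, train_and_score):
    """Forward selection: start from the baseline feature set and repeatedly add the
    single feature giving the largest MRR gain on development data; stop when no
    candidate improves the score."""
    selected = list(baseline)                 # e.g., [BM25(W)]
    best_mrr = train_and_score(selected)
    improved = True
    while improved:
        improved = False
        candidates = [f for f in all_features if f not in selected]
        scored = [(train_and_score(selected + [f]), f) for f in candidates]
        if scored:
            mrr, best_feature = max(scored, key=lambda pair: pair[0])
            if mrr > best_mrr:
                selected.append(best_feature)
                best_mrr = mrr
                improved = True
    return selected, best_mrr
```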
The left part of Table 4 shows that, generally, the models using representations that include the output of our NLP processors (Bg, D and Dg) improve over the baseline (FG1 and W).11 However, comparable improvements can be obtained with the simpler bigram representation (B). This indicates that, in terms of individual contributions, our NLP processors can be approximated with simpler n-gram models in this task. Hence, is it fair to say that syntactic and semantic analysis is useful for such Web QA tasks? While the above analysis seems to suggest a negative answer, the right-hand side of Table 4 tells a more interesting story. It shows that the NLP analysis provides complementary information to the ngram-based models. The best models for the FG2 and FG3 feature groups are obtained when combining the n-gram representations with the representations that use the output of the NLP processors (W + B + Bg + D). The improvements are relatively small, but remarkable (e.g., see FG2) if we take into account the significant scale of the evaluation. This observation correlates well with the analysis shown in Table 3, which shows that features using semantic (Bg) and syntactic (D) representations contribute the most on top of the IR model (BM25(W)). 11The exception to this rule are the models FG1(Bg) and FG1(Dg). This is caused by the fact that the BM25 formula is less forgiving with errors of the NLP processors (due to the high idf scores assigned to bigrams and dependencies), and the WNSS tagger is the least robust component in our pipeline. 725 5 Related Work Content from community-built question-answer sites can be retrieved by searching for similar questions already answered (Jeon et al., 2005) and ranked using meta-data information like answerer authority (Jeon et al., 2006; Agichtein et al., 2008). Here we show that the answer text can be successfully used to improve answer ranking quality. Our method is complementary to the above approaches. In fact, it is likely that an optimal retrieval engine from social media should combine all these three methodologies. Moreover, our approach might have applications outside of social media (e.g., for opendomain web-based QA), because the ranking model built is based only on open-domain knowledge and the analysis of textual content. In the QA literature, answer ranking for nonfactoid questions has typically been performed by learning question-to-answer transformations, either using translation models (Berger et al., 2000; Soricut and Brill, 2006) or by exploiting the redundancy of the Web (Agichtein et al., 2001). Girju (2003) extracts non-factoid answers by searching for certain semantic structures, e.g., causation relations as answers to causation questions. In this paper we combine several methodologies, including the above, into a single model. This approach allowed us to perform a systematic feature analysis on a large-scale real-world corpus and a comprehensive feature set. Recent work has showed that structured retrieval improves answer ranking for factoid questions: Bilotti et al. (2007) showed that matching predicateargument frames constructed from the question and the expected answer types improves answer ranking. Cui et al. (2005) learned transformations of dependency paths from questions to answers to improve passage ranking. However, both approaches use similarity models at their core because they require the matching of the lexical elements in the search structures. 
On the other hand, our approach allows the learning of full transformations from question structures to answer structures using translation models applied to different text representations. Our answer ranking framework is closest in spirit to the system of Ko et al. (2007) or Higashinaka et al. (2008). However, the former was applied only to factoid QA and both are limited to similarity, redundancy and gazetteer-based features. Our model uses a larger feature set that includes correlation and transformation-based features and five different content representations. Our evaluation is also carried out on a larger scale. Our work is also related to that of Riezler et al. (2007) where SMT-based query expansion methods are used on data from FAQ pages. 6 Conclusions In this work we described an answer ranking engine for non-factoid questions built using a large community-generated question-answer collection. On one hand, this study shows that we can effectively exploit large amounts of available Web data to do research on NLP for non-factoid QA systems, without any annotation or evaluation cost. This provides an excellent framework for large-scale experimentation with various models that otherwise might be hard to understand or evaluate. On the other hand, we expect the outcome of this process to help several applications, such as open-domain QA on the Web and retrieval from social media. For example, on the Web our ranking system could be combined with a passage retrieval system to form a QA system for complex questions. On social media, our system should be combined with a component that searches for similar questions already answered; this output can possibly be filtered further by a content-quality module that explores “social” features such as the authority of users, etc. We show that the best ranking performance is obtained when several strategies are combined into a single model. We obtain the best results when similarity models are aggregated with features that model question-to-answer transformations, frequency and density of content, and correlation of QA pairs with external collections. While the features that model question-to-answer transformations provide most benefits, we show that the combination is crucial for improvement. Lastly, we show that syntactic dependency parsing and coarse semantic disambiguation yield a small, yet statistically significant performance increase on top of the traditional bag-of-words and n-gram representation. We obtain these results using only off-the-shelf NLP processors that were not adapted in any way for our task. 726 References G. Attardi, F. Dell’Orletta, M. Simi, A. Chanev and M. Ciaramita. 2007. Multilingual Dependency Parsing and Domain Adaptation using DeSR. Proc. of CoNLL Shared Task Session of EMNLP-CoNLL 2007. E. Agichtein, C. Castillo, D. Donato, A. Gionis, and G. Mishne. 2008. Finding High-Quality Content in Social Media, with an Application to Community-based Question Answering. Proc. of WSDM. E. Agichtein, S. Lawrence, and L. Gravano. 2001. Learning Search Engine Specific Query Transformations for Question Answering. Proc. of WWW. A. Berger, R. Caruana, D. Cohn, D. Freytag, and V. Mittal. 2000. Bridging the Lexical Chasm: Statistical Approaches to Answer Finding. Proc. of SIGIR. M. Bilotti, P. Ogilvie, J. Callan, and E. Nyberg. 2007. Structured Retrieval for Question Answering. Proc. of SIGIR. E. Brill, J. Lin, M. Banko, S. Dumais, and A. Ng. 2001. Data-Intensive Question Answering. Proc. of TREC. P. Brown, S. Della Pietra, V. 
Della Pietra, R. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2). M. Ciaramita and Y. Altun. 2006. Broad Coverage Sense Disambiguation and Information Extraction with a Supersense Sequence Tagger. Proc. of EMNLP. M. Ciaramita, V. Murdock and V. Plachouras. 2008. Semantic Associations for Contextual Advertising. 2008. Journal of Electronic Commerce Research - Special Issue on Online Advertising and Sponsored Search, 9(1), pp.1-15. M. Collins and N. Duffy. 2001. Convolution Kernels for Natural Language. Proc. of NIPS 2001. H. Cui, R. Sun, K. Li, M. Kan, and T. Chua. 2005. Question Answering Passage Retrieval Using Dependency Relations. Proc. of SIGIR. M. Dredze, J. Blitzer, P. Pratim Talukdar, K. Ganchev, J. Graca, and F. Pereira. 2007. Frustratingly Hard Domain Adaptation for Parsing. In Proc. of EMNLPCoNLL 2007 Shared Task. A. Echihabi and D. Marcu. 2003. A Noisy-Channel Approach to Question Answering. Proc. of ACL. Y. Freund and R.E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37, pp. 277-296. R. Girju. 2003. Automatic Detection of Causal Relations for Question Answering. Proc. of ACL, Workshop on Multilingual Summarization and Question Answering. S. Harabagiu, D. Moldovan, M. Pasca, R. Mihalcea, M. Surdeanu, R. Bunescu, R. Girju, V. Rus, and P. Morarescu. 2000. Falcon: Boosting Knowledge for Answer Engines. Proc. of TREC. R. Higashinaka and H. Isozaki. 2008. Corpus-based Question Answering for why-Questions. Proc. of IJCNLP. J. Jeon, W. B. Croft, and J. H. Lee. 2005. Finding Similar Questions in Large Question and Answer Archives. Proc. of CIKM. J. Jeon, W. B. Croft, J. H. Lee, and S. Park. 2006. A Framework to Predict the Quality of Answers with Non-Textual Features. Proc. of SIGIR. J. Ko, T. Mitamura, and E. Nyberg. 2007. Languageindependent Probabilistic Answer Ranking. for Question Answering. Proc. of ACL. X. Li and D. Roth. 2005. Learning Question Classifiers: The Role of Semantic Information. Natural Language Engineering. B. Magnini, M. Negri, R. Prevete, and H. Tanev. 2002. Comparing Statistical and Content-Based Techniques for Answer Validation on the Web. Proc. of the VIII Convegno AI*IA. M.P. Marcus, B. Santorini and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn TreeBank. Computational Linguistics, 19(2), pp. 313-330. D. Moldovan, S. Harabagiu, M. Pasca, R. Mihalcea, R. Goodrum, R. Girju, and V. Rus. 1999. LASSO - A Tool for Surfing the Answer Net. Proc. of TREC. A. Moschitti, S. Quarteroni, R. Basili and S. Manandhar. 2007. Exploiting Syntactic and Shallow Semantic Kernels for Question/Answer Classification. Proc. of ACL. S. Robertson and S. Walker. 1997. On relevance Weights with Little Relevance Information. Proc. of SIGIR. R. Soricut and E. Brill. 2006. Automatic Question Answering Using the Web: Beyond the Factoid. Journal of Information Retrieval - Special Issue on Web Information Retrieval, 9(2). L. Shen and A. Joshi. 2005. Ranking and Reranking with Perceptron, Machine Learning. Special Issue on Learning in Speech and Language Technologies, 60(13), pp. 73-96. S. Riezler, A. Vasserman, I. Tsochantaridis, V. Mittal and Y. Liu. 2007. Statistical Machine Translation for Query Expansion in Answer Retrieval. In Proc. of ACL. 727
Proceedings of ACL-08: HLT, pages 728–736, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Unsupervised Lexicon-Based Resolution of Unknown Words for Full Morphological Analysis Meni Adler and Yoav Goldberg and David Gabay and Michael Elhadad Ben Gurion University of the Negev Department of Computer Science∗ POB 653 Be’er Sheva, 84105, Israel {adlerm,goldberg,gabayd,elhadad}@cs.bgu.ac.il Abstract Morphological disambiguation proceeds in 2 stages: (1) an analyzer provides all possible analyses for a given token and (2) a stochastic disambiguation module picks the most likely analysis in context. When the analyzer does not recognize a given token, we hit the problem of unknowns. In large scale corpora, unknowns appear at a rate of 5 to 10% (depending on the genre and the maturity of the lexicon). We address the task of computing the distribution p(t|w) for unknown words for full morphological disambiguation in Hebrew. We introduce a novel algorithm that is language independent: it exploits a maximum entropy letters model trained over the known words observed in the corpus and the distribution of the unknown words in known tag contexts, through iterative approximation. The algorithm achieves 30% error reduction on disambiguation of unknown words over a competitive baseline (to a level of 70% accurate full disambiguation of unknown words). We have also verified that taking advantage of a strong language-specific model of morphological patterns provides the same level of disambiguation. The algorithm we have developed exploits distributional information latent in a wide-coverage lexicon and large quantities of unlabeled data. ∗This work is supported in part by the Lynn and William Frankel Center for Computer Science. 1 Introduction The term unknowns denotes tokens in a text that cannot be resolved in a given lexicon. For the task of full morphological analysis, the lexicon must provide all possible morphological analyses for any given token. In this case, unknown tokens can be categorized into two classes of missing information: unknown tokens are not recognized at all by the lexicon, and unknown analyses, where the set of analyses for a lexeme does not contain the correct analysis for a given token. Despite efforts on improving the underlying lexicon, unknowns typically represent 5% to 10% of the number of tokens in large-scale corpora. The alternative to continuously investing manual effort in improving the lexicon is to design methods to learn possible analyses for unknowns from observable features: their letter structure and their context. In this paper, we investigate the characteristics of Hebrew unknowns for full morphological analysis, and propose a new method for handling such unavoidable lack of information. Our method generates a distribution of possible analyses for unknowns. In our evaluation, these learned distributions include the correct analysis for unknown words in 85% of the cases, contributing an error reduction of over 30% over a competitive baseline for the overall task of full morphological analysis in Hebrew. The task of a morphological analyzer is to produce all possible analyses for a given token. In Hebrew, the analysis for each token is of the form lexeme-and-features1: lemma, affixes, lexical cate1In contrast to the prefix-stem-suffix analysis format of 728 gory (POS), and a set of inflection properties (according to the POS) – gender, number, person, status and tense. 
In this work, we refer to the morphological analyzer of MILA – the Knowledge Center for Processing Hebrew2 (hereafter KC analyzer). It is a synthetic analyzer, composed of two data resources – a lexicon of about 2,400 lexemes, and a set of generation rules (see (Adler, 2007, Section 4.2)). In addition, we use an unlabeled text corpus, composed of stories taken from three Hebrew daily news papers (Aruts 7, Haaretz, The Marker), of 42M tokens. We observed 3,561 different composite tags (e.g., noun-sing-fem-prepPrefix:be) over this corpus. These 3,561 tags form the large tagset over which we train our learner. On the one hand, this tagset is much larger than the largest tagset used in English (from 17 tags in most unsupervised POS tagging experiments, to the 46 tags of the WSJ corpus and the about 150 tags of the LOB corpus). On the other hand, our tagset is intrinsically factored as a set of dependent sub-features, which we explicitly represent. The task we address in this paper is morphological disambiguation: given a sentence, obtain the list of all possible analyses for each word from the analyzer, and disambiguate each word in context. On average, each token in the 42M corpus is given 2.7 possible analyses by the analyzer (much higher than the average 1.41 POS tag ambiguity reported in English (Dermatas and Kokkinakis, 1995)). In previous work, we report disambiguation rates of 89% for full morphological disambiguation (using an unsupervised EM-HMM model) and 92.5% for part of speech and segmentation (without assigning all the inflectional features of the words). In order to estimate the importance of unknowns in Hebrew, we analyze tokens in several aspects: (1) the number of unknown tokens, as observed on the corpus of 42M tokens; (2) a manual classification of a sample of 10K unknown token types out of the 200K unknown types identified in the corpus; (3) the number of unknown analyses, based on an annotated corpus of 200K tokens, and their classification. About 4.5% of the 42M token instances in the Buckwalter’s Arabic analyzer (2004), which looks for any legal combination of prefix-stem-suffix, but does not provide full morphological features such as gender, number, case etc. 2http://mila.cs.technion.ac.il.html training corpus were unknown tokens (45% of the 450K token types). For less edited text, such as random text sampled from the Web, the percentage is much higher – about 7.5%. In order to classify these unknown tokens, we sampled 10K unknown token types and examined them manually. The classification of these tokens with their distribution is shown in Table 13. As can be seen, there are two main classes of unknown token types: Neologisms (32%) and Proper nouns (48%), which cover about 80% of the unknown token instances. The POS distribution of the unknown tokens of our annotated corpus is shown in Table 2. As expected, most unknowns are open class words: proper names, nouns or adjectives. Regarding unknown analyses, in our annotated corpus, we found 3% of the 100K token instances were missing the correct analysis in the lexicon (3.65% of the token types). The POS distribution of the unknown analyses is listed in Table 2. The high rate of unknown analyses for prepositions at about 3% is a specific phenomenon in Hebrew, where prepositions are often prefixes agglutinated to the first word of the noun phrase they head. We observe the very low rate of unknown verbs (2%) – which are well marked morphologically in Hebrew, and where the rate of neologism introduction seems quite low. 
This evidence illustrates the need for resolution of unknowns: The naive policy of selecting ‘proper name’ for all unknowns will cover only half of the errors caused by unknown tokens, i.e., 30% of the whole unknown tokens and analyses. The other 70% of the unknowns ( 5.3% of the words in the text in our experiments) will be assigned a wrong tag. As a result of this observation, our strategy is to focus on full morphological analysis for unknown tokens and apply a proper name classifier for unknown analyses and unknown tokens. In this paper, we investigate various methods for achieving full morphological analysis distribution for unknown tokens. The methods are not based on an annotated corpus, nor on hand-crafted rules, but instead exploit the distribution of words in an available lexicon and the letter similarity of the unknown words with known words. 3Transcription according to Ornan (2002) 729 Category Examples Distribution Types Instances Proper names ’asulin (family name) oileq` ’a’udi (Audi) ice`` 40% 48% Neologisms ’agabi (incidental) iab` tizmur (orchestration) xenfz 30% 32% Abbreviation mz”p (DIFS) t"fn kb”t (security officer) h"aw 2.4% 7.8% Foreign presentacyah (presentation) divhpfxt ’a’ut (out) he`` right 3.8% 5.8% Wrong spelling ’abibba’ah. ronah (springatlast) dpexg`aaia` ’idiqacyot (idication) zeivwici` ryuˇsalaim (Rejusalem) milyeix 1.2% 4% Alternative spelling ’opyynim (typical) mipiite` priwwilegyah (privilege ) diblieeixt 3.5% 3% Tokenization ha”sap (the”threshold) sq"d ‘al/17 (on/17) 71/lr 8% 2% Table 1: Unknown Hebrew token categories and distribution. Part of Speech Unknown Tokens Unknown Analyses Total Proper name 31.8% 24.4% 56.2% Noun 12.6% 1.6% 14.2% Adjective 7.1% 1.7% 8.8% Junk 3.0% 1.3% 4.3% Numeral 1.1% 2.3% 3.4% Preposition 0.3% 2.8% 3.1% Verb 1.8% 0.4% 2.2% Adverb 0.9% 0.9% 1.8% Participle 0.4% 0.8% 1.2% Copula / 0.8% 0.8% Quantifier 0.3% 0.4% 0.7% Modal 0.3% 0.4% 0.7% Conjunction 0.1% 0.5% 0.6% Negation / 0.6% 0.6% Foreign 0.2% 0.4% 0.6% Interrogative 0.1% 0.4% 0.5% Prefix 0.3% 0.2% 0.5% Pronoun / 0.5% 0.5% Total 60% 40% 100% Table 2: Unknowns Hebrew POS Distribution. 730 2 Previous Work Most of the work that dealt with unknowns in the last decade focused on unknown tokens (OOV). A naive approach would assign all possible analyses for each unknown token with uniform distribution, and continue disambiguation on the basis of a learned model with this initial distribution. The performance of a tagger with such a policy is actually poor: there are dozens of tags in the tagset (3,561 in the case of Hebrew full morphological disambiguation) and only a few of them may match a given token. Several heuristics were developed to reduce the possibility space and to assign a distribution for the remaining analyses. Weischedel et al. (1993) combine several heuristics in order to estimate the token generation probability according to various types of information – such as the characteristics of particular tags with respect to unknown tokens (basically the distribution shown in Table 2), and simple spelling features: capitalization, presence of hyphens and specific suffixes. An accuracy of 85% in resolving unknown tokens was reported. Dermatas and Kokkinakis (1995) suggested a method for guessing unknown tokens based on the distribution of the hapax legomenon, and reported an accuracy of 66% for English. Mikheev (1997) suggested a guessing-rule technique, based on prefix morphological rules, suffix morphological rules, and ending-guessing rules. 
These rules are learned automatically from raw text. They reported a tagging accuracy of about 88%. Thede and Harper (1999) extended a second-order HMM model with a C = ck,i matrix, in order to encode the probability of a token with a suffix sk to be generated by a tag ti. An accuracy of about 85% was reported. Nakagawa (2004) combine word-level and character-level information for Chinese and Japanese word segmentation. At the word level, a segmented word is attached to a POS, where the character model is based on the observed characters and their classification: Begin of word, In the middle of a word, End of word, the character is a word itself S. They apply Baum-Welch training over a segmented corpus, where the segmentation of each word and its character classification is observed, and the POS tagging is ambiguous. The segmentation (of all words in a given sentence) and the POS tagging (of the known words) is based on a Viterbi search over a lattice composed of all possible word segmentations and the possible classifications of all observed characters. Their experimental results show that the method achieves high accuracy over state-of-the-art methods for Chinese and Japanese word segmentation. Hebrew also suffers from ambiguous segmentation of agglutinated tokens into significant words, but word formation rules seem to be quite different from Chinese and Japanese. We also could not rely on the existence of an annotated corpus of segmented word forms. Habash and Rambow (2006) used the root+pattern+features representation of Arabic tokens for morphological analysis and generation of Arabic dialects, which have no lexicon. They report high recall (95%–98%) but low precision (37%–63%) for token types and token instances, against gold-standard morphological analysis. We also exploit the morphological patterns characteristic of semitic morphology, but extend the guessing of morphological features by using contextual features. We also propose a method that relies exclusively on learned character-level features and contextual features, and eventually reaches the same performance as the patterns-based approach. Mansour et al. (2007) combine a lexicon-based tagger (such as MorphTagger (Bar-Haim et al., 2005)), and a character-based tagger (such as the data-driven ArabicSVM (Diab et al., 2004)), which includes character features as part of its classification model, in order to extend the set of analyses suggested by the analyzer. For a given sentence, the lexicon-based tagger is applied, selecting one tag for a token. In case the ranking of the tagged sentence is lower than a threshold, the character-based tagger is applied, in order to produce new possible analyses. They report a very slight improvement on Hebrew and Arabic supervised POS taggers. Resolution of Hebrew unknown tokens, over a large number of tags in the tagset (3,561) requires a much richer model than the the heuristics used for English (for example, the capitalization feature which is dominant in English does not exist in Hebrew). Unlike Nakagawa, our model does not use any segmented text, and, on the other hand, it aims to select full morphological analysis for each token, 731 including unknowns. 3 Method Our objective is: given an unknown word, provide a distribution of possible tags that can serve as the analysis of the unknown word. This unknown analysis step is performed at training and testing time. 
We do not attempt to disambiguate the word – but only to provide a distribution of tags that will be disambiguated by the regular EM-HMM mechanism. We examined three models to construct the distribution of tags for unknown words, that is, whenever the KC analyzer does not return any candidate analysis, we apply these models to produce possible tags for the token p(t|w): Letters A maximum entropy model is built for all unknown tokens in order to estimate their tag distribution. The model is trained on the known tokens that appear in the corpus. For each analysis of a known token, the following features are extracted: (1) unigram, bigram, and trigram letters of the base-word (for each analysis, the base-word is the token without prefixes), together with their index relative to the start and end of the word. For example, the n-gram features extracted for the word abc are { a:1 b:2 c:3 a:-3 b:-2 c:-1 ab:1 bc:2 ab:-2 bc:-1 abc:1 abc:-1 } ; (2) the prefixes of the base-word (as a single feature); (3) the length of the base-word. The class assigned to this set of features, is the analysis of the base-word. The model is trained on all the known tokens of the corpus, each token is observed with its possible POS-tags once for each of its occurrences. When an unknown token is found, the model is applied as follows: all the possible linguistic prefixes are extracted from the token (one of the 76 prefix sequences that can occur in Hebrew); if more than one such prefix is found, the token is analyzed for each possible prefix. For each possible such segmentation, the full feature vector is constructed, and submitted to the Maximum Entropy model. We hypothesize a uniform distribution among the possible segmentations and aggregate a distribution of possible tags for the analysis. If the proposed tag of the base-word is never found in the corpus preceded by the identified prefix, we remove this possible analysis. The eventual outcome of the model application is a set of possible full morphological analyses for the token – in exactly the same format as the morphological analyzer provides. Patterns Word formation in Hebrew is based on root+pattern and affixation. Patterns can be used to identify the lexical category of unknowns, as well as other inflectional properties. Nir (1993) investigated word-formation in Modern Hebrew with a special focus on neologisms; the most common wordformation patterns he identified are summarized in Table 3. A naive approach for unknown resolution would add all analyses that fit any of these patterns, for any given unknown token. As recently shown by Habash and Rambow (2006), the precision of such a strategy can be pretty low. To address this lack of precision, we learn a maximum entropy model on the basis of the following binary features: one feature for each pattern listed in column Formation of Table 3 (40 distinct patterns) and one feature for “no pattern”. Pattern-Letters This maximum entropy model is learned by combining the features of the letters model and the patterns model. Linear-Context-based p(t|c) approximation The three models above are context free. The linear-context model exploits information about the lexical context of the unknown words: to estimate the probability for a tag t given a context c – p(t|c) – based on all the words in which a context occurs, the algorithm works on the known words in the corpus, by starting with an initial tag-word estimate p(t|w) (such as the morpho-lexical approximation, suggested by Levinger et al. 
(1995)), and iteratively re-estimating: ˆp(t|c) = P w∈W p(t|w)p(w|c) Z ˆp(t|w) = P c∈C p(t|c)p(c|w)allow(t, w) Z where Z is a normalization factor, W is the set of all words in the corpus, C is the set of contexts. allow(t, w) is a binary function indicating whether t is a valid tag for w. p(c|w) and p(w|c) are estimated via raw corpus counts. Loosely speaking, the probability of a tag given a context is the average probability of a tag given any 732 Category Formation Example Verb Template ’iCCeC ’ibh. en (diagnosed) oga` miCCeC mih. zer (recycled) xfgn CiCCen timren (manipulated) oxnz CiCCet tiknet (programmed) zpkz tiCCeC ti’arek (dated) jx`z Participle Template meCuCaca mˇswh. zar (reconstructed) xfgeyn muCCaC muqlat. (recorded) hlwen maCCiC malbin (whitening) oialn Noun Suffixation ut h. aluciyut (pioneership) zeivelg ay yomanay (duty officer) i`pnei an ’egropan (boxer) otexb` on pah. on (shack) oegt iya marakiyah (soup tureen) diiwxn it t.iyulit (open touring vehicle) zileih a lomdah (courseware) dcnel Template maCCeC maˇsneq (choke) wpyn maCCeCa madgera (incubator) dxbcn miCCaC mis‘ap (branching) srqn miCCaCa mignana (defensive fighting) dppbn CeCeCa pelet. (output) hlt tiCCoCet tiproset (distribution) zqextz taCCiC tah. rit. (engraving) hixgz taCCuCa tabru’ah (sanitation) d`exaz miCCeCet micrepet (leotard) ztxvn CCiC crir (dissonance) xixv CaCCan balˇsan (linguist) oyla CaCeCet ˇsah. emet (cirrhosis) zngy CiCul t.ibu‘ (ringing) reaih haCCaCa hanpaˇsa (animation) dytpd heCCeC het’em (agreement) m`zd Adjective Suffixationb i nora’i (awful) i`xep ani yeh. idani (individual) ipcigi oni t.elewizyonic (televisional) ipeifieelh a’i yed. ida’i (unique) i`cigi ali st.udentiali (student) il`ihpcehq Template C1C2aC3C2aC3d metaqtaq (sweetish) wzwzn CaCuC rapus (flaccid ) qetx Adverb Suffixation ot qcarot (briefly) zexvw it miyadit (immediately) zicin Prefixation b bekeip (with fun) sika aCoCeC variation: wzer ‘wyeq (a copy). bThe feminine form is made by the t and iya suffixes: ipcigi yeh. idanit (individual), dixvep nwcriya (Christian). cIn the feminine form, the last h of the original noun is omitted. dC1C2aC3C2oC3 variation: oehphw qt.ant.wn (tiny). Table 3: Common Hebrew Neologism Formations. 733 Model Analysis Set Morphological Disambiguation Coverage Ambiguity Probability Baseline 50.8% 1.5 0.48 57.3% Pattern 82.8% 20.4 0.10 66.8% Letter 76.7% 5.9 0.32 69.1% Pattern-Letter 84.1% 10.4 0.25 69.8% WordContext-Pattern 84.4% 21.7 0.12 66.5% TagContext-Pattern 85.3% 23.5 0.19 64.9% WordContext-Letter 80.7% 7.94 0.30 69.7% TagContext-Letter 83.1% 7.8 0.22 66.9% WordContext-Pattern-Letter 85.2% 12.0 0.24 68.8% TagContext-Pattern-Letter 86.1% 14.3 0.18 62.1% Table 4: Evaluation of unknown token full morphological analysis. of the words appearing in that context, and similarly the probability of a tag given a word is the averaged probability of that tag in all the (reliable) contexts in which the word appears. We use the function allow(t, w) to control the tags (ambiguity class) allowed for each word, as given by the lexicon. For a given word wi in a sentence, we examine two types of contexts: word context wi−1, wi+1, and tag context ti−1, ti+1. For the case of word context, the estimation of p(w|c) and p(c|w) is simply the relative frequency over all the events w1, w2, w3 occurring at least 10 times in the corpus. 
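For the word-context case just described, the following sketch spells out the iterative re-estimation. Here p_t_w holds the initial per-word tag distributions, cooc[w][c] the raw word/context co-occurrence counts, and allow(t, w) the lexicon check; all names and the toy data are illustrative rather than the actual implementation.

    from collections import defaultdict

    def normalize(dist):
        z = sum(dist.values())
        return {k: v / z for k, v in dist.items()} if z > 0 else dict(dist)

    def reestimate(p_t_w, cooc, allow, iterations=5):
        # Relative-frequency estimates of p(c|w) and p(w|c) from raw counts.
        p_c_w = {w: normalize(cs) for w, cs in cooc.items()}
        ctx_counts = defaultdict(lambda: defaultdict(float))
        for w, cs in cooc.items():
            for c, n in cs.items():
                ctx_counts[c][w] += n
        p_w_c = {c: normalize(ws) for c, ws in ctx_counts.items()}

        for _ in range(iterations):
            # p(t|c) proportional to the sum over words of p(t|w) * p(w|c).
            p_t_c = {}
            for c, ws in p_w_c.items():
                acc = defaultdict(float)
                for w, pw in ws.items():
                    for t, pt in p_t_w.get(w, {}).items():
                        acc[t] += pt * pw
                p_t_c[c] = normalize(acc)
            # p(t|w) proportional to the sum over contexts of
            # p(t|c) * p(c|w), restricted by allow(t, w).
            new_p_t_w = {}
            for w, cs in p_c_w.items():
                acc = defaultdict(float)
                for c, pc in cs.items():
                    for t, pt in p_t_c.get(c, {}).items():
                        if allow(t, w):
                            acc[t] += pt * pc
                if acc:
                    new_p_t_w[w] = normalize(acc)
            p_t_w = {**p_t_w, **new_p_t_w}
        return p_t_w

    # Toy usage: 'fy' (no initial estimate) inherits the tag distribution of
    # 'ktb' because the two words share a context.
    initial = {"ktb": {"VB": 0.6, "NN": 0.4}}
    counts = {"ktb": {"w-_RIGHT": 3}, "fy": {"w-_RIGHT": 2}}
    print(reestimate(initial, counts, allow=lambda t, w: True))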
Since the corpus is not tagged, the relative frequency of the tag contexts is not observed, instead, we use the context-free approximation of each word-tag, in order to determine the frequency weight of each tag context event. For example, given the sequence icnl ziznerl daebz tgubah l‘umatit lmadai (a quite oppositional response), and the analyses set produced by the context-free approximation: tgubah [NN 1.0] l‘umatit [] lmadai [RB 0.8, P1-NN 0.2]. The frequency weight of the context {NN RB} is 1 ∗0.8 = 0.8 and the frequency weight of the context {NN P1-NN} is 1 ∗0.2 = 0.2. 4 Evaluation For testing, we manually tagged the text which is used in the Hebrew Treebank (consisting of about 90K tokens), according to our tagging guideline (?). We measured the effectiveness of the three models with respect to the tags that were assigned to the unknown tokens in our test corpus (the ‘correct tag’), according to three parameters: (1) The coverage of the model, i.e., we count cases where p(t|w) contains the correct tag with a probability larger than 0.01; (2) the ambiguity level of the model, i.e., the average number of analyses suggested for each token; (3) the average probability of the ‘correct tag’, according to the predicted p(t|w). In addition, for each experiment, we run the full morphology disambiguation system where unknowns are analyzed according by the model. Our baseline proposes the most frequent tag (proper name) for all possible segmentations of the token, in a uniform distribution. We compare the following models: the 3 context free models (patterns, letters and the combined patterns and letters) and the same models combined with the word and tag context models. Note that the context models have low coverage (about 40% for the word context and 80% for the tag context models), and therefore, the context models cannot be used on their own. The highest coverage is obtained for the combined model (tag context, pattern, letter) at 86.1%. We first show the results for full morphological disambiguation, over 3,561 distinct tags in Table 4. The highest coverage is obtained for the model combining the tag context, patterns and letters models. The tag context model is more effective because it covers 80% of the unknown words, whereas the word context model only covers 40%. As expected, our simple baseline has the highest precision, since the most frequent proper name tag covers over 50% of the unknown words. The eventual effectiveness of 734 Model Analysis Set POS Tagging Coverage Ambiguity Probability Baseline 52.9% 1.5 0.52 60.6% Pattern 87.4% 8.7 0.19 76.0% Letter 80% 4.0 0.39 77.6% Pattern-Letter 86.7% 6.2 0.32 78.5% WordContext-Pattern 88.7% 8.8 0.21 75.8% TagContext-Pattern 89.5% 9.1 0.14 73.8% WordContext-Letter 83.8% 4.5 0.37 78.2% TagContext-Letter 87.1% 5.7 0.28 75.2% WordContext-Pattern-Letter 87.8 6.5 0.32 77.5% TagContext-Pattern-Letter 89.0% 7.2 0.25 74% Table 5: Evaluation of unknown token POS tagging. the method is measured by its impact on the eventual disambiguation of the unknown words. For full morphological disambiguation, our method achieves an error reduction of 30% (57% to 70%). Overall, with the level of 4.5% of unknown words observed in our corpus, the algorithm we have developed contributes to an error reduction of 5.5% for full morphological disambiguation. The best result is obtained for the model combining pattern and letter features. However, the model combining the word context and letter features achieves almost identical results. 
This is an interesting result, as the pattern features encapsulate significant linguistic knowledge, which apparently can be approximated by a purely distributional approximation. While the disambiguation level of 70% is lower than the rate of 85% achieved in English, it must be noted that the task of full morphological disambiguation in Hebrew is much harder – we manage to select one tag out of 3,561 for unknown words as opposed to one out of 46 in English. Table 5 shows the result of the disambiguation when we only take into account the POS tag of the unknown tokens. The same models reach the best results in this case as well (Pattern+Letters and WordContext+Letters). The best disambiguation result is 78.5% – still much lower than the 85% achieved in English. The main reason for this lower level is that the task in Hebrew includes segmentation of prefixes and suffixes in addition to POS classification. We are currently investigating models that will take into account the specific nature of prefixes in Hebrew (which encode conjunctions, definite articles and prepositions) to better predict the segmentation of unknown words. 5 Conclusion We have addressed the task of computing the distribution p(t|w) for unknown words for full morphological disambiguation in Hebrew. The algorithm we have proposed is language independent: it exploits a maximum entropy letters model trained over the known words observed in the corpus and the distribution of the unknown words in known tag contexts, through iterative approximation. The algorithm achieves 30% error reduction on disambiguation of unknown words over a competitive baseline (to a level of 70% accurate full disambiguation of unknown words). We have also verified that taking advantage of a strong language-specific model of morphological patterns provides the same level of disambiguation. The algorithm we have developed exploits distributional information latent in a wide-coverage lexicon and large quantities of unlabeled data. We observe that the task of analyzing unknown tokens for POS in Hebrew remains challenging when compared with English (78% vs. 85%). We hypothesize this is due to the highly ambiguous pattern of prefixation that occurs widely in Hebrew and are currently investigating syntagmatic models that exploit the specific nature of agglutinated prefixes in Hebrew. 735 References Meni Adler. 2007. Hebrew Morphological Disambiguation: An Unsupervised Stochastic Word-based Approach. Ph.D. thesis, Ben-Gurion University of the Negev, Beer-Sheva, Israel. Roy Bar-Haim, Khalil Sima’an, and Yoad Winter. 2005. Choosing an optimal architecture for segmentation and pos-tagging of modern Hebrew. In Proceedings of ACL-05 Workshop on Computational Approaches to Semitic Languages. Tim Buckwalter. 2004. Buckwalter Arabic morphological analyzer, version 2.0. Evangelos Dermatas and George Kokkinakis. 1995. Automatic stochastic tagging of natural language texts. Computational Linguistics, 21(2):137–163. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of Arabic text: From raw text to base phrase chunks. In Proceeding of HLT-NAACL04. Michael Elhadad, Yael Netzer, David Gabay, and Meni Adler. 2005. Hebrew morphological tagging guidelines. Technical report, Ben-Gurion University, Dept. of Computer Science. Nizar Habash and Owen Rambow. 2006. Magead: A morphological analyzer and generator for the arabic dialects. 
In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 681–688, Sydney, Australia, July. Association for Computational Linguistics. Moshe Levinger, Uzi Ornan, and Alon Itai. 1995. Learning morpholexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21:383–404. Saib Mansour, Khalil Sima'an, and Yoad Winter. 2007. Smoothing a lexicon-based POS tagger for Arabic and Hebrew. In ACL07 Workshop on Computational Approaches to Semitic Languages, Prague, Czech Republic. Andrei Mikheev. 1997. Automatic rule induction for unknown-word guessing. Computational Linguistics, 23(3):405–423. Tetsuji Nakagawa. 2004. Chinese and Japanese word segmentation using word-level and character-level information. In Proceedings of the 20th International Conference on Computational Linguistics, Geneva. Raphael Nir. 1993. Word-Formation in Modern Hebrew. The Open University of Israel, Tel-Aviv, Israel. Uzi Ornan. 2002. Hebrew in Latin script. Lĕšonénu, LXIV:137–151. (in Hebrew). Scott M. Thede and Mary P. Harper. 1999. A second-order hidden Markov model for part-of-speech tagging. In Proceedings of ACL-99. R. Weischedel, R. Schwartz, J. Palmucci, M. Meteer, and L. Ramshaw. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19:359–382.
Proceedings of ACL-08: HLT, pages 737–745, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Unsupervised Multilingual Learning for Morphological Segmentation Benjamin Snyder and Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {bsnyder,regina}@csail.mit.edu Abstract For centuries, the deep connection between languages has brought about major discoveries about human communication. In this paper we investigate how this powerful source of information can be exploited for unsupervised language learning. In particular, we study the task of morphological segmentation of multiple languages. We present a nonparametric Bayesian model that jointly induces morpheme segmentations of each language under consideration and at the same time identifies cross-lingual morpheme patterns, or abstract morphemes. We apply our model to three Semitic languages: Arabic, Hebrew, Aramaic, as well as to English. Our results demonstrate that learning morphological models in tandem reduces error by up to 24% relative to monolingual models. Furthermore, we provide evidence that our joint model achieves better performance when applied to languages from the same family. 1 Introduction For centuries, the deep connection between human languages has fascinated linguists, anthropologists and historians (Eco, 1995). The study of this connection has made possible major discoveries about human communication: it has revealed the evolution of languages, facilitated the reconstruction of proto-languages, and led to understanding language universals. The connection between languages should be a powerful source of information for automatic linguistic analysis as well. In this paper we investigate two questions: (i) Can we exploit cross-lingual correspondences to improve unsupervised language learning? (ii) Will this joint analysis provide more or less benefit when the languages belong to the same family? We study these two questions in the context of unsupervised morphological segmentation, the automatic division of a word into morphemes (the basic units of meaning). For example, the English word misunderstanding would be segmented into mis understand - ing. This task is an informative testbed for our exploration, as strong correspondences at the morphological level across various languages have been well-documented (Campbell, 2004). The model presented in this paper automatically induces a segmentation and morpheme alignment from a multilingual corpus of short parallel phrases.1 For example, given parallel phrases meaning in my land in English, Arabic, Hebrew, and Aramaic, we wish to segment and align morphemes as follows: fy arḍ - y b - arṣ - y b - arʿ
- y in my land English: Arabic: Hebrew: Aramaic: This example illustrates the potential benefits of unsupervised multilingual learning. The three Semitic languages use cognates (words derived from a common ancestor) to represent the word land. They also use an identical suffix (-y) to represent the first person possessive pronoun (my). These similarities in form should guide the model by constraining 1In this paper, we focus on bilingual models. The model can be extended to handle several languages simultaneously as in this example. 737 the space of joint segmentations. The corresponding English phrase lacks this resemblance to its Semitic counterparts. However, in this as in many cases, no segmentation is required for English as all the morphemes are expressed as individual words. For this reason, English should provide a strong source of disambiguation for highly inflected languages, such as Arabic and Hebrew. In general, we pose the following question. In which scenario will multilingual learning be most effective? Will it be for related languages, which share a common core of linguistic features, or for distant languages, whose linguistic divergence can provide strong sources of disambiguation? As a first step towards answering this question, we propose a model which can take advantage of both similarities and differences across languages. This joint bilingual model identifies optimal morphemes for two languages and at the same time finds compact multilingual representations. For each language in the pair, the model favors segmentations which yield high frequency morphemes. Moreover, bilingual morpheme pairs which consistently share a common semantic or syntactic function are treated as abstract morphemes, generated by a single language-independent process. These abstract morphemes are induced automatically by the model from recurring bilingual patterns. For example, in the case above, the tuple (in, fy, b-, b-) would constitute one of three abstract morphemes in the phrase. When a morpheme occurs in one language without a direct counterpart in the other language, our model can explain away the stray morpheme as arising through a language-specific process. To achieve this effect in a probabilistic framework, we formulate a hierarchical Bayesian model with Dirichlet Process priors. This framework allows us to define priors over the infinite set of possible morphemes in each language. In addition, we define a prior over abstract morphemes. This prior can incorporate knowledge of the phonetic relationship between the two alphabets, giving potential cognates greater prior likelihood. The resulting posterior distributions concentrate their probability mass on a small group of recurring and stable patterns within and between languages. We test our model on a multilingual corpus of short parallel phrases drawn from the Hebrew Bible and Arabic, Aramaic, and English translations. The Semitic language family, of which Hebrew, Arabic, and Aramaic are members, is known for a highly productive morphology (Bravmann, 1977). Our results indicate that cross-lingual patterns can indeed be exploited successfully for the task of unsupervised morphological segmentation. When modeled in tandem, gains are observed for all language pairs, reducing relative error by as much as 24%. Furthermore, our experiments show that both related and unrelated language pairs benefit from multilingual learning. 
However, when common structures such as phonetic correspondences are explicitly modeled, related languages provide the most benefit. 2 Related Work Multilingual Language Learning Recently, the availability of parallel corpora has spurred research on multilingual analysis for a variety of tasks ranging from morphology to semantic role labeling (Yarowsky et al., 2000; Diab and Resnik, 2002; Xi and Hwa, 2005; Pad´o and Lapata, 2006). Most of this research assumes that one language has annotations for the task of interest. Given a parallel corpus, the annotations are projected from this source language to its counterpart, and the resulting annotations are used for supervised training in the target language. In fact, Rogati et al., (2003) employ this method to learn arabic morphology assuming annotations provided by an English stemmer. An alternative approach has been proposed by Feldman, Hana and Brew (2004; 2006). While their approach does not require a parallel corpus it does assume the availability of annotations in one language. Rather than being fully projected, the source annotations provide co-occurrence statistics used by a model in the resource-poor target language. The key assumption here is that certain distributional properties are invariant across languages from the same language families. An example of such a property is the distribution of part-of-speech bigrams. Hana et al., (2004) demonstrate that adding such statistics from an annotated Czech corpus improves the performance of a Russian part-of-speech tagger over a fully unsupervised version. The approach presented here differs from previous work in two significant ways. First, we do 738 not assume supervised data in any of the languages. Second, we learn a single multilingual model, rather than asymmetrically handling one language at a time. This design allows us to capitalize on structural regularities across languages for the mutual benefit of each language. Unsupervised Morphological Segmentation Unsupervised morphology is an active area of research (Schone and Jurafsky, 2000; Goldsmith, 2001; Adler and Elhadad, 2006; Creutz and Lagus, 2007; Dasgupta and Ng, 2007). Most existing algorithms derive morpheme lexicons by identifying recurring patterns in string distribution. The goal is to optimize the compactness of the data representation by finding a small lexicon of highly frequent strings. Our work builds on probabilistic segmentation approaches such as Morfessor (Creutz and Lagus, 2007). In these approaches, models with short description length are preferred. Probabilities are computed for both the morpheme lexicon and the representation of the corpus conditioned on the lexicon. A locally optimal segmentation is identified using a task-specific greedy search. In contrast to previous approaches, our model induces morphological segmentation for multiple related languages simultaneously. By representing morphemes abstractly through the simultaneous alignment and segmentation of data in two languages, our algorithm capitalizes on deep connections between morpheme usage across different languages. 3 Multilingual Morphological Segmentation The underlying assumption of our work is that structural commonality across different languages is a powerful source of information for morphological analysis. In this section, we provide several examples that motivate this assumption. The main benefit of joint multilingual analysis is that morphological structure ambiguous in one language is sometimes explicitly marked in another language. 
For example, in Hebrew, the preposition meaning “in”, b-, is always prefixed to its nominal argument. On the other hand, in Arabic, the most common corresponding particle is fy, which appears as a separate word. By modeling crosslingual morpheme alignments while simultaneously segmenting, the model effectively propagates information between languages and in this case would be encouraged to segment the Hebrew prefix b-. Cognates are another important means of disambiguation in the multilingual setting. Consider translations of the phrase “...and they wrote it...”: • Hebrew: w-ktb-w ath • Arabic: f-ktb-w-ha In both languages, the triliteral root ktb is used to express the act of writing. By considering the two phrases simultaneously, the model can be encouraged to split off the respective Hebrew and Arabic prefixes w- and f- in order to properly align the cognate root ktb. In the following section, we describe a model that can model both generic cross-lingual patterns (fy and b-), as well as cognates between related languages (ktb for Hebrew and Arabic). 4 Model Overview In order to simultaneously model probabilistic dependencies across languages as well as morpheme distributions within each language, we employ a hierarchical Bayesian model.2 Our segmentation model is based on the notion that stable recurring string patterns within words are indicative of morphemes. In addition to learning independent morpheme patterns for each language, the model will prefer, when possible, to join together frequently occurring bilingual morpheme pairs into single abstract morphemes. The model is fully unsupervised and is driven by a preference for stable and high frequency cross-lingual morpheme patterns. In addition the model can incorporate character-to-character phonetic correspondences between alphabets as prior information, thus allowing the implicit modeling of cognates. Our aim is to induce a model which concentrates probability on highly frequent patterns while still allowing for the possibility of those previously unseen. Dirichlet processes are particularly suitable for such conditions. In this framework, we can encode 2In (Snyder and Barzilay, 2008) we consider the use of this model in the case where supervised data in one or more languages is available. 739 prior knowledge over the infinite sets of possible morpheme strings as well as abstract morphemes. Distributions drawn from a Dirichlet process nevertheless produce sparse representations with most probability mass concentrated on a small number of observed and predicted patterns. Our model utilizes a Dirichlet process prior for each language, as well as for the cross-lingual links (abstract morphemes). Thus, a distribution over morphemes and morpheme alignments is first drawn from the set of Dirichlet processes and then produces the observed data. In practice, we never deal with such distributions directly, but rather integrate over them during Gibbs sampling. In the next section we describe our model’s “generative story” for producing the data we observe. We formalize our model in the context of two languages E and F. However, the formulation can be extended to accommodate evidence from multiple languages as well. We provide an example of parallel phrase generation in Figure 1. High-level Generative Story We have a parallel corpus of several thousand short phrases in the two languages E and F. Our model provides a generative story explaining how these parallel phrases were probabilistically created. 
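Before the formal description that follows, a minimal code sketch of this generative story may help. It assumes the three morpheme distributions (A over coupled cross-lingual morpheme pairs, and E and F over language-specific stray morphemes) are already given as finite probability tables rather than draws from Dirichlet processes, and it uses the uniform ordering and fusing treatment described below; all names and toy values are illustrative.

    import math
    import random

    def draw(dist):
        """Draw one outcome from a non-empty {outcome: probability} table."""
        r, acc = random.random(), 0.0
        for outcome, p in dist.items():
            acc += p
            if r <= acc:
                return outcome
        return outcome  # guard against floating-point rounding

    def poisson(lam):
        """Knuth's Poisson sampler; adequate for small lam."""
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= random.random()
            if p <= threshold:
                return k
            k += 1

    def fuse(morphemes):
        """Fuse each adjacent pair into one word with probability 1/2."""
        words, current = [], ""
        for i, m in enumerate(morphemes):
            current += m
            if i == len(morphemes) - 1 or random.random() < 0.5:
                words.append(current)
                current = ""
        return words

    def generate_phrase_pair(A, E, F, lam=1.0):
        # Numbers of stray (m, n) and coupled abstract (k) morphemes.
        m, n, k = poisson(lam), poisson(lam), poisson(lam)
        e_morphs = [draw(E) for _ in range(m)]
        f_morphs = [draw(F) for _ in range(n)]
        for _ in range(k):
            e, f = draw(A)           # an abstract morpheme is a coupled pair
            e_morphs.append(e)
            f_morphs.append(f)
        random.shuffle(e_morphs)     # uniform ordering in each language
        random.shuffle(f_morphs)
        return fuse(e_morphs), fuse(f_morphs)

    # Toy usage with hand-specified (not learned) distributions.
    E = {"w-": 0.5, "at": 0.5}
    F = {"f-": 0.5, "ha": 0.5}
    A = {("b-", "fy"): 0.6, ("ktb", "ktb"): 0.4}
    print(generate_phrase_pair(A, E, F))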
The core of the model consists of three components: a distribution A over bilingual morpheme pairs (abstract morphemes), a distribution E over stray morphemes in language E occurring without a counterpart in language F, and a similar distribution F for stray morphemes in language F. As usual for hierarchical Bayesian models, the generative story begins by drawing the model parameters themselves – in our case the three distributions A, E, and F. These three distributions are drawn from three separate Dirichlet processes, each with appropriately defined base distributions. The Dirichlet processes ensure that the resulting distributions concentrate their probability mass on a small number of morphemes while holding out reasonable probability for unseen possibilities. Once A, E, and F have been drawn, we model our parallel corpus of short phrases as a series of independent draws from a phrase-pair generation model. For each new phrase-pair, the model first chooses the number and type of morphemes to be generated. In particular, it must choose how many unaligned stray morphemes from language E, unaligned stray morphemes from language F, and abstract morphemes are to compose the parallel phrases. These three numbers, respectively denoted as m, n, and k, are drawn from a Poisson distribution. This step is illustrated in Figure 1 part (a). The model then proceeds to independently draw m language E morphemes from distribution E, n language-F morphemes from distribution F, and k abstract morphemes from distribution A. This step is illustrated in part (b) of Figure 1. The m + k resulting language-E morphemes are then ordered and fused to form a phrase in language E, and likewise for the n + k resulting languageF morphemes. The ordering and fusing decisions are modeled as draws from a uniform distribution over the set of all possible orderings and fusings for sizes m, n, and k. These final steps are illustrated in parts (c)-(d) of Figure 1. Now we describe the model more formally. Stray Morpheme Distributions Sometimes a morpheme occurs in a phrase in one language without a corresponding foreign language morpheme in the parallel phrase. We call these “stray morphemes,” and we employ language-specific morpheme distributions to model their generation. For each language, we draw a distribution over all possible morphemes (finite-length strings composed of characters in the appropriate alphabet) from a Dirichlet process with concentration parameter α and base distribution Pe or Pf respectively: E|α, Pe ∼ DP(α, Pe) F|α, Pf ∼ DP(α, Pf) The base distributions Pe and Pf can encode prior knowledge about the properties of morphemes in each of the two languages, such as length and character n-grams. For simplicity, we use a geometric distribution over the length of the string with a final end-morpheme character. The distributions E and F which result from the respective Dirichlet processes place most of their probability mass on a small number of morphemes with the degree of concentration 740 وا#$%&%''(ואת הכנעני"...and the Canaanites" w-at h-knʿn-y w-al-knʿn-y-yn and-ACC the-canaan-of and-the-canaan-of-PLURAL at knʿn knʿn yn w w y y al h at knʿn knʿn yn w w y y al h E F A m = 1 n = 1 k = 4 (a) (b) (c) (d) Figure 1: Generation process for a parallel bilingual phrase, with Hebrew shown on top and Arabic on bottom. (a) First the numbers of stray (m and n) and abstract (k) morphemes are drawn from a Poisson distribution. 
(b) Stray morphemes are then drawn from E and F (language-specific distributions) and abstract morphemes are drawn from A. (c) The resulting morphemes are ordered. (d) Finally, some of the contiguous morphemes are fused into words. controlled by the prior α. Nevertheless, some nonzero probability is reserved for every possible string. We note that these single-language morpheme distributions also serve as monolingual segmentation models, and similar models have been successfully applied to the task of word boundary detection (Goldwater et al., 2006). Abstract Morpheme Distribution To model the connections between morphemes across languages, we further define a model for bilingual morpheme pairs, or abstract morphemes. This model assigns probabilities to all pairs of morphemes – that is, all pairs of finite strings from the respective alphabets – (e, f). Intuitively, we wish to assign high probability to pairs of morphemes that play similar syntactic or semantic roles (e.g. (fy, b-) for “in” in Arabic and Hebrew). These morpheme pairs can thus be viewed as representing abstract morphemes. As with the stray morpheme models, we wish to define a distribution which concentrates probability mass on a small number of highly co-occurring morpheme pairs while still holding out some probability for all other pairs. We define this abstract morpheme model A as a draw from another Dirichlet process: A|α′, P ′ ∼ DP(α′, P ′) (e, f) ∼ A As before, the resulting distribution A will give non-zero probability to all abstract morphemes (e, f). The base distribution P ′ acts as a prior on such pairs. To define P ′, we can simply use a mixture of geometric distributions in the lengths of the component morphemes. However, if the languages E and F are related and the regular phonetic correspondences between the letter in the two alphabets are known, then we can use P ′ to assign higher likelihood to potential cognates. In particular we define the prior P ′(e, f) to be the probabilistic string-edit distance (Ristad and Yianilos, 1998) between e and f, using the known phonetic correspondences to parameterize the string-edit model. In particular, insertion and deletion probabilities are held constant for all characters, and substitution probabilities are determined based on the known sound correspondences. We report results for both the simple geometric prior as well as the string-edit prior. Phrase Generation To generate a bilingual parallel phrase, we first draw m, n, and k independently from a Poisson distribution. These three integers represent the number and type of the morphemes that compose the parallel phrase, giving the number of stray morphemes in each language E and F and the number of coupled bilingual morpheme pairs, respectively. m, n, k ∼ Poisson(λ) Given these values, we now draw the appropriate number of stray and abstract morphemes from the corresponding distributions: 741 e1, ..., em ∼ E f1, ..., fn ∼ F (e′ 1, f′ 1), ..., (e′ k, f′ k) ∼ A The sets of morphemes drawn for each language are then ordered: ˜e1, ..., ˜em+k ∼ ORDER|e1, ..., em, e′ 1, ..., e′ k ˜f1, ..., ˜fn+k ∼ ORDER|f1, ..., fn, f′ 1, ..., f′ k Finally the ordered morphemes are fused into the words that form the parallel phrases: w1, ..., ws ∼ FUSE|˜e1, ..., ˜em+k v1, ..., vt ∼ FUSE| ˜f1, ..., ˜fn+k To keep the model as simple as possible, we employ uniform distributions over the sets of orderings and fusings. 
In other words, given a set of r morphemes (for each language), we define the distribution over permutations of the morphemes to simply be ORDER(·|r) = 1 r!. Then, given a fixed morpheme order, we consider fusing each adjacent morpheme into a single word. Again, we simply model the distribution over the r −1 fusing decisions uniformly as FUSE(·|r) = 1 2r−1 . Implicit Alignments Note that nowhere do we explicitly assign probabilities to morpheme alignments between parallel phrases. However, our model allows morphemes to be generated in precisely one of two ways: as a lone stray morpheme or as part of a bilingual abstract morpheme pair. Thus, our model implicitly assumes that each morpheme is either unaligned, or aligned to exactly one morpheme in the opposing language. If we are given a parallel phrase with already segmented morphemes we can easily induce the distribution over alignments implied by our model. As we will describe in the next section, drawing from these induced alignment distributions plays a crucial role in our inference procedure. Inference Given our corpus of short parallel bilingual phrases, we wish to make segmentation decisions which yield a set of morphemes with high joint probability. To assess the probability of a potential morpheme set, we need to marginalize over all possible alignments (i.e. possible abstract morpheme pairings and stray morpheme assignments). We also need to marginalize over all possible draws of the distributions A, E, and F from their respective Dirichlet process priors. We achieve these aims by performing Gibbs sampling. Sampling We follow (Neal, 1998) in the derivation of our blocked and collapsed Gibbs sampler. Gibbs sampling starts by initializing all random variables to arbitrary starting values. At each iteration, the sampler selects a random variable Xi, and draws a new value for Xi from the conditional distribution of Xi given the current value of the other variables: P(Xi|X−i). The stationary distribution of variables derived through this procedure is guaranteed to converge to the true joint distribution of the random variables. However, if some variables can be jointly sampled, then it may be beneficial to perform block sampling of these variables to speed convergence. In addition, if a random variable is not of direct interest, we can avoid sampling it directly by marginalizing it out, yielding a collapsed sampler. We utilize variable blocking by jointly sampling multiple segmentation and alignment decisions. We also collapse our Gibbs sampler in the standard way, by using predictive posteriors marginalized over all possible draws from the Dirichlet processes (resulting in Chinese Restaurant Processes). Resampling For each bilingual phrase, we resample each word in the phrase in turn. For word w in language E, we consider at once all possible segmentations, and for each segmentation all possible alignments. We keep fixed the previously sampled segmentation decisions for all other words in the phrase as well as sampled alignments involving morphemes in other words. We are thus considering at once: all possible segmentations of w along with all possible alignments involving morphemes in w with some subset of previously sampled languageF morphemes.3 3We retain morpheme identities during resampling of the morpheme alignments. 
This procedure is technically justi742 Arabic Hebrew precision recall F-score precision recall F-score RANDOM 18.28 19.24 18.75 24.95 24.66 24.80 MORFESSOR 71.10 60.51 65.38 65.38 57.69 61.29 MONOLINGUAL 52.95 78.46 63.22 55.76 64.44 59.78 + ARABIC/HEBREW 60.40 78.64 68.32 59.08 66.50 62.57 + ARAMAIC 61.33 77.83 68.60 54.63 65.68 59.64 + ENGLISH 63.19 74.79 68.49 60.20 64.42 62.23 + ARAMAIC+PH 66.74 75.46 70.83 60.87 59.73 60.29 + ARABIC/HEBREW+PH 67.75 77.29 72.20 64.90 62.87 63.87 Table 1: Precision, recall and F-score evaluated on Arabic and Hebrew. The first three rows provide baselines (random selection, an alternative state-of-the-art system, and the monolingual version of our model). The next three rows show the result of our bilingual model when one of Arabic, Hebrew, Aramaic, or English is added. The final two rows show the result of the bilingual model when character-to-character phonetic correspondences are used in the abstract morpheme prior. The sampling formulas are easily derived as products of the relevant Chinese Restaurant Processes (with a minor adjustment to take into account the number of stray and abstract morphemes resulting from each decision). See (Neal, 1998) for general formulas for Gibbs sampling from distributions with Dirichlet process priors. All results reported are averaged over five runs using simulated annealing. 5 Experimental Set-Up Morpheme Definition For the purpose of these experiments, we define morphemes to include conjunctions, prepositional and pronominal affixes, plural and dual suffixes, particles, definite articles, and roots. We do not model cases of infixed morpheme transformations, as those cannot be modeled by linear segmentation. Dataset As a source of parallel data, we use the Hebrew Bible and translations. For the Hebrew version, we use an edition distributed by Westminster Hebrew Institute (Groves and Lowery, 2006). This Bible edition is augmented by gold standard morphological analysis (including segmentation) performed by biblical scholars. For the Arabic, Aramaic, and English versions, fied by augmenting the model with a pair of “morphemeidentity” variables deterministically drawn from each abstract morpheme. Thus the identity of the drawn morphemes can be retained even while resampling their generation mechanism. we use the Van Dyke Arabic translation,4 Targum Onkelos,5 and the Revised Standard Version (Nelson, 1952), respectively. We obtained gold standard segmentations of the Arabic translation with a hand-crafted Arabic morphological analyzer which utilizes manually constructed word lists and compatibility rules and is further trained on a large corpus of hand-annotated Arabic data (Habash and Rambow, 2005). The accuracy of this analyzer is reported to be 94% for full morphological analyses, and 98%-99% when part-of-speech tag accuracy is not included. We don’t have gold standard segmentations for the English and Aramaic portions of the data, and thus restrict our evaluation to Hebrew and Arabic. To obtain our corpus of short parallel phrases, we preprocessed each language pair using the Giza++ alignment toolkit.6 Given word alignments for each language pair, we extract a list of phrase pairs that form independent sets in the bipartite alignment graph. This process allows us to group together phrases like fy s.bah. in Arabic and bbqr in Hebrew while being reasonably certain that all the relevant morphemes are contained in the short extracted phrases. 
The number of words in such phrases ranges from one to four words in the Semitic languages and up to six words in English. Before performing any experiments, a manual inspection of 4http://www.arabicbible.com/bible/vandyke.htm 5http://www.mechon-mamre.org/i/t/u/u0.htm 6http://www.fjoch.com/GIZA++.html 743 the generated parallel phrases revealed that many infrequent phrase pairs occurred merely as a result of noisy translation and alignment. Therefore, we eliminated all parallel phrases that occur fewer than five times. As a result of this process, we obtain 6,139 parallel short phrases in Arabic, Hebrew, Aramaic, and English. The average number of morphemes per word in the Hebrew data is 1.8 and is 1.7 in Arabic. For the bilingual models which employs probabilistic string-edit distance as a prior on abstract morphemes, we parameterize the string-edit model with the chart of Semitic consonant relationships listed on page xxiv of (Thackston, 1999). All pairs of corresponding letters are given equal substitution probability, while all other letter pairs are given substitution probability of zero. Evaluation Methods Following previous work, we evaluate the performance of our automatic segmentation algorithm using F-score. This measure is the harmonic mean of recall and precision, which are calculated on the basis of all possible segmentation points. The evaluation is performed on a random set of 1/5 of the parallel phrases which is unseen during the training phase. During testing, we do not allow the models to consider any multilingual evidence. This restriction allows us to simulate future performance on purely monolingual data. Baselines Our primary purpose is to compare the performance of our bilingual model with its fully monolingual counterpart. However, to demonstrate the competitiveness of this baseline model, we also provide results using MORFESSOR (Creutz and Lagus, 2007), a state-of-the-art unsupervised system for morphological segmentation. While developed originally for Finnish, this system has been successfully applied to a range of languages including German, Turkish and English. The probabilistic formulation of this model is close to our monolingual segmentation model, but it uses a greedy search specifically designed for the segmentation task. We use the publicly available implementation of this system. To provide some idea of the inherent difficulty of this segmentation task, we also provide results from a random baseline which makes segmentation decisions based on a coin weighted with the true segmentation frequency. 6 Results Table 1 shows the performance of the various automatic segmentation methods. The first three rows provide baselines, as mentioned in the previous section. Our primary baseline is MONOLINGUAL, which is the monolingual counterpart to our model and only uses the language-specific distributions E or F. The next three rows shows the performance of various bilingual models that don’t use character-tocharacter phonetic correspondences to capture cognate information. We find that with the exception of the HEBREW(+ARAMAIC) pair, the bilingual models show marked improvement over MONOLINGUAL. We notice that in general, adding English – which has comparatively little morphological ambiguity – is about as useful as adding a more closely related Semitic language. 
However, once characterto-character phonetic correspondences are added as an abstract morpheme prior (final two rows), we find the performance of related language pairs outstrips English, reducing relative error over MONOLINGUAL by 10% and 24% for the Hebrew/Arabic pair. 7 Conclusions and Future Work We started out by posing two questions: (i) Can we exploit cross-lingual patterns to improve unsupervised analysis? (ii) Will this joint analysis provide more or less benefit when the languages belong to the same family? The model and results presented in this paper answer the first question in the affirmative, at least for the task of morphological segmentation. We also provided some evidence that considering closely related languages may be more beneficial than distant pairs if the model is able to explicitly represent shared language structure (the characterto-character phonetic correspondences in our case). In the future, we hope to apply similar multilingual models to other core unsupervised analysis tasks, including part-of-speech tagging and grammar induction, and to further investigate the role that language relatedness plays in such models. 7 7We acknowledge the support of the National Science Foundation (CAREER grant IIS-0448168 and grant IIS-0415865) and the Microsoft Research Faculty Fellowship. Thanks to members of the MIT NLP group for enlightening discussion. 744 References Meni Adler and Michael Elhadad. 2006. An unsupervised morpheme-based hmm for hebrew morphological disambiguation. In Proceedings of the ACL/CONLL, pages 665–672. M. M. Bravmann. 1977. Studies in Semitic Philology. Leiden:Brill. Lyle Campbell. 2004. Historical Linguistics: An Introduction. Cambridge: MIT Press. Mathias Creutz and Krista Lagus. 2007. Unsupervised models for morpheme segmentation and morphology learning. ACM Transactions on Speech and Language Processing, 4(1). Sajib Dasgupta and Vincent Ng. 2007. Unsupervised part-of-speech acquisition for resource-scarce languages. In Proceedings of the EMNLP-CoNLL, pages 218–227. Mona Diab and Philip Resnik. 2002. An unsupervised method for word sense tagging using parallel corpora. In Proceedings of the ACL, pages 255–262. Umberto Eco. 1995. The Search for the Perfect Language. Wiley-Blackwell. Anna Feldman, Jirka Hana, and Chris Brew. 2006. A cross-language approach to rapid creation of new morpho-syntactically annotated resources. In Proceedings of LREC. John A. Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27(2):153–198. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual dependencies in unsupervised word segmentation. In Proceedings of the ACL, pages 673–680. Alan Groves and Kirk Lowery, editors. 2006. The Westminster Hebrew Bible Morphology Database. Westminster Hebrew Institute, Philadelphia, PA, USA. Nizar Habash and Owen Rambow. 2005. Arabic tokenization, part-of-speech tagging and morphological disambig uation in one fell swoop. In Proceedings of the ACL, pages 573–580. Jiri Hana, Anna Feldman, and Chris Brew. 2004. A resource-light approach to russian morphology: Tagging russian using czech resources. In Proceedings of EMNLP, pages 222–229. Radford M. Neal. 1998. Markov chain sampling methods for dirichlet process mixture models. Technical Report 9815, Dept. of Statistics and Dept. of Computer Science, University of Toronto, September. Thomas Nelson, editor. 1952. The Holy Bible Revised Standard Version. Thomas Nelson & Sons. Sebastian Pad´o and Mirella Lapata. 2006. 
Optimal constituent alignment with edge covers for semantic projection. In Proceedings of ACL, pages 1161–1168. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string-edit distance. IEEE Trans. Pattern Anal. Mach. Intell., 20(5):522–532. Monica Rogati, J. Scott McCarley, and Yiming Yang. 2003. Unsupervised learning of Arabic stemming using a parallel corpus. In Proceedings of the ACL, pages 391–398. Patrick Schone and Daniel Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. In Proceedings of the CoNLL, pages 67–72. Benjamin Snyder and Regina Barzilay. 2008. Cross-lingual propagation for morphological analysis. In Proceedings of AAAI. Wheeler M. Thackston. 1999. Introduction to Syriac. Ibex Publishers. Chenhai Xi and Rebecca Hwa. 2005. A backoff model for bootstrapping resources for non-English languages. In Proceedings of HLT/EMNLP, pages 851–858. David Yarowsky, Grace Ngai, and Richard Wicentowski. 2000. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of HLT, pages 161–168.
Proceedings of ACL-08: HLT, pages 746–754, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics EM Can Find Pretty Good HMM POS-Taggers (When Given a Good Start)∗ Yoav Goldberg and Meni Adler and Michael Elhadad Ben Gurion University of the Negev Department of Computer Science POB 653 Be’er Sheva, 84105, Israel {yoavg,adlerm,elhadad}@cs.bgu.ac.il Abstract We address the task of unsupervised POS tagging. We demonstrate that good results can be obtained using the robust EM-HMM learner when provided with good initial conditions, even with incomplete dictionaries. We present a family of algorithms to compute effective initial estimations p(t|w). We test the method on the task of full morphological disambiguation in Hebrew achieving an error reduction of 25% over a strong uniform distribution baseline. We also test the same method on the standard WSJ unsupervised POS tagging task and obtain results competitive with recent state-ofthe-art methods, while using simple and efficient learning methods. 1 Introduction The task of unsupervised (or semi-supervised) partof-speech (POS) tagging is the following: given a dictionary mapping words in a language to their possible POS, and large quantities of unlabeled text data, learn to predict the correct part of speech for a given word in context. The only supervision given to the learning process is the dictionary, which in a realistic scenario, contains only part of the word types observed in the corpus to be tagged. Unsupervised POS tagging has been traditionally approached with relative success (Merialdo, 1994; Kupiec, 1992) by HMM-based generative models, employing EM parameters estimation using the Baum-Welch algorithm. However, as recently noted ∗This work is supported in part by the Lynn and William Frankel Center for Computer Science. by Banko and Moore (2004), these works made use of filtered dictionaries: dictionaries in which only relatively probable analyses of a given word are preserved. This kind of filtering requires serious supervision: in theory, an expert is needed to go over the dictionary elements and filter out unlikely analyses. In practice, counts from an annotated corpus have been traditionally used to perform the filtering. Furthermore, these methods require rather comprehensive dictionaries in order to perform well. In recent work, researchers try to address these deficiencies by using dictionaries with unfiltered POS-tags, and testing the methods on “diluted dictionaries” – in which many of the lexical entries are missing (Smith and Eisner, 2005) (SE), (Goldwater and Griffiths, 2007) (GG), (Toutanova and Johnson, 2008) (TJ). All the work mentioned above focuses on unsupervised English POS tagging. The dictionaries are all derived from tagged English corpora (all recent work uses the WSJ corpus). As such, the setting of the research is artificial: there is no reason to perform unsupervised learning when an annotated corpus is available. The problem is rather approached as a workbench for exploring new learning methods. The result is a series of creative algorithms, that have steadily improved results on the same dataset: unsupervised CRF training using contrastive estimation (SE), a fully-bayesian HMM model that jointly performs clustering and sequence learning (GG), and a Bayesian LDA-based model using only observed context features to predict tag words (TJ). 
These sophisticated learning algorithms all outperform the traditional baseline of EM-HMM based methods, 746 while relying on similar knowledge: the lexical context of the words to be tagged and their letter structure (e.g., presence of suffixes, capitalization and hyphenation).1 Our motivation for tackling unsupervised POS tagging is different: we are interested in developing a Hebrew POS tagger. We have access to a good Hebrew lexicon (and a morphological analyzer), and a fair amount of unlabeled training data, but hardly any annotated corpora. We actually report results on full morphological disambiguation for Hebrew, a task similar but more challenging than POS tagging: we deal with a tagset much larger than English (over 3,561 distinct tags) and an ambiguity level of about 2.7 per token as opposed to 1.4 for English. Instead of inventing a new learning framework, we go back to the traditional EM trained HMMs. We argue that the key challenge to learning an effective model is to define good enough initial conditions. Given sufficiently good initial conditions, EM trained models can yield highly competitive results. Such models have other benefits as well: they are simple, robust, and computationally more attractive. In this paper, we concentrate on methods for deriving sufficiently good initial conditions for EMHMM learning. Our method for learning initial conditions for the p(t|w) distributions relies on a mixture of language specific models: a paradigmatic model of similar words (where similar words are words with similar inflection patterns), simple syntagmatic constraints (e.g., the sequence V-V is extremely rare in English). These are complemented by a linear lexical context model. Such models are simple to build and test. We present results for unsupervised PoS tagging of Hebrew text and for the common WSJ English test sets. We show that our method achieves state-ofthe-art results for the English setting, even with a relatively small dictionary. Furthermore, while recent work report results on a reduced English tagset of 17 PoS tags, we also present results for the complete 45 tags tagset of the WSJ corpus. This considerably raises the bar of the EM-HMM baseline. We also report state-of-the-art results for Hebrew full mor1Another notable work, though within a slightly different framework, is the prototype-driven method proposed by (Haghighi and Klein, 2006), in which the dictionary is replaced with a very small seed of prototypical examples. phological disambiguation. Our primary conclusion is that the problem of learning effective stochastic classifiers remains primarily a search task. Initial conditions play a dominant role in solving this task and can rely on linguistically motivated approximations. A robust learning method (EM-HMM) combined with good initial conditions based on a robust feature set can go a long way (as opposed to a more complex learning method). It seems that computing initial conditions is also the right place to capture complex linguistic intuition without fear that over-generalization could lead a learner to diverge. 2 Previous Work The tagging accuracy of supervised stochastic taggers is around 96%–97% (Manning and Schutze, 1999). Merialdo (1994) reports an accuracy of 86.6% for an unsupervised token-based EMestimated HMM, trained on a corpus of about 1M words, over a tagset of 159 tags. 
Elworthy (1994), in contrast, reports accuracy of 75.49%, 80.87%, and 79.12% for unsupervised word-based HMM trained on parts of the LOB corpora, with a tagset of 134 tags. With (artificially created) good initial conditions, such as a good approximation of the tag distribution for each word, Elworthy reports an improvement to 94.6%, 92.27%, and 94.51% on the same data sets. Merialdo, on the other hand, reports an improvement to 92.6% and 94.4% for the case where 100 and 2,000 sentences of the training corpus are manually tagged. Later, Banko and Moore (2004) observed that earlier unsupervised HMM-EM results were artificially high due to use of Optimized Lexicons, in which only frequent-enough analyses of each word were kept. Brill (1995b) proposed an unsupervised tagger based on transformationbased learning (Brill, 1995a), achieving accuracies of above 95%. This unsupervised tagger relied on an initial step in which the most probable tag for each word is chosen. Optimized lexicons and Brill’s most-probable-tag Oracle are not available in realistic unsupervised settings, yet, they show that good initial conditions greatly facilitate learning. Recent work on unsupervised POS tagging for English has significantly improved the results on this task: GG, SE and most recently TJ report the best re747 sults so far on the task of unsupervised POS tagging of the WSJ with diluted dictionaries. With dictionaries as small as 1249 lexical entries the LDA-based method with a strong ambiguity-class model reaches POS accuracy as high as 89.7% on a reduced tagset of 17 tags. While these 3 methods rely on the same feature set (lexical context, spelling features) for the learning stage, the LDA approach bases its predictions entirely on observable features, and excludes the traditional hidden states sequence. In Hebrew, Levinger et al. (1995) introduced the similar-words algorithm for estimating p(t|w) from unlabeled data, which we describe below. Our method uses this algorithm as a first step, and refines the approximation by introducing additional linguistic constraints and an iterative refinement step. 3 Initial Conditions For EM-HMM The most common model for unsupervised learning of stochastic processes is Hidden Markov Models (HMM). For the case of tagging, the states correspond to the tags ti, and words wi are emitted each time a state is visited. The parameters of the model can be estimated by applying the Baum-Welch EM algorithm (Baum, 1972), on a large-scale corpus of unlabeled text. The estimated parameters are then used in conjunction with Viterbi search, to find the most probable sequence of tags for a given sentence. In this work, we follow Adler (2007) and use a variation of second-order HMM in which the probability of a tag is conditioned by the tag that precedes it and by the one that follows it, and the probability of an emitted word is conditioned by its tag and the tag that follows it2. In all experiments, we use the backoff smoothing method of (Thede and Harper, 1999), with additive smoothing (Chen, 1996) for the lexical probabilities. We investigate methods to approximate the initial parameters of the p(t|w) distribution, from which we obtain p(w|t) by marginalization and Bayesian inversion. We also experiment with constraining the p(t|t−1, t+1) distribution. 2Technically this is not Markov Model but a Dependency Net. However, bidirectional conditioning seem more suitable for language tasks, and in practice the learning and inference methods are mostly unaffected. 
See (Toutanova et al., 2003). General syntagmatic constraints We set linguistically motivated constraints on the p(t|t−1, t+1) distribution. In our setting, these are used to force the probability of some events to 0 (e.g., “Hebrew verbs can not be followed by the of preposition”). Morphology-based p(t|w) approximation Levinger et al. (1995) developed a context-free method for acquiring morpho-lexical probabilities (p(t|w)) from an untagged corpus. The method is based on language-specific rules for constructing a similar words (SW) set for each analysis of a word. This set is composed of morphological variations of the word under the given analysis. For example, the Hebrew tokenילדcan be analyzed as either a noun (boy) or a verb (gave birth). The noun SW set for this token is composed of the definiteness and number inflections( הילד,ילדים,הילדיםthe boy, boys, the boys), while the verb SW set is composed of gender and tense inflections( ילדה,ילדוshe/they gave birth). The approximated probability of each analysis is based on the corpus frequency of its SW set. For the complete details, refer to the original paper. Cucerzan and Yarowsky (2000) proposed a similar method for the unsupervised estimation of p(t|w) in English, relying on simple spelling features to characterize similar word classes. Linear-Context-based p(t|w) approximation The method of Levinger et al. makes use of Hebrew inflection patterns in order to estimate context free approximation of p(t|w) by relating a word to its different inflections. However, the context in which a word occurs can also be very informative with respect to its POS-analysis (Sch¨utze, 1995). We propose a novel algorithm for estimating p(t|w) based on the contexts in which a word occurs.3 The algorithm starts with an initial p(t|w) estimate, and iteratively re-estimates: ˆp(t|c) = P w∈W p(t|w)p(w|c) Z ˆp(t|w) = P c∈RELC p(t|c)p(c|w)allow(t, w) Z 3While we rely on the same intuition, our use of context differs from earlier works on distributional POS-tagging like (Sch¨utze, 1995), in which the purpose is to directly assign the possible POS for an unknown word. In contrast, our algorithm aims to improve the estimate for the whole distribution p(t|w), to be further disambiguated by the EM-HMM learner. 748 where Z is a normalization factor, W is the set of all words in the corpus, C is the set of all contexts, and RELC ⊆C is a set of reliable contexts, defined below. allow(t, w) is a binary function indicating whether t is a valid tag for w. p(c|w) and p(w|c) are estimated via raw corpus counts. Intuitively, we estimate the probability of a tag given a context as the average probability of a tag given any of the words appearing in that context, and similarly the probability of a tag given a word is the averaged probability of that tag in all the (reliable) contexts in which the word appears. At each round, we define RELC, the set of reliable contexts, to be the set of all contexts in which p(t|c) > 0 for at most X different ts. The method is general, and can be applied to different languages. The parameters to specify for each language are: the initial estimation p(t|w), the estimation of the allow relation for known and OOV words, and the types of contexts to consider. 4 Application to Hebrew In Hebrew, several words combine into a single token in both agglutinative and fusional ways. This results in a potentially high number of tags for each token. 
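A short sketch of the iterative re-estimation described in the previous section is given here for concreteness; the container types, the reliable-context threshold parameter and the bare normalization are simplifying assumptions of ours for illustration, not details of the authors' implementation.

from collections import defaultdict

def reestimate_p_t_w(p_t_w, p_w_c, p_c_w, allow, max_tags_per_context, rounds=8):
    # p_t_w : dict word -> {tag: prob}, current estimate of p(t|w)
    # p_w_c : dict context -> {word: prob}, relative frequencies p(w|c)
    # p_c_w : dict word -> {context: prob}, relative frequencies p(c|w)
    # allow : callable (tag, word) -> bool, the dictionary-based filter
    for _ in range(rounds):
        # p(t|c) = (1/Z) sum_w p(t|w) p(w|c)
        p_t_c = defaultdict(lambda: defaultdict(float))
        for c, words in p_w_c.items():
            for w, pwc in words.items():
                for t, ptw in p_t_w.get(w, {}).items():
                    p_t_c[c][t] += ptw * pwc
        for c, dist in p_t_c.items():
            z = sum(dist.values())
            if z > 0:
                for t in dist:
                    dist[t] /= z
        # RELC: contexts whose p(t|c) is non-zero for at most X tags
        relc = set(c for c, dist in p_t_c.items()
                   if sum(1 for p in dist.values() if p > 0) <= max_tags_per_context)
        # p(t|w) = (1/Z) sum_{c in RELC} p(t|c) p(c|w) allow(t, w)
        new_p_t_w = {}
        for w, contexts in p_c_w.items():
            dist = defaultdict(float)
            for c, pcw in contexts.items():
                if c in relc:
                    for t, ptc in p_t_c[c].items():
                        if allow(t, w):
                            dist[t] += ptc * pcw
            z = sum(dist.values())
            # keep the previous estimate if no reliable context supports w
            new_p_t_w[w] = {t: p / z for t, p in dist.items()} if z > 0 else p_t_w.get(w, {})
        p_t_w = new_p_t_w
    return p_t_w

With suitable definitions of the contexts, of allow(t, w) and of the initial p(t|w), the same routine covers both the Hebrew and the English settings described in the following sections.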
On average, in our corpus, the number of possible analyses per known word reached 2.7, with the ambiguity level of the extended POS tagset in corpus for English (1.41) (Dermatas and Kokkinakis, 1995). In this work, we use the morphological analyzer of MILA – Knowledge Center for Processing Hebrew (KC analyzer). In contrast to English tagsets, the number of tags for Hebrew, based on all combinations of the morphological attributes, can grow theoretically to about 300,000 tags. In practice, we found ‘only’ about 3,560 tags in a corpus of 40M tokens training corpus taken from Hebrew news material and Knesset transcripts. For testing, we manually tagged the text which is used in the Hebrew Treebank (Sima’an et al., 2001) (about 90K tokens), according to our tagging guidelines. 4.1 Initial Conditions General syntagmatic constraints We define 4 syntagmatic constraints over p(t|t−1, t+1): (1) a construct state form cannot be followed by a verb, preposition, punctuation, existential, modal, or copula; (2) a verb cannot be followed by the preposition ˇ שלsel (of), (3) copula and existential cannot be followed by a verb, and (4) a verb cannot be followed by another verb, unless one of them has a prefix, or the second verb is an infinitive, or the first verb is imperative and the second verb is in future tense.4 Morphology-Based p(t|w) approximation We extended the set of rules used in Levinger et al. , in order to support the wider tagset used by the KC analyzer: (1) The SW set for adjectives, copulas, existentials, personal pronouns, verbs and participles, is composed of all gender-number inflections; (2) The SW set for common nouns is composed of all number inflections, with definite article variation for absolute noun; (3) Prefix variations for proper nouns; (4) Gender variation for numerals; and (5) Gendernumber variation for all suffixes (possessive, nominative and accusative). Linear-Context-based p(t|w) approximation For the initial p(t|w) we use either a uniform distribution based on the tags allowed in the dictionary, or the estimate obtained by using the modified Levinger et al. algorithm. We use contexts of the form LR=w−1, w+1 (the neighbouring words). We estimate p(w|c) and p(c|w) via relative frequency over all the events w1, w2, w3 occurring at least 10 times in the corpus. allow(t, w) follows the dictionary. Because of the wide coverage of the Hebrew lexicon, we take RELC to be C (all available contexts). 4.2 Evaluation We run a series of experiments with 8 distinct initial conditions, as shown in Table 1: our baseline (Uniform) is the uniform distribution over all tags provided by the KC analyzer for each word. The Syntagmatic initial conditions add the p(t|t−1, t+1) constraints described above to the uniform baseline. The Morphology-Based and Linear-Context initial conditions are computed as described above, while the Morph+Linear is the result of applying the linear-context algorithm over initial values computed by the Morphology-based method. We repeat 4This rule was taken from Shacham and Wintner(2007). 
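Such hard constraints can be imposed simply by zeroing the corresponding entries of the p(t|t−1, t+1) table before EM training starts. The sketch below only illustrates that step; the forbidden predicate stands in for the four rules above, and the renormalization is our own simplification.

def apply_syntagmatic_constraints(p_t_given_neighbors, forbidden):
    # p_t_given_neighbors : dict (t_prev, t_next) -> {t: prob}
    # forbidden           : callable (t_prev, t, t_next) -> bool, encoding
    #                       rules such as "a construct-state form cannot be
    #                       followed by a verb"
    for (t_prev, t_next), dist in p_t_given_neighbors.items():
        for t in list(dist):
            if forbidden(t_prev, t, t_next):
                dist[t] = 0.0
        z = sum(dist.values())
        if z > 0:
            for t in dist:
                dist[t] /= z
    return p_t_given_neighbors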
749 Initial Condition Dist Context-Free EM-HMM Full Seg+Pos Full Seg+Pos Uniform 60 63.8 71.9 85.5 89.8 Syntagmatic Pair Constraints 60 / / 85.8 89.8 Init-Trans 60 / / 87.9 91 Morpho-Lexical Morph-Based 76.8 76.4 83.1 87.7 91.6 Linear-Context 70.1 75.4 82.6 85.3 89.6 Morph+Linear 79.8 79.0 85.5 88 92 PairConst+Morph Morph-Based / / / 87.6 91.4 Linear-Context / / / 84.5 89.0 Morph+Linear / / / 87.1 91.5 InitTrans+Morph Morph-Based / / / 89.2 92.3 Linear-Context / / / 87.7 90.9 Morph+Linear / / / 89.4 92.4 Table 1: Accuracy (%) of Hebrew Morphological Disambiguation and POS Tagging over various initial conditions these last 3 models with the addition of the syntagmatic constraints (Synt+Morph). For each of these, we first compare the computed p(t|w) against a gold standard distribution, taken from the test corpus (90K tokens), according to the measure used by (Levinger et al., 1995) (Dist). On this measure, we confirm that our improved morpholexical approximation improves the results reported by Levinger et al. from 74% to about 80% on a richer tagset, and on a much larger test set (90K vs. 3,400 tokens). We then report on the effectiveness of p(t|w) as a context-free tagger that assigns to each word the most likely tag, both for full morphological analysis (3,561 tags) (Full) and for the simpler task of token segmentation and POS tag selection (36 tags) (Seg+Pos). The best results on this task are 80.8% and 87.5% resp. achieved on the Morph+Linear initial conditions. Finally, we test effectiveness of the initial conditions with EM-HMM learning. We reach 88% accuracy on full morphological and 92% accuracy for POS tagging and word segmentation, for the Morph+Linear initial conditions. As expected, EM-HMM improves results (from 80% to 88%). Strikingly, EM-HMM improves the uniform initial conditions from 64% to above 85%. However, better initial conditions bring us much over this particular local maximum – with an error reduction of 20%. In all cases, the main improvement over the uniform baseline is brought by the morphology-based initial conditions. When applied on its own, the linear context brings modest improvement. But the combination of the paradigmatic morphology-based method with the linear context improves all measures. A most interesting observation is the detrimental contribution of the syntagmatic constraints we introduced. We found that 113,453 sentences of the corpus (about 5%) contradict these basic and apparently simple constraints. As an alternative to these common-sense constraints, we tried to use a small seed of randomly selected sentences (10K annotated tokens) in order to skew the initial uniform distribution of the state transitions. We initialize the p(t|t−1, t+1) distribution with smoothed ML estimates based on tag trigram and bigram counts (ignoring the tag-word annotations). This small seed initialization (InitTrans) has a great impact on accuracy. Overall, we reach 89.4% accuracy on full morphological and 92.4% accuracy for POS tagging and word segmentation, for the Morph+Linear conditions – an error reduction of more than 25% from the uniform distribution baseline. 5 Application to English We now apply the same technique to English semisupervised POS tagging. Recent investigations of this task use dictionaries derived from the Penn WSJ corpus, with a reduced tag set of 17 tags5 instead of the original 45-tags tagset. 
They experiment with full dictionaries (containing complete POS information for all the words in the text) as well as “diluted” dictionaries, from which large portions of the vocabulary are missing. These settings are very different from those used for Hebrew: the tagset is much smaller (17 vs. ∼3,560) and the dictionaries are either complete or extremely crippled. However, for the sake of comparison, we have reproduced the same experimental settings. We derive dictionaries from the complete WSJ corpus6, and the exact same diluted dictionaries used in SE, TJ and GG. 5ADJ ADV CONJ DET ENDPUNC INPUNC LPUNC RPUNC N POS PRT PREP PRT TO V VBG VBN WH 6The dictionary derived from the WSJ data is very noisy: many of the stop words get wrong analyses stemming from tagging mistakes (for instance, the word the has 6 possible analyses in the data-derived dictionary, which we checked manually and found all but DT erroneous). Such noise is not expected in a real world dictionary, and our algorithm is not designed to accomodate it. We corrected the entries for the 20 most frequent words in the corpus. This step could probably be done automatically, but we consider it to be a non-issue in any realistic setting. 750 Syntagmatic Constraints We indirectly incorporated syntagmatic constraints through a small change to the tagset. The 17-tags English tagset allows for V-V transitions. Such a construction is generally unlikely in English. By separating modals from the rest of the verbs, and creating an additional class for the 5 be verbs (am,is,are,was,were), we made such transition much less probable. The new 19-tags tagset reflects the “verb can not follow a verb” constraint. Morphology-Based p(t|w) approximation English morphology is much simpler compared to that of Hebrew, making direct use of the Levinger context free approximation impossible. However, some morphological cues exist in English as well, in particular common suffixation patterns. We implemented our morphology-based context-free p(t|w) approximation for English as a special case of the linear context-based algorithm described in Sect.3. Instead of generating contexts based on neighboring words, we generate them using the following 5 morphological templates: suff=S The word has suffix S (suff=ing). L+suff=W,S The word appears just after word W, with suffix S (L+suff=have,ed). R+suff=S,W The word appears just before word W, with suffix S (R+suff=ing,to) wsuf=S1,S2 The word suffix is S1, the same stem is seen with suffix S2 (wsuf=ϵ,s). suffs=SG The word stem appears with the SG group of suffixes (suffs=ed,ing,s). We consider a word to have a suffix only if the word stem appears with a different suffix somewhere in the text. We implemented a primitive stemmer for extracting the suffixes while preserving a usable stem by taking care of few English orthography rules (handling, e.g., , bigger →big er, nicer →nice er, happily →happy ly, picnicking →picnic ing). For the immediate context W in the templates L+suff,R+suff, we consider only the 20 most frequent tokens in the corpus. Linear-Context-based p(t|w) approximation We expect the context based approximation to be particularly useful in English. We use the following 3 context templates: LL=w−2,w−1, LR=w−1,w+1 and RR=w+1,w+2. We estimate p(w|c) and p(c|w) by relative frequency over word triplets occurring at least twice in the unannotated training corpus. 
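To make the template definitions concrete, the following sketch generates the five morphology-based contexts for a token. The suffix list, the crude splitter and the helper names are illustrative assumptions and are far simpler than the stemmer described above.

def split_suffix(word, stem_suffixes, suffixes=("ing", "ed", "s", "ly", "er")):
    # rough splitter: S counts as a suffix only if the resulting stem is
    # also observed with some suffix elsewhere in the text
    for s in suffixes:
        if word.endswith(s) and len(word) > len(s) + 2 and word[:-len(s)] in stem_suffixes:
            return word[:-len(s)], s
    return word, None

def morph_contexts(word, prev_word, next_word, stem_suffixes, frequent_words):
    # stem_suffixes  : dict stem -> set of suffixes seen with that stem
    # frequent_words : the most frequent tokens, used for L+suff / R+suff
    stem, suff = split_suffix(word, stem_suffixes)
    if suff is None:
        return []
    contexts = ["suff=%s" % suff]
    if prev_word in frequent_words:
        contexts.append("L+suff=%s,%s" % (prev_word, suff))
    if next_word in frequent_words:
        contexts.append("R+suff=%s,%s" % (suff, next_word))
    for other in stem_suffixes.get(stem, ()):
        if other != suff:
            contexts.append("wsuf=%s,%s" % (suff, other))
    contexts.append("suffs=%s" % ",".join(sorted(stem_suffixes.get(stem, ()))))
    return contexts

These context strings are then handled exactly like the linear contexts in the iterative re-estimation, so the morphology-based approximation is indeed a special case of the same algorithm.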
Combined p(t|w) approximation This approximation combines the morphological and linear context approximations by using all the abovementioned context templates together in the iterative process. For all three p(t|w) approximations, we take RELC to be contexts containing at most 4 tags. allow(t, w) follows the dictionary for known words, and is the set of all open-class POS for unknown words. We take the initial p(t|w) for each w to be uniform over all the dictionary specified tags for w. Accordingly, the initial p(t|w) = 0 for w not in the dictionary. We run the process for 8 iterations.7 Diluted Dictionaries and Unknown Words Some of the missing dictionary elements are assigned a set of possible POS-tags and corresponding probabilities in the p(t|w) estimation process. Other unknown tokens remain with no analysis at the end of the initial process computation. For these missing elements, we assign an ambiguity class by a simple ambiguity-class guesser, and set p(t|w) to be uniform over all the tags in the ambiguity class. Our ambiguity-class guesser assigns for each word the set of all open-class tags that appeared with the word suffix in the dictionary. The word suffix is the longest (up to 3 characters) suffix of the word that also appears in the top-100 suffixes in the dictionary. Taggers We test the resulting p(t|w) approximation by training 2 taggers: CF-Tag, a context-free tagger assigning for each word its most probable POS according to p(t|w), with a fallback to the most probable tag in case the word does not appear in the dictionary or if ∀t, p(t|w) = 0. EM-HMM, a second-order EM-HMM initialized with the estimated p(t|w). Baselines As baseline, we use two EM-trained HMM taggers, initialized with a uniform p(t|w) for every word, based on the allowed tags in the dictionary. For words not in the dictionary, we take the allowed tags to be either all the open-class POS 7This is the first value we tried, and it seems to work fine. We haven’t experimented with other values. The same applies for the choice of 4 as the RELC threshold. 751 (uniform(oc)) or the allowed tags according to our simple ambiguity-class guesser (uniform(suf)). All the p(t|w) estimates and HMM models are trained on the entire WSJ corpus. We use the same 24K word test-set as used in SE, TJ and GG, as well as the same diluted dictionaries. We report the results on the same reduced tagsets for comparison, but also include the results on the full 46 tags tagset. 5.1 Results Table 2 summarizes the results of our experiments. Uniform initialization based on the simple suffixbased ambiguity class guesser yields big improvements over the uniform all-open-class initialization. However, our refined initial conditions always improve the results (by as much as 40% error reduction). As expected, the linear context is much more effective than the morphological one, especially with richer dictionaries. This seem to indicate that in English the linear context is better at refining the estimations when the ambiguity classes are known, while the morphological context is in charge of adding possible tags when the ambiguity classes are not known. Furthermore, the benefit of the morphology-context is bigger for the complete tagset setting, indicating that, while the coarsegrained POS-tags are indicated by word distribution, the finer distinctions are indicated by inflections and orthography. The combination of linear and morphology contexts is always beneficial. 
Syntagmatic constraints (e.g., separating be verbs and modals from the rest of the verbs) constantly improve results by about 1%. Note that the context-free tagger based on our p(t|w) estimates is quite accurate. As with the EM trained models, combining linear and morphological contexts is always beneficial. To put these numbers in context, Table 3 lists current state-of-the art results for the same task. CE+spl is the Contrastive-Estimation CRF method of SE. BHMM is the completely Bayesian-HMM of GG. PLSA+AC, LDA, LDA+AC are the models presented in TJ, LDA+AC is a Bayesian model with a strong ambiguity class (AC) component, and is the current state-of-the-art of this task. The other models are variations excluding the Bayesian components (PLSA+AC) or the ambiguity class. While our models are trained on the unannotated text of the entire WSJ Treebank, CE and BHMM use much less training data (only the 24k words of the test-set). However, as noted by TJ, there is no reason one should limit the amount of unlabeled data used, and in addition other results reported in GG,SE show that accuracy does not seem to improve as more unlabeled data are used with the models. We also report results for training our EM-HMM tagger on the smaller dataset (the p(t|w) estimation is still based on the entire unlabeled WSJ). All the abovementioned models follow the assumption that all 17 tags are valid for the unknown words. In contrast, we restrict the set of allowed tags for an unknown word to open-class tags. Closed class words are expected to be included in a dictionary, even a small one. The practice of allowing only open-class tags for unknown words goes back a long way (Weischedel et al., 1993), and proved highly beneficial also in our case. Notice that even our simplest models, in which the initial p(t|w) distribution for each w is uniform, already outperform most of the other models, and, in the case of the diluted dictionaries, by a wide margin. Similarly, given the p(t|w) estimate, EMHMM training on the smaller dataset (24k) is still very competitive (yet results improve with more unlabeled data). When we use our refined p(t|w) distribution as the basis of EM-HMM training, we get the best results for the complete dictionary case. With the diluted dictionaries, we are outperformed only by LDA+AC. As we outperform this model in the complete dictionary case, it seems that the advantage of this model is due to its much stronger ambiguity class model, and not its Bayesian components. Also note that while we outperform this model when using the 19-tags tagset, it is slightly better in the original 17-tags setting. It could be that the reliance of the LDA models on observed surface features instead of hidden state features is beneficial avoiding the misleading V-V transitions. We also list the performance of our best models with a slightly more realistic dictionary setting: we take our dictionary to include information for all words occurring in section 0-18 of the WSJ corpus (43208 words). We then train on the entire unannotated corpus, and test on sections 22-24 – the standard train/test split for supervised English POS tagging. We achieve accuracy of 92.85% for the 19tags set, and 91.3% for the complete 46-tags tagset. 
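For completeness, the suffix-based ambiguity-class guesser used for unknown words above can be reconstructed from its description as follows; this is our reading of that description, with hypothetical names, not the authors' code.

from collections import Counter, defaultdict

def build_suffix_classes(dictionary, open_class_tags, max_len=3, top_k=100):
    # dictionary: dict word -> set of POS tags
    suffix_counts = Counter()
    for word in dictionary:
        for i in range(1, min(max_len, len(word) - 1) + 1):
            suffix_counts[word[-i:]] += 1
    top_suffixes = set(s for s, _ in suffix_counts.most_common(top_k))
    classes = defaultdict(set)
    for word, tags in dictionary.items():
        for i in range(1, min(max_len, len(word) - 1) + 1):
            suf = word[-i:]
            if suf in top_suffixes:
                classes[suf] |= set(tags) & set(open_class_tags)
    return top_suffixes, classes

def guess_ambiguity_class(word, top_suffixes, classes, open_class_tags, max_len=3):
    # the longest suffix (up to 3 characters) that is among the top-100
    # dictionary suffixes determines the ambiguity class
    for i in range(min(max_len, len(word)), 0, -1):
        suf = word[-i:]
        if suf in top_suffixes and classes.get(suf):
            return classes[suf]
    return set(open_class_tags)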
752 Initial Conditions Full dict ≥2 dict ≥3 dict (49206 words) (2141 words) (1249 words) CF-Tag EM-HMM CF-Tag EM-HMM CF-Tag EM-HMM Uniform(oc) 81.7 88.7 68.4 81.9 62.5 79.6 Uniform(suf) NA NA 76.8 83.4 76.9 81.6 17tags Morph-Cont 82.2 88.6 73.3 83.9 69.1 81.7 Linear-Cont 90.1 92.9 81.1 87.8 78.3 85.8 Combined-Cont 89.9 93.3 83.1 88.5 81.1 86.4 Uniform(oc) 79.9 91.0 66.6 83.4 60.7 84.7 Uniform(suf) NA NA 75.1 86.5 73.1 86.7 19tags Morph-Cont 80.5 89.2 71.5 86.5 67.5 87.1 Linear-Cont 88.4 93.7 78.9 89.0 76.3 86.9 Combined-Cont 88.0 93.8 81.1 89.4 79.2 87.4 Uniform(oc) 76.7 88.3 61.2 * 55.7 * Uniform(suf) NA NA 64.2 81.9 60.3 79.8 46tags Morph-Cont 74.8 88.8 65.6 83.0 61.9 80.3 Linear-Cont 85.5 91.2 74.5 84.0 70.1 82.2 Combined-Cont 85.9 91.4 75.4 85.5 72.4 83.3 Table 2: Accuracy (%) of English POS Tagging over various initial conditions Dict InitEM-HMM (24k) LDA LDA+AC PLSA+AC CE+spl BHMM Full 93.8 (91.1) 93.4 93.4 89.7 88.7 87.3 ≥2 89.4 (87.9) 87.4 91.2 87.8 79.5 79.6 ≥3 87.4 (85.9) 85 89.7 85.9 78.4 71 Table 3: Comparison of English Unsupervised POS Tagging Methods 6 Conclusion We have demonstrated that unsupervised POS tagging can reach good results using the robust EMHMM learner when provided with good initial conditions, even with incomplete dictionaries. We presented a general family of algorithms to compute effective initial conditions: estimation of p(t|w) relying on an iterative process shifting probabilities between words and their contexts. The parameters of this process (definition of the contexts and initial estimations of p(t|w) can safely encapsulate rich linguistic intuitions. While recent work, such as GG, aim to use the Bayesian framework and incorporate “linguistically motivated priors”, in practice such priors currently only account for the fact that language related distributions are sparse - a very general kind of knowledge. In contrast, our method allow the incorporation of much more fine-grained intuitions. We tested the method on the challenging task of full morphological disambiguation in Hebrew (which was our original motivation) and on the standard WSJ unsupervised POS tagging task. In Hebrew, our model includes an improved version of the similar words algorithm of (Levinger et al., 1995), a model of lexical context, and a small set of tag ngrams. The combination of these knowledge sources in the initial conditions brings an error reduction of more than 25% over a strong uniform distribution baseline. In English, our model is competitive with recent state-of-the-art results, while using simple and efficient learning methods. The comparison with other algorithms indicates directions of potential improvement: (1) our initialconditions method might benefit the other, more sophisticated learning algorithms as well. (2) Our models were designed under the assumption of a relatively complete dictionary. As such, they are not very good at assigning ambiguity-classes to OOV tokens when starting with a very small dictionary. While we demonstrate competitive results using a simple suffix-based ambiguity-class guesser which ignores capitalization and hyphenation information, we believe there is much room for improvement in this respect. In particular, (Haghighi and Klein, 2006) presents very strong results using a distributional-similarity module and achieve impressive tagging accuracy while starting with a mere 116 prototypical words. 
Experimenting with combining similar models (as well as TJ’s ambiguity class model) with our p(t|w) distribution estimation method is an interesting research direction. 753 References Meni Adler. 2007. Hebrew Morphological Disambiguation: An Unsupervised Stochastic Word-based Approach. Ph.D. thesis, Ben-Gurion University of the Negev, Beer-Sheva, Israel. Michele Banko and Robert C. Moore. 2004. Part-ofspeech tagging in context. In Proceedings of Coling 2004, pages 556–561, Geneva, Switzerland, Aug 23– Aug 27. COLING. Leonard E. Baum. 1972. An inequality and associated maximization technique in statistical estimation for probabilistic functions of a Markov process. Inequalities, 3:1–8. Eric Brill. 1995a. Transformation-based error-driven learning and natural languge processing: A case study in part-of-speech tagging. Computational Linguistics, 21:543–565. Eric Brill. 1995b. Unsupervised learning of disambiguation rules for part of speech tagging. In David Yarovsky and Kenneth Church, editors, Proceedings of the Third Workshop on Very Large Corpora, pages 1–13, Somerset, New Jersey. Association for Computational Linguistics. Stanley F. Chen. 1996. Building Probabilistic Models for Natural Language. Ph.D. thesis, Harvard University, Cambridge, MA. Silviu Cucerzan and David Yarowsky. 2000. Language independent, minimally supervised induction of lexical probabilities. In ACL ’00: Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, pages 270–277, Morristown, NJ, USA. Association for Computational Linguistics. Evangelos Dermatas and George Kokkinakis. 1995. Automatic stochastic tagging of natural language texts. Computational Linguistics, 21(2):137–163. David Elworthy. 1994. Does Baum-Welch re-estimation help taggers? In Proceeding of ANLP-94. Sharon Goldwater and Thomas L. Griffiths. 2007. A fully bayesian approach to unsupervised part-ofspeech tagging. In Proceeding of ACL 2007, Prague, Czech Republic. Aria Haghighi and Dan Klein. 2006. Prototype-driven learning for sequence models. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 320– 327, Morristown, NJ, USA. Association for Computational Linguistics. J. Kupiec. 1992. Robust part-of-speech tagging using hidden Markov model. Computer Speech and Language, 6:225–242. Moshe Levinger, Uzi Ornan, and Alon Itai. 1995. Learning morpholexical probabilities from an untagged corpus with an application to Hebrew. Computational Linguistics, 21:383–404. Christopher D. Manning and Hinrich Schutze. 1999. Foundation of Statistical Language Processing. MIT Press. Bernard Merialdo. 1994. Tagging English text with probabilistic model. Computational Linguistics, 20:155–171. Hinrich Sch¨utze. 1995. Distributional part-of-speech tagging. In Proceedings of the seventh conference on European chapter of the Association for Computational Linguistics, pages 141–148, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Danny Shacham and Shuly Wintner. 2007. Morphological disambiguation of hebrew: A case study in classifier combination. In Proceeding of EMNLP-07, Prague, Czech. Khalil Sima’an, Alon Itai, Alon Altman Yoad Winter, and Noa Nativ. 2001. Building a tree-bank of modern Hebrew text. Journal Traitement Automatique des Langues (t.a.l.). Special Issue on NLP and Corpus Linguistics. Noah A. Smith and Jason Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. 
In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 354–362, Ann Arbor, Michigan, June. Scott M. Thede and Mary P. Harper. 1999. A secondorder hidden Markov model for part-of-speech tagging. In Proceeding of ACL-99. Kristina Toutanova and Mark Johnson. 2008. A bayesian lda-based model for semi-supervised part-of-speech tagging. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20. MIT Press, Cambridge, MA. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In HLTNAACL. R. Weischedel, R. Schwartz, J. Palmucci, M. Meteer, and L. Ramshaw. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19:359–382. 754
Proceedings of ACL-08: HLT, pages 755–762, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Distributed Word Clustering for Large Scale Class-Based Language Modeling in Machine Translation Jakob Uszkoreit∗Thorsten Brants Google, Inc. 1600 Amphitheatre Parkway Mountain View, CA 94303, USA {uszkoreit,brants}@google.com Abstract In statistical language modeling, one technique to reduce the problematic effects of data sparsity is to partition the vocabulary into equivalence classes. In this paper we investigate the effects of applying such a technique to higherorder n-gram models trained on large corpora. We introduce a modification of the exchange clustering algorithm with improved efficiency for certain partially class-based models and a distributed version of this algorithm to efficiently obtain automatic word classifications for large vocabularies (>1 million words) using such large training corpora (>30 billion tokens). The resulting clusterings are then used in training partially class-based language models. We show that combining them with wordbased n-gram models in the log-linear model of a state-of-the-art statistical machine translation system leads to improvements in translation quality as indicated by the BLEU score. 1 Introduction A statistical language model assigns a probability P(w) to any given string of words wm 1 = w1, ..., wm. In the case of n-gram language models this is done by factoring the probability: P(wm 1 ) = m Y i=1 P(wi|wi−1 1 ) and making a Markov assumption by approximating this by: m Y i=1 P(wi|wi−1 1 ) ≈ m Y i=1 p(wi|wi−1 i−n+1) Even after making the Markov assumption and thus treating all strings of preceding words as equal which ∗Parts of this research were conducted while the author studied at the Berlin Institute of Technology do not differ in the last n −1 words, one problem ngram language models suffer from is that the training data is too sparse to reliably estimate all conditional probabilities P(wi|wi−1 1 ). Class-based n-gram models are intended to help overcome this data sparsity problem by grouping words into equivalence classes rather than treating them as distinct words and thus reducing the number of parameters of the model (Brown et al., 1990). They have often been shown to improve the performance of speech recognition systems when combined with word-based language models (Martin et al., 1998; Whittaker and Woodland, 2001). However, in the area of statistical machine translation, especially in the context of large training corpora, fewer experiments with class-based n-gram models have been performed with mixed success (Raab, 2006). Class-based n-gram models have also been shown to benefit from their reduced number of parameters when scaling to higher-order n-grams (Goodman and Gao, 2000), and even despite the increasing size and decreasing sparsity of language model training corpora (Brants et al., 2007), class-based n-gram models might lead to improvements when increasing the n-gram order. When training class-based n-gram models on large corpora and large vocabularies, one of the problems arising is the scalability of the typical clustering algorithms used for obtaining the word classification. 
Most often, variants of the exchange algorithm (Kneser and Ney, 1993; Martin et al., 1998) or the agglomerative clustering algorithm presented in (Brown et al., 1990) are used, both of which have prohibitive runtimes when clustering large vocabularies on the basis of large training corpora with a sufficiently high number of classes. In this paper we introduce a modification of the exchange algorithm with improved efficiency and then present a distributed version of the modified algorithm, which makes it feasible to obtain word clas755 sifications using billions of tokens of training data. We then show that using partially class-based language models trained using the resulting classifications together with word-based language models in a state-of-the-art statistical machine translation system yields improvements despite the very large size of the word-based models used. 2 Class-Based Language Modeling By partitioning all Nv words of the vocabulary into Nc sets, with c(w) mapping a word onto its equivalence class and c(wj i ) mapping a sequence of words onto the sequence of their respective equivalence classes, a typical class-based n-gram model approximates P(wi|wi−1 1 ) with the two following component probabilities: P(wi|wi−1 1 ) ≈p0(wi|c(wi)) · p1(c(wi)|c(wi−1 i−n+1)) (1) thus greatly reducing the number of parameters in the model, since usually Nc is much smaller than Nv. In the following, we will call this type of model a two-sided class-based model, as both the history of each n-gram, the sequence of words conditioned on, as well as the predicted word are replaced by their class. Once a partition of the words in the vocabulary is obtained, two-sided class-based models can be built just like word-based n-gram models using existing infrastructure. In addition, the size of the model is usually greatly reduced. 2.1 One-Sided Class-Based Models Two-sided class-based models received most attention in the literature. However, several different types of mixed word and class models have been proposed for the purpose of improving the performance of the model (Goodman, 2000), reducing its size (Goodman and Gao, 2000) as well as lowering the complexity of related clustering algorithms (Whittaker and Woodland, 2001). In (Emami and Jelinek, 2005) a clustering algorithm is introduced which outputs a separate clustering for each word position in a trigram model. In the experimental evaluation, the authors observe the largest improvements using a specific clustering for the last word of each trigram but no clustering at all for the first two word positions. Generalizing this leads to arbitrary order class-based n-gram models of the form: P(wi|wi−1 1 ) ≈p0(wi|c(wi)) · p1(c(wi)|wi−1 i−n+1) (2) which we will call predictive class-based models in the following sections. 3 Exchange Clustering One of the frequently used algorithms for automatically obtaining partitions of the vocabulary is the exchange algorithm (Kneser and Ney, 1993; Martin et al., 1998). Beginning with an initial clustering, the algorithm greedily maximizes the log likelihood of a two-sided class bigram or trigram model as described in Eq. (1) on the training data. Let V be the set of words in the vocabulary and C the set of classes. 
This then leads to the following optimization criterion, where N(w) and N(c) denote the number of occurrences of a word w or a class c in the training data and N(c, d) denotes the number of occurrences of some word in class c followed by a word in class d in the training data: ˆC = argmax C X w∈V N(w) · log N(w) + + X c∈C,d∈C N(c, d) · log N(c, d) − −2 · X c∈C N(c) · log N(c) (3) The algorithm iterates over all words in the vocabulary and tentatively moves each word to each cluster. The change in the optimization criterion is computed for each of these tentative moves and the exchange leading to the highest increase in the optimization criterion (3) is performed. This procedure is then repeated until the algorithm reaches a local optimum. To be able to efficiently calculate the changes in the optimization criterion when exchanging a word, the counts in Eq. (3) are computed once for the initial clustering, stored, and afterwards updated when a word is exchanged. Often only a limited number of iterations are performed, as letting the algorithm terminate in a local optimum can be computationally impractical. 3.1 Complexity The implementation described in (Martin et al., 1998) uses a memory saving technique introducing a binary search into the complexity estimation. For the sake of simplicity, we omit this detail in the following complexity analysis. We also do not employ this optimization in our implementation. The worst case complexity of the exchange algorithm is quadratic in the number of classes. However, 756 Input: The fixed number of clusters Nc Compute initial clustering while clustering changed in last iteration do forall w ∈V do forall c ∈C do move word w tentatively to cluster c compute updated optimization criterion move word w to cluster maximizing optimization criterion Algorithm 1: Exchange Algorithm Outline the average case complexity can be reduced by updating only the counts which are actually affected by moving a word from one cluster to another. This can be done by considering only those sets of clusters for which N(w, c) > 0 or N(c, w) > 0 for a word w about to be exchanged, both of which can be calculated efficiently when exchanging a word. The algorithm scales linearly in the size of the vocabulary. With N pre c and N suc c denoting the average number of clusters preceding and succeeding another cluster, B denoting the number of distinct bigrams in the training corpus, and I denoting the number of iterations, the worst case complexity of the algorithm is in: O(I · (2 · B + Nv · Nc · (N pre c + N suc c ))) When using large corpora with large numbers of bigrams the number of required updates can increase towards the quadratic upper bound as N pre c and N suc c approach Nc. For a more detailed description and further analysis of the complexity, the reader is referred to (Martin et al., 1998). 4 Predictive Exchange Clustering Modifying the exchange algorithm in order to optimize the log likelihood of a predictive class bigram model, leads to substantial performance improvements, similar to those previously reported for another type of one-sided class model in (Whittaker and Woodland, 2001). We use a predictive class bigram model as given in Eq. 
(2), for which the maximum-likelihood probability estimates for the n-grams are given by their relative frequencies: P(wi|wi−1 1 ) ≈ p0(wi|c(wi)) · p1(c(wi)|wi−1)(4) = N(wi) N(c(wi)) · N(wi−1, c(wi)) N(wi−1) (5) where N(w) again denotes the number of occurrences of the word w in the training corpus and N(v, c) the number of occurrences of the word v followed by some word in class c. Then the following optimization criterion can be derived, with F(C) being the log likelihood function of the predictive class bigram model given a clustering C: F(C) = X w∈V N(w) · log p(w|c(w)) + X v∈V,c∈C N(v, c) · log p(c|v) (6) = X w∈V N(w) · log N(w) N(c(w)) + X v∈V,c∈C N(v, c) · log N(v, c) N(v) (7) = X w∈V N(w) · log N(w) − X w∈V N(w) · log N(c(w)) + X v∈V,c∈C N(v, c) · log N(v, c) − X v∈V,c∈C N(v, c) · log N(v) (8) The very last summation of Eq. (8) now effectively sums over all occurrences of all words and thus cancels out with the first summation of (8) which leads to: F(C) = X v∈V,c∈C N(v, c) · log N(v, c) − X w∈V N(w) · log N(c(w)) (9) In the first summation of Eq. (9), for a given word v only the set of classes which contain at least one word w for which N(v, w) > 0 must be considered, denoted by suc(v). The second summation is equivalent to P c∈C N(c) · log N(c). Thus the further simplified criterion is: F(C) = X v∈V,c∈suc(v) N(v, c) · log N(v, c) − X c∈C N(c) · log N(c) (10) When exchanging a word w between two classes c and c′, only two summands of the second summation of Eq. (10) are affected. The first summation can be updated by iterating over all bigrams ending in the exchanged word. Throughout one iteration of the algorithm, in which for each word in the vocabulary each possible move to another class is evaluated, this 757 amounts to the number of distinct bigrams in the training corpus B, times the number of clusters Nc. Thus the worst case complexity using the modified optimization criterion is in: O(I · Nc · (B + Nv)) Using this optimization criterion has two effects on the complexity of the algorithm. The first difference is that in contrast to the exchange algorithm using a two sided class-based bigram model in its optimization criterion, only two clusters are affected by moving a word. Thus the algorithm scales linearly in the number of classes. The second difference is that B dominates the term B + Nv for most corpora and scales far less than linearly with the vocabulary size, providing a significant performance advantage over the other optimization criterion, especially when large vocabularies are used (Whittaker and Woodland, 2001). For efficiency reasons, an exchange of a word between two clusters is separated into a remove and a move procedure. In each iteration the remove procedure only has to be called once for each word, while for a given word move is called once for every cluster to compute the consequences of the tentative exchanges. An outline of the move procedure is given below. The remove procedure is similar. 
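To make the bookkeeping behind Eq. (10) explicit, the total change in the criterion caused by exchanging a word between two clusters can be computed from the affected counts alone. The sketch below uses hypothetical count tables and treats the exchange as a single step rather than the separate remove and move procedures whose outline follows; it is an illustration, not a transcription of the authors' implementation.

import math

def xlogx(x):
    return x * math.log(x) if x > 0 else 0.0

def exchange_delta(w, src, dst, N_w, N_c, N_vw, N_vc, pred):
    # counts refer to the current clustering, in which w belongs to src
    # N_w[w]     : unigram count of word w
    # N_c[c]     : total count of cluster c
    # N_vw[v][w] : count of the bigram (v, w)
    # N_vc[v][c] : count of word v followed by any word in cluster c
    # pred[w]    : the words v with N(v, w) > 0
    delta = 0.0
    # second term of Eq. (10): -sum_c N(c) log N(c)
    delta += xlogx(N_c[src]) + xlogx(N_c[dst])
    delta -= xlogx(N_c[src] - N_w[w]) + xlogx(N_c[dst] + N_w[w])
    # first term: only clusters succeeding a predecessor of w are affected
    for v in pred[w]:
        n = N_vw[v][w]
        delta -= xlogx(N_vc[v].get(src, 0)) + xlogx(N_vc[v].get(dst, 0))
        delta += xlogx(N_vc[v].get(src, 0) - n) + xlogx(N_vc[v].get(dst, 0) + n)
    return delta

The word is then moved to the cluster maximizing this delta, as in the exchange loop of Algorithm 1.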
Input: A word w, and a destination cluster c Result: The change in the optimization criterion when moving w to cluster c delta ←N(c) · log N(c) N ′(c) ←N(c) −N(w) delta ←delta −N ′(c) · log N ′(c) if not a tentative move then N(c) ←N ′(c) forall v ∈suc(w) do delta ←delta −N(v, c) · log N(v, c) N ′(v, c) ←N(v, c) −N(v, w) delta ←delta + N ′(v, c) · log N ′(v, c) if not a tentative move then N(v, c) ←N ′(v, c) return delta Procedure MoveWord 5 Distributed Clustering When training on large corpora, even the modified exchange algorithm would still require several days if not weeks of CPU time for a sufficient number of iterations. To overcome this we introduce a novel distributed exchange algorithm, based on the modified exchange algorithm described in the previous section. The vocabulary is randomly partitioned into sets of roughly equal size. With each word w in one of these sets, all words v preceding w in the corpus are stored with the respective bigram count N(v, w). The clusterings generated in each iteration as well as the initial clustering are stored as the set of words in each cluster, the total number of occurrences of each cluster in the training corpus, and the list of words preceeding each cluster. For each word w in the predecessor list of a given cluster c, the number of times w occurs in the training corpus before any word in c, N(w, c), is also stored. Together with the counts stored with the vocabulary partitions, this allows for efficient updating of the terms in Eq. (10). The initial clustering together with all the required counts is created in an initial iteration by assigning the n-th most frequent word to cluster n mod Nc. While (Martin et al., 1998) and (Emami and Jelinek, 2005) observe that the initial clustering does not seem to have a noticeable effect on the quality of the resulting clustering or the convergence rate, the intuition behind this method of initialization is that it is unlikely for the most frequent words to be clustered together due to their high numbers of occurrences. In each subsequent iteration each one of a number of workers is assigned one of the partitions of the words in the vocabulary. After loading the current clustering, it then randomly chooses a subset of these words of a fixed size. For each of the selected words the worker then determines to which cluster the word is to be moved in order to maximize the increase in log likelihood, using the count updating procedures described in the previous section. All changes a worker makes to the clustering are accumulated locally in delta data structures. At the end of the iteration all deltas are merged and applied to the previous clustering, resulting in the complete clustering loaded in the next iteration. This algorithm fits well into the MapReduce programming model (Dean and Ghemawat, 2004) that we used for our implementation. 5.1 Convergence While the greedy non-distributed exchange algorithm is guaranteed to converge as each exchange increases the log likelihood of the assumed bigram model, this is not necessarily true for the distributed exchange algorithm. This stems from the fact that the change in log likelihood is calculated by each worker under the assumption that no other changes to the clustering are performed by other workers in 758 this iteration. 
However, if in each iteration only a rather small and randomly chosen subset of all words are considered for exchange, the intuition is that the remaining words still define the parameters of each cluster well enough for the algorithm to converge. In (Emami and Jelinek, 2005) the authors observe that only considering a subset of the vocabulary of half the size of the complete vocabulary in each iteration does not affect the time required by the exchange algorithm to converge. Yet each iteration is sped up by approximately a factor of two. The quality of class-based models trained using the resulting clusterings did not differ noticeably from those trained using clusterings for which the full vocabulary was considered in each iteration. Our experiments showed that this also seems to be the case for the distributed exchange algorithm. While considering very large subsets of the vocabulary in each iteration can cause the algorithm to not converge at all, considering only a very small fraction of the words for exchange will increase the number of iterations required to converge. In experiments we empirically determined that choosing a subset of roughly a third of the size of the full vocabulary is a good balance in this trade-off. We did not observe the algorithm to not converge unless we used fractions above half of the vocabulary size. We typically ran the clustering for 20 to 30 iterations after which the number of words exchanged in each iteration starts to stabilize at less than 5 percent of the vocabulary size. Figure 1 shows the number of words exchanged in each of 34 iterations when clustering the approximately 300,000 word vocabulary of the Arabic side of the English-Arabic parallel training data into 512 and 2,048 clusters. Despite a steady reduction in the number of words exchanged per iteration, we observed the convergence in regards to log-likelihood to be far from monotone. In our experiments we were able to achieve significantly more monotone and faster convergence by employing the following heuristic. As described in Section 5, we start out the first iteration with a random partition of the vocabulary into subsets each assigned to a specific worker. However, instead of keeping this assignment constant throughout all iterations, after each iteration the vocabulary is partitioned anew so that all words from any given cluster are considered by the same worker in the next iteration. The intuition behind this heuristic is that as the clustering becomes more coherent, the information each worker has about groups of similar words is becoming increasingly accurate. In our experiments this heuristic lead to almost monotone convergence in log-likelihood. It also reduced the 0 10000 20000 30000 40000 50000 60000 70000 80000 90000 100000 0 5 10 15 20 25 30 35 words exchanged iteration 512 clusters 2048 clusters Figure 1: Number of words exchanged per iteration when clustering the vocabulary of the Arabic side of the English-Arabic parallel training data (347 million tokens). number of iterations required to converge by up to a factor of three. 5.2 Resource Requirements The runtime of the distributed exchange algorithm depends highly on the number of distinct bigrams in the training corpus. When clustering the approximately 1.5 million word vocabulary of a 405 million token English corpus into 1,000 clusters, one iteration takes approximately 5 minutes using 50 workers based on standard hardware running the Linux operating system. 
When clustering the 0.5 million most frequent words in the vocabulary of an English corpus with 31 billion tokens into 1,000 clusters, one iteration takes approximately 30 minutes on 200 workers. When scaling up the vocabulary and corpus sizes, the current bottleneck of our implementation is loading the current clustering into memory. While the memory requirements decrease with each iteration, during the first few iterations a worker typically still needs approximately 2 GB of memory to load the clustering generated in the previous iteration when training 1,000 clusters on the 31 billion token corpus. 6 Experiments We trained a number of predictive class-based language models on different Arabic and English corpora using clusterings trained on the complete data of the same corpus. We use the distributed training and application infrastructure described in (Brants et al., 2007) with modifications to allow the training of predictive class-based models and their application in the decoder of the machine translation system. 759 For all models used in our experiments, both wordand class-based, the smoothing method used was Stupid Backoff(Brants et al., 2007). Models with Stupid Backoffreturn scores rather than normalized probabilities, thus perplexities cannot be calculated for these models. Instead we report BLEU scores (Papineni et al., 2002) of the machine translation system using different combinations of word- and classbased models for translation tasks from English to Arabic and Arabic to English. 6.1 Training Data For English we used three different training data sets: en target: The English side of Arabic-English and Chinese-English parallel data provided by LDC (405 million tokens). en ldcnews: Consists of several English news data sets provided by LDC (5 billion tokens). en webnews: Consists of data collected up to December 2005 from web pages containing primarily English news articles (31 billion tokens). A fourth data set, en web, was used together with the other three data sets to train the large wordbased model used in the second machine translation experiment. This set consists of general web data collected in January 2006 (2 trillion tokens). For Arabic we used the following two different training data sets: ar gigaword: Consists of several Arabic news data sets provided by LDC (629 million tokens). ar webnews: Consists of data collected up to December 2005 from web pages containing primarily Arabic news articles (approximately 600 million tokens). 6.2 Machine Translation Results Given a sentence f in the source language, the machine translation problem is to automatically produce a translation ˆe in the target language. In the subsequent experiments, we use a phrase-based statistical machine translation system based on the loglinear formulation of the problem described in (Och and Ney, 2002): ˆe = argmax e p(e|f) = argmax e M X m=1 λmhm(e, f) (11) where {hm(e, f)} is a set of M feature functions and {λm} a set of weights. We use each predictive classbased language model as well as a word-based model as separate feature functions in the log-linear combination in Eq. (11). The weights are trained using minimum error rate training (Och, 2003) with BLEU score as the objective function. The dev and test data sets contain parts of the 2003, 2004 and 2005 Arabic NIST MT evaluation sets among other parallel data. The blind test data used is the “NIST” part of the 2006 Arabic-English NIST MT evaluation set, and is not included in the training data. 
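Before going into the experimental details, the structure of one distributed iteration of Section 5 may be summarized schematically. The single-process sketch below (with an externally supplied best_cluster function and hypothetical names) only illustrates the propose-then-merge pattern; the per-worker count tables, the delta data structures and the repartitioning heuristic are omitted.

import random

def distributed_iteration(shards, clustering, best_cluster, sample_fraction=1.0 / 3):
    # shards       : the vocabulary partition, one word list per worker
    # clustering   : dict word -> cluster id, kept frozen during the iteration
    # best_cluster : callable (word, clustering) -> cluster maximizing the
    #                exchange criterion for that word
    proposed = []
    for shard in shards:                          # "map": independent workers
        k = max(1, int(len(shard) * sample_fraction))
        local_moves = {}
        for w in random.sample(shard, k):
            best = best_cluster(w, clustering)
            if best != clustering[w]:
                local_moves[w] = best
        proposed.append(local_moves)
    new_clustering = dict(clustering)             # "reduce": merge all deltas
    for moves in proposed:
        new_clustering.update(moves)
    return new_clustering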
For the first experiment we trained predictive class-based 5-gram models using clusterings with 64, 128, 256 and 512 clusters1 on the en target data. We then added these models as additional features to the log linear model of the Arabic-English machine translation system. The word-based language model used by the system in these experiments is a 5-gram model also trained on the en target data set. Table 1 shows the BLEU scores reached by the translation system when combining the different class-based models with the word-based model in comparison to the BLEU scores by a system using only the word-based model on the Arabic-English translation task. dev test nist06 word-based only 0.4085 0.3498 0.5088 64 clusters 0.4122 0.3514 0.5114 128 clusters 0.4142 0.3530 0.5109 256 clusters 0.4141 0.3536 0.5076 512 clusters 0.4120 0.3504 0.5140 Table 1: BLEU scores of the Arabic English system using models trained on the English en target data set Adding the class-based models leads to small improvements in BLEU score, with the highest improvements for both dev and nist06 being statistically significant 2. In the next experiment we used two predictive class-based models, a 5-gram model with 512 clusters trained on the en target data set and a 6-gram model also using 512 clusters trained on the en ldcnews data set. We used these models in addition to a word-based 6-gram model created by combining models trained on all four English data sets. Table 2 shows the BLEU scores of the machine translation system using only this word-based model, the scores after adding the class-based model trained on the en target data set and when using all three models. 1The beginning of sentence, end of sentence and unkown word tokens were each treated as separate clusters 2Differences of more than 0.0051 are statistically significant at the 0.05 level using bootstrap resampling (Noreen, 1989; Koehn, 2004) 760 dev test nist06 word-based only 0.4677 0.4007 0.5672 with en target 0.4682 0.4022 0.5707 all three models 0.4690 0.4059 0.5748 Table 2: BLEU scores of the Arabic English system using models trained on various data sets For our experiment with the English Arabic translation task we trained two 5-gram predictive classbased models with 512 clusters on the Arabic ar gigaword and ar webnews data sets. The wordbased Arabic 5-gram model we used was created by combining models trained on the Arabic side of the parallel training data (347 million tokens), the ar gigaword and ar webnews data sets, and additional Arabic web data. dev test nist06 word-based only 0.2207 0.2174 0.3033 with ar webnews 0.2237 0.2136 0.3045 all three models 0.2257 0.2260 0.3318 Table 3: BLEU scores of the English Arabic system using models trained on various data sets As shown in Table 3, adding the predictive classbased model trained on the ar webnews data set leads to small improvements in dev and nist06 scores but causes the test score to decrease. However, adding the class-based model trained on the ar gigaword data set to the other class-based and the word-based model results in further improvement of the dev score, but also in large improvements of the test and nist06 scores. We performed experiments to eliminate the possibility of data overlap between the training data and the machine translation test data as cause for the large improvements. 
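The significance statements above rely on paired bootstrap resampling (Noreen, 1989; Koehn, 2004). A hedged sketch of that test is given below; `corpus_metric` is an assumed callable standing in for a corpus-level score such as BLEU and is not part of the paper's code.

```python
# Sketch of paired bootstrap resampling for comparing two MT systems on the
# same test set.  `corpus_metric` maps a list of (hypothesis, reference) pairs
# to a corpus-level score such as BLEU (assumed to be supplied by the caller).
import random

def paired_bootstrap(hyps_a, hyps_b, refs, corpus_metric, samples=1000, seed=0):
    rng = random.Random(seed)
    n = len(refs)
    wins_a = 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]   # resample sentences with replacement
        score_a = corpus_metric([(hyps_a[i], refs[i]) for i in idx])
        score_b = corpus_metric([(hyps_b[i], refs[i]) for i in idx])
        if score_a > score_b:
            wins_a += 1
    # System A is judged significantly better at the 0.05 level
    # if it wins on at least 95% of the resampled test sets.
    return wins_a / samples
```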
In addition, our experiments showed that when there is overlap between the training and test data, the class-based models lead to lower scores as long as they are trained only on data also used for training the word-based model. One explanation could be that the domain of the ar gigaword corpus is much closer to the domain of the test data than that of other training data sets used. However, further investigation is required to explain the improvements. 6.3 Clusters The clusters produced by the distributed algorithm vary in their size and number of occurrences. In a clustering of the en target data set with 1,024 clusters, the cluster sizes follow a typical longtailed distribution with the smallest cluster containBai Bi Bu Cai Cao Chang Chen Cheng Chou Chuang Cui Dai Deng Ding Du Duan Fan Fu Gao Ge Geng Gong Gu Guan Han Hou Hsiao Hsieh Hsu Hu Huang Huo Jiang Jiao Juan Kang Kuang Kuo Li Liang Liao Lin Liu Lu Luo Mao Meets Meng Mi Miao Mu Niu Pang Pi Pu Qian Qiao Qiu Qu Ren Run Shan Shang Shen Si Song Su Sui Sun Tan Tang Tian Tu Wang Wu Xie Xiong Xu Yang Yao Ye Yin Zeng Zhang Zhao Zheng Zhou Zhu Zhuang Zou % PERCENT cents percent approvals bonus cash concessions cooperatives credit disbursements dividends donations earnings emoluments entitlements expenditure expenditures fund funding funds grants income incomes inflation lending liquidity loan loans mortgage mortgages overhead payroll pension pensions portfolio profits protectionism quotas receipts receivables remittances remuneration rent rents returns revenue revenues salaries salary savings spending subscription subsidies subsidy surplus surpluses tax taxation taxes tonnage tuition turnover wage wages Abby Abigail Agnes Alexandra Alice Amanda Amy Andrea Angela Ann Anna Anne Annette Becky Beth Betsy Bonnie Brenda Carla Carol Carole Caroline Carolyn Carrie Catherine Cathy Cheryl Christina Christine Cindy Claire Clare Claudia Colleen Cristina Cynthia Danielle Daphne Dawn Debbie Deborah Denise Diane Dina Dolores Donna Doris Edna Eileen Elaine Eleanor Elena Elisabeth Ellen Emily Erica Erin Esther Evelyn Felicia Felicity Flora Frances Gail Gertrude Gillian Gina Ginger Gladys Gloria Gwen Harriet Heather Helen Hilary Irene Isabel Jane Janice Jeanne Jennifer Jenny Jessica Jo Joan Joanna Joanne Jodie Josie Judith Judy Julia Julie Karen Kate Katherine Kathleen Kathryn Kathy Katie Kimberly Kirsten Kristen Kristin Laura Laurie Leah Lena Lillian Linda Lisa Liz Liza Lois Loretta Lori Lorraine Louise Lynne Marcia Margaret Maria Marian Marianne Marilyn Marjorie Marsha Mary Maureen Meg Melanie Melinda Melissa Merle Michele Michelle Miriam Molly Nan Nancy Naomi Natalie Nina Nora Norma Olivia Pam Pamela Patricia Patti Paula Pauline Peggy Phyllis Rachel Rebecca Regina Renee Rita Roberta Rosemary Sabrina Sally Samantha Sarah Selena Sheila Shelley Sherry Shirley Sonia Stacy Stephanie Sue Susanne Suzanne Suzy Sylvia Tammy Teresa Teri Terri Theresa Tina Toni Tracey Ursula Valerie Vanessa Veronica Vicki Vivian Wendy Yolanda Yvonne almonds apple apples asparagus avocado bacon bananas barley basil bean beans beets berries berry boneless broccoli cabbage carrot carrots celery cherries cherry chile chiles chili chilies chives cilantro citrus cranberries cranberry cucumber cucumbers dill doughnuts egg eggplant eggs elk evergreen fennel figs flowers fruit fruits garlic ginger grapefruit grasses herb herbs jalapeno Jell-O lemon lemons lettuce lime lions macaroni mango maple melon mint mozzarella mushrooms oak oaks olives onion onions orange oranges orchids oregano 
oyster parsley pasta pastries pea peach peaches peanuts pear pears peas pecan pecans perennials pickles pine pineapple pines plum pumpkin pumpkins raspberries raspberry rice rosemary roses sage salsa scallions scallops seasonings seaweed shallots shrimp shrubs spaghetti spices spinach strawberries strawberry thyme tomato tomatoes truffles tulips turtles walnut walnuts watermelon wildflowers zucchini mid-April mid-August mid-December mid-February midJanuary mid-July mid-June mid-March mid-May midNovember mid-October mid-September mid-afternoon midafternoon midmorning midsummer Table 4: Examples of clusters 761 ing 13 words and the largest cluster containing 20,396 words. Table 4 shows some examples of the generated clusters. For each cluster we list all words occurring more than 1,000 times in the corpus. 7 Conclusion In this paper, we have introduced an efficient, distributed clustering algorithm for obtaining word classifications for predictive class-based language models with which we were able to use billions of tokens of training data to obtain classifications for millions of words in relatively short amounts of time. The experiments presented show that predictive class-based models trained using the obtained word classifications can improve the quality of a state-ofthe-art machine translation system as indicated by the BLEU score in both translation tasks. When using predictive class-based models in combination with a word-based language model trained on very large amounts of data, the improvements continue to be statistically significant on the test and nist06 sets. We conclude that even despite the large amounts of data used to train the large word-based model in our second experiment, class-based language models are still an effective tool to ease the effects of data sparsity. We furthermore expect to be able to increase the gains resulting from using class-based models by using more sophisticated techniques for combining them with word-based models such as linear interpolations of word- and class-based models with coefficients depending on the frequency of the history. Another interesting direction of further research is to evaluate the use of the presented clustering technique for language model size reduction. References Thorsten Brants, Ashok C. Popat, Peng Xu, Franz J. Och, and Jeffrey Dean. 2007. Large language models in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and on Computational Natural Language Learning (EMNLP-CoNLL), pages 858–867, Prague, Czech Republic. Peter F. Brown, Vincent J. Della Pietra, Peter V. de Souza, Jennifer C. Lai, and Robert L. Mercer. 1990. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. Jeffrey Dean and Sanjay Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. In Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI-04), San Francisco, CA, USA. Ahmad Emami and Frederick Jelinek. 2005. Random clusterings for language modeling. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Philadelphia, PA, USA. Joshua Goodman and Jianfeng Gao. 2000. Language model size reduction by pruning and clustering. In Proceedings of the IEEE International Conference on Spoken Language Processing (ICSLP), Beijing, China. Joshua Goodman. 2000. A bit of progress in language modeling. Technical report, Microsoft Research. Reinherd Kneser and Hermann Ney. 1993. 
Improved clustering techniques for class-based statistical language modelling. In Proceedings of the 3rd European Conference on Speech Communication and Technology, pages 973–976, Berlin, Germany. Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Barcelona, Spain. Sven Martin, J¨org Liermann, and Hermann Ney. 1998. Algorithms for bigram and trigram word clustering. Speech Communication, 24:19–37. Eric W. Noreen. 1989. Computer-Intensive Methods for Testing Hypotheses. John Wiley & Sons, New York. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 295–302, Philadelphia, PA, USA. Franz Josef Och. 2003. Minimum error rate training for statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL), pages 160–167, Sapporo, Japan. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 311–318, Philadelphia, PA, USA. Martin Raab. 2006. Language model techniques in machine translation. Master’s thesis, Universit¨at Karlsruhe / Carnegie Mellon University. E. W. D. Whittaker and P. C. Woodland. 2001. Efficient class-based language modelling for very large vocabularies. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 545–548, Salt Lake City, UT, USA. 762
Proceedings of ACL-08: HLT, pages 763–770, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Enriching Morphologically Poor Languages for Statistical Machine Translation Eleftherios Avramidis [email protected] Philipp Koehn [email protected] School of Informatics University of Edinburgh 2 Baccleuch Place Edinburgh, EH8 9LW, UK Abstract We address the problem of translating from morphologically poor to morphologically rich languages by adding per-word linguistic information to the source language. We use the syntax of the source sentence to extract information for noun cases and verb persons and annotate the corresponding words accordingly. In experiments, we show improved performance for translating from English into Greek and Czech. For English–Greek, we reduce the error on the verb conjugation from 19% to 5.4% and noun case agreement from 9% to 6%. 1 Introduction Traditional statistical machine translation methods are based on mapping on the lexical level, which takes place in a local window of a few words. Hence, they fail to produce adequate output in many cases where more complex linguistic phenomena play a role. Take the example of morphology. Predicting the correct morphological variant for a target word may not depend solely on the source words, but require additional information about its role in the sentence. Recent research on handling rich morphology has largely focused on translating from rich morphology languages, such as Arabic, into English (Habash and Sadat, 2006). There has been less work on the opposite case, translating from English into morphologically richer languages. In a study of translation quality for languages in the Europarl corpus, Koehn (2005) reports that translating into morphologically richer languages is more difficult than translating from them. There are intuitive reasons why generating richer morphology from morphologically poor languages is harder. Take the example of translating noun phrases from English to Greek (or German, Czech, etc.). In English, a noun phrase is rendered the same if it is the subject or the object. However, Greek words in noun phrases are inflected based on their role in the sentence. A purely lexical mapping of English noun phrases to Greek noun phrases suffers from the lack of information about its role in the sentence, making it hard to choose the right inflected forms. Our method is based on factored phrase-based statistical machine translation models. We focused on preprocessing the source data to acquire the needed information and then use it within the models. We mainly carried out experiments on English to Greek translation, a language pair that exemplifies the problems of translating from a morphologically poor to a morphologically rich language. 1.1 Morphology in Phrase-based SMT When examining parallel sentences of such language pairs, it is apparent that for many English words and phrases which appear usually in the same form, the corresponding terms of the richer target language appear inflected in many different ways. On a single word-based probabilistic level, it is then obvious that for one specific English word e the probability p(f|e) of it being translated into a word f decreases as the number of translation candidates increase, making the decisions more uncertain. 
763 • English: The president, after reading the press review and the announcements, left his office • Greek-1: The president[nominative], after reading[3rdsing] the press review[accusative,sing] and the announcements[accusative,plur], left[3rdsing] his office[accusative,sing] • Greek-2: The president[nominative], after reading[3rdsing] the press review[accusative,sing] and the announcements[nominative,plur], left[3rdplur] his office[accusative,sing] Figure 1: Example of missing agreement information, affecting the meaning of the second sentence One of the main aspects required for the fluency of a sentence is agreement. Certain words have to match in gender, case, number, person etc. within a sentence. The exact rules of agreement are language-dependent and are closely linked to the morphological structure of the language. Traditional statistical machine translation models deal with this problems in two ways: • The basic SMT approach uses the target language model as a feature in the argument maximisation function. This language model is trained on grammatically correct text, and would therefore give a good probability for word sequences that are likely to occur in a sentence, while it would penalise ungrammatical or badly ordered formations. • Meanwhile, in phrase-based SMT models, words are mapped in chunks. This can resolve phenomena where the English side uses more than one words to describe what is denoted on the target side by one morphologically inflected term. Thus, with respect to these methods, there is a problem when agreement needs to be applied on part of a sentence whose length exceeds the order of the of the target n-gram language model and the size of the chunks that are translated (see Figure 1 for an example). 1.2 Related Work In one of the first efforts to enrich the source in word-based SMT, Ueffing and Ney (2003) used partof-speech (POS) tags, in order to deal with the verb conjugation of Spanish and Catalan; so, POS tags were used to identify the pronoun+verb sequence and splice these two words into one term. The approach was clearly motivated by the problems occurring by a single-word-based SMT and have been solved by adopting a phrase-based model. Meanwhile, there is no handling of the case when the pronoun stays in distance with the related verb. Minkov et al. (2007) suggested a post-processing system which uses morphological and syntactic features, in order to ensure grammatical agreement on the output. The method, using various grammatical source-side features, achieved higher accuracy when applied directly to the reference translations but it was not tested as a part of an MT system. Similarly, translating English into Turkish (Durgar El-Kahlout and Oflazer, 2006) uses POS and morph stems in the input along with rich Turkish morph tags on the target side, but improvement was gained only after augmenting the generation process with morphotactical knowledge. Habash et al. (2007) also investigated case determination in Arabic. Carpuat and Wu (2007) approached the issue as a Word Sense Disambiguation problem. In their presentation of the factored SMT models, Koehn and Hoang (2007) describe experiments for translating from English to German, Spanish and Czech, using morphology tags added on the morphologically rich side, along with POS tags. The morphological factors are added on the morphologically rich side and scored with a 7-gram sequence model. Probabilistic models for using only source tags were investigated by Birch et al. 
(2007), who attached syntax hints in factored SMT models by having Combinatorial Categorial Grammar (CCG) supertags as factors on the input words, but in this case English was the target language. This paper reports work that strictly focuses on translation from English to a morphologically richer language. We go one step further than just using easily acquired information (e.g. English POS or lemmata) and extract target-specific information from the source sentence context. We use syntax, not in 764 Figure 2: Classification of the errors on our EnglishGreek baseline system (ch. 4.1), as suggested by Vilar et al. (2006) order to aid reordering (Yamada and Knight, 2001; Collins et al., 2005; Huang et al., 2006), but as a means for getting the “missing” morphology information, depending on the syntactic position of the words of interest. Then, contrary to the methods that added only output features or altered the generation procedure, we used this information in order to augment only the source side of a factored translation model, assuming that we do not have resources allowing factors or specialized generation in the target language (a common problem, when translating from English into under-resourced languages). 2 Methods for enriching input We selected to focus on noun cases agreement and verb person conjugation, since they were the most frequent grammatical errors of our baseline SMT system (see full error analysis in Figure 2). Moreover, these types of inflection signify the constituents of every phrase, tightly linked to the meaning of the sentence. 2.1 Case agreement The case agreement for nouns, adjectives and articles is mainly defined by the syntactic role that each noun phrase has. Nominative case is used to define the nouns which are the subject of the sentence, accusative shows usually the direct object of the verbs and dative case refers to the indirect object of bitransitive verbs. Therefore, the followed approach takes advantage of syntax, following a method similar to Semantic Role Labelling (Carreras and Marquez, 2005; Surdeanu and Turmo, 2005). English, as morphologically poor language, usually follows a fixed word order (subject-verb-object), so that a syntax parser can be easily used for identifying the subject and the object of most sentences. Considering such annotation, a factored translation model is trained to map the word-case pair to the correct inflection of the target noun. Given the agreement restriction, all words that accompany the noun (adjectives, articles, determiners) must follow the case of the noun, so their likely case needs to be identified as well. For this purpose we use a syntax parser to acquire the syntax tree for each English sentence. The trees are parsed depth-first and the cases are identified within particular “sub-tree patterns” which are manually specified. We use the sequence of the nodes in the tree to identify the syntactic role of each noun phrase. Figure 3: Case tags are assigned on depth-first parse of the English syntax tree, based on sub-tree patterns To make things more clear, an example can be seen in figure 3. At first, the algorithm identifies the subtree “S-(NPB-VP)” and the nominative tag is applied on the NPB node, so that it is assigned to the word “we” (since a pronoun can have a case). The example of accusative shows how cases get transferred to nested subtrees. In practice, they are recursively transferred to every underlying noun phrase (NP) but not to clauses that do not need this information (e.g. 
prepositional phrases). Similar rules are applied for covering a wide range of node sequence patterns. Also note that this method had to be target765 oriented in some sense: we considered the target language rules for choosing the noun case in every prepositional phrase, depending on the leading preposition. This way, almost all nouns were tagged and therefore the number of the factored words was increased, in an effort to decrease sparsity. Similarly, cases which do not actively affect morphology (e.g. dative in Greek) were not tagged during factorization. 2.2 Verb person conjugation For resolving the verb conjugation, we needed to identify the person of a verb and add this piece of linguistic information as a tag. As we parse the tree top-down, on every level, we look for two discrete nodes which, somewhere in their children, include the verb and the corresponding subject. Consequently, the node which contains the subject is searched recursively until a subject is found. Then, the person is identified and the tag is assigned to the node which contains the verb, which recursively bequeaths this tag to the nested subtree. For the subject selection, the following rules were applied: • The verb person is directly connected to the subject of the sentence and in most cases it is directly inferred by a personal pronoun (I, you etc). Therefore, since this is usually the case, when a pronoun existed, it was directly used as a tag. • All pronouns in a different case (e.g. them, myself) were were converted into nominative case before being used as a tag. • When the subject of the sentence is not a pronoun, but a single noun, then it is in third person. The POS tag of this noun is then used to identify if it is plural or singular. This was selectively modified for nouns which despite being in singular, take a verb in plural. • The gender of the subject does not affect the inflection of the verb in Greek. Therefore, all three genders that are given by the third person pronouns were reduced to one. In Figure 4 we can see an example of how the person tag is extracted from the subject of the senFigure 4: Applying person tags on an English syntax tree tence and gets passed to the relative clause. In particular, as the algorithm parses the syntax tree, it identifies the sub-tree which has NP-A as a head and includes the WHNP node. Consequently, it recursively browses the preceding NPB so as to get the subject of the sentence. The word “aspects” is found, which has a POS tag that shows it is a plural noun. Therefore, we consider the subject to be of the third person in plural (tagged by they) which is recursively passed to the children of the head node. 3 Factored Model The factored statistical machine translation model uses a log-linear approach, in order to combine the several components, including the language model, the reordering model, the translation models and the generation models. The model is defined mathematically (Koehn and Hoang, 2007) as following: p(f|e) = 1 Z exp n X i=1 λihi(f, e) (1) where λi is a vector of weights determined during a tuning process, and hi is the feature function. The feature function for a translation probability distribution is hT (f|e) = X j τ(ej, fj) (2) While factored models may use a generation step to combine the several translation components based on the output factors, we use only source factors; 766 therefore we don’t need a generation step to combine the probabilities of the several components. 
Instead, factors are added so that both words and its factor(s) are assigned the same probability. Of course, when there is not 1-1 mapping between the word+factor splice on the source and the inflected word on the target, the well-known issue of sparse data arises. In order to reduce these problems, decoding needed to consider alternative paths to translation tables trained with less or no factors (as Birch et al. (2007) suggested), so as to cover instances where a word appears with a factor which it has not been trained with. This is similar to back-off. The alternative paths are combined as following (fig. 5): hT (f|e) = X j hTt(j)(ej, fj) (3) where each phrase j is translated by one translation table t(j) and each table i has a feature function hTi. as shown in eq. (2). Figure 5: Decoding using an alternative path with different factorization 4 Experiments This preprocessing led to annotated source data, which were given as an input to a factored SMT system. 4.1 Experiment setup For testing the factored translation systems, we used Moses (Koehn et al., 2007), along with a 5-gram SRILM language model (Stolcke, 2002). A Greek model was trained on 440,082 aligned sentences of Europarl v.3, tuned with Minimum Error Training (Och, 2003). It was tuned over a development set of 2,000 Europarl sentences and tested on two sets of 2,000 sentences each, from the Europarl and a News Commentary respectively, following the specifications made by the ACL 2007 2nd Workshop on SMT1. A Czech model was trained on 57,464 aligned sentences, tuned over 1057 sentences of the News Commentary corpus and and tested on two sets of 964 sentences and 2000 sentences respectively. The training sentences were trimmed to a length of 60 words for reducing perplexity and a standard lexicalised reordering, with distortion limit set to 6. For getting the syntax trees, the latest version of Collins’ parser (Collins, 1997) was used. When needed, part-of-speech (POS) tags were acquired by using Brill’s tagger (Brill, 1992) on v1.14. Results were evaluated with both BLEU (Papineni et al., 2001) and NIST metrics (NIST, 2002). 4.2 Results BLEU NIST set devtest test07 devtest test07 baseline 18.13 18.05 5.218 5.279 person 18.16 18.17 5.224 5.316 pos+person 18.14 18.16 5.259 5.316 person+case 18.08 18.24 5.258 5.340 altpath:POS 18.21 18.20 5.285 5.340 Table 1: Translating English to Greek: Using a single translation table may cause sparse data problems, which are addressed using an alternative path to a second translation table We tested several various combinations of tags, while using a single translation component. Some combinations seem to be affected by sparse data problems and the best score is achieved by using both person and case tags. Our full method, using both factors, was more effective on the second testset, but the best score in average was succeeded by using an alternative path to a POS-factored translation table (table 1). The NIST metric clearly shows a significant improvement, because it mostly measures difficult n-gram matches (e.g. due to the longdistance rules we have been dealing with). 1see http://www.statmt.org/wmt07 referring to sets dev2006 (tuning) and devtest2006, test2007 (testing) 767 4.3 Error analysis In n-gram based metrics, the scores for all words are equally weighted, so mistakes on crucial sentence constituents may be penalized the same as errors on redundant or meaningless words (Callison-Burch et al., 2006). 
We consider agreement on verbs and nouns an important factor for the adequacy of the result, since they adhere more to the semantics of the sentence. Since we targeted these problems, we conducted a manual error analysis focused on the success of the improved system regarding those specific phenomena. system verbs errors missing baseline 311 19.0% 7.4% single 295 4.7% 5.4% alt.path 294 5.4% 2.7% Table 2: Error analysis of 100 test sentences, focused on verb person conjugation, for using both person and case tags system NPs errors missing baseline 469 9.0% 4.9% single 465 6.2% 4.5% alt. path 452 6.0% 4.0% Table 3: Error analysis of 100 test sentences, focused on noun cases, for using both person and case tags The analysis shows that using a system with only one phrase translation table caused a high percentage of missing or untranslated words. When a word appears with a tag with which it has not been trained, that would be considered an unseen event and remain untranslated. The use of the alternative path seems to be a good solution. step parsing tagging decoding VPs 16.7% 25% 58.3% NPs 39.2% 21.7% 39.1% avg 31.4% 22.9% 45.7 % Table 4: Analysis on which step of the translation process the agreement errors derive from, based on manual resolution on the errors of table 3 The impact of the preprocessing stage to the errors may be seen in table 4, where errors are tracked back to the stage they derived from. Apart from the decoding errors, which may be attributed to sparse data or other statistical factors, a large part of the errors derive from the preprocessing step; either the syntax tree of the sentence was incorrectly or partially resolved, or our labelling process did not correctly match all possible sub-trees. 4.4 Investigating applicability to other inflected languages The grammatical phenomena of noun cases and verb persons are quite common among many human languages. While the method was tested in Greek, there was an effort to investigate whether it is useful for other languages with similar characteristics. For this reason, the method was adapted for Czech, which needs agreement on both verb conjugation and 9 noun cases. Dative case was included for the indirect object and the rules of the prepositional phrases were adapted to tag all three cases that can be verb phrase constituents. The Czech noun cases which appear only in prepositional phrases were ignored, since they are covered by the phrase-based model. BLUE NIST set devtest test devtest test baseline 12.08 12.34 4.634 4.865 person+case altpath:POS 11.98 11.99 4.584 4.801 person altpath:word 12.23 12.11 4.647 4.846 case altpath:word 12.54 12.51 4.758 4.957 Table 5: Enriching source data can be useful when translating from English to Czech, since it is a morphologically rich language. Experiments shown improvement when using factors on noun-cases with an alternative path In Czech, due to the small size of the corpus, it was possible to improve metric scores only by using an alternative path to a bare word-to-word translation table. Combining case and verb tags worsened the results, which suggests that, while applying the method to more languages, a different use of the attributes may be beneficial for each of them. 768 5 Conclusion In this paper we have shown how SMT performance can be improved, when translating from English into morphologically richer languages, by adding linguistic information on the source. 
Although the source language misses morphology attributes required by the target language, the needed information is inherent in the syntactic structure of the source sentence. Therefore, we have shown that this information can be easily be included in a SMT model by preprocessing the source text. Our method focuses on two linguistic phenomena which produce common errors on the output and are important constituents of the sentence. In particular, noun cases and verb persons are required by the target language, but not directly inferred by the source. For each of the sub-problems, our algorithm used heuristic syntax-based rules on the statistically generated syntax tree of each sentence, in order to address the missing information, which was consequently tagged in by means of word factors. This information was proven to improve the outcome of a factored SMT model, by reducing the grammatical agreement errors on the generated sentences. An initial system using one translation table with additional source side factors caused sparse data problems, due to the increased number of unseen word-factor combinations. Therefore, the decoding process is given an alternative path towards a translation table with less or no factors. The method was tested on translating from English into two morphologically rich languages. Note that this may be easily expanded for translating from English into many morphologically richer languages with similar attributes. Opposed to other factored translation model approaches that require target language factors, that are not easily obtainable for many languages, our approach only requires English syntax trees, which are acquired with widely available automatic parsers. The preprocessing scripts were adapted so that they provide the morphology attributes required by the target language and the best combination of factors and alternative paths was chosen. Acknowledgments This work was supported in part under the EuroMatrix project funded by the European Commission (6th Framework Programme). Many thanks to Josh Schroeder for preparing the training, development and test data for Greek, in accordance to the standards of ACL 2007 2nd Workshop on SMT; to Hieu Hoang, Alexandra Birch and all the members of the Edinburgh University SMT group for answering questions, making suggestions and providing support. References Birch, A., Osborne, M., and Koehn, P. 2007. CCG Supertags in factored Statistical Machine Translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pages 9–16, Prague, Czech Republic. Association for Computational Linguistics. Brill, E. 1992. A simple rule-based part of speech tagger. Proceedings of the Third Conference on Applied Natural Language Processing, pages 152–155. Callison-Burch, C., Osborne, M., and Koehn, P. 2006. Re-evaluation the role of bleu in machine translation research. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics. The Association for Computer Linguistics. Carpuat, M. and Wu, D. 2007. Improving Statistical Machine Translation using Word Sense Disambiguation. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 61–72, Prague, Czech Republic. Carreras, X. and Marquez, L. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. 
In Proceedings of 9th Conference on Computational Natural Language Learning (CoNLL), pages 169–172, Ann Arbor, Michigan, USA. Collins, M. 1997. Three generative, lexicalised models for statistical parsing. Proceedings of the 35th conference on Association for Computational Linguistics, pages 16–23. Collins, M., Koehn, P., and Kuˇcerová, I. 2005. Clause restructuring for statistical machine translation. In ACL ’05: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 531–540, Morristown, NJ, USA. Association for Computational Linguistics. 769 Durgar El-Kahlout, i. and Oflazer, K. 2006. Initial explorations in english to turkish statistical machine translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 7–14, New York City. Association for Computational Linguistics. Habash, N., Gabbard, R., Rambow, O., Kulick, S., and Marcus, M. 2007. Determining case in Arabic: Learning complex linguistic behavior requires complex linguistic features. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 1084–1092. Habash, N. and Sadat, F. 2006. Arabic preprocessing schemes for statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAAC L, Companion Volume: Short Papers, pages 49–52, New York City, USA. Association for Computational Linguistics. Huang, L., Knight, K., and Joshi, A. 2006. Statistical syntax-directed translation with extended domain of locality. Proc. AMTA, pages 66–73. Koehn, P. 2005. Europarl: A parallel corpus for statistical machine translation. MT Summit, 5. Koehn, P. and Hoang, H. 2007. Factored translation models. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 868–876. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177– 180, Prague, Czech Republic. Association for Computational Linguistics. Minkov, E., Toutanova, K., and Suzuki, H. 2007. Generating complex morphology for machine translation. In ACL 07: Proceedings of the 45th Annual Meeting of the Association of Computational linguistics, pages 128–135, Prague, Czech Republic. Association for Computational Linguistics. NIST 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. Och, F. J. 2003. Minimum error rate training in statistical machine translation. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 160–167, Morristown, NJ, USA. Association for Computational Linguistics. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. 2001. BLEU: a method for automatic evaluation of machine translation. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318, Morristown, NJ, USA. Association for Computational Linguistics. Stolcke, A. 2002. SRILM-an extensible language modeling toolkit. Proc. ICSLP, 2:901–904. Surdeanu, M. and Turmo, J. 2005. Semantic Role Labeling Using Complete Syntactic Analysis. 
In Proceedings of 9th Conference on Computational Natural Language Learning (CoNLL), pages 221–224, Ann Arbor, Michigan, USA. Ueffing, N. and Ney, H. 2003. Using pos information for statistical machine translation into morphologically rich languages. In EACL ’03: Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics, pages 347–354, Morristown, NJ, USA. Association for Computational Linguistics. Vilar, D., Xu, J., D’Haro, L. F., and Ney, H. 2006. Error Analysis of Machine Translation Output. In Proceedings of the 5th Internation Conference on Language Resources and Evaluation (LREC’06), pages 697–702, Genoa, Italy. Yamada, K. and Knight, K. 2001. A syntax-based statistical translation model. In ACL ’01: Proceedings of the 39th Annual Meeting on Association for Computational Linguistics, pages 523–530, Morristown, NJ, USA. Association for Computational Linguistics. 770
Proceedings of ACL-08: HLT, pages 771–779, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Learning Bilingual Lexicons from Monolingual Corpora Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick and Dan Klein Computer Science Division, University of California at Berkeley { aria42,pliang,tberg,klein }@cs.berkeley.edu Abstract We present a method for learning bilingual translation lexicons from monolingual corpora. Word types in each language are characterized by purely monolingual features, such as context counts and orthographic substrings. Translations are induced using a generative model based on canonical correlation analysis, which explains the monolingual lexicons in terms of latent matchings. We show that high-precision lexicons can be learned in a variety of language pairs and from a range of corpus types. 1 Introduction Current statistical machine translation systems use parallel corpora to induce translation correspondences, whether those correspondences be at the level of phrases (Koehn, 2004), treelets (Galley et al., 2006), or simply single words (Brown et al., 1994). Although parallel text is plentiful for some language pairs such as English-Chinese or EnglishArabic, it is scarce or even non-existent for most others, such as English-Hindi or French-Japanese. Moreover, parallel text could be scarce for a language pair even if monolingual data is readily available for both languages. In this paper, we consider the problem of learning translations from monolingual sources alone. This task, though clearly more difficult than the standard parallel text approach, can operate on language pairs and in domains where standard approaches cannot. We take as input two monolingual corpora and perhaps some seed translations, and we produce as output a bilingual lexicon, defined as a list of word pairs deemed to be word-level translations. Precision and recall are then measured over these bilingual lexicons. This setting has been considered before, most notably in Koehn and Knight (2002) and Fung (1995), but the current paper is the first to use a probabilistic model and present results across a variety of language pairs and data conditions. In our method, we represent each language as a monolingual lexicon (see figure 2): a list of word types characterized by monolingual feature vectors, such as context counts, orthographic substrings, and so on (section 5). We define a generative model over (1) a source lexicon, (2) a target lexicon, and (3) a matching between them (section 2). Our model is based on canonical correlation analysis (CCA)1 and explains matched word pairs via vectors in a common latent space. Inference in the model is done using an EM-style algorithm (section 3). Somewhat surprisingly, we show that it is possible to learn or extend a translation lexicon using monolingual corpora alone, in a variety of languages and using a variety of corpora, even in the absence of orthographic features. As might be expected, the task is harder when no seed lexicon is provided, when the languages are strongly divergent, or when the monolingual corpora are from different domains. Nonetheless, even in the more difficult cases, a sizable set of high-precision translations can be extracted. As an example of the performance of the system, in English-Spanish induction with our best feature set, using corpora derived from topically similar but non-parallel sources, the system obtains 89.0% precision at 33% recall. 1See Hardoon et al. (2003) for an overview. 
771 state society enlargement control importance sociedad estado amplificación importancia control ... ... s t m Figure 1: Bilingual lexicon induction: source word types s are listed on the left and target word types t on the right. Dashed lines between nodes indicate translation pairs which are in the matching m. 2 Bilingual Lexicon Induction As input, we are given a monolingual corpus S (a sequence of word tokens) in a source language and a monolingual corpus T in a target language. Let s = (s1, . . . , snS) denote nS word types appearing in the source language, and t = (t1, . . . , tnT ) denote word types in the target language. Based on S and T, our goal is to output a matching m between s and t. We represent m as a set of integer pairs so that (i, j) ∈m if and only if si is matched with tj. 2.1 Generative Model We propose the following generative model over matchings m and word types (s, t), which we call matching canonical correlation analysis (MCCA). MCCA model m ∼MATCHING-PRIOR [matching m] For each matched edge (i, j) ∈m: −zi,j ∼N(0, Id) [latent concept] −fS(si) ∼N(WSzi,j, ΨS) [source features] −fT (ti) ∼N(WT zi,j, ΨT ) [target features] For each unmatched source word type i: −fS(si) ∼N(0, σ2IdS) [source features] For each unmatched target word type j: −fT (tj) ∼N(0, σ2IdT ) [target features] First, we generate a matching m ∈M, where M is the set of matchings in which each word type is matched to at most one other word type.2 We take MATCHING-PRIOR to be uniform over M.3 Then, for each matched pair of word types (i, j) ∈ m, we need to generate the observed feature vectors of the source and target word types, fS(si) ∈RdS and fT (tj) ∈RdT . The feature vector of each word type is computed from the appropriate monolingual corpus and summarizes the word’s monolingual characteristics; see section 5 for details and figure 2 for an illustration. Since si and tj are translations of each other, we expect fS(si) and fT (tj) to be connected somehow by the generative process. In our model, they are related through a vector zi,j ∈Rd representing the shared, language-independent concept. Specifically, to generate the feature vectors, we first generate a random concept zi,j ∼N(0, Id), where Id is the d × d identity matrix. The source feature vector fS(si) is drawn from a multivariate Gaussian with mean WSzi,j and covariance ΨS, where WS is a dS × d matrix which transforms the language-independent concept zi,j into a languagedependent vector in the source space. The arbitrary covariance parameter ΨS ⪰0 explains the sourcespecific variations which are not captured by WS; it does not play an explicit role in inference. The target fT (tj) is generated analogously using WT and ΨT , conditionally independent of the source given zi,j (see figure 2). For each of the remaining unmatched source word types si which have not yet been generated, we draw the word type features from a baseline normal distribution with variance σ2IdS, with hyperparameter σ2 ≫0; unmatched target words are similarly generated. If two word types are truly translations, it will be better to relate their feature vectors through the latent space than to explain them independently via the baseline distribution. However, if a source word type is not a translation of any of the target word types, we can just generate it independently without requiring it to participate in the matching. 2Our choice of M permits unmatched word types, but does not allow words to have multiple translations. 
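The generative story above can be stated directly in code. The following NumPy sketch uses placeholder dimensions and parameters and is not the authors' implementation; it samples monolingual feature vectors exactly as described, with each matched pair sharing a latent concept z and unmatched words drawn from the baseline Gaussian.

```python
# Illustrative NumPy sketch of sampling from the MCCA generative model.
# W_s, W_t, Psi_s, Psi_t, sigma2 and the latent dimension d are placeholders.
import numpy as np

def sample_mcca(matching, n_src, n_tgt, W_s, W_t, Psi_s, Psi_t, sigma2, rng):
    d = W_s.shape[1]
    d_s, d_t = W_s.shape[0], W_t.shape[0]
    f_s = np.empty((n_src, d_s))
    f_t = np.empty((n_tgt, d_t))
    matched_s = {i for i, _ in matching}
    matched_t = {j for _, j in matching}
    for i, j in matching:                     # matched pair: shared latent concept z
        z = rng.standard_normal(d)
        f_s[i] = rng.multivariate_normal(W_s @ z, Psi_s)
        f_t[j] = rng.multivariate_normal(W_t @ z, Psi_t)
    for i in set(range(n_src)) - matched_s:   # unmatched source words: baseline Gaussian
        f_s[i] = rng.multivariate_normal(np.zeros(d_s), sigma2 * np.eye(d_s))
    for j in set(range(n_tgt)) - matched_t:   # unmatched target words: baseline Gaussian
        f_t[j] = rng.multivariate_normal(np.zeros(d_t), sigma2 * np.eye(d_t))
    return f_s, f_t
```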
This setting facilitates comparison to previous work and admits simpler models. 3However, non-uniform priors could encode useful information, such as rank similarities. 772 1.0 1.0 20.0 5.0 100.0 50.0 . . . Source Space Canonical Space Rds Rdt 1.0 1.0 . . . 1.0 Target Space Rd 1.0 { { Orthographic Features Contextual Features time tiempo #ti #ti ime mpo me# pe# change dawn period necessary 40.0 65.0 120.0 45.0 suficiente período mismo adicional si tj z fS(si) fT (tj) Figure 2: Illustration of our MCCA model. Each latent concept zi,j originates in the canonical space. The observed word vectors in the source and target spaces are generated independently given this concept. 3 Inference Given our probabilistic model, we would like to maximize the log-likelihood of the observed data (s, t): ℓ(θ) = log p(s, t; θ) = log X m p(m, s, t; θ) with respect to the model parameters θ = (WS, WT , ΨS, ΨT ). We use the hard (Viterbi) EM algorithm as a starting point, but due to modeling and computational considerations, we make several important modifications, which we describe later. The general form of our algorithm is as follows: Summary of learning algorithm E-step: Find the maximum weighted (partial) bipartite matching m ∈M M-step: Find the best parameters θ by performing canonical correlation analysis (CCA) M-step Given a matching m, the M-step optimizes log p(m, s, t; θ) with respect to θ, which can be rewritten as max θ X (i,j)∈m log p(si, tj; θ). (1) This objective corresponds exactly to maximizing the likelihood of the probabilistic CCA model presented in Bach and Jordan (2006), which proved that the maximum likelihood estimate can be computed by canonical correlation analysis (CCA). Intuitively, CCA finds d-dimensional subspaces US ∈ RdS×d of the source and UT ∈RdT ×d of the target such that the components of the projections U⊤ S fS(si) and U⊤ T fT (tj) are maximally correlated.4 US and UT can be found by solving an eigenvalue problem (see Hardoon et al. (2003) for details). Then the maximum likelihood estimates are as follows: WS = CSSUSP 1/2, WT = CTT UT P 1/2, ΨS = CSS −WSW ⊤ S , and ΨT = CTT −WT W ⊤ T , where P is a d × d diagonal matrix of the canonical correlations, CSS = 1 |m| P (i,j)∈m fS(si)fS(si)⊤is the empirical covariance matrix in the source domain, and CTT is defined analogously. E-step To perform a conventional E-step, we would need to compute the posterior over all matchings, which is #P-complete (Valiant, 1979). On the other hand, hard EM only requires us to compute the best matching under the current model:5 m = argmax m′ log p(m′, s, t; θ). (2) We cast this optimization as a maximum weighted bipartite matching problem as follows. Define the edge weight between source word type i and target word type j to be wi,j = log p(si, tj; θ) (3) −log p(si; θ) −log p(tj; θ), 4Since dS and dT can be quite large in practice and often greater than |m|, we use Cholesky decomposition to rerepresent the feature vectors as |m|-dimensional vectors with the same dot products, which is all that CCA depends on. 5If we wanted softer estimates, we could use the agreementbased learning framework of Liang et al. (2008) to combine two tractable models. 773 which can be loosely viewed as a pointwise mutual information quantity. We can check that the objective log p(m, s, t; θ) is equal to the weight of a matching plus some constant C: log p(m, s, t; θ) = X (i,j)∈m wi,j + C. 
(4) To find the optimal partial matching, edges with weight wi,j < 0 are set to zero in the graph and the optimal full matching is computed in O((nS+nT )3) time using the Hungarian algorithm (Kuhn, 1955). If a zero edge is present in the solution, we remove the involved word types from the matching.6 Bootstrapping Recall that the E-step produces a partial matching of the word types. If too few word types are matched, learning will not progress quickly; if too many are matched, the model will be swamped with noise. We found that it was helpful to explicitly control the number of edges. Thus, we adopt a bootstrapping-style approach that only permits high confidence edges at first, and then slowly permits more over time. In particular, we compute the optimal full matching, but only retain the highest weighted edges. As we run EM, we gradually increase the number of edges to retain. In our context, bootstrapping has a similar motivation to the annealing approach of Smith and Eisner (2006), which also tries to alter the space of hidden outputs in the E-step over time to facilitate learning in the M-step, though of course the use of bootstrapping in general is quite widespread (Yarowsky, 1995). 4 Experimental Setup In section 5, we present developmental experiments in English-Spanish lexicon induction; experiments 6Empirically, we obtained much better efficiency and even increased accuracy by replacing these marginal likelihood weights with a simple proxy, the distances between the words’ mean latent concepts: wi,j = A −||z∗ i −z∗ j ||2, (5) where A is a thresholding constant, z∗ i = E(zi,j | fS(si)) = P 1/2U ⊤ S fS(si), and z∗ j is defined analogously. The increased accuracy may not be an accident: whether two words are translations is perhaps better characterized directly by how close their latent concepts are, whereas log-probability is more sensitive to perturbations in the source and target spaces. are presented for other languages in section 6. In this section, we describe the data and experimental methodology used throughout this work. 4.1 Data Each experiment requires a source and target monolingual corpus. We use the following corpora: • EN-ES-W: 3,851 Wikipedia articles with both English and Spanish bodies (generally not direct translations). • EN-ES-P: 1st 100k sentences of text from the parallel English and Spanish Europarl corpus (Koehn, 2005). • EN-ES(FR)-D: English: 1st 50k sentences of Europarl; Spanish (French): 2nd 50k sentences of Europarl.7 • EN-CH-D: English: 1st 50k sentences of Xinhua parallel news corpora;8 Chinese: 2nd 50k sentences. • EN-AR-D: English: 1st 50k sentences of 1994 proceedings of UN parallel corpora;9 Arabic: 2nd 50k sentences. • EN-ES-G: English: 100k sentences of English Gigaword; Spanish: 100k sentences of Spanish Gigaword.10 Note that even when corpora are derived from parallel sources, no explicit use is ever made of document or sentence-level alignments. In particular, our method is robust to permutations of the sentences in the corpora. 4.2 Lexicon Each experiment requires a lexicon for evaluation. Following Koehn and Knight (2002), we consider lexicons over only noun word types, although this is not a fundamental limitation of our model. We consider a word type to be a noun if its most common tag is a noun in our monolingual corpus.11 For 7Note that the although the corpora here are derived from a parallel corpus, there are no parallel sentences. 8LDC catalog # 2002E18. 9LDC catalog # 2004E13. 10These corpora contain no parallel sentences. 
11We use the Tree Tagger (Schmid, 1994) for all POS tagging except for Arabic, where we use the tagger described in Diab et al. (2004). 774 0.6 0.65 0.7 0.75 0.8 0.85 0.9 0.95 1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Precision Recall EN-ES-P EN-ES-W Figure 3: Example precision/recall curve of our system on EN-ES-P and EN-ES-W settings. See section 6.1. all languages pairs except English-Arabic, we extract evaluation lexicons from the Wiktionary online dictionary. As we discuss in section 7, our extracted lexicons have low coverage, particularly for proper nouns, and thus all performance measures are (sometimes substantially) pessimistic. For EnglishArabic, we extract a lexicon from 100k parallel sentences of UN parallel corpora by running the HMM intersected alignment model (Liang et al., 2008), adding (s, t) to the lexicon if s was aligned to t at least three times and more than any other word. Also, as in Koehn and Knight (2002), we make use of a seed lexicon, which consists of a small, and perhaps incorrect, set of initial translation pairs. We used two methods to derive a seed lexicon. The first is to use the evaluation lexicon Le and select the hundred most common noun word types in the source corpus which have translations in Le. The second method is to heuristically induce, where applicable, a seed lexicon using edit distance, as is done in Koehn and Knight (2002). Section 6.2 compares the performance of these two methods. 4.3 Evaluation We evaluate a proposed lexicon Lp against the evaluation lexicon Le using the F1 measure in the standard fashion; precision is given by the number of proposed translations contained in the evaluation lexicon, and recall is given by the fraction of possible translation pairs proposed.12 Since our model 12We should note that precision is not penalized for (s, t) if s does not have a translation in Le, and recall is not penalized for failing to recover multiple translations of s. Setting p0.1 p0.25 p0.33 p0.50 Best-F1 EDITDIST 58.6 62.6 61.1 —47.4 ORTHO 76.0 81.3 80.1 52.3 55.0 CONTEXT 91.1 81.3 80.2 65.3 58.0 MCCA 87.2 89.7 89.0 89.7 72.0 Table 1: Performance of EDITDIST and our model with various features sets on EN-ES-W. See section 5. naturally produces lexicons in which each entry is associated with a weight based on the model, we can give a full precision/recall curve (see figure 3). We summarize these curves with both the best F1 over all possible thresholds and various precisions px at recalls x. All reported numbers exclude evaluation on the seed lexicon entries, regardless of how those seeds are derived or whether they are correct. In all experiments, unless noted otherwise, we used a seed of size 100 obtained from Le and considered lexicons between the top n = 2, 000 most frequent source and target noun word types which were not in the seed lexicon; each system proposed an already-ranked one-to-one translation lexicon amongst these n words. Where applicable, we compare against the EDITDIST baseline, which solves a maximum bipartite matching problem where edge weights are normalized edit distances. We will use MCCA (for matching CCA) to denote our model using the optimal feature set (see section 5.3). 5 Features In this section, we explore feature representations of word types in our model. Recall that fS(·) and fT (·) map source and target word types to vectors in RdS and RdT , respectively (see section 2). The features used in each representation are defined identically and derived only from the appropriate monolingual corpora. 
For a concrete example of a word type to feature vector mapping, see figure 2. 5.1 Orthographic Features For closely related languages, such as English and Spanish, translation pairs often share many orthographic features. One direct way to capture orthographic similarity between word pairs is edit distance. Running EDITDIST (see section 4.3) on EN775 ES-W yielded 61.1 p0.33, but precision quickly degrades for higher recall levels (see EDITDIST in table 1). Nevertheless, when available, orthographic clues are strong indicators of translation pairs. We can represent orthographic features of a word type w by assigning a feature to each substring of length ≤3. Note that MCCA can learn regular orthographic correspondences between source and target words, which is something edit distance cannot capture (see table 5). Indeed, running our MCCA model with only orthographic features on EN-ESW, labeled ORTHO in table 1, yielded 80.1 p0.33, a 31% error-reduction over EDITDIST in p0.33. 5.2 Context Features While orthographic features are clearly effective for historically related language pairs, they are more limited for other language pairs, where we need to appeal to other clues. One non-orthographic clue that word types s and t form a translation pair is that there is a strong correlation between the source words used with s and the target words used with t. To capture this information, we define context features for each word type w, consisting of counts of nouns which occur within a window of size 4 around w. Consider the translation pair (time, tiempo) illustrated in figure 2. As we become more confident about other translation pairs which have active period and periodico context features, we learn that translation pairs tend to jointly generate these features, which leads us to believe that time and tiempo might be generated by a common underlying concept vector (see section 2).13 Using context features alone on EN-ES-W, our MCCA model (labeled CONTEXT in table 1) yielded a 80.2 p0.33. It is perhaps surprising that context features alone, without orthographic information, can yield a best-F1comparable to EDITDIST. 5.3 Combining Features We can of course combine context and orthographic features. Doing so yielded 89.03 p0.33 (labeled MCCA in table 1); this represents a 46.4% error reduction in p0.33 over the EDITDIST baseline. For the remainder of this work, we will use MCCA to refer 13It is important to emphasize, however, that our current model does not directly relate a word type’s role as a participant in the matching to that word’s role as a context feature. (a) Corpus Variation Setting p0.1 p0.25 p0.33 p0.50 Best-F1 EN-ES-G 75.0 71.2 68.3 —49.0 EN-ES-W 87.2 89.7 89.0 89.7 72.0 EN-ES-D 91.4 94.3 92.3 89.7 63.7 EN-ES-P 97.3 94.8 93.8 92.9 77.0 (b) Seed Lexicon Variation Corpus p0.1 p0.25 p0.33 p0.50 Best-F1 EDITDIST 58.6 62.6 61.1 — 47.4 MCCA 91.4 94.3 92.3 89.7 63.7 MCCA-AUTO 91.2 90.5 91.8 77.5 61.7 (c) Language Variation Languages p0.1 p0.25 p0.33 p0.50 Best-F1 EN-ES 91.4 94.3 92.3 89.7 63.7 EN-FR 94.5 89.1 88.3 78.6 61.9 EN-CH 60.1 39.3 26.8 —30.8 EN-AR 70.0 50.0 31.1 —33.1 Table 2: (a) varying type of corpora used on system performance (section 6.1), (b) using a heuristically chosen seed compared to one taken from the evaluation lexicon (section 6.2), (c) a variety of language pairs (see section 6.3). to our model using both orthographic and context features. 6 Experiments In this section we examine how system performance varies when crucial elements are altered. 
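As a rough sketch of the two feature families described in sections 5.1 and 5.2, the snippet below builds substring features of length at most three and window-of-four noun context counts; tokenization, boundary handling, and any weighting or normalization are not specified in the text, so treat those details as assumptions.

from collections import Counter

def ortho_features(word):
    """One count feature per character substring of length 1-3."""
    feats = Counter()
    for n in range(1, 4):
        for i in range(len(word) - n + 1):
            feats["sub=" + word[i:i + n]] += 1
    return feats

def context_features(word, sentences, is_noun, window=4):
    """Counts of noun types occurring within `window` tokens of `word`,
    aggregated over a monolingual corpus given as a list of token lists.
    is_noun is an assumed helper backed by the POS tagger."""
    feats = Counter()
    for toks in sentences:
        for i, tok in enumerate(toks):
            if tok != word:
                continue
            lo, hi = max(0, i - window), min(len(toks), i + window + 1)
            for j in range(lo, hi):
                if j != i and is_noun(toks[j]):
                    feats["ctx=" + toks[j]] += 1
    return feats

These count vectors play the role of the representations that fS(.) and fT(.) feed into the CCA step.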
6.1 Corpus Variation There are many sources from which we can derive monolingual corpora, and MCCA performance depends on the degree of similarity between corpora. We explored the following levels of relationships between corpora, roughly in order of closest to most distant: • Same Sentences: EN-ES-P • Non-Parallel Similar Content: EN-ES-W • Distinct Sentences, Same Domain: EN-ES-D • Unrelated Corpora: EN-ES-G Our results for all conditions are presented in table 2(a). The predominant trend is that system performance degraded when the corpora diverged in 776 content, presumably due to context features becoming less informative. However, it is notable that even in the most extreme case of disjoint corpora from different time periods and topics (e.g. EN-ES-G), we are still able to recover lexicons of reasonable accuracy. 6.2 Seed Lexicon Variation All of our experiments so far have exploited a small seed lexicon which has been derived from the evaluation lexicon (see section 4.3). In order to explore system robustness to heuristically chosen seed lexicons, we automatically extracted a seed lexicon similarly to Koehn and Knight (2002): we ran EDITDIST on EN-ES-D and took the top 100 most confident translation pairs. Using this automatically derived seed lexicon, we ran our system on EN-ESD as before, evaluating on the top 2,000 noun word types not included in the automatic lexicon.14 Using the automated seed lexicon, and still evaluating against our Wiktionary lexicon, MCCA-AUTO yielded 91.8 p0.33 (see table 2(b)), indicating that our system can produce lexicons of comparable accuracy with a heuristically chosen seed. We should note that this performance represents no knowledge given to the system in the form of gold seed lexicon entries. 6.3 Language Variation We also explored how system performance varies for language pairs other than English-Spanish. On English-French, for the disjoint EN-FR-D corpus (described in section 4.1), MCCA yielded 88.3 p0.33 (see table 2(c) for more performance measures). This verified that our model can work for another closely related language-pair on which no model development was performed. One concern is how our system performs on language pairs where orthographic features are less applicable. Results on disjoint English-Chinese and English-Arabic are given as EN-CH-D and EN-AR in table 2(c), both using only context features. In these cases, MCCA yielded much lower precisions of 26.8 and 31.0 p0.33, respectively. For both languages, performance degraded compared to EN-ES14Note that the 2,000 words evaluated here were not identical to the words tested on when the seed lexicon is derived from the evaluation lexicon. (a) English-Spanish Rank Source Target Correct 1. education educación Y 2. pacto pact Y 3. stability estabilidad Y 6. corruption corrupción Y 7. tourism turismo Y 9. organisation organización Y 10. convenience conveniencia Y 11. syria siria Y 12. cooperation cooperación Y 14. culture cultura Y 21. protocol protocolo Y 23. north norte Y 24. health salud Y 25. action reacción N (b) English-French Rank Source Target Correct 3. xenophobia xénophobie Y 4. corruption corruption Y 5. subsidiarity subsidiarité Y 6. programme programme-cadre N 8. traceability traçabilité Y (c) English-Chinese Rank Source Target Correct 1. prices !" Y 2. network #$ Y 3. population %& Y 4. reporter ' N 5. oil () Y Table 3: Sample output from our (a) Spanish, (b) French, and (c) Chinese systems. 
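The heuristically derived seed of section 6.2 can be sketched as follows; this greedy version is only an illustrative stand-in (the baseline runs edit distance inside a bipartite matching), and the similarity normalization is an assumption.

def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def seed_lexicon(src_words, tgt_words, size=100):
    """Greedy seed: score all pairs by normalized edit similarity and keep the
    `size` best pairs, using each word type at most once. Quadratic in the
    vocabulary, which is fine at the 2,000-type scale used here."""
    scored = []
    for s in src_words:
        for t in tgt_words:
            sim = 1.0 - edit_distance(s, t) / max(len(s), len(t))
            scored.append((sim, s, t))
    scored.sort(reverse=True)
    seed, used_s, used_t = [], set(), set()
    for sim, s, t in scored:
        if s not in used_s and t not in used_t:
            seed.append((s, t))
            used_s.add(s)
            used_t.add(t)
        if len(seed) == size:
            break
    return seed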
We present the highest confidence system predictions, where the only editing done is to ignore predictions which consist of identical source and target words. D and EN-FR-D, presumably due in part to the lack of orthographic features. However, MCCA still achieved surprising precision at lower recall levels. For instance, at p0.1, MCCA yielded 60.1 and 70.0 on Chinese and Arabic, respectively. Figure 3 shows the highest-confidence outputs in several languages. 6.4 Comparison To Previous Work There has been previous work in extracting translation pairs from non-parallel corpora (Rapp, 1995; Fung, 1995; Koehn and Knight, 2002), but generally not in as extreme a setting as the one considered here. Due to unavailability of data and specificity in experimental conditions and evaluations, it is not possible to perform exact comparisons. How777 (a) Example Non-Cognate Pairs health salud traceability rastreabilidad youth juventud report informe advantages ventajas (b) Interesting Incorrect Pairs liberal partido Kirkhope Gorsel action reacci´on Albanians Bosnia a.m. horas Netherlands Breta˜na Table 4: System analysis on EN-ES-W: (a) non-cognate pairs proposed by our system, (b) hand-selected representative errors. (a) Orthographic Feature Source Feat. Closest Target Feats. Example Translation #st #es, est (statue, estatua) ty# ad#, d# (felicity, felicidad) ogy g´ıa, g´ı (geology, geolog´ıa) (b) Context Feature Source Feat. Closest Context Features party partido, izquierda democrat socialistas, dem´ocratas beijing pek´ın, kioto Table 5: Hand selected examples of source and target features which are close in canonical space: (a) orthographic feature correspondences, (b) context features. ever, we attempted to run an experiment as similar as possible in setup to Koehn and Knight (2002), using English Gigaword and German Europarl. In this setting, our MCCA system yielded 61.7% accuracy on the 186 most confident predictions compared to 39% reported in Koehn and Knight (2002). 7 Analysis We have presented a novel generative model for bilingual lexicon induction and presented results under a variety of data conditions (section 6.1) and languages (section 6.3) showing that our system can produce accurate lexicons even in highly adverse conditions. In this section, we broadly characterize and analyze the behavior of our system. We manually examined the top 100 errors in the English-Spanish lexicon produced by our system on EN-ES-W. Of the top 100 errors: 21 were correct translations not contained in the Wiktionary lexicon (e.g. pintura to painting), 4 were purely morphological errors (e.g. airport to aeropuertos), 30 were semantically related (e.g. basketball to b´eisbol), 15 were words with strong orthographic similarities (e.g. coast to costas), and 30 were difficult to categorize and fell into none of these categories. Since many of our ‘errors’ actually represent valid translation pairs not contained in our extracted dictionary, we supplemented our evaluation lexicon with one automatically derived from 100k sentences of parallel Europarl data. We ran the intersected HMM wordalignment model (Liang et al., 2008) and added (s, t) to the lexicon if s was aligned to t at least three times and more than any other word. Evaluating against the union of these lexicons yielded 98.0 p0.33, a significant improvement over the 92.3 using only the Wiktionary lexicon. Of the true errors, the most common arose from semantically related words which had strong context feature correlations (see table 4(b)). 
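The alignment-derived lexicon used both for the English-Arabic evaluation and to supplement the Wiktionary lexicon follows a simple counting rule; below is a sketch under the assumption that the aligned word pairs have already been read off the intersected HMM alignments.

from collections import Counter, defaultdict

def lexicon_from_alignments(aligned_pairs, min_count=3):
    """aligned_pairs: iterable of (s, t) word pairs read off the word
    alignments of a parallel corpus. Adds (s, t) if s was aligned to t at
    least `min_count` times and more often than to any other target word."""
    counts = defaultdict(Counter)
    for s, t in aligned_pairs:
        counts[s][t] += 1
    lexicon = set()
    for s, tgt in counts.items():
        ranked = tgt.most_common()
        best_t, best_c = ranked[0]
        runner_up = ranked[1][1] if len(ranked) > 1 else 0
        if best_c >= min_count and best_c > runner_up:
            lexicon.add((s, best_t))
    return lexicon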
We also explored the relationships our model learns between features of different languages. We projected each source and target feature into the shared canonical space, and for each projected source feature we examined the closest projected target features. In table 5(a), we present some of the orthographic feature relationships learned by our system. Many of these relationships correspond to phonological and morphological regularities such as the English suffix ing mapping to the Spanish suffix g´ıa. In table 5(b), we present context feature correspondences. Here, the broad trend is for words which are either translations or semantically related across languages to be close in canonical space. 8 Conclusion We have presented a generative model for bilingual lexicon induction based on probabilistic CCA. Our experiments show that high-precision translations can be mined without any access to parallel corpora. It remains to be seen how such lexicons can be best utilized, but they invite new approaches to the statistical translation of resource-poor languages. 778 References Francis R. Bach and Michael I. Jordan. 2006. A probabilistic interpretation of canonical correlation analysis. Technical report, University of California, Berkeley. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1994. The mathematic of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263–311. Mona Diab, Kadri Hacioglu, and Daniel Jurafsky. 2004. Automatic tagging of arabic text: From raw text to base phrase chunks. In HLT-NAACL. Pascale Fung. 1995. Compiling bilingual lexicon entries from a non-parallel english-chinese corpus. In Third Annual Workshop on Very Large Corpora. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In COLING-ACL. David R. Hardoon, Sandor Szedmak, and John ShaweTaylor. 2003. Canonical correlation analysis an overview with application to learning methods. Technical Report CSD-TR-03-02, Royal Holloway University of London. Philipp Koehn and Kevin Knight. 2002. Learning a translation lexicon from monolingual corpora. In Proceedings of ACL Workshop on Unsupervised Lexical Acquisition. P. Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Proceedings of AMTA 2004. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit. H. W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly. P. Liang, D. Klein, and M. I. Jordan. 2008. Agreementbased learning. In NIPS. Reinhard Rapp. 1995. Identifying word translation in non-parallel texts. In ACL. Helmut Schmid. 1994. Probabilistic part-of-speech tagging using decision trees. In International Conference on New Methods in Language Processing. N. Smith and J. Eisner. 2006. Annealing structural bias in multilingual weighted grammar induction. In ACL. L. G. Valiant. 1979. The complexity of computing the permanent. Theoretical Computer Science, 8:189– 201. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In ACL. 779
Proceedings of ACL-08: HLT, pages 780–788, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Pivot Approach for Extracting Paraphrase Patterns from Bilingual Corpora Shiqi Zhao1, Haifeng Wang2, Ting Liu1, Sheng Li1 1Harbin Institute of Technology, Harbin, China {zhaosq,tliu,lisheng}@ir.hit.edu.cn 2Toshiba (China) Research and Development Center, Beijing, China [email protected] Abstract Paraphrase patterns are useful in paraphrase recognition and generation. In this paper, we present a pivot approach for extracting paraphrase patterns from bilingual parallel corpora, whereby the English paraphrase patterns are extracted using the sentences in a foreign language as pivots. We propose a loglinear model to compute the paraphrase likelihood of two patterns and exploit feature functions based on maximum likelihood estimation (MLE) and lexical weighting (LW). Using the presented method, we extract over 1,000,000 pairs of paraphrase patterns from 2M bilingual sentence pairs, the precision of which exceeds 67%. The evaluation results show that: (1) The pivot approach is effective in extracting paraphrase patterns, which significantly outperforms the conventional method DIRT. Especially, the log-linear model with the proposed feature functions achieves high performance. (2) The coverage of the extracted paraphrase patterns is high, which is above 84%. (3) The extracted paraphrase patterns can be classified into 5 types, which are useful in various applications. 1 Introduction Paraphrases are different expressions that convey the same meaning. Paraphrases are important in plenty of natural language processing (NLP) applications, such as question answering (QA) (Lin and Pantel, 2001; Ravichandran and Hovy, 2002), machine translation (MT) (Kauchak and Barzilay, 2006; Callison-Burch et al., 2006), multi-document summarization (McKeown et al., 2002), and natural language generation (Iordanskaja et al., 1991). Paraphrase patterns are sets of semantically equivalent patterns, in which a pattern generally contains two parts, i.e., the pattern words and slots. For example, in the pattern “X solves Y”, “solves” is the pattern word, while “X” and “Y” are slots. One can generate a text unit (phrase or sentence) by filling the pattern slots with specific words. Paraphrase patterns are useful in both paraphrase recognition and generation. In paraphrase recognition, if two text units match a pair of paraphrase patterns and the corresponding slot-fillers are identical, they can be identified as paraphrases. In paraphrase generation, a text unit that matches a pattern P can be rewritten using the paraphrase patterns of P. A variety of methods have been proposed on paraphrase patterns extraction (Lin and Pantel, 2001; Ravichandran and Hovy, 2002; Shinyama et al., 2002; Barzilay and Lee, 2003; Ibrahim et al., 2003; Pang et al., 2003; Szpektor et al., 2004). However, these methods have some shortcomings. Especially, the precisions of the paraphrase patterns extracted with these methods are relatively low. In this paper, we extract paraphrase patterns from bilingual parallel corpora based on a pivot approach. We assume that if two English patterns are aligned with the same pattern in another language, they are likely to be paraphrase patterns. This assumption is an extension of the one presented in (Bannard and Callison-Burch, 2005), which was used for deriving phrasal paraphrases from bilingual corpora. 
Our method involves three steps: (1) corpus preprocessing, including English monolingual dependency 780 parsing and English-foreign language word alignment, (2) aligned patterns induction, which produces English patterns along with the aligned pivot patterns in the foreign language, (3) paraphrase patterns extraction, in which paraphrase patterns are extracted based on a log-linear model. Our contributions are as follows. Firstly, we are the first to use a pivot approach to extract paraphrase patterns from bilingual corpora, though similar methods have been used for learning phrasal paraphrases. Our experiments show that the pivot approach significantly outperforms conventional methods. Secondly, we propose a log-linear model for computing the paraphrase likelihood. Besides, we use feature functions based on maximum likelihood estimation (MLE) and lexical weighting (LW), which are effective in extracting paraphrase patterns. Using the proposed approach, we extract over 1,000,000 pairs of paraphrase patterns from 2M bilingual sentence pairs, the precision of which is above 67%. Experimental results show that the pivot approach evidently outperforms DIRT, a well known method that extracts paraphrase patterns from monolingual corpora (Lin and Pantel, 2001). Besides, the log-linear model is more effective than the conventional model presented in (Bannard and CallisonBurch, 2005). In addition, the coverage of the extracted paraphrase patterns is high, which is above 84%. Further analysis shows that 5 types of paraphrase patterns can be extracted with our method, which can by used in multiple NLP applications. The rest of this paper is structured as follows. Section 2 reviews related work on paraphrase patterns extraction. Section 3 presents our method in detail. We evaluate the proposed method in Section 4, and finally conclude this paper in Section 5. 2 Related Work Paraphrase patterns have been learned and used in information extraction (IE) and answer extraction of QA. For example, Lin and Pantel (2001) proposed a method (DIRT), in which they obtained paraphrase patterns from a parsed monolingual corpus based on an extended distributional hypothesis, where if two paths in dependency trees tend to occur in similar contexts it is hypothesized that the meanings of the paths are similar. The examples of obtained para(1) X solves Y Y is solved by X X finds a solution to Y ...... (2) born in <ANSWER> , <NAME> <NAME> was born on <ANSWER> , <NAME> ( <ANSWER> ...... (3) ORGANIZATION decides φ ORGANIZATION confirms φ ...... Table 1: Examples of paraphrase patterns extracted with the methods of Lin and Pantel (2001), Ravichandran and Hovy (2002), and Shinyama et al. (2002). phrase patterns are shown in Table 1 (1). Based on the same hypothesis as above, some methods extracted paraphrase patterns from the web. For instance, Ravichandran and Hovy (2002) defined a question taxonomy for their QA system. They then used hand-crafted examples of each question type as queries to retrieve paraphrase patterns from the web. For instance, for the question type “BIRTHDAY”, The paraphrase patterns produced by their method can be seen in Table 1 (2). Similar methods have also been used by Ibrahim et al. (2003) and Szpektor et al. (2004). The main disadvantage of the above methods is that the precisions of the learned paraphrase patterns are relatively low. For instance, the precisions of the paraphrase patterns reported in (Lin and Pantel, 2001), (Ibrahim et al., 2003), and (Szpektor et al., 2004) are lower than 50%. 
Ravichandran and Hovy (2002) did not directly evaluate the precision of the paraphrase patterns extracted using their method. However, the performance of their method is dependent on the hand-crafted queries for web mining. Shinyama et al. (2002) presented a method that extracted paraphrase patterns from multiple news articles about the same event. Their method was based on the assumption that NEs are preserved across paraphrases. Thus the method acquired paraphrase patterns from sentence pairs that share comparable NEs. Some examples can be seen in Table 1 (3). The disadvantage of this method is that it greatly relies on the number of NEs in sentences. The preci781 start Palestinian suicide bomber blew himself up in SLOT1 on SLOT2 killing SLOT3 other people and injuring wounding SLOT4 end detroit the *e* a ‘s *e* building building in detroit flattened ground levelled to blasted leveled *e* was reduced razed leveled to down rubble into ashes *e* to *e* (1) (2) Figure 1: Examples of paraphrase patterns extracted by Barzilay and Lee (2003) and Pang et al. (2003). sion of the extracted patterns may sharply decrease if the sentences do not contain enough NEs. Barzilay and Lee (2003) applied multi-sequence alignment (MSA) to parallel news sentences and induced paraphrase patterns for generating new sentences (Figure 1 (1)). Pang et al. (2003) built finite state automata (FSA) from semantically equivalent translation sets based on syntactic alignment. The learned FSAs could be used in paraphrase representation and generation (Figure 1 (2)). Obviously, it is difficult for a sentence to match such complicated patterns, especially if the sentence is not from the same domain in which the patterns are extracted. Bannard and Callison-Burch (2005) first exploited bilingual corpora for phrasal paraphrase extraction. They assumed that if two English phrases e1 and e2 are aligned with the same phrase c in another language, these two phrases may be paraphrases. Specifically, they computed the paraphrase probability in terms of the translation probabilities: p(e2|e1) = X c pMLE(c|e1)pMLE(e2|c) (1) In Equation (1), pMLE(c|e1) and pMLE(e2|c) are the probabilities of translating e1 to c and c to e2, which are computed based on MLE: pMLE(c|e1) = count(c, e1) P c′ count(c′, e1) (2) where count(c, e1) is the frequency count that phrases c and e1 are aligned in the corpus. pMLE(e2|c) is computed in the same way. This method proved effective in extracting high quality phrasal paraphrases. As a result, we extend it to paraphrase pattern extraction in this paper. STE(take) should We take market into consideration take market into consideration take into consideration PSTE(take) first TE demand demand Figure 2: Examples of a subtree and a partial subtree. 3 Proposed Method 3.1 Corpus Preprocessing In this paper, we use English paraphrase patterns extraction as a case study. An English-Chinese (EC) bilingual parallel corpus is employed for training. The Chinese part of the corpus is used as pivots to extract English paraphrase patterns. We conduct word alignment with Giza++ (Och and Ney, 2000) in both directions and then apply the grow-diag heuristic (Koehn et al., 2005) for symmetrization. Since the paraphrase patterns are extracted from dependency trees, we parse the English sentences in the corpus with MaltParser (Nivre et al., 2007). Let SE be an English sentence, TE the parse tree of SE, e a word of SE, we define the subtree and partial subtree following the definitions in (Ouangraoua et al., 2007). 
In detail, a subtree STE(e) is a particular connected subgraph of the tree TE, which is rooted at e and includes all the descendants of e. A partial subtree PSTE(e) is a connected subgraph of the subtree STE(e), which is rooted at e but does not necessarily include all the descendants of e. For instance, for the sentence “We should first take market demand into consideration”, STE(take) and PSTE(take) are shown in Figure 21. 3.2 Aligned Patterns Induction To induce the aligned patterns, we first induce the English patterns using the subtrees and partial subtrees. Then, we extract the pivot Chinese patterns aligning to the English patterns. 1Note that, a subtree may contain several partial subtrees. In this paper, all the possible partial subtrees are considered when extracting paraphrase patterns. 782 Algorithm 1: Inducing an English pattern 1: Input: words in STE(e) : wiwi+1...wj 2: Input: PE(e) = φ 3: For each wk (i ≤k ≤j) 4: If wk is in PSTE(e) 5: Append wk to the end of PE(e) 6: Else 7: Append POS(wk) to the end of PE(e) 8: End For Algorithm 2: Inducing an aligned pivot pattern 1: Input: SC = t1t2...tn 2: Input: PC = φ 3: For each tl (1 ≤l ≤n) 4: If tl is aligned with wk in SE 5: If wk is a word in PE(e) 6: Append tl to the end of PC 7: If POS(wk) is a slot in PE(e) 8: Append POS(wk) to the end of PC 9: End For Step-1 Inducing English patterns. In this paper, an English pattern PE(e) is a string comprising words and part-of-speech (POS) tags. Our intuition for inducing an English pattern is that a partial subtree PSTE(e) can be viewed as a unit that conveys a definite meaning, though the words in PSTE(e) may not be continuous. For example, PSTE(take) in Figure 2 contains words “take ... into consideration”. Therefore, we may extract “take X into consideration” as a pattern. In addition, the words that are in STE(e) but not in PSTE(e) (denoted as STE(e)/PSTE(e)) are also useful for inducing patterns, since they can constrain the pattern slots. In the example in Figure 2, the word “demand” indicates that a noun can be filled in the slot X and the pattern may have the form “take NN into consideration”. Based on this intuition, we induce an English pattern PE(e) as in Algorithm 12. For the example in Figure 2, the generated pattern PE(take) is “take NN NN into consideration”. Note that the patterns induced in this way are quite specific, since the POS of each word in STE(e)/PSTE(e) forms a slot. Such patterns are difficult to be matched in applications. We there2POS(wk) in Algorithm 1 denotes the POS tag of wk. NN_1 考虑 NN_2 NN_1 考虑 NN_2 NN_1 NN_2 considered by is NN_1 consider NN_2 Figure 3: Aligned patterns with numbered slots. fore take an additional step to simplify the patterns. Let ei and ej be two words in STE(e)/PSTE(e), whose POS posi and posj are slots in PE(e). If ei is a descendant of ej in the parse tree, we remove posi from PE(e). For the example above, the POS of “market” is removed, since it is the descendant of “demand”, whose POS also forms a slot. The simplified pattern is “take NN into consideration”. Step-2 Extracting pivot patterns. For each English pattern PE(e), we extract an aligned Chinese pivot pattern PC. Let a Chinese sentence SC be the translation of the English sentence SE, PE(e) a pattern induced from SE, we extract the pivot pattern PC aligning to PE(e) as in Algorithm 2. Note that the Chinese patterns are not extracted from parse trees. They are only sequences of Chinese words and POSes that are aligned with English patterns. 
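A rough Python rendering of Algorithm 1 plus the slot-simplification step might look like the snippet below; the tree-access helpers (pos_of, is_descendant) are hypothetical stand-ins for whatever the parser's API provides.

def induce_pattern(subtree_words, partial_indices, pos_of, is_descendant):
    """subtree_words: [(index, word)] for ST_E(e) in sentence order.
    partial_indices: set of token indices belonging to PST_E(e).
    pos_of(i): POS tag of token i; is_descendant(i, j): True if token i is a
    descendant of token j in the dependency tree (assumed helpers)."""
    pattern, slots = [], []
    for idx, word in subtree_words:
        if idx in partial_indices:
            pattern.append((idx, word))
        else:
            pattern.append((idx, pos_of(idx)))   # this POS becomes a slot
            slots.append(idx)
    # Simplification: drop a slot whose token is a descendant of another slot
    # token, so only the highest slots remain.
    drop = {i for i in slots
            if any(i != j and is_descendant(i, j) for j in slots)}
    return [tok for idx, tok in pattern if idx not in drop]

# For "We should first take market demand into consideration", ST(take) with
# PST(take) = {take, into, consideration} yields
# ['take', 'NN', 'into', 'consideration'], matching the worked example above.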
A pattern may contain two or more slots sharing the same POS. To distinguish them, we assign a number to each slot in the aligned E-C patterns. In detail, the slots having identical POS in PC are numbered incrementally (i.e., 1,2,3...), while each slot in PE(e) is assigned the same number as its aligned slot in PC. The examples of the aligned patterns with numbered slots are illustrated in Figure 3. 3.3 Paraphrase Patterns Extraction As mentioned above, if patterns e1 and e2 are aligned with the same pivot pattern c, e1 and e2 may be paraphrase patterns. The paraphrase likelihood can be computed using Equation (1). However, we find that using only the MLE based probabilities can suffer from data sparseness. In order to exploit more and richer information to estimate the paraphrase likelihood, we propose a log-linear model: score(e2|e1) = X c exp[ N X i=1 λihi(e1, e2, c)] (3) where hi(e1, e2, c) is a feature function and λi is the 783 weight. In this paper, 4 feature functions are used in our log-linear model, which include: h1(e1, e2, c) = scoreMLE(c|e1) h2(e1, e2, c) = scoreMLE(e2|c) h3(e1, e2, c) = scoreLW (c|e1) h4(e1, e2, c) = scoreLW (e2|c) Feature functions h1(e1, e2, c) and h2(e1, e2, c) are based on MLE. scoreMLE(c|e) is computed as: scoreMLE(c|e) = log pMLE(c|e) (4) scoreMLE(e|c) is computed in the same way. h3(e1, e2, c) and h4(e1, e2, c) are based on LW. LW was originally used to validate the quality of a phrase translation pair in MT (Koehn et al., 2003). It checks how well the words of the phrases translate to each other. This paper uses LW to measure the quality of aligned patterns. We define scoreLW (c|e) as the logarithm of the lexical weight3: scoreLW (c|e) = 1 n n X i=1 log( 1 |{j|(i, j) ∈a}| X ∀(i,j)∈a w(ci|ej)) (5) where a denotes the word alignment between c and e. n is the number of words in c. ci and ej are words of c and e. w(ci|ej) is computed as follows: w(ci|ej) = count(ci, ej) P c′ i count(c′ i, ej) (6) where count(ci, ej) is the frequency count of the aligned word pair (ci, ej) in the corpus. scoreLW (e|c) is computed in the same manner. In our experiments, we set a threshold T. If the score between e1 and e2 based on Equation (3) exceeds T, e2 is extracted as the paraphrase of e1. 3.4 Parameter Estimation Five parameters need to be estimated, i.e., λ1, λ2, λ3, λ4 in Equation (3), and the threshold T. To estimate the parameters, we first construct a development set. In detail, we randomly sample 7,086 3The logarithm of the lexical weight is divided by n so as not to penalize long patterns. groups of aligned E-C patterns that are obtained as described in Section 3.2. The English patterns in each group are all aligned with the same Chinese pivot pattern. We then extract paraphrase patterns from the aligned patterns as described in Section 3.3. In this process, we set λi = 1 (i = 1, ..., 4) and assign T a minimum value, so as to obtain all possible paraphrase patterns. A total of 4,162 pairs of paraphrase patterns have been extracted and manually labeled as “1” (correct paraphrase patterns) or “0” (incorrect). Here, two patterns are regarded as paraphrase patterns if they can generate paraphrase fragments by filling the corresponding slots with identical words. We use gradient descent algorithm (Press et al., 1992) to estimate the parameters. 
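The scoring model of equations (3) through (6) can be sketched as below; the count tables, alignment links, and pivot index are hypothetical stand-ins for the structures a real training pipeline would produce, and only the shape of the computation is meant to be faithful.

import math

def score_mle(count_pair, count_marginal, x, given):
    # eq. (4): log of the MLE translation probability p(x | given)
    return math.log(count_pair[(x, given)] / count_marginal[given])

def score_lw(c_words, e_words, links, w):
    # eq. (5): per-word average of the log mean lexical probability.
    # links[i] lists the e positions aligned to position i of c, and
    # w(ci, ej) is the lexical probability of eq. (6). Assumes every word
    # of c carries at least one link (e.g. via a NULL token).
    total = 0.0
    for i, ci in enumerate(c_words):
        probs = [w(ci, e_words[j]) for j in links[i]]
        total += math.log(sum(probs) / len(probs))
    return total / len(c_words)

def paraphrase_score(e1, e2, pivots, features, lambdas):
    # eq. (3): sum over shared pivot patterns c of exp(sum_i lambda_i * h_i),
    # where features(e1, e2, c) returns [h1, h2, h3, h4].
    return sum(
        math.exp(sum(l * h for l, h in zip(lambdas, features(e1, e2, c))))
        for c in pivots[e1] & pivots[e2])

A pattern e2 is kept as a paraphrase of e1 whenever paraphrase_score exceeds the tuned threshold T.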
For each set of parameters, we compute the precision P, recall R, and f-measure F as: P = |set1∩set2| |set1| , R = |set1∩set2| |set2| , F = 2PR P+R, where set1 denotes the set of paraphrase patterns extracted under the current parameters. set2 denotes the set of manually labeled correct paraphrase patterns. We select the parameters that can maximize the F-measure on the development set4. 4 Experiments The E-C parallel corpus in our experiments was constructed using several LDC bilingual corpora5. After filtering sentences that are too long (> 40 words) or too short (< 5 words), 2,048,009 pairs of parallel sentences were retained. We used two constraints in the experiments to improve the efficiency of computation. First, only subtrees containing no more than 10 words were used to induce English patterns. Second, although any POS tag can form a slot in the induced patterns, we only focused on three kinds of POSes in the experiments, i.e., nouns (tags include NN, NNS, NNP, NNPS), verbs (VB, VBD, VBG, VBN, VBP, VBZ), and adjectives (JJ, JJS, JJR). In addition, we constrained that a pattern must contain at least one content word 4The parameters are: λ1 = 0.0594137, λ2 = 0.995936, λ3 = −0.0048954, λ4 = 1.47816, T = −10.002. 5The corpora include LDC2000T46, LDC2000T47, LDC2002E18, LDC2002T01, LDC2003E07, LDC2003E14, LDC2003T17, LDC2004E12, LDC2004T07, LDC2004T08, LDC2005E83, LDC2005T06, LDC2005T10, LDC2006E24, LDC2006E34, LDC2006E85, LDC2006E92, LDC2006T04, LDC2007T02, LDC2007T09. 784 Method #PP (pairs) Precision LL-Model 1,058,624 67.03% MLE-Model 1,015,533 60.60% DIRT top-1 1,179 19.67% DIRT top-5 5,528 18.73% Table 2: Comparison of paraphrasing methods. so as to filter patterns like “the [NN 1]”. 4.1 Evaluation of the Log-linear Model As previously mentioned, in the log-linear model of this paper, we use both MLE based and LW based feature functions. In this section, we evaluate the log-linear model (LL-Model) and compare it with the MLE based model (MLE-Model) presented by Bannard and Callison-Burch (2005)6. We extracted paraphrase patterns using two models, respectively. From the results of each model, we randomly picked 3,000 pairs of paraphrase patterns to evaluate the precision. The 6,000 pairs of paraphrase patterns were mixed and presented to the human judges, so that the judges cannot know by which model each pair was produced. The sampled patterns were then manually labeled and the precision was computed as described in Section 3.4. The number of the extracted paraphrase patterns (#PP) and the precision are depicted in the first two lines of Table 2. We can see that the numbers of paraphrase patterns extracted using the two models are comparable. However, the precision of LLModel is significantly higher than MLE-Model. Actually, MLE-Model is a special case of LLModel and the enhancement of the precision is mainly due to the use of LW based features. It is not surprising, since Bannard and CallisonBurch (2005) have pointed out that word alignment error is the major factor that influences the performance of the methods learning paraphrases from bilingual corpora. The LW based features validate the quality of word alignment and assign low scores to those aligned E-C pattern pairs with incorrect alignment. Hence the precision can be enhanced. 6In this experiment, we also estimated a threshold T ′ for MLE-Model using the development set (T ′ = −5.1). The pattern pairs whose score based on Equation (1) exceed T ′ were extracted as paraphrase patterns. 
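The development-set objective just described amounts to recomputing set1 for each candidate parameter setting and scoring it against the labeled pairs; the grid search below is only an illustrative stand-in for the gradient-descent search actually used.

def f_measure(extracted, gold):
    # set1 = extracted pairs, set2 = manually labeled correct pairs
    if not extracted or not gold:
        return 0.0
    hit = len(extracted & gold)
    if hit == 0:
        return 0.0
    p, r = hit / len(extracted), hit / len(gold)
    return 2 * p * r / (p + r)

def tune(candidates, dev_pairs, gold, score_fn):
    """candidates: iterable of (lambdas, T) settings; score_fn(pair, lambdas)
    gives the log-linear score of a labeled development pair. Returns the
    setting whose extracted set maximizes F on the development set."""
    def f_of(setting):
        lambdas, threshold = setting
        extracted = {p for p in dev_pairs if score_fn(p, lambdas) > threshold}
        return f_measure(extracted, gold)
    return max(candidates, key=f_of)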
4.2 Comparison with DIRT It is necessary to compare our method with another paraphrase patterns extraction method. However, it is difficult to find methods that are suitable for comparison. Some methods only extract paraphrase patterns using news articles on certain topics (Shinyama et al., 2002; Barzilay and Lee, 2003), while some others need seeds as initial input (Ravichandran and Hovy, 2002). In this paper, we compare our method with DIRT (Lin and Pantel, 2001), which does not need to specify topics or input seeds. As mentioned in Section 2, DIRT learns paraphrase patterns from a parsed monolingual corpus based on an extended distributional hypothesis. In our experiment, we implemented DIRT and extracted paraphrase patterns from the English part of our bilingual parallel corpus. Our corpus is smaller than that reported in (Lin and Pantel, 2001). To alleviate the data sparseness problem, we only kept patterns appearing more than 10 times in the corpus for extracting paraphrase patterns. Different from our method, no threshold was set in DIRT. Instead, the extracted paraphrase patterns were ranked according to their scores. In our experiment, we kept top-5 paraphrase patterns for each target pattern. From the extracted paraphrase patterns, we sampled 600 groups for evaluation. Each group comprises a target pattern and its top-5 paraphrase patterns. The sampled data were manually labeled and the top-n precision was calculated as PN i=1 ni N×n , where N is the number of groups and ni is the number of correct paraphrase patterns in the top-n paraphrase patterns of the i-th group. The top-1 and top-5 results are shown in the last two lines of Table 2. Although there are more correct patterns in the top-5 results, the precision drops sequentially from top-1 to top-5 since the denominator of top-5 is 4 times larger than that of top-1. Obviously, the number of the extracted paraphrase patterns is much smaller than that extracted using our method. Besides, the precision is also much lower. We believe that there are two reasons. First, the extended distributional hypothesis is not strict enough. Patterns sharing similar slot-fillers do not necessarily have the same meaning. They may even have the opposite meanings. For example, “X worsens Y” and “X solves Y” were extracted as para785 Type Count Example trivial change 79 (e1) all the members of [NNPS 1] (e2) all members of [NNPS 1] phrase replacement 267 (e1) [JJ 1] economic losses (e2) [JJ 1] financial losses phrase reordering 56 (e1) [NN 1] definition (e2) the definition of [NN 1] structural paraphrase 71 (e1) the admission of [NNP 1] to the wto (e2) the [NNP 1] ’s wto accession information + or 27 (e1) [NNS 1] are in fact women (e2) [NNS 1] are women Table 3: The statistics and examples of each type of paraphrase patterns. phrase patterns by DIRT. The other reason is that DIRT can only be effective for patterns appearing plenty of times in the corpus. In other words, it seriously suffers from data sparseness. We believe that DIRT can perform better on a larger corpus. 4.3 Pivot Pattern Constraints As described in Section 3.2, we constrain that the pattern words of an English pattern e must be extracted from a partial subtree. However, we do not have such constraint on the Chinese pivot patterns. Hence, it is interesting to investigate whether the performance can be improved if we constrain that the pattern words of a pivot pattern c must also be extracted from a partial subtree. 
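The top-n precision used in the DIRT comparison above reduces to a few lines once the manual judgements are grouped by target pattern; a small sketch:

def top_n_precision(judgement_groups, n):
    """judgement_groups: one list of 0/1 labels per target pattern, ordered by
    the model's ranking of its paraphrases. Implements sum_i n_i / (N * n)."""
    hits = sum(sum(group[:n]) for group in judgement_groups)
    return hits / (len(judgement_groups) * n)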
To conduct the evaluation, we parsed the Chinese sentences of the corpus with a Chinese dependency parser (Liu et al., 2006). We then induced English patterns and extracted aligned pivot patterns. For the aligned patterns (e, c), if c’s pattern words were not extracted from a partial subtree, the pair was filtered. After that, we extracted paraphrase patterns, from which we sampled 3,000 pairs for evaluation. The results show that 736,161 pairs of paraphrase patterns were extracted and the precision is 65.77%. Compared with Table 2, the number of the extracted paraphrase patterns gets smaller and the precision also gets lower. The results suggest that the performance of the method cannot be improved by constraining the extraction of pivot patterns. 4.4 Analysis of the Paraphrase Patterns We sampled 500 pairs of correct paraphrase patterns extracted using our method and analyzed the types. We found that there are 5 types of paraphrase patterns, which include: (1) trivial change, such as changes of prepositions and articles, etc; (2) phrase replacement; (3) phrase reordering; (4) structural paraphrase, which contain both phrase replacements and phrase reordering; (5) adding or reducing information that does not change the meaning. Some statistics and examples are shown in Table 3. The paraphrase patterns are useful in NLP applications. Firstly, over 50% of the paraphrase patterns are in the type of phrase replacement, which can be used in IE pattern reformulation and sentencelevel paraphrase generation. Compared with phrasal paraphrases, the phrase replacements in patterns are more accurate due to the constraints of the slots. The paraphrase patterns in the type of phrase reordering can also be used in IE pattern reformulation and sentence paraphrase generation. Especially, in sentence paraphrase generation, this type of paraphrase patterns can reorder the phrases in a sentence, which can hardly be achieved by the conventional MT-based generation method (Quirk et al., 2004). The structural paraphrase patterns have the advantages of both phrase replacement and phrase reordering. More paraphrase sentences can be generated using these patterns. The paraphrase patterns in the type of “information + and -” are useful in sentence compression and expansion. A sentence matching a long pattern can be compressed by paraphrasing it using shorter patterns. Similarly, a short sentence can be expanded by paraphrasing it using longer patterns. For the 3,000 pairs of test paraphrase patterns, we also investigate the number and type of the pattern slots. The results are summarized in Table 4 and 5. From Table 4, we can see that more than 92% of the paraphrase patterns contain only one slot, just like the examples shown in Table 3. In addition, about 7% of the paraphrase patterns contain two slots, such as “give [NN 1] [NN 2]” vs. “give [NN 2] to [NN 1]”. This result suggests that our method tends to extract short paraphrase patterns, 786 Slot No. #PP Percentage Precision 1-slot 2,780 92.67% 66.51% 2-slots 218 7.27% 73.85% ≥3-slots 2 <1% 50.00% Table 4: The statistics of the numbers of pattern slots. Slot Type #PP Percentage Precision N-slots 2,376 79.20% 66.71% V-slots 273 9.10% 70.33% J-slots 438 14.60% 70.32% Table 5: The statistics of the type of pattern slots. which is mainly because the data sparseness problem is more serious when extracting long patterns. 
From Table 5, we can find that near 80% of the paraphrase patterns contain noun slots, while about 9% and 15% contain verb slots and adjective slots7. This result implies that nouns are the most typical variables in paraphrase patterns. 4.5 Evaluation within Context Sentences In Section 4.1, we have evaluated the precision of the paraphrase patterns without considering context information. In this section, we evaluate the paraphrase patterns within specific context sentences. The open test set includes 119 English sentences. We parsed the sentences with MaltParser and induced patterns as described in Section 3.2. For each pattern e in sentence SE, we searched e’s paraphrase patterns from the database of the extracted paraphrase patterns. The result shows that 101 of the 119 sentences contain at least one pattern that can be paraphrased using the extracted paraphrase patterns, the coverage of which is 84.87%. Furthermore, since a pattern may have several paraphrase patterns, we exploited a method to automatically select the best one in the given context sentence. In detail, a paraphrase pattern e′ of e was reranked based on a language model (LM): score(e′|e, SE) = λscoreLL(e′|e) + (1 −λ)scoreLM(e′|SE) (7) 7Notice that, a pattern may contain more than one type of slots, thus the sum of the percentages is larger than 1. Here, scoreLL(e′|e) denotes the score based on Equation (3). scoreLM(e′|SE) is the LM based score: scoreLM(e′|SE) = 1 nlogPLM(S′ E), where S′ E is the sentence generated by replacing e in SE with e′. The language model in the experiment was a tri-gram model trained using the English sentences in the bilingual corpus. We empirically set λ = 0.7. The selected best paraphrase patterns in context sentences were manually labeled. The context information was also considered by our judges. The result shows that the precision of the best paraphrase patterns is 59.39%. To investigate the contribution of the LM based score, we ran the experiment again with λ = 1 (ignoring the LM based score) and found that the precision is 57.09%. It indicates that the LM based reranking can improve the precision. However, the improvement is small. Further analysis shows that about 70% of the correct paraphrase substitutes are in the type of phrase replacement. 5 Conclusion This paper proposes a pivot approach for extracting paraphrase patterns from bilingual corpora. We use a log-linear model to compute the paraphrase likelihood and exploit feature functions based on MLE and LW. Experimental results show that the pivot approach is effective, which extracts over 1,000,000 pairs of paraphrase patterns from 2M bilingual sentence pairs. The precision and coverage of the extracted paraphrase patterns exceed 67% and 84%, respectively. In addition, the log-linear model with the proposed feature functions significantly outperforms the conventional models. Analysis shows that 5 types of paraphrase patterns are extracted with our method, which are useful in various applications. In the future we wish to exploit more feature functions in the log-linear model. In addition, we will try to make better use of the context information when replacing paraphrase patterns in context sentences. Acknowledgments This research was supported by National Natural Science Foundation of China (60503072, 60575042). We thank Lin Zhao, Xiaohang Qu, and Zhenghua Li for their help in the experiments. 787 References Colin Bannard and Chris Callison-Burch. 2005. Paraphrasing with Bilingual Parallel Corpora. 
In Proceedings of ACL, pages 597-604. Regina Barzilay and Lillian Lee. 2003. Learning to Paraphrase: An Unsupervised Approach Using MultipleSequence Alignment. In Proceedings of HLT-NAACL, pages 16-23. Chris Callison-Burch, Philipp Koehn, and Miles Osborne. 2006. Improved Statistical Machine Translation Using Paraphrases. In Proceedings of HLTNAACL, pages 17-24. Ali Ibrahim, Boris Katz, and Jimmy Lin. 2003. Extracting Structural Paraphrases from Aligned Monolingual Corpora. In Proceedings of IWP, pages 57-64. Lidija Iordanskaja, Richard Kittredge, and Alain Polgu`ere. 1991. Lexical Selection and Paraphrase in a Meaning-Text Generation Model. In C´ecile L. Paris, William R. Swartout, and William C. Mann (Eds.): Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 293-312. David Kauchak and Regina Barzilay. 2006. Paraphrasing for Automatic Evaluation. In Proceedings of HLTNAACL, pages 455-462. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In Proceedings of IWSLT. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of HLT-NAACL, pages 127-133. De-Kang Lin and Patrick Pantel. 2001. Discovery of Inference Rules for Question Answering. In Natural Language Engineering 7(4): 343-360. Ting Liu, Jin-Shan Ma, Hui-Jia Zhu, and Sheng Li. 2006. Dependency Parsing Based on Dynamic Local Optimization. In Proceedings of CoNLL-X, pages 211-215. Kathleen R. Mckeown, Regina Barzilay, David Evans, Vasileios Hatzivassiloglou, Judith L. Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and Summarizing News on a Daily Basis with Columbia’s Newsblaster. In Proceedings of HLT, pages 280-285. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A LanguageIndependent System for Data-Driven Dependency Parsing. In Natural Language Engineering 13(2): 95135. Franz Josef Och and Hermann Ney. 2000. Improved Statistical Alignment Models. In Proceedings of ACL, pages 440-447. A¨ıda Ouangraoua, Pascal Ferraro, Laurent Tichit, and Serge Dulucq. 2007. Local Similarity between Quotiented Ordered Trees. In Journal of Discrete Algorithms 5(1): 23-35. Bo Pang, Kevin Knight, and Daniel Marcu. 2003. Syntax-based Alignment of Multiple Translations: Extracting Paraphrases and Generating New Sentences. In Proceedings of HLT-NAACL, pages 102-109. William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, U.K., 1992, 412-420. Chris Quirk, Chris Brockett, and William Dolan. 2004. Monolingual Machine Translation for Paraphrase Generation. In Proceedings of EMNLP, pages 142149. Deepak Ravichandran and Eduard Hovy. 2002. Learning Surface Text Patterns for a Question Answering System. In Proceedings of ACL, pages 41-47. Yusuke Shinyama, Satoshi Sekine, and Kiyoshi Sudo. 2002. Automatic Paraphrase Acquisition from News Articles. In Proceedings of HLT, pages 40-46. Idan Szpektor, Hristo Tanev, Ido Dagan and Bonaventura Coppola. 2004. Scaling Web-based Acquisition of Entailment Relations. In Proceedings of EMNLP, pages 41-48. 788
Proceedings of ACL-08: HLT, pages 72–80, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Cohesive Phrase-based Decoding for Statistical Machine Translation Colin Cherry∗ Microsoft Research One Microsoft Way Redmond, WA, 98052 [email protected] Abstract Phrase-based decoding produces state-of-theart translations with no regard for syntax. We add syntax to this process with a cohesion constraint based on a dependency tree for the source sentence. The constraint allows the decoder to employ arbitrary, non-syntactic phrases, but ensures that those phrases are translated in an order that respects the source tree’s structure. In this way, we target the phrasal decoder’s weakness in order modeling, without affecting its strengths. To further increase flexibility, we incorporate cohesion as a decoder feature, creating a soft constraint. The resulting cohesive, phrase-based decoder is shown to produce translations that are preferred over non-cohesive output in both automatic and human evaluations. 1 Introduction Statistical machine translation (SMT) is complicated by the fact that words can move during translation. If one assumes arbitrary movement is possible, that alone is sufficient to show the problem to be NPcomplete (Knight, 1999). Syntactic cohesion1 is the notion that all movement occurring during translation can be explained by permuting children in a parse tree (Fox, 2002). Equivalently, one can say that phrases in the source, defined by subtrees in its parse, remain contiguous after translation. Early ∗Work conducted while at the University of Alberta. 1We use the term “syntactic cohesion” throughout this paper to mean what has previously been referred to as “phrasal cohesion”, because the non-linguistic sense of “phrase” has become so common in machine translation literature. methods for syntactic SMT held to this assumption in its entirety (Wu, 1997; Yamada and Knight, 2001). These approaches were eventually superseded by tree transducers and tree substitution grammars, which allow translation events to span subtree units, providing several advantages, including the ability to selectively produce uncohesive translations (Eisner, 2003; Graehl and Knight, 2004; Quirk et al., 2005). What may have been forgotten during this transition is that there is a reason it was once believed that a cohesive translation model would work: for some language pairs, cohesion explains nearly all translation movement. Fox (2002) showed that cohesion is held in the vast majority of cases for English-French, while Cherry and Lin (2006) have shown it to be a strong feature for word alignment. We attempt to use this strong, but imperfect, characterization of movement to assist a non-syntactic translation method: phrase-based SMT. Phrase-based decoding (Koehn et al., 2003) is a dominant formalism in statistical machine translation. Contiguous segments of the source are translated and placed in the target, which is constructed from left to right. The process iterates within a beam search until each word from the source has been covered by exactly one phrasal translation. Candidate translations are scored by a linear combination of models, weighted according to Minimum Error Rate Training or MERT (Och, 2003). Phrasal SMT draws strength from being able to memorize noncompositional and context-specific translations, as well as local reorderings. 
Its primary weakness is in movement modeling; its default distortion model applies a flat penalty to any deviation from source 72 order, forcing the decoder to rely heavily on its language model. Recently, a number of data-driven distortion models, based on lexical features and relative distance, have been proposed to compensate for this weakness (Tillman, 2004; Koehn et al., 2005; AlOnaizan and Papineni, 2006; Kuhn et al., 2006). There have been a number of proposals to incorporate syntactic information into phrasal decoding. Early experiments with syntactically-informed phrases (Koehn et al., 2003), and syntactic reranking of K-best lists (Och et al., 2004) produced mostly negative results. The most successful attempts at syntax-enhanced phrasal SMT have directly targeted movement modeling: Zens et al. (2004) modified a phrasal decoder with ITG constraints, while a number of researchers have employed syntax-driven source reordering before decoding begins (Xia and McCord, 2004; Collins et al., 2005; Wang et al., 2007).2 We attempt something between these two approaches: our constraint is derived from a linguistic parse tree, but it is used inside the decoder, not as a preprocessing step. We begin in Section 2 by defining syntactic cohesion so it can be applied to phrasal decoder output. Section 3 describes how to add both hard and soft cohesion constraints to a phrasal decoder. Section 4 provides our results from both automatic and human evaluations. Sections 5 and 6 provide a qualitative discussion of cohesive output and conclude. 2 Cohesive Phrasal Output Previous approaches to measuring the cohesion of a sentence pair have worked with a word alignment (Fox, 2002; Lin and Cherry, 2003). This alignment is used to project the spans of subtrees from the source tree onto the target sentence. If a modifier and its head, or two modifiers of the same head, have overlapping spans in the projection, then this indicates a cohesion violation. To check phrasal translations for cohesion violations, we need a way to project the source tree onto the decoder’s output. Fortunately, each phrase used to create the target sentence can be tracked back to its original source phrase, providing an alignment between source and 2While certainly both syntactic and successful, we consider Hiero (Chiang, 2007) to be a distinct approach, and not an extension to phrasal decoding’s left-to-right beam search. target phrases. Since each source token is used exactly once during translation, we can transform this phrasal alignment into a word-to-phrase alignment, where each source token is linked to a target phrase. We can then project the source subtree spans onto the target phrase sequence. Note that we never consider individual tokens on the target side, as their connection to the source tree is obscured by the phrasal abstraction that occurred during translation. Let em 1 be the input source sentence, and ¯fp 1 be the output target phrase sequence. Our word-to-phrase alignment ai ∈[1, p], 1 ≤i ≤m, maps a source token position i to a target phrase position ai. Next, we introduce our source dependency tree T. Each source token ei is also a node in T. We define T(ei) to be the subtree of T rooted at ei. We define a local tree to be a head node and its immediate modifiers. With this notation in place, we can define our projected spans. 
Following Lin and Cherry (2003), we define a head span to be the projection of a single token ei onto the target phrase sequence: spanH (ei, T, am 1 ) = [ai, ai] and the subtree span to be the projection of the subtree rooted at ei: spanS(ei, T, am 1 ) = " min {j|ej∈T(ei)} aj, max {k|ek∈T(ei)} ak # Consider the simple phrasal translation shown in Figure 1 along with a dependency tree for the English source. If we examine the local tree rooted at likes, we get the following projected spans: spanS(nobody, T, a) = [1, 1] spanH (likes, T, a) = [1, 1] spanS(pay, T, a) = [1, 2] For any local tree, we consider only the head span of the head, and the subtree spans of any modifiers. Typically, cohesion would be determined by checking these projected spans for intersection. However, at this level of resolution, avoiding intersection becomes highly restrictive. The monotone translation in Figure 1 would become non-cohesive: nobody intersects with both its sibling pay and with its head likes at phrase index 1. This complication stems from the use of multi-word phrases that 73 nobody likes to pay taxes personne n ' aime payer des impôts (nobody likes) (paying taxes) 1 2 Figure 1: An English source tree with translated French output. Segments are indicated with underlined spans. do not correspond to syntactic constituents. Restricting phrases to syntactic constituents has been shown to harm performance (Koehn et al., 2003), so we tighten our definition of a violation to disregard cases where the only point of overlap is obscured by our phrasal resolution. To do so, we replace span intersection with a new notion of span innersection. Assume we have two spans [u, v] and [x, y] that have been sorted so that [u, v] ≤[x, y] lexicographically. We say that the two spans innersect if and only if x < v. So, [1, 3] and [2, 4] innersect, while [1, 3] and [3, 4] do not. One can think of innersection as intersection, minus the cases where the two spans share only a single boundary point, where x = v. When two projected spans innersect, it indicates that the second syntactic constituent must begin before the first ends. If the two spans in question correspond to nodes in the same local tree, innersection indicates an unambiguous cohesion violation. Under this definition, the translation in Figure 1 is cohesive, as none of its spans innersect. Our hope is that syntactic cohesion will help the decoder make smarter distortion decisions. An example with distortion is shown in Figure 2. In this case, we present two candidate French translations of an English sentence, assuming there is no entry in the phrase table for “voting session.” Because the proper French construction is “session of voting”, the decoder has to move voting after session using a distortion operation. Figure 2 shows two methods to do so, each using an equal numbers of phrases. The projected spans for the local tree rooted at begins in each candidate are shown in Table 1. Note the innersection between the head begins and its modifier session in (b). Thus, a cohesion-aware system would receive extra guidance to select (a), which maintains the original meaning much better than (b). Span (a) (b) spanS(session, T, a) [1,3] [1,3]* spanH (begins, T, a) [4,4] [2,2]* spanS(tomorrow, T, a) [4,4] [4,4] Table 1: Spans of the local trees rooted at begins from Figures 2 (a) and (b). Innersection is marked with a “*”. 2.1 K-best List Filtering A first attempt at using cohesion to improve SMT output would be to apply our definition as a filter on K-best lists. 
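Before turning to the filter, the span projection and innersection test defined above can be made concrete with a small sketch; it assumes the parse is given as a head array and the decoder output as a token-to-phrase map, which is one possible encoding rather than the paper's own.

def is_cohesive(heads, a):
    """heads[i]: index of token i's dependency head, or None for the root.
    a[i]: index of the target phrase that covers source token i.
    Returns True iff, in every local tree, the head's head span and the
    modifiers' subtree spans pairwise avoid innersection."""
    n = len(heads)
    children = [[] for _ in range(n)]
    for i, h in enumerate(heads):
        if h is not None:
            children[h].append(i)

    def subtree_span(i):
        lo = hi = a[i]
        for c in children[i]:
            clo, chi = subtree_span(c)
            lo, hi = min(lo, clo), max(hi, chi)
        return lo, hi

    def innersect(s1, s2):
        (_, v), (x, _) = sorted([s1, s2])
        return x < v

    for head in range(n):
        if not children[head]:
            continue
        spans = [(a[head], a[head])] + [subtree_span(c) for c in children[head]]
        for i in range(len(spans)):
            for j in range(i + 1, len(spans)):
                if innersect(spans[i], spans[j]):
                    return False
    return True

# Figure 1: heads = [1, None, 3, 1, 3], a = [1, 1, 2, 2, 2] -> True (cohesive)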
That is, we could have a phrasal decoder output a 1000-best list, and return the highestranked cohesive translation to the user. We tested this approach on our English-French development set, and saw no improvement in BLEU score. Error analysis revealed that only one third of the uncohesive translations had a cohesive alternative in their 1000-best lists. In order to reach the remaining two thirds, we need to constrain the decoder’s search space to explore only cohesive translations. 3 Cohesive Decoding This section describes a modification to standard phrase-based decoding, so that the system is constrained to produce only cohesive output. This will take the form of a check performed each time a hypothesis is extended, similar to the ITG constraint for phrasal SMT (Zens et al., 2004). To create a such a check, we need to detect a cohesion violation inside a partial translation hypothesis. We cannot directly apply our span-based cohesion definition, because our word-to-phrase alignment is not yet complete. However, we can still detect violations, and we can do so before the spans involved are completely translated. Recall that when two projected spans a and b (a < b) innersect, it indicates that b begins before a ends. We can say that the translation of b interrupts the translation of a. We can enforce cohesion by ensuring that these interruptions never happen. Because the decoder builds its translations from left to right, eliminating interruptions amounts to enforcing the following rule: once the decoder begins translating any part of a source subtree, it must cover all 74 the voting session begins tomorrow la session de vote débute demain 2 3 4 1 (the) (session) (of voting) (begins tomorrow) (a) (b) 1 2 the voting session begins tomorrow 3 4 la session commence à voter demain (the) (session begins) (to vote) (tomorrow) 2 Figure 2: Two candidate translations for the same parsed source. (a) is cohesive, while (b) is not. the words under that subtree before it can translate anything outside of it. For example, in Figure 2b, the decoder translates the, which is part of T(session) in ¯f1. In ¯f2, it translates begins, which is outside T(session). Since we have yet to cover voting, we know that the projected span of T(session) will end at some index v > 2, creating an innersection. This eliminates the hypothesis after having proposed only the first two phrases. 3.1 Algorithm In this section, we formally define an interruption, and present an algorithm to detect one during decoding. During both discussions, we represent each target phrase as a set that contains the English tokens used in its translation: ¯fj = {ei|ai = j}. Formally, an interruption occurs whenever the decoder would add a phrase ¯fh+1 to the hypothesis ¯fh 1 , and: ∃r ∈T such that: ∃e ∈T(r) s.t. e ∈¯fh 1 (a. Started) ∃e′ /∈T(r) s.t. e′ ∈¯fh+1 (b. Interrupted) ∃e′′ ∈T(r) s.t. e′′ /∈¯fh+1 1 (c. Unfinished) (1) The key to checking for interruptions quickly is knowing which subtrees T(r) to check for qualities (1:a,b,c). A na¨ıve approach would check every subtree that has begun translation in ¯fh 1 . Figure 3a highlights the roots of all such subtrees for a hypothetical T and ¯fh 1 . Fortunately, with a little analysis that accounts for ¯fh+1, we can show that at most two subtrees need to be checked. For a given interruption-free ¯fh 1 , we call subtrees that have begun translation, but are not yet complete, open subtrees. Only open subtrees can lead to interruptions. 
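Read literally, definition (1) suggests checking every subtree directly; a rough rendering of that naive check is given below (our own sketch, with covered as the source tokens already translated in f_1..f_h and f_next as the token set of the candidate phrase f_h+1), before the analysis that narrows the search to at most two subtrees.

# Hypothetical direct transcription of interruption definition (1).

def interrupts_naive(f_next, covered, subtree, all_nodes):
    for r in all_nodes:
        started     = any(e in subtree[r] for e in covered)            # (1:a)
        interrupted = any(e not in subtree[r] for e in f_next)         # (1:b)
        unfinished  = any(e not in covered and e not in f_next
                          for e in subtree[r])                         # (1:c)
        if started and interrupted and unfinished:
            return True
    return False

Checking every subtree in this way is wasteful; as discussed next, only the subtrees around the last translated phrase need to be examined.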
We can focus our interruption check on ¯fh, the last phrase in ¯fh 1 , as any open subtree T(r) must contain at least one e ∈¯fh. If this were not the Algorithm 1 Interruption check. • Get the left and right-most tokens used to create ¯fh, call them eL and eR • For each of e ∈{eL, eR}: i. r′ ←e, r ←null While ∃e′ ∈¯fh+1 such that e′ /∈T(r′): r ←r′, r′ ←parent(r) ii. If r ̸= null and ∃e′′ ∈T(r) such that e′′ /∈¯fh+1 1 , then ¯fh+1 interrupts T(r). case, then the open T(r) must have began translation somewhere in ¯fh−1 1 , and T(r) would be interrupted by the placement of ¯fh. Since our hypothesis ¯fh 1 is interruption-free, this is impossible. This leaves the subtrees highlighted in Figure 3b to be checked. Furthermore, we need only consider subtrees that contain the left and right-most source tokens eL and eR translated by ¯fh. Since ¯fh was created from a contiguous string of source tokens, any distinct subtree between these two endpoints will be completed within ¯fh. Finally, for each of these focus points eL and eR, only the highest containing subtree T(r) that does not completely contain ¯fh+1 needs to be considered. Anything higher would contain all of ¯fh+1, and would not satisfy requirement (1:b) of our interruption definition. Any lower subtree would be a descendant of r, and therefore the check for the lower subtree is subsumed by the check for T(r). This leaves only two subtrees, highlighted in our running example in Figure 3c. With this analysis in place, an extension ¯fh+1 of the hypothesis ¯fh 1 can be checked for interruptions with Algorithm 1. Step (i) in this algorithm finds an ancestor r′ such that T(r′) completely contains 75 f h f h+1 f h 1 f h f h+1 f h 1 f h f h+1 f h 1 a) b) c) Figure 3: Narrowing down the source subtrees to be checked for completeness. ¯fh+1, and then returns r, the highest node that does not contain ¯fh+1. We know this r satisfies requirements (1:a,b). If there is no T(r) that does not contain ¯fh+1, then e and its ancestors cannot lead to an interruption. Step (ii) then checks the coverage vector of the hypothesis3 to make sure that T(r) is covered in ¯fh+1 1 . If T(r) is not complete in ¯fh+1 1 , then that satisfies requirement (1:c), which means an interruption has occurred. For example, in Figure 2b, our first interruption occurs as we add ¯fh+1 = ¯f2 to ¯fh 1 = ¯f1 1 . The detection algorithm would first get the left and right boundaries of ¯f1; in this case, the is both eL and eR. Then, it would climb up the tree from the until it reached r′ = begins and r = session. It would then check T(session) for coverage in ¯f2 1 . Since voting ∈T(session) is not covered in ¯f2 1 , it would detect an interruption. Walking up the tree takes at most linear time, and each check to see if T(r) contains all of ¯fh+1 can be performed in constant time, provided the source spans of each subtree have been precomputed. Checking to see if all of T(r) has been covered in Step (ii) takes at most linear time. This makes the entire process linear in the size of the source sentence. 3.2 Soft Constraint Syntactic cohesion is not a perfect constraint for translation. Parse errors and systematic violations can create cases where cohesion works against the decoder. 
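For completeness, Algorithm 1 itself might be rendered roughly as follows before we turn to these problem cases; parent, subtree, and the coverage set are assumed helpers of our own, not part of the decoder.

# Hypothetical rendering of Algorithm 1 (efficient interruption check).
# eL, eR: left/right-most source tokens of the last phrase f_h; f_next: token
# set of the candidate phrase f_{h+1}; covered_after: coverage of f_1..f_{h+1}.

def interrupts(eL, eR, f_next, covered_after, parent, subtree):
    for e in (eL, eR):
        # (i) climb from e until T(r') contains all of f_{h+1};
        #     r is the highest node that does not contain f_{h+1}
        r, r_prime = None, e
        while r_prime is not None and not f_next <= subtree[r_prime]:
            r, r_prime = r_prime, parent.get(r_prime)
        # (ii) if some token of T(r) is still untranslated, f_{h+1} interrupts T(r)
        if r is not None and any(t not in covered_after for t in subtree[r]):
            return True
    return False

In practice the containment test in step (i) would use the precomputed subtree source spans rather than explicit token sets, which is what gives the constant-time test and the overall linear bound noted above.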
Fox (2002) demonstrated and counted cases where cohesion was not maintained in handaligned sentence-pairs, while Cherry and Lin (2006) 3This coverage vector is maintained by all phrasal decoders to track how much of the source sentence has been covered by the current partial translation, and to ensure that the same token is not translated twice. showed that a soft cohesion constraint is superior to a hard constraint for word alignment. Therefore, we propose a soft version of our cohesion constraint. We perform our interruption check, but we do not invalidate any hypotheses. Instead, each hypothesis maintains a count of the number of extensions that have caused interruptions during its construction. This count becomes a feature in the decoder’s log-linear model, the weight of which is trained with MERT. After the first interruption, the exact meaning of further interruptions becomes difficult to interpret; but the interruption count does provide a useful estimate of the extent to which the translation is faithful to the source tree structure. Initially, we were not certain to what extent this feature would be used by the MERT module, as BLEU is not always sensitive to syntactic improvements. However, trained with our French-English tuning set, the interruption count received the largest absolute feature weight, indicating, at the very least, that the feature is worth scaling to impact decoder. 3.3 Implementation We modify the Moses decoder (Koehn et al., 2007) to translate head-annotated sentences. The decoder stores the flat sentence in the original sentence data structure, and the head-encoded dependency tree in an attached tree data structure. The tree structure caches the source spans corresponding to each of its subtrees. We then implement both a hard check for interruptions to be used before hypotheses are placed on the stack,4 and a soft check that is used to calculate an interruption count feature. 4A hard cohesion constraint used in conjunction with a traditional distortion limit also requires a second linear-time check to ensure that all subtrees currently in progress can be finished under the constraints induced by the distortion limit. 76 Set Cohesive Uncohesive Dev-Test 1170 330 Test 1563 437 Table 2: Number of sentences that receive cohesive translations from the baseline decoder. This property also defines our evaluation subsets. 4 Experiments We have adapted the notion of syntactic cohesion so that it is applicable to phrase-based decoding. This results in a translation process that respects sourceside syntactic boundaries when distorting phrases. In this section we will test the impact of such information on an English to French translation task. 4.1 Experimental Details We test our cohesion-enhanced Moses decoder trained using 688K sentence pairs of Europarl French-English data, provided by the SMT 2006 Shared Task (Koehn and Monz, 2006). Word alignments are provided by GIZA++ (Och and Ney, 2003) with grow-diag-final combination, with infrastructure for alignment combination and phrase extraction provided by the shared task. We decode with Moses, using a stack size of 100, a beam threshold of 0.03 and a distortion limit of 4. Weights for the log-linear model are set using MERT, as implemented by Venugopal and Vogel (2005). Our tuning set is the first 500 sentences of the SMT06 development data. We hold out the remaining 1500 development sentences for development testing (dev-test), and the entirety of the provided 2000-sentence test set for blind testing (test). 
Since we require source dependency trees, all experiments test English to French translation. English dependency trees are provided by Minipar (Lin, 1994). Our cohesion constraint directly targets sentences for which an unmodified phrasal decoder produces uncohesive output according to the definition in Section 2. Therefore, we present our results not only on each test set in its entirety, but also on the subsets defined by whether or not the baseline naturally produces a cohesive translation. The sizes of the resulting evaluation sets are given in Table 2. Our development tests indicated that the soft and hard cohesion constraints performed somewhat similarly, with the soft constraint providing more stable, and generally better results. We confirmed these trends on our test set, but to conserve space, we provide detailed results for only the soft constraint. 4.2 Automatic Evaluation We first present our soft cohesion constraint’s effect on BLEU score (Papineni et al., 2002) for both our dev-test and test sets. We compare against an unmodified baseline decoder, as well as a decoder enhanced with a lexical reordering model (Tillman, 2004; Koehn et al., 2005). For each phrase pair in our translation table, the lexical reordering model tracks statistics on its reordering behavior as observed in our word-aligned training text. The lexical reordering model provides a good comparison point as a non-syntactic, and potentially orthogonal, improvement to phrase-based movement modeling. We use the implementation provided in Moses, with probabilities conditioned on bilingual phrases and predicting three orientation bins: straight, inverted and disjoint. Since adding features to the decoder’s log-linear model is straight-forward, we also experiment with a combined system that uses both the cohesion constraint and a lexical reordering model. The results of our experiments are shown in Table 3, and reveal some interesting phenomena. First of all, looking across columns, we can see that there is a definite divide in BLEU score between our two evaluation subsets. Sentences with cohesive baseline translations receive much higher BLEU scores than those with uncohesive baseline translations. This indicates that the cohesive subset is easier to translate with a phrase-based system. Our definition of cohesive phrasal output appears to provide a useful feature for estimating translation confidence. Comparing the baseline with and without the soft cohesion constraint, we see that cohesion has only a modest effect on BLEU, when measured on all sentence pairs, with improvements ranging between 0.2 and 0.5 absolute points. Recall that the majority of baseline translations are naturally cohesive. The cohesion constraint’s effect is much more pronounced on the more difficult uncohesive subsets, showing absolute improvements between 0.5 and 1.1 points. Considering the lexical reordering model, we see that its effect is very similar to that of syntactic cohesion. Its BLEU scores are very similar, with lex77 Dev-Test Test System All Cohesive Uncohesive All Cohesive Uncohesive base 32.04 33.80 27.46 32.35 33.78 28.73 lex 32.19 33.91 27.86 32.71 33.89 29.66 coh 32.22 33.82 28.04 32.88 34.03 29.86 lex+coh 32.45 34.12 28.09 32.90 34.04 29.83 Table 3: BLEU scores with an integrated soft cohesion constraint (coh) or a lexical reordering model (lex). Any system significantly better than base has been highlighted, as tested by bootstrap re-sampling with a 95% confidence interval. 
ical reordering also affecting primarily the uncohesive subset. This similarity in behavior is interesting, as its data-driven, bilingual reordering probabilities are quite different from our cohesion flag, which is driven by monolingual syntax. Examining the system that employs both movement models, we see that the combination (lex+coh) receives the highest score on the dev-test set. A large portion of the combined system’s gain is on the cohesive subset, indicating that the cohesion constraint may be enabling better use of the lexical reordering model on otherwise cohesive translations. Unfortunately, these same gains are not born out on the test set, where the lexical reordering model appears unable to improve upon the already strong performance of the cohesion constraint. 4.3 Human Evaluation We also present a human evaluation designed to determine whether bilingual speakers prefer cohesive decoder output. Our comparison systems are the baseline decoder (base) and our soft cohesion constraint (coh). We evaluate on our dev-test set,5 as it has our smallest observed BLEU-score gap, and we wish to determine if it is actually improving. Our experimental set-up is modeled after the human evaluation presented in (Collins et al., 2005). We provide two human annotators6 a set of 75 English source sentences, along with a reference translation and a pair of translation candidates, one from each system. The annotators are asked to indicate which of the two system translations they prefer, or if they 5The cohesion constraint has no free parameters to optimize during development, so this does not create an advantage. 6Annotators were both native English speakers who speak French as a second language. Each has a strong comprehension of written French. Annotator #2 Annotator #1 base coh equal sum (#1) base 6 7 1 14 coh 8 35 4 47 equal 7 4 3 14 sum (#2) 21 46 8 Table 4: Confusion matrix from human evaluation. consider them to be equal. To avoid bias, the competing systems were presented anonymously and in random order. Following (Collins et al., 2005), we provide the annotators with only short sentences: those with source sentences between 10 and 25 tokens long. Following (Callison-Burch et al., 2006), we conduct a targeted evaluation; we only draw our evaluation pairs from the uncohesive subset targeted by our constraint. All 75 sentences that meet these two criteria are included in the evaluation. The aggregate results of our human evaluation are shown in the bottom row and right-most column of Table 4. Each annotator prefers coh in over 60% of the test sentences, and each prefers base in less than 30% of the test sentences. This presents strong evidence that we are having a consistent, positive effect on formerly non-cohesive translations. A complete confusion matrix indicating agreement between the two annotators is also given in Table 4. There are a few more off-diagonal points than one might expect, but it is clear that the two annotators are in agreement with respect to coh’s improvements. A combination annotator, which selects base or coh only when both human annotators agree and equal otherwise, finds base is preferred in only 8% of cases, compared to 47% for coh. 78 (1+) creating structures that do not currently exist and reducing . . . base de cr´eer des structures qui existent actuellement et ne pas r´eduire . . . to create structures that actually exist and do not reduce . . . coh de cr´eer des structures qui n ’ existent pas encore et r´eduire . . . 
to create structures that do not yet exist and reduce . . . (2−) . . . repealed the 1998 directive banning advertising base . . . abrog´ee l’interdiction de la directive de 1998 de publicit´e . . . repealed the ban from the 1998 directive on advertising coh . . . abrog´ee la directive de 1998 l’interdiction de publicit´e . . . repealed the 1998 directive the ban on advertising Table 5: A comparison of baseline and cohesion-constrained English-to-French translations, with English glosses. 5 Discussion Examining the French translations produced by our cohesion constrained phrasal decoder, we can draw some qualitative generalizations. The constraint is used primarily to prevent distortion: it provides an intelligent estimate as to when source order must be respected. The resulting translations tend to be more literal than unconstrained translations. So long as the vocabulary present in our phrase table and language model supports a literal translation, cohesion tends to produce an improvement. Consider the first translation example shown in Table 5. In the baseline translation, the language model encourages the system to move the negation away from “exist” and toward “reduce.” The result is a tragic reversal of meaning in the translation. Our cohesion constraint removes this option, forcing the decoder to assemble the correct French construction for “does not yet exist.” The second example shows a case where our resources do not support a literal translation. In this case, we do not have a strong translation mapping to produce a French modifier equivalent to the English “banning.” Stuck with a noun form (“the ban”), the baseline is able to distort the sentence into something that is almost correct (the above gloss is quite generous). The cohesive system, even with a soft constraint, cannot reproduce the same movement, and returns a less grammatical translation. We also examined cases where the decoder overrides the soft cohesion constraint and produces an uncohesive translation. We found this was done very rarely, and primarily to overcome parse errors. Only one correct syntactic construct repeatedly forced the decoder to override cohesion: Minipar’s conjunction representation, which connects conjuncts in parentchild relationships, is at times too restrictive. A sibling representation, which would allow conjuncts to be permuted arbitrarily, may work better. 6 Conclusion We have presented a definition of syntactic cohesion that is applicable to phrase-based SMT. We have used this definition to develop a linear-time algorithm to detect cohesion violations in partial decoder hypotheses. This algorithm was used to implement a soft cohesion constraint for the Moses decoder, based on a source-side dependency tree. Our experiments have shown that roughly 1/5 of our baseline English-French translations contain cohesion violations, and these translations tend to receive lower BLEU scores. This suggests that cohesion could be a strong feature in estimating the confidence of phrase-based translations. Our soft constraint produced improvements ranging between 0.5 and 1.1 BLEU points on sentences for which the baseline produces uncohesive translations. A human evaluation showed that translations created using a soft cohesion constraint are preferred over uncohesive translations in the majority of cases. Acknowledgments Special thanks to Dekang Lin, Shane Bergsma, and Jess Enright for their useful insights and discussions, and to the anonymous reviewers for their comments. 
The author was funded by Alberta Ingenuity and iCORE studentships. 79 References Y. Al-Onaizan and K. Papineni. 2006. Distortion models for statistical machine translation. In COLING-ACL, pages 529–536, Sydney, Australia. C. Callison-Burch, M. Osborne, and P. Koehn. 2006. Reevaluating the role of BLEU in machine translation research. In EACL, pages 249–256. C. Cherry and D. Lin. 2006. Soft syntactic constraints for word alignment through discriminative training. In COLING-ACL, Sydney, Australia, July. Poster. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228, June. M. Collins, P. Koehn, and I. Kucerova. 2005. Clause restructuring for statistical machine translation. In ACL, pages 531–540. J. Eisner. 2003. Learning non-ismorphic tree mappings for machine translation. In ACL, Sapporo, Japan. Short paper. H. J. Fox. 2002. Phrasal cohesion and statistical machine translation. In EMNLP, pages 304–311. J. Graehl and K. Knight. 2004. Training tree transducers. In HLT-NAACL, pages 105–112, Boston, USA, May. K. Knight. 1999. Squibs and discussions: Decoding complexity in word-replacement translation models. Computational Linguistics, 25(4):607–615, December. P. Koehn and C. Monz. 2006. Manual and automatic evaluation of machine translation. In HLT-NACCL Workshop on Statistical Machine Translation, pages 102–121. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In HLT-NAACL, pages 127– 133. P. Koehn, A. Axelrod, A. Birch Mayne, C. CallisonBurch, M. Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT speech translation evaluation. In International Workshop on Spoken Language Translation. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL. Demonstration. R. Kuhn, D. Yuen, M. Simard, P. Paul, G. Foster, E. Joanis, and H. Johnson. 2006. Segment choice models: Feature-rich models for global distortion in statistical machine translation. In HLT-NAACL, pages 25–32, New York, NY. D. Lin and C. Cherry. 2003. Word alignment with cohesion constraint. In HLT-NAACL, pages 49–51, Edmonton, Canada, May. Short paper. D. Lin. 1994. Principar - an efficient, broad-coverage, principle-based parser. In COLING, pages 42–48, Kyoto, Japan. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–52. F. J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev. 2004. A smorgasbord of features for statistical machine translation. In HLTNAACL 2004: Main Proceedings, pages 161–168. F. J. Och. 2003. Minimum error rate training for statistical machine translation. In ACL, pages 160–167. K. Papineni, S. Roukos, T. Ward, and W. J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311–318. C. Quirk, A. Menezes, and C. Cherry. 2005. Dependency treelet translation: Syntactically informed phrasal SMT. In ACL, pages 271–279, Ann Arbor, USA, June. C. Tillman. 2004. A unigram orientation model for statistical machine translation. In HLT-NAACL, pages 101–104. Short paper. A. Venugopal and S. Vogel. 2005. Considerations in maximum mutual information and minimum classification error training for statistical machine translation. In EAMT. C. Wang, M. Collins, and P. 
Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In EMNLP, pages 737–745. D. Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. F. Xia and M. McCord. 2004. Improving a statistical mt system with automatically learned rewrite patterns. In Proceedings of Coling 2004, pages 508–514. K. Yamada and K. Knight. 2001. A syntax-based statistical translation model. In ACL, pages 523–530. R. Zens, H. Ney, T. Watanabe, and E. Sumita. 2004. Reordering constraints for phrase-based statistical machine translation. In COLING, pages 205–211, Geneva, Switzerland, August. 80
Proceedings of ACL-08: HLT, pages 789–797, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Unsupervised Learning of Narrative Event Chains Nathanael Chambers and Dan Jurafsky Department of Computer Science Stanford University Stanford, CA 94305 {natec,jurafsky}@stanford.edu Abstract Hand-coded scripts were used in the 1970-80s as knowledge backbones that enabled inference and other NLP tasks requiring deep semantic knowledge. We propose unsupervised induction of similar schemata called narrative event chains from raw newswire text. A narrative event chain is a partially ordered set of events related by a common protagonist. We describe a three step process to learning narrative event chains. The first uses unsupervised distributional methods to learn narrative relations between events sharing coreferring arguments. The second applies a temporal classifier to partially order the connected events. Finally, the third prunes and clusters self-contained chains from the space of events. We introduce two evaluations: the narrative cloze to evaluate event relatedness, and an order coherence task to evaluate narrative order. We show a 36% improvement over baseline for narrative prediction and 25% for temporal coherence. 1 Introduction This paper induces a new representation of structured knowledge called narrative event chains (or narrative chains). Narrative chains are partially ordered sets of events centered around a common protagonist. They are related to structured sequences of participants and events that have been called scripts (Schank and Abelson, 1977) or Fillmorean frames. These participants and events can be filled in and instantiated in a particular text situation to draw inferences. Chains focus on a single actor to facilitate learning, and thus this paper addresses the three tasks of chain induction: narrative event induction, temporal ordering of events and structured selection (pruning the event space into discrete sets). Learning these prototypical schematic sequences of events is important for rich understanding of text. Scripts were central to natural language understanding research in the 1970s and 1980s for proposed tasks such as summarization, coreference resolution and question answering. For example, Schank and Abelson (1977) proposed that understanding text about restaurants required knowledge about the Restaurant Script, including the participants (Customer, Waiter, Cook, Tables, etc.), the events constituting the script (entering, sitting down, asking for menus, etc.), and the various preconditions, ordering, and results of each of the constituent actions. Consider these two distinct narrative chains. accused X W joined X claimed W served X argued W oversaw dismissed X W resigned It would be useful for question answering or textual entailment to know that ‘X denied ’ is also a likely event in the left chain, while ‘ replaces W’ temporally follows the right. Narrative chains (such as Firing of Employee or Executive Resigns) offer the structure and power to directly infer these new subevents by providing critical background knowledge. In part due to its complexity, automatic induction has not been addressed since the early nonstatistical work of Mooney and DeJong (1985). The first step to narrative induction uses an entitybased model for learning narrative relations by fol789 lowing a protagonist. 
As a narrative progresses through a series of events, each event is characterized by the grammatical role played by the protagonist, and by the protagonist’s shared connection to surrounding events. Our algorithm is an unsupervised distributional learning approach that uses coreferring arguments as evidence of a narrative relation. We show, using a new evaluation task called narrative cloze, that our protagonist-based method leads to better induction than a verb-only approach. The next step is to order events in the same narrative chain. We apply work in the area of temporal classification to create partial orders of our learned events. We show, using a coherence-based evaluation of temporal ordering, that our partial orders lead to better coherence judgements of real narrative instances extracted from documents. Finally, the space of narrative events and temporal orders is clustered and pruned to create discrete sets of narrative chains. 2 Previous Work While previous work hasn’t focused specifically on learning narratives1, our work draws from two lines of research in summarization and anaphora resolution. In summarization, topic signatures are a set of terms indicative of a topic (Lin and Hovy, 2000). They are extracted from hand-sorted (by topic) sets of documents using log-likelihood ratios. These terms can capture some narrative relations, but the model requires topic-sorted training data. Bean and Riloff (2004) proposed the use of caseframe networks as a kind of contextual role knoweldge for anaphora resolution. A caseframe is a verb/event and a semantic role (e.g. <patient> kidnapped). Caseframe networks are relations between caseframes that may represent synonymy (<patient> kidnapped and <patient> abducted) or related events (<patient> kidnapped and <patient> released). Bean and Riloff learn these networks from two topic-specific texts and apply them to the problem of anaphora resolution. Our work can be seen as an attempt to generalize the intuition of caseframes (finding an entire set of events 1We analyzed FrameNet (Baker et al., 1998) for insight, but found that very few of the frames are event sequences of the type characterizing narratives and scripts. rather than just pairs of related frames) and apply it to a different task (finding a coherent structured narrative in non-topic-specific text). More recently, Brody (2007) proposed an approach similar to caseframes that discovers highlevel relatedness between verbs by grouping verbs that share the same lexical items in subject/object positions. He calls these shared arguments anchors. Brody learns pairwise relations between clusters of related verbs, similar to the results with caseframes. A human evaluation of these pairs shows an improvement over baseline. This and previous caseframe work lend credence to learning relations from verbs with common arguments. We also draw from lexical chains (Morris and Hirst, 1991), indicators of text coherence from word overlap/similarity. We use a related notion of protagonist overlap to motivate narrative chain learning. Work on semantic similarity learning such as Chklovski and Pantel (2004) also automatically learns relations between verbs. We use similar distributional scoring metrics, but differ with our use of a protagonist as the indicator of relatedness. We also use typed dependencies and the entire space of events for similarity judgements, rather than only pairwise lexical decisions. Finally, Fujiki et al. 
(2003) investigated script acquisition by extracting the 41 most frequent pairs of events from the first paragraph of newswire articles, using the assumption that the paragraph’s textual order follows temporal order. Our model, by contrast, learns entire event chains, uses more sophisticated probabilistic measures, and uses temporal ordering models instead of relying on document order. 3 The Narrative Chain Model 3.1 Definition Our model is inspired by Centering (Grosz et al., 1995) and other entity-based models of coherence (Barzilay and Lapata, 2005) in which an entity is in focus through a sequence of sentences. We propose to use this same intuition to induce narrative chains. We assume that although a narrative has several participants, there is a central actor who characterizes a narrative chain: the protagonist. Narrative chains are thus structured by the protagonist’s grammatical roles in the events. In addition, narrative 790 events are ordered by some theory of time. This paper describes a partial ordering with the before (no overlap) relation. Our task, therefore, is to learn events that constitute narrative chains. Formally, a narrative chain is a partially ordered set of narrative events that share a common actor. A narrative event is a tuple of an event (most simply a verb) and its participants, represented as typed dependencies. Since we are focusing on a single actor in this study, a narrative event is thus a tuple of the event and the typed dependency of the protagonist: (event, dependency). A narrative chain is a set of narrative events {e1, e2, ..., en}, where n is the size of the chain, and a relation B(ei, ej) that is true if narrative event ei occurs strictly before ej in time. 3.2 The Protagonist The notion of a protagonist motivates our approach to narrative learning. We make the following assumption of narrative coherence: verbs sharing coreferring arguments are semantically connected by virtue of narrative discourse structure. A single document may contain more than one narrative (or topic), but the narrative assumption states that a series of argument-sharing verbs is more likely to participate in a narrative chain than those not sharing. In addition, the narrative approach captures grammatical constraints on narrative coherence. Simple distributional learning might discover that the verb push is related to the verb fall, but narrative learning can capture additional facts about the participants, specifically, that the object or patient of the push is the subject or agent of the fall. Each focused protagonist chain offers one perspective on a narrative, similar to the multiple perspectives on a commercial transaction event offered by buy and sell. 3.3 Partial Ordering A narrative chain, by definition, includes a partial ordering of events. Early work on scripts included ordering constraints with more complex preconditions and side effects on the sequence of events. This paper presents work toward a partial ordering and leaves logical constraints as future work. We focus on the before relation, but the model does not preclude advanced theories of temporal order. 4 Learning Narrative Relations Our first model learns basic information about a narrative chain: the protagonist and the constituent subevents, although not their ordering. For this we need a metric for the relation between an event and a narrative chain. Pairwise relations between events are first extracted unsupervised. 
A distributional score based on how often two events share grammatical arguments (using pointwise mutual information) is used to create this pairwise relation. Finally, a global narrative score is built such that all events in the chain provide feedback on the event in question (whether for inclusion or for decisions of inference). Given a list of observed verb/dependency counts, we approximate the pointwise mutual information (PMI) by: pmi(e(w, d), e(v, g)) = log P(e(w, d), e(v, g)) P(e(w, d))P(e(v, g)) (1) where e(w, d) is the verb/dependency pair w and d (e.g. e(push,subject)). The numerator is defined by: P(e(w, d), e(v, g)) = C(e(w, d), e(v, g)) P x,y P d,f C(e(x, d), e(y, f)) (2) where C(e(x, d), e(y, f)) is the number of times the two events e(x, d) and e(y, f) had a coreferring entity filling the values of the dependencies d and f. We also adopt the ‘discount score’ to penalize low occuring words (Pantel and Ravichandran, 2004). Given the debate over appropriate metrics for distributional learning, we also experimented with the t-test. Our experiments found that PMI outperforms the t-test on this task by itself and when interpolated together using various mixture weights. Once pairwise relation scores are calculated, a global narrative score can then be built such that all events provide feedback on the event in question. For instance, given all narrative events in a document, we can find the next most likely event to occur by maximizing: max j:0<j<m n X i=0 pmi(ei, fj) (3) where n is the number of events in our chain and ei is the ith event. m is the number of events f in our training corpus. A ranked list of guesses can be built from this summation and we hypothesize that 791 Known events: (pleaded subj), (admits subj), (convicted obj) Likely Events: sentenced obj 0.89 indicted obj 0.74 paroled obj 0.76 fined obj 0.73 fired obj 0.75 denied subj 0.73 Figure 1: Three narrative events and the six most likely events to include in the same chain. the more events in our chain, the more informed our ranked output. An example of a chain with 3 events and the top 6 ranked guesses is given in figure 1. 4.1 Evaluation Metric: Narrative Cloze The cloze task (Taylor, 1953) is used to evaluate a system (or human) for language proficiency by removing a random word from a sentence and having the system attempt to fill in the blank (e.g. I forgot to the waitress for the good service). Depending on the type of word removed, the test can evaluate syntactic knowledge as well as semantic. Deyes (1984) proposed an extended task, discourse cloze, to evaluate discourse knowledge (removing phrases that are recoverable from knowledge of discourse relations like contrast and consequence). We present a new cloze task that requires narrative knowledge to solve, the narrative cloze. The narrative cloze is a sequence of narrative events in a document from which one event has been removed. The task is to predict the missing verb and typed dependency. Take this example text about American football with McCann as the protagonist: 1. McCann threw two interceptions early. 2. Toledo pulled McCann aside and told him he’d start. 3. McCann quickly completed his first two passes. These clauses are represented in the narrative model as five events: (threw subject), (pulled object), (told object), (start subject), (completed subject). These verb/dependency events make up a narrative cloze model. We could remove (threw subject) and use the remaining four events to rank this missing event. 
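The ranking in this example follows equation (3). A minimal sketch of the scoring (our own code; pairs are keyed symmetrically, the marginals are simple relative frequencies, and the discount factor is omitted) might look like this:

import math

# Hypothetical sketch of equations (1)-(3): approximate PMI between two
# narrative events and ranking of candidate events for a chain. pair_counts
# maps an unordered event pair (as a frozenset) to its coreferring-argument
# co-occurrence count; event_counts maps a single (verb, dependency) event
# to its count. Missing keys count as zero.

def pmi(e1, e2, pair_counts, event_counts, total_pairs, total_events):
    joint = pair_counts.get(frozenset((e1, e2)), 0) / total_pairs
    if joint == 0.0:
        return float("-inf")
    p1 = event_counts.get(e1, 0) / total_events
    p2 = event_counts.get(e2, 0) / total_events
    return math.log(joint / (p1 * p2))

def rank_candidates(chain, candidates, pair_counts, event_counts,
                    total_pairs, total_events):
    """Order candidate (verb, dependency) events by their summed PMI with the
    events already in the chain, as in equation (3)."""
    def score(f):
        return sum(pmi(e, f, pair_counts, event_counts, total_pairs, total_events)
                   for e in chain)
    return sorted(candidates, key=score, reverse=True)

Summing over the whole chain in this way lets every known event vote on the candidate, which is the property exploited by the cloze evaluation.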
Removing a single such pair to be filled in automatically allows us to evaluate a system’s knowledge of narrative relations and coherence. We do not claim this cloze task to be solvable even by humans, New York Times Editorial occupied subj brought subj rejecting subj projects subj met subj appeared subj offered subj voted pp for offer subj thinks subj Figure 2: One of the 69 test documents, containing 10 narrative events. The protagonist is President Bush. but rather assert it as a comparative measure to evaluate narrative knowledge. 4.2 Narrative Cloze Experiment We use years 1994-2004 (1,007,227 documents) of the Gigaword Corpus (Graff, 2002) for training2. We parse the text into typed dependency graphs with the Stanford Parser (de Marneffe et al., 2006)3, recording all verbs with subject, object, or prepositional typed dependencies. We use the OpenNLP4 coreference engine to resolve the entity mentions. For each document, the verb pairs that share coreferring entities are recorded with their dependency types. Particles are included with the verb. We used 10 news stories from the 1994 section of the corpus for development. The stories were hand chosen to represent a range of topics such as business, sports, politics, and obituaries. We used 69 news stories from the 2001 (year selected randomly) section of the corpus for testing (also removed from training). The test set documents were randomly chosen and not preselected for a range of topics. From each document, the entity involved in the most events was selected as the protagonist. For this evaluation, we only look at verbs. All verb clauses involving the protagonist are manually extracted and translated into the narrative events (verb,dependency). Exceptions that are not included are verbs in headlines, quotations (typically not part of a narrative), “be” properties (e.g. john is happy), modifying verbs (e.g. hurried to leave, only leave is used), and multiple instances of one event. The original test set included 100 documents, but 2The document count does not include duplicate news stories. We found up to 18% of the corpus are duplications, mostly AP reprints. We automatically found these by matching the first two paragraphs of each document, removing exact matches. 3http://nlp.stanford.edu/software/lex-parser.shtml 4http://opennlp.sourceforge.net 792 those without a narrative chain at least five events in length were removed, leaving 69 documents. Most of the removed documents were not stories, but genres such as interviews and cooking recipes. An example of an extracted chain is shown in figure 2. We evalute with Narrative Cloze using leave-oneout cross validation, removing one event and using the rest to generate a ranked list of guesses. The test dataset produces 740 cloze tests (69 narratives with 740 events). After generating our ranked guesses, the position of the correct event is averaged over all 740 tests for the final score. We penalize unseen events by setting their ranked position to the length of the guess list (ranging from 2k to 15k). Figure 1 is an example of a ranked guess list for a short chain of three events. If the original document contained (fired obj), this cloze test would score 3. 4.2.1 Baseline We want to measure the utility of the protagonist and the narrative coherence assumption, so our baseline learns relatedness strictly based upon verb co-occurence. The PMI is then defined as between all occurrences of two verbs in the same document. 
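Independently of which relatedness score is plugged in, the leave-one-out cloze scoring described above can be sketched as follows (our own code; rank_chain_candidates stands for any ranker, such as the PMI ranking sketched earlier):

# Hypothetical sketch of the narrative cloze evaluation: remove each event in
# turn, rank it against the remaining chain, and average its position over all
# tests; unseen events are penalized with the full length of the guess list.

def narrative_cloze_score(chains, rank_chain_candidates):
    positions = []
    for chain in chains:
        for held_out in chain:
            rest = [e for e in chain if e != held_out]
            guesses = rank_chain_candidates(rest)      # ranked guess list
            if held_out in guesses:
                positions.append(guesses.index(held_out) + 1)
            else:
                positions.append(len(guesses))
    return sum(positions) / len(positions)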
This baseline evaluation is verb only, as dependencies require a protagonist to fill them. After initial evaluations, the baseline was performing very poorly due to the huge amount of data involved in counting all possible verb pairs (using a protagonist vastly reduces the number). We experimented with various count cutoffs to remove rare occurring pairs of verbs. The final results use a baseline where all pairs occurring less than 10 times in the training data are removed. Since the verb-only baseline does not use typed dependencies, our narrative model cannot directly compare to this abstracted approach. We thus modified the narrative model to ignore typed dependencies, but still count events with shared arguments. Thus, we calculate the PMI across verbs that share arguments. This approach is called Protagonist. The full narrative model that includes the grammatical dependencies is called Typed Deps. 4.2.2 Results Experiments with varying sizes of training data are presented in figure 3. Each ranked list of candidate verbs for the missing event in Base1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 0 500 1000 1500 2000 2500 3000 Training Data from 1994!X Ranked Position Narrative Cloze Test Baseline Protagonist Typed Deps Figure 3: Results with varying sizes of training data. Year 2003 is not explicitly shown because it has an unusually small number of documents compared to other years. line/Protagonist contained approximately 9 thousand candidates. Of the 740 cloze tests, 714 of the removed events were present in their respective list of guesses. This is encouraging as only 3.5% of the events are unseen (or do not meet cutoff thresholds). When all training data is used (1994-2004), the average ranked position is 1826 for Baseline and 1160 for Protagonist (1 being most confident). The Baseline performs better at first (years 1994-5), but as more data is seen, the Baseline worsens while the Protagonist improves. This verb-only narrative model shows a 36.5% improvement over the baseline trained on all years. Results from the full Typed Deps model, not comparable to the baseline, parallel the Protagonist results, improving as more data is seen (average ranked position of 1908 with all the training data). We also ran the experiment without OpenNLP coreference, and instead used exact and substring matching for coreference resolution. This showed a 5.7% decrease in the verb-only results. These results show that a protagonist greatly assists in narrative judgements. 5 Ordering Narrative Events The model proposed in the previous section is designed to learn the major subevents in a narrative chain, but not how these events are ordered. In this section we extend the model to learn a partial temporal ordering of the events. 793 There are a number of algorithms for determining the temporal relationship between two events (Mani et al., 2006; Lapata and Lascarides, 2006; Chambers et al., 2007), many of them trained on the TimeBank Corpus (Pustejovsky et al., 2003) which codes events and their temporal relationships. The currently highest performing of these on raw data is the model of temporal labeling described in our previous work (Chambers et al., 2007). Other approaches have depended on hand tagged features. Chambers et al. (2007) shows 59.4% accuracy on the classification task for six possible relations between pairs of events: before, immediately-before, included-by, simultaneous, begins and ends. We focus on the before relation because the others are less relevant to our immediate task. 
We combine immediately-before with before, and merge the other four relations into an other category. At the binary task of determining if one event is before or other, we achieve 72.1% accuracy on Timebank. The above approach is a two-stage machine learning architecture. In the first stage, the model uses supervised machine learning to label temporal attributes of events, including tense, grammatical aspect, and aspectual class. This first stage classifier relies on features such as neighboring part of speech tags, neighboring auxiliaries and modals, and WordNet synsets. We use SVMs (Chambers et al. (2007) uses Naive Bayes) and see minor performance boosts on Timebank. These imperfect classifications, combined with other linguistic features, are then used in a second stage to classify the temporal relationship between two events. Other features include event-event syntactic properties such as the syntactic dominance relations between the two events, as well as new bigram features of tense, aspect and class (e.g. “present past” if the first event is in the present, and the second past), and whether the events occur in the same or different sentences. 5.1 Training a Temporal Classifier We use the entire Timebank Corpus as supervised training data, condensing the before and immediately-before relations into one before relation. The remaining relations are merged into other. The vast majority of potential event pairs in Timebank are unlabeled. These are often none relations (events that have no explicit relation) or as is often the case, overlap relations where the two events have no Timebank-defined ordering but overlap in time. Even worse, many events do have an ordering, but they were not tagged by the human annotators. This could be due to the overwhelming task of temporal annotation, or simply because some event orderings are deemed more important than others in understanding the document. We consider all untagged relations as other, and experiment with including none, half, and all of them in training. Taking a cue from Mani et al. (2006), we also increased Timebank’s size by applying transitivity rules to the hand labeled data. The following is an example of the applied transitive rule: if run BEFORE fall and fall BEFORE injured then run BEFORE injured This increases the number of relations from 37519 to 45619. Perhaps more importantly for our task, of all the added relations, the before relation is added the most. We experimented with original vs. expanded Timebank and found the expanded performed slightly worse. The decline may be due to poor transitivity additions, as several Timebank documents contain inconsistent labelings. All reported results are from training without transitivity. 5.2 Temporal Classifier in Narrative Chains We classify the Gigaword Corpus in two stages, once for the temporal features on each event (tense, grammatical aspect, aspectual class), and once between all pairs of events that share arguments. This allows us to classify the before/other relations between all potential narrative events. The first stage is trained on Timebank, and the second is trained using the approach described above, varying the size of the none training relations. Each pair of events in a gigaword document that share a coreferring argument is treated as a separate ordering classification task. We count the resulting number of labeled before relations between each verb/dependency pair. 
Processing the entire corpus produces a database of event pair counts where confidence of two generic events A and B can be measured by comparing how many before labels have been seen versus their inverted order B and A5. 5Note that we train with the before relation, and so transposing two events is similar to classifying the after relation. 794 5.3 Temporal Evaluation We want to evaluate temporal order at the narrative level, across all events within a chain. We envision narrative chains being used for tasks of coherence, among other things, and so it is desired to evaluate temporal decisions within a coherence framework. Along these lines, our test set uses actual narrative chains from documents, hand labeled for a partial ordering. We evaluate coherence of these true chains against a random ordering. The task is thus deciding which of the two chains is most coherent, the original or the random (baseline 50%)? We generated up to 300 random orderings for each test document, averaging the accuracy across all. Our evaluation data is the same 69 documents used in the test set for learning narrative relations. The chain from each document is hand identified and labeled for a partial ordering using only the before relation. Ordering was done by the authors and all attempts were made to include every before relation that exists in the document, or that could be deduced through transitivity rules. Figure 4 shows an example and its full reversal, although the evaluation uses random orderings. Each edge is a distinct before relation and is used in the judgement score. The coherence score for a partially ordered narrative chain is the sum of all the relations that our classified corpus agrees with, weighted by how certain we are. If the gigaword classifications disagree, a weighted negative score is given. Confidence is based on a logarithm scale of the difference between the counts of before and after classifications. Formally, the score is calculated as the following: X E:x,y        log(D(x, y)) if xβy and B(x, y) > B(y, x) −log(D(x, y)) if xβy and B(y, x) > B(x, y) −log(D(x, y)) if !xβy & !yβx & D(x, y) > 0 0 otherwise where E is the set of all event pairs, B(i, j) is how many times we classified events i and j as before in Gigaword, and D(i, j) = |B(i, j) −B(j, i)|. The relation iβj indicates that i is temporally before j. 5.4 Results Out approach gives higher scores to orders that coincide with the pairwise orderings classified in our gigaword training data. The results are shown in figure 5. Of the 69 chains, 6 did not have any ordered events and were removed from the evaluation. We Figure 4: A narrative chain and its reverse order. All ≥6 ≥10 correct 8086 75% 7603 78% 6307 89% incorrect 1738 1493 619 tie 931 627 160 Figure 5: Results for choosing the correct ordered chain. (≥10) means there were at least 10 pairs of ordered events in the chain. generated (up to) 300 random orderings for each of the remaining 63. We report 75.2% accuracy, but 22 of the 63 had 5 or fewer pairs of ordered events. Figure 5 therefore shows results from chains with more than 5 pairs, and also 10 or more. As we would hope, the accuracy improves the larger the ordered narrative chain. We achieve 89.0% accuracy on the 24 documents whose chains most progress through time, rather than chains that are difficult to order with just the before relation. Training without none relations resulted in high recall for before decisions. Perhaps due to data sparsity, this produces our best results as reported above. 
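For reference, the coherence score of Section 5.3 can be sketched as follows; the pair-orientation handling and names are our own, and before[(x, y)] stands for the count of before classifications B(x, y) from the database just described.

import math

# Hypothetical sketch of the chain coherence score. event_pairs lists every
# unordered pair of chain events once; ordered is the set of hand-labeled
# (x, y) pairs meaning x is annotated as before y.

def coherence(event_pairs, ordered, before):
    score = 0.0
    for x, y in event_pairs:
        if (y, x) in ordered:                  # orient so x is the annotated-first event
            x, y = y, x
        b_xy = before.get((x, y), 0)
        b_yx = before.get((y, x), 0)
        d = abs(b_xy - b_yx)
        if d == 0:
            continue                           # no corpus evidence either way
        if (x, y) in ordered:
            score += math.log(d) if b_xy > b_yx else -math.log(d)
        else:                                  # unordered in the chain, ordered in the corpus
            score -= math.log(d)
    return score

A candidate ordering is preferred over a random permutation of the same chain whenever its score is higher.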
6 Discrete Narrative Event Chains Up till this point, we have learned narrative relations across all possible events, including their temporal order. However, the discrete lists of events for which Schank scripts are most famous have not yet been constructed. We intentionally did not set out to reproduce explicit self-contained scripts in the sense that the ‘restaurant script’ is complete and cannot include other events. The name narrative was chosen to imply a likely order of events that is common in spoken and written retelling of world events. Discrete sets have the drawback of shutting out unseen and un795 Figure 6: An automatically learned Prosecution Chain. Arrows indicate the before relation. likely events from consideration. It is advantageous to consider a space of possible narrative events and the ordering within, not a closed list. However, it is worthwhile to construct discrete narrative chains, if only to see whether the combination of event learning and ordering produce scriptlike structures. This is easily achievable by using the PMI scores from section 4 in an agglomerative clustering algorithm, and then applying the ordering relations from section 5 to produce a directed graph. Figures 6 and 7 show two learned chains after clustering and ordering. Each arrow indicates a before relation. Duplicate arrows implied by rules of transitivity are removed. Figure 6 is remarkably accurate, and figure 7 addresses one of the chains from our introduction, the employment narrative. The core employment events are accurate, but clustering included life events (born, died, graduated) from obituaries of which some temporal information is incorrect. The Timebank corpus does not include obituaries, thus we suffer from sparsity in training data. 7 Discussion We have shown that it is possible to learn narrative event chains unsupervised from raw text. Not only do our narrative relations show improvements over a baseline, but narrative chains offer hope for many other areas of NLP. Inference, coherence in summarization and generation, slot filling for question answering, and frame induction are all potential areas. We learned a new measure of similarity, the narFigure 7: An Employment Chain. Dotted lines indicate incorrect before relations. rative relation, using the protagonist as a hook to extract a list of related events from each document. The 37% improvement over a verb-only baseline shows that we may not need presorted topics of documents to learn inferences. In addition, we applied state of the art temporal classification to show that sets of events can be partially ordered. Judgements of coherence can then be made over chains within documents. Further work in temporal classification may increase accuracy even further. Finally, we showed how the event space of narrative relations can be clustered to create discrete sets. While it is unclear if these are better than an unconstrained distribution of events, they do offer insight into the quality of narratives. An important area not discussed in this paper is the possibility of using narrative chains for semantic role learning. A narrative chain can be viewed as defining the semantic roles of an event, constraining it against roles of the other events in the chain. An argument’s class can then be defined as the set of narrative arguments in which it appears. We believe our model provides an important first step toward learning the rich causal, temporal and inferential structure of scripts and frames. 
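To make the Section 6 procedure concrete, a rough sketch of the clustering and ordering steps is given below; the average-link criterion and the stopping threshold are our own choices, since the clustering details are not spelled out above.

# Hypothetical sketch: agglomerative clustering of events by pairwise PMI,
# then directed before-edges within each cluster to form a discrete chain.

def cluster_events(events, pmi, threshold):
    """Greedy average-link agglomeration; pmi(a, b) returns the pairwise score."""
    clusters = [{e} for e in events]
    def link(c1, c2):
        return sum(pmi(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        i, j = max(pairs, key=lambda p: link(clusters[p[0]], clusters[p[1]]))
        if link(clusters[i], clusters[j]) < threshold:
            break
        clusters[i] |= clusters.pop(j)
    return clusters

def order_cluster(cluster, before):
    """Edges x -> y wherever x was classified before y more often than the reverse."""
    return [(x, y) for x in cluster for y in cluster
            if x != y and before.get((x, y), 0) > before.get((y, x), 0)]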
Acknowledgment: This work is funded in part by DARPA through IBM and by the DTO Phase III Program for AQUAINT through Broad Agency Announcement (BAA) N61339-06-R-0034. Thanks to the reviewers for helpful comments and the suggestion for a non-full-coreference baseline. 796 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Christian Boitet and Pete Whitelock, editors, ACL-98, pages 86– 90, San Francisco, California. Morgan Kaufmann Publishers. Regina Barzilay and Mirella Lapata. 2005. Modeling local coherence: an entity-based approach. Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 141–148. David Bean and Ellen Riloff. 2004. Unsupervised learning of contextual role knowledge for coreference resolution. Proc. of HLT/NAACL, pages 297–304. Samuel Brody. 2007. Clustering Clauses for HighLevel Relation Detection: An Information-theoretic Approach. Proceedings of the 43rd Annual Meeting of the Association of Computational Linguistics, pages 448–455. Nathanael Chambers, Shan Wang, and Dan Jurafsky. 2007. Classifying temporal relations between events. In Proceedings of ACL-07, Prague, Czech Republic. Timothy Chklovski and Patrick Pantel. 2004. Verbocean: Mining the web for fine-grained semantic verb relations. In Proceedings of EMNLP-04. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC-06, pages 449–454. Tony Deyes. 1984. Towards an authentic ’discourse cloze’. Applied Linguistics, 5(2). Toshiaki Fujiki, Hidetsugu Nanba, and Manabu Okumura. 2003. Automatic acquisition of script knowledge from a text collection. In EACL, pages 91–94. David Graff. 2002. English Gigaword. Linguistic Data Consortium. Barbara J. Grosz, Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modelling the local coherence of discourse. Computational Linguistics, 21(2). Mirella Lapata and Alex Lascarides. 2006. Learning sentence-internal temporal relations. In Journal of AI Research, volume 27, pages 85–117. C.Y. Lin and E. Hovy. 2000. The automated acquisition of topic signatures for text summarization. Proceedings of the 17th conference on Computational linguistics-Volume 1, pages 495–501. Inderjeet Mani, Marc Verhagen, Ben Wellner, Chong Min Lee, and James Pustejovsky. 2006. Machine learning of temporal relations. In Proceedings of ACL-06, July. Raymond Mooney and Gerald DeJong. 1985. Learning schemata for natural language processing. In Ninth International Joint Conference on Artificial Intelligence (IJCAI), pages 681–687. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21– 43. Patrick Pantel and Deepak Ravichandran. 2004. Automatically labeling semantic classes. Proceedings of HLT/NAACL, 4:321–328. James Pustejovsky, Patrick Hanks, Roser Sauri, Andrew See, David Day, Lisa Ferro, Robert Gaizauskas, Marcia Lazo, Andrea Setzer, and Beth Sundheim. 2003. The timebank corpus. Corpus Linguistics, pages 647– 656. Roger C. Schank and Robert P. Abelson. 1977. Scripts, plans, goals and understanding. Lawrence Erlbaum. Wilson L. Taylor. 1953. Cloze procedure: a new tool for measuring readability. Journalism Quarterly, 30:415– 433. 797
Proceedings of ACL-08: HLT, pages 798–806, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Semantic Role Labeling Systems for Arabic using Kernel Methods Mona Diab CCLS, Columbia University New York, NY 10115, USA [email protected] Alessandro Moschitti DISI, University of Trento Trento, I-38100, Italy [email protected] Daniele Pighin FBK-irst; DISI, University of Trento Trento, I-38100, Italy [email protected] Abstract There is a widely held belief in the natural language and computational linguistics communities that Semantic Role Labeling (SRL) is a significant step toward improving important applications, e.g. question answering and information extraction. In this paper, we present an SRL system for Modern Standard Arabic that exploits many aspects of the rich morphological features of the language. The experiments on the pilot Arabic Propbank data show that our system based on Support Vector Machines and Kernel Methods yields a global SRL F1 score of 82.17%, which improves the current state-of-the-art in Arabic SRL. 1 Introduction Shallow approaches to semantic processing are making large strides in the direction of efficiently and effectively deriving tacit semantic information from text. Semantic Role Labeling (SRL) is one such approach. With the advent of faster and more powerful computers, more effective machine learning algorithms, and importantly, large data resources annotated with relevant levels of semantic information, such as the FrameNet (Baker et al., 1998) and ProbBank (Kingsbury and Palmer, 2003), we are seeing a surge in efficient approaches to SRL (Carreras and M`arquez, 2005). SRL is the process by which predicates and their arguments are identified and their roles are defined in a sentence. For example, in the English sentence, ‘John likes apples.’, the predicate is ‘likes’ whereas ‘John’ and ‘apples’, bear the semantic role labels agent (ARG0) and theme (ARG1). The crucial fact about semantic roles is that regardless of the overt syntactic structure variation, the underlying predicates remain the same. Hence, for the sentence ‘John opened the door’ and ‘the door opened’, though ‘the door’ is the object of the first sentence and the subject of the second, it is the ‘theme’ in both sentences. Same idea applies to passive constructions, for example. There is a widely held belief in the NLP and computational linguistics communities that identifying and defining roles of predicate arguments in a sentence has a lot of potential for and is a significant step toward improving important applications such as document retrieval, machine translation, question answering and information extraction (Moschitti et al., 2007). To date, most of the reported SRL systems are for English, and most of the data resources exist for English. We do see some headway for other languages such as German and Chinese (Erk and Pado, 2006; Sun and Jurafsky, 2004). The systems for the other languages follow the successful models devised for English, e.g. (Gildea and Jurafsky, 2002; Gildea and Palmer, 2002; Chen and Rambow, 2003; Thompson et al., 2003; Pradhan et al., 2003; Moschitti, 2004; Xue and Palmer, 2004; Haghighi et al., 2005). In the same spirit and facilitated by the release of the SemEval 2007 Task 18 data1, based on the Pilot Arabic Propbank, a preliminary SRL system exists for Arabic2 (Diab and Moschitti, 2007; Diab et al., 2007a). However, it did not exploit some special characteristics of the Arabic language on the SRL task. 
In this paper, we present an SRL system for MSA that exploits many aspects of the rich morphological features of the language. It is based on a supervised model that uses support vector machines (SVM) technology (Vapnik, 1998) for argument boundary detection and argument classification. It is trained and tested using the pilot Arabic Propbank data released as part of the SemEval 2007 data. Given the lack of a reliable Arabic deep syntactic parser, we 1http://nlp.cs.swarthmore.edu/semeval/ 2We use Arabic to refer to Modern Standard Arabic (MSA). 798 use gold standard trees from the Arabic Tree Bank (ATB) (Maamouri et al., 2004). This paper is laid out as follows: Section 2 presents facts about the Arabic language especially in relevant contrast to English; Section 3 presents the approach and system adopted for this work; Section 4 presents the experimental setup, results and discussion. Finally, Section 5 draws our conclusions. 2 Arabic Language and Impact on SRL Arabic is a very different language from English in several respects relevant to the SRL task. Arabic is a semitic language. It is known for its templatic morphology where words are made up of roots and affixes. Clitics agglutinate to words. Clitics include prepositions, conjunctions, and pronouns. In contrast to English, Arabic exhibits rich morphology. Similar to English, Arabic verbs explicitly encode tense, voice, Number, and Person features. Additionally, Arabic encodes verbs with Gender, Mood (subjunctive, indicative and jussive) information. For nominals (nouns, adjectives, proper names), Arabic encodes syntactic Case (accusative, genitive and nominative), Number, Gender and Definiteness features. In general, many of the morphological features of the language are expressed via short vowels also known as diacritics3. Unlike English, syntactically Arabic is a pro-drop language, where the subject of a verb may be implicitly encoded in the verb morphology. Hence, we observe sentences such as ÈA®KQ.Ë@ É¿@ Akl AlbrtqAl ‘ate-[he] the-oranges’, where the verb Akl encodes the third Person Masculine Singular subject in the verbal morphology. It is worth noting that in the ATB 35% of all sentences are pro-dropped for subject (Maamouri et al., 2006). Unless the syntactic parse is very accurate in identifying the pro-dropped case, identifying the syntactic subject and the underlying semantic arguments are a challenge for such pro-drop cases. Arabic syntax exhibits relative free word order. Arabic allows for both subject-verb-object (SVO) and verb-subject-object (VSO) argument orders.4 In 3Diacritics encode the vocalic structure, namely the short vowels, as well as the gemmination marker for consonantal doubling, among other markers. 4MSA less often allows for OSV, or OVS. the VSO constructions, the verb agrees with the syntactic subject in Gender only, while in the SVO constructions, the verb agrees with the subject in both Number and Gender. Even though, in the ATB, an equal distribution of both VSO and SVO is observed (each appearing 30% of the time), it is known that in general Arabic is predominantly in VSO order. Moreover, the pro-drop cases could effectively be perceived as VSO orders for the purposes of SRL. Syntactic Case is very important in the cases of VSO and pro-drop constructions as they indicate the syntactic roles of the object arguments with accusative Case. 
Unless the morphology of syntactic Case is explicitly present, such free word order could run the SRL system into significant confusion for many of the predicates where both arguments are semantically of the same type. Arabic exhibits more complex noun phrases than English mainly to express possession. These constructions are known as idafa constructions. Modern standard Arabic does not have a special particle expressing possession. In these complex structures a surface indefinite noun (missing an explicit definite article) may be followed by a definite noun marked with genitive Case, rendering the first noun syntactically definite. For example, I J.Ë@ Ég. P rjl Albyt ‘man the-house’ meaning ‘man of the house’, Ég. P becomes definite. An adjective modifying the noun Ég. P will have to agree with it in Number, Gender, Definiteness, and Case. However, without explicit morphological encoding of these agreements, the scope of the arguments would be confusing to an SRL system. In a sentence such as ÉK ñ¢Ë@ I J.Ë@ Ég. P rjlu Albyti AlTwylu meaning ‘the tall man of the house’: ‘man’ is definite, masculine, singular, nominative, corresponding to Definiteness, Gender, Number and Case, respectively; ‘the-house’ is definite, masculine, singular, genitive; ‘the-tall’ is definite, masculine, singular, nominative. We note that ‘man’ and ‘tall’ agree in Number, Gender, Case and Definiteness. Syntactic Case is marked using short vowels u, and i at the end of the word. Hence, rjlu and AlTwylu agree in their Case ending5 Without the explicit marking of the Case information, 5The presence of the Albyti is crucial as it renders rjlu definite therefore allowing the agreement with AlTwylu to be complete. 799 S VP VBDpredicate @YK. started NPARG0 NP NN  KP president NP NN Z@P PñË@ ministers JJ ú æJ ’Ë@ Chinese NP NNP ð P Zhu NNP ú m. 'ðP Rongji NPARG1 NP NN èPAK P visit JJ éJ ÖޅP official PP IN È to NP NNP Y JêË@ India NPARGM−T MP NP NN YgB@ Sunday JJ ú æ •AÖÏ@ past Figure 1: Annotated Arabic Tree corresponding to ‘Chinese Prime minister Zhu Rongjy started an official visit to India last Sunday.’ namely in the word endings, it could be equally valid that ‘the-tall’ modifies ‘the-house’ since they agree in Number, Gender and Definiteness as explicitly marked by the Definiteness article Al. Hence, these idafa constructions could be tricky for SRL in the absence of explicit morphological features. This is compounded by the general absence of short vowels, expressed by diacritics (i.e. the u and i in rjlu and Albyti,) in naturally occurring text. Idafa constructions in the ATB exhibit recursive structure, embedding other NPs, compared to English where possession is annotated with flat NPs and is designated by a possessive marker. Arabic texts are underspecified for diacritics to different degrees depending on the genre of the text (Diab et al., 2007b). Such an underspecification of diacritics masks some of the very relevant morpho-syntactic interactions between the different categories such as agreement between nominals and their modifiers as exemplified before, or verbs and their subjects. Having highlighted the differences, we hypothesize that the interaction between the rich morphology (if explicitly marked and present) and syntax could help with the SRL task. The presence of explicit Number and Gender agreement as well as Case information aids with identification of the syntactic subject and object even if the word order is relatively free. 
Gender, Number, Definiteness and Case agreement between nouns and their modifiers and other nominals, should give clues to the scope of arguments as well as their classes. The presence of such morpho-syntactic information should lead to better argument boundary detection and better classification. 3 An SRL system for Arabic The previous section suggests that an optimal model should take into account specific characteristics of Feature Name Description Predicate Lemmatization of the predicate word Path Syntactic path linking the predicate and an argument, e.g. NN↑NP↑VP↓VBX Partial path Path feature limited to the branching of the argument No-direction path Like Path without traversal directions Phrase type Syntactic type of the argument node Position Relative position of the argument with respect to the predicate Verb subcategorization Production rule expanding the predicate parent node Syntactic Frame Position of the NPs surrounding the predicate First and last word/POS First and last words and POS tags of candidate argument phrases Table 1: Standard linguistic features employed by most SRL systems. Arabic. In this research, we go beyond the previously proposed basic SRL system for Arabic (Diab et al., 2007a; Diab and Moschitti, 2007). We exploit the full morphological potential of the language to verify our hypothesis that taking advantage of the interaction between morphology and syntax can improve on a basic SRL system for morphologically rich languages. Similar to the previous Arabic SRL systems, our adopted SRL models use Support Vector Machines to implement a two step classification approach, i.e. boundary detection and argument classification. Such models have already been investigated in (Pradhan et al., 2005; Moschitti et al., 2005). The two step classification description is as follows. 3.1 Predicate Argument Extraction The extraction of predicative structures is based on the sentence level. Given a sentence, its predicates, as indicated by verbs, have to be identified along with their arguments. This problem is usually divided in two subtasks: (a) the detection of the target argument boundaries, i.e. the span of the argument words in the sentence, and (b) the classification of the argument type, e.g. Arg0 or ArgM for Propbank 800 S NP NNP Mary VP VBD bought NP D a N cat ⇒ VP VBD bought NP D a N cat VP VBD NP D a N cat VP VBD bought NP D N cat VP VBD bought NP D N VP VBD bought NP NP D a N cat NP NNP Mary NNP Mary VBD bought D a N cat ... Figure 2: Fragment space generated by a tree kernel function for the sentence Mary bought a cat. or Agent and Goal for the FrameNet. The standard approach to learn both the detection and the classification of predicate arguments is summarized by the following steps: (a) Given a sentence from the training-set, generate a full syntactic parse-tree; (b) let P and A be the set of predicates and the set of parse-tree nodes (i.e. the potential arguments), respectively; (c) for each pair ⟨p, a⟩∈P × A: extract the feature representation set, Fp,a and put it in T + (positive examples) if the subtree rooted in a covers exactly the words of one argument of p, otherwise put it in T − (negative examples). For instance, in Figure 1, for each combination of the predicate started with the nodes NP, S, VP, VPD, NNP, NN, PP, JJ or IN the instances Fstarted,a are generated. In case the node a exactly covers ‘president ministers Chinese Zhu Rongji’ or ‘visit official to India’, Fp,a will be a positive instance otherwise it will be a negative one, e.g. Fstarted,IN. 
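A schematic rendering of steps (a)-(c) is given below. It is a sketch only: tree.nodes(), node.leaves() and gold_args[p] (the gold word spans of p's arguments) are assumed interfaces that are not part of the paper, and extract_features stands in for the computation of the representation F_{p,a}.

```python
def boundary_instances(tree, predicates, gold_args, extract_features):
    """Sketch of steps (a)-(c): pair every predicate p with every parse-tree
    node a and label the pair by whether the subtree rooted at a spans
    exactly the words of one gold argument of p."""
    T_pos, T_neg = [], []
    for p in predicates:
        gold_spans = {tuple(span) for span in gold_args.get(p, [])}
        for a in tree.nodes():                     # candidate argument nodes
            F_pa = extract_features(p, a)
            if tuple(a.leaves()) in gold_spans:    # node covers exactly one argument
                T_pos.append(F_pa)
            else:
                T_neg.append(F_pa)
    return T_pos, T_neg
```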
The T + and T −sets are used to train the boundary classifier. To train the multi-class classifier, T + can be reorganized as positive T + argi and negative T − argi examples for each argument i. This way, an individual ONE-vs-ALL classifier for each argument i can be trained. We adopt this solution, according to (Pradhan et al., 2005), since it is simple and effective. In the classification phase, given an unseen sentence, all its Fp,a are generated and classified by each individual classifier Ci. The argument associated with the maximum among the scores provided by the individual classifiers is eventually selected. The above approach assigns labels independently, without considering the whole predicate argument structure. As a consequence, the classifier output may generate overlapping arguments. Thus, to make the annotations globally consistent, we apply a disambiguating heuristic adopted from (Diab and Moschitti, 2007) that selects only one argument among multiple overlapping arguments. 3.2 Features The discovery of relevant features is, as usual, a complex task. The choice of features is further compounded for a language such as Arabic given its rich morphology and morpho-syntactic interactions. To date, there is a common consensus on the set of basic standard features for SRL, which we will refer to as standard. The set of standard features, refers to unstructured information derived from parse trees. e.g. Phrase Type, Predicate Word or Head Word. Typically the standard features are language independent. In our experiments we employ the features listed in Table 1, defined in (Gildea and Jurafsky, 2002; Pradhan et al., 2005; Xue and Palmer, 2004). For example, the Phrase Type indicates the syntactic type of the phrase labeled as a predicate argument, e.g. NP for ARG1 in Figure 1. The Parse Tree Path contains the path in the parse tree between the predicate and the argument phrase, expressed as a sequence of nonterminal labels linked by direction (up or down) symbols, e.g. VBD ↑VP ↓NP for ARG1 in Figure 1. The Predicate Word is the surface form of the verbal predicate, e.g. started for all arguments. The standard features, as successful as they are, are designed primarily for English. They are not exploiting the different characteristics of the Arabic language as expressed through morphology. Hence, we explicitly encode new SRL features that capture the richness of Arabic morphology and its role in morpho-syntactic behavior. The set of morphological attributes include: inflectional morphology such as Number, Gender, Definiteness, Mood, Case, Person; derivational morphology such as the Lemma form of the words with all the diacritics explicitly marked; vowelized and fully diacritized form of the surface form; the English gloss6. It is worth noting that there exists highly accurate morphological taggers for Arabic such as the MADA system (Habash and Rambow, 2005; Roth et al., 2008). MADA tags 6The gloss is not sense disambiguated, hence they include homonyms. 
801 Feature Name Description Definiteness Applies to nominals, values are definite, indefinite or inapplicable Number Applies to nominals and verbs, values are singular, plural or dual or inapplicable Gender Applies to nominals, values are feminine, masculine or inapplicable Case Applies to nominals, values are accusative, genitive, nominative or inapplicable Mood Applies to verbs, values are subjunctive, indicative, jussive or inapplicable Person Applies to verbs and pronouns, values are 1st, 2nd, 3rd person or inapplicable Lemma The citation form of the word fully diacritized with the short vowels and gemmination markers if applicable Gloss this is the corresponding English meaning as rendered by the underlying lexicon. Vocalized word The surface form of the word with all the relevant diacritics. Unlike Lemma, it includes all the inflections. Unvowelized word The naturally occurring form of the word in the sentence with no diacritics. Table 2: Rich morphological features encoded in the Extended Argument Structure Tree (EAST). modern standard Arabic with all the relevant morphological features as well as it produces highly accurate lemma and gloss information by tapping into an underlying morphological lexicon. A list of the extended features is described in Table 2. The set of possible features and their combinations are very large leading to an intractable feature selection problem. Therefore, we exploit well known kernel methods, namely tree kernels, to robustly experiment with all the features simultaneously. Such kernel engineering, as shown in (Moschitti, 2004), allows us to experiment with many syntactic/semantic features seamlessly. 3.3 Engineering Arabic Features with Kernel Methods Feature engineering via kernel methods is a useful technique that allows us to save a lot of time in the design and implementation of features. The basic idea is (a) to design a set of basic value-attribute features and apply polynomial kernels and generate all possible combinations; or (b) to design basic tree structures expressing properties related to the target linguistic objects and use tree kernels to generate all possible tree subparts, which will constitute the feature representation vectors for the learning algorithm. Tree kernels evaluate the similarity between two trees in terms of their overlap, generally measured as the number of common substructures (Collins and Duffy, 2002). For example, Figure 2, shows a small parse tree and some of its fragments. To design a function which computes the number of common substructures between two trees t1 and t2, let us define the set of fragments F={f1, f2, ..} and the indicator function Ii(n), equal to 1 if the target fi is rooted at node n and 0 otherwise. A tree kernel function KT (·) over two trees is defined as: VP VBD @YK. NP NP NN  KP NP NN Z@P PñË@ JJ ú æJ ’Ë@ NP NNP ð P NNP ú m. 'ðP Figure 3: Example of the positive AST structured feature encoding the argument ARG0 in the sentence depicted in Figure 1. KT (t1, t2) = P n1∈Nt1 P n2∈Nt2 ∆(n1, n2), where Nt1 and Nt2 are the sets of nodes of t1 and t2, respectively. The function ∆(·) evaluates the number of common fragments rooted in n1 and n2, i.e. ∆(n1, n2) = P|F| i=1 Ii(n1)Ii(n2). ∆can be efficiently computed with the algorithm proposed in (Collins and Duffy, 2002). 
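The kernel just defined can be written down compactly. The following is only an illustrative sketch, not the SVM-Light-TK implementation used in the experiments; nodes are assumed to expose .production() (the CFG rule, with the word folded in at preterminals) and .children (empty at preterminals), and trees to expose .nodes().

```python
def tree_kernel(t1, t2):
    """Sketch of K_T(t1, t2): the sum over all node pairs of Delta(n1, n2),
    with Delta computed by the Collins & Duffy (2002) recursion (shown here
    without a decay factor)."""
    memo = {}
    def delta(n1, n2):
        key = (id(n1), id(n2))
        if key not in memo:
            if n1.production() != n2.production():
                memo[key] = 0                      # different rules share no fragment
            elif not n1.children:
                memo[key] = 1                      # identical preterminals
            else:
                prod = 1
                for c1, c2 in zip(n1.children, n2.children):
                    prod *= 1 + delta(c1, c2)      # combine matches of the subtrees
                memo[key] = prod
        return memo[key]
    return sum(delta(n1, n2) for n1 in t1.nodes() for n2 in t2.nodes())
```

The quadratic loop over node pairs is written for clarity; faster evaluations of the same kernel are discussed in (Moschitti, 2006).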
3.4 Structural Features for Arabic In order to incorporate the characteristically rich Arabic morphology features structurally in the tree representations, we convert the features into valueattribute pairs at the leaf node level of the tree. Fig 1 illustrates the morphologically underspecified tree with some of the morphological features encoded in the POS tag such as VBD indicating past tense. This contrasts with Fig. 4 which shows an excerpt of the same tree encoding the chosen relevant morphological features. For the sake of classification, we will be dealing with two kinds of structures: the Argument Structure Tree (AST) (Pighin and Basili, 2006) and the Extended Argument Structure Tree (EAST). The AST is defined as the minimal subtree encompassing all and only the leaf nodes encoding words belonging to the predicate or one of its arguments. An AST example is shown in Figure 3. The EAST is the corresponding structure in which all the leaf nodes have been extended with the ten morphological fea802 VP VBD FEAT Gender MASC FEAT Number S FEAT Person 3 FEAT Lemma bada>-a FEAT Gloss start/begin+he/it FEAT Vocal bada>a FEAT UnVocal bd> NP NP NN FEAT Definite DEF FEAT Gender MASC FEAT Number S FEAT Case GEN FEAT Lemma ra}iys FEAT Gloss president/head/chairman FEAT Vocal ra}iysi NP ... NP ... Figure 4: An excerpt of the EAST corresponding to the AST shown in Figure 3, with attribute-value extended morphological features represented as leaf nodes. tures described in Table 2, forming a vector of 10 preterminal-terminal node pairs that replace the surface of the leaf. The resulting EAST structure is shown in Figure 4. Not all the features are instantiated for all the leaf node words. Due to space limitations, in the figure we did not include the Features that have NULL values. For instance, Definiteness is always associated with nominals, hence the verb @YK. bd’ ‘start’ is assigned a NULL value for the Definite feature. Verbs exhibit Gender information depending on inflections. For our example, @YK. ‘started’ is inflected for masculine Gender, singular Number, third person. On the other hand, the noun Z@P PñË@ is definite and is assigned genitive Case since it is in a possessive, idafa, construction. The features encoded by the EAST can provide very useful hints for boundary and role classification. Considering Figure 1, argument boundaries is not as straight forward to identify as there are several NPs. Assuming that the inner most NP ‘ministers the-Chinese’ is a valid Argument could potentially be accepted. There is ample evidence that any NN followed by a JJ would make a perfectly valid Argument. However, an AST structure would mask the fact that the JJ ‘the-Chinese’ does not modify the NN ‘ministers’ since they do not agree in Number7, and in syntactic Case, where the latter is genitive and the former is nominative. ‘the-Chinese’ in fact modifies ‘president’ as they agree on all the underlying morphological features. Conversely, the EAST in Figure 4 explicitly encodes this agreement including an agreement on Definiteness. It is worth noting that just observing the Arabic word  KP ‘president’ in Fig 1, the system would assume that it is an indefinite word since it does not include the definite arti7The POS tag on this node is NN as broken plural, however, the underlying morphological feature Number is plural. cle È@. Therefore, the system could be lead astray to conclude that ‘the-Chinese’ does not modify ‘president’ but rather ‘the-ministers’. 
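The AST-to-EAST conversion described above amounts to replacing the surface word under each preterminal with a vector of FEAT subtrees. The sketch below is illustrative: the Node class, ast.preterminals() and the per-word lookup morph[word] (e.g. the output of a tagger such as MADA) are assumptions for the example, not interfaces defined in the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:                       # minimal n-ary tree node for illustration
    label: str
    children: List["Node"] = field(default_factory=list)

MORPH_FEATURES = ["Definite", "Number", "Gender", "Case", "Mood",
                  "Person", "Lemma", "Gloss", "Vocal", "UnVocal"]

def extend_to_east(ast, morph):
    """Sketch of the AST -> EAST conversion: the surface word under each
    preterminal is replaced by FEAT subtrees carrying the Table 2 attributes;
    NULL-valued features are dropped, as in Figure 4."""
    for preterminal in ast.preterminals():
        word = preterminal.children[0].label       # the surface form
        feats = morph.get(word, {})
        preterminal.children = [
            Node("FEAT", [Node(name, [Node(str(value))])])
            for name in MORPH_FEATURES
            if (value := feats.get(name)) not in (None, "NULL")
        ]
    return ast
```

The feature names follow Table 2; in the experiments the same information is simply encoded in the trees that are passed to the tree kernel.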
Without knowing the Case information and the agreement features between the verb @YK. ‘started’ and the two nouns heading the two main NPs in our tree, the syntactic subject can be either èPAK P ‘visit’ or  KP ‘president’ in Figure 1. The EAST is more effective in identifying the first noun as the syntactic subject and the second as the object since the morphological information indicates that they are in nominative and accusative Case, respectively. Also the agreement in Gender and Number between the verb and the syntactic subject is identified in the enriched tree. We see that @YK. ‘started’ and  KP ‘president’ agree in being singular and masculine. If èPAK P ‘visit’ were the syntactic subject, we would have seen the verb inflected as H @YK. ‘started-FEM’ with a feminine inflection to reflect the verb-subject agreement on Gender. Hence these agreement features should help with the classification task. 4 Experiments In these experiments we investigate (a) if the technology proposed in previous work for automatic SRL of English texts is suitable for Arabic SRL systems, and (b) the impact of tree kernels using new tree structures on Arabic SRL. For this purpose, we test our models on the two individual phases of the traditional 2-stage SRL model (i.e. boundary detection and argument classification) and on the complete SRL task. We use three different feature spaces: a set of standard attribute-value features and the AST and the EAST structures defined in 3.4. Standard feature vectors can be combined with a polynomial kernel (Poly), which, when the degree is larger than 1, automatically generates feature conjunctions. This, as suggested in (Pradhan et al., 2005; Moschitti, 2004), can help stressing the differ803 ences between different argument types. Tree structures can be used in the learning algorithm thanks to the tree kernels described in Section 3.3. Moreover, to verify if the above feature sets are equivalent or complementary, we can join them by means of additive operation which always produces a valid kernel (Shawe-Taylor and Cristianini, 2004). 4.1 Experimental setup We use the dataset released in the SemEval 2007 Task 18 on Arabic Semantic Labeling (Diab et al., 2007a). The data covers the 95 most frequent verbs in the Arabic Treebank III ver. 2 (ATB). The ATB consists of MSA newswire data from the Annhar newspaper, spanning the months from July to November, 2002. All our experiments are carried out with gold standard trees. An important characteristic of the dataset is the use of unvowelized Arabic in the Buckwalter transliteration scheme for deriving the basic features for the AST experimental condition. The data comprises a development set, a test set and a training set of 886, 902 and 8,402 sentences, respectively, where each set contain 1725, 1661 and 21,194 argument instances. These instances are distributed over 26 different role types. The training instances of the boundary detection task also include parse-tree nodes that do not correspond to correct boundaries (we only considered 350K examples). For the experiments, we use SVM-Light-TK toolkit8 (Moschitti, 2004; Moschitti, 2006) and its SVM-Light default parameters. The system performance, i.e. F1 on single boundary and role classifier, accuracy of the role multi-classifier and the F1 of the complete SRL systems, are computed by means of the CoNLL evaluator9. 4.2 Results Figure 5 reports the F1 of the SVM boundary classifier using Polynomial Kernels with a degree from 1 to 6 (i.e. 
Polyi), the AST and the EAST kernels and their combinations. We note that as we introduce conjunctions, i.e. a degree larger than 2, the F1 increases by more than 3 percentage points. Thus, not only are the English features meaningful for Arabic but also their combinations are important, reveal8http://disi.unitn.it/∼moschitti 9http://www.lsi.upc.es/∼srlconll/soft.html Figure 5: Impact of polynomial kernel, tree kernels and their combinations on boundary detection. Figure 6: Impact of the polynomial kernel, tree kernels and their combinations on the accuracy in role classification (gold boundaries) and on the F1 of complete SRL task (boundary + role classification). ing that both languages share an underlying syntaxsemantics interface. Moreover, we note that the F1 of EAST is higher than the F1 of AST which in turn is higher than the linear kernel (Poly1). However, when conjunctive features (Poly2-4) are used the system accuracy exceeds those of tree kernel models alone. Further increasing the polynomial degree (Poly5-6) generates very complex hypotheses which result in very low accuracy values. Therefore, to improve the polynomial kernel, we sum it to the contribution of AST and/or EAST, obtaining AST+Poly3 (polynomial kernel of degree 3), EAST+Poly3 and AST+EAST+Poly3, whose F1 scores are also shown in Figure 5. Such combined models improve on the best polynomial kernel. However, not much difference is shown between AST and EAST on boundary detection. This is expected since we are using gold standard trees. We hypothesize that the rich morphological features will help more with the role classification task. Therefore, we evaluate role classification with gold boundaries. The curve labeled ”classification” in Figure 6 illustrates the accuracy of the SVM role multi-classifier according to different kernels. 804 P3 AST EAST AST+ P3 EAST+ P3 AST+ EAST+ P3 P 81.73 80.33 81.7 81.73 82.46 83.08 R 78.93 75.98 77.42 80.01 80.67 81.28 F1 80.31 78.09 79.51 80.86 81.56 82.17 Table 3: F1 of different models on the Arabic SRL task. Again, we note that a degree larger than 1 yields a significant improvement of more than 3 percent points, suggesting that the design of Arabic SRL system based on SVMs requires polynomial kernels. In contrast to the boundary results, EAST highly improves over AST (by about 3 percentage points) and produces an F1 comparable to the best Polynomial kernel. Moreover, AST+Poly3, EAST+Poly3 and AST+EAST+Poly3 all yield different degrees of improvement, where the latter model is both the richest in terms of features and the most accurate. These results strongly suggest that: (a) tree kernels generate new syntactic features that are useful for the classification of Arabic semantic roles; (b) the richer morphology of Arabic language should be exploited effectively to obtain accurate SRL systems; (c) tree kernels appears to be a viable approach to effectively achieve this goal. To illustrate the practical feasibility of our system, we investigate the complete SRL task where both the boundary detection and argument role classification are performed automatically. The curve labeled ”boundary + role classification” in Figure 6 reports the F1 of SRL systems based on the previous kernels. The trend of the plot is similar to the goldstandard boundaries case. The difference among the F1 scores of the AST+Poly3, EAST+Poly3 and AST+EAST+Poly3 is slightly reduced. 
This may be attributed to the fact that they produce similar boundary detection results, which in turn, for the global SRL outcome, are summed to those of the classification phase. Table 3 details the differences among the models and shows that the best model improves the SRL system based on the polynomial kernel, i.e. the SRL state-of-the-art for Arabic, by about 2 percentage points. This is a very large improvement for SRL systems (Carreras and M`arquez, 2005). These results confirm that the new enriched structures along with tree kernels are a promising approach for Arabic SRL systems. Finally, Table 4 reports the F1 of the best model, AST+EAST+Poly3, for individual arguments in the Role Precision Recall Fβ=1 ARG0 96.14% 97.27% 96.70 ARG0-STR 100.00% 20.00% 33.33 ARG1 88.52% 92.70% 90.57 ARG1-STR 33.33% 15.38% 21.05 ARG2 69.35% 76.67% 72.82 ARG3 66.67% 16.67% 26.67 ARGM-ADV 66.98% 61.74% 64.25 ARGM-CAU 100.00% 9.09% 16.67 ARGM-CND 25.00% 33.33% 28.57 ARGM-LOC 67.44% 95.08% 78.91 ARGM-MNR 54.00% 49.09% 51.43 ARGM-NEG 80.85% 97.44% 88.37 ARGM-PRD 20.00% 8.33% 11.76 ARGM-PRP 85.71% 66.67% 75.00 ARGM-TMP 91.35% 88.79% 90.05 Table 4: SRL F1 of the single arguments using the AST+EAST+Poly3 kernel. SRL task. We note that, as for English SRL, ARG0 shows high values (96.70%). Conversely, ARG1 seems more difficult to be classified in Arabic. The F1 for ARG1 is only 90.57% compared with 96.70% for ARG0. This may be attributed to the different possible syntactic orders of Arabic consructions confusing the syntactic subject with the object especially where there is no clear morphological features on the arguments to decide either way. 5 Conclusions We have presented a model for Arabic SRL that yields a global SRL F1 score of 82.17% by combining rich structured features and traditional attributevalue features derived from English SRL systems. The resulting system significantly improves previously reported results on the same task and dataset. This outcome is very promising given that the available data is small compared to the English data sets. For future work, we would like to explore further explicit morphological features such as aspect tense and voice as well as richer POS tag sets such as those proposed in (Diab, 2007). Finally, we would like to experiment with automatic parses and different syntactic formalisms such as dependencies and shallow parses. Acknowledgements Mona Diab is partly funded by DARPA Contract No. HR001106-C-0023. Alessandro Moschitti has been partially funded by CCLS of the Columbia University and by the FP6 IST LUNA project contract no 33549. 805 References Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet Project. In COLINGACL ’98: University of Montr´eal. Xavier Carreras and Llu´ıs M`arquez. 2005. Introduction to the CoNLL-2005 Shared Task: Semantic Role Labeling. In Proceedings of CoNLL-2005, Ann Arbor, Michigan. John Chen and Owen Rambow. 2003. Use of Deep Linguistic Features for the Recognition and Labeling of Semantic Arguments. In Proceedings of EMNLP, Sapporo, Japan. Michael Collins and Nigel Duffy. 2002. New Ranking Algorithms for Parsing and Tagging: Kernels over Discrete structures, and the voted perceptron. In ACL02. Mona Diab and Alessandro Moschitti. 2007. Semantic Parsing for Modern Standard Arabic. In Proceedings of RANLP, Borovets, Bulgaria. Mona Diab, Musa Alkhalifa, Sabry ElKateb, Christiane Fellbaum, Aous Mansouri, and Martha Palmer. 2007a. Semeval-2007 task 18: Arabic Semantic Labeling. 
In Proceedings of SemEval-2007, Prague, Czech Republic. Mona Diab, Mahmoud Ghoneim, and Nizar Habash. 2007b. Arabic Diacritization in the Context of Statistical Machine Translation. In Proceedings of MTSummit, Copenhagen, Denmark. Mona Diab. 2007. Towards an Optimal Pos Tag Set for Modern Standard Arabic Processing. In Proceedings of RANLP, Borovets, Bulgaria. Katrin Erk and Sebastian Pado. 2006. Shalmaneser – A Toolchain for Shallow Semantic Parsing. Proceedings of LREC. Daniel Gildea and Daniel Jurafsky. 2002. Automatic Labeling of Semantic Roles. Computational Linguistics. Daniel Gildea and Martha Palmer. 2002. The Necessity of Parsing for Predicate Argument Recognition. In Proceedings of ACL-02, Philadelphia, PA, USA. Nizar Habash and Owen Rambow. 2005. Arabic Tokenization, Part-of-Speech Tagging and Morphological Disambiguation in One Fell Swoop. In Proceedings of ACL’05, Ann Arbor, Michigan. Aria Haghighi, Kristina Toutanova, and Christopher Manning. 2005. A Joint Model for Semantic Role Labeling. In Proceedings ofCoNLL-2005, Ann Arbor, Michigan. Paul Kingsbury and Martha Palmer. 2003. Propbank: the Next Level of Treebank. In Proceedings of Treebanks and Lexical Theories. Mohamed Maamouri, Ann Bies, Tim Buckwalter, and Wigdan Mekki. 2004. The Penn Arabic Treebank : Building a Large-Scale Annotated Arabic Corpus. Mohamed Maamouri, Ann Bies, Tim Buckwalter, Mona Diab, Nizar Habash, Owen Rambow, and Dalila Tabessi. 2006. Developing and Using a Pilot Dialectal Arabic Treebank. Alessandro Moschitti, Ana-Maria Giuglea, Bonaventura Coppola, and Roberto Basili. 2005. Hierarchical Semantic Role Labeling. In Proceedings of CoNLL2005, Ann Arbor, Michigan. Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting Syntactic and Shallow Semantic Kernels for Question Answer Classification. In Proceedings of ACL’07, Prague, Czech Republic. Alessandro Moschitti. 2004. A Study on Convolution Kernels for Shallow Semantic Parsing. In proceedings of ACL’04, Barcelona, Spain. Alessandro Moschitti. 2006. Making Tree Kernels Practical for Natural Language Learning. In Proceedings of EACL’06. Alessandro Moschitti, Daniele Pighin and Roberto Basili. 2006. Semantic Role Labeling via Tree Kernel Joint Inference. In Proceedings of CoNLL-X. Sameer Pradhan, Kadri Hacioglu, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2003. Semantic Role Parsing: Adding Semantic Structure to Unstructured Text. In Proceedings ICDM’03, Melbourne, USA. Sameer Pradhan, Kadri Hacioglu, Valerie Krugler, Wayne Ward, James H. Martin, and Daniel Jurafsky. 2005. Support Vector Learning for Semantic Argument Classification. Machine Learning. Ryan Roth, Owen Rambow, Nizar Habash, Mona Diab, and Cynthia Rudin. 2008. Arabic Morphological Tagging, Diacritization, and Lemmatization Using Lexeme Models and Feature Ranking. In ACL’08, Short Papers, Columbus, Ohio, June. John Shawe-Taylor and Nello Cristianini. 2004. Kernel Methods for Pattern Analysis. Cambridge University Press. Honglin Sun and Daniel Jurafsky. 2004. Shallow Semantic Parsing of Chinese. In Proceedings of NAACLHLT. Cynthia A. Thompson, Roger Levy, and Christopher Manning. 2003. A Generative Model for Semantic Role Labeling. In ECML’03. Vladimir N. Vapnik. 1998. Statistical Learning Theory. John Wiley and Sons. Nianwen Xue and Martha Palmer. 2004. Calibrating Features for Semantic Role Labeling. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, Barcelona, Spain. 806
Proceedings of ACL-08: HLT, pages 807–815, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics An Unsupervised Approach to Biography Production using Wikipedia Fadi Biadsy,† Julia Hirschberg† and Elena Filatova* †Department of Computer Science Columbia University, New York, NY 10027, USA {fadi,julia}@cs.columbia.edu *InforSense LLC Cambridge, MA 02141, USA [email protected] Abstract We describe an unsupervised approach to multi-document sentence-extraction based summarization for the task of producing biographies. We utilize Wikipedia to automatically construct a corpus of biographical sentences and TDT4 to construct a corpus of non-biographical sentences. We build a biographical-sentence classifier from these corpora and an SVM regression model for sentence ordering from the Wikipedia corpus. We evaluate our work on the DUC2004 evaluation data and with human judges. Overall, our system significantly outperforms all systems that participated in DUC2004, according to the ROUGE-L metric, and is preferred by human subjects. 1 Introduction Producing biographies by hand is a labor-intensive task, generally done only for famous individuals. The process is particularly difficult when persons of interest are not well known and when information must be gathered from a wide variety of sources. We present an automatic, unsupervised, multi-document summarization (MDS) approach based on extractive techniques to producing biographies, answering the question “Who is X?” There is growing interest in automatic MDS in general due in part to the explosion of multilingual and multimedia data available online. The goal of MDS is to automatically produce a concise, wellorganized, and fluent summary of a set of documents on the same topic. MDS strategies have been employed to produce both generic summaries and query-focused summaries. Due to the complexity of text generation, most summarization systems employ sentence-extraction techniques, in which the most relevant sentences from one or more documents are selected to represent the summary. This approach is guaranteed to produce grammatical sentences, although they must subsequently be ordered appropriately to produce a coherent summary. In this paper we describe a sentence-extraction based MDS procedure to produce biographies from online resources automatically. We make use of Wikipedia, the largest free multilingual encyclopedia on the internet, to build a biographical-sentence classifier and a component for ordering sentences in the output summary. Section 2 presents an overview of our system. In Section 3 we describe our corpus and in Section 4 we discuss the components of our system in more detail. In Section 5, we present an evaluation of our work on the Document Understanding Conference of 2004 (DUC2004), the biography task (task 5) test set. In Section 6 we compare our research with previous work on biography generation. We conclude in Section 7 and identify directions for future research. 2 System Overview In this section, we present an overview of our biography extraction system. We assume as input a set of documents retrieved by an information retrieval engine from a query consisting of the name of the person for whom the biography is desired. We further assume that these documents have been tagged with Named Entities (NE)s with coreferences resolved 807 using a system such as NYU’s 2005 ACE system (Grishman et al., 2005), which we used for our experiments. Our task is to produce a concise biography from these documents. 
First, we need to select the most ‘important’ biographical sentences for the target person. To do so, we first extract from the input documents all sentences that contain some reference to the target person according to the coreference assignment algorithm; this reference may be the target’s name or a coreferential full NP or pronominal referring expression, such as the President or he. We call these sentences hypothesis sentences. We hypothesize that most ’biographical’ sentences will contain a reference to the target. However, some of these sentences may be irrelevant to a biography; therefore, we filter them using a binary classifier that retains only ‘biographical’ sentences. These biographical sentences may also include redundant information; therefore, we cluster them and choose one sentence from each cluster to represent the information in that cluster. Since some of these sentences have more salient biographical information than others and since manually produced biographies tend to include information in a certain order, we reorder our summary sentences using an SVM regression model trained on biographies. Finally, the first reference to the target person in the initial sentence in the reordering is rewritten using the longest coreference in our hypothesis sentences which contains the target’s full name. We then trim the output to a threshold to produce a biography of a certain length for evaluation against the DUC2004 systems. 3 Training Data One of the difficulties inherent in automatic biography generation is the lack of training data. One might collect training data by manually annotating a suitable corpus containing biographical and nonbiographical data about a person, as in (Zhou et al., 2004). However, such annotation is labor intensive. To avoid this problem, we adopt an unsupervised approach. We use Wikipedia biographies as our corpus of ’biographical’ sentences. We collect our ‘nonbiographical’ sentences from the English newswire documents in the TDT4 corpus.1 While each corpus 1http://projects.ldc.upenn.edu/TDT4 may contain positive and negative examples, we assume that most sentences in Wikipedia biographies are biographical and that the majority of TDT4 sentences are non-biographical. 3.1 Constructing the Biographical Corpus To automatically collect our biographical sentences, we first download the xml version of Wikipedia and extract only the documents whose authors used the Wikipedia biography template when creating their biography. There are 16,906 biographies in Wikipedia that used this template. We next apply simple text processing techniques to clean the text. We select at most the first 150 sentences from each page, to avoid sentences that are not critically important to the biography. For each of these sentences we perform the following steps: 1. We identify the biography’s subject from its title, terming this name the ‘target person.’ 2. We run NYU’s 2005 ACE system (Grishman et al., 2005) to tag NEs and do coreference resolution. There are 43 unique NE tags in our corpora, including PER Individual, ORG Educational, and so on, and TIMEX tags for all dates. 3. For each sentence, we replace each NE by its tag name and type ([name-type subtype]) as assigned by the NYU tagger. This modified sentence we term a class-based/lexical sentence. 4. Each non-pronominal referring expression (e.g., George W. 
Bush, the US president) that is tagged as coreferential with the target person is replaced by our own [TARGET PER] tag and every pronoun P that refers to the target person is replaced by [TARGET P], where P is the pronoun itself. This allows us to generalize our sentences while retaining a) the essential distinction between this NE (and its role in the sentence) and all other NEs in the sentence, and b) the form of referring expressions. 5. Sentences containing no reference to the target person are assumed to be irrelevant and removed from the corpus, as are sentences with 808 fewer than 4 tokens; short sentences are unlikely to contain useful information beyond the target reference. For example, given sentences from the Wikipedia biography of Martin Luther King, Jr. we produce class-based/lexical sentences as follows: Martin Luther King, Jr., was born on January 15, 1929, in Atlanta, Georgia. He was the son of Reverend Martin Luther King, Sr. and Alberta Williams King. He had an older sister, Willie Christine (September 11, 1927) and a younger brother, Albert Daniel. [TARGET PER], was born on [TIMEX], in [GPE PopulationCenter]. [TARGET HE] was the son of [PER Individual] and [PER Individual]. [TARGET HE] had an older sister, [PER Individual] ([TIMEX]) and a younger brother, [PER Individual]. 3.2 Constructing the Non-Biographical Corpus We use the TDT4 corpus to identify nonbiographical sentences. Again, we run NYU’s 2005 ACE system to tag NEs and do coreference resolution on each news story in TDT4. Since we have no target name for these stories, we select an NE tagged as PER Individual at random from all NEs in the story to represent the target person. We exclude any sentence with no reference to this target person and produce class-based/lexical sentences as above. 4 Our Biography Extraction System 4.1 Classifying Biographical Sentences Using the biographical and non-biographical corpora described in Section 3, we train a binary classifier to determine whether a new sentence should be included in a biography or not. For our experiments we extracted 30,002 sentences from Wikipedia biographies and held out 2,108 sentences for testing. Similarly. we extracted 23,424 sentences from TDT4, and held out 2,108 sentences for testing. For each sentence, we then extract the frequency of three class-based/lexical features — unigram, biagram, and trigram — and two POS features — the frequency of unigram and bigram POS. To reduce the dimensionality of our feature space, we first sort the features in decreasing order of Chi-square statistics computed from the contingency tables of the observed frequencies from the training data. We then take the highest 30-80% features, where the number of features used is determined empirically for Classifier Accuracy F-Measure SVM 87.6% 0.87 M. na¨ıve Bayes 84.1% 0.84 C4.5 81.8% 0.82 Table 1: Binary classification results: Wikipedia biography class-based/lexical sentences vs. TDT4 classbased/lexical sentences each feature type. This process identifies features that significantly contribute to the classification task. We extract 3K class-based/lexical unigrams, 5.5K bigrams, 3K trigrams, 20 POS unigrams, and 166 POS bigrams. Using the training data described above, we experimented with three different classification algorithms using the Weka machine learning toolkit (Witten et al., 1999): multinomial na¨ıve Bayes, SVM with linear kernel, and C4.5. 
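A minimal sketch of the chi-square feature ranking described above is given below; the actual experiments were run with Weka rather than this code, and feature_counts[f] = (bio_sentences_with_f, non_bio_sentences_with_f) is an assumed precomputed dictionary.

```python
def chi_square_rank(feature_counts, n_bio, n_non):
    """Sketch: rank every n-gram feature by the chi-square statistic of its
    2x2 contingency table (feature present/absent vs. biographical/
    non-biographical sentence)."""
    def chi2(a, b):
        c, d = n_bio - a, n_non - b                # sentences without the feature
        n = a + b + c + d
        denom = (a + b) * (c + d) * (a + c) * (b + d)
        return n * (a * d - b * c) ** 2 / denom if denom else 0.0
    return sorted(feature_counts,
                  key=lambda f: chi2(*feature_counts[f]),
                  reverse=True)

# Keep the top 30-80% per feature type, e.g. roughly the 3K best unigrams:
# selected = chi_square_rank(unigram_counts, n_bio, n_non)[:3000]
```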
Weka also provides a classification confidence score that represents how confident the classifier is on each classified sample, which we will make use of as well. Table 1 presents the classification results on our 4,216 held-out test-set sentences. These results are quite promising. However, we should note that they may not necessarily represent the successful classification of biographical vs. non-biographical sentences but rather the classification of Wikipedia sentences vs. TDT4 sentences. We will validate these results for our full systems in Section 5. 4.2 Removing Redundant Sentences Typically, redundancy removal is a standard component in MDS systems. In sentence-extraction based summarizers, redundant sentences are defined as those which include the same information without introducing new information and identified by some form of lexically-based clustering. We use an implementation of a single-link nearest neighbor clustering technique based on stem-overlap (BlairGoldensohn et al., 2004b) to cluster the sentences classified as biographical by our classifier, and then select the sentence from each cluster that maximizes the confidence score returned by the classifier as the representative for that cluster. 4.3 Sentence Reordering It is essential for MDS systems in the extraction framework to choose the order in which sentences 809 should be presented in the final summary. Presenting more important information earlier in a summary is a general strategy for most domains, although importance may be difficult to determine reliably. Similar to (Barzilay and Lee, 2004), we automatically learn how to order our biographical sentences by observing the typical order of presentation of information in a particular domain. We observe that our Wikipedia biographies tend to follow a general presentation template, in which birth information is mentioned before death information, information about current professional position and affiliations usually appear early in the biography, and nuclear family members are typically mentioned before more distant relations. Learning how to order information from these biographies however would require that we learn to identify particular types of biographical information in sentences. We directly use the position of each sentence in each Wikipedia biography as a way of determining where sentences containing similar information about different target individuals should appear in their biographies. We represent the absolute position of each sentence in its biography as an integer and train an SVM regression model with RBF kernel, from the class/lexical features of the sentence to its position. We represent each sentence by a feature vector whose elements correspond to the frequency of unigrams and bigrams of class-based items (e.g., GPE, PER) (cf. Section 3) and lexical items; for example, the unigrams born, became, and [GPE State-or-Province], and the bigrams was born, [TARGET PER] died and [TARGET PER] joined would be good candidates for such features. To minimize the dimensionality of our regression space, we constrained our feature choice to those features that are important to distinguish biographical sentences, which we term biographical terms. Since we want these biographical terms to impact the regression function, we define these to be phrases that consist of at least one lexical item that occurs in many biographies but rarely more than once in any given biography. 
We compute the biographical term score as in the following equation: bio score(t)=| Dt | | D | · P d∈Dt(1 − n(t)d maxt(n(t)d)) | D | (1) where D is the set of 16,906 Wikipedia biographies, n(t)d is the number of occurrences of term t in document d, and Dt = {d ∈D : t ∈d}. The left factor represents the document frequency of term t, and the right factor calculates how infrequent the term is in each biography that contains t at least once.2 We order the unigrams and bigrams in the biographies by their biographical term scores and select the highest 1K unigrams and 500 bigrams; these thresholds were determined empirically. 4.4 Reference Rewriting We observe that news articles typically mention biographical information that occurs early in Wikipedia biographies when they mention individuals for the first time in a story (e.g. Stephen Hawking, the Cambridge University physicist). We take advantage of the fact that the coreference resolution system we use tags full noun phrases including appositives as part of NEs. Therefore, we initially search for the sentence that contains the longest identified NE (of type PER) that includes the target person’s full name and is coreferential with the target according to the reference resolution system; we denote this NE NENP. If this sentence has already been classified as a biographical sentence by our classifier, we simply boost its rank in the summary to first. Otherwise, when we order our sentences, we replace the reference to the target person in the first sentence by NENP. For example, if the first sentence in the biography we have produced for Jimmy Carter is He was born in 1947 and a sentence not chosen for inclusion in our biography Jimmy Carter, former U.S. President, visited the University of California last year. contains the NE-NP, and Jimmy Carter and He are coreferential, then the first sentence in our biography will be rewritten as Jimmy Carter, former U.S. President, was born in 1947. Note that, in the evaluations presented in Section 5, sentence order was modified by this process in only eight summaries. 5 Evaluation To evaluate our biography generation system, we use the document sets created for the biography evalua2We considered various approaches to feature selection here, such as comparing term frequency between our biographical and non-biographical corpora. However, terms such as killed and died, which are useful biographical terms, also occur frequently in our non-biographical corpus. 810 ROUGE-L Average_F 0.25 0.275 0.3 0.325 0.35 0 1 2 3 4 5 6 7 8 9 10 11 12 SVM reg. only top-DUC2004 C4.5 SVM SVM + SVM reg. MNB + SVM reg MNB C4.5 + SVM reg. SVM + baseline order C4.5 + baseline order MNB + baseline order Figure 1: Comparing our approaches against the top performing system in DUC2004 according to ROUGE-L (diamond). tion (task 5) of DUC2004.3 The task for systems participating in this evalution was “ Given each document cluster and a question of the form “Who is X?”, where X is the name of a person or group of people, create a short summary (no longer than 665 bytes) of the cluster that responds to the question.” NIST assessors chose 50 clusters of TREC documents such that all the documents in a given cluster provide at least part of the answer to this question. Each cluster contained on average 10 documents. NIST had 4 human summaries written for each cluster. A baseline summary was also created for each cluster by extracting the first 665 bytes of the most recent document in the cluster. 
22 systems participated in the competition, producing a total of 22 automatic summaries (restricted to 665 bytes) for each cluster. We evaluate our system against the top performing of these 22 systems, according to ROUGEL, which we denote top-DUC2004.4 5.1 Automatic Evaluation Using ROUGE As noted in Section 4.1, we experimented with a number of learning algorithms when building our biographical-sentence classifier. For each machine learning algorithm tested, we build a system that initially classifies the input list of sentences into biographical and non-biographical sentences and then 3http://duc.nist.gov/duc2004 4Note that this system out-performed 19 of the 22 systems on ROUGE-1 and 20 of 22 on ROUGE-L and ROUGE-W-1.2 (p < .05) (Blair-Goldensohn et al., 2004a). No ROUGE metric produced scores where this system scored significantly worse than any other system. See Figure 2 below for a comparison of all DUC2004 systems with our top system where all systems are evaluated using ROUGE-L-1.5.5. removes redundant sentences. Next, we produce three versions of each system: one which implements a baseline ordering procedure, in which sentences from the clusters are ordered by their appearance in their source document (e.g. any sentence which occurred first in its original document is placed first in the summary, with ties ordered randomly within the set), a second which orders the biographical sentences by the confidence score obtained from the classifier, and a third which uses the SVM regression as the reordering component. Finally, we run our reference rewriting component on each and trim the output to 665 bytes. We evaluate first using the ROUGE-L metric (Lin and Hovy, 2003) with a 95% (ROUGE computed) confidence interval for all systems and compared these to the ROUGE-L score of the best-performing DUC2004 system.5 The higher the ROUGE score, the closer the summary is to the DUC2004 human reference summaries. As shown in Figure 1, our best performing system is the multinomial na¨ıve Bayes classifier (MNB) using the classifier confidence scores to order the sentences in the biography. This system significantly outperforms the top ranked DUC2004 system (top-DUC2004).6 The success of this particularly learning algorithm on our task may be due to: (1) the nature of our feature space – ngram frequencies are modeled properly by a multinomial distribution; (2) the simplicity of this classifier particularly given our large feature dimensional5We used the same version (1.5.5) of the ROUGE metric to compute scores for the DUC systems and baseline also. 6Significance for each pair of systems was determined by paired t-test and calculated at the .05 significance level. 811 ity; and (3) the robustness of na¨ıve Bayes with respect to noisy data: Not all sentences in Wikipedia biographies are biographical sentences and some sentences in TDT4 are biographical. While the SVM regression reordering component has a slight negative impact on the performance of the MNB system, the difference between the two versions is not significant. Note however, that both the C4.5 and the SVM versions of our system are improved by the SVM regression sentence reordering. While neither performs better than topDUC2004 without this component, the C4.5 system with SVM reordering is significantly better than topDUC2004 and the performance of the SVM system with SVM regression is comparable to topDUC2004. 
In fact, when we use only the SVM regression model to rank the hypothesis sentences, without employing any classifier, then remove redundant sentences, rewrite and trim the results, we find that, interestingly, this approach also outperforms top-DUC2004, although the difference is not statistically significant. However, we believe that this is an area worth pursuing in future, with more sophisticated features. The following biography of Brian Jones was produced by our MNB system and then the sentences were ordered using the SVM regression model: "Born in Bristol in 1947, Brian Jones, the co-pilot on the Breitling mission, learned to fly at 16, dropping out of school a year later to join the Royal Air Force. After earning his commercial balloon flying license, Jones became a ballooning instructor in 1989 and was certified as an examiner for balloon flight licenses by the British Civil Aviation Authority. He helped organize Breitling's most recent around-the-world attempts, in 1997 and 1998. Jones, 52, replaces fellow British flight engineer Tony Brown. Jones, who is to turn 52 next week, is actually the team's third co-pilot. After 13 years of service, he joined a catering business and, in the 1980s,..." Figure 2 illustrates the performance of our MNB system with classifier confidence score sentence ordering when compared to mean ROUGE-L-1.5.5 scores of DUC2004 human-generated summaries and the 22 DUC2004 systems' summaries across all summary tasks. Human summaries are labeled A-H, DUC2004 systems 1-22, and our MNB system is marked by the rectangle. Results are sorted by mean ROUGE-L score. Note that our system performance is actually comparable in ROUGE-L score to one of the human summary generators and is significantly better than all DUC2004 systems, including top-DUC2004, which is System 1 in the figure. 5.2 Manual Evaluation ROUGE evaluation is based on n-gram overlap between the automatically produced summary and the human reference summaries. Thus, it is not able to measure how fluent or coherent a summary is. Sentence ordering is one factor in determining fluency and coherence. So, we conducted two experiments to measure these qualities, one comparing our top-performing system according to ROUGE-L score (MNB) vs. the top-performing DUC2004 system (top-DUC2004) and another comparing our top system with two different ordering methods, classifier-based and SVM regression.7 In each experiment, summaries were trimmed to 665 bytes. In the first experiment, three native speakers of American English were presented with the 50 questions (Who is X?). For each question they were given a pair of summaries (presented in random order): one was the output of our MNB system and the other was the summary produced by the top-DUC2004 system. Subjects were asked to decide which summary was more responsive in form and content to the question or whether both were equally responsive. 85.3% (128/150) of subject judgments preferred one summary over the other. 100/128 (78.1%) of these judgments preferred the summaries produced by our MNB system over those produced by top-DUC2004. If we compute the majority vote, there were 42/50 summaries in which at least two subjects made the same choice. 37/42 (88.1%) of these majority judgments preferred our system's summary (binomial test, p = 4.4e−7). We used the weighted kappa statistic with quadratic weighting (Cohen, 1968) to determine the inter-rater agreement, obtaining a mean pairwise κ of 0.441.
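For concreteness, a minimal sketch of a quadratically weighted kappa computation for two raters is given below; it assumes the raters label the same items with categories from a fixed ordered list and is not tied to the authors' exact implementation:

```python
from collections import Counter

def quadratic_weighted_kappa(rater_a, rater_b, categories):
    """Cohen's weighted kappa with quadratic disagreement weights for two raters.

    `rater_a` and `rater_b` are equal-length lists of labels drawn from the ordered
    list `categories` (e.g. ["prefer_A", "equal", "prefer_B"]).
    """
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(rater_a)
    observed = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater_a, rater_b):
        observed[index[a]][index[b]] += 1.0 / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    num = den = 0.0
    for i in range(k):
        for j in range(k):
            weight = ((i - j) ** 2) / ((k - 1) ** 2)  # quadratic weight on disagreement
            expected = (counts_a[categories[i]] / n) * (counts_b[categories[j]] / n)
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den  # assumes the raters are not both constant (den > 0)
```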
Recall from Section 5.1 that our SVM regression reordering component slightly decreases the average ROUGE score (although not significantly) for our MNB system. For our human evaluations, we decided to evaluate the quality of the presentation of our summaries with and without this component to see if this reordering component affected human judgments even if it did not improve ROUGE scores. 7Note that top-DUC2004 was ranked sixth in the DUC 2004 manual evaluation, with no system performing significantly better for coverage and only 1 system performing significantly better for responsiveness. [Figure 2: ROUGE-L scores for DUC2004 human summaries (A-H), our MNB system (rectangle), and the DUC2004 competing systems (1-22, anonymized), with the baseline system labeled BL.] For each question, we produced two summaries from the sentences classified as biographical by the MNB classifier, one ordered by the confidence score obtained by the MNB, in decreasing order, and the other ordered by the SVM regression values, in increasing order. Note that, in three cases, the summary sentences were ordered identically by both procedures, so we used only 47 summaries for this evaluation. Three (different) native speakers of American English were presented with the 47 questions for which sentence ordering differed. For each question they were given the two summaries (presented in random order) and asked to determine which biography they preferred. We found inter-rater agreement for these judgments using Fleiss' kappa (Fleiss, 1971) to be only moderate (κ = 0.362). However, when we computed the majority vote for each question, we found that 61.7% (29/47) preferred the SVM regression ordering over the MNB classifier confidence score ordering. Although this difference is not statistically significant, again we find the SVM regression ordering results encouraging enough to motivate our further research on improving such ordering procedures. 6 Related Work The DUC2004 system achieving the highest overall ROUGE score, our top-DUC2004 in Section 5, was Blair-Goldensohn et al. (2004a)'s DefScriber, which treats "Who is X?" as a definition question and targets definitional themes (e.g., genus-species) found in the input document collections which include references to the target person. Extracted sentences are then rewritten using a reference rewriting system (Nenkova and McKeown, 2003) which attempts to shorten subsequent references to the target. Sentences are ordered in the summary based on a weighted combination of topic centrality, lexical cohesion, and topic coverage scores. A similar approach is explored in Biryukov et al. (2005), which uses Topic Signatures (Lin and Hovy, 2000) constructed around the target individual's name to identify sentences to be included in the biography. Zhou et al. (2004)'s biography generation system, like ours, trains biographical and non-biographical sentence classifiers to select sentences to be included in the biography. Their system is trained on a hand-annotated corpus of 130 biographies of 12 people, tagged with 9 biographical elements (e.g., bio, education, nationality) and uses binary unigram and bigram lexical and unigram part-of-speech features for classification. Duboue and McKeown (2003) also address the problem of learning content selection rules for biography.
They learn rules from two corpora, a semi-structured corpus with lists of biographical facts about show business celebrities and a corpus of free-text biographies about the same celebrities. Filatova and Prager (2005) learn text features typical of biographical descriptions by deducing biographical and occupation-related activities automatically by comparing descriptions of people with different occupations. Weischedel et al. (2004) model kernel-fact features typical for biographies using linguistic and semantic processing. Linguistic features are derived from predicate-argument structures deduced from parse trees, and semantic features are the set of biography-related relations and events defined in the ACE guidelines (Doddington et al., 2004). Sentences containing kernel facts are ranked using probabilities estimated from a corpus of manually created biographies, including Wikipedia, to estimate the conditional distribution of relevant material given a kernel fact and a background corpus. The problem of ordering sentences and preserving coherence in MDS is addressed by Barzilay et al. (2001), who combine chronological ordering of events with cohesion metrics. SVM regression has recently been used by Li et al. (2007) for sentence ranking for general MDS. The authors calculated a similarity score for each sentence to the human summaries and then regressed numeric features (e.g., the centroid) from each sentence to this score. Barzilay and Lee (2004) use HMMs to capture topic shift within a particular domain; the sequence of topic shifts then guides the subsequent ordering of sentences within the summary. 7 Discussion and Future Work In this paper, we describe an MDS system for producing biographies, given a target name. We present an unsupervised approach using Wikipedia biography pages and a general news corpus (TDT4) to automatically construct training data for our system. We employ an NE tagger and a coreference resolution system to extract class-based and lexical features from each sentence which we use to train a binary classifier to identify biographical sentences. We also train an SVM regression model to reorder the sentences and then employ a rewriting heuristic to create the final summary. We compare versions of our system based upon three machine learning algorithms and two sentence reordering strategies plus a baseline. Our best performing system uses the multinomial naïve Bayes (MNB) classifier with classifier confidence score reordering. However, our SVM regression reordering improves summaries produced by the other two classifiers and is preferred by human judges. We compare our MNB system on the DUC2004 biography task (task 5) to other DUC2004 systems and to human-generated summaries. Our system out-performs all DUC2004 systems significantly, according to ROUGE-L-1.5.5. When presented with summaries produced by our system and summaries produced by the best-performing (according to ROUGE scores) of the DUC2004 systems, human judges (majority vote of 3) prefer our system's biographies in 88.1% of cases. In addition to its high performance, our approach has the following advantages: It employs no manual annotation but relies upon identifying appropriately different corpora to represent our training corpus. It employs class-based as well as lexical features where the classes are obtained automatically from an ACE NE tagger. It utilizes automatic coreference resolution to identify sentences containing references to the target person.
Our sentence reordering approaches make use of either classifier confidence scores or ordering learned automatically from the actual ordering of sentences in Wikipedia biographies to determine the order of presentation of sentences in our summaries. Since our task is to produce concise summaries, one focus of our future research will be to simplify the sentences we extract before classifying them as biographical or non-biographical. This procedure should also help to remove irrelevant information from sentences. Recall that our SVM regression model for sentence ordering was trained using only biographical class-based/lexical items. In future, we would also like to experiment with more linguistically-informed features. While Wikipedia does not enforce any particular ordering of information in biographies, and while different biographies may emphasize different types of information, it would appear that the success of our automatically derived ordering procedures may capture some underlying shared view of how biographies are written. The same underlying views may also apply to domains such as organization descriptions or types of historical events. In future we plan to explore such a generalization of our procedures to such domains. Acknowledgments We thank Kathy McKeown, Andrew Rosenberg, Wisam Dakka, and the Speech and NLP groups at Columbia for useful discussions. This material is based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001106C0023 (approved for public release, distribution unlimited). Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA. 814 References Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of NAACL-HLT. Regina Barzilay, Noemie Elhadad, and Kathleen McKeown. 2001. Sentence ordering in multidocument summarization. In Proceedings of the First Human Language Technology Conference, San Diego, California. Maria Biryukov, Roxana Angheluta, and Marie-Francine Moens. 2005. Multidocument question answering text summarization using topic signatures. In Proceedings of the 5th Dutch-Belgium Information Retrieval Workshop, Utrecht, the Netherlands. Sasha Blair-Goldensohn, David Evans, Vasileios Hatzivassiloglou, Kathleen McKeown, Ani Nenkova, Rebecca Passonneau, Barry Schiffman, Andrew Schlaikjer, Advaith Siddharthan, and Sergey Siegelman. 2004a. Columbia University at DUC 2004. In Proceedings of the 4th Document Understanding Conference, Boston, Massachusetts, USA. Sasha Blair-Goldensohn, Kathy McKeown, and Andrew Schlaikjer. 2004b. Answering definitional questions: A hybrid approach. In Mark Maybury, editor, New Directions In Question Answering, chapter 4. AAAI Press. J. Cohen. 1968. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. volume 70, pages 213–220. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The automatic content extraction program - tasks, data, and evaluation. In Proceedings of the LREC Conference, Canary Islands, Spain, July. Pablo Duboue and Kathleen McKeown. 2003. Statistical acquisition of content selection rules for natural language generation. In Proceedings of the Conference on Empirical Methods for Natural Language Processing, pages 121–128, Sapporo, Japan, July. 
Elena Filatova and John Prager. 2005. Tell me what you do and I’ll tell you what you are: Learning occupation-related activities for biographies. In Proceedings of the Joint Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 113–120, Vancouver, Canada, October. J. L. Fleiss. 1971. Measuring nominal scale agreement among many raters. volume 76, No. 5, pages 378–382. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu’s english ace 2005 system description. In ACE 05 Evaluation Workshop, Gaithersburg, MD. Sujian Li, You Ouyang, Wei Wang, and Bin Sun. 2007. Multi-document summarization using support vector regression. In http://duc.nist.gov/pubs/2007papers. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th International Conference on Computational Linguistics, pages 495–501, Saarbr¨ucken, Germany, July. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Language Technology Conference, Edmonton, Canada. Ani Nenkova and Kathleen McKeown. 2003. References to named entities: A corpus study. In Proceedings of the Joint Human Language Technology Conference and North American chapter of the Association for Computational Linguistics Annual Meeting, Edmonton, Canada, May. Ralph Weischedel, Jinxi Xu, and Ana Licuanan. 2004. A hybrid approach to answering biographical questions. In Mark Maybury, editor, New Directions In Question Answering, chapter 5. AAAI Press. I. Witten, E. Frank, L. Trigg, M. Hall, G. Holmes, and S. Cunningham. 1999. Weka: Practical machine learning tools and techniques with java implementation. In International Workshop: Emerging Knowledge Engineering and Connectionist-Based Information Systems, pages 192–196. Liang Zhou, Miruna Ticrea, and Eduard Hovy. 2004. Multi-document biography summarization. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 434–441, Barcelona, Spain. 815
Proceedings of ACL-08: HLT, pages 816–824, Columbus, Ohio, USA, June 2008. ©2008 Association for Computational Linguistics Generating Impact-Based Summaries for Scientific Literature Qiaozhu Mei University of Illinois at Urbana-Champaign [email protected] ChengXiang Zhai University of Illinois at Urbana-Champaign [email protected] Abstract In this paper, we present a study of a novel summarization problem, i.e., summarizing the impact of a scientific publication. Given a paper and its citation context, we study how to extract sentences that can represent the most influential content of the paper. We propose language modeling methods for solving this problem, and study how to incorporate features such as authority and proximity to accurately estimate the impact language model. Experiment results on a SIGIR publication collection show that the proposed methods are effective for generating impact-based summaries. 1 Introduction The volume of scientific literature has been growing rapidly. From recent statistics, each year 400,000 new citations are added to MEDLINE, the major biomedical literature database.1 1http://www.nlm.nih.gov/bsd/history/tsld024.htm This fast growth of literature makes it difficult for researchers, especially beginning researchers, to keep track of the research trends and find high impact papers on unfamiliar topics. Impact factors (Kaplan and Nelson, 2000) are useful, but they are just numerical values, so they cannot tell researchers which aspects of a paper are influential. On the other hand, a regular content-based summary (e.g., the abstract or conclusion section of a paper or an automatically generated topical summary (Giles et al., 1998)) can help a user know about the main content of a paper, but not necessarily the most influential content of the paper. Indeed, the abstract of a paper mostly reflects the expected impact of the paper as perceived by the author(s), which could significantly deviate from the actual impact of the paper in the research community. Moreover, the impact of a paper changes over time due to the evolution and progress of research in a field. For example, an algorithm published a decade ago may be no longer the state of the art, but the problem definition in the same paper can be still well accepted. Although much work has been done on text summarization (see Section 6 for a detailed survey), to the best of our knowledge, the problem of impact summarization has not been studied before. In this paper, we study this novel summarization problem and propose language modeling-based approaches to solving the problem. By definition, the impact of a paper has to be judged based on the consent of the research community, especially by people who cited it. Thus in order to generate an impact-based summary, we must use not only the original content, but also the descriptions of that paper provided in papers which cited it, making it a challenging task and different from a regular summarization setup such as news summarization. Indeed, unlike a regular summarization system which identifies and interprets the topic of a document, an impact summarization system should identify and interpret the impact of a paper. We define the impact summarization problem in the framework of extraction-based text summarization (Luhn, 1958; McKeown and Radev, 1995), and cast the problem as an impact sentence retrieval problem. We propose language models to exploit both the citation context and original content of a paper to generate an impact-based summary.
We study how to incorporate features such as authority and proximity into the estimation of language models. We propose and evaluate several different strategies for estimating the impact language model, which is key to impact summarization. No existing test collection is available for evaluating impact summarization. We construct a test collection using 28 years of ACM SIGIR papers (1978 - 2005) to evaluate the proposed methods. Experiment results on this collection show that the proposed approaches are effective for generating impact-based summaries. The results also show that using both the original document content and the citation contexts is important and incorporating citation authority and proximity is beneficial. An impact-based summary is not only useful for facilitating the exploration of literature, but also helpful for suggesting query terms for literature retrieval, understanding the evolution of research trends, and identifying the interactions of different research fields. The proposed methods are also applicable to summarizing the impact of documents in other domains where citation context exists, such as emails and weblogs. The rest of the paper is organized as follows. In Section 2 and 3, we define the impact-based summarization problem and propose the general language modeling approach. In Section 4, we present different strategies and features for estimating an impact language model, a key challenge in impact summarization. We discuss our experiments and results in Section 5. Finally, the related work and conclusions are discussed in Section 6 and Section 7. 2 Impact Summarization Following the existing work on topical summarization of scientific literature (Paice, 1981; Paice and Jones, 1993), we define an impact-based summary of a paper as a set of sentences extracted from a paper that can reflect the impact of the paper, where “impact” is roughly defined as the influence of the paper on research of similar or related topics as reflected in the citations of the paper. Such an extraction-based definition of summarization has also been quite common in most existing general summarization work (Radev et al., 2002). By definition, in order to generate an impact summary of a paper, we must look at how other papers cite the paper, use this information to infer the impact of the paper, and select sentences from the original paper that can reflect the inferred impact. Note that we do not directly use the sentences from the citation context to form a summary. This is because in citations, the discussion of the paper cited is usually mixed with the content of the paper citing it, and sometimes also with discussion about other papers cited (Siddharthan and Teufel, 2007). Formally, let d = (s0, s1, ..., sn) be a paper to be summarized, where si is a sentence. We refer to a sentence (in another paper) in which there is an explicit citation of d as a citing sentence of d. When a paper is cited, it is often discussed consecutively in more than one sentence near the citation, thus intuitively we would like to consider a window of sentences centered at a citing sentence; the window size would be a parameter to set. We call such a window of sentences a citation context, and use C to denote the union of all the citation contexts of d in a collection of research papers. Thus C itself is a set (more precisely bag) of sentences. The task of impact-based summarization is thus to 1) construct a representation of the impact of d, I, based on d and C; 2) design a scoring function Score(.) 
to rank sentences in d based on how well a sentence reflects I. A user-defined number of top-ranked sentences can then be selected as the impact summary for d. The formulation above immediately suggests that we can cast the impact summarization problem as a retrieval problem where each candidate sentence in d is regarded as a "document," the impact of the paper (i.e., I) as a "query," and our goal is to "retrieve" sentences that can reflect the impact of the paper as indicated by the citation context. Looking at the problem in this way, we see that there are two main challenges in impact summarization: first, we must be able to infer the impact based on both the citation contexts and the original document; second, we should measure how well a sentence reflects this inferred impact. To solve these challenges, in the next section, we propose to model impact with unigram language models and score sentences using Kullback-Leibler divergence. We further propose methods for estimating the impact language model based on several features including the authority of citations, and the citation proximity. 3 Language Models for Impact Summarization 3.1 Impact language models From the retrieval perspective, our collection is the paper to be summarized, and each sentence is a "document" to be retrieved. However, unlike in the case of ad hoc retrieval, we do not really have a query describing the impact of the paper; instead, we have a lot of citation contexts that can be used to infer information about the query. Thus the main challenge in impact summarization is to effectively construct a "virtual impact query" based on the citation contexts. What should such a virtual impact query look like? Intuitively, it should model the impact-reflecting content of the paper. We thus propose to represent such a virtual impact query with a unigram language model. Such a model is expected to assign high probabilities to those words that can describe the impact of paper d, just as we expect a query language model in ad hoc retrieval to assign high probabilities to words that tend to occur in relevant documents (Ponte and Croft, 1998). We call such a language model the impact language model of paper d (denoted as θI); it can be estimated based on both d and its citation context C as will be discussed in Section 4. 3.2 KL-divergence scoring With the impact language model in place, we can then adopt many existing probabilistic retrieval models such as the classical probabilistic retrieval models (Robertson and Sparck Jones, 1976) and the Kullback-Leibler (KL) divergence retrieval model (Lafferty and Zhai, 2001; Zhai and Lafferty, 2001a), to solve the problem of impact summarization by scoring sentences based on the estimated impact language model. In our study, we choose to use the KL-divergence scoring method to score sentences as this method has performed well for regular ad hoc retrieval tasks (Zhai and Lafferty, 2001a) and has an information theoretic interpretation. To apply the KL-divergence scoring method, we assume that a candidate sentence s is generated from a sentence language model θs. Given s in d and the citation context C, we would first estimate θs based on s and estimate θI based on C, and then score s with the negative KL divergence of θs and θI. That is,

Score(s) = −D(θI || θs) = Σ_{w∈V} p(w|θI) log p(w|θs) − Σ_{w∈V} p(w|θI) log p(w|θI)

where V is the set of words in our vocabulary and w denotes a word.
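A minimal sketch of this scoring function, assuming θI and θs are available as word-to-probability dictionaries and that θs has been smoothed to assign non-zero probability to every vocabulary word, might look like the following (an illustration, not the authors' implementation):

```python
import math

def impact_score(theta_I, theta_s, vocabulary):
    """Score(s) = -D(theta_I || theta_s), with both terms of the formula kept.

    theta_I and theta_s map words to probabilities; theta_s is assumed to be smoothed
    so that theta_s[w] > 0 for every word with non-zero impact-model probability.
    """
    cross_term = sum(theta_I.get(w, 0.0) * math.log(theta_s[w])
                     for w in vocabulary if theta_I.get(w, 0.0) > 0.0)
    entropy_term = sum(p * math.log(p) for p in theta_I.values() if p > 0.0)
    return cross_term - entropy_term
```

Since the entropy term does not depend on s, it can be dropped when only the ranking of sentences is needed.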
From the information theoretic perspective, the KL-divergence of θs and θI can be interpreted as measuring the average number of bits wasted in compressing messages generated according to θI (i.e., impact descriptions) with coding non-optimally designed based on θs. If θs and θI are very close, the KL-divergence would be small and Score(s) would be high, which intuitively makes sense. Note that the second term (entropy of θI) is independent of s, so it can be ignored for ranking s. We see that according to the KL-divergence scoring method, our main tasks are to estimate θs and θI. Since s can be regarded as a short document, we can use any standard method to estimate θs. In this work, we use Dirichlet prior smoothing (Zhai and Lafferty, 2001b) to estimate θs as follows:

p(w|θs) = ( c(w, s) + µs · p(w|D) ) / ( |s| + µs )    (1)

where |s| is the length of s, c(w, s) is the count of word w in s, p(w|D) is a background model estimated using c(w, D) / Σ_{w′∈V} c(w′, D) (D can be the set of all the papers available to us) and µs is a smoothing parameter to be empirically set. Note that as the length of a sentence is very short, smoothing is critical for addressing the data sparseness problem. The remaining challenge is to estimate θI accurately based on d and its citation contexts. 4 Estimation of Impact Language Models Intuitively, the impact of a paper is mostly reflected in the citation context. Thus the estimation of the impact language model should be primarily based on the citation context C. However, we would like our impact model to be able to help us select impact-reflecting sentences from d, thus it is important for the impact model to explain well the paper content in general. To achieve this balance, we treat the citation context C as prior information and the current document d as the observed data, and use Bayesian estimation to estimate the impact language model. Specifically, let p(w|C) be a citation context language model estimated based on the citation context C. We define a Dirichlet prior with parameters {µC p(w|C)}_{w∈V} for the impact model, where µC encodes our confidence on this prior and effectively serves as a weighting parameter for balancing the contribution of C and d for estimating the impact model. Given the observed document d, the posterior mean estimate of the impact model would be (MacKay and Peto, 1995; Zhai and Lafferty, 2001b)

p(w|θI) = ( c(w, d) + µC · p(w|C) ) / ( |d| + µC )    (2)

µC can be interpreted as the equivalent sample size of our prior. Thus setting µC = |d| means that we put equal weights on the citation context and the document itself. µC = 0 yields p(w|θI) = p(w|d), which is to say that the impact is entirely captured by the paper itself, and our impact summarization problem would then become the standard single document (topical) summarization. Intuitively though, we would want to set µC to a relatively large number to exploit the citation context in our estimation, which is confirmed in our experiments. An alternative way is to simply interpolate p(w|d) and p(w|C) with a constant coefficient:

p(w|θI) = (1 − δ) p(w|d) + δ p(w|C)    (3)

We will compare the two strategies in Section 5. How do we estimate p(w|C)? Intuitively, words occurring in C frequently should have high probabilities. A simple way is to pool together all the sentences in C and use the maximum likelihood estimator,

p(w|C) = Σ_{s∈C} c(w, s) / Σ_{w′∈V} Σ_{s′∈C} c(w′, s′)    (4)

where c(w, s) is the count of w in s. One deficiency of this simple estimate is that we treat all the (extended) citation sentences equally.
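A minimal sketch of these estimates (Equations 1, 2 and 4), assuming tokenized sentences and documents and a precomputed background distribution p(w|D), is given below; names and default values are illustrative assumptions, not the authors' code:

```python
from collections import Counter

def dirichlet_sentence_model(sentence_tokens, background, mu_s=1000.0):
    """Equation 1: p(w|theta_s) with Dirichlet prior smoothing.
    `background` is assumed to map every vocabulary word to p(w|D)."""
    counts = Counter(sentence_tokens)
    length = len(sentence_tokens)
    return {w: (counts[w] + mu_s * p_bg) / (length + mu_s)
            for w, p_bg in background.items()}

def citation_context_model(citation_sentences):
    """Equation 4: pooled maximum likelihood estimate over all citation sentences,
    where `citation_sentences` is a list of token lists."""
    counts = Counter(tok for sent in citation_sentences for tok in sent)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def impact_model(document_tokens, citation_model, mu_C=20000.0):
    """Equation 2: posterior mean estimate with the citation-context model
    acting as a Dirichlet prior on the document."""
    counts = Counter(document_tokens)
    length = len(document_tokens)
    vocab = set(counts) | set(citation_model)
    return {w: (counts[w] + mu_C * citation_model.get(w, 0.0)) / (length + mu_C)
            for w in vocab}
```

The constant-coefficient alternative of Equation 3 would instead mix p(w|d) and p(w|C) directly with a fixed δ.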
However, there are at least two reasons why we want to assign unequal weights to different citation sentences: (1) A sentence closer to the citation label should contribute more than one far away. (2) A sentence occurring in a highly authoritative paper should contribute more than that in a less authoritative paper. To capture these two heuristics, we define a weight coefficient αs for a sentence s in C as follows: αs = pg(s) · pr(s), where pg(s) is an authority score of the paper containing s and pr(s) is a proximity score that rewards a sentence close to the citation label. For example, pg(s) can be the PageRank value (Brin and Page, 1998) of the document with s, which measures the authority of the document based on a citation graph, and is computed as follows: We construct a directed graph from the collection of scientific literature with each paper as a vertex and each citation as a directed edge pointing from the citing paper to the cited paper. We can then use the standard PageRank algorithm (Brin and Page, 1998) to compute a PageRank value for each document. We used this approach in our experiments. We define pr(s) as pr(s) = 1/α^k, where k is the distance (counted in terms of the number of sentences) between sentence s and the center sentence of the window containing s; by "center sentence", we mean the citing sentence containing the citation label. Thus the sentence with the citation label will have a proximity of 1 (because k = 0), while the sentences away from the citation label will have a decaying weight controlled by parameter α. With αs, we can then use the following "weighted" maximum likelihood estimate for the impact language model:

p(w|C) = Σ_{s∈C} αs c(w, s) / Σ_{w′∈V} Σ_{s′∈C} αs′ c(w′, s′)    (5)

As we will show in Section 5, this weighted maximum likelihood estimate performs better than the simple maximum likelihood estimate, and both pg(s) and pr(s) are useful. 5 Experiments and Results 5.1 Experiment Design 5.1.1 Test set construction Because no existing test set is available for evaluating impact summarization, we opt to create a test set based on 28 years of ACM SIGIR papers (1978–2005) available through the ACM Digital Library2 and the SIGIR membership. Leveraging the explicit citation information provided by the ACM Digital Library, for each of the 1303 papers, we recorded all other papers that cited the paper and extracted the citation context from these citing papers. Each citation context contains 5 sentences with 2 sentences before and after the citing sentence. Since a low-impact paper would not be useful for evaluating impact summarization, we took all the 14 papers from the SIGIR collection that have no fewer than 20 citations by papers in the same collection as candidate papers for evaluation. An expert in the Information Retrieval field read each paper and its citation context, and manually created an impact-based summary by selecting all the "impact-capturing" sentences from the paper. Specifically, the expert first attempted to understand the most influential content of a paper by reading the citation contexts. The expert then read each sentence of the paper and made a decision whether the sentence covers some "influential content" as indicated in the citation contexts. The sentences that were decided as covering some influential content were then collected as the gold standard impact summary for the paper.
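Returning briefly to Equation 5, the weighted citation-context estimate could be sketched as follows, assuming each citation sentence is represented by its tokens, its sentence distance k from the citing sentence, and a precomputed PageRank score for the paper containing it (these containers, and the default α, are assumptions of the sketch rather than the authors' code):

```python
from collections import defaultdict

def weighted_citation_context_model(citation_sentences, alpha=3.0):
    """Equation 5: weighted MLE with alpha_s = pg(s) * pr(s) and pr(s) = 1 / alpha**k.

    `citation_sentences` is assumed to be a list of (tokens, k, pagerank) triples,
    where k is the sentence's distance from the citing sentence in its window.
    """
    weighted_counts = defaultdict(float)
    for tokens, k, pagerank in citation_sentences:
        weight = pagerank * (1.0 / alpha ** k)   # alpha_s = pg(s) * pr(s)
        for tok in tokens:
            weighted_counts[tok] += weight       # accumulates alpha_s * c(w, s)
    total = sum(weighted_counts.values())
    return {w: c / total for w, c in weighted_counts.items()}
```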
We assume that the title of a paper will always be included in the summary, so we excluded the title both when constructing the gold standard and when generating a summary. The gold standard summaries have a minimum length of 5 sentences and a maximum length of 18 sentences; the median length is 9 sentences. These 14 impact-based summaries are used as gold standards for our experiments, based on which all summaries generated by the system are evaluated. This data set is available at http://timan.cs.uiuc.edu/data/impact.html. We must admit that using only 14 papers and only one expert for evaluation is a limitation of our work. However, going beyond the 14 papers would risk reducing the reliability of impact judgment due to the sparseness of citations. How to develop a better test collection is an important future direction. 2http://www.acm.org/dl 5.1.2 Evaluation Metrics Following the current practice in evaluating summarization, particularly DUC3, we use the ROUGE evaluation package (Lin and Hovy, 2003). Among ROUGE metrics, ROUGE-N (models n-gram co-occurrence, N = 1, 2) and ROUGE-L (models longest common sequence) generally perform well in evaluating both single-document summarization and multi-document summarization (Lin and Hovy, 2003). Since they are general evaluation measures for summarization, they are also applicable to evaluating the MEAD-Doc+Cite baseline method to be described below. Thus although we evaluated our methods with all the metrics provided by ROUGE, we only report ROUGE-1 and ROUGE-L in this paper (other metrics give very similar results). 5.1.3 Baseline methods Since impact summarization has not been previously studied, there is no natural baseline method to compare with. We thus adapt some state-of-the-art conventional summarization methods implemented in the MEAD toolkit (Radev et al., 2003)4 to obtain three baseline methods: (1) LEAD: It simply extracts sentences from the beginning of a paper, i.e., sentences in the abstract or beginning of the introduction section; we include LEAD to see if such "leading sentences" reflect the impact of a paper as authors presumably would expect to summarize a paper's contributions in the abstract. (2) MEAD-Doc: It uses the single-document summarizer in MEAD to generate a summary based solely on the original paper; comparison with this baseline can tell us how much better we can do than a conventional topic-based summarizer that does not consider the citation context. (3) MEAD-Doc+Cite: Here we concatenate all the citation contexts in a paper to form a "citation document" and then use the MEAD multi-document summarizer to generate a summary from the original paper plus all its citation documents; this baseline represents a reasonable way of applying an existing summarization method to generate an impact-based summary. Note that this method may extract sentences in the citation contexts but not in the original paper. 3http://duc.nist.gov/ 4http://www.summarization.com/mead/ 5.2 Basic Results We first show some basic results of impact summarization in Table 1.

Table 1: Performance Comparison of Summarizers
Sum. Length  Metric   Random  LEAD   MEAD-Doc  MEAD-Doc+Cite  KL-Divergence
3            ROUGE-1  0.163   0.167  0.301*    0.248          0.323
3            ROUGE-L  0.144   0.158  0.265     0.217          0.299
5            ROUGE-1  0.230   0.301  0.401     0.333          0.467
5            ROUGE-L  0.214   0.292  0.362     0.298          0.444
10           ROUGE-1  0.430   0.514  0.575     0.472          0.649
10           ROUGE-L  0.396   0.494  0.535     0.428          0.622
15           ROUGE-1  0.538   0.610  0.685     0.552          0.730
15           ROUGE-L  0.499   0.586  0.650     0.503          0.705
They are generated using constant coefficient interpolation for the impact language model (i.e., Equation 3) with δ = 0.8, the weighted maximum likelihood estimate for the citation context model (i.e., Equation 5) with α = 3, and µs = 1,000 for candidate sentence smoothing (Equation 1). These results are not necessarily optimal as will be seen when we examine parameter and method variations. From Table 1, we see clearly that our method consistently outperforms all the baselines. Among the baselines, MEAD-Doc is consistently better than both LEAD and MEAD-Doc+Cite. While MEAD-Doc's outperforming LEAD is not surprising, it is a bit surprising that MEAD-Doc also outperforms MEAD-Doc+Cite as the latter uses both the citation context and the original document. One possible explanation may be that MEAD is not designed for impact summarization and it has been trapped by the distracting content in the citation context.5 Indeed, this can also explain why MEAD-Doc+Cite tends to perform worse than LEAD by ROUGE-L since if MEAD-Doc+Cite picks up sentences from the citation context rather than the original papers, it would not match as well with the gold standard as LEAD which selects sentences from the original papers. 5One anonymous reviewer suggested an interesting improvement to the MEAD-Doc+Cite baseline, in which we would first extract sentences from the citation context and then for each extracted sentence find a similar one in the original paper. Unfortunately, we did not have time to test this approach before the deadline for the camera-ready version of this paper. These results thus show that conventional summarization techniques are inadequate for impact summarization, and the proposed language modeling methods are more effective for generating impact-based summaries. In Table 2, we show a sample impact-based summary and the corresponding MEAD-Doc regular summary. We see that the regular summary tends to have general sentences about the problem, background and techniques, not very informative in conveying specific contributions of the paper. None of these sentences was selected by the human expert. In contrast, the sentences in the impact summary cover several details of the impact of the paper (i.e., specific smoothing methods especially Dirichlet prior, sensitivity of performance to smoothing, and dual role of smoothing), and sentences 4 and 6 are also among the 8 sentences picked by the human expert. Interestingly, neither sentence is in the abstract of the original paper, suggesting a deviation between the actual impact of a paper and that perceived by the author(s).

Table 2: Impact-based summary vs. regular summary for the paper "A study of smoothing methods for language models applied to ad hoc information retrieval".
Impact-based summary:
1. Figure 5: Interpolation versus backoff for Jelinek-Mercer (top), Dirichlet smoothing (middle), and absolute discounting (bottom).
2. Second, one can de-couple the two different roles of smoothing by adopting a two-stage smoothing strategy in which Dirichlet smoothing is first applied to implement the estimation role and Jelinek-Mercer smoothing is then applied to implement the role of query modeling.
3. We find that the backoff performance is more sensitive to the smoothing parameter than that of interpolation, especially in Jelinek-Mercer and Dirichlet prior.
4. We then examined three popular interpolation-based smoothing methods (Jelinek-Mercer method, Dirichlet priors, and absolute discounting), as well as their backoff versions, and evaluated them using several large and small TREC retrieval testing collections.
5. By rewriting the query-likelihood retrieval model using a smoothed document language model, we derived a general retrieval formula where the smoothing of the document language model can be interpreted in terms of several heuristics used in traditional models, including TF-IDF weighting and document length normalization.
6. We find that the retrieval performance is generally sensitive to the smoothing parameters, suggesting that an understanding and appropriate setting of smoothing parameters is very important in the language modeling approach.
Regular summary (generated using MEAD-Doc):
1. Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition.
2. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model.
3. On the one hand, theoretical studies of an underlying model have been developed; this direction is, for example, represented by the various kinds of logic models and probabilistic models (e.g., [14, 3, 15, 22]).
4. After applying the Bayes' formula and dropping a document-independent constant (since we are only interested in ranking documents), we have p(d|q) ∝ p(q|d)p(d).
5. As discussed in [1], the right-hand side of the above equation has an interesting interpretation, where p(d) is our prior belief that d is relevant to any query and p(q|d) is the query likelihood given the document, which captures how well the document "fits" the particular query q.
6. The probability of an unseen word is typically taken as being proportional to the general frequency of the word, e.g., as computed using the document collection.

5.3 Component analysis We now turn to examine the effectiveness of each component in the proposed methods and different strategies for estimating θI. Effectiveness of interpolation: We hypothesized that we need to use both the original document and the citation context to estimate θI. To test this hypothesis, we compare the results of using only d, only the citation context, and interpolation of them in Table 3. We show two different strategies of interpolation (i.e., constant coefficient with δ = 0.8 and Dirichlet with µC = 20,000) as described in Section 4. From Table 3, we see that both strategies of interpolation indeed outperform using either the original document model (p(w|d)) or the citation context model (p(w|C)) alone, which confirms that both the original paper and the citation context are important for estimating θI. We also see that using the citation context alone is better than using the original paper alone, which is expected. Between the two strategies, the Dirichlet (dynamic coefficient) strategy is slightly better than the constant coefficient (CC) strategy, after optimizing the interpolation parameter for both strategies.

Table 3: Effectiveness of interpolation
Measure   p(w|d)  p(w|C)  Interpolation (ConstCoef)  Interpolation (Dirichlet)
ROUGE-1   0.529   0.635   0.643                      0.647
ROUGE-L   0.501   0.607   0.619                      0.623

Citation authority and proximity: These heuristics are very interesting to study as they are unique to impact summarization and not well studied in the existing summarization work.
In Table 4, we show the ROUGE-L values for various combinations of these two heuristics (summary length is 15). We turn off either pg(s) or pr(s) by setting it to a constant; when both are turned off, we have the unweighted MLE of p(w|C) (Equation 4).

Table 4: Authority (pg(s)) and proximity (pr(s) = 1/α^k)
pg(s)   pr(s) off   α = 2   α = 3   α = 4
Off     0.685       0.711   0.714   0.700
On      0.708       0.712   0.706   0.703

Clearly, using the weighted MLE with either of the two heuristics is better than the unweighted MLE, indicating that both heuristics are effective. However, combining the two heuristics does not always improve over using a single one. Since intuitively these two heuristics are orthogonal, this may suggest that our way of combining the two scores (i.e., taking a product of them) may not be optimal; further study is needed to better understand this. The ROUGE-1 results are similar. Tuning of other parameters: There are three other parameters which need to be tuned: (1) µs for candidate sentence smoothing (Equation 1); (2) µC in Dirichlet interpolation for impact model estimation (Equation 2); and (3) δ in constant coefficient interpolation (Equation 3). We have examined the sensitivity of performance to these parameters. In general, for a wide range of values of these parameters, the performance is relatively stable and near optimal. Specifically, the performance is near optimal as long as µs and µC are sufficiently large (µs ≥ 1000, µC ≥ 20,000), and the interpolation parameter δ is between 0.4 and 0.9. 6 Related Work General text summarization, including single document summarization (Luhn, 1958; Goldstein et al., 1999) and multi-document summarization (Kraaij et al., 2001; Radev et al., 2003), has been well studied; our work is under the framework of extractive summarization (Luhn, 1958; McKeown and Radev, 1995; Goldstein et al., 1999; Kraaij et al., 2001), but our problem formulation differs from any existing formulation of the summarization problem. It differs from regular single-document summarization because we utilize extra information (i.e., citation contexts) to summarize the impact of a paper. It also differs from regular multi-document summarization because the roles of original documents and citation contexts are not equivalent. Specifically, citation contexts serve as an indicator of the impact of the paper, but the summary is generated by extracting the sentences from the original paper. Technical paper summarization has also been studied (Paice, 1981; Paice and Jones, 1993; Saggion and Lapalme, 2002; Teufel and Moens, 2002), but the previous work did not explore citation context to emphasize the impact of papers. Citation context has been explored in several studies (Nakov et al., 2004; Ritchie et al., 2006; Schwartz et al., 2007; Siddharthan and Teufel, 2007). However, none of the previous studies has used citation context in the same way as we did, though the potential of directly using citation sentences (called citances) to summarize a paper was pointed out in (Nakov et al., 2004). Recently, people have explored various types of auxiliary knowledge, such as hyperlinks (Delort et al., 2003) and clickthrough data (Sun et al., 2005), to summarize a webpage; such work is related to ours as anchor text is similar to citation context, but it is based on a standard formulation of multi-document summarization and would contain only sentences from anchor text.
Our work is also related to work on using language models for retrieval (Ponte and Croft, 1998; Zhai and Lafferty, 2001b; Lafferty and Zhai, 2001) and summarization (Kraaij et al., 2001). However, we do not have an explicit query and constructing the impact model is a novel exploration. We also proposed new language models to capture the impact. 7 Conclusions We have defined and studied the novel problem of summarizing the impact of a research paper. We cast the problem as an impact sentence retrieval problem, and proposed new language models to model the impact of a paper based on both the original content of the paper and its citation contexts in a literature collection with consideration of citation autority and proximity. To evaluate impact summarization, we created a test set based on ACM SIGIR papers. Experiment results on this test set show that the proposed impact summarization methods are effective and outperform several baselines that represent the existing summarization methods. An important future work is to construct larger test sets (e.g., of biomedical literature) to facilitate evaluation of impact summarization. Our formulation of the impact summarization problem can be further improved by going beyond sentence retrieval and considering factors such as redundancy and coherency to better organize an impact summary. Finally, automatically generating impact-based summaries can not only help users access and digest influential research publications, but also facilitate other literature mining tasks such as milestone mining and research trend monitoring. It would be interesting to explore all these applications. Acknowledgments We are grateful to the anonymous reviewers for their constructive comments. This work is in part supported by a Yahoo! Graduate Fellowship and NSF grants under award numbers 0713571, 0347933, and 0428472. References Sergey Brin and Lawrence Page. 1998. The anatomy of a large-scale hypertextual web search engine. In Proceedings of the Seventh International Conference on World Wide Web, pages 107–117. 823 J.-Y. Delort, B. Bouchon-Meunier, and M. Rifqi. 2003. Enhanced web document summarization using hyperlinks. In Proceedings of the Fourteenth ACM Conference on Hypertext and Hypermedia, pages 208–215. C. Lee Giles, Kurt D. Bollacker, and Steve Lawrence. 1998. Citeseer: an automatic citation indexing system. In Proceedings of the Third ACM Conference on Digital Libraries, pages 89–98. Jade Goldstein, Mark Kantrowitz, Vibhu Mittal, and Jaime Carbonell. 1999. Summarizing text documents: sentence selection and evaluation metrics. In Proceedings of ACM SIGIR 99, pages 121–128. Nancy R. Kaplan and Michael L. Nelson. 2000. Determining the publication impact of a digital library. J. Am. Soc. Inf. Sci., 51(4):324–339. W. Kraaij, M. Spitters, and M. van der Heijden. 2001. Combining a mixture language model and naive bayes for multi-document summarisation. In Proceedings of the DUC2001 workshop. John Lafferty and Chengxiang Zhai. 2001. Document language models, query models, and risk minimization for information retrieval. In Proceedings of ACM SIGIR 2001, pages 111–119. Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 71–78. H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159–165. D. 
MacKay and L. Peto. 1995. A hierarchical Dirichlet language model. Natural Language Engineering, 1(3):289–307. Kathleen McKeown and Dragomir R. Radev. 1995. Generating summaries of multiple news articles. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 74–82. P. Nakov, A. Schwartz, and M. Hearst. 2004. Citances: Citation sentences for semantic analysis of bioscience text. In Proceedings of ACM SIGIR’04 Workshop on Search and Discovery in Bioinformatics. Chris D. Paice and Paul A. Jones. 1993. The identification of important concepts in highly structured technical papers. In Proceedings of the 16th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 69–78. C. D. Paice. 1981. The automatic generation of literature abstracts: an approach based on the identification of self-indicating phrases. In Proceedings of the 3rd Annual ACM Conference on Research and Development in Information Retrieval, pages 172–191. Jay M. Ponte and W. Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 275–281. Dragomir R. Radev, Eduard Hovy, and Kathleen McKeown. 2002. Introduction to the special issue on summarization. Comput. Linguist., 28(4):399–408. Dragomir R. Radev, Simone Teufel, Horacio Saggion, Wai Lam, John Blitzer, Hong Qi, Arda Celebi, Danyu Liu, and Elliott Drabek. 2003. Evaluation challenges in large-scale document summarization: the mead project. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 375–382. A. Ritchie, S. Teufel, and S. Robertson. 2006. Creating a test collection for citation-based ir experiments. In Proceedings of the HLT-NAACL 2006, pages 391–398. S. Robertson and K. Sparck Jones. 1976. Relevance weighting of search terms. Journal of the American Society for Information Science, 27:129–146. Hpracop Saggion and Guy Lapalme. 2002. Generating indicative-informativesummaries with sumUM. Computational Linguistics, 28(4):497–526. A. S. Schwartz, A. Divoli, and M. A. Hearst. 2007. Multiple alignment of citation sentences with conditional random fields and posterior decoding. In Proceedings of the 2007 EMNLP-CoNLL, pages 847–857. A. Siddharthan and S. Teufel. 2007. Whose idea was this, and why does it matter? attributing scientific work to citations. In Proceedings of NAACL/HLT-07, pages 316–323. Jian-Tao Sun, Dou Shen, Hua-Jun Zeng, Qiang Yang, Yuchang Lu, and Zheng Chen. 2005. Web-page summarization using clickthrough data. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 194–201. Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Comput. Linguist., 28(4):409–445. ChengXiang Zhai and John Lafferty. 2001a. Modelbased feedback in the language modeling approach to information retrieval. In Proceedings of the Tenth International Conference on Information and Knowledge Management (CIKM 2001), pages 403–410. Chengxiang Zhai and John Lafferty. 2001b. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 334–342. 824
Proceedings of ACL-08: HLT, pages 825–833, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Can you summarize this? Identifying correlates of input difficulty for generic multi-document summarization Ani Nenkova University of Pennsylvania Philadelphia, PA 19104, USA [email protected] Annie Louis University of Pennsylvania Philadelphia, PA 19104, USA [email protected] Abstract Different summarization requirements could make the writing of a good summary more difficult, or easier. Summary length and the characteristics of the input are such constraints influencing the quality of a potential summary. In this paper we report the results of a quantitative analysis on data from large-scale evaluations of multi-document summarization, empirically confirming this hypothesis. We further show that features measuring the cohesiveness of the input are highly correlated with eventual summary quality and that it is possible to use these as features to predict the difficulty of new, unseen, summarization inputs. 1 Introduction In certain situations even the best automatic summarizers or professional writers can find it hard to write a good summary of a set of articles. If there is no clear topic shared across the input articles, or if they follow the development of the same event in time for a longer period, it could become difficult to decide what information is most representative and should be conveyed in a summary. Similarly, length requirements could pre-determine summary quality—a short outline of a story might be confusing and unclear but a page long discussion might give an excellent overview of the same issue. Even systems that perform well on average produce summaries of poor quality for some inputs. For this reason, understanding what aspects of the input make it difficult for summarization becomes an interesting and important issue that has not been addressed in the summarization community untill now. In information retrieval, for example, the variable system performance has been recognized as a research challenge and numerous studies on identifying query difficulty have been carried out (most recently (Cronen-Townsend et al., 2002; Yom-Tov et al., 2005; Carmel et al., 2006)). In this paper we present results supporting the hypotheses that input topicality cohesiveness and summary length are among the factors that determine summary quality regardless of the choice of summarization strategy (Section 2). The data used for the analyses comes from the annual Document Understanding Conference (DUC) in which various summarization approaches are evaluated on common data, with new test sets provided each year. In later sections we define a suite of features capturing aspects of the topicality cohesiveness of the input (Section 3) and relate these to system performance, identifying reliable correlates of input difficulty (Section 4). Finally, in Section 5, we demonstrate that the features can be used to build a classifier predicting summarization input difficulty with accuracy considerably above chance level. 2 Preliminary analysis and distinctions: DUC 2001 Generic multi-document summarization was featured as a task at the Document Understanding Conference (DUC) in four years, 2001 through 2004. In our study we use the DUC 2001 multi-document task submissions as development data for in-depth analysis and feature selection. There were 29 input sets and 12 automatic summarizers participating in the evaluation that year. 
Summaries of different 825 lengths were produced by each system: 50, 100, 200 and 400 words. Each summary was manually evaluated to determine the extent to which its content overlaped with that of a human model, giving a coverage score. The content comparison was performed on a subsentence level and was based on elementary discourse units in the model summary.1 The coverage scores are taken as an indicator of difficultly of the input: systems achieve low coverage for difficult sets and higher coverage for easy sets. Since we are interested in identifying characteristics of generally difficult inputs rather than in discovering what types of inputs might be difficult for one given system, we use the average system score per set as indicator of general difficulty. 2.1 Analysis of variance Before attempting to derive characteristics of inputs difficult for summarization, we first confirm that indeed expected performance is influenced by the input itself. We performed analysis of variance for DUC 2001 data, with automatic system coverage score as the dependent variable, to gain some insight into the factors related to summarization difficulty. The results of the ANOVA with input set, summarizer identity and summary length as factors, as well as the interaction between these, are shown in Table 1. As expected, summarizer identity is a significant factor: some summarization strategies/systems are more effective than others and produce summaries with higher coverage score. More interestingly, the input set and summary length factors are also highly significant and explain more of the variability in coverage scores than summarizer identity does, as indicated by the larger values of the F statistic. Length The average automatic summarizer coverage scores increase steadily as length requirements are relaxed, going up from 0.50 for 50-word summaries to 0.76 for 400-word summaries as shown in Table 2 (second row). The general trend we observe is that on average systems are better at producing summaries when more space is available. The dif1The routinely used tool for automatic evaluation ROUGE was adopted exactly because it was demonstrated it is highly correlated with the manual DUC coverage scores (Lin and Hovy, 2003a; Lin, 2004). Type 50 100 200 400 Human 1.00 1.17 1.38 1.29 Automatic 0.50 0.55 0.70 0.76 Baseline 0.41 0.46 0.52 0.57 Table 2: Average human, system and baseline coverage scores for different summary lengths of N words. N = 50, 100, 200, and 400. ferences are statistically significant2 only between 50-word and 200- and 400-word summaries and between 100-word and 400-word summaries. The fact that summary quality improves with increasing summary length has been observed in prior studies as well (Radev and Tam, 2003; Lin and Hovy, 2003b; Kolluru and Gotoh, 2005) but generally little attention has been paid to this fact in system development and no specific user studies are available to show what summary length might be most suitable for specific applications. In later editions of the DUC conference, only summaries of 100 words were produced, focusing development efforts on one of the more demanding length restrictions. The interaction between summary length and summarizer is small but significant (Table 1), with certain summarization strategies more successful at particular summary lengths than at others. Improved performance as measured by increase in coverage scores is observed for human summarizers as well (shown in the first row of Table 2). 
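An analysis of this kind could be set up roughly as follows; the data frame layout and column names are illustrative assumptions, not the authors' original analysis code.

```python
# Sketch of a three-factor ANOVA over per-summary coverage scores, assuming a
# long-format table with one row per (input set, system, length) combination.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

scores = pd.read_csv("duc2001_coverage.csv")  # assumed columns: input, summarizer, length, coverage

# All three factors are categorical; the pairwise interactions mirror Table 1.
model = smf.ols(
    "coverage ~ C(input) + C(summarizer) + C(length)"
    " + C(input):C(summarizer) + C(input):C(length) + C(summarizer):C(length)",
    data=scores,
).fit()

anova_table = sm.stats.anova_lm(model, typ=1)  # sums of squares, F statistics, p-values
print(anova_table)
```

Larger F statistics for the input and length factors than for summarizer identity would reproduce the pattern reported in Table 1.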
Even the baseline systems (first n words of the most recent article in the input or first sentences from different input articles) show improvement when longer summaries are allowed (performance shown in the third row of the table). It is important to notice that the difference between automatic system and baseline performance increases as the summary length increases—the difference between systems and baselines coverage scores is around 0.1 for the shorter 50- and 100-word summaries but 0.2 for the longer summaries. This fact has favorable implications for practical system developments because it indicates that in applications where somewhat longer summaries are appropriate, automatically produced summaries will be much more informative than a baseline summary. 2One-sided t-test, 95% level of significance. 826 Factor DF Sum of squares Expected mean squares F stat Pr(> F) input 28 150.702 5.382 59.4227 0 summarizer 11 34.316 3.120 34.4429 0 length 3 16.082 5.361 59.1852 0 input:summarizer 306 65.492 0.214 2.3630 0 input:length 84 36.276 0.432 4.7680 0 summarizer:length 33 6.810 0.206 2.2784 0 Table 1: Analysis of variance for coverage scores of automatic systems with input, summarizer, and length as factors. Input The input set itself is a highly significant factor that influences the coverage scores that systems obtain: some inputs are handled by the systems better than others. Moreover, the input interacts both with the summarizers and the summary length. This is an important finding for several reasons. First, in system evaluations such as DUC the inputs for summarization are manually selected by annotators. There is no specific attempt to ensure that the inputs across different years have on average the same difficulty. Simply assuming this to be the case could be misleading: it is possible in a given year to have “easier” input test set compared to a previous year. Then system performance across years cannot be meaningfully compared, and higher system scores would not be indicative of system improvement between the evaluations. Second, in summarization applications there is some control over the input for summarization. For example, related documents that need to summarized could be split into smaller subsets that are more amenable to summarization or routed to an appropriate summarization system than can handle this kind of input using a different strategy, as done for instance in (McKeown et al., 2002). Because of these important implications we investigate input characteristics and define various features distinguishing easy inputs from difficult ones. 2.2 Difficulty for people and machines Before proceeding to the analysis of input difficulty in multi-document summarization, it is worth mentioning that our study is primarily motivated by system development needs and consequently the focus is on finding out what inputs are easy or difficult for automatic systems. Different factors might make summarization difficult for people. In order to see to what extent the notion of summarization input difsummary length correlation 50 0.50 100 0.57* 200 0.77** 400 0.70** Table 3: Pearson correlation between average human and system coverage scores on the DUC 2001 dataset. Significance levels: *p < 0.05 and **p < 0.00001. ficulty is shared between machines and people, we computed the correlation between the average system and average human coverage score at a given summary length for all DUC 2001 test sets (shown in Table 3). 
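The computation reduces to a per-length Pearson correlation; a minimal sketch, with hypothetical containers of per-set average scores, is shown below.

```python
# Sketch of the per-length correlation between average human and average system
# coverage; the dictionary format is an assumption, not the authors' data layout.
import numpy as np
from scipy.stats import pearsonr

def length_correlation(human_by_set, system_by_set):
    """human_by_set, system_by_set: dicts mapping input-set id -> average coverage."""
    sets = sorted(set(human_by_set) & set(system_by_set))
    human = np.array([human_by_set[s] for s in sets])
    system = np.array([system_by_set[s] for s in sets])
    r, p_value = pearsonr(human, system)
    return r, p_value

# Calling this once per summary length (50, 100, 200, 400 words) yields
# figures in the spirit of Table 3.
```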
The correlation is highest for 200-word summaries, 0.77, which is also highly significant. For shorter summaries the correlation between human and system performance is not significant. In the remaining part of the paper we deal exclusively with difficulty as defined by system performance, which differs from difficulty for people summarizing the same material as evidenced by the correlations in Table 3. We do not attempt to draw conclusions about any cognitively relevant factors involved in summarizing. 2.3 Type of summary and difficulty In DUC 2001, annotators prepared test sets from five possible predefined input categories:3. Single event (3 sets) Documents describing a single event over a timeline (e.g. The Exxon Valdez oil spill). 3Participants in the evaluation were aware of the different categories of input and indeed some groups developed systems that handled different types of input employing different strategies (McKeown et al., 2001). In later years, the idea of multistrategy summarization has been further explored by (Lacatusu et al., 2006) 827 Subject (6 sets) Documents discussing a single topic (e.g. Mad cow disease) Biographical (2 sets) All documents in the input provide information about the same person (e.g. Elizabeth Taylor) Multiple distinct events (12 sets) The documents discuss different events of the same type (e.g. different occasions of police misconduct). Opinion (6 sets) Each document describes a different perspective to a common topic (e.g. views of the senate, congress, public, lawyers etc on the decision by the senate to count illegal aliens in the 1990 census). Figure 1 shows the average system coverage score for the different input types. The more topically cohesive input types such as biographical, single event and subject, which are more focused on a single entity or news item and narrower in scope, are easier for systems. The average system coverage score for them is higher than for the non-cohesive sets such as multiple distinct events and opinion sets, regardless of summary length. The difference is even more apparently clear when the scores are plotted after grouping input types into cohesive (biographical, single event and subject) and non-cohesive (multiple events and opinion). Such grouping also gives the necessary power to perform statistical test for significance, confirming the difference in coverage scores for the two groups. This is not surprising: a summary of documents describing multiple distinct events of the same type is likely to require higher degree of generalization and abstraction. Summarizing opinions would in addition be highly subjective. A summary of a cohesive set meanwhile would contain facts directly from the input and it would be easier to determine which information is important. The example human summaries for set D32 (single event) and set D19 (opinions) shown below give an idea of the potential difficulties automatic summarizers have to deal with. set D32 On 24 March 1989, the oil tanker Exxon Valdez ran aground on a reef near Valdez, Alaska, spilling 8.4 million gallons of crude oil into Prince William Sound. In two days, the oil spread over 100 miles with a heavy toll on wildlife. Cleanup proceeded at a slow pace, and a plan for cleaning 364 miles of Alaskan coastline was released. In June, the tanker was refloated. By early 1990, only 5 to 9 percent of spilled oil was recovered. A federal jury indicted Exxon on five criminal charges and the Valdez skipper was guilty of negligent discharge of oil. 
set D19 Congress is debating whether or not to count illegal aliens in the 1990 census. Congressional House seats are apportioned to the states and huge sums of federal money are allocated based on census population. California, with an estimated half of all illegal aliens, will be greatly affected. Those arguing for inclusion say that the Constitution does not mention “citizens”, but rather, instructs that House apportionment be based on the “whole number of persons” residing in the various states. Those opposed say that the framers were unaware of this issue. “Illegal aliens” did not exist in the U.S. until restrictive immigration laws were passed in 1875. The manual set-type labels give an intuitive idea of what factors might be at play but it is desirable to devise more specific measures to predict difficulty. Do such measures exist? Is there a way to automatically distinguish cohesive (easy) from non-cohesive (difficult) sets? In the next section we define a number of features that aim to capture the cohesiveness of an input set and show that some of them are indeed significantly related to set difficulty. 3 Features We implemented 14 features for our analysis of input set difficulty. The working hypothesis is that cohesive sets with clear topics are easier to summarize and the features we define are designed to capture aspects of input cohesiveness. Number of sentences in the input, calculated over all articles in the input set. Shorter inputs should be easier as there will be less information loss between the summary and the original material. Vocabulary size of the input set, equal to the number of unique words in the input. Smaller vocabularies would be characteristic of easier sets. Percentage of words used only once in the input. The rationale behind this feature is that cohesive input sets contain news articles dealing with a clearly defined topic, so words will be reused across documents. Sets that cover disparate events and opinions are likely to contain more words that appear in the input only once. Type-token ratio is a measure of the lexical variation in an input set and is equal to the input vocabulary size divided by the number of words in the 828 Figure 1: Average system coverage scores for summaries in a category input. A high type-token ratio indicates there is little (lexical) repetition in the input, a possible side-effect of non-cohesiveness. Entropy of the input set. Let X be a discrete random variable taking values from the finite set V = {w1, ..., wn} where V is the vocabulary of the input set and wi are the words that appear in the input. The probability distribution p(w) = Pr(X = w) can be easily calculated using frequency counts from the input. The entropy of the input set is equal to the entropy of X: H(X) = − i=n X i=1 p(wi) log2 p(wi) (1) Average, minimum and maximum cosine overlap between the news articles in the input. Repetition in the input is often exploited as an indicator of importance by different summarization approaches (Luhn, 1958; Barzilay et al., 1999; Radev et al., 2004; Nenkova et al., 2006). The more similar the different documents in the input are to each other, the more likely there is repetition across documents at various granularities. Cosine similarity between the document vector representations is probably the easiest and most commonly used among the various similarity measures. 
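Before turning to the weighting details, here is a minimal sketch of the simpler size and diversity statistics above, including the entropy of Equation 1; whitespace tokenization and period-based sentence splitting are simplifying assumptions rather than the authors' preprocessing.

```python
# Minimal sketch of the basic input-set statistics described above (Equation 1
# for entropy); tokenization and sentence splitting here are crude assumptions.
import math
from collections import Counter

def input_statistics(documents):
    """documents: list of article strings forming one summarization input."""
    sentences = [s for doc in documents for s in doc.split(".") if s.strip()]
    tokens = [w.lower() for doc in documents for w in doc.split()]
    counts = Counter(tokens)
    n_tokens = len(tokens)

    vocabulary_size = len(counts)
    pct_once = sum(1 for c in counts.values() if c == 1) / vocabulary_size
    type_token_ratio = vocabulary_size / n_tokens

    # Entropy of the input word distribution (Equation 1).
    entropy = -sum((c / n_tokens) * math.log2(c / n_tokens) for c in counts.values())

    return {
        "num_sentences": len(sentences),
        "vocabulary_size": vocabulary_size,
        "pct_words_used_once": pct_once,
        "type_token_ratio": type_token_ratio,
        "entropy": entropy,
    }
```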
We use tf*idf weights in the vector representations, with term frequency (tf) normalized by the total number of words in the document in order to remove bias resulting from high frequencies by virtue of higher document length alone. The cosine similarity between two (document representation) vectors v1 and v2 is given by cosθ = v1.v2 ||v1||||v2||. A value of 0 indicates that the vectors are orthogonal and dissimilar, a value of 1 indicates perfectly similar documents in terms of the words contained in them. To compute the cosine overlap features, we find the pairwise cosine similarity between each two documents in an input set and compute their average. The minimum and maximum overlap features are also computed as an indication of the overlap bounds. We expect cohesive inputs to be composed of similar documents, hence the cosine overlaps in these sets of documents must be higher than those in non-cohesive inputs. KL divergence Another measure of relatedness of the documents comprising an input set is the difference in word distributions in the input compared to the word distribution in a large collection of diverse texts. If the input is found to be largely different from a generic collection, it is plausible to assume that the input is not a random collection of articles but rather is defined by a clear topic discussed within and across the articles. It is reasonable to expect that the higher the divergence is, the easier it is to define what is important in the article and hence the easier it is to produce a good summary. For computing the distribution of words in a general background corpus, we used all the inputs sets from DUC years 2001 to 2006. The divergence measure we used is the Kullback Leibler divergence, or 829 relative entropy, between the input (I) and collection language models. Let pinp(w) be the probability of the word w in the input and pcoll(w) be the probability of the word occurring in the large background collection. Then the relative entropy between the input and the collection is given by KL divergence = X w∈I pinp(w) log2 pinp(w) pcoll(w) (2) Low KL divergence from a random background collection may be characteristic of highly noncohesive inputs consisting of unrelated documents. Number of topic signature terms for the input set. The idea of topic signature terms was introduced by Lin and Hovy (Lin and Hovy, 2000) in the context of single document summarization, and was later used in several multi-document summarization systems (Conroy et al., 2006; Lacatusu et al., 2004; Gupta et al., 2007). Lin and Hovy’s idea was to automatically identify words that are descriptive for a cluster of documents on the same topic, such as the input to a multidocument summarizer. We will call this cluster T. Since the goal is to find descriptive terms for the cluster, a comparison collection of documents not on the topic is also necessary (we will call this background collection NT). Given T and NT, the likelihood ratio statistic (Dunning, 1994) is used to identify the topic signature terms. The probabilistic model of the data allows for statistical inference in order to decide which terms t are associated with T more strongly than with NT than one would expect by chance. More specifically, there are two possibilities for the distribution of a term t: either it is very indicative of the topic of cluster T, and appears more often in T than in documents from NT, or the term t is not topical and appears with equal frequency across both T and NT. 
These two alternatives can be formally written as the following hypotheses: H1: P(t|T) = P(t|NT) = p (t is not a descriptive term for the input) H2: P(t|T) = p1 and P(t|NT) = p2 and p1 > p2 (t is a descriptive term) In order to compute the likelihood of each hypothesis given the collection of the background documents and the topic cluster, we view them as a sequence of words wi: w1w2 . . . wN. The occurrence of a given word t, wi = t, can thus be viewed a Bernoulli trial with probability p of success, with success occurring when wi = t and failure otherwise. The probability of observing the term t appearing k times in N trials is given by the binomial distribution b(k, N, p) = N k ! pk(1 −p)N−k (3) We can now compute λ = Likelihood of the data given H1 Likelihood of the data given H2 (4) which is equal to λ = b(ct, N, p) b(cT , NT , p1) ∗b(cNT , NNT , p2) (5) The maximum likelihood estimates for the probabilities can be computed directly. p = ct N , where ct is equal to the number of times term t appeared in the entire corpus T+NT, and N is the number of words in the entire corpus. Similarly, p1 = cT NT , where cT is the number of times term t occurred in T and NT is the number of all words in T. p2 = cNT NNT , where cNT is the number of times term t occurred in NT and NNT is the total number of words in NT. −2logλ has a well-know distribution: χ2. Bigger values of −2logλ indicate that the likelihood of the data under H2 is higher, and the χ2 distribution can be used to determine when it is significantly higher (−2logλ exceeding 10 gives a significance level of 0.001 and is the cut-off we used). For terms for which the computed −2logλ is higher than 10, we can infer that they occur more often with the topic T than in a general corpus NT, and we can dub them “topic signature terms”. Percentage of signature terms in vocabulary The number of signature terms gives the total count of topic signatures over all the documents in the input. However, the number of documents in an input set and the size of the individual documents across different sets are not the same. It is therefore possible that the mere count feature is biased to the length 830 and number of documents in the input set. To account for this, we add the percentage of topic words in the vocabulary as a feature. Average, minimum and maximum topic signature overlap between the documents in the input. Cosine similarity measures the overlap between two documents based on all the words appearing in them. A more refined document representation can be defined by assuming the document vectors contain only the topic signature words rather than all words. A high overlap of topic words across two documents is indicative of shared topicality. The average, minimum and maximum pairwise cosine overlap between the tf*idf weighted topic signature vectors of the two documents are used as features for predicting input cohesiveness. If the overlap is large, then the topic is similar across the two documents and hence their combination will yield a cohesive input. 4 Feature selection Table 4 shows the results from a one-sided t-test comparing the values of the various features for the easy and difficult input set classes. The comparisons are for summary length of 100 words because in later years only such summaries were evaluated. The binary easy/difficult classes were assigned based on the average system coverage score for the given set, with half of the sets assigned to each class. 
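The two most involved features entering this comparison, KL divergence from the background collection (Equation 2) and the log-likelihood-ratio topic signature test, might be computed roughly as follows; the add-one smoothing of the background model and the small-count offsets are guards against zero counts that we assume here, not the authors' exact implementation.

```python
# Sketch of the corpus-comparison features: KL divergence (Equation 2) and
# topic signature terms via the -2*log(lambda) > 10 likelihood-ratio cutoff.
import math
from collections import Counter

def kl_divergence(input_tokens, background_tokens):
    inp, bg = Counter(input_tokens), Counter(background_tokens)
    n_inp, n_bg = sum(inp.values()), sum(bg.values())
    v = len(set(inp) | set(bg))
    kl = 0.0
    for w, c in inp.items():
        p_inp = c / n_inp
        p_bg = (bg[w] + 1) / (n_bg + v)          # add-one smoothed background probability (assumption)
        kl += p_inp * math.log2(p_inp / p_bg)
    return kl

def _log_binomial(k, n, p):
    # Log-likelihood of k successes in n Bernoulli trials; the binomial
    # coefficient cancels in the ratio, and probabilities are clamped for safety.
    p = min(max(p, 1e-12), 1.0 - 1e-12)
    return k * math.log(p) + (n - k) * math.log(1.0 - p)

def topic_signatures(topic_tokens, background_tokens, cutoff=10.0):
    t_counts, b_counts = Counter(topic_tokens), Counter(background_tokens)
    n_t, n_b = sum(t_counts.values()), sum(b_counts.values())
    signatures = []
    for w, c_t in t_counts.items():
        c_b = b_counts[w]
        p = (c_t + c_b) / (n_t + n_b)            # H1: one shared probability
        p1, p2 = c_t / n_t, (c_b + 0.5) / n_b    # H2: topic-specific probabilities
        if p1 <= p2:
            continue
        log_lambda = (_log_binomial(c_t, n_t, p) + _log_binomial(c_b, n_b, p)
                      - _log_binomial(c_t, n_t, p1) - _log_binomial(c_b, n_b, p2))
        if -2.0 * log_lambda > cutoff:
            signatures.append(w)
    return signatures
```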
In addition to the t-tests we also calculated Pearson’s correlation (shown in Table 5) between the features and the average system coverage score for each set. In the correlation analysis the input sets are not classified into easy or difficult but rather the real-valued coverage scores are used directly. Overall, the features that were identified by the t-test as most descriptive of the differences between easy and difficult inputs were also the ones with higher correlations with real-valued coverage scores. Our expectations in defining the features are confirmed by the correlation results. For example, systems have low coverage scores for sets with high-entropy vocabularies, as indicated by the negative correlation of large absolute value (-0.4256). Sets with high entropy are those in which there is little repetition within and across different articles, and for which it is subsequently difficult to determine what is the most important content. On the other hand, sets characterized by bigger KL divergence are easier—there the distribution of words is skewed compared to a general collection of articles, with important topic words occurring more often. Easy-to-summarize sets are characterized by low entropy, small vocabulary, high average cosine and average topic signature overlaps, high KL divergence and a high percentage of the vocabulary consisting of topic signature terms.

feature                        t-stat    p-value
KL divergence*                 -2.4725   0.01
% of sig. terms in vocab*      -2.0956   0.02
average cosine overlap*        -2.1227   0.02
vocabulary size*                1.9378   0.03
set entropy*                    2.0288   0.03
average sig. term overlap*     -1.8803   0.04
max cosine overlap             -1.6968   0.05
max topic signature overlap    -1.6380   0.06
number of sentences             1.4780   0.08
min topic signature overlap    -0.9540   0.17
number of signature terms       0.8057   0.21
min cosine overlap             -0.2654   0.39
% of words used only once       0.2497   0.40
type-token ratio                0.2343   0.41
*Significant at a 95% confidence level (p < 0.05)
Table 4: Comparison of non-cohesive (average system coverage score < median average system score) vs cohesive sets for summary length of 100 words

5 Classification results We used the 192 sets from multi-document summarization DUC evaluations in 2002 (55 generic sets), 2003 (30 generic summary sets and 7 viewpoint sets) and 2004 (50 generic and 50 biography sets) to train and test a logistic regression classifier. The sets from all years were pooled together and evenly divided into easy and difficult inputs based on the average system coverage score for each set. Table 6 shows the results from 10-fold cross validation. SIG is a classifier based on the six features identified as significant in distinguishing easy from difficult inputs based on a t-test comparison (Table 4). SIG+yt has two additional features: the year and the type of summarization input (generic, viewpoint and biographical). ALL is a classifier based on all 14 features defined in the previous section, and
features accuracy P R F SIG 56.25% 0.553 0.600 0.576 SIG+yt 69.27% 0.696 0.674 0.684 ALL 61.45% 0.615 0.589 0.600 ALL+yt 65.10% 0.643 0.663 0.653 Table 6: Logistic regression classification results (accuracy, precision, recall and f-measure) for balanced data of 100-word summaries from DUC’02 through DUC’04. ALL+yt also includes the year and task features. Classification accuracy is considerably higher than the 50% random baseline. Using all features yields better accuracy (61%) than using solely the 6 significant features (accuracy of 56%). In both cases, adding the year and task leads to extra 3% net improvement. The best overall results are for the SIG+yt classifier with net improvement over the baseline equal to 20%. At the same time, it should be taken into consideration that the amount of training data for our experiments is small: a total of 192 sets. Despite this, the measures of input cohesiveness capture enough information to result in a classifier with above-baseline performance. 6 Conclusions We have addressed the question of what makes the writing of a summary for a multi-document input difficult. Summary length is a significant factor, with all summarizers (people, machines and baselines) performing better at longer summary lengths. An exploratory analysis of DUC 2001 indicated that systems produce better summaries for cohesive inputs dealing with a clear topic (single event, subject and biographical sets) while non-cohesive sets about multiple events and opposing opinions are consistently of lower quality. We defined a number of features aimed at capturing input cohesiveness, ranging from simple features such as input length and size to more sophisticated measures such as input set entropy, KL divergence from a background corpus and topic signature terms based on log-likelihood ratio. Generally, easy to summarize sets are characterized by low entropy, small vocabulary, high average cosine and average topic signature overlaps, high KL divergence and a high percentage of the vocabulary consists of topic signature terms. Experiments with a logistic regression classifier based on the features further confirms that input cohesiveness is predictive of the difficulty it will pose to automatic summarizers. Several important notes can be made. First, it is important to develop strategies that can better handle non-cohesive inputs, reducing fluctuations in system performance. Most current systems are developed with the expectation they can handle any input but this is evidently not the case and more attention should be paid to the issue. Second, the interpretations of year to year evaluations can be affected. As demonstrated, the properties of the input have a considerable influence on summarization quality. If special care is not taken to ensure that the difficulty of inputs in different evaluations is kept more or less the same, results from the evaluations are not comparable and we cannot make general claims about progress and system improvements between evaluations. Finally, the presented results are clearly just a beginning in understanding of summarization difficulty. A more complete characterization of summarization input will be necessary in the future. References Regina Barzilay, Kathleen McKeown, and Michael Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. David Carmel, Elad Yom-Tov, Adam Darlow, and Dan 832 Pelleg. 2006. What makes a query difficult? 
In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 390–397. John Conroy, Judith Schlesinger, and Dianne O’Leary. 2006. Topic-focused multi-document summarization using an approximate oracle score. In Proceedings of ACL, companion volume. Steve Cronen-Townsend, Yun Zhou, and W. Bruce Croft. 2002. Predicting query performance. In Proceedings of the 25th Annual International ACM SIGIR conference on Research and Development in Information Retrieval (SIGIR 2002), pages 299–306. Ted Dunning. 1994. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61–74. Surabhi Gupta, Ani Nenkova, and Dan Jurafsky. 2007. Measuring importance and query relevance in topicfocused multi-document summarization. In ACL’07, companion volume. BalaKrishna Kolluru and Yoshihiko Gotoh. 2005. On the subjectivity of human authored short summaries. In ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization. Finley Lacatusu, Andrew Hickl, Sanda Harabagiu, and Luke Nezda. 2004. Lite gistexter at duc2004. In Proceedings of the 4th Document Understanding Conference (DUC’04). F. Lacatusu, A. Hickl, K. Roberts, Y. Shi, J. Bensley, B. Rink, P. Wang, and L. Taylor. 2006. Lcc’s gistexter at duc 2006: Multi-strategy multi-document summarization. In DUC’06. Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th conference on Computational linguistics, pages 495–501. Chin-Yew Lin and Eduard Hovy. 2003a. Automatic evaluation of summaries using n-gram co-occurance statistics. In Proceedings of HLT-NAACL 2003. Chin-Yew Lin and Eduard Hovy. 2003b. The potential and limitations of automatic sentence extraction for summarization. In Proceedings of the HLT-NAACL 03 on Text summarization workshop, pages 73–80. Chin-Yew Lin. 2004. ROUGE: a package for automatic evaluation of summaries. In ACL Text Summarization Workshop. H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159–165. K. McKeown, R. Barzilay, D. Evans, V. Hatzivassiloglou, B. Schiffman, and S. Teufel. 2001. Columbia multidocument summarization: Approach and evaluation. In DUC’01. Kathleen McKeown, Regina Barzilay, David Evans, Vasleios Hatzivassiloglou, Judith Klavans, Ani Nenkova, Carl Sable, Barry Schiffman, and Sergey Sigelman. 2002. Tracking and summarizing news on a daily basis with columbia’s newsblaster. In Proceedings of the 2nd Human Language Technologies Conference HLT-02. Ani Nenkova, Lucy Vanderwende, and Kathleen McKeown. 2006. A compositional context sensitive multidocument summarizer: exploring the factors that influence summarization. In Proceedings of SIGIR. Dragomir Radev and Daniel Tam. 2003. Singledocument and multi-document summary evaluation via relative utility. In Poster session, International Conference on Information and Knowledge Management (CIKM’03). Dragomir Radev, Hongyan Jing, Malgorzata Sty, and Daniel Tam. 2004. Centroid-based summarization of multiple documents. Information Processing and Management, 40:919–938. Elad Yom-Tov, Shai Fine, David Carmel, and Adam Darlow. 2005. Learning to estimate query difficulty: including applications to missing content detection and distributed information retrieval. 
In SIGIR ’05: Proceedings of the 28th annual international ACM SIGIR conference on Research and development in information retrieval, pages 512–519.
Proceedings of ACL-08: HLT, pages 834–842, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics You talking to me? A Corpus and Algorithm for Conversation Disentanglement Micha Elsner and Eugene Charniak Brown Laboratory for Linguistic Information Processing (BLLIP) Brown University Providence, RI 02912 {melsner,ec}@@cs.brown.edu Abstract When multiple conversations occur simultaneously, a listener must decide which conversation each utterance is part of in order to interpret and respond to it appropriately. We refer to this task as disentanglement. We present a corpus of Internet Relay Chat (IRC) dialogue in which the various conversations have been manually disentangled, and evaluate annotator reliability. This is, to our knowledge, the first such corpus for internet chat. We propose a graph-theoretic model for disentanglement, using discourse-based features which have not been previously applied to this task. The model’s predicted disentanglements are highly correlated with manual annotations. 1 Motivation Simultaneous conversations seem to arise naturally in both informal social interactions and multi-party typed chat. Aoki et al. (2006)’s study of voice conversations among 8-10 people found an average of 1.76 conversations (floors) active at a time, and a maximum of four. In our chat corpus, the average is even higher, at 2.75. The typical conversation, therefore, is one which is interrupted– frequently. Disentanglement is the clustering task of dividing a transcript into a set of distinct conversations. It is an essential prerequisite for any kind of higher-level dialogue analysis: for instance, consider the multiparty exchange in figure 1. Contextually, it is clear that this corresponds to two conversations, and Felicia’s1 response “excel1Real user nicknames are replaced with randomly selected (Chanel) Felicia: google works :) (Gale) Arlie: you guys have never worked in a factory before have you (Gale) Arlie: there’s some real unethical stuff that goes on (Regine) hands Chanel a trophy (Arlie) Gale, of course ... thats how they make money (Gale) and people lose limbs or get killed (Felicia) excellent Figure 1: Some (abridged) conversation from our corpus. lent” is intended for Chanel and Regine. A straightforward reading of the transcript, however, might interpret it as a response to Gale’s statement immediately preceding. Humans are adept at disentanglement, even in complicated environments like crowded cocktail parties or chat rooms; in order to perform this task, they must maintain a complex mental representation of the ongoing discourse. Moreover, they adapt their utterances to some degree to make the task easier (O’Neill and Martin, 2003), which suggests that disentanglement is in some sense a “difficult” discourse task. Disentanglement has two practical applications. One is the analysis of pre-recorded transcripts in order to extract some kind of information, such as question-answer pairs or summaries. These tasks should probably take as as input each separate conversation, rather than the entire transcript. Another identifiers for ethical reasons. 834 application is as part of a user-interface system for active participants in the chat, in which users target a conversation of interest which is then highlighted for them. Aoki et al. (2003) created such a system for speech, which users generally preferred to a conventional system– when the disentanglement worked! 
Previous attempts to solve the problem (Aoki et al., 2006; Aoki et al., 2003; Camtepe et al., 2005; Acar et al., 2005) have several flaws. They cluster speakers, not utterances, and so fail when speakers move from one conversation to another. Their features are mostly time gaps between one utterance and another, without effective use of utterance content. Moreover, there is no framework for a principled comparison of results: there are no reliable annotation schemes, no standard corpora, and no agreed-upon metrics. We attempt to remedy these problems. We present a new corpus of manually annotated chat room data and evaluate annotator reliability. We give a set of metrics describing structural similarity both locally and globally. We propose a model which uses discourse structure and utterance contents in addition to time gaps. It partitions a chat transcript into distinct conversations, and its output is highly correlated with human annotations. 2 Related Work Two threads of research are direct attempts to solve the disentanglement problem: Aoki et al. (2006), Aoki et al. (2003) for speech and Camtepe et al. (2005), Acar et al. (2005) for chat. We discuss their approaches below. However, we should emphasize that we cannot compare our results directly with theirs, because none of these studies publish results on human-annotated data. Although Aoki et al. (2006) construct an annotated speech corpus, they give no results for model performance, only user satisfaction with their conversational system. Camtepe et al. (2005) and Acar et al. (2005) do give performance results, but only on synthetic data. All of the previous approaches treat the problem as one of clustering speakers, rather than utterances. That is, they assume that during the window over which the system operates, a particular speaker is engaging in only one conversation. Camtepe et al. (2005) assume this is true throughout the entire transcript; real speakers, by contrast, often participate in many conversations, sequentially or sometimes even simultaneously. Aoki et al. (2003) analyze each thirty-second segment of the transcript separately. This makes the single-conversation restriction somewhat less severe, but has the disadvantage of ignoring all events which occur outside the segment. Acar et al. (2005) attempt to deal with this problem by using a fuzzy algorithm to cluster speakers; this assigns each speaker a distribution over conversations rather than a hard assignment. However, the algorithm still deals with speakers rather than utterances, and cannot determine which conversation any particular utterance is part of. Another problem with these approaches is the information used for clustering. Aoki et al. (2003) and Camtepe et al. (2005) detect the arrival times of messages, and use them to construct an affinity graph between participants by detecting turn-taking behavior among pairs of speakers. (Turn-taking is typified by short pauses between utterances; speakers aim neither to interrupt nor leave long gaps.) Aoki et al. (2006) find that turn-taking on its own is inadequate. They motivate a richer feature set, which, however, does not yet appear to be implemented. Acar et al. (2005) adds word repetition to their feature set. However, their approach deals with all word repetitions on an equal basis, and so degrades quickly in the presence of noise words (their term for words which shared across conversations) to almost complete failure when only 1/2 of the words are shared. 
To motivate our own approach, we examine some linguistic studies of discourse, especially analysis of multi-party conversation. O’Neill and Martin (2003) point out several ways in which multi-party text chat differs from typical two-party conversation. One key difference is the frequency with which participants mention each others’ names. They hypothesize that mentioning is a strategy which participants use to make disentanglement easier, compensating for the lack of cues normally present in face-to-face dialogue. Mentions (such as Gale’s comments to Arlie in figure 1) are very common in our corpus, occurring in 36% of comments, and provide a useful feature. Another key difference is that participants may create a new conversation (floor) at any time, a process which Sacks et al. (1974) calls schisming. Dur835 ing a schism, a new conversation is formed, not necessarily because of a shift in the topic, but because certain participants have refocused their attention onto each other, and away from whoever held the floor in the parent conversation. Despite these differences, there are still strong similarities between chat and other conversations such as meetings. Our feature set incorporates information which has proven useful in meeting segmentation (Galley et al., 2003) and the task of detecting addressees of a specific utterance in a meeting (Jovanovic et al., 2006). These include word repetitions, utterance topic, and cue words which can indicate the bounds of a segment. 3 Dataset Our dataset is recorded from the IRC (Internet Relay Chat) channel ##LINUX at freenode.net, using the freely-available gaim client. ##LINUX is an unofficial tech support line for the Linux operating system, selected because it is one of the most active chat rooms on freenode, leading to many simultaneous conversations, and because its content is typically inoffensive. Although it is notionally intended only for tech support, it includes large amounts of social chat as well, such as the conversation about factory work in the example above (figure 1). The entire dataset contains 52:18 hours of chat, but we devote most of our attention to three annotated sections: development (706 utterances; 2:06 hr) and test (800 utts.; 1:39 hr) plus a short pilot section on which we tested our annotation system (359 utts.; 0:58 hr). 3.1 Annotation Our annotators were seven university students with at least some familiarity with the Linux OS, although in some cases very slight. Annotation of the test dataset typically took them about two hours. In all, we produced six annotations of the test set2. Our annotation scheme marks each utterance as part of a single conversation. Annotators are instructed to create as many, or as few conversations as they need to describe the data. Our instructions state that a conversation can be between any number of 2One additional annotation was discarded because the annotator misunderstood the task. people, and that, “We mean conversation in the typical sense: a discussion in which the participants are all reacting and paying attention to one another. . . it should be clear that the comments inside a conversation fit together.” The annotation system itself is a simple Java program with a graphical interface, intended to appear somewhat similar to a typical chat client. Each speaker’s name is displayed in a different color, and the system displays the elapsed time between comments, marking especially long pauses in red. 
Annotators group sentences into conversations by clicking and dragging them onto each other. 3.2 Metrics Before discussing the annotations themselves, we will describe the metrics we use to compare different annotations; these measure both how much our annotators agree with each other, and how well our model and various baselines perform. Comparing clusterings with different numbers of clusters is a non-trivial task, and metrics for agreement on supervised classification, such as the κ statistic, are not applicable. To measure global similarity between annotations, we use one-to-one accuracy. This measure describes how well we can extract whole conversations intact, as required for summarization or information extraction. To compute it, we pair up conversations from the two annotations to maximize the total overlap3, then report the percentage of overlap found. If we intend to monitor or participate in the conversation as it occurs, we will care more about local judgements. The local agreement metric counts agreements and disagreements within a context k. We consider a particular utterance: the previous k utterances are each in either the same or a different conversation. The lock score between two annotators is their average agreement on these k same/different judgements, averaged over all utterances. For example, loc1 counts pairs of adjacent utterances for which two annotations agree. 836 Mean Max Min Conversations 81.33 128 50 Avg. Conv. Length 10.6 16.0 6.2 Avg. Conv. Density 2.75 2.92 2.53 Entropy 4.83 6.18 3.00 1-to-1 52.98 63.50 35.63 loc 3 81.09 86.53 74.75 M-to-1 (by entropy) 86.70 94.13 75.50 Table 1: Statistics on 6 annotations of 800 lines of chat transcript. Inter-annotator agreement metrics (below the line) are calculated between distinct pairs of annotations. 3.3 Discussion A statistical examination of our data (table 1) shows that that it is eminently suitable for disentanglement: the average number of conversations active at a time is 2.75. Our annotators have high agreement on the local metric (average of 81.1%). On the 1-to1 metric, they disagree more, with a mean overlap of 53.0% and a maximum of only 63.5%. This level of overlap does indicate a useful degree of reliability, which cannot be achieved with naive heuristics (see section 5). Thus measuring 1-to-1 overlap with our annotations is a reasonable evaluation for computational models. However, we feel that the major source of disagreement is one that can be remedied in future annotation schemes: the specificity of the individual annotations. To measure the level of detail in an annotation, we use the information-theoretic entropy of the random variable which indicates which conversation an utterance is in. This quantity is non-negative, increasing as the number of conversations grow and their size becomes more balanced. It reaches its maximum, 9.64 bits for this dataset, when each utterance is placed in a separate conversation. In our annotations, it ranges from 3.0 to 6.2. This large variation shows that some annotators are more specific than others, but does not indicate how much they agree on the general structure. To measure this, we introduce the many-to-one accuracy. This measurement is asymmetrical, and maps each of the conversations of the source annotation to the single con3This is an example of max-weight bipartite matching, and can be computed optimally using, eg, max-flow. The widely used greedy algorithm is a two-approximation, although we have not found large differences in practice. 
(Lai) need money (Astrid) suggest a paypal fund or similar (Lai) Azzie [sic; typo for Astrid?]: my shack guy here said paypal too but i have no local bank acct (Felicia) second’s Azzie’s suggestion (Gale) we should charge the noobs $1 per question to [Lai’s] paypal (Felicia) bingo! (Gale) we’d have the money in 2 days max (Azzie) Lai: hrm, Have you tried to set one up? (Arlie) the federal reserve system conspiracy is keeping you down man (Felicia) Gale: all ubuntu users .. pay up! (Gale) and susers pay double (Azzie) I certainly would make suse users pay. (Hildegard) triple. (Lai) Azzie: not since being offline (Felicia) it doesn’t need to be “in state” either Figure 2: A schism occurring in our corpus (abridged): not all annotators agree on where the thread about charging for answers to techical questions diverges from the one about setting up Paypal accounts. Either Gale’s or Azzie’s first comment seems to be the schism-inducing utterance. versation in the target with which it has the greatest overlap, then counts the total percentage of overlap. This is not a statistic to be optimized (indeed, optimization is trivial: simply make each utterance in the source into its own conversation), but it can give us some intuition about specificity. In particular, if one subdivides a coarse-grained annotation to make a more specific variant, the many-to-one accuracy from fine to coarse remains 1. When we map high-entropy annotations (fine) to lower ones (coarse), we find high many-to-one accuracy, with a mean of 86%, which implies that the more specific annotations have mostly the same large-scale boundaries as the coarser ones. By examining the local metric, we can see even more: local correlations are good, at an average of 81.1%. This means that, in the three-sentence window preceding each sentence, the annotators are of837 ten in agreement. If they recognize subdivisions of a large conversation, these subdivisions tend to be contiguous, not mingled together, which is why they have little impact on the local measure. We find reasons for the annotators’ disagreement about appropriate levels of detail in the linguistic literature. As mentioned, new conversations often break off from old ones in schisms. Aoki et al. (2006) discuss conversational features associated with schisming and the related process of affiliation, by which speakers attach themselves to a conversation. Schisms often branch off from asides or even normal comments (toss-outs) within an existing conversation. This means that there is no clear beginning to the new conversation– at the time when it begins, it is not clear that there are two separate floors, and this will not become clear until distinct sets of speakers and patterns of turn-taking are established. Speakers, meanwhile, take time to orient themselves to the new conversation. An example schism is shown in Figure 2. Our annotation scheme requires annotators to mark each utterance as part of a single conversation, and distinct conversations are not related in any way. If a schism occurs, the annotator is faced with two options: if it seems short, they may view it as a mere digression and label it as part of the parent conversation. If it seems to deserve a place of its own, they will have to separate it from the parent, but this severs the initial comment (an otherwise unremarkable aside) from its context. One or two of the annotators actually remarked that this made the task confusing. 
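Since the 1-to-1, many-to-one and loc k numbers above drive all of these comparisons, a rough sketch of how they can be computed from two label sequences follows; it assumes each annotation is simply a list of conversation labels, one per utterance, and is an illustration rather than the released evaluation code.

```python
# Sketch of the agreement measures described above; annotations are lists of
# conversation labels, one per utterance (an assumed representation).
import numpy as np
from scipy.optimize import linear_sum_assignment

def _overlap_matrix(a, b):
    rows, cols = sorted(set(a)), sorted(set(b))
    m = np.zeros((len(rows), len(cols)), dtype=int)
    for x, y in zip(a, b):
        m[rows.index(x), cols.index(y)] += 1
    return m

def one_to_one(a, b):
    m = _overlap_matrix(a, b)
    r, c = linear_sum_assignment(-m)           # max-weight bipartite matching of conversations
    return m[r, c].sum() / len(a)

def many_to_one(source, target):
    m = _overlap_matrix(source, target)
    return m.max(axis=1).sum() / len(source)   # best target conversation for each source conversation

def loc_k(a, b, k=3):
    scores = []
    for i in range(1, len(a)):
        window = range(max(0, i - k), i)
        agreements = [(a[i] == a[j]) == (b[i] == b[j]) for j in window]
        scores.append(sum(agreements) / len(agreements))
    return sum(scores) / len(scores)           # average same/different agreement per utterance
```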
Our annotators seem to be either “splitters” or “lumpers”– in other words, each annotator seems to aim for a consistent level of detail, but each one has their own idea of what this level should be. As a final observation about the dataset, we test the appropriateness of the assumption (used in previous work) that each speaker takes part in only one conversation. In our data, the average speaker takes part in about 3.3 conversations (the actual number varies for each annotator). The more talkative a speaker is, the more conversations they participate in, as shown by a plot of conversations versus utterances (Figure 3). The assumption is not very accurate, especially for speakers with more than 10 utterances. 0 10 20 30 40 50 60 Utterances 0 1 2 3 4 5 6 7 8 9 10 Threads Figure 3: Utterances versus conversations participated in per speaker on development data. 4 Model Our model for disentanglement fits into the general class of graph partitioning algorithms (Roth and Yih, 2004) which have been used for a variety of tasks in NLP, including the related task of meeting segmentation (Malioutov and Barzilay, 2006). These algorithms operate in two stages: first, a binary classifier marks each pair of items as alike or different, and second, a consistent partition is extracted.4 4.1 Classification We use a maximum-entropy classifier (Daum´e III, 2004) to decide whether a pair of utterances x and y are in same or different conversations. The most likely class is different, which occurs 57% of the time in development data. We describe the classifier’s performance in terms of raw accuracy (correct decisions / total), precision and recall of the same class, and F-score, the harmonic mean of precision and recall. Our classifier uses several types of features (table 2). The chat-specific features yield the highest accuracy and precision. Discourse and content-based features have poor accuracy on their own (worse than the baseline), since they work best on nearby pairs of utterances, and tend to fail on more distant pairs. Paired with the time gap feature, however, they boost accuracy somewhat and produce substantial gains in recall, encouraging the model to group related utterances together. The time gap, as discussed above, is the most widely used feature in previous work. We exam4Our first attempt at this task used a Bayesian generative model. However, we could not define a sharp enough posterior over new sentences, which made the model unstable and overly sensitive to its prior. 838 Chat-specific (Acc 73: Prec: 73 Rec: 61 F: 66) Time The time between x and y in seconds, bucketed logarithmically. Speaker x and y have the same speaker. Mention x mentions y (or vice versa), both mention the same name, either mentions any name. Discourse (Acc 52: Prec: 47 Rec: 77 F: 58) Cue words Either x or y uses a greeting (“hello” &c), an answer (“yes”, “no” &c), or thanks. Question Either asks a question (explicitly marked with “?”). Long Either is long (> 10 words). Content (Acc 50: Prec: 45 Rec: 74 F: 56) Repeat(i) The number of words shared between x and y which have unigram probability i, bucketed logarithmically. Tech Whether both x and y use technical jargon, neither do, or only one does. Combined (Acc 75: Prec: 73 Rec: 68 F: 71) Table 2: Feature functions with performance on development data. ine the distribution of pauses between utterances in the same conversation. Our choice of a logarithmic bucketing scheme is intended to capture two characteristics of the distribution (figure 4). 
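A few of the Table 2 features could be extracted along the following lines; the utterance representation, the nickname list and the bucketing bases are assumptions made for illustration, not part of the described system.

```python
# Sketch of some pairwise features in the spirit of Table 2.
import math

def pair_features(x, y, nicknames, unigram_prob):
    """x, y: dicts with 'time' (seconds), 'speaker', and 'tokens' (lowercased words)."""
    feats = {}

    gap = abs(y["time"] - x["time"])
    feats["time_bucket"] = int(math.log(gap + 1.0, 1.5))   # logarithmic bucketing of the pause (base assumed)

    feats["same_speaker"] = x["speaker"] == y["speaker"]

    x_mentions = set(x["tokens"]) & nicknames
    y_mentions = set(y["tokens"]) & nicknames
    feats["mentions_other"] = y["speaker"] in x_mentions or x["speaker"] in y_mentions
    feats["mentions_same_name"] = bool(x_mentions & y_mentions)
    feats["mentions_any_name"] = bool(x_mentions or y_mentions)

    # Word repetition, bucketed by unigram probability so that rare shared
    # words are counted separately from frequent ones.
    for w in set(x["tokens"]) & set(y["tokens"]):
        bucket = int(-math.log10(unigram_prob.get(w, 1e-6)))
        key = "repeat_bucket_%d" % bucket
        feats[key] = feats.get(key, 0) + 1

    return feats
```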
The curve has its maximum at 1-3 seconds, and pauses shorter than a second are less common. This reflects turntaking behavior among participants; participants in the same conversation prefer to wait for each others’ responses before speaking again. On the other hand, the curve is quite heavy-tailed to the right, leading us to bucket long pauses fairly coarsely. Our discourse-based features model some pair0 10 100 1000 0 20 40 seconds Frequency Figure 4: Distribution of pause length (log-scaled) between utterances in the same conversation. wise relationships: questions followed by answers, short comments reacting to longer ones, greetings at the beginning and thanks at the end. Word repetition is a key feature in nearly every model for segmentation or coherence, so it is no surprise that it is useful here. We bucket repeated words by their unigram probability5 (measured over the entire 52 hours of transcript). The bucketing scheme allows us to deal with “noise words” which are repeated coincidentally. The point of the repetition feature is of course to detect sentences with similar topics. We also find that sentences with technical content are more likely to be related than non-technical sentences. We label an utterance as technical if it contains a web address, a long string of digits, or a term present in a guide for novice Linux users 6 but not in a large news corpus (Graff, 1995)7. This is a light-weight way to capture one “semantic dimension” or cluster of related words, in a corpus which is not amenable to full LSA or similar techniques. LSA in text corpora yields a better relatedness measure than simple repetition (Foltz et al., 1998), but is ineffective in our corpus because of its wide variety of topics and lack of distinct document boundaries. Pairs of utterances which are widely separated in the discourse are unlikely to be directly related– even if they are part of the same conversation, the link between them is probably a long chain of intervening utterances. Thus, if we run our classifier on a pair of very distant utterances, we expect it to default to the majority class, which in this case will be different, and this will damage our performance in case the two are really part of the same conversation. To deal with this, we run our classifier only on utterances separated by 129 seconds or less. This is the last of our logarithmic buckets in which the classifier has a significant advantage over the majority baseline. For 99.9% of utterances in an ongoing conversation, the previous utterance in that conversation is within this gap, and so the system has a 5We discard the 50 most frequent words entirely. 6“Introduction to Linux: A Hands-on Guide”. Machtelt Garrels. Edition 1.25 from http://tldp.org/LDP/introlinux/html/intro-linux.html . 7Our data came from the LA times, 94-97– helpfully, it predates the current wide coverage of Linux in the mainstream press. 839 chance of correctly linking the two. On test data, the classifier has a mean accuracy of 68.2 (averaged over annotations). The mean precision of same conversation is 53.3 and the recall is 71.3, with mean F-score of 60. This error rate is high, but the partitioning procedure allows us to recover from some of the errors, since if nearby utterances are grouped correctly, the bad decisions will be outvoted by good ones. 4.2 Partitioning The next step in the process is to cluster the utterances. 
We wish to find a set of clusters for which the weighted accuracy of the classifier would be maximal; this is an example of correlation clustering (Bansal et al., 2004), which is NP-complete8. Finding an exact solution proves to be difficult; the problem has a quadratic number of variables (one for each pair of utterances) and a cubic number of triangle inequality constraints (three for each triplet). With 800 utterances in our test set, even solving the linear program with CPLEX (Ilog, Inc., 2003) is too expensive to be practical. Although there are a variety of approximations and local searches, we do not wish to investigate partitioning methods in this paper, so we simply use a greedy search. In this algorithm, we assign utterance j by examining all previous utterances i within the classifier’s window, and treating the classifier’s judgement pi,j −.5 as a vote for cluster(i). If the maximum vote is greater than 0, we set cluster(j) = argmaxc votec. Otherwise j is put in a new cluster. Greedy clustering makes at least a reasonable starting point for further efforts, since it is a natural online algorithm– it assigns each utterance as it arrives, without reference to the future. At any rate, we should not take our objective function too seriously. Although it is roughly correlated with performance, the high error rate of the classifier makes it unlikely that small changes in objective will mean much. In fact, the objective value of our output solutions are generally higher than those for true so8We set up the problem by taking the weight of edge i, j as the classifier’s decision pi,j −.5. Roth and Yih (2004) use log probabilities as weights. Bansal et al. (2004) propose the log odds ratio log(p/(1 −p)). We are unsure of the relative merit of these approaches. lutions, which implies we have already reached the limits of what our classifier can tell us. 5 Experiments We annotate the 800 line test transcript using our system. The annotation obtained has 63 conversations, with mean length 12.70. The average density of conversations is 2.9, and the entropy is 3.79. This places it within the bounds of our human annotations (see table 1), toward the more general end of the spectrum. As a standard of comparison for our system, we provide results for several baselines– trivial systems which any useful annotation should outperform. All different Each utterance is a separate conversation. All same The whole transcript is a single conversation. Blocks of k Each consecutive group of k utterances is a conversation. Pause of k Each pause of k seconds or more separates two conversations. Speaker Each speaker’s utterances are treated as a monologue. For each particular metric, we calculate the best baseline result among all of these. To find the best block size or pause length, we search over multiples of 5 between 5 and 300. This makes these baselines appear better than they really are, since their performance is optimized with respect to the test data. Our results, in table 3, are encouraging. On average, annotators agree more with each other than with any artificial annotation, and more with our model than with the baselines. For the 1-to-1 accuracy metric, we cannot claim much beyond these general results. The range of human variation is quite wide, and there are annotators who are closer to baselines than to any other human annotator. As explained earlier, this is because some human annotations are much more specific than others. 
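For reference, the greedy assignment rule described in Section 4.2 amounts to the following; the classifier is assumed to be available as a function returning p(same) for a pair of utterance indices, and this is a sketch rather than the released implementation.

```python
# Sketch of the greedy online partitioning rule: each new utterance votes over
# earlier clusters with p(same) - 0.5, and starts a new conversation if no
# cluster receives a positive total vote.
from collections import defaultdict

def greedy_disentangle(utterances, p_same, window=129.0):
    """utterances: time-ordered list of dicts with a 'time' field;
    p_same(i, j): probability that utterances i and j share a conversation."""
    cluster_of = {}
    next_cluster = 0
    for j, utt_j in enumerate(utterances):
        votes = defaultdict(float)
        for i in range(j - 1, -1, -1):
            if utt_j["time"] - utterances[i]["time"] > window:
                break                      # the classifier is only trusted within the window
            votes[cluster_of[i]] += p_same(i, j) - 0.5
        if votes and max(votes.values()) > 0:
            cluster_of[j] = max(votes, key=votes.get)
        else:
            cluster_of[j] = next_cluster   # start a new conversation
            next_cluster += 1
    return [cluster_of[j] for j in range(len(utterances))]
```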
For very specific annotations, the best baselines are short blocks or pauses. For the most general, marking all utterances the same does very well (although for all other annotations, it is extremely poor). 840 Other Annotators Model Best Baseline All Diff All Same Mean 1-to-1 52.98 40.62 34.73 (Blocks of 40) 10.16 20.93 Max 1-to-1 63.50 51.12 56.00 (Pause of 65) 16.00 53.50 Min 1-to-1 35.63 33.63 28.62 (Pause of 25) 6.25 7.13 Mean loc 3 81.09 72.75 62.16 (Speaker) 52.93 47.07 Max loc 3 86.53 75.16 69.05 (Speaker) 62.15 57.47 Min loc 3 74.75 70.47 54.37 (Speaker) 42.53 37.85 Table 3: Metric values between proposed annotations and human annotations. Model scores typically fall between inter-annotator agreement and baseline performance. For the local metric, the results are much clearer. There is no overlap in the ranges; for every test annotation, agreement is highest with other annotator, then our model and finally the baselines. The most competitive baseline is one conversation per speaker, which makes sense, since if a speaker makes two comments in a four-utterance window, they are very likely to be related. The name mention features are critical for our model’s performance. Without this feature, the classifier’s development F-score drops from 71 to 56. The disentanglement system’s test performance decreases proportionally; mean 1-to-1 falls to 36.08, and mean loc 3 to 63.00, essentially baseline performance. On the other hand, mentions are not sufficient; with only name mention and time gap features, mean 1-to-1 is 38.54 and loc 3 is 67.14. For some utterances, of course, name mentions provide the only reasonable clue to the correct decision, which is why humans mention names in the first place. But our system is probably overly dependent on them, since they are very reliable compared to our other features. 6 Future Work Although our annotators are reasonably reliable, it seems clear that they think of conversations as a hierarchy, with digressions and schisms. We are interested to see an annotation protocol which more closely follows human intuition and explicitly includes these kinds of relationships. We are also interested to see how well this feature set performs on speech data, as in (Aoki et al., 2003). Spoken conversation is more natural than text chat, but when participants are not face-to-face, disentanglement remains a problem. On the other hand, spoken dialogue contains new sources of information, such as prosody. Turn-taking behavior is also more distinct, which makes the task easier, but according to (Aoki et al., 2006), it is certainly not sufficient. Improving the current model will definitely require better features for the classifier. However, we also left the issue of partitioning nearly completely unexplored. If the classifier can indeed be improved, we expect the impact of search errors to increase. Another issue is that human users may prefer more or less specific annotations than our model provides. We have observed that we can produce lower or higher-entropy annotations by changing the classifier’s bias to label more edges same or different. But we do not yet know whether this corresponds with human judgements, or merely introduces errors. 7 Conclusion This work provides a corpus of annotated data for chat disentanglement, which, along with our proposed metrics, should allow future researchers to evaluate and compare their results quantitatively9. Our annotations are consistent with one another, especially with respect to local agreement. 
We show that features based on discourse patterns and the content of utterances are helpful in disentanglement. The model we present can outperform a variety of baselines. Acknowledgements Our thanks to Suman Karumuri, Steve Sloman, Matt Lease, David McClosky, 7 test annotators, 3 pilot annotators, 3 anonymous reviewers and the NSF PIRE grant. 9Code and data for this project will be available at http://cs.brown.edu/people/melsner. 841 References Evrim Acar, Seyit Ahmet Camtepe, Mukkai S. Krishnamoorthy, and Blent Yener. 2005. Modeling and multiway analysis of chatroom tensors. In Paul B. Kantor, Gheorghe Muresan, Fred Roberts, Daniel Dajun Zeng, Fei-Yue Wang, Hsinchun Chen, and Ralph C. Merkle, editors, ISI, volume 3495 of Lecture Notes in Computer Science, pages 256–268. Springer. Paul M. Aoki, Matthew Romaine, Margaret H. Szymanski, James D. Thornton, Daniel Wilson, and Allison Woodruff. 2003. The mad hatter’s cocktail party: a social mobile audio space supporting multiple simultaneous conversations. In CHI ’03: Proceedings of the SIGCHI conference on Human factors in computing systems, pages 425–432, New York, NY, USA. ACM Press. Paul M. Aoki, Margaret H. Szymanski, Luke D. Plurkowski, James D. Thornton, Allison Woodruff, and Weilie Yi. 2006. Where’s the “party” in “multiparty”?: analyzing the structure of small-group sociable talk. In CSCW ’06: Proceedings of the 2006 20th anniversary conference on Computer supported cooperative work, pages 393–402, New York, NY, USA. ACM Press. Nikhil Bansal, Avrim Blum, and Shuchi Chawla. 2004. Correlation clustering. Machine Learning, 56(13):89–113. Seyit Ahmet Camtepe, Mark K. Goldberg, Malik Magdon-Ismail, and Mukkai Krishnamoorty. 2005. Detecting conversing groups of chatters: a model, algorithms, and tests. In IADIS AC, pages 89–96. Hal Daum´e III. 2004. Notes on CG and LM-BFGS optimization of logistic regression. Paper available at http://pub.hal3.name#daume04cg-bfgs, implementation available at http://hal3.name/megam/, August. Peter Foltz, Walter Kintsch, and Thomas Landauer. 1998. The measurement of textual coherence with latent semantic analysis. Discourse Processes, 25(2&3):285–307. Michel Galley, Kathleen McKeown, Eric Fosler-Lussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 562–569, Morristown, NJ, USA. Association for Computational Linguistics. David Graff. 1995. North American News Text Corpus. Linguistic Data Consortium. LDC95T21. Ilog, Inc. 2003. Cplex solver. Natasa Jovanovic, Rieks op den Akker, and Anton Nijholt. 2006. Addressee identification in face-to-face meetings. In EACL. The Association for Computer Linguistics. Igor Malioutov and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In ACL. The Association for Computer Linguistics. Jacki O’Neill and David Martin. 2003. Text chat in action. In GROUP ’03: Proceedings of the 2003 international ACM SIGGROUP conference on Supporting group work, pages 40–49, New York, NY, USA. ACM Press. Dan Roth and Wen-tau Yih. 2004. A linear programming formulation for global inference in natural language tasks. In Proceedings of CoNLL-2004, pages 1–8. Boston, MA, USA. Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. A simplest systematics for the organization of turn-taking for conversation. Language, 50(4):696– 735. 842
Proceedings of ACL-08: HLT, pages 843–851, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics An Entity-Mention Model for Coreference Resolution with Inductive Logic Programming Xiaofeng Yang1 Jian Su1 Jun Lang2 Chew Lim Tan3 Ting Liu2 Sheng Li2 1Institute for Infocomm Research {xiaofengy,sujian}@i2r.a-star.edu.sg 2Harbin Institute of Technology {bill lang,tliu}@ir.hit.edu.cn [email protected] 3National University of Singapore, [email protected] Abstract The traditional mention-pair model for coreference resolution cannot capture information beyond mention pairs for both learning and testing. To deal with this problem, we present an expressive entity-mention model that performs coreference resolution at an entity level. The model adopts the Inductive Logic Programming (ILP) algorithm, which provides a relational way to organize different knowledge of entities and mentions. The solution can explicitly express relations between an entity and the contained mentions, and automatically learn first-order rules important for coreference decision. The evaluation on the ACE data set shows that the ILP based entity-mention model is effective for the coreference resolution task. 1 Introduction Coreference resolution is the process of linking multiple mentions that refer to the same entity. Most of previous work adopts the mention-pair model, which recasts coreference resolution to a binary classification problem of determining whether or not two mentions in a document are co-referring (e.g. Aone and Bennett (1995); McCarthy and Lehnert (1995); Soon et al. (2001); Ng and Cardie (2002)). Although having achieved reasonable success, the mention-pair model has a limitation that information beyond mention pairs is ignored for training and testing. As an individual mention usually lacks adequate descriptive information of the referred entity, it is often difficult to judge whether or not two mentions are talking about the same entity simply from the pair alone. An alternative learning model that can overcome this problem performs coreference resolution based on entity-mention pairs (Luo et al., 2004; Yang et al., 2004b). Compared with the traditional mentionpair counterpart, the entity-mention model aims to make coreference decision at an entity level. Classification is done to determine whether a mention is a referent of a partially found entity. A mention to be resolved (called active mention henceforth) is linked to an appropriate entity chain (if any), based on classification results. One problem that arises with the entity-mention model is how to represent the knowledge related to an entity. In a document, an entity may have more than one mention. It is impractical to enumerate all the mentions in an entity and record their information in a single feature vector, as it would make the feature space too large. Even worse, the number of mentions in an entity is not fixed, which would result in variant-length feature vectors and make trouble for normal machine learning algorithms. A solution seen in previous work (Luo et al., 2004; Culotta et al., 2007) is to design a set of first-order features summarizing the information of the mentions in an entity, for example, “whether the entity has any mention that is a name alias of the active mention?” or “whether most of the mentions in the entity have the same head word as the active mention?” These features, nevertheless, are designed in an ad-hoc manner and lack the capability of describing each individual mention in an entity. 
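Concretely, such hand-designed first-order features are fixed aggregations of a pairwise test over the mentions already in an entity; a minimal sketch of the two aggregation patterns quoted above (function names and the dictionary-based mention representation are illustrative, not taken from the cited systems):

```python
def any_x(entity_mentions, active_mention, test):
    """'Any-X' aggregation: the feature fires if at least one mention in the
    partial entity satisfies the pairwise test with the active mention."""
    return int(any(test(m, active_mention) for m in entity_mentions))

def most_x(entity_mentions, active_mention, test):
    """'Most-X' aggregation: the feature fires if more than half of the
    mentions in the partial entity satisfy the pairwise test."""
    hits = sum(1 for m in entity_mentions if test(m, active_mention))
    return int(2 * hits > len(entity_mentions))

def same_head(m1, m2):
    # Stand-in for the pairwise tests quoted above (name alias, head word, ...).
    return m1["head"].lower() == m2["head"].lower()

if __name__ == "__main__":
    entity = [{"head": "Powell"}, {"head": "he"}]   # partial entity found so far
    active = {"head": "Powell"}                      # mention being resolved
    print(any_x(entity, active, same_head))   # 1: some mention shares the head
    print(most_x(entity, active, same_head))  # 0: only one of two mentions does
```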
In this paper, we present a more expressive entity843 mention model for coreference resolution. The model employs Inductive Logic Programming (ILP) to represent the relational knowledge of an active mention, an entity, and the mentions in the entity. On top of this, a set of first-order rules is automatically learned, which can capture the information of each individual mention in an entity, as well as the global information of the entity, to make coreference decision. Hence, our model has a more powerful representation capability than the traditional mention-pair or entity-mention model. And our experimental results on the ACE data set shows the model is effective for coreference resolution. 2 Related Work There are plenty of learning-based coreference resolution systems that employ the mention-pair model. A typical one of them is presented by Soon et al. (2001). In the system, a training or testing instance is formed for two mentions in question, with a feature vector describing their properties and relationships. At a testing time, an active mention is checked against all its preceding mentions, and is linked with the closest one that is classified as positive. The work is further enhanced by Ng and Cardie (2002) by expanding the feature set and adopting a “bestfirst” linking strategy. Recent years have seen some work on the entitymention model. Luo et al. (2004) propose a system that performs coreference resolution by doing search in a large space of entities. They train a classifier that can determine the likelihood that an active mention should belong to an entity. The entity-level features are calculated with an “Any-X” strategy: an entitymention pair would be assigned a feature X, if any mention in the entity has the feature X with the active mention. Culotta et al. (2007) present a system which uses an online learning approach to train a classifier to judge whether two entities are coreferential or not. The features describing the relationships between two entities are obtained based on the information of every possible pair of mentions from the two entities. Different from (Luo et al., 2004), the entitylevel features are computed using a “Most-X” strategy, that is, two given entities would have a feature X, if most of the mention pairs from the two entities have the feature X. Yang et al. (2004b) suggest an entity-based coreference resolution system. The model adopted in the system is similar to the mention-pair model, except that the entity information (e.g., the global number/gender agreement) is considered as additional features of a mention in the entity. McCallum and Wellner (2003) propose several graphical models for coreference analysis. These models aim to overcome the limitation that pairwise coreference decisions are made independently of each other. The simplest model conditions coreference on mention pairs, but enforces dependency by calculating the distance of a node to a partition (i.e., the probability that an active mention belongs to an entity) based on the sum of its distances to all the nodes in the partition (i.e., the sum of the probability of the active mention co-referring with the mentions in the entity). Inductive Logic Programming (ILP) has been applied to some natural language processing tasks, including parsing (Mooney, 1997), POS disambiguation (Cussens, 1996), lexicon construction (Claveau et al., 2003), WSD (Specia et al., 2007), and so on. 
However, to our knowledge, our work is the first effort to adopt this technique for the coreference resolution task. 3 Modelling Coreference Resolution Suppose we have a document containing n mentions {mj : 1 < j < n}, in which mj is the jth mention occurring in the document. Let ei be the ith entity in the document. We define P(L|ei, mj), (1) the probability that a mention belongs to an entity. Here the random variable L takes a binary value and is 1 if mj is a mention of ei. By assuming that mentions occurring after mj have no influence on the decision of linking mj to an entity, we can approximate (1) as: P(L|ei, mj) ∝ P(L|{mk ∈ei, 1 ≤k ≤j −1}, mj) (2) ∝ max mk∈ei,1≤k≤j−1 P(L|mk, mj) (3) (3) further assumes that an entity-mention score can be computed by using the maximum mention844 [ Microsoft Corp. ]1 1 announced [ [ its ]1 2 new CEO ]2 3 [ yesterday ]3 4. [ The company ]1 5 said [ he ]2 6 will . . . Table 1: A sample text pair score. Both (2) and (1) can be approximated with a machine learning method, leading to the traditional mention-pair model and the entity-mention model for coreference resolution, respectively. The two models will be described in the next subsections, with the sample text in Table 1 used for demonstration. In the table, a mention m is highlighted as [ m ]eid mid, where mid and eid are the IDs for the mention and the entity to which it belongs, respectively. Three entity chains can be found in the text, that is, e1 : Microsoft Corp. - its - The company e2 : its new CEO - he e3 : yesterday 3.1 Mention-Pair Model As a baseline, we first describe a learning framework with the mention-pair model as adopted in the work by Soon et al. (2001) and Ng and Cardie (2002). In the learning framework, a training or testing instance has the form of i{mk, mj}, in which mj is an active mention and mk is a preceding mention. An instance is associated with a vector of features, which is used to describe the properties of the two mentions as well as their relationships. Table 2 summarizes the features used in our study. For training, given each encountered anaphoric mention mj in a document, one single positive training instance is created for mj and its closest antecedent. And a group of negative training instances is created for every intervening mentions between mj and the antecedent. Consider the example text in Table 1, for the pronoun “he”, three instances are generated: i(“The company”,“he”), i(“yesterday”,“he”), and i(“its new CEO”,“he”). Among them, the first two are labelled as negative while the last one is labelled as positive. Based on the training instances, a binary classifier can be generated using any discriminative learning algorithm. During resolution, an input document is processed from the first mention to the last. For each encountered mention mj, a test instance is formed for each preceding mention, mk. This instance is presented to the classifier to determine the coreference relationship. mj is linked with the mention that is classified as positive (if any) with the highest confidence value. 3.2 Entity-Mention Model The mention-based solution has a limitation that information beyond a mention pair cannot be captured. As an individual mention usually lacks complete description about the referred entity, the coreference relationship between two mentions may be not clear, which would affect classifier learning. Consider a document with three coreferential mentions “Mr. Powell”, “he”, and “Powell”, appearing in that order. 
The positive training instance i(“he”, “Powell”) is not informative, as the pronoun “he” itself discloses nothing but the gender. However, if the whole entity is considered instead of only one mention, we can know that “he” refers to a male person named “Powell”. And consequently, the coreference relationships between the mentions would become more obvious. The mention-pair model would also cause errors at a testing time. Suppose we have three mentions “Mr. Powell”, “Powell”, and “she” in a document. The model tends to link “she” with “Powell” because of their proximity. This error can be avoided, if we know “Powell” belongs to the entity starting with “Mr. Powell”, and therefore refers to a male person and cannot co-refer with “she”. The entity-mention model based on Eq. (2) performs coreference resolution at an entity-level. For simplicity, the framework considered for the entitymention model adopts similar training and testing procedures as for the mention-pair model. Specifically, a training or testing instance has the form of i{ei, mj}, in which mj is an active mention and ei is a partial entity found before mj. During training, given each anaphoric mention mj, one single positive training instance is created for the entity to which mj belongs. And a group of negative training instances is created for every partial entity whose last mention occurs between mj and the closest antecedent of mj. See the sample in Table 1 again. For the pronoun “he”, the following three instances are generated for 845 Features describing an active mention, mj defNP mj 1 if mj is a definite description; else 0 indefNP mj 1 if mj is an indefinite NP; else 0 nameNP mj 1 if mj is a named-entity; else 0 pron mj 1 if mj is a pronoun; else 0 bareNP mj 1 if mj is a bare NP (i.e., NP without determiners) ; else 0 Features describing a previous mention, mk defNP mk 1 if mk is a definite description; else 0 indefNP mk 1 if mk is an indefinite NP; else 0 nameNP mk 1 if mk is a named-entity; else 0 pron mk 1 if mk is a pronoun; else 0 bareNP mk 1 if mk is a bare NP; else 0 subject mk 1 if mk is an NP in a subject position; else 0 Features describing the relationships between mk and mj sentDist sentence distance between two mentions numAgree 1 if two mentions match in the number agreement; else 0 genderAgree 1 if two mentions match in the gender agreement; else 0 parallelStruct 1 if two mentions have an identical collocation pattern; else 0 semAgree 1 if two mentions have the same semantic category; else 0 nameAlias 1 if two mentions are an alias of the other; else 0 apposition 1 if two mentions are in an appositive structure; else 0 predicative 1 if two mentions are in a predicative structure; else 0 strMatch Head 1 if two mentions have the same head string; else 0 strMatch Full 1 if two mentions contain the same strings, excluding the determiners; else 0 strMatch Contain 1 if the string of mj is fully contained in that of mk; else 0 Table 2: Feature set for coreference resolution entity e1, e3 and e2: i({“Microsoft Corp.”, “its”, “The company”},“he”), i({“yesterday”},“he”), i({“its new CEO”},“he”). Among them, the first two are labelled as negative, while the last one is positive. The resolution is done using a greedy clustering strategy. Given a test document, the mentions are processed one by one. For each encountered mention mj, a test instance is formed for each partial entity found so far, ei. This instance is presented to the classifier. 
mj is appended to the entity that is classified as positive (if any) with the highest confidence value. If no positive entity exists, the active mention is deemed as non-anaphoric and forms a new entity. The process continues until the last mention of the document is reached. One potential problem with the entity-mention model is how to represent the entity-level knowledge. As an entity may contain more than one candidate and the number is not fixed, it is impractical to enumerate all the mentions in an entity and put their properties into a single feature vector. As a baseline, we follow the solution proposed in (Luo et al., 2004) to design a set of first-order features. The features are similar to those for the mention-pair model as shown in Table 2, but their values are calculated at an entity level. Specifically, the lexical and grammatical features are computed by testing any mention1 in the entity against the active mention, for ex1Linguistically, pronouns usually have the most direct corefample, the feature nameAlias is assigned value 1 if at least one mention in the entity is a name alias of the active mention. The distance feature (i.e., sentDist) is the minimum distance between the mentions in the entity and the active mention. The above entity-level features are designed in an ad-hoc way. They cannot capture the detailed information of each individual mention in an entity. In the next section, we will present a more expressive entity-mention model by using ILP. 4 Entity-mention Model with ILP 4.1 Motivation The entity-mention model based on Eq. (2) requires relational knowledge that involves information of an active mention (mj), an entity (ei), and the mentions in the entity ({mk ∈ei}). However, normal machine learning algorithms work on attribute-value vectors, which only allows the representation of atomic proposition. To learn from relational knowledge, we need an algorithm that can express first-order logic. This requirement motivates our use of Inductive Logic Programming (ILP), a learning algorithm capable of inferring logic programs. The relational nature of ILP makes it possible to explicitly represent relations between an entity and its mentions, and thus provides a powerful expressiveness for the coreference resolution task. erence relationship with antecedents in a local discourse. Hence, if an active mention is a pronoun, we only consider the mentions in its previous two sentences for feature computation. 846 ILP uses logic programming as a uniform representation for examples, background knowledge and hypotheses. Given a set of positive and negative example E = E+ ∪E−, and a set of background knowledge K of the domain, ILP tries to induce a set of hypotheses h that covers most of E+ with no E−, i.e., K ∧h |= E+ and K ∧h ̸|= E−. In our study, we choose ALEPH2, an ILP implementation by Srinivasan (2000) that has been proven well suited to deal with a large amount of data in multiple domains. For its routine use, ALEPH follows a simple procedure to induce rules. It first selects an example and builds the most specific clause that entertains the example. Next, it tries to search for a clause more general than the bottom one. The best clause is added to the current theory and all the examples made redundant are removed. The procedure repeats until all examples are processed. 4.2 Apply ILP to coreference resolution Given a document, we encode a mention or a partial entity with a unique constant. Specifically, mj represents the jth mention (e.g., m6 for the pronoun “he”). 
ei j represents the partial entity i before the jth mention. For example, e1 6 denotes the part of e1 before m6, i.e., {“Microsoft Corp.”, “its”, “the company”}, while e1 5 denotes the part of e1 before m5 (“The company”), i.e., {“Microsoft Corp.”, “its”}. Training instances are created as described in Section 3.2 for the entity-mention model. Each instance is recorded with a predicate link(ei j, mj), where mj is an active mention and ei j is a partial entity. For example, the three training instances formed by the pronoun “he” are represented as follows: link(e1 6, m6). link(e3 6, m6). link(e2 6, m6). The first two predicates are put into E−, while the last one is put to E+. The background knowledge for an instance link(ei j, mj) is also represented with predicates, which are divided into the following types: 1. Predicates describing the information related to ei j and mj. The properties of mj are pre2http://web.comlab.ox.ac.uk/oucl/ research/areas/machlearn/Aleph/aleph toc.html sented with predicates like f(m, v), where f corresponds to a feature in the first part of Table 2 (removing the suffix mj), and v is its value. For example, the pronoun “he” can be described by the following predicates: defNP(m6, 0). indefNP(m6, 0). nameNP(m6, 0). pron(m6, 1). bareNP(m6, 0). The predicates for the relationships between ei j and mj take a form of f(e, m, v). In our study, we consider the number agreement (entNumAgree) and the gender agreement (entGenderAgree) between ei j and mj. v is 1 if all of the mentions in ei j have consistent number/gender agreement with mj, e.g, entNumAgree(e1 6, m6, 1). 2. Predicates describing the belonging relations between ei j and its mentions. A predicate has mention(e, m) is used for each mention in e 3. For example, the partial entity e1 6 has three mentions, m1, m2 and m5, which can be described as follows: has mention(e1 6, m1). has mention(e1 6, m2). has mention(e1 6, m5). 3. Predicates describing the information related to mj and each mention mk in ei j. The predicates for the properties of mk correspond to the features in the second part of Table 2 (removing the suffix mk), while the predicates for the relationships between mj and mk correspond to the features in the third part of Table 2. For example, given the two mentions m1 (“Microsoft Corp.) and m6 (“he), the following predicates can be applied: nameNP(m1, 1). pron(m1, 0). . . . nameAlias(m1, m6, 0). sentDist(m1, m6, 1). . . . the last two predicates represent that m1 and 3If an active mention mj is a pronoun, only the previous mentions in two sentences apart are recorded by has mention, while the farther ones are ignored as they have less impact on the resolution of the pronoun. 847 m6 are not name alias, and are one sentence apart. By using the three types of predicates, the different knowledge related to entities and mentions are integrated. The predicate has mention acts as a bridge connecting the entity-mention knowledge and the mention-pair knowledge. As a result, when evaluating the coreference relationship between an active mention and an entity, we can make use of the “global” information about the entity, as well as the “local” information of each individual mention in the entity. From the training instances and the associated background knowledge, a set of hypotheses can be automatically learned by ILP. Each hypothesis is output as a rule that may look like: link(A,B):predi1, predi2, . . . , has mention(A,C), . . . , prediN. which corresponds to first-order logic ∀A, B(predi1 ∧predi2 ∧. . . 
∧ ∃C(has mention(A, C) ∧. . . ∧prediN) →link(A, B)) Consider an example rule produced in our system: link(A,B) :has mention(A,C), numAgree(B,C,1), strMatch Head(B,C,1), bareNP(C,1). Here, variables A and B stand for an entity and an active mention in question. The first-order logic is implemented by using non-instantiated arguments C in the predicate has mention. This rule states that a mention B should belong to an entity A, if there exists a mention C in A such that C is a bare noun phrase with the same head string as B, and matches in number with B. In this way, the detailed information of each individual mention in an entity can be captured for resolution. A rule is applicable to an instance link(e, m), if the background knowledge for the instance can be described by the predicates in the body of the rule. Each rule is associated with a score, which is the accuracy that the rule can produce for the training instances. The learned rules are applied to resolution in a similar way as described in Section 3.2. Given an active mention m and a partial entity e, a test instance link(e, m) is formed and tested against every rule in the rule set. The confidence that m should Train Test #entity #mention #entity #mention NWire 1678 9861 411 2304 NPaper 1528 10277 365 2290 BNews 1695 8986 468 2493 Table 3: statistics of entities (length > 1) and contained mentions belong to e is the maximal score of the applicable rules. An active mention is linked to the entity with the highest confidence value (above 0.5), if any. 5 Experiments and Results 5.1 Experimental Setup In our study, we did evaluation on the ACE-2003 corpus, which contains two data sets, training and devtest, used for training and testing respectively. Each of these sets is further divided into three domains: newswire (NWire), newspaper (NPaper), and broadcast news (BNews). The number of entities with more than one mention, as well as the number of the contained mentions, is summarized in Table 3. For both training and resolution, an input raw document was processed by a pipeline of NLP modules including Tokenizer, Part-of-Speech tagger, NP Chunker and Named-Entity (NE) Recognizer. Trained and tested on Penn WSJ TreeBank, the POS tagger could obtain an accuracy of 97% and the NP chunker could produce an F-measure above 94% (Zhou and Su, 2000). Evaluated for the MUC6 and MUC-7 Named-Entity task, the NER module (Zhou and Su, 2002) could provide an F-measure of 96.6% (MUC-6) and 94.1%(MUC-7). For evaluation, Vilain et al. (1995)’s scoring algorithm was adopted to compute recall and precision rates. By default, the ALEPH algorithm only generates rules that have 100% accuracy for the training data. And each rule contains at most three predicates. To accommodate for coreference resolution, we loosened the restrictions to allow rules that have above 50% accuracy and contain up to ten predicates. Default parameters were applied for all the other settings in ALEPH as well as other learning algorithms used in the experiments. 5.2 Results and Discussions Table 4 lists the performance of different coreference resolution systems. 
For comparison, we first 848 NWire NPaper BNews R P F R P F R P F C4.5 - Mention-Pair 68.2 54.3 60.4 67.3 50.8 57.9 66.5 59.5 62.9 - Entity-Mention 66.8 55.0 60.3 64.2 53.4 58.3 64.6 60.6 62.5 - Mention-Pair (all mentions in entity) 66.7 49.3 56.7 65.8 48.9 56.1 66.5 47.6 55.4 ILP - Mention-Pair 66.1 54.8 59.5 65.6 54.8 59.7 63.5 60.8 62.1 - Entity-Mention 65.0 58.9 61.8 63.4 57.1 60.1 61.7 65.4 63.5 Table 4: Results of different systems for coreference resolution examined the C4.5 algorithm4 which is widely used for the coreference resolution task. The first line of the table shows the baseline system that employs the traditional mention-pair model (MP) as described in Section 3.1. From the table, our baseline system achieves a recall of around 66%-68% and a precision of around 50%-60%. The overall F-measure for NWire, NPaper and BNews is 60.4%, 57.9% and 62.9% respectively. The results are comparable to those reported in (Ng, 2005) which uses similar features and gets an F-measure ranging in 50-60% for the same data set. As our system relies only on simple and knowledge-poor features, the achieved Fmeasure is around 2-4% lower than the state-of-theart systems do, like (Ng, 2007) and (Yang and Su, 2007) which utilized sophisticated semantic or realworld knowledge. Since ILP has a strong capability in knowledge management, our system could be further improved if such helpful knowledge is incorporated, which will be explored in our future work. The second line of Table 4 is for the system that employs the entity-mention model (EM) with “Any-X” based entity features, as described in Section 3.2. We can find that the EM model does not show superiority over the baseline MP model. It achieves a higher precision (up to 2.6%), but a lower recall (2.9%), than MP. As a result, we only see ±0.4% difference between the F-measure. The results are consistent with the reports by Luo et al. (2004) that the entity-mention model with the “AnyX” first-order features performs worse than the normal mention-pair model. In our study, we also tested the “Most-X” strategy for the first-order features as in (Culotta et al., 2007), but got similar results without much difference (±0.5% F-measure) in perfor4http://www.rulequest.com/see5-info.html mance. Besides, as with our entity-mention predicates described in Section 4.2, we also tried the “AllX” strategy for the entity-level agreement features, that is, whether all mentions in a partial entity agree in number and gender with an active mention. However, we found this bring no improvement against the “Any-X” strategy. As described, given an active mention mj, the MP model only considers the mentions between mj and its closest antecedent. By contrast, the EM model considers not only these mentions, but also their antecedents in the same entity link. We were interested in examining what if the MP model utilizes all the mentions in an entity as the EM model does. As shown in the third line of Table 4, such a solution damages the performance; while the recall is at the same level, the precision drops significantly (up to 12%) and as a result, the F-measure is even lower than the original MP model. This should be because a mention does not necessarily have direct coreference relationships with all of its antecedents. As the MP model treats each mention-pair as an independent instance, including all the antecedents would produce many less-confident positive instances, and thus adversely affect training. 
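The two instance-creation schemes contrasted above can be sketched as follows, using the sample text of Table 1; mentions are simplified to gold entity ids in document order, and the function is our illustration rather than the authors' code.

```python
def mention_pair_instances(mentions, all_antecedents=False):
    """Create mention-pair training instances as (antecedent, mention, label).

    Default: Soon et al.-style -- one positive for the closest antecedent and
    negatives for every intervening mention.  With all_antecedents=True, a
    positive is created for every earlier mention of the same entity (the
    variant shown above to hurt precision)."""
    instances = []
    for j, mj in enumerate(mentions):
        antecedents = [k for k in range(j) if mentions[k]["entity"] == mj["entity"]]
        if not antecedents:
            continue                                # non-anaphoric mention
        closest = antecedents[-1]
        positives = antecedents if all_antecedents else [closest]
        instances += [(k, j, 1) for k in positives]
        instances += [(k, j, 0) for k in range(closest + 1, j)]
    return instances

if __name__ == "__main__":
    # Table 1: Microsoft Corp.[e1] its[e1] its new CEO[e2] yesterday[e3]
    #          The company[e1] he[e2]
    ms = [{"entity": 1}, {"entity": 1}, {"entity": 2},
          {"entity": 3}, {"entity": 1}, {"entity": 2}]
    for inst in mention_pair_instances(ms):
        print(inst)   # for "he" (index 5): positive with index 2, negatives with 3 and 4
```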
The second block of the table summarizes the performance of the systems with ILP. We were first concerned with how well ILP works for the mentionpair model, compared with the normally used algorithm C4.5. From the results shown in the fourth line of Table 4, ILP exhibits the same capability in the resolution; it tends to produce a slightly higher precision but a lower recall than C4.5 does. Overall, it performs better in F-measure (1.8%) for Npaper, while slightly worse (<1%) for Nwire and BNews. These results demonstrate that ILP could be used as 849 link(A,B) :bareNP(B,0), has mention(A,C), appositive(C,1). link(A,B) :has mention(A,C), numAgree(B,C,1), strMatch Head(B,C,1), bareNP(C,1). link(A,B) :nameNP(B,0), has mention(A,C), predicative(C,1). link(A,B) :has mention(A,C), strMatch Contain(B,C,1), strMatch Head(B,C,1), bareNP(C,0). link(A,B) :nameNP(B,0), has mention(A,C), nameAlias(C,1), bareNP(C,0). link(A,B) :pron(B,1), has mention(A,C), nameNP(C,1), has mention(A,D), indefNP(D,1), subject(D, 1). ... Figure 1: Examples of rules produced by ILP (entitymention model) a good classifier learner for the mention-pair model. The fifth line of Table 4 is for the ILP based entitymention model (described in Section 4.2). We can observe that the model leads to a better performance than all the other models. Compared with the system with the MP model (under ILP), the EM version is able to achieve a higher precision (up to 4.6% for BNews). Although the recall drops slightly (up to 1.8% for BNews), the gain in the precision could compensate it well; it beats the MP model in the overall F-measure for all three domains (2.3% for Nwire, 0.4% for Npaper, 1.4% for BNews). Especially, the improvement in NWire and BNews is statistically significant under a 2-tailed t test (p < 0.05). Compared with the EM model with the manually designed first-order feature (the second line), the ILP-based EM solution also yields better performance in precision (with a slightly lower recall) as well as the overall F-measure (1.0% - 1.8%). The improvement in precision against the mention-pair model confirms that the global information beyond a single mention pair, when being considered for training, can make coreference relations clearer and help classifier learning. The better performance against the EM model with heuristically designed features also suggests that ILP is able to learn effective first-order rules for the coreference resolution task. In Figure 1, we illustrate part of the rules produced by ILP for the entity-mention model (NWire domain), which shows how the relational knowledge of entities and mentions is represented for decision making. An interesting finding, as shown in the last rule of the table, is that multiple non-instantiated arguments (i.e. C and D) could possibly appear in the same rule. According to this rule, a pronominal mention should be linked with a partial entity which contains a named-entity and contains an indefinite NP in a subject position. This supports the claims in (Yang et al., 2004a) that coreferential information is an important factor to evaluate a candidate antecedent in pronoun resolution. Such complex logic makes it possible to capture information of multiple mentions in an entity at the same time, which is difficult to implemented in the mention-pair model and the ordinary entity-mention model with heuristic first-order features. 6 Conclusions This paper presented an expressive entity-mention model for coreference resolution by using Inductive Logic Programming. 
In contrast to the traditional mention-pair model, our model can capture information beyond single mention pairs for both training and testing. The relational nature of ILP enables our model to explicitly express the relations between an entity and its mentions, and to automatically learn the first-order rules effective for the coreference resolution task. The evaluation on ACE data set shows that the ILP based entity-model performs better than the mention-pair model (with up to 2.3% increase in F-measure), and also beats the entity-mention model with heuristically designed first-order features. Our current work focuses on the learning model that calculates the probability of a mention belonging to an entity. For simplicity, we just use a greedy clustering strategy for resolution, that is, a mention is linked to the current best partial entity. In our future work, we would like to investigate more sophisticated clustering methods that would lead to global optimization, e.g., by keeping a large search space (Luo et al., 2004) or using integer programming (Denis and Baldridge, 2007). Acknowledgements This research is supported by a Specific Targeted Research Project (STREP) of the European Union’s 6th Framework Programme within IST call 4, Bootstrapping Of Ontologies and Terminologies STrategic REsearch Project (BOOTStrep). 850 References C. Aone and S. W. Bennett. 1995. Evaluating automated and manual acquisition of anaphora resolution strategies. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 122–129. V. Claveau, P. Sebillot, C. Fabre, and P. Bouillon. 2003. Learning semantic lexicons from a part-of-speech and semantically tagged corpus using inductive logic programming. Journal of Machine Learning Research, 4:493–525. A. Culotta, M. Wick, and A. McCallum. 2007. Firstorder probabilistic models for coreference resolution. In Proceedings of the Annual Meeting of the North America Chapter of the Association for Computational Linguistics (NAACL), pages 81–88. J. Cussens. 1996. Part-of-speech disambiguation using ilp. Technical report, Oxford University Computing Laboratory. P. Denis and J. Baldridge. 2007. Joint determination of anaphoricity and coreference resolution using integer programming. In Proceedings of the Annual Meeting of the North America Chapter of the Association for Computational Linguistics (NAACL), pages 236–243. X. Luo, A. Ittycheriah, H. Jing, N. Kambhatla, and S. Roukos. 2004. A mention-synchronous coreference resolution algorithm based on the bell tree. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 135–142. A. McCallum and B. Wellner. 2003. Toward conditional models of identity uncertainty with application to proper noun coreference. In Proceedings of IJCAI03 Workshop on Information Integration on the Web, pages 79–86. J. McCarthy and W. Lehnert. 1995. Using decision trees for coreference resolution. In Proceedings of the 14th International Conference on Artificial Intelligences (IJCAI), pages 1050–1055. R. Mooney. 1997. Inductive logic programming for natural language processing. In Proceedings of the sixth International Inductive Logic Programming Workshop, pages 3–24. V. Ng and C. Cardie. 2002. Improving machine learning approaches to coreference resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 104–111, Philadelphia. V. Ng. 2005. 
Machine learning for coreference resolution: From local classification to global ranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL), pages 157–164. V. Ng. 2007. Semantic class induction and coreference resolution. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 536–543. W. Soon, H. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521–544. L. Specia, M. Stevenson, and M. V. Nunes. 2007. Learning expressive models for words sense disambiguation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 41–48. A. Srinivasan. 2000. The aleph manual. Technical report, Oxford University Computing Laboratory. M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the Sixth Message understanding Conference (MUC-6), pages 45– 52, San Francisco, CA. Morgan Kaufmann Publishers. X. Yang and J. Su. 2007. Coreference resolution using semantic relatedness information from automatically discovered patterns. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 528–535. X. Yang, J. Su, G. Zhou, and C. Tan. 2004a. Improving pronoun resolution by incorporating coreferential information of candidates. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 127–134, Barcelona. X. Yang, J. Su, G. Zhou, and C. Tan. 2004b. An NP-cluster approach to coreference resolution. In Proceedings of the 20th International Conference on Computational Linguistics, pages 219–225, Geneva. G. Zhou and J. Su. 2000. Error-driven HMM-based chunk tagger with context-dependent lexicon. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 71–79, Hong Kong. G. Zhou and J. Su. 2002. Named Entity recognition using a HMM-based chunk tagger. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), pages 473–480, Philadelphia. 851
Proceedings of ACL-08: HLT, pages 852–860, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Gestural Cohesion for Topic Segmentation Jacob Eisenstein, Regina Barzilay and Randall Davis Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology 77 Massachusetts Ave., Cambridge MA 02139 {jacobe, regina, davis}@csail.mit.edu Abstract This paper explores the relationship between discourse segmentation and coverbal gesture. Introducing the idea of gestural cohesion, we show that coherent topic segments are characterized by homogeneous gestural forms and that changes in the distribution of gestural features predict segment boundaries. Gestural features are extracted automatically from video, and are combined with lexical features in a Bayesian generative model. The resulting multimodal system outperforms text-only segmentation on both manual and automaticallyrecognized speech transcripts. 1 Introduction When people communicate face-to-face, discourse cues are expressed simultaneously through multiple channels. Previous research has extensively studied how discourse cues correlate with lexico-syntactic and prosodic features (Hearst, 1994; Hirschberg and Nakatani, 1998; Passonneau and Litman, 1997); this work informs various text and speech processing applications, such as automatic summarization and segmentation. Gesture is another communicative modality that frequently accompanies speech, yet it has not been exploited for computational discourse analysis. This paper empirically demonstrates that gesture correlates with discourse structure. In particular, we show that automatically-extracted visual features can be combined with lexical cues in a statistical model to predict topic segmentation, a frequently studied form of discourse structure. Our method builds on the idea that coherent discourse segments are characterized by gestural cohesion; in other words, that such segments exhibit homogeneous gestural patterns. Lexical cohesion (Halliday and Hasan, 1976) forms the backbone of many verbal segmentation algorithms, on the theory that segmentation boundaries should be placed where the distribution of words changes (Hearst, 1994). With gestural cohesion, we explore whether the same idea holds for gesture features. The motivation for this approach comes from a series of psycholinguistic studies suggesting that gesture supplements speech with meaningful and unique semantic content (McNeill, 1992; Kendon, 2004). We assume that repeated patterns in gesture are indicative of the semantic coherence that characterizes well-defined discourse segments. An advantage of this view is that gestures can be brought to bear on discourse analysis without undertaking the daunting task of recognizing and interpreting individual gestures. This is crucial because coverbal gesture – unlike formal sign language – rarely follows any predefined form or grammar, and may vary dramatically by speaker. A key implementational challenge is automatically extracting gestural information from raw video and representing it in a way that can applied to discourse analysis. We employ a representation of visual codewords, which capture clusters of low-level motion patterns. For example, one codeword may correspond to strong left-right motion in the upper part of the frame. These codewords are then treated similarly to lexical items; our model identifies changes in their distribution, and predicts topic 852 boundaries appropriately. 
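In implementation terms, this means each sentence carries a histogram of gesture codewords alongside its bag of words; a minimal sketch of that representation (the model that actually scores segmentations is the Bayesian one of Section 4):

```python
from collections import Counter

def sentence_histograms(sentences, vocab_size):
    """Turn per-sentence lists of codeword ids (or word ids) into count
    vectors, so gestural cohesion can be measured exactly like lexical
    cohesion.  Ids are assumed to be integers in [0, vocab_size)."""
    rows = []
    for ids in sentences:
        counts = Counter(ids)
        rows.append([counts.get(k, 0) for k in range(vocab_size)])
    return rows

if __name__ == "__main__":
    # Gesture codewords for three sentences, from a hypothetical 4-codeword lexicon.
    gestures = [[0, 0, 2], [0, 2], [3, 3, 1]]
    for row in sentence_histograms(gestures, vocab_size=4):
        print(row)   # the shift in distribution at the last sentence suggests a boundary
```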
The overall framework is implemented as a hierarchical Bayesian model, supporting flexible integration of multiple knowledge sources. Experimental results support the hypothesis that gestural cohesion is indicative of discourse structure. Applying our algorithm to a dataset of faceto-face dialogues, we find that gesture communicates unique information, improving segmentation performance over lexical features alone. The positive impact of gesture is most pronounced when automatically-recognized speech transcripts are used, but gestures improve performance by a significant margin even in combination with manual transcripts. 2 Related Work Gesture and discourse Much of the work on gesture in natural language processing has focused on multimodal dialogue systems in which the gestures and speech may be constrained, e.g. (Johnston, 1998). In contrast, we focus on improving discourse processing on unconstrained natural language between humans. This effort follows basic psychological and linguistic research on the communicative role of gesture (McNeill, 1992; Kendon, 2004), including some efforts that made use of automatically acquired visual features (Quek, 2003). We extend these empirical studies with a statistical model of the relationship between gesture and discourse segmentation. Hand-coded descriptions of body posture shifts and eye gaze behavior have been shown to correlate with topic and turn boundaries in task-oriented dialogue (Cassell et al., 2001). These findings are exploited to generate realistic conversational “grounding” behavior in an animated agent. The semantic content of gesture was leveraged – again, for gesture generation – in (Kopp et al., 2007), which presents an animated agent that is capable of augmenting navigation directions with gestures that describe the physical properties of landmarks along the route. Both systems generate plausible and human-like gestural behavior; we address the converse problem of interpreting such gestures. In this vein, hand-coded gesture features have been used to improve sentence segmentation, showing that sentence boundaries are unlikely to overlap gestures that are in progress (Chen et al., 2006). Features that capture the start and end of gestures are shown to improve sentence segmentation beyond lexical and prosodic features alone. This idea of gestural features as a sort of visual punctuation has parallels in the literature on prosody, which we discuss in the next subsection. Finally, ambiguous noun phrases can be resolved by examining the similarity of co-articulated gestures (Eisenstein and Davis, 2007). While noun phrase coreference can be viewed as a discourse processing task, we address the higher-level discourse phenomenon of topic segmentation. In addition, this prior work focused primarily on pointing gestures directed at pre-printed visual aids. The current paper presents a new domain, in which speakers do not have access to visual aids. Thus pointing gestures are less frequent than “iconic” gestures, in which the form of motion is the principle communicative feature (McNeill, 1992). Non-textual features for topic segmentation Research on non-textual features for topic segmentation has primarily focused on prosody, under the assumption that a key prosodic function is to mark structure at the discourse level (Steedman, 1990; Grosz and Hirshberg, 1992; Swerts, 1997). The ultimate goal of this research is to find correlates of hierarchical discourse structure in phonetic features. 
Today, research on prosody has converged on prosodic cues which correlate with discourse structure. Such markers include pause duration, fundamental frequency, and pitch range manipulations (Grosz and Hirshberg, 1992; Hirschberg and Nakatani, 1998). These studies informed the development of applications such as segmentation tools for meeting analysis, e.g. (Tur et al., 2001; Galley et al., 2003). In comparison, the connection between gesture and discourse structure is a relatively unexplored area, at least with respect to computational approaches. One conclusion that emerges from our analysis is that gesture may signal discourse structure in a different way than prosody does: while specific prosodic markers characterize segment boundaries, gesture predicts segmentation through intrasegmental cohesion. The combination of these two 853 modalities is an exciting direction for future research. 3 Visual Features for Discourse Analysis This section describes the process of building a representation that permits the assessment of gestural cohesion. The core signal-level features are based on spatiotemporal interest points, which provide a sparse representation of the motion in the video. At each interest point, visual, spatial, and kinematic characteristics are extracted and then concatenated into vectors. Principal component analysis (PCA) reduces the dimensionality to a feature vector of manageable size (Bishop, 2006). These feature vectors are then clustered, yielding a codebook of visual forms. This video processing pipeline is shown in Figure 1; the remainder of the section describes the individual steps in greater detail. 3.1 Spatiotemporal Interest Points Spatiotemporal interest points (Laptev, 2005) provide a sparse representation of motion in video. The idea is to select a few local regions that contain high information content in both the spatial and temporal dimensions. The image features at these regions should be relatively robust to lighting and perspective changes, and they should capture the relevant movement in the video. The set of spatiotemporal interest points thereby provides a highly compressed representation of the key visual features. Purely spatial interest points have been successful in a variety of image processing tasks (Lowe, 1999), and spatiotemporal interest points are beginning to show similar advantages for video processing (Laptev, 2005). The use of spatiotemporal interest points is specifically motivated by techniques from the computer vision domain of activity recognition (Efros et al., 2003; Niebles et al., 2006). The goal of activity recognition is to classify video sequences into semantic categories: e.g., walking, running, jumping. As a simple example, consider the task of distinguishing videos of walking from videos of jumping. In the walking videos, the motion at most of the interest points will be horizontal, while in the jumping videos it will be vertical. Spurious vertical motion in a walking video is unlikely to confuse the classifier, as long as the majority of interest points move horizontally. The hypothesis of this paper is that just as such low-level movement features can be applied in a supervised fashion to distinguish activities, they can be applied in an unsupervised fashion to group co-speech gestures into perceptually meaningful clusters. The Activity Recognition Toolbox (Doll´ar et al., 2005)1 is used to detect spatiotemporal interest points for our dataset. 
This toolbox ranks interest points using a difference-of-Gaussians filter in the spatial dimension, and a set of Gabor filters in the temporal dimension. The total number of interest points extracted per video is set to equal the number of frames in the video. This bounds the complexity of the representation to be linear in the length of the video; however, the system may extract many interest points in some frames and none in other frames. Figure 2 shows the interest points extracted from a representative video frame from our corpus. Note that the system has identified high contrast regions of the gesturing hand. From manual inspection, the large majority of interest points extracted in our dataset capture motion created by hand gestures. Thus, for this dataset it is reasonable to assume that an interest point-based representation expresses the visual properties of the speakers’ hand gestures. In videos containing other sources of motion, preprocessing may be required to filter out interest points that are extraneous to gestural communication. 3.2 Visual Descriptors At each interest point, the temporal and spatial brightness gradients are constructed across a small space-time volume of nearby pixels. Brightness gradients have been used for a variety of problems in computer vision (Forsyth and Ponce, 2003), and provide a fairly general way to describe the visual appearance of small image patches. However, even for a small space-time volume, the resulting dimensionality is still quite large: a 10-by-10 pixel box across 5 video frames yields a 500-dimensional feature vector for each of the three gradients. For this reason, principal component analysis (Bishop, 2006) is used to reduce the dimensionality. The spatial location of the interest point is added to the final feature vector. 1http://vision.ucsd.edu/∼pdollar/research/cuboids doc/index.html 854 Figure 1: The visual processing pipeline for the extraction of gestural codewords from video. Figure 2: Circles indicate the interest points extracted from this frame of the corpus. This visual feature representation is substantially lower-level than the descriptions of gesture form found in both the psychology and computer science literatures. For example, when manually annotating gesture, it is common to employ a taxonomy of hand shapes and trajectories, and to describe the location with respect to the body and head (McNeill, 1992; Martell, 2005). Working with automatic hand tracking, Quek (2003) automatically computes perceptually-salient gesture features, such as symmetric motion and oscillatory repetitions. In contrast, our feature representation takes the form of a vector of continuous values and is not easily interpretable in terms of how the gesture actually appears. However, this low-level approach offers several important advantages. Most critically, it requires no initialization and comparatively little tuning: it can be applied directly to any video with a fixed camera position and static background. Second, it is robust: while image noise may cause a few spurious interest points, the majority of interest points should still guide the system to an appropriate characterization of the gesture. In contrast, hand tracking can become irrevocably lost, requiring manual resets (Gavrila, 1999). Finally, the success of similar low-level interest point representations at the activity-recognition task provides reason for optimism that they may also be applicable to unsupervised gesture analysis. 
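A rough sketch of the descriptor computation just described, using NumPy; the patch size follows the 10-by-10-pixel, 5-frame example above, while the number of retained principal components is our own choice, since it is not specified here.

```python
import numpy as np

def patch_descriptor(volume):
    """Spatial and temporal brightness gradients over a small space-time
    volume (here 5 frames x 10 x 10 pixels) around one interest point,
    assuming the time axis comes first."""
    dt, dy, dx = np.gradient(volume.astype(float))
    return np.concatenate([dt.ravel(), dy.ravel(), dx.ravel()])

def reduce_and_locate(descriptors, locations, n_components=20):
    """PCA via SVD to cut the ~1500-dimensional gradient descriptors down to a
    manageable size, then append each interest point's (x, y) image location.
    n_components is an assumed value."""
    X = np.asarray(descriptors, dtype=float)
    X -= X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    reduced = X @ Vt[:n_components].T
    return np.hstack([reduced, np.asarray(locations, dtype=float)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    volumes = [rng.random((5, 10, 10)) for _ in range(200)]   # fake space-time patches
    locs = rng.integers(0, [360, 240], size=(200, 2))         # fake (x, y) locations
    feats = reduce_and_locate([patch_descriptor(v) for v in volumes], locs)
    print(feats.shape)                                        # (200, 22)
```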
3.3 A Lexicon of Visual Forms After extracting a set of low-dimensional feature vectors to characterize the visual appearance at each spatiotemporal interest point, it remains only to convert this into a representation amenable to a cohesion-based analysis. Using k-means clustering (Bishop, 2006), the feature vectors are grouped into codewords: a compact, lexicon-like representation of salient visual features in video. The number of clusters is a tunable parameter, though a systematic investigation of the role of this parameter is left for future work. Codewords capture frequently-occurring patterns of motion and appearance at a local scale – interest points that are clustered together have a similar visual appearance. Because most of the motion in our videos is gestural, the codewords that appear during a given sentence provide a succinct representation of the ongoing gestural activity. Distributions of codewords over time can be analyzed in similar terms to the distribution of lexical features. A change in the distribution of codewords indicates new visual kinematic elements entering the discourse. Thus, the codeword representation allows gestural cohesion to be assessed in much the same way as lexical cohesion. 4 Bayesian Topic Segmentation Topic segmentation is performed in a Bayesian framework, with each sentence’s segment index encoded in a hidden variable, written zt. The hidden variables are assumed to be generated by a linear segmentation, such that zt ∈{zt−1, zt−1 + 1}. Observations – the words and gesture codewords – are 855 generated by multinomial language models that are indexed according to the segment. In this framework, a high-likelihood segmentation will include language models that are tightly focused on a compact vocabulary. Such a segmentation maximizes the lexical cohesion of each segment. This model thus provides a principled, probabilistic framework for cohesion-based segmentation, and we will see that the Bayesian approach is particularly wellsuited to the combination of multiple modalities. Formally, our goal is to identify the best possible segmentation S, where S is a tuple: S = ⟨z, θ, φ⟩. The segment indices for each sentence are written zt; for segment i, θi and φi are multinomial language models over words and gesture codewords respectively. For each sentence, xt and yt indicate the words and gestures that appear. We will seek to identify the segmentation ˆS = argmaxSp(S, x, y), conditioned on priors that will be defined below. p(S, x, y) = p(x, y|S)p(S) p(x, y|S) = Y i p({xt : zt = i}|θi)p({yt : zt = i}|φi) (1) p(S) = p(z) Y i p(θi)p(φi) (2) The language models θi and φi are multinomial distributions, so the log-likelihood of the observations xt is log p(xt|θi) = PW j n(t, j) log θi,j, where n(t, j) is the count of word j in sentence t, and W is the size of the vocabulary. An analogous equation is used for the gesture codewords. Each language model is given a symmetric Dirichlet prior α. As we will see shortly, the use of different priors for the verbal and gestural language models allows us to weight these modalities in a Bayesian framework. Finally, we model the probability of the segmentation z by considering the durations of each segment: p(z) = Q i p(dur(i)|ψ). A negativebinomial distribution with parameter ψ is applied to discourage extremely short or long segments. Inference Crucially, both the likelihood (equation 1) and the prior (equation 2) factor into a product across the segments. 
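A schematic rendering of the resulting inference step is given below; the dynamic program it uses, and the smoothed language-model estimates of Equation 3, are described in the following paragraphs. The sketch is illustrative only: the duration prior is omitted, and the Dirichlet hyperparameters are placeholder values rather than the settings used in the experiments.

```python
# Hedged sketch: find the most likely linear segmentation into K segments.
# Each candidate segment is scored by the log-likelihood of its word and
# gesture-codeword counts under Dirichlet-smoothed multinomials; the
# factorization across segments allows an exact dynamic program.
import numpy as np

def seg_loglik(counts, alpha):
    """counts: summed count vector over one candidate segment."""
    W = len(counts)
    theta = (counts + alpha) / (counts.sum() + W * alpha)  # posterior mean
    return float(np.dot(counts, np.log(theta)))

def best_segmentation(word_counts, code_counts, K, alpha_w=0.1, alpha_g=1.0):
    """word_counts, code_counts: (num_sentences, vocab_size) count matrices.
    Returns the sentence indices at which segments 2..K begin."""
    T = word_counts.shape[0]
    score = np.full((T + 1, T + 1), -np.inf)       # score[s, t]: segment s..t-1
    for s in range(T):
        for t in range(s + 1, T + 1):
            score[s, t] = (seg_loglik(word_counts[s:t].sum(axis=0), alpha_w)
                           + seg_loglik(code_counts[s:t].sum(axis=0), alpha_g))
    best = np.full((K + 1, T + 1), -np.inf)
    back = np.zeros((K + 1, T + 1), dtype=int)
    best[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(1, T + 1):
            cands = best[k - 1, :t] + score[:t, t]
            back[k, t] = int(np.argmax(cands))
            best[k, t] = cands[back[k, t]]
    bounds, t = [], T                              # recover segment starts
    for k in range(K, 0, -1):
        bounds.append(back[k, t])
        t = back[k, t]
    return sorted(bounds)[1:]                      # drop the initial 0
```

The two hyperparameters in this sketch play the role of the modality weights discussed below: raising one of them smooths that modality more heavily and lessens its influence on the segmentation.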
This factorization enables the optimal segmentation to be found using a dynamic program, similar to those demonstrated by Utiyama and Isahara (2001) and Malioutov and Barzilay (2006). For each set of segmentation points z, the associated language models are set to their posterior expectations, e.g., θi = E[θ|{xt : zt = i}, α]. The Dirichlet prior is conjugate to the multinomial, so this expectation can be computed in closed form: θi,j = n(i, j) + α N(i) + Wα, (3) where n(i, j) is the count of word j in segment i and N(i) is the total number of words in segment i (Bernardo and Smith, 2000). The symmetric Dirichlet prior α acts as a smoothing pseudo-count. In the multimodal context, the priors act to control the weight of each modality. If the prior for the verbal language model θ is high relative to the prior for the gestural language model φ then the verbal multinomial will be smoother, and will have a weaker impact on the final segmentation. The impact of the priors on the weights of each modality is explored in Section 6. Estimation of priors The distribution over segment durations is negative-binomial, with parameters ψ. In general, the maximum likelihood estimate of the parameters of a negative-binomial distribution cannot be found in closed form (Balakrishnan and Nevzorov, 2003). For any given segmentation, the maximum-likelihood setting for ψ is found via a gradient-based search. This setting is then used to generate another segmentation, and the process is iterated until convergence, as in hard expectationmaximization. The Dirichlet priors on the language models are symmetric, and are chosen via crossvalidation. Sampling or gradient-based techniques may be used to estimate these parameters, but this is left for future work. Relation to other segmentation models Other cohesion-based techniques have typically focused on hand-crafted similarity metrics between sentences, such as cosine similarity (Galley et al., 2003; Malioutov and Barzilay, 2006). In contrast, the model described here is probabilistically motivated, maximizing the joint probability of the segmentation with the observed words and gestures. Our objective criterion is similar in form to that of Utiyama and Isahara (2001); however, in contrast to this prior 856 work, our criterion is justified by a Bayesian approach. Also, while the smoothing in our approach arises naturally from the symmetric Dirichlet prior, Utiyama and Isahara apply Laplace’s rule and add pseudo-counts of one in all cases. Such an approach would be incapable of flexibly balancing the contributions of each modality. 5 Evaluation Setup Dataset Our dataset is composed of fifteen audiovideo recordings of dialogues limited to three minutes in duration. The dataset includes nine different pairs of participants. In each video one of five subjects is discussed. The potential subjects include a “Tom and Jerry” cartoon, a “Star Wars” toy, and three mechanical devices: a latchbox, a piston, and a candy dispenser. One participant – “participant A” – was familiarized with the topic, and is tasked with explaining it to participant B, who is permitted to ask questions. Audio from both participants is used, but only video of participant A is used; we do not examine whether B’s gestures are relevant to discourse segmentation. Video was recorded using standard camcorders, with a resolution of 720 by 480 at 30 frames per second. The video was reduced to 360 by 240 grayscale images before visual analysis is applied. Audio was recorded using headset microphones. 
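The reduction to 360-by-240 grayscale frames is a few lines with any standard video library; the sketch below assumes the opencv-python package purely for illustration.

```python
# Hedged sketch: load a recording and reduce it to 360x240 grayscale frames,
# as described above. Assumes opencv-python; any frame reader would do.
import cv2
import numpy as np

def load_grayscale(path, size=(360, 240)):
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        frames.append(cv2.resize(gray, size))
    cap.release()
    return np.stack(frames)               # array of shape (T, 240, 360)
```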
No manual postprocessing is applied to the video. Annotations and data processing All speech was transcribed by hand, and time stamps were obtained using the SPHINX-II speech recognition system for forced alignment (Huang et al., 1993). Sentence boundaries are annotated according to (NIST, 2003), and additional sentence boundaries are automatically inserted at all turn boundaries. Commonlyoccurring terms unlikely to impact segmentation are automatically removed by using a stoplist. For automatic speech recognition, the default Microsoft speech recognizer was applied to each sentence, and the top-ranked recognition result was reported. As is sometimes the case in real-world applications, no speaker-specific training data is available. The resulting recognition quality is very poor, yielding a word error rate of 77%. Annotators were instructed to select segment boundaries that divide the dialogue into coherent topics. Segmentation points are required to coincide with sentence or turn boundaries. A second annotator – who is not an author on any paper connected with this research – provided an additional set of segment annotations on six documents. On this subset of documents, the Pk between annotators was .306, and the WindowDiff was .325 (these metrics are explained in the next subsection). This is similar to the interrater agreement reported by Malioutov and Barzilay (2006). Over the fifteen dialogues, a total of 7458 words were transcribed (497 per dialogue), spread over 1440 sentences or interrupted turns (96 per dialogue). There were a total of 102 segments (6.8 per dialogue), from a minimum of four to a maximum of ten. This rate of fourteen sentences or interrupted turns per segment indicates relatively finegrained segmentation. In the physics lecture corpus used by Malioutov and Barzilay (2006), there are roughly 100 sentences per segment. On the ICSI corpus of meeting transcripts, Galley et al. (2003) report 7.5 segments per meeting, with 770 “potential boundaries,” suggesting a similar rate of roughly 100 sentences or interrupted turns per segment. The size of this multimodal dataset is orders of magnitude smaller than many other segmentation corpora. For example, the Broadcast News corpus used by Beeferman et al. (1999) and others contains two million words. The entire ICSI meeting corpus contains roughly 600,000 words, although only one third of this dataset was annotated for segmentation (Galley et al., 2003). The physics lecture corpus that was mentioned above contains 232,000 words (Malioutov and Barzilay, 2006). The task considered in this section is thus more difficult than much of the previous discourse segmentation work on two dimensions: there is less training data, and a finer-grained segmentation is required. Metrics All experiments are evaluated in terms of the commonly-used Pk (Beeferman et al., 1999) and WindowDiff (WD) (Pevzner and Hearst, 2002) scores. These metrics are penalties, so lower values indicate better segmentations. The Pk metric expresses the probability that any randomly chosen pair of sentences is incorrectly segmented, if they are k sentences apart (Beeferman et al., 1999). Following tradition, k is set to half of the mean seg857 Method Pk WD 1. gesture only .486 .502 2. ASR only .462 .476 3. ASR + gesture .388 .401 4. transcript only .382 .397 5. transcript + gesture .332 .349 6. random .473 .526 7. equal-width .508 .515 Table 1: For each method, the score of the best performing configuration is shown. 
Pk and WD are penalties, so lower values indicate better performance. ment length. The WindowDiff metric is a variation of Pk (Pevzner and Hearst, 2002), applying a penalty whenever the number of segments within the k-sentence window differs for the reference and hypothesized segmentations. Baselines Two na¨ıve baselines are evaluated. Given that the annotator has divided the dialogue into K segments, the random baseline arbitrary chooses K random segmentation points. The results of this baseline are averaged over 1000 iterations. The equal-width baseline places boundaries such that all segments contain an equal number of sentences. Both the experimental systems and these na¨ıve baselines were given the correct number of segments, and also were provided with manually annotated sentence boundaries – their task is to select the k sentence boundaries that most accurately segment the text. 6 Results Table 1 shows the segmentation performance for a range of feature sets, as well as the two baselines. Given only gesture features the segmentation results are poor (line 1), barely outperforming the baselines (lines 6 and 7). However, gesture proves highly effective as a supplementary modality. The combination of gesture with ASR transcripts (line 3) yields an absolute 7.4% improvement over ASR transcripts alone (line 4). Paired t-tests show that this result is statistically significant (t(14) = 2.71, p < .01 for both Pk and WindowDiff). Even when manual speech transcripts are available, gesture features yield a substantial improvement, reducing Pk and WD by roughly 5%. This result is statistically significant for both Pk (t(14) = 2.00, p < .05) and WD (t(14) = 1.94, p < .05). Interactions of verbal and gesture features We now consider the relative contribution of the verbal and gesture features. In a discriminative setting, the contribution of each modality would be explicitly weighted. In a Bayesian generative model, the same effect is achieved through the Dirichlet priors, which act to smooth the verbal and gestural multinomials – see equation 3. For example, when the gesture prior is high and verbal prior is low, the gesture counts are smoothed, and the verbal counts play a greater role in segmentation. When both priors are very high, the model will simply try to find equally-sized segments, satisfying the distribution over durations. The effects of these parameters can be seen in Figure 3. The gesture model prior is held constant at its ideal value, and the segmentation performance is plotted against the logarithm of the verbal prior. Low values of the verbal prior cause it to dominate the segmentation; this can be seen at the left of both graphs, where the performance of the multimodal and verbal-only systems are nearly identical. High values of the verbal prior cause it to be oversmoothed, and performance thus approaches that of the gesture-only segmenter. Comparison to other models While much of the research on topic segmentation focuses on written text, there are some comparable systems that also aim at unsupervised segmentation of spontaneous spoken language. For example, Malioutov and Barzilay (2006) segment a corpus of classroom lectures, using similar lexical cohesion-based features. With manual transcriptions, they report a .383 Pk and .417 WD on artificial intelligence (AI) lectures, and .298 Pk and .311 WD on physics lectures. 
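For reference, both penalties being compared here can be computed with a short sliding-window routine. The sketch below follows the standard definitions, with segmentations given as one segment id per sentence; it is an illustration rather than the evaluation code actually used, and the default window size is set to half the mean reference segment length as above.

```python
# Hedged sketch of the Pk and WindowDiff penalties. Segmentations are given
# as one segment id per sentence, e.g. [0, 0, 1, 1, 1, 2]. Lower is better.
def pk(reference, hypothesis, k=None):
    n = len(reference)
    if k is None:                          # half the mean reference segment length
        k = max(1, round(n / (2 * len(set(reference)))))
    errors = 0
    for i in range(n - k):
        same_ref = reference[i] == reference[i + k]
        same_hyp = hypothesis[i] == hypothesis[i + k]
        errors += same_ref != same_hyp
    return errors / (n - k)

def windowdiff(reference, hypothesis, k=None):
    def boundaries(seg, i, j):             # boundaries inside the window (i, j]
        return sum(seg[t] != seg[t - 1] for t in range(i + 1, j + 1))
    n = len(reference)
    if k is None:
        k = max(1, round(n / (2 * len(set(reference)))))
    errors = 0
    for i in range(n - k):
        errors += boundaries(reference, i, i + k) != boundaries(hypothesis, i, i + k)
    return errors / (n - k)

# Example: a hypothesis that misses one of the two reference boundaries.
ref = [0] * 5 + [1] * 5 + [2] * 5
hyp = [0] * 5 + [1] * 10
print(pk(ref, hyp), windowdiff(ref, hyp))
```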
Our results are in the range bracketed by these two extremes; the wide range of results suggests that segmentation scores are difficult to compare across domains. The segmentation of physics lectures was at a very course level of granularity, while the segmentation of AI lectures was more similar to our annotations. We applied the publicly-available executable for this algorithm to our data, but performance was poor, yielding a .417 Pk and .465 WD even when both verbal and gestural features were available. 858 −3 −2.5 −2 −1.5 −1 −0.5 0.32 0.34 0.36 0.38 0.4 0.42 log verbal prior Pk verbal−only multimodal −3 −2.5 −2 −1.5 −1 −0.5 0.32 0.34 0.36 0.38 0.4 0.42 log verbal prior WD verbal−only multimodal Figure 3: The multimodal and verbal-only performance using the reference transcript. The x-axis shows the logarithm of the verbal prior; the gestural prior is held fixed at the optimal value. This may be because the technique is not designed for the relatively fine-grained segmentation demanded by our dataset (Malioutov, 2006). 7 Conclusions This research shows a novel relationship between gestural cohesion and discourse structure. Automatically extracted gesture features are predictive of discourse segmentation when used in isolation; when lexical information is present, segmentation performance is further improved. This suggests that gestures provide unique information not present in the lexical features alone, even when perfect transcripts are available. There are at least two possibilities for how gesture might impact topic segmentation: “visual punctuation,” and cohesion. The visual punctuation view would attempt to identify specific gestural patterns that are characteristic of segment boundaries. This is analogous to research that identifies prosodic signatures of topic boundaries, such as (Hirschberg and Nakatani, 1998). By design, our model is incapable of exploiting such phenomena, as our goal is to investigate the notion of gestural cohesion. Thus, the performance gains demonstrated in this paper cannot be explained by such punctuation-like phenomena; we believe that they are due to the consistent gestural themes that characterize coherent topics. However, we are interested in pursuing the idea of visual punctuation in the future, so as to compare the power of visual punctuation and gestural cohesion to predict segment boundaries. In addition, the interaction of gesture and prosody suggests additional possibilities for future research. The videos in the dataset for this paper are focused on the description of physical devices and events, leading to a fairly concrete set of gestures. In other registers of conversation, gestural form may be driven more by spatial metaphors, or may consist mainly of temporal “beats.” In such cases, the importance of gestural cohesion for discourse segmentation may depend on the visual expressivity of the speaker. We plan to examine the extensibility of gesture cohesion to more naturalistic settings, such as classroom lectures. Finally, topic segmentation provides only an outline of the discourse structure. Richer models of discourse include hierarchical structure (Grosz and Sidner, 1986) and Rhetorical Structure Theory (Mann and Thompson, 1988). The application of gestural analysis to such models may lead to fruitful areas of future research. Acknowledgments We thank Aaron Adler, C. Mario Christoudias, Michael Collins, Lisa Guttentag, Igor Malioutov, Brian Milch, Matthew Rasmussen, Candace Sidner, Luke Zettlemoyer, and the anonymous reviewers. 
This research was supported by Quanta Computer, the National Science Foundation (CAREER grant IIS-0448168 and grant IIS-0415865) and the Microsoft Research Faculty Fellowship. 859 References Narayanaswamy Balakrishnan and Valery B. Nevzorov. 2003. A primer on statistical distributions. John Wiley & Sons. Doug Beeferman, Adam Berger, and John D. Lafferty. 1999. Statistical models for text segmentation. Machine Learning, 34(1-3):177–210. Jos´e M. Bernardo and Adrian F. M. Smith. 2000. Bayesian Theory. Wiley. Christopher M. Bishop. 2006. Pattern Recognition and Machine Learning. Springer. Justine Cassell, Yukiko I. Nakano, Timothy W. Bickmore, Candace L. Sidner, and Charles Rich. 2001. Non-verbal cues for discourse structure. In Proceedings of ACL, pages 106–115. Lei Chen, Mary Harper, and Zhongqiang Huang. 2006. Using maximum entropy (ME) model to incorporate gesture cues for sentence segmentation. In Proceedings of ICMI, pages 185–192. Piotr Doll´ar, Vincent Rabaud, Garrison Cottrell, and Serge Belongie. 2005. Behavior recognition via sparse spatio-temporal features. In ICCV VS-PETS. Alexei A. Efros, Alexander C. Berg, Greg Mori, and Jitendra Malik. 2003. Recognizing action at a distance. In Proceedings of ICCV, pages 726–733. Jacob Eisenstein and Randall Davis. 2007. Conditional modality fusion for coreference resolution. In Proceedings of ACL, pages 352–359. David A. Forsyth and Jean Ponce. 2003. Computer Vision: A Modern Approach. Prentice Hall. Michel Galley, Kathleen R. McKeown, Eric FoslerLussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. Proceedings of ACL, pages 562–569. Dariu M. Gavrila. 1999. Visual analysis of human movement: A survey. Computer Vision and Image Understanding, 73(1):82–98. Barbara Grosz and Julia Hirshberg. 1992. Some intonational characteristics of discourse structure. In Proceedings of ICSLP, pages 429–432. Barbara Grosz and Candace Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175–204. M. A. K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman. Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of ACL. Julia Hirschberg and Christine Nakatani. 1998. Acoustic indicators of topic segmentation. In Proceedings of ICSLP. Xuedong Huang, Fileno Alleva, Mei-Yuh Hwang, and Ronald Rosenfeld. 1993. An overview of the SphinxII speech recognition system. In Proceedings of ARPA Human Language Technology Workshop, pages 81– 86. Michael Johnston. 1998. Unification-based multimodal parsing. In Proceedings of COLING, pages 624–630. Adam Kendon. 2004. Gesture: Visible Action as Utterance. Cambridge University Press. Stefan Kopp, Paul Tepper, Kim Ferriman, and Justine Cassell. 2007. Trading spaces: How humans and humanoids use speech and gesture to give directions. In Toyoaki Nishida, editor, Conversational Informatics: An Engineering Approach. Wiley. Ivan Laptev. 2005. On space-time interest points. International Journal of Computer Vision, 64(2-3):107– 123. David G. Lowe. 1999. Object recognition from local scale-invariant features. In Proceedings of ICCV, volume 2, pages 1150–1157. Igor Malioutov and Regina Barzilay. 2006. Minimum cut model for spoken lecture segmentation. In Proceedings of ACL, pages 25–32. Igor Malioutov. 2006. Minimum cut model for spoken lecture segmentation. Master’s thesis, Massachusetts Institute of Technology. William C. Mann and Sandra A. Thompson. 1988. 
Rhetorical structure theory: Toward a functional theory of text organization. Text, 8:243–281. Craig Martell. 2005. FORM: An experiment in the annotation of the kinematics of gesture. Ph.D. thesis, University of Pennsylvania. David McNeill. 1992. Hand and Mind. The University of Chicago Press. Juan Carlos Niebles, Hongcheng Wang, and Li Fei-Fei. 2006. Unsupervised Learning of Human Action Categories Using Spatial-Temporal Words. In Proceedings of the British Machine Vision Conference. NIST. 2003. The Rich Transcription Fall 2003 (RT-03F) Evaluation plan. Rebecca J. Passonneau and Diane J. Litman. 1997. Discourse segmentation by human and automated means. Computational Linguistics, 23(1):103–139. Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1):19–36. Francis Quek. 2003. The catchment feature model for multimodal language analysis. In Proceedings of ICCV. Mark Steedman. 1990. Structure and intonation in spoken language understanding. In Proceedings of ACL, pages 9–16. Marc Swerts. 1997. Prosodic features at discourse boundaries of different strength. The Journal of the Acoustical Society of America, 101:514. Gokhan Tur, Dilek Hakkani-Tur, Andreas Stolcke, and Elizabeth Shriberg. 2001. Integrating prosodic and lexical cues for automatic topic segmentation. Computational Linguistics, 27(1):31–57. Masao Utiyama and Hitoshi Isahara. 2001. A statistical model for domain-independent text segmentation. In Proceedings of ACL, pages 491–498. 860
Proceedings of ACL-08: HLT, pages 861–869, Columbus, Ohio, USA, June 2008. c⃝2008 Association for Computational Linguistics Multi-Task Active Learning for Linguistic Annotations Roi Reichart1∗Katrin Tomanek2∗ Udo Hahn2 Ari Rappoport1 1Institute of Computer Science Hebrew University of Jerusalem, Israel {roiri|arir}@cs.huji.ac.il 2Jena University Language & Information Engineering (JULIE) Lab Friedrich-Schiller-Universit¨at Jena, Germany {katrin.tomanek|udo.hahn}@uni-jena.de Abstract We extend the classical single-task active learning (AL) approach. In the multi-task active learning (MTAL) paradigm, we select examples for several annotation tasks rather than for a single one as usually done in the context of AL. We introduce two MTAL metaprotocols, alternating selection and rank combination, and propose a method to implement them in practice. We experiment with a twotask annotation scenario that includes named entity and syntactic parse tree annotations on three different corpora. MTAL outperforms random selection and a stronger baseline, onesided example selection, in which one task is pursued using AL and the selected examples are provided also to the other task. 1 Introduction Supervised machine learning methods have successfully been applied to many NLP tasks in the last few decades. These techniques have demonstrated their superiority over both hand-crafted rules and unsupervised learning approaches. However, they require large amounts of labeled training data for every level of linguistic processing (e.g., POS tags, parse trees, or named entities). When, when domains and text genres change (e.g., moving from commonsense newspapers to scientific biology journal articles), extensive retraining on newly supplied training material is often required, since different domains may use different syntactic structures as well as different semantic classes (entities and relations). ∗Both authors contributed equally to this work. Consequently, with an increasing coverage of a wide variety of domains in human language technology (HLT) systems, we can expect a growing need for manual annotations to support many kinds of application-specific training data. Creating annotated data is extremely laborintensive. The Active Learning (AL) paradigm (Cohn et al., 1996) offers a promising solution to deal with this bottleneck, by allowing the learning algorithm to control the selection of examples to be manually annotated such that the human labeling effort be minimized. AL has been successfully applied already for a wide range of NLP tasks, including POS tagging (Engelson and Dagan, 1996), chunking (Ngai and Yarowsky, 2000), statistical parsing (Hwa, 2004), and named entity recognition (Tomanek et al., 2007). However, AL is designed in such a way that it selects examples for manual annotation with respect to a single learning algorithm or classifier. Under this AL annotation policy, one has to perform a separate annotation cycle for each classifier to be trained. In the following, we will refer to the annotations supplied for a classifier as the annotations for a single annotation task. Modern HLT systems often utilize annotations resulting from different tasks. For example, a machine translation system might use features extracted from parse trees and named entity annotations. For such an application, we obviously need the different annotations to reside in the same text corpus. 
It is not clear how to apply the single-task AL approach here, since a training example that is beneficial for one task might not be so for others. We could annotate 861 the same corpus independently by the two tasks and merge the resulting annotations, but that (as we show in this paper) would possibly yield sub-optimal usage of human annotation efforts. There are two reasons why multi-task AL, and by this, a combined corpus annotated for various tasks, could be of immediate benefit. First, annotators working on similar annotation tasks (e.g., considering named entities and relations between them), might exploit annotation data from one subtask for the benefit of the other. If for each subtask a separate corpus is sampled by means of AL, annotators will definitely lack synergy effects and, therefore, annotation will be more laborious and is likely to suffer in terms of quality and accuracy. Second, for dissimilar annotation tasks – take, e.g., a comprehensive HLT pipeline incorporating morphological, syntactic and semantic data – a classifier might require features as input which constitute the output of another preceding classifier. As a consequence, training such a classifier which takes into account several annotation tasks will best be performed on a rich corpus annotated with respect to all inputrelevant tasks. Both kinds of annotation tasks, similar and dissimilar ones, constitute examples of what we refer to as multi-task annotation problems. Indeed, there have been efforts in creating resources annotated with respect to various annotation tasks though each of them was carried out independently of the other. In the general language UPenn annotation efforts for the WSJ sections of the Penn Treebank (Marcus et al., 1993), sentences are annotated with POS tags, parse trees, as well as discourse annotation from the Penn Discourse Treebank (Miltsakaki et al., 2008), while verbs and verb arguments are annotated with Propbank rolesets (Palmer et al., 2005). In the biomedical GENIA corpus (Ohta et al., 2002), scientific text is annotated with POS tags, parse trees, and named entities. In this paper, we introduce multi-task active learning (MTAL), an active learning paradigm for multiple annotation tasks. We propose a new AL framework where the examples to be annotated are selected so that they are as informative as possible for a set of classifiers instead of a single classifier only. This enables the creation of a single combined corpus annotated with respect to various annotation tasks, while preserving the advantages of AL with respect to the minimization of annotation efforts. In a proof-of-concept scenario, we focus on two highly dissimilar tasks, syntactic parsing and named entity recognition, study the effects of multi-task AL under rather extreme conditions. We propose two MTAL meta-protocols and a method to implement them for these tasks. We run experiments on three corpora for domains and genres that are very different (WSJ: newspapers, Brown: mixed genres, and GENIA: biomedical abstracts). Our protocols outperform two baselines (random and a stronger onesided selection baseline). In Section 2 we introduce our MTAL framework and present two MTAL protocols. In Section 3 we discuss the evaluation of these protocols. Section 4 describes the experimental setup, and results are presented in Section 5. We discuss related work in Section 6. Finally, we point to open research issues for this new approach in Section 7. 
2 A Framework for Multi-Task AL In this section we introduce a sample selection framework that aims at reducing the human annotation effort in a multiple annotation scenario. 2.1 Task Definition To measure the efficiency of selection methods, we define the training quality TQ of annotated material S as the performance p yielded with a reference learner X trained on that material: TQ(X, S) = p. A selection method can be considered better than another one if a higher TQ is yielded with the same amount of examples being annotated. Our framework is an extension of the Active Learning (AL) framework (Cohn et al., 1996)). The original AL framework is based on querying in an iterative manner those examples to be manually annotated that are most useful for the learner at hand. The TQ of an annotated corpus selected by means of AL is much higher than random selection. This AL approach can be considered as single-task AL because it focuses on a single learner for which the examples are to be selected. In a multiple annotation scenario, however, there are several annotation tasks to be accomplished at once and for each task typically a separate statistical model will then be trained. Thus, the goal of multi-task AL is to query those examples for 862 human annotation that are most informative for all learners involved. 2.2 One-Sided Selection vs. Multi-Task AL The naive approach to select examples in a multiple annotation scenario would be to perform a singletask AL selection, i.e., the examples to be annotated are selected with respect to one of the learners only.1 In a multiple annotation scenario we call such an approach one-sided selection. It is an intrinsic selection for the reference learner, and an extrinsic selection for all the other learners also trained on the annotated material. Obviously, a corpus compiled with the help of one-sided selection will have a good TQ for that learner for which the intrinsic selection has taken place. For all the other learners, however, we have no guarantee that their TQ will not be inferior than the TQ of a random selection process. In scenarios where the different annotation tasks are highly dissimilar we can expect extrinsic selection to be rather poor. This intuition is demonstrated by experiments we conducted for named entity (NE) and parse annotation tasks2 (Figure 1). In this scenario, extrinsic selection for the NE annotation task means that examples where selected with respect to the parsing task. Extrinsic selection performed about the same as random selection for the NE task, while for the parsing task extrinsic selection performed markedly worse. This shows that examples that were very informative for the NE learner were not that informative for the parse learner. 2.3 Protocols for Multi-Task AL Obviously, we can expect one-sided selection to perform better for the reference learner (the one for which an intrinsic selection took place) than multitask AL selection, because the latter would be a compromise for all learners involved in the multiple annotation scenario. However, the goal of multitask AL is to minimize the annotation effort over all annotation tasks and not just the effort for a single annotation task. For a multi-task AL protocol to be valuable in a specific multiple annotation scenario, the TQ for all considered learners should be 1Of course, all selected examples would be annotated w.r.t. all annotation tasks. 2See Section 4 for our experimental setup. 1. better than the TQ of random selection, 2. 
and better than the TQ of any extrinsic selection. In the following, we introduce two protocols for multi-task AL. Multi-task AL protocols can be considered meta-protocols because they basically specify how task-specific, single-task AL approaches can be combined into one selection decision. By this, the protocols are independent of the underlying taskspecific AL approaches. 2.3.1 Alternating Selection The alternating selection protocol alternates onesided AL selection. In sj consecutive AL iterations, the selection is performed as one-sided selection with respect to learning algorithm Xj. After that, another learning algorithm is considered for selection for sk consecutive iterations and so on. Depending on the specific scenario, this enables to weight the different annotation tasks by allowing them to guide the selection in more or less AL iterations. This protocol is a straight-forward compromise between the different single-task selection approaches. In this paper we experiment with the special case of si = 1, where in every AL iteration the selection leadership is changed. More sophisticated calibration of the parameters si is beyond the scope of this paper and will be dealt with in future work. 2.3.2 Rank Combination The rank combination protocol is more directly based on the idea to combine single-task AL selection decisions. In each AL iteration, the usefulness score sXj(e) of each unlabeled example e from the pool of examples is calculated with respect to each learner Xj and then translated into a rank rXj(e) where higher usefulness means lower rank number (examples with identical scores get the same rank number). Then, for each example, we sum the rank numbers of each annotation task to get the overall rank r(e) = Pn j=1 rXj(e). All examples are sorted by this combined rank and b examples with lowest rank numbers are selected for manual annotation.3 3As the number of ranks might differ between the single annotation tasks, we normalize them to the coarsest scale. Then we can sum up the ranks as explained above. 863 10000 20000 30000 40000 50000 0.65 0.70 0.75 0.80 tokens f−score random selection extrinsic selection (PARSE−AL) 10000 20000 30000 40000 0.76 0.78 0.80 0.82 0.84 constituents f−score random selection extrinsic selection (NE−AL) Figure 1: Learning curves for random and extrinsic selection on both tasks: named entity annotation (left) and syntactic parse annotation (right), using the WSJ corpus scenario This protocol favors examples which are good for all learning algorithms. Examples that are highly informative for one task but rather uninformative for another task will not be selected. 3 Evaluation of Multi-Task AL The notion of training quality (TQ) can be used to quantify the effectiveness of a protocol, and by this, annotation costs in a single-task AL scenario. To actually quantify the overall training quality in a multiple annotation scenario one would have to sum over all the single task’s TQs. Of course, depending on the specific annotation task, one would not want to quantify the number of examples being annotated but different task-specific units of annotation. While for entity annotations one does typically count the number of tokens being annotated, in the parsing scenario the number of constituents being annotated is a generally accepted measure. As, however, the actual time needed for the annotation of one example usually differs for different annotation tasks, normalizing exchange rates have to be specified which can then be used as weighting factors. 
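Operationally, both meta-protocols from Section 2.3 amount to a few lines of code, and such weighting factors would enter simply as per-task weights on the ranks. The sketch below is a schematic reimplementation, not the code used in the experiments: usefulness scores are assumed to be given as one row per task, ties share a rank, and the cross-task rank normalization of footnote 3 is omitted for brevity.

```python
# Hedged sketch of the two meta-protocols. `scores` is a matrix of
# usefulness scores with one row per task and one column per unlabeled
# candidate (higher = more useful for that task); b examples are selected.
import numpy as np
from scipy.stats import rankdata

def alternating_selection(scores, b, iteration):
    """One-sided selection whose leading task changes every iteration
    (the special case s_i = 1 used in this paper)."""
    task = iteration % scores.shape[0]
    return np.argsort(-scores[task])[:b]

def rank_combination(scores, b, weights=None):
    """Sum per-task ranks (rank 1 = most useful); lowest weighted sums win."""
    n_tasks = scores.shape[0]
    weights = np.ones(n_tasks) if weights is None else np.asarray(weights)
    # rankdata ranks ascending, so negate scores; tied scores share a rank.
    ranks = np.vstack([rankdata(-scores[j], method='min')
                       for j in range(n_tasks)])
    combined = weights @ ranks
    return np.argsort(combined)[:b]
```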
In this paper, we do not define such weighting factors4, and leave this challenging question to be discussed in the context of psycholinguistic research. We could quantify the overall efficiency score E of a MTAL protocol P by E(P) = n X j=1 αj · TQ(Xj, uj) where uj denotes the individual annotation task’s 4Such weighting factors not only depend on the annotation level or task but also on the domain, and especially on the cognitive load of the annotation task. number of units being annotated (e.g., constituents for parsing) and the task-specific weights are defined by αj. Given weights are properly defined, such a score can be applied to directly compare different protocols and quantify their differences. In practice, such task-specific weights might also be considered in the MTAL protocols. In the alternating selection protocol, the numbers of consecutive iterations si each single task protocol can be tuned according to the α parameters. As for the rank combination protocol, the weights can be considered when calculating the overall rank: r(e) = Pn j=1 βj · rXj(e) where the parameters β1 . . . βn reflect the values of α1 . . . αn (though they need not necessarily be the same). In our experiments, we assumed the same weight for all annotation schemata, thus simply setting si = 1, βi = 1. This was done for the sake of a clear framework presentation. Finding proper weights for the single tasks and tuning the protocols accordingly is a subject for further research. 4 Experiments 4.1 Scenario and Task-Specific Selection Protocols The tasks in our scenario comprise one semantic task (annotation with named entities (NE)) and one syntactic task (annotation with PCFG parse trees). The tasks are highly dissimilar, thus increasing the potential value of MTAL. Both tasks are subject to intensive research by the NLP community. The MTAL protocols proposed are metaprotocols that combine the selection decisions of the underlying, task-specific AL protocols. In our scenario, the task-specific AL protocols are 864 committee-based (Freund et al., 1997) selection protocols. In committee-based AL, a committee consists of k classifiers of the same type trained on different subsets of the training data.5 Each committee member then makes its predictions on the unlabeled examples, and those examples on which the committee members disagree most are considered most informative for learning and are thus selected for manual annotation. In our scenario the example grain-size is the sentence level. For the NE task, we apply the AL approach of Tomanek et al. (2007). The committee consists of k1 = 3 classifiers and the vote entropy (VE) (Engelson and Dagan, 1996) is employed as disagreement metric. It is calculated on the token-level as V Etok(t) = − 1 log k c X i=0 V (li, t) k log V (li, t) k (1) where V (li,t) k is the ratio of k classifiers where the label li is assigned to a token t. The sentence level vote entropy V Esent is then the average over all tokens tj of sentence s. For the parsing task, the disagreement score is based on a committee of k2 = 10 instances of Dan Bikel’s reimplementation of Collins’ parser (Bickel, 2005; Collins, 1999). For each sentence in the unlabeled pool, the agreement between the committee members was calculated using the function reported by Reichart and Rappoport (2007): AF(s) = 1 N X i,l∈[1...N],i̸=l fscore(mi, ml) (2) Where mi and ml are the committee members and N = k2·(k2−1) 2 is the number of pairs of different committee members. 
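Both disagreement measures are straightforward to compute. The sketch below renders Equation 1 and Equation 2 schematically: the committee size and label inventory in the example are made up, and the pairwise f-score between two committee outputs is left as an abstract callback rather than a concrete parse-evaluation routine.

```python
# Hedged sketch of committee disagreement. `votes` holds one label sequence
# per committee member for a single sentence (k members, equal length).
import math
from collections import Counter

def vote_entropy_token(labels_for_token, k):
    """Equation 1: normalized entropy of the committee's votes on one token."""
    ent = 0.0
    for count in Counter(labels_for_token).values():
        p = count / k
        ent -= p * math.log(p)
    return ent / math.log(k)

def vote_entropy_sentence(votes):
    k = len(votes)
    per_token = [vote_entropy_token([seq[i] for seq in votes], k)
                 for i in range(len(votes[0]))]
    return sum(per_token) / len(per_token)

def committee_agreement(outputs, pairwise_fscore):
    """Equation 2: average pairwise f-score between committee outputs;
    the committee's disagreement is 1 minus this value."""
    k = len(outputs)
    pairs = [(i, l) for i in range(k) for l in range(i + 1, k)]
    return sum(pairwise_fscore(outputs[i], outputs[l])
               for i, l in pairs) / len(pairs)

# Example: three committee members disagree on the middle token.
votes = [["O", "PER", "O"], ["O", "PER", "O"], ["O", "LOC", "O"]]
print(vote_entropy_sentence(votes))
```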
This function calculates the agreement between the members of each pair by calculating their relative f-score and then averages the pairs’ scores. The disagreement of the committee on a sentence is simply 1 −AF(s). 4.2 Experimental settings For the NE task we employed the classifier described by Tomanek et al. (2007): The NE tagger is based on Conditional Random Fields (Lafferty et al., 2001) 5We randomly sampled L = 3 4 of the training data to create each committee member. and has a rich feature set including orthographical, lexical, morphological, POS, and contextual features. For parsing, Dan Bikel’s reimplementation of Collins’ parser is employed, using gold POS tags. In each AL iteration we select 100 sentences for manual annotation.6 We start with a randomly chosen seed set of 200 sentences. Within a corpus we used the same seed set in all selection scenarios. We compare the following five selection scenarios: Random selection (RS), which serves as our baseline; one-sided AL selection for both tasks (called NE-AL and PARSE-AL); and multi-task AL selection with the alternating selection protocol (alter-MTAL) and the rank combination protocol (ranks-MTAL). We performed our experiments on three different corpora, namely one from the newspaper genre (WSJ), a mixed-genre corpus (Brown), and a biomedical corpus (Bio). Our simulation corpora contain both entity annotations and (constituent) parse annotations. For each corpus we have a pool set (from which we select the examples for annotation) and an evaluation set (used for generating the learning curves). The WSJ corpus is based on the WSJ part of the PENN TREEBANK (Marcus et al., 1993); we used the first 10,000 sentences of section 2-21 as the pool set, and section 00 as evaluation set (1,921 sentences). The Brown corpus is also based on the respective part of the PENN TREEBANK. We created a sample consisting of 8 of any 10 consecutive sentences in the corpus. This was done as Brown contains text from various English text genres, and we did that to create a representative sample of the corpus domains. We finally selected the first 10,000 sentences from this sample as pool set. Every 9th from every 10 consecutive sentences package went into the evaluation set which consists of 2,424 sentences. For both WSJ and Brown only parse annotations though no entity annotations were available. Thus, we enriched both corpora with entity annotations (three entities: person, location, and organization) by means of a tagger trained on the English data set of the CoNLL-2003 shared task (Tjong Kim Sang and De Meulder, 2003).7 The Bio corpus 6Manual annotation is simulated by just unveiling the annotations already contained in our corpora. 7We employed a tagger similar to the one presented by Settles (2004). Our tagger has a performance of ≈84% f-score on the CoNLL-2003 data; inspection of the predicted entities on 865 is based on the parsed section of the GENIA corpus (Ohta et al., 2002). We performed the same divisions as for Brown, resulting in 2,213 sentences in our pool set and 276 sentences for the evaluation set. This part of the GENIA corpus comes with entity annotations. We have collapsed the entity classes annotated in GENIA (cell line, cell type, DNA, RNA, protein) into a single, biological entity class. 5 Results In this section we present and discuss our results when applying the five selection strategies (RS, NEAL, PARSE-AL, alter-MTAL, and ranks-MTAL) to our scenario on the three corpora. 
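Before turning to the numbers, the selection loop underlying these simulations is summarized schematically below; classifier training, scoring, and evaluation are left as abstract stand-ins, and the batch and seed sizes follow the setup just described.

```python
# Hedged sketch of the simulated AL loop: start from a 200-sentence seed,
# then repeatedly score the remaining pool for each task, combine the scores
# with a selection protocol, and move 100 sentences (with their existing
# gold annotations) from the pool into the labeled set.
import numpy as np

def simulate(pool, seed_ids, tasks, protocol, batch=100, iterations=50):
    """tasks: stand-in objects with train(ids), score(ids), and evaluate();
    protocol: maps a (n_tasks, n_candidates) score matrix and a batch size
    to selected candidate positions (e.g. a rank-combination function)."""
    labeled = list(seed_ids)
    in_seed = set(seed_ids)
    remaining = [i for i in range(len(pool)) if i not in in_seed]
    curves = []
    for it in range(iterations):
        if len(remaining) < batch:
            break
        for task in tasks:
            task.train(labeled)                       # retrain on current data
        scores = np.vstack([task.score(remaining) for task in tasks])
        picked = set(int(p) for p in protocol(scores, batch))
        labeled += [remaining[p] for p in picked]
        remaining = [r for j, r in enumerate(remaining) if j not in picked]
        curves.append([task.evaluate() for task in tasks])  # learning curves
    return curves
```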
We refrain from calculating the overall efficiency score (Section 3) here due to the lack of generally accepted weights for the considered annotation tasks. However, we require from a good selection protocol to exceed the performance of random selection and extrinsic selection. In addition, recall from Section 3 that we set the alternate selection and rank combination parameters to si = 1, βi = 1, respectively to reflect a tradeoff between the annotation efforts of both tasks. Figures 2 and 3 depict the learning curves for the NE tagger and the parser on WSJ and Brown, respectively. Each figure shows the five selection strategies. As expected, on both corpora and both tasks intrinsic selection performs best, i.e., for the NE tagger NE-AL and for the parser PARSE-AL. Further, random selection and extrinsic selection perform worst. Most importantly, both MTAL protocols clearly outperform extrinsic and random selection in all our experiments. This is in contrast to NE-AL which performs worse than random selection for all corpora when used as extrinsic selection, and for PARSE-AL that outperforms the random baseline only for Brown when used as extrinsic selection. That is, the MTAL protocols suggest a tradeoff between the annotation efforts of the different tasks, here. On WSJ, both for the NE and the parse annotation tasks, the performance of the MTAL protocols is very similar, though ranks-MTAL performs slightly better. For the parser task, up to 30,000 constituents MTAL performs almost as good as does PARSEAL. This is different for the NE task where NE-AL WSJ and Brown revealed a good tagging performance. clearly outperforms MTAL. On Brown, in general we see the same results, with some minor differences. On the NE task, extrinsic selection (PARSEAL) performs better than random selection, but it is still much worse than intrinsic AL or MTAL. Here, ranks-MTAL significantly outperforms alter-MTAL and almost performs as good as intrinsic selection. For the parser task, we see that extrinsic and random selection are equally bad. Both MTAL protocols perform equally well, again being quite similar to the intrinsic selection. On the BIO corpus8 we observed the same tendencies as in the other two corpora, i.e., MTAL clearly outperforms extrinsic and random selection and supplies a better tradeoff between annotation efforts of the task at hand than onesided selection. Overall, we can say that in all scenarios MTAL performs much better than random selection and extrinsic selection, and in most cases the performance of MTAL (especially but not exclusively, ranksMTAL) is even close to intrinsic selection. This is promising evidence that MTAL selection can be a better choice than one-sided selection in multiple annotation scenarios. Thus, considering all annotation tasks in the selection process (even if the selection protocol is as simple as the alternating selection protocol) is better than selecting only with respect to one task. Further, it should be noted that overall the more sophisticated rank combination protocol does not perform much better than the simpler alternating selection protocol in all scenarios. Finally, Figure 4 shows the disagreement curves for the two tasks on the WSJ corpus. As has already been discussed by Tomanek and Hahn (2008), disagreement curves can be used as a stopping criterion and to monitor the progress of AL-driven annotation. This is especially valuable when no annotated validation set is available (which is needed for plotting learning curves). 
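One simple way to operationalize such a criterion, offered here only as an illustration, is to fit a line to the most recent disagreement values and to stop, or hand over the selection leadership, once the slope stays close to zero.

```python
# Hedged sketch: detect that a disagreement curve has flattened by checking
# the slope of a least-squares line over the last `window` AL iterations.
import numpy as np

def has_flattened(disagreements, window=10, tol=1e-4):
    if len(disagreements) < window:
        return False
    recent = np.asarray(disagreements[-window:], dtype=float)
    slope = np.polyfit(np.arange(window), recent, deg=1)[0]
    return abs(slope) < tol

# Example: a curve that levels off after an initial drop.
curve = [0.020 - 0.001 * min(i, 12) for i in range(30)]
print(has_flattened(curve))   # True once the tail is flat
```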
We can see that the disagreement curves significantly flatten approximately at the same time as the learning curves do. In the context of MTAL, disagreement curves might not only be interesting as a stopping criterion but rather as a switching criterion, i.e., to identify when MTAL could be turned into one-sided selection. This would be the case if in an MTAL scenario, the disagree8The plots for the Bio are omitted due to space restrictions. 866 10000 20000 30000 40000 50000 0.65 0.70 0.75 0.80 0.85 tokens f−score RS NE−AL PARSE−AL alter−MTAL ranks−MTAL 5000 10000 15000 20000 25000 30000 0.55 0.60 0.65 0.70 0.75 0.80 tokens f−score RS NE−AL PARSE−AL alter−MTAL ranks−MTAL Figure 2: Learning curves for NE task on WSJ (left) and Brown (right) 10000 20000 30000 40000 0.76 0.78 0.80 0.82 0.84 constituents f−score RS NE−AL PARSE−AL alter−MTAL ranks−MTAL 5000 10000 15000 20000 25000 30000 35000 0.65 0.70 0.75 0.80 constituents f−score RS NE−AL PARSE−AL alter−MTAL ranks−MTAL Figure 3: Learning curves for parse task on WSJ (left) and Brown (right) ment curve of one task has a slope of (close to) zero. Future work will focus on issues related to this. 6 Related Work There is a large body of work on single-task AL approaches for many NLP tasks where the focus is mainly on better, task-specific selection protocols and methods to quantify the usefulness score in different scenarios. As to the tasks involved in our scenario, several papers address AL for NER (Shen et al., 2004; Hachey et al., 2005; Tomanek et al., 2007) and syntactic parsing (Tang et al., 2001; Hwa, 2004; Baldridge and Osborne, 2004; Becker and Osborne, 2005). Further, there is some work on questions arising when AL is to be used in real-life annotation scenarios, including impaired inter-annotator agreement, stopping criteria for AL-driven annotation, and issues of reusability (Baldridge and Osborne, 2004; Hachey et al., 2005; Zhu and Hovy, 2007; Tomanek et al., 2007). Multi-task AL is methodologically related to approaches of decision combination, especially in the context of classifier combination (Ho et al., 1994) and ensemble methods (Breiman, 1996). Those approaches focus on the combination of classifiers in order to improve the classification error rate for one specific classification task. In contrast, the focus of multi-task AL is on strategies to select training material for multi classifier systems where all classifiers cover different classification tasks. 7 Discussion Our treatment of MTAL within the context of the orthogonal two-task scenario leads to further interesting research questions. First, future investigations will have to focus on the question whether the positive results observed in our orthogonal (i.e., highly dissimilar) two-task scenario will also hold for a more realistic (and maybe more complex) multiple annotation scenario where tasks are more similar and more than two annotation tasks might be involved. Furthermore, several forms of interdependencies may arise between the single annotation tasks. As a first example, consider the (functional) interdependencies (i.e., task similarity) in higherlevel semantic NLP tasks of relation or event recognition. In such a scenario, several tasks including entity annotations and relation/event annotations, as well as syntactic parse data, have to be incorporated at the same time. 
Another type of (data flow) inter867 10000 20000 30000 40000 0.010 0.014 0.018 tokens disagreement RS NE−AL PARSE−AL alter−MTAL ranks−MTAL 10000 20000 30000 40000 5 10 15 20 25 30 35 40 constituents disagreement RS NE−AL PARSE−AL alter−MTAL ranks−MTAL Figure 4: Disagreement curves for NE task (left) and parse task (right) on WSJ dependency occurs in a second scenario where material for several classifiers that are data-dependent on each other – one takes the output of another classifier as input features – has to be efficiently annotated. Whether the proposed protocols are beneficial in the context of such highly interdependent tasks is an open issue. Even more challenging is the idea to provide methodologies helping to predict in an arbitrary application scenario whether the choice of MTAL is truly advantageous. Another open question is how to measure and quantify the overall annotation costs in multiple annotation scenarios. Exchange rates are inherently tied to the specific task and domain. In practice, one might just want to measure the time needed for the annotations. However, in a simulation scenario, a common metric is necessary to compare the performance of different selection strategies with respect to the overall annotation costs. This requires studies on how to quantify, with a comparable cost function, the efforts needed for the annotation of a textual unit of choice (e.g., tokens, sentences) with respect to different annotation tasks. Finally, the question of reusability of the annotated material is an important issue. Reusability in the context of AL means to which degree corpora assembled with the help of any AL technique can be (re)used as a general resource, i.e., whether they are well suited for the training of classifiers other than the ones used during the selection process.This is especially interesting as the details of the classifiers that should be trained in a later stage are typically not known at the resource building time. Thus, we want to select samples valuable to a family of classifiers using the various annotation layers. This, of course, is only possible if data annotated with the help of AL is reusable by modified though similar classifiers (e.g., with respect to the features being used) – compared to the classifiers employed for the selection procedure. The issue of reusability has already been raised but not yet conclusively answered in the context of single-task AL (see Section 6). Evidence was found that reusability up to a certain, though not wellspecified, level is possible. Of course, reusability has to be analyzed separately in the context of various MTAL scenarios. We feel that these scenarios might both be more challenging and more relevant to the reusability issue than the single-task AL scenario, since resources annotated with multiple layers can be used to the design of a larger number of a (possibly more complex) learning algorithms. 8 Conclusions We proposed an extension to the single-task AL approach such that it can be used to select examples for annotation with respect to several annotation tasks. To the best of our knowledge this is the first paper on this issue, with a focus on NLP tasks. We outlined a problem definition and described a framework for multi-task AL. We presented and tested two protocols for multi-task AL. Our results are promising as they give evidence that in a multiple annotation scenario, multi-task AL outperforms naive one-sided and random selection. 
Acknowledgments The work of the second author was funded by the German Ministry of Education and Research within the STEMNET project (01DS001A-C), while the work of the third author was funded by the EC within the BOOTSTREP project (FP6-028099). 868 References Jason Baldridge and Miles Osborne. 2004. Active learning and the total cost of annotation. In Proceedings of EMNLP’04, pages 9–16. Markus Becker and Miles Osborne. 2005. A two-stage method for active learning of statistical grammars. In Proceedings of IJCAI’05, pages 991–996. Daniel M. Bickel. 2005. Code developed at the University of Pennsylvania, http://www.cis.upenn. edu/˜dbikel/software.html. Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140. David A. Cohn, Zoubin Ghahramani, and Michael I. Jordan. 1996. Active learning with statistical models. Journal of Artificial Intelligence Research, 4:129–145. Michael Collins. 1999. Head-driven statistical models for natural language parsing. Ph.D. thesis, University of Pennsylvania. Sean Engelson and Ido Dagan. 1996. Minimizing manual annotation cost in supervised training from corpora. In Proceedings of ACL’96, pages 319–326. Yoav Freund, Sebastian Seung, Eli Shamir, and Naftali Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning, 28(23):133–168. Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the effects of selective sampling on the annotation task. In Proceedings of CoNLL’05, pages 144–151. Tin Kam Ho, Jonathan J. Hull, and Sargur N. Srihari. 1994. Decision combination in multiple classifier systems. IEEE Transactions on Pattern Analysis and Machine Intelligence, 16(1):66–75. Rebecca Hwa. 2004. Sample selection for statistical parsing. Computational Linguistics, 30(3):253–276. John D. Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML’01, pages 282–289. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. Eleni Miltsakaki, Livio Robaldo, Alan Lee, and Aravind K. Joshi. 2008. Sense annotation in the penn discourse treebank. In Proceedings of CICLing’08, pages 275–286. Grace Ngai and David Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In Proceedings of ACL’00, pages 117–125. Tomoko Ohta, Yuka Tateisi, and Jin-Dong Kim. 2002. The GENIA corpus: An annotated research abstract corpus in molecular biology domain. In Proceedings of HLT’02, pages 82–86. Martha Palmer, Paul Kingsbury, and Daniel Gildea. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71– 106. Roi Reichart and Ari Rappoport. 2007. An ensemble method for selection of high quality parses. In Proceedings of ACL’07, pages 408–415, June. Burr Settles. 2004. Biomedical named entity recognition using conditional random fields and rich feature sets. In Proceedings of JNLPBA’04, pages 107–110. Dan Shen, Jie Zhang, Jian Su, Guodong Zhou, and Chew Lim Tan. 2004. Multi-criteria-based active learning for named entity recognition. In Proceedings of ACL’04, pages 589–596. Min Tang, Xiaoqiang Luo, and Salim Roukos. 2001. Active learning for statistical natural language parsing. In Proceedings of ACL’02, pages 120–127. Erik F. Tjong Kim Sang and Fien De Meulder. 2003. 
Proceedings of ACL-08: HLT, pages 870–878, Columbus, Ohio, USA, June 2008. © 2008 Association for Computational Linguistics

Generalized Expectation Criteria for Semi-Supervised Learning of Conditional Random Fields

Gideon S. Mann, Google Inc., 76 Ninth Avenue, New York, NY 10011
Andrew McCallum, Department of Computer Science, University of Massachusetts, 140 Governors Drive, Amherst, MA 01003

Abstract

This paper presents a semi-supervised training method for linear-chain conditional random fields that makes use of labeled features rather than labeled instances. This is accomplished by using generalized expectation criteria to express a preference for parameter settings in which the model's distribution on unlabeled data matches a target distribution. We induce target conditional probability distributions of labels given features from both annotated feature occurrences in context and ad-hoc feature majority label assignment. The use of generalized expectation criteria allows for a dramatic reduction in annotation time by shifting from traditional instance-labeling to feature-labeling, and the methods presented outperform traditional CRF training and other semi-supervised methods when limited human effort is available.

1 Introduction

A significant barrier to applying machine learning to new real world domains is the cost of obtaining the necessary training data. To address this problem, work over the past several years has explored semi-supervised or unsupervised approaches to the same problems, seeking to improve accuracy with the addition of lower cost unlabeled data. Traditional approaches to semi-supervised learning are applied to cases in which there is a small amount of fully labeled data and a much larger amount of unlabeled data, presumably from the same data source. For example, EM (Nigam et al., 1998), transductive SVMs (Joachims, 1999), entropy regularization (Grandvalet and Bengio, 2004), and graph-based methods (Zhu and Ghahramani, 2002; Szummer and Jaakkola, 2002) have all been applied to a limited amount of fully labeled data in conjunction with unlabeled data to improve the accuracy of a classifier.

In this paper, we explore an alternative approach in which, instead of fully labeled instances, the learner has access to labeled features. These features can often be labeled at a lower cost to the human annotator than labeling entire instances, which may require annotating the multiple sub-parts of a sequence structure or tree. Features can be labeled either by specifying the majority label for a particular feature or by annotating a few occurrences of a particular feature in context with the correct label (Figure 1).

Figure 1: Top: Traditional instance-labeling, in which sequences of contiguous tokens are annotated as to their correct label. Bottom: Feature-labeling, in which non-contiguous feature occurrences in context are labeled for the purpose of deriving a conditional probability distribution of labels given a particular feature.

To train models using this information we use generalized expectation (GE) criteria.
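Before the formal treatment of GE criteria in the following sections, the minimal Python sketch below illustrates the feature-labeling idea of Figure 1: a few labeled occurrences of a single feature (here WORD=address) are turned into a conditional distribution of labels given that feature. The label set, the counts, and the optional add-k smoothing are our own illustrative choices, not taken from the paper.

```python
from collections import Counter

def label_distribution_from_occurrences(occurrence_labels, label_set, smoothing=0.0):
    """Estimate p(label | feature) from the labels of a few annotated
    occurrences of one feature; optional add-k smoothing keeps a little
    probability mass on labels that were never observed."""
    counts = Counter(occurrence_labels)
    total = len(occurrence_labels) + smoothing * len(label_set)
    return {y: (counts[y] + smoothing) / total for y in label_set}

# Toy example in the spirit of Figure 1: four occurrences of WORD=address,
# three annotated ADDRESS and one CONTACT (all values are illustrative).
labels = ["ADDRESS", "CONTACT", "RENT", "FEATURES"]
print(label_distribution_from_occurrences(
    ["ADDRESS", "ADDRESS", "ADDRESS", "CONTACT"], labels, smoothing=0.1))
```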
GE criteria are terms in a training objective function that assign scores to values of a model expectation. In particular, we use a version of GE that prefers parameter settings in which certain model expectations are close to target distributions. Previous work has shown how to apply GE criteria to maximum entropy classifiers. In Section 4, we extend GE criteria to semi-supervised learning of linear-chain conditional random fields, using conditional probability distributions of labels given features.

To empirically evaluate this method we compare it with several competing methods for CRF training, including entropy regularization and expected gradient, showing that GE provides significant improvements. We achieve competitive performance in comparison to alternate model families, in particular generative models such as MRFs trained with EM (Haghighi and Klein, 2006) and HMMs trained with soft constraints (Chang et al., 2007). Finally, in Section 5.3 we show that feature-labeling can lead to dramatic reductions in the annotation time that is required in order to achieve the same level of accuracy as traditional instance-labeling.

2 Related Work

There has been a significant amount of work on semi-supervised learning with small amounts of fully labeled data (see Zhu (2005)). However, there has been comparatively less work on learning from alternative forms of labeled resources. One example is Schapire et al. (2002), who present a method in which features are annotated with their associated majority labels and this information is used to bootstrap a parameterized text classification model. Unlike the model presented in this paper, they require some labeled data in order to train their model.

This type of input information (features + majority label) is a powerful and flexible model for specifying alternative inputs to a classifier, and has been additionally used by Haghighi and Klein (2006). In that work, "prototype" features—words with their associated labels—are used to train a generative MRF sequence model. Their probability model can be formally described as:

\[ p_\theta(x, y) = \frac{1}{Z(\theta)} \exp\Big( \sum_k \theta_k F_k(x, y) \Big). \]

Although the partition function must be computed over all (x, y) tuples, learning via EM in this model is possible because of approximations made in computing the partition function.

Another way to gather supervision is by means of prior label distributions. Mann and McCallum (2007) introduce a special case of GE, label regularization, and demonstrate its effectiveness for training maximum entropy classifiers. In label regularization, the model prefers parameter settings in which the model's predicted label distribution on the unsupervised data matches a target distribution. Note that supervision here consists of the full distribution over labels (i.e. conditioned on the maximum entropy "default feature"), instead of simply the majority label. Druck et al. (2007) also use GE with full distributions for semi-supervised learning of maximum entropy models, except here the distributions are on labels conditioned on features. In Section 4 we describe how GE criteria can be applied to CRFs given conditional probability distributions of labels given features.

Another recent method that has been proposed for training sequence models with constraints is Chang et al. (2007). They use constraints for approximate EM training of an HMM, incorporating the constraints by looking only at the top K most-likely sequences from a joint model of likelihood and the constraints.
This model can be applied to the combination of labeled and unlabeled instances, but cannot be applied in situations where only labeled features are available. Additionally, our model can be easily combined with other semi-supervised criteria, such as entropy regularization. Finally, their model is a generative HMM which cannot handle the rich, non-independent feature sets that are available to a CRF.

There have been relatively few different approaches to CRF semi-supervised training. One approach, proposed in both Miller et al. (2004) and Freitag (2004), uses distributional clustering to induce features from a large corpus, and then uses these features to augment the feature space of the labeled data. Since this is an orthogonal method for improving accuracy, it can be combined with many of the other methods discussed above, and indeed we have obtained positive preliminary experimental results with GE criteria (not reported on here).

Another method for semi-supervised CRF training is entropy regularization, initially proposed by Grandvalet and Bengio (2004) and extended to linear-chain CRFs by Jiao et al. (2006). In this formulation, the traditional label likelihood (on supervised data) is augmented with an additional term that encourages the model to predict low-entropy label distributions on the unlabeled data:

\[ O(\theta; D, U) = \sum_d \log p_\theta(y^{(d)}|x^{(d)}) - \lambda H(y|x). \]

This method can be quite brittle, since the minimal entropy solution assigns all of the tokens the same label. [Footnote 1] In general, entropy regularization is fragile, and accuracy gains can come only with precise settings of λ. High values of λ fall into the minimal entropy trap, while low values of λ have no effect on the model (see (Jiao et al., 2006) for an example).

[Footnote 1] In the experiments in this paper, we use λ = 0.001, which we tuned for best performance on the test set, giving an unfair advantage to our competitor.

When some instances have partial labelings (i.e. labels for some of their tokens), it is possible to train CRFs via expected gradient methods (Salakhutdinov et al., 2003). Here a reformulation is presented in which the gradient is computed for a probability distribution with a marginalized hidden variable, z, and observed training labels y:

\[ \nabla L(\theta) = \frac{\partial}{\partial \theta} \log \sum_z p(x, y, z; \theta) = \sum_z p(z|y, x) f_k(x, y, z) - \sum_{z, y'} p(z, y'|x; \theta) f_k(x, y', z). \]

In essence, this resembles the standard gradient for the CRF, except that there is an additional marginalization in the first term over the hidden variable z. This type of training has been applied by Quattoni et al. (2007) for hidden-state conditional random fields, and can be equally applied to semi-supervised conditional random fields. Note, however, that labeling variables of a structured instance (e.g. tokens) is different from labeling features—being both more coarse-grained and applying supervision narrowly only to the individual subpart, not to all places in the data where the feature occurs.

Finally, there are some methods that use auxiliary tasks for training sequence models, though they do not train linear-chain CRFs per se. Ando and Zhang (2005) include a cluster discovery step into the supervised training. Smith and Eisner (2005) use neighborhoods of related instances to figure out what makes found instances "good".
Although these methods can often find good solutions, both are quite sensitive to the selection of auxiliary information, and making good selections requires significant insight.

[Footnote 2] Often these are more complicated than picking informative features as proposed in this paper. One example of the kind of operator used is the transposition operator proposed by Smith and Eisner (2005).

3 Conditional Random Fields

Linear-chain conditional random fields (CRFs) are a discriminative probabilistic model over sequences x of feature vectors and label sequences y = ⟨y_1..y_n⟩, where |x| = |y| = n, and each label y_i has s different possible discrete values. This model is analogous to maximum entropy models for structured outputs, where expectations can be efficiently calculated by dynamic programming. For a linear-chain CRF of Markov order one:

\[ p_\theta(y|x) = \frac{1}{Z(x)} \exp\Big( \sum_k \theta_k F_k(x, y) \Big), \]

where $F_k(x, y) = \sum_i f_k(x, y_i, y_{i+1}, i)$, and the partition function $Z(x) = \sum_y \exp\big( \sum_k \theta_k F_k(x, y) \big)$. Given training data $D = (x^{(1)}, y^{(1)})..(x^{(n)}, y^{(n)})$, the model is traditionally trained by maximizing the log-likelihood $O(\theta; D) = \sum_d \log p_\theta(y^{(d)}|x^{(d)})$ by gradient ascent, where the gradient of the likelihood is:

\[ \frac{\partial}{\partial \theta_k} O(\theta; D) = \sum_d F_k(x^{(d)}, y^{(d)}) - \sum_d \sum_y p_\theta(y|x^{(d)}) F_k(x^{(d)}, y). \]

The second term (the expected counts of the features given the model) can be computed in a tractable amount of time, since according to the Markov assumption, the feature expectations can be rewritten:

\[ \sum_y p_\theta(y|x) F_k(x, y) = \sum_i \sum_{y_i, y_{i+1}} p_\theta(y_i, y_{i+1}|x) f_k(x, y_i, y_{i+1}, i). \]

A dynamic program (the forward/backward algorithm) then computes in time $O(ns^2)$ all the needed probabilities $p_\theta(y_i, y_{i+1})$, where n is the sequence length and s is the number of labels.

4 Generalized Expectation Criteria for Conditional Random Fields

Prior semi-supervised learning methods have augmented a limited amount of fully labeled data with either unlabeled data or with constraints (e.g. features marked with their majority label). GE criteria can use more information than these previous methods. In particular, GE criteria can take advantage of conditional probability distributions of labels given a feature, $p(y | f_k(x) = 1)$. This information provides richer constraints to the model while remaining easily interpretable.

People have good intuitions about the relative predictive strength of different features. For example, it is clear that the probability of label PERSON given the feature WORD=JOHN is high, perhaps around 0.95, whereas for WORD=BROWN it would be lower, perhaps 0.4. These distributions need not be estimated with great precision—it is far better to have the freedom to express shades of gray than to be forced into a binary supervision signal. Another advantage of using conditional probability distributions as probabilistic constraints is that they can be easily estimated from data. For the feature INITIAL-CAPITAL, we identify all tokens with the feature, and then count the labels with which the feature co-occurs.

GE criteria attempt to match these conditional probability distributions by model expectations on unlabeled data, encouraging, for example, the model to predict that the proportion of the label PERSON given the word "john" should be .95 over all of the unlabeled data. In general, a GE (generalized expectation) criterion (McCallum et al., 2007) expresses a preference on the value of a model expectation.
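Before continuing with the GE criterion itself, the forward/backward computation from Section 3 can be made concrete. The following numpy sketch is not the authors' implementation: it works with made-up unnormalized potentials and, for brevity, in probability space rather than the log space a real CRF would use for numerical stability.

```python
import numpy as np

def pairwise_marginals(unary, trans):
    """Forward/backward for a linear chain in O(n * s^2).

    unary: (n, s) unnormalized per-position potentials for each label.
    trans: (s, s) unnormalized transition potentials between adjacent labels.
    Returns the (n-1, s, s) pairwise marginals p(y_i, y_{i+1} | x) and Z(x).
    """
    n, s = unary.shape
    alpha = np.zeros((n, s))
    beta = np.zeros((n, s))
    alpha[0] = unary[0]
    for i in range(1, n):                       # forward pass
        alpha[i] = unary[i] * (alpha[i - 1] @ trans)
    beta[-1] = 1.0
    for i in range(n - 2, -1, -1):              # backward pass
        beta[i] = trans @ (unary[i + 1] * beta[i + 1])
    Z = alpha[-1].sum()                         # partition function Z(x)
    marg = np.empty((n - 1, s, s))
    for i in range(n - 1):
        marg[i] = alpha[i][:, None] * trans * (unary[i + 1] * beta[i + 1])[None, :] / Z
    return marg, Z

# Tiny made-up example: 3 tokens, 2 labels; the numbers are illustrative only.
unary = np.array([[2.0, 1.0], [1.0, 3.0], [1.5, 1.5]])
trans = np.array([[1.2, 0.8], [0.5, 1.5]])
marg, Z = pairwise_marginals(unary, trans)
print(Z, marg[0].sum())   # each pairwise table sums to 1
```

We now return to the GE criterion.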
One kind of preference may be expressed by a distance function ∆, a target expectation $\hat{f}$, data D, a function f, and a model distribution $p_\theta$; the GE criterion objective function term is $\Delta\big( \hat{f}, E[f(x)] \big)$. For the purposes of this paper, we set the functions to be conditional probability distributions and set $\Delta(p, q) = D(p\|q)$, the KL-divergence between two distributions. [Footnote 3]

For semi-supervised training of CRFs, we augment the objective function with the regularization term:

\[ O(\theta; D, U) = \sum_d \log p_\theta(y^{(d)}|x^{(d)}) - \sum_k \frac{\theta_k^2}{2\sigma^2} - \lambda D(\hat{p} \| \tilde{p}_\theta), \]

where $\hat{p}$ is given as a target distribution and

\[ \tilde{p}_\theta = \tilde{p}_\theta(y_j | f_m(x, j) = 1) = \frac{1}{|U_m|} \sum_{x \in U_m} \sum_{j^\star} p_\theta(y_{j^\star} | x), \]

with the unnormalized potential

\[ \tilde{q}_\theta = \tilde{q}_\theta(y_j | f_m(x, j) = 1) = \sum_{x \in U_m} \sum_{j^\star} p_\theta(y_{j^\star} | x), \]

where $f_m(x, j)$ is a feature that depends only on the observation sequence x, $j^\star$ is defined as $\{ j : f_m(x, j) = 1 \}$, and $U_m$ is the set of sequences where $f_m(x, j)$ is present for some j. [Footnote 4]

[Footnote 3] We are actively investigating different choices of distance functions which may have different generalization properties.
[Footnote 4] This formulation assumes binary features.

Computing the Gradient. To compute the gradient of the GE criterion, $D(\hat{p}\|\tilde{p}_\theta)$, first we drop terms that are constant with respect to the partial derivative, and we derive the gradient as follows:

\[ \frac{\partial}{\partial \theta_k} \sum_l \hat{p} \log \tilde{q}_\theta = \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \frac{\partial}{\partial \theta_k} \tilde{q}_\theta = \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_{j^\star} \frac{\partial}{\partial \theta_k} p_\theta(y_{j^\star} = l \,|\, x) = \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_{j^\star} \sum_{y_{-j^\star}} \frac{\partial}{\partial \theta_k} p_\theta(y_{j^\star} = l, y_{-j^\star} \,|\, x), \]

where $y_{-j} = \langle y_{1..(j-1)} y_{(j+1)..n} \rangle$. The last step follows from the definition of the marginal probability $P(y_j|x)$. Now that we have a familiar form in which we are taking the gradient of a particular label sequence, we can continue:

\[ = \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_{j^\star} \sum_{y_{-j^\star}} p_\theta(y_{j^\star} = l, y_{-j^\star} \,|\, x) F_k(x, y) - \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_{j^\star} \sum_{y_{-j^\star}} p_\theta(y_{j^\star} = l, y_{-j^\star} \,|\, x) \sum_{y'} p_\theta(y'|x) F_k(x, y') \]

\[ = \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_i \sum_{y_i, y_{i+1}} f_k(x, y_i, y_{i+1}, i) \sum_{j^\star} p_\theta(y_i, y_{i+1}, y_{j^\star} = l \,|\, x) - \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \sum_{x \in U} \sum_i \sum_{y_i, y_{i+1}} f_k(x, y_i, y_{i+1}, i)\, p_\theta(y_i, y_{i+1} | x) \sum_{j^\star} p_\theta(y_{j^\star} = l \,|\, x). \]

After combining terms and rearranging we arrive at the final form of the gradient:

\[ = \sum_{x \in U} \sum_i \sum_{y_i, y_{i+1}} f_k(x, y_i, y_{i+1}, i) \sum_l \frac{\hat{p}}{\tilde{q}_\theta} \Big[ \sum_{j^\star} p_\theta(y_i, y_{i+1}, y_{j^\star} = l \,|\, x) - p_\theta(y_i, y_{i+1} | x) \sum_{j^\star} p_\theta(y_{j^\star} = l \,|\, x) \Big]. \]

Here, the second term is easily gathered from forward/backward, but obtaining the first term is somewhat more complicated. Computing this term naively would require multiple runs of constrained forward/backward. Here we present a more efficient method that requires only one run of forward/backward. [Footnote 5] First we decompose the probability into two parts:

\[ \sum_{j^\star} p_\theta(y_i, y_{i+1}, y_{j^\star} = l \,|\, x) = \sum_{j=1}^{i} p_\theta(y_i, y_{i+1}, y_j = l \,|\, x)\, I(j \in j^\star) + \sum_{j=i+1}^{J} p_\theta(y_i, y_{i+1}, y_j = l \,|\, x)\, I(j \in j^\star). \]

Next, we show how to compute these terms efficiently. Similar to forward/backward, we build a lattice of intermediate results that can then be used to calculate the quantity of interest:

\[ \sum_{j=1}^{i} p_\theta(y_i, y_{i+1}, y_j = l \,|\, x)\, I(j \in j^\star) = p(y_i, y_{i+1}|x)\, \delta(y_i, l)\, I(i \in j^\star) + \sum_{j=1}^{i-1} p_\theta(y_i, y_{i+1}, y_j = l \,|\, x)\, I(j \in j^\star) = p(y_i, y_{i+1}|x)\, \delta(y_i, l)\, I(i \in j^\star) + \Big[ \sum_{y_{i-1}} \sum_{j=1}^{i-1} p_\theta(y_{i-1}, y_i, y_j = l \,|\, x)\, I(j \in j^\star) \Big]\, p_\theta(y_{i+1} | y_i, x). \]

For efficiency, $\sum_{y_{i-1}} \sum_{j=1}^{i-1} p_\theta(y_{i-1}, y_i, y_j = l \,|\, x)\, I(j \in j^\star)$ is saved at each stage in the lattice. $\sum_{j=i+1}^{J} p_\theta(y_{i-1}, y_i, y_j = l \,|\, x)\, I(j \in j^\star)$ can be computed in the same fashion. To compute the lattices it takes time $O(ns^2)$, and one lattice must be computed for each label, so the total time is $O(ns^3)$.

[Footnote 5] (Kakade et al., 2002) propose a related method that computes $p(y_{1..i} = l_{1..i} \,|\, y_{i+1} = l)$.

5 Experimental Results

We use the CLASSIFIEDS data provided by Grenager et al.
(2005) and compare with results reported by HK06 (Haghighi and Klein, 2006) and CRR07 (Chang et al., 2007). HK06 introduced a set of 33 features along with their majority labels, these are the primary set of additional constraints (Table 1). As HK06 notes, these features are selected using statistics of the labeled data, and here we used similar features here in order to compare with previous results. Though in practice we have found that feature selection is often intuitive, recent work has experimented with automatic feature selection using LDA (Druck et al., 2008). For some of the experiments we also use two sets of 33 additional features that we chose by the same method as HK06, the first 33 of which are also shown in Table 1. We use the same tokenization of the dataset as HK06, and training/test/unsupervised sets of 100 instances each. This data differs slightly from the tokenization used by CRR07. In particular it lacks the newline breaks which might be a useful piece of information. There are three types of supervised/semisupervised data used in the experiments. Labeled instances are the traditional or conventionally 874 Label HK06: 33 Features 33 Added Features CONTACT *phone* call *time please appointment more FEATURES kitchen laundry parking room new large ROOMMATES roommate respectful drama i bit mean RESTRICTIONS pets smoking dog no sorry cats UTILITIES utilities pays electricity water garbage included AVAILABLE immediately begin cheaper *month* now *ordinal*0 SIZE *number*1*1 br sq *number*0*1 bedroom bath PHOTOS pictures image link *url*long click photos RENT *number*15*1 $ month deposit lease rent NEIGHBORHOOD close near shopping located bart downtown ADDRESS address carlmont ave san *ordinal*5 # Table 1: Features and their associated majority label. Features for each label were chosen by the method described in HK06 – top frequency for that label and not higher frequency for any other label. + SVD features HK06 53.7% 71.5% CRF + GE/Heuristic 66.9% 68.3% Table 2: Accuracy of semi-supervised learning methods with majority labeled features alone. GE outperforms HK06 when neither model has access to SVD features. When SVD features are included, HK06 has an edge in accuracy. labeled instances used for estimation in traditional CRF training. Majority labeled features are features annotated with their majority label.6 Labeled features are features m where the distribution p(yi|fm(x, i)) has been specified. In Section 5.3 we estimate these distributions from isolated labeled tokens. We evaluate the system in two scenarios: (1) with feature constraints alone and (2) feature constraints in conjunction with a minimal amount of labeled instances. There is little prior work that demonstrates the use of both scenarios; CRR07 can only be applied when there is some labeled data, while HK06 could be applied in both scenarios though there are no such published experiments. 5.1 Majority Labeled Features Only When using majority labeled features alone, it can be seen in Table 2 that GE is the best performing method. This is important, as it demonstrates that GE out of the box can be used effectively, without tuning and extra modifications. 6While HK06 and CRR07 require only majority labeled features, GE criteria use conditional probability distributions of labels given features, and so in order to apply GE we must decide on a particular distribution for each feature constraint. 
In sections 5.1 and 5.2 we use a simple heuristic to derive distributions from majority label information: we assign .99 probability to the majority label of the feature and divide the remaining probability uniformly among the remainder of the labels. Labeled Instances 10 25 100 supervised HMM 61.6% 70.0% 76.3% supervised CRF 64.6% 72.9% 79.4% CRF+ Entropy Reg. 67.3% 73.7% 79.5% CRR07 70.9% 74.8% 78.6% + inference constraints 74.7% 78.5% 81.7% CRF+GE/Heuristic 72.6% 76.3% 80.1% Table 3: Accuracy of semi-supervised learning methods with constraints and limited amounts of training data. Even though CRR07 uses more constraints and requires additional development data for estimating mixture weights, GE still outperforms CRR07 when that system is run without applying constraints during inference. When these constraints are applied during test-time inference, CRR07 has an edge over the CRF trained with GE criteria. In their original work, HK06 propose a method for generating additional features given a set of “prototype” features (the feature constraints in Table 1), which they demonstrate to be highly effective. In their method, they collect contexts around all words in the corpus, then perform a SVD decomposition. They take the first 50 singular values for all words, and then if a word is within a thresholded distance to a prototype feature, they assign that word a new feature which indicates close similarity to a prototype feature. When SVD features such as these are made available to the systems, HK06 has a higher accuracy.7 For the remainder of the experiments we use the SVD feature enhanced data sets.8 We ran additional experiments with expected gradient methods but found them to be ineffective, reaching around 50% accuracy on the experiments with the additional SVD features, around 20% less than the competing methods. 5.2 Majority Labeled Features and Labeled Instances Labeled instances are available, the technique described in CRR07 can be used. While CRR07 is run on the same data set as used by HK06, a direct comparison is problematic. First, they use additional constraints beyond those used in this paper and those 7We generated our own set of SVD features, so they might not match exactly the SVD features described in HK06. 8One further experiment HK06 performs which we do not duplicate here is post-processing the label assignments to better handle field boundaries. With this addition they realize another 2.5% improvement. 875 used by HK06 (e.g. each contiguous label sequence must be at least 3 labels long)—so their results cannot be directly compared. Second, they require additional training data to estimate weights for their soft constraints, and do not measure how much of this additional data is needed. Third, they use a slightly different tokenization procedure. Fourth, CRR07 uses different subsets of labeled training instances than used here. For these reasons, the comparison between the method presented here and CRR07 cannot be exact. The technique described in CRR07 can be applied in two ways: constraints can be applied during learning, and they can also be applied during inference. We present comparisons with both of these systems in Table 3. CRFs trained with GE criteria consistently outperform CRR07 when no constraints are applied during inference time, even though CRR07 has additional constraints. When the method in CRR07 is applied with constraints in inference time, it is able to outperform CRFs trained with GE. 
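Stepping back to the feature constraints themselves: the sketch below shows how the heuristic just described (0.99 on the majority label, the remainder uniform) can be turned into a target distribution, and how the resulting KL term D(p̂ ∥ p̃θ) from Section 4 could be evaluated from per-token model marginals at the positions where the feature fires. The label set, marginals, and positions are illustrative only; this is not the authors' code.

```python
import numpy as np

LABELS = ["ADDRESS", "CONTACT", "RENT", "FEATURES"]   # illustrative subset

def heuristic_distribution(majority_label, labels=LABELS, majority_mass=0.99):
    """Heuristic of Sections 5.1/5.2: most of the mass on the majority label,
    the remaining probability divided uniformly among the other labels."""
    rest = (1.0 - majority_mass) / (len(labels) - 1)
    return np.array([majority_mass if y == majority_label else rest for y in labels])

def ge_kl_term(token_marginals, feature_positions, target_dist, eps=1e-12):
    """KL(p_hat || p_tilde) for one labeled feature: p_tilde is the model's
    (normalized) average posterior over the tokens where the feature is present."""
    q_tilde = token_marginals[feature_positions].sum(axis=0)   # unnormalized
    p_tilde = q_tilde / q_tilde.sum()
    p_hat = np.asarray(target_dist)
    return float(np.sum(p_hat * np.log((p_hat + eps) / (p_tilde + eps))))

# Illustrative numbers: six tokens, the feature (say WORD=address) fires at 1 and 4.
marginals = np.array([[0.10, 0.70, 0.10, 0.10],
                      [0.80, 0.10, 0.05, 0.05],
                      [0.25, 0.25, 0.25, 0.25],
                      [0.10, 0.10, 0.60, 0.20],
                      [0.70, 0.20, 0.05, 0.05],
                      [0.20, 0.30, 0.30, 0.20]])
print(ge_kl_term(marginals, [1, 4], heuristic_distribution("ADDRESS")))
```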
We tried adding the additional constraints described in CRR07 during test-time inference in our system, but found no accuracy improvement. After doing error inspection, those additional constraints weren’t frequently violated by the GE trained method, which also suggests that adding them wouldn’t have a significant effect during training either. It is possible that for GE training there are alternative inferencetime constraints that would improve performance, but we didn’t pursue this line of investigation as there are benefits to operating within a formal probabilistic model, and eschewing constraints applied during inference time. Without these constraints, probabilistic models can be combined easily with one another in order to arrive at a joint model, and adding in these constraints at inference time complicates the nature of the combination. 5.3 Labeled Features vs. Labeled Instances In the previous section, the supervision signal was the majority label of each feature.9 Given a feature of interest, a human can gather a set of tokens that have this feature and label them to discover the cor9It is not clear how these features would be tagged with majority label in a real use case. Tagging data to discover the majority label could potentially require a large number of tagged instances before the majority label was definitively identified. Accuracy Tokens 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0.85 10 100 1000 10000 100000 Traditional Instance Labeling 33 Labeled Features 66 Labeled Features 99 Labeled Features CRR07 + inference time constraints Figure 2: Accuracy of supervised and semi-supervised learning methods for fixed numbers of labeled tokens. Training a GE model with only labeled features significantly outperforms traditional log-likelihood training with labeled instances for comparable numbers of labeled tokens. When training on less than 1500 annotated tokens, it also outperforms CRR07 + inference time constraints, which uses not only labeled tokens but additional constraints and development data for estimating mixture weights. Labeled Instances 0 10 25 100 HK06 71.5% GE/Heuristic 68.3% 72.6% 76.3% 80.1% GE/Sampled 73.0% 74.6% 77.2% 80.5% Table 4: Accuracy of semi-supervised learning methods comparing the effects of (1) a heuristic for setting conditional distributions of labels given features and (2) estimating this distributions via human annotation. When GE is given feature distributions are better than the simple heuristic it is able to realize considerable gains. relation between the feature and the labels.10 While the resulting label distribution information could not be fully utilized by previous methods (HK06 and CRR07 use only the majority label of the word), it can, however, be integrated into the GE criteria by using the distribution from the relative proportions of labels rather than a the previous heuristic distribution. We present a series of experiments that test the advantages of this annotation paradigm. To simulate a human labeler, we randomly sample (without replacement) tokens with the particular feature in question, and generate a label using the human annotations provided in the data. Then we normalize and smooth the raw counts to obtain a 10In this paper we observe a 10x speed-up by using isolated labeled tokens instead of a wholly labeled instances—so even if it takes slightly longer to label isolated tokens, there will still be a substantial gain. 876 conditional probability distribution over labels given feature. 
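The simulated annotator just described (sample occurrences of one feature, read off their gold labels, then normalize and smooth the counts) can be sketched as follows. The add-one smoothing constant and the gold labels are our own illustrative choices, since the text does not specify the smoothing used.

```python
import random
from collections import Counter

def sampled_feature_distribution(gold_labels_for_feature, label_set, k,
                                 smoothing=1.0, seed=0):
    """Simulate labeling k occurrences of a feature (sampled without
    replacement), then smooth and normalize the observed label counts."""
    rng = random.Random(seed)
    sample = rng.sample(gold_labels_for_feature, min(k, len(gold_labels_for_feature)))
    counts = Counter(sample)
    total = len(sample) + smoothing * len(label_set)
    return {y: (counts[y] + smoothing) / total for y in label_set}

# Illustrative gold labels for the tokens that carry one particular feature.
gold = ["ADDRESS"] * 14 + ["CONTACT"] * 4 + ["FEATURES"] * 2
print(sampled_feature_distribution(
    gold, ["ADDRESS", "CONTACT", "FEATURES", "RENT"], k=10))
```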
We experiment with samples of 1, 2,5, 10, 100 tokens per feature, as well as with all available labeled data. We sample instances for labeling exclusively from the training and development data, not from the testing data. We train a model using GE with these estimated conditional probability distributions and compare them with corresponding numbers of tokens of traditionally labeled instances. Training from labeled features significantly outperforms training from traditional labeled instances for equivalent numbers of labeled tokens (Figure 2). With 1000 labeled tokens, instance-labeling achieves accuracy around 65%, while labeling 33 features reaches 72% accuracy.11 To achieve the same level of performance as traditional instance labeling, it can require as much as a factor of ten-fold fewer annotations of feature occurrences. For example, the accuracy achieved after labeling 257 tokens of 33 features is 71% – the same accuracy achieved only after labeling more than 2000 tokens in traditional instance-labeling.12 Assuming that labeling one token in isolation takes the same time as labeling one token in a sequence, these results strongly support a new paradigm of labeling in which instead of annotating entire sentences, the human instead selects some key features of interest and labels tokens that have this feature. Particularly intriguing is the flexibility our scenario provides for the selection of “features of interest” to be driven by error analysis. Table 4 compares the heuristic method described above against sampled conditional probability distributions of labels given features13. Sampled distributions yield consistent improvements over the heuristic method. The accuracy with no labeled instances (73.0%) is better than HK06 (71.5%), which demonstrates that the precisely estimated feature distributions are helpful for improving accuracy. Though accuracy begins to level off with distri11Labeling 99 features with 1000 tokens reaches nearly 76%. 12Accuracy at one labeled token per feature is much worse than accuracy with majority label information. This due to the noise introduced by sampling, as there is the potential for a relatively rare label be sampled and labeled, and thereby train the system on a non-canonical supervision signal. 13Where the tokens labeled is the total available number in the data, roughly 2500 tokens. 0 0.2 0.4 0.6 0.8 1 0 2 4 6 8 10 12 Probability Label 0 0.2 0.4 0.6 0.8 1 0 2 4 6 8 10 12 Probability Label Figure 3: From left to right: distributions (with standard error) for the feature WORD=ADDRESS obtained from sampling, using 1 sample per feature and 10 samples per feature. Labels 1, 2, 3, and 9 are (respectively) FEATURES, CONTACT, SIZE, and ADDRESS. Instead of more precisely estimating these distributions, it is more beneficial to label a larger set of features. butions over the original set of 33 labeled features, we ran additional experiments with 66 and 99 labeled features, whose results are also shown in Figure 2.14 The graph shows that with an increased number of labeled features, for the same numbers of labeled tokens, accuracy can be improved. The reason behind this is clear—while there is some gain from increased precision of probability estimates (as they asymptotically approach their “true” values as shown in Figure 3), there is more information to be gained from rougher estimates of a larger set of features. One final point about these additional features is that their distributions are less peaked than the original feature set. 
Where the original feature set distribution has entropy of 8.8, the first 33 added features have an entropy of 22.95. Surprisingly, even ambiguous feature constraints are able to improve accuracy. 6 Conclusion We have presented generalized expectation criteria for linear-chain conditional random fields, a new semi-supervised training method that makes use of labeled features rather than labeled instances. Previous semi-supervised methods have typically used ad-hoc feature majority label assignments as constraints. Our new method uses conditional probability distributions of labels given features and can dramatically reduce annotation time. When these distributions are estimated by means of annotated feature occurrences in context, there is as much as a ten-fold reduction in the annotation time that is required in order to achieve the same level of accuracy over traditional instance-labeling. 14Also note that for less than 1500 tokens of labeling, the 99 labeled features outperform CRR07 with inference time constraints. 877 References R. K. Ando and T. Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. JMLR, 6. M.-W. Chang, L. Ratinov, and D. Roth. 2007. Guiding semi-supervision with constraint-driven learning. In ACL. G. Druck, G. Mann, and A. McCallum. 2007. Leveraging existing resources using generalized expectation criteria. In NIPS Workshop on Learning Problem Design. G. Druck, G. S. Mann, and A. McCallum. 2008. Learning from labeled features using generalized expectation criteria. In SIGIR. D. Freitag. 2004. Trained named entity recognition using distributional clusters. In EMNLP. Y. Grandvalet and Y. Bengio. 2004. Semi-supervised learning by entropy minimization. In NIPS. T. Grenager, D. Klein, and C. Manning. 2005. Unsupervised learning of field segmentation models for information extraction. In ACL. A. Haghighi and D. Klein. 2006. Prototype-driver learning for sequence models. In NAACL. F. Jiao, S. Wang, C.-H. Lee, R. Greiner, and D. Schuurmans. 2006. Semi-supervised conditional random fields for improved sequence segmentation and labeling. In COLING/ACL. Thorsten Joachims. 1999. Transductive inference for text classification using support vector machines. In ICML. S. Kakade, Y-W. Teg, and S.Roweis. 2002. An alternate objective function for markovian fields. In ICML. G. Mann and A. McCallum. 2007. Simple, robust, scalable semi-supervised learning via expectation regularization. In ICML. A. McCallum, G. S. Mann, and G. Druck. 2007. Generalized expectation criteria. Computer science technical note, University of Massachusetts, Amherst, MA. S. Miller, J. Guinness, and A. Zamanian. 2004. Name tagging with word clusters and discriminative training. In ACL. K. Nigam, A. McCallum, S. Thrun, and T. Mitchell. 1998. Learning to classify text from labeled and unlabeled documents. In AAAI. A. Quattoni, S. Wang, L-P. Morency, M. Collins, and T. Darrell. 2007. Hidden-state conditional random fields. In PAMI. H. Raghavan, O. Madani, and R. Jones. 2006. Active learning with feedback on both features and instances. JMLR. R. Salakhutdinov, S. Roweis, and Z. Ghahramani. 2003. Optimization with em and expectation-conjugategradient. In ICML. R. Schapire, M. Rochery, M. Rahim, and N. Gupta. 2002. Incorporating prior knowledge into boosting. In ICML. N. Smith and J. Eisner. 2005. Contrastive estimation: Training log-linear models on unlabeled data. In ACL. Martin Szummer and Tommi Jaakkola. 2002. Partially labeled classification with markov random walks. 
In NIPS, volume 14. X. Zhu and Z. Ghahramani. 2002. Learning from labeled and unlabeled data with label propagation. Technical Report CMU-CALD-02-107, CMU. X. Zhu. 2005. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison. http://www.cs.wisc.edu/~jerryzhu/pub/ssl_survey.pdf.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1–9, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Heterogeneous Transfer Learning for Image Clustering via the Social Web Qiang Yang Hong Kong University of Science and Technology, Clearway Bay, Kowloon, Hong Kong [email protected] Yuqiang Chen Gui-Rong Xue Wenyuan Dai Yong Yu Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240, China {yuqiangchen,grxue,dwyak,yyu}@apex.sjtu.edu.cn Abstract In this paper, we present a new learning scenario, heterogeneous transfer learning, which improves learning performance when the data can be in different feature spaces and where no correspondence between data instances in these spaces is provided. In the past, we have classified Chinese text documents using English training data under the heterogeneous transfer learning framework. In this paper, we present image clustering as an example to illustrate how unsupervised learning can be improved by transferring knowledge from auxiliary heterogeneous data obtained from the social Web. Image clustering is useful for image sense disambiguation in query-based image search, but its quality is often low due to imagedata sparsity problem. We extend PLSA to help transfer the knowledge from social Web data, which have mixed feature representations. Experiments on image-object clustering and scene clustering tasks show that our approach in heterogeneous transfer learning based on the auxiliary data is indeed effective and promising. 1 Introduction Traditional machine learning relies on the availability of a large amount of data to train a model, which is then applied to test data in the same feature space. However, labeled data are often scarce and expensive to obtain. Various machine learning strategies have been proposed to address this problem, including semi-supervised learning (Zhu, 2007), domain adaptation (Wu and Dietterich, 2004; Blitzer et al., 2006; Blitzer et al., 2007; Arnold et al., 2007; Chan and Ng, 2007; Daume, 2007; Jiang and Zhai, 2007; Reichart and Rappoport, 2007; Andreevskaia and Bergler, 2008), multi-task learning (Caruana, 1997; Reichart et al., 2008; Arnold et al., 2008), self-taught learning (Raina et al., 2007), etc. A commonality among these methods is that they all require the training data and test data to be in the same feature space. In addition, most of them are designed for supervised learning. However, in practice, we often face the problem where the labeled data are scarce in their own feature space, whereas there may be a large amount of labeled heterogeneous data in another feature space. In such situations, it would be desirable to transfer the knowledge from heterogeneous data to domains where we have relatively little training data available. To learn from heterogeneous data, researchers have previously proposed multi-view learning (Blum and Mitchell, 1998; Nigam and Ghani, 2000) in which each instance has multiple views in different feature spaces. Different from previous works, we focus on the problem of heterogeneous transfer learning, which is designed for situation when the training data are in one feature space (such as text), and the test data are in another (such as images), and there may be no correspondence between instances in these spaces. The type of heterogeneous data can be very different, as in the case of text and image. 
To consider how heterogeneous transfer learning relates to other types of learning, Figure 1 presents an intuitive illustration of four learning strategies, including traditional machine learning, transfer learning across different distributions, multi-view learning and heterogeneous transfer learning. As we can see, an important distinguishing feature of heterogeneous transfer learning, as compared to other types of learning, is that more constraints on the problem are relaxed, such that data instances do not need to correspond anymore. This allows, for example, a collection of Chinese text documents to be classified using another collection of English text as the 1 training data (c.f. (Ling et al., 2008) and Section 2.1). In this paper, we will give an illustrative example of heterogeneous transfer learning to demonstrate how the task of image clustering can benefit from learning from the heterogeneous social Web data. A major motivation of our work is Web-based image search, where users submit textual queries and browse through the returned result pages. One problem is that the user queries are often ambiguous. An ambiguous keyword such as “Apple” might retrieve images of Apple computers and mobile phones, or images of fruits. Image clustering is an effective method for improving the accessibility of image search result. Loeff et al. (2006) addressed the image clustering problem with a focus on image sense discrimination. In their approach, images associated with textual features are used for clustering, so that the text and images are clustered at the same time. Specifically, spectral clustering is applied to the distance matrix built from a multimodal feature set associated with the images to get a better feature representation. This new representation contains both image and text information, with which the performance of image clustering is shown to be improved. A problem with this approach is that when images contained in the Web search results are very scarce and when the textual data associated with the images are very few, clustering on the images and their associated text may not be very effective. Different from these previous works, in this paper, we address the image clustering problem as a heterogeneous transfer learning problem. We aim to leverage heterogeneous auxiliary data, social annotations, etc. to enhance image clustering performance. We observe that the World Wide Web has many annotated images in Web sites such as Flickr (http://www.flickr.com), which can be used as auxiliary information source for our clustering task. In this work, our objective is to cluster a small collection of images that we are interested in, where these images are not sufficient for traditional clustering algorithms to perform well due to data sparsity and the low level of image features. We investigate how to utilize the readily available socially annotated image data on the Web to improve image clustering. Although these auxiliary data may be irrelevant to the images to be clustered and cannot be directly used to solve the data sparsity problem, we show that they can still be used to estimate a good latent feature representation, which can be used to improve image clustering. 2 Related Works 2.1 Heterogeneous Transfer Learning Between Languages In this section, we summarize our previous work on cross-language classification as an example of heterogeneous transfer learning. This example is related to our image clustering problem because they both rely on data from different feature spaces. 
As the World Wide Web in China grows rapidly, it has become an increasingly important problem to be able to accurately classify Chinese Web pages. However, because the labeled Chinese Web pages are still not sufficient, we often find it difficult to achieve high accuracy by applying traditional machine learning algorithms to the Chinese Web pages directly. Would it be possible to make the best use of the relatively abundant labeled English Web pages for classifying the Chinese Web pages? To answer this question, in (Ling et al., 2008), we developed a novel approach for classifying the Web pages in Chinese using the training documents in English. In this subsection, we give a brief summary of this work. The problem to be solved is: we are given a collection of labeled English documents and a large number of unlabeled Chinese documents. The English and Chinese texts are not aligned. Our objective is to classify the Chinese documents into the same label space as the English data. Our key observation is that even though the data use different text features, they may still share many of the same semantic information. What we need to do is to uncover this latent semantic information by finding out what is common among them. We did this in (Ling et al., 2008) by using the information bottleneck theory (Tishby et al., 1999). In our work, we first translated the Chinese document into English automatically using some available translation software, such as Google translate. Then, we encoded the training text as well as the translated target text together, in terms of the information theory. We allowed all the information to be put through a ‘bottleneck’ and be represented by a limited number of code2 Figure 1: An intuitive illustration of different kinds learning strategies using classification/clustering of image apple and banana as the example. words (i.e. labels in the classification problem). Finally, information bottleneck was used to maintain most of the common information between the two data sources, and discard the remaining irrelevant information. In this way, we can approximate the ideal situation where similar training and translated test pages shared in the common part are encoded into the same codewords, and are thus assigned the correct labels. In (Ling et al., 2008), we experimentally showed that heterogeneous transfer learning can indeed improve the performance of cross-language text classification as compared to directly training learning models (e.g., Naive Bayes or SVM) and testing on the translated texts. 2.2 Other Works in Transfer Learning In the past, several other works made use of transfer learning for cross-feature-space learning. Wu and Oard (2008) proposed to handle the crosslanguage learning problem by translating the data into a same language and applying kNN on the latent topic space for classification. Most learning algorithms for dealing with cross-language heterogeneous data require a translator to convert the data to the same feature space. For those data that are in different feature spaces where no translator is available, Davis and Domingos (2008) proposed a Markov-logic-based transfer learning algorithm, which is called deep transfer, for transferring knowledge between biological domains and Web domains. Dai et al. (2008a) proposed a novel learning paradigm, known as translated learning, to deal with the problem of learning heterogeneous data that belong to quite different feature spaces by using a risk minimization framework. 
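As a point of reference for the cross-language setting summarized in Section 2.1, the sketch below implements only the simple baseline mentioned there (train a standard classifier such as Naive Bayes on labeled English documents and test on machine-translated Chinese documents), not the information bottleneck method of Ling et al. (2008). The use of scikit-learn and the translate_to_english placeholder are our own scaffolding.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def translate_to_english(doc):
    # Placeholder: in practice this would call an external MT system
    # (e.g., an online translation service); here it is the identity.
    return doc

def translate_then_classify(english_docs, english_labels, chinese_docs):
    """Baseline: train Naive Bayes on labeled English documents and predict
    labels for machine-translated Chinese documents in the same feature space."""
    translated = [translate_to_english(d) for d in chinese_docs]
    vectorizer = CountVectorizer()
    X_train = vectorizer.fit_transform(english_docs)
    clf = MultinomialNB().fit(X_train, english_labels)
    return clf.predict(vectorizer.transform(translated))
```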
2.3 Relation to PLSA Our work makes use of PLSA. Probabilistic latent semantic analysis (PLSA) is a widely used probabilistic model (Hofmann, 1999), and could be considered as a probabilistic implementation of latent semantic analysis (LSA) (Deerwester et al., 1990). An extension to PLSA was proposed in (Cohn and Hofmann, 2000), which incorporated the hyperlink connectivity in the PLSA model by using a joint probabilistic model for connectivity and content. Moreover, PLSA has shown a lot of applications ranging from text clustering (Hofmann, 2001) to image analysis (Sivic et al., 2005). 2.4 Relation to Clustering Compared to many previous works on image clustering, we note that traditional image clustering is generally based on techniques such as Kmeans (MacQueen, 1967) and hierarchical clustering (Kaufman and Rousseeuw, 1990). However, when the data are sparse, traditional clustering algorithms may have difficulties in obtaining high-quality image clusters. Recently, several researchers have investigated how to leverage the auxiliary information to improve target clustering 3 performance, such as supervised clustering (Finley and Joachims, 2005), semi-supervised clustering (Basu et al., 2004), self-taught clustering (Dai et al., 2008b), etc. 3 Image Clustering with Annotated Auxiliary Data In this section, we present our annotation-based probabilistic latent semantic analysis algorithm (aPLSA), which extends the traditional PLSA model by incorporating annotated auxiliary image data. Intuitively, our algorithm aPLSA performs PLSA analysis on the target images, which are converted to an image instance-to-feature cooccurrence matrix. At the same time, PLSA is also applied to the annotated image data from social Web, which is converted into a text-to-imagefeature co-occurrence matrix. In order to unify those two separate PLSA models, these two steps are done simultaneously with common latent variables used as a bridge linking them. Through these common latent variables, which are now constrained by both target image data and auxiliary annotation data, a better clustering result is expected for the target data. 3.1 Probabilistic Latent Semantic Analysis Let F = {fi}|F| i=1 be an image feature space, and V = {vi}|V| i=1 be the image data set. Each image vi ∈V is represented by a bag-of-features {f|f ∈ vi ∧f ∈F}. Based on the image data set V, we can estimate an image instance-to-feature co-occurrence matrix A|V|×|F| ∈R|V|×|F|, where each element Aij (1 ≤i ≤|V| and 1 ≤j ≤|F|) in the matrix A is the frequency of the feature fj appearing in the instance vi. Let W = {wi}|W| i=1 be a text feature space. The annotated image data allow us to obtain the cooccurrence information between images v and text features w ∈W. An example of annotated image data is the Flickr (http://www.flickr. com), which is a social Web site containing a large number of annotated images. By extracting image features from the annotated images v, we can estimate a text-to-image feature co-occurrence matrix B|W|×|F| ∈R|W|×|F|, where each element Bij (1 ≤i ≤|W| and 1 ≤j ≤|F|) in the matrix B is the frequency of the text feature wi and the image feature fj occurring together in the annotated image data set. V Z F P(z|v) P(f|z) Figure 2: Graphical model representation of PLSA model. Let Z = {zi}|Z| i=1 be the latent variable set in our aPLSA model. In clustering, each latent variable zi ∈Z corresponds to a certain cluster. 
Our objective is to estimate a clustering function g : V 7→Z with the help of the two cooccurrence matrices A and B as defined above. To formally introduce the aPLSA model, we start from the probabilistic latent semantic analysis (PLSA) (Hofmann, 1999) model. PLSA is a probabilistic implementation of latent semantic analysis (LSA) (Deerwester et al., 1990). In our image clustering task, PLSA decomposes the instance-feature co-occurrence matrix A under the assumption of conditional independence of image instances V and image features F, given the latent variables Z. P(f|v) = X z∈Z P(f|z)P(z|v). (1) The graphical model representation of PLSA is shown in Figure 2. Based on the PLSA model, the log-likelihood can be defined as: L = X i X j Aij P j′ Aij′ log P(fj|vi) (2) where A|V|×|F| ∈R|V|×|F| is the image instancefeature co-occurrence matrix. The term Aij P j′ Aij′ in Equation (2) is a normalization term ensuring each image is giving the same weight in the loglikelihood. Using EM algorithm (Dempster et al., 1977), which locally maximizes the log-likelihood of the PLSA model (Equation (2)), the probabilities P(f|z) and P(z|v) can be estimated. Then, the clustering function is derived as g(v) = argmax z∈Z P(z|v). (3) Due to space limitation, we omit the details for the PLSA model, which can be found in (Hofmann, 1999). 3.2 aPLSA: Annotation-based PLSA In this section, we consider how to incorporate a large number of socially annotated images in a 4 V W Z F P(z|v) P(z|w) P(f|z) Figure 3: Graphical model representation of aPLSA model. unified PLSA model for the purpose of utilizing the correlation between text features and image features. In the auxiliary data, each image has certain textual tags that are attached by users. The correlation between text features and image features can be formulated as follows. P(f|w) = X z∈Z P(f|z)P(z|w). (4) It is clear that Equations (1) and (4) share a same term P(f|z). So we design a new PLSA model by joining the probabilistic model in Equation (1) and the probabilistic model in Equation (4) into a unified model, as shown in Figure 3. In Figure 3, the latent variables Z depend not only on the correlation between image instances V and image features F, but also the correlation between text features W and image features F. Therefore, the auxiliary socially-annotated image data can be used to help the target image clustering performance by estimating good set of latent variables Z. Based on the graphical model representation in Figure 3, we derive the log-likelihood objective function, in a similar way as in (Cohn and Hofmann, 2000), as follows L = X j " λ X i Aij P j′ Aij′ log P(fj|vi) +(1 −λ) X l Blj P j′ Blj′ log P(fj|wl) # , (5) where A|V|×|F| ∈R|V|×|F| is the image instancefeature co-occurrence matrix, and B|W|×|F| ∈ R|W|×|F| is the text-to-image feature-level cooccurrence matrix. Similar to Equation (2), Aij P j′ Aij′ and Blj P j′ Blj′ in Equation (5) are the normalization terms to prevent imbalanced cases. Furthermore, λ acts as a trade-off parameter between the co-occurrence matrices A and B. In the extreme case when λ = 1, the log-likelihood objective function ignores all the biases from the text-to-image occurrence matrix B. In this case, the aPLSA model degenerates to the traditional PLSA model. Therefore, aPLSA is an extension to the PLSA model. Now, the objective is to maximize the loglikelihood L of the aPLSA model in Equation (5). 
Then we apply the EM algorithm (Dempster et al., 1977) to estimate the conditional probabilities P(f|z), P(z|w) and P(z|v) with respect to each dependence in Figure 3 as follows. • E-Step: calculate the posterior probability of each latent variable z given the observation of image features f, image instances v and text features w based on the old estimate of P(f|z), P(z|w) and P(z|v): P(zk|vi, fj) = P(fj|zk)P(zk|vi) P k′ P(fj|zk′)P(zk′|vi) (6) P(zk|wl, fj) = P(fj|zk)P(zk|wl) P k′ P(fj|zk′)P(zk′|wl) (7) • M-Step: re-estimates conditional probabilities P(zk|vi) and P(zk|wl): P(zk|vi) = X j Aij P j′ Aij′ P(zk|vi, fj) (8) P(zk|wl) = X j Blj P j′ Blj′ P(zk|wl, fj) (9) and conditional probability P(fj|zk), which is a mixture portion of posterior probability of latent variables P(fj|zk) ∝λ X i Aij P j′ Aij′ P(zk|vi, fj) + (1 −λ) X l Blj P j′ Blj′ P(zk|wl, fj) (10) Finally, the clustering function for a certain image v is g(v) = argmax z∈Z P(z|v). (11) From the above equations, we can derive our annotation-based probabilistic latent semantic analysis (aPLSA) algorithm. As shown in Algorithm 1, aPLSA iteratively performs the E-Step and the M-Step in order to seek local optimal points based on the objective function L in Equation (5). 5 Algorithm 1 Annotation-based PLSA Algorithm (aPLSA) Input: The V-F co-occurrence matrix A and WF co-occurrence matrix B. Output: A clustering (partition) function g : V 7→ Z, which maps an image instance v ∈V to a latent variable z ∈Z. 1: Initial Z so that |Z| equals the number clusters desired. 2: Initialize P(z|v), P(z|w), P(f|z) randomly. 3: while the change of L in Eq. (5) between two sequential iterations is greater than a predefined threshold do 4: E-Step: Update P(z|v, f) and P(z|w, f) based on Eq. (6) and (7) respectively. 5: M-Step: Update P(z|v), P(z|w) and P(f|z) based on Eq. (8), (9) and (10) respectively. 6: end while 7: for all v in V do 8: g(v) ←argmax z P(z|v). 9: end for 10: Return g. 4 Experiments In this section, we empirically evaluate the aPLSA algorithm together with some state-of-art baseline methods on two widely used image corpora, to demonstrate the effectiveness of our algorithm aPLSA. 4.1 Data Sets In order to evaluate the effectiveness of our algorithm aPLSA, we conducted experiments on several data sets generated from two image corpora, Caltech-256 (Griffin et al., 2007) and the fifteenscene (Lazebnik et al., 2006). The Caltech-256 data set has 256 image objective categories, ranging from animals to buildings, from plants to automobiles, etc. The fifteen-scene data set contains 15 scenes such as store and forest. From these two corpora, we randomly generated eleven image clustering tasks, including seven 2way clustering tasks, two 4-way clustering task, one 5-way clustering task and one 8-way clustering task. The detailed descriptions for these clustering tasks are given in Table 1. In these tasks, bi7 and oct1 were generated from fifteen-scene data set, and the rest were from Caltech-256 data set. 
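Algorithm 1 of Section 3.2 can be written compactly in numpy, as in the sketch below, which implements the E- and M-step updates of Eqs. (6)-(10). It is not the authors' implementation: the convergence test on L (Algorithm 1, line 3) is replaced by a fixed iteration count, and a small constant is added to avoid division by zero.

```python
import numpy as np

def aplsa(A, B, n_clusters, lam=0.2, n_iter=200, seed=0):
    """Compact numpy version of the aPLSA E/M updates in Eqs. (6)-(10).

    A: (V, F) image-instance x image-feature co-occurrence counts.
    B: (W, F) text-feature x image-feature co-occurrence counts (auxiliary data).
    lam: trade-off between A and B (lam = 1 degenerates to plain PLSA).
    Returns hard cluster assignments argmax_z P(z|v) and the matrix P(z|v).
    """
    rng = np.random.default_rng(seed)
    V, F = A.shape
    W = B.shape[0]
    An = A / A.sum(axis=1, keepdims=True)        # weights A_ij / sum_j' A_ij'
    Bn = B / B.sum(axis=1, keepdims=True)        # weights B_lj / sum_j' B_lj'
    p_f_z = rng.random((n_clusters, F)); p_f_z /= p_f_z.sum(axis=1, keepdims=True)
    p_z_v = rng.random((V, n_clusters)); p_z_v /= p_z_v.sum(axis=1, keepdims=True)
    p_z_w = rng.random((W, n_clusters)); p_z_w /= p_z_w.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step, Eqs. (6)-(7): posteriors P(z | v, f) and P(z | w, f).
        post_v = p_z_v[:, :, None] * p_f_z[None, :, :]
        post_v /= post_v.sum(axis=1, keepdims=True) + 1e-12
        post_w = p_z_w[:, :, None] * p_f_z[None, :, :]
        post_w /= post_w.sum(axis=1, keepdims=True) + 1e-12
        # M-step, Eqs. (8)-(10).
        p_z_v = np.einsum('vf,vkf->vk', An, post_v)
        p_z_w = np.einsum('wf,wkf->wk', Bn, post_w)
        p_f_z = lam * np.einsum('vf,vkf->kf', An, post_v) \
            + (1 - lam) * np.einsum('wf,wkf->kf', Bn, post_w)
        p_f_z /= p_f_z.sum(axis=1, keepdims=True)
    return p_z_v.argmax(axis=1), p_z_v
```

With A built from the target images and B from the Flickr-style auxiliary data, setting lam = 0.2 corresponds to the value reported for the development sets in Section 4.2.2.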
DATA SET INVOLVED CLASSES DATA SIZE bi1 skateboard, airplanes 102, 800 bi2 billiards, mars 278, 155 bi3 cd, greyhound 102, 94 bi4 electric-guitar, snake 122, 112 bi5 calculator, dolphin 100, 106 bi6 mushroom, teddy-bear 202, 99 bi7 MIThighway, livingroom 260, 289 quad1 calculator, diamond-ring, dolphin, microscope 100, 118, 106, 116 quad2 bonsai, comet, frog, saddle 122, 120, 115, 110 quint1 frog, kayak, bear, jesus-christ, watch 115, 102, 101, 87, 201 oct1 MIThighway, MITmountain, kitchen, MITcoast, PARoffice, MITtallbuilding, livingroom, bedroom 260, 374, 210, 360, 215, 356, 289, 216 tune1 coin, horse 123, 270 tune2 socks, spider 111, 106 tune3 galaxy, snowmobile 80, 112 tune4 dice, fern 98, 110 tune5 backpack, lightning, mandolin, swan 151, 136, 93, 114 Table 1: The descriptions of all the image clustering tasks used in our experiment. Among these data sets, bi7 and oct1 were generated from fifteen-scene data set, and the rest were from Caltech-256 data set. To empirically investigate the parameter λ and the convergence of our algorithm aPLSA, we generated five more date sets as the development sets. The detailed description of these five development sets, namely tune1 to tune5 is listed in Table 1 as well. The auxiliary data were crawled from the Flickr (http://www.flickr.com/) web site during August 2007. Flickr is an internet community where people share photos online and express their opinions as social tags (annotations) attached to each image. From Flicker, we collected 19, 959 images and 91, 719 related annotations, among which 2, 600 words are distinct. Based on the method described in Section 3, we estimated the co-occurrence matrix B between text features and image features. This co-occurrence matrix B was used by all the clustering tasks in our experiments. For data preprocessing, we adopted the bag-offeatures representation of images (Li and Perona, 2005) in our experiments. Interesting points were found in the images and described via the SIFT descriptors (Lowe, 2004). Then, the interesting points were clustered to generate a codebook to form an image feature space. The size of codebook was set to 2, 000 in our experiments. Based on the codebook, which serves as the image feature space, each image can be represented as a corresponding feature vector to be used in the next step. To set our evaluation criterion, we used the 6 Data Set KMeans PLSA STC aPLSA separate combined separate combined bi1 0.645±0.064 0.548±0.031 0.544±0.074 0.537±0.033 0.586±0.139 0.482±0.062 bi2 0.687±0.003 0.662±0.014 0.464±0.074 0.692±0.001 0.577±0.016 0.455±0.096 bi3 1.294±0.060 1.300±0.015 1.085±0.073 1.126±0.036 1.103±0.108 1.029±0.074 bi4 1.227±0.080 1.164±0.053 0.976±0.051 1.038±0.068 1.024±0.089 0.919±0.065 bi5 1.450±0.058 1.417±0.045 1.426±0.025 1.405±0.040 1.411±0.043 1.377±0.040 bi6 1.969±0.078 1.852±0.051 1.514±0.039 1.709±0.028 1.589±0.121 1.503±0.030 bi7 0.686±0.006 0.683±0.004 0.643±0.058 0.632±0.037 0.651±0.012 0.624±0.066 quad1 0.591±0.094 0.675±0.017 0.488±0.071 0.662±0.013 0.580±0.115 0.432±0.085 quad2 0.648±0.036 0.646±0.045 0.614±0.062 0.626±0.026 0.591±0.087 0.515±0.098 quint1 0.557±0.021 0.508±0.104 0.547±0.060 0.539±0.051 0.538±0.100 0.502±0.067 oct1 0.659±0.031 0.680±0.012 0.340±0.147 0.691±0.002 0.411±0.089 0.306±0.101 average 0.947±0.029 0.922±0.017 0.786±0.009 0.878±0.006 0.824±0.036 0.741±0.018 Table 2: Experimental result in term of entropy for all data sets and evaluation methods. entropy to measure the quality of our clustering results. 
In information theory, entropy (Shannon, 1948) is a measure of the uncertainty associated with a random variable. In our problem, entropy serves as a measure of randomness of clustering result. The entropy of g on a single latent variable z is defined to be H(g, z) ≜ −P c∈C P(c|z) log2 P(c|z), where C is the class label set of V and P(c|z) = |{v|g(v)=z∧t(v)=c}| |{v|g(v)=z}| , in which t(v) is the true class label of image v. Lower entropy H(g, Z) indicates less randomness and thus better clustering result. 4.2 Empirical Analysis We now empirically analyze the effectiveness of our aPLSA algorithm. Because, to our best of knowledge, few existing methods addressed the problem of image clustering with the help of social annotation image data, we can only compare our aPLSA with several state-of-the-art clustering algorithms that are not directly designed for our problem. The first baseline is the well-known KMeans algorithm (MacQueen, 1967). Since our algorithm is designed based on PLSA (Hofmann, 1999), we also included PLSA for clustering as a baseline method in our experiments. For each of the above two baselines, we have two strategies: (1) separated: the baseline method was applied on the target image data only; (2) combined: the baseline method was applied to cluster the combined data consisting of both target image data and the annotated image data. Clustering results on target image data were used for evaluation. Note that, in the combined data, all the annotations were thrown away since baseline methods evaluated in this paper do not leverage annotation information. In addition, we compared our algorithm aPLSA to a state-of-the-art transfer clustering strategy, known as self-taught clustering (STC) (Dai et al., 2008b). STC makes use of auxiliary data to estimate a better feature representation to benefit the target clustering. In these experiments, the annotated image data were used as auxiliary data in STC, which does not use the annotation text. In our experiments, the performance is in the form of the average entropy and variance of five repeats by randomly selecting 50 images from each of the categories. We selected only 50 images per category, since this paper is focused on clustering sparse data. Table 2 shows the performance with respect to all comparison methods on each of the image clustering tasks measured by the entropy criterion. From the tables, we can see that our algorithm aPLSA outperforms the baseline methods in all the data sets. We believe that is because aPLSA can effectively utilize the knowledge from the socially annotated image data. On average, aPLSA gives rise to 21.8% of entropy reduction and as compared to KMeans, 5.7% of entropy reduction as compared to PLSA, and 10.1% of entropy reduction as compared to STC. 4.2.1 Varying Data Size We now show how the data size affects aPLSA, with two baseline methods KMeans and PLSA as reference. The experiments were conducted on different amounts of target image data, varying from 10 to 80. The corresponding experimental results in average entropy over all the 11 clustering tasks are shown in Figure 4(a). From this figure, we observe that aPLSA always yields a significant reduction in entropy as compared with two baseline methods KMeans and PLSA, regardless of the size of target image data that we used. 
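The entropy criterion defined above can be computed directly from the predicted cluster assignments and the true class labels. The sketch below weights each cluster's H(g, z) by its size to obtain an overall score; this size-weighted aggregation is an assumption on our part, since the text does not spell out how the per-cluster entropies are combined.

```python
import numpy as np
from collections import Counter

def cluster_entropy(assignments, labels):
    """Size-weighted average of H(g, z) over clusters (lower is better).

    assignments: predicted cluster id per image (the clustering function g).
    labels:      true class label per image (t(v)).
    """
    assignments = np.asarray(assignments)
    labels = np.asarray(labels)
    total = len(labels)
    score = 0.0
    for z in np.unique(assignments):
        members = labels[assignments == z]
        probs = np.array(list(Counter(members).values()), dtype=float) / len(members)
        h_z = -np.sum(probs * np.log2(probs))        # H(g, z)
        score += (len(members) / total) * h_z        # weight by cluster size
    return score

# Toy usage: one pure cluster and one evenly mixed cluster give a score of 0.5.
print(cluster_entropy([0, 0, 1, 1], ["cat", "cat", "cat", "dog"]))
```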
Figure 4: (a) The entropy curve as a function of different amounts of data per category. (b) The entropy curve as a function of different number of iterations. (c) The entropy curve as a function of different trade-off parameter λ.

4.2.2 Parameter Sensitivity

In aPLSA, there is a trade-off parameter λ that affects how the algorithm relies on auxiliary data. When λ = 0, aPLSA relies only on the annotated image data B. When λ = 1, aPLSA relies only on the target image data A, in which case aPLSA degenerates to PLSA. Smaller λ indicates heavier reliance on the annotated image data. We have done some experiments on the development sets to investigate how different values of λ affect the performance of aPLSA. We set the number of images per category to 50, and tested the performance of aPLSA. The result in average entropy over all development sets is shown in Figure 4(b). In the experiments described in this paper, we set λ to 0.2, which is the best point in Figure 4(b).

4.2.3 Convergence

In our experiments, we tested the convergence property of our algorithm aPLSA as well. Figure 4(c) shows the average entropy curve given by aPLSA over all development sets. From this figure, we see that the entropy decreases very fast during the first 100 iterations and becomes stable after 150 iterations. We believe that 200 iterations is sufficient for aPLSA to converge.

5 Conclusions

In this paper, we proposed a new learning scenario called heterogeneous transfer learning and illustrated its application to image clustering. Image clustering, a vital component in organizing search results for query-based image search, was shown to be improved by transferring knowledge from unrelated images with annotations in a social Web. This is done by first learning the high-quality latent variables in the auxiliary data, and then transferring this knowledge to help improve the clustering of the target image data. We conducted experiments on two image data sets, using the Flickr data as the annotated auxiliary image data, and showed that our aPLSA algorithm can greatly outperform several state-of-the-art clustering algorithms. In natural language processing, there are many future opportunities to apply heterogeneous transfer learning. In (Ling et al., 2008) we have shown how to classify Chinese text using English text as the training data. We may also consider clustering, topic modeling, question answering, etc., to be done using data in different feature spaces. We can consider data in different modalities, such as video, image and audio, as the training data. Finally, we will explore the theoretical foundations and limitations of heterogeneous transfer learning as well.

Acknowledgement

Qiang Yang thanks Hong Kong CERG grant 621307 for supporting the research.

References

Alina Andreevskaia and Sabine Bergler. 2008. When specialists and generalists work together: Overcoming domain dependence in sentiment tagging. In ACL-08: HLT, pages 290–298, Columbus, Ohio, June.

Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2007. A comparative study of methods for transductive transfer learning. In ICDM 2007 Workshop on Mining and Management of Biological Data, pages 77–82.
Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2008. Exploiting feature hierarchy for transfer learning in named entity recognition. In ACL-08: HLT. Sugato Basu, Mikhail Bilenko, and Raymond J. Mooney. 2004. A probabilistic framework for semi-supervised clustering. In ACM SIGKDD 2004, pages 59–68. John Blitzer, Ryan Mcdonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP 2006, pages 120–128, Sydney, Australia. 8 John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL 2007, pages 440–447, Prague, Czech Republic. Avrim Blum and Tom Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT 1998, pages 92–100, New York, NY, USA. ACM. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. Yee Seng Chan and Hwee Tou Ng. 2007. Domain adaptation with active learning for word sense disambiguation. In ACL 2007, Prague, Czech Republic. David A. Cohn and Thomas Hofmann. 2000. The missing link - a probabilistic model of document content and hypertext connectivity. In NIPS 2000, pages 430–436. Wenyuan Dai, Yuqiang Chen, Gui-Rong Xue, Qiang Yang, and Yong Yu. 2008a. Translated learning: Transfer learning across different feature spaces. In NIPS 2008, pages 353–360. Wenyuan Dai, Qiang Yang, Gui-Rong Xue, and Yong Yu. 2008b. Self-taught clustering. In ICML 2008, pages 200– 207. Omnipress. Hal Daume, III. 2007. Frustratingly easy domain adaptation. In ACL 2007, pages 256–263, Prague, Czech Republic. Jesse Davis and Pedro Domingos. 2008. Deep transfer via second-order markov logic. In AAAI 2008 Workshop on Transfer Learning, Chicago, USA. Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. L, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, pages 391–407. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. J. of the Royal Statistical Society, 39:1–38. Thomas Finley and Thorsten Joachims. 2005. Supervised clustering with support vector machines. In ICML 2005, pages 217–224, New York, NY, USA. ACM. G. Griffin, A. Holub, and P. Perona. 2007. Caltech-256 object category dataset. Technical Report 7694, California Institute of Technology. Thomas Hofmann. 1999 Probabilistic latent semantic analysis. In Proc. of Uncertainty in Artificial Intelligence, UAI99. Pages 289–296 Thomas Hofmann. 2001. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning. volume 42, number 1-2, pages 177–196. Kluwer Academic Publishers. Jing Jiang and Chengxiang Zhai. 2007. Instance weighting for domain adaptation in NLP. In ACL 2007, pages 264– 271, Prague, Czech Republic, June. Leonard Kaufman and Peter J. Rousseeuw. 1990. Finding groups in data: an introduction to cluster analysis. John Wiley and Sons, New York. Svetlana Lazebnik, Cordelia Schmid, and Jean Ponce. 2006. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR 2006, pages 2169–2178, Washington, DC, USA. Fei-Fei Li and Pietro Perona. 2005. A bayesian hierarchical model for learning natural scene categories. In CVPR 2005, pages 524–531, Washington, DC, USA. Xiao Ling, Gui-Rong Xue, Wenyuan Dai, Yun Jiang, Qiang Yang, and Yong Yu. 2008. Can chinese web pages be classified with english data source? In WWW 2008, pages 969–978, New York, NY, USA. ACM. 
Nicolas Loeff, Cecilia Ovesdotter Alm, and David A. Forsyth. 2006. Discriminating image senses by clustering with multimodal features. In COLING/ACL 2006 Main conference poster sessions, pages 547–554. David G. Lowe. 2004. Distinctive image features from scaleinvariant keypoints. International Journal of Computer Vision (IJCV) 2004, volume 60, number 2, pages 91–110. J. B. MacQueen. 1967. Some methods for classification and analysis of multivariate observations. In Proceedings of Fifth Berkeley Symposium on Mathematical Statistics and Probability, pages 1:281–297, Berkeley, CA, USA. Kamal Nigam and Rayid Ghani. 2000. Analyzing the effectiveness and applicability of co-training. In Proceedings of the Ninth International Conference on Information and Knowledge Management, pages 86–93, New York, USA. Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. 2007. Self-taught learning: transfer learning from unlabeled data. In ICML 2007, pages 759– 766, New York, NY, USA. ACM. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In ACL 2007. Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. 2008. Multi-task active learning for linguistic annotations. In ACL-08: HLT, pages 861–869. C. E. Shannon. 1948. A mathematical theory of communication. Bell system technical journal, 27. J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. 2005. Discovering object categories in image collections. In ICCV 2005. Naftali Tishby, Fernando C. Pereira, and William Bialek. The information bottleneck method. 1999. In Proc. of the 37th Annual Allerton Conference on Communication, Control and Computing, pages 368–377. Pengcheng Wu and Thomas G. Dietterich. 2004. Improving svm accuracy by training on auxiliary data sources. In ICML 2004, pages 110–117, New York, NY, USA. Yejun Wu and Douglas W. Oard. 2008. Bilingual topic aspect classification with a few training examples. In ACM SIGIR 2008, pages 203–210, New York, NY, USA. Xiaojin Zhu. 2007. Semi-supervised learning literature survey. Technical Report 1530, Computer Sciences, University of Wisconsin-Madison. 9
2009
1
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 82–90, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Reinforcement Learning for Mapping Instructions to Actions S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, Regina Barzilay Computer Science and Artificial Intelligence Laboratory Massachusetts Institute of Technology {branavan, harr, lsz, regina}@csail.mit.edu Abstract In this paper, we present a reinforcement learning approach for mapping natural language instructions to sequences of executable actions. We assume access to a reward function that defines the quality of the executed actions. During training, the learner repeatedly constructs action sequences for a set of documents, executes those actions, and observes the resulting reward. We use a policy gradient algorithm to estimate the parameters of a log-linear model for action selection. We apply our method to interpret instructions in two domains — Windows troubleshooting guides and game tutorials. Our results demonstrate that this method can rival supervised learning techniques while requiring few or no annotated training examples.1 1 Introduction The problem of interpreting instructions written in natural language has been widely studied since the early days of artificial intelligence (Winograd, 1972; Di Eugenio, 1992). Mapping instructions to a sequence of executable actions would enable the automation of tasks that currently require human participation. Examples include configuring software based on how-to guides and operating simulators using instruction manuals. In this paper, we present a reinforcement learning framework for inducing mappings from text to actions without the need for annotated training examples. For concreteness, consider instructions from a Windows troubleshooting guide on deleting temporary folders, shown in Figure 1. We aim to map 1Code, data, and annotations used in this work are available at http://groups.csail.mit.edu/rbg/code/rl/ Figure 1: A Windows troubleshooting article describing how to remove the “msdownld.tmp” temporary folder. this text to the corresponding low-level commands and parameters. For example, properly interpreting the third instruction requires clicking on a tab, finding the appropriate option in a tree control, and clearing its associated checkbox. In this and many other applications, the validity of a mapping can be verified by executing the induced actions in the corresponding environment and observing their effects. For instance, in the example above we can assess whether the goal described in the instructions is achieved, i.e., the folder is deleted. The key idea of our approach is to leverage the validation process as the main source of supervision to guide learning. This form of supervision allows us to learn interpretations of natural language instructions when standard supervised techniques are not applicable, due to the lack of human-created annotations. Reinforcement learning is a natural framework for building models using validation from an environment (Sutton and Barto, 1998). We assume that supervision is provided in the form of a reward function that defines the quality of executed actions. During training, the learner repeatedly constructs action sequences for a set of given documents, executes those actions, and observes the resulting reward. The learner’s goal is to estimate a 82 policy — a distribution over actions given instruction text and environment state — that maximizes future expected reward. 
Our policy is modeled in a log-linear fashion, allowing us to incorporate features of both the instruction text and the environment. We employ a policy gradient algorithm to estimate the parameters of this model. We evaluate our method on two distinct applications: Windows troubleshooting guides and puzzle game tutorials. The key findings of our experiments are twofold. First, models trained only with simple reward signals achieve surprisingly high results, coming within 11% of a fully supervised method in the Windows domain. Second, augmenting unlabeled documents with even a small fraction of annotated examples greatly reduces this performance gap, to within 4% in that domain. These results indicate the power of learning from this new form of automated supervision. 2 Related Work Grounded Language Acquisition Our work fits into a broader class of approaches that aim to learn language from a situated context (Mooney, 2008a; Mooney, 2008b; Fleischman and Roy, 2005; Yu and Ballard, 2004; Siskind, 2001; Oates, 2001). Instances of such approaches include work on inferring the meaning of words from video data (Roy and Pentland, 2002; Barnard and Forsyth, 2001), and interpreting the commentary of a simulated soccer game (Chen and Mooney, 2008). Most of these approaches assume some form of parallel data, and learn perceptual cooccurrence patterns. In contrast, our emphasis is on learning language by proactively interacting with an external environment. Reinforcement Learning for Language Processing Reinforcement learning has been previously applied to the problem of dialogue management (Scheffler and Young, 2002; Roy et al., 2000; Litman et al., 2000; Singh et al., 1999). These systems converse with a human user by taking actions that emit natural language utterances. The reinforcement learning state space encodes information about the goals of the user and what they say at each time step. The learning problem is to find an optimal policy that maps states to actions, through a trial-and-error process of repeated interaction with the user. Reinforcement learning is applied very differently in dialogue systems compared to our setup. In some respects, our task is more easily amenable to reinforcement learning. For instance, we are not interacting with a human user, so the cost of interaction is lower. However, while the state space can be designed to be relatively small in the dialogue management task, our state space is determined by the underlying environment and is typically quite large. We address this complexity by developing a policy gradient algorithm that learns efficiently while exploring a small subset of the states. 3 Problem Formulation Our task is to learn a mapping between documents and the sequence of actions they express. Figure 2 shows how one example sentence is mapped to three actions. Mapping Text to Actions As input, we are given a document d, comprising a sequence of sentences (u1, . . . , uℓ), where each ui is a sequence of words. Our goal is to map d to a sequence of actions ⃗a = (a0, . . . , an−1). Actions are predicted and executed sequentially.2 An action a = (c, R, W ′) encompasses a command c, the command’s parameters R, and the words W ′ specifying c and R. Elements of R refer to objects available in the environment state, as described below. Some parameters can also refer to words in document d. Additionally, to account for words that do not describe any actions, c can be a null command. 
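One way to picture an action a = (c, R, W′) is as a small record holding the command, its parameters and the words that select them. The encoding below is purely illustrative; the field names and the use of None for the null command are our own, not details from the paper.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Action:
    """a = (c, R, W'): command, parameters, and the words that specify them."""
    command: Optional[str]                 # e.g. "left-click", or None for the null command
    params: Tuple[str, ...] = ()           # references to environment objects (plus any input text)
    words: Tuple[str, ...] = ()            # W': the words of the sentence this action consumes

# Example: an instruction fragment such as "click Run" might map to
a = Action(command="left-click", params=("Run... menu item",), words=("click", "Run"))
```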
The Environment The environment state E specifies the set of objects available for interaction, and their properties. In Figure 2, E is shown on the right. The environment state E changes in response to the execution of command c with parameters R according to a transition distribution p(E′|E, c, R). This distribution is a priori unknown to the learner. As we will see in Section 5, our approach avoids having to directly estimate this distribution. State To predict actions sequentially, we need to track the state of the document-to-actions mapping over time. A mapping state s is a tuple (E, d, j, W), where E refers to the current environment state; j is the index of the sentence currently being interpreted in document d; and W contains words that were mapped by previous actions for 2That is, action ai is executed before ai+1 is predicted. 83 Figure 2: A three-step mapping from an instruction sentence to a sequence of actions in Windows 2000. For each step, the figure shows the words selected by the action, along with the corresponding system command and its parameters. The words of W ′ are underlined, and the words of W are highlighted in grey. the same sentence. The mapping state s is observed after each action. The initial mapping state s0 for document d is (Ed, d, 0, ∅); Ed is the unique starting environment state for d. Performing action a in state s = (E, d, j, W) leads to a new state s′ according to distribution p(s′|s, a), defined as follows: E transitions according to p(E′|E, c, R), W is updated with a’s selected words, and j is incremented if all words of the sentence have been mapped. For the applications we consider in this work, environment state transitions, and consequently mapping state transitions, are deterministic. Training During training, we are provided with a set D of documents, the ability to sample from the transition distribution, and a reward function r(h). Here, h = (s0, a0, . . . , sn−1, an−1, sn) is a history of states and actions visited while interpreting one document. r(h) outputs a realvalued score that correlates with correct action selection.3 We consider both immediate reward, which is available after each action, and delayed reward, which does not provide feedback until the last action. For example, task completion is a delayed reward that produces a positive value after the final action only if the task was completed successfully. We will also demonstrate how manually annotated action sequences can be incorporated into the reward. 3In most reinforcement learning problems, the reward function is defined over state-action pairs, as r(s, a) — in this case, r(h) = P t r(st, at), and our formulation becomes a standard finite-horizon Markov decision process. Policy gradient approaches allow us to learn using the more general case of history-based reward. The goal of training is to estimate parameters θ of the action selection distribution p(a|s, θ), called the policy. Since the reward correlates with action sequence correctness, the θ that maximizes expected reward will yield the best actions. 4 A Log-Linear Model for Actions Our goal is to predict a sequence of actions. We construct this sequence by repeatedly choosing an action given the current mapping state, and applying that action to advance to a new state. 
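Continuing the illustrative encoding above, the mapping state s = (E, d, j, W) and its deterministic update can be sketched as follows. The environment transition is left abstract (it is supplied as a function), and tracking mapped words by identity rather than by position is a simplification made only to keep the sketch short.

```python
from dataclasses import dataclass, replace
from typing import FrozenSet, Sequence

@dataclass(frozen=True)
class MappingState:
    env: object                      # E: current environment state
    doc: Sequence[Sequence[str]]     # d: list of sentences, each a list of words
    sentence_idx: int                # j: index of the sentence being interpreted
    used_words: FrozenSet[str]       # W: words already mapped for this sentence

def step(state, action, execute):
    """Deterministic p(s'|s,a): execute the command, record the consumed words,
    and advance j (resetting W) once every word of the sentence has been mapped."""
    new_env = execute(state.env, action.command, action.params)
    used = state.used_words | set(action.words)
    sentence = state.doc[state.sentence_idx]
    j = state.sentence_idx + (1 if all(w in used for w in sentence) else 0)
    new_used = frozenset() if j != state.sentence_idx else frozenset(used)
    return replace(state, env=new_env, sentence_idx=j, used_words=new_used)
```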
Given a state s = (E, d, j, W), the space of possible next actions is defined by enumerating subspans of unused words in the current sentence (i.e., subspans of the jth sentence of d not in W), and the possible commands and parameters in environment state E.4 We model the policy distribution p(a|s; θ) over this action space in a log-linear fashion (Della Pietra et al., 1997; Lafferty et al., 2001), giving us the flexibility to incorporate a diverse range of features. Under this representation, the policy distribution is: p(a|s; θ) = eθ·φ(s,a) X a′ eθ·φ(s,a′) , (1) where φ(s, a) ∈Rn is an n-dimensional feature representation. During test, actions are selected according to the mode of this distribution. 4For parameters that refer to words, the space of possible values is defined by the unused words in the current sentence. 84 5 Reinforcement Learning During training, our goal is to find the optimal policy p(a|s; θ). Since reward correlates with correct action selection, a natural objective is to maximize expected future reward — that is, the reward we expect while acting according to that policy from state s. Formally, we maximize the value function: Vθ(s) = Ep(h|θ) [r(h)] , (2) where the history h is the sequence of states and actions encountered while interpreting a single document d ∈D. This expectation is averaged over all documents in D. The distribution p(h|θ) returns the probability of seeing history h when starting from state s and acting according to a policy with parameters θ. This distribution can be decomposed into a product over time steps: p(h|θ) = n−1 Y t=0 p(at|st; θ)p(st+1|st, at). (3) 5.1 A Policy Gradient Algorithm Our reinforcement learning problem is to find the parameters θ that maximize Vθ from equation 2. Although there is no closed form solution, policy gradient algorithms (Sutton et al., 2000) estimate the parameters θ by performing stochastic gradient ascent. The gradient of Vθ is approximated by interacting with the environment, and the resulting reward is used to update the estimate of θ. Policy gradient algorithms optimize a non-convex objective and are only guaranteed to find a local optimum. However, as we will see, they scale to large state spaces and can perform well in practice. To find the parameters θ that maximize the objective, we first compute the derivative of Vθ. Expanding according to the product rule, we have: ∂ ∂θVθ(s) = Ep(h|θ) " r(h) X t ∂ ∂θ log p(at|st; θ) # , (4) where the inner sum is over all time steps t in the current history h. Expanding the inner partial derivative we observe that: ∂ ∂θ log p(a|s; θ) = φ(s, a)− X a′ φ(s, a′)p(a′|s; θ), (5) which is the derivative of a log-linear distribution. Equation 5 is easy to compute directly. However, the complete derivative of Vθ in equation 4 Input: A document set D, Feature representation φ, Reward function r(h), Number of iterations T Initialization: Set θ to small random values. for i = 1 . . . T do 1 foreach d ∈D do 2 Sample history h ∼p(h|θ) where 3 h = (s0, a0, . . . , an−1, sn) as follows: 3a for t = 0 . . . n −1 do 3b Sample action at ∼p(a|st; θ) 3c Execute at on state st: st+1 ∼p(s|st, at) end ∆←P t ` φ(st, at) −P a′ φ(st, a′)p(a′|st; θ) ´ 4 θ ←θ + r(h)∆ 5 end end Output: Estimate of parameters θ Algorithm 1: A policy gradient algorithm. is intractable, because computing the expectation would require summing over all possible histories. 
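A minimal sketch of the sampled update performed in Algorithm 1 is given below (the stochastic approximation it relies on is discussed in the next paragraph). The environment interface (start, done, candidate_actions, execute), the feature function, the reward function and the learning rate are all hypothetical stand-ins introduced for illustration; Algorithm 1 itself leaves the step size implicit.

```python
import numpy as np

def policy_probs(theta, state, actions, feats):
    """Log-linear policy of Eq. (1): p(a|s) proportional to exp(theta . phi(s, a))."""
    scores = np.array([theta @ feats(state, a) for a in actions])
    scores -= scores.max()                      # numerical stability
    p = np.exp(scores)
    return p / p.sum()

def policy_gradient_step(theta, env, doc, feats, reward, lr=0.1, rng=None):
    """One sampled update (steps 3-5 of Algorithm 1) for a single document."""
    rng = rng or np.random.default_rng()
    state, history, grad = env.start(doc), [], np.zeros_like(theta)
    while not env.done(state):
        actions = env.candidate_actions(state)
        p = policy_probs(theta, state, actions, feats)
        idx = rng.choice(len(actions), p=p)     # sample a_t ~ p(a|s_t; theta)
        a = actions[idx]
        # Eq. (5): phi(s_t, a_t) - sum_a' phi(s_t, a') p(a'|s_t; theta)
        expected = sum(pi * feats(state, ai) for pi, ai in zip(p, actions))
        grad += feats(state, a) - expected
        history.append((state, a))
        state = env.execute(state, a)           # s_{t+1} ~ p(s|s_t, a_t)
    return theta + lr * reward(history) * grad  # theta <- theta + r(h) * Delta
```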
Instead, policy gradient algorithms employ stochastic gradient ascent by computing a noisy estimate of the expectation using just a subset of the histories. Specifically, we draw samples from p(h|θ) by acting in the target environment, and use these samples to approximate the expectation in equation 4. In practice, it is often sufficient to sample a single history h for this approximation. Algorithm 1 details the complete policy gradient algorithm. It performs T iterations over the set of documents D. Step 3 samples a history that maps each document to actions. This is done by repeatedly selecting actions according to the current policy, and updating the state by executing the selected actions. Steps 4 and 5 compute the empirical gradient and update the parameters θ. In many domains, interacting with the environment is expensive. Therefore, we use two techniques that allow us to take maximum advantage of each environment interaction. First, a history h = (s0, a0, . . . , sn) contains subsequences (si, ai, . . . sn) for i = 1 to n −1, each with its own reward value given by the environment as a side effect of executing h. We apply the update from equation 5 for each subsequence. Second, for a sampled history h, we can propose alternative histories h′ that result in the same commands and parameters with different word spans. We can again apply equation 5 for each h′, weighted by its probability under the current policy, p(h′|θ) p(h|θ) . 85 The algorithm we have presented belongs to a family of policy gradient algorithms that have been successfully used for complex tasks such as robot control (Ng et al., 2003). Our formulation is unique in how it represents natural language in the reinforcement learning framework. 5.2 Reward Functions and ML Estimation We can design a range of reward functions to guide learning, depending on the availability of annotated data and environment feedback. Consider the case when every training document d ∈D is annotated with its correct sequence of actions, and state transitions are deterministic. Given these examples, it is straightforward to construct a reward function that connects policy gradient to maximum likelihood. Specifically, define a reward function r(h) that returns one when h matches the annotation for the document being analyzed, and zero otherwise. Policy gradient performs stochastic gradient ascent on the objective from equation 2, performing one update per document. For document d, this objective becomes: Ep(h|θ)[r(h)] = X h r(h)p(h|θ) = p(hd|θ), where hd is the history corresponding to the annotated action sequence. Thus, with this reward policy gradient is equivalent to stochastic gradient ascent with a maximum likelihood objective. At the other extreme, when annotations are completely unavailable, learning is still possible given informative feedback from the environment. Crucially, this feedback only needs to correlate with action sequence quality. We detail environment-based reward functions in the next section. As our results will show, reward functions built using this kind of feedback can provide strong guidance for learning. We will also consider reward functions that combine annotated supervision with environment feedback. 6 Applying the Model We study two applications of our model: following instructions to perform software tasks, and solving a puzzle game using tutorial guides. 
6.1 Microsoft Windows Help and Support On its Help and Support website,5 Microsoft publishes a number of articles describing how to per5support.microsoft.com Notation o Parameter referring to an environment object L Set of object class names (e.g. “button”) V Vocabulary Features on W and object o Test if o is visible in s Test if o has input focus Test if o is in the foreground Test if o was previously interacted with Test if o came into existence since last action Min. edit distance between w ∈W and object labels in s Features on words in W, command c, and object o ∀c′ ∈C, w ∈V : test if c′ = c and w ∈W ∀c′ ∈C, l ∈L: test if c′ = c and l is the class of o Table 1: Example features in the Windows domain. All features are binary, except for the normalized edit distance which is real-valued. form tasks and troubleshoot problems in the Windows operating systems. Examples of such tasks include installing patches and changing security settings. Figure 1 shows one such article. Our goal is to automatically execute these support articles in the Windows 2000 environment. Here, the environment state is the set of visible user interface (UI) objects, and object properties such as label, location, and parent window. Possible commands include left-click, right-click, double-click, and type-into, all of which take a UI object as a parameter; type-into additionally requires a parameter for the input text. Table 1 lists some of the features we use for this domain. These features capture various aspects of the action under consideration, the current Windows UI state, and the input instructions. For example, one lexical feature measures the similarity of a word in the sentence to the UI labels of objects in the environment. Environment-specific features, such as whether an object is currently in focus, are useful when selecting the object to manipulate. In total, there are 4,438 features. Reward Function Environment feedback can be used as a reward function in this domain. An obvious reward would be task completion (e.g., whether the stated computer problem was fixed). Unfortunately, verifying task completion is a challenging system issue in its own right. Instead, we rely on a noisy method of checking whether execution can proceed from one sentence to the next: at least one word in each sentence has to correspond to an object in the envi86 Figure 3: Crossblock puzzle with tutorial. For this level, four squares in a row or column must be removed at once. The first move specified by the tutorial is greyed in the puzzle. ronment.6 For instance, in the sentence from Figure 2 the word “Run” matches the Run... menu item. If no words in a sentence match a current environment object, then one of the previous sentences was analyzed incorrectly. In this case, we assign the history a reward of -1. This reward is not guaranteed to penalize all incorrect histories, because there may be false positive matches between the sentence and the environment. When at least one word matches, we assign a positive reward that linearly increases with the percentage of words assigned to non-null commands, and linearly decreases with the number of output actions. This reward signal encourages analyses that interpret all of the words without producing spurious actions. 6.2 Crossblock: A Puzzle Game Our second application is to a puzzle game called Crossblock, available online as a Flash game.7 Each of 50 puzzles is played on a grid, where some grid positions are filled with squares. 
The object of the game is to clear the grid by drawing vertical or horizontal line segments that remove groups of squares. Each segment must exactly cross a specific number of squares, ranging from two to seven depending on the puzzle. Humans players have found this game challenging and engaging enough to warrant posting textual tutorials.8 A sample puzzle and tutorial are shown in Figure 3. The environment is defined by the state of the grid. The only command is clear, which takes a parameter specifying the orientation (row or column) and grid location of the line segment to be 6We assume that a word maps to an environment object if the edit distance between the word and the object’s name is below a threshold value. 7hexaditidom.deviantart.com/art/Crossblock-108669149 8www.jayisgames.com/archives/2009/01/crossblock.php removed. The challenge in this domain is to segment the text into the phrases describing each action, and then correctly identify the line segments from references such as “the bottom four from the second column from the left.” For this domain, we use two sets of binary features on state-action pairs (s, a). First, for each vocabulary word w, we define a feature that is one if w is the last word of a’s consumed words W ′. These features help identify the proper text segmentation points between actions. Second, we introduce features for pairs of vocabulary word w and attributes of action a, e.g., the line orientation and grid locations of the squares that a would remove. This set of features enables us to match words (e.g., “row”) with objects in the environment (e.g., a move that removes a horizontal series of squares). In total, there are 8,094 features. Reward Function For Crossblock it is easy to directly verify task completion, which we use as the basis of our reward function. The reward r(h) is -1 if h ends in a state where the puzzle cannot be completed. For solved puzzles, the reward is a positive value proportional to the percentage of words assigned to non-null commands. 7 Experimental Setup Datasets For the Windows domain, our dataset consists of 128 documents, divided into 70 for training, 18 for development, and 40 for test. In the puzzle game domain, we use 50 tutorials, divided into 40 for training and 10 for test.9 Statistics for the datasets are shown below. Windows Puzzle Total # of documents 128 50 Total # of words 5562 994 Vocabulary size 610 46 Avg. words per sentence 9.93 19.88 Avg. sentences per document 4.38 1.00 Avg. actions per document 10.37 5.86 The data exhibits certain qualities that make for a challenging learning problem. For instance, there are a surprising variety of linguistic constructs — as Figure 4 shows, in the Windows domain even a simple command is expressed in at least six different ways. 9For Crossblock, because the number of puzzles is limited, we did not hold out a separate development set, and report averaged results over five training/test splits. 87 Figure 4: Variations of “click internet options on the tools menu” present in the Windows corpus. Experimental Framework To apply our algorithm to the Windows domain, we use the Win32 application programming interface to simulate human interactions with the user interface, and to gather environment state information. The operating system environment is hosted within a virtual machine,10 allowing us to rapidly save and reset system state snapshots. For the puzzle game domain, we replicated the game with an implementation that facilitates automatic play. 
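For concreteness, the Crossblock reward function described above can be sketched as follows, reusing the illustrative action record from the earlier sketch; the solvability test and the exact scaling of the positive reward are assumptions on our part rather than details from the paper.

```python
def crossblock_reward(history, solved, n_doc_words):
    """Sketch of the Crossblock reward.

    history:     list of (state, action) pairs produced for one tutorial document.
    solved:      whether the final state corresponds to a cleared grid.
    n_doc_words: total number of words in the tutorial document.
    """
    if not solved:                              # the puzzle cannot be completed from here
        return -1.0
    useful = sum(len(a.words) for _, a in history if a.command is not None)
    return useful / max(n_doc_words, 1)         # proportional to words on non-null commands
```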
As is commonly done in reinforcement learning, we use a softmax temperature parameter to smooth the policy distribution (Sutton and Barto, 1998), set to 0.1 in our experiments. For Windows, the development set is used to select the best parameters. For Crossblock, we choose the parameters that produce the highest reward during training. During evaluation, we use these parameters to predict mappings for the test documents. Evaluation Metrics For evaluation, we compare the results to manually constructed sequences of actions. We measure the number of correct actions, sentences, and documents. An action is correct if it matches the annotations in terms of command and parameters. A sentence is correct if all of its actions are correctly identified, and analogously for documents.11 Statistical significance is measured with the sign test. Additionally, we compute a word alignment score to investigate the extent to which the input text is used to construct correct analyses. This score measures the percentage of words that are aligned to the corresponding annotated actions in correctly analyzed documents. Baselines We consider the following baselines to characterize the performance of our approach. 10VMware Workstation, available at www.vmware.com 11In these tasks, each action depends on the correct execution of all previous actions, so a single error can render the remainder of that document’s mapping incorrect. In addition, due to variability in document lengths, overall action accuracy is not guaranteed to be higher than document accuracy. • Full Supervision Sequence prediction problems like ours are typically addressed using supervised techniques. We measure how a standard supervised approach would perform on this task by using a reward signal based on manual annotations of output action sequences, as defined in Section 5.2. As shown there, policy gradient with this reward is equivalent to stochastic gradient ascent with a maximum likelihood objective. • Partial Supervision We consider the case when only a subset of training documents is annotated, and environment reward is used for the remainder. Our method seamlessly combines these two kinds of rewards. • Random and Majority (Windows) We consider two na¨ıve baselines. Both scan through each sentence from left to right. A command c is executed on the object whose name is encountered first in the sentence. This command c is either selected randomly, or set to the majority command, which is leftclick. This procedure is repeated until no more words match environment objects. • Random (Puzzle) We consider a baseline that randomly selects among the actions that are valid in the current game state.12 8 Results Table 2 presents evaluation results on the test sets. There are several indicators of the difficulty of this task. The random and majority baselines’ poor performance in both domains indicates that na¨ıve approaches are inadequate for these tasks. The performance of the fully supervised approach provides further evidence that the task is challenging. This difficulty can be attributed in part to the large branching factor of possible actions at each step — on average, there are 27.14 choices per action in the Windows domain, and 9.78 in the Crossblock domain. In both domains, the learners relying only on environment reward perform well. Although the fully supervised approach performs the best, adding just a few annotated training examples to the environment-based learner significantly reduces the performance gap. 
12Since action selection is among objects, there is no natural majority baseline for the puzzle. 88 Windows Puzzle Action Sent. Doc. Word Action Doc. Word Random baseline 0.128 0.101 0.000 —– 0.081 0.111 —– Majority baseline 0.287 0.197 0.100 —– —– —– —– Environment reward ∗0.647 ∗0.590 ∗0.375 0.819 ∗0.428 ∗0.453 0.686 Partial supervision ⋄0.723 ∗0.702 0.475 0.989 0.575 ∗0.523 0.850 Full supervision ⋄0.756 0.714 0.525 0.991 0.632 0.630 0.869 Table 2: Performance on the test set with different reward signals and baselines. Our evaluation measures the proportion of correct actions, sentences, and documents. We also report the percentage of correct word alignments for the successfully completed documents. Note the puzzle domain has only singlesentence documents, so its sentence and document scores are identical. The partial supervision line refers to 20 out of 70 annotated training documents for Windows, and 10 out of 40 for the puzzle. Each result marked with ∗or ⋄is a statistically significant improvement over the result immediately above it; ∗indicates p < 0.01 and ⋄indicates p < 0.05. Figure 5: Comparison of two training scenarios where training is done using a subset of annotated documents, with and without environment reward for the remaining unannotated documents. Figure 5 shows the overall tradeoff between annotation effort and system performance for the two domains. The ability to make this tradeoff is one of the advantages of our approach. The figure also shows that augmenting annotated documents with additional environment-reward documents invariably improves performance. The word alignment results from Table 2 indicate that the learners are mapping the correct words to actions for documents that are successfully completed. For example, the models that perform best in the Windows domain achieve nearly perfect word alignment scores. To further assess the contribution of the instruction text, we train a variant of our model without access to text features. This is possible in the game domain, where all of the puzzles share a single goal state that is independent of the instructions. This variant solves 34% of the puzzles, suggesting that access to the instructions significantly improves performance. 9 Conclusions In this paper, we presented a reinforcement learning approach for inducing a mapping between instructions and actions. This approach is able to use environment-based rewards, such as task completion, to learn to analyze text. We showed that having access to a suitable reward function can significantly reduce the need for annotations. Acknowledgments The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, grant IIS-0835652, and a Graduate Research Fellowship) and the ONR. Thanks to Michael Collins, Amir Globerson, Tommi Jaakkola, Leslie Pack Kaelbling, Dina Katabi, Martin Rinard, and members of the MIT NLP group for their suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. 89 References Kobus Barnard and David A. Forsyth. 2001. Learning the semantics of words and pictures. In Proceedings of ICCV. David L. Chen and Raymond J. Mooney. 2008. Learning to sportscast: a test of grounded language acquisition. In Proceedings of ICML. Stephen Della Pietra, Vincent J. Della Pietra, and John D. Lafferty. 1997. Inducing features of random fields. IEEE Trans. Pattern Anal. Mach. Intell., 19(4):380–393. 
Barbara Di Eugenio. 1992. Understanding natural language instructions: the case of purpose clauses. In Proceedings of ACL. Michael Fleischman and Deb Roy. 2005. Intentional context in situated language learning. In Proceedings of CoNLL. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML. Diane J. Litman, Michael S. Kearns, Satinder Singh, and Marilyn A. Walker. 2000. Automatic optimization of dialogue management. In Proceedings of COLING. Raymond J. Mooney. 2008a. Learning language from its perceptual context. In Proceedings of ECML/PKDD. Raymond J. Mooney. 2008b. Learning to connect language and perception. In Proceedings of AAAI. Andrew Y. Ng, H. Jin Kim, Michael I. Jordan, and Shankar Sastry. 2003. Autonomous helicopter flight via reinforcement learning. In Advances in NIPS. James Timothy Oates. 2001. Grounding knowledge in sensors: Unsupervised learning for language and planning. Ph.D. thesis, University of Massachusetts Amherst. Deb K. Roy and Alex P. Pentland. 2002. Learning words from sights and sounds: a computational model. Cognitive Science 26, pages 113–146. Nicholas Roy, Joelle Pineau, and Sebastian Thrun. 2000. Spoken dialogue management using probabilistic reasoning. In Proceedings of ACL. Konrad Scheffler and Steve Young. 2002. Automatic learning of dialogue strategy using dialogue simulation and reinforcement learning. In Proceedings of HLT. Satinder P. Singh, Michael J. Kearns, Diane J. Litman, and Marilyn A. Walker. 1999. Reinforcement learning for spoken dialogue systems. In Advances in NIPS. Jeffrey Mark Siskind. 2001. Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. J. Artif. Intell. Res. (JAIR), 15:31–90. Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning: An Introduction. The MIT Press. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 2000. Policy gradient methods for reinforcement learning with function approximation. In Advances in NIPS. Terry Winograd. 1972. Understanding Natural Language. Academic Press. Chen Yu and Dana H. Ballard. 2004. On the integration of grounding language and learning objects. In Proceedings of AAAI. 90
2009
10
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 888–896, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Setting Up User Action Probabilities in User Simulations for Dialog System Development Hua Ai University of Pittsburgh Pittsburgh PA, 15260, USA [email protected] Diane Litman University of Pittsburgh Pittsburgh PA, 15260, USA [email protected] Abstract User simulations are shown to be useful in spoken dialog system development. Since most current user simulations deploy probability models to mimic human user behaviors, how to set up user action probabilities in these models is a key problem to solve. One generally used approach is to estimate these probabilities from human user data. However, when building a new dialog system, usually no data or only a small amount of data is available. In this study, we compare estimating user probabilities from a small user data set versus handcrafting the probabilities. We discuss the pros and cons of both solutions for different dialog system development tasks. 1 Introduction User simulations are widely used in spoken dialog system development. Recent studies use user simulations to generate training corpora to learn dialog strategies automatically ((Williams and Young, 2007), (Lemon and Liu, 2007)), or to evaluate dialog system performance (L´opez-C´ozar et al., 2003). Most studies show that using user simulations significantly improves dialog system performance as well as speeds up system development. Since user simulation is such a useful tool, dialog system researchers have studied how to build user simulations from a variety of perspectives. Some studies look into the impact of training data on user simulations. For example, (Georgila et al., 2008) observe differences between simulated users trained from human users of different age groups. Other studies explore different simulation models, i.e. the mechanism of deciding the next user actions given the current dialog context. (Schatzmann et al., 2006) give a thorough review of different types of simulation models. Since most of these current user simulation techniques use probabilistic models to generate user actions, how to set up the probabilities in the simulations is another important problem to solve. One general approach to set up user action probabilities is to learn the probabilities from a collected human user dialog corpus ((Schatzmann et al., 2007b), (Georgila et al., 2008)). While this approach takes advantage of observed user behaviors in predicting future user behaviors, it suffers from the problem of learning probabilities from one group of users while potentially using them with another group of users. The accuracy of the learned probabilities becomes more questionable when the collected human corpus is small. However, this is a common problem in building new dialog systems, when often no data1 or only a small amount of data is available. An alternative approach is to handcraft user action probabilities ((Schatzmann et al., 2007a), (Janarthanam and Lemon, 2008)). This approach is less dataintensive, but requires nontrivial work by domain experts. What is more, as the number of probabilities increases, it is hard even for the experts to set the probabilities. Since both handcrafting and training user action probabilities have their own pros and cons, it is an interesting research question to investigate which approach is better for a certain task given the amount of data that is available. 
In this study, we investigate a manual and a trained approach in setting up user action probabilities, applied to building the same probabilistic simulation model. For the manual user simulations, we look into two sets of handcrafted probabilities which use the same expert knowledge but differ in individual probability values. This aims to take into account small variations that can possi1When no human user data is collected with the dialog system, Wizard-of-Oz experiments can be conducted to collect training data for building user simulations. 888 bly be introduced by different domain experts. For the trained user simulations, we examine two sets of probabilities trained from user corpora of different sizes, since the amount of training data will impact the quality of the trained probability models. We compare the trained and the handcrafted simulations on three tasks. We observe that in our task settings, the two manual simulations do not differ significantly on any tasks. In addition, there is no significant difference among the trained and the manual simulations in generating corpus level dialog behaviors as well as in generating training corpora for learning dialog strategies. When comparing on a dialog system evaluation task, the simulation trained from more data significantly outperforms the two manual simulations, which again outperforms the simulation trained from less data. Based on our observations, we answer the original question of how to design user action probabilities for simulations that are similar to ours in terms of the complexity of the simulations2. We suggest that handcrafted user simulations can perform reasonably well in building a new dialog system, especially when we are not sure that there is enough data for training simulation models. However, once we have a dialog system, it is useful to collect human user data in order to train a new user simulation model since the trained simulations perform better than the handcrafted user simulations on more tasks. Since how to decide whether enough data is available for simulation training is another research question to answer, we will further discuss the impact of our results later in Section 6. 2 Related Work Most current simulation models are probabilistic models in which the models simulate user actions based on dialog context features (Schatzmann et al., 2006). We represent these models as: P(user action|feature1, . . .,featuren) (1) The number of probabilities involved in this model is: (# of possible actions-1) ∗ n Y k=1 (# of feature values). (2) Some studies handcraft these probabilities. For example, (Schatzmann et al., 2007a) condition the 2The number of user action probabilities and the simulated user behaviors will impact the design choice. user actions on user’s goals and the agenda to reach those goals. They manually author the probabilities in the user’s agenda update model and the goal update model, and then calculate the user action probabilities based on the two models. (Janarthanam and Lemon, 2008) handcraft 15 probabilities in simulated users’ initial profiles and then author rules to update these probabilities during the dialogs. Other studies use a human user corpus as the training corpus to learn user action probabilities in user simulations. 
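As a concrete illustration of Equations (1) and (2), the snippet below counts the free parameters of such a conditional model and shows the table form it takes. The specific feature values (20 question clusters, 2 correctness values, 2 possible student actions) anticipate the KC Model described later in the paper, and the dictionary representation is our own.

```python
from itertools import product

def n_free_probabilities(n_actions, feature_value_counts):
    """Eq. (2): (#actions - 1) * product over features of (#values of that feature)."""
    n = n_actions - 1
    for count in feature_value_counts:
        n *= count
    return n

# KC Model setting: 2 actions (correct/incorrect answer), features qCluster (20 values)
# and prevCorrectness (2 values)  ->  (2-1) * 20 * 2 = 40 probabilities to specify.
print(n_free_probabilities(2, [20, 2]))

# A trained or handcrafted model is then just a table indexed by the feature values:
clusters = [f"cluster{i}" for i in range(20)]
P_correct = {(q, prev): 0.5 for q, prev in product(clusters, ["c", "ic"])}
```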
Since the human user corpus often does not include all possible actions that users may take during interactions with the dialog system, different strategies are used to account for user actions that do not appear in the training corpus but may be present when testing the user simulations. For example, (Schatzmann et al., 2007b) introduce a summary space approach to map the actual dialog context space into a more tractable summary space. Then, they use forward and backward learning algorithms to learn the probabilities from a corpus generated by 40 human users (160 dialogs). (Rieser and Lemon, 2006) use a two step approach in computing the probabilities from a corpus consisting of dialogs from 24 human users (70 dialogs). They first cluster dialog contexts based on selected features and then build conditional probability models for each cluster. In our study, we build a conditional probability model which will be described in detail in Section 3.2.1. There are 40 probabilities to set up in this model3. We will explain different approaches to assign these probabilities later in Section 3.2.2. 3 System and User Simulations In this section, we describe the dialog system, the human user corpus we collected with the system, and the user simulation we used. 3.1 System and Corpus The ITSPOKE system (Litman and Silliman, 2004) is an Intelligent Tutoring System which teaches Newtonian physics. It is a speechenhanced version of the Why2-Atlas tutoring system (Vanlehn et al., 2002). During the interaction with students, the system initiates a spoken tutoring dialog to correct misconceptions and to 3There are 2 possible actions in our model, 20 possible values for the first feature qCluster and 2 possible values for the second feature prevCorrectness as described later in Section 3.2.1. Using Equation 2, 40=(2-1)*20*2. 889 SYSTEM1: Do you recall what Newton’s third law says? [3rdLaw] Student1: Force equals mass times acceleration. [ic, c%=0, ncert] SYSTEM2: Newton’s third law says ... If you hit the wall harder, is the force of your fist acting on the wall greater or less? [3rdLaw] Student2: Greater. [c, c%=50%,cert] Dialog goes on Table 1: Sample coded dialog excerpt. elicit further explanation. A pretest is given before the interaction and a posttest is given afterwards. We calculate a Normalized Learning Gain for each student to evaluate the performance of the system in terms of the student’s knowledge gain: NLG = posttest score - pretest score 1-pretest score (3) The current tutoring dialog strategy was handcrafted in a finite state paradigm by domain experts, and the tutor’s response is based only on the correctness of the student’s answer4. However, tutoring research (Craig et al., 2004) suggests that other underlying information in student utterances (e.g., student certainty) is also useful in improving learning. Therefore, we are working on learning a dialog strategy to also take into account student certainty. In our prior work, a corpus of 100 dialogs (1388 student turns) was collected between 20 human subjects (5 dialogs per subject) and the ITSPOKE system. Correctness (correct(c), incorrect(ic)) is automatically judged by the system and is kept in the system’s logs. We also computed the student’s correctness rate (c%) and labeled it after every student turn. Each student utterance was manually annotated for certainty (certain(cert), notcertain(ncert)) in a previous study based on both lexical and prosodic information5. 
In addition, we manually clustered tutor questions into 20 clusters based on the knowledge that is required to answer that question, e.g. questions on Newton’s Third Law are put into a cluster labeled as (3rdLaw). There are other clusters such as gravity, acceleration, etc. An example of a coded dialog between the system and a student is given in Table 1. 4Despite the limitation of the current system, students learn significantly after interacting with the system. 5Kappa of 0.68 is gained in the agreement study. 3.2 User Simulation Model and Model Probabilities Set-up 3.2.1 User Simulation Model We build a Knowledge Consistency Model6 (KC Model) to simulate consistent student behaviors while interacting with a tutoring system. According to learning literature (Cen et al., 2006), once a student acquires certain knowledge, his/her performance on similar problems that require the same knowledge (i.e. questions from the same cluster we introduced in Section 3.1) will become stable. Therefore, in the KC Model, we condition the student action stuAction based on the cluster of tutor question (qCluster) and the student’s correctness when last encountering a question from that cluster (prevCorrectness): P(stuAction|qCluster, prevCorrectness). For example, in Table 1, when deciding the student’s answer after the second tutor question, the simulation looks back into the dialog and finds out that the last time (in Student1) the student answered a question from the same cluster 3rdLaw incorrectly. Therefore, this time the simulation gives a correct student answer based on the probability P(c|3rdLaw, ic). Since different groups of students often have different learning abilities, we examine such differences among our users by grouping the users based on Normalized Learning Gains (NLG), which is an important feature to describe user behaviors in tutoring systems. By dividing our human users into high/low learners based on the median of NLG, we find a significant difference in the NLG of the two groups based on 2-tailed t-tests (p < 0.05). Therefore, we construct a simulation to represent low learners and another simulation to represent high learners to better characterize the differences in high/low learners’ behaviors. Similar approaches are adopted in other studies in building user simulations for dialog systems (e.g., (Georgila et al., 2008) simulate old versus young users separately). Our simulation models work on the word level 7 because generating student dialog acts alone does not provide sufficient information for our tutoring system to decide the next system action. Since it is hard to generate a natural language utterance for each tutor’s question, we use the student answers 6This is the best model we built in our previous studies (Ai and Litman, 2007). 7See (Ai and Litman, 2006) for more details. 890 in the human user corpus as the candidate answers for the simulated students. 3.2.2 Model Probabilities Set-up Now we discuss how to set up user action probabilities in the KC Model. We compare learning probabilities from human user data to handcrafting probabilities based on expert knowledge. Since we represent high/low learners using different models, we build simulation models with separate user action probabilities to represent the two groups of learners. When learning the probabilities in the Trained KC Models, we calculate user action probabilities for high/low learners in our human corpus separately. 
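A minimal sketch of how such conditional probabilities can be estimated from a corpus (with add-one smoothing) and then used to sample simulated student answers (backing off for clusters not yet encountered in the dialog) is given below; the data structures and function names are hypothetical, introduced only for illustration.

```python
import random
from collections import defaultdict

def train_kc_probabilities(turns):
    """Estimate P(correct | qCluster, prevCorrectness) with add-one smoothing.

    turns: iterable of (q_cluster, prev_correctness, is_correct) triples, where
           prev_correctness is 'c' or 'ic' for the last question of the same cluster.
    """
    counts = defaultdict(lambda: [0, 0])          # (cluster, prev) -> [#correct, #incorrect]
    for q_cluster, prev, is_correct in turns:
        counts[(q_cluster, prev)][0 if is_correct else 1] += 1
    return {key: (c + 1) / (c + ic + 2)           # add-one over the 2 possible actions
            for key, (c, ic) in counts.items()}

def simulate_answer(q_cluster, dialog_history, p_table, avg_correct, rng=None):
    """Sample a simulated answer (True = correct) for one tutor question."""
    rng = rng or random.Random(0)
    prev = [ok for c, ok in dialog_history if c == q_cluster]
    if prev:                                      # condition on the last encounter of this cluster
        p = p_table.get((q_cluster, "c" if prev[-1] else "ic"), 0.5)
    else:                                         # first question from this cluster: back off
        p = avg_correct.get(q_cluster, 0.5)
    return rng.random() < p
```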
We use add-one smoothing to account for user actions that do not appear in the human user corpus. For the first time the student answers a question in a certain cluster, we back-off the user action probability to P(stuAction | average correctness rate of this question in human user corpus). We first train a KC model using the data from all 20 human users to build the TrainedMore (Tmore) Model. Then, in order to investigate the impact of the amount of training data on the quality of trained simulations, we randomly pick 5 out of the 10 high learners and 5 out of the 10 low learners to get an even smaller human user corpus. We train the TrainedLess (Tless) Model from this small corpus . When handcrafting the probabilities in the Manual KC Models8, the clusters of questions are first grouped into three difficulty groups (Easy, Medium, Hard). Based on expert knowledge, we assume on average 70% of students can correctly answer the tutor questions from the Easy group, while for the Medium group only 60% and for the hard group 50%. Then, we assign a correctness rate higher than the average for the high learners and a corresponding correctness rate lower than the average for the low learners. For the first Manual KC model (M1), within the same difficulty group, the same two probabilities P1(stuAction|qClusteri, prevCorrectness = c) and P2(stuAction|qClusteri, prevCorrectness = ic) are assigned to each clusteri as the averages for the corresponding high/low learners. Since a different human expert will possibly provide a slightly different set of probabilities even based on the same mechanism, we also design another set of prob8The first author of the paper acts as the domain expert. abilities to account for such variations. For the second Manual KC model (M2), we allow differences among the clusters within the same difficulty group. For the clusters in each difficulty group, we randomly assign a probability that differs no more than 5% from the average. For example, for the easy clusters, we assign average probabilities of high/low learners between [65%, 75%]. Although human experts may differ to some extent in assigning individual probability values, we hypothesize that in general a certain amount of expertise is required in assigning these probabilities. To investigate this, we build a baseline simulation with no expert knowledge, which is a Random Model (Ran) that randomly assigns values for these user action probabilities. 4 Evaluation Measures In this section, we introduce the evaluation measures for comparing the simulated corpora generated by different simulation models to the human user corpus. In Section 4.1, we use a set of widely used domain independent features to compare the simulated and the human user corpora on corpus-level dialog behaviors. These comparisons give us a direct impression of how similar the simulated dialogs are to human user dialogs. Then, we compare the simulations in task-oriented contexts. Since simulated user corpora are often used as training corpora for using MDPs to learn new dialog strategies, in Section 4.2 we estimate how different the learned dialog strategies would be when trained from different simulated corpora. Another way to use user simulation is to test dialog systems. Therefore, in Section 4.3, we compare the user actions predicted by the various simulation models with actual human user actions. 
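As a concrete illustration of the probability set-ups described in Section 3.2.2, the sketch below contrasts a trained table with add-one smoothing, an M2-style manual assignment built from the Easy/Medium/Hard averages with up to 5% perturbation, and a random baseline. The data format, function names, and the omission of the separate high/low-learner tables are simplifications, not the paper's exact procedure.

```python
# Sketch of three ways of filling in P(correct | cluster, prevCorrectness):
# trained from counts with add-one smoothing, handcrafted from difficulty groups, or random.
import random
from collections import defaultdict

def train_probabilities(turns):
    """turns: iterable of (cluster, prev_correctness, outcome) with outcome in {"c", "ic"}."""
    counts = defaultdict(lambda: {"c": 0, "ic": 0})
    for cluster, prev, outcome in turns:
        counts[(cluster, prev)][outcome] += 1
    # add-one smoothing over the two possible outcomes
    return {key: (c["c"] + 1) / (c["c"] + c["ic"] + 2) for key, c in counts.items()}

def manual_probabilities(cluster_difficulty, jitter=0.05):
    """cluster_difficulty: cluster -> 'Easy' | 'Medium' | 'Hard' (expert judgment)."""
    averages = {"Easy": 0.70, "Medium": 0.60, "Hard": 0.50}
    probs = {}
    for cluster, level in cluster_difficulty.items():
        base = averages[level]
        for prev in ("c", "ic"):
            # M2-style: each cluster may deviate up to `jitter` from its group average
            probs[(cluster, prev)] = base + random.uniform(-jitter, jitter)
    return probs

def random_probabilities(clusters):
    """Baseline with no expert knowledge: probabilities assigned at random."""
    return {(cluster, prev): random.random() for cluster in clusters for prev in ("c", "ic")}
```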
4.1 Measures on Corpus Level Dialog Behaviors

We compare the dialog corpora generated by user simulations to our human user corpus using a comprehensive set of corpus level measures proposed by (Schatzmann et al., 2005). Here, we use a subset of the measures which describe high-level dialog features that are applicable to our data. The measures we use include the number of student turns (Sturn), the number of tutor turns (Tturn), the number of words per student turn (Swordrate), the number of words per tutor turn (Twordrate), the ratio of system/user words per dialog (WordRatio), and the percentage of correct answers (cRate).

4.2 Measures on Dialog Strategy Learning

In this section, we introduce two measures to compare the simulations based on their performance on a dialog strategy learning task. In recent studies (e.g., (Janarthanam and Lemon, 2008)), user simulations are built to generate a large corpus to build MDPs in using Reinforcement Learning (RL) to learn new dialog strategies. When building an MDP from a training corpus9, we compute the transition probabilities P(st+1|st, a) (the probability of getting from state st to the next state st+1 after taking action a), and the reward of this transition R(st, a, st+1). Then, the expected cumulative value (V-value) of a state s can be calculated using this recursive function:

$V(s) = \sum_{s_{t+1}} P(s_{t+1}|s_t, a)\,[R(s_t, a, s_{t+1}) + \gamma V(s_{t+1})]$   (4)

γ is a discount factor which ranges between 0 and 1.

9 In this paper, we use off-line model-based RL (Paek, 2006) rather than learning an optimal strategy online during system-user interactions.

For our evaluation, we first compare the transition probabilities calculated from all simulated corpora. The transition probabilities are only determined by the states and user actions presented by the training corpus, regardless of the rest of the MDP configuration. Since the MDP configuration has a big impact on the learned strategies, we want to first factor this impact out and estimate the differences in learned strategies that are brought in by the training corpora alone. As a second evaluation measure, we apply reinforcement learning to the MDP representing each simulated corpus separately to learn dialog strategies. We compare the Expected Cumulative Rewards (ECRs) (Williams and Young, 2007) of these dialog strategies, which show the expectation of the rewards we can obtain by applying the learned strategies. The MDP learning task in our study is to maximize student certainty during tutoring dialogs. The dialog states are characterized using the correctness of the current student answer and the student correctness rate so far. We represent the correctness rate as a binary feature: lc if it is below the training corpus average and hc if it is above the average. The end of dialog reward is assigned to be +100 if the dialog has a percent certainty higher than the median from the training corpus and -100 otherwise. The action choice of the tutoring system is to give a strong (s) or weak (w) feedback. A strong feedback clearly indicates the correctness of the current student answer while the weak feedback does not. For example, the second system turn in Table 1 contains a weak feedback. If the system says "Your answer is incorrect" at the beginning of this turn, that would be a strong feedback. In order to simulate student certainty, we simply output the student certainty originally associated in each student utterance.
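A minimal sketch of the two quantities used in this comparison is given below: transition probabilities estimated from a corpus of (state, action, next state) triples, and state values computed with the recursion in Equation 4 for a fixed strategy. The corpus format, the reward signature, and the iteration scheme are assumptions made for illustration; they are not the authors' MDP toolkit or configuration.

```python
# Sketch: estimate P(s'|s, a) from a corpus and evaluate a fixed strategy with Eq. 4.
from collections import defaultdict

def estimate_transitions(episodes):
    """episodes: lists of (state, action, next_state) triples from one corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    for episode in episodes:
        for state, action, next_state in episode:
            counts[(state, action)][next_state] += 1
    return {
        sa: {s2: n / sum(nexts.values()) for s2, n in nexts.items()}
        for sa, nexts in counts.items()
    }

def evaluate_strategy(transitions, reward, strategy, states, gamma=0.9, iters=200):
    """Iterate V(s) = sum_{s'} P(s'|s,a)[R(s,a,s') + gamma * V(s')] with a = strategy(s)."""
    V = {s: 0.0 for s in states}
    for _ in range(iters):
        V = {
            s: sum(p * (reward(s, strategy(s), s2) + gamma * V.get(s2, 0.0))
                   for s2, p in transitions.get((s, strategy(s)), {}).items())
            for s in states
        }
    return V
```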
Thus, the output of the KC Models here is a student utterance along with the student certainty (cert, ncert). In a previous study (Ai et al., 2007), we investigated the impact of different MDP configurations by comparing the ECRs of the learned dialog strategies. Here, we use one of the best-performing MDP configurations, but vary the simulated corpora that we train the dialog strategies on. Our goal is to see which user simulation performs better in generating a training corpus for dialog strategy learning.

4.3 Measures on Dialog System Evaluation

In this section, we introduce two ways to compare human user actions with the actions predicted by the simulations. The aim of this comparison is to assess how accurately the simulations can replicate human user behaviors when encountering the same dialog situation. A simulated user that can accurately predict human user behaviors is needed to replace human users when evaluating dialog systems. We randomly divide the human user dialog corpus into four parts: each part contains a balanced amount of high/low learner data. Then we perform four fold cross validation by always using 3 parts of the data as our training corpus for user simulations, and the remaining one part of the data as testing data to compare with simulated user actions. We always compare high human learners only with simulation models that represent high learners and low human learners only with simulation models that represent low learners. Comparisons are done on a turn by turn basis. Every time the human user takes an action in the dialogs in the testing data, the user simulations are used to predict an action based on related dialog information from the human user dialog. For a KC Model, the related dialog information includes qCluster and prevCorrectness. We first compare the simulation predicted user actions directly with human user actions. We define simulation accuracy as:

$\text{Accuracy} = \frac{\text{Correctly predicted human user actions}}{\text{Total number of human user actions}}$   (5)

However, since our simulation model is a probabilistic model, the model will take an action stochastically after the same tutor turn. In other words, we need to take into account the probability for the simulation to predict the right human user action. If the simulation outputs the right action with a small probability, it is less likely that this simulation can correctly predict human user behaviors when generating a large dialog corpus. We consider a simulated action associated with a higher probability to be ranked higher than an action with a lower probability. Then, we use the reciprocal ranking from information retrieval tasks (Radev et al., 2002) to assess the simulation performance10. Mean Reciprocal Ranking is defined as:

$MRR = \frac{1}{A} \sum_{i=1}^{A} \frac{1}{rank_i}$   (6)

In Equation 6, A stands for the total number of human user actions, and rank_i stands for the ranking of the simulated action which matches the i-th human user action. Table 2 shows an example of comparing simulated user actions with human user actions in the sample dialog in Table 1. In the first turn Student1, a simulation model has a 60% chance to output an incorrect answer and a 40% chance to output a correct answer while it actually outputs an incorrect answer. In this case, we consider the simulation ranks the actions in the order of: ic, c. Since the human user gives an incorrect answer at this time, the simulated action matches with this human user action and the reciprocal ranking is 1.
However, in the turn Student2, the simulation's output does not match the human user action. This time, the correct simulated user action is ranked second. Therefore, the reciprocal ranking of this simulation action is 1/2. We hypothesize that the measures introduced in this section have larger power in differentiating different simulated user behaviors since every simulated user action contributes to the comparison between different simulations. In contrast, the measures introduced in Section 4.1 and Section 4.2 have less differentiating power since they compare at the corpus level.

10 (Georgila et al., 2008) use Precision and Recall to capture similar information as our accuracy, and Expected Precision and Expected Recall to capture similar information as our reciprocal ranking.

5 Results

We let all user simulations interact with our dialog system, where each simulates 250 low learners and 250 high learners. In this section, we report the results of applying the evaluation measures we discuss in Section 4 on comparing simulated and human user corpora. When we talk about significant results in the statistics tests below, we always mean that the p-value of the test is ≤ 0.05.

5.1 Comparing on Corpus Level Dialog Behavior

Figure 1 shows the results of comparisons using domain independent high-level dialog features of our corpora. The x-axis shows the evaluation measures; the y-axis shows the mean for each corpus normalized to the mean of the human user corpus. Error bars show the standard deviations of the mean values. As we can see from the figure, the Random Model performs differently from the human and all the other simulated models. There is no difference in dialog behaviors among the human corpus, the trained and the manual simulated corpora. In sum, both the Trained KC Models and the Manual KC Models can generate human-like high-level dialog behaviors while the Random Model cannot.

5.2 Comparing on Dialog Strategy Learning Task

Next, we compare the difference in dialog strategy learning when training on the simulated corpora using similar approaches in (Tetreault and Litman, 2008). Table 3 shows the transition probabilities starting from the state (c, lc). For example, the first cell shows in the Tmore corpus, the probability of starting from state (c, lc), getting a strong feedback, and transitioning into the same state is 24.82%. We calculate the same table for the other three states (c, hc), (ic, lc), and (ic, hc). Using paired-sample t-tests with bonferroni corrections, the only significant differences are observed between the random simulated corpus and each of the other simulated corpora.

Table 2: An Example of Comparing Simulated Actions with Human User Actions.
i-th Turn | Human | Simulation Model | Simulation Output | Correctly Predicted Actions | Reciprocal Ranking
Student1  | ic    | 60% ic, 40% c    | ic                | 1                           | 1
Student2  | c     | 70% ic, 30% c    | ic                | 0                           | 1/2
Average   | /     | /                | /                 | (1+0)/2                     | (1+1/2)/2

Figure 1: Comparison of human and simulated dialogs by high-level dialog features.

Table 3: Comparisons of MDP transition probabilities at state (c, lc) (Numbers in this table are percentages).
          | Tmore | Tless | M1    | M2    | Ran
s→c lc    | 24.82 | 31.42 | 25.64 | 22.70 | 13.25
w→c lc    | 17.64 | 12.35 | 16.62 | 18.85 | 9.74
s→ic lc   | 2.11  | 7.07  | 1.70  | 1.63  | 19.31
w→ic lc   | 1.80  | 2.17  | 2.05  | 3.25  | 21.06
s→c hc    | 29.95 | 26.46 | 22.23 | 31.04 | 10.54
w→c hc    | 13.93 | 9.50  | 22.73 | 15.10 | 11.29
s→ic hc   | 5.52  | 2.51  | 4.29  | 0.54  | 7.13
w→ic hc   | 4.24  | 9.08  | 4.74  | 6.89  | 7.68
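For concreteness, the following sketch computes the two measures of Section 4.3 on the Table 2 example shown above. The data layout is invented for illustration, and the simulation's most probable action is used here to stand in for its output; it is not the authors' evaluation script.

```python
# Sketch of Accuracy (Eq. 5) and Mean Reciprocal Ranking (Eq. 6) on the Table 2 example.
def accuracy(items):
    """items: list of (human_action, {action: probability}) pairs."""
    correct = sum(1 for human, dist in items if max(dist, key=dist.get) == human)
    return correct / len(items)

def mean_reciprocal_rank(items):
    total = 0.0
    for human, dist in items:
        ranked = sorted(dist, key=dist.get, reverse=True)   # higher probability = higher rank
        rank = ranked.index(human) + 1                      # rank of the matching action
        total += 1.0 / rank
    return total / len(items)

# Student1: human "ic" vs. (60% ic, 40% c); Student2: human "c" vs. (70% ic, 30% c).
items = [("ic", {"ic": 0.6, "c": 0.4}), ("c", {"ic": 0.7, "c": 0.3})]
print(accuracy(items))              # 0.5  -> (1 + 0) / 2
print(mean_reciprocal_rank(items))  # 0.75 -> (1 + 1/2) / 2
```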
Table 4: Comparisons of ECR of learned dialog strategies.
    | Tmore | Tless | M1    | M2    | Ran
ECR | 15.10 | 11.72 | 15.24 | 15.51 | 7.03
CI  | ±2.21 | ±1.95 | ±2.07 | ±3.46 | ±2.11

We also use a MDP toolkit to learn dialog strategies from all the simulated corpora and then compute the Expected Cumulative Reward (ECR) for the learned strategies. In Table 4, the upper part of each cell shows the ECR of the learned dialog strategy; the lower part of the cell shows the 95% Confidence Interval (CI) of the ECR. We can see from the overlap of the confidence intervals that the only significant difference is observed between the dialog strategy trained from the random simulated corpus and the strategies trained from each of the other simulated corpora. Also, it is interesting to see that the CI of the two manual simulations overlap more with the CI of Tmore model than with the CI of the Tless model. In sum, the manual user simulations work as well as the trained user simulation when being used to generate a training corpus to apply MDPs to learn new dialog strategies.

Table 5: Comparisons of correctly predicted human user actions.
         | Tmore        | Tless        | M1           | M2           | Ran
Accuracy | 0.78 (±0.01) | 0.60 (±0.02) | 0.70 (±0.02) | 0.72 (±0.02) | 0.41 (±0.02)
MRR      | 0.72 (±0.02) | 0.52 (±0.02) | 0.63 (±0.02) | 0.64 (±0.01) | 0.32 (±0.02)

5.3 Comparisons in Dialog System Evaluation

Finally, we compare how accurately the user simulations can predict human user actions given the same dialog context. Table 5 shows the averages and CIs (in parenthesis) from the four fold cross validations. The second row shows the results based on direct comparisons with human user actions, and the third row shows the mean reciprocal ranking of simulated actions. We observe that in terms of both the accuracy and the reciprocal ranking, the performance ranking from the highest to the lowest (with significant difference between adjacent ranks) is: the Tmore Model, both of the manual models (no significant differences between these two models), the Tless Model, and the Ran Model. Therefore, we suggest that the handcrafted user simulation is not sufficient to be used in evaluating dialog systems because it does not generate user actions that are as similar to human user actions. However, the handcrafted user simulation is still better than a user simulation trained with not enough training data. This result also indicates that this evaluation measure has more differentiating power than the previous measures since it captures significant differences that are not shown by the previous measures. In sum, the Tmore simulation performs the best in predicting human user actions.

6 Conclusion and Future Work

Setting up user action probabilities in user simulation is a non-trivial task, especially when no training data or only a small amount of data is available. In this study, we compare several approaches in setting up user action probabilities for the same simulation model: training from all available human user data, training from half of the available data, two handcrafting approaches which use the same expert knowledge but differ slightly in individual probability assignments, and a baseline approach which randomly assigns all user action probabilities. We compare the built simulations from different aspects. We find that the two trained simulations and the two handcrafted simulations outperform the random simulation in all tasks.
No significant difference is observed among the trained and the handcrafted simulations when comparing their generated corpora on corpus-level dialog features as well as when serving as the training corpora for learning dialog strategies. However, the simulation trained from all available human user data can predict human user actions more accurately than the handcrafted simulations, which again perform better than the model trained from half of the human user corpus. Nevertheless, no significant difference is observed between the two handcrafted simulations. Our study takes a first step in comparing the choices of handcrafting versus training user simulations when only limited or even no training data is available, e.g., when constructing a new dialog system. As shown for our task setting, both types of user simulations can be used in generating training data for learning new dialog strategies. However, we observe (as in a prior study by (Schatzmann et al., 2007b)) that the simulation trained from more user data has a better chance to outperform the simulation trained from less training data. We also observe that a handcrafted user simulation with expert knowledge can reach the performance of the better trained simulation. However, a certain level of expert knowledge is needed in handcrafting user simulations since a random simulation does not perform well in any tasks. Therefore, our results suggest that if an expert is available for designing a user simulation when not enough user data is collected, it may be better to handcraft the user simulation than training the simulation from the small amount of human user data. However, it is another open research question to answer how much data is enough for training a user simulation, which depends on many factors such as the complexity of the user simulation model. When using simulations to test a dialog system, our results suggest that once we have enough human user data, it is better to use the data to train a new simulation to replace the handcrafted simulation. In the future, we will conduct follow up studies to confirm our current findings since there are several factors that can impact our results. First of all, our current system mainly distinguishes the student answers as correct and incorrect. We are currently looking into dividing the incorrect student answers into more categories (such as partially correct answers, vague answers, or overspecific answers) which will increase the number of simulated user actions. Also, although the size of the human corpus which we build the trained user simulations from is comparable to other studies (e.g., (Rieser and Lemon, 2006), (Schatzmann et al., 2007b)), using a larger human corpus may improve the performance of the trained simulations. We are in the process of collecting another corpus which will consist of 60 human users (300 dialogs). We plan to re-train a simulation when this new corpus is available. Also, we would be able to train more complex models (e.g., a simulation model which takes into account a longer dialog history) with the extra data. Finally, although we add some noise into the current manual simulation designed by our domain expert to account for variations of expert knowledge, we would like to recruit another human expert to construct a new manual simulation to compare with the existing simulations. It would also be interesting to replicate our experiments on other dialog systems to see whether our observations will generalize. 
Our long term goal is to provide guidance of how to effectively build user simulations for different dialog system development tasks given limited resources. Acknowledgments The first author is supported by Mellon Fellowship from the University of Pittsburgh. This work is supported partially by NSF 0325054. We thank K. Forbes-Riley, P. Jordan and the anonymous reviewers for their insightful suggestions. References H. Ai and D. Litman. 2006. Comparing Real-Real, Simulated-Simulated, and Simulated-Real Spoken 895 Dialogue Corpora. In Proc. of the AAAI Workshop on Statistical and Empirical Approaches for Spoken Dialogue Systems. H. Ai and D. Litman. 2007. Knowledge Consistent User Simulations for Dialog Systems. In Proc. of Interspeech 2007. H. Ai, J. Tetreault, and D. Litman. 2007. Comparing User Simulation Models for Dialog Strategy Learning. In Proc. of NAACL-HLT 2007. H. Cen, K. Koedinger and B. Junker. 2006. Learning Factors Analysis-A General Method for Cognitive Model Evaluation and Improvement. In Proc. of 8th International Conference on ITS. S. Craig, A. Graesser, J. Sullins, and B. Gholson. 2004. Affect and learning: an exploratory look into the role of affect in learning with AutoTutor. Journal of Educational Media 29(3), 241250. K. Georgila, J. Henderson, and O. Lemon. 2005. Learning User Simulations for Information State Update Dialogue Systems. In Proc. of Interspeech 2005. K. Georgila, M. Wolters, and J. Moore. 2008. Simulating the Behaviour of Older versus Younger Users when Interacting with Spoken Dialogue Systems. In Proc. of 46th ACL. S. Janarthanam and O. Lemon. 2008. User simulations for online adaptation and knowledge-alignment in Troubleshooting dialogue systems. In Proc. of the 12th SEMdial Workshop on on the Semantics and Pragmatics of Dialogues. O. Lemon and X. Liu. 2007. Dialogue Policy Learning for combinations of Noise and User Simulation: transfer results. In Proc. of 8th SIGdial. D. Litman and S. Silliman. 2004. ITSPOKE: An Intelligent Tutoring Spoken Dialogue System. In Companion Proc. of the Human Language Technology: NAACL. R. L´opez-C´ozar, A. De la Torre, J. C. Segura and A. J. Rubio. 2003. Assessment of dialogue systems by means of a new simulation technique. Speech Communication (40): 387-407. T. Paek. 2006. Reinforcement learning for spoken dialogue systems: Comparing strengths and weaknesses for practical deployment. In Proc. of Interspeech-06 Workshop on ”Dialogue on Dialogues - Multidisciplinary Evaluation of Advanced Speech-based Interacive Systems”. D. Radev, H. Qi, H. Wu, and W. Fan. 2002. Evaluating web-based question answering systems. In Proc. of LREC 2002. V. Rieser and O. Lemon. 2006. Cluster-based User Simulations for Learning Dialogue Strategies. In Proc. of Interspeech 2006. J. Schatzmann, K. Georgila, and S. Young. 2005. Quantitative Evaluation of User Simulation Techniques for Spoken Dialogue Systems. In Proc. of 6th SIGDial. J. Schatzmann, K. Weilhammer, M. Stuttle, and S. Young. 2006. A Survey of Statistical User Simulation Techniques for Reinforcement-Learning of Dialogue Management Strategies. Knowledge Engineering Review 21(2): 97-126. J. Schatzmann, B. Thomson, K. Weilhammer, H. Ye, and S. Young. 2007a. Agenda-based User Simulation for Bootstrapping a POMDP Dialogue System. In Proc. of HLT/NAACL 2007. J. Schatzmann, B. Thomson and S. Young. 2007b. Statistical User Simulation with a Hidden Agenda. In Proc. of 8th SIGdial. J. Tetreault and D. Litman. 2008. 
A Reinforcement Learning Approach to Evaluating State Representations in Spoken Dialogue Systems. Speech Communication (Special Issue on Evaluating new methods and models for advanced speech-based interactive systems), 50(8-9): 683-696. K. VanLehn, P. Jordan, C. Rosé, D. Bhembe, M. Böttner, A. Gaydos, M. Makatchev, U. Pappuswamy, M. Ringenberg, A. Roque, S. Siler, R. Srivastava, and R. Wilson. 2002. The architecture of Why2-Atlas: A coach for qualitative physics essay writing. In Proc. Intelligent Tutoring Systems Conference. J. Williams and S. Young. 2007. Partially Observable Markov Decision Processes for Spoken Dialog Systems. Computer Speech and Language 21(2): 231422.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 897–904, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Dialogue Segmentation with Large Numbers of Volunteer Internet Annotators T. Daniel Midgley Discipline of Linguistics, School of Computer Science and Software Engineering University of Western Australia Perth, Australia [email protected] Abstract This paper shows the results of an experiment in dialogue segmentation. In this experiment, segmentation was done on a level of analysis similar to adjacency pairs. The method of annotation was somewhat novel: volunteers were invited to participate over the Web, and their responses were aggregated using a simple voting method. Though volunteers received a minimum of training, the aggregated responses of the group showed very high agreement with expert opinion. The group, as a unit, performed at the top of the list of annotators, and in many cases performed as well as or better than the best annotator. 1 Introduction Aggregated human behaviour is a valuable source of information. The Internet shows us many examples of collaboration as a means of resource creation. Wikipedia, Amazon.com reviews, and Yahoo! Answers are just some examples of large repositories of information powered by individuals who voluntarily contribute their time and talents. Some NLP projects are now using this idea, notably the ÔESP GameÕ (von Ahn 2004), a data collection effort presented as a game in which players label images from the Web. This paper presents an extension of this collaborative volunteer ethic in the area of dialogue annotation. For dialogue researchers, the prospect of using volunteer annotators from the Web can be an attractive option. The task of training annotators can be time-consuming, expensive, and (if inter-annotator agreement turns out to be poor) risky. Getting Internet volunteers for annotation has its own pitfalls. Dialogue annotation is often not very interesting, so it can be difficult to attract willing participants. Experimenters will have little control over the conditions of the annotation and the skill of the annotators. Training will be minimal, limited to whatever an average Web surfer is willing to read. There may also be perverse or uncomprehending users whose answers may skew the data. This project began as an exploratory study about the intuitions of language users with regard to dialogue segmentation. We wanted information about how language users perceive dialogue segments, and we wanted to be able to use this information as a kind of gold standard against which we could compare the performance of an automatic dialogue segmenter. For our experiment, the advantages of Internet annotation were compelling. We could get free data from as many language users as we could attract, instead of just two or three well-trained experts. Having more respondents meant that our results could be more readily generalised to language users as a whole. We expected that multiple users would converge upon some kind of uniform result. What we found (for this task at least) was that large numbers of volunteers show very strong tendencies that correspond well to expert opinion, and that these patterns of agreement are surprisingly resilient in the face of noisy input from some users. We also gained some insights into the way that people perceived dialogue segments. 2 Segmentation While much work in dialogue segmentation centers around topic (e.g. Galley et al. 2003, Hsueh et al. 2006, Purver et al. 
2006), we decided to examine dialogue at a more finegrained level. The level of analysis that we have chosen corresponds most closely to adjacency pairs (after Sacks, Schegloff and Jefferson 1974), where a segment is made of matched sets of utterances from different speakers (e.g. question/answer or suggest/accept). We chose to segment dialogues this way in order to improve dialogue act tagging, and we think that 897 examining the back-and-forth detail of the mechanics of dialogue will be the most helpful level of analysis for this task. The back-and-forth nature of dialogue also appears in Clark and SchaeferÕs (1989) influential work on contributions in dialogue. In this view, two-party dialogue is seen as a set of cooperative acts used to add information to the common ground for the purpose of accomplishing some joint action. Clark and Schaefer map these speech acts onto contribution trees. Each utterance within a contribution tree serves either to present some proposition or to acknowledge a previous one. Accordingly, each contribution tree has a presentation phase and an acceptance phase. Participants in dialogue assume that items they present will be added to the common ground unless there is evidence to the contrary. However, participants do not always show acceptance of these items explicitly. Speaker B may repeat SpeakerÕs AÕs information verbatim to show understanding (as one does with a phone number), but for other kinds of information a simple Ôuh-huhÕ will constitute adequate evidence of understanding. In general, less and less evidence will be required the farther on in the segment one goes. In practice, then, segments have a tailing-off quality that we can see in many dialogues. Table 1 shows one example from Verbmobil-2, a corpus of appointment scheduling dialogues. (A description of this corpus appears in Alexandersson 1997.) A segment begins when WJH brings a question to the table (utterances 1 and 2 in our example), AHS answers it (utterance 3), and WJH acknowledges the response (utterance 4). At this point, the question is considered to be resolved, and a new contribution can be issued. WJH starts a new segment in utterance 5, and this utterance shows features that will be familiar to dialogue researchers: the number of words increases, as does the incidence of new words. By the end of this segment (utterance 8), AHS only needs to offer a simple ÔokayÕ to show acceptance of the foregoing. Our work is not intended to be a strict implementation of Clark and SchaeferÕs contribution trees. The segments represented by these units is what we were asking our volunteer annotators to find. Other researchers have also used a level of analysis similar to our own. JšnssonÕs (1991) initiative-response units is one example. Taking a cue from Mann (1987), we decided to describe the behaviour in these segments using an atomic metaphor: dialogue segments have nuclei, where someone says something, and someone says something back (roughly corresponding to adjacency pairs), and satellites, usually shorter utterances that give feedback on whatever the nucleus is about. For our annotators, the process was simply to find the nuclei, with both speakers taking part, and then attach any nearby satellites that pertained to the segment. We did not attempt to distinguish nested adjacency pairs. These would be placed within the same segment. Eventually we plan to modify our system to recognise these nested pairs. 
3 Experimental Design 3.1 Corpus In the pilot phase of the experiment, volunteers could choose to segment up to four randomlychosen dialogues from the Verbmobil-2 corpus. (One longer dialogue was separated into two.) We later ran a replication of the experiment with eleven dialogues. For this latter phase, each volunteer started on a randomly chosen dialogue to ensure evenness of responses. The dialogues contained between 44 and 109 utterances. The average segment was 3.59 utterances in length, by our annotation. Two dialogues have not been examined because they will be used as held-out data for the next phase of our research. Results from the 1 WJH <uhm> basically we have to be in Hanover for a day and a half 2 WJH correct 3 AHS right 4 WJH okay 5 WJH <uh> I am looking through my schedule for the next three months 6 WJH and I just noticed I am working all of Christmas week 7 WJH so I am going to do it in Germany if at all possible 8 AHS okay Table 1. A sample of the corpus. Two segments are represented here. 898 other thirteen dialogues appear in part 4 of this paper. 3.2 Annotators Volunteers were recruited via postings on various email lists and websites. This included a posting on the university events mailing list, sent to people associated with the university, but with no particular linguistic training. Linguistics first-year students and Computer Science students and staff were also informed of the project. We sent advertisements to a variety of international mailing lists pertaining to language, computation, and cognition, since these lists were most likely to have a readership that was interested in language. These included Linguist List, Corpora, CogLing-L, and HCSNet. An invitation also appeared on the personal blog of the first author. At the experimental website, volunteers were asked to read a brief description of how to annotate, including the descriptions of nuclei and satellites. The instruction page showed some examples of segments. Volunteers were requested not to return to the instruction page once they had started the experiment. The annotator guide with examples can be seen at the following URL: http://tinyurl.com/ynwmx9 A scheme that relies on volunteer annotation will need to address the issue of motivation. People have a desire to be entertained, but dialogue annotation can often be tedious and difficult. We attempted humor as a way of keeping annotators amused and annotating for as long as possible. After submitting a dialogue, annotators would see an encouraging page, sometimes with pretend ÔbadgesÕ like the one pictured in Figure 1. This was intended as a way of keeping annotators interested to see what comments would come next. Figure 2 shows statistics on how many dialogues were marked by any one IP address. While over half of the volunteers marked only one dialogue, many volunteers marked all four (or in the replication, all eleven) dialogues. Sometimes more than eleven dialogues were submitted from the same location, most likely due to multiple users sharing a computer. In all, we received 626 responses from about 231 volunteers (though this is difficult to determine from only the volunteersÕ IP numbers). We collected between 32 and 73 responses for each of the 15 dialogues. 3.3 Method of Evaluation We used the WindowDiff (WD) metric (Pevzner and Hearst 2002) to evaluate the responses of our volunteers against expert opinion (our responses). 
The WD algorithm calculates agreement between a reference copy of the corpus and a volunteerÕs hypothesis by moving a window over the utterances in the two corpora. The window has a size equal to half the average segment length. Within the window, the algorithm examines the number of segment boundaries in the reference and in the hypothesis, and a counter is augmented by one if they disagree. The WD score between the reference and the hypothesis is equal to the number of discrepancies divided by the number of measurements taken. A score of 0 would be given to two annotators who agree perfectly, and 1 would signify perfect disagreement. Figure 3 shows the WD scores for the volunteers. Most volunteers achieved a WD score between .15 and .2, with an average of á245. CohenÕs Kappa (!) (Carletta 1996) is another method of comparing inter-annotator agreement 0 30 60 90 120 150 1 2 3 4 5 6 7 8 9 10 11 >11 120 25 10 32 3 4 3 1 2 0 17 2 Number of annotators Number of dialogues completed Figure 2. Number of dialogues annotated by single IP addresses Figure 1. One of the screens that appears after an annotator submits a marked form. 899 in segmentation that is widely used in computational language tasks. It measures the observed agreement (AO) against the agreement we should expect by chance (AE), as follows: ! = AO - AE 1 - AE For segmentation tasks, ! is a more stringent method than WindowDiff, as it does not consider near-misses. Even so, ! scores are reported in Section 4. About a third of the data came from volunteers who chose to complete all eleven of the dialogues. Since they contributed so much of the data, we wanted to find out whether they were performing better than the other volunteers. This group had an average WD score of .199, better than the rest of the group at .268. However, skill does not appear to increase smoothly as more dialogues are completed. The highest performance came from the group that completed 5 dialogues (average WD = .187), the 0 25 50 75 100 125 150 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 Number of responses WindowDiff range lowest from those that completed 8 dialogues (. 299). 3.4 Aggregation We wanted to determine, insofar as was possible, whether there was a group consensus as to where the segment boundaries should go. We decided to try overlaying the results from all respondents on top of each other, so that each click from each respondent acted as a sort of vote. Figure 4 shows the result of aggregating annotator responses from one dialogue in this way. There are broad patterns of agreement; high ÔpeaksÕ where many annotators agreed that an utterance was a segment boundary, areas of uncertainty where opinion was split between two adjacent utterances, and some background noise from near-random respondents. Group opinion is manifested in these peaks. Figure 5 shows a hypothetical example to illustrate how we defined this notion. A peak is any local maximum (any utterance u where u - 1 < u > u + 1) above background noise, which we define as any utterance with a number of votes below the arithmetic mean. Utterance 5, being a local maximum, is a peak. Utterance 2, though a local maximum, is not a peak as it is below the mean. Utterance 4 has a comparatively large number of votes, but it is not considered a peak because its neighbour, utterance 5, is higher. Defining peaks this way allows us to focus on the points of highest agreement, while ignoring not only the relatively low-scoring utterances, Figure 4. The results for one dialogue. 
Each utterance in the dialogue is represented in sequence along the x axis. Numbers in dots represent the number of respondents that ÔvotedÕ for that utterance as a segment boundary. Peaks appear where agreement is strongest. A circle around a data point indicates our choices for segment boundary. 0 5 10 15 20 25 30 35 40 45 after_e059ach1_000_ANV_00 after_e059ach1_000_ANV_01 after_e059ach2_001_CNK_02 after_e059ach2_001_CNK_03 after_e059ach1_002_ANV_04 after_e059ach1_002_ANV_05 after_e059ach1_002_ANV_06 after_e059ach2_003_CNK_07 after_e059ach1_004_ANV_08 after_e059ach2_005_CNK_09 after_e059ach1_006_ANV_10 after_e059ach1_006_ANV_11 after_e059ach1_006_ANV_12 after_e059ach2_007_CNK_13 after_e059ach2_007_CNK_14 after_e059ach2_007_CNK_15 after_e059ach1_008_ANV_16 after_e059ach1_008_ANV_17 after_e059ach1_008_ANV_18 after_e059ach1_008_ANV_19 after_e059ach2_009_CNK_20 after_e059ach1_010_ANV_21 after_e059ach1_010_ANV_22 after_e059ach1_010_ANV_23 after_e059ach1_010_ANV_24 after_e059ach2_011_CNK_25 after_e059ach1_012_ANV_26 after_e059ach1_012_ANV_27 after_e059ach2_013_CNK_28 after_e059ach2_014_CNK_29 after_e059ach1_015_ANV_30 after_e059ach1_016_ANV_31 after_e059ach1_016_ANV_32 after_e059ach1_016_ANV_33 after_e059ach2_017_CNK_34 after_e059ach1_018_ANV_35 after_e059ach1_018_ANV_36 after_e059ach1_018_ANV_37 after_e059ach2_019_CNK_38 after_e059ach2_019_CNK_39 after_e059ach1_020_ANV_40 after_e059ach2_021_CNK_41 after_e059ach2_021_CNK_42 after_e059ach1_022_ANV_43 after_e059ach2_023_CNK_44 after_e059ach1_024_ANV_45 after_e059ach2_025_CNK_46 after_e059ach1_026_ANV_47 after_e059ach2_027_CNK_48 after_e059ach1_028_ANV_49 after_e059ach2_029_CNK_50 after_e059ach2_030_CNK_51 after_e059ach2_030_CNK_52 after_e059ach2_030_CNK_53 after_e059ach2_030_CNK_54 after_e059ach2_030_CNK_55 after_e059ach2_030_CNK_56 after_e059ach2_030_CNK_57 after_e059ach1_031_ANV_58 after_e059ach2_032_CNK_59 after_e059ach1_033_ANV_60 after_e059ach2_034_CNK_61 after_e059ach1_035_ANV_62 after_e059ach1_035_ANV_63 after_e059ach2_036_CNK_64 after_e059ach1_037_ANV_65 after_e059ach2_038_CNK_66 after_e059ach2_039_CNK_67 after_e059ach1_040_ANV_68 after_e059ach1_040_ANV_69 after_e059ach1_040_ANV_70 after_e059ach2_041_CNK_71 after_e059ach1_042_ANV_72 after_e059ach1_042_ANV_73 after_e059ach2_043_CNK_74 after_e059ach1_044_ANV_75 22 37 3 41 11 24 2 6 30 6 3 643 37 3 0 3 24 2 12 023 38 44 9 1 86 32 38 12 23 8 24 17 1 42 221 37 32200 4 132 15 2 35 44 25 10 21 26 16 1 4 7 28 43 36 e059 n = 42 mean = 9.89 Figure 3. WD scores for individual responses. A score of 0 indicates perfect agreement. 900 but also the potentially misleading utterances near a peak. There are three disagreements in the dialogue presented in Figure 4. For the first, annotators saw a break where we saw a continuation. The other two disagreements show the reverse: annotators saw a continuation of topic as a continuation of segment. 4 Results Table 2 shows the agreement of the aggregated group votes with regard to expert opinion. The aggregated responses from the volunteer annotators agree extremely well with expert opinion. Acting as a unit, the groupÕs WindowDiff scores always perform better than the individual annotators on average. While the individual annotators attained an average WD score of .245, the annotators-as-group scored WD = .108. On five of the thirteen dialogues, the group performed as well as or better than the best individual annotator. 
On the other eight dialogues, the group performance was toward the top of the group, bested by one annotator (three times), two annotators (once), four annotators (three times), or six annotators (once), out of a field of 32Ð73 individuals. This suggests that aggregating the scores in this way causes a Ômajority ruleÕ effect that brings out the best answers of the group. One drawback of the WD statistic (as opposed to !) is that there is no clear consensus for what constitutes Ôgood agreementÕ. For computational linguistics, ! ! .67 is generally considered strong agreement. We found that ! for the aggregated group ranged from .71 to .94. Over all the dialogues, ! = á84. This is surprisingly high agreement for a dialogue-level task, especially considering the stringency of the ! statistic, and that the data comes from untrained volunteers, none of whom were dropped from the sample. 5 Comparison to Trivial Baselines We used a number of trivial baselines to see if our results could be bested by simple means. These were random placement of boundaries, majority class, marking the last utterance in each turn as a boundary, and a set of hand-built rules we called Ôthe TriggerÕ. The results of these trials can be seen in Figure 6. Dialogue name WD average as marked by volunteers WD single annotator best WD single annotator worst WD for group opinion How many annotators did better? Number of annotators e041a 0.210 0.094 0.766 0.094 0 39 e041b 0.276 0.127 0.794 0.095 0 39 e059 0.236 0.080 0.920 0.107 1 42 e081a 0.244 0.037 0.611 0.148 4 36 e081b 0.267 0.093 0.537 0.148 4 32 e096a 0.219 0.083 0.604 32 e096b 0.160 0.000 0.689 0.044 1 36 e115 0.214 0.079 0.750 0.079 0 34 e119 0.241 0.102 0.610 32 e123a 0.259 0.043 1.000 0.174 6 34 e123b 0.193 0.093 0.581 0.047 0 33 e030 0.298 0.110 0.807 0.147 2 55 e066 0.288 0.063 0.921 0.063 0 69 e076a 0.235 0.026 0.868 0.053 1 73 e076b 0.270 0.125 0.700 0.175 4 40 ALL 0.245 0.000 1.000 0.108 60 626 Table 2. Summary of WD results for dialogues. Data has not been aggregated for two dialogues because they are being held out for future work. mean = 9.5 utt1 utt2 utt3 utt4 utt5 utt6 2 7 3 11 27 5 Figure 5. Defining the notion of ÔpeakÕ. Numbers in circles indicate number of ÔvotesÕ for that utterance as a boundary. 901 5.1 Majority Class This baseline consisted of marking every utterance with the most common classification, which was Ônot a boundaryÕ. (About one in four utterances was marked as the end of a segment in the reference dialogues.) This was one of the worst case baselines, and gave WD = .551 over all dialogues. 5.2 Random Boundary Placement We used a random number generator to randomly place as many boundaries in each dialogue as we had in our reference dialogues. This method gave about the same accuracy as the Ômajority classÕ method with WD = .544. 5.3 Last Utterance in Turn In these dialogues, a speakerÕs turn could consist of more than one utterance. For this baseline, every final utterance in a turn was marked as the beginning of a segment, except when lone utterances would have created a segment with only one speaker. This method was suggested by work from Sacks, Schegloff, and Jefferson (1974) who observed that the last utterance in a turn tends to be the first pair part for another adjacency pair. Wright, Poesio, and Isard (1999) used a variant of this idea in a dialogue act tagger, including not only the previous utterance as a feature, but also the previous speakerÕs last speech act type. This method gave a WD score of .392. 
5.4 The Trigger This method of segmentation was a set of handbuilt rules created by the author. In this method, two conditions have to exist in order to start a new segment. ¥ Both speakers have to have spoken. ¥ One utterance must contain four words or less. The Ôfour wordsÕ requirement was determined empirically during the feature selection phase of an earlier experiment. Once both these conditions have been met, the ÔtriggerÕ is set. The next utterance to have more than four words is the start of a new segment. This method performed comparatively well, with WD = .210, very close to the average individual annotator score of .245. As mentioned, the aggregated annotator score was WD = .108. 0 0.1 0.2 0.3 0.4 0.5 0.6 Majority Random Last utterance Trigger Group 0.108 0.210 0.392 0.544 0.551 WD scores Figure 6. Comparison of the groupÕs aggregated responses to trivial baselines. 5.5 Comparison to Other Work Comparing these results to other work is difficult because very little research focuses on dialogue segmentation at this level of analysis. Jšnsson (1991) uses initiative-response pairs as a part of a dialogue manager, but does not attempt to recognise these segments explicitly. Comparable statistics exist for a different task, that of multiparty topic segmentation. WD scores for this task fall consistently into the .25 range, with Galley et al. (2003) at .254, Hsueh et al. (2006) at .283, and Purver et al. (2006) at .á284. We can only draw tenuous conclusions between this task and our own, however this does show the kind of scores we should be expecting to see for a dialogue-level task. A more similar project would help us to make a more valid comparison. 6 Discussion The discussion of results will follow the two foci of the project: first, some comments about the aggregation of the volunteer data, and then some comments about the segmentation itself. 6.1 Discussion of Aggregation A combination of factors appear to have contributed to the success of this method, some involving the nature of the task itself, and some involving the nature of aggregated group opinion, which has been called Ôthe wisdom of crowdsÕ (for an informal introduction, see Surowiecki 2004). The fact that annotator responses were aggregated means that no one annotator had to perform particularly well. We noticed a range of styles among our annotators. Some annotators agreed very well with the expert opinion. A few 902 annotators seemed to mark utterances in nearrandom ways. Some Ôcasual annotatorsÕ seemed to drop in, click only a few of the most obvious boundaries in the dialogue, and then submit the form. This kind of behaviour would give that annotator a disastrous individual score, but when aggregated, the work of the casual annotator actually contributes to the overall picture provided by the group. As long as the wrong responses are randomly wrong, they do not detract from the overall pattern and no volunteers need to be dropped from the sample. It may not be surprising that people with language experience tend to arrive at more or less the same judgments on this kind of task, or that the aggregation of the group data would normalise out the individual errors. What is surprising is that the judgments of the group, aggregated in this way, correspond more closely to expert opinion than (in many cases) the best individual annotators. 
6.2 Discussion of Segmentation The concept of segmentation as described here, including the description of nuclei and satellites, appears to be one that annotators can grasp even with minimal training. The task of segmentation here is somewhat different from other classification tasks. Annotators were asked to find segment boundaries, making this essentially a two-class classification task where each utterance was marked as either a boundary or not a boundary. It may be easier for volunteers to cope with fewer labels than with many, as is more common in dialogue tasks. The comparatively low perplexity would also help to ensure that volunteers would see the annotation through. One of the outcomes of seeing annotator opinion was that we could examine and learn from cases where the annotators voted overwhelmingly contrary to expert opinion. This gave us a chance to learn from what the human annotators thought about language. Even though these results do not literally come from one person, it is still interesting to look at the general patterns suggested by these results. ÔletÕs seeÕ: This utterance usually appears near boundaries, but does it mark the end of a segment, or the beginning of a new one? We tended to place it at the end of the previous segment, but human annotators showed a very strong tendency to group it with the next segment. This was despite an example on the training page that suggested joining these utterances with the previous segment. Topic: The segments under study here are different from topic. The segments tend to be smaller, and they focus on the mechanics of the exchanges rather than centering around one topic to its conclusion. Even though the annotators were asked to mark for adjacency pairs, there was a distinct tendency to mark longer units more closely pertaining to topic. Table 3 shows one example. We had marked the space between utterances 2 and 3 as a boundary; volunteers ignored it. It was slightly more common for annotators to omit our boundaries than to suggest new ones. The average segment length was 3.64 utterances for our volunteers, compared with 3.59 utterances for experts. Areas of uncertainty: At certain points on the chart, opinion seemed to be split as one or more potential boundaries presented themselves. This seemed to happen most often when two or more of the same speech act appeared sequentially, e.g. two or more questions, information-giving statements, or the like. 7 Conclusions and Future Work We drew a number of conclusions from this study, both about the viability of our method, and about the outcomes of the study itself. First, it appears that for this task, aggregating the responses from a large number of anonymous volunteers is a valid method of annotation. We would like to see if this pattern holds for other kinds of classification tasks. If it does, it could have tremendous implications for dialogue-level annotation. Reliable results could be obtained quickly and cheaply from large numbers of volunteers over the Internet, without the time, the expense, and the logistical complexity of training. At present, however, it is unclear whether this volunteer annotation 1 MGT so what time should we meet 2 ADB <uh> well it doesn't matter as long as we both checked in I mean whenever we meet is kind of irrelevant 3 ADB so maybe about try to 4 ADB you want to get some lunch at the airport before we go 5 MGT that is a good idea Table 3. Example from a dialogue. 903 technique could be extended to other classification tasks. 
It is possible that the strong agreement seen here would also be seen on any two-class annotation problem. A retest is underway with annotation for a different twoclass annotation set and for a multi-class task. Second, it appears that the concept of segmentation on the adjacency pair level, with this description of nuclei and satellites, is one that annotators can grasp even with minimal training. We found very strong agreement between the aggregated group answers and the expert opinion. We now have a sizable amount of information from language users as to how they perceive dialogue segmentation. Our next step is to use these results as the corpus for a machine learning task that can duplicate human performance. We are considering the Transformation-Based Learning algorithm, which has been used successfully in NLP tasks such as part of speech tagging (Brill 1995) and dialogue act classification (Samuel 1998). TBL is attractive because it allows one to start from a marked up corpus (perhaps the Trigger, as the best-performing trivial baseline), and improves performance from there. We also plan to use the information from the segmentation to examine the structure of segments, especially the sequences of dialogue acts within them, with a view to improving a dialogue act tagger. Acknowledgements Thanks to Alan Dench and to T. Mark Ellison for reviewing an early draft of this paper. We especially wish to thank the individual volunteers who contributed the data for this research. References Luis von Ahn and Laura Dabbish. 2004. Labeling images with a computer game. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 319Ð326. Jan Alexandersson, Bianka Buschbeck-Wolf, Tsutomu Fujinami, Elisabeth Maier, Norbert Reithinger, Birte Schmitz, and Melanie Siegel. 1997. Dialogue acts in VERBMOBIL-2. Verbmobil Report 204, DFKI, University of Saarbruecken. Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4): 543Ð565. Jean C. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2): 249Ð254. Herbert H. Clark and Edward F. Schaefer. 1989. Contributing to discourse. Cognitive Science, 13:259Ð294. Michael Galley, Kathleen McKeown, Eric FoslerLussier, and Hongyan Jing. 2003. Discourse segmentation of multi-party conversation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pp. 562Ð569. Pei-Yun Hsueh, Johanna Moore, and Steve Renals. 2006. Automatic segmentation of multiparty dialogue. In Proceedings of the EACL 2006, pp. 273Ð280. Arne Jšnsson. 1991. A dialogue manager using initiative-response units and distributed control. In Proceedings of the Fifth Conference of the European Association for Computational Linguistics, pp. 233Ð238. William C. Mann and Sandra A. Thompson. 1987. Rhetorical structure theory: A framework for the analysis of texts. In IPRA Papers in Pragmatics 1: 1-21. Lev Pevzner and Marti A. Hearst. 2002. A critique and improvement of an evaluation metric for text segmentation. Computational Linguistics, 28(1): 19Ð36. Matthew Purver, Konrad P. Kšrding, Thomas L. Griffiths, and Joshua B. Tenenbaum. 2006. Unsupervised topic modelling for multi-party spoken discourse. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, pp. 17Ð24. Harvey Sacks, Emanuel A. Schegloff, and Gail Jefferson. 1974. 
A simplest systematics for the organization of turn-taking for conversation. Language, 50:696Ð735. Ken Samuel, Sandra Carberry, and K. Vijay-Shanker. 1998. Dialogue act tagging with transformationbased learning. In Proceedings of COLING/ ACL'98, pp. 1150Ð1156. James Surowiecki. 2004. The wisdom of crowds: Why the many are smarter than the few. Abacus: London, UK. Helen Wright, Massimo Poesio, and Stephen Isard. 1999. Using high level dialogue information for dialogue act recognition using prosodic features. In DIAPRO-1999, pp. 139Ð143. 904
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 905–913, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Robust Approach to Abbreviating Terms: A Discriminative Latent Variable Model with Global Information Xu Sun†, Naoaki Okazaki†, Jun’ichi Tsujii†‡§ †Department of Computer Science, University of Tokyo, Hongo 7-3-1, Bunkyo-ku, Tokyo 113-0033, Japan ‡School of Computer Science, University of Manchester, UK §National Centre for Text Mining, UK {sunxu, okazaki, tsujii}@is.s.u-tokyo.ac.jp Abstract The present paper describes a robust approach for abbreviating terms. First, in order to incorporate non-local information into abbreviation generation tasks, we present both implicit and explicit solutions: the latent variable model, or alternatively, the label encoding approach with global information. Although the two approaches compete with one another, we demonstrate that these approaches are also complementary. By combining these two approaches, experiments revealed that the proposed abbreviation generator achieved the best results for both the Chinese and English languages. Moreover, we directly apply our generator to perform a very different task from tradition, the abbreviation recognition. Experiments revealed that the proposed model worked robustly, and outperformed five out of six state-of-the-art abbreviation recognizers. 1 Introduction Abbreviations represent fully expanded forms (e.g., hidden markov model) through the use of shortened forms (e.g., HMM). At the same time, abbreviations increase the ambiguity in a text. For example, in computational linguistics, the acronym HMM stands for hidden markov model, whereas, in the field of biochemistry, HMM is generally an abbreviation for heavy meromyosin. Associating abbreviations with their fully expanded forms is of great importance in various NLP applications (Pakhomov, 2002; Yu et al., 2006; HaCohen-Kerner et al., 2008). The core technology for abbreviation disambiguation is to recognize the abbreviation definitions in the actual text. Chang and Sch¨utze (2006) reported that 64,242 new abbreviations were introduced into the biomedical literatures in 2004. As such, it is important to maintain sense inventories (lists of abbreviation definitions) that are updated with the neologisms. In addition, based on the one-sense-per-discourse assumption, the recognition of abbreviation definitions assumes senses of abbreviations that are locally defined in a document. Therefore, a number of studies have attempted to model the generation processes of abbreviations: e.g., inferring the abbreviating mechanism of the hidden markov model into HMM. An obvious approach is to manually design rules for abbreviations. Early studies attempted to determine the generic rules that humans use to intuitively abbreviate given words (Barrett and Grems, 1960; Bourne and Ford, 1961). Since the late 1990s, researchers have presented various methods by which to extract abbreviation definitions that appear in actual texts (Taghva and Gilbreth, 1999; Park and Byrd, 2001; Wren and Garner, 2002; Schwartz and Hearst, 2003; Adar, 2004; Ao and Takagi, 2005). For example, Schwartz and Hearst (2003) implemented a simple algorithm that mapped all alpha-numerical letters in an abbreviation to its expanded form, starting from the end of both the abbreviation and its expanded forms, and moving from right to left. These studies performed highly, especially for English abbreviations. 
However, a more extensive investigation of abbreviations is needed in order to further improve definition extraction. In addition, we cannot simply transfer the knowledge of the hand-crafted rules from one language to another. For instance, in English, abbreviation characters are preferably chosen from the initial and/or capital characters in their full forms, whereas some 905 p o l y g l y c o l i c a c i d P S S S P S S S S S S S S P S S S [PGA] 历史语言研究所 S P P S S S P [史语所] Institute of History and Philology at Academia Sinica (b): Chinese Abbreviation Generation (a): English Abbreviation Generation Figure 1: English (a) and Chinese (b) abbreviation generation as a sequential labeling problem. other languages, including Chinese and Japanese, do not have word boundaries or case sensitivity. A number of recent studies have investigated the use of machine learning techniques. Tsuruoka et al. (2005) formalized the processes of abbreviation generation as a sequence labeling problem. In the present study, each character in the expanded form is tagged with a label, y ∈{P, S}1, where the label P produces the current character and the label S skips the current character. In Figure 1 (a), the abbreviation PGA is generated from the full form polyglycolic acid because the underlined characters are tagged with P labels. In Figure 1 (b), the abbreviation is generated using the 2nd and 3rd characters, skipping the subsequent three characters, and then using the 7th character. In order to formalize this task as a sequential labeling problem, we have assumed that the label of a character is determined by the local information of the character and its previous label. However, this assumption is not ideal for modeling abbreviations. For example, the model cannot make use of the number of words in a full form to determine and generate a suitable number of letters for the abbreviation. In addition, the model would be able to recognize the abbreviating process in Figure 1 (a) more reasonably if it were able to segment the word polyglycolic into smaller regions, e.g., poly-glycolic. Even though humans may use global or non-local information to abbreviate words, previous studies have not incorporated this information into a sequential labeling model. In the present paper, we propose implicit and explicit solutions for incorporating non-local information. The implicit solution is based on the 1Although the original paper of Tsuruoka et al. (2005) attached case sensitivity information to the P label, for simplicity, we herein omit this information. y1 y2 ym xm x2 x1 h1 h2 hm xm x2 x1 ym y2 y1 CRF DPLVM Figure 2: CRF vs. DPLVM. Variables x, y, and h represent observation, label, and latent variables, respectively. discriminative probabilistic latent variable model (DPLVM) in which non-local information is modeled by latent variables. We manually encode nonlocal information into the labels in order to provide an explicit solution. We evaluate the models on the task of abbreviation generation, in which a model produces an abbreviation for a given full form. Experimental results indicate that the proposed models significantly outperform previous abbreviation generation studies. In addition, we apply the proposed models to the task of abbreviation recognition, in which a model extracts the abbreviation definitions in a given text. 
To the extent of our knowledge, this is the first model that can perform both abbreviation generation and recognition at the state-of-the-art level, across different languages and with a simple feature set. 2 Abbreviator with Non-local Information 2.1 A Latent Variable Abbreviator To implicitly incorporate non-local information, we propose discriminative probabilistic latent variable models (DPLVMs) (Morency et al., 2007; Petrov and Klein, 2008) for abbreviating terms. The DPLVM is a natural extension of the CRF model (see Figure 2), which is a special case of the DPLVM, with only one latent variable assigned for each label. The DPLVM uses latent variables to capture additional information that may not be expressed by the observable labels. For example, using the DPLVM, a possible feature could be “the current character xi = X, the label yi = P, and the latent variable hi = LV.” The non-local information can be effectively modeled in the DPLVM, and the additional information at the previous position or many of the other positions in the past could be transferred via the latent variables (see Figure 2). 906 Using the label set Y = {P, S}, abbreviation generation is formalized as the task of assigning a sequence of labels y = y1, y2, . . . , ym for a given sequence of characters x = x1, x2, . . . , xm in an expanded form. Each label, yj, is a member of the possible labels Y . For each sequence, we also assume a sequence of latent variables h = h1, h2, . . . , hm, which are unobservable in training examples. We model the conditional probability of the label sequence P(y|x) using the DPLVM, P(y|x, Θ) = X h P(y|h, x, Θ)P(h|x, Θ). (1) Here, Θ represents the parameters of the model. To ensure that the training and inference are efficient, the model is often restricted to have disjointed sets of latent variables associated with each label (Morency et al., 2007). Each hj is a member in a set Hyj of possible latent variables for the label yj. Here, H is defined as the set of all possible latent variables, i.e., H is the union of all Hyj sets. Since the sequences having hj /∈Hyj will, by definition, yield P(y|x, Θ) = 0, the model is rewritten as follows (Morency et al., 2007; Petrov and Klein, 2008): P(y|x, Θ) = X h∈Hy1×...×Hym P(h|x, Θ). (2) Here, P(h|x, Θ) is defined by the usual formulation of the conditional random field, P(h|x, Θ) = exp Θ·f(h, x) P ∀h exp Θ·f(h, x), (3) where f(h, x) represents a feature vector. Given a training set consisting of n instances, (xi, yi) (for i = 1 . . . n), we estimate the parameters Θ by maximizing the regularized loglikelihood, L(Θ) = n X i=1 log P(yi|xi, Θ) −R(Θ). (4) The first term expresses the conditional loglikelihood of the training data, and the second term represents a regularizer that reduces the overfitting problem in parameter estimation. 2.2 Label Encoding with Global Information Alternatively, we can design the labels such that they explicitly incorporate non-local information. 国家濒危物种进出口管理办公室 S S P S S S S S S P S P S S S0 S0 P1 S1 S1 S1 S1 S1 S1 P2 S2 P3 S3 S3 Management office of the imports and exports of endangered species Orig. GI Figure 3: Comparison of the proposed label encoding method with global information (GI) and the conventional label encoding method. In this approach, the label yi at position i attaches the information of the abbreviation length generated by its previous labels, y1, y2, . . . , yi−1. Figure 3 shows an example of a Chinese abbreviation. 
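As a concrete illustration of these two label representations, the sketch below first recovers an abbreviation from a plain P/S labeling as in Figure 1, and then re-encodes a P/S sequence into GI labels carrying the running count of P labels as in Figure 3. This is our own minimal illustration, not the authors' implementation, and case handling is omitted as in Section 1.

def apply_labels(characters, labels):
    """Generate an abbreviation by producing the characters labeled P and
    skipping those labeled S (case handling omitted, as in Section 1)."""
    return "".join(c for c, y in zip(characters, labels) if y.startswith("P"))

def encode_with_global_information(labels):
    """Re-encode a plain P/S labeling into GI labels that carry the running
    number of P labels, as in Figure 3 (e.g. S S P S -> S0 S0 P1 S1)."""
    produced = 0
    gi_labels = []
    for label in labels:
        if label == "P":
            produced += 1
        gi_labels.append(label + str(produced))
    return gi_labels

full_form = "polyglycolic acid"
labels = list("PSSSPSSSSSSSSPSSS")            # the labeling of Figure 1(a)
print(apply_labels(full_form, labels))        # -> 'pga'

plain = list("SSPSSSSSSPSPSS")                # a plain P/S labeling as in Figure 3
print(encode_with_global_information(plain))
# -> ['S0', 'S0', 'P1', 'S1', 'S1', 'S1', 'S1', 'S1', 'S1', 'P2', 'S2', 'P3', 'S3', 'S3']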
In this encoding, a label not only contains the produce or skip information, but also the abbreviation-length information, i.e., the label includes the number of all P labels preceding the current position. We refer to this method as label encoding with global information (hereinafter GI). The concept of using label encoding to incorporate non-local information was originally proposed by Peshkin and Pfeffer (2003). Note that the model-complexity is increased only by the increase in the number of labels. Since the length of the abbreviations is usually quite short (less than five for Chinese abbreviations and less than 10 for English abbreviations), the model is still tractable even when using the GI encoding. The implicit (DPLVM) and explicit (GI) solutions address the same issue concerning the incorporation of non-local information, and there are advantages to combining these two solutions. Therefore, we will combine the implicit and explicit solutions by employing the GI encoding in the DPLVM (DPLVM+GI). The effects of this combination will be demonstrated through experiments. 2.3 Feature Design Next, we design two types of features: languageindependent features and language-specific features. Language-independent features can be used for abbreviating terms in English and Chinese. We use the features from #1 to #3 listed in Table 1. Feature templates #4 to #7 in Table 1 are used for Chinese abbreviations. Templates #4 and #5 express the Pinyin reading of the characters, which represents a Romanization of the sound. Templates #6 and #7 are designed to detect character duplication, because identical characters will normally be skipped in the abbreviation process. On 907 #1 The input char. xi−1 and xi #2 Whether xj is a numeral, for j = (i −3) . . . i #3 The char. bigrams starting at (i −2) . . . i #4 The Pinyin of char. xi−1 and xi #5 The Pinyin bigrams starting at (i −2) . . . i #6 Whether xj = xj+1, for j = (i −2) . . . i #7 Whether xj = xj+2, for j = (i −3) . . . i #8 Whether xj is uppercase, for j = (i −3) . . . i #9 Whether xj is lowercase, for j = (i −3) . . . i #10 The char. 3-grams starting at (i −3) . . . i #11 The char. 4-grams starting at (i −4) . . . i Table 1: Language-independent features (#1 to #3), Chinese-specific features (#4 through #7), and English-specific features (#8 through #11). the other hand, such duplication detection features are not so useful for English abbreviations. Feature templates #8–#11 are designed for English abbreviations. Features #8 and #9 encode the orthographic information of expanded forms. Features #10 and #11 represent a contextual n-gram with a large window size. Since the number of letters in Chinese (more than 10K characters) is much larger than the number of letters in English (26 letters), in order to avoid a possible overfitting problem, we did not apply these feature templates to Chinese abbreviations. Feature templates are instantiated with values that occur in positive training examples. We used all of the instantiated features because we found that the low-frequency features also improved the performance. 3 Experiments For Chinese abbreviation generation, we used the corpus of Sun et al. (2008), which contains 2,914 abbreviation definitions for training, and 729 pairs for testing. This corpus consists primarily of noun phrases (38%), organization names (32%), and verb phrases (21%). For English abbreviation generation, we evaluated the corpus of Tsuruoka et al. (2005). 
This corpus contains 1,200 aligned pairs extracted from MEDLINE biomedical abstracts (published in 2001). For both tasks, we converted the aligned pairs of the corpora into labeled full forms and used the labeled full forms as the training/evaluation data. The evaluation metrics used in the abbreviation generation are exact-match accuracy (hereinafter accuracy), including top-1 accuracy, top-2 accuracy, and top-3 accuracy. The top-N accuracy represents the percentage of correct abbreviations that are covered, if we take the top N candidates from the ranked labelings of an abbreviation generator. We implemented the DPLVM in C++ and optimized the system to cope with large-scale problems. We employ the feature templates defined in Section 2.3, taking into account these 81,827 features for the Chinese abbreviation generation task, and the 50,149 features for the English abbreviation generation task. For numerical optimization, we performed a gradient descent with the Limited-Memory BFGS (L-BFGS) optimization technique (Nocedal and Wright, 1999). L-BFGS is a second-order Quasi-Newton method that numerically estimates the curvature from previous gradients and updates. With no requirement on specialized Hessian approximation, L-BFGS can handle largescale problems efficiently. Since the objective function of the DPLVM model is non-convex, different parameter initializations normally bring different optimization results. Therefore, to approach closer to the global optimal point, it is recommended to perform multiple experiments on DPLVMs with random initialization and then select a good start point. To reduce overfitting, we employed a L2 Gaussian weight prior (Chen and Rosenfeld, 1999), with the objective function: L(Θ) = Pn i=1 log P(yi|xi, Θ)−||Θ||2/σ2. During training and validation, we set σ = 1 for the DPLVM generators. We also set four latent variables for each label, in order to make a compromise between accuracy and efficiency. Note that, for the label encoding with global information, many label transitions (e.g., P2S3) are actually impossible: the label transitions are strictly constrained, i.e., yiyi+1 ∈ {PjSj, PjPj+1, SjPj+1, SjSj}. These constraints on the model topology (forward-backward lattice) are enforced by giving appropriate features a weight of −∞, thereby forcing all forbidden labelings to have zero probability. Sha and Pereira (2003) originally proposed this concept of implementing transition restrictions. 4 Results and Discussion 4.1 Chinese Abbreviation Generation First, we present the results of the Chinese abbreviation generation task, as listed in Table 2. To evaluate the impact of using latent variables, we chose the baseline system as the DPLVM, in which each label has only one latent variable. Since this 908 Model T1A T2A T3A Time Heu (S08) 41.6 N/A N/A N/A HMM (S08) 46.1 N/A N/A N/A SVM (S08) 62.7 80.4 87.7 1.3 h CRF 64.5 81.1 88.7 0.2 h CRF+GI 66.8 82.5 90.0 0.5 h DPLVM 67.6 83.8 91.3 0.4 h DPLVM+GI (*) 72.3 87.6 94.9 1.1 h Table 2: Results of Chinese abbreviation generation. T1A, T2A, and T3A represent top-1, top2, and top-3 accuracy, respectively. The system marked with the * symbol is the recommended system. special case of the DPLVM is exactly the CRF (see Section 2.1), this case is hereinafter denoted as the CRF. We compared the performance of the DPLVM with the CRFs and other baseline systems, including the heuristic system (Heu), the HMM model, and the SVM model described in S08, i.e., Sun et al. (2008). 
The heuristic method is a simple rule that produces the initial character of each word to generate the corresponding abbreviation. The SVM method described by Sun et al. (2008) is formalized as a regression problem, in which the abbreviation candidates are scored and ranked. The results revealed that the latent variable model significantly improved the performance over the CRF model. All of its top-1, top-2, and top-3 accuracies were consistently better than those of the CRF model. Therefore, this demonstrated the effectiveness of using the latent variables in Chinese abbreviation generation. As the case for the two alternative approaches for incorporating non-local information, the latent variable method and the label encoding method competed with one another (see DPLVM vs. CRF+GI). The results showed that the latent variable method outperformed the GI encoding method by +0.8% on the top-1 accuracy. The reason for this could be that the label encoding approach is a solution without the adaptivity on different instances. We will present a detailed discussion comparing DPLVM and CRF+GI for the English abbreviation generation task in the next subsection, where the difference is more significant. In contrast, to a larger extent, the results demonstrate that these two alternative approaches are complementary. Using the GI encoding further improved the performance of the DPLVM (with +4.7% on top-1 accuracy). We found that major 国家烟草专卖局 P S P S P S P P1 S1 P2 S2 S2 S2 P3 State Tobacco Monopoly Administration DPLVM DPLVM+GI 国烟专局[Wrong] 国烟局 [Correct] Figure 4: An example of the results. 0 10 20 30 40 50 60 70 80 0 1 2 3 4 5 6 Percentage (%) Length of Produced Abbr. Gold Train Gold Test DPLVM DPLVM+GI Figure 5: Percentage distribution of Chinese abbreviations/Viterbi-labelings grouped by length. improvements were achieved through the more exact control of the output length. An example is shown in Figure 4. The DPLVM made correct decisions at three positions, but failed to control the abbreviation length.2 The DPLVM+GI succeeded on this example. To perform a detailed analysis, we collected the statistics of the length distribution (see Figure 5) and determined that the GI encoding improved the abbreviation length distribution of the DPLVM. In general, the results indicate that all of the sequential labeling models outperformed the SVM regression model with less training time.3 In the SVM regression approach, a large number of negative examples are explicitly generated for the training, which slowed the process. The proposed method, the latent variable model with GI encoding, is 9.6% better with respect to the top-1 accuracy compared to the best system on this corpus, namely, the SVM regression method. Furthermore, the top-3 accuracy of the latent variable model with GI encoding is as high as 94.9%, which is quite encouraging for practical usage. 4.2 English Abbreviation Generation In the English abbreviation generation task, we randomly selected 1,481 instances from the gen2The Chinese abbreviation with length = 4 should have a very low probability, e.g., only 0.6% of abbreviations with length = 4 in this corpus. 3On Intel Dual-Core Xeon 5160/3 GHz CPU, excluding the time for feature generation and data input/output. 909 Model T1A T2A T3A Time CRF 55.8 65.1 70.8 0.3 h CRF+GI 52.7 63.2 68.7 1.3 h CRF+GIB 56.8 66.1 71.7 1.3 h DPLVM 57.6 67.4 73.4 0.6 h DPLVM+GI 53.6 63.2 69.2 2.5 h DPLVM+GIB (*) 58.3 N/A N/A 3.0 h Table 3: Results of English abbreviation generation. 
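The top-1, top-2, and top-3 accuracies reported in Tables 2 and 3 follow the definition of top-N accuracy given in Section 3. A minimal sketch of that metric is given below; it is our own illustration, and the toy data in the usage example is hypothetical rather than taken from the corpora used in the paper.

def top_n_accuracy(ranked_candidates, references, n):
    """Percentage of test instances whose reference abbreviation is covered
    by the top-n candidates of the generator (the top-N accuracy of Section 3).

    ranked_candidates: one list of candidate abbreviations per instance,
                       best-scored candidate first
    references:        the gold abbreviation of each instance
    """
    covered = sum(1 for candidates, gold in zip(ranked_candidates, references)
                  if gold in candidates[:n])
    return 100.0 * covered / len(references)

# Hypothetical toy data (not from the corpora used in the paper):
candidates = [["PGA", "PA", "PGCA"], ["HMM", "HM", "HMO"], ["SEPS", "SMEPS", "SP"]]
gold = ["PGA", "HMM", "SMEPS"]
print(top_n_accuracy(candidates, gold, 1))   # 66.66... (2 of 3 covered at top-1)
print(top_n_accuracy(candidates, gold, 2))   # 100.0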
somatosensory evoked potentials (a) P1P2 P3 P4 P5 SMEPS (b) P P P P SEPS (a): CRF+GI with p=0.001 [Wrong] (b): DPLVM with p=0.191 [Correct] Figure 6: A result of “CRF+GI vs. DPLVM”. For simplicity, the S labels are masked. eration corpus for training, and 370 instances for testing. Table 3 shows the experimental results. We compared the performance of the DPLVM with the performance of the CRFs. Whereas the use of the latent variables still significantly improves the generation performance, using the GI encoding undermined the performance in this task. In comparing the implicit and explicit solutions for incorporating non-local information, we can see that the implicit approach (the DPLVM) performs much better than the explicit approach (the GI encoding). An example is shown in Figure 6. The CRF+GI produced a Viterbi labeling with a low probability, which is an incorrect abbreviation. The DPLVM produced the correct labeling. To perform a systematic analysis of the superior-performance of DPLVM compare to CRF+GI, we collected the probability distributions (see Figure 7) of the Viterbi labelings from these models (“DPLVM vs. CRF+GI” is highlighted). The curves suggest that the data sparseness problem could be the reason for the differences in performance. A large percentage (37.9%) of the Viterbi labelings from the CRF+GI (ENG) have very small probability values (p < 0.1). For the DPLVM (ENG), there were only a few (0.5%) Viterbi labelings with small probabilities. Since English abbreviations are often longer than Chinese abbreviations (length < 10 in English, whereas length < 5 in Chinese4), using the GI encoding resulted in a larger label set in English. 4See the curve DPLVM+GI (CHN) in Figure 7, which could explain the good results of GI encoding for the Chinese task. 0 10 20 30 40 50 0 0.2 0.4 0.6 0.8 1 Percentage (%) Probability of Viterbi labeling CRF (ENG) CRF+GI (ENG) DPLVM (ENG) DPLVM+GI (ENG) DPLVM+GI (CHN) Figure 7: For various models, the probability distributions of the produced abbreviations on the test data of the English abbreviation generation task. mitomycin C DPLVM P P MC [Wrong] DPLVM+GI P1 P2 P3 MMC [Correct] Figure 8: Example of abbreviations composed of non-initials generated by the DPLVM and the DPLVM+GI. Hence, the features become more sparse than in the Chinese case.5 Therefore, a significant number of features could have been inadequately trained, resulting in Viterbi labelings with low probabilities. For the latent variable approach, its curve demonstrates that it did not cause a severe data sparseness problem. The aforementioned analysis also explains the poor performance of the DPLVM+GI. However, the DPLVM+GI can actually produce correct abbreviations with ‘believable’ probabilities (high probabilities) in some ‘difficult’ instances. In Figure 8, the DPLVM produced an incorrect labeling for the difficult long form, whereas the DPLVM+GI produced the correct labeling containing non-initials. Hence, we present a simple voting method to better combine the latent variable approach with the GI encoding method. We refer to this new combination as GI encoding with ‘back-off’ (hereinafter GIB): when the abbreviation generated by the DPLVM+GI has a ‘believable’ probability (p > 0.3 in the present case), the DPLVM+GI then outputs it. Otherwise, the system ‘backs-off’ 5In addition, the training data of the English task is much smaller than for the Chinese task, which could make the models more sensitive to data sparseness. 
910 Model T1A Time CRF+GIB 67.2 0.6 h DPLVM+GIB (*) 72.5 1.4 h Table 4: Re-evaluating Chinese abbreviation generation with GIB. Model T1A Heu (T05) 47.3 MEMM (T05) 55.2 DPLVM (*) 57.5 Table 5: Results of English abbreviation generation with five-fold cross validation. to the parameters trained without the GI encoding (i.e., the DPLVM). The results in Table 3 demonstrate that the DPVLM+GIB model significantly outperformed the other models because the DPLVM+GI model improved the performance in some ‘difficult’ instances. The DPVLM+GIB model was robust even when the data sparseness problem was severe. By re-evaluating the DPLVM+GIB model for the previous Chinese abbreviation generation task, we demonstrate that the back-off method also improved the performance of the Chinese abbreviation generators (+0.2% from DPLVM+GI; see Table 4). Furthermore, for interests, like Tsuruoka et al. (2005), we performed a five-fold cross-validation on the corpus. Concerning the training time in the cross validation, we simply chose the DPLVM for comparison. Table 5 shows the results of the DPLVM, the heuristic system (Heu), and the maximum entropy Markov model (MEMM) described by Tsuruoka et al. (2005). 5 Recognition as a Generation Task We directly migrate this model to the abbreviation recognition task. We simplify the abbreviation recognition to a restricted generation problem (see Figure 9). When a context expression (CE) with a parenthetical expression (PE) is met, the recognizer generates the Viterbi labeling for the CE, which leads to the PE or NULL. Then, if the Viterbi labeling leads to the PE, we can, at the same time, use the labeling to decide the full form within the CE. Otherwise, NULL indicates that the PE is not an abbreviation. For example, in Figure 9, the recognition is restricted to a generation task with five possible la... cannulate for arterial pressure (AP)... (1) P P AP (2) P P AP (3) P P AP (4) P P AP (5) SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS NULL Figure 9: Abbreviation recognition as a restricted generation problem. In some labelings, the S labels are masked for simplicity. Model P R F Schwartz & Hearst (SH) 97.8 94.0 95.9 SaRAD 89.1 91.9 90.5 ALICE 96.1 92.0 94.0 Chang & Sch¨utze (CS) 94.2 90.0 92.1 Nadeau & Turney (NT) 95.4 87.1 91.0 Okazaki et al. (OZ) 97.3 96.9 97.1 CRF 89.8 94.8 92.1 CRF+GI 93.9 97.8 95.9 DPLVM 92.5 97.7 95.1 DPLVM+GI (*) 94.2 98.1 96.1 Table 6: Results of English abbreviation recognition. belings. Other labelings are impossible, because they will generate an abbreviation that is not AP. If the first or second labeling is generated, AP is selected as an abbreviation of arterial pressure. If the third or fourth labeling is generated, then AP is selected as an abbreviation of cannulate for arterial pressure. Finally, the fifth labeling (NULL) indicates that AP is not an abbreviation. To evaluate the recognizer, we use the corpus6 of Okazaki et al. (2008), which contains 864 abbreviation definitions collected from 1,000 MEDLINE scientific abstracts. In implementing the recognizer, we simply use the model from the abbreviation generator, with the same feature templates (31,868 features) and training method; the major difference is in the restriction (according to the PE) of the decoding stage and penalizing the probability values of the NULL labelings7. For the evaluation metrics, following Okazaki et al. 
(2008), we use precision (P = k/m), recall (R = k/n), and the F-score defined by 6The previous abbreviation generation corpus is improper for evaluating recognizers, and there is no related research on this corpus. In addition, there has been no report of Chinese abbreviation recognition because there is no data available. The previous generation corpus (Sun et al., 2008) is improper because it lacks local contexts. 7Due to the data imbalance of the training corpus, we found the probability values of the NULL labelings are abnormally high. To deal with this imbalance problem, we simply penalize all NULL labelings by using p = p −0.7. 911 Model P R F CRF+GIB 94.0 98.9 96.4 DPLVM+GIB 94.5 99.1 96.7 Table 7: English abbreviation recognition with back-off. 2PR/(P + R), where k represents #instances in which the system extracts correct full forms, m represents #instances in which the system extracts the full forms regardless of correctness, and n represents #instances that have annotated full forms. Following Okazaki et al. (2008), we perform 10fold cross validation. We prepared six state-of-the-art abbreviation recognizers as baselines: Schwartz and Hearst’s method (SH) (2003), SaRAD (Adar, 2004), ALICE (Ao and Takagi, 2005), Chang and Sch¨utze’s method (CS) (Chang and Sch¨utze, 2006), Nadeau and Turney’s method (NT) (Nadeau and Turney, 2005), and Okazaki et al.’s method (OZ) (Okazaki et al., 2008). Some methods use implementations on the web, including SH8, CS9, and ALICE10. The results of other methods, such as SaRAD, NT, and OZ, are reproduced for this corpus based on their papers (Okazaki et al., 2008). As can be seen in Table 6, using the latent variables significantly improved the performance (see DPLVM vs. CRF), and using the GI encoding improved the performance of both the DPLVM and the CRF. With the F-score of 96.1%, the DPLVM+GI model outperformed five of six stateof-the-art abbreviation recognizers. Note that all of the six systems were specifically designed and optimized for this recognition task, whereas the proposed model is directly transported from the generation task. Compared with the generation task, we find that the F-measure of the abbreviation recognition task is much higher. The major reason for this is that there are far fewer classification candidates of the abbreviation recognition problem, as compared to the generation problem. For interests, we also tested the effect of the GIB approach. Table 7 shows that the back-off method further improved the performance of both the DPLVM and the CRF model. 8http://biotext.berkeley.edu/software.html 9http://abbreviation.stanford.edu/ 10http://uvdb3.hgc.jp/ALICE/ALICE index.html 6 Conclusions and Future Research We have presented the DPLVM and GI encoding by which to incorporate non-local information in abbreviating terms. They were competing and generally the performance of the DPLVM was superior. On the other hand, we showed that the two approaches were complementary. By combining these approaches, we were able to achieve stateof-the-art performance in abbreviation generation and recognition in the same model, across different languages, and with a simple feature set. As discussed earlier herein, the training data is relatively small. Since there are numerous unlabeled full forms on the web, it is possible to use a semisupervised approach in order to make use of such raw data. This is an area for future research. Acknowledgments We thank Yoshimasa Tsuruoka for providing the English abbreviation generation corpus. 
We also thank the anonymous reviewers who gave helpful comments. This work was partially supported by Grant-in-Aid for Specially Promoted Research (MEXT, Japan). References Eytan Adar. 2004. SaRAD: A simple and robust abbreviation dictionary. Bioinformatics, 20(4):527– 533. Hiroko Ao and Toshihisa Takagi. 2005. ALICE: An algorithm to extract abbreviations from MEDLINE. Journal of the American Medical Informatics Association, 12(5):576–586. June A. Barrett and Mandalay Grems. 1960. Abbreviating words systematically. Communications of the ACM, 3(5):323–324. Charles P. Bourne and Donald F. Ford. 1961. A study of methods for systematically abbreviating english words and names. Journal of the ACM, 8(4):538– 552. Jeffrey T. Chang and Hinrich Sch¨utze. 2006. Abbreviations in biomedical text. In Sophia Ananiadou and John McNaught, editors, Text Mining for Biology and Biomedicine, pages 99–119. Artech House, Inc. Stanley F. Chen and Ronald Rosenfeld. 1999. A gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99-108, CMU. Yaakov HaCohen-Kerner, Ariel Kass, and Ariel Peretz. 2008. Combined one sense disambiguation of abbreviations. In Proceedings of ACL’08: HLT, Short Papers, pages 61–64, June. 912 Louis-Philippe Morency, Ariadna Quattoni, and Trevor Darrell. 2007. Latent-dynamic discriminative models for continuous gesture recognition. Proceedings of CVPR’07, pages 1–8. David Nadeau and Peter D. Turney. 2005. A supervised learning approach to acronym identification. In the 8th Canadian Conference on Artificial Intelligence (AI’2005) (LNAI 3501), page 10 pages. Jorge Nocedal and Stephen J. Wright. 1999. Numerical optimization. Springer. Naoaki Okazaki, Sophia Ananiadou, and Jun’ichi Tsujii. 2008. A discriminative alignment model for abbreviation recognition. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING’08), pages 657–664, Manchester, UK. Serguei Pakhomov. 2002. Semi-supervised maximum entropy based approach to acronym and abbreviation normalization in medical texts. In Proceedings of ACL’02, pages 160–167. Youngja Park and Roy J. Byrd. 2001. Hybrid text mining for finding abbreviations and their definitions. In Proceedings of EMNLP’01, pages 126–133. Leonid Peshkin and Avi Pfeffer. 2003. Bayesian information extraction network. In Proceedings of IJCAI’03, pages 421–426. Slav Petrov and Dan Klein. 2008. Discriminative loglinear grammars with latent variables. Proceedings of NIPS’08. Ariel S. Schwartz and Marti A. Hearst. 2003. A simple algorithm for identifying abbreviation definitions in biomedical text. In the 8th Pacific Symposium on Biocomputing (PSB’03), pages 451–462. Fei Sha and Fernando Pereira. 2003. Shallow parsing with conditional random fields. Proceedings of HLT/NAACL’03. Xu Sun, Houfeng Wang, and Bo Wang. 2008. Predicting chinese abbreviations from definitions: An empirical learning approach using support vector regression. Journal of Computer Science and Technology, 23(4):602–611. Kazem Taghva and Jeff Gilbreth. 1999. Recognizing acronyms and their definitions. International Journal on Document Analysis and Recognition (IJDAR), 1(4):191–198. Yoshimasa Tsuruoka, Sophia Ananiadou, and Jun’ichi Tsujii. 2005. A machine learning approach to acronym generation. In Proceedings of the ACLISMB Workshop, pages 25–31. Jonathan D. Wren and Harold R. Garner. 2002. Heuristics for identification of acronym-definition patterns within text: towards an automated construction of comprehensive acronym-definition dictionaries. 
Methods of Information in Medicine, 41(5):426–434. Hong Yu, Won Kim, Vasileios Hatzivassiloglou, and John Wilbur. 2006. A large scale, corpus-based approach for automatically disambiguating biomedical abbreviations. ACM Transactions on Information Systems (TOIS), 24(3):380–404. 913
2009
102
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 914–922, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A non-contiguous Tree Sequence Alignment-based Model for Statistical Machine Translation Jun Sun1,2 Min Zhang1 Chew Lim Tan2 1 Institute for Infocomm Research 2School of Computing, National University of Singapore [email protected] [email protected] [email protected] Abstract The tree sequence based translation model allows the violation of syntactic boundaries in a rule to capture non-syntactic phrases, where a tree sequence is a contiguous sequence of subtrees. This paper goes further to present a translation model based on non-contiguous tree sequence alignment, where a non-contiguous tree sequence is a sequence of sub-trees and gaps. Compared with the contiguous tree sequencebased model, the proposed model can well handle non-contiguous phrases with any large gaps by means of non-contiguous tree sequence alignment. An algorithm targeting the noncontiguous constituent decoding is also proposed. Experimental results on the NIST MT-05 Chinese-English translation task show that the proposed model statistically significantly outperforms the baseline systems. 1 Introduction Current research in statistical machine translation (SMT) mostly settles itself in the domain of either phrase-based or syntax-based. Between them, the phrase-based approach (Marcu and Wong, 2002; Koehn et al, 2003; Och and Ney, 2004) allows local reordering and contiguous phrase translation. However, it is hard for phrase-based models to learn global reorderings and to deal with noncontiguous phrases. To address this issue, many syntax-based approaches (Yamada and Knight, 2001; Eisner, 2003; Gildea, 2003; Ding and Palmer, 2005; Quirk et al, 2005; Zhang et al, 2007, 2008a; Bod, 2007; Liu et al, 2006, 2007; Hearne and Way, 2003) tend to integrate more syntactic information to enhance the non-contiguous phrase modeling. In general, most of them achieve this goal by introducing syntactic non-terminals as translational equivalent placeholders in both source and target sides. Nevertheless, the generated rules are strictly required to be derived from the contiguous translational equivalences (Galley et al, 2006; Marcu et al, 2006; Zhang et al, 2007, 2008a, 2008b; Liu et al, 2006, 2007). Among them, Zhang et al. (2008a) acquire the non-contiguous phrasal rules from the contiguous tree sequence pairs1, and find them useless via real syntax-based translation systems. However, Wellington et al. (2006) statistically report that discontinuities are very useful for translational equivalence analysis using binary branching structures under word alignment and parse tree constraints. Bod (2007) also finds that discontinues phrasal rules make significant improvement in linguistically motivated STSG-based translation model. The above observations are conflicting to each other. In our opinion, the non-contiguous phrasal rules themselves may not play a trivial role, as reported in Zhang et al. (2008a). We believe that the effectiveness of non-contiguous phrasal rules highly depends on how to extract and utilize them. To verify the above assumption, suppose there is only one tree pair in the training data with its alignment information illustrated as Fig. 1(a) 2. A test sentence is given in Fig. 1(b): the source sentence with its syntactic tree structure as the upper tree and the expected target output with its syntactic structure as the lower tree. 
In the tree sequence alignment based model, in addition to the entire tree pair, it is capable to acquire the contiguous tree sequence pairs: TSP (1~4) 3 in Fig. 1. By means of the rules derived from these contiguous tree sequence pairs, it is easy to translate the contiguous phrase “ /he  /show up  /’s”. As for the non-contiguous phrase “  /at, ***,  /time”, the only related rule is r1 derived from TSP4 and the entire tree pair. However, the source side of r1 does not match the source tree structure of the test sentence. Therefore, we can only partially translate the illustrated test sentence with this training sample. 1 A tree sequence pair in this context is a kind of translational equivalence comprised of a pair of tree sequences. 2 We illustrate the rule extraction with an example from the tree-to-tree translation model based on tree sequence alignment (Zhang et al, 2008a) without losing of generality to most syntactic tree based models. 3 We only list the contiguous tree sequence pairs with one single sub-tree in both sides without losing of generality. 914 As discussed above, the problem lies in that the non-contiguous phrases derived from the contiguous tree sequence pairs demand greater reliance on the context. Consequently, when applying those rules to unseen data, it may suffer from the data sparseness problem. The expressiveness of the model also slacks due to their weak ability of generalization. To address this issue, we propose a syntactic translation model based on non-contiguous tree sequence alignment. This model extracts the translation rules not only from the contiguous tree sequence pairs but also from the non-contiguous tree sequence pairs where a non-contiguous tree sequence is a sequence of sub-trees and gaps. With the help of the non-contiguous tree sequence, the proposed model can well capture the noncontiguous phrases in avoidance of the constraints of large applicability of context and enhance the non-contiguous constituent modeling. As for the above example, the proposed model enables the non-contiguous tree sequence pair indexed as TSP5 in Fig. 1 and is allowed to further derive r2 from TSP5. By means of r2 and the same processing to the contiguous phrase “ /he   /show up  /’s” as the contiguous tree sequence based model, we can successfully translate the entire source sentence in Fig. 1(b). We define a synchronous grammar, named Synchronous non-contiguous Tree Sequence Substitution Grammar (SncTSSG), extended from synchronous tree substitution grammar (STSG: Chiang, 2006) to illustrate our model. The proposed synchronous grammar is able to cover the previous proposed grammar based on tree (STSG, Eisner, 2003; Zhang et al, 2007) and tree sequence (STSSG, Zhang et al, 2008a) alignment. Besides, we modify the traditional parsing based decoding algorithm for syntax-based SMT to facilitate the non-contiguous constituent decoding for our model. To the best of our knowledge, this is the first attempt to acquire the translation rules with rich syntactic structures from the non-contiguous Translational Equivalences (non-contiguous tree sequence pairs in this context). The rest of this paper is organized as follows: Section 2 presents a formal definition of our model with detailed parameterization. Sections 3 and 4 elaborate the extraction of the non-contiguous tree sequence pairs and the decoding algorithm respectively. The experiments we conduct to assess the effectiveness of the proposed method are reported in Section 5. We finally conclude this work in Section 6. 
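Before the formal definition in the next section, the following minimal sketch shows one way a non-contiguous rule such as r2 above can be represented: a sequence of sub-trees interleaved with explicit gap markers on each side. The class and field names are our own hypothetical choices, the English glosses stand in for the Chinese words of Figure 1, and the sketch illustrates only the data structure, not the described system.

from dataclasses import dataclass, field
from typing import List, Optional

GAP = None   # marks a gap ("***") between sub-trees of a non-contiguous tree sequence

@dataclass
class Tree:
    label: str                                  # syntactic tag or lexical word
    children: List["Tree"] = field(default_factory=list)

    def __str__(self):
        if not self.children:
            return self.label
        return self.label + "(" + ",".join(str(c) for c in self.children) + ")"

@dataclass
class NonContiguousRule:
    source: List[Optional[Tree]]                # sub-trees and GAP markers, source side
    target: List[Optional[Tree]]                # sub-trees and GAP markers, target side

    def __str__(self):
        def fmt(seq):
            return " , ".join("***" if t is GAP else str(t) for t in seq)
        return fmt(self.source) + "  ->  " + fmt(self.target)

# Rule r2 of Figure 1, with English glosses standing in for the Chinese words:
r2 = NonContiguousRule(
    source=[Tree("VV", [Tree("at")]), GAP, Tree("NN", [Tree("time")])],
    target=[Tree("WRB", [Tree("when")])],
)
print(r2)   # VV(at) , *** , NN(time)  ->  WRB(when)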
2 Non-Contiguous Tree sequence Alignment-based Model In this section, we give a formal definition of SncTSSG and accordingly we propose the alignment based translation model. The details of probabilistic parameterization are elaborated based on the log-linear framework. 2.1 Synchronous non-contiguous TSSG (SncTSSG) Extended from STSG (Shiever, 2004), SncTSSG can be formalized as a quintuple G = < , , , , R>, where: x and are source and target terminal alphabets (words) respectively, and x and are source and target nonterminal alphabets (linguistically syntactic tags, i.e. NP, VP) respectively; as well as the non-terminal to denote a gap, VP NP AS VV PN IP CP NN DEC VV     SBAR VP S RP VBZ PRP WRB up shows he when TSP1: PN( ) PRP(he) r1: VP(VV( ),AS(  ),NP(CP[0],NN(  )))  SBAR(WRB(when),S[0]) TSP5: VV( ), *** ,NN(  ) WRB(when) TSP3: IP(PN( ),VV(  )) S((PRP(he), VP(VBZ(shows), RP(up)))) TSP2: VV(  ) VP(VBZ(shows),RP(up)) r2: VV( ), *** ,NN(  )  WRB(when) TSP4: CP(IP(PN( ),VV(  )),DEC(  )) S((PRP(he), VP(VBZ(shows), RP(up)))) (at) (NULL) (he) (show up) (þs) (time) VP NP VV PN IP CP NN DEC VV     SBAR VP S RP VBZ PRP WRB up shows he when (at) (he) (show up) (þs) (time) (a) (b) Figure 1: Rule extraction of tree-to-tree model based on tree sequence pairs 915 can represent any syntactic or nonsyntactic tree sequences, and x R is a production rule set consisting of rules derived from corresponding contiguous or non-contiguous tree sequence pairs, where a rule is a pair of contiguous or noncontiguous tree sequence with alignment relation between leaf nodes across the tree sequence pair. A non-contiguous tree sequence translation rule r R can be further defined as a triple , where: x is a non-contiguous source tree sequence, covering the span set in , where which means each subspan has nonzero width and which means there is a non-zero gap between each pair of consecutive intervals. A gap of interval [ ] is denoted as , and x is a non-contiguous target tree sequence, covering the span set in , where which means each subspan has non-zero width and which means there is a non-zero gap between each pair of consecutive intervals. A gap of interval [ ] is denoted as , and x are the alignments between leaf nodes of the source and target non-contiguous tree sequences, satisfying the following conditions : , where and In SncTSSG, the leaf nodes in a non-contiguous tree sequence rule can be either non-terminal symbols (grammar tags) or terminal symbols (lexical words) and the non-terminal symbols with the same index which are subsumed simultaneously are not required to be contiguous. Fig. 4 shows two examples of non-contiguous tree sequence rules (“non-contiguous rule” for short in the following context) derived from the noncontiguous tree sequence pair (in Fig. 3) which is extracted from the bilingual tree pair in Fig. 2. Between them, ncTSr1 is a tree rule with internal nodes non-contiguously subsumed from a contiguous tree sequence pair (dashed in Fig. 2) while ncTSr2 is a non-contiguous rule with a contiguous source side and a non-contiguous target side. Obviously, the non-contiguous tree sequence rule ncTSr2 is more flexible by neglecting the context among the gaps of the tree sequence pair while capturing all aligned counterparts with the corresponding syntactic structure information. 
We Figure 2: A word-aligned parse tree pair Figure 3: A non-contiguous tree sequence pair Figure 4: Two examples of non-contiguous tree sequence translation rules 916 expect these properties can well address the issues of non-contiguous phrase modeling. 2.2 SncTSSG based Translation Model Given the source and target sentence and , as well as the corresponding parse trees  and , our approach directly approximates the posterior probability based on the log-linear framework: ‡š’ In this model, the feature function hm is loglinearly combined by the corresponding parameter (Och and Ney, 2002). The following features are utilized in our model: 1) The bi-phrasal translation probabilities 2) The bi-lexical translation probabilities 3) The target language model 4) The # of words in the target sentence 5) The # of rules utilized 6) The average tree depth in the source side of the rules adopted 7) The # of non-contiguous rules utilized 8) The # of reordering times caused by the utilization of the non-contiguous rules Feature 1~6 can be applied to either STSSG or SncTSSG based models, while the last two targets SncTSSG only. 3 Tree Sequence Pair Extraction In training, other than the contiguous tree sequence pairs, we extract the non-contiguous ones as well. Nevertheless, compared with the contiguous tree sequence pairs, the non-contiguous ones suffer more from the tree sequence pair redundancy problem that one non-contiguous tree sequence pair can be comprised of two or more unrelated and nonadjacent contiguous ones. To model the contiguous phrases, this problem is actually trivial, since the contiguous phrases stay adjacently and share the related syntactic constraints; however, as for non-contiguous phrase modeling, the cohesion of syntactically and semantically unrelated tree sequence pairs is more likely to generate noisy rules which do not benefit at all. In order to minimize the number of redundant tree sequence pairs, we limit the # of gaps of non-contiguous tree sequence pairs to be 0 in either source or target side. In other words, we only allow one side to be noncontiguous (either source or target side) to partially reserve its syntactic and semantic cohesion4. We further design a two-phase algorithm to extract the tree sequence pairs as described in Algorithm 1. For the first phase (line 1-11), we extract the contiguous tree sequence pairs (line 3-5) and the non-contiguous ones with contiguous tree sequence in the source side (line 6-9). In the second phase (line 12-19), the ones with contiguous tree sequence in the target side and non-contiguous tree sequence on the source side are extracted. 4 Wellington et al. (2006) also reports that allowing gaps in one side only is enough to eliminate the hierarchical alignment failure with word alignment and one side parse tree constraints. This is a particular case of our definition of non-contiguous tree sequence pair since a non-contiguous tree sequence can be considered to overcome the structural constraint by neglecting the structural information in the gaps. 
Algorithm 1: Tree Sequence Pair Extraction Input: source tree and target tree Output: the set of tree sequence pairs Data structure: p[j1, j2] to store tree sequence pairs covering source span[j1, j2] 1: foreach source span [j1, j2], do 2: find a target span [i1,i2] with minimal length covering all the target words aligned to [j1, j2] 3: if all the target words in [i1,i2] are aligned with source words only in [j1, j2], then 4: Pair each source tree sequence covering [j1, j2] with those in target covering [i1,i2] as a contiguous tree sequence pair 5: Insert them into p[j1, j2] 6: else 7: create sub-span set s([i1,i2]) to cover all the target words aligned to [j1, j2] 8: Pair each source tree sequence covering [j1, j2] with each target tree sequence covering s([i1,i2]) as a non-contiguous tree sequence pair 9: Insert them into p[j1, j2] 10: end if 11:end do 12: foreach target span [i1,i2], do 13: find a source span [j1, j2] with minimal length covering all the source words aligned to [i1,i2] 14: if any source word in [j1, j2] is aligned with target words outside [i1,i2], then 15: create sub-span set s([j1, j2]) to cover all the source words aligned to [i1,i2] 16: Pair each source tree sequence covering s([j1, j2]) with each target tree sequence covering [i1,i2] as a non-contiguous tree sequence pair 17: Insert them into p[j1, j2] 18: end if 19: end do 917 The extracted tree sequence pairs are then utilized to derive the translation rules. In fact, both the contiguous and non-contiguous tree sequence pairs themselves are applicable translation rules; we denote these rules as Initial rules. By means of the Initial rules, we derive the Abstract rules similarly as in Zhang et al. (2008a). Additionally, we develop a few constraints to limit the number of Abstract rules. The depth of a tree in a rule is no greater than h. The number of non-terminals as leaf nodes is no greater than c. The tree number is no greater than d. Besides, the number of lexical words at leaf nodes in an Initial rule is no greater than l. The maximal number of gaps for a non-contiguous rule is no greater than . 4 The Pisces decoder We implement our decoder Pisces by simulating the span based CYK parser constrained by the rules of SncTSSG. The decoder translates each span iteratively in a bottom up manner which guarantees that when translating a source span, any of its sub-spans is already translated. For each source span [j1, j2], we perform a threephase decoding process. In the first phase, the source side contiguous translation rules are utilized as described in Algorithm 2. When translating using a source side contiguous rule, the target tree sequence of the rule whether contiguous or noncontiguous is directly considered as a candidate translation for this span (line 3), if the rule is an Initial rule; otherwise, the non-terminal leaf nodes are replaced with the corresponding sub-spans’ translations (line 5). In the second phase, the source side noncontiguous rules5 for [j1, j2] are processed. As for 5 A source side non-contiguous translation rules which cover a list of n non-contiguous spans s([ , ], i=1,…,n) is considered to cover the source span [j1, j2] if and only if = j1 and = j2. the ones with non-terminal leaf nodes, the replacement with corresponding spans’ translations is initially performed in the same way as with the contiguous rules in the first phase. After that, an operation specified for the source side noncontiguous rules named “Source gap insertion” is performed. As illustrated in Fig. 
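A compact sketch of the span-level logic of Algorithm 1 is given below. It is our own simplification: it covers the first phase only, works on word-alignment links alone, and returns, for each source span, either the minimal covering target span (the contiguous case) or the set of aligned target sub-spans (the non-contiguous case), leaving out the enumeration of the tree sequences that cover those spans.

from collections import defaultdict

def aligned_spans(alignment, n_src):
    """Span-level core of Algorithm 1 (first phase only, simplified).

    alignment: set of (source_index, target_index) word-alignment links
    Returns, for every aligned source span [j1, j2], the minimal covering
    target span when the pair is contiguous, or the maximal contiguous
    sub-spans of the aligned target positions when it is not.
    """
    src_to_tgt = defaultdict(set)
    tgt_to_src = defaultdict(set)
    for j, i in alignment:
        src_to_tgt[j].add(i)
        tgt_to_src[i].add(j)

    results = []
    for j1 in range(n_src):
        for j2 in range(j1, n_src):
            tgt = set()
            for j in range(j1, j2 + 1):
                tgt |= src_to_tgt[j]
            if not tgt:
                continue
            i1, i2 = min(tgt), max(tgt)           # minimal covering target span
            source_words = set(range(j1, j2 + 1))
            closed = all(tgt_to_src[i] <= source_words
                         for i in range(i1, i2 + 1) if tgt_to_src[i])
            if closed:
                # lines 3-5: a contiguous tree sequence pair over [j1,j2] <-> [i1,i2]
                results.append(((j1, j2), [(i1, i2)], True))
            else:
                # lines 7-9: keep only aligned target positions, grouped into
                # maximal contiguous runs, i.e. the sub-span set s([i1, i2])
                sub_spans, start = [], None
                for i in range(i1, i2 + 1):
                    if i in tgt and start is None:
                        start = i
                    elif i not in tgt and start is not None:
                        sub_spans.append((start, i - 1))
                        start = None
                if start is not None:
                    sub_spans.append((start, i2))
                results.append(((j1, j2), sub_spans, False))
    return results

# Toy usage with hypothetical alignment links over 5 source and 5 target words:
links = {(0, 0), (1, 2), (2, 2), (4, 4)}
for span, target_spans, contiguous in aligned_spans(links, 5):
    print(span, target_spans, "contiguous" if contiguous else "non-contiguous")

The symmetric second phase over target spans (lines 12-19) mirrors this logic with the roles of source and target exchanged, and each returned span set is then paired with every tree sequence covering it.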
5, to use the noncontiguous rule r1, which covers the source span set ([0,0], [4,4]), the target portion “IN(in)” is first attained, then the translations to the gap span [1,3] is acquired from the previous steps and is inserted either to the right or to the left of “IN(in)”. The insertion is rather cohesion based but leaves a gap <***> for further “Target tree sequence reordering” in the next phase if necessary. In the third phase, we carry out the other noncontiguous rule specific operation named “Target tree sequence reordering”. Algorithm 3 gives an overview of this operation. For each source span, we first binarize the span into the left one and the right one. The translation hypothesis for this span is generated by firstly inserting the candidate translations of the right span to each gap in the ones of the left span respectively (line 2-9) and then repeating in the alternative direction (line10-17). The gaps for the insertion of the tree sequences in the target side are generated from either the inherit Figure 5: Illustration of “Source gap insertion” Algorithm 2: Contiguous rule processing Data structure: h[j1, j2]to store translations covering source span[j1, j2] 1: foreach rule r contiguous in source span [j1, j2], do 2: if r is an Initial rule, then 3: insert r into h[j1, j2] 4: else //Abstract rule 5: generate translations by replacing the nonterminal leaf nodes of r with their corresponding spans’ translation 6: insert the new translation into h[j1, j2] 7: end if 8: end do 918 ance of the target side non-contiguous tree sequence pairs or the production of the previous operations of “Source gap insertion”. Therefore, the insertion for target gaps helps search for a better order of the non-contiguous constituents in the target side. On the other hand, the non-contiguous tree sequences with rich syntactic information are reordered, nevertheless, without much consideration of the constraints of the syntactic structure. Consequently, this distortional operation, like phrase-based models, is much more flexible in the order of the target constituents than the traditional syntax-based models which are limited by the syntactic structure. As a result, “Target tree sequence reordering” enhances the reordering ability of the model. To speed up the decoder, we use several thresholds to limit the searching space for each span. The maximal number of the rules in a source span is no greater than . The maximal number of translation candidates for a source span is no greater than . On the other hand, to simplify the computation of language model, we only compute for source side contiguous translational hypothesis, while neglecting gaps in the target side if any. 5 Experiments 5.1 Experimental Settings In the experiments, we train the translation model on FBIS corpus (7.2M (Chinese) + 9.2M (English) words) and train a 4-gram language model on the Xinhua portion of the English Gigaword corpus (181M words) using the SRILM Toolkits (Stolcke, 2002). We use these sentences with less than 50 characters from the NIST MT-2002 test set as the development set and the NIST MT-2005 test set as our test set. We use the Stanford parser (Klein and Manning, 2003) to parse bilingual sentences on the training set and Chinese sentences on the development and test set. The evaluation metric is casesensitive BLEU-4 (Papineni et al., 2002). We base on the m-to-n word alignments dumped by GIZA++ to extract the tree sequence pairs. For the MER training, we modify Koehn’s version (Koehn, 2004). 
We use Zhang et al’s implementation (Zhang et al, 2004) for 95% confidence intervals significant test. We compare the SncTSSG based model against two baseline models: the phrase-based and the STSSG-based models. For the phrase-based model, we use Moses (Koehn et al, 2007) with its default settings; for the STSSG and SncTSSG based models we use our decoder Pisces by setting the following parameters: , , , , , . Additionally, for STSSG we set , and for SncTSSG, we set . 5.2 Experimental Results Table 1 compares the performance of different models across the two systems. The proposed SncTSSG based model significantly outperforms (p < 0.05) the two baseline models. Since the SncTSSG based model covers the STSSG based model in its modeling ability and obtains a superset in rules, the improvement empirically verifies the effectiveness of the additional non-contiguous rules. System Model BLEU Moses cBP 23.86 Pisces STSSG 25.92 SncTSSG 26.53 Table 1: Translation results of different models (cBP refers to contiguous bilingual phrases without syntactic structural information, as used in Moses) Table 2 measures the contribution of different combination of rules. cR refers to the rules derived from contiguous tree sequence pairs (i.e., all STSSG rules); ncPR refers to non-contiguous phrasal rules derived from contiguous tree sequence pairs with at least one non-terminal leaf node between two lexicalized leaf nodes (i.e., all non-contiguous rules in STSSG defined as in Zhang et al. (2008a)); srcncR refers to source side non-contiguous rules (SncTSSG rules only, not STSSG rules); tgtncR refers to target side noncontiguous rules (SncTSSG rules only, not STSSG rules) and src&tgtncR refers non-contiguous rules Algorithm 3: Target tree sequence reordering Data structure: h[j1, j2]to store translations covering source span[j1, j2] 1: foreach k [j1, j2), do 2: foreach translation h[j1, k], do 3: foreach gap in , do 4: foreach translation h[k+1, j2], do 5: insert into the position of 6: insert the new translation into h[j1, j2] 7: end do 8: end do 9: end do 10: foreach translation h[k+1, j2], do 11: foreach gap in , do 12: foreach translation h[j1, k], do 13: insert into the position of 14: insert the new translation into h[j1, j2] 15: end do 16: end do 17: end do 18:end do 919 with gaps in either side (srcncR+ tgtncR). The last three kinds of rules are all derived from noncontiguous tree sequence pairs. 1) From Exp 1 and 2 in Table 2, we find that non-contiguous phrasal rules (ncPR) derived from contiguous tree sequence pairs make little impact on the translation performance which is consistent with the discovery of Zhang et al. (2008a). However, if we append the non-contiguous phrasal rules derived from non-contiguous tree sequence pairs, no matter whether non-contiguous in source or in target, the performance statistically significantly (p < 0.05) improves (as presented in Exp 2~5), which validates our prediction that the noncontiguous rules derived from non-contiguous tree sequence pairs contribute more to the performance than those acquired from contiguous tree sequence pairs. 2) Not only that, after comparing Exp 6,7,8 against Exp 3,4,5 respectively, we find that the ability of rules derived from non-contiguous tree sequence pairs generally covers that of the rules derived from the contiguous tree sequence pairs, due to the slight change in BLEU score. 3) The further comparison of the noncontiguous rules from non-contiguous spans in Exp. 
6&7 as well as Exp 3&4, shows that noncontiguity in the target side in Chinese-English translation task is not so useful as that in the source side when constructing the non-contiguous phrasal rules. This also validates the findings in Wellington et al. (2006) that varying the gaps on the English side (the target side in this context) seldom reduce the hierarchical alignment failures. Table 3 explores the contribution of the noncontiguous translational equivalence to phrasebased models (all the rules in Table 3 has no grammar tags, but a gap <***> is allowed in the last three rows). tgtncBP refers to the bilingual phrases with gaps in the target side; srcncBP refers to the bilingual phrases with gaps in the source side; src&tgtncBP refers to the bilingual phrases with gaps in either side. System Rule Set BLEU Moses cBP 23.86 Pisces cBP 22.63 cBP + tgtncBP 23.74 cBP + srcncBP 23.93 cBP + src&tgtncBP 24.24 Table 3: Performance of bilingual phrasal rules 1) As presented in Table 3, the effectiveness of the bilingual phrases derived from noncontiguous tree sequence pairs is clearly indicated. Models adopting both tgtncBP and srcncBP significantly (p < 0.05) outperform the model adopting cBP only. 2) Pisces underperforms Moses when utilizing cBPs only, since Pisces can only perform monotonic search with cBPs. 3) The bilingual phrase model with both tgtncBP and srcncBP even outperforms Moses. Compared with Moses, we only utilize plain features in Pisces for the bilingual phrase model (Feature 1~5 for all phrases and additional 7, 8 only for non-contiguous bilingual phrases as stated in Section 2.2; None of the complex reordering features or distortion features are employed by Pisces while Moses uses them), which suggests the effectiveness of the non-contiguous rules and the advantages of the proposed decoding algorithm. Table 4 studies the impact on performance when setting different maximal gaps allowed for either side in a tree sequence pair (parameter ) and the relation with the quantity of rule set. Significant improvement is achieved when allowing at least one gap on either side compared with when only allowing contiguous tree sequence pairs. However, the further increment of gaps does not benefit much. The result exhibits the accordance with the growing amplitude of the rule set filtered for the test set, in which the rule size increases more slowly as the maximal number of gaps increments. As a result, this slow increase against the increment of gaps can be probably attributed to the small augmentation of the effective ID Rule Set BLEU 1 cR (STSSG) 25.92 2 cR w/o ncPR 25.87 3 cR w/o ncPR + tgtncR 26.14 4 cR w/o ncPR + srcncR 26.50 5 cR w/o ncPR + src&tgtncR 26.51 6 cR + tgtncR 26.11 7 cR + srcncR 26.56 8 cR+src&tgtncR(SncTSSG) 26.53 Table 2: Performance of different rule combination Max gaps allowed Rule # BLEU source target 0 0 1,661,045 25.92 1 1 +841,263 26.53 2 2 +447,161 26.55 3 3 +17,782 26.56 ’ +8,223 26.57 Table 4: Performance and rule size changing with different maximal number of gaps 920 non-contiguous rules. In order to facilitate a better intuition to the ability of the SncTSSG based model against the STSSG based model, we present in Table 5, two translation outputs produced by both models. 
In the first example, GIZA++ wrongly aligns the idiom word “ /confront at court” to a noncontiguous phrase “confront other countries at court*** leisurely manner” in training, in which only the first constituent “confront other countries at court” is reasonable, indicated from the key rules of SncTSSG leant from the training set. The STSSG or any contiguous translational equivalence based model is unable to attain the corresponding target output for this idiom word via the non-contiguous word alignment and consider it as an out-of-vocabulary (OOV). On the contrary, the SncTSSG based model can capture the noncontiguous tree sequence pair consistent with the word alignment and further provide a reasonable target translation. It suggests that SncTSSG can easily capture the non-contiguous translational candidates while STSSG cannot. Besides, SncTSSG is less sensitive to the error of word alignment when extracting the translation candidates than the contiguous translational equivalence based models. In the second example, “  /in  /recent  /’s /survey /middle” is correctly translated into “in the recent surveys” by both the STSSG and SncTSSG based models. This suggests that the short non-contiguous phrase “  /in *** /middle” is well handled by both models. Nevertheless, as for the one with a larger gap, “  /will ***  /continue” is correctly translated and well reordering into “will continue” by SncTSSG but failed by STSSG. Although the STSSG is theoretically able to capture this phrase from the contiguous tree sequence pair, the richer context in the gap as in this example, the more difficult STSSG can correctly translate the non-contiguous phrases. This exhibits the flexibility of SncTSSG to the rich context among the non-contiguous constituents. 6 Conclusions and Future Work In this paper, we present a non-contiguous tree sequence alignment model based on SncTSSG to enhance the ability of non-contiguous phrase modeling and the reordering caused by non-contiguous constituents with large gaps. A three-phase decoding algorithm is developed to facilitate the usage of non-contiguous translational equivalences (tree sequence pairs in this work) which provides much flexibility for the reordering of the non-contiguous constituents with rich syntactic structural information. The experimental results show that our model outperforms the baseline models and verify the effectiveness of non-contiguous translational equivalences to non-contiguous phrase modeling in both syntax-based and phrase-based systems. We also find that in Chinese-English translation task, gaps are more effective in Chinese side than in the English side. Although the characteristic of more sensitiveness to word alignment error enables SncTSSG to capture the additional non-contiguous language phenomenon, it also induces many redundant noncontiguous rules. Therefore, further work of our studies includes the optimization of the large rule set of the SncTSSG based model. Output & References Source  /only  /pass  /null  /five years   /two people  /null  /confront at court Reference after only five years the two confronted each other at court STSSG only in the five years , the two candidates would  SncTSSG the two people can confront other countries at court leisurely manner only in the five years key rules VV( ) ! 
VB(confront)NP(JJ(other),NNS(countries))IN(at) NN(court) JJ(leisurely)NN(manner) Source "# /Euro $ /’s %'& /substantial () /appreciation * /will + /in ,/recent $ /’s ./ /survey 0 /middle 12 /continue  /for 34 /economy 56 /confidence 78 /produce 9': /impact Reference substantial appreciation of the euro will continue to impact the economic confidence in the recent surveys STSSG substantial appreciation of the euro has continued to have an impact on confidence in the economy , in the recent surveys will SncTSSG substantial appreciation of the euro will continue in the recent surveys have an impact on economic confidence key rules AD(* ) VV( 12 ) ! VP(MD(will),VB(continue)) P( + ) LC( 0 ) ! IN(in) Table 5: Sample translations (tokens in italic match the reference provided) 921 References Rens Bod. 2007. Unsupervised Syntax-Based Machine Translation: The Contribution of Discontinuous Phrases. MT-Summmit-07. 51-56. David Chiang. 2006. An Introduction to Synchronous Grammars. Tutorial on ACL-06 Yuan Ding and Martha Palmer. 2005. Machine translation using probabilistic synchronous dependency insert grammars. ACL-05. 541-548 Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. ACL-03. Michel Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang and I. Thayer. 2006. Scalable Inference and training of context-rich syntactic translation models. COLING-ACL-06. 961-968 Daniel Gildea. 2003. Loosely Tree-Based Alignment for Machine Translation. ACL-03. 80-87. Mary Hearne and Andy Way. 2003. Seeing the wood for the trees: data-oriented translation. MT Summit IX, 165-172. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. ACL-03. 423-430. Philipp Koehn, Franz J. Och and Daniel Marcu. 2003. Statistical phrase-based translation. HLT-NAACL03. 127-133 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. ACL-07. 77-180. Yang Liu, Qun Liu and Shouxun Lin. 2006. Tree-toString Alignment Template for Statistical Machine Translation. ACL-06, 609-616 Yang Liu, Yun Huang, Qun Liu and Shouxun Lin. 2007. Forest-to-String Statistical Translation Rules. ACL-07. 704-711. Daniel Marcu and William Wong. 2002. A phrasebased, joint probability model for statistical machine translation. EMNLP-02, 133-139 Daniel Marcu, W. Wang, A. Echihabi and K. Knight. 2006. SPMT: statistical machine translation with syntactified target language phrases. EMNLP-06. 44-52. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449 Kishore Papineni, Salim Roukos, ToddWard and WeiJing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. ACL-02. 311-318. Chris Quirk, Arul Menezes and Colin Cherry. 2005. Dependency treelet translation: syntactically informed phrasal SMT. ACL-05. 271-279. S. Shieber. 2004. Synchronous grammars as tree transducers. In Proceedings of the Seventh International Workshop on Tree Adjoining Grammar and Related Formalisms Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. ICSLP-02. 901-904. Benjamin Wellington, Sonjia Waxmonsky and I. Dan Melamed. 2006. Empirical Lower Bounds on the Complexity of Translational Equivalence. ACL-06. 977-984 Kenji Yamada and Kevin Knight. 2001. 
A syntax-based statistical translation model. ACL-01. 523-530. Min Zhang, Hongfei Jiang, AiTi Aw, Jun Sun, Sheng Li and Chew Lim Tan. 2007. A tree-to-tree alignment-based model for statistical machine translation. MT-Summit-07. 535-542. Min Zhang, Hongfei Jiang, AiTi Aw, Haizhou Li, Chew Lim Tan and Sheng Li. 2008a. A tree sequence alignment-based tree-to-tree translation model. ACL-08. 559-567. Min Zhang, Hongfei Jiang, Haizhou Li, Aiti Aw and Sheng Li. 2008b. Grammar Comparison Study for Translational Equivalence Modeling and Statistical Machine Translation. COLING-08. 1097-1104. Ying Zhang, Stephan Vogel and Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? LREC-04. 2051-2054.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 923–931, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Better Word Alignments with Supervised ITG Models Aria Haghighi, John Blitzer, John DeNero and Dan Klein Computer Science Division, University of California at Berkeley { aria42,blitzer,denero,klein }@cs.berkeley.edu Abstract This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including the presentation of a new normal form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and a performance improvement of 1.1 BLEU over GIZA++ alignments. 1 Introduction Inversion transduction grammar (ITG) constraints (Wu, 1997) provide coherent structural constraints on the relationship between a sentence and its translation. ITG has been extensively explored in unsupervised statistical word alignment (Zhang and Gildea, 2005; Cherry and Lin, 2007a; Zhang et al., 2008) and machine translation decoding (Cherry and Lin, 2007b; Petrov et al., 2008). In this work, we investigate large-scale, discriminative ITG word alignment. Past work on discriminative word alignment has focused on the family of at-most-one-to-one matchings (Melamed, 2000; Taskar et al., 2005; Moore et al., 2006). An exception to this is the work of Cherry and Lin (2006), who discriminatively trained one-to-one ITG models, albeit with limited feature sets. As they found, ITG approaches offer several advantages over general matchings. First, the additional structural constraint can result in superior alignments. We confirm and extend this result, showing that one-toone ITG models can perform as well as, or better than, general one-to-one matching models, either using heuristic weights or using rich, learned features. A second advantage of ITG approaches is that they admit a range of training options. As with general one-to-one matchings, we can optimize margin-based objectives. However, unlike with general matchings, we can also efficiently compute expectations over the set of ITG derivations, enabling the training of conditional likelihood models. A major challenge in both cases is that our training alignments are often not one-to-one ITG alignments. Under such conditions, directly training to maximize margin is unstable, and training to maximize likelihood is ill-defined, since the target alignment derivations don’t exist in our hypothesis class. We show how to adapt both margin and likelihood objectives to learn good ITG aligners. In the case of likelihood training, two innovations are presented. The simple, two-rule ITG grammar exponentially over-counts certain alignment structures relative to others. Because of this, Wu (1997) and Zens and Ney (2003) introduced a normal form ITG which avoids this over-counting. 
We extend this normal form to null productions and give the first extensive empirical comparison of simple and normal form ITGs, for posterior decoding under our likelihood models. Additionally, we show how to deal with training instances where the gold alignments are outside of the hypothesis class by instead optimizing the likelihood of a set of minimum-loss alignments. Perhaps the greatest advantage of ITG models is that they straightforwardly permit block923 structured alignments (i.e. phrases), which general matchings cannot efficiently do. The need for block alignments is especially acute in ChineseEnglish data, where oracle AERs drop from 10.2 without blocks to around 1.2 with them. Indeed, blocks are the primary reason for gold alignments being outside the space of one-to-one ITG alignments. We show that placing linear potential functions on many-to-one blocks can substantially improve performance. Finally, to scale up our system, we give a combination of pruning techniques that allows us to sum ITG alignments two orders of magnitude faster than naive inside-outside parsing. All in all, our discriminatively trained, block ITG models produce alignments which exhibit the best AER on the NIST 2002 Chinese-English alignment data set. Furthermore, they result in a 1.1 BLEU-point improvement over GIZA++ alignments in an end-to-end Hiero (Chiang, 2007) machine translation system. 2 Alignment Families In order to structurally restrict attention to reasonable alignments, word alignment models must constrain the set of alignments considered. In this section, we discuss and compare alignment families used to train our discriminative models. Initially, as in Taskar et al. (2005) and Moore et al. (2006), we assume the score a of a potential alignment a) decomposes as s(a) = X (i,j)∈a sij + X i/∈a siϵ + X j /∈a sϵj (1) where sij are word-to-word potentials and siϵ and sϵj represent English null and foreign null potentials, respectively. We evaluate our proposed alignments (a) against hand-annotated alignments, which are marked with sure (s) and possible (p) alignments. The alignment error rate (AER) is given by, AER(a, s, p) = 1 −|a ∩s| + |a ∩p| |a| + |s| 2.1 1-to-1 Matchings The class of at most 1-to-1 alignment matchings, A1-1, has been considered in several works (Melamed, 2000; Taskar et al., 2005; Moore et al., 2006). The alignment that maximizes a set of potentials factored as in Equation (1) can be found in O(n3) time using a bipartite matching algorithm (Kuhn, 1955).1 On the other hand, summing over A1-1 is #P-hard (Valiant, 1979). Initially, we consider heuristic alignment potentials given by Dice coefficients Dice(e, f) = 2Cef Ce + Cf where Cef is the joint count of words (e, f) appearing in aligned sentence pairs, and Ce and Cf are monolingual unigram counts. We extracted such counts from 1.1 million French-English aligned sentence pairs of Hansards data (see Section 6.1). For each sentence pair in the Hansards test set, we predicted the alignment from A1-1 which maximized the sum of Dice potentials. This yielded 30.6 AER. 2.2 Inversion Transduction Grammar Wu (1997)’s inversion transduction grammar (ITG) is a synchronous grammar formalism in which derivations of sentence pairs correspond to alignments. In its original formulation, there is a single non-terminal X spanning a bitext cell with an English and foreign span. 
There are three rule types: Terminal unary productions X →⟨e, f⟩, where e and f are an aligned English and foreign word pair (possibly with one being null); normal binary rules X →X(L)X(R), where the English and foreign spans are constructed from the children as ⟨X(L)X(R), X(L)X(R)⟩; and inverted binary rules X ; X(L)X(R), where the foreign span inverts the order of the children ⟨X(L)X(R), X(R)X(L)⟩.2 In general, we will call a bitext cell a normal cell if it was constructed with a normal rule and inverted if constructed with an inverted rule. Each ITG derivation yields some alignment. The set of such ITG alignments, AITG, are a strict subset of A1-1 (Wu, 1997). Thus, we will view ITG as a constraint on A1-1 which we will argue is generally beneficial. The maximum scoring alignment from AITG can be found in O(n6) time with synchronous CFG parsing; in practice, we can make ITG parsing efficient using a variety of pruning techniques. One computational advantage of AITG over A1-1 alignments is that summation over AITG is tractable. The corresponding 1We shall use n throughout to refer to the maximum of foreign and English sentence lengths. 2The superscripts on non-terminals are added only to indicate correspondence of child symbols. 924 Indonesia 's parliament speaker arraigned in court 印 尼 国会 议长 出庭 受审 印 尼 国会 议长 出庭 受审 Indonesia 's parliament speaker arraigned in court (a) Max-Matching Alignment (b) Block ITG Alignment Figure 1: Best alignments from (a) 1-1 matchings and (b) block ITG (BITG) families respectively. The 1-1 matching is the best possible alignment in the model family, but cannot capture the fact that Indonesia is rendered as two words in Chinese or that in court is rendered as a single word in Chinese. dynamic program allows us to utilize likelihoodbased objectives for learning alignment models (see Section 4). Using the same heuristic Dice potentials on the Hansards test set, the maximal scoring alignment from AITG yields 28.4 AER—2.4 better than A1-1 —indicating that ITG can be beneficial as a constraint on heuristic alignments. 2.3 Block ITG An important alignment pattern disallowed by A1-1 is the many-to-one alignment block. While not prevalent in our hand-aligned French Hansards dataset, blocks occur frequently in our handaligned Chinese-English NIST data. Figure 1 contains an example. Extending A1-1 to include blocks is problematic, because finding a maximal 1-1 matching over phrases is NP-hard (DeNero and Klein, 2008). With ITG, it is relatively easy to allow contiguous many-to-one alignment blocks without added complexity.3 This is accomplished by adding additional unary terminal productions aligning a foreign phrase to a single English terminal or vice versa. We will use BITG to refer to this block ITG variant and ABITG to refer to the alignment family, which is neither contained in nor contains A1-1. For this alignment family, we expand the alignment potential decomposition in Equation (1) to incorporate block potentials sef and sef which represent English and foreign many-to-one alignment blocks, respectively. One way to evaluate alignment families is to 3In our experiments we limited the block size to 4. consider their oracle AER. In the 2002 NIST Chinese-English hand-aligned data (see Section 6.2), we constructed oracle alignment potentials as follows: sij is set to +1 if (i, j) is a sure or possible alignment in the hand-aligned data, 1 otherwise. All null potentials (siϵ and sϵj) are set to 0. 
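The oracle construction just described, together with the AER definition given at the start of Section 2, fits in a few lines. In the sketch below alignments are plain Python sets of (i, j) pairs, and the value for non-gold cells is assumed to be -1 (a penalty on links outside the sure and possible sets is what lets a max-matching recover a minimal-loss alignment).

```python
def aer(predicted, sure, possible):
    """AER(a, s, p) = 1 - (|a & s| + |a & p|) / (|a| + |s|), with alignments as sets of (i, j)."""
    if not predicted and not sure:
        return 0.0
    hits = len(predicted & sure) + len(predicted & possible)
    return 1.0 - hits / (len(predicted) + len(sure))


def oracle_potentials(I, J, sure, possible):
    """Word-pair potentials: +1 for gold (sure or possible) cells, -1 otherwise (assumed);
    the null potentials s_{i,eps} and s_{eps,j} are all set to 0."""
    gold = sure | possible
    s_word = [[1.0 if (i, j) in gold else -1.0 for j in range(J)] for i in range(I)]
    return s_word, [0.0] * I, [0.0] * J
```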
A max-matching under these potentials is generally a minimal loss alignment in the family. The oracle AER computed in this was is 10.1 for A1-1 and 10.2 for AITG. The ABITG alignment family has an oracle AER of 1.2. These basic experiments show that AITG outperforms A1-1 for heuristic alignments, and ABITG provide a much closer fit to true Chinese-English alignments than A1-1. 3 Margin-Based Training In this and the next section, we discuss learning alignment potentials. As input, we have a training set D = (x1, a∗ 1), . . . , (xn, a∗ n) of hand-aligned data, where x refers to a sentence pair. We will assume the score of a alignment is given as a linear function of a feature vector φ(x, a). We will further assume the feature representation of an alignment, φ(x, a) decomposes as in Equation (1), X (i,j)∈a φij(x) + X i/∈a φiϵ(x) + X j /∈a φϵj(x) In the framework of loss-augmented margin learning, we seek a w such that w · φ(x, a∗) is larger than w · φ(x, a) + L(a, a∗) for all a in an alignment family, where L(a, a∗) is the loss between a proposed alignment a and the gold alignment a∗. As in Taskar et al. (2005), we utilize a 925 loss that decomposes across alignments. Specifically, for each alignment cell (i, j) which is not a possible alignment in a∗, we incur a loss of 1 when aij ̸= a∗ ij; note that if (i, j) is a possible alignment, our loss is indifferent to its presence in the proposal alignment. A simple loss-augmented learning procedure is the margin infused relaxed algorithm (MIRA) (Crammer et al., 2006). MIRA is an online procedure, where at each time step t + 1, we update our weights as follows: wt+1 = argminw||w −wt||2 2 (2) s.t. w · φ(x, a∗) ≥w · φ(x, ˆa) + L(ˆa, a∗) where ˆa = arg max a∈A wt · φ(x, a) In our data sets, many a∗are not in A1-1 (and thus not in AITG), implying the minimum infamily loss must exceed 0. Since MIRA operates in an online fashion, this can cause severe stability problems. On the Hansards data, the simple averaging technique described by Collins (2002) yields a reasonable model. On the Chinese NIST data, however, where almost no alignment is in A1-1, the update rule from Equation (2) is completely unstable, and even the averaged model does not yield high-quality results. We instead use a variant of MIRA similar to Chiang et al. (2008). First, rather than update towards the hand-labeled alignment a∗, we update towards an alignment which achieves minimal loss within the family.4 We call this bestin-class alignment a∗ p. Second, we perform lossaugmented inference to obtain ˆa. This yields the modified QP, wt+1 = argminw||w −wt||2 2 (3) s.t. w · φ(x, a∗ p) ≥w · φ(x, ˆa) + L(a, a∗ p) where ˆa = arg max a∈A wt · φ(x, a) + λL(a, a∗ p) By setting λ = 0, we recover the MIRA update from Equation (2). As λ grows, we increase our preference that ˆa have high loss (relative to a∗ p) rather than high model score. With this change, MIRA is stable, but still performs suboptimally. The reason is that initially the score for all alignments is low, so we are biased toward only using very high loss alignments in our constraint. This slows learning and prevents us from finding a useful weight vector. Instead, in all the experiments 4There might be several alignments which achieve this minimal loss; we choose arbitrarily among them. we report here, we begin with λ = 0 and slowly increase it to λ = 0.5. 
4 Likelihood Objective An alternative to margin-based training is a likelihood objective, which learns a conditional alignment distribution Pw(a|x) parametrized as follows, log Pw(a|x)=w·φ(x,a)−log X a′∈A exp(w·φ(x,a′)) where the log-denominator represents a sum over the alignment family A. This alignment probability only places mass on members of A. The likelihood objective is given by, max w X (x,a∗)∈A log Pw(a∗|x) Optimizing this objective with gradient methods requires summing over alignments. For AITG and ABITG, we can efficiently sum over the set of ITG derivations in O(n6) time using the inside-outside algorithm. However, for the ITG grammar presented in Section 2.2, each alignment has multiple grammar derivations. In order to correctly sum over the set of ITG alignments, we need to alter the grammar to ensure a bijective correspondence between alignments and derivations. 4.1 ITG Normal Form There are two ways in which ITG derivations double count alignments. First, n-ary productions are not binarized to remove ambiguity; this results in an exponential number of derivations for diagonal alignments. This source of overcounting is considered and fixed by Wu (1997) and Zens and Ney (2003), which we briefly review here. The resulting grammar, which does not handle null alignments, consists of a symbol N to represent a bitext cell produced by a normal rule and I for a cell formed by an inverted rule; alignment terminals can be either N or I. In order to ensure unique derivations, we stipulate that a N cell can be constructed only from a sequence of smaller inverted cells I. Binarizing the rule N →I2+ introduces the intermediary symbol N (see Figure 2(a)). Similarly for inverse cells, we insist an I cell only be built by an inverted combination of N cells; binarization of I ; N2+ requires the introduction of the intermediary symbol I (see Figure 2(b)). Null productions are also a source of double counting, as there are many possible orders in 926 N →I2+ N →IN N →I } N →IN I I I N N N (a) Normal Domain Rules } I ⇝N 2+ I ⇝NI I ⇝NI I ⇝N N N N I I I (b) Inverted Domain Rules N11 →⟨·, f⟩N11 N11 →N10 N10 →N10⟨e, ·⟩ N10 →N00 }N11 →⟨·, f⟩∗N10 } N10 →N00⟨e, ·⟩∗ } N00 →I11N N →I11N N →I00 N00 →I+ 11I00 N00 N10 N10 N11 N N I11 I11 I00 N00 N11 (c) Normal Domain with Null Rules } } } I11 ⇝⟨·, f⟩I11 I11 ⇝I10 I11 ⇝⟨·, f⟩∗I10 I10 ⇝I10⟨e, ·⟩ I10 ⇝I00 I10 ⇝I00⟨e, ·⟩∗ I00 ⇝N + 11N00 I I N00 N11 N11 I00 ⇝N11I I ⇝N11I I ⇝N00 I00 I00 I10 I10 I11 I11 (d) Inverted Domain with Null Rules Figure 2: Illustration of two unambiguous forms of ITG grammars: In (a) and (b), we illustrate the normal grammar without nulls (presented in Wu (1997) and Zens and Ney (2003)). In (c) and (d), we present a normal form grammar that accounts for null alignments. which to attach null alignments to a bitext cell; we address this by adapting the grammar to force a null attachment order. We introduce symbols N00, N10, and N11 to represent whether a normal cell has taken no nulls, is accepting foreign nulls, or is accepting English nulls, respectively. We also introduce symbols I00, I10, and I11 to represent inverse cells at analogous stages of taking nulls. As Figures 2 (c) and (d) illustrate, the directions in which nulls are attached to normal and inverse cells differ. The N00 symbol is constructed by one or more ‘complete’ inverted cells I11 terminated by a no-null I00. By placing I00 in the lower right hand corner, we allow the larger N00 to unambiguously attach nulls. 
N00 transitions to the N10 symbol and accepts any number of ⟨e, ·⟩English terminal alignments. Then N10 transitions to N11 and accepts any number of ⟨·, f⟩foreign terminal alignments. An analogous set of grammar rules exists for the inverted case (see Figure 2(d) for an illustration). Given this normal form, we can efficiently compute model expectations over ITG alignments without double counting.5 To our knowledge, the alteration of the normal form to accommodate null emissions is novel to this work. 5The complete grammar adds sentinel symbols to the upper left and lower right, and the root symbol is constrained to be a N00. 4.2 Relaxing the Single Target Assumption A crucial obstacle for using the likelihood objective is that a given a∗may not be in the alignment family. As in our alteration to MIRA (Section 3), we could replace a∗with a minimal loss in-class alignment a∗ p. However, in contrast to MIRA, the likelihood objective will implicitly penalize proposed alignments which have loss equal to a∗ p. We opt instead to maximize the probability of the set of alignments M(a∗) which achieve the same optimal in-class loss. Concretely, let m∗be the minimal loss achievable relative to a∗in A. Then, M(a∗) = {a ∈A|L(a, a∗) = m∗} When a∗is an ITG alignment (i.e., m∗is 0), M(a∗) consists only of alignments which have all the sure alignments in a∗, but may have some subset of the possible alignments in a∗. See Figure 3 for a specific example where m∗= 1. Our modified likelihood objective is given by, max w X (x,a∗)∈D log X a∈M(a∗) Pw(a|x) Note that this objective is no longer convex, as it involves a logarithm of a summation, however we still utilize gradient-based optimization. Summing and obtaining feature expectations over M(a∗) can be done efficiently using a constrained variant 927 MIRA Likelihood 1-1 ITG ITG-S ITG-N Features P R AER P R AER P R AER P R AER Dice,dist 85.9 82.6 15.6 86.7 82.9 15.0 89.2 85.2 12.6 87.8 82.6 14.6 +lex,ortho 89.3 86.0 12.2 90.1 86.4 11.5 92.0 90.6 8.6 90.3 88.8 10.4 +joint HMM 95.8 93.8 5.0 96.0 93.2 5.2 95.5 94.2 5.0 95.6 94.0 5.1 Table 1: Results on the French Hansards dataset. Columns indicate models and training methods. The rows indicate the feature sets used. ITG-S uses the simple grammar (Section 2.2). ITG-N uses the normal form grammar (Section 4.1). For MIRA (Viterbi inference), the highest-scoring alignment is the same, regardless of grammar. That is not good enough Se ne est pas suffisant a∗ Gold Alignment Target Alignments M(a∗) Figure 3: Often, the gold alignment a∗isn’t in our alignment family, here ABIT G. For the likelihood objective (Section 4.2), we maximize the probability of the set M(a∗) consisting of alignments ABIT G which achieve minimal loss relative to a∗. In this example, the minimal loss is 1, and we have a choice of removing either of the sure alignments to the English word not. We also have the choice of whether to include the possible alignment, yielding 4 alignments in M(a∗). of the inside-outside algorithm where sure alignments not present in a∗are disallowed, and the number of missing sure alignments is appended to the state of the bitext cell.6 One advantage of the likelihood-based objective is that we can obtain posteriors over individual alignment cells, Pw((i, j)|x) = X a∈A:(i,j)∈a Pw(a|x) We obtain posterior ITG alignments by including all alignment cells (i, j) such that Pw((i, j)|x) exceeds a fixed threshold t. 
Posterior thresholding allows us to easily trade-off precision and recall in our alignments by raising or lowering t. 5 Dynamic Program Pruning Both discriminative methods require repeated model inference: MIRA depends upon lossaugmented Viterbi parsing, while conditional like6Note that alignments that achieve the minimal loss would not introduce any alignments not either sure or possible, so it suffices to keep track only of the number of sure recall errors. lihood uses the inside-outside algorithm for computing cell posteriors. Exhaustive computation of these quantities requires an O(n6) dynamic program that is prohibitively slow even on small supervised training sets. However, most of the search space can safely be pruned using posterior predictions from a simpler alignment models. We use posteriors from two jointly estimated HMM models to make pruning decisions during ITG inference (Liang et al., 2006). Our first pruning technique is broadly similar to Cherry and Lin (2007a). We select high-precision alignment links from the HMM models: those word pairs that have a posterior greater than 0.9 in either model. Then, we prune all bitext cells that would invalidate more than 8 of these high-precision alignments. Our second pruning technique is to prune all one-by-one (word-to-word) bitext cells that have a posterior below 10−4 in both HMM models. Pruning a one-by-one cell also indirectly prunes larger cells containing it. To take maximal advantage of this indirect pruning, we avoid explicitly attempting to build each cell in the dynamic program. Instead, we track bounds on the spans for which we have successfully built ITG cells, and we only iterate over larger spans that fall within those bounds. The details of a similar bounding approach appear in DeNero et al. (2009). In all, pruning reduces MIRA iteration time from 175 to 5 minutes on the NIST ChineseEnglish dataset with negligible performance loss. Likelihood training time is reduced by nearly two orders of magnitude. 6 Alignment Quality Experiments We present results which measure the quality of our models on two hand-aligned data sets. Our first is the English-French Hansards data set from the 2003 NAACL shared task (Mihalcea and Pedersen, 2003). Here we use the same 337/100 train/test split of the labeled data as Taskar et al. 928 MIRA Likelihood 1-1 ITG BITG BITG-S BITG-N Features P R AER P R AER P R AER P R AER P R AER Dice, dist, blcks, dict, lex 85.7 63.7 26.8 86.2 65.8 25.2 85.0 73.3 21.1 85.7 73.7 20.6 85.3 74.8 20.1 +HMM 90.5 69.4 21.2 91.2 70.1 20.3 90.2 80.1 15.0 87.3 82.8 14.9 88.2 83.0 14.4 Table 2: Word alignment results on Chinese-English. Each column is a learning objective paired with an alignment family. The first row represents our best model without external alignment models and the second row includes features from the jointly trained HMM. Under likelihood, BITG-S uses the simple grammar (Section 2.2). BITG-N uses the normal form grammar (Section 4.1). (2005); we compute external features from the same unlabeled data, 1.1 million sentence pairs. Our second is the Chinese-English hand-aligned portion of the 2002 NIST MT evaluation set. This dataset has 491 sentences, which we split into a training set of 150 and a test set of 191. When we trained external Chinese models, we used the same unlabeled data set as DeNero and Klein (2007), including the bilingual dictionary. For likelihood based models, we set the L2 regularization parameter, σ2, to 100 and the threshold for posterior decoding to 0.33. 
We report results using the simple ITG grammar (ITG-S, Section 2.2) where summing over derivations double counts alignments, as well as the normal form ITG grammar (ITG-N,Section 4.1) which does not double count. We ran our annealed lossaugmented MIRA for 15 iterations, beginning with λ at 0 and increasing it linearly to 0.5. We compute Viterbi alignments using the averaged weight vector from this procedure. 6.1 French Hansards Results The French Hansards data are well-studied data sets for discriminative word alignment (Taskar et al., 2005; Cherry and Lin, 2006; Lacoste-Julien et al., 2006). For this data set, it is not clear that improving alignment error rate beyond that of GIZA++ is useful for translation (Ganchev et al., 2008). Table 1 illustrates results for the Hansards data set. The first row uses dice and the same distance features as Taskar et al. (2005). The first two rows repeat the experiments of Taskar et al. (2005) and Cherry and Lin (2006), but adding ITG models that are trained to maximize conditional likelihood. The last row includes the posterior of the jointly-trained HMM of Liang et al. (2006) as a feature. This model alone achieves an AER of 5.4. No model significantly improves over the HMM alone, which is consistent with the results of Taskar et al. (2005). 6.2 Chinese NIST Results Chinese-English alignment is a much harder task than French-English alignment. For example, the HMM aligner achieves an AER of 20.7 when using the competitive thresholding heuristic of DeNero and Klein (2007). On this data set, our block ITG models make substantial performance improvements over the HMM, and moreover these results do translate into downstream improvements in BLEU score for the Chinese-English language pair. Because of this, we will briefly describe the features used for these models in detail. For features on one-by-one cells, we consider Dice, the distance features from (Taskar et al., 2005), dictionary features, and features for the 50 most frequent lexical pairs. We also trained an HMM aligner as described in DeNero and Klein (2007) and used the posteriors of this model as features. The first two columns of Table 2 illustrate these features for ITG and one-to-one matchings. For our block ITG models, we include all of these features, along with variants designed for many-to-one blocks. For example, we include the average Dice of all the cells in a block. In addition, we also created three new block-specific features types. The first type comprises bias features for each block length. The second type comprises features computed from N-gram statistics gathered from a large monolingual corpus. These include features such as the number of occurrences of the phrasal (multi-word) side of a many-to-one block, as well as pointwise mutual information statistics for the multi-word parts of many-to-one blocks. These features capture roughly how “coherent” the multi-word side of a block is. The final block feature type consists of phrase shape features. These are designed as follows: For each word in a potential many-to-one block alignment, we map an individual word to X if it is not one of the 25 most frequent words. Some example features of this type are, 929 • English Block: [the X, X], [in X of, X] • Chinese Block: [  X, X] [X |, X] For English blocks, for example, these features capture the behavior of phrases such as in spite of or in front of that are rendered as one word in Chinese. 
For Chinese blocks, these features capture the behavior of phrases containing classifier phrases like  Ç or  P, which are rendered as English indefinite determiners. The right-hand three columns in Table 2 present supervised results on our Chinese English data set using block features. We note that almost all of our performance gains (relative to both the HMM and 1-1 matchings) come from BITG and block features. The maximum likelihood-trained normal form ITG model outperforms the HMM, even without including any features derived from the unlabeled data. Once we include the posteriors of the HMM as a feature, the AER decreases to 14.4. The previous best AER result on this data set is 15.9 from Ayan and Dorr (2006), who trained stacked neural networks based on GIZA++ alignments. Our results are not directly comparable (they used more labeled data, but did not have the HMM posteriors as an input feature). 6.3 End-To-End MT Experiments We further evaluated our alignments in an end-toend Chinese to English translation task using the publicly available hierarchical pipeline JosHUa (Li and Khudanpur, 2008). The pipeline extracts a Hiero-style synchronous context-free grammar (Chiang, 2007), employs suffix-array based rule extraction (Lopez, 2007), and tunes model parameters with minimum error rate training (Och, 2003). We trained on the FBIS corpus using sentences up to length 40, which includes 2.7 million English words. We used a 5-gram language model trained on 126 million words of the Xinhua section of the English Gigaword corpus, estimated with SRILM (Stolcke, 2002). We tuned on 300 sentences of the NIST MT04 test set. Results on the NIST MT05 test set appear in Table 3. We compared four sets of alignments. The GIZA++ alignments7 are combined across directions with the grow-diag-final heuristic, which outperformed the union. The joint HMM alignments are generated from competitive posterior 7We used a standard training regimen: 5 iterations of model 1, 5 iterations of HMM, 3 iterations of Model 3, and 3 iterations of Model 4. Alignments Translations Model Prec Rec Rules BLEU GIZA++ 62 84 1.9M 23.22 Joint HMM 79 77 4.0M 23.05 Viterbi ITG 90 80 3.8M 24.28 Posterior ITG 81 83 4.2M 24.32 Table 3: Results on the NIST MT05 Chinese-English test set show that our ITG alignments yield improvements in translation quality. thresholding (DeNero and Klein, 2007). The ITG Viterbi alignments are the Viterbi output of the ITG model with all features, trained to maximize log likelihood. The ITG Posterior alignments result from applying competitive thresholding to alignment posteriors under the ITG model. Our supervised ITG model gave a 1.1 BLEU increase over GIZA++. 7 Conclusion This work presented the first large-scale application of ITG to discriminative word alignment. We empirically investigated the performance of conditional likelihood training of ITG word aligners under simple and normal form grammars. We showed that through the combination of relaxed learning objectives, many-to-one block alignment potential, and efficient pruning, ITG models can yield state-of-the art word alignments, even when the underlying gold alignments are highly nonITG. Our models yielded the lowest published error for Chinese-English alignment and an increase in downstream translation performance. References Necip Fazil Ayan and Bonnie Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on MT. In ACL. Colin Cherry and Dekang Lin. 2006. 
Soft syntactic constraints for word alignment through discriminative training. In ACL. Colin Cherry and Dekang Lin. 2007a. Inversion transduction grammar for joint phrasal translation modeling. In NAACL-HLT 2007. Colin Cherry and Dekang Lin. 2007b. A scalable inversion transduction grammar for joint phrasal translation modeling. In SSST Workshop at ACL. David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and structural translation features. In EMNLP. 930 David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In EMNLP. Koby Crammer, Ofer Dekel, Shai S. Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research. John DeNero and Dan Klein. 2007. Tailoring word alignments to syntactic machine translation. In ACL. John DeNero and Dan Klein. 2008. The complexity of phrase alignment problems. In ACL Short Paper Track. John DeNero, Mohit Bansal, Adam Pauls, and Dan Klein. 2009. Efficient parsing for transducer grammars. In NAACL. Kuzman Ganchev, Joao Graca, and Ben Taskar. 2008. Better alignments = better translations? In ACL. H. W. Kuhn. 1955. The Hungarian method for the assignment problem. Naval Research Logistic Quarterly. Simon Lacoste-Julien, Ben Taskar, Dan Klein, and Michael Jordan. 2006. Word alignment via quadratic assignment. In NAACL. Zhifei Li and Sanjeev Khudanpur. 2008. A scalable decoder for parsing-based machine translation with equivalent language model state maintenance. In SSST Workshop at ACL. Percy Liang, Dan Klein, and Dan Klein. 2006. Alignment by agreement. In NAACL-HLT. Adam Lopez. 2007. Hierarchical phrase-based translation with suffix arrays. In EMNLP. I. Dan Melamed. 2000. Models of translational equivalence among words. Computational Linguistics. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In HLT/NAACL Workshop on Building and Using Parallel Texts. Robert C. Moore, Wen tau Yih, and Andreas Bode. 2006. Improved discriminative bilingual word alignment. In ACL-COLING. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL. Slav Petrov, Aria Haghighi, and Dan Klein. 2008. Coarse-to-fine syntactic machine translation using language projections. In Empirical Methods in Natural Language Processing. Andreas Stolcke. 2002. Srilm: An extensible language modeling toolkit. In ICSLP 2002. Ben Taskar, Simon Lacoste-Julien, and Dan Klein. 2005. A discriminative matching approach to word alignment. In NAACL-HLT. L. G. Valiant. 1979. The complexity of computing the permanent. Theoretical Computer Science, 8:189– 201. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23. Richard Zens and Hermann Ney. 2003. A comparative study on reordering constraints in statistical machine translation. In ACL. Hao Zhang and Dan Gildea. 2005. Stochastic lexicalized inversion transduction grammar for alignment. In ACL. Hao Zhang, Chris Quirk, Robert C. Moore, and Daniel Gildea. 2008. Bayesian learning of noncompositional phrases with synchronous parsing. In ACL. 931
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 932–940, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Confidence Measure for Word Alignment Fei Huang IBM T.J.Watson Research Center Yorktown Heights, NY 10598, USA [email protected] Abstract In this paper we present a confidence measure for word alignment based on the posterior probability of alignment links. We introduce sentence alignment confidence measure and alignment link confidence measure. Based on these measures, we improve the alignment quality by selecting high confidence sentence alignments and alignment links from multiple word alignments of the same sentence pair. Additionally, we remove low confidence alignment links from the word alignment of a bilingual training corpus, which increases the alignment F-score, improves Chinese-English and Arabic-English translation quality and significantly reduces the phrase translation table size. 1 Introduction Data-driven approaches have been quite active in recent machine translation (MT) research. Many MT systems, such as statistical phrase-based and syntax-based systems, learn phrase translation pairs or translation rules from large amount of bilingual data with word alignment. The quality of the parallel data and the word alignment have significant impacts on the learned translation models and ultimately the quality of translation output. Due to the high cost of commissioned translation, many parallel sentences are automatically extracted from comparable corpora, which inevitably introduce many ”noises”, i.e., inaccurate or non-literal translations. Given the huge amount of bilingual training data, word alignments are automatically generated using various algorithms ((Brown et al., 1994), (Vogel et al., 1996) Figure 1: An example of inaccurate translation and word alignment. and (Ittycheriah and Roukos, 2005)), which also introduce many word alignment errors. The example in Figure 1 shows the word alignment of the given Chinese and English sentence pair, where the English words following each Chinese word is its literal translation. We find untranslated Chinese and English words (marked with underlines). These spurious words cause significant word alignment errors (as shown with dash lines), which in turn directly affect the quality of phrase translation tables or translation rules that are learned based on word alignment. In this paper we introduce a confidence measure for word alignment, which is robust to extra or missing words in the bilingual sentence pairs, as well as word alignment errors. We propose a sentence alignment confidence measure based on the alignment’s posterior probability, and extend it to the alignment link confidence measure. We illustrate the correlation between the alignment confidence measure and the alignment quality on the sentence level, and present several approaches to improve alignment accuracy based on the proposed confidence measure: sentence alignment selection, alignment link combination and alignment link filtering. Finally we demonstrate 932 the improved alignments also lead to better MT quality. The paper is organized as follows: In section 2 we introduce the sentence and alignment link confidence measures. In section 3 we demonstrate two approaches to improve alignment accuracy through alignment combination. In section 4 we show how to improve a MaxEnt word alignment quality by removing low confidence alignment links, which also leads to improved translation quality as shown in section 5. 
2 Sentence Alignment Confidence Measure 2.1 Definition Given a bilingual sentence pair (S,T) where S={s1,. . . , sI} is the source sentence and T={t1, . . . ,tJ} is the target sentence. Let A = {aij} be the alignment between S and T. The alignment confidence measure C(A|S, T) is defined as the geometric mean of the alignment posterior probabilities calculated in both directions: C(A|S, T) = p Ps2t(A|S, T)Pt2s(A|T, S), (1) where Ps2t(A|S, T) = P(A, T|S) P A′ P(A′, T|S). (2) When computing the source-to-target alignment posterior probability, the numerator is the sentence translation probability calculated according to the given alignment A: P(A, T|S) = J Y j=1 p(tj|si, aij ∈A). (3) It is the product of lexical translation probabilities for the aligned word pairs. For unaligned target word tj, consider si = NULL. The source-totarget lexical translation model p(t|s) and targetto-source model p(s|t) can be obtained through IBM Model-1 or HMM training. The denominator is the sentence translation probability summing over all possible alignments, which can be calculated similar to IBM Model 1 in (Brown et al., 1994): X A′ P(A′, T|S) = J Y j=1 I X i=1 p(tj|si). (4) Aligner F-score Cor. Coeff. HMM 54.72 -0.710 BM 62.53 -0.699 MaxEnt 69.26 -0.699 Table 1: Correlation coefficients of multiple alignments. Note that here only the word-based lexicon model is used to compute the confidence measure. More complex models such as alignment models, fertility models and distortion models as described in (Brown et al., 1994) could estimate the probability of a given alignment more accurately. However the summation over all possible alignments is very complicated, even intractable, with the richer models. For the efficient computation of the denominator, we use the lexical translation model. Similarly, Pt2s(A|T, S) = P(A, S|T) P A′ P(A′, S|T), (5) and P(A, S|T) = IY i=1 p(si|tj, aij ∈A). (6) X A′ P(A′, S|T) = IY i=1 J X j=1 p(si|tj). (7) We randomly selected 512 Chinese-English (CE) sentence pairs and generated word alignment using the MaxEnt aligner (Ittycheriah and Roukos, 2005). We evaluate per sentence alignment Fscores by comparing the system output with a reference alignment. For each sentence pair, we also calculate the sentence alignment confidence score −log C(A|S, T). We compute the correlation coefficients between the alignment confidence measure and the alignment F-scores. The results in Figure 2 shows strong correlation between the confidence measure and the alignment F-score, with the correlation coefficients equals to -0.69. Such strong correlation is also observed on an HMM alignment (Ge, 2004) and a Block Model (BM) alignment (Zhao et al., 2005) with varying alignment accuracies, as seen in Table1. 2.2 Sentence Alignment Selection Based on Confidence Measure The strong correlation between the sentence alignment confidence measure and the alignment F933 Figure 2: Correlation between sentence alignment confidence measure and F-score. measure suggests the possibility of selecting the alignment with the highest confidence score to obtain better alignments. For each sentence pair in the C-E test set, we calculate the confidence scores of the HMM alignment, the Block Model alignment and the MaxEnt alignment, then select the alignment with the highest confidence score. As a result, 82% of selected alignments have higher Fscores, and the F-measure of the combined alignments is increased over the best aligner (the MaxEnt aligner) by 0.8. 
This relatively small improvement is mainly due to the selection of the whole sentence alignment: for many sentences the best alignment still contains alignment errors, some of which could be fixed by other aligners. Therefore, it is desirable to combine alignment links from different alignments. 3 Alignment Link Confidence Measure 3.1 Definition Similar to the sentence alignment confidence measure, the confidence of an alignment link aij in the sentence pair (S, T) is defined as c(aij|S, T) = q qs2t(aij|S, T)qt2s(aij|T, S) (8) where the source-to-target link posterior probability qs2t(aij|S, T) = p(tj|si) PJ j′=1 p(tj′|si) , (9) which is defined as the word translation probability of the aligned word pair divided by the sum of the translation probabilities over all the target words in the sentence. The higher p(tj|si) is, the higher confidence the link has. Similarly, the target-to-source link posterior probability is defined as: qt2s(aij|T, S) = p(si|tj) PI i′=1 p(si′|tj) . (10) Intuitively, the above link confidence definition compares the lexical translation probability of the aligned word pair with the translation probabilities of all the target words given the source word. If a word t occurs N times in the target sentence, for any i ∈{1, ..., I}, J X j′=1 p(tj′|si) ≥Np(t|si), thus for any tj = t, qs2t(aij) ≤1 N . This indicates that the confidence score of any link connecting tj to any source word is at most 1/N. On the one hand this is expected because multiple occurrences of the same word does increase the confusion for word alignment and reduce the link confidence. On the other hand, additional information (such as the distance of the word pair, the alignment of neighbor words) could indicate higher likelihood for the alignment link. We will introduce a context-dependent link confidence measure in section 4. 3.2 Alignment Link Selection From multiple alignments of the same sentence pair, we select high confidence links from different alignments based on their link confidence scores and alignment agreement ratio. Typically, links appearing in multiple alignments are more likely correct alignments. The alignment agreement ratio measures the popularity of a link. Suppose the sentence pair (S, T) have alignments A1,.. ., AD, the agreement ratio of a link aij is defined as r(aij|S, T) = P d C(Ad|S, T : aij ∈Ad) P d′ C(Ad′|S, T) , (11) where C(A) is the confidence score of the alignment A as defined in formula 1. This formula computes the sum of the alignment confidence scores for the alignments containing aij, which is 934 Figure 3: Example of alignment link selection by combining MaxEnt, HMM and BM alignments. normalized by the sum of all alignments’ confidence scores. We collect all the links from all the alignments. For each link we calculate the link confidence score c(aij) and the alignment agreement ratio r(aij). We link the word pair (si, tj) if either c(aij) > h1 or r(aij) > r1, where h1 and r1 are empirically chosen thresholds. We combine the HMM alignment, the BM alignment and the MaxEnt alignment (ME) using the above link selection algorithm. Figure 3 shows such an example, where alignment errors in the MaxEnt alignment are shown with dotted lines. As some of the links are correctly aligned in the HMM and BM alignments (shown with solid lines), the combined alignment corrects some alignment errors while still contains common incorrect alignment links. Table 2 shows the precision, recall and F-score of individual alignments and the combined alignment. 
F-content and F-function are the F-scores for content words and function words, respectively. The link selection algorithm improves the recall over the best aligner (the ME alignment) by 7 points (from 65.4 to 72.5) while decreasing the precision by 4.4 points (from 73.6 to 69.2). Overall it improves the F-score by 1.5 points (from 69.3 to 70.8), 1.8 point improvement for content words and 1.0 point for function words. It also significantly outperforms the traditionally used heuristics, ”intersection-union-refine” (Och and Ney, 2003) by 6 points. 4 Improved MaxEnt Aligner with Confidence-based Link Filtering In addition to the alignment combination, we also improve the performance of the MaxEnt aligner through confidence-based alignment link filtering. Here we select the MaxEnt aligner because it has 935 Precision Recall F-score F-content F-function HMM 62.65 48.57 54.72 62.10 34.39 BM 72.76 54.82 62.53 68.64 43.93 ME 72.66 66.17 69.26 72.52 61.41 Link-Select 69.19 72.49 70.81 74.31 60.26 Intersection-Union-Refine 63.34 66.07 64.68 70.15 49.72 Table 2: Link Selection and Combination Results the highest F-measure among the three aligners, although the algorithm described below can be applied to any aligner. It is often observed that words within a constituent (such as NP, PP) are typically translated together, and their alignments are close. As a result the confidence measure of an alignment link aij can be boosted given the alignment of its context words. From the initial sentence alignment we first identify an anchor link amn, the high confidence alignment link closest to aij. The anchor link is considered as the most reliable connection between the source and target context. The context is then defined as a window centering at amn with window width proportional to the distance between aij and amn. When computing the context-dependent link confidence, we only consider words within the context window. The context-dependent alignment link confidence is calculated in the following steps: 1. Calculate the context-independent link confidence measure c(aij) according to formula (8). 2. Sort all links based on their link confidence measures in decreasing order. 3. Select links whose confidence scores are higher than an empirically chosen threshold H as anchor links 1. 4. Walking along the remaining sorted links. For each link {aij : c(aij) < H}, (a) Find the closest anchor link amn2, (b) Define the context window width w = |m −i| + |n −j|. 1H is selected to maximize the F-score on an alignment devset. 2When two equally close alignment links have the same confidence score), we randomly select one of the tied links as the anchor link. (c) Compute the link posterior probabilities within the context window: qs2t(aij|amn) = p(tj|si) Pj+w j′=j−w p(tj′|si) , qt2s(aij|amn) = p(si|tj) Pi+w i′=i−w p(si′|tj) . (d) Compute the context-dependent link confidence score c(aij|amn) = q qs2t(aij|amn)qt2s(aij|amn). If c(aij|amn) > H, add aij into the set of anchor links. 5. Only keep anchor links and remove all the remaining links with low confidence scores. The above link filtering algorithm is designed to remove incorrect links. Furthermore, it is possible to create new links by relinking unaligned source and target word pairs within the context window if their context-dependent link posterior probability is high. Figure 4 shows context-independent link confidence scores for the given sentence alignment. The subscript following each word indicates the word’s position. 
Incorrect alignment links are shown with dashed lines, which have low confidence scores (a5,7, a7,3, a8,2, a11,9) and will be removed through filtering. When the anchor link a4,11 is selected, the context-dependent link confidence of a6,12 is increased from 0.12 to 0.51. Also note that a new link a7,12 (shown as a dotted line) is created because within the context window, the link confidence score is as high as 0.96. This example shows that the context-dependent link filtering not only removes incorrect links, but also create new links based on updated confidence scores. We applied the confidence-based link filtering on Chinese-English and Arabic-English word alignment. The C-E alignment test set is the same 936 Figure 4: Alignment link filtering based on context-independent link confidence. Precision Recall F-score Baseline 72.66 66.17 69.26 +ALF 78.14 64.36 70.59 Table 3: Confidence-based Alignment Link Filtering on C-E Alignment Precision Recall F-score Baseline 84.43 83.64 84.04 +ALF 88.29 83.14 85.64 Table 4: Confidence-based Alignment Link Filtering on A-E Alignment 512 sentence pairs, and the A-E alignment test set is the 200 Arabic-English sentence pairs from NIST MT03 test set. Tables 3 and 4 show the improvement of C-E and A-E alignment F-measures with the confidence-based alignment link filtering (ALF). For C-E alignment, removing low confidence alignment links increased alignment precision by 5.5 point, while decreased recall by 1.8 point, and the overall alignment F-measure is increased by 1.3 point. When looking into the alignment links which are removed during the alignment link filtering process, we found that 80% of the removed links (1320 out of 1661 links) are incorrect alignments, For A-E alignment, it increased the precision by 3 points while reducing recall by 0.5 points, and the alignment F-measure is increased by about 1.5 points absolute, a 10% relative alignment error rate reduction. Similarly, 90% of the removed links are incorrect alignments. 5 Translation We evaluate the improved alignment on several Chinese-English and Arabic-English machine translation tasks. The documents to be translated are from difference genres: newswire (NW) and web-blog (WB). The MT system is a phrasebased SMT system as described in (Al-Onaizan and Papineni, 2006). The training data are bilingual sentence pairs with word alignment, from which we obtained phrase translation pairs. We extract phrase translation tables from the baseline MaxEnt word alignment as well as the alignment with confidence-based link filtering, then translate the test set with each phrase translation table. We measure the translation quality with automatic metrics including BLEU (Papineni et al., 2001) and TER (Snover et al., 2006). The higher the BLEU score is, or the lower the TER score is, the better the translation quality is. We combine the two metrics into (TER-BLEU)/2 and try to minimize it. In addition to the whole test set’s scores, we also measure the scores of the ”tail” documents, whose (TER-BLEU)/2 scores are at the bottom 10 percentile (for A-E translation) and 20 percentile (for C-E translation) and are considered the most difficult documents to translate. In the Chinese-English MT experiment, we selected 40 NW documents, 41 WB documents as the test set, which includes 623 sentences with 16667 words. The training data includes 333 thousand C-E sentence pairs subsampled from 10 million sentence pairs according to the test data. 
Tables 5 and 6 show the newswire and web-blog translation scores as well as the number of phrase translation pairs obtained from each alignment. Because the alignment link filtering removes many incorrect alignment links, the number of phrase translation pairs is reduced by 15%. For newswire, the translation quality is improved by 0.44 on the whole test set and 1.1 on the tail documents, as measured by (TER-BLEU)/2. For web-blog, we observed 0.2 improvement on the whole test set and 0.5 on the tail documents. The tail documents typically have lower phrase coverage, thus incorrect phrase translation pairs derived from incorrect 937 # phrase pairs Average Tail TER BLEU (TER-BLEU)/2 TER BLEU (TER-BLEU)/2 Baseline 934206 60.74 28.05 16.35 69.02 17.83 25.60 ALF 797685 60.33 28.52 15.91 68.31 19.27 24.52 Table 5: Improved Chinese-English Newswire Translation with Alignment Link Filtering # phrase pairs Average Tail TER BLEU (TER-BLEU)/2 TER BLEU (TER-BLEU)/2 Baseline 934206 62.87 25.08 18.89 66.55 18.80 23.88 ALF 797685 62.30 24.89 18.70 65.97 19.25 23.36 Table 6: Improved Chinese-English Web-Blog Translation with Alignment Link Filtering alignment links are more likely to be selected. The removal of incorrect alignment links and cleaner phrase translation pairs brought more gains on the tail documents. In the Arabic-English MT, we selected 80 NW documents and 55 WB documents. The NW training data includes 319 thousand A-E sentence pairs subsampled from 7.2 million sentence pairs with word alignments. The WB training data includes 240 thousand subsampled sentence pairs. Tables 7 and 8 show the corresponding translation results. Similarly, the phrase table size is significantly reduced by 35%, while the gains on the tail documents range from 0.6 to 1.4. On the whole test set the difference is smaller, 0.07 for the newswire translation and 0.58 for the web-blog translation. 6 Related Work In the machine translation area, most research on confidence measure focus on the confidence of MT output: how accurate a translated sentence is. (Gandrabur and Foster, 2003) used neural-net to improve the confidence estimate for text predictions in a machine-assisted translation tool. (Ueffing et al., 2003) presented several word-level confidence measures for machine translation based on word posterior probabilities. (Blatz et al., 2004) conducted extensive study incorporating various sentence-level and word-level features thru multilayer perceptron and naive Bayes algorithms for sentence and word confidence estimation. (Quirk, 2004) trained a sentence level confidence measure using a human annotated corpus. (Bach et al., 2008) used the sentence-pair confidence scores estimated with source and target language models to weight phrase translation pairs. However, there has been little research focusing on confidence measure for word alignment. This work is the first attempt to address the alignment confidence problem. Regarding word alignment combination, in addition to the commonly used ”intersection-unionrefine” approach (Och and Ney, 2003), (Ayan and Dorr, 2006b) and (Ayan et al., 2005) combined alignment links from multiple word alignment based on a set of linguistic and alignment features within the MaxEnt framework or a neural net model. While in this paper, the alignment links are combined based on their confidence scores and alignment agreement ratios. (Fraser and Marcu, 2007) discussed the impact of word alignment’s precision and recall on MT quality. 
Here removing low confidence links results in higher precision and slightly lower recall for the alignment. In our phrase extraction, we allow extracting phrase translation pairs with unaligned functional words at the boundary. This is similar to the ”loose phrases” described in (Ayan and Dorr, 2006a), which increased the number of correct phrase translations and improved the translation quality. On the other hand, removing incorrect content word links produced cleaner phrase translation tables. When translating documents with lower phrase coverage (typically the “tail” documents), high quality phrase translations are particularly important because a bad phrase translation can be picked up more easily due to limited phrase translation pairs available. 7 Conclusion In this paper we presented two alignment confidence measures for word alignment. The first is the sentence alignment confidence measure, based on which the best whole sentence alignment is se938 # phrase pairs Average Tail TER BLEU (TER-BLEU)/2 TER BLEU (TER-BLEU)/2 Baseline 939911 43.53 50.51 -3.49 53.14 40.60 6.27 ALF 618179 43.11 50.24 -3.56 51.75 42.05 4.85 Table 7: Improved Arabic-English Newswire Translation with Alignment Link Filtering # phrase pairs Average Tail TER BLEU (TER-BLEU)/2 TER BLEU (TER-BLEU)/2 Baseline 598721 49.91 39.90 5.00 57.30 30.98 13.16 ALF 383561 48.94 40.00 4.42 55.99 31.92 12.04 Table 8: Improved Arabic-English Web-Blog Translation with Alignment Link Filtering lected among multiple alignments and it obtained 0.8 F-measure improvement over the single best Chinese-English aligner. The second is the alignment link confidence measure, which selects the most reliable links from multiple alignments and obtained 1.5 F-measure improvement. When we removed low confidence links from the MaxEnt aligner, we reduced the Chinese-English alignment error by 5% and the Arabic-English alignment error by 10%. The cleaned alignment significantly reduced the size of phrase translation tables by 15-35%. It furthermore led to better translation scores for Chinese and Arabic documents with different genres. In particular, it improved the translation scores of the tail documents by 0.5-1.4 points measured by the combined metric of (TERBLEU)/2. For future work we would like to explore richer models to estimate alignment posterior probability. In most cases, exact calculation by summing over all possible alignments is impossible, and approximation using N-best alignments is needed. Acknowledgments We are grateful to Abraham Ittycheriah, Yaser AlOnaizan, Niyu Ge and Salim Roukos and anonymous reviewers for their constructive comments. This work was supported in part by the DARPA GALE project, contract No. HR0011-08-C-0110. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion Models for Statistical Machine Translation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 529–536, Sydney, Australia, July. Association for Computational Linguistics. Necip Fazil Ayan and Bonnie J. Dorr. 2006a. Going beyond aer: An extensive analysis of word alignments and their impact on mt. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 9–16, Sydney, Australia, July. Association for Computational Linguistics. Necip Fazil Ayan and Bonnie J. Dorr. 2006b. A maximum entropy approach to combining word alignments. 
In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 96–103, New York City, USA, June. Association for Computational Linguistics. Necip Fazil Ayan, Bonnie J. Dorr, and Christof Monz. 2005. Neuralign: Combining word alignments using neural networks. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 65–72, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. Nguyen Bach, Qin Gao, and Stephan Vogel. 2008. Improving word alignment with language model based confidence scores. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 151–154, Columbus, Ohio, June. Association for Computational Linguistics. John Blatz, Erin Fitzgerald, George Foster, Simona Gandrabur, Cyril Goutte, Alex Kulesza, Alberto Sanchis, and Nicola Ueffing. 2004. Confidence estimation for machine translation. In COLING ’04: Proceedings of the 20th international conference on Computational Linguistics, page 315, Morristown, NJ, USA. Association for Computational Linguistics. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1994. The Mathematic of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263– 311. 939 Alexander Fraser and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Comput. Linguist., 33(3):293–303. Simona Gandrabur and George Foster. 2003. Confidence estimation for translation prediction. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003, pages 95–102, Morristown, NJ, USA. Association for Computational Linguistics. Niyu Ge. 2004. Max-posterior hmm alignment for machine translation. In Presentation given at DARPA/TIDES NIST MT Evaluation workshop. Abraham Ittycheriah and Salim Roukos. 2005. A maximum entropy word aligner for arabic-english machine translation. In HLT ’05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 89–96, Morristown, NJ, USA. Association for Computational Linguistics. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist., 29(1):19–51, March. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2001. BLEU: a Method for Automatic Evaluation of Machine Translation. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311– 318, Morristown, NJ, USA. Association for Computational Linguistics. Chris Quirk. 2004. Training a sentence-level machine translation confidence measure. In In Proc. LREC 2004, pages 825–828, Lisbon, Portual. SpringerVerlag. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Proceedings of Association for Machine Translation in the Americas. Nicola Ueffing, Klaus Macherey, and Hermann Ney. 2003. Confidence measures for statistical machine translation. In In Proc. MT Summit IX, pages 394– 401. Springer-Verlag. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. Hmm-based word alignment in statistical translation. In Proceedings of the 16th conference on Computational linguistics, pages 836–841, Morristown, NJ, USA. Association for Computational Linguistics. Bing Zhao, Niyu Ge, and Kishore Papineni. 2005. 
Inner-outer bracket models for word alignment using hidden blocks. In HLT ’05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, pages 177–184, Morristown, NJ, USA. Association for Computational Linguistics.
2009
105
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 941–948, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Comparative Study of Hypothesis Alignment and its Improvement for Machine Translation System Combination Boxing Chen*, Min Zhang, Haizhou Li and Aiti Aw Institute for Infocomm Research 1 Fusionopolis Way, 138632 Singapore {bxchen, mzhang, hli, aaiti}@i2r.a-star.edu.sg Abstract Recently confusion network decoding shows the best performance in combining outputs from multiple machine translation (MT) systems. However, overcoming different word orders presented in multiple MT systems during hypothesis alignment still remains the biggest challenge to confusion network-based MT system combination. In this paper, we compare four commonly used word alignment methods, namely GIZA++, TER, CLA and IHMM, for hypothesis alignment. Then we propose a method to build the confusion network from intersection word alignment, which utilizes both direct and inverse word alignment between the backbone and hypothesis to improve the reliability of hypothesis alignment. Experimental results demonstrate that the intersection word alignment yields consistent performance improvement for all four word alignment methods on both Chinese-to-English spoken and written language tasks. 1 Introduction Machine translation (MT) system combination technique leverages on multiple MT systems to achieve better performance by combining their outputs. Confusion network based system combination for machine translation has shown promising advantage compared with other techniques based system combination, such as sentence level hypothesis selection by voting and source sentence re-decoding using the phrases or translation models that are learned from the source sentences and target hypotheses pairs (Rosti et al., 2007a; Huang and Papineni, 2007). In general, the confusion network based system combination method for MT consists of four steps: 1) Backbone selection: to select a backbone (also called “skeleton”) from all hypotheses. The backbone defines the word orders of the final translation. 2) Hypothesis alignment: to build word-alignment between backbone and each hypothesis. 3) Confusion network construction: to build a confusion network based on hypothesis alignments. 4) Confusion network decoding: to decode the best translation from a confusion network. Among the four steps, the hypothesis alignment presents the biggest challenge to the method due to the varying word orders between outputs from different MT systems (Rosti et al, 2007). Many techniques have been studied to address this issue. Bangalore et al. (2001) used the edit distance alignment algorithm which is extended to multiple strings to build confusion network, it only allows monotonic alignment. Jayaraman and Lavie (2005) proposed a heuristic-based matching algorithm which allows nonmonotonic alignments to align the words between the hypotheses. More recently, Matusov et al. (2006, 2008) used GIZA++ to produce word alignment for hypotheses pairs. Sim et al. (2007), Rosti et al. (2007a), and Rosti et al. (2007b) used minimum Translation Error Rate (TER) (Snover et al., 2006) alignment to build the confusion network. Rosti et al. (2008) extended TER algorithm which allows a confusion network as the reference to compute word alignment. Karakos et al. (2008) used ITG-based method for hypothesis alignment. Chen et al. 
(2008) used Competitive Linking Algorithm (CLA) (Melamed, 2000) to align the words to construct confusion network. Ayan et al. (2008) proposed to improve alignment of hypotheses using synonyms as found in WordNet (Fellbaum, 1998) and a two-pass alignment strategy based on TER word alignment approach. He et al. (2008) proposed an IHMM-based word alignment method which the parameters are estimated indirectly from a variety of sources. Although many methods have been attempted, no systematic comparison among them has been reported. A through and fair comparison among them would be of great meaning to the MT sys941 tem combination research. In this paper, we implement a confusion network-based decoder. Based on this decoder, we compare four commonly used word alignment methods (GIZA++, TER, CLA and IHMM) for hypothesis alignment using the same experimental data and the same multiple MT system outputs with similar features in terms of translation performance. We conduct the comparison study and other experiments in this paper on both spoken and newswire domains: Chinese-to-English spoken and written language translation tasks. Our comparison shows that although the performance differences between the four methods are not significant, IHMM consistently show slightly better performance than other methods. This is mainly due to the fact the IHMM is able to explore more knowledge sources and Viterbi decoding used in IHMM allows more thorough search for the best alignment while other methods has to use less optimal greedy search. In addition, for better performance, instead of only using one direction word alignment (n-to-1 from hypothesis to backbone) as in previous work, we propose to use more reliable word alignments which are derived from the intersection of two-direction hypothesis alignment to construct confusion network. Experimental results show that the intersection word alignmentbased method consistently improves the performance for all four methods on both spoken and written language tasks. This paper is organized as follows. Section 2 presents a standard framework of confusion network based machine translation system combination. Section 3 introduces four word alignment methods, and the algorithm of computing intersection word alignment for all four word alignment methods. Section 4 describes the experiments setting and results on two translation tasks. Section 5 concludes the paper. 2 Confusion network based system combination In order to compare different hypothesis alignment methods, we implement a confusion network decoding system as follows: Backbone selection: in the previous work, Matusov et al. (2006, 2008) let every hypothesis play the role of the backbone (also called “skeleton” or “alignment reference”) once. We follow the work of (Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b; He et al., 2008) and choose the hypothesis that best agrees with other hypotheses on average as the backbone by applying Minimum Bayes Risk (MBR) decoding (Kumar and Byrne, 2004). TER score (Snover et al, 2006) is used as the loss function in MBR decoding. Given a hypothesis set H, the backbone can be computed using the following equation, where ( , ) TER • • returns the TER score of two hypotheses. ˆ ˆ arg min ( , ) b E H E H E TER E E ∈ ∈ = ∑ (1) Hypothesis alignment: all hypotheses are word-aligned to the corresponding backbone in a many-to-one manner. We apply four word alignment methods: GIZA++-based, TER-based, CLA-based, and IHMM-based word alignment algorithm. 
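A small sketch of the MBR backbone selection in equation (1) above; the `ter` scorer (e.g. a wrapper around the TERCOM tool) is assumed to be supplied externally and is not part of the paper's code.

```python
def select_backbone(hypotheses, ter):
    """Minimum Bayes Risk backbone selection with TER as the loss (equation (1)):
    pick the hypothesis whose summed TER against every hypothesis in H is minimal."""
    best, best_cost = None, float("inf")
    for e in hypotheses:
        cost = sum(ter(e, e_prime) for e_prime in hypotheses)
        if cost < best_cost:
            best, best_cost = e, cost
    return best
```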
For each method, we will give details in the next section. Confusion network construction: confusion network is built from one-to-one word alignment; therefore, we need to normalize the word alignment before constructing the confusion network. The first normalization operation is removing duplicated links, since GIZA++ and IHMMbased word alignments could be n-to-1 mappings between the hypothesis and backbone. Similar to the work of (He et al., 2008), we keep the link which has the highest similarity measure ( , ) j i S e e ′ based on surface matching score, such as the length of maximum common subsequence (MCS) of the considered word pair. 2 ( ( , )) ( , ) ( ) ( ) j i j i j i len MCS e e S e e len e len e ′ × ′ = ′ + (2) where ( , ) j i MCS e e ′ is the maximum common subsequence of word je′ and ie ; (.) len is a function to compute the length of letter sequence. The other hypothesis words are set to align to the null word. For example, in Figure 1, 1e′ and 3e′ are aligned to the same backbone word 2e , we remove the link between 2e and 3e′ if 3 2 1 2 ( , ) ( , ) S e e S e e ′ ′ < , as shown in Figure 1 (b). The second normalization operation is reordering the hypothesis words to match the word order of the backbone. The aligned words are reordered according to their alignment indices. To reorder the null-aligned words, we need to first insert the null words into the proper position in the backbone and then reorder the null-aligned hypothesis words to match the nulls on the backbone side. Reordering null-aligned words varies based to the word alignment method in the pre942 vious work. We reorder the null-aligned word following the approach of Chen et al. (2008) with some extension. The null-aligned words are reordered with its adjacent word: moving with its left word (as Figure 1 (c)) or right word (as Figure 1 (d)). However, to reduce the possibility of breaking a syntactic phrase, we extend to choose one of the two above operations depending on which one has the higher likelihood with the current null-aligned word. It is implemented by comparing two association scores based on cooccurrence frequencies. They are association score of the null-aligned word and its left word, or the null-aligned word and its right word. We use point-wise mutual information (MI) as Equation 3 to estimate the likelihood. 1 1 1 ( ) ( , ) log ( ) ( ) i i i i i i p e e MI e e p e p e + + + ′ ′ ′ ′ = ′ ′ (3) where 1 ( ) i i p e e + ′ ′ is the occurrence probability of bigram 1 i i e e + ′ ′ observed in the hypothesis list; ( ) i p e′ and 1 ( ) i p e +′ are probabilities of hypothesis word ie′ and 1 ie +′ respectively. In example of Figure 1, we choose (c) if 2 3 3 4 ( , ) ( , ) MI e e MI e e ′ ′ ′ ′ > , otherwise, word is reordered as (d). a 1e 2e 3e 1e′ 2e′ 3e′ 4e′ b 1e 2e 3e 1e′ 2e′ 3e′ 4e′ c 1e 2e 3e 4e′ 1e′ 2e′ 3e′ d 1e 2e 3e 3e′ 4e′ 1e′ 2e′ Figure 1: Example of alignment normalization. Confusion network decoding: the output translations for a given source sentence are extracted from the confusion network through a beam-search algorithm with a log-linear combination of a set of feature functions. The feature functions which are employed in the search process are: • Language model(s), • Direct and inverse IBM model-1, • Position-based word posterior probabilities (arc scores of the confusion network), • Word penalty, • N-gram frequencies (Chen et al., 2005), • N-gram posterior probabilities (Zens and Ney, 2006). 
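The MCS-based surface similarity of equation (2), used above to decide which of several competing links to keep, can be sketched as follows; the zero-length guard is an added convenience.

```python
def mcs_len(a, b):
    """Length of the maximum (longest) common letter subsequence of two words."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for x in range(1, len(a) + 1):
        for y in range(1, len(b) + 1):
            dp[x][y] = (dp[x - 1][y - 1] + 1 if a[x - 1] == b[y - 1]
                        else max(dp[x - 1][y], dp[x][y - 1]))
    return dp[len(a)][len(b)]

def surface_similarity(hyp_word, bb_word):
    """Equation (2): S(e'_j, e_i) = 2 * len(MCS) / (len(e'_j) + len(e_i))."""
    if not hyp_word or not bb_word:
        return 0.0
    return 2.0 * mcs_len(hyp_word, bb_word) / (len(hyp_word) + len(bb_word))

def keep_best_link(bb_word, competing_hyp_words):
    """When several hypothesis words are linked to one backbone word, keep the most
    similar one; the others are re-aligned to the null word."""
    return max(competing_hyp_words, key=lambda h: surface_similarity(h, bb_word))
```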
The n-grams used in the last two feature functions are collected from the original hypotheses list from each single system. The weights of feature functions are optimized to maximize the scoring measure (Och, 2003). 3 Word alignment algorithms We compare four word alignment methods which are widely used in confusion network based system combination or bilingual parallel corpora word alignment. 3.1 Hypothesis-to-backbone word alignment GIZA++: Matusov et al. (2006, 2008) proposed using GIZA++ (Och and Ney, 2003) to align words between the backbone and hypothesis. This method uses enhanced HMM model bootstrapped from IBM Model-1 to estimate the alignment model. All hypotheses of the whole test set are collected to create sentence pairs for GIZA++ training. GIZA++ produces hypothesisbackbone many-to-1 word alignments. TER-based: TER-based word alignment method (Sim et al., 2007; Rosti et al., 2007a; Rosti et al., 2007b) is an extension of multiple string matching algorithm based on Levenshtein edit distance (Bangalore et al., 2001). The TER (translation error rate) score (Snover et al., 2006) measures the ratio of minimum number of string edits between a hypothesis and reference where the edits include insertions, deletions, substitutions and phrase shifts. The hypothesis is modified to match the reference, where a greedy search is used to select the set of shifts because an optimal sequence of edits (with shifts) is very expensive to find. The best alignment is the one that gives the minimum number of translation edits. TER-based method produces 1-to-1 word alignments. CLA-based: Chen et al. (2008) used competitive linking algorithm (CLA) (Melamed, 2000) to build confusion network for hypothesis regeneration. Firstly, an association score is computed for every possible word pair from the backbone and hypothesis to be aligned. Then a greedy algorithm is applied to select the best word alignment. We compute the association score from a linear combination of two clues: 943 surface similarity computed as Equation (2) and position difference based distortion score by following (He et al., 2008). CLA works under a 1to-1 assumption, so it produces 1-to-1 word alignments. IHMM-based: He et al. (2008) propose an indirect hidden Markov model (IHMM) for hypothesis alignment. Different from traditional HMM, this model estimates the parameters indirectly from various sources, such as word semantic similarity, surface similarity and distortion penalty, etc. For fair comparison reason, we also use the surface similarity computed as Equation (2) and position difference based distortion score which are used for CLA-based word alignment. IHMM-based method produces many-to-1 word alignments. 3.2 Intersection word alignment and its expansion In previous work, Matusov et al. (2006, 2008) used both direction word alignments to compute so-called state occupation probabilities and then compute the final word alignment. The other work usually used only one direction word alignment (many/1-to-1 from hypothesis to backbone). In this paper, we use more reliable word alignments which are derived from the intersection of both direct (hypothesis-to-backbone) and inverse (backbone-to-hypothesis) word alignments with heuristic-based expansion which is widely used in bilingual word alignment. The algorithm includes two steps: 1) Generate bi-directional word alignments. It is straightforward for GIZA++ and IHMM to generate bi-directional word alignments. 
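Before continuing with the bi-directional alignment generation, the greedy competitive linking step described above can be sketched as follows; the association scorer `assoc` (here a linear combination of surface similarity and a distortion score) and the cut-off are treated as given, illustrative interfaces.

```python
def competitive_linking(bb_words, hyp_words, assoc, min_score=0.0):
    """Greedy 1-to-1 alignment: repeatedly take the highest-scoring word pair whose
    backbone and hypothesis positions are both still unlinked.  `min_score` is an
    illustrative cut-off below which no link is created."""
    candidates = sorted(((assoc(b, h), i, j)
                         for i, b in enumerate(bb_words)
                         for j, h in enumerate(hyp_words)),
                        reverse=True)
    used_bb, used_hyp, links = set(), set(), []
    for score, i, j in candidates:
        if score < min_score:
            break
        if i not in used_bb and j not in used_hyp:
            links.append((i, j))
            used_bb.add(i)
            used_hyp.add(j)
    return links
```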
This is simply achieved by switching the parameters of source and target sentences. Due to the nature of greedy search in TER, the bi-directional TERbased word alignments by switching the parameters of source and target sentences are not necessary exactly the same. For example, in Figure 2, the word “shot” can be aligned to either “shoot” or “the” as the edit cost of word pair (shot, shoot) and (shot, the) are the same when compute the minimum-edit-distance for TER score. I shot killer I shoot the killer a I shoot the killer I shot killer b Figure 2: Example of two directions TER-based word alignments. For CLA word alignment, if we use the same association score, direct and inverse CLA word alignments should be exactly the same. Therefore, we use different functions to compute the surface similarities, such as using maximum common subsequence (MCS) to compute inverse word alignment, and using longest matched prefix (LMP) for computing direct word alignment, as in Equation (4). 2 ( ( , )) ( , ) ( ) ( ) j i j i j i len LMP e e S e e len e len e ′ × ′ = ′ + (4) 2) When two word alignments are ready, we start from the intersection of the two word alignments, and then continuously add new links between backbone and hypothesis if and only if both of the two words of the new link are unaligned and this link exists in the union of two word alignments. If there are more than two links share a same hypothesis or backbone word and also satisfy the constraints, we choose the link that with the highest similarity score. For example, in Figure 2, since MCS-based similarity scores ( , ) ( , ) S shot shoot S shot the > , we choose alignment (a). 4 Experiments and results 4.1 Tasks and single systems Experiments are carried out in two domains. One is in spoken language domain while the other is on newswire corpus. Both experiments are on Chinese-to-English translation. Experiments on spoken language domain were carried out on the Basic Traveling Expression Corpus (BTEC) (Takezawa et al., 2002) Chinese- to-English data augmented with HITcorpus1. BTEC is a multilingual speech corpus which contains sentences spoken by tourists. 40K sentence-pairs are used in our experiment. HIT-corpus is a balanced corpus and has 500K sentence-pairs in total. We selected 360K sentence-pairs that are more similar to BTEC data according to its sub-topic. Additionally, the English sentences of Tanaka corpus2 were also used to train our language model. We ran experiments on an IWSLT challenge task which uses IWSLT20063 DEV clean text set as development set and IWSLT-2006 TEST clean text as test set. 1 http://mitlab.hit.edu.cn/ 2 http://www.csse.monash.edu.au/~jwb/tanakacorpus.html 3 http:// www.slc.atr.jp/IWSLT2006/ 944 Experiments on newswire domain were carried out on the FBIS4 corpus. We used NIST5 2002 MT evaluation test set as our development set, and the NIST 2005 test set as our test set. Table 1 summarizes the statistics of the training, dev and test data for IWSLT and NIST tasks. task data Ch En IWSLT Train Sent. 406K Words 4.4M 4.6M Dev Sent. 489 489×7 Words 5,896 45,449 Test Sent. 500 500×7 Words 6,296 51,227 Add. Words - 1.7M NIST Train Sent. 238K Words 7.0M 8.9M Dev 2002 Sent. 878 878×4 Words 23,248 108,616 Test 2005 Sent. 1,082 1,082×4 Words 30,544 141,915 Add. Words - 61.5M Table 1: Statistics of training, dev and test data for IWSLT and NIST tasks. 
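Returning to step 2 of the intersection-with-expansion procedure above, a minimal sketch: `direct` and `inverse` are the two sets of (backbone position, hypothesis position) links, and `sim` scores a link by the surface similarity of its word pair; the interfaces are illustrative.

```python
def intersect_and_expand(direct, inverse, sim):
    """Start from the intersection of the two alignments, then add links from the
    union whose backbone and hypothesis words are both still unaligned.  Sorting the
    candidates by similarity first means that, when several candidates compete for
    the same word, the highest-scoring link wins."""
    alignment = direct & inverse
    aligned_bb = {i for i, _ in alignment}
    aligned_hyp = {j for _, j in alignment}
    for link in sorted((direct | inverse) - alignment, key=sim, reverse=True):
        i, j = link
        if i not in aligned_bb and j not in aligned_hyp:
            alignment.add(link)
            aligned_bb.add(i)
            aligned_hyp.add(j)
    return alignment
```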
In both experiments, we used four systems, as listed in Table 2, they are phrase-based system Moses (Koehn et al., 2007), hierarchical phrasebased system (Chiang, 2007), BTG-based lexicalized reordering phrase-based system (Xiong et al., 2006) and a tree sequence alignment-based tree-to-tree translation system (Zhang et al., 2008). Each system for the same task is trained on the same data set. 4.2 Experiments setting For each system, we used the top 10 scored hypotheses to build the confusion network. Similar to (Rosti et al., 2007a), each word in the hypothesis is assigned with a rank-based score of 1/ (1 )r + , where r is the rank of the hypothesis. And we assign the same weights to each system. For selecting the backbone, only the top hypothesis from each system is considered as a candidate for the backbone. Concerning the four alignment methods, we use the default setting for GIZA++; and use toolkit TERCOM (Snover et al., 2006) to compute the TER-based word alignment, and also use the default setting. For fair comparison reason, we 4 LDC2003E14 5 http://www.nist.gov/speech/tests/mt/ decide to do not use any additional resource, such as target language synonym list, IBM model lexicon; therefore, only surface similarity is applied in IHMM-based and CLA-based methods. We compute the distortion model by following (He et al., 2008) for IHMM and CLA-based methods. The weights for each model are optimized on held-out data. System Dev Test IWSLT Sys1 30.75 27.58 Sys2 30.74 28.54 Sys3 29.99 26.91 Sys4 31.32 27.48 NIST Sys1 25.64 23.59 Sys2 24.70 23.57 Sys3 25.89 22.02 Sys4 26.11 21.62 Table 2: Results (BLEU% score) of single systems involved to system combination. 4.3 Experiments results Our evaluation metric is BLEU (Papineni et al., 2002), which are to perform case-insensitive matching of n-grams up to n = 4. Performance comparison of four methods: the results based on direct word alignments are reported in Table 3, row Best is the best single systems’ scores; row MBR is the scores of backbone; GIZA++, TER, CLA, IHMM stand for scores of systems for four word alignment methods. z MBR decoding slightly improves the performance over the best single system for both tasks. This suggests that the simple voting strategy to select backbone is workable. z For both tasks, all methods improve the performance over the backbone. For IWSLT test set, the improvements are from 2.06 (CLA, 30.8828.82) to 2.52 BLEU-score (IHMM, 31.3428.82). For NIST test set, the improvements are from 0.63 (TER, 24.31-23.68) to 1.40 BLEUscore (IHMM, 25.08-23.68). This verifies that the confusion network decoding is effective in combining outputs from multiple MT systems and the four word-alignment methods are also workable for hypothesis-to-backbone alignment. z For IWSLT task where source sentences are shorter (12-13 words per sentence in average), the four word alignment methods achieve similar performance on both dev and test set. The biggest difference is only 0.46 BLEU score (30.88 for CLA, vs. 31.34 for IHMM). For NIST task 945 where source sentences are longer (26-28 words per sentence in average), the difference is more significant. Here IHMM method achieves the best performance, followed by GIZA++, CLA and TER. IHMM is significantly better than TER by 0.77 BLEU-score (from 24.31 to 25.08, p<0.05). This is mainly because IHMM exploits more knowledge source and Viterbi decoding allows more thorough search for the best alignment while other methods use less optimal greedy search. 
Another reason is that TER uses hard matching in computing edit distance. method Dev Test IWSLT Best 31.32 28.54 MBR 31.40 28.82 GIZA++ 34.16 31.06 TER 33.92 30.96 CLA 33.85 30.88 IHMM 34.35 31.34 NIST Best 26.11 23.59 MBR 26.36 23.68 GIZA++ 27.58 24.88 TER 27.15 24.31 CLA 27.44 24.51 IHMM 27.76 25.08 Table 3: Results (BLEU% score) of combined systems based on direct word alignments. Performance improvement by intersection word alignment: Table 4 reports the performance of the system combinations based on intersection word alignments. It shows that: z Comparing Tables 3 and 4, we can see that the intersection word alignment-based expansion method improves the performance in all the dev and test sets for both tasks by 0.2-0.57 BLEUscore and the improvements are consistent under all conditions. This suggests that the intersection word alignment-based expansion method is more effective than the commonly used direct wordalignment-based hypothesis alignment method in confusion network-based MT system combination. This is because intersection word alignments are more reliable compared with direct word alignments, and so for heuristic-based expansion which is based on the aligned words with higher scores. z TER-based method achieves the biggest performance improvement by 0.4 BLEU-score in IWSLT and 0.57 in NIST. Our statistics shows that the TER-based word alignment generates more inconsistent links between the twodirectional word alignments than other methods. This may give the intersection with heuristicbased expansion method more room to improve performance. z On the contrast, CLA-based method obtains relatively small improvement of 0.26 BLEUscore in IWSLT and 0.21 in NIST. The reason could be that the similarity functions used in the two directions are more similar. Therefore, there are not so many inconsistent links between the two directions. z Table 5 shows the number of links modified by intersection operation and the BLEU-score improvement. We can see that the more the modified links, the bigger the improvement. method Dev Test IWSLT MBR 31.40 28.82 GIZA++ 34.38 31.40 TER 34.17 31.36 CLA 34.03 31.14 IHMM 34.59 31.74 NIST MBR 26.36 23.68 GIZA++ 27.80 25.11 TER 27.58 24.88 CLA 27.64 24.72 IHMM 27.96 25.37 Table 4: Results (BLEU% score) of combined systems based on intersection word alignments. system IWSLT NIST Inc. Imp. Inc. Imp. CLA 1.2K 0.26 9.2K 0.21 GIZA++ 3.2K 0.36 25.5K 0.23 IHMM 3.7K 0.40 21.7K 0.29 TER 4.3K 0.40 40.2K 0.57 #total links 284K 1,390K Table 5: Number of modified links and absolute BLEU(%) score improvement on test sets. Effect of fuzzy matching in TER: the previous work on TER-based word alignment uses hard match in counting edits distance. Therefore, it is not able to handle cognate words match, such as in Figure 2, original TER script count the edit cost of (shoot, shot) equals to word pair (shot, the). Following (Leusch et al., 2006), we modified the TER script to allow fuzzy matching: change the substitution cost from 1 for any word pair to 946 ( , ) 1 ( , ) sub j i j i COST e e S e e ′ ′ = − (5) which ( , ) j i S e e ′ is the similarity score based on the length of longest matched prefix (LMP) computed as in Equation (4). As a result, the fuzzy matching reports ( , ) 1 (2 3)/(5 4) 1/3 SubCost shoot shot = − × + = and ( , ) 1 (2 0)/(5 3) 1 SubCost shoot the = − × + = while in original TER, both of the two scores are equal to 1. 
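The fuzzy substitution cost of equation (5), with the LMP-based similarity of equation (4), can be sketched as below; it reproduces the SubCost(shoot, shot) = 1/3 and SubCost(shoot, the) = 1 example.

```python
def lmp_len(a, b):
    """Length of the longest matched prefix of two words."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def fuzzy_sub_cost(hyp_word, bb_word):
    """Equation (5): COST_sub(e'_j, e_i) = 1 - S(e'_j, e_i), with S computed from the
    longest matched prefix as in equation (4)."""
    if not hyp_word or not bb_word:
        return 1.0
    sim = 2.0 * lmp_len(hyp_word, bb_word) / (len(hyp_word) + len(bb_word))
    return 1.0 - sim

assert abs(fuzzy_sub_cost("shoot", "shot") - 1.0 / 3.0) < 1e-9
assert fuzzy_sub_cost("shoot", "the") == 1.0
```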
Since cost of word pair (shoot, shot) is smaller than that of word pair (shot, the), word “shot” has higher chance to be aligned to “shoot” (Figure 2 (a)) instead of “the” (Figure 2 (b)). This fuzzy matching mechanism is very useful to such kind of monolingual alignment task as in hypothesis-to-backbone word alignment since it can well model word variances and morphological changes. Table 6 summaries the results of TER-based systems with or without fuzzy matching. We can see that the fuzzy matching improves the performance for all cases. This verifies the effect of fuzzy matching for TER in monolingual word alignment. In addition, the improvement in NIST test set (0.36 BLEU-score for direct alignment and 0.21 BLEU-score for intersection one) are more than that in IWSLT test set (0.15 BLEUscore for direct alignment and 0.11 BLEU-score for intersection one). This is because the sentences of IWSLT test set are much shorter than that of NIST test set. TER-based systems IWSLT NIST Dev Test Dev Test Direct align +fuzzy match 33.92 34.14 30.96 31.11 27.15 27.53 24.31 24.67 Intersect align +fuzzy match 34.17 34.40 31.36 31.47 27.58 27.79 24.88 25.09 Table 6: Results (BLEU% score) of TER-based combined systems with or without fuzzy match. 5 Conclusion Confusion-network-based system combination shows better performance than other methods in combining multiple MT systems’ outputs, and hypothesis alignment is a key step. In this paper, we first compare four word alignment methods for hypothesis alignment under the confusion network framework. We verify that the confusion network framework is very effective in MT system combination and IHMM achieves the best performance. Moreover, we propose an intersection word alignment-based expansion method for hypothesis alignment, which is more reliable as it leverages on both direct and inverse word alignment. Experimental results on Chinese-toEnglish spoken and newswire domains show that the intersection word alignment-based method yields consistent improvements across all four word alignment methods. Finally, we evaluate the effect of fuzzy matching for TER. Theoretically, confusion network decoding is still a word-level voting algorithm although it is more complicated than other sentence-level voting algorithms. It changes lexical selection by considering the posterior probabilities of words in hypothesis lists. Therefore, like other voting algorithms, its performance strongly depends on the quality of the n-best hypotheses of each single system. In some extreme cases, it may not be able to improve BLEU-score (Mauser et al., 2006; Sim et al., 2007). References N. F. Ayan. J. Zheng and W. Wang. 2008. Improving Alignments for Better Confusion Networks for Combining Machine Translation Systems. In Proceedings of COLING 2008, pp. 33–40. Manchester, Aug. S. Bangalore, G. Bordel, and G. Riccardi. 2001. Computing consensus translation from multiple machine translation systems. In Proceeding of IEEE workshop on Automatic Speech Recognition and Understanding, pp. 351–354. Madonna di Campiglio, Italy. B. Chen, R. Cattoni, N. Bertoldi, M. Cettolo and M. Federico. 2005. The ITC-irst SMT System for IWSLT-2005. In Proceeding of IWSLT-2005, pp.98-104, Pittsburgh, USA, October. B. Chen, M. Zhang, A. Aw and H. Li. 2008. Regenerating Hypotheses for Statistical Machine Translation. In: Proceeding of COLING 2008. pp105-112. Manchester, UK. Aug. D. Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. C. Fellbaum. editor. 1998. 
WordNet: An Electronic Lexical Database. MIT Press. X. He, M. Yang, J. Gao, P. Nguyen, R. Moore, 2008. Indirect-HMM-based Hypothesis Alignment for Combining Outputs from Machine Translation Systems. In Proceeding of EMNLP. Hawaii, US, Oct. F. Huang and K. Papinent. 2007. Hierarchical System Combination for Machine Translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and 947 Computational Natural Language Learning (EMNLP-CoNLL’2007), pp. 277 – 286, Prague, Czech Republic, June. S. Jayaraman and A. Lavie. 2005. Multi-engine machine translation guided by explicit word matching. In Proceeding of EAMT. pp.143–152. D. Karakos, J. Eisner, S. Khudanpur, and M. Dreyer. 2008. Machine Translation System Combination using ITG-based Alignments. In Proceeding of ACL-HLT 2008, pp. 81–84. O. Kraif, B. Chen. 2004. Combining clues for lexical level aligning using the Null hypothesis approach. In: Proceedings of COLING 2004, Geneva, August, pp. 1261-1264. P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin and E. Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of ACL-2007. pp. 177-180, Prague, Czech Republic. S. Kumar and W. Byrne. 2004. Minimum Bayes Risk Decoding for Statistical Machine Translation. In Proceedings of HLT-NAACL 2004, May 2004, Boston, MA, USA. G. Leusch, N. Ueffing and H. Ney. 2006. CDER: Efficient MT Evaluation Using Block Movements. In Proceedings of EACL. pp. 241-248. Trento Italy. E. Matusov, N. Ueffing, and H. Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In Proceeding of EACL, pp. 33-40, Trento, Italy, April. E. Matusov, G. Leusch, R. E. Banchs, N. Bertoldi, D. Dechelotte, M. Federico, M. Kolss, Y. Lee, J. B. Marino, M. Paulik, S. Roukos, H. Schwenk, and H. Ney. System Combination for Machine Translation of Spoken and Written Language. IEEE Transactions on Audio, Speech and Language Processing, volume 16, number 7, pp. 1222-1237, September. A. Mauser, R. Zens, E. Matusov, S. Hasan, and H. Ney. 2006. The RWTH Statistical Machine Translation System for the IWSLT 2006 Evaluation. In Proceeding of IWSLT 2006, pp. 103-110, Kyoto, Japan, November. I. D. Melamed. 2000. Models of translational equivalence among words. Computational Linguistics, 26(2), pp. 221-249. F. J. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL2003. Sapporo, Japan. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19-51. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceeding of ACL-2002, pp. 311-318. A. I. Rosti, N. F. Ayan, B. Xiang, S. Matsoukas, R. Schwartz and B. Dorr. 2007a. Combining Outputs from Multiple Machine Translation Systems. In Proceeding of NAACL-HLT-2007, pp. 228-235. Rochester, NY. A. I. Rosti, S. Matsoukas and R. Schwartz. 2007b. Improved Word-Level System Combination for Ma-chine Translation. In Proceeding of ACL-2007, Prague. A. I. Rosti, B. Zhang, S. Matsoukas, and R. Schwartz. 2008. Incremental Hypothesis Alignment for Building Confusion Networks with Application to Machine Translation System Combination, In Proceeding of the Third ACL Workshop on Statistical Machine Translation, pp. 183-186. K. C. Sim, W. J. Byrne, M. J.F. Gales, H. 
Sahbi, and P. C. Woodland. 2007. Consensus network decoding for statistical machine translation system combination. In Proceeding of ICASSP-2007. M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceeding of AMTA. T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto. 2002. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. In Proceeding of LREC-2002, Las Palmas de Gran Canaria, Spain. D. Xiong, Q. Liu and S. Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proceeding of ACL-2006. pp.521-528. R. Zens and H. Ney. 2006. N-gram Posterior Probabilities for Statistical Machine Translation. In Proceeding of HLT-NAACL Workshop on SMT, pp. 72-77, NY. M. Zhang, H. Jiang, A. Aw, H. Li, C. L. Tan, and S. Li. 2008. A Tree Sequence Alignment-based Treeto-Tree Translation Model. In Proceeding of ACL2008. Columbus, US. June. Y. Zhang, S. Vogel, and A. Waibel 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? In Proceedings of LREC 2004, pp. 2051-2054. * The first author has moved to National Research Council, Canada. His current email address is: [email protected]. 948
2009
106
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 949–957, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Incremental HMM Alignment for MT System Combination Chi-Ho Li Microsoft Research Asia 49 Zhichun Road, Beijing, China [email protected] Xiaodong He Microsoft Research One Microsoft Way, Redmond, USA [email protected] Yupeng Liu Harbin Institute of Technology 92 Xidazhi Street, Harbin, China [email protected] Ning Xi Nanjing University 8 Hankou Road, Nanjing, China [email protected] Abstract Inspired by the incremental TER alignment, we re-designed the Indirect HMM (IHMM) alignment, which is one of the best hypothesis alignment methods for conventional MT system combination, in an incremental manner. One crucial problem of incremental alignment is to align a hypothesis to a confusion network (CN). Our incremental IHMM alignment is implemented in three different ways: 1) treat CN spans as HMM states and define state transition as distortion over covered ngrams between two spans; 2) treat CN spans as HMM states and define state transition as distortion over words in component translations in the CN; and 3) use a consensus decoding algorithm over one hypothesis and multiple IHMMs, each of which corresponds to a component translation in the CN. All these three approaches of incremental alignment based on IHMM are shown to be superior to both incremental TER alignment and conventional IHMM alignment in the setting of the Chinese-to-English track of the 2008 NIST Open MT evaluation. 1 Introduction Word-level combination using confusion network (Matusov et al. (2006) and Rosti et al. (2007)) is a widely adopted approach for combining Machine Translation (MT) systems’ output. Word alignment between a backbone (or skeleton) translation and a hypothesis translation is a key problem in this approach. Translation Edit Rate (TER, Snover et al. (2006)) based alignment proposed in Sim et al. (2007) is often taken as the baseline, and a couple of other approaches, such as the Indirect Hidden Markov Model (IHMM, He et al. (2008)) and the ITG-based alignment (Karakos et al. (2008)), were recently proposed with better results reported. With an alignment method, each hypothesis is aligned against the backbone and all the alignments are then used to build a confusion network (CN) for generating a better translation. However, as pointed out by Rosti et al. (2008), such a pair-wise alignment strategy will produce a low-quality CN if there are errors in the alignment of any of the hypotheses, no matter how good the alignments of other hypotheses are. For example, suppose we have the backbone “he buys a computer” and two hypotheses “he bought a laptop computer” and “he buys a laptop”. It will be natural for most alignment methods to produce the alignments in Figure 1a. The alignment of hypothesis 2 against the backbone cannot be considered an error if we consider only these two translations; nevertheless, when added with the alignment of another hypothesis, it produces the low-quality CN in Figure 1b, which may generate poor translations like “he bought a laptop laptop”. While it could be argued that such poor translations are unlikely to be selected due to language model, this CN does disperse the votes to the word “laptop” to two distinct arcs. Rosti et al. (2008) showed that this problem can be rectified by incremental alignment. 
If hypothesis 1 is first aligned against the backbone, the CN thus produced (depicted in Figure 2a) is then aligned to hypothesis 2, giving rise to the good CN as depicted in Figure 2b.1 On the other hand, the 1Note that this CN may generate an incomplete sentence “he bought a”, which is nevertheless unlikely to be selected as it leads to low language model score. 949 Figure 1: An example bad confusion network due to pair-wise alignment strategy correct result depends on the order of hypotheses. If hypothesis 2 is aligned before hypothesis 1, the final CN will not be good. Therefore, the observation in Rosti et al. (2008) that different order of hypotheses does not affect translation quality is counter-intuitive. This paper attempts to answer two questions: 1) as incremental TER alignment gives better performance than pair-wise TER alignment, would the incremental strategy still be better than the pairwise strategy if the TER method is replaced by another alignment method? 2) how does translation quality vary for different orders of hypotheses being incrementally added into a CN? For question 1, we will focus on the IHMM alignment method and propose three different ways of implementing incremental IHMM alignment. Our experiments will also try several orders of hypotheses in response to question 2. This paper is structured as follows. After setting the notations on CN in section 2, we will first introduce, in section 3, two variations of the basic incremental IHMM model (IncIHMM1 and IncIHMM2). In section 4, a consensus decoding algorithm (CD-IHMM) is proposed as an alternative way to search for the optimal alignment. The issues of alignment normalization and the order of hypotheses being added into a CN are discussed in sections 5 and 6 respectively. Experiment results and analysis are presented in section 7. Figure 2: An example good confusion network due to incremental alignment strategy 2 Preliminaries: Notation on Confusion Network Before the elaboration of the models, let us first clarify the notation on CN. A CN is usually described as a finite state graph with many spans. Each span corresponds to a word position and contains several arcs, each of which represents an alternative word (could be the empty symbol , ϵ) at that position. Each arc is also associated with M weights in an M-way system combination task. Follow Rosti et al. (2007), the i-th weight of an arc is P r 1 1+r, where r is the rank of the hypothesis in the i-th system that votes for the word represented by the arc. This conception of CN is called the conventional or compact form of CN. The networks in Figures 1b and 2b are examples. On the other hand, as a CN is an integration of the skeleton and all hypotheses, it can be conceived as a list of the component translations. For example, the CN in Figure 2b can be converted to the form in Figure 3. In such an expanded or tabular form, each row represents a component translation. Each column, which is equivalent to a span in the compact form, comprises the alternative words at a word position. Thus each cell represents an alternative word at certain word position voted by certain translation. Each row is assigned the weight 1 1+r, where r is the rank of the translation of some MT system. It is assumed that all MT systems are weighted equally and thus the 950 Figure 3: An example of confusion network in tabular form rank-based weights from different system can be compared to each other without adjustment. 
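A minimal data structure for the tabular form just described might look as follows; normalizing p_i(k) over the rows of a span is our reading of the definition above (the extracted formula is garbled), and the class interface is purely illustrative.

```python
from dataclasses import dataclass
from typing import List

EPS = "<eps>"  # the empty symbol

@dataclass
class TabularCN:
    """Tabular confusion network: row k is the component translation E(k), padded
    with the empty symbol so that all rows have equal length; weights[k] = W(k) is
    the rank-based weight 1/(1+r) of that translation."""
    rows: List[List[str]]       # rows[k][i] is the cell E_i(k)
    weights: List[float]        # weights[k] is W(k) = W_i(k) for every column i

    def span(self, i):
        """Span E_i: the column of alternative words at position i."""
        return [row[i] for row in self.rows]

    def cell_prob(self, i, k):
        """p_i(k): W_i(k) normalized over the cells of span i (i.e. over rows k)."""
        return self.weights[k] / sum(self.weights)
```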
The weight of a cell is the same as the weight of the corresponding row. In this paper the elaboration of the incremental IHMM models is based on such tabular form of CN. Let EI 1 = (E1 . . . EI) denote the backbone CN, and e′J 1 = (e′ 1 . . . e′ J) denote a hypothesis being aligned to the backbone. Each e′ j is simply a word in the target language. However, each Ei is a span, or a column, of the CN. We will also use E(k) to denote the k-th row of the tabular form CN, and Ei(k) to denote the cell at the k-th row and the i-th column. W(k) is the weight for E(k), and Wi(k) = W(k) is the weight for Ei(k). pi(k) is the normalized weight for the cell Ei(k), such that pi(k) = Wi(k) P i Wi(k). Note that E(k) contains the same bag-of-words as the k-th original translation, but may have different word order. Note also that E(k) represents a word sequence with inserted empty symbols; the sequence with all inserted symbols removed is known as the compact form of E(k). 3 The Basic IncIHMM Model A na¨ıve application of the incremental strategy to IHMM is to treat a span in the CN as an HMM state. Like He et al. (2008), the conditional probability of the hypothesis given the backbone CN can be decomposed into similarity model and distortion model in accordance with equation 1 p(e′J 1 |EI 1) = X aJ 1 J Y j=1 [p(aj|aj−1, I)p(e′ j|eaj)] (1) The similarity between a hypothesis word e′ j and a span Ei is simply a weighted sum of the similarities between e′ j and each word contained in Ei as equation 2: p(e′ j|Ei) = X Ei(k)ϵEi pi(k) · p(e′ j|Ei(k)) (2) The similarity between two words is estimated in exactly the same way as in conventional IHMM alignment. As to the distortion model, the incremental IHMM model also groups distortion parameters into a few ‘buckets’: c(d) = (1 + |d −1|)−K The problem in incremental IHMM is when to apply a bucket. In conventional IHMM, the transition from state i to j has probability: p′(j|i, I) = c(j −i) PI l=1 c(l −i) (3) It is tempting to apply the same formula to the transitions in incremental IHMM. However, the backbone in the incremental IHMM has a special property that it is gradually expanding due to the insertion operator. For example, initially the backbone CN contains the option ei in the i-th span and the option ei+1 in the (i+1)-th span. After the first round alignment, perhaps ei is aligned to the hypothesis word e′ j, ei+1 to e′ j+2, and the hypothesis word e′ j+1 is left unaligned. Then the consequent CN have an extra span containing the option e′ j+1 inserted between the i-th and (i + 1)-th spans of the initial CN. If the distortion buckets are applied as in equation 3, then in the first round alignment, the transition from the span containing ei to that containing ei+1 is based on the bucket c(1), but in the second round alignment, the same transition will be based on the bucket c(2). It is therefore not reasonable to apply equation 3 to such gradually extending backbone as the monotonic alignment assumption behind the equation no longer holds. There are two possible ways to tackle this problem. The first solution estimates the transition probability as a weighted average of different distortion probabilities, whereas the second solution converts the distortion over spans to the distortion over the words in each hypothesis E(k) in the CN. 3.1 Distortion Model 1: simple weighting of covered n-grams Distortion Model 1 shifts the monotonic alignment assumption from spans of CN to n-grams covered by state transitions. 
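The span-level emission of equation (2) and the bucketed distortion of equation (3) can be sketched as follows; the word-level similarity model `word_sim` and the bucket exponent K are assumptions (K is a tuned constant in practice).

```python
def span_emission(hyp_word, span_cells, cell_probs, word_sim):
    """Equation (2): p(e'_j | E_i) = sum_k p_i(k) * p(e'_j | E_i(k)), i.e. the
    weighted sum of word similarities over the cells of span E_i."""
    return sum(p * word_sim(hyp_word, cell)
               for cell, p in zip(span_cells, cell_probs))

def bucket(d, K=2.0):
    """Distortion bucket c(d) = (1 + |d - 1|)^(-K)."""
    return (1.0 + abs(d - 1)) ** (-K)

def ihmm_transition(i, j, I, K=2.0):
    """Conventional IHMM transition of equation (3): p'(j | i, I) = c(j-i) / sum_l c(l-i),
    with l ranging over the I backbone states."""
    return bucket(j - i, K) / sum(bucket(l - i, K) for l in range(1, I + 1))
```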
Let us illustrate this point with the following examples. In conventional IHMM, the distortion probability p′(i + 1|i, I) is applied to the transition from state i to i+1 given I states because such transition 951 jumps across only one word, viz. the i-th word of the backbone. In incremental IHMM, suppose the i-th span covers two arcs ea and ϵ, with probabilities p1 and p2 = 1 −p1 respectively, then the transition from state i to i + 1 jumps across one word (ea) with probability p1 and jumps across nothing with probability p2. Thus the transition probability should be p1 · p′(i + 1|i, I) + p2 · p′(i|i, I). Suppose further that the (i + 1)-th span covers two arcs eb and ϵ, with probabilities p3 and p4 respectively, then the transition from state i to i + 2 covers 4 possible cases: 1. nothing (ϵϵ) with probability p2 · p4; 2. the unigram ea with probability p1 · p4; 3. the unigram eb with probability p2 · p3; 4. the bigram eaeb with probability p1 · p3. Accordingly the transition probability should be p2p4p′(i|i, I) + p1p3p′(i + 2|i, I) + (p1p4 + p2p3)p′(i + 1|i, I). The estimation of transition probability can be generalized to any transition from i to i′ by expanding all possible n-grams covered by the transition and calculating the corresponding probabilities. We enumerate all possible cell sequences S(i, i′) covered by the transition from span i to i′; each sequence is assigned the probability P i′ i = i′−1 Y q=i pq(k). where the cell at the i′-th span is on some row E(k). Since a cell may represent an empty word, a cell sequence may represent an n-gram where 0 ≤n ≤i′ −i (or 0 ≤n ≤i −i′ in backward transition). We denote |S(i, i′)| to be the length of n-gram represented by a particular cell sequence S(i, i′). All the cell sequences S(i, i′) can be classified, with respect to the length of corresponding n-grams, into a set of parameters where each element (with a particular value of n) has the probability P i′ i (n; I) = X |S(i,i′)|=n P i′ i . The probability of the transition from i to i′ is: p(i′|i, I) = X n [P i′ i (n; I) · p′(i + n|i, I)]. (4) That is, the transition probability of incremental IHMM is a weighted sum of probabilities of ‘ngram jumping’, defined as conventional IHMM distortion probabilities. However, in practice it is not feasible to expand all possible n-grams covered by any transition since the number of n-grams grows exponentially. Therefore a length limit L is imposed such that for all state transitions where |i′ −i| ≤L, the transition probability is calculated as equation 4, otherwise it is calculated by: p(i′|i, I) = max q p(i′|q, I) · p(q|i, I) for some q between i and i′. In other words, the probability of longer state transition is estimated in terms of the probabilities of transitions shorter or equal to the length limit.2 All the state transitions can be calculated efficiently by dynamic programming. A fixed value P0 is assigned to transitions to null state, which can be optimized on held-out data. The overall distortion model is: ˜p(j|i, I) = ( P0 if j is null state (1 −P0)p(j|i, I) otherwise 3.2 Distortion Model 2: weighting of distortions of component translations The cause of the problem of distortion over CN spans is the gradual extension of CN due to the inserted empty words. Therefore, the problem will disappear if the inserted empty words are removed. The rationale of Distortion Model 2 is that the distortion model is defined over the actual word sequence in each component translation E(k). 
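Before turning to Distortion Model 2, the weighted sum of equation (4) can be sketched as below. Since each covered span contributes either its word or the empty word independently, the distribution over covered n-gram lengths is a simple convolution; `base_transition(i, j, I)` stands for the conventional p'(j | i, I) (e.g. the `ihmm_transition` sketched earlier), and the indexing is illustrative (forward transitions only, no length limit L).

```python
def ngram_length_dist(word_mass_per_span):
    """P(n): probability that the cell sequence covered by a transition spells an
    n-gram of length n.  Each entry of `word_mass_per_span` is the total p_i(k)
    mass of the non-empty cells in one covered span."""
    dist = [1.0]                                   # no span covered -> n = 0
    for p_word in word_mass_per_span:
        new = [0.0] * (len(dist) + 1)
        for n, p in enumerate(dist):
            new[n] += p * (1.0 - p_word)           # this span contributes the empty word
            new[n + 1] += p * p_word               # this span contributes a real word
        dist = new
    return dist

def dm1_transition(i, i_prime, I, word_mass_per_span, base_transition):
    """Distortion Model 1, equation (4): p(i'|i, I) as a weighted sum of the
    conventional 'n-word jump' probabilities p'(i + n | i, I)."""
    covered = word_mass_per_span[i:i_prime]        # spans jumped over by i -> i'
    return sum(p_n * base_transition(i, i + n, I)
               for n, p_n in enumerate(ngram_length_dist(covered)))
```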
Distortion Model 2 implements a CN in such a way that the real position of the i-th word of the kth component translation can always be retrieved. The real position of Ei(k), δ(i, k), refers to the position of the word represented by Ei(k) in the compact form of E(k) (i.e. the form without any inserted empty words), or, if Ei(k) represents an empty word, the position of the nearest preceding non-empty word. For convenience, we also denote by δϵ(i, k) the null state associated with the state of the real word δ(i, k). Similarly, the real length 2This limit L is also imposed on the parameter I in distortion probability p′(i′|i, I), because the value of I is growing larger and larger during the incremental alignment process. I is defined as L if I > L. 952 of E(k), L(k), refers to the number of non-empty words of E(k). The transition from span i′ to i is then defined as p(i|i′) = 1 P k W(k) X k [W(k) · pk(i|i′)] (5) where k is the row index of the tabular form CN. Depending on Ei(k) and Ei′(k), pk(i|i′) is computed as follows: 1. if both Ei(k) and Ei′(k) represent real words, then pk(i|i′) = p′(δ(i, k)|δ(i′, k), L(k)) where p′ refers to the conventional IHMM distortion probability as defined by equation 3. 2. if Ei(k) represents a real word but Ei′(k) the empty word, then pk(i|i′) = p′(δ(i, k)|δϵ(i′, k), L(k)) Like conventional HMM-based word alignment, the probability of the transition from a null state to a real word state is the same as that of the transition from the real word state associated with that null state to the other real word state. Therefore, p′(δ(i, k)|δϵ(i′, k), L(k)) = p′(δ(i, k)|δ(i′, k), L(k)) 3. if Ei(k) represents the empty word but Ei′(k) a real word, then pk(i|i′) = ( P0 ifδ(i, k) = δ(i′, k) P0Pδ(i|i′; k) otherwise where Pδ(i|i′; k) = p′(δ(i, k)|δ(i′, k), L(k)). The second option is due to the constraint that a null state is accessible only to itself or the real word state associated with it. Therefore, the transition from i′ to i is in fact composed of the first transition from i′ to δ(i, k) and the second transition from δ(i, k) to the null state at i. 4. if both Ei(k) and Ei′(k) represent the empty word, then, with similar logic as cases 2 and 3, pk(i|i′) = ( P0 ifδ(i, k) = δ(i′, k) P0Pδ(i|i′; k) otherwise 4 Incremental Alignment using Consensus Decoding over Multiple IHMMs The previous section describes an incremental IHMM model in which the state space is based on the CN taken as a whole. An alternative approach is to conceive the rows (component translations) in the CN as individuals, and transforms the alignment of a hypothesis against an entire network to that against the individual translations. Each individual translation constitutes an IHMM and the optimal alignment is obtained from consensus decoding over these multiple IHMMs. Alignment over multiple sequential patterns has been investigated in different contexts. For example, Nair and Sreenivas (2007) proposed multipattern dynamic time warping (MPDTW) to align multiple speech utterances to each other. However, these methods usually assume that the alignment is monotonic. In this section, a consensus decoding algorithm that searches for the optimal (non-monotonic) alignment between a hypothesis and a set of translations in a CN (which are already aligned to each other) is developed as follows. A prerequisite of the algorithm is a function for converting a span index to the corresponding HMM state index of a component translation. 
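Before defining that conversion function, Distortion Model 2 itself (equation 5 together with the four cases above) can be sketched as follows. The sketch assumes 1-based real positions with a leading empty word mapped to position 1, and `base_distortion` again stands in for the conventional IHMM distortion p'.

```python
EPS = "<eps>"

def real_positions(row_words):
    """delta(i, k): for each column, the position of the word in the compact
    (epsilon-free) form of E(k), or of the nearest preceding real word."""
    delta, pos = [], 0
    for w in row_words:
        if w != EPS:
            pos += 1
        delta.append(max(pos, 1))   # assumption: a leading epsilon maps to 1
    return delta

def dm2_transition(rows, weights, i, i_prime, base_distortion, p0=0.1):
    """Equation 5: a weighted average over rows of row-level distortions.
    rows[k] is the word sequence of E(k) over the CN columns; weights[k]=W(k)."""
    score = 0.0
    for k, row in enumerate(rows):
        delta = real_positions(row)
        length_k = sum(1 for w in row if w != EPS)        # real length L(k)
        if row[i] != EPS:                                 # cases 1 and 2
            p_k = base_distortion(delta[i], delta[i_prime], length_k)
        elif delta[i] == delta[i_prime]:                  # cases 3/4: same null state
            p_k = p0
        else:                                             # cases 3/4: move, then null
            p_k = p0 * base_distortion(delta[i], delta[i_prime], length_k)
        score += weights[k] * p_k
    return score / sum(weights)
```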
The two functions δ and δϵ s defined in section 3.2 are used to define a new function: ¯δ(i, k) = ( δϵ(i, k) if Ei(k) is null δ(i, k) otherwise Accordingly, given the alignment aJ 1 = a1 . . . aJ of a hypothesis (with J words) against a CN (where each aj is an index referring to the span of the CN), we can obtain the alignment ˜ak = ¯δ(a1, k) . . . ¯δ(aJ, k) between the hypothesis and the k-th row of the tabular CN. The real length function L(k) is also used to obtain the number of non-empty words of E(k). Given the k-th row of a CN, E(k), an IHMM λ(k) is formed and the cost of the pair-wise alignment, ˜ak, between a hypothesis h and λ(k) is defined as: C( ˜ak; h, λ(k)) = −log P(˜ak|h, λ(k)) (6) The cost of the alignment of h against a CN is then defined as the weighted sum of the costs of the K alignments ˜ak: C(a; h, Λ) = X k W(k)C(˜ak; h, λ(k)) 953 = − X k W(k) log P(˜ak|h, λ(k)) where Λ = {λ(k)} is the set of pair-wise IHMMs, and W(k) is the weight of the k-th row. The optimal alignment ˆa is the one that minimizes this cost: ˆa = arg max a X k W(k) log P(˜ak|h, λ(k)) = arg max a X k W(k)[ X j [ log P(¯δ(aj, k)|¯δ(aj−1, k), L(k)) + log P(ej|Ei(k))]] = arg max a X j [ X k W(k) log P(¯δ(aj, k)|¯δ(aj−1, k), L(k)) + X k W(k) log P(ej|Ei(k))] = arg max a X j [log P ′(aj|aj−1) + log P ′(ej|Eaj)] A Viterbi-like dynamic programming algorithm can be developed to search for ˆa by treating CN spans as HMM states, with a pseudo emission probability as P ′(ej|Eaj) = K Y k=1 P(ej|Eaj(k))W(k) and a pseudo transition probability as P ′(j|i) = K Y k=1 P(¯δ(j, k)|¯δ(i, k), L(k))W(k) Note that P ′(ej|Eaj) and P ′(j|i) are not true probabilities and do not have the sum-to-one property. 5 Alignment Normalization After alignment, the backbone CN and the hypothesis can be combined to form an even larger CN. The same principles and heuristics for the construction of CN in conventional system combination approaches can be applied. Our incremental alignment approaches adopt the same heuristics for alignment normalization stated in He et al. (2008). There is one exception, though. All 1N mappings are not converted to N −1 ϵ-1 mappings since this conversion leads to N −1 insertion in the CN and therefore extending the network to an unreasonable length. The Viterbi alignment is abandoned if it contains an 1-N mapping. The best alignment which contains no 1-N mapping is searched in the N-Best alignments in a way inspired by Nilsson and Goldberger (2001). For example, if both hypothesis words e′ 1 and e′ 2 are aligned to the same backbone span E1, then all alignments aj={1,2} = i (where i ̸= 1) will be examined. The alignment leading to the least reduction of Viterbi probability when replacing the alignment aj={1,2} = 1 will be selected. 6 Order of Hypotheses The default order of hypotheses in Rosti et al. (2008) is to rank the hypotheses in descending of their TER scores against the backbone. This paper attempts several other orders. The first one is system-based order, i.e. assume an arbitrary order of the MT systems and feeds all the translations (in their original order) from a system before the translations from the next system. The rationale behind the system-based order is that the translations from the same system are much more similar to each other than to the translations from other systems, and it might be better to build CN by incorporating similar translations first. 
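Before continuing with the remaining orders of hypotheses, the consensus-decoding scores of section 4 can be sketched in log space. The pseudo emission and transition scores are weighted products over the pair-wise IHMMs; the names `emit_prob` and `distortion` are placeholders for the pair-wise model components, not part of the model definition.

```python
import math

def pseudo_emission(hyp_word, cells, weights, emit_prob):
    """log P'(e_j | E_aj) = sum_k W(k) * log P(e_j | E_aj(k));
    cells[k] = E_aj(k), emit_prob taken from the pair-wise IHMM."""
    return sum(w_k * math.log(emit_prob(hyp_word, cell))
               for w_k, cell in zip(weights, cells))

def pseudo_transition(span_j, span_i, bar_delta, real_len, weights, distortion):
    """log P'(j | i) = sum_k W(k) * log P(bar_delta(j,k) | bar_delta(i,k), L(k));
    bar_delta[k] maps a span index to the HMM state index in row k."""
    return sum(w_k * math.log(distortion(bar_delta[k][span_j],
                                         bar_delta[k][span_i],
                                         real_len[k]))
               for k, w_k in enumerate(weights))
```

A Viterbi search over the CN spans that uses these two scores in place of true probabilities recovers the consensus alignment described above.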
The second one is N-best rank-based order, which means, rather than keeping the translations from the same system as a block, we feed the top-1 translations from all systems in some order of systems, and then the second best translations from all systems, and so on. The presumption of the rank-based order is that top-ranked hypotheses are more reliable and it seemed beneficial to incorporate more reliable hypotheses as early as possible. These two kinds of order of hypotheses involve a certain degree of randomness as the order of systems is arbitrary. Such randomness can be removed by imposing a Bayes Risk order on MT systems, i.e. arrange the MT systems in ascending order of the Bayes Risk of their top-1 translations. These four orders of hypotheses are summarized in Table 1. We also tried some intuitively bad orders of hypotheses, including the reversal of these four orders and the random order. 7 Evaluation The proposed approaches of incremental IHMM are evaluated with respect to the constrained Chinese-to-English track of 2008 NIST Open MT 954 Order Example System-based 1:1 .. .1:N 2:1 .. . 2:N . .. M:1 . . . M:N N-best Rank-based 1:1 2:1 .. .M:1 . .. 1:2 2:2 .. . M:2 . . . 1:N . . . M:N Bayes Risk + System-based 4:1 4:2 .. .4:N .. . 1:1 1:2 . .. 1:N . . . 5:1 5:2 . . . 5:N Bayes Risk + Rank-based 4:1 .. .1:1 . .. 5:1 4:2 .. .1:2 . . . 5:2 . . . 4:N . . . 1:N . . . 5:N Table 1: The list of order of hypothesis and examples. Note that ‘m:n’ refers to the n-th translation from the m-th system. Evaluation (NIST (2008)). In the following sections, the incremental IHMM approaches using distortion model 1 and 2 are named as IncIHMM1 and IncIHMM2 respectively, and the consensus decoding of multiple IHMMs as CD-IHMM. The baselines include the TER-based method in Rosti et al. (2007), the incremental TER method in Rosti et al. (2008), and the IHMM approach in He et al. (2008). The development (dev) set comprises the newswire and newsgroup sections of MT06, whereas the test set is the entire MT08. The 10best translations for every source sentence in the dev and test sets are collected from eight MT systems. Case-insensitive BLEU-4, presented in percentage, is used as evaluation metric. The various parameters in the IHMM model are set as the optimal values found in He et al. (2008). The lexical translation probabilities used in the semantic similarity model are estimated from a small portion (FBIS + GALE) of the constrained track training data, using standard HMM alignment model (Och and Ney (2003)). The backbone of CN is selected by MBR. The loss function used for TER-based approaches is TER and that for IHMM-based approaches is BLEU. As to the incremental systems, the default order of hypotheses is the ascending order of TER score against the backbone, which is the order proposed in Rosti et al. (2008). The default order of hypotheses for our three incremental IHMM approaches is N-best rank order with Bayes Risk system order, which is empirically found to be giving the highest BLEU score. Once the CN is built, the final system combination output can be obtained by decoding it with a set of features and decoding parameters. The features we used include word confidences, language model score, word penalty and empty word penalty. The decoding parameters are trained by maximum BLEU training on the dev set. The training and decoding processes are the same as described by Rosti et al. (2007). 
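The four orders of Table 1 can be sketched as follows. The inputs `nbests` (one ranked n-best list per system) and `bayes_risk` (the Bayes Risk of each system's top-1 translation) are hypothetical; how the Bayes Risk itself is computed is left outside this sketch.

```python
def system_based(nbests, system_order=None):
    """Feed all translations of one system before the next (Table 1, row 1)."""
    order = system_order if system_order is not None else range(len(nbests))
    return [hyp for m in order for hyp in nbests[m]]

def rank_based(nbests, system_order=None):
    """Feed all top-1 translations, then all second-best ones, and so on."""
    order = list(system_order if system_order is not None else range(len(nbests)))
    depth = max(len(nb) for nb in nbests)
    return [nbests[m][r] for r in range(depth) for m in order
            if r < len(nbests[m])]

def bayes_risk_order(nbests, bayes_risk, rank_based_within=False):
    """Order systems by ascending Bayes Risk of their top-1 translations,
    then apply either the system-based or the rank-based order."""
    order = sorted(range(len(nbests)), key=lambda m: bayes_risk[m])
    return (rank_based(nbests, order) if rank_based_within
            else system_based(nbests, order))
```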
Method dev test best single system 32.60 27.75 pair-wise TER 37.90 30.96 incremental TER 38.10 31.23 pair-wise IHMM 38.52 31.65 incremental IHMM 39.22 32.63 Table 2: Comparison between IncIHMM2 and the three baselines 7.1 Comparison against Baselines Table 2 lists the BLEU scores achieved by the three baseline combination methods and IncIHMM2. The comparison between pairwise and incremental TER methods justifies the superiority of the incremental strategy. However, the benefit of incremental TER over pair-wise TER is smaller than that mentioned in Rosti et al. (2008), which may be because of the difference between test sets and other experimental conditions. The comparison between the two pair-wise alignment methods shows that IHMM gives a 0.7 BLEU point gain over TER, which is a bit smaller than the difference reported in He et al. (2008). The possible causes of such discrepancy include the different dev set and the smaller training set for estimating semantic similarity parameters. Despite that, the pair-wise IHMM method is still a strong baseline. Table 2 also shows the performance of IncIHMM2, our best incremental IHMM approach. It is almost one BLEU point higher than the pair-wise IHMM baseline and much higher than the two TER baselines. 7.2 Comparison among the Incremental IHMM Models Table 3 lists the BLEU scores achieved by the three incremental IHMM approaches. The two distortion models for IncIHMM approach lead to almost the same performance, whereas CD-IHMM is much less satisfactory. For IncIHMM, the gist of both distortion mod955 Method dev test IncIHMM1 39.06 32.60 IncIHMM2 39.22 32.63 CD-IHMM 38.64 31.87 Table 3: Comparison between the three incremental IHMM approaches els is to shift the distortion over spans to the distortion over word sequences. In distortion model 2 the word sequences are those sequences available in one of the component translations in the CN. Distortion model 1 is more encompassing as it also considers the word sequences which are combined from subsequences from various component translations. However, as mentioned in section 3.1, the number of sequences grows exponentially and there is therefore a limit L to the length of sequences. In general the limit L ≥8 would render the tuning/decoding process intolerably slow. We tried the values 5 to 8 for L and the variation of performance is less than 0.1 BLEU point. That is, distortion model 1 cannot be improved by tuning L. The similar BLEU scores as shown in Table 3 implies that the incorporation of more word sequences in distortion model 1 does not lead to extra improvement. Although consensus decoding is conceptually different from both variations of IncIHMM, it can indeed be transformed into a form similar to IncIHMM2. IncIHMM2 calculates the parameters of the IHMM as a weighted sum of various probabilities of the component translations. In contrast, the equations in section 4 shows that CD-IHMM calculates the weighted sum of the logarithm of those probabilities of the component translations. In other words, IncIHMM2 makes use of the sum of probabilities whereas CD-IHMM makes use of the product of probabilities. The experiment results indicate that the interaction between the weights and the probabilities is more fragile in the product case than in the summation case. 7.3 Impact of Order of Hypotheses Table 4 lists the BLEU scores on the test set achieved by IncIHMM1 using different orders of hypotheses. The column ‘reversal’ shows the impact of deliberately bad order, viz. 
more than one BLEU point lower than the best order. The random order is a baseline for not caring about order of hypotheses at all, which is about 0.7 BLEU normal reversal System 32.36 31.46 Rank 32.53 31.56 BR+System 32.37 31.44 BR+Rank 32.6 31.47 random 31.94 Table 4: Comparison between various orders of hypotheses. ‘System’ means system-based order; ‘Rank’ means N-best rank-based order; ‘BR’ means Bayes Risk order of systems. The numbers are the BLEU scores on the test set. point lower than the best order. Among the orders with good performance, it is observed that N-best rank order leads to about 0.2 to 0.3 BLEU point improvement, and that the Bayes Risk order of systems does not improve performance very much. In sum, the performance of incremental alignment is sensitive to the order of hypotheses, and the optimal order is defined in terms of the rank of each hypothesis on some system’s n-best list. 8 Conclusions This paper investigates the application of the incremental strategy to IHMM, one of the state-ofthe-art alignment methods for MT output combination. Such a task is subject to the problem of how to define state transitions on a gradually expanding CN. We proposed three different solutions, which share the principle that transition over CN spans must be converted to the transition over word sequences provided by the component translations. While the consensus decoding approach does not improve performance much, the two distortion models for incremental IHMM (IncIHMM1 and IncIHMM2) give superb performance in comparison with pair-wise TER, pair-wise IHMM, and incremental TER. We also showed that the order of hypotheses is important as a deliberately bad order would reduce translation quality by one BLEU point. References Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore 2008. Indirect-HMMbased Hypothesis Alignment for Combining Outputs from Machine Translation Systems. Proceedings of EMNLP 2008. Damianos Karakos, Jason Eisner, Sanjeev Khudanpur, and Markus Dreyer 2008. Machine Translation 956 System Combination using ITG-based Alignments. Proceedings of ACL 2008. Evgeny Matusov, Nicola Ueffing and Hermann Ney. 2006. Computing Consensus Translation from Multiple Machine Translation Systems using Enhanced Hypothesis Alignment. Proceedings of EACL. Nishanth Ulhas Nair and T.V. Sreenivas. 2007. Joint Decoding of Multiple Speech Patterns for Robust Speech Recognition. Proceedings of ASRU. Dennis Nilsson and Jacob Goldberger 2001. Sequentially Finding the N-Best List in Hidden Markov Models. Proceedings of IJCAI 2001. NIST 2008. The NIST Open Machine Translation Evaluation. www.nist.gov/ speech/tests/mt/2008/doc/ Franz J. Och and Hermann Ney 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics 29(1):pp 19-51 Kishore Papineni, Salim Roukos, Todd Ward and WeiJing Zhu 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. Proceedings of ACL 2002 Antti-Veikko I. Rosti, Spyros Matsoukas, and Richard Schwartz 2007. Improved Word-level System Combination for Machine Translation. Proceedings of ACL 2007. Antti-Veikko I. Rosti, Bing Zhang, Spyros Matsoukas, and Richard Schwartz 2008. Incremental Hypothesis Alignment for Building Confusion Networks with Application to Machine Translation System Combination. Proceedings of the 3rd ACL Workshop on SMT. Khe Chai Sim, William J. Byrne, Mark J.F. Gales, Hichem Sahbi, and Phil C. Woodland 2007. 
Consensus Network Decoding for Statistical Machine Translation System Combination. Proceedings of ICASSP, vol. 4. Matthew Snover, Bonnie Dorr, Rich Schwartz, Linnea Micciulla and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. Proceedings of AMTA 2006.
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 958–966, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP K-Best A∗Parsing Adam Pauls and Dan Klein Computer Science Division University of California, Berkeley {adpauls,klein}@cs.berkeley.edu Abstract A∗parsing makes 1-best search efficient by suppressing unlikely 1-best items. Existing kbest extraction methods can efficiently search for top derivations, but only after an exhaustive 1-best pass. We present a unified algorithm for k-best A∗parsing which preserves the efficiency of k-best extraction while giving the speed-ups of A∗methods. Our algorithm produces optimal k-best parses under the same conditions required for optimality in a 1-best A∗parser. Empirically, optimal k-best lists can be extracted significantly faster than with other approaches, over a range of grammar types. 1 Introduction Many situations call for a parser to return the kbest parses rather than only the 1-best. Uses for k-best lists include minimum Bayes risk decoding (Goodman, 1998; Kumar and Byrne, 2004), discriminative reranking (Collins, 2000; Charniak and Johnson, 2005), and discriminative training (Och, 2003; McClosky et al., 2006). The most efficient known algorithm for k-best parsing (Jim´enez and Marzal, 2000; Huang and Chiang, 2005) performs an initial bottom-up dynamic programming pass before extracting the k-best parses. In that algorithm, the initial pass is, by far, the bottleneck (Huang and Chiang, 2005). In this paper, we propose an extension of A∗ parsing which integrates k-best search with an A∗based exploration of the 1-best chart. A∗parsing can avoid significant amounts of computation by guiding 1-best search with heuristic estimates of parse completion costs, and has been applied successfully in several domains (Klein and Manning, 2002; Klein and Manning, 2003c; Haghighi et al., 2007). Our algorithm extends the speedups achieved in the 1-best case to the k-best case and is optimal under the same conditions as a standard A∗algorithm. The amount of work done in the k-best phase is no more than the amount of work done by the algorithm of Huang and Chiang (2005). Our algorithm is also equivalent to standard A∗parsing (up to ties) if it is terminated after the 1-best derivation is found. Finally, our algorithm can be written down in terms of deduction rules, and thus falls into the well-understood view of parsing as weighted deduction (Shieber et al., 1995; Goodman, 1998; Nederhof, 2003). In addition to presenting the algorithm, we show experiments in which we extract k-best lists for three different kinds of grammars: the lexicalized grammars of Klein and Manning (2003b), the state-split grammars of Petrov et al. (2006), and the tree transducer grammars of Galley et al. (2006). We demonstrate that optimal k-best lists can be extracted significantly faster using our algorithm than with previous methods. 2 A k-Best A∗Parsing Algorithm We build up to our full algorithm in several stages, beginning with standard 1-best A∗parsing and making incremental modifications. 2.1 Parsing as Weighted Deduction Our algorithm can be formulated in terms of prioritized weighted deduction rules (Shieber et al., 1995; Nederhof, 2003; Felzenszwalb and McAllester, 2007). A prioritized weighted deduction rule has the form φ1 : w1, . . . , φn : wn p(w1,...,wn) −−−−−−−−→φ0 : g(w1, . . . , wn) where φ1, . . . , φn are the antecedent items of the deduction rule and φ0 is the conclusion item. 
A deduction rule states that, given the antecedents φ1, . . . , φn with weights w1, . . . , wn, the conclusion φ0 can be formed with weight g(w1, . . . , wn) and priority p(w1, . . . , wn). 958 These deduction rules are “executed” within a generic agenda-driven algorithm, which constructs items in a prioritized fashion. The algorithm maintains an agenda (a priority queue of unprocessed items), as well as a chart of items already processed. The fundamental operation of the algorithm is to pop the highest priority item φ from the agenda, put it into the chart with its current weight, and form using deduction rules any items which can be built by combining φ with items already in the chart. If new or improved, resulting items are put on the agenda with priority given by p(·). 2.2 A∗Parsing The A∗parsing algorithm of Klein and Manning (2003c) can be formulated in terms of weighted deduction rules (Felzenszwalb and McAllester, 2007). We do so here both to introduce notation and to build to our final algorithm. First, we must formalize some notation. Assume we have a PCFG1 G and an input sentence s1 . . . sn of length n. The grammar G has a set of symbols Σ, including a distinguished goal (root) symbol G. Without loss of generality, we assume Chomsky normal form, so each non-terminal rule r in G has the form r = A →B C with weight wr (the negative log-probability of the rule). Edges are labeled spans e = (A, i, j). Inside derivations of an edge (A, i, j) are trees rooted at A and spanning si+1 . . . sj. The total weight of the best (minimum) inside derivation for an edge e is called the Viterbi inside score β(e). The goal of the 1-best A∗parsing algorithm is to compute the Viterbi inside score of the edge (G, 0, n); backpointers allow the reconstruction of a Viterbi parse in the standard way. The basic A∗algorithm operates on deduction items I(A, i, j) which represent in a collapsed way the possible inside derivations of edges (A, i, j). We call these items inside edge items or simply inside items where clear; a graphical representation of an inside item can be seen in Figure 1(a). The space whose items are inside edges is called the edge space. These inside items are combined using the single IN deduction schema shown in Table 1. This schema is instantiated for every grammar rule r 1While we present the algorithm specialized to parsing with a PCFG, it generalizes to a wide range of hypergraph search problems as shown in Klein and Manning (2001). VP s3 s4 s5 s1 s2 ... s6 sn ... VP VBZ NP DT NN s3 s4 s5 VP G (a) (b) (c) VP VBZ1 NP4 DT NN s3 s4 s5 (e) VP6 s3 s4 s5 VBZ NP DT NN (d) Figure 1: Representations of the different types of items used in parsing. (a) An inside edge item: I(VP, 2, 5). (b) An outside edge item: O(VP, 2, 5). (c) An inside derivation item: D(TVP, 2, 5) for a tree TVP. (d) A ranked derivation item: K(VP, 2, 5, 6). (e) A modified inside derivation item (with backpointers to ranked items): D(VP, 2, 5, 3, VP → VBZ NP, 1, 4). in G. For IN, the function g(·) simply sums the weights of the antecedent items and the grammar rule r, while the priority function p(·) adds a heuristic to this sum. The heuristic is a bound on the Viterbi outside score α(e) of an edge e; see Klein and Manning (2003c) for details. A good heuristic allows A∗to reach the goal item I(G, 0, n) while constructing few inside items. If the heuristic is consistent, then A∗guarantees that whenever an inside item comes off the agenda, its weight is its true Viterbi inside score (Klein and Manning, 2003c). 
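As a concrete picture of this agenda-driven process, the following is a minimal sketch (not the reference implementation) of executing the IN schema for a CNF PCFG with negative log-probability weights and a supplied heuristic. The quadratic scan over the chart is kept only for brevity; the `if edge in chart` check relies exactly on the guarantee that a popped item already carries its true Viterbi inside score.

```python
import heapq

def astar_inside(words, lexicon, rules, goal, heuristic):
    """lexicon[word] -> [(A, w_r)]; rules[(B, C)] -> [(A, w_r)];
    heuristic(A, i, j) -> consistent outside estimate."""
    n = len(words)
    chart = {}                       # (A, i, j) -> Viterbi inside score
    agenda = []                      # (priority, weight, edge)
    for i, w in enumerate(words):
        for A, w_r in lexicon.get(w, []):
            edge = (A, i, i + 1)
            heapq.heappush(agenda, (w_r + heuristic(*edge), w_r, edge))
    while agenda:
        _, wgt, edge = heapq.heappop(agenda)
        if edge in chart:
            continue                 # already popped with its best score
        chart[edge] = wgt
        if edge == (goal, 0, n):
            return wgt, chart        # 1-best score found
        A, i, j = edge
        for (B, k, l), w2 in list(chart.items()):
            if k == j:               # current edge on the left
                for parent, w_r in rules.get((A, B), []):
                    new, tot = (parent, i, l), wgt + w2 + w_r
                    if new not in chart:
                        heapq.heappush(agenda, (tot + heuristic(*new), tot, new))
            if l == i:               # current edge on the right
                for parent, w_r in rules.get((B, A), []):
                    new, tot = (parent, k, j), w2 + wgt + w_r
                    if new not in chart:
                        heapq.heappush(agenda, (tot + heuristic(*new), tot, new))
    return None, chart
```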
In particular, this guarantee implies that the goal item I(G, 0, n) will be popped with the score of the 1-best parse of the sentence. Consistency also implies that items are popped off the agenda in increasing order of bounded Viterbi scores: β(e) + h(e) We will refer to this monotonicity as the ordering property of A∗(Felzenszwalb and McAllester, 2007). One final property implied by consistency is admissibility, which states that the heuristic never overestimates the true Viterbi outside score for an edge, i.e. h(e) ≤α(e). For the remainder of this paper, we will assume our heuristics are consistent. 2.3 A Naive k-Best A∗Algorithm Due to the optimal substructure of 1-best PCFG derivations, a 1-best parser searches over the space of edges; this is the essence of 1-best dynamic programming. Although most edges can be built 959 Inside Edge Deductions (Used in A∗and KA∗) IN: I(B, i, l) : w1 I(C, l, j) : w2 w1+w2+wr+h(A,i,j) −−−−−−−−−−−−−→ I(A, i, j) : w1 + w2 + wr Table 1: The deduction schema (IN) for building inside edge items, using a supplied heuristic. This schema is sufficient on its own for 1-best A∗, and it is used in KA∗. Here, r is the rule A →B C. Inside Derivation Deductions (Used in NAIVE) DERIV: D(TB, i, l) : w1 D(TC, l, j) : w2 w1+w2+wr+h(A,i,j) −−−−−−−−−−−−−→ D A TB TC , i, j ! : w1 + w2 + wr Table 2: The deduction schema for building derivations, using a supplied heuristic. TB and TC denote full tree structures rooted at symbols B and C. This schema is the same as the IN deduction schema, but operates on the space of fully specified inside derivations rather than dynamic programming edges. This schema forms the NAIVE k-best algorithm. Outside Edge Deductions (Used in KA∗) OUT-B: I(G, 0, n) : w1 w1 −−→ O(G, 0, n) : 0 OUT-L: O(A, i, j) : w1 I(B, i, l) : w2 I(C, l, j) : w3 w1+w3+wr+w2 −−−−−−−−−−→ O(B, i, l) : w1 + w3 + wr OUT-R: O(A, i, j) : w1 I(B, i, l) : w2 I(C, l, j) : w3 w1+w2+wr+w3 −−−−−−−−−−→ O(C, l, j) : w1 + w2 + wr Table 3: The deduction schemata for building ouside edge items. The first schema is a base case that constructs an outside item for the goal (G, 0, n) from the inside item I(G, 0, n). The second two schemata build outside items in a top-down fashion. Note that for outside items, the completion cost is the weight of an inside item rather than a value computed by a heuristic. Delayed Inside Derivation Deductions (Used in KA∗) DERIV: D(TB, i, l) : w1 D(TC, l, j) : w2 O(A, i, j) : w3 w1+w2+wr+w3 −−−−−−−−−−→D A TB TC , i, j ! : w1 + w2 + wr Table 4: The deduction schema for building derivations, using exact outside scores computed using OUT deductions. The dependency on the outside item O(A, i, j) delays building derivation items until exact Viterbi outside scores have been computed. This is the final search space for the KA∗algorithm. Ranked Inside Derivation Deductions (Lazy Version of NAIVE) BUILD: K(B, i, l, u) : w1 K(C, l, j, v) : w2 w1+w2+wr+h(A,i,j) −−−−−−−−−−−−−→ D(A, i, j, l, r, u, v) : w1 + w2 + wr RANK: D1(A, i, j, ·) : w1 . . . Dk(A, i, j, ·) : wk maxm wm+h(A,i,j) −−−−−−−−−−−−→ K(A, i, j, k) : maxm wm Table 5: The schemata for simultaneously building and ranking derivations, using a supplied heuristic, for the lazier form of the NAIVE algorithm. BUILD builds larger derivations from smaller ones. RANK numbers derivations for each edge. Note that RANK requires distinct Di, so a rank k RANK rule will first apply (optimally) as soon as the kth-best inside derivation item for a given edge is removed from the queue. 
However, it will also still formally apply (suboptimally) for all derivation items dequeued after the kth. In practice, the RANK schema need not be implemented explicitly – one can simply assign a rank to each inside derivation item when it is removed from the agenda, and directly add the appropriate ranked inside item to the chart. Delayed Ranked Inside Derivation Deductions (Lazy Version of KA∗) BUILD: K(B, i, l, u) : w1 K(C, l, j, v) : w2 O(A, i, j) : w3 w1+w2+wr+w3 −−−−−−−−−−→ D(A, i, j, l, r, u, v) : w1 + w2 + wr RANK: D1(A, i, j, ·) : w1 . . . Dk(A, i, j, ·) : wk O(A, i, j) : wk+1 maxm wm+wk+1 −−−−−−−−−−−→K(A, i, j, k) : maxm wm Table 6: The deduction schemata for building and ranking derivations, using exact outside scores computed from OUT deductions, used for the lazier form of the KA∗algorithm. 960 using many derivations, each inside edge item will be popped exactly once during parsing, with a score and backpointers representing its 1-best derivation. However, k-best lists involve suboptimal derivations. One way to compute k-best derivations is therefore to abandon optimal substructure and dynamic programming entirely, and to search over the derivation space, the much larger space of fully specified trees. The items in this space are called inside derivation items, or derivation items where clear, and are of the form D(TA, i, j), specifying an entire tree TA rooted at symbol A and spanning si+1 . . . sj (see Figure 1(c)). Derivation items are combined using the DERIV schema of Table 2. The goals in this space, representing root parses, are any derivation items rooted at symbol G that span the entire input. In this expanded search space, each distinct parse has its own derivation item, derivable only in one way. If we continue to search long enough, we will pop multiple goal items. The first k which come off the agenda will be the k-best derivations. We refer to this approach as NAIVE. It is very inefficient on its own, but it leads to the full algorithm. The correctness of this k-best algorithm follows from the correctness of A∗parsing. The derivation space of full trees is simply the edge space of a much larger grammar (see Section 2.5). Note that the DERIV schema’s priority includes a heuristic just like 1-best A∗. Because of the context freedom of the grammar, any consistent heuristic for inside edge items usable in 1-best A∗ is also consistent for inside derivation items (and vice versa). In particular, the 1-best Viterbi outside score for an edge is a “perfect” heuristic for any derivation of that edge. While correct, NAIVE is massively inefficient. In comparison with A∗parsing over G, where there are O(n2) inside items, the size of the derivation space is exponential in the sentence length. By the ordering property, we know that NAIVE will process all derivation items d with δ(d) + h(d) ≤δ(gk) where gk is the kth-best root parse and δ(·) is the inside score of a derivation item (analogous to β for edges).2 Even for reasonable heuristics, this 2The new symbol emphasizes that δ scores a specific derivation rather than a minimum over a set of derivations. number can be very large; see Section 3 for empirical results. This naive algorithm is, of course, not novel, either in general approach or specific computation. Early k-best parsers functioned by abandoning dynamic programming and performing beam search on derivations (Ratnaparkhi, 1999; Collins, 2000). 
Huang (2005) proposes an extension of Knuth’s algorithm (Knuth, 1977) to produce k-best lists by searching in the space of derivations, which is essentially this algorithm. While Huang (2005) makes no explicit mention of a heuristic, it would be easy to incorporate one into their formulation. 2.4 A New k-Best A∗Parser While NAIVE suffers severe performance degradation for loose heuristics, it is in fact very efficient if h(·) is “perfect,” i.e. h(e) = α(e) ∀e. In this case, the ordering property of A∗guarantees that only inside derivation items d satisfying δ(d) + α(d) ≤δ(gk) will be placed in the chart. The set of derivation items d satisfying this inequality is exactly the set which appear in the k-best derivations of (G, 0, n) (as always, modulo ties). We could therefore use NAIVE quite efficiently if we could obtain exact Viterbi outside scores. One option is to compute outside scores with exhaustive dynamic programming over the original grammar. In a certain sense, described in greater detail below, this precomputation of exact heuristics is equivalent to the k-best extraction algorithm of Huang and Chiang (2005). However, this exhaustive 1-best work is precisely what we want to use A∗to avoid. Our algorithm solves this problem by integrating three searches into a single agenda-driven process. First, an A∗search in the space of inside edge items with an (imperfect) external heuristic h(·) finds exact inside scores. Second, exact outside scores are computed from inside and outside items. Finally, these exact outside scores guide the search over derivations. It can be useful to imagine these three operations as operating in phases, but they are all interleaved, progressing in order of their various priorities. In order to calculate outside scores, we introduce outside items O(A, i, j), which represent best derivations of G →s1 . . . si A sj+1 . . . sn; see Figure 1(b). Where the weights of inside items 961 compute Viterbi inside scores, the weights of outside items compute Viterbi outside scores. Table 3 shows deduction schemata for building outside items. These schemata are adapted from the schemata used in the general hierarchical A∗ algorithm of Felzenszwalb and McAllester (2007). In that work, it is shown that such schemata maintain the property that the weight of an outside item is the true Viterbi outside score when it is removed from the agenda. They also show that outside items o follow an ordering property, namely that they are processed in increasing order of β(o) + α(o) This quantity is the score of the best root derivation which includes the edge corresponding to o. Felzenszwalb and McAllester (2007) also show that both inside and outside items can be processed on the same queue and the ordering property holds jointly for both types of items. If we delay the construction of a derivation item until its corresponding outside item has been popped, then we can gain the benefits of using an exact heuristic h(·) in the naive algorithm. We realize this delay by modifying the DERIV deduction schema as shown in Table 4 to trigger on and prioritize with the appropriate outside scores. We now have our final algorithm, which we call KA∗. It is the union of the IN, OUT, and new “delayed” DERIV deduction schemata. In words, our algorithm functions as follows: we initialize the agenda with I(si, i −1, i) and D(si, i −1, i) for i = 1 . . . n. We compute inside scores in standard A∗fashion using the IN deduction rule, using any heuristic we might provide to 1-best A∗. 
Once the inside item I(G, 0, n) is found, we automatically begin to compute outside scores via the OUT deduction rules. Once O(si, i −1, i) is found, we can begin to also search in the space of derivation items, using the perfect heuristics given by the just-computed outside scores. Note, however, that all computation is done with a single agenda, so the processing of all three types of items is interleaved, with the k-best search possibly terminating without a full inside computation. As with NAIVE, the algorithm terminates when a k-th goal derivation is dequeued. 2.5 Correctness We prove the correctness of this algorithm by a reduction to the hierarchical A∗(HA∗) algorithm of Felzenszwalb and McAllester (2007). The input to HA∗is a target grammar Gm and a list of grammars G0 . . . Gm−1 in which Gt−1 is a relaxed projection of Gt for all t = 1 . . . m. A grammar Gt−1 is a projection of Gt if there exists some onto function πt : Σt 7→Σt−1 defined for all symbols in Gt. We use At−1 to represent πt(At). A projection is relaxed if, for every rule r = At →BtCt with weight wr there is a rule r′ = At−1 →Bt−1Ct−1 in Gt−1 with weight wr′ ≤wr. We assume that our external heuristic function h(·) is constructed by parsing our input sentence with a relaxed projection of our target grammar. This assumption, though often true anyway, is to allow proof by reduction to Felzenszwalb and McAllester (2007).3 We construct an instance of HA∗as follows: Let G0 be the relaxed projection which computes the heuristic. Let G1 be the input grammar G, and let G2, the target grammar of our HA∗instance, be the grammar of derivations in G formed by expanding each symbol A in G to all possible inside derivations TA rooted at A. The rules in G2 have the form TA →TB TC with weight given by the weight of the rule A →B C. By construction, G1 is a relaxed projection of G2; by assumption G0 is a relaxed projection of G1. The deduction rules that describe KA∗build the same items as HA∗with same weights and priorities, and so the guarantees from HA∗carry over to KA∗. We can characterize the amount of work done using the ordering property. Let gk be the kth-best derivation item for the goal edge g. Our algorithm processes all derivation items d, outside items o, and inside items i satisfying δ(d) + α(d) ≤ δ(gk) β(o) + α(o) ≤ δ(gk) β(i) + h(i) ≤ δ(gk) We have already argued that the set of derivation items satisfying the first inequality is the set of subtrees that appear in the optimal k-best parses, modulo ties. Similarly, it can be shown that the second inequality is satisfied only for edges that appear in the optimal k-best parses. The last inequality characterizes the amount of work done in the bottom-up pass. We compare this to 1-best A∗, which pops all inside items i satisfying β(i) + h(i) ≤β(g) = δ(g1) 3KA∗is correct for any consistent heuristic but a nonreductive proof is not possible in the present space. 962 Thus, the “extra” inside items popped in the bottom-up pass during k-best parsing as compared to 1-best parsing are those items i satisfying δ(g1) ≤β(i) + h(i) ≤δ(gk) The question of how many items satisfy these inequalities is empirical; we show in our experiments that it is small for reasonable heuristics. At worst, the bottom-up phase pops all inside items and reduces to exhaustive dynamic programming. Additionally, it is worth noting that our algorithm is naturally online in that it can be stopped at any k without advance specification. 
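As one illustration of the OUT schemata, the following sketch computes exact Viterbi outside scores from a completed inside chart. In KA* itself the outside and derivation items share a single agenda with the inside items rather than forming a separate pass; the separation here is only for readability, and the rule representation is an assumption of the sketch.

```python
import heapq

def viterbi_outside(inside, rules, goal, n):
    """inside: {(A, i, j): beta}; rules: iterable of (A, B, C, w_r)."""
    outside = {}
    agenda = [(inside[(goal, 0, n)], 0.0, (goal, 0, n))]   # OUT-B
    while agenda:
        _, alpha, edge = heapq.heappop(agenda)
        if edge in outside:
            continue
        outside[edge] = alpha
        A, i, j = edge
        for (X, B, C, w_r) in rules:
            if X != A:
                continue
            for l in range(i + 1, j):
                b, c = inside.get((B, i, l)), inside.get((C, l, j))
                if b is None or c is None:
                    continue
                # OUT-L and OUT-R: the sibling's inside score is the completion cost
                for child, sib in (((B, i, l), c), ((C, l, j), b)):
                    new_alpha = alpha + sib + w_r
                    if child not in outside:
                        prio = new_alpha + inside[child]
                        heapq.heappush(agenda, (prio, new_alpha, child))
    return outside
```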
2.6 Lazy Successor Functions The global ordering property guarantees that we will only dequeue derivation fragments of top parses. However, we will enqueue all combinations of such items, which is wasteful. By exploiting a local ordering amongst derivations, we can be more conservative about combination and gain the advantages of a lazy successor function (Huang and Chiang, 2005). To do so, we represent inside derivations not by explicitly specifying entire trees, but rather by using ranked backpointers. In this representation, inside derivations are represented in two ways, shown in Figure 1(d) and (e). The first way (d) simply adds a rank u to an edge, giving a tuple (A, i, j, u). The corresponding item is the ranked derivation item K(A, i, j, u), which represents the uth-best derivation of A over (i, j). The second representation (e) is a backpointer of the form (A, i, j, l, r, u, v), specifying the derivation formed by combining the uth-best derivation of (B, i, l) and the vth-best derivation of (C, l, j) using rule r = A →B C. The corresponding items D(A, i, j, l, r, u, v) are the new form of our inside derivation items. The modified deduction schemata for the NAIVE algorithm over these representations are shown in Table 5. The BUILD schema produces new inside derivation items from ranked derivation items, while the RANK schema assigns each derivation item a rank; together they function like DERIV. We can find the k-best list by searching until K(G, 0, n, k) is removed from the agenda. The k-best derivations can then be extracted by following the backpointers for K(G, 0, n, 1) . . . K(G, 0, n, k). The KA∗algorithm can be modified in the same way, shown in Table 6. 1 5 50 500 Heuristic Derivation items pushed (millions) 5-split 4-split 3-split 2-split 1-split 0-split NAIVE KA* Figure 2: Number of derivation items enqueued as a function of heuristic. Heuristics are shown in decreasing order of tightness. The y-axis is on a log-scale. The actual laziness is provided by additionally delaying the combination of ranked items. When an item K(B, i, l, u) is popped off the queue, a naive implementation would loop over items K(C, l, j, v) for all v, C, and j (and similarly for left combinations). Fortunately, little looping is actually necessary: there is a partial ordering of derivation items, namely, that D(A, i, j, l, r, u, v) will have a lower computed priority than D(A, i, j, l, r, u −1, v) and D(A, i, j, l, r, u, v −1) (Jim´enez and Marzal, 2000). So, we can wait until one of the latter two is built before “triggering” the construction of the former. This triggering is similar to the “lazy frontier” used by Huang and Chiang (2005). All of our experiments use this lazy representation. 3 Experiments 3.1 State-Split Grammars We performed our first experiments with the grammars of Petrov et al. (2006). The training procedure for these grammars produces a hierarchy of increasingly refined grammars through statesplitting. We followed Pauls and Klein (2009) in computing heuristics for the most refined grammar from outside scores for less-split grammars. We used the Berkeley Parser4 to learn such grammars from Sections 2-21 of the Penn Treebank (Marcus et al., 1993). We trained with 6 split-merge cycles, producing 7 grammars. We tested these grammars on 100 sentences of length at most 30 of Section 23 of the Treebank. Our “target grammar” was in all cases the most split grammar. 
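Returning to the lazy successor function of section 2.6, the local ordering it exploits can be isolated in the following sketch, which lazily enumerates ranked combinations of two children's k-best lists for a single rule. In KA* the popped combinations would additionally be prioritized with the exact outside score of the parent edge; that term is omitted here.

```python
import heapq

def lazy_kbest_combinations(left_scores, right_scores, rule_weight, k):
    """left_scores[u], right_scores[v]: scores of the u-th / v-th best child
    derivations (negative log probs, sorted ascending). Returns up to k
    ranked (score, u, v) backpointer tuples for the parent derivation."""
    seen = {(0, 0)}
    frontier = [(left_scores[0] + right_scores[0] + rule_weight, 0, 0)]
    results = []
    while frontier and len(results) < k:
        score, u, v = heapq.heappop(frontier)
        results.append((score, u, v))
        # (u+1, v) and (u, v+1) are the only candidates that can follow (u, v)
        for nu, nv in ((u + 1, v), (u, v + 1)):
            if (nu < len(left_scores) and nv < len(right_scores)
                    and (nu, nv) not in seen):
                seen.add((nu, nv))
                heapq.heappush(frontier,
                               (left_scores[nu] + right_scores[nv] + rule_weight,
                                nu, nv))
    return results
```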
4http://berkeleyparser.googlecode.com 963 0 2000 4000 6000 8000 10000 0 5000 15000 25000 KA* k Items pushed (millions) K Best Bottom-up Heuristic 0 2000 4000 6000 8000 10000 0 5000 15000 25000 EXH k Items pushed (millions) K Best Bottom-up Figure 3: The cost of k-best extraction as a function of k for state-split grammars, for both KA∗and EXH. The amount of time spent in the k-best phase is negligible compared to the cost of the bottom-up phase in both cases. Heuristics computed from projections to successively smaller grammars in the hierarchy form successively looser bounds on the outside scores. This allows us to examine the performance as a function of the tightness of the heuristic. We first compared our algorithm KA∗against the NAIVE algorithm. We extracted 1000-best lists using each algorithm, with heuristics computed using each of the 6 smaller grammars. In Figure 2, we evaluate only the k-best extraction phase by plotting the number of derivation items and outside items added to the agenda as a function of the heuristic used, for increasingly loose heuristics. We follow earlier work (Pauls and Klein, 2009) in using number of edges pushed as the primary, hardware-invariant metric for evaluating performance of our algorithms.5 While KA∗scales roughly linearly with the looseness of the heuristic, NAIVE degrades very quickly as the heuristics get worse. For heuristics given by grammars weaker than the 4-split grammar, NAIVE ran out of memory. Since the bottom-up pass of k-best parsing is the bottleneck, we also examine the time spent in the 1-best phase of k-best parsing. As a baseline, we compared KA∗to the approach of Huang and Chiang (2005), which we will call EXH (see below for more explanation) since it requires exhaustive parsing in the bottom-up pass. We performed the exhaustive parsing needed for EXH in our agenda-based parser to facilitate comparison. For KA∗, we included the cost of computing the heuristic, which was done by running our agenda-based parser exhaustively on a smaller grammar to compute outside items; we chose the 5We found that edges pushed was generally well correlated with parsing time. 0 2000 4000 6000 8000 10000 0 200 600 1000 KA* k Items pushed (millions) K Best Bottom-up Heuristic Figure 4: The performance of KA∗for lexicalized grammars. The performance is dominated by the computation of the heuristic, so that both the bottom-up phase and the k-best phase are barely visible. 3-split grammar for the heuristic since it gives the best overall tradeoff of heuristic and bottom-up parsing time. We separated the items enqueued into items enqueued while computing the heuristic (not strictly part of the algorithm), inside items (“bottom-up”), and derivation and outside items (together “k-best”). The results are shown in Figure 3. The cost of k-best extraction is clearly dwarfed by the the 1-best computation in both cases. However, KA∗is significantly faster over the bottom-up computations, even when the cost of computing the heuristic is included. 3.2 Lexicalized Parsing We also experimented with the lexicalized parsing model described in Klein and Manning (2003b). This model is constructed as the product of a dependency model and the unlexicalized PCFG model in Klein and Manning (2003a). 
We 964 0 2000 4000 6000 8000 10000 0 500 1500 2500 KA* k Items pushed (millions) K Best Bottom-up Heuristic 0 2000 4000 6000 8000 10000 0 500 1500 2500 EXH k Items pushed (millions) K Best Bottom-up Figure 5: k-best extraction as a function of k for tree transducer grammars, for both KA∗and EXH. constructed these grammars using the Stanford Parser.6 The model was trained on Sections 2-20 of the Penn Treebank and tested on 100 sentences of Section 21 of length at most 30 words. For this grammar, Klein and Manning (2003b) showed that a very accurate heuristic can be constructed by taking the sum of outside scores computed with the dependency model and the PCFG model individually. We report performance as a function of k for KA∗in Figure 4. Both NAIVE and EXH are impractical on these grammars due to memory limitations. For KA∗, computing the heuristic is the bottleneck, after which bottom-up parsing and k-best extraction are very fast. 3.3 Tree Transducer Grammars Syntactic machine translation (Galley et al., 2004) uses tree transducer grammars to translate sentences. Transducer rules are synchronous contextfree productions that have both a source and a target side. We examine the cost of k-best parsing in the source side of such grammars with KA∗, which can be a first step in translation. We extracted a grammar from 220 million words of Arabic-English bitext using the approach of Galley et al. (2006), extracting rules with at most 3 non-terminals. These rules are highly lexicalized. About 300K rules are applicable for a typical 30-word sentence; we filter the rest. We tested on 100 sentences of length at most 40 from the NIST05 Arabic-English test set. We used a simple but effective heuristic for these grammars, similar to the FILTER heuristic suggested in Klein and Manning (2003c). We projected the source projection to a smaller grammar by collapsing all non-terminal symbols to X, and 6http://nlp.stanford.edu/software/ also collapsing pre-terminals into related clusters. For example, we collapsed the tags NN, NNS, NNP, and NNPS to N. This projection reduced the number of grammar symbols from 149 to 36. Using it as a heuristic for the full grammar suppressed ∼60% of the total items (Figure 5). 4 Related Work While formulated very differently, one limiting case of our algorithm relates closely to the EXH algorithm of Huang and Chiang (2005). In particular, if all inside items are processed before any derivation items, the subsequent number of derivation items and outside items popped by KA∗is nearly identical to the number popped by EXH in our experiments (both algorithms have the same ordering bounds on which derivation items are popped). The only real difference between the algorithms in this limited case is that EXH places k-best items on local priority queues per edge, while KA∗makes use of one global queue. Thus, in addition to providing a method for speeding up k-best extraction with A∗, our algorithm also provides an alternate form of Huang and Chiang (2005)’s k-best extraction that can be phrased in a weighted deduction system. 5 Conclusions We have presented KA∗, an extension of A∗parsing that allows extraction of optimal k-best parses without the need for an exhaustive 1-best pass. We have shown in several domains that, with an appropriate heuristic, our algorithm can extract kbest lists in a fraction of the time required by current approaches to k-best extraction, giving the best of both A∗parsing and efficient k-best extraction, in a unified procedure. 
965 References Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and maxent discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL). Michael Collins. 2000. Discriminative reranking for natural language parsing. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML). P. Felzenszwalb and D. McAllester. 2007. The generalized A* architecture. Journal of Artificial Intelligence Research. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLTACL). Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In The Annual Conference of the Association for Computational Linguistics (ACL). Joshua Goodman. 1998. Parsing Inside-Out. Ph.D. thesis, Harvard University. Aria Haghighi, John DeNero, and Dan Klein. 2007. Approximate factoring for A* search. In Proceedings of HLT-NAACL. Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of the International Workshop on Parsing Technologies (IWPT), pages 53–64. Liang Huang. 2005. Unpublished manuscript. http://www.cis.upenn.edu/˜lhuang3/ knuth.pdf. V´ıctor M. Jim´enez and Andr´es Marzal. 2000. Computation of the n best parse trees for weighted and stochastic context-free grammars. In Proceedings of the Joint IAPR International Workshops on Advances in Pattern Recognition, pages 183–192, London, UK. Springer-Verlag. Dan Klein and Christopher D. Manning. 2001. Parsing and hypergraphs. In IWPT, pages 123–134. Dan Klein and Chris Manning. 2002. Fast exact inference with a factored model for natural language processing,. In Proceedings of NIPS. Dan Klein and Chris Manning. 2003a. Accurate unlexicalized parsing. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL). Dan Klein and Chris Manning. 2003b. Factored A* search for models over sequences and trees. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). Dan Klein and Christopher D. Manning. 2003c. A* parsing: Fast exact Viterbi parse selection. In In Proceedings of the Human Language Technology Conference and the North American Association for Computational Linguistics (HLT-NAACL), pages 119–126. Donald Knuth. 1977. A generalization of Dijkstra’s algorithm. Information Processing Letters, 6(1):1– 5. Shankar Kumar and William Byrne. 2004. Minimum bayes-risk decoding for statistical machine translation. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. In Computational Linguistics. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 152–159. Mark-Jan Nederhof. 2003. Weighted deductive parsing and Knuth’s algorithm. Computationl Linguistics, 29(1):135–143. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. 
In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics (ACL), pages 160–167, Morristown, NJ, USA. Association for Computational Linguistics. Adam Pauls and Dan Klein. 2009. Hierarchical search for parsing. In Proceedings of The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of COLING-ACL 2006. Adwait Ratnaparkhi. 1999. Learning to parse natural language with maximum entropy models. In Machine Learning, volume 34, pages 151–5175. Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. Journal of Logic Programming, 24:3–36. 966
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 967–975, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Coordinate Structure Analysis with Global Structural Constraints and Alignment-Based Local Features Kazuo Hara Masashi Shimbo Hideharu Okuma Yuji Matsumoto Graduate School of Information Science Nara Institute of Science and Technology Ikoma, Nara 630-0192, Japan {kazuo-h,shimbo,hideharu-o,matsu}@is.naist.jp Abstract We propose a hybrid approach to coordinate structure analysis that combines a simple grammar to ensure consistent global structure of coordinations in a sentence, and features based on sequence alignment to capture local symmetry of conjuncts. The weight of the alignmentbased features, which in turn determines the score of coordinate structures, is optimized by perceptron training on a given corpus. A bottom-up chart parsing algorithm efficiently finds the best scoring structure, taking both nested or nonoverlapping flat coordinations into account. We demonstrate that our approach outperforms existing parsers in coordination scope detection on the Genia corpus. 1 Introduction Coordinate structures are common in life science literature. In Genia Treebank Beta (Kim et al., 2003), the number of coordinate structures is nearly equal to that of sentences. In clinical papers, the outcome of clinical trials is typically described with coordination, as in Median times to progression and median survival times were 6.1 months and 8.9 months in arm A and 7.2 months and 9.5 months in arm B. (Schuette et al., 2006) Despite the frequency and implied importance of coordinate structures, coordination disambiguation remains a difficult problem even for state-ofthe-art parsers. Figure 1(a) shows the coordinate structure extracted from the output of Charniak and Johnson’s (2005) parser on the above example. This is somewhat surprising, given that the symmetry of conjuncts in the sentence is obvious to human eyes, and its correct coordinate structure shown in Figure 1(b) can be readily observed. 6.1 months and 8.9 months in arm A 7.2 months and 9.5 months inarm B and 6.1 months and 8.9 months in arm A 7.2 months and 9.5 months in arm B and (b) (a) Figure 1: (a) Output from the Charniak-Johnson parser and (b) the correct coordinate structure. Structural and semantic symmetry of conjuncts is one of the frequently observed features of coordination. This feature has been explored by previous studies on coordination, but these studies often dealt with a restricted form of coordination with apparently too much information provided from outside. Sometimes it was assumed that the coordinate structure contained two conjuncts each solely composed of a few nouns; and in many cases, the longest span of coordination (e.g., outer noun phrase scopes) was given a priori. Such rich information might be given by parsers, but this is still an unfounded assumption. In this paper, we approach coordination by taking an extreme stance, and assume that the input is a whole sentence with no subsidiary information except for the parts-of-speech of words. As it assumes minimal information about syntactic constructs, our method provides a baseline for future work exploiting deeper syntactic information for coordinate structure analysis. Moreover, this stand-alone approach has its own merits as well: 1. 
Even apart from parsing, the output coordinate structure alone may provide valuable information for higher-level applications, in the same vein as the recent success of named entity recognition and other shallow parsing 967 technologies. One such potential application is extracting the outcome of clinical tests as illustrated above. 2. As the system is designed independently from parsers, it can be combined with any types of parsers (e.g., phrase structure or dependency parsers), if necessary. 3. Because coordination bracketing is sometimes inconsistent with phrase structure bracketing, processing coordinations apart from phrase structures might be beneficial. Consider, for example, John likes, and Bill adores, Sue. (Carston and Blakemore, 2005) This kind of structure might be treated by assuming the presence of null elements, but the current parsers have limited ability to detect them. On the other hand, the symmetry of conjuncts, John likes and Bill adores, is rather obvious and should be easy to detect. The method proposed in this paper builds a tree-like coordinate structure from the input sentence annotated with parts-of-speech. Each tree is associated with a score, which is defined in terms of features based on sequence alignment between conjuncts occurring in the tree. The feature weights are optimized with a perceptron algorithm on a training corpus annotated with the scopes of conjuncts. The reason we build a tree of coordinations is to cope with nested coordinations, which are in fact quite common. In Genia Treebank Beta, for example, about 1/3 of the whole coordinations are nested. The method proposed in this paper improves upon our previous work (Shimbo and Hara, 2007) which also takes a sentence as input but is restricted to flat coordinations. Our new method, on the other hand, can successfully output the correct nested structure of Figure 1(b). 2 Related work Resnik (1999) disambiguated coordinations of the form [n1 and n2 n3], where ni are all nouns. This type of phrase has two possible readings: [(n1) and (n2 n3)] and [((n1) and (n2)) n3]. He demonstrated the effectiveness of semantic similarity calculated from a large text collection, and agreement of numbers between n1 and n2 and between n1 and n3. Nakov and Hearst (2005) collected web-based statistics with search engines and applied them to a task similar to Resnik’s. Hogan (2007) improved the parsing accuracy of sentences in which coordinated noun phrases are known to exist. She presented a generative model incorporating symmetry in conjunct structures and dependencies between coordinated head words. The model was then used to rerank the nbest outputs of the Bikel parser (2005). Recently, Buyko et al. (2007; 2008) and Shimbo and Hara (2007) applied discriminative learning methods to coordinate structure analysis. Buyko et al. used a linear-chain CRF, whereas Shimbo and Hara proposed an approach based on perceptron learning of edit distance between conjuncts. Shimbo and Hara’s approach has its root in Kurohashi and Nagao’s (1994) rule-based method for Japanese coordinations. Other studies on coordination include (Agarwal and Boggess, 1992; Chantree et al., 2005; Goldberg, 1999; Okumura and Muraki, 1994). 3 Proposed method We propose a method for learning and detecting the scopes of coordinations. It makes no assumption about the number of coordinations in a sentence, and the sentence can contain either nested coordinations, multiple flat coordinations, or both. 
The method consists of (i) a simple grammar tailored for coordinate structure, and (ii) a perceptron-based algorithm for learning feature weights. The features are defined in terms of sequence alignment between conjuncts. We thus use the grammar to filter out inconsistent nested coordinations and non-valid (overlapping) conjunct scopes, and the alignment-based features to evaluate the similarity of conjuncts. 3.1 Grammar for coordinations The sole objective of the grammar we present below is to ensure the consistency of two or more coordinations in a sentence; i.e., for any two coordinations, either (i) they must be totally nonoverlapping (non-nested coordinations), or (ii) one coordination must be embedded within the scope of a conjunct of the other coordination (nested coordinations). Below, we call a parse tree built from the grammar a coordination tree. 968 Table 1: Non-terminals COORD Complete coordination. COORD′ Partially-built coordination. CJT Conjunct. N Non-coordination. CC Coordinate conjunction like “and,” “or,” and “but”. SEP Connector of conjuncts other than CC: e.g., punctuations like “,” and “;”. W Any word. Table 2: Production rules for coordination trees. (... | ... | ...) denotes a disjunction (matches any one of the elements). A ‘*’ matches any word. Rules for coordinations: (i) COORDi,m →CJTi, j CCj+1,k−1 CJTk,m (ii) COORDi,n →CJTi, j SEPj+1,k−1 COORD′k,n[m] (iii) COORD′i,m[ j] →CJTi, j CCj+1,k−1 CJTk,m (iv) COORD′i,n[ j] →CJTi, j SEPj+1,k−1 COORD′k,n[m] Rules for conjuncts: (v) CJTi, j →(COORD| N)i, j Rules for non-coordinations: (vi) Ni,k →COORDi, j Nj+1,k (vii) Ni, j →Wi,i (COORD|N)i+1, j (viii) Ni,i →Wi,i Rules for pre-terminals: (ix) CCi,i →(and | or | but )i (x) CCi,i+1 →(, | ; )i (and | or | but )i+1 (xi) SEPi,i →(, | ; )i (xii) Wi,i →∗i 3.1.1 Non-terminals The grammar is composed of non-terminal symbols listed in Table 1. The distinction between COORD and COORD′ is made to cope with three or more conjuncts in a coordination. For example “a , b and c” is treated as a tree of the form (a , (b and c))), and the inner tree (b and c) is not a complete coordination, until it is conjoined with the first conjunct a. We represent this inner tree by a COORD′ (partial coordination), to distinguish it from a complete coordination represented by COORD. Compare Figures 2(a) and (b), which respectively depict the coordination tree for this example, and a tree for nested coordination with a similar structure. 3.1.2 Production rules Table 2 lists the production rules. Rules are shown with explicit subscripts indicating the span of their production. The subscript to a terminal word (shown in a box) specifies its position within a sentence (word index). Non-terminals have two subscript indices denoting the span of the production. COORD′ in rules (iii) and (iv) has an extra index j shown in brackets. This bracketed index maintains the end of the first conjunct (CJT) on the right-hand side. After a COORD′ is produced by these rules, it may later constitute a larger COORD or COORD′ through the application of productions (ii) or (iv). At this point, the bracketed index of the constituent COORD′ allows us to identify the scope of the first conjunct immediately underneath. As we describe in Section 3.2.4, the scope of this conjunct is necessary to compute the score of coordination trees. These grammar rules are admittedly minimal and need further elaboration to cover all real use cases of coordination (e.g., conjunctive phrases like “as well as”, etc.). 
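For concreteness, the production rules of Table 2 can be written down as plain data, for example along the following lines. This is a minimal illustrative sketch, not the authors' C++ implementation; the names are hypothetical, span indices and the bracketed conjunct-end index of COORD′ are left to the chart parser, and the two-token conjunction of rule (x) (e.g., ", and") is easiest to handle directly in the parser rather than as a pre-terminal.

```python
# A rough sketch (for illustration only) of the production rules in Table 2.
# Span bookkeeping and the bracketed conjunct-end index of COORD' belong to
# the chart parser and are not encoded here.
RULES = [
    ("COORD",  ("CJT", "CC", "CJT")),       # rule (i)
    ("COORD",  ("CJT", "SEP", "COORD'")),   # rule (ii)
    ("COORD'", ("CJT", "CC", "CJT")),       # rule (iii)
    ("COORD'", ("CJT", "SEP", "COORD'")),   # rule (iv)
    ("CJT",    ("COORD",)),                 # rule (v)
    ("CJT",    ("N",)),                     # rule (v)
    ("N",      ("COORD", "N")),             # rule (vi)
    ("N",      ("W", "COORD")),             # rule (vii)
    ("N",      ("W", "N")),                 # rule (vii)
    ("N",      ("W",)),                     # rule (viii)
]

def preterminals(word):
    """Pre-terminal rules (ix), (xi) and (xii): possible tags for one word."""
    tags = ["W"]                            # rule (xii): any word
    if word in ("and", "or", "but"):
        tags.append("CC")                   # rule (ix)
    if word in (",", ";"):
        tags.append("SEP")                  # rule (xi)
    return tags
```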
Yet they are sufficient to generate the basic trees illustrated in Figure 2. The experiments of Section 5 will apply this grammar on a real biomedical corpus. Note that although non-conjunction cue expressions, such as “both” and “either,” are not the part of this grammar, such cues can be learned (through perceptron training) from training examples if appropriate features are introduced. Indeed, in Section 5 we use features indicating which words precede coordinations. 3.2 Score of a coordination tree Given a sentence, our system outputs the coordination tree with the highest score among all possible trees for the sentence. The score of a coordination tree is simply the sum of the scores of all its nodes, and the node scores are computed independently from each other. Hence a bottom-up chart parsing algorithm can be designed to efficiently compute the highest scoring tree. While scores can be assigned to any nodes, we have chosen to assign a non-zero score only to two types of coordination nodes, namely COORD and COORD′, in the experiment of Section 5; all other nodes are ignored in score computation. The score of a coordination node is defined via sequence alignment (Gusfield, 1997) between conjuncts below the node, to capture the symmetry of these 969 (a) a , b and c W W W COORD COORD′ N SEP N CC N (b) a or b and c W CC W CC W N N N COORD COORD (c) a W b W c W N N N Figure 2: Coordination trees for (a) a coordination with three conjuncts, (b) nested coordinations, and (c) a non-coordination. The CJT nodes in (a) and (b) are omitted for brevity. W W CC W W W W W CC W W CC W W W W W N N N N N N N N N N N N COORD N NCOORD N N COORD 6.1 months 8.9 months 9.5 months 7.2 months 6.1 months and 8.9 months in arm A 7.2 months and 9.5 months in arm B Median times to progression and median survival times were 6.1 months and 8.9 months in arm A and 7.2 months and 9.5 months in arm B W W W W CC W W W N N N N N N N COORD W N N Median times to progression median survival times Figure 3: A coordination tree for the example sentence presented in Section 1, with the edit graphs attached to COORDnodes. median survival times Median times to progression initial vertex terminal vertex Figure 4: An edit graph and an alignment path (bold line). conjuncts. Figure 3 schematically illustrates the relation between a coordination tree and alignment-based computation of the coordination nodes. The score of this tree is given by the sum of the scores of the four COORD nodes, and the score of a COORD node is computed with the edit graph shown above the node. 3.2.1 Edit graph The edit graph is a basic data structure for computing sequence alignment. An example edit graph is depicted in Figure 4 for word sequences “Median times to progression” and “median survival times.” A diagonal edge represents alignment (or substitution) between the word at the top of the edge and the one on the left, while horizontal and vertical edges represent skipping (or deletion) of respective word. With this representation, a path starting from the top-left corner (initial vertex) and arriving at the bottom-right corner (terminal vertex) corresponds one-to-one to a sequence of edit operations transforming one word sequence to the other. In standard sequence alignment, each edge of an edit graph is associated with a score representing the merit of the corresponding edit operation. 
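As a point of reference, standard sequence alignment over such an edit graph can be sketched as a small dynamic program. The sketch below is added for illustration only, with hypothetical scoring functions; it is not the model of this paper, which replaces scalar edge scores with feature vectors and, as described next, the maximum over paths with an average.

```python
import numpy as np

def best_alignment_score(a, b, sub_score, skip_score=-1.0):
    """Standard alignment: maximum total edge score over all paths of the
    edit graph for word sequences a and b.  sub_score(x, y) scores a
    diagonal (substitution) edge; skip_score penalizes horizontal and
    vertical (deletion) edges.  Both are placeholders for illustration."""
    m, n = len(a), len(b)
    S = np.full((m + 1, n + 1), -np.inf)
    S[0, 0] = 0.0                          # initial vertex (top-left corner)
    for i in range(m + 1):
        for j in range(n + 1):
            if i > 0:                      # vertical edge: skip a[i-1]
                S[i, j] = max(S[i, j], S[i - 1, j] + skip_score)
            if j > 0:                      # horizontal edge: skip b[j-1]
                S[i, j] = max(S[i, j], S[i, j - 1] + skip_score)
            if i > 0 and j > 0:            # diagonal edge: align a[i-1], b[j-1]
                S[i, j] = max(S[i, j],
                              S[i - 1, j - 1] + sub_score(a[i - 1], b[j - 1]))
    return S[m, n]                         # terminal vertex (bottom-right)

# e.g., for the sequences of Figure 4:
# best_alignment_score("Median times to progression".split(),
#                      "median survival times".split(),
#                      lambda x, y: 1.0 if x.lower() == y.lower() else -0.5)
```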
By defining the score of a path as the total score of its component edges, we can assess the similarity of a pair of sequences as the maximum score over all paths in its edit graph. 3.2.2 Features In our model, instead of assigning a score independently to edges of an edit graph, we assign a vector of features to edges. The score of an edge is the inner product of this feature vector and another vector w, called global weight vector. Feature vectors may differ from one edge to another, but the vector w is unique in the entire system and consistently determines the relative importance of individual features. In parallel to the definition of a path score, the feature vector of a path can be defined as the sum of the feature vectors assigned to its component edges. Then the score of a path is equal to the inner product ⟨w,f⟩of w and the feature vector f of the path. A feature assigned to an edge can be an arbitrary indicator of edge directions (horizontal, vertical, or diagonal), edge coordinates in the edit graph, attributes (such as the surface form, partof-speech, and the location in the sentence) of the current or surrounding words, or their combination. Section 5.3 will describe the exact features used in our experiments. 970 3.2.3 Averaged path score as the score of a coordination node Finally, we define the score of a COORD(or COORD′) node in a coordination tree as the average score of all paths in its associated edit graph. This is another deviation from standard sequence alignment, in that we do not take the maximum scoring paths as representing the similarity of conjuncts, but instead use the average over all paths. Notice that the average is taken over paths, and not edges. In this way, a natural bias is incurred towards features occurring near the diagonal connecting the initial vertex and the terminal vertex. For instance, in an edit graph of size 8 × 8, there is only one path that goes through the vertex at the top-right corner, while more than 3,600 paths pass through the vertex at the center of the graph. In other words, the features associated with the center vertex receives 3,600 times more weights than those at the top-right corner after averaging. The major benefit of this averaging is the reduced computation during training. During the perceptron training, the global weight vector w changes and the score of individual paths changes accordingly. On the other hand, the average feature vector f (as opposed to the average score ⟨w,f⟩) over all paths in the edit graph remains constant. This means that f can be pre-computed once before the training starts, and the score computation during training reduces to simply taking the inner product of the current w and the precomputed f. Alternatively, the alignment score could be defined as that of the best scoring path with respect to the current w, following the standard sequence alignment computation. However, it would require running the Viterbi algorithm in each iteration of the perceptron training, for all possible spans of conjuncts. While we first pursued this direction, it was abandoned as the training was intolerably slow. 3.2.4 Coordination with three or more conjuncts For a coordination with three or more conjuncts, we define its score as the sum of the similarity scores of all pairwise consecutive conjuncts; i.e., for a coordination “a, b, c, and d” with four conjuncts, the score is the sum of the similarity scores for conjunct pairs (a, b), (b, c), and (c, d). 
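To make the last two ingredients concrete — the averaged path score of Section 3.2.3 and the pairwise-consecutive scoring just described — the following sketch computes the average feature vector over all paths of an edit graph with a path-counting dynamic program, and then sums the resulting alignment scores over consecutive conjunct pairs. This is a reconstruction for illustration, not the authors' code: edge_feats, dim and the weight vector w are hypothetical names, features on the initial and terminal vertices (Table 3) are omitted, and in the actual model the pairwise scores are attached to the individual COORD and COORD′ nodes of the tree rather than summed in one place.

```python
import numpy as np

def path_counts(rows, cols, forward=True):
    """Number of monotone paths (vertical / horizontal / diagonal steps)
    from (0, 0) into each vertex (forward=True), or from each vertex to the
    terminal vertex (rows, cols) (forward=False)."""
    c = np.zeros((rows + 1, cols + 1))
    if forward:
        c[0, 0] = 1.0
        for i in range(rows + 1):
            for j in range(cols + 1):
                if i > 0:
                    c[i, j] += c[i - 1, j]          # vertical step
                if j > 0:
                    c[i, j] += c[i, j - 1]          # horizontal step
                if i > 0 and j > 0:
                    c[i, j] += c[i - 1, j - 1]      # diagonal step
    else:
        c[rows, cols] = 1.0
        for i in range(rows, -1, -1):
            for j in range(cols, -1, -1):
                if i < rows:
                    c[i, j] += c[i + 1, j]
                if j < cols:
                    c[i, j] += c[i, j + 1]
                if i < rows and j < cols:
                    c[i, j] += c[i + 1, j + 1]
    return c

def average_path_features(a, b, edge_feats, dim):
    """Average, over all paths of the edit graph for conjuncts a and b, of
    the summed edge feature vectors.  edge_feats(i, j, d) is assumed to
    return the length-dim feature vector of the edge leaving vertex (i, j)
    in direction d ('down', 'right' or 'diag').  Every path through an edge
    contributes that edge's features once, so the edge's weight is
    (#paths reaching it) * (#paths leaving it)."""
    m, n = len(a), len(b)
    F = path_counts(m, n, forward=True)     # paths into each vertex
    B = path_counts(m, n, forward=False)    # paths out of each vertex
    avg = np.zeros(dim)
    for i in range(m + 1):
        for j in range(n + 1):
            if i < m:
                avg += F[i, j] * B[i + 1, j] * edge_feats(i, j, "down")
            if j < n:
                avg += F[i, j] * B[i, j + 1] * edge_feats(i, j, "right")
            if i < m and j < n:
                avg += F[i, j] * B[i + 1, j + 1] * edge_feats(i, j, "diag")
    return avg / F[m, n]                    # F[m, n] = total number of paths

def coordination_score(conjuncts, w, edge_feats, dim):
    """Total score of a coordination: sum of averaged alignment scores over
    consecutive conjunct pairs, e.g. (a,b), (b,c), (c,d) for "a, b, c and d"."""
    return sum(w @ average_path_features(x, y, edge_feats, dim)
               for x, y in zip(conjuncts, conjuncts[1:]))
```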
Ideally, we should take all combinations of conjuncts into account, but it would lead to a combinatorial a , b , c and d W W W W COORD COORD′ COORD′ N SEP N SEP N CC N Figure 5: A coordination tree with four conjuncts. All CJT nodes are omitted. explosion and is impractical. Recall that in the grammar introduced in Section 3.1, we attached a bracketed index to COORD′. This bracketed index was introduced for the computation of this pairwise similarity. Figure 5 shows the coordination tree for “a, b, c, and d.” The pairwise similarity scores for (a, b), (b, c), and (c, d) are respectively computed at the top COORD, left COORD′, and right COORD′ nodes, using the scheme described in Section 3.2.3. To compute the similarity of a and b, we need to lift the information about the end position of b upward to the COORDnode. The same applies to computing the similarity of b and c; the end position of c is needed at the left COORD′. The bracketed index of COORD′ exactly maintains this information, i.e., the end of the first conjunct below the COORD′. See production rules (iii) and (iv) in Table 2. 3.3 Perceptron learning of feature weights As we saw above, our model is a linear model with the global weight vector w acting as the coefficient vector, and hence various existing techniques can be exploited to optimize w. In this paper, we use the averaged perceptron learning (Collins, 2002; Freund and Schapire, 1999) to optimize w on a training corpus, so that the system assigns the highest score to the correct coordination tree among all possible trees for each training sentence. 4 Discussion 4.1 Computational complexity Given an input sentence of N words, finding its maximum scoring coordination tree by a bottomup chart parsing algorithm incurs a time complexity of O(N3). While the right-hand side of rules (i)–(iv) involves more than three variables and thus appears to increase complexity, this is not the case since 971 some of the variables ( j and k in rules (i) and (iii), and j, k, and m in rules (ii) and (iv)) are constrained by the location of conjunct connectors (CC and SEP), whose number in a sentence is negligible compared to the sentence length N. As a result, these rules can be processed in O(N2) time. Hence the run-time complexity is dominated by rule (vi), which has three variables and leads to O(N3). Each iteration of the perceptron algorithm for a sentence of length N also incurs O(N3) for the same reason. Our method also requires pre-processing in the beginning of perceptron training, to compute the average feature vectors f for all possible spans (i, j) and (k,m) of conjuncts in a sentence. With a reasoning similar to the complexity analysis of the chart parsing algorithm above, we can show that the pre-processing takes O(N4) time. 4.2 Difference from Shimbo and Hara’s method The method proposed in this paper extends the work of Shimbo and Hara (2007). Both take a whole sentence as input and use perceptron learning, and the difference lies in how hypothesis coordination(s) are encoded as a feature vector. Unlike our new method which constructs a tree of coordinations, Shimbo and Hara used a chainable partial paths (representing non-overlapping series of local alignments; see (Shimbo and Hara, 2007, Figure 5)) in a global triangular edit graph. In our method, we compute many edit graphs of smaller size, one for each possible conjunct pair in a sentence. 
We use global alignment (a complete path) in these smaller graphs, as opposed to chainable local alignment (partial paths) in a global edit graph used by Shimbo and Hara. Since nested coordinations cannot be encoded as chainable partial paths (Shimbo and Hara, 2007), their method cannot cope with nested coordinations such as those illustrated in Figure 2(b). 4.3 Integration with parsers Charniak and Johnson (2005) reported an improved parsing accuracy by reranking n-best parse trees, using features based on similarity of coordinated phrases, among others. It should be interesting to investigate whether alignment-based features like ours can be built into their reranker, or more generally, whether the coordination scopes output by our method help improving parsing accuracy. The combinatory categorial grammar (CCG) (Steedman, 2000) provides an account for various coordination constructs in an elegant manner, and incorporating alignment-based features into the CCG parser (Clark and Curran, 2007) is also a viable possibility. 5 Evaluation We evaluated the performance of our method1 on the Genia corpus (Kim et al., 2003). 5.1 Dataset Genia Treebank Beta is a collection of Penn Treebank-like phrase structure trees for 4529 sentences from Medline abstracts. In this corpus, each scope of coordinate structures is annotated with an explicit tag, and the conjuncts are always placed inside brackets. Not many treebanks explicitly mark the scope of conjuncts; for example, the Penn Treebank frequently omits bracketing of coordination and conjunct scopes, leaving them as a flat structure. Genia contains a total of 4129 occurrences of COOD tags indicating coordination. These tags are further subcategorized into phrase types such as NP-COODand VP-COOD. Among coordinations annotated with COOD tags, we selected those surrounding “and,” “or,” and “but.” This yielded 3598 coordinations (2997, 355, and 246 for “and,” “or,” and “but,” respectively) in 2508 sentences. These coordinations constitute nearly 90% of all coordinations in Genia, and we used them as the evaluation dataset. The length of these sentences is 30.0 words on average. 5.2 Evaluation method We tested the proposed method in two tasks: (i) identify the scope of coordinations regardless of phrase types, and (ii) detect noun phrase (NP) coordinations and identify their scopes. While the goal of task (i) is to determine the scopes of 3598 coordinations, task (ii) demands both to judge whether each of the coordinations constructs an NP, and if it does, to determine its scope. 1A C++ implementation of our method can be found at http://cl.naist.jp/project/coordination/, along with supplementary materials including the preliminary experimental results of the CCG parser on the same dataset. 972 Table 3: Features in the edit graph for conjuncts wkwk+1 ···wm and wlwl+1 ···wn. edge/vertex type vertical edge horizontal edge diagonal edge initial vertex terminal vertex ··· wj−1 wj wj+1 ··· ... wi−1 wi wi+1 ... ··· wj−1 w j wj+1 ··· ... wi−1 wi wi+1 ... ··· wj−1 wj w j+1 ··· ... wi−1 wi wi+1 ... wl wl+1 ··· wk wk+1 ... ··· wn−1 wn ... wm−1 wm vertical bigrams wi−1wi wiwi+1 wi−1wi wi−1wi wiwi+1 wk−2wk−1 wk−1wk wkwk+1 wm−2wm−1 wm−1wm wmwm+1 horizontal bigrams w j−1w j w j−1w j w jw j+1 w j−1w j w jw j+1 wl−2wl−1 wl−1wl wlwl+1 wn−2wn−1 wn−1wn wnwn+1 orthogonal bigrams wiw j wk−1wl−1 wk−1wl wkwl−1 wkwl wm−1wn−1 wm−1wn wmwn−1 wmwn For comparison, two parsers, the Bikel-Collins parser (Bikel, 2005)2 and Charniak-Johnson reranking parser3, were applied in both tasks. 
Task (ii) imitates the evaluation reported by Shimbo and Hara (2007), and to compare our method with their coordination analysis method. Because their method can only process flat coordinations, in task (ii) we only used 1613 sentences in which “and” occurs just once, following (Shimbo and Hara, 2007). Note however that the split of data is different from their experiments. We evaluate the performance of the tested methods by the accuracy of coordination-level bracketing (Shimbo and Hara, 2007); i.e., we count each of the coordination (as opposed to conjunct) scopes as one output of the system, and the system output is deemed correct if the beginning of the first output conjunct and the end of the last conjunct both match annotations in the Genia Treebank. In both tasks, we report the micro-averaged results of five-fold cross validation. The Bikel-Collins and Charniak-Johnson parsers were trained on Genia, using all the phrase structure trees in the corpus except the test set; i.e., the training set also contains (in addition to the four folds) 2021(= 4129 −2508) sentences which are not in the five folds. Since the two parsers were also trained on Genia, we interpret the bracketing above each conjunction in the parse tree output by them as the coordination scope output by the parsers, in accordance with how coordinations are annotated in Genia. In 2http://www.cis.upenn.edu/∼dbikel/software.html 3ftp://ftp.cs.brown.edu/pub/nlparser/ reranking-parserAug06.tar.gz testing, the Bikel-Collins parser and Shimbo-Hara method were given the gold parts-of-speech (POS) of the test sentences in Genia. We trained the proposed method twice, once with the gold POS tags and once with the POS tags output by the Charniak-Johnson parser. This is because the Charniak-Johnson parser does not accept POS tags of the test sentences. 5.3 Features To compute features for our method, each word in a sentence was represented as a list of attributes. The attributes include the surface word, part-of-speech, suffix, prefix, and the indicators of whether the word is capitalized, whether it is composed of all uppercase letters or digits, and whether it contains digits or hyphens. All features are defined as an indicator of an attribute in two words coming from either a single conjunct (either horizontal or vertical word sequences associated with the edit graph) or two conjuncts (one from the horizontal word sequence and one from the vertical sequence). We call the first type horizontal/vertical bigrams and the second orthogonal bigrams. Table 3 summarizes the features in an edit graph for two conjuncts (wkwk+1 ···wm) and (wlwl+1 ···wn), where wi denotes the ith word in the sentence. As seen from the table, features are assigned to the initial and terminal vertices as well as to edges. A wiwj in the table indicates that for each attribute (e.g., part-of-speech, etc.), an indicator function for the combination of the attribute values in wi and wj is assigned to the vertex or edge shown in the figure above. Note that the features 973 Table 4: Results of Task (i). The number of coordinations of each type (#), and the recall (%) for the proposed method, Bikel-Collins parser (BC), and Charniak-Johnson parser (CJ). 
gold POS CJ POS COOD # Proposed BC Proposed CJ Overall 3598 61.5 52.1 57.5 52.9 NP 2317 64.2 45.5 62.5 50.1 VP 465 54.2 67.7 42.6 61.9 ADJP 321 80.4 66.4 76.3 48.6 S 188 22.9 67.0 15.4 63.3 PP 167 59.9 53.3 53.9 58.1 UCP 60 36.7 18.3 38.3 26.7 SBAR 56 51.8 85.7 33.9 83.9 ADVP 21 85.7 90.5 85.7 90.5 Others 3 66.7 33.3 33.3 0.0 assigned to different types of vertex or edge are treated as distinct even if the word indices i and j are identical; i.e., all features are conditioned on edge/vertex types to which they are assigned. 5.4 Results Task (i) Table 4 shows the results of task (i). We only list the recall score in the table, as precision (and hence F1-measure, too) was equal to recall for all methods in this task; this is not surprising given that in this data set, conjunctions “and”, “or”, and “but” always indicate the existence of a coordination, and all methods successfully learned this trend from the training data. The proposed method outperformed parsers on the coordination scope identification overall. The table also indicates that our method considerably outperformed two parsers on NP-COOD, ADJP-COOD, and UCP-COOD categories, but it did not work well on VP-COOD, S-COOD, and SBAR-COOD. In contrast, the parsers performed quite well in the latter categories. Task (ii) Table 5 lists the results of task (ii). The proposed method outperformed Shimbo-Hara method in this task, although the setting of this task is mostly identical to (Shimbo and Hara, 2007) and does not include nested coordinations. Note also that both methods use roughly equivalent features. One reason should be that our grammar rules can strictly enforce the scope consistency of conjuncts in coordinations with three or more conjuncts. Because the Shimbo-Hara method represents such coordinations as a series of sub-paths in an edit graph which are output independently of each other without enforcing consistency, their Table 5: Results of Task (ii). Proposed method, BC: Bikel-Collins, CJ: Charniak-Johnson, SH: Shimbo-Hara. gold POS CJ POS Proposed BC SH Proposed CJ Precision 61.7 45.6 55.9 60.2 49.0 Recall 57.9 46.1 53.7 55.6 46.8 F1 59.7 45.8 54.8 57.8 47.9 method can produce inconsistent scopes of conjuncts in the middle. In fact, the advantage of the proposed method in task (ii) is noticeable especially in coordinations with three or more conjuncts; if we restrict the test set only to coordinations with three or more conjuncts, the F-measures in the proposed method and Shimbo-Hara become 53.0 and 42.3, respectively; i.e., the margin increases to 10.7 from 4.9 points. 6 Conclusion and outlook We have proposed a method for learning and analyzing generic coordinate structures including nested coordinations. It consists of a simple grammar for coordination and perceptron learning of alignment-based features. The method performed well overall and on coordinated noun and adjective phrases, but not on coordinated verb phrases and sentences. The latter coordination types are in fact easy for parsers, as the experimental results show. The proposed method failing in verbal and sentential coordinations is as expected, since conjuncts in these coordinations are not necessarily similar, if they are viewed as a sequence of words. We will investigate similarity measures different from sequence alignment, to better capture the symmetry of these conjuncts. We will also pursue integration of our method with parsers. Because they have advantages in different coordination phrase types, their integration looks promising. 
Acknowledgments We thank anonymous reviewers for helpful comments and the pointer to the combinatory categorial grammar. References Rajeev Agarwal and Lois Boggess. 1992. A simple but useful approach to conjunct identification. In Proceedings of the 30th Annual Meeting of the Associa974 tion for Computational Linguistics (ACL’92), pages 15–21. Daniel M. Bikel. 2005. Multilingual statistical parsing engine version 0.9.9c. http://www.cis.upenn. edu/∼dbikel/software.html. Ekaterina Buyko and Udo Hahn. 2008. Are morphosyntactic features more predicative for the resolution of noun phrase coordination ambiguity than lexicosemantic similarity scores. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), pages 89–96, Manchester, UK. Ekaterina Buyko, Katrin Tomanek, and Udo Hahn. 2007. Resolution of coordination ellipses in biological named entities using conditional random fields. In Proceedings of the Pacific Association for Computational Linguistics (PACLIC’07), pages 163–171. Robyn Carston and Diane Blakemore. 2005. Editorial: Introduction to coordination: syntax, semantics and pragmatics. Lingua, 115:353–358. Francis Chantree, Adam Kilgarriff, Anne de Roeck, and Alistair Willis. 2005. Disambiguating coordinations using word distribution information. In Proceedings of the Int’l Conference on Recent Advances in Natural Language Processing, Borovets, Bulgaria. Eugene Charniak and Mark Johnson. 2005. Coarseto-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL 2005), pages 173–180, Ann Arbor, Michigan, USA. Stephen Clark and James R. Curran. 2007. Widecoverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics, 33(4):493–552. Michael Collins. 2002. Discriminative training methods for hidden Markov models: theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2002), pages 1– 8, Philadelphia, PA, USA. Yoav Freund and Robert E. Schapire. 1999. Large margin classification using the perceptron algorithm. Machine Learning, 37(3):277–296. Miriam Goldberg. 1999. An unsupervised model for statistically determining coordinate phrase attachment. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL 1999), pages 610–614, College Park, Maryland, USA. Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences. Cambridge University Press. Deirdre Hogan. 2007. Coordinate noun phrase disambiguation in a generative parsing model. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics (ACL 2007), pages 680– 687, Prague, Czech Republic. J.-D. Kim, T. Ohta, Y. Tateisi, and J. Tsujii. 2003. GENIA corpus: a semantically annotated corpus for bio-textmining. Bioinformatics, 19(Suppl. 1):i180– i182. Sadao Kurohashi and Makoto Nagao. 1994. A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20:507–534. Preslav Nakov and Marti Hearst. 2005. Using the web as an implicit training set: application to structural ambiguity resolution. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language (HLTEMNLP 2005), pages 835–842, Vancouver, Canada. Akitoshi Okumura and Kazunori Muraki. 1994. Symmetric pattern matching analysis for English coordinate structures. 
In Proceedings of the Fourth Conference on Applied Natural Language Processing, pages 41–46. Philip Resnik. 1999. Semantic similarity in a taxonomy. Journal of Artificial Intelligence Research, 11:95–130. Wolfgang Schuette, Thomas Blankenburg, Wolf Guschall, Ina Dittrich, Michael Schroeder, Hans Schweisfurth, Assaad Chemaissani, Christian Schumann, Nikolas Dickgreber, Tabea Appel, and Dieter Ukena. 2006. Multicenter randomized trial for stage iiib/iv non-small-cell lung cancer using every3-week versus weekly paclitaxel/carboplatin. Clinical Lung Cancer, 7:338–343. Masashi Shimbo and Kazuo Hara. 2007. A discriminative learning model for coordinate conjunctions. In Proceedings of Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL 2007), pages 610–619, Prague, Czech Republic. Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA, USA. 975
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 91–99, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Learning Semantic Correspondences with Less Supervision Percy Liang UC Berkeley [email protected] Michael I. Jordan UC Berkeley [email protected] Dan Klein UC Berkeley [email protected] Abstract A central problem in grounded language acquisition is learning the correspondences between a rich world state and a stream of text which references that world state. To deal with the high degree of ambiguity present in this setting, we present a generative model that simultaneously segments the text into utterances and maps each utterance to a meaning representation grounded in the world state. We show that our model generalizes across three domains of increasing difficulty—Robocup sportscasting, weather forecasts (a new domain), and NFL recaps. 1 Introduction Recent work in learning semantics has focused on mapping sentences to meaning representations (e.g., some logical form) given aligned sentence/meaning pairs as training data (Ge and Mooney, 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Lu et al., 2008). However, this degree of supervision is unrealistic for modeling human language acquisition and can be costly to obtain for building large-scale, broadcoverage language understanding systems. A more flexible direction is grounded language acquisition: learning the meaning of sentences in the context of an observed world state. The grounded approach has gained interest in various disciplines (Siskind, 1996; Yu and Ballard, 2004; Feldman and Narayanan, 2004; Gorniak and Roy, 2007). Some recent work in the NLP community has also moved in this direction by relaxing the amount of supervision to the setting where each sentence is paired with a small set of candidate meanings (Kate and Mooney, 2007; Chen and Mooney, 2008). The goal of this paper is to reduce the amount of supervision even further. We assume that we are given a world state represented by a set of records along with a text, an unsegmented sequence of words. For example, in the weather forecast domain (Section 2.2), the text is the weather report, and the records provide a structured representation of the temperature, sky conditions, etc. In this less restricted data setting, we must resolve multiple ambiguities: (1) the segmentation of the text into utterances; (2) the identification of relevant facts, i.e., the choice of records and aspects of those records; and (3) the alignment of utterances to facts (facts are the meaning representations of the utterances). Furthermore, in some of our examples, much of the world state is not referenced at all in the text, and, conversely, the text references things which are not represented in our world state. This increased amount of ambiguity and noise presents serious challenges for learning. To cope with these challenges, we propose a probabilistic generative model that treats text segmentation, fact identification, and alignment in a single unified framework. The parameters of this hierarchical hidden semi-Markov model can be estimated efficiently using EM. We tested our model on the task of aligning text to records in three different domains. The first domain is Robocup sportscasting (Chen and Mooney, 2008). Their best approach (KRISPER) obtains 67% F1; our method achieves 76.5%. This domain is simplified in that the segmentation is known. The second domain is weather forecasts, for which we created a new dataset. 
Here, the full complexity of joint segmentation and alignment arises. Nonetheless, we were able to obtain reasonable results on this task. The third domain we considered is NFL recaps (Barzilay and Lapata, 2005; Snyder and Barzilay, 2007). The language used in this domain is richer by orders of magnitude, and much of it does not reference the world state. Nonetheless, taking the first unsupervised approach to this problem, we were able to make substantial progress: We achieve an F1 of 53.2%, which closes over half of the gap between a heuristic baseline (26%) and supervised systems (68%–80%). 91 Dataset # scenarios |w| |T | |s| |A| Robocup 1919 5.7 9 2.4 0.8 Weather 22146 28.7 12 36.0 5.8 NFL 78 969.0 44 329.0 24.3 Table 1: Statistics for the three datasets. We report average values across all scenarios in the dataset: |w| is the number of words in the text, |T | is the number of record types, |s| is the number of records, and |A| is the number of gold alignments. 2 Domains and Datasets Our goal is to learn the correspondence between a text w and the world state s it describes. We use the term scenario to refer to such a (w, s) pair. The text is simply a sequence of words w = (w1, . . . , w|w|). We represent the world state s as a set of records, where each record r ∈s is described by a record type r.t ∈T and a tuple of field values r.v = (r.v1, . . . , r.vm).1 For example, temperature is a record type in the weather domain, and it has four fields: time, min, mean, and max. The record type r.t ∈T specifies the field type r.tf ∈{INT, STR, CAT} of each field value r.vf, f = 1, . . . , m. There are three possible field types—integer (INT), string (STR), and categorical (CAT)—which are assumed to be known and fixed. Integer fields represent numeric properties of the world such as temperature, string fields represent surface-level identifiers such as names of people, and categorical fields represent discrete concepts such as score types in football (touchdown, field goal, and safety). The field type determines the way we expect the field value to be rendered in words: integer fields can be numerically perturbed, string fields can be spliced, and categorical fields are represented by open-ended word distributions, which are to be learned. See Section 3.3 for details. 2.1 Robocup Sportscasting In this domain, a Robocup simulator generates the state of a soccer game, which is represented by a set of event records. For example, the record pass(arg1=pink1,arg2=pink5) denotes a passing event; this type of record has two fields: arg1 (the actor) and arg2 (the recipient). As the game is progressing, humans interject commentaries about notable events in the game, e.g., pink1 passes back to pink5 near the middle of the field. All of the 1To simplify notation, we assume that each record has m fields, though in practice, m depends on the record type r.t. fields in this domain are categorical, which means there is no a priori association between the field value pink1 and the word pink1. This degree of flexibility is desirable because pink1 is sometimes referred to as pink goalie, a mapping which does not arise from string operations but must instead be learned. We used the dataset created by Chen and Mooney (2008), which contains 1919 scenarios from the 2001–2004 Robocup finals. Each scenario consists of a single sentence representing a fragment of a commentary on the game, paired with a set of candidate records. 
In the annotation, each sentence corresponds to at most one record (possibly one not in the candidate set, in which case we automatically get that sentence wrong). See Figure 1(a) for an example and Table 1 for summary statistics on the dataset. 2.2 Weather Forecasts In this domain, the world state contains detailed information about a local weather forecast and the text is a short forecast report (see Figure 1(b) for an example). To create the dataset, we collected local weather forecasts for 3,753 cities in the US (those with population at least 10,000) over three days (February 7–9, 2009) from www.weather.gov. For each city and date, we created two scenarios, one for the day forecast and one for the night forecast. The forecasts consist of hour-by-hour measurements of temperature, wind speed, sky cover, chance of rain, etc., which represent the underlying world state. This world state is summarized by records which aggregate measurements over selected time intervals. For example, one of the records states the minimum, average, and maximum temperature from 5pm to 6am. This aggregation process produced 22,146 scenarios, each containing |s| = 36 multi-field records. There are 12 record types, each consisting of only integer and categorical fields. To annotate the data, we split the text by punctuation into lines and labeled each line with the records to which the line refers. These lines are used only for evaluation and are not part of the model (see Section 5.1 for further discussion). The weather domain is more complex than the Robocup domain in several ways: The text w is longer, there are more candidate records, and most notably, w references multiple records (5.8 on av92 x badPass(arg1=pink11,arg2=purple3) ballstopped() ballstopped() kick(arg1=pink11) turnover(arg1=pink11,arg2=purple3) s w: pink11 makes a bad pass and was picked offby purple3 (a) Robocup sportscasting . . . rainChance(time=26-30,mode=Def) temperature(time=17-30,min=43,mean=44,max=47) windDir(time=17-30,mode=SE) windSpeed(time=17-30,min=11,mean=12,max=14,mode=10-20) precipPotential(time=17-30,min=5,mean=26,max=75) rainChance(time=17-30,mode=--) windChill(time=17-30,min=37,mean=38,max=42) skyCover(time=17-30,mode=50-75) rainChance(time=21-30,mode=--) . . . s w: Occasional rain after 3am . Low around 43 . South wind between 11 and 14 mph . Chance of precipitation is 80 % . New rainfall amounts between a quarter and half of an inch possible . (b) Weather forecasts . . . rushing(entity=richie anderson,att=5,yds=37,avg=7.4,lg=16,td=0) receiving(entity=richie anderson,rec=4,yds=46,avg=11.5,lg=20,td=0) play(quarter=1,description=richie anderson ( dal ) rushed left side for 13 yards .) defense(entity=eric ogbogu,tot=4,solo=3,ast=1,sck=0,yds=0) . . . s w: . . . Former Jets player Richie Anderson finished with 37 yards on 5 carries plus 4 receptions for 46 yards . . . . (c) NFL recaps Figure 1: An example of a scenario for each of the three domains. Each scenario consists of a candidate set of records s and a text w. Each record is specified by a record type (e.g., badPass) and a set of field values. Integer values are in Roman, string values are in italics, and categorical values are in typewriter. The gold alignments are shown. erage), so the segmentation of w is unknown. See Table 1 for a comparison of the two datasets. 2.3 NFL Recaps In this domain, each scenario represents a single NFL football game (see Figure 1(c) for an example). 
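To make the record notation of Figure 1 concrete, a record such as the temperature record of Figure 1(b) can be represented roughly as follows. This is a small illustrative sketch with hypothetical class and field names, not the authors' data format; the time field is treated as categorical, in line with the statement above that the weather records contain only integer and categorical fields.

```python
from dataclasses import dataclass
from typing import Dict, Union

INT, STR, CAT = "INT", "STR", "CAT"     # the three fixed field types

@dataclass
class RecordType:
    name: str                           # e.g. "temperature"
    field_types: Dict[str, str]         # field name -> INT / STR / CAT

@dataclass
class Record:
    rtype: RecordType
    values: Dict[str, Union[int, str]]  # field name -> field value

TEMPERATURE = RecordType("temperature",
                         {"time": CAT, "min": INT, "mean": INT, "max": INT})

# one record of a weather-forecast world state s, cf. Figure 1(b)
r = Record(TEMPERATURE, {"time": "17-30", "min": 43, "mean": 44, "max": 47})
```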
The world state (the things that happened during the game) is represented by database tables, e.g., scoring summary, team comparison, drive chart, play-by-play, etc. Each record is a database entry, for instance, the receiving statistics for a certain player. The text is the recap of the game— an article summarizing the game highlights. The dataset we used was collected by Barzilay and Lapata (2005). The data includes 466 games during the 2003–2004 NFL season. 78 of these games were annotated by Snyder and Barzilay (2007), who aligned each sentence to a set of records. This domain is by far the most complicated of the three. Many records corresponding to inconsequential game statistics are not mentioned. Conversely, the text contains many general remarks (e.g., it was just that type of game) which are not present in any of the records. Furthermore, the complexity of the language used in the recap is far greater than what we can represent using our simple model. Fortunately, most of the fields are integer fields or string fields (generally names or brief descriptions), which provide important anchor points for learning the correspondences. Nonetheless, the same names and numbers occur in multiple records, so there is still uncertainty about which record is referenced by a given sentence. 3 Generative Model To learn the correspondence between a text w and a world state s, we propose a generative model p(w | s) with latent variables specifying this correspondence. Our model combines segmentation with alignment. The segmentation aspect of our model is similar to that of Grenager et al. (2005) and Eisenstein and Barzilay (2008), but in those two models, the segments are clustered into topics rather than grounded to a world state. The alignment aspect of our model is similar to the HMM model for word alignment (Ney and Vogel, 1996). DeNero et al. (2008) perform joint segmentation and word alignment for machine translation, but the nature of that task is different from ours. The model is defined by a generative process, 93 which proceeds in three stages (Figure 2 shows the corresponding graphical model): 1. Record choice: choose a sequence of records r = (r1, . . . , r|r|) to describe, where each ri ∈s. 2. Field choice: for each chosen record ri, select a sequence of fields fi = (fi1, . . . , fi|fi|), where each fij ∈{1, . . . , m}. 3. Word choice: for each chosen field fij, choose a number cij > 0 and generate a sequence of cij words. The observed text w is the terminal yield formed by concatenating the sequences of words of all fields generated; note that the segmentation of w provided by c = {cij} is latent. Think of the words spanned by a record as constituting an utterance with a meaning representation given by the record and subset of fields chosen. Formally, our probabilistic model places a distribution over (r, f, c, w) and factorizes according to the three stages as follows: p(r, f, c, w | s) = p(r | s)p(f | r)p(c, w | r, f, s) The following three sections describe each of these stages in more detail. 3.1 Record Choice Model The record choice model specifies a distribution over an ordered sequence of records r = (r1, . . . , r|r|), where each record ri ∈s. This model is intended to capture two types of regularities in the discourse structure of language. The first is salience, that is, some record types are simply more prominent than others. For example, in the NFL domain, 70% of scoring records are mentioned whereas only 1% of punting records are mentioned. 
The second is the idea of local coherence, that is, the order in which one mentions records tend to follow certain patterns. For example, in the weather domain, the sky conditions are generally mentioned first, followed by temperature, and then wind speed. To capture these two phenomena, we define a Markov model on the record types (and given the record type, a record is chosen uniformly from the set of records with that type): p(r | s) = |r| Y i=1 p(ri.t | ri−1.t) 1 |s(ri.t)|, (1) where s(t) def = {r ∈s : r.t = t} and r0.t is a dedicated START record type.2 We also model the transition of the final record type to a designated STOP record type in order to capture regularities about the types of records which are described last. More sophisticated models of coherence could also be employed here (Barzilay and Lapata, 2008). We assume that s includes a special null record whose type is NULL, responsible for generating parts of our text which do not refer to any real records. 3.2 Field Choice Model Each record type t ∈T has a separate field choice model, which specifies a distribution over a sequence of fields. We want to capture salience and coherence at the field level like we did at the record level. For instance, in the weather domain, the minimum and maximum fields of a temperature record are mentioned whereas the average is not. In the Robocup domain, the actor typically precedes the recipient in passing event records. Formally, we have a Markov model over the fields:3 p(f | r) = |r| Y i=1 |fj| Y j=1 p(fij | fi(j−1)). (2) Each record type has a dedicated null field with its own multinomial distribution over words, intended to model words which refer to that record type in general (e.g., the word passes for passing records). We also model transitions into the first field and transitions out of the final field with special START and STOP fields. This Markov structure allows us to capture a few elements of rudimentary syntax. 3.3 Word Choice Model We arrive at the final component of our model, which governs how the information about a particular field of a record is rendered into words. For each field fij, we generate the number of words cij from a uniform distribution over {1, 2, . . . , Cmax}, where Cmax is set larger than the length of the longest text we expect to see. Conditioned on 2We constrain our inference to only consider record types t that occur in s, i.e., s(t) ̸= ∅. 3During inference, we prohibit consecutive fields from repeating. 94 s r f c, w s r1 f11 w1 · · · w c11 · · · · · · ri fi1 w · · · w ci1 · · · fi|fi| w · · · w ci|fi| · · · rn · · · fn|fn| w · · · w|w| cn|fn| Record choice Field choice Word choice Figure 2: Graphical model representing the generative model. First, records are chosen and ordered from the set s. Then fields are chosen for each record. Finally, words are chosen for each field. The world state s and the words w are observed, while (r, f, c) are latent variables to be inferred (note that the number of latent variables itself is unknown). the fields f, the words w are generated independently:4 p(w | r, f, c, s) = |w| Y k=1 pw(wk | r(k).tf(k), r(k).vf(k)), where r(k) and f(k) are the record and field responsible for generating word wk, as determined by the segmentation c. The word choice model pw(w | t, v) specifies a distribution over words given the field type t and field value v. This distribution is a mixture of a global backoff distribution over words and a field-specific distribution which depends on the field type t. 
Although we designed our word choice model to be relatively general, it is undoubtedly influenced by the three domains. However, we can readily extend or replace it with an alternative if desired; this modularity is one principal benefit of probabilistic modeling. Integer Fields (t = INT) For integer fields, we want to capture the intuition that a numeric quantity v is rendered in the text as a word which is possibly some other numerical value w due to stylistic factors. Sometimes the exact value v is used (e.g., in reporting football statistics). Other times, it might be customary to round v (e.g., wind speeds are typically rounded to a multiple of 5). In other cases, there might just be some unexplained error, where w deviates from v by some noise ϵ+ = w −v > 0 or ϵ−= v −w > 0. We model ϵ+ and ϵ−as geometric distributions.5 In 4While a more sophisticated model of words would be useful if we intended to use this model for natural language generation, the false independence assumptions present here matter less for the task of learning the semantic correspondences because we always condition on w. 5Specifically, p(ϵ+; α+) = (1 −α+)ϵ+−1α+, where α+ is a field-specific parameter; p(ϵ−; α−) is defined analogously. 8 9 10 11 12 13 14 15 16 17 18 w 0.1 0.2 0.3 0.4 0.5 pw(w | v = 13) 8 9 10 11 12 13 14 15 16 17 18 w 0.1 0.2 0.3 0.4 0.6 pw(w | v = 13) (a) temperature.min (b) windSpeed.min Figure 3: Two integer field types in the weather domain for which we learn different distributions over the ways in which a value v might appear in the text as a word w. Suppose the record field value is v = 13. Both distributions are centered around v, as is to be expected, but the two distributions have different shapes: For temperature.min, almost all the mass is to the left, suggesting that forecasters tend to report conservative lower bounds. For the wind speed, the mass is concentrated on 13 and 15, suggesting that forecasters frequently round wind speeds to multiples of 5. summary, we allow six possible ways of generating the word w given v: v ⌈v⌉5 ⌊v⌋5 round5(v) v −ϵ− v + ϵ+ Separate probabilities for choosing among these possibilities are learned for each field type (see Figure 3 for an example). String Fields (t = STR) Strings fields are intended to represent values which we expect to be realized in the text via a simple surface-level transformation. For example, a name field with value v = Moe Williams is sometimes referenced in the text by just Williams. We used a simple generic model of rendering string fields: Let w be a word chosen uniformly from those in v. Categorical Fields (t = CAT) Unlike string fields, categorical fields are not tied down to any lexical representation; in fact, the identities of the categorical field values are irrelevant. For each categorical field f and possible value v, we have a 95 v pw(w | t, v) 0-25 , clear mostly sunny 25-50 partly , cloudy increasing 50-75 mostly cloudy , partly 75-100 of inch an possible new a rainfall Table 2: Highest probability words for the categorical field skyCover.mode in the weather domain. It is interesting to note that skyCover=75-100 is so highly correlated with rain that the model learns to connect an overcast sky in the world to the indication of rain in the text. separate multinomial distribution over words from which w is drawn. An example of a categorical field is skyCover.mode in the weather domain, which has four values: 0-25, 25-50, 50-75, and 75-100. Table 2 shows the top words for each of these field values learned by our model. 
4 Learning and Inference Our learning and inference methodology is a fairly conventional application of Expectation Maximization (EM) and dynamic programming. The input is a set of scenarios D, each of which is a text w paired with a world state s. We maximize the marginal likelihood of our data, summing out the latent variables (r, f, c): max θ Y (w,s)∈D X r,f,c p(r, f, c, w | s; θ), (3) where θ are the parameters of the model (all the multinomial probabilities). We use the EM algorithm to maximize (3), which alternates between the E-step and the M-step. In the E-step, we compute expected counts according to the posterior p(r, f, c | w, s; θ). In the M-step, we optimize the parameters θ by normalizing the expected counts computed in the E-step. In our experiments, we initialized EM with a uniform distribution for each multinomial and applied add-0.1 smoothing to each multinomial in the M-step. As with most complex discrete models, the bulk of the work is in computing expected counts under p(r, f, c | w, s; θ). Formally, our model is a hierarchical hidden semi-Markov model conditioned on s. Inference in the E-step can be done using a dynamic program similar to the inside-outside algorithm. 5 Experiments Two important aspects of our model are the segmentation of the text and the modeling of the coherence structure at both the record and field levels. To quantify the benefits of incorporating these two aspects, we compare our full model with two simpler variants. • Model 1 (no model of segmentation or coherence): Each record is chosen independently; each record generates one field, and each field generates one word. This model is similar in spirit to IBM model 1 (Brown et al., 1993). • Model 2 (models segmentation but not coherence): Records and fields are still generated independently, but each field can now generate multiple words. • Model 3 (our full model of segmentation and coherence): Records and fields are generated according to the Markov chains described in Section 3. 5.1 Evaluation In the annotated data, each text w has been divided into a set of lines. These lines correspond to clauses in the weather domain and sentences in the Robocup and NFL domains. Each line is annotated with a (possibly empty) set of records. Let A be the gold set of these line-record alignment pairs. To evaluate a learned model, we compute the Viterbi segmentation and alignment (argmaxr,f,c p(r, f, c | w, s)). We produce a predicted set of line-record pairs A′ by aligning a line to a record ri if the span of (the utterance corresponding to) ri overlaps the line. The reason we evaluate indirectly using lines rather than using utterances is that it is difficult to annotate the segmentation of text into utterances in a simple and consistent manner. We compute standard precision, recall, and F1 of A′ with respect to A. Unless otherwise specified, performance is reported on all scenarios, which were also used for training. However, we did not tune any hyperparameters, but rather used generic values which worked well enough across all three domains. 5.2 Robocup Sportscasting We ran 10 iterations of EM on Models 1–3. Table 3 shows that performance improves with increased model sophistication. We also compare 96 Method Precision Recall F1 Model 1 78.6 61.9 69.3 Model 2 74.1 84.1 78.8 Model 3 77.3 84.0 80.5 Table 3: Alignment results on the Robocup sportscasting dataset. 
Method F1 Random baseline 48.0 Chen and Mooney (2008) 67.0 Model 3 75.7 Table 4: F1 scores based on the 4-fold cross-validation scheme in Chen and Mooney (2008). our model to the results of Chen and Mooney (2008) in Table 4. Figure 4 provides a closer look at the predictions made by each of our three models for a particular example. Model 1 easily mistakes pink10 for the recipient of a pass record because decisions are made independently for each word. Model 2 chooses the correct record, but having no model of the field structure inside a record, it proposes an incorrect field segmentation (although our evaluation is insensitive to this). Equipped with the ability to prefer a coherent field sequence, Model 3 fixes these errors. Many of the remaining errors are due to the garbage collection phenomenon familiar from word alignment models (Moore, 2004; Liang et al., 2006). For example, the ballstopped record occurs frequently but is never mentioned in the text. At the same time, there is a correlation between ballstopped and utterances such as pink2 holds onto the ball, which are not aligned to any record in the annotation. As a result, our model incorrectly chooses to align the two. 5.3 Weather Forecasts For the weather domain, staged training was necessary to get good results. For Model 1, we ran 15 iterations of EM. For Model 2, we ran 5 iterations of EM on Model 1, followed by 10 iterations on Model 2. For Model 3, we ran 5 iterations of Model 1, 5 iterations of a simplified variant of Model 3 where records were chosen independently, and finally, 5 iterations of Model 3. When going from one model to another, we used the final posterior distributions of the former to iniMethod Precision Recall F1 Model 1 49.9 75.1 60.0 Model 2 67.3 70.4 68.8 Model 3 76.3 73.8 75.0 Table 5: Alignment results on the weather forecast dataset. [Model 1] r: f: w: pass arg2=pink10 pink10 turns the ball over to purple5 [Model 2] r: f: w: turnover x pink10 turns the ball over arg2=purple5 to purple5 [Model 3] r: f: w: turnover arg1=pink10 pink10 x turns the ball over to arg2=purple5 purple5 Figure 4: An example of predictions made by each of the three models on the Robocup dataset. tialize the parameters of the latter.6 We also prohibited utterances in Models 2 and 3 from crossing punctuation during inference. Table 5 shows that performance improves substantially in the more sophisticated models, the gains being greater than in the Robocup domain. Figure 5 shows the predictions of the three models on an example. Model 1 is only able to form isolated (but not completely inaccurate) associations. By modeling segmentation, Model 2 accounts for the intermediate words, but errors are still made due to the lack of Markov structure. Model 3 remedies this. However, unexpected structures are sometimes learned. For example, the temperature.time=6-21 field indicates daytime, which happens to be perfectly correlated with the word high, although high intuitively should be associated with the temperature.max field. In these cases of high correlation (Table 2 provides another example), it is very difficult to recover the proper alignment without additional supervision. 5.4 NFL Recaps In order to scale up our models to the NFL domain, we first pruned for each sentence the records which have either no numerical values (e.g., 23, 23-10, 2/4) nor name-like words (e.g., those that appear only capitalized in the text) in common. 
This eliminated all but 1.5% of the record candidates per sentence, while maintaining an ora6It is interesting to note that this type of staged training is evocative of language acquisition in children: lexical associations are formed (Model 1) before higher-level discourse structure is learned (Model 3). 97 [Model 1] r: f: w: cloudy , with a windDir time=6-21 high near temperature max=63 63 . windDir mode=SE east southeast wind between windSpeed min=5 5 and windSpeed mean=9 11 mph . [Model 2] r: f: w: rainChance mode=– cloudy , temperature x with a time=6-21 high near max=63 63 . windDir mode=SE east southeast wind x between 5 and windSpeed mean=9 11 mph . [Model 3] r: f: w: skyCover x cloudy , temperature x with a time=6-21 high near max=63 63 mean=56 . windDir mode=SE east southeast x wind between windSpeed min=5 5 max=13 and 11 x mph . Figure 5: An example of predictions made by each of the three models on the weather dataset. cle alignment F1 score of 88.7. Guessing a single random record for each sentence yields an F1 of 12.0. A reasonable heuristic which uses weighted number- and string-matching achieves 26.7. Due to the much greater complexity of this domain, Model 2 was easily misled as it tried without success to find a coherent segmentation of the fields. We therefore created a variant, Model 2’, where we constrained each field to generate exactly one word. To train Model 2’, we ran 5 iterations of EM where each sentence is assumed to have exactly one record, followed by 5 iterations where the constraint was relaxed to also allow record boundaries at punctuation and the word and. We did not experiment with Model 3 since the discourse structure on records in this domain is not at all governed by a simple Markov model on record types—indeed, most regions do not refer to any records at all. We also fixed the backoff probability to 0.1 instead of learning it and enforced zero numerical deviation on integer field values. Model 2’ achieved an F1 of 39.9, an improvement over Model 1, which attained 32.8. Inspection of the errors revealed the following problem: The alignment task requires us to sometimes align a sentence to multiple redundant records (e.g., play and score) referenced by the same part of the text. However, our model generates each part of text from only one record, and thus it can only allow an alignment to one record.7 To cope with this incompatibility between the data and our notion of semantics, we used the following solution: We divided the records into three groups by type: play, score, and other. Each group has a copy of the model, but we enforce that they share the same segmentation. We also introduce a potential that couples the presence or absence of records across 7The model can align a sentence to multiple records provided that the records are referenced by non-overlapping parts of the text. Method Precision Recall F1 Random (with pruning) 13.1 11.0 12.0 Baseline 29.2 24.6 26.7 Model 1 25.2 46.9 32.8 Model 2’ 43.4 37.0 39.9 Model 2’ (with groups) 46.5 62.1 53.2 Graph matching (sup.) 73.4 64.5 68.6 Multilabel global (sup.) 87.3 74.5 80.3 Table 6: Alignment results on the NFL dataset. Graph matching and multilabel are supervised results reported in Snyder and Barzilay (2007).9 groups on the same segment to capture regular cooccurrences between redundant records. Table 6 shows our results. With groups, we achieve an F1 of 53.2. 
Though we still trail supervised techniques, which attain numbers in the 68–80 range, we have made substantial progress over our baseline using an unsupervised method. Furthermore, our model provides a more detailed analysis of the correspondence between the world state and text, rather than just producing a single alignment decision. Most of the remaining errors made by our model are due to a lack of calibration. Sometimes, our false positives are close calls where a sentence indirectly references a record, and our model predicts the alignment whereas the annotation standard does not. We believe that further progress is possible with a richer model. 6 Conclusion We have presented a generative model of correspondences between a world state and an unsegmented stream of text. By having a joint model of salience, coherence, and segmentation, as well as a detailed rendering of the values in the world state into words in the text, we are able to cope with the increased ambiguity that arises in this new data setting, successfully pushing the limits of unsupervision. 98 References R. Barzilay and M. Lapata. 2005. Collective content selection for concept-to-text generation. In Human Language Technology and Empirical Methods in Natural Language Processing (HLT/EMNLP), pages 331–338, Vancouver, B.C. R. Barzilay and M. Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34:1–34. P. F. Brown, S. A. D. Pietra, V. J. D. Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263–311. D. L. Chen and R. J. Mooney. 2008. Learning to sportscast: A test of grounded language acquisition. In International Conference on Machine Learning (ICML), pages 128– 135. Omnipress. J. DeNero, A. Bouchard-Cˆot´e, and D. Klein. 2008. Sampling alignment structure under a Bayesian translation model. In Empirical Methods in Natural Language Processing (EMNLP), pages 314–323, Honolulu, HI. J. Eisenstein and R. Barzilay. 2008. Bayesian unsupervised topic segmentation. In Empirical Methods in Natural Language Processing (EMNLP), pages 334–343. J. Feldman and S. Narayanan. 2004. Embodied meaning in a neural theory of language. Brain and Language, 89:385– 392. R. Ge and R. J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Computational Natural Language Learning (CoNLL), pages 9–16, Ann Arbor, Michigan. P. Gorniak and D. Roy. 2007. Situated language understanding as filtering perceived affordances. Cognitive Science, 31:197–231. T. Grenager, D. Klein, and C. D. Manning. 2005. Unsupervised learning of field segmentation models for information extraction. In Association for Computational Linguistics (ACL), pages 371–378, Ann Arbor, Michigan. Association for Computational Linguistics. R. J. Kate and R. J. Mooney. 2007. Learning language semantics from ambiguous supervision. In Association for the Advancement of Artificial Intelligence (AAAI), pages 895–900, Cambridge, MA. MIT Press. P. Liang, B. Taskar, and D. Klein. 2006. Alignment by agreement. In North American Association for Computational Linguistics (NAACL), pages 104–111, New York City. Association for Computational Linguistics. W. Lu, H. T. Ng, W. S. Lee, and L. S. Zettlemoyer. 2008. A generative model for parsing natural language to meaning representations. In Empirical Methods in Natural Language Processing (EMNLP), pages 783–792. R. C. Moore. 2004. Improving IBM word alignment model 1. 
In Association for Computational Linguistics (ACL), pages 518–525, Barcelona, Spain. Association for Computational Linguistics. H. Ney and S. Vogel. 1996. HMM-based word alignment in statistical translation. In International Conference on Computational Linguistics (COLING), pages 836–841. Association for Computational Linguistics. J. M. Siskind. 1996. A computational study of crosssituational techniques for learning word-to-meaning mappings. Cognition, 61:1–38. B. Snyder and R. Barzilay. 2007. Database-text alignment via structured multilabel classification. In International Joint Conference on Artificial Intelligence (IJCAI), pages 1713–1718, Hyderabad, India. C. Yu and D. H. Ballard. 2004. On the integration of grounding language and learning objects. In Association for the Advancement of Artificial Intelligence (AAAI), pages 488– 493, Cambridge, MA. MIT Press. L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI), pages 658–666. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL), pages 678–687. 99
2009
11
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 976–984, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Learning Context-Dependent Mappings from Sentences to Logical Form Luke S. Zettlemoyer and Michael Collins MIT CSAIL Cambridge, MA 02139 {lsz,mcollins}@csail.mit.com Abstract We consider the problem of learning context-dependent mappings from sentences to logical form. The training examples are sequences of sentences annotated with lambda-calculus meaning representations. We develop an algorithm that maintains explicit, lambda-calculus representations of salient discourse entities and uses a context-dependent analysis pipeline to recover logical forms. The method uses a hidden-variable variant of the perception algorithm to learn a linear model used to select the best analysis. Experiments on context-dependent utterances from the ATIS corpus show that the method recovers fully correct logical forms with 83.7% accuracy. 1 Introduction Recently, researchers have developed algorithms that learn to map natural language sentences to representations of their underlying meaning (He and Young, 2006; Wong and Mooney, 2007; Zettlemoyer and Collins, 2005). For instance, a training example might be: Sent. 1: List flights to Boston on Friday night. LF 1: λx.flight(x) ∧to(x, bos) ∧day(x, fri) ∧during(x, night) Here the logical form (LF) is a lambda-calculus expression defining a set of entities that are flights to Boston departing on Friday night. Most of this work has focused on analyzing sentences in isolation. In this paper, we consider the problem of learning to interpret sentences whose underlying meanings can depend on the context in which they appear. For example, consider an interaction where Sent. 1 is followed by the sentence: Sent. 2: Show me the flights after 3pm. LF 2: λx.flight(x) ∧to(x, bos) ∧day(x, fri) ∧depart(x) > 1500 In this case, the fact that Sent. 2 describes flights to Boston on Friday must be determined based on the context established by the first sentence. We introduce a supervised, hidden-variable approach for learning to interpret sentences in context. Each training example is a sequence of sentences annotated with logical forms. Figure 1 shows excerpts from three training examples in the ATIS corpus (Dahl et al., 1994). For context-dependent analysis, we develop an approach that maintains explicit, lambda-calculus representations of salient discourse entities and uses a two-stage pipeline to construct contextdependent logical forms. The first stage uses a probabilistic Combinatory Categorial Grammar (CCG) parsing algorithm to produce a contextindependent, underspecified meaning representation. The second stage resolves this underspecified meaning representation by making a sequence of modifications to it that depend on the context provided by previous utterances. In general, there are a large number of possible context-dependent analyses for each sentence. To select the best one, we present a weighted linear model that is used to make a range of parsing and context-resolution decisions. Since the training data contains only the final logical forms, we model these intermediate decisions as hidden variables that must be estimated without explicit supervision. We show that this model can be effectively trained with a hidden-variable variant of the perceptron algorithm. In experiments on the ATIS DEC94 test set, the approach recovers fully correct logical forms with 83.7% accuracy. 
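Before the formal setup in the next section, it may help to see how such context-dependent pairs can be written down as data. The sketch below encodes the two example sentences and their logical forms as plain strings; the representation is purely illustrative and is not the lambda-calculus machinery used in the paper.

from dataclasses import dataclass
from typing import List

@dataclass
class Utterance:
    words: str
    logical_form: str  # lambda-calculus expression, written here as a plain string

# An interaction is a sequence of utterances; the logical form of each later
# sentence may only be recoverable given the earlier ones (the context).
interaction: List[Utterance] = [
    Utterance(
        "list flights to boston on friday night",
        "lambda x. flight(x) & to(x,bos) & day(x,fri) & during(x,night)",
    ),
    Utterance(
        "show me the flights after 3pm",
        "lambda x. flight(x) & to(x,bos) & day(x,fri) & depart(x)>1500",
    ),
]

# The context available when interpreting sentence j is the list of the
# j-1 preceding logical forms.
def context(interaction: List[Utterance], j: int) -> List[str]:
    return [u.logical_form for u in interaction[:j]]

print(context(interaction, 1))  # only the first logical form is available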
2 The Learning Problem We assume access to a training set that consists of n interactions D = ⟨I1, . . . , In⟩. The i’th interaction Ii contains ni sentences, wi,1, . . . , wi,ni. Each sentence wi,j is paired with a lambda-calculus ex976 Example #1: (a) show me the flights from boston to philly λx.flight(x) ∧from(x, bos) ∧to(x, phi) (b) show me the ones that leave in the morning λx.flight(x) ∧from(x, bos) ∧to(x, phi) ∧during(x, morning) (c) what kind of plane is used on these flights λy.∃x.flight(x) ∧from(x, bos) ∧to(x, phi) ∧during(x, morning) ∧aircraft(x) = y Example #2: (a) show me flights from milwaukee to orlando λx.flight(x) ∧from(x, mil) ∧to(x, orl) (b) cheapest argmin(λx.flight(x) ∧from(x, mil) ∧to(x, orl), λy.fare(y)) (c) departing wednesday after 5 o’clock argmin(λx.flight(x) ∧from(x, mil) ∧to(x, orl) ∧day(x, wed) ∧depart(x) > 1700 , λy.fare(y)) Example #3: (a) show me flights from pittsburgh to la thursday evening λx.flight(x) ∧from(x, pit) ∧to(x, la) ∧day(x, thur) ∧during(x, evening) (b) thursday afternoon λx.flight(x) ∧from(x, pit) ∧to(x, la) ∧day(x, thur) ∧during(x, afternoon) (c) thursday after 1700 hours λx.flight(x) ∧from(x, pit) ∧to(x, la) ∧day(x, thur) ∧depart(x) > 1700 Figure 1: ATIS interaction excerpts. pression zi,j specifying the target logical form. Figure 1 contains example interactions. The logical forms in the training set are representations of each sentence’s underlying meaning. In most cases, context (the previous utterances and their interpretations) is required to recover the logical form for a sentence. For instance, in Example 1(b) in Figure 1, the sentence “show me the ones that leave in the morning” is paired with λx.flight(x) ∧from(x, bos) ∧to(x, phi) ∧during(x, morning) Some parts of this logical form (from(x, bos) and to(x, phi)) depend on the context. They have to be recovered from the previous logical forms. At step j in interaction i, we define the context ⟨zi,1, . . . , zi,j−1⟩to be the j −1 preceding logical forms.1 Now, given the training data, we can create training examples (xi,j, zi,j) for i = 1 . . . n, j = 1 . . . ni. Each xi,j is a sentence and a context, xi,j = (wi,j, ⟨zi,1, . . . , zi,j−1⟩). Given this set up, we have a supervised learning problem with input xi,j and output zi,j. 1In general, the context could also include the previous sentences wi,k for k < j. In our data, we never observed any interactions where the choice of the correct logical form zi,j depended on the words in the previous sentences. 3 Overview of Approach In general, the mapping from a sentence and a context to a logical form can be quite complex. In this section, we present an overview of our learning approach. We assume the learning algorithm has access to: • A training set D, defined in Section 2. • A CCG lexicon.2 See Section 4 for an overview of CCG. Each entry in the lexicon pairs a word (or sequence of words), with a CCG category specifying both the syntax and semantics for that word. One example CCG entry would pair flights with the category N : λx.flight(x). Derivations A derivation for the j’th sentence in an interaction takes as input a pair x = (wj, C), where C = ⟨z1 . . . zj−1⟩is the current context. It produces a logical form z. There are two stages: • First, the sentence wj is parsed using the CCG lexicon to form an intermediate, context-independent logical form π. • Second, in a series of steps, π is mapped to z. These steps depend on the context C. As one sketch of a derivation, consider how we might analyze Example 1(b) in Figure 1. 
In this case the sentence is “show me the ones that leave in the morning.” The CCG parser would produce the following context-independent logical form: λx.!⟨e, t⟩(x) ∧during(x, morning) The subexpression !⟨e, t⟩results directly from the referential phrase the ones; we discuss this in more detail in Section 4.2, but intuitively this subexpression specifies that a lambda-calculus expression of type ⟨e, t⟩must be recovered from the context and substituted in its place. In the second (contextually dependent) stage of the derivation, the expression λx.flight(x) ∧from(x, bos) ∧to(x, phi) is recovered from the context, and substituted for the !⟨e, t⟩subexpression, producing the desired final logical form, seen in Example 1(b). 2Developing algorithms that learn the CCG lexicon from the data described in this paper is an important area for future work. We could possibly extend algorithms that learn from context-independent data (Zettlemoyer and Collins, 2005). 977 In addition to substitutions of this type, we will also perform other types of context-dependent resolution steps, as described in Section 5. In general, both of the stages of the derivation involve considerable ambiguity – there will be a large number of possible context-independent logical forms π for wj and many ways of modifying each π to create a final logical form zj. Learning We model the problem of selecting the best derivation as a structured prediction problem (Johnson et al., 1999; Lafferty et al., 2001; Collins, 2002; Taskar et al., 2004). We present a linear model with features for both the parsing and context resolution stages of the derivation. In our setting, the choice of the context-independent logical form π and all of the steps that map π to the output z are hidden variables; these steps are not annotated in the training data. To estimate the parameters of the model, we use a hidden-variable version of the perceptron algorithm. We use an approximate search procedure to find the best derivation both while training the model and while applying it to test examples. Evaluation We evaluate the approach on sequences of sentences ⟨w1, . . . , wk⟩. For each wj, the algorithm constructs an output logical form zj which is compared to a gold standard annotation to check correctness. At step j, the context contains the previous zi, for i < j, output by the system. 4 Context-independent Parsing In this section, we first briefly review the CCG parsing formalism. We then define a set of extensions that allow the parser to construct logical forms containing references, such as the !⟨e, t⟩expression from the example derivation in Section 3. 4.1 Background: CCG CCG is a lexicalized, mildly context-sensitive parsing formalism that models a wide range of linguistic phenomena (Steedman, 1996; Steedman, 2000). Parses are constructed by combining lexical entries according to a small set of relatively simple rules. For example, consider the lexicon flights := N : λx.flight(x) to := (N\N)/NP : λy.λf.λx.f(x) ∧to(x, y) boston := NP : boston Each lexical entry consists of a word and a category. Each category includes syntactic and semantic content. For example, the first entry pairs the word flights with the category N : λx.flight(x). This category has syntactic type N, and includes the lambda-calculus semantic expression λx.flight(x). In general, syntactic types can either be simple types such as N, NP, or S, or can be more complex types that make use of slash notation, for example (N\N)/NP. 
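The lexicon just described can be represented with a very small data structure. In the sketch below each entry pairs a word with a syntactic type and a semantics, both kept as strings for readability; this illustrates the shape of the lexicon rather than the parser used in the paper (the anaphoric entry for "ones" anticipates the reference expressions of Section 4.2).

from collections import namedtuple

# A lexical entry pairs a word (or phrase) with a CCG category:
# a syntactic type plus a lambda-calculus semantics.
LexicalEntry = namedtuple("LexicalEntry", ["word", "syntax", "semantics"])

lexicon = [
    LexicalEntry("flights", "N", "lambda x. flight(x)"),
    LexicalEntry("to", r"(N\N)/NP", "lambda y. lambda f. lambda x. f(x) & to(x,y)"),
    LexicalEntry("boston", "NP", "boston"),
    # Anaphoric words introduce reference expressions such as !<e,t>.
    LexicalEntry("ones", "N", "lambda x. !<e,t>(x)"),
]

def entries_for(word):
    """Look up all lexical entries for a word."""
    return [e for e in lexicon if e.word == word]

print(entries_for("to")[0].syntax)  # (N\N)/NP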
CCG parses construct parse trees according to a set of combinator rules. For example, consider the functional application combinators:3 A/B : f B : g ⇒ A : f(g) (>) B : g A\B : f ⇒ A : f(g) (<) The first rule is used to combine a category with syntactic type A/B with a category to the right of syntactic type B to create a new category of type A. It also constructs a new lambda-calculus expression by applying the function f to the expression g. The second rule handles arguments to the left. Using these rules, we can parse the following phrase: flights to boston N (N\N)/NP NP λx.flight(x) λy.λf.λx.f(x) ∧to(x, y) boston > (N\N) λf.λx.f(x) ∧to(x, boston) < N λx.flight(x) ∧to(x, boston) The top-most parse operations pair each word with a corresponding category from the lexicon. The later steps are labeled with the rule that was applied (−> for the first and −< for the second). 4.2 Parsing with References In this section, we extend the CCG parser to introduce references. We use an exclamation point followed by a type expression to specify references in a logical form. For example, !e is a reference to an entity and !⟨e, t⟩is a reference to a function. As motivated in Section 3, we introduce these expressions so they can later be replaced with appropriate lambda-calculus expressions from the context. Sometimes references are lexically triggered. For example, consider parsing the phrase “show me the ones that leave in the morning” from Example 1(b) in Figure 1. Given the lexical entry: ones := N : λx.!⟨e, t⟩(x) a CCG parser could produce the desired context3In addition to application, we make use of composition, type raising and coordination combinators. A full description of these combinators is beyond the scope of this paper. Steedman (1996; 2000) presents a detailed description of CCG. 978 independent logical form: λx.!⟨e, t⟩(x) ∧during(x, morning) Our first extension is to simply introduce lexical items that include references into the CCG lexicon. They describe anaphoric words, for example including “ones,” “those,” and “it.” In addition, we sometimes need to introduce references when there is no explicit lexical trigger. For instance, Example 2(c) in Figure 1 consists of the single word “cheapest.” This query has the same meaning as the longer request “show me the cheapest one,” but it does not include the lexical reference. We add three CCG type-shifting rules to handle these cases. The first two new rules are applicable when there is a category that is expecting an argument with type ⟨e, t⟩. This argument is replaced with a !⟨e, t⟩reference: A/B : f ⇒ A : f(λx.!⟨e, t⟩(x)) A\B : f ⇒ A : f(λx.!⟨e, t⟩(x)) For example, using the first rule, we could produce the following parse for Example 2(c) cheapest NP/N λg.argmin(λx.g(x), λy.fare(y)) NP argmin(λx.!⟨e, t⟩(x), λy.fare(y)) where the final category has the desired lambdacaculus expression. The third rule is motivated by examples such as “show me nonstop flights.” Consider this sentence being uttered after Example 1(a) in Figure 1. Although there is a complete, context-independent meaning, the request actually restricts the salient set of flights to include only the nonstop ones. To achieve this analysis, we introduce the rule: A : f ⇒ A : λx.f(x) ∧!⟨e, t⟩(x) where f is an function of type ⟨e, t⟩. With this rule, we can construct the parse nonstop flights N/N N λf.λx.f(x) ∧nonstop(x) λx.flight(x) > N λx.nonstop(x) ∧flight(x) N λx.nonstop(x) ∧flight(x) ∧!⟨e, t⟩(x) where the last parsing step is achieved with the new type-shifting rule. 
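The application combinators above can be made concrete with a small sketch of the '>' and '<' steps. Here categories carry a syntactic type string and a semantics encoded as nested Python functions that render a formula once a variable name is supplied; this only illustrates how f(g) is computed and is not the CKY-style parser used later in the paper.

class Category:
    """A CCG category: a syntactic type plus a semantics (nested functions that
    render a formula once given a variable name)."""
    def __init__(self, syntax, semantics):
        self.syntax, self.semantics = syntax, semantics

def strip_parens(s):
    return s[1:-1] if s.startswith("(") and s.endswith(")") else s

def forward_apply(left, right):
    """A/B : f  +  B : g  =>  A : f(g)   (the '>' combinator)."""
    functor, argument = left.syntax.rsplit("/", 1)
    assert strip_parens(argument) == strip_parens(right.syntax)
    return Category(strip_parens(functor), left.semantics(right.semantics))

def backward_apply(left, right):
    """B : g  +  A\\B : f  =>  A : f(g)   (the '<' combinator)."""
    functor, argument = right.syntax.split("\\", 1)
    assert strip_parens(argument) == strip_parens(left.syntax)
    return Category(strip_parens(functor), right.semantics(left.semantics))

# Lexical entries for "flights to boston".
flights = Category("N", lambda x: "flight(%s)" % x)
boston = Category("NP", "boston")
to = Category("(N\\N)/NP",
              lambda y: lambda f: lambda x: "%s & to(%s,%s)" % (f(x), x, y))

to_boston = forward_apply(to, boston)        # N\N
phrase = backward_apply(flights, to_boston)  # N
print(phrase.syntax, ":", "lambda x.", phrase.semantics("x"))
# N : lambda x. flight(x) & to(x,boston)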
These three new parsing rules allow significant flexibility when introducing references. Later, we develop an approach that learns when to introduce references and how to best resolve them. 5 Contextual Analysis In this section, we first introduce the general patterns of context-dependent analysis that we consider. We then formally define derivations that model these phenomena. 5.1 Overview This section presents an overview of the ways that the context C is used during the analysis. References Every reference expression (!e or !⟨e, t⟩) must be replaced with an expression from the context. For example, in Section 3, we considered the following logical form: λx.!⟨e, t⟩(x) ∧during(x, morning) In this case, we saw that replacing the !⟨e, t⟩ subexpression with the logical form for Example 1(a), which is directly available in C, produces the desired final meaning. Elaborations Later statements can expand the meaning of previous ones in ways that are difficult to model with references. For example, consider analyzing Example 2(c) in Figure 1. Here the phrase “departing wednesday after 5 o’clock” has a context-independent logical form:4 λx.day(x, wed) ∧depart(x) > 1700 (1) that must be combined with the meaning of the previous sentence from the context C: argmin(λx.fight(x) ∧from(x, mil) ∧to(x, orl), λy.fare(y)) to produce the expression argmin(λx.fight(x) ∧from(x, mil) ∧to(x, orl) ∧day(x, wed) ∧depart(x) > 1700, λy.fare(y)) Intuitively, the phrase “departing wednesday after 5 o’clock” is providing new constraints for the set of flights embedded in the argmin expression. We handle examples of this type by constructing elaboration expressions from the zi in C. For example, if we constructed the following function: λf.argmin(λx.fight(x) ∧from(x, mil) ∧to(x, orl) ∧f(x), (2) λy.fare(y)) 4Another possible option is the expression λx.!⟨e, t⟩∧ day(x, wed) ∧depart(x) > 1700. However, there is no obvious way to resolve the !⟨e, t⟩expression that would produce the desired final meaning. 979 we could apply this function to Expression 1 and produce the desired result. The introduction of the new variable f provides a mechanism for expanding the embedded subexpression. References with Deletion When resolving references, we will sometimes need to delete subparts of the expressions that we substitute from the context. For instance, consider Example 3(b) in Figure 1. The desired, final logical form is: λx.flight(x) ∧from(x, pit) ∧to(x, la) ∧day(x, thur) ∧during(x, afternoon) We need to construct this from the contextindependent logical form: λx.!⟨e, t⟩∧day(x, thur) ∧during(x, afternoon) The reference !⟨e, t⟩must be resolved. The only expression in the context C is the meaning from the previous sentence, Example 3(a): λx.flight(x) ∧from(x, pit) ∧to(x, la) (3) ∧day(x, thur) ∧during(x, evening) Substituting this expression directly would produce the following logical form: λx.flight(x) ∧from(x, pit) ∧to(x, la) ∧day(x, thur) ∧during(x, evening) ∧day(x, thur) ∧during(x, afternoon) which specifies the day twice and has two different time spans. We can achieve the desired analysis by deleting parts of expressions before they are substituted. For example, we could remove the day and time constraints from Expression 3 to create: λx.flight(x) ∧from(x, pit) ∧to(x, la) which would produce the desired final meaning when substituted into the original expression. Elaborations with Deletion We also allow deletions for elaborations. 
In this case, we delete subexpressions of the elaboration expression that is constructed from the context. 5.2 Derivations We now formally define a derivation that maps a sentence wj and a context C = {z1, . . . , zj−1} to an output logical form zj. We first introduce notation for expressions in C that we will use in the derivation steps. We then present a definition of deletion. Finally, we define complete derivations. Context Sets Given a context C, our algorithm constructs three sets of expressions: • Re(C): A set of e-type expressions that can be used to resolve references. • R⟨e,t⟩(C): A set of ⟨e, t⟩-type expressions that can be used to resolve references. • E(C): A set of possible elaboration expressions (for example, see Expression 2). We will provide the details of how these sets are defined in Section 5.3. As an example, if C contains only the logical form λx.flight(x) ∧from(x, pit) ∧to(x, la) then Re(C) = {pit, la} and R⟨e,t⟩(C) is a set that contains a single entry, the complete logical form. Deletion A deletion operator accepts a logical form l and produces a new logical form l′. It constructs l′ by removing a single subexpression that appears in a coordination (conjunction or disjunction) in l. For example, if l is λx.flight(x) ∧from(x, pit) ∧to(x, la) there are three possible deletion operations, each of which removes a single subexpression. Derivations We now formally define a derivation to be a sequence d = (Π, s1, . . . , sm). Π is a CCG parse that constructs a context-independent logical form π with m −1 reference expressions.5 Each si is a function that accepts as input a logical form, makes some change to it, and produces a new logical form that is input to the next function si+1. The initial si for i < m are reference steps. The final sm is an optional elaboration step. • Reference Steps: A reference step is a tuple (l, l′, f, r, r1, . . . , rp). This operator selects a reference f in the input logical form l and an appropriately typed expression r from either Re(C) or R⟨e,t⟩(C). It then applies a sequence of p deletion operators to create new expressions r1 . . . rp. Finally, it constructs the output logical form l′ by substituting rp for the selected reference f in l. • Elaboration Steps: An elaboration step is a tuple (l, l′, b, b1, . . . , bq). This operator selects an expression b from E(C) and applies q deletions to create new expressions b1 . . . bq. The output expression l′ is bq(l). 5In practice, π rarely contains more than one reference. 980 In general, the space of possible derivations is large. In Section 6, we describe a linear model and decoding algorithm that we use to find high scoring derivations. 5.3 Context Sets For a context C = {z1, . . . , zj−1}, we define sets Re(C), R⟨e,t⟩(C), and E(C) as follows. e-type Expressions Re(z) is a set of e-type expressions extracted from a logical form z. We define Re(C) = Sj−1 i=1 Re(zi). Re(z) includes all e-type subexpressions of z.6 For example, if z is argmin(λx.flight(x) ∧from(x, mil) ∧to(x, orl), λy.fare(y)) the resulting set is Re(z) = {mil, orl, z}, where z is included because the entire argmin expression has type e. ⟨e, t⟩-type Expressions R⟨e,t⟩(z) is a set of ⟨e, t⟩-type expressions extracted from a logical form z. We define R⟨e,t⟩(C) = Sj−1 i=1 R⟨e,t⟩(zi). The set R⟨e,t⟩(z) contains all of the ⟨e, t⟩-type subexpressions of z. For each quantified variable x in z, it also contains a function λx.g. The expression g contains the subexpressions in the scope of x that do not have free variables. 
For example, if z is λy.∃x.flight(x) ∧from(x, bos) ∧to(x, phi) ∧during(x, morning) ∧aircraft(x) = y R⟨e,t⟩(z) would contain two functions: the entire expression z and the function λx.flight(x) ∧from(x, bos) ∧to(x, phi) ∧during(x, morning) constructed from the variable x, where the subexpression aircraft(x) = y has been removed because it contains the free variable y. Elaboration Expressions Finally, E(z) is a set of elaboration expressions constructed from a logical form z. We define E(C) = Sj−1 i=1 E(zi). E(z) is defined by enumerating the places where embedded variables are found in z. For each logical variable x and each coordination (conjunction or disjunction) in the scope of x, a new expression is created by defining a function λf.z′ where z′ has the function f(x) added to the appropriate coordination. This procedure would 6A lambda-calculus expression can be represented as a tree structure with flat branching for coordination (conjunction and disjunction). The subexpressions are the subtrees. produce the example elaboration Expression 2 and elaborations that expand other embedded expressions, such as the quantifier in Example 1(c). 6 A Linear Model In general, there will be many possible derivations d for an input sentence w in the current context C. In this section, we introduce a weighted linear model that scores derivations and a decoding algorithm that finds high scoring analyses. We define GEN(w; C) to be the set of possible derivations d for an input sentence w given a context C, as described in Section 5.2. Let φ(d) ∈Rm be an m-dimensional feature representation for a derivation d and θ ∈Rm be an m-dimensional parameter vector. The optimal derivation for a sentence w given context C and parameters θ is d∗(w; C) = arg max d∈GEN(w;C) θ · φ(d) Decoding We now describe an approximate algorithm for computing d∗(w; C). The CCG parser uses a CKY-style chart parsing algorithm that prunes to the top N = 50 entries for each span in the chart. We use a beam search procedure to find the best contextual derivations, with beam size N = 50. The beam is initialized to the top N logical forms from the CCG parser. The derivations are extended with reference and elaboration steps. The only complication is selecting the sequence of deletions. For each possible step, we use a greedy search procedure that selects the sequence of deletions that would maximize the score of the derivation after the step is applied. 7 Learning Figure 2 details the complete learning algorithm. Training is online and error-driven. Step 1 parses the current sentence in context. If the optimal logical form is not correct, Step 2 finds the best derivation that produces the labeled logical form7 and does an additive, perceptron-style parameter update. Step 3 updates the context. This algorithm is a direct extension of the one introduced by Zettlemoyer and Collins (2007). It maintains the context but does not have the lexical induction step that was previously used. 7For this computation, we use a modified version of the beam search algorithm described in Section 6, which prunes derivations that could not produce the desired logical form. 981 Inputs: Training examples {Ii|i = 1 . . . n}. Each Ii is a sequence {(wi,j, zi,j) : j = 1 . . . ni} where wi,j is a sentence and zi,j is a logical form. Number of training iterations T. Initial parameters θ. Definitions: The function φ(d) represents the features described in Section 8. GEN(w; C) is the set of derivations for sentence w in context C. 
GEN(w, z; C) is the set of derivations for sentence w in context C that produce the final logical form z. The function L(d) maps a derivation to its associated final logical form. Algorithm: • For t = 1 . . . T, i = 1 . . . n: (Iterate interactions) • Set C = {}. (Reset context) • For j = 1 . . . ni: (Iterate training examples) Step 1: (Check correctness) • Let d∗= arg maxd∈GEN(wi,j;C) θ · φ(d) . • If L(d∗) = zi,j, go to Step 3. Step 2: (Update parameters) • Let d′ = arg maxd∈GEN(wi,j,zi,j;C) θ · φ(d) . • Set θ = θ + φ(d′) −φ(d∗) . Step 3: (Update context) • Append zi,j to the current context C. Output: Estimated parameters θ. Figure 2: An online learning algorithm. 8 Features We now describe the features for both the parsing and context resolution stages of the derivation. 8.1 Parsing Features The parsing features are used to score the contextindependent CCG parses during the first stage of analysis. We use the set developed by Zettlemoyer and Collins (2007), which includes features that are sensitive to lexical choices and the structure of the logical form that is constructed. 8.2 Context Features The context features are functions of the derivation steps described in Section 5.2. In a derivation for sentence j of an interaction, let l be the input logical form when considering a new step s (a reference or elaboration step). Let c be the expression that s selects from a context set Re(zi), R⟨e,t⟩(zi), or E(zi), where zi, i < j, is an expression in the current context. Also, let r be a subexpression deleted from c. Finally, let f1 and f2 be predicates, for example from or to. Distance Features The distance features are binary indicators on the distance j −i. These features allow the model to, for example, favor resolving references with lambda-calculus expressions recovered from recent sentences. Copy Features For each possible f1 there is a feature that tests if f1 is present in the context expression c but not in the current expression l. These features allow the model to learn to select expressions from the context that introduce expected predicates. For example, flights usually have a from predicate in the current expression. Deletion Features For each pair (f1, f2) there is a feature that tests if f1 is in the current expression l and f2 is in the deleted expression r. For example, if f1 = f2 = days the model can favor overriding old constraints about the departure day with new ones introduced in the current utterance. When f1 = during and f2 = depart time the algorithm can learn that specific constraints on the departure time override more general constraints about the period of day. 9 Related Work There has been a significant amount of work on the problem of learning context-independent mappings from sentences to meaning representations. Researchers have developed approaches using models and algorithms from statistical machine translation (Papineni et al., 1997; Ramaswamy and Kleindienst, 2000; Wong and Mooney, 2007), statistical parsing (Miller et al., 1996; Ge and Mooney, 2005), inductive logic programming (Zelle and Mooney, 1996; Tang and Mooney, 2000) and probabilistic push-down automata (He and Young, 2006). There were a large number of successful handengineered systems developed for the original ATIS task and other related tasks (e.g., (Carbonell and Hayes, 1983; Seneff, 1992; Ward and Issar, 1994; Levin et al., 2000; Popescu et al., 2004)). We are only aware of one system that learns to construct context-dependent interpretations (Miller et al., 1996). The Miller et al. 
(1996) approach is fully supervised and produces a final meaning representation in SQL. It requires complete annotation of all of the syntactic, semantic, and discourse decisions required to correctly analyze each training example. In contrast, we learn from examples annotated with lambdacalculus expressions that represent only the final, context-dependent logical forms. Finally, the CCG (Steedman, 1996; Steedman, 982 Train Dev. Test All Interactions 300 99 127 526 Sentences 2956 857 826 4637 Table 1: Statistics of the ATIS training, development and test (DEC94) sets, including the total number of interactions and sentences. Each interaction is a sequence of sentences. 2000) parsing setup is closely related to previous CCG research, including work on learning parsing models (Clark and Curran, 2003), wide-coverage semantic parsing (Bos et al., 2004) and grammar induction (Watkinson and Manandhar, 1999). 10 Evaluation Data In this section, we present experiments in the context-dependent ATIS domain (Dahl et al., 1994). Table 1 presents statistics for the training, development, and test sets. To facilitate comparison with previous work, we used the standard DEC94 test set. We randomly split the remaining data to make training and development sets. We manually converted the original SQL meaning annotations to lambda-calculus expressions. Evaluation Metrics Miller et al. (1996) report accuracy rates for recovering correct SQL annotations on the test set. For comparison, we report exact accuracy rates for recovering completely correct lambda-calculus expressions. We also present precision, recall and F-measure for partial match results that test if individual attributes, such as the from and to cities, are correctly assigned. See the discussion by Zettlemoyer and Collins (2007) (ZC07) for the full details. Initialization and Parameters The CCG lexicon is hand engineered. We constructed it by running the ZC07 algorithm to learn a lexicon on the context-independent ATIS data set and making manual corrections to improve performance on the training set. We also added lexical items with reference expressions, as described in Section 4. We ran the learning algorithm for T = 4 training iterations. The parsing feature weights were initialized as in ZC07, the context distance features were given small negative weights, and all other feature weights were initially set to zero. Test Setup During evaluation, the context C = {z1 . . . zj−1} contains the logical forms output by the learned system for the previous sentences. In general, errors made while constructing these expressions can propogate if they are used in derivations for new sentences. System Partial Match Exact Prec. Rec. F1 Acc. Full Method 95.0 96.5 95.7 83.7 Miller et al. – – – 78.4 Table 2: Performance on the ATIS DEC94 test set. Limited Context Partial Match Exact Prec. Rec. F1 Acc. M = 0 96.2 57.3 71.8 45.4 M = 1 94.9 91.6 93.2 79.8 M = 2 94.8 93.2 94.0 81.0 M = 3 94.5 94.3 94.4 82.1 M = 4 94.9 92.9 93.9 81.6 M = 10 94.2 94.0 94.1 81.4 Table 3: Performance on the ATIS development set for varying context window lengths M. Results Table 2 shows performance on the ATIS DEC94 test set. Our approach correctly recovers 83.7% of the logical forms. This result compares favorably to Miller et al.’s fully-supervised approach (1996) while requiring significantly less annotation effort. We also evaluated performance when the context is limited to contain only the M most recent logical forms. 
Table 3 shows results on the development set for different values of M. The poor performance with no context (M = 0) demonstrates the need for context-dependent analysis. Limiting the context to the most recent statement (M = 1) significantly improves performance while using the last three utterances (M = 3) provides the best results. Finally, we evaluated a variation where the context contains gold-standard logical forms during evaluation instead of the output of the learned model. On the development set, this approach achieved 85.5% exact-match accuracy, an improvement of approximately 3% over the standard approach. This result suggests that incorrect logical forms in the context have a relatively limited impact on overall performance. 11 Conclusion In this paper, we addressed the problem of learning context-dependent mappings from sentences to logical form. We developed a contextdependent analysis model and showed that it can be effectively trained with a hidden-variable variant of the perceptron algorithm. In the experiments, we showed that the approach recovers fully correct logical forms with 83.7% accuracy. 983 References Johan Bos, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Widecoverage semantic representations from a CCG parser. In Proceedings of the International Conference on Computational Linguistics. Jaime G. Carbonell and Philip J. Hayes. 1983. Recovery strategies for parsing extragrammatical language. American Journal of Computational Linguistics, 9. Stephen Clark and James R. Curran. 2003. Log-linear models for wide-coverage CCG parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Deborah A. Dahl, Madeleine Bates, Michael Brown, William Fisher, Kate Hunicke-Smith, David Pallett, Christine Pao, Alexander Rudnicky, and Elizabeth Shriberg. 1994. Expanding the scope of the ATIS task: the ATIS-3 corpus. In ARPA HLT Workshop. Ruifang Ge and Raymond J. Mooney. 2005. A statistical semantic parser that integrates syntax and semantics. In Proceedings of the Conference on Computational Natural Language Learning. Yulan He and Steve Young. 2006. Spoken language understanding using the hidden vector state model. Speech Communication, 48(3-4). Mark Johnson, Stuart Geman, Steven Canon, Zhiyi Chi, and Stefan Riezler. 1999. Estimators for stochastic “unification-based” grammars. In Proc. of the Association for Computational Linguistics. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the International Conference on Machine Learning. E. Levin, S. Narayanan, R. Pieraccini, K. Biatov, E. Bocchieri, G. Di Fabbrizio, W. Eckert, S. Lee, A. Pokrovsky, M. Rahim, P. Ruscitti, and M. Walker. 2000. The AT&T darpa communicator mixedinitiative spoken dialogue system. In Proceedings of the International Conference on Spoken Language Processing. Scott Miller, David Stallard, Robert J. Bobrow, and Richard L. Schwartz. 1996. A fully statistical approach to natural language interfaces. In Proc. of the Association for Computational Linguistics. K. A. Papineni, S. Roukos, and T. R. Ward. 1997. Feature-based language understanding. 
In Proceedings of European Conference on Speech Communication and Technology. Ana-Maria Popescu, Alex Armanasu, Oren Etzioni, David Ko, and Alexander Yates. 2004. Modern natural language interfaces to databases: Composing statistical parsing with semantic tractability. In Proceedings of the International Conference on Computational Linguistics. Ganesh N. Ramaswamy and Jan Kleindienst. 2000. Hierarchical feature-based translation for scalable natural language understanding. In Proceedings of International Conference on Spoken Language Processing. Stephanie Seneff. 1992. Robust parsing for spoken language systems. In Proc. of the IEEE Conference on Acoustics, Speech, and Signal Processing. Mark Steedman. 1996. Surface Structure and Interpretation. The MIT Press. Mark Steedman. 2000. The Syntactic Process. The MIT Press. Lappoon R. Tang and Raymond J. Mooney. 2000. Automated construction of database interfaces: Integrating statistical and relational learning for semantic parsing. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Very Large Corpora. Ben Taskar, Dan Klein, Michael Collins, Daphne Koller, and Christopher Manning. 2004. Maxmargin parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Wayne Ward and Sunil Issar. 1994. Recent improvements in the CMU spoken language understanding system. In Proceedings of the workshop on Human Language Technology. Stephen Watkinson and Suresh Manandhar. 1999. Unsupervised lexical learning with categorial grammars using the LLL corpus. In Proceedings of the 1st Workshop on Learning Language in Logic. Yuk Wah Wong and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the Association for Computational Linguistics. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Proceedings of the National Conference on Artificial Intelligence. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the Conference on Uncertainty in Artificial Intelligence. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In Proc. of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. 984
2009
110
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 985–993, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP An Optimal-Time Binarization Algorithm for Linear Context-Free Rewriting Systems with Fan-Out Two Carlos G´omez-Rodr´ıguez Departamento de Computaci´on Universidade da Coru˜na, Spain [email protected] Giorgio Satta Department of Information Engineering University of Padua, Italy [email protected] Abstract Linear context-free rewriting systems (LCFRSs) are grammar formalisms with the capability of modeling discontinuous constituents. Many applications use LCFRSs where the fan-out (a measure of the discontinuity of phrases) is not allowed to be greater than 2. We present an efficient algorithm for transforming LCFRS with fan-out at most 2 into a binary form, whenever this is possible. This results in asymptotical run-time improvement for known parsing algorithms for this class. 1 Introduction Since its early years, the computational linguistics field has devoted much effort to the development of formal systems for modeling the syntax of natural language. There has been a considerable interest in rewriting systems that enlarge the generative power of context-free grammars, still remaining far below the power of the class of contextsensitive grammars; see (Joshi et al., 1991) for discussion. Following this line, (Vijay-Shanker et al., 1987) have introduced a formalism called linear context-free rewriting systems (LCFRSs) that has received much attention in later years by the community. LCFRSs allow the derivation of tuples of strings,1 i.e., discontinuous phrases, that turn out to be very useful in modeling languages with relatively free word order. This feature has recently been used for mapping non-projective dependency grammars into discontinuous phrase structures (Kuhlmann and Satta, 2009). Furthermore, LCFRSs also implement so-called synchronous 1In its more general definition, an LCFRS provides a framework where abstract structures can be generated, as for instance trees and graphs. Throughout this paper we focus on so-called string-based LCFRSs, where rewriting is defined over strings only. rewriting, up to some bounded degree, and have recently been exploited, in some syntactic variant, in syntax-based machine translation (Chiang, 2005; Melamed, 2003) as well as in the modeling of syntax-semantic interface (Nesson and Shieber, 2006). The maximum number f of tuple components that can be generated by an LCFRS G is called the fan-out of G, and the maximum number r of nonterminals in the right-hand side of a production is called the rank of G. As an example, contextfree grammars are LCFRSs with f = 1 and r given by the maximum length of a production right-hand side. Tree adjoining grammars (Joshi and Levy, 1977), or TAG for short, can be viewed as a special kind of LCFRS with f = 2, since each elementary tree generates two strings, and r given by the maximum number of adjunction sites in an elementary tree. Several parsing algorithms for LCFRS or equivalent formalisms are found in the literature; see for instance (Seki et al., 1991; Boullier, 2004; Burden and Ljungl¨of, 2005). All of these algorithms work in time O(|G| · |w|f·(r+1)). Parsing time is then exponential in the input grammar size, since |G| depends on both f and r. In the development of efficient algorithms for parsing based on LCFRS the crucial goal is therefore to optimize the term f · (r + 1). 
In practical natural language processing applications the fan-out of the grammar is typically bounded by some small number. As an example, in the case of discontinuous parsing discussed above, we have f = 2 for most practical cases. On the contrary, LCFRS productions with a relatively large number of nonterminals are usually observed in real data. The reduction of the rank of a LCFRS, called binarization, is a process very similar to the reduction of a context-free grammar into Chomsky normal form. While in the special case of CFG and TAG this can always be achieved, 985 binarization of an LCFRS requires, in the general case, an increase in the fan-out of the grammar much larger than the achieved reduction in the rank. Worst cases and some lower bounds have been discussed in (Rambow and Satta, 1999; Satta, 1998). Nonetheless, in many cases of interest binarization of an LCFRS can be carried out without any extra increase in the fan-out. As an example, in the case where f = 2, binarization of a LCFRS would result in parsing time of O(|G| · |w|6). With the motivation of parsing efficiency, much research has been recently devoted to the design of efficient algorithms for rank reduction, in cases in which this can be carried out at no extra increase in the fan-out. (G´omez-Rodr´ıguez et al., 2009) reports a general binarization algorithm for LCFRS. In the case where f = 2, this algorithm works in time O(|p|7), where p is the input production. A more efficient algorithm is presented in (Kuhlmann and Satta, 2009), working in time O(|p|) in case of f = 2. However, this algorithm works for a restricted typology of productions, and does not cover all cases in which some binarization is possible. Other linear time algorithms for rank reduction are found in the literature (Zhang et al., 2008), but they are restricted to the case of synchronous context-free grammars, a strict subclass of the LCFRS with f = 2. In this paper we focus our attention on LCFRS with a fan-out of two. We improve upon all of the above mentioned results, by providing an algorithm that computes a binarization of an LCFRS production in all cases in which this is possible and works in time O(|p|). This is an optimal result in terms of time complexity, since Θ(|p|) is also the size of any output binarization of an LCFRS production. 2 Linear context-free rewriting systems We briefly summarize here the terminology and notation that we adopt for LCFRS; for detailed definitions, see (Vijay-Shanker et al., 1987). We denote the set of non-negative integers by N. For i, j ∈N, the interval {k | i ≤k ≤j} is denoted by [i, j]. We write [i] as a shorthand for [1, i]. For an alphabet V , we write V ∗for the set of all (finite) strings over V . As already mentioned in Section 1, linear context-free rewriting systems generate tuples of strings over some finite alphabet. This is done by associating each production p of a grammar with a function g that rearranges the string components in the tuples generated by the nonterminals in p’s right-hand side, possibly adding some alphabet symbols. Let V be some finite alphabet. For natural numbers r ≥0 and f, f1, . . . , fr ≥1, consider a function g : (V ∗)f1 × · · · × (V ∗)fr → (V ∗)f defined by an equation of the form g(⟨x1,1, . . . , x1,f1⟩, . . . , ⟨xr,1, . . . , xr,fr⟩) = ⃗α, where ⃗α = ⟨α1, . . . , αf⟩is an f-tuple of strings over g’s argument variables and symbols in V . We say that g is linear, non-erasing if ⃗α contains exactly one occurrence of each argument variable. 
We call r and f the rank and the fan-out of g, respectively, and write r(g) and f(g) to denote these quantities. A linear context-free rewriting system (LCFRS) is a tuple G = (VN, VT , P, S), where VN and VT are finite, disjoint alphabets of nonterminal and terminal symbols, respectively. Each A ∈VN is associated with a value f(A), called its fan-out. The nonterminal S is the start symbol, with f(S) = 1. Finally, P is a set of productions of the form p : A →g(A1, A2, . . . , Ar(g)) , where A, A1, . . . , Ar(g) ∈VN, and g : (V ∗ T )f(A1) × · · · × (V ∗ T )f(Ar(g)) →(V ∗ T )f(A) is a linear, nonerasing function. A production p of G can be used to transform a sequence of r(g) string tuples generated by the nonterminals A1, . . . , Ar(g) into a tuple of f(A) strings generated by A. The values r(g) and f(g) are called the rank and fan-out of p, respectively, written r(p) and f(p). The rank and fan-out of G, written r(G) and f(G), respectively, are the maximum rank and fan-out among all of G’s productions. Given that f(S) = 1, S generates a set of strings, defining the language of G. Example 1 Consider the LCFRS G defined by the productions p1 : S →g1(A), g1(⟨x1,1, x1,2⟩) = ⟨x1,1x1,2⟩ p2 : A →g2(A), g2(⟨x1,1, x1,2⟩) = ⟨ax1,1b, cx1,2d⟩ p3 : A →g3(), g3() = ⟨ε, ε⟩ We have f(S) = 1, f(A) = f(G) = 2, r(p3) = 0 and r(p1) = r(p2) = r(G) = 1. G generates the string language {anbncndn | n ∈N}. For instance, the string a3b3c3d3 is generated by means 986 of the following bottom-up process. First, the tuple ⟨ε, ε⟩is generated by A through p3. We then iterate three times the application of p2 to ⟨ε, ε⟩, resulting in the tuple ⟨a3b3, c3d3⟩. Finally, the tuple (string) ⟨a3b3c3d3⟩is generated by S through application of p1. 2 3 Position sets and binarizations Throughout this section we assume an LCFRS production p : A →g(A1, . . . , Ar) with g defined through a tuple ⃗α as in section 2. We also assume that the fan-out of A and the fan-out of each Ai are all bounded by two. 3.1 Production representation We introduce here a specialized representation for p. Let $ be a fresh symbol that does not occur in p. We define the characteristic string of p as the string σN(p) = α′ 1$α′ 2$ · · · $α′ f(A), where each α′ j is obtained from αj by removing all the occurrences of symbols in VT . Consider now some occurrence Ai of a nonterminal symbol in the right-hand side of p. We define the position set of Ai, written XAi, as the set of all non-negative integers j ∈[|σN(p)|] such that the j-th symbol in σN(p) is a variable of the form xi,h for some h. Example 2 Let p : A →g(A1, A2, A3), where g(⟨x1,1, x1,2⟩, ⟨x2,1⟩, ⟨x3,1, x3,2⟩) = ⃗α with ⃗α = ⟨x1,1ax2,1x1,2, x3,1bx3,2⟩. We have σN(p) = x1,1x2,1x1,2$x3,1x3,2, XA1 = {1, 3}, XA2 = {2} and XA3 = {5, 6}. 2 Each position set X ⊆[|σN(p)|] can be represented by means of non-negative integers i1 < i2 < · · · < i2k satisfying X = k[ j=1 [i2j−1 + 1, i2j]. In other words, we are decomposing X into the union of k intervals, with k as small as possible. It is easy to see that this decomposition is always unique. We call set E = {i1, i2, . . . , i2k} the endpoint set associated with X, and we call k the fan-out of X, written f(X). Throughout this paper, we will represent p as the collection of all the position sets associated with the occurrences of nonterminals in its right-hand side. Let X1 and X2 be two disjoint position sets (i.e., X1 ∩X2 = ∅), with f(X1) = k1 and f(X2) = k2 and with associated endpoint sets E1 and E2, respectively. We define the merge of X1 and X2 as the set X1 ∪X2. 
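Example 2 can be reproduced mechanically with a few lines of code. The sketch below builds the position sets from the tuple defining g (variables encoded as (i, h) pairs, terminals as strings, with one '$' position between tuple components) and recovers the endpoint set and fan-out of a position set; this encoding of a production is an assumption made for the illustration.

def characteristic_positions(components):
    """Position sets X_{A_i}, indexing into sigma_N(p): terminals are dropped and
    a '$' separator occupies one position between tuple components."""
    position_sets = {}
    pos = 0
    for k, component in enumerate(components):
        if k > 0:
            pos += 1  # the '$' separator
        for token in component:
            if isinstance(token, tuple):  # a variable x_{i,h}
                pos += 1
                position_sets.setdefault(token[0], set()).add(pos)
    return position_sets

def endpoints(position_set):
    """Endpoint set {i1,...,i2k} of the unique decomposition into k maximal intervals."""
    ends = []
    for j in sorted(position_set):
        if j - 1 not in position_set:
            ends.append(j - 1)   # left endpoint of a new interval
        if j + 1 not in position_set:
            ends.append(j)       # right endpoint of the interval
    return ends

def fan_out(position_set):
    return len(endpoints(position_set)) // 2

# Example 2: alpha = < x_{1,1} a x_{2,1} x_{1,2} , x_{3,1} b x_{3,2} >
components = [[(1, 1), "a", (2, 1), (1, 2)], [(3, 1), "b", (3, 2)]]
X = characteristic_positions(components)
print(X)                             # {1: {1, 3}, 2: {2}, 3: {5, 6}}
print(fan_out(X[1]), fan_out(X[3]))  # 2 1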
We extend the position set and end-point set terminology to these merge sets as well. It is easy to check that the endpoint set associated to position set X1 ∪X2 is (E1 ∪E2)\(E1 ∩E2). We say that X1 and X2 are 2-combinable if f(X1 ∪X2) ≤2. We also say that X1 and X2 are adjacent, written X1 ↔X2, if f(X1 ∪X2) ≤max(k1, k2). It is not difficult to see that X1 ↔X2 if and only if X1 and X2 are disjoint and |E1 ∩E2| ≥min(k1, k2). Note also that X1 ↔X2 always implies that X1 and X2 are 2-combinable (but not the other way around). Let X be a collection of mutually disjoint position sets. A reduction of X is the process of merging two position sets X1, X2 ∈X, resulting in a new collection X ′ = (X \{X1, X2})∪{X1∪X2}. The reduction is 2-feasible if X1 and X2 are 2combinable. A binarization of X is a sequence of reductions resulting in a new collection with two or fewer position sets. The binarization is 2-feasible if all of the involved reductions are 2feasible. Finally, we say that X is 2-feasible if there exists at least one 2-feasible binarization for X. As an important remark, we observe that when a collection X represents the position sets of all the nonterminals in the right-hand side of a production p with r(p) > 2, then a 2-feasible reduction merging XAi, XAj ∈X can be interpreted as follows. We replace p by means of a new production p′ obtained from p by substituting Ai and Aj with a fresh nonterminal symbol B, so that r(p′) = r(p) −1. Furthermore, we create a new production p′′ with Ai and Aj in its right-hand side, such that f(p′′) = f(B) ≤2 and r(p′′) = 2. Productions p′ and p′′ together are equivalent to p, but we have now achieved a local reduction in rank of one unit. Example 3 Let p be defined as in example 2 and let X = {XA1, XA2, XA3}. We have that XA1 and XA2 are 2-combinable, and their merge is the new position set X = XA1 ∪XA2 = {1, 2, 3}. This merge corresponds to a 2-feasible reduction of X resulting in X ′ = {X, XA3}. Such a reduction corresponds to the construction of a new production p′ : A →g′(B, A3) with g′(⟨x1,1⟩, ⟨x3,1, x3,2⟩) = ⟨x1,1, x3,1bx3,2⟩; 987 and a new production p′′ : B →g′′(A1, A2) with g′′(⟨x1,1, x1,2⟩, ⟨x2,1⟩) = ⟨x1,1ax2,1x1,2⟩. 2 It is easy to see that X is 2-feasible if and only if there exists a binarization of p that does not increase its fan-out. Example 4 It has been shown in (Rambow and Satta, 1999) that binarization of an LCFRS G with f(G) = 2 and r(G) = 3 is always possible without increasing the fan-out, and that if r(G) ≥ 4 then this is no longer true. Consider the LCFRS production p : A →g(A1, A2, A3, A4), with g(⟨x1,1, x1,2⟩, ⟨x2,1, x2,2⟩, ⟨x3,1, x3,2⟩, ⟨x4,1, x4,2⟩) = ⃗α, ⃗α = ⟨x1,1x2,1x3,1x4,1, x2,2x4,2x1,2x3,2⟩. It is not difficult to see that replacing any set of two or three nonterminals in p’s right-hand side forces the creation of a fresh nonterminal of fan-out larger than two. 2 3.2 Greedy decision theorem The binarization algorithm presented in this paper proceeds by representing each LCFRS production p as a collection of disjoint position sets, and then finding a 2-feasible binarization of p. This binarization is computed deterministically, by an iterative process that greedily chooses merges corresponding to pairs of adjacent position sets. The key idea behind the algorithm is based on a theorem that guarantees that any merge of adjacent sets preserves the property of 2-feasibility: Theorem 1 Let X be a 2-feasible collection of position sets. 
The reduction of X by merging any two adjacent position sets D1, D2 ∈X results in a new collection X ′ which is 2-feasible. To prove Theorem 1 we consider that, since X is 2-feasible, there must exist at least one 2-feasible binarization for X. We can write this binarization β as a sequence of reductions, where each reduction is characterized by a pair of position sets (X1, X2) which are merged into X1 ∪X2, in such a way that both each of the initial sets and the result of the merge have fan-out at most 2. We will show that, under these conditions, for every pair of adjacent position sets D1 and D2, there exists a binarization that starts with the reduction merging D1 with D2. Without loss of generality, we assume that f(D1) ≤f(D2) (if this inequality does not hold we can always swap the names of the two position sets, since the merging operation is commutative), and we define a function hD1→D2 : 2N →2N as follows: • hD1→D2(X) = X; if D1 ⊈X ∧D2 ⊈X. • hD1→D2(X) = X; if D1 ⊆X ∧D2 ⊆X. • hD1→D2(X) = X ∪D1; if D1 ⊈X ∧D2 ⊆ X. • hD1→D2(X) = X \ D1; if D1 ⊆X ∧D2 ⊈ X. With this, we construct a binarization β′ from β as follows: • The first reduction in β′ merges the pair of position sets (D1, D2), • We consider the reductions in β in order, and for each reduction o merging (X1, X2), if X1 ̸= D1 and X2 ̸= D1, we append a reduction o′ merging (hD1→D2(X1), hD1→D2(X2)) to β′. We will now prove that, if β is a 2-feasible binarization, then β′ is also a 2-feasible binarization. To prove this, it suffices to show the following:2 (i) Every position set merged by a reduction in β′ is either one of the original sets in X, or the result of a previous merge in β′. (ii) Every reduction in β′ merges a pair of position sets (X1, X2) which are 2-combinable. To prove (i) we note that by construction of β′, if an operand of a merging operation in β′ is not one of the original position sets in X, then it must be an hD1→D2(X) for some X that appears as an operand of a merging operation in β. Since the binarization β is itself valid, this X must be either one of the position sets in X, or the result of a previous merge in the binarization β. So we divide the proof into two cases: • If X ∈X: First of all, we note that X cannot be D1, since the merging operations of β that have D1 as an operand do not produce 2It is also necessary to show that no position set is merged in two different reductions, but this easily follows from the fact that hD1→D2(X) = hD1→D2(Y ) if and only if X ∪ D1 = Y ∪D1. Thus, two reductions in β can only produce conflicting reductions in β′ if they merge two position sets differing only by D1, but in this case, one of the reductions must merge D1 so it does not produce any reduction in β′. 988 a corresponding operation in β′. If X equals D2, then hD1→D2(X) is D1 ∪D2, which is the result of the first merging operation in β′. Finally, if X is one of the position sets in X, and not D1 or D2, then hD1→D2(X) = X, so our operand is also one of the position sets in X. • If X is the result of a previous merging operation o in binarization β: Then, hD1→D2(X) is the result of a previous merging operation o′ in binarization β′, which is obtained by applying the function hD1→D2 to the operands and result of o. 3 To prove (ii), we show that, under the assumptions of the theorem, the function hD1→D2 preserves 2-combinability. 
Since two position sets of fan-out ≤2 are 2-combinable if and only if they are disjoint and the fan-out of their union is at most 2, it suffices to show that, for every X, X1, X2 unions of one or more sets of X, having fan-out ≤2, such that X1 ̸= D1, X2 ̸= D1 and X ̸= D1; (a) The function hD1→D2 preserves disjointness, that is, if X1 and X2 are disjoint, then hD1→D2(X1) and hD1→D2(X2) are disjoint. (b) The function hD1→D2 is distributive with respect to the union of position sets, that is, hD1→D2(X1 ∪X2) = hD1→D2(X1) ∪ hD1→D2(X2). (c) The function hD1→D2 preserves the property of having fan-out ≤2, that is, if X has fan-out ≤2, then hD1→D2(X) has fan-out ≤2. If X1 and X2 do not contain D1 or D2, or if one of the two unions X1 or X2 contains D1 ∪D2, properties (a) and (b) are trivial, since the function hD1→D2 behaves as the identity function in these cases. It remains to show that (a) and (b) are true in the following cases: • X1 contains D1 but not D2, and X2 does not contain D1 or D2: 3Except if one of the operands of the operation o was D1. But in this case, if we call the other operand Z, then we have that X = D1 ∪Z. If Z contains D2, then X = D1 ∪ Z = hD1→D2(X) = hD1→D2(Z), so we apply this same reasoning with hD1→D2(Z) where we cannot fall into this case, since there can be only one merge operation in β that uses D1 as an operand. If Z does not contain D2, then we have that hD1→D2(X) = X \ D1 = Z = hD1→D2(Z), so we can do the same. In this case, if X1 and X2 are disjoint, we can write X1 = Y1∪D1, such that Y1, X2, D1 are pairwise disjoint. By definition, we have that hD1→D2(X1) = Y1, and hD1→D2(X2) = X2, which are disjoint, so (a) holds. Property (b) also holds because, with these expressions for X1 and X2, we can calculate hD1→D2(X1 ∪X2) = Y1 ∪X2 = hD1→D2(X1) ∪hD1→D2(X2). • X1 contains D2 but not D1, X2 does not contain D1 or D2: In this case, if X1 and X2 are disjoint, we can write X1 = Y1 ∪D2, such that Y1, X2, D1, D2 are pairwise disjoint. By definition, hD1→D2(X1) = Y1 ∪D2 ∪D1, and hD1→D2(X2) = X2, which are disjoint, so (a) holds. Property (b) also holds, since we can check that hD1→D2(X1 ∪X2) = Y1 ∪X2 ∪D2 ∪ D1 = hD1→D2(X1) ∪hD1→D2(X2). • X1 contains D1 but not D2, X2 contains D2 but not D1: In this case, if X1 and X2 are disjoint, we can write X1 = Y1 ∪D1 and X2 = Y2 ∪D2, such that Y1, Y2, D1, D2 are pairwise disjoint. By definition, we know that hD1→D2(X1) = Y1, and hD1→D2(X2) = Y2 ∪D1 ∪D2, which are disjoint, so (a) holds. Finally, property (b) also holds in this case, since hD1→D2(X1 ∪X2) = Y1 ∪X2 ∪D2 ∪ D1 = hD1→D2(X1) ∪hD1→D2(X2). This concludes the proof of (a) and (b). To prove (c), we consider a position set X, union of one or more sets of X, with fan-out ≤2 and such that X ̸= D1. First of all, we observe that if X does not contain D1 or D2, or if it contains D1 ∪D2, (c) is trivial, because the function hD1→D2 behaves as the identity function in this case. So it remains to prove (c) in the cases where X contains D1 but not D2, and where X contains D2 but not D1. In any of these two cases, if we call E(Y ) the endpoint set associated with an arbitrary position set Y , we can make the following observations: 1. Since X has fan-out ≤2, E(X) contains at most 4 endpoints. 2. Since D1 has fan-out f(D1), E(D1) contains at most 2f(D1) endpoints. 989 3. Since D2 has fan-out f(D2), E(D2) contains at most 2f(D2) endpoints. 4. Since D1 and D2 are adjacent, we know that E(D1) ∩E(D2) contains at least min(f(D1), f(D2)) = f(D1) endpoints. 5. 
Therefore, E(D1) \ (E(D1) ∩E(D2)) can contain at most 2f(D1) −f(D1) = f(D1) endpoints. 6. On the other hand, since X contains only one of D1 and D2, we know that the endpoints where D1 is adjacent to D2 must also be endpoints of X, so that E(D1) ∩E(D2) ⊆ E(X). Therefore, E(X)\(E(D1)∩E(D2)) can contain at most 4 −f(D1) endpoints. Now, in the case where X contains D1 but not D2, we know that hD1→D2(X) = X\D1. We calculate a bound for the fan-out of X\D1 as follows: we observe that all the endpoints in E(X \ D1) must be either endpoints of X or endpoints of D1, since E(X) = (E(X \ D1) ∪E(D1)) \ (E(X \ D1) ∩E(D1)), so every position that is in E(X \ D1) but not in E(D1) must be in E(X). But we also observe that E(X \ D1) cannot contain any of the endpoints where D1 is adjacent to D2 (i.e., the members of E(D1) ∩E(D2)), since X \ D1 does not contain D1 or D2. Thus, we can say that any endpoint of X \ D1 is either a member of E(D1) \ (E(D1) ∩E(D2)), or a member of E(X) \ (E(D1) ∩E(D2)). Thus, the number of endpoints in E(X \ D1) cannot exceed the sum of the number of endpoints in these two sets, which, according to the reasonings above, is at most 4 −f(D1) + f(D1) = 4. Since E(X \ D1) cannot contain more than 4 endpoints, we conclude that the fan-out of X \ D1 is at most 2, so the function hD1→D2 preserves the property of position sets having fan-out ≤2 in this case. In the other case, where X contains D2 but not D1, we follow a similar reasoning: in this case, hD1→D2(X) = X ∪D1. To bound the fan-out of X ∪D1, we observe that all the endpoints in E(X ∪D1) must be either in E(X) or in E(D1), since E(X ∪D1) = (E(X) ∪E(D1)) \ (E(X) ∩ E(D1)). But we also know that E(X ∪D1) cannot contain any of the endpoints where D1 is adjacent to D2 (i.e., the members of E(D1)∩E(D2)), since X ∪D1 contains both D1 and D2. Thus, we can say that any endpoint of X ∪D1 is either a 1: Function BINARIZATION(p) 2: A ←∅; {working agenda} 3: R ←⟨⟩; {empty list of reductions} 4: for all i from 1 to r(p) do 5: A ←A ∪{XAi}; 6: while |A| > 2 and A contains two adjacent position sets do 7: choose X1, X2 ∈A such that X1 ↔X2; 8: X ←X1 ∪X2; 9: A ←(A \ {X1, X2}) ∪{X}; 10: append (X1, X2) to R; 11: if |A| = 2 then 12: return R; 13: else 14: return fail; Figure 1: Binarization algorithm for a production p : A →g(A1, . . . , Ar(p)). Result is either a list of reductions or failure. member of E(D1)\(E(D1)∩E(D2)), or a member of E(X) \ (E(D1) ∩E(D2)). Reasoning as in the previous case, we conclude that the fan-out of X ∪D1 is at most 2, so the function hD1→D2 also preserves the property of position sets having fan-out ≤2 in this case. This concludes the proof of Theorem 1. 4 Binarization algorithm Let p : A →g(A1, . . . , Ar(p)) be a production with r(p) > 2 from some LCFRS with fan-out not greater than 2. Recall from Subsection 3.1 that each occurrence of nonterminal Ai in the righthand side of p is represented as a position set XAi. The specification of an algorithm for finding a 2feasible binarization of p is reported in Figure 1. The algorithm uses an agenda A as a working set, where all position sets that still need to be processed are stored. A is initialized with the position sets XAi, 1 ≤i ≤r(p). At each step in the algorithm, the size of A represents the maximum rank among all productions that can be obtained from the reductions that have been chosen so far in the binarization process. 
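A minimal Python sketch of this greedy loop (the algorithm of Figure 1) may help fix ideas. Position sets are ordinary Python sets, fan-out is computed as the number of maximal blocks of consecutive positions, and the adjacency test follows the definition of X1 ↔ X2 given in Subsection 3.1. The naive quadratic pair search below ignores the constant-time bookkeeping discussed in Subsection 4.2, and the input sets in the example call are hypothetical, not those of Example 2.

```python
def fanout(X):
    """Fan-out of a position set: number of maximal blocks of consecutive positions."""
    xs = sorted(X)
    return sum(1 for i, p in enumerate(xs) if i == 0 or p != xs[i - 1] + 1)

def adjacent(X1, X2):
    """X1 <-> X2: disjoint and merging does not increase the larger fan-out."""
    return not (X1 & X2) and fanout(X1 | X2) <= max(fanout(X1), fanout(X2))

def binarization(position_sets):
    """Greedy reduction loop of Figure 1.

    `position_sets` lists the sets X_{A_1}, ..., X_{A_r(p)} of a production
    (extracting them from the production itself is not shown here).
    Returns the list of reductions R, or None on failure.
    """
    agenda = [set(X) for X in position_sets]
    reductions = []
    while len(agenda) > 2:
        pair = next(((X1, X2) for i, X1 in enumerate(agenda)
                     for X2 in agenda[i + 1:] if adjacent(X1, X2)), None)
        if pair is None:
            return None                      # no adjacent pair left: fail
        X1, X2 = pair
        agenda.remove(X1)
        agenda.remove(X2)
        agenda.append(X1 | X2)
        reductions.append((X1, X2))
    return reductions

# Hypothetical input; a single reduction suffices here.
print(binarization([{1, 3}, {2}, {4, 5}]))
```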
The algorithm also uses a list R, initialized as the empty list, where all reductions that are attempted in the binarization process are appended. At each iteration, the algorithm performs a reduction by arbitrarily choosing a pair of adjacent endpoint sets from the agenda and by merging them. As already discussed in Subsection 3.1, this 990 corresponds to some specific transformation of the input production p that preserves its generative capacity and that decreases its rank by one unit. We stop the iterations of the algorithm when we reach a state in which there are no more than two position sets in the agenda. This means that the binarization process has come to an end with the reduction of p to a set of productions equivalent to p and with rank and fan-out at most 2. This set of productions can be easily constructed from the output list R. We also stop the iterations in case no adjacent pair of position sets can be found in the agenda. If the agenda has more than two position sets, this means that no binarization has been found and the algorithm returns a failure. 4.1 Correctness To prove the correctness of the algorithm in Figure 1, we need to show that it produces a 2-feasible binarization of the given production p whenever such a binarization exists. This is established by the following theorem: Theorem 2 Let X be a 2-feasible collection of position sets, such that the union of all sets in X is a position set with fan-out ≤2. The procedure: while ( X contains any pair of adjacent sets X1, X2 ) reduce X by merging X1 with X2; always finds a 2-feasible binarization of X. In order to prove this, the loop invariant is that X is a 2-feasible set, and that the union of all position sets in X has fan-out ≤2: reductions can never change the union of all sets in X, and Theorem 1 guarantees us that every change to the state of X maintains 2-feasibility. We also know that the algorithm eventually finishes, because every iteration reduces the amount of position sets in X by 1; and the looping condition will not hold when the number of sets gets to be 1. So it only remains to prove that the loop is only exited if X contains at most two position sets. If we show this, we know that the sequence of reductions produced by this procedure is a 2-feasible binarization. Since the loop is exited when X is 2feasible but it contains no pair of adjacent position sets, it suffices to show the following: Proposition 1 Let X be a 2-feasible collection of position sets, such that the union of all the sets in X is a position set with fan-out ≤2. If X has more than two elements, then it contains at least a pair of adjacent position sets. 2 Let X be a 2-feasible collection of more than two position sets. Since X is 2-feasible, we know that there must be a 2-feasible binarization of X. Suppose that β is such a binarization, and let D1 and D2 be the two position sets that are merged in the first reduction of β. Since β is 2-feasible, D1 and D2 must be 2-combinable. If D1 and D2 are adjacent, our proposition is true. If they are not adjacent, then, in order to be 2combinable, the fan-out of both position sets must be 1: if any of them had fan-out 2, their union would need to have fan-out > 2 for D1 and D2 not to be adjacent, and thus they would not be 2combinable. Since D1 and D2 have fan-out 1 and are not adjacent, their sets of endpoints are of the form {b1, b2} and {c1, c2}, and they are disjoint. 
If we call EX the set of endpoints corresponding to the union of all the position sets in X and ED1D2 = {b1, b2, c1, c2}, we can show that at least one of the endpoints in ED1D2 does not appear in EX , since we know that EX can have at most 4 elements (as the union has fan-out ≤2) and that it cannot equal ED1D2 because this would mean that X = {D1, D2}, and by hypothesis X has more than two position sets. If we call this endpoint x, this means that there must be a position set D3 in X, different from D1 and D2, that has x as one of its endpoints. Since D1 and D2 have fan-out 1, this implies that D3 must be adjacent either to D1 or to D2, so we conclude the proof. 4.2 Implementation and complexity We now turn to the computational analysis of the algorithm in Figure 1. We define the length of an LCFRS production p, written |p|, as the sum of the length of all strings αj in ⃗α in the definition of the linear, non-erasing function associated with p. Since we are dealing with LCFRS of fan-out at most two, we easily derive that |p| = O(r(p)). In the implementation of the algorithm it is convenient to represent each position set by means of the corresponding endpoint set. Since at any time in the computation we are only processing position sets with fan-out not greater than two, each endpoint set will contain at most four integers. The for-loop at lines 4 and 5 in the algorithm can be easily implemented through a left-to-right scan of the characteristic string σN(p), detecting the endpoint sets associated with each position set XAi. This can be done in constant time for each 991 XAi, and thus in linear time in |p|. At each iteration of the while-loop at lines 6 to 10 we have that A is reduced in size by one unit. This means that the number of iterations is bounded by r(p). We will show below that each iteration of this loop can be executed in constant time. We can therefore conclude that our binarization algorithm runs in optimal time O(|p|). In order to run in constant time each single iteration of the while-loop at lines 6 to 10, we need to perform some additional bookkeeping. We use two arrays Ve and Va, whose elements are indexed by the endpoints associated with characteristic string σN(p), that is, integers i ∈[0, |σN(p)|]. For each endpoint i, Ve[i] stores all the endpoint sets that share endpoint i. Since each endpoint can be shared by at most two endpoint sets, such a data structure has size O(|p|). If there exists some position set X in A with leftmost endpoint i, then Va[i] stores all the position sets (represented as endpoint sets) that are adjacent to X. Since each position set can be adjacent to at most four other position sets, such a data structure has size O(|p|). Finally, we assume we can go back and forth between position sets in the agenda and their leftmost endpoints. We maintain arrays Ve and Va through the following simple procedures. • Whenever a new position set X is added to A, for each endpoint i of X we add X to Ve[i]. We also check whether any position set in Ve[i] other than X is adjacent to X, and add these position sets to Va[il], where il is the leftmost end point of X. • Whenever some position set X is removed from A, for each endpoint i of X we remove X from Ve[i]. We also remove all of the position sets in Va[il], where il is the leftmost end point of X. It is easy to see that, for any position set X which is added/removed from A, each of the above procedures can be executed in constant time. 
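A dictionary-based sketch of this bookkeeping follows; it is a reconstruction under stated assumptions, not the authors' implementation. Endpoint sets are frozensets of integers, adjacency is tested directly on endpoint sets under the agenda's invariant that the underlying position sets are pairwise disjoint (so that fan-out equals half the number of endpoints), and, deviating slightly from the one-sided description above, the Va entries are kept symmetric so that removal is simple to state.

```python
from collections import defaultdict

Ve = defaultdict(set)   # endpoint i -> endpoint sets sharing endpoint i
Va = defaultdict(set)   # leftmost endpoint of X -> endpoint sets adjacent to X

def adjacent_ep(E1, E2):
    """Adjacency on endpoint sets: |E1 & E2| >= min fan-out.

    A position set of fan-out k has exactly 2k endpoints, and disjointness
    of the underlying position sets is assumed (it holds on the agenda)."""
    return len(E1 & E2) >= min(len(E1), len(E2)) // 2

def add_set(E):
    """Register endpoint set E and record its adjacencies."""
    for i in E:
        for other in Ve[i]:
            if other != E and adjacent_ep(E, other):
                Va[min(E)].add(other)
                Va[min(other)].add(E)      # kept symmetric for simplicity
        Ve[i].add(E)

def remove_set(E):
    """Unregister E and drop the adjacency entries that mention it."""
    for i in E:
        Ve[i].discard(E)
    for other in Va.pop(min(E), set()):
        Va[min(other)].discard(E)

# Endpoint sets of the position sets {1, 2} and {3, 4}, which are adjacent:
E1, E2 = frozenset({0, 2}), frozenset({2, 4})
add_set(E1)
add_set(E2)
print(Va[min(E1)])   # contains frozenset({2, 4})
```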
We maintain a set I of integer numbers i ∈ [0, |σN(p)|] such that i ∈I if and only if Va[i] is not empty. Then at each iteration of the while-loop at lines 6 to 10 we pick up some index in I and retrieve at Va[i] some pair X, X′ such that X ↔X′. Since X, X′ are represented by means of endpoint sets, we can compute the endpoint set of X ∪X′ in constant time. Removal of X, X′ and addition of X∪X′ in our data structures Ve and Va is then performed in constant time, as described above. This proves our claim that each single iteration of the while loop can be executed in constant time. 5 Discussion We have presented an algorithm for the binarization of a LCFRS with fan-out 2 that does not increase the fan-out, and have discussed how this can be applied to improve parsing efficiency in several practical applications. In the algorithm of Figure 1, we can modify line 14 to return R even in case of failure. If we do this, when a binarization with fan-out ≤2 does not exist the algorithm will still provide us with a list of reductions that can be converted into a set of productions equivalent to p with fan-out at most 2 and rank bounded by some rb, with 2 < rb ≤r(p). In case rb < r(p), we are not guaranteed to have achieved an optimal reduction in the rank, but we can still obtain an asymptotic improvement in parsing time if we use the new productions obtained in the transformation. Our algorithm has optimal time complexity, since it works in linear time with respect to the input production length. It still needs to be investigated whether the proposed technique, based on determinization of the choice of the reduction, can also be used for finding binarizations for LCFRS with fan-out larger than two, again without increasing the fan-out. However, it seems unlikely that this can still be done in linear time, since the problem of binarization for LCFRS in general, i.e., without any bound on the fan-out, might not be solvable in polynomial time. This is still an open problem; see (G´omez-Rodr´ıguez et al., 2009) for discussion. Acknowledgments The first author has been supported by Ministerio de Educaci´on y Ciencia and FEDER (HUM200766607-C04) and Xunta de Galicia (PGIDIT07SIN005206PR, INCITE08E1R104022ES, INCITE08ENA305025ES, INCITE08PXIB302179PR and Rede Galega de Procesamento da Linguaxe e Recuperaci´on de Informaci´on). The second author has been partially supported by MIUR under project PRIN No. 2007TJNZRE 002. 992 References Pierre Boullier. 2004. Range concatenation grammars. In H. Bunt, J. Carroll, and G. Satta, editors, New Developments in Parsing Technology, volume 23 of Text, Speech and Language Technology, pages 269– 289. Kluwer Academic Publishers. H˚akan Burden and Peter Ljungl¨of. 2005. Parsing linear context-free rewriting systems. In IWPT05, 9th International Workshop on Parsing Technologies. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43rd ACL, pages 263–270. Carlos G´omez-Rodr´ıguez, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in linear context-free rewriting systems. In Proc. of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies Conference (NAACL’09:HLT), Boulder, Colorado. To appear. Aravind K. Joshi and Leon S. Levy. 1977. Constraints on local descriptions: Local transformations. SIAM J. Comput., 6(2):272–284. Aravind K. Joshi, K. Vijay-Shanker, and David Weir. 1991. 
The convergence of mildly context-sensitive grammatical formalisms. In P. Sells, S. Shieber, and T. Wasow, editors, Foundational Issues in Natural Language Processing. MIT Press, Cambridge MA. Marco Kuhlmann and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proc. of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL-09), pages 478–486, Athens, Greece. I. Dan Melamed. 2003. Multitext grammars and synchronous parsers. In Proceedings of HLT-NAACL 2003. Rebecca Nesson and Stuart M. Shieber. 2006. Simpler TAG semantics through synchronization. In Proceedings of the 11th Conference on Formal Grammar, Malaga, Spain, 29–30 July. Owen Rambow and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting systems. Theoretical Computer Science, 223:87–120. Giorgio Satta. 1998. Trading independent for synchronized parallelism in finite copying parallel rewriting systems. Journal of Computer and System Sciences, 56(1):27–45. Hiroyuki Seki, Takashi Matsumura, Mamoru Fujii, and Tadao Kasami. 1991. On multiple context-free grammars. Theoretical Computer Science, 88:191– 229. K. Vijay-Shanker, David J. Weir, and Aravind K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Meeting of the Association for Computational Linguistics (ACL’87). Hao Zhang, Daniel Gildea, and David Chiang. 2008. Extracting synchronous grammar rules from wordlevel alignments in linear time. In 22nd International Conference on Computational Linguistics (Coling), pages 1081–1088, Manchester, England, UK. 993
2009
111
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 994–1002, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Polynomial-Time Parsing Algorithm for TT-MCTAG Laura Kallmeyer Collaborative Research Center 441 Universit¨at T¨ubingen T¨ubingen, Germany [email protected] Giorgio Satta Department of Information Engineering University of Padua Padova, Italy [email protected] Abstract This paper investigates the class of TreeTuple MCTAG with Shared Nodes, TTMCTAG for short, an extension of Tree Adjoining Grammars that has been proposed for natural language processing, in particular for dealing with discontinuities and word order variation in languages such as German. It has been shown that the universal recognition problem for this formalism is NP-hard, but so far it was not known whether the class of languages generated by TT-MCTAG is included in PTIME. We provide a positive answer to this question, using a new characterization of TTMCTAG. 1 Introduction For a large range of linguistic phenomena, extensions of Tree Adjoining Grammars (Joshi et al., 1975), or TAG for short, have been proposed based on the idea of separating the contribution of a lexical item into several components. Instead of single trees, these grammars contain (multi-)sets of trees. Examples are tree-local and set-local multicomponent TAG (Joshi, 1985; Weir, 1988), MCTAG for short, non-local MCTAG with dominance links (Becker et al., 1991), Vector-TAG with dominance links (Rambow, 1994) and, more recently, Tree-Tuple MCTAG with Shared Nodes (Lichte, 2007)), or TT-MCTAG for short. For some of the above formalisms the word recognition problem is NP-hard. This has been shown for non-local MCTAG (Rambow and Satta, 1992), even in the lexicalized case (Champollion, 2007). Some others generate only polynomial languages but their generative capacity is too limited to deal with all natural language phenomena. This has been argued for tree-local and even set-local MCTAG on the basis of scrambling data from languages such as German (Becker et al., 1992; Rambow, 1994). In this paper, we focus on TT-MCTAG (Lichte, 2007). So far, it has been shown that the universal recognition problem for TT-MCTAG is NPhard (Søgaard et al., 2007). A restriction on TTMCTAG has been proposed in (Kallmeyer and Parmentier, 2008): with such a restriction, the universal recognition problem is still NP-hard, but the class of generated languages is included in PTIME, i.e., all these languages can be recognized in deterministic polynomial time. In this paper, we address the question of whether for general TTMCTAG, i.e., TT-MCTAG without the constraint from (Kallmeyer and Parmentier, 2008), the class of generated languages is included in PTIME. We provide a positive answer to this question. The TT-MCTAG definition from (Lichte, 2007; Kallmeyer and Parmentier, 2008) imposes a condition on the way different tree components from a tree tuple in the grammar combine with each other. This condition is formulated in terms of mapping between argument and head trees, i.e., in order to test such a condition one has to guess some grouping of the tree components used in a derivation into instances of tree tuples from the grammar. This results in a combinatorial explosion of parsing analyses. In order to obtain a polynomial parsing algorithm, we need to avoid this effect. 
On this line, we propose an alternative characterization of TT-MCTAG that only requires (i) a counting of tree components and (ii) the check of some local conditions on these counts. This allows for parsing in polynomial deterministic time. TT-MCTAG uses so-called ‘parallel unordered’ rewriting. The first polynomial time parsing results on this class were presented in (Rambow and Satta, 1994; Satta, 1995) for some string-based systems, exploiting counting techniques closely related to those we use in this paper. In contrast to string-based rewriting, the tree 994 rewriting formalisms we consider here are structurally more complex and require specializations of the above techniques. Polynomial parsing results for tree rewriting systems based on parallel unordered rewriting have also been reported in (Rambow, 1994; Rambow et al., 1995). However, in the approach proposed by these authors, tree-based grammars are first translated into equivalent string-based systems, and the result is again provided on the string domain. 2 Tree Adjoining Grammars Tree Adjoining Grammars (Joshi et al., 1975) are a formalism based on tree rewriting. We briefly summarize here the relevant definitions and refer the reader to (Joshi and Schabes, 1997) for a more complete introduction. Definition 1 A Tree Adjoining Grammar (TAG) is a tuple G = (VN, VT , S, I, A) where VN and VT are disjoint alphabets of non-terminal and terminal symbols, respectively, S ∈VN is the start symbol, and I and A are finite sets of initial and auxiliary trees, respectively. 2 Trees in I ∪A are called elementary trees. The internal nodes in the elementary trees are labeled with non-terminal symbols, the leaves with nonterminal or terminal symbols. As a special property, each auxiliary tree β has exactly one of its leaf nodes marked as the foot node, having the same label as the root. Such a node is denoted by Ft(β). Leaves with non-terminal labels that are not foot nodes are called substitution nodes. In a TAG, larger trees can be derived from the elementary trees by subsequent applications of the operations substitution and adjunction. The substitution operation replaces a substitution node η with an initial tree having root node with the same label as η. The adjunction operation replaces an internal node η in a previously derived tree γ with an auxiliary tree β having root node with the same label as η. The subtree of γ rooted at η is then placed below the foot node of β. Only internal nodes can allow for adjunction, adjunction at leaves is not possible. See figure 1 for an example of a tree derivation. Usually, a TAG comes with restrictions on the two operations, specified at each node η by sets Sbst(η) and Adj(η) listing all elementary trees that can be substituted or adjoined, respectively. Furthermore, adjunction at η might be obligatory. NP John S NP VP V laughs VP ADV VP∗ always derived tree: S NP VP John ADV VP always V laughs derivation tree: laugh 1 2 john always Figure 1: TAG derivation for John always laughs TAG derivations are represented by derivation trees that record the history of how the elementary trees are put together. A derivation tree is an unordered tree whose nodes are labeled with elements in I ∪A and whose edges are labeled with Gorn addresses of elementary trees.1 Each edge in a derivation tree stands for an adjunction or a substitution. E.g., the derivation tree in figure 1 indicates that the elementary tree for John is substituted for the node at address 1 and always is adjoined at node address 2. 
In the following, we write a derivation tree D as a directed graph ⟨V, E, r⟩where V is the set of nodes, E ⊂V × V is the set of arcs and r ∈V is the root. For every v ∈V , Lab(v) gives the node label and for every ⟨v1, v2⟩∈E, Lab(⟨v1, v2⟩) gives the edge label. A derived tree is the result of carrying out the substitutions and the adjunctions in a derivation tree, i.e., the derivation tree describes uniquely the derived tree; see again figure 1. 3 TT-MCTAG 3.1 Introduction to TT-MCTAG For a range of linguistic phenomena, multicomponent TAG (Weir, 1988) have been proposed, also called MCTAG for short. The underlying motivation is the desire to split the contribution of a single lexical item (e.g., a verb and its arguments) into several elementary trees. An MCTAG consists of (multi-)sets of elementary trees, called tree sets. If an elementary tree from some set is used in a derivation, then all of the remaining trees in the set must be used as well. Several variants of MCTAGs can be found the literature, differing on the 1In this convention, the root address is ε and the jth child of a node with address p has address p · j. 995 specific definition of the derivation process. The particular MCTAG variant we are concerned with is Tree-Tuple MCTAG with Shared Nodes, TT-MCTAG (Lichte, 2007). TT-MCTAG were introduced to deal with free word order phenomena in languages such as German. An example is (1) where the argument es of reparieren precedes the argument der Mann of versucht and is not adjacent to the predicate it depends on. (1) ... dass es der Mann zu reparieren versucht ... that it the man to repair tries ‘... that the man tries to repair it’ A TT-MCTAG is slightly different from standard MCTAGs since each elementary tree set contains one specially marked lexicalized tree called the head, and all of the remaining trees in the set function as arguments of the head. Furthermore, in a TT-MCTAG derivation the argument trees must either adjoin directly to their head tree, or they must be linked in the derivation tree to an elementary tree that attaches to the head tree, by means of a chain of adjunctions at root nodes. In other words, in the corresponding TAG derivation tree, the head tree must dominate the argument trees in such a way that all positions on the path between them, except the first one, must be labeled by ε. This captures the notion of adjunction under node sharing from (Kallmeyer, 2005).2 Definition 2 A TT-MCTAG is a tuple G = (VN, VT , S, I, A, T ) where GT = (VN, VT , S, I, A) is an underlying TAG and T is a finite set of tree tuples of the form Γ = ⟨γ, {β1, . . . , βr}⟩where γ ∈(I ∪A) has at least one node with a terminal label, and β1, . . . , βn ∈A. 2 For each Γ = ⟨γ, {β1, . . . , βr}⟩∈T , we call γ the head tree and the βj’s the argument trees. We informally say that γ and the βj’s belong to Γ, and write |Γ| = r + 1. As a remark, an elementary tree γ from the underlying TAG GT can be found in different tree tuples in G, or there could even be multiple instances of such a tree within the same tree tuple Γ. In these cases, we just treat these tree instances as distinct trees that are isomorphic and have identical labels. 2The intuition is that, if a tree γ′ adjoins to some γ, its root in the resulting derived tree somehow belongs both to γ and γ′ or, in other words, is shared by them. A further tree β adjoining to this node can then be considered as adjoining to γ, not only to γ′ as in standard TAG. 
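To make this notation concrete, the following sketch encodes a derivation tree as a labelled graph whose edges carry Gorn addresses, and a TT-MCTAG tuple as a head tree together with its argument trees. The class and field names are purely illustrative and do not come from any existing implementation; the instantiated derivation tree is the one of figure 1, and the tuple mirrors the versucht tuple used for example (1).

```python
from dataclasses import dataclass, field

@dataclass
class DNode:
    """Node of a derivation tree D = <V, E, r>: `label` is Lab(v), an
    elementary-tree name; `children` maps the Gorn address labelling the
    outgoing edge to the daughter node."""
    label: str
    children: dict = field(default_factory=dict)   # address (str) -> DNode

@dataclass
class TreeTuple:
    """A tree tuple Gamma = <gamma, {beta_1, ..., beta_r}> of Definition 2."""
    head: str                  # the lexicalized head tree gamma
    arguments: tuple = ()      # names of the argument auxiliary trees

# The derivation tree of figure 1 ("John always laughs"): John is
# substituted at address 1, always is adjoined at address 2.
d = DNode("laugh", {"1": DNode("john"), "2": DNode("always")})

# A tuple in the spirit of example (1): versucht with its nominative argument.
t = TreeTuple(head="versucht", arguments=("NP_nom",))
```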
Note that we assume that foot nodes do not allow adjunctions, otherwise node sharing would also apply to them. For a given argument tree β in Γ, h(β) denotes the head of β in Γ. For a given γ ∈I∪A, a(γ) denotes the set of argument trees of γ, if there are any, or the empty set otherwise. Furthermore, for a given TT-MCTAG G, H(G) is the set of head trees and A(G) is the set of argument trees. Finally, a node v in a derivation tree for G with Lab(v) = γ is called a γ-node. Definition 3 Let G = (VN, VT , S, I, A, T ) be some TT-MCTAG. A derivation tree D = ⟨V, E, r⟩in the underlying TAG GT is licensed in G if and only if the following conditions (MC) and (SN-TTL) are both satisfied. • (MC): For all Γ from G and for all γ1, γ2 in Γ, we have |{v | v ∈V, Lab(v) = γ1}| = |{v | v ∈V, Lab(v) = γ2}|. • (SN-TTL): For all β ∈A(G) and n ≥1, let v1, . . . , vn ∈V be pairwise different h(β)-nodes, 1 ≤i ≤n. Then there are pairwise different β-nodes u1, . . . , un ∈V , 1 ≤i ≤n. Furthermore, for 1 ≤i ≤ n, either ⟨vi, ui⟩∈E, or else there are ui,1, . . . , ui,k, k ≥2, with auxiliary tree labels, such that ui = ui,k, ⟨vi, ui,1⟩∈E and, for 1 ≤j ≤k −1, ⟨ui,j, ui,j+1⟩∈E with Lab(⟨ui,j, ui,j+1⟩) = ε. 2 The separation between (MC) and (SN-TTL) in definition 3 is motivated by the desire to separate the multicomponent property that TTMCTAG shares with a range of related formalisms (e.g., tree-local and set-local MCTAG, VectorTAG, etc.) from the notion of tree-locality with shared nodes that is peculiar to TT-MCTAG. Figure 2 shows a TT-MCTAG derivation for (1). Here, the NPnom auxiliary tree adjoins directly to versucht (its head) while the NPacc tree adjoins to the root of a tree that adjoins to the root of a tree that adjoins to reparieren. TT-MCTAG can generate languages that, in a strong sense, cannot be generated by Linear Context-Free Rewriting Systems (Vijay-Shanker et al., 1987; Weir, 1988), or LCFRS for short. An example is the language of all strings π(n[1] . . . n[m])v[1] . . . v[m] with m ≥1, π a permutation, and n[i] = n is a nominal argument of v[i] = v for 1 ≤i ≤m, i.e., these occurrences come from the same tree set in the grammar. Such a language has been proposed as an abstract description of the scrambling phenomenon as found in German and other free word order languages, 996 * VP VP∗ versucht , ( VP NPnom VP∗ ) + * NPnom der Mann , {} + * VP zu reparieren , ( VP NPacc VP∗ ) + * NPacc es , {} + derivation tree: reparieren ε versucht ε NPnom 1 ε Mann NPacc 1 es Figure 2: TT-MCTAG derivation of (1) * α VP v , ( β1 VPv=− n VP∗ NA )+ * β2 VP v VP∗ NAv=+ , ( β3 VPv=− n VP∗ NA )+ Figure 3: TT-MCTAG and cannot be generated by a LCFRS (Becker et al., 1992; Rambow, 1994). Figure 3 reports a TTMCTAG for this language. Concerning the other direction, at the time of writing it is not known whether there are languages generated by LCFRS but not by TTMCTAG. It is well known that LCFRS is closed under the finite-copy operator. This means that, for any fixed k > 1, if L is generated by a LCFRS then the language {w | w = uk, u ∈L} can also be generated by a LCFRS. We conjecture that TT-MCTAG does not have such a closure property. However, from a first inspection of the MCTAG analyses proposed for natural languages (see Chen-Main and Joshi (2007) for an overview), it seems that there are no important natural language phenomena that can be described by LCFRS and not by TT-MCTAG. 
Any construction involving some kind of component stacking along the VP projection such as subject-auxiliary inversion can be modelled with TT-MCTAG. Unbounded extraposition phenomena cannot be described with TTMCTAG but they constitute a problem for any local formalism and so far the nature of these phenomena is not sufficiently well-understood. Note that, in contrast to non-local MCTAG, in TT-MCTAG the trees coming from the same instance of a tuple in the grammar are not required to be added at the same time. TT-MCTAGs share this property of ‘non-simultaneity’ with other vector grammars such as Unordered Vector Grammars (Cremers and Mayer, 1973) and VectorTAG (Rambow, 1994), V-TAG for short, and it is crucial for the polynomial parsing algorithm. The non-simultaneity seems to be an advantage when using synchronous grammars to model the syntax-semantics interface (Nesson and Shieber, 2008). The closest formalism to TT-MCTAG is V-TAG. However, there are fundamental differences between the two. Firstly, they make a different use of dominance links: In V-TAG dominance links relate different nodes in the trees of a tree set from the grammar. They present dominance requirements that constrain the derived tree. In TT-MCTAG, there are no dominance links between nodes in elementary trees. Instead, the node of a head tree in the derivation tree must dominate all its arguments. Furthermore, even though TT-MCTAG arguments can adjoin with a delay to their head, their possible adjunction site is restricted with respect to their head. As a result, one obtains a slight degree of locality that can be exploited for natural language phenomena that are unbounded only in a limited domain. This is proposed in (Lichte and Kallmeyer, 2008) where the fact that substitution nodes block argument adjunction to higher heads is used to model the limited domain of scrambling in German. V-TAG does not have any such notion of locality. Instead, it uses explicit constraints, so-called integrity constraints, to establish islands. 3.2 An alternative characterization of TT-MCTAG The definition of TT-MCTAG in subsection 3.1 is taken from (Lichte, 2007; Kallmeyer and Parmentier, 2008). The condition (SN-TTL) on the TAG derivation tree is formulated in terms of heads and arguments belonging together, i.e., coming from the same tuple instance. For our parsing algorithm, we want to avoid grouping the instances of elementary trees in a derivation tree into tuple instances. In other words, we want to check whether a TAG derivation tree is a valid TT997 MCTAG derivation tree without deciding, for every occurrence of some argument β, which of the h(β)-nodes represents its head. Therefore we propose to reformulate (SN-TTL). For a node v in a derivation tree D, we write Dv to represent the subtree of D rooted at v. For γ ∈(I ∪A), we define Dom(v, γ) as the set of nodes of Dv that are labeled by γ. Furthermore, for an argument tree β ∈A(G), we let π(v, β) = |Dom(v, β)| −|Dom(v, h(β))|. Lemma 1 Let G be a TT-MCTAG with underlying TAG GT , and let D = ⟨V, E, r⟩be a derivation tree in GT that satisfies (MC). D satisfies (SNTTL) if and only if, for every v ∈V and every β ∈A(G), the following conditions both hold. (i) π(v, β) ≥0. 
(ii) If π(v, β) > 0, then one of the following conditions must be satisfied: (a) Lab(v) = β and π(v, β) = 1; (b) Lab(v) = β and π(v, β) > 1, and there is some ⟨v, vε⟩∈E with Lab(⟨v, vε⟩) = ε and π(vε, β) + 1 = π(v, β); (c) Lab(v) /∈{β, h(β)} and there is some ⟨v, vε⟩∈E with Lab(⟨v, vε⟩) = ε and π(vε, β) = π(v, β); (d) Lab(v) = h(β) and there is some ⟨v, vε⟩∈E with Lab(⟨v, vε⟩) = ε and π(v, β) ≤π(vε, β) ≤π(v, β) + 1. Intuitively, condition (i) in lemma 1 captures the fact that heads always dominate their arguments in the derivation tree. Condition (ii)b states that, if v is a β-node and if v is not the only ‘pending’ β-node in Dv, then all pending β-nodes in Dv, except v itself, must be below the root adjoining node. Here pending means that the node is not matched to a head-node within Dv. Condition (ii)c treats the case in which there are pending βnodes in Dv for some node v whose label is neither β nor h(β). Then the pending nodes must all be below the root adjoining node. Finally, condition (ii)d deals with the case of a h(β)-node v where, besides the β-node that serves as an argument of v, there are other pending β-nodes in Dv. These other pending β-nodes must all be in Dvε, where vε is the (unique) root adjoining node, if it exists. The argument of v might as well be below vε, and then the number of pending β-nodes in Dvε is the number of pending nodes in Dv, incremented by 1, since the argument of v is not pending in Dv but it is pending in Dvε. Otherwise, the argument of v is a pending β-node below some other daughter of v. Then the number of pending β-nodes in Dvε is the same as in Dv. PROOF We first show that (SN-TTL) implies both (i) and (ii). Condition (i): Assume that there is a v ∈V and a β ∈A(G) with π(v, β) < 0. Then for some n and for pairwise different v1, . . . , vn with ⟨v, vi⟩∈E∗, Lab(vi) = h(β) (1 ≤i ≤n), we cannot find pairwise different u1, . . . , un with ⟨vi, ui⟩∈E∗, Lab(ui) = β. This is in contradiction with (SN-TTL). Consequently, condition (i) must be satisfied. Condition (ii): Assume β and v as in the statement of the lemma, with π(v, β) > 0. Let v1, . . . , vn be all the h(β)-nodes in D. There is a bijection fβ from these nodes to n pairwise distinct β-nodes in D, such that every pair vi, fβ(vi) = ui satisfies the conditions in (SN-TTL). Because of (MC), the nodes u1, . . . , un must be all the β-nodes in D. There must be at least one vi (1 ≤i ≤n) with ⟨vi, v⟩∈E+, ⟨v, fβ(vi)⟩∈E∗. Then we have one of the following cases. (a) ui = v and vi is the only h(β)-node dominating v with a corresponding β-node dominated by v. In this case (ii)a holds. (b) Lab(v) = β, i.e., ⟨f −1 β (v), v⟩∈E+ and there are other nodes u ∈Dom(v, β), u ̸= v with ⟨f −1 β (u), v⟩∈E+. Then, with (SN-TTL), there must be a vε with ⟨v, vε⟩∈E, Lab(⟨v, vε⟩) = ε and for all such nodes u, ⟨vε, u⟩∈E∗. Consequently, (ii)b holds. (c) Lab(v) /∈{β, h(β)}. Then, as in (b), there must be a vε with ⟨v, vε⟩∈E, Lab(⟨v, vε⟩) = ε and for all u ∈Dom(v, β) with ⟨f −1 β (u), v⟩∈ E+, ⟨vε, u⟩∈E∗. Consequently, (ii)c holds. (d) Lab(v) = h(β). If fβ(v) is dominated by a vε that is a daughter of v with Lab(⟨v, vε⟩) = ε, then for all u ∈Dom(v, β) with ⟨f −1 β (u), v⟩∈E+ we have ⟨vε, u⟩∈E∗. Consequently, π(vε, β) = π(v, β) + 1. Alternatively, fβ(v) is dominated by some other daughter v′ of v with Lab(⟨v, v′⟩) ̸= ε. In this case vε must still exist and, for all u ∈ Dom(v, β) with u ̸= fβ(v) and with ⟨f −1 β (u), v⟩∈E+, we have ⟨vε, u⟩∈E∗. Consequently, π(vε, β) = π(v, β). Now we show that (i) and (ii) imply (SN-TTL). 
With (MC), the number of β-nodes and h(β)nodes in V are the same, for every β ∈A(G). For every β ∈A(G), we construct a bijection fβ of the 998 same type as in the first part of the proof, and show that (SN-TTL) is satisfied. To construct fβ, for every v ∈V we define sets Vβ,v ⊆Dom(v, β) of βnodes vβ that have a matching head fβ(vβ) dominating v. The definition satisfies |Vβ,v| = π(v, β). For every v with v1, . . . , vn being all its daughters: a) If Lab(v) = β, then (by (ii)) for every 1 ≤j ≤ n with Lab(⟨v, vj⟩) ̸= ε, Vβ,vj = ∅. If there is a vi with Lab(⟨v, vi⟩) = ε, then Vβ,v = Vβ,vi ∪{v}, else Vβ,v = {v}. b) If Lab(v) /∈{β, h(β)}, then (by (ii)) Vβ,vj = ∅ for every 1 ≤j ≤n with Lab(⟨v, vj⟩) ̸= ε. If there is a vi with Lab(⟨v, vi⟩) = ε, then Vβ,v = Vβ,vi, else Vβ,v = ∅. c) If Lab(v) = h(β), then there must be some i, 1 ≤i ≤n, such that Vβ,vi ̸= ∅. We need to distinguish two cases. In the first case we have Lab(⟨v, vi⟩) ̸= ε, |Vβ,vi| = 1 and, for every 1 ≤j ≤n with j ̸= i, either Vβ,vj = ∅or Lab(⟨v, vj⟩) = ε. In this case we define fβ(v) = v′ for {v′} = Vβ,vi. In the second case we have Lab(⟨v, vi⟩) = ε and, for every 1 ≤j ≤n with j ̸= i, Vβ,vj = ∅. In this case we pick an arbitrary v′ ∈Vβ,vi and let fβ(v) = v′. In both cases we let Vβ,v = (Sn i=1 Vβ,vi) \ {fβ(v)}. With this mapping, (SN-TTL) is satisfied when choosing for each h(β)-node vi the β-node ui = fβ(vi) as its corresponding node. ■ 4 Parsing algorithm In this section we present a recognition algorithm for TT-MCTAG working in polynomial time in the size of the input string. The algorithm can be easily converted into a parsing algorithm. The basic idea is to use a parsing algorithm for TAG, and impose on-the-fly additional restrictions on the underlying derivation trees that are being constructed, in order to fulfill the definition of valid TT-MCTAG derivation. To simplify the presentation, we assume without loss of generality that all elementary trees in our grammars are binary trees. The input string has the form w = a1 · · · an with each ai ∈VT and n ≥0 (n = 0 means w = ε). 4.1 TAG recognition We start with the discussion of a baseline recognition algorithm for TAG, along the lines of (VijayShanker and Joshi, 1985). The algorithm is specified by means of deduction rules, following (Shieber et al., 1995), and can be implemented using standard tabular techniques. Items have the form [γ, pt, i, f1, f2, j] where γ ∈I ∪A, p is the address of a node in γ, subscript t ∈{⊤, ⊥} specifies whether substitution or adjunction has already taken place (⊤) or not (⊥) at p, and 0 ≤i ≤f1 ≤ f2 ≤j ≤n are indices with i, j indicating the left and right edges of the span recognized by p and f1, f2 indicating the span of a gap in case a foot node is dominated by p. We write f1 = f2 = −if no gap is involved. For combining indices, we use the operator f ′ ⊕f ′′ = f where f = f ′ if f ′′ = −, f = f ′′ if f ′ = −, and f is undefined otherwise. The deduction rules are shown in figure 4. The algorithm walks bottom-up on the derivation tree. Rules (1) and (2) process leaf nodes in elementary trees and require precondition Lab(γ, p) = wi+1 and Lab(γ, p) = ε, respectively. Rule (3) processes the foot node of auxiliary tree β ∈A by guessing the portion of w spanned by the gap. Note that we use p⊤in the consequent item in order to block adjunction at foot nodes, as usually required in TAG. We move up along nodes in an elementary tree by means of rules (4) and (5), depending on whether the current node has no sibling or has a single sibling, respectively. 
Rule (6) substitutes initial tree α at p in γ, under the precondition α ∈Sbst(γ, p). Similarly, rule (7) adjoins auxiliary tree β at p in γ, under the precondition β ∈Adj (γ, p). Both these rules use p⊤in the consequent item in order to block multiple adjunction or substitution at p, as usually required in TAG. Rule (8) processes nodes at which adjunction is not obligatory. The algorithm recognizes w if and only if some item [α, ε⊤, 0, −, −, n] can be inferred with α ∈I and Lab(α, ε) = S. 4.2 TT-MCTAG recognition We now extend the recognition algorithm of figure 4 to TT-MCTAG. Let G be an input TTMCTAG. We assume that the tuples in T are numbered from 1 to |T |, and that the elementary trees in each Γi are also numbered from 1 to |Γi|, with the first element being the head. We then write γq,r for the r-th elementary tree in the q-th tuple in T . A t-counter is a ragged array T of integers with primary index q ranging over {1, . . . , |T |} and with secondary index r ranging over {1, . . . , |Γi|}. We write T (q,r) to denote the t-counter with T[q, r] = 1 and zero everywhere else. We also use the sum and the difference of t-counters, which are 999 [γ, p⊥, i, −, −, i + 1] (1) [γ, p⊥, i, −, −, i] (2) [β, Ft(β)⊤, i, i, j, j] (3) [γ, (p · 1)⊤, i, f1, f2, j] [γ, p⊥, i, f1, f2, j] (4) [γ, (p · 1)⊤, i, f1, f2, k] [γ, (p · 2)⊤, k, f ′ 1, f ′ 2, j] [γ, p⊥, i, f1 ⊕f ′ 1, f2 ⊕f ′ 2, j] (5) [α, ε⊤, i, −, −, j] [γ, p⊤, i, −, −, j] (6) [β, ε⊤, i, f1, f2, j] [γ, p⊥, f1, f ′ 1, f ′ 2, f2] [γ, p⊤, i, f ′ 1, f ′ 2, j] (7) [γ, p⊥, i, f1, f2, j] [γ, p⊤, i, f1, f2, j] (8) Figure 4: A baseline recognition algorithm for TAG. Rule preconditions and goal item are described in the text. [γq,r, p⊥, i, −, −, i + 1, T (q,r)] (9) [γq,r, p⊥, i, −, −, i, T (q,r)] (10) [γq,r, Ft(γq,r)⊤, i, i, j, j, T (q,r)] (11) [γq,r, (p · 1)⊤, i, f1, f2, j, T ] [γq,r, p⊥, i, f1, f2, j, T ] (12) [γq,r, (p · 1)⊤, i, f1, f2, k, T1] [γq,r, (p · 2)⊤, k, f ′ 1, f ′ 2, j, T2] [γq,r, p⊥, i, f1 ⊕f ′ 1, f2 ⊕f ′ 2, j, T1 + T2 −T (q,r)] (13) [γq′,r′, ε⊤, i, −, −, j, T ′] [γq,r, p⊤, i, −, −, j, T ′ + T (q,r)] (14) [γq′,r′, ε⊤, i, f1, f2, j, T ′] [γq,r, p⊥, f1, f ′ 1, f ′ 2, f2, T] [γq,r, p⊤, i, f ′ 1, f ′ 2, j, T + T ′] (15) [γ, p⊥, i, f1, f2, j, T ] [γ, p⊤, i, f1, f2, j, T ] (16) Figure 5: A recognition algorithm for TT-MCTAG. Rule preconditions are the same as for figure 4, filtering conditions on rules are described in the text. defined elementwise in the obvious way. Let D be a derivation tree generated by the TAG underlying G. We associate D with the t-counter T such that T[q, r] equals the count of all occurrences of elementary tree γq,r appearing in D. Intuitively, we use t-counters to represent information about TAG derivation trees that are relevant to the licensing of such trees by the input TTMCTAG G. We are now ready to present a recognizer based on TT-MCTAG. To simplify the presentation, we first discuss how to extend the algorithm of fig. 4 in order to compute t-counters, and will later specify how to apply TT-MCTAG filtering conditions through such counters. The reader should however keep in mind that the two processes are strictly interleaved, with filtering conditions being tested right after the construction of each new t-counter. We use items of the form [γq,r, pt, i, f1, f2, j, T], where the first six components are defined as in the case of TAG items, and the last component is a t-counter associated with the constructed derivations. Our algorithm is specified in figure 5. The simplest case is that of rules (12) and (16). 
These rules do not alter the underlying derivation tree, and thus the t-counter is simply copied from the antecedent item to the consequent item. Rules (9), (10) and (11) introduce γq,r as the first elementary tree in the analysis (γq,r ∈A in case of rule (11)). Therefore we set the associated t-counter to T (q,r). In rule (14) we substitute initial tree γq′,r′ at node p in γq,r. In terms of derivation structures, we extend a derivation tree D′ rooted at node v′ with Lab(v′) = γq′,r′ to a new derivation tree D with root node v, Lab(v) = γq,r. Node v has a single child represented by the root of D′. Thus the t-counter associated with D should be T ′ + T (q,r). A slightly different operation needs to be performed when applying rule (15). Here we have a derivation tree D with root node v, Lab(v) = γq,r and a derivation tree D′ with root node v′, Lab(v′) = γq′,r′. When adjoining γq′,r′ into γq,r, we need to add to the root of D a new child node, represented by the root of D′. This means that the t-counter associated with the consequent item should be the sum of the t-counters associated with D and D′. Finally, rule (13) involves derivation trees D1 and D2, rooted at nodes v1 and v2, respectively. Nodes v1 and v2 have the same label γq,r. The application of the rule corresponds to the ‘merging’ of v1 and v2 into a new node v with label γq,r as well, Node v inherits all of the children of v1 and v2. In this case the t-counter associated with the consequent item is T1 + T2 −T (q,r). Here T (q,r) 1000 needs to be subtracted because the contribution of tree γq,r is accounted for in both v1 and v2. We can now discuss the filtering conditions that need to be applied when using the above deduction rules. We start by observing that the algorithm in figure 5 might not even stop if there is an infinite set of derivation trees for the input string w = a1 · · · an in the underlying TAG GT . This is because each derivation can have a distinct tcounter. However, the definition of TT-MCTAG imposes that the head tree of each tuple contains at least one lexical element. Together with condition (MC), this implies that no more than n tuple instances can occur in a derivation tree for w according to G. To test for such a condition, we introduce a norm for t-counters ||T||m = |T | X q=1 max|Γq| r=1 T[q, r] . We then impose ||T||m ≤n for each t-counter constructed by our deduction rule, and block the corresponding derivation if this is not satisfied. We also need to test conditions (i) and (ii) from lemma 1. Since these conditions apply to nodes of the derivation tree, this testing is done at each deduction rule in which a consequent item may be constructed for a node ε⊤, that is, rules (14), (15) and (16). We introduce two specialized predicates F≤(T) ≡ ∀(q, r) : T[q, 1] ≤T[q, r] ; F=(T) ≡ ∀(q, r) : T[q, 1] = T[q, r] . We then test F≤(T), which amounts to testing condition (i) for each argument tree in A(G). Furthermore, if at some rule we have F≤(T) ∧ ¬F=(T), then we need to test for condition (ii). To do this, we consider each argument tree γq,r, r ̸= 1, and compare the elementary tree γq,r in the consequent item of the current rule with γq,r and h(γq,r) = γq,1, to select the appropriate subcondition of (ii). As an example, assume that we are applying rule (15) as in figure 5, with p = ε. Let Tc = T + T ′ be the t-counter associated with the consequent item. When we come to process some argument tree γq,r such that Tc[q, r] −Tc[q, 1] > 0 and γq,r ̸∈{γq,r, γq,1}, we need to test (ii)c. 
This is done by requiring T ′[q, r] −T ′[q, 1] = Tc[q, r] −Tc[q, 1]. If we are instead applying rule (16) with p = ε and T[q, r] −T[q, 1] > 0, then we test (ii)a, since there is no adjunction at the root node, by requiring γq,r = γq,r and T[q, r] −T[q, 1] = 1. We block the current derivation whenever the conditions in lemma 1 are not satisfied. The algorithm recognizes w if and only if some item [γq,1, ε⊤, 0, −, −, n, T] can be inferred satisfying γq,1 ∈I, Lab(γq,1, ε) = S and F=(T). The correctness immediately follows from the correctness of the underlying TAG parser and from lemma 1. Finally, we turn to the computational analysis of the algorithm. We assume a tabular implementation of the process of item inference using our deduction rules. Our algorithm clearly stops after some finite amount of time, because of the filtering condition ||T||m ≤n. We then need to derive an upper bound on the number of applications of deduction rules. To do this, we use an argument that is rather standard in the tabular parsing literature. The number of t-counters satisfying ||T||m ≤n is O(ncG), with cG = P|T | i=1 |Γi|. Since all of the other components in an item are bounded by O(n4), there are polynomially (in n) many items that can be constructed for an input w. It is not difficult to see that each individual item can be constructed by a number of rule applications bounded by a polynomial as well. Therefore, the total number of applications of our deduction rules is also bounded by some polynomial in n. We thus conclude that the languages generated by the class TTMCTAG are all included in PTIME. 5 Conclusion and open problems We have shown in this paper that the class of languages generated by TT-MCTAG is included in PTIME, by characterizing the definition of TTMCTAG through some conditions that can be tested locally. PTIME is one of the required properties in the definition of the class of Mildly Context-Sensitive (MCS) formalisms (Joshi et al., 1991). In order to settle membership in MCS for TT-MCTAG, what is still missing is the constantgrowth property or, more generally, the semilinearity property. Acknowledgments The work of the first author has been supported by the DFG within the Emmy-Noether Program. The second author has been partially supported by MIUR under project PRIN No. 2007TJNZRE 002. 1001 References Tilman Becker, Aravind K. Joshi, and Owen Rambow. 1991. Long-distance scrambling and tree adjoining grammars. In Proceedings of ACL-Europe. Tilman Becker, Owen Rambow, and Michael Niv. 1992. The Derivationel Generative Power of Formal Systems or Scrambling is Beyond LCFRS. Technical Report IRCS-92-38, Institute for Research in Cognitive Science, University of Pennsylvania. Lucas Champollion. 2007. Lexicalized non-local MCTAG with dominance links is NP-complete. In Gerald Penn and Ed Stabler, editors, Proceedings of Mathematics of Language (MOL) 10, CSLI On-Line Publications. Joan Chen-Main and Aravind Joshi. 2007. Some observations on a graphical model-theoretical approach and generative models. In Model Theoretic Syntax at 10. Workshop, ESSLLI 2007, Dublin, Ireland. Armin B. Cremers and Otto Mayer. 1973. On matrix languages. Information and Control, 23:86–96. Aravind K. Joshi and Yves Schabes. 1997. TreeAdjoning Grammars. In G. Rozenberg and A. Salomaa, editors, Handbookof Formal Languages, pages 69–123. Springer, Berlin. Aravind K. Joshi, Leon S. Levy, and Masako Takahashi. 1975. Tree Adjunct Grammars. Journal of Computer and System Science, 10:136–163. A. Joshi, K. Vijay-Shanker, and D. 
Weir. 1991. The convergence of mildly context-sensitive grammatical formalisms. In P. Sells, S. Shieber, and T. Wasow, editors, Foundational Issues in Natural Language Processing. MIT Press, Cambridge MA. Aravind K. Joshi. 1985. Tree adjoining grammars: How much contextsensitivity is required ro provide reasonable structural descriptions? In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing, pages 206–250. Cambridge University Press. Laura Kallmeyer and Yannick Parmentier. 2008. On the relation between Multicomponent Tree Adjoining Grammars with Tree Tuples (TT-MCTAG) and Range Concatenation Grammars (RCG). In Carlos Mart´ın-Vide, Friedrich Otto, and Henning Fernaus, editors, Language and Automata Theory and Applications. Second International Conference, LATA 2008, number 5196 in Lecture Notes in Computer Science, pages 263–274. Springer-Verlag, Heidelberg Berlin. Laura Kallmeyer. 2005. Tree-local multicomponent tree adjoining grammars with shared nodes. Computational Linguistics, 31(2):187–225. Timm Lichte and Laura Kallmeyer. 2008. Factorizing Complementation in a TT-MCTAG for German. In Proceedings of the Ninth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+9), pages 57–64, T¨ubingen, June. Timm Lichte. 2007. An MCTAG with Tuples for Coherent Constructions in German. In Proceedings of the 12th Conference on Formal Grammar 2007, Dublin, Ireland. Rebecca Nesson and Stuart Shieber. 2008. Synchronous Vector TAG for Syntax and Semantics: Control Verbs, Relative Clauses, and Inverse Linking. In Proceedings of the Ninth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+9), T¨ubingen, June. Owen Rambow and Giorgio Satta. 1992. Formal properties of non-locality. In Proceedings of 1st International Workshop on Tree Adjoining Grammars. Owen Rambow and Giorgio Satta. 1994. A rewriting system for free word order syntax that is nonlocal and mildly context sensitive. In C. Mart´ınVide, editor, Current Issues in Mathematical Linguistics, North-Holland Linguistic series, Volume 56. Elsevier-North Holland, Amsterdam. Owen Rambow, K. Vijay-shanker, and David Weir. 1995. Parsing d-Ttree grammars. In Proceedings of the Fourth International Workshop on Parsing Technologies, Prague, pages 252–259. Owen Rambow. 1994. Formal and Computational Aspects of Natural Language Syntax. Ph.D. thesis, University of Pennsylvania. Giorgio Satta. 1995. The membership problem for unordered vector languages. In Developments in Language Theory, pages 267–275. Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and Implementation of Deductive Parsing. Journal of Logic Programming, 24(1&2):3–36. Anders Søgaard, Timm Lichte, and Wolfgang Maier. 2007. The complexity of linguistically motivated extensions of tree-adjoining grammar. In Recent Advances in Natural Language Processing 2007, Borovets, Bulgaria. K. Vijay-Shanker and Aravind K. Joshi. 1985. Some computational properties of Tree Adjoining Grammars. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, pages 82–93. K. Vijay-Shanker, D. J. Weir, and A. K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In 25th Meeting of the Association for Computational Linguistics (ACL’87). David J. Weir. 1988. Characterizing mildly contextsensitive grammar formalisms. Ph.D. thesis, University of Pennsylvania. 1002
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1003–1011, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Distant supervision for relation extraction without labeled data Mike Mintz, Steven Bills, Rion Snow, Dan Jurafsky Stanford University / Stanford, CA 94305 {mikemintz,sbills,rion,jurafsky}@cs.stanford.edu Abstract Modern models of relation extraction for tasks like ACE are based on supervised learning of relations from small hand-labeled corpora. We investigate an alternative paradigm that does not require labeled corpora, avoiding the domain dependence of ACEstyle algorithms, and allowing the use of corpora of any size. Our experiments use Freebase, a large semantic database of several thousand relations, to provide distant supervision. For each pair of entities that appears in some Freebase relation, we find all sentences containing those entities in a large unlabeled corpus and extract textual features to train a relation classifier. Our algorithm combines the advantages of supervised IE (combining 400,000 noisy pattern features in a probabilistic classifier) and unsupervised IE (extracting large numbers of relations from large corpora of any domain). Our model is able to extract 10,000 instances of 102 relations at a precision of 67.6%. We also analyze feature performance, showing that syntactic parse features are particularly helpful for relations that are ambiguous or lexically distant in their expression. 1 Introduction At least three learning paradigms have been applied to the task of extracting relational facts from text (for example, learning that a person is employed by a particular organization, or that a geographic entity is located in a particular region). In supervised approaches, sentences in a corpus are first hand-labeled for the presence of entities and the relations between them. The NIST Automatic Content Extraction (ACE) RDC 2003 and 2004 corpora, for example, include over 1,000 documents in which pairs of entities have been labeled with 5 to 7 major relation types and 23 to 24 subrelations, totaling 16,771 relation instances. ACE systems then extract a wide variety of lexical, syntactic, and semantic features, and use supervised classifiers to label the relation mention holding between a given pair of entities in a test set sentence, optionally combining relation mentions (Zhou et al., 2005; Zhou et al., 2007; Surdeanu and Ciaramita, 2007). Supervised relation extraction suffers from a number of problems, however. Labeled training data is expensive to produce and thus limited in quantity. Also, because the relations are labeled on a particular corpus, the resulting classifiers tend to be biased toward that text domain. An alternative approach, purely unsupervised information extraction, extracts strings of words between entities in large amounts of text, and clusters and simplifies these word strings to produce relation-strings (Shinyama and Sekine, 2006; Banko et al., 2007). Unsupervised approaches can use very large amounts of data and extract very large numbers of relations, but the resulting relations may not be easy to map to relations needed for a particular knowledge base. A third approach has been to use a very small number of seed instances or patterns to do bootstrap learning (Brin, 1998; Riloff and Jones, 1999; Agichtein and Gravano, 2000; Ravichandran and Hovy, 2002; Etzioni et al., 2005; Pennacchiotti and Pantel, 2006; Bunescu and Mooney, 2007; Rozenfeld and Feldman, 2008). 
These seeds are used with a large corpus to extract a new set of patterns, which are used to extract more instances, which are used to extract more patterns, in an iterative fashion. The resulting patterns often suffer from low precision and semantic drift. We propose an alternative paradigm, distant supervision, that combines some of the advantages of each of these approaches. Distant supervision is an extension of the paradigm used by Snow et al. (2005) for exploiting WordNet to extract hypernym (is-a) relations between entities, and is similar to the use of weakly labeled data in bioinformatics (Craven and Kumlien, 1999; Morgan et al., 1003 Relation name New instance /location/location/contains Paris, Montmartre /location/location/contains Ontario, Fort Erie /music/artist/origin Mighty Wagon, Cincinnati /people/deceased person/place of death Fyodor Kamensky, Clearwater /people/person/nationality Marianne Yvonne Heemskerk, Netherlands /people/person/place of birth Wavell Wayne Hinds, Kingston /book/author/works written Upton Sinclair, Lanny Budd /business/company/founders WWE, Vince McMahon /people/person/profession Thomas Mellon, judge Table 1: Ten relation instances extracted by our system that did not appear in Freebase. 2004). Our algorithm uses Freebase (Bollacker et al., 2008), a large semantic database, to provide distant supervision for relation extraction. Freebase contains 116 million instances of 7,300 relations between 9 million entities. The intuition of distant supervision is that any sentence that contains a pair of entities that participate in a known Freebase relation is likely to express that relation in some way. Since there may be many sentences containing a given entity pair, we can extract very large numbers of (potentially noisy) features that are combined in a logistic regression classifier. Thus whereas the supervised training paradigm uses a small labeled corpus of only 17,000 relation instances as training data, our algorithm can use much larger amounts of data: more text, more relations, and more instances. We use 1.2 million Wikipedia articles and 1.8 million instances of 102 relations connecting 940,000 entities. In addition, combining vast numbers of features in a large classifier helps obviate problems with bad features. Because our algorithm is supervised by a database, rather than by labeled text, it does not suffer from the problems of overfitting and domain-dependence that plague supervised systems. Supervision by a database also means that, unlike in unsupervised approaches, the output of our classifier uses canonical names for relations. Our paradigm offers a natural way of integrating data from multiple sentences to decide if a relation holds between two entities. Because our algorithm can use large amounts of unlabeled data, a pair of entities may occur multiple times in the test set. For each pair of entities, we aggregate the features from the many different sentences in which that pair appeared into a single feature vector, allowing us to provide our classifier with more information, resulting in more accurate labels. Table 1 shows examples of relation instances extracted by our system. We also use this system to investigate the value of syntactic versus lexical (word sequence) features in relation extraction. 
While syntactic features are known to improve the performance of supervised IE, at least using clean hand-labeled ACE data (Zhou et al., 2007; Zhou et al., 2005), we do not know whether syntactic features can improve the performance of unsupervised or distantly supervised IE. Most previous research in bootstrapping or unsupervised IE has used only simple lexical features, thereby avoiding the computational expense of parsing (Brin, 1998; Agichtein and Gravano, 2000; Etzioni et al., 2005), and the few systems that have used unsupervised IE have not compared the performance of these two types of feature. 2 Previous work Except for the unsupervised algorithms discussed above, previous supervised or bootstrapping approaches to relation extraction have typically relied on relatively small datasets, or on only a small number of distinct relations. Approaches based on WordNet have often only looked at the hypernym (is-a) or meronym (part-of) relation (Girju et al., 2003; Snow et al., 2005), while those based on the ACE program (Doddington et al., 2004) have been restricted in their evaluation to a small number of relation instances and corpora of less than a million words. Many early algorithms for relation extraction used little or no syntactic information. For example, the DIPRE algorithm by Brin (1998) used string-based regular expressions in order to recognize relations such as author-book, while the SNOWBALL algorithm by Agichtein and Gravano (2000) learned similar regular expression patterns over words and named entity tags. Hearst (1992) used a small number of regular expressions over words and part-of-speech tags to find examples of the hypernym relation. The use of these patterns has been widely replicated in successful systems, for example by Etzioni et al. (2005). Other work 1004 Relation name Size Example /people/person/nationality 281,107 John Dugard, South Africa /location/location/contains 253,223 Belgium, Nijlen /people/person/profession 208,888 Dusa McDuff, Mathematician /people/person/place of birth 105,799 Edwin Hubble, Marshfield /dining/restaurant/cuisine 86,213 MacAyo’s Mexican Kitchen, Mexican /business/business chain/location 66,529 Apple Inc., Apple Inc., South Park, NC /biology/organism classification rank 42,806 Scorpaeniformes, Order /film/film/genre 40,658 Where the Sidewalk Ends, Film noir /film/film/language 31,103 Enter the Phoenix, Cantonese /biology/organism higher classification 30,052 Calopteryx, Calopterygidae /film/film/country 27,217 Turtle Diary, United States /film/writer/film 23,856 Irving Shulman, Rebel Without a Cause /film/director/film 23,539 Michael Mann, Collateral /film/producer/film 22,079 Diane Eskenazi, Aladdin /people/deceased person/place of death 18,814 John W. Kern, Asheville /music/artist/origin 18,619 The Octopus Project, Austin /people/person/religion 17,582 Joseph Chartrand, Catholicism /book/author/works written 17,278 Paul Auster, Travels in the Scriptorium /soccer/football position/players 17,244 Midfielder, Chen Tao /people/deceased person/cause of death 16,709 Richard Daintree, Tuberculosis /book/book/genre 16,431 Pony Soldiers, Science fiction /film/film/music 14,070 Stavisky, Stephen Sondheim /business/company/industry 13,805 ATS Medical, Health care Table 2: The 23 largest Freebase relations we use, with their size and an instance of each relation. 
such as Ravichandran and Hovy (2002) and Pantel and Pennacchiotti (2006) use the same formalism of learning regular expressions over words and part-of-speech tags to discover patterns indicating a variety of relations. More recent approaches have used deeper syntactic information derived from parses of the input sentences, including work exploiting syntactic dependencies by Lin and Pantel (2001) and Snow et al. (2005), and work in the ACE paradigm such as Zhou et al. (2005) and Zhou et al. (2007). Perhaps most similar to our distant supervision algorithm is the effective method of Wu and Weld (2007) who extract relations from a Wikipedia page by using supervision from the page’s infobox. Unlike their corpus-specific method, which is specific to a (single) Wikipedia page, our algorithm allows us to extract evidence for a relation from many different documents, and from any genre. 3 Freebase Following the literature, we use the term ‘relation’ to refer to an ordered, binary relation between entities. We refer to individual ordered pairs in this relation as ‘relation instances’. For example, the person-nationality relation holds between the entities named ‘John Steinbeck’ and ‘United States’, so it has ⟨John Steinbeck, United States⟩as an instance. We use relations and relation instances from Freebase, a freely available online database of structured semantic data. Data in Freebase is collected from a variety of sources. One major source is text boxes and other tabular data from Wikipedia. Data is also taken from NNDB (biographical information), MusicBrainz (music), the SEC (financial and corporate data), as well as direct, wiki-style user editing. After some basic processing of the July 2008 link export to convert Freebase’s data representation into binary relations, we have 116 million instances of 7,300 relations between 9 million entities. We next filter out nameless and uninteresting entities such as user profiles and music tracks. Freebase also contains the reverses of many of its relations (bookauthor v. author-book), and these are merged. Filtering and removing all but the largest relations leaves us with 1.8 million instances of 102 relations connecting 940,000 entities. Examples are shown in Table 2. 4 Architecture The intuition of our distant supervision approach is to use Freebase to give us a training set of relations and entity pairs that participate in those relations. In the training step, all entities are identified 1005 in sentences using a named entity tagger that labels persons, organizations and locations. If a sentence contains two entities and those entities are an instance of one of our Freebase relations, features are extracted from that sentence and are added to the feature vector for the relation. The distant supervision assumption is that if two entities participate in a relation, any sentence that contain those two entities might express that relation. Because any individual sentence may give an incorrect cue, our algorithm trains a multiclass logistic regression classifier, learning weights for each noisy feature. In training, the features for identical tuples (relation, entity1, entity2) from different sentences are combined, creating a richer feature vector. In the testing step, entities are again identified using the named entity tagger. 
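Before turning to the testing step, the training-step construction just described can be summarized in a short sketch. This is not the authors' code: the NER and feature-extraction functions, the dictionary of Freebase pairs, and the bag-of-feature-counts representation are all illustrative assumptions.

```python
from collections import defaultdict

def build_training_data(sentences, freebase_pairs, ner_tag, extract_features):
    """Distant-supervision training step (sketch, not the paper's implementation).

    sentences        -- iterable of tokenized sentences
    freebase_pairs   -- dict mapping (entity1, entity2) -> Freebase relation name
    ner_tag          -- function: sentence -> list of entity mention strings
    extract_features -- function: (sentence, mention1, mention2) -> feature strings
    """
    # One aggregated bag of feature counts per (relation, entity1, entity2)
    # tuple, pooled over every sentence in which the pair co-occurs.
    training = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        mentions = ner_tag(sent)              # persons, organizations, locations
        for m1 in mentions:
            for m2 in mentions:
                if m1 == m2:
                    continue
                relation = freebase_pairs.get((m1, m2))
                if relation is None:
                    continue                  # pair is not a known Freebase instance
                key = (relation, m1, m2)
                for feat in extract_features(sent, m1, m2):
                    training[key][feat] += 1  # noisy cues; the classifier weighs them
    return training
```

A multiclass logistic regression model would then be trained over these aggregated feature bags, one class per relation.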
This time, every pair of entities appearing together in a sentence is considered a potential relation instance, and whenever those entities appear together, features are extracted on the sentence and added to a feature vector for that entity pair. For example, if a pair of entities occurs in 10 sentences in the test set, and each sentence has 3 features extracted from it, the entity pair will have 30 associated features. Each entity pair in each sentence in the test corpus is run through feature extraction, and the regression classifier predicts a relation name for each entity pair based on the features from all of the sentences in which it appeared. Consider the location-contains relation, imagining that in Freebase we had two instances of this relation: ⟨Virginia, Richmond⟩and ⟨France, Nantes⟩. As we encountered sentences like ‘Richmond, the capital of Virginia’ and ‘Henry’s Edict of Nantes helped the Protestants of France’ we would extract features from these sentences. Some features would be very useful, such as the features from the Richmond sentence, and some would be less useful, like those from the Nantes sentence. In testing, if we came across a sentence like ‘Vienna, the capital of Austria’, one or more of its features would match those of the Richmond sentence, providing evidence that ⟨Austria, Vienna⟩belongs to the locationcontains relation. Note that one of the main advantages of our architecture is its ability to combine information from many different mentions of the same relation. Consider the entity pair ⟨Steven Spielberg, Saving Private Ryan⟩ from the following two sentences, as evidence for the film-director relation. [Steven Spielberg]’s film [Saving Private Ryan] is loosely based on the brothers’ story. Allison co-produced the Academy Awardwinning [Saving Private Ryan], directed by [Steven Spielberg]... The first sentence, while providing evidence for film-director, could instead be evidence for filmwriter or film-producer. The second sentence does not mention that Saving Private Ryan is a film, and so could instead be evidence for the CEO relation (consider ‘Robert Mueller directed the FBI’). In isolation, neither of these features is conclusive, but in combination, they are. 5 Features Our features are based on standard lexical and syntactic features from the literature. Each feature describes how two entities are related in a sentence, using either syntactic or non-syntactic information. 5.1 Lexical features Our lexical features describe specific words between and surrounding the two entities in the sentence in which they appear: • The sequence of words between the two entities • The part-of-speech tags of these words • A flag indicating which entity came first in the sentence • A window of k words to the left of Entity 1 and their part-of-speech tags • A window of k words to the right of Entity 2 and their part-of-speech tags Each lexical feature consists of the conjunction of all these components. We generate a conjunctive feature for each k ∈{0, 1, 2}. Thus each lexical row in Table 3 represents a single lexical feature. Part-of-speech tags were assigned by a maximum entropy tagger trained on the Penn Treebank, and then simplified into seven categories: nouns, verbs, adverbs, adjectives, numbers, foreign words, and everything else. In an attempt to approximate syntactic features, we also tested variations on our lexical features: (1) omitting all words that are not verbs and (2) omitting all function words. 
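To make the lexical features above concrete, here is a hedged sketch of one conjunctive lexical feature for a given window size k; the exact string encoding and the function signature are assumptions, not the paper's implementation.

```python
def lexical_feature(tokens, pos_tags, span1, span2, entity1_first, k):
    """One conjunctive lexical feature (sketch).

    span1, span2  -- (start, end) token indices of the two entity mentions,
                     given in textual order (span1 occurs before span2)
    entity1_first -- True if the relation's first argument is the earlier mention
    k             -- window size; the paper uses k in {0, 1, 2}
    """
    def tagged(a, b):
        return " ".join("%s/%s" % (w, p) for w, p in zip(tokens[a:b], pos_tags[a:b]))

    (i1, j1), (i2, j2) = span1, span2
    middle = tagged(j1, i2)                        # words between the entities + POS
    left = tagged(max(0, i1 - k), i1)              # k words left of entity 1 + POS
    right = tagged(j2, min(len(tokens), j2 + k))   # k words right of entity 2 + POS
    order = "E1_FIRST" if entity1_first else "E2_FIRST"
    # The conjunction of all components forms a single exact-match feature.
    return "|".join([order, left, middle, right])
```

One such feature would be generated for each k in {0, 1, 2}, and, as described in Section 5.3, the named-entity tags of the two arguments are conjoined into every feature as well.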
In combination with the other lexical features, they gave a small boost to precision, but not large enough to justify the increased demand on our computational resources. 1006 Feature type Left window NE1 Middle NE2 Right window Lexical [] PER [was/VERB born/VERB in/CLOSED] LOC [] Lexical [Astronomer] PER [was/VERB born/VERB in/CLOSED] LOC [,] Lexical [#PAD#, Astronomer] PER [was/VERB born/VERB in/CLOSED] LOC [, Missouri] Syntactic [] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [] Syntactic [Edwin Hubble ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [] Syntactic [Astronomer ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [] Syntactic [] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓lex−mod ,] Syntactic [Edwin Hubble ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓lex−mod ,] Syntactic [Astronomer ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓lex−mod ,] Syntactic [] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓inside Missouri] Syntactic [Edwin Hubble ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓inside Missouri] Syntactic [Astronomer ⇓lex−mod] PER [⇑s was ⇓pred born ⇓mod in ⇓pcomp−n] LOC [⇓inside Missouri] Table 3: Features for ‘Astronomer Edwin Hubble was born in Marshfield, Missouri’. Astronomer Edwin Hubble was born in Marshfield , Missouri lex-mod s pred mod pcomp-n lex-mod inside Figure 1: Dependency parse with dependency path from ‘Edwin Hubble’ to ‘Marshfield’ highlighted in boldface. 5.2 Syntactic features In addition to lexical features we extract a number of features based on syntax. In order to generate these features we parse each sentence with the broad-coverage dependency parser MINIPAR (Lin, 1998). A dependency parse consists of a set of words and chunks (e.g. ‘Edwin Hubble’, ‘Missouri’, ‘born’), linked by directional dependencies (e.g. ‘pred’, ‘lex-mod’), as in Figure 1. For each sentence we extract a dependency path between each pair of entities. A dependency path consists of a series of dependencies, directions and words/chunks representing a traversal of the parse. Part-of-speech tags are not included in the dependency path. Our syntactic features are similar to those used in Snow et al. (2005). They consist of the conjunction of: • A dependency path between the two entities • For each entity, one ‘window’ node that is not part of the dependency path A window node is a node connected to one of the two entities and not part of the dependency path. We generate one conjunctive feature for each pair of left and right window nodes, as well as features which omit one or both of them. Thus each syntactic row in Table 3 represents a single syntactic feature. 5.3 Named entity tag features Every feature contains, in addition to the content described above, named entity tags for the two entities. We perform named entity tagging using the Stanford four-class named entity tagger (Finkel et al., 2005). The tagger provides each word with a label from {person, location, organization, miscellaneous, none}. 5.4 Feature conjunction Rather than use each of the above features in the classifier independently, we use only conjunctive features. Each feature consists of the conjunction of several attributes of the sentence, plus the named entity tags. For two features to match, all of their conjuncts must match exactly. This yields low-recall but high-precision features. With a small amount of data, this approach would be problematic, since most features would only be seen once, rendering them useless to the classifier. 
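The exact-match behaviour of these conjunctive features can be illustrated with a small sketch of a syntactic feature built from a dependency path and optional window nodes; the path encoding below is an assumption modeled loosely on Table 3, not the paper's actual representation.

```python
def syntactic_feature(ne1, dep_path, ne2, left_window=None, right_window=None):
    """One conjunctive syntactic feature (sketch).

    ne1, ne2     -- named-entity tags of the two arguments, e.g. "PER", "LOC"
    dep_path     -- dependency path between the arguments as a list of
                    (direction, label, word) steps, e.g.
                    [("up", "s", "was"), ("down", "pred", "born"),
                     ("down", "mod", "in"), ("down", "pcomp-n", "")]
    left_window  -- optional window node attached to the first argument
    right_window -- optional window node attached to the second argument
    """
    path = " ".join("%s:%s:%s" % step for step in dep_path)
    parts = [left_window or "", ne1, path, ne2, right_window or ""]
    # Two features are equal only if every conjunct matches exactly, which is
    # why individual features are high-precision but low-recall.
    return "|".join(parts)
```

As in the paper, one such feature would be generated for each choice of left and right window node, plus the variants that omit one or both of them.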
Since we use large amounts of data, even complex features appear multiple times, allowing our highprecision features to work as intended. Features for a sample sentence are shown in Table 3. 6 Implementation 6.1 Text For unstructured text we use the Freebase Wikipedia Extraction, a dump of the full text of all Wikipedia articles (not including discussion and 1007 Relation Feature type Left window NE1 Middle NE2 Right window /architecture/structure/architect LEX↶ ORG , the designer of the PER SYN designed ⇑s ORG ⇑s designed ⇓by−subj by ⇓pcn PER ⇑s designed /book/author/works written LEX PER s novel ORG SYN PER ⇑pcn by ⇑mod story ⇑pred is ⇓s ORG /book/book edition/author editor LEX↶ ORG s novel PER SYN PER ⇑nn series ⇓gen PER /business/company/founders LEX ORG co - founder PER SYN ORG ⇑nn owner ⇓person PER /business/company/place founded LEX↶ ORG - based LOC SYN ORG ⇑s founded ⇓mod in ⇓pcn LOC /film/film/country LEX PER , released in LOC SYN opened ⇑s ORG ⇑s opened ⇓mod in ⇓pcn LOC ⇑s opened /geography/river/mouth LEX LOC , which flows into the LOC SYN the ⇓det LOC ⇑s is ⇓pred tributary ⇓mod of ⇓pcn LOC ⇓det the /government/political party/country LEX↶ ORG politician of the LOC SYN candidate ⇑nn ORG ⇑nn candidate ⇓mod for ⇓pcn LOC ⇑nn candidate /influence/influence node/influenced LEX↶ PER , a student of PER SYN of ⇑pcn PER ⇑pcn of ⇑mod student ⇑appo PER ⇑pcn of /language/human language/region LEX LOC - speaking areas of LOC SYN LOC ⇑lex−mod speaking areas ⇓mod of ⇓pcn LOC /music/artist/origin LEX↶ ORG based band LOC SYN is ⇑s ORG ⇑s is ⇓pred band ⇓mod from ⇓pcn LOC ⇑s is /people/deceased person/place of death LEX PER died in LOC SYN hanged ⇑s PER ⇑s hanged ⇓mod in ⇓pcn LOC ⇑s hanged /people/person/nationality LEX PER is a citizen of LOC SYN PER ⇓mod from ⇓pcn LOC /people/person/parents LEX PER , son of PER SYN father ⇑gen PER ⇑gen father ⇓person PER ⇑gen father /people/person/place of birth LEX↶ PER is the birthplace of PER SYN PER ⇑s born ⇓mod in ⇓pcn LOC /people/person/religion LEX PER embraced LOC SYN convert ⇓appo PER ⇓appo convert ⇓mod to ⇓pcn LOC ⇓appo convert Table 4: Examples of high-weight features for several relations. Key: SYN = syntactic feature; LEX = lexical feature; ↶= reversed; NE# = named entity tag of entity. user pages) which has been sentence-tokenized by Metaweb Technologies, the developers of Freebase (Metaweb, 2008). This dump consists of approximately 1.8 million articles, with an average of 14.3 sentences per article. The total number of words (counting punctuation marks) is 601,600,703. For our experiments we use about half of the articles: 800,000 for training and 400,000 for testing. We use Wikipedia because it is relatively upto-date, and because its sentences tend to make explicit many facts that might be omitted in newswire. Much of the information in Freebase is derived from tabular data from Wikipedia, meaning that Freebase relations are more likely to appear in sentences in Wikipedia. 6.2 Parsing and chunking Each sentence of this unstructured text is dependency parsed by MINIPAR to produce a dependency graph. In preprocessing, consecutive words with the same named entity tag are ‘chunked’, so that Edwin/PERSON Hubble/PERSON becomes [Edwin Hubble]/PERSON. This chunking is restricted by the dependency parse of the sentence, however, in that chunks must be contiguous in the parse (i.e., no chunks across subtrees). This ensures that parse tree structure is preserved, since the parses must be updated to reflect the chunking. 
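The chunking step in Section 6.2 can be sketched as follows. It merges adjacent tokens that share a named-entity tag; the additional requirement that a chunk be contiguous in the dependency parse is noted in a comment but not implemented, and the tag set and the "none" label are assumptions.

```python
def chunk_entities(tokens, ne_tags, none_tag="none"):
    """Merge consecutive tokens sharing a named-entity tag (sketch).

    E.g. Edwin/PERSON Hubble/PERSON -> ("Edwin Hubble", "PERSON").
    The paper additionally requires chunks to be contiguous in the dependency
    parse (no chunks across subtrees); that check is omitted here.
    """
    chunks, current, current_tag = [], [], None
    for tok, tag in zip(tokens, ne_tags):
        if tag != none_tag and tag == current_tag:
            current.append(tok)                       # extend the open entity chunk
            continue
        if current:                                   # close the previous entity chunk
            chunks.append((" ".join(current), current_tag))
            current, current_tag = [], None
        if tag == none_tag:
            chunks.append((tok, none_tag))            # ordinary token, kept as-is
        else:
            current, current_tag = [tok], tag         # open a new entity chunk
    if current:
        chunks.append((" ".join(current), current_tag))
    return chunks
```

For example, chunk_entities(["Astronomer", "Edwin", "Hubble", "was"], ["none", "PERSON", "PERSON", "none"]) yields [("Astronomer", "none"), ("Edwin Hubble", "PERSON"), ("was", "none")].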
6.3 Training and testing For held-out evaluation experiments (see section 7.1), half of the instances of each relation are not used in training, and are later used to compare against newly discovered instances. This means that 900,000 Freebase relation instances are used in training, and 900,000 are held out. These experiments used 800,000 Wikipedia articles in the training phase and 400,000 different articles in the testing phase. For human evaluation experiments, all 1.8 million relation instances are used in training. Again, we use 800,000 Wikipedia articles in the training phase and 400,000 different articles in the testing phase. For all our experiments, we only extract relation instances that do not appear in our training data, i.e., instances that are not already in Freebase. Our system needs negative training data for the purposes of constructing the classifier. Towards this end, we build a feature vector in the training phase for an ‘unrelated’ relation by randomly selecting entity pairs that do not appear in any Freebase relation and extracting features for them. While it is possible that some of these entity pairs 1008 0
[Plot omitted: precision vs. oracle recall curves for the Surface, Syntax, and Both feature sets; the Figure 2 caption follows.]
 Figure 2: Automatic evaluation with 50% of Freebase relation data held out and 50% used in training on the 102 largest relations we use. Precision for three different feature sets (lexical features, syntactic features, and both) is reported at recall levels from 10 to 100,000. At the 100,000 recall level, we classify most of the instances into three relations: 60% as location-contains, 13% as person-place-of-birth, and 10% as person-nationality. are in fact related but are wrongly omitted from the Freebase data, we expect that on average these false negatives will have a small effect on the performance of the classifier. For performance reasons, we randomly sample 1% of such entity pairs for use as negative training examples. By contrast, in the actual test data, 98.7% of the entity pairs we extract do not possess any of the top 102 relations we consider in Freebase. We use a multi-class logistic classifier optimized using L-BFGS with Gaussian regularization. Our classifier takes as input an entity pair and a feature vector, and returns a relation name and a confidence score based on the probability of the entity pair belonging to that relation. Once all of the entity pairs discovered during testing have been classified, they can be ranked by confidence score and used to generate a list of the n most likely new relation instances. Table 4 shows some high-weight features learned by our system. We discuss the results in the next section. 7 Evaluation We evaluate labels in two ways: automatically, by holding out part of the Freebase relation data during training, and comparing newly discovered relation instances against this held-out data, and manually, having humans who look at each positively labeled entity pair and mark whether the relation indeed holds between the participants. Both evaluations allow us to calculate the precision of the system for the best N instances. 7.1 Held-out evaluation Figure 2 shows the performance of our classifier on held-out Freebase relation data. While held-out evaluation suffers from false negatives, it gives a rough measure of precision without requiring expensive human evaluation, making it useful for parameter setting. At most recall levels, the combination of syntactic and lexical features offers a substantial improvement in precision over either of these feature sets on its own. 7.2 Human evaluation Human evaluation was performed by evaluators on Amazon’s Mechanical Turk service, shown to be effective for natural language annotation in Snow et al. (2008). We ran three experiments: one using only syntactic features; one using only lexical features; and one using both syntactic and lexical features. 
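Stepping back to the held-out evaluation of Section 7.1 for a moment: it amounts to ranking extracted instances by classifier confidence and measuring precision at increasing recall levels against the held-out half of Freebase. The sketch below assumes simple tuple representations and is not the authors' evaluation code; note that true instances missing from Freebase count as errors here, which is the false-negative problem mentioned above.

```python
def heldout_precision(predictions, heldout, cutoffs=(100, 1000, 10000, 100000)):
    """Precision of the top-ranked extractions against held-out Freebase data (sketch).

    predictions -- list of (confidence, relation, entity1, entity2) tuples
    heldout     -- set of (relation, entity1, entity2) triples withheld from training
    """
    ranked = sorted(predictions, key=lambda p: p[0], reverse=True)
    precisions = {}
    for n in cutoffs:
        top = ranked[:n]
        if not top:
            continue
        correct = sum(1 for _, rel, e1, e2 in top if (rel, e1, e2) in heldout)
        precisions[n] = correct / float(len(top))
    return precisions
```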
For each of the 10 relations that appeared most frequently in our test data (according to our classifier), we took samples from the first 100 and 1000 instances of this relation generated in each experiment, and sent these to Mechanical Turk for 1009 Relation name 100 instances 1000 instances Syn Lex Both Syn Lex Both /film/director/film 0.49 0.43 0.44 0.49 0.41 0.46 /film/writer/film 0.70 0.60 0.65 0.71 0.61 0.69 /geography/river/basin countries 0.65 0.64 0.67 0.73 0.71 0.64 /location/country/administrative divisions 0.68 0.59 0.70 0.72 0.68 0.72 /location/location/contains 0.81 0.89 0.84 0.85 0.83 0.84 /location/us county/county seat 0.51 0.51 0.53 0.47 0.57 0.42 /music/artist/origin 0.64 0.66 0.71 0.61 0.63 0.60 /people/deceased person/place of death 0.80 0.79 0.81 0.80 0.81 0.78 /people/person/nationality 0.61 0.70 0.72 0.56 0.61 0.63 /people/person/place of birth 0.78 0.77 0.78 0.88 0.85 0.91 Average 0.67 0.66 0.69 0.68 0.67 0.67 Table 5: Estimated precision on human-evaluation experiments of the highest-ranked 100 and 1000 results per relation, using stratified samples. ‘Average’ gives the mean precision of the 10 relations. Key: Syn = syntactic features only. Lex = lexical features only. We use stratified samples because of the overabundance of location-contains instances among our high-confidence results. human evaluation. Our sample size was 100. Each predicted relation instance was labeled as true or false by between 1 and 3 labelers on Mechanical Turk. We assigned the truth or falsehood of each relation according to the majority vote of the labels; in the case of a tie (one vote each way) we assigned the relation as true or false with equal probability. The evaluation of the syntactic, lexical, and combination of features at a recall of 100 and 1000 instances is presented in Table 5. At a recall of 100 instances, the combination of lexical and syntactic features has the best performance for a majority of the relations, while at a recall level of 1000 instances the results are mixed. No feature set strongly outperforms any of the others across all relations. 8 Discussion Our results show that the distant supervision algorithm is able to extract high-precision patterns for a reasonably large number of relations. The held-out results in Figure 2 suggest that the combination of syntactic and lexical features provides better performance than either feature set on its own. In order to understand the role of syntactic features, we examine Table 5, the human evaluation of the most frequent 10 relations. For the topranking 100 instances of each relation, most of the best results use syntactic features, either alone or in combination with lexical features. For the topranking 1000 instances of each relation, the results are more mixed, but syntactic features still helped in most classifications. We then examine those relations for which syntactic features seem to help. For example, syntactic features consistently outperform lexical features for the director-film and writer-film relations. As discussed in section 4, these two relations are particularly ambiguous, suggesting that syntactic features may help tease apart difficult relations. Perhaps more telling, we noticed many examples with a long string of words between the director and the film: Back Street is a 1932 film made by Universal Pictures, directed by John M. Stahl, and produced by Carl Laemmle Jr. Sentences like this have very long (and thus rare) lexical features, but relatively short dependency paths. 
Syntactic features can more easily abstract from the syntactic modifiers that comprise the extraneous parts of these strings. Our results thus suggest that syntactic features are indeed useful in distantly supervised information extraction, and that the benefit of syntax occurs in cases where the individual patterns are particularly ambiguous, and where they are nearby in the dependency structure but distant in terms of words. It remains for future work to see whether simpler, chunk-based syntactic features might be able to capture enough of this gain without the overhead of full parsing, and whether coreference resolution could improve performance. Acknowledgments We would like to acknowledge Sarah Spikes for her help in developing the relation extraction system, Christopher Manning and Mihai Surdeanu for their invaluable advice, and Fuliang Weng and Baoshi Yan for their guidance. Our research was partially funded by the NSF via award IIS0811974 and by Robert Bosch LLC. 1010 References Eugene Agichtein and Luis Gravano. 2000. Snowball: Extracting relations from large plain-text collections. In Proceedings of the 5th ACM International Conference on Digital Libraries. Michele Banko, Michael J. Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction from the web. In Manuela M Veloso, editor, IJCAI-07, pages 2670– 2676. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In SIGMOD ’08, pages 1247– 1250, New York, NY. ACM. Sergei Brin. 1998. Extracting patterns and relations from the World Wide Web. In Proceedings World Wide Web and Databases International Workshop, Number 1590 in LNCS, pages 172–183. Springer. Razvan Bunescu and Raymond Mooney. 2007. Learning to extract relations from the web using minimal supervision. In ACL-07, pages 576–583, Prague, Czech Republic, June. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Thomas Lengauer, Reinhard Schneider, Peer Bork, Douglas L. Brutlag, Janice I. Glasgow, Hans W. Mewes, and Ralf Zimmer, editors, ISMB, pages 77–86. AAAI. George Doddington, Alexis Mitchell, Mark Przybocki, Lance Ramshaw, Stephanie Strassel, and Ralph Weischedel. 2004. The Automatic Content Extraction (ACE) Program–Tasks, Data, and Evaluation. LREC-04, pages 837–840. Oren Etzioni, Michael Cafarella, Doug Downey, AnaMaria Popescu, Tal Shaked, Stephen Soderland, Daniel S. Weld, and Alexander Yates. 2005. Unsupervised named-entity extraction from the web: An experimental study. Artificial Intelligence, 165(1):91–134. Jenny R. Finkel, Trond Grenager, and Christopher Manning. 2005. Incorporating non-local information into information extraction systems by gibbs sampling. In ACL-05, pages 363–370, Ann Arbor, MI. Roxana Girju, Adriana Badulescu, and Dan Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In HLTNAACL-03, pages 1–8, Edmonton, Canada. Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In COLING-92, Nantes, France. Dekang Lin and Patrick Pantel. 2001. Discovery of inference rules for question-answering. Natural Language Engineering, 7(4):343–360. Dekang Lin. 1998. Dependency-based evaluation of minipar. In Workshop on the Evaluation of Parsing Systems. Metaweb. 2008. Freebase data dumps. http:// download.freebase.com/datadumps/. Alexander A. 
Morgan, Lynette Hirschman, Marc Colosimo, Alexander S. Yeh, and Jeff B. Colombe. 2004. Gene name identification and normalization using a model organism database. J. of Biomedical Informatics, 37(6):396–410. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: leveraging generic patterns for automatically harvesting semantic relations. In COLING/ACL 2006, pages 113–120, Sydney, Australia. Marco Pennacchiotti and Patrick Pantel. 2006. A bootstrapping algorithm for automatically harvesting semantic relations. In in Proceedings of Inference in Computational Semantics (ICoS-06), pages 87–96. Deepak Ravichandran and Eduard H. Hovy. 2002. Learning surface text patterns for a question answering system. In ACL-02, pages 41–47, Philadelphia, PA. Ellen Riloff and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In AAAI-99, pages 474–479. Benjamin Rozenfeld and Ronen Feldman. 2008. Selfsupervised relation extraction from the web. Knowledge and Information Systems, 17(1):17–33. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive information extraction using unrestricted relation discovery. In HLT-NAACL-06, pages 304–311, New York, NY. Rion Snow, Daniel Jurafsky, and Andrew Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. In Lawrence K. Saul, Yair Weiss, and L´eon Bottou, editors, NIPS 17, pages 1297–1304. MIT Press. Rion Snow, Brendan O’Connor, Daniel Jurafsky, and Andrew Ng. 2008. Cheap and fast – but is it good? evaluating non-expert annotations for natural language tasks. In EMNLP 2008, pages 254–263, Honolulu, HI. Mihai Surdeanu and Massimiliano Ciaramita. 2007. Robust information extraction with perceptrons. In Proceedings of the NIST 2007 Automatic Content Extraction Workshop (ACE07), March. Fei Wu and Daniel S. Weld. 2007. Autonomously semantifying wikipedia. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41– 50, Lisbon, Portugal. Guodong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In ACL-05, pages 427–434, Ann Arbor, MI. Guodong Zhou, Min Zhang, Donghong Ji, and Qiaoming Zhu. 2007. Tree kernel-based relation extraction with context-sensitive structured parse tree information. In EMNLP/CoNLL 2007. 1011
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1012–1020, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Multi-Task Transfer Learning for Weakly-Supervised Relation Extraction Jing Jiang School of Information Systems Singapore Management University 80 Stamford Road, Singapore 178902 [email protected] Abstract Creating labeled training data for relation extraction is expensive. In this paper, we study relation extraction in a special weakly-supervised setting when we have only a few seed instances of the target relation type we want to extract but we also have a large amount of labeled instances of other relation types. Observing that different relation types can share certain common structures, we propose to use a multi-task learning method coupled with human guidance to address this weakly-supervised relation extraction problem. The proposed framework models the commonality among different relation types through a shared weight vector, enables knowledge learned from the auxiliary relation types to be transferred to the target relation type, and allows easy control of the tradeoff between precision and recall. Empirical evaluation on the ACE 2004 data set shows that the proposed method substantially improves over two baseline methods. 1 Introduction Relation extraction is the task of detecting and characterizing semantic relations between entities from free text. Recent work on relation extraction has shown that supervised machine learning coupled with intelligent feature engineering or kernel design provides state-of-the-art solutions to the problem (Culotta and Sorensen, 2004; Zhou et al., 2005; Bunescu and Mooney, 2005; Qian et al., 2008). However, supervised learning heavily relies on a sufficient amount of labeled data for training, which is not always available in practice due to the labor-intensive nature of human annotation. This problem is especially serious for relation extraction because the types of relations to be extracted are highly dependent on the application domain. For example, when working in the financial domain we may be interested in the employment relation, but when moving to the terrorism domain we now may be interested in the ethnic and ideology affiliation relation, and thus have to create training data for the new relation type. However, is the old training data really useless? Inspired by recent work on transfer learning and domain adaptation, in this paper, we study how we can leverage labeled data of some old relation types to help the extraction of a new relation type in a weakly-supervised setting, where only a few seed instances of the new relation type are available. While transfer learning was proposed more than a decade ago (Thrun, 1996; Caruana, 1997), its application in natural language processing is still a relatively new territory (Blitzer et al., 2006; Daume III, 2007; Jiang and Zhai, 2007a; Arnold et al., 2008; Dredze and Crammer, 2008), and its application in relation extraction is still unexplored. Our idea of performing transfer learning is motivated by the observation that different relation types share certain common syntactic structures, which can possibly be transferred from the old types to the new type. We therefore propose to use a general multi-task learning framework in which classification models for a number of related tasks are forced to share a common model component and trained together. 
By treating classification of different relation types as related tasks, the learning framework can naturally model the common syntactic structures among different relation types in a principled manner. It also allows us to introduce human guidance in separating the common model component from the type-specific components. The framework naturally transfers the knowledge learned from the old relation types to the new relation type and helps improve the recall of the relation extractor. We also exploit ad1012 ditional human knowledge about the entity type constraints on the relation arguments, which can usually be derived from the definition of a relation type. Imposing these constraints further improves the precision of the final relation extractor. Empirical evaluation on the ACE 2004 data set shows that our proposed method largely outperforms two baseline methods, improving the average F1 measure from 0.1532 to 0.4132 when only 10 seed instances of the new relation type are used. 2 Related work Recent work on relation extraction has been dominated by feature-based and kernel-based supervised learning methods. Zhou et al. (2005) and Zhao and Grishman (2005) studied various features and feature combinations for relation extraction. We systematically explored the feature space for relation extraction (Jiang and Zhai, 2007b) . Kernel methods allow a large set of features to be used without being explicitly extracted. A number of relation extraction kernels have been proposed, including dependency tree kernels (Culotta and Sorensen, 2004), shortest dependency path kernels (Bunescu and Mooney, 2005) and more recently convolution tree kernels (Zhang et al., 2006; Qian et al., 2008). However, in both feature-based and kernel-based studies, availability of sufficient labeled training data is always assumed. Chen et al. (2006) explored semi-supervised learning for relation extraction using label propagation, which makes use of unlabeled data. Zhou et al. (2008) proposed a hierarchical learning strategy to address the data sparseness problem in relation extraction. They also considered the commonality among different relation types, but compared with our work, they had a different problem setting and a different way of modeling the commonality. Banko and Etzioni (2008) studied open domain relation extraction, for which they manually identified several common relation patterns. In contrast, our method obtains common patterns through statistical learning. Xu et al. (2008) studied the problem of adapting a rule-based relation extraction system to new domains, but the types of relations to be extracted remain the same. Transfer learning aims at transferring knowledge learned from one or a number of old tasks to a new task. Domain adaptation is a special case of transfer learning where the learning task remains the same but the distribution of data changes. There has been an increasing amount of work on transfer learning and domain adaptation in natural language processing recently. Blitzer et al. (2006) proposed a structural correspondence learning method for domain adaptation and applied it to part-of-speech tagging. Daume III (2007) proposed a simple feature augmentation method to achieve domain adaptation. Arnold et al. (2008) used a hierarchical prior structure to help transfer learning and domain adaptation for named entity recognition. Dredze and Crammer (2008) proposed an online method for multi-domain learning and adaptation. 
Multi-task learning is another learning paradigm in which multiple related tasks are learned simultaneously in order to achieve better performance for each individual task (Caruana, 1997; Evgeniou and Pontil, 2004). Although it was not originally proposed to transfer knowledge to a particular new task, it can be naturally used to achieve this goal because it models the commonality among tasks, which is the knowledge that should be transferred to a new task. In our work, transfer learning is done through a multi-task learning framework similar to Evgeniou and Pontil (2004). 3 Task definition Our study is conducted using data from the Automatic Content Extraction (ACE) program1. We focus on extracting binary relation instances between two relation arguments occurring in the same sentence. Some example relation instances and their corresponding relation types as defined by ACE can be found in Table 1. We consider the following weakly-supervised problem setting. We are interested in extracting instances of a target relation type T , but this relation type is only specified by a small set of seed instances. We may possibly have some additional knowledge about the target type not in the form of labeled instances. For example, we may be given the entity type restrictions on the two relation arguments. In addition to such limited information about the target relation type, we also have a large amount of labeled instances for K auxiliary relation types A1, . . . , AK. Our goal is to learn a relation extractor for T , leveraging all the data and information we have. 1http://projects.ldc.upenn.edu/ace/ 1013 Syntactic Pattern Relation Instance Relation Type (Subtype) arg-2 arg-1 Arab leaders OTHER-AFF (Ethnic) his father PER-SOC (Family) South Jakarta Prosecution Office GPE-AFF (Based-In) arg-1 of arg-2 leader of a minority government EMP-ORG (Employ-Executive) the youngest son of ex-director Suharto PER-SOC (Family) the Socialist People’s Party of Montenegro GPE-AFF (Based-In) arg-1 [verb] arg-2 Yemen [sent] planes to Baghdad ART (User-or-Owner) his wife [had] three young children PER-SOC (Family) Jody Scheckter [paced] Ferrari to both victories EMP-ORG (Employ-Staff) Table 1: Examples of similar syntactic structures across different relation types. The head words of the first and the second arguments are shown in italic and bold, respectively. Before introducing our transfer learning solution, let us first briefly explain our basic classification approach and the features we use, as well as two baseline solutions. 3.1 Feature configuration We treat relation extraction as a classification problem. Each pair of entities within a single sentence is considered a candidate relation instance, and the task becomes predicting whether or not each candidate is a true instance of T . We use feature-based logistic regression classifiers. Following our previous work (Jiang and Zhai, 2007b), we extract features from a sequence representation and a parse tree representation of each relation instance. Each node in the sequence or the parse tree is augmented by an argument tag that indicates whether the node subsumes arg-1, arg2, both or neither. Nodes that represent the arguments are also labeled with the entity type, subtype and mention type as defined by ACE. Based on the findings of Qian et al. (2008), we trim the parse tree of a relation instance so that it contains only the most essential components. 
We extract unigram features (consisting of a single node) and bigram features (consisting of two connected nodes) from the graphic representations. An example of the graphic representation of a relation instance is shown in Figure 1 and some features extracted from this instance are shown in Table 2. This feature configuration gives state-of-the-art performance (F1 = 0.7223) on the ACE 2004 data set in a standard setting with sufficient data for training. 3.2 Baseline solutions We consider two baseline solutions to the weaklysupervised relation extraction problem. In the first NP NPB 3 PP 1 leader NN PER of IN government NN ORG NPB 1 0 2 2 2 Figure 1: The combined sequence and parse tree representation of the relation instance “leader of a minority government.” The non-essential nodes for “a” and for “minority” are removed based on the algorithm from Qian et al. (2008). Feature Explanation ORG2 arg-2 is an ORG entity. of0 government2 arg-2 is “government” and follows the word “of.” NP3 →PP2 There is a noun phrase containing both arguments, with arg-2 contained in a prepositional phrase inside the noun phrase. Table 2: Examples of unigram and bigram features extracted from Figure 1. baseline, we use only the few seed instances of the target relation type together with labeled negative relation instances (i.e. pairs of entities within the same sentence but having no relation) to train a binary classifier. In the second baseline, we take the union of the positive instances of both the target relation type and the auxiliary relation types as our positive training set, and together with the negative instances we train a binary classifier. Note that the second baseline method essentially learns 1014 a classifier for any relation type. Another existing solution to weakly-supervised learning problems is semi-supervised learning, e.g. bootstrapping. However, because our proposed transfer learning method can be combined with semi-supervised learning, here we do not include semi-supervised learning as a baseline. 4 A multi-task transfer learning solution We now present a multi-task transfer learning solution to the weakly-supervised relation extraction problem, which makes use of the labeled data from the auxiliary relation types. 4.1 Syntactic similarity between relation types To see why the auxiliary relation types may help the identification of the target relation type, let us first look at how different relation types may be related and even similar to each other. Based on our inspection of a sample of the ACE data, we find that instances of different relation types can share certain common syntactic structures. For example, the syntactic pattern “arg-1 of arg-2” strongly indicates that there exists some relation between the two arguments, although the nature of the relation may be well dependent on the semantic meanings of the two arguments. More examples are shown in Table 1. This observation suggests that some of the syntactic patterns learned from the auxiliary relation types may be transferable to the target relation type, making it easier to learn the target relation type and thus alleviating the insufficient training data problem with the target type. How can we incorporate this desired knowledge transfer process into our learning method? While one can make explicit use of these general syntactic patterns in a rule-based relation extraction system, here we restrict our attention to feature-based linear classifiers. 
We note that in feature-based linear classifiers, a useful syntactic pattern is translated into large weights for features related to the syntactic pattern. For example, if “arg-1 of arg-2” is a useful pattern, in the learned linear classifier we should have relatively large weights for features such as “the word of occurs before arg-2” or “a preposition occurs before arg-2,” or even more complex features such as “there is a prepositional phrase containing arg-2 attached to arg-1.” It is the weights of these generally useful features that are transferable from the auxiliary relation types to the target relation type.

4.2 Statistical learning model

As we have discussed, we want to force the linear classifiers for different relation types to share their model weights for those features that are related to the common syntactic patterns. Formally, we consider the following statistical learning model. Let $\omega^k$ denote the weight vector of the linear classifier that separates positive instances of auxiliary type $A_k$ from negative instances, and let $\omega^T$ denote a similar weight vector for the target type $T$. If different relation types are totally unrelated, these weight vectors should also be independent of each other. But because we observe similar syntactic structures across different relation types, we now assume that these weight vectors are related through a common component $\nu$:

$$\omega^T = \mu^T + \nu, \qquad \omega^k = \mu^k + \nu \quad \text{for } k = 1, \ldots, K.$$

If we assume that only weights of certain general features can be shared between different relation types, we can force certain dimensions of $\nu$ to be 0. We express this constraint by introducing a matrix $F$ and setting $F\nu = 0$. Here $F$ is a square matrix with all entries set to 0 except that $F_{i,i} = 1$ if we want to force $\nu_i = 0$. Now we can learn these weight vectors in a multi-task learning framework. Let $x$ represent the feature vector of a candidate relation instance, and $y \in \{+1, -1\}$ represent a class label. Let $D_T = \{(x_i^T, y_i^T)\}_{i=1}^{N_T}$ denote the set of labeled instances for the target type $T$. (Note that the number of positive instances in $D_T$ is very small.) And let $D_k = \{(x_i^k, y_i^k)\}_{i=1}^{N_k}$ denote the labeled instances for the auxiliary type $A_k$. We learn the optimal weight vectors $\{\hat{\mu}^k\}_{k=1}^{K}$, $\hat{\mu}^T$ and $\hat{\nu}$ by optimizing the following objective function:

$$\bigl(\{\hat{\mu}^k\}_{k=1}^{K}, \hat{\mu}^T, \hat{\nu}\bigr) = \operatorname*{arg\,min}_{\{\mu^k\}, \mu^T, \nu,\; F\nu = 0} \Bigl[ L(D_T, \mu^T + \nu) + \sum_{k=1}^{K} L(D_k, \mu^k + \nu) + \lambda_{\mu}^{T} \|\mu^T\|^2 + \sum_{k=1}^{K} \lambda_{\mu}^{k} \|\mu^k\|^2 + \lambda_{\nu} \|\nu\|^2 \Bigr]. \quad (1)$$

The objective function follows standard empirical risk minimization with regularization. Here $L(D, \omega)$ is the aggregated loss of labeling $x$ with $y$ for all $(x, y)$ in $D$, using weight vector $\omega$. In logistic regression models, the loss function is the negative log likelihood, that is,

$$L(D, \omega) = -\sum_{(x, y) \in D} \log p(y|x, \omega), \qquad p(y|x, \omega) = \frac{\exp(\omega_y \cdot x)}{\sum_{y' \in \{+1, -1\}} \exp(\omega_{y'} \cdot x)}.$$

$\lambda_{\mu}^{T}$, $\lambda_{\mu}^{k}$ and $\lambda_{\nu}$ are regularization parameters. By adjusting their values, we can control the degree of weight sharing among the relation types. The larger the ratio $\lambda_{\mu}^{T}/\lambda_{\nu}$ (or $\lambda_{\mu}^{k}/\lambda_{\nu}$) is, the more we believe that the model for $T$ (or $A_k$) should conform to the common model, and the smaller the type-specific weight vector $\mu^T$ (or $\mu^k$) will be. The model presented above is based on our previous work (Jiang and Zhai, 2007c), which bears the same spirit as some other recent work on multi-task learning (Ando and Zhang, 2005; Evgeniou and Pontil, 2004; Daume III, 2007). It is general for any transfer learning problem with auxiliary labeled data from similar tasks.
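To make the objective concrete, here is a minimal NumPy sketch that evaluates Equation (1) for given parameter values. It is not the paper's implementation: it uses the standard single-weight-vector form of the binary logistic loss rather than the two-vector form above, and all names and data layouts are assumptions.

```python
import numpy as np

def objective(mu_T, mus, nu, D_T, Ds, lam_T, lams, lam_nu, shared_mask):
    """Value of the multi-task objective in Equation (1) (sketch).

    mu_T        -- target-specific weight vector, shape (d,)
    mus         -- list of K auxiliary-specific weight vectors, each shape (d,)
    nu          -- shared weight vector, shape (d,); zeroed outside shared_mask,
                   which plays the role of the F*nu = 0 constraint
    D_T, Ds     -- (X, y) pairs: X has shape (n, d), y entries are +1 / -1
    lam_T, lams, lam_nu -- regularization parameters
    """
    nu = nu * shared_mask                       # enforce the sharing constraint

    def neg_log_likelihood(X, y, w):
        # binary logistic loss: sum over examples of log(1 + exp(-y * (w . x)))
        return np.logaddexp(0.0, -y * (X @ w)).sum()

    X_T, y_T = D_T
    value = neg_log_likelihood(X_T, y_T, mu_T + nu) + lam_T * np.dot(mu_T, mu_T)
    for (X_k, y_k), mu_k, lam_k in zip(Ds, mus, lams):
        value += neg_log_likelihood(X_k, y_k, mu_k + nu) + lam_k * np.dot(mu_k, mu_k)
    return value + lam_nu * np.dot(nu, nu)
```

Minimizing this jointly over mu_T, the mu_k, and nu (for example with a gradient-based optimizer on the concatenated parameters) yields the shared and type-specific components; a large lam_T relative to lam_nu shrinks mu_T toward the shared model, which is the lever used later (Section 5.3) to trade precision against recall.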
Here we are mostly interested in the model’s applicability and effectiveness on the relation extraction problem. 4.3 Feature separation Recall that we impose a constraint Fν = 0 when optimizing the objective function. This constraint gives us the freedom to force only the weights of a subset of the features to be shared among different relation types. A remaining question is how to set this matrix F, that is, how to determine the set of general features to use. We propose two ways of setting this matrix F. Automatically setting F One way is to fix the number of non-zero entries in ν to be a pre-defined number H of general features, and allow F to change during the optimization process. This can be done by repeating the following two steps until F converges: 1. Fix F, and optimize the objective function as in Equation (1). 2. Fix ¡ µT + ν ¢ and ¡ µk + ν ¢ , and search for µT , {µk} and ν that minimizes ¡ λT µ ∥µT ∥2 + PK k=1 λk µ∥µk∥2 + λν∥ν∥2¢ , subject to the constraint that at most H entries of ν are nonzero. Human guidance Another way to select the general features is to follow some guidance from human knowledge. Recall that in Section 4.1 we find that the commonality among different relation types usually lies in the syntactic structures between the two arguments. This observation gives some intuition about how to separate general features from typespecific features. In particular, here we consider two hypotheses regarding the generality of different kinds of features. Argument word features: We hypothesize that the head words of the relation arguments are more likely to be strong indicators of specific relation types rather than any relation type. For example, if an argument has the head word “sister,” it strongly indicates a family relation. We refer to the set of features that contain any head word of an argument as “arg-word” features. Entity type features: We hypothesize that the entity types and subtypes of the relation arguments are also more likely to be associated with specific relation types. For example, arguments that are location entities may be strongly correlated with physical proximity relations. We refer to the set of features that contain the entity type or subtype of an argument as “arg-NE” features. We hypothesize that the arg-word and arg-NE features are type-specific and therefore should be excluded from the set of general features. We can force the weights of these hypothesized typespecific features to be 0 in the shared weight vector ν, i.e. we can set the matrix F to achieve this feature separation. Combined method We can also combine the automatic way of setting F with human guidance. Specifically, we still follow the first automatic procedure to choose general features, but we then filter out any hypothesized type-specific feature from the set of general features chosen by the automatic procedure. 4.4 Imposing entity type constraints Finally, we consider how we can exploit additional human knowledge about the target relation type T to further improve the classifier. We note that usually when a relation type is defined, we often have strong preferences or even hard constraints on the types of entities that can possibly be the two relation arguments. 
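As a concrete illustration, such entity type constraints can be applied as a simple post-hoc filter over predicted positive instances; the data structures and the example type pair below are assumptions, not taken from the paper.

```python
def apply_type_constraints(predicted_positives, allowed_type_pairs):
    """Keep only predicted positives whose argument entity types are allowed (sketch).

    predicted_positives -- iterable of (arg1_type, arg2_type, instance) triples
    allowed_type_pairs  -- set of (arg1_type, arg2_type) combinations permitted by
                           the target relation's definition, e.g. {("PER", "ORG")}
                           for an employment-style relation
    """
    return [inst for t1, t2, inst in predicted_positives
            if (t1, t2) in allowed_type_pairs]
```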
These type constraints can help us 1016 Target Type T BL BL-A TL-auto TL-guide TL-comb TL-NE P 0.0000 0.1692 0.2920 0.2934 0.3325 0.5056 Physical R 0.0000 0.0848 0.1696 0.1722 0.2383 0.2316 F 0.0000 0.1130 0.2146 0.2170 0.2777 0.3176 Personal P 1.0000 0.0804 0.1005 0.3069 0.3214 0.6412 /Social R 0.0386 0.1708 0.1598 0.7245 0.7686 0.7631 F 0.0743 0.1093 0.1234 0.4311 0.4533 0.6969 Employment P 0.9231 0.3561 0.5230 0.5428 0.5973 0.7145 /Membership R 0.0075 0.1850 0.2617 0.2648 0.3632 0.3601 /Subsidiary F 0.0148 0.2435 0.3488 0.3559 0.4518 0.4789 AgentP 0.8750 0.0603 0.1813 0.1825 0.1835 0.1967 Artifact R 0.0343 0.2353 0.6471 0.6225 0.6422 0.6373 F 0.0660 0.0960 0.2833 0.2822 0.2854 0.3006 PER/ORG P 0.8889 0.0838 0.1510 0.1592 0.1667 0.1844 Affiliation R 0.0567 0.4965 0.6950 0.8369 0.8794 0.8723 F 0.1067 0.1434 0.2481 0.2676 0.2802 0.3045 GPE P 1.0000 0.2530 0.3904 0.3604 0.3560 0.5824 Affiliation R 0.0077 0.4509 0.6416 0.5992 0.6166 0.6127 F 0.0153 0.3241 0.4854 0.4501 0.4513 0.5972 P 1.0000 0.0298 0.0503 0.0471 0.1370 0.1370 Discourse R 0.0036 0.0789 0.1075 0.1147 0.3477 0.3477 F 0.0071 0.0433 0.0685 0.0668 0.1966 0.1966 P 0.8124 0.1475 0.2412 0.2703 0.2992 0.4231 Average R 0.0212 0.2432 0.3832 0.4764 0.5509 0.5464 F 0.0406 0.1532 0.2532 0.2958 0.3423 0.4132 Table 3: Comparison of different methods on ACE 2004 data set. P, R and F stand for precision, recall and F1, respectively. remove some false positive instances. We therefore manually identify the entity type constraints for each target relation type based on the definition of the relation type given in the ACE annotation guidelines, and impose these type constraints as a final refinement step on top of the predicted positive instances. 5 Experiments 5.1 Data set and experiment setup We used the ACE 2004 data set to evaluate our proposed methods. There are seven relation types defined in ACE 2004. After data cleaning, we obtained 4290 positive instances among 48614 candidate relation instances. We took each relation type as the target type and used the remaining types as auxiliary types. This gave us seven sets of experiments. In each set of experiments for a single target relation type, we randomly divided all the data into five subsets, and used each subset for testing while using the other four subsets for training, i.e. each experiment was repeated five times with different training and test sets. Each time, we removed most of the positive instances of the target type from the training set except only a small number S of seed instances. This gave us the weakly-supervised setting. We kept all the positive instances of the target type in the test set. In order to concentrate on the classification accuracy for the target relation type, we removed the positive instances of the auxiliary relation types from the test set, although in practice we need to extract these auxiliary relation instances using learned classifiers for these relation types. 5.2 Comparison of different methods We first show the comparison of our proposed multi-task transfer learning methods with the two baseline methods described in Section 3.2. The performance on each target relation type and the average performance across seven types are shown in Table 3. BL refers to the first baseline and BLA refers to the second baseline which uses auxil1017 λT µ 100 1000 10000 P 0.6265 0.3162 0.2992 R 0.1170 0.3959 0.5509 F 0.1847 0.2983 0.3423 Table 4: The average performance of TL-comb with different λT µ . (λk µ = 104 and λν = 1.) iary relation instances. 
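Stepping back for a moment to the type-constraint refinement of Section 4.4, a minimal sketch of that final step is given below. The constraint table shown is hypothetical and only illustrates the idea of discarding predicted positives whose argument entity types are not admissible for the target relation; it is not the constraint set actually derived from the ACE guidelines.

```python
# Hypothetical constraint table: admissible (arg1 entity type, arg2 entity type)
# pairs per target relation type.  The entries are examples only.
TYPE_CONSTRAINTS = {
    "Physical": {("PER", "GPE"), ("PER", "LOC"), ("PER", "FAC")},
    "Employment/Membership/Subsidiary": {("PER", "ORG"), ("PER", "GPE")},
}

def refine_with_type_constraints(predictions, target_type):
    """Final refinement step: flip predicted positives whose argument entity
    types are not admissible for the target relation type."""
    allowed = TYPE_CONSTRAINTS[target_type]
    refined = []
    for inst, label in predictions:
        if label == +1 and (inst["arg1_type"], inst["arg2_type"]) not in allowed:
            label = -1                     # reject as a likely false positive
        refined.append((inst, label))
    return refined
```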
The four TL methods are all based on the multi-task transfer learning framework. TL-auto sets F automatically within the optimization problem itself. TL-guide chooses all features except arg-word and arg-NE features as general features and sets F accordingly. TL-comb combines TL-auto and TL-guide, as described in Section 4.3. Finally, TL-NE builds on top of TLcomb and uses the entity type constraints to refine the predictions. In this set of experiments, the number of seed instances for each target relation type was set to 10. The parameters were set to their optimal values (λT µ = 104, λk µ = 104, λν = 1, and H = 500). As we can see from the table, first of all, BL generally has high precision but very low recall. BL-A performs better than BL in terms of F1 because it gives better recall. However, BL-A still cannot achieve as high recall as the TL methods. This is probably because the model learned by BLA still focuses more on type-specific features for each relation type rather than on the commonly useful general features, and therefore does not help much in classifying the target relation type. The four TL methods all outperform the two baseline methods. TL-comb performs better than both TL-auto and TL-guide, which shows that while we can either choose general features automatically by the learning algorithm or manually with human knowledge, it is more effective to combine human knowledge with the multi-task learning framework. Not surprisingly, TL-NE improves the precision over TL-comb without hurting the recall much. Ideally, TL-NE should not decrease recall if the type constraints are strictly observed in the data. We find that it is not always the case with the ACE data, leading to the small decrease of recall from TL-comb to TL-NE. 5.3 The effect of λT µ Let us now take a look at the effect of using different λT µ . As we can see from Table 4, smaller λT µ gives higher precision while larger λT µ gives 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 100 1000 10000 avg F1 H TL-comb TL-auto BL-A Figure 2: Performance of TL-comb and TL-auto as H changes. higher recall. These results make sense because the larger λT µ is, the more we penalize large weights of µT . As a result, the model for the target type is forced to conform to the shared model ν and prevented from overfitting the few seed target instances. λT µ is therefore a useful parameter to help us control the tradeoff between precision and recall for the target type. While varying λk µ also gives similar effect for type Ak, we found that setting λk µ to smaller values would not help T because in this case the auxiliary relation instances would be used more for training the type-specific component µk rather than the common component ν. 5.4 Sensitivity of H Another parameter in the multi-task transfer learning framework is the number of general features H, i.e. the number of non-zero entries in the shared weight vector ν. To see how the performance may vary as H changes, we plot the performance of TL-comb and TL-auto in terms of the average F1 across the seven target relation types, with H ranging from 100 to 50000. As we can see in Figure 2, the performance is relatively stable, and always above BL-A. This suggests that the performance of TL-comb and TL-auto is not very sensitive to the value of H. 5.5 Hypothesized type-specific features In Section 4.3, we showed two sets of hypothesized type-specific features, namely, arg-word features and arg-NE features. 
We also experimented with each set separately to see whether both sets are useful. The comparison is shown in Table 5. As we can see, using either set of typespecific features in either TL-guide or TL-comb can improve the performance over BL-A, but the 1018 arg-word arg-NE union TL-guide 0.2095 0.2983 0.2958 TL-comb 0.2215 0.3331 0.3423 BL-A 0.1532 Table 5: Average F1 using different hypothesized type-specific features. 0 0.1 0.2 0.3 0.4 0.5 0.6 10 100 1000 avg F1 S TL-NE (104) TL-NE (102) BL BL-A Figure 3: Performance of TL-NE, BL and BL-A as the number of seed instances S of the target type increases. (H = 500. λT µ was set to 104 and 102). arg-NE features are probably more type-specific than arg-word features because they give better performance. Using the union of the two sets is still the best for TL-comb. 5.6 Changing the number of seed instances Finally, we compare TL-NE with BL and BL-A when the number of seed instances increases. We set S from 5 up to 1000. When S is large, the problem becomes more like traditional supervised learning, and our setting of λT µ = 104 is no longer optimal because we are now not afraid of overfitting the large set of seed target instances. Therefore we also included another TL-NE experiment with λT µ set to 102. The comparison of the performance is shown in Figure 3. We see that as S increases, both BL and BL-A catch up, and BL overtakes BL-A when S is sufficiently large because BL uses positive training examples only from the target type. Overall, TL-NE still outperforms the two baselines in most of the cases over the wide range of values of S, but the optimal value for λT µ decreases as S increases, as we have suspected. The results show that if λT µ is set appropriately, our multi-task transfer learning method is robust and advantageous over the baselines under both the weakly-supervised setting and the traditional supervised setting. 6 Conclusions and future work In this paper, we applied multi-task transfer learning to solve a weakly-supervised relation extraction problem, leveraging both labeled instances of auxiliary relation types and human knowledge including hypotheses on feature generality and entity type constraints. In the multi-task learning framework that we introduced, different relation types are treated as different but related tasks that are learned together, with the common structures among the relation types modeled by a shared weight vector. The shared weight vector corresponds to the general features across different relation types. We proposed to choose the general features either automatically inside the learning algorithm or guided by human knowledge. We also leveraged additional human knowledge about the target relation type in the form of entity type constraints. Experiment results on the ACE 2004 data show that the multi-task transfer learning method achieves the best performance when we combine human guidance with automatic general feature selection, followed by imposing the entity type constraints. The final method substantially outperforms two baseline methods, improving the average F1 measure from 0.1532 to 0.4132 when only 10 seed target instances are used. Our work is the first to explore transfer learning for relation extraction, and we have achieved very promising results. Because of the practical importance of transfer learning and adaptation for relation extraction due to lack of training data in new domains, we hope our study and findings will lead to further investigation into this problem. 
There are still many issues that remain unsolved. For example, we have not looked at the degrees of relatedness between different pairs of relation types. Presumably, when adapting to a specific target relation type, we want to choose the most similar auxiliary relation types to use. Our current study is based on ACE relation types. It would also be interesting to study similar problems in other domains, for example, the protein-protein interaction extraction problem in biomedical text mining. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853, November. 1019 Andrew Arnold, Ramesh Nallapati, and William W. Cohen. 2008. Exploiting feature hierarchy for transfer learning in named entity recognition. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 245– 253. Michele Banko and Oren Etzioni. 2008. The tradeoffs between open and traditional relation extraction. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, pages 28–36. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 120–128. Razvan Bunescu and Raymond Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 724–731. Rich Caruana. 1997. Multitask learning. Machine Learning, 28:41–75. Jinxiu Chen, Donghong Ji, Chew Lim Tan, and Zhengyu Niu. 2006. Relation extraction using label propagation based semi-supervised learning. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 129–136. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the 42nd Meeting of the Association for Computational Linguistics, pages 423–429. Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 256–263. Mark Dredze and Koby Crammer. 2008. Online methods for multi-domain learning and adaptation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 689–697. Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi-task learning. In Proceedings of the 10th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109– 117. Jing Jiang and ChengXiang Zhai. 2007a. Instance weighting for domain adaptation in nlp. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 264–271. Jing Jiang and ChengXiang Zhai. 2007b. A systematic exploration of the feature space for relation extraction. In Proceedings of the Human Language Technologies Conference, pages 113–120. Jing Jiang and ChengXiang Zhai. 2007c. A two-stage approach to domain adaptation for statistical classifiers. In Proceedings of the 16th ACM Conference on Information and Knowledge Management, pages 401–410. Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. 
In Proceedings of the 22nd International Conference on Computational Linguistics, pages 697–704. Sebastian Thrun. 1996. Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8, pages 640–646. Feiyu Xu, Hans Uszkoreit, Hong Li, and Niko Felger. 2008. Adaptation of relation extraction rules to new domains. In Proceedings of the 6th International Conference on Language Resources and Evaluation, pages 2446–2450. Min Zhang, Jie Zhang, and Jian Su. 2006. Exploring syntactic features for relation extraction using a convolution tree kernel. In Proceedings of the Human Language Technology Conference, pages 288–295. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 419–426. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 427–434. GuoDong Zhou, Min Zhang, DongHong Ji, and QiaoMing Zhu. 2008. Hierarchical learning strategy in semantic relation extraction. Information Processing and Management, 44(3):1008–1021. 1020
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1021–1029, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Unsupervised Relation Extraction by Mining Wikipedia Texts Using Information from the Web Yulan Yan, Naoaki Okazaki, Yutaka Matsuo, Zhenglu Yang and Mitsuru Ishizuka The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan [email protected] [email protected] [email protected] [email protected] [email protected] Abstract This paper presents an unsupervised relation extraction method for discovering and enhancing relations in which a specified concept in Wikipedia participates. Using respective characteristics of Wikipedia articles and Web corpus, we develop a clustering approach based on combinations of patterns: dependency patterns from dependency analysis of texts in Wikipedia, and surface patterns generated from highly redundant information related to the Web. Evaluations of the proposed approach on two different domains demonstrate the superiority of the pattern combination over existing approaches. Fundamentally, our method demonstrates how deep linguistic patterns contribute complementarily with Web surface patterns to the generation of various relations. 1 Introduction Machine learning approaches for relation extraction tasks require substantial human effort, particularly when applied to the broad range of documents, entities, and relations existing on the Web. Even with semi-supervised approaches, which use a large unlabeled corpus, manual construction of a small set of seeds known as true instances of the target entity or relation is susceptible to arbitrary human decisions. Consequently, a need exists for development of semantic information-retrieval algorithms that can operate in a manner that is as unsupervised as possible. Currently, the leading methods in unsupervised information extraction collect redundancy information from a local corpus or use the Web as a corpus (Pantel and Pennacchiotti, 2006); (Banko et al., 2007); (Bollegala et al., 2007): (Fan et al., 2008); (Davidov and Rappoport, 2008). The standard process is to scan or search the corpus to collect co-occurrences of word pairs with strings between them, and then to calculate term co-occurrence or generate surface patterns. The method is used widely. However, even when patterns are generated from well-written texts, frequent pattern mining is non-trivial because the number of unique patterns is loose, but many patterns are non-discriminative and correlated. A salient challenge and research interest for frequent pattern mining is abstraction away from different surface realizations of semantic relations to discover discriminative patterns efficiently. Linguistic analysis is another effective technology for semantic relation extraction, as described in many reports such as (Kambhatla, 2004); (Bunescu and Mooney, 2005); (Harabagiu et al., 2005); (Nguyen et al., 2007). Currently, linguistic approaches for semantic relation extraction are mostly supervised, relying on pre-specification of the desired relation or initial seed words or patterns from hand-coding. The common process is to generate linguistic features based on analyses of the syntactic features, dependency, or shallow semantic structure of text. Then the system is trained to identify entity pairs that assume a relation and to classify them into pre-defined relations. 
The advantage of these methods is that they use linguistic technologies to learn semantic information from different surface expressions. As described herein, we consider integrating linguistic analysis with Web frequency information to improve the performance of unsupervised relation extraction. As (Banko et al., 2007) reported, “deep” linguistic technology presents problems when applied to heterogeneous text on the Web. Therefore, we do not parse information from the Web corpus, but from well written texts. Particularly, we specifically examine unsupervised relation extraction from existing texts of Wikipedia articles. Wikipedia resources of a fun1021 damental type are of concepts (e.g., represented by Wikipedia articles as a special case) and their mutual relations. We propose our method, which groups concept pairs into several clusters based on the similarity of their contexts. Contexts are collected as patterns of two kinds: dependency patterns from dependency analysis of sentences in Wikipedia, and surface patterns generated from highly redundant information from the Web. The main contributions of this paper are as follows: • Using characteristics of Wikipedia articles and the Web corpus respectively, our study yields an example of bridging the gap separating “deep” linguistic technology and redundant Web information for Information Extraction tasks. • Our experimental results reveal that relations are extractable with good precision using linguistic patterns, whereas surface patterns from Web frequency information contribute greatly to the coverage of relation extraction. • The combination of these patterns produces a clustering method to achieve high precision for different Information Extraction applications, especially for bootstrapping a high-recall semi-supervised relation extraction system. 2 Related Work (Hasegawa et al., 2004) introduced a method for discovering a relation by clustering pairs of cooccurring entities represented as vectors of context features. They used a simple representation of contexts; the features were words in sentences between the entities of the candidate pairs. (Turney, 2006) presented an unsupervised algorithm for mining the Web for patterns expressing implicit semantic relations. Given a word pair, the output list of lexicon-syntactic patterns was ranked by pertinence, which showed how well each pattern expresses the relations between word pairs. (Davidov et al., 2007) proposed a method for unsupervised discovery of concept specific relations, requiring initial word seeds. That method used pattern clusters to define general relations, specific to a given concept. (Davidov and Rappoport, 2008) presented an approach to discover and represent general relations present in an arbitrary corpus. That approach incorporated a fully unsupervised algorithm for pattern cluster discovery, which searches, clusters, and merges highfrequency patterns around randomly selected concepts. The field of Unsupervised Relation Identification (URI)—the task of automatically discovering interesting relations between entities in large text corpora—was introduced by (Hasegawa et al., 2004). Relations are discovered by clustering pairs of co-occurring entities represented as vectors of context features. (Rosenfeld and Feldman, 2006) showed that the clusters discovered by URI are useful for seeding a semi-supervised relation extraction system. 
To compare different clustering algorithms, feature extraction and selection method, (Rosenfeld and Feldman, 2007) presented a URI system that used surface patterns of two kinds: patterns that test two entities together and patterns that test either of two entities. In this paper, we propose an unsupervised relation extraction method that combines patterns of two types: surface patterns and dependency patterns. Surface patterns are generated from the Web corpus to provide redundancy information for relation extraction. In addition, to obtain semantic information for concept pairs, we generate dependency patterns to abstract away from different surface realizations of semantic relations. Dependency patterns are expected to be more accurate and less spam-prone than surface patterns from the Web corpus. Surface patterns from redundancy Web information are expected to address the data sparseness problem. Wikipedia is currently widely used information extraction as a local corpus; the Web is used as a global corpus. 3 Characteristics of Wikipedia articles Wikipedia, unlike the whole Web corpus, has several characteristics that markedly facilitate information extraction. First, as an earlier report (Giles, 2005) explained, Wikipedia articles are much cleaner than typical Web pages. Because the quality is not so different from standard written English, we can use “deep” linguistic technologies, such as syntactic or dependency parsing. Secondly, Wikipedia articles are heavily crosslinked, in a manner resembling cross-linking of the Web pages. (Gabrilovich and Markovitch, 2006) assumed that these links encode numerous interesting relations among concepts, and that they provide an important source of information in ad1022 dition to the article texts. To establish the background for this paper, we start by defining the problem under consideration: relation extraction from Wikipedia. We use the encyclopedic nature of the corpus by specifically examining the relation extraction between the entitled concept (ec) and a related concept (rc), which are described in anchor text in this article. A common assumption is that, when investigating the semantics in articles such as those in Wikipedia (e.g. semantic Wikipedia (Volkel et al., 2006)), key information related to a concept described on a page p lies within the set of links l(p) on that page; particularly, it is likely that a salient semantic relation r exists between p and a related page p′ ∈l(p). Given the scenario we described along with earlier related works, the challenges we face are these: 1) enumerating all potential relation types of interest for extraction is highly problematic for corpora as large and varied as Wikipedia; 2) training data or seed data are difficult to label. Considering (Davidov and Rappoport, 2008), which describes work to get the target word and relation cluster given a single (‘hook’) word, their method depends mainly on frequency information from the Web to obtain a target and clusters. Attempting to improve the performance, our solution for these challenges is to combine frequency information from the Web and the “high quality” characteristic of Wikipedia text. 4 Pattern Combination Method for Relation Extraction With the scene and challenges stated, we propose a solution in the following way. The intuitive idea is that we integrate linguistic technologies on highquality text in Wikipedia and Web mining technologies on a large-scale Web corpus. 
In this section, we first provide an overview of our method along with the function of the main modules. Subsequently, we explain each module in the method in detail. 4.1 Overview of the Method Given a set of Wikipedia articles as input, our method outputs a list of concept pairs for each article with a relation label assigned to each concept pair. Briefly, the proposed approach has four main modules, as depicted in Fig. 1. • Text Preprocessor and Concept Pair Collector preprocesses Wikipedia articles to Wikipedia articles Preprocessor Concept pair collection Sentence filtering Web context collector Web Context Ti = t1, t2…tn Pi = p1,p2…pn Dependency pattern Extractor n1i,…n1j … ni2i, ..n2j ni,…nj … surface clustering depend clustering Relation list Output: relations for each article input: Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell ... ... … … … … ... ... … … … … ... ... … … … … Tyco becoming joined comp: CEO obj: cc: joined obj: subj: joined obj: cc: Clustering approach Wikipedia articles Preprocessor Concept pair collection Sentence filtering Web context collector Web Context Ti = t1, t2…tn Pi = p1,p2…pn Dependency pattern Extractor n1i,…n1j … ni2i, ..n2j ni,…nj … surface clustering depend clustering Relation list Output: relations for each article input: Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell Eric Emerson Schmidt CEO a-member-of Born Google Board of Directors Washington, D.C. Is-a chairman Novell ... ... … … … … ... ... … … … … ... ... … … … … Tyco becoming joined comp: CEO obj: cc: joined obj: subj: joined obj: cc: Tyco becoming joined comp: CEO obj: cc: joined obj: subj: joined obj: cc: Clustering approach Figure 1: Framework of the proposed approach split text and filter sentences. It outputs concept pairs, each of which has an accompanying sentence. • Web Context Collector collects context information from the Web and generates ranked relational terms and surface patterns for each concept pair. • Dependency Pattern Extractor generates dependency patterns for each concept pair from corresponding sentences in Wikipedia articles. • Clustering Algorithm clusters concept pairs based on their context. It consists of the two sub-modules described below. – Depend Clustering, which merges concept pairs using dependency patterns alone, aiming at obtaining clusters of concept pairs with good precision; – Surface Clustering, which clusters concept pairs using surface patterns based on the resultant clusters of depend clustering. The aim is to merge more concept pairs into existing clusters with surface patterns to improve the coverage of clusters. 1023 4.2 Text Preprocessor and Concept Pair Collector This module pre-processes Wikipedia article texts to collect concept pairs and corresponding sentences. 
Given a concept described in a Wikipedia article, our idea of preprocessing executes initial consideration of all anchor-text concepts linking to other Wikipedia articles in the article as related concepts that might share a semantic relation with the entitled concept. The link structure, more particularly, the structure of outgoing links, provides a simple mechanism for identifying relevant articles. We split text into sentences and select sentences containing one reference of an entitled concept and one of the linked texts for the dependency pattern extractor module. 4.3 Web Context Collector Querying a concept pair using a search engine (Google), we characterize the semantic relation between the pair by leveraging the vast size of the Web. Our hypothesis is that there exist some key terms and patterns that provide clues to the relations between pairs. From the snippets retrieved by the search engine, we extract relational information of two kinds: ranked relational terms as keywords and surface patterns. Here surface patterns are generated with support of ranked relational terms. 4.3.1 Relational Term Ranking To collect relational terms as indicators for each concept pair, we look for verbs and nouns from qualified sentences in the snippets instead of simply finding verbs. Using only verbs as relational terms might engender the loss of various important relations, e.g. noun relations “CEO”, “founder” between a person and a company. Therefore, for each concept pair, a list of relational terms is collected. Then all the collected terms of all concept pairs are combined and ranked using an entropybased algorithm which is described in (Chen et al., 2005). With their algorithm, the importance of terms can be assessed using the entropy criterion, which is based on the assumption that a term is irrelevant if its presence obscures the separability of the dataset. After the ranking, we obtain a global ranked list of relational terms Tall for the whole dataset (all the concept pairs). For each concept pair, a local list of relational terms Tcp is sorted according to the terms’ order in Tall. Then from the relational term list Tcp, a keyword tcp is selected Table 1: Surface patterns for a concept pair Pattern Pattern ec ceo rc rc found ec ceo rc found ec rc succeed as ceo of ec rc be ceo of ec ec ceo of rc ec assign rc as ceo ec found by ceo rc ceo of ec rc ec found in by rc for each concept pair cp as the first term appearing in the term list Tcp. Keyword tcp will be used to initialize the clustering algorithm in Section 4.5.1. 4.3.2 Surface Pattern Generation Because simply taking the entire string between two concept words captures an excess of extraneous and incoherent information, we use Tcp of each concept pair as a key for surface pattern generation. We classified words into Content Words (CWs) and Functional Words (FWs). From each snippet sentence, the entitled concept, related concept, or the keyword kcp is considered to be a Content Word (CW). Our idea of obtaining FWs is to look for verbs, nouns, prepositions, and coordinating conjunctions that can help make explicit the hidden relations between the target nouns. Surface patterns have the following general form. [CW1] Infix1 [CW2] Infix2 [CW3] (1) Therein, Infix1 and Infix2 respectively contain only and any number of FWs. A pattern example is “ec assign rc as ceo (keyword)”. 
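As a concrete illustration of the pattern form above, the sketch below generates surface patterns of the shape [CW1] Infix1 [CW2] Infix2 [CW3] from one snippet sentence. It is a simplification of the actual pipeline: concepts are matched as single tokens, and the predicate is_fw, which stands in for the part-of-speech tests selecting verbs, nouns, prepositions, and coordinating conjunctions, is assumed to be supplied by the caller.

```python
def surface_patterns(tokens, ec, rc, keyword, is_fw):
    """Generate surface patterns of the general form
        [CW1] Infix1 [CW2] Infix2 [CW3]
    Content words (CWs) are occurrences of the entitled concept, the related
    concept, or the keyword; each infix keeps only the functional words
    selected by is_fw.  Concepts are matched as single tokens for simplicity."""
    norm = ["ec" if t == ec else "rc" if t == rc else t.lower() for t in tokens]
    cws = [i for i, t in enumerate(norm) if t in ("ec", "rc", keyword)]
    patterns = []
    for a, b, c in zip(cws, cws[1:], cws[2:]):     # windows bounded by CWs
        infix1 = [t for t in norm[a + 1:b] if is_fw(t)]
        infix2 = [t for t in norm[b + 1:c] if is_fw(t)]
        patterns.append(" ".join([norm[a], *infix1, norm[b], *infix2, norm[c]]))
    return patterns
```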
All generated patterns are sorted by their frequency, and all occurrences of the entitled concept and related concept are replaced with “ec” and “rc”, respectively for pattern matching of different concept pairs. Table 1 presents examples of surface patterns for a sample concept pair. Pattern windows are bounded by CWs to obtain patterns more precisely because 1) if we use only the string between two concepts, it may not contain some important relational information, such as “ceo ec resign rc” in Table 1; 2) if we generate patterns by setting a windows surrounding two concepts, the number of unique patterns is often exponential. 4.4 Dependency Pattern Extractor In this section, we describe how to obtain dependency patterns for relation clustering. After preprocessing, selected sentences that contain at least 1024 one mention of an entitled concept or related concept are parsed into dependency structures. We define dependency patterns as sub-paths of the shortest dependency path between a concept pair for two reasons. One is that the shortest path dependency kernels outperform dependency tree kernels by offering a highly condensed representation of the information needed to assess their relation (Bunescu and Mooney, 2005). The other reason is that embedded structures of the linguistic representation are important for obtaining good coverage of the pattern acquisition, as explained in (Culotta and Sorensen, 2005); (Zhang et al., 2006). The process of inducing dependency patterns has two steps. 1. Shortest dependency path inducement. From the original dependency tree structure by parsing the selected sentence for each concept pair, we first induce the shortest dependency path with the entitled concept and related concept. 2. Dependency pattern generation. We use a frequent tree-mining algorithm (Zaki, 2002) to generate sub-paths as dependency patterns from the shortest dependency path for relation clustering. 4.5 Clustering Algorithm for Relation Extraction In this subsection, we present a clustering algorithm that merges concept pairs based on dependency patterns and surface patterns. The algorithm is based on k-means clustering for relation clustering. The dependency pattern has the properties of being more accurate, but the Web context has the advantage of containing much more redundant information than Wikipedia. Our idea of concept pair clustering is a two-step clustering process: first it clusters concept pairs into clusters with good precision using dependency patterns; then it improves the coverage of the clusters using surface patterns. 4.5.1 Initial Centroid Selection and Distance Function Definition The standard k-means algorithm is affected by the choice of seeds and the number of clusters k. However, as we claimed in the Introduction section, because we aim to extract relations from Wikipedia articles in an unsupervised manner, cluster number k is unknown and no good centroids can be predicted. As described in this paper, we select centroids based on the keyword tcp of each concept pair. First of all, all concept pairs are grouped by their keywords tcp. Let G = {G1, G2, ...Gn} be the resultant groups, where each Gi = {cpi1, cpi2, ...} identify a group of concept pairs sharing the same keyword tcp (such as “CEO”). We rank all the groups by their number of concept pairs and then choose the top k groups. Then a centroid ci is selected for each group Gi by Eq. 2. 
c_i = argmax_{cp ∈ G_i} |{ cp_ij : dis1(cp_ij, cp) + λ · dis2(cp_ij, cp) ≤ D_z, 1 ≤ j ≤ |G_i| }|   (2)

We take the centroid of each group to be the concept pair that has the most other concept pairs in the same group within distance D_z of it. D_z is a threshold used to avoid noisy concept pairs; we set it to 1/3. The parameter λ balances the contribution of dependency patterns and surface patterns.

dis1 is the distance function between the dependency pattern sets DP_i and DP_j of two concept pairs cp_i and cp_j. The distance is determined by the number of overlapping dependency patterns, as in Eq. 3:

dis1(cp_i, cp_j) = 1 − |DP_i ∩ DP_j| / √(|DP_i| · |DP_j|)   (3)

dis2 is the distance function between the surface pattern sets of two concept pairs. To compute the distance over surface patterns, we implement the distance function dis2(cp_i, cp_j) shown in Fig. 2.

Algorithm 1: distance function dis2(cp_i, cp_j)
  Input:  SP1 = {sp_11, ..., sp_1m} (surface patterns of cp_i)
          SP2 = {sp_21, ..., sp_2n} (surface patterns of cp_j)
  Output: dis (distance between SP1 and SP2)
  define an m × n distance matrix A with A_ij = LD(sp_1i, sp_2j) / max(|sp_1i|, |sp_2j|), 1 ≤ i ≤ m, 1 ≤ j ≤ n
  dis ← 0
  repeat min(m, n) times:
      (x, y) ← argmin_{1≤i≤m, 1≤j≤n} A_ij
      dis ← dis + A_xy / min(m, n)
      A_x* ← 1;  A_*y ← 1
  return dis

Figure 2: Distance function over surface patterns

As shown in Fig. 2, the distance algorithm first defines an m × n distance matrix A, and then repeatedly selects the two nearest sequences and sums up their distances. In computing dis2, we use the Levenshtein distance LD to measure the difference between two surface patterns. The Levenshtein distance is a metric for measuring the amount of difference between two sequences (i.e., the so-called edit distance). Each generated surface pattern is a sequence of words. The distance between two surface patterns is defined as the fraction of the LD value over the length of the longer sequence. For estimating the number of clusters k, we apply the stability-based criterion from (Chen et al., 2005) to decide the optimal number of clusters k automatically.

4.5.2 Concept Pair Clustering with Dependency Patterns

Given the initial seed concept pairs and cluster number k, this stage merges concept pairs over dependency patterns into k clusters. Each concept pair cp_i has a set of dependency patterns DP_i. We calculate the distance between two pairs cp_i and cp_j using the function dis1(cp_i, cp_j) defined above. The clustering algorithm is shown in Fig. 3. The process of depend clustering assigns each concept pair to the cluster with the closest centroid and then recomputes each centroid based on the current members of its cluster. As shown in Figure 3, this is done iteratively by repeating the two steps until a stopping criterion is met. The termination condition is that the centroids do not change between iterations.
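For readers who prefer runnable code to pseudocode, the following is a small Python rendering of the two distance functions defined above (Eq. 3 and Algorithm 1). It is a sketch: the edit distance is a plain dynamic-programming routine over word sequences, and used rows and columns of the matrix are simply excluded rather than overwritten with 1 as in Figure 2.

```python
import math

def levenshtein(a, b):
    """Standard edit distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (x != y)))
        prev = cur
    return prev[-1]

def dis1(dp_i, dp_j):
    """Eq. 3: distance between two sets of dependency patterns."""
    if not dp_i or not dp_j:
        return 1.0
    return 1.0 - len(dp_i & dp_j) / math.sqrt(len(dp_i) * len(dp_j))

def dis2(sp_i, sp_j):
    """Algorithm 1: distance between two lists of surface patterns, each a list
    of words.  Greedily match the closest remaining pair of patterns and average
    their normalized edit distances."""
    m, n = len(sp_i), len(sp_j)
    A = [[levenshtein(p, q) / max(len(p), len(q)) for q in sp_j] for p in sp_i]
    used_rows, used_cols, dis = set(), set(), 0.0
    for _ in range(min(m, n)):
        x, y = min(((i, j) for i in range(m) if i not in used_rows
                    for j in range(n) if j not in used_cols),
                   key=lambda ij: A[ij[0]][ij[1]])
        dis += A[x][y] / min(m, n)
        used_rows.add(x)
        used_cols.add(y)
    return dis
```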
Algorithm 2: Depend Clustering Input: I = {cp1, ..., cpn}(all concept pairs) C = {c1, ..., ck} (k initial centroids) Output: Md : I →C (cluster membership) Ir (rest of concept pairs not clustered) Cd = {c1, ..., ck} (recomputed centroids) while stopping criterion has not been met do for each cpi ∈I do if mins∈1..k dis1(cpi, cs) <= Dl then Md(cpi) ←argmins∈1..k dis1(cpi, cs) else Md(cpi) ←0 for each j ∈{1..k} do recompute cj as the centroid of {cpi|mloc(cpi) = j} Ir ←C0 return C and Cd Figure 3: Clustering with dependency patterns Because many concept pairs are scattered and do not belong to any of the top k clusters, we filter concept pairs with distance larger than Dl with the seed concept pairs. Such concept pairs ST1 ST3 ST4 ST2 Text3: RC was hired as EC’s CEO Text4: EC assign RC as CEO Text1: the CEO of EC is RC Text2: RC is the CEO of EC ST1 ST3 ST4 ST2 Text3: RC was hired as EC’s CEO Text4: EC assign RC as CEO Text1: the CEO of EC is RC Text2: RC is the CEO of EC Figure 4: Example showing why surface clustering is needed are stored in C0. We named the cluster of concept pairs Ir which are left to be clustered in the next step of clustering. After this step, concept pairs with similar dependency patterns are merged into same clusters, see Fig. 4 (ST1, ST2). 4.5.3 Concept Pair Clustering with Surface Patterns A salient difficulty posed by dependency pattern clustering is that concept pairs of the same semantic relation cannot be merged if they are expressed in different dependency structures. Figure 4 presents an example demonstrating why we perform surface pattern clustering. As depicted in Fig. 4, ST1, ST2, ST3, and ST4 are dependency structures for four concept pairs that should be classified as the same relation “CEO”. However ST3 and ST4 can not be merged with ST1 and ST2 using the dependency patterns because their dependency structures are too diverse to share sufficient dependency patterns. In this step, we use surface patterns to merge more concept pairs for each cluster to improve the coverage. Figure 5 portrays the algorithm. We assume that each concept pair has a set of surface patterns from the Web context collector module. As shown in Figure 5, surface clustering is done iteratively by repeating two steps until a stopping criterion is met: using the distance function dis2 explained in the preceding section, assign each concept pair to the cluster with the closest centroid and recomputing each centroid based on the current members of its cluster. We apply the same termination condition as depend clustering. 1026 Additionally, we filter concept pairs with distance greater than Dg with the centroid concept pairs. Algorithm 3: Surface Clustering Input: Ir (rest of concept pairs) Cd = {c1, ..., ck} (initial centroids) Output: Ms : Ir →C (cluster membership) Cs = {c1, ..., ck} (final centroids) while stopping criterion has not been met do for each cpi ∈Ir do if mins∈1..k dis2(cpi, cs) <= Dg then Ms(cpi) ←argmins∈1..k dis2(cpi, cs) else Ms(cpi) ←0 for each j ∈1..k do recompute cj as the centroid of cluster {cpi|Md(cpi) = j ∨Ms(cpi) = j} return clusters C Figure 5: Clustering with surface patterns Finally we have k clusters of concept pairs, each of which has a centroid concept pair. To attach a single relation label to each cluster, we use the centroid concept pair. 5 Experiments We apply our algorithm to two categories in Wikipedia: “American chief executives” and “Companies”. Both categories are well defined and closed. 
We conduct experiments for extracting various relations and for measuring the quality of these relations in terms of precision and coverage. We use coverage as an evaluation instead of using recall as a measure. The coverage is used to evaluate all correctly extracted concept pairs. It is defined as the fraction of all the correctly extracted concept pairs to the whole set of concept pairs. To balance between precision and coverage of clustering, we integrate two parameters: Dl, Dg. We downloaded the Wikipedia dump as of December 3, 2008. The performance of the proposed method is evaluated using different pattern types: dependency patterns, surface patterns, and their combination. We compare our method with (Rosenfeld and Feldman, 2007)’s URI method. Their algorithm outperformed that presented in the earlier work using surface features of two kinds for unsupervised relation extraction: features that test two entities together and features that test only one entity each. For comparison, we use a k-means clustering algorithm using the same cluster number k. Table 2: Results for the category: “American chief executives” method Existing method Proposed method (Rosenfeld et al.) (Our method) Relation # Ins. pre # Ins. pre (sample) chairman 434 63.52 547 68.37 (x be chairman of y) ceo 396 73.74 423 77.54 (x be ceo of y) bear 138 83.33 276 86.96 (x be bear in y) attend 225 67.11 313 70.28 (x attend y) member 14 85.71 175 91.43 (x be member of y) receive 97 67.97 117 73.53 (x receive y) graduate 18 83.33 92 88.04 (x graduate from y) degree 5 80.00 78 82.05 (x obtain y degree) marry 55 41.67 74 61.25 (x marry y) earn 23 86.96 51 88.24 (x earn y) award 23 43.47 46 84.78 (x won y award) hold 5 80.00 37 72.97 (x hold y degree) become 35 74.29 37 81.08 (x become y) director 24 67.35 29 79.31 (x be director of y) die 18 77.78 19 84.21 (x die in y) all 1510 68.27 2314 75.63 5.1 Wikipedia Category: “American chief executives” We choose appropriate Dl(concept pair filter in depend clustering) and Dg(concept pair filter in surface clustering) in a development set. To balance precision and coverage, we set 1/3 for both Dl and Dg. The 526 articles in this category are used for evaluation. We obtain 7310 concept pairs from the articles as our dataset. The top 18 groups are chosen to obtain the centroid concept pairs. Of these, 15 binary relations are the clearly identifiable relations shown in Table 2, where # Ins. represents the number of concept pairs clustered using each method, and pre denotes the precision of each cluster. The proposed approach shows higher precision and better coverage than URI in Table 2. This result demonstrates that adding dependency patterns from linguistic analysis contributes more to the precision and coverage of the clustering task than the sole use of surface patterns. 1027 Table 3: Performance of different pattern types Pattern type #Instance Precision Coverage dependency 1127 84.29 13.00% surface 1510 68.27 14.10% Combined 2314 75.63 23.94% Table 4: Results for the category: “Companies” Method Existing method Proposed method (Rosenfeld et al.) (Our method) Relation # Ins. pre # Ins. 
pre (sample) found 82 75.61 163 84.05 (found x in y) base 82 76.83 122 82.79 (x be base in y) headquarter 23 86.97 120 89.34 (x be headquarter in y) service 37 51.35 108 69.44 (x offer y service) store 113 77.88 88 72.72 (x open store in y) acquire 59 62.71 70 64.28 (x acquire y) list 51 64.71 67 70.15 (x list on y) product 25 76.00 57 77.19 (x produce y) CEO 37 64.86 39 66.67 (ceo x found y) buy 53 62.26 37 56.76 (x buy y) establish 35 82.86 26 80.77 (x be establish in y) locate 14 50.00 24 75.00 (x be locate in y) all 685 71.03 1039 76.87 To examine the contribution of dependency patterns, we compare results obtained with patterns of different kinds. Table 3 shows the precision and coverage scores. The best precision is achieved by dependency patterns. The precision is markedly better than that of surface patterns. However, the coverage is worse than that by surface patterns. As we reported, many concept pairs are scattered and do not belong to any of the top k clusters, the coverage is low. 5.2 Wikipedia Category: “Companies” We also evaluate the performance for the “Companies” category. Instead of using all the articles, we randomly select 434 articles for evaluation and 4073 concept pairs from the articles form our dataset for this category. We also set Dl and Dg to 1/3. Then 28 groups are chosen. For each group, a centroid concept pair is obtained. Finally, of 28 clusters, 25 binary relations are clearly identifiable relations. Table 4 presents some relations. Table 5: Performance of different pattern types Pattern type #Instance Precision Coverage dependency 551 82.58 11.17% surface 685 71.03 11.95% Combined 1039 76.87 19.61% Our clustering algorithms use two filters Dl and Dg to filter scattering concept pairs. In Table 4, we present that concept pairs are clustered with good precision. As in the first experiments, the combination of dependency patterns and surface patterns contribute greatly to the precision and coverage. Table 5 shows that, using dependency patterns, the precision is the highest (82.58%), although the coverage is the lowest. All experimental results support our idea mainly in two aspects: 1) Dependency analysis can abstract away from different surface realizations of text. In addition, embedded structures of the dependency representation are important for obtaining a good coverage of the pattern acquisition. Furthermore, the precision is better than that of the string surface patterns from Web pages of various kinds. 2) Surface patterns are used to merge concept pairs with relations represented in different dependency structures with redundancy information from the vast size of Web pages. Using surface patterns, more concept pairs are clustered, and the coverage is improved. 6 Conclusions To discover a range of semantic relations from a large corpus, we present an unsupervised relation extraction method using deep linguistic information to alleviate surface and noisy surface patterns generated from a large corpus, and use Web frequency information to ease the sparseness of linguistic information. We specifically examine texts from Wikipedia articles. Relations are gathered in an unsupervised way over patterns of two types: dependency patterns by parsing sentences in Wikipedia articles using a linguistic parser, and surface patterns from redundancy information from the Web corpus using a search engine. We report our experimental results in comparison to those of previous works. 
The results show that the best performance arises from a combination of dependency patterns and surface patterns. 1028 References Michele Banko, Michael J. Cafarella, Stephen Soderland, Matt Broadhead and Oren Etzioni. 2007. Open information extraction from the Web. In Proceedings of IJCAI-2007. Danushka Bollegala, Yutaka Matsuo and Mitsuru Ishizuka. 2007. Measuring Semantic Similarity between Words Using Web Search Engines. In Proceedings of WWW-2007. Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of HLT/EMLNP-2005. Jinxiu Chen, Donghong Ji, Chew Lim Tan and Zhengyu Niu. 2005. Unsupervised Feature Selection for Relation Extraction. In Proceedings of IJCNLP-2005. Aron Culotta and Jeffrey Sorensen. 2004. Dependency tree kernels for relation extraction. In Proceedings of the ACL-2004. Dmitry Davidov, Ari Rappoport and Moshe Koppel. 2007. Fully unsupervised discovery of conceptspecific relationships by Web mining. In Proceedings of ACL-2007. Dmitry Davidov and Ari Rappoport. 2008. Classification of Semantic Relationships between Nominals Using Pattern Clusters. In Proceedings of ACL2008. Wei Fan, Kun Zhang, Hong Cheng, Jing Gao, Xifeng Yan, Jiawei Han, Philip S. Yu and Olivier Verscheure. 2008. Direct Mining of Discriminative and Essential Frequent Patterns via Model-based Search Tree. In Proceedings of KDD-2008. Evgeniy Gabrilovich and Shaul Markovitch. 2006. Overcoming the brittleness bottleneck using wikipedia: Enhancing text categorization with encyclopedic knowledge. In Proceedings of AAAI-2006. Jim Giles. 2005. Internet encyclopaedias go head to head. Nature 438:900C901. Sanda Harabagiu, Cosmin Adrian Bejan and Paul Morarescu. 2005. Shallow semantics for relation extraction. In Proceedings of IJCAI-2005. Takaaki Hasegawa, Satoshi Sekine and Ralph Grishman. 2004. Discovering Relations among Named Entities from Large Corpora. In Proceedings of ACL-2004. Nanda Kambhatla. 2004. Combining lexical, syntactic and semantic features with maximum entropy models. In Proceedings of ACL-2004. Dat P.T. Nguyen, Yutaka Matsuo and Mitsuru Ishizuka. 2007. Relation extraction from Wikipedia using subtree mining. In Proceedings of AAAI-2007. Patrick Pantel and Marco Pennacchiotti. 2006. Espresso: Leveraging generic patterns for automatically harvesting semantic relations. In Proceedings of ACL-2006. Benjamin Rosenfeld and Ronen Feldman. 2006. URES: an Unsupervised Web Relation Extraction System. In Proceedings of COLING/ACL-2006. Benjamin Rosenfeld and Ronen Feldman. 2007. Clustering for Unsupervised Relation Identification. In Proceedings of CIKM-2007. Peter D. Turney. 2006. Expressing implicit semantic relations without supervision. In Proceedings of ACL-2006. Max Volkel, Markus Krotzsch, Denny Vrandecic, Heiko Haller and Rudi Studer. 2006. Semantic wikipedia. In Proceedings of WWW-2006. Mohammed J. Zaki. 2002. Efficiently mining frequent trees in a forest. In Proceedings of SIGKDD-2002. Min Zhang, Jie Zhang, Jian Su and Guodong Zhou. 2006. A Composite Kernel to Extract Relations between Entities with both Flat and Structured Features. In Proceedings of ACL-2006. 1029
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1030–1038, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Phrase Clustering for Discriminative Learning Dekang Lin and Xiaoyun Wu Google, Inc. 1600 Amphitheater Parkway, Mountain View, CA {lindek,xiaoyunwu}@google.com Abstract We present a simple and scalable algorithm for clustering tens of millions of phrases and use the resulting clusters as features in discriminative classifiers. To demonstrate the power and generality of this approach, we apply the method in two very different applications: named entity recognition and query classification. Our results show that phrase clusters offer significant improvements over word clusters. Our NER system achieves the best current result on the widely used CoNLL benchmark. Our query classifier is on par with the best system in KDDCUP 2005 without resorting to labor intensive knowledge engineering efforts. 1 Introduction Over the past decade, supervised learning algorithms have gained widespread acceptance in natural language processing (NLP). They have become the workhorse in almost all sub-areas and components of NLP, including part-ofspeech tagging, chunking, named entity recognition and parsing. To apply supervised learning to an NLP problem, one first represents the problem as a vector of features. The learning algorithm then optimizes a regularized, convex objective function that is expressed in terms of these features. The performance of such learning-based solutions thus crucially depends on the informativeness of the features. The majority of the features in these supervised classifiers are predicated on lexical information, such as word identities. The long-tailed distribution of natural language words implies that most of the word types will be either unseen or seen very few times in the labeled training data, even if the data set is a relatively large one (e.g., the Penn Treebank). While the labeled data is generally very costly to obtain, there is a vast amount of unlabeled textual data freely available on the web. One way to alleviate the sparsity problem is to adopt a two-stage strategy: first create word clusters with unlabeled data and then use the clusters as features in supervised training. Under this approach, even if a word is not found in the training data, it may still fire cluster-based features as long as it shares cluster assignments with some words in the labeled data. Since the clusters are obtained without any labeled data, they may not correspond directly to concepts that are useful for decision making in the problem domain. However, the supervised learning algorithms can typically identify useful clusters and assign proper weights to them, effectively adapting the clusters to the domain. This method has been shown to be quite successful in named entity recognition (Miller et al. 2004) and dependency parsing (Koo et al., 2008). In this paper, we present a semi-supervised learning algorithm that goes a step further. In addition to word-clusters, we also use phraseclusters as features. Out of context, natural language words are often ambiguous. Phrases are much less so because the words in a phrase provide contexts for one another. Consider the phrase “Land of Odds”. One would never have guessed that it is a company name based on the clusters containing Odds and Land. 
With phrase-based clustering, “Land of Odds” is grouped with many names that are labeled as company names, which is a strong indication that it is a company name as well. The disambiguation power of phrases is also evidenced by the improvements of phrase-based machine translation systems (Koehn et. al., 2003) over word-based ones. Previous approaches, e.g., (Miller et al. 2004) and (Koo et al. 2008), have all used the Brown algorithm for clustering (Brown et al. 1992). The main idea of the algorithm is to minimize the bigram language-model perplexity of a text corpus. The algorithm is quadratic in the number of elements to be clustered. It is able to cluster tens of thousands of words, but is not scalable enough to deal with tens of millions of phrases. Uszkoreit and Brants (2008) proposed a 1030 distributed clustering algorithm with a similar objective function as the Brown algorithm. It substantially increases the number of elements that can be clustered. However, since it still needs to load the current clustering of all elements into each of the workers in the distributed system, the memory requirement becomes a bottleneck. We present a distributed version of a much simpler K-Means clustering that allows us to cluster tens of millions of elements. We demonstrate the advantages of phrase-based clusters over word-based ones with experimental results from two distinct application domains: named entity recognition and query classification. Our named entity recognition system achieves an F1-score of 90.90 on the CoNLL 2003 English data set, which is about 1 point higher than the previous best result. Our query classifier reaches the same level of performance as the KDDCUP 2005 winning systems, which were built with a great deal of knowledge engineering. 2 Distributed K-Means clustering K-Means clustering (MacQueen 1967) is one of the simplest and most well-known clustering algorithms. Given a set of elements represented as feature vectors and a number, k, of desired clusters, the K-Means algorithm consists of the following steps: Step Operation i. Select k elements as the initial centroids for k clusters. ii. Assign each element to the cluster with the closest centroid according to a distance (or similarity) function. iii. Recompute each cluster’s centroid by averaging the vectors of its elements iv. Repeat Steps ii and iii until convergence Before describing our parallel implementation of the K-Means algorithm, we first describe the phrases to be clusters and how their feature vectors are constructed. 2.1 Phrases To obtain a list of phrases to be clustered, we followed the approach in (Lin et al., 2008) by collecting 20 million unique queries from an anonymized query log that are found in a 700 billion token web corpus with a minimum frequency count of 100. Note that many of these queries are not phrases in the linguistic sense. However, this does not seem to cause any real problem because non-linguistic phrases may form their own clusters. For example, one cluster contains {“Cory does”, “Ben saw”, “I can’t lose”, …..}. To reduce the memory requirement for storing a large number of phrases, we used Bloom Filter (Bloom 1970) to decide whether a sequence of tokens is a phrase. The Bloom filter allows a small percentage of false positives to pass through. We did not remove them with post processing since our notion of phrases is quite loose to begin with. 
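Since the text only states that a Bloom filter is used to test whether a token sequence is one of the 20 million phrases, the following is a generic sketch of that idea rather than the actual data structure used; the bit-array size, hash construction, and file name are placeholders.

```python
import hashlib

class BloomFilter:
    """A tiny Bloom filter: membership tests may yield false positives but never
    false negatives, which matches the paper's tolerance for a small fraction of
    spurious 'phrases'."""
    def __init__(self, num_bits=1 << 27, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key):
        # derive num_hashes bit positions from salted MD5 digests
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, key):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(key))

# Usage (file name hypothetical): load the phrase list once, then test spans.
# phrases = BloomFilter()
# for line in open("phrases.txt"):
#     phrases.add(line.strip())
# "land of odds" in phrases
```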
2.2 Context representation

Distributional word clustering is based on the assumption that words that appear in similar contexts tend to have similar meanings. The same assumption holds for phrases as well. Following previous approaches to distributional clustering of words, we represent the contexts of a phrase as a feature vector. There are many possible definitions for what constitutes the contexts. In the literature, contexts have been defined as subject and object relations involving the word (Hindle, 1990), as the documents containing the word (Deerwester et al., 1990), or as search engine snippets for the word as a query (Sahami and Heilman, 2006). We define the contexts of a phrase to be small, fixed-sized windows centered on occurrences of the phrase in a large corpus. The features are the words (tokens) in the window. The context feature vector of a phrase is constructed by first aggregating the frequency counts of the words in the context windows of different instances of the phrase. The frequency counts are then converted into point-wise mutual information (PMI) values:

PMI(phr, f) = log( P(phr, f) / ( P(phr) P(f) ) )

where phr is a phrase and f is a feature of phr. PMI effectively discounts the prior probability of the features and measures how much beyond random a feature tends to occur in a phrase’s context window. Given two feature vectors, we compute the similarity between two vectors as the cosine function of the angle between the vectors. Note that even though a phrase phr can have multiple tokens, its feature f is always a single-word token.

Table 1: Cluster of “English lessons”
Window | Cluster members (partial list)
size=1 | environmental courses, summer school courses, professional development classes, professional training programs, further education courses, leadership courses, accelerated courses, vocational classes, technical courses, technical classes, special education courses, …
size=3 | learn english spanish, grammar learn, language learning spanish, translation spanish language, learning spanish language, english spanish language, learn foreign language, free english learning, language study english, spanish immersion course, how to speak french, spanish learning games, …

We impose an upper limit on the number of instances of each phrase when constructing its feature vector. The idea is that if we have already seen 300K instances of a phrase, we should have already collected enough data for the phrase. More data for the same phrase will not necessarily tell us anything more about it. There are two benefits for such an upper limit. First, it drastically reduces the computational cost. Second, it reduces the variance in the sizes of the feature vectors of the phrases.

2.3 K-Means by MapReduce

K-Means is an embarrassingly parallelizable algorithm. Since the centroids of clusters are assumed to be constant within each iteration, the assignment of elements to clusters (Step ii) can be done totally independently. The algorithm fits nicely into the MapReduce paradigm for parallel programming (Dean and Ghemawat, 2004). The most straightforward MapReduce implementation of K-Means would be to have mappers perform Step ii and reducers perform Step iii. The keys of intermediate pairs are cluster ids and the values are feature vectors of elements assigned to the corresponding cluster. When the number of elements to be clustered is very large, sorting the intermediate pairs in the shuffling stage can be costly.
Furthermore, when summing up a large number of features vectors, numerical underflow becomes a potential problem. A more efficient and numerically more stable method is to compute, for each input partition, the partial vector sums of the elements belonging to each cluster. When the whole partition is done, the mapper emits the cluster ids as keys and the partial vector sums as values. The reducers then aggregate the partial sums to compute the centroids. 2.4 Indexing centroid vectors In a naïve implementation of Step ii of K-Means, one would compute the similarities between a feature vector and all the centroids in order to find the closest one. The kd-tree algorithm (Bentley 1980) aims at speeding up nearest neighbor search. However, it only works when the vectors are low-dimensional, which is not the case here. Fortunately, the high-dimensional and sparse nature of our feature vectors can also be exploited. Since the cosine measure of two unit length vectors is simply their dot product, when searching for the closest centroid to an element, we only care about features in the centroids that are in common with the element. We therefore create an inverted index that maps a feature to the list of centroids having that feature. Given an input feature vector, we can iterate through all of its components and compute its dot product with all the centroids at the same time. 2.5 Sizes of context window In our experiments, we use either 1 or 3 as the size of the context windows. Window size has an interesting effect on the types of clusters. With larger windows, the clusters tend to be more topical, whereas smaller windows result in categorical clusters. For example, Table 1 contains the cluster that the phrase “English lessons” belongs to. With 3word context windows, the cluster is about language learning and translation. With 1-word context windows, the cluster contains different types of lessons. The ability to produce both kinds of clusters turns out to be very useful. In different applications we need different types of clusters. For example, in the named entity recognition task, categorical clusters are more successful, whereas in query categorization, the topical clusters are much more beneficial. The Brown algorithm uses essentially the same information as our 1-word window clusters. We therefore expect it to produce mostly categorical clusters. 2.6 Soft clustering Although K-Means is generally described as a hard clustering algorithm (each element belongs to at most one cluster), it can produce soft clustering simply by assigning an element to all clusters whose similarity to the element is greater than a threshold. For natural language words and 1032 phrases, the soft cluster assignments often reveal different senses of a word. For example, the word Whistler may refer to a town in British Columbia, Canada, which is also a ski resort, or to a painter. These meanings are reflected in the top clusters assignments for Whistler in Table 2 (window size = 3). 2.7 Clustering data sets We experimented with two corpora (Table 3). One contains web documents with 700 billion tokens. The second consists of various news texts from LDC: English Gigaword, the Tipster corpus and Reuters RCV1. The last column lists the numbers of phrases we used when running the clustering with that corpus. Even though our cloud computing infrastructure made phrase clustering possible, there is no question that it is still very time consuming. 
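As an illustration of Sections 2.3 and 2.4, the sketch below shows one assignment step over a single input partition: the inverted index over centroid features scores only the centroids that share features with an element, and the partition emits per-cluster partial sums and counts, as a mapper would. This is a simplified, single-process sketch under our own naming, assuming unit-length sparse vectors so that dot products equal cosines; it is not the distributed implementation itself.

from collections import defaultdict

def build_inverted_index(centroids):
    """Map each feature to the list of (centroid id, weight) pairs containing it."""
    index = defaultdict(list)
    for cid, centroid in enumerate(centroids):
        for feature, weight in centroid.items():
            index[feature].append((cid, weight))
    return index

def closest_centroid(vector, index):
    """Compute dot products with all centroids at once, touching only shared features."""
    scores = defaultdict(float)
    for feature, value in vector.items():
        for cid, weight in index.get(feature, ()):
            scores[cid] += value * weight
    return max(scores, key=scores.get) if scores else 0

def map_partition(vectors, index):
    """Step ii for one partition: emit (cluster id, partial sum, count)."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for vector in vectors:
        cid = closest_centroid(vector, index)
        counts[cid] += 1
        for feature, value in vector.items():
            sums[cid][feature] += value
    return sums, counts

def reduce_partials(partials):
    """Step iii: aggregate partial sums from all partitions into new centroids."""
    total_sums = defaultdict(lambda: defaultdict(float))
    total_counts = defaultdict(int)
    for sums, counts in partials:
        for cid, vec in sums.items():
            for feature, value in vec.items():
                total_sums[cid][feature] += value
        for cid, n in counts.items():
            total_counts[cid] += n
    return {cid: {f: v / total_counts[cid] for f, v in vec.items()}
            for cid, vec in total_sums.items()}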
To create 3000 clusters among 20 million phrases using 3-word windows, each K-Means iteration takes about 20 minutes on 1000 CPUs. Without using the indexing technique in Section 2.4, each iteration takes about 4 times as long. In all our experiments, we set the maximum number of iterations to be 50.

Table 2: Soft clusters for Whistler
cluster1: sim=0.17, members=104048: bc vancouver, british columbia accommodations, coquitlam vancouver, squamish vancouver, langley vancouver, vancouver surrey, …
cluster2: sim=0.16, members=182692: vail skiing, skiing colorado, tahoe ski vacation, snowbird skiing, lake tahoe skiing, breckenridge skiing, snow ski packages, ski resort whistler, …
cluster3: sim=0.12, members=91895: ski chalets france, ski chalet holidays, france ski, catered chalets, luxury ski chalets, france skiing, …
cluster4: sim=0.11, members=237262: ocean kayaking, mountain hiking, horse trekking, river kayaking, mountain bike riding, white water canoeing, mountain trekking, sea kayaking, …
cluster5: sim=0.10, members=540775: rent cabin, pet friendly cabin, cabins rental, cabin vacation, cabins colorado, cabin lake tahoe, maine cabin, tennessee mountain cabin, …
cluster6: sim=0.09, members=117365: mary cassatt, oil painting reproductions, henri matisse, pierre bonnard, edouard manet, auguste renoir, paintings famous, picasso paintings, …
……

Table 3: Corpora used in experiments
Corpus | Description | tokens | phrases
Web | web documents | 700B | 20M
LDC | News text from LDC | 3.4B | 700K

3 Named Entity Recognition

Named entity recognition (NER) is one of the first steps in many applications of information extraction, information retrieval, question answering and other applications of NLP. Conditional Random Fields (CRF) (Lafferty et al. 2001) is one of the most competitive NER algorithms. We employed a linear chain CRF with L2 regularization as the baseline algorithm to which we added phrase cluster features.
The CoNLL 2003 Shared Task (Tjong Kim Sang and Meulder 2003) offered a standard experimental platform for NER. The CoNLL data set consists of news articles from Reuters1. The training set has 203,621 tokens and the development and test set have 51,362 and 46,435 tokens, respectively. We adopted the same evaluation criteria as the CoNLL 2003 Shared Task.
1 http://www.reuters.com/researchandstandards/
To make the clusters more relevant to this domain, we adopted the following strategy:
1. Construct the feature vectors for 20 million phrases using the web data.
2. Run K-Means clustering on the phrases that appeared in the CoNLL training data to obtain K centroids.
3. Assign each of the 20 million phrases to the nearest centroid in the previous step.

3.1 Baseline features

The features in our baseline CRF classifier are a subset of the conventional features. They are defined with the following templates:

[ys], [ys-1:s],
{[ys, wu]}_{u=s-1..s+1}, {[ys-1:s, wu]}_{u=s-1..s+1},
{[ys, sfx3u]}_{u=s-1..s+1}, {[ys-1:s, sfx3u]}_{u=s-1..s+1},
{{[ys, wtp_u^t]}_{u=s-1..s+1}}_{t=2..4}, {{[ys-1:s, wtp_u^t]}_{u=s-1..s+1}}_{t=2..4},
{[ys, wu-1:u]}_{u=s..s+1}, {[ys-1:s, wu-1:u]}_{u=s..s+1},
{{[ys, wtp_{u-1:u}^t]}_{u=s..s+1}}_{t=1..3}, {{[ys-1:s, wtp_{u-1:u}^t]}_{u=s..s+1}}_{t=1..3}

Here, s denotes a position in the input sequence; ys is a label that indicates whether the token at position s is a named entity as well as its type; wu is the word at position u; sfx3 is a word’s three-letter suffix; {wtp^t}_{t=1..4} are indicators of different word types: wtp1 is true when a word is punctuation; wtp2 indicates whether a word is in lower case, upper case, or all-caps; wtp3 is true when a token is a number; wtp4 is true when a token is a hyphenated word with different capitalization before and after the hyphen.
NER systems often have global features to capture discourse-level regularities (Chieu and Ng 2003). For example, documents often have a full mention of an entity at the beginning and then refer to the entity in partial or abbreviated forms. To help in recognizing the shorter versions of the entities, we maintain a history of unigram word features. If a token is encountered again, the word unigram features of the previous instances are added as features for the current instance as well. We have a total of 48 feature templates. In comparison, there are 79 templates in (Suzuki and Isozaki, 2008).
Part-of-speech tags were used in the top-ranked systems in CoNLL 2003, as well as in many follow-up studies that used the data set (Ando and Zhang 2005; Suzuki and Isozaki 2008). Our system does not need this information to achieve its peak performance. An important advantage of not needing a POS tagger as a preprocessor is that the system is much easier to adapt to other languages, since training a tagger often requires a larger amount of more extensively annotated data than the training data for NER.

3.2 Phrase cluster features

We used hard clustering with 1-word context windows for NER. For each input token sequence, we identify all sequences of tokens that are found in the phrase clusters. The phrases are allowed to overlap with or be nested in one another. If a phrase belonging to cluster c is found at positions b to e (inclusive), we add the following features to the CRF classifier:

[yb-1, Bc], [ye+1, Ac], [yb-2:b-1, Bc], [ye:e+1, Ac],
[yb, Sc], {[yu, Mc]}_{u=b+1..e-1}, [ye, Ec],
[yb-1:b, Sc], {[yu-1:u, Mc]}_{u=b+1..e-1}, [ye-1:e, Ec]

where B (before), A (after), S (start), M (middle), and E (end) denote a position in the input sequence relative to the phrase belonging to cluster c. We treat the cluster membership as binary. The similarity between an element and its cluster centroid is ignored. For example, suppose the input sentence is “… guitar legend Jimi Hendrix was …” and “Jimi Hendrix” belongs to cluster 183. Figure 1 shows the attributes at different input positions. The cluster features are the cross product of the unigram/bigram labels and the attributes.
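The following sketch (our own illustrative code, with hypothetical helper names) shows how the B/A/S/M/E attributes of Figure 1 can be generated for the example above; the actual CRF features are then the cross product of these attributes with the label unigrams and bigrams.

from collections import defaultdict

def find_phrase_spans(tokens, phrase2cluster, max_len=5):
    """Return (begin, end, cluster) for every known phrase occurrence; overlaps allowed."""
    spans = []
    for b in range(len(tokens)):
        for e in range(b, min(b + max_len, len(tokens))):
            phrase = " ".join(tokens[b:e + 1]).lower()
            if phrase in phrase2cluster:
                spans.append((b, e, phrase2cluster[phrase]))
    return spans

def cluster_attributes(tokens, phrase2cluster):
    """Attach B/S/M/E/A attributes to token positions relative to each phrase span."""
    attrs = defaultdict(list)
    for b, e, c in find_phrase_spans(tokens, phrase2cluster):
        if b > 0:
            attrs[b - 1].append(f"B{c}")       # token before the phrase
        attrs[b].append(f"S{c}")               # phrase start
        for u in range(b + 1, e):
            attrs[u].append(f"M{c}")           # phrase-internal tokens
        attrs[e].append(f"E{c}")               # phrase end
        if e + 1 < len(tokens):
            attrs[e + 1].append(f"A{c}")       # token after the phrase
    return attrs

tokens = ["guitar", "legend", "Jimi", "Hendrix", "was"]
print(cluster_attributes(tokens, {"jimi hendrix": 183}))
# {1: ['B183'], 2: ['S183'], 3: ['E183'], 4: ['A183']}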
Figure 1: Phrase cluster features
The phrasal cluster features not only help in resolving the ambiguities of words within a phrase; the B and A features also allow words adjacent to a phrase to consider longer contexts than a single word. Although one may argue longer n-grams can also capture this information, the sparseness of n-grams means that long n-gram features are rarely useful in practice.
We can easily use multiple clusterings in feature extraction. This allows us to side-step the matter of choosing the optimal value k in the K-Means clustering algorithm.
Even though the phrases include single-token words, we create word clusters with the same clustering algorithm as well. The reason is that the phrase list, which comes from query logs, does not necessarily contain all the single-token words in the documents. Furthermore, due to tokenization differences between the query logs and the documents, we systematically missed some words, such as hyphenated words. When creating the word clusters, we do not rely on a predefined list. Instead, any word above a minimum frequency threshold is included.
In their dependency parser with cluster-based features, Koo et al. (2008) found it helpful to restrict lexicalized features to only relatively frequent words. We did not observe a similar phenomenon with our CRF. We include all words as features and rely on the regularized CRF to select from them.

3.3 Evaluation results

Table 4 summarizes the evaluation results for our NER system and compares it with the two best results on the data set in the literature, as well as the top-3 systems in CoNLL 2003. In this table, W and P refer to word and phrase clusters created with the web corpus. The superscripts are the numbers of clusters. LDC refers to the clusters created with the smaller LDC corpus and +pos indicates the use of part-of-speech tags as features.
The performance of our baseline system is rather mediocre because it has far fewer feature functions than the more competitive systems. The top CoNLL 2003 systems all employed gazetteers or other types of specialized resources (e.g., lists of words that tend to co-occur with certain named entity types) in addition to part-of-speech tags.
Introducing the word clusters immediately brings the performance up to a very competitive level. Phrasal clusters obtained from the LDC corpus give the same level of improvement as word clusters from the web corpus that is 20 times larger. The best F-score of 90.90, which is about 1 point higher than the previous best result, is obtained with a combination of clusters. Adding POS tags to this configuration caused a small drop in F1.

4 Query Classification

We now look at the use of phrasal clusters in a very different application: query classification. The goal of query classification is to determine to which of a predefined set of classes a query belongs. Compared with documents, queries are much shorter and their categories are much more ambiguous.

4.1 KDDCUP 2005 data set

The task in the KDDCUP 2005 competition2 is to classify 800,000 internet user search queries into 67 predefined topical categories. The training set consists of 111 example queries, each of which belongs to up to 5 of the 67 categories. Table 5 shows three example queries and their classes.
Three independent human labelers classified 800 queries that were randomly selected from the complete set of 800,000.
2 http://www.acm.org/sigs/sigkdd/kdd2005/kddcup.html
The participating systems were evaluated by their average F-scores (F1) and average precision (P) over these three sets of answer keys for the 800 selected queries:

P = Σ_i #(queries correctly tagged as c_i) / Σ_i #(queries tagged as c_i)
R = Σ_i #(queries correctly tagged as c_i) / Σ_i #(queries labeled as c_i)
F1 = 2 × P × R / (P + R)

Here, ‘tagged as’ refers to system outputs and ‘labeled as’ refers to human judgments. The subscript i ranges over all the query classes.
Table 6 shows the scores of each of the three human labelers when each of them is evaluated against the other two. It can be seen that the consistency among the labelers is quite low, indicating that the query classification task is very difficult even for humans.
To maximize the little information we have about the query classes, we treat the words in query class names as additional example queries. For example, we added three queries: living, tools, and hardware to the class Living\Tools & Hardware.

Table 4: CoNLL NER test set results
System | Test F1 | Improv.
Baseline CRF (Sec. 3.1) | 83.78 |
W500 | 88.34 | +4.56
P64 | 89.73 | +5.94
P125 | 89.80 | +6.02
W500 + P125 | 90.62 | +6.84
W500 + P64 | 90.63 | +6.85
W500 + P125 + P64 | 90.90 | +7.12
W500 + P125 + P64 + pos | 90.62 | +6.84
LDC64 | 87.24 | +3.46
LDC125 | 88.33 | +4.55
LDC64 + LDC125 | 88.44 | +4.66
(Suzuki and Isozaki, 2008) | 89.92 |
(Ando and Zhang, 2005) | 89.31 |
(Florian et al., 2003) | 88.76 |
(Chieu and Ng, 2003) | 88.31 |
(Klein et al., 2003) | 86.31 |

Table 5: Example queries and their classes
ford field | Sports/American Football; Information/Local & Regional; Sports/Schedules & Tickets
john deere gator | Living/Landscaping & Gardening; Living/Tools & Hardware; Information/Companies & Industries; Shopping/Stores & Products; Shopping/Buying Guides & Researching
justin timberlake lyrics | Entertainment/Music; Information/Arts & Humanities; Entertainment/Celebrities

Table 6: Labeler consistency
 | L1 | L2 | L3 | Average
F1 | 0.538 | 0.477 | 0.512 | 0.509
P | 0.501 | 0.613 | 0.463 | 0.526

4.2 Baseline classifier

Since the query classes are not mutually exclusive, we treat the query classification task as 67 binary classification problems. For each query class, we train a logistic regression classifier (Vapnik 1999) with L2 regularization.
Given an input x, represented as a vector of m features (x1, x2, …, xm), a logistic regression classifier with parameter vector w = (w1, w2, …, wm) computes the posterior probability of the output y, which is either 1 or -1, as

P(y|x) = 1 / ( 1 + exp(−y w·x) )

We tag a query as belonging to a class if the probability of the class is among the highest 5 and is greater than 0.5.
The baseline system uses only the words in the queries as features (the bag-of-words representation), treating the query classification problem as a typical text categorization problem. We found the prior distribution of the query classes to be extremely important. In fact, a system that always returns the top-5 most frequent classes has an F1 score of 26.55, which would have outperformed 2/3 of the 37 systems in the KDDCUP and ranked 13th.
We made a small modification to the objective function for logistic regression to take into account the prior distribution and to use 50% as a uniform decision boundary for all the classes. Normally, training a logistic regression classifier amounts to solving:

argmin_w { λ w^T w + (1/n) Σ_{i=1..n} log( 1 + exp(−y_i w·x_i) ) }

where n is the number of training examples and λ is the regularization constant. In this formula, 1/n can be viewed as the weight of an example in the training corpus.
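Before turning to that modification, here is a minimal sketch of the per-class baseline just described, using scikit-learn's L2-regularized logistic regression as a stand-in for the authors' implementation; the vectorizer, threshold constants, and class handling are our own simplifications.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def train_per_class(queries, labels, classes):
    """One binary L2-regularized logistic regression classifier per query class.

    Assumes every class has at least one positive training query.
    """
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(queries)          # bag-of-words features
    models = {}
    for c in classes:
        y = np.array([1 if c in labs else -1 for labs in labels])
        clf = LogisticRegression(penalty="l2", C=1.0)   # uniform 1/n example weighting
        clf.fit(X, y)
        models[c] = clf
    return vectorizer, models

def tag_query(query, vectorizer, models, max_classes=5, threshold=0.5):
    """Tag with the classes whose probability is among the top 5 and above 0.5."""
    x = vectorizer.transform([query])
    probs = {c: clf.predict_proba(x)[0][list(clf.classes_).index(1)]
             for c, clf in models.items()}
    ranked = sorted(probs, key=probs.get, reverse=True)[:max_classes]
    return [c for c in ranked if probs[c] > threshold]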
When training the classifier for a class with p positive examples out of a total of n examples, we change the objective function to:

argmin_w { λ w^T w + Σ_{i=1..n} log( 1 + exp(−y_i w·x_i) ) / ( n + y_i(2p − n) ) }

With this modification, the total weight of the positive and the negative examples becomes equal.

4.3 Phrasal clusters in query classification

Since topical information is much more relevant to query classification than categorical information, we use clusters created with 3-word context windows. Moreover, we use soft clustering instead of hard clustering. A phrase belongs to a cluster if the cluster’s centroid is among the top-50 most similar centroids to the phrase (by cosine similarity), and the similarity is greater than 0.04.
Given a query, we first retrieve all its phrases (allowing overlap) and the clusters they belong to. For each of these clusters, we sum the cluster’s similarity to all the phrases in the query and select the top-N as features for the logistic regression classifier (N=150 in our experiments). When we extract features from multiple clusterings, the selection of the top-N clusters is done separately for each clustering.
Once a cluster is selected, its similarity values are ignored. Using the numerical feature values in our experiments always led to worse results. We suspect that such features make the optimization of the objective function much more difficult.

Figure 2: Comparison with KDDCUP systems

4.4 Evaluation results

Table 7 contains the evaluation results of various configurations of our system. Here, bow indicates the use of bag-of-words features; WN refers to word clusters of size N; and PN refers to phrase clusters of size N. All the clusters are soft clusters created with the web corpus using 3-word context windows.

Table 7: Query classification results
System | F1
bow | 11.58
bow+W3K | 34.71
bow+P500 | 39.84
bow+P3K | 40.80
bow+P500+P1K+P2K+P3K+P5K | 43.80

The bag-of-words features alone have dismal performance. This is obviously due to the extreme paucity of training examples. In fact, only 12% of the words in the 800 test queries are found in the training examples. Using word clusters as features resulted in a big increase in F-score. The phrasal cluster features offer another big improvement. The best result is achieved with multiple phrasal clusterings.
Figure 2 compares the performance of our system (the dark bar at 2) with the top tercile systems in KDDCUP 2005. The best two systems in the competition (Shen et al., 2005) and (Vogel et al., 2005) resorted to knowledge engineering techniques to bridge the gap between
This approach is taken by (Miller et. al. 2004), (Wong and Ng 2007), (Suzuki and Isozaki 2008), and (Koo et. al., 2008), as well as this paper. Wong and Ng (2007) and Suzuki and Isozaki (2008) are similar in that they run a baseline discriminative classifier on unlabeled data to generate pseudo examples, which are then used to train a different type of classifier for the same problem. Wong and Ng (2007) made the assumption that each proper named belongs to one class (they observed that this is true about 85% of the time for English). Suzuki and Isozaki (2008), on the other hand, used the automatically labeled corpus to train HMMs. Ando and Zhang (2005) defined an objective function that combines the original problem on the labeled data with a set of auxiliary problems on unlabeled data. The definition of an auxiliary problem can be quite flexible as long as it can be automatically labeled and shares some structural properties with the original problem. The combined objective function is then alternatingly optimized with the labeled and unlabeled data. This training regime puts pressure on the discriminative learner to exploit the structures uncovered from the unlabeled data. In the two-stage cluster-based approaches such as ours, clustering is mostly decoupled from the supervised learning problem. However, one can rely on a discriminative classifier to establish the connection by assigning proper weights to the 3 http://directory.google.com 4 http://www.dmoz.org cluster features. One advantage of the two-stage approach is that the same clusterings may be used for different problems or different components of the same system. Another advantage is that it can be applied to a wider range of domains and problems. Although the method in (Suzuki and Isozaki 2008) is quite general, it is hard to see how it can be applied to the query classification problem. Compared with Brown clustering, our algorithm for distributional clustering with distributed K-Means offers several benefits: (1) it is more scalable and parallelizable; (2) it has the ability to generate topical as well as categorical clusters for use in different applications; (3) it can create soft clustering as well as hard ones. There are two main scenarios that motivate semi-supervised learning. One is to leverage a large amount of unsupervised data to train an adequate classifier with a small amount of labeled data. Another is to further boost the performance of a supervised classifier that is already trained with a large amount of supervised data. The named entity problem in Section 3 and the query classification problem in Section 4 exemplify the two scenarios. One nagging issue with K-Means clustering is how to set k. We show that this question may not need to be answered because we can use clusterings with different k’s at the same time and let the discriminative classifier cherry-pick the clusters at different granularities according to the supervised data. This technique has also been used with Brown clustering (Miller et. al. 2004, Koo, et. al. 2008). However, they require clusters to be strictly hierarchical, whereas we do not. 6 Conclusions We presented a simple and scalable algorithm to cluster tens of millions of phrases and we used the resulting clusters as features in discriminative classifiers. We demonstrated the power and generality of this approach on two very different applications: named entity recognition and query classification. Our system achieved the best current result on the CoNLL NER data set. 
Our query categorization system is on par with the best system in KDDCUP 2005, which, unlike ours, involved a great deal of knowledge engineering effort. Acknowledgments The authors wish to thank the anonymous reviewers for their comments. 1037 References R. Ando and T. Zhang A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data. Journal of Machine Learning Research, Vol 6:1817-1853, 2005. B.H. Bloom. 1970, Space/time trade-offs in hash coding with allowable errors, Communications of the ACM 13 (7): 422–426 A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. Proceedings of the Eleventh Annual Conference on Computational Learning Theory pp. 92–100. P.F. Brown, V.J. Della Pietra, P.V. de Souza, J.C. Lai, and R.L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. H. L. Chieu and H. T. Ng. Named entity recognition with a maximum entropy approach. In Proceedings CoNLL-2003, pages 160–163, 2003. J. Dean and S. Ghemawat. 2004. MapReduce: Simplified data processing on large clusters. In Proceedings of the Sixth Symposium on Operating System Design and Implementation (OSDI-04), San Francisco, CA, USA S Deerwester, S. T. Dumais, G. W. Furnas, T. K. Landauer, and R. A. Harshman. 1990. Indexing by latent semantic analysis, Journal of the American Society for Information Science, 1990, 41(6), 391407 R. Florian, A. Ittycheriah, H. Jing, and T. Zhang. Named entity recognition through classifier combination. In Proceedings CoNLL-2003, pages 168–171, 2003. D. Klein, J. Smarr, H. Nguyen, and C. D. Manning. Named entity recognition with character-level models. In Proceedings CoNLL-2003, pages 188– 191, 2003. P. Koehn, F.J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of HLTNAACL 2003, pp. 127–133. T. Koo, X. Carreras, and M. Collins. Simple Semisupervised Dependency Parsing. Proceedings of ACL, 2008. J. Lafferty, A. McCallum, F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: Proc. 18th International Conf. on Machine Learning, Morgan Kaufmann, San Francisco, CA (2001) 282–289 Y. Li, Z. Zheng, and H.K. Dai, KDD Cup-2005 Report: Facing a Great Challenge. SIGKDD Explorations, 7 (2), 2005, 91-99. D. Lin, S. Zhao, and B. Van Durme, and M. Pasca. 2008. Mining Parenthetical Translations from the Web by Word Alignment. Proc. of ACL-08. Columbus, OH. J. Lin. Scalable Language Processing Algorithms for the Masses: A Case Study in Computing Word Cooccurrence Matrices with MapReduce. Proceedings of EMNLP 2008, pp. 419-428, Honolulu, Hawaii. J. B. MacQueen (1967): Some Methods for classification and Analysis of Multivariate Observations, Proc. of 5-th Berkeley Symposium on Mathematical Statistics and Probability", Berkeley, University of California Press, 1:281297 S. Miller, J. Guinness, and A. Zamanian. 2004. Name Tagging with Word Clusters and Discriminative Training. In Proceedings of HLT-NAACL, pages 337–342. M. Sahami and T.D. Heilman. 2006. A web-based kernel function for measuring the similarity of short text snippets. Proceedings of the 15th international conference on World Wide Web, pp. 377–386. D. Shen, R. Pan, J.T. Sun, J.J. Pan, K. Wu, J. Yin, Q. Yang. Q2C@UST: our winning solution to query classification in KDDCUP 2005. SIGKDD Explorations, 2005: 100~110. J. Suzuki, and H. Isozaki. 2008. Semi-Supervised Sequential Labeling and Segmentation using Gigaword Scale Unlabeled Data. In Proc. of ACL/HLT08. 
Columbus, Ohio. pp. 665-673. E. T. Tjong Kim Sang and F. De Meulder. 2003. Introduction to the CoNLL-2003 Shared Task: Language-Independent Named Entity Recognition. In Proc. of CoNLL-2003, pages 142–147. Y. Wong and H. T. Ng, 2007. One Class per Named Entity: Exploiting Unlabeled Text for Named Entity Recognition. In Proc. of IJCAI-07, Hyderabad, India. J. Uszkoreit and T. Brants. 2008. Distributed Word Clustering for Large Scale Class-Based Language Modeling in Machine Translation. Proceedings of ACL-08: HLT, pp. 755-762. V. Vapnik, 1999. The Nature of Statistical Learning Theory, 2nd edition. Springer Verlag. D. Vogel, S. Bickel, P. Haider, R. Schimpfky, P. Siemen, S. Bridges, T. Scheffer. Classifying Search Engine Queries Using the Web as Background Knowledge. SIGKDD Explorations 7(2): 117-122. 2005. 1038
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1039–1047, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Semi-Supervised Active Learning for Sequence Labeling Katrin Tomanek and Udo Hahn Jena University Language & Information Engineering (JULIE) Lab Friedrich-Schiller-Universit¨at Jena, Germany {katrin.tomanek|udo.hahn}@uni-jena.de Abstract While Active Learning (AL) has already been shown to markedly reduce the annotation efforts for many sequence labeling tasks compared to random selection, AL remains unconcerned about the internal structure of the selected sequences (typically, sentences). We propose a semisupervised AL approach for sequence labeling where only highly uncertain subsequences are presented to human annotators, while all others in the selected sequences are automatically labeled. For the task of entity recognition, our experiments reveal that this approach reduces annotation efforts in terms of manually labeled tokens by up to 60 % compared to the standard, fully supervised AL scheme. 1 Introduction Supervised machine learning (ML) approaches are currently the methodological backbone for lots of NLP activities. Despite their success they create a costly follow-up problem, viz. the need for human annotators to supply large amounts of “golden” annotation data on which ML systems can be trained. In most annotation campaigns, the language material chosen for manual annotation is selected randomly from some reference corpus. Active Learning (AL) has recently shaped as a much more efficient alternative for the creation of precious training material. In the AL paradigm, only examples of high training utility are selected for manual annotation in an iterative manner. Different approaches to AL have been successfully applied to a wide range of NLP tasks (Engelson and Dagan, 1996; Ngai and Yarowsky, 2000; Tomanek et al., 2007; Settles and Craven, 2008). When used for sequence labeling tasks such as POS tagging, chunking, or named entity recognition (NER), the examples selected by AL are sequences of text, typically sentences. Approaches to AL for sequence labeling are usually unconcerned about the internal structure of the selected sequences. Although a high overall training utility might be attributed to a sequence as a whole, the subsequences it is composed of tend to exhibit different degrees of training utility. In the NER scenario, e.g., large portions of the text do not contain any target entity mention at all. To further exploit this observation for annotation purposes, we here propose an approach to AL where human annotators are required to label only uncertain subsequences within the selected sentences, while the remaining subsequences are labeled automatically based on the model available from the previous AL iteration round. The hardness of subsequences is characterized by the classifier’s confidence in the predicted labels. Accordingly, our approach is a combination of AL and self-training to which we will refer as semi-supervised Active Learning (SeSAL) for sequence labeling. While self-training and other bootstrapping approaches often fail to produce good results on NLP tasks due to an inherent tendency of deteriorated data quality, SeSAL circumvents this problem and still yields large savings in terms annotation decisions, i.e., tokens to be manually labeled, compared to a standard, fully supervised AL approach. 
After a brief overview of the formal underpinnings of Conditional Random Fields, our base classifier for sequence labeling tasks (Section 2), a fully supervised approach to AL for sequence labeling is introduced and complemented by our semi-supervised approach in Section 3. In Section 4, we discuss SeSAL in relation to bootstrapping and existing AL techniques. Our experiments are laid out in Section 5 where we compare fully and semi-supervised AL for NER on two corpora, the newspaper selection of MUC7 and PENNBIOIE, a biological abstracts corpus.

2 Conditional Random Fields for Sequence Labeling

Many NLP tasks, such as POS tagging, chunking, or NER, are sequence labeling problems where a sequence of class labels ⃗y = (y1, …, yn) ∈ Y^n is assigned to a sequence of input units ⃗x = (x1, …, xn) ∈ X^n. Input units xj are usually tokens, class labels yj can be POS tags or entity classes.
Conditional Random Fields (CRFs) (Lafferty et al., 2001) are a probabilistic framework for labeling structured data and model P⃗λ(⃗y|⃗x). We focus on first-order linear-chain CRFs, a special form of CRFs for sequential data, where

P⃗λ(⃗y|⃗x) = (1 / Z⃗λ(⃗x)) · exp( Σ_{j=1..n} Σ_{i=1..m} λi fi(yj−1, yj, ⃗x, j) )   (1)

with normalization factor Z⃗λ(⃗x), feature functions fi(·), and feature weights λi.
Parameter Estimation. The model parameters λi are set to maximize the penalized log-likelihood L on some training data T:

L(T) = Σ_{(⃗x,⃗y)∈T} log p(⃗y|⃗x) − Σ_{i=1..m} λi^2 / (2σ^2)   (2)

The partial derivatives of L(T) are

∂L(T)/∂λi = Ẽ(fi) − E(fi) − λi/σ^2   (3)

where Ẽ(fi) is the empirical expectation of feature fi and can be calculated by counting the occurrences of fi in T. E(fi) is the model expectation of fi and can be written as

E(fi) = Σ_{(⃗x,⃗y)∈T} Σ_{⃗y′∈Y^n} P⃗λ(⃗y′|⃗x) · Σ_{j=1..n} fi(y′j−1, y′j, ⃗x, j)   (4)

Direct computation of E(fi) is intractable due to the sum over all possible label sequences ⃗y′ ∈ Y^n. The Forward-Backward algorithm (Rabiner, 1989) solves this problem efficiently. Forward (α) and backward (β) scores are defined by

αj(y|⃗x) = Σ_{y′∈Tj^{−1}(y)} αj−1(y′|⃗x) · Ψj(⃗x, y′, y)
βj(y|⃗x) = Σ_{y′∈Tj(y)} βj+1(y′|⃗x) · Ψj(⃗x, y, y′)

where Ψj(⃗x, a, b) = exp( Σ_{i=1..m} λi fi(a, b, ⃗x, j) ), Tj(y) is the set of all successors of a state y at a specified position j, and, accordingly, Tj^{−1}(y) is the set of predecessors. Normalized forward and backward scores are inserted into Equation (4) to replace Σ_{⃗y′∈Y^n} P⃗λ(⃗y′|⃗x) so that L(T) can be optimized with gradient-based or iterative-scaling methods.
Inference and Probabilities. The marginal probability

P⃗λ(yj = y′|⃗x) = αj(y′|⃗x) · βj(y′|⃗x) / Z⃗λ(⃗x)   (5)

specifies the model’s confidence in label y′ at position j of an input sequence ⃗x. The forward and backward scores are obtained by applying the Forward-Backward algorithm on ⃗x. The normalization factor is efficiently calculated by summing over all forward scores:

Z⃗λ(⃗x) = Σ_{y∈Y} αn(y|⃗x)   (6)

The most likely label sequence

⃗y∗ = argmax_{⃗y∈Y^n} exp( Σ_{j=1..n} Σ_{i=1..m} λi fi(yj−1, yj, ⃗x, j) )   (7)

is computed using the Viterbi algorithm (Rabiner, 1989). See Equation (1) for the conditional probability P⃗λ(⃗y∗|⃗x) with Z⃗λ calculated as in Equation (6). The marginal and conditional probabilities are used by our AL approaches as confidence estimators.

3 Active Learning for Sequence Labeling

AL is a selective sampling technique where the learning protocol is in control of the data to be used for training.
The intention with AL is to reduce the amount of labeled training material by querying labels only for examples which are assumed to have a high training utility. This section, first, describes a common approach to AL for sequential data, and then presents our approach to semi-supervised AL. 3.1 Fully Supervised Active Learning Algorithm 1 describes the general AL framework. A utility function UM(pi) is the core of each AL approach – it estimates how useful it would be for 1040 Algorithm 1 General AL framework Given: B: number of examples to be selected L: set of labeled examples P: set of unlabeled examples UM: utility function Algorithm: loop until stopping criterion is met 1. learn model M from L 2. for all pi ∈P : upi ←UM(pi) 3. select B examples pi ∈P with highest utility upi 4. query human annotator for labels of all B examples 5. move newly labeled examples from P to L return L a specific base learner to have an unlabeled example labeled and, subsequently included in the training set. In the sequence labeling scenario, such an example is a stream of linguistic items – a sentence is usually considered as proper sequence unit. We apply CRFs as our base learner throughout this paper and employ a utility function which is based on the conditional probability of the most likely label sequence ⃗y ∗for an observation sequence ⃗x (cf. Equations (1) and (7)): U⃗λ(⃗x) = 1 −P⃗λ(⃗y ∗|⃗x) (8) Sequences for which the current model is least confident on the most likely label sequence are preferably selected.1 These selected sentences are fully manually labeled. We refer to this AL mode as fully supervised Active Learning (FuSAL). 3.2 Semi-Supervised Active Learning In the sequence labeling scenario, an example which, as a whole, has a high utility U⃗λ(⃗x), can still exhibit subsequences which do not add much to the overall utility and thus are fairly easy for the current model to label correctly. One might therefore doubt whether it is reasonable to manually label the entire sequence. Within many sequences of natural language data, there are probably large subsequences on which the current model already does quite well and thus could automatically generate annotations with high quality. This might, in particular, apply to NER where larger stretches of sentences do not contain any entity mention at all, or merely trivial instances of an entity class easily predictable by the current model. 1There are many more sophisticated utility functions for sequence labeling. We have chosen this straightforward one for simplicity and because it has proven to be very effective (Settles and Craven, 2008). For the sequence labeling scenario, we accordingly modify the fully supervised AL approach from Section 3.1. Only those tokens remain to be manually labeled on which the current model is highly uncertain regarding their class labels, while all other tokens (those on which the model is sufficiently certain how to label them correctly) are automatically tagged. To select the sequence examples the same utility function as for FuSAL (cf. Equation (8)) is applied. To identify tokens xj from the selected sequences which still have to be manually labeled, the model’s confidence in label y∗ j is estimated by the marginal probability (cf. Equation (5)) C⃗λ(y∗ j ) = P⃗λ(yj = y∗ j |⃗x) (9) where y∗ j specifies the label at the respective position of the most likely label sequence ⃗y ∗(cf. Equation (7)). 
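To make the selection step concrete, the following sketch (our own illustrative code, not the authors' implementation) marks the tokens whose confidence falls below a threshold t (the case discussed next) and groups them into consecutive subsequences for manual annotation; the per-token marginals can come from any CRF implementation of Equation (5).

def sesal_select(viterbi_labels, marginals, t=0.99):
    """Split a selected sentence into tokens to label manually vs. automatically.

    viterbi_labels: most likely label per token (Equation (7))
    marginals: one dict {label: probability} per token (Equation (5))
    t: confidence threshold
    """
    auto_labels = {}
    manual_spans = []
    current = []
    for j, (label, dist) in enumerate(zip(viterbi_labels, marginals)):
        confidence = dist[label]                 # C(y*_j), Equation (9)
        if confidence > t:
            auto_labels[j] = label               # self-tagged by the current model
            if current:
                manual_spans.append((current[0], current[-1]))
                current = []
        else:
            current.append(j)                    # needs human annotation
    if current:
        manual_spans.append((current[0], current[-1]))
    return auto_labels, manual_spans

# Example: positions 2-3 fall below the threshold and form one span for the annotator.
labels = ["O", "O", "B-PER", "I-PER", "O"]
margs = [{"O": 0.999}, {"O": 0.998}, {"B-PER": 0.62}, {"I-PER": 0.70}, {"O": 0.997}]
print(sesal_select(labels, margs, t=0.99))
# ({0: 'O', 1: 'O', 4: 'O'}, [(2, 3)])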
If C⃗λ(y∗ j ) exceeds a certain confidence threshold t, y∗ j is assumed to be the correct label for this token and assigned to it.2 Otherwise, manual annotation of this token is required. So, compared to FuSAL as described in Algorithm 1 only the third step step is modified. We call this semi-supervised Active Learning (SeSAL) for sequence labeling. SeSAL joins the standard, fully supervised AL schema with a bootstrapping mode, namely self-training, to combine the strengths of both approaches. Examples with high training utility are selected using AL, while self-tagging of certain “safe” regions within such examples additionally reduces annotation effort. Through this combination, SeSAL largely evades the problem of deteriorated data quality, a limiting factor of “pure” bootstrapping approaches. This approach requires two parameters to be set: Firstly, the confidence threshold t which directly influences the portion of tokens to be manually labeled. Using lower thresholds, the self-tagging component of SeSAL has higher impact – presumably leading to larger amounts of tagging errors. Secondly, a delay factor d can be specified which channels the amount of manually labeled tokens obtained with FuSAL before SeSAL is to start. Only with d = 0, SeSAL will already affect the first AL iteration. Otherwise, several iterations of FuSAL are run until a switch to SeSAL will happen. 2Sequences of consecutive tokens xj for which C⃗λ(y∗ j ) ≤ t are presented to the human annotator instead of single, isolated tokens. 1041 It is well known that the performance of bootstrapping approaches crucially depends on the size of the seed set – the amount of labeled examples available to train the initial model. If class boundaries are poorly defined by choosing the seed set too small, a bootstrapping system cannot learn anything reasonable due to high error rates. If, on the other hand, class boundaries are already too well defined due to an overly large seed set, nothing to be learned is left. Thus, together with low thresholds, a delay rate of d > 0 might be crucial to obtain models of high performance. 4 Related Work Common approaches to AL are variants of the Query-By-Committee approach (Seung et al., 1992) or based on uncertainty sampling (Lewis and Catlett, 1994). Query-by-Committee uses a committee of classifiers, and examples on which the classifiers disagree most regarding their predictions are considered highly informative and thus selected for annotation. Uncertainty sampling selects examples on which a single classifier is least confident. AL has been successfully applied to many NLP tasks; Settles and Craven (2008) compare the effectiveness of several AL approaches for sequence labeling tasks of NLP. Self-training (Yarowsky, 1995) is a form of semi-supervised learning. From a seed set of labeled examples a weak model is learned which subsequently gets incrementally refined. In each step, unlabeled examples on which the current model is very confident are labeled with their predictions, added to the training set, and a new model is learned. Similar to self-training, cotraining (Blum and Mitchell, 1998) augments the training set by automatically labeled examples. It is a multi-learner algorithm where the learners have independent views on the data and mutually produce labeled examples for each other. Bootstrapping approaches often fail when applied to NLP tasks where large amounts of training material are required to achieve acceptable performance levels. 
Pierce and Cardie (2001) showed that the quality of the automatically labeled training data is crucial for co-training to perform well because too many tagging errors prevent a highperforming model from being learned. Also, the size of the seed set is an important parameter. When it is chosen too small data quality gets deteriorated quickly, when it is chosen too large no improvement over the initial model can be expected. To address the problem of data pollution by tagging errors, Pierce and Cardie (2001) propose corrected co-training. In this mode, a human is put into the co-training loop to review and, if necessary, to correct the machine-labeled examples. Although this effectively evades the negative side effects of deteriorated data quality, one may find the correction of labeled data to be as time-consuming as annotations from the scratch. Ideally, a human should not get biased by the proposed label but independently examine the example – so that correction eventually becomes annotation. In contrast, our SeSAL approach which also applies bootstrapping, aims at avoiding to deteriorate data quality by explicitly pointing human annotators to classification-critical regions. While those regions require full annotation, regions of high confidence are automatically labeled and thus do not require any manual inspection. Self-training and co-training, in contradistinction, select examples of high confidence only. Thus, these bootstrapping methods will presumably not find the most useful unlabeled examples but require a human to review data points of limited training utility (Pierce and Cardie, 2001). This shortcoming is also avoided by our SeSAL approach, as we intentionally select informative examples only. A combination of active and semi-supervised learning has first been proposed by McCallum and Nigam (1998) for text classification. Committeebased AL is used for the example selection. The committee members are first trained on the labeled examples and then augmented by means of Expectation Maximization (EM) (Dempster et al., 1977) including the unlabeled examples. The idea is to avoid manual labeling of examples whose labels can be reliably assigned by EM. Similarly, co-testing (Muslea et al., 2002), a multi-view AL algorithms, selects examples for the multi-view, semi-supervised Co-EM algorithm. In both works, semi-supervision is based on variants of the EM algorithm in combination with all unlabeled examples from the pool. Our approach to semisupervised AL is different as, firstly, we augment the training data using a self-tagging mechanism (McCallum and Nigam (1998) and Muslea et al. (2002) performed semi-supervision to augment the models using EM), and secondly, we operate in the sequence labeling scenario where an example is made up of several units each requiring 1042 a label – partial labeling of sequence examples is a central characteristic of our approach. Another work also closely related to ours is that of Kristjansson et al. (2004). In an information extraction setting, the confidence per extracted field is calculated by a constrained variant of the ForwardBackward algorithm. Unreliable fields are highlighted so that the automatically annotated corpus can be corrected. In contrast, AL selection of examples together with partial manual labeling of the selected examples are the main foci of our work. 5 Experiments and Results In this section, we turn to the empirical assessment of semi-supervised AL (SeSAL) for sequence labeling on the NLP task of named entity recognition. 
By the nature of this task, the sequences – in this case, sentences – are only sparsely populated with entity mentions and most of the tokens belong to the OUTSIDE class3 so that SeSAL can be expected to be very beneficial. 5.1 Experimental Settings In all experiments, we employ the linear-chain CRF model described in Section 2 as the base learner. A set of common feature functions was employed, including orthographical (regular expression patterns), lexical and morphological (suffixes/prefixes, lemmatized tokens), and contextual (features of neighboring tokens) ones. All experiments start from a seed set of 20 randomly selected examples and, in each iteration, 50 new examples are selected using AL. The efficiency of the different selection mechanisms is determined by learning curves which relate the annotation costs to the performance achieved by the respective model in terms of F1-score. The unit of annotation costs are manually labeled tokens. Although the assumption of uniform costs per token has already been subject of legitimate criticism (Settles et al., 2008), we believe that the number of annotated tokens is still a reasonable approximation in the absence of an empirically more adequate task-specific annotation cost model. We ran the experiments on two entity-annotated corpora. From the general-language newspaper domain, we took the training part of the MUC7 corpus (Linguistic Data Consortium, 2001) which incorporates seven different entity types, viz. per3The OUTSIDE class is assigned to each token that does not denote an entity in the underlying domain of discourse. corpus entity classes sentences tokens MUC7 7 3,020 78,305 PENNBIOIE 3 10,570 267,320 Table 1: Quantitative characteristics of the chosen corpora sons, organizations, locations, times, dates, monetary expressions, and percentages. From the sublanguage biology domain, we used the oncology part of the PENNBIOIE corpus (Kulick et al., 2004) and removed all but three gene entity subtypes (generic, protein, and rna). Table 1 summarizes the quantitative characteristics of both corpora.4 The results reported below are averages of 20 independent runs. For each run, we randomly split each corpus into a pool of unlabeled examples to select from (90 % of the corpus), and a complementary evaluation set (10 % of the corpus). 5.2 Empirical Evaluation We compare semi-supervised AL (SeSAL) with its fully supervised counterpart (FuSAL), using a passive learning scheme where examples are randomly selected (RAND) as baseline. SeSAL is first applied in a default configuration with a very high confidence threshold (t = 0.99) without any delay (d = 0). In further experiments, these parameters are varied to study their impact on SeSAL’s performance. All experiments were run on both the newspaper (MUC7) and biological (PENNBIOIE) corpus. When results are similar to each other, only one data set will be discussed. Distribution of Confidence Scores. The leading assumption for SeSAL is that only a small portion of tokens within the selected sentences constitute really hard decision problems, while the majority of tokens are easy to account for by the current model. To test this stipulation we investigate the distribution of the model’s confidence values C⃗λ(y∗ j ) over all tokens of the sentences (cf. Equation (9)) selected within one iteration of FuSAL. Figure 1, as an example, depicts the histogram for an early AL iteration round on the MUC7 corpus. The vast majority of tokens has a confidence score close to 1, the median lies at 0.9966. 
Histograms of subsequent AL iterations are very similar with an even higher median. This is so because 4We removed sentences of considerable over and under length (beyond +/- 3 standard deviations around the average sentence length) so that the numbers in Table 1 differ from those cited in the original sources. 1043 confidence score frequency 0.2 0.4 0.6 0.8 1.0 0 500 1000 1500 Figure 1: Distribution of token-level confidence scores in the 5th iteration of FuSAL on MUC7 (number of tokens: 1,843) the model gets continuously more confident when trained on additional data and fewer hard cases remain in the shrinking pool. Fully Supervised vs. Semi-Supervised AL. Figure 2 compares the performance of FuSAL and SeSAL on the two corpora. SeSAL is run with a delay rate of d = 0 and a very high confidence threshold of t = 0.99 so that only those tokens are automatically labeled on which the current model is almost certain. Figure 2 clearly shows that SeSAL is much more efficient than its fully supervised counterpart. Table 2 depicts the exact numbers of manually labeled tokens to reach the maximal (supervised) F-score on both corpora. FuSAL saves about 50 % compared to RAND, while SeSAL saves about 60 % compared to FuSAL which constitutes an overall saving of over 80 % compared to RAND. These savings are calculated relative to the number of tokens which have to be manually labeled. Yet, consider the following gedanken experiment. Assume that, using SeSAL, every second token in a sequence would have to be labeled. Though this comes to a ‘formal’ saving of 50 %, the actual annotation effort in terms of the time needed would hardly go down. It appears that only when SeSAL splits a sentence into larger Corpus Fmax RAND FuSAL SeSAL MUC7 87.7 63,020 36,015 11,001 PENNBIOIE 82.3 194,019 83,017 27,201 Table 2: Tokens manually labeled to reach the maximal (supervised) F-score 0 10000 30000 50000 0.60 0.70 0.80 0.90 MUC7 manually labeled tokens F−score SeSAL FuSAL RAND 0 10000 30000 50000 0.60 0.70 0.80 0.90 PennBioIE manually labeled tokens F−score SeSAL FuSAL RAND Figure 2: Learning curves for Semi-supervised AL (SeSAL), Fully Supervised AL (FuSAL), and RAND(om) selection well-packaged, chunk-like subsequences annotation time can really be saved. To demonstrate that SeSAL comes close to this, we counted the number of base noun phrases (NPs) containing one or more tokens to be manually labeled. On the MUC7 corpus, FuSAL requires 7,374 annotated NPs to yield an F-score of 87 %, while SeSAL hit the same F-score with only 4,017 NPs. Thus, also in terms of the number of NPs, SeSAL saves about 45 % of the material to be considered.5 Detailed Analysis of SeSAL. As Figure 2 reveals, the learning curves of SeSAL stop early (on MUC7 after 12,800 tokens, on PENNBIOIE after 27,600 tokens) because at that point the whole corpus has been labeled exhaustively – either manually, or automatically. So, using SeSAL the complete corpus can be labeled with only a small fraction of it actually being manually annotated (MUC7: about 18 %, PENNBIOIE: about 13 %). 5On PENNBIOIE, SeSAL also saves about 45 % compared to FuSAL to achieve an F-score of 81 %. 1044 Table 3 provides additional analysis results on MUC7. In very early AL rounds, a large ratio of tokens has to be manually labeled (70-80 %). This number decreases increasingly as the classifier improves (and the pool contains fewer informative sentences). The number of tagging errors is quite low, resulting in a high accuracy of the created corpus of constantly over 99 %. 
labeled tokens manual automatic Σ AR (%) errors ACC 1,000 253 1,253 79.82 6 99.51 5,000 6,207 11,207 44.61 82 99.27 10,000 25,506 34,406 28.16 174 99.51 12,800 57,371 70,171 18.24 259 99.63 Table 3: Analysis of SeSAL on MUC7: Manually and automatically labeled tokens, annotation rate (AR) as the portion of manually labeled tokens in the total amount of labeled tokens, errors and accuracy (ACC) of the created corpus. The majority of the automatically labeled tokens (97-98 %) belong to the OUTSIDE class. This coincides with the assumption that SeSAL works especially well for labeling tasks where some classes occur predominantly and can, in most cases, easily be discriminated from the other classes, as is the case in the NER scenario. An analysis of the errors induced by the self-tagging component reveals that most of the errors (90100 %) are due to missed entity classes, i.e., while the correct class label for a token is one of the entity classes, the OUTSIDE class was assigned. This effect is more severe in early than in later AL iterations (see Table 4 for the exact numbers). labeled error types (%) corpus tokens errors E2O O2E E2E MUC7 10,000 75 100 – – 70,000 259 96 1.3 2.7 Table 4: Distribution of errors of the self-tagging component. Error types: OUTSIDE class assigned though an entity class is correct (E2O), entity class assigned but OUTSIDE is correct (O2E), wrong entity class assigned (E2E). Impact of the Confidence Threshold. We also ran SeSAL with different confidence thresholds t (0.99, 0.95, 0.90, and 0.70) and analyzed the results with respect to tagging errors and the model performance. Figure 3 shows the learning and error curves for different thresholds on the MUC7 corpus. The supervised F-score of 87.7 % is only reached by the highest and most restrictive threshold of t = 0.99. With all other thresholds, SeSAL 0 2000 6000 10000 0.60 0.70 0.80 0.90 learning curves manually labeled tokens F−score t=0.99 t=0.95 t=0.90 t=0.70 0 20000 40000 60000 0 500 1000 2000 error curves all labeled tokens errors t=0.99 t=0.95 t=0.90 t=0.70 Figure 3: Learning and error curves for SeSAL with different thresholds on the MUC7 corpus stops at much lower F-scores and produces labeled training data of lower accuracy. Table 5 contains the exact numbers and reveals that the poor model performance of SeSAL with lower thresholds is mainly due to dropping recall values. threshold F R P Acc 0.99 87.7 85.9 89.9 99.6 0.95 85.4 82.3 88.7 98.8 0.90 84.3 80.6 88.3 98.1 0.70 69.9 61.8 81.1 96.5 Table 5: Maximum model performance on MUC7 in terms of F-score (F), recall (R), precision (P) and accuracy (Acc) – the labeled corpus obtained by SeSAL with different thresholds Impact of the Delay Rate. We also measured the impact of delay rates on SeSAL’s efficiency considering three delay rates (1,000, 5,000, and 10,000 tokens) in combination with three confidence thresholds (0.99, 0.9, and 0.7). Figure 4 depicts the respective learning curves on the MUC7 corpus. 
For SeSAL with t = 0.99, the delay 1045 0 5000 10000 15000 20000 0.60 0.70 0.80 0.90 threshold 0.99 manually labeled tokens F−score FuSAL SeSAL, d=0 SeSAL, d=1000 SeSAL, d=5000 SeSAL, d=10000 F=0.877 0 5000 10000 15000 20000 0.60 0.70 0.80 0.90 threshold 0.9 manually labeled tokens F−score FuSAL SeSAL, d=0 SeSAL, d=1000 SeSAL, d=5000 SeSAL, d=10000 F=0.843 F=0.877 0 2000 6000 10000 0.60 0.70 0.80 0.90 threshold 0.7 manually labeled tokens F−score FuSAL SeSAL, d=0 SeSAL, d=1000 SeSAL, d=5000 SeSAL, d=10000 F=69.9 F=0.877 Figure 4: SeSAL with different delay rates and thresholds on MUC7. Horizontal lines mark the supervised F-score (upper line) and the maximal F-score achieved by SeSAL with the respective threshold and d = 0 (lower line). has no particularly beneficial effect. However, in combination with lower thresholds, the delay rates show positive effects as SeSAL yields Fscores closer to the maximal F-score of 87.7 %, thus clearly outperforming undelayed SeSAL. 6 Summary and Discussion Our experiments in the context of the NER scenario render evidence to the hypothesis that the proposed approach to semi-supervised AL (SeSAL) for sequence labeling indeed strongly reduces the amount of tokens to be manually annotated — in terms of numbers, about 60% compared to its fully supervised counterpart (FuSAL), and over 80% compared to a totally passive learning scheme based on random selection. For SeSAL to work well, a high and, by this, restrictive threshold has been shown to be crucial. Otherwise, large amounts of tagging errors lead to a poorer overall model performance. In our experiments, tagging errors in such a scenario were OUTSIDE labelings, while an entity class would have been correct – with the effect that the resulting models showed low recall rates. The delay rate is important when SeSAL is run with a low threshold as early tagging errors can be avoided which otherwise reinforce themselves. Finding the right balance between the delay factor and low thresholds requires experimental calibration. For the most restrictive threshold (t = 0.99) though such a delay is unimportant so that it can be set to d = 0 circumventing this calibration step. In summary, the self-tagging component of SeSAL gets more influential when the confidence threshold and the delay factor are set to lower values. At the same time though, under these conditions negative side-effects such as deteriorated data quality and, by this, inferior models emerge. These problems are major drawbacks of many bootstrapping approaches. However, our experiments indicate that as long as self-training is cautiously applied (as is done for SeSAL with restrictive parameters), it can definitely outperform an entirely supervised approach. From an annotation point of view, SeSAL efficiently guides the annotator to regions within the selected sentence which are very useful for the learning task. In our experiments on the NER scenario, those regions were mentions of entity names or linguistic units which had a surface appearance similar to entity mentions but could not yet be correctly distinguished by the model. While we evaluated SeSAL here in terms of tokens to be manually labeled, an open issue remains, namely how much of the real annotation effort – measured by the time needed – is saved by this approach. 
We here hypothesize that human annotators work much more efficiently when pointed to the regions of immediate interest instead of making them skim in a self-paced way through larger passages of (probably) semantically irrelevant but syntactically complex utterances – a tiring and error-prone task. Future research is needed to empirically investigate into this area and quantify the savings in terms of the time achievable with SeSAL in the NER scenario. Acknowledgements This work was funded by the EC within the BOOTStrep (FP6-028099) and CALBC (FP7231727) projects. We want to thank Roman Klinger (Fraunhofer SCAI) for fruitful discussions. 1046 References A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with co-training. In COLT’98 – Proceedings of the 11th Annual Conference on Computational Learning Theory, pages 92–100. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38. S. Engelson and I. Dagan. 1996. Minimizing manual annotation cost in supervised training from corpora. In ACL’96 – Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 319–326. T. Kristjansson, A. Culotta, and P. Viola. 2004. Interactive information extraction with constrained Conditional Random Fields. In AAAI’04 – Proceedings of 19th National Conference on Artificial Intelligence, pages 412–418. S. Kulick, A. Bies, M. Liberman, M. Mandel, R. T. McDonald, M. S. Palmer, and A. I. Schein. 2004. Integrated annotation for biomedical information extraction. In Proceedings of the HLT-NAACL 2004 Workshop ‘Linking Biological Literature, Ontologies and Databases: Tools for Users’, pages 61–68. J. D. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In ICML’01 – Proceedings of the 18th International Conference on Machine Learning, pages 282–289. D. D. Lewis and J. Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In ICML’94 – Proceedings of the 11th International Conference on Machine Learning, pages 148–156. Linguistic Data Consortium. 2001. Message Understanding Conference (MUC) 7. LDC2001T02. FTP FILE. Philadelphia: Linguistic Data Consortium. A. McCallum and K. Nigam. 1998. Employing EM and pool-based Active Learning for text classification. In ICML’98 – Proceedings of the 15th International Conference on Machine Learning, pages 350– 358. I. A. Muslea, S. Minton, and C. A. Knoblock. 2002. Active semi-supervised learning = Robust multiview learning. In ICML’02 – Proceedings of the 19th International Conference on Machine Learning, pages 435–442. G. Ngai and D. Yarowsky. 2000. Rule writing or annotation: Cost-efficient resource usage for base noun phrase chunking. In ACL’00 – Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 117–125. D. Pierce and C. Cardie. 2001. Limitations of cotraining for natural language learning from large datasets. In EMNLP’01 – Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing, pages 1–9. L. R. Rabiner. 1989. A tutorial on Hidden Markov Models and selected applications in speech recognition. Proceedings of the IEEE, 77(2):257–286. B. Settles and M. Craven. 2008. An analysis of Active Learning strategies for sequence labeling tasks. 
In EMNLP’08 – Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1069–1078. B. Settles, M. Craven, and L. Friedland. 2008. Active Learning with real annotation costs. In Proceedings of the NIPS 2008 Workshop on ‘Cost-Sensitive Machine Learning’, pages 1–10. H. S. Seung, M. Opper, and H. Sompolinsky. 1992. Query by committee. In COLT’92 – Proceedings of the 5th Annual Workshop on Computational Learning Theory, pages 287–294. K. Tomanek, J. Wermter, and U. Hahn. 2007. An approach to text corpus construction which cuts annotation costs and maintains corpus reusability of annotated data. In EMNLP-CoNLL’07 – Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Language Learning, pages 486–495. D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In ACL’95 – Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 189– 196. 1047
2009
117
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1048–1056, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Word or Phrase? Learning Which Unit to Stress for Information Retrieval∗ Young-In Song† and Jung-Tae Lee‡ and Hae-Chang Rim‡ †Microsoft Research Asia, Beijing, China ‡Dept. of Computer & Radio Communications Engineering, Korea University, Seoul, Korea [email protected]†, {jtlee,rim}@nlp.korea.ac.kr‡ Abstract The use of phrases in retrieval models has been proven to be helpful in the literature, but no particular research addresses the problem of discriminating phrases that are likely to degrade the retrieval performance from the ones that do not. In this paper, we present a retrieval framework that utilizes both words and phrases flexibly, followed by a general learning-to-rank method for learning the potential contribution of a phrase in retrieval. We also present useful features that reflect the compositionality and discriminative power of a phrase and its constituent words for optimizing the weights of phrase use in phrase-based retrieval models. Experimental results on the TREC collections show that our proposed method is effective. 1 Introduction Various researches have improved the quality of information retrieval by relaxing the traditional ‘bag-of-words’ assumption with the use of phrases. (Miller et al., 1999; Song and Croft, 1999) explore the use n-grams in retrieval models. (Fagan, 1987; Gao et al., 2004; Metzler and Croft, 2005; Tao and Zhai, 2007) use statistically-captured term dependencies within a query. (Strzalkowski et al., 1994; Kraaij and Pohlmann, 1998; Arampatzis et al., 2000) study the utility of various kinds of syntactic phrases. Although use of phrases clearly helps, there still exists a fundamental but unsolved question: Do all phrases contribute an equal amount of increase in the performance of information retrieval models? Let us consider a search query ‘World Bank Criticism’, which has the following phrases: ‘world ∗This work was done while Young-In Song was with the Dept. of Computer & Radio Communications Engineering, Korea University. bank’ and ‘bank criticism’. Intuitively, the former should be given more importance than its constituents ‘world’ and ‘bank’, since the meaning of the original phrase cannot be predicted from the meaning of either constituent. In contrast, a relatively less attention could be paid to the latter ‘bank criticism’, because there may be alternate expressions, of which the meaning is still preserved, that could possibly occur in relevant documents. However, virtually all the researches ignore the relation between a phrase and its constituent words when combining both words and phrases in a retrieval model. Our approach to phrase-based retrieval is motivated from the following linguistic intuitions: a) phrases have relatively different degrees of significance, and b) the influence of a phrase should be differentiated based on the phrase’s constituents in retrieval models. In this paper, we start out by presenting a simple language modeling-based retrieval model that utilizes both words and phrases in ranking with use of parameters that differentiate the relative contributions of phrases and words. Moreover, we propose a general learning-to-rank based framework to optimize the parameters of phrases against their constituent words for retrieval models that utilize both words and phrases. 
In order to estimate such parameters, we adapt the use of a cost function together with a gradient descent method that has been proven to be effective for optimizing information retrieval models with multiple parameters (Taylor et al., 2006; Metzler, 2007). We also propose a number of potentially useful features that reflect not only the characteristics of a phrase but also the information of its constituent words for minimizing the cost function. Our experimental results demonstrate that 1) differentiating the weights of each phrase over words yields statistically significant improvement in retrieval performance, 2) the gradient descent-based parameter optimization is reasonably appropriate 1048 to our task, and 3) the proposed features can distinguish good phrases that make contributions to the retrieval performance. The rest of this paper is organized as follows. The next section discusses previous work. Section 3 presents our learning-based retrieval framework and features. Section 4 reports the evaluations of our techniques. Section 5 finally concludes the paper and discusses future work. 2 Previous Work To date, there have been numerous researches to utilize phrases in retrieval models. One of the most earliest work on phrase-based retrieval was done by (Fagan, 1987). In (Fagan, 1987), the effectiveness of proximity-based phrases (i.e. words occurring within a certain distance) in retrieval was investigated with varying criteria to extract phrases from text. Subsequently, various types of phrases, such as sequential n-grams (Mitra et al., 1997), head-modifier pairs extracted from syntactic structures (Lewis and Croft, 1990; Zhai, 1997; Dillon and Gray, 1983; Strzalkowski et al., 1994), proximity-based phrases (Turpin and Moffat, 1999), were examined with conventional retrieval models (e.g. vector space model). The benefit of using phrases for improving the retrieval performance over simple ‘bag-of-words’ models was far less than expected; the overall performance improvement was only marginal and sometimes even inconsistent, specifically when a reasonably good weighting scheme was used (Mitra et al., 1997). Many researchers argued that this was due to the use of improper retrieval models in the experiments. In many cases, the early researches on phrase-based retrieval have only focused on extracting phrases, not concerning about how to devise a retrieval model that effectively considers both words and phrases in ranking. For example, the direct use of traditional vector space model combining a phrase weight and a word weight virtually yields the result assuming independence between a phrase and its constituent words (Srikanth and Srihari, 2003). In order to complement the weakness, a number of research efforts were devoted to the modeling of dependencies between words directly within retrieval models instead of using phrases over the years (van Rijsbergen, 1977; Wong et al., 1985; Croft et al., 1991; Losee, 1994). Most studies were conducted on the probabilistic retrieval framework, such as the BIM model, and aimed on producing a better retrieval model by relaxing the word independence assumption based on the cooccurrence information of words in text. Although those approaches theoretically explain the relation between words and phrases in the retrieval context, they also showed little or no improvements in retrieval effectiveness, mainly because of their statistical nature. 
While a phrase-based approach selectively incorporated potentially-useful relation between words, the probabilistic approaches force to estimate parameters for all possible combinations of words in text. This not only brings parameter estimation problems but causes a retrieval system to fail by considering semanticallymeaningless dependency of words in matching. Recently, a number of retrieval approaches have been attempted to utilize a phrase in retrieval models. These approaches have focused to model statistical or syntactic phrasal relations under the language modeling method for information retrieval. (Srikanth and Srihari, 2003; Maisonnasse et al., 2005) examined the effectiveness of syntactic relations in a query by using language modeling framework. (Song and Croft, 1999; Miller et al., 1999; Gao et al., 2004; Metzler and Croft, 2005) investigated the effectiveness of language modeling approach in modeling statistical phrases such as n-grams or proximity-based phrases. Some of them showed promising results in their experiments by taking advantages of phrases soundly in a retrieval model. Although such approaches have made clear distinctions by integrating phrases and their constituents effectively in retrieval models, they did not concern the different contributions of phrases over their constituents in retrieval performances. Usually a phrase score (or probability) is simply combined with scores of its constituent words by using a uniform interpolation parameter, which implies that a uniform contribution of phrases over constituent words is assumed. Our study is clearly distinguished from previous phrase-based approaches; we differentiate the influence of each phrase according to its constituent words, instead of allowing equal influence for all phrases. 3 Proposed Method In this section, we present a phrase-based retrieval framework that utilizes both words and phrases effectively in ranking. 1049 3.1 Basic Phrase-based Retrieval Model We start out by presenting a simple phrase-based language modeling retrieval model that assumes uniform contribution of words and phrases. Formally, the model ranks a document D according to the probability of D generating phrases in a given query Q, assuming that the phrases occur independently: s(Q; D) = P(Q|D) ≈ |Q| Y i=1 P(qi|qhi, D) (1) where qi is the ith query word, qhi is the head word of qi, and |Q| is the query size. To simplify the mathematical derivations, we modify Eq. 1 using logarithm as follows: s(Q; D) ∝ |Q| X i=1 log[P(qi|qhi, D)] (2) In practice, the phrase probability is mixed with the word probability (i.e. deleted interpolation) as: P(qi|qhi,D)≈λP(qi|qhi,D)+(1−λ)P(qi|D) (3) where λ is a parameter that controls the impact of the phrase probability against the word probability in the retrieval model. 3.2 Adding Multiple Parameters Given a phrase-based retrieval model that utilizes both words and phrases, one would definitely raise a fundamental question on how much weight should be given to the phrase information compared to the word information. In this paper, we propose to differentiate the value of λ in Eq. 3 according to the importance of each phrase by adding multiple free parameters to the retrieval model. Specifically, we replace λ with wellknown logistic function, which allows both numerical and categorical variables as input, whereas the output is bounded to values between 0 and 1. Formally, the input of a logistic function is a set of evidences (i.e. 
feature vector) X generated from a given phrase and its constituents, whereas the output is the probability predicted by fitting X to a logistic curve. Therefore, λ is replaced as follows:

\lambda(X) = \frac{1}{1 + e^{-f(X)}} \cdot \alpha    (4)

where α is a scaling factor that confines the output to values between 0 and α, and

f(X) = \beta_0 + \sum_{i=1}^{|X|} \beta_i x_i    (5)

where x_i is the i-th feature, β_i is the coefficient parameter of x_i, and β_0 is the 'intercept', i.e., the value of f(X) when all feature values are zero.

3.3 RankNet-based Parameter Optimization

The β parameters in Eq. 5 are the ones we wish to learn, via parameter optimization methods, so as to improve the resulting retrieval performance. In many cases, parameters in a retrieval model are empirically determined through a series of experiments or automatically tuned via machine learning to maximize a retrieval metric of choice (e.g., mean average precision). The simplest but guaranteed way would be to perform a brute-force search for the global optimum directly over the entire parameter space. However, not only does the computational cost of this so-called direct search become prohibitively expensive as the number of parameters increases, but most retrieval metrics are also non-smooth with respect to the model parameters (Metzler, 2007). For these reasons, we propose to adapt a learning-to-rank framework that optimizes the multiple parameters of phrase-based retrieval models effectively, at a lower computational cost and without being tied to any specific retrieval metric. Specifically, we use a gradient descent method with the RankNet cost function (Burges et al., 2005) to perform effective parameter optimization, as in (Taylor et al., 2006; Metzler, 2007). The basic idea is to find a local minimum of a cost function defined over pairwise document preferences. Assume that, given a query Q, there is a set of document pairs R_Q based on relevance judgements, such that (D_1, D_2) ∈ R_Q implies that document D_1 should be ranked higher than D_2. Given a defined set of pairwise preferences R, the RankNet cost function is computed as:

C(Q, R) = \sum_{\forall Q \in \mathcal{Q}} \sum_{\forall (D_1, D_2) \in R_Q} \log(1 + e^{Y})    (6)

where \mathcal{Q} is the set of queries, and Y = s(Q; D_2) - s(Q; D_1) using the current parameter setting. In order to minimize the cost function, we compute gradients of Eq. 6 with respect to each parameter β_i by applying the chain rule:

\frac{\partial C}{\partial \beta_i} = \sum_{\forall Q \in \mathcal{Q}} \sum_{\forall (D_1, D_2) \in R_Q} \frac{\partial C}{\partial Y} \frac{\partial Y}{\partial \beta_i}    (7)

where the two factors are computed as:

\frac{\partial C}{\partial Y} = \frac{\exp[s(Q; D_2) - s(Q; D_1)]}{1 + \exp[s(Q; D_2) - s(Q; D_1)]}    (8)

\frac{\partial Y}{\partial \beta_i} = \frac{\partial s(Q; D_2)}{\partial \beta_i} - \frac{\partial s(Q; D_1)}{\partial \beta_i}    (9)

With the retrieval model in Eq. 2 and λ(X), f(X) in Eqs. 4 and 5, the partial derivative of s(Q; D) with respect to β_i is computed as follows:

\frac{\partial s(Q; D)}{\partial \beta_i} = \sum_{j=1}^{|Q|} \frac{x_{j,i}\, \lambda(X_j) \left(1 - \frac{\lambda(X_j)}{\alpha}\right) \cdot \left(P(q_j|q_{h_j}, D) - P(q_j|D)\right)}{\lambda(X_j) P(q_j|q_{h_j}, D) + (1 - \lambda(X_j)) P(q_j|D)}    (10)

where X_j is the feature vector associated with the j-th query phrase and x_{j,i} is its i-th feature.

3.4 Features

We experimented with various features that are potentially useful not only for discriminating a phrase itself but also for characterizing its constituents. In this section, we report only the ones that have made positive contributions to the overall retrieval performance. The two main criteria considered in the selection of the features are the following: compositionality and discriminative power.

Compositionality Features
Features on phrase compositionality are designed to measure how likely it is that a phrase can be represented by its constituent words without forming a phrase; if a phrase in a query has very high compositionality, there is a high probability that its relevant documents do not contain the phrase. In this case, emphasizing the phrase unit could be very risky in retrieval.
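As a concrete illustration of Eqs. 4, 5, and 10, the sketch below computes the per-phrase interpolation weight λ(X) from a feature vector X (built from the compositionality and discriminative-power features described in this section) and the derivative of λ(X) that appears in the numerator of Eq. 10. It is a minimal sketch under the stated parameterization, not the implementation used in the experiments; the default value of α is illustrative only.

```python
import math

# Per-phrase interpolation weight (Eqs. 4-5) and its derivative w.r.t. beta_i
# (the factor appearing in Eq. 10). X is the feature vector for one query
# phrase; feature extraction itself is not shown here. alpha = 0.5 is an
# illustrative default, not a value taken from the paper.

def lam(X, beta, beta0, alpha=0.5):
    f = beta0 + sum(b * x for b, x in zip(beta, X))    # Eq. 5
    return alpha / (1.0 + math.exp(-f))                # Eq. 4

def phrase_term_prob(p_phrase, p_word, X, beta, beta0, alpha=0.5):
    # Eq. 3 with the constant lambda replaced by lambda(X).
    l = lam(X, beta, beta0, alpha)
    return l * p_phrase + (1.0 - l) * p_word

def dlam_dbeta(X, i, beta, beta0, alpha=0.5):
    # d lambda(X) / d beta_i = x_i * lambda(X) * (1 - lambda(X) / alpha)
    l = lam(X, beta, beta0, alpha)
    return X[i] * l * (1.0 - l / alpha)
```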
In the opposite case that a phrase is uncompositional, it is obvious that occurrence of a phrase in a document can be a stronger evidence of relevance than its constituent words. Compositionality of a phrase can be roughly measured by using corpus statistics or its linguistic characteristics; we have observed that, in many times, an extremely-uncompositional phrase appears as a noun phrase, and the distance between its constituent words is generally fixed within a short distance. In addition, it has a tendency to be used repeatedly in a document because its semantics cannot be represented with individual constituent words. Based on these intuitions, we devise the following features: Ratio of multiple occurrences (RMO): This is a real-valued feature that measures the ratio of the phrase repeatedly used in a document. The value of this feature is calculated as follows: x = P ∀D;count(wi→whi ,D)>1 count(wi →whi, D) count(wi →whi, C) + γ (11) where wi →whi is a phrase in a given query, count(x, y) is the count of x in y, and γ is a smallvalued constant to prevent unreliable estimation by very rarely-occurred phrases. Ratio of single-occurrences (RSO): This is a binary feature that indicates whether or not a phrase occurs once in most documents containing it. This can be regarded as a supplementary feature of RMO. Preferred phrasal type (PPT): This feature indicates the phrasal type that the phrase prefers in a collection. We consider only two cases (whether the phrase prefers verb phrase or adjective-noun phrase types) as features in the experiments1. Preferred distance (PD): This is a binary feature indicating whether or not the phrase prefers long distance (> 1) between constituents in the document collection. Uncertainty of preferred distance (UPD): We also use the entropy (H) of the modification distance (d) of the given phrase in the collection to measure the compositionality; if the distance is not fixed and is highly uncertain, the phrase may be very compositional. The entropy is computed as: x = H(p(d = x|wi →whi)) (12) where d ∈1, 2, 3, long and all probabilities are estimated with discount smoothing. We simply use two binary features regarding the uncertainty of distance; one indicates whether the uncertainty of a phrase is very high (> 0.85), and the other indicates whether the uncertainty is very low (< 0.05)2. Uncertainty of preferred phrasal type (UPPT): As similar to the uncertainty of preferred distance, the uncertainty of the preferred phrasal type of the phrase can be also used as a feature. We consider this factor as a form of a binary feature indicating whether the uncertainty is very high or not. Discriminative Power Features In some cases, the occurrence of a phrase can be a valuable evidence even if the phrase is very likely to be compositional. For example, it is well known that the use of a phrase can be effective in retrieval when its constituent words appear very frequently in the collection, because each word would have a very low discriminative power for relevance. On the contrary, if a constituent word occurs very 1For other phrasal types, significant differences were not observed in the experiments. 2Although it may be more natural to use a real-valued feature, we use these binary features because of the two practical reasons; firstly, it could be very difficult to find an adequate transformation function with real values, and secondly, the two intervals at tails were observed to be more important than the rest. 
1051 rarely in the collection, it could not be effective to use the phrase even if the phrase is highly uncompositional. Similarly, if the probability that a phrase occurs in a document where its constituent words co-occur is very high, we might not need to place more emphasis on the phrase than on words, because co-occurrence information naturally incorporated in retrieval models may have enough power to distinguish relevant documents. Based on these intuitions, we define the following features: Document frequency of constituents (DF): We use the document frequency of a constituent as two binary features: one indicating whether the word has very high document frequency (>10% of documents in a collection) and the other one indicating whether it has very low document frequency (<0.2% of documents, which is approximately 1,000 in our experiments). Probability of constituents as phrase (CPP): This feature is computed as a relative frequency of documents containing a phrase over documents where two constituent words appear together. One interesting fact that we observe is that document frequency of the modifier is generally a stronger evidence on the utility of a phrase in retrieval than of the headword. In the case of the headword, we could not find an evidence that it has to be considered in phrase weighting. It seems to be a natural conclusion, because the importance of the modifier word in retrieval is subordinate to the relation to its headword, but the headword is not in many phrases. For example, in the case of the query ‘tropical storms’, retrieving a document only containing tropical can be meaningless, but a document about storm can be meaningful. Based on this observation, we only incorporate document frequency features of syntactic modifiers in the experiments. 4 Experiments In this section, we report the retrieval performances of the proposed method with appropriate baselines over a range of training sets. 4.1 Experimental Setup Retrieval models: We have set two retrieval models, namely the word model and the (phrase-based) one-parameter model, as baselines. The ranking function of the word model is equivalent to Eq. 2, with λ in Eq. 3 being set to zero (i.e. the phrase probability makes no effect on the ranking). The ranking function of the one-parameter model is also equivalent to Eq. 2, with λ in Eq. 3 used “as is” (i.e. as a constant parameter value optimized using gradient descent method, without being replaced to a logistic function). Both baseline models cannot differentiate the importance of phrases in a query. To make a distinction from the baseline models, we will name our proposed method as a multi-parameter model. In our experiments, all the probabilities in all retrieval models are smoothed with the collection statistics by using dirichlet priors (Zhai and Lafferty, 2001). Corpus (Training/Test): We have conducted large-scale experiments on three sets of TREC’s Ad Hoc Test Collections, namely TREC-6, TREC7, and TREC-8. Three query sets, TREC-6 topics 301-350, TREC-7 topics 351-400, and TREC8 topics 401-450, along with their relevance judgments have been used. We only used the title field as query. When performing experiments on each query set with the one-parameter and the multiparameter models, the other two query sets have been used for learning the optimal parameters. 
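The parameters are fitted from document preference pairs by gradient descent on the RankNet cost (Eqs. 6-9); the sketch below shows a single pass, assuming that `score(q, d, beta)` and `score_grad(q, d, beta)` implement s(Q; D) and its partial derivatives from Eq. 10, and that the preference pairs are already available. The learning rate is an illustrative choice, not a value from the experiments.

```python
import math

# One gradient-descent pass over document preference pairs using the RankNet
# cost (Eqs. 6-9). `score` and `score_grad` stand for s(Q;D) and its partial
# derivatives w.r.t. the beta parameters (Eq. 10) and are assumed inputs.

def ranknet_epoch(pairs, beta, score, score_grad, eta=0.01):
    """pairs: iterable of (query, d_preferred, d_nonpreferred) tuples."""
    grad = [0.0] * len(beta)
    for q, d1, d2 in pairs:                        # d1 should be ranked above d2
        y = score(q, d2, beta) - score(q, d1, beta)
        dC_dy = math.exp(y) / (1.0 + math.exp(y))  # Eq. 8
        g1 = score_grad(q, d1, beta)
        g2 = score_grad(q, d2, beta)
        for i in range(len(beta)):
            grad[i] += dC_dy * (g2[i] - g1[i])     # Eqs. 7 and 9
    return [b - eta * g for b, g in zip(beta, grad)]
```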
For each query in the training set, we have generated document pairs for training by the following strategy: first, we have gathered top m ranked documents from retrieval results by using the word model and the one-parameter model (by manually setting λ in Eq. 3 to the fixed constants, 0 and 0.1 respectively). Then, we have sampled at most r relevant documents and n non-relevant documents from each one and generated document pairs from them. In our experiments, m, r, and n is set to 100, 10, and 40, respectively. Phrase extraction and indexing: We evaluate our proposed method on two different types of phrases: syntactic head-modifier pairs (syntactic phrases) and simple bigram phrases (statistical phrases). To index the syntactic phrases, we use the method proposed in (Strzalkowski et al., 1994) with Connexor FDG parser3, the syntactic parser based on the functional dependency grammar (Tapanainen and Jarvinen, 1997). All necessary information for feature values were indexed together for both syntactic and statistical phrases. To maintain indexes in a manageable size, phrases 3Connexor FDG parser is a commercial parser; the demo is available at: http://www.connexor.com/demo 1052 Test set ←Training set 6 ←7+8 7 ←6+8 8 ←6+7 Model Metric \ Query all partial all partial all partial Word MAP 0.2135 0.1433 0.1883 0.1876 0.2380 0.2576 (Baseline 1) R-Prec 0.2575 0.1894 0.2351 0.2319 0.2828 0.2990 P@10 0.3660 0.3333 0.4100 0.4324 0.4520 0.4517 One-parameter MAP 0.2254 0.1633† 0.1988 0.2031 0.2352 0.2528 (Baseline 2) R-Prec 0.2738 0.2165 0.2503 0.2543 0.2833 0.2998 P@10 0.3820 0.3600 0.4540 0.4971 0.4580 0.4621 Multi-parameter MAP 0.2293‡ 0.1697‡ 0.2038† 0.2105† 0.2452 0.2701 (Proposed) R-Prec 0.2773 0.2225 0.2534 0.2589 0.2891 0.3099 P@10 0.4020 0.3933 0.4540 0.4971 0.4700 0.4828 Table 1: Retrieval performance of different models on syntactic phrases. Italicized MAP values with symbols † and ‡ indicate statistically significant improvements over the word model according to Student’s t-test at p < 0.05 level and p < 0.01 level, respectively. Bold figures indicate the best performed case for each metric. that occurred less than 10 times in the document collections were not indexed. 4.2 Experimental Results Table 1 shows the experimental results of the three retrieval models on the syntactic phrase (headmodifier pair). In the table, partial denotes the performance evaluated on queries containing more than one phrase that appeared in the document collection4; this shows the actual performance difference between models. Note that the ranking results of all retrieval models would be the same as the result of the word model if a query does not contain any phrases in the document collection, because P(qi|qhi, D) would be calculated as zero eventually. As evaluation measures, we used the mean average precision (MAP), R-precision (RPrec), and precisions at top 10 ranks (P@10). As shown in Table 1, when a syntactic phrase is used for retrieval, one-parameter model trained by gradient-descent method generally performs better than the word model, but the benefits are inconsistent; it achieves approximately 15% and 8% improvements on the partial query set of TREC6 and 7 over the word model, but it fails to show any improvement on TREC-8 queries. This may be a natural result since the one-parameter model is very sensitive to the averaged contribution of phrases used for training. 
Compared to the queries in TREC-6 and 7, the TREC-8 queries contain more phrases that are not effective for retrieval 4The number of queries containing a phrase in TREC-6, 7, and 8 query set is 31, 34, and 29, respectively. (i.e. ones that hurt the retrieval performance when used). This indicates that without distinguishing effective phrases from ineffective phrases for retrieval, the model trained from one training set for phrase would not work consistently on other unseen query sets. Note that the proposed model outperforms all the baselines over all query sets; this shows that differentiating relative contributions of phrases can improve the retrieval performance of the oneparameter model considerably and consistently. As shown in the table, the multi-parameter model improves by approximately 18% and 12% on the TREC-6 and 7 partial query sets, and it also significantly outperforms both the word model and the one-parameter model on the TREC-8 query set. Specifically, the improvement on the TREC-8 query set shows one advantage of using our proposed method; by separating potentiallyineffective phrases and effective phrases based on the features, it not only improves the retrieval performance for each query but makes parameter learning less sensitive to the training set. Figure 1 shows some examples demonstrating the different behaviors of the one-parameter model and the multi-parameters model. On the figure, the un-dotted lines indicate the variation of average precision scores when λ value in Eq. 3 is manually set. As λ gets closer to 0, the ranking formula becomes equivalent to the word model. As shown in the figure, the optimal point of λ is quiet different from query to query. For example, in cases of the query ‘ferry sinking’ and industrial 1053 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0 0.1 0.2 0.3 0.4 0.5 AvgPr lambda Performance variation for the query ‘ferry sinking’ varing lambda one-parameter multiple-parameter 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0 0.1 0.2 0.3 0.4 0.5 AvgPr lambda Performance variation for the query ‘industrial espionage’ varing lambda one-parameter multiple-parameter 0.32 0.33 0.34 0.35 0.36 0.37 0.38 0.39 0 0.05 0.1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 0.5 AvgPr lambda Performance variation for the query ‘ declining birth rates’ varing lambda one-parameter multiple-parameter 0.2 0.25 0.3 0.35 0.4 0.45 0 0.1 0.2 0.3 0.4 0.5 AvgPr lambda Performance variation for the query ‘amazon rain forest’ varing lambda one-parameter multiple-parameter Figure 1: Performance variations for the queries ‘ferry sinking’, ‘industrial espionage’, ‘declining birth rate’ and ‘Amazon rain forest’ according to λ in Eq. 3. espionage’ on the upper side, the optimal point is the value close to 0 and 1 respectively. This means that the occurrences of the phrase ‘ferry sinking’ in a document is better to be less-weighted in retrieval while ‘industrial espionage’ should be treated as a much more important evidence than its constituent words. Obviously, such differences are not good for one-parameter model assuming relative contributions of phrases uniformly. For both opposite cases, the multi-parameter model significantly outperforms one-parameter model. The two examples at the bottom of Figure 1 show the difficulty of optimizing phrase-based retrieval using one uniform parameter. 
For example, the query ‘declining birth rate’ contains two different phrases, ‘declining rate’ and ‘birth rate’, which have potentially-different effectiveness in retrieval; the phrase ‘declining rate’ would not be helpful for retrieval because it is highly compositional, but the phrase ‘birth rate’ could be a very strong evidence for relevance since it is conventionally used as a phrase. In this case, we can get only small benefit from the one-parameter model even if we find optimal λ from gradient descent, because it will be just a compromised value between two different, optimized λs. For such query, the multi-parameter model could be more effective than the one-parameter model by enabling to set different λs on phrases according to their predicted contributions. Note that the multi-parameter model significantly outperforms the one-parameter model and all manually-set λs for the queries ‘declining birth rate’ and ‘Amazon rain forest’, which also has one effective phrase, ‘rain forest’, and one non-effective phrase, ‘Amazon forest’. Since our method is not limited to a particular type of phrases, we have also conducted experiments on statistical phrases (bigrams) with a reduced set of features directed applicable; RMO, RSO, PD5, DF, and CPP; the features requiring linguistic preprocessing (e.g. PPT) are not used, because it is unrealistic to use them under bigrambased retrieval setting. Moreover, the feature UPD is not used in the experiments because the uncer5In most cases, the distance between words in a bigram is 1, but sometimes, it could be more than 1 because of the effect of stopword removal. 1054 Test ←Training Model Metric 6 ←7+8 7 ←6+8 8 ←6+7 Word MAP 0.2135 0.1883 0.2380 (Baseline 1) R-Prec 0.2575 0.2351 0.2828 P@10 0.3660 0.4100 0.4520 One-parameter MAP 0.2229 0.1979 0.2492† (Baseline 2) R-Prec 0.2716 0.2456 0.2959 P@10 0.3720 0.4500 0.4620 Multi-parameter MAP 0.2224 0.2025† 0.2499† (Proposed) R-Prec 0.2707 0.2457 0.2952 P@10 0.3780 0.4520 0.4600 Table 2: Retrieval performance of different models, using statistical phrases. tainty of preferred distance does not vary much for bigram phrases. The results are shown in Table 2. The results of experiments using statistical phrases show that multi-parameter model yields additional performance improvement against baselines in many cases, but the benefit is insignificant and inconsistent. As shown in Table 2, according to the MAP score, the multi-parameter model outperforms the one-parameter model on the TREC-7 and 8 query sets, but it performs slightly worse on the TREC-6 query set. We suspect that this is because of the lack of features to distinguish an effective statistical phrases from ineffective statistical phrase. In our observation, the bigram phrases also show a very similar behavior in retrieval; some of them are very effective while others can deteriorate the performance of retrieval models. However, in case of using statistical phrases, the λ computed by our multi-parameter model would be often similar to the one computed by the one-parameter model, when there is no sufficient evidence to differentiate a phrase. Moreover, the insufficient amount of features may have caused the multi-parameter model to overfit to the training set easily. The small size of training corpus could be an another reason. The number of queries we used for training is less than 80 when removing a query not containing a phrase, which is definitely not a sufficient amount to learn optimal parameters. 
However, if we recall that the multi-parameter model worked reasonably in the experiments using syntactic phrases with the same training sets, the lack of features would be a more important reason. Although we have not mainly focused on features in this paper, it would be strongly necessary to find other useful features, not only for statistical phrases, but also for syntactic phrases. For example, statistics from query logs and the probability of snippet containing a same phrase in a query is clicked by user could be considered as useful features. Also, the size of the training data (queries) and the document collection may not be sufficient enough to conclude the effectiveness of our proposed method; our method should be examined in a larger collection with more queries. Those will be one of our future works. 5 Conclusion In this paper, we present a novel method to differentiate impacts of phrases in retrieval according to their relative contribution over the constituent words. The contributions of this paper can be summarized in three-fold: a) we proposed a general framework to learn the potential contribution of phrases in retrieval by “parameterizing” the factor interpolating the phrase weight and the word weight on features and optimizing the parameters using RankNet-based gradient descent algorithm, b) we devised a set of potentially useful features to distinguish effective and non-effective phrases, and c) we showed that the proposed method can be effective in terms of retrieval by conducting a series of experiments on the TREC test collections. As mentioned earlier, the finding of additional features, specifically for statistical phrases, would be necessary. Moreover, for a thorough analysis on the effect of our framework, additional experiments on larger and more realistic collections (e.g. the Web environment) would be required. These will be our future work. 1055 References Avi Arampatzis, Theo P. van der Weide, Cornelis H. A. Koster, and P. van Bommel. 2000. Linguisticallymotivated information retrieval. In Encyclopedia of Library and Information Science. Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. 2005. Learning to rank using gradient descent. In Proceedings of ICML ’05, pages 89–96. W. Bruce Croft, Howard R. Turtle, and David D. Lewis. 1991. The use of phrases and structured queries in information retrieval. In Proceedings of SIGIR ’91, pages 32–45. Martin Dillon and Ann S. Gray. 1983. Fasit: A fully automatic syntactically based indexing system. Journal of the American Society for Information Science, 34(2):99–108. Joel L. Fagan. 1987. Automatic phrase indexing for document retrieval. In Proceedings of SIGIR ’87, pages 91–101. Jianfeng Gao, Jian-Yun Nie, Guangyuan Wu, and Guihong Cao. 2004. Dependence language model for information retrieval. In Proceedings of SIGIR ’04, pages 170–177. Wessel Kraaij and Ren´ee Pohlmann. 1998. Comparing the effect of syntactic vs. statistical phrase indexing strategies for dutch. In Proceedings of ECDL ’98, pages 605–617. David D. Lewis and W. Bruce Croft. 1990. Term clustering of syntactic phrases. In Proceedings of SIGIR ’90, pages 385–404. Robert M. Losee, Jr. 1994. Term dependence: truncating the bahadur lazarsfeld expansion. Information Processing and Management, 30(2):293–303. Loic Maisonnasse, Gilles Serasset, and Jean-Pierre Chevallet. 2005. Using syntactic dependency and language model x-iota ir system for clips mono and bilingual experiments in clef 2005. 
In Working Notes for the CLEF 2005 Workshop. Donald Metzler and W. Bruce Croft. 2005. A markov random field model for term dependencies. In Proceedings of SIGIR ’05, pages 472–479. Donald Metzler. 2007. Using gradient descent to optimize language modeling smoothing parameters. In Proceedings of SIGIR ’07, pages 687–688. David R. H. Miller, Tim Leek, and Richard M. Schwartz. 1999. A hidden markov model information retrieval system. In Proceedings of SIGIR ’99, pages 214–221. Mandar Mitra, Chris Buckley, Amit Singhal, and Claire Cardie. 1997. An analysis of statistical and syntactic phrases. In Proceedings of RIAO ’97, pages 200–214. Fei Song and W. Bruce Croft. 1999. A general language model for information retrieval. In Proceedings of CIKM ’99, pages 316–321. Munirathnam Srikanth and Rohini Srihari. 2003. Exploiting syntactic structure of queries in a language modeling approach to ir. In Proceedings of CIKM ’03, pages 476–483. Tomek Strzalkowski, Jose Perez-Carballo, and Mihnea Marinescu. 1994. Natural language information retrieval: Trec-3 report. In Proceedings of TREC-3, pages 39–54. Tao Tao and ChengXiang Zhai. 2007. An exploration of proximity measures in information retrieval. In Proceedings of SIGIR ’07, pages 295–302. Pasi Tapanainen and Timo Jarvinen. 1997. A nonprojective dependency parser. In Proceedings of ANLP ’97, pages 64–71. Michael Taylor, Hugo Zaragoza, Nick Craswell, Stephen Robertson, and Chris Burges. 2006. Optimisation methods for ranking functions with multiple parameters. In Proceedings of CIKM ’06, pages 585–593. Andrew Turpin and Alistair Moffat. 1999. Statistical phrases for vector-space information retrieval. In Proceedings of SIGIR ’99, pages 309–310. C. J. van Rijsbergen. 1977. A theoretical basis for the use of co-occurrence data in information retrieval. Journal of Documentation, 33(2):106–119. S. K. M. Wong, Wojciech Ziarko, and Patrick C. N. Wong. 1985. Generalized vector spaces model in information retrieval. In Proceedings of SIGIR ’85, pages 18–25. Chengxiang Zhai and John Lafferty. 2001. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR ’01, pages 334–342. Chengxiang Zhai. 1997. Fast statistical parsing of noun phrases for document indexing. In Proceedings of ANLP ’97, pages 312–319. 1056
2009
118
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1057–1065, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Generative Blog Post Retrieval Model that Uses Query Expansion based on External Collections Wouter Weerkamp [email protected] Krisztian Balog [email protected] ISLA, University of Amsterdam Maarten de Rijke [email protected] Abstract User generated content is characterized by short, noisy documents, with many spelling errors and unexpected language usage. To bridge the vocabulary gap between the user’s information need and documents in a specific user generated content environment, the blogosphere, we apply a form of query expansion, i.e., adding and reweighing query terms. Since the blogosphere is noisy, query expansion on the collection itself is rarely effective but external, edited collections are more suitable. We propose a generative model for expanding queries using external collections in which dependencies between queries, documents, and expansion documents are explicitly modeled. Different instantiations of our model are discussed and make different (in)dependence assumptions. Results using two external collections (news and Wikipedia) show that external expansion for retrieval of user generated content is effective; besides, conditioning the external collection on the query is very beneficial, and making candidate expansion terms dependent on just the document seems sufficient. 1 Introduction One of the grand challenges in information retrieval is to bridge the vocabulary gap between a user and her information need on the one hand and the relevant documents on the other (Baeza-Yates and Ribeiro-Neto, 1999). In the setting of blogs or other types of user generated content, bridging this gap becomes even more challenging. This has several causes: (i) the spelling errors, unusual, creative or unfocused language usage resulting from the lack of top-down rules and editors in the content creation process, and (ii) the (often) limited length of user generated documents. Query expansion, i.e., modifying the query by adding and reweighing terms, is an often used technique to bridge the vocabulary gap. In general, query expansion helps more queries than it hurts (Balog et al., 2008b; Manning et al., 2008). However, when working with user generated content, expanding a query with terms taken from the very corpus in which one is searching tends to be less effective (Arguello et al., 2008a; Weerkamp and de Rijke, 2008b)—topic drift is a frequent phenomenon here. To be able to arrive at a richer representation of the user’s information need, while avoiding topic drift resulting from query expansion against user generated content, various authors have proposed to expand the query against an external corpus, i.e., a corpus different from the target (user generated) corpus from which documents need to be retrieved. Our aim in this paper is to define and evaluate generative models for expanding queries using external collections. We propose a retrieval framework in which dependencies between queries, documents, and expansion documents are explicitly modeled. We instantiate the framework in multiple ways by making different (in)dependence assumptions. As one of the instantiations we obtain the mixture of relevance models originally proposed by Diaz and Metzler (2006). We address the following research questions: (i) Can we effectively apply external expansion in the retrieval of user generated content? 
(ii) Does conditioning the external collection on the query help improve retrieval performance? (iii) Can we obtain a good estimate of this query-dependent collection probability? (iv) Which of the collection, the query, or the document should the selection of an expansion term be dependent on? In other words, what are the strongest simplifications in terms of conditional independencies between variables that can be assumed, without hurting performance? (v) Do our models show similar behavior across topics or do we observe strong per-topic 1057 differences between models? The remainder of this paper is organized as follows. We discuss previous work related to query expansion and external sources in §2. Next, we introduce our retrieval framework (§3) and continue with our main contribution, external expansion models, in §4. §5 details how the components of the model can be estimated. We put our models to the test, using the experimental setup discussed in §6, and report on results in §7. We discuss our results (§8) and conclude in §9. 2 Related Work Related work comes in two main flavors: (i) query modeling in general, and (ii) query expansion using external sources (external expansion). We start by shortly introducing the general ideas behind query modeling, and continue with a quick overview of work related to external expansion. 2.1 Query Modeling Query modeling, i.e., transformations of simple keyword queries into more detailed representations of the user’s information need (e.g., by assigning (different) weights to terms, expanding the query, or using phrases), is often used to bridge the vocabulary gap between the query and the document collection. Many query expansion techniques have been proposed, and they mostly fall into two categories, i.e., global analysis and local analysis. The idea of global analysis is to expand the query using global collection statistics based, for instance, on a co-occurrence analysis of the entire collection. Thesaurus- and dictionary-based expansion as, e.g., in Qiu and Frei (1993), also provide examples of the global approach. Our focus in this paper is on local approaches to query expansion, that use the top retrieved documents as examples from which to select terms to improve the retrieval performance (Rocchio, 1971). In the setting of language modeling approaches to query expansion, the local analysis idea has been instantiated by estimating additional query language models (Lafferty and Zhai, 2003; Tao and Zhai, 2006) or relevance models (Lavrenko and Croft, 2001) from a set of feedback documents. Yan and Hauptmann (2007) explore query expansion in a multimedia setting. Balog et al. (2008b) compare methods for sampling expansion terms to support query-dependent and query-independent query expansion; the latter is motivated by the wish to increase “aspect recall” and attempts to uncover aspects of the information need not captured by the query. Kurland et al. (2005) also try to uncover multiple aspects of a query, and to that they provide an iterative “pseudo-query” generation technique, using cluster-based language models. The notion of “aspect recall” is mentioned in (Buckley, 2004; Harman and Buckley, 2004) and identified as one of the main reasons of failure of the current information retrieval systems. Even though we acknowledge the possibilities of our approach in improving aspect recall, by introducing aspects mainly covered by the external collection being used, we are currently unable to test this assumption. 
2.2 External Expansion The use of external collections for query expansion has a long history, see, e.g., (Kwok et al., 2001; Sakai, 2002). Diaz and Metzler (2006) were the first to give a systematic account of query expansion using an external corpus in a language modeling setting, to improve the estimation of relevance models. As will become clear in §4, Diaz and Metzler’s approach is an instantiation of our general model for external expansion. Typical query expansion techniques, such as pseudo-relevance feedback, using a blog or blog post corpus do not provide significant performance improvements and often dramatically hurt performance. For this reason, query expansion using external corpora has been a popular technique at the TREC Blog track (Ounis et al., 2007). For blog post retrieval, several TREC participants have experimented with expansion against external corpora, usually a news corpus, Wikipedia, the web, or a mixture of these (Zhang and Yu, 2007; Java et al., 2007; Ernsting et al., 2008). For the blog finding task introduced in 2007, TREC participants again used expansion against an external corpus, usually Wikipedia (Elsas et al., 2008a; Ernsting et al., 2008; Balog et al., 2008a; Fautsch and Savoy, 2008; Arguello et al., 2008b). The motivation underlying most of these approaches is to improve the estimation of the query representation, often trying to make up for the unedited nature of the corpus from which posts or blogs need to be retrieved. Elsas et al. (2008b) go a step further and develop a query expansion technique using the links in Wikipedia. Finally, Weerkamp and de Rijke (2008b) study 1058 external expansion in the setting of blog retrieval to uncover additional perspectives of a given topic. We are driven by the same motivation, but where they considered rank-based result combinations and simple mixtures of query models, we take a more principled and structured approach, and develop four versions of a generative model for query expansion using external collections. 3 Retrieval Framework We work in the setting of generative language models. Here, one usually assumes that a document’s relevance is correlated with query likelihood (Ponte and Croft, 1998; Miller et al., 1999; Hiemstra, 2001). Within the language modeling approach, one builds a language model from each document, and ranks documents based on the probability of the document model generating the query. The particulars of the language modeling approach have been discussed extensively in the literature (see, e.g., Balog et al. (2008b)) and will not be repeated here. Our final formula for ranking documents given a query is based on Eq. 1: log P(D|Q) ∝ log P(D) + X t∈Q P(t|θQ) log P(t|θD) (1) Here, we see the prior probability of a document being relevant, P(D) (which is independent of the query Q), the probability of a term t for a given query model, θQ, and the probability of observing the term t given the document model, θD. Our main interest lies in in obtaining a better estimate of P(t|θQ). To this end, we take the query model to be a linear combination of the maximumlikelihood query estimate P(t|Q) and an expanded query model P(t| ˆQ): P(t|θQ) = λQ · P(t|Q) + (1 −λQ) · P(t| ˆQ) (2) In the next section we introduce our models for estimating p(t| ˆQ), i.e., query expansion using (multiple) external collections. 4 Query Modeling Approach Our goal is to build an expanded query model that combines evidence from multiple external collections. 
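All of the models below feed into the ranking of Eq. 1 through the query model of Eq. 2; as a point of reference, a minimal sketch of that ranking is given here. The probability functions are assumed, pre-smoothed estimates (e.g., Dirichlet-smoothed document models) and a document prior; they are placeholders, not an existing API.

```python
import math

# Sketch of the ranking in Eq. 1: a document is scored by
# log P(D) + sum_t P(t|theta_Q) * log P(t|theta_D). The query model
# q_model maps terms to P(t|theta_Q) (built as in Eq. 2); p_doc and p_prior
# are assumed smoothed estimates of P(t|theta_D) and the document prior.

def log_score(doc, q_model, p_doc, p_prior):
    s = math.log(p_prior(doc))
    for t, p_q in q_model.items():
        s += p_q * math.log(p_doc(t, doc))
    return s

def rank(docs, q_model, p_doc, p_prior):
    return sorted(docs, key=lambda d: log_score(d, q_model, p_doc, p_prior),
                  reverse=True)
```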
We estimate the probability of a term t in the expanded query ˆQ using a mixture of collectionspecific query expansion models. P(t| ˆQ) = P c∈C P(t|Q, c) · P(c|Q), (3) where C is the set of document collections. To estimate the probability of a term given the query and the collection, P(t|Q, c), we compute the expectation over the documents in the collection c: P(t|Q, c) = X D∈c P(t|Q, c, D) · P(D|Q, c). (4) Substituting Eq. 4 back into Eq. 3 we get P(t| ˆQ) = (5) X c∈C P(c|Q) · X D∈c P(t|Q, c, D) · P(D|Q, c). This, then, is our query model for combining evidence from multiple sources. The following subsections introduce four instances of the general external expansion model (EEM) we proposed in this section; each of the instances differ in independence assumptions: • EEM1 (§4.1) assumes collection c to be independent of query Q and document D jointly, and document D individually, but keeps the dependence on Q and of t and Q on D. • EEM2 (§4.2) assumes that term t and collection c are conditionally independent, given document D and query Q; moreover, D and Q are independent given c but the dependence of t and Q on D is kept. • EEM3 (§4.3) assumes that expansion term t and original query Q are independent given document D. • On top of EEM3, EEM4 (§4.4) makes one more assumption, viz. the dependence of collection c on query Q. 4.1 External Expansion Model 1 (EEM1) Under this model we assume collection c to be independent of query Q and document D jointly, and document D individually, but keep the dependence on Q. We rewrite P(t|Q, c) as follows: P(t|Q, c) = X D∈c P(t|Q, D) · P(t|c) · P(D|Q) = X D∈c P(t, Q|D) P(Q|D) · P(t|c) · P(Q|D)P(D) P(Q) ∝ X D∈c P(t, Q|D) · P(t|c) · P(D) (6) Note that we drop P(Q) from the equation as it does not influence the ranking of terms for a given 1059 query Q. Further, P(D) is the prior probability of a document, regardless of the collection it appears in (as we assumed D to be independent of c). We assume P(D) to be uniform, leading to the following equation for ranking expansion terms: P(t| ˆQ) ∝ X c∈C P(t|c) · P(c|Q) · X D∈c P(t, Q|D). (7) In this model we capture the probability of the expansion term given the collection (P(t|c)). This allows us to assign less weight to terms that are less meaningful in the external collection. 4.2 External Expansion Model 2 (EEM2) Here, we assume that term t and collection c are conditionally independent, given document D and query Q: P(t|Q, c, D) = P(t|Q, D). This leaves us with the following: P(t|Q, D) = P(t, Q, D) P(Q, D) = P(t, Q|D) · P(D) P(Q|D) · P(D) = P(t, Q|D) P(Q|D) (8) Next, we assume document D and query Q to be independent given collection c: P(D|Q, c) = P(D|c). Substituting our choices into Eq. 4 gives us our second way of estimating P(t|Q, c): P(t|Q, c) = X D∈c P(t, Q|D) P(Q|D) · P(D|c) (9) Finally, we put our choices so far together, and implement Eq. 9 in Eq. 3, yielding our final term ranking equation: P(t| ˆQ) ∝ (10) X c∈C P(c|Q) · X D∈c P(t, Q|D) P(Q|D) · P(D|c). 4.3 External Expansion Model 3 (EEM3) Here we assume that expansion term t and both collection c and original query Q are independent given document D. Hence, we set P(t|Q, c, D) = P(t|D). Then P(t|Q, c) = X D∈c P(t|D) · P(D|Q, c) = X D∈c P(t|D) · P(Q|D, c) · P(D|c) P(Q|c) ∝ X D∈c P(t|D) · P(Q|D, c) · P(D|c) We dropped P(Q|c) as it does not influence the ranking of terms for a given query Q. Assuming independence of Q and c given D, we obtain P(t|Q, c) ∝ X D∈c P(D|c) · P(t|D) · P(Q|D) so P(t| ˆQ) ∝ X c∈C P(c|Q) · X D∈c P(D|c) · P(t|D) · P(Q|D). 
We follow Lavrenko and Croft (2001) and assume that P(D|c) = 1 |Rc|, the size of the set of top ranked documents in c (denoted by Rc), finally arriving at P(t| ˆQ) ∝ X c∈C P(c|Q) |Rc| · X D∈Rc P(t|D) · P(Q|D). (11) 4.4 External Expansion Model 4 (EEM4) In this fourth model we start from EEM3 and drop the assumption that c depends on the query Q, i.e., P(c|Q) = P(c), obtaining P(t| ˆQ) ∝ X c∈C P(c) |Rc| · X D∈Rc P(t|D) · P(Q|D). (12) Eq. 12 is in fact the “mixture of relevance models” external expansion model proposed by Diaz and Metzler (2006). The fundamental difference between EEM1, EEM2, EEM3 on the one hand and EEM4 on the other is that EEM4 assumes independence between c and Q (thus P(c|Q) is set to P(c)). That is, the importance of the external collection is independent of the query. How reasonable is this choice? Mishne and de Rijke (2006) examined queries submitted to a blog search engine and found many to be either news-related context queries (that aim to track mentions of a named entity) or concept queries (that seek posts about a general topic). For context queries such as cheney hunting (TREC topic 867) a news collection is likely to offer different (relevant) aspects of the topic, whereas for a concept query such as jihad (TREC topic 878) a knowledge source such as Wikipedia seems an appropriate source of terms that capture aspects of the topic. These observations suggest the collection should depend on the query. 1060 EEM3 and EEM4 assume that expansion term t and original query Q are independent given document D. This may or may not be too strong an assumption. Models EEM1 and EEM2 also make independence assumptions, but weaker ones. 5 Estimating Components The models introduced above offer us several choices in estimating the main components. Below we detail how we estimate (i) P(c|Q), the importance of a collection for a given query, (ii) P(t|c), the unimportance of a term for an external collection, (iii) P(Q|D), the relevance of a document in the external collection for a given query, and (iv) P(t, Q|D), the likelihood of a term co-occurring with the query, given a document. 5.1 Importance of a Collection Represented as P(c|Q) in our models, the importance of an external collection depends on the query; how we can estimate this term? We consider three alternatives, in terms of (i) query clarity, (ii) coherence and (iii) query-likelihood, using documents in that collection. First, query clarity measures the structure of a set of documents based on the assumption that a small number of topical terms will have unusually large probabilities (Cronen-Townsend et al., 2002). We compute the query clarity of the top ranked documents in a given collection c: clarity(Q, c) = X t P(t|Q) · log P(t|Q) P(t|Rc) Finally, we normalize clarity(Q, c) over all collections, and set P(c|Q) ∝ clarity(Q,c) P c′∈C clarity(Q,c′). Second, a measure called “coherence score” is defined by He et al. (2008). It is the fraction of “coherent” pairs of documents in a given set of documents, where a coherent document pair is one whose similarity exceeds a threshold. The coherence of the top ranked documents Rc is: Co(Rc) = P i̸=j∈{1,...,|Rc|} δ(di, dj) |Rc|(|Rc| −1) , where δ(di, dj) is 1 in case of a similar pair (computed using cosine similarity), and 0 otherwise. Finally, we set P(c|Q) ∝ Co(Rc) P c′∈C Co(Rc′). Third, we compute the conditional probability of the collection using Bayes’ theorem. 
We observe that P(c|Q) ∝ P(Q|c), omitting P(Q) as it will not influence the ranking, and P(c), which we take to be uniform. Further, for the sake of simplicity, we assume that all documents within c are equally important. Then, P(Q|c) is estimated as

P(Q|c) = \frac{1}{|c|} \sum_{D \in c} P(Q|D),   (13)

where P(Q|D) is estimated as described in §5.3, and |c| is the number of documents in c.

5.2 Unimportance of a Term

Rather than simply estimating the importance of a term for a given query, we also estimate the unimportance of a term for a collection; i.e., we assign lower probability to terms that are common in that collection. Here, we take a straightforward approach and define

P(t|c) = 1 - \frac{n(t,c)}{\sum_{t'} n(t',c)}.

5.3 Likelihood of a Query

We need an estimate of the probability of a query given a document, P(Q|D). We obtain it by using Hauff et al. (2008)'s refinement of the term dependencies in the query as proposed by Metzler and Croft (2005).

5.4 Likelihood of a Term

Estimating the likelihood of observing both the query and a term for a given document, P(t, Q|D), is done in a similar way to estimating P(Q|D), but now for t, Q instead of Q.

6 Experimental Setup

In this section we detail our experimental setup: the (external) collections we use, the topic sets and relevance judgements available, and the significance testing we perform.

6.1 Collections and Topics

We make use of three collections: (i) a collection of user-generated documents (blog posts), (ii) a news collection, and (iii) an online knowledge source. The blog post collection is the TREC Blog06 collection (Ounis et al., 2007), which contains 3.2 million blog posts from 100,000 blogs monitored for a period of 11 weeks, from December 2005 to March 2006; all posts from this period have been stored as HTML files. Our news collection is the AQUAINT-2 collection (AQUAINT-2, 2007), from which we selected news articles that appeared in the period covered by the blog collection, leaving us with about 150,000 news articles. Finally, we use a dump of the English Wikipedia from August 2007 as our online knowledge source; this dump contains just over 3.8 million encyclopedia articles.

During 2006–2008, the TREC Blog06 collection has been used for the topical blog post retrieval task (Weerkamp and de Rijke, 2008a) at the TREC Blog track (Ounis et al., 2007): to retrieve posts about a given topic. For every year, 50 topics were developed, consisting of a title field, a description, and a narrative; we use only the title field and ignore the other available information. For all 150 topics relevance judgements are available.

6.2 Metrics and Significance

We report on the standard IR metrics Mean Average Precision (MAP), precision at 5 and 10 documents (P5, P10), and the Mean Reciprocal Rank (MRR). To determine whether or not differences between runs are significant, we use a two-tailed paired t-test, and report on significant differences for α = .05 (△ and ▽) and α = .01 (▲ and ▼).

7 Results

We first discuss the parameter tuning for our four EEM models in Section 7.1. We then report on the results of applying these settings to obtain our retrieval results on the blog post retrieval task; Section 7.2 reports on these results. We follow with a closer look in Section 8.

7.1 Parameters

Our model has one explicit parameter and one more or less implicit parameter. The obvious parameter is λQ, used in Eq. 2, but the number of terms to include in the final query model also makes a difference.
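To make the role of these two parameters concrete, the sketch below shows one way the final query model could be assembled: mix the per-collection expansion distributions according to P(c|Q) (Eq. 3), keep only the top-k expansion terms, and interpolate with the original query using λQ. This is a minimal sketch that assumes the usual linear interpolation form for Eq. 2; the function names and example weights are illustrative, not the authors' implementation.

```python
# Illustrative sketch: collection-weighted expansion (Eq. 3) plus interpolation
# with the original query. The linear interpolation form assumed for Eq. 2 and
# all names below are assumptions for illustration only.
from collections import defaultdict


def expanded_query_model(per_collection_term_probs, collection_weights):
    """P(t|Q^) = sum_c P(c|Q) * P(t|Q,c), cf. Eq. 3."""
    p_t = defaultdict(float)
    for c, term_probs in per_collection_term_probs.items():
        w_c = collection_weights.get(c, 0.0)   # P(c|Q)
        for t, p in term_probs.items():        # P(t|Q,c)
            p_t[t] += w_c * p
    return dict(p_t)


def final_query_model(original_query_probs, expansion_probs, lambda_q=0.6, top_k=30):
    """Keep the top_k expansion terms, renormalize, and mix with weight lambda_q."""
    top = dict(sorted(expansion_probs.items(), key=lambda kv: -kv[1])[:top_k])
    z = sum(top.values()) or 1.0
    top = {t: p / z for t, p in top.items()}

    model = defaultdict(float)
    for t, p in original_query_probs.items():
        model[t] += lambda_q * p               # original query part
    for t, p in top.items():
        model[t] += (1.0 - lambda_q) * p       # expanded part
    return dict(model)


# Hypothetical usage with the settings reported below for EEM1:
# expansion = expanded_query_model(per_collection_term_probs,
#                                  {"news": 0.8, "wikipedia": 0.2})
# q_model = final_query_model({"cheney": 0.5, "hunting": 0.5}, expansion,
#                             lambda_q=0.6, top_k=30)
```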
For training of the parameters we use two TREC topic sets to train and test on the held-out topic set. From the training we conclude that the following parameter settings work best across all topics: (EEM1) λQ = 0.6, 30 terms; (EEM2) λQ = 0.6, 40 terms; (EEM3 and EEM4) λQ = 0.5, 30 terms. In the remainder of this section, results for our models are reported using these parameter settings. 7.2 Retrieval Results As a baseline we use an approach without external query expansion, viz. Eq. 1. In Table 1 we list the results on the topical blog post finding task model P(c|Q) MAP P5 P10 MRR Baseline 0.3815 0.6813 0.6760 0.7643 EEM1 uniform 0.3976▲0.7213▲0.7080▲0.7998 0.8N/0.2W 0.3992 0.7227 0.7107 0.7988 coherence 0.3976 0.7187 0.7060 0.7976 query clarity 0.3970 0.7187 0.7093 0.7929 P(Q|c) 0.3983 0.7267 0.7093 0.7951 oracle 0.4126▲0.7387△0.7320▲0.8252△ EEM2 uniform 0.3885▲0.7053△0.6967△0.7706 0.9N/0.1W 0.3895 0.7133 0.6953 0.7736 coherence 0.3890 0.7093 0.7020 0.7740 query clarity 0.3872 0.7067 0.6953 0.7745 P(Q|c) 0.3883 0.7107 0.6967 0.7717 oracle 0.3995▲0.7253▲0.7167▲0.7856 EEM3 uniform 0.4048▲0.7187△0.7207▲0.8261▲ coherence 0.4058 0.7253 0.7187 0.8306 query clarity 0.4033 0.7253 0.7173 0.8228 P(Q|c) 0.3998 0.7253 0.7100 0.8133 oracle 0.4194▲0.7493▲0.7353▲0.8413 EEM4 0.5N/0.5W 0.4048▲0.7187△0.7207▲0.8261▲ Table 1: Results for all model instances on all topics (i.e., 2006, 2007, and 2008); aN/bW stands for the weights assigned to the news (a) and Wikipedia corpora (b). Significance is tested between (i) each uniform run and the baseline, and (ii) each other setting and its uniform counterpart. of (i) our baseline, and (ii) our model (instantiated by EEM1, EEM2, EEM3, and EEM4). For all models that contain the query-dependent collection probability (P(c|Q)) we report on multiple ways of estimating this: (i) uniform, (ii) best global mixture (independent of the query, obtained by a sweep over collection probabilities), (iii) coherence, (iv) query clarity, (v) P(Q|c), and (vi) using an oracle for which optimal settings were obtained by the same sweep as (ii). Note that methods (i) and (ii) are not query dependent; for EEM3 we do not mention (ii) since it equals (i). Finally, for EEM4 we only have a query-independent component, P(c): the best performance here is obtained using equal weights for both collections. A few observations. First, our baseline performs well above the median for all three years (2006–2008). Second, in each of its four instances our model for query expansion against external corpora improves over the baseline. Third, we see that it is safe to assume that a term is dependent only on the document from which it is sampled (EEM1 vs. EEM2 vs. EEM3). EEM3 makes the strongest assumptions about terms in this respect, yet it performs best. Fourth, capturing the dependence of the collection on the query helps, as we can see from the significant improvements of the “oracle” runs over their “uniform” counterparts. However, we do not have a good method yet for automatically estimating this dependence, 1062 as is clear from the insignificant differences between the runs labeled “coherence,” “query clarity,” “P(Q|c)” and the run labeled “uniform.” 8 Discussion Rather than providing a pairwise comparison of all runs listed in the previous section, we consider two pairwise comparisons—between (an instantion of) our model and the baseline, and between two instantiations of our model—and highlight phenomena that we also observed in other pairwise comparisons. 
Based on this discussion, we also consider a combination of approaches. 8.1 EEM1 vs. the Baseline We zoom in on EEM1 and make a per-topic comparison against the baseline. First of all, we observe behavior typical for all query expansion methods: some topics are helped, some are not affected, and some are hurt by the use of EEM1; see Figure 1, top row. Specifically, 27 topics show a slight drop in AP (maximum drop is 0.043 AP), 3 topics do not change (as no expansion terms are identified) and the remainder of the topics (120) improve in AP. The maximum increase in AP is 0.5231 (+304%) for topic 949 (ford bell); Topics 887 (world trade organization, +87%), 1032 (I walk the line, +63%), 865 (basque, +53%), and 1014 (tax break for hybrid automobiles, +50%) also show large improvements. The largest drop (20% AP) is for topic 1043 (a million little pieces, a controversial memoir that was in the news during the time coverd by the blog crawl); because we do not do phrase or entity recognition in the query, but apply stopword removal, it is reduced to million pieces which introduced a lot of topic drift. Let us examine the “collection preference” of topics: 35 had a clear preference for Wikipedia, 32 topics for news, and the remainder (83 topics) required a mixture of both collections. First, we look at topics that require equal weights for both collections; topic 880 (natalie portman, +21% AP) concerns a celebrity with a large Wikipedia biography, as well as news coverage due to new movie releases during the period covered by the blog crawl. Topic 923 (challenger, +7% AP) asks for information on the space shuttle that exploded during its launch; the 20th anniversary of this event was commemorated during the period covered by the crawl and therefore it is newsworthy as well as present in Wikipedia (due to its historic impact). Finally, topic 869 (muhammad cartoon, +20% AP) deals with the controversy surrounding the publication of cartoons featuring Muhammad: besides its obvious news impact, this event is extensively discussed in multiple Wikipedia articles. As to topics that have a preference for Wikipedia, we see some very general ones (as is to be expected): Topic 942 (lawful access, +30% AP) on the government accessing personal files; Topic 1011 (chipotle restaurant, +13% AP) on information concerning the Chipotle restaurants; Topic 938 (plug awards, +21% AP) talks about an award show. Although this last topic could be expected to have a clear preference for expansion terms from the news corpus, the awards were not handed out during the period covered by the news collection and, hence, full weight is given to Wikipedia. At the other end of the scale, topics that show a preference for the news collection are topic 1042 (david irving, +28% AP), who was on trial during the period of the crawl for denying the Holocaust and received a lot of media attention. Further examples include Topic 906 (davos, +20% AP), which asks for information on the annual world economic forum meeting in Davos in January, something typically related to news, and topic 949 (ford bell, +304% AP), which seeks information on Ford Bell, Senate candidate at the start of 2006. 8.2 EEM1 vs. EEM3 Next we turn to a comparison between EEM1 and EEM3. Theoretically, the main difference between these two instantiations of our general model is that EEM3 makes much stronger simplifying indepence assumptions than EEM1. 
In Figure 1 we compare the two, not only against the baseline, but, more interestingly, also in terms of the difference in performance brought about by switching from uniform estimation of P(c|Q) to oracle estimation. Most topics gain in AP when going from the uniform distribution to the oracle setting. This happens for both models, EEM1 and EEM3, leading to less topics decreasing in AP over the baseline (the right part of the plots) and more topics increasing (the left part). A second observation is that both gains and losses are higher for EEM3 than for EEM1. Zooming in on the differences between EEM1 and EEM3, we compare the two in the same way, now using EEM3 as “baseline” (Figure 2). We observe that EEM3 performs better than EEM1 in 87 1063 -0.4 -0.2 0 0.2 0.4 AP difference topics -0.4 -0.2 0 0.2 0.4 AP difference topics -0.4 -0.2 0 0.2 0.4 AP difference topics -0.4 -0.2 0 0.2 0.4 AP difference topics Figure 1: Per-topic AP differences between the baseline and (Top): EEM1 and (Bottom): EEM3, for (Left): uniform P(c|Q) and (Right): oracle. -0.4 -0.2 0 0.2 0.4 AP difference topics Figure 2: Per-topic AP differences between EEM3 and EEM1 in the oracle setting. cases, while EEM1 performs better for 60 topics. Topics 1041 (federal shield law, 47% AP), 1028 (oregon death with dignity act, 32% AP), and 1032 (I walk the line, 32% AP) have the highest difference in favor of EEM3; Topics 877 (sonic food industry, 139% AP), 1013 (iceland european union, 25% AP), and 1002 (wikipedia primary source, 23% AP) are helped most by EEM1. Overall, EEM3 performs significantly better than EEM1 in terms of MAP (for α = .05), but not in terms of the early precision metrics (P5, P10, and MRR). 8.3 Combining Our Approaches One observation to come out of §8.1 and 8.2 is that different topics prefer not only different external expansion corpora but also different external expansion methods. To examine this phenomemon, we created an articificial run by taking, for every topic, the best performing model (with settings optimized for the topic). Twelve topics preferred the baseline, 37 EEM1, 20 EEM2, and 81 EEM3. The articifical run produced the following results: MAP 0.4280, P5 0.7600, P10 0.7480, and MRR 0.8452; the differences in MAP and P10 between this run and EEM3 are significant for α = .01. We leave it as future work to (learn to) predict for a given topic, which approach to use, thus refining ongoing work on query difficulty prediction. 9 Conclusions We explored the use of external corpora for query expansion in a user generated content setting. We introduced a general external expansion model, which offers various modeling choices, and instantiated it based on different (in)dependence assumptions, leaving us with four instances. Query expansion using external collection is effective for retrieval in a user generated content setting. Furthermore, conditioning the collection on the query is beneficial for retrieval performance, but estimating this component remains difficult. Dropping the dependencies between terms and collection and terms and query leads to better performance. Finally, the best model is topicdependent: constructing an artificial run based on the best model per topic achieves significant better results than any of the individual models. Future work focuses on two themes: (i) topicdependent model selection and (ii) improved estimates of components. As to (i), we first want to determine whether a query should be expanded, and next select the appropriate expansion model. 
For (ii), we need better estimates of P(Q|c); one aspect that could be included is taking P(c) into account in the query-likelihood estimate of P(Q|c). One can make this dependent on the task at hand (blog post retrieval vs. blog feed search). Another possibility is to look at solutions used in distributed IR. Finally, we can also include the estimation of P(D|c), the importance of a document in the collection. Acknowledgements We thank our reviewers for their valuable feedback. This research is supported by the DuOMAn project carried out within the STEVIN programme which is funded by the Dutch and Flemish Governments (http://www.stevin-tst.org) under project number STE-09-12, and by the Netherlands Organisation for Scientific Research (NWO) under project numbers 017.001.190, 640.001.501, 640.002.501, 612.066.512, 612.061.814, 612.061.815, 640.004.802. 1064 References AQUAINT-2 (2007). URL: http://trec.nist.gov/ data/qa/2007 qadata/qa.07.guidelines. html#documents. Arguello, J., Elsas, J., Callan, J., and Carbonell, J. (2008a). Document representation and query expansion models for blog recommendation. In Proceedings of ICWSM 2008. Arguello, J., Elsas, J. L., Callan, J., and Carbonell, J. G. (2008b). Document representation and query expansion models for blog recommendation. In Proc. of the 2nd Intl. Conf. on Weblogs and Social Media (ICWSM). Baeza-Yates, R. and Ribeiro-Neto, B. (1999). Modern Information Retrieval. ACM. Balog, K., Meij, E., Weerkamp, W., He, J., and de Rijke, M. (2008a). The University of Amsterdam at TREC 2008: Blog, Enterprise, and Relevance Feedback. In TREC 2008 Working Notes. Balog, K., Weerkamp, W., and de Rijke, M. (2008b). A few examples go a long way: constructing query models from elaborate query formulations. In SIGIR ’08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 371–378, New York, NY, USA. ACM. Buckley, C. (2004). Why current IR engines fail. In SIGIR ’04, pages 584–585. Cronen-Townsend, S., Zhou, Y., and Croft, W. B. (2002). Predicting query performance. In SIGIR02, pages 299–306. Diaz, F. and Metzler, D. (2006). Improving the estimation of relevance models using large external corpora. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 154–161, New York, NY, USA. ACM. Elsas, J., Arguello, J., Callan, J., and Carbonell, J. (2008a). Retrieval and feedback models for blog distillation. In The Sixteenth Text REtrieval Conference (TREC 2007) Proceedings. Elsas, J. L., Arguello, J., Callan, J., and Carbonell, J. G. (2008b). Retrieval and feedback models for blog feed search. In SIGIR ’08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 347–354, New York, NY, USA. ACM. Ernsting, B., Weerkamp, W., and de Rijke, M. (2008). Language modeling approaches to blog post and feed finding. In The Sixteenth Text REtrieval Conference (TREC 2007) Proceedings. Fautsch, C. and Savoy, J. (2008). UniNE at TREC 2008: Fact and Opinion Retrieval in the Blogsphere. In TREC 2008 Working Notes. Harman, D. and Buckley, C. (2004). The NRRC reliable information access (RIA) workshop. In SIGIR ’04, pages 528–529. Hauff, C., Murdock, V., and Baeza-Yates, R. (2008). Improved query difficulty prediction for the web. In CIKM ’08: Proceedings of the seventeenth ACM conference on Conference on information and knowledge management, pages 439–448. 
He, J., Larson, M., and de Rijke, M. (2008). Using coherence-based measures to predict query difficulty. In 30th European Conference on Information Retrieval (ECIR 2008), page 689694. Springer, Springer. Hiemstra, D. (2001). Using Language Models for Information Retrieval. PhD thesis, University of Twente. Java, A., Kolari, P., Finin, T., Joshi, A., and Martineau, J. (2007). The blogvox opinion retrieval system. In The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings. Kurland, O., Lee, L., and Domshlak, C. (2005). Better than the real thing?: Iterative pseudo-query processing using cluster-based language models. In SIGIR ’05, pages 19– 26. Kwok, K. L., Grunfeld, L., Dinstl, N., and Chan, M. (2001). TREC-9 cross language, web and question-answering track experiments using PIRCS. In TREC-9 Proceedings. Lafferty, J. and Zhai, C. (2003). Probabilistic relevance models based on document and query generation. In Language Modeling for Information Retrieval, Kluwer International Series on Information Retrieval. Springer. Lavrenko, V. and Croft, W. B. (2001). Relevance based language models. In SIGIR ’01, pages 120–127. Manning, C. D., Raghavan, P., and Sch¨utze, H. (2008). Introduction to Information Retrieval. Cambridge University Press. Metzler, D. and Croft, W. B. (2005). A markov random field model for term dependencies. In SIGIR ’05, pages 472– 479, New York, NY, USA. ACM. Miller, D., Leek, T., and Schwartz, R. (1999). A hidden Markov model information retrieval system. In SIGIR ’99, pages 214–221. Mishne, G. and de Rijke, M. (2006). A study of blog search. In Lalmas, M., MacFarlane, A., R¨uger, S., Tombros, A., Tsikrika, T., and Yavlinsky, A., editors, Advances in Information Retrieval: Proceedings 28th European Conference on IR Research (ECIR 2006), volume 3936 of LNCS, pages 289–301. Springer. Ounis, I., Macdonald, C., de Rijke, M., Mishne, G., and Soboroff, I. (2007). Overview of the TREC 2006 Blog Track. In The Fifteenth Text Retrieval Conference (TREC 2006). NIST. Ponte, J. M. and Croft, W. B. (1998). A language modeling approach to information retrieval. In SIGIR ’98, pages 275–281. Qiu, Y. and Frei, H.-P. (1993). Concept based query expansion. In SIGIR ’93, pages 160–169. Rocchio, J. (1971). Relevance feedback in information retrieval. In The SMART Retrieval System: Experiments in Automatic Document Processing. Prentice Hall. Sakai, T. (2002). The use of external text data in crosslanguage information retrieval based on machine translation. In Proceedings IEEE SMC 2002. Tao, T. and Zhai, C. (2006). Regularized estimation of mixture models for robust pseudo-relevance feedback. In SIGIR ’06: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 162–169, New York, NY, USA. ACM. Weerkamp, W. and de Rijke, M. (2008a). Credibility improves topical blog post retrieval. In ACL-08: HLT, pages 923–931. Weerkamp, W. and de Rijke, M. (2008b). Looking at things differently: Exploring perspective recall for informal text retrieval. In 8th Dutch-Belgian Information Retrieval Workshop (DIR 2008), pages 93–100. Yan, R. and Hauptmann, A. (2007). Query expansion using probabilistic local feedback with application to multimedia retrieval. In CIKM ’07: Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 361–370, New York, NY, USA. ACM. Zhang, W. and Yu, C. (2007). UIC at TREC 2006 Blog Track. In The Fifteenth Text REtrieval Conference (TREC 2006) Proceedings. 1065
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 100–108, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling Daichi Mochihashi Takeshi Yamada Naonori Ueda NTT Communication Science Laboratories Hikaridai 2-4, Keihanna Science City, Kyoto, Japan {daichi,yamada,ueda}@cslab.kecl.ntt.co.jp Abstract In this paper, we propose a new Bayesian model for fully unsupervised word segmentation and an efficient blocked Gibbs sampler combined with dynamic programming for inference. Our model is a nested hierarchical Pitman-Yor language model, where Pitman-Yor spelling model is embedded in the word model. We confirmed that it significantly outperforms previous reported results in both phonetic transcripts and standard datasets for Chinese and Japanese word segmentation. Our model is also considered as a way to construct an accurate word n-gram language model directly from characters of arbitrary language, without any “word” indications. 1 Introduction “Word” is no trivial concept in many languages. Asian languages such as Chinese and Japanese have no explicit word boundaries, thus word segmentation is a crucial first step when processing them. Even in western languages, valid “words” are often not identical to space-separated tokens. For example, proper nouns such as “United Kingdom” or idiomatic phrases such as “with respect to” actually function as a single word, and we often condense them into the virtual words “UK” and “w.r.t.”. In order to extract “words” from text streams, unsupervised word segmentation is an important research area because the criteria for creating supervised training data could be arbitrary, and will be suboptimal for applications that rely on segmentations. It is particularly difficult to create “correct” training data for speech transcripts, colloquial texts, and classics where segmentations are often ambiguous, let alone is impossible for unknown languages whose properties computational linguists might seek to uncover. From a scientific point of view, it is also interesting because it can shed light on how children learn “words” without the explicitly given boundaries for every word, which is assumed by supervised learning approaches. Lately, model-based methods have been introduced for unsupervised segmentation, in particular those based on Dirichlet processes on words (Goldwater et al., 2006; Xu et al., 2008). This maximizes the probability of word segmentation w given a string s : ˆw = argmax w p(w|s) . (1) This approach often implicitly includes heuristic criteria proposed so far1, while having a clear statistical semantics to find the most probable word segmentation that will maximize the probability of the data, here the strings. However, they are still na¨ıve with respect to word spellings, and the inference is very slow owing to inefficient Gibbs sampling. Crucially, since they rely on sampling a word boundary between two neighboring words, they can leverage only up to bigram word dependencies. In this paper, we extend this work to propose a more efficient and accurate unsupervised word segmentation that will optimize the performance of the word n-gram Pitman-Yor (i.e. Bayesian Kneser-Ney) language model, with an accurate character ∞-gram Pitman-Yor spelling model embedded in word models. Furthermore, it can be viewed as a method for building a high-performance n-gram language model directly from character strings of arbitrary language. 
It is carefully smoothed and has no “unknown words” problem, resulting from its model structure. This paper is organized as follows. In Section 2, 1For instance, TANGO algorithm (Ando and Lee, 2003) essentially finds segments such that character n-gram probabilities are maximized blockwise, averaged over n. 100 (a) Generating n-gram distributions G hierarchically from the Pitman-Yor process. Here, n = 3. (b) Equivalent representation using a hierarchical Chinese Restaurant process. Each word in a training text is a “customer” shown in italic, and added to the leaf of its two words context. Figure 1: Hierarchical Pitman-Yor Language Model. we briefly describe a language model based on the Pitman-Yor process (Teh, 2006b), which is a generalization of the Dirichlet process used in previous research. By embedding a character n-gram in word n-gram from a Bayesian perspective, Section 3 introduces a novel language model for word segmentation, which we call the Nested PitmanYor language model. Section 4 describes an efficient blocked Gibbs sampler that leverages dynamic programming for inference. In Section 5 we describe experiments on the standard datasets in Chinese and Japanese in addition to English phonetic transcripts, and semi-supervised experiments are also explored. Section 6 is a discussion and Section 7 concludes the paper. 2 Pitman-Yor process and n-gram models To compute a probability p(w|s) in (1), we adopt a Bayesian language model lately proposed by (Teh, 2006b; Goldwater et al., 2005) based on the Pitman-Yor process, a generalization of the Dirichlet process. As we shall see, this is a Bayesian theory of the best-performing KneserNey smoothing of n-grams (Kneser and Ney, 1995), allowing an integrated modeling from a Bayesian perspective as persued in this paper. The Pitman-Yor (PY) process is a stochastic process that generates discrete probability distribution G that is similar to another distribution G0, called a base measure. It is written as G ∼PY(G0, d, θ) , (2) where d is a discount factor and θ controls how similar G is to G0 on average. Suppose we have a unigram word distribution G1 ={ p(·) } where · ranges over each word in the lexicon. The bigram distribution G2 = { p(·|v) } given a word v is different from G1, but will be similar to G1 especially for high frequency words. Therefore, we can generate G2 from a PY process of base measure G1, as G2 ∼PY(G1, d, θ). Similarly, trigram distribution G3 = { p(·|v′v) } given an additional word v′ is generated as G3 ∼ PY(G2, d, θ), and G1, G2, G3 will form a tree structure shown in Figure 1(a). In practice, we cannot observe G directly because it will be infinite dimensional distribution over the possible words, as we shall see in this paper. However, when we integrate out G it is known that Figure 1(a) can be represented by an equivalent hierarchical Chinese Restaurant Process (CRP) (Aldous, 1985) as in Figure 1(b). In this representation, each n-gram context h (including the null context ϵ for unigrams) is a Chinese restaurant whose customers are the n-gram counts c(w|h) seated over the tables 1 · · · thw. The seatings has been incrementally constructed by choosing the table k for each count in c(w|h) with probability proportional to ( chwk −d (k = 1, · · · , thw) θ + d·th· (k = new) , (3) where chwk is the number of customers seated at table k thus far and th· = P w thw is the total number of tables in h. 
When k = new is selected, thw is incremented, and this means that the count was actually generated from the shorter context h′. Therefore, in that case a proxy customer is sent to the parent restaurant and this process will recurse. For example, if we have a sentence “she will sing” in the training data for trigrams, we add each word “she” “will” “sing” “$” as a customer to its two preceding words context node, as described in Figure 1(b). Here, “$” is a special token representing a sentence boundary in language model101 ing (Brown et al., 1992). As a result, the n-gram probability of this hierarchical Pitman-Yor language model (HPYLM) is recursively computed as p(w|h) = c(w|h)−d·thw θ+c(h) + θ+d·th· θ+c(h) p(w|h′), (4) where p(w|h′) is the same probability using a (n−1)-gram context h′. When we set thw ≡1, (4) recovers a Kneser-Ney smoothing: thus a HPYLM is a Bayesian Kneser-Ney language model as well as an extension of the hierarchical Dirichlet Process (HDP) used in Goldwater et al. (2006). θ, d are hyperparameters that can be learned as Gamma and Beta posteriors, respectively, given the data. For details, see Teh (2006a). The inference of this model interleaves adding and removing a customer to optimize thw, d, and θ using MCMC. However, in our case “words” are not known a priori: the next section describes how to accomplish this by constructing a nested HPYLM of words and characters, with the associated inference algorithm. 3 Nested Pitman-Yor Language Model Thus far we have assumed that the unigram G1 is already given, but of course it should also be generated as G1 ∼PY(G0, d, θ). Here, a problem occurs: What should we use for G0, namely the prior probabilities over words2? If a lexicon is finite, we can use a uniform prior G0(w) = 1/|V | for every word w in lexicon V . However, with word segmentation every substring could be a word, thus the lexicon is not limited but will be countably infinite. Building an accurate G0 is crucial for word segmentation, since it determines how the possible words will look like. Previous work using a Dirichlet process used a relatively simple prior for G0, namely an uniform distribution over characters (Goldwater et al., 2006), or a prior solely dependent on word length with a Poisson distribution whose parameter is fixed by hand (Xu et al., 2008). In contrast, in this paper we use a simple but more elaborate model, that is, a character n-gram language model that also employs HPYLM. This is important because in English, for example, words are likely to end in ‘–tion’ and begin with 2Note that this is different from unigrams, which are posterior distribution given data. Figure 2: Chinese restaurant representation of our Nested Pitman-Yor Language Model (NPYLM). ‘re–’, but almost never end in ‘–tio’ nor begin with ‘sre–’ 3. Therefore, we use G0(w) = p(c1 · · · ck) (5) = k Y i=1 p(ci|c1 · · · ci−1) (6) where string c1 · · · ck is a spelling of w, and p(ci|c1 · · · ci−1) is given by the character HPYLM according to (4). This language model, which we call Nested Pitman-Yor Language Model (NPYLM) hereafter, is the hierarchical language model shown in Figure 2, where the character HPYLM is embedded as a base measure of the word HPYLM.4 As the final base measure for the character HPYLM, we used a uniform prior over the possible characters of a given language. To avoid dependency on ngram order n, we actually used the ∞-gram language model (Mochihashi and Sumita, 2007), a variable order HPYLM, for characters. 
However, for generality we hereafter state that we used the HPYLM. The theory remains the same for ∞grams, except sampling or marginalizing over n as needed. Furthermore, we corrected (5) so that word length will have a Poisson distribution whose parameter can now be estimated for a given language and word type. We describe this in detail in Section 4.3. Chinese Restaurant Representation In our NPYLM, the word model and the character model are not separate but connected through a nested CRP. When a word w is generated from its parent at the unigram node, it means that w 3Imagine we try to segment an English character string “itisrecognizedasthe· · · .” 4Strictly speaking, this is not “nested” in the sense of a Nested Dirichlet process (Rodriguez et al., 2008) and could be called “hierarchical HPYLM”, which denotes another model for domain adaptation (Wood and Teh, 2008). 102 is drawn from the base measure, namely a character HPYLM. Then we divide w into characters c1 · · · ck to yield a “sentence” of characters and feed this into the character HPYLM as data. Conversely, when a table becomes empty, this means that the data associated with the table are no longer valid. Therefore we remove the corresponding customers from the character HPYLM using the inverse procedure of adding a customer in Section 2. All these processes will be invoked when a string is segmented into “words” and customers are added to the leaves of the word HPYLM. To segment a string into “words”, we used efficient dynamic programming combined with MCMC, as described in the next section. 4 Inference To find the hidden word segmentation w of a string s = c1 · · · cN, which is equivalent to the vector of binary hidden variables z = z1 · · · zN, the simplest approach is to build a Gibbs sampler that randomly selects a character ci and draw a binary decision zi as to whether there is a word boundary, and then update the language model according to the new segmentation (Goldwater et al., 2006; Xu et al., 2008). When we iterate this procedure sufficiently long, it becomes a sample from the true distribution (1) (Gilks et al., 1996). However, this sampler is too inefficient since time series data such as word segmentation have a very high correlation between neighboring words. As a result, the sampler is extremely slow to converge. In fact, (Goldwater et al., 2006) reports that the sampler would not mix without annealing, and the experiments needed 20,000 times of sampling for every character in the training data. Furthermore, it has an inherent limitation that it cannot deal with larger than bigrams, because it uses only local statistics between directly contiguous words for word segmentation. 4.1 Blocked Gibbs sampler Instead, we propose a sentence-wise Gibbs sampler of word segmentation using efficient dynamic programming, as shown in Figure 3. In this algorithm, first we randomly select a string, and then remove the “sentence” data of its word segmentation from the NPYLM. Sampling a new segmentation, we update the NPYLM by adding a new “sentence” according to the new seg1: for j = 1 · · · J do 2: for s in randperm (s1, · · · , sD) do 3: if j >1 then 4: Remove customers of w(s) from Θ 5: end if 6: Draw w(s) according to p(w|s, Θ) 7: Add customers of w(s) to Θ 8: end for 9: Sample hyperparameters of Θ 10: end for Figure 3: Blocked Gibbs Sampler of NPYLM Θ. mentation. When we repeat this process, it is expected to mix rapidly because it implicitly considers all possible segmentations of the given string at the same time. 
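To make the procedure in Figure 3 concrete, the following Python sketch spells out the sentence-level resampling loop. The NPYLM operations (removing and adding the customers of a segmentation, sampling a segmentation, resampling hyperparameters) appear as placeholder method names; this is an illustrative sketch under those assumptions, not the authors' implementation.

```python
# Illustrative sketch of the blocked Gibbs sampler of Figure 3.
# `model` is assumed to expose the NPYLM operations described in the text;
# the method names are placeholders, not an actual library API.
import random


def blocked_gibbs(model, sentences, n_iters=400):
    segmentation = {}                    # current word segmentation w(s) per sentence
    for j in range(n_iters):
        order = list(sentences)
        random.shuffle(order)            # randperm(s_1, ..., s_D)
        for s in order:
            if j > 0:
                # remove the customers of the old segmentation w(s) from Theta
                model.remove_customers(segmentation[s])
            # draw a new segmentation w(s) according to p(w | s, Theta)
            # (forward filtering / backward sampling, Section 4.2)
            segmentation[s] = model.sample_segmentation(s)
            # add the customers of the new segmentation to Theta
            model.add_customers(segmentation[s])
        # sample the hyperparameters (d, theta, lambda) of Theta
        model.resample_hyperparameters()
    return segmentation
```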
This is called a blocked Gibbs sampler that samples z block-wise for each sentence. It has the additional advantage that we can accommodate higher-order relationships than bigrams, particularly trigrams, for word segmentation.⁵

4.2 Forward-Backward inference

Then, how can we sample a segmentation w for each string s? In accordance with forward filtering-backward sampling for HMMs (Scott, 2002), this is achieved by essentially the same algorithm employed to sample a PCFG parse tree within MCMC (Johnson et al., 2007) and in grammar-based segmentation (Johnson and Goldwater, 2009).

Forward Filtering. For this purpose, we maintain a forward variable α[t][k] in the bigram case. α[t][k] is the probability of a string c_1 ⋯ c_t with the final k characters being a word (see Figure 4). Segmentations before the final k characters are marginalized using the following recursive relationship:

α[t][k] = \sum_{j=1}^{t-k} p(c^t_{t-k+1} \mid c^{t-k}_{t-k-j+1}) \cdot α[t-k][j],   (7)

where α[0][0] = 1 and we write c_n ⋯ c_m as c^m_n.⁶

Figure 4: Forward filtering of α[t][k] to marginalize out possible segmentations j before t−k.

The rationale for (7) is as follows. Since maintaining the binary variables z_1, ⋯, z_N is equivalent to maintaining, for each t, the distance q_t to the nearest backward word boundary, we can write

α[t][k] = p(c^t_1, q_t = k)   (8)
= \sum_j p(c^t_1, q_t = k, q_{t-k} = j)   (9)
= \sum_j p(c^{t-k}_1, c^t_{t-k+1}, q_t = k, q_{t-k} = j)   (10)
= \sum_j p(c^t_{t-k+1} \mid c^{t-k}_1) \, p(c^{t-k}_1, q_{t-k} = j)   (11)
= \sum_j p(c^t_{t-k+1} \mid c^{t-k}_1) \, α[t-k][j],   (12)

where we used the conditional independence of q_t given q_{t-k} and a uniform prior over q_t in (11) above.

Backward Sampling. Once the probability table α[t][k] is obtained, we can sample a word segmentation backwards. Since α[N][k] is the marginal probability of the string c^N_1 with the last k characters being a word, and there is always a sentence boundary token $ at the end of the string, we can sample k with probability proportional to p($ | c^N_{N−k+1}) · α[N][k] to choose the boundary of the final word. The second-to-last word is sampled similarly, using the probability of its preceding the word just sampled; we continue this process until we arrive at the beginning of the string (Figure 5).

1: for t = 1 to N do
2:   for k = max(1, t−L) to t do
3:     Compute α[t][k] according to (7).
4:   end for
5: end for
6: Initialize t ← N, i ← 0, w_0 ← $
7: while t > 0 do
8:   Draw k ∝ p(w_i | c^t_{t−k+1}, Θ) · α[t][k]
9:   Set w_i ← c^t_{t−k+1}
10:  Set t ← t − k, i ← i + 1
11: end while
12: Return w = w_i, w_{i−1}, ⋯, w_1.

Figure 5: Forward-backward sampling of word segmentation w (in the bigram case).

Trigram case. For simplicity, we showed the algorithm for bigrams above. For trigrams, we maintain a forward variable α[t][k][j], which represents the marginal probability of the string c_1 ⋯ c_t with both the final k characters and the further j characters preceding them being words. The forward-backward algorithm becomes more complicated and is therefore omitted, but it can be derived following the extended algorithm for the second-order HMM (He, 1988).

⁵ In principle fourgrams or beyond are also possible, but will be too complex while the gain will be small. For this purpose, Particle MCMC (Doucet et al., 2009) is promising but less efficient in a preliminary experiment.

⁶ As Murphy (2002) noted, in a semi-HMM we cannot use the standard trick of avoiding underflow by normalizing α[t][k] into p(k|t), since the model is asynchronous. Instead we always compute (7) using logsumexp().
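To illustrate the bigram procedure of Figures 4 and 5, a compact Python sketch is given below. Here word_prob(word, context) stands in for the NPYLM bigram probability of a word given the preceding word (context None approximating the case with no preceding word), and, following footnote 6, the forward table is computed in log space with logsumexp. This is an illustrative sketch that assumes word_prob returns strictly positive probabilities; it is not the authors' code.

```python
# Illustrative bigram forward-filtering / backward-sampling sketch (Figures 4 and 5).
# `word_prob(word, context)` is a placeholder for the NPYLM bigram probability.
import math
import random


def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))


def sample_segmentation(chars, word_prob, max_len):
    n = len(chars)
    # Forward filtering: alpha[t][k] is the (log) probability of c_1..c_t with the
    # last k characters forming a word (Eq. 7), marginalizing earlier boundaries.
    alpha = [[float("-inf")] * (max_len + 1) for _ in range(n + 1)]
    for t in range(1, n + 1):
        for k in range(1, min(max_len, t) + 1):
            w = chars[t - k:t]
            if t - k == 0:
                alpha[t][k] = math.log(word_prob(w, None))
            else:
                terms = [math.log(word_prob(w, chars[t - k - j:t - k])) + alpha[t - k][j]
                         for j in range(1, min(max_len, t - k) + 1)]
                alpha[t][k] = logsumexp(terms)

    # Backward sampling: draw word boundaries from the end of the string.
    words, t, w_next = [], n, "$"        # "$" marks the sentence boundary
    while t > 0:
        ks = list(range(1, min(max_len, t) + 1))
        # weight of boundary k: p(already-sampled following word | candidate) * alpha[t][k]
        logits = [math.log(word_prob(w_next, chars[t - k:t])) + alpha[t][k] for k in ks]
        norm = logsumexp(logits)
        probs = [math.exp(l - norm) for l in logits]
        k = random.choices(ks, weights=probs)[0]
        words.append(chars[t - k:t])
        w_next, t = chars[t - k:t], t - k
    return list(reversed(words))
```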
Complexity This algorithm has a complexity of O(NL2) for bigrams and O(NL3) for trigrams for each sentence, where N is the length of the sentence and L is the maximum allowed length of a word (≤N). 4.3 Poisson correction As Nagata (1996) noted, when only (5) is used inadequately low probabilities are assigned to long words, because it has a largely exponential distribution over length. To correct this, we assume that word length k has a Poisson distribution with a mean λ: Po(k|λ) = e−λ λk k! . (13) Since the appearance of c1 · · · ck is equivalent to that of length k and the content, by making the character n-gram model explicit as Θ we can set p(c1 · · · ck) = p(c1 · · · ck, k) (14) = p(c1 · · · ck, k|Θ) p(k|Θ) Po(k|λ) (15) where p(c1 · · · ck, k|Θ) is an n-gram probability given by (6), and p(k|Θ) is a probability that a word of length k will be generated from Θ. While previous work used p(k|Θ) = (1 − p($))k−1p($), this is only true for unigrams. Instead, we employed a Monte Carlo method that generates words randomly from Θ to obtain the empirical estimates of p(k|Θ). Estimating λ. Of course, we do not leave λ as a constant. Instead, we put a Gamma distribution p(λ) = Ga(a, b) = ba Γ(a)λa−1e−bλ (16) to estimate λ from the data for given language and word type.7 Here, Γ(x) is a Gamma function and a, b are the hyperparameters chosen to give a nearly uniform prior distribution.8 7We used different λ for different word types, such as digits, alphabets, hiragana, CJK characters, and their mixtures. W is a set of words of each such type, and (13) becomes a mixture of Poisson distributions in this case. 8In the following experiments, we set a=0.2, b=0.1. 104 Denoting W as a set of “words” obtained from word segmentation, the posterior distribution of λ used for (13) is p(λ|W) ∝p(W|λ)p(λ) = Ga a+ X w∈W t(w)|w|, b+ X w∈W t(w)  , (17) where t(w) is the number of times word w is generated from the character HPYLM, i.e. the number of tables tϵw for w in word unigrams. We sampled λ from this posterior for each Gibbs iteration. 5 Experiments To validate our model, we conducted experiments on standard datasets for Chinese and Japanese word segmentation that are publicly available, as well as the same dataset used in (Goldwater et al., 2006). Note that NPYLM maximizes the probability of strings, equivalently, minimizes the perplexity per character. Therefore, the recovery of the “ground truth” that is not available for inference is a byproduct in unsupervised learning. Since our implementation is based on Unicode and learns all hyperparameters from the data, we also confirmed that NPYLM segments the Arabic Gigawords equally well. 5.1 English phonetic transcripts In order to directly compare with the previously reported result, we first used the same dataset as Goldwater et al. (2006). This dataset consists of 9,790 English phonetic transcripts from CHILDES data (MacWhinney and Snow, 1985). Since our algorithm converges rather fast, we ran the Gibbs sampler of trigram NPYLM for 200 iterations to obtain the results in Table 1. Among the token precision (P), recall (R), and F-measure (F), the recall is especially higher to outperform the previous result based on HDP in F-measure. Meanwhile, the same measures over the obtained lexicon (LP, LR, LF) are not always improved. Moreover, the average length of words inferred was surprisingly similar to ground truth: 2.88, while the ground truth is 2.87. Table 2 shows the empirical computational time needed to obtain these results. 
Although the convergence in MCMC is not uniquely identified, improvement in efficiency is also outstanding. 5.2 Chinese and Japanese word segmentation To show applicability beyond small phonetic transcripts, we used standard datasets for Chinese and Model P R F LP LR LF NPY(3) 74.8 75.2 75.0 47.8 59.7 53.1 NPY(2) 74.8 76.7 75.7 57.3 56.6 57.0 HDP(2) 75.2 69.6 72.3 63.5 55.2 59.1 Table 1: Segmentation accuracies on English phonetic transcripts. NPY(n) means n-gram NPYLM. Results for HDP(2) are taken from Goldwater et al. (2009), which corrects the errors in Goldwater et al. (2006). Model time iterations NPYLM 17min 200 HDP 10h 55min 20000 Table 2: Computations needed for Table 1. Iterations for “HDP” is the same as described in Goldwater et al. (2009). Actually, NPYLM approximately converged around 50 iterations, 4 minutes. Japanese word segmentation, with all supervised segmentations removed in advance. Chinese For Chinese, we used a publicly available SIGHAN Bakeoff 2005 dataset (Emerson, 2005). To compare with the latest unsupervised results (using a closed dataset of Bakeoff 2006), we chose the common sets prepared by Microsoft Research Asia (MSR) for simplified Chinese, and by City University of Hong Kong (CITYU) for traditional Chinese. We used a random subset of 50,000 sentences from each dataset for training, and the evaluation was conducted on the enclosed test data. 9 Japanese For Japanese, we used the Kyoto Corpus (Kyoto) (Kurohashi and Nagao, 1998): we used random subset of 1,000 sentences for evaluation and the remaining 37,400 sentences for training. In all cases we removed all whitespaces to yield raw character strings for inference, and set L = 4 for Chinese and L = 8 for Japanese to run the Gibbs sampler for 400 iterations. The results (in token F-measures) are shown in Table 3. Our NPYLM significantly ourperforms the best results using a heuristic approach reported in Zhao and Kit (2008). While Japanese accuracies appear lower, subjective qualities are much higher. This is mostly because NPYLM segments inflectional suffixes and combines frequent proper names, which are inconsistent with the “correct” 9Notice that analyzing a test data is not easy for characterwise Gibbs sampler of previous work. Meanwhile, NPYLM easily finds the best segmentation using the Viterbi algorithm once the model is learned. 105 Model MSR CITYU Kyoto NPY(2) 80.2 (51.9) 82.4 (126.5) 62.1 (23.1) NPY(3) 80.7 (48.8) 81.7 (128.3) 66.6 (20.6) ZK08 66.7 (—) 69.2 (—) — Table 3: Accuracies and perplexities per character (in parentheses) on actual corpora. “ZK08” are the best results reported in Zhao and Kit (2008). We used ∞-gram for characters. MSR CITYU Kyoto Semi 0.895 (48.8) 0.898 (124.7) 0.913 (20.3) Sup 0.945 (81.4) 0.941 (194.8) 0.971 (21.3) Table 4: Semi-supervised and supervised results. Semi-supervised results used only 10K sentences (1/5) of supervised segmentations. segmentations. Bigram and trigram performances are similar for Chinese, but trigram performs better for Japanese. In fact, although the difference in perplexity per character is not so large, the perplexity per word is radically reduced: 439.8 (bigram) to 190.1 (trigram). This is because trigram models can leverage complex dependencies over words to yield shorter words, resulting in better predictions and increased tokens. Furthermore, NPYLM is easily amenable to semi-supervised or even supervised learning. 
In that case, we have only to replace the word segmentation w(s) in Figure 3 to the supervised one, for all or part of the training data. Table 4 shows the results using 10,000 sentences (1/5) or complete supervision. Our completely generative model achieves the performance of 94% (Chinese) or even 97% (Japanese) in supervised case. The result also shows that the supervised segmentations are suboptimal with respect to the perplexity per character, and even worse than unsupervised results. In semi-supervised case, using only 10K reference segmentations gives a performance of around 90% accuracy and the lowest perplexity, thanks to a combination with unsupervised data in a principled fashion. 5.3 Classics and English text Our model is particularly effective for spoken transcripts, colloquial texts, classics, or unknown languages where supervised segmentation data is difficult or even impossible to create. For example, we are pleased to say that we can now analyze (and build a language model on) “The Tale of Genji”, the core of Japanese classics written 1,000 years ago (Figure 6). The inferred segmentations are  !  "#%$ &('*),+.-/'!0213 54% (6879&:9;<>=8 ?@19>BAC(DE"ED,F4.G ?2HIDJK4L'2M. %7NDOP#Q%RES(T? U /V1%WX ZY('V!?C[B\>]>BA8F^IGQ_ ` L[aHIDEbac.9>Ld%4&e.Vf=%)>: (gFih.j5 kBlK m@"a=EWO · · · Figure 6: Unsupervised segmentation result for “The Tale of Genji”. (16,443 sentences, 899,668 characters in total) mostly correct, with some inflectional suffixes being recognized as words, which is also the case with English. Finally, we note that our model is also effective for western languages: Figure 7 shows a training text of “Alice in Wonderland ” with all whitespaces removed, and the segmentation result. While the data is extremely small (only 1,431 lines, 115,961 characters), our trigram NPYLM can infer the words surprisingly well. This is because our model contains both word and character models that are combined and carefully smoothed, from a Bayesian perspective. 6 Discussion In retrospect, our NPYLM is essentially a hierarchical Markov model where the units (=words) evolve as the Markov process, and each unit has subunits (=characters) that also evolve as the Markov process. Therefore, for such languages as English that have already space-separated tokens, we can also begin with tokens besides the character-based approach in Section 5.3. In this case, each token is a “character” whose code is the integer token type, and a sentence is a sequence of “characters.” Figure 8 shows a part of the result computed over 100K sentences from Penn Treebank. We can see that some frequent phrases are identified as “words”, using a fully unsupervised approach. Notice that this is only attainable with NPYLM where each phrase is described as a ngram model on its own, here a word ∞-gram language model. While we developed an efficient forwardbackward algorithm for unsupervised segmentation, it is reminiscent of CRF in the discriminative approach. Therefore, it is also interesting to combine them in a discriminative way as persued in POS tagging using CRF+HMM (Suzuki et al., 2007), let alone a simple semi-supervised approach in Section 5.2. This paper provides a foundation of such possibilities. 
106 lastly,shepicturedtoherselfhowthissamelittlesisterofhersw ould,intheafter-time,beherselfagrownwoman;andhowshe wouldkeep,throughallherriperyears,thesimpleandlovingh eartofherchildhood:andhowshewouldgatheraboutherothe rlittlechildren,andmaketheireyesbrightandeagerwithmany astrangetale,perhapsevenwiththedreamofwonderlandoflo ngago:andhowshewouldfeelwithalltheirsimplesorrows,an dfindapleasureinalltheirsimplejoys,rememberingherownc hild-life,andthehappysummerdays. (a) Training data (in part). last ly , she pictured to herself how this same little sister of her s would , inthe after - time , be herself agrown woman ; and how she would keep , through allher ripery ears , the simple and loving heart of her child hood : and how she would gather about her other little children ,and make theireyes bright and eager with many a strange tale , perhaps even with the dream of wonderland of longago : and how she would feel with all their simple sorrow s , and find a pleasure in all their simple joys , remember ing her own child - life , and thehappy summerday s . (b) Segmentation result. Note we used no dictionary. Figure 7: Word segmentation of “Alice in Wonderland ”. 7 Conclusion In this paper, we proposed a much more efficient and accurate model for fully unsupervised word segmentation. With a combination of dynamic programming and an accurate spelling model from a Bayesian perspective, our model significantly outperforms the previous reported results, and the inference is very efficient. This model is also considered as a way to build a Bayesian Kneser-Ney smoothed word n-gram language model directly from characters with no “word” indications. In fact, it achieves lower perplexity per character than that based on supervised segmentations. We believe this will be particularly beneficial to build a language model on such texts as speech transcripts, colloquial texts or unknown languages, where word boundaries are hard or even impossible to identify a priori. Acknowledgments We thank Vikash Mansinghka (MIT) for a motivating discussion leading to this research, and Satoru Takabayashi (Google) for valuable technical advice. References David Aldous, 1985. Exchangeability and Related Topics, pages 1–198. Springer Lecture Notes in Math. 1117. Rie Kubota Ando and Lillian Lee. 2003. MostlyUnsupervised Statistical Segmentation of Japanese nevertheless , he was admired by many of his immediate subordinates for his long work hours and dedication to building northwest into what he called a “ mega carrier . ” although preliminary findings were reported more than a year ago , the latest results appear in today ’s new england journal of medicine , a forum likely to bring new attention to the problem . south korea registered a trade deficit of $ 101 million in october , reflecting the country ’s economic sluggishness , according to government figures released wednesday . Figure 8: Generative phrase segmentation of Penn Treebank text computed by NPYLM. Each line is a “word” consisting of actual words. Kanji Sequences. Natural Language Engineering, 9(2):127–149. Peter F. Brown, Vincent J. Della Pietra, Robert L. Mercer, Stephen A. Della Pietra, and Jennifer C. Lai. 1992. An Estimate of an Upper Bound for the Entropy of English. Computational Linguistics, 18:31– 40. Arnaud Doucet, Christophe Andrieu, and Roman Holenstein. 2009. Particle Markov Chain Monte Carlo. in submission. Tom Emerson. 2005. SIGHAN Bakeoff 2005. http://www.sighan.org/bakeoff2005/. W. R. Gilks, S. Richardson, and D. J. Spiegelhalter. 1996. 
Markov Chain Monte Carlo in Practice. Chapman & Hall / CRC. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2005. Interpolating Between Types and Tokens by Estimating Power-Law Generators. In NIPS 2005. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual Dependencies in Unsupervised Word Segmentation. In Proceedings of ACL/COLING 2006, pages 673–680. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2009. A Bayesian framework for word segmentation: Exploring the effects of context. Cognition, in press. Yang He. 1988. Extended Viterbi algorithm for second order hidden Markov process. In Proceedings of ICPR 1988, pages 718–720. 107 Mark Johnson and Sharon Goldwater. 2009. Improving nonparameteric Bayesian inference: experiments on unsupervised word segmentation with adaptor grammars. In NAACL 2009. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian Inference for PCFGs via Markov Chain Monte Carlo. In Proceedings of HLT/NAACL 2007, pages 139–146. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. In Proceedings of ICASSP, volume 1, pages 181–184. Sadao Kurohashi and Makoto Nagao. 1998. Building a Japanese Parsed Corpus while Improving the Parsing System. In Proceedings of LREC 1998, pages 719–724. http://nlp.kuee.kyoto-u.ac.jp/nl-resource/ corpus.html. Brian MacWhinney and Catherine Snow. 1985. The Child Language Data Exchange System. Journal of Child Language, 12:271–296. Daichi Mochihashi and Eiichiro Sumita. 2007. The Infinite Markov Model. In NIPS 2007. Kevin Murphy. 2002. Hidden semi-Markov models (segment models). http://www.cs.ubc.ca/˜murphyk/ Papers/segment.pdf. Masaaki Nagata. 1996. Automatic Extraction of New Words from Japanese Texts using Generalized Forward-Backward Search. In Proceedings of EMNLP 1996, pages 48–59. Abel Rodriguez, David Dunson, and Alan Gelfand. 2008. The Nested Dirichlet Process. Journal of the American Statistical Association, 103:1131–1154. Steven L. Scott. 2002. Bayesian Methods for Hidden Markov Models. Journal of the American Statistical Association, 97:337–351. Jun Suzuki, Akinori Fujino, and Hideki Isozaki. 2007. Semi-Supervised Structured Output Learning Based on a Hybrid Generative and Discriminative Approach. In Proceedings of EMNLP-CoNLL 2007, pages 791–800. Yee Whye Teh. 2006a. A Bayesian Interpretation of Interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, NUS. Yee Whye Teh. 2006b. A Hierarchical Bayesian Language Model based on Pitman-Yor Processes. In Proceedings of ACL/COLING 2006, pages 985–992. Frank Wood and Yee Whye Teh. 2008. A Hierarchical, Hierarchical Pitman-Yor Process Language Model. In ICML 2008 Workshop on Nonparametric Bayes. Jia Xu, Jianfeng Gao, Kristina Toutanova, and Hermann Ney. 2008. Bayesian Semi-Supervised Chinese Word Segmentation for Statistical Machine Translation. In Proceedings of COLING 2008, pages 1017–1024. Hai Zhao and Chunyu Kit. 2008. An Empirical Comparison of Goodness Measures for Unsupervised Chinese Word Segmentation with a Unified Framework. In Proceedings of IJCNLP 2008. 108
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1066–1074, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Language Identification of Search Engine Queries Hakan Ceylan Department of Computer Science University of North Texas Denton, TX, 76203 [email protected] Yookyung Kim Yahoo! Inc. 2821 Mission College Blvd. Santa Clara, CA, 95054 [email protected] Abstract We consider the language identification problem for search engine queries. First, we propose a method to automatically generate a data set, which uses clickthrough logs of the Yahoo! Search Engine to derive the language of a query indirectly from the language of the documents clicked by the users. Next, we use this data set to train two decision tree classifiers; one that only uses linguistic features and is aimed for textual language identification, and one that additionally uses a non-linguistic feature, and is geared towards the identification of the language intended by the users of the search engine. Our results show that our method produces a highly reliable data set very efficiently, and our decision tree classifier outperforms some of the best methods that have been proposed for the task of written language identification on the domain of search engine queries. 1 Introduction The language identification problem refers to the task of deciding in which natural language a given text is written. Although the problem is heavily studied by the Natural Language Processing community, most of the research carried out to date has been concerned with relatively long texts such as articles or web pages which usually contain enough text for the systems built for this task to reach almost perfect accuracy. Figure 1 shows the performance of 6 different language identification methods on written texts of 10 European languages that use the Roman Alphabet. It can be seen that the methods reach a very high accuracy when the text has 100 or more characters. However, search engine queries are very short in length; they have about 2 to 3 words on average, Figure 1: Performance of six Language Identification methods on varying text size. Adapted from (Poutsma, 2001). which requires a reconsideration of the existing methods built for this problem. Correct identification of the language of the queries is of critical importance to search engines. Major search engines such as Yahoo! Search (www.yahoo.com), or Google (www.google.com) crawl billions of web pages in more than 50 languages, and about a quarter of their queries are in languages other than English. Therefore a correct identification of the language of a query is needed in order to aid the search engine towards more accurate results. Moreover, it also helps further processing of the queries, such as stemming or spell checking of the query terms. One of the challenges in this problem is the lack of any standard or publicly available data set. Furthermore, creating such a data set is expensive as it requires an extensive amount of work by human annotators. In this paper, we introduce a new method to overcome this bottleneck by automatically generating a data set of queries with language annotations. We show that the data generated this way is highly reliable and can be used to train a machine learning algorithm. We also distinguish the problem of identifying the textual language vs. the language intended by the users for the search engine queries. 
For search engines, there are cases where a correct identifi1066 cation of the language does not necessarily imply that the user wants to see the results in the same language. For example, although the textual identification of the language for the query ”homo sapiens” is Latin, a user entering this query from Spain, would most probably want to see Spanish web pages, rather than web pages in Latin. We address this issue by adding a non-linguistic feature to our system. We organize the rest of the paper as follows: First, we provide an overview of the previous research in this area. Second, we present our method to automatically generate a data set, and evaluate the effectiveness of this technique. As a result of this evaluation, we obtain a human-annotated data set which we use to evaluate the systems implemented in the following sections. In Section 4, we implement some of the existing models and compare their performance on our test set. We then use the results from these models to build a decision tree system. Next, we consider identifying the language intended by the user for the results of the query, and describe a system geared towards this task. Finally, we conclude our study and discuss the future directions for the problem. 2 Related Work Most of the work carried out to date on the written language identification problem consists of supervised approaches that are trained on a list of words or n-gram models for each reference language. The word based approaches use a list of short words, common words, or a complete vocabulary which are extracted from a corpus for each language. The short words approach uses a list of words with at most four or five characters; such as determiners, prepositions, and conjunctions, and is used in (Ingle, 1976; Grefenstette, 1995). The common words method is a generalization over the short words one which, in addition, includes other frequently occuring words without limiting them to a specific length, and is used in (Souter et al., 1994; Cowie et al., 1999). For classification, the word-based approaches sort the list of words in descending order of their frequency in the corpus from which they are extracted. Then the likelihood of each word in a given text can be calculated by using rank-order statistics or by transforming the frequencies into probabilities. The n-gram based approaches are based on the counts of character or byte n-grams, which are sequences of n characters or bytes, extracted from a corpus for each reference language. Different classification models that use the n-gram features have been proposed. (Cavnar and Trenkle, 1994) used an out-of-place rank order statistic to measure the distance of a given text to the n-gram profile of each language. (Dunning, 1994) proposed a system that uses Markov Chains of byte ngrams with Bayesian Decision Rules to minimize the probability error. (Grefenstette, 1995) simply used trigram counts that are transformed into probabilities, and found this superior to the short words technique. (Sibun and Reynar, 1996) used Relative Entropy by first generating n-gram probability distributions for both training and test data, and then measuring the distance between the two probability distributions by using the Kullback-Liebler Distance. (Poutsma, 2001) developed a system based on Monte Carlo Sampling. 
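To make the n-gram family of methods concrete, the sketch below illustrates the out-of-place rank-order idea of (Cavnar and Trenkle, 1994) described above. It is only a toy illustration of the general scheme, not the implementation evaluated later in this paper; the profile size, padding, and tokenization choices are simplifying assumptions of ours.

```python
from collections import Counter

def ngram_profile(text, n_max=3, top_k=300):
    """Rank character n-grams (n = 1..n_max) by frequency; keep the top_k as a rank map."""
    counts = Counter()
    padded = " " + text.lower() + " "
    for n in range(1, n_max + 1):
        for i in range(len(padded) - n + 1):
            counts[padded[i:i + n]] += 1
    ranked = [g for g, _ in counts.most_common(top_k)]
    return {g: rank for rank, g in enumerate(ranked)}

def out_of_place_distance(doc_profile, lang_profile, max_penalty=1000):
    """Sum of rank differences; n-grams missing from the language profile get a fixed penalty."""
    return sum(abs(r - lang_profile.get(g, max_penalty))
               for g, r in doc_profile.items())

def identify(text, lang_profiles):
    """Return the language whose n-gram profile is closest to the text's profile."""
    doc = ngram_profile(text)
    return min(lang_profiles, key=lambda l: out_of_place_distance(doc, lang_profiles[l]))

# Hypothetical usage: real profiles would be trained on per-language corpora.
profiles = {"en": ngram_profile("the quick brown fox jumps over the lazy dog"),
            "de": ngram_profile("der schnelle braune fuchs springt ueber den faulen hund")}
print(identify("the lazy dog", profiles))   # -> 'en' on this toy data
```

On query-length inputs the distance is dominated by a handful of n-grams, which is exactly why the later sections move to smoothed probability models and combined classifiers.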
Linguini, a system proposed by (Prager, 1999), combines the word-based and n-gram models using a vector-space based model and examines the effectiveness of the combined model and the individual features on varying text size. Similarly, (Lena Grothe and Nrnberger, 2008) combines both models using the ad-hoc method of (Cavnar and Trenkle, 1994), and also presents a comparison study. The work most closely related to ours is presented very recently in (Hammarstr¨om, 2007), which proposes a model that uses a frequency dictionary together with affix information in order to identify the language of texts as short as one word. Other systems that use methods aside from the ones discussed above have also been proposed. (Takci and Sogukpinar, 2004) used letter frequency features in a centroid based classification model. (Kruengkrai et al., 2005) proposed a feature based on alignment of string kernels using suffix trees, and used it in two different classifiers. Finally, (Biemann and Teresniak, 2005) presented an unsupervised system that clusters the words based on sentence co-occurence. Recently, (Hughes et al., 2006) surveyed the previous work in this area and suggested that the problem of language identification for written resources, although well studied, has too many open challenges which requires a more systematic and collaborative study. 3 Data Generation We start the construction of our data set by retrieving the queries, together with the clicked urls, from the Yahoo! Search Engine for a three months time period. For each language desired in our data set, we retrieve the queries from the corresponding 1067 Yahoo! web site in which the default language is the same as the one sought.1 Then we preprocess the queries by getting rid of the ones that have any numbers or special characters in them, removing extra spaces between query terms, and lowercasing all the letters of the queries2. Next, we aggregate the queries that are exactly the same, by calculating the frequencies of the urls clicked for each query. As we pointed out in Section 1, and illustrated in Figure 1, the language identification methods give almost perfect accuracy when the text has 100 or more characters. Furthermore, it is suggested in (Levering and Cutler, 2006) that the average textual content in a web page is 474 words. Thus we assume that it is a fairly trivial task to identify the language for an average web page using one of the existing methods.3 In our case, this task gets already accomplished by the crawler for all the web pages crawled by the search engine. Thus we can summarize our information in two separate tables; T1 and T2. For Table T1, we have a set of queries Q, and each q ∈Q maps to a set of url-frequency pairs. Each mapping is of the form (q, u, fu), where u is a url clicked for q, and fu is the frequency of u. Table T2, on the other hand, contains the urls of all the web pages known to the search engine and has only two columns; (u, l), where u is a unique url, and l is the language identified for u. Since we do not consider multilingual web pages, every url in T2 is unique and has only one language associated with it. Next, we combine the tables T1 and T2 using an inner join operation on the url columns. After the join, we group the results by the language and query columns, during which we also count the number of distinct urls per query, and sum their frequencies. We illustrate this operation with a SQL query in Algorithm 1. 
As a result of these operations, we have, for each query q ∈Q, a set of triplets (l, fl, cu,l) where l is a language, fl is the count of clicks for l (which we obtained through the urls in language l), and cu,l is the count of unique urls in language l. The resulting table T3 associates queries with languages, but also contains a lot of noise. First, 1We do not make a distinction between the different dialects of the same languge. For English, Spanish and Portuguese we gather queries from the web sites of United States, Mexico, and Brazil respectively. 2In this study, we only considered languages that use the Roman alphabet. 3Although not done in this study, the urls of web pages that have less than a defined number of words, such as 100, can be discarded to ensure a higher confidence. Input: Tables T1:[q, u, fu], T2:[u, l] Output: Table T3:[q, l, fl, cu,l] CREATE VIEW T3 AS SELECT T1.q, T2.l, COUNT(T1.u) AS cu,l, SUM(T1.fu) AS fl FROM T1 INNER JOIN T2 ON T1.u = T2.u GROUP BY q, l; Algorithm 1: Join Tables T1 and T2, group by query and language, aggregate distinct url and frequency counts. we have queries that map to more than one language, which suggests that the users clicked on the urls in different languages for the same query. To quantify the strength of each of these mappings, we calculate a weight wq,l for each mapping of a query q to a language l as: wq,l = fl/Fq where Fq, the total frequency of a query q, is defined as: Fq = X l∈Lq fl where Lq is the set of languages for which q has a mapping. Having computed a weight wq,l for each mapping, we introduce our first threshold parameter, W. We eliminate all the queries in our data set, which have weights, wq,l, below the threshold W. Second, even though some of the queries map to only one language, this mapping cannot be trusted due to the high frequency of the queries together with too few distinct urls. This case suggests that the query is most likely navigational. The intent of navigational queries, such as ”ACL 2009”, is to find a particular web site. Therefore they usually consist of proper names, or acronyms that would not be of much use to our language identification problem. Hence we would like to get rid of the navigational queries in our data set by using some of the features proposed for the task of automatic taxonomy of search engine queries. For a more detailed discussion of this task, we refer the reader to (Broder, 2002; Rose and Levinson, 2004; Lee et al., 2005; Liu et al., 2006; Jansen et al., 2008). Two of the features used in (Liu et al., 2006) in identification of the navigational queries from click-through data, are the number of Clicks Satisfied (nCS) and number of Results Satisfied (nRS). In our problem, we substitute nCS with Fq, the total click frequency of the query q, and nRS with 1068 Uq, the number of distinct urls clicked for q. Thus we eliminate the queries that have a total click frequency above a given frequency threshold F, and, that have less than a given distinct number of urls, U. Thus, we have three parameters that help us in eliminating the noise from the inital data; W, F, and U. We show the usage of these parameters in SQL queries, in Algorithm 2. 
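Read procedurally, the same weighting and filtering amounts to the sketch below, offered as a companion to the SQL of Algorithm 2, which follows. The dictionary-shaped inputs, default values, and all names are our own simplifying assumptions, not part of the paper's pipeline.

```python
def filter_queries(t3, query_stats, W=1.0, F=50, U=5):
    """
    t3:          {query: {language: clicks_f_l}}   -- aggregated clicks per query and language (Table T3)
    query_stats: {query: (F_q, distinct_urls)}     -- total clicks and distinct clicked URLs (Table T4)
    Returns {query: language} for the mappings that survive the W, F and U thresholds.
    """
    kept = {}
    for q, lang_clicks in t3.items():
        F_q, n_urls = query_stats[q]
        if F_q >= F or n_urls < U:        # too frequent (likely navigational) or too few distinct URLs
            continue
        for lang, f_l in lang_clicks.items():
            if f_l / F_q >= W:            # w_{q,l} = f_l / F_q must reach the weight threshold
                kept[q] = lang
    return kept

# Hypothetical toy input.
t3 = {"mazda": {"en": 32}}
stats = {"mazda": (32, 7)}
print(filter_queries(t3, stats))          # -> {'mazda': 'en'} with W=1, F=50, U=5
```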
Input: Tables T1:[q, u, fu], T2:[u, l], T3:[q, l, fl, cu,l] Parameters W, F, and U Output: Table D:[q, l] CREATE VIEW T4 AS SELECT T1.q, COUNT(T1.u) AS cu, SUM(T1.fu) AS Fq FROM T1 INNER JOIN T2 ON T1.u = T2.u GROUP BY q; CREATE VIEW D AS SELECT T3.q, T3.l, T3.fl / T4.Fq AS wq,l FROM T1 INNER JOIN T4 ON T3.q = T4.q 10 WHERE T4.Fq < F AND wq,l >= W AND T4.cu,l >= U; Algorithm 2: Construction of the final data set D, by eliminating queries from T3 based on the parameters W, F, and U. The parameters F, U, and W are actually dependent on the size of the data set under consideration, and the study in (Silverstein et al., 1999) suggests that we can get enough click-through data for our analysis by retrieving a large sample of queries. Since we retrieve the queries that are submitted within a three months period, for each language, we have millions of unique queries in our data set. Investigating a held-out development set of queries retrieved from the United States web site (www.yahoo.com), we empirically decided the following values for the parameters, W = 1, F = 50, and U = 5. In other words, we only accepted the queries for which the contents of the urls agree on the same language, that are submitted less than 50 times, and at least have 5 unique urls clicked. The filtering process leaves us with 5-10% of the queries due to the conservative choice of the parameters. From the resulting set, we randomly picked 500 queries and asked a native speaker to annotate them. For each query, the annotator was to classify the query into one of three categories: • Category-1: If the query does not contain any foreign terms. Language Category-1 Category-1+2 Category-3 English 90.6% 94.2% 5.8% French 84.6% 93.4% 6.6% Portuguese 85.2% 93.4% 6.6% Spanish 86.6% 97.4% 2.6% Italian 82.4% 96.6% 3.4% German 76.8% 87.2% 12.8% Dutch 81.0% 92.0% 8.0% Danish 82.4% 93.2% 6.8% Finnish 87.2% 94.0% 6.0% Swedish 86.6% 95.4% 4.6% Average 84.3% 93.7% 6.3% Table 1: Annotation of 500 sample queries drawn from the automatically generated data. • Category-2: If there exists some foreign terms but the query would still be expected to bring web pages in the same language. • Category-3: If the query belongs to other languages, or all the terms are foreign to the annotator.4 90.6% of the queries in our data set were annotated as Category-1, and 94.2% as Category-1 and Category-2 combined. Having successful results for the United States data set, we applied the same parameters to the data sets retrieved for other languages as well, and had the native speakers of each language annotate the queries in the same way. We list these results in Table 1. The results for English have the highest accuracy for Category-1, mostly due to the fact that we tuned our parameters using the United States data. The scores for German on the other hand, are the lowest. We attribute this fact to the highly multilinguality of the Yahoo! Germany website, which receives a high number of non-German queries. In order to see how much of this multi-linguality our parameter selection successfully eliminate, we randomly picked 500 queries from the aggregated but unfiltered queries of the Yahoo! Germany website, and had them annotated as before. As suspected, the second annotation results showed that, only 47.6% of the queries were annotated as Category-1 and 60.2% are annotated as Category-1 and Category-2 combined. Our method was indeed successful and achieved 29.2% improvement over Category-1, and 27% improvement over Category-1 and Category-2 queries combined. 
Another interesting fact to note is the absolute differences between Category-1 and Category-1+2 scores. While this number is very low, 3.8%, for English, it is much higher for the other lan4We do not expect the annotators to know the etymology of the words or have the knowledge of all the acronyms. 1069 Language MinC MaxC µC MinW MaxW µW English 7 46 21.8 1 6 3.35 French 6 74 22.6 1 10 3.38 Portug. 3 87 22.5 1 14 3.55 Spanish 5 57 23.5 1 9 3.51 Italian 4 51 21.9 1 8 3.09 German 3 53 18.1 1 6 2.05 Dutch 5 43 16.3 1 6 2.11 Danish 3 40 14.3 1 6 1.93 Finnish 3 34 13.3 1 5 1.49 Swedish 3 42 13.7 1 8 1.80 Average 4.2 52.7 18.8 1 7.8 2.63 Table 2: Properties of the test set formed by taking 350 Category-1 queries from each language. guages. Through an investigation of Category-2 non-English queries, we find out that this is mostly due to the usage of some common internet or computer terms such as ”download”, ”software”, ”flash player”, among other native language query terms. 4 Language Identification We start this section with the implementation of three models each of which use a different existing feature. We categorize these models as statistical, knowledge based, and morphological. We then combine all three models in a machine learning framework using a novel approach. Finally, we extend this framework by adding a non-linguistic feature in order to identify the language intended by the search engine user. To train each model implemented, we used the EuroParl Corpora, (Koehn, 2005), and the same 10 languages in Section 3. EuroParl Corpora is well balanced, so we would not have any bias towards a particular language resulting from our choice of the corpora. We tested all the systems in this section on a test set of 3500 human annotated queries, which is formed by taking 350 Category-1 queries from each language. All the queries in the test set are obtained from the evaluation results in Section 3. In Table 2, we give the properties of this test set. We list the minimum, maximum, and average number of characters and words (MinC, MaxC, µC, MinW, MaxW, and µW respectively). As can be seen in Table 2, the queries in our test set have 18.8 characters on average, which is much lower than the threshold suggested by the existing systems to achieve a good accuracy. Another interesting fact about the test set is that, languages which are in the bottom half of Table 2 (German, Dutch, Danish, Finnish, and Swedish) have lower number of characters and words on average compared to the languages in the upper half. This is due to the characteristics of those languages, which allow the construction of composite words from multiple words, or have a richer morphology. Thus, the concepts can be expressed in less number of words or characters. 4.1 Models for Language Identification We implement a statistical model using a character based n-gram feature. For each language, we collect the n-gram counts (for n = 1 to n = 7 also using the word beginning and ending spaces) from the vocabulary of the training corpus, and then generate a probability distribution from these counts. We implemented this model using the SRILM Toolkit (Stolcke, 2002) with the modified Kneser-Ney Discounting and interpolation options. For comparison purposes, we also implemented the Rank-Order method using the parameters described in (Cavnar and Trenkle, 1994). For the knowledge based method, we used the vocabulary of each language obtained from the training corpora, together with the word counts. 
From these counts, we obtained a probability distribution for all the words in our vocabulary. In other words, this time we used a word-based ngram method, only with n = 1. It should be noted that increasing the size of n, which might help in language identification of other types of written texts, will not be helpful in this task due to the unique nature of the search engine queries. For the morphological feature; we gathered the affix information for each language from the corpora in an unsupervised fashion as described in (Hammarstr¨om, 2006). This method basically considers each possible morphological segmentation of the words in the training corpora by assuming a high frequency of occurence of salient affixes, and also assuming that words are made up of random characters. Each possible affix is assigned a score based on its frequency, random adjustment, and curve-drop probabilities, which respectively indicate the probability of the affix being a random sequence, and the probability of being a valid morphological segment based on the information of the preceding or the succeding character. In Table 3, we present the top 10 results of the probability distributions obtained from the vocabulary of English, Finnish, and German corpora. We give the performance of each model on our test set in Table 4. The character based ngram model outperforms all the other models with the exception of French, Spanish, and Italian on which the word-based unigram model is better. 1070 English Finnish German -nts 0.133 erityis0.216 -ungen 0.172 -ity 0.119 ihmisoikeus- 0.050 -en 0.066 -ised 0.079 -inen 0.038 gesamt0.066 -ated 0.075 -iksi 0.037 gemeinschafts0.051 -ing 0.069 -iseksi 0.030 verhandlugs0.040 -tions 0.069 -ssaan 0.028 agrar0.024 -ted 0.048 maatalous0.028 s¨ud0.018 -ed 0.047 -aisesta 0.024 menschenrechts- 0.018 -ically 0.041 -iseen 0.023 umwelt0.017 -ly 0.040 -amme 0.023 -ches 0.017 Table 3: Top 10 prefixes and suffixes together with their probabilities, obtained for English, Finnish, and German. The word-based unigram model performs poorly on languages that may have highly inflected or composite words such as Finnish, Swedish, and German. This result is expected as we cannot make sure that the training corpus will include all the possible inflections or compositions of the words in the language. The Rank-Order method performs poorly compared to the character based n-gram model, which suggests that for shorter texts, a well-defined probability distribution with a proper discounting strategy is better than using an ad-hoc ranking method. The success of the morphological feature depends heavily on the probability distribution of affixes in each language, which in turn depends on the corpus due to the unsupervised affix extraction algorithm. As can be seen in Table 3, English affixes have a more uniform distribution than both Finnish and German. Each model implemented in the previous section has both strengths and weaknesses. The statistical approach is more robust to noise, such as misspellings, than the others, however it may fail to identify short queries or single words because of the lack of enough evidence, and it may confuse two languages that are very similar. In such cases, the knowledge-based model could be more useful, as it can find those query terms in the vocabulary. On the other hand, the knowledge-based model would have a sparse vocabulary for languages that can have heavily inflected words such as Turkish, and Finnish. 
In such cases, the morphological feature could provide a strong clue for identification from the affix information of the terms. 4.2 Decision Tree Classification Noting the fact that each model can complement the other(s) in certain cases, we combined them by using a decision tree (DT) classifier. We trained the classifier using the automatically annotated data set, which we created in Section 3. Since this set comes with a certain amount of noise, we Language Stat. Knowl. Morph. Rank-Order English 90.3% 83.4% 60.6% 78.0% French 77.4% 82.0% 4.86% 56.0% Portuguese 79.7% 75.7% 11.7% 70.3% Spanish 73.1% 78.3% 2.86% 46.3% Italian 85.4% 87.1% 43.4% 77.7% German 78.0% 60.0% 26.6% 58.3% Dutch 85.7% 64.9% 23.1% 65.1% Danish 87.7% 67.4% 46.9% 61.7% Finnish 87.4% 49.4% 38.0% 82.3% Swedish 81.7% 55.1% 2.0% 56.6% Average 82.7% 70.3% 26.0% 65.2% Table 4: Evaluation of the models built from the individual features, and the Rank-Order method on the test set. pruned the DT during the training phase to avoid overfitting. This way, we built a robust machine learning framework at a very low cost and without any human labour. As the features of our DT classifier, we use the results of the models that are implemented in Section 4.1, together with the confidence scores calculated for each instance. To calculate a confidence score for the models, we note that since each model makes its selection based on the language that gives the highest probability, a confidence score should indicate the relative highness of that probability compared to the probabilities of other languages. To calculate this relative highness, we use the Kurtosis measure, which indicates how peaked or flat the probabilities in a distribution are compared to a normal distribution. To calculate the Kurtosis value, κ, we use the equation below. κ = P l∈L(pl −µ)4 (N −1)σ4 where L is the set of languages, N is the number of languages in the set, pl is the probability for language l ∈L, and µ and σ are respectively the mean and the the standard deviation values of P = {pl|l ∈L}. We calculate a κ measure for the result of each model, and then discretize it into one of three categories: • HIGH: If κ ≥(µ′ + σ′) • MEDIUM: If [κ > (µ′ −σ′)∧κ < (µ′ +σ′)] • LOW: If κ ≤(µ′ −σ′) where µ′ and σ′ are the mean and the standard deviation values respectively, for a set of confidence scores calculated for a model on a small development set of 25 annotated queries from each language. For the statistical model, we found µ′ = 4.47, and σ′ = 1.96, for the knowledge 1071 Language 500 1,000 5,000 10,000 English 78.6% 81.1% 84.3% 85.4% French 83.4% 85.7% 85.4% 86.6% Portuguese 81.1% 79.1% 81.7% 81.1% Spanish 77.4% 79.4% 81.4% 82.3% Italian 90.6% 89.7% 90.6% 90.0% German 81.1% 82.3% 83.1% 83.1% Dutch 86.3% 87.1% 88.3% 87.4% Danish 86.3% 87.7% 88.0% 88.0% Finnish 88.3% 88.3% 89.4% 90.3% Swedish 81.4% 81.4% 81.1% 81.7% Average 83.5% 84.2% 85.3% 85.6% Table 5: Evaluation of the Decision Tree Classifier with varying sizes of training data. based µ′ = 4.69, and σ′ = 3.31, and finally for the morphological model we found µ′ = 4.65, and σ′ = 2.25. Hence, for a given query, we calculate the identification result of each model together with the model’s confidence score, and then discretize the confidence score into one of the three categories described above. Finally, in order to form an association between the output of the model and its confidence, we create a composite attribute by appending the discretized confidence to the identified language. 
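A minimal sketch of this confidence computation and discretization is given below, assuming each model exposes a per-language probability distribution. Whether σ is the sample or population standard deviation is not specified in the text, so the sample form here is our assumption, and the function names are ours.

```python
import statistics

def kurtosis_confidence(probs):
    """kappa = sum_l (p_l - mu)^4 / ((N - 1) * sigma^4) over the per-language probabilities."""
    mu = statistics.mean(probs)
    sigma = statistics.stdev(probs)          # sample standard deviation (our assumption)
    n = len(probs)
    return sum((p - mu) ** 4 for p in probs) / ((n - 1) * sigma ** 4)

def discretize(kappa, mu_prime, sigma_prime):
    """Map a raw kurtosis score to HIGH / MEDIUM / LOW as in Section 4.2."""
    if kappa >= mu_prime + sigma_prime:
        return "HIGH"
    if kappa <= mu_prime - sigma_prime:
        return "LOW"
    return "MEDIUM"

def composite_attribute(predicted_lang, kappa, mu_prime, sigma_prime):
    """e.g. 'en-HIGH': the composite feature fed to the decision tree."""
    return f"{predicted_lang}-{discretize(kappa, mu_prime, sigma_prime)}"

# Hypothetical per-language probabilities from the statistical model.
probs = [0.70, 0.05, 0.05, 0.04, 0.04, 0.03, 0.03, 0.02, 0.02, 0.02]
kappa = kurtosis_confidence(probs)
print(composite_attribute("en", kappa, mu_prime=4.47, sigma_prime=1.96))  # -> 'en-HIGH'
```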
As an example, our statistical model identifies the query ”the sovereign individual” as English (en), and reports a κ = 7.60, which is greater than or equal to µ′ + σ′ = 4.47 + 1.96 = 6.43. Therefore the resulting composite attribute assigned to this query by the statistical model is ”en-HIGH”. We used the Weka Machine Learning Toolkit (Witten and Frank, 2005) to implement our DT classifier. We trained our system with 500, 1,000, 5,000, and 10,000 instances of the automatically annotated data and evaluate it on the same test set of 3500 human-annotated queries. We show the results in Table 5. The results in Table 5 show that our DT classifier, on average, outperforms all the models in Table 4 for each size of the training data. Furthermore, the performance of the system increases with the increasing size of training data. In particular, the improvement that we get for Spanish, French, and German queries are strikingly good. This shows that our DT classifier can take advantage of the complementary features to make a better classification. The classifier that uses 10,000 instances gets outperformed by the statistical model (by 4.9%) only in the identification of English queries. In order to evaluate the significance of our improvement, we performed a paired t-test, with a null hypothesis and α = 0.01 on the outputs of da de en es fi fr it nl sv pt da 308 4 9 0 2 3 1 7 14 2 de 7 291 6 2 4 4 5 19 9 3 en 6 8 299 3 3 9 4 5 8 5 es 3 2 4 288 2 2 10 1 1 37 fi 0 5 3 4 316 1 7 4 7 3 fr 2 7 6 3 2 303 10 7 2 8 it 0 1 2 7 4 4 315 2 1 14 nl 5 8 8 4 6 4 4 306 4 1 sv 24 8 6 5 6 2 2 6 286 5 pt 0 1 3 41 1 4 13 2 1 284 Figure 2: Confusion Matrix for the Decision Tree Classifier that uses 10,000 training instances. the statistical model, and the DT classifier that uses 10,000 training instances. The test resulted in P = 1.12−10 ≪α, which strongly indicates that the improvement of the DT classifier over the statistical model is statistically significant. In order to illustrate the errors made by our DT classifier, we show the confusion matrix M in Figure 2. The matrix entry Mli,lj simply gives the number of test instances that are in language li but misclassified by the system as lj. From the figure, we can infer that, Portuguese and Spanish are the languages that are confused mostly by the system. This is an expected result because of the high similarity between the two languages. 4.3 Towards Identifying the Language Intent As a final step in our study, we build another DT classifier by introducing a non-linguistic feature to our system, which is the language information of the country from which the user entered the query.5 Our intuition behind introducing this extra feature is to help the search engine in guessing the language in which the user wants to see the resulting web pages. Since the real purpose of a search engine is to bring the expected results to its users, we believe that a correct identification of the language that the user intended for the results when typing the query is an important first part of this process. To illustrate this with an example, we consider the query, ”how to tape for plantar fasciitis”, which we selected among the 500 humanannotated queries retrieved from the United States web site. This query is labelled as Category-2 by the human annotator. Our DT classifier, together with the statistical and knowledge-based models, classifies this query falsely as a Porteguese query, which is most likely caused due to the presence of the Latin phrase ”plantar fasciitis”. 
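How such a geographical signal could be appended to the feature vector of Classifier-2 is sketched below; the country-to-language table, the particular composite attributes, and the feature layout are illustrative assumptions of ours rather than the paper's implementation.

```python
# Hypothetical country -> default-language table (first official language only,
# in the spirit of footnote 5).
COUNTRY_LANGUAGE = {"US": "en", "FR": "fr", "ES": "es", "BR": "pt", "DE": "de"}

def classifier2_features(model_outputs, user_country):
    """
    model_outputs: composite attributes such as ['en-HIGH', 'pt-LOW', 'en-MEDIUM']
                   from the statistical, knowledge-based and morphological models.
    user_country:  country code of the user issuing the query.
    Returns the nominal feature vector handed to the second decision tree.
    """
    country_lang = COUNTRY_LANGUAGE.get(user_country, "unknown")
    return model_outputs + [country_lang]

# Illustration of the 'plantar fasciitis' case: hypothetical linguistic outputs lean away
# from English, but the user's country supplies a strong English prior.
print(classifier2_features(["pt-LOW", "pt-MEDIUM", "en-LOW"], "US"))
# -> ['pt-LOW', 'pt-MEDIUM', 'en-LOW', 'en']
```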
In order to test the effectiveness of our new feature, we introduce all the Category-2 queries to our 5For countries, where the number of official languages is more than one, we simply pick the first one listed in our table. 1072 Language New Feat. Classifier-1 Classifier-2 English 74.9% 82.8% 89.5% French 77.0% 85.6% 93.7% Portuguese 79.1% 78.1% 93.3% Spanish 84.1% 80.7% 94.2% Italian 90.6% 86.7% 96.3% German 80.2% 80.7% 94.2% Dutch 91.6% 85.8% 95.3% Danish 88.6% 87.0% 94.9% Finnish 94.0% 87.7% 97.9% Swedish 87.9% 80.9% 95.3% Average 85.0% 83.6% 94.5% Table 6: Evaluation of the new feature and the two decision tree classifiers on the new test set. test set and increase its size to 430 queries for each language.6 Then we run both classifiers, with and without the new feature, using a training data size of 10,000 instances, and display the results in Table 6. We also show the contribution of the new feature as a standalone classifier in the first column of Table 6. We labeled the DT classifier that we implemented in Section 4.2 as ”Classifier-1” and the new one as ”Classifier-2”. Interestingly, the results in Table 6 tell us that a search engine can achieve a better accuracy than Classifier-1 on average, should it decide to bring the results based only on the geographical information of its users. However one can argue that this would be a bad idea for the web sites that receive a lot of visitors from all over the world, and also are visited very often. For example, if the search engine’s United States web site, which is considered as one of the most important markets in the world, was to employ such an approach, it’d only receive 74.9% accuracy by misclassifying the English queries entered from countries for which the default language is not English. On the other hand, when this geographical information is used as a feature in our decision tree framework, we get a very high boost on the accuracy of the results for all the languages. As can be seen in Table 6, Classifier-2 gives the best results. 5 Conclusions and Future Work In this paper, we considered the language identification problem for search engine queries. First, we presented a completely automated method to generate a reliable data set with language annotations that can be used to train a decision tree classifier. Second, we implemented three features used in the existing language identification meth6We don’t have equal number of Category-2 queries in each language. For example, English has only 18 of them whereas Italian has 71. Hence the resulting data set won’t be balanced in terms of this category. ods, and compared their performance. Next, we built a decision tree classifier that improves the results on average by combining the outputs of the three models together with their confidence scores. Finally, we considered the practical application of this problem for search engines, and built a second classifier that takes into account the geographical information of the users. Human annotations on 5000 automatically annotated queries showed that our data generation method is highly accurate, achieving 84.3% accuracy on average for Category-1 queries, and 93.7% accuracy for Category-1 and Category-2 queries combined. Furthermore, the process is fast as we can get a data set of size approximately 50,000 queries in a few hours by using only 15 computers in a cluster. 
The decision tree classifier that we built for the textual language identification in Section 4.2 outperforms all three models that we implemented in Section 4.1, for all the languages except English, for which the statistical model is better by 4.9%, and Swedish, for which we get a tie. Introducing the geographical information feature to our decision tree framework boosts the accuracy greatly even in the case of a noisier test set. This suggests that the search engines can do a better job in presenting the results to their users by taking the non-linguistic features into account in identifying the intended language of the queries. In future, we would like to improve the accuracy of our data generation system by considering additional features proposed in the studies of automated query taxonomy, and doing a more careful examination in the assignment of the parameter values. We are also planning to extend the number of languages in our data set. Furthermore, we would like to improve the accuracy of Classifier2 with additional non-linguistic features. Finally, we will consider other alternatives to the decision tree framework when combining the results of the models with their confidence scores. 6 Acknowledgments We are grateful to Romain Vinot, and Rada Mihalcea, for their comments on an earlier draft of this paper. We also would like to thank Sriram Cherukiri for his contributions during the course of this project. Finally, many thanks to Murat Birinci, and Sec¸kin Kara, for their help on the data annotation process, and Cem S¨ozgen for his remarks on the SQL formulations. 1073 References C. Biemann and S. Teresniak. 2005. Disentangling from babylonian confusion - unsupervised language identification. In Proceedings of CICLing-2005, Computational Linguistics and Intelligent Text Processing, pages 762–773. Springer. Andrei Broder. 2002. A taxonomy of web search. SIGIR Forum, 36(2):3–10. William B. Cavnar and John M. Trenkle. 1994. Ngram-based text categorization. In Proceedings of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval, pages 161–175, Las Vegas, US. J. Cowie, Y. Ludovic, and R. Zacharski. 1999. Language recognition for mono- and multi-lingual documents. In Proceedings of Vextal Conference, Venice, Italy. Ted Dunning. 1994. Statistical identification of language. Technical Report MCCS-94-273, Computing Research Lab (CRL), New Mexico State University. Gregory Grefenstette. 1995. Comparing two language identification schemes. In Proceedings of JADT-95, 3rd International Conference on the Statistical Analysis of Textual Data, Rome, Italy. Harald Hammarstr¨om. 2006. A naive theory of affixation and an algorithm for extraction. In Proceedings of the Eighth Meeting of the ACL Special Interest Group on Computational Phonology and Morphology at HLT-NAACL 2006, pages 79–88, New York City, USA, June. Association for Computational Linguistics. Harald Hammarstr¨om. 2007. A fine-grained model for language identification. In F. Lazarinis, J. Vilares, J. Tait (eds) Improving Non-English Web Searching (iNEWS07) SIGIR07 Workshop, pages 14–20. B. Hughes, T. Baldwin, S. G. Bird, J. Nicholson, and A. Mackinlay. 2006. Reconsidering language identification for written language resources. In 5th International Conference on Language Resources and Evaluation (LREC2006), Genoa, Italy. Norman C Ingle. 1976. A language identification table. The Incorporated Linguist, 15(4):98–101. Bernard J. Jansen, Danielle L. Booth, and Amanda Spink. 2008. 
Determining the informational, navigational, and transactional intent of web queries. Inf. Process. Manage., 44(3):1251–1266. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the 10th Machine Translation Summit, Phuket, Thailand, pages 79–86. Canasai Kruengkrai, Prapass Srichaivattana, Virach Sornlertlamvanich, and Hitoshi Isahara. 2005. Language identification based on string kernels. In In Proceedings of the 5th International Symposium on Communications and Information Technologies (ISCIT-2005, pages 896–899. Uichin Lee, Zhenyu Liu, and Junghoo Cho. 2005. Automatic identification of user goals in web search. In WWW ’05: Proceedings of the 14th international conference on World Wide Web, pages 391–400, New York, NY, USA. ACM. Ernesto William De Luca Lena Grothe and Andreas Nrnberger. 2008. A comparative study on language identification methods. In Proceedings of the Sixth International Language Resources and Evaluation (LREC’08), Marrakech, Morocco, May. European Language Resources Association (ELRA). http://www.lrec-conf.org/proceedings/lrec2008/. Ryan Levering and Michal Cutler. 2006. The portrait of a common html web page. In DocEng ’06: Proceedings of the 2006 ACM symposium on Document engineering, pages 198–204, New York, NY, USA. ACM Press. Yiqun Liu, Min Zhang, Liyun Ru, and Shaoping Ma. 2006. Automatic query type identification based on click through information. In AIRS, pages 593–600. Arjen Poutsma. 2001. Applying monte carlo techniques to language identification. In In Proceedings of Computational Linguistics in the Netherlands (CLIN). John M. Prager. 1999. Linguini: Language identification for multilingual documents. In HICSS ’99: Proceedings of the Thirty-Second Annual Hawaii International Conference on System Sciences-Volume 2, page 2035, Washington, DC, USA. IEEE Computer Society. Daniel E. Rose and Danny Levinson. 2004. Understanding user goals in web search. In WWW ’04: Proceedings of the 13th international conference on World Wide Web, pages 13–19, New York, NY, USA. ACM. Penelope Sibun and Jeffrey C. Reynar. 1996. Language identification: Examining the issues. In 5th Symposium on Document Analysis and Information Retrieval, pages 125–135, Las Vegas, Nevada, U.S.A. Craig Silverstein, Hannes Marais, Monika Henzinger, and Michael Moricz. 1999. Analysis of a very large web search engine query log. SIGIR Forum, 33(1):6–12. C. Souter, G. Churcher, J. Hayes, and J. Hughes. 1994. Natural language identification using corpus-based models. Hermes Journal of Linguistics, 13:183– 203. Andreas Stolcke. 2002. Srilm – an extensible language modeling toolkit. In Proc. Intl. Conf. on Spoken Language Processing, volume 2, pages 901–904, Denver, CO. Hidayet Takci and Ibrahim Sogukpinar. 2004. Centroid-based language identification using letter feature set. In CICLing, pages 640–648. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2 edition. 1074
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 1075–1083, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Exploiting Bilingual Information to Improve Web Search Wei Gao1, John Blitzer2, Ming Zhou3, and Kam-Fai Wong1 1The Chinese University of Hong Kong, Shatin, N.T., Hong Kong, China {wgao,kfwong}@se.cuhk.edu.hk 2Computer Science Division, University of California at Berkeley, CA 94720-1776, USA [email protected] 3Microsoft Research Asia, Beijing 100190, China [email protected] Abstract Web search quality can vary widely across languages, even for the same information need. We propose to exploit this variation in quality by learning a ranking function on bilingual queries: queries that appear in query logs for two languages but represent equivalent search interests. For a given bilingual query, along with corresponding monolingual query log and monolingual ranking, we generate a ranking on pairs of documents, one from each language. Then we learn a linear ranking function which exploits bilingual features on pairs of documents, as well as standard monolingual features. Finally, we show how to reconstruct monolingual ranking from a learned bilingual ranking. Using publicly available Chinese and English query logs, we demonstrate for both languages that our ranking technique exploiting bilingual data leads to significant improvements over a state-of-the-art monolingual ranking algorithm. 1 Introduction Web search quality can vary widely across languages, even for a single query and search engine. For example, we might expect that ranking search results for the query Wj„ ßY„ (Thomas Hobbes) to be more difficult in Chinese than it is in English, even while holding the basic ranking function constant. At the same time, ranking search results for the query Han Feizi (8 :) is likely to be harder in English than in Chinese. A large portion of web queries have such properties that they are originated in a language different from the one they are searched. This variance in problem difficulty across languages is not unique to web search; it appears in a wide range of natural language processing problems. Much recent work on bilingual data has focused on exploiting these variations in difficulty to improve a variety of monolingual tasks, including parsing (Hwa et al., 2005; Smith and Smith, 2004; Burkett and Klein, 2008; Snyder and Barzilay, 2008), named entity recognition (Chang et al., 2009), and topic clustering (Wu and Oard, 2008). In this work, we exploit a similar intuition to improve monolingual web search. Our problem setting differs from cross-lingual web search, where the goal is to return machinetranslated results from one language in response to a query from another (Lavrenko et al., 2002). We operate under the assumption that for many monolingual English queries (e.g., Han Feizi), there exist good documents in English. If we have Chinese information as well, we can exploit it to help find these documents. As we will see, machine translation can provide important predictive information in our setting, but we do not wish to display machine-translated output to the user. We approach our problem by learning a ranking function for bilingual queries – queries that are easily translated (e.g., with machine translation) and appear in the query logs of two languages (e.g., English and Chinese). Given query logs in both languages, we identify bilingual queries with sufficient clickthrough statistics in both sides. 
Large-scale aggregated clickthrough data were proved useful and effective in learning ranking functions (Dou et al., 2008). Using these statistics, we can construct a ranking over pairs of documents, one from each language. We use this ranking to learn a linear scoring function on pairs of documents given a bilingual query. We find that our bilingual rankings have good monolingual ranking properties. In particular, given an optimal pairwise bilingual ranking, we show that simple heuristics can effectively approximate the optimal monolingual ranking. Using 1075 1 10 100 1,000 10,000 50,000 0 5 10 15 20 25 30 35 40 45 50 Frequency (# of times that queries are issued) Proportion of bilingual queries (%) English Chinese Figure 1: Proportion of bilingual queries in the query logs of different languages. these heuristics and our learned pairwise scoring function, we can derive a ranking for new, unseen bilingual queries. We develop and test our bilingual ranker on English and Chinese with two large, publicly available query logs from the AOL search engine1 (English query log) (Pass et al., 2006) and the Sougou search engine2 (Chinese query log) (Liu et al., 2007). For both languages, we achieve significant improvements over monolingual Ranking SVM (RSVM) baselines (Herbrich et al., 2000; Joachims, 2002), which exploit a variety of monolingual features. 2 Bilingual Query Statistics We designate a query as bilingual if the concept has been searched by users of both two languages. As a result, not only does it occur in the query log of its own language, but its translation also appears in the log of the second language. So a bilingual query yields reasonable queries in both languages. Of course, most queries are not bilingual. For example, our English log contains map of Alabama, but not our Chinese log. In this case, we wouldn’t expect the Chinese results for the query’s translation, ƒn®jC, to be helpful in ranking the English results. In total, we extracted 4.8 million English queries from AOL log, of which 1.3% of their translations appear in Sogou log. Similarly, of our 3.1 million Chinese queries from Sogou log, 2.3% of their translations appear in AOL log. By total number of queries issued (i.e., counting dupli1http://search.aol.com 2http://www.sogou.com cates), the proportion of bilingual queries is much higher. As Figure 1 shows as the number of times a query is issued increases, so does the chance of it being bilingual. In particular, nearly 45% of the highest-frequency English queries and 35% of the highest-frequency Chinese queries are bilingual. 3 Learning to Rank Using Bilingual Information Given a set of bilingual queries, we now describe how to learn a ranking function for monolingual data that exploits information from both languages. Our procedure has three steps: Given two monolingual rankings, we construct a bilingual ranking on pairs of documents, one from each language. Then we learn a linear scoring function for pairs of documents that exploits monolingual information (in both languages) and bilingual information. Finally, given this ranking function on pairs and a new bilingual query, we reconstruct a monolingual ranking for the language of interest. This section addresses these steps in turn. 3.1 Creating Bilingual Training Data Without loss of generality, suppose we rank English documents with constraints from Chinese documents. 
Given an English log Le and a Chinese log Lc, our ranking algorithm takes as input a bilingual query pair q = (qe, qc) where qe ∈Le and qc ∈Lc, a set of returned English documents {ei}N i=1 from qe, and a set of constraint Chinese documents {cj}n j=1 from qc. In order to create bilingual ranking data, we first generate monolingual ranking data from clickthrough statistics. For each language-query-document triple, we calculate the aggregated click count across all users and rank documents according to this statistic. We denote the count of a page as C(ei) or C(cj). The use of clickthrough statistics as feedback for learning ranking functions is not without controversy, but recent empirical results on large data sets suggest that the aggregated user clicks provides an informative indicator of relevance preference for a query. Joachims et al. (2007) showed that relative feedback signals generated from clicks correspond well with human judgments. Dou et al. (2008) revealed that a straightforward use of aggregated clicks can achieve a better ranking than using explicitly labeled data because clickthrough data contain fine-grained differences between documents useful for learning an 1076 Table 1: Clickthrough data of a bilingual query pair extracted from query logs. Bilingual query pair (Mazda, j j j  H H H) doc URL click # e1 www.mazda.com 229 e2 www.mazdausa.com 185 e3 www.mazda.co.uk 5 e4 www.starmazda.com 2 e5 www.mazdamotosports.com 2 . . . . . . c1 www.faw-mazda.com 50 c2 price.pcauto.com.cn/brand. jsp?bid=17 43 c3 auto.sina.com.cn/salon/ FORD/MAZDA.shtml 20 c4 car.autohome.com.cn/brand/ 119/ 18 c5 jsp.auto.sohu.com/view/ brand-bid-263.html 9 . . . . . . accurate and reliable ranking. Therefore, we leverage aggregated clicks for comparing the relevance order of documents. Note that there is nothing specific to our technique that requires clickthrough statistics. Indeed, our methods could easily be employed with human annotated data. Table 1 gives an example of a bilingual query pair and the aggregated click count of each result page. Given two monolingual documents, a preference order can be inferred if one document is clicked more often than another. To allow for cross-lingual information, we extend the order of individual documents into that of bilingual document pairs: given two bilingual document pairs, we will write  e(1) i , c(1) j  ≻  e(2) i , c(2) j  to indicate that the pair of  e(1) i , c(1) j  is ranked higher than the pair of  e(2) i , c(2) j  . Definition 1  e(1) i , c(1) j  ≻  e(2) i , c(2) j  if and only if one of the following relations hold: 1. C(e(1) i ) > C(e(2) i ) and C(c(1) j ) ≥C(c(2) j ) 2. C(e(1) i ) ≥C(e(2) i ) and C(c(1) j ) > C(c(2) j ) Note, however, that from a purely monolingual perspective, this definition introduces orderings on documents that should not initially have existed. For English ranking, for example, we may have  e(1) i , c(1) j  ≻  e(2) i , c(2) j  even when C(e(1) i ) = C(e(2) i ). This leads us to the following asymmetric definition of ≻that we use in practice: Definition 2  e(1) i , c(1) j  ≻  e(2) i , c(2) j  if and only if C(e(1) i ) > C(e(2) i ) and C(c(1) j ) ≥C(c(2) j ) With this definition, we can unambiguously compare the relevance of bilingual document pairs based on the order of monolingual documents. 
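A minimal sketch of how training preferences could be enumerated under Definition 2 is shown below, assuming the clicked documents and their aggregated counts for one bilingual query pair are held in simple lists; all names are ours, not the paper's.

```python
from itertools import product

def bilingual_preferences(en_docs, zh_docs):
    """
    en_docs, zh_docs: lists of (doc_id, click_count) for one bilingual query pair.
    Yields ((e1, c1), (e2, c2)) whenever the pair (e1, c1) should outrank (e2, c2)
    under Definition 2: C(e1) > C(e2) and C(c1) >= C(c2).
    """
    pairs = list(product(en_docs, zh_docs))
    for (e1, c1), (e2, c2) in product(pairs, pairs):
        if e1[1] > e2[1] and c1[1] >= c2[1]:
            yield (e1[0], c1[0]), (e2[0], c2[0])

# Toy clickthrough counts in the spirit of Table 1.
en = [("e1", 229), ("e2", 185)]
zh = [("c1", 50), ("c2", 43)]
for better, worse in bilingual_preferences(en, zh):
    print(better, ">", worse)
```

Note that the asymmetry of Definition 2 is what keeps pairs with equal English click counts out of the training preferences.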
The advantages are two-fold: (1) we can treat multiple cross-lingual document similarities the same way as the commonly used query-document features in a uniform manner of learning; (2) with the similarities, the relevance estimation on bilingual document pairs can be enhanced, and this in return can improve the ranking of documents. 3.2 Ranking Model Given a pair of bilingual queries (qe, qc), we can extract the set of corresponding bilingual document pairs and their click counts {(ei, cj), (C(ei), C(cj))}, where i = 1, . . . , N and j = 1, . . . , n. Based on that, we produce a set of bilingual ranking instances S = {Φij, zij}, where each Φij = {xi; yj; sij} is the feature vector of (ei, cj) consisting of three components: xi = f(qe, ei) is the vector of monolingual relevancy features of ei, yi = f(qc, cj) is the vector of monolingual relevancy features of cj, and sij = sim(ei, cj) is the vector of cross-lingual similarities between ei and cj, and zij = (C(ei), C(cj)) is the corresponding click counts. The task is to select the optimal function that minimizes a given loss with respect to the order of ranked bilingual document pairs and the gold. We resort to Ranking SVM (RSVM) (Herbrich et al., 2000; Joachims, 2002) learning for classification on pairs of instances. Compared the baseline RSVM (monolingual), our algorithm learns to classify on pairs of bilingual document pairs rather than on pairs of individual documents. Let f being a linear function: f⃗w(ei, cj) = ⃗wx · xi + ⃗wy · yj + ⃗ws · sij (1) where ⃗w = {⃗wx; ⃗wy; ⃗ws} denotes the weight vector, in which the elements correspond to the relevancy features and similarities. For any two bilingual document pairs, their preference relation is measured by the difference of the functional values of Equation 1:  e(1) i , c(1) j  ≻  e(2) i , c(2) j  ⇔ f⃗w  e(1) i , c(1) j  −f⃗w  e(2) i , c(2) j  > 0 ⇔ ⃗wx ·  x(1) i −x(2) i  + ⃗wy ·  y(1) j −y(2) j  + ⃗ws ·  s(1) ij −s(2) ij  > 0 1077 We then create a new training corpus based on the preference ordering of any two such pairs: S′ = {Φ′ ij, z′ ij}, where the new feature vector becomes Φ′ ij = n x(1) i −x(2) i ; y(1) j −y(2) j ; s(1) ij −s(2) ij o , and the class label z′ ij =    +1, if  e(1) i , c(1) j  ≻  e(2) i , c(2) j  ; −1, if  e(2) i , c(2) j  ≻  e(1) i , c(1) j  is a binary preference value depending on the order of bilingual document pairs. The problem is to solve SVM objective: min ⃗w 1 2∥⃗w∥2 + λ P i P j ξij subject to bilingual constraints: z′ ij · (⃗w · Φ′ ij) ≥ 1 −ξij and ξij ≥0. There are potentially Γ = nN bilingual document pairs for each query, and the number of comparable pairs may be much larger due to the combinatorial nature (but less than Γ(Γ −1)/2). To speed up training, we resort to stochastic gradient descent (SGD) optimizer (Shalev-Shwartz et al., 2007) to approximate the true gradient of the loss function evaluated on a single instance (i.e., per constraint). The parameters are then adjusted by an amount proportional to this approximate gradient. For large data set, SGD-RSVM can be much faster than batch-mode gradient descent. 3.3 Inference The solution ⃗w forms a vector orthogonal to the hyper-plane of RSVM. To predict the order of bilingual document pairs, the ranking score can be simply calculated by Equation 1. However, a prominent problem is how to derive the full order of monolingual documents for output from the order of bilingual document pairs. To our knowledge, there is no precise conversion algorithm in polynomial time. 
We thus adopt two heuristics for approximating the true document score: • H-1 (max score): Choose the maximum score of the pair as the score of document, i.e., score(ei) = maxj(f(ei, cj)). • H-2 (mean score): Average over all the scores of pairs associated with the ranked document as the score of this document, i.e., score(ei) = 1/n P j f(ei, cj). Intuitively, for the rank score of a single document, H-2 combines the “voting” scores from its n constraint documents weighted equally, while H-1 simply chooses the maximum one. A formal approach to the problem is to leverage rank aggregation formalism (Dwork et a., 2001; Liu et al., 2007), which will be left for our future work. The two simple heuristics are employed here because of their simplicity and efficiency. The time complexity of the approximation is linear to the number of ranked documents given n is constant. 4 Features and Similarities Standard features for learning to rank include various query-document features, e.g., BM25 (Robertson, 1997), as well as query-independent features, e.g., PageRank (Brin and Page, 1998). Our feature space consists of both these standard monolingual features and cross-lingual similarities among documents. The cross-lingual similarities are valuated using different translation mechanisms, e.g., dictionary-based translation or machine translation, or even without any translation at all. 4.1 Monolingual Relevancy Features In learning to rank, the relevancy between query and documents and the measures based on link analysis are commonly used as features. The discussion on their details is beyond the scope of this paper. Readers may refer to (Liu et al., 2007) for the definitions of many such features. We implement six of these features that are considered the most typical shown as Table 2. These include sets of measures such as BM25, language-modelbased IR score, and PageRank. Because most conventional IR and web search relevancy measures fall into this category, we call them altogether IR features in what follows. Note that for a given bilingual document pair (e, c), the monolingual IR features consist of relevance score vectors f(qe, e) in English and f(qc, c) in Chinese. 4.2 Cross-lingual Document Similarities To measure the document similarity across different languages, we define the similarity vector sim(e, c) as a series of functions mapping a bilingual document pair to positive real numbers. Intuitively, a good similarity function is one which maps cross-lingual relevant documents into close scores and maintains a large distance between irrelevant and relevant documents. Four categories of similarity measures are employed. Dictionary-based Similarity (DIC): For dictionary-based document translation, we use 1078 Table 2: List of monolingual relevancy measures used as IR features in our model. IR Feature Description BM25 Okapi BM25 score (Robertson, 1997) BM25 PRF Okapi BM25 score with pseudorelevance feedback (Robertson and Jones, 1976) LM DIR Language-model-based IR score with Dirichlet smoothing (Zhai and Lafferty, 2001) LM JM Language-model-based IR score with Jelinek-Mercer smoothing (Zhai and Lafferty, 2001) LM ABS Language-model-based IR score with absolute discounting (Zhai and Lafferty, 2001) PageRank PageRank score (Brin and Page, 1998) the similarity measure proposed by Mathieu et al. (2004). 
Given a bilingual dictionary, we let T(e, c) denote the set of word pairs (we, wc) such that we is a word in English document e, and wc is a word in Chinese document c, and we is the English translation of wc. We define tf(we, e) and tf(wc, c) to be the term frequency of we in e and that of wc in c, respectively. Let df(we) and df(wc) be the English document frequency for we and Chinese document frequency for wc. If ne (nc) is the total number of English (Chinese), then the bilingual idf is defined as idf(we, wc) = log ne+nc df(we)+df(wc). Then the cross-lingual document similarity is calculated by sim(e, c) = P (we,wc)∈T (e,c) tf(we,e)tf(wc,c)idf(we,wc)2 √ Z where Z is a normalization coefficient (see Mathieu et al. (2004) for detail). This similarity function can be understood as the cross-lingual counterpart of the monolingual cosine similarity function (Salton, 1998). Similarity Based on Machine Translation (MT): For machine translation, the cross-lingual measure actually becomes a monolingual similarity between one document and another’s translation. We therefore adopt cosine function for it directly (Salton, 1998). Translation Ratio (RATIO): Translation ratio is defined as two sets of ratios of translatable terms using a bilingual dictionary: RATIO FOR – what percent of words in e can be translated to words in c; RATIO BACK – what percent of words in c can be translated back to words in e. URL LCS Ratio (URL): The ratio of longest common subsequence (Cormen et al., 2001) between the URLs of two pages being compared. This measure is useful to capture pages in different languages but with similar URLs such as www. airbus.com, www.airbus.com.cn, etc. Note that each set of similarities above except URL includes 3 values based on different fields of web page: title, body, and title+body. 5 Experiments and Results This section presents evaluation metric, data sets and experiments for our proposed ranker. 5.1 Evaluation Metric Commonly adopted metrics for ranking, such as mean average precision (Buckley and Voorhees, 2000) and Normalized Discounted Cumulative Gain (J¨arvelin and Kek¨al¨ainen, 2000), is designed for data sets with human relevance judgment, which is not available to us. Therefore, we use the Kendall’s tau coefficient (Kendall, 1938; Joachims, 2002) to measure the degree of correlation between two rankings. For simplicity, let’s assume strict orderings of any given ranking. Therefore we ignore all the pairs with ties (instances with the identical click count). Kendall’s tau is defined as τ(ra, rb) = (P −Q)/(P + Q), where P is the number of concordant pairs and Q is the number of disconcordant pairs in the given orderings ra and rb. The value is a real number within [−1, +1], where −1 indicates a complete inversion, and +1 stands for perfect agreement, and a value of zero indicates no correlation. Existing ranking techniques heavily depend on human relevance judgment that is very costly to obtain. Similar to Dou et al (2008), our method utilizes the automatically aggregated click count in query logs as the gold for deriving the true order of relevancy, but we use the clickthrough of different languages. We average Kendall’s tau values between the algorithm output and the gold based on click frequency for all test queries. 5.2 Data Sets Query logs can be the basis for constructing high quality ranking corpus. Due to the proprietary issue of log, no public ranking corpus based on real-world search engine log is currently available. 
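As an aside, the dictionary-based similarity of Section 4.2 can be sketched as follows. The toy dictionary, the romanized Chinese stand-ins, and the simplified normalization Z (a plain length product rather than the coefficient of Mathieu et al. (2004)) are assumptions for illustration only.

    import math
    from collections import Counter

    def dic_similarity(doc_e, doc_c, dictionary, df_e, df_c, n_e, n_c):
        # doc_e, doc_c: token lists; dictionary maps an English word to a set of
        # Chinese translations; df_* are document frequencies, n_* collection sizes.
        tf_e, tf_c = Counter(doc_e), Counter(doc_c)
        score = 0.0
        for w_e, f_e in tf_e.items():
            for w_c in dictionary.get(w_e, ()):
                if w_c in tf_c:
                    # bilingual idf(w_e, w_c) = log((n_e + n_c) / (df(w_e) + df(w_c)))
                    idf = math.log((n_e + n_c) / (df_e.get(w_e, 1) + df_c.get(w_c, 1)))
                    score += f_e * tf_c[w_c] * idf * idf
        # Simplified normalization: product of document lengths stands in for Z.
        z = max(len(doc_e) * len(doc_c), 1)
        return score / math.sqrt(z)

    doc_e = ["airbus", "aircraft", "order"]
    doc_c = ["kongke", "dingdan"]                 # romanized stand-ins for Chinese tokens
    dictionary = {"airbus": {"kongke"}, "order": {"dingdan"}}
    print(dic_similarity(doc_e, doc_c, dictionary,
                         df_e={"airbus": 10, "order": 200},
                         df_c={"kongke": 8, "dingdan": 150},
                         n_e=70180, n_c=111197))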
Moreover, to build a predictable bilingual ranking corpus, the logs of different languages are needed and have to meet certain conditions: (1) they should be sufficiently large so that a good number of bilingual query pairs could be identi1079 Table 3: Statistics on AOL and Sogou query logs. AOL(EN) Sogou(CH) # sessions 657,426 5,131,000 # unique queries 10,154,743 3,117,902 # clicked queries 4,811,650 3,117,590 # clicked URLs 1,632,788 8,627,174 time span 2006/03-05 2006/08 size 2.12GB 1.56GB fied; (2) for the identified query pairs, there should be sufficient statistics of associated clickthrough data; (3) The click frequency should be well distributed at both sides so that the preference order between bilingual document pairs can be derived for SVM learning. For these reasons, we use two independent and publicly accessible query logs to construct our bilingual ranking corpus: English AOL log3 and Chinese Sogou log4. Table 3 shows some statistics of these two large query logs. We automatically identify 10,544 bilingual query pairs from the two logs using the Java API for Google Translate5, in which each query has certain number of clicked URLs. To better control the bilingual equivalency of queries, we make sure the bilingual queries in each of these pairs are bi-directional translations. Then we download all their clicked pages, which results in 70,180 English6 and 111,197 Chinese documents. These documents form two independent collections, which are indexed separately for retrieval and feature calculation. For good quality, it is necessary to have sufficient clickthrough data for each query. So we further identify 1,084 out of 10,544 bilingual query pairs, in which each query has at least 10 clicked and downloadable documents. This smaller collection is used for learning our model, which contains 21,711 English and 28,578 Chinese documents7. In order to compute cross-lingual document similarities based on machine translation 3http://gregsadetsky.com/aol-data/ 4http://www.sogou.com/labs/dl/q.html 5http://code.google.com/p/ google-api-translate-java/ 6AOL log only records the domain portion of the clicked URLs, which misleads document downloading. We use the “search within site or domain” function of a major search engine to approximate the real clicked URLs by keeping the first returned result for each query. 7Because Sogou log has a lot more clicked URLs, for balancing with the number of English pages, we kept at most 50 pages per Chinese query. Table 4: Kendall’s tau values of English ranking. The significant improvements over baseline (99% confidence) are bolded with the p-values given in parenthesis. * indicates significant improvement over IR (no similarity). n = 5. Models Pair H-1 (max) H-2 (mean) RSVM (baseline) n/a 0.2424 0.2424 IR (no similarity) 0.2783 0.2445 0.2445 IR+DIC 0.2909 0.2453 0.2496 IR+MT 0.2858 0.2488* 0.2494* (p=0.0003) (p=0.0004) IR+DIC+MT 0.2901 0.2481 0.2514* (p=0.0009) IR+DIC+RATIO 0.2946 0.2466 0.2519* (p=0.0004) IR+DIC+MT +RATIO 0.2940 0.2473* 0.2539* (p=0.0009) (p=1.5e-5) IR+DIC+MT +RATIO+URL 0.2979 0.2533* 0.2577* (p=2.2e-5) (p=4.4e-7) (see Section 4.2), we automatically translate all these 50,298 documents using Google Translate, i.e., English to Chinese and vice versa. Then the bilingual document pairs are constructed, and all the monolingual features and cross-lingual similarities are computed (see Section 4.1&4.2). 5.3 English Ranking Performance Here we examine the ranking performance of our English ranker under different similarity settings. 
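Since Kendall's tau is the figure reported in all of the experiments that follow, a small sketch of the computation in Section 5.1 over two strict orderings (ties already discarded) may be useful; the document identifiers and positions below are toy values, not data from the corpus.

    from itertools import combinations

    def kendalls_tau(rank_a, rank_b):
        # rank_a, rank_b map each document id to its position in two strict
        # orderings (pairs with tied click counts are assumed already removed).
        docs = list(rank_a)
        concordant = discordant = 0
        for d1, d2 in combinations(docs, 2):
            same_order = (rank_a[d1] - rank_a[d2]) * (rank_b[d1] - rank_b[d2])
            if same_order > 0:
                concordant += 1
            else:
                discordant += 1
        return (concordant - discordant) / (concordant + discordant)

    gold   = {"d1": 1, "d2": 2, "d3": 3, "d4": 4}   # order derived from click counts
    system = {"d1": 2, "d2": 1, "d3": 3, "d4": 4}   # order by predicted ranking score
    print(kendalls_tau(gold, system))                # 0.67: mostly concordant pairs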
We use traditional RSVM (Herbrich et al., 2000; Joachims, 2002) without any bilingual consideration as the baseline, which uses only English IR features. We conduct this experiment using all the 1,084 bilingual query pairs with 4-fold cross validation (each fold with 271 query pairs). The number of constraint documents n is empirically set as 5. The results are shown in Table 4. Clearly, bilingual constraints are helpful to improve English ranking. Our pairwise settings unanimously outperforms the RSVM baseline. The paired two-tailed t-test (Smucker et al., 2007) shows that most improvements resulted from heuristic H-2 (mean score) are statistically significant at 99% confidence level (p<0.01). Relatively fewer significant improvements can be made by heuristic H-1 (max score). This is because the maximum score on pair is just a rough approximation to the optimal document score. But this simple scheme works surprisingly well and still consistently outperforms the baseline. Note that our bilingual model with only IR features, i.e., IR (no similarity), also outperforms the baseline. The reason is that in this setting there are 1080 1 2 3 4 5 6 7 8 9 10 0.23 0.235 0.24 0.245 0.25 0.255 0.26 # of constraint documents in a different language Kendall’s tau RSVM (baseline) IR+DIC IR+MT IR+DIC+MT IR+DIC+RAIO+MT IR+DIC+RAIO+MT+URL Figure 2: English ranking results vary with the number of constraint Chinese documents. IR features of n Chinese documents introduced in addition to the IR features of English documents in the baseline. The DIC similarity does not work as effectively as MT. This may be due to the limitation of bilingual dictionary alone for translating documents, where the issues like out-of-vocabulary words and translation ambiguity are common but can be better dealt with by MT. When DIC is combined with RATIO, which considers both forward and backward translation of words, it can capture the correlation between bilingually very similar pages, thus performs better. We find that the URL similarity, although simple, is very useful and improves 1.5–2.4% of Kendall’s tau value than not using it. This is because the URLs of the top Chinese (constraint) documents are often similar to many of returned English URLs which are generally more regular. For example, in query pair (Toyota Camry, T›), 9/13 English pages are anchored by the URLs containing keywords “toyota” and/or “camry”, and 3/5 constraint documents’ URLs also contain them. In contrast, the URLs of returned Chinese pages are less regular in general. This also explains why this measure does not improve much for Chinese ranking (see Section 5.4). We also vary the parameter n to study how the performance changes with different number of constraint Chinese documents. Figure 2 shows the results using heuristic H-2. More constraint documents are generally helpful, but when only one constraint document is used, it may be detrimenTable 5: Kendall’s tau values of Chinese ranking. The significant improvements over baseline (99% confidence) are bolded with the p-values given in parenthesis. * indicates significant improvement over IR (no similarity). n = 5. 
Models Pair H-1 (max) H-2 (mean) RSVM (baseline) n/a 0.2935 0.2935 IR (no similarity) 0.3201 0.2938 0.2938 IR+DIC 0.3220 0.2970 0.2973* (p=0.0060) (p=0.0020) IR+MT 0.3299 0.2992* 0.3008* (p=0.0034) (p=0.0003) IR+DIC+MT 0.3295 0.2991* 0.3004* (p=0.0014) (p=0.0008) IR+DIC+RATIO 0.3240 0.2972* 0.2968* (p=0.0010) (p=0.0014) IR+DIC+MT +RATIO 0.3303 0.2973* 0.3007* (p=0.0004) (p=0.0002) IR+DIC+MT +RATIO+URL 0.3288 0.2981* 0.3024* (p=0.0005) (p=1.5e-6) tal to the ranking for some features. One explanation is that the document clicked most often is not necessarily relevant, and it is very likely that no English page is similar to the first Chinese page. Joachims et al. (2007) found that users’ click behavior is biased by the rank of search engine at the first and/or second positions (especially the first). More constraint pages are helpful because the pages after the first are less biased and the click counts can reflect the relevancy more accurately. 5.4 Chinese Ranking Performance We also benchmark Chinese ranking with English constraint documents under the similar configurations as Section 5.3. The results are given by Table 5 and Figure 3. As shown in Table 5, improvements on Chinese ranking are even more encouraging. Kendall’s tau values under all the settings are significantly better than not only the baseline but also IR (no similarity). This may suggest that English information is generally more helpful to Chinese ranking than the other way round. The reason is straightforward: there are a high proportion of Chinese queries having English or foreign-language origins in our data set. For these queries, relevant information at Chinese side may be relatively poorer, so the English ranking can be more reliable. As far as we can, we manually identified 215 such queries from all the 1,084 bilingual queries (amount to 23.2%). To shed more light on this finding, we examine top-20 queries improved most by our method 1081 1 2 3 4 5 6 7 8 9 10 0.286 0.288 0.29 0.292 0.294 0.296 0.298 0.3 0.302 0.304 # of constraint documents in a different language Kendall’s tau RSVM (baseline) IR+DIC IR+MT IR+DIC+MT IR+DIC+RATIO+MT IR+DIC+RATIO+MT+URL Figure 3: Chinese ranking results vary with the number of constraint English documents. (with all features and similarities) over the baseline. As shown in Table 6, most of the top improved Chinese queries are about concepts originated from English or other languages, or something non-local (bolded). Interestingly, u£{ ˜ (political catoons) are among these Chinese queries improved most by English ranking, which is believed as rare (or sensitive) content on Chinese web. In contrast, top English queries are short of this type of queries. But we can still see Bruce Lee (¯B), a Chinese Kung-Fu actor, and peony (î[), the national flower of China. Their information tends to be more popular on Chinese web, and thus helpful to English ranking. For the exceptions like Sunrider ( šy) and Aniston (“„î), despite their English origins, we find they have surprisingly sparse click counts in English log while Chinese users look much more interested and provide a lot of clickthrough that is helpful. 6 Conclusions and Future Work We aim to improve web search ranking for an important set of queries, called bilingual queries, by exploiting bilingual information derived from clickthrough logs of different languages. The thrust of our technique is using search ranking of one language and cross-lingual information to help ranking of another language. 
Our pairwise ranking scheme based on bilingual document pairs can easily integrate all kinds of similarities into the existing framework and significantly improves both English and Chinese ranking performance. Table 6: Top 20 most improved bilingual queries. Bold means a positive example for our hypothesis. * marks an exception. Most improved CH queries Most improved EN queries        < < <ÿ ÿ ÿ (salmonella) free online tv (½Dó"ž @)      Â  Â} } } (scotland) weapons (Éì) ; ; ;O O O (caffeine) lily (º\) ò“Õ (epitaph) cable (žƒ) ] ] ] ) ) ) » » » $ $ $ (british history) *sunrider ( šy) u u u£ £ £{ { {˜ ˜ ˜ (political cartoons) *aniston (“„î) ½<ø: (immune system) clothes (q) ÄúË´ (wine bottles) *three little pigs (®B Â) z z z¿ ¿ ¿¼ ¼ ¼ (hungary) hair care () ¼b (witchcraft) neon (t}) 벓 (popcorn) bruce lee (¯ ¯ ¯B B B  ) >F (impetigo) radish (YT) ¥ - ÷  (bathroom design) chile (œ¼) ¼ (pigeon) peony (î î î[ [ [) ð ð ðô ô ô} } } (polar bear) toothache (¿;) : : :³ ³ ³  C C C (map of africa) free online translation (½ Dó" H) n n n Y Y Y n n n õ õ õ _ _ _ (labrador retriever) water (y) X X X² ² ²n n n“ “ “y y y¼ ¼ ¼ (pamela anderson) oil (ˆ)   “ “ “q q qã ã ã (yoga clothing) shopping network (é Ô ) É É É Ï Ï Ï O O O ” ” ” (federal express) *prince harry (-°|) Our model can be generally applied to other search ranking problems, such as ranking using monolingual similarities or ranking for crosslingual/multilingual web search. Another interesting direction is to study the recovery of the optimal document ordering from pairwise ordering using well-founded formalism such as rank aggregation approaches (Dwork et a., 2001; Liu et al., 2007). Furthermore, we may involve more sophisticated monolingual features that do not transfer cross-lingually but are asymmetric for either side, such as clustering, document classification features built from domain taxonomies like DMOZ. Acknowledgments This work is partially supported by the Innovation Technology Fund, Hong Kong (project No.: ITS/182/08). We would like to thank Cheng Niu for the insightful advice and anonymous reviewers for the useful comments. 1082 References Sergey Brin and Lawrence Page. 1998. The Anatomy of a Large-Scale Hypertextual Web Search Engine. In Proceedings of WWW. Chris Buckley and Ellen M. Voorhees. 2000. Evaluating Evaluation Measure Stability. In Proceedings of ACM SIGIR, pp. 33-40. David Burkett and Dan Klein. 2008. Two Languages are Better than One (for Syntactic Parsing). In Proceedings of EMNLP, pp. 877-886. Ming-Wei Chang, Dan Goldwasser Dan Roth and Yuancheng Tu. 2009. Unsupervised Constraint Driven Learning for Transliteration Discovery. In Proceedings of NAACL-HLT. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein. 2001. Introduction to Algorithms (2nd Edition), MIT Press, pp. 350-355. Zhicheng Dou, Ruihua Song, Xiaojie Yuan and Ji-Rong Wen. 2008. Are Click-through Data Adequate for Learning Web Search Rankings? In Proceedings of ACM CIKM, pp. 73-82. Cynthia Dwork, Ravi Kumar, Moni Naor and D. Sivakumar. 2001. Rank Aggregation Methods for the Web. In Proceedings of WWW, pp. 613-622. Ralf Herbrich, Thore Graepel and Klaus Obermayer. 2000. Large Margin Rank Boundaries for Ordinal Regression. Advances in Large Margin Classifiers, The MIT Press, pp. 115-132. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak 2005. Bootstrapping Parsers via Syntactic Projection across Parallel Texts. Natural Language Engineering, 11(3):311325. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2000. 
IR Evaluation Methods for Retrieving Highly Relevant Documents. In Proceedings of ACM SIGIR, pp. 41-48. Thorsten Joachims. 2002. Optimizing Search Engines Using Clickthrough Data. In Proceedings of ACM SIGKDD, pp. 133-142. Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke, Filip Radlinski and Geri Gay 2007. Evaluating the Accuracy of Implicit Feedback from Clicks and Query Reformulations in Web Search. ACM Transaction on Information Systems, 25(2):7. M. Kendall. 1938. A New Measure of Rank Correlation. Biometrika, 30:81-89. Victor Lavrenko, Martin Choquette and Bruce W. Croft. 2002. Cross-Lingual Relevance Models. In Proceedings of ACM SIGIR, pp. 175-182. Tie-Yan Liu, Jun Xu, Tao Qin, Wenying Xiong, and Hang Li. 2007. LECTOR: Benchmark Dataset for Research on Learning to Rank for Information Retrieval. In Proceedings of SIGIR 2007 Workshop on Learning to Rank for Information Retrieval, pp. 310, Amsterdam, The Netherland. Yiqun Liu, Yupeng Fu, Min Zhang, Shaoping Ma and Liyun Ru. 2007. Automatic Search Engine Performance Evaluation with Click-through Data Analysis. In Proceedings of WWW, pp. 1133-1134. Yu-Ting Liu, Tie-Yan Liu, Tao Qin, Zhi-Ming Ma, and Hang Li. 2007. Supervised Rank Aggregation. In Proceedings of WWW, pp. 481-489. Benoit Mathieu, Romanic Besancon and Christian Fluhr. 2004. Multilingual Document Clusters Discovery. In proceedings of Recherche d’Information Assist´ee par Ordinateur (RIAO), pp. 1-10. Greg Pass, Abdur Chowdhury and Cayley Torgeson. 2006. A Picture of Search. In Proceedings of the 1st International Conference on Scalable Information Systems (INFOSCALE), Hong Kong. S. E. Robertson. 1997. Overview of the OKAPI Projects. Journal of Documentation, 53(1):3-7. S. E. Robertson and K. Sparc Jones. 1976. Relevance Weighting of Search Terms. Journal of the American Society of Information Science, 27(3):129-146. Gerard Salton. 1998. Automatic Text Processing. Addison-Wesley Publishing Company. Shai Shalev-Shwartz, Yoram Singer and Nathan Srebro. 2007. Pegasos: Primal Estimated subGrAdient SOlver for SVM. In Proceedings of ICML, pp. 807-814. David A. Smith and Noah A. Smith. 2004. Bilingual Parsing with Factored Estimation: Using English to Parse Korean. In Proceedings of EMNLP. Mark D. Smucker, James Allan, and Ben Carterette. 2007. A Comparison of Statistical Significance Tests for Information Retrieval Evaluation. In Proceedings of ACM CIKM, pp. 623-632. Benjamin Snyder and Regina Barzilay. 2008. Unsupervised Multilingual Learning for Morphological Segmentation. In Proceedings of ACL, pp. 737-745. Yejun Wu and Douglas W. Oard. 2008. Bilingual Topic Aspect Classification with a Few Training Examples. In Proceedings of ACM SIGIR, pp. 203210. Chengxiang Zhai and John Lafferty. 2001. A Study of Smoothing Methods for Language Models Applied to Ad Hoc Information Retrieval. In Proceedings of ACM SIGIR, pp. 334-342. 1083
2009
121
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 109–117, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Knowing the Unseen: Estimating Vocabulary Size over Unseen Samples Suma Bhat Department of ECE University of Illinois [email protected] Richard Sproat Center for Spoken Language Understanding Oregon Health & Science University [email protected] Abstract Empirical studies on corpora involve making measurements of several quantities for the purpose of comparing corpora, creating language models or to make generalizations about specific linguistic phenomena in a language. Quantities such as average word length are stable across sample sizes and hence can be reliably estimated from large enough samples. However, quantities such as vocabulary size change with sample size. Thus measurements based on a given sample will need to be extrapolated to obtain their estimates over larger unseen samples. In this work, we propose a novel nonparametric estimator of vocabulary size. Our main result is to show the statistical consistency of the estimator – the first of its kind in the literature. Finally, we compare our proposal with the state of the art estimators (both parametric and nonparametric) on large standard corpora; apart from showing the favorable performance of our estimator, we also see that the classical Good-Turing estimator consistently underestimates the vocabulary size. 1 Introduction Empirical studies on corpora involve making measurements of several quantities for the purpose of comparing corpora, creating language models or to make generalizations about specific linguistic phenomena in a language. Quantities such as average word length or average sentence length are stable across sample sizes. Hence empirical measurements from large enough samples tend to be reliable for even larger sample sizes. On the other hand, quantities associated with word frequencies, such as the number of hapax legomena or the number of distinct word types changes are strictly sample size dependent. Given a sample we can obtain the seen vocabulary and the seen number of hapax legomena. However, for the purpose of comparison of corpora of different sizes or linguistic phenomena based on samples of different sizes it is imperative that these quantities be compared based on similar sample sizes. We thus need methods to extrapolate empirical measurements of these quantities to arbitrary sample sizes. Our focus in this study will be estimators of vocabulary size for samples larger than the sample available. There is an abundance of estimators of population size (in our case, vocabulary size) in existing literature. Excellent survey articles that summarize the state-of-the-art are available in (Bunge and Fitzpatrick, 1993) and (Gandolfiand Sastri, 2004). Of particular interest to us is the set of estimators that have been shown to model word frequency distributions well. This study proposes a nonparametric estimator of vocabulary size and evaluates its theoretical and empirical performance. For comparison we consider some state-of-the-art parametric and nonparametric estimators of vocabulary size. The proposed non-parametric estimator for the number of unseen elements assumes a regime characterizing word frequency distributions. This work is motivated by a scaling formulation to address the problem of unlikely events proposed in (Baayen, 2001; Khmaladze, 1987; Khmaladze and Chitashvili, 1989; Wagner et al., 2006). 
We also demonstrate that the estimator is strongly consistent under the natural scaling formulation. While compared with other vocabulary size estimates, we see that our estimator performs at least as well as some of the state of the art estimators. 2 Previous Work Many estimators of vocabulary size are available in the literature and a comparison of several non 109 parametric estimators of population size occurs in (Gandolfiand Sastri, 2004). While a definite comparison including parametric estimators is lacking, there is also no known work comparing methods of extrapolation of vocabulary size. Baroni and Evert, in (Baroni and Evert, 2005), evaluate the performance of some estimators in extrapolating vocabulary size for arbitrary sample sizes but limit the study to parametric estimators. Since we consider both parametric and nonparametric estimators here, we consider this to be the first study comparing a set of estimators for extrapolating vocabulary size. Estimators of vocabulary size that we compare can be broadly classified into two types: 1. Nonparametric estimators- here word frequency information from the given sample alone is used to estimate the vocabulary size. A good survey of the state of the art is available in (Gandolfiand Sastri, 2004). In this paper, we compare our proposed estimator with the canonical estimators available in (Gandolfiand Sastri, 2004). 2. Parametric estimators- here a probabilistic model capturing the relation between expected vocabulary size and sample size is the estimator. Given a sample of size n, the sample serves to calculate the parameters of the model. The expected vocabulary for a given sample size is then determined using the explicit relation. The parametric estimators considered in this study are (Baayen, 2001; Baroni and Evert, 2005), (a) Zipf-Mandelbrot estimator (ZM); (b) finite Zipf-Mandelbrot estimator (fZM). In addition to the above estimators we consider a novel non parametric estimator. It is the nonparametric estimator that we propose, taking into account the characteristic feature of word frequency distributions, to which we will turn next. 3 Novel Estimator of Vocabulary size We observe (X1, . . . , Xn), an i.i.d. sequence drawn according to a probability distribution P from a large, but finite, vocabulary Ω. Our goal is in estimating the “essential” size of the vocabulary Ωusing only the observations. In other words, having seen a sample of size n we wish to know, given another sample from the same population, how many unseen elements we would expect to see. Our nonparametric estimator for the number of unseen elements is motivated by the characteristic property of word frequency distributions, the Large Number of Rare Events (LNRE) (Baayen, 2001). We also demonstrate that the estimator is strongly consistent under a natural scaling formulation described in (Khmaladze, 1987). 3.1 A Scaling Formulation Our main interest is in probability distributions P with the property that a large number of words in the vocabulary Ωare unlikely, i.e., the chance any word appears eventually in an arbitrarily long observation is strictly between 0 and 1. The authors in (Baayen, 2001; Khmaladze and Chitashvili, 1989; Wagner et al., 2006) propose a natural scaling formulation to study this problem; specifically, (Baayen, 2001) has a tutorial-like summary of the theoretical work in (Khmaladze, 1987; Khmaladze and Chitashvili, 1989). 
In particular, the authors consider a sequence of vocabulary sets and probability distributions, indexed by the observation size n. Specifically, the observation (X1, . . . , Xn) is drawn i.i.d. from a vocabulary Ωn according to probability Pn. If the probability of a word, say ω ∈Ωn is p, then the probability that this specific word ω does not occur in an observation of size n is (1 −p)n . For ω to be an unlikely word, we would like this probability for large n to remain strictly between 0 and 1. This implies that ˇc n ≤p ≤ˆc n, (1) for some strictly positive constants 0 < ˇc < ˆc < ∞. We will assume throughout this paper that ˇc and ˆc are the same for every word ω ∈Ωn. This implies that the vocabulary size is growing linearly with the observation size: n ˆc ≤|Ωn| ≤n ˇc . This model is called the LNRE zone and its applicability in natural language corpora is studied in detail in (Baayen, 2001). 3.2 Shadows Consider the observation string (X1, . . . , Xn) and let us denote the quantity of interest – the number 110 of word types in the vocabulary Ωn that are not observed – by On. This quantity is random since the observation string itself is. However, we note that the distribution of On is unaffected if one relabels the words in Ωn. This motivates studying of the probabilities assigned by Pn without reference to the labeling of the word; this is done in (Khmaladze and Chitashvili, 1989) via the structural distribution function and in (Wagner et al., 2006) via the shadow. Here we focus on the latter description: Definition 1 Let Xn be a random variable on Ωn with distribution Pn. The shadow of Pn is defined to be the distribution of the random variable Pn({Xn}). For the finite vocabulary situation we are considering, specifying the shadow is exactly equivalent to specifying the unordered components of Pn, viewed as a probability vector. 3.3 Scaled Shadows Converge We will follow (Wagner et al., 2006) and suppose that the scaled shadows, the distribution of n · Pn(Xn), denoted by Qn converge to a distribution Q. As an example, if Pn is a uniform distribution over a vocabulary of size cn, then n · Pn(Xn) equals 1 c almost surely for each n (and hence it converges in distribution). From this convergence assumption we can, further, infer the following: 1. Since the probability of each word ω is lower and upper bounded as in Equation (1), we know that the distribution Qn is non-zero only in the range [ˇc, ˆc]. 2. The “essential” size of the vocabulary, i.e., the number of words of Ωn on which Pn puts non-zero probability can be evaluated directly from the scaled shadow, scaled by 1 n as Z ˆc ˇc 1 y dQn(y). (2) Using the dominated convergence theorem, we can conclude that the convergence of the scaled shadows guarantees that the size of the vocabulary, scaled by 1/n, converges as well: |Ωn| n → Z ˆc ˇc 1 y dQ(y). (3) 3.4 Profiles and their Limits Our goal in this paper is to estimate the size of the underlying vocabulary, i.e., the expression in (2), Z ˆc ˇc n y dQn(y), (4) from the observations (X1, . . . , Xn). We observe that since the scaled shadow Qn does not depend on the labeling of the words in Ωn, a sufficient statistic to estimate (4) from the observation (X1, . . . , Xn) is the profile of the observation: (ϕn 1, . . . , ϕn n), defined as follows. ϕn k is the number of word types that appear exactly k times in the observation, for k = 1, . . . , n. Observe that n X k=1 kϕn k = n, and that V def = n X k=1 ϕn k (5) is the number of observed words. 
Thus, the object of our interest is, On = |Ωn| −V. (6) 3.5 Convergence of Scaled Profiles One of the main results of (Wagner et al., 2006) is that the scaled profiles converge to a deterministic probability vector under the scaling model introduced in Section 3.3. Specifically, we have from Proposition 1 of (Wagner et al., 2006): n X k=1 kϕk n −λk−1 −→0, almost surely, (7) where λk := Z ˇc ˇc yk exp(−y) k! dQ(y) k = 0, 1, 2, . . . . (8) This convergence result suggests a natural estimator for On, expressed in Equation (6). 3.6 A Consistent Estimator of On We start with the limiting expression for scaled profiles in Equation (7) and come up with a natural estimator for On. Our development leading to the estimator is somewhat heuristic and is aimed at motivating the structure of the estimator for the number of unseen words, On. We formally state and prove its consistency at the end of this section. 111 3.6.1 A Heuristic Derivation Starting from (7), let us first make the approximation that kϕk n ≈λk−1, k = 1, . . . , n. (9) We now have the formal calculation n X k=1 ϕn k n ≈ n X k=1 λk−1 k (10) = n X k=1 Z ˆc ˇc e−yyk−1 k! dQ(y) ≈ Z ˆc ˇc e−y y n X k=1 yk k! ! dQ(y)(11) ≈ Z ˆc ˇc e−y y (ey −1) dQ(y) (12) ≈ |Ωn| n − Z ˆc ˇc e−y y dQ(y). (13) Here the approximation in Equation (10) follows from the approximation in Equation (9), the approximation in Equation (11) involves swapping the outer discrete summation with integration and is justified formally later in the section, the approximation in Equation (12) follows because n X k=1 yk k! →ey −1, as n →∞, and the approximation in Equation (13) is justified from the convergence in Equation (3). Now, comparing Equation (13) with Equation (6), we arrive at an approximation for our quantity of interest: On n ≈ Z ˆc ˇc e−y y dQ(y). (14) The geometric series allows us to write 1 y = 1 ˆc ∞ X ℓ=0  1 −y ˆc ℓ , ∀y ∈(0, ˆc) . (15) Approximating this infinite series by a finite summation, we have for all y ∈(ˇc, ˆc), 1 y −1 ˆc M X ℓ=0  1 −y ˆc ℓ = 1 −y ˆc M y ≤ 1 −ˇc ˆc M ˇc . (16) It helps to write the truncated geometric series as a power series in y: 1 ˆc M X ℓ=0  1 −y ˆc ℓ = 1 ˆc M X ℓ=0 ℓ X k=0  ℓ k  (−1)k y ˆc k = 1 ˆc M X k=0 M X ℓ=k  ℓ k ! (−1)k y ˆc k = M X k=0 (−1)k aM k yk, (17) where we have written aM k := 1 ˆck+1 M X ℓ=k  ℓ k ! . Substituting the finite summation approximation in Equation 16 and its power series expression in Equation (17) into Equation (14) and swapping the discrete summation with the integral, we can continue On n ≈ M X k=0 (−1)k aM k Z ˆc ˇc e−yyk dQ(y) = M X k=0 (−1)k aM k k!λk. (18) Here, in Equation (18), we used the definition of λk from Equation (8). From the convergence in Equation (7), we finally arrive at our estimate: On ≈ M X k=0 (−1)k aM k (k + 1)! ϕk+1. (19) 3.6.2 Consistency Our main result is the demonstration of the consistency of the estimator in Equation (19). Theorem 1 For any ǫ > 0, lim n→∞ On −PM k=0 (−1)k aM k (k + 1)! ϕk+1 n ≤ǫ almost surely, as long as M ≥ ˇc log2 e + log2 (ǫˇc) log2 (ˆc −ˇc) −1 −log2 (ˆc). (20) 112 Proof: From Equation (6), we have On n = |Ωn| n − n X k=1 ϕk n = |Ωn| n − n X k=1 λk−1 k − n X k=1 1 k kϕk n −λk−1  . (21) The first term in the right hand side (RHS) of Equation (21) converges as seen in Equation (3). The third term in the RHS of Equation (21) converges to zero, almost surely, as seen from Equation (7). The second term in the RHS of Equation (21), on the other hand, n X k=1 λk−1 k = Z ˆc ˇc e−y y n X k=1 yk k! ! 
dQ(y) → Z ˆc ˇc e−y y (ey −1) dQ(y), n →∞, = Z ˆc ˇc 1 y dQ(y) − Z ˆc ˇc e−y y dQ(y). The monotone convergence theorem justifies the convergence in the second step above. Thus we conclude that lim n→∞ On n = Z ˆc ˇc e−y y dQ(y) (22) almost surely. Coming to the estimator, we can write it as the sum of two terms: M X k=0 (−1)k aM k k!λk (23) + M X k=0 (−1)k aM k k! (k + 1) ϕk+1 n −λk  . The second term in Equation (23) above is seen to converge to zero almost surely as n →∞, using Equation (7) and noting that M is a constant not depending on n. The first term in Equation (23) can be written as, using the definition of λk from Equation (8), Z ˆc ˇc e−y M X k=0 (−1)k aM k yk ! dQ(y). (24) Combining Equations (22) and (24), we have that, almost surely, lim n→∞ On −PM k=0 (−1)k aM k (k + 1)! ϕk+1 n = Z ˆc ˇc e−y 1 y − M X k=0 (−1)k aM k yk ! dQ(y). (25) Combining Equation (16) with Equation (17), we have 0 < 1 y − M X k=0 (−1)k aM k yk ≤ 1 −ˇc ˆc M ˇc . (26) The quantity in Equation (25) can now be upper bounded by, using Equation (26), e−ˇc 1 −ˇc ˆc M ˇc . For M that satisfy Equation (20) this term is less than ǫ. The proof concludes. 3.7 Uniform Consistent Estimation One of the main issues with actually employing the estimator for the number of unseen elements (cf. Equation (19)) is that it involves knowing the parameter ˆc. In practice, there is no natural way to obtain any estimate on this parameter ˆc. It would be most useful if there were a way to modify the estimator in a way that it does not depend on the unobservable quantity ˆc. In this section we see that such a modification is possible, while still retaining the main theoretical performance result of consistency (cf. Theorem 1). The first step to see the modification is in observing where the need for ˆc arises: it is in writing the geometric series for the function 1 y (cf. Equations (15) and (16)). If we could let ˆc along with the number of elements M itself depend on the sample size n, then we could still have the geometric series formula. More precisely, we have 1 y −1 ˆcn Mn X ℓ=0  1 −y ˆcn ℓ = 1 y  1 −y ˆcn Mn → 0, n →∞, as long as ˆcn Mn →0, n →∞. (27) This simple calculation suggests that we can replace ˆc and M in the formula for the estimator (cf. Equation (19)) by terms that depend on n and satisfy the condition expressed by Equation (27). 113 4 Experiments 4.1 Corpora In our experiments we used the following corpora: 1. The British National Corpus (BNC): A corpus of about 100 million words of written and spoken British English from the years 19751994. 2. The New York Times Corpus (NYT): A corpus of about 5 million words. 3. The Malayalam Corpus (MAL): A collection of about 2.5 million words from varied articles in the Malayalam language from the Central Institute of Indian Languages. 4. The Hindi Corpus (HIN): A collection of about 3 million words from varied articles in the Hindi language also from the Central Institute of Indian Languages. 4.2 Methodology We would like to see how well our estimator performs in terms of estimating the number of unseen elements. A natural way to study this is to expose only half of an existing corpus to be observed and estimate the number of unseen elements (assuming the the actual corpus is twice the observed size). We can then check numerically how well our estimator performs with respect to the “true” value. We use a subset (the first 10%, 20%, 30%, 40% and 50%) of the corpus as the observed sample to estimate the vocabulary over twice the sample size. 
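Before listing the estimators compared, the proposed estimator of Equation (19) can be sketched directly from an observed frequency spectrum. The choice of c-hat equal to M follows the setting used below, and the spectrum values are toy numbers rather than counts from any of the corpora.

    from math import comb, factorial

    def unseen_estimate(phi, M, c_hat=None):
        # phi[k] = number of word types seen exactly k times (frequency spectrum).
        # Implements O_n ~ sum_{k=0}^{M} (-1)^k a_k^M (k+1)! phi_{k+1},
        # with a_k^M = (1 / c_hat^{k+1}) * sum_{l=k}^{M} C(l, k); here c_hat = M.
        if c_hat is None:
            c_hat = M
        total = 0.0
        for k in range(M + 1):
            a_k = sum(comb(l, k) for l in range(k, M + 1)) / c_hat ** (k + 1)
            total += (-1) ** k * a_k * factorial(k + 1) * phi.get(k + 1, 0)
        return total

    # Toy spectrum: 5000 hapaxes, 2000 dis legomena, 900 tris, 500 tetrakis legomena.
    phi = {1: 5000, 2: 2000, 3: 900, 4: 500}
    seen_types = sum(phi.values())                    # V in Equation (29)
    for M in (1, 2, 3):
        print(M, seen_types + unseen_estimate(phi, M))  # V + O_n, Equation (28)

For M = 1, 2, 3 this reproduces the closed forms 2(phi1 - phi2), 3/2(phi1 - phi2) + 3/4 phi3, and 4/3(phi1 - phi2) + 8/9(phi3 - phi4/3) listed for the proposed estimator in the next section.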
The following estimators have been compared. Nonparametric: Along with our proposed estimator (in Section 3), the following canonical estimators available in (Gandolfiand Sastri, 2004) and (Baayen, 2001) are studied. 1. Our proposed estimator On (cf. Section 3): since the estimator is rather involved we consider only small values of M (we see empirically that the estimator converges for very small values of M itself) and choose ˆc = M. This allows our estimator for the number of unseen elements to be of the following form, for different values of M: M On 1 2 (ϕ1 −ϕ2) 2 3 2 (ϕ1 −ϕ2) + 3 4ϕ3 3 4 3 (ϕ1 −ϕ2) + 8 9 ϕ3 −ϕ4 3  Using this, the estimator of the true vocabulary size is simply, On + V. (28) Here (cf. Equation (5)) V = n X k=1 ϕn k. (29) In the simulations below, we have considered M large enough until we see numerical convergence of the estimators: in all the cases, no more than a value of 4 is needed for M. For the English corpora, very small values of M suffice – in particular, we have considered the average of the first three different estimators (corresponding to the first three values of M). For the non-English corpora, we have needed to consider M = 4. 2. Gandolfi-Sastri estimator, VGS def = n n −ϕ1 V + ϕ1γ2 , (30) where γ2 = ϕ1 −n −V 2n + p 5n2 + 2n(V −3ϕ1) + (V −ϕ1)2 2n ; 3. Chao estimator, VChao def = V + ϕ2 1 2ϕ2 ; (31) 4. Good-Turing estimator, VGT def = V 1 −ϕ1 n ; (32) 5. “Simplistic” estimator, VSmpl def = V nnew n  ; (33) here the supposition is that the vocabulary size scales linearly with the sample size (here nnew is the new sample size); 6. Baayen estimator, VByn def = V + ϕ1 n  nnew; (34) here the supposition is that the vocabulary growth rate at the observed sample size is given by the ratio of the number of hapax legomena to the sample size (cf. (Baayen, 2001) pp. 50). 114 % error of top 2 and Good−Turing estimates compared % error −40 −30 −20 −10 0 10 Our GT ZM Our GT ZM Our GT ZM Our GT ZM BNC NYT Malayalam Hindi Figure 1: Comparison of error estimates of the 2 best estimators-ours and the ZM, with the GoodTuring estimator using 10% sample size of all the corpora. A bar with a positive height indicates and overestimate and that with a negative height indicates and underestimate. Our estimator outperforms ZM. Good-Turing estimator widely underestimates vocabulary size. Parametric: Parametric estimators use the observations to first estimate the parameters. Then the corresponding models are used to estimate the vocabulary size over the larger sample. Thus the frequency spectra of the observations are only indirectly used in extrapolating the vocabulary size. In this study we consider state of the art parametric estimators, as surveyed by (Baroni and Evert, 2005). We are aided in this study by the availability of the implementations provided by the ZipfR package and their default settings. 5 Results and Discussion The performance of the different estimators as percentage errors of the true vocabulary size using different corpora are tabulated in tables 1-4. We now summarize some important observations. • From the Figure 1, we see that our estimator compares quite favorably with the best of the state of the art estimators. The best of the state of the art estimator is a parametric one (ZM), while ours is a nonparametric estimator. • In table 1 and table 2 we see that our estimate is quite close to the true vocabulary, at all sample sizes. Further, it compares very favorably to the state of the art estimators (both parametric and nonparametric). 
• Again, on the two non-English corpora (tables 3 and 4) we see that our estimator compares favorably with the best estimator of vocabulary size and at some sample sizes even surpasses it. • Our estimator has theoretical performance guarantees and its empirical performance is comparable to that of the state of the art estimators. However, this performance comes at a very small fraction of the computational cost of the parametric estimators. • The state of the art nonparametric GoodTuring estimator wildly underestimates the vocabulary; this is true in each of the four corpora studied and at all sample sizes. 6 Conclusion In this paper, we have proposed a new nonparametric estimator of vocabulary size that takes into account the LNRE property of word frequency distributions and have shown that it is statistically consistent. We then compared the performance of the proposed estimator with that of the state of the art estimators on large corpora. While the performance of our estimator seems favorable, we also see that the widely used classical Good-Turing estimator consistently underestimates the vocabulary size. Although as yet untested, with its computational simplicity and favorable performance, our estimator may serve as a more reliable alternative to the Good-Turing estimator for estimating vocabulary sizes. Acknowledgments This research was partially supported by Award IIS-0623805 from the National Science Foundation. References R. H. Baayen. 2001. Word Frequency Distributions, Kluwer Academic Publishers. Marco Baroni and Stefan Evert. 2001. “Testing the extrapolation quality of word frequency models”, Proceedings of Corpus Linguistics , volume 1 of The Corpus Linguistics Conference Series, P. Danielsson and M. Wagenmakers (eds.). J. Bunge and M. Fitzpatrick. 1993. “Estimating the number of species: a review”, Journal of the American Statistical Association, Vol. 88(421), pp. 364373. 115 Sample True % error w.r.t the true value (% of corpus) value Our GT ZM fZM Smpl Byn Chao GS 10 153912 1 -27 -4 -8 46 23 8 -11 20 220847 -3 -30 -9 -12 39 19 4 -15 30 265813 -2 -30 -9 -11 39 20 6 -15 40 310351 1 -29 -7 -9 42 23 9 -13 50 340890 2 -28 -6 -8 43 24 10 -12 Table 1: Comparison of estimates of vocabulary size for the BNC corpus as percentage errors w.r.t the true value. A negative value indicates an underestimate. Our estimator outperforms the other estimators at all sample sizes. Sample True % error w.r.t the true value (% of corpus) value Our GT ZM fZM Smpl Byn Chao GS 10 37346 1 -24 5 -8 48 28 4 -8 20 51200 -3 -26 0 -11 46 22 -1 -11 30 60829 -2 -25 1 -10 48 23 1 -10 40 68774 -3 -25 0 -10 49 21 -1 -11 50 75526 -2 -25 0 -10 50 21 0 -10 Table 2: Comparison of estimates of vocabulary size for the NYT corpus as percentage errors w.r.t the true value. A negative value indicates an underestimate. Our estimator compares favorably with ZM and Chao. Sample True % error w.r.t the true value (% of corpus) value Our GT ZM fZM Smpl Byn Chao GS 10 146547 -2 -27 -5 -10 9 34 82 -2 20 246723 8 -23 4 -2 19 47 105 5 30 339196 4 -27 0 -5 16 42 93 -1 40 422010 5 -28 1 -4 17 43 95 -1 50 500166 5 -28 1 -4 18 44 94 -2 Table 3: Comparison of estimates of vocabulary size for the Malayalam corpus as percentage errors w.r.t the true value. A negative value indicates an underestimate. Our estimator compares favorably with ZM and GS. 
Sample True % error w.r.t the true value (% of corpus) value Our GT ZM fZM Smpl Byn Chao GS 10 47639 -2 -34 -4 -9 25 32 31 -12 20 71320 7 -30 2 -1 34 43 51 -7 30 93259 2 -33 -1 -5 30 38 42 -10 40 113186 0 -35 -5 -7 26 34 39 -13 50 131715 -1 -36 -6 -8 24 33 40 -14 Table 4: Comparison of estimates of vocabulary size for the Hindi corpus as percentage errors w.r.t the true value. A negative value indicates an underestimate. Our estimator outperforms the other estimators at certain sample sizes. 116 A. Gandolfiand C. C. A. Sastri. 2004. “Nonparametric Estimations about Species not Observed in a Random Sample”, Milan Journal of Mathematics, Vol. 72, pp. 81-105. E. V. Khmaladze. 1987. “The statistical analysis of large number of rare events”, Technical Report, Department of Mathematics and Statistics., CWI, Amsterdam, MS-R8804. E. V. Khmaladze and R. J. Chitashvili. 1989. “Statistical analysis of large number of rate events and related problems”, Probability theory and mathematical statistics (Russian), Vol. 92, pp. 196-245. . P. Santhanam, A. Orlitsky, and K. Viswanathan, “New tricks for old dogs: Large alphabet probability estimation”, in Proc. 2007 IEEE Information Theory Workshop, Sept. 2007, pp. 638–643. A. B. Wagner, P. Viswanath and S. R. Kulkarni. 2006. “Strong Consistency of the Good-Turing estimator”, IEEE Symposium on Information Theory, 2006. 117
2009
13
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 118–126, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Ranking Approach to Stress Prediction for Letter-to-Phoneme Conversion Qing Dou, Shane Bergsma, Sittichai Jiampojamarn and Grzegorz Kondrak Department of Computing Science University of Alberta Edmonton, AB, T6G 2E8, Canada {qdou,bergsma,sj,kondrak}@cs.ualberta.ca Abstract Correct stress placement is important in text-to-speech systems, in terms of both the overall accuracy and the naturalness of pronunciation. In this paper, we formulate stress assignment as a sequence prediction problem. We represent words as sequences of substrings, and use the substrings as features in a Support Vector Machine (SVM) ranker, which is trained to rank possible stress patterns. The ranking approach facilitates inclusion of arbitrary features over both the input sequence and output stress pattern. Our system advances the current state-of-the-art, predicting primary stress in English, German, and Dutch with up to 98% word accuracy on phonemes, and 96% on letters. The system is also highly accurate in predicting secondary stress. Finally, when applied in tandem with an L2P system, it substantially reduces the word error rate when predicting both phonemes and stress. 1 Introduction In many languages, certain syllables in words are phonetically more prominent in terms of duration, pitch, and loudness. This phenomenon is referred to as lexical stress. In some languages, the location of stress is entirely predictable. For example, lexical stress regularly falls on the initial syllable in Hungarian, and on the penultimate syllable in Polish. In other languages, such as English and Russian, any syllable in the word can be stressed. Correct stress placement is important in textto-speech systems because it affects the accuracy of human word recognition (Tagliapietra and Tabossi, 2005; Arciuli and Cupples, 2006). However, the issue has often been ignored in previous letter-to-phoneme (L2P) systems. The systems that do generate stress markers often do not report separate figures on stress prediction accuracy, or they only provide results on a single language. Some only predict primary stress markers (Black et al., 1998; Webster, 2004; Demberg et al., 2007), while those that predict both primary and secondary stress generally achieve lower accuracy (Bagshaw, 1998; Coleman, 2000; Pearson et al., 2000). In this paper, we formulate stress assignment as a sequence prediction problem. We divide each word into a sequence of substrings, and use these substrings as features for a Support Vector Machine (SVM) ranker. For a given sequence length, there is typically only a small number of stress patterns in use. The task of the SVM is to rank the true stress pattern above the small number of acceptable alternatives. This is the first system to predict stress within a powerful discriminative learning framework. By using a ranking approach, we enable the use of arbitrary features over the entire (input) sequence and (output) stress pattern. We show that the addition of a feature for the entire output sequence improves prediction accuracy. Our experiments on English, German, and Dutch demonstrate that our ranking approach substantially outperforms previous systems. The SVM ranker achieves exceptional 96.2% word accuracy on the challenging task of predicting the full stress pattern in English. 
Moreover, when combining our stress predictions with a state-ofthe-art L2P system (Jiampojamarn et al., 2008), we set a new standard for the combined prediction of phonemes and stress. The paper is organized as follows. Section 2 provides background on lexical stress and a task definition. Section 3 presents our automatic stress prediction algorithm. In Section 4, we confirm the power of the discriminative approach with experiments on three languages. Section 5 describes how stress is integrated into L2P conversion. 118 2 Background and Task Definition There is a long history of research into the principles governing lexical stress placement. Zipf (1929) showed that stressed syllables are often those with low frequency in speech, while unstressed syllables are usually very common. Chomsky and Halle (1968) proposed a set of context-sensitive rules for producing English stress from underlying word forms. Due to its importance in text-to-speech, there is also a long history of computational stress prediction systems (Fudge, 1984; Church, 1985; Williams, 1987). While these early approaches depend on human definitions of vowel tensity, syllable weight, word etymology, etc., our work follows a recent trend of purely data-driven approaches to stress prediction (Black et al., 1998; Pearson et al., 2000; Webster, 2004; Demberg et al., 2007). In many languages, only two levels of stress are distinguished: stressed and unstressed. However, some languages exhibit more than two levels of stress. For example, in the English word economic, the first and the third syllable are stressed, with the former receiving weaker emphasis than the latter. In this case, the initial syllable is said to carry a secondary stress. Although each word has only one primary stress, it may have any number of secondary stresses. Predicting the full stress pattern is therefore inherently more difficult than predicting the location of primary stress only. Our objective is to automatically assign primary and, where possible, secondary stress to out-ofvocabulary words. Stress is an attribute of syllables, but syllabification is a non-trivial task in itself (Bartlett et al., 2008). Rather than assuming correct syllabification of the input word, we instead follow Webster (2004) in placing the stress on the vowel which constitutes the nucleus of the stressed syllable. If the syllable boundaries are known, the mapping from the vowel to the corresponding syllable is straightforward. We investigate the assignment of stress to two related but different entities: the spoken word (represented by its phonetic transcription), and the written word (represented by its orthographic form). Although stress is a prosodic feature, assigning stress to written words (“stressed orthography”) has been utilized as a preprocessing stage for the L2P task (Webster, 2004). This preprocessing is motivated by two factors. First, stress greatly influences the pronunciation of vowels in English (c.f., allow vs. alloy). Second, since phoneme predictors typically utilize only local context around a letter, they do not incorporate the global, long-range information that is especially predictive of stress, such as penultimate syllable emphasis associated with the suffix -ation. By taking stressed orthography as input, the L2P system is able to implicitly leverage morphological information beyond the local context. Indicating stress on letters can also be helpful to humans, especially second-language learners. 
In some languages, such as Spanish, orthographic markers are obligatory in words with irregular stress. The location of stress is often explicitly marked in textbooks for students of Russian. In both languages, the standard method of indicating stress is to place an acute accent above the vowel bearing primary stress, e.g., adi´os. The secondary stress in English can be indicated with a grave accent (Coleman, 2000), e.g., pr`ec´ede. In summary, our task is to assign primary and secondary stress markers to stress-bearing vowels in an input word. The input word may be either phonemes or letters. If a stressed vowel is represented by more than one letter, we adopt the convention of marking the first vowel of the vowel sequence, e.g., m´eeting. In this way, we are able to focus on the task of stress prediction, without having to determine at the same time the exact syllable boundaries, or whether a vowel letter sequence represents one or more spoken vowels (e.g., beating vs. be-at-i-fy). 3 Automatic Stress Prediction Our stress assignment system maps a word, w, to a stressed-form of the word, ¯w. We formulate stress assignment as a sequence prediction problem. The assignment is made in three stages: (1) First, we map words to substrings (s), the basic units in our sequence (Section 3.1). (2) Then, a particular stress pattern (t) is chosen for each substring sequence. We use a support vector machine (SVM) to rank the possible patterns for each sequence (Section 3.2). (3) Finally, the stress pattern is used to produce the stressed-form of the word (Section 3.3). Table 1 gives examples of words at each stage of the algorithm. We discuss each step in more detail. 119 Word Substrings Pattern Word’ w → s → t → ¯w worker →wor-ker → 1-0 →w´orker overdo →ov-ver-do →2-0-1 →`overd´o react → re-ac → 0-1 → re´act æbstrækt → æb-ræk → 0-1 →æbstr ´ækt prisid → ri-sid → 2-1 → pr`ıs´ıd Table 1: The steps in our stress prediction system (with orthographic and phonetic prediction examples): (1) word splitting, (2) support vector ranking of stress patterns, and (3) pattern-to-vowel mapping. 3.1 Word Splitting The first step in our approach is to represent the word as a sequence of N individual units: w → s = {s1-s2-...-sN}. These units are used to define the features and outputs used by the SVM ranker. Although we are ultimately interested in assigning stress to individual vowels in the phoneme and letter sequence, it is beneficial to represent the task in units larger than individual letters. Our substrings are similar to syllables; they have a vowel as their nucleus and include consonant context. By approximating syllables, our substring patterns will allow us to learn recurrent stress regularities, as well as dependencies between neighboring substrings. Since determining syllable breaks is a non-trivial task, we instead adopt the following simple splitting technique. Each vowel in the word forms the nucleus of a substring. Any single preceding or following consonant is added to the substring unit. Thus, each substring consists of at most three symbols (Table 1). Using shorter substrings reduces the sparsity of our training data; words like cryer, dryer and fryer are all mapped to the same form: ry-er. The SVM can thus generalize from observed words to similarly-spelled, unseen examples. Since the number of vowels equals the number of syllables in the phonetic form of the word, applying this approach to phonemes will always generate the correct number of syllables. 
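A minimal sketch of this vowel-centred splitting is given below; the orthographic vowel inventory (including y) is an illustrative assumption, and the actual system applies the same idea to phoneme sequences as well.

    VOWELS = "aeiouy"               # illustrative orthographic vowel inventory

    def split_word(word):
        # Each vowel is the nucleus of one substring; a single immediately
        # preceding and a single following consonant (if any) are attached,
        # so a consonant between two vowels is shared by both substrings.
        units = []
        for i, ch in enumerate(word):
            if ch not in VOWELS:
                continue
            start = i - 1 if i > 0 and word[i - 1] not in VOWELS else i
            end = i + 1 if i + 1 < len(word) and word[i + 1] not in VOWELS else i
            units.append(word[start:end + 1])
        return units

    print(split_word("worker"))     # ['wor', 'ker']
    print(split_word("overdo"))     # ['ov', 'ver', 'do']
    print(split_word("pronounce"))  # ['ron', 'no', 'un', 'ce']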
For letters, splitting may result in a different number of units than the true syllabification, e.g., pronounce →ron-no-un-ce. This does not prevent the system from producing the correct stress assignment after the pattern-to-vowel mapping stage (Section 3.3) is complete. 3.2 Stress Prediction with SVM Ranking After creating a sequence of substring units, s = {s1-s2-...-sN}, the next step is to choose an output sequence, t = {t1-t2-...-tN}, that encodes whether each unit is stressed or unstressed. We use the number ‘1’ to indicate that a substring receives primary stress, ‘2’ for secondary stress, and ‘0’ to indicate no stress. We call this output sequence the stress pattern for a word. Table 1 gives examples of words, substrings, and stress patterns. We use supervised learning to train a system to predict the stress pattern. We generate training (s, t) pairs in the obvious way from our stressmarked training words, ¯w. That is, we first extract the letter/phoneme portion, w, and use it to create the substrings, s. We then create the stress pattern, t, using ¯w’s stress markers. Given the training pairs, any sequence predictor can be used, for example a Conditional Random Field (CRF) (Lafferty et al., 2001) or a structured perceptron (Collins, 2002). However, we can take advantage of a unique property of our problem to use a more expressive framework than is typically used in sequence prediction. The key observation is that the output space of possible stress patterns is actually fairly limited. Clopper (2002) shows that people have strong preferences for particular sequences of stress, and this is confirmed by our training data (Section 4.1). In English, for example, we find that for each set of spoken words with the same number of syllables, there are no more than fifteen different stress patterns. In total, among 55K English training examples, there are only 70 different stress patterns. In both German and Dutch there are only about 50 patterns in 250K examples.1 Therefore, for a particular input sequence, we can safely limit our consideration to only the small set of output patterns of the same length. Thus, unlike typical sequence predictors, we do not have to search for the highest-scoring output according to our model. We can enumerate the full set of outputs and simply choose the highestscoring one. This enables a more expressive representation. We can define arbitrary features over the entire output sequence. In a typical CRF or structured perceptron approach, only output features that can be computed incrementally during search are used (e.g. Markov transition features that permit Viterbi search). Since search is not 1See (Dou, 2009) for more details. 120 needed here, we can exploit longer-range features. Choosing the highest-scoring output from a fixed set is a ranking problem, and we provide the full ranking formulation below. Unlike previous ranking approaches (e.g. Collins and Koo (2005)), we do not rely on a generative model to produce a list of candidates. Candidates are chosen in advance from observed training patterns. 3.2.1 Ranking Formulation For a substring sequence, s, of length N, our task is to select the correct output pattern from the set of all length-N patterns observed in our training data, a set we denote as TN. We score each possible input-output combination using a linear model. Each substring sequence and possible output pattern, (s, t), is represented with a set of features, Φ(s, t). 
The score for a particular (s, t) combination is a weighted sum of these features, λ·Φ(s, t). The specific features we use are described in Section 3.2.2. Let tj be the stress pattern for the jth training sequence sj, both of length N. At training time, the weights, λ, are chosen such that for each sj, the correct output pattern receives a higher score than other patterns of the same length: ∀u ∈ TN, u ̸= tj, λ · Φ(sj, tj) > λ · Φ(sj, u) (1) The set of constraints generated by Equation 1 are called rank constraints. They are created separately for every (sj, tj) training pair. Essentially, each training pair is matched with a set of automatically-created negative examples. Each negative has an incorrect, but plausible, stress pattern, u. We adopt a Support Vector Machine (SVM) solution to these ranking constraints as described by Joachims (2002). The learner finds the weights that ensure a maximum (soft) margin separation between the correct scores and the competitors. We use an SVM because it has been successful in similar settings (learning with thousands of sparse features) for both ranking and classification tasks, and because an efficient implementation is available (Joachims, 1999). At test time we simply score each possible output pattern using the learned weights. That is, for an input sequence s of length N, we compute λ·Φ(s, t) for all t ∈TN, and we take the highest scoring t as our output. Note that because we only Substring si, ti si, i, ti Context si−1, ti si−1si, ti si+1, ti sisi+1, ti si−1sisi+1, ti Stress Pattern t1t2 . . . tN Table 2: Feature Template consider previously-observed output patterns, it is impossible for our system to produce a nonsensical result, such as having two primary stresses in one word. Standard search-based sequence predictors need to be specially augmented with hard constraints in order to prevent such output (Roth and Yih, 2005). 3.2.2 Features The power of our ranker to identify the correct stress pattern depends on how expressive our features are. Table 2 shows the feature templates used to create the features Φ(s, t) for our ranker. We use binary features to indicate whether each combination occurs in the current (s,t) pair. For example, if a substring tion is unstressed in a (s, t) pair, the Substring feature {si, ti = tion,0} will be true.2 In English, often the penultimate syllable is stressed if the final syllable is tion. We can capture such a regularity with the Context feature si+1, ti. If the following syllable is tion and the current syllable is stressed, the feature {si+1, ti = tion,1} will be true. This feature will likely receive a positive weight, so that output sequences with a stress before tion receive a higher rank. Finally, the full Stress Pattern serves as an important feature. Note that such a feature would not be possible in standard sequence predictors, where such information must be decomposed into Markov transition features like ti−1ti. In a ranking framework, we can score output sequences using their full output pattern. Thus we can easily learn the rules in languages with regular stress rules. For languages that do not have a fixed stress rule, preferences for particular patterns can be learned using this feature. 2tion is a substring composed of three phonemes but we use its orthographic representation here for clarity. 121 3.3 Pattern-to-Vowel Mapping The final stage of our system uses the predicted pattern t to create the stress-marked form of the word, ¯w. 
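Before turning to the pattern-to-vowel mapping, the feature template of Table 2 can be made concrete. The sketch below generates the binary indicator features for one (s, t) pair; the string encoding of feature names and the '#' padding symbol for out-of-range positions are conventions adopted here, not the paper's.

def phi(substrings, pattern):
    """Active binary features for one (s, t) pair, following Table 2:
    Substring features (s_i, t_i) and (s_i, i, t_i); Context features over
    the neighbouring units; and the full Stress Pattern t_1...t_N."""
    s = ["#"] + list(substrings) + ["#"]   # '#' padding is an assumption
    feats = set()
    for i, t_i in enumerate(pattern, start=1):
        feats.add(f"sub:{s[i]}:{t_i}")                        # s_i, t_i
        feats.add(f"subpos:{s[i]}:{i}:{t_i}")                 # s_i, i, t_i
        feats.add(f"prev:{s[i-1]}:{t_i}")                     # s_i-1, t_i
        feats.add(f"prevcur:{s[i-1]}|{s[i]}:{t_i}")           # s_i-1 s_i, t_i
        feats.add(f"next:{s[i+1]}:{t_i}")                     # s_i+1, t_i
        feats.add(f"curnext:{s[i]}|{s[i+1]}:{t_i}")           # s_i s_i+1, t_i
        feats.add(f"tri:{s[i-1]}|{s[i]}|{s[i+1]}:{t_i}")      # s_i-1 s_i s_i+1, t_i
    feats.add("pattern:" + "-".join(pattern))                 # t_1 t_2 ... t_N
    return feats

In the tion example above, the feature {si+1, ti = tion, 1} would surface here as the indicator next:tion:1, which is exactly the kind of feature a positive weight can reward.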
Note the number of substrings created by our splitting method always equals the number of vowels in the word. We can thus simply map the indicator numbers in t to markers on their corresponding vowels to produce the stressed word. For our example, pronounce →ron-no-un-ce, if the SVM chooses the stress pattern, 0-1-00, we produce the correct stress-marked word, pron´ounce. If we instead stress the third vowel, 00-1-0, we produce an incorrect output, prono´unce. 4 Stress Prediction Experiments In this section, we evaluate our ranking approach to stress prediction by assigning stress to spoken and written words in three languages: English, German, and Dutch. We first describe the data and the various systems we evaluate, and then provide the results. 4.1 Data The data is extracted from CELEX (Baayen et al., 1996). Following previous work on stress prediction, we randomly partition the data into 85% for training, 5% for development, and 10% for testing. To make results on German and Dutch comparable with English, we reduce the training, development, and testing set by 80% for each. After removing all duplicated items as well as abbreviations, phrases, and diacritics, each training set contains around 55K words. In CELEX, stress is labeled on syllables in the phonetic form of the words. Since our objective is to assign stress markers to vowels (as described in Section 2) we automatically map the stress markers from the stressed syllables in the phonetic forms onto phonemes and letters representing vowels. For phonemes, the process is straightforward: we move the stress marker from the beginning of a syllable to the phoneme which constitutes the nucleus of the syllable. For letters, we map the stress from the vowel phoneme onto the orthographic forms using the ALINE algorithm (Dwyer and Kondrak, 2009). The stress marker is placed on the first letter within the syllable that represents a vowel sound.3 3Our stand-off stress annotations for English, German, and Dutch CELEX orthographic data can be downloaded at: http://www.cs.ualberta.ca/˜kondrak/celex.html. System Eng Ger Dut P+S P P P SUBSTRING 96.2 98.0 97.1 93.1 ORACLESYL 95.4 96.4 97.1 93.2 TOPPATTERN 66.8 68.9 64.1 60.8 Table 3: Stress prediction word accuracy (%) on phonemes for English, German, and Dutch. P: predicting primary stress only. P+S: primary and secondary. CELEX also provides secondary stress annotation for English. We therefore evaluate on both primary and secondary stress (P+S) in English and on primary stress assignment alone (P) for English, German, and Dutch. 4.2 Comparison Approaches We evaluate three different systems on the letter and phoneme sequences in the experimental data: 1) SUBSTRING is the system presented in Section 3. It uses the vowel-based splitting method, followed by SVM ranking. 2) ORACLESYL splits the input word into syllables according to the CELEX gold-standard, before applying SVM ranking. The output pattern is evaluated directly against the goldstandard, without pattern-to-vowel mapping. 3) TOPPATTERN is our baseline system. It uses the vowel-based splitting method to produce a substring sequence of length N. Then it simply chooses the most common stress pattern among all the stress patterns of length N. SUBSTRING and ORACLESYL use scores produced by an SVM ranker trained on the training data. We employ the ranking mode of the popular learning package SVMlight (Joachims, 1999). 
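Returning briefly to the pattern-to-vowel mapping of Section 3.3, the step is a direct positional mapping from pattern entries to vowels. The sketch below uses Unicode combining accents (acute for primary, grave for secondary); the choice of marker characters and the orthographic vowel set are assumptions made for illustration.

import unicodedata

VOWELS = set("aeiouy")                              # assumption
MARKS = {"1": "\u0301", "2": "\u0300", "0": ""}     # combining acute / grave

def apply_stress(word, pattern, vowels=VOWELS):
    """Place the k-th indicator of the pattern on the k-th vowel of the word."""
    out, k = [], 0
    for ch in word:
        out.append(ch)
        if ch in vowels:
            out.append(MARKS[pattern[k]])
            k += 1
    return unicodedata.normalize("NFC", "".join(out))

print(apply_stress("pronounce", ("0", "1", "0", "0")))  # pronóunce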
In each case, we learn a linear kernel ranker on the training set stress patterns and tune the parameter that trades-off training error and margin on the development set. We evaluate the systems using word accuracy: the percent of words for which the output form of the word, ¯w, matches the gold standard. 4.3 Results Table 3 provides results on English, German, and Dutch phonemes. Overall, the performance of our automatic stress predictor, SUBSTRING, is excellent. It achieves 98.0% accuracy for predicting 122 System Eng Ger Dut P+S P P P SUBSTRING 93.5 95.1 95.9 91.0 ORACLESYL 94.6 96.0 96.6 92.8 TOPPATTERN 65.5 67.6 64.1 60.8 Table 4: Stress prediction word accuracy (%) on letters for English, German, and Dutch. P: predicting primary stress only. P+S: primary and secondary. primary stress in English, 97.1% in German, and 93.1% in Dutch. It also predicts both primary and secondary stress in English with high accuracy, 96.2%. Performance is much higher than our baseline accuracy, which is between 60% and 70%. ORACLESYL, with longer substrings and hence sparser data, does not generally improve performance. This indicates that perfect syllabification is unnecessary for phonetic stress assignment. Our system is a major advance over the previous state-of-the-art in phonetic stress assignment. For predicting stressed/unstressed syllables in English, Black et al. (1998) obtained a persyllable accuracy of 94.6%. We achieve 96.2% per-word accuracy for predicting both primary and secondary stress. Others report lower numbers on English phonemes. Bagshaw (1998) obtained 65%-83.3% per-syllable accuracy using Church (1985)’s rule-based system. For predicting both primary and secondary stress, Coleman (2000) and Pearson et al. (2000) report 69.8% and 81.0% word accuracy, respectively. The performance on letters (Table 4) is also quite encouraging. SUBSTRING predicts primary stress with accuracy above 95% for English and German, and equal to 91% in Dutch. Performance is 1-3% lower on letters than on phonemes. On the other hand, the performance of ORACLESYL drops much less on letters. This indicates that most of SUBSTRING’s errors are caused by the splitting method. Letter vowels may or may not represent spoken vowels. By creating a substring for every vowel letter we may produce an incorrect number of syllables. Our pattern feature is therefore less effective. Nevertheless, SUBSTRING’s accuracy on letters also represents a clear improvement over previous work. Webster (2004) reports 80.3% word accuracy on letters in English and 81.2% in German. The most comparable work is Demberg et al. 84 86 88 90 92 94 96 98 100 10000 100000 Word Accuracy (%) Number of training examples German Dutch English Figure 1: Stress prediction accuracy on letters. (2007), which achieves 90.1% word accuracy on letters in German CELEX, assuming perfect letter syllabification. In order to reproduce their strict experimental setup, we re-partition the full set of German CELEX data to ensure that no overlap of word stems exists between the training and test sets. Using the new data sets, our system achieves a word accuracy of 92.3%, a 2.2% improvement over Demberg et al. (2007)’s result. Moreover, if we also assume perfect syllabification, the accuracy is 94.3%, a 40% reduction in error rate. We performed a detailed analysis to understand the strong performance of our system. 
First of all, note that an error could happen if a test-set stress pattern was not observed in the training data; its correct stress pattern would not be considered as an output. In fact, no more than two test errors in any test set were so caused. This strongly justifies the reduced set of outputs used in our ranking formulation. We also tested all systems with the Stress Pattern feature removed. Results were worse in all cases. As expected, it is most valuable for predicting primary and secondary stress. On English phonemes, accuracy drops from 96.2% to 95.3% without it. On letters, it drops from 93.5% to 90.0%. The gain from this feature also validates our ranking framework, as such arbitrary features over the entire output sequence can not be used in standard search-based sequence prediction. Finally, we examined the relationship between training data size and performance by plotting learning curves for letter stress accuracy (Figure 1). Unlike the tables above, here we use the 123 full set of data in Dutch and German CELEX to create the largest-possible training sets (255K examples). None of the curves are levelling off; performance grows log-linearly across the full range. 5 Lexical stress and L2P conversion In this section, we evaluate various methods of combining stress prediction with phoneme generation. We first describe the specific system that we use for letter-to-phoneme (L2P) conversion. We then discuss the different ways stress prediction can be integrated with L2P, and define the systems used in our experiments. Finally, we provide the results. 5.1 The L2P system We combine stress prediction with a state-of-theart L2P system (Jiampojamarn et al., 2008). Like our stress ranker, their system is a data-driven sequence predictor that is trained with supervised learning. The score for each output sequence is a weighted combination of features. The feature weights are trained using the Margin Infused Relaxed Algorithm (MIRA) (Crammer and Singer, 2003), a powerful online discriminative training framework. Like other recent L2P systems (Bisani and Ney, 2002; Marchand and Damper, 2007; Jiampojamarn et al., 2007), this approach does not generate stress, nor does it consider stress when it generates phonemes. For L2P experiments, we use the same training, testing, and development data as was used in Section 4. For all experiments, we use the development set to determine at which iteration to stop training in the online algorithm. 5.2 Combining stress and phoneme generation Various methods have been used for combining stress and phoneme generation. Phonemes can be generated without regard to stress, with stress assigned as a post-process (Bagshaw, 1998; Coleman, 2000). Both van den Bosch (1997) and Black et al. (1998) argue that stress should be predicted at the same time as phonemes. They expand the output set to distinguish between stressed and unstressed phonemes. Similarly, Demberg et al. (2007) produce phonemes, stress, and syllableboundaries within a single joint n-gram model. Pearson et al. (2000) generate phonemes and stress together by jointly optimizing a decision-tree phoneme-generator and a stress predictor based on stress pattern counts. In contrast, Webster (2004) first assigns stress to letters, creating an expanded input set, and then predicts both phonemes and stress jointly. The system marks stress on letter vowels by determining the correspondence between affixes and stress in written words. 
Following the above approaches, we can expand the input or output symbols of our L2P system to include stress. However, since both decision tree systems and our L2P predictor utilize only local context, they may produce invalid global output. One option, used by Demberg et al. (2007), is to add a constraint to the output generation, requiring each output sequence to have exactly one primary stress. We enhance this constraint, based on the observation that the number of valid output sequences is fairly limited (Section 3.2). The modified system produces the highest-scoring sequence such that the output’s corresponding stress pattern has been observed in our training data. We call this the stress pattern constraint. This is a tighter constraint than having only one primary stress.4 Another advantage is that it provides some guidance for the assignment of secondary stress. Inspired by the aforementioned strategies, we evaluate the following approaches: 1) JOINT: The L2P system’s input sequence is letters, the output sequence is phonemes+stress. 2) JOINT+CONSTR: Same as JOINT, except it selects the highest scoring output that obeys the stress pattern constraint. 3) POSTPROCESS: The L2P system’s input is letters, the output is phonemes. It then applies the SVM stress ranker (Section 3) to the phonemes to produce the full phoneme+stress output. 4) LETTERSTRESS: The L2P system’s input is letters+stress, the output is phonemes+stress. It creates the stress-marked letters by applying the SVM ranker to the input letters as a preprocess. 5) ORACLESTRESS: The same input/output as LETTERSTRESS, except it uses the goldstandard stress on letters (Section 4.1). 4In practice, the L2P system generates a top-N list, and we take the highest-scoring output on the list that satisfies the constraint. If none satisfy the constraint, we take the top output that has only one primary stress. 124 System Eng Ger Dut P+S P P P JOINT 78.9 80.0 86.0 81.1 JOINT+CONSTR 84.6 86.0 90.8 88.7 POSTPROCESS 86.2 87.6 90.9 88.8 LETTERSTRESS 86.5 87.2 90.1 86.6 ORACLESTRESS 91.4 91.4 92.6 94.5 Festival 61.2 62.5 71.8 65.1 Table 5: Combined phoneme and stress prediction word accuracy (%) for English, German, and Dutch. P: predicting primary stress only. P+S: primary and secondary. Note that while the first approach uses only local information to make predictions (features within a context window around the current letter), systems 2 to 5 leverage global information in some manner: systems 3 and 4 use the predictions of our stress ranker, while 2 uses a global stress pattern constraint.5 We also generated stress and phonemes using the popular Festival Speech Synthesis System6 (version 1.96, 2004) and report its accuracy. 5.3 Results Word accuracy results for predicting both phonemes and stress are provided in Table 5. First of all, note that the JOINT approach, which simply expands the output set, is 4%8% worse than all other comparison systems across the three languages. These results clearly indicate the drawbacks of predicting stress using only local information. In English, both LETTERSTRESS and POSTPROCESS perform best, while POSTPROCESS and the constrained system are highest on German and Dutch. Results using the oracle letter stress show that given perfect stress assignment on letters, phonemes and stress can be predicted very accurately, in all cases above 91%. We also found that the phoneme prediction accuracy alone (i.e., without stress) is quite similar for all the systems. 
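The stress pattern constraint of footnote 4 amounts to a simple filter over the L2P system's top-N list. A minimal sketch follows; the candidate list is assumed to be sorted best-first, the stress_pattern helper that extracts a candidate's stress pattern is a placeholder, and the final fallback (returning the 1-best when nothing qualifies) is an assumption consistent with the footnote.

def apply_pattern_constraint(topn, observed_patterns, stress_pattern):
    """Select from the L2P system's top-N list the highest-scoring candidate
    whose stress pattern was observed in training; `observed_patterns` is a
    set of training patterns and `stress_pattern(candidate)` a placeholder."""
    for cand in topn:
        if tuple(stress_pattern(cand)) in observed_patterns:
            return cand
    # Fallback from footnote 4: best candidate with exactly one primary stress.
    for cand in topn:
        if list(stress_pattern(cand)).count("1") == 1:
            return cand
    return topn[0]   # assumption: fall back to the 1-best if nothing qualifies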
The gains over JOINT on combined stress and phoneme accuracy are almost entirely due to more accurate stress assignment. Utilizing the oracle stress on letters markedly improves phoneme prediction in English 5This constraint could also help the other systems. However, since they already use global information, it yields only marginal improvements. 6http://www.cstr.ed.ac.uk/projects/festival/ (from 88.8% to 91.4%). This can be explained by the fact that English vowels are often reduced to schwa when unstressed (Section 2). Predicting both phonemes and stress is a challenging task, and each of our globally-informed systems represents a major improvement over previous work. The accuracy of Festival is much lower even than our JOINT approach, but the relative performance on the different languages is quite similar. A few papers report accuracy on the combined stress and phoneme prediction task. The most directly comparable work is van den Bosch (1997), which also predicts primary and secondary stress using English CELEX data. However, the reported word accuracy is only 62.1%. Three other papers report word accuracy on phonemes and stress, using different data sets. Pearson et al. (2000) report 58.5% word accuracy for predicting phonemes and primary/secondary stress. Black et al. (1998) report 74.6% word accuracy in English, while Webster (2004) reports 68.2% on English and 82.9% in German (all primary stress only). Finally, Demberg et al. (2007) report word accuracy on predicting phonemes, stress, and syllabification on German CELEX data. They achieve 86.3% word accuracy. 6 Conclusion We have presented a discriminative ranking approach to lexical stress prediction, which clearly outperforms previously developed systems. The approach is largely language-independent, applicable to both orthographic and phonetic representations, and flexible enough to handle multiple stress levels. When combined with an existing L2P system, it achieves impressive accuracy in generating pronunciations together with their stress patterns. In the future, we will investigate additional features to leverage syllabic and morphological information, when available. Kernel functions could also be used to automatically create a richer feature space; preliminary experiments have shown gains in performance using polynomial and RBF kernels with our stress ranker. Acknowledgements This research was supported by the Natural Sciences and Engineering Research Council of Canada, the Alberta Ingenuity Fund, and the Alberta Informatics Circle of Research Excellence. 125 References Joanne Arciuli and Linda Cupples. 2006. The processing of lexical stress during visual word recognition: Typicality effects and orthographic correlates. Quarterly Journal of Experimental Psychology, 59(5):920–948. Harald Baayen, Richard Piepenbrock, and Leon Gulikers. 1996. The CELEX2 lexical database. LDC96L14. Paul C. Bagshaw. 1998. Phonemic transcription by analogy in text-to-speech synthesis: Novel word pronunciation and lexicon compression. Computer Speech and Language, 12(2):119–142. Susan Bartlett, Grzegorz Kondrak, and Colin Cherry. 2008. Automatic syllabification with structured SVMs for letter-to-phoneme conversion. In ACL08: HLT, pages 568–576. Maximilian Bisani and Hermann Ney. 2002. Investigations on joint-multigram models for grapheme-tophoneme conversion. In ICSLP, pages 105–108. Alan W Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules. In The 3rd ESCA Workshop on Speech Synthesis, pages 77–80. 
Noam Chomsky and Morris Halle. 1968. The sound pattern of English. New York: Harper and Row. Kenneth Church. 1985. Stress assignment in letter to sound rules for speech synthesis. In ACL, pages 246–253. Cynthia G. Clopper. 2002. Frequency of stress patterns in English: A computational analysis. IULC Working Papers Online. John Coleman. 2000. Improved prediction of stress in out-of-vocabulary words. In IEEE Seminar on the State of the Art in Speech Synthesis. Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31(1):25–70. Michael Collins. 2002. Discriminative training methods for Hidden Markov Models: Theory and experiments with perceptron algorithms. In EMNLP, pages 1–8. Koby Crammer and Yoram Singer. 2003. Ultraconservative online algorithms for multiclass problems. Journal of Machine Learning Research, 3:951–991. Vera Demberg, Helmut Schmid, and Gregor M¨ohler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In ACL, pages 96–103. Qing Dou. 2009. An SVM ranking approach to stress assignment. Master’s thesis, University of Alberta. Kenneth Dwyer and Grzegorz Kondrak. 2009. Reducing the annotation effort for letter-to-phoneme conversion. In ACL-IJCNLP. Erik C. Fudge. 1984. English word-stress. London: Allen and Unwin. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and Hidden Markov Models to letter-to-phoneme conversion. In NAACL-HLT 2007, pages 372–379. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In ACL08: HLT, pages 905–913. Thorsten Joachims. 1999. Making large-scale Support Vector Machine learning practical. In B. Sch¨olkopf and C. Burges, editors, Advances in Kernel Methods: Support Vector Machines, pages 169–184. MIT-Press. Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In KDD, pages 133–142. John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional Random Fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282–289. Yannick Marchand and Robert I. Damper. 2007. Can syllabification improve pronunciation by analogy of English? Natural Language Engineering, 13(1):1– 24. Steve Pearson, Roland Kuhn, Steven Fincke, and Nick Kibre. 2000. Automatic methods for lexical stress assignment and syllabification. In ICSLP, pages 423–426. Dan Roth and Wen-tau Yih. 2005. Integer linear programming inference for conditional random fields. In ICML, pages 736–743. Lara Tagliapietra and Patrizia Tabossi. 2005. Lexical stress effects in Italian spoken word recognition. In The XXVII Annual Conference of the Cognitive Science Society, pages 2140–2144. Antal van den Bosch. 1997. Learning to pronounce written words: A study in inductive language learning. Ph.D. thesis, Universiteit Maastricht. Gabriel Webster. 2004. Improving letterto-pronunciation accuracy with automatic morphologically-based stress prediction. In ICSLP, pages 2573–2576. Briony Williams. 1987. Word stress assignment in a text-to-speech synthesis system for British English. Computer Speech and Language, 2:235–272. George Kingsley Zipf. 1929. Relative frequency as a determinant of phonetic change. Harvard Studies in Classical Philology, 15:1–95. 126
2009
14
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 127–135, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Reducing the Annotation Effort for Letter-to-Phoneme Conversion Kenneth Dwyer and Grzegorz Kondrak Department of Computing Science University of Alberta Edmonton, AB, Canada, T6G 2E8 {dwyer,kondrak}@cs.ualberta.ca Abstract Letter-to-phoneme (L2P) conversion is the process of producing a correct phoneme sequence for a word, given its letters. It is often desirable to reduce the quantity of training data — and hence human annotation — that is needed to train an L2P classifier for a new language. In this paper, we confront the challenge of building an accurate L2P classifier with a minimal amount of training data by combining several diverse techniques: context ordering, letter clustering, active learning, and phonetic L2P alignment. Experiments on six languages show up to 75% reduction in annotation effort. 1 Introduction The task of letter-to-phoneme (L2P) conversion is to produce a correct sequence of phonemes, given the letters that comprise a word. An accurate L2P converter is an important component of a text-to-speech system. In general, a lookup table does not suffice for L2P conversion, since out-of-vocabulary words (e.g., proper names) are inevitably encountered. This motivates the need for classification techniques that can predict the phonemes for an unseen word. Numerous studies have contributed to the development of increasingly accurate L2P systems (Black et al., 1998; Kienappel and Kneser, 2001; Bisani and Ney, 2002; Demberg et al., 2007; Jiampojamarn et al., 2008). A common assumption made in these works is that ample amounts of labelled data are available for training a classifier. Yet, in practice, this is the case for only a small number of languages. In order to train an L2P classifier for a new language, we must first annotate words in that language with their correct phoneme sequences. As annotation is expensive, we would like to minimize the amount of effort that is required to build an adequate training set. The objective of this work is not necessarily to achieve state-of-the-art performance when presented with large amounts of training data, but to outperform other approaches when training data is limited. This paper proposes a system for training an accurate L2P classifier while requiring as few annotated words as possible. We employ decision trees as our supervised learning method because of their transparency and flexibility. We incorporate context ordering into a decision tree learner that guides its tree-growing procedure towards generating more intuitive rules. A clustering over letters serves as a back-off model in cases where individual letter counts are unreliable. An active learning technique is employed to request the phonemes (labels) for the words that are expected to be the most informative. Finally, we apply a novel L2P alignment technique based on phonetic similarity, which results in impressive gains in accuracy without relying on any training data. Our empirical evaluation on several L2P datasets demonstrates that significant reductions in annotation effort are indeed possible in this domain. Individually, all four enhancements improve the accuracy of our decision tree learner. The combined system yields savings of up to 75% in the number of words that have to be labelled, and reductions of at least 52% are observed on all the datasets. 
This is achieved without any additional tuning for the various languages. The paper is organized as follows. Section 2 explains how supervised learning for L2P conversion is carried out with decision trees, our classifier of choice. Sections 3 through 6 describe our four main contributions towards reducing the annotation effort for L2P: context ordering (Section 3), clustering letters (Section 4), active learning (Section 5), and phonetic alignment (Section 6). Our experimental setup and results are discussed in 127 Sections 7 and 8, respectively. Finally, Section 9 offers some concluding remarks. 2 Decision tree learning of L2P classifiers In this work, we employ a decision tree model to learn the mapping from words to phoneme sequences. Decision tree learners are attractive because they are relatively fast to train, require little or no parameter tuning, and the resulting classifier can be interpreted by the user. A number of prior studies have applied decision trees to L2P data and have reported good generalization accuracy (Andersen et al., 1996; Black et al., 1998; Kienappel and Kneser, 2001). Also, the widely-used Festival Speech Synthesis System (Taylor et al., 1998) relies on decision trees for L2P conversion. We adopt the standard approach of using the letter context as features. The decision tree predicts the phoneme for the focus letter based on the m letters that appear before and after it in the word (including the focus letter itself, and beginning/end of word markers, where applicable). The model predicts a phoneme independently for each letter in a given word. In order to keep our model simple and transparent, we do not explore the possibility of conditioning on adjacent (predicted) phonemes. Any improvement in accuracy resulting from the inclusion of phoneme features would also be realized by the baseline that we compare against, and thus would not materially influence our findings. We employ binary decision trees because they substantially outperformed n-ary trees in our preliminary experiments. In L2P, there are many unique values for each attribute, namely, the letters of a given alphabet. In a n-ary tree each decision node partitions the data into n subsets, one per letter, that are potentially sparse. By contrast, a binary tree creates one branch for the nominated letter, and one branch grouping the remaining letters into a single subset. In the forthcoming experiments, we use binary decision trees exclusively. 3 Context ordering In the L2P task, context letters that are adjacent to the focus letter tend to be more important than context letters that are further away. For example, the English letter c is usually pronounced as [s] if the following letter is e or i. The general tree-growing algorithm has no notion of the letter distance, but instead chooses the letters on the basis of their estimated information gain (Manning and Schütze, 1999). As a result, it will sometimes query a letter at position +3 (denoted l3), for example, before examining the letters that are closer to the center of the context window. We propose to modify the tree-growing procedure to encourage the selection of letters near the focus letter before those at greater offsets are examined. In its strictest form, which resembles the “dynamically expanding context” search strategy of Davel and Barnard (2004), li can only be queried after l0, . . . , li−1 have been queried. However, this approach seems overly rigid for L2P. 
In English, for example, l2 can directly influence the pronunciation of a vowel regardless of the value of l1 (c.f., the difference between rid and ride). Instead, we adopt a less intrusive strategy, which we refer to as “context ordering,” that biases the decision tree toward letters that are closer to the focus, but permits gaps when the information gain for a distant letter is relatively high. Specifically, the ordering constraint described above is still applied, but only to letters that have aboveaverage information gain (where the average is calculated across all letters/attributes). This means that a letter with above-average gain that is eligible with respect to the ordering will take precedence over an ineligible letter that has an even higher gain. However, if all the eligible letters have below-average gain, the ineligible letter with the highest gain is selected irrespective of its position. Our only strict requirement is that the focus letter must always be queried first, unless its information gain is zero. Kienappel and Kneser (2001) also worked on improving decision tree performance for L2P, and devised tie-breaking rules in the event that the treegrowing procedure ranked two or more questions as being equally informative. In our experience with L2P datasets, exact ties are rare; our context ordering mechanism will have more opportunities to guide the tree-growing process. We expect this change to improve accuracy, especially when the amount of training data is very limited. By biasing the decision tree learner toward questions that are intuitively of greater utility, we make it less prone to overfitting on small data samples. 4 Clustering letters A decision tree trained on L2P data bases its phonetic predictions on the surrounding letter context. 128 Yet, when making predictions for unseen words, contexts will inevitably be encountered that did not appear in the training data. Instead of relying solely on the particular letters that surround the focus letter, we postulate that the learner could achieve better generalization if it had access to information about the types of letters that appear before and after. That is, instead of treating letters as abstract symbols, we would like to encode knowledge of the similarity between certain letters as features. One way of achieving this goal is to group the letters into classes or clusters based on their contextual similarity. Then, when a prediction has to be made for an unseen (or low probability) letter sequence, the letter classes can provide additional information. Kienappel and Kneser (2001) report accuracy gains when applying letter clustering to the L2P task. However, their decision tree learner incorporates neighboring phoneme predictions, and employs a variety of different pruning strategies; the portion of the gains attributable to letter clustering are not evident. In addition to exploring the effect of letter clustering on a wider range of languages, we are particularly concerned with the impact that clustering has on decision tree performance when the training set is small. The addition of letter class features to the data may enable the active learner to better evaluate candidate words in the pool, and therefore make more informed selections. To group the letters into classes, we employ a hierarchical clustering algorithm (Brown et al., 1992). 
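Before turning to the letter hierarchy itself, the context-ordering rule just described can be sketched as an attribute-selection routine. This is one reading of the rule: the binary letter-value tests are abstracted into a single attribute per context position, and "eligible" is interpreted here as "every position closer to the focus has already been queried on this path"; both simplifications are assumptions.

def select_attribute(positions, info_gain, queried):
    """Context-ordering attribute selection (Section 3), one interpretation.
    `positions` are signed context offsets (0 = focus letter), `info_gain`
    maps each position to its estimated information gain at this node, and
    `queried` is the set of positions already used on the path to the node."""
    remaining = [p for p in positions if p not in queried]
    if not remaining:
        return None
    # The focus letter is always queried first, unless its gain is zero.
    if 0 in remaining and info_gain[0] > 0:
        return 0
    avg_gain = sum(info_gain[p] for p in remaining) / len(remaining)
    def eligible(p):
        # every position closer to the focus has already been queried
        return all(q in queried for q in positions if abs(q) < abs(p))
    preferred = [p for p in remaining if eligible(p) and info_gain[p] > avg_gain]
    if preferred:
        # an eligible, above-average attribute beats a higher-gain ineligible one
        return max(preferred, key=lambda p: info_gain[p])
    # otherwise pick the highest-gain attribute irrespective of its position
    return max(remaining, key=lambda p: info_gain[p])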
One advantage of inducing a hierarchy is that we need not commit to a particular level of granularity; in other words, we are not required to specify the number of classes beforehand, as is the case with some other clustering algorithms.1 The clustering algorithm is initialized by placing each letter in its own class, and then proceeds in a bottom-up manner. At each step, the pair of classes is merged that leads to the smallest loss in the average mutual information (Manning and Schütze, 1999) between adjacent classes. The merging process repeats until a single class remains that contains all the letters in the alphabet. Recall that in our problem setting we have access to a (presumably) large pool of unannotated words. The unigram and bigram frequencies required by the clustering algorithm are cal1This approach is inspired by the work of Miller et al. (2004), who clustered words for a named-entity tagging task. Letter Bit String Letter Bit String a 01000 n 1111 b 10000000 o 01001 c 10100 p 10001 d 11000 q 1000001 e 0101 r 111010 f 100001 s 11010 g 11001 t 101010 h 10110 u 0111 i 0110 v 100110 j 10000001 w 100111 k 10111 x 111011 l 11100 y 11011 m 10010 z 101011 # 00 Table 1: Hierarchical clustering of English letters culated from these words; hence, the letters can be grouped into classes prior to annotation. The letter classes only need to be computed once for a given language. We implemented a brute-force version of the algorithm that examines all the possible merges at each step, and generates a hierarchy within a few hours. However, when dealing with a larger number of unique tokens (e.g., when clustering words instead of letters), additional optimizations are needed in order to make the procedure tractable. The resulting hierarchy takes the form of a binary tree, where the root node/cluster contains all the letters, and each leaf contains a single letter. Hence, each letter can be represented by a bit string that describes the path from the root to its leaf. As an illustration, the clustering in Table 1 was automatically generated from the words in the English CMU Pronouncing Dictionary (Carnegie Mellon University, 1998). It is interesting to note that the first bit distinguishes vowels from consonants, meaning that these were the last two groups that were merged by the clustering algorithm. Note also that the beginning/end of word marker (#) is included in the hierarchy, and is the last character to be absorbed into a larger cluster. This indicates that # carries more information than most letters, as is to be expected, in light of its distinct status. We also experimented with a manually-constructed letter hierarchy, but observed no significant differences in accuracy visà-vis the automatic clustering. 129 5 Active learning Whereas a passive supervised learning algorithm is provided with a collection of training examples that are typically drawn at random, an active learner has control over the labelled data that it obtains (Cohn et al., 1992). The latter attempts to select its training set intelligently by requesting the labels of only those examples that are judged to be the most useful or informative. Numerous studies have demonstrated that active learners can make more efficient use of unlabelled data than do passive learners (Abe and Mamitsuka, 1998; Miller et al., 2004; Culotta and McCallum, 2005). However, relatively few researchers have applied active learning techniques to the L2P domain. 
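Returning to the letter classes for a moment: once each letter has a bit-string path through the hierarchy (Table 1), class features are simply prefixes of that path, which is what makes the hierarchy usable at several levels of granularity. A minimal sketch with a toy subset of Table 1 follows; the feature-name encoding and the prefix length shown are illustrative (the lengths actually used are given in Section 7).

# Toy subset of the bit strings from Table 1 (English letters).
BITSTRING = {"a": "01000", "e": "0101", "i": "0110", "o": "01001",
             "c": "10100", "k": "10111", "t": "101010", "#": "00"}

def class_features(letter, position, max_len=3):
    """Letter-class features for one context position: one feature per
    bit-string prefix, up to max_len bits. Shorter prefixes correspond to
    coarser classes higher in the hierarchy."""
    bits = BITSTRING[letter]
    return [f"class[{position}]={bits[:n]}"
            for n in range(1, min(max_len, len(bits)) + 1)]

print(class_features("e", -1))   # ['class[-1]=0', 'class[-1]=01', 'class[-1]=010']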
This is despite the fact that annotated data for training an L2P classifier is not available in most languages. We briefly review two relevant studies before proceeding to describe our active learning strategy. Maskey et al. (2004) propose a bootstrapping technique that iteratively requests the labels of the n most frequent words in a corpus. A classifier is trained on the words that have been annotated thus far, and then predicts the phonemes for each of the n words being considered. Words for which the prediction confidence is above a certain threshold are immediately added to the lexicon, while the remaining words must be verified (and corrected, if necessary) by a human annotator. The main drawback of such an approach lies in the risk of adding erroneous entries to the lexicon when the classifier is overly confident in a prediction. Kominek and Black (2006) devise a word selection strategy based on letter n-gram coverage and word length. Their method slightly outperforms random selection, thereby establishing passive learning as a strong baseline. However, only a single Italian dataset was used, and the results do not necessarily generalize to other languages. In this paper, we propose to apply an active learning technique known as Query-byBagging (Abe and Mamitsuka, 1998). We consider a pool-based active learning setting, whereby the learner has access to a pool of unlabelled examples (words), and may obtain labels (phoneme sequences) at a cost. This is an iterative procedure in which the learner trains a classifier on the current set of labelled training data, then selects one or more new examples to label, according to the classifier’s predictions on the pool data. Once labelled, these examples are added to the training set, the classifier is re-trained, and the process repeats until some stopping criterion is met (e.g., annotation resources are exhausted). Query-by-Bagging (QBB) is an instance of the Query-by-Committee algorithm (Freund et al., 1997), which selects examples that have high classification variance. At each iteration, QBB employs the bagging procedure (Breiman, 1996) to create a committee of classifiers C. Given a training set T containing k examples (in our setting, k is the total number of letters that have been labelled), bagging creates each committee member by sampling k times from T (with replacement), and then training a classifier Ci on the resulting data. The example in the pool that maximizes the disagreement among the predictions of the committee members is selected. A crucial question is how to calculate the disagreement among the predicted phoneme sequences for a word in the pool. In the L2P domain, we assume that a human annotator specifies the phonemes for an entire word, and that the active learner cannot query individual letters. We require a measure of confidence at the word level; yet, our classifiers make predictions at the letter level. This is analogous to the task of estimating record confidence using field confidence scores in information extraction (Culotta and McCallum, 2004). Our solution is as follows. Let w be a word in the pool. Each classifier Ci predicts the phoneme for each letter l ∈w. These “votes” are aggregated to produce a vector vl for letter l that indicates the distribution of the |C| predictions over its possible phonemes. We then compute the margin for each letter: If {p, p′} ∈vl are the two highest vote totals, then the margin is M(vl) = |p −p′|. A small margin indicates disagreement among the constituent classifiers. 
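The committee construction and per-letter voting just described are straightforward to sketch. In the sketch below, scikit-learn's decision tree is purely a stand-in for the Weka J48 learner used in the experiments, and the data layout (one numeric feature vector per labelled letter) is an assumption.

import random
from collections import Counter
from sklearn.tree import DecisionTreeClassifier   # stand-in for Weka's J48

def train_committee(X, y, size=10, seed=0):
    """Bagging: each committee member is trained on k examples drawn with
    replacement from the k labelled letters.
    X: numeric feature vectors (e.g., one-hot encoded letter contexts),
    y: the corresponding phoneme labels."""
    rng = random.Random(seed)
    committee, k = [], len(X)
    for _ in range(size):
        idx = [rng.randrange(k) for _ in range(k)]
        member = DecisionTreeClassifier().fit([X[i] for i in idx],
                                              [y[i] for i in idx])
        committee.append(member)
    return committee

def letter_margin(committee, x):
    """Margin M(v_l) for one letter: the difference between the two highest
    vote totals over the committee's phoneme predictions."""
    votes = Counter(member.predict([x])[0] for member in committee)
    top = votes.most_common(2)
    return top[0][1] - top[1][1] if len(top) > 1 else top[0][1]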
We define the disagreement score for the entire word as the minimum margin: score(w) = min l∈w{M(vl)} (1) We also experimented with maximum vote entropy and average margin/entropy, where the average is taken over all the letters in a word. The minimum margin exhibited the best performance on our development data; hence, we do not provide a detailed evaluation of the other measures. 6 L2P alignment Before supervised learning can take place, the letters in each word need to be aligned with 130 phonemes. However, a lexicon typically provides just the letter and phoneme sequences for each word, without specifying the specific phoneme(s) that each letter elicits. The sub-task of L2P that pairs letters with phonemes in the training data is referred to as alignment. The L2P alignments that are specified in the training data can influence the accuracy of the resulting L2P classifier. In our setting, we are interested in mapping each letter to either a single phoneme or the “null” phoneme. The standard approach to L2P alignment is described by Damper et al. (2005). It performs an Expectation-Maximization (EM) procedure that takes a (preferably large) collection of words as input and computes alignments for them simultaneously. However, since in our active learning setting the data is acquired incrementally, we cannot count on the initial availability of a substantial set of words accompanied by their phonemic transcriptions. In this paper, we apply the ALINE algorithm to the task of L2P alignment (Kondrak, 2000; Inkpen et al., 2007). ALINE, which performs phonetically-informed alignment of two strings of phonemes, requires no training data, and so is ideal for our purposes. Since our task requires the alignment of phonemes with letters, we wish to replace every letter with a phoneme that is the most likely to be produced by that letter. On the other hand, we would like our approach to be languageindependent. Our solution is to simply treat every letter as an IPA symbol (International Phonetic Association, 1999). The IPA is based on the Roman alphabet, but also includes a number of other symbols. The 26 IPA letter symbols tend to correspond to the usual phonetic value that the letter represents in the Latin script.2 For example, the IPA symbol [m] denotes “voiced bilabial nasal,” which is the phoneme represented by the letter m in most languages that utilize Latin script. The alignments produced by ALINE are of high quality. The example below shows the alignment of the Italian word scianchi to its phonetic transcription [SaNki]. ALINE correctly aligns not only identical IPA symbols (i:i), but also IPA symbols that represent similar sounds (s:S, n:N, c:k). s c i a n c h i | | | | | S a N k i 2ALINE can also be applied to non-Latin scripts by replacing every grapheme with the IPA symbol that is phonetically closest to it (Jiampojamarn et al., 2009). 7 Experimental setup We performed experiments on six datasets, which were obtained from the PRONALSYL letterto-phoneme conversion challenge.3 They are: English CMUDict (Carnegie Mellon University, 1998); French BRULEX (Content et al., 1990), Dutch and German CELEX (Baayen et al., 1996), the Italian Festival dictionary (Cosi et al., 2000), and the Spanish lexicon. Duplicate words and words containing punctuation or numerals were removed, as were abbreviations and acronyms. The resulting datasets range in size from 31,491 to 111,897 words. 
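Putting the pieces of Section 5 together, the word-level score of Equation 1 and the query-selection step look as follows. The sketch reuses letter_margin from the previous sketch, and the featurize helper that turns each letter of a word into a feature vector is a placeholder.

def disagreement(committee, word, featurize):
    """Equation 1: the word score is the minimum per-letter margin."""
    return min(letter_margin(committee, x) for x in featurize(word))

def select_queries(committee, pool, featurize, n=10):
    """Pick the n pool words with the smallest minimum margin,
    i.e. the words the committee disagrees on most."""
    return sorted(pool, key=lambda w: disagreement(committee, w, featurize))[:n]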
The PRONALSYL datasets are already divided into 10 folds; we used the first fold as our test set, and the other folds were merged together to form the learning set. In our preliminary experiments, we randomly set aside 10 percent of this learning set to serve as our development set. Since the focus of our work is on algorithmic enhancements, we simulate the annotator with an oracle and do not address the potential human interface factors. During an experiment, 100 words were drawn at random from the learning set; these constituted the data on which an initial classifier was trained. The rest of the words in the learning set formed the unlabelled pool for active learning; their phonemes were hidden, and a given word’s phonemes were revealed if the word was selected for labelling. After training a classifier on the 100 annotated words, we performed 190 iterations of active learning. On each iteration, 10 words were selected according to Equation 1, labelled by an oracle, and added to the training set. In order to speed up the experiments, a random sample of 2000 words was drawn from the pool and presented to the active learner each time. Hence, QBB selected 10 words from the 2000 candidates. We set the QBB committee size |C| to 10. At each step, we measured word accuracy with respect to the holdout set as the percentage of test words that yielded no erroneous phoneme predictions. Henceforth, we use accuracy to refer to word accuracy. Note that although we query examples using a committee, we train a single tree on these examples in order to produce an intelligible model. Prior work has demonstrated that this configuration performs well in practice (Dwyer and Holte, 2007). Our results report the accuracy of the single tree grown on each iteration, averaged 3Available at http://pascallin.ecs.soton.ac.uk/Challenges/ PRONALSYL/Datasets/ 131 over 10 random draws of the initial training set. For our decision tree learner, we utilized the J48 algorithm provided by Weka (Witten and Frank, 2005). We also experimented with Wagon (Taylor et al., 1998), an implementation of CART, but J48 performed better during preliminary trials. We ran J48 with default parameter settings, except that binary trees were grown (see Section 2), and subtree raising was disabled.4 Our feature template was established during development set experiments with the English CMU data; the data from the other five languages did not influence these choices. The letter context consisted of the focus letter and the 3 letters appearing before and after the focus (or beginning/end of word markers, where applicable). For letter class features, bit strings of length 1 through 6 were used for the focus letter and its immediate neighbors. Bit strings of length at most 3 were used at positions +2 and −2, and no such features were added at ±3.5 We experimented with other configurations, including using bit strings of up to length 6 at all positions, but they did not produce consistent improvements over the selected scheme. 8 Results We first examine the contributions of the individual system components, and then compare our complete system to the baseline. The dashed curves in Figure 1 represent the baseline performance with no clustering, no context ordering, random sampling, and ALINE, unless otherwise noted. In all plots, the error bars show the 99% confidence interval for the mean. Because the average word length differs across languages, we report the number of words along the x-axis. 
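For concreteness, the experimental protocol of Section 7 corresponds to the loop skeleton below. It reuses train_committee and select_queries from the earlier sketches; the oracle (which simulates the annotator), featurize, and evaluate helpers are placeholders, and training a single tree on the queried examples for evaluation follows the setup described above.

import random

def run_active_learning(pool, oracle, featurize, evaluate,
                        seed_size=100, iterations=190, batch=10,
                        sample_size=2000, committee_size=10):
    """Skeleton of the protocol in Section 7. `oracle(word)` returns the gold
    phonemes for a word's letters (one per letter), `featurize(word)` the
    per-letter feature vectors, and `evaluate(model)` records word accuracy
    on the held-out fold; all three are placeholders."""
    random.shuffle(pool)
    labelled, pool = pool[:seed_size], pool[seed_size:]
    for _ in range(iterations):
        X = [x for w in labelled for x in featurize(w)]
        y = [p for w in labelled for p in oracle(w)]
        committee = train_committee(X, y, size=committee_size)
        evaluate(DecisionTreeClassifier().fit(X, y))   # the single tree reported
        candidates = random.sample(pool, min(sample_size, len(pool)))
        queries = select_queries(committee, candidates, featurize, n=batch)
        labelled += queries
        pool = [w for w in pool if w not in set(queries)]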
We have verified that our system does not substantially alter the average number of letters per word in the training set for any of these languages. Hence, the number of words reported here is representative of the true annotation effort. 4Subtree raising is an expensive pruning operation that had a negligible impact on accuracy during preliminary experiments. Our pruning performs subtree replacement only. 5The idea of lowering the specificity of letter class questions as the context length increases is due to Kienappel and Kneser (2001), and is intended to avoid overfitting. However, their configuration differs from ours in that they use longer context lengths (4 for German and 5 for English) and ask letter class questions at every position. Essentially, the authors tuned the feature set in order to optimize performance on each problem, whereas we seek a more general representation that will perform well on a variety of languages. 8.1 Context ordering Our context ordering strategy improved the accuracy of the decision tree learner on every language (see Figure 1a). Statistically significant improvements were realized on Dutch, French, and German. Our expectation was that context ordering would be particularly helpful during the early rounds of active learning, when there is a greater risk of overfitting on the small training sets. For some languages (notably, German and Spanish) this was indeed the case; yet, for Dutch, context ordering became more effective as the training set increased in size. It should be noted that our context ordering strategy is sufficiently general that it can be implemented in other decision tree learners that grow binary trees, such as Wagon/CART (Taylor et al., 1998). An n-ary implementation is also feasible, although we have not tried this variation. 8.2 Clustering letters As can be seen in Figure 1b, clustering letters into classes tended to produce a steady increase in accuracy. The only case where it had no statistically significant effect was on English. Another benefit of clustering is that it reduces variance. The confidence intervals are generally wider when clustering is disabled, meaning that the system’s performance was less sensitive to changes in the initial training set when letter classes were used. 8.3 Active learning On five of the six datasets, Query-by-Bagging required significantly fewer labelled examples to reach the maximum level of performance achieved by the passive learner (see Figure 1c). For instance, on the Spanish dataset, random sampling reached 97% word accuracy after 1420 words had been annotated, whereas QBB did so with only 510 words — a 64% reduction in labelling effort. Similarly, savings ranging from 30% to 63% were observed for the other languages, with the exception of English, where a statistically insignificant 4% reduction was recorded. Since English is highly irregular in comparison with the other five languages, the active learner tends to query examples that are difficult to classify, but which are unhelpful in terms of generalization. 
It is important to note that empirical comparisons of different active learning techniques have shown that random sampling establishes a very 132 0 5 10 15 20 Number of training words (x100) 10 20 30 40 50 60 70 80 90 100 Word accuracy (%) Context Ordering No Context Ordering (a) Context Ordering 0 5 10 15 20 Number of training words (x100) 10 20 30 40 50 60 70 80 90 100 Word accuracy (%) Clustering No Clustering (b) Clustering 0 5 10 15 20 Number of training words (x100) 10 20 30 40 50 60 70 80 90 100 Word accuracy (%) Query-by-Bagging Random Sampling (c) Active learning 0 5 10 15 20 Number of training words (x100) 10 20 30 40 50 60 70 80 90 100 Word accuracy (%) ALINE EM (d) L2P alignment Spanish Italian + French Dutch + German English Figure 1: Performance of the individual system components strong baseline on some datasets (Schein and Ungar, 2007; Settles and Craven, 2008). It is rarely the case that a given active learning strategy is able to unanimously outperform random sampling across a range of datasets. From this perspective, to achieve statistically significant improvements on five of six L2P datasets (without ever being beaten by random) is an excellent result for QBB. 8.4 L2P alignment The ALINE method for L2P alignment outperformed EM on all six datasets (see Figure 1d). As was mentioned in Section 6, the EM aligner depends on all the available training data, whereas ALINE processes words individually. Only on Spanish and Italian, languages which have highly regular spelling systems, was the EM aligner competitive with ALINE. The accuracy gains on the remaining four datasets are remarkable, considering that better alignments do not necessarily translate into improved classification. We hypothesized that EM’s inferior performance was due to the limited quantities of data that were available in the early stages of active learning. In a follow-up experiment, we allowed EM to align the entire learning set in advance, and these aligned entries were revealed when requested by the learner. We compared this with the usual procedure whereby EM is applied to the labelled training data at each iteration of learning. The learning curves (not shown) were virtually indistinguishable, and there were no statistically significant differences on any of the languages. EM appears to produce poor alignments regardless of the amount of available data. 133 0 5 10 15 20 Number of training words (x100) 10 20 30 40 50 60 70 80 90 100 Word accuracy (%) Complete System Baseline Spanish Italian + French Dutch + German English Figure 2: Performance of the complete system 8.5 Complete system The complete system consists of context ordering, clustering, Query-by-Bagging, and ALINE; the baseline represents random sampling with EM alignment and no additional enhancements. Figure 2 plots the word accuracies for all six datasets. Although the absolute word accuracies varied considerably across the different languages, our system significantly outperformed the baseline in every instance. On the French dataset, for example, the baseline labelled 1850 words before reaching its maximum accuracy of 64%, whereas the complete system required only 480 queries to reach 64% accuracy. This represents a reduction of 74% in the labelling effort. The savings for the other languages are: Spanish, 75%; Dutch, 68%; English, 59%; German, 59%; and Italian, 52%.6 Interestingly, the savings are the highest on Spanish, even though the corresponding accuracy gains are the smallest. 
This demonstrates that our approach is also effective on languages with relatively transparent orthography. At first glance, the performance of both systems appears to be rather poor on the English dataset. To put our results into perspective, Black et al. (1998) report 57.8% accuracy on this dataset with a similar alignment method and decision tree learner. Our baseline system achieves 57.3% accuracy when 90,000 words have been labelled. Hence, the low values in Figure 2 simply reflect the fact that many more examples are required to 6The average savings in the number of labelled words with respect to the entire learning curve are similar, ranging from 50% on Italian to 73% on Spanish. learn an accurate classifier for the English data. 9 Conclusions We have presented a system for learning a letterto-phoneme classifier that combines four distinct enhancements in order to minimize the amount of data that must be annotated. Our experiments involving datasets from several languages clearly demonstrate that unlabelled data can be used more efficiently, resulting in greater accuracy for a given training set size, without any additional tuning for the different languages. The experiments also show that a phonetically-based aligner may be preferable to the widely-used EM alignment technique, a discovery that could lead to the improvement of L2P accuracy in general. While this work represents an important step in reducing the cost of constructing an L2P training set, we intend to explore other active learners and classification algorithms, including sequence labelling strategies (Settles and Craven, 2008). We also plan to incorporate user-centric enhancements (Davel and Barnard, 2004; Culotta and McCallum, 2005) with the aim of reducing both the effort and expertise that is required to annotate words with their phoneme sequences. Acknowledgments We would like to thank Sittichai Jiampojamarn for helpful discussions and for providing an implementation of the Expectation-Maximization alignment algorithm. This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Informatics Circle of Research Excellence (iCORE). References Naoki Abe and Hiroshi Mamitsuka. 1998. Query learning strategies using boosting and bagging. In Proc. International Conference on Machine Learning, pages 1–9. Ove Andersen, Ronald Kuhn, Ariane Lazaridès, Paul Dalsgaard, Jürgen Haas, and Elmar Nöth. 1996. Comparison of two tree-structured approaches for grapheme-to-phoneme conversion. In Proc. International Conference on Spoken Language Processing, volume 3, pages 1700–1703. R. Harald Baayen, Richard Piepenbrock, and Leon Gulikers, 1996. The CELEX2 lexical database. Linguistic Data Consortium, Univ. of Pennsylvania. 134 Maximilian Bisani and Hermann Ney. 2002. Investigations on joint-multigram models for grapheme-tophoneme conversion. In Proc. International Conference on Spoken Language Processing, pages 105– 108. Alan W. Black, Kevin Lenzo, and Vincent Pagel. 1998. Issues in building general letter to sound rules. In ESCA Workshop on Speech Synthesis, pages 77–80. Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123–140. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467–479. Carnegie Mellon University. 1998. The Carnegie Mellon pronouncing dictionary. David A. Cohn, Les E. Atlas, and Richard E. Ladner. 1992. 
Improving generalization with active learning. Machine Learning, 15(2):201–221. Alain Content, Phillppe Mousty, and Monique Radeau. 1990. Brulex: Une base de données lexicales informatisée pour le français écrit et parlé. L’année Psychologique, 90:551–566. Piero Cosi, Roberto Gretter, and Fabio Tesser. 2000. Festival parla Italiano. In Proc. Giornate del Gruppo di Fonetica Sperimentale. Aron Culotta and Andrew McCallum. 2004. Confidence estimation for information extraction. In Proc. HLT-NAACL, pages 109–114. Aron Culotta and Andrew McCallum. 2005. Reducing labeling effort for structured prediction tasks. In Proc. National Conference on Artificial Intelligence, pages 746–751. Robert I. Damper, Yannick Marchand, John-David S. Marsters, and Alexander I. Bazin. 2005. Aligning text and phonemes for speech technology applications using an EM-like algorithm. International Journal of Speech Technology, 8(2):147–160. Marelie Davel and Etienne Barnard. 2004. The efficient generation of pronunciation dictionaries: Human factors during bootstrapping. In Proc. International Conference on Spoken Language Processing, pages 2797–2800. Vera Demberg, Helmut Schmid, and Gregor Möhler. 2007. Phonological constraints and morphological preprocessing for grapheme-to-phoneme conversion. In Proc. ACL, pages 96–103. Kenneth Dwyer and Robert Holte. 2007. Decision tree instability and active learning. In Proc. European Conference on Machine Learning, pages 128–139. Yoav Freund, H. Sebastian Seung, Eli Shamir, and Naftali Tishby. 1997. Selective sampling using the query by committee algorithm. Machine Learning, 28(2-3):133–168. Diana Inkpen, Raphaëlle Martin, and Alain Desrochers. 2007. Graphon: un outil pour la transcription phonétique des mots français. Unpublished manuscript. International Phonetic Association. 1999. Handbook of the International Phonetic Association: A Guide to the Use of the International Phonetic Alphabet. Cambridge University Press. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Proc. ACL, pages 905–913. Sittichai Jiampojamarn, Aditya Bhargava, Qing Dou, Kenneth Dwyer, and Grzegorz Kondrak. 2009. DirecTL: a language-independent approach to transliteration. In Named Entities Workshop (NEWS): Shared Task on Transliteration. Submitted. Anne K. Kienappel and Reinhard Kneser. 2001. Designing very compact decision trees for graphemeto-phoneme transcription. In Proc. European Conference on Speech Communication and Technology, pages 1911–1914. John Kominek and Alan W. Black. 2006. Learning pronunciation dictionaries: Language complexity and word selection strategies. In Proc. HLTNAACL, pages 232–239. Grzegorz Kondrak. 2000. A new algorithm for the alignment of phonetic sequences. In Proc. NAACL, pages 288–295. Christopher D. Manning and Hinrich Schütze. 1999. Foundations of Statistical Natural Language Processing. MIT Press. Sameer R. Maskey, Alan W. Black, and Laura M. Tomokiya. 2004. Boostrapping phonetic lexicons for new languages. In Proc. International Conference on Spoken Language Processing, pages 69–72. Scott Miller, Jethran Guinness, and Alex Zamanian. 2004. Name tagging with word clusters and discriminative training. In Proc. HLT-NAACL, pages 337–342. Andrew I. Schein and Lyle H. Ungar. 2007. Active learning for logistic regression: an evaluation. Machine Learning, 68(3):235–265. Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. 
In Proc. Conference on Empirical Methods in Natural Language Processing, pages 1069–1078. Paul A. Taylor, Alan Black, and Richard Caley. 1998. The architecture of the Festival Speech Synthesis System. In ESCA Workshop on Speech Synthesis, pages 147–151. Ian H. Witten and Eibe Frank. 2005. Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2nd edition. 135
2009
15
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 136–144, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Transliteration Alignment Vladimir Pervouchine, Haizhou Li Institute for Infocomm Research A*STAR, Singapore 138632 {vpervouchine,hli}@i2r.a-star.edu.sg Bo Lin School of Computer Engineering NTU, Singapore 639798 [email protected] Abstract This paper studies transliteration alignment, its evaluation metrics and applications. We propose a new evaluation metric, alignment entropy, grounded on the information theory, to evaluate the alignment quality without the need for the gold standard reference and compare the metric with F-score. We study the use of phonological features and affinity statistics for transliteration alignment at phoneme and grapheme levels. The experiments show that better alignment consistently leads to more accurate transliteration. In transliteration modeling application, we achieve a mean reciprocal rate (MRR) of 0.773 on Xinhua personal name corpus, a significant improvement over other reported results on the same corpus. In transliteration validation application, we achieve 4.48% equal error rate on a large LDC corpus. 1 Introduction Transliteration is a process of rewriting a word from a source language to a target language in a different writing system using the word’s phonological equivalent. The word and its transliteration form a transliteration pair. Many efforts have been devoted to two areas of studies where there is a need to establish the correspondence between graphemes or phonemes between a transliteration pair, also known as transliteration alignment. One area is the generative transliteration modeling (Knight and Graehl, 1998), which studies how to convert a word from one language to another using statistical models. Since the models are trained on an aligned parallel corpus, the resulting statistical models can only be as good as the alignment of the corpus. Another area is the transliteration validation, which studies the ways to validate transliteration pairs. For example Knight and Graehl (1998) use the lexicon frequency, Qu and Grefenstette (2004) use the statistics in a monolingual corpus and the Web, Kuo et al. (2007) use probabilities estimated from the transliteration model to validate transliteration candidates. In this paper, we propose using the alignment distance between the a bilingual pair of words to establish the evidence of transliteration candidacy. An example of transliteration pair alignment is shown in Figure 1. e5 e1 e2 e3 e4 c1 c2 c3 A L I C E 艾 丽 斯 source graphemes target graphemes e1 e2 e3 grapheme tokens Figure 1: An example of grapheme alignment (Alice, 艾丽斯), where a Chinese grapheme, a character, is aligned to an English grapheme token. Like the word alignment in statistical machine translation (MT), transliteration alignment becomes one of the important topics in machine transliteration, which has several unique challenges. Firstly, the grapheme sequence in a word is not delimited into grapheme tokens, resulting in an additional level of complexity. Secondly, to maintain the phonological equivalence, the alignment has to make sense at both grapheme and phoneme levels of the source and target languages. This paper reports progress in our ongoing spoken language translation project, where we are interested in the alignment problem of personal name transliteration from English to Chinese. This paper is organized as follows. In Section 2, we discuss the prior work. 
In Section 3, we introduce both statistically and phonologically motivated alignment techniques and in Section 4 we advocate an evaluation metric, alignment entropy that measures the alignment quality. We report the experiments in Section 5. Finally, we conclude in Section 6. 136 2 Related Work A number of transliteration studies have touched on the alignment issue as a part of the transliteration modeling process, where alignment is needed at levels of graphemes and phonemes. In their seminal paper Knight and Graehl (1998) described a transliteration approach that transfers the grapheme representation of a word via the phonetic representation, which is known as phonemebased transliteration technique (Virga and Khudanpur, 2003; Meng et al., 2001; Jung et al., 2000; Gao et al., 2004). Another technique is to directly transfer the grapheme, known as direct orthographic mapping, that was shown to be simple and effective (Li et al., 2004). Some other approaches that use both source graphemes and phonemes were also reported with good performance (Oh and Choi, 2002; Al-Onaizan and Knight, 2002; Bilac and Tanaka, 2004). To align a bilingual training corpus, some take a phonological approach, in which the crafted mapping rules encode the prior linguistic knowledge about the source and target languages directly into the system (Wan and Verspoor, 1998; Meng et al., 2001; Jiang et al., 2007; Xu et al., 2006). Others adopt a statistical approach, in which the affinity between phonemes or graphemes is learned from the corpus (Gao et al., 2004; AbdulJaleel and Larkey, 2003; Virga and Khudanpur, 2003). In the phoneme-based technique where an intermediate level of phonetic representation is used as the pivot, alignment between graphemes and phonemes of the source and target words is needed (Oh and Choi, 2005). If source and target languages have different phoneme sets, alignment between the the different phonemes is also required (Knight and Graehl, 1998). Although the direct orthographic mapping approach advocates a direct transfer of grapheme at run-time, we still need to establish the grapheme correspondence at the model training stage, when phoneme level alignment can help. It is apparent that the quality of transliteration alignment of a training corpus has a significant impact on the resulting transliteration model and its performance. Although there are many studies of evaluation metrics of word alignment for MT (Lambert, 2008), there has been much less reported work on evaluation metrics of transliteration alignment. In MT, the quality of training corpus alignment A is often measured relatively to the gold standard, or the ground truth alignment G, which is a manual alignment of the corpus or a part of it. Three evaluation metrics are used: precision, recall, and F-score, the latter being a function of the former two. They indicate how close the alignment under investigation is to the gold standard alignment (Mihalcea and Pedersen, 2003). Denoting the number of cross-lingual mappings that are common in both A and G as CAG, the number of cross-lingual mappings in A as CA and the number of cross-lingual mappings in G as CG, precision Pr is given as CAG/CA, recall Rc as CAG/CG and F-score as 2Pr · Rc/(Pr + Rc). Note that these metrics hinge on the availability of the gold standard, which is often not available. In this paper we propose a novel evaluation metric for transliteration alignment grounded on the information theory. 
One important property of this metric is that it does not require a gold standard alignment as a reference. We will also show that how this metric is used in generative transliteration modeling and transliteration validation. 3 Transliteration alignment techniques We assume in this paper that the source language is English and the target language is Chinese, although the technique is not restricted to EnglishChinese alignment. Let a word in the source language (English) be {ei} = {e1 . . . eI} and its transliteration in the target language (Chinese) be {cj} = {c1 . . . cJ}, ei ∈E, cj ∈C, and E, C being the English and Chinese sets of characters, or graphemes, respectively. Aligning {ei} and {cj} means for each target grapheme token ¯cj finding a source grapheme token ¯em, which is an English substring in {ei} that corresponds to cj, as shown in the example in Figure 1. As Chinese is syllabic, we use a Chinese character cj as the target grapheme token. 3.1 Grapheme affinity alignment Given a distance function between graphemes of the source and target languages d(ei, cj), the problem of alignment can be formulated as a dynamic programming problem with the following function to minimize: Dij = min(Di−1,j−1 + d(ei, cj), Di,j−1 + d(∗, cj), Di−1,j + d(ei, ∗)) (1) 137 Here the asterisk * denotes a null grapheme that is introduced to facilitate the alignment between graphemes of different lengths. The minimum distance achieved is then given by D = I X i=1 d(ei, cθ(i)) (2) where j = θ(i) is the correspondence between the source and target graphemes. The alignment can be performed via the Expectation-Maximization (EM) by starting with a random initial alignment and calculating the affinity matrix count(ei, cj) over the whole parallel corpus, where element (i, j) is the number of times character ei was aligned to cj. From the affinity matrix conditional probabilities P(ei|cj) can be estimated as P(ei|cj) = count(ei, cj)/ X j count(ei, cj) (3) Alignment j = θ(i) between {ei} and {cj} that maximizes probability P = Y i P(cθ(i)|ei) (4) is also the same alignment that minimizes alignment distance D: D = −log P = − X i log P(cθ(i)|ei) (5) In other words, equations (2) and (5) are the same when we have the distance function d(ei, cj) = −log P(cj|ei). Minimizing the overall distance over a training corpus, we conduct EM iterations until the convergence is achieved. This technique solely relies on the affinity statistics derived from training corpus, thus is called grapheme affinity alignment. It is also equally applicable for alignment between a pair of symbol sequences representing either graphemes or phonemes. (Gao et al., 2004; AbdulJaleel and Larkey, 2003; Virga and Khudanpur, 2003). 3.2 Grapheme alignment via phonemes Transliteration is about finding phonological equivalent. It is therefore a natural choice to use the phonetic representation as the pivot. It is common though that the sound inventory differs from one language to another, resulting in different phonetic representations for source and target words. Continuing with the earlier example, 艾 AE L AH S A L I C E AY l i s iz 丽 斯 graphemes phonemes phonemes graphemes source target Figure 2: An example of English-Chinese transliteration alignment via phonetic representations. Figure 2 shows the correspondence between the graphemes and phonemes of English word “Alice” and its Chinese transliteration, with CMU phoneme set used for English (Chase, 1997) and IIR phoneme set for Chinese (Li et al., 2007a). 
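To make the recurrence in equations (1) and (2) concrete, here is a minimal sketch of the dynamic-programming alignment with a null symbol. The function names, the NULL marker and the small probability floor for unseen pairs are assumptions of this sketch, not part of the authors' implementation; it only illustrates the technique described above.

```python
import math

NULL = "*"  # null grapheme/phoneme introduced for insertions and deletions

def align_cost(src, tgt, dist):
    """Minimal total distance D between sequences src and tgt, following
    D[i][j] = min(D[i-1][j-1] + dist(src[i-1], tgt[j-1]),
                  D[i][j-1]   + dist(NULL,     tgt[j-1]),
                  D[i-1][j]   + dist(src[i-1], NULL))."""
    I, J = len(src), len(tgt)
    D = [[0.0] * (J + 1) for _ in range(I + 1)]
    for i in range(1, I + 1):
        D[i][0] = D[i - 1][0] + dist(src[i - 1], NULL)
    for j in range(1, J + 1):
        D[0][j] = D[0][j - 1] + dist(NULL, tgt[j - 1])
    for i in range(1, I + 1):
        for j in range(1, J + 1):
            D[i][j] = min(D[i - 1][j - 1] + dist(src[i - 1], tgt[j - 1]),
                          D[i][j - 1] + dist(NULL, tgt[j - 1]),
                          D[i - 1][j] + dist(src[i - 1], NULL))
    return D[I][J]

def neg_log_prob_dist(prob):
    """Distance d(e, c) = -log P(c|e); prob is a dict of dicts estimated from
    the affinity matrix, with an assumed floor for unseen pairs."""
    def dist(e, c):
        return -math.log(prob.get(e, {}).get(c, 1e-6))
    return dist
```

Backpointers stored alongside D would recover the correspondence j = θ(i), and re-counting the aligned pairs over the corpus gives the affinity matrix used to re-estimate the probabilities in the next EM iteration.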
A Chinese character is often mapped to a unique sequence of Chinese phonemes. Therefore, if we align English characters {ei} and Chinese phonemes {cpk} (cpk ∈ CP set of Chinese phonemes) well, we almost succeed in aligning English and Chinese grapheme tokens. Alignment between {ei} and {cpk} becomes the main task in this paper. 3.2.1 Phoneme affinity alignment Let the phonetic transcription of English word {ei} be {epn}, epn ∈EP, where EP is the set of English phonemes. Alignment between {ei} and {epn}, as well as between {epn} and {cpk} can be performed via EM as described above. We estimate conditional probability of Chinese phoneme cpk after observing English character ei as P(cpk|ei) = X {epn} P(cpk|epn)P(epn|ei) (6) We use the distance function between English graphemes and Chinese phonemes d(ei, cpk) = −log P(cpk|ei) to perform the initial alignment between {ei} and {cpk} via dynamic programming, followed by the EM iterations until convergence. The estimates for P(cpk|epn) and P(epn|ei) are obtained from the affinity matrices: the former from the alignment of English and Chinese phonetic representations, the latter from the alignment of English words and their phonetic representations. 3.2.2 Phonological alignment Alignment between the phonetic representations of source and target words can also be achieved using the linguistic knowledge of phonetic similarity. Oh and Choi (2002) define classes of 138 phonemes and assign various distances between phonemes of different classes. In contrast, we make use of phonological descriptors to define the similarity between phonemes in this paper. Perhaps the most common way to measure the phonetic similarity is to compute the distances between phoneme features (Kessler, 2005). Such features have been introduced in many ways, such as perceptual attributes or articulatory attributes. Recently, Tao et al. (2006) and Yoon et al. (2007) have studied the use of phonological features and manually assigned phonological distance to measure the similarity of transliterated words for extracting transliterations from a comparable corpus. We adopt the binary-valued articulatory attributes as the phonological descriptors, which are used to describe the CMU and IIR phoneme sets for English and Chinese Mandarin respectively. Withgott and Chen (1993) define a feature vector of phonological descriptors for English sounds. We extend the idea by defining a 21-element binary feature vector for each English and Chinese phoneme. Each element of the feature vector represents presence or absence of a phonological descriptor that differentiates various kinds of phonemes, e.g. vowels from consonants, front from back vowels, nasals from fricatives, etc1. In this way, a phoneme is described by a feature vector. We express the similarity between two phonemes by the Hamming distance, also called the phonological distance, between the two feature vectors. A difference in one descriptor between two phonemes increases their distance by 1. As the descriptors are chosen to differentiate between sounds, the distance between similar phonemes is low, while that between two very different phonemes, such as a vowel and a consonant, is high. The null phoneme, added to both English and Chinese phoneme sets, has a constant distance to any actual phonemes, which is higher than that between any two actual phonemes. We use the phonological distance to perform the initial alignment between English and Chinese phonetic representations of words. 
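As an illustration of the phonological distance just described, the following minimal sketch uses 21-element binary feature vectors and the Hamming distance. The descriptor indices and the example feature assignments are hypothetical placeholders, not the authors' actual table, and the constant cost for the null phoneme is likewise an assumption.

```python
DESCRIPTORS = 21  # number of binary phonological descriptors per phoneme

def features(active):
    """Build a 21-element binary vector from a set of active descriptor
    indices (the indices used below are purely illustrative)."""
    return tuple(1 if i in active else 0 for i in range(DESCRIPTORS))

# Hypothetical feature assignments, not the authors' actual table.
FEATURES = {
    "AE": features({0, 3, 5}),   # e.g. vowel, front, low
    "L":  features({1, 7, 12}),  # e.g. consonant, alveolar, lateral
    "l":  features({1, 7, 12}),  # Chinese /l/, same descriptor profile
}

NULL_DIST = DESCRIPTORS + 1  # assumed: higher than any real phoneme pair

def phonological_distance(p, q):
    """Hamming distance between feature vectors; the null phoneme '*' gets a
    constant distance higher than that between any two actual phonemes."""
    if p == "*" or q == "*":
        return NULL_DIST
    return sum(a != b for a, b in zip(FEATURES[p], FEATURES[q]))
```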
After that we proceed with recalculation of the distances between phonemes using the affinity matrix as described in Section 3.1 and realign the corpus again. We continue the iterations until convergence is 1The complete table of English and Chinese phonemes with their descriptors, as well as the transliteration system demo is available at http://translit.i2r.astar.edu.sg/demos/transliteration/ reached. Because of the use of phonological descriptors for the initial alignment, we call this technique the phonological alignment. 4 Transliteration alignment entropy Having aligned the graphemes between two languages, we want to measure how good the alignment is. Aligning the graphemes means aligning the English substrings, called the source grapheme tokens, to Chinese characters, the target grapheme tokens. Intuitively, the more consistent the mapping is, the better the alignment will be. We can quantify the consistency of alignment via alignment entropy grounded on information theory. Given a corpus of aligned transliteration pairs, we calculate count(cj, ¯em), the number of times each Chinese grapheme token (character) cj is mapped to each English grapheme token ¯em. We use the counts to estimate probabilities P(¯em, cj) = count(cj, ¯em)/ X m,j count(cj, ¯em) P(¯em|cj) = count(cj, ¯em)/ X m count(cj, ¯em) The alignment entropy of the transliteration corpus is the weighted average of the entropy values for all Chinese tokens: H = − X j P(cj) X m P(¯em|cj) log P(¯em|cj) = − X m,j P(¯em, cj) log P(¯em|cj) (7) Alignment entropy indicates the uncertainty of mapping between the English and Chinese tokens resulting from alignment. We expect and will show that this estimate is a good indicator of the alignment quality, and is as effective as the Fscore, but without the need for a gold standard reference. A lower alignment entropy suggests that each Chinese token tends to be mapped to fewer distinct English tokens, reflecting better consistency. We expect a good alignment to have a sharp cross-lingual mapping with low alignment entropy. 5 Experiments We use two transliteration corpora: Xinhua corpus (Xinhua News Agency, 1992) of 37,637 personal name pairs and LDC Chinese-English 139 named entity list LDC2005T34 (Linguistic Data Consortium, 2005), containing 673,390 personal name pairs. The LDC corpus is referred to as LDC05 for short hereafter. For the results to be comparable with other studies, we follow the same splitting of Xinhua corpus as that in (Li et al., 2007b) having a training and testing set of 34,777 and 2,896 names respectively. In contrast to the well edited Xinhua corpus, LDC05 contains erroneous entries. We have manually verified and corrected around 240,000 pairs to clean up the corpus. As a result, we arrive at a set of 560,768 EnglishChinese (EC) pairs that follow the Chinese phonetic rules, and a set of 83,403 English-Japanese Kanji (EJ) pairs, which follow the Japanese phonetic rules, and the rest 29,219 pairs (REST) being labeled as incorrect transliterations. Next we conduct three experiments to study 1) alignment entropy vs. F-score, 2) the impact of alignment quality on transliteration accuracy, and 3) how to validate transliteration using alignment metrics. 5.1 Alignment entropy vs. F-score As mentioned earlier, for English-Chinese grapheme alignment, the main task is to align English graphemes to Chinese phonemes. 
Phonetic transcription for the English names in Xinhua corpus are obtained by a grapheme-to-phoneme (G2P) converter (Lenzo, 1997), which generates phoneme sequence without providing the exact correspondence between the graphemes and phonemes. G2P converter is trained on the CMU dictionary (Lenzo, 2008). We align English grapheme and phonetic representations e −ep with the affinity alignment technique (Section 3.1) in 3 iterations. We further align the English and Chinese phonetic representations ep −cp via both affinity and phonological alignment techniques, by carrying out 6 and 7 iterations respectively. The alignment methods are schematically shown in Figure 3. To study how alignment entropy varies according to different quality of alignment, we would like to have many different alignment results. We pair the intermediate results from the e −ep and ep −cp alignment iterations (see Figure 3) to form e −ep −cp alignments between English graphemes and Chinese phonemes and let them converge through few more iterations, as shown in Figure 4. In this way, we arrive at a total of 114 phonological and 80 affinity alignments of different quality. {cpk} {ei} English graphemes {epn} English phonemes Chinese phonemes affinity alignment affinity alignment e −ep iteration 1 e −ep iteration 2 e −ep iteration 3 ep −cp iteration 1 ep −cp iteration 2 ... ep −cp iteration 6 phonological alignment ep −cp iteration 1 ep −cp iteration 2 ... ep −cp iteration 7 Figure 3: Aligning English graphemes to phonemes e−ep and English phonemes to Chinese phonemes ep−cp. Intermediate e−ep and ep−cp alignments are used for producing e −ep −cp alignments. e −ep alignments ep −cp affinity / phonological alignments iteration 1 iteration 2 iteration 3 iteration 1 iteration 2 iteration n ... ... calculating d(ei, cpk) affinity alignment iteration 1 iteration 2 ... e −ep −cp etc Figure 4: Example of aligning English graphemes to Chinese phonemes. Each combination of e−ep and ep −cp alignments is used to derive the initial distance d(ei, cpk), resulting in several e−ep−cp alignments due to the affinity alignment iterations. We have manually aligned a random set of 3,000 transliteration pairs from the Xinhua training set to serve as the gold standard, on which we calculate the precision, recall and F-score as well as alignment entropy for each alignment. Each alignment is reflected as a data point in Figures 5a and 5b. From the figures, we can observe a clear correlation between the alignment entropy and Fscore, that validates the effectiveness of alignment entropy as an evaluation metric. Note that we don’t need the gold standard reference for reporting the alignment entropy. We also notice that the data points seem to form clusters inside which the value of F-score changes insignificantly as the alignment entropy changes. Further investigation reveals that this could be due to the limited number of entries in the gold standard. The 3,000 names in the gold standard are not enough to effectively reflect the change across different alignments. F-score requires a large gold standard which is not always available. In contrast, because the alignment entropy doesn’t depend on the gold standard, one can easily report the alignment performance on any unaligned parallel corpus. 
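A minimal sketch of the alignment entropy of equation (7), computed directly from the cross-lingual token counts gathered from an aligned corpus. The dictionary layout and the use of the natural logarithm are assumptions of this sketch (the paper does not fix the base).

```python
import math
from collections import defaultdict

def alignment_entropy(counts):
    """counts[(cj, em)] = number of times Chinese token cj was aligned to
    English token em.  Returns H = -sum_{m,j} P(em, cj) * log P(em | cj)."""
    total = sum(counts.values())
    per_cj = defaultdict(float)
    for (cj, _em), n in counts.items():
        per_cj[cj] += n
    h = 0.0
    for (cj, _em), n in counts.items():
        p_joint = n / total        # P(em, cj)
        p_cond = n / per_cj[cj]    # P(em | cj)
        h -= p_joint * math.log(p_cond)
    return h
```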
140 !"#$% !"#&% !"#'% !"##% !"(!% !"($% !"(&% $")*% $"&*% $"**% $"'*% !"#$%&' ()*+,-',./',.&%01 2345 2365 (a) 80 affinity alignments !"#$% !"#&% !"#'% !"##% !"(!% !"($% !"(&% $")*% $"&*% $"**% $"'*% !"#$%&'%()'%(*+,./01 ./21 3456+*' (b) 114 phonological alignments Figure 5: Correlation between F-score and alignment entropy for Xinhua training set alignments. Results for precision and recall have similar trends . 5.2 Impact of alignment quality on transliteration accuracy We now further study how the alignment affects the generative transliteration model in the framework of the joint source-channel model (Li et al., 2004). This model performs transliteration by maximizing the joint probability of the source and target names P({ei}, {cj}), where the source and target names are sequences of English and Chinese grapheme tokens. The joint probability is expressed as a chain product of a series of conditional probabilities of token pairs P({ei}, {cj}) = P((¯ek, ck)|(¯ek−1, ck−1)), k = 1 . . . N, where we limit the history to one preceding pair, resulting in a bigram model. The conditional probabilities for token pairs are estimated from the aligned training corpus. We use this model because it was shown to be simple yet accurate (Ekbal et al., 2006; Li et al., 2007b). We train a model for each of the 114 phonological alignments and the 80 affinity alignments in Section 5.1 and conduct transliteration experiment on the Xinhua test data. During transliteration, an input English name is first decoded into a lattice of all possible English and Chinese grapheme token pairs. Then the joint source-channel transliteration model is used to score the lattice to obtain a ranked list of m most likely Chinese transliterations (m-best list). We measure transliteration accuracy as the mean reciprocal rank (MRR) (Kantor and Voorhees, 2000). If there is only one correct Chinese transliteration of the k-th English word and it is found at the rk-th position in the m-best list, its reciprocal rank is 1/rk. If the list contains no correct transliterations, the reciprocal rank is 0. In case of multiple correct transliterations, we take the one that gives the highest reciprocal rank. MRR is the average of the reciprocal ranks across all words in the test set. It is commonly used as a measure of transliteration accuracy, and also allows us to make a direct comparison with other reported work (Li et al., 2007b). We take m = 20 and measure MRR on Xinhua test set for each alignment of Xinhua training set as described in Section 5.1. We report MRR and the alignment entropy in Figures 6a and 7a for the affinity and phonological alignments respectively. The highest MRR we achieve is 0.771 for affinity alignments and 0.773 for phonological alignments. This is a significant improvement over the MRR of 0.708 reported in (Li et al., 2007b) on the same data. We also observe that the phonological alignment technique produces, on average, better alignments than the affinity alignment technique in terms of both the alignment entropy and MRR. We also report the MRR and F-scores for each alignment in Figures 6b and 7b, from which we observe that alignment entropy has stronger correlation with MRR than F-score does. The Spearman’s rank correlation coefficients are −0.89 and −0.88 for data in Figure 6a and 7a respectively. This once again demonstrates the desired property of alignment entropy as an evaluation metric of alignment. 
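The mean reciprocal rank used above can be computed as in the following minimal sketch; the data layout (a ranked m-best candidate list plus a set of acceptable reference transliterations per test word) is an assumption of this sketch.

```python
def mean_reciprocal_rank(results):
    """results: iterable of (candidates, references) pairs, where candidates
    is the ranked m-best list for one word and references is the set of
    correct transliterations.  The reciprocal rank is 1/r for the best-ranked
    correct candidate, or 0 if none of the m candidates is correct."""
    total = 0.0
    count = 0
    for candidates, references in results:
        rr = 0.0
        for rank, cand in enumerate(candidates, start=1):
            if cand in references:
                rr = 1.0 / rank
                break
        total += rr
        count += 1
    return total / count if count else 0.0
```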
To validate our findings from Xinhua corpus, we further carry out experiments on the EC set of LDC05 containing 560,768 entries. We split the set into 5 almost equal subsets for crossvalidation: in each of 5 experiments one subset is used for testing and the remaining ones for training. Since LDC05 contains one-to-many EnglishChinese transliteration pairs, we make sure that an English name only appears in one subset. Note that the EC set of LDC05 contains many names of non-English, and, generally, nonEuropean origin. This makes the G2P converter less accurate, as it is trained on an English phonetic dictionary. We therefore only apply the affinity alignment technique to align the EC set. We 141 !"#$!% !"#$$% !"#&!% !"#&$% !"##!% !"##$% '"($% '")$% '"$$% '"&$% MRR Alignment
entropy (a) 80 affinity alignments !"#$!% !"#$$% !"#&!% !"#&$% !"##!% !"##$% !"'(% !"')% !"'&% !"''% !"*!% !"*(% !"*)% MRR F‐score (b) 80 affinity alignments Figure 6: Mean reciprocal ratio on Xinhua test set vs. alignment entropy and F-score for models trained with different affinity alignments. use each iteration of the alignment in the transliteration modeling and present the resulting MRR along with alignment entropy in Figure 8. The MRR results are the averages of five values produced in the five-fold cross-validations. We observe a clear correlation between the alignment entropy and transliteration accuracy expressed by MRR on LDC05 corpus, similar to that on Xinhua corpus, with the Spearman’s rank correlation coefficient of −0.77. We obtain the highest average MRR of 0.720 on the EC set. 5.3 Validating transliteration using alignment measure Transliteration validation is a hypothesis test that decides whether a given transliteration pair is genuine or not. Instead of using the lexicon frequency (Knight and Graehl, 1998) or Web statistics (Qu and Grefenstette, 2004), we propose validating transliteration pairs according to the alignment distance D between the aligned English graphemes and Chinese phonemes (see equations (2) and (5)). A distance function d(ei, cpk) is established from each alignment on the Xinhua training set as discussed in Section 5.2. An audit of LDC05 corpus groups the corpus into three sets: an English-Chinese (EC) set of 560,768 samples, an English-Japanese (EJ) set of 83,403 samples and the REST set of 29,219 !"#$!% !"#$$% !"#&!% !"#&$% !"##!% !"##$% '"($% '")$% '"$$% '"&$% MRR Alignment
[Figure 7: Mean reciprocal ratio on Xinhua test set vs. alignment entropy (panel a) and F-score (panel b) for models trained with the 114 phonological alignments.]

[Figure 8: Mean reciprocal ratio vs. alignment entropy for alignments of the EC set.]

samples that are not transliteration pairs. We mark the EC name pairs as genuine and the rest 112,622 name pairs that do not follow the Chinese phonetic rules as false transliterations, thus creating the ground truth labels for an English-Chinese transliteration validation experiment. In other words, LDC05 has 560,768 genuine transliteration pairs and 112,622 false ones. We run one iteration of alignment over LDC05 (both genuine and false) with the distance function d(ei, cpk) derived from the affinity matrix of one aligned Xinhua training set. In this way, each transliteration pair in LDC05 provides an alignment distance. One can expect that a genuine transliteration pair typically aligns well, leading to a low distance, while a false transliteration pair will do otherwise. To remove the effect of word length, we normalize the distance by the English name length, the Chinese phonetic transcription length, and the sum of both, producing score1, score2 and score3 respectively.

[Figure 9: detection error tradeoff (DET) curves, miss probability (%) vs. false alarm probability (%). Panel (a), DET with score1, score2 and score3: EER 7.13% for score1, 4.48% for score2 and 4.80% for score3. Panel (b), DET results for three alignments of different quality: entropy 2.396 / MRR 0.773 / EER 4.48%; entropy 2.529 / MRR 0.764 / EER 4.52%; entropy 2.625 / MRR 0.754 / EER 4.70%.]
4.70% (b) DET results vs. three different alignment quality. Figure 9: Detection error tradeoff (DET) curves for transliteration validation on LDC05. We can now classify each LDC05 name pair as genuine or false by having a hypothesis test. When the test score is lower than a pre-set threshold, the name pair is accepted as genuine, otherwise false. In this way, each pre-set threshold will present two types of errors, a false alarm and a miss-detect rate. A common way to present such results is via the detection error tradeoff (DET) curves, which show all possible decision points, and the equal error rate (EER), when false alarm and miss-detect rates are equal. Figure 9a shows three DET curves based on score1, score2 and score3 respectively for one one alignment solution on the Xinhua training set. The horizontal axis is the probability of missdetecting a genuine transliteration, while the vertical one is the probability of false-alarms. It is clear that out of the three, score2 gives the best results. We select the alignments of Xinhua training set that produce the highest and the lowest MRR. We also randomly select three other alignments that produce different MRR values from the pool of 114 phonological and 80 affinity alignments. Xinhua train set alignment Alignment entropy of Xinhua train set MRR on Xinhua test set LDC classification EER, % 1 2 3 4 5 2.396 0.773 4.48 2.529 0.764 4.52 2.586 0.761 4.51 2.621 0.757 4.71 2.625 0.754 4.70 Table 1: Equal error ratio of LDC transliteration pair validation for different alignments of Xinhua training set. We use each alignment to derive distance function d(ei, cpk). Table 1 shows the EER of LDC05 validation using score2, along with the alignment entropy of the Xinhua training set that derives d(ei, cpk), and the MRR on Xinhua test set in the generative transliteration experiment (see Section 5.2) for all 5 alignments. To avoid cluttering Figure 9b, we show the DET curves for alignments 1, 2 and 5 only. We observe that distance function derived from better aligned Xinhua corpus, as measured by both our alignment entropy metric and MRR, leads to a higher validation accuracy consistently on LDC05. 6 Conclusions We conclude that the alignment entropy is a reliable indicator of the alignment quality, as confirmed by our experiments on both Xinhua and LDC corpora. Alignment entropy does not require the gold standard reference, it thus can be used to evaluate alignments of large transliteration corpora and is possibly to give more reliable estimate of alignment quality than the F-score metric as shown in our transliteration experiment. The alignment quality of training corpus has a significant impact on the transliteration models. We achieve the highest MRR of 0.773 on Xinhua corpus with phonological alignment technique, which represents a significant performance gain over other reported results. Phonological alignment outperforms affinity alignment on clean database. We propose using alignment distance to validate transliterations. A high quality alignment on a small verified corpus such as Xinhua can be effectively used to validate a large noisy corpus, such as LDC05. We believe that this property would be useful in transliteration extraction, cross-lingual information retrieval applications. 143 References Nasreen AbdulJaleel and Leah S. Larkey. 2003. Statistical transliteration for English-Arabic cross language information retrieval. In Proc. ACM CIKM. Yaser Al-Onaizan and Kevin Knight. 2002. Machine transliteration of names in arabic text. 
In Proc. ACL Workshop: Computational Apporaches to Semitic Languages. Slaven Bilac and Hozumi Tanaka. 2004. A hybrid back-transliteration system for Japanese. In Proc. COLING, pages 597–603. Lin L. Chase. 1997. Error-responsive feedback mechanisms for speech recognizers. Ph.D. thesis, CMU. Asif Ekbal, Sudip Kumar Naskar, and Sivaji Bandyopadhyay. 2006. A modified joint source-channel model for transliteration. In Proc. COLING/ACL, pages 191–198 Wei Gao, Kam-Fai Wong, and Wai Lam. 2004. Phoneme-based transliteration of foreign names for OOV problem. In Proc. IJCNLP, pages 374–381. Long Jiang, Ming Zhou, Lee-Feng Chien, and Cheng Niu. 2007. Named entity translation with web mining and transliteration. In IJCAI, pages 1629–1634. Sung Young Jung, SungLim Hong, and Eunok Paek. 2000. An English to Korean transliteration model of extended Markov window. In Proc. COLING, volume 1. Paul. B. Kantor and Ellen. M. Voorhees. 2000. The TREC-5 confusion track: comparing retrieval methods for scanned text. Information Retrieval, 2:165– 176. Brett Kessler. 2005. Phonetic comparison algorithms. Transactions of the Philological Society, 103(2):243–260. Kevin Knight and Jonathan Graehl. 1998. Machine transliteration. Computational Linguistics, 24(4). Jin-Shea Kuo, Haizhou Li, and Ying-Kuei Yang. 2007. A phonetic similarity model for automatic extraction of transliteration pairs. ACM Trans. Asian Language Information Processing, 6(2). Patrik Lambert. 2008. Exploiting lexical information and discriminative alignment training in statistical machine translation. Ph.D. thesis, Universitat Polit`ecnica de Catalunya, Barcelona, Spain. Kevin Lenzo. 1997. t2p: text-to-phoneme converter builder. http://www.cs.cmu.edu/˜lenzo/t2p/. Kevin Lenzo. 2008. The CMU pronouncing dictionary. http://www.speech.cs.cmu.edu/cgibin/cmudict. Haizhou Li, Min Zhang, and Jian Su. 2004. A joint source-channel model for machine transliteration. In Proc. ACL, pages 159–166. Haizhou Li, Bin Ma, and Chin-Hui Lee. 2007a. A vector space modeling approach to spoken language identification. IEEE Trans. Acoust., Speech, Signal Process., 15(1):271–284. Haizhou Li, Khe Chai Sim, Jin-Shea Kuo, and Minghui Dong. 2007b. Semantic transliteration of personal names. In Proc. ACL, pages 120–127. Linguistic Data Consortium. 2005. LDC ChineseEnglish name entity lists LDC2005T34. Helen M. Meng, Wai-Kit Lo, Berlin Chen, and Karen Tang. 2001. Generate phonetic cognates to handle name entities in English-Chinese cross-language spoken document retrieval. In Proc. ASRU. Rada Mihalcea and Ted Pedersen. 2003. An evaluation exercise for word alignment. In Proc. HLT-NAACL, pages 1–10. Jong-Hoon Oh and Key-Sun Choi. 2002. An EnglishKorean transliteration model using pronunciation and contextual rules. In Proc. COLING 2002. Jong-Hoon Oh and Key-Sun Choi. 2005. Machine learning based english-to-korean transliteration using grapheme and phoneme information. IEICE Trans. Information and Systems, E88-D(7):1737– 1748. Yan Qu and Gregory Grefenstette. 2004. Finding ideographic representations of Japanese names written in Latin script via language identification and corpus validation. In Proc. ACL, pages 183–190. Tao Tao, Su-Youn Yoon, Andrew Fisterd, Richard Sproat, and ChengXiang Zhai. 2006. Unsupervised named entity transliteration using temporal and phonetic correlation. In Proc. EMNLP, pages 250–257. Paola Virga and Sanjeev Khudanpur. 2003. Transliteration of proper names in cross-lingual information retrieval. In Proc. ACL MLNER. Stephen Wan and Cornelia Maria Verspoor. 
1998. Automatic English-Chinese name transliteration for development of multilingual resources. In Proc. COLING, pages 1352–1356. M. M. Withgott and F. R. Chen. 1993. Computational models of American speech. Centre for the study of language and information. Xinhua News Agency. 1992. Chinese transliteration of foreign personal names. The Commercial Press. LiLi Xu, Atsushi Fujii, and Tetsuya Ishikawa. 2006. Modeling impression in probabilistic transliteration into Chinese. In Proc. EMNLP, pages 242–249. Su-Youn Yoon, Kyoung-Young Kim, and Richard Sproat. 2007. Multilingual transliteration using feature based phonetic method. In Proc. ACL, pages 112–119. 144
2009
16
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 145–153, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Automatic training of lemmatization rules that handle morphological changes in pre-, in- and suffixes alike Bart Jongejan CST-University of Copenhagen Njalsgade 140-142 2300 København S Denmark [email protected] Hercules Dalianis† ‡ †DSV, KTH - Stockholm University Forum 100, 164 40 Kista, Sweden ‡Euroling AB, SiteSeeker Igeldammsgatan 22c 112 49 Stockholm, Sweden [email protected] Abstract We propose a method to automatically train lemmatization rules that handle prefix, infix and suffix changes to generate the lemma from the full form of a word. We explain how the lemmatization rules are created and how the lemmatizer works. We trained this lemmatizer on Danish, Dutch, English, German, Greek, Icelandic, Norwegian, Polish, Slovene and Swedish full form-lemma pairs respectively. We obtained significant improvements of 24 percent for Polish, 2.3 percent for Dutch, 1.5 percent for English, 1.2 percent for German and 1.0 percent for Swedish compared to plain suffix lemmatization using a suffix-only lemmatizer. Icelandic deteriorated with 1.9 percent. We also made an observation regarding the number of produced lemmatization rules as a function of the number of training pairs. 1 Introduction Lemmatizers and stemmers are valuable human language technology tools to improve precision and recall in an information retrieval setting. For example, stemming and lemmatization make it possible to match a query in one morphological form with a word in a document in another morphological form. Lemmatizers can also be used in lexicography to find new words in text material, including the words’ frequency of use. Other applications are creation of index lists for book indexes as well as key word lists Lemmatization is the process of reducing a word to its base form, normally the dictionary look-up form (lemma) of the word. A trivial way to do this is by dictionary look-up. More advanced systems use hand crafted or automatically generated transformation rules that look at the surface form of the word and attempt to produce the correct base form by replacing all or parts of the word. Stemming conflates a word to its stem. A stem does not have to be the lemma of the word, but can be any trait that is shared between a group of words, so that even the group membership itself can be regarded as the group’s stem. The most famous stemmer is the Porter Stemmer for English (Porter 1980). This stemmer removes around 60 different suffixes, using rewriting rules in two steps. The paper is structured as follows: section 2 discusses related work, section 3 explains what the new algorithm is supposed to do, section 4 describes some details of the new algorithm, section 5 evaluates the results, conclusions are drawn in section 6, and finally in section 7 we mention plans for further tests and improvements. 2 Related work There have been some attempts in creating stemmers or lemmatizers automatically. Ekmekçioglu et al. (1996) have used N-gram matching for Turkish that gave slightly better results than regular rule based stemming. Theron and Cloete (1997) learned two-level rules for English, Xhosa and Afrikaans, but only single character insertions, replacements and additions were allowed. Oard et al. 
(2001) used a language independent stemming technique in a dictionary based cross language information retrieval experiment for German, French and Italian where English was the search language. A four stage backoff strategy for improving recall was intro145 duced. The system worked fine for French but not so well for Italian and German. Majumder et al. (2007) describe a statistical stemmer, YASS (Yet Another Suffix Stripper), mainly for Bengali and French, but they propose it also for Hindi and Gujarati. The method finds clusters of similar words in a corpus. The clusters are called stems. The method works best for languages that are basically suffix based. For Bengali precision was 39.3 percent better than without stemming, though no absolute numbers were reported for precision. The system was trained on a corpus containing 301 562 words. Kanis & Müller (2005) used an automatic technique called OOV Words Lemmatization to train their lemmatizer on Czech, Finnish and English data. Their algorithm uses two pattern tables to handle suffixes as well as prefixes. Plisson et al. (2004) presented results for a system using Ripple Down Rules (RDR) to generate lemmatization rules for Slovene, achieving up to 77 percent accuracy. Matjaž et al. (2007) present an RDR system producing efficient suffix based lemmatizers for 14 languages, three of which (English, German and Slovene) our algorithm also has been tested with. Stempel (Białecki 2004) is a stemmer for Polish that is trained on Polish full form – lemma pairs. When tested with inflected out-ofvocabulary (OOV) words Stempel produces 95.4 percent correct stems, of which about 81 percent also happen to be correct lemmas. Hedlund (2001) used two different approaches to automatically find stemming rules from a corpus, for both Swedish and English. Unfortunately neither of these approaches did beat the hand crafted rules in the Porter stemmer for English (Porter 1980) or the Euroling SiteSeeker stemmer for Swedish, (Carlberger et al. 2001). Jongejan & Haltrup (2005) constructed a trainable lemmatizer for the lexicographical task of finding lemmas outside the existing dictionary, bootstrapping from a training set of full form – lemma pairs extracted from the existing dictionary. This lemmatizer looks only at the suffix part of the word. Its performance was compared with a stemmer using hand crafted stemming rules, the Euroling SiteSeeker stemmer for Swedish, Danish and Norwegian, and also with a stemmer for Greek, (Dalianis & Jongejan 2006). The results showed that lemmatizer was as good as the stemmer for Swedish, slightly better for Danish and Norwegian but worse for Greek. These results are very dependent on the quality (errors, size) and complexity (diacritics, capitals) of the training data. In the current work we have used Jongejan & Haltrup’s lemmatizer as a reference, referring to it as the ‘suffix lemmatizer’. 3 Delineation 3.1 Why affix rules? German and Dutch need more advanced methods than suffix replacement since their affixing of words (inflection of words) can include both prefixing, infixing and suffixing. Therefore we created a trainable lemmatizer that handles pre- and infixes in addition to suffixes. Here is an example to get a quick idea of what we wanted to achieve with the new training algorithm. 
Suppose we have the following Dutch full form – lemma pair: afgevraagd → afvragen (Translation: wondered, to wonder) If this were the sole input given to the training program, it should produce a transformation rule like this: *ge*a*d → ***en The asterisks are wildcards and placeholders. The pattern on the left hand side contains three wildcards, each one corresponding to one placeholder in the replacement string on the right hand side, in the same order. The characters matched by a wildcard are inserted in the place kept free by the corresponding placeholder in the replacement expression. With this “set” of rules a lemmatizer would be able to construct the correct lemma for some words that had not been used during the training, such as the word verstekgezaagd (Translation: mitre cut): Word verstek ge z a ag d Pattern * ge * a * d Replacement * * * en Lemma verstek z ag en Table 1. Application of a rule to an OOV word. For most words, however, the lemmatizer would simply fail to produce any output, because not all words do contain the literal strings ge and a and a final d. We remedy this by adding a one-sizefits-all rule that says “return the input as output”: * → * 146 So now our rule set consists of two rules: *ge*a*d → ***en * → * The lemmatizer then finds the rule with the most specific pattern (see 4.2) that matches and applies only this rule. The last rule’s pattern matches any word and so the lemmatizer cannot fail to produce output. Thus, in our toy rule set consisting of two rules, the first rule handles words like gevraagd, afgezaagd, geklaagd, (all three correctly) and getalmd (incorrectly) while the second rule handles words like directeur (correctly) and zei (incorrectly). 3.2 Inflected vs. agglutinated languages A lemmatizer that only applies one rule per word is useful for inflected languages, a class of languages that includes all Indo-European languages. For these languages morphological change is not a productive process, which means that no word can be morphologically changed in an unlimited number of ways. Ideally, there are only a finite number of inflection schemes and thus a finite number of lemmatization rules should suffice to lemmatize indefinitely many words. In agglutinated languages, on the other hand, there are classes of words that in principle have innumerous word forms. One way to lemmatize such words is to peel off all agglutinated morphemes one by one. This is an iterative process and therefore the lemmatizer discussed in this paper, which applies only one rule per word, is not an obvious choice for agglutinated languages. 3.3 Supervised training An automatic process to create lemmatization rules is described in the following sections. By reserving a small part of the available training data for testing it is possible to quite accurately estimate the probability that the lemmatizer would produce the right lemma given any unknown word belonging to the language, even without requiring that the user masters the language (Kohavi 1995). On the downside, letting a program construct lemmatization rules requires an extended list of full form – lemma pairs that the program can exercise on – at least tens of thousands and possibly over a million entries (Dalianis and Jongejan 2006). 3.4 Criteria for success The main challenge for the training algorithm is that it must produce rules that accurately lemmatize OOV words. This requirement translates to two opposing tendencies during training. 
On the one hand we must trust rules with a wide basis of training examples more than rules with a small basis, which favours rules with patterns that fit many words. On the other hand we have the incompatible preference for cautious rules with rather specific patterns, because these must be better at avoiding erroneous rule applications than rules with generous patterns. The envisaged expressiveness of the lemmatization rules – allowing all kinds of affixes and an unlimited number of wildcards – turns the challenge into a difficult balancing act. In the current work we wanted to get an idea of the advantages of an affix-based algorithm compared to a suffix-only based algorithm. Therefore we have made the task as hard as possible by not allowing language specific adaptations to the algorithms and by not subdividing the training words in word classes. 4 Generation of rules and look-up data structure 4.1 Building a rule set from training pairs The training algorithm generates a data structure consisting of rules that a lemmatizer must traverse to arrive at a rule that is elected to fire. Conceptually the training process is as follows. As the data structure is being built, the full form in each training pair is tentatively lemmatized using the data structure that has been created up to that stage. If the elected rule produces the right lemma from the full form, nothing needs to be done. Otherwise, the data structure must be expanded with a rule such that the new rule a) is elected instead of the erroneous rule and b) produces the right lemma from the full form. The training process terminates when the full forms in all pairs in the training set are transformed to their corresponding lemmas. After training, the data structure of rules is made permanent and can be consulted by a lemmatizer. The lemmatizer must elect and fire rules in the same way as the training algorithm, so that all words from the training set are lemmatized correctly. It may however fail to produce the correct lemmas for words that were not in the training set – the OOV words. 147 4.2 Internal structure of rules: prime and derived rules During training the Ratcliff/Obershelp algorithm (Ratcliff & Metzener 1988) is used to find the longest non-overlapping similar parts in a given full form – lemma pair. For example, in the pair afgevraagd → afvragen the longest common substring is vra, followed by af and g. These similar parts are replaced with wildcards and placeholders: *ge*a*d → ***en Now we have the prime rule for the training pair, the least specific rule necessary to lemmatize the word correctly. Rules with more specific patterns – derived rules – can be created by adding characters and by removing or adding wildcards. A rule that is derived from another rule (derived or prime) is more specific than the original rule: Any word that is successfully matched by the pattern of a derived rule is also successfully matched by the pattern of the original rule, but the converse is not the case. This establishes a partial ordering of all rules. See Figures 1 and 2, where the rules marked ‘p’ are prime rules and those marked ‘d’ are derived. Innumerous rules can be derived from a rule with at least one wildcard in its pattern, but only a limited number can be tested in a finite time. 
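To make the pattern/replacement mechanism of Section 3.1 concrete, the sketch below interprets rules such as *ge*a*d → ***en using non-greedy regular-expression wildcards. This reproduces the example of Table 1, but it is only an illustration; the actual lemmatizer does not necessarily match patterns this way.

```python
import re

def make_rule(pattern, replacement):
    """Compile a rule like '*ge*a*d' -> '***en' into a function that returns
    the transformed word, or None when the pattern does not match."""
    literals = pattern.split("*")
    regex = re.compile("^" + "(.*?)".join(re.escape(p) for p in literals) + "$")
    pieces = replacement.split("*")  # same number of '*' as in the pattern
    def apply(word):
        m = regex.match(word)
        if m is None:
            return None
        out = pieces[0]
        for captured, piece in zip(m.groups(), pieces[1:]):
            out += captured + piece
        return out
    return apply

rule = make_rule("*ge*a*d", "***en")
fallback = make_rule("*", "*")        # the one-size-fits-all rule
print(rule("verstekgezaagd"))         # verstekzagen (as in Table 1)
print(rule("gevraagd"))               # vragen
print(rule("directeur"))              # None -> handled by the fallback rule
print(fallback("directeur"))          # directeur
```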
To keep the number of candidate rules within practical limits, we used the strategy that the pattern of a candidate is minimally different from its parent’s pattern: it can have one extra literal character or one wildcard less or replace one wildcard with one literal character. Alternatively, a candidate rule (such as the bottom rule in Figure 4) can arise by merging two rules. Within these constraints, the algorithm creates all possible candidate rules that transform one or more training words to their corresponding lemmas. 4.3 External structure of rules: partial ordering in a DAG and in a tree We tried two different data structures to store new lemmatizer rules, a directed acyclic graph (DAG) and a plain tree structure with depth first, left to right traversal. The DAG (Figure 1) expresses the complete partial ordering of the rules. There is no preferential order between the children of a rule and all paths away from the root must be regarded as equally valid. Therefore the DAG may lead to several lemmas for the same input word. For example, without the rule in the bottom part of Figure 1, the word gelopen would have been lemmatized to both lopen (correct) and gelopen (incorrect): gelopen: *ge* → ** lopen *pen → *pen gelopen By adding a derived rule as a descendent of both these two rules, we make sure that lemmatization of the word gelopen is only handled by one rule and only results in the correct lemma: gelopen: *ge*pen → **pen lopen Figure 1. Five training pairs as supporters for five rules in a DAG. The tree in Figure 2 is a simpler data structure and introduces a left to right preferential order between the children of a rule. Only one rule fires and only one lemma per word is produced. For example, because the rule *ge* → ** precedes its sibling rule *en → *, whenever the former rule is applicable, the latter rule and its descendents are not even visited, irrespective of their applicability. In our example, the former rule – and only the former rule – handles the lemmatization of gelopen, and since it produces the correct lemma an additional rule is not necessary. In contrast to the DAG, the tree implements negation: if the Nth sibling of a row of children fires, it not only means that the pattern of the Nth rule matches the word, it also means that the patterns of the N-1 preceding siblings do not match the word. Such implicit negation is not possible in the DAG, and this is probably the main reason why the experiments with the DAG-structure lead to huge numbers of rules, very little gener* → * ui → ui *ge* → ** overgegaan → overgaan *en → * uien→ ui *pen →*pen lopen → lopen *ge*pen → **pen gelopen → lopen p p p d d 148 alization, uncontrollable training times (months, not minutes!) and very low lemmatization quality. On the other hand, the experiments with the tree structure were very successful. The building time of the rules is acceptable, taking small recursive steps during the training part. The memory use is tractable and the quality of the results is good provided good training material. Figure 2. The same five training pairs as supporters for only four rules in a tree. 4.4 Rule selection criteria This section pertains to the training algorithm employing a tree. The typical situation during training is that a rule that already has been added to the tree makes lemmatization errors on some of the training words. In that case one or more corrective children have to be added to the rule1. 
If the pattern of a new child rule only matches some, but not all training words that are lemmatized incorrectly by the parent, a right sibling rule must be added. This is repeated until all training words that the parent does not lemmatize correctly are matched by the leftmost child rule or one of its siblings. A candidate child rule is faced with training words that the parent did not lemmatize correctly and, surprisingly, also supporters of the parent, because the pattern of the candidate cannot discriminate between these two groups. On the output side of the candidate appear the training pairs that are lemmatized correctly by the candidate, those that are lemmatized incor 1 If the case of a DAG, care must be taken that the complete representation of the partial ordering of rules is maintained. Any new rule not only becomes a child of the rule that it was aimed at as a corrective child, but often also of several other rules. rectly and those that do not match the pattern of the candidate. For each candidate rule the training algorithm creates a 2×3 table (see Table 2) that counts the number of training pairs that the candidate lemmatizes correctly or incorrectly or that the candidate does not match. The two columns count the training pairs that, respectively, were lemmatized incorrectly and correctly by the parent. These six parameters Nxy can be used to select the best candidate. Only four parameters are independent, because the numbers of training words that the parent lemmatized incorrectly (Nw) and correctly (Nr) are the same for all candidates. Thus, after the application of the first and most significant selection criterion, up to three more selection criteria of decreasing significance can be applied if the preceding selection ends in a tie. Parent Child Incorrect Correct (supporters) Correct Nwr Nrr Incorrect Nww Nrw Not matched Nwn Nrn Sum Nw Nr Table 2. The six parameters for rule selection among candidate rules. A large Nwr and a small Nrw are desirable. Nwr is a measure for the rate at which the updated data structure has learned to correctly lemmatize those words that previously were lemmatized incorrectly. A small Nrw indicates that only few words that previously were lemmatized correctly are spoiled by the addition of the new rule. It is less obvious how the other numbers weigh in. We have obtained the most success with criteria that first select for highest Nwr + Nrr - Nrw . If the competition ends in a tie, we select for lowest Nrr among the remaining candidates. If the competition again ends in a tie, we select for highest Nrn – Nww . Due to the marginal effect of a fourth criterion we let the algorithm randomly select one of the remaining candidates instead. The training pairs that are matched by the pattern of the winning rule become the supporters and non-supporters of that new rule and are no longer supporters or non-supporters of the parent. If the parent still has at least one nonsupporter, the remaining supporters and nonsupporters – the training pairs that the winning * → * ui → ui *ge* → ** overgegaan → overgaan gelopen → lopen *en → * uien→ ui *pen →*pen lopen → lopen p p p d 149 candidate does not match – are used to select the right sibling of the new rule. 5 Evaluation We trained the new lemmatizer using training material for Danish (STO), Dutch (CELEX), English (CELEX), German (CELEX), Greek (Petasis et al. 2003), Icelandic (IFD), Norwegian (SCARRIE), Polish (Morfologik), Slovene (Juršič et al. 2007) and Swedish (SUC). 
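Before discussing the training material further, the candidate ranking of section 4.4 can be summarized in a short sketch. The counts follow Table 2; the helper apply_rule (wildcard matching as in section 4.2) and the data layout are illustrative assumptions rather than the released implementation.

def selection_key(candidate, parent_wrong, parent_right, apply_rule):
    """Rank one candidate child rule by the criteria of section 4.4.

    parent_wrong / parent_right are lists of (full_form, lemma) pairs that
    the parent rule lemmatizes incorrectly / correctly (its non-supporters
    and supporters).  apply_rule(candidate, word) returns a lemma, or None
    if the candidate's pattern does not match.  Higher keys are better.
    """
    n = {"wr": 0, "ww": 0, "wn": 0, "rr": 0, "rw": 0, "rn": 0}
    for parent_tag, pairs in (("w", parent_wrong), ("r", parent_right)):
        for word, lemma in pairs:
            out = apply_rule(candidate, word)
            child_tag = "n" if out is None else ("r" if out == lemma else "w")
            n[parent_tag + child_tag] += 1
    return (n["wr"] + n["rr"] - n["rw"],   # 1st criterion (maximize)
            -n["rr"],                      # 2nd: lowest Nrr wins
            n["rn"] - n["ww"])             # 3rd criterion (maximize)

# best = max(candidates, key=lambda c: selection_key(c, wrong, right, apply_rule))
# remaining ties would be broken randomly, as described above.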
The guidelines for the construction of the training material are not always known to us. In some cases, we know that the full forms have been generated automatically from the lemmas. On the other hand, we know that the Icelandic data is derived from a corpus and only contains word forms occurring in that corpus. Because of the uncertainties, the results cannot be used for a quantitative comparison of the accuracy of lemmatization between languages. Some of the resources were already disambiguated (one lemma per full form) when we received the data. We decided to disambiguate the remaining resources as well. Handling homographs wisely is important in many lemmatization tasks, but there are many pitfalls. As we only wanted to investigate the improvement of the affix algorithm over the suffix algorithm, we decided to factor out ambiguity. We simply chose the lemma that comes first alphabetically and discarded the other lemmas from the available data. The evaluation was carried out by dividing the available material in training data and test data in seven different ratios, setting aside between 1.54% and 98.56% as training data and the remainder as OOV test data. (See section 7). To keep the sample standard deviation s for the accuracy below an acceptable level we used the evaluation method repeated random subsampling validation that is proposed in Voorhees (2000) and Bouckaert & Frank (2000). We repeated the training and evaluation for each ratio with several randomly chosen sets, up to 17 times for the smallest and largest ratios, because these ratios lead to relatively small training sets and test sets respectively. The same procedure was followed for the suffix lemmatizer, using the same training and test sets. Table 3 shows the results for the largest training sets. For some languages lemmatization accuracy for OOV words improved by deleting rules that are based on very few examples from the training data. This pruning was done after the training of the rule set was completed. Regarding the affix algorithm, the results for half of the languages became better with mild pruning, i.e. deleting rules with only one example. For Danish, Dutch, German, Greek and Icelandic pruning did not improve accuracy. Regarding the suffix algorithm, only English and Swedish profited from pruning. Language Suffix % Affix % Δ % N × 1000 n Icelandic 73.2±1.4 71.3±1.5 -1.9 58 17 Danish 93.2±0.4 92.8±0.2 -0.4 553 5 Norwegian 87.8±0.4 87.6±0.3 -0.2 479 6 Greek 90.2±0.3 90.4±0.4 0.2 549 5 Slovene 86.0±0.6 86.7±0.3 0.7 199 9 Swedish 91.24±0.18 92.3±0.3 1.0 478 6 German 90.3±0.5 91.46±0.17 1.2 315 7 English 87.5±0.9 89.0±1.3 1.5 76 15 Dutch 88.2±0.5 90.4±0.5 2.3 302 7 Polish 69.69±0.06 93.88±0.08 24.2 3443 2 Table 3. Accuracy for the suffix and affix algorithms. The fifth column shows the size of the available data. Of these, 98.56% was used for training and 1.44% for testing. The last column shows the number n of performed iterations, which was inversely proportional to √N with a minimum of two. 6 Some language specific notes For Polish, the suffix algorithm suffers from overtraining. The accuracy tops at about 100 000 rules, which is reached when the training set comprises about 1 000 000 pairs. Figure 3. Accuracy vs. number of rules for Polish Upper swarm of data points: affix algorithm. Lower swarm of data points: suffix algorithm. Each swarm combines results from six rule sets with varying amounts of pruning (no pruning and pruning with cut-off = 1..5). 
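As a side note before returning to the Polish results: the repeated random subsampling protocol used throughout this section can be sketched as follows. The callables train_rules and lemmatize are placeholders for the trainer and lemmatizer of section 4, and the per-ratio choice of the number of repetitions is omitted.

import random, statistics

def repeated_subsampling(pairs, train_fraction, repetitions, train_rules, lemmatize):
    """Repeated random subsampling validation of a lemmatizer.

    pairs: list of (full_form, lemma).  train_rules(train_pairs) builds a
    rule set; lemmatize(rules, word) returns the predicted lemma.
    Returns mean accuracy on the held-out (OOV) part and the sample std dev.
    """
    accuracies = []
    for _ in range(repetitions):
        shuffled = random.sample(pairs, len(pairs))
        cut = int(len(shuffled) * train_fraction)
        train, test = shuffled[:cut], shuffled[cut:]
        rules = train_rules(train)
        correct = sum(lemmatize(rules, w) == l for w, l in test)
        accuracies.append(correct / len(test))
    return statistics.mean(accuracies), statistics.stdev(accuracies)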
If more training pairs are added, the number of rules grows, but the accuracy falls. The affix algorithm shows no sign of overtraining, even 150 though the Polish material comprised 3.4 million training pairs, more than six times the number of the second language on the list, Danish. See Figure 3. The improvement of the accuracy for Polish was tremendous. The inflectional paradigm in Polish (as in other Slavic languages) can be left factorized, except for the superlative. However, only 3.8% of the words in the used Polish data have the superlative forming prefix naj, and moreover this prefix is only removed from adverbs and not from the much more numerous adjectives. The true culprit of the discrepancy is the great number (> 23%) of words in the Polish data that have the negative prefix nie, which very often does not recur in the lemma. The suffix algorithm cannot handle these 23% correctly. The improvement over the suffix lemmatizer for the case of German is unassuming. To find out why, we looked at how often rules with infix or prefix patterns fire and how well they are doing. We trained the suffix algorithm with 9/10 of the available data and tested with the remaining 1/10, about 30 000 words. Of these, 88% were lemmatized correctly (a number that indicates the smaller training set than in Table 3). German Dutch Acc. % Freq % Acc. % Freq % all 88.1 100.0 87.7 100.0 suffixonly 88.7 94.0 88.1 94.9 prefix 79.9 4.4 80.9 2.4 infix 83.3 2.3 77.4 3.0 ä ö ü 92.8 0.26 N/A 0.0 ge infix 68.6 0.94 77.9 2.6 Table 4. Prevalence of suffix-only rules, rules specifying a prefix, rules specifying an infix and rules specifying infixes containing either ä, ö or ü or the letter combination ge. Almost 94% of the lemmas were created using suffix-only rules, with an accuracy of almost 89%. Less than 3% of the lemmas were created using rules that included at least one infix subpattern. Of these, about 83% were correctly lemmatized, pulling the average down. We also looked at two particular groups of infix-rules: those including the letters ä, ö or ü and those with the letter combination ge. The former group applies to many words that display umlaut, while the latter applies to past participles. The first group of rules, accounting for 11% of all words handled by infix rules, performed better than average, about 93%, while the latter group, accounting for 40% of all words handled by infix rules, performed poorly at 69% correct lemmas. Table 4 summarizes the results for German and the closely related Dutch language. 7 Self-organized criticality Over the whole range of training set sizes the number of rules goes like d N C. with C < 0 , and N the number of training pairs. The value of C and d not only depended on the chosen algorithm, but also on the language. Figure 4 shows how the number of generated lemmatization rules for Polish grows as a function of the number of training pairs. Figure 4. Number of rules vs. number of training pairs for Polish (double logarithmic scale). Upper row: unpruned rule sets Lower row: heavily pruned rule sets (cut-off=5) There are two rows of data, each row containing seven data points. The rules are counted after training with 1.54 percent of the available data and then repeatedly doubling to 3.08, 6.16, 12.32, 24.64, 49.28 and 98.56 percent of the available data. The data points in the upper row designate the number of rules resulting from the training process. The data points in the lower row arise by pruning rules that are based on less than six examples from the training set. 
The power law for the upper row of data points for Polish in Figure 4 is

N_rules = 0.80 · N_training^0.87

As a comparison, for Icelandic the power law for the unpruned set of rules is

N_rules = 1.32 · N_training^0.90

These power law expressions are derived for the affix algorithm. For the suffix algorithm the exponent in the Polish power law expression is very close to 1 (0.98), which indicates that the suffix lemmatizer is not good at all at generalizing over the Polish training data: the number of rules grows almost proportionally with the number of training words. (And, as Figure 3 shows, to no avail.) On the other hand, the suffix lemmatizer fares better than the affix algorithm for Icelandic data, because in that case the exponent in the power law expression is lower: 0.88 versus 0.90. The power law is explained by self-organized criticality (Bak et al. 1987, 1988). Rule sets that originate from training sets that only differ in a single training example can be dissimilar to any degree depending on whether and where the difference is tipping the balance between competing rule candidates. Whether one or the other rule candidate wins has a very significant effect on the parts of the tree that emanate as children or as siblings from the winning node. If the difference has an effect close to the root of the tree, a large expanse of the tree is affected. If the difference plays a role closer to a leaf node, only a small patch of the tree is affected. The effect of adding a single training example can be compared with dropping a single rice corn on top of a pile of rice, which can create an avalanche of unpredictable size. 8 Conclusions Affix rules perform better than suffix rules if the language has a heavy pre- and infix morphology and the size of the training data is big. The new algorithm worked very well with the Polish Morfologik dataset and compares well with the Stempel algorithm (Białecki 2008). Regarding Dutch and German we have observed that the affix algorithm most often applies suffix-only rules to OOV words. We have also observed that words lemmatized this way are lemmatized better than average. The remaining words often need morphological changes in more than one position, for example both in an infix and a suffix. Although these changes are correlated by the inflectional rules of the language, the number of combinations is still large, while at the same time the number of training examples exhibiting such combinations is relatively small. Therefore the more complex rules involving infix or prefix subpatterns or combinations thereof are less well-founded than the simple suffix-only rules. The lemmatization accuracy of the complex rules will therefore in general be lower than that of the suffix-only rules. The reason why the affix algorithm is still better than the algorithm that only considers suffix rules is that the affix algorithm only generates suffix-only rules from words with suffix-only morphology. The suffix-only algorithm is not able to generalize over training examples that do not fulfil this condition and generates many rules based on very few examples. Consequently, everything else being equal, the set of suffix-only rules generated by the affix algorithm must be of higher quality than the set of rules generated by the suffix algorithm. The new affix algorithm has fewer rules supported by only one example from the training data than the suffix algorithm.
This means that the new algorithm is good at generalizing over small groups of words with exceptional morphology. On the other hand, the bulk of ‘normal’ training words must be bigger for the new affix based lemmatizer than for the suffix lemmatizer. This is because the new algorithm generates immense numbers of candidate rules with only marginal differences in accuracy, requiring many examples to find the best candidate. When we began experimenting with lemmatization rules with unrestricted numbers of affixes, we could not know whether the limited amount of available training data would be sufficient to fix the enormous amount of free variables with enough certainty to obtain higher quality results than obtainable with automatically trained lemmatizers allowing only suffix transformations. However, the results that we have obtained with the new affix algorithm are on a par with or better than those of the suffix lemmatizer. There is still room for improvements as only part of the parameter space of the new algorithm has been searched. The case of Polish shows the superiority of the new algorithm, whereas the poor results for Icelandic, a suffix inflecting language with many inflection types, were foreseeable, because we only had a small training set. 9 Future work Work with the new affix lemmatizer has until now focused on the algorithm. To really know if the carried out theoretical work is valuable we would like to try it out in a real search setting in a search engine and see if the users appreciate the new algorithm’s results. 152 References Per Bak, Chao Tang and Kurt Wiesenfeld. 1987. SelfOrganized Criticality: An Explanation of 1/f Noise, Phys. Rev. Lett., vol. 59,. pp. 381-384, 1987 Per Bak, Chao Tang and Kurt Wiesenfeld . 1988. Phys. Rev. A38, (1988), pp. 364-374 Andrzej Białecki, 2004, Stempel - Algorithmic Stemmer for Polish Language http://www.getopt.org/stempel/ Remco R. Bouckaert and Eibe Frank. 2000. Evaluating the Replicability of Significance Tests for Comparing Learning Algorithms. In H. Dai, R. Srikant, & C. Zhang (Eds.), Proc. 8th Pacific-Asia Conference, PAKDD 2004, Sydney, Australia, May 26-28, 2004 (pp. 3-12). Berlin: Springer. Johan Carlberger, Hercules Dalianis, Martin Hassel, and Ola Knutsson. 2001. Improving Precision in Information Retrieval for Swedish using Stemming. In the Proceedings of NoDaLiDa-01 - 13th Nordic Conference on Computational Linguistics, May 21-22, Uppsala, Sweden. Celex: http://celex.mpi.nl/ Hercules Dalianis and Bart Jongejan 2006. Handcrafted versus Machine-learned Inflectional Rules: the Euroling-SiteSeeker Stemmer and CST's Lemmatiser, in Proceedings of the International Conference on Language Resources and Evaluation, LREC 2006. F. Çuna Ekmekçioglu, Mikael F. Lynch, and Peter Willett. 1996. Stemming and N-gram matching for term conflation in Turkish texts. Information Research, 7(1) pp 2-6. Niklas Hedlund 2001. Automatic construction of stemming rules, Master Thesis, NADA-KTH, Stockholm, TRITA-NA-E0194. IFD: Icelandic Centre for Language Technology, http://tungutaekni.is/researchsystems/rannsoknir_1 2en.html Bart Jongejan and Dorte Haltrup. 2005. The CST Lemmatiser. Center for Sprogteknologi, University of Copenhagen version 2.7 (August, 23 2005) http://cst.dk/online/lemmatiser/cstlemma.pdf Jakub Kanis and Ludek Müller. 2005. Automatic Lemmatizer Construction with Focus on OOV Words Lemmatization in Text, Speech and Dialogue, Lecture Notes in Computer Science, Berlin / Heidelberg, pp 132-139 Ron Kohavi. 1995. 
A study of cross-validation and bootstrap for accuracy estimation and model selection. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence 2 (12): 1137–1143, Morgan Kaufmann, San Mateo. Prasenjit Majumder, Mandar Mitra, Swapan K. Parui, Gobinda Kole, Pabitra Mitra, and Kalyankumar Datta. 2007. YASS: Yet another suffix stripper. ACM Transactions on Information Systems , Volume 25 , Issue 4, October 2007. Juršič Matjaž, Igor Mozetič, and Nada Lavrač. 2007. Learning ripple down rules for efficient lemmatization In proceeding of the Conference on Data Mining and Data Warehouses (SiKDD 2007), October 12, 2007, Ljubljana, Slovenia Morfologik: Polish morphological analyzer http://mac.softpedia.com/get/WordProcessing/Morfologik.shtml Douglas W. Oard, Gina-Anne Levow, and Clara I. Cabezas. 2001. CLEF experiments at Maryland: Statistical stemming and backoff translation. In Cross-language information retrieval and evaluation: Proceeding of the Clef 2000 workshops Carol Peters Ed. Springer Verlag pp. 176-187. 2001. Georgios Petasis, Vangelis Karkaletsis , Dimitra Farmakiotou , Ion Androutsopoulos and Constantine D. Spyropoulo. 2003. A Greek Morphological Lexicon and its Exploitation by Natural Language Processing Applications. In Lecture Notes on Computer Science (LNCS), vol.2563, "Advances in Informatics - Post-proceedings of the 8th Panhellenic Conference in Informatics", Springer Verlag. Joël Plisson, Nada Lavrač, and Dunja Mladenic. 2004, A rule based approach to word lemmatization, Proceedings of the 7th International Multiconference Information Society, IS-2004, Institut Jozef Stefan, Ljubljana, pp.83-6. Martin F. Porter 1980. An algorithm for suffix stripping. Program, vol 14, no 3, pp 130-130. John W. Ratcliff and David Metzener, 1988. Pattern Matching: The Gestalt Approach, Dr. Dobb's Journal, page 46, July 1988. SCARRIE 2009. Scandinavian Proofreading Tools http://ling.uib.no/~desmedt/scarrie/ STO: http://cst.ku.dk/sto_ordbase/ SUC 2009. Stockholm Umeå corpus, http://www.ling.su.se/staff/sofia/suc/suc.html Pieter Theron and Ian Cloete 1997 Automatic acquisition of two-level morphological rules, Proceedings of the fifth conference on Applied natural language processing, p.103-110, March 31-April 03, 1997, Washington, DC. Ellen M. Voorhees. 2000. Variations in relevance judgments and the measurement of retrieval effectiveness, J. of Information Processing and Management 36 (2000) pp 697-716 153
2009
17
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 154–162, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Revisiting Pivot Language Approach for Machine Translation Hua Wu and Haifeng Wang Toshiba (China) Research and Development Center 5/F., Tower W2, Oriental Plaza, Beijing, 100738, China {wuhua, wanghaifeng}@rdc.toshiba.com.cn Abstract This paper revisits the pivot language approach for machine translation. First, we investigate three different methods for pivot translation. Then we employ a hybrid method combining RBMT and SMT systems to fill up the data gap for pivot translation, where the sourcepivot and pivot-target corpora are independent. Experimental results on spoken language translation show that this hybrid method significantly improves the translation quality, which outperforms the method using a source-target corpus of the same size. In addition, we propose a system combination approach to select better translations from those produced by various pivot translation methods. This method regards system combination as a translation evaluation problem and formalizes it with a regression learning model. Experimental results indicate that our method achieves consistent and significant improvement over individual translation outputs. 1 Introduction Current statistical machine translation (SMT) systems rely on large parallel and monolingual training corpora to produce translations of relatively higher quality. Unfortunately, large quantities of parallel data are not readily available for some languages pairs, therefore limiting the potential use of current SMT systems. In particular, for speech translation, the translation task often focuses on a specific domain such as the travel domain. It is especially difficult to obtain such a domain-specific corpus for some language pairs such as Chinese to Spanish translation. To circumvent the data bottleneck, some researchers have investigated to use a pivot language approach (Cohn and Lapata, 2007; Utiyama and Isahara, 2007; Wu and Wang 2007; Bertoldi et al., 2008). This approach introduces a third language, named the pivot language, for which there exist large source-pivot and pivot-target bilingual corpora. A pivot task was also designed for spoken language translation in the evaluation campaign of IWSLT 2008 (Paul, 2008), where English is used as a pivot language for Chinese to Spanish translation. Three different pivot strategies have been investigated in the literature. The first is based on phrase table multiplication (Cohn and Lapata 2007; Wu and Wang, 2007). It multiples corresponding translation probabilities and lexical weights in source-pivot and pivot-target translation models to induce a new source-target phrase table. We name it the triangulation method. The second is the sentence translation strategy, which first translates the source sentence to the pivot sentence, and then to the target sentence (Utiyama and Isahara, 2007; Khalilov et al., 2008). We name it the transfer method. The third is to use existing models to build a synthetic source-target corpus, from which a source-target model can be trained (Bertoldi et al., 2008). For example, we can obtain a source-pivot corpus by translating the pivot sentence in the source-pivot corpus into the target language with pivot-target translation models. We name it the synthetic method. 
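As a rough sketch of the second (transfer) strategy, the cascade over k-best lists looks as below. The decoder interfaces are illustrative assumptions, and the summed model scores stand in for the weighted feature-level combination described later in Section 2.2; nothing here is tied to a specific toolkit.

def transfer_translate(src, sp_decoder, pt_decoder, n=10, m=10):
    """Transfer method: source -> pivot -> target over n x m candidates.

    sp_decoder(sentence, k) and pt_decoder(sentence, k) are assumed to
    return k-best lists of (hypothesis, model_score).  The n x m target
    candidates are rescored with the sum of the two model scores, a
    simplification of the weighted feature combination used in practice.
    """
    best_target, best_score = None, float("-inf")
    for pivot, sp_score in sp_decoder(src, n):
        for target, pt_score in pt_decoder(pivot, m):
            if sp_score + pt_score > best_score:
                best_target, best_score = target, sp_score + pt_score
    return best_target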
The working condition with the pivot language approach is that the source-pivot and pivot-target parallel corpora are independent, in the sense that they are not derived from the same set of sentences, namely independently sourced corpora. Thus, some linguistic phenomena in the sourcepivot corpus will lost if they do not exist in the pivot-target corpus, and vice versa. In order to fill up this data gap, we make use of rule-based machine translation (RBMT) systems to translate the pivot sentences in the source-pivot or pivot-target 154 corpus into target or source sentences. As a result, we can build a synthetic multilingual corpus, which can be used to improve the translation quality. The idea of using RBMT systems to improve the translation quality of SMT sysems has been explored in Hu et al. (2007). Here, we re-examine the hybrid method to fill up the data gap for pivot translation. Although previous studies proposed several pivot translation methods, there are no studies to combine different pivot methods for translation quality improvement. In this paper, we first compare the individual pivot methods and then investigate to improve pivot translation quality by combining the outputs produced by different systems. We propose to regard system combination as a translation evaluation problem. For translations from one of the systems, this method uses the outputs from other translation systems as pseudo references. A regression learning method is used to infer a function that maps a feature vector (which measures the similarity of a translation to the pseudo references) to a score that indicates the quality of the translation. Scores are first generated independently for each translation, then the translations are ranked by their respective scores. The candidate with the highest score is selected as the final translation. This is achieved by optimizing the regression learning model’s output to correlate against a set of training examples, where the source sentences are provided with several reference translations, instead of manually labeling the translations produced by various systems with quantitative assessments as described in (Albrecht and Hwa, 2007; Duh, 2008). The advantage of our method is that we do not need to manually label the translations produced by each translation system, therefore enabling our method suitable for translation selection among any systems without additional manual work. We conducted experiments for spoken language translation on the pivot task in the IWSLT 2008 evaluation campaign, where Chinese sentences in travel domain need to be translated into Spanish, with English as the pivot language. Experimental results show that (1) the performances of the three pivot methods are comparable when only SMT systems are used. However, the triangulation method and the transfer method significantly outperform the synthetic method when RBMT systems are used to improve the translation quality; (2) The hybrid method combining SMT and RBMT system for pivot translation greatly improves the translation quality. And this translation quality is higher than that of those produced by the system trained with a real Chinese-Spanish corpus; (3) Our sentence-level translation selection method consistently and significantly improves the translation quality over individual translation outputs in all of our experiments. Section 2 briefly introduces the three pivot translation methods. Section 3 presents the hybrid method combining SMT and RBMT systems. 
Section 4 describes the translation selection method. Experimental results are presented in Section 5, followed by a discussion in Section 6. The last section draws conclusions. 2 Pivot Methods for Phrase-based SMT 2.1 Triangulation Method Following the method described in Wu and Wang (2007), we train the source-pivot and pivot-target translation models using the source-pivot and pivot-target corpora, respectively. Based on these two models, we induce a source-target translation model, in which two important elements need to be induced: phrase translation probability and lexical weight. Phrase Translation Probability We induce the phrase translation probability by assuming the independence between the source and target phrases when given the pivot phrase. φ(¯s|¯t) = X ¯p φ(¯s|¯p)φ(¯p|¯t) (1) Where ¯s, ¯p and ¯t represent the phrases in the languages Ls, Lp and Lt, respectively. Lexical Weight According to the method described in Koehn et al. (2003), there are two important elements in the lexical weight: word alignment information a in a phrase pair (¯s, ¯t) and lexical translation probability w(s|t). Let a1 and a2 represent the word alignment information inside the phrase pairs (¯s, ¯p) and (¯p, ¯t) respectively, then the alignment information inside (¯s, ¯t) can be obtained as shown in Eq. (2). a = {(s, t)|∃p : (s, p) ∈a1 & (p, t) ∈a2} (2) Based on the the induced word alignment information, we estimate the co-occurring frequencies of word pairs directly from the induced phrase 155 pairs. Then we estimate the lexical translation probability as shown in Eq. (3). w(s|t) = count(s, t) P s′ count(s′, t) (3) Where count(s, t) represents the co-occurring frequency of the word pair (s, t). 2.2 Transfer Method The transfer method first translates from the source language to the pivot language using a source-pivot model, and then from the pivot language to the target language using a pivot-target model. Given a source sentence s, we can translate it into n pivot sentences p1, p2, ..., pn using a source-pivot translation system. Each pi can be translated into m target sentences ti1, ti2, ..., tim. We rescore all the n × m candidates using both the source-pivot and pivot-target translation scores following the method described in Utiyama and Isahara (2007). If we use hfp and hpt to denote the features in the source-pivot and pivot-target systems, respectively, we get the optimal target translation according to the following formula. ˆt = argmax t L X k=1 (λsp k hsp k (s, p)+λpt k hpt k (p, t)) (4) Where L is the number of features used in SMT systems. λsp and λpt are feature weights set by performing minimum error rate training as described in Och (2003). 2.3 Synthetic Method There are two possible methods to obtain a sourcetarget corpus using the source-pivot and pivottarget corpora. One is to obtain target translations for the source sentences in the source-pivot corpus. This can be achieved by translating the pivot sentences in source-pivot corpus to target sentences with the pivot-target SMT system. The other is to obtain source translations for the target sentences in the pivot-target corpus using the pivot-source SMT system. And we can combine these two source-target corpora to produced a final synthetic corpus. Given a pivot sentence, we can translate it into n source or target sentences. These n translations together with their source or target sentences are used to create a synthetic bilingual corpus. Then we build a source-target translation model using this corpus. 
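A minimal sketch of the phrase-level triangulation in Eq. (1) is given below. The flat dictionary layout and the toy phrases are assumptions for illustration only; a full implementation would also induce the lexical weights via Eqs. (2) and (3) and would typically prune the induced table.

from collections import defaultdict

def triangulate(sp_table, pt_table):
    """Induce phi(s|t) = sum_p phi(s|p) * phi(p|t) from two phrase tables.

    sp_table holds phi(s|p) keyed by (source_phrase, pivot_phrase) and
    pt_table holds phi(p|t) keyed by (pivot_phrase, target_phrase).
    The layout is illustrative, not any toolkit's phrase-table format.
    """
    by_pivot = defaultdict(list)            # pivot phrase -> [(s, phi(s|p)), ...]
    for (s, p), phi_sp in sp_table.items():
        by_pivot[p].append((s, phi_sp))

    st_table = defaultdict(float)
    for (p, t), phi_pt in pt_table.items():
        for s, phi_sp in by_pivot.get(p, []):
            st_table[(s, t)] += phi_sp * phi_pt
    return dict(st_table)

# toy example (made-up phrases and probabilities)
sp = {("s1", "this"): 0.6, ("s2", "this"): 0.4}
pt = {("this", "este"): 0.7, ("this", "esto"): 0.3}
print(triangulate(sp, pt))
# {('s1', 'este'): 0.42, ('s1', 'esto'): 0.18, ('s2', 'este'): 0.28, ('s2', 'esto'): 0.12}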
3 Using RBMT Systems for Pivot Translation Since the source-pivot and pivot-target parallel corpora are independent, the pivot sentences in the two corpora are distinct from each other. Thus, some linguistic phenomena in the source-pivot corpus will lost if they do not exist in the pivottarget corpus, and vice versa. Here we use RBMT systems to fill up this data gap. For many sourcetarget language pairs, the commercial pivot-source and/or pivot-target RBMT systems are available on markets. For example, for Chinese to Spanish translation, English to Chinese and English to Spanish RBMT systems are available. With the RBMT systems, we can create a synthetic multilingual source-pivot-target corpus by translating the pivot sentences in the pivot-source or pivot-target corpus. The source-target pairs extracted from this synthetic multilingual corpus can be used to build a source-target translation model. Another way to use the synthetic multilingual corpus is to add the source-pivot or pivot-target sentence pairs in this corpus to the training data to rebuild the source-pivot or pivot-target SMT model. The rebuilt models can be applied to the triangulation method and the transfer method as described in Section 2. Moreover, the RBMT systems can also be used to enlarge the size of bilingual training data. Since it is easy to obtain monolingual corpora than bilingual corpora, we use RBMT systems to translate the available monolingual corpora to obtain synthetic bilingual corpus, which are added to the training data to improve the performance of SMT systems. Even if no monolingual corpus is available, we can also use RBMT systems to translate the sentences in the bilingual corpus to obtain alternative translations. For example, we can use source-pivot RBMT systems to provide alternative translations for the source sentences in the sourcepivot corpus. In addition to translating training data, the source-pivot RBMT system can be used to translate the test set into the pivot language, which can be further translated into the target language with the pivot-target RBMT system. The translated test set can be added to the training data to further improve translation quality. The advantage of this method is that the RBMT system can provide translations for sentences in the test set and cover some out-of-vocabulary words in the test set 156 that are uncovered by the training data. It can also change the distribution of some phrase pairs and reinforce some phrase pairs relative to the test set. 4 Translation Selection We propose a method to select the optimal translation from those produced by various translation systems. We regard sentence-level translation selection as a machine translation (MT) evaluation problem and formalize this problem with a regression learning model. For each translation, this method uses the outputs from other translation systems as pseudo references. The regression objective is to infer a function that maps a feature vector (which measures the similarity of a translation from one system to the pseudo references) to a score that indicates the quality of the translation. Scores are first generated independently for each translation, then the translations are ranked by their respective scores. The candidate with the highest score is selected. The similar ideas have been explored in previous studies. Albrecht and Hwa (2007) proposed a method to evaluate MT outputs with pseudo references using support vector regression as the learner to evaluate translations. 
Duh (2008) proposed a ranking method to compare the translations proposed by several systems. These two methods require quantitative quality assessments by human judges for the translations produced by various systems in the training set. When we apply such methods to translation selection, the relative values of the scores assigned by the subject systems are important. In different data conditions, the relative values of the scores assigned by the subject systems may change. In order to train a reliable learner, we need to prepare a balanced training set, where the translations produced by different systems under different conditions are required to be manually evaluated. In extreme cases, we need to relabel the training data to obtain better performance. In this paper, we modify the method in Albrecht and Hwa (2007) to only prepare human reference translations for the training examples, and then evaluate the translations produced by the subject systems against the references using BLEU score (Papineni et al., 2002). We use smoothed sentence-level BLEU score to replace the human assessments, where we use additive smoothing to avoid zero BLEU scores when we calculate the n-gram precisions. In this case, we ID Description 1-4 n-gram precisions against pseudo references (1 ≤n ≤4) 5-6 PER and WER 7-8 precision, recall, fragmentation from METEOR (Lavie and Agarwal, 2007) 9-12 precisions and recalls of nonconsecutive bigrams with a gap size of m (1 ≤m ≤2) 13-14 longest common subsequences 15-19 n-gram precision against a target corpus (1 ≤n ≤5) Table 1: Feature sets for regression learning can easily retrain the learner under different conditions, therefore enabling our method to be applied to sentence-level translation selection from any sets of translation systems without any additional human work. In regression learning, we infer a function f that maps a multi-dimensional input vector x to a continuous real value y, such that the error over a set of m training examples, (x1, y1), (x2, y2), ..., (xm, ym), is minimized according to a loss function. In the context of translation selection, y is assigned as the smoothed BLEU score. The function f represents a mathematic model of the automatic evaluation metrics. The input sentence is represented as a feature vector x, which are extracted from the input sentence and the comparisons against the pseudo references. We use the features as shown in Table 1. 5 Experiments 5.1 Data We performed experiments on spoken language translation for the pivot task of IWSLT 2008. This task translates Chinese to Spanish using English as the pivot language. Table 2 describes the data used for model training in this paper, including the BTEC (Basic Travel Expression Corpus) ChineseEnglish (CE) corpus and the BTEC EnglishSpanish (ES) corpus provided by IWSLT 2008 organizers, the HIT olympic CE corpus (2004-863008)1 and the Europarl ES corpus2. There are two kinds of BTEC CE corpus: BTEC CE1 and 1http://www.chineseldc.org/EN/purchasing.htm 2http://www.statmt.org/europarl/ 157 Corpus Size SW TW BTEC CE1 20,000 164K 182K BTEC CE2 18,972 177K 182K HIT CE 51,791 490K 502K BTEC ES 19,972 182K 185K Europarl ES 400,000 8,485K 8,219K Table 2: Training data. SW and TW represent source words and target words, respectively. BTEC CE2. BTEC CE1 was distributed for the pivot task in IWSLT 2008 while BTEC CE2 was for the BTEC CE task, which is parallel to the BTEC ES corpus. For Chinese-English translation, we mainly used BTEC CE1 corpus. 
We used the BTEC CE2 corpus and the HIT Olympic corpus for comparison experiments only. We used the English parts of the BTEC CE1 corpus, the BTEC ES corpus, and the HIT Olympic corpus (if involved) to train a 5-gram English language model (LM) with interpolated Kneser-Ney smoothing. For English-Spanish translation, we selected 400k sentence pairs from the Europarl corpus that are close to the English parts of both the BTEC CE corpus and the BTEC ES corpus. Then we built a Spanish LM by interpolating an out-of-domain LM trained on the Spanish part of this selected corpus with the in-domain LM trained with the BTEC corpus. For Chinese-English-Spanish translation, we used the development set (devset3) released for the pivot task as the test set, which contains 506 source sentences, with 7 reference translations in English and Spanish. To be capable of tuning parameters on our systems, we created a development set of 1,000 sentences taken from the training sets, with 3 reference translations in both English and Spanish. This development set is also used to train the regression learning model. 5.2 Systems and Evaluation Method We used two commercial RBMT systems in our experiments: System A for Chinese-English bidirectional translation and System B for EnglishChinese and English-Spanish translation. For phrase-based SMT translation, we used the Moses decoder (Koehn et al., 2007) and its support training scripts. We ran the decoder with its default settings and then used Moses’ implementation of minimum error rate training (Och, 2003) to tune the feature weights on the development set. To select translation among outputs produced by different pivot translation systems, we used SVM-light (Joachins, 1999) to perform support vector regression with the linear kernel. Translation quality was evaluated using both the BLEU score proposed by Papineni et al. (2002) and also the modified BLEU (BLEU-Fix) score3 used in the IWSLT 2008 evaluation campaign, where the brevity calculation is modified to use closest reference length instead of shortest reference length. 5.3 Results by Using SMT Systems We conducted the pivot translation experiments using the BTEC CE1 and BTEC ES described in Section 5.1. We used the three methods described in Section 2 for pivot translation. For the transfer method, we selected the optimal translations among 10 × 10 candidates. For the synthetic method, we used the ES translation model to translate the English part of the CE corpus to Spanish to construct a synthetic corpus. And we also used the BTEC CE1 corpus to build a EC translation model to translate the English part of ES corpus into Chinese. Then we combined these two synthetic corpora to build a Chinese-Spanish translation model. In our experiments, only 1-best Chinese or Spanish translation was used since using n-best results did not greatly improve the translation quality. We used the method described in Section 4 to select translations from the translations produced by the three systems. For each system, we used three different alignment heuristics (grow, grow-diag, grow-diag-final4) to obtain the final alignment results, and then constructed three different phrase tables. Thus, for each system, we can get three different translations for each input. These different translations can serve as pseudo references for the outputs of other systems. In our case, for each sentence, we have 6 pseudo reference translations. In addition, we found out that the grow heuristic performed the best for all the systems. 
Thus, for an individual system, we used the translation results produced using the grow alignment heuristic. The translation results are shown in Table 3. ASR and CRR represent different input conditions, namely the result of automatic speech recog3https://www.slc.atr.jp/Corpus/IWSLT08/eval/IWSLT08 auto eval.tgz 4A description of the alignment heuristics can be found at http://www.statmt.org/jhuws/?n=FactoredTraining.Training Parameters 158 Method BLEU BLEU-Fix Triangulation 33.70/27.46 31.59/25.02 Transfer 33.52/28.34 31.36/26.20 Synthetic 34.35/27.21 32.00/26.07 Combination 38.14/29.32 34.76/27.39 Table 3: CRR/ASR translation results by using SMT systems nition and correct recognition result, respectively. Here, we used the 1-best ASR result. From the translation results, it can be seen that three methods achieved comparable translation quality on both ASR and CRR inputs, with the translation results on CRR inputs are much better than those on ASR inputs because of the errors in the ASR inputs. The results also show that our translation selection method is very effective, which achieved absolute improvements of about 4 and 1 BLEU scores on CRR and ASR inputs, respectively. 5.4 Results by Using both RBMT and SMT Systems In order to fill up the data gap as discussed in Section 3, we used the RBMT System A to translate the English sentences in the ES corpus into Chinese. As described in Section 3, this corpus can be used by the three pivot translation methods. First, the synthetic Chinese-Spanish corpus can be combined with those produced by the EC and ES SMT systems, which were used in the synthetic method. Second, the synthetic Chinese-English corpus can be added into the BTEC CE1 corpus to build the CE translation model. In this way, the intersected English phrases in the CE corpus and ES corpus becomes more, which enables the ChineseSpanish translation model induced using the triangulation method to cover more phrase pairs. For the transfer method, the CE translation quality can be also improved, which would result in the improvement of the Spanish translation quality. The translation results are shown in the columns under ”EC RBMT” in Table 4. As compared with those in Table 3, the translation quality was greatly improved, with absolute improvements of at least 5.1 and 3.9 BLEU scores on CRR and ASR inputs for system combination results. The above results indicate that RBMT systems indeed can be used to fill up the data gap for pivot translation. In our experiments, we also used a CE RBMT system to enlarge the size of training data by pro0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1 2 3 4 5 6 7 Phrase length Coverage SMT (Triangulation) +EC RBMT +EC RBMT+CE RBMT +EC RBMT+CE RBMT+Test Set Figure 1: Coverage on test source phrases viding alternative English translations for the Chinese part of the CE corpus. The translation results are shown in the columns under “+CE RBMT” in Table 4. From the translation results, it can be seen that, enlarging the size of training data with RBMT systems can further improve the translation quality. In addition to translating the training data, the CE RBMT system can be also used to translate the test set into English, which can be further translated into Spanish with the ES RBMT system B.56 The translated test set can be further added to the training data to improve translation quality. The columns under “+Test Set” in Table 4 describes the translation results. 
The results show that translating the test set using RBMT systems greatly improved the translation result, with further improvements of about 2 and 1.5 BLEU scores on CRR and ASR inputs, respectively. The results also indicate that both the triangulation method and the transfer method greatly outperformed the synthetic method when we combined both RBMT and SMT systems in our experiments. Further analysis shows that the synthetic method contributed little to system combination. The selection results are almost the same as those selected from the translations produced by the triangulation and transfer methods. In order to further analyze the translation results, we evaluated the above systems by examining the coverage of the phrase tables over the test phrases. We took the triangulation method as a case study, the results of which are shown in Fig5Although using the ES RBMT system B to translate the training data did not improve the translation quality, it improved the translation quality by translating the test set. 6The RBMT systems achieved a BLEU score of 24.36 on the test set. 159 EC RBMT + CE RBMT + Test Set Method BLEU BLEU-Fix BLEU BLEU-Fix BLEU BLEU-Fix Triangulation 40.69/31.02 37.99/29.15 41.59/31.43 39.39/29.95 44.71/32.60 42.37/31.14 Transfer 42.06/31.72 39.73/29.35 43.40/33.05 40.73/30.06 45.91/34.52 42.86/31.92 Synthetic 39.10/29.73 37.26/28.45 39.90/30.00 37.90/28.66 41.16/31.30 37.99/29.36 Combination 43.21/33.23 40.58/31.17 45.09/34.10 42.88/31.73 47.06/35.62 44.94/32.99 Table 4: CRR/ASR translation results by using RBMT and SMT systems Method BLEU BLEU-Fix Triangulation 45.64/33.15 42.11/31.11 Transfer 47.18/34.56 43.61/32.17 Combination 48.42/36.42 45.42/33.52 Table 5: CRR/ASR translation results by using additional monolingual corpora ure 1. It can be seen that using RBMT systems to translate the training and/or test data can cover more source phrases in the test set, which results in translation quality improvement. 5.5 Results by Using Monolingual Corpus In addition to translating the limited bilingual corpus, we also translated additional monolingual corpus to further enlarge the size of the training data. We assume that it is easier to obtain a monolingual pivot corpus than to obtain a monolingual source or target corpus. Thus, we translated the English part of the HIT Olympic corpus into Chinese and Spanish using EC and ES RBMT systems. The generated synthetic corpus was added to the training data to train EC and ES SMT systems. Here, we used the synthetic CE Olympic corpus to train a model, which was interpolated with the CE model trained with both the BTEC CE1 corpus and the synthetic BTEC corpus to obtain an interpolated CE translation model. Similarly, we obtained an interpolated ES translation model. Table 5 describes the translation results.7 The results indicate that translating monolingual corpus using the RBMT system further improved the translation quality as compared with those in Table 4. 6 Discussion 6.1 Effects of Different RBMT Systems In this section, we compare the effects of two commercial RBMT systems with different transla7Here we excluded the synthetic method since it greatly falls behind the other two methods. Method Sys. A Sys. B Sys. A+B Triangulation 40.69 39.28 41.01 Transfer 42.06 39.57 43.03 Synthetic 39.10 38.24 39.26 Combination 43.21 40.59 44.27 Table 6: CRR translation results (BLEU scores) by using different RBMT systems tion accuracy on spoken language translation. 
The goals are (1) to investigate whether a RBMT system can improve pivot translation quality even if its translation accuracy is not high, and (2) to compare the effects of RBMT system with different translation accuracy on pivot translation. Besides the EC RBMT system A used in the above section, we also used the EC RBMT system B for this experiment. We used the two systems to translate the test set from English to Chinese, and then evaluated the translation quality against Chinese references obtained from the IWSLT 2008 evaluation campaign. The BLEU scores are 43.90 and 29.77 for System A and System B, respectively. This shows that the translation quality of System B on spoken language corpus is much lower than that of System A. Then we applied these two different RBMT systems to translate the English part of the BTEC ES corpus into Chinese as described in Section 5.4. The translation results on CRR inputs are shown in Table 6.8 We replicated some of the results in Table 4 for the convenience of comparison. The results indicate that the higher the translation accuracy of the RBMT system is, the better the pivot translation is. If we compare the results with those only using SMT systems as described in Table 3, the translation quality was greatly improved by at least 3 BLEU scores, even if the translation ac8We omitted the ASR translation results since the trends are the same as those for CRR inputs. And we only showed BLEU scores since the trend for BLEU-Fix scores is similar. 160 Method Multilingual + BTEC CE1 Triangulation 41.86/39.55 42.41/39.55 Transfer 42.46/39.09 43.84/40.34 Standard 42.21/40.23 42.21/40.23 Combination 43.75/40.34 44.68/41.14 Table 7: CRR translation results by using multilingual corpus. ”/” separates the BLEU and BLEUfix scores. curacy of System B is not so high. Combining two RBMT systems further improved the translation quality, which indicates that the two systems complement each other. 6.2 Results by Using Multilingual Corpus In this section, we compare the translation results by using a multilingual corpus with those by using independently sourced corpora. BTEC CE2 and BTEC ES are from the same source sentences, which can be taken as a multilingual corpus. The two corpora were employed to build CE and ES SMT models, which were used in the triangulation method and the transfer method. We also extracted the Chinese-Spanish (CS) corpus to build a standard CS translation system, which is denoted as Standard. The comparison results are shown in Table 7. The translation quality produced by the systems using a multilingual corpus is much higher than that produced by using independently sourced corpora as described in Table 3, with an absolute improvement of about 5.6 BLEU scores. If we used the EC RBMT system, the translation quality of those in Table 4 is comparable to that by using the multilingual corpus, which indicates that our method using RBMT systems to fill up the data gap is effective. The results also indicate that our translation selection method for pivot translation outperforms the method using only a real sourcetarget corpus. For comparison purpose, we added BTEC CE1 into the training data. The translation quality was improved by only 1 BLEU score. This again proves that our method to fill up the data gap is more effective than that to increase the size of the independently sourced corpus. 6.3 Comparison with Related Work In IWSLT 2008, the best result for the pivot task is achieved by Wang et al. (2008). 
In order to compare the results, we added the bilingual HIT Ours Wang TSAL BLEU 49.57 48.25 BLEU-Fix 46.74 45.10 45.27 Table 8: Comparison with related work Olympic corpus into the CE training data.9 We also compared our translation selection method with that proposed in (Wang et al., 2008) that is based on the target sentence average length (TSAL). The translation results are shown in Table 8. ”Wang” represents the results in Wang et al. (2008). ”TSAL” represents the translation selection method proposed in Wang et al. (2008), which is applied to our experiment. From the results, it can be seen that our method outperforms the best system in IWSLT 2008 and that our translation selection method outperforms the method based on target sentence average length. 7 Conclusion In this paper, we have compared three different pivot translation methods for spoken language translation. Experimental results indicated that the triangulation method and the transfer method generally outperform the synthetic method. Then we showed that the hybrid method combining RBMT and SMT systems can be used to fill up the data gap between the source-pivot and pivot-target corpora. By translating the pivot sentences in independent corpora, the hybrid method can produce translations whose quality is higher than those produced by the method using a source-target corpus of the same size. We also showed that even if the translation quality of the RBMT system is low, it still greatly improved the translation quality. In addition, we proposed a system combination method to select better translations from outputs produced by different pivot methods. This method is developed through regression learning, where only a small size of training examples with reference translations are required. Experimental results indicate that this method can consistently and significantly improve translation quality over individual translation outputs. And our system outperforms the best system for the pivot task in the IWSLT 2008 evaluation campaign. 9We used about 70k sentence pairs for CE model training, while Wang et al. (2008) used about 100k sentence pairs, a CE translation dictionary and more monolingual corpora for model training. 161 References Joshua S. Albrecht and Rebecca Hwa. 2007. Regression for Sentence-Level MT Evaluation with Pseudo References. In Proceedings of the 45th Annual Meeting of the Accosiation of Computational Linguistics, pages 296–303. Nicola Bertoldi, Madalina Barbaiani, Marcello Federico, and Roldano Cattoni. 2008. Phrase-Based Statistical Machine Translation with Pivot Languages. In Proceedings of the International Workshop on Spoken Language Translation, pages 143149. Tevor Cohn and Mirella Lapata. 2007. Machine Translation by Triangulation: Making Effective Use of Multi-Parallel Corpora. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pages 348–355. Kevin Duh. 2008. Ranking vs. Regression in Machine Translation Evaluation. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 191–194. Xiaoguang Hu, Haifeng Wang, and Hua Wu. 2007. Using RBMT Systems to Produce Bilingual Corpus for SMT. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 287–295. Thorsten Joachims. 1999. Making Large-Scale SVM Learning Practical. In Bernhard Sch¨oelkopf, Christopher Burges, and Alexander Smola, editors, Advances in Kernel Methods - Support Vector Learning. MIT Press. 
Maxim Khalilov, Marta R. Costa-Juss`a, Carlos A. Henr´ıquez, Jos´e A.R. Fonollosa, Adolfo Hern´andez, Jos´e B. Mari˜no, Rafael E. Banchs, Chen Boxing, Min Zhang, Aiti Aw, and Haizhou Li. 2008. The TALP & I2R SMT Systems for IWSLT 2008. In Proceedings of the International Workshop on Spoken Language Translation, pages 116–123. Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLTNAACL: Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 127–133. Philipp Koehn, Hieu Hoang, Alexanda Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Associa-tion for Computational Linguistics, demonstration session, pages 177–180. Alon Lavie and Abhaya Agarwal. 2007. METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments. In Proceedings of Workshop on Statistical Machine Translation at the 45th Annual Meeting of the Association of Computational Linguistics, pages 228– 231. Franz J. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 160–167. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, pages 311–318. Michael Paul. 2008. Overview of the IWSLT 2008 Evaluation Campaign. In Proceedings of the International Workshop on Spoken Language Translation, pages 1–17. Masao Utiyama and Hitoshi Isahara. 2007. A Comparison of Pivot Methods for Phrase-Based Statistical Machine Translation. In Proceedings of human language technology: the Conference of the North American Chapter of the Association for Computational Linguistics, pages 484–491. Haifeng Wang, Hua Wu, Xiaoguang Hu, Zhanyi Liu, Jianfeng Li, Dengjun Ren, and Zhengyu Niu. 2008. The TCH Machine Translation System for IWSLT 2008. In Proceedings of the International Workshop on Spoken Language Translation, pages 124–131. Hua Wu and Haifeng Wang. 2007. Pivot Language Approach for Phrase-Based Statistical Machine Translation. In Proceedings of 45th Annual Meeting of the Association for Computational Linguistics, pages 856–863. 162
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 163–171, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Efficient Minimum Error Rate Training and Minimum Bayes-Risk Decoding for Translation Hypergraphs and Lattices Shankar Kumar1 and Wolfgang Macherey1 and Chris Dyer2 and Franz Och1 1Google Inc. 1600 Amphitheatre Pkwy. Mountain View, CA 94043, USA {shankarkumar,wmach,och}@google.com 2Department of Linguistics University of Maryland College Park, MD 20742, USA [email protected] Abstract Minimum Error Rate Training (MERT) and Minimum Bayes-Risk (MBR) decoding are used in most current state-of-theart Statistical Machine Translation (SMT) systems. The algorithms were originally developed to work with N-best lists of translations, and recently extended to lattices that encode many more hypotheses than typical N-best lists. We here extend lattice-based MERT and MBR algorithms to work with hypergraphs that encode a vast number of translations produced by MT systems based on Synchronous Context Free Grammars. These algorithms are more efficient than the lattice-based versions presented earlier. We show how MERT can be employed to optimize parameters for MBR decoding. Our experiments show speedups from MERT and MBR as well as performance improvements from MBR decoding on several language pairs. 1 Introduction Statistical Machine Translation (SMT) systems have improved considerably by directly using the error criterion in both training and decoding. By doing so, the system can be optimized for the translation task instead of a criterion such as likelihood that is unrelated to the evaluation metric. Two popular techniques that incorporate the error criterion are Minimum Error Rate Training (MERT) (Och, 2003) and Minimum BayesRisk (MBR) decoding (Kumar and Byrne, 2004). These two techniques were originally developed for N-best lists of translation hypotheses and recently extended to translation lattices (Macherey et al., 2008; Tromble et al., 2008) generated by a phrase-based SMT system (Och and Ney, 2004). Translation lattices contain a significantly higher number of translation alternatives relative to Nbest lists. The extension to lattices reduces the runtimes for both MERT and MBR, and gives performance improvements from MBR decoding. SMT systems based on synchronous context free grammars (SCFG) (Chiang, 2007; Zollmann and Venugopal, 2006; Galley et al., 2006) have recently been shown to give competitive performance relative to phrase-based SMT. For these systems, a hypergraph or packed forest provides a compact representation for encoding a huge number of translation hypotheses (Huang, 2008). In this paper, we extend MERT and MBR decoding to work on hypergraphs produced by SCFG-based MT systems. We present algorithms that are more efficient relative to the lattice algorithms presented in Macherey et al. (2008; Tromble et al. (2008). Lattice MBR decoding uses a linear approximation to the BLEU score (Papineni et al., 2001); the weights in this linear loss are set heuristically by assuming that n-gram precisions decay exponentially with n. However, this may not be optimal in practice. We employ MERT to select these weights by optimizing BLEU score on a development set. A related MBR-inspired approach for hypergraphs was developed by Zhang and Gildea (2008). In this work, hypergraphs were rescored to maximize the expected count of synchronous constituents in the translation. 
In contrast, our MBR algorithm directly selects the hypothesis in the hypergraph with the maximum expected approximate corpus BLEU score (Tromble et al., 2008). will soon announce X1 X2 X1 X2 X1 X2 X1 X2 X1 X2 X1 its future in the X1 its future in the Suzuki soon its future in X1 announces Rally World Championship Figure 1: An example hypergraph. 163 2 Translation Hypergraphs A translation lattice compactly encodes a large number of hypotheses produced by a phrase-based SMT system. The corresponding representation for an SMT system based on SCFGs (e.g. Chiang (2007), Zollmann and Venugopal (2006), Mi et al. (2008)) is a directed hypergraph or a packed forest (Huang, 2008). Formally, a hypergraph is a pair H = ⟨V, E⟩ consisting of a vertex set V and a set of hyperedges E ⊆V∗× V. Each hyperedge e ∈E connects a head vertex h(e) with a sequence of tail vertices T(e) = {v1, ..., vn}. The number of tail vertices is called the arity (|e|) of the hyperedge. If the arity of a hyperedge is zero, h(e) is called a source vertex. The arity of a hypergraph is the maximum arity of its hyperedges. A hyperedge of arity 1 is a regular edge, and a hypergraph of arity 1 is a regular graph (lattice). Each hyperedge is labeled with a rule re from the SCFG. The number of nonterminals on the right-hand side of re corresponds with the arity of e. An example without scores is shown in Figure 1. A path in a translation hypergraph induces a translation hypothesis E along with its sequence of SCFG rules D = r1, r2, ..., rK which, if applied to the start symbol, derives E. The sequence of SCFG rules induced by a path is also called a derivation tree for E. 3 Minimum Error Rate Training Given a set of source sentences F S 1 with corresponding reference translations RS 1 , the objective of MERT is to find a parameter set ˆλM 1 which minimizes an automated evaluation criterion under a linear model: ˆλM 1 = arg min λM 1  S X s=1 Err ` Rs, ˆE(Fs; λM 1 ) ´ff ˆE(Fs; λM 1 ) = arg max E  S X s=1 λmhm(E, Fs) ff . In the context of statistical machine translation, the optimization procedure was first described in Och (2003) for N-best lists and later extended to phrase-lattices in Macherey et al. (2008). The algorithm is based on the insight that, under a loglinear model, the cost function of any candidate translation can be represented as a line in the plane if the initial parameter set λM 1 is shifted along a direction dM 1 . Let C = {E1, ..., EK} denote a set of candidate translations, then computing the best scoring translation hypothesis ˆE out of C results in the following optimization problem: ˆE(F; γ) = arg max E∈C n (λM 1 + γ · dM 1 )⊤· hM 1 (E, F) o = arg max E∈C X m λmhm(E, F) | {z } =a(E,F ) + γ · X m dmhm(E, F) | {z } =b(E,F ) ff = arg max E∈C ˘ a(E, F) + γ · b(E, F) | {z } (∗) ¯ Hence, the total score (∗) for each candidate translation E ∈C can be described as a line with γ as the independent variable. For any particular choice of γ, the decoder seeks that translation which yields the largest score and therefore corresponds to the topmost line segment. If γ is shifted from −∞to +∞, other translation hypotheses may at some point constitute the topmost line segments and thus change the decision made by the decoder. The entire sequence of topmost line segments is called upper envelope and provides an exhaustive representation of all possible outcomes that the decoder may yield if γ is shifted along the chosen direction. 
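To make the envelope construction concrete, the following sketch (ours, not the authors' implementation) computes the upper envelope for an explicit set of candidate lines, where each candidate translation E is given as a triple (slope b(E,F), intercept a(E,F), candidate id). Lines are sorted by slope and swept from left to right, keeping only the segments that are topmost for some value of γ; this mirrors the max (∨) operation formalized later in Algorithm 2. All names are illustrative.

```python
# A minimal sketch, not the paper's code: each candidate translation E
# contributes a line score(gamma) = a(E, F) + gamma * b(E, F); MERT only
# needs the topmost segments (the upper envelope) as gamma is shifted.
def upper_envelope(lines):
    """lines: list of (slope, intercept, candidate); returns [(x_from, line), ...]."""
    lines = sorted(lines, key=lambda l: (l[0], l[1]))
    dedup = []                               # for parallel lines, keep only the highest
    for m, y, cand in lines:
        if dedup and dedup[-1][0] == m:
            dedup[-1] = (m, y, cand)
        else:
            dedup.append((m, y, cand))
    env, xs = [], []                         # env[i] is topmost from xs[i] onwards
    for m, y, cand in dedup:
        x = float("-inf")
        while env:
            m0, y0, _ = env[-1]
            x = (y - y0) / (m0 - m)          # crossing point with the current top line
            if x <= xs[-1]:                  # previous top is never topmost: drop it
                env.pop(); xs.pop()
            else:
                break
        env.append((m, y, cand))
        xs.append(x if len(env) > 1 else float("-inf"))
    return list(zip(xs, env))
```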
Both the translations and their corresponding line segments can efficiently be computed without incorporating any error criterion. Once the envelope has been determined, the translation candidates of its constituent line segments are projected onto their corresponding error counts, thus yielding the exact and unsmoothed error surface for all candidate translations encoded in C. The error surface can now easily be traversed in order to find that ˆγ under which the new parameter set λM 1 + ˆγ · dM 1 minimizes the global error. In this section, we present an extension of the algorithm described in Macherey et al. (2008) that allows us to efficiently compute and represent upper envelopes over all candidate translations encoded in hypergraphs. Conceptually, the algorithm works by propagating (initially empty) envelopes from the hypergraph’s source nodes bottom-up to its unique root node, thereby expanding the envelopes by applying SCFG rules to the partial candidate translations that are associated with the envelope’s constituent line segments. To recombine envelopes, we need two operators: the sum and the maximum over convex polygons. To illustrate which operator is applied when, we transform H = ⟨V, E⟩into a regular graph with typed nodes by (1) marking all vertices v ∈V with the symbol ∨and (2) replacing each hyperedge e ∈E, |e| > 1, with a small subgraph consisting of a new vertex v∧(e) whose incoming and outgoing edges connect the same head and tail nodes 164 Algorithm 1 ∧-operation (Sum) input: associative map a: V →Env(V), hyperarc e output: Minkowski sum of envelopes over T(e) for (i = 0; i < |T(e)|; ++i) { v = Ti(e); pq.enqueue(⟨v, i, 0⟩); } L = ∅; D = ⟨e, ε1 · · · ε|e|⟩ while (!pq.empty()) { ⟨v, i, j⟩= pq.dequeue(); ℓ= A[v][j]; D[i+1] = ℓ.D; if (L.empty() ∨L.back().x < ℓ.x) { if (0 < j) { ℓ.y += L.back().y - A[v][j-1].y; ℓ.m += L.back().m - A[v][j-1].m; } L.push_back(ℓ); L.back().D = D; } else { L.back().y += ℓ.y; L.back().m += ℓ.m; L.back().D[i+1] = ℓ.D; if (0 < j) { L.back().y -= A[v][j-1].y; L.back().m -= A[v][j-1].m; } } if (++j < A[v].size()) pq.enqueue(⟨v, i, j⟩); } return L; in the transformed graph as were connected by e in the original graph. The unique outgoing edge of v∧(e) is associated with the rule re; incoming edges are not linked to any rule. Figure 2 illustrates the transformation for a hyperedge with arity 3. The graph transformation is isomorphic. The rules associated with every hyperedge specify how line segments in the envelopes of a hyperedge’s tail nodes can be combined. Suppose we have a hyperedge e with rule re : X →aX1bX2c and T(e) = {v1, v2}. Then we substitute X1 and X2 in the rule with candidate translations associated with line segments in envelopes Env(v1) and Env(v2) respectively. To derive the algorithm, we consider the general case of a hyperedge e with rule re : X → w1X1w2...wnXnwn+1. Because the right-hand side of re has n nonterminals, the arity of e is |e| = n. Let T(e) = {v1, ..., vn} denote the tail nodes of e. We now assume that each tail node vi ∈T(e) is associated with the upper envelope over all candidate translations that are induced by derivations of the corresponding nonterminal symbol Xi. 
These envelopes shall be deAlgorithm 2 ∨-operation (Max) input: array L[0..K-1] containing line objects output: upper envelope of L Sort(L:m); j = 0; K = size(L); for (i = 0; i < K; ++i) { ℓ= L[i]; ℓ.x = -∞; if (0 < j) { if (L[j-1].m == ℓ.m) { if (ℓ.y <= L[j-1].y) continue; --j; } while (0 < j) { ℓ.x = (ℓ.y - L[j-1].y)/ (L[j-1].m - ℓ.m); if (L[j-1].x < ℓ.x) break; --j; } if (0 == j) ℓ.x = -∞; L[j++] = ℓ; } else L[j++] = ℓ; } L.resize(j); return L; noted by Env(vi). To decompose the problem of computing and propagating the tail envelopes over the hyperedge e to its head node, we now define two operations, one for either node type, to specify how envelopes associated with the tail vertices are propagated to the head vertex. Nodes of Type “∧”: For a type ∧node, the resulting envelope is the Minkowski sum over the envelopes of the incoming edges (Berg et al., 2008). Since the envelopes of the incoming edges are convex hulls, the Minkowski sum provides an upper bound to the number of line segments that constitute the resulting envelope: the bound is the sum over the number of line segments in the envelopes of the incoming edges, i.e.: Env(v∧(e)) ≤P v∨∈T(e) Env(v∨) . Algorithm 1 shows the pseudo code for computing the Minkowski sum over multiple envelopes. The line objects ℓused in this algorithm are encoded as 4-tuples, each consisting of the xintercept with ℓ’s left-adjacent line stored as ℓ.x, the slope ℓ.m, the y-intercept ℓ.y, and the (partial) derivation tree ℓ.D. At the beginning, the leftmost line segment of each envelope is inserted into a priority queue pq. The priority is defined in terms of a line’s x-intercept such that lower values imply higher priority. Hence, the priority queue enumerates all line segments from left to right in ascending order of their x-intercepts, which is the order needed to compute the Minkowski sum. Nodes of Type “∨”: The operation performed 165 =! = max Figure 2: Transformation of a hypergraph into a factor graph and bottom-up propagation of envelopes. at nodes of type “∨” computes the convex hull over the union of the envelopes propagated over the incoming edges. This operation is a “max” operation and it is identical to the algorithm described in (Macherey et al., 2008) for phrase lattices. Algorithm 2 contains the pseudo code. The complete algorithm then works as follows: Traversing all nodes in H bottom-up in topological order, we proceed for each node v ∈V over its incoming hyperedges and combine in each such hyperedge e the envelopes associated with the tail nodes T(e) by computing their sum according to Algorithm 1 (∧-operation). For each incoming hyperedge e, the resulting envelope is then expanded by applying the rule re to its constituent line segments. The envelopes associated with different incoming hyperedges of node v are then combined and reduced according to Algorithm 2 (∨-operation). By construction, the envelope at the root node is the convex hull over the line segments of all candidate translations that can be derived from the hypergraph. The suggested algorithm has similar properties as the algorithm presented in (Macherey et al., 2008). In particular, it has the same upper bound on the number of line segments that constitute the envelope at the root node, i.e, the size of this envelope is guaranteed to be no larger than the number of edges in the transformed hypergraph. 4 Minimum Bayes-Risk Decoding We first review Minimum Bayes-Risk (MBR) decoding for statistical MT. 
An MBR decoder seeks the hypothesis with the least expected loss under a probability model (Bickel and Doksum, 1977). If we think of statistical MT as a classifier that maps a source sentence F to a target sentence E, the MBR decoder can be expressed as follows: ˆE = argmin E′∈G X E∈G L(E, E′)P(E|F), (1) where L(E, E′) is the loss between any two hypotheses E and E′, P(E|F) is the probability model, and G is the space of translations (N-best list, lattice, or a hypergraph). MBR decoding for translation can be performed by reranking an N-best list of hypotheses generated by an MT system (Kumar and Byrne, 2004). This reranking can be done for any sentencelevel loss function such as BLEU (Papineni et al., 2001), Word Error Rate, or Position-independent Error Rate. Recently, Tromble et al. (2008) extended MBR decoding to translation lattices under an approximate BLEU score. They approximated log(BLEU) score by a linear function of n-gram matches and candidate length. If E and E′ are the reference and the candidate translations respectively, this linear function is given by: G(E, E′) = θ0|E′| + X w θ|w|#w(E′)δw(E), (2) where w is an n-gram present in either E or E′, and θ0, θ1, ..., θN are weights which are determined empirically, where N is the maximum ngram order. Under such a linear decomposition, the MBR decoder (Equation 1) can be written as ˆE = argmax E′∈G θ0|E′| + X w θ|w|#w(E′)p(w|G), (3) where the posterior probability of an n-gram in the lattice is given by p(w|G) = X E∈G 1w(E)P(E|F). (4) Tromble et al. (2008) implement the MBR decoder using Weighted Finite State Automata (WFSA) operations. First, the set of n-grams is extracted from the lattice. Next, the posterior probability of each n-gram is computed. A new automaton is then created by intersecting each ngram with weight (from Equation 2) to an unweighted lattice. Finally, the MBR hypothesis is extracted as the best path in the automaton. We will refer to this procedure as FSAMBR. The above steps are carried out one n-gram at a time. For a moderately large lattice, there can be several thousands of n-grams and the procedure becomes expensive. We now present an alternate approximate procedure which can avoid this 166 enumeration making the resulting algorithm much faster than FSAMBR. 4.1 Efficient MBR for lattices The key idea behind this new algorithm is to rewrite the n-gram posterior probability (Equation 4) as follows: p(w|G) = X E∈G X e∈E f(e, w, E)P(E|F)(5) where f(e, w, E) is a score assigned to edge e on path E containing n-gram w: f(e, w, E) =    1 w ∈e, p(e|G) > p(e′|G), e′ precedes e on E 0 otherwise (6) In other words, for each path E, we count the edge that contributes n-gram w and has the highest edge posterior probability relative to its predecessors on the path E; there is exactly one such edge on each lattice path E. We note that f(e, w, E) relies on the full path E which means that it cannot be computed based on local statistics. We therefore approximate the quantity f(e, w, E) with f∗(e, w, G) that counts the edge e with n-gram w that has the highest arc posterior probability relative to predecessors in the entire lattice G. f∗(e, w, G) can be computed locally, and the n-gram posterior probability based on f∗can be determined as follows: p(w|G) = X E∈G X e∈E f ∗(e, w, G)P(E|F) (7) = X e∈E 1w∈ef ∗(e, w, G) X E∈G 1E(e)P(E|F) = X e∈E 1w∈ef ∗(e, w, G)P(e|G), where P(e|G) is the posterior probability of a lattice edge. The algorithm to perform Lattice MBR is given in Algorithm 3. 
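Before walking through Algorithm 3, the linear gain of Equation 3 itself is easy to state in code. The sketch below (ours, not the paper's implementation; all names are illustrative) scores one hypothesis given precomputed n-gram posteriors p(w|G) and weights θ0, ..., θN, and reranks an explicit hypothesis list with it; the lattice algorithm instead pushes the same quantities onto edges.

```python
# A sketch of the linear MBR gain in Equation 3:
# score(E') = theta[0] * |E'| + sum_w theta[len(w)] * #_w(E') * p(w|G).
# ngram_post maps an n-gram (a tuple of tokens) to its posterior p(w|G);
# theta must have max_order + 1 entries, theta[0] being the length weight.
from collections import Counter

def ngrams(tokens, max_order):
    for n in range(1, max_order + 1):
        for i in range(len(tokens) - n + 1):
            yield tuple(tokens[i:i + n])

def linear_mbr_score(tokens, ngram_post, theta, max_order=4):
    score = theta[0] * len(tokens)
    for w, count in Counter(ngrams(tokens, max_order)).items():
        score += theta[len(w)] * count * ngram_post.get(w, 0.0)
    return score

def mbr_select(nbest, ngram_post, theta, max_order=4):
    # nbest: list of tokenized hypotheses; returns the one with the highest gain.
    return max(nbest, key=lambda h: linear_mbr_score(h, ngram_post, theta, max_order))
```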
For each node t in the lattice, we maintain a quantity Score(w, t) for each n-gram w that lies on a path from the source node to t. Score(w, t) is the highest posterior probability among all edges on the paths that terminate on t and contain n-gram w. The forward pass requires computing the n-grams introduced by each edge; to do this, we propagate n-grams (up to maximum order −1) terminating on each node. 4.2 Extension to Hypergraphs We next extend the Lattice MBR decoding algorithm (Algorithm 3) to rescore hypergraphs produced by a SCFG based MT system. Algorithm 4 is an extension to the MBR decoder on lattices Algorithm 3 MBR Decoding on Lattices 1: Sort the lattice nodes topologically. 2: Compute backward probabilities of each node. 3: Compute posterior prob. of each n-gram: 4: for each edge e do 5: Compute edge posterior probability P(e|G). 6: Compute n-gram posterior probs. P(w|G): 7: for each n-gram w introduced by e do 8: Propagate n −1 gram suffix to he. 9: if p(e|G) > Score(w, T(e)) then 10: Update posterior probs. and scores: p(w|G) += p(e|G) −Score(w, T(e)). Score(w, he) = p(e|G). 11: else 12: Score(w, he) = Score(w, T(e)). 13: end if 14: end for 15: end for 16: Assign scores to edges (given by Equation 3). 17: Find best path in the lattice (Equation 3). (Algorithm 3). However, there are important differences when computing the n-gram posterior probabilities (Step 3). In this inside pass, we now maintain both n-gram prefixes and suffixes (up to the maximum order −1) on each hypergraph node. This is necessary because unlike a lattice, new ngrams may be created at subsequent nodes by concatenating words both to the left and the right side of the n-gram. When the arity of the edge is 2, a rule has the general form aX1bX2c, where X1 and X2 are sequences from tail nodes. As a result, we need to consider all new sequences which can be created by the cross-product of the n-grams on the two tail nodes. E.g. if X1 = {c, cd, d} and X2 = {f, g}, then a total of six sequences will result. In practice, such a cross-product is not proAlgorithm 4 MBR Decoding on Hypergraphs 1: Sort the hypergraph nodes topologically. 2: Compute inside probabilities of each node. 3: Compute posterior prob. of each hyperedge P(e|G). 4: Compute posterior prob. of each n-gram: 5: for each hyperedge e do 6: Merge the n-grams on the tail nodes T(e). If the same n-gram is present on multiple tail nodes, keep the highest score. 7: Apply the rule on e to the n-grams on T(e). 8: Propagate n −1 gram prefixes/suffixes to he. 9: for each n-gram w introduced by this hyperedge do 10: if p(e|G) > Score(w, T(e)) then 11: p(w|G) += p(e|G) −Score(w, T(e)) Score(w, he) = p(e|G) 12: else 13: Score(w, he) = Score(w, T(e)) 14: end if 15: end for 16: end for 17: Assign scores to hyperedges (Equation 3). 18: Find best path in the hypergraph (Equation 3). 167 hibitive when the maximum n-gram order in MBR does not exceed the order of the n-gram language model used in creating the hypergraph. In the latter case, we will have a small set of unique prefixes and suffixes on the tail nodes. 5 MERT for MBR Parameter Optimization Lattice MBR Decoding (Equation 3) assumes a linear form for the gain function (Equation 2). This linear function contains n + 1 parameters θ0, θ1, ..., θN, where N is the maximum order of the n-grams involved. Tromble et al. (2008) obtained these factors as a function of n-gram precisions derived from multiple training runs. 
However, this does not guarantee that the resulting linear score (Equation 2) is close to the corpus BLEU. We now describe how MERT can be used to estimate these factors to achieve a better approximation to the corpus BLEU. We recall that MERT selects weights in a linear model to optimize an error criterion (e.g. corpus BLEU) on a training set. The lattice MBR decoder (Equation 3) can be written as a linear model: ˆE = argmaxE′∈G PN i=0 θigi(E′, F), where g0(E′, F) = |E′| and gi(E′, F) = P w:|w|=i #w(E′)p(w|G). The linear approximation to BLEU may not hold in practice for unseen test sets or languagepairs. Therefore, we would like to allow the decoder to backoff to the MAP translation in such cases. To do that, we introduce an additional feature function gN+1(E, F) equal to the original decoder cost for this sentence. A weight assignment of 1.0 for this feature function and zeros for the other feature functions would imply that the MAP translation is chosen. We now have a total of N +2 feature functions which we optimize using MERT to obtain highest BLEU score on a training set. 6 Experiments We now describe our experiments to evaluate MERT and MBR on lattices and hypergraphs, and show how MERT can be used to tune MBR parameters. 6.1 Translation Tasks We report results on two tasks. The first one is the constrained data track of the NIST Arabicto-English (aren) and Chinese-to-English (zhen) translation task1. On this task, the parallel and the 1http://www.nist.gov/speech/tests/mt Dataset # of sentences aren zhen dev 1797 1664 nist02 1043 878 nist03 663 919 Table 1: Statistics over the NIST dev/test sets. monolingual data included all the allowed training sets for the constrained track. Table 1 reports statistics computed over these data sets. Our development set (dev) consists of the NIST 2005 eval set; we use this set for optimizing MBR parameters. We report results on NIST 2002 and NIST 2003 evaluation sets. The second task consists of systems for 39 language-pairs with English as the target language and trained on at most 300M word tokens mined from the web and other published sources. The development and test sets for this task are randomly selected sentences from the web, and contain 5000 and 1000 sentences respectively. 6.2 MT System Description Our phrase-based statistical MT system is similar to the alignment template system described in (Och and Ney, 2004; Tromble et al., 2008). Translation is performed using a standard dynamic programming beam-search decoder (Och and Ney, 2004) using two decoding passes. The first decoder pass generates either a lattice or an N-best list. MBR decoding is performed in the second pass. We also train two SCFG-based MT systems: a hierarchical phrase-based SMT (Chiang, 2007) system and a syntax augmented machine translation (SAMT) system using the approach described in Zollmann and Venugopal (2006). Both systems are built on top of our phrase-based systems. In these systems, the decoder generates an initial hypergraph or an N-best list, which are then rescored using MBR decoding. 6.3 MERT Results Table 2 shows runtime experiments for the hypergraph MERT implementation in comparison with the phrase-lattice implementation on both the aren and the zhen system. The first two columns show the average amount of time in msecs that either algorithm requires to compute the upper envelope when applied to phrase lattices. 
Compared to the algorithm described in (Macherey et al., 2008) which is optimized for phrase lattices, the hypergraph implementation causes a small increase in 168 Avg. Runtime/sent [msec] (Macherey 2008) Suggested Alg. aren zhen aren zhen phrase lattice 8.57 7.91 10.30 8.65 hypergraph – – 8.19 8.11 Table 2: Average time for computing envelopes. running time. This increase is mainly due to the representation of line segments; while the phraselattice implementation stores a single backpointer, the hypergraph version stores a vector of backpointers. The last two columns show the average amount of time that is required to compute the upper envelope on hypergraphs. For comparison, we prune hypergraphs to the same density (# of edges per edge on the best path) and achieve identical running times for computing the error surface. 6.4 MBR Results We first compare the new lattice MBR (Algorithm 3) with MBR decoding on 1000-best lists and FSAMBR (Tromble et al., 2008) on lattices generated by the phrase-based systems; evaluation is done using both BLEU and average run-time per sentence (Table 3). Note that N-best MBR uses a sentence BLEU loss function. The new lattice MBR algorithm gives about the same performance as FSAMBR while yielding a 20X speedup. We next report the performance of MBR on hypergraphs generated by Hiero/SAMT systems. Table 4 compares Hypergraph MBR (HGMBR) with MAP and MBR decoding on 1000 best lists. On some systems such as the Arabic-English SAMT, the gains from Hypergraph MBR over 1000-best MBR are significant. In other cases, Hypergraph MBR performs at least as well as N-best MBR. In all cases, we observe a 7X speedup in runtime. This shows the usefulness of Hypergraph MBR decoding as an efficient alternative to Nbest MBR. 6.5 MBR Parameter Tuning with MERT We now describe the results by tuning MBR ngram parameters (Equation 2) using MERT. We first compute N + 1 MBR feature functions on each edge of the lattice/hypergraph. We also include the total decoder cost on the edge as as additional feature function. MERT is then performed to optimize the BLEU score on a development set; For MERT, we use 40 random initial parameters as well as parameters computed using corpus based statistics (Tromble et al., 2008). BLEU (%) Avg. aren zhen time nist03 nist02 nist03 nist02 (ms.) MAP 54.2 64.2 40.1 39.0 N-best MBR 54.3 64.5 40.2 39.2 3.7 Lattice MBR FSAMBR 54.9 65.2 40.6 39.5 3.7 LatMBR 54.8 65.2 40.7 39.4 0.2 Table 3: Lattice MBR for a phrase-based system. BLEU (%) Avg. aren zhen time nist03 nist02 nist03 nist02 (ms.) Hiero MAP 52.8 62.9 41.0 39.8 N-best MBR 53.2 63.0 41.0 40.1 3.7 HGMBR 53.3 63.1 41.0 40.2 0.5 SAMT MAP 53.4 63.9 41.3 40.3 N-best MBR 53.8 64.3 41.7 41.1 3.7 HGMBR 54.0 64.6 41.8 41.1 0.5 Table 4: Hypergraph MBR for Hiero/SAMT systems. Table 5 shows results for NIST systems. We report results on nist03 set and present three systems for each language pair: phrase-based (pb), hierarchical (hier), and SAMT; Lattice MBR is done for the phrase-based system while HGMBR is used for the other two. We select the MBR scaling factor (Tromble et al., 2008) based on the development set; it is set to 0.1, 0.01, 0.5, 0.2, 0.5 and 1.0 for the aren-phrase, aren-hier, aren-samt, zhen-phrase zhen-hier and zhen-samt systems respectively. For the multi-language case, we train phrase-based systems and perform lattice MBR for all language pairs. We use a scaling factor of 0.7 for all pairs. 
Additional gains can be obtained by tuning this factor; however, we do not explore that dimension in this paper. In all cases, we prune the lattices/hypergraphs to a density of 30 using forward-backward pruning (Sixtus and Ortmanns, 1999). We consider a BLEU score difference to be a) gain if is at least 0.2 points, b) drop if it is at most -0.2 points, and c) no change otherwise. The results are shown in Table 6. In both tables, the following results are reported: Lattice/HGMBR with default parameters (−5, 1.5, 2, 3, 4) computed using corpus statistics (Tromble et al., 2008), Lattice/HGMBR with parameters derived from MERT both without/with the baseline model cost feature (mert−b, mert+b). For multi-language systems, we only show the # of language-pairs with gains/no-changes/drops for each MBR variant with respect to the MAP translation. 169 We observed in the NIST systems that MERT resulted in short translations relative to MAP on the unseen test set. To prevent this behavior, we modify the MERT error criterion to include a sentence-level brevity scorer with parameter α: BLEU+brevity(α). This brevity scorer penalizes each candidate translation that is shorter than the average length over its reference translations, using a penalty term which is linear in the difference between either length. We tune α on the development set so that the brevity score of MBR translation is close to that of the MAP translation. In the NIST systems, MERT yields small improvements on top of MBR with default parameters. This is the case for Arabic-English Hiero/SAMT. In all other cases, we see no change or even a slight degradation due to MERT. We hypothesize that the default MBR parameters (Tromble et al., 2008) are well tuned. Therefore there is little gain by additional tuning using MERT. In the multi-language systems, the results show a different trend. We observe that MBR with default parameters results in gains on 18 pairs, no differences on 9 pairs, and losses on 12 pairs. When we optimize MBR features with MERT, the number of language pairs with gains/no changes/drops is 22/5/12. Thus, MERT has a bigger impact here than in the NIST systems. We hypothesize that the default MBR parameters are sub-optimal for some language pairs and that MERT helps to find better parameter settings. In particular, MERT avoids the need for manually tuning these parameters by language pair. Finally, when baseline model costs are added as an extra feature (mert+b), the number of pairs with gains/no changes/drops is 26/8/5. This shows that this feature can allow MBR decoding to backoff to the MAP translation. When MBR does not produce a higher BLEU score relative to MAP on the development set, MERT assigns a higher weight to this feature function. We see such an effect for 4 systems. 7 Discussion We have presented efficient algorithms which extend previous work on lattice-based MERT (Macherey et al., 2008) and MBR decoding (Tromble et al., 2008) to work with hypergraphs. Our new MERT algorithm can work with both lattices and hypergraphs. On lattices, it achieves similar run-times as the implementation System BLEU (%) MAP MBR default mert-b mert+b aren.pb 54.2 54.8 54.8 54.9 aren.hier 52.8 53.3 53.5 53.7 aren.samt 53.4 54.0 54.4 54.0 zhen.pb 40.1 40.7 40.7 40.9 zhen.hier 41.0 41.0 41.0 41.0 zhen.samt 41.3 41.8 41.6 41.7 Table 5: MBR Parameter Tuning on NIST systems MBR wrt. MAP default mert-b mert+b # of gains 18 22 26 # of no-changes 9 5 8 # of drops 12 12 5 Table 6: MBR on Multi-language systems. described in Macherey et al. 
(2008). The new Lattice MBR decoder achieves a 20X speedup relative to either FSAMBR implementation described in Tromble et al. (2008) or MBR on 1000-best lists. The algorithm gives comparable results relative to FSAMBR. On hypergraphs produced by Hierarchical and Syntax Augmented MT systems, our MBR algorithm gives a 7X speedup relative to 1000-best MBR while giving comparable or even better performance. Lattice MBR decoding is obtained under a linear approximation to BLEU, where the weights are obtained using n-gram precisions derived from development data. This may not be optimal in practice for unseen test sets and language pairs, and the resulting linear loss may be quite different from the corpus level BLEU. In this paper, we have described how MERT can be employed to estimate the weights for the linear loss function to maximize BLEU on a development set. On an experiment with 40 language pairs, we obtain improvements on 26 pairs, no difference on 8 pairs and drops on 5 pairs. This was achieved without any need for manual tuning for each language pair. The baseline model cost feature helps the algorithm effectively back off to the MAP translation in language pairs where MBR features alone would not have helped. MERT and MBR decoding are popular techniques for incorporating the final evaluation metric into the development of SMT systems. We believe that our efficient algorithms will make them more widely applicable in both SCFG-based and phrase-based MT systems. 170 References M. Berg, O. Cheong, M. Krefeld, and M. Overmars, 2008. Computational Geometry: Algorithms and Applications, chapter 13, pages 290–296. SpringerVerlag, 3rd edition. P. J. Bickel and K. A. Doksum. 1977. Mathematical Statistics: Basic Ideas and Selected topics. HoldenDay Inc., Oakland, CA, USA. D. Chiang. 2007. Hierarchical phrase based translation . Computational Linguistics, 33(2):201 – 228. M. Galley, J. Graehl, K. Knight, D. Marcu, S. DeNeefe, W. Wang, and I. Thayer. 2006. Scalable Inference and Training of Context-Rich Syntactic Translation Models. . In COLING/ACL, Sydney, Australia. L. Huang. 2008. Advanced Dynamic Programming in Semiring and Hypergraph Frameworks. In COLING, Manchester, UK. S. Kumar and W. Byrne. 2004. Minimum BayesRisk Decoding for Statistical Machine Translation. In HLT-NAACL, Boston, MA, USA. W. Macherey, F. Och, I. Thayer, and J. Uszkoreit. 2008. Lattice-based Minimum Error Rate Training for Statistical Machine Translation. In EMNLP, Honolulu, Hawaii, USA. H. Mi, L. Huang, and Q. Liu. 2008. Forest-Based Translation. In ACL, Columbus, OH, USA. F. Och and H. Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417 – 449. F. Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In ACL, Sapporo, Japan. K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2001. Bleu: a Method for Automatic Evaluation of Machine Translation. Technical Report RC22176 (W0109-022), IBM Research Division. A. Sixtus and S. Ortmanns. 1999. High Quality Word Graphs Using Forward-Backward Pruning. In ICASSP, Phoenix, AZ, USA. R. Tromble, S. Kumar, F. Och, and W. Macherey. 2008. Lattice Minimum Bayes-Risk Decoding for Statistical Machine Translation. In EMNLP, Honolulu, Hawaii. H. Zhang and D. Gildea. 2008. Efficient Multi-pass Decoding for Synchronous Context Free Grammars. In ACL, Columbus, OH, USA. A. Zollmann and A. Venugopal. 2006. Syntax Augmented Machine Translation via Chart Parsing. In HLT-NAACL, New York, NY, USA. 171
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 10–18, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Investigations on Word Senses and Word Usages Katrin Erk University of Texas at Austin [email protected] Diana McCarthy University of Sussex [email protected] Nicholas Gaylord University of Texas at Austin [email protected] Abstract The vast majority of work on word senses has relied on predefined sense inventories and an annotation schema where each word instance is tagged with the best fitting sense. This paper examines the case for a graded notion of word meaning in two experiments, one which uses WordNet senses in a graded fashion, contrasted with the “winner takes all” annotation, and one which asks annotators to judge the similarity of two usages. We find that the graded responses correlate with annotations from previous datasets, but sense assignments are used in a way that weakens the case for clear cut sense boundaries. The responses from both experiments correlate with the overlap of paraphrases from the English lexical substitution task which bodes well for the use of substitutes as a proxy for word sense. This paper also provides two novel datasets which can be used for evaluating computational systems. 1 Introduction The vast majority of work on word sense tagging has assumed that predefined word senses from a dictionary are an adequate proxy for the task, although of course there are issues with this enterprise both in terms of cognitive validity (Hanks, 2000; Kilgarriff, 1997; Kilgarriff, 2006) and adequacy for computational linguistics applications (Kilgarriff, 2006). Furthermore, given a predefined list of senses, annotation efforts and computational approaches to word sense disambiguation (WSD) have usually assumed that one best fitting sense should be selected for each usage. While there is usually some allowance made for multiple senses, this is typically not adopted by annotators or computational systems. Research on the psychology of concepts (Murphy, 2002; Hampton, 2007) shows that categories in the human mind are not simply sets with clearcut boundaries: Some items are perceived as more typical than others (Rosch, 1975; Rosch and Mervis, 1975), and there are borderline cases on which people disagree more often, and on whose categorization they are more likely to change their minds (Hampton, 1979; McCloskey and Glucksberg, 1978). Word meanings are certainly related to mental concepts (Murphy, 2002). This raises the question of whether there is any such thing as the one appropriate sense for a given occurrence. In this paper we will explore using graded responses for sense tagging within a novel annotation paradigm. Modeling the annotation framework after psycholinguistic experiments, we do not train annotators to conform to sense distinctions; rather we assess individual differences by asking annotators to produce graded ratings instead of making a binary choice. We perform two annotation studies. In the first one, referred to as WSsim (Word Sense Similarity), annotators give graded ratings on the applicability of WordNet senses. In the second one, Usim (Usage Similarity), annotators rate the similarity of pairs of occurrences (usages) of a common target word. Both studies explore whether users make use of a graded scale or persist in making binary decisions even when there is the option for a graded response. 
The first study additionally tests to what extent the judgments on WordNet senses fall into clear-cut clusters, while the second study allows us to explore meaning similarity independently of any lexicon resource. 10 2 Related Work Manual word sense assignment is difficult for human annotators (Krishnamurthy and Nicholls, 2000). Reported inter-annotator agreement (ITA) for fine-grained word sense assignment tasks has ranged between 69% (Kilgarriff and Rosenzweig, 2000) for a lexical sample using the HECTOR dictionary and 78.6.% using WordNet (Landes et al., 1998) in all-words annotation. The use of more coarse-grained senses alleviates the problem: In OntoNotes (Hovy et al., 2006), an ITA of 90% is used as the criterion for the construction of coarsegrained sense distinctions. However, intriguingly, for some high-frequency lemmas such as leave this ITA threshold is not reached even after multiple re-partitionings of the semantic space (Chen and Palmer, 2009). Similarly, the performance of WSD systems clearly indicates that WSD is not easy unless one adopts a coarse-grained approach, and then systems tagging all words at best perform a few percentage points above the most frequent sense heuristic (Navigli et al., 2007). Good performance on coarse-grained sense distinctions may be more useful in applications than poor performance on fine-grained distinctions (Ide and Wilks, 2006) but we do not know this yet and there is some evidence to the contrary (Stokoe, 2005). Rather than focus on the granularity of clusters, the approach we will take in this paper is to examine the phenomenon of word meaning both with and without recourse to predefined senses by focusing on the similarity of uses of a word. Human subjects show excellent agreement on judging word similarity out of context (Rubenstein and Goodenough, 1965; Miller and Charles, 1991), and human judgments have previously been used successfully to study synonymy and nearsynonymy (Miller and Charles, 1991; Bybee and Eddington, 2006). We focus on polysemy rather than synonymy. Our aim will be to use WSsim to determine to what extent annotations form cohesive clusters. In principle, it should be possible to use existing sense-annotated data to explore this question: almost all sense annotation efforts have allowed annotators to assign multiple senses to a single occurrence, and the distribution of these sense labels should indicate whether annotators viewed the senses as disjoint or not. However, the percentage of markables that received multiple sense labels in existing corpora is small, and it varies massively between corpora: In the SemCor corpus (Landes et al., 1998), only 0.3% of all markables received multiple sense labels. In the SENSEVAL-3 English lexical task corpus (Mihalcea et al., 2004) (hereafter referred to as SE-3), the ratio is much higher at 8% of all markables1. This could mean annotators feel that there is usually a single applicable sense, or it could point to a bias towards single-sense assignment in the annotation guidelines and/or the annotation tool. The WSsim experiment that we report in this paper is designed to eliminate such bias as far as possible and we conduct it on data taken from SemCor and SE-3 so that we can compare the annotations. Although we use WordNet for the annotation, our study is not a study of WordNet per se. We choose WordNet because it is sufficiently fine-grained to examine subtle differences in usage, and because traditionally annotated datasets exist to which we can compare our results. 
Predefined dictionaries and lexical resources are not the only possibilities for annotating lexical items with meaning. In cross-lingual settings, the actual translations of a word can be taken as the sense labels (Resnik and Yarowsky, 2000). Recently, McCarthy and Navigli (2007) proposed the English Lexical Substitution task (hereafter referred to as LEXSUB) under the auspices of SemEval-2007. It uses paraphrases for words in context as a way of annotating meaning. The task was proposed following a background of discussions in the WSD community as to the adequacy of predefined word senses. The LEXSUB dataset comprises open class words (nouns, verbs, adjectives and adverbs) with token instances of each word appearing in the context of one sentence taken from the English Internet Corpus (Sharoff, 2006). The methodology can only work where there are paraphrases, so the dataset only contains words with more than one meaning where at least two different meanings have near synonyms. For meanings without obvious substitutes the annotators were allowed to use multiword paraphrases or words with slightly more general meanings. This dataset has been used to evaluate automatic systems which can find substitutes appropriate for the context. To the best of our knowledge there has been no study of how the data collected relates to word sense annotations or judgments of semantic similarity. In this paper we examine these relation1This is even though both annotation efforts use balanced corpora, the Brown corpus in the case of SemCor, the British National Corpus for SE-3. 11 ships by re-using data from LEXSUB in both new annotation experiments and testing the results for correlation. 3 Annotation We conducted two experiments through an online annotation interface. Three annotators participated in each experiment; all were native British English speakers. The first experiment, WSsim, collected annotator judgments about the applicability of dictionary senses using a 5-point rating scale. The second, Usim, also utilized a 5-point scale but collected judgments on the similarity in meaning between two uses of a word. 2 The scale was 1 – completely different, 2 – mostly different, 3 – similar, 4 – very similar and 5 – identical. In Usim, this scale rated the similarity of the two uses of the common target word; in WSsim it rated the similarity between the use of the target word and the sense description. In both experiments, the annotation interface allowed annotators to revisit and change previously supplied judgments, and a comment box was provided alongside each item. WSsim. This experiment contained a total of 430 sentences spanning 11 lemmas (nouns, verbs and adjectives). For 8 of these lemmas, 50 sentences were included, 25 of them randomly sampled from SemCor 3 and 25 randomly sampled from SE-3.4 The remaining 3 lemmas in the experiment each had 10 sentences taken from the LEXSUB data. WSsim is a word sense annotation task using WordNet senses.5 Unlike previous word sense annotation projects, we asked annotators to provide judgments on the applicability of every WordNet sense of the target lemma with the instruction: 6 2Throughout this paper, a target word is assumed to be a word in a given PoS. 3The SemCor dataset was produced alongside WordNet, so it can be expected to support the WordNet sense distinctions. The same cannot be said for SE-3. 4Sentence fragments and sentences with 5 or fewer words were excluded from the sampling. 
Annotators were given the sentences, but not the original annotation from these resources. 5WordNet 1.7.1 was used in the annotation of both SE-3 and SemCor; we used the more current WordNet 3.0 after verifying that the lemmas included in this experiment had the same senses listed in both versions. Care was taken additionally to ensure that senses were not presented in an order that reflected their frequency of occurrence. 6The guidelines for both experiments are available at http://comp.ling.utexas.edu/ people/katrin erk/graded sense and usage annotation Your task is to rate, for each of these descriptions, how well they reflect the meaning of the boldfaced word in the sentence. Applicability judgments were not binary, but were instead collected using the five-point scale given above which allowed annotators to indicate not only whether a given sense applied, but to what degree. Each annotator annotated each of the 430 items. By having multiple annotators per item and a graded, non-binary annotation scheme we allow for and measure differences between annotators, rather than training annotators to conform to a common sense distinction guideline. By asking annotators to provide ratings for each individual sense, we strive to eliminate all bias towards either single-sense or multiple-sense assignment. In traditional word sense annotation, such bias could be introduced directly through annotation guidelines or indirectly, through tools that make it easier to assign fewer senses. We focus not on finding the best fitting sense but collect judgments on the applicability of all senses. Usim. This experiment used data from LEXSUB. For more information on LEXSUB, see McCarthy and Navigli (2007). 34 lemmas (nouns, verbs, adjectives and adverbs) were manually selected, including the 3 lemmas also used in WSsim. We selected lemmas which exhibited a range of meanings and substitutes in the LEXSUB data, with as few multiword substitutes as possible. Each lemma is the target in 10 LEXSUB sentences. For our experiment, we took every possible pairwise comparison of these 10 sentences for a lemma. We refer to each such pair of sentences as an SPAIR. The resulting dataset comprised 45 SPAIRs per lemma, adding up to 1530 comparisons per annotator overall. In this annotation experiment, annotators saw SPAIRs with a common target word and rated the similarity in meaning between the two uses of the target word with the instruction: Your task is to rate, for each pair of sentences, how similar in meaning the two boldfaced words are on a five-point scale. In addition annotators had the ability to respond with “Cannot Decide”, indicating that they were unable to make an effective comparison between the two contexts, for example because the meaning of one usage was unclear. This occurred in 9 paired occurrences during the course of annotation, and these items (paired occurrences) were 12 excluded from further analysis. The purpose of Usim was to collect judgments about degrees of similarity between a word’s meaning in different contexts. Unlike WSsim, Usim does not rely upon any dictionary resource as a basis for the judgments. 4 Analyses This section reports on analyses on the annotated data. In all the analyses we use Spearman’s rank correlation coefficient (ρ), a nonparametric test, because the data does not seem to be normally distributed. We used two-tailed tests in all cases, rather than assume the direction of the relationship. 
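For concreteness, the pairwise agreement computation used throughout these analyses can be sketched as follows. The paper's analyses were carried out in R; the SciPy call below is an illustrative equivalent written by us, with hypothetical variable names, not the authors' code.

```python
# Two-tailed Spearman rank correlation between every pair of annotators,
# assuming each annotator's judgments are listed in the same item order.
from itertools import combinations
from scipy.stats import spearmanr

def pairwise_agreement(ratings_by_annotator):
    """ratings_by_annotator: dict mapping annotator name -> list of judgments."""
    results = {}
    for a, b in combinations(sorted(ratings_by_annotator), 2):
        rho, p = spearmanr(ratings_by_annotator[a], ratings_by_annotator[b])
        results[(a, b)] = (rho, p)
    return results
```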
As noted above, we have three annotators per task, and each annotator gave judgments for every sentence (WSsim) or sentence pair (Usim). Since the annotators may vary as to how they use the ordinal scale, we do not use the mean of judgments7 but report all individual correlations. All analyses were done using the R package.8 4.1 WSsim analysis In the WSsim experiment, annotators rated the applicability of each WordNet 3.0 sense for a given target word occurrence. Table 1 shows a sample annotation for the target argument.n. 9 Pattern of annotation and annotator agreement. Figure 1 shows how often each of the five judgments on the scale was used, individually and summed over all annotators. (The y-axis shows raw counts of each judgment.) We can see from this figure that the extreme ratings 1 and 5 are used more often than the intermediate ones, but annotators make use of the full ordinal scale when judging the applicability of a sense. Also, the figure shows that annotator 1 used the extreme negative rating 1 much less than the other two annotators. Figure 2 shows the percentage of times each judgment was used on senses of three lemmas, different.a, interest.n, and win.v. In WordNet, they have 5, 7, and 4 senses, respectively. The pattern for win.v resembles the overall distribution of judgments, with peaks at the extreme ratings 1 and 5. The lemma interest.n has a single peak at rating 1, partly due to the fact that senses 5 (financial 7We have also performed several of our calculations using the mean judgment, and they also gave highly significant results in all the cases we tested. 8http://www.r-project.org/ 9We use word.PoS to denote a target word (lemma). Annotator 1 Annotator 2 Annotator 3 overall 1 2 3 4 5 0 500 1000 1500 2000 2500 3000 Figure 1: WSsim experiment: number of times each judgment was used, by annotator and summed over all annotators. The y-axis shows raw counts of each judgment. different.a interest.n win.v 1 2 3 4 5 0.0 0.1 0.2 0.3 0.4 0.5 Figure 2: WSsim experiment: percentage of times each judgment was used for the lemmas different.a, interest.n and win.v. Judgment counts were summed over all three annotators. involvement) and 6 (interest group) were rarely judged to apply. For the lemma different.a, all judgments have been used with approximately the same frequency. We measured the level of agreement between annotators using Spearman’s ρ between the judgments of every pair of annotators. The pairwise correlations were ρ = 0.506, ρ = 0.466 and ρ = 0.540, all highly significant with p < 2.2e-16. Agreement with previous annotation in SemCor and SE-3. 200 of the items in WSsim had been previously annotated in SemCor, and 200 in SE-3. This lets us compare the annotation results across annotation efforts. Table 2 shows the percentage of items where more than one sense was assigned in the subset of WSsim from SemCor (first row), from SE-3 (second row), and 13 Senses Sentence 1 2 3 4 5 6 7 Annotator This question provoked arguments in America about the Norton Anthology of Literature by Women, some of the contents of which were said to have had little value as literature. 1 4 4 2 1 1 3 Ann. 1 4 5 4 2 1 1 4 Ann. 2 1 4 5 1 1 1 1 Ann. 3 Table 1: A sample annotation in the WSsim experiment. The senses are: 1:statement, 2:controversy, 3:debate, 4:literary argument, 5:parameter, 6:variable, 7:line of reasoning WSsim judgment Data Orig. 
≥3 ≥4 5 WSsim/SemCor 0.0 80.2 57.5 28.3 WSsim/SE-3 24.0 78.0 58.3 27.1 All WSsim 78.8 57.4 27.7 Table 2: Percentage of items with multiple senses assigned. Orig: in the original SemCor/SE-3 data. WSsim judgment: items with judgments at or above the specified threshold. The percentages for WSsim are averaged over the three annotators. all of WSsim (third row). The Orig. column indicates how many items had multiple labels in the original annotation (SemCor or SE-3) 10. Note that no item had more than one sense label in SemCor. The columns under WSsim judgment show the percentage of items (averaged over the three annotators) that had judgments at or above the specified threshold, starting from rating 3 – similar. Within WSsim, the percentage of multiple assignments in the three rows is fairly constant. WSsim avoids the bias to one sense by deliberately asking for judgments on the applicability of each sense rather than asking annotators to find the best one. To compute the Spearman’s correlation between the original sense labels and those given in the WSsim annotation, we converted SemCor and SE-3 labels to the format used within WSsim: Assigned senses were converted to a judgment of 5, and unassigned senses to a judgment of 1. For the WSsim/SemCor dataset, the correlation between original and WSsim annotation was ρ = 0.234, ρ = 0.448, and ρ = 0.390 for the three annotators, each highly significant with p < 2.2e-16. For the WSsim/SE-3 dataset, the correlations were ρ = 0.346, ρ = 0.449 and ρ = 0.338, each of them again highly significant at p < 2.2e-16. Degree of sense grouping. Next we test to what extent the sense applicability judgments in the 10Overall, 0.3% of tokens in SemCor have multiple labels, and 8% of tokens in SE-3, so the multiple label assignment in our sample is not an underestimate. p < 0.05 p < 0.01 pos neg pos neg Ann. 1 30.8 11.4 23.2 5.9 Ann. 2 22.2 24.1 19.6 19.6 Ann. 3 12.7 12.0 10.0 6.0 Table 3: Percentage of sense pairs that were significantly positively (pos) or negatively (neg) correlated at p < 0.05 and p < 0.01, shown by annotator. j ≥3 j ≥4 j = 5 Ann. 1 71.9 49.1 8.1 Ann. 2 55.3 24.7 8.1 Ann. 3 42.8 24.0 4.9 Table 4: Percentage of sentences in which at least two uncorrelated (p > 0.05) or negatively correlated senses have been annotated with judgments at the specified threshold. WSsim task could be explained by more coarsegrained, categorial sense assignments. We first test how many pairs of senses for a given lemma show similar patterns in the ratings that they receive. Table 3 shows the percentage of sense pairs that were significantly correlated for each annotator.11 Significantly positively correlated senses can possibly be reduced to more coarse-grained senses. Would annotators have been able to designate a single appropriate sense given these more coarse-grained senses? Call two senses groupable if they are significantly positively correlated; in order not to overlook correlations that are relatively weak but existent, we use a cutoff of p = 0.05 for significant correlation. We tested how often annotators gave ratings of at least similar, i.e. ratings ≥3, to senses that were not groupable. Table 4 shows the percentages of items where at least two non-groupable senses received ratings at or above the specified threshold. The table shows that regardless of which annotator we look at, over 40% of all items had two or more non-groupable senses receive judgments of at least 3 (similar). There 11We exclude senses that received a uniform rating of 1 on all items. 
This concerned 4 senses for annotator 2 and 6 for annotator 3. 14 1) We study the methods and concepts that each writer uses to defend the cogency of legal, deliberative, or more generally political prudence against explicit or implicit charges that practical thinking is merely a knack or form of cleverness. 2) Eleven CIRA members have been convicted of criminal charges and others are awaiting trial. Figure 3: An SPAIR for charge.n. Annotator judgments: 2,3,4 were even several items where two or more nongroupable senses each got a judgment of 5. The sentence in table 1 is a case where several nongroupable senses got ratings ≥3. This is most pronounced for Annotator 2, who along with sense 2 (controversy) assigned senses 1 (statement), 7 (line of reasoning), and 3 (debate), none of which are groupable with sense 2. 4.2 Usim analysis In this experiment, ratings between 1 and 5 were given for every pairwise combination of sentences for each target lemma. An example of an SPAIR for charge.n is shown in figure 3. In this case the verdicts from the annotators were 2, 3 and 4. Pattern of Annotations and Annotator Agreement Figure 4 gives a bar chart of the judgments for each annotator and summed over annotators. We can see from this figure that the annotators use the full ordinal scale when judging the similarity of a word’s usages, rather than sticking to the extremes. There is variation across words, depending on the relatedness of each word’s usages. Figure 5 shows the judgments for the words bar.n, work.v and raw.a. We see that bar.n has predominantly different usages with a peak for category 1, work.v has more similar judgments (category 5) compared to any other category and raw.a has a peak in the middle category (3). 12 There are other words, like for example fresh.a, where the spread is more uniform. To gauge the level of agreement between annotators, we calculated Spearman’s ρ between the judgments of every pair of annotators as in section 4.1. The pairwise correlations are all highly significant (p < 2.2e-16) with Spearman’s ρ = 0.502, 0.641 and 0.501 giving an average correlation of 0.548. We also perform leave-one-out resampling following Lapata (2006) which gave us a Spearman’s correlation of 0.630. 12For figure 5 we sum the judgments over annotators. Annotator 4 Annotator 5 Annotator 6 overall 1 2 3 4 5 0 500 1000 1500 Figure 4: Usim experiment: number of times each judgment was used, by annotator and summed over all annotators bar.n raw.a work.v 1 2 3 4 5 0 10 20 30 40 50 60 Figure 5: Usim experiment: number of times each judgment was used for bar.n, work.v and raw.a Comparison with LEXSUB substitutions Next we look at whether the Usim judgments on sentence pairs (SPAIRs) correlate with LEXSUB substitutes. To do this we use the overlap of substitutes provided by the five LEXSUB annotators between two sentences in an SPAIR. In LEXSUB the annotators had to replace each item (a target word within the context of a sentence) with a substitute that fitted the context. Each annotator was permitted to supply up to three substitutes provided that they all fitted the context equally. There were 10 sentences per lemma. For our analyses we take every SPAIR for a given lemma and calculate the overlap (inter) of the substitutes provided by the annotators for the two usages under scrutiny. Let s1 and s2 be a pair of sentences in an SPAIR and 15 x1 and x2 be the multisets of substitutes for the respective sentences. 
Let freq(w, x) be the frequency of a substitute w in the multiset x of substitutes for a given sentence.[13] The overlap measure is then

INTER(s_1, s_2) = \frac{\sum_{w \in x_1 \cap x_2} \min(\mathrm{freq}(w, x_1), \mathrm{freq}(w, x_2))}{\max(|x_1|, |x_2|)}

Using this calculation for each SPAIR we can now compute the correlation between the Usim judgments for each annotator and the INTER values, again using Spearman's ρ. The figures are shown in the leftmost block of Table 5. The average correlation for the three annotators was 0.488, and the p-values were all < 2.2e-16. This shows a highly significant correlation between the Usim judgments and the overlap of substitutes.

We also compare the WSsim judgments against the LEXSUB substitutes, again using the INTER measure of substitute overlap. For this analysis, we only use those WSsim sentences that originally come from LEXSUB. In WSsim, the judgments for a sentence comprise judgments for each WordNet sense of that sentence. In order to compare against INTER, we need to transform these sense-wise ratings in WSsim into a WSsim-based judgment of sentence similarity. To this end, we compute the Euclidean Distance (ED)[14] between two vectors J_1 and J_2 of judgments for two sentences s_1, s_2 of the same lemma ℓ. Each of the n indexes of a vector represents one of the n different WordNet senses of ℓ. The value at entry i of J_1 is the judgment that the annotator in question (we do not average over annotators here) provided for sense i of ℓ in sentence s_1:

ED(J_1, J_2) = \sqrt{\sum_{i=1}^{n} (J_1[i] - J_2[i])^2}    (1)

We correlate the Euclidean distances with INTER. We can only test the correlation on the subset of WSsim that overlaps with the LEXSUB data: the 30 sentences for investigator.n, function.n and order.v, which together give 135 unique SPAIRs. We refer to this subset as W∩U. The results are given in the third block of Table 5. Note that since we are measuring distance between the sentences of an SPAIR for WSsim, whereas INTER is a measure of similarity, the correlation is negative. The results are highly significant, with individual p-values from < 1.067e-10 to < 1.551e-08 and a mean correlation of -0.495.

       Usim All   Usim W∩U        WSsim W∩U
ann.   ρ          ρ          ann.  ρ
4      0.383      0.330      1     -0.520
5      0.498      0.635      2     -0.503
6      0.584      0.631      3     -0.463
Table 5: Annotator correlation with LEXSUB substitute overlap (INTER)

The results in the first and third block of Table 5 are not directly comparable, as the results in the first block are for all Usim data and not only the subset of LEXSUB that carries WSsim annotations. We therefore repeated the analysis for Usim on the subset of data in WSsim and provide the correlation in the middle block of Table 5. The mean correlation for Usim on this subset of the data is 0.532, which is a stronger relationship compared to WSsim, although there is more discrepancy between individual annotators: the result for annotator 4 gives a p-value of 9.139e-05, while the other two annotators have p-values < 2.2e-16. The LEXSUB substitute overlaps between different usages correlate well with both Usim and WSsim judgments, with a slightly stronger relationship to Usim, perhaps due to the more complicated representation of word meaning in WSsim, which uses the full set of WordNet senses.

[13] The frequency of a substitute in a multiset depends on the number of LEXSUB annotators that picked the substitute for this item.
[14] We use Euclidean Distance rather than a normalizing measure like Cosine because a sentence where all ratings are 5 should be very different from a sentence where all senses received a rating of 1.
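To make the two measures above concrete, a minimal Python sketch is given below. This is not the authors' code: the Counter-based encoding of substitute multisets, the function names, and the toy substitutes are assumptions made purely for illustration.

```python
from collections import Counter
from math import sqrt

def inter(x1: Counter, x2: Counter) -> float:
    """Substitute overlap between two usages of a lemma: substitutes shared
    by both multisets (counted with multiplicity, i.e. by how many LEXSUB
    annotators chose them), normalised by the size of the larger multiset."""
    shared = sum(min(x1[w], x2[w]) for w in x1.keys() & x2.keys())
    return shared / max(sum(x1.values()), sum(x2.values()))

def euclidean_distance(j1, j2) -> float:
    """Distance between two per-sense judgment vectors for the same lemma;
    entry i holds one annotator's rating (1-5) of WordNet sense i."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(j1, j2)))

# Toy example with invented substitutes for two usages of charge.n:
x1 = Counter({"accusation": 3, "allegation": 2})
x2 = Counter({"accusation": 1, "fee": 2})
print(inter(x1, x2))                             # 1/5 = 0.2
print(euclidean_distance([5, 1, 1], [1, 1, 5]))  # sqrt(32), roughly 5.66
```

Given lists of INTER (or ED) values and the corresponding judgments, Spearman's ρ can then be computed with a standard routine such as scipy.stats.spearmanr.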
4.3 Correlation between WSsim and Usim As we showed in section 4.1, WSsim correlates with previous word sense annotations in SemCor and SE-3 while allowing the user a more graded response to sense tagging. As we saw in section 4.2, Usim and WSsim judgments both have a highly significant correlation with similarity of usages as measured using the overlap of substitutes from LEXSUB. Here, we look at the correlation of WSsim and Usim, considering again the subset of data that is common to both experiments. We again transform WSsim sense judgments for individual sentences to distances between SPAIRs using Euclidean Distance (ED). The Spearman’s ρ range between −0.307 and −0.671, and all results are highly significant with p-values between 0.0003 and < 2.2e-16. As above, the correlation is negative because ED is a distance measure between sentences in an SPAIR, whereas the judg16 ments for Usim are similarity judgments. We see that there is highly significant correlation for every pairing of annotators from the two experiments. 5 Discussion Validity of annotation scheme. Annotator ratings show highly significant correlation on both tasks. This shows that the tasks are well-defined. In addition, there is a strong correlation between WSsim and Usim, which indicates that the potential bias introduced by the use of dictionary senses in WSsim is not too prominent. However, we note that WSsim only contained a small portion of 3 lemmas (30 sentences and 135 SPAIRs) in common with Usim, so more annotation is needed to be certain of this relationship. Given the differences between annotator 1 and the other annotators in Fig. 1, it would be interesting to collect judgments for additional annotators. Graded judgments of use similarity and sense applicability. The annotators made use of the full spectrum of ratings, as shown in Figures 1 and 4. This may be because of a graded perception of the similarity of uses as well as senses, or because some uses and senses are very similar. Table 4 shows that for a large number of WSsim items, multiple senses that were not significantly positively correlated got high ratings. This seems to indicate that the ratings we obtained cannot simply be explained by more coarse-grained senses. It may hence be reasonable to pursue computational models of word meaning that are graded, maybe even models that do not rely on dictionary senses at all (Erk and Pado, 2008). Comparison to previous word sense annotation. Our graded WSsim annotations do correlate with traditional “best fitting sense” annotations from SemCor and SE-3; however, if annotators perceive similarity between uses and senses as graded, traditional word sense annotation runs the risk of introducing bias into the annotation. Comparison to lexical substitutions. There is a strong correlation between both Usim and WSsim and the overlap in paraphrases that annotators generated for LEXSUB. This is very encouraging, and especially interesting because LEXSUB annotators freely generated paraphrases rather than selecting them from a list. 6 Conclusions We have introduced a novel annotation paradigm for word sense annotation that allows for graded judgments and for some variation between annotators. We have used this annotation paradigm in two experiments, WSsim and Usim, that shed some light on the question of whether differences between word usages are perceived as categorial or graded. Both datasets will be made publicly available. 
There was a high correlation between annotator judgments within and across tasks, as well as with previous word sense annotation and with paraphrases proposed in the English Lexical Substitution task. Annotators made ample use of graded judgments in a way that cannot be explained through more coarse-grained senses. These results suggest that it may make sense to evaluate WSD systems on a task of graded rather than categorial meaning characterization, either through dictionary senses or similarity between uses. In that case, it would be useful to have more extensive datasets with graded annotation, even though this annotation paradigm is more time consuming and thus more expensive than traditional word sense annotation. As a next step, we will automatically cluster the judgments we obtained in the WSsim and Usim experiments to further explore the degree to which the annotation gives rise to sense grouping. We will also use the ratings in both experiments to evaluate automatically induced models of word meaning. The SemEval-2007 word sense induction task (Agirre and Soroa, 2007) already allows for evaluation of automatic sense induction systems, but compares output to gold-standard senses from OntoNotes. We hope that the Usim dataset will be particularly useful for evaluating methods which relate usages without necessarily producing hard clusters. Also, we will extend the current dataset using more annotators and exploring additional lexicon resources. Acknowledgments. We acknowledge support from the UK Royal Society for a Dorothy Hodkin Fellowship to the second author. We thank Sebastian Pado for many helpful discussions, and Andrew Young for help with the interface. References E. Agirre and A. Soroa. 2007. SemEval-2007 task 2: Evaluating word sense induction and dis17 crimination systems. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 7–12, Prague, Czech Republic. J. Bybee and D. Eddington. 2006. A usage-based approach to Spanish verbs of ’becoming’. Language, 82(2):323–355. J. Chen and M. Palmer. 2009. Improving English verb sense disambiguation performance with linguistically motivated features and clear sense distinction boundaries. Journal of Language Resources and Evaluation, Special Issue on SemEval-2007. in press. K. Erk and S. Pado. 2008. A structured vector space model for word meaning in context. In Proceedings of EMNLP-08, Waikiki, Hawaii. J. A. Hampton. 1979. Polymorphous concepts in semantic memory. Journal of Verbal Learning and Verbal Behavior, 18:441–461. J. A. Hampton. 2007. Typicality, graded membership, and vagueness. Cognitive Science, 31:355–384. P. Hanks. 2000. Do word meanings exist? Computers and the Humanities, 34(1-2):205–215(11). E. H. Hovy, M. Marcus, M. Palmer, S. Pradhan, L. Ramshaw, and R. Weischedel. 2006. OntoNotes: The 90% solution. In Proceedings of the Human Language Technology Conference of the North American Chapter of the ACL (NAACL-2006), pages 57–60, New York. N. Ide and Y. Wilks. 2006. Making sense about sense. In E. Agirre and P. Edmonds, editors, Word Sense Disambiguation, Algorithms and Applications, pages 47–73. Springer. A. Kilgarriff and J. Rosenzweig. 2000. Framework and results for English Senseval. Computers and the Humanities, 34(1-2):15–48. A. Kilgarriff. 1997. I don’t believe in word senses. Computers and the Humanities, 31(2):91–113. A. Kilgarriff. 2006. Word senses. In E. Agirre and P. Edmonds, editors, Word Sense Disambiguation, Algorithms and Applications, pages 29–46. Springer. R. 
Krishnamurthy and D. Nicholls. 2000. Peeling an onion: the lexicographers’ experience of manual sense-tagging. Computers and the Humanities, 34(1-2). S. Landes, C. Leacock, and R. Tengi. 1998. Building semantic concordances. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database. The MIT Press, Cambridge, MA. M. Lapata. 2006. Automatic evaluation of information ordering. Computational Linguistics, 32(4):471– 484. D. McCarthy and R. Navigli. 2007. SemEval-2007 task 10: English lexical substitution task. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 48–53, Prague, Czech Republic. M. McCloskey and S. Glucksberg. 1978. Natural categories: Well defined or fuzzy sets? Memory & Cognition, 6:462–472. R. Mihalcea, T. Chklovski, and A. Kilgarriff. 2004. The Senseval-3 English lexical sample task. In 3rd International Workshop on Semantic Evaluations (SensEval-3) at ACL-2004, Barcelona, Spain. G. Miller and W. Charles. 1991. Contextual correlates of semantic similarity. Language and cognitive processes, 6(1):1–28. G. L. Murphy. 2002. The Big Book of Concepts. MIT Press. R. Navigli, K. C. Litkowski, and O. Hargraves. 2007. SemEval-2007 task 7: Coarse-grained English all-words task. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 30–35, Prague, Czech Republic. P. Resnik and D. Yarowsky. 2000. Distinguishing systems and distinguishing senses: New evaluation methods for word sense disambiguation. Natural Language Engineering, 5(3):113–133. E. Rosch and C. B. Mervis. 1975. Family resemblance: Studies in the internal structure of categories. Cognitive Psychology, 7:573–605. E. Rosch. 1975. Cognitive representations of semantic categories. Journal of Experimental Psychology: General, 104:192–233. H. Rubenstein and J. Goodenough. 1965. Contextual correlates of synonymy. Computational Linguistics, 8:627–633. S. Sharoff. 2006. Open-source corpora: Using the net to fish for linguistic data. International Journal of Corpus Linguistics, 11(4):435–462. C. Stokoe. 2005. Differentiating homonymy and polysemy in information retrieval. In Proceedings of HLT/EMNLP-05, pages 403–410, Vancouver, B.C., Canada. 18
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 172–180, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Forest-based Tree Sequence to String Translation Model Hui Zhang1, 2 Min Zhang1 Haizhou Li1 Aiti Aw1 Chew Lim Tan2 1Institute for Infocomm Research 2National University of Singapore [email protected] {mzhang, hli, aaiti}@i2r.a-star.edu.sg [email protected] Abstract This paper proposes a forest-based tree sequence to string translation model for syntax- based statistical machine translation, which automatically learns tree sequence to string translation rules from word-aligned sourceside-parsed bilingual texts. The proposed model leverages on the strengths of both tree sequence-based and forest-based translation models. Therefore, it can not only utilize forest structure that compactly encodes exponential number of parse trees but also capture nonsyntactic translation equivalences with linguistically structured information through tree sequence. This makes our model potentially more robust to parse errors and structure divergence. Experimental results on the NIST MT-2003 Chinese-English translation task show that our method statistically significantly outperforms the four baseline systems. 1 Introduction Recently syntax-based statistical machine translation (SMT) methods have achieved very promising results and attracted more and more interests in the SMT research community. Fundamentally, syntax-based SMT views translation as a structural transformation process. Therefore, structure divergence and parse errors are two of the major issues that may largely compromise the performance of syntax-based SMT (Zhang et al., 2008a; Mi et al., 2008). Many solutions have been proposed to address the above two issues. Among these advances, forest-based modeling (Mi et al., 2008; Mi and Huang, 2008) and tree sequence-based modeling (Liu et al., 2007; Zhang et al., 2008a) are two interesting modeling methods with promising results reported. Forest-based modeling aims to improve translation accuracy through digging the potential better parses from n-bests (i.e. forest) while tree sequence-based modeling aims to model non-syntactic translations with structured syntactic knowledge. In nature, the two methods would be complementary to each other since they manage to solve the negative impacts of monolingual parse errors and cross-lingual structure divergence on translation results from different viewpoints. Therefore, one natural way is to combine the strengths of the two modeling methods for better performance of syntax-based SMT. However, there are many challenges in combining the two methods into a single model from both theoretical and implementation engineering viewpoints. In theory, one may worry about whether the advantage of tree sequence has already been covered by forest because forest encodes implicitly a huge number of parse trees and these parse trees may generate many different phrases and structure segmentations given a source sentence. In system implementation, the exponential combinations of tree sequences with forest structures make the rule extraction and decoding tasks much more complicated than that of the two individual methods. In this paper, we propose a forest-based tree sequence to string model, which is designed to integrate the strengths of the forest-based and the tree sequence-based modeling methods. 
We present our solutions that are able to extract translation rules and decode translation results for our model very efficiently. A general, configurable platform was designed for our model. With this platform, we can easily implement our method and many previous syntax-based methods by simple parameter setting. We evaluate our method on the NIST MT-2003 Chinese-English translation tasks. Experimental results show that our method significantly outperforms the two individual methods and other baseline methods. Our study shows that the proposed method is able to effectively combine the strengths of the forest-based and tree sequence-based methods, and thus having great potential to address the issues of parse errors and non-syntactic transla172 tions resulting from structure divergence. It also indicates that tree sequence and forest play different roles and make contributions to our model in different ways. The remainder of the paper is organized as follows. Section 2 describes related work while section 3 defines our translation model. In section 4 and section 5, the key rule extraction and decoding algorithms are elaborated. Experimental results are reported in section 6 and the paper is concluded in section 7. 2 Related work As discussed in section 1, two of the major challenges to syntax-based SMT are structure divergence and parse errors. Many techniques have been proposed to address the structure divergence issue while only fewer studies are reported in addressing the parse errors in the SMT research community. To address structure divergence issue, many researchers (Eisner, 2003; Zhang et al., 2007) propose using the Synchronous Tree Substitution Grammar (STSG) grammar in syntax-based SMT since the STSG uses larger tree fragment as translation unit. Although promising results have been reported, STSG only uses one single subtree as translation unit which is still committed to the syntax strictly. Motivated by the fact that non-syntactic phrases make non-trivial contribution to phrase-based SMT, the tree sequencebased translation model is proposed (Liu et al., 2007; Zhang et al., 2008a) that uses tree sequence as the basic translation unit, rather than using single sub-tree as in the STSG. Here, a tree sequence refers to a sequence of consecutive sub-trees that are embedded in a full parse tree. For any given phrase in a sentence, there is at least one tree sequence covering it. Thus the tree sequence-based model has great potential to address the structure divergence issue by using tree sequence-based non-syntactic translation rules. Liu et al. (2007) propose the tree sequence concept and design a tree sequence to string translation model. Zhang et al. (2008a) propose a tree sequence-based tree to tree translation model and Zhang et al. (2008b) demonstrate that the tree sequence-based modelling method can well address the structure divergence issue for syntaxbased SMT. To overcome the parse errors for SMT, Mi et al. (2008) propose a forest-based translation method that uses forest instead of one best tree as translation input, where a forest is a compact representation of exponentially number of n-best parse trees. Mi and Huang (2008) propose a forest-based rule extraction algorithm, which learn tree to string rules from source forest and target string. By using forest in rule extraction and decoding, their methods are able to well address the parse error issue. 
From the above discussion, we can see that traditional tree sequence-based method uses single tree as translation input while the forestbased model uses single sub-tree as the basic translation unit that can only learn tree-to-string (Galley et al. 2004; Liu et al., 2006) rules. Therefore, the two methods display different strengths, and which would be complementary to each other. To integrate their strengths, in this paper, we propose a forest-based tree sequence to string translation model. 3 Forest-based tree sequence to string model In this section, we first explain what a packed forest is and then define the concept of the tree sequence in the context of forest followed by the discussion on our proposed model. 3.1 Packed Forest A packed forest (forest in short) is a special kind of hyper-graph (Klein and Manning, 2001; Huang and Chiang, 2005), which is used to represent all derivations (i.e. parse trees) for a given sentence under a context free grammar (CFG). A forest F is defined as a triple ൏ܸ, ܧ, ܵ൐, where ܸ is non-terminal node set, ܧ is hyper-edge set and ܵ is leaf node set (i.e. all sentence words). A forest F satisfies the following two conditions: 1) Each node ݊ in ܸ should cover a phrase, which is a continuous word sub-sequence in ܵ. 2) Each hyper-edge ݁ in ܧ is defined as ݒ௙֜ ݒଵ… ݒ௜… ݒ௡, ൫ݒ௜א ሺܸ׫ ܵሻ, ݒ௙א ܸ൯ , where ݒଵ… ݒ௜… ݒ௡ covers a sequence of continuous and non-overlap phrases, ݒ௙ is the father node of the children sequence ݒଵ… ݒ௜… ݒ௡. The phrase covered by ݒ௙ is just the sum of all the phrases covered by each child node ݒ௜. We here introduce another concept that is used in our subsequent discussions. A complete forest CF is a general forest with one additional condition that there is only one root node N in CF, i.e., all nodes except the root N in a CF must have at least one father node. Fig. 1 is a complete forest while Fig. 7 is a non-complete forest due to the virtual node “VV+VV” introduced in Fig. 7. Fig. 2 is a hyperedge (IP => NP VP) of Fig. 1, where NP covers 173 the phrase “Xinhuashe”, VP covers the phrase “shengming youguan guiding” and IP covers the entire sentence. In Fig.1, only root IP has no father node, so it is a complete forest. The two parse trees T1 and T2 encoded in Fig. 1 are shown separately in Fig. 3 and Fig. 41. Different parse tree represents different derivations and explanations for a given sentence. For example, for the same input sentence in Fig. 1, T1 interprets it as “XNA (Xinhua News Agency) declares some regulations.” while T2 interprets it as “XNA declaration is related to some regulations.”. Figure 1. A packed forest for sentence “新华社 /Xinhuashe 声明/shengming 有关/youguan 规定 /guiding” Figure 2. A hyper-edge used in Fig. 1 Figure 3. Tree 1 (T1) Figure 4. Tree 2 (T2) 3.2 Tree sequence in packed forest Similar to the definition of tree sequence used in a single parse tree defined in Liu et al. (2007) and Zhang et al. (2008a), a tree sequence in a forest also refers to an ordered sub-tree sequence that covers a continuous phrase without overlapping. However, the major difference between 1 Please note that a single tree (as T1 and T2 shown in Fig. 3 and Fig. 4) is represented by edges instead of hyper-edges. A hyper-edge is a group of edges satisfying the 2nd condition as shown in the forest definition. them lies in that the sub-trees of a tree sequence in forest may belongs to different single parse trees while, in a single parse tree-based model, all the sub-trees in a tree sequence are committed to the same parse tree. 
The forest-based tree sequence enables our model to have the potential of exploring additional parse trees that may be wrongly pruned out by the parser and thus are not encoded in the forest. This is because that a tree sequence in a forest allows its sub-trees coming from different parse trees, where these sub-trees may not be merged finally to form a complete parse tree in the forest. Take the forest in Fig. 1 as an example, where ((VV shengming) (JJ youguan)) is a tree sequence that all sub-trees appear in T1 while ((VV shengming) (VV youguan)) is a tree sequence whose sub-trees do not belong to any single tree in the forest. But, indeed the two subtrees (VV shengming) and (VV youguan) can be merged together and further lead to a complete single parse tree which may offer a correct interpretation to the input sentence (as shown in Fig. 5). In addition, please note that, on the other hand, more parse trees may introduce more noisy structures. In this paper, we leave this problem to our model and let the model decide which substructures are noisy features. Figure 5. A parse tree that was wrongly pruned out Figure 6. A tree sequence to string rule 174 A tree-sequence to string translation rule in a forest is a triple <L, R, A>, where L is the tree sequence in source language, R is the string containing words and variables in target language, and A is the alignment between the leaf nodes of L and R. This definition is similar to that of (Liu et al. 2007, Zhang et al. 2008a) except our treesequence is defined in forest. The shaded area of Fig. 6 exemplifies a tree sequence to string translation rule in the forest. 3.3 Forest-based tree-sequence to string translation model Given a source forest F and target translation TS as well as word alignment A, our translation model is formulated as: Prሺܨ, ܶ௦, ܣሻൌ∑ ∏ ݌ሺݎ௜ሻ ௥೔אఏ೔ ఏ೔א ஀,஼ሺ஀ሻୀሺி,்ೞ,஺ሻ By the above Eq., translation becomes a tree sequence structure to string mapping issue. Given the F, TS and A, there are multiple derivations that could map F to TS under the constraint A. The mapping probability Prሺܨ, ܶ௦, ܣሻ in our study is obtained by summing over the probabilities of all derivations Θ. The probability of each derivation ߠ௜ is given as the product of the probabilities of all the rules ( ) i p r used in the derivation (here we assume that each rule is applied independently in a derivation). Our model is implemented under log-linear framework (Och and Ney, 2002). We use seven basic features that are analogous to the commonly used features in phrase-based systems (Koehn, 2003): 1) bidirectional rule mapping probabilities, 2) bidirectional lexical rule translation probabilities, 3) target language model, 4) number of rules used and 5) number of target words. In addition, we define two new features: 1) number of leaf nodes in auxiliary rules (the auxiliary rule will be explained later in this paper) and 2) product of the probabilities of all hyper-edges of the tree sequences in forest. 4 Training This section discusses how to extract our translation rules given a triple ൏ܨ, ܶ௦, ܣ൐. As we know, the traditional tree-to-string rules can be easily extracted from ൏ܨ, ܶ௦, ܣ൐ using the algorithm of Mi and Huang (2008)2. We would like 2 Mi and Huang (2008) extend the tree-based rule extraction algorithm (Galley et al., 2004) to forest-based by introducing non-deterministic mechanism. Their algorithm consists of two steps, minimal rule extraction and composed rule generation. to leverage on their algorithm in our study. 
Unfortunately, their algorithm is not directly applicable to our problem because tree rules have only one root while tree sequence rules have multiple roots. This makes the tree sequence rule extraction very complex due to its interaction with forest structure. To address this issue, we introduce the concepts of virtual node and virtual hyperedge to convert a complete parse forest ܨ to a non-complete forest ܨ which is designed to encode all the tree sequences that we want. Therefore, by doing so, the tree sequence rules can be extracted from a forest in the following two steps: 1) Convert the complete parse forest ܨ into a non-complete forest ܨ in order to cover those tree sequences that cannot be covered by a single tree node. 2) Employ the forest-based tree rule extraction algorithm (Mi and Huang, 2008) to extract our rules from the non-complete forest. To facilitate our discussion, here we introduce two notations: • Alignable: A consecutive source phrase is an alignable phrase if and only if it can be aligned with at least one consecutive target phrase under the word-alignment constraint. The covered source span is called alignable span. • Node sequence: a sequence of nodes (either leaf or internal nodes) in a forest covering a consecutive span. Algorithm 1 illustrates the first step of our rule extraction algorithm, which is a CKY-style Dynamic Programming (DP) algorithm to add virtual nodes into forest. It includes the following steps: 1) We traverse the forest to visit each span in bottom-up fashion (line 1-2), 1.1) for each span [u,v] that is covered by single tree nodes3, we put these tree nodes into the set NSS(u,v) and go back to step 1 (line 4-6). 1.2) otherwise we concatenate the tree sequences of sub-spans to generate the set of tree sequences covering the current larger span (line 8-13). Then, we prune the set of node sequences (line 14). If this span is alignable, we create virtual father nodes and corresponding virtual hyper-edges to link the node sequences with the virtual father nodes (line 15-20). 3 Note that in a forest, there would be multiple single tree nodes covering the same span as shown Fig.1. 175 2) Finally we obtain a forest with each alignable span covered by either original tree nodes or the newly-created tree sequence virtual nodes. Theoretically, there is exponential number of node sequences in a forest. Take Fig. 7 as an example. The NSS of span [1,2] only contains “NP” since it is alignable and covered by the single tree node NP. However, span [2,3] cannot be covered by any single tree node, so we have to create the NSS of span[2,3] by concatenating the NSSs of span [2,2] and span [3,3]. Since NSS of span [2,2] contains 4 element {“NN”, “NP”, “VV”, “VP”} and NSS of span [3, 3] also contains 4 element {“VV”, “VP”, “JJ”, “ADJP”}, NSS of span [2,3] contains 16=4*4 elements. To make the NSS manageable, we prune it with the following thresholds: • each node sequence should contain less than n nodes • each node sequence set should contain less than m node sequences • sort node sequences according to their lengths and only keep the k shortest ones Each virtual node is simply labeled by the concatenation of all its children’s labels as shown in Fig. 7. Algorithm 1. add virtual nodes into forest Input: packed forest F, alignment A Notation: L: length of source sentence NSS(u,v): the set of node sequences covering span [u,v] VN(ns): virtual father node for node sequence ns. Output: modified forest F with virtual nodes 1. for length := 0 to L - 1 do 2. 
for start := 1 to L - length do 3. stop := start + length 4. if span[start, stop] covered by tree nodes then 5. for each node n of span [start, stop] do 6. add n into NSS(start, stop) 7. else 8. for pivot := start to stop - 1 9. for each ns1 in NSS(start, pivot) do 10. for each ns2 in NSS(pivot+1, stop) do 11. create ݊ݏ׷ൌ݊ݏ1 ۩ ݊ݏ2 12. if ns is not in NSS(start, stop) then 13. add ns into NSS(start, stop) 14. do pruning on NSS(start, stop) 15. if the span[start, stop] is alignable then 16. for each ns of NSS(start, stop) do 17. if node VN(ns) is not in F then 18. add node VN(ns) into F 19. add a hyper-edge h into F, 20. let lhs(h) := VN(ns), rhs(h) := ns Algorithm 1 outputs a non-complete forest CF with each alignable span covered by either tree nodes or virtual nodes. Then we can easily extract our rules from the CF using the tree rule extraction algorithm (Mi and Huang, 2008). Finally, to calculate rule feature probabilities for our model, we need to calculate the fractional counts (it is a kind of probability defined in Mi and Huang, 2008) of each translation rule in a parse forest. In the tree case, we can use the inside-outside-based methods (Mi and Huang 2008) to do it. In the tree sequence case, since the previous method cannot be used directly, we provide another solution by making an independent assumption that each tree in a tree sequence is independent to each other. With this assumption, the fractional counts of both tree and tree sequence can be calculated as follows: ܿሺݎሻൌ ఈఉሺ௟௛௦ሺ௥ሻሻ ఈఉሺ்ை௉ሻ ߙߚሺ݂ݎܽ݃ሻൌ ෑ ߙሺݒሻ ௩א௥௢௢௧ሺ௙௥௔௚ሻ כ ෑܲሺ݄ሻ ௛א௙௥௔௚ כ ෑ ߚሺݒሻ ௩א௟௘௔௩௘௦ሺ௙௥௔௚ሻ where ܿሺݎሻ is the fractional counts to be calculated for rule r, a frag is either lhs(r) (excluding virtual nodes and virtual hyper-edges) or any tree node in a forest, TOP is the root of the forest, ߙሺ. ሻ and ߚሺ.) are the outside and inside probabilities of nodes, ݎ݋݋ݐሺ. ሻ returns the root nodes of a tree sequence fragment, ݈݁ܽݒ݁ݏሺ. ሻ returns the leaf nodes of a tree sequence fragment, ݌ሺ݄ሻ is the hyper-edge probability. Figure 7. A virtual node in forest 5 Decoding We benefit from the same strategy as used in our rule extraction algorithm in designing our decoding algorithm, recasting the forest-based tree sequence-to-string decoding problem as a forestbased tree-to-string decoding problem. Our decoding algorithm consists of four steps: 1) Convert the complete parse forest to a noncomplete one by introducing virtual nodes. 176 2) Convert the non-complete parse forest into a translation forest4 ܶܨ by using the translation rules and the pattern-matching algorithm presented in Mi et al. (2008). 3) Prune out redundant nodes and add auxiliary hyper-edge into the translation forest for those nodes that have either no child or no father. By this step, the translation forest ܶܨ becomes a complete forest. 4) Decode the translation forest using our translation model and a dynamic search algorithm. The process of step 1 is similar to Algorithm 1 except no alignment constraint used here. This may generate a large number of additional virtual nodes; however, all redundant nodes will be filtered out in step 3. In step 2, we employ the treeto-string pattern match algorithm (Mi et al., 2008) to convert a parse forest to a translation forest. In step 3, all those nodes not covered by any translation rules are removed. In addition, please note that the translation forest is already not a complete forest due to the virtual nodes and the pruning of rule-unmatchable nodes. 
We, therefore, propose Algorithm 2 to add auxiliary hyper-edges to make the translation forest complete. In Algorithm 2, we travel the forest in bottomup fashion (line 4-5). For each span, we do: 1) generate all the NSS for this span (line 7-12) 2) filter the NSS to a manageable size (line 13) 3) add auxiliary hyper-edges for the current span (line 15-19) if it can be covered by at least one single tree node, otherwise go to step 1 . This is the key step in our Algorithm 2. For each tree node and each node sequences covering the same span (stored in the current NSS), if the tree node has no children or at least one node in the node sequence has no father, we add an auxiliary hyper-edge to connect the tree node as father node with the node sequence as children. Since Algorithm 2 is DP-based and traverses the forest in a bottom-up way, all the nodes in a node sequence should already have children node after the lower level process in a small span. Finally, we re-build the NSS of current span for upper level NSS combination use (line 20-22). In Fig. 8, the hyper-edge “IP=>NP VV+VV NP” is an auxiliary hyper-edge introduced by Algorithm 2. By Algorithm 2, we convert the translation forest into a complete translation forest. We then use a bottom-up node-based search 4 The concept of translation forest is proposed in Mi et al. (2008). It is a forest that consists of only the hyperedges induced from translation rules. algorithm to do decoding on the complete translation forest. We also use Cube Pruning algorithm (Huang and Chiang 2007) to speed up the translation process. Figure 8. Auxiliary hyper-edge in a translation forest Algorithm 2. add auxiliary hyper-edges into mt forest F Input: mt forest F Output: complete forest F with auxiliary hyper-edges 1. for i := 1 to L do 2. for each node n of span [i, i] do 3. add n into NSS(i, i) 4. for length := 1 to L - 1 do 5. for start := 1 to L - length do 6. stop := start + length 7. for pivot := start to stop-1 do 8. for each ns1 in NSS (start, pivot) do 9. for each ns2 in NSS (pivot+1,stop) do 10. create ݊ݏ׷ൌ݊ݏ1 ۩ ݊ݏ2 11. if ns is not in NSS(start, stop) then 12. add ns into NSS (start, stop) 13. do pruning on NSS(start, stop) 14. if there is tree node cover span [start, stop] then 15. for each tree node n of span [start,stop] do 16. for each ns of NSS(start, stop) do 17. if node n have no children or there is node in ns with no father then 18. add auxiliary hyper-edge h into F 19. let lhs(h) := n, rhs(h) := ns 20. empty NSS(start, stop) 21. for each node n of span [start, stop] do 22. add n into NSS(start, stop) 6 Experiment 6.1 Experimental Settings We evaluate our method on Chinese-English translation task. We use the FBIS corpus as training set, the NIST MT-2002 test set as development (dev) set and the NIST MT-2003 test set as test set. We train Charniak’s parser (Charniak 2000) on CTB5 to do Chinese parsing, and modify it to output packed forest. We tune the parser on section 301-325 and test it on section 271300. The F-measure on all sentences is 80.85%. A 3-gram language model is trained on the Xin177 hua portion of the English Gigaword3 corpus and the target side of the FBIS corpus using the SRILM Toolkits (Stolcke, 2002) with modified Kneser-Ney smoothing (Kenser and Ney, 1995). GIZA++ (Och and Ney, 2003) and the heuristics “grow-diag-final-and” are used to generate m-ton word alignments. For the MER training (Och, 2003), Koehn’s MER trainer (Koehn, 2007) is modified for our system. 
For significance test, we use Zhang et al.’s implementation (Zhang et al, 2004). Our evaluation metrics is casesensitive BLEU-4 (Papineni et al., 2002). For parse forest pruning (Mi et al., 2008), we utilize the Margin-based pruning algorithm presented in (Huang, 2008). Different from Mi et al. (2008) that use a static pruning threshold, our threshold is sentence-depended. For each sentence, we compute the Margin between the n-th best and the top 1 parse tree, then use the Margin-based pruning algorithm presented in (Huang, 2008) to do pruning. By doing so, we can guarantee to use at least all the top n best parse trees in the forest. However, please note that even after pruning there is still exponential number of additional trees embedded in the forest because of the sharing structure of forest. Other parameters are set as follows: maximum number of roots in a tree sequence is 3, maximum height of a translation rule is 3, maximum number of leaf nodes is 7, maximum number of node sequences on each span is 10, and maximum number of rules extracted from one node is 10000. 6.2 Experimental Results We implement our proposed methods as a general, configurable platform for syntax-based SMT study. Based on this platform, we are able to easily implement most of the state-of-the-art syntax-based x-to-string SMT methods via simple parameter setting. For training, we set forest pruning threshold to 1 best for tree-based methods and 100 best for forest-based methods. For decoding, we set: 1) TT2S: tree-based tree-to-string model by setting the forest pruning threshold to 1 best and the number of sub-trees in a tree sequence to 1. 2) TTS2S: tree-based tree-sequence to string system by setting the forest pruning threshold to 1 best and the maximum number of sub-trees in a tree sequence to 3. 3) FT2S: forest-based tree-to-string system by setting the forest pruning threshold to 500 best, the number of sub-trees in a tree sequence to 1. 4) FTS2S: forest-based tree-sequence to string system by setting the forest pruning threshold to 500 best and the maximum number of sub-trees in a tree sequence to 3. Model BLEU(%) Moses 25.68 TT2S 26.08 TTS2S 26.95 FT2S 27.66 FTS2S 28.83 Table 1. Performance Comparison We use the first three syntax-based systems (TT2S, TTS2S, FT2S) and Moses (Koehn et al., 2007), the state-of-the-art phrase-based system, as our baseline systems. Table 1 compares the performance of the five methods, all of which are fine-tuned. It shows that: 1) FTS2S significantly outperforms (p<0.05) FT2S. This shows that tree sequence is very useful to forest-based model. Although a forest can cover much more phrases than a single tree does, there are still many non-syntactic phrases that cannot be captured by a forest due to structure divergence issue. On the other hand, tree sequence is a good solution to non-syntactic translation equivalence modeling. This is mainly because tree sequence rules are only sensitive to word alignment while tree rules, even extracted from a forest (like in FT2S), are also limited by syntax according to grammar parsing rules. 2) FTS2S shows significant performance improvement (p<0.05) over TTS2S due to the contribution of forest. This is mainly due to the fact that forest can offer very large number of parse trees for rule extraction and decoder. 3) Our model statistically significantly outperforms all the baselines system. This clearly demonstrates the effectiveness of our proposed model for syntax-based SMT. 
It also shows that the forest-based method and tree sequence-based method are complementary to each other and our proposed method is able to effectively integrate their strengths. 4) All the four syntax-based systems show better performance than Moses and three of them significantly outperforms (p<0.05) Moses. This suggests that syntax is very useful to SMT and translation can be viewed as a structure mapping issue as done in the four syntax-based systems. Table 2 and Table 3 report the distribution of different kinds of translation rules in our model (training forest pruning threshold is set to 100 best) and in our decoding (decoding forest pruning threshold is set to 500 best) for one best translation generation. From the two tables, we can find that: 178 Rule Type Tree to String Tree Sequence to String L 4,854,406 20,526,674 P 37,360,684 58,826,261 U 3,297,302 3,775,734 All 45,512,392 83,128,669 Table 2. # of rules extracted from training corpus. L means fully lexicalized, P means partially lexicalized, U means unlexicalized. Rule Type Tree to String Tree Sequence to String L 10,592 1,161 P 7,132 742 U 4,874 278 All 22,598 2,181 Table 3. # of rules used to generate one-best translation result in testing 1) In Table 2, the number of tree sequence rules is much larger than that of tree rules although our rule extraction algorithm only extracts those tree sequence rules over the spans that tree rules cannot cover. This suggests that the non-syntactic structure mapping is still a big challenge to syntax-based SMT. 2) Table 3 shows that the tree sequence rules is around 9% of the tree rules when generating the one-best translation. This suggests that around 9% of translation equivalences in the test set can be better modeled by tree sequence to string rules than by tree to string rules. The 9% tree sequence rules contribute 1.17 BLEU score improvement (28.83-27.66 in Table 1) to FTS2S over FT2S. 3) In Table 3, the fully-lexicalized rules are the major part (around 60%), followed by the partially-lexicalized (around 35%) and unlexicalized (around 15%). However, in Table 2, partially-lexicalized rules extracted from training corpus are the major part (more than 70%). This suggests that most partially-lexicalized rules are less effective in our model. This clearly directs our future work in model optimization. BLEU (%) N-best \ model FT2S FTS2S 100 Best 27.40 28.61 500 Best 27.66 28.83 2500 Best 27.66 28.96 5000 Best 27.79 28.89 Table 4. Impact of the forest pruning Forest pruning is a key step for forest-based method. Table 4 reports the performance of the two forest-based models using different values of the forest pruning threshold for decoding. It shows that: 1) FTS2S significantly outperforms (p<0.05) FT2S consistently in all test cases. This again demonstrates the effectiveness of our proposed model. Even if in the 5000 Best case, tree sequence is still able to contribute 1.1 BLEU score improvement (28.89-27.79). It indicates the advantage of tree sequence cannot be covered by forest even if we utilize a very large forest. 2) The BLEU scores are very similar to each other when we increase the forest pruning threshold. Moreover, in one case the performance even drops. This suggests that although more parse trees in a forest can offer more structure information, they may also introduce more noise that may confuse the decoder. 
7 Conclusion In this paper, we propose a forest-based treesequence to string translation model to combine the strengths of forest-based methods and treesequence based methods. This enables our model to have the great potential to address the issues of structure divergence and parse errors for syntax-based SMT. We convert our forest-based tree sequence rule extraction and decoding issues to tree-based by introducing virtual nodes, virtual hyper-edges and auxiliary rules (hyper-edges). In our system implementation, we design a general and configurable platform for our method, based on which we can easily realize many previous syntax-based methods. Finally, we examine our methods on the FBIS corpus and the NIST MT2003 Chinese-English translation task. Experimental results show that our model greatly outperforms the four baseline systems. Our study demonstrates that forest-based method and tree sequence-based method are complementary to each other and our proposed method is able to effectively combine the strengths of the two individual methods for syntax-based SMT. Acknowledgement We would like to thank Huang Yun for preparing the pictures in this paper; Run Yan for providing the java version modified MERT program and discussion on the details of MOSES; Mi Haitao for his help and discussion on re-implementing the FT2S model; Sun Jun and Xiong Deyi for their valuable suggestions. 179 References Eugene Charniak. 2000. A maximum-entropy inspired parser. NAACL-00. Jason Eisner. 2003. Learning non-isomorphic tree mappings for MT. ACL-03 (companion volume). Michel Galley, Mark Hopkins, Kevin Knight and Daniel Marcu. 2004. What’s in a translation rule? HLT-NAACL-04. 273-280. Liang Huang. 2008. Forest Reranking: Discriminative Parsing with Non-Local Features. ACL-HLT-08. 586-594 Liang Huang and David Chiang. 2005. Better k-best Parsing. IWPT-05. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. ACL-07. 144–151 Liang Huang, Kevin Knight and Aravind Joshi. 2006. Statistical Syntax-Directed Translation with Extended Domain of Locality. AMTA-06. (poster) Reinhard Kenser and Hermann Ney. 1995. Improved backing-off for M-gram language modeling. ICASSP-95. 181-184 Dan Klein and Christopher D. Manning. 2001. Parsing and Hypergraphs. IWPT-2001. Philipp Koehn, F. J. Och and D. Marcu. 2003. Statistical phrase-based translation. HLT-NAACL-03. 127-133. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. ACL-07. 177-180. (poster) Yang Liu, Qun Liu and Shouxun Lin. 2006. Tree-toString Alignment Template for Statistical Machine Translation. COLING-ACL-06. 609-616. Yang Liu, Yun Huang, Qun Liu and Shouxun Lin. 2007. Forest-to-String Statistical Translation Rules. ACL-07. 704-711. Haitao Mi, Liang Huang, and Qun Liu. 2008. Forestbased translation. ACL-HLT-08. 192-199. Haitao Mi and Liang Huang. 2008. Forest-based Translation Rule Extraction. EMNLP-08. 206-214. Franz J. Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. ACL-02. 295-302. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. ACL-03. 160-167. Franz Josef Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics. 
29(1) 19-51. Kishore Papineni, Salim Roukos, ToddWard and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. ACL-02. 311318. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. ICSLP-02. 901-904. Min Zhang, Hongfei Jiang, Ai Ti Aw, Jun Sun, Sheng Li and Chew Lim Tan. 2007. A Tree-to-Tree Alignment-based Model for Statistical Machine Translation. MT-Summit-07. 535-542. Min Zhang, Hongfei Jiang, Aiti Aw, Haizhou Li, Chew Lim Tan, Sheng Li. 2008a. A Tree Sequence Alignment-based Tree-to-Tree Translation Model. ACL-HLT-08. 559-567. Min Zhang, Hongfei Jiang, Haizhou Li, Aiti Aw, Sheng Li. 2008b. Grammar Comparison Study for Translational Equivalence Modeling and Statistical Machine Translation. COLING-08. 1097-1104. Ying Zhang, Stephan Vogel, Alex Waibel. 2004. Interpreting BLEU/NIST scores: How much improvement do we need to have a better system? LREC-04. 2051-2054. 180
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 181–189, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Active Learning for Multilingual Statistical Machine Translation∗ Gholamreza Haffari and Anoop Sarkar School of Computing Science, Simon Fraser University British Columbia, Canada {ghaffar1,anoop}@cs.sfu.ca Abstract Statistical machine translation (SMT) models require bilingual corpora for training, and these corpora are often multilingual with parallel text in multiple languages simultaneously. We introduce an active learning task of adding a new language to an existing multilingual set of parallel text and constructing high quality MT systems, from each language in the collection into this new target language. We show that adding a new language using active learning to the EuroParl corpus provides a significant improvement compared to a random sentence selection baseline. We also provide new highly effective sentence selection methods that improve AL for phrase-based SMT in the multilingual and single language pair setting. 1 Introduction The main source of training data for statistical machine translation (SMT) models is a parallel corpus. In many cases, the same information is available in multiple languages simultaneously as a multilingual parallel corpus, e.g., European Parliament (EuroParl) and U.N. proceedings. In this paper, we consider how to use active learning (AL) in order to add a new language to such a multilingual parallel corpus and at the same time we construct an MT system from each language in the original corpus into this new target language. We introduce a novel combined measure of translation quality for multiple target language outputs (the same content from multiple source languages). The multilingual setting provides new opportunities for AL over and above a single language pair. This setting is similar to the multi-task AL scenario (Reichart et al., 2008). In our case, the multiple tasks are individual machine translation tasks for several language pairs. The nature of the translation processes vary from any of the source ∗Thanks to James Peltier for systems support for our experiments. This research was partially supported by NSERC, Canada (RGPIN: 264905) and an IBM Faculty Award. languages to the new language depending on the characteristics of each source-target language pair, hence these tasks are competing for annotating the same resource. However it may be that in a single language pair, AL would pick a particular sentence for annotation, but in a multilingual setting, a different source language might be able to provide a good translation, thus saving annotation effort. In this paper, we explore how multiple MT systems can be used to effectively pick instances that are more likely to improve training quality. Active learning is framed as an iterative learning process. In each iteration new human labeled instances (manual translations) are added to the training data based on their expected training quality. However, if we start with only a small amount of initial parallel data for the new target language, then translation quality is very poor and requires a very large injection of human labeled data to be effective. 
To deal with this, we use a novel framework for active learning: we assume we are given a small amount of parallel text and a large amount of monolingual source language text; using these resources, we create a large noisy parallel text which we then iteratively improve using small injections of human translations. When we build multiple MT systems from multiple source languages to the new target language, each MT system can be seen as a different ‘view’ on the desired output translation. Thus, we can train our multiple MT systems using either self-training or co-training (Blum and Mitchell, 1998). In selftraining each MT system is re-trained using human labeled data plus its own noisy translation output on the unlabeled data. In co-training each MT system is re-trained using human labeled data plus noisy translation output from the other MT systems in the ensemble. We use consensus translations (He et al., 2008; Rosti et al., 2007; Matusov et al., 2006) as an effective method for co-training between multiple MT systems. This paper makes the following contributions: • We provide a new framework for multilingual MT, in which we build multiple MT systems and add a new language to an existing multilingual parallel corpus. The multilingual set181 ting allows new features for active learning which we exploit to improve translation quality while reducing annotation effort. • We introduce new highly effective sentence selection methods that improve phrase-based SMT in the multilingual and single language pair setting. • We describe a novel co-training based active learning framework that exploits consensus translations to effectively select only those sentences that are difficult to translate for all MT systems, thus sharing annotation cost. • We show that using active learning to add a new language to the EuroParl corpus provides a significant improvement compared to the strong random sentence selection baseline. 2 AL-SMT: Multilingual Setting Consider a multilingual parallel corpus, such as EuroParl, which contains parallel sentences for several languages. Our goal is to add a new language to this corpus, and at the same time to construct high quality MT systems from the existing languages (in the multilingual corpus) to the new language. This goal is formalized by the following objective function: O = D X d=1 αd × TQ(MF d→E) (1) where F d’s are the source languages in the multilingual corpus (D is the total number of languages), and E is the new language. The translation quality is measured by TQ for individual systems MF d→E; it can be BLEU score or WER/PER (Word error rate and position independent WER) which induces a maximization or minimization problem, respectively. The non-negative weights αd reflect the importance of the different translation tasks and P d αd = 1. AL-SMT formulation for single language pair is a special case of this formulation where only one of the αd’s in the objective function (1) is one and the rest are zero. Moreover the algorithmic framework that we introduce in Sec. 2.1 for AL in the multilingual setting includes the single language pair setting as a special case (Haffari et al., 2009). We denote the large unlabeled multilingual corpus by U := {(f1 j , .., fD j )}, and the small labeled multilingual corpus by L := {(f1 i , .., fD i , ei)}. We overload the term entry to denote a tuple in L or in U (it should be clear from the context). For a single language pair we use U and L. 
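As a minimal illustration of Eq. (1), the combined objective is simply a convex combination of per-language-pair translation quality scores. The sketch below is not from the paper; the sign handling for WER/PER and the function name are assumptions.

```python
def combined_objective(tq_scores, alphas, maximize=True):
    """O = sum_d alpha_d * TQ(M_{F^d -> E}) over the D source languages.
    tq_scores[d]: translation quality of the system translating source
    language d into the new target language E (e.g. BLEU, or WER/PER
    when minimizing). alphas: non-negative importance weights summing to 1."""
    assert all(a >= 0 for a in alphas) and abs(sum(alphas) - 1.0) < 1e-9
    value = sum(a * tq for a, tq in zip(alphas, tq_scores))
    # BLEU induces a maximization problem, WER/PER a minimization one;
    # negate in the latter case so that higher is always better.
    return value if maximize else -value

# Three source languages with BLEU scores and equal importance weights;
# setting a single alpha_d to 1 recovers the single language pair setting.
print(combined_objective([0.28, 0.25, 0.31], [1/3, 1/3, 1/3]))  # about 0.28
```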
2.1 The Algorithmic Framework Algorithm 1 represents our AL approach for the multilingual setting. We train our initial MT systems {MF d→E}D d=1 on the multilingual corpus L, and use them to translate all monolingual sentences in U. We denote sentences in U together with their multiple translations by U+ (line 4 of Algorithm 1). Then we retrain the SMT systems on L ∪U+ and use the resulting model to decode the test set. Afterwards, we select and remove a subset of highly informative sentences from U, and add those sentences together with their human-provided translations to L. This process is continued iteratively until a certain level of translation quality is met (we use the BLEU score, WER and PER) (Papineni et al., 2002). In the baseline, against which we compare our sentence selection methods, the sentences are chosen randomly. When (re-)training the models, two phrase tables are learned for each SMT model: one from the labeled data L and the other one from pseudolabeled data U+ (which we call the main and auxiliary phrase tables respectively). (Ueffing et al., 2007; Haffari et al., 2009) show that treating U+ as a source for a new feature function in a loglinear model for SMT (Och and Ney, 2004) allows us to maximally take advantage of unlabeled data by finding a weight for this feature using minimum error-rate training (MERT) (Och, 2003). Since each entry in U+ has multiple translations, there are two options when building the auxiliary table for a particular language pair (F d, E): (i) to use the corresponding translation ed of the source language in a self-training setting, or (ii) to use the consensus translation among all the translation candidates (e1, .., eD) in a co-training setting (sharing information between multiple SMT models). A whole range of methods exist in the literature for combining the output translations of multiple MT systems for a single language pair, operating either at the sentence, phrase, or word level (He et al., 2008; Rosti et al., 2007; Matusov et al., 2006). The method that we use in this work operates at the sentence level, and picks a single high quality translation from the union of the n-best lists generated by multiple SMT models. Sec. 5 gives 182 Algorithm 1 AL-SMT-Multiple 1: Given multilingual corpora L and U 2: {MF d→E}D d=1 = multrain(L, ∅) 3: for t = 1, 2, ... do 4: U+ = multranslate(U, {MF d→E}D d=1) 5: Select k sentences from U+, and ask a human for their true translations. 6: Remove the k sentences from U, and add the k sentence pairs (translated by human) to L 7: {MF d→E}D d=1 = multrain(L, U+) 8: Monitor the performance on the test set 9: end for more details about features which are used in our consensus finding method, and how it is trained. Now let us address the important question of selecting highly informative sentences (step 5 in the Algorithm 1) in the following section. 3 Sentence Selection: Multiple Language Pairs The goal is to optimize the objective function (1) with minimum human effort in providing the translations. This motivates selecting sentences which are maximally beneficial for all the MT systems. In this section, we present several protocols for sentence selection based on the combined information from multiple language pairs. 3.1 Alternating Selection The simplest selection protocol is to choose k sentences (entries) in the first iteration of AL which improve maximally the first model MF 1→E, while ignoring other models. 
3 Sentence Selection: Multiple Language Pairs

The goal is to optimize the objective function (1) with minimum human effort in providing the translations. This motivates selecting sentences which are maximally beneficial for all the MT systems. In this section, we present several protocols for sentence selection based on the combined information from multiple language pairs.

3.1 Alternating Selection

The simplest selection protocol is to choose k sentences (entries) in the first iteration of AL which maximally improve the first model M_{F^1→E}, while ignoring the other models. In the second iteration, the sentences are selected with respect to the second model, and so on (Reichart et al., 2008).

3.2 Combined Ranking

Pick any AL-SMT scoring method for a single language pair (see Sec. 4). Using this method, we rank the entries in the unlabeled data U for each translation task defined by a language pair (F^d, E). This results in several ranking lists, each of which represents the importance of entries with respect to a particular translation task. We combine these rankings using a combined score:

Score(f^1, .., f^D) = Σ_{d=1}^{D} α_d Rank_d(f^d)

Rank_d(·) is the ranking of a sentence in the list for the dth translation task (Reichart et al., 2008).

3.3 Disagreement Among the Translations

Disagreement among the candidate translations of a particular entry is evidence for the difficulty of that entry for the different translation models. The reason is that disagreement increases the possibility that most of the translations are not correct. Therefore it would be beneficial to ask a human for the translation of these hard entries. Now the question is how to quantify the notion of disagreement among the candidate translations (e_1, .., e_D). We propose two measures of disagreement which are related to the portion of shared n-grams (n ≤ 4) among the translations:
• Let e_c be the consensus among all the candidate translations; then define the disagreement as Σ_d α_d (1 − BLEU(e_c, e_d)).
• Based on the disagreement of every pair of candidate translations: Σ_d α_d Σ_{d′} (1 − BLEU(e_{d′}, e_d)).

For the single language pair setting, (Haffari et al., 2009) presents and compares several sentence selection methods for statistical phrase-based machine translation. We introduce novel techniques which outperform those methods in the next section.
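A minimal sketch of these multilingual selection scores (the combined ranking and the two disagreement measures); alpha, rank, and sent_bleu are illustrative stand-ins for the task weights, the per-task rankings, and any sentence-level BLEU implementation:

def combined_rank_score(entry, alpha, rank):
    # entry = (f^1, ..., f^D); rank[d] maps the d-th source sentence to its rank for task d.
    # A lower combined score means the entry is selected earlier.
    return sum(alpha[d] * rank[d][entry[d]] for d in range(len(entry)))

def disagreement_vs_consensus(candidates, consensus, alpha, sent_bleu):
    # sum_d alpha_d * (1 - BLEU(e_c, e_d))
    return sum(a * (1.0 - sent_bleu(consensus, e)) for a, e in zip(alpha, candidates))

def disagreement_pairwise(candidates, alpha, sent_bleu):
    # sum_d alpha_d * sum_d' (1 - BLEU(e_d', e_d))
    return sum(a * sum(1.0 - sent_bleu(e_prime, e) for e_prime in candidates)
               for a, e in zip(alpha, candidates))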
4 Sentence Selection: Single Language Pair

Phrases are the basic units of translation in phrase-based SMT models. The phrases which may potentially be extracted from a sentence indicate its informativeness. The more new phrases a sentence can offer, the more informative it is, since it boosts the generalization of the model. Additionally, phrase translation probabilities need to be estimated accurately, which means sentences that offer phrases whose occurrences in the corpus were rare are informative. When selecting new sentences for human translation, we need to pay attention to this tradeoff between exploration and exploitation, i.e. selecting sentences to discover new phrases vs. estimating the phrase translation probabilities accurately. Smoothing techniques partly handle accurate estimation of translation probabilities when the events occur rarely (indeed this is the main reason for smoothing). So we mainly focus on how to expand the lexicon or set of phrases of the model effectively.

The more frequent a phrase (not a phrase pair) is in the unlabeled data, the more important it is to know its translation, since it is more likely to be seen in test data (especially when the test data is in-domain with respect to the unlabeled data). The more frequent a phrase is in the labeled data, the less important it is, since we have probably already observed most of its translations. In the labeled data L, the phrases are the ones which are extracted by the SMT models; but what are the candidate phrases in the unlabeled data U? We use the currently trained SMT models to answer this question. Each translation in the n-best list of translations (generated by the SMT models) corresponds to a particular segmentation of a sentence, which breaks that sentence into several fragments (see Fig. 1). Some of these fragments are the source language part of a phrase pair available in the phrase table, which we call regular phrases and denote their set by X^reg_s for a sentence s. However, there are some fragments in the sentence which are not covered by the phrase table – possibly because of OOVs (out-of-vocabulary words) or the constraints imposed by the phrase extraction algorithm – called X^oov_s for a sentence s. Each member of X^oov_s offers a set of potential phrases (also referred to as OOV phrases) which are not observed due to the latent segmentation of this fragment. We present two generative models for the phrases and show how to estimate and use them for sentence selection.

4.1 Model 1

In the first model, the generative story is to generate phrases for each sentence based on independent draws from a multinomial. The sample space of the multinomial consists of both regular and OOV phrases. We build two models, i.e. two multinomials, one for the labeled data and the other one for the unlabeled data. Each model is trained by maximizing the log-likelihood of its corresponding data:

L_D := Σ_{s∈D} P̃(s) Σ_{x∈X_s} log P(x|θ_D)    (2)

where D is either L or U, P̃(s) is the empirical distribution of the sentences (i.e. the number of times that the sentence s is seen in D divided by the number of all sentences in D), and θ_D is the parameter vector of the corresponding probability distribution. When x ∈ X^oov_s, we will have

P(x|θ_U) = Σ_{h∈H_x} P(x, h|θ_U) = Σ_{h∈H_x} P(h) P(x|h, θ_U) = (1/|H_x|) Σ_{h∈H_x} Π_{y∈Y^h_x} θ_U(y)    (3)

where H_x is the space of all possible segmentations for the OOV fragment x, Y^h_x is the set of phrases resulting from x based on the segmentation h, and θ_U(y) is the probability of the OOV phrase y in the multinomial associated with U. We let H_x be all possible segmentations of the fragment x for which the resulting phrase lengths are not greater than the maximum length constraint for phrase extraction in the underlying SMT model. Since we do not know anything about the segmentations a priori, we have put a uniform distribution over such segmentations.

Maximizing (2) to find the maximum likelihood parameters for this model is an extremely difficult problem (setting partial derivatives of the Lagrangian to zero amounts to finding the roots of a system of multivariate polynomials, a major topic in Algebraic Geometry). Therefore, we maximize the following lower bound on the log-likelihood, which is derived using Jensen’s inequality:

L_D ≥ Σ_{s∈D} P̃(s) [ Σ_{x∈X^reg_s} log θ_D(x) + Σ_{x∈X^oov_s} Σ_{h∈H_x} (1/|H_x|) Σ_{y∈Y^h_x} log θ_D(y) ]    (4)

Maximizing (4) amounts to setting the probability of each regular / potential phrase proportional to its count / expected count in the data D. Let ρ_k(x_{i:j}) be the number of possible segmentations from position i to position j of an OOV fragment x, where k is the maximum phrase length:

ρ_k(x_{1:|x|}) = 0 if |x| = 0; 1 if |x| = 1; Σ_{i=1}^{k} ρ_k(x_{i+1:|x|}) otherwise

which gives us a dynamic programming algorithm to compute the number of segmentations |H_x| = ρ_k(x_{1:|x|}) of the OOV fragment x. The expected count of a potential phrase y based on an OOV segment x is (see Fig. 1.c):

E[y|x] = ( Σ_{i≤j} δ[y = x_{i:j}] ρ_k(x_{1:i−1}) ρ_k(x_{j+1:|x|}) ) / ρ_k(x)

where δ[C] is 1 if the condition C is true, and zero otherwise. We have used the fact that the number of occurrences of a phrase spanning the indices [i, j] is the product of the number of segmentations of the left and the right sub-fragments, which are ρ_k(x_{1:i−1}) and ρ_k(x_{j+1:|x|}) respectively.

[Figure 1 (example sentence “i will go to school on friday” with its phrase-table entries, regular phrases, OOV segment, and potential phrases): The given sentence in (b) is segmented, based on the source side phrases extracted from the phrase table in (a), to yield regular phrases and an OOV segment. The table in (c) shows the potential phrases extracted from the OOV segment “go to school” and their expected counts (denoted by count), where the maximum length for the potential phrases is set to 2. In the example, “go to school” has 3 segmentations with maximum phrase length 2: (go)(to school), (go to)(school), (go)(to)(school).]
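A minimal sketch of this dynamic program and of the expected phrase counts, with hypothetical function names; the empty fragment is given count 1 here, an assumed convention under which the numbers in Figure 1(c) above are reproduced:

from functools import lru_cache

def expected_oov_phrase_counts(fragment, k):
    # Expected counts E[y|x] of the potential phrases y in an OOV fragment x,
    # with maximum phrase length k (Section 4.1).
    words = fragment.split()
    n = len(words)

    @lru_cache(maxsize=None)
    def rho(length):
        # number of segmentations of a fragment of `length` words into phrases of <= k words
        if length == 0:
            return 1   # assumed convention so that the Figure 1(c) counts come out right
        return sum(rho(length - i) for i in range(1, min(k, length) + 1))

    total = rho(n)
    counts = {}
    for i in range(n):
        for j in range(i, min(i + k, n)):
            phrase = " ".join(words[i:j + 1])
            counts[phrase] = counts.get(phrase, 0.0) + rho(i) * rho(n - j - 1) / total
    return counts

# expected_oov_phrase_counts("go to school", 2) gives
# {"go": 2/3, "go to": 1/3, "to": 1/3, "to school": 1/3, "school": 2/3}.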
4.2 Model 2

In the second model, we consider a mixture model of two multinomials responsible for generating phrases in each of the labeled and unlabeled data sets. To generate a phrase, we first toss a coin and, depending on the outcome, we either generate the phrase from the multinomial associated with regular phrases, θ^reg_U, or from the one associated with potential phrases, θ^oov_U:

P(x|θ_U) := β_U θ^reg_U(x) + (1 − β_U) θ^oov_U(x)

where θ_U includes the mixing weight β_U and the parameter vectors of the two multinomials. The mixture model associated with L is written similarly. The parameter estimation is based on maximizing a lower bound on the log-likelihood which is similar to what was done for Model 1.

4.3 Sentence Scoring

The sentence score is a linear combination of two terms: one coming from regular phrases and the other from OOV phrases:

φ_1(s) := (λ/|X^reg_s|) Σ_{x∈X^reg_s} log [P(x|θ_U) / P(x|θ_L)] + ((1−λ)/|X^oov_s|) Σ_{x∈X^oov_s} Σ_{h∈H_x} (1/|H_x|) log Π_{y∈Y^h_x} [P(y|θ_U) / P(y|θ_L)]

where we use either Model 1 or Model 2 for P(·|θ_D). The first term is the log probability ratio of the regular phrases under the phrase models corresponding to unlabeled and labeled data, and the second term is the expected log probability ratio (ELPR) under the two models. Another option for the contribution of OOV phrases is to take the log of the expected probability ratio (LEPR):

φ_2(s) := (λ/|X^reg_s|) Σ_{x∈X^reg_s} log [P(x|θ_U) / P(x|θ_L)] + ((1−λ)/|X^oov_s|) Σ_{x∈X^oov_s} log Σ_{h∈H_x} (1/|H_x|) Π_{y∈Y^h_x} [P(y|θ_U) / P(y|θ_L)]

It is not difficult to prove that there is no difference between Model 1 and Model 2 when ELPR scoring is used for sentence selection. However, the situation is different for LEPR scoring: the two models produce different sentence rankings in this case.
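A minimal sketch of the ELPR score φ_1, assuming theta_U and theta_L are phrase-to-probability maps estimated as above, reg_phrases and oov_fragments come from the decoder segmentation of the sentence, and segmentations(x, k) enumerates H_x; all names are illustrative:

import math

def elpr_score(reg_phrases, oov_fragments, theta_U, theta_L, lam, k, segmentations):
    # First term: average log probability ratio of the regular phrases.
    reg_term = sum(math.log(theta_U[x] / theta_L[x]) for x in reg_phrases)
    reg_term = lam * reg_term / max(len(reg_phrases), 1)

    # Second term: expected log probability ratio over the latent segmentations
    # of each OOV fragment (the log of a product is a sum of logs).
    oov_term = 0.0
    for x in oov_fragments:
        H = segmentations(x, k)   # list of segmentations, each a list of phrases
        expected = sum(sum(math.log(theta_U[y] / theta_L[y]) for y in h) for h in H)
        oov_term += expected / len(H)
    oov_term = (1.0 - lam) * oov_term / max(len(oov_fragments), 1)
    return reg_term + oov_term

The LEPR variant φ_2 differs only in the OOV term: there the probability ratios are multiplied within each segmentation, averaged over H_x, and the logarithm is taken of that average.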
5 Experiments

Corpora. We pre-processed the EuroParl corpus (http://www.statmt.org/europarl) (Koehn, 2005) and built a multilingual parallel corpus with 653,513 sentences, excluding the Q4/2000 portion of the data (2000-10 to 2000-12), which is reserved as the test set. We subsampled 5,000 sentences as the labeled data L and 20,000 sentences as U for the pool of untranslated sentences (while hiding the English part). The test set consists of 2,000 multi-language sentences and comes from the multilingual parallel corpus built from the Q4/2000 portion of the data.

Consensus Finding. Let T be the union of the n-best lists of translations for a particular sentence. The consensus translation t_c is

argmax_{t∈T} w_1 · LM(t)/|t| + w_2 · Q_d(t)/|t| + w_3 · R_d(t) + w_{4,d}

where LM(t) is the score from a 3-gram language model, Q_d(t) is the translation score generated by the decoder for M_{F^d→E} if t is produced by the dth SMT model, R_d(t) is the rank of the translation in the n-best list produced by the dth model, w_{4,d} is a bias term for each translation model to make their scores comparable, and |t| is the length of the translation sentence. The number of weights w_i is 3 plus the number of source languages, and they are trained using minimum error-rate training (MERT) to maximize the BLEU score (Och, 2003) on a development set.

Parameters. We use add-ϵ smoothing where ϵ = .5 to smooth the probabilities in Sec. 4; moreover, λ = .4 for ELPR and LEPR sentence scoring, and the maximum phrase length k is set to 4. For the multilingual experiments (which involve four source languages) we set α_d = .25 to make the importance of the individual translation tasks equal.

5.1 Results

First we evaluate the proposed sentence selection methods in Sec. 4 for the single language pair. Then the best method from the single language pair setting is used to evaluate the sentence selection methods for AL in the multilingual setting. After building the initial MT system for each experiment, we select and remove 500 sentences from U and add them together with translations to L, for 10 total iterations. The random sentence selection baselines are averaged over 3 independent runs.

[Figure 2 (three panels: French to English, Spanish to English, German to English; x-axis: Added Sentences, y-axis: BLEU Score; curves: Model 2 – LEPR, Model 1 – ELPR, GeomPhrase, Random): The performance of different sentence selection strategies as the iterations of the AL loop go on, for three translation tasks. The plots show the performance of the sentence selection methods for a single language pair in Sec. 4 compared to GeomPhrase (Haffari et al., 2009) and the random sentence selection baseline.]

[Figure 3 (x-axis: Added Sentences, y-axis: Avg BLEU Score; multilingual da-de-nl-sv to en; curves: Self-Training, Co-Training): Random sentence selection baseline using self-training and co-training (Germanic languages to English).]

Table 1: Comparison of multilingual selection methods with WER (word error rate) and PER (position-independent WER). The 95% confidence interval for the WER numbers is 0.7 and for the PER numbers is 0.5. Bold: best result; italic: significantly better.

Germanic languages to English
Method              self-train WER / PER    co-train WER / PER
Combined Rank       40.2 / 30.0             40.0 / 29.6
Alternate           41.0 / 30.2             40.1 / 30.1
Disagree-Pairwise   41.9 / 32.0             40.5 / 30.9
Disagree-Center     41.8 / 31.8             40.6 / 30.7
Random Baseline     41.6 / 31.0             40.5 / 30.7

Romance languages to English
Method              self-train WER / PER    co-train WER / PER
Combined Rank       37.7 / 27.3             37.3 / 27.0
Alternate           37.7 / 27.3             37.3 / 27.0
Random Baseline     38.6 / 28.1             38.1 / 27.6

We use three language pairs in our single language pair experiments: French-English, German-English, and Spanish-English.
In addition to the random sentence selection baseline, we also compare the methods proposed in this paper to the best method reported in (Haffari et al., 2009), denoted by GeomPhrase, which differs from our models since it considers each individual OOV segment as a single OOV phrase and does not consider subsequences. The results are presented in Fig. 2. Selecting sentences based on our proposed methods outperforms the random sentence selection baseline and GeomPhrase. We suspect that in situations where L is out-of-domain and the average phrase length is relatively small, our method will outperform GeomPhrase even more.

For the multilingual experiments, we use Germanic (German, Dutch, Danish, Swedish) and Romance (French, Spanish, Italian, Portuguese) languages as the source and English as the target language, as two sets of experiments. (A reviewer pointed out that the EuroParl English-Portuguese data is very noisy and future work should omit this pair. The choice of Germanic and Romance languages for our experimental setting is inspired by results in (Cohn and Lapata, 2007).) Fig. 3 shows the performance of random sentence selection for AL combined with self-training/co-training for the multi-source translation from the four Germanic languages to English. It shows that the co-training mode outperforms the self-training mode by almost 1 BLEU point. The results of the selection strategies in the multilingual setting are presented in Fig. 4 and Tbl. 1. Having noticed that Model 1 with ELPR performs well in the single language pair setting, we use it to rank entries for the individual translation tasks. These rankings are then used by the ‘Alternate’ and ‘Combined Rank’ selection strategies in the multilingual case. The ‘Combined Rank’ method outperforms all the other methods, including the strong random selection baseline, in both self-training and co-training modes. The disagreement-based selection methods underperform the baseline for translation of the Germanic languages to English, so we omitted them for the Romance language experiments.

[Figure 4 (four panels; x-axis: Added Sentences, y-axis: Avg BLEU Score): The left/right plots show the performance of our AL methods for the multilingual setting combined with self-training/co-training. The sentence selection methods from Sec. 3 are compared with the random sentence selection baseline. The top plots correspond to Danish-German-Dutch-Swedish to English, and the bottom plots correspond to French-Spanish-Italian-Portuguese to English.]

5.2 Analysis

The basis for our proposed methods has been the popularity of regular/OOV phrases in U and their unpopularity in L, which is measured by P(x|θ_U)/P(x|θ_L). We need P(x|θ_U), the estimated distribution of phrases in U, to be as similar as possible to P*(x), the true distribution of phrases in U.
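One way to quantify this similarity, and the form of the comparison reported in Table 2 below, is a KL-divergence between smoothed relative-frequency phrase distributions; a minimal sketch, assuming add-ε smoothing over a shared phrase vocabulary and purely illustrative names:

import math

def smoothed_distribution(phrase_counts, vocab, eps=0.5):
    # relative frequencies with add-eps smoothing over a shared phrase vocabulary
    total = sum(phrase_counts.get(p, 0.0) for p in vocab) + eps * len(vocab)
    return {p: (phrase_counts.get(p, 0.0) + eps) / total for p in vocab}

def kl_divergence(p_true, p_est):
    # KL(P* || P) = sum_x P*(x) log(P*(x) / P(x))
    return sum(p * math.log(p / p_est[x]) for x, p in p_true.items() if p > 0)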
We investigate this issue for regular/OOV phrases as follows:
• Using the output of the initially trained MT system on L, we extract the regular/OOV phrases as described in §4. The smoothed relative frequencies give us the regular/OOV phrasal distributions.
• Using the true English translation of the sentences in U, we extract the true phrases. Separating the phrases into the two sets of regular and OOV phrases defined by the previous step, we use the smoothed relative frequencies and form the true OOV/regular phrasal distributions.

We use the KL-divergence to see how dissimilar a pair of given probability distributions are. As Tbl. 2 shows, the KL-divergence between the true and estimated distributions is smaller than that between the true and uniform distributions, in all three language pairs.

Table 2: For regular/OOV phrases, the KL-divergence between the true distribution (P*) and the estimated (P) or uniform (unif) distributions, where KL(P* ∥ P) := Σ_x P*(x) log [P*(x) / P(x)].

                        De2En   Fr2En   Es2En
KL(P*_reg ∥ P_reg)      4.37    4.17    4.38
KL(P*_reg ∥ unif)       5.37    5.21    5.80
KL(P*_oov ∥ P_oov)      3.04    4.58    4.73
KL(P*_oov ∥ unif)       3.41    4.75    4.99

[Figure 5 (two log-log panels, Regular Phrases in U and OOV Phrases in U; x-axis: Rank, y-axis: Probability; curves: Estimated Distribution, True Distribution): The log-log Zipf plots representing the true and estimated probabilities of a (source) phrase vs the rank of that phrase in the German to English translation task. The plots for the Spanish to English and French to English tasks are also similar to the above plots, and confirm a power law behavior in the true phrasal distributions.]

Since the uniform distribution conveys no information, this is evidence that there is some information encoded in the estimated distribution about the true distribution. However, we noticed that the true distributions of regular/OOV phrases exhibit Zipfian (power law) behavior which is not well captured by the estimated distributions (see Fig. 5). (This observation is at the phrase level and not at the word (Zipf, 1932) or even n-gram level (Ha et al., 2002).) Enhancing the estimated distributions to capture this power law behavior would improve the quality of the proposed sentence selection methods.

6 Related Work

(Haffari et al., 2009) provides results for active learning for MT using a single language pair. Our work generalizes to the use of multilingual corpora using new methods that are not possible with a single language pair. In this paper, we also introduce new selection methods that outperform the methods in (Haffari et al., 2009) even for MT with a single language pair. In addition, by considering multilingual parallel corpora we were able to introduce co-training for AL, while (Haffari et al., 2009) only use self-training since they are using a single language pair.

(Reichart et al., 2008) introduces multi-task active learning, where unlabeled data require annotations for multiple tasks; e.g. they consider named entities and parse trees, and show that multiple tasks help selection compared to individual tasks. Our setting is different in that the target language is the same across the multiple MT tasks, which we exploit to use consensus translations and co-training to improve active learning performance.
(Callison-Burch and Osborne, 2003b; CallisonBurch and Osborne, 2003a) provide a co-training approach to MT, where one language pair creates data for another language pair. In contrast, our co-training approach uses consensus translations and our setting for active learning is very different from their semi-supervised setting. A Ph.D. proposal by Chris Callison-Burch (Callison-burch, 2003) lays out the promise of AL for SMT and proposes some algorithms. However, the lack of experimental results means that performance and feasibility of those methods cannot be compared to ours. While we use consensus translations (He et al., 2008; Rosti et al., 2007; Matusov et al., 2006) as an effective method for co-training in this paper, unlike consensus for system combination, the source languages for each of our MT systems are different, which rules out a set of popular methods for obtaining consensus translations which assume translation for a single language pair. Finally, we briefly note that triangulation (see (Cohn and Lapata, 2007)) is orthogonal to the use of co-training in our work, since it only enhances each MT system in our ensemble by exploiting the multilingual data. In future work, we plan to incorporate triangulation into our active learning approach. 7 Conclusion This paper introduced the novel active learning task of adding a new language to an existing multilingual set of parallel text. We construct SMT systems from each language in the collection into the new target language. We show that we can take advantage of multilingual corpora to decrease annotation effort thanks to the highly effective sentence selection methods we devised for active learning in the single language-pair setting which we then applied to the multilingual sentence selection protocols. In the multilingual setting, a novel cotraining method for active learning in SMT is proposed using consensus translations which outperforms AL-SMT with self-training. 188 References Avrim Blum and Tom Mitchell. 1998. Combining Labeled and Unlabeled Data with Co-Training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory (COLT 1998), Madison, Wisconsin, USA, July 24-26. ACM. Chris Callison-Burch and Miles Osborne. 2003a. Bootstrapping parallel corpora. In NAACL workshop: Building and Using Parallel Texts: Data Driven Machine Translation and Beyond. Chris Callison-Burch and Miles Osborne. 2003b. Cotraining for statistical machine translation. In Proceedings of the 6th Annual CLUK Research Colloquium. Chris Callison-burch. 2003. Active learning for statistical machine translation. In PhD Proposal, Edinburgh University. Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In ACL. Le Quan Ha, E. I. Sicilia-Garcia, Ji Ming, and F.J. Smith. 2002. Extension of zipf’s law to words and phrases. In Proceedings of the 19th international conference on Computational linguistics. Gholamreza Haffari, Maxim Roy, and Anoop Sarkar. 2009. Active learning for statistical phrase-based machine translation. In NAACL. Xiaodong He, Mei Yang, Jianfeng Gao, Patrick Nguyen, and Robert Moore. 2008. Indirect-hmmbased hypothesis alignment for combining outputs from machine translation systems. In EMNLP. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT Summit. Evgeny Matusov, Nicola Ueffing, and Hermann Ney. 2006. 
Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In EACL. Franz Josef Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In ACL ’02: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics. Roi Reichart, Katrin Tomanek, Udo Hahn, and Ari Rappoport. 2008. Multi-task active learning for linguistic annotations. In ACL. Antti-Veikko Rosti, Necip Fazil Ayan, Bing Xiang, Spyros Matsoukas, Richard M. Schwartz, and Bonnie Jean Dorr. 2007. Combining outputs from multiple machine translation systems. In NAACL. Nicola Ueffing, Gholamreza Haffari, and Anoop Sarkar. 2007. Transductive learning for statistical machine translation. In ACL. George Zipf. 1932. Selective Studies and the Principle of Relative Frequency in Language. Harvard University Press. 189
2009
21
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 190–198, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP DEPEVAL(summ): Dependency-based Evaluation for Automatic Summaries Karolina Owczarzak Information Access Division National Institute of Standards and Technology Gaithersburg, MD 20899 [email protected] Abstract This paper presents DEPEVAL(summ), a dependency-based metric for automatic evaluation of summaries. Using a reranking parser and a Lexical-Functional Grammar (LFG) annotation, we produce a set of dependency triples for each summary. The dependency set for each candidate summary is then automatically compared against dependencies generated from model summaries. We examine a number of variations of the method, including the addition of WordNet, partial matching, or removing relation labels from the dependencies. In a test on TAC 2008 and DUC 2007 data, DEPEVAL(summ) achieves comparable or higher correlations with human judgments than the popular evaluation metrics ROUGE and Basic Elements (BE). 1 Introduction Evaluation is a crucial component in the area of automatic summarization; it is used both to rank multiple participant systems in shared summarization tasks, such as the Summarization track at Text Analysis Conference (TAC) 2008 and its Document Understanding Conference (DUC) predecessors, and to provide feedback to developers whose goal is to improve their summarization systems. However, manual evaluation of a large number of documents necessary for a relatively unbiased view is often unfeasible, especially in the contexts where repeated evaluations are needed. Therefore, there is a great need for reliable automatic metrics that can perform evaluation in a fast and consistent manner. In this paper, we explore one such evaluation metric, DEPEVAL(summ), based on the comparison of Lexical-Functional Grammar (LFG) dependencies between a candidate summary and one or more model (reference) summaries. The method is similar in nature to Basic Elements (Hovy et al., 2005), in that it extends beyond a simple string comparison of word sequences, reaching instead to a deeper linguistic analysis of the text. Both methods use hand-written extraction rules to derive dependencies from constituent parses produced by widely available Penn II Treebank parsers. The difference between DEPEVAL(summ) and BE is that in DEPEVAL(summ) the dependency extraction is accomplished through an LFG annotation of Cahill et al. (2004) applied to the output of the reranking parser of Charniak and Johnson (2005), whereas in BE (in the version presented here) dependencies are generated by the Minipar parser (Lin, 1995). Despite relying on a the same concept, our approach outperforms BE in most comparisons, and it often achieves higher correlations with human judgments than the string-matching metric ROUGE (Lin, 2004). A more detailed description of BE and ROUGE is presented in Section 2, which also gives an account of manual evaluation methods employed at TAC 2008. Section 3 gives a short introduction to the LFG annotation. Section 4 describes in more detail DEPEVAL(summ) and its variants. Section 5 presents the experiment in which we compared the perfomance of all three metrics on the TAC 2008 data (consisting of 5,952 100-words summaries) and on the DUC 2007 data (1,620 250-word summaries) and discusses the correlations these metrics achieve. Finally, Section 6 presents conclusions and some directions for future work. 
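To make the triple-matching idea concrete, the following is a minimal sketch of the kind of set overlap on which DEPEVAL(summ) is based (the exact matching variants are described in Section 4); the triple format and names here are illustrative:

from collections import Counter

# Each dependency is a (relation, head, modifier) triple, e.g. ("subject", "resign", "john").
def overlap_scores(candidate_deps, reference_deps):
    cand, ref = Counter(candidate_deps), Counter(reference_deps)
    matched = sum((cand & ref).values())              # multiset intersection
    precision = matched / max(sum(cand.values()), 1)
    recall = matched / max(sum(ref.values()), 1)
    f_score = 2 * precision * recall / (precision + recall) if matched else 0.0
    return precision, recall, f_score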
2 Current practice in summary evaluation In the first Text Analysis Conference (TAC 2008), as well as its predecessor, the Document Understanding Conference (DUC) series, the evaluation 190 of summarization tasks was conducted using both manual and automatic methods. Since manual evaluation is still the undisputed gold standard, both at TAC and DUC there was much effort to evaluate manually as much data as possible. 2.1 Manual evaluation Manual assessment, performed by human judges, usually centers around two main aspects of summary quality: content and form. Similarly to Machine Translation, where these two aspects are represented by the categories of Accuracy and Fluency, in automatic summarization evaluation performed at TAC and DUC they surface as (Content) Responsiveness and Readability. In TAC 2008 (Dang and Owczarzak, 2008), however, Content Responsiveness was replaced by Overall Responsiveness, conflating these two dimensions and reflecting the overall quality of the summary: the degree to which a summary was responding to the information need contained in the topic statement, as well as its linguistic quality. A separate Readability score was still provided, assessing the fluency and structure independently of content, based on such aspects as grammaticality, nonredundancy, referential clarity, focus, structure, and coherence. Both Overall Responsiveness and Readability were evaluated according to a fivepoint scale, ranging from “Very Poor” to “Very Good”. Content was evaluated manually by NIST assessors using the Pyramid framework (Passonneau et al., 2005). In the Pyramid evaluation, assessors first extract all possible “information nuggets”, or Summary Content Units (SCUs) from the four human-crafted model summaries on a given topic. Each SCU is assigned a weight in proportion to the number of model summaries in which it appears, on the assumption that information which appears in most or all human-produced model summaries is more essential to the topic. Once all SCUs are harvested from the model summaries, assessors determine how many of these SCUs are present in each of the automatic peer summaries. The final score for an automatic summary is its total SCU weight divided by the maximum SCU weight available to a summary of average length (where the average length is determined by the mean SCU count of the model summaries for this topic). All types of manual assessment are expensive and time-consuming, which is why it can be rarely provided for all submitted runs in shared tasks such as the TAC Summarization track. It is also not a viable tool for system developers who ideally would like a fast, reliable, and above all automatic evaluation method that can be used to improve their systems. The creation and testing of automatic evaluation methods is, therefore, an important research venue, and the goal is to produce automatic metrics that will correlate with manual assessment as closely as possible. 2.2 Automatic evaluation Automatic metrics, because of their relative speed, can be applied more widely than manual evaluation. In TAC 2008 Summarization track, all submitted runs were scored with the ROUGE (Lin, 2004) and Basic Elements (BE) metrics (Hovy et al., 2005). ROUGE is a collection of string-comparison techniques, based on matching n-grams between a candidate string and a reference string. The string in question might be a single sentence (as in the case of translation), or a set of sentences (as in the case of summaries). 
The variations of ROUGE range from matching unigrams (i.e. single words) to matching four-grams, with or without lemmatization and stopwords, with the options of using different weights or skip-n-grams (i.e. matching n-grams despite intervening words). The two versions used in TAC 2008 evaluations were ROUGE-2 and ROUGE-SU4, where ROUGE-2 calculates the proportion of matching bigrams between the candidate summary and the reference summaries, and ROUGE-SU4 is a combination of unigram match and skip-bigram match with skip distance of 4 words. BE, on the other hand, employs a certain degree of linguistic analysis in the assessment process, as it rests on comparing the “Basic Elements” between the candidate and the reference. Basic Elements are syntactic in nature, and comprise the heads of major syntactic constituents in the text (noun, verb, adjective, etc.) and their modifiers in a dependency relation, expressed as a triple (head, modifier, relation type). First, the input text is parsed with a syntactic parser, then Basic Elements are extracted from the resulting parse, and the candidate BEs are matched against the reference BEs. In TAC 2008 and DUC 2008 evaluations the BEs were extracted with Minipar (Lin, 1995). Since BE, contrary to ROUGE, does not 191 rely solely on the surface sequence of words to determine similarity between summaries, but delves into what could be called a shallow semantic structure, comprising thematic roles such as subject and object, it is likely to notice identity of meaning where such identity is obscured by variations in word order. In fact, when it comes to evaluation of automatic summaries, BE shows higher correlations with human judgments than ROUGE, although the difference is not large enough to be statistically significant. In the TAC 2008 evaluations, BE-HM (a version of BE where the words are stemmed and the relation type is ignored) obtained a correlation of 0.911 with human assessment of overall responsiveness and 0.949 with the Pyramid score, whereas ROUGE-2 showed correlations of 0.894 and 0.946, respectively. While using dependency information is an important step towards integrating linguistic knowledge into the evaluation process, there are many ways in which this could be approached. Since this type of evaluation processes information in stages (constituent parser, dependency extraction, and the method of dependency matching between a candidate and a reference), there is potential for variance in performance among dependencybased evaluation metrics that use different components. Therefore, it is interesting to compare our method, which relies on the Charniak-Johnson parser and the LFG annotation, with BE, which uses Minipar to parse the input and produce dependencies. 3 Lexical-Functional Grammar and the LFG parser The method discussed in this paper rests on the assumptions of Lexical-Functional Grammar (Kaplan and Bresnan, 1982; Bresnan, 2001) (LFG). In LFG sentence structure is represented in terms of c(onstituent)-structure and f(unctional)-structure. C-structure represents the word order of the surface string and the hierarchical organisation of phrases in terms of trees. F-structures are recursive feature structures, representing abstract grammatical relations such as subject, object, oblique, adjunct, etc., approximating to predicateargument structure or simple logical forms. Cstructure and f-structure are related by means of functional annotations in c-structure trees, which describe f-structures. 
While c-structure is sensitive to surface rearrangement of constituents, f-structure abstracts away from (some of) the particulars of surface realization. The sentences John resigned yesterday and Yesterday, John resigned will receive different tree representations, but identical f-structures. The f-structure can also be described in terms of a flat set of triples, or dependencies. In triples format, the f-structure for these two sentences is represented in 1. (1) subject(resign,john) person(john,3) number(john,sg) tense(resign,past) adjunct(resign,yesterday) person(yesterday,3) number(yesterday,sg) Cahill et al. (2004), in their presentation of LFG parsing resources, distinguish 32 types of dependencies, divided into two major groups: a group of predicate-only dependencies and nonpredicate dependencies. Predicate-only dependencies are those whose path ends in a predicatevalue pair, describing grammatical relations. For instance, in the sentence John resigned yesterday, predicate-only dependencies would include: subject(resign, john) and adjunct(resign, yesterday), while non-predicate dependencies are person(john,3), number(john,sg), tense(resign,past), person(yesterday,3), num(yesterday,sg). Other predicate-only dependencies include: apposition, complement, open complement, coordination, determiner, object, second object, oblique, second oblique, oblique agent, possessive, quantifier, relative clause, topic, and relative clause pronoun. The remaining non-predicate dependencies are: adjectival degree, coordination surface form, focus, complementizer forms: if, whether, and that, modal, verbal particle, participle, passive, pronoun surface form, and infinitival clause. These 32 dependencies, produced by LFG annotation, and the overlap between the set of dependencies derived from the candidate summary and the reference summaries, form the basis of our evaluation method, which we present in Section 4. First, a summary is parsed with the CharniakJohnson reranking parser (Charniak and Johnson, 2005) to obtain the phrase-structure tree. Then, a sequence of scripts annotates the output, translating the relative phrase position into f-structural dependencies. The treebank-based LFG annotation used in this paper and developed by Cahill et al. (2004) obtains high precision and recall rates. As reported in Cahill et al. (2008), the version of 192 the LFG parser which applies the LFG annotation algorithm to the earlier Charniak’s parser (Charniak, 2000) obtains an f-score of 86.97 on the Wall Street Journal Section 23 test set. The LFG parser is robust as well, with coverage levels exceeding 99.9%, measured in terms of complete spanning parse. 4 Dependency-based evaluation Our dependency-based evaluation method, similarly to BE, compares two unordered sets of dependencies: one bag contains dependencies harvested from the candidate summary and the other contains dependencies from one or more reference summaries. Overlap between the candidate bag and the reference bag is calculated in the form of precision, recall, and the f-measure (with precision and recall equally weighted). Since for ROUGE and BE the only reported score is recall, we present recall results here as well, calculated as in 2: (2) DEPEVAL(summ) Recall = |Dcand|∩|Dref| |Dref| where Dcand are the candidate dependencies and Dref are the reference dependencies. The dependency-based method using LFG annotation has been successfully employed in the evaluation of Machine Translation (MT). 
In Owczarzak (2008), the method achieves equal or higher correlations with human judgments than METEOR (Banerjee and Lavie, 2005), one of the best-performing automatic MT evaluation metrics. However, it is not clear that the method can be applied without change to the task of assessing automatic summaries; after all, the two tasks - of summarization and translation - produce outputs that are different in nature. In MT, the unit of text is a sentence; text is translated, and the translation evaluated, sentence by sentence. In automatic summarization, the output unit is a summary with length varying depending on task, but which most often consists of at least several sentences. This has bearing on the matching process: with several sentences on the candidate and reference side each, there is increased possibility of trivial matches, such as dependencies containing function words, which might inflate the summary score even in the absence of important content. This is particularly likely if we were to employ partial matching for dependencies. Partial matching (indicated in the result tables with the tag pm) “splits” each predicate dependency into two, replacing one or the other element with a variable, e.g. for the dependency subject(resign, John) we would obtain two partial dependencies subject(resign, x) and subject(x, John). This process helps circumvent some of the syntactic and lexical variation between a candidate and a reference, and it proved very useful in MT evaluation (Owczarzak, 2008). In summary evaluation, as will be shown in Section 5, it leads to higher correlations with human judgments only in the case of human-produced model summaries, because almost any variation between two model summaries is “legal”, i.e. either a paraphrase or another, but equally relevant, piece of information. For automatic summaries, which are of relatively poor quality, partial matching lowers our method’s ability to reflect human judgment, because it results in overly generous matching in situations where the examined information is neither a paraphrase nor relevant. Similarly, evaluating a summary against the union of all references, as we do in the baseline version of our method, increases the pool of possible matches, but may also produce score inflation through matching repetitive information across models. To deal with this, we produce a version of the score (marked in the result tables with the tag one) that counts only one “hit” for every dependency match, independent of how many instances of a given dependency are present in the comparison. The use of WordNet1 module (Rennie, 2000) did not provide a great advantage (see results tagged with wn), and sometimes even lowered our correlations, especially in evaluation of automatic systems. This makes sense if we take into consideration that WordNet lists all possible synonyms for all possible senses of a word, and so, given a great number of cross-sentence comparisons in multi-sentence summaries, there is an increased risk of spurious matches between words which, despite being potentially synonymous in certain contexts, are not equivalent in the text. Another area of concern was the potential noise introduced by the parser and the annotation process. Due to parsing errors, two otherwise equivalent expressions might be encoded as differing sets of dependencies. 
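A minimal sketch of the pm and norel variants just described, using the same illustrative (relation, head, modifier) triple format; this is only an approximation of the matching behaviour, not the released implementation:

def expand_partial(dep):
    # pm: split a predicate dependency into two partial dependencies,
    # replacing one argument at a time with a wildcard
    rel, head, mod = dep
    return [(rel, head, "*"), (rel, "*", mod)]

def strip_relation(dep):
    # norel: ignore the relation label and keep only the (head, modifier) pair
    rel, head, mod = dep
    return (head, mod)

# ("subject", "resign", "john") ->
#   pm:    [("subject", "resign", "*"), ("subject", "*", "john")]
#   norel: ("resign", "john")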
In MT evaluation, the dependency-based method can alleviate parser 1http://wordnet.princeton.edu/ 193 noise by comparing n-best parses for the candidate and the reference (Owczarzak et al., 2007), but this is not an efficient solution for comparing multisentence summaries. We have therefore attempted to at least partially counteract this issue by removing relation labels from the dependencies (i.e. producing dependencies of the form (resign, John) instead of subject(resign, John)), which did provide some improvement (see results tagged with norel). Finally, we experimented with a predicate-only version of the evaluation, where only the predicate dependencies participate in the comparison, excluding dependencies that provide purely grammatical information such as person, tense, or number (tagged in the results table as pred). This move proved beneficial only in the case of system summaries, perhaps by decreasing the number of trivial matches, but decreased the method’s correlation for model summaries, where such detailed information might be necessary to assess the degree of similarity between two human summaries. 5 Experimental results The first question we have to ask is: which of the manual evaluation categories do we want our metric to imitate? It is unlikely that a single automatic measure will be able to correctly reflect both Readability and Content Responsiveness, as form and content are separate qualities and need different measures. Content seems to be the more important aspect, especially given that Readability can be partially derived from Responsiveness (a summary high in content cannot be very low in readability, although some very readable summaries can have little relevant content). Content Responsiveness was provided in DUC 2007 data, but not in TAC 2008, where the extrinsic Pyramid measure was used to evaluate content. It is, in fact, preferable to compare our metric against the Pyramid score rather than Content Responsiveness, because both the Pyramid and our method aim to measure the degree of similarity between a candidate and a model, whereas Content Responsiveness is a direct assessment of whether the summary’s content is adequate given a topic and a source text. The Pyramid is, at the same time, a costly manual evaluation method, so an automatic metric that successfully emulates it would be a useful replacement. Another question is whether we focus on system-level or summary-level evaluation. The correlation values at the summary-level are generally much lower than on the system-level, which means the metrics are better at evaluating system performance than the quality of individual summaries. System-level evaluations are essential to shared summarization tasks; summary-level assessment might be useful to developers who want to test the effect of particular improvements in their system. Of course, the ideal evaluation metric would show high correlations with human judgment on both levels. We used the data from the TAC 2008 and DUC 2007 Summarization tracks. The first set comprised 58 system submissions and 4 humanproduced model summaries for each of the 96 subtopics (there were 48 topics, each of which required two summaries: a main and an update summary), as well as human-produced Overall Responsiveness and Pyramid scores for each summary. The second set included 32 system submissions and 4 human models for each of the 45 topics. 
For fair comparison of models and systems, we used jackknifing: while each model was evaluated against the remaining three models, each system summary was evaluated four times, each time against a different set of three models, and the four scores were averaged. 5.1 System-level correlations Table 1 presents system-level Pearson’s correlations between the scores provided by our dependency-based metric DEPEVAL(summ), as well as the automatic metrics ROUGE-2, ROUGE-SU4, and BE-HM used in the TAC evaluation, and the manual Pyramid scores, which measured the content quality of the systems. It also includes correlations with the manual Overall Responsiveness score, which reflected both content and linguistic quality. Table 3 shows the correlations with Content Responsiveness for DUC 2007 data for ROUGE, BE, and those few select versions of DEPEVAL(summ) which achieve optimal results on TAC 2008 data (for a more detailed discussion of the selection see Section 6). The correlations are listed for the following versions of our method: pm - partial matching for dependencies; wn - WordNet; pred - matching predicate-only dependencies; norel - ignoring dependency relation label; one - counting a match only once irrespective of how many instances of 194 TAC 2008 Pyramid Overall Responsiveness Metric models systems models systems DEPEVAL(summ): Variations base 0.653 0.931 0.883 0.862 pm 0.690 0.811 0.943 0.740 wn 0.687 0.929 0.888 0.860 pred 0.415 0.946 0.706 0.909 norel 0.676 0.929 0.880 0.861 one 0.585 0.958* 0.858 0.900 DEPEVAL(summ): Combinations pm wn 0.694 0.903 0.952* 0.839 pm pred 0.534 0.880 0.898 0.831 pm norel 0.722 0.907 0.936 0.835 pm one 0.611 0.950 0.876 0.895 wn pred 0.374 0.946 0.716 0.912 wn norel 0.405 0.941 0.752 0.905 wn one 0.611 0.952 0.856 0.897 pred norel 0.415 0.945 0.735 0.905 pred one 0.415 0.953 0.721 0.921* norel one 0.600 0.958* 0.863 0.900 pm wn pred 0.527 0.870 0.905 0.821 pm wn norel 0.738 0.897 0.931 0.826 pm wn one 0.634 0.936 0.887 0.881 pm pred norel 0.642 0.876 0.946 0.815 pm pred one 0.504 0.948 0.817 0.907 pm norel one 0.725 0.941 0.905 0.880 wn pred norel 0.433 0.944 0.764 0.906 wn pred one 0.385 0.950 0.722 0.919 wn norel one 0.632 0.954 0.872 0.896 pred norel one 0.452 0.955 0.756 0.919 pm wn pred norel 0.643 0.861 0.940 0.800 pm wn pred one 0.486 0.932 0.809 0.890 pm pred norel one 0.711 0.939 0.881 0.891 pm wn norel one 0.743* 0.930 0.902 0.870 wn pred norel one 0.467 0.950 0.767 0.918 pm wn pred norel one 0.712 0.927 0.887 0.880 Other metrics ROUGE-2 0.277 0.946 0.725 0.894 ROUGE-SU4 0.457 0.928 0.866 0.874 BE-HM 0.423 0.949 0.656 0.911 Table 1: System-level Pearson’s correlation between automatic and manual evaluation metrics for TAC 2008 data. a particular dependency are present in the candidate and reference. For each of the metrics, including ROUGE and BE, we present the correlations for recall. The highest result in each category is marked by an asterisk. The background gradient indicates whether DEPEVAL(summ) correlation is higher than all three competitors ROUGE2, ROUGE-SU4, and BE (darkest grey), two of the three (medium grey), one of the three (light grey), or none (white). The 95% confidence intervals are not included here for reasons of space, but their comparison suggests that none of the system-level differences in correlation levels are large enough to be significant. 
This is because the intervals themselves are very wide, due to relatively small number of summarizers (58 automatic and 8 human for TAC; 32 automatic and 10 human for DUC) involved in the comparison. 5.2 Summary-level correlations Tables 2 and 4 present the same correlations, but this time on the level of individual summaries. As before, the highest level in each category is marked by an asterisk. Contrary to system-level, here some correlations obtained by DEPEVAL(summ) are significantly higher than those achieved by the three competing metrics, ROUGE-2, ROUGE-SU4, and BE-HM, as determined by the confidence intervals. The letters in parenthesis indicate that a given DEPEVAL(summ) variant is significantly better at correlating with human judgment than ROUGE-2 (= R2), ROUGE-SU4 (= R4), or BE-HM (= B). 6 Discussion and future work It is obvious that none of the versions performs best across the board; their different characteristics might render them better suited either for models or for automatic systems, but not for both at the same time. This can be explained if we understand that evaluating human gold standard summaries and automatically generated summaries of poor-to-medium quality is, in a way, not the same task. Given that human models are by default well-formed and relevant, relaxing any restraints on matching between them (i.e. allowing partial dependencies, removing the relation label, or adding synonyms) serves, in effect, to accept as correct either (1) the same conceptual information expressed in different ways (where the difference might be real or introduced by faulty parsing), or (2) other information, yet still relevant to the topic. Accepting information of the former type as correct will ratchet up the score for the summary and the correlation with the summary’s Pyramid score, which measures identity of information across summaries. Accepting the first and second type of information will raise the score and the correlation with Responsiveness, which measures relevance of information to the particular topic. However, in evaluating system summaries such relaxation of matching constraints will result in accepting irrelevant and ungrammatical information as correct, driving up the DEPEVAL(summ) score, but lowering its correlation with both Pyramid and Responsiveness. In simple words, it is okay to give a model summary “the benefit of doubt”, and accept its content as correct even if it is not matching other model summaries exactly, but the same strategy applied to a system summary might cause mass over-estimation of the summary’s quality. 
This substantial difference in the nature of human-generated models and system-produced summaries has impact on all automatic means of evaluation, as long as we are limited to methods that operate on more shallow levels than a full 195 TAC 2008 Pyramid Overall Responsiveness Metric models systems models systems DEPEVAL(summ): Variations base 0.436 (B) 0.595 (R2,R4,B) 0.186 0.373 (R2,B) pm 0.467 (B) 0.584 (R2,B) 0.183 0.368 (B) wn 0.448 (B) 0.592 (R2,B) 0.192 0.376 (R2,R4,B) pred 0.344 0.543 (B) 0.170 0.327 norel 0.437 (B) 0.596* (R2,R4,B) 0.186 0.373 (R2,B) one 0.396 0.587 (R2,B) 0.171 0.376 (R2,R4,B) DEPEVAL(summ): Combinations pm wn 0.474 (B) 0.577 (R2,B) 0.194* 0.371 (R2,B) pm pred 0.407 0.537 (B) 0.153 0.337 pm norel 0.483 (R2,B) 0.584 (R2,B) 0.168 0.362 pm one 0.402 0.577 (R2,B) 0.167 0.384 (R2,R4,B) wn pred 0.352 0.537 (B) 0.182 0.328 wn norel 0.364 0.541 (B) 0.187 0.329 wn one 0.411 0.581 (R2,B) 0.182 0.384 (R2,R4,B) pred norel 0.351 0.547 (B) 0.169 0.327 pred one 0.325 0.542 (B) 0.171 0.347 norel one 0.403 0.589 (R2,B) 0.176 0.377 (R2,R4,B) pm wn pred 0.415 0.526 (B) 0.167 0.337 pm wn norel 0.488* (R2,R4,B) 0.576 (R2,B) 0.168 0.366 (B) pm wn one 0.417 0.563 (B) 0.179 0.389* (R2,R4.B) pm pred norel 0.433 (B) 0.538 (B) 0.124 0.333 pm pred one 0.357 0.545 (B) 0.151 0.381 (R2,R4,B) pm norel one 0.437 (B) 0.567 (R2,B) 0.174 0.369 (B) wn pred norel 0.353 0.541 (B) 0.180 0.324 wn pred one 0.328 0.535 (B) 0.179 0.346 wn norel one 0.416 0.584 (R2,B) 0.185 0.385 (R2,R4,B) pred norel one 0.336 0.549 (B) 0.169 0.351 pm wn pred norel 0.428 (B) 0.524 (B) 0.120 0.334 pm wn pred one 0.363 0.525 (B) 0.164 0.380 (R2,R4,B) pm pred norel one 0.420 (B) 0.533 (B) 0.154 0.375 (R2,R4,B) pm wn norel one 0.452 (B) 0.558 (B) 0.179 0.376 (R2,R4,B) wn pred norel one 0.338 0.544 (B) 0.178 0.349 pm wn pred norel one 0.427 (B) 0.522 (B) 0.153 0.379 (R2,R4,B) Other metrics ROUGE-2 0.307 0.527 0.098 0.323 ROUGE-SU4 0.318 0.557 0.153 0.327 BE-HM 0.239 0.456 0.135 0.317 Table 2: Summary-level Pearson’s correlation between automatic and manual evaluation metrics for TAC 2008 data. DUC 2007 Content Responsiveness Metric models systems DEPEVAL(summ) 0.7341 0.8429 DEPEVAL(summ) wn 0.7355 0.8354 DEPEVAL(summ) norel 0.7394 0.8277 DEPEVAL(summ) one 0.7507 0.8634 ROUGE-2 0.4077 0.8772 ROUGE-SU4 0.2533 0.8297 BE-HM 0.5471 0.8608 Table 3: System-level Pearson’s correlation between automatic metrics and Content Responsiveness for DUC 2007 data. For model summaries, only DEPEVAL correlations are significant (the 95% confidence interval does not include zero). None of the differences between metrics are significant at the 95% level. DUC 2007 Content Responsiveness Metric models systems DEPEVAL(summ) 0.2059 0.4150 DEPEVAL(summ) wn 0.2081 0.4178 DEPEVAL(summ) norel 0.2119 0.4185 DEPEVAL(summ) one 0.1999 0.4101 ROUGE-2 0.1501 0.3875 ROUGE-SU4 0.1397 0.4264 BE-HM 0.1330 0.3722 Table 4: Summary-level Pearson’s correlation between automatic metrics and Content Responsiveness for DUC 2007 data. ROUGE-SU4 and BE correlations for model summaries are not statistically significant. None of the differences between metrics are significant at the 95% level. semantic and pragmatic analysis against humanlevel world knowledge. The problem is twofold: first, our automatic metrics measure identity rather than quality. 
Similarity of content between a candidate summary and one or more references is acting as a proxy measure for the quality of the candidate summary; yet, we cannot forget that the relation between these two features is not purely linear. A candidate highly similar to the reference will be, necessarily, of good quality, but a candidate which is dissimilar from a reference is not necessarily of low quality (vide the case of parallel model summaries, which almost always contain some non-overlapping information). The second problem is the extent to which our metrics are able to distinguish content through the veil of differing forms. Synonyms, paraphrases, or pragmatic features such as the choice of topic and focus render simple string-matching techniques ineffective, especially in the area of summarization where the evaluation happens on a supra-sentential level. As a result, then, a lot of effort was put into developing metrics that can identify similar content despite non-similar form, which naturally led to the application of linguistically-oriented approaches that look beyond surface word order. Essentially, though, we are using imperfect measures of similarity as an imperfect stand-in for quality, and the accumulated noise often causes a divergence in our metrics’ performance with model and system summaries. Much like the inverse relation of precision and recall, changes and additions that improve a metric’s correlation with human scores for model summaries often weaken the correlation for system summaries, and vice versa. Admittedly, we could just ignore this problem and focus on increasing correlations for automatic summaries only; after all, the whole point of creating evaluation metrics is to score and rank the output of systems. Such a perspective can be rather short-sighted, though, given that we expect continuous improvement from the summarization systems to, ideally, human levels, so the same issues which now prevent high correlations for models will start surfacing in evaluation of systemproduced summaries as well. Using metrics that only perform reliably for low-quality summaries might prevent us from noticing when those summaries become better. Our goal should be, therefore, to develop a metric which obtains high correlations in both categories, with the assumption that such a metric will be more reliable in evaluating summaries of varying quality. 196 Since there is no single winner among all 32 variants of DEPEVAL(summ) on TAC 2008 data, we must decide which of the categories is most important to a successful automatic evaluation metric. Correlations with Overall Responsiveness are in general lower than those with the Pyramid score (except in the case of system-level models). This makes sense, if we rememeber that Overall Responsiveness judges content as well as linguistic quality, which are two different dimensions and so a single automatic metric is unlikely to reflect it well, and that it judges content in terms of its relevance to topic, which is also beyond the reach of contemporary metrics which can at most judge content similarity to a model. This means that the Pyramid score makes for a more relevant metric to emulate. The last dilemma is whether we choose to focus on system- or summary-level correlations. This ties in with the purpose which the evaluation metric should serve. 
In comparisons of multiple systems, such as TAC 2008, the value lies in the correct ordering of these systems, while summary-level assessment can give us important feedback and insight during the system development stage. The final choice among all DEPEVAL(summ) versions hinges on all of these factors: we should prefer a variant which correlates highly with the Pyramid score rather than with Responsiveness, which minimizes the gap between model and automatic peer correlations while retaining relatively high values for both, and which fulfills these requirements similarly well at both the summary and system levels. Three such variants are the baseline DEPEVAL(summ), the WordNet version DEPEVAL(summ) wn, and the version with removed relation labels, DEPEVAL(summ) norel. Both the baseline and norel versions achieve a significant improvement over ROUGE and BE in correlations with the Pyramid score for automatic summaries, and over BE for models, at the summary level. In fact, in almost all categories they achieve higher correlations than ROUGE and BE. The only exceptions are the correlations with Pyramid for systems at the system level, but there the results are close and none of the differences in that category are significant. To balance this exception, DEPEVAL(summ) achieves much higher correlations with the Pyramid scores for model summaries than either ROUGE or BE at the system level.

In order to see whether the DEPEVAL(summ) advantage holds for other data, we examined the best-performing versions (baseline, wn, norel, as well as one, which is the closest counterpart to label-free BE-HM) on data from DUC 2007. Because only a portion of the DUC 2007 data was evaluated with Pyramid, we chose to look instead at the Content Responsiveness scores. As can be seen in Tables 3 and 4, the same patterns hold: a decided advantage over ROUGE/BE when it comes to model summaries (especially at the system level), and comparable results for automatic summaries. Since the DUC 2007 data consisted of fewer summaries (1,620 vs. 5,952 at TAC) and fewer submissions (32 vs. 57 at TAC), some results did not reach statistical significance. In Table 3, in the models category, only the DEPEVAL(summ) correlations are significant. In Table 4, in the models category, only the DEPEVAL(summ) and ROUGE-2 correlations are significant. Note also that these correlations with Content Responsiveness are generally lower than those with Pyramid in the previous tables, but in the case of the summary-level comparison higher than the correlations with Overall Responsiveness. This is to be expected given our earlier discussion of the differences in what these metrics measure.

As mentioned before, dependency-based evaluation can be approached from different angles, leading to differences in performance. This is exemplified in our experiment, where DEPEVAL(summ) outperforms BE, even though both metrics rest on the same general idea. The new implementation of BE presented at the TAC 2008 workshop (Tratz and Hovy, 2008) introduces transformations for dependencies in order to increase the number of matches among elements that are semantically similar yet differ in terms of syntactic structure and/or lexical choices, and adds WordNet for synonym matching. Its core modules were updated as well: Minipar was replaced with the Charniak-Johnson reranking parser (Charniak and Johnson, 2005), Named Entity identification was added, and the BE extraction is conducted using a set of Tregex rules (Levy and Andrew, 2006).
Since our method, presented in this paper, also uses the reranking parser, as well as WordNet, it would be interesting to compare both methods directly in terms of the performance of the dependency extraction procedure.

References
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL 2005 Workshop on Intrinsic and Extrinsic Evaluation Measures for MT and/or Summarization, pages 65-73, Ann Arbor, MI, USA.
Joan Bresnan. 2001. Lexical-Functional Syntax. Blackwell, Oxford.
Aoife Cahill, Michael Burke, Ruth O'Donovan, Josef van Genabith, and Andy Way. 2004. Long-distance dependency resolution in automatically acquired wide-coverage PCFG-based LFG approximations. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 320-327, Barcelona, Spain.
Aoife Cahill, Michael Burke, Ruth O'Donovan, Stefan Riezler, Josef van Genabith, and Andy Way. 2008. Wide-coverage deep statistical parsing using automatic dependency structure annotation. Computational Linguistics, 34(1):81-124.
Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In ACL 2005: Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 173-180, Morristown, NJ, USA. Association for Computational Linguistics.
Eugene Charniak. 2000. A maximum entropy inspired parser. In Proceedings of the 1st Annual Meeting of the North American Chapter of the Association for Computational Linguistics, pages 132-139, Seattle, WA, USA.
Hoa Trang Dang and Karolina Owczarzak. 2008. Overview of the TAC 2008 summarization track: Update task. To appear in: Proceedings of the 1st Text Analysis Conference (TAC).
Eduard Hovy, Chin-Yew Lin, and Liang Zhou. 2005. Evaluating DUC 2005 using Basic Elements. In Proceedings of the 5th Document Understanding Conference (DUC).
Ronald M. Kaplan and Joan Bresnan. 1982. The Mental Representation of Grammatical Relations, chapter Lexical-Functional Grammar: A formal system for grammatical representation. MIT Press, Cambridge, MA, USA.
Roger Levy and Galen Andrew. 2006. Tregex and Tsurgeon: Tools for querying and manipulating tree data structures. In Proceedings of the 5th International Conference on Language Resources and Evaluation.
Dekang Lin. 1995. A dependency-based method for evaluating broad-coverage parsers. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 1420-1427.
Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of the ACL 2004 Workshop: Text Summarization Branches Out, pages 74-81.
Karolina Owczarzak, Josef van Genabith, and Andy Way. 2007. Evaluating Machine Translation with LFG dependencies. Machine Translation, 21(2):95-119.
Karolina Owczarzak. 2008. A novel dependency-based evaluation metric for Machine Translation. Ph.D. thesis, Dublin City University.
Rebecca J. Passonneau, Ani Nenkova, Kathleen McKeown, and Sergey Sigelman. 2005. Applying the Pyramid method in DUC 2005. In Proceedings of the 5th Document Understanding Conference (DUC).
Jason Rennie. 2000. WordNet::QueryData: a Perl module for accessing the WordNet database. http://people.csail.mit.edu/jrennie/WordNet.
Stephen Tratz and Eduard Hovy. 2008. Summarization evaluation using transformed Basic Elements. In Proceedings of the 1st Text Analysis Conference (TAC).
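As a side note on the numbers reported in Tables 2-4, summary-level and system-level Pearson correlations of this kind can be reproduced from per-summary scores in a few lines. The sketch below is my own illustration, not code from the paper; the dictionary-keyed data layout is an assumption, scipy's `pearsonr` is used as the correlation routine, and the aggregation follows the conventional reading (correlate raw per-summary scores for the summary level, correlate per-system averages for the system level), which the paper does not spell out.

```python
# Minimal sketch: summary-level vs. system-level Pearson correlation between an
# automatic metric and a manual score (e.g., Pyramid). Both inputs map
# (system_id, topic_id) -> score. Layout and helper names are illustrative.
from collections import defaultdict
from scipy.stats import pearsonr


def summary_level_correlation(auto, manual):
    """Correlate the raw per-summary scores of the two metrics."""
    keys = sorted(set(auto) & set(manual))
    return pearsonr([auto[k] for k in keys], [manual[k] for k in keys])[0]


def system_level_correlation(auto, manual):
    """Average each system's scores over topics, then correlate the averages."""
    def per_system(scores):
        sums, counts = defaultdict(float), defaultdict(int)
        for (system, _topic), value in scores.items():
            sums[system] += value
            counts[system] += 1
        return {s: sums[s] / counts[s] for s in sums}

    a, m = per_system(auto), per_system(manual)
    systems = sorted(set(a) & set(m))
    return pearsonr([a[s] for s in systems], [m[s] for s in systems])[0]
```

A divergence between the two numbers for the same metric, as discussed above, simply reflects that ordering individual summaries well and ordering whole systems well are different tasks.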
2009
22
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 199-207, Suntec, Singapore, 2-7 August 2009. ©2009 ACL and AFNLP

Summarizing Definition from Wikipedia
Shiren Ye and Tat-Seng Chua and Jie Lu
Lab of Media Search, National University of Singapore
{yesr|chuats|luj}@comp.nus.edu.sg

Abstract
Wikipedia provides a wealth of knowledge, where the first sentence, infobox (and relevant sentences), and even the entire document of a wiki article could be considered as diverse versions of summaries (definitions) of the target topic. We explore how to generate a series of summaries with various lengths based on them. To obtain more reliable associations between sentences, we introduce wiki concepts according to the internal links in Wikipedia. In addition, we develop an extended document concept lattice model to combine wiki concepts and non-textual features such as the outline and infobox. The model can concatenate representative sentences from non-overlapping salient local topics for summary generation. We test our model on our annotated wiki articles, whose topics come from the TREC-QA 2004-2006 evaluations. The results show that the model is effective in summarization and definition QA.

1 Introduction
Nowadays, 'ask Wikipedia' has become as popular as 'Google it' during Internet surfing, as Wikipedia is able to provide reliable information about the concept (entity) that the user wants. As the largest online encyclopedia, Wikipedia assembles immense human knowledge from thousands of volunteer editors, and has made significant contributions to NLP problems such as semantic relatedness, word sense disambiguation and question answering (QA).

For a given definition query, many search engines (e.g., specified by 'define:' in Google) often place the first sentence of the corresponding wiki article at the top of the returned list. [Footnote 1: For readability, we follow the upper/lower case rule on the web (say, 'web pages' and 'on the Web'), and utilize 'wiki(pedia) articles' and 'on (the) Wikipedia', the latter referring to the entire Wikipedia.] The use of one-sentence snippets provides a brief and concise description of the query. However, users often need more information beyond such a one-sentence definition, while feeling that the corresponding wiki article is too long. Thus, there is a strong demand to summarize wiki articles as definitions with various lengths to suit different user needs.

The initial motivation of this investigation is to find better definition answers for the TREC-QA task using Wikipedia (Kor and Chua, 2007). According to past results on TREC-QA (Voorhees, 2004; Voorhees and Dang, 2005), definition queries are usually recognized as being more difficult than factoid and list queries. Wikipedia could help to improve the quality of answer finding and even provide the answers directly. Its results are better than other external resources such as WordNet, Gazetteers and Google's define operator, especially for definition QA (Lita et al., 2004).

Different from the free text used in QA and summarization, a wiki article usually contains valuable information like the infobox and wiki links. The infobox tabulates the key properties about the target, such as birth place/date and spouse for a person, as well as type, founder and products for a company. The infobox, as a form of thumbnail biography, can be considered as a mini version of a wiki article's summary. In addition, the relevant concepts existing in a wiki article usually refer to other wiki pages by wiki internal links, which will form a close set of reference relations.
The current Wikipedia recursively defines over 2 million concepts (in English) via wiki links. Most of these concepts are multi-word terms, whereas WordNet has only some 50,000 multi-word terms. Any term could appear in the definition of a concept if necessary, while the total vocabulary existing in WordNet's glossary definitions is less than 2,000. Wikipedia thus provides explicit semantics for numerous concepts. These special knowledge representations will provide additional information for analysis and summarization. We thus need to extend existing summarization technologies to take advantage of the knowledge representations in Wikipedia.

The goal of this investigation is to explore summaries with different lengths in Wikipedia. Our main contribution lies in developing a summarization method that can (i) explore more reliable associations between passages (sentences) in the huge feature space represented by wiki concepts; and (ii) effectively combine textual and non-textual features such as infobox and outline in Wikipedia to generate summaries as definitions. The rest of this paper is organized as follows: In the next section, we discuss the background of summarization using both textual and structural features. Section 3 presents the extended document concept lattice model for summarizing wiki articles. Section 4 describes corpus construction and experiments, and Section 5 concludes the paper.

2 Background
Besides some heuristic rules such as sentence position and cue words, typical summarization systems measure the associations (links) between sentences by term repetitions (e.g., LexRank (Erkan and Radev, 2004)). However, sophisticated authors usually utilize synonyms and paraphrases in various forms rather than simple term repetitions. Furnas et al. (1987) reported that two people choose the same main key word for a single well-known object less than 20% of the time. A case study by Ye et al. (2007) showed that 61 different words existing in 8 relevant sentences could be mapped into 16 distinctive concepts by grouping terms with close semantics (such as [British, Britain, UK] and [war, fought, conflict, military]). However, most existing summarization systems only consider the repeated words between sentences, where latent associations in terms of inter-word synonyms and paraphrases are ignored. The incomplete data likely lead to unreliable sentence ranking and selection for summary generation.

To recover the hidden associations between sentences, Ye et al. (2007) compute the semantic similarity using WordNet. Term pairs with semantic similarity higher than a predefined threshold are grouped together. They demonstrated that collecting more links between sentences leads to better summarization as measured by ROUGE scores, and such systems were rated among the top systems in DUC (Document Understanding Conference) in 2005 and 2006. This WordNet-based approach has several shortcomings due to the problems of data deficiency and word sense ambiguity, etc.

Wikipedia already defines millions of multi-word concepts in separate articles. Its set of definitions is much larger than that of WordNet. For instance, more than 20 kinds of songs and movies called Butterfly, such as Butterfly (Kumi Koda song), Butterfly (1999 film) and Butterfly (2004 film), are listed in Wikipedia.
When people say something about butterfly in Wikipedia, usually a link is assigned to refer to a particular butterfly. Following this link, we can acquire its explicit and exact semantics (Gabrilovich and Markovitch, 2007), especially for multi-word concepts. Phrases are more important than individual words for document retrieval (Liu et al., 2004). We expect that wiki concepts are an appropriate text representation for summarization.

Generally, wiki articles have little redundancy in their contents, as they follow an encyclopedic style. Their authors tend to use wiki links and 'See Also' links to refer to the involved concepts rather than expand these concepts. In general, the guideline for composing wiki articles is to avoid over-long and over-complicated styles. Thus, the strategy of splitting such content into a series of articles is recommended, so wiki articles are usually not too long and contain a limited number of sentences. These factors lead to fewer links between sentences within a wiki article, as compared to normal documents. However, the principle of typical extractive summarization approaches is that the sentences whose contents are repeatedly emphasized by the authors are most important and should be included (Silber and McCoy, 2002). Therefore, it is challenging to summarize wiki articles due to the low redundancy (and few links) between sentences. To overcome this problem, we seek (i) more reliable links between passages, (ii) an appropriate weighting metric to emphasize the salient concepts about the topic, and (iii) additional guidance on utilizing non-textual features such as outline and infobox. Thus, we develop wiki concepts to replace the 'bag-of-words' approach for better link measurements between sentences, and extend an existing summarization model for free text to integrate structural information.

By analyzing the rhetorical discourse structure of aim, background, solution, etc., or citation context, we can obtain appropriate abstracts and the most influential contents from scientific articles (Teufel and Moens, 2002; Mei and Zhai, 2008). Similarly, we believe that structural information such as infobox and outline is able to improve summarization as well. The outline of a wiki article, built from inner links, renders the structure of its definition. In addition, the infobox could be considered as a topic signature (Lin and Hovy, 2000) or keywords about the topic. Since the keywords and summary of a document can be mutually boosted (Wan et al., 2007), the infobox is capable of guiding summarization.

When Ahn (2004) and Kor (2007) utilize Wikipedia for TREC-QA definition, they treat Wikipedia as the Web and perform normal search on it. High-frequency terms in the query snippets returned from the wiki index are used to extend the query and rank (re-rank) passages. These snippets usually come from multiple wiki articles. Here, useful information may lie beyond these snippets, while some of the retrieved terms are possibly irrelevant to the topic. On the contrary, our approach concentrates on the wiki article having the exact topic only. We assume that every sentence in the article is used to define the query topic, no matter whether it contains the term(s) of the topic or not. In order to extract some salient sentences from the article as definition summaries, we will build a summarization model that describes the relations between the sentences, where both textual and structural features are considered.
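To make the WordNet-based grouping mentioned in Section 2 concrete (term pairs whose semantic similarity exceeds a threshold are merged into one concept, as in the [British, Britain, UK] example), here is a minimal sketch. It is my own illustration rather than the authors' code: it uses NLTK's WordNet interface, restricts the comparison to noun senses for simplicity, and the 0.8 Wu-Palmer threshold is a placeholder since the paper does not give the actual value.

```python
# Minimal sketch of WordNet-based term grouping (illustrative, not the paper's code).
from nltk.corpus import wordnet as wn


def noun_similarity(term_a, term_b):
    """Best Wu-Palmer similarity over all noun sense pairs, or 0.0 if none."""
    best = 0.0
    for syn_a in wn.synsets(term_a, pos=wn.NOUN):
        for syn_b in wn.synsets(term_b, pos=wn.NOUN):
            best = max(best, syn_a.wup_similarity(syn_b) or 0.0)
    return best


def group_terms(terms, threshold=0.8):
    """Greedy single-link grouping of terms whose similarity reaches the threshold."""
    groups = []
    for term in terms:
        target = None
        for group in groups:
            if any(noun_similarity(term, member) >= threshold for member in group):
                target = group
                break
        if target is None:
            groups.append([term])
        else:
            target.append(term)
    return groups


# With a suitable threshold, 'money' and 'cash' land in one group, 'butterfly' in another.
print(group_terms(["money", "cash", "butterfly"]))
```

As the surrounding discussion notes, this kind of grouping suffers from sense ambiguity and sparse multi-word coverage, which is exactly what the wiki-concept representation of the next section is meant to address.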
3 Our Approach
3.1 Wiki Concepts
In this subsection, we address how to find reasonable and reliable links between sentences using wiki concepts. Consider the sentence 'After graduating from Boston University in 1988, she went to work at a Calvin Klein store in Boston.' from the wiki article 'Carolyn Bessette Kennedy'. [Footnote 2: All sample sentences in this paper come from this article if not specified.] We can find 11 distinctive terms, such as after, graduate, Boston, University, 1988, go, work, Calvin, Klein, store, Boston, if stop words are ignored. However, multi-word terms such as Boston University and Calvin Klein are linked to the corresponding wiki articles, where their definitions are given. Clearly, considering the anchor texts as two wiki concepts rather than four words is more reasonable. Their granularity is closer to the semantic content units in the summarization evaluation method Pyramid (Nenkova et al., 2007) and to nuggets in TREC-QA. When the text is represented by wiki concepts, whose granularity is similar to the evaluation units, it is possibly easier to detect the matching output using a model. Here,
- Two separate words, Calvin and Klein, are meaningless and should be discarded; otherwise, spurious links between sentences are likely to occur.
- Boston University and Boston are processed separately, as they are different named entities. No link between them is appropriate. [Footnote 3: Consider the new pseudo sentence 'After graduating from Stanford in 1988, she went to work ... in Boston.' We do not need to assign a link between Stanford and Boston either.]
- Terms such as 'John F. Kennedy, Jr.' and 'John F. Kennedy' will be considered as two diverse wiki concepts, regardless of how many repeated words there are.
- Different anchor texts, such as U.S.A. and United States of America, are recognized as an identical concept, since they refer to the same wiki article.
- Two concepts, such as money and cash, will be merged into an identical concept when their semantics are similar.

In wiki articles, the first occurrence of a wiki concept is tagged by a wiki link, but in most cases there is no such link on its subsequent occurrences in the remaining parts of the text. To alleviate this problem, a set of heuristic rules is proposed to unify the subsequent occurrences of concepts in normal text with previous wiki concepts in the anchor text. These heuristic rules include: (i) edit distance between a linked wiki concept and candidates in normal text is larger than a predefined threshold; and (ii) partially overlapping words beginning with a capital letter, etc.

After filtering out wiki concepts, the words remaining in wiki articles could be grouped into two sets: close-class terms like pronouns and prepositions, as well as open-class terms like nouns and verbs. For example, in the sentence 'She died at age 33, along with her husband and sister', the open-class terms include die, age, 33, husband and sister. Even though most open-class terms are defined in Wikipedia as well, the authors of the article do not consider it necessary to present their references using wiki links. Hence, we need to extend wiki concepts by concatenating them with these open-class terms to form an extended vector. In addition, we ignore all close-class terms, since we cannot find an efficient method to infer reliable links across them. As a result, texts are represented as vectors of wiki concepts.
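A rough sketch of the wiki-concept construction just described is given below. It is an illustration under stated assumptions rather than the authors' implementation: linked anchor texts are assumed to be available from the article markup, the edit-distance unification uses difflib's similarity ratio with an arbitrary 0.85 cutoff, the "shared capitalized word" rule is simplified, and a stop-word list stands in for the close-class terms the paper discards.

```python
# Minimal, illustrative sketch of wiki-concept extraction for one article.
import re
from difflib import SequenceMatcher


def similar(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def extract_wiki_concepts(text, anchors, stopwords, ratio=0.85):
    """anchors: {anchor text: target article title} taken from the article's wiki links.
    Returns a list of concept identifiers; duplicates encode concept frequency."""
    concepts = []
    # 1. Anchor texts become wiki concepts, identified by the article they link to,
    #    so 'U.S.A.' and 'United States of America' map to the same concept.
    for anchor, target in anchors.items():
        concepts.append(target)
        text = text.replace(anchor, " ")   # remove so anchor words are not double counted
    # 2. Unify later, unlinked mentions with an earlier linked concept when they are
    #    close in edit distance or share a capitalized word with the anchor text;
    #    remaining open-class words become single-word concepts, stop words are dropped.
    for token in re.findall(r"[A-Za-z][A-Za-z0-9'.-]*", text):
        if token.lower() in stopwords:
            continue
        match = None
        for anchor, target in anchors.items():
            shares_cap = token[0].isupper() and token in anchor.split()
            if similar(token, anchor) >= ratio or shares_cap:
                match = target
                break
        concepts.append(match if match else token.lower())
    return concepts
```

The concept-merging step for semantically close concepts (money/cash) is omitted here; the paper handles it with precomputed article-vector similarities, as described next.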
Once we introduce wiki concepts to replace the typical 'bag-of-words' approach, the dimensionality of the concept space reaches six orders of magnitude. We cannot ignore the data sparseness issue and the computation cost when the concept space is so huge. Actually, for a wiki article and a set of relevant articles, the involved concepts are limited, and we only need to explore them in a small sub-space. For instance, the 59 articles about the Kennedy family in Wikipedia have only 10,399 distinctive wiki concepts, of which 5,157 wiki concepts occur twice or more. Computing the overlap among them is feasible.

Furthermore, we need to merge the wiki concepts with identical or close semantics (namely, building links between these synonyms and paraphrases). We measure the semantic similarity between two concepts by using the cosine distance between their wiki articles, which are represented as vectors of wiki concepts as well. For computational efficiency, we calculate the semantic similarities between all promising concept pairs beforehand, and then retrieve the value from a hash table directly. We spent about 12.5 days of CPU time on this semantic preprocessing. Details are available in our technical report (Lu et al., 2008).

Following the principle of TFIDF, we define the weighting metric for the vector represented by wiki concepts using the entire Wikipedia as the observation collection. We define the CFIDF weight of wiki concept i in article j as:

w_{i,j} = cf_{i,j} \cdot idf_i = \frac{n_{i,j}}{\sum_k n_{k,j}} \cdot \log \frac{|D|}{|\{d_j : t_i \in d_j\}|}    (1)

where cf_{i,j} is the frequency of concept i in article j, idf_i is the inverse document frequency of concept i in Wikipedia, and |D| is the number of articles in Wikipedia. Here, rare wiki concepts contribute more (a small code sketch of this weighting appears below). In brief, we represent articles in terms of wiki concepts using the steps below.
1. Extract the wiki concepts marked by wiki links in context.
2. Detect the remaining open-class terms as wiki concepts as well.
3. Merge concepts whose semantic similarity is larger than a predefined threshold (0.35 in our experiments) into the one with the largest idf.
4. Weight all concepts according to Eqn (1).

3.2 Document Concept Lattice Model
Next, we build the document concept lattice (DCL) for articles represented by wiki concepts. To illustrate how the DCL is built, we consider 8 sentences from DUC 2005 Cluster d324e (Ye et al., 2007) as a case study. The 8 sentences, represented by 16 distinctive concepts A-P, are considered as the base nodes 1-8, as shown in Figure 1. Once we group nodes hierarchically by means of the maximal common concepts among base nodes, we obtain the derived nodes 11-41, which form a DCL. A derived node annotates a local topic through a set of shared concepts, and defines a sub concept space that contains the covered base nodes under proper projection. The derived node, accompanied by its base nodes, is apt to interpret a particular argument (or statement) about the involved concepts. Furthermore, one base node among them, coupled with the corresponding sentence, is capable of this interpretation and could represent the other base nodes to some degree. In order to extract a set of sentences that cover the key distinctive local topics (arguments) as much as possible, we need to select a set of important non-overlapping derived nodes.
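The following is a minimal sketch of the CFIDF weighting in Eqn (1), written as a stand-alone function rather than the authors' code: `article_concepts` is the list of wiki concepts produced for one article (duplicates encode frequency), `df` maps each concept to the number of Wikipedia articles containing it, and `num_articles` is |D|. The logarithm base is not specified in the paper, so the natural log used here is an assumption.

```python
# Minimal sketch of CFIDF weighting (Eqn 1); data layout is illustrative.
import math
from collections import Counter


def cfidf_weights(article_concepts, df, num_articles):
    """Return {concept: w_{i,j}} for one article represented as a concept list."""
    counts = Counter(article_concepts)
    total = sum(counts.values())
    weights = {}
    for concept, n in counts.items():
        cf = n / total                                  # n_{i,j} / sum_k n_{k,j}
        idf = math.log(num_articles / df[concept])      # assumes df[concept] >= 1
        weights[concept] = cf * idf
    return weights
```

Used on the output of the concept-extraction step, these weights are what the lattice scores in the next subsection are built on.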
We measure the importance of node N in the DCL of article j in terms of representative power (RP) as:

RP(N) = \sum_{c_i \in N} (|c_i| \cdot w_{i,j}) / \log(|N|)    (2)

where concept c_i in node N is weighted by w_{i,j} according to Eqn (1), and |N| denotes the number of concepts in N (if N is a base node) or the number of distinct concepts in N (if N is a derived node), respectively. Here, |c_i| represents c_i's frequency in N, and log(|N|) reflects N's cost if N is selected (namely, how many concepts are used in N). For example, the 7 concepts in sentence 1 lead to a total |c| of 34 if their weights are all set to 1. Its RP is RP(1) = 34/log(7) = 40.23. Similarly, RP(31) = 6 * 3/log(3) = 37.73. (A code sketch of this score appears at the end of this subsection.)

[Figure 1: A sample of concept lattice]

By selecting a set of non-overlapping derived nodes with maximal RP, we are able to obtain a set of local topics with the highest representativeness and diversity. Next, a representative sentence with maximal RP in each of these derived nodes is chosen to represent the local topics under observation. When the length of the required summary changes, the number of local topics needed is also modified. Consequently, we are able to select sets of appropriate derived nodes at diverse generalization levels, and obtain various versions of summaries containing local topics with appropriate granularities. In the DCL example shown in Figure 1, if we expect a summary with two sentences, we will select the derived nodes 31 and 32 with the highest RP. Nodes 31 and 32 infer sentences 4 and 2, which are concatenated to form a summary. If the summary is increased to three sentences, then the three derived nodes 31, 23 and 33 with maximal RP render representative sentences 4, 5 and 6. Hence, different sets of actual sentences (4+5+6 vs. 4+2) are selected depending on the length of the required summary. The uniqueness of DCL is that the sentences used in a shorter summary may not appear in a longer summary for the same source text. According to the distinctive derived nodes at diverse levels, sentences with different generalization abilities are chosen to generate various summaries.

[Figure 2: Properties in infobox and their support sentences]

3.3 Model of Extended Document Concept Lattice (EDCL)
Different from free text and general web documents, wiki articles contain structural features, such as infoboxes and outlines, which correlate strongly with nuggets in definition TREC-QA. By integrating these structural features, we generate better RP measures for derived topics, which facilitates better priority assignment among local topics.

3.3.1 Outline: Wiki Macro Structure
A long wiki article usually has a hierarchical outline using inner links to organize its contents. For example, the wiki article Cat consists of a set of hierarchical sections under the outline of mouth, legs, metabolism, genetics, etc. This outline provides a hierarchical clustering of sub-topics assigned by its author(s), which implies that selecting sentences from diverse sections of the outline is apt to yield a balanced summary. Actually, DCL could be considered as the composite of many kinds of clusterings (Ye et al., 2007). Importing the clustering from the outline into DCL will be helpful for the generation of a balanced summary.
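Before turning to how the outline and infobox reshape these scores, here is a minimal sketch of the representative-power computation of Eqn (2) referenced above. It is my own illustration: the paper's worked example (RP(1) = 34/log 7 = 40.23) only comes out right with a base-10 logarithm, so that base is assumed here, the guard for single-concept nodes is an addition the paper does not discuss, and the per-concept counts in the examples are made up so that they total the figures quoted in the text.

```python
# Minimal sketch of representative power, Eqn (2); assumes base-10 log.
import math
from collections import Counter


def representative_power(node_concepts, weights):
    """node_concepts: concept occurrences in a base or derived node;
    weights: w_{i,j} per concept, defaulting to 1.0 as in the paper's example."""
    counts = Counter(node_concepts)
    distinct = len(counts)
    if distinct < 2:                 # log10(1) = 0 would divide by zero
        return 0.0
    total = sum(freq * weights.get(c, 1.0) for c, freq in counts.items())
    return total / math.log10(distinct)


# Sentence 1: 7 distinct concepts whose occurrence counts total 34, unit weights.
sentence1 = ["A"] * 5 + ["B"] * 5 + ["C"] * 5 + ["D"] * 5 + ["E"] * 5 + ["F"] * 5 + ["G"] * 4
print(round(representative_power(sentence1, {}), 2))   # 40.23

# Derived node 31: 3 shared concepts, each occurring 6 times across its base nodes.
node31 = ["X", "Y", "Z"] * 6
print(round(representative_power(node31, {}), 2))       # 37.73
```

Selecting the non-overlapping derived nodes with the highest RP, and then the highest-RP sentence inside each, is the core of the summary generation procedure extended in the next subsections.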
We thus incorporate the structure of the outline into DCL as follows: (i) treat section titles as concepts in pseudo derived nodes; (ii) link these pseudo nodes to the base nodes in the corresponding section if they share concepts; and (iii) revise the base nodes' RP in Eqn (2) (see Section 3.3.3).

3.3.2 Infobox: a Mini Version of Summary
The infobox tabulates the key properties about the topic concept of a wiki article. It could be considered as a mini summary, where many nuggets in TREC-QA are included. As properties in the infobox are not complete sentences and do not present relevant arguments, it is inappropriate to concatenate them as a summary. However, they are good indicators for summary generation. Following the terms in a property (e.g., spouse name and graduation school), we can find the corresponding sentences in the body of the text that contain such terms. [Footnote 4: Sometimes, we can find more than one appropriate sentence for a property. In our investigation, we select the top two sentences containing the particular term if available.] Such a sentence describes the details about the involved property and provides the relevant arguments. We call it a support sentence.

[Figure 3: Extend document concept lattice by outline and infobox in Wikipedia]

Now, again, we have a hierarchy: infobox, properties and support sentences. This hierarchy can be used to render a summary by concatenating the support sentences. This summary is inferred directly from the hand-crafted infobox and is a full version of the infobox, so its quality is guaranteed. However, it may be inapplicable due to its improper length. Following the iterative reinforcement approach for summarization and keyword extraction (Wan et al., 2007), it can be used to refine other versions of summaries. Hence, we utilize the infobox and its support sentences to modify nodes' RPs in DCL so that the priority of local topics is biased toward the infobox. To achieve this, we extend DCL by inserting a hierarchy from the infobox: (i) generate a pseudo derived node for each property; (ii) link every such derived node to its support sentences; and (iii) cover these pseudo nodes by a virtual derived node called infobox.

3.3.3 Summary Generation from EDCL
In DCL, sentences with common concepts form local topics in an autonomous way, where shared concepts are depicted in derived nodes. Now we introduce two additional hierarchies, derived from the outline and infobox, into DCL to refine the RPs of salient local topics for summarization, which renders a model named the extended document concept lattice (EDCL). As shown in Figure 3, base nodes in EDCL covered by pseudo derived nodes will increase their RPs when they receive influence from the outline and infobox. Also, if the RPs of their covered base nodes change, the original derived nodes will modify their RPs as well. Therefore, the new RPs in derived nodes and base nodes lead to a better priority ranking of derived nodes, which is likely to result in a better summary. One important direct consequence of introducing the extra hierarchies is to increase the RP of nodes relevant to the outline and infobox, so that the summaries from EDCL are likely to follow human-crafted ones. The influence of these human effects is transmitted in a 'V'-curve fashion.

We utilize the following steps to generate a summary with a given length (say m sentences) from EDCL.
1. Build a normal DCL, and compute RP for each node according to Eqn (2).
2. Generate pseudo derived nodes (denoted by P) based on the outline and infobox, and link the pseudo derived nodes to their relevant base nodes (denoted by B0).
3. Update the RP of B0 by magnifying the contribution of the concepts shared between P and B0. [Footnote 5: We magnify it by adding |c_0| * w_c * η, where c_0 denotes the concepts shared between P and B0, and η is an influence factor set to 2-5 in our experiments.]
4. Update the RP of the derived nodes that cover B0 on account of the new RP of B0.
5. Select m non-overlapping derived nodes with maximal RP as the current observation.
6. Concatenate the representative sentences with top RP from each derived node in the current observation as output.
7. If one representative sentence is covered by more than one derived node in step 5, the output will contain fewer than m sentences. In this case, we need to increase m and repeat steps 5-6 until m sentences are selected.

4 Experiments
The purposes of our experiments are two-fold: (i) evaluate the effect of wiki definitions on the TREC-QA task; and (ii) examine the characteristics and summarization performance of EDCL.

4.1 Corpus Construction
We adopt the tasks of TREC-QA in 2004-2006 (TREC 12-14) as our test scope. We retrieve articles with identical topic names from Wikipedia. [Footnote 6: The dump is available at http://download.wikimedia.org/. Our dump was downloaded in Sept 2007.] Non-letter transformations are permitted (e.g., from 'Carolyn Bessette-Kennedy' to 'Carolyn BessetteKennedy'). Because our focus is summarization evaluation, we ignore the cases in TREC-QA where the exact topics do not exist in Wikipedia, even though relevant topics are available (e.g., 'France wins World Cup in soccer' in TREC-QA vs. 'France national football team' and '2006 FIFA World Cup' in Wikipedia). Finally, among the 215 topics in TREC 12-14, we obtain 180 wiki articles with the same topics.

We ask 15 undergraduate and graduate students from the Department of English Literature at the National University of Singapore to choose 7-14 sentences from the above wiki articles as extractive summaries. Each wiki article is annotated by 3 persons separately. In order for the volunteers to avoid bias from the TREC-QA corpus, we do not provide the queries and nuggets used in TREC-QA. Similar to TREC nuggets, we call the selected sentences wiki nuggets. Wiki nuggets provide the ground truth for the performance evaluation, since some TREC nuggets are possibly unavailable in Wikipedia. Here, we did not ask the volunteers to create snippets (like TREC-QA) or compose an abstractive summary (like DUC). This is because of the special style of wiki articles: the entire document is a long summary without trivial content. Usually, we do not need to concatenate key phrases from diverse sentences to form a recapitulative sentence. Meanwhile, selecting a set of salient sentences to form a concise version is a relatively less time-consuming but applicable approach. Snippets, by and large, lead to bad readability, and therefore we do not employ this approach. In addition, the volunteers also annotate 7-10 question/answer pairs for each article for further research on QA using Wikipedia. The corpus, called the TREC-Wiki collection, is available at our site (http://nuscu.ddns.comp.nus.edu.sg). The system for Wikipedia summarization using EDCL is available on the Web as well.

4.2 Corpus Exploration
4.2.1 Answer availability
The availability of answers in Wikipedia for TREC-QA could be measured in two aspects: (i) how many TREC-QA topics are covered by Wikipedia?
and (ii) how many nuggets could be found in the corresponding wiki article? We find that (i) over 80% of the topics (180/215) in TREC 12-14 are available in Wikipedia, and (ii) about 47% of TREC nuggets could be detected directly in Wikipedia (examined with an applet modified from Pourpre (Lin and Demner-Fushman, 2006)). In contrast, the 6,463 nuggets existing in TREC-QA 12-14 are distributed over 4,175 articles from the AQUAINT corpus. We can say that Wikipedia is an answer goldmine for TREC-QA questions. When we look into these TREC nuggets in wiki articles closely, we find that most of them are embedded in wiki links or relevant to the infobox. This suggests that these features are indicators for sentences containing nuggets.

4.2.2 Correlation between TREC nuggets and non-text features
Analyzing the features used could let us understand summarization better (Nenkova and Louis, 2008). Here, we focus on the statistical analysis between TREC/wiki nuggets and non-textual features such as wiki links, infobox and outline. The features used are introduced in Table 1. The correlation coefficients are listed in Table 2.

Feature        Description
Link           Does the sentence have a link?
Topic rel.     Does the sentence contain any word in the topic concept?
Outline rel.   Does the sentence hold a word from its section title(s) (outline)?
Infobox rel.   Is it a support sentence?
Position       First sentence of the article, first or last sentence of a paragraph, or others?
Table 1: Features for correlation measurement

Feature        TREC nuggets    Wiki nuggets
Link           0.087           0.120
Topic rel.     0.038           0.058
Outline rel.   0.078           0.076
Infobox rel.   0.089           0.170
Position       -0.047          0.021
Table 2: Correlation coefficients between non-textual features in Wikipedia and TREC/wiki nuggets

Observations: (1) On the whole, wiki nuggets exhibit higher correlation with non-textual features than TREC nuggets do. The possible reason is that TREC nuggets are extracted from AQUAINT rather than Wikipedia. (2) Compared to other features, infobox and wiki links relate strongly to nuggets. They are thus reliable features beyond text for summarization. (3) Sentence positions exhibit weak correlation with nuggets, even though the first sentence of an article is a good one-sentence definition.

4.3 Statistical Characteristics of EDCL
We design four runs with various configurations, as shown in Table 3. We implement a sentence re-ranking program using MMR (maximal marginal relevance) (Carbonell and Goldstein, 1998) in Run 1, which is considered the test baseline. We apply standard DCL in Run 2, where concepts are determined according to their definitions in WordNet (Ye et al., 2007). We introduce wiki concepts for standard DCL in Run 3. Run 4 is the full version of EDCL, which considers both outline and infobox.

Observations: (1) In Run 1, the average number of distinctive words per article is close to 1,200 after stop words are filtered out. When we merge diverse words having similar semantics according to WordNet concepts, we obtain 873 concepts per article on average in Run 2. The word number decreases by about 28% as a result of the omission of close-class terms and the merging of synonyms and paraphrases. (2) When wiki concepts are introduced in Run 3, the number of concepts continues to decrease. Here, some adjacent single-word terms are merged into wiki concepts if they are annotated by wiki links. Even though the reduction in total concepts is limited, these new wiki concepts group the terms that cannot be detected by WordNet.
(3) DCL based on WordNet concepts has fewer derived nodes than DCL based on wiki concepts (Run 3), although the former has more concepts. This implies that wiki concepts lead to a higher link density in DCL, as more links between concepts can be detected. (4) Outline and infobox bring an additional 54 derived nodes (from 1695 to 1741). The additional computation cost is limited when they are introduced into EDCL.

Run 1    Word co-occurrence + MMR
Run 2    Basic DCL model (WordNet concepts)
Run 3    DCL + wiki concepts
Run 4    EDCL (DCL + wiki concepts + outline + infobox)
Table 3: Test configurations

         Concepts    Base nodes    Derived nodes
Run 1    1173 (number of words)
Run 2    873         259           1517
Run 3    826         259           1695
Run 4    831         259           1741
Table 4: Average node/concept numbers in DCL and EDCL

4.4 Summarization Performance of EDCL
We evaluate the performance of EDCL in two respects: its contribution to the TREC-QA definition task and its summarization accuracy on our TREC-Wiki collection. Since factoid/list questions are about the most essential information of the target as well, like Cui's approach (2005), we treat factoid/list answers as essential nuggets and add them to the gold standard list of definition nuggets. We set the sentence number of summaries generated by the system to
(ii) When using wiki concepts, infobox and outline to enrich DCL, we find that the precision of sentence selection has improved more than the recall. It reaffirms the conclusion in the previous TREC-QA test in this subsection. (iii) In addition, we manually examine the summaries on some wiki articles with common topics, such as car, house, money, etc. We find that the summaries generated by EDCL could effectively grasp the key information about the topics when the sentence number of summaries exceeds 10. 5 Conclusion and Future Work Wikipedia recursively defines enormous concepts in huge vector space of wiki concepts. The explicit semantic representation via wiki concepts allows us to obtain more reliable links between passages. Wikipedia’s special structural features, such as wiki links, infobox and outline, reflect the hidden human knowledge. The first sentence of a wiki article, infobox (and its support sentences), outline (and its relevant sentences), as well as the entire document could be considered as diverse summaries with various lengths. In our proposed model, local topics are autonomously organized in a lattice structure according to their overlapping relations. The hierarchies derived from infobox and outline are imported to refine the representative powers of local topics by emphasizing the concepts relevant to infobox and outline. Experiments indicate that our proposed model exhibits promising performance in summarization and QA definition tasks. Of course, there are rooms to further improve the model. Possible improvements includes: (a) using advanced semantic and parsing technologies to detect the support and relevant sentences for infobox and outline; (b) summarizing multiple articles in a wiki category; and (c) exploring the mapping from close-class terms to open-class terms for more links between passages is likely to forward some interesting results. More generally, the knowledge hidden in nontextual features of Wikipedia allow the model to harvest better definition summaries. It is challenging but possibly fruitful to recast the normal documents with wiki styles so as to adopt EDCL for free text and enrich the research efforts on other NLP tasks. 206 References [Ahn et al.2004] David Ahn, Valentin Jijkoun, et al. 2004. Using Wikipedia at the TREC QA Track. In Text REtrieval Conference. [Carbonell and Goldstein1998] J. Carbonell and J. Goldstein. 1998. The use of mmr, diversity-based re-ranking for reordering documents and producing summaries. In SIGIR, pages 335–336. [Cui et al.2005] Hang Cui, Min-Yen Kan, and Tat-Seng Chua. 2005. Generic soft pattern models for definitional question answering. In Proceedings of the 28th annual international ACM SIGIR conference on research and development in information retrieval, pages 384–391, New York, NY, USA. ACM. [Erkan and Radev2004] G¨unes¸ Erkan and Dragomir R. Radev. 2004. LexRank: Graph-based Lexical Centrality as Salience in Text Summarization. Artificial Intelligence Research, 22:457–479. [Furnas et al.1987] George W. Furnas, Thomas K. Landauer, Louis M. Gomez, and Susan T. Dumais. 1987. The vocabulary problem in human-system communication. Communications of the ACM, 30(11):964–971. [Gabrilovich and Markovitch2007] Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipedia-based explicit semantic analysis. In Proceedings of The Twentieth International Joint Conference for Artificial Intelligence, pages 1606–1611, Hyderabad, India. [Kor and Chua2007] Kian-Wei Kor and Tat-Seng Chua. 2007. 
[Lin and Demner-Fushman 2006] Jimmy J. Lin and Dina Demner-Fushman. 2006. Methods for automatically evaluating answers to complex questions. Information Retrieval, 9(5):565-587.
[Lin and Hovy 2000] Chin-Yew Lin and Eduard Hovy. 2000. The automated acquisition of topic signatures for text summarization. In Proceedings of the 18th Conference on Computational Linguistics, pages 495-501, Morristown, NJ, USA. ACL.
[Lita et al. 2004] Lucian Vlad Lita, Warren A. Hunt, and Eric Nyberg. 2004. Resource analysis for question answering. In Proceedings of the ACL 2004 Interactive Poster and Demonstration Sessions, page 18, Morristown, NJ, USA. ACL.
[Liu et al. 2004] Shuang Liu, Fang Liu, Clement Yu, and Weiyi Meng. 2004. An effective approach to document retrieval via utilizing WordNet and recognizing phrases. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 266-272, New York, NY, USA. ACM.
[Lu et al. 2008] Jie Lu, Shiren Ye, and Tat-Seng Chua. 2008. Explore semantic similarity and semantic relatedness via Wikipedia. Technical report, National University of Singapore, http://nuscu.ddns.comp.nus.edu.sg.
[Mei and Zhai 2008] Qiaozhu Mei and ChengXiang Zhai. 2008. Generating impact-based summaries for scientific literature. In Proceedings of ACL-08: HLT, pages 816-824, Columbus, Ohio, June. ACL.
[Nenkova and Louis 2008] Ani Nenkova and Annie Louis. 2008. Can you summarize this? Identifying correlates of input difficulty for multi-document summarization. In Proceedings of ACL-08: HLT, pages 825-833, Columbus, Ohio, June. ACL.
[Nenkova et al. 2007] Ani Nenkova, Rebecca Passonneau, and Kathleen McKeown. 2007. The pyramid method: Incorporating human content selection variation in summarization evaluation. ACM Transactions on Speech and Language Processing, 4(2):4.
[Silber and McCoy 2002] H. Gregory Silber and Kathleen F. McCoy. 2002. Efficiently computed lexical chains as an intermediate representation for automatic text summarization. Computational Linguistics, 28(4):487-496.
[Teufel and Moens 2002] Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409-445, December.
[Voorhees and Dang 2005] Ellen M. Voorhees and Hoa Trang Dang. 2005. Overview of the TREC 2005 question answering track. In Text REtrieval Conference.
[Voorhees 2004] Ellen M. Voorhees. 2004. Overview of the TREC 2004 question answering track. In Text REtrieval Conference.
[Wan et al. 2007] Xiaojun Wan, Jianwu Yang, and Jianguo Xiao. 2007. Towards an iterative reinforcement approach for simultaneous document summarization and keyword extraction. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 552-559, Prague, Czech Republic, June. ACL.
[Ye et al. 2007] Shiren Ye, Tat-Seng Chua, Min-Yen Kan, and Long Qiu. 2007. Document concept lattice for text understanding and summarization. Information Processing and Management, 43(6):1643-1662.
2009
23
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 208-216, Suntec, Singapore, 2-7 August 2009. ©2009 ACL and AFNLP

Automatically Generating Wikipedia Articles: A Structure-Aware Approach
Christina Sauper and Regina Barzilay
Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology
{csauper,regina}@csail.mit.edu

Abstract
In this paper, we investigate an approach for creating a comprehensive textual overview of a subject composed of information drawn from the Internet. We use the high-level structure of human-authored texts to automatically induce a domain-specific template for the topic structure of a new overview. The algorithmic innovation of our work is a method to learn topic-specific extractors for content selection jointly for the entire template. We augment the standard perceptron algorithm with a global integer linear programming formulation to optimize both local fit of information into each topic and global coherence across the entire overview. The results of our evaluation confirm the benefits of incorporating structural information into the content selection process.

1 Introduction
In this paper, we consider the task of automatically creating a multi-paragraph overview article that provides a comprehensive summary of a subject of interest. Examples of such overviews include actor biographies from IMDB and disease synopses from Wikipedia. Producing these texts by hand is a labor-intensive task, especially when relevant information is scattered throughout a wide range of Internet sources. Our goal is to automate this process. We aim to create an overview of a subject – e.g., 3-M Syndrome – by intelligently combining relevant excerpts from across the Internet. As a starting point, we can employ methods developed for multi-document summarization. However, our task poses additional technical challenges with respect to content planning. Generating a well-rounded overview article requires proactive strategies to gather relevant material, such as searching the Internet. Moreover, the challenge of maintaining output readability is magnified when creating a longer document that discusses multiple topics.

In our approach, we explore how the high-level structure of human-authored documents can be used to produce well-formed comprehensive overview articles. We select relevant material for an article using a domain-specific automatically generated content template. For example, a template for articles about diseases might contain diagnosis, causes, symptoms, and treatment. Our system induces these templates by analyzing patterns in the structure of human-authored documents in the domain of interest. Then, it produces a new article by selecting content from the Internet for each part of this template. An example of our system's output is shown in Figure 1. [Footnote 1: This system output was added to Wikipedia at http://en.wikipedia.org/wiki/3-M syndrome on June 26, 2008. The page's history provides examples of changes performed by human editors to articles created by our system.]

The algorithmic innovation of our work is a method for learning topic-specific extractors for content selection jointly across the entire template. Learning a single topic-specific extractor can be easily achieved in a standard classification framework. However, the choices for different topics in a template are mutually dependent; for example, in a multi-topic article, there is potential for redundancy across topics. Simultaneously learning content selection for all topics enables us to explicitly model these inter-topic connections. We formulate this task as a structured classification problem.
We estimate the parameters of our model using the perceptron algorithm augmented with an integer linear programming (ILP) formulation, run over a training set of example articles in the given domain.

Diagnosis: ...No laboratories offering molecular genetic testing for prenatal diagnosis of 3-M syndrome are listed in the GeneTests Laboratory Directory. However, prenatal testing may be available for families in which the disease-causing mutations have been identified in an affected family member in a research or clinical laboratory.
Causes: Three M syndrome is thought to be inherited as an autosomal recessive genetic trait. Human traits, including the classic genetic diseases, are the product of the interaction of two genes, one received from the father and one from the mother. In recessive disorders, the condition does not occur unless an individual inherits the same defective gene for the same trait from each parent. ...
Symptoms: ...Many of the symptoms and physical features associated with the disorder are apparent at birth (congenital). In some cases, individuals who carry a single copy of the disease gene (heterozygotes) may exhibit mild symptoms associated with Three M syndrome.
Treatment: ...Genetic counseling will be of benefit for affected individuals and their families. Family members of affected individuals should also receive regular clinical evaluations to detect any symptoms and physical characteristics that may be potentially associated with Three M syndrome or heterozygosity for the disorder. Other treatment for Three M syndrome is symptomatic and supportive.
Figure 1: A fragment from the automatically created article for 3-M Syndrome.

The key features of this structure-aware approach are twofold:
- Automatic template creation: Templates are automatically induced from human-authored documents. This ensures that the overview article will have the breadth expected in a comprehensive summary, with content drawn from a wide variety of Internet sources.
- Joint parameter estimation for content selection: Parameters are learned jointly for all topics in the template. This procedure optimizes both local relevance of information for each topic and global coherence across the entire article.

We evaluate our approach by creating articles in two domains: Actors and Diseases. For a data set, we use Wikipedia, which contains articles similar to those we wish to produce in terms of length and breadth. An advantage of this data set is that Wikipedia articles explicitly delineate topical sections, facilitating structural analysis. The results of our evaluation confirm the benefits of structure-aware content selection over approaches that do not explicitly model topical structure.

2 Related Work
Concept-to-text generation and text-to-text generation take very different approaches to content selection. In traditional concept-to-text generation, a content planner provides a detailed template for what information should be included in the output and how this information should be organized (Reiter and Dale, 2000). In text-to-text generation, such templates for information organization are not available; sentences are selected based on their salience properties (Mani and Maybury, 1999).
While this strategy is robust and portable across domains, output summaries often suffer from coherence and coverage problems.

In between these two approaches is work on domain-specific text-to-text generation. Instances of these tasks are biography generation in summarization and answering definition requests in question-answering. In contrast to a generic summarizer, these applications aim to characterize the types of information that are essential in a given domain. This characterization varies greatly in granularity. For instance, some approaches coarsely discriminate between biographical and non-biographical information (Zhou et al., 2004; Biadsy et al., 2008), while others go beyond binary distinction by identifying atomic events – e.g., occupation and marital status – that are typically included in a biography (Weischedel et al., 2004; Filatova and Prager, 2005; Filatova et al., 2006). Commonly, such templates are specified manually and are hard-coded for a particular domain (Fujii and Ishikawa, 2004; Weischedel et al., 2004). Our work is related to these approaches; however, content selection in our work is driven by domain-specific automatically induced templates. As our experiments demonstrate, patterns observed in domain-specific training data provide sufficient constraints for topic organization, which is crucial for a comprehensive text.

Our work also relates to a large body of recent work that uses Wikipedia material. Instances of this work include information extraction, ontology induction and resource acquisition (Wu and Weld, 2007; Biadsy et al., 2008; Nastase, 2008; Nastase and Strube, 2008). Our focus is on a different task — generation of new overview articles that follow the structure of Wikipedia articles.

3 Method
The goal of our system is to produce a comprehensive overview article given a title – e.g., Cancer. We assume that relevant information on the subject is available on the Internet but scattered among several pages interspersed with noise. We are provided with a training corpus consisting of n documents d_1 ... d_n in the same domain – e.g., Diseases. Each document d_i has a title and a set of delineated sections s_{i1} ... s_{im}. [Footnote 2: In data sets where such mark-up is not available, one can employ topical segmentation algorithms as an additional preprocessing step.] The number of sections m varies between documents. Each section s_{ij} also has a corresponding heading h_{ij} – e.g., Treatment.

Our overview article creation process consists of three parts. First, a preprocessing step creates a template and searches for a number of candidate excerpts from the Internet. Next, parameters must be trained for the content selection algorithm using our training data set. Finally, a complete article may be created by combining a selection of candidate excerpts.

1. Preprocessing (Section 3.1). Our preprocessing step leverages previous work in topic segmentation and query reformulation to prepare a template and a set of candidate excerpts for content selection. Template generation must occur once per domain, whereas search occurs every time an article is generated, in both learning and application.
(a) Template Induction. To create a content template, we cluster all section headings h_{i1} ... h_{im} for all documents d_i. Each cluster is labeled with the most common heading h_{ij} within the cluster. The largest k clusters are selected to become topics t_1 ... t_k, which form the domain-specific content template.
(b) Search. For each document that we wish to create, we retrieve from the Internet a set of r excerpts e_{j1} ... e_{jr} for each topic t_j from the template.
We define appropriate search queries using the requested document title and topics tj. 2. Learning Content Selection (Section 3.2) For each topic tj, we learn the corresponding topic-specific parameters wj to determine the 2In data sets where such mark-up is not available, one can employ topical segmentation algorithms as an additional preprocessing step. quality of a given excerpt. Using the perceptron framework augmented with an ILP formulation for global optimization, the system is trained to select the best excerpt for each document di and each topic tj. For training, we assume the best excerpt is the original human-authored text sij. 3. Application (Section 3.2) Given the title of a requested document, we select several excerpts from the candidate vectors returned by the search procedure (1b) to create a comprehensive overview article. We perform the decoding procedure jointly using learned parameters w1 . . . wk and the same ILP formulation for global optimization as in training. The result is a new document with k excerpts, one for each topic. 3.1 Preprocessing Template Induction A content template specifies the topical structure of documents in one domain. For instance, the template for articles about actors consists of four topics t1 . . . t4: biography, early life, career, and personal life. Using this template to create the biography of a new actor will ensure that its information coverage is consistent with existing human-authored documents. We aim to derive these templates by discovering common patterns in the organization of documents in a domain of interest. There has been a sizable amount of research on structure induction ranging from linear segmentation (Hearst, 1994) to content modeling (Barzilay and Lee, 2004). At the core of these methods is the assumption that fragments of text conveying similar information have similar word distribution patterns. Therefore, often a simple segment clustering across domain texts can identify strong patterns in content structure (Barzilay and Elhadad, 2003). Clusters containing fragments from many documents are indicative of topics that are essential for a comprehensive summary. Given the simplicity and robustness of this approach, we utilize it for template induction. We cluster all section headings hi1 . . . him from all documents di using a repeated bisectioning algorithm (Zhao et al., 2005). As a similarity function, we use cosine similarity weighted with TF*IDF. We eliminate any clusters with low internal similarity (i.e., smaller than 0.5), as we assume these are “miscellaneous” clusters that will not yield unified topics. 210 We determine the average number of sections k over all documents in our training set, then select the k largest section clusters as topics. We order these topics as t1 . . . tk using a majority ordering algorithm (Cohen et al., 1998). This algorithm finds a total order among clusters that is consistent with a maximal number of pairwise relationships observed in our data set. Each topic tj is identified by the most frequent heading found within the cluster – e.g., Causes. This set of topics forms the content template for a domain. Search To retrieve relevant excerpts, we must define appropriate search queries for each topic t1 . . . tk. Query reformulation is an active area of research (Agichtein et al., 2001). 
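To make the template-induction step concrete, the sketch below clusters section headings with TF*IDF vectors and k-means. This is a simplification supplied here for illustration only: the system itself uses repeated bisectioning (Zhao et al., 2005) and a majority ordering step, and all function and variable names below are ours, not part of the described implementation.

from collections import Counter
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

def induce_template(docs, n_clusters=30, min_internal_sim=0.5):
    # docs: list of documents, each a list of (heading, section_text) pairs.
    headings = [h for doc in docs for (h, _) in doc]
    vectors = TfidfVectorizer().fit_transform(headings).toarray()
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(vectors)

    clusters = {}
    for label, heading, vec in zip(km.labels_, headings, vectors):
        clusters.setdefault(label, []).append((heading, vec))

    # Discard "miscellaneous" clusters whose internal similarity is low.
    kept = []
    for label, members in clusters.items():
        centroid = km.cluster_centers_[label]
        sims = [np.dot(v, centroid) /
                (np.linalg.norm(v) * np.linalg.norm(centroid) + 1e-9)
                for _, v in members]
        if np.mean(sims) >= min_internal_sim:
            kept.append(members)

    # k = average number of sections per training document;
    # keep the k largest clusters, labelled by their most common heading.
    k = int(round(np.mean([len(doc) for doc in docs])))
    kept.sort(key=len, reverse=True)
    return [Counter(h for h, _ in members).most_common(1)[0][0]
            for members in kept[:k]]

Topic ordering (the majority ordering of Cohen et al. (1998)) is omitted in this sketch; the surviving clusters are simply taken in order of size.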
We have experimented with several of these methods for drawing search queries from representative words in the body text of each topic; however, we find that the best performance is provided by deriving queries from a conjunction of the document title and topic – e.g., “3-M syndrome” diagnosis. Using these queries, we search using Yahoo! and retrieve the first ten result pages for each topic. From each of these pages, we extract all possible excerpts consisting of chunks of text between standardized boundary indicators (such as <p> tags). In our experiments, there are an average of 6 excerpts taken from each page. For each topic tj of each document we wish to create, the total number of excerpts r found on the Internet may differ. We label the excerpts ej1 . . . ejr. 3.2 Selection Model Our selection model takes the content template t1 . . . tk and the candidate excerpts ej1 . . . ejr for each topic tj produced in the previous steps. It then selects a series of k excerpts, one from each topic, to create a coherent summary. One possible approach is to perform individual selections from each set of excerpts ej1 . . . ejr and then combine the results. This strategy is commonly used in multi-document summarization (Barzilay et al., 1999; Goldstein et al., 2000; Radev et al., 2000), where the combination step eliminates the redundancy across selected excerpts. However, separating the two steps may not be optimal for this task — the balance between coverage and redundancy is harder to achieve when a multi-paragraph summary is generated. In addition, a more discriminative selection strategy is needed when candidate excerpts are drawn directly from the web, as they may be contaminated with noise. We propose a novel joint training algorithm that learns selection criteria for all the topics simultaneously. This approach enables us to maximize both local fit and global coherence. We implement this algorithm using the perceptron framework, as it can be easily modified for structured prediction while preserving convergence guarantees (Daum´e III and Marcu, 2005; Snyder and Barzilay, 2007). In this section, we first describe the structure and decoding procedure of our model. We then present an algorithm to jointly learn the parameters of all topic models. 3.2.1 Model Structure The model inputs are as follows: • The title of the desired document • t1 . . . tk — topics from the content template • ej1 . . . ejr — candidate excerpts for each topic tj In addition, we define feature and parameter vectors: • φ(ejl) — feature vector for the lth candidate excerpt for topic tj • w1 . . . wk — parameter vectors, one for each of the topics t1 . . . tk Our model constructs a new article by following these two steps: Ranking First, we attempt to rank candidate excerpts based on how representative they are of each individual topic. For each topic tj, we induce a ranking of the excerpts ej1 . . . ejr by mapping each excerpt ejl to a score: scorej(ejl) = φ(ejl) · wj Candidates for each topic are ranked from highest to lowest score. After this procedure, the position l of excerpt ejl within the topic-specific candidate vector is the excerpt’s rank. Optimizing the Global Objective To avoid redundancy between topics, we formulate an optimization problem using excerpt rankings to create the final article. Given k topics, we would like to select one excerpt ejl for each topic tj, such that the rank is minimized; that is, scorej(ejl) is high. To select the optimal excerpts, we employ integer linear programming (ILP). 
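Before turning to the ILP itself, the ranking step just described can be summarized in a few lines. The feature map and parameter values below are toy stand-ins of our own (excerpt length and title mentions), not the model's actual feature set.

import numpy as np

def rank_excerpts(excerpts, phi, w_j):
    # score_j(e) = phi(e) . w_j; the position in the returned list is the rank.
    return sorted(excerpts, key=lambda e: float(np.dot(phi(e), w_j)),
                  reverse=True)

# Toy feature map: number of words and number of title mentions.
def phi(excerpt, title="3-M syndrome"):
    return np.array([len(excerpt.split()),
                     excerpt.lower().count(title.lower())], dtype=float)

w_diagnosis = np.array([0.01, 2.0])   # illustrative parameter values only
candidates = ["Prenatal diagnosis of 3-M syndrome may be available to families.",
              "Buy cheap lab tests online."]
ranked = rank_excerpts(candidates, phi, w_diagnosis)

The ILP that turns these per-topic rankings into a single article is spelled out in the following paragraphs.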
This framework is 211 commonly used in generation and summarization applications where the selection process is driven by multiple constraints (Marciniak and Strube, 2005; Clarke and Lapata, 2007). We represent excerpts included in the output using a set of indicator variables, xjl. For each excerpt ejl, the corresponding indicator variable xjl = 1 if the excerpt is included in the final document, and xjl = 0 otherwise. Our objective is to minimize the ranks of the excerpts selected for the final document: min k X j=1 r X l=1 l · xjl We augment this formulation with two types of constraints. Exclusivity Constraints We want to ensure that exactly one indicator xjl is nonzero for each topic tj. These constraints are formulated as follows: r X l=1 xjl = 1 ∀j ∈{1 . . . k} Redundancy Constraints We also want to prevent redundancy across topics. We define sim(ejl, ej′l′) as the cosine similarity between excerpts ejl from topic tj and ej′l′ from topic tj′. We introduce constraints that ensure no pair of excerpts has similarity above 0.5: (xjl + xj′l′) · sim(ejl, ej′l′) ≤1 ∀j, j′ = 1 . . . k ∀l, l′ = 1 . . . r If excerpts ejl and ej′l′ have cosine similarity sim(ejl, ej′l′) > 0.5, only one excerpt may be selected for the final document – i.e., either xjl or xj′l′ may be 1, but not both. Conversely, if sim(ejl, ej′l′) ≤0.5, both excerpts may be selected. Solving the ILP Solving an integer linear program is NP-hard (Cormen et al., 1992); however, in practice there exist several strategies for solving certain ILPs efficiently. In our study, we employed lp solve,3 an efficient mixed integer programming solver which implements the Branch-and-Bound algorithm. On a larger scale, there are several alternatives to approximate the ILP results, such as a dynamic programming approximation to the knapsack problem (McDonald, 2007). 3http://lpsolve.sourceforge.net/5.5/ Feature Value UNI wordi count of word occurrences POS wordi first position of word in excerpt BI wordi wordi+1 count of bigram occurrences SENT count of all sentences EXCL count of exclamations QUES count of questions WORD count of all words NAME count of title mentions DATE count of dates PROP count of proper nouns PRON count of pronouns NUM count of numbers FIRST word1 1∗ FIRST word1 word2 1† SIMS count of similar excerpts‡ Table 1: Features employed in the ranking model. ∗Defined as the first unigram in the excerpt. † Defined as the first bigram in the excerpt. ‡ Defined as excerpts with cosine similarity > 0.5 Features As shown in Table 1, most of the features we select in our model have been employed in previous work on summarization (Mani and Maybury, 1999). All features except the SIMS feature are defined for individual excerpts in isolation. For each excerpt ejl, the value of the SIMS feature is the count of excerpts ejl′ in the same topic tj for which sim(ejl, ejl′) > 0.5. This feature quantifies the degree of repetition within a topic, often indicative of an excerpt’s accuracy and relevance. 3.2.2 Model Training Generating Training Data For training, we are given n original documents d1 . . . dn, a content template consisting of topics t1 . . . tk, and a set of candidate excerpts eij1 . . . eijr for each document di and topic tj. For each section of each document, we add the gold excerpt sij to the corresponding vector of candidate excerpts eij1 . . . eijr. This excerpt represents the target for our training algorithm. Note that the algorithm does not require annotated ranking data; only knowledge of this “optimal” excerpt is required. 
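To make the decoding step of Section 3.2.1 concrete, the optimization can be written down directly with an off-the-shelf ILP library. The sketch below is our own illustration, not the authors' implementation: it uses PuLP with its bundled CBC solver rather than lp_solve, and assumes per-topic candidate lists already sorted by rank together with a cosine-similarity function sim.

import itertools
import pulp

def select_excerpts(ranked, sim, threshold=0.5):
    # ranked[j]: candidate excerpts for topic j, best first (list index = rank).
    prob = pulp.LpProblem("excerpt_selection", pulp.LpMinimize)
    x = {(j, l): pulp.LpVariable("x_%d_%d" % (j, l), cat="Binary")
         for j, cands in enumerate(ranked) for l in range(len(cands))}

    # Objective: minimize the summed ranks of the selected excerpts.
    prob += pulp.lpSum(l * var for (j, l), var in x.items())

    # Exclusivity: exactly one excerpt per topic.
    for j, cands in enumerate(ranked):
        prob += pulp.lpSum(x[j, l] for l in range(len(cands))) == 1

    # Redundancy: two excerpts from different topics with similarity
    # above the threshold cannot both be selected.
    for (j, l), (jp, lp) in itertools.combinations(x, 2):
        if j != jp and sim(ranked[j][l], ranked[jp][lp]) > threshold:
            prob += x[j, l] + x[jp, lp] <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [next(c for l, c in enumerate(cands) if x[j, l].value() > 0.5)
            for j, cands in enumerate(ranked)]

Because the redundancy constraint is vacuous when the similarity is at or below 0.5, only the violating pairs need to be added explicitly.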
However, if the excerpts provided in the training data have low quality, noise is introduced into the system. Training Procedure Our algorithm is a modification of the perceptron ranking algorithm (Collins, 2002), which allows for joint learning across several ranking problems (Daum´e III and Marcu, 2005; Snyder and Barzilay, 2007). Pseudocode for this algorithm is provided in Figure 2. First, we define Rank(eij1 . . . eijr, wj), which 212 ranks all excerpts from the candidate excerpt vector eij1 . . . eijr for document di and topic tj. Excerpts are ordered by scorej(ejl) using the current parameter values. We also define Optimize(eij1 . . . eijr), which finds the optimal selection of excerpts (one per topic) given ranked lists of excerpts eij1 . . . eijr for each document di and topic tj. These functions follow the ranking and optimization procedures described in Section 3.2.1. The algorithm maintains k parameter vectors w1 . . . wk, one associated with each topic tj desired in the final article. During initialization, all parameter vectors are set to zeros (line 2). To learn the optimal parameters, this algorithm iterates over the training set until the parameters converge or a maximum number of iterations is reached (line 3). For each document in the training set (line 4), the following steps occur: First, candidate excerpts for each topic are ranked (lines 5-6). Next, decoding through ILP optimization is performed over all ranked lists of candidate excerpts, selecting one excerpt for each topic (line 7). Finally, the parameters are updated in a joint fashion. For each topic (line 8), if the selected excerpt is not similar enough to the gold excerpt (line 9), the parameters for that topic are updated using a standard perceptron update rule (line 10). When convergence is reached or the maximum iteration count is exceeded, the learned parameter values are returned (line 12). The use of ILP during each step of training sets this algorithm apart from previous work. In prior research, ILP was used as a postprocessing step to remove redundancy and make other global decisions about parameters (McDonald, 2007; Marciniak and Strube, 2005; Clarke and Lapata, 2007). However, in our training, we intertwine the complete decoding procedure with the parameter updates. Our joint learning approach finds per-topic parameter values that are maximally suited for the global decoding procedure for content selection. 4 Experimental Setup We evaluate our method by observing the quality of automatically created articles in different domains. We compute the similarity of a large number of articles produced by our system and several baselines to the original human-authored articles using ROUGE, a standard metric for summary quality. In addition, we perform an analysis of ediInput: d1 . . . dn: A set of n documents, each containing k sections si1 . . . sik eij1 . . . eijr: Sets of candidate excerpts for each topic tj and document di Define: Rank(eij1 . . . eijr, wj): As described in Section 3.2.1: Calculates scorej(eijl) for all excerpts for document di and topic tj, using parameters wj. Orders the list of excerpts by scorej(eijl) from highest to lowest. Optimize(ei11 . . . eikr): As described in Section 3.2.1: Finds the optimal selection of excerpts to form a final article, given ranked lists of excerpts for each topic t1 . . . tk. Returns a list of k excerpts, one for each topic. φ(eijl): Returns the feature vector representing excerpt eijl Initialization: 1 For j = 1 . . . 
k 2 Set parameters wj = 0 Training: 3 Repeat until convergence or while iter < itermax: 4 For i = 1 . . . n 5 For j = 1 . . . k 6 Rank(eij1 . . . eijr, wj) 7 x1 . . . xk = Optimize(ei11 . . . eikr) 8 For j = 1 . . . k 9 If sim(xj, sij) < 0.8 10 wj = wj + φ(sij) −φ(xi) 11 iter = iter + 1 12 Return parameters w1 . . . wk Figure 2: An algorithm for learning several ranking problems with a joint decoding mechanism. tor reaction to system-produced articles submitted to Wikipedia. Data For evaluation, we consider two domains: American Film Actors and Diseases. These domains have been commonly used in prior work on summarization (Weischedel et al., 2004; Zhou et al., 2004; Filatova and Prager, 2005; DemnerFushman and Lin, 2007; Biadsy et al., 2008). Our text corpus consists of articles drawn from the corresponding categories in Wikipedia. There are 2,150 articles in American Film Actors and 523 articles in Diseases. For each domain, we randomly select 90% of articles for training and test on the remaining 10%. Human-authored articles in both domains contain an average of four topics, and each topic contains an average of 193 words. In order to model the real-world scenario where Wikipedia articles are not always available (as for new or specialized topics), we specifically exclude Wikipedia sources during our search pro213 Avg. Excerpts Avg. Sources Amer. Film Actors Search 2.3 1 No Template 4 4.0 Disjoint 4 2.1 Full Model 4 3.4 Oracle 4.3 4.3 Diseases Search 3.1 1 No Template 4 2.5 Disjoint 4 3.0 Full Model 4 3.2 Oracle 5.8 3.9 Table 2: Average number of excerpts selected and sources used in article creation for test articles. cedure (Section 3.1) for evaluation. Baselines Our first baseline, Search, relies solely on search engine ranking for content selection. Using the article title as a query – e.g., Bacillary Angiomatosis, this method selects the web page that is ranked first by the search engine. From this page we select the first k paragraphs where k is defined in the same way as in our full model. If there are less than k paragraphs on the page, all paragraphs are selected, but no other sources are used. This yields a document of comparable size with the output of our system. Despite its simplicity, this baseline is not naive: extracting material from a single document guarantees that the output is coherent, and a page highly ranked by a search engine may readily contain a comprehensive overview of the subject. Our second baseline, No Template, does not use a template to specify desired topics; therefore, there are no constraints on content selection. Instead, we follow a simplified form of previous work on biography creation, where a classifier is trained to distinguish biographical text (Zhou et al., 2004; Biadsy et al., 2008). In this case, we train a classifier to distinguish domain-specific text. Positive training data is drawn from all topics in the given domain corpus. To find negative training data, we perform the search procedure as in our full model (see Section 3.1) using only the article titles as search queries. Any excerpts which have very low similarity to the original articles are used as negative examples. During the decoding procedure, we use the same search procedure. We then classify each excerpt as relevant or irrelevant and select the k non-redundant excerpts with the highest relevance confidence scores. 
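To make the joint training loop of Figure 2 concrete, a schematic version is given below. It reuses the select_excerpts sketch above for the decoding step; phi, sim, and the data layout are assumptions of ours, and the 0.8 similarity threshold follows line 9 of the figure.

import numpy as np

def train_joint(training_docs, phi, sim, n_topics, n_features,
                max_iter=50, gold_threshold=0.8):
    # training_docs: list of (gold, candidates) pairs, where gold[j] is the
    # human-authored section for topic j and candidates[j] the candidate
    # excerpts for that topic (with gold[j] included among them).
    w = [np.zeros(n_features) for _ in range(n_topics)]          # line 2
    for _ in range(max_iter):                                    # line 3
        updated = False
        for gold, candidates in training_docs:                   # line 4
            ranked = [sorted(candidates[j],                      # lines 5-6
                             key=lambda e: float(np.dot(phi(e), w[j])),
                             reverse=True)
                      for j in range(n_topics)]
            selected = select_excerpts(ranked, sim)              # line 7
            for j in range(n_topics):                            # line 8
                if sim(selected[j], gold[j]) < gold_threshold:   # line 9
                    w[j] += phi(gold[j]) - phi(selected[j])      # line 10
                    updated = True
        if not updated:
            break
    return w

Dropping the joint decoding step, that is, simply taking the top-ranked excerpt for each topic, reduces this procedure to independently trained per-topic rankers.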
Our third baseline, Disjoint, uses the ranking perceptron framework as in our full system; however, rather than perform an optimization step during training and decoding, we simply select the highest-ranked excerpt for each topic. This equates to standard linear classification for each section individually. In addition to these baselines, we compare against an Oracle system. For each topic present in the human-authored article, the Oracle selects the excerpt from our full model’s candidate excerpts with the highest cosine similarity to the human-authored text. This excerpt is the optimal automatic selection from the results available, and therefore represents an upper bound on our excerpt selection task. Some articles contain additional topics beyond those in the template; in these cases, the Oracle system produces a longer article than our algorithm. Table 2 shows the average number of excerpts selected and sources used in articles created by our full model and each baseline. Automatic Evaluation To assess the quality of the resulting overview articles, we compare them with the original human-authored articles. We use ROUGE, an evaluation metric employed at the Document Understanding Conferences (DUC), which assumes that proximity to human-authored text is an indicator of summary quality. We use the publicly available ROUGE toolkit (Lin, 2004) to compute recall, precision, and F-score for ROUGE-1. We use the Wilcoxon Signed Rank Test to determine statistical significance. Analysis of Human Edits In addition to our automatic evaluation, we perform a study of reactions to system-produced articles by the general public. To achieve this goal, we insert automatically created articles4 into Wikipedia itself and examine the feedback of Wikipedia editors. Selection of specific articles is constrained by the need to find topics which are currently of “stub” status that have enough information available on the Internet to construct a valid article. After a period of time, we analyzed the edits made to the articles to determine the overall editor reaction. We report results on 15 articles in the Diseases category5. 4In addition to the summary itself, we also include proper citations to the sources from which the material is extracted. 5We are continually submitting new articles; however, we report results on those that have at least a 6 month history at time of writing. 214 Recall Precision F-score Amer. Film Actors Search 0.09 0.37 0.13 ∗ No Template 0.33 0.50 0.39 ∗ Disjoint 0.45 0.32 0.36 ∗ Full Model 0.46 0.40 0.41 Oracle 0.48 0.64 0.54 ∗ Diseases Search 0.31 0.37 0.32 † No Template 0.32 0.27 0.28 ∗ Disjoint 0.33 0.40 0.35 ∗ Full Model 0.36 0.39 0.37 Oracle 0.59 0.37 0.44 ∗ Table 3: Results of ROUGE-1 evaluation. ∗Significant with respect to our full model for p ≤0.05. † Significant with respect to our full model for p ≤0.10. Since Wikipedia is a live resource, we do not repeat this procedure for our baseline systems. Adding articles from systems which have previously demonstrated poor quality would be improper, especially in Diseases. Therefore, we present this analysis as an additional observation rather than a rigorous technical study. 5 Results Automatic Evaluation The results of this evaluation are shown in Table 3. Our full model outperforms all of the baselines. By surpassing the Disjoint baseline, we demonstrate the benefits of joint classification. 
Furthermore, the high performance of both our full model and the Disjoint baseline relative to the other baselines shows the importance of structure-aware content selection. The Oracle system, which represents an upper bound on our system’s capabilities, performs well. The remaining baselines have different flaws: Articles produced by the No Template baseline tend to focus on a single topic extensively at the expense of breadth, because there are no constraints to ensure diverse topic selection. On the other hand, performance of the Search baseline varies dramatically. This is expected; this baseline relies heavily on both the search engine and individual web pages. The search engine must correctly rank relevant pages, and the web pages must provide the important material first. Analysis of Human Edits The results of our observation of editing patterns are shown in Table 4. These articles have resided on Wikipedia for a period of time ranging from 5-11 months. All of them have been edited, and no articles were removed due to lack of quality. Moreover, ten automatically created articles have been promoted Type Count Total articles 15 Promoted articles 10 Edit types Intra-wiki links 36 Formatting 25 Grammar 20 Minor topic edits 2 Major topic changes 1 Total edits 85 Table 4: Distribution of edits on Wikipedia. by human editors from stubs to regular Wikipedia entries based on the quality and coverage of the material. Information was removed in three cases for being irrelevant, one entire section and two smaller pieces. The most common changes were small edits to formatting and introduction of links to other Wikipedia articles in the body text. 6 Conclusion In this paper, we investigated an approach for creating a multi-paragraph overview article by selecting relevant material from the web and organizing it into a single coherent text. Our algorithm yields significant gains over a structure-agnostic approach. Moreover, our results demonstrate the benefits of structured classification, which outperforms independently trained topical classifiers. Overall, the results of our evaluation combined with our analysis of human edits confirm that the proposed method can effectively produce comprehensive overview articles. This work opens several directions for future research. Diseases and American Film Actors exhibit fairly consistent article structures, which are successfully captured by a simple template creation process. However, with categories that exhibit structural variability, more sophisticated statistical approaches may be required to produce accurate templates. Moreover, a promising direction is to consider hierarchical discourse formalisms such as RST (Mann and Thompson, 1988) to supplement our template-based approach. Acknowledgments The authors acknowledge the support of the NSF (CAREER grant IIS-0448168, grant IIS-0835445, and grant IIS0835652) and NIH (grant V54LM008748). Thanks to Mike Collins, Julia Hirschberg, and members of the MIT NLP group for their helpful suggestions and comments. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors, and do not necessarily reflect the views of the funding organizations. 215 References Eugene Agichtein, Steve Lawrence, and Luis Gravano. 2001. Learning search engine specific query transformations for question answering. In Proceedings of WWW, pages 169– 178. Regina Barzilay and Noemie Elhadad. 2003. Sentence alignment for monolingual comparable corpora. In Proceedings of EMNLP, pages 25–32. 
Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of HLT-NAACL, pages 113–120. Regina Barzilay, Kathleen R. McKeown, and Michael Elhadad. 1999. Information fusion in the context of multidocument summarization. In Proceedings of ACL, pages 550–557. Fadi Biadsy, Julia Hirschberg, and Elena Filatova. 2008. An unsupervised approach to biography production using wikipedia. In Proceedings of ACL/HLT, pages 807–815. James Clarke and Mirella Lapata. 2007. Modelling compression with discourse constraints. In Proceedings of EMNLP-CoNLL, pages 1–11. William W. Cohen, Robert E. Schapire, and Yoram Singer. 1998. Learning to order things. In Proceedings of NIPS, pages 451–457. Michael Collins. 2002. Ranking algorithms for named-entity extraction: Boosting and the voted perceptron. In Proceedings of ACL, pages 489–496. Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. 1992. Intoduction to Algorithms. The MIT Press. Hal Daum´e III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In Proceedings of HLT/EMNLP, pages 97–104. Dina Demner-Fushman and Jimmy Lin. 2007. Answering clinical questions with knowledge-based and statistical techniques. Computational Linguistics, 33(1):63–103. Elena Filatova and John M. Prager. 2005. Tell me what you do and I’ll tell you what you are: Learning occupationrelated activities for biographies. In Proceedings of HLT/EMNLP, pages 113–120. Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen McKeown. 2006. Automatic creation of domain templates. In Proceedings of ACL, pages 207–214. Atsushi Fujii and Tetsuya Ishikawa. 2004. Summarizing encyclopedic term descriptions on the web. In Proceedings of COLING, page 645. Jade Goldstein, Vibhu Mittal, Jaime Carbonell, and Mark Kantrowitz. 2000. Multi-document summarization by sentence extraction. In Proceedings of NAACL-ANLP, pages 40–48. Marti A. Hearst. 1994. Multi-paragraph segmentation of expository text. In Proceedings of ACL, pages 9–16. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Proceedings of ACL, pages 74–81. Inderjeet Mani and Mark T. Maybury. 1999. Advances in Automatic Text Summarization. The MIT Press. William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243–281. Tomasz Marciniak and Michael Strube. 2005. Beyond the pipeline: Discrete optimization in NLP. In Proceedings of CoNLL, pages 136–143. Ryan McDonald. 2007. A study of global inference algorithms in multi-document summarization. In Proceedings of EICR, pages 557–564. Vivi Nastase and Michael Strube. 2008. Decoding wikipedia categories for knowledge acquisition. In Proceedings of AAAI, pages 1219–1224. Vivi Nastase. 2008. Topic-driven multi-document summarization with encyclopedic knowledge and spreading activation. In Proceedings of EMNLP, pages 763–772. Dragomir R. Radev, Hongyan Jing, and Malgorzata Budzikowska. 2000. Centroid-based summarization of multiple documents: sentence extraction, utilitybased evaluation, and user studies. In Proceedings of ANLP/NAACL, pages 21–29. Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press, Cambridge. Benjamin Snyder and Regina Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In Proceedings of HLT-NAACL, pages 300–307. Ralph M. 
Weischedel, Jinxi Xu, and Ana Licuanan. 2004. A hybrid approach to answering biographical questions. In New Directions in Question Answering, pages 59–70. Fei Wu and Daniel S. Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of CIKM, pages 41–50. Ying Zhao, George Karypis, and Usama Fayyad. 2005. Hierarchical clustering algorithms for document datasets. Data Mining and Knowledge Discovery, 10(2):141–168. L. Zhou, M. Ticrea, and Eduard Hovy. 2004. Multidocument biography summarization. In Proceedings of EMNLP, pages 434–441. 216
Learning to Tell Tales: A Data-driven Approach to Story Generation
Neil McIntyre and Mirella Lapata
School of Informatics, University of Edinburgh
10 Crichton Street, Edinburgh, EH8 9AB, UK
[email protected], [email protected]
Abstract
Computational story telling has sparked great interest in artificial intelligence, partly because of its relevance to educational and gaming applications. Traditionally, story generators rely on a large repository of background knowledge containing information about the story plot and its characters. This information is detailed and usually hand crafted. In this paper we propose a data-driven approach for generating short children's stories that does not require extensive manual involvement. We create an end-to-end system that realizes the various components of the generation pipeline stochastically. Our system follows a generate-and-rank approach where the space of multiple candidate stories is pruned by considering whether they are plausible, interesting, and coherent.
1 Introduction
Recent years have witnessed increased interest in the use of interactive language technology in educational and entertainment applications. Computational story telling could play a key role in these applications by effectively engaging learners and assisting them in creating a story. It could also allow teachers to generate stories on demand that suit their classes' needs, and enhance the entertainment value of role-playing games.1 The majority of these games come with a set of pre-specified plots that the players must act out. Ideally, the plot should adapt dynamically in response to the players' actions.
1A role-playing game (RPG) is a game in which the participants assume the roles of fictional characters and act out an adventure.
Computational story telling has a longstanding tradition in the field of artificial intelligence. Early work has been largely inspired by Propp's (1968) typology of narrative structure. Propp identified in Russian fairy tales a small number of recurring units (e.g., the hero is defeated, the villain causes harm) and rules that could be used to describe their relation (e.g., the hero is pursued and then rescued). Story grammars (Thorndyke, 1977) were initially used to capture Propp's high-level plot elements and character interactions. A large body of more recent work views story generation as a form of agent-based planning (Theune et al., 2003; Fass, 2002; Oinonen et al., 2006). The agents act as characters with a list of goals. They form plans of action and try to fulfill them. Interesting stories emerge as agents' plans interact and cause failures and possible replanning.
Perhaps the biggest challenge faced by computational story generators is the amount of world knowledge required to create compelling stories. A hypothetical system must have information about the characters involved, how they interact, what their goals are, and how they influence their environment. Furthermore, all this information must be complete and error-free if it is to be used as input to a planning algorithm. Traditionally, this knowledge is created by hand, and must be recreated for different domains. Even the simple task of adding a new character requires a whole new set of action descriptions and goals.
A second challenge concerns the generation task itself and the creation of stories characterized by high-quality prose. Most story generation systems focus on generating plot outlines, without considering the actual linguistic structures found in the stories they are trying to mimic (but see Callaway and Lester 2002 for a notable exception). In fact, there seems to be little common ground between story generation and natural language generation (NLG), despite extensive research in both fields. The NLG process (Reiter and Dale, 2000) is often viewed as a pipeline consisting of content planning (selecting and structuring the story’s content), microplanning (sentence ag217 gregation, generation of referring expressions, lexical choice), and surface realization (agreement, verb-subject ordering). However, story generation systems typically operate in two phases: (a) creating a plot for the story and (b) transforming it into text (often by means of template-based NLG). In this paper we address both challenges facing computational story telling. We propose a data-driven approach to story generation that does not require extensive manual involvement. Our goal is to create stories automatically by leveraging knowledge inherent in corpora. Stories within the same genre (e.g., fairy tales, parables) typically have similar structure, characters, events, and vocabularies. It is precisely this type of information we wish to extract and quantify. Of course, building a database of characters and their actions is merely the first step towards creating an automatic story generator. The latter must be able to select which information to include in the story, in what order to present it, how to convert it into English. Recent work in natural language generation has seen the development of learning methods for realizing each of these tasks automatically without much hand coding. For example, Duboue and McKeown (2002) and Barzilay and Lapata (2005) propose to learn a content planner from a parallel corpus. Mellish et al. (1998) advocate stochastic search methods for document structuring. Stent et al. (2004) learn how to combine the syntactic structure of elementary speech acts into one or more sentences from a corpus of good and bad examples. And Knight and Hatzivassiloglou (1995) use a language model for selecting a fluent sentence among the vast number of surface realizations corresponding to a single semantic representation. Although successful on their own, these methods have not been yet integrated together into an end-to-end probabilistic system. Our work attempts to do this for the story generation task, while bridging the gap between story generators and NLG systems. Our generator operates over predicate-argument and predicate-predicate co-occurrence statistics gathered from corpora. These are used to produce a large set of candidate stories which are subsequently ranked based on their interestingness and coherence. The top-ranked candidate is selected for presentation and verbalized using a language model interfaced with RealPro (Lavoie and Rambow, 1997), a text generation engine. This generate-and-rank architecture circumvents the complexity of traditional generation This is a fat hen. The hen has a nest in the box. She has eggs in the nest. A cat sees the nest, and can get the eggs. The sun will soon set. The cows are on their way to the barn. One old cow has a bell on her neck. She sees the dog, but she will not run. The dog is kind to the cows. 
Figure 1: Children’s stories from McGuffey’s Eclectic Primer Reader; it contains primary reading matter to be used in the first year of school work. systems, where numerous, often conflicting constraints, have to be encoded during development in order to produce a single high-quality output. As a proof of concept we initially focus on children’s stories (see Figure 1 for an example). These stories exhibit several recurrent patterns and are thus amenable to a data-driven approach. Although they have limited vocabulary and nonelaborate syntax, they nevertheless present challenges at almost all stages of the generation process. Also from a practical point of view, children’s stories have great potential for educational applications (Robertson and Good, 2003). For instance, the system we describe could serve as an assistant to a person who wants suggestions as to what could happen next in a story. In the remainder of this paper, we first describe the components of our story generator (Section 2) and explain how these are interfaced with our story ranker (Section 3). Next, we present the resources and evaluation methodology used in our experiments (Section 4) and discuss our results (Section 5). 2 The Story Generator As common in previous work (e.g., Shim and Kim 2002), we assume that our generator operates in an interactive context. Specifically, the user supplies the topic of the story and its desired length. By topic we mean the entities (or characters) around which the story will revolve. These can be a list of nouns such as dog and duck or a sentence, such as the dog chases the duck. The generator next constructs several possible stories involving these entities by consulting a knowledge base containing information about dogs and ducks (e.g., dogs bark, ducks swim) and their interactions (e.g., dogs chase ducks, ducks love dogs). We conceptualize 218 the dog chases the duck the dog barks the duck runs away the dog catches the duck the duck escapes Figure 2: Example of a simplified story tree. the story generation process as a tree (see Figure 2) whose levels represent different story lengths. For example, a tree of depth 3 will only generate stories with three sentences. The tree encodes many stories efficiently, the nodes correspond to different sentences and there is no sibling order (the tree in Figure 2 can generate three stories). Each sentence in the tree has a score. Story generation amounts to traversing the tree and selecting the nodes with the highest score Specifically, our story generator applies two distinct search procedures. Although we are ultimately searching for the best overall story at the document level, we must also find the most suitable sentences that can be generated from the knowledge base (see Figure 4). The space of possible stories can increase dramatically depending on the size of the knowledge base so that an exhaustive tree search becomes computationally prohibitive. Fortunately, we can use beam search to prune low-scoring sentences and the stories they generate. For example, we may prefer sentences describing actions that are common for their characters. We also apply two additional criteria in selecting good stories, namely whether they are coherent and interesting. At each depth in the tree we maintain the N-best stories. Once we reach the required length, the highest scoring story is presented to the user. In the following we describe the components of our system in more detail. 
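As a rough sketch of this search procedure (our own illustration, with candidate_sentences and score_story standing in for the knowledge-base lookup and the rankers described below):

def generate_story(seed_sentence, length, candidate_sentences, score_story,
                   beam_width=500, n_best=5):
    # candidate_sentences(story) proposes possible next sentences for the
    # story's entities; score_story(story) combines the interest and
    # coherence rankers described in Section 3.
    stories = [[seed_sentence]]
    for _ in range(length - 1):
        expansions = []
        for story in stories:
            for sentence in candidate_sentences(story)[:beam_width]:
                expansions.append(story + [sentence])
        # keep the N-best partial stories at this depth
        expansions.sort(key=score_story, reverse=True)
        stories = expansions[:n_best]
    return stories[0]

In the system itself pruning also happens at the sentence level; this sketch only keeps the simpler story-level beam.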
2.1 Content Planning
As mentioned earlier our generator has access to a knowledge base recording entities and their interactions. These are essentially predicate argument structures extracted from a corpus. In our experiments this knowledge base was created using the RASP relational parser (Briscoe and Carroll, 2002). We collected all verb-subject, verb-object, verb-adverb, and noun-adjective relations from the parser's output and scored them with the mutual information-based metric proposed in Lin (1998):

MI = ln( (∥w,r,w′∥ × ∥∗,r,∗∥) / (∥w,r,∗∥ × ∥∗,r,w′∥) )   (1)

where w and w′ are two words with relation type r. ∗ denotes all words in that particular relation and ∥w,r,w′∥ represents the number of times w,r,w′ occurred in the corpus. These MI scores are used to inform the generation system about likely entity relationships at the sentence level. Table 1 shows high scoring relations for the noun dog extracted from the corpus used in our experiments (see Section 4 for details).

dog:SUBJ:bark     whistle:OBJ:dog
dog:SUBJ:bite     treat:OBJ:dog
dog:SUBJ:see      give:OBJ:dog
dog:SUBJ:like     have:OBJ:dog
hungry:ADJ:dog    lovely:ADJ:dog

Table 1: Relations for the noun dog with high MI scores (SUBJ is a shorthand for subject-of, OBJ for object-of and ADJ for adjective-of).

Note that MI weighs binary relations which in some cases may be likely on their own without making sense in a ternary relation. For instance, although both dog:SUBJ:run and president:OBJ:run are probable we may not want to create the sentence "The dog runs for president". Ditransitive verbs pose a similar problem, where two incongruent objects may appear together (the sentence John gives an apple to the highway is semantically odd, whereas John gives an apple to the teacher would be fine). To help reduce these problems, we need to estimate the likelihood of ternary relations. We therefore calculate the conditional probability:

p(a1,a2 | s,v) = ∥s,v,a1,a2∥ / ∥s,v,∗,∗∥   (2)

where s is the subject of verb v, a1 is the first argument of v and a2 is the second argument of v and v,s,a1 ≠ ε. When a verb takes two arguments, we first consult (2), to see if the combination is likely before backing off to (1).
The knowledge base described above can only inform the generation system about relationships on the sentence level. However, a story created simply by concatenating sentences in isolation will often be incoherent. Investigations into the interpretation of narrative discourse (Asher and Lascarides, 2003) have shown that lexical information plays an important role in determining the discourse relations between propositions.

[Figure 3: Graph encoding (partially ordered) chains of events. Nodes include SUBJ:chase, OBJ:chase, SUBJ:run, SUBJ:escape, SUBJ:fall, OBJ:catch, SUBJ:frighten and SUBJ:jump; edges carry co-occurrence weights.]

Although we don't have an explicit model of rhetorical relations and their effects on sentence ordering, we capture the lexical inter-dependencies between sentences by focusing on events (verbs) and their precedence relationships in the corpus. For every entity in our training corpus we extract event chains similar to those proposed by Chambers and Jurafsky (2008). Specifically, we identify the events every entity relates to and record their (partial) order. We assume that verbs sharing the same arguments are more likely to be semantically related than verbs with no arguments in common. For example, if we know that someone steals and then runs, we may expect the next action to be that they hide or that they are caught.
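Both the relation table and the edges of the event graph are scored with the MI measure in (1). As a rough illustration (our own code, assuming the parser output has already been reduced to a list of (word, relation, word) triples):

import math
from collections import Counter

def mi_scores(triples):
    # triples: iterable of (w, r, w2) tuples, e.g. ("dog", "SUBJ", "bark").
    triples = list(triples)
    joint = Counter(triples)                           # ||w, r, w'||
    left = Counter((w, r) for w, r, _ in triples)      # ||w, r, *||
    right = Counter((r, w2) for _, r, w2 in triples)   # ||*, r, w'||
    rel = Counter(r for _, r, _ in triples)            # ||*, r, *||
    return {(w, r, w2): math.log(n * rel[r] / (left[(w, r)] * right[(r, w2)]))
            for (w, r, w2), n in joint.items()}

# e.g. mi_scores([("dog", "SUBJ", "bark"), ("dog", "SUBJ", "bark"),
#                 ("dog", "OBJ", "see"), ("cat", "SUBJ", "meow")])

The ternary back-off in (2) is a straightforward ratio of counts and is omitted from this sketch.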
In order to track entities and their associated events throughout a text, we first resolve entity mentions using OpenNLP2. The list of events performed by co-referring entities and their grammatical relation (i.e., subject or object) are subsequently stored in a graph. The edges between event nodes are scored using the MI equation given in (1). A fragment of the action graph is shown in Figure 3 (for simplicity, the edges in the example are weighted with co-occurrence frequencies). Contrary to Chambers and Jurafsky (2008) we do not learn global narrative chains over an entire corpus. Currently, we consider local chains of length two and three (i.e., chains of two or three events sharing grammatical arguments). The generator consults the graph when selecting a verb for an entity. It will favor verbs that are part of an event chain (e.g., SUBJ:chase →SUBJ:run →SUBJ:fall in Figure 3). This way, the search space is effectively pruned as finding a suitable verb in the current sentence is influenced by the choice of verb in the next sentence. 2See http://opennlp.sourceforge.net/. 2.2 Sentence Planning So far we have described how we gather knowledge about entities and their interactions, which must be subsequently combined into a sentence. The backbone of our sentence planner is a grammar with subcategorization information which we collected from the lexicon created by Korhonen and Briscoe (2006) and the COMLEX dictionary (Grishman et al., 1994). The grammar rules act as templates. They each take a verb as their head and propose ways of filling its argument slots. This means that when generating a story, the choice of verb will affect the structure of the sentence. The subcategorization templates are weighted by their probability of occurrence in the reference dictionaries. This allows the system to prefer less elaborate grammatical structures. The grammar rules were converted to a format compatible with our surface realizer (see Section 2.3) and include information pertaining to mood, agreement, argument role, etc. Our sentence planner aggregates together information from the knowledge base, without however generating referring expressions. Although this would be a natural extension, we initially wanted to assess whether the stochastic approach advocated here is feasible at all, before venturing towards more ambitious components. 2.3 Surface Realization The surface realization process is performed by RealPro (Lavoie and Rambow (1997)). The system takes an abstract sentence representation and transforms it into English. There are several grammatical issues that will affect the final realization of the sentence. For nouns we must decide whether they are singular or plural, whether they are preceded by a definite or indefinite article or with no article at all. Adverbs can either be pre-verbal or post-verbal. There is also the issue of selecting an appropriate tense for our generated sentences, however, we simply assume all sentences are in the present tense. Since we do not know a priori which of these parameters will result in a grammatical sentence, we generate all possible combinations and select the most likely one according to a language model. We used the SRI toolkit to train a trigram language model on the British National Corpus, with interpolated Kneser-Ney smoothing and perplexity as the scoring metric for the generated sentences. 220 root dog ... bark bark(dog) bark at(dog,OBJ) bark at(dog,duck) bark at(dog,cat) bark(dog,ADV) bark(dog,loudly) hide run duck quack ... run ... fly ... 
Figure 4: Simplified generation example for the input sentence the dog chases the duck. 2.4 Sentence Generation Example It is best to illustrate the generation procedure with a simple example (see Figure 4). Given the sentence the dog chases the duck as input, our generator assumes that either dog or duck will be the subject of the following sentence. This is a somewhat simplistic attempt at generating coherent stories. Centering (Grosz et al., 1995) and other discourse theories argue that topical entities are likely to appear in prominent syntactic positions such as subject or object. Next, we select verbs from the knowledge base that take the words duck and dog as their subject (e.g., bark, run, fly). Our beam search procedure will reduce the list of verbs to a small subset by giving preference to those that are likely to follow chase and have duck and dog as their subjects or objects. The sentence planner gives a set of possible frames for these verbs which may introduce additional entities (see Figure 4). For example, bark can be intransitive or take an object or adverbial complement. We select an object for bark, by retrieving from the knowledge base the set of objects it co-occurs with. Our surface realizer will take structures like “bark(dog,loudly)”, “bark at(dog,cat)”, “bark at(dog,duck)” and generate the sentences the dog barks loudly, the dog barks at the cat and the dog barks at the duck. This procedure is repeated to create a list of possible candidates for the third sentence, and so on. As Figure 4 illustrates, there are many candidate sentences for each entity. In default of generating all of these exhaustively, our system utilizes the MI scores from the knowledge base to guide the search. So, at each choice point in the generation process, e.g., when selecting a verb for an entity or a frame for a verb, we consider the N best alternatives assuming that these are most likely to appear in a good story. 3 Story Ranking We have so far described most modules of our story generator, save one important component, namely the story ranker. As explained earlier, our generator produces stories stochastically, by relying on co-occurrence frequencies collected from the training corpus. However, there is no guarantee that these stories will be interesting or coherent. Engaging stories have some element of surprise and originality in them (Turner, 1994). Our stories may simply contain a list of actions typically performed by the story characters. Or in the worst case, actions that make no sense when collated together. Ideally, we would like to be able to discern interesting stories from tedious ones. Another important consideration is their coherence. We have to ensure that the discourse smoothly transitions from one topic to the next. To remedy this, we developed two ranking functions that assess the candidate stories based on their interest and coherence. Following previous work (Stent et al., 2004; Barzilay and Lapata, 2007) we learn these ranking functions from training data (i.e., stories labeled with numeric values for interestingness and coherence). Interest Model A stumbling block to assessing how interesting a story may be, is that the very notion of interestingness is subjective and not very well understood. Although people can judge fairly reliably whether they like or dislike a story, they have more difficulty isolating what exactly makes it interesting. 
Furthermore, there are virtually no empirical studies investigating the linguistic (surface level) correlates of interestingness. We therefore conducted an experiment where we asked participants to rate a set of human authored stories in terms of interest. Our stories were Aesop’s fables since they resemble the stories we wish to generate. They are fairly short (average length was 3.7 sentences) and with a few characters. We asked participants to judge 40 fables on a set of criteria: plot, events, characters, coherence and interest (using a 5-point rating scale). The fables were split into 5 sets of 8; each participant was randomly assigned one of the 5 sets to judge. We obtained rat221 ings (440 in total) from 55 participants, using the WebExp3 experimental software. We next investigated if easily observable syntactic and lexical features were correlated with interest. Participants gave the fables an average interest rating of 3.05. For each story we extracted the number of tokens and types for nouns, verbs, adverbs and adjectives as well as the number of verb-subject and verb-object relations. Using the MRC Psycholinguistic database4 tokens were also annotated along the following dimensions: number of letters (NLET), number of phonemes (NPHON), number of syllables (NSYL), written frequency in the Brown corpus (Kucera and Francis 1967; K-F-FREQ), number of categories in the Brown corpus (K-F-NCATS), number of samples in the Brown corpus (K-F-NSAMP), familiarity (FAM), concreteness (CONC), imagery (IMAG), age of acquisition (AOA), and meaningfulness (MEANC and MEANP). Correlation analysis was used to assess the degree of linear relationship between interest ratings and the above features. The results are shown in Table 2. As can be seen the highest predictor is the number of objects in a story, followed by the number of noun tokens and types. Imagery, concreteness and familiarity all seem to be significantly correlated with interest. Story length was not a significant predictor. Regressing the best predictors from Table 2 against the interest ratings yields a correlation coefficient of 0.608 (p < 0.05). The predictors account uniquely for 37.2% of the variance in interest ratings. Overall, these results indicate that a model of story interest can be trained using shallow syntactic and lexical features. We used the Aesop’s fables with the human ratings as training data from which we extracted features that shown to be significant predictors in our correlation analysis. Word-based features were summed in order to obtain a representation for the entire story. We used Joachims’s (2002) SVMlight package for training with cross-validation (all parameters set to their default values). The model achieved a correlation of 0.948 (Kendall’s tau) with the human ratings on the test set. Coherence Model As well as being interesting we have to ensure that our stories make sense to the reader. Here, we focus on local coherence, which captures text organization at the level 3See http://www.webexp.info/. 
4http://www.psy.uwa.edu.au/mrcdatabase/uwa_ mrc.htm Interest Interest NTokens 0.188∗∗ NLET 0.120∗ NTypes 0.173∗∗ NPHON 0.140∗∗ VTokens 0.123∗ NSYL 0.125∗∗ VTypes 0.154∗∗ K-F-FREQ 0.054 AdvTokens 0.056 K-F-NCATS 0.137∗∗ AdvTypes 0.051 K-F-NSAMP 0.103∗ AdjTokens 0.035 FAM 0.162∗∗ AdjTypes 0.029 CONC 0.166∗∗ NumSubj 0.150∗∗ IMAG 0.173∗∗ NumObj 0.240∗∗ AOA 0.111∗ MEANC 0.169∗∗ MEANP 0.156∗∗ Table 2: Correlation values for the human ratings of interest against syntactic and lexical features; ∗: p < 0.05, ∗∗: p < 0.01. of sentence to sentence transitions. We created a model of local coherence using using the Entity Grid approach described in Barzilay and Lapata (2007). This approach represents each document as a two-dimensional array in which the columns correspond to entities and the rows to sentences. Each cell indicates whether an entity appears in a given sentence or not and whether it is a subject, object or neither. This entity grid is then converted into a vector of entity transition sequences. Training the model required examples of both coherent and incoherent stories. An artificial training set was created by permuting the sentences of coherent stories, under the assumption that the original story is more coherent than its permutations. The model was trained and tested on the Andrew Lang fairy tales collection5 on a random split of the data. It ranked the original stories higher than their corresponding permutations 67.40% of the time. 4 Experimental Setup In this section we present our experimental set-up for assessing the performance of our story generator. We give details on our training corpus, system, parameters (such as the width of the beam), the baselines used for comparison, and explain how our system output was evaluated. Corpus The generator was trained on 437 stories from the Andrew Lang fairy tale corpus.6 The stories had an average length of 125.18 sentences. The corpus contained 15,789 word tokens. We 5Aesop’s fables were too short to learn a coherence model. 6See http://www.mythfolklore.net/andrewlang/. 222 discarded word tokens that did not appear in the Children’s Printed Word Database7, a database of printed word frequencies as read by children aged between five and nine. Story search When searching the story space, we set the beam width to 500. This means that we allow only 500 sentences to be considered at a particular depth before generating the next set of sentences in the story. For each entity we select the five most likely events and event sequences. Analogously, we consider the five most likely subcategorization templates for each verb. Considerable latitude is available when applying the ranking functions. We may use only one of them, or one after the other, or both of them. To evaluate which system configuration was best, we asked two human evaluators to rate (on a 1–5 scale) stories produced in the following conditions: (a) score the candidate stories using the interest function first and then coherence (and vice versa), (b) score the stories simultaneously using both rankers and select the story with the highest score. We also examined how best to prune the search space, i.e., by selecting the highest scoring stories, the lowest scoring one, or simply at random. We created ten stories of length five using the fairy tale corpus for each permutation of the parameters. The results showed that the evaluators preferred the version of the system that applied both rankers simultaneously and maintained the highest scoring stories in the beam. 
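The coherence ranker above operates over the entity-grid representation of Barzilay and Lapata (2007). A minimal sketch of the grid and its transition features follows; it is our own simplification and assumes sentences have already been reduced to entity/role pairs.

from collections import Counter
from itertools import product

ROLES = ("S", "O", "X", "-")   # subject, object, other mention, absent

def entity_grid(sentences, entities):
    # sentences: one {entity: role} dict per sentence, role in {"S", "O", "X"}.
    return {e: [sent.get(e, "-") for sent in sentences] for e in entities}

def transition_features(grid, length=2):
    # Probability of each role transition, pooled over all entity columns.
    counts, total = Counter(), 0
    for column in grid.values():
        for i in range(len(column) - length + 1):
            counts[tuple(column[i:i + length])] += 1
            total += 1
    return {t: (counts[t] / total if total else 0.0)
            for t in product(ROLES, repeat=length)}

# Toy example: "The dog chases the duck. The duck escapes."
grid = entity_grid([{"dog": "S", "duck": "O"}, {"duck": "S"}], ["dog", "duck"])
features = transition_features(grid)   # e.g. features[("O", "S")] > 0

A ranking model is then trained to prefer the feature vectors of the original stories over those of their sentence permutations.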
Baselines We compared our system against two simpler alternatives. The first one does not use a beam. Instead, it decides deterministically how to generate a story on the basis of the most likely predicate-argument and predicate-predicate counts in the knowledge base. The second one creates a story randomly without taking any cooccurrence frequency into account. Neither of these systems therefore creates more than one story hypothesis whilst generating. Evaluation The system generated stories for 10 input sentences. These were created using commonly occurring sentences in the fairy tales corpus (e.g., The family has the baby, The monkey climbs the tree, The giant guards the child). Each system generated one story for each sentence resulting in 30 (3×10) stories for evaluation. All stories had the same length, namely five sentences. Human judges (21 in total) were asked to rate the 7http://www.essex.ac.uk/psychology/cpwd/ System Fluency Coherence Interest Random 1.95∗ 2.40∗ 2.09∗ Deterministic 2.06∗ 2.53∗ 2.09∗ Rank-based 2.20 2.65 2.20 Table 3: Human evaluation results: mean story ratings for three versions of our system; ∗: significantly different from Rank-based. stories on a scale of 1 to 5 for fluency (was the sentence grammatical?), coherence (does the story make sense overall?) and interest (how interesting is the story?). The stories were presented in random order. Participants were told that all stories were generated by a computer program. They were instructed to rate more favorably interesting stories, stories that were comprehensible and overall grammatical. 5 Results Our results are summarized in Table 3 which lists the average human ratings for the three systems. We performed an Analysis of Variance (ANOVA) to examine the effect of system type on the story generation task. Statistical tests were carried out on the mean of the ratings shown in Table 3 for fluency, coherence, and interest. We observed a reliable effect of system type by subjects and items on all three dimensions. Post-hoc Tukey tests revealed that the stories created with our rankbased system are perceived as significantly better in terms of fluency, interest, and coherence than those generated by both the deterministic and random systems (α < 0.05). The deterministic system is not significantly better than the random one except in terms of coherence. These results are not entirely surprising. The deterministic system maintains a local restricted view of what constitutes a good story. It creates a story by selecting isolated entity-event relationships with high MI scores. As a result, the stories are unlikely to have a good plot. Moreover, it tends to primarily favor verb-object or verb-subject relations, since these are most frequent in the corpus. The stories thus have little structural variation and feel repetitive. The random system uses even less information in generating a story (entityaction relationships are chosen at random without taking note of the MI scores). In contrast to these baselines, the rank-based system assesses candidate stories more globally. It thus favors coherent stories, with varied word choice and structure. 223 The family has the baby The giant guards the child Random The family has the baby. The family is how to empty up to a fault. The baby vanishes into the cave. The family meets with a stranger. The baby says for the boy to fancy the creature. The giant guards the child. The child calls for the window to order the giant. The child suffers from a pleasure. 
The child longer hides the forest. The child reaches presently. Determ The family has the baby. The family rounds up the waist. The family comes in. The family wonders. The family meets with the terrace. The giant guards the child. The child rescues the clutch. The child beats down on a drum. The child feels out of a shock. The child hears from the giant. Rank-based The family has the baby. The baby is to seat the lady at the back. The baby sees the lady in the family. The family marries a lady for the triumph. The family quickly wishes the lady vanishes. The giant guards the child. The child rescues the son from the power. The child begs the son for a pardon. The giant cries that the son laughs the happiness out of death. The child hears if the happiness tells a story. Table 4: Stories generated by the random, deterministic, and rank-based systems. A note of caution here concerns referring expressions which our systems cannot at the moment generate. This may have disadvantaged the stories overall, rendering them stylistically awkward. The stories generated by both the deterministic and random systems are perceived as less interesting in comparison to the rank-based system. This indicates that taking interest into account is a promising direction even though the overall interestingness of the stories we generate is somewhat low (see third column in Table 3). Our interest ranking function was trained on well-formed human authored stories. It is therefore possible that the ranker was not as effective as it could be simply because it was applied to out-of-domain data. An interesting extension which we plan for the future is to evaluate the performance of a ranker trained on machine generated stories. Table 4 illustrates the stories generated by each system for two input sentences. The rank-based stories read better overall and are more coherent. Our subjects also gave them high interest scores. The deterministic system tends to select simplistic sentences which although read well by themselves do not lead to an overall narrative. Interestingly, the story generated by the random system for the input The family has the baby, scored high on interest too. The story indeed contains interesting imagery (e.g. The baby vanishes into the cave) although some of the sentences are syntactically odd (e.g. The family is how to empty up to a fault). 6 Conclusions and Future Work In this paper we proposed a novel method to computational story telling. Our approach has three key features. Firstly, story plot is created dynamically by consulting an automatically created knowledge base. Secondly, our generator realizes the various components of the generation pipeline stochastically, without extensive manual coding. Thirdly, we generate and store multiple stories efficiently in a tree data structure. Story creation amounts to traversing the tree and selecting the nodes with the highest score. We develop two scoring functions that rate stories in terms of how coherent and interesting they are. Experimental results show that these bring improvements over versions of the system that rely solely on the knowledge base. Overall, our results indicate that the overgeneration-and-ranking approach advocated here is viable in producing short stories that exhibit narrative structure. As our system can be easily rertrained on different corpora, it can potentially generate stories that vary in vocabulary, style, genre, and domain. An important future direction concerns a more detailed assessment of our search procedure. 
Currently we don’t have a good estimate of the type of stories being overlooked due to the restrictions we impose on the search space. An appealing alternative is the use of Genetic Algorithms (Goldberg, 1989). The operations of mutation and crossover have the potential of creating more varied and original stories. Our generator would also benefit from an explicit model of causality which is currently approximated by the entity chains. Such a model could be created from existing resources such as ConceptNet (Liu and Davenport, 2004), a freely available commonsense knowledge base. Finally, improvements such as the generation of referring expressions and the modeling of selectional restrictions would create more fluent stories. Acknowledgements The authors acknowledge the support of EPSRC (grant GR/T04540/01). We are grateful to Richard Kittredge for his help with RealPro. Special thanks to Johanna Moore for insightful comments and suggestions. 224 References Asher, Nicholas and Alex Lascarides. 2003. Logics of Conversation. Cambridge University Press. Barzilay, Regina and Mirella Lapata. 2005. Collective content selection for concept-to-text generation. In Proceedings of the HLT/EMNLP. Vancouver, pages 331–338. Barzilay, Regina and Mirella Lapata. 2007. Modeling local coherence: An entity-based approach. Computational Linguistics 34(1):1–34. Briscoe, E. and J. Carroll. 2002. Robust accurate statistical annotation of general text. In Proceedings of the 3rd LREC. Las Palmas, Gran Canaria, pages 1499–1504. Callaway, Charles B. and James C. Lester. 2002. Narrative prose generation. Artificial Intelligence 2(139):213–252. Chambers, Nathanael and Dan Jurafsky. 2008. Unsupervised learning of narrative event chains. In Proceedings of ACL08: HLT. Columbus, OH, pages 789–797. Duboue, Pablo A. and Kathleen R. McKeown. 2002. Content planner construction via evolutionary algorithms and a corpus-based fitness function. In Proceedings of the 2nd INLG. Ramapo Mountains, NY. Fass, S. 2002. Virtual Storyteller: An Approach to Computational Storytelling. Master’s thesis, Dept. of Computer Science, University of Twente. Goldberg, David E. 1989. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA. Grishman, Ralph, Catherine Macleod, and Adam Meyers. 1994. COMLEX syntax: Building a computational lexicon. In Proceedings of the 15th COLING. Kyoto, Japan, pages 268–272. Grosz, Barbara J., Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics 21(2):203–225. Joachims, Thorsten. 2002. Optimizing search engines using clickthrough data. In Proceedings of the 8th ACM SIGKDD. Edmonton, AL, pages 133–142. Knight, Kevin and Vasileios Hatzivassiloglou. 1995. Twolevel, many-paths generation. In Proceedings of the 33rd ACL. Cambridge, MA, pages 252–260. Korhonen, Y. Krymolowski, A. and E.J. Briscoe. 2006. A large subcategorization lexicon for natural language processing applications. In Proceedings of the 5th LREC. Genova, Italy. Kucera, Henry and Nelson Francis. 1967. Computational Analysis of Present-day American English. Brown University Press, Providence, RI. Lavoie, Benoit and Owen Rambow. 1997. A fast and portable realizer for text generation systems. In Proceedings of the 5th ANCL. Washington, D.C., pages 265–268. Lin, Dekang. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 17th COLING. Montr´eal, QC, pages 768–774. 
Liu, Hugo and Glorianna Davenport. 2004. ConceptNet: a practical commonsense reasoning toolkit. BT Technology Journal 22(4):211–226. Mellish, Chris, Alisdair Knott, Jon Oberlander, and Mick O’Donnell. 1998. Experiments using stochastic search for text planning. In Eduard Hovy, editor, Proceedings of the 9th INLG. New Brunswick, NJ, pages 98–107. Oinonen, K.M., M. Theune, A. Nijholt, and J.R.R. Uijlings. 2006. Designing a story database for use in automatic story generation. In R. Harper, M. Rauterberg, and M. Combetto, editors, Entertainment Computing – ICEC 2006. Springer Verlag, Berlin, volume 4161 of Lecture Notes in Computer Science, pages 298–301. Propp, Vladimir. 1968. The Morphology of Folk Tale. University of Texas Press, Austin, TX. Reiter, E and R Dale. 2000. Building Natural-Language Generation Systems. Cambridge University Press. Robertson, Judy and Judith Good. 2003. Ghostwriter: A narrative virtual environment for children. In Proceedings of IDC2003. Preston, England, pages 85–91. Shim, Yunju and Minkoo Kim. 2002. Automatic short story generator based on autonomous agents. In Proceedings of PRIMA. London, UK, pages 151–162. Stent, Amanda, Rashmi Prasad, and Marilyn Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of the 42nd ACL. Barcelona, Spain, pages 79–86. Theune, M., S. Faas, D.K.J. Heylen, and A. Nijholt. 2003. The virtual storyteller: Story creation by intelligent agents. In S. Gbel, N. Braun, U. Spierling, J. Dechau, and H. Diener, editors, TIDSE-2003. Fraunhofer IRB Verlag, Darmstadt, pages 204–215. Thorndyke, Perry W. 1977. Cognitive structures in comprehension and memory of narrative discourse. Cognitive Psychology 9(1):77–110. Turner, Scott T. 1994. The creative process: A computer model of storytelling and creativity. Erlbaum, Hillsdale, NJ. 225
2009
25
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 226–234, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Recognizing Stances in Online Debates Swapna Somasundaran Dept. of Computer Science University of Pittsburgh Pittsburgh, PA 15260 [email protected] Janyce Wiebe Dept. of Computer Science University of Pittsburgh Pittsburgh, PA 15260 [email protected] Abstract This paper presents an unsupervised opinion analysis method for debate-side classification, i.e., recognizing which stance a person is taking in an online debate. In order to handle the complexities of this genre, we mine the web to learn associations that are indicative of opinion stances in debates. We combine this knowledge with discourse information, and formulate the debate side classification task as an Integer Linear Programming problem. Our results show that our method is substantially better than challenging baseline methods. 1 Introduction This paper presents a method for debate-side classification, i.e., recognizing which stance a person is taking in an online debate posting. In online debate forums, people debate issues, express their preferences, and argue why their viewpoint is right. In addition to expressing positive sentiments about one’s preference, a key strategy is also to express negative sentiments about the other side. For example, in the debate “which mobile phone is better: iPhone or Blackberry,” a participant on the iPhone side may explicitly assert and rationalize why the iPhone is better, and, alternatively, also argue why the Blackberry is worse. Thus, to recognize stances, we need to consider not only which opinions are positive and negative, but also what the opinions are about (their targets). Participants directly express their opinions, such as “The iPhone is cool,” but, more often, they mention associated aspects. Some aspects are particular to one topic (e.g., Active-X is part of IE but not Firefox), and so distinguish between them. But even an aspect the topics share may distinguish between them, because people who are positive toward one topic may value that aspect more. For example, both the iPhone and Blackberry have keyboards, but we observed in our corpus that positive opinions about the keyboard are associated with the pro Blackberry stance. Thus, we need to find distinguishing aspects, which the topics may or may not share. Complicating the picture further, participants may concede positive aspects of the opposing issue or topic, without coming out in favor of it, and they may concede negative aspects of the issue or topic they support. For example, in the following sentence, the speaker says positive things about the iPhone, even though he does not prefer it: “Yes, the iPhone may be cool to take it out and play with and show off, but past that, it offers nothing.” Thus, we need to consider discourse relations to sort out which sentiments in fact reveal the writer’s stance, and which are merely concessions. Many opinion mining approaches find negative and positive words in a document, and aggregate their counts to determine the final document polarity, ignoring the targets of the opinions. Some work in product review mining finds aspects of a central topic, and summarizes opinions with respect to these aspects. However, they do not find distinguishing factors associated with a preference for a stance. 
Finally, while other opinion analysis systems have considered discourse information, they have not distinguished between concessionary and non-concessionary opinions when determining the overall stance of a document. This work proposes an unsupervised opinion analysis method to address the challenges described above. First, for each debate side, we mine the web for opinion-target pairs that are associated with a preference for that side. This information is employed, in conjunction with discourse information, in an Integer Linear Programming (ILP) framework. This framework combines the individual pieces of information to arrive at debate-side 226 classifications of posts in online debates. The remainder of this paper is organized as follows. We introduce our debate genre in Section 2 and describe our method in Section 3. We present the experiments in Section 4 and analyze the results in Section 5. Related work is in Section 6, and the conclusions are in Section 7. 2 The Debate Genre In this section, we describe our debate data, and elaborate on characteristic ways of expressing opinions in this genre. For our current work, we use the online debates from the website http://www.convinceme.net.1 In this work, we deal only with dual-sided, dual-topic debates about named entities, for example iPhone vs. Blackberry, where topic1 = iPhone, topic2 =Blackberry, side1 = pro-iPhone, and side2=pro-Blackberry. Our test data consists of posts of 4 debates: Windows vs. Mac, Firefox vs. Internet Explorer, Firefox vs. Opera, and Sony Ps3 vs. Nintendo Wii. The iPhone vs. Blackberry debate and two other debates, were used as development data. Given below are examples of debate posts. Post 1 is taken from the iPhone vs. Blackberry debate, Post 2 is from the Firefox vs. Internet Explorer debate, and Post 3 is from the Windows vs. Mac debate: (1) While the iPhone may appeal to younger generations and the BB to older, there is no way it is geared towards a less rich population. In fact it’s exactly the opposite. It’s a gimmick. The initial purchase may be half the price, but when all is said and done you pay at least $200 more for the 3g. (2) In-line spell check...helps me with big words like onomatopoeia (3) Apples are nice computers with an exceptional interface. Vista will close the gap on the interface some but Apple still has the prettiest, most pleasing interface and most likely will for the next several years. 2.1 Observations As described in Section 1, the debate genre poses significant challenges to opinion analysis. This 1http://www.forandagainst.com and http://www.createdebate.com are other similar debating websites. subsection elaborates upon some of the complexities. Multiple polarities to argue for a side. Debate participants, in advocating their choice, switch back and forth between their opinions towards the sides. This makes it difficult for approaches that use only positive and negative word counts to decide which side the post is on. Posts 1 and 3 illustrate this phenomenon. Sentiments towards both sides (topics) within a single post. The above phenomenon gives rise to an additional problem: often, conflicting sides (and topics) are addressed within the same post, sometimes within the same sentence. The second sentence of Post 3 illustrates this, as it has opinions about both Windows and Mac. Differentiating aspects and personal preferences. People seldom repeatedly mention the topic/side; they show their evaluations indirectly, by evaluating aspects of each topic/side. 
Differentiating aspects determine the debate-post’s side. Some aspects are unique to one side/topic or the other, e.g., “3g” in Example 1 and “inline spell check” in Example 2. However, the debates are about topics that belong to the same domain and which therefore share many aspects. Hence, a purely ontological approach of finding “has-a” and “is-a” relations, or an approach looking only for product specifications, would not be sufficient for finding differentiating features. When the two topics do share an aspect (e.g., a keyboard in the iPhone vs. Blackberry debate), the writer may perceive it to be more positive for one than the other. And, if the writer values that aspect, it will influence his or her overall stance. For example, many people prefer the Blackberry keyboard over the iPhone keyboard; people to whom phone keyboards are important are more likely to prefer the Blackberry. Concessions. While debating, participants often refer to and acknowledge the viewpoints of the opposing side. However, they do not endorse this rival opinion. Uniform treatment of all opinions in a post would obviously cause errors in such cases. The first sentence of Example 1 is an instance of this phenomenon. The participant concedes that the iPhone appeals to young consumers, but this positive opinion is opposite to his overall stance. 227 DIRECT OBJECT Rule: dobj(opinion, target) In words: The target is the direct object of the opinion Example: I loveopinion1 Firefoxtarget1 and defendedopinion2 ittarget2 NOMINAL SUBJECT Rule: nsubj(opinion, target) In words: The target is the subject of the opinion Example: IEtarget breaksopinion with everything. ADJECTIVAL MODIFIER Rule: amod(target, opinion) In words: The opinion is an adjectival modifier of the target Example: The annoyingopinion popuptarget PREPOSITIONAL OBJECT Rule: if prep(target1,IN) ⇒pobj(IN, target2) In words: The prepositional object of a known target is also a target of the same opinion Example: The annoyingopinion popuptarget1 in IEtarget2 (“popup” and “IE” are targets of “annoying”) RECURSIVE MODIFIERS Rule: if conj(adj2, opinionadj1) ⇒amod(target, adj2) In words: If the opinion is an adjective (adj1) and it is conjoined with another adjective (adj2), then the opinion is tied to what adj2 modifies Example: It is a powerfulopinion(adj1) and easyopinion(adj2) applicationtarget (“powerful” is attached to the target “application” via the adjective “easy”) Table 1: Examples of syntactic rules for finding targets of opinions 3 Method We propose an unsupervised approach to classifying the stance of a post in a dual-topic debate. For this, we first use a web corpus to learn preferences that are likely to be associated with a side. These learned preferences are then employed in conjunction with discourse constraints to identify the side for a given post. 3.1 Finding Opinions and Pairing them with Targets We need to find opinions and pair them with targets, both to mine the web for general preferences and to classify the stance of a debate post. We use straightforward methods, as these tasks are not the focus of this paper. To find opinions, we look up words in a subjectivity lexicon: all instances of those words are treated as opinions. An opinion is assigned the prior polarity that is listed for that word in the lexicon, except that, if the prior polarity is positive or negative, and the instance is modified by a negation word (e.g., “not”), then the polarity of that instance is reversed. 
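As a small illustration of the opinion-finding step just described, the sketch below looks tokens up in a prior-polarity dictionary and reverses positive or negative hits when a negator precedes them. The toy lexicon entries, the negator list, and the adjacent-token treatment of negation are simplifications for the sake of the example; they are not the actual lexicon or the negation scope handling used in the paper.

PRIOR_POLARITY = {"love": "+", "pleasing": "+", "annoying": "-", "breaks": "-"}  # toy entries
NEGATORS = {"not", "never", "no"}

def find_opinions(tokens):
    # Return (index, word, polarity) triples for every lexicon hit in a token list.
    opinions = []
    for i, word in enumerate(tokens):
        polarity = PRIOR_POLARITY.get(word.lower())
        if polarity is None:
            continue
        # Reverse a positive/negative prior polarity under negation.
        if polarity in ("+", "-") and i > 0 and tokens[i - 1].lower() in NEGATORS:
            polarity = "+" if polarity == "-" else "-"
        opinions.append((i, word, polarity))
    return opinions

# find_opinions("The popup is not annoying".split()) -> [(4, 'annoying', '+')]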
We use the subjectivity lexicon of (Wilson et al., 2005),2 which contains approximately 8000 words which may be used to express opinions. Each entry consists of a subjective word, its prior polarity (positive (+), negative (−), neutral (∗)), morphological information, and part of speech information. To pair opinions with targets, we built a rulebased system based on dependency parse information. The dependency parses are obtained using 2Available at http://www.cs.pitt.edu/mpqa. the Stanford parser.3 We developed the syntactic rules on separate data that is not used elsewhere in this paper. Table 1 illustrates some of these rules. Note that the rules are constructed (and explained in Table 1) with respect to the grammatical relation notations of the Stanford parser. As illustrated in the table, it is possible for an opinion to have more than one target. In such cases, the single opinion results in multiple opinion-target pairs, one for each target. Once these opinion-target pairs are created, we mask the identity of the opinion word, replacing the word with its polarity. Thus, the opiniontarget pair is converted to a polarity-target pair. For instance, “pleasing-interface” is converted to interface+. This abstraction is essential for handling the sparseness of the data. 3.2 Learning aspects and preferences from the web We observed in our development data that people highlight the aspects of topics that are the bases for their stances, both positive opinions toward aspects of the preferred topic, and negative opinions toward aspects of the dispreferred one. Thus, we decided to mine the web for aspects associated with a side in the debate, and then use that information to recognize the stances expressed in individual posts. Previous work mined web data for aspects associated with topics (Hu and Liu, 2004; Popescu et al., 2005). In our work, we search for aspects associated with a topic, but particularized to polarity. Not all aspects associated with a topic are 3http://nlp.stanford.edu/software/lex-parser.shtml. 228 side1 (pro-iPhone) side2 (pro-blackberry) termp P(iPhone+|termp) P(blackberry−|termp) P(iPhone−|termp) P(blackberry+|termp) storm+ 0.227 0.068 0.022 0.613 storm− 0.062 0.843 0.06 0.03 phone+ 0.333 0.176 0.137 0.313 e-mail+ 0 0.333 0.166 0.5 ipod+ 0.5 0 0.33 0 battery− 0 0 0.666 0.333 network− 0.333 0 0.666 0 keyboard+ 0.09 0.12 0 0.718 keyboard− 0.25 0.25 0.125 0.375 Table 2: Probabilities learned from the web corpus (iPhone vs. blackberry debate) discriminative with respect to stance; we hypothesized that, by including polarity, we would be more likely to find useful associations. An aspect may be associated with both of the debate topics, but not, by itself, be discriminative between stances toward the topics. However, opinions toward that aspect might discriminate between them. Thus, the basic unit in our web mining process is a polarity-target pair. Polarity-target pairs which explicitly mention one of the topics are used to anchor the mining process. Opinions about relevant aspects are gathered from the surrounding context. For each debate, we downloaded weblogs and forums that talk about the main topics (corresponding to the sides) of that debate. For example, for the iPhone vs. Blackberry debate, we search the web for pages containing “iPhone” and “Blackberry.” We used the Yahoo search API and imposed the search restriction that the pages should contain both topics in the http URL. This ensured that we downloaded relevant pages. 
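Returning briefly to the opinion-target pairing step of Section 3.1, the first three rules of Table 1 can be read as simple patterns over dependency relations. The sketch below is deliberately parser-agnostic: it assumes dependencies are given as (relation, head_index, dependent_index) triples and opinions as a set of token indices, and it omits the prepositional-object and recursive-modifier rules; it is an illustration of the idea rather than the authors' rule system.

def pair_opinions_with_targets(opinion_indices, dependencies):
    # dependencies: iterable of (relation, head, dependent) token-index triples.
    pairs = []
    for rel, head, dep in dependencies:
        if rel in ("dobj", "nsubj") and head in opinion_indices:
            # DIRECT OBJECT / NOMINAL SUBJECT: the dependent is the target.
            pairs.append((head, dep))
        elif rel == "amod" and dep in opinion_indices:
            # ADJECTIVAL MODIFIER: the modified noun is the target.
            pairs.append((dep, head))
    return pairs

# "The annoying popup": tokens The(0) annoying(1) popup(2), dependency ("amod", 2, 1);
# with opinion index 1 this yields [(1, 2)], i.e. annoying -> popup.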
An average of 3000 documents were downloaded per debate. We apply the method described in Section 3.1 to the downloaded web pages. That is, we find all instances of words in the lexicon, extract their targets, and mask the words with their polarities, yielding polarity-target pairs. For example, suppose the sentence “The interface is pleasing” is in the corpus. The system extracts the pair “pleasing-interface,” which is masked to “positive-interface,” which we notate as interface+. If the target in a polarity-target pair happens to be one of the topics, we select the polarity-target pairs in its vicinity for further processing (the rest are discarded). The intuition behind this is that, if someone expresses an opinion about a topic, he or she is likely to follow it up with reasons for that opinion. The sentiments in the surrounding context thus reveal factors that influence the preference or dislike towards the topic. We define the vicinity as the same sentence plus the following 5 sentences. Each unique target word targeti in the web corpus, i.e., each word used as the target of an opinion one or more times, is processed to generate the following conditional probabilities. P(topicq j|targetp i ) = #(topicq j, targetp i ) #targetp i (1) where p = {+,−,∗} and q = {+,−,∗} denote the polarities of the target and the topic, respectively; j = {1, 2}; and i = {1...M}, where M is the number of unique targets in the corpus. For example, P(Mac+|interface+) is the probability that “interface” is the target of a positive opinion that is in the vicinity of a positive opinion toward “Mac.” Table 2 lists some of the probabilities learned by this approach. (Note that the neutral cases are not shown.) 3.2.1 Interpreting the learned probabilities Table 2 contains examples of the learned probabilities. These probabilities align with what we qualitatively found in our development data. For example, the opinions towards “Storm” essentially follow the opinions towards “Blackberry;” that is, positive opinions toward “Storm” are usually found in the vicinity of positive opinions toward “Blackberry,” and negative opinions toward “Storm” are usually found in the vicinity of negative opinions toward “Blackberry” (for example, in the row for storm+, P(blackberry+|storm+) is much higher than the other probabilities). Thus, an opinion expressed about “Storm” is usually the opinion one has toward “Blackberry.” This is expected, as Storm is a type of Blackberry. A similar example is ipod+, which follows the opinion toward the iPhone. This is interesting because an 229 iPod is not a phone; the association is due to preference for the brand. In contrast, the probability distribution for “phone” does not show a preference for any one side, even though both iPhone and Blackberry are phones. This indicates that opinions towards phones in general will not be able to distinguish between the debate sides. Another interesting case is illustrated by the probabilities for “e-mail.” People who like e-mail capability are more likely to praise the Blackberry, or even criticize the iPhone — they would thus belong to the pro-Blackberry camp. While we noted earlier that positive evaluations of keyboards are associated with positive evaluations of the Blackberry (by far the highest probability in that row), negative evaluations of keyboards, are, however, not a strong discriminating factor. 
For the other entries in the table, we see that criticisms of batteries and the phone network are more associated with negative sentiments towards the iPhones. The possibility of these various cases motivates our approach, in which opinions and their polarities are considered when searching for associations between debate topics and their aspects. 3.3 Debate-side classification Once we have the probabilities collected from the web, we can build our classifier to classify the debate posts. Here again, we use the process described in Section 3.1 to extract polarity-target pairs for each opinion expressed in the post. Let N be the number of instances of polarity-target pairs in the post. For each instance Ij (j = {1...N}), we look up the learned probabilities of Section 3.2 to create two scores, wj and uj: wj = P(topic+ 1 |targetp i ) + P(topic− 2 |targetp i ) (2) uj = P(topic− 1 |targetp i ) + P(topic+ 2 |targetp i ) (3) where targetp i is the polarity-target type of which Ij is an instance. Score wj corresponds to side1 and uj corresponds to side2. A point to note is that, if a target word is repeated, and it occurs in different polarity-target instances, it is counted as a separate instance each time — that is, here we account for tokens, not types. Via Equations 2 and 3, we interpret the observed polarity-target instance Ij in terms of debate sides. We formulate the problem of finding the overall side of the post as an Integer Linear Programming (ILP) problem. The side that maximizes the overall side-score for the post, given all the N instances Ij, is chosen by maximizing the objective function N X j=1 (wjxj + ujyj) (4) subject to the following constraints xj ∈{0, 1}, ∀j (5) yj ∈{0, 1}, ∀j (6) xj + yj = 1, ∀j (7) xj −xj−1 = 0, j ∈{2..N} (8) yj −yj−1 = 0, j ∈{2..N} (9) Equations 5 and 6 implement binary constraints. Equation 7 enforces the constraint that each Ij can belong to exactly one side. Finally, Equations 8 and 9 ensure that a single side is chosen for the entire post. 3.4 Accounting for concession As described in Section 2, debate participants often acknowledge the opinions held by the opposing side. We recognize such discourse constructs using the Penn Discourse Treebank (Prasad et al., 2007) list of discourse connectives. In particular, we use the list of connectives from the Concession and Contra-expectation category. Examples of connectives in these categories are “while,” “nonetheless,” “however,” and “even if.” We use approximations to finding the arguments to the discourse connectives (ARG1 and ARG2 in Penn Discourse Treebank terms). If the connective is mid-sentence, the part of the sentence prior to the connective is considered conceded, and the part that follows the connective is considered nonconceded. An example is the second sentence of Example 3. If, on the other hand, the connective is sentence-initial, the sentence is split at the first comma that occurs mid sentence. The first part is considered conceded, and the second part is considered non-conceded. An example is the first sentence of Example 1. The opinions occurring in the conceded part are interpreted in reverse. That is, the weights corresponding to the sides wj and uj are interchanged in equation 4. Thus, conceded opinions are effectively made to count towards the opposing side. 230 4 Experiments On http://www.convinceme.net, the html page for each debate contains side information for each post (side1 is blue in color and side2 is green). This gives us automatically labeled data for our evaluations. 
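As a compact summary of the classification step of Sections 3.3 and 3.4: because constraints (7)-(9) force every instance onto the same side, the ILP in Equations (4)-(9) reduces to comparing the two summed scores, with conceded instances contributing to the opposite side. The sketch below assumes the learned probabilities are stored in a dictionary keyed by (topic, topic polarity, target, target polarity); that data layout, and the helper itself, are illustrative rather than the authors' implementation.

def classify_post(instances, prob, topic1, topic2):
    # instances: list of (target, target_polarity, conceded) triples for one post.
    side1 = side2 = 0.0
    for target, pol, conceded in instances:
        # Equations (2) and (3): scores for side1 and side2 for this instance.
        w = prob.get((topic1, "+", target, pol), 0.0) + prob.get((topic2, "-", target, pol), 0.0)
        u = prob.get((topic1, "-", target, pol), 0.0) + prob.get((topic2, "+", target, pol), 0.0)
        if conceded:
            w, u = u, w   # conceded opinions count towards the opposing side (Section 3.4)
        side1 += w
        side2 += u
    return "side1" if side1 >= side2 else "side2"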
For each of the 4 debates in our test set, we use posts with at least 5 sentences for evaluation. 4.1 Baselines We implemented two baselines: the OpTopic system that uses topic information only, and the OpPMI system that uses topic as well as related word (noun) information. All systems use the same lexicon, as well as exactly the same processes for opinion finding and opinion-target pairing. The OpTopic system This system considers only explicit mentions of the topic for the opinion analysis. Thus, for this system, the step of opinion-target pairing only finds all topic+ 1 , topic− 1 , topic+ 2 , topic− 2 instances in the post (where, for example, an instance of topic+ 1 is a positive opinion whose target is explicitly topic1). The polarity-topic pairs are counted for each debate side according to the following equations. score(side1) = #topic+ 1 + #topic− 2 (10) score(side2) = #topic− 1 + #topic+ 2 (11) The post is assigned the side with the higher score. The OpPMI system This system finds opiniontarget pairs for not only the topics, but also for the words in the debate that are significantly related to either of the topics. We find semantic relatedness of each noun in the post with the two main topics of the debate by calculating the Pointwise Mutual Information (PMI) between the term and each topic over the entire web corpus. We use the API provided by the Measures of Semantic Relatedness (MSR)4 engine for this purpose. The MSR engine issues Google queries to retrieve documents and finds the PMI between any two given words. Table 3 lists PMIs between the topics and the words from Table 2. Each noun k is assigned to the topic with the higher PMI score. That is, if PMI(topic1,k) > PMI(topic2,k) ⇒k= topic1 and if 4http://cwl-projects.cogsci.rpi.edu/msr/ PMI(topic2,k) > PMI(topic1,k) ⇒k= topic2 Next, the polarity-target pairs are found for the post, as before, and Equations 10 and 11 are used to assign a side to the post as in the OpTopic system, except that here, related nouns are also counted as instances of their associated topics. word iPhone blackberry storm 0.923 0.941 phone 0.908 0.885 e-mail 0.522 0.623 ipod 0.909 0.976 battery 0.974 0.927 network 0.658 0.961 keyboard 0.961 0.983 Table 3: PMI of words with the topics 4.2 Results Performance is measured using the following metrics: Accuracy ( #Correct #Total posts), Precision (#Correct #guessed), Recall ( #Correct #relevant) and F-measure ( 2∗P recision∗Recall (P recision+Recall)). In our task, it is desirable to make a prediction for all the posts; hence #relevant = #Total posts. This results in Recall and Accuracy being the same. However, all of the systems do not classify a post if the post does not contain the information it needs. Thus, #guessed ≤ #Total posts, and Precision is not the same as Accuracy. Table 4 reports the performance of four systems on the test data: the two baselines, our method using the preferences learned from the web corpus (OpPr) and the method additionally using discourse information to reverse conceded opinions. The OpTopic has low recall. This is expected, because it relies only on opinions explicitly toward the topics. The OpPMI has better recall than OpTopic; however, the precision drops for some debates. We believe this is due to the addition of noise. This result suggests that not all terms that are relevant to a topic are useful for determining the debate side. Finally, both of the OpPr systems are better than both baselines in Accuracy as well as F-measure for all four debates. 
The accuracy of the full OpPr system improves, on average, by 35 percentage points over the OpTopic system, and by 20 percentage points over the 231 OpPMI system. The F-measure improves, on average, by 25 percentage points over the OpTopic system, and by 17 percentage points over the OpPMI system. Note that in 3 out of 4 of the debates, the full system is able to make a guess for all of the posts (hence, the metrics all have the same values). In three of the four debates, the system using concession handling described in Section 3.4 outperforms the system without it, providing evidence that our treatment of concessions is effective. On average, there is a 3 percentage point improvement in Accuracy, 5 percentage point improvement in Precision and 5 percentage point improvement in F-measure due to the added concession information. OpTopic OpPMI OpPr OpPr + Disc Firefox Vs Internet explorer (62 posts) Acc 33.87 53.23 64.52 66.13 Prec 67.74 60.0 64.52 66.13 Rec 33.87 53.23 64.52 66.13 F1 45.16 56.41 64.52 66.13 Windows vs. Mac (15 posts) Acc 13.33 46.67 66.67 66.67 Prec 40.0 53.85 66.67 66.67 Rec 13.33 46.67 66.67 66.67 F1 20.0 50.00 66.67 66.67 SonyPs3 vs. Wii (36 posts) Acc 33.33 33.33 56.25 61.11 Prec 80.0 46.15 56.25 68.75 Rec 33.33 33.33 50.0 61.11 F1 47.06 38.71 52.94 64.71 Opera vs. Firefox (4 posts) Acc 25.0 50.0 75.0 100.0 Prec 33.33 100 75.0 100.0 Rec 25.0 50 75.0 100.0 F1 28.57 66.67 75.0 100.0 Table 4: Performance of the systems on the test data 5 Discussion In this section, we discuss the results from the previous section and describe the sources of errors. As reported in the previous section, the OpPr system outperforms both the OpTopic and the OpPMI systems. In order to analyze why OpPr outperforms OpPMI, we need to compare Tables 2 and 3. Table 2 reports the conditional probabilities learned from the web corpus for polaritytarget pairs used in OpPr, and Table 3 reports the PMI of these same targets with the debate topics used in OpPMI. First, we observe that the PMI numbers are intuitive, in that all the words, except for “e-mail,” show a high PMI relatedness to both topics. All of them are indeed semantically related to the domain. Additionally, we see that some conclusions of the OpPMI system are similar to those of the OpPr system, for example, that “Storm” is more closely related to the Blackberry than the iPhone. However, notice two cases: the PMI values for “phone” and “e-mail” are intuitive, but they may cause errors in debate analysis. Because the iPhone and the Blackberry are both phones, the word “phone” does not have any distinguishing power in debates. On the other hand, the PMI measure of “e-mail” suggests that it is not closely related to the debate topics, though it is, in fact, a desirable feature for smart phone users, even more so with Blackberry users. The PMI measure does not reflect this. The “network” aspect shows a comparatively greater relatedness to the blackberry than to the iPhone. Thus, OpPMI uses it as a proxy for the Blackberry. This may be erroneous, however, because negative opinions towards “network” are more indicative of negative opinions towards iPhones, a fact revealed by Table 2. In general, even if the OpPMI system knows what topic the given word is more related to, it still does not know what the opinion towards that word means in the debate scenario. The OpPr system, on the other hand, is able to map it to a debate side. 5.1 Errors False lexicon hits. 
The lexicon is word based, but, as shown by (Wiebe and Mihalcea, 2006; Su and Markert, 2008), many subjective words have both objective and subjective senses. Thus, one major source of errors is a false hit of a word in the lexicon. Opinion-target pairing. The syntactic rulebased opinion-target pairing system is a large source of errors in the OpPr as well as the baseline systems. Product review mining work has explored finding opinions with respect to, or in conjunction with, aspects (Hu and Liu, 2004; Popescu et al., 2005); however, in our work, we need to find 232 information in the other direction – that is, given the opinion, what is the opinion about. Stoyanov and Cardie (2008) work on opinion co-reference; however, we need to identify the specific target. Pragmatic opinions. Some of the errors are due to the fact that the opinions expressed in the post are pragmatic. This becomes a problem especially when the debate post is small, and we have few other lexical clues in the post. The following post is an example: (4) The blackberry is something like $150 and the iPhone is $500. I don’t think it’s worth it. You could buy a iPod separate and have a boatload of extra money left over. In this example, the participant mentions the difference in the prices in the first sentence. This sentence implies a negative opinion towards the iPhone. However, recognizing this would require a system to have extensive world knowledge. In the second sentence, the lexicon does hit the word “worth,” and, using syntactic rules, we can determine it is negated. However, the opinion-target pairing system only tells us that the opinion is tied to the “it.” A co-reference system would be needed to tie the “it” to “iPhone” in the first sentence. 6 Related Work Several researchers have worked on similar tasks. Kim and Hovy (2007) predict the results of an election by analyzing forums discussing the elections. Theirs is a supervised bag-of-words system using unigrams, bigrams, and trigrams as features. In contrast, our approach is unsupervised, and exploits different types of information. Bansal et al. (2008) predict the vote from congressional floor debates using agreement/disagreement features. We do not model inter-personal exchanges; instead, we model factors that influence stance taking. Lin at al (2006) identify opposing perspectives. Though apparently related at the task level, perspectives as they define them are not the same as opinions. Their approach does not involve any opinion analysis. Fujii and Ishikawa (2006) also work with arguments. However, their focus is on argument visualization rather than on recognizing stances. Other researchers have also mined data to learn associations among products and features. In their work on mining opinions in comparative sentences, Ganapathibhotla and Liu (2008) look for user preferences for one product’s features over another’s. We do not exploit comparative constructs, but rather probabilistic associations. Thus, our approach and theirs are complementary. A number of works in product review mining (Hu and Liu, 2004; Popescu et al., 2005; Kobayashi et al., 2005; Bloom et al., 2007) automatically find features of the reviewed products. However, our approach is novel in that it learns and exploits associations among opinion/polarity, topics, and aspects. Several researchers have recognized the important role discourse plays in opinion analysis (Polanyi and Zaenen, 2005; Snyder and Barzilay, 2007; Somasundaran et al., 2008; Asher et al., 2008; Sadamitsu et al., 2008). 
However, previous work did not account for concessions in determining whether an opinion supports one side or the other. More sophisticated approaches to identifying opinions and recognizing their contextual polarity have been published (e.g., (Wilson et al., 2005; Ikeda et al., 2008; Sadamitsu et al., 2008)). Those components are not the focus of our work. 7 Conclusions This paper addresses challenges faced by opinion analysis in the debate genre. In our method, factors that influence the choice of a debate side are learned by mining a web corpus for opinions. This knowledge is exploited in an unsupervised method for classifying the side taken by a post, which also accounts for concessionary opinions. Our results corroborate our hypothesis that finding relations between aspects associated with a topic, but particularized to polarity, is more effective than finding relations between topics and aspects alone. The system that implements this information, mined from the web, outperforms the web PMI-based baseline. Our hypothesis that addressing concessionary opinions is useful is also corroborated by improved performance. Acknowledgments This research was supported in part by the Department of Homeland Security under grant N000140710152. We would also like to thank Vladislav D. Veksler for help with the MSR engine, and the anonymous reviewers for their helpful comments. 233 References Nicholas Asher, Farah Benamara, and Yvette Yannick Mathieu. 2008. Distilling opinion in discourse: A preliminary study. In Coling 2008: Companion volume: Posters and Demonstrations, pages 5–8, Manchester, UK, August. Mohit Bansal, Claire Cardie, and Lillian Lee. 2008. The power of negative thinking: Exploiting label disagreement in the min-cut classification framework. In Proceedings of COLING: Companion volume: Posters. Kenneth Bloom, Navendu Garg, and Shlomo Argamon. 2007. Extracting appraisal expressions. In HLTNAACL 2007, pages 308–315, Rochester, NY. Atsushi Fujii and Tetsuya Ishikawa. 2006. A system for summarizing and visualizing arguments in subjective documents: Toward supporting decision making. In Proceedings of the Workshop on Sentiment and Subjectivity in Text, pages 15–22, Sydney, Australia, July. Association for Computational Linguistics. Murthy Ganapathibhotla and Bing Liu. 2008. Mining opinions in comparative sentences. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 241–248, Manchester, UK, August. Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In AAAI-2004. Daisuke Ikeda, Hiroya Takamura, Lev-Arie Ratinov, and Manabu Okumura. 2008. Learning to shift the polarity of words for sentiment classification. In Proceedings of the Third International Joint Conference on Natural Language Processing (IJCNLP). Soo-Min Kim and Eduard Hovy. 2007. Crystal: Analyzing predictive opinions on the web. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLPCoNLL), pages 1056–1064. Nozomi Kobayashi, Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2005. Opinion extraction using a learning-based anaphora resolution technique. In Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), poster, pages 175–180. Wei-Hao Lin, Theresa Wilson, Janyce Wiebe, and Alexander Hauptmann. 2006. Which side are you on? Identifying perspectives at the document and sentence levels. 
In Proceedings of the 10th Conference on Computational Natural Language Learning (CoNLL-2006), pages 109–116, New York, New York. Livia Polanyi and Annie Zaenen. 2005. Contextual valence shifters. In Computing Attitude and Affect in Text. Springer. Ana-Maria Popescu, Bao Nguyen, and Oren Etzioni. 2005. OPINE: Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP 2005 Interactive Demonstrations, pages 32–33, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics. R. Prasad, E. Miltsakaki, N. Dinesh, A. Lee, A. Joshi, L. Robaldo, and B. Webber, 2007. PDTB 2.0 Annotation Manual. Kugatsu Sadamitsu, Satoshi Sekine, and Mikio Yamamoto. 2008. Sentiment analysis based on probabilistic models using inter-sentence information. In European Language Resources Association (ELRA), editor, Proceedings of the Sixth International Language Resources and Evaluation (LREC’08), Marrakech, Morocco, May. Benjamin Snyder and Regina Barzilay. 2007. Multiple aspect ranking using the good grief algorithm. In Proceedings of NAACL-2007. Swapna Somasundaran, Janyce Wiebe, and Josef Ruppenhofer. 2008. Discourse level opinion interpretation. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 801–808, Manchester, UK, August. Veselin Stoyanov and Claire Cardie. 2008. Topic identification for fine-grained opinion analysis. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 817–824, Manchester, UK, August. Coling 2008 Organizing Committee. Fangzhong Su and Katja Markert. 2008. From word to sense: a case study of subjectivity recognition. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING2008), Manchester, UK, August. Janyce Wiebe and Rada Mihalcea. 2006. Word sense and subjectivity. In Proceedings of COLING-ACL 2006. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In HLT-EMNLP, pages 347–354, Vancouver, Canada. 234
2009
26
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 235–243, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Co-Training for Cross-Lingual Sentiment Classification Xiaojun Wan Institute of Compute Science and Technology & Key Laboratory of Computational Linguistics, MOE Peking University, Beijing 100871, China [email protected] Abstract The lack of Chinese sentiment corpora limits the research progress on Chinese sentiment classification. However, there are many freely available English sentiment corpora on the Web. This paper focuses on the problem of cross-lingual sentiment classification, which leverages an available English corpus for Chinese sentiment classification by using the English corpus as training data. Machine translation services are used for eliminating the language gap between the training set and test set, and English features and Chinese features are considered as two independent views of the classification problem. We propose a cotraining approach to making use of unlabeled Chinese data. Experimental results show the effectiveness of the proposed approach, which can outperform the standard inductive classifiers and the transductive classifiers. 1 Introduction Sentiment classification is the task of identifying the sentiment polarity of a given text. The sentiment polarity is usually positive or negative and the text genre is usually product review. In recent years, sentiment classification has drawn much attention in the NLP field and it has many useful applications, such as opinion mining and summarization (Liu et al., 2005; Ku et al., 2006; Titov and McDonald, 2008). To date, a variety of corpus-based methods have been developed for sentiment classification. The methods usually rely heavily on an annotated corpus for training the sentiment classifier. The sentiment corpora are considered as the most valuable resources for the sentiment classification task. However, such resources in different languages are very imbalanced. Because most previous work focuses on English sentiment classification, many annotated corpora for English sentiment classification are freely available on the Web. However, the annotated corpora for Chinese sentiment classification are scarce and it is not a trivial task to manually label reliable Chinese sentiment corpora. The challenge before us is how to leverage rich English corpora for Chinese sentiment classification. In this study, we focus on the problem of cross-lingual sentiment classification, which leverages only English training data for supervised sentiment classification of Chinese product reviews, without using any Chinese resources. Note that the above problem is not only defined for Chinese sentiment classification, but also for various sentiment analysis tasks in other different languages. Though pilot studies have been performed to make use of English corpora for subjectivity classification in other languages (Mihalcea et al., 2007; Banea et al., 2008), the methods are very straightforward by directly employing an inductive classifier (e.g. SVM, NB), and the classification performance is far from satisfactory because of the language gap between the original language and the translated language. In this study, we propose a co-training approach to improving the classification accuracy of polarity identification of Chinese product reviews. Unlabeled Chinese reviews can be fully leveraged in the proposed approach. 
First, machine translation services are used to translate English training reviews into Chinese reviews and also translate Chinese test reviews and additional unlabeled reviews into English reviews. Then, we can view the classification problem in two independent views: Chinese view with only Chinese features and English view with only English features. We then use the co-training approach to making full use of the two redundant views of features. The SVM classifier is adopted as the basic classifier in the proposed approach. Experimental results show that the proposed approach can outperform the baseline inductive classifiers and the more advanced transductive classifiers. The rest of this paper is organized as follows: Section 2 introduces related work. The proposed 235 co-training approach is described in detail in Section 3. Section 4 shows the experimental results. Lastly we conclude this paper in Section 5. 2 Related Work 2.1 Sentiment Classification Sentiment classification can be performed on words, sentences or documents. In this paper we focus on document sentiment classification. The methods for document sentiment classification can be generally categorized into lexicon-based and corpus-based. Lexicon-based methods usually involve deriving a sentiment measure for text based on sentiment lexicons. Turney (2002) predicates the sentiment orientation of a review by the average semantic orientation of the phrases in the review that contain adjectives or adverbs, which is denoted as the semantic oriented method. Kim and Hovy (2004) build three models to assign a sentiment category to a given sentence by combining the individual sentiments of sentimentbearing words. Hiroshi et al. (2004) use the technique of deep language analysis for machine translation to extract sentiment units in text documents. Kennedy and Inkpen (2006) determine the sentiment of a customer review by counting positive and negative terms and taking into account contextual valence shifters, such as negations and intensifiers. Devitt and Ahmad (2007) explore a computable metric of positive or negative polarity in financial news text. Corpus-based methods usually consider the sentiment analysis task as a classification task and they use a labeled corpus to train a sentiment classifier. Since the work of Pang et al. (2002), various classification models and linguistic features have been proposed to improve the classification performance (Pang and Lee, 2004; Mullen and Collier, 2004; Wilson et al., 2005; Read, 2005). Most recently, McDonald et al. (2007) investigate a structured model for jointly classifying the sentiment of text at varying levels of granularity. Blitzer et al. (2007) investigate domain adaptation for sentiment classifiers, focusing on online reviews for different types of products. Andreevskaia and Bergler (2008) present a new system consisting of the ensemble of a corpus-based classifier and a lexicon-based classifier with precision-based vote weighting. Chinese sentiment analysis has also been studied (Tsou et al., 2005; Ye et al., 2006; Li and Sun, 2007) and most such work uses similar lexiconbased or corpus-based methods for Chinese sentiment classification. To date, several pilot studies have been performed to leverage rich English resources for sentiment analysis in other languages. 
Standard Naïve Bayes and SVM classifiers have been applied for subjectivity classification in Romanian (Mihalcea et al., 2007; Banea et al., 2008), and the results show that automatic translation is a viable alternative for the construction of resources and tools for subjectivity analysis in a new target language. Wan (2008) focuses on leveraging both Chinese and English lexicons to improve Chinese sentiment analysis by using lexicon-based methods. In this study, we focus on improving the corpus-based method for crosslingual sentiment classification of Chinese product reviews by developing novel approaches. 2.2 Cross-Domain Text Classification Cross-domain text classification can be considered as a more general task than cross-lingual sentiment classification. In the problem of crossdomain text classification, the labeled and unlabeled data come from different domains, and their underlying distributions are often different from each other, which violates the basic assumption of traditional classification learning. To date, many semi-supervised learning algorithms have been developed for addressing the cross-domain text classification problem by transferring knowledge across domains, including Transductive SVM (Joachims, 1999), EM(Nigam et al., 2000), EM-based Naïve Bayes classifier (Dai et al., 2007a), Topic-bridged PLSA (Xue et al., 2008), Co-Clustering based classification (Dai et al., 2007b), two-stage approach (Jiang and Zhai, 2007). DauméIII and Marcu (2006) introduce a statistical formulation of this problem in terms of a simple mixture model. In particular, several previous studies focus on the problem of cross-lingual text classification, which can be considered as a special case of general cross-domain text classification. Bel et al. (2003) present practical and cost-effective solutions. A few novel models have been proposed to address the problem, e.g. the EM-based algorithm (Rigutini et al., 2005), the information bottleneck approach (Ling et al., 2008), the multilingual domain models (Gliozzo and Strapparava, 2005), etc. To the best of our knowledge, cotraining has not yet been investigated for crossdomain or cross-lingual text classification. 236 3 The Co-Training Approach 3.1 Overview The purpose of our approach is to make use of the annotated English corpus for sentiment polarity identification of Chinese reviews in a supervised framework, without using any Chinese resources. Given the labeled English reviews and unlabeled Chinese reviews, two straightforward methods for addressing the problem are as follows: 1) We first learn a classifier based on the labeled English reviews, and then translate Chinese reviews into English reviews. Lastly, we use the classifier to classify the translated English reviews. 2) We first translate the labeled English reviews into Chinese reviews, and then learn a classifier based on the translated Chinese reviews with labels. Lastly, we use the classifier to classify the unlabeled Chinese reviews. The above two methods have been used in (Banea et al., 2008) for Romanian subjectivity analysis, but the experimental results are not very promising. As shown in our experiments, the above two methods do not perform well for Chinese sentiment classification, either, because the underlying distribution between the original language and the translated language are different. In order to address the above problem, we propose to use the co-training approach to make use of some amounts of unlabeled Chinese reviews to improve the classification accuracy. 
The co-training approach can make full use of both the English features and the Chinese features in a unified framework. The framework of the proposed approach is illustrated in Figure 1. The framework consists of a training phase and a classification phase. In the training phase, the input is the labeled English reviews and some amounts of unlabeled Chinese reviews1. The labeled English reviews are translated into labeled Chinese reviews, and the unlabeled Chinese reviews are translated into unlabeled English reviews, by using machine translation services. Therefore, each review is associated with an English version and a Chinese version. The English features and the Chinese features for each review are considered two independent and redundant views of the review. The co-training algorithm is then applied to learn two classifiers 1 The unlabeled Chinese reviews used for co-training do not include the unlabeled Chinese reviews for testing, i.e., the Chinese reviews for testing are blind to the training phase. and finally the two classifiers are combined into a single sentiment classifier. In the classification phase, each unlabeled Chinese review for testing is first translated into English review, and then the learned classifier is applied to classify the review into either positive or negative. The steps of review translation and the cotraining algorithm are described in details in the next sections, respectively. Figure 1. Framework of the proposed approach 3.2 Review Translation In order to overcome the language gap, we must translate one language into another language. Fortunately, machine translation techniques have been well developed in the NLP field, though the translation performance is far from satisfactory. A few commercial machine translation services can be publicly accessed, e.g. Google Translate2, Yahoo Babel Fish3 and Windows Live Translate4. 2 http://translate.google.com/translate_t 3 http://babelfish.yahoo.com/translate_txt 4 http://www.windowslivetranslator.com/ Unlabeled Chinese Reviews Labeled English Reviews Machine Translation (CN-EN) Co-Training Machine Translation (EN-CN) Labeled Chinese Reviews Unlabeled English Reviews Pos\Neg Chinese View English View Test Chinese Review Sentiment Classifier Machine Translation (CN-EN) Test English Review Training Phase Classification Phase 237 In this study, we adopt Google Translate for both English-to-Chinese Translation and Chinese-toEnglish Translation, because it is one of the state-of-the-art commercial machine translation systems used today. Google Translate applies statistical learning techniques to build a translation model based on both monolingual text in the target language and aligned text consisting of examples of human translations between the languages. 3.3 The Co-Training Algorithm The co-training algorithm (Blum and Mitchell, 1998) is a typical bootstrapping method, which starts with a set of labeled data, and increase the amount of annotated data using some amounts of unlabeled data in an incremental way. One important aspect of co-training is that two conditional independent views are required for cotraining to work, but the independence assumption can be relaxed. Till now, co-training has been successfully applied to statistical parsing (Sarkar, 2001), reference resolution (Ng and Cardie, 2003), part of speech tagging (Clark et al., 2003), word sense disambiguation (Mihalcea, 2004) and email classification (Kiritchenko and Matwin, 2001). 
In the context of cross-lingual sentiment classification, each labeled English review or unlabeled Chinese review has two views of features: English features and Chinese features. Here, a review is used to indicate both its Chinese version and its English version, unless stated otherwise. The co-training algorithm is illustrated in Figure 2. In the algorithm, the class distribution in the labeled data is maintained by balancing the parameter values of p and n at each iteration. The intuition of the co-training algorithm is that if one classifier can confidently predict the class of an example that is very similar to some of the labeled ones, it can provide one more training example for the other classifier. Of course, an example that happens to be easy for the first classifier to classify is not necessarily easy for the second classifier, so the second classifier can still obtain useful information from it to improve itself, and vice versa (Kiritchenko and Matwin, 2001).
In the co-training algorithm, a basic classification algorithm is required to construct Cen and Ccn. Typical text classifiers include Support Vector Machine (SVM), Naïve Bayes (NB), Maximum Entropy (ME), K-Nearest Neighbor (KNN), etc. In this study, we adopt the widely-used SVM classifier (Joachims, 2002). Viewing input data as two sets of vectors in a feature space, SVM constructs a separating hyperplane in the space by maximizing the margin between the two data sets. The English or Chinese features used in this study include both unigrams and bigrams (for Chinese text, a unigram refers to a Chinese word and a bigram refers to two adjacent Chinese words), and the feature weight is simply set to term frequency (term frequency performed better than TF-IDF in our empirical analysis). Feature selection methods (e.g. Document Frequency (DF), Information Gain (IG), and Mutual Information (MI)) can be used for dimension reduction, but we use all the features in the experiments for comparative analysis, because there is no significant performance improvement after applying the feature selection techniques in our empirical study. The output value of the SVM classifier for a review indicates the confidence level of the review's classification. Usually, the sentiment polarity of a review is indicated by the sign of the prediction value.
Given:
- Fen and Fcn are redundantly sufficient sets of features, where Fen represents the English features and Fcn represents the Chinese features;
- L is a set of labeled training reviews;
- U is a set of unlabeled reviews;
Loop for I iterations:
1. Learn the first classifier Cen from L based on Fen;
2. Use Cen to label reviews from U based on Fen;
3. Choose the p positive and n negative most confidently predicted reviews Een from U;
4. Learn the second classifier Ccn from L based on Fcn;
5. Use Ccn to label reviews from U based on Fcn;
6. Choose the p positive and n negative most confidently predicted reviews Ecn from U;
7. Remove the reviews Een∪Ecn from U (examples with conflicting labels are not included in Een∪Ecn; in other words, if an example is in both Een and Ecn but with conflicting labels, it is excluded from Een∪Ecn);
8. Add the reviews Een∪Ecn with the corresponding labels to L.
Figure 2. The co-training algorithm
In the training phase, the co-training algorithm learns two separate classifiers: Cen and Ccn. Therefore, in the classification phase, we can obtain two prediction values for a test review.
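The following is a minimal sketch of the training loop of Figure 2 under the same assumptions as the earlier snippets (scikit-learn's linear SVM as the base classifier, dual-view review dicts as built above); how the two resulting prediction values are combined at classification time is described in the next paragraph.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

def co_train(labeled, unlabeled, I=40, p=5, n=5):
    """Sketch of the co-training loop: labeled/unlabeled are lists of dual-view
    dicts {"en": ..., "cn": ..., "label": +1 or -1 (None if unlabeled)}.
    Returns the two view classifiers and their vectorizers."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    vec = {v: CountVectorizer(ngram_range=(1, 2)) for v in ("en", "cn")}
    for v in ("en", "cn"):                        # fit vocabularies once on all reviews
        vec[v].fit([r[v] for r in labeled + unlabeled])
    clf = {}
    for _ in range(I):
        if not unlabeled:
            break                                 # the algorithm has used up U
        chosen = {}
        for v in ("en", "cn"):                    # steps 1-6 of Figure 2
            X = vec[v].transform([r[v] for r in labeled])
            y = [r["label"] for r in labeled]
            clf[v] = LinearSVC().fit(X, y)
            scores = clf[v].decision_function(
                vec[v].transform([r[v] for r in unlabeled]))
            picks = {i: +1 for i in np.argsort(-scores)[:p]}       # p most confident positives
            picks.update({i: -1 for i in np.argsort(scores)[:n]})  # n most confident negatives
            chosen[v] = picks
        union = {}                                # Een U Ecn, dropping conflicting labels
        for v in ("en", "cn"):
            for i, lab in chosen[v].items():
                union[i] = lab if union.get(i, lab) == lab else None
        keep = {i: lab for i, lab in union.items() if lab is not None}
        labeled += [dict(unlabeled[i], label=lab) for i, lab in keep.items()]  # step 8
        unlabeled = [r for i, r in enumerate(unlabeled) if i not in keep]      # step 7
    return clf, vec
```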
We normalize the prediction values into [-1, 1] by dividing by the maximum absolute value. Finally, the average of the normalized values is used as the overall prediction value of the review.
4 Empirical Evaluation
4.1 Evaluation Setup
4.1.1 Data set
The following three datasets were collected and used in the experiments:
Test Set (Labeled Chinese Reviews): In order to assess the performance of the proposed approach, we collected and labeled 886 product reviews (451 positive reviews + 435 negative reviews) from a popular Chinese IT product web site, IT168 (http://www.it168.com). The reviews focused on such products as mp3 players, mobile phones, digital cameras and laptop computers.
Training Set (Labeled English Reviews): There are many labeled English corpora available on the Web, and we used the corpus constructed for multi-domain sentiment classification (Blitzer et al., 2007), available at http://www.cis.upenn.edu/~mdredze/datasets/sentiment/, because the corpus was large-scale and it was within similar domains as the test set. The dataset consisted of 8000 Amazon product reviews (4000 positive reviews + 4000 negative reviews) for four different product types: books, DVDs, electronics and kitchen appliances.
Unlabeled Set (Unlabeled Chinese Reviews): We downloaded an additional 1000 Chinese product reviews from IT168 and used the reviews as the unlabeled set. Therefore, the unlabeled set and the test set were in the same domain and had similar underlying feature distributions.
Each Chinese review was translated into an English review, and each English review was translated into a Chinese review. Therefore, each review has two independent views: an English view and a Chinese view. A review is represented by both its English view and its Chinese view. Note that the training set and the unlabeled set are used in the training phase, while the test set is blind to the training phase.
4.1.2 Evaluation Metric
We used the standard precision, recall and F-measure to measure the performance of the positive and negative classes, respectively, and used the accuracy metric to measure the overall performance of the system. The metrics are defined the same as in general text categorization.
4.1.3 Baseline Methods
In the experiments, the proposed co-training approach (CoTrain) is compared with the following baseline methods:
SVM(CN): This method applies the inductive SVM with only Chinese features for sentiment classification in the Chinese view. Only English-to-Chinese translation is needed, and the unlabeled set is not used.
SVM(EN): This method applies the inductive SVM with only English features for sentiment classification in the English view. Only Chinese-to-English translation is needed, and the unlabeled set is not used.
SVM(ENCN1): This method applies the inductive SVM with both English and Chinese features for sentiment classification in the two views. Both English-to-Chinese and Chinese-to-English translations are required, and the unlabeled set is not used.
SVM(ENCN2): This method combines the results of SVM(EN) and SVM(CN) by averaging the prediction values in the same way as the co-training approach.
TSVM(CN): This method applies the transductive SVM with only Chinese features for sentiment classification in the Chinese view. Only English-to-Chinese translation is needed, and the unlabeled set is used.
TSVM(EN): This method applies the transductive SVM with only English features for sentiment classification in the English view. Only Chinese-to-English translation is needed, and the unlabeled set is used.
TSVM(ENCN1): This method applies the transductive SVM with both English and Chinese features for sentiment classification in the two views. Both English-to-Chinese and Chinese-to-English translations are required, and the unlabeled set is used.
TSVM(ENCN2): This method combines the results of TSVM(EN) and TSVM(CN) by averaging the prediction values.
Note that the first four methods are straightforward methods used in previous work, while the latter four methods are strong baselines because the transductive SVM has been widely used for improving classification accuracy by leveraging additional unlabeled examples.
4.2 Evaluation Results
4.2.1 Method Comparison
In the experiments, we first compare the proposed co-training approach (I=40 and p=n=5) with the eight baseline methods. The three parameters in the co-training approach are empirically set by considering the total number (i.e. 1000) of the unlabeled Chinese reviews. In our empirical study, the proposed approach can perform well with a wide range of parameter values, which will be shown later. Table 1 shows the comparison results. As seen from the table, the proposed co-training approach outperforms all eight baseline methods over all metrics. Among the eight baselines, the best one is TSVM(ENCN2), which combines the results of two transductive SVM classifiers. Actually, TSVM(ENCN2) is similar to CoTrain because CoTrain also combines the results of two classifiers in the same way. However, the co-training approach can train two more effective classifiers, and the accuracy values of the component English and Chinese classifiers are 0.775 and 0.790, respectively, which are higher than those of the corresponding TSVM classifiers. Overall, the use of transductive learning and the combination of English and Chinese views are beneficial to the final classification accuracy, and the co-training approach is more suitable for making use of the unlabeled Chinese reviews than the transductive SVM.
4.2.2 Influences of Iteration Number (I)
Figure 3 shows the accuracy curve of the co-training approach (Combined Classifier) with different numbers of iterations. The iteration number I is varied from 1 to 80. When I is set to 1, the co-training approach degenerates into SVM(ENCN2). The accuracy curves of the component English and Chinese classifiers learned in the co-training approach are also shown in the figure. We can see that the proposed co-training approach can outperform the best baseline, TSVM(ENCN2), after 20 iterations. After a large number of iterations, the performance of the co-training approach decreases because noisy training examples may be selected from the remaining unlabeled set. Finally, the performance of the approach does not change any more, because the algorithm runs out of all possible examples in the unlabeled set. Fortunately, the proposed approach performs well with a wide range of iteration numbers. We can also see that the two component classifiers have similar trends to the combined co-training approach. It is encouraging that the component Chinese classifier alone can perform better than the best baseline when the iteration number is set between 40 and 70.
4.2.3 Influences of Growth Size (p, n)
Figure 4 shows how the growth size at each iteration (p positive and n negative confident examples) influences the accuracy of the proposed co-training approach. In the above experiments, we set p=n, which is considered a balanced growth. When p differs very much from n, the growth is considered an imbalanced growth.
Balanced growth of (2, 2), (5, 5), (10, 10) and (15, 15) examples and imbalanced growth of (1, 5) and (5, 1) examples are compared in the figure. We can see that the performance of the co-training approach with balanced growth can be improved after a few iterations, and the performance of the co-training approach with large p and n becomes unchanged more quickly, because the approach runs out of the limited examples in the unlabeled set more quickly. However, the performance of the co-training approach with the two imbalanced growths always drops quite rapidly, because the imbalanced labeled examples added at each iteration hurt the performance badly.

Table 1. Comparison results
Method                   Positive (Precision/Recall/F-measure)   Negative (Precision/Recall/F-measure)   Total Accuracy
SVM(CN)                  0.733 / 0.865 / 0.793                   0.828 / 0.674 / 0.743                   0.771
SVM(EN)                  0.717 / 0.803 / 0.757                   0.766 / 0.671 / 0.716                   0.738
SVM(ENCN1)               0.744 / 0.820 / 0.781                   0.792 / 0.708 / 0.748                   0.765
SVM(ENCN2)               0.746 / 0.847 / 0.793                   0.816 / 0.701 / 0.754                   0.775
TSVM(CN)                 0.724 / 0.878 / 0.794                   0.838 / 0.653 / 0.734                   0.767
TSVM(EN)                 0.732 / 0.860 / 0.791                   0.823 / 0.674 / 0.741                   0.769
TSVM(ENCN1)              0.743 / 0.878 / 0.805                   0.844 / 0.685 / 0.756                   0.783
TSVM(ENCN2)              0.744 / 0.896 / 0.813                   0.863 / 0.680 / 0.761                   0.790
CoTrain (I=40; p=n=5)    0.768 / 0.905 / 0.831                   0.879 / 0.717 / 0.790                   0.813

Figure 3. Accuracy vs. number of iterations for co-training (p=n=5)
Figure 4. Accuracy vs. different (p, n) for co-training
Figure 5. Influences of feature size

4.2.4 Influences of Feature Selection
In the above experiments, all features (unigram + bigram) are used. As mentioned earlier, feature selection techniques are widely used for dimension reduction. In this section, we further conduct experiments to investigate the influences of feature selection techniques on the classification results. We use the simple but effective document frequency (DF) measure for feature selection. Figure 5 shows the comparison results of different feature sizes for the co-training approach and the two strong baselines. The feature size is measured as the proportion of the selected features against the total features (i.e. 100%). We can see from the figure that the feature selection technique has a very slight influence on the classification accuracy of the methods. It can be seen that the co-training approach can always outperform the two baselines with different feature sizes. The results further demonstrate the effectiveness and robustness of the proposed co-training approach.
5 Conclusion and Future Work
In this paper, we propose to use the co-training approach to address the problem of cross-lingual sentiment classification. The experimental results show the effectiveness of the proposed approach. In future work, we will improve the sentiment classification accuracy in the following two ways:
1) The smoothed co-training approach used in (Mihalcea, 2004) will be adopted for sentiment classification. The approach has the effect of "smoothing" the learning curves.
During the bootstrapping process of smoothed co-training, the classifier at each iteration is replaced with a majority voting scheme applied to all classifiers constructed at previous iterations. 2) The feature distributions of the translated text and the natural text in the same language are still different due to the inaccuracy of the machine translation service. We will employ the structural correspondence learning (SCL) domain adaption algorithm used in (Blitzer et al., 2007) for linking the translated text and the natural text. Acknowledgments This work was supported by NSFC (60873155), RFDP (20070001059), Beijing Nova Program (2008B03), National High-tech R&D Program (2008AA01Z421) and NCET (NCET-08-0006). We also thank the anonymous reviewers for their useful comments. References A. Andreevskaia and S. Bergler. 2008. When specialists and generalists work together: overcoming domain dependence in sentiment tagging. In Proceedings of ACL-08: HLT. C. Banea, R. Mihalcea, J. Wiebe and S. Hassan. 2008. Multilingual subjectivity analysis using machine translation. In Proceedings of EMNLP-2008. N. Bel, C. H. A. Koster, and M. Villegas. 2003. Cross-lingual text categorization. In Proceedings of ECDL-03. J. Blitzer, M. Dredze and F. Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: domain adaptation for sentiment classification. In Proceedings of ACL-07. A. Blum and T. Mitchell. 1998. Combining labeled and unlabeled data with cotraining. In Proceedings of COLT-98. S. Brody, R. Navigli and M. Lapata. 2006. Ensemble methods for unsupervised WSD. In Proceedings of COLING-ACL-2006. S. Clark, J. R. Curran, and M. Osborne. 2003. Bootstrapping POS taggers using unlabelled data. In Proceedings of CoNLL-2003. W. Dai, G.-R. Xue, Q. Yang, Y. Yu. 2007a. Transferring Naïve Bayes Classifiers for text classification. In Proceedings of AAAI-07. W. Dai, G.-R. Xue, Q. Yang, Y. Yu. 2007b. Coclustering based classification for out-of-domain documents. In Proceedings of KDD-07. H. DauméIII and D. Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126. A. Devitt and K. Ahmad. 2007. Sentiment polarity identification in financial news: a cohesion-based approach. In Proceedings of ACL2007. T. G. Dietterich. 1997. Machine learning research: four current directions. AI Magazine, 18(4), 1997. A. Gliozzo and C. Strapparava. 2005. Cross language text categorization by acquiring multilingual domain models from comparable corpora. In Proceedings of the ACL Workshop on Building and Using Parallel Texts. K. Hiroshi, N. Tetsuya and W. Hideo. 2004. Deeper sentiment analysis using machine translation technology. In Proceedings of COLING-04. J. Jiang and C. Zhai. 2007. A two-stage approach to domain adaptation for statistical classifiers. In Proceedings of CIKM-07. T. Joachims. 1999. Transductive inference for text classification using support vector machines. In Proceedings of ICML-99. 242 T. Joachims. 2002. Learning to classify text using support vector machines. Dissertation, Kluwer, 2002. A. Kennedy and D. Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110125. S.-M. Kim and E. Hovy. 2004. Determining the sentiment of opinions. In Proceedings of COLING-04. S. Kiritchenko and S. Matwin. 2001. Email classification with co-training. In Proceedings of the 2001 Conference of the Centre for Advanced Studies on Collaborative Research. L.-W. Ku, Y.-T. Liang and H.-H. Chen. 2006. 
Opinion extraction, summarization and tracking in news and blog corpora. In Proceedings of AAAI-2006. J. Li and M. Sun. 2007. Experimental study on sentiment classification of Chinese review using machine learning techniques. In Proceeding of IEEENLPKE-07. X. Ling, W. Dai, Y. Jiang, G.-R. Xue, Q. Yang, and Y. Yu. 2008. Can Chinese Web pages be classified with English data source? In Proceedings of WWW-08. B. Liu, M. Hu and J. Cheng. 2005. Opinion observer: Analyzing and comparing opinions on the web. In Proceedings of WWW-2005. R. McDonald, K. Hannan, T. Neylon, M. Wells and J. Reynar. 2007. Structured models for fine-to-coarse sentiment analysis. In Proceedings of ACL-07. R. Mihalcea. 2004. Co-training and self-training for word sense disambiguation. In Proceedings of CONLL-04. R. Mihalcea, C. Banea and J. Wiebe. 2007. Learning multilingual subjective language via cross-lingual projections. In Proceedings of ACL-2007. T. Mullen and N. Collier. 2004. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of EMNLP-04. V. Ng and C. Cardie. 2003. Weakly supervised natural language learning without redundant views. In Proceedings of HLT-NAACL-03. K. Nigam, A. K. McCallum, S. Thrun, and T. Mitchell. 2000. Text Classification from Labeled and Unlabeled Documents using EM. Machine Learning, 39(2-3):103–134. B. Pang, L. Lee and S. Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In Proceedings of EMNLP-02. B. Pang and L. Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL-04. J. Read. 2005. Using emoticons to reduce dependency in machine learning techniques for sentiment classification. In Proceedings of ACL-05. L. Rigutini, M. Maggini and B. Liu. 2005. An EM based training algorithm for cross-language text categorization. In Proceedings of WI-05. A. Sarkar. 2001. Applying cotraining methods to statistical parsing. In Proceedings of NAACL-2001. I. Titov and R. McDonald. 2008. A joint model of text and aspect ratings for sentiment summarization. In Proceedings of ACL-08:HLT. B. K. Y. Tsou, R. W. M. Yuen, O. Y. Kwong, T. B. Y. La and W. L. Wong. 2005. Polarity classification of celebrity coverage in the Chinese press. In Proceedings of International Conference on Intelligence Analysis. P. Turney. 2002. Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. In Proceedings of ACL-2002. X. Wan. 2008. Using bilingual knowledge and ensemble techniques for unsupervised Chinese sentiment analysis. In Proceedings of EMNLP-2008. T. Wilson, J. Wiebe and P. Hoffmann. 2005. Recognizing Contextual Polarity in Phrase-Level Sentiment Analysis. In Proceedings of HLT/EMNLP-05. G.-R. Xue, W. Dai, Q. Yang, Y. Yu. 2008. Topicbridged PLSA for cross-domain text classification. In Proceedings of SIGIR-08. Q. Ye, W. Shi and Y. Li. 2006. Sentiment classification for movie reviews in Chinese by improved semantic oriented approach. In Proceedings of 39th Hawaii International Conference on System Sciences, 2006. 243
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 244–252, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Non-negative Matrix Tri-factorization Approach to Sentiment Classification with Lexical Prior Knowledge Tao Li Yi Zhang School of Computer Science Florida International University {taoli,yzhan004}@cs.fiu.edu Vikas Sindhwani Mathematical Sciences IBM T.J. Watson Research Center [email protected] Abstract Sentiment classification refers to the task of automatically identifying whether a given piece of text expresses positive or negative opinion towards a subject at hand. The proliferation of user-generated web content such as blogs, discussion forums and online review sites has made it possible to perform large-scale mining of public opinion. Sentiment modeling is thus becoming a critical component of market intelligence and social media technologies that aim to tap into the collective wisdom of crowds. In this paper, we consider the problem of learning high-quality sentiment models with minimal manual supervision. We propose a novel approach to learn from lexical prior knowledge in the form of domain-independent sentimentladen terms, in conjunction with domaindependent unlabeled data and a few labeled documents. Our model is based on a constrained non-negative tri-factorization of the term-document matrix which can be implemented using simple update rules. Extensive experimental studies demonstrate the effectiveness of our approach on a variety of real-world sentiment prediction tasks. 1 Introduction Web 2.0 platforms such as blogs, discussion forums and other such social media have now given a public voice to every consumer. Recent surveys have estimated that a massive number of internet users turn to such forums to collect recommendations for products and services, guiding their own choices and decisions by the opinions that other consumers have publically expressed. Gleaning insights by monitoring and analyzing large amounts of such user-generated data is thus becoming a key competitive differentiator for many companies. While tracking brand perceptions in traditional media is hardly a new challenge, handling the unprecedented scale of unstructured user-generated web content requires new methodologies. These methodologies are likely to be rooted in natural language processing and machine learning techniques. Automatically classifying the sentiment expressed in a blog around selected topics of interest is a canonical machine learning task in this discussion. A standard approach would be to manually label documents with their sentiment orientation and then apply off-the-shelf text classification techniques. However, sentiment is often conveyed with subtle linguistic mechanisms such as the use of sarcasm and highly domain-specific contextual cues. This makes manual annotation of sentiment time consuming and error-prone, presenting a bottleneck in learning high quality models. Moreover, products and services of current focus, and associated community of bloggers with their idiosyncratic expressions, may rapidly evolve over time causing models to potentially lose performance and become stale. This motivates the problem of learning robust sentiment models from minimal supervision. In their seminal work, (Pang et al., 2002) demonstrated that supervised learning significantly outperformed a competing body of work where hand-crafted dictionaries are used to assign sentiment labels based on relative frequencies of positive and negative terms. 
As observed by (Ng et al., 2006), most semi-automated dictionary-based approaches yield unsatisfactory lexicons, with either high coverage and low precision or vice versa. However, the treatment of such dictionaries as forms of prior knowledge that can be incorporated in machine learning models is a relatively less explored topic; even lesser so in conjunction with semi-supervised models that attempt to utilize un244 labeled data. This is the focus of the current paper. Our models are based on a constrained nonnegative tri-factorization of the term-document matrix, which can be implemented using simple update rules. Treated as a set of labeled features, the sentiment lexicon is incorporated as one set of constraints that enforce domain-independent prior knowledge. A second set of constraints introduce domain-specific supervision via a few document labels. Together these constraints enable learning from partial supervision along both dimensions of the term-document matrix, in what may be viewed more broadly as a framework for incorporating dual-supervision in matrix factorization models. We provide empirical comparisons with several competing methodologies on four, very different domains – blogs discussing enterprise software products, political blogs discussing US presidential candidates, amazon.com product reviews and IMDB movie reviews. Results demonstrate the effectiveness and generality of our approach. The rest of the paper is organized as follows. We begin by discussing related work in Section 2. Section 3 gives a quick background on Nonnegative Matrix Tri-factorization models. In Section 4, we present a constrained model and computational algorithm for incorporating lexical knowledge in sentiment analysis. In Section 5, we enhance this model by introducing document labels as additional constraints. Section 6 presents an empirical study on four datasets. Finally, Section 7 concludes this paper. 2 Related Work We point the reader to a recent book (Pang and Lee, 2008) for an in-depth survey of literature on sentiment analysis. In this section, we briskly cover related work to position our contributions appropriately in the sentiment analysis and machine learning literature. Methods focussing on the use and generation of dictionaries capturing the sentiment of words have ranged from manual approaches of developing domain-dependent lexicons (Das and Chen, 2001) to semi-automated approaches (Hu and Liu, 2004; Zhuang et al., 2006; Kim and Hovy, 2004), and even an almost fully automated approach (Turney, 2002). Most semi-automated approaches have met with limited success (Ng et al., 2006) and supervised learning models have tended to outperform dictionary-based classification schemes (Pang et al., 2002). A two-tier scheme (Pang and Lee, 2004) where sentences are first classified as subjective versus objective, and then applying the sentiment classifier on only the subjective sentences further improves performance. Results in these papers also suggest that using more sophisticated linguistic models, incorporating parts-of-speech and n-gram language models, do not improve over the simple unigram bag-of-words representation. In keeping with these findings, we also adopt a unigram text model. A subjectivity classification phase before our models are applied may further improve the results reported in this paper, but our focus is on driving the polarity prediction stage with minimal manual effort. 
In this regard, our model brings two interrelated but distinct themes from machine learning to bear on this problem: semi-supervised learning and learning from labeled features. The goal of the former theme is to learn from few labeled examples by making use of unlabeled data, while the goal of the latter theme is to utilize weak prior knowledge about term-class affinities (e.g., the term “awful” indicates negative sentiment and therefore may be considered as a negatively labeled feature). Empirical results in this paper demonstrate that simultaneously attempting both these goals in a single model leads to improvements over models that focus on a single goal. (Goldberg and Zhu, 2006) adapt semi-supervised graph-based methods for sentiment analysis but do not incorporate lexical prior knowledge in the form of labeled features. Most work in machine learning literature on utilizing labeled features has focused on using them to generate weakly labeled examples that are then used for standard supervised learning: (Schapire et al., 2002) propose one such framework for boosting logistic regression; (Wu and Srihari, 2004) build a modified SVM and (Liu et al., 2004) use a combination of clustering and EM based methods to instantiate similar frameworks. By contrast, we incorporate lexical knowledge directly as constraints on our matrix factorization model. In recent work, Druck et al. (Druck et al., 2008) constrain the predictions of a multinomial logistic regression model on unlabeled instances in a Generalized Expectation formulation for learning from labeled features. Unlike their approach which uses only unlabeled instances, our method uses both labeled and unlabeled documents in conjunction with labeled and 245 unlabeled words. The matrix tri-factorization models explored in this paper are closely related to the models proposed recently in (Li et al., 2008; Sindhwani et al., 2008). Though, their techniques for proving algorithm convergence and correctness can be readily adapted for our models, (Li et al., 2008) do not incorporate dual supervision as we do. On the other hand, while (Sindhwani et al., 2008) do incorporate dual supervision in a non-linear kernelbased setting, they do not enforce non-negativity or orthogonality – aspects of matrix factorization models that have shown benefits in prior empirical studies, see e.g., (Ding et al., 2006). We also note the very recent work of (Sindhwani and Melville, 2008) which proposes a dualsupervision model for semi-supervised sentiment analysis. In this model, bipartite graph regularization is used to diffuse label information along both sides of the term-document matrix. Conceptually, their model implements a co-clustering assumption closely related to Singular Value Decomposition (see also (Dhillon, 2001; Zha et al., 2001) for more on this perspective) while our model is based on Non-negative Matrix Factorization. In another recent paper (Sandler et al., 2008), standard regularization models are constrained using graphs of word co-occurences. These are very recently proposed competing methodologies, and we have not been able to address empirical comparisons with them in this paper. Finally, recent efforts have also looked at transfer learning mechanisms for sentiment analysis, e.g., see (Blitzer et al., 2007). While our focus is on single-domain learning in this paper, we note that cross-domain variants of our model can also be orthogonally developed. 
3 Background
3.1 Basic Matrix Factorization Model
Our proposed models are based on non-negative matrix tri-factorization (Ding et al., 2006). In these models, an m × n term-document matrix X is approximated by three factors that specify soft membership of terms and documents in one of k classes:
X ≈ F S G^T.  (1)
where F is an m × k non-negative matrix representing knowledge in the word space, i.e., the i-th row of F represents the posterior probability of word i belonging to the k classes, G is an n × k non-negative matrix representing knowledge in the document space, i.e., the i-th row of G represents the posterior probability of document i belonging to the k classes, and S is a k × k non-negative matrix providing a condensed view of X.
The matrix factorization model is similar to the probabilistic latent semantic indexing (PLSI) model (Hofmann, 1999). In PLSI, X is treated as the joint distribution between words and documents by the scaling X → X̄ = X/∑_ij X_ij (thus ∑_ij X̄_ij = 1). X̄ is factorized as
X̄ ≈ W S D^T,  ∑_k W_ik = 1,  ∑_k D_jk = 1,  ∑_k S_kk = 1.  (2)
where X is the m × n word-document semantic matrix, X = WSD, W is the word class-conditional probability, and D is the document class-conditional probability and S is the class probability distribution. PLSI provides a simultaneous solution for the word and document class-conditional distributions. Our model provides a simultaneous solution for clustering the rows and the columns of X.
To avoid ambiguity, the orthogonality conditions
F^T F = I,  G^T G = I.  (3)
can be imposed to enforce each row of F and G to possess only one nonzero entry. Approximating the term-document matrix with a tri-factorization while imposing non-negativity and orthogonality constraints gives a principled framework for simultaneously clustering the rows (words) and columns (documents) of X. In the context of co-clustering, these models return excellent empirical performance, see e.g., (Ding et al., 2006). Our goal now is to bias these models with constraints incorporating (a) labels of features (coming from a domain-independent sentiment lexicon), and (b) labels of documents for the purposes of domain-specific adaptation. These enhancements are addressed in Sections 4 and 5 respectively.
4 Incorporating Lexical Knowledge
We used a sentiment lexicon generated by the IBM India Research Labs that was developed for other text mining applications (Ramakrishnan et al., 2003). It contains 2,968 words that have been human-labeled as expressing positive or negative sentiment. In total, there are 1,267 positive (e.g. "great") and 1,701 negative (e.g., "bad") unique terms after stemming. We eliminated terms that were ambiguous and dependent on context, such as "dear" and "fine". It should be noted that this list was constructed without a specific domain in mind, which is further motivation for using training examples and unlabeled data to learn domain-specific connotations. Lexical knowledge in the form of the polarity of terms in this lexicon can be introduced into the matrix factorization model. By partially specifying term polarities via F, the lexicon influences the sentiment predictions G over documents.
4.1 Representing Knowledge in Word Space
Let F0 represent prior knowledge about sentiment-laden words in the lexicon, i.e., if word i is a positive word, (F0)_i1 = 1, while if it is negative, (F0)_i2 = 1. Note that one may also use soft sentiment polarities, though our experiments are conducted with hard assignments.
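As a concrete illustration of the prior just described, the following sketch builds F0 from a generic sentiment lexicon, together with the diagonal indicator of which word rows carry prior knowledge (the matrix C1 that is defined formally in the next paragraph). This is an assumed construction for illustration only, not the authors' code.

```python
import numpy as np

def build_prior_F0(vocab, positive_words, negative_words):
    """vocab: list of the m corpus terms; positive_words / negative_words:
    sets of lexicon terms (after the same stemming as the vocabulary)."""
    m, k = len(vocab), 2                  # k = 2 classes: positive, negative
    F0 = np.zeros((m, k))
    c1_diag = np.zeros(m)
    for i, word in enumerate(vocab):
        if word in positive_words:
            F0[i, 0] = 1.0                # hard assignment for a positive word
            c1_diag[i] = 1.0
        elif word in negative_words:
            F0[i, 1] = 1.0                # hard assignment for a negative word
            c1_diag[i] = 1.0
    C1 = np.diag(c1_diag)                 # dense only for illustration
    return F0, C1
```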
This information is incorporated in the tri-factorization model via a squared loss term,
min_{F,G,S} ∥X − F S G^T∥² + α Tr[(F − F0)^T C1 (F − F0)]  (4)
where the notation Tr(A) means the trace of the matrix A. Here, α > 0 is a parameter which determines the extent to which we enforce F ≈ F0, and C1 is an m × m diagonal matrix whose entry (C1)_ii = 1 if the category of the i-th word is known (i.e., specified by the i-th row of F0) and (C1)_ii = 0 otherwise. The squared loss term ensures that the solution for F in the otherwise unsupervised learning problem is close to the prior knowledge F0. Note that if C1 = I, then we know the class orientation of all the words and thus have a full specification of F0, and Eq.(4) is then reduced to
min_{F,G,S} ∥X − F S G^T∥² + α ∥F − F0∥²  (5)
The above model is generic and it allows certain flexibility. For example, in some cases our prior knowledge of F0 is not very accurate and we use a smaller α so that the final results do not depend on F0 very much, i.e., the results are mostly unsupervised learning results. In addition, the introduction of C1 allows us to incorporate partial knowledge of word polarity information.
4.2 Computational Algorithm
The optimization problem in Eq.(4) can be solved using the following update rules
G_jk ← G_jk (X^T F S)_jk / (G G^T X^T F S)_jk,  (6)
S_ik ← S_ik (F^T X G)_ik / (F^T F S G^T G)_ik,  (7)
F_ik ← F_ik (X G S^T + α C1 F0)_ik / (F F^T X G S^T + α C1 F)_ik.  (8)
The algorithm consists of an iterative procedure using the above three rules until convergence. We call this approach Matrix Factorization with Lexical Knowledge (MFLK) and outline the precise steps below.
Algorithm 1 Matrix Factorization with Lexical Knowledge (MFLK)
begin
1. Initialization: Initialize F = F0, initialize G to the K-means clustering results, and set S = (F^T F)^{-1} F^T X G (G^T G)^{-1}.
2. Iteration:
   Update G: fixing F, S, update G;
   Update F: fixing S, G, update F;
   Update S: fixing F, G, update S.
end
4.3 Algorithm Correctness and Convergence
Updating F, G, S using the rules above leads to an asymptotic convergence to a local minimum. This can be proved using arguments similar to (Ding et al., 2006). We outline the proof of correctness for updating F since the squared loss term that involves F is a new component in our models.
Theorem 1 The above iterative algorithm converges.
Theorem 2 At convergence, the solution satisfies the Karush-Kuhn-Tucker optimality condition, i.e., the algorithm converges correctly to a local optimum.
Theorem 1 can be proved using the standard auxiliary function approach used in (Lee and Seung, 2001).
Proof of Theorem 2. Following the theory of constrained optimization (Nocedal and Wright, 1999), we minimize the following function
L(F) = ∥X − F S G^T∥² + α Tr[(F − F0)^T C1 (F − F0)]
Note that the gradient of L is
∂L/∂F = −2 X G S^T + 2 F S G^T G S^T + 2α C1 (F − F0).  (9)
The KKT complementarity condition for the non-negativity of F_ik gives
[−2 X G S^T + 2 F S G^T G S^T + 2α C1 (F − F0)]_ik F_ik = 0.  (10)
This is the fixed point relation that local minima for F must satisfy. Given an initial guess of F, the successive update of F using Eq.(8) will converge to a local minimum. At convergence, we have
F_ik = F_ik (X G S^T + α C1 F0)_ik / (F F^T X G S^T + α C1 F)_ik,
which is equivalent to the KKT condition of Eq.(10). The correctness of the updating rules for G in Eq.(6) and S in Eq.(7) has been proved in (Ding et al., 2006). ∎
Note that we do not enforce exact orthogonality in our updating rules since this often implies softer class assignments.
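A minimal dense NumPy sketch of Algorithm 1 using the update rules in Eqs. (6)-(8). It deviates from the paper in two assumed simplifications: G is initialized randomly rather than by K-means, and a small epsilon keeps the multiplicative updates away from exact zeros; a sparse-matrix implementation would be needed for realistic vocabulary sizes.

```python
import numpy as np

def mflk(X, F0, C1, alpha=0.5, n_iter=100, eps=1e-9):
    """X: m x n term-document matrix, F0: m x k lexical prior,
    C1: m x m diagonal indicator of lexicon words."""
    m, n = X.shape
    k = F0.shape[1]
    F = F0.copy() + eps
    G = np.random.rand(n, k) + eps                     # random start instead of K-means
    S = np.linalg.pinv(F.T @ F) @ F.T @ X @ G @ np.linalg.pinv(G.T @ G)
    S = np.abs(S) + eps                                # keep S non-negative
    for _ in range(n_iter):
        # Eq. (6): G_jk <- G_jk (X^T F S)_jk / (G G^T X^T F S)_jk
        XtFS = X.T @ F @ S
        G *= XtFS / (G @ (G.T @ XtFS) + eps)
        # Eq. (8): F_ik <- F_ik (X G S^T + a C1 F0)_ik / (F F^T X G S^T + a C1 F)_ik
        XGSt = X @ G @ S.T
        F *= (XGSt + alpha * (C1 @ F0)) / (F @ (F.T @ XGSt) + alpha * (C1 @ F) + eps)
        # Eq. (7): S_ik <- S_ik (F^T X G)_ik / (F^T F S G^T G)_ik
        S *= (F.T @ X @ G) / (F.T @ F @ S @ (G.T @ G) + eps)
    return F, S, G

# Document i is assigned the sentiment class argmax_k G[i, k].
```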
5 Semi-Supervised Learning With Lexical Knowledge
So far our models have made no demands on human effort, other than unsupervised collection of the term-document matrix and a one-time effort in compiling a domain-independent sentiment lexicon. We now assume that a few documents are manually labeled for the purposes of capturing some domain-specific connotations, leading to a more domain-adapted model. The partial labels on documents can be described using G0, where (G0)_i1 = 1 if the document expresses positive sentiment, and (G0)_i2 = 1 for negative sentiment. As with F0, one can also use soft sentiment labeling for documents, though our experiments are conducted with hard assignments. Therefore, semi-supervised learning with lexical knowledge can be described as
min_{F,G,S} ∥X − F S G^T∥² + α Tr[(F − F0)^T C1 (F − F0)] + β Tr[(G − G0)^T C2 (G − G0)]
where α > 0, β > 0 are parameters which determine the extent to which we enforce F ≈ F0 and G ≈ G0 respectively, and C1 and C2 are diagonal matrices indicating the entries of F0 and G0 that correspond to labeled entities. The squared loss terms ensure that the solution for F, G, in the otherwise unsupervised learning problem, is close to the prior knowledge F0 and G0.
5.1 Computational Algorithm
The above optimization problem can be solved using the following update rules
G_jk ← G_jk (X^T F S + β C2 G0)_jk / (G G^T X^T F S + β G G^T C2 G0)_jk  (11)
S_ik ← S_ik (F^T X G)_ik / (F^T F S G^T G)_ik.  (12)
F_ik ← F_ik (X G S^T + α C1 F0)_ik / (F F^T X G S^T + α C1 F)_ik.  (13)
Thus the algorithm for semi-supervised learning with lexical knowledge based on our matrix factorization framework, referred to as SSMFLK, consists of an iterative procedure using the above three rules until convergence. The correctness and convergence of the algorithm can also be proved using arguments similar to those we outlined earlier for MFLK in Section 4.3.
A quick word about computational complexity. The term-document matrix is typically very sparse with z ≪ nm non-zero entries, while k is typically also much smaller than n, m. By using sparse matrix multiplications and avoiding dense intermediate matrices, the updates can be implemented very efficiently and easily. In particular, updating F, S, G each takes O(k²(m + n) + kz) time per iteration, which scales linearly with the dimensions and density of the data matrix. Empirically, the number of iterations before practical convergence is usually very small (less than 100). Thus, computationally our approach scales to large datasets even though our experiments are run on relatively small-sized datasets.
6 Experiments
6.1 Datasets Description
Four different datasets are used in our experiments.
Movie Reviews: This is a popular dataset in the sentiment analysis literature (Pang et al., 2002). It consists of 1000 positive and 1000 negative movie reviews drawn from the IMDB archive of the rec.arts.movies.reviews newsgroups.
Lotus blogs: The dataset is targeted at detecting sentiment around enterprise software, specifically pertaining to the IBM Lotus brand (Sindhwani and Melville, 2008). An unlabeled set of blog posts was created by randomly sampling 2000 posts from a universe of 14,258 blogs that discuss issues relevant to Lotus software. In addition to this unlabeled set, 145 posts were chosen for manual labeling. These posts came from 14 individual blogs, 4 of which are actively posting negative content on the brand, with the rest tending to write more positive or neutral posts. The data was collected by downloading the latest posts from each blogger's RSS feeds, or accessing the blog's archives.
Manual labeling resulted in 34 positive and 111 negative examples.
Political candidate blogs: For our second blog domain, we used data gathered from 16,742 political blogs, which contain over 500,000 posts. As with the Lotus dataset, an unlabeled set was created by randomly sampling 2000 posts. 107 posts were chosen for labeling. A post was labeled as having positive or negative sentiment about a specific candidate (Barack Obama or Hillary Clinton) if it explicitly mentioned the candidate in positive or negative terms. This resulted in 49 positively and 58 negatively labeled posts.
Amazon Reviews: The dataset contains product reviews taken from Amazon.com from 4 product types: Kitchen, Books, DVDs, and Electronics (Blitzer et al., 2007). The dataset contains about 4000 positive reviews and 4000 negative reviews and can be obtained from http://www.cis.upenn.edu/~mdredze/datasets/sentiment/.
For all datasets, we picked the 5000 words with the highest document frequency to generate the vocabulary. Stopwords were removed and a normalized term-frequency representation was used. Genuinely unlabeled posts for Political and Lotus were used for the semi-supervised learning experiments in Section 6.3; they were not used in Section 6.2 on the effect of lexical prior knowledge. In the experiments, we set α, the parameter determining the extent to which to enforce the feature labels, to be 1/2, and β, the corresponding parameter for enforcing document labels, to be 1.
6.2 Sentiment Analysis with Lexical Knowledge
Of course, one can remove all burden on human effort by simply using unsupervised techniques. Our interest in the first set of experiments is to explore the benefits of incorporating a sentiment lexicon over unsupervised approaches. Does a one-time effort in compiling a domain-independent dictionary and using it for different sentiment tasks pay off in comparison to simply using unsupervised methods? In our case, matrix tri-factorization and other co-clustering methods form the obvious unsupervised baseline for comparison, and so we start by comparing our method (MFLK) with the following methods:
• Four document clustering methods: K-means, Tri-Factor Nonnegative Matrix Factorization (TNMF) (Ding et al., 2006), Information-Theoretic Co-clustering (ITCC) (Dhillon et al., 2003), and the Euclidean Co-clustering algorithm (ECC) (Cho et al., 2004). These methods do not make use of the sentiment lexicon.
• Feature Centroid (FC): This is a simple dictionary-based baseline method. Recall that each word can be expressed as a "bag-of-documents" vector. In this approach, we compute the centroids of these vectors, one corresponding to positive words and another corresponding to negative words. This yields a two-dimensional representation for documents, on which we then perform K-means clustering.
Performance Comparison
Figure 1 shows the experimental results on the four datasets using accuracy as the performance measure. The results are obtained by averaging 20 runs. It can be observed that our MFLK method can effectively utilize the lexical knowledge to improve the quality of sentiment prediction.
Figure 1: Accuracy results on four datasets
Size of Sentiment Lexicon
We also investigate the effects of the size of the sentiment lexicon on the performance of our model. Figure 2 shows results with random subsets of the lexicon of increasing size.
We observe that generally the performance increases as more and more lexical supervision is provided.
Figure 2: MFLK accuracy as the size of the sentiment lexicon (i.e., the number of words in the lexicon) increases on the four datasets
Robustness to Vocabulary Size
High dimensionality and noise can have a profound impact on the comparative performance of clustering and semi-supervised learning algorithms. We simulate scenarios with different vocabulary sizes by selecting words based on information gain. It should, however, be kept in mind that in a truly unsupervised setting document labels are unavailable and therefore information gain cannot be practically computed. Figure 3 and Figure 4 show results for the Lotus and Amazon datasets respectively and are representative of performance on the other datasets. MFLK tends to retain its position as the best performing method even at different vocabulary sizes. ITCC performance is also noteworthy given that it is a completely unsupervised method.
Figure 3: Accuracy results on Lotus dataset with increasing vocabulary size
Figure 4: Accuracy results on Amazon dataset with increasing vocabulary size
6.3 Sentiment Analysis with Dual Supervision
We now assume that together with labeled features from the sentiment lexicon, we also have access to a few labeled documents. The natural question is whether the presence of lexical constraints leads to better semi-supervised models. In this section, we compare our method (SSMFLK) with the following three semi-supervised approaches: (1) the algorithm proposed in (Zhou et al., 2003), which conducts semi-supervised learning with local and global consistency (Consistency Method); (2) Zhu et al.'s harmonic Gaussian field method coupled with Class Mass Normalization (Harmonic-CMN) (Zhu et al., 2003); and (3) the Green's function learning algorithm (Green's Function) proposed in (Ding et al., 2007). We also compare the results of SSMFLK with those of two supervised classification methods: Support Vector Machine (SVM) and Naive Bayes. Both of these methods have been widely used in sentiment analysis. In particular, the use of SVMs in (Pang et al., 2002) initially sparked interest in using machine learning methods for sentiment classification. Note that none of these competing methods utilizes lexical knowledge.
The results are presented in Figure 5, Figure 6, Figure 7, and Figure 8. We note that our SSMFLK method either outperforms all other methods over the entire range of the number of labeled documents (Movies, Political), or ultimately outpaces the other methods (Lotus, Amazon) as a few document labels come in.
Learning Domain-Specific Connotations
In our first set of experiments, we incorporated the sentiment lexicon in our models and learnt the sentiment orientation of words and documents via the F and G factors respectively.
Figure 5: Accuracy results with increasing number of labeled documents on Movies dataset
Figure 6: Accuracy results with increasing number of labeled documents on Lotus dataset
In the second set of experiments, we additionally introduced labeled documents for domain-specific adjustments. Between these experiments, we can now look for words that switch sentiment polarity. These words are interesting because their domain-specific connotation differs from their lexical orientation. For Amazon reviews, the following words switched polarity from positive to negative: fan, important, learning, cons, fast, feature, happy, memory, portable, simple, small, work, while the following words switched polarity from negative to positive: address, finish, lack, mean, budget, rent, throw. Note that words like fan and memory probably refer to products or product components (i.e., computer fan and memory) in the Amazon review context but have a very different connotation, say, in the context of movie reviews, where they probably refer to movie fanfare and memorable performances. We were surprised to see happy switch polarity! Two examples of its negative-sentiment usage are: "I ended up buying a Samsung and I couldn't be more happy" and "BORING, not one single exciting thing about this book. I was happy when my lunch break ended so I could go back to work and stop reading."
Figure 7: Accuracy results with increasing number of labeled documents on Political dataset
Figure 8: Accuracy results with increasing number of labeled documents on Amazon dataset
7 Conclusion
The primary contribution of this paper is to propose and benchmark new methodologies for sentiment analysis. Non-negative Matrix Factorizations constitute a rich body of algorithms that have found applicability in a variety of machine learning applications: from recommender systems to document clustering. We have shown how to build effective sentiment models by appropriately constraining the factors using lexical prior knowledge and document annotations. To more effectively utilize unlabeled data and induce domain-specific adaptation of our models, several extensions are possible: facilitating learning from related domains, incorporating hyperlinks between documents, incorporating synonyms or co-occurrences between words, etc. As a topic of vigorous current activity, there are several very recently proposed competing methodologies for sentiment analysis that we would like to benchmark against. These are topics for future work.
Acknowledgement: The work of T.
Li is partially supported by NSF grants DMS-0844513 and CCF-0830659. We would also like to thank Prem Melville and Richard Lawrence for their support. 251 References J. Blitzer, M. Dredze, and F. Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proceedings of ACL, pages 440–447. H. Cho, I. Dhillon, Y. Guan, and S. Sra. 2004. Minimum sum squared residue co-clustering of gene expression data. In Proceedings of The 4th SIAM Data Mining Conference, pages 22–24, April. S. Das and M. Chen. 2001. Yahoo! for amazon: Extracting market sentiment from stock message boards. In Proceedings of the 8th Asia Pacific Finance Association (APFA). I. S. Dhillon, S. Mallela, and D. S. Modha. 2003. Information-theoretical co-clustering. In Proceedings of ACM SIGKDD, pages 89–98. I. S. Dhillon. 2001. Co-clustering documents and words using bipartite spectral graph partitioning. In Proceedings of ACM SIGKDD. C. Ding, T. Li, W. Peng, and H. Park. 2006. Orthogonal nonnegative matrix tri-factorizations for clustering. In Proceedings of ACM SIGKDD, pages 126– 135. C. Ding, R. Jin, T. Li, and H.D. Simon. 2007. A learning framework using green’s function and kernel regularization with application to recommender system. In Proceedings of ACM SIGKDD, pages 260–269. G. Druck, G. Mann, and A. McCallum. 2008. Learning from labeled features using generalized expectation criteria. In SIGIR. A. Goldberg and X. Zhu. 2006. Seeing stars when there aren’t many stars: Graph-based semisupervised learning for sentiment categorization. In HLT-NAACL 2006: Workshop on Textgraphs. T. Hofmann. 1999. Probabilistic latent semantic indexing. Proceeding of SIGIR, pages 50–57. M. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In KDD, pages 168–177. S.-M. Kim and E. Hovy. 2004. Determining the sentiment of opinions. In Proceedings of International Conference on Computational Linguistics. D.D. Lee and H.S. Seung. 2001. Algorithms for nonnegative matrix factorization. In Advances in Neural Information Processing Systems 13. T. Li, C. Ding, Y. Zhang, and B. Shao. 2008. Knowledge transformation from word space to document space. In Proceedings of SIGIR, pages 187–194. B. Liu, X. Li, W.S. Lee, and P. Yu. 2004. Text classification by labeling words. In AAAI. V. Ng, S. Dasgupta, and S. M. Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In COLING & ACL. J. Nocedal and S.J. Wright. 1999. Numerical Optimization. Springer-Verlag. B. Pang and L. Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In ACL. B. Pang and L. Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval: Vol. 2: No 12, pp 1-135 http://www.cs.cornell.edu/home/llee/opinionmining-sentiment-analysis-survey.html. B. Pang, L. Lee, and S. Vaithyanathan. 2002. Thumbs up? sentiment classification using machine learning techniques. In EMNLP. G. Ramakrishnan, A. Jadhav, A. Joshi, S. Chakrabarti, and P. Bhattacharyya. 2003. Question answering via bayesian inference on lexical relations. In ACL, pages 1–10. T. Sandler, J. Blitzer, P. Talukdar, and L. Ungar. 2008. Regularized learning with networks of features. In NIPS. R.E. Schapire, M. Rochery, M.G. Rahim, and N. Gupta. 2002. Incorporating prior knowledge into boosting. In ICML. V. Sindhwani and P. Melville. 2008. Documentword co-regularization for semi-supervised sentiment analysis. 
In Proceedings of IEEE ICDM. V. Sindhwani, J. Hu, and A. Mojsilovic. 2008. Regularized co-clustering with dual supervision. In Proceedings of NIPS. P. Turney. 2002. Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews. Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 417–424. X. Wu and R. Srihari. 2004. Incorporating prior knowledge with weighted margin support vector machines. In KDD. H. Zha, X. He, C. Ding, M. Gu, and H.D. Simon. 2001. Bipartite graph partitioning and data clustering. Proceedings of ACM CIKM. D. Zhou, O. Bousquet, T.N. Lal, J. Weston, and B. Scholkopf. 2003. Learning with local and global consistency. In Proceedings of NIPS. X. Zhu, Z. Ghahramani, and J. Lafferty. 2003. Semisupervised learning using gaussian fields and harmonic functions. In Proceedings of ICML. L. Zhuang, F. Jing, and X. Zhu. 2006. Movie review mining and summarization. In CIKM, pages 43–50. 252
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 253–261, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP Discovering the Discriminative Views: Measuring Term Weights for Sentiment Analysis Jungi Kim, Jin-Ji Li and Jong-Hyeok Lee Division of Electrical and Computer Engineering Pohang University of Science and Technology, Pohang, Republic of Korea {yangpa,ljj,jhlee}@postech.ac.kr Abstract This paper describes an approach to utilizing term weights for sentiment analysis tasks and shows how various term weighting schemes improve the performance of sentiment analysis systems. Previously, sentiment analysis was mostly studied under data-driven and lexicon-based frameworks. Such work generally exploits textual features for fact-based analysis tasks or lexical indicators from a sentiment lexicon. We propose to model term weighting into a sentiment analysis system utilizing collection statistics, contextual and topicrelated characteristics as well as opinionrelated properties. Experiments carried out on various datasets show that our approach effectively improves previous methods. 1 Introduction With the explosion in the amount of commentaries on current issues and personal views expressed in weblogs on the Internet, the field of studying how to analyze such remarks and sentiments has been increasing as well. The field of opinion mining and sentiment analysis involves extracting opinionated pieces of text, determining the polarities and strengths, and extracting holders and targets of the opinions. Much research has focused on creating testbeds for sentiment analysis tasks. Most notable and widely used are Multi-Perspective Question Answering (MPQA) and Movie-review datasets. MPQA is a collection of newspaper articles annotated with opinions and private states at the subsentence level (Wiebe et al., 2003). Movie-review dataset consists of positive and negative reviews from the Internet Movie Database (IMDb) archive (Pang et al., 2002). Evaluation workshops such as TREC and NTCIR have recently joined in this new trend of research and organized a number of successful meetings. At the TREC Blog Track meetings, researchers have dealt with the problem of retrieving topically-relevant blog posts and identifying documents with opinionated contents (Ounis et al., 2008). NTCIR Multilingual Opinion Analysis Task (MOAT) shared a similar mission, where participants are provided with a number of topics and a set of relevant newspaper articles for each topic, and asked to extract opinion-related properties from enclosed sentences (Seki et al., 2008). Previous studies for sentiment analysis belong to either the data-driven approach where an annotated corpus is used to train a machine learning (ML) classifier, or to the lexicon-based approach where a pre-compiled list of sentiment terms is utilized to build a sentiment score function. This paper introduces an approach to the sentiment analysis tasks with an emphasis on how to represent and evaluate the weights of sentiment terms. We propose a number of characteristics of good sentiment terms from the perspectives of informativeness, prominence, topic–relevance, and semantic aspects using collection statistics, contextual information, semantic associations as well as opinion–related properties of terms. These term weighting features constitute the sentiment analysis model in our opinion retrieval system. 
We test our opinion retrieval system with TREC and NTCIR datasets to validate the effectiveness of our term weighting features. We also verify the effectiveness of the statistical features used in datadriven approaches by evaluating an ML classifier with labeled corpora. 2 Related Work Representing text with salient features is an important part of a text processing task, and there exists many works that explore various features for 253 text analysis systems (Sebastiani, 2002; Forman, 2003). Sentiment analysis task have also been using various lexical, syntactic, and statistical features (Pang and Lee, 2008). Pang et al. (2002) employed n-gram and POS features for ML methods to classify movie-review data. Also, syntactic features such as the dependency relationship of words and subtrees have been shown to effectively improve the performances of sentiment analysis (Kudo and Matsumoto, 2004; Gamon, 2004; Matsumoto et al., 2005; Ng et al., 2006). While these features are usually employed by data-driven approaches, there are unsupervised approaches for sentiment analysis that make use of a set of terms that are semantically oriented toward expressing subjective statements (Yu and Hatzivassiloglou, 2003). Accordingly, much research has focused on recognizing terms’ semantic orientations and strength, and compiling sentiment lexicons (Hatzivassiloglou and Mckeown, 1997; Turney and Littman, 2003; Kamps et al., 2004; Whitelaw et al., 2005; Esuli and Sebastiani, 2006). Interestingly, there are conflicting conclusions about the usefulness of the statistical features in sentiment analysis tasks (Pang and Lee, 2008). Pang et al. (2002) presents empirical results indicating that using term presence over term frequency is more effective in a data-driven sentiment classification task. Such a finding suggests that sentiment analysis may exploit different types of characteristics from the topical tasks, that, unlike fact-based text analysis tasks, repetition of terms does not imply a significance on the overall sentiment. On the other hand, Wiebe et al. (2004) have noted that hapax legomena (terms that only appear once in a collection of texts) are good signs for detecting subjectivity. Other works have also exploited rarely occurring terms for sentiment analysis tasks (Dave et al., 2003; Yang et al., 2006). The opinion retrieval task is a relatively recent issue that draws both the attention of IR and NLP communities. Its task is to find relevant documents that also contain sentiments about a given topic. Generally, the opinion retrieval task has been approached as a two–stage task: first, retrieving topically relevant documents, then reranking the documents by the opinion scores (Ounis et al., 2006). This approach is also appropriate for evaluation systems such as NTCIR MOAT that assumes that the set of topically relevant documents are already known in advance. On the other hand, there are also some interesting works on modeling the topic and sentiment of documents in a unified way (Mei et al., 2007; Zhang and Ye, 2008). 3 Term Weighting and Sentiment Analysis In this section, we describe the characteristics of terms that are useful in sentiment analysis, and present our sentiment analysis model as part of an opinion retrieval system and an ML sentiment classifier. 3.1 Characteristics of Good Sentiment Terms This section examines the qualities of useful terms for sentiment analysis tasks and corresponding features. 
For the sake of organization, we categorize the sources of features into either global or local knowledge, and either topic-independent or topic-dependent knowledge. Topic-independently speaking, a good sentiment term is discriminative and prominent, such that the appearance of the term imposes greater influence on the judgment of the analysis system. The rare occurrence of terms in document collections has been regarded as a very important feature in IR methods, and effective IR models of today, either explicitly or implicitly, accommodate this feature as an Inverse Document Frequency (IDF) heuristic (Fang et al., 2004). Similarly, prominence of a term is recognized by the frequency of the term in its local context, formulated as Term Frequency (TF) in IR. If a topic of the text is known, terms that are relevant and descriptive of the subject should be regarded to be more useful than topically-irrelevant and extraneous terms. One way of measuring this is using associations between the query and terms. Statistical measures of associations between terms include estimations by the co-occurrence in the whole collection, such as Point-wise Mutual Information (PMI) and Latent Semantic Analysis (LSA). Another method is to use proximal information of the query and the word, using syntactic structure such as dependency relations of words that provide the graphical representation of the text (Mullen and Collier, 2004). The minimum spans of words in such graph may represent their associations in the text. Also, the distance between words in the local context or in the thesauruslike dictionaries such as WordNet may be approximated as such measure. 254 3.2 Opinion Retrieval Model The goal of an opinion retrieval system is to find a set of opinionated documents that are relevant to a given topic. We decompose the opinion retrieval system into two tasks: the topical retrieval task and the sentiment analysis task. This two-stage approach for opinion retrieval has been taken by many systems and has been shown to perform well (Ounis et al., 2006). The topic and the sentiment aspects of the opinion retrieval task are modeled separately, and linearly combined together to produce a list of topically-relevant and opinionated documents as below. ScoreOpRet(D, Q) = λ·Scorerel(D, Q)+(1−λ)·Scoreop(D, Q) The topic-relevance model Scorerel may be substituted by any IR system that retrieves relevant documents for the query Q. For tasks such as NTCIR MOAT, relevant documents are already known in advance and it becomes unnecessary to estimate the relevance degree of the documents. We focus on modeling the sentiment aspect of the opinion retrieval task, assuming that the topicrelevance of documents is provided in some way. To assign documents with sentiment degrees, we estimate the probability of a document D to generate a query Q and to possess opinions as indicated by a random variable Op.1 Assuming uniform prior probabilities of documents D, query Q, and Op, and conditional independence between Q and Op, the opinion score function reduces to estimating the generative probability of Q and Op given D. 
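As a concrete illustration of the two-stage combination above, the following Python sketch reranks documents by the interpolation ScoreOpRet(D, Q) = λ·Scorerel(D, Q) + (1−λ)·Scoreop(D, Q); the function and variable names are illustrative rather than taken from the paper, and λ would be tuned on held-out topics as the authors do in their experiments.

def rerank_by_opinion(rel_scores, op_scores, lam=0.5):
    # rel_scores, op_scores: dicts mapping document ids to Score_rel(D,Q) and Score_op(D,Q).
    # lam is the linear interpolation weight (optimized on development topics in the paper).
    combined = {}
    for doc_id, rel in rel_scores.items():
        combined[doc_id] = lam * rel + (1.0 - lam) * op_scores.get(doc_id, 0.0)
    # return document ids ordered by the combined opinion retrieval score
    return sorted(combined, key=combined.get, reverse=True)

The opinion score Scoreop itself is derived next; the derivation below reduces it to a sum over the words of the document.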
Scoreop(D, Q) ≡p(D | Op, Q) ∝p(Op, Q | D) If we regard that the document D is represented as a bag of words and that the words are uniformly distributed, then p(Op, Q | D) = X w∈D p(Op, Q | w) · p(w | D) = X w∈D p(Op | w) · p(Q | w) · p(w | D) (1) Equation 1 consists of three factors: the probability of a word to be opinionated (P(Op|w)), the likelihood of a query given a word (P(Q|w)), and the probability of a document generating a word (P(w|D)). Intuitively speaking, the probability of a document embodying topically related opinion is estimated by accumulating the probabilities of all 1Throughout this paper, Op indicates Op = 1. words from the document to have sentiment meanings and associations with the given query. In the following sections, we assess the three factors of the sentiment models from the perspectives of term weighting. 3.2.1 Word Sentiment Model Modeling the sentiment of a word has been a popular approach in sentiment analysis. There are many publicly available lexicon resources. The size, format, specificity, and reliability differ in all these lexicons. For example, lexicon sizes range from a few hundred to several hundred thousand. Some lexicons assign real number scores to indicate sentiment orientations and strengths (i.e. probabilities of having positive and negative sentiments) (Esuli and Sebastiani, 2006) while other lexicons assign discrete classes (weak/strong, positive/negative) (Wilson et al., 2005). There are manually compiled lexicons (Stone et al., 1966) while some are created semi-automatically by expanding a set of seed terms (Esuli and Sebastiani, 2006). The goal of this paper is not to create or choose an appropriate sentiment lexicon, but rather it is to discover useful term features other than the sentiment properties. For this reason, one sentiment lexicon, namely SentiWordNet, is utilized throughout the whole experiment. SentiWordNet is an automatically generated sentiment lexicon using a semi-supervised method (Esuli and Sebastiani, 2006). It consists of WordNet synsets, where each synset is assigned three probability scores that add up to 1: positive, negative, and objective. These scores are assigned at sense level (synsets in WordNet), and we use the following equations to assess the sentiment scores at the word level. p(P os | w) = max s∈synset(w) SW NP os(s) p(Neg | w) = max s∈synset(w) SW NNeg(s) p(Op | w) = max (p(P os | w), p(Neg | w)) where synset(w) is the set of synsets of w and SWNPos(s), SWNNeg(s) are positive and negative scores of a synset in SentiWordNet. We assess the subjective score of a word as the maximum value of the positive and the negative scores, because a word has either a positive or a negative sentiment in a given context. The word sentiment model can also make use of other types of sentiment lexicons. The sub255 jectivity lexicon used in OpinionFinder2 is compiled from several manually and automatically built resources. Each word in the lexicon is tagged with the strength (strong/weak) and polarity (Positive/Negative/Neutral). The word sentiment can be modeled as below. P (P os|w) = 8 > < > : 1.0 if w is Positive and Strong 0.5 if w is Positive and Weak 0.0 otherwise P (Op | w) = max (p(P os | w), p(Neg | w)) 3.2.2 Topic Association Model If a topic is given in the sentiment analysis, terms that are closely associated with the topic should be assigned heavy weighting. 
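Before turning to the topic association model, the word sentiment model of Section 3.2.1 and the accumulation in Equation (1) can be sketched as below. The lexicon structure is a toy stand-in for SentiWordNet (a word mapped to the positive/negative scores of its synsets), and assoc and gen are hypothetical callables standing in for the topic association and word generation models described in Sections 3.2.2 and 3.2.3.

def word_sentiment(word, lexicon):
    # p(Op|w) = max(p(Pos|w), p(Neg|w)), where p(Pos|w) and p(Neg|w) are the maximum
    # positive and negative scores over the synsets of w (SentiWordNet-style scores).
    synsets = lexicon.get(word, [])
    if not synsets:
        return 0.0
    p_pos = max(pos for pos, neg in synsets)
    p_neg = max(neg for pos, neg in synsets)
    return max(p_pos, p_neg)

def sentiment_score(doc_words, query_words, lexicon, assoc, gen):
    # Equation (1): Score_op(D,Q) is proportional to sum_w p(Op|w) * p(Q|w) * p(w|D),
    # with p(Q|w) proportional to the summed association scores between w and the query terms.
    score = 0.0
    for w in set(doc_words):
        p_q_w = sum(assoc(q, w) for q in query_words)
        score += word_sentiment(w, lexicon) * p_q_w * gen(w, doc_words)
    return score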
For example, sentiment words such as scary and funny are more likely to be associated with topic words such as book and movie than grocery or refrigerator. In the topic association model, p(Q | w) is estimated from the associations between the word w and a set of query terms Q. p(Q | w) = P q∈Q Asc-Score(q, w) | Q | ∝ X q∈Q Asc-Score(q, w) Asc-Score(q, w) is the association score between q and w, and | Q | is the number of query words. To measure associations between words, we employ statistical approaches using document collections such as LSA and PMI, and local proximity features using the distance in dependency trees or texts. Latent Semantic Analysis (LSA) (Landauer and Dumais, 1997) creates a semantic space from a collection of documents to measure the semantic relatedness of words. Point-wise Mutual Information (PMI) is a measure of associations used in information theory, where the association between two words is evaluated with the joint and individual distributions of the two words. PMI-IR (Turney, 2001) uses an IR system and its search operators to estimate the probabilities of two terms and their conditional probabilities. Equations for association scores using LSA and PMI are given below. Asc-ScoreLSA(w1, w2) = 1 + LSA(w1, w2) 2 Asc-ScoreP MI(w1, w2) = 1 + PMI-IR(w1, w2) 2 2http://www.cs.pitt.edu/mpqa/ For the experimental purpose, we used publicly available online demonstrations for LSA and PMI. For LSA, we used the online demonstration mode from the Latent Semantic Analysis page from the University of Colorado at Boulder.3 For PMI, we used the online API provided by the CogWorks Lab at the Rensselaer Polytechnic Institute.4 Word associations between two terms may also be evaluated in the local context where the terms appear together. One way of measuring the proximity of terms is using the syntactic structures. Given the dependency tree of the text, we model the association between two terms as below. Asc-ScoreDT P (w1, w2) = ( 1.0 min. span in dep. tree ≤Dsyn 0.5 otherwise where, Dsyn is arbitrarily set to 3. Another way is to use co-occurrence statistics as below. Asc-ScoreW P (w1, w2) = ( 1.0 if distance betweenw1andw2 ≤K 0.5 otherwise where K is the maximum window size for the co-occurrence and is arbitrarily set to 3 in our experiments. The statistical approaches may suffer from data sparseness problems especially for named entity terms used in the query, and the proximal clues cannot sufficiently cover all term–query associations. To avoid assigning zero probabilities, our topic association models assign 0.5 to word pairs with no association and 1.0 to words with perfect association. Note that proximal features using co-occurrence and dependency relationships were used in previous work. For opinion retrieval tasks, Yang et al. (2006) and Zhang and Ye (2008) used the cooccurrence of a query word and a sentiment word within a certain window size. Mullen and Collier (2004) manually annotated named entities in their dataset (i.e. title of the record and name of the artist for music record reviews), and utilized presence and position features in their ML approach. 3.2.3 Word Generation Model Our word generation model p(w | d) evaluates the prominence and the discriminativeness of a word 3http://lsa.colorado.edu/, default parameter settings for the semantic space (TASA, 1st year college level) and number of factors (300). 4http://cwl-projects.cogsci.rpi.edu/msr/, PMI-IR with the Google Search Engine. 256 w in a document d. 
These issues correspond to the core issues of traditional IR tasks. IR models, such as Vector Space (VS), probabilistic models such as BM25, and Language Modeling (LM), albeit in different forms of approach and measure, employ heuristics and formal modeling approaches to effectively evaluate the relevance of a term to a document (Fang et al., 2004). Therefore, we estimate the word generation model with popular IR models’ the relevance scores of a document d given w as a query.5 p(w | d) ≡IR-SCORE(w, d) In our experiments, we use the Vector Space model with Pivoted Normalization (VS), Probabilistic model (BM25), and Language modeling with Dirichlet Smoothing (LM). V SP N(w, d) = 1 + ln(1 + ln(c(w, d))) (1 −s) + s · | d | avgdl · ln N + 1 df(w) BM25(w, d) = ln N −df(w) + 0.5 df(w) + 0.5 · (k1 + 1) · c(w, d) k1 “ (1 −b) + b |d| avgdl ” + c(w, d) LMDI(w, d) = ln 1 + c(w, d) µ · c(w, C) ! + ln µ | d | +µ c(w, d) is the frequency of w in d, | d | is the number of unique terms in d, avgdl is the average | d | of all documents, N is the number of documents in the collection, df(w) is the number of documents with w, C is the entire collection, and k1 and b are constants 2.0 and 0.75. 3.3 Data-driven Approach To verify the effectiveness of our term weighting schemes in experimental settings of the datadriven approach, we carry out a set of simple experiments with ML classifiers. Specifically, we explore the statistical term weighting features of the word generation model with Support Vector machine (SVM), faithfully reproducing previous work as closely as possible (Pang et al., 2002). Each instance of train and test data is represented as a vector of features. We test various combinations of the term weighting schemes listed below. • PRESENCE: binary indicator for the presence of a term • TF: term frequency 5With proper assumptions and derivations, p(w | d) can be derived to language modeling approaches. Refer to (Zhai and Lafferty, 2004). • VS.TF: normalized tf as in VS • BM25.TF: normalized tf as in BM25 • IDF: inverse document frequency • VS.IDF: normalized idf as in VS • BM25.IDF: normalized idf as in BM25 4 Experiment Our experiments consist of an opinion retrieval task and a sentiment classification task. We use MPQA and movie-review corpora in our experiments with an ML classifier. For the opinion retrieval task, we use the two datasets used by TREC blog track and NTCIR MOAT evaluation workshops. The opinion retrieval task at TREC Blog Track consists of three subtasks: topic retrieval, opinion retrieval, and polarity retrieval. Opinion and polarity retrieval subtasks use the relevant documents retrieved at the topic retrieval stage. On the other hand, the NTCIR MOAT task aims to find opinionated sentences given a set of documents that are already hand-assessed to be relevant to the topic. 4.1 Opinion Retieval Task – TREC Blog Track 4.1.1 Experimental Setting TREC Blog Track uses the TREC Blog06 corpus (Macdonald and Ounis, 2006). It is a collection of RSS feeds (38.6 GB), permalink documents (88.8GB), and homepages (28.8GB) crawled on the Internet over an eleven week period from December 2005 to February 2006. Non-relevant content of blog posts such as HTML tags, advertisement, site description, and menu are removed with an effective internal spam removal algorithm (Nam et al., 2009). While our sentiment analysis model uses the entire relevant portion of the blog posts, further stopword removal and stemming is done for the blog retrieval system. 
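For reference, the three word generation scores of Section 3.2.3 can be written out as in the sketch below. The constants k1 = 2.0 and b = 0.75 follow the paper; the pivot slope s and the Dirichlet prior µ are assumed defaults (the paper does not report them), |d| is the number of unique terms in the document as defined above, and the collection statistic in the LM score is treated as the collection probability of w, following Zhai and Lafferty (2004).

import math

def vs_pivoted(c_wd, dl, avgdl, N, df_w, s=0.2):
    # Pivoted-normalization VS weight:
    # (1 + ln(1 + ln c(w,d))) / ((1-s) + s*|d|/avgdl) * ln((N+1)/df(w)); s = 0.2 is an assumed default.
    if c_wd == 0 or df_w == 0:
        return 0.0
    tf = 1.0 + math.log(1.0 + math.log(c_wd))
    return tf / ((1.0 - s) + s * dl / avgdl) * math.log((N + 1.0) / df_w)

def bm25(c_wd, dl, avgdl, N, df_w, k1=2.0, b=0.75):
    # BM25 weight with the constants given in the paper (k1 = 2.0, b = 0.75).
    idf = math.log((N - df_w + 0.5) / (df_w + 0.5))
    return idf * ((k1 + 1.0) * c_wd) / (k1 * ((1.0 - b) + b * dl / avgdl) + c_wd)

def lm_dirichlet(c_wd, dl, p_w_coll, mu=2000.0):
    # Dirichlet-smoothed language-model score for a single term (Zhai and Lafferty, 2004):
    # ln(1 + c(w,d) / (mu * p(w|C))) + ln(mu / (|d| + mu)); mu = 2000 is an assumed default.
    return math.log(1.0 + c_wd / (mu * p_w_coll)) + math.log(mu / (dl + mu))

The normalized tf and idf variants listed in Section 3.3 (VS.TF, BM25.TF, VS.IDF, BM25.IDF) correspond to the tf and idf factors of the first two functions taken separately.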
For the relevance retrieval model, we faithfully reproduce the passage-based language model with pseudo-relevance feedback (Lee et al., 2008). We use in total 100 topics from TREC 2007 and 2008 blog opinion retrieval tasks (07:901-950 and 08:1001-1050). We use the topics from Blog 07 to optimize the parameter for linearly combining the retrieval and opinion models, and use Blog 08 topics as our test data. Topics are extracted only from the Title field, using the Porter stemmer and a stopword list. 257 Table 1: Performance of opinion retrieval models using Blog 08 topics. The linear combination parameter λ is optimized on Blog 07 topics. † indicates statistical significance at the 1% level over the baseline. Model MAP R-prec P@10 TOPIC REL. 0.4052 0.4366 0.6440 BASELINE 0.4141 0.4534 0.6440 VS 0.4196 0.4542 0.6600 BM25 0.4235† 0.4579 0.6600 LM 0.4158 0.4520 0.6560 PMI 0.4177 0.4538 0.6620 LSA 0.4155 0.4526 0.6480 WP 0.4165 0.4533 0.6640 BM25·PMI 0.4238† 0.4575 0.6600 BM25·LSA 0.4237† 0.4578 0.6600 BM25·WP 0.4237† 0.4579 0.6600 BM25·PMI·WP 0.4242† 0.4574 0.6620 BM25·LSA·WP 0.4238† 0.4576 0.6580 4.1.2 Experimental Result Retrieval performances using different combinations of term weighting features are presented in Table 1. Using only the word sentiment model is set as our baseline. First, each feature of the word generation and topic association models are tested; all features of the models improve over the baseline. We observe that the features of our word generation model is more effective than those of the topic association model. Among the features of the word generation model, the most improvement was achieved with BM25, improving the MAP by 2.27%. Features of the topic association model show only moderate improvements over the baseline. We observe that these features generally improve P@10 performance, indicating that they increase the accuracy of the sentiment analysis system. PMI out-performed LSA for all evaluation measures. Among the topic association models, PMI performs the best in MAP and R-prec, while WP achieved the biggest improvement in P@10. Since BM25 performs the best among the word generation models, its combination with other features was investigated. Combinations of BM25 with the topic association models all improve the performance of the baseline and BM25. This demonstrates that the word generation model and the topic association model are complementary to each other. The best MAP was achieved with BM25, PMI, and WP (+2.44% over the baseline). We observe that PMI and WP also complement each other. 4.2 Sentiment Analysis Task – NTCIR MOAT 4.2.1 Experimental Setting Another set of experiments for our opinion analysis model was carried out on the NTCIR-7 MOAT English corpus. The English opinion corpus for NTCIR MOAT consists of newspaper articles from the Mainichi Daily News, Korea Times, Xinhua News, Hong Kong Standard, and the Straits Times. It is a collection of documents manually assessed for relevance to a set of queries from NTCIR-7 Advanced Cross-lingual Information Access (ACLIA) task. The corpus consists of 167 documents, or 4,711 sentences for 14 test topics. Each sentence is manually tagged with opinionatedness, polarity, and relevance to the topic by three annotators from a pool of six annotators. For preprocessing, no removal or stemming is performed on the data. Each sentence was processed with the Stanford English parser6 to produce a dependency parse tree. Only the Title fields of the topics were used. 
For performance evaluations of opinion and polarity detection, we use precision, recall, and Fmeasure, the same measure used to report the official results at the NTCIR MOAT workshop. There are lenient and strict evaluations depending on the agreement of the annotators; if two out of three annotators agreed upon an opinion or polarity annotation then it is used during the lenient evaluation, similarly three out of three agreements are used during the strict evaluation. We present the performances using the lenient evaluation only, for the two evaluations generally do not show much difference in relative performance changes. Since MOAT is a classification task, we use a threshold parameter to draw a boundary between opinionated and non-opinionated sentences. We report the performance of our system using the NTCIR-7 dataset, where the threshold parameter is optimized using the NTCIR-6 dataset. 4.2.2 Experimental Result We present the performance of our sentiment analysis system in Table 2. As in the experiments with 6http://nlp.stanford.edu/software/lex-parser.shtml 258 Table 2: Performance of the Sentiment Analysis System on NTCIR7 dataset. System parameters are optimized for F-measure using NTCIR6 dataset with lenient evaluations. Opinionated Model Precision Recall F-Measure BASELINE 0.305 0.866 0.451 VS 0.331 0.807 0.470 BM25 0.327 0.795 0.464 LM 0.325 0.794 0.461 LSA 0.315 0.806 0.453 PMI 0.342 0.603 0.436 DTP 0.322 0.778 0.455 VS·LSA 0.335 0.769 0.466 VS·PMI 0.311 0.833 0.453 VS·DTP 0.342 0.745 0.469 VS·LSA·DTP 0.349 0.719 0.470 VS·PMI·DTP 0.328 0.773 0.461 the TREC dataset, using only the word sentiment model is used as our baseline. Similarly to the TREC experiments, the features of the word generation model perform exceptionally better than that of the topic association model. The best performing feature of the word generation model is VS, achieving a 4.21% improvement over the baseline’s f-measure. Interestingly, this is the tied top performing f-measure over all combinations of our features. While LSA and DTP show mild improvements, PMI performed worse than baseline, with higher precision but a drop in recall. DTP was the best performing topic association model. When combining the best performing feature of the word generation model (VS) with the features of the topic association model, LSA, PMI and DTP all performed worse than or as well as the VS in f-measure evaluation. LSA and DTP improves precision slightly, but with a drop in recall. PMI shows the opposite tendency. The best performing system was achieved using VS, LSA and DTP at both precision and f-measure evaluations. 4.3 Classification task – SVM 4.3.1 Experimental Setting To test our SVM classifier, we perform the classification task. Movie Review polarity dataset7 was 7http://www.cs.cornell.edu/people/pabo/movie-reviewdata/ Table 3: Average ten-fold cross-validation accuracies of polarity classification task with SVM. Accuracy Features Movie-review MPQA PRESENCE 82.6 76.8 TF 71.1 76.5 VS.TF 81.3 76.7 BM25.TF 81.4 77.9 IDF 61.6 61.8 VS.IDF 83.6 77.9 BM25.IDF 83.6 77.8 VS.TF·VS.IDF 83.8 77.9 BM25.TF·BM25.IDF 84.1 77.7 BM25.TF·VS.IDF 85.1 77.7 first introduced by Pang et al. (2002) to test various ML-based methods for sentiment classification. It is a balanced dataset of 700 positive and 700 negative reviews, collected from the Internet Movie Database (IMDb) archive. MPQA Corpus8 contains 535 newspaper articles manually annotated at sentence and subsentence level for opinions and other private states (Wiebe et al., 2005). 
To closely reproduce the experiment with the best performance carried out in (Pang et al., 2002) using SVM, we use unigram with the presence feature. We test various combinations of our features applicable to the task. For evaluation, we use ten-fold cross-validation accuracy. 4.3.2 Experimental Result We present the sentiment classification performances in Table 3. As observed by Pang et al. (2002), using the raw tf drops the accuracy of the sentiment classification (-13.92%) of movie-review data. Using the raw idf feature worsens the accuracy even more (-25.42%). Normalized tf-variants show improvements over tf but are worse than presence. Normalized idf features produce slightly better accuracy results than the baseline. Finally, combining any normalized tf and idf features improved the baseline (high 83% ∼low 85%). The best combination was BM25.TF·VS.IDF. MPQA corpus reveals similar but somewhat uncertain tendency. 8http://www.cs.pitt.edu/mpqa/databaserelease/ 259 4.4 Discussion Overall, the opinion retrieval and the sentiment analysis models achieve improvements using our proposed features. Especially, the features of the word generation model improve the overall performances drastically. Its effectiveness is also verified with a data-driven approach; the accuracy of a sentiment classifier trained on a polarity dataset was improved by various combinations of normalized tf and idf statistics. Differences in effectiveness of VS, BM25, and LM come from parameter tuning and corpus differences. For the TREC dataset, BM25 performed better than the other models, and for the NTCIR dataset, VS performed better. Our features of the topic association model show mild improvement over the baseline performance in general. PMI and LSA, both modeling the semantic associations between words, show different behaviors on the datasets. For the NTCIR dataset, LSA performed better, while PMI is more effective for the TREC dataset. We believe that the explanation lies in the differences between the topics for each dataset. In general, the NTCIR topics are general descriptive words such as “regenerative medicine”, “American economy after the 911 terrorist attacks”, and “lawsuit brought against Microsoft for monopolistic practices.” The TREC topics are more namedentity-like terms such as “Carmax”, “Wikipedia primary source”, “Jiffy Lube”, “Starbucks”, and “Windows Vista.” We have experimentally shown that LSA is more suited to finding associations between general terms because its training documents are from a general domain.9 Our PMI measure utilizes a web search engine, which covers a variety of named entity terms. Though the features of our topic association model, WP and DTP, were evaluated on different datasets, we try our best to conjecture the differences. WP on TREC dataset shows a small improvement of MAP compared to other topic association features, while the precision is improved the most when this feature is used alone. The DTP feature displays similar behavior with precision. It also achieves the best f-measure over other topic association features. DTP achieves higher relative improvement (3.99% F-measure verse 2.32% MAP), and is more effective for improving the performance in combination with LSA and PMI. 9TASA Corpus, http://lsa.colorado.edu/spaces.html 5 Conclusion In this paper, we proposed various term weighting schemes and how such features are modeled in the sentiment analysis task. 
Our proposed features include corpus statistics, association measures using semantic and local-context proximities. We have empirically shown the effectiveness of the features with our proposed opinion retrieval and sentiment analysis models. There exists much room for improvement with further experiments with various term weighting methods and datasets. Such methods include, but by no means limited to, semantic similarities between word pairs using lexical resources such as WordNet (Miller, 1995) and data-driven methods with various topic-dependent term weighting schemes on labeled corpus with topics such as MPQA. Acknowledgments This work was supported in part by MKE & IITA through IT Leading R&D Support Project and in part by the BK 21 Project in 2009. References Kushal Dave, Steve Lawrence, and David M. Pennock. 2003. Mining the peanut gallery: Opinion extraction and semantic classification of product reviews. In Proceedings of WWW, pages 519–528. Andrea Esuli and Fabrizio Sebastiani. 2006. Sentiwordnet: A publicly available lexical resource for opinion mining. In Proceedings of the 5th Conference on Language Resources and Evaluation (LREC’06), pages 417–422, Geneva, IT. Hui Fang, Tao Tao, and ChengXiang Zhai. 2004. A formal study of information retrieval heuristics. In SIGIR ’04: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 49–56, New York, NY, USA. ACM. George Forman. 2003. An extensive empirical study of feature selection metrics for text classification. Journal of Machine Learning Research, 3:1289–1305. Michael Gamon. 2004. Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis. In Proceedings of the International Conference on Computational Linguistics (COLING). Vasileios Hatzivassiloglou and Kathleen R. Mckeown. 1997. Predicting the semantic orientation of adjectives. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics (ACL’97), pages 174–181, madrid, ES. Jaap Kamps, Maarten Marx, Robert J. Mokken, and Maarten De Rijke. 2004. Using wordnet to measure semantic orientation of adjectives. In Proceedings of the 4th International Conference on Language Resources and Evaluation (LREC’04), pages 1115–1118, Lisbon, PT. 260 Taku Kudo and Yuji Matsumoto. 2004. A boosting algorithm for classification of semi-structured text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Thomas K. Landauer and Susan T. Dumais. 1997. A solution to plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211–240, April. Yeha Lee, Seung-Hoon Na, Jungi Kim, Sang-Hyob Nam, Hun young Jung, and Jong-Hyeok Lee. 2008. Kle at trec 2008 blog track: Blog post and feed retrieval. In Proceedings of TREC-08. Craig Macdonald and Iadh Ounis. 2006. The TREC Blogs06 collection: creating and analysing a blog test collection. Technical Report TR-2006-224, Department of Computer Science, University of Glasgow. Shotaro Matsumoto, Hiroya Takamura, and Manabu Okumura. 2005. Sentiment classification using word subsequences and dependency sub-trees. In Proceedings of PAKDD’05, the 9th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: Modeling facets and opinions in weblogs. 
In Proceedings of WWW, pages 171–180, New York, NY, USA. ACM Press. George A. Miller. 1995. Wordnet: a lexical database for english. Commun. ACM, 38(11):39–41. Tony Mullen and Nigel Collier. 2004. Sentiment analysis using support vector machines with diverse information sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 412–418, July. Poster paper. Sang-Hyob Nam, Seung-Hoon Na, Yeha Lee, and JongHyeok Lee. 2009. Diffpost: Filtering non-relevant content based on content difference between two consecutive blog posts. In ECIR. Vincent Ng, Sajib Dasgupta, and S. M. Niaz Arifin. 2006. Examining the role of linguistic knowledge sources in the automatic identification and classification of reviews. In Proceedings of the COLING/ACL Main Conference Poster Sessions, pages 611–618, Sydney, Australia, July. Association for Computational Linguistics. I. Ounis, M. de Rijke, C. Macdonald, G. A. Mishne, and I. Soboroff. 2006. Overview of the trec-2006 blog track. In Proceedings of TREC-06, pages 15–27, November. I. Ounis, C. Macdonald, and I. Soboroff. 2008. Overview of the trec-2008 blog track. In Proceedings of TREC-08, pages 15–27, November. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 79–86. Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47. Yohei Seki, David Kirk Evans, Lun-Wei Ku, Le Sun, HsinHsi Chen, and Noriko Kando. 2008. Overview of multilingual opinion analysis task at ntcir-7. In Proceedings of The 7th NTCIR Workshop (2007/2008) - Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-Lingual Information Access. Philip J. Stone, Dexter C. Dunphy, Marshall S. Smith, and Daniel M. Ogilvie. 1966. The General Inquirer: A Computer Approach to Content Analysis. MIT Press, Cambridge, USA. Peter D. Turney and Michael L. Littman. 2003. Measuring praise and criticism: Inference of semantic orientation from association. ACM Transactions on Information Systems, 21(4):315–346. Peter D. Turney. 2001. Mining the web for synonyms: Pmiir versus lsa on toefl. In EMCL ’01: Proceedings of the 12th European Conference on Machine Learning, pages 491–502, London, UK. Springer-Verlag. Casey Whitelaw, Navendu Garg, and Shlomo Argamon. 2005. Using appraisal groups for sentiment analysis. In Proceedings of the 14th ACM international conference on Information and knowledge management (CIKM’05), pages 625–631, Bremen, DE. Janyce Wiebe, E. Breck, Christopher Buckley, Claire Cardie, P. Davis, B. Fraser, Diane Litman, D. Pierce, Ellen Riloff, Theresa Wilson, D. Day, and Mark Maybury. 2003. Recognizing and organizing opinions expressed in the world press. In Proceedings of the 2003 AAAI Spring Symposium on New Directions in Question Answering. Janyce M. Wiebe, Theresa Wilson, Rebecca Bruce, Matthew Bell, and Melanie Martin. 2004. Learning subjective language. Computational Linguistics, 30(3):277–308, September. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2/3):164–210. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. 
Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT-EMNLP’05), pages 347–354, Vancouver, CA. Kiduk Yang, Ning Yu, Alejandro Valerio, and Hui Zhang. 2006. WIDIT in TREC-2006 Blog track. In Proceedings of TREC. Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of 2003 Conference on the Empirical Methods in Natural Language Processing (EMNLP’03), pages 129– 136, Sapporo, JP. Chengxiang Zhai and John Lafferty. 2004. A study of smoothing methods for language models applied to information retrieval. ACM Trans. Inf. Syst., 22(2):179–214. Min Zhang and Xingyao Ye. 2008. A generation model to unify topic relevance and lexicon-based sentiment for opinion retrieval. In SIGIR ’08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 411–418, New York, NY, USA. ACM. 261
2009
29
Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 19–27, Suntec, Singapore, 2-7 August 2009. c⃝2009 ACL and AFNLP A Comparative Study on Generalization of Semantic Roles in FrameNet Yuichiroh Matsubayashi† Naoaki Okazaki† Jun’ichi Tsujii†‡∗ †Department of Computer Science, University of Tokyo, Japan ‡School of Computer Science, University of Manchester, UK ∗National Centre for Text Mining, UK {y-matsu,okazaki,tsujii}@is.s.u-tokyo.ac.jp Abstract A number of studies have presented machine-learning approaches to semantic role labeling with availability of corpora such as FrameNet and PropBank. These corpora define the semantic roles of predicates for each frame independently. Thus, it is crucial for the machine-learning approach to generalize semantic roles across different frames, and to increase the size of training instances. This paper explores several criteria for generalizing semantic roles in FrameNet: role hierarchy, human-understandable descriptors of roles, semantic types of filler phrases, and mappings from FrameNet roles to thematic roles of VerbNet. We also propose feature functions that naturally combine and weight these criteria, based on the training data. The experimental result of the role classification shows 19.16% and 7.42% improvements in error reduction rate and macro-averaged F1 score, respectively. We also provide in-depth analyses of the proposed criteria. 1 Introduction Semantic Role Labeling (SRL) is a task of analyzing predicate-argument structures in texts. More specifically, SRL identifies predicates and their arguments with appropriate semantic roles. Resolving surface divergence of texts (e.g., voice of verbs and nominalizations) into unified semantic representations, SRL has attracted much attention from researchers into various NLP applications including question answering (Narayanan and Harabagiu, 2004; Shen and Lapata, 2007; buy.v PropBank FrameNet Frame buy.01 Commerce buy Roles ARG0: buyer Buyer ARG1: thing bought Goods ARG2: seller Seller ARG3: paid Money ARG4: benefactive Recipient ... ... Figure 1: A comparison of frames for buy.v defined in PropBank and FrameNet Moschitti et al., 2007), and information extraction (Surdeanu et al., 2003). In recent years, with the wide availability of corpora such as PropBank (Palmer et al., 2005) and FrameNet (Baker et al., 1998), a number of studies have presented statistical approaches to SRL (M`arquez et al., 2008). Figure 1 shows an example of the frame definitions for a verb buy in PropBank and FrameNet. These corpora define a large number of frames and define the semantic roles for each frame independently. This fact is problematic in terms of the performance of the machinelearning approach, because these definitions produce many roles that have few training instances. PropBank defines a frame for each sense of predicates (e.g., buy.01), and semantic roles are defined in a frame-specific manner (e.g., buyer and seller for buy.01). In addition, these roles are associated with tags such as ARG0-5 and AM-*, which are commonly used in different frames. Most SRL studies on PropBank have used these tags in order to gather a sufficient amount of training data, and to generalize semantic-role classifiers across different frames. However, Yi et al. (2007) reported that tags ARG2–ARG5 were inconsistent and not that suitable as training instances. 
Some recent studies have addressed alternative approaches to generalizing semantic roles across different frames (Gordon and Swanson, 2007; Zapi19 Transfer::Recipient Giving::Recipient Commerce_buy::Buyer Commerce_sell::Buyer Commerce_buy::Seller Commerce_sell::Seller Giving::Donor Transfer::Donor Buyer Seller Agent role-to-role relation hierarchical class thematic role role descriptor Recipient Donor Figure 2: An example of role groupings using different criteria. rain et al., 2008). FrameNet designs semantic roles as frame specific, but also defines hierarchical relations of semantic roles among frames. Figure 2 illustrates an excerpt of the role hierarchy in FrameNet; this figure indicates that the Buyer role for the Commerce buy frame (Commerce buy::Buyer hereafter) and the Commerce sell::Buyer role are inherited from the Transfer::Recipient role. Although the role hierarchy was expected to generalize semantic roles, no positive results for role classification have been reported (Baldewein et al., 2004). Therefore, the generalization of semantic roles across different frames has been brought up as a critical issue for FrameNet (Gildea and Jurafsky, 2002; Shi and Mihalcea, 2005; Giuglea and Moschitti, 2006) In this paper, we explore several criteria for generalizing semantic roles in FrameNet. In addition to the FrameNet hierarchy, we use various pieces of information: human-understandable descriptors of roles, semantic types of filler phrases, and mappings from FrameNet roles to the thematic roles of VerbNet. We also propose feature functions that naturally combines these criteria in a machine-learning framework. Using the proposed method, the experimental result of the role classification shows 19.16% and 7.42% improvements in error reduction rate and macro-averaged F1, respectively. We provide in-depth analyses with respect to these criteria, and state our conclusions. 2 Related Work Moschitti et al. (2005) first classified roles by using four coarse-grained classes (Core Roles, Adjuncts, Continuation Arguments and Co-referring Arguments), and built a classifier for each coarsegrained class to tag PropBank ARG tags. Even though the initial classifiers could perform rough estimations of semantic roles, this step was not able to solve the ambiguity problem in PropBank ARG2-5. When training a classifier for a semantic role, Baldewein et al. (2004) re-used the training instances of other roles that were similar to the target role. As similarity measures, they used the FrameNet hierarchy, peripheral roles of FrameNet, and clusters constructed by a EM-based method. Gordon and Swanson (2007) proposed a generalization method for the PropBank roles based on syntactic similarity in frames. Many previous studies assumed that thematic roles bridged semantic roles in different frames. Gildea and Jurafsky (2002) showed that classification accuracy was improved by manually replacing FrameNet roles into 18 thematic roles. Shi and Mihalcea (2005) and Giuglea and Moschitti (2006) employed VerbNet thematic roles as the target of mappings from the roles defined by the different semantic corpora. Using the thematic roles as alternatives of ARG tags, Loper et al. (2007) and Yi et al. (2007) demonstrated that the classification accuracy of PropBank roles was improved for ARG2 roles, but that it was diminished for ARG1. Yi et al. (2007) also described that ARG2–5 were mapped to a variety of thematic roles. Zapirain et al. 
(2008) evaluated PropBank ARG tags and VerbNet thematic roles in a state-ofthe-art SRL system, and concluded that PropBank ARG tags achieved a more robust generalization of the roles than did VerbNet thematic roles. 3 Role Classification SRL is a complex task wherein several problems are intertwined: frame-evoking word identification, frame disambiguation (selecting a correct frame from candidates for the evoking word), rolephrase identification (identifying phrases that fill semantic roles), and role classification (assigning correct roles to the phrases). In this paper, we focus on role classification, in which the role generalization is particularly critical to the machine learning approach. In the role classification task, we are given a sentence, a frame evoking word, a frame, and 20 member roles Commerce_pay::Buyer Intentionall_act::Agent Giving::Donor Getting::Recipient Giving::Recipient Sending::Recipient Giving::Time Placing::Time Event::Time Commerce_pay::Buyer Commerce_buy::Buyer Commerce_sell::Buyer Buyer Recipient Time C_pay::Buyer GIVING::Donor Intentionally_ACT::Agent Avoiding::Agent Evading::Evader Evading::Evader Avoiding::Agent Getting::Recipient Evading::Evader St::Sentient St::Physical_Obj Giving::Theme Placing::Theme St::State_of_affairs Giving::Reason Evading::Reason Giving::Means Evading::Purpose Theme::Agent Theme::Theme Commerce_buy::Goods Getting::Theme Evading:: Pursuer Commerce_buy::Buyer Commerce_sell::Seller Evading::Evader Role-descriptor groups Hierarchical-relation groups Semantic-type groups Thematic-role groups Group name legend Figure 4: Examples for each type of role group. INPUT: frame = Commerce_sell candidate roles ={Seller, Buyer, Goods, Reason, Time, ... , Place} sentence = Can't [you] [sell Commerce_sell] [the factory] [to some other company]? OUTPUT: sentence = Can't [you Seller] [sell Commerce_sell] [the factory Goods] [to some other company Buyer] ? Figure 3: An example of input and output of role classification. phrases that take semantic roles. We are interested in choosing the correct role from the candidate roles for each phrase in the frame. Figure 3 shows a concrete example of input and output; the semantic roles for the phrases are chosen from the candidate roles: Seller, Buyer, Goods, Reason, ... , and Place. 4 Design of Role Groups We formalize the generalization of semantic roles as the act of grouping several roles into a class. We define a role group as a set of role labels grouped by a criterion. Figure 4 shows examples of role groups; a group Giving::Donor (in the hierarchical-relation groups) contains the roles Giving::Donor and Commerce pay::Buyer. The remainder of this section describes the grouping criteria in detail. 4.1 Hierarchical relations among roles FrameNet defines hierarchical relations among frames (frame-to-frame relations). Each relation is assigned one of the seven types of directional relationships (Inheritance, Using, Perspective on, Causative of, Inchoative of, Subframe, and Precedes). Some roles in two related frames are also connected with role-to-role relations. We assume that this hierarchy is a promising resource for generalizing the semantic roles; the idea is that the role at a node in the hierarchy inherits the characteristics of the roles of its ancestor nodes. For example, Commerce sell::Seller in Figure 2 inherits the property of Giving::Donor. 
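The group definitions in the next paragraph formalize this intuition as sets of a role together with its children, descendants, parents, or ancestors. As a rough sketch of how such groups could be materialized from the role-to-role relation edges, consider the following; the adjacency map is a hypothetical data structure built from the FrameNet relations, not part of the FrameNet release itself.

def role_group_descendants(role, children):
    # Collect a role together with all of its descendants in the role hierarchy.
    # `children` maps a role name to the roles directly related to it as children.
    group, stack = {role}, [role]
    while stack:
        for child in children.get(stack.pop(), ()):
            if child not in group:
                group.add(child)
                stack.append(child)
    return group

# e.g., with the relations described for Figure 2:
# role_group_descendants("Transfer::Recipient",
#     {"Transfer::Recipient": ["Commerce_buy::Buyer", "Commerce_sell::Buyer"]})
# -> {"Transfer::Recipient", "Commerce_buy::Buyer", "Commerce_sell::Buyer"}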
For Inheritance, Using, Perspective on, and Subframe relations, we assume that descendant roles in these relations have the same or specialized properties of their ancestors. Hence, for each role yi, we define the following two role groups, Hchild yi = {y|y = yi ∨y is a child of yi}, Hdesc yi = {y|y = yi ∨y is a descendant of yi}. The hierarchical-relation groups in Figure 4 are the illustrations of Hdesc yi . For the relation types Inchoative of and Causative of, we define role groups in the opposite direction of the hierarchy, Hparent yi = {y|y = yi ∨y is a parent of yi}, Hance yi = {y|y = yi ∨y is an ancestor of yi}. This is because lower roles of Inchoative of and Causative of relations represent more neutral stances or consequential states; for example, Killing::Victim is a parent of Death::Protagonist in the Causative of relation. Finally, the Precedes relation describes the sequence of states and events, but does not specify the direction of semantic inclusion relations. Therefore, we simply try Hchild yi , Hdesc yi , Hparent yi , and Hance yi for this relation type. 4.2 Human-understandable role descriptor FrameNet defines each role as frame-specific; in other words, the same identifier does not appear in different frames. However, in FrameNet, human experts assign a human-understandable name to each role in a rather systematic manner. Some names are shared by the roles in different frames, whose identifiers are different. Therefore, we examine the semantic 21 commonality of these names; we construct an equivalence class of the roles sharing the same name. We call these human-understandable names role descriptors. In Figure 4, the roledescriptor group Buyer collects the roles Commerce pay::Buyer, Commerce buy::Buyer, and Commerce sell::Buyer. This criterion may be effective in collecting similar roles since the descriptors have been annotated by intuition of human experts. As illustrated in Figure 2, the role descriptors group the semantic roles which are similar to the roles that the FrameNet hierarchy connects as sister or parentchild relations. However, role-descriptor groups cannot express the relations between the roles as inclusions since they are equivalence classes. For example, the roles Commerce sell::Buyer and Commerce buy::Buyer are included in the role descriptor group Buyer in Figure 2; however, it is difficult to merge Giving::Recipient and Commerce sell::Buyer because the Commerce sell::Buyer has the extra property that one gives something of value in exchange and a human assigns different descriptors to them. We expect that the most effective weighting of these two criteria will be determined from the training data. 4.3 Semantic type of phrases We consider that the selectional restriction is helpful in detecting the semantic roles. FrameNet provides information concerning the semantic types of role phrases (fillers); phrases that play specific roles in a sentence should fulfill the semantic constraint from this information. For instance, FrameNet specifies the constraint that Self motion::Area should be filled by phrases whose semantic type is Location. Since these types suggest a coarse-grained categorization of semantic roles, we construct role groups that contain roles whose semantic types are identical. 4.4 Thematic roles of VerbNet VerbNet thematic roles are 23 frame-independent semantic categories for arguments of verbs, such as Agent, Patient, Theme and Source. These categories have been used as consistent labels across verbs. 
We use a partial mapping between FrameNet roles and VerbNet thematic roles provided by SemLink. 1 Each group is constructed as a set Tti = 1http://verbs.colorado.edu/semlink/ {y|SemLink maps y into the thematic role ti}. SemLink currently maps 1,726 FrameNet roles into VerbNet thematic roles, which are 37.61% of roles appearing at least once in the FrameNet corpus. This may diminish the effect of thematic-role groups than its potential. 5 Role classification method 5.1 Traditional approach We are given a frame-evoking word e, a frame f and a role phrase x detected by a human or some automatic process in a sentence s. Let Yf be the set of semantic roles that FrameNet defines as being possible role assignments for the frame f, and let x = {x1, . . . , xn} be observed features for x from s, e and f. The task of semantic role classification can be formalized as the problem of choosing the most suitable role ˜y from Yf. Suppose we have a model P(y|f, x) which yields the conditional probability of the semantic role y for given f and x. Then we can choose ˜y as follows: ˜y = argmax y∈Yf P(y|f, x). (1) A traditional way to incorporate role groups into this formalization is to overwrite each role y in the training and test data with its role group m(y) according to the memberships of the group. For example, semantic roles Commerce sell::Seller and Giving::Donor can be replaced by their thematic-role group Theme::Agent in this approach. We determine the most suitable role group ˜c as follows: ˜c = argmax c∈{m(y)|y∈Yf} Pm(c|f, x). (2) Here, Pm(c|f, x) presents the probability of the role group c for f and x. The role ˜y is determined uniquely iff a single role y ∈Yf is associated with ˜c. Some previous studies have employed this idea to remedy the data sparseness problem in the training data (Gildea and Jurafsky, 2002). However, we cannot apply this approach when multiple roles in Yf are contained in the same class. For example, we can construct a semantic-type group St::State of affairs in which Giving::Reason and Giving::Means are included, as illustrated in Figure 4. If ˜c = St::State of affairs, we cannot disambiguate which original role is correct. In addition, it may be more effective to use various 22 groupings of roles together in the model. For instance, the model could predict the correct role Commerce sell::Seller for the phrase “you” in Figure 3 more confidently, if it could infer its thematic-role group as Theme::Agent and its parent group Giving::Donor correctly. Although the ensemble of various groupings seems promising, we need an additional procedure to prioritize the groupings for the case where the models for multiple role groupings disagree; for example, it is unsatisfactory if two models assign the groups Giving::Theme and Theme::Agent to the same phrase. 5.2 Role groups as feature functions We thus propose another approach that incorporates group information as feature functions. We model the conditional probability P(y|f, x) by using the maximum entropy framework, p(y|f, x) = exp(∑ i λigi(x, y)) ∑ y∈Yf exp(∑ i λigi(x, y)). (3) Here, G = {gi} denotes a set of n feature functions, and Λ = {λi} denotes a weight vector for the feature functions. In general, feature functions for the maximum entropy model are designed as indicator functions for possible pairs of xj and y. 
For example, the event where the head word of x is “you” (x1 = 1) and x plays the role Commerce sell::Seller in a sentence is expressed by the indicator function, grole 1 (x, y) =      1 (x1 = 1 ∧ y = Commerce sell::Seller) 0 (otherwise) . (4) We call this kind of feature function an x-role. In order to incorporate role groups into the model, we also include all feature functions for possible pairs of xj and role groups. Equation 5 is an example of a feature function for instances where the head word of x is “you” and y is in the role group Theme::Agent, gtheme 2 (x, y) =      1 (x1 = 1 ∧ y ∈Theme::Agent) 0 (otherwise) . (5) Thus, this feature function fires for the roles wherever the head word “you” plays Agent (e.g., Commerce sell::Seller, Commerce buy::Buyer and Giving::Donor). We call this kind of feature function an x-group function. In this way, we obtain x-group functions for all grouping methods, e.g., gtheme k , ghierarchy k . The role-group features will receive more training instances by collecting instances for fine-grained roles. Thus, semantic roles with few training instances are expected to receive additional clues from other training instances via role-group features. Another advantage of this approach is that the usefulness of the different role groups is determined by the training processes in terms of weights of feature functions. Thus, we do not need to assume that we have found the best criterion for grouping roles; we can allow a training process to choose the criterion. We will discuss the contributions of different groupings in the experiments. 5.3 Comparison with related work Baldewein et al. (2004) suggested an approach that uses role descriptors and hierarchical relations as criteria for generalizing semantic roles in FrameNet. They created a classifier for each frame, additionally using training instances for the role A to train the classifier for the role B, if the roles A and B were judged as similar by a criterion. This approach performs similarly to the overwriting approach, and it may obscure the differences among roles. Therefore, they only re-used the descriptors as a similarity measure for the roles whose coreness was peripheral. 2 In contrast, we use all kinds of role descriptors to construct groups. Since we use the feature functions for both the original roles and their groups, appropriate units for classification are determined automatically in the training process. 6 Experiment and Discussion We used the training set of the Semeval-2007 Shared task (Baker et al., 2007) in order to ascertain the contributions of role groups. This dataset consists of the corpus of FrameNet release 1.3 (containing roughly 150,000 annotations), and an additional full-text annotation dataset. We randomly extracted 10% of the dataset for testing, and used the remainder (90%) for training. Performance was measured by micro- and macro-averaged F1 (Chang and Zheng, 2008) with respect to a variety of roles. The micro average biases each F1 score by the frequencies of the roles, 2In FrameNet, each role is assigned one of four different types of coreness (core, core-unexpressed, peripheral, extrathematic) It represents the conceptual necessity of the roles in the frame to which it belongs. 23 and the average is equal to the classification accuracy when we calculate it with all of the roles in the test set. In contrast, the macro average does not bias the scores, thus the roles having a small number of instances affect the average more than the micro average. 
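The x-role and x-group feature functions of Section 5.2 amount to indicator features over (characteristic, role) and (characteristic, group) pairs. A minimal sketch of feature generation and the resulting conditional probability of Equation (3) is given below; the names are hypothetical and the L2-regularized L-BFGS weight estimation used in the paper is omitted.

import math

def make_features(x_chars, role, groups_of):
    # x-role features pair each observed characteristic of the phrase (e.g. "head=you")
    # with the candidate role; x-group features pair it with every group containing that role.
    feats = []
    for c in x_chars:
        feats.append(("x-role", c, role))
        for g in groups_of.get(role, ()):
            feats.append(("x-group", c, g))
    return feats

def p_role_given_x(x_chars, candidate_roles, weights, groups_of):
    # Equation (3): p(y|f,x) = exp(sum_i lambda_i g_i(x,y)) / sum_{y' in Y_f} exp(sum_i lambda_i g_i(x,y')).
    scores = {y: math.exp(sum(weights.get(f, 0.0) for f in make_features(x_chars, y, groups_of)))
              for y in candidate_roles}
    z = sum(scores.values())
    return {y: s / z for y, s in scores.items()}

Taking the argmax over this distribution for the candidate roles Yf of the frame yields the predicted role, as in Equation (1) of this paper; because group features are shared across roles, infrequent roles receive evidence from the training instances of other roles in the same groups.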
6.1 Experimental settings We constructed a baseline classifier that uses only the x-role features. The feature design is similar to that of the previous studies (M`arquez et al., 2008). The characteristics of x are: frame, frame evoking word, head word, content word (Surdeanu et al., 2003), first/last word, head word of left/right sister, phrase type, position, voice, syntactic path (directed/undirected/partial), governing category (Gildea and Jurafsky, 2002), WordNet supersense in the phrase, combination features of frame evoking word & headword, combination features of frame evoking word & phrase type, and combination features of voice & phrase type. We also used PoS tags and stem forms as extra features of any word-features. We employed Charniak and Johnson’s reranking parser (Charniak and Johnson, 2005) to analyze syntactic trees. As an alternative for the traditional named-entity features, we used WordNet supersenses: 41 coarse-grained semantic categories of words such as person, plant, state, event, time, location. We used Ciaramita and Altun’s Super Sense Tagger (Ciaramita and Altun, 2006) to tag the supersenses. The baseline system achieved 89.00% with respect to the micro-averaged F1. The x-group features were instantiated similarly to the x-role features; the x-group features combined the characteristics of x with the role groups presented in this paper. The total number of features generated for all x-roles and x-groups was 74,873,602. The optimal weights Λ of the features were obtained by the maximum a posterior (MAP) estimation. We maximized an L2regularized log-likelihood of the training set using the Limited-memory BFGS (L-BFGS) method (Nocedal, 1980). 6.2 Effect of role groups Table 1 shows the micro and macro averages of F1 scores. Each role group type improved the micro average by 0.5 to 1.7 points. The best result was obtained by using all types of groups together. The result indicates that different kinds of group comFeature Micro Macro −Err. Baseline 89.00 68.50 0.00 role descriptor 90.78 76.58 16.17 role descriptor (replace) 90.23 76.19 11.23 hierarchical relation 90.25 72.41 11.40 semantic type 90.36 74.51 12.38 VN thematic role 89.50 69.21 4.52 All 91.10 75.92 19.16 Table 1: The accuracy and error reduction rate of role classification for each type of role group. Feature #instances Pre. Rec. Micro baseline ≤10 63.89 38.00 47.66 ≤20 69.01 51.26 58.83 ≤50 75.84 65.85 70.50 + all groups ≤10 72.57 55.85 63.12 ≤20 76.30 65.41 70.43 ≤50 80.86 74.59 77.60 Table 2: The effect of role groups on the roles with few instances. plement each other with respect to semantic role generalization. Baldewein et al. (2004) reported that hierarchical relations did not perform well for their method and experimental setting; however, we found that significant improvements could also be achieved with hierarchical relations. We also tried a traditional label-replacing approach with role descriptors (in the third row of Table 1). The comparison between the second and third rows indicates that mixing the original fine-grained roles and the role groups does result in a more accurate classification. By using all types of groups together, the model reduced 19.16 % of the classification errors from the baseline. Moreover, the macro-averaged F1 scores clearly showed improvements resulting from using role groups. 
In order to determine the reason for the improvements, we measured precision, recall, and F1 scores for roles whose number of training instances was at most 10, 20, and 50. Table 2 shows that the micro-averaged F1 score for roles with 10 or fewer instances improved by 15.46 points when all role groups were used. This result suggests why role groups are effective: by bridging similar semantic roles, they supply roles with few instances with information from other roles.

6.3 Analyses of role descriptors

In Table 1, the largest improvement was obtained by the use of role descriptors. We analyze the effect of role descriptors in more detail in Tables 3 and 4.

Coreness        Micro
Baseline        89.00
Core            89.51
Peripheral      90.12
Extra-thematic  89.09
All             90.77
Table 3: The effect of employing role-descriptor groups of each type of coreness.

Coreness        #roles  #instances/#role  #groups  #instances/#group  #roles/#group
Core            1902    122.06            655      354.4              2.9
Peripheral      1924    25.24             250      194.3              7.7
Extra-thematic  763     13.90             171      62.02              4.5
Table 4: The numbers of roles, instances, and role-descriptor groups for each type of coreness.

Table 3 shows the micro-averaged F1 scores over all semantic roles when we use role-descriptor groups constructed from each type of coreness (core, peripheral, and extra-thematic) individually. (Footnote 3: We include core-unexpressed in the core type, because it behaves as core within a single frame.) The peripheral type produced the largest improvement. Table 4 shows the number of roles for each type of coreness (#roles), the average number of instances per original role (#instances/#role), the number of groups for each type of coreness (#groups), the average number of instances per group (#instances/#group), and the average number of roles per group (#roles/#group). For the peripheral type, the role descriptors subdivided 1,924 distinct roles into 250 groups, each containing 7.7 roles on average. The peripheral type includes semantic roles such as place, time, reason, and duration. These roles appear in many frames, because their general meanings can be shared by different frames. Moreover, peripheral roles originally occurred in only a small number of training instances on average (25.24). We therefore infer that the peripheral type produced the largest improvement because its semantic roles benefited most from the generalization.

6.4 Hierarchical relations and relation types

We analyzed the contribution of the FrameNet hierarchy for each type of role-to-role relation and for different depths of grouping. Table 5 shows the micro-averaged F1 scores obtained with various relation types and depths. The Inheritance and Using relations resulted in slightly better accuracy than the other types. We did not observe any real differences among the remaining five relation types, possibly because there were few semantic roles associated with these types.
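Before turning to the results in Table 5, the following sketch illustrates how role groups for a single relation type might be collected at two depths, either the immediate level only or the full transitive closure. The pair representation and the toy relation instances are hypothetical and are not taken from the FrameNet release.

from collections import defaultdict

def build_hierarchy_groups(role_relations, transitive=False):
    """Group roles under one relation type of the FrameNet hierarchy.

    role_relations: iterable of (lower_role, upper_role) pairs for a single
        relation type, e.g. Inheritance; the pairs used below are invented.
    Returns a mapping from group name (an upper role) to the set of roles
    that share that group's features: direct children if transitive is False,
    all descendants if transitive is True.
    """
    upper = defaultdict(set)
    for lower_role, upper_role in role_relations:
        upper[lower_role].add(upper_role)

    def ancestors(role, seen):
        # walk upward through the relation, optionally to the transitive closure
        for u in upper.get(role, ()):
            if u not in seen:
                seen.add(u)
                if transitive:
                    ancestors(u, seen)
        return seen

    groups = defaultdict(set)
    for role in upper:
        for anc in ancestors(role, set()):
            groups[anc].add(role)
    return groups

# Toy example: depth one ("children") vs. transitive closure ("descendants").
toy_relations = [("Commerce_sell::Seller", "Giving::Donor"),
                 ("Giving::Donor", "Transfer::Donor")]
print(build_hierarchy_groups(toy_relations, transitive=False))
print(build_hierarchy_groups(toy_relations, transitive=True))

Each resulting group is then used exactly like a role group of Section 5: its name is paired with the characteristics of x to form x-group features.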
No.  Relation Type                        Micro
     baseline                             89.00
1    + Inheritance (children)             89.52
2    + Inheritance (descendants)          89.70
3    + Using (children)                   89.35
4    + Using (descendants)                89.37
5    + Perspective on (children)          89.01
6    + Perspective on (descendants)       89.01
7    + Subframe (children)                89.04
8    + Subframe (descendants)             89.05
9    + Causative of (parents)             89.03
10   + Causative of (ancestors)           89.03
11   + Inchoative of (parents)            89.02
12   + Inchoative of (ancestors)          89.02
13   + Precedes (children)                89.01
14   + Precedes (descendants)             89.03
15   + Precedes (parents)                 89.00
16   + Precedes (ancestors)               89.00
18   + all relations (2,4,6,8,10,12,14)   90.25
Table 5: Comparison of the accuracy with different types of hierarchical relations.

We obtained better results by using not only the groups for parent roles but also the groups for all ancestors. The best result was obtained by using all relations in the hierarchy.

6.5 Analyses of different grouping criteria

Table 6 reports the precision, recall, and micro-averaged F1 scores of semantic roles for each coreness type. (Footnote 4: The figures for role descriptors in Tables 3 and 6 differ. In Table 3, we measured performance when using one or all types of coreness for training. In Table 6, we used all types of coreness for training but computed the performance of the semantic roles of each coreness type separately.) In general, semantic roles of the core type were easily identified by all of the grouping criteria; even the baseline system obtained an F1 score of 91.93. For identifying semantic roles of the peripheral and extra-thematic types, the simplest solution, the descriptor criterion, outperformed the other criteria.

Feature                        Type   Pre.    Rec.    Micro
baseline                       c      91.07   92.83   91.93
                               p      81.05   76.03   78.46
                               e      78.17   66.51   71.87
+ descriptor group             c      92.50   93.41   92.95
                               p      84.32   82.72   83.51
                               e      80.91   69.59   74.82
+ hierarchical relation class  c      92.10   93.28   92.68
                               p      82.23   79.84   81.01
                               e      77.94   65.58   71.23
+ semantic type group          c      92.23   93.31   92.77
                               p      83.66   81.76   82.70
                               e      80.29   67.26   73.20
+ VN thematic role group       c      91.57   93.06   92.31
                               p      80.66   76.95   78.76
                               e      78.12   66.60   71.90
+ all group                    c      92.66   93.61   93.13
                               p      84.13   82.51   83.31
                               e      80.77   68.56   74.17
Table 6: The precision and recall for each type of coreness with role groups. Type denotes the type of coreness: c core, p peripheral, and e extra-thematic.

In Table 7, we categorize the feature functions whose weights are among the top 1000 by absolute value. The behavior of the role groups can be distinguished by two characteristics. Groups based on role descriptors and semantic types have large weights for the first-word and supersense features, which capture the characteristics of adjunctive phrases. The original roles and the hierarchical-relation groups have strong associations with lexical and structural characteristics such as the syntactic path, content word, and head word. Table 7 thus suggests that role-descriptor groups and semantic-type groups are effective for peripheral or adjunctive roles, whereas hierarchical-relation groups are effective for core roles.

features of x \ class type   or    hr    rd    st    vn
frame                        0     4     0     1     0
evoking word                 3     4     7     3     0
ew & hw stem                 9     34    20    8     0
ew & phrase type             11    7     11    3     1
head word                    13    19    8     3     1
hw stem                      11    17    8     8     1
content word                 7     19    12    3     0
cw stem                      11    26    13    5     0
cw PoS                       4     5     14    15    2
directed path                19    27    24    6     7
undirected path              21    35    17    2     6
partial path                 15    18    16    13    5
last word                    15    18    12    3     2
first word                   11    23    53    26    10
supersense                   7     7     35    25    4
position                     4     6     30    9     5
others                       27    29    33    19    6
total                        188   298   313   152   50
Table 7: The analysis of the top 1000 feature functions. Each number denotes the number of feature functions categorized in the corresponding cell. Column notation: 'or' original role, 'hr' hierarchical relation, 'rd' role descriptor, 'st' semantic type, and 'vn' VerbNet thematic role.

7 Conclusion

We have described different criteria for generalizing semantic roles in FrameNet: the role hierarchy, human-understandable role descriptors, the semantic types of filler phrases, and mappings from FrameNet roles to the thematic roles of VerbNet. We also proposed a feature design that combines and weights these criteria using the training data.
The experimental results on the role classification task showed a 19.16% error reduction and a 7.42-point improvement in the macro-averaged F1 score. In particular, the presented method was able to classify roles with few training instances more accurately. We confirmed that modeling role generalization at the feature level is better than the conventional approach of replacing semantic role labels. Each criterion presented in this paper improved the classification accuracy. The most successful criterion was the use of human-understandable role descriptors. Contrary to our expectations, the FrameNet hierarchy did not outperform the role descriptors. A future direction of this study is to analyze the weaknesses of the FrameNet hierarchy in order to discuss possible improvements to the usage and annotation of the hierarchy.

Since we used the latest release of FrameNet in order to obtain a greater number of hierarchical role-to-role relations, we could not make a direct comparison with existing systems; however, the 89.00% micro-averaged F1 of our baseline system is roughly comparable to the 88.93% reported by Bejan and Hathaway (2007) for SemEval-2007 (Baker et al., 2007). (Footnote 5: Two participants performed the whole SRL task in SemEval-2007. Bejan and Hathaway (2007) evaluated role classification accuracy separately on the training data.) In addition, the methodology presented in this paper applies generally to other SRL resources; we are planning to derive further grouping criteria from existing linguistic resources and to apply the methodology to the PropBank corpus.

Acknowledgments

The authors thank Sebastian Riedel for his useful comments on our work. This work was partially supported by a Grant-in-Aid for Specially Promoted Research (MEXT, Japan).

References

Collin F. Baker, Charles J. Fillmore, and John B. Lowe. 1998. The Berkeley FrameNet project. In Proceedings of Coling-ACL 1998, pages 86–90.

Collin Baker, Michael Ellsworth, and Katrin Erk. 2007. SemEval-2007 task 19: Frame semantic structure extraction. In Proceedings of SemEval-2007, pages 99–104.

Ulrike Baldewein, Katrin Erk, Sebastian Padó, and Detlef Prescher. 2004. Semantic role labeling with similarity based generalization using EM-based clustering. In Proceedings of Senseval-3, pages 64–68.

Cosmin Adrian Bejan and Chris Hathaway. 2007. UTD-SRL: A pipeline architecture for extracting frame semantic structures. In Proceedings of SemEval-2007, pages 460–463.

X. Chang and Q. Zheng. 2008. Knowledge element extraction for knowledge-based learning resources organization. Lecture Notes in Computer Science, 4823:102–113.

Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 173–180.
Massimiliano Ciaramita and Yasemin Altun. 2006. Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger. In Proceedings of EMNLP-2006, pages 594–602.

Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245–288.

Ana-Maria Giuglea and Alessandro Moschitti. 2006. Semantic role labeling via FrameNet, VerbNet and PropBank. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the ACL, pages 929–936.

Andrew Gordon and Reid Swanson. 2007. Generalizing semantic role annotations across syntactically similar verbs. In Proceedings of ACL-2007, pages 192–199.

Edward Loper, Szu-ting Yi, and Martha Palmer. 2007. Combining lexical resources: Mapping between PropBank and VerbNet. In Proceedings of the 7th International Workshop on Computational Semantics, pages 118–128.

Lluís Màrquez, Xavier Carreras, Kenneth C. Litkowski, and Suzanne Stevenson. 2008. Semantic role labeling: An introduction to the special issue. Computational Linguistics, 34(2):145–159.

Alessandro Moschitti, Ana-Maria Giuglea, Bonaventura Coppola, and Roberto Basili. 2005. Hierarchical semantic role labeling. In Proceedings of CoNLL-2005, pages 201–204.

Alessandro Moschitti, Silvia Quarteroni, Roberto Basili, and Suresh Manandhar. 2007. Exploiting syntactic and shallow semantic kernels for question answer classification. In Proceedings of ACL-07, pages 776–783.

Srini Narayanan and Sanda Harabagiu. 2004. Question answering based on semantic structures. In Proceedings of Coling-2004, pages 693–701.

Jorge Nocedal. 1980. Updating quasi-Newton matrices with limited storage. Mathematics of Computation, 35(151):773–782.

Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106.

Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In Proceedings of EMNLP-CoNLL 2007, pages 12–21.

Lei Shi and Rada Mihalcea. 2005. Putting pieces together: Combining FrameNet, VerbNet and WordNet for robust semantic parsing. In Proceedings of CICLing-2005, pages 100–111.

Mihai Surdeanu, Sanda Harabagiu, John Williams, and Paul Aarseth. 2003. Using predicate-argument structures for information extraction. In Proceedings of ACL-2003, pages 8–15.

Szu-ting Yi, Edward Loper, and Martha Palmer. 2007. Can semantic roles generalize across genres? In Proceedings of HLT-NAACL 2007, pages 548–555.

Beñat Zapirain, Eneko Agirre, and Lluís Màrquez. 2008. Robustness and generalization of role sets: PropBank vs. VerbNet. In Proceedings of ACL-08: HLT, pages 550–558.