Separable Verbs in a Reusable Morphological Dictionary for German

Pius ten Hacken¹ & Stephan Bopp²

¹Institut für Informatik / ASW, Universität Basel, Petersgraben 51, CH-4051 Basel (Switzerland), email: [email protected]
²Lexicologie, Faculteit der Letteren, Vrije Universiteit, De Boelelaan 1105, NL-1081 HV Amsterdam (Netherlands), email: [email protected]

Abstract

Separable verbs are verbs with prefixes which, depending on the syntactic context, can occur as one word written together or discontinuously. They occur in languages such as German and Dutch and constitute a problem for NLP because they are lexemes whose forms cannot always be recognized by dictionary lookup on the basis of a text word. Conventional solutions take a mixed lexical and syntactic approach. In this paper, we propose the solution offered by Word Manager, consisting of string-based recognition by means of rules of types also required for periphrastic inflection and clitics. In this way, separable verbs are dealt with as part of the domain of reusable lexical resources. We show how this solution compares favourably with conventional approaches.

1. The Problem

In German there exists a large class of verbs which behave like aufhören ('stop'), illustrated in (1).

(1) a. Anna glaubt, dass Bernard aufhört. ('Anna believes that Bernard stops')
    b. Claudia hört jetzt auf. ('Claudia stops now PRT')
    c. Daniel versucht aufzuhören. ('Daniel tries to_stop')

In subordinate clauses as in (1a), the particle auf and the inflected part of the verb hört are written together. In main clauses such as (1b), the inflected form hört is moved by verb-second, leaving the particle stranded. In infinitive clauses with the particle zu ('to'), zu separates the two components of the verb and all three elements are written together.

In analysis, the problem of separable verbs is to combine the two parts of the verb in contexts such as (1b) and (1c). Such a combination is necessary because the syntactic and semantic properties of aufhören are the same, irrespective of whether the two parts are written together or not, but they cannot be deduced from the syntactic and semantic properties of the parts. Therefore, a solution to the problem of separable verbs will treat (1b) as if it read (2a) and (1c) as (2b):

(2) a. Claudia aufhört jetzt.
    b. Daniel versucht zu aufhören.

The problem arises in a very similar fashion in Dutch, as the Dutch translations (3) of the sentences in (1) show. The only difference is that the infinitive in (3c) is not written together.

(3) a. Anna gelooft dat Bernard ophoudt.
    b. Claudia houdt nu op.
    c. Daniel probeert op te houden.

On the other hand, the problem of separable verbs in German and Dutch differs from the corresponding one in English, because English verbs such as look up are multi-word units in all contexts. A treatment of these cases which is in line with the solution proposed here is described by Tschichold (forthcoming).

As suggested by the English translation, separable verbs in German and Dutch are lexemes. Therefore, an important issue in evaluating a mechanism for dealing with them is how it fits in with the reusability of lexical resources. Given the importance of the orthographic component in the problem, it is not surprising that it is hardly if ever treated in the linguistic literature.

2. Previous Approaches

In existing systems or resources for NLP, separable verbs are usually treated as a lexicographic and syntactic problem. Two typical approaches can be illustrated on the basis of Celex and Rosetta.
Celex (http://www.kun.nl/celex) is a lexical database project offering a German dictionary with 50'000 entries and a Dutch dictionary with 120'000 entries. In these dictionaries separable verbs are listed with a feature conveying the information that they belong to the class of separable verbs and a bracketing structure showing the decomposition into a prefix and a base, e.g. (auf)(hören). Celex dictionaries are reusable, but the rule component for the interpretation of the information on separable verbs, i.e. the mechanism for going from (1b-c) to (2), remains to be developed by each NLP system using the dictionaries.

Rosetta is a machine translation system which includes Dutch as one of the source and target languages. Rosetta (1994:78-79) describes how separable verbs are treated. For the verb ophouden illustrated in (3), there are three lexical entries: ophouden for the continuous forms as in (3a), and houden and op for the discontinuous forms as in (3b-c). When a form of houden is found in a text, it is multiply ambiguous, because it can be a form of the simple verb houden ('hold') or of one of the separable verbs ophouden ('stop'), aanhouden ('arrest'), afhouden ('withhold'), etc. The entry for houden as part of ophouden contains the information that it must be combined with a particle op. At the same time, op is ambiguous between a reading as preposition or particle. In syntax, there is a rule combining the two elements in a sentence such as (3b). It is clear that, while this approach may work, it is far from elegant. It creates ambiguity and redundancies, because ophouden written together is treated in a different entry from op + houden as a discontinuous unit. These properties make the resulting dictionaries less transparent and do not favour reusability.

It should be pointed out that Celex and Rosetta were not chosen because their solution to the problem of separable verbs is worse than others. They are representative examples of currently used strategies, chosen mainly because they are relatively well-documented.

3. The Word Manager Approach

Word Manager™ (WM) is a system for morphological dictionaries. It includes rules for inflection and derivation (WM proper) and for clitics and multi-word units (Phrase Manager, PM). We will use WM here as a name for the combination of the two components. A general description of the design of WM, with references to various publications where the formalism is discussed in more detail, can be found in ten Hacken & Domenig (1996).

The German WM dictionary consists of a comprehensive set of inflectional and word formation rules describing the full range of morphological processes in German. In the last two years we have specified more than 100'000 database entries by classification of lexemes in terms of inflection rules (for morphologically simple entries) and by the application of word formation rules (for morphologically complex entries). In addition, the PM module contains a set of rules for clitics and multi-word units which covers German periphrastic inflection patterns and separable verbs.

The rule types invoked in the treatment of separable verbs in WM include Inflection Rules (IRules), Word Formation Rules (WFRules), Periphrastic Inflection Rules (PIRules), and Clitic Rules (CRules). We will describe each of them in turn.

3.1. Inflection

In inflection, aufhören is treated as a verb with a detachable prefix auf. The detachable prefix is defined as an underspecified IFormative.
This means that, in the same way as for stems, its specification is distributed over a class specification and a specification of the individual string. The class is defined by the linguist in the specification of inflection processes. The specification of the string is part of the lexicographic specification, i.e. the string specification is the result of the application of the word formation rule the lexicographer chooses for the definition of an individual entry. In the IRules, detachable prefixes are referred to as formatives in the formulae generating the word forms. Fig. 1 gives the relevant rule of the database for otherwise regular separable verbs, such as aufhören.

    RIRule V_Detachable-Prefix
      citation-forms  (ICat Detachable-Prefix) (ICat V-Stem) (ICat V-Suffix) (Mod Inf)
      word-forms      (ICat Detachable-Prefix) (ICat V-Stem) (ICat V-Suffix)
                      (ICat Detachable-Prefix) (ICat V-Prefix.ge) (ICat V-Stem) ... (ICat V-Suffix) (Mod PaPa)

Fig. 1: Inflection rule for separable verbs in WM. The dots in the last line mark the absence of a line break in the actual code. Feature specifications separated by tabs refer to sets of formatives in paradigmatic variation. Each line thus generates one or more word forms.

3.2. Word Formation

Word Formation Rules consist of a source definition and a target definition. The source definition determines what (kind of) formatives are taken to form a new word. The target definition specifies how the source formatives are combined, and which inflection rule the new word is assigned to. Separable verbs are the result of WFRules which are remarkable because of their target. The target specification is as in Fig. 2.

    target (RIRule V_Detachable-Prefix)
           separable
           1 (ICat Detachable-Prefix)
           2 (ICat V-Stem)

Fig. 2: Target specification of the WFRule for separable verbs in WM.

This specification departs from the usual specification of a target in a WFRule in two respects. First, instead of concatenating the source formatives, the rule lists them, leaving concatenation to the IRule. This is necessary to form the past participle aufgehört, where the two formatives are separated by the prefix ge- (cf. the last line of Fig. 1). Separable verbs are specified by the lexicographer by linking a word to a WFRule having a target specification as in Fig. 2. In the case of aufhören, this is a rule for prefixing in which "1" in Fig. 2 matches a closed set of predefined prefixes. The IRules and WFRules described so far cover the non-separated occurrences as in (1a).

The second special property of the specification in Fig. 2 is the system keyword "separable" in the second line. It assigns the result of the WFRule to the predefined class %separable. This class, whose name is defined in the WM-formalism, can be used to establish a link between the result of word formation and the input to the periphrastic inflection mechanism used to recognize occurrences such as in (1b).

3.3. Periphrastic Inflection

The mechanism for periphrastic inflection in WM consists of two parts. PIClasses are used to identify the components and PIRules to turn them into a single word form. The PIRule for separable verbs in German is given in Fig. 3.

    Separable
    (Cat V) ^(Mod Inf) ^(Mod Part) + %separable =
    (POS 1) (FORM 2+1) (PERC 1) (Cat V)

Fig. 3: Periphrastic Inflection Rule for separable verbs in WM.

The rule in Fig. 3 consists of a name and a body, which in turn consists of input and output specifications separated by "=". The input specifies a finite verb form (infinitive and participles are excluded by "^") and a detachable prefix. The output combines them in the position of the verb, with the form prefix + verb, and with the features percolated from the verb (person, number, etc.). This yields (2a) as a step in the analysis of (1b).

The possibilities for specifying the relative position of the two elements to be combined are the same as the possibilities for multi-word units in general. In the PIClass for German it is specified that the finite verb always precedes the particle when the two are separated. In Dutch this is not the case, as illustrated by (3c), so that a different specification is required.
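The effect of a PIRule such as the one in Fig. 3 can be pictured procedurally. The following Python sketch is our own illustration, not WM code; the data format and all names are invented for the example:

# Illustrative sketch of applying a "Separable"-style PIRule (cf. Fig. 3).
# An analysis is a (string, feature dict) pair; the feature format is our own.

def apply_separable_pirule(analyses):
    """analyses: list of (string, features) for one sentence, in surface order."""
    results = []
    for i, (verb, vfeat) in enumerate(analyses):
        # Input side: a finite verb form, i.e. (Cat V) but neither
        # (Mod Inf) nor (Mod Part).
        if vfeat.get("Cat") != "V" or vfeat.get("Mod") in ("Inf", "Part"):
            continue
        # In the German PIClass the finite verb precedes the particle.
        for prefix, pfeat in analyses[i + 1:]:
            if pfeat.get("Class") == "%separable":
                # Output side: (POS 1) (FORM 2+1) (PERC 1) (Cat V) --
                # verb position, form prefix+verb, features from the verb.
                results.append((prefix + verb, dict(vfeat)))
    return results

# hört ... auf => aufhört, with person/number/tense percolated from hört:
analyses = [("hört", {"Cat": "V", "Tense": "Pres", "Pers": 3, "Num": "SG"}),
            ("jetzt", {"Cat": "Adv"}),
            ("auf", {"Class": "%separable"})]
print(apply_separable_pirule(analyses))  # [('aufhört', {... features of hört ...})]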
3.4. Clitic Rules

The clitic rule mechanism is used to analyse aufzuhören in (1c) and produce zu aufhören as in (2b). The CRule used is given in Fig. 4.

    %separable + (CElement zu) + (Cat V) (Mod Inf) (Temp Pres) =
    (CElement zu), %separable + (Cat V) (Mod Inf) (Temp Pres)

Fig. 4: CRule for the infinitive of separable verbs in WM.

Again input and output are separated by "=". The input consists of the concatenation of three elements: a detachable prefix, infinitival zu, and an infinitive. Graphic concatenation is indicated by "+". The CElement zu is defined elsewhere as a form of the infinitival zu, rather than the homonymous preposition, in order not to lose information. The output consists of two words, as indicated by the comma, the second of which concatenates the prefix and the verb.

3.5. Recognition and Generation

In recognition, the input is the largest domain over which components of multi-word units (MWUs) can be spread. In practice, this coincides with the sentence. Since WM does not contain a parser, larger chunks of input will result in spurious recognition of potential MWUs.

Let us assume as an example that the sentences in (1) are given as input to WM. The first component to act is the clitics component. It leaves everything unchanged except (1c), which is replaced by (2b): aufzuhören => zu aufhören. Then the rules of WM proper are activated. They replace each word form by a set of analyses in terms of a string and feature set. In (1a), aufhört is analysed as third person singular or second person plural of the present tense of aufhören; in (1b) hört and auf are analysed separately; and in (1c) aufhören, which was given the feature infinitive by the CRule in Fig. 4, is analysed only as infinitive, not as any of the homonymous forms in the paradigm. The next step is periphrastic inflection. It applies to (1a) and (1c) vacuously, but combines hört and auf in (1b), producing the feature description corresponding to (2a): hört auf => aufhört. Finally, the idiom recognition component (not treated here) applies vacuously.

A general remark on recognition is in order here. The rule components of PM, i.e. clitics, periphrastic inflection and idiom recognition, add their results to the set of intermediate representations available at the relevant point. Thus, after the clitic component, aufzuhören continues to exist alongside zu aufhören in the analysis of (1c). Since the former cannot be analysed by WM proper, it is discarded. Likewise, hört will survive in (1b) after periphrastic inflection, and indeed as part of the final result. This is necessary in examples such as (4):

(4) Der Hund hört auf den Namen Wurzel. ('The dog answers to the name [of] Wurzel')
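This additive, nondestructive behaviour is easy to emulate. The sketch below is again our own illustration rather than WM code; the particle list and the finite-verb test are crude stand-ins for what WM derives from its lexicon:

# Minimal sketch of nondestructive rule application: each PM component adds
# its results to the set of readings instead of replacing them. A reading is
# represented here simply as a tuple of token strings.

def apply_component(readings, rule):
    """rule maps a reading to a list of new readings (possibly empty)."""
    out = set(readings)
    for r in readings:
        out.update(rule(r))
    return out

def separable_pi(reading):
    # Combine a finite verb with a later stranded particle:
    # hört ... auf => aufhört. Toy particle list and verb test.
    new = []
    for i, w in enumerate(reading):
        for j in range(i + 1, len(reading)):
            if reading[j] in {"auf", "an", "ab"} and w.endswith("t"):
                new.append(reading[:i] + (reading[j] + w,)
                           + reading[i + 1:j] + reading[j + 1:])
    return new

sent4 = ("der", "Hund", "hört", "auf", "den", "Namen", "Wurzel")
for r in sorted(apply_component({sent4}, separable_pi)):
    print(r)
# Both the combined reading (.. aufhört ..) and the original one survive,
# so a client application can still choose the literal 'hören auf' reading of (4).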
Since rules in WM are not inherently directional, it is also possible to generate all forms of a lexeme such as aufhören in the way they may occur in a text. The client application required for this task can also include codes indicating places in the string where other material may intervene, because this information is available in the relevant PIClass of the database.

4. Conclusion

Separable verbs in German and Dutch constitute a problem in NLP because they are lexemes whose recognition is not simply a matter of dictionary lookup. Therefore, a reusable lexical database such as Celex does not offer a comprehensive solution to the problem. On the other hand, treating them as a problem of syntactic recognition, as implemented in, for instance, Rosetta, fails to account for the lexeme character of separable verbs. As a consequence, spurious ambiguities and redundancies are created. Ambiguities arise between a simple verb such as hören ('hear') and the same form functioning as part of a separable verb such as aufhören. Redundancies emerge between the two different entries for aufhören, one for the continuous and one for the discontinuous occurrences.

In Word Manager, the recognition of separable verbs is entirely within the reusable lexical domain. A client application can start from an input which resembles (2) rather than (1b-c). An indication of the type of input is given in (5) and (6). For (1b), (5a) and (5b) are offered as alternatives. For (1c), (6) is offered as the only analysis (modulo syncretism of versucht).

(5) a. claudia (Cat Noun)
       aufhören (Cat Verb) (Tense Pres) (Pers Third) (Num SG)
       jetzt (Cat Adv)
    b. claudia (Cat Noun)
       hören (Cat Verb) (Tense Pres) (Pers Third) (Num SG)
       jetzt (Cat Adv)
       auf (Cat Prep)

(6) daniel (Cat Noun)
    versuchen (Cat Verb) (Tense Pres) (Pers Third) (Num SG)
    zu (Cat Inf-marker)
    aufhören (Cat Verb) (Mode Inf)

The task of the client application in the recognition of separable verbs in (1) is reduced to the choice of (5a) rather than (5b).

Finally, two points deserve to be emphasized. First, the entire WM-formalism for separable verbs has been implemented as described here. The rules for German have been formulated and a large dictionary for German (100'000 entries) including separable verbs is available. Moreover, the only provision in the WM-formalism specifically geared towards the treatment of separable verbs is the keyword separable in WFRules (cf. Fig. 2) and the corresponding class name %separable. Otherwise the entire formalism used for separable verbs is available as a consequence of general requirements of morphology and multi-word units.

References

ten Hacken, Pius & Domenig, Marc (1996), 'Reusable Dictionaries for NLP: The Word Manager Approach', Lexicology 2: 232-255.

Rosetta, M.T. (1994), Compositional Translation, Kluwer Academic, Dordrecht.

Tschichold, Cornelia (forthcoming), English Multi-Word Units in a Lexicon for Natural Language Processing, Ph.D. dissertation, Universität Basel (Dec. 1996), to appear at Olms Verlag, Hildesheim.

Word Manager: http://www.unibas.ch/Lllab/projects/wordmanager/wordmanager.html

Fig. 5: URL for Word Manager.
A Text Understander that Learns

Udo Hahn & Klemens Schnattinger
Computational Linguistics Lab, Freiburg University
Werthmannplatz 1, D-79085 Freiburg, Germany
{hahn,schnattinger}@coling.uni-freiburg.de

Abstract

We introduce an approach to the automatic acquisition of new concepts from natural language texts which is tightly integrated with the underlying text understanding process. The learning model is centered around the 'quality' of different forms of linguistic and conceptual evidence which underlies the incremental generation and refinement of alternative concept hypotheses, each one capturing a different conceptual reading for an unknown lexical item.

1 Introduction

The approach to learning new concepts as a result of understanding natural language texts we present here builds on two different sources of evidence -- the prior knowledge of the domain the texts are about, and grammatical constructions in which unknown lexical items occur. While there may be many reasonable interpretations when an unknown item occurs for the very first time in a text, their number rapidly decreases when more and more evidence is gathered. Our model tries to make explicit the reasoning processes behind this learning pattern.

Unlike the current mainstream in automatic linguistic knowledge acquisition, which can be characterized as quantitative, surface-oriented bulk processing of large corpora of texts (Hindle, 1989; Zernik and Jacobs, 1990; Hearst, 1992; Manning, 1993), we propose here a knowledge-intensive model of concept learning from few, positive-only examples that is tightly integrated with the non-learning mode of text understanding. Both learning and understanding build on a given core ontology in the format of terminological assertions and, hence, make abundant use of terminological reasoning. The 'plain' text understanding mode can be considered as the instantiation and continuous filling of roles with respect to single concepts already available in the knowledge base. Under learning conditions, however, a set of alternative concept hypotheses has to be maintained for each unknown item, with each hypothesis denoting a newly created conceptual interpretation tentatively associated with the unknown item.

Figure 1: Architecture of the Text Learner (the parser feeds hypothesis spaces 1..n, which are assessed by the quality machine and the qualifier).

The underlying methodology is summarized in Fig. 1. The text parser (for an overview, cf. Bröker et al. (1994)) yields information from the grammatical constructions in which an unknown lexical item (symbolized by the black square) occurs in terms of the corresponding dependency parse tree. The kinds of syntactic constructions (e.g., genitive, apposition, comparative), in which unknown lexical items appear, are recorded and later assessed relative to the credit they lend to a particular hypothesis. The conceptual interpretation of parse trees involving unknown lexical items in the domain knowledge base leads to the derivation of concept hypotheses, which are further enriched by conceptual annotations. These reflect structural patterns of consistency, mutual justification, analogy, etc. relative to already available concept descriptions in the domain knowledge base or other hypothesis spaces.
This kind of initial evidence, in particular its predictive "goodness" for the learning task, is represented by corresponding sets of linguistic and conceptual quality labels. Multiple concept hypotheses for each unknown lexical item are organized in terms of corresponding hypothesis spaces, each of which holds different or further specialized conceptual readings.

The quality machine estimates the overall credibility of single concept hypotheses by taking the available set of quality labels for each hypothesis into account. The final computation of a preference order for the entire set of competing hypotheses takes place in the qualifier, a terminological classifier extended by an evaluation metric for quality-based selection criteria. The output of the quality machine is a ranked list of concept hypotheses. The ranking yields, in decreasing order of significance, either the most plausible concept classes which classify the considered instance or more general concept classes subsuming the considered concept class (cf. Schnattinger and Hahn (1998) for details).

2 Methodological Framework

In this section, we present the major methodological decisions underlying our approach.

2.1 Terminological Logics

We use a standard terminological, KL-ONE-style concept description language, here referred to as CDL (for a survey of this paradigm, cf. Woods and Schmolze (1992)). It has several constructors combining atomic concepts, roles and individuals to define the terminological theory of a domain. Concepts are unary predicates, roles are binary predicates over a domain Δ, with individuals being the elements of Δ. We assume a common set-theoretical semantics for CDL -- an interpretation I is a function that assigns to each concept symbol (from the set A) a subset of the domain Δ, I : A -> 2^Δ, to each role symbol (from the set P) a binary relation over Δ, I : P -> 2^(Δ×Δ), and to each individual symbol (from the set I) an element of Δ, I : I -> Δ.

Concept terms and role terms are defined inductively. Table 1 contains some constructors and their semantics, where C and D denote concept terms, while R and S denote roles. R^I(d) represents the set of role fillers of the individual d, i.e., the set of individuals e with (d, e) ∈ R^I.

Table 1: Some Concept and Role Terms

    Syntax    Semantics
    C ⊓ D     C^I ∩ D^I
    C ⊔ D     C^I ∪ D^I
    ∀R.C      {d ∈ Δ | R^I(d) ⊆ C^I}
    R ⊓ S     R^I ∩ S^I
    C|R       {(d, d') ∈ R^I | d ∈ C^I}
    R|C       {(d, d') ∈ R^I | d' ∈ C^I}

By means of terminological axioms (for a subset, see Table 2) a symbolic name can be introduced for each concept, to which are assigned necessary and sufficient constraints using the definitional operator "≐". A finite set of such axioms is called the terminology or TBox. Concepts and roles are associated with concrete individuals by assertional axioms (see Table 2; a, b denote individuals). A finite set of such axioms is called the world description or ABox. An interpretation I is a model of an ABox with regard to a TBox, iff I satisfies the assertional and terminological axioms.

Table 2: Axioms for Concepts and Roles

    Axiom    Semantics
    A ≐ C    A^I = C^I
    a : C    a^I ∈ C^I
    Q ≐ R    Q^I = R^I
    a R b    (a^I, b^I) ∈ R^I
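To make the set-theoretic semantics concrete, here is a small Python sketch of our own (all data and names are invented) that evaluates the concept constructor ∀R.C over a finite interpretation:

# Toy evaluation of a CDL-style term over a finite interpretation.
# Concepts are sets of individuals, roles are sets of pairs.

def value_restriction(role, concept, domain):
    """Semantics of (forall R.C): all d whose R-fillers all lie in C."""
    return {d for d in domain
            if all(e in concept for (x, e) in role if x == d)}

domain   = {"itoh-ci-8", "switch.1", "cable.1"}
SWITCH   = {"switch.1"}
HAS_PART = {("itoh-ci-8", "switch.1"), ("itoh-ci-8", "cable.1")}

# itoh-ci-8 is NOT in (forall HAS-PART.SWITCH): one filler is not a switch;
# individuals without HAS-PART fillers qualify vacuously.
print(value_restriction(HAS_PART, SWITCH, domain))  # {'switch.1', 'cable.1'}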
Considering, e.g., a phrase such as 'The switch of the Itoh-Ci-8 ..', a straightforward translation into corresponding terminological concept descriptions is illustrated by:

    (P1) switch.1 : SWITCH
    (P2) Itoh-Ci-8 HAS-SWITCH switch.1
    (P3) HAS-SWITCH ≐ (OUTPUTDEV ⊔ INPUTDEV ⊔ STORAGEDEV ⊔ COMPUTER) |HAS-PART| SWITCH

Assertion P1 indicates that the instance switch.1 belongs to the concept class SWITCH. P2 relates Itoh-Ci-8 and switch.1 via the relation HAS-SWITCH. The relation HAS-SWITCH is defined, finally, as the set of all HAS-PART relations which have their domain restricted to the disjunction of the concepts OUTPUTDEV, INPUTDEV, STORAGEDEV or COMPUTER and their range restricted to SWITCH.

In order to represent and reason about concept hypotheses we have to properly extend the formalism of CDL. Terminological hypotheses, in our framework, are characterized by the following properties: for all stipulated hypotheses (1) the same domain Δ holds, (2) the same concept definitions are used, and (3) only different assertional axioms can be established. These conditions are sufficient, because each hypothesis is based on a unique discourse entity (cf. (1)), which can be directly mapped to associated instances (so concept definitions are stable (2)). Only relations (including the ISA-relation) among the instances may be different (3).

Given these constraints, we may annotate each assertional axiom of the form 'a : C' and 'a R b' by a corresponding hypothesis label h so that (a : C)_h and (a R b)_h are valid terminological expressions. The extended terminological language (cf. Table 3) will be called CDL^hypo. Its semantics is given by a special interpretation function I_h for each hypothesis h, which is applied to each concept and role symbol in the canonical way: I_h : A -> 2^Δ; I_h : P -> 2^(Δ×Δ). Notice that the instances a, b are interpreted by the interpretation function I, because there exists only one domain Δ. Only the interpretation of the concept symbol C and the role symbol R may be different in each hypothesis h.

Table 3: Axioms in CDL^hypo

    Axiom        Semantics
    (a : C)_h    a^I ∈ C^(I_h)
    (a R b)_h    (a^I, b^I) ∈ R^(I_h)

Assume that we want to represent two of the four concept hypotheses that can be derived from (P3), viz. Itoh-Ci-8 considered as a storage device or an output device. The corresponding ABox expressions are then given by:

    (Itoh-Ci-8 HAS-SWITCH switch.1)_h1, (Itoh-Ci-8 : STORAGEDEV)_h1
    (Itoh-Ci-8 HAS-SWITCH switch.1)_h2, (Itoh-Ci-8 : OUTPUTDEV)_h2

The semantics associated with this ABox fragment has the following form:

    I_h1(HAS-SWITCH) = {(Itoh-Ci-8, switch.1)}, I_h1(STORAGEDEV) = {Itoh-Ci-8}, I_h1(OUTPUTDEV) = ∅
    I_h2(HAS-SWITCH) = {(Itoh-Ci-8, switch.1)}, I_h2(STORAGEDEV) = ∅, I_h2(OUTPUTDEV) = {Itoh-Ci-8}
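The key point of CDL^hypo -- one shared domain and instance interpretation, but per-hypothesis extensions of concept symbols -- fits in a few lines of code. A minimal sketch of our own, mirroring the h1/h2 example:

# Sketch: one shared domain, per-hypothesis extensions of concept symbols.
extensions = {
    "h1": {"STORAGEDEV": {"Itoh-Ci-8"}, "OUTPUTDEV": set()},
    "h2": {"STORAGEDEV": set(), "OUTPUTDEV": {"Itoh-Ci-8"}},
}

def holds(a, concept, h):
    """Truth of the CDL^hypo axiom (a : C)_h."""
    return a in extensions[h].get(concept, set())

print(holds("Itoh-Ci-8", "STORAGEDEV", "h1"))  # True
print(holds("Itoh-Ci-8", "STORAGEDEV", "h2"))  # False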
2.2 Hypothesis Generation Rules

As mentioned above, text parsing and concept acquisition from texts are tightly coupled. Whenever, e.g., two nominals or a nominal and a verb are supposed to be syntactically related in the regular parsing mode, the semantic interpreter simultaneously evaluates the conceptual compatibility of the items involved. Since these reasoning processes are fully embedded in a terminological representation system, checks are made as to whether a concept denoted by one of these objects is allowed to fill a role of the other one. If one of the items involved is unknown, i.e., a lexical and conceptual gap is encountered, this interpretation mode generates initial concept hypotheses about the class membership of the unknown object, and, as a consequence of inheritance mechanisms holding for concept taxonomies, provides conceptual role information for the unknown item.

Given the structural foundations of terminological theories, two dimensions of conceptual learning can be distinguished -- the taxonomic one by which new concepts are located in conceptual hierarchies, and the aggregational one by which concepts are supplied with clusters of conceptual relations (these will be used subsequently by the terminological classifier to determine the current position of the item to be learned in the taxonomy). In the following, let target.con be an unknown concept denoted by the corresponding lexical item target.lex, base.con be a given knowledge base concept denoted by the corresponding lexical item base.lex, and let target.lex and base.lex be related by some dependency relation. Furthermore, in the hypothesis generation rules below variables are indicated by names with leading '?'; the operator TELL is used to initiate the creation of assertional axioms in CDL^hypo.

Typical linguistic indicators that can be exploited for taxonomic integration are appositions ('.. the printer @A@ ..'), exemplification phrases ('.. printers like the @A@ ..') or nominal compounds ('.. the @A@ printer ..'). These constructions almost unequivocally determine '@A@' (target.lex), when considered as a proper name¹, to denote an instance of a PRINTER (target.con), given its characteristic dependency relation to 'printer' (base.lex), the conceptual correlate of which is the concept class PRINTER (base.con). This conclusion is justified independent of conceptual conditions, simply due to the nature of these linguistic constructions.

¹ Such a part-of-speech hypothesis can be derived from the inventory of valence and word order specifications underlying the dependency grammar model we use (Bröker et al., 1994).

The generation of corresponding concept hypotheses is achieved by the rule sub-hypo (Table 4). Basically, the type of target.con is carried over from base.con (function type-of). In addition, the syntactic label is asserted which characterizes the grammatical construction figuring as the structural source for that particular hypothesis (h denotes the identifier for the selected hypothesis space), e.g., APPOSITION, EXEMPLIFICATION, or NCOMPOUND.

Table 4: Taxonomic Hypothesis Generation Rule

    sub-hypo(target.con, base.con, h, label)
      ?type := type-of(base.con)
      TELL (target.con : ?type)_h
      add-label((target.con : ?type)_h, label)
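Read procedurally, sub-hypo is only a few lines. The Python sketch below is our own rendering, not the authors' system; the data structures stand in for the terminological knowledge base:

# Sketch of the taxonomic rule sub-hypo: the target inherits the type of the
# base concept, and the triggering construction is recorded as a quality label.
from collections import defaultdict

inst   = defaultdict(set)    # h -> set of (individual, concept) assertions
labels = defaultdict(list)   # (h, axiom) -> quality labels recorded so far

def sub_hypo(target, base_type, h, label):
    """type-of(base.con) is passed in directly as base_type in this sketch."""
    axiom = (target, base_type)          # stands for (target : ?type)_h
    inst[h].add(axiom)                   # TELL
    labels[(h, axiom)].append(label)     # add-label

sub_hypo("A", "PRINTER", "h1", "APPOSITION")
sub_hypo("A", "PRINTER", "h1", "NCOMPOUND")
print(labels[("h1", ("A", "PRINTER"))])  # ['APPOSITION', 'NCOMPOUND']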
The aggregational dimension of terminological theories is addressed, e.g., by grammatical constructions causing case frame assignments. In the example '.. @B@ is equipped with 32 MB of RAM ..', role filler constraints of the verb form 'equipped' that relate to its PATIENT role carry over to '@B@'. After subsequent semantic interpretation of the entire verbal complex, '@B@' may be anything that can be equipped with memory. Constructions like prepositional phrases ('.. @C@ from IBM ..') or genitives ('.. IBM's @C@ ..') in which either target.lex or base.lex occur as head or modifier have a similar effect. Attachments of prepositional phrases or relations among nouns in genitives, however, open a wider interpretation space for '@C@' than for '@B@', since verbal case frames provide a higher role selectivity than PP attachments or, even more so, genitive NPs. So, any concept that can reasonably be related to the concept IBM will be considered a potential hypothesis for '@C@', e.g., its departments, products, Fortune 500 ranking.

Generalizing from these considerations, we state a second hypothesis generation rule which accounts for aggregational patterns of concept learning. The basic assumption behind this rule, perm-hypo (cf. Table 5), is that target.con fills (exactly) one of the n roles of base.con it is currently permitted to fill (this set is determined by the function perm-filler). Depending on the actual linguistic construction one encounters, it may occur, in particular for PP and NP constructions, that one cannot decide on the correct role yet. Consequently, several alternative hypothesis spaces are opened and target.con is assigned as a potential filler of the i-th role (taken from ?roleSet, the set of admitted roles) in its corresponding hypothesis space. As a result, the classifier is able to derive a suitable concept hypothesis by specializing target.con according to the value restriction of base.con's i-th role. The function member-of selects a role from the set ?roleSet; gen-hypo creates a new hypothesis space by asserting the given axioms of h and outputs its identifier. Thereupon, the hypothesis space identified by ?hypo is augmented through a TELL operation by the hypothesized assertion. As for sub-hypo, perm-hypo assigns a syntactic quality label (function add-label) to each i-th hypothesis indicating the type of syntactic construction in which target.lex and base.lex are related in the text, e.g., CASEFRAME, PPATTACH or GENITIVENP.

Table 5: Aggregational Hypothesis Generation Rule

    perm-hypo(target.con, base.con, h, label)
      ?roleSet := perm-filler(target.con, base.con, h)
      ?r := |?roleSet|
      FORALL ?i := ?r DOWNTO 1 DO
        ?role_i := member-of(?roleSet)
        ?roleSet := ?roleSet \ {?role_i}
        IF ?i = 1 THEN ?hypo := h ELSE ?hypo := gen-hypo(h)
        TELL (base.con ?role_i target.con)_?hypo
        add-label((base.con ?role_i target.con)_?hypo, label)
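The characteristic feature of perm-hypo is the forking of hypothesis spaces, one per admitted role. A sketch under our own toy representation (perm-filler is replaced by a fixed role set, and the copying of h's axioms into the fork is omitted):

# Sketch of perm-hypo: the last admitted role reuses space h (the ?i = 1 case
# in Table 5); every other role gets a freshly forked space (gen-hypo).
from collections import defaultdict
import itertools

rel    = defaultdict(set)    # hypothesis space -> role assertions
labels = defaultdict(list)
_fork  = itertools.count(1)

def perm_hypo(target, base, role_set, h, label):
    for i, role in enumerate(role_set):
        hypo = h if i == len(role_set) - 1 else f"{h}.{next(_fork)}"
        rel[hypo].add((base, role, target))                  # TELL
        labels[(hypo, (base, role, target))].append(label)   # add-label

# Hypothetical example: two roles survive the integrity checks.
perm_hypo("C", "IBM", ["HAS-DEPARTMENT", "PRODUCES"], "h0", "PPATTACH")
print(sorted(rel))  # ['h0', 'h0.1'] -- one hypothesis space per admitted role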
Getting back to our example, let us assume that the target Itoh-Ci-8 is predicted already as a PRODUCT as a result of preceding interpretation processes, i.e., Itoh-Ci-8 : PRODUCT holds. Let PRODUCT be defined as:

    PRODUCT ≐ ∀HAS-PART.PHYSICALOBJECT ⊓ ∀HAS-SIZE.SIZE ⊓ ∀HAS-PRICE.PRICE ⊓ ∀HAS-WEIGHT.WEIGHT

At this level of conceptual restriction, four roles have to be considered for relating the target Itoh-Ci-8 - as a tentative PRODUCT - to the base concept SWITCH when interpreting the phrase 'The switch of the Itoh-Ci-8 ..'. Three of them, HAS-SIZE, HAS-PRICE, and HAS-WEIGHT, are ruled out due to the violation of a simple integrity constraint ('switch' does not denote a measure unit). Therefore, only the role HAS-PART must be considered in terms of the expression Itoh-Ci-8 HAS-PART switch.1 (or, equivalently, switch.1 PART-OF Itoh-Ci-8). Due to the definition of HAS-SWITCH (cf. P3, Subsection 2.1), the instantiation of HAS-PART is specialized to HAS-SWITCH by the classifier, since the range of the HAS-PART relation is already restricted to SWITCH (P1). Since the classifier aggressively pushes hypothesizing to be maximally specific, the disjunctive concept referred to in the domain restriction of the role HAS-SWITCH is split into four distinct hypotheses, two of which are sketched below. Hence, we assume Itoh-Ci-8 to denote either a STORAGEDEVice or an OUTPUTDEVice or an INPUTDEVice or a COMPUTER (note that we also include parts of the IS-A hierarchy in the example below).

    (Itoh-Ci-8 : STORAGEDEV)_h1, (Itoh-Ci-8 : DEVICE)_h1, .., (Itoh-Ci-8 HAS-SWITCH switch.1)_h1, ...
    (Itoh-Ci-8 : OUTPUTDEV)_h2, (Itoh-Ci-8 : DEVICE)_h2, .., (Itoh-Ci-8 HAS-SWITCH switch.1)_h2, ...

2.3 Hypothesis Annotation Rules

In this section, we will focus on the quality assessment of concept hypotheses which occurs at the knowledge base level only; it is due to the operation of hypothesis annotation rules which continuously evaluate the hypotheses that have been derived from linguistic evidence.

The M-Deduction rule (see Table 6) is triggered for any repetitive assignment of the same role filler to one specific conceptual relation that occurs in different hypothesis spaces. This rule captures the assumption that a role filler which has been multiply derived at different occasions must be granted more strength than one which has been derived at a single occasion only.

Table 6: The Rule M-Deduction

    EXISTS o1, o2, R, h1, h2 : (o1 R o2)_h1 ∧ (o1 R o2)_h2 ∧ h1 ≠ h2
    ==> TELL (o1 R o2)_h1 : M-DEDUCTION

Considering our example at the end of Subsection 2.2, for 'Itoh-Ci-8' the concept hypotheses STORAGEDEV and OUTPUTDEV were derived independently of each other in different hypothesis spaces. Hence, DEVICE as their common superconcept has been multiply derived by the classifier in each of these spaces as a result of transitive closure computations, too. Accordingly, this hypothesis is assigned a high degree of confidence by the classifier which derives the conceptual quality label M-DEDUCTION:

    (Itoh-Ci-8 : DEVICE)_h1 ∧ (Itoh-Ci-8 : DEVICE)_h2
    ==> (Itoh-Ci-8 : DEVICE)_h1 : M-DEDUCTION
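Annotation rules such as M-Deduction amount to simple pattern matches over the hypothesis spaces. A self-contained sketch of our own (toy data; the real rule runs inside the terminological classifier):

# Sketch of M-Deduction: an assertion found in two different hypothesis
# spaces earns the conceptual quality label M-DEDUCTION.
from collections import defaultdict

inst = {  # assertions per hypothesis space (toy data)
    "h1": {("Itoh-Ci-8", "STORAGEDEV"), ("Itoh-Ci-8", "DEVICE")},
    "h2": {("Itoh-Ci-8", "OUTPUTDEV"), ("Itoh-Ci-8", "DEVICE")},
}
labels = defaultdict(list)

def m_deduction():
    for h1 in inst:
        for h2 in inst:
            if h1 != h2:
                for axiom in inst[h1] & inst[h2]:
                    labels[(h1, axiom)].append("M-DEDUCTION")

m_deduction()
print(labels[("h1", ("Itoh-Ci-8", "DEVICE"))])  # ['M-DEDUCTION']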
The C-Support rule (see Table 7) is triggered whenever, within the same hypothesis space, a hypothetical relation, R1, between two instances can be justified by another relation, R2, involving the same two instances, but where the role fillers occur in 'inverted' order (R1 and R2 need not necessarily be semantically inverse relations, as with 'buy' and 'sell'). This causes the generation of the quality label C-SUPPORT which captures the inherent symmetry between concepts related via quasi-inverse relations.

Table 7: The Rule C-Support

    EXISTS o1, o2, R1, R2, h : (o1 R1 o2)_h ∧ (o2 R2 o1)_h ∧ R1 ≠ R2
    ==> TELL (o1 R1 o2)_h : C-SUPPORT

Example:

    (Itoh SELLS Itoh-Ci-8)_h ∧ (Itoh-Ci-8 DEVELOPED-BY Itoh)_h
    ==> (Itoh SELLS Itoh-Ci-8)_h : C-SUPPORT

Whenever an already filled conceptual relation receives an additional, yet different role filler in the same hypothesis space, the AddFiller rule is triggered (see Table 8). This application-specific rule is particularly suited to our natural language understanding task and has its roots in the distinction between mandatory and optional case roles for (ACTION) verbs. Roughly, it yields a negative assessment in terms of the quality label ADDFILLER for any attempt to fill the same mandatory case role more than once (unless coordinations are involved). In contradistinction, when the same role of a non-ACTION concept (typically denoted by nouns) is multiply filled we assign the positive quality label SUPPORT, since it reflects the conceptual proximity a relation induces on its component fillers, provided that they share a common, non-ACTION concept class.

Table 8: The Rule AddFiller

    EXISTS o1, o2, o3, R, h : (o1 R o2)_h ∧ (o1 R o3)_h ∧ (o1 : ACTION)_h
    ==> TELL (o1 R o2)_h : ADDFILLER

We give examples both for the assignment of an ADDFILLER as well as for a SUPPORT label:

Examples:

    (produces.1 : ACTION)_h ∧ (produces.1 AGENT Itoh)_h ∧ (produces.1 AGENT IBM)_h
    ==> (produces.1 AGENT Itoh)_h : ADDFILLER

    (Itoh-Ci-8 : PRINTER)_h ∧ (Itoh-Ct : PRINTER)_h ∧ (Itoh SELLS Itoh-Ci-8)_h ∧ (Itoh SELLS Itoh-Ct)_h ∧ (Itoh : ¬ACTION)_h
    ==> (Itoh-Ci-8 : PRINTER)_h : SUPPORT

2.4 Quality Dimensions

The criteria from which concept hypotheses are derived differ in the dimension from which they are drawn (grammatical vs. conceptual evidence), as well as the strength by which they lend support to the corresponding hypotheses (e.g., apposition vs. genitive, multiple deduction vs. additional role filling, etc.). In order to make these distinctions explicit we have developed a "quality calculus" at the core of which lie the definition of and inference rules for quality labels (cf. Schnattinger and Hahn (1998) for more details). A design methodology for specific quality calculi may proceed along the following lines: (1) Define the dimensions from which quality labels can be drawn. In our application, we chose the set LQ := {l1, ..., lm} of linguistic quality labels and CQ := {c1, ..., cn} of conceptual quality labels. (2) Determine a partial ordering >p among the quality labels from one dimension reflecting different degrees of strength among the quality labels. (3) Determine a total ordering among the dimensions. In our application, we have empirical evidence to grant linguistic criteria priority over conceptual ones. Hence, we state the following constraint: ∀l ∈ LQ, ∀c ∈ CQ : l >p c.

The dimension LQ. Linguistic quality labels reflect structural properties of phrasal patterns or discourse contexts in which unknown lexical items occur² -- we here assume that the type of grammatical construction exercises a particular interpretative force on the unknown item and, at the same time, yields a particular level of credibility for the hypotheses being derived. Taking the considerations from Subsection 2.2 into account, concrete examples of high-quality labels are given by APPOSITION or NCOMPOUND labels. Still of good quality but already less constraining are occurrences of the unknown item in a CASEFRAME construction. Finally, in a PPATTACH or GENITIVENP construction the unknown lexical item is still less constrained. Hence, at the quality level, these latter two labels (just as the first two labels we considered) form an equivalence class whose elements cannot be further discriminated. So we end up with the following quality orderings:

    NCOMPOUND =p APPOSITION
    NCOMPOUND >p CASEFRAME
    APPOSITION >p CASEFRAME
    CASEFRAME >p GENITIVENP
    CASEFRAME >p PPATTACH
    GENITIVENP =p PPATTACH

² In the future, we intend to integrate additional types of constraints, e.g., quality criteria reflecting the degree of completeness vs. partiality of the parse.
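Since the LQ labels fall into ranked equivalence classes, the ordering can be encoded numerically. A minimal sketch (our own encoding, not the authors' calculus):

# Sketch of the linguistic quality ordering: equivalence classes ranked by
# strength; a lower rank means stronger evidence.
LQ_RANK = {
    "APPOSITION": 0, "NCOMPOUND": 0,   # =p, strongest
    "CASEFRAME": 1,
    "GENITIVENP": 2, "PPATTACH": 2,    # =p, weakest
}

def stronger(l1, l2):
    """l1 >p l2 in the ordering of Section 2.4."""
    return LQ_RANK[l1] < LQ_RANK[l2]

print(stronger("APPOSITION", "CASEFRAME"))  # True
print(stronger("GENITIVENP", "PPATTACH"))   # False -- they are equivalent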
The dimension CQ. Conceptual quality labels result from comparing the conceptual representation structures of a concept hypothesis with already existing representation structures in the underlying domain knowledge base or other concept hypotheses from the viewpoint of structural similarity, compatibility, etc. The closer the match, the more credit is lent to a hypothesis. A very positive conceptual quality label, e.g., is M-DEDUCTION, whereas ADDFILLER is a negative one. Still positive strength is expressed by SUPPORT or C-SUPPORT, both being indistinguishable, however, from a quality point of view. Accordingly, we may state:

    M-DEDUCTION >p SUPPORT
    M-DEDUCTION >p C-SUPPORT
    SUPPORT =p C-SUPPORT
    SUPPORT >p ADDFILLER
    C-SUPPORT >p ADDFILLER

2.5 Hypothesis Ranking

Each new clue available for a target concept to be learned results in the generation of additional linguistic or conceptual quality labels. So hypothesis spaces get incrementally augmented by quality statements. In order to select the most credible one(s) among them we apply a two-step procedure (the details of which are explained in Schnattinger and Hahn (1998)). First, those concept hypotheses are chosen which have accumulated the greatest amount of high-quality labels according to the linguistic dimension LQ. Second, further hypotheses are selected from this linguistically plausible candidate set based on the quality ordering underlying CQ.

We have also made considerable efforts to evaluate the performance of the text learner based on the quality calculus. In order to account for the incrementality of the learning process, a new evaluation measure capturing the system's on-line learning accuracy was defined, which is sensitive to taxonomic hierarchies. The results we got were consistently favorable, as our system outperformed those closest in spirit, CAMILLE (Hastings, 1996) and SCISOR (Rau et al., 1989), by a gain in accuracy on the order of 8%. Also, the system requires relatively few hypothesis spaces (2 to 6 on average) and prunes the concept search space radically, requiring only a few examples (for evaluation details, cf. Hahn and Schnattinger (1998)).
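The two-step selection can be sketched as follows. This is our own simplification: the real procedure compares labels under the partial order, whereas here the conceptual labels are collapsed into invented numeric scores:

# Sketch of the two-step ranking of Section 2.5: step 1 keeps the spaces with
# the most strong linguistic labels, step 2 ranks survivors by conceptual labels.
STRONG_LQ = {"APPOSITION", "NCOMPOUND"}
CQ_SCORE = {"M-DEDUCTION": 2, "SUPPORT": 1, "C-SUPPORT": 1, "ADDFILLER": -1}

def rank(spaces):
    """spaces: {h: list of quality labels} -- hypothetical input format."""
    lq = {h: sum(l in STRONG_LQ for l in ls) for h, ls in spaces.items()}
    best = max(lq.values())
    survivors = [h for h in spaces if lq[h] == best]
    return sorted(survivors,
                  key=lambda h: sum(CQ_SCORE.get(l, 0) for l in spaces[h]),
                  reverse=True)

spaces = {"h1": ["APPOSITION", "M-DEDUCTION"],
          "h2": ["APPOSITION", "ADDFILLER"],
          "h3": ["PPATTACH", "M-DEDUCTION"]}
print(rank(spaces))  # ['h1', 'h2'] -- h3 is eliminated in the linguistic step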
3 Related Work

We are not concerned with lexical acquisition from very large corpora using surface-level collocational data as proposed by Zernik and Jacobs (1990) and Velardi et al. (1991), or with hyponym extraction based on entirely syntactic criteria as in Hearst (1992) or lexico-semantic associations (e.g., Resnik (1992) or Sekine et al. (1994)). This is mainly due to the fact that these studies aim at a shallower level of learning (e.g., selectional restrictions or thematic relations of verbs), while our focus is on much more fine-grained conceptual knowledge (roles, role filler constraints, integrity conditions).

Our approach bears a close relationship, however, to the work of Mooney (1987), Berwick (1989), Rau et al. (1989), Gomez and Segami (1990), and Hastings (1996), who all aim at the automated learning of word meanings from context using a knowledge-intensive approach. But our work differs from theirs in that the need to cope with several competing concept hypotheses and to aim at a reason-based selection in terms of the quality of arguments is not an issue in these studies. Learning from real-world texts usually provides the learner with only sparse and fragmentary evidence, such that multiple hypotheses are likely to be derived and a need for a hypothesis evaluation arises.

4 Conclusion

We have introduced a solution for the semantic acquisition problem on the basis of the automatic processing of expository texts. The learning methodology we propose is based on the incremental assignment and evaluation of the quality of linguistic and conceptual evidence for emerging concept hypotheses. No specialized learning algorithm is needed, since learning is a reasoning task carried out by the classifier of a terminological reasoning system. However, strong heuristic guidance for selecting between plausible hypotheses comes from linguistic and conceptual quality criteria.

Acknowledgements. We would like to thank our colleagues in the CLIF group for fruitful discussions, in particular Joe Bush who polished the text as a native speaker. K. Schnattinger is supported by a grant from DFG (Ha 2097/3-1).

References

R. Berwick. 1989. Learning word meanings from examples. In D. Waltz, editor, Semantic Structures, pages 89-124. Lawrence Erlbaum.

N. Bröker, U. Hahn, and S. Schacht. 1994. Concurrent lexicalized dependency parsing: the PARSETALK model. In Proc. of the COLING'94, Vol. I, pages 379-385.

F. Gomez and C. Segami. 1990. Knowledge acquisition from natural language for expert systems based on classification problem-solving methods. Knowledge Acquisition, 2(2):107-128.

U. Hahn and K. Schnattinger. 1998. Towards text knowledge engineering. In Proc. of the AAAI'98.

P. Hastings. 1996. Implications of an automatic lexical acquisition system. In S. Wermter, E. Riloff, and G. Scheler, editors, Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, pages 261-274. Springer.

M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proc. of the COLING'92, Vol. 2, pages 539-545.

D. Hindle. 1989. Acquiring disambiguation rules from text. In Proc. of the ACL'89, pages 26-29.

C. Manning. 1993. Automatic acquisition of a large subcategorization dictionary from corpora. In Proc. of the ACL'93, pages 235-242.

R. Mooney. 1987. Integrated learning of words and their underlying concepts. In Proc. of the CogSci'87, pages 974-978.

L. Rau, P. Jacobs, and U. Zernik. 1989. Information extraction and text summarization using linguistic knowledge acquisition. Information Processing & Management, 25(4):419-428.

P. Resnik. 1992. A class-based approach to lexical discovery. In Proc. of the ACL'92, pages 327-329.

K. Schnattinger and U. Hahn. 1998. Quality-based learning. In Proc. of the ECAI'98, pages 160-164.

S. Sekine, J. Carroll, S. Ananiadou, and J. Tsujii. 1994. Automatic learning for semantic collocation. In Proc. of the ANLP'94, pages 104-110.

P. Velardi, M. Pazienza, and M. Fasolo. 1991. How to encode semantic knowledge: a method for meaning representation and computer-aided acquisition. Computational Linguistics, 17:153-170.

W. Woods and J. Schmolze. 1992. The KL-ONE family. Computers & Mathematics with Applications, 23(2/5):133-177.

U. Zernik and P. Jacobs. 1990. Tagging for learning: collecting thematic relations from corpus. In Proc. of the COLING'90, Vol. 1, pages 34-39.
Time Mapping with Hypergraphs

Jan W. Amtrup, Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003, USA, email: jamtrup@crl.nmsu.edu
Volker Weber, University of Hamburg, Computer Science Department, Vogt-Kölln-Str. 30, D-22527 Hamburg, Germany, email: [email protected]

Abstract

Word graphs are able to represent a large number of different utterance hypotheses in a very compact manner. However, usually they contain a huge amount of redundancy in terms of word hypotheses that cover almost identical intervals in time. We address this problem by introducing hypergraphs for speech processing. Hypergraphs can be classified as an extension to word graphs and charts, their edges possibly having several start and end vertices. By converting ordinary word graphs to hypergraphs one can reduce the number of edges considerably. We define hypergraphs formally, present an algorithm to convert word graphs into hypergraphs and state consistency properties for edges and their combination. Finally, we present some empirical results concerning graph size and parsing efficiency.

1 Introduction

The interface between a word recognizer and language processing modules is a crucial issue with modern speech processing systems. Given a sufficiently high word recognition rate, it suffices to transmit the most probable word sequence from the recognizer to a subsequent module (e.g. a parser). A slight extension over this best chain mode would be to deliver n-best chains to improve language processing results. However, it is usually not enough to deliver just the best 10 or 20 utterances, at least not for reasonably sized applications given todays speech recognition technology. To overcome this problem, in most current systems word graphs are used as speech-language interface. Word graphs offer a simple and efficient means to represent a very high number of utterance hypotheses in an extremely compact way (Oerder and Ney, 1993; Aubert and Ney, 1995).

Figure 1: Two families of edges in a word graph, labelled und ('and') and dann ('then').

Although they are compact, the use of word graphs leads to problems by itself. One of them is the current lack of a reasonable measure for word graph size and evaluation of their contents (Amtrup et al., 1997). The problem we want to address in this paper is the presence of a large number of almost identical word hypotheses. By almost identical we mean that the start and end vertices of edges differ only slightly. Consider figure 1 as an example section of a word graph. There are several word hypotheses representing the words und (and) and dann (then). The start and end points of them differ by small numbers of frames, each of them 10ms long. The reasons for the existence of these families of edges are at least twofold:

• Standard HMM-based word recognizers try to start (and finish) word models at each individual frame. Since the resolution is quite high (10ms, in many cases shorter than the word onset), a word model may have boundaries at several points in time.

• Natural speech (and in particular spontaneously produced speech) tends to blur word boundaries. This effect is in part responsible for the dramatic decrease in word recognition rate, given fluent speech as input in contrast to isolated words as input. Figure 1 demonstrates the inaccuracy of word boundaries by containing several meeting points between und and dann, emphasized by the fact that both words end resp. start with the same consonant.
Thus, for most words, there is a whole set of word hypotheses in a word graph which results in several meets between two sets of hypotheses. Both facts are disadvantageous for speech processing: many word edges result in a high number of lexical lookups and basic operations (e.g. bottom-up proposals of syntactic categories); many meeting points between edges result in a high number of possibly complex operations (like unifications in a parser).

The most obvious way to reduce the number of neighboring, identically labeled edges is to reduce the time resolution provided by a word recognizer (Weber, 1992). If a word edge is to be processed, the start and end vertices are mapped to the more coarse-grained points in time used by linguistic modules and a redundancy check is carried out in order to prevent multiple copies of edges. This can be easily done, but one has to face the drawback of introducing many more paths through the graph due to artificially constructed overlaps. Furthermore, it is not simple to choose a correct resolution, as the intervals effectively appearing with word onsets and offsets change considerably with words spoken. Also, the introduction of cycles has to be avoided.

A more sophisticated schema would use interval graphs to encode word graphs. Edges of interval graphs do not have individual start and end vertices, but instead use intervals to denote the range of applicability of an edge. The major problem with interval graphs lies with the complexity of edge access methods. However, many formal statements shown below will use interval arithmetics, as the argument will be easier to follow.

The approach we take in this paper is to use hypergraphs as representation medium for word graphs. What one wants is to carry out operations only once and record the fact that there are several start and end points of words. Hypergraphs (Gondran and Minoux, 1984, p. 30) are generalizations of ordinary graphs that allow multiple start and end vertices of edges. We extend the approach of H. Weber (Weber, 1995) for time mapping. Weber considered sets of edges with identical start vertices but slightly different end vertices, for which the notion family was introduced. We use full hypergraphs as representation and thus additionally allow several start vertices, which results in a further decrease of 6% in terms of resulting chart edges while parsing (cf. section 3). Figure 2 shows the example section using hyperedges for the two families of edges. We adopt the way of dealing with different acoustical scores of word hypotheses from Weber.

Figure 2: Two hyperedges representing families of edges.

2 Word Graphs and Hypergraphs

As described in the introduction, word graphs consist of edges representing word hypotheses generated by a word recognizer. The start and end point of edges usually denote points in time. Formally, a word graph is a directed, acyclic, weighted, labeled graph with distinct root and end vertices. It is a quadruple G = (V, E, W, L) with the following components:

• A nonempty set of graph vertices V = {v1, ..., vn}. To associate vertices with points in time, we use a function t : V -> N that returns the frame number for a given vertex.

• A nonempty set of weighted, labeled, directed edges E = {e1, ..., em} ⊆ V × V × W × L. To access the components of an edge e = (v, v', w, l), we use functions α, β, ω and l, which return the start vertex (α(e) = v), the end vertex (β(e) = v'), the weight (ω(e) = w) and the label (l(e) = l) of an edge, respectively.

• A nonempty set of edge weights W = {w1, ..., wp}. Edge weights normally represent the acoustic score assigned to the word hypothesis by an HMM-based word recognizer.

• A nonempty set of labels L = {l1, ..., lo}, which represents information attached to an edge, usually words.

We define the relation of reachability for vertices (→) as

    ∀v, w ∈ V : v → w :⇔ ∃e ∈ E : α(e) = v ∧ β(e) = w

The transitive hull of the reachability relation → is denoted by →*. We already stated that a word graph is acyclic and distinctly rooted and ended.
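As a concrete picture of these definitions, here is a minimal Python rendering of our own (frame times and scores are invented; vertices are identified by their frame numbers):

# Minimal word graph sketch: edges carry an acoustic score and a word label.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    start: int     # alpha(e): start vertex (frame number)
    end: int       # beta(e):  end vertex
    weight: float  # omega(e): normalized acoustic score
    label: str     # l(e):     the word

edges = [Edge(10, 24, 91.2, "und"), Edge(11, 24, 90.7, "und"),
         Edge(24, 38, 88.3, "dann"), Edge(25, 38, 89.0, "dann")]

# Adjacency in the plain word graph: beta(e) = alpha(e').
meets = [(e, f) for e in edges for f in edges if e.end == f.start]
print(len(meets))  # 2 -- every meeting point triggers parser operations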
2.1 Hypergraphs

Hypergraphs differ from graphs by allowing several start and end vertices for a single edge. In order to apply this property to word graphs, the definition of edges has to be changed. The set of edges E becomes a nonempty set of weighted, labeled, directed hyperedges E = {e1, ..., em} ⊆ (V*\∅) × (V*\∅) × W × L.

Several notions and functions defined for ordinary word graphs have to be adapted to reflect edges having sets of start and end vertices.

• The accessor functions for start and end vertices have to be adapted to return sets of vertices. Consider an edge e = (V, V', w, l); then we redefine

    α(e) := V    (1)
    β(e) := V'   (2)

• Two hyperedges e, e' are adjacent if they share a common vertex:

    β(e) ∩ α(e') ≠ ∅   (3)

• The reachability relation is now

    ∀v, w ∈ V : v → w :⇔ ∃e ∈ E : v ∈ α(e) ∧ w ∈ β(e)

Additionally, we define accessor functions for the first and last start and end vertex of an edge. We recur to the association of vertices with frame numbers, which is a slight simplification (in general, there is no need for a total ordering on the vertices in a word graph)¹. Furthermore, the intervals covered by start and end vertices are defined.

    α<(e) := argmin{t(v) | v ∈ V}    (4)
    α>(e) := argmax{t(v) | v ∈ V}    (5)
    β<(e) := argmin{t(v) | v ∈ V'}   (6)
    β>(e) := argmax{t(v) | v ∈ V'}   (7)
    α□(e) := [t(α<(e)), t(α>(e))]    (8)
    β□(e) := [t(β<(e)), t(β>(e))]    (9)

¹ The total ordering on vertices is naturally given through the linearity of speech.

In contrast to interval graphs, we do not require the sets of start and end vertices to be contiguous, i.e. there may be vertices that fall in the range of the start or end vertices of an edge which are not members of that set. If we are not interested in the individual members of α(e) or β(e), we merely talk about interval graphs.

2.2 Edge Consistency

Just like word graphs, we demand that hypergraphs are acyclic, i.e. ∀v, w ∈ V : v →* w ⇒ v ≠ w. In terms of edges, this corresponds to ∀e ∈ E : t(α>(e)) < t(β<(e)), i.e. every start vertex of an edge precedes every one of its end vertices.
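The accessor functions (4)-(9) and the consistency condition translate directly into code. A sketch under our own toy representation, where vertices are again represented by their frame numbers:

# Sketch of the hyperedge accessors (4)-(9) and the acyclicity check of 2.2.

def alpha_interval(starts):
    """alpha_box(e): interval covered by the start vertices."""
    return (min(starts), max(starts))

def beta_interval(ends):
    """beta_box(e): interval covered by the end vertices."""
    return (min(ends), max(ends))

def consistent(starts, ends):
    """t(alpha>(e)) < t(beta<(e)): all starts strictly before all ends."""
    return max(starts) < min(ends)

starts, ends = {10, 11, 13}, {24, 25}
print(alpha_interval(starts), beta_interval(ends), consistent(starts, ends))
# (10, 13) (24, 25) True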
2.3 Adding Edges to Hypergraphs

Adding a simple word edge to a hypergraph is a simplification of merging two hyperedges bearing the same label into a new hyperedge. Therefore we are going to explain the more general case of hyperedge merging first. We analyze which edges of a hypergraph may be merged to form a new hyperedge without loss of linguistic information. This process has to follow three main principles:

• Edge labels have to be identical
• Edge weights (scores) have to be combined to a single value
• Edges have to be compatible in their start and end vertices and must not introduce cycles into the resulting graph

Simple Rule Set for Edge Merging

Let e_1, e_2 ∈ E be two hyperedges to be checked for merging, where e_1 = (V_1, V_1′, w_1, l_1) and e_2 = (V_2, V_2′, w_2, l_2). Then e_1 and e_2 may be merged into a new hyperedge e_3 = (V_3, V_3′, w_3, l_3) iff

l(e_1) = l(e_2)   (10)
min(t(β<(e_1)), t(β<(e_2))) > max(t(α>(e_1)), t(α>(e_2)))   (11)

where e_3 is:

l_3 = l_1 (= l_2)   (12)
w_3 = scorejoin(w_1, w_2)²   (13)
V_3 = V_1 ∪ V_2   (14)
V_3′ = V_1′ ∪ V_2′   (15)

²Examples for the scorejoin operation are given later in the paragraph about score normalization.

e_1 and e_2 have to be removed from the hypergraph, while e_3 has to be inserted.

Sufficiency of the Rule Set

Why is this set of two conditions sufficient for hyperedge merging? First of all, it is clear that we can merge only hyperedges with the same label (this is prescribed by condition 10). Condition 11 determines which hyperedges may be combined and prohibits cycles from being introduced into the hypergraph. An analysis of the occurring cases shows that this condition is reasonable. Without loss of generality, we assume that t(β>(e_1)) ≤ t(β>(e_2)).

1. α□(e_1) ∩ β□(e_2) ≠ ∅ ∨ α□(e_2) ∩ β□(e_1) ≠ ∅: This is the case where either the start vertices of e_1 and the end vertices of e_2, or the start vertices of e_2 and the end vertices of e_1, overlap each other. The merge of two such hyperedges would result in a hyperedge e_3 where t(α>(e_3)) > t(β<(e_3)). This could introduce cycles into the hypergraph. So this case is excluded by condition 11.

2. α□(e_1) ∩ β□(e_2) = ∅ ∧ α□(e_2) ∩ β□(e_1) = ∅: This is the complementary case to 1.

(a) t(α<(e_2)) ≥ t(β>(e_1)): This is the case where all vertices of hyperedge e_1 occur before all vertices of hyperedge e_2, or in other words the case where two individual, independent word hypotheses with the same label occur in the word graph. This case must also not result in an edge merge, since β□(e_1) ⊆ [t(α<(e_1)), t(α>(e_2))] in the merged edge. This merge is prohibited by condition 11, since all vertices of β(e_1) would have to be smaller than all vertices of α(e_2).

(b) t(α<(e_2)) < t(β>(e_1)): This is the complementary case to (a).

i. t(α<(e_1)) ≥ t(β>(e_2)): This is only a theoretical case, because t(α<(e_1)) < t(β>(e_1)) ≤ t(β>(e_2)) is required (e_2 contains the last end vertex).

ii. t(α<(e_1)) < t(β>(e_2)): This is the complementary case to i. As a result of the empty intersections and the cases (b) and ii we get t(α>(e_1)) < t(β<(e_2)) and t(α>(e_2)) < t(β<(e_1)). That is, in other words, ∀t_α ∈ α□(e_1) ∪ α□(e_2), t_β ∈ β□(e_1) ∪ β□(e_2) : t_α < t_β, which is just the case demanded by condition 11.

After analyzing all cases of intersections between start and end vertices of two hyperedges, we turn to the insertion of word hypotheses into a hypergraph. Of course, a word hypothesis can be seen as an interval edge with trivial intervals, or as a hyperedge with only one start and one end vertex. Since this case of adding an edge to a hypergraph is rather easy to depict and is heavily used while parsing word graphs incrementally, we discuss it in more detail.

Figure 3: Cases for adding an edge to the graph

The speech decoder we use delivers word hypotheses incrementally and ordered by the time stamps of their end vertices. For practical reasons we further sort the start vertices with equal end vertex of a hypergraph by time. Under this precondition we get the cases shown in figure 3. The situation is such that e_g is a hyperedge already constructed and e_1–e_5 are candidates for insertion. It is not possible to add e_1 and e_2 to the hyperedge e_g, since they would introduce an overlap between the sets of start and end vertices of the potential new hyperedge. The resulting hyperedges of adding e_3–e_5 are depicted below.
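Continuing the earlier sketch, the merging conditions (10)-(11) and the construction (12)-(15) can be written as follows; we use the minimum of the two scores as the scorejoin operation, following the choice made in the score normalization paragraph below.

    def can_merge(e1, e2, t):
        if e1.label != e2.label:                      # condition (10)
            return False
        # condition (11): every end vertex stays later than every start vertex
        return (min(t[beta_min(e1, t)], t[beta_min(e2, t)]) >
                max(t[alpha_max(e1, t)], t[alpha_max(e2, t)]))

    def merge(e1, e2, t):
        assert can_merge(e1, e2, t)
        return HyperEdge(e1.starts | e2.starts,       # (14)
                         e1.ends | e2.ends,           # (15)
                         min(e1.weight, e2.weight),   # (13): scorejoin = min
                         e1.label)                    # (12)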
function AddEdge(G : Hypergraph, e_n : WordHypothesis) → Hypergraph
begin
[1] if ∃e_k ∈ E(G) with l(e_k) = l(e_n) ∧ t(β<(e_k)) > t(α(e_n)) then
[2]   Modify edge e_k:
      e′ := (α(e_k) ∪ {α(e_n)}, β(e_k) ∪ {β(e_n)},
             max(w(e_k), normalize(w(e_n), α(e_n), β(e_n))), l(e_k))
      return G′ := (V ∪ {α(e_n), β(e_n)}, E, W, L) with e_k replaced by e′
[3] else
[4]   Add edge e′_n:
      e′_n := ({α(e_n)}, {β(e_n)}, normalize(w(e_n), α(e_n), β(e_n)), l(e_n))
      return G′ := (V ∪ {α(e_n), β(e_n)}, E ∪ {e′_n}, W ∪ {w(e′_n)}, L ∪ {l(e′_n)})
end

Figure 4: An algorithm for adding edges to hypergraphs

Score Normalization

Score normalization is a necessary means if one wants to compare hypotheses of different lengths. Thus, edges in word graphs are assigned normalized scores that account for words of different extensions in time. The usual measure is the score per frame, which is computed by dividing the score per word by the length of the word in frames.

When combining several word edges as we do by constructing hyperedges, the combination should be assigned a single value that reflects a certain useful aspect of the originating edges. In order not to exclude certain hypotheses from consideration in score-driven language processing modules, the score of the hyperedge is inherited from the best-rated word hypothesis (cf. (Weber, 1995)). We use the minimum of the source acoustic scores, which corresponds to the highest recognition probability.

Introducing a Maximal Time Gap

The algorithm depicted in figure 4 can be sped up for practical reasons. Each vertex between the graph root and the start vertex of the new edge could be one of the start vertices of a matching hyperedge. In practice this is not needed, and we can check a smaller number of vertices. We do this by introducing a maximal time gap which determines how far (in measures of time) we look backwards from the start vertex of a new edge to be inserted into the hypergraph to find a compatible hyperedge of the hypergraph.

Additional Paths

Figure 5: Additional paths by time mapping

It is possible to introduce additional paths into a graph by performing time mapping. Consider fig. 5 as an example. Taken as a normal word graph, it contains two label sequences, namely a-c-d and b-c-e. However, if time mapping is performed for the edges labelled c, two additional sequences are introduced: a-c-e and b-c-d. Thus, time mapping by hypergraphs is not information preserving in a strong sense. For practical applications this does not present any problems. The situations in which additional label sequences are introduced are quite rare; we did not observe any linguistic difference in our experiments.

2.4 Edge Combination

Besides merging, the combination of hyperedges to construct edges with new content is an important task within any speech processing module, e.g. for parsing. The assumption we will adopt here is that two hyperedges e_1, e_2 ∈ E may be combined if they are adjacent, i.e. they share a common vertex: β(e_1) ∩ α(e_2) ≠ ∅. The label of the new edge e_n (which may be used to represent linguistic content) is determined by the component building it, whereas the start and end vertices are determined by

α(e_n) := α(e_1)   (16)
β(e_n) := β(e_2)   (17)
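A small sketch of this combination operation, again reusing the class from the earlier sketches; the weight of the new edge is left open here, since its computation is described next.

    def adjacent(e1, e2):
        # adjacency test of formula (3): a shared vertex
        return bool(e1.ends & e2.starts)

    def combine(e1, e2, label):
        # start and end vertices of the combined edge per (16) and (17)
        assert adjacent(e1, e2)
        return HyperEdge(e1.starts, e2.ends, None, label)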
This approach is quite analogous to edge combination methods for normal graphs, e.g. in chart parsing, where two edges are equally required to have a meeting point. However, score computation for hyperedge combination is more difficult. The goal is to determine a score per frame by selecting the smallest possible score under all possible meeting vertices. It is derived by examining all possible connecting vertices (all elements of I := β(e_1) ∩ α(e_2)) and computing the resulting score of the new edge: If w(e_1) < w(e_2), we use

w(e_n) := (w(e_1)·(t< − t(α>(e_1))) + w(e_2)·(t(β>(e_2)) − t<)) / (t(β>(e_2)) − t(α>(e_1)))

where t< = min{t(v) | v ∈ I}. If, on the other hand, w(e_1) > w(e_2), we use

w(e_n) := (w(e_1)·(t> − t(α<(e_1))) + w(e_2)·(t(β<(e_2)) − t>)) / (t(β<(e_2)) − t(α<(e_1)))

where t> = max{t(v) | v ∈ I}.
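A sketch of one possible implementation of this weight computation, under our reading of the two cases above; it reuses the accessor helpers from the earlier sketches and is an illustration, not a verified implementation.

    def combined_weight(e1, e2, t):
        # score per frame for a combined edge, choosing the meeting
        # vertex out of I = beta(e1) & alpha(e2) as in the two cases
        meet = e1.ends & e2.starts
        if e1.weight < e2.weight:
            tm = min(t[v] for v in meet)                       # t<
            lo, hi = t[alpha_max(e1, t)], t[beta_max(e2, t)]
        else:
            tm = max(t[v] for v in meet)                       # t>
            lo, hi = t[alpha_min(e1, t)], t[beta_min(e2, t)]
        return (e1.weight * (tm - lo) + e2.weight * (hi - tm)) / (hi - lo)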
3 Experiments with Hypergraphs

The method of converting word graphs to hypergraphs has been used in two experiments so far. One of them is devoted to the study of connectionist unification in speech applications (Weber, forthcoming). The other one, from which the performance figures in this section are drawn, is an experimental speech translation system focusing on incremental operation and uniform representation (Amtrup, 1997).

We want to show the effect of hypergraphs regarding edge reduction and parsing effort. In order to provide real-world figures, we used word graphs produced by the Hamburg speech recognition system (Huebener et al., 1996). The test data consisted of one dialogue within the Verbmobil domain. There were 41 turns with an average length of 4.65s speaking time per turn. The word graphs contained 1828 edges on the average.

Figure 6: Word edge reduction

Figure 6 shows the amount of reduction in the number of edges by converting the graphs into hypergraphs. On the average, 1671 edges were removed (mapped), leaving 157 edges in hypergraphs, approximately 91% less than the original word graphs.

Next, we used both sets of graphs (the original word graphs and hypergraphs) as input to the speech parser used in (Amtrup, 1997). This parser is an incremental active chart parser which uses a typed feature formalism to describe linguistic entities. The grammar is focussed on partial parsing and contains rules mainly for noun phrases, prepositional phrases and such. The integration of complete utterances is neglected.

Figure 7: Chart edge reduction

Figure 7 shows the reduction in terms of chart edges at completion time. The amount of reduction concerning parsing effort is much less impressive than pure edge reduction. On the average, parsing of complete graphs resulted in 15547 chart edges, while parsing of hypergraphs produced 3316 chart edges, a reduction of about 79%. Due to edge combinations, one could have expected a much higher value. The reason for this fact lies mainly with the redundancy test used in the parser. There are many instances of edges which are not inserted into the chart at all, because identical hypotheses are already present.

Consequently, the amount of reduction in parse time is within the same bounds. Parsing ordinary graphs took 87.7s, parsing of hypergraphs 6.4s, a reduction of 93%. There are some extreme cases of word graphs where hypergraph parsing was 94 times faster than word graph parsing. One of the turns had to be excluded from the test set, because it could not be fully parsed as a word graph.

Figure 8: Parsing time reduction

4 Conclusion

In this paper, we have proposed the application of hypergraph techniques to word graph parsing. Motivated by linguistic properties of spontaneously spoken speech, we argued that bundles of edges in word graphs should be treated in an integrated manner. We introduced interval graphs and directed hypergraphs as representation devices. Directed hypergraphs extend the notion of a family of edges in that they are able to represent edges having several start and end vertices.

We gave a formal definition of word graphs and the necessary extensions to cover hypergraphs. The conditions that have to be fulfilled in order to merge two hyperedges and to combine two adjacent hyperedges were stated in a formal way; an algorithm to integrate a word hypothesis into a hypergraph was presented.

We proved the applicability of our mechanisms by parsing one dialogue of real-world spoken utterances. Using hypergraphs resulted in a 91% reduction of initial edges in the graph and a 79% reduction in the total number of chart edges. Parsing hypergraphs instead of ordinary word graphs reduced the parsing time by 93%.

References

Jan W. Amtrup, Henrik Heine, and Uwe Jost. 1997. What's in a Word Graph - Evaluation and Enhancement of Word Lattices. In Proc. of Eurospeech 1997, Rhodes, Greece, September.

Jan W. Amtrup. 1997. Layered Charts for Speech Translation. In Proceedings of the Seventh International Conference on Theoretical and Methodological Issues in Machine Translation, TMI '97, Santa Fe, NM, July.

Xavier Aubert and Hermann Ney. 1995. Large Vocabulary Continuous Speech Recognition Using Word Graphs. In ICASSP 95.

Michel Gondran and Michel Minoux. 1984. Graphs and Algorithms. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons, Chichester.

Kai Huebener, Uwe Jost, and Henrik Heine. 1996. Speech Recognition for Spontaneously Spoken German Dialogs. In ICSLP 96, Philadelphia.

Martin Oerder and Hermann Ney. 1993. Word Graphs: An Efficient Interface Between Continuous-Speech Recognition and Language Understanding. In Proceedings of the 1993 IEEE International Conference on Acoustics, Speech & Signal Processing, ICASSP, pages II/119-II/122, Minneapolis, MN.

Hans Weber. 1992. Chartparsing in ASL-Nord: Berichte zu den Arbeitspaketen P1 bis P9. Technical Report ASL-TR-28-92/UER, Universität Erlangen-Nürnberg, Erlangen, December.

Hans Weber. 1995. LR-inkrementelles, probabilistisches Chartparsing von Worthypothesengraphen mit Unifikationsgrammatiken: Eine enge Kopplung von Suche und Analyse. Ph.D. thesis, Universität Hamburg.

Volker Weber. forthcoming. Funktionales Konnektionistisches Unifikationsbasiertes Parsing. Ph.D. thesis, Univ. Hamburg.
Tagging Inflective Languages: Prediction of Morphological Categories for a Rich, Structured Tagset

Jan Hajič and Barbora Hladká
Institute of Formal and Applied Linguistics
MFF UK
Charles University, Prague, Czech Republic
{hajic,hladka}@ufal.mff.cuni.cz

Abstrakt (Česky)

(This short abstract is in Czech. For illustration purposes, it has been tagged by our tagger; errors are printed underlined and corrections are shown.)

Hlavním/AAIS7----1A-- problémem/NNIS7-----A-- při/RR--6 morfologickém/AANS6----1A-- značkování/NNNS6-----A-- (/Z:----------- někdy/Db též/Db zvaném/AAI_S6----1A-- morfologicko/A2----------- -/Z:----------- syntaktické/AAIP1----1A-- )/Z:----------- jazyků/NNIP2-----A-- s/RR--7 bohatou/AAFS7----1A-- flexí/NNFS7-----A-- ,/Z:----------- jako/J, je/VB-S---3P-AA- například/Db čeština/NNFS1-----A-- nebo/J^ ruština/NNFS1-----A-- ,/Z:----------- je/VB-S---3P-AA- -/Z:----------- při/RR--6 omezené/AAFS6----1A-- velikosti/NNFS2-----A-- zdrojů/NNIP2-----A-- -/Z:----------- počet/NNIS1-----A-- možných/AAFP2----1A-- značek/NNFP2-----A-- ,/Z:----------- který/P4YS1 jde/VB-S---3P-AA- obvykle/Dg-------1A-- do/RR--2 (Correct: N) (Correct: NS) (Correct: 6) tisíců/NNIP2-----A-- ./Z:----------- Naše/PSHS1-P1 metoda/NNFS1-----A-- přitom/Db využívá/VB-S---3P-AA- exponenciálního/AAIS2----1A-- pravděpodobnostního/AAIS2----1A-- modelu/NNIS2-----A-- založeného/AAIS2----1A-- na/RR--6 automaticky/Dg-------1A-- vybraných/AA_NP6----1A-- (Correct: I) rysech/NNIP6-----A-- ./Z:----------- Parametry/NNIP1-----A-- tohoto/PDZS2 modelu/NNIS2-----A-- se/P7-X4 počítají/VB-P---3P-AA- pomocí/NNFS7-----A-- (Correct: RR--2) jednoduchých/AAIP2----1A-- odhadů/NNIP2-----A-- (/Z:----------- trénink/NNIS1-----A-- je/VB-S---3P-AA- tak/Db mnohem/Db rychlejší/AAES1----2A-- (Correct: I) ,/Z:----------- než/J, kdybychom/J,-P---1----- použili/VpMP---XR-AA- metodu/NNFS4-----A-- maximální/AAFS_4----1A-- (Correct: 2) entropie/NNFS2-----A-- )/Z:----------- a/J^----------- přitom/Db se/P7-X4 přímo/Dg-------1A-- minimalizuje/VB-S---3P-AA- počet/NNIS_4-----A-- (Correct: 1) chyb/NNFP2-----A-- ./Z:-----------

Abstract

The major obstacle in morphological (sometimes called morpho-syntactic, or extended POS) tagging of highly inflective languages, such as Czech or Russian, is - given the resources possibly available - the tagset size. Typically, it is in the order of thousands. Our method uses an exponential probabilistic model based on automatically selected features. The parameters of the model are computed using simple estimates (which makes training much faster than when one uses Maximum Entropy) to directly minimize the error rate on training data.

The results obtained so far not only show good performance on disambiguation of most of the individual morphological categories, but they also show a significant improvement on the overall prediction of the resulting combined tag over an HMM-based tag n-gram model, using even substantially less training data.

1 Introduction

1.1 Orthogonality of morphological categories of inflective languages

The major obstacle in morphological¹ tagging of highly inflective languages, such as Czech or Russian, is - given the resources possibly available - the tagset size. Typically, it is in the order of thousands. This is due to the (partial) "orthogonality"² of simple morphological categories, which then multiply when creating a "flat" list of tags.
However, the individual categories contain only a very small number of different values; e.g., number has five (Sg, Pl, Dual, Any, and "not applicable"), case nine, etc.

The "orthogonality" should not be taken to mean complete independence, though. Inflectional languages (as opposed to agglutinative languages such as Finnish or Hungarian) typically combine several categories into one morpheme (suffix or ending). At the same time, the morphemes display a high degree of ambiguity, even across major POS categories.

For example, most of the Czech nouns can form singular and plural forms in all seven cases, most adjectives can (at least potentially) form all (4) genders, both numbers, all (7) cases, all (3) degrees of comparison, and can be either of positive or negative polarity. That gives 336 possibilities (for adjectives), many of them homonymous on the surface. On the other hand, pronouns and numerals do not display such an orthogonality, and even adjectives are not fully orthogonal - an ancient "dual" number, happily living in modern Czech in the feminine, plural and instrumental case, adds another 6 sub-orthogonal possibilities to almost every adjective. Together, we employ 3127 plausible combinations (including style and diachronic variants).

¹This type of tagging is sometimes called morpho-syntactic tagging. However, to stress that we are not dealing with syntactic categories such as Object or Attribute (but rather with morphological categories such as Number or Case) we will use the term "morphological" here.

²By orthogonality we mean that all combinations of values of two (or more) categories are systematically possible, i.e. that every member of the cartesian product of the two (or more) sets of values does appear in the language.

1.2 The individual categories

There are 13 morphological categories currently used for morphological tagging of Czech: part of speech, detailed POS (called "subpart of speech"), gender, number, case, possessor's gender, possessor's number, person, tense, degree of comparison, negativeness (affirmative/negative), voice (active/passive), and variant/register.

The POS category contains only the major part of speech values (noun (N), verb (V), adjective (A), pronoun (P), adverb (D), numeral (C), preposition (R), conjunction (J), interjection (I), particle (T), punctuation (Z), and "undefined" (X)). The "subpart of speech" (SUBPOS) contains details about the major category and has 75 different values. For example, verbs (POS: V) are divided into simple finite form in present or future tense (B), conditional (c), infinitive (f), imperative (i), etc.³

³The categories POS and SUBPOS are the only two categories which are rather lexically (and not inflectionally) based.

All the categories vary in their size as well as in their unigram entropy (see Table 1), computed using the standard entropy definition

H_p = − Σ_{y∈Y} p(y) log(p(y))   (1)

where p is the unigram distribution estimate based on the training data, and Y is the set of possible values of the category in question. This formula can be rewritten as

H_{p,D} = − (1/|D|) Σ_{i=1}^{|D|} log(p(y_i))   (2)

where p is the unigram distribution, D is the data and |D| its size, and y_i is the value of the category in question at the i-th event (or position) in the data. The form (2) is usually used for cross-entropy computation on data (such as test data) different from those used for estimating p. The base of the log function is always taken to be 2.
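As an illustration, a few lines of Python suffice to compute the unigram entropy of formula (1) for one category; the toy data column below is invented.

    import math
    from collections import Counter

    def unigram_entropy(values):
        # H_p of formula (1), with p estimated from relative frequencies
        counts = Counter(values)
        n = sum(counts.values())
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    # e.g. entropy of the CASE subtag over a toy column of training data
    print(unigram_entropy(["1", "4", "4", "6", "-", "-"]))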
1.3 The morphological analyzer Given the nature of inflectional languages, which can generate many (sometimes thousands of) forms for a given lemma (or "dictionary entry"), it is necessary to employ morphological analysis before the tagging proper. In Czech, there are as many as 5 differ- ent lemmas (not counting underlying derivations nor 3The categories POS and SUBPOS are the only two categories which are rather lexically (and not inflectionally) based. 484 Table h Most Difficult Individual Morphological Categories Category POS SUBPOS GENDER NUMBER CASE POSSGENDER POSSNUMBER PERSON TENSE GRADE NEGATION VOICE VAR Number of values 12 75 11 6 9 5 3 5 6 4 3 3 10 Unigram entropy Hp (in bits) 2.99 3.83 2.05 1.62 2.24 0.04 0.04 0.64 0.55 0.55 1.07 0.45 0.07 word senses) and up to 108 different tags for an in- put word form. The morphological analyzer used for this purpose (Hajji, in prep.), (Haji~, 1994) covers about 98% of running unrestricted text (newspaper, magazines, novels, etc.). It is based on a lexicon containing about 228,000 lemmas and it can analyze about 20,000,000 word forms. 2 The Training Data Our training data consists of about 130,000 tokens of newspaper and magazine text, manually double- tagged and then corrected by a single judge. Our training data consists of about 130,000 tokens of newspaper and magazine text, manually tagged using a special-purpose tool which allows for easy disambiguation of morphological output. The data has been tagged twice, with manual resolution of discrepancies (the discrepancy rate being about 5%, most of them being simple tagging errors rather than opinion differences). One data item contains several fields: the input word form (token), the disambiguated tag, the set of all possible tags for the input word form, the disam- biguated lemma, and the set of all possible lemmas with links to their possible tags. Out of these, we are currently interested in the form, its possible tags and the disambiguated tag. The lemmas are ignored for tagging purposes. 4 The tag from the "disambiguated tag" field as well as the tags from the "possible tags" field are further divided into so called subtags (by morpho- logical category). In the set "possible tags field", 4In fact, tagging helps in most cases to disambiguate the lemmas. Lemma disambiguation is a separate process follow- ing tagging. The lemma disambiguation is a much simpler problem - the average number of different lemmas per token (as output by the morphological analyzer) is only 1.15. We do not cover the lemma disambiguation procedure here. ~--s ........ IRIRI-I-1461-1-1-1-1-1-I-I-IIoa AAIS6 .... tA N I AIAIIMNISlSI-I-I-I-I t/A/-/-/Ipoetta,"ov&~ milS6 . . . . . A--lNINII/S12361-/-I-I-I-IAl-I-/Imodelu z: ........... [Zl :l-l-l-l-l-l-l-l-l-l-l-l] , P4YS1 ........ [P/4/I¥/S/14/-/-/-/-/-/-/-/-/]kZ,r~ VpYS---IR-A A-lV/p/Y/S/-/-/-II/P,I-/A/-/-/lsi~uloval ~IS4 ..... A--[N/N/I/S/14/-/-/-/-/-/A/-/-/[v~rvoj AANS2 .... IA--[A/A/IMN/S/24/-/-/-/-/i/A/-/-/Isv~zov4ho h~NS2 ..... A-- [N/N/N/S/236/-/-/-/-/-/A/-/-/]kllma~u ]~--8 ........ I~IRI-1-1461-I-I-I-I-I-I-I-311 v AAIm8 .... IA--IAIAIFI~IP1281-1-1-1-111Al-l-llP~i~tlch IaWIP6 ..... A--INININIPlSl-l-l-l-l-lAl-l-lldea,tiletlch Figure 1: Training Data: lit: on computer(adj.) model, which was-simulating development of-world climate in next decades the ambiguity on the level of full (combined) tags is mapped onto so called "ambiguity classes" (AC-s) of subtags. 
This mapping is generally not reversible, which means that the links across categories might not be preserved. For example, the word form jen, for which the morphology generates three possible tags, namely TT----------- (particle "only"), and NNIS1-----A-- and NNIS4-----A-- (noun, masc. inanimate, singular, nominative (1) or accusative (4) case; "yen" (the Japanese currency)), will be assigned six ambiguous ambiguity classes (NT, NT, -I, -S, -14, -A, for POS, subpart of speech, gender, number, case, and negation) and 7 unambiguous ambiguity classes (all -). An example of the training data is presented in Fig. 1. It contains three columns, separated by the vertical bar (|):

1. the "truth" (the correct tag, i.e. a sequence of 13 subtags, each represented by a single character, which is the true value for each individual category in the order defined in Fig. 1 (1st column: POS, 2nd: SUBPOS, etc.);

2. the 13-tuple of ambiguity classes, separated by a slash (/), in the same order; each ambiguity class is named using the single-character subtags used for all the possible values of that category;

3. the original word form.

Please note that it is customary to number the seven grammatical cases in Czech (instead of naming them): "nominative" gets 1, "genitive" 2, etc. There are four genders, as the Czech masculine gender is divided into masculine animate (M) and inanimate (I).

Fig. 1 is a typical example of the ambiguities encountered in a running text: little POS ambiguity, but a lot of gender, number and case ambiguity (columns 3 to 5).

3 The Model

Instead of employing the source-channel paradigm for tagging (more or less explicitly present e.g. in (Merialdo, 1992), (Church, 1988), (Hajič, Hladká, 1997)) used in the past (notwithstanding some exceptions, such as Maximum Entropy and rule-based taggers), we are using here a "direct" approach to modeling, for which we have chosen an exponential probabilistic model. Such a model (when predicting an event⁵ y ∈ Y in a context x) has the general form

p_{AC,e}(y|x) = exp(Σ_{i=1}^{n} λ_i f_i(y,x)) / Z(x)   (3)

where f_i(y,x) is the set (of size n) of binary-valued (yes/no) features of the event value being predicted and its context, λ_i is a "weight" (in the exponential sense) of the feature f_i, and the normalization factor Z(x) is defined naturally as

Z(x) = Σ_{y∈Y} exp(Σ_{i=1}^{n} λ_i f_i(y,x))   (4)

⁵a subtag, i.e. (in our case) the unique value of a morphological category.

We use a separate model for each ambiguity class AC (which actually appeared in the training data) of each of the 13 morphological categories.⁶ The final p_{AC}(y|x) distribution is further smoothed using unigram distributions on subtags (again, separately for each category):

p_{AC}(y|x) = α p_{AC,e}(y|x) + (1 − α) p_{AC,1}(y)   (5)

⁶Every category is, of course, treated separately. It means that e.g. the ambiguity class 23 for category CASE (meaning that there is an ambiguity between genitive and dative cases) is different from ambiguity class 23 for category GRADE or PERSON.

Such smoothing takes care of any unseen context; for ambiguity classes not seen in the training data, for which there is no model, we use unigram probabilities of subtags, one distribution per category.

In the general case, features can operate on any imaginable context (such as the speed of the wind over Mt. Washington, the last word of yesterday's TV news, or the absence of a noun in the next 1000 words, etc.). In practice, we view the context as a set of attribute-value pairs with a discrete range of values (from now on, we will use the word "context" for such a set). Every feature can thus be represented by a set of contexts in which it is positive. There is, of course, also a distinguished attribute for the value of the variable being predicted (y); the rest of the attributes is denoted by x as expected.
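The model (3)-(5) can be sketched in a few lines of Python; the feature functions, weights and interpolation weight are placeholders, and a = 0.9 is an arbitrary illustrative value, not the paper's setting.

    import math

    def p_exp(y, x, features, lambdas, Y):
        # p_{AC,e}(y|x) of formula (3); features are binary functions
        # f_i(y, x), lambdas their weights, Y the values of class AC
        def u(y_):
            return math.exp(sum(l for f, l in zip(features, lambdas)
                                if f(y_, x)))
        return u(y) / sum(u(y_) for y_ in Y)   # Z(x), formula (4)

    def p_smoothed(y, x, features, lambdas, Y, unigram, a=0.9):
        # formula (5): interpolation with the unigram distribution
        return a * p_exp(y, x, features, lambdas, Y) + (1 - a) * unigram[y]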
Values of attributes will be denoted by an overstrike (ȳ, x̄). The pool of contexts of prospective features is, for the purpose of morphological tagging, defined as the full cross-product of the category being predicted (y) and of the x specified as a combination of:

1. an ambiguity class of a single category, which may be different from the category being predicted, or
2. a word form

and

1. the current position, or
2. immediately preceding (following) position in text, or
3. closest preceding (following) position (up to four positions away) having a certain ambiguity class in the POS category.

Let now Categories = {POS, SUBPOS, GENDER, NUMBER, CASE, POSSGENDER, POSSNUMBER, PERSON, TENSE, GRADE, NEGATION, VOICE, VAR}; then the feature function

f_{Cat_AC,ȳ,x̄}(y,x) → {0,1} is well-defined iff ȳ ∈ Cat_AC   (6)

where Cat ∈ Categories and Cat_AC is the ambiguity class AC (such as AN, for adjective/noun ambiguity of the part of speech category) of a morphological category Cat (such as POS). For example, the function f_{POS_AN,A,x̄} is well-defined (A ∈ {A,N}), whereas the function f_{CASE_145,6,x̄} is not (6 ∉ {1,4,5}). We will introduce the notation of the context part in the examples of feature value computation below. The indexes may be omitted if it is clear what category, ambiguity class, value of the category being predicted and/or context the feature belongs to.

The value of a well-defined feature⁷ function f_{Cat_AC,ȳ,x̄}(y,x) is determined by

f_{Cat_AC,ȳ,x̄}(y,x) = 1 ⟺ ȳ = y ∧ x̄ ⊆ x.   (7)

⁷From now on, we will assume that all features are well-defined.

This definition excludes features which are positive for more than one y in any context x. This property will be used later in the feature selection algorithm.
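A sketch of such an indicator feature in Python, with contexts represented as sets of attribute-value pairs; the attribute names used here are our own invention.

    def make_feature(ybar, xbar):
        # f_{ybar,xbar} of formula (7): fires iff the predicted subtag
        # is ybar and every attribute-value pair of xbar occurs in x
        def f(y, x):
            return 1 if y == ybar and xbar <= x else 0
        return f

    # hypothetical context: previous POS class A, previous CASE class 145
    f = make_feature("1", {("POS-1", "A"), ("CASE-1", "145")})
    print(f("1", {("POS-1", "A"), ("CASE-1", "145"), ("FORM+1", "boj")}))  # 1
    print(f("4", {("POS-1", "A"), ("CASE-1", "145")}))                     # 0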
Using ambiguity classes instead of unique values of morphological categories for evaluating the (con- text part of the) features has the advantage of giv- ing us the possibility to avoid Viterbi search during tagging. This then allows to easily add lookahead (right) context. 8 There is no "forced relationship" among categories of the same tag. Instead, the model is allowed to learn also from the same-position "context" of the subtag being predicted. However, when using the model for tagging one can choose between two modes of operation: separate, which is the same mode used when training as described herein, and VTC (Valid Tag Combinations) method, which does not allow for impossible combinations of categories. See Sect. 5 for more details and for the impact on the tagging accuracy. 4 Training 4.1 Feature Weights The usual method for computing the feature weights (the Ai parameters) is Maximum Entropy (Berger 8It remains to be seen whether using the unique values - at least for the left context - and employing Viterbi would help. The results obtained so far suggest that probably not much, and if yes, then it would restrict the number of features selected rather than increase tagging accuracy. & al., 1996). This method is generally slow, as it requires lot of computing power. Based on our experience with tagging as well as with other projects involving statistical modeling, we assume that actually the weights are much less important than the features themselves. We therefore employ very simple weight estima- tion. It is based on the ratio of conditional proba- bility of y in the context defined by the feature fy,~ and the uniform distribution for the ambiguity class AC. 4.2 Feature Selection The usual guiding principle for selecting features of exponential models is the Maximum Likelihood prin- ciple, i.e. the probability of the training data is being maximized. (or the cross-entropy of the model and the training data is being minimized, which is the same thing). Even though we are eventually inter- ested in the final error rate of the resulting model, this might be the only solution in the usual source- channel setting where two independent models (a language model and a "translation" model of some sort - acoustic, real translation etc.) are being used. The improvement of one model influences the error rate of the combined model only indirectly. This is not the case of tagging. Tagging can be seen as a "final application" problem for which we assume to have enough data at hand to train and use just one model, abandoning the source-channel paradigm. We have therefore used the error rate directly as the objective function which we try to minimize when selecting the model's features. This idea is not new, but as far as we know it has been implemented in rule-based taggers and parsers, such as (Brill, 1993a), (Brill, 1993b), (Brill, 1993c) and (Ribarov, 1996), but not in models based on proba- bility distributions. Let's define the set of contexts of a set of features: X(F) = {Z: 3~ Bf~,-~ 6 F}, (s) where F is some set of features of interest. The features can therefore be grouped together based on the context they operate on. In the cur- rent implementation, we actually add features in "batches". A "batch" of features is defined as a set of features which share the same context Z (see the definition below). Computationaly, adding features in batches is relatively cheap both time- and space- wise. 
For example, the features f_{POS_NV,N,(POS_{-1}=A,CASE_{-1}=145)} and f_{POS_NV,V,(POS_{-1}=A,CASE_{-1}=145)} share the context (POS_{-1} = A, CASE_{-1} = 145).

Let further

• F_AC be the pool of features available for selection,
• S_AC be the set of features selected so far for a model for ambiguity class AC,
• p_{S_AC}(y|d) the probability, using model (3-5) with features S_AC, of subtag y in a context defined by position d in the training data, and
• F_{AC,x̄} be the set ("batch") of features sharing the same context x̄, i.e.

F_{AC,x̄} = {f_{ȳ,x̄′} ∈ F_AC : x̄′ = x̄}   (9)

Note that the size of AC is equal to the size of any batch of features (|AC| = |F_{AC,x̄}| for any x̄). The selection process then proceeds as follows:

1. For all contexts x̄ ∈ X(F_AC) do the following:

2. For all features f = f_{ȳ,x̄} ∈ F_{AC,x̄} compute their associated weights λ_f as the log of the ratio of the conditional probability (11) to the uniform distribution over AC:

λ_f = log(|AC| · p̃_{x̄}(ȳ))   (10)

where

p̃_{x̄}(ȳ) = Σ_{d=1}^{|D|} f_{ȳ,x̄}(y_d, x_d) / Σ_{d=1}^{|D|} Σ_{f′∈F_{AC,x̄}} f′(y_d, x_d)   (11)

3. Compute the error rate of the training data by going through it and at each position d selecting the best subtag by maximizing p_{S_AC ∪ F_{AC,x̄}}(y|d) over all y ∈ AC.

4. Select such a feature set F_{AC,x̄} which results in the maximal improvement in the error rate of the training data and add all f ∈ F_{AC,x̄} permanently to S_AC; with S_AC now extended, start from the beginning (unless the termination condition is met).

5. Termination condition: improvement in error rate smaller than a preset minimum.

The probability defined by the formula (11) can easily be computed despite its ugly general form, as the denominator is in fact the number of (positive) occurrences of all the features from the batch defined by the context x̄ in the training data. It also helps if the underlying ambiguity class AC is found only in a fraction of the training data, which is typically the case. Also, the size of the batch (equal to |AC|) is usually very small.

On top of rather roughly estimating the λ_f parameters, we use another implementation shortcut here: we do not necessarily compute the best batch of features in each iteration, but rather add all (batches of) features which improve the error rate by more than a threshold δ. This threshold is set to half the number of data items which contain the ambiguity class AC at the beginning of the loop, and then is cut in half at every iteration. The positive consequence of this shortcut (which certainly adds some unnecessary features) is that the number of iterations is much smaller than if the maximum were regularly computed at each iteration.
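The selection loop, including the threshold shortcut, might look as follows in Python; evaluate is a placeholder for the error-rate computation of step 3, and the integer threshold handling is our simplification of the halving schedule described above.

    def select_features(pool, evaluate, threshold):
        # pool: context -> batch of features; evaluate(selected)
        # returns the number of tagging errors on the training data
        selected, remaining = [], dict(pool)
        errors = evaluate(selected)
        while threshold >= 1 and remaining:
            for xbar, batch in list(remaining.items()):
                new_errors = evaluate(selected + batch)
                if errors - new_errors > threshold:
                    selected += batch      # keep the whole batch
                    errors = new_errors
                    del remaining[xbar]
            threshold //= 2                # halve the threshold
        return selected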
Table 2 presents the initial error rates for the indi- vidual categories computed using only the smooth- ing part of the model (n = 0 in equation 3). Training took slightly under 20 hours on a Linux- powered Pentium 90, with feature adding threshold set to 4 (which means that a feature batch was not added if it improved the absolute error rate on train- ing data by 4 errors or less). 840 (batches) of fea- tures (which corresponds to about 2000 fully spec- ified features) have been learned. The tagging it- self is (contrary to training) very fast. The average speed is about 300 words/sec, on morphologically prepared data on the same machine. The results are summarized in Table 3. There is no apparent overtraining yet. However, it does appear when the threshold is lowered (we have tested that on a smaller set of training data consisting of 35,000 words: overtraining started to occur when the threshold was down to 2-3). Table 4 contains comparison of the results 488 Category POS SUBPOS GENDER NUMBER CASE POSSGENDER POSSNUMBER PERSON TENSE GRADE NEGATION VOICE VAR Overall training data test data 1.10 1.06 6.35 5.34 14.55 0.05 0.13 0.28 0.36 0.48 1.33 0.40 0.30 22.18 2.1 1.1 6.1 4.2 14.5 0.0 0.1 0.0 0.1 0.3 1.0 0.1 0.3 20.7 Table 2: Initial Error Rate Category POS SUBPOS GENDER NUMBER CASE POSSGENDER POSSNUMBER PERSON TENSE GRADE NEGATION VOICE VAR Overall training data test data 0.02 0.49 1.78 2.73 6.01 0.04 0.01 0.12 0.12 0.11 0.25 0.11 0.10 8.75 0.9 1.0 2.0 0.9 5.0 0.0 0.0 0.0 0.1 0.1 0.0 0.0 0.2 8.0 Table 3: Resulting Error Rate achieved with the previous experiments on Czech tagging (Hajji, HladkA, 1997). It shows that we got more than 50% improvement on the best error rate achieved so far. Also the amount of training data used was lower than needed for the HMM ex- periments. We have also performed an experiment using 35,000 training words which yielded by about 4% worse results (88% combined tag accuracy). Finally, Table 5 compares results (given differ- Experiment Unigram HMM Rule-based (Brill's) Trigram HMM Bigram HMM Exponential Exponential Exponential, VTC training data size 621,015 37,892 621,015 621,015 35,000 130,000 160,000 best error rate (in %) 34.30 20.25 18.86 18.46 12.00 8.00 6.20 Table 4: Comparing Various Methods ent training thresholds 9) obtained on larger train- ing data using the "separate" prediction method dis- cussed so far with results obtained through a mod- ification, the key point of which is that it considers only "Valid (sub)Tag Combinations (VTC)'. The probability of a tag is computed as a simple product of subtag probabilities (normalized), thus assuming subtag independence. The "winner" is presented in boldface. As expected, the overall error rate is al- ways better using the VTC method, but some of the subtags are (sometimes) better predicted using the "separate" prediction method l°. This could have important practical consequences - if, for example, the POS or SUBPOS is all that's interesting. 6 Conclusion and Further Research The combined error rate results are still far below the results reported for English, but we believe that there is still room for improvement. Moreover, split- ting the tags into subtags showed that "pure" part of speech (as well as the even more detailed "subpart" of speech) tagging gives actually better results than those for English. 
We see several ways to proceed to possibly improve the performance of the tagger (we are still talking here about the "single best tag" approach; the n-best case will be explored separately):

• Disambiguated tags (in the left context) plus Viterbi search. Some errors might be eliminated if features asking questions about the disambiguated context are being used. The disambiguated tags concentrate - or transfer - information about the more distant context. It would avoid "repeated" learning of the same or similar features for different but related disambiguation problems. The final effect on the overall accuracy is yet to be seen. Moreover, the transition function assumed by the Viterbi algorithm must be reasonably defined (approximated).

• Final re-estimation using maximum entropy. Let's imagine that after selecting all the features using the training method described here we recompute the feature weights using the usual maximum entropy objective function. This will produce better (read: more principled) weight estimates for the features already selected, but it might help as well as hurt the performance.

• Improved feature pool. This is, in our opinion, the source of major improvement. The error analysis shows that in many cases the context to be used for disambiguation has not been used by the tagger simply because more sophisticated features have not been considered for selection. An example of such a feature, which would possibly help to solve the very hard and relatively frequent problem of disambiguating between nominative and accusative cases of certain nouns, would be the question "Is there a noun in nominative case only in the same clause?" - every clause may usually have only one noun phrase in nominative, constituting its subject. For such a feature to work we will have to correctly determine or at least approximate the clause boundaries, which is obviously a non-trivial task by itself.

⁹No overtraining occurred here either, but the results for thresholds 2-4 do not differ significantly.

¹⁰For English, using the Penn Treebank data, we have always obtained better accuracy using the VTC method (and redefinition of the tag set based on 4 categories).

Threshold:         128           16            8             4             2
Features learned:  23            213           772           1529          4571
Category           Sep    VTC    Sep    VTC    Sep    VTC    Sep    VTC    Sep    VTC
POS                1.50   1.32   0.86   0.78   0.66   0.60   0.44   0.42   0.36   0.44
SUBPOS             1.24   1.40   0.78   0.84   0.70   0.64   0.36   0.48   0.30   0.48
GENDER             4.50   4.06   3.00   2.80   2.40   2.14   2.14   1.80   2.08   1.90
NUMBER             3.46   2.94   2.62   2.40   1.86   1.72   1.72   1.56   1.80   1.50
CASE               11.10  10.52  7.74   7.66   5.30   5.34   4.82   4.80   4.88   4.84
POSSGENDER         0.08   0.10   0.08   0.12   0.08   0.04   0.04   0.06   0.02   0.04
POSSNUMBER         0.14   0.04   0.04   0.04   0.04   0.00   0.02   0.02   0.00   0.00
PERSON             0.28   0.18   0.14   0.16   0.16   0.10   0.14   0.12   0.12   0.06
TENSE              0.36   0.18   0.16   0.14   0.10   0.12   0.10   0.12   0.10   0.08
GRADE              0.88   1.00   0.70   0.30   0.44   0.30   0.22   0.18   0.22   0.16
NEGATION           0.62   0.26   0.34   0.36   0.28   0.26   0.24   0.24   0.26   0.24
VOICE              0.38   0.18   0.16   0.14   0.10   0.12   0.10   0.12   0.08   0.08
VAR                0.26   0.18   0.24   0.22   0.14   0.14   0.12   0.14   0.12   0.04
Overall            16.50  13.22  12.20  9.58   8.42   6.98   7.62   6.22   7.66   6.20

Table 5: Resulting Error Rate in % (newspaper, training size: 160,000, test size: 5000 tokens)

7 Acknowledgements

Various parts of this work have been supported by the following grants: Open Foundation RSS/HESP 195/1995, Grant Agency of the Czech Republic (GAČR) 405/96/K214, and Ministry of Education Project No. VS96151.
The authors would also like to thank Fred Jelinek of CLSP JHU Baltimore for valuable comments and suggestions which helped to improve this paper a lot.

References

Adam Berger, Stephen Della Pietra, Vincent Della Pietra. 1996. Maximum Entropy Approach. In Computational Linguistics, vol. 3, MIT Press, Cambridge, MA.

Eric Brill. 1993a. A Corpus Based Approach To Language Learning. PhD Dissertation, Department of Computer and Information Science, University of Pennsylvania.

Eric Brill. 1993b. Automatic grammar induction and parsing free text: A Transformation-Based Approach. In: Proceedings of the 3rd International Workshop on Parsing Technologies, Tilburg, The Netherlands.

Eric Brill. 1993c. Transformation-Based Error-Driven Parsing. In: Proceedings of the Twelfth National Conference on Artificial Intelligence.

Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143, Austin, Texas. Association for Computational Linguistics, Morristown, New Jersey.

Jan Hajič. 1994. Unification Morphology Grammar. PhD Dissertation. MFF UK, Charles University, Prague.

Jan Hajič. In prep. Automatic Processing of Czech: between Morphology and Syntax. MFF UK, Charles University, Prague.

Jan Hajič, Barbora Hladká. 1997. Tagging of Inflective Languages: a Comparison. In Proceedings of the ANLP'97, pages 136-143, Washington, DC. Association for Computational Linguistics, Morristown, New Jersey.

Barbora Hladká. 1994. Programové vybavení pro zpracování velkých českých textových korpusů. MSc Thesis, Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic.

Bernard Merialdo. 1992. Tagging Text With A Probabilistic Model. Computational Linguistics, 20(2):155-171.

Kiril Ribarov. 1996. Automatická tvorba gramatiky přirozeného jazyka. MSc Thesis, Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic. In Czech.
Improving Data Driven Wordclass Tagging by System Combination

Hans van Halteren
Dept. of Language and Speech
University of Nijmegen
P.O. Box 9103
6500 HD Nijmegen
The Netherlands
[email protected]

Jakub Zavrel, Walter Daelemans
Dept. of Computational Linguistics
Tilburg University
P.O. Box 90153
5000 LE Tilburg
The Netherlands
[email protected], [email protected]

Abstract

In this paper we examine how the differences in modelling between different data driven systems performing the same NLP task can be exploited to yield a higher accuracy than the best individual system. We do this by means of an experiment involving the task of morpho-syntactic wordclass tagging. Four well-known tagger generators (Hidden Markov Model, Memory-Based, Transformation Rules and Maximum Entropy) are trained on the same corpus data. After comparison, their outputs are combined using several voting strategies and second stage classifiers. All combination taggers outperform their best component, with the best combination showing a 19.1% lower error rate than the best individual tagger.

Introduction

In all Natural Language Processing (NLP) systems, we find one or more language models which are used to predict, classify and/or interpret language related observations. Traditionally, these models were categorized as either rule-based/symbolic or corpus-based/probabilistic. Recent work (e.g. Brill 1992) has demonstrated clearly that this categorization is in fact a mix-up of two distinct categorization systems: on the one hand there is the representation used for the language model (rules, Markov model, neural net, case base, etc.) and on the other hand the manner in which the model is constructed (hand crafted vs. data driven).

Data driven methods appear to be the more popular. This can be explained by the fact that, in general, hand crafting an explicit model is rather difficult, especially since what is being modelled, natural language, is not (yet) well-understood. When a data driven method is used, a model is automatically learned from the implicit structure of an annotated training corpus. This is much easier and can quickly lead to a model which produces results with a 'reasonably' good quality.

Obviously, 'reasonably good quality' is not the ultimate goal. Unfortunately, the quality that can be reached for a given task is limited, and not merely by the potential of the learning method used. Other limiting factors are the power of the hard- and software used to implement the learning method and the availability of training material. Because of these limitations, we find that for most tasks we are (at any point in time) faced with a ceiling to the quality that can be reached with any (then) available machine learning system. However, the fact that any given system cannot go beyond this ceiling does not mean that machine learning as a whole is similarly limited. A potential loophole is that each type of learning method brings its own 'inductive bias' to the task and therefore different methods will tend to produce different errors. In this paper, we are concerned with the question whether these differences between models can indeed be exploited to yield a data driven model with superior performance.

In the machine learning literature this approach is known as ensemble, stacked, or combined classifiers. It has been shown that, when the errors are uncorrelated to a sufficient degree, the resulting combined classifier will often perform better than all the individual systems (Ali and Pazzani 1996; Chan and Stolfo 1995; Tumer and Gosh 1996). The underlying assumption is twofold. First, the combined votes will make the system more robust to the quirks of each learner's particular bias.
It has been shown that, when the errors are uncorrelated to a sufficient degree, the resulting combined classifier will often per- form better than all the individual systems (Ali and Pazzani 1996; Chan and Stolfo 1995; Tumer and Gosh 1996). The underlying assumption is twofold. First, the combined votes will make the system more robust to the quirks of each learner's particular bias. Also, the use of infor- mation about each individual method's behav- iour in principle even admits the possibility to 491 fix collective errors. We will execute our investigation by means of an experiment. The NLP task used in the experiment is morpho-syntactic wordclass tag- ging. The reasons for this choice are several. First of all, tagging is a widely researched and well-understood task (cf. van Halteren (ed.) 1998). Second, current performance levels on this task still leave room for improvement: 'state of the art' performance for data driven au- tomatic wordclass taggers (tagging English text with single tags from a low detail tagset) is 96- 97% correctly tagged words. Finally, a number of rather different methods are available that generate a fully functional tagging system from annotated text. 1 Component taggers In 1992, van Halteren combined a number of taggers by way of a straightforward majority vote (cf. van Halteren 1996). Since the compo- nent taggers all used n-gram statistics to model context probabilities and the knowledge repre- sentation was hence fundamentally the same in each component, the results were limited. Now there are more varied systems available, a va- riety which we hope will lead to better com- bination effects. For this experiment we have selected four systems, primarily on the basis of availability. Each of these uses different features of the text to be tagged, and each has a com- pletely different representation of the language model. The first and oldest system uses a tradi- tional trig-ram model (Steetskamp 1995; hence- forth tagger T, for Trigrams), based on context statistics P(ti[ti-l,ti-2) and lexical statistics P(tilwi) directly estimated from relative cor- pus frequencies. The Viterbi algorithm is used to determine the most probable tag sequence. Since this model has no facilities for handling unknown words, a Memory-Based system (see below) is used to propose distributions of po- tential tags for words not in the lexicon. The second system is the Transformation Based Learning system as described by Brill (19941; henceforth tagger R, for Rules). This 1 Brill's system is available as a collec- tion of C programs and Perl scripts at ftp ://ftp. cs. j hu. edu/pub/brill/Programs/ RULE_BASED_TAGGER_V. I. 14. tar. Z system starts with a basic corpus annotation (each word is tagged with its most likely tag) and then searches through a space of transfor- mation rules in order to reduce the discrepancy between its current annotation and the correct one (in our case 528 rules were learned). Dur- ing tagging these rules are applied in sequence to new text. Of all the four systems, this one has access to the most information: contextual information (the words and tags in a window spanning three positions before and after the focus word) as well as lexical information (the existence of words formed by suffix/prefix addi- tion/deletion). However, the actual use of this information is severely limited in that the indi- vidual information items can only be combined according to the patterns laid down in the rule templates. 
The third system uses Memory-Based Learning as described by Daelemans et al. (1996; henceforth tagger M, for Memory). During the training phase, cases containing information about the word, the context and the correct tag are stored in memory. During tagging, the case most similar to that of the focus word is retrieved from the memory, which is indexed on the basis of the Information Gain of each feature, and the accompanying tag is selected. The system used here has access to information about the focus word and the two positions before and after, at least for known words. For unknown words, the single position before and after, three suffix letters, and information about capitalization and presence of a hyphen or a digit are used.

The fourth and final system is the MXPOST system as described by Ratnaparkhi (1996²; henceforth tagger E, for Entropy). It uses a number of word and context features rather similar to system M, and trains a Maximum Entropy model that assigns a weighting parameter to each feature-value and combination of features that is relevant to the estimation of the probability P(tag|features). A beam search is then used to find the highest probability tag sequence. Both this system and Brill's system are used with the default settings that are suggested in their documentation.

²Ratnaparkhi's Java implementation of this system is available at ftp://ftp.cis.upenn.edu/pub/adwait/jmx/

2 The data

The data we use for our experiment consists of the tagged LOB corpus (Johansson 1986). The corpus comprises about one million words, divided over 500 samples of 2000 words from 15 text types. Its tagging, which was manually checked and corrected, is generally accepted to be quite accurate. Here we use a slight adaptation of the tagset. The changes are mainly cosmetic, e.g. non-alphabetic characters such as "$" in tag names have been replaced. However, there has also been some retokenization: genitive markers have been split off and the negative marker "n't" has been reattached. An example sentence tagged with the resulting tagset is:

The            ATI   singular or plural article
Lord           NPT   singular titular noun
Major          NPT   singular titular noun
extended       VBD   past tense of verb
an             AT    singular article
invitation     NN    singular common noun
to             IN    preposition
all            ABN   pre-quantifier
the            ATI   singular or plural article
parliamentary  JJ    adjective
candidates     NNS   plural common noun
.              SPER  period

The tagset consists of 170 different tags (including ditto tags³) and has an average ambiguity of 2.69 tags per wordform. The difficulty of the tagging task can be judged by the two baseline measurements in Table 2 below, representing a completely random choice from the potential tags for each token (Random) and selection of the lexically most likely tag (LexProb).

³Ditto tags are used for the components of multi-token units, e.g. if "as well as" is taken to be a coordination conjunction, it is tagged "as_CC-1 well_CC-2 as_CC-3", using three related but different ditto tags.

For our experiment, we divide the corpus into three parts. The first part, called Train, consists of 80% of the data (931062 tokens), constructed by taking the first eight utterances of every ten. This part is used to train the individual taggers. The second part, Tune, consists of 10% of the data (every ninth utterance, 114479 tokens) and is used to select the best tagger parameters where applicable and to develop the combination methods.
The third and final part, Test, consists of the remaining 10% (115101 tokens) and is used for the final performance measurements of all taggers. Both Tune and Test contain around 2.5% new tokens (wrt Train) and a further 0.2% known tokens with new tags.

The data in Train (for individual taggers) and Tune (for combination taggers) is to be the only information used in tagger construction: all components of all taggers (lexicon, context statistics, etc.) are to be entirely data driven and no manual adjustments are to be done. The data in Test is never to be inspected in detail but only used as a benchmark tagging for quality measurement.⁴

⁴This implies that it is impossible to note if errors counted against a tagger are in fact errors in the benchmark tagging. We accept that we are measuring quality in relation to a specific tagging rather than the linguistic truth (if such exists) and can only hope the tagged LOB corpus lives up to its reputation.

3 Potential for improvement

In order to see whether combination of the component taggers is likely to lead to improvements of tagging quality, we first examine the results of the individual taggers when applied to Tune. As far as we know, this is also one of the first rigorous measurements of the relative quality of different tagger generators, using a single tagset and dataset and identical circumstances.

The quality of the individual taggers (cf. Table 2 below) certainly still leaves room for improvement, although tagger E surprises us with an accuracy well above any results reported so far and makes us less confident about the gain to be accomplished with combination.

However, that there is room for improvement is not enough. As explained above, for combination to lead to improvement, the component taggers must differ in the errors that they make. That this is indeed the case can be seen in Table 1. It shows that for 99.22% of Tune, at least one tagger selects the correct tag. However, it is unlikely that we will be able to identify this tag in each case. We should rather aim for optimal selection in those cases where the correct tag is not outvoted, which would ideally lead to correct tagging of 98.21% of the words (in Tune).

All Taggers Correct                           92.49
Majority Correct (3-1, 2-1-1)                 4.34
Correct Present, No Majority (2-2, 1-1-1-1)   1.37
Minority Correct (1-3, 1-2-1)                 1.01
All Taggers Wrong                             0.78

Table 1: Tagger agreement on Tune. The patterns between the brackets give the distribution of correct/incorrect tags over the systems.

4 Simple Voting

There are many ways in which the results of the component taggers can be combined, selecting a single tag from the set proposed by these taggers. In this and the following sections we examine a number of them. The accuracy measurements for all of them are listed in Table 2.⁵

⁵For any tag X, precision measures which percentage of the tokens tagged X by the tagger are also tagged X in the benchmark, and recall measures which percentage of the tokens tagged X in the benchmark are also tagged X by the tagger. When abstracting away from individual tags, precision and recall are equal and measure how many tokens are tagged correctly; in this case we also use the more generic term accuracy.

The most straightforward selection method is an n-way vote. Each tagger is allowed to vote for the tag of its choice and the tag with the highest number of votes is selected.⁶ The question is how large a vote we allow each tagger. The most democratic option is to give each tagger one vote (Majority). However, it appears more useful to give more weight to taggers which have proved their quality. This can be general quality, e.g. each tagger votes its overall precision (TotPrecision), or quality in relation to the current situation, e.g. each tagger votes its precision on the suggested tag (TagPrecision). The information about each tagger's quality is derived from an inspection of its results on Tune.

⁶In our experiment, a random selection from among the winning tags is made whenever there is a tie.
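These voting schemes share one skeleton; a minimal Python sketch (ours, with hypothetical tagger names) follows, where the weight function distinguishes Majority, TotPrecision and TagPrecision.

    from collections import defaultdict

    def vote(proposals, weight):
        # proposals: {tagger: proposed_tag}; weight(tagger, tag) is 1
        # for Majority, the tagger's overall precision for
        # TotPrecision, or its precision on this tag for TagPrecision
        # (all measured on Tune)
        scores = defaultdict(float)
        for tagger, tag in proposals.items():
            scores[tag] += weight(tagger, tag)
        return max(scores, key=scores.get)  # ties broken arbitrarily here

    # Majority voting on a toy proposal set:
    print(vote({"T": "NN", "R": "NN", "M": "NN", "E": "JJ"},
               lambda tagger, tag: 1.0))    # -> 'NN'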
                        Tune    Test
Baseline
  Random                73.68   73.74
  LexProb               92.05   92.27
Single Tagger
  T                     95.94   96.08
  R                     96.34   96.46
  M                     96.76   96.95
  E                     97.34   97.43
Simple Voting
  Majority              97.53   97.63
  TotPrecision          97.72   97.80
  TagPrecision          97.55   97.68
  Precision-Recall      97.73   97.84
Pairwise Voting
  TagPair               97.99   97.92
Memory-Based
  Tags                  98.31   97.87
  Tags+Word             99.21   97.82
  Tags+Context          99.46   97.69
Decision trees
  Tags                  98.08   97.78
  Tags+Word             -       -
  Tags+Context          98.67   97.63

Table 2: Accuracy of individual taggers and combination methods.

But we have even more information on how well the taggers perform. We not only know whether we should believe what they propose (precision) but also know how often they fail to recognize the correct tag (recall). This information can be used by forcing each tagger also to add to the vote for tags suggested by the opposition, by an amount equal to 1 minus the recall on the opposing tag (Precision-Recall).

As it turns out, all voting systems outperform the best single tagger, E [7]. Also, the best voting system is the one in which the most specific information is used, Precision-Recall. However, specific information is not always superior, for TotPrecision scores higher than TagPrecision. This might be explained by the fact that recall information is missing (for overall performance this does not matter, since recall is equal to precision).

[7] Even the worst combinator, Majority, is significantly better than E: using McNemar's chi-square, p=0.

5 Pairwise Voting

So far, we have only used information on the performance of individual taggers. A next step is to examine them in pairs. We can investigate all situations where one tagger suggests T1 and the other T2 and estimate the probability that in this situation the tag should actually be Tx, e.g. if E suggests DT and T suggests CS (which can happen if the token is "that") the probabilities for the appropriate tag are:

CS   subordinating conjunction  0.3276
DT   determiner                 0.6207
QL   quantifier                 0.0172
WPR  wh-pronoun                 0.0345

When combining the taggers, every tagger pair is taken in turn and allowed to vote (with the probability described above) for each possible tag, i.e. not just the ones suggested by the component taggers. If a tag pair T1-T2 has never been observed in Tune, we fall back on information on the individual taggers, viz. the probability of each tag Tx given that the tagger suggested tag Ti.

Note that with this method (and those in the next section) a tag suggested by a minority (or even none) of the taggers still has a chance to win. In principle, this could remove the restriction of gain only in 2-2 and 1-1-1-1 cases. In practice, the chance to beat a majority is very slight indeed and we should not get our hopes up too high that this should happen very often.

When used on Test, the pairwise voting strategy (TagPair) clearly outperforms the other voting strategies [8], but does not yet approach the level where all tying majority votes are handled correctly (98.31%).
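A minimal sketch of how the pairwise statistics could be collected on Tune and applied at tagging time follows; the data structures and the exact back-off (summing the two single-tagger distributions) are our own simplifications of the scheme described above.

```python
from collections import Counter
from itertools import combinations

def train_tagpair(tune_cases):
    """tune_cases: list of ({tagger: proposed_tag}, correct_tag)."""
    pair, single = {}, {}
    for proposals, correct in tune_cases:
        for key in proposals.items():                    # key = (tagger, tag)
            single.setdefault(key, Counter())[correct] += 1
        for a, b in combinations(sorted(proposals.items()), 2):
            pair.setdefault((a, b), Counter())[correct] += 1
    return pair, single

def tagpair_vote(proposals, pair, single):
    votes = Counter()
    for a, b in combinations(sorted(proposals.items()), 2):
        dist = pair.get((a, b))
        if dist is None:
            # pair unseen on Tune: back off to the single-tagger
            # distributions P(Tx | tagger suggested Ti)
            dist = single.get(a, Counter()) + single.get(b, Counter())
        total = sum(dist.values())
        if not total:
            continue
        for tag, n in dist.items():    # vote for every possible tag
            votes[tag] += n / total
    return votes.most_common(1)[0][0]
```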
6 Stacked classifiers

From the measurements so far it appears that the use of more detailed information leads to a better accuracy improvement. It ought therefore to be advantageous to step away from the underlying mechanism of voting and to model the situations observed in Tune more closely. The practice of feeding the outputs of a number of classifiers as features for a next learner is usually called stacking (Wolpert 1992). The second stage can be provided with the first level outputs, and with additional information, e.g. about the original input pattern.

The first choice for this is to use a Memory-Based second level learner. In the basic version (Tags), each case consists of the tags suggested by the component taggers and the correct tag. In the more advanced versions we also add information about the word in question (Tags+Word) and the tags suggested by all taggers for the previous and the next position (Tags+Context). For the first two the similarity metric used during tagging is a straightforward overlap count; for the third we need to use an Information Gain weighting (Daelemans et al. 1997).

Surprisingly, none of the Memory-Based methods reaches the quality of TagPair [9]. The explanation for this can be found when we examine the differences within the Memory-Based general strategy: the more feature information is stored, the higher the accuracy on Tune, but the lower the accuracy on Test. This is most likely an overtraining effect: Tune is probably too small to collect case bases which can leverage the stacking effect convincingly, especially since only 7.51% of the second stage material shows disagreement between the featured tags.

To examine if the overtraining effects are specific to this particular second level classifier, we also used the C5.0 system, a commercial version of the well-known program C4.5 (Quinlan 1993) for the induction of decision trees, on the same training material [10]. Because C5.0 prunes the decision tree, the overfitting of training material (Tune) is less than with Memory-Based learning, but the results on Test are also worse. We conjecture that pruning is not beneficial when the interesting cases are very rare. To realise the benefits of stacking, either more data is needed or a second stage classifier that is better suited to this type of problem.

[8] It is significantly better than the runner-up (Precision-Recall) with p=0.

[9] Tags (Memory-Based) scores significantly worse than TagPair (p=0.0274) and not significantly better than Precision-Recall (p=0.2766).

[10] Tags+Word could not be handled by C5.0 due to the huge number of feature values.
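As a sketch of the basic (Tags) stacked learner described above: the second stage simply memorizes which correct tag goes with each observed pattern of component-tagger outputs. Exact-match lookup stands in here for the real overlap-based memory-based retrieval, and the fallback behaviour for unseen patterns is an assumption.

```python
from collections import Counter

def train_stack(second_stage_cases):
    """second_stage_cases: [(tags_proposed_by_T_R_M_E, correct_tag), ...],
    e.g. (("DT", "DT", "CS", "DT"), "DT")."""
    memory = {}
    for pattern, correct in second_stage_cases:
        memory.setdefault(pattern, Counter())[correct] += 1
    return memory

def stack_tag(memory, pattern, fallback_tag):
    """Return the majority correct tag stored for this pattern of
    component outputs; an unseen pattern falls back to, e.g., the tag
    chosen by one of the voting combiners."""
    if pattern in memory:
        return memory[pattern].most_common(1)[0][0]
    return fallback_tag
```

The Tags+Word and Tags+Context variants would simply extend the pattern tuple, which is exactly what makes the case base sparser and, on this amount of Tune data, prone to the overtraining effect described above.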
            Test   Increase vs         % Error Rate Reduction
                   Component Average   vs Best Component
T           96.08  -                   -
R           96.46  -                   -
M           96.95  -                   -
MR          97.03  96.70 +0.33          2.6 (M)
RT          97.11  96.27 +0.84         18.4 (R)
MT          97.26  96.52 +0.74         10.2 (M)
E           97.43  -                   -
MRT         97.52  96.50 +1.02         18.7 (M)
ME          97.56  97.19 +0.37          5.1 (E)
ER          97.58  96.95 +0.63          5.8 (E)
ET          97.60  96.76 +0.84          6.6 (E)
MER         97.75  96.95 +0.80         12.5 (E)
ERT         97.79  96.66 +1.13         14.0 (E)
MET         97.86  96.82 +1.04         16.7 (E)
MERT        97.92  96.73 +1.19         19.1 (E)

Table 3: Correctness scores on Test for Pairwise Voting with all tagger combinations

7 The value of combination

The relation between the accuracy of combinations (using TagPair) and that of the individual taggers is shown in Table 3. The most important observation is that every combination (significantly) outperforms the combination of any strict subset of its components. Also of note is the improvement yielded by the best combination. The pairwise voting system, using all four individual taggers, scores 97.92% correct on Test, a 19.1% reduction in error rate over the best individual system, viz. the Maximum Entropy tagger (97.43%).

A major factor in the quality of the combination results is obviously the quality of the best component: all combinations with E score higher than those without E (although M, R and T together are able to beat E alone [11]). After that, the decisive factor appears to be the difference in language model: T is generally a better combiner than M and R [12], even though it has the lowest accuracy when operating alone.

[11] By a margin at the edge of significance: p=0.0608.

[12] Although not significantly better, e.g. the differences within the group ME/ER/ET are not significant.

A possible criticism of the proposed combination scheme is the fact that for the most successful combination schemes, one has to reserve a non-trivial portion (in the experiment 10% of the total material) of the annotated data to set the parameters for the combination. To see whether this is in fact a good way to spend the extra data, we also trained the two best individual systems (E and M, with exactly the same settings as in the first experiments) on a concatenation of Train and Tune, so that they had access to every piece of data that the combination had seen. It turns out that the increase in the individual taggers is quite limited when compared to combination. The more extensively trained E scored 97.51% correct on Test (3.1% error reduction) and M 97.07% (3.9% error reduction).

Conclusion

Our experiment shows that, at least for the task at hand, combination of several different systems allows us to raise the performance ceiling for data driven systems. Obviously there is still room for a closer examination of the differences between the combination methods, e.g. the question whether Memory-Based combination would have performed better if we had provided more training data than just Tune, and of the remaining errors, e.g. the effects of inconsistency in the data (cf. Ratnaparkhi 1996 on such effects in the Penn Treebank corpus). Regardless of such closer investigation, we feel that our results are encouraging enough to extend our investigation of combination, starting with additional component taggers and selection strategies, and going on to shifts to other tagsets and/or languages. But the investigation need not be limited to wordclass tagging, for we expect that there are many other NLP tasks where combination could lead to worthwhile improvements.

Acknowledgements

Our thanks go to the creators of the tagger generators used here for making their systems available.

References

Ali K.M. and Pazzani M.J. (1996) Error Reduction through Learning Multiple Descriptions. Machine Learning, Vol. 24(3), pp. 173-202.
Brill E. (1992) A Simple Rule-Based Part of Speech Tagger. In Proc. ANLP'92, pp. 152-155.
Brill E. (1994) Some Advances in Transformation-Based Part-of-Speech Tagging. In Proc. AAAI'94.
Chan P.K. and Stolfo S.J. (1995) A Comparative Evaluation of Voting and Meta-Learning of Partitioned Data. In Proc. 12th International Conference on Machine Learning, pp. 90-98.
Daelemans W., Zavrel J., Berck P. and Gillis S. (1996) MBT: a Memory-Based Part of Speech Tagger-Generator. In Proc. Fourth Workshop on Very Large Corpora, E. Ejerhed and I. Dagan, eds., Copenhagen, Denmark, pp. 14-27.
Daelemans W., van den Bosch A. and Weijters A. (1997) IGTree: Using Trees for Compression and Classification in Lazy Learning Algorithms. Artificial Intelligence Review, 11, Special Issue on Lazy Learning, pp. 407-423.
van Halteren H. (1996) Comparison of Tagging Strategies, a Prelude to Democratic Tagging. In "Research in Humanities Computing 4. Selected papers for the ALLC/ACH Conference, Christ Church, Oxford, April 1992", S. Hockey and N. Ide (eds.), Clarendon Press, Oxford, England, pp. 207-215.
van Halteren H. (ed.) (1998, forthc.) Syntactic Wordclass Tagging. Kluwer Academic Publishers, Dordrecht, The Netherlands, 310 p.
Johansson S. (1986) The Tagged LOB Corpus: User's Manual. Norwegian Computing Centre for the Humanities, Bergen, Norway. 149 p.
Quinlan J.R. (1993) C4.5: Programs for Machine Learning. San Mateo, CA. Morgan Kaufmann.
Ratnaparkhi A. (1996) A Maximum Entropy Part of Speech Tagger. In Proc. ACL-SIGDAT Conference on Empirical Methods in Natural Language Processing.
Steetskamp R. (1995) An Implementation of a Probabilistic Tagger. TOSCA Research Group, University of Nijmegen, Nijmegen, The Netherlands. 48 p.
Tumer K. and Ghosh J. (1996) Error Correlation and Error Reduction in Ensemble Classifiers. Connection Science, Special issue on combining artificial neural networks: ensemble approaches, Vol. 8(3&4), pp. 385-404.
Wolpert D.H. (1992) Stacked Generalization. Neural Networks, Vol. 5, pp. 241-259.
A step towards the detection of semantic variants of terms in technical documents

Thierry Hamon and Adeline Nazarenko
Laboratoire d'Informatique de Paris-Nord
Université Paris-Nord
Avenue J-B Clément
93430 Villetaneuse, FRANCE
[email protected] [email protected]

Cécile Gros
EDF-DER-IMA-TIEM-SOAD
1 Avenue du Général de Gaulle
92141 Clamart CEDEX, FRANCE
[email protected]

Abstract

This paper reports the results of a preliminary experiment on the detection of semantic variants of terms in a French technical document. The general goal of our work is to help the structuration of terminologies. Two kinds of semantic variants can be found in traditional terminologies: strict synonymy links and fuzzier relations like see-also. We have designed three rules which exploit general dictionary information to infer synonymy relations between complex candidate terms. The results have been examined by a human terminologist. The expert judged that half of the overall pairs of terms are relevant for the semantic variation. He validated an important part of the detected links as synonymy. Moreover, it appeared that numerous errors are due to a few mis-interpreted links: they could be eliminated by a few exception rules.

1 Introduction

1.1 Structuring a terminology

The work presented here is part of an industrial project of Technical Document Consultation System (Gros et al., 1996) at the French electricity company EDF. The goal is to develop tools to help a terminologist in the construction of a structured terminology (cf. figure 1) providing:

• terms of a domain, i.e. simple or complex lexical units pointing out accurate concepts in a technical document (Bourigault, 1992);
• semantic links such as the see-also relation.

This can be viewed as a two-step process. The candidate terms (i.e. lexical units which can be terms if a domain expert validates them) are first automatically extracted from the technical document with a Terminology Extraction Software (LEXTER) (Bourigault, 1992). The list of candidate terms is then structured into a semantic network. We focus on the latter point by detecting semantic variants, especially synonyms.

ligne aérienne (overhead line)
  See_also: Départ aérien (overhead outlet)
  Synonym:  Liaison électrique aérienne (overhead electric link)
Ligne simple (single circuit line)
  Is_a: Ligne aérienne (overhead line)
Ligne multiterne (multiple circuit line)
  Is_a: Ligne aérienne (overhead line)
  Synonym: Ligne double (double circuit line)

Figure 1: Example of a structured terminology in the electric domain.

In order to build a structured terminology, we thus attempt to link candidate terms extracted from a French technical document [1]. For instance, from synonyms such as matériel (equipment) / équipement (fittings), marche (running) / fonctionnement (working) and normal (normal) / bon (right), we infer a synonymy link between the candidate terms matériel électrique (electric equipment) / équipement électrique (electrical fittings) and marche normale (normal running) / bon fonctionnement (right working).

[1] As the terms used in this paper have been extracted from French documents, their translation, especially for the synonymy, does not always convey the same nuance as the original.
modèle (model):
<1> canon (canon), étalon (standard), exemplaire (copy), exemple (example), plan (plan)
<2> sujet (subject), maquette (maquette)
<3> héros (hero), type (type)
<4> échantillon (sample), spécimen (sample)
<5> standard (standard), type (type), prototype (prototype)
<6> maquette (model)
<7> gabarit (size), moule (mould), patron (pattern)

Figure 2: Example of a word entry from the dictionary Le Robert.

1.2 Using a general language dictionary for specialized corpora

As domain specific semantic information is seldom available, our aim is to evaluate the relevance and usefulness of general semantic resources for the detection of synonymy between candidate terms.

For this study, we used a French general dictionary, Le Robert, supplied by the Institut National de la Langue Française (INaLF). It provides synonyms and analogical words distributed among the different senses (cf. figure 2) of each word entry. It is exploited as a machine-readable synonym dictionary.

We use a 200 000 word corpus about electric power plants. Its size is typical of technical documents. It is very technical if one considers the dictionary lemma coverage for this corpus (45%). Concerning two other available documents dealing with software engineering and electric network planning, the dictionary lemma coverage is respectively 65% and 57%.

The present corpus has been analyzed by the Terminology Extraction Software LEXTER, which extracted 12 043 candidate terms (2 831 nouns, 597 adjectives and 8 615 noun phrases). Each complex candidate term (ligne d'alimentation, supply line) is analyzed into a head (ligne, line) and an expansion (alimentation, supply). It is part of a syntactic network (cf. figure 3).

2 Method for the detection of synonymous terms

The terminological variation includes morphological (flectional, derivational) variants, syntactic variants (coordinated and compound terms) but also semantic variants (synonyms, hyperonyms) of controlled terms. In this experiment, we attempt to infer synonymy links between candidate terms.
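For concreteness, one possible way to represent LEXTER-style candidate terms with their binary head/expansion analysis is sketched below; the class and field names are hypothetical, not LEXTER's actual data model.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CandidateTerm:
    """A candidate term from the extraction step. A complex term is
    analyzed into a head and an expansion, themselves candidate terms."""
    form: str
    head: Optional["CandidateTerm"] = None
    expansion: Optional["CandidateTerm"] = None

    @property
    def is_simple(self) -> bool:
        return self.head is None

# ligne d'alimentation = head 'ligne' + expansion 'alimentation'
ligne = CandidateTerm("ligne")
alimentation = CandidateTerm("alimentation")
ligne_d_alimentation = CandidateTerm(
    "ligne d'alimentation", head=ligne, expansion=alimentation)
```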
2.1 Semantic variation and synonymy relation

Semantic variation. The semantic variation includes relations (e.g. synonymy and see-also) between words of the same grammatical category, even if one may also take into consideration phenomena such as elliptic relations or combinations of synonymy and derivation relations (e.g. heat and thermal) where the categories may be different.

Fuzzier relations such as the traditional see-also relations of terminologies are also very useful. Once a link is established between two terms, it is sometimes easy to interpret for the terminology users. Moreover, for applications such as document retrieval, the link itself is often more important than its very type.

Synonymy. We use a synonymy definition close to that of WordNet (Miller et al., 1993). It is defined as an equivalence relation between terms having the same meaning in a particular context. The transitivity rule cannot be applied to the links extracted from the dictionary. Indeed, while the synonymy is sometimes very contextual in the dictionary, the links appear in the data without context information and would produce a great deal of errors. Thus, for instance, the synonymy links between the adjectives polaire (polar) and glacial (icy) and the adjectives glacial (cold) and insensible (insensitive) would allow us to deduce a wrong synonym link between polaire and insensible. Moreover, tests carried out on dictionary samples show that the relevant links which could be added thanks to the transitivity rules already exist in the dictionary. For instance the following words are synonymous pairwise: logement (accommodation), demeure (residence), domicile (residence) and habitation (house).

ligne (line)
  H: ligne aérienne (overhead line); ligne simple (single line); ligne double (double line); ligne d'alimentation (supply line); ligne aérienne haute tension (high voltage overhead line); ligne aérienne moyenne tension (middle voltage overhead line); (...)
  E: capacité de transit de la ligne (transit capacity of the line); coût d'investissement de la ligne (cost of investment of the line); déclenchement de la ligne (tripping of the line); longueur de la ligne (size of the line); puissance caractéristique de la ligne (characteristic power of the line); ordre de déclenchement de la ligne (order of tripping of the line); (...)
alimentation (supply)

Figure 3: Fragment of the syntactic network (H = head, E = expansion).

                                                     Nouns  Adjectives  Total
Number of simple terms extracted                     2 831  597         3 428
Number of retained words at the filtering step       1 134  408         1 542
Percentage of retained words at the filtering step   40%    68%         45%

Table 1: Coverage of the corpus by the dictionary.

We consider all links provided by the dictionary as expressing a synonymy relation between simple candidate terms and design a two-step automatic method to infer links between complex candidate terms.

2.2 First step: Dictionary data filtering

In order to reduce the database, we first filter the relevant dictionary links for the studied document. For instance, the link matériel (equipment) / équipement (fittings) is selected because both its ends, matériel and équipement, exist in the studied corpus. For this document, 3 369 synonymy links between 1 542 simple terms are preserved. Table 1 shows the results of the filtering step in regard to the coverage of our corpus by the dictionary.

2.3 Second step: Detection of synonymous candidate terms

Assuming that the semantics and the synonymy of the complex candidate terms are compositional, we design three rules to detect synonymy relations between candidate terms. Considering two candidate terms, if one of the following conditions is met, a synonymy link is added to the terminological network:

- the heads are identical and the expansions are synonymous (collecteur général (general collector) / collecteur commun (common collector));
- the heads are synonymous and the expansions are identical (matériel électrique (electric equipment) / équipement électrique (electrical fittings));
- the heads are synonymous and the expansions are synonymous (marche normale (normal running) / bon fonctionnement (right working)).

We first use the dictionary links as a bootstrap to detect synonymy links between complex candidate terms, as sketched below. Then, we iterate the process by including the newly detected links in our base until no new link can be found. In the present experiment, the process ends up after three iterations.
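Continuing the hypothetical CandidateTerm representation above, a minimal sketch of the three rules and of the iteration to a fixpoint could look as follows; treating identity as a special case of synonymy collapses the three rules into a single test.

```python
def synonymous(a, b, links):
    """Terms count as synonymous when identical or linked in the
    current link set (frozensets of two term forms)."""
    return a.form == b.form or frozenset((a.form, b.form)) in links

def one_pass(complex_terms, links):
    """Apply the three rules once over all pairs of complex terms
    (terms that have both a head and an expansion)."""
    new = set()
    for i, t1 in enumerate(complex_terms):
        for t2 in complex_terms[i + 1:]:
            if (synonymous(t1.head, t2.head, links)
                    and synonymous(t1.expansion, t2.expansion, links)):
                pair = frozenset((t1.form, t2.form))
                if pair not in links:
                    new.add(pair)
    return new

def detect_links(complex_terms, dictionary_links):
    """Bootstrap from the filtered dictionary links, then iterate
    until no new link is found (three iterations in this experiment)."""
    links = set(dictionary_links)
    while True:
        new = one_pass(complex_terms, links)
        if not new:
            return links
        links |= new
```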
3 Results and study of the detected links

3.1 Various detected links

Synonymy links. 396 links between complex candidate terms (i.e. noun phrases) are inferred by this method. An expert of the domain validated 37% of them (i.e. 146 links, cf. table 2) as real synonymy links: hauteur d'eau (water height) / niveau d'eau (level of water), détérioration notable (notable deterioration) / dégradation importante (important damage) (cf. figure 4).

                   Number  Percentage
Validated links    146     37%
Unvalidated links  250     63%
Total              396     100%

Table 2: Results of the link validation.

Most of the synonymy links between candidate terms are detected at the first iteration (383 links out of 396). The majority of the validated links are given by the first two rules: 89 validated links out of 206 with the first rule (admission d'air (air intake) / entrée d'air (air entry)), 49 out of 105 with the second (toit flottant (floating roof) / toit mobile (movable roof) and collecteur général (general collector) / collecteur commun (common collector)). Obviously, the last rule has a lower precision rate: 8 out of 85 (fausse manoeuvre (wrong operation) / mauvaise manipulation (bad handling)). However, it infers important links which are difficult to detect by hand.

Other useful links. On the whole, the expert judged that half of the detected links are useful for the terminology structuration even if he rejected some of them as real synonymy links (cf. figure 5). Our method detects different types of links: meronymy, antonymy, relations between close concepts, connected parts of a whole mechanism, etc.

The meronymy links are the most numerous after synonymy (rapport de sûreté (safety report) / analyse de sûreté (safety analysis)). In the previous example, whereas rapport (report) and analyse (analysis) are given as synonyms by the general language dictionary (which is context-free), their technical meanings in our document are more specific. Therefore, rapport de sûreté is a meronym rather than a synonym of analyse de sûreté in the studied document.

Other detected links allow us to group the candidate terms which refer to related concepts. For instance, we detected a link between the device ligne de vidange (draining line) and the place point de purge (blow-down point), which is relevant since a draining line ends at a blow-down point. Likewise, it is useful to link fin de vidange (draining end), which designates an operation, and destination des purges (blow-down destination), which is the corresponding equipment.

The expert considered that the link between the candidate terms commande mécanique (mechanical control) / ordre automatique (automatic order) expresses an antonymy relation, although it is inferred from the synonymy relation of the dictionary mécanique (mechanical) / automatique (automatic). It appears that those adjectives have a particular meaning in the present corpus. Therefore, every link detected from this "synonymy" link is an antonymy one.

Those links express various relations sometimes difficult to name, even by the expert. Such links are important in a terminology.

3.2 Polysemy, elision and metaphor

Most real errors are due to the lack of context information for polysemic words and the noisy data existing in the dictionary. For instance the French word temps means either time or weather. According to the dictionary, temps (weather) is a synonym of température (temperature) [2], but this meaning is excluded from the present corpus. Since we cannot distinguish the different meanings, the synonymy of temps (time) and température is taken for granted.
Temps attendu (expected time) and température attendue (expected temperature) are thus given as synonymous. This type of wrong link is rather important in the list presented to the expert: between 10 and 20 links out of 396.

[2] It would be more precise to interpret it as analogous words.

Term 1                                             Term 2
détérioration notable (notable deterioration)      dégradation importante (important damage)
fausse manoeuvre (wrong operation)                 mauvaise manipulation (bad handling)
action de l'opérateur (action of the operator)     intervention de l'opérateur (intervention of the operator)
capacité interne (internal capacity)               volume interne (internal volume)
capacité totale (total capacity)                   volume total (total volume)
capacité utile (useful capacity)                   volume utile (useful volume)
limite de solubilité (limit of solubility)         seuil de solubilité (solubility threshold)
marche manuelle (manual running)                   fonctionnement manuel (manual working)
tests périodiques (periodic tests)                 essais périodiques (periodic trials)
hauteur d'eau (water height)                       niveau d'eau (level of water)
panneau de commande (control panel)                tableau de commande (control board)

Figure 4: Examples of synonymy links between complex candidate terms.

Term 1                                             Term 2
essai en usine (test in plant)                     expérience d'exploitation (experiment of exploitation)
ligne de vidange (draining line)                   point de purge (blow-down point)
fonction d'un temps (function of a time)           effet d'une température (effect of a temperature)
froid normal (normal cold)                         refroidissement correct (correct cooling)
rapport de sûreté (safety report)                  analyse de sûreté (safety analysis)
solution d'acide borique (solution of boric acid)  dissolution de l'acide borique (dissolving of the boric acid)
température attendue (expected temperature)        temps attendu (expected time)
température normale (normal temperature)           temps normal (normal time)
organes de commande (control devices)              organes d'ordre (order devices)
gros débit (big flow)                              plein débit (full flow)
activité importante (important activity)           activité élevée (high activity)
commande mécanique (mechanical control)            ordre automatique (automatic order)
risques de corrosion (risk of corrosion)           risques de destruction (risk of destruction)

Figure 5: Examples of rejected links between complex candidate terms.

On the contrary, about ten wrong links are due to the elision of common terms in the domain. For instance, the term activité (activity), which actually corresponds to the term radioactivité (radioactivity) in the document, is given as a synonym of énergie (energy) in the dictionary. We have detected links such as activité haute (high activity) / haute énergie (high energy).

As regards metaphor, we have observed that it preserves the semantic relation. For instance, in graph theory, the link arbre (tree) / feuille (leaf) can be inferred from the meronymy information of a general dictionary.

Those types of wrong links are easily identified during the validation. Some exception rules can be designed to first regroup those links and then eliminate them. With that aim, we plan to use dictionary definitions.

3.3 Evaluation

The inferred links express not only synonymy, but also other relations which may be difficult to name. Apart from real errors, these fuzzy see-also relations are useful in the context of a consultation system.

The results of this first experiment are encouraging. Although the precision rate and the number of links are low (37%, 396 links), the use of complementary methods (e.g.
detection of syntactic variants) would allow us to propagate these links and increase their number. Also, the use of other knowledge sources or different methods (Habert et al., 1996) is necessary to increase the precision rate and find links between more technical candidate terms.

As regards the improvement of such a method, the terminology acquisition by an expert will take tens of hours, while the automatic extraction takes one hour and the validation of the links has been done in two hours.

The main difficulty is to evaluate the recall of the results, because there is no standard reference in that matter giving the overall relevant relations in the document. One may think that the comparison with links manually detected by an expert is the best evaluation, but such manual detection is subjective. Regarding the validation by several experts, it is well-known that such validation would give different results depending on the background of each expert (Szpakowicz et al., 1996). So, we are reduced to comparing our results with those obtained by different methods even though they are not perfect either. We are planning to compare the clusters found by our method with the clustering one of (Assadi, 1997) to study how the results overlap and are complementary.

4 Related works

The variant detection in specialized corpora must be taken into account for information retrieval. This complex operation involves the semantic as well as the morphological and syntactic levels. (Jacquemin, 1996) designs a unification-based partial parser FASTER which analyses raw technical text while meta-rules detect morpho-syntactic variants of controlled terms (blood cell, blood mononuclear cell). By using morphological and part-of-speech modules, the system is extended to verbal phrases (tree cutting, trees have been cut down) (Klavans et al., 1997). Dealing with syntactic paraphrase in the general language, (Dras, 1997) proposes a similar representation by using the STAG formalism to detect syntactically related sentences. Because we deal with the semantic level, our work is complementary to those.

Semantic variation is rarely studied in specialized domains. Works on word similarity and word sense disambiguation are generally based on statistical methods designed for large or even very large corpora (Hindle, 1990; Agirre and Rigau, 1996). Therefore, they cannot be applied to technical documents, which usually are medium size corpora. However, dealing with already linguistically filtered data, (Assadi, 1997) aims at statistically building rough clusters, supposing that similar candidate terms have similar expansions. He then relies on human expertise for the semantic interpretation. This differs from our work, which tries to automatically make the semantic relations explicit. In order to disambiguate noun objects in a short text (30 000 words), (Li et al., 1995) design heuristic rules using semantic similarity information in WordNet and verbs as context. Their system disambiguates an encouraging number of noun-verb pairs if one considers single and multiple senses assigned to a word. In (Basili et al., 1997), the lexical knowledge base WordNet (Miller et al., 1993) is used as a bootstrap for verb disambiguation. They tune it to the domain of the studied document by taking into account the contexts in which the verbs are used. This tuning leads both to eliminating certain semantic categories and to adding new ones. For instance, the category contact is created for the verb to record.
The resulting sense classification is thus a better description of the specialized verb meanings. Our symbolic and dictionary-based approach is close to that of (Basili et al., 1997). Both use general language information (traditional dictionary vs. WordNet) for specialized corpora. However, their goals differ: disambiguation vs. semantic relation identification.

5 Conclusion and future works

The use of a synonym dictionary and the rules we have designed for the detection of synonymous candidate terms allow us to extract an encouraging number of links in a very technical corpus. An expert validated these links. More than one third of the detected links are synonymy relations. Besides synonymy, our method detects various kinds of semantic variants. Wrong links due to polysemy can be easily eliminated with exception rules by comparing selectional patterns and generalized contexts (Basili et al., 1993; Grishman and Sterling, 1994).

Our work shows that general semantic data are useful for the terminology structuration and the detection of synonyms in a corpus of specialized language. The results show that semantic variants can be automatically detected. Of course, the number of acquired links is relatively low, but our method is not to be used in isolation.

Acknowledgment

This work is the result of a collaboration with the Direction des Etudes et Recherche (DER) d'Electricité de France (EDF). We thank Marie-Luce Picard from EDF and Benoît Habert from ENS Fontenay-St Cloud for their help, Didier Bourigault and Jean-Yves Hamon from the Institut de la Langue Française (INaLF) for the dictionary and Henry Boccon-Gibod for the validation of the results.

References

E. Agirre and G. Rigau. 1996. Word sense disambiguation using conceptual density. In Proceedings of COLING'96, pages 16-22, Copenhagen, Denmark.
H. Assadi. 1997. Knowledge acquisition from texts: Using an automatic clustering method based on noun-modifier relationship. In Proceedings of ACL'97 - Student Session, Madrid, Spain.
Roberto Basili, Maria Teresa Pazienza, and Paola Velardi. 1993. Acquisition of selectional patterns in sublanguages. Machine Translation, 8:175-201.
Roberto Basili, Michelangelo Della Rocca, and Maria Teresa Pazienza. 1997. Contextual word sense tuning and disambiguation. Applied Artificial Intelligence, 11:235-262.
D. Bourigault. 1992. Surface grammatical analysis for the extraction of terminological noun phrases. In Proceedings of COLING'92, pages 977-981, Nantes, France.
Mark Dras. 1997. Representing paraphrases using synchronous tree adjoining grammars. In Proceedings of the 1997 Australian NLP Summer Workshop, Sydney, Australia.
Ralph Grishman and John Sterling. 1994. Generalizing automatically generated selectional patterns. In Proceedings of COLING'94, volume 3, pages 742-747, Kyoto.
C. Gros, H. Assadi, N. Aussenac-Gilles, and A. Courcelle. 1996. Task models for technical documentation accessing. In Proceedings of EKAW'96, Nottingham.
Benoît Habert, Elie Naulleau, and Adeline Nazarenko. 1996. Symbolic word clustering for medium-size corpora. In Proceedings of COLING'96, volume 1, pages 490-495, Copenhagen, Denmark, August.
D. Hindle. 1990. Noun classification from predicate-argument structures. In Proceedings of ACL'90, pages 268-275, Pittsburgh, PA.
C. Jacquemin. 1996. A symbolic and surgical acquisition of terms through variation. In S. Wermter, E. Riloff and G. Scheler, editors, Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing, pages 425-438, Springer.
J. Klavans, C. Jacquemin, and E. Tzoukermann. 1997. A natural language approach to multi-word term conflation. In Proceedings of the third Delos Workshop - Cross-Language Information Retrieval.
Xiaobin Li, Stan Szpakowicz, and Stan Matwin. 1995. A WordNet-based algorithm for word sense disambiguation. In Proceedings of IJCAI-95, pages 1368-1374, Montreal, Canada.
G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. Miller. 1993. Introduction to WordNet: An on-line lexical database. Technical Report CSL Report 43, Cognitive Science Laboratory, Princeton.
Stan Szpakowicz, Stan Matwin, and Ken Barker. 1996. WordNet-based word sense disambiguation that works for short texts. Technical Report TR-96-03, Department of Computer Science, University of Ottawa, Ontario, Canada.
Using Decision Trees to Construct a Practical Parser

Masahiko Haruno*  Satoshi Shirai†  Yoshifumi Ooyama†
[email protected]  [email protected]  [email protected]

*ATR Human Information Processing Research Laboratories
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan.
†NTT Communication Science Laboratories
2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan.

Abstract

This paper describes novel and practical Japanese parsers that use decision trees. First, we construct a single decision tree to estimate modification probabilities; how one phrase tends to modify another. Next, we introduce a boosting algorithm in which several decision trees are constructed and then combined for probability estimation. The two constructed parsers are evaluated by using the EDR Japanese annotated corpus. The single-tree method outperforms the conventional Japanese stochastic methods by 4%. Moreover, the boosting version is shown to have significant advantages: 1) better parsing accuracy than its single-tree counterpart for any amount of training data and 2) no over-fitting to data for various iterations.

1 Introduction

Conventional parsers with practical levels of performance require a number of sophisticated rules that have to be hand-crafted by human linguists. It is time-consuming and cumbersome to maintain these rules for two reasons.

• The rules are specific to the application domain.
• Specific rules handling collocational expressions create side effects. Such rules often deteriorate the overall performance of the parser.

The stochastic approach, on the other hand, has the potential to overcome these difficulties. Because it induces stochastic rules to maximize overall performance against training data, it not only adapts to any application domain but also may avoid over-fitting to the data. In the late 80s and early 90s, the induction and parameter estimation of probabilistic context free grammars (PCFGs) from corpora were intensively studied. Because these grammars comprise only nonterminal and part-of-speech tag symbols, their performances were not good enough to be used in practical applications (Charniak, 1993). A broader range of information, in particular lexical information, was found to be essential in disambiguating the syntactic structures of real-world sentences.

SPATTER (Magerman, 1995) augmented the pure PCFG by introducing a number of lexical attributes. The parser controlled applications of each rule by using the lexical constraints induced by a decision tree algorithm (Quinlan, 1993). The SPATTER parser attained 87% accuracy and first made stochastic parsers a practical choice. The other type of high-precision parser, which is based on dependency analysis, was introduced by Collins (Collins, 1996). Dependency analysis first segments a sentence into syntactically meaningful sequences of words and then considers the modification of each segment. Collins' parser computes the likelihood that each segment modifies the other (a 2-term relation) by using large corpora. These modification probabilities are conditioned by the head words of the two segments, the distance between the two segments and other syntactic features. Although these two parsers have shown similar performance, the keys of their success are slightly different. SPATTER parser performance greatly depends on the feature selection ability of the decision tree algorithm rather than its linguistic representation.
On the other hand, dependency analysis plays an essential role in Collins' parser for efficiently extracting information from corpora.

In this paper, we describe practical Japanese dependency parsers that use decision trees. In the Japanese language, dependency analysis has been shown to be powerful because segment (bunsetsu) order in a sentence is relatively free compared to European languages. Japanese dependency parsers generally proceed in three steps.

1. Segment a sentence into a sequence of bunsetsu.
2. Prepare a modification matrix, each value of which represents how one bunsetsu is likely to modify another.
3. Find optimal modifications in a sentence by a dynamic programming technique.

The most difficult part is the second: how to construct a sophisticated modification matrix. With conventional Japanese parsers, the linguist must classify the bunsetsu and select appropriate features to compute modification values. The parsers thus suffer from application domain diversity and the side effects of specific rules.

Stochastic dependency parsers like Collins', on the other hand, define a set of attributes for conditioning the modification probabilities. The parsers consider all of the attributes regardless of bunsetsu type. These methods can encompass only a small number of features if the probabilities are to be precisely evaluated from a finite amount of data. Our decision tree method constructs a more sophisticated modification matrix. It automatically selects a sufficient number of significant attributes according to bunsetsu type. We can use arbitrary numbers of attributes which potentially increase parsing accuracy.

Natural languages are full of exceptional and collocational expressions. It is difficult for machine learning algorithms, as well as human linguists, to judge whether a specific rule is relevant in terms of overall performance. To tackle this problem, we test the mixture of sequentially generated decision trees. Specifically, we use the Ada-Boost algorithm (Freund and Schapire, 1996) which iteratively performs two procedures: 1. construct a decision tree based on the current data distribution and 2. update the distribution by focusing on data that are not well predicted by the constructed tree. The final modification probabilities are computed by mixing all the decision trees according to their performance. The sequential decision trees gradually change from broad coverage to specific exceptional trees that cannot be captured by a single general tree. In other words, the method incorporates not only general expressions but also infrequent specific ones.

The rest of the paper is organized as follows. Section 2 summarizes dependency analysis for the Japanese language. Section 3 explains our decision tree models that compute modification probabilities. Section 4 then presents experimental results obtained by using EDR Japanese annotated corpora. Finally, section 5 concludes the paper.

2 Dependency Analysis in Japanese Language

This section overviews dependency analysis in the Japanese language. The parser generally performs the following three steps.

1. Segment a sentence into a sequence of bunsetsu.
2. Prepare a modification matrix, each value of which represents how one bunsetsu is likely to modify the other.
3. Find optimal modifications in a sentence by a dynamic programming technique.
Because there are no explicit delimiters between words in Japanese, input sentences are first word segmented, part-of-speech tagged, and then chunked into a sequence of bunsetsu. The first step yields, for the following example, the sequence of bunsetsu displayed below. The parentheses represent the internal structures of the bunsetsu (word segmentations).

Example:
((kinou)(no))  ((yuugata)(ni))  ((kinjo)(no))  ((kodomo)(ga))  ((wain)(wo))  ((nomu)(ta))
kinou-no       yuugata-ni       kinjo-no       kodomo-ga       wain-wo       nomu+ta
yesterday-NO   evening-NI       neighbor-NO    children-GA     wine-WO       drink+PAST

The second step of parsing is to construct a modification matrix, each value of which represents how likely one bunsetsu is to modify another in a sentence. In the Japanese language, we usually make two assumptions:

1. Every bunsetsu except the last one modifies only one posterior bunsetsu.
2. No modification crosses other modifications in a sentence.

Table 1 illustrates a modification matrix for the example sentence. In the matrix, columns and rows represent anterior and posterior bunsetsu, respectively. For example, the first bunsetsu kinou-no modifies the second yuugata-ni with score 0.70 and the third kinjo-no with score 0.07. The aim of this paper is to generate a modification matrix by using decision trees.

            kinou-no  yuugata-ni  kinjo-no  kodomo-ga  wain-wo
yuugata-ni  0.70
kinjo-no    0.07      0.10
kodomo-ga   0.10      0.10        0.70
wain-wo     0.10      0.10        0.20      0.05
nomu-ta     0.03      0.70        0.10      0.95       1.00

Table 1: Modification Matrix for Sample Sentence

The final step of parsing optimizes the entire dependency structure by using the values in the modification matrix.

Before going into our model, we introduce the notations that will be used in the model. Let S be the input sentence. S comprises a bunsetsu set B of length m ({<b_1, f_1>, ..., <b_m, f_m>}) in which b_i and f_i represent the ith bunsetsu and its features, respectively. We define D to be a modification set; D = {mod(1), ..., mod(m-1)} in which mod(i) indicates the position of the bunsetsu modified by the ith bunsetsu. Because of the first assumption, the length of D is always m-1. Using these notations, the result of the third step for the example can be given as D = {2, 6, 4, 6, 6}, as displayed in Figure 1.

Figure 1: Modification Set for Sample Sentence (arcs linking each of the five anterior bunsetsu to the bunsetsu it modifies).
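The paper leaves the third step at "a dynamic programming technique". One standard possibility, sketched below under the two assumptions above, is an interval DP that maximizes the product of matrix scores over all rightward, non-crossing modification sets; the function and variable names are ours, and the scores are assumed to be positive.

```python
import math
from functools import lru_cache

def best_modifications(score, m):
    """score[i][j]: modification score of bunsetsu i -> j (1 <= i < j <= m).
    Returns {i: mod(i)} maximizing the product of scores, under the
    single-head, rightward, non-crossing assumptions."""
    @lru_cache(maxsize=None)
    def dp(i, j):
        # best log-score for attaching every bunsetsu in i..j-1 within
        # (its own position, j]; returns (log-score, arcs)
        if i >= j:
            return 0.0, ()
        best, best_arcs = -math.inf, ()
        for k in range(i + 1, j + 1):        # candidate head of bunsetsu i
            left, la = dp(i + 1, k)          # i+1..k-1 may not cross arc i->k
            right, ra = dp(k, j)             # k..j-1 attach within (k, j]
            total = math.log(score[i][k]) + left + right
            if total > best:
                best, best_arcs = total, ((i, k),) + la + ra
        return best, best_arcs
    _, arcs = dp(1, m)
    return dict(arcs)
```

For the matrix of Table 1 this returns {1: 2, 2: 6, 3: 4, 4: 6, 5: 6}, i.e. D = {2, 6, 4, 6, 6}.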
The class set for learning has binary values yes and no which delineate whether the data (the two bunstsu) has a modification relation or not. In this setting, the decision tree algorithm automatically and consecu- tively selects the significant, features for discriminat- ing modify/non-modify relations. We slightly changed C4.5 (Quinlan, 1993) pro- grams to be able to extract class frequen- cies at every node in the decision tree be- cause our task is regression rather than classi- fication. By using the class distribution, we compute the probability PDT(yeslbi, bj, f ~,..., fro) which is the Laplace estimate of empirical likeli- hood that bi modifies bj in the constructed deci- sion tree DT. Note that it. is necessary to nor- realize PDT(yes[bi, bj, f,,..., fro) to approximate P(yes[bi,bj,fx,"',fm). By considering all can- didates posterior to bi, P(yeslbi,b.i,fl,'",fm) is computed using a heulistic rule (1). It is of course reasonable to normalize class frequencies instead of the probability PoT(yeslbi, bj,, f,,..., fro). Equa- tion (1) tends to emphasize long distance dependen- cies more than is true for frequency-based normal- ization. P(yeslbi, bj, f, ,..., f.~) ~_ PDT(yeslbi, bj, fl,'", fro) (1) ~ >i m P DT(yeslbl, by, f ~ , . . . , f ,, ) Let us extend the above to use a set of decision trees. As briefly mentioned in Section 1, a number of infrequent and exceptional expressions appear in any natural language phenomena; they deteriorate the overall performance of application systems. It is also difficult for automated learning systems to detect and handle these expressions because excep- tional expressions are placed ill the same class as frequent ones. To tackle this difficulty, we gener- ate a set of decision trees by adaboost (Freund and Schapire, 1996) algorithm illustrated in Table 2. The algorithm first sets the weights to 1 for all exana- pies (2 in Table 2) and repeats the following two procedures T times (3 in Table 2). 1. A decision tree is constructed by using the cur- rent weight vector ((a) in Table 2) 2. Example data are then parsed by using the tree and the weights of correctly handled examples are reduced ((b),(c) in Table 2) 1. '2.. 3. Input: sequence of N examples < eL, u,~ > .... , < eN, .wN > in which el and wi represent an example and its weight, respectively. Initialize the weight vector wi =1 for i = 1,..., N Do for t = l,2,...,T (a) Call C4.5 providing it with the weight vector w,s and Construct a modification probability set ht (b) Let Error be a set of examples that are not. identified by lit Compute the pseudo error rate of ht: e' = E iCE .... wi/ ~ ,=INw, if et > 5' then abort loop l--e t (c) For examples correctly predicted by ht, update the weights vector to be wi = wiflt 4. Output a final probability set: hl=Zt=,T(log~)ht/Zt=,T(Iog~) Table 2: Combining Decision Trees by Ada-boost Algorithm The final probability set h I is then computed by mixing T trees according to their perfor- mance (4 in Table 2). Using h: instead of PoT(yeslbi , bj, fl,'", f,,~), in equation (1) gener- ates a boosting version of the dependency parser. 3.2 Linguistic Feature Types Used for Learning This section explains the concrete feature setting we used for learning. 
3.2 Linguistic Feature Types Used for Learning

This section explains the concrete feature setting we used for learning. The feature set mainly focuses on the two bunsetsu constituting each data item. The class set consists of binary values which delineate whether a sample (the two bunsetsu) has a modification relation or not. We use 13 features for the task, 10 directly from the 2 bunsetsu under consideration and 3 for other bunsetsu information, as summarized in Table 3. Each bunsetsu (anterior and posterior) has the 5 features No.1 to No.5 in Table 3. Features No.6 to No.8 are related to bunsetsu pairs. Both No.1 and No.2 concern the head word of the bunsetsu. No.1 takes values of frequent words or thesaurus categories (NLRI, 1964). No.2, on the other hand, takes values of part-of-speech tags. No.3 deals with bunsetsu types which consist of functional word chunks or the part-of-speech tags that dominate the bunsetsu's syntactic characteristics. No.4 and No.5 are binary features and correspond to punctuation and parentheses, respectively. No.6 represents how many bunsetsu exist between the two bunsetsu. Possible values are A(0), B(1-4) and C(>=5). No.7 deals with the post-positional particle 'wa' which greatly influences the long distance dependency of subject-verb modifications. Finally, No.8 addresses the punctuation between the two bunsetsu. The detailed values of each feature type are summarized in Table 4.

1  lexical information of head word    6  distance between two bunsetsu
2  part-of-speech of head word         7  particle 'wa' between two bunsetsu
3  type of bunsetsu                    8  punctuation between two bunsetsu
4  punctuation
5  parentheses

Table 3: Linguistic Feature Types Used for Learning

Table 4: Values for Each Feature Type. Types 1 to 3 take Japanese lexical values (frequent head words and thesaurus categories, part-of-speech tags, and bunsetsu types, respectively); types 4, 5, 7 and 8 are binary (0, 1); type 6 takes the values A(0), B(1-4), C(>=5).

4 Experimental Results

We evaluated the proposed parser using the EDR Japanese annotated corpus (EDR, 1995). The experiment consisted of two parts. One evaluated the single-tree parser and the other the boosting counterpart. In the rest of this section, parsing accuracy refers only to precision: how many of the system's outputs are correct in terms of the annotated corpus. We do not show recall because we assume every bunsetsu modifies only one posterior bunsetsu. The features used for learning were non head-word features (i.e., types 2 to 8 in Table 3). Section 4.1.4 investigates lexical information of head words such as frequent words and thesaurus categories. Before going into details of the experimental results, we summarize here how training and test data were selected.

1. After all sentences in the EDR corpus were word-segmented and part-of-speech tagged (Matsumoto and others, 1996), they were then chunked into a sequence of bunsetsu.
2. All bunsetsu pairs were compared with the EDR bracketing annotation (correct segmentations and modifications). If a sentence contained a pair inconsistent with the EDR annotation, the sentence was removed from the data.

3. All data examined (total number of sentences: 207802, total number of bunsetsu: 1790920) were divided into 20 files. The training data were the same number of first sentences of each of the 20 files, according to the training data size. Test data (10000 sentences) were the 2501st to 3000th sentences of each file.

4.1 Single Tree Experiments

In the single tree experiments, we evaluated the following 4 properties of the new dependency parser.

• Tree pruning and parsing accuracy
• Number of training data and parsing accuracy
• Significance of features other than head-word lexical information
• Significance of head-word lexical information

4.1.1 Pruning and Parsing Accuracy

Table 5 summarizes the parsing accuracy with various confidence levels of pruning. The number of training sentences was 10000.

Confidence Level  25%     50%     75%     95%
Parsing Accuracy  82.01%  83.43%  83.52%  83.35%

Table 5: Pruning Confidence Level v.s. Parsing Accuracy

In C4.5 programs, a larger value of confidence means weaker pruning and 25% is commonly used in various domains (Quinlan, 1993). Our experimental results show that 75% pruning attains the best performance, i.e. weaker pruning than usual. In the remaining single tree experiments, we used the 75% confidence level. Although strong pruning treats infrequent data as noise, parsing involves many exceptional and infrequent modifications as mentioned before. Our result means that information included in only small numbers of samples is useful for disambiguating the syntactic structure of sentences.

4.1.2 The Amount of Training Data and Parsing Accuracy

Table 6 and Figure 2 show how the number of training sentences influences parsing accuracy for the same 10000 test sentences.

Number of Training Sentences  3000    6000    10000   20000   30000   50000
Parsing Accuracy              82.07%  82.70%  83.52%  84.07%  84.27%  84.33%

Table 6: Number of Training Sentences v.s. Parsing Accuracy

Figure 2: Learning Curve of Single-Tree Parser

They illustrate the following two characteristics of the learning curve.

1. The parsing accuracy rises rapidly up to 30000 sentences and converges at around 50000 sentences.
2. The maximum parsing accuracy is 84.33% at 50000 training sentences.

We will discuss the maximum accuracy of 84.33%. Compared to recent stochastic English parsers that yield 86 to 87% accuracy (Collins, 1996; Magerman, 1995), 84.33% seems unsatisfactory at first glance. The main reason behind this lies in the difference between the two corpora used: the Penn Treebank (Marcus et al., 1993) and the EDR corpus (EDR, 1995). The Penn Treebank (Marcus et al., 1993) was also used to induce part-of-speech (POS) taggers because the corpus contains very precise and detailed POS markers as well as bracket annotations. In addition, English parsers incorporate the syntactic tags that are contained in the corpus. The EDR corpus, on the other hand, contains only coarse POS tags. We used another Japanese POS tagger (Matsumoto and others, 1996) to make use of fine-grained information for disambiguating syntactic structures. Only the bracket information in the EDR corpus was considered. We conjecture that the difference between the parsing accuracies is due to the difference of the corpus information.
4.1.3 Significance of Non Head-Word Features

We will now summarize the significance of each non head-word feature introduced in Section 3. The influence of the lexical information of head words will be discussed in the next section. Table 7 illustrates how the parsing accuracy is reduced when each feature is removed. The number of training sentences was 10000. In the table, ant and post represent the anterior and the posterior bunsetsu, respectively.

  Feature                              Accuracy Decrease
  ant POS of head                           -0.07%
  ant bunsetsu type                         +9.34%
  ant punctuation                           +1.15%
  ant parentheses                           +0.00%
  post POS of head                          +2.13%
  post bunsetsu type                        +0.52%
  post punctuation                          +1.62%
  post parentheses                          +0.00%
  distance between two bunsetsu             +5.21%
  punctuation between two bunsetsu          +0.01%
  'wa' between two bunsetsu                 +1.79%

Table 7: Decrease of Parsing Accuracy When Each Attribute Removed

Table 7 clearly demonstrates that the most significant features are the anterior bunsetsu type and the distance between the two bunsetsu. This result may partially support an often used heuristic: bunsetsu modification should be as short range as possible, provided the modification is syntactically possible. In particular, we need to concentrate on the types of bunsetsu to attain a higher level of accuracy. Most features contribute, to some extent, to the parsing performance. In our experiment, information on parentheses has no effect on the performance. The reason may be that EDR contains only a small number of parentheses. One exception in our features is the anterior POS of head. We currently hypothesize that this drop of accuracy arises from two reasons.

- In many cases, the POS of the head word can be determined from the bunsetsu type.
- Our POS tagger sometimes assigns verbs for verb-derived nouns.

4.1.4 Significance of Head-Word Lexical Information

We focused on the head-word feature by testing the following 4 lexical sources. The first and the second are the 100 and 200 most frequent words, respectively. The third and the fourth are derived from a broadly used Japanese thesaurus, Word List by Semantic Principles (NLRI, 1964). Level 1 and Level 2 classify words into 15 and 67 categories, respectively.

1. 100 most frequent words
2. 200 most frequent words
3. Word List Level 1
4. Word List Level 2

Table 8 displays the parsing accuracy when each kind of head word information was used in addition to the previous features. The number of training sentences was 10000.

  Head Word Information   100 words   200 words   Level 1   Level 2
  Parsing Accuracy          83.34%      82.68%     82.51%    81.67%

Table 8: Head Word Information v.s. Parsing Accuracy

In all cases, the performance was worse than the 83.52% attained without head word lexical information. More surprisingly, more head word information yielded worse performance. From this result, it may safely be said, at least for the Japanese language, that we cannot expect lexical information to always improve the performance. Further investigation of other thesauri and clustering (Charniak, 1997) techniques is necessary to fully understand the influence of lexical information.
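The ablation protocol behind Table 7 is simple to restate in code: retrain with one feature column held out and report the change in accuracy. The sketch below is a generic reconstruction with caller-supplied feature names, again using scikit-learn trees rather than C4.5.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ablation(X_tr, y_tr, X_te, y_te, names):
    """Accuracy decrease when each named feature column is removed."""
    base = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    base_acc = base.score(X_te, y_te)
    decrease = {}
    for i, name in enumerate(names):
        keep = [j for j in range(X_tr.shape[1]) if j != i]
        clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, keep], y_tr)
        decrease[name] = base_acc - clf.score(X_te[:, keep], y_te)
    return base_acc, decrease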
4.2 Boosting Experiments

This section reports experimental results on the boosting version of our parser. In all experiments, pruning confidence levels were set to 55%. Table 9 and Figure 3 show the parsing accuracy when the number of training examples was increased. Because the number of iterations in each data set changed between 5 and 8, we show the accuracy obtained by combining the first 5 decision trees. In Figure 3, the dotted line plots the learning curve of the single tree case (identical to Figure 2) for the reader's convenience.

  Number of Training Sentences   3000    6000   10000   20000   30000   50000
  Parsing Accuracy             83.10%  84.03%  84.44%  84.74%  84.91%  85.03%

Table 9: Number of Training Sentences v.s. Parsing Accuracy

The characteristics of the boosting version can be summarized as follows, compared to the single tree version.

- The learning curve rises more rapidly with a small number of examples. It is surprising that the boosting version with 10000 sentences performs better than the single tree version with 50000 sentences.
- The boosting version significantly outperforms the single tree counterpart for any number of sentences, although they use the same features for learning.

Next, we discuss how the number of iterations influences the parsing accuracy. Table 10 shows the parsing accuracy for various iteration numbers when 50000 sentences were used as training data.

  Parsing Accuracy   84.32%  84.93%  84.89%  84.86%  85.03%  85.01%

Table 10: Number of Iterations v.s. Parsing Accuracy (the row of iteration counts is not recoverable from the source)

The results have two characteristics.

- Parsing accuracy rose rapidly at the second iteration.
- No over-fitting to the data was seen, although the performance of each generated tree fell to around 30% at the final stage of iteration.
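For readers unfamiliar with the boosting procedure, the following is a compact AdaBoost.M1-style loop in the spirit of Freund and Schapire (1996): training examples that a tree misclassifies are up-weighted before the next tree is grown, and the trees vote with weights derived from their errors. This generic sketch is not the authors' exact procedure.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost(X, y, rounds=5):                 # y in {0, 1}
    n = len(y)
    w = np.full(n, 1.0 / n)                   # uniform initial weights
    trees, alphas = [], []
    for _ in range(rounds):
        t = DecisionTreeClassifier(max_depth=8).fit(X, y, sample_weight=w)
        pred = t.predict(X)
        err = np.dot(w, pred != y)            # weighted training error
        if err == 0 or err >= 0.5:
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(alpha * (2 * (pred != y) - 1))  # up-weight mistakes
        w /= w.sum()
        trees.append(t)
        alphas.append(alpha)
    return trees, alphas

def predict(trees, alphas, X):
    # weighted vote of the trees, each voting in {-1, +1}
    votes = sum(a * (2 * t.predict(X) - 1) for t, a in zip(trees, alphas))
    return (votes > 0).astype(int)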
5 Conclusion

We have described a new Japanese dependency parser that uses decision trees. First, we introduced the single tree parser to clarify the basic characteristics of our method. The experimental results show that it outperforms conventional stochastic parsers by 4%. Next, the boosting version of our parser was introduced. The promising results of the boosting parser can be summarized as follows.

- The boosting version outperforms the single-tree counterpart regardless of training data amount.
- No data over-fitting was seen when the number of iterations changed.

We now plan to continue our research in two directions. One is to make our parser available to a broad range of researchers and to use their feedback to revise the features for learning. Second, we will apply our method to other languages, say English. Although we have focused on the Japanese language, it is straightforward to modify our parser to work with other languages.

[Figure 3: Learning Curve of Boosting Parser (parsing accuracy plotted against the number of training data, with the single-tree curve shown for comparison).]

References

Eugene Charniak. 1993. Statistical Language Learning. The MIT Press.

Eugene Charniak. 1997. Statistical Parsing with a Context-free Grammar and Word Statistics. In Proc. 15th National Conference on Artificial Intelligence, pages 598-603.

Michael Collins. 1996. A New Statistical Parser based on bigram lexical dependencies. In Proc. 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191.

Japan Electronic Dictionary Research Institute Ltd. EDR. 1995. The EDR Electronic Dictionary Technical Guide.

Yoav Freund and Robert Schapire. 1996. A decision-theoretic generalization of on-line learning and an application to boosting.

M. Fujio and Y. Matsumoto. 1997. Japanese dependency structure analysis based on statistics. In SIGNL NL117-12, pages 83-90. (in Japanese).

David M. Magerman. 1995. Statistical Decision-Tree Models for Parsing. In Proc. 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283.

Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, June.

Y. Matsumoto et al. 1996. Japanese Morphological Analyzer ChaSen 2.0 User's Manual.

NLRI. 1964. Word List by Semantic Principles. Syuei Syuppan. (in Japanese).

J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.
Integrating Text Plans for Conciseness and Coherence*

Terrence Harvey and Sandra Carberry
Department of Computer Science
University of Delaware
Newark, DE 19716
{harvey, carberry}@cis.udel.edu

Abstract

Our experience with a critiquing system shows that when the system detects problems with the user's performance, multiple critiques are often produced. Analysis of a corpus of actual critiques revealed that even though each individual critique is concise and coherent, the set of critiques as a whole may exhibit several problems that detract from conciseness and coherence, and consequently assimilation. Thus a text planner was needed that could integrate the text plans for individual communicative goals to produce an overall text plan representing a concise, coherent message.

This paper presents our general rule-based system for accomplishing this task. The system takes as input a set of individual text plans represented as RST-style trees, and produces a smaller set of more complex trees representing integrated messages that still achieve the multiple communicative goals of the individual text plans. Domain-independent rules are used to capture strategies across domains, while the facility for addition of domain-dependent rules enables the system to be tuned to the requirements of a particular domain. The system has been tested on a corpus of critiques in the domain of trauma care.

1 Overview

Many natural language systems have been developed to generate coherent text plans (Moore and Paris, 1993; Hovy, 1991; Wanner and Hovy, 1996; Zukerman and McConachy, 1995). However, none has the ability to take a set of independently generated yet inter-related text plans and produce integrated plans that realize all of the communicative goals in a concise and coherent manner.

RTPI (Rule-based Text Plan Integrator) was designed to perform this task. The need for coherence requires that the system be able to identify and resolve conflict across multiple, independent text plans, and exploit relations between communicative goals. Conciseness requires the ability to aggregate and subsume communicative goals. Although our work was motivated by the need to produce coherent, integrated messages from the individual critiques produced by a decision support system for emergency center trauma care, this same task will arise in future systems as they make use of independent modules that need to communicate with a user. Thus the system should have simple, domain-independent rules, but should also be flexible enough to allow the addition of rules specific to the domain at hand.

This paper describes RTPI and our initial implementation that works with the kinds of text plans representative of a critiquing system. While our examples are taken from the domain of trauma care, the domain-independent rules make the system applicable to other domains of critiquing and instruction as well. The motivation behind RTPI is presented in Section 2, and Section 3 contrasts it with other work. Then we describe the system's parameters that allow flexible response in multiple environments (Section 4). The heart of the system is RTPI's domain-independent rule base (Section 5) for integrating text plans. The implemented algorithm and the results of its application are presented last.

* This work was supported by the National Library of Medicine under grant R01-LM-05764-01. We thank Bonnie Webber and John Clarke for their suggestions and advice during the course of this research.
2 Motivation

TraumAID (Webber et al., 1992) is a decision support system for addressing the initial definitive management of multiple trauma. TraumaTIQ (Gertner and Webber, 1996) is a module that infers a physician's plan for managing patient care, compares it to TraumAID's plan, and critiques significant differences between them.

TraumaTIQ recognizes four classes of differences: errors of omission, errors of commission, scheduling errors, and procedure choice errors. Experimentation with TraumaTIQ showed that when the physician's plan is deficient, several problems are generally detected, and thus multiple critiques are independently produced.

We analyzed 5361 individual critiques comprising 753 critique sets produced by TraumaTIQ on actual cases of trauma care. A critique set represents the critiques that are produced at a particular point in a case. While each critique was coherent and concise in isolation, we found several problems within critique sets: some critiques detracted from others in the critique set; some would make more sense if they took explicit account of other critiques appearing earlier in the set; and there was informational overlap among critiques.

Our analysis revealed 22 common patterns of inter-related critiques, each pattern covering some subset of a critique set. While we initially developed a domain-dependent system, TraumaGEN, that operated directly on the logical form of the critiques produced by TraumaTIQ, we noted that many of the patterns were more generally applicable, and that the problems we were addressing would also arise in other sophisticated systems that distribute their processing across multiple independent modules, each of which may need to communicate with the user. While such systems could be designed to try to prevent problems of this kind from arising, the result would be less modular, more complex, and more difficult to extend.

Thus we developed RTPI, a system for constructing a set of integrated RST-style text plans from a set of individual text plans. RTPI contains a set of domain-independent rules, along with adjustable parameters that determine when and how rules are invoked. In addition, RTPI allows the addition of domain-dependent rules, so the system can account for interactions and strategies particular to a domain.

3 Other Work

The idea of domain-independent text planning rules is not new. Appelt (1985) used "interactions typical of linguistic actions" to design critics for action subsumption in KAMP. REVISOR (Callaway and Lester, 1997) used domain-independent operators for revision of a text plan for explanation. Because our rules operate on full RST-style text plans that include communicative goals, the rules can be designed to integrate the text plans in ways that still satisfy those goals.

The Sentence Planner (Wanner and Hovy, 1996) uses rules to refine a single initial tree representation. In contrast, RTPI operates on sets of complete, independent text plan trees. And while REVISOR handles clause aggregation, and Sentence Planner removes redundancies by aggregating neighboring expressions, neither of them addresses the aggregation of communicative goals (often requiring reorganization), the revision and integration of text plans to remove conflict, or the exploiting of relations between communicative goals as done by RTPI.
Similarly, WISHFUL (Zukerman and McConachy, 1995) includes an optimization phase during which it chooses the optimal way to achieve a set of related communicative goals. However, the system can choose to eliminate propositions and does not have to deal with potential conflict within the information to be conveyed.

TraumaTIQ critiques:

Caution: check for medication allergies and do a laparotomy immediately to treat the intra-abdominal injury.

Consider checking for medication allergies now to treat a possible GI tract injury.

Please remember to check for medication allergies before you give antibiotics.

Message from RTPI integrated plan:

Caution: check for medication allergies to treat the intra-abdominal injury and a possible GI tract injury, and do it before giving antibiotics. Then do a laparotomy to complete treating the intra-abdominal injury.

Figure 1: Result of communicative goal aggregation.

4 System Parameters

Although RTPI's rules are intended to be domain-independent, environmental factors such as the purpose of the messages and the social role of the system affect how individual text plans should be integrated. For example, if the system's purpose is to provide directions for performing a task, then an ordered set of actions will be acceptable; in contrast, if the system's purpose is decision support, with the user retaining responsibility for the selected actions, then a better organization will be one in which actions are grouped in terms of the objectives they achieve (see Section 5.1.1). Similarly, in some environments it might be reasonable to resolve conflict by omitting communicative goals that conflict with the system's action recommendations, while in other environments such omission is undesirable (see Section 5.1.2).

RTPI has a set of system parameters that capture these environmental factors. These parameters affect what rules are applied, and in some cases how they are applied. They allow characteristics of the output text plans to be tailored to broad classes of domains, giving the system the flexibility to be effective over a wide range of problems.

5 The Rule-Base

RTPI's input consists of a set of text plans, each of which has a top-level communicative goal. Rhetorical Structure Theory (Mann and Thompson, 1987) posits that a coherent text plan consists of segments related to one another by rhetorical relations such as MOTIVATION or BACKGROUND. Each text plan presented to RTPI is a tree structure in which individual nodes are related by RST-style relations. The top-level communicative goal for each text plan is expressed as an intended effect on the user's mental state (Moore, 1995), such as (GOAL USER (DO ACTION27)). The kinds of goals that RTPI handles are typical of critiquing systems, systems that provide instructions for performing a task, etc. These goals may consist of getting the user to perform actions, refrain from performing actions, use an alternate method to achieve a goal, or recognize the temporal constraints on actions.

Rules are defined in terms of tree specifications and operators, and are stylistically similar to the kinds of rules proposed in (Wanner and Hovy, 1996). When all the tree specifications are matched, the score function of the rule is evaluated. The score function is a heuristic specific to each rule, and is used to determine which rule instantiation has the best potential text realization. Scores for aggregation rules, for example, measure the opportunity to reduce repetition through aggregation, subsumption, or pronominal reference, and penalize for paragraph complexity.
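Read schematically, an RTPI rule can be pictured as a set of tree specifications, a score function, and an operator, as in the sketch below. This is our rendering of the description above, not the system's actual code; the PlanNode fields are simplified.

from dataclasses import dataclass, field
from itertools import permutations
from typing import Callable, List

@dataclass
class PlanNode:                 # one node of an RST-style text plan
    goal: str                   # e.g. "(GOAL USER (DO ACTION27))"
    relation: str = ""          # e.g. "MOTIVATION", "SEQUENCE"
    children: List["PlanNode"] = field(default_factory=list)

@dataclass
class Rule:
    name: str
    specs: List[Callable[[PlanNode], bool]]         # one tree spec per input plan
    score: Callable[[List[PlanNode]], float]        # heuristic over a match
    operator: Callable[[List[PlanNode]], PlanNode]  # builds the integrated plan

def match(rule: Rule, plans: List[PlanNode]):
    """Yield every ordered combination of distinct plans satisfying the specs."""
    for combo in permutations(plans, len(rule.specs)):
        if all(spec(p) for spec, p in zip(rule.specs, combo)):
            yield list(combo)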
Once a rule instantiation is chosen, the system performs any substitutions, pruning, and moving of branches specified by the rule's operators. The rules currently in use operate on text plan trees in a pairwise fashion, and recursively add more text plans to larger, already integrated plans.

5.1 Classes of Rules

RTPI has three classes of rules, all of which produce an integrated text plan from separate text plans. The classes of rules correlate with the three categories of problems that we identified from our analysis of TraumaTIQ's critiques, namely, the need to: 1) aggregate communicative goals to achieve more succinct text plans; 2) resolve conflict among text plans; and 3) exploit the relationships between communicative goals to enhance coherence.

5.1.1 Aggregation

Our analysis of TraumaTIQ's output showed that one prevalent problem was informational overlap, i.e. the same actions and objectives often appeared as part of several different input text plans, and thus the resulting messages appear repetitious. Aggregation of the communicative goals associated with these actions and objectives allows RTPI to make the message more concise.
Aggregation of overlapping communicative goals is not usually straightforward, however, and often requires substantial reorganizing of the trees. Our approach was to draw on the ordered, multi-nuclear SEQUENCE relation of RST. We posited that separate plans with overlapping communicative goals could often be reorganized as a sequence of communicative goals in a single plan. The recommended actions can be distributed over the sequentially related goals as long as the new plan captures the relationships between the actions and their motivations given in the original plans.

For example, one complex class of aggregation is the integration of text plans that have overlapping actions or objectives, but also contain actions and objectives that do not overlap. When those that overlap can be placed together as part of a valid sequence, a multi-part message can be generated. RTPI produces an integrated text plan comprised of sequentially related segments, with the middle segment conveying the shared actions and their collected motivations. The other segments convey the actions that temporally precede or follow the shared actions, and are also presented with their motivations. For example (Fig. 2), suppose that one text plan has the goal of getting the user to perform actions A0, A2, and A3 to achieve G1, while a second text plan has a goal of getting the user to perform A1, A2, A3, and A4 to achieve G2. Figure 3 presents the text plan resulting from the application of this rule. Realization of this text plan in English produces the message:

Do A0 as part of G1, and A1 as part of G2. Next do A2 and A3 to address both of these goals. Then do A4 to complete G2.

[Figure 2: Input to RTPI (see Figure 3): two RST-style text plans, one recommending A0, A2 and A3 with the motivation G1, the other recommending A1, A2, A3 and A4 with the motivation G2. The tree details are garbled in the source.]

[Figure 3: Result of a complex aggregation rule (see Figure 2): a single plan whose top-level goals are related by SEQUENCE, conveying A0 and A1 with their respective motivations, then the shared A2 and A3 with the collected motivations, then A4. The tree details are garbled in the source.]

This kind of aggregation is especially appropriate in a domain (such as trauma care) where the clause re-ordering normally applied to enable aggregation (e.g. Sentence Planner) is restricted by the partial ordering of sequenced instructions.

RTPI can also handle aggregation when actions or objectives are shared between different kinds of communicative goals. The bottom part of Figure 1 is the text realized from a text plan that was produced by the application of two rules to three initial text plans: one rule that applies to trees of the same form, and one that applies to two distinct forms. The first rule aggregates the communicative goal (GOAL USER (DO USER check_med_allergies)) that exists in two of the text plans. The second rule looks for overlap between the communicative goal of getting the user to do an action and the goal of having the user recognize a temporal constraint on actions. The application of these two rules to the text plans of the three initial messages shown in the top part of Figure 1 creates the integrated text plan shown in Figure 4, whose English realization appears in the bottom part of Figure 1.

RTPI's parameter settings capture aspects of the environment in which the messages will be generated that will affect the kind of aggregation that is most appropriate. The settings for aggregation determine whether RTPI emphasizes actions or objectives. In the latter case (appropriate in the trauma decision-support environment), an arbitrary limit of three is placed on the number of sequentially related segments in a multi-part message, though each segment can still address multiple goals. This allows the reorganization of communicative goals to enable aggregation while maintaining focus on objectives.
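The sequencing step of the complex aggregation rule can be pictured as partitioning the actions of the two plans into the three SEQUENCE segments. The sketch below reproduces the A0-A4 example; the explicit time-slot map is an invented stand-in for the temporal constraints RTPI actually consults.

def sequence_segments(plan1, plan2, slot):
    """Partition the actions of two overlapping plans into the three
    SEQUENCE segments; slot maps each action to its time slot."""
    shared = [a for a in plan1 if a in set(plan2)]
    first = min(slot[a] for a in shared)
    rest = [a for a in plan1 + plan2 if a not in shared]
    before = [a for a in rest if slot[a] < first]
    after = [a for a in rest if slot[a] >= first]
    return before, shared, after

# The example of Figures 2 and 3:
print(sequence_segments(
    ["A0", "A2", "A3"], ["A1", "A2", "A3", "A4"],
    {"A0": 0, "A1": 0, "A2": 1, "A3": 1, "A4": 2}))
# -> (['A0', 'A1'], ['A2', 'A3'], ['A4'])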
5.1.2 Resolving Conflict

The ability to recognize and resolve conflict is required in a text planner because both the appearance and resolution of conflict can be the result of text structure. RTPI identifies and resolves a class of domain-independent conflict, with the resolution strategies dependent upon the social relationship between the user and the system. In addition, the system allows the user to add rules for domain-specific classes of conflict.

One class of conflict that can best be resolved at the text planning level results from implicit messages in text. Resolving conflict of this kind within independent modules of a critiquing system would require sharing extensive knowledge, thereby violating modularity concepts and making the planning process much more complex.

For example, suppose that the user has conveyed an intention to achieve a particular objective by performing act Au. One system module might post the communicative goal of getting the user to recognize that act Ap must precede Au, while a different module posts the goal of getting the user to achieve the objective by executing As instead of Au. While each of these communicative goals might be well-motivated and coherent in isolation, together they are incoherent, since the first presumes that Au will be executed, while the second recommends retracting the intention to perform Au. A text planner with access to both of these top-level communicative goals and their text plans can recognize this implicit conflict and revise and integrate the text plans to resolve it.

There are many ways to unambiguously resolve this class of implicit conflict. Strategy selection depends on the social relationship between the system and the user, as captured by three of RTPI's parameter settings. This relationship is defined by the relative levels of knowledge, expertise, and responsibility of the system and user. Three strategies used by our system, and their motivations, are:

I. Discard communicative goals that implicitly conflict with a system recommendation. In the above example, this would result in a text plan that only recommends doing As instead of Au. This strategy would be appropriate if the system is an expert in the domain, has full knowledge of the current situation, and is the sole arbiter of correct performance.

II. Integrate the text plan that implicitly conflicts with the system recommendation as a concession that the user may choose not to accept the recommendation. This strategy is appropriate if the system is an expert in the domain, but the user has better knowledge of the current situation and/or retains responsibility for selecting the best plan of action. Decision support is such an environment. The top half of Figure 6 presents two TraumaTIQ critiques that exhibit implicit conflict, while the bottom part presents the English realization of the integrated text plan, which uses a CONCESSION relation to achieve coherence.

III. Present the system recommendation as an alternative to the user plan. This may be appropriate if the parameters indicate the user has more complete knowledge and more expertise.

[Figure 4: Result of two rules applied to the input shown in Fig. 5. First, a rule that applies to trees with top level goals of the form (GOAL USER (DO ...)) uses two trees from Fig. 5 to make a tree with two subtrees (1) and (2). Next, a rule that places scheduling trees ((GOAL U (KNOW U (IN-ORDER ...)))) with related goals inserts a third subtree (3), in this case the entire scheduling tree. A domain-specific realizer traverses the tree and inserts cue words and conjunctions based on relations. The tree itself is garbled in the source.]

[Figure 5: Input to RTPI (see Figure 4): three text plans, one recommending A0 and A2 as part of G1, one recommending A0, and a scheduling plan conveying that A0 must be done before A1, with supporting reason R1. The tree details are garbled in the source.]
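A toy dispatch from the social-relationship parameters to the three strategies might look as follows; the boolean parameter names are invented for illustration, since the paper does not spell out the exact parameter encoding.

def conflict_strategy(system_is_expert, user_knows_situation,
                      user_has_responsibility):
    # Strategy I: the system is the sole arbiter of correct performance.
    if system_is_expert and not user_knows_situation and not user_has_responsibility:
        return "I: discard the conflicting communicative goal"
    # Strategy II: expert system, but the user decides (decision support).
    if system_is_expert:
        return "II: integrate the conflicting goal as a concession"
    # Strategy III: the user knows more than the system.
    return "III: present the recommendation as an alternative"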
5.1.3 Exploiting Related Goals

Occasionally two text plans may exhibit no conflict, yet the relationships between their communicative goals can be exploited to produce more coherent text. For example, consider the following two individual critiques produced by TraumaTIQ:

Caution: do a peritoneal lavage immediately as part of ruling out abdominal bleeding.

Do not reassess the patient in 6 to 24 hours until after doing a peritoneal lavage. The outcome of the latter may affect the need to do the former.
Our solution to this problem is imple- mented in the text realization templates, where we (1) make the focused action the subject of the sentence, reflecting its given status in the discourse, (2)utilize clue words to call atten- tion to its occurrence earlier in the message and to the new information being conveyed, and (3) subordinate other concepts presented with the focused concept by placing them in a phrase in- troduced by the cue words "along with". In one such example from the trauma domain, the main text plan contains the communicative goal of getting the user to perform several actions, including a laparotomy. A SEQUENCE relation is used to adjoin an overlapping text plan as a trailing comment, and this additional com- municative goal is realized in English as (clue words underlined): Figure 6: Conflict resolution. Moreover., doing the laparotomy is also indi- cated, along with repairing the left diaphragm, to treat the lacerated left diaphragm. 6 Algorithm RTPI performs rule-based integration of a set of RST-style trees. Rules are applied in an or- der designed to maximize derived benefit. The system first applies the rules that resolve con- flict, since we hypothesize that the presence of conflict will most seriously hamper assimilation of a message. Next, the rules that exploit rela- tions between text plans are tried because they enhance coherence by explicitly connecting dif- ferent communicative goals. Then the aggrega- tion rules are applied to improve conciseness. Finally, the rules for trailing comments reduce the number of disconnected message units. The algorithm is both greedy and anytime (Garvey and Lesser, 1994); it takes the best re- sult from a single application of a rule to a set of text plans, and then attempts to further apply rules to the modified set. The rule instantiation with the highest heuristic score is chosen and the rule's operator is applied to the trees using those bindings. Since the rules are designed to apply incrementally to a set, every application of a rule results in an improvement in the con- ciseness or coherence of the tree set, and the tree set is always a viable set of text plans. The user can thus set a time limit for processing of a tree set, and the algorithm can return an im- proved set at any time. In practice, however, the processing has never taken more than 1-2 seconds, even for large (25 plans) input sets. 517 7 Results We tested RTPI using the corpus of critiques generated by TraumaTIQ. A set of critiques was extracted from the middle of each of 48 trauma cases, and RST-style text plans were automati- cally generated for all the critiques. Then RTPI ran each set, and messages resulting from a template-based realization of RTPTs text plans were analyzed for conciseness and coherence. We are currently using templates for sentence realization since we have been working in the domain of trauma care, where fast real-time re- sponse is essential. There was a 18% reduction in the aver- age number of individual text plans in the 48 sets examined. The results for individual sets ranged from no integration in cases where all of the text plans were independent of one another, to a 60% reduction in sets that were heavily inter-related. More concise messages also re- sulted from a 12% reduction in the number of references to the diagnostic and therapeutic ac- tions and objectives that are the subject of this domain. 
The new text plans also allowed some references to be replaced by pronouns during realization, making the messages shorter and more natural. To evaluate coherence, messages from twelve cases 1 were presented, in randomly ordered blind pairs, to three human subjects not affili- ated with our project. The written instructions given to the subjects instructed them to note whether one set of messages was more compre- hensible, and if so, to note why. Two subjects preferred the new messages in 11 of 12 cases, and one subject preferred them in all cases. All subjects strongly preferred the messages pro- duced from the integrated text plan 69% of the time. 8 Summary Integration of multiple text plans is a task that will become increasingly necessary as indepen- dent modules of sophisticated systems are re- quired to communicate with a user. This pa- per has presented our rule-based system, RTPI, for accomplishing this task. RTPI aggregates communicative goals to achieve more succinct text plans, resolves conflict among text plans, and exploits the relations between communica- tive goals to enhance coherence. RTPI successfully integrated multiple text plans to improve conciseness and coherence in the trauma care domain. We will fur- ther explore the application of RTPTs domain- independent rules by applying the system to a 1The evaluation examples consisted of the first eleven instances from the test set where RTPI produced new text plans, plus the first example of conflict in the test set. different domain. We would also like to develop more domain-independent and some domain- dependent rules, and compare the fundamental characteristics of each. References Douglas E. Appelt. 1985. Planning english referring expressions. Artificial Intelligence, 26(1):1-33. Charles B. Callaway and James C. Lester. 1997. Dynamically improving explanations: A revision-based approach to explanation generation. In Proceedings of the 15th Inter- national Joint Conference on Artificial Intel- ligence, Nagoya, Japan, August. IJCAI. Alan Garvey and Victor Lesser. 1994. A survey of research in deliberative real-time artificial intelligence. The Journal of Real-Time Sys- tems, 6. A. Gertner and B. L. Webber. 1996. A Bias To- wards Relevance: Recognizing Plans Where Goal Minimization Fails. In Proceedings of the Thirteenth National Conference on Arti- ficial Intelligence, Portland, OR. Eduard Hovy. 1991. Approaches to the plan- ning of coherent text. In Natural Lan- guage Generation in Artificial Intelligence and Computational Linguistics, pages 153- 198. Kluwer. William C. Mann and Sandra A. Thompson. 1987. Rhetorical structure theory: A the- ory of text organization. Technical Report ISI/RS-87-190, ISI/USC, June. Kathleen R. McKeown. 1985. Text Gener- ation. Cambridge University Press, Cam- bridge, New York. Johanna Moore and Cecile Paris. 1993. Plan- ning text for advisory dialogues: Capturing intentional and rhetorical information. Com- putational Linguistics, 19(4):651-695. Johanna D. Moore, 1995. Participating in Ex- planatory Dialogues, chapter 3. MIT Press. Leo Wanner and Eduard Hovy. 1996. The HealthDoc sentence planner. In Proceedings of the International Workshop on Natural Language Generation, pages 1-10. Bonnie L. Webber, Ron Rymon, and John R. Clarke. 1992. Flexible support for trauma management through goal-directed reason- ing and planning. Artificial Intelligence in Medicine, 4:145-163. Ingrid Zukerman and Richard McConachy. 1995. 
7 Results

We tested RTPI using the corpus of critiques generated by TraumaTIQ. A set of critiques was extracted from the middle of each of 48 trauma cases, and RST-style text plans were automatically generated for all the critiques. Then RTPI ran each set, and messages resulting from a template-based realization of RTPI's text plans were analyzed for conciseness and coherence. We are currently using templates for sentence realization since we have been working in the domain of trauma care, where fast real-time response is essential.

There was an 18% reduction in the average number of individual text plans in the 48 sets examined. The results for individual sets ranged from no integration, in cases where all of the text plans were independent of one another, to a 60% reduction in sets that were heavily inter-related. More concise messages also resulted from a 12% reduction in the number of references to the diagnostic and therapeutic actions and objectives that are the subject of this domain. The new text plans also allowed some references to be replaced by pronouns during realization, making the messages shorter and more natural.

To evaluate coherence, messages from twelve cases(1) were presented, in randomly ordered blind pairs, to three human subjects not affiliated with our project. The written instructions given to the subjects instructed them to note whether one set of messages was more comprehensible, and if so, to note why. Two subjects preferred the new messages in 11 of 12 cases, and one subject preferred them in all cases. All subjects strongly preferred the messages produced from the integrated text plan 69% of the time.

(1) The evaluation examples consisted of the first eleven instances from the test set where RTPI produced new text plans, plus the first example of conflict in the test set.

8 Summary

Integration of multiple text plans is a task that will become increasingly necessary as independent modules of sophisticated systems are required to communicate with a user. This paper has presented our rule-based system, RTPI, for accomplishing this task. RTPI aggregates communicative goals to achieve more succinct text plans, resolves conflict among text plans, and exploits the relations between communicative goals to enhance coherence.

RTPI successfully integrated multiple text plans to improve conciseness and coherence in the trauma care domain. We will further explore the application of RTPI's domain-independent rules by applying the system to a different domain. We would also like to develop more domain-independent and some domain-dependent rules, and compare the fundamental characteristics of each.

References

Douglas E. Appelt. 1985. Planning English referring expressions. Artificial Intelligence, 26(1):1-33.

Charles B. Callaway and James C. Lester. 1997. Dynamically improving explanations: A revision-based approach to explanation generation. In Proceedings of the 15th International Joint Conference on Artificial Intelligence, Nagoya, Japan, August. IJCAI.

Alan Garvey and Victor Lesser. 1994. A survey of research in deliberative real-time artificial intelligence. The Journal of Real-Time Systems, 6.

A. Gertner and B. L. Webber. 1996. A Bias Towards Relevance: Recognizing Plans Where Goal Minimization Fails. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Portland, OR.

Eduard Hovy. 1991. Approaches to the planning of coherent text. In Natural Language Generation in Artificial Intelligence and Computational Linguistics, pages 153-198. Kluwer.

William C. Mann and Sandra A. Thompson. 1987. Rhetorical structure theory: A theory of text organization. Technical Report ISI/RS-87-190, ISI/USC, June.

Kathleen R. McKeown. 1985. Text Generation. Cambridge University Press, Cambridge, New York.

Johanna Moore and Cecile Paris. 1993. Planning text for advisory dialogues: Capturing intentional and rhetorical information. Computational Linguistics, 19(4):651-695.

Johanna D. Moore. 1995. Participating in Explanatory Dialogues, chapter 3. MIT Press.

Leo Wanner and Eduard Hovy. 1996. The HealthDoc sentence planner. In Proceedings of the International Workshop on Natural Language Generation, pages 1-10.

Bonnie L. Webber, Ron Rymon, and John R. Clarke. 1992. Flexible support for trauma management through goal-directed reasoning and planning. Artificial Intelligence in Medicine, 4:145-163.

Ingrid Zukerman and Richard McConachy. 1995. Generating discourse across several user models: Maximizing belief while avoiding boredom and overload. In Proceedings of the International Joint Conference on Artificial Intelligence, pages 1251-1257.
Definiteness Predictions for Japanese Noun Phrases*

Julia E. Heine
Computerlinguistik
Universität des Saarlandes
66041 Saarbrücken
Germany
heine@coli.uni-sb.de

Abstract

One of the major problems when translating from Japanese into a European language such as German or English is to determine the definiteness of noun phrases in order to choose the correct determiner in the target language. Even though in Japanese, noun phrase reference is said to depend in large parts on the discourse context, we show that in many cases there also exist linguistic markers for definiteness. We use these to build a rule hierarchy that predicts 79.5% of the articles with an accuracy of 98.9% from syntactic-semantic properties alone, yielding an efficient pre-processing tool for the computationally expensive context checking.

1 Introduction

One of the major problems when translating from Japanese into a European language such as German or English is the insertion of articles. Both German and English distinguish between the definite and indefinite article, the former, in general, indicating some degree of familiarity with the referent, the latter referring to something new. Thus by using a definite article, the speaker expects the hearer to be able to identify the object he is talking about, whilst with the use of an indefinite article, a new referent is introduced into the discourse context (Heim, 1982).

In contrast, the reference of Japanese noun phrases depends in large parts on the discourse context, taking a previous mention of an object and all properties that can be inferred from it, as well as world knowledge, as indicators for definite reference. Any noun phrase whose referent cannot be recovered from the discourse context will in turn be taken as indefinite. However, noun phrases can also be explicitly marked for definiteness, forcing an interpretation of the referent independent of the discourse context. In this way, it is possible to trigger accommodation of previously unknown specific referents, or to get an indefinite reading even if an object of the same type has already been introduced.

For machine translation, it is important to find a systematic way of extracting the syntactic and semantic information responsible for marking the reference of noun phrases, in order to correctly choose the articles to be used in the target language.

For this paper, we propose a rule hierarchy for this purpose, that can be used as a pre-processing tool to context checking. All noun phrases marked for definiteness in any way are assigned their referential property, leaving the others underspecified.

After giving a short outline of related work in the next section, we will introduce our rule hierarchy in section 3. The resulting algorithm will be evaluated in section 4, and in section 5 we will address implementational issues. Finally, in section 6 we give a conclusion.

* I would like to thank my colleagues Johan Bos, Björn Gambäck, Yoshiki Mori, Michael Paul, Manfred Pinkal, C.J. Rupp, Atsuko Shimada, Kristina Striegnitz and Karsten Worm for their valuable comments and support. This research was supported by the German Ministry of Education, Science, Research and Technology (BMBF) within the Verbmobil framework under grant no. 01 IV 701 R4.

2 Related Work

The problem of article selection when translating from Japanese into any language requiring the use of articles has only been addressed systematically by a few authors.
(Murata and Nagao, 1993) define a heuristic rule base for definiteness assignment, consisting of 86 weighted rules. These rules use surface information in a sentence to estimate the referential property of each noun. During processing, each applicable rule assigns confidence weights to the three possible referential properties 'definite', 'indefinite' and 'generic'. These values are added up for each property, and the one with the highest score will be assigned to the noun in question. If no rule applies, the default value is 'indefinite'. This approach assigns the correct value in 85.5% of the cases when used with the training data, and 68.9% with unseen data.

(Bond et al., 1995) show how the percentage of noun phrases generated with correct use of articles and number in a Japanese-to-English machine translation system can be increased by applying heuristic rules to distinguish between 'generic', 'referential' and 'ascriptive' uses of noun phrases. These rules are ordered in a hierarchical manner, with later rules over-ruling earlier ones. In addition, for each noun phrase use there are specific rules, based on linguistic information, that assign definiteness to the noun phrases. Overall, in their system, insertion of the correct article can be improved by 12%, yielding a correctness level of 77%.

In contrast to these approaches relying on monolingual indicators alone, (Siegel, 1996) proposes to assign definiteness during the transfer process. In a first stage, all lexically defined definiteness attributes are assigned. To all cases not covered by this, a set of preference rules is applied, if their translation equivalent in the target language is a noun. In addition to linguistic indicators from both the source and target language, the rules also take a stack of referents mentioned previously in the discourse into account. This combined approach is very successful, assigning the correct definiteness attributes to 98% of all relevant noun phrases in the training data.

In the approach described in the next section, we have taken up the idea of using both linguistic and contextual information for the assignment of definiteness attributes to Japanese noun phrases. However, instead of using merely a rule base, we propose a monotone algorithm based on a linguistic rule hierarchy followed by a context checking mechanism.

3 The Rule Hierarchy

The rule hierarchy we introduce in this paper has been devised from a systematic survey of some data from a Japanese corpus consisting of appointment scheduling dialogues.(1) Since dialogues in this domain tend to be short, on average consisting of just 14 utterances, most definite references have to be introduced by way of accommodation rather than referring back to the discourse context. Moreover, references to events have a particular tendency to be non-specific, i.e. stating their existence rather than explicating their identity. Non-specific references are by definition indefinite, whether the referent has been previously introduced to the context or not.

Neither accommodation nor non-specific reference can be realized without linguistic indicators, since they would otherwise interfere with the context-based distinction between definite and indefinite reference within a discourse. The appointment scheduling domain is therefore ideal for a case study aimed at extracting linguistic indicators for definiteness.

(1) In this survey, all the noun phrases from 10 dialogues were analyzed in detail, determining the regularities that led to definiteness predictions. These were then formulated into a set of rules and arranged in a hierarchical manner to rule out wrong predictions. A more detailed description of the methods used and a full list of the rules can be found in (Heine, 1997).

3.1 Overview

Explicit marking for definiteness takes place on several syntactic levels, namely on the noun itself, within the noun phrase, through counting expressions, or on the sentence level. For each of these syntactic levels, a set of rules can be defined by generalizing over the linguistic indicators that are responsible for the definiteness attributes carried by the noun phrases in the corpus. Each of these rules consists of one or more preconditions, and a consequent that assigns the associated definiteness attribute to the respective noun phrase when the preconditions are met.

As it turns out, none of the rules defined on the same syntactic level interfere with each other, since they either assign the same value, or their preconditions cannot possibly be met at the same time. Thus the rules can be grouped together into classes corresponding to the four syntactic levels they are defined on. There is a clear hierarchy between the four classes, with all rules of one class given priority over all rules on a lower level, as shown in Figure 1. Note that even though the rule classes are defined in terms of syntactic levels, the sequence of rule classes in our hierarchy does not correspond in any way to syntactic structure.

[Figure 1: Definiteness Algorithm. A nominal phrase is passed in turn through the noun rules, the clausal rules, the NP rules, and the counting expressions; the first rule class that applies assigns a definiteness attribute, otherwise the phrase goes on to context checking, which yields 'definite' if an antecedent is found and otherwise the default value 'indefinite'.]

3.2 Noun rules

On the noun level, the lexical properties of the noun or one of its direct modifiers can determine the reference of the noun in question.

There are a number of nouns that can be marked as definite on their lexical properties alone, either because they refer to a unique referent in the universe of discourse, or because they carry some sort of indexical implications. The referent is thus described uniquely with respect to some implicitly mentioned context. For example, there exist a number of nouns that implicitly relate the referent with either the hearer or the speaker, depending on the presence or absence of honorifics(2), respectively. In the appointment scheduling domain, the most frequently used words of this class are (go)yotei (your/my schedule), (o)kangae (your/my opinion) and (go)tsugoo (for you/me).

(2) In Japanese, there are two honorific prefixes, go and o, that can be used to politely refer to things related to the hearer. However, there are no such prefixes to humbly refer to things relating to oneself.

Indexical time expressions like konshuu (this week) or raigatsu (next month) refer to a specific period of time that stands in a certain relation to the time of utterance. Even though they do not necessarily have to stand with an article in the target language, the reference is still definite, as in the following example:

(1) raishuu desu ne
    next week to be isn't it
    'That is (the) next week, isn't it?'

The interpretation of a modified noun is typically restricted to a specific referent by the modification, thus making it definite in reference. Restrictive modifiers of this type are, for example, specifiers like demonstratives and possessives, as well as time expressions and attributive relative clauses, as shown in the following examples.
3.1 Overview Explicit marking for definiteness takes place on several syntactic levels, namely on the noun it- self, within the noun phrase, through counting expressions, or on the sentence level. For each of these syntactic levels, a set of rules can be defined by generalizing over the linguistic indi- cators that are responsible for the definiteness attributes carried by the noun phrases in the corpus. Each of these rules consists of one or more preconditions, and a consequent that as- signs the associated definiteness attribute to the respective noun phrase when the preconditions are met. As it turns out, none of the rules defined on the same syntactic level interfere with each other, since they either assign the same value, or their preconditions cannot possibly be met at the same time. Thus the rules can be grouped together into classes corresponding to the four 1In this survey, all the noun phrases from 10 dialogues were analyzed in detail, determining the regularities that led to definiteness predictions. These were then formu- lated into a set of rules and arranged in a hierarchical manner to rule out wrong predictions. A more detailed description of the methods used and a full list of the rules can be found in (Heine, 1997). 520 syntactic levels they are defined on. There is a clear hierachy between the four classes, with all rules of one class given priority over all rules on a lower level, as shown in figure 1. Note that even though the rule classes are defined in terms of syntactic levels, the sequence of rule classes in our hierarchy does not correspond in any way to syntactic structure. nominal phrase noun rules otherwise I clausal rules I otherwise I NP rules I otherwise I counting expressions otherwise definiteness attribute definiteness attribute definiteness attribute definiteness D attribute context checking definite default value D indefinite Figure 1: Definiteness Algorithm 3.2 Noun rules On the noun level, the lexical properties of the noun or one of its direct modifiers can determine the reference of the noun in question. There are a number of nouns, that can be marked as definite on their lexical properties alone, either because they refer to a unique ref- erent in the universe of discourse, or because they carry some sort of indexical implications. The referent is thus described uniquely with respect to some implicitly mentioned context. For example, there exist a number of nouns that implicitly relate the referent with either the hearer or the speaker, depending on the pres- ence or absence of honorifics 2, respectively. In the appointment scheduling domain, the most frequently used words of this class are (go)yotei (your/my schedule), (o)kangae (your/my opin- ion) and (go)tsugoo (for you/me). Indexical time expressions like konshuu (this week) or raigatsu (next month) refer to a spe- cific period of time that stands in a certain re- lation to the time of utterance. Even though they do not necessarily have to stand with an article in the target language, the reference is still definite, as in the following example: (1) raishuu desu ne next week to be isn't it 'That is (the) next week, isn't it?' The interpretation of a modified noun is typi- cally restricted to a specific referent by the mod- ification, thus making it definite in reference. Restrictive modifiers of this type are, for exam- ple, specifiers like demonstratives and posses- sives, as well as time expressions and attribu- tive relative clauses, as shown in the following examples. 
(2) tooka no shuu desu tenth GEN week to be 'That is the week of the tenth.' (3) nijuurokunichi kara hajimaru twentysixth from to begin shuu wa ikaga deshoo ka week TOPIC how to be QUESTION 2In Japanese, there are two honorific prefixes, go and o, that can be used to politely refer to things related to the hearer. However, there are no such prefixes to humbly refer to things relating to oneself. 521 'How is the week beginning the 26th?' However, indefinite pronouns, as for exam- ple hoka (another), also fall into the category of modifiers, but explicitly assign indefinite refer- ence to the noun they modify. These are usually used to introduce a new referent into a context already containing one or more referents of the same type. (4) hoka no hi erabashite itadaite mo different day choose receive also ii n desu ga good DISCREL 'Could I ask you to choose a different day?' At present, there are nine rules belonging to the noun class, only one of which assigns indef- inite reference whilst all others assign definite reference to the noun in question. 3.3 Clausal rules On the sentence level, verbs may carry strong preferences for the definiteness of one or more of their arguments, somewhat in the way of do- main specific patterns. Generally, these pat- terns serve to specify whether a complement to a certain verb is more likely to be definite or indefinite in a semantically unmarked interpre- tation. For example, in a sentence like 5, kaigi ga haitte orimasu corresponds to the pattern 'EVENT ga hairu' ('have an EVENT scheduled'), where the scheduled event denoted by EVENT is indefinite for the unmarked reading. (5) kayoobi wa gogo sanji made Tuesday TOPIC pm 3 o'clock until kaigi ga haitte orimasu node meeting NOM have scheduled since 'since I have a meeting scheduled until 3 pm on Tuesday' On the other hand, in sentence 6, kaigi ga owarimasu is an instance of the pattern 'EVENT ga owaru' ('the EVENT will end'), where, in the unmarked reading, the event that ends is pre- supposed to be a specific entity, whether it is previously known or not. (6) juuniji ni kaigi ga 12 o'clock at meeting NOM owarimasu node to end since 'since the meeting will end at 12 o'clock' The object of an existential question or a negation is by default indefinite, since these sen- tence types usually indicate the (non)existence of the noun in question. Thus, for example, in the two sentence patterns 'x wa arimasu ka' ('Is there an x?') and 'x wa arimasen' ('There is no x.') the object instantiating x is indefinite, un- less marked otherwise. In addition to these sentence patterns, there are a number of nouns that can be followed by the copula suru to form a light verb construc- tion. These constructions usually come without a particle and are treated as compound verbs, as for example uchiawase suru ('to arrange'). However, these nouns can also occur with the particle o, as in uchiawase o suru, introducing an ambiguity whether this expression should be treated as a light verb construction or as a nor- mal verb complement structure. Since this am- biguity can best be resolved at some later point, the noun should be marked as being indefinite, irrespective of whether it will eventually be gen- erated as a noun or a verb in the target lan- guage. 
(7) raishuu ikoo de uchiawase o shitai n desu ga
    next week from ... onwards arrangement ACC want to make DISCREL
    'I would like to make an arrangement from next week onwards'

To override any of these default values, the noun will have to be explicitly marked, using any of the markers on the noun level. Thus we take the clausal rules to be between the top level noun rules and all other rules further down the hierarchy.

From the appointment scheduling domain, eight sentence patterns were extracted, where six assign the default indefinite and two indicate definite reference. Thus, together with the light verb constructions, there are nine rules in this class.
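One way to picture the clausal rules is as a table keyed on the verb and the case particle of its argument, yielding a default that noun-level rules may override. The fragment below encodes the three patterns discussed above; the encoding itself is our invention, not the Verbmobil rule format.

CLAUSAL_PATTERNS = {
    ("hairu", "ga"): "indefinite",  # 'EVENT ga hairu': have an EVENT scheduled
    ("owaru", "ga"): "definite",    # 'EVENT ga owaru': the EVENT will end
    ("aru", "wa"): "indefinite",    # existential question / negation
}

def clausal_default(verb_lemma, particle):
    """Default definiteness suggested by a clausal pattern, or None."""
    return CLAUSAL_PATTERNS.get((verb_lemma, particle))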
By introducing a value for underspecification, it is possible to postpone the decision whether a noun phrase should be marked definite or in- definite, without losing the information that it must be marked eventually. Since default values are only introduced when a value is still under- specified after the assignment mechanism has finished, there is no need to ever change a value once it has been assigned. This means, that the algorithm can work in a strictly monotone manner, terminating as soon as a value has been found. 4 Evaluation 4.1 Performance of the algorithm The performance of our framework is best de- scribed in terms of recall and precision, where recall refers to the proportion of all relevant noun phrases that have been assigned a correct definiteness attribute, whilst precision expresses the percentage of correct assignments among all attributes assigned. The hierarchy was designed as a pre-process to context checking, extracting all values that can be assigned on linguistic grounds alone, but leaving all others underspecified. It is therefore 523 occurrences correct incorrect precision noun rules clausal rules NP rules count rules total 159 62 53 1 275 158 1 99,4% 60 53 1 272 2 0 0 3 96,8% 100% 100% 98,9% Table 1: Precision of the rules to be expected that its coverage, i.e. the per- centage of noun phrases assigned a value by the hierarchy, is relatively low. However, since we propose that the decision algorithm should be monotone, it is vitally important for the pre- cision to be as near to 100% as possible. Any wrong assignments at any stage of the process will inevitably lead to incorrect translation re- sults. To evaluate the hierarchy, we tested the per- formance of our rule base on 20 unseen dia- logues from the corpus. All noun phrases in the dialogues were first annotated with their defi- niteness attributes, followed by the list of rules with matching preconditions. As a second step, the rules applicable to each noun phrase were ordered according to their class, and the pre- diction of the one highest in the hierarchy was compared with the annotated value. In the test data, there are 346 noun phrases that need assignment of definiteness attributes. 4 Table 1 shows the number of noun phrase oc- currences covered by each rule class, i.e. the number of times one of the noun phrases was assigned a definiteness attribute by any of the rules from each class. This value was then fur- ther divided into the number of correct and in- correct assignments made. From this, the pre- cision was calculated, dividing the number of values correctly assigned by the number of val- ues assigned at all. Overall, with a precision of 98,9%, the aim of high accuracy has been achieved. Dividing the number of correct assignments by the number of noun phrases that need assign- 4Additionally, there are 388 time expressions (i.e. dates, times, weekdays and times of day) that under cer- tain conditions also need an article during generation. However, these were excluded from the statistics, since nearly all of them were found to be trivially definite, somehow artificially pushing the recall of the rules in the hierarchy up to 88,8%. ment, we get a recall of 78,6%. Thus, within the appointment scheduling domain, the hierarchy already accounts for 79,5% of all relevant noun phrases, leaving just 20,5% for the computation- ally expensive context checking. 
Of the 71 noun phrases left underspecified, 40 have definite reference, suggesting 'definite' as the default value if the hierarchy were to be used as the sole means of assigning definiteness attributes. This means that a system integrating this algorithm with an efficient context checking mechanism should have a recall of at least 90%, since this is what can already be achieved by using a default value.

4.2 Comparison to previous approaches

The performance of our framework has been found to be better than both of the heuristic rule based approaches introduced in section 2, even before context checking. However, our framework was defined and tested on the restrictive domain of appointment scheduling. Most of the really difficult cases for article selection, as for example generics, do not occur in this domain, whilst both (Murata and Nagao, 1993) and (Bond et al., 1995) build their theories around the problem of identifying these. There are no statistics on the performance of their systems on a corpus that does not contain any generics.

The transfer-based approach of (Siegel, 1996) also covers data from the appointment scheduling domain, using both linguistic and contextual information for assigning definiteness. However, her results can still not be compared with our approach, since we do not have any figures on how high the recall of our algorithm is with context checking in place. In addition, the performance data given for our hierarchy was derived from unseen data rather than the data that were used to draw up the rules, as in Siegel's case.

Even though no direct comparison is possible because of the different test methods and data sets used, we have been able to show that an approach using a monotone rule hierarchy that can be easily integrated with a context checking mechanism leads to very good results.

5 Implementation

The current framework has been designed as part of the dialogue and discourse processing component of the Verbmobil machine translation system, a large scale research project in the area of spontaneous speech dialogue translation between German, English and Japanese (Wahlster, 1997). Within the modular system architecture, the dialogue and discourse processing is situated in between the components for semantic construction (Gambäck et al., 1996) and semantic-based transfer (Dorna and Emele, 1996). It uses context knowledge to resolve semantic representations possibly underspecified with respect to syntactic or semantic ambiguities.

At this stage, all the information needed for definiteness assignment is easily accessible, enabling the rules in our hierarchy to be implemented one-to-one as simple implications. Since all information is accessible at all times, the application of the rules can be ordered according to the hierarchy. Only if none of the rules given in the hierarchy are applicable will the context checking process be started. If an antecedent can be found for the relevant noun phrase, it will be assigned definite reference; otherwise it is taken to be indefinite.

The algorithm will terminate as soon as a value has been assigned, thus ensuring monotonicity and efficiency, as 45% of all noun phrases are already assigned a value by one of the noun rules at the top of the hierarchy.

6 Conclusion

In this paper, we have developed an efficient algorithm for the assignment of definiteness attributes to Japanese noun phrases that makes use of syntactic and semantic information.
Within the domain of appointment scheduling, the integration of our rule hierarchy reduces the need for computationally expensive context checking to 20,5% of all relevant noun phrases, as 79,5% are already assigned a value with a precision of 98,9%. Even though the current framework is to a large extent domain specific, we believe that it may be easily extended to other domains by adding appropriate rules.

References

Francis Bond, Kentaro Ogura, and Tsukasa Kawaoka. 1995. Noun phrase reference in Japanese-to-English machine translation. In Sixth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 1-14.

Michael Dorna and Martin C. Emele. 1996. Semantic-based transfer. In Proceedings of the 16th Conference on Computational Linguistics, volume 1, pages 316-321, København, Denmark. ACL.

Björn Gambäck, Christian Lieske, and Yoshiki Mori. 1996. Underspecified Japanese semantics in a machine translation system. In Proceedings of the 11th Pacific Asia Conference on Language, Information and Computation, pages 53-62, Seoul, Korea.

Irene Heim. 1982. The Semantics of Definite and Indefinite Noun Phrases. Ph.D. thesis, University of Massachusetts.

Julia E. Heine. 1997. Ein Algorithmus zur Bestimmung der Definitheitswerte japanischer Nominalphrasen. Diplomarbeit, Universität des Saarlandes, Saarbrücken. Available at: http://www.coli.uni-sb.de/~heine/arbeit.ps.gz (in German).

Masaki Murata and Makoto Nagao. 1993. Determination of referential property and number of nouns in Japanese sentences for machine translation into English. In Proceedings of the Fifth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 218-225.

Melanie Siegel. 1996. Preferences and defaults for definiteness and number in Japanese to German machine translation. In Byung-Soo Park and Jong-Bok Kim, editors, Selected Papers from the 11th Pacific Asia Conference on Language, Information and Computation.

Wolfgang Wahlster. 1997. Verbmobil - Erkennung, Analyse, Transfer, Generierung und Synthese von Spontansprache. Verbmobil Report 198, DFKI GmbH. (in German).
Eliminative Parsing with Graded Constraints

Johannes Heinecke and Jürgen Kunze
(heinecke | kunze)@compling.hu-berlin.de
Lehrstuhl Computerlinguistik, Humboldt-Universität zu Berlin
Schützenstraße 21, 10099 Berlin, Germany

Wolfgang Menzel and Ingo Schröder
(menzel | schroeder)@informatik.uni-hamburg.de
Fachbereich Informatik, Universität Hamburg
Vogt-Kölln-Straße 30, 22527 Hamburg, Germany

Abstract

Natural language parsing is conceived to be a procedure of disambiguation, which successively reduces an initially totally ambiguous structural representation towards a single interpretation. Graded constraints are used as a means to express well-formedness conditions of different strength and to decide which partial structures are locally least preferred and, hence, can be deleted. This approach facilitates a higher degree of robustness of the analysis, allows resource adaptivity to be introduced into the parsing procedure, and exhibits a high potential for parallelization of the computation.

1 Introduction

Usually parsing is understood as a constructive process, which builds structural descriptions out of elementary building blocks. Alternatively, parsing can be considered a procedure of disambiguation which starts from a totally ambiguous structural representation containing all possible interpretations of a given input utterance. A combinatorial explosion is avoided by keeping ambiguity strictly local. Although particular readings can be extracted from this structure at every time point during disambiguation, they are not maintained explicitly and are not immediately available. Ambiguity is reduced successively towards a single interpretation by deleting locally least preferred partial structural descriptions from the set of solutions. This reductionistic behavior coins the term eliminative parsing. The criteria which the deletion decisions are based on are formulated as compatibility constraints; thus parsing is considered a constraint satisfaction problem (CSP).

Eliminative parsing by itself shows some interesting advantages:

Fail soft behavior: A rudimentary robustness can be achieved by using procedures that leave the last local possibility untouched. More elaborated procedures taken from the field of partial constraint satisfaction (PCSP) allow for even greater robustness (cf. Section 3).

Resource adaptivity: Because the sets of structural possibilities are maintained explicitly, the amount of disambiguation already done and the amount of the remaining effort are immediately available. Therefore, eliminative approaches lend themselves to the active control of the procedures in order to fulfill external resource limitations.

Parallelization: Eliminative parsing holds a high potential for parallelization because ambiguity is represented locally and all decisions are based on local information.

Unfortunately, even for sublanguages of fairly modest size, in many cases no complete disambiguation can be achieved (Harper et al., 1995). This is mainly due to the crisp nature of classical constraints, which cannot express the different strength of grammatical conditions: a constraint can only allow or forbid a given structural configuration, and all constraints are of equal importance. To overcome this disadvantage, gradings can be added to the constraints. Grades indicate how serious one considers a specific constraint violation and allow a range of different types of conditions to be expressed, including preferences, defaults, and strict restrictions.
Parsing, then, is modelled as a partial constraint satisfaction problem with scores (Tsang, 1993), which can almost always be disambiguated towards a single solution if only the grammar provides enough evidence. This means that the CSP is overconstrained in the classical sense, because at least preferential constraints are violated by the solution.

We will give a more detailed introduction to constraint parsing in Section 2 and to the extension to graded constraints in Section 3. Section 4 presents algorithms for the solution of the previously defined parsing problem, and the linguistic modeling for constraint parsing is finally described in Section 5.

2 Parsing as Constraint Satisfaction

While eliminative approaches are quite customary for part-of-speech disambiguation (Padró, 1996) and underspecified structural representations (Karlsson, 1990), they have hardly been used as a basis for full structural interpretation. Maruyama (1990) describes full parsing by means of constraint satisfaction for the first time.

(a) The snake is chased by the cat.
     1    2    3    4     5   6   7

(b) v1 = (nd, 2)   v2 = (subj, 3)   v3 = (nil, 0)   v4 = (ac, 3)
    v5 = (pp, 4)   v6 = (nd, 7)     v7 = (pc, 5)

Figure 1: (a) Syntactic dependency tree for an example utterance: For each word form an unambiguous subordination and a label, which characterizes the kind of subordination, are to be found. (b) Labellings for a set of constraint variables: Each variable corresponds to a word form and takes a pairing consisting of a label and a word form as a value.

Dependency relations are used to represent the structural decomposition of natural language utterances (cf. Figure 1a). By not requiring the introduction of non-terminals, dependency structures allow the initial space of subordination possibilities to be determined in a straightforward manner. All word forms of the sentence can be regarded as constraint variables, and the possible values of these variables describe the possible subordination relations of the word forms. Initially, all pairings of a possible dominating word form and a label describing the kind of relation between dominating and dominated word form are considered as potential value assignments for a variable. Disambiguation, then, reduces the set of values until finally a unique value has been obtained for each variable. Figure 1b shows such a final assignment, which corresponds to the dependency tree in Figure 1a.¹

¹ For illustration purposes, the position indices serve as a means for the identification of the word forms. A value (nil, 0) is used to indicate the root of the dependency tree.

Constraints like

{X} : Subj : Agreement :
    X.label=subj → X↓cat=NOUN ∧ X↑cat=VERB ∧ X↓num=X↑num

judge the well-formedness of combinations of subordination edges by considering the lexical properties of the subordinated (X↓num) and the dominating (X↑num) word forms, the linear precedence (X↑pos) and the labels (X.label). Therefore, the conditions are stated on structural representations rather than on input strings directly. For instance, the above constraint can be paraphrased as follows: Every subordination as a subject requires a noun to be subordinated and a verb as the dominating word form, which have to agree with respect to number.

An interesting property of the eliminative approach is that it allows unexpected input to be treated without the necessity to provide an appropriate rule beforehand: if constraints do not exclude a solution explicitly, it will be accepted.
Therefore, defaults for unseen phenomena can be incorporated without additional effort. Again there is an obvious contrast to constructive methods, which are not able to establish a structural description if a corresponding rule is not available.

For computational reasons only unary and binary constraints are considered, i. e. constraints interrelate at most two dependency relations. This, certainly, is a rather strong restriction. It puts severe limitations on the kind of conditions one wishes to model (cf. Section 5 for examples). As an intermediate solution, templates for the approximation of ternary constraints have been developed.

Harper et al. (1994) extended constraint parsing to the analysis of word lattices instead of linear sequences of words. This provides not only a reasonable interface to state-of-the-art speech recognizers but is also required to properly treat lexical ambiguities.

3 Graded Constraints

Constraint parsing as introduced so far faces at least two problems, which are closely related to each other and cannot easily be reconciled. On the one hand, there is the difficulty of reducing the ambiguity to a single interpretation. In terms of CSP, the constraint parsing problem is said to have too small a tightness, i. e. there usually is more than one solution. Certainly, the remaining ambiguity can be further reduced by adding additional constraints. This, on the other hand, will most probably exclude other constructions from being handled properly, because highly restrictive constraint sets can easily render a problem unsolvable and therefore introduce brittleness into the parsing procedure. Whenever being faced with such an overconstrained problem, the procedure has to retract certain constraints in order to avoid the deletion of indispensable subordination possibilities.

Obviously, there is a trade-off between the coverage of the grammar and the ability to perform the disambiguation efficiently. To overcome this problem one wishes to specify exactly which constraints can be relaxed in case a solution cannot be established otherwise. Therefore, different types of constraints are needed in order to express the different strength of strict conditions, default values, and preferences.

For this purpose every constraint c is annotated with a weight w(c) taken from the interval [0, 1] that denotes how seriously a violation of this constraint affects the acceptability of an utterance (cf. Figure 2).

{X} : SubjInit : Subj : 0.0 :
    X.label=subj → X↓cat=NOUN ∧ X↑cat=VERB
{X} : SubjNumber : Subj : 0.1 :
    X.label=subj → X↓num=X↑num
{X} : SubjOrder : Subj : 0.9 :
    X.label=subj → X↓pos<X↑pos
{X, Y} : SubjUnique : Subj : 0.0 :
    X.label=subj ∧ X↑id=Y↑id → Y.label≠subj

Figure 2: Very restrictive constraint grammar fragment for subject treatment in German: Graded constraints are additionally annotated with a score.

The solution of such a partial constraint satisfaction problem with scores is the dependency structure of the utterance that violates the fewest and the weakest constraints. For this purpose the notion of constraint weights is extended to scores for dependency structures. The scores of all constraints c violated by the structure under consideration s are multiplied, and a maximum selection is carried out to find the solution s' of the PCSP:
    s' = arg max_s ∏_c w(c)^n(c,s)

Since a particular constraint can be violated more than once by a given structure, the constraint grade w(c) is raised to the power of n(c,s), which denotes the number of violations of the constraint c by the structure s.

Different types of conditions can easily be expressed with graded constraints:

• Hard constraints with a score of zero (e. g. constraint SubjUnique) exclude totally unacceptable structures from consideration. This kind of constraint can also be used to initialize the space of potential solutions (e. g. SubjInit).

• Typical well-formedness conditions like agreement or word order are specified by means of weaker constraints with a score larger than, but near to, zero, e. g. constraint SubjNumber.

• Weak constraints with a score near to one can be used for conditions that are merely preferences rather than error conditions, or that encode uncertain information. Some of the phenomena one wishes to express as preferences concern word order (in German, cf. subject topicalization of constraint SubjOrder), defeasible selectional restrictions, attachment preferences, attachment defaults (esp. for partial parsing), mapping preferences, and frequency phenomena. Uncertain information taken from prosodic clues, graded knowledge (e. g. a measure of physical proximity) or uncertain domain knowledge is a typical example for the second type.

Since a solution to a CSP with graded constraints does not have to satisfy every single condition, overconstrained problems are no longer unsolvable. Moreover, by deliberately specifying a variety of preferences nearly all parsing problems indeed become overconstrained now, i. e. no solution fulfills all constraints. Therefore, disambiguation to a single interpretation (or at least a very small solution set) comes out of the procedure without additional effort. This is also true for utterances that are -- strictly speaking -- grammatically ambiguous. As long as there is any kind of preference, either from linguistic or extra-linguistic sources, no enumeration of possible solutions will be generated. Note that this is exactly what is required in most applications, because subsequent processing stages usually need only one interpretation rather than many. If under special circumstances more than one interpretation of an utterance is requested, this kind of information can be provided by defining a threshold on the range of admissible scores.

The capability to rate constraint violations enables the grammar writer to incorporate knowledge of different kinds (e. g. prosodic, syntactic, semantic, domain-specific clues) without depending on the general validity of every single condition. Instead, occasional violations can be accepted as long as a particular source of knowledge supports the analysis process in the long term.

Different representational levels can be established in order to model the relative autonomy of syntax, semantics, and even other contributions. These multiple levels must be related to each other by means of mapping constraints, so that evidence from one level helps to find a matching interpretation on another one. Since these constraints are defeasible as well, an inconsistency among different levels must not necessarily lead to an overall breakdown.

In order to accommodate a number of representational levels, the constraint parsing approach has to be modified again so that a separate constraint variable is established for each level and each word form.
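To make the scoring scheme concrete, the following Python sketch (our own illustration, not code from the paper; the constraint predicates, weights, and candidate structures are hypothetical) computes ∏_c w(c)^n(c,s) for each candidate structure and selects the maximum:

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    @dataclass
    class Constraint:
        name: str
        weight: float                              # w(c) in [0, 1]; 0.0 = hard constraint
        violations: Callable[[Dict], int]          # n(c, s): violation count in structure s

    def score(structure: Dict, constraints: List[Constraint]) -> float:
        """Product of w(c)^n(c,s) over all constraints c."""
        result = 1.0
        for c in constraints:
            result *= c.weight ** c.violations(structure)
        return result

    # A structure assigns each word position a (label, head position) pair, as in Figure 1b.
    constraints = [
        Constraint("SubjUnique", 0.0,
                   lambda s: max(0, sum(1 for lab, _ in s.values() if lab == "subj") - 1)),
        Constraint("SubjOrder", 0.9,
                   lambda s: sum(1 for w, (lab, head) in s.items()
                                 if lab == "subj" and w > head)),
    ]

    candidates = [
        {1: ("subj", 2), 3: ("obj", 2)},           # hypothetical competing analyses
        {1: ("obj", 2), 3: ("subj", 2)},
    ]
    best = max(candidates, key=lambda s: score(s, constraints))

Note that an unviolated hard constraint contributes a factor of 0.0^0 = 1, while a single violation of it zeroes the whole score, exactly as intended.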
With a separate constraint variable for each level and each word form, a solution no longer consists of a single dependency tree but of a whole set of trees. While constraint grades make it possible to weigh up different violations of grammatical conditions, the representation of different levels additionally allows for the arbitration among conflicting evidence originating from very different sources, e. g. among agreement conditions and selectional role filler restrictions, or word order regularities and prosodic hints.

While constraints encoding specific domain knowledge have to be exchanged when one switches to another application context, other constraint clusters like syntax can be kept. Consequently, the multi-level approach, which makes the origin of different disambiguating information explicit, holds great potential for the reusability of knowledge.

4 Solution methods

In general, CSPs are NP-complete problems. A lot of methods have been developed, though, to allow for a reasonable complexity in most practical cases. Some heuristic methods, for instance, try to arrive at a solution more efficiently at the expense of giving up the property of correctness, i. e. they find the globally best solution in most cases while they are not guaranteed to do so in all cases. This allows the temporal characteristics of the parsing procedure to be influenced, a possibility which seems especially important in interactive applications: If the system has to deliver a reasonable solution within a specific time interval, a dynamic scheduling of computational resources depending on the remaining ambiguity and available time is necessary (Menzel, 1994, anytime algorithm). While different kinds of search are more suitable with regard to the correctness property, local pruning strategies lend themselves to resource adaptive procedures. Menzel and Schröder (1998b) give details about the decision procedures for constraint parsing.

5 Grammar modeling

For experimental purposes a constraint grammar has been set up which consists of two descriptive levels, one for syntactic (including morphology and agreement) and one for semantic relations. Whereas the syntactic description clearly follows a dependency approach, the second main level of our analysis, semantics, is limited to sortal restrictions and predicate-argument relations for verbs, predicative adjectives, and predicative nouns.

In order to illustrate the interaction of syntactic and semantic constraints, the following (syntactically correct) sentence is analyzed. Here the use of a semantic level excludes or depreciates a reading which violates lexical restrictions: Da habe ich einen Termin beim Zahnarzt ("At this time, I have an appointment at the dentist's."). The preposition beim ("at the") is a locational preposition; the noun Zahnarzt ("dentist"), however, is of the sort "human". Thus, the constraint which determines sortal compatibility for prepositions and nouns is violated:

{X} : PrepSortal : Prepositions : 0.3 :
    X↑cat=PREP ∧ X↓cat=NOUN → compatible(ont, X↑sort, X↓sort)
'Prepositions should agree sortally with their noun.'

Other constraints control attachment preferences. For instance, the sentence am Montag machen wir einen Termin aus has two different readings ("we will make an appointment, which will take place on Monday" vs. "on Monday we will meet to make an appointment for another day"), i. e. the attachment of the prepositional phrase am Montag cannot be determined without a context.
If the first reading is preferred (the prepositional phrase is attached to ausmachen), this can be achieved by a graded constraint. It can be overruled if other features rule out this possibility.

A third possible use for weak constraints is attachment defaults, if e. g. a head word needs a certain type of word as a dependent constituent. Whenever the sentence being parsed does not provide the required constituent, the weak constraint is violated and another constituent takes over the function of the "missing" one (e. g. nominal use of adjectives).

Prosodic information could also be dealt with. Compare Wir müssen noch einen Termin ausmachen ("We still have to make an appointment" vs. "We have to make a further appointment"). A stress on Termin would result in a preference for the first reading, whereas a stressed noch makes the second translation more adequate. Note that it should always be possible to outdo weak evidence like prosodic hints by rules of word order or even information taken from the discourse, e. g. if there is no previous appointment in the discourse.

In addition to the two main description levels, a number of auxiliary ones are employed to circumvent some shortcomings of the constraint-based approach. Recall that the CSP has been defined so as to uniquely assign a dominating node (together with an appropriate label) to each input form (cf. Figure 1). Unfortunately, this definition restricts the approach to a class of comparatively weak well-formedness conditions, namely subordination possibilities describing the degree to which a node can fill the valency of another one. For instance, the potential of a noun to serve as the grammatical subject of the finite verb (cf. Figure 2) belongs to this class of conditions. If, on the other hand, the somewhat stronger notion of a subordination necessity (i. e. the requirement to fill a certain valency) is considered, an additional mechanism has to be introduced. From a logical viewpoint, constraints in a CSP are universally quantified and do not provide a natural way to accommodate conditions of existence. However, in the case of subordination necessities the effect of an existential quantifier can easily be simulated by the unique value assignment principle of the constraint satisfaction mechanism itself. For that purpose an additional representational level for the inverse dependency relation is introduced for each valency to be saturated (Helzerman and Harper, 1992, cf. needs-roles). Dedicated constraints ensure that the inverse relation can only be established if a suitable filler has properly been identified in the input sentence.

Another reason to introduce additional auxiliary levels might be the desire to use a feature inheritance mechanism within the structural description. Basically, constraints allow only a passive feature checking but do not support the active assignment of feature values to particular nodes in the dependency tree. Although this restriction must be considered a fundamental prerequisite for the strictly local treatment of huge amounts of ambiguity, it certainly makes an adequate modelling of feature percolation phenomena rather difficult. Again, the use of auxiliary levels provides a solution by allowing the required information to be transported along the edges of the dependency tree by means of appropriately defined labels.
For efficiency reasons (the complexity is exponential in the number of features to percolate over the same edge) the application of this technique should be restricted to a few carefully selected phenomena.

The approach presented in this paper has been tested successfully on some 500 sentences of the Verbmobil domain (Wahlster, 1993). Currently, there are about 210 semantic constraints, including constraints on auxiliary levels. The syntax is defined by 240 constraints. Experiments with slightly distorted sentences resulted in correct structural trees in most cases.

6 Conclusion

An approach to the parsing of dependency structures has been presented which is based on the elimination of partial structural interpretations by means of constraint satisfaction techniques. Due to the graded nature of constraints, (possibly conflicting) evidence from a wide variety of informational sources can be integrated into a uniform computational mechanism. A high degree of robustness is introduced, which allows the parsing procedure to compensate for local constraint violations and to resort to at least partial interpretations if necessary.

The approach has already been successfully applied to a diagnosis task in foreign language learning environments (Menzel and Schröder, 1998a). Further investigations are prepared to study the temporal characteristics of the procedure in more detail. A system is aimed at which eventually will be able to adapt its behavior to external pressure of time.

Acknowledgements

This research has been partly funded by the German Research Foundation "Deutsche Forschungsgemeinschaft" under grant no. Me 1472/1-1 & Ku 811/3-1.

References

Mary P. Harper, L. H. Jamieson, C. D. Mitchell, G. Ying, S. Potisuk, P. N. Srinivasan, R. Chen, C. B. Zoltowski, L. L. McPheters, B. Pellom, and R. A. Helzerman. 1994. Integrating language models with speech recognition. In Proceedings of the AAAI-94 Workshop on the Integration of Natural Language and Speech Processing, pages 139-146.

Mary P. Harper, Randall A. Helzerman, C. B. Zoltowski, B. L. Yeo, Y. Chan, T. Steward, and B. L. Pellom. 1995. Implementation issues in the development of the PARSEC parser. Software - Practice and Experience, 25(8):831-862.

Randall A. Helzerman and Mary P. Harper. 1992. Log time parsing on the MasPar MP-1. In Proceedings of the 6th International Conference on Parallel Processing, pages 209-217.

Fred Karlsson. 1990. Constraint grammar as a framework for parsing running text. In Proceedings of the 13th International Conference on Computational Linguistics, pages 168-173, Helsinki.

Hiroshi Maruyama. 1990. Structural disambiguation with constraint propagation. In Proceedings of the 28th Annual Meeting of the ACL, pages 31-38, Pittsburgh.

Wolfgang Menzel and Ingo Schröder. 1998a. Constraint-based diagnosis for intelligent language tutoring systems. In Proceedings of the IT&KNOWS Conference at the IFIP '98 Congress, Wien/Budapest.

Wolfgang Menzel and Ingo Schröder. 1998b. Decision procedures for dependency parsing using graded constraints. In Proc. of the Joint Conference COLING/ACL Workshop: Processing of Dependency-based Grammars, Montreal, CA.

Wolfgang Menzel. 1994. Parsing of spoken language under time constraints. In A. Cohn, editor, Proceedings of the 11th European Conference on Artificial Intelligence, pages 560-564, Amsterdam.

Lluís Padró. 1996. A constraint satisfaction alternative to POS tagging. In Proc. NLP+IA, pages 197-203, Moncton, Canada.

E. Tsang. 1993.
Foundations of Constraint Satisfaction. Academic Press, Harcourt Brace and Company, London.

Wolfgang Wahlster. 1993. Verbmobil: Translation of face-to-face dialogs. In Proceedings of the Machine Translation Summit IV, pages 127-135, Kobe.
A Connectionist Architecture for Learning to Parse

James Henderson and Peter Lane
Dept of Computer Science, Univ of Exeter
Exeter EX4 4PT, United Kingdom
[email protected], [email protected]

Abstract

We present a connectionist architecture and demonstrate that it can learn syntactic parsing from a corpus of parsed text. The architecture can represent syntactic constituents, and can learn generalizations over syntactic constituents, thereby addressing the sparse data problems of previous connectionist architectures. We apply these Simple Synchrony Networks to mapping sequences of word tags to parse trees. After training on parsed samples of the Brown Corpus, the networks achieve precision and recall on constituents that approaches that of statistical methods for this task.

1 Introduction

Connectionist networks are popular for many of the same reasons as statistical techniques. They are robust and have effective learning algorithms. They also have the advantage of learning their own internal representations, so they are less constrained by the way the system designer formulates the problem. These properties and their prevalence in cognitive modeling have generated significant interest in the application of connectionist networks to natural language processing. However the results have been disappointing, being limited to artificial domains and oversimplified subproblems (e.g. (Elman, 1991)). Many have argued that these kinds of connectionist networks are simply not computationally adequate for learning the complexities of real natural language (e.g. (Fodor and Pylyshyn, 1988), (Henderson, 1996)).

Work on extending connectionist architectures for application to complex domains such as natural language syntax has developed a theoretically motivated technique called Temporal Synchrony Variable Binding (Shastri and Ajjanagadde, 1993; Henderson, 1996). TSVB allows syntactic constituency to be represented, but to date there has been no empirical demonstration of how a learning algorithm can be effectively applied to such a network. In this paper we propose an architecture for TSVB networks and empirically demonstrate its ability to learn syntactic parsing, producing results approaching current statistical techniques.

In the next section of this paper we present the proposed connectionist architecture, Simple Synchrony Networks (SSNs). SSNs are a natural extension of Simple Recurrent Networks (SRNs) (Elman, 1991), which are in turn a natural extension of Multi-Layered Perceptrons (MLPs) (Rumelhart et al., 1986). SRNs are an improvement over MLPs because they generalize what they have learned over words in different sentence positions. SSNs are an improvement over SRNs because the use of TSVB gives them the additional ability to generalize over constituents in different structural positions. The combination of these generalization abilities is what makes SSNs adequate for syntactic parsing.

Section 3 presents experiments demonstrating SSNs' ability to learn syntactic parsing. The task is to map a sentence's sequence of part of speech tags to either an unlabeled or labeled parse tree, as given in a preparsed sample of the Brown Corpus. A network input-output format is developed for this task, along with some linguistic assumptions that were used to simplify these initial experiments. Although only a small training set was used, an SSN achieved 63% precision and 69% recall on unlabeled constituents for previously unseen sentences.
This is approaching the 75% precision and recall achieved on a similar task by Probabilistic Context Free Parsers (Charniak, forthcoming), which is the best current method for parsing based on part of speech tags alone. Given that these are the very first results produced with this method, future developments are likely to improve on them, making the future for this method very promising.

2 A Connectionist Architecture that Generalizes over Constituents

Simple Synchrony Networks (SSNs) are designed to extend the learning abilities of standard connectionist networks so that they can learn generalizations over linguistic constituents. This generalization ability is provided by using Temporal Synchrony Variable Binding (TSVB) (Shastri and Ajjanagadde, 1993) to represent constituents. With TSVB, generalization over constituents is achieved in an exactly analogous way to the way Simple Recurrent Networks (SRNs) (Elman, 1991) achieve generalization over the positions of words in a sentence. SRNs are a standard connectionist method for processing sequences. As the name implies, SSNs are one way of extending SRNs with TSVB.

Figure 1: A Simple Recurrent Network.

2.1 Simple Recurrent Networks

Simple Recurrent Networks (Elman, 1991) are a simple extension of the most popular form of connectionist network, Multi-Layered Perceptrons (MLPs) (Rumelhart et al., 1986). MLPs are popular because they can approximate any finite mapping, and because training them with the Backpropagation learning algorithm (Rumelhart et al., 1986) has been demonstrated to be effective in a wide variety of applications. Like MLPs, SRNs consist of a finite set of units which are connected by weighted links, as illustrated in figure 1. The output of a unit is simply a scalar activation value. Information is input to a network by placing activation values on the input units, and information is read out of a network by reading off activation values from the output units. Computation is performed by the input activation being scaled by the weighted links and passed through the activation functions of the "hidden" units, which are neither part of the input nor output. The only parameters in this computation are the weights of the links and how many hidden units are used. The number of hidden units is chosen by the system designer, but the link weights are automatically trained using a set of example input-output mappings and the Backpropagation learning algorithm.

Unlike MLPs, SRNs process sequences of inputs and produce sequences of outputs. To store information about previous inputs, SRNs use a set of context units, which simply record the activations of the hidden units during the previous time step (shown as dashed links in figure 1). When the SRN is done computing the output for one input in the sequence, the vector of activations on the hidden units is copied to the context units. Then the next input is processed with this copied pattern in the context units. Thus the hidden pattern computed for one input is used to represent the context for the subsequent input. Because the hidden pattern is learned, this method allows SRNs to learn their own internal representation of this context. This context is the state of the network.
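The computation just described can be written down compactly. The following Python sketch is our own illustration (the weight-matrix names, the tanh activation function, and the omission of bias terms are our assumptions, not details from the paper):

    import numpy as np

    def srn_step(x, context, W_in, W_ctx, W_out):
        """One SRN step: combine the input with the copied context,
        compute the hidden pattern, and emit an output."""
        hidden = np.tanh(W_in @ x + W_ctx @ context)   # hidden units
        output = np.tanh(W_out @ hidden)               # output units
        return output, hidden                          # hidden becomes the next context

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 5, 8, 3
    W_in  = rng.normal(size=(n_hid, n_in))
    W_ctx = rng.normal(size=(n_hid, n_hid))
    W_out = rng.normal(size=(n_out, n_hid))

    context = np.zeros(n_hid)
    for x in np.eye(n_in):                 # a toy sequence of one-hot inputs
        y, context = srn_step(x, context, W_in, W_ctx, W_out)

The same link weights (W_in, W_ctx, W_out) are applied at every position, which is what yields the generalization over sequence positions discussed next.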
A number of algorithms exist for training such networks with loops in their flow of activation (called recurrence), for example Backpropagation Through Time (Rumelhart et al., 1986).

The most important characteristic of any learning-based model is the way it generalizes from the examples it is trained on to novel testing examples. In this regard there is a crucial difference between SRNs and MLPs, namely that SRNs generalize across sequence positions. At each position in a sequence a new context is copied, a new input is read, and a new output is computed. However the link weights that perform this computation are the same for all the positions in the sequence. Therefore the information that was learned for an input and context in one sequence position will inherently be generalized to inputs and contexts in other sequence positions. This generalization ability is manifested in the fact that SRNs can process arbitrarily long sequences; even the inputs at the end, which are in sequence positions that the network has never encountered before, can be processed appropriately. This generalization ability is a direct result of SRNs using time to represent sequence position.

Generalizing across sequence positions is crucial for syntactic parsing, since a word tends to have the same syntactic role regardless of its absolute position in a sentence, and there is no practical bound on the length of sentences. However this ability still doesn't make SRNs adequate for syntactic parsing. Because SRNs have a bounded number of output units, and therefore an effectively bounded output for each input, the space of possible outputs should be linear in the length of the input. For syntactic parsing, the total number of constituents is generally considered to be linear in the length of the input, but each constituent has to choose its parent from amongst all the other constituents. This gives us a space of possible parent-child relationships that is proportional to the square of the length of the input. For example, the attachment of a prepositional phrase needs to be chosen from all the constituents on the right frontier of the current parse tree. There may be an arbitrary number of these constituents, but an SRN would have to distinguish between them using only a bounded number of output units. While in theory such a representation can be achieved using arbitrary precision continuous activation values, this bounded nature is symptomatic of a limitation in SRNs' generalization abilities.

What we really want is for the network to learn what kinds of constituents such prepositional phrases like to attach to, and apply these generalizations independently of the absolute position of the constituent in the parse tree. In other words, we want the network to generalize over constituents. There is no apparent way for SRNs to achieve such generalization. This inability to generalize results in the network having to be trained on a set of sentences in which every kind of constituent appears in every position in the parse tree, resulting in serious sparse data problems. We believe that it is this difficulty that has prevented the successful application of SRNs to syntactic parsing.

2.2 Simple Synchrony Networks

The basic technique which we use to solve SRNs' inability to generalize over constituents is exactly analogous to the technique SRNs use to generalize over sentence positions; we process constituents one at a time.
Words are still input to the network one at a time, but now within each input step the network cycles through the set of constituents. This dual use of time does not introduce any new complications for learning algorithms, so, as for SRNs, we can use Backpropagation Through Time. The use of timing to represent constituents (or more generally entities) is the core idea of Temporal Synchrony Variable Binding (Shastri and Ajjanagadde, 1993). Simple Synchrony Networks are an application of this idea to SRNs.¹

¹ There are a variety of ways to extend SRNs using TSVB. The architecture presented here was selected based on previous experiments using a toy grammar.

Figure 2: A Simple Synchrony Network. The units to which TSVB has been applied are depicted as several units stacked on top of each other, because they store activations for several constituents.

As illustrated in figure 2, SSNs use the same method of representing state as do SRNs, namely context units. The difference is that SSNs have two of these memories, while SRNs have one. One memory is exactly the same as for SRNs (the figure's lower recurrent component). This memory has no representation of constituency, so we call it the "gestalt" memory. The other memory has had TSVB applied to it (the figure's upper recurrent component, depicted with "stacked" units). This memory only represents information about constituents, so we call it the constituent memory. These two representations are then combined via another set of hidden units to compute the network's output. Because the output is about constituents, these combination and output units have also had TSVB applied to them.

The application of TSVB to the output units allows SSNs to solve the problems that SRNs have with representing the output of a syntactic parser. For every step in the input sequence, TSVB units cycle through the set of constituents. To output something about a particular constituent, it is simply necessary to activate an output unit at that constituent's time in the cycle. For example, when a prepositional phrase is being processed, the constituent which that prepositional phrase attaches to can be specified by activating a "parent" output unit in synchrony with the chosen constituent. However many constituents there are for the prepositional phrase to choose between, there will be that many times in the cycle that the "parent" unit can be activated in. Thereby we can output information about an arbitrary number of constituents using only a bounded number of units. We simply require an arbitrary amount of time to go through all the constituents.

Just as SRNs' ability to input arbitrarily long sentences was symptomatic of their ability to generalize over sentence position, the ability of SSNs to output information about arbitrarily many constituents is symptomatic of their ability to generalize over constituents. Having more constituents than the network has seen before is not a problem, because outputs for the extra constituents are produced on the same units by the same link weights as for other constituents. The training that occurred for the other constituents modified the link weights so as to produce the constituents' outputs appropriately, and now these same link weights are applied to the extra constituents. So the SSN has generalized what it has learned over constituents.
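As a rough illustration of this cycling (our sketch only; the exact wiring of the input, combination, and output units, and the choice of presenting the current tag during every constituent's phase, are our assumptions, not details from the paper):

    import numpy as np

    def ssn_word_step(tag_vec, gestalt_ctx, constituent_ctxs, p):
        """One word step: update the gestalt memory once, then cycle
        through the constituents, producing one 'parent' activation per
        constituent; the argmax identifies the chosen attachment."""
        gestalt_ctx = np.tanh(p["W_g"] @ tag_vec + p["W_gg"] @ gestalt_ctx)
        parent_scores, new_ctxs = [], []
        for ctx in constituent_ctxs:       # one phase in the cycle per constituent
            ctx = np.tanh(p["W_c"] @ tag_vec + p["W_cc"] @ ctx)
            combo = np.tanh(p["W_comb"] @ np.concatenate([gestalt_ctx, ctx]))
            parent_scores.append(float(p["w_out"] @ combo))
            new_ctxs.append(ctx)
        return gestalt_ctx, new_ctxs, parent_scores

    rng = np.random.default_rng(1)
    h, t = 6, 4
    p = {"W_g": rng.normal(size=(h, t)), "W_gg": rng.normal(size=(h, h)),
         "W_c": rng.normal(size=(h, t)), "W_cc": rng.normal(size=(h, h)),
         "W_comb": rng.normal(size=(h, 2 * h)), "w_out": rng.normal(size=h)}

    gestalt = np.zeros(h)
    ctxs = [np.zeros(h)]                   # constituents introduced so far
    ctxs.append(np.zeros(h))               # a new constituent is added with each word
    gestalt, ctxs, scores = ssn_word_step(np.eye(t)[0], gestalt, ctxs, p)
    parent = int(np.argmax(scores))        # attach to the winning constituent

The same weights serve every phase of the cycle, so however many constituents there are, no new parameters are needed.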
For example, once the network has learned what kinds of constituents a preposition likes to attach to, it can apply these generalizations to each of the constituents in the current parse and choose the best match.

In addition to their ability to generalize over constituents, SSNs inherit from SRNs the ability to generalize over sentence positions. By generalizing in both these ways, the amount of data that is necessary to learn linguistic generalizations is greatly reduced, thus addressing the sparse data problems which we believe are the reasons connectionist networks have not been successfully applied to syntactic parsing. The next section empirically demonstrates that SSNs can be successfully applied to learning the syntactic parsing of real natural language.

3 Experiments in Learning to Parse

Adding the theoretical ability to generalize over linguistic constituents is an important step in connectionist natural language processing, but theoretical arguments are not sufficient to address the empirical question of whether these mechanisms are effective in learning to parse real natural language. In this section we present experiments on training Simple Synchrony Networks to parse naturally occurring sentences. First we present the input-output format for the SSNs used in these experiments, then we present the corpus, then we present the results, and finally we discuss likely future developments. Despite the fact that these are the first such experiments to be designed and run, an SSN achieved 63% precision and 69% recall on constituents. Because these results are approaching the results for current statistical methods for parsing from part of speech tags (around 75% precision and recall), we conclude that SSNs are effective in learning to parse. We anticipate that future developments using larger training sets, words as inputs, and a less constrained input-output format will make SSNs a real alternative to statistical methods.

3.1 SSNs for Parsing

The networks that are used in the experiments all have the same design. They all use the internal structure discussed in section 2.2 and illustrated in figure 2, and they all use the same input-output format. The input-output format is greatly simplified by SSNs' ability to represent constituents, but for these initial experiments some simplifying assumptions are still necessary. In particular, we want to define a single fixed input-output mapping for every sentence. This gives the network a stable pattern to learn, rather than having the network itself make choices such as when information should be output or which output constituent should be associated with which words. To achieve this we make two assumptions, namely that outputs should occur as soon as theoretically possible, and that the head of each constituent is its first terminal child.

As shown in figure 2, SSNs have two sets of input units, constituent input units and gestalt input units. Defining a fixed input pattern for the gestalt inputs is straightforward, since these inputs pertain to information about the sentence as a whole. Whenever a tag is input to the network, the activation pattern for that tag is presented to the gestalt input units. The information from these tags is stored in the gestalt context units, forming a holistic representation of the preceding portion of the sentence.
The use of this holistic representation is a significant distinction between SSNs and current symbolic statistical methods, giving SSNs some of the advantages of finite state methods. Figure 3 shows an example parse, and depicts the gestalt inputs as a tag just above its associated word. First NP is input to the gestalt component, then VVZ, then AT, and finally NN.

Defining a fixed input pattern for the constituent input units is more difficult, since the input must be independent of which tags are grouped together into constituents. For this we make use of the assumption that the first terminal child of every constituent is its head. When a tag is input we add a new constituent to the set of constituents that the network cycles through and assume that the input tag is the head of that constituent. The activation pattern for the tag is input in synchrony with this new constituent, but nothing is input to any of the old constituents. In the parse depicted in figure 3, these constituent inputs are shown as predications on new variables. First constituent w is introduced and given the input NP, then x is introduced and given VVZ, then y is introduced and given AT, and finally z is introduced and given NN.

Because the only input to a constituent is its head tag, the only thing that the constituent context units do is remember information about each constituent's first terminal child. This is not a very realistic assumption about the nature of the linguistic generalizations that the network needs to learn, but it is adequate for these initial experiments. This assumption simply means that more burden is placed on the network's gestalt memory, which can store information about any tag. Provided the appropriate constituent can be identified based on its first terminal child, this gestalt information can be transferred to the constituent through the combination units at the time when an output needs to be produced.

We also want to define a single fixed output pattern for each sentence. This is necessary since we use simple Backpropagation Through Time, plus it gives the network a stable mapping to learn. This desired output pattern is called the target pattern.

Figure 3: A parse of "John loves a woman". (Columns: Input, Output, Accumulated Output; one row per tagged word, NP (John), VVZ (loves), AT (a), and NN (woman), with constituent variables w, x, y, z.)

The network is trained to try to produce this exact pattern, even though other patterns may be interpretable as the correct parse. To define a unique target output we need to specify which constituents in the corpus map to which constituents in the network, and at what point in the sentence each piece of information in the corpus needs to be output. The first problem is solved by the assumption that the first terminal child of a constituent is its head.² We map each constituent in the corpus to the constituent in the network that has the same head. Network constituents whose head is not the first terminal child of any corpus constituent are simply never mentioned in the output, as is true of z in figure 3. The second problem is solved by assuming that outputs should occur as soon as theoretically possible. As soon as all the constituents involved in a piece of information have been introduced into the network, that piece of information is required to be output.

² The cases where constituents in the corpus have no terminal children are discussed in the next subsection.
Although this means that there is no point at which the entire parse for a sentence is being output by the network, we can simply accumulate the network's incremental outputs and thereby interpret the output of the parser as a complete parse.

To specify an unlabeled parse tree it is sufficient to output the tree's set of parent-child relationships. For parent-child relationships that are between a constituent and a terminal, we know the constituent will have been introduced by the time the terminal's tag is input, because a constituent is headed by its first terminal child. Thus this parent-child relationship should be output when the terminal's tag is input. This is done using a "parent" output unit, which is active in synchrony with the parent constituent when the terminal's tag is input. In figure 3, these parent outputs are shown structurally as parent-child relationships with the input tags. The first three tags all designate the constituents introduced with them as their parents, but the fourth tag (NN) designates the constituent introduced with the previous tag (y) as its parent.

For parent-child relationships that are between two nonterminal constituents, the earliest this information can be output is when the head of the second constituent is input. This is done using a "grandparent" output unit and a "sibling" output unit. The grandparent output unit is used when the child comes after the parent's head (i.e. right branching constituents like objects). In this case the grandparent output unit is active in synchrony with the parent constituent when the head of the child constituent is input. This is illustrated in the third row in figure 3, where AT is shown as having the grandparent x. The sibling output unit is used when the child precedes the parent's head (i.e. left branching constituents like subjects). In this case the sibling output unit is active in synchrony with the child constituent when the head of the parent constituent is input. This is illustrated in the second row in figure 3, where VVZ is shown as having the sibling w. These parent, grandparent, and sibling output units are sufficient to specify any of the parse trees that we require.

While training the networks requires having a unique target output, in testing we can allow any output pattern that is interpretable as the correct parse. Interpreting the output of the network has two stages. First, the continuous unit activations are mapped to discrete parent-child relationships. For this we simply take the maximums across competing parent outputs (for terminals' parents), and across competing grandparent and sibling outputs (for nonterminals' parents). Second, these parent-child relationships are mapped to their equivalent parse "tree". This process is illustrated in the rightmost column of figure 3, where the network's incremental output of parent-child relationships is accumulated to form a specification of the complete tree. This second stage may have some unexpected results (the constituents may be discontiguous, and the structure may not be connected), but it will always specify which words in the sentence each constituent includes. By defining each constituent purely in terms of what words it includes, we can compare the constituents identified in the network's output to the constituents in the corpus.
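The first interpretation stage can be sketched as follows. This is our own illustration under a simplified data layout, not the authors' code; grandparent and sibling outputs would be handled analogously:

    import numpy as np

    def interpret_parents(parent_acts):
        """Map continuous 'parent' activations to discrete attachments and
        collect, for each constituent, the set of words it includes.
        parent_acts[t] holds one activation per constituent introduced up
        to and including word t (constituent t is introduced at word t)."""
        words_in = {}                              # constituent -> word positions
        for t, acts in enumerate(parent_acts):
            parent = int(np.argmax(acts))          # maximum across competitors
            words_in.setdefault(parent, set()).add(t)
        return words_in

    # Toy activations for "John loves a woman" (figure 3): word 3 (NN)
    # attaches to constituent 2 (y), the others to their own constituent.
    acts = [[0.9], [0.1, 0.8], [0.2, 0.1, 0.7], [0.1, 0.2, 0.8, 0.3]]
    print(interpret_parents(acts))                 # {0: {0}, 1: {1}, 2: {2, 3}}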
As is standard, we report the percentage of the output constituents that are correct (precision), and the percentage of the correct constituents that are output (recall).

3.2 A Corpus for SSNs

The Susanne³ corpus is used in this paper as a source of preparsed sentences. The Susanne corpus consists of a subset of the Brown corpus, preparsed according to the Susanne classification scheme described in (Sampson, 1995). This data must be converted into a format suitable for the learning experiments described below. This section describes the conversion of the Susanne corpus sentences and the precision/recall evaluation functions.

³ We acknowledge the roles of the Economic and Social Research Council (UK) as sponsor and the University of Sussex as grantholder in providing the Susanne corpus used in the experiments described in this paper.

We begin by describing the part of speech tags, which form the input to the network. The tags in the Susanne scheme are a detailed extension of the tags used in the Lancaster-Leeds Treebank (see Garside et al, 1987). For the experiments described below the simpler Lancaster-Leeds scheme is used. Each tag is a two or three letter sequence, e.g. 'John' would be encoded 'NP', the articles 'a' and 'the' are encoded 'AT', and verbs such as 'is' encoded 'VBZ'. These are input to the network by setting one bit in each of three banks of inputs; each bank representing one letter position, and the set bit indicating which letter or space occupies that position.

The network's output is an incremental representation of the unlabeled parse tree for the current sentence. The Susanne scheme uses a detailed classification of constituents, and some changes are necessary before the data can be used here. Firstly, the experiments in this paper are only concerned with parsing sentences, and so all constituents referring to the meta-sentence level have been discarded. Secondly, the Susanne scheme allows for 'ghost' markers. These elements are also discarded, as the 'ghost' elements do not affect the boundaries of the constituents present in the sentence.

Finally, it was noted in the previous subsection that the SSNs used for these learning experiments require every constituent to have at least one terminal child. There are very few constructions in the corpus that violate this constraint, but one of them is very common, namely the S-VP division. The linguistic head of the S (the verb) is within the VP, and thus the S often occurs without any tags as immediate children. For example, this occurs when S expands to simply NP VP. To address this problem, we collapse the S and VP into a single constituent, as is illustrated in figure 3. The same is done for other such constructions, which include adjective, noun, determiner and prepositional phrases. This move is not linguistically unmotivated, since the result is equivalent to a form of dependency grammar (Mel'čuk, 1988), which has a long linguistic tradition. The constructions are also well defined enough that the collapsed constituents could be separated at the interpretation stage, but we don't do that in these experiments. Also note that this limitation is introduced by a simplifying assumption, and is not inherent to the architecture.
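As a concrete illustration of the tag encoding described above, the following Python sketch builds the three one-hot banks for a tag (the use of a space symbol to pad two-letter tags is our assumption about the convention):

    import string

    LETTERS = string.ascii_uppercase + " "    # 26 letters plus space per bank

    def encode_tag(tag):
        """Encode a two or three letter tag as three one-hot banks, one
        per letter position, with one set bit in each bank."""
        tag = tag.ljust(3)                     # pad to three positions
        banks = []
        for ch in tag:
            bank = [0] * len(LETTERS)
            bank[LETTERS.index(ch)] = 1
            banks.extend(bank)
        return banks

    vec = encode_tag("NP")                     # 3 banks x 27 bits = 81 inputs
    assert sum(vec) == 3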
3.3 Experimental Results

The experiments in this paper use one of the Susanne genres (genre A, press reportage) for the selection of training, cross-validation and test data. We describe three sets of experiments, training SSNs with the input-output format described in section 3.1. In each experiment, a variety of networks was trained, varying the number of units in the hidden and combination layers. Each network is trained using an extension of Backpropagation Through Time until the sum-squared error reaches a minimum. A cross-validation data set is used to choose the best networks, which are then given the test data, and precision/recall figures obtained.

For experiments (1) and (2), the first twelve files in Susanne genre A were used as a source for the training data, the next two for the cross-validation set (4700 words in 219 sentences, average length 21.56 words), and the final two for testing (4602 words in 176 sentences, average length 26.15 words). For experiment (1), only sentences of length less than twenty words were used for training, resulting in a training set of 4683 words in 334 sentences. The precision and recall results for the best network can be seen in the first row of table 1. For experiment (2), a larger training set was used, containing sentences of length less than thirty words, resulting in a training set of 13,523 words in 696 sentences. We averaged the performance of the best two networks to obtain the figures in the second row of table 1.

For experiment (3), labeled parse trees were used as the target output, i.e. for each word we also output the label of its parent constituent. The output for the constituent labels uses one output unit for each of the 15 possible labels. For calculating the precision and recall results, the network must also output the correct label with the head of a constituent in order to count that constituent as correct. Further, this experiment uses data sets selected at random from the total set, rather than taking blocks from the corpus. Therefore, the cross-validation set in this case consists of 4551 words in 186 sentences, average length 24.47 words. The test set consists of 4485 words in 181 sentences, average length 24.78 words. As in experiment (2), we used a training set of sentences with less than 30 words, producing a set of 1079 sentences, 27,559 words. For this experiment none of the networks we tried converged to nontrivial solutions on the training set, but one network achieved reasonable performance before it collapsed to a trivial solution. The results for this network are shown in the third row of table 1.

From current corpus based statistical work on parsing, we know that sequences of part of speech tags contain enough information to achieve around 75% precision and recall on constituents (Charniak, forthcoming). On the other extreme, the simplistic parsing strategy of producing a purely right branching structure only achieves 34% precision and 61% recall on our test set. The fact that SSNs can achieve 63% precision and 69% recall using much smaller training sets than (Charniak, forthcoming) demonstrates that SSNs can be effective at learning the required generalizations from the data. While there is still room for improvement, we conclude that SSNs can learn to parse real natural language.

3.4 Extendability

The initial results reported above are very promising for future developments with Simple Synchrony Networks, as they are likely to improve in both the near and long term.
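As an aside on the right-branching baseline quoted above: under the constituents-as-word-sets evaluation it can be generated in one line, since every internal node of a purely right-branching binary parse dominates a suffix of the sentence. A sketch, under that assumption:

    def right_branching(n):
        """Constituents of a purely right-branching binary parse
        of an n-word sentence, as frozensets of word indices."""
        return [frozenset(range(i, n)) for i in range(n - 1)]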
Significant improvements are likely with larger training sets and longer training sentences. While other approaches typically use over a million words of training data, the largest training set we use is only 13,500 words. Also, fine tuning of the training methodology and architecture often improves network performance. For example, we should be using larger networks, since our best results came from the largest networks we tried. Currently the biggest obstacle to exploring these alternatives is the long training times that are typical of Backpropagation Through Time, but there are a number of standard speedups which we will be trying.

Another source of possible improvements is to make the networks' input-output format more linguistically motivated. As an example, we retested the networks from experiment (2) above with a different mapping from the output of the network to constituents. If a word chooses an earlier word's constituent as its parent, then we treat these two words as being in the same constituent, even if the earlier word has itself chosen an even earlier word as its parent. 10% of the constituents are changed by this reinterpretation, with precision improving by 1.6% and recall worsening by 0.6%.

In the longer term the biggest improvement is likely to come from using words, instead of tags, as the input to the network. Currently all the best parsing systems use words, and back off to using tags for infrequent words (Charniak, forthcoming). Because connectionist networks automatically exhibit a frequency-by-regularity effect, where infrequent cases are all pulled into the typical pattern, we would expect such backing off to be done automatically, and thus we would expect SSNs to perform well with words as inputs. The performance we have achieved with such small training sets supports this belief.

4 Conclusion

This paper demonstrates for the first time that a connectionist network can learn syntactic parsing. This improvement is the result of extending a standard architecture (Simple Recurrent Networks) with a technique for representing linguistic constituents (Temporal Synchrony Variable Binding). This extension allows Simple Synchrony Networks to generalize what they learn across constituents, thereby solving the sparse data problems of previous connectionist architectures. Initial experiments have empirically demonstrated this ability, and future extensions are likely to significantly improve on these results. We believe that the combination of this generalization ability with the adaptability of connectionist networks holds great promise for many areas of Computational Linguistics.

References

Eugene Charniak. forthcoming. Statistical techniques for natural language parsing. AI Magazine.

Jeffrey L. Elman. 1991. Distributed representations, simple recurrent networks, and grammatical structure. Machine Learning, 7:195-225.

Jerry A. Fodor and Zenon W. Pylyshyn. 1988. Connectionism and cognitive architecture: A critical analysis. Cognition, 28:3-71.

R. Garside, G. Leech, and G. Sampson (eds). 1987. The Computational Analysis of English: a corpus-based approach. Longman Group UK Limited.

James Henderson. 1996. A connectionist architecture with inherent systematicity. In Proceedings of the Eighteenth Conference of the Cognitive Science Society, pages 574-579, La Jolla, CA.

I. Mel'čuk. 1988. Dependency Syntax: Theory and Practice. SUNY Press.

D. E. Rumelhart, G. E. Hinton, and R. J. Williams. 1986. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing, Vol 1. MIT Press, Cambridge, MA.

Geoffrey Sampson. 1995. English for the Computer. Oxford University Press, Oxford, UK.

Lokendra Shastri and Venkat Ajjanagadde. 1993. From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences, 16:417-451.
Memoisation for Glue Language Deduction and Categorial Parsing

Mark Hepple
Department of Computer Science, University of Sheffield
Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK
[email protected]

Abstract

The multiplicative fragment of linear logic has found a number of applications in computational linguistics: in the "glue language" approach to LFG semantics, and in the formulation and parsing of various categorial grammars. These applications call for efficient deduction methods. Although a number of deduction methods for multiplicative linear logic are known, none of them are tabular methods, which bring a substantial efficiency gain by avoiding redundant computation (cf. chart methods in CFG parsing): this paper presents such a method, and discusses its use in relation to the above applications.

1 Introduction

The multiplicative fragment of linear logic, which includes just the linear implication (o-) and multiplicative (®) operators, has found a number of applications within linguistics and computational linguistics. Firstly, it can be used in combination with some system of labelling (after the 'labelled deduction' methodology of (Gabbay, 1996)) as a general method for formulating various categorial grammar systems. Linear deduction methods provide a common basis for parsing categorial systems formulated in this way. Secondly, the multiplicative fragment forms the core of the system used in work by Dalrymple and colleagues for handling the semantics of LFG derivations, providing a 'glue language' for assembling the meanings of sentences from those of words and phrases.

Although there are a number of deduction methods for multiplicative linear logic, there is a notable absence of tabular methods, which, like chart parsing for CFGs, avoid redundant computation. Hepple (1996) presents a compilation method which allows for tabular deduction for implicational linear logic (i.e. the fragment with only o-). This paper develops that method to cover the fragment that includes the multiplicative. The use of this method for the applications mentioned above is discussed.

2 Multiplicative Linear Logic

Linear logic is a 'resource-sensitive' logic: in any deduction, each assumption ('resource') is used precisely once. The formulae of the multiplicative fragment of (intuitionistic) linear logic are defined by F ::= A | F o- F | F ® F (A a nonempty set of atomic types). The following rules provide a natural deduction formulation:

  o-E:  from Ao-B : a and B : b, infer A : (ab)

  o-I:  from a proof of A : a using hypothesis [B : v], infer Ao-B : λv.a,
        discharging the hypothesis

  ®E:   from B®C : b and a proof of A : a using hypotheses [B : x] and [C : y],
        infer A : E_{x,y}(b, a), discharging both hypotheses

  ®I:   from A : a and B : b, infer A®B : (a ® b)

The elimination (E) and introduction (I) rules for o- correspond to steps of functional application and abstraction, respectively, as the term labelling reveals. The o-I rule discharges precisely one assumption (B) within the proof to which it applies. The ®I rule pairs together the premise terms, whereas ®E has a substitution-like meaning.¹ Proofs that Wo-(Xo-Z), Xo-Y, Yo-Z ⇒ W and that Xo-Yo-Z, Y®Z ⇒ X follow. For the first: o-E applies to Yo-Z : y and the hypothetical [Z : z] to give Y : (yz), then to Xo-Y : x and Y : (yz) to give X : (x(yz)); o-I discharges the hypothetical to give Xo-Z : λz.x(yz); and a final o-E with Wo-(Xo-Z) : w gives W : (w(λz.x(yz))). For the second: o-E applies Xo-Yo-Z : x to the hypothetical [Z : z] to give Xo-Y : (xz), and then to the hypothetical [Y : y] to give X : ((xz)y); ®E with Y®Z : w discharges both hypotheticals to give X : E_{y,z}(w, ((xz)y)).

¹The meaning is more obvious in the notation of (Benton et al., 1992): (let b be x®y in a).

The differential status of the assumptions and goal of a deduction (i.e. between Γ and A in Γ ⇒ A) is addressed in terms of polarity: assumptions are deemed to have positive polarity, and goals negative polarity.
Each subformula also has a polarity, which is determined by the polarity of the immediately containing (sub)formula, according to the following schemata (where p̄ is the opposite polarity to p):

  (i)  (X^p o- Y^p̄)^p
  (ii) (X^p ® Y^p)^p

For example, the leftmost assumption of the first proof above has the polarity pattern (W⁺ o- (X⁻ o- Z⁺)⁻)⁺. The proofs illustrate the phenomenon of 'hypothetical reasoning', where additional assumptions (called 'hypotheticals') are used, which are later discharged. The need for hypothetical reasoning in a proof is driven by the types of the assumptions and goal: the hypotheticals correspond to positive polarity subformulae of the assumptions/goal that occur in the following subformula contexts:

  i)  (X⁻ o- Y⁺)⁻  (giving hypothetical Y)
  ii) (X⁺ ® Y⁺)⁺   (giving hypotheticals X and Y)

The subformula (Xo-Z) of Wo-(Xo-Z) in the proof above is an instance of context (i), so a hypothetical Z results. Subformulae that are instances of patterns (i,ii) may nest within other such instances (e.g. in ((A®B)®C)o-D, both ((A®B)®C) and (A®B) are instances of (ii)). In such cases, we can focus on the maximal pattern instances (i.e. not contained within any other), and then examine the hypotheticals produced for whether they in turn license hypothetical reasoning. This approach makes explicit the patterns of dependency amongst hypothetical elements.

3 First-order Compilation for Implicational Linear Logic

Hepple (1996) shows how deductions in implicational linear logic can be recast as deductions involving only first-order formulae, using only a single inference rule (a variant of o-E). The method involves compiling the original formulae to indexed first-order formulae, where a higher-order² initial formula yields multiple compiled formulae, e.g. (omitting indices) Xo-(Yo-Z) would yield Xo-Y and Z, i.e. with the subformula Z, relevant to hypothetical reasoning, being excised to be treated as a separate assumption, leaving a first-order residue.³ Indexing is used to ensure general linear use of resources, but also notably to ensure proper use of excised subformulae, i.e. so that Z, in our example, must be used in deriving the argument of Xo-Y, or otherwise invalid deductions would result. Simplifying Xo-(Yo-Z) to Xo-Y removes the need for an o-I inference, but the effect of such a step is not lost, since it is compiled into the semantics of the formula.

²The key division here is between higher-order formulae, which are functors that seek at least one argument that bears a functional type (e.g. Wo-(Xo-Z)), and first-order formulae, which seek no such argument.

³This 'excision' step has parallels to the 'emit' step used in the chart-parsing approaches for the associative Lambek calculus of (König, 1994) and (Hepple, 1992), although the latter differs in that there is no removal of the relevant subformula, i.e. the 'emitting formula' is not simplified, remaining higher-order.

The approach is best explained by example. In proving Xo-(Yo-Z), Yo-W, Wo-Z ⇒ X, the premise formulae compile to the indexed formulae (1-4) shown in the proof below. Each of these formulae (1-4) is associated with a set containing a single index, which serves as a unique identifier for that assumption.

  1. {i} : Xo-(Y:{j}) : λu.x(λz.u)
  2. {k} : Yo-(W:∅) : λu.yu
  3. {l} : Wo-(Z:∅) : λu.wu
  4. {j} : Z : z
  5. {j,l} : W : (wz)              [3+4]
  6. {j,k,l} : Y : y(wz)           [2+5]
  7. {i,j,k,l} : X : x(λz.y(wz))   [1+6]

The formulae (5-7) arise under combination, allowed by the single rule below. The index sets of these formulae identify precisely the assumptions from which they are derived, with appropriate indexation being ensured by the condition π = φ⊎ψ of the rule (where ⊎ stands for disjoint union, which enforces linear usage):

  φ : Ao-(B:α) : λv.a      ψ : B : b
  ----------------------------------   (α ⊆ ψ, π = φ⊎ψ)
            π : A : a[b//v]
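For concreteness, this single rule can be sketched as follows; the sketch is not from the paper, the tuple layout is an assumed encoding, and the semantics is left symbolic rather than performing the (non-capture-avoiding) substitution:

    def combine(functor, argument):
        """functor: (phi, ('o-', result_type, (arg_type, alpha)), sem),
        where alpha is the index set that must appear inside the
        argument's derivation; argument: (psi, type, sem).
        Index sets are frozensets. Returns the combined item or None."""
        phi, ftype, fsem = functor
        psi, atype, asem = argument
        op, result, (arg_type, alpha) = ftype
        if op != 'o-' or arg_type != atype:
            return None
        if phi & psi:              # linearity: index sets must be disjoint
            return None
        if not alpha <= psi:       # inclusion constraint: alpha subset of psi
            return None
        # Semantics: substitute asem for the functor's variable *without*
        # capture avoidance, i.e. the _[_//_] substitution of the rule.
        return (phi | psi, result, ('subst', fsem, asem))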
Assumptions (1) and (4) both come from Xo-(Yo-Z): note how (1)'s argument is marked with (4)'s index (j). The condition α ⊆ ψ of the rule ensures that (4) must contribute to the derivation of (1)'s argument. Finally, observe that the rule's semantics involves not simple application, but rather direct substitution for the variable of a lambda expression, employing a special variant of substitution, notated _[_//_], which specifically does not act to avoid accidental binding. Hence, in the final inference of the proof, the variable z falls within the scope of an abstraction over z, becoming bound. The abstraction over z corresponds to an o-I step that is compiled into the semantics, so that an explicit inference is no longer required. See (Hepple, 1996) for more details, including a precise statement of the compilation procedure.

4 First-order Compilation for Multiplicative Linear Logic

In extending the above approach to the multiplicative, we will address the ®I and ®E rules as separate problems. The need for an ®I use within a proof is driven by the type of either some assumption or the proof's overall goal, e.g. to build the argument of an assumption such as Ao-(B®C). For this specific example, we might try to avoid the need for an explicit ®I use by transforming the assumption to the form Ao-Bo-C (note that the two formulae are interderivable). This line of exploration, however, leads to incompleteness, since the manoeuvre results in proof structures that lack a node corresponding to the result of the ®I inference (which is present in the natural deduction proof), and this node may be needed as the locus of some other inference.⁴ This problem can be overcome by the use of goal atoms, which are unique pseudo-type atoms that are introduced into types by compilation (in the parlance of lisp, they are 'gensymmed' atoms). An assumption Ao-(B®C) would compile to Ao-G plus Go-Bo-C, where G is the unique goal atom (g1, perhaps). A proof using these types does contain a node corresponding to (what would be) the result of the ® inference in the natural deduction proof, namely that bearing type G, the result of combining Go-Bo-C with its arguments.

⁴Specifically, the node must be present to allow for steps corresponding to ®E inferences. The expert reader should be able to convince themselves of this fact by considering an example such as Xo-((Y®U)o-(Z®U)), Yo-Z ⇒ X.

This method can be used in combination with the existing compilation approach. For example, an initial assumption Ao-((B®C)o-D) would yield a hypothetical D, leaving the residue Ao-(B®C), which would become Ao-G plus Go-Bo-C, as just discussed. This method of uniquely-generated 'goal atoms' can also be used in dealing with deductions having complex types for their intended overall result (which may license hypotheticals, by virtue of realising the polarity contexts discussed in section 2). Thus, we can replace an initial deduction Γ ⇒ A with Go-A, Γ ⇒ G, making the goal A part of the left hand side. The new premise Go-A can be compiled just like any other. Since the new goal formula G is atomic, it requires no compilation.
For example, a goal type Xo-Y would become an extra premise Go-(Xo-Y), which would compile to formulae Go-X plus Y.

Turning next to ®E, the rule involves hypothetical reasoning, so compilation of a maximal positive polarity subformula B®C will add hypotheticals B,C. No further compilation of B®C itself is then required: whatever is needed for hypothetical reasoning with respect to the internal structure of its subformulae will arise elsewhere by compilation of the hypotheticals B,C. Assume that these latter hypotheticals have identifying indices i,j and semantic variables x,y respectively. A rule for ®E might combine B®C (with term t, say) with any other formula A (with term s, say) provided that the latter has a disjoint index set that includes i,j, to give a result that is also of type A, that is assigned semantics E_{x,y}(t,s). To be able to construct this semantics, the rule would need to be able to access the identities of the variables x,y. The need to explicitly annotate this identity information might be avoided by 'raising' the semantics of the multiplicative formula at compilation time to be a function over the other term, e.g. t might be raised to λu.E_{x,y}(t,u). A usable inference rule might then take the following form (where the identifying indices of the hypotheticals have been marked on the product type):

  (φ, A, s)      (ψ, (B®C):{i,j}, λu.t)
  -------------------------------------   (i,j ∈ φ, π = φ⊎ψ)
              (π, A, t[s//u])

Note that we can safely restrict the rule to require that the type A of the minor premise is atomic. This is possible since firstly, the first-order compilation context ensures that the arguments required by a functor to yield an atomic result are always present (with respect to completing a valid deduction), and secondly, the alternatives of combining a functor with a multiplicative under the rule either before or after supplying its arguments are equivalent.⁵

In fact, we do not need the rule above, as we can instead achieve the same effects using only the single (o-) inference rule that we already have, by allowing a very restricted use of type polymorphism. Thus, since the above rule's conclusion and minor premise are the same atomic type, we can in the compilation simply replace a formula X®Y with an implication 𝒜o-(𝒜:{i,j}), where 𝒜 is a variable over atomic types (and i,j the identifying indices of the two hypotheticals generated by compilation). The semantics provided for this functor is of the 'raised' kind discussed above. However, this approach to handling ®E inferences within the compiled system has an undesirable characteristic (which would also arise using the inference rule discussed above), which is that it will allow multiple derivations that assign equivalent proof terms for a given type combination. This is due to non-determinism for the stage at which a type such as 𝒜o-(𝒜:{i,j}) participates in the proof. A proof might contain several nodes bearing atomic types which contain the required hypotheticals, and 𝒜o-(𝒜:{i,j}) might combine at any of these nodes, giving equivalent results.⁶

The above ideas for handling the multiplicative are combined with the methods developed for the implicational fragment to give the compilation procedure (τ), stated in Figure 1. This takes a sequent Γ ⇒ A as input (case T1), where A is a type and each assumption in Γ takes the form Type:Sem (Sem minimally just some unique variable), and it returns a structure (G, φ, Δ), where G is a goal atom, φ the set of all identifying indices, and Δ a set of indexed first-order formulae (with associated semantics). Let Δ* denote the result of closing Δ under the single inference rule. The sequent is proven iff (φ, G, t) ∈ Δ* for some term t, which is a complete proof term for the implicit deduction. The statement of the compilation procedure here is somewhat different to that given in (Hepple, 1996), which is based on polar translation functions. In the version here, the formula related cases address only positive formulae.⁷

As an example, consider the deduction Xo-Y, Y®Z ⇒ X®Z. Compilation returns the goal atom g0, the full index set {g,h,i,j,k,l}, plus the formulae shown in (1-6) below.

  1. ({g}, g0o-(g1:{h}), λt.t)
  2. ({h}, g1o-(X:∅)o-(Z:∅), λvλw.(w®v))
  3. ({i}, Xo-(Y:∅), λx.(ax))
  4. ({j}, 𝒜o-(𝒜:{k,l}), λu.E_{y,z}(b,u))
  5. ({k}, Y, y)
  6. ({l}, Z, z)
  7. ({i,k}, X, (ay))                              [3+5]
  8. ({h,l}, g1o-(X:∅), λw.(w®z))                  [2+6]
  9. ({h,i,k,l}, g1, ((ay)®z))                     [7+8]
  10. ({h,i,j,k,l}, g1, E_{y,z}(b,((ay)®z)))       [4+9]
  11. ({g,h,i,j,k,l}, g0, E_{y,z}(b,((ay)®z)))     [1+10]
  12. ({g,h,i,k,l}, g0, ((ay)®z))                  [1+9]
  13. ({g,h,i,j,k,l}, g0, E_{y,z}(b,((ay)®z)))     [4+12]

The formulae (7-13) arise under combination. Formulae (11) and (13) correspond to successful overall analyses (i.e. have type g0, and are labelled with the full index set).

⁵This follows from the proof term equivalence E_{x,y}(f,(ga)) = (E_{x,y}(f,g) a), where x,y ∈ freevars(g). The move of requiring the minor premise to be atomic effects a partial normalisation which involves not only the relative ordering of ®E and o-E steps, but also that between interdependent ®E steps (as might arise for an assumption such as ((A®B)®C)). It is straightforward to demonstrate that the restriction results in no loss of readings. See (Benton et al., 1992) regarding term assignment and proof normalisation for linear logic.

⁶It is anticipated that this problem can be solved by using normalisation results as a basis for discarding partial analyses during processing, but further work is required in developing this idea.

⁷Note that the complexity of the compilation is linear in the 'size' of the initial deduction, as measured by a count of type atoms. For applications where the formulae that may participate are preset (e.g. they are drawn from a lexicon), formulae can be precompiled, although the results of precompilation would need to be parametrised with respect to the variables/indices appearing, with a sufficient supply of 'fresh' symbols being generated at time of lexical access, to ensure uniqueness.
u 7-(<in, xn, (7-2) 7"((4, X, 8)) : (4, X, s) where X atomic (7-3) 7-((¢,Xo-Y,s)) = 7-((4, Xo-(Y:O),s)) where Y has no (inclusion) index set (7-4) T((4, Xlo-(Y:¢),s)) = (4, X2o--(Y:¢),;~x.t) UF where Y is atomic; x a fresh variable; 7-((4, X1, (sx))) = (4, X2, t) +ttJF (T5) 7-((4, Xo-((Yo-Z): ¢), s)) = 7-((¢, Xo-(Y: ~r), Ay.s()~z.y))) U 7-((i, Z, z)) where i a fresh index; y, z fresh variables; 7r = i U ¢ (7-6) 7-((4, Xo-((Y ® Z): ¢), s)) = 7-((4, Xo-(G: ~), s)) u 7-((i, ~o-Yo-Z, ~z~y.(y ® z))) where i a fresh index; G a fresh goal atom; y, z fresh variables; 7r = i U (77) T((4, X ® Y,s)) = (4, Ao---(A: {i,j}),At.(E~(s,t))) UT-((i,X,x)) U T((j,Y,y)) where i, j fresh indices; x, y, t fresh variables; .4 a fresh variable over atomic types Figure 1: The Compilation Procedure assigning equivalent readings, i.e. (11) and (13) have identical proof terms, that arise by non- determinism for involvement of formula (4). 5 Computing Exclusion Constraints The use of inclusion constraints (i.e. require- ments that some formula must be used in de- riving a given functor's argument) within the approach allows us to ensure that hypotheticals are appropriately used in any overall deduction and hence that deductions are valid. However, the approach allows that deduction can generate some intermediate results that cannot be part of an overall deduction. For example, compiling a formula Xo--(Yo--(Zo--W))o--(Vo-W) gives the first-order residue Xo-Yo--V, plus hypothetic- als Zo-W and W. A partial deduction in which the hypothetical Zo-W is used in deriving the argument V of Xo--Yo-V cannot be extended to a successfull overall deduction, since its use again for the functor's second argument Y (as an inclusion constraint will require) would viol- ate linear usage. For the same reason, a direct combination of the hypotheticals Zo-W and W is likewise a deductive dead end. This problem can be addressed via exclusion constraints, i.e. annotations to forbid stated formulae having been used in deriving a given funtor's argument, as proposed in (Hepple, 1998). Thus, a functor might have the form Xo---(Y:{i}:{j}) to indicate that i must appear in its argument's index set, and that j must not. Such exclusions can be straightforwardly com- puted over the set of compiled formulae that de- rive from each initial assumption, using simple (set-theoretic) patterns of reasoning. For ex- ample, for the case above, since W must be used in deriving the argument V of the main residue formula, it can be excluded from the ar- gument Y of that formula (which follows from the disjointness condition on the single inference rule). Given that the argument Y must include Zo--W, but excludes W, we can infer that W cannot contribute to the argument of Zo--W, giving an exclusion constraint that (amongst other things) blocks the direct combination of Zo--W and W. See (Hepple, 1998) for more de- tails (although a slightly different version of the first-order formalism is used there). 6 Tabular Deduction A simple algorithm for use with the above ap- proach, which avoids much redundant compu- tation, is as follows. Given a possible theorem to prove, the results of compilation (i.e. in- dexed types plus semantics) are gathered on an agenda. Then, a loop is followed in which an item is taken from the agenda and added to the database (which is initially empty), and then the next triple is taken from the agenda and 542 so on until the agenda is empty. 
Whenever an entry is added to the database, a check is made to see if it can combine with any that are already there, in which case new agenda items are gen- erated. When the agenda is empty, a check is made for any successful overall analyses. Since the result of a combination always bears an in- dex set larger than either parent, and since the maximal index set is fixed at compilation time, the above process must terminate. However, there is clearly more redundancy to be eliminated here. Where two items dif- fer only in their semantics, their subsequent involvement in any further deductions will be precisely parallel, and so they can be collapsed together. For this purpose, the semantic com- ponent of database entries is replaced with a unique identifer, which serves as a 'hook' for semantic alternatives. Agenda items, on the other hand, instead record the way that the agenda item was produced, which is either 'pre- supplied' (by compilation) or 'by combination', in which case the entries combined are recorded by their identifiers. When an agenda item is added to the database, a check is made for an entry with the same indexed type. If there is none, a new entry is created and a check made for possible combinations (giving rise to new agenda items). However, if an appropriate ex- isting entry is found, a record is made for that entry of an additional way to produce it, but no check made for possible combinations. If at the end there is a successful overall analsysis, its unique identifier, plus the records of what combined to produce what, can be used to enu- merate directly the proof terms for successful analyses. 7 Application ~1: Categorial Parsing The associative Lambek calculus (Lambek, 1958) is perhaps the most familiar representat- ive of the class of categorial formalisms that fall within the 'type-logical' tradition. Recent work has seen proposals for a range of such systems, differing in their resource sensitivity (and hence, implicitly, their underlying notion of 'linguistic structure'), in some cases combining differing resource sensitivities in one system, s Many of SSee, for example, the formalisms developed in (Moortgat et al., 1994), (Morrill, 1994), (Hepple, 1995). these proposals employ a 'labelled deductive system' methodology (Gabbay, 1996), whereby types in proofs are associated with labels which record proof information for use in ensuring cor- rect inferencing. A natural 'base logic' on which to construct such systems is the multiplicat- ive fragment of linear logic, since (i) it stands above the various categorial systems in the hier- archy of substructural logics, and (ii) its oper- ators correspond to precisely those appearing in any standard categorial logic. The key require- ment for parsing categorial systems formulated in this way is some theorem proving method that is sufficient for the fragment of linear logic employed (although some additional work will be required for managing labels), and a num- ber of different approaches have been used, e.g. proof nets (Moortgat, 1992), and SLD resolu- tion (Morrill, 1995). Hepple (1996) introduces first-order compilation for implicational linear logic, and shows how that method can be used with labelling as a basis parsing implicational categorial systems. No further complications arise for combining the extended compilation approach described in this paper with labelling systems as a basis for efficient, non-redundant parsing of categorial formalisms in the core mul- tiplicative fragment. 
See (Hepple, 1996) for a worked example. 8 Application ~2: Glue Language Deduction In a line of research beginning with Dalrymple et al. (1993), a fragment of linear logic is used as a 'glue language' for assembling sentence mean- ings for LFG analyses in a 'deductive' fashion (enabling, for example, an direct treatment of quantifier scoping, without need of additional mechanisms). Some sample expressions: hates: VX, Y.(s ~t hates(X, Y) )o-( (f .,., eX) ® (g"-% Y) ) everyone: VH, S.(H-,-*t every(person, S) ) o-(Vx.(H x)) The operator ~ serves to pair together a 'role' with a meaning expression (whose semantic type is shown by a subscript), where a 'role' is essentially a node in a LFG f-structure. For our purposes roles can be treated as if they were just atomic symbols. For theorem proving pur- poses, the universal quantifiers above can be de- leted: the uppercase variables can be treated 543 as Prolog-like variables, which become instanti- ated under matching during proof construction; the lowercase variables can be replaced by arbit- rary constants. Such deletion leaves a residue that can be treated as just expressions of mul- tiplicative linear logic, with role/meaning pairs serving as 'basic formulae'. 9 An observation contrasting the categorial and glue language approaches is that in the cat- egorial case, all that is required of a deduction is the proof term it returns, which (for 'lin- guistic derivations') provides a 'semantic recipe' for combining the lexical meanings of initial for- mulae directly. However, for the glue language case, given the way that meanings are folded into the logical expressions, the lexical terms themselves must participate in a proof for the semantics of a LFG derivation to be produced. Here is one way that the first-order compila- tion approach might be used for glue language deduction (other ways are possible). Firstly, we can take each (quantifier-free) glue term, re- place each role/meaning pair with just the role component, and associate the resulting formula with a unique semantic variable. The set of for- mulae so produced can then undergo the first- order compilation procedure. Crucially for com- pilation, although some of the role expressions in the formulae may be ('Prolog-like') variables, they correspond to atomic formulae (so there is no 'hidden structure' that compilation cannot address). A complication here is that occur- rences of a single role variable may end up in different first-order formulae. In any overall de- duction, the binding of these multiple variable instances must be consistent, but we cannot rely on a global binding context, since alternative proofs will typically induce distinct (but intern- ally consistent) bindings. Hence, bindings must be handled locally (i.e. relative to each database formula) and combinations will involve merging of local binding contexts. Each proof term that tabular deduction returns corresponds to a nat- ural deduction proof over the precompilation formulae. If we mechanically mirror this pat- tern of proof over the original glue terms (with meanings, but quantifier-free), a role/meaning 9See (Fry, 1997), who uses a proof net method for glue language deduction, for relevant discussion. This paper also provides examples of glue language uses that require a full deductive system for the multiplicative fragment. pair that provides a reading of the original LFG derivation will result. References Nick Benton, Gavin Bierman, Valeria de Paiva & Martin Hyland. 1992. 
'Term Assignment for Intuitionistic Linear Logic.' Tech. Report 262, Cambridge University Computer Lab. Mary Dalrymple, John Lamping & Vijay Saraswat. 1993. 'LFG semantics via con- straints.' Proc. EACL-6, Utrecht. John Fry 1997. 'Negative Polarity Licensing at the Syntax-Semantics Interface.' Proc. A CL/EA CL-97 Joint Con]erence, Madrid. Dov M. Gabbay. 1996. Labelled deductive sys- tems. Volume 1. Oxford University Press. Mark Hepple. 1992. 'Chart Parsing Lambek Grammars: Modal Extensions and Incre- mentality', Proc. COLING-92. Mark Hepple. 1995. 'Mixing Modes of Lin- guistic Description in Categorial Grammar.' Proc. EA CL-7, Dublin. Mark Hepple. 1996. 'A Compilation-Chart Method for Linear Categorial Deduction.' Proc. COLING-96, Copenhagen. Mark Hepple. 1998. 'Linear Deduction via First-order Compilation.' Proc. First Work- shop on Tabulation in Parsing and Deduc- tion. Esther KSnig. 1994. 'A Hypothetical Reasoning Algorithm for Linguistic Analysis.' Journal of Logic and Computation, Vol. 4, No 1. Joachim Lambek. 1958. 'The mathematics of sentence structure.' American Mathematical Monthly, 65, pp154-170. Michael Moortgat. 1992. 'Labelled deduct- ive systems for categorial theorem proving.' Proc. o/Eighth Amsterdam Colloquium, ILLI, University of Amsterdam. Michael Moortgat & Richard T. Oehrle. 1994. 'Adjacency, dependency and order.' Proc. of Ninth Amsterdam Colloquium. Glyn Morrill. 1994. Type Logical Grammar: Categorial Logic of Signs. Kluwer Academic Publishers, Dordrecht. Glyn Morrill. 1995. 'Higher-order Linear Lo- gic Programming of Categorial Deduction.' Proc. of EACL-7, Dublin. 544
Parsing Parallel Grammatical Representations Derrick Higgins Department of Linguistics University of Chicago 1050 East 59th Street Chicago, IL 60626 [email protected] Abstract Traditional accounts of quantifier scope em- ploy qualitative constraints or rules to account for scoping preferences. This paper outlines a feature-based parsing algorithm for a gram- mar with multiple simultaneous levels of repre- sentation, one of which corresponds to a par- tial ordering among quantifiers according to scope. The optimal such ordering (as well as the ranking of other orderings) is determined in this grammar not by absolute constraints, but by stochastic heuristics based on the de- gree of alignment among the representational levels. A Prolog implementation is described and its accuracy is compared with that of other accounts. 1 Introduction It has long been recognized that the possibility and preference rankings of scope readings de- pend to a great degree on the position of scope- taking elements in the surface string (Chomsky, 1975; Hobbs and Shieber, 1987). Yet most tra- ditional accounts of semantic scopal phenomena in natural language have not directly tied these two factors together. Instead, they allow only certain derivations to link the surface structure of a sentence with the representational level at which scope relations are determined, place constraints upon the semantic feature-passing mechanism, or otherwise emulate a constraint which requires some degree of congruence be- tween the surface syntax of a sentence and its preferred scope reading(s). A simpler and more direct approach is sug- gested by constraint-based, multistratal theo- ries of grammar (Grimshaw, 1997; Jackendoff, 1997; Sadock, 1991; Van Valin, 1993). In these models, it is possible to posit multiple represen- tational levels for a sentence without according ontological primacy to any one of them, as in all varieties of transformational grammar. This allows constraints to be formulated which place limits on structural discrepancies between lev- els, yet need not be assimilated into an overrid- ing derivational mechanism. This paper will examine the model of one of these theories, Autolexical Grammar (Sadock, 1991; Sadock, 1996; Schiller et al., 1996), as it is implemented in a computational scope gen- erator and critic. This left-corner chart parser generates surface syntactic structures for each sentence (as the only level of syntactic represen- tation), as well as Function-Argument seman- tic structures and Quantifier/Operator-Scope structures. These latter two structures together determine the semantic interpretation of a sen- tence. It will be shown that this model is both categorical enough to handle standard gener- alizations about quantifier scope, such as bans on extraction from certain domains, and fuzzy enough to present reasonable preference rank- ings among scopings and account for lexical differences in quantifier strength (Hobbs and Shieber, 1987; Moran, 1988). 2 A Multidimensional Approach to Quantifier Scoping 2.1 The Autolexical Model The framework of Autolexical Grammar treats a language as the intersection of numerous inde- pendent CF-PSGs, or hierarchies, each of which corresponds to a specific structural or functional aspect of the language. 
Semantic, syntactic, morphological, discourse-functional and many other hierarchies have been introduced in the literature, but this project focuses on the interactions among only three major hierarchies: Surface Syntax, Function-Argument Structure, and Operator Scope Structure.

The surface syntactic hierarchy is a feature-based grammar expressing those generalizations about a sentence which are most clearly syntactic in nature, such as agreement, case, and syntactic valency. The function-argument hierarchy expresses that (formal) semantic information about a sentence which does not involve scope resolution, e.g., semantic valency and association of referential terms with argument positions, as in Park (1995). The operator scope hierarchy, naturally, imposes a scope ordering on the quantifiers and operators found in the expression. Two other, minor hierarchies are employed in this implementation. The linear ordering of words in the surface string is treated as a hierarchy, and a lexical hierarchy is introduced in order to express the differing lexical "strength" of quantifiers.

Each hierarchy can be represented as a tree in which the terminal nodes are not ordered with respect to one another. This implies that, for example, [John [saw Mary]] and [Mary [saw John]] will both be acceptable syntactic representations for the surface string Mary saw John. The optimal set of hierarchies for a string consists of the candidate hierarchies for each level of representation which together are most structurally congruous. The structural similarity between hierarchies is determined in Autolexical Grammar by means of an Alignment Constraint, which in the implementation described here counts the number of overlapping constituents in the two trees. Thus, while structures similar to [Mary [saw John]] and [John [saw Mary]] will both be acceptable as syntactic and function-argument structure representations, the alignment constraint will strongly favor a pairing in which both hierarchies share the same representation. Structural hierarchies are additionally evaluated by means of a Contiguity Constraint, which requires that the terminal nodes of each constituent of a hierarchy should be together in the surface string, or at least as close together as possible.

2.2 Quantifier Ordering Heuristics

The main constraints which this model places on the relative scope of quantifiers and operators are the alignment of the operator scope hierarchy with syntax, function-argument structure, and the lexical hierarchy of quantifier strength. The first of these constraints reflects "the principle that left-to-right order at the same syntactic level is preserved in the quantifier order"¹ and accounts for syntactic extraction restrictions. The second will favor operator scope structures in which scope-taking elements are raised as little as possible from their base argument positions. The last takes account of the scope preferences of individual quantifiers, such as the fact that each tends to have wider scope than all other quantifiers (Hobbs and Shieber, 1987; Moran, 1988).

As an example of the sort of syntactically-based restrictions on quantifier ordering which this model can implement, consider the generalization listed in Moran (1988), that "a quantifier cannot be raised across more than one major clause boundary."

¹Hobbs and Shieber (1987), p. 49
Because the approach pursued here already has a general constraint which penalizes candidate parses according to the degree of discrepancy between their syntax and scope hierarchies, we do not need to accord a privileged theoretical status to "major clause boundaries."

Figure 1 illustrates the approximate optimal structure accorded to the sentence Some patients believe all doctors are competent on the syntactic and scopal hierarchies, in which an extracted quantifier crosses one major clause boundary. It will be given a misalignment index of 4 (considering for the moment only the interaction of these two levels), because of the four overlapping constituents on the two hierarchies. This example would be misaligned only to degree 2 if the other quantifier order were chosen, and depending on the exact sentence type considered, an example with a scope-taking element crossing two major clause boundaries should be misaligned to about degree 8.

Figure 1: Illustration of the Alignment Constraint. The four highlighted nodes count against this combination of structures, because they overlap with constituents in the other tree.

The fact that the difference between the primary and secondary scopings of this sentence is 2 degrees of alignment, while the difference between crossing one clause boundary and two clause boundaries is 4 degrees of alignment, corresponds with generally accepted assumptions about the acceptability of this example. While the reading in which the scope of quantifiers mirrors their order in surface structure is certainly preferred, the other ordering is possible as well. If the extraction crosses another clause boundary, however, as in Some patients believe Mary thinks all doctors are competent, the reversed scoping is considerably more unlikely.

2.3 Lexical Properties of Quantifiers

In addition to ranking the possible scopings of a sentence based on the surface syntactic positions of its quantifiers and operators, the parsing and alignment algorithm employed in this project takes into account the "strength" of different scope-taking elements. By introducing a lexical hierarchy of quantifier strength, in which those elements more likely to take wide scope are found higher in the tree, we are able to use the same mechanism of the alignment constraint to model the facts which other approaches treat with stipulative heuristics.

For example, in Some patient paid each doctor, the preferred reading is the one in which each takes wide scope, contrary to our expectations based on the generalization that the primary scoping tends to mirror surface syntactic order. An approach employing some variant of Cooper storage would have to account for this by assigning to each pair of quantifiers a likelihood that one will be raised past the other. In this case, it would be highly likely for each to be raised past some. The autolexical approach, however, allows us to achieve the same effect without introducing an additional device. Given a proper weighting of the result of aligning the scope hierarchy with this lexical hierarchy, it is a simple matter to settle on the correct candidates.

3 The Algorithm

3.1 Parsing Strategy

This implementation of the Autolexical account of quantifier scoping is written for SWI-Prolog, and inherits much of its feature-based grammatical formalism from the code listings of Gazdar and Mellish (1989), including dagunify.pl, by Bob Carpenter.
The general strategy employed by the program is first to find all parses which each hierarchy's grammar permits for the string, and then to pass these lists of structures to functions which implement the alignment and contiguity constraints. These functions perform a pairwise evaluation of the agreement between structures, eventually converging on the optimal set of hierarchies.

The same parsing engine is used to generate structures for each of the major hierarchies contributing to the representation of a string. It is based on the left-corner parser of pro_patr.pl in Gazdar and Mellish (1989), attributed originally to Pereira and Shieber (1987). This parser has been extended to store intermediate results for lookup in a hash table.

At present, the parsing of each hierarchy is independent of that of the other hierarchies, but ultimately it would be preferable to allow, e.g., edges from the syntactic parse to contribute to the function-argument parsing process. Such a development would allow us to express categorial prototypes in a natural way. For example, the proposition that "syntactic NPs tend to denote semantic arguments" could be modeled as a default rule for incorporating syntactic edges into a function-argument structure parse.

The "generate and test" mechanism employed here to maximize the congruity of representations on different levels is certainly somewhat inefficient. Some of the structures which it considers will be bizarre by all accounts. To a certain degree, this profligacy is held in check by heuristic cutoffs which exclude a combination from consideration as soon as it becomes apparent that it is misaligned to an unacceptable degree. Ultimately, however, the solution may lie in some sort of parallel approach. A development of this program designed either for parallel Prolog or for a truly parallel architecture could effect a further restriction on the candidate set of representations by implementing constraints on parallel parsing processes, rather than (or in addition to) on the output of such processes.

3.2 Alignment

The alignment constraint (applied by the align/3 predicate here) compares two trees (Prolog lists), returning the total number of overlapping constituents in both trees as a measure of their misalignment. Constituents are said to overlap if the sets of terminal nodes which they dominate intersect, but neither is a subset of the other. The code fragment below provides a rough outline of the operation of this predicate. First, both trees being compared are "pruned" so that neither contains any terminal nodes not found in the other. The terminal elements of each of the tree's constituents are then recorded in lists. Once those constituents which occur in both trees are removed, the sum of the length of these two lists is the total number of overlapping constituents.

align(L1, L2, Num) :-
    flatten(L1, F1),
    flatten(L2, F2),
    union(F1, F2, AllTerms),
    intersection(F1, F2, GoodTerms),
    subtract(AllTerms, GoodTerms, BadTerms),
    % Delete constituents without correlates
    rmbad(L1, BadTerms, Good1),
    rmbad(L2, BadTerms, Good2),
    % Get list of constituents in each tree
    constits(Good1, CList1),
    constits(Good2, CList2),
    % Delete duplicates
    intersection(CList1, CList2, CList3),
    subtract(CList1, CList3, Final1),
    subtract(CList2, CList3, Final2),
    % Count mismatches
    length(Final1, Size1),
    length(Final2, Size2),
    Num is Size1 + Size2.
3.3 Contiguity

While the alignment constraint evaluates the similarity of two trees, the contiguity constraint (contig/3 in this project) calculates the degree of fit between a hierarchy and a string (in this case, the surface string). The relevant measure of "goodness of fit" is taken here to be the minimal number of crossing branches the structure entails. It is true that this approach makes the contiguity constraint dependent on the particular grammatical rules of each representational level. However, since an Autolexical model does not attempt to handle syntax directly in the semantic representation, or morphology in the syntactic representation, there is no real danger of proliferating nonterminal nodes on any particular level.

The definition of the contig predicate is somewhat more complex than that for align, because it must find the minimum number of crossing branches in a structure. It works by maintaining a chart (based on the contval predicate) of the number of branches "covering" each constituent, as it works its way up the tree. The contmin predicate keeps track of the current lowest contiguity violation for the structure, so that worse alternatives can be abandoned as soon as they cross this threshold.

contig([], _, 0).
contig(A, _, 0) :-
    not(is_list(A)), !.
contig([A], Flat, Num) :-
    is_list(A),
    contig(A, Flat, Num), !.
contig([A,B], Flat, Num) :-
    contig(A, Flat, Num1),
    contig(B, Flat, Num2),
    contval(A, Left1, Right1, Num3),
    contval(B, Left2, Right2, Num4),
    Num0 is Num1 + Num2 + Num3 + Num4,
    % Abandon this analysis if it already exceeds the current minimum
    forall(contmin(Min),
           ((Num0 >= Min) *-> fail ; true)),
    Num is Num0,
    % Increment the cover count of every constituent spanned by [A,B]
    forall(contval(X, L, R, N),
           ((L > min(Left1, Left2),
             R < max(Right1, Right2)) *->
                (retract(contval(X, L, R, N)),
                 asserta(contval(X, L, R, N+1)))
            ;   true)),
    asserta(contval([A,B],
                    min(Left1, Left2),
                    max(Right1, Right2), 0)).
contig([B,A], Flat, Num) :-
    contig(A, Flat, Num1),
    contig(B, Flat, Num2),
    contval(A, Left1, Right1, Num3),
    contval(B, Left2, Right2, Num4),
    Num0 is Num1 + Num2 + Num3 + Num4,
    forall(contmin(Min),
           ((Num0 >= Min) *-> fail ; true)),
    Num is Num0,
    forall(contval(X, L, R, N),
           ((L > min(Left1, Left2),
             R < max(Right1, Right2)) *->
                (retract(contval(X, L, R, N)),
                 asserta(contval(X, L, R, N+1)))
            ;   true)),
    asserta(contval([A,B],
                    min(Left1, Left2),
                    max(Right1, Right2), 0)).

4 Conclusion

Multistratal theories of grammar are not often chosen as guidelines for computational linguistics, because of performance and manageability concerns. This project, however, should at least demonstrate that even in a high-level language like Prolog a multistratal parsing model can be made to produce consistent results in a reasonable length of time.

Furthermore, the project described here does more than simply emulate the output of a standard, monostratal CF-PSG parser; it yields a preference ranking of readings for each string, rather than a single right answer. While the Autolexical model may not now be correct for applications in which speed is of primary concern, it has only begun to be implemented computationally, and any serious attempt at inferencing from natural language input will have to produce similar, graded output (Moran, 1988).

References

Noam Chomsky. 1975. Deep structure, surface structure, and semantic interpretation. In Studies on Semantics in Generative Grammar, pages 62-119. Mouton.

Gerald Gazdar and Chris Mellish. 1989. Natural Language Processing in PROLOG. Addison Wesley.

Jane Grimshaw. 1997. Projection, heads, and optimality.
Linguistic Inquiry, 28(3):373-422.

Jerry R. Hobbs and Stuart M. Shieber. 1987. An algorithm for generating quantifier scopings. Computational Linguistics, 13:47-63.

Ray Jackendoff. 1997. The Architecture of the Language Faculty. Number 28 in Linguistic Inquiry Monographs. The MIT Press.

Douglas B. Moran. 1988. Quantifier scoping in the SRI core language engine. In ACL Proceedings, 26th Annual Meeting, pages 33-40.

Jong C. Park. 1995. Quantifier scope and constituency. In ACL Proceedings, 33rd Annual Meeting.

Fernando C.N. Pereira and Stuart M. Shieber. 1987. Prolog and Natural-Language Analysis, volume 10 of CSLI Lecture Notes. Center for the Study of Language and Information.

Jerrold M. Sadock. 1991. Autolexical Syntax: a Theory of Parallel Grammatical Representations. University of Chicago Press.

Jerrold M. Sadock. 1996. Reflexive reference in West Greenlandic. Contemporary Linguistics, 1:137-160.

Eric Schiller, Elisa Steinberg, and Barbara Need, editors. 1996. Autolexical Theory: Ideas and Methods. Mouton de Gruyter.

Robert D. Van Valin, editor. 1993. Advances in Role and Reference Grammar. Number 82 in Current Issues in Linguistic Theory. John Benjamins Publishing Company.
Trainable, Scalable Summarization Using Robust NLP and Machine Learning*

Chinatsu Aone†, Mary Ellen Okurowski‡, James Gorlinsky†
†SRA International, 4300 Fair Lakes Court, Fairfax, VA 22033
‡Department of Defense, 9800 Savage Road, Fort Meade, MD 20755-6000
{aonec, gorlinsk}@sra.com, [email protected]

*We would like to thank Jamie Callan for his help with the INQUERY experiments.

Abstract

We describe a trainable and scalable summarization system which utilizes features derived from information retrieval, information extraction, and NLP techniques and on-line resources. The system combines these features using a trainable feature combiner learned from summary examples through a machine learning algorithm. We demonstrate system scalability by reporting results on the best combination of summarization features for different document sources. We also present preliminary results from a task-based evaluation on summarization output usability.

1 Introduction

Frequency-based (Edmundson, 1969; Kupiec, Pedersen, and Chen, 1995; Brandow, Mitze, and Rau, 1995), knowledge-based (Reimer and Hahn, 1988; McKeown and Radev, 1995), and discourse-based (Johnson et al., 1993; Miike et al., 1994; Jones, 1995) approaches to automated summarization correspond to a continuum of increasing understanding of the text and increasing complexity in text processing. Given the goal of machine-generated summaries, these approaches attempt to answer three central questions:

• How does the system count words to calculate worthiness for summarization?
• How does the system incorporate the knowledge of the domain represented in the text?
• How does the system create a coherent and cohesive summary?

Our work leverages off of research in these three approaches and attempts to remedy some of the difficulties encountered in each by applying a combination of information retrieval, information extraction, and NLP techniques and on-line resources with machine learning to generate summaries. Our DimSum system follows a common paradigm of sentence extraction, but automates acquiring candidate knowledge and learns what knowledge is necessary to summarize.

We present how we automatically acquire candidate features in Section 2. Section 3 describes our training methodology for combining features to generate summaries, and discusses evaluation results of both batch and machine learning methods. Section 4 reports our task-based evaluation.

2 Extracting Features

In this section, we describe how the system counts linguistically-motivated, automatically-derived words and multi-words in calculating worthiness for summarization. We show how the system uses an external corpus to incorporate domain knowledge in contrast to text-only statistics. Finally, we explain how we attempt to increase the cohesiveness of our summaries by using name aliasing, WordNet synonyms, and morphological variants.

2.1 Defining Single and Multi-word Terms

Frequency-based summarization systems typically use a single word string as the unit for counting frequency. Though robust, such a method ignores the semantic content of words and their potential membership in multi-word phrases and may introduce noise in frequency counting by treating the same strings uniformly regardless of context.
Our approach, similar to (Tzoukerman, Klavans, and Jacquemin, 1997), is to apply NLP tools to extract multi-word phrases automatically with high accuracy and use them as the basic unit in the summarization process, including frequency calculation. Our system uses both text statistics (term frequency, or tf) and corpus statistics (inverse document frequency, or idf) (Salton and McGill, 1983) to derive signature words as one of the summarization features. If single words were the sole basis of counting for our summarization application, noise would be introduced both in term frequency and inverse document frequency.

First, we extracted two-word noun collocations by pre-processing about 800 MB of L.A. Times/Washington Post newspaper articles using a POS tagger and deriving two-word noun collocations using mutual information. Secondly, we employed SRA's NameTag(TM) system to tag the aforementioned corpus with names of people, entities, and places, and derived a baseline database for tf*idf calculation. Multi-word names (e.g., "Bill Clinton") are treated as single tokens and disambiguated by semantic types in the database.

2.2 Acquiring Knowledge of the Domain

Knowledge-based summarization approaches often have difficulty acquiring enough domain knowledge to create conceptual representations for a text. We have automated the acquisition of some domain knowledge from a large corpus by calculating idf values for selecting signature words, deriving collocations statistically, and creating a word association index (Jing and Croft, 1994).

2.3 Recognizing Sources of Discourse Knowledge through Lexical Cohesion

Our approach to acquiring sources of discourse knowledge is much shallower than those of discourse-based approaches. For a target text for summarization, we tried to capture lexical cohesion of signature words through name aliasing with the NameTag tool, synonyms with WordNet, and morphological variants with morphological pre-processing.

3 Combining Features

We experimented with combining summarization features in two stages. In the first batch stage, we experimented to identify what features are most effective for signature words. In the second stage, we took the best combination of features determined by the first stage and used it to define "high scoring signature words." Then, we trained DimSum over the high-score signature word feature, along with conventional length and positional information, to determine which training features are most useful in rendering useful summaries. We also experimented with the effect of training and different corpora types.

3.1 Batch Feature Combiner

3.1.1 Method

In DimSum, sentences are selected for a summary based upon a score calculated from the different combinations of signature word features and their expansion with the discourse features of aliases, synonyms, and morphological variants. Every token in a document is assigned a score based on its tf*idf value. The token score is used, in turn, to calculate the score of each sentence in the document. The score of a sentence is calculated as the average of the scores of the tokens contained in that sentence. To obtain the best combination of features for sentence extraction, we experimented extensively. The summarizer allows us to experiment with both how we count and what we count for both inverse document frequency and term frequency values.
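This scoring scheme is compact enough to sketch in code. The following is a minimal Python illustration, not the DimSum implementation itself: the token lists and the idf table are hypothetical stand-ins for the system's actual data structures, sentences are assumed to be pre-tokenized with multi-word names and collocations already merged into single tokens, and the size rule n = N**k anticipates the selection scheme described in the next subsection.

    def sentence_scores(sentences, idf):
        """Score each sentence as the average tf*idf of its tokens.
        `sentences` is a list of token lists; `idf` maps token -> idf value."""
        # Term frequency is counted over the whole document.
        tf = {}
        for sent in sentences:
            for tok in sent:
                tf[tok] = tf.get(tok, 0) + 1
        scores = []
        for sent in sentences:
            total = sum(tf[tok] * idf.get(tok, 0.0) for tok in sent)
            scores.append(total / len(sent) if sent else 0.0)
        return scores

    def select_summary(sentences, scores, k=0.5):
        """Choose the top-scoring sentences; summary size grows as N**k."""
        n = max(1, round(len(sentences) ** k))
        ranked = sorted(range(len(sentences)), key=scores.__getitem__, reverse=True)
        return [sentences[i] for i in sorted(ranked[:n])]   # restore document order

Because the chosen sentences are re-sorted into document order at the end, the extract reads in the same sequence as the source text.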
Because different baseline databases can affect idf values, we examined the effect on summarization of multiple baseline databases based upon multiple definitions of the signature words. Similarly, the discourse features for signature words, i.e., synonyms, morphological variants, or name aliases, can affect tf values. Since these discourse features boost the term frequency score within a text when they are treated as variants of signature words, we also examined their impact upon summarization.

After every sentence is assigned a score, the top n highest scoring sentences are chosen as a summary of the content of the document. Currently, the DimSum system chooses the number of sentences equal to a power k (between zero and one) of the total number of sentences. This scheme has an advantage over choosing a given percentage of document size, as it yields more information for longer documents while keeping summary size manageable.

3.1.2 Evaluation

Over 135,000 combinations of the above parameters were performed using 70 texts from the L.A. Times/Washington Post. We evaluated the summary results against the human-generated extracts for these 70 texts in terms of F-Measures. As the results in Table 1 indicate, name recognition, alias recognition and WordNet (for synonyms) all make positive contributions to the system summary performance.

[Table 1: Results for Different Feature Combinations. F-Measures for combinations of the Entity, Place, Person, Alias, and Synonym features, ranging from 41.3 for the best combination down to 36.7 for the worst.]

The most significant result of the batch tests was the dramatic improvement in performance from withholding person names from the feature combination algorithm. The most probable reason for this is that personal names usually have high idf values, but they are generally not good indicators of the topics of articles. Even when names of people are associated with certain key events, documents are not usually about these people. Not only do personal names appear to be very misleading in terms of signature word identification, they also tend to mask synonym group performance. WordNet synonyms appear to be effective only when names are suppressed.

3.2 Trainable Feature Combiner

3.2.1 Method

With our second method, we developed a trainable feature combiner using Bayes' rule. Once we had defined the best feature combination for high scoring tf*idf signature words in a sentence in the first round, we tested the inclusion of commonly acknowledged positional and length information. From manually extracted summaries, the system automatically learns to combine the following extracted features for summarization:

• short sentence length (less than 5 words)
• inclusion of high-score tf*idf signature words in a sentence
• sentence position in a document (1st, 2nd, 3rd or 4th quarter)
• sentence position in a paragraph (initial, medial, final)

Inclusion in the high scoring tf*idf signature word set was determined by a variable system parameter (identical to that used in the pre-trainable version of the system). Unlike Kupiec et al.'s experiment, we did not use the cue word feature. Possible values of the paragraph feature are identical to how Kupiec et al. used this feature, but applied to all paragraphs because of the short length of the newspaper articles.
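Combining these four features with Bayes' rule, in the style of Kupiec, Pedersen, and Chen (1995), can be sketched as follows. This is our own minimal rendering under a conditional-independence (naive Bayes) assumption with add-one smoothing, not SRA's code; the feature names and dictionary layout are illustrative.

    from collections import defaultdict

    FEATURES = ("short", "high_score", "doc_quarter", "para_position")

    def train(examples):
        """`examples` pools (feature_dict, in_summary) pairs over the
        training documents; in_summary is 0/1 from the human extracts."""
        counts = {f: defaultdict(lambda: [0, 0]) for f in FEATURES}
        n = [0, 0]                       # n[1]: summary sentences, n[0]: rest
        for feats, label in examples:
            n[label] += 1
            for f in FEATURES:
                counts[f][feats[f]][label] += 1
        return counts, n

    def posterior_odds(feats, counts, n, alpha=1.0):
        """Odds that a sentence belongs in the summary, combining the
        features with Bayes' rule under an independence assumption."""
        odds = n[1] / n[0]
        for f in FEATURES:
            c = counts[f][feats[f]]
            v = len(counts[f])           # number of observed values of feature f
            p_in = (c[1] + alpha) / (n[1] + alpha * v)   # P(value | summary)
            p_out = (c[0] + alpha) / (n[0] + alpha * v)  # P(value | not summary)
            odds *= p_in / p_out
        return odds

Sentences are then ranked by their posterior odds and the top n extracted exactly as in the batch combiner above.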
3.2.2 Evaluation

We performed two different rounds of experiments, the first with newspaper sets and the second with a broader set from the TREC-5 collection (Harman and Voorhees, 1996). In both rounds we experimented with

• different feature sets
• different data sources
• the effects of training.

In the first round, we trained our system on 70 texts from the L.A. Times/Washington Post (latwp-dev1) and then tested it against 50 new texts from the L.A. Times/Washington Post (latwp-test1) and 50 texts from the Philadelphia Inquirer (pi-test1). The results are shown in Table 2. In both cases, we found that the effects of training increased system scores by as much as 10% F-Measure or greater. Our results are similar to those of Mitra (Mitra, Singhal, and Buckley, 1997), but our system with the trainable combiner was able to outperform the lead sentence summaries.

Table 2: Results on Different Test Sets with or without Training

  Text Set      Training?   F-M    Lead
  latwp-dev1    NO          41.3
  latwp-dev1    YES         49.9   48.2
  latwp-test1   NO          31.9
  latwp-test1   YES         44.6   42.0
  pi-test1      NO          40.5
  pi-test1      YES         49.7   47.7

Table 3 summarizes the results of using different training features on the 70 texts from the L.A. Times/Washington Post (latwp-dev1). It is evident that positional information is the most valuable, while the sentence length feature introduces the most noise. High scoring signature word sentences contribute, especially in conjunction with the positional information and the paragraph feature. High Score refers to using a tf*idf metric with WordNet synonyms and name aliases enabled, person names suppressed, but all other name types active.

[Table 3: Effects of Different Training Features. F-Measures for combinations of the Sentence Length, High Score, Document Position, and Paragraph Position features, ranging from 24.6 with the least useful features up to 49.9 for the best combination.]

The second round of experiments was conducted using 100 training and 100 test texts for each of six sources from the TREC-5 corpora (i.e., Associated Press, Congressional Records, Federal Registry, Financial Times, Wall Street Journal, and Ziff). Each corpus was trained and tested on a large baseline database created by using multiple text sources. Results on the test sets are shown in Table 4. The discrepancy in results among data sources suggests that summarization may not be equally viable for all data types. This squares with results reported in (Nomoto and Matsumoto, 1997), where learned attributes varied in effectiveness by text type.

Table 4: Results of Summaries for Different Corpora

  Text Set    F-M    Precision   Recall   Short   High Score   Doc. Position
  ap-test1    49.7   47.5        52.1     YES     YES          YES
  cr-test1    36.1   35.1        37.0     YES     NO           YES
  fr-test1    38.4   33.8        44.5     YES     NO           YES
  ft-test1    46.5   41.8        52.3     YES     YES          YES
  wsj-test1   51.5   48.5        54.8     YES     NO           YES
  zf-test1    46.6   45.0        48.3     NO      YES          YES

4 Task-based Evaluation

The goal of our task-based evaluation was to determine whether it was possible to retrieve automatically generated summaries with similar precision to that of retrieving the full texts. Underpinning this was the intention to examine whether a generic summary could substitute for a full-text document, given that a common application for summarization is assumed to be browsing/scanning summarized versions of retrieved documents. The assumption is that summaries help to accelerate the browsing/scanning without information loss.
Miike et al. (1994) described preliminary experiments comparing browsing of original full texts with browsing of dynamically generated abstracts, and reported that abstract browsing was about 80% of the original browsing function with precision and recall about the same. There is also an assumption that summaries, as encapsulated views of texts, may actually improve retrieval effectiveness. (Brandow, Mitze, and Rau, 1995) reported that using programmatically generated summaries improved precision significantly, but with a dramatic loss in recall.

We identified 30 TREC-5 topics, classified by the easy/hard retrieval schema of (Voorhees and Harman, 1996): five as hard, five as easy, and the remaining twenty randomly selected. In our evaluation, INQUERY (Allan et al., 1996) retrieved and ranked 50 documents for these 30 TREC-5 topics. Our summary system summarized these 1500 texts at 10% reduction, 20%, 30%, and at what our system considers the BEST reduction. For each level of reduction, a new index database was built for INQUERY, replacing the full texts with summaries. The 30 queries were run against the new database, retrieving 10,000 documents per query. At this point, some of the summarized versions were dropped, as these documents no longer ranked in the 10,000 per topic, as shown in Table 5. For each query, all results except for the documents summarized were thrown away. New rankings were computed with the remaining summarized documents. Precision for the INQUERY baseline (INQ.base) was then compared against each level of the reduction.

Table 6 shows that at each level of reduction the overall precision dropped for the summarized versions. With more reduction, the drop was more dramatic. However, the BEST summary version performed better than the percentage methods.

We examined in more detail document-level averages for five "easy" topics for which the INQUERY system had retrieved a high number of texts. Table 7 reveals that for topics with a high INQUERY retrieval rate the precision is comparable.

Table 7: Precision for 5 High Recall Queries

  Precision at   INQ.base   INQ.BEST
  5 docs         .8000      .8000
  10 docs        .8000      .7800
  15 docs        .7465      .7200
  20 docs        .7600      .7200
  30 docs        .7067      .6733

We posit that when queries have a high number of relevant documents retrieved, the summary system is more likely to reduce information rather than lose information. Query topics with a high retrieval rate are likely to have documents on the subject matter, and therefore the summary just reduces the information, possibly alleviating the browsing/scanning load. We are currently examining documents lost in the re-ranking process and are cautious in interpreting results because of the difficulty of closely correlating the term selection and ranking algorithms of automatic IR systems with human performance. Our experimental results do indicate, however, that generic summarization is more useful when there are many documents of interest to the user and the user wants to scan summaries and weed out less relevant documents quickly.

5 Summary

Our summarization system leverages off research in information retrieval, information extraction, and NLP. Our experiments indicate that automatic summarization performance can be enhanced by discovering different combinations of features through a machine learning technique, and that it can exceed lead summary performance and is affected by data source type.
Our task-based evaluation reveals that generic summaries may be more effectively applied to high-recall document retrievals.

Table 5: INQUERY Baseline Recall vs. Summarized Versions

  Run         INQ.base   INQ.10%        INQ.20%        INQ.30%        INQ.BEST
  Retrieved   1500       1500           1500           1500           1500
  Relevant    4551       4551           4551           4551           4551
  Rel-ret     415        294 (-29.2%)   332 (-20.0%)   335 (-19.3%)   345 (-16.9%)

Table 6: INQUERY Baseline Precision vs. Summarized Versions

  Precision at   INQ.base   INQ.10%          INQ.20%          INQ.30%          INQ.BEST
  5 docs         0.4133     0.3267 (-21.0)   0.3800 (-8.1)    0.3067 (-25.8)   0.3333 (-19.4)
  10 docs        0.3700     0.2600 (-29.7)   0.2800 (-24.3)   0.2933 (-20.7)   0.3100 (-16.2)
  15 docs        0.3511     0.2400 (-31.6)   0.2800 (-20.3)   0.2867 (-18.3)   0.2867 (-18.3)
  20 docs        0.3383     0.2217 (-34.5)   0.2600 (-23.1)   0.2733 (-19.2)   0.2717 (-19.7)
  30 docs        0.3067     0.2056 (-33.0)   0.2400 (-21.7)   0.2522 (-17.8)   0.2556 (-16.7)

References

Allan, J., J. Callan, B. Croft, L. Ballesteros, J. Broglio, J. Xu, and H. Shu. 1996. INQUERY at TREC-5. In Proceedings of The Fifth Text REtrieval Conference (TREC-5).

Brandow, Ron, Karl Mitze, and Lisa Rau. 1995. Automatic condensation of electronic publications by sentence selection. Information Processing and Management, 31:675-685.

Edmundson, H. P. 1969. New methods in automatic abstracting. Journal of the Association for Computing Machinery, 16(2):264-285.

Harman, Donna and Ellen M. Voorhees, editors. 1996. Proceedings of The Fifth Text REtrieval Conference (TREC-5). National Institute of Standards and Technology, Department of Commerce.

Jing, Y. and B. Croft. 1994. An Association Thesaurus for Information Retrieval. Technical Report 94-17, Center for Intelligent Information Retrieval, University of Massachusetts.

Johnson, F. C., C. D. Paice, W. J. Black, and A. P. Neal. 1993. The application of linguistic processing to automatic abstract generation. Journal of Documentation and Text Management, 1(3):215-241.

Jones, Karen Sparck. 1995. Discourse modeling for automatic summaries. In E. Hajicova, M. Cervenka, O. Leska, and P. Sgall, editors, Prague Linguistic Circle Papers, volume 1, pages 201-227.

Kupiec, Julian, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International SIGIR Conference on Research and Development in Information Retrieval, pages 68-73.

McKeown, Kathleen and Dragomir Radev. 1995. Generating summaries of multiple news articles. In Proceedings of the 18th Annual International SIGIR Conference on Research and Development in Information Retrieval, pages 74-78.

Miike, Seiji, Etsuo Itho, Kenji Ono, and Kazuo Sumita. 1994. A full text retrieval system with a dynamic abstract generation function. In Proceedings of the 17th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 152-161.

Mitra, Mandar, Amit Singhal, and Chris Buckley. 1997. An automatic text summarization and text extraction. In Proceedings of the Intelligent Scalable Text Summarization Workshop, Association for Computational Linguistics (ACL), pages 39-46.

Nomoto, T. and Y. Matsumoto. 1997. Data reliability and its effects on automatic abstraction. In Proceedings of the Fifth Workshop on Very Large Corpora.

Reimer, Ulrich and Udo Hahn. 1988. Text condensation as knowledge base abstraction. In Proceedings of the 4th Conference on Artificial Intelligence Applications (CAIA), pages 338-344.

Salton, G. and M. McGill, editors. 1983. Introduction to Modern Information Retrieval. McGraw-Hill Book Co., New York, New York.
Tzoukerman, E., J. Klavans, and C. Jacquemin. 1997. Effective use of natural language processing techniques for automatic conflation of multi-word terms: the role of derivational morphology, part of speech tagging and shallow parsing. In Proceedings of the Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 148-155.

Voorhees, Ellen M. and Donna Harman. 1996. Overview of the fifth text retrieval conference (TREC-5). In Proceedings of The Fifth Text REtrieval Conference (TREC-5).
Long Distance Pronominalisation and Global Focus

Janet Hitzeman and Massimo Poesio
CSTR and HCRC, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, Scotland
{J.Hitzeman, Massimo.Poesio}@ed.ac.uk

Abstract

Our corpus of descriptive text contains a significant number of long-distance pronominal references (8.4% of the total). In order to account for how these pronouns are interpreted, we re-examine Grosz and Sidner's theory of the attentional state, and in particular the use of the global focus to supplement centering theory. Our corpus evidence concerning these long-distance pronominal references, as well as studies of the use of descriptions, proper names and ambiguous uses of pronouns, lead us to conclude that a discourse focus stack mechanism of the type proposed by Sidner is essential to account for the use of these referring expressions. We suggest revising the Grosz & Sidner framework by allowing for the possibility that an entity in a focus space may have special status.

1 Motivation

We call LONG-DISTANCE PRONOMINALISATIONS those cases of pronoun use in which the antecedent of the pronoun does not occur either in the same sentence as the pronoun or in the immediately preceding one, but further back in the text. These cases are thought to be rare on the basis of studies such as (Hobbs, 1978), which found that 98% of pronoun antecedents in the corpus analysed were in the same sentence as the pronoun or the previous one. However, our analysis of a small corpus of oral descriptions of museum items collected for the ILEX project (Hitzeman et al., 1997) revealed that long-distance pronouns are much more common in this kind of data - four times as common, in fact: out of a total of 83 pronouns, 7 (8.4%) were long-distance. The typical pattern of long-distance pronominalisation in the ILEX dialogues is shown in (1), where the pronoun him in the last sentence refers to the jeweller, mentioned most recently two sentences earlier.

(1) JO: Okay, thank you. Shall we look at the object in case number 16, number 1? There's a set of three objects here.
LG: 1. Yes.
2. What these symbolise for me are one of the preoccupations of the 1980s, which is recycling.
3. The jeweller who made these bangles was particularly interested in the idea of using intrinsically worthless material - material that had been thrown away, old junk - and he lavished on those materials an incredibly painstaking and time-consuming technique, so that the amount of time put into the labour of making these jewels bears absolutely no relation to the value of the materials that he's used.
4. And if you look at, for instance, the bangle at the bottom - that's the blue and red one - what looks as though it's painted decoration is in fact inlaid; it's bits of cut-off razor-blade, biro, knitting needles, inlaid into layer after layer of resin, which is done in emulation of Japanese lacquer technique.
5. And that particular bangle took him something like 120 hours of work.

All 7 long-distance pronouns in the ILEX dialogues we have studied refer to discourse entities introduced in background text in this way.

Unlike Sidner's theory of focus (Sidner, 1979), the theory of the attentional state in (Grosz and Sidner, 1986) (henceforth: G&S) does not include explicit provision for long-distance pronominalisations, although some of the necessary tools are potentially already there, as we will see.
The component of the theory that deals with pronominal reference, centering theory (Grosz et al., 1995), only accounts for cases in which the antecedent of a pronoun is introduced by the previous sentence; cases such as (1) have to be handled by different mechanisms. In this paper we look at the phenomenon of long-distance pronominalisation in some detail, examining data from different domains, and consider its implications for G&S's theory.

2 Theories of focus

Space unfortunately prevents a full discussion of Grosz's (1977), Sidner's (1979), and G&S's (1986) theories of focus and the attentional state in this abstract. The crucial aspects of these theories, for the purpose of the discussion below, are as follows. First of all, G&S propose a distinction between two components of the attentional state: the GLOBAL FOCUS, structured as a stack of focus spaces and accessed to interpret definite descriptions; and the LOCAL FOCUS, consisting of the information preferentially used to interpret pronouns. In addition, they adopt CENTERING THEORY (Grosz et al., 1995) as a theory of the local focus.

Secondly, although G&S's theory integrates ideas from both Grosz's and Sidner's original theories, and although both of these theories assumed a stack structure, the global focus in G&S's theory is structured as a stack of focus spaces, as in Grosz's original proposal, rather than as a stack of discourse foci, as in Sidner's original theory. The claim that different parts of the attentional state are accessed when resolving pronouns and definite descriptions is supported, broadly speaking, by psycholinguistic research (see, e.g., (Garrod, 1993)). The main claims of centering theory are also consistent with psycholinguistic results (Hudson, 1988; Gordon et al., 1993). To our knowledge, however, the choice of a stack of focus spaces over a stack of discourse foci has never been motivated; yet this decision plays a crucial role in our problem, as we will see.

A point worth keeping in mind throughout the following discussion is that, although the concept of CB (centering theory's name for the current most salient entity) was originally introduced as 'roughly corresponding to Sidner's concept of discourse focus', in fact it is not clear that the two concepts are capturing the same intuitions (Poesio and Stevenson, 1998). Although it is often the case that the CB and the discourse focus coincide, this is not true in general.[1] For the purposes of this paper, however, we will assume that the two notions do coincide, and will use the neutral term MOST SALIENT ENTITY (MSE) to refer to the discourse focus / CB of a particular segment of text.

[1] This intuitive impression was confirmed by a recent study (Giouli, 1996), whose author tracked both the 'intuitive CB' and the 'intuitive discourse focus' of 8 Map Task conversations.

3 The Data

The Intelligent Labelling Explorer (ILEX) project is building a system that generates descriptions of objects displayed in a museum gallery.[2] In order to generate the most natural descriptions of the objects, dialogues with a museum curator were collected, describing objects in the National Museum of Scotland's 20th Century Jewellery Gallery. We will refer to this corpus as the ILEX corpus. In the dialogues, the curator (LG) moves from case to case as directed by an observer (JO) and describes the jewels in each case, as in example (1).

[2] http://www.cogsci.ed.ac.uk/~alik/ilex/systemintro.html
The work described here is part of two related projects: SOLE, the goal of which is to extend the ILEX system with the capability of generating prosodically adequate speech, and GNOME, which is concerned with the generation of nominal expressions. A second corpus of museum object descriptions was collected for use with SOLE; we will refer to this corpus as the SOLE corpus.

4 Analysis

4.1 First Hypothesis

Because G&S's theory of the attentional state already hypothesises global focus structures in addition to the local attentional structures assumed in centering theory, the simplest explanation for our cases of long-distance pronominalisation is to hypothesise that readers exploit the global focus to resolve pronouns in such cases.

Assuming that the global focus is involved in these cases, instead of complicating the local focus / centering theory, is consistent with the little available psychological evidence, e.g., with the results of Clark and Sengul (1979), who observed a slowdown in reading times for the sentence containing the pronoun when the antecedent is not in the same or the previous sentence, implying that long-distance pronominal anaphora are handled differently.

Furthermore, suggesting that these pronouns are resolved by accessing the global focus would not really amount to a revision of the basic assumptions of G&S's theory. Although no explicit proposal concerning the respective roles of local focus and global focus in pronoun resolution has ever been made in the literature on the G&S framework, cases of pronouns involving access to the global attentional structure rather than to the local focus have already been discussed in this literature. So-called RETURN-POPS, which are pronouns that signal a return to a superordinate discourse segment, were discussed by Grosz (1977) and then in detail in (Fox, 1987). In (2), for example, sentence 5 resumes the segment interrupted by 2-4; the antecedent for the pronoun her is supposed to be found on the stack, although the details of this process have never really been discussed.[3]

(2) 1. C: Ok Harry. I'm have a problem that uh my-with today's economy my daughter is working,
2. H: I missed your name.
3. C: Hank.
4. H: Go ahead Hank.
5. C: as well as her husband

[3] This example is from (Pollack et al., 1982).

A second case of pronouns that clearly seem to involve access to some global structure are so-called 'generic' pronouns, such as they in (3):

(3) John went back to the hotel. He looked for Mary in their room, but couldn't find her. They told him that she had left about an hour earlier.

(We are not aware of any account of these uses of pronouns within the G&S framework.)

As we will see in a moment, the long-distance pronouns observed in the ILEX dialogues are neither generic pronouns nor return-pops; nevertheless, we are going to show that these cases, as well, are resolved by accessing the global focus.

4.2 Long-distance pronouns need not be return-pops

The use of him in the last sentence of (1) could only be termed a RETURN-POP if it were to involve a return to the previous discourse segment which 'pops over' sentence 4 (And if you look at, for instance, the bangle at the bottom ...) and 'closes off' the material introduced in that sentence. But this is clearly not the case, as shown by the fact that the final sentence contains a reference to both the jeweller and the bangle.
Indeed, the bangle could also be referred to with a pronoun: And it took him something like 120 hours of work. The fact that pronouns and definite NPs in the last sentence can refer back to material in the 4th sentence indicates that this material must still be on the stack.

4.3 Discourse Structure in the Example Text

Before discussing how the global focus is used for resolving pronouns such as the long-distance pronoun in the last sentence of (1), we need to discuss the structure of these examples: i.e., is the part of (1) which has the jeweller as MSE (2nd sentence) still on the stack when the part that describes details of the jewel and contains the long-distance pronoun (3rd and 4th sentence) is processed?

Answering this question is made more difficult by the fact that G&S's theory of the intentional structure is very abstract, and therefore does not help much in specific cases, especially when the genre is not task-oriented conversations. More specific indications concerning the structure of the relevant example, and more in general of the conversations in the ILEX corpus, are given by Rhetorical Structure Theory (RST) (Mann and Thompson, 1988),[4] although even with RST it is still possible to analyse any given text in many different ways. Nevertheless, we believe that the structure depicted in Figure 1 is a plausible analysis for (1); an alternative analysis would be to take the 4th and 5th sentence as elaborations of and he lavished on those materials an incredibly painstaking technique ..., but in this case, as well (and in all other rhetorical structures we could consider), sentences 4 and 5 are satellites of sentence 3. (We have employed the set of rhetorical relations currently used to analyse the ILEX data.)

The relation between G&S's and RST's notion of structure has been analysed by, among others, (Moore and Paris, 1993; Moser and Moore, 1996). According to Moser and Moore, the relation can be characterised as follows: an RST nucleus expresses an intention I_n; a satellite expresses an intention I_s; and I_n dominates I_s. Thus, in (1), the nucleus of the exemplification relation, sentence 3, would dominate the satellite, consisting of sentences 4 and 5. We will make the same assumption here. Hence we can assume that the third sentence in (1) will still be on the stack when processing the 4th and 5th sentences.[5] This would also hold for the alternative rhetorical structures we have considered.

[4] Fox, as well, used RST to analyse the structure of texts in her study of the effect of discourse structure on anaphora (Fox, 1987).

[5] Some readers might wonder whether it wouldn't be simpler to assume that all of the utterances in (1) are part of the same segment. This assumption would indeed make the antecedent accessible; however, it would not explain the data, not at least if we assume that it is centering theory that determines anaphoric reference, since centering does not explain how a pronoun can refer to an antecedent two sentences back. Assuming that there is more than one segment in such texts, instead, will turn out to be not just a more plausible assumption about segmentation; it will also give us a simple way to explain the data.
The simplest explanation con- sistent with the G&S's framework would be to as- sume that resolving such pronouns involves search- ing for the first discourse entity in the focus space stack that satisfies gender and number constraints. Under the assumptions about the discourse struc- ture of examples like (1) just discussed, this expla- nation would indeed account for that example; there is evidence, however, that additional constraints are involved. The first bit of evidence is that the pres- ence on the focus space stack of an appropriate an- tecedent does not always make the use of a long distance pronoun felicitous. Consider tile follow- ing fi'agment of an article that appeared in The Guardian, January 28, 1995, p.3. (4) Joan Partington, aged 44, from Bolton, Lan- cashire, has six children. The eldest are two 17-year-old twin boys, one awaiting a heart by- pass operation and the other with severe be- havioral problems. A 13-year-old son has hy- drocephalus. She was living with her hus- band when Wigan magistrates ordered her to be jailed unless she paid £5 per week, although he earned only £70 per week as a part-time postman. anaphoric reference, since centering does not explain how a pronoun can refer to an antecedent two sentences back. Assum- ing that there is more than one segment in such texts, instead, will turn out to be not just a more plausible assumption about segmentation; it will also give us a simple way to explain the data, The use of he in the last sentence is awkward, even though there is a discourse entity on the focus space stack- the husband- that would satisfy the con- straints imposed by the pronoun. This seems to in- dicate that the elements of a focus space are not all equally accessible. The second relevant bit of evidence concerns the use of proper names in the ILEX corpus. It may hap- pen in the ILEX dialogues that a designer like Jessie King is first mentioned by name in a segment where she is not the main topic of discussion, as in Other jewels in the Bohemian style include a brooch by Jessie King. If this is the case, then when later we're talking about another jewel that King designed, she will have to be introduced again with a full proper name, Jessie King, rather than simply King. If, how- ever, she becomes the 'main topic' of discussion, then later, whenever we talk about her again, we can use reduced forms of her proper name, such as King. Again, this difference is not easy to explain in terms of focus spaces if we assume that all objects in a focus space have the same status. A third class of expressions providing evidence relevant to this discussion are bridging descriptions, i.e., definite descriptions like the door that refer to an object associated with a previously mentioned discourse entity such as the house, rather than to the entity itself (Clark, 1977). Poesio et al. (1997; 1998) report experiments in which different types of lexical knowledge sources are used to resolve bridg- ing descriptions and other cases of definite descrip- tions that require more than simple string match for their resolution. 
Their results indicate that to re- solve bridging descriptions it is not sufficient sim- ply to find which of the entities in the current focus 553 space is semantically closest to the bridging descrip- tion: in about half of the cases of bridging descrip- tions that could be resolved on the basis of the lexi- cal knowledge used in these experiments, the focus spaces contained an entity whose description was more closely related to that of the bridging descrip- tion than the one of the actual antecedent(s). This evidence about infelicitous pronouns, proper names, and bridging descriptions suggests that the entities in a focus space are not all equally salient. In fact, one could even wonder if we need focus spaces at all; i.e., if Sidner's original proposal - ac- cording to which it's just the MSE that goes on the stack, not the whole focus space - is correct. A re- vision of G&S's theory along these lines- i.e., in which the focus space stack is replaced by an MSE stack- would still explain (1), since the jeweller is clearly the MSE of sentence 3; indeed, all 7 cases of long-distance pronouns found in the ILEX corpus have a previous MSE as their antecedent. But, in ad- dition, this revision would explain the awkwardness of (4): the husband was never an MSE, so it would not be on the stack. A global focus of this type would also give us a way to formulate a restriction on using shortened forms of proper names that would account for the facts observed in the ILEX corpus: reduced NPs are allowed for entities that have been introduced as MSEs, full NPs are needed otherwise. And fi- nally, keeping track of previous MSEs seems essen- tial for bridging descriptions as well: in order to find the reasons for the low performance of algo- rithms for resolving bridging descriptions entirely based on lexical knowledge, (Poesio et al., 1998) examined the bridging descriptions their corpus to find out their 'preferred' antecedent. 6 They found that the preferred antecedent of a bridging descrip- tion is a previous MSE in 54 out of 203 cases. In the SOLE COrpUS, 8 OUt of 11 bridging descriptions relate to the MSE. Does this mean, then, that we can get rid of fo- cus spaces, and assume that it's MSEs that go on the stack? Before looking at the data, we have to be clear as to what would count as evidence one way or the other. Even an approach in which only previ- ous MSES are on the stack would still allow access to entities which are part of what Grosz called the IM- PLICIT FOCUS of these MSEs, i.e., the entities that 6As discussed in (Poesio and Vieira, 1998), in general there is more than one potential 'antecedent' for a bridging descrip- tion in a text. are 'strongly associated' with the MSES. This notion of 'strong association' is difficult to define- in fact, it is likely to be a matter of degree- but nevertheless it is plausible to assume that the objects 'strongly associated' with a discourse entity A do not include every discourse entity B which is part of a situation described in the text in which A is also involved; and this can be tested with linguistic examples, up to a point. For example, whereas definite descriptions like the radiator cap can easily be resolved in a null context to a car, descriptions like the dog can't, as shown by the infelicity of (5d) as a continuation of (5b), even though dogs in cars are not uncommon; some contextual antecedent is needed. (5) a. Mary saw a dark car go by quickly. b. It was a bright, warm day. c. The radiator cap was shining in the sun. d. 
The dog was enjoying the warmth. The question we have to answer, then, is whether the only information that is available as part of the attentional state is what is 'strongly associated' with one of the previous MSES, or, instead, all of the in- formation mentioned in the text. 7 Now, sentences like (5a) license both bridging de- scriptions to the car, as in (5c), and to Mary, as in Her hat had become very hot. Whatever we take the MSE of (5a) tO be, it seems implausible to ar- gue that both the bridging description s the radiator cap and Her hat are resolved by looking at the ob- jects 'strongly associated' with that discourse entity. It is much simpler to assume that both Mary and the car are still accessible as part of the focus space constructed to represent the situation described by the text. This also holds for what we have called 'generic' pronouns, as shown by (3), in which they refers to individuals associated with the hotel men- tioned in the first sentence, not to the MSE, John. And indeed, Sidner assumed two stacks- one of discourse foci, the other of actor foci. But even this extension would not be enough, because the an- tecedent of a bridging description is not always an entity explicitly introduced in the text, but can also be a more abstract DISCOURSE TOPIC, by which we 7Notice however that the claim that only MSES go on the stack does not entail that everything else in the text is simply forgotten- the claim is simply that that intbrmation is not avail- able for resolving references anymore; presumably it would be stored somewhere in 'long term memory'. Conversely, the claim that everything stays on the stack would have to be supplemented by some story concerning how information gets forgotten-e.g., by some caching mechanism such as the one proposed by Walker (1996). 554 mean an issue / proposition that can be said to char- acterise the content of the focus space as a whole. In a corpus analysis done in connection with (Poesio et al., 1997; Poesio et al., 1998), we found that 7 out of 70 inferential descriptions were of this type; in the SOLE corpus, in which 3 out of 11 bridging de- scriptions behave this way. An example of this use is the description the problem below, that refers to the problem introduced by the first sentence in the text: (6) Solo woodwind players have to be creative if they want to work a lot, because their reper- toire and audience appeal are limited .... The oboist Heinz Holliger has taken a hard line about the problem ... Reference to abstract objects in general seem to re- quire maintaining information about the events and situations described by a text on the stack- see, e.g., (Webber, 1991). So, it looks like what we need is something of a compromise between the notion of global focus implicit in Sidner's original proposal and that proposed by G&S. 4.5 The proposal The following hypothesis about the global focus and its use in pronoun resolution seems to provide the best account of the evidence we have examined: 1. The global focus consists of a stack of fo- cus spaces, as in G&S's proposal. Each of these focus spaces can be summarised as be- ing 'about' some object / proposition / issue- indeed, more than one- for which we will use the term DISCOURSE TOPICS; but, in addition, 2. Each focus space may be optionally associated with a MOST SALIENT ENTITY (MSE) explic- itly introduced in the text. 3. 
The antecedent for a non-generic pronoun is preferentially to be found in the local focus; if none is available, one of the MSEs associated with a focus space on the stack can also serve as antecedent. 8 4. Generic pronouns refer back to the situation described by the current focus space; 5. Bridging descriptions can be related either to an entity in the current focus space, or to an MSE, or to a discourse topic; rThis would explain the difference in reading times ob- served by (Clark and Sengul, 1979). 6. Definite descriptions can refer back to any en- tity in the global focus, including discourse topics. The reason for using the term 'optional' in 2 is that whereas focus spaces can always be described as be- ing about something, they are not always associated with a 'most salient entity': e.g., the first sentence in (6) introduces several topics (woodwind players, their need to be creative, etc.) but does not introduce an MSE. 5 Related Work In a recent paper, Hahn and Strube (1997) propose to extend centering theory with what is, essentially, Sidner's stack of discourse foci, although their al- gorithm for identifying the ce is not identical to Sidner's. Their analysis of German texts shows a rather good performance for their algorithm, but, as only MSEs are predicted to be accessible, none of the anaphors depending on focus space information could be resolved. Their algorithm also appears to treat definite descriptions and pronouns uniformly as 'anaphors', which seems problematic in the light of psychological evidence showing that they behave differently, and examples like the following: (7) a. John/saw Mary. He/greeted her. b. John/saw Mary. ??The mani greeted her. (Guindon, 1985) proposed an alternative model of the attentional state involving a cache instead of a stack, and Walker (1996) argues that the cache model can account for all of the data that origi- nally motivated the stack model and, in addition, explains the use of informationally redundant ut- terances. The cache model isn't yet specified in enough detail for all of its implications for the data discussed here to be clear, but it appears that some of the issues discussed in this paper would have to be addressed in a cache model as well, and that some of our conclusions would apply in a model of that type as well. In particular, these propos- als are not very specific about whether the cache should count as a replacement of just the global fo- cus component of G&S's theory or of both local and global focus, and about what should go in the cache-Guindon seems to assume that it's discourse entities, whereas Walker also seems to allow for propositions and relational information. If the cache was intended as an alternative model of the global focus component, the data discussed here could be 555 taken as an argument that what goes in the cache should be focus spaces with distinguished MSEs. 6 Conclusions Our main intent in looking at long-distance pronom- inalisation was to make some of the aspects of the G&S model of attentional state more precise, and to clarify its connection with earlier work by Sid- ner. The evidence we have presented suggests a main conclusion and a corollary. The main conclu- sion is that the uses of long-distance pronouns in our corpus can be explained as cases of reference to the MSE of a segment whose associated focus space is still on the stack. 
The corollary is that these ex- amples can be accounted for within a G&S-style model of discourse structure, provided that the the- ory is augmented by singling out some entities in focus spaces, and having these entities do some of the work done by Sidner's stack of discourse foci. A concern with studies of this type is that notions such as 'most salient entity' are hard to define, and it's not obvious that two different researchers would necessarily agree on what is the MSE of a given sen- tence. Work on verifying whether the notion we are assuming can indeed be reliably identified is under way as part of the GNOME project. References H. H. Clark and C. J. Sengul. 1979. In search of refer- ents for nouns and pronouns. Memory and Cognition, 7(I ):35-4 1. H. H. Clark. 1977. Bridging. In P. N. Johnson-Laird and P.C. Wason, editors, Thinking: Readings in Cognitive Science. Cambridge University Press. B. A. Fox. 1987. Discourse Structure and Anaphora. Cambridge University Press, Cambridge, UK. S. Garrod. 1993. Resolving pronouns and other anaphoric devices: The case for diversity in dis- course processing. In C. Clifton, L. Frazier, and K. Rayner, editors, Perspectives in Sentence Process- ing. Lawrence Erlbaum. P. Giouli. 1996. Topic chaining and discourse structure in task-oriented dialogues. Master's thesis, University of Edinburgh, Linguistics Department. P. C. Gordon, B. J. Grosz, and L. A. Gillion. 1993. Pro- nouns, names, and the centering of attention in dis- course. Cognitive Science, 17:311-348. B. J. Grosz and C. L. Sidner. 1986. Attention, inten- tion, and the structure of discourse. Computational Linguistics, 12(3): 175-204. B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Center- ing: A framework for modelling the local coherence of discourse. ComputationalLinguistics, 21(2):202- 225. B. J. Grosz. 1977. The Representation and Use of Fo- cus in Dialogue Understanding. Ph.D. thesis, Stan- ford University. R. Guindon. 1985. Anaphora resolution: Short-term memory and focusing. In Proc. of the 23rd Annual Meeting of the ACL, pp. 218-227. U. Hahn and M. Strube. 1997. Centering in-the-large: Computing referential discourse segments. In Proc. of the 35th Annual Meeting of the ACL, pp. 104-111. J. Hitzeman, C. Mellish, and J. Oberlander. 1997. Dy- namic generation of museum web pages: The intelli- gent labelling explorer. Archives and Museum lnfor- matics, 11:107-115. J. R. Hobbs. 1978. Resolving pronoun references. Lin- gua, 44:311-338. S.B. Hudson. 1988. The Structure of Discourse and Anaphor Resolution: The Discourse Center attd the Roles of Nouns and Pronouns. Ph.D. thesis, Univer- sity of Rochester. W. C. Mann and S. A. Thompson. 1988. Rhetorical structure theory: Towards a functional theory of text organization. Text, 8(3):243-281. J. D. Moore and C. L. Paris. 1993. Planning text for ad- visory dialogues: Capturing intentional and rhetorical information. Computational Linguistics, 19(4):651- 694, December. M. Moser and J. Moore. 1996. Toward a synthesis of two accounts of discourse structure. Contputational Linguistics, 22(3):409-419. M. Poesio and R. Stevenson. 1998. Computational mod- els of salience and psychological evidence. In prepa- ration. M. Poesio and R. Vieira. 1998. A corpus-based investi- gation of definite description use. ComputationalLin- guistics. To appear. M. Poesio, R. Vieira, and S. Teufel. 1997. Resolving bridging references in unrestricted text. In R. Mitkov, editor, Proc. of the ACL Workshop on Operational Factors in Robust Anaphora Resolution, pp. 
M. Poesio, S. Schulte im Walde, and C. Brew. 1998. Lexical clustering and definite description interpretation. In Proc. of the AAAI Spring Symposium on Learning for Discourse, Stanford, CA, March.

M. Pollack, J. Hirschberg, and B. Webber. 1982. User participation in the reasoning process of expert systems. In Proc. of AAAI-82, pp. 358-361.

C. L. Sidner. 1979. Towards a computational theory of definite anaphora comprehension in English discourse. Ph.D. thesis, MIT.

M. A. Walker. 1996. Limited attention and discourse structure. Computational Linguistics, 22(2):255-264.

B. L. Webber. 1991. Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, 6(2):107-135.
An Empirical Evaluation of Probabilistic Lexicalized Tree Insertion Grammars*

Rebecca Hwa
Harvard University
Cambridge, MA 02138 USA
rebecca@eecs.harvard.edu

[*] This material is based upon work supported by the National Science Foundation under Grant No. IRI-9712068. We thank Yves Schabes and Stuart Shieber for their guidance; Joshua Goodman for his PCFG code; Lillian Lee and the three anonymous reviewers for their comments on the paper.

Abstract

We present an empirical study of the applicability of Probabilistic Lexicalized Tree Insertion Grammars (PLTIG), a lexicalized counterpart to Probabilistic Context-Free Grammars (PCFG), to problems in stochastic natural-language processing. Comparing the performance of PLTIGs with non-hierarchical N-gram models and PCFGs, we show that PLTIG combines the best aspects of both, with language modeling capability comparable to N-grams, and improved parsing performance over its non-lexicalized counterpart. Furthermore, training of PLTIGs displays faster convergence than PCFGs.

1 Introduction

There are many advantages to expressing a grammar in a lexicalized form, where an observable word of the language is encoded in each grammar rule. First, the lexical words help to clarify ambiguities that cannot be resolved by the sentence structures alone. For example, to correctly attach a prepositional phrase, it is often necessary to consider the lexical relationships between the head word of the prepositional phrase and those of the phrases it might modify. Second, lexicalizing the grammar rules increases computational efficiency, because those rules that do not contain any observed words can be pruned away immediately.

The Lexicalized Tree Insertion Grammar formalism (LTIG) has been proposed as a way to lexicalize context-free grammars (Schabes and Waters, 1994). We now apply a probabilistic variant of this formalism, Probabilistic Tree Insertion Grammars (PLTIGs), to natural language processing problems of stochastic parsing and language modeling. This paper presents two sets of experiments, comparing PLTIGs with non-lexicalized Probabilistic Context-Free Grammars (PCFGs) (Pereira and Schabes, 1992) and non-hierarchical N-gram models that use the right branching bracketing heuristics (period attaches high) as their parsing strategy. We show that PLTIGs can be induced from partially bracketed data, and that the resulting trained grammars can parse unseen sentences and estimate the likelihood of their occurrences in the language. The experiments are run on two corpora: the Air Travel Information System (ATIS) corpus and a subset of the Wall Street Journal TreeBank corpus. The results show that the lexicalized nature of the formalism helps our induced PLTIGs to converge faster and provide a better language model than PCFGs while maintaining comparable parsing qualities. Although N-gram models still slightly out-perform PLTIGs on language modeling, they lack the high level structures needed for parsing. Therefore, PLTIGs have combined the best of two worlds: the language modeling capability of N-grams and the parse quality of context-free grammars.

The rest of the paper is organized as follows: first, we present an overview of the PLTIG formalism; then we describe the experimental setup; next, we interpret and discuss the results of the experiments; finally, we outline future directions of the research.
2 PLTIG and Related Work

The inspiration for the PLTIG formalism stems from the desire to lexicalize a context-free grammar. There are three ways in which one might do so. First, one can modify the tree structures so that all context-free productions contain lexical items. Greibach normal form provides a well-known example of such a lexicalized context-free formalism. This method is not practical because altering the structures of the grammar damages the linguistic information stored in the original grammar (Schabes and Waters, 1994). Second, one might propagate lexical information upward through the productions. Examples of formalisms using this approach include the work of Magerman (1995), Charniak (1997), Collins (1997), and Goodman (1997). A more linguistically motivated approach is to expand the domain of productions downward to incorporate more tree structures. The Lexicalized Tree-Adjoining Grammar (LTAG) formalism (Schabes et al., 1988; Schabes, 1990), although not context-free, is the most well-known instance in this category. PLTIGs belong to this third category and generate only context-free languages.

LTAGs (and LTIGs) are tree-rewriting systems, consisting of a set of elementary trees combined by tree operations. We distinguish two types of trees in the set of elementary trees: the initial trees and the auxiliary trees. Unlike full parse trees but reminiscent of the productions of a context-free grammar, both types of trees may have nonterminal leaf nodes. Auxiliary trees have, in addition, a distinguished nonterminal leaf node, labeled with the same nonterminal as the root node of the tree, called the foot node. Two types of operations are used to construct derived trees, or parse trees: substitution and adjunction. An initial tree can be substituted into the nonterminal leaf node of another tree in a way similar to the substitution of nonterminals in the production rules of CFGs. An auxiliary tree is inserted into another tree through the adjunction operation, which splices the auxiliary tree into the target tree at a node labeled with the same nonterminal as the root and foot of the auxiliary tree. By using a tree representation, LTAGs extend the domain of locality of a grammatical primitive, so that they capture both lexical features and hierarchical structure. Moreover, the adjunction operation elegantly models intuitive linguistic concepts such as long distance dependencies between words. Unlike the N-gram model, which only offers dependencies between neighboring words, these trees can model the interaction of structurally related words that occur far apart.

Like LTAGs, LTIGs are tree-rewriting systems, but they differ from LTAGs in their generative power. LTAGs can generate some strictly context-sensitive languages. They do so by using wrapping auxiliary trees, which allow non-empty frontier nodes (i.e., leaf nodes whose labels are not the empty terminal symbol) on both sides of the foot node. A wrapping auxiliary tree makes the formalism context-sensitive because it coordinates the string to the left of its foot with the string to the right of its foot while allowing a third string to be inserted into the foot. Just as the ability to recursively center-embed moves the required parsing time from O(n) for regular grammars to O(n^3) for context-free grammars, so the ability to wrap auxiliary trees moves the required parsing time further, to O(n^6) for tree-adjoining grammars.[1]

[1] The best theoretical upper bound on time complexity for the recognition of Tree Adjoining Languages is O(M(n^2)), where M(k) is the time needed to multiply two k x k boolean matrices (Rajasekaran and Yooseph, 1995).
This level of complexity is far too computationally expensive for current technologies. The complexity of LTAGs can be moderated by eliminating just the wrapping auxiliary trees. LTIGs prevent wrapping by restricting auxiliary tree structures to be in one of two forms: the left auxiliary tree, whose non-empty frontier nodes are all to the left of the foot node; or the right auxiliary tree, whose non-empty frontier nodes are all to the right of the foot node. Auxiliary trees of different types cannot adjoin into each other if the adjunction would result in a wrapping auxiliary tree. The resulting system is strongly equivalent to CFGs, yet is fully lexicalized and still O(n^3) parsable, as shown by Schabes and Waters (1994).

Furthermore, LTIGs can be parameterized to form probabilistic models (Schabes and Waters, 1993). Informally speaking, a parameter is associated with each possible adjunction or substitution operation between a tree and a node. For instance, suppose there are V left auxiliary trees that might adjoin into node η. Then there are V + 1 parameters associated with node η that describe the distribution of the likelihood of any left auxiliary tree adjoining into node η. (We need one extra parameter for the case of no left adjunction.) A similar set of parameters is constructed for the right adjunction and substitution distributions.

[Figure 1: A set of elementary LTIG trees that represent a bigram grammar: one right auxiliary tree per word, word_1 through word_n. The arrows indicate adjunction sites.]

3 Experiments

In the following experiments we show that PLTIGs of varying sizes and configurations can be induced by processing a large training corpus, and that the trained PLTIGs can parse unseen sentences and estimate the likelihood of their occurrences in the language. We describe the induction process of the PLTIGs in Section 3.1. Two corpora of very different nature are used for training and testing. The first set of experiments uses the Air Travel Information System (ATIS) corpus. Section 3.2 presents the complete results of this set of experiments. To determine if PLTIGs can scale up well, we have also begun another study that uses a larger and more complex corpus, a subset of the Wall Street Journal TreeBank corpus. The initial results are discussed in Section 3.3. To reduce the effect of the data sparsity problem, we back off from lexical words to using the part-of-speech tags as the anchoring lexical items in all the experiments. Moreover, we use the deleted-interpolation smoothing technique for the N-gram models and PLTIGs. PCFGs do not require smoothing in these experiments.

3.1 Grammar Induction

The technique used to induce a grammar is a subtractive process. Starting from a universal grammar (i.e., one that can generate any string made up of the alphabet set), the parameters are iteratively refined until the grammar generates, hopefully, all and only the sentences in the target language, for which the training data provides an adequate sampling.

[Figure 2: An example sentence, The cat chases the mouse, and its derivation tree: t_the is right-adjoined into t_init, t_cat into t_the, t_chases into t_cat, and so on. Because each tree is right adjoined to the tree anchored with the neighboring word in the sentence, the only structure is right branching.]
3 Experiments

In the following experiments we show that PLTIGs of varying sizes and configurations can be induced by processing a large training corpus, and that the trained PLTIGs can provide parses on unseen test data of comparable quality to the parses produced by PCFGs. Moreover, we show that PLTIGs have significantly lower entropy values than PCFGs, suggesting that they make better language models. We describe the induction process of the PLTIGs in Section 3.1. Two corpora of very different nature are used for training and testing. The first set of experiments uses the Air Travel Information System (ATIS) corpus. Section 3.2 presents the complete results of this set of experiments. To determine if PLTIGs can scale up well, we have also begun another study that uses a larger and more complex corpus, the Wall Street Journal TreeBank corpus. The initial results are discussed in Section 3.3. To reduce the effect of the data sparsity problem, we back off from lexical words to using the part-of-speech tags as the anchoring lexical items in all the experiments. Moreover, we use the deleted-interpolation smoothing technique for the N-gram models and PLTIGs. PCFGs do not require smoothing in these experiments.

3.1 Grammar Induction

The technique used to induce a grammar is a subtractive process. Starting from a universal grammar (i.e., one that can generate any string made up of the alphabet set), the parameters are iteratively refined until the grammar generates, hopefully, all and only the sentences in the target language, for which the training data provides an adequate sampling. In the case of a PCFG, the initial grammar production rule set contains all possible rules in Chomsky Normal Form constructed by the nonterminal and terminal symbols. The initial parameters associated with each rule are randomly generated subject to an admissibility constraint. As long as all the rules have a non-zero probability, any string has a non-zero chance of being generated. To train the grammar, we follow the Inside-Outside re-estimation algorithm described by Lari and Young (1990). The Inside-Outside re-estimation algorithm can also be extended to train PLTIGs. The equations calculating the inside and outside probabilities for PLTIGs can be found in Hwa (1998).

As with PCFGs, the initial grammar must be able to generate any string. A simple PLTIG that fits the requirement is one that simulates a bigram model. It is represented by a tree set that contains a right auxiliary tree for each lexical item as depicted in Figure 1. Each tree has one adjunction site into which other right auxiliary trees can adjoin. The tree set has only one initial tree, which is anchored by an empty lexical item. The initial tree represents the start of the sentence. Any string can be constructed by right adjoining the words together in order. Training the parameters of this grammar yields the same result as a bigram model: the parameters reflect close correlations between words that are frequently seen together, but the model cannot provide any high-level linguistic structure. (See example in Figure 2.)

Figure 2: An example sentence ("The cat chases the mouse") and its derivation tree, a chain of right adjunctions t_init <- t_the <- t_cat <- t_chases <- t_the <- t_mouse. Because each tree is right adjoined to the tree anchored with the neighboring word in the sentence, the only structure is right branching.

To generate non-linear structures, we need to allow adjunction in both left and right directions. The expanded LTIG tree set includes a left auxiliary tree representation as well as right for each lexical item. Moreover, we must modify the topology of the auxiliary trees so that adjunction in both directions can occur. We insert an intermediary node between the root and the lexical word. At this internal node, at most one adjunction of each direction may take place. The introduction of this node is necessary because the definition of the formalism disallows right adjunction into the root node of a left auxiliary tree and vice versa. For the sake of uniformity, we shall disallow adjunction into the root nodes of the auxiliary trees from now on. Figure 3 shows an LTIG that allows at most one left and one right adjunction for each elementary tree. This enhanced LTIG can produce hierarchical structures that the bigram model could not. (See Figure 4.)

Figure 3: An LTIG elementary tree set that allows both left and right adjunctions. (Tree diagrams not reproduced.)

Figure 4: With both left and right adjunctions possible, the sentences can be parsed in a more linguistically plausible way. (Derivation diagram not reproduced.)
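The bigram-shaped initial PLTIG lends itself to a very simple simulation; the following sketch (our own illustrative code, with hypothetical names) shows how right-only adjunction yields exactly bigram structure:

    # The bigram-simulating PLTIG builds a sentence purely by right adjunction,
    # so each word's tree hangs off the tree of the word before it.
    def right_branching_derivation(words):
        """Return (host, adjoined) pairs for the bigram-style tree set."""
        trees = ["t_init"] + [f"t_{w}" for w in words]
        # t_{w_i} right adjoins into the single site of t_{w_(i-1)}:
        return list(zip(trees, trees[1:]))

    print(right_branching_derivation("the cat chases the mouse".split()))
    # [('t_init', 't_the'), ('t_the', 't_cat'), ('t_cat', 't_chases'),
    #  ('t_chases', 't_the'), ('t_the', 't_mouse')]

Each adjunction decision here conditions only on the anchoring word of the host tree, which is why training this grammar recovers exactly the bigram statistics.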
It is, however, still too limiting to allow only one adjunction from each direction. Many words often require more than one modifier. For example, a transitive verb such as "give" takes at least two adjunctions: a direct object noun phrase, an indirect object noun phrase, and possibly other adverbial modifiers. To create more adjunction sites for each word, we introduce yet more intermediary nodes between the root and the lexical word. Our empirical studies show that each lexicalized auxiliary tree requires at least 3 adjunction sites to parse all the sentences in the corpora. Figure 5(a) and (b) show two examples of auxiliary trees with 3 adjunction sites.

The number of parameters in a PLTIG is dependent on the number of adjunction sites just as the size of a PCFG is dependent on the number of nonterminals. For a language with V vocabulary items, the number of parameters for the type of PLTIGs used in this paper is 2(V+1) + 2V(K)(V+1), where K is the number of adjunction sites per tree. The first term of the equation is the number of parameters contributed by the initial tree, which always has two adjunction sites in our experiments. The second term is the contribution from the auxiliary trees: there are 2V auxiliary trees, each tree has K adjunction sites, and V + 1 parameters describe the distribution of adjunction at each site. The number of parameters of a PCFG with M nonterminals is M^3 + MV. For the experiments, we try to choose values of K and M for the PLTIGs and PCFGs such that 2(V+1) + 2V(K)(V+1) is approximately equal to M^3 + MV.

3.2 ATIS

To reproduce the results of PCFGs reported by Pereira and Schabes, we use the ATIS corpus for our first experiment. This corpus contains 577 sentences with 32 part-of-speech tags. To ensure statistical significance, we generate ten random train-test splits on the corpus. Each set randomly partitions the corpus into three sections according to the following distribution: 80% training, 10% held-out, and 10% testing. This gives us, on average, 406 training sentences, 83 testing sentences, and 88 sentences for held-out testing. The results reported here are the averages of ten runs.

We have trained three types of PLTIGs, varying the number of left and right adjunction sites. The L2R1 version has two left adjunction sites and one right adjunction site; L1R2 has one left adjunction site and two right adjunction sites; L2R2 has two of each. The prototypical auxiliary trees for these three grammars are shown in Figure 5. At the end of every training iteration, the updated grammars are used to parse sentences in the held-out test sets D, and the new language modeling scores (by measuring the cross-entropy estimates Ĥ(D, L2R1), Ĥ(D, L1R2), and Ĥ(D, L2R2)) are calculated. The rate of improvement of the language modeling scores determines convergence. The PLTIGs are compared with two PCFGs: one with 15 nonterminals, as Pereira and Schabes have done, and one with 20 nonterminals, which has a comparable number of parameters to L2R2, the larger PLTIG. In Figure 6 we plot the average iterative improvements of the training process for each grammar.

Figure 5: Prototypical auxiliary trees for three PLTIGs: (a) L1R2, (b) L2R1, and (c) L2R2. (Tree diagrams not reproduced.)

Figure 6: Average convergence rates of the training process for 3 PLTIGs and 2 PCFGs. (Plot not reproduced; curves are shown for L2R1, L2R2, PCFG15 and PCFG20.)
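The size-matching condition above is easy to check numerically. The following sketch (our own code) computes both counts for the ATIS setting, where the V = 32 part-of-speech tags serve as the vocabulary:

    def pltig_params(V, K):
        # 2(V+1) for the initial tree (two sites) + 2V trees * K sites * (V+1) each
        return 2 * (V + 1) + 2 * V * K * (V + 1)

    def pcfg_params(M, V):
        return M ** 3 + M * V

    V = 32  # ATIS part-of-speech tags
    print(pltig_params(V, K=3))   # 6402 -> matches L1R2/L2R1 in Table 1
    print(pltig_params(V, K=4))   # 8514 -> matches L2R2
    print(pcfg_params(15, V))     # 3855 -> PCFG with 15 nonterminals
    print(pcfg_params(20, V))     # 8640 -> PCFG with 20 nonterminals

These are exactly the parameter counts that appear in Table 1 below and, with V = 48, the counts in Table 3.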
All training processes of the PLTIGs converge much faster (both in numbers of iterations and in real time) than those of the PCFGs, even when the PCFG has fewer parameters to estimate, as shown in Table 1. From Figure 6, we see that both PCFGs take many more iterations to converge and that the cross-entropy value they converge on is much higher than the PLTIGs'.

During the testing phase, the trained grammars are used to produce bracketed constituents on unmarked sentences from the testing sets T. We use the crossing bracket metric to evaluate the parsing quality of each grammar. We also measure the cross-entropy estimates Ĥ(T, L2R1), Ĥ(T, L1R2), Ĥ(T, L2R2), Ĥ(T, PCFG15), and Ĥ(T, PCFG20) to determine the quality of the language model. For a baseline comparison, we consider bigram and trigram models with simple right-branching bracketing heuristics. Our findings are summarized in Table 1.

The three types of PLTIGs generate roughly the same number of bracketed constituent errors as the trained PCFGs, but they achieve a much lower entropy score. While the average entropy value of the trigram model is the lowest, there is no statistical significance between it and any of the three PLTIGs. The relative statistical significance between the various types of models is presented in Table 2. In any case, the slight language modeling advantage of the trigram model is offset by its inability to handle parsing.

Our ATIS results agree with the findings of Pereira and Schabes that concluded that the performances of the PCFGs do not seem to depend heavily on the number of parameters once a certain threshold is crossed. Even though PCFG20 has about as many parameters as the larger PLTIG (L2R2), its language modeling score is still significantly worse than that of any of the PLTIGs.

Table 1: Summary results for ATIS. The machine used to measure real-time is an HP 9000/859.

                              Bigram/Trigram  PCFG15  PCFG20  L1R2   L2R1   L2R2
Number of parameters          1088 / 34880    3855    8640    6402   6402   8514
Iterations to convergence     -               45      45      19     17     24
Real-time convergence (min)   -               62      142     8      7      14
Ĥ(T, Grammar)                 2.88 / 2.71     3.81    3.42    2.87   2.85   2.78
Crossing bracket (on T)       66.78           93.46   93.41   93.07  93.28  94.51

Table 2: Summary of pair-wise t-test for all grammars. If "better" appears at cell (i,j), then the model in row i has an entropy value lower than that of the model in column j in a statistically significant way. The symbol "-" denotes that the difference of scores between the models bears no statistical significance.

           PCFGs    PLTIGs   bigram
PLTIGs     better
bigram     better   -
trigram    better   -        better
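The cross-entropy scores Ĥ reported in these tables are per-word values in bits. A sketch of the computation (our own code; model.prob is a hypothetical sentence-probability function, not an API from the paper):

    import math

    def cross_entropy(test_sentences, model):
        """Ĥ(T, G) = -(1/N) * sum of log2 P_G(sentence) over T,
        where N is the total number of words in the test set T."""
        total_log_prob = 0.0
        total_words = 0
        for sentence in test_sentences:          # each sentence: list of words
            total_log_prob += math.log2(model.prob(sentence))  # hypothetical API
            total_words += len(sentence)
        return -total_log_prob / total_words

Lower is better; compare, for instance, the 2.78 bits per word of L2R2 in Table 1 against 3.42 for the similarly sized PCFG20.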
3.3 WSJ

Because the sentences in ATIS are short with simple and similar structures, the difference in performance between the formalisms may not be as apparent. For the second experiment, we use the Wall Street Journal (WSJ) corpus, whose sentences are longer and have more varied and complex structures. We use sections 02 to 09 of the WSJ corpus for training, section 00 for held-out data D, and section 23 for test T. We consider sentences of length 40 or less. There are 13242 training sentences, 1780 sentences for the held-out data, and 2245 sentences in the test. The vocabulary set consists of the 48 part-of-speech tags. We compare three variants of PCFGs (15 nonterminals, 20 nonterminals, and 23 nonterminals) with three variants of PLTIGs (L1R2, L2R1, L2R2). A PCFG with 23 nonterminals is included because its size approximates that of the two smaller PLTIGs. We did not generate random train-test splits for the WSJ corpus because it is large enough to provide adequate sampling. Table 3 presents our findings.

Table 3: Summary results of the training phase for WSJ.

                              Bigram/Trigram  PCFG15  PCFG20  PCFG23  L1R2   L2R1   L2R2
Number of parameters          2400 / 115296   4095    8960    13271   14210  14210  18914
Iterations to convergence     -               80      60      70      28     30     28
Real-time convergence (hr)    -               143     252     511     38     41     60
Ĥ(T, Grammar)                 3.39 / 3.20     4.31    4.27    4.13    3.58   3.56   3.59
Crossing bracket (T)          49.44           56.41   78.82   79.30   80.08  82.43  80.83

From Table 3, we see several similarities to the results from the ATIS corpus. All three variants of the PLTIG formalism have converged at a faster rate and have far better language modeling scores than any of the PCFGs. Differing from the previous experiment, the PLTIGs produce slightly better crossing bracket rates than the PCFGs on the more complex WSJ corpus. At least 20 nonterminals are needed for a PCFG to perform in league with the PLTIGs. Although the PCFGs have fewer parameters, the rate seems to be indifferent to the size of the grammars after a threshold has been reached. While upping the number of nonterminal symbols from 15 to 20 led to a 22.4% gain, the improvement from PCFG20 to PCFG23 is only 0.5%. Similarly for PLTIGs, L2R2 performs worse than L2R1 even though it has more parameters. The baseline comparison for this experiment results in more extreme outcomes. The right-branching heuristic receives a crossing bracket rate of 49.44%, worse than even that of PCFG15. However, the N-gram models have better cross-entropy measurements than PCFGs and PLTIGs; the bigram has a score of 3.39 bits per word, and the trigram has a score of 3.20 bits per word. Because the lexical relationship modeled by the PLTIGs presented in this paper is limited to those between two words, their scores are close to that of the bigram model.

4 Conclusion and Future Work

In this paper, we have presented the results of two empirical experiments using Probabilistic Lexicalized Tree Insertion Grammars. Comparing PLTIGs with PCFGs and N-grams, our studies show that a lexicalized tree representation drastically improves the quality of language modeling of a context-free grammar to the level of N-grams without degrading the parsing accuracy. In the future, we hope to continue to improve on the quality of parsing and language modeling by making more use of the lexical information. For example, currently, the initial untrained PLTIGs consist of elementary trees that have uniform configurations (i.e., every auxiliary tree has the same number of adjunction sites) to mirror the CNF representation of PCFGs. We hypothesize that a grammar consisting of a set of elementary trees whose number of adjunction sites depends on their lexical anchors would make a closer approximation to the "true" grammar. We also hope to apply PLTIGs to natural language tasks that may benefit from a good language model, such as speech recognition, machine translation, message understanding, and keyword and topic spotting.

References

Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the AAAI, pages 598-603, Providence, RI. AAAI Press/MIT Press.

Michael Collins. 1997. Three generative, lexicalised models for statistical parsing. In Proceedings of the 35th Annual Meeting of the ACL, pages 16-23, Madrid, Spain.

Joshua Goodman. 1997. Probabilistic feature grammars.
In Proceedings of the International Workshop on Parsing Technologies 1997.

Rebecca Hwa. 1998. An empirical evaluation of probabilistic lexicalized tree insertion grammars. Technical Report 06-98, Harvard University. Full Version.

K. Lari and S.J. Young. 1990. The estimation of stochastic context-free grammars using the inside-outside algorithm. Computer Speech and Language, 4:35-56.

David Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of the 33rd Annual Meeting of the ACL, pages 276-283, Cambridge, MA.

Fernando Pereira and Yves Schabes. 1992. Inside-Outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the ACL, pages 128-135, Newark, Delaware.

S. Rajasekaran and S. Yooseph. 1995. TAL recognition in O(M(n^2)) time. In Proceedings of the 33rd Annual Meeting of the ACL, pages 166-173, Cambridge, MA.

Y. Schabes and R. Waters. 1993. Stochastic lexicalized context-free grammar. In Proceedings of the Third International Workshop on Parsing Technologies, pages 257-266.

Y. Schabes and R. Waters. 1994. Tree insertion grammar: A cubic-time parsable formalism that lexicalizes context-free grammar without changing the trees produced. Technical Report TR-94-13, Mitsubishi Electric Research Laboratories.

Y. Schabes, A. Abeille, and A. K. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Application to tree adjoining grammars. In Proceedings of the 12th International Conference on Computational Linguistics (COLING '88), August.

Yves Schabes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, University of Pennsylvania, August.
1998
91
Terminological variation, a means of identifying research topics from texts

Fidelia IBEKWE-SANJUAN
CRISTAL-GRESEC, Stendhal University, Grenoble, France
and Dept. of Information & Communication, IUT du Havre, B.P. 4006, 76610 Le Havre, France
E-mail: [email protected]

Abstract

After extracting terms from a corpus of titles and abstracts in English, syntactic variation relations are identified amongst them in order to detect research topics. Three types of syntactic variations were studied: permutation, expansion and substitution. These syntactic variations yield other relations of a formal and conceptual nature. Based on a distinction of the variation relations according to the grammatical function affected in a term, head or modifier, term variants are first clustered into connected components which are in turn clustered into classes. These classes relate two or more components through variations involving a change of head word, thus of topic. The graph obtained reveals the global organisation of research topics in the corpus. A clustering method has been built to compute such classes of research topics.

Introduction

The importance of terms in various natural language tasks such as automatic indexing, computer-aided translation, information retrieval and technology watch need no longer be proved. Terms are meaningful textual units used for naming concepts or objects in a given field. Past studies have focused on building term extraction tools: TERMINO (David S. & Plante P. 1991), LEXTER (Bourigault D. 1994), ACABIT (Daille 1994), FASTR (Jacquemin 1995), TERMS (Katz S.M. & Justeson T.S. 1995). Here, term extraction and the identification of syntactic variation relations are considered for topic detection. Variations are changes affecting the structure and the form of a term, producing another textual unit close to the initial one, e.g. dna amplification and amplification fingerprinting of dna. Variations can point to terminological evolution and thus to that of the underlying concept. Topic is used in its grammatical sense, i.e. the head word in a noun phrase. In the above term, fingerprinting is the topic (head word) and dna amplification its properties (modifiers). However, a topic cannot appear by chance in specialised literature, so this grammatical definition needs to be backed up by empirical evidence such as recurrence of terms sharing the same head word.

We constituted a test corpus of scientific abstracts and titles in English from the field of plant biotechnology making up approximately 29,000 words. These texts covered publications made over 13 years (1981-1993). We focused on three syntactic variation types occurring frequently amongst terms: permutation, substitution and expansion (§2). Tzoukermann E., Klavans J. and Jacquemin C. (1997) extracted morpho-syntactic term variants for NLP tasks such as automatic indexing. They accounted for a wide spectrum of variation-producing phenomena like the morpho-syntactic variation involving derivation in tree cutting and trees have been cut down.¹ We focused for the moment on terms appearing as noun phrases (NP). Although term variants can appear as verb phrases (VP), we believe that NP variants reflect more terminological stability, thus a real shift in topic (root hair -> root hair deformation), than their VP counterpart (root hair -> the root hair appears deformed). Also, our application, research topic identification, being quite sensitive, requires a careful selection of term variant types depending on their interpretability.

¹ Examples taken from Tzoukermann et al. (1997).
This is to avoid creating relations between terms which could mislead the end-user, typically a technological watcher, in his task. For instance, how do we interpret the relation between concept class and class concept? Also, our aim is not to extract syntactic variants per se but to identify them in order to establish meaningful relations between them.

1 Extracting terms from texts

1.1 Morpho-syntactic features

Term extraction is based on their morpho-syntactic features. The morphological composition of NP terms allows for a limited number of categories, mostly nouns, adjectives and some prepositions. Terms can appear under two syntactic structures: compound (the specific alfalfa nodulation) or syntagmatic (the specific nodulation of alfalfa). Since terms are used for naming concepts and objects in a given knowledge field, they tend to be relatively short textual units, usually between 2-4 words, though terms of longer length occur (endogeneous duck hepatitis B virus). In this study, we fixed a word limit of 7, not considering determiners and prepositions. Based on these three features (morphological make-up, syntactic structure and length), clauses are processed in order to extract complex terms rather than atomic ones. The motivation behind this approach is that complex terms reveal the association of concepts, hence they are more relevant for the application we are considering. A fine-grained term extraction strategy would isolate the concepts and thus lose the information given by their associations in the corpus. For this reason, we could not consider the use of an existing term extraction tool and thus had to carry out a manual simulation of the term extraction phase.

NP splitting rules take into account the lexical nature of the constituent words and their raising properties (i.e. derived nouns as opposed to non-derived ones). Furthermore, following the empirical approach successfully implemented by Bourigault (1994), we split complex NPs only after a search has been performed in the corpus for occurrences of their sub-segments in unambiguous situations, i.e. when the sub-segments are not included in a larger segment. This favours the extraction of pre-conceived textual units possibly corresponding to domain terms. However, morpho-syntactic features alone cannot verify the terminological status of the units extracted since they can also select non-terms (see Smadja 1993). For instance, root nodulation is a term in the plant biotechnology field whereas book review, also found in the corpus, is not. Thus in the first stage, the terms extracted are only plausible candidates which need to be filtered in order to eliminate the most unlikely ones. This filtering takes advantage of lexical information accessible at our level of analysis to fine-tune the statistical occurrence criterion which, used alone, inevitably leads to a massive elimination.

1.2 Splitting complex noun phrases

An NP is deemed complex if its morpho-syntactic features do not conform to those specified for terms, e.g. oxygen control of nitrogen fixation gene expression in bradyrhizobium japonicum, a title found in our corpus. Its corresponding syntactic context is NP1_of_NP2_prep1_NP3, where NP is a recognised noun phrase and prep1 refers to the class of prepositions not containing of and often found in the morphological composition of terms (for, by, in, from, with).
Normally, exploiting syntactic information on the raising properties of the head noun (control) and following the distributional approach, the above segment will be split into NP1, NP2 and NP3. But this splitting is only performed if no sub-segment of the initial one occurred alone in the corpus. This search yielded nitrogen fixation gene expression and bradyrhizobium japonicum, which both occurred more than 6 times in the corpus. Their existence confirms the relevance of our splitting rule, which would have yielded the same result: oxygen control; nitrogen fixation gene expression; bradyrhizobium japonicum.

Altogether, 4463 candidate terms were extracted from our corpus and subjected to a filtering process which combined lexical and statistical criteria. The lexical criterion consisted in eliminating terms that contained a determiner other than the that remained after the splitting phase. Only this determiner can occur in a term as it has the capacity, out of context, to refer to a concept or object in a knowledge field, i.e. the use of the variant the low-line instead of the full term low fertility droughtmaster line.² The statistical criterion consisted in eliminating terms starting with the and appearing only once. These two criteria enabled us to eliminate 30% (1304) of the candidates and to retain 70% (3159), which we consider to be likely terminological units. We are aware that this filtering procedure remains approximate and cannot eliminate bad candidates like book review whose morphological and lexical make-up correspond to those of terms. But we also observe that such bad candidates are naturally filtered out in later stages as they rarely possess variants and thus will not appear as research topics (see §4).

² It apparently refers to a breed (line) of cattle.

2 Identifying syntactic variants

Given the two syntactic structures under which a term can appear, compound or syntagmatic, we first pre-processed the terms by transforming those in a syntagmatic structure into their compound version. This transformation is based on the following noun phrase formation rule for English:

D A M1 h p m M2 -> D A m M2 M1 h

where D, A and M are respectively strings of determiners, adjectives and words whose place can be empty, h is a head noun, m is a word and p is a preposition. Thus, the compound version of the specific nodulation of alfalfa will give the specific alfalfa nodulation. This transformation does not modify the original structure under which a term occurred in the corpus. It only serves to furnish input data to the syntactic variation identification programs. This transformation, which is equivalent to permutation (§2.1), is the linguistic relation which, once accounted for, reveals the formal nature of the other types of syntactic variations. Also, it enables us to detect variants in the two syntactic structures, thus accounting for syntactic variants such as defined in Tzoukermann et al. (1997). In what follows, t1 and t2 are terms.
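As an illustration, here is a minimal sketch of the rewriting rule above (our own Python, not the authors' Awk programs; the segmentation of a term into D, A, M1, h, p, m, M2 is assumed to be done upstream):

    def to_compound(D, A, M1, h, p, m, M2):
        """Apply D A M1 h p m M2 -> D A m M2 M1 h.
        D, A, M1, M2 are (possibly empty) lists of words; h, p, m are words."""
        return " ".join(D + A + [m] + M2 + M1 + [h])

    print(to_compound(["the"], ["specific"], [], "nodulation", "of", "alfalfa", []))
    # -> "the specific alfalfa nodulation"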
2.1 Permutation (Perm)

It marks the transformation of a term from a syntagmatic structure to a compound one:

t1 = A N M1 h p m M2    t2 = A m M2 N M1 h

where t1 is really found in the corpus and N is a string of words that is either empty or a noun. 37 terms were concerned by this relation. Some examples are given in Table 1.

2.2 Substitution (Sub)

It marks the replacing of a component word in t1 by another word in t2, in terms of equal length. Only one word can be replaced, and at the same position, to ensure the interpretability of the relation. We distinguished between modifier and head substitution.

• Modifier substitution (M-Sub): t2 is a substitution of t1 if and only if t1 = M1 m M2 h and t2 = M1 m' M2 h, with m' ≠ m.

• Head substitution (H-Sub): t2 is a substitution of t1 if and only if t1 = M m h and t2 = M m h', with h' ≠ h.

Tzoukermann et al. (1997) considered chemical treatment against disease and disease treatment as substitution variants whereas, in our study, after transformation, they would be a case of left-expansion (L-Exp). Examples of head and modifier substitutions are given in Table 2. 1543 terms shared substitution relations: 1084 in the modifier substitution and 872 in the head substitution. The same term can occur in both categories.

2.3 Expansion (Exp)

Expansion is the generic name designating three elementary operations of word adjunction in an existing term. Word adjunction can occur in three positions: left, right or within. Thus we have left expansion, right expansion and insertion respectively.

• Left expansion (L-Exp): t2 is a left-expansion of t1 if and only if t1 = M h and t2 = M' m' M h.

• Right expansion (R-Exp): t2 is a right-expansion of t1 if and only if t1 = M h and t2 = M h M' h'.

• Insertion (Ins): t2 is an insertion of t1 if and only if t1 = M1 m M2 h and t2 = M1 m m' M' M2 h.
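These definitions translate directly into string tests over tokenized terms. A rough sketch for the two substitution cases (our own illustrative code, treating the last word as the head):

    def is_substitution(t1, t2):
        """Return 'M-Sub', 'H-Sub' or None for equal-length terms differing
        in exactly one word at the same position (head = last word)."""
        w1, w2 = t1.split(), t2.split()
        if len(w1) != len(w2):
            return None
        diffs = [i for i, (a, b) in enumerate(zip(w1, w2)) if a != b]
        if len(diffs) != 1:
            return None
        return "H-Sub" if diffs[0] == len(w1) - 1 else "M-Sub"

    print(is_substitution("infection thread development", "infection thread formation"))
    # -> 'H-Sub'
    print(is_substitution("curled root hair", "lucerne root hair"))
    # -> 'M-Sub'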
The programs identifying syntactic variants were written in the Awk language and implemented on a Sun Sparc workstation. Syntactic variations possess formal properties such as symmetry and antisymmetry. Permutation and substitution engender a symmetrical relation between terms, e.g. genomic dna a template dna. 3 This example is fictitious. 567 Expansion engenders an antisymmetrical or order relation between terms, for instance nitrogen fixation<nitrogen fxation gene<nitrogen fixation gene activation. These two formal properties will form the second level for differentiating variation relations during clustering (see §4). 3 Conceptual properties of syntactic variations Syntactic variations yield conceptual relations which can reveal the association of concepts represented by the terms. We observed three conceptual relations : class_of, equivalence, generic/specific. • Class_of Substitution (Sub) engenders a relation between term variants which can be qualified as "class_of". Modifier substitution groups properties around the same concept class : template dna, genomic dna, target dna are properties associated to the class of concept named "dna". Head substitution groups concepts or objects around a class of property: dna fragment, dna sequence, dna fingerprinting are concepts associated to the class of property named dna. This relation does not imply a hierarchy amongst terms thus somehow reflecting the symmetrical relation engendered on the formal level. • Equivalence Permutation engenders a conceptual equivalence between two variants which partially echoes the formal symmetry, e.g. dnafragment-fragment of dna. • Generic~specific Expansion, all sub-types considered, engenders a generic/specific relation between terms which echoes the antisymmetrical relation observed on the formal level. Expansion thus introduces a hierarchy amongst terms and allows us to construct paradigms that may correspond to families of concepts or objects (R-Exp, LR-Exp) or families of properties (L-Exp, Ins). Jacquemin (1995) reported similar conceptual relations for insertion and coordination variants. 4 Identifying topics organisation We built a novel clustering method - Classification by Preferential Clustered Link (CPCL) - to cluster terms into classes of research topics. First we distinguished two categories of variation relations : those affecting modifier words noted COMP (M-Sub, L-Exp, Ins) and those affecting the head word noted CLAS (H-Sub, LR- Exp, R-Exp). The need to value the variation relations may arise if a type (symmetrical or antisymmetrical) is in the minority. To preserve the information it carries, a default value is fixed for this minority type. The value of the majority type is then calculated as its proportion with regard to the minority type. In our corpus, Exp (antisymmetrical) relations were in minority compared to Sub (symmetrical relations). Their default value was set at 1. The value of Sub relations was then given by the ratio Exp/Sub where Exp (respectively Sub) is the total number of expansions relations (respectively substitutions) between terms in the corpus. This valuing of variation relations highlights a type of information that would otherwise be drowned but is not a mandatory condition for the clustering algorithm to work. COMP relations structure term variants around the same head word thus forming components representing the paradigms in the corpus. These paradigms typically correspond to isolated topics (see Table 4 hereafter). 
The strength of the link between two components Pi and Pj is given by the sum of the value of variation relations between them. More formally, we define the COMP relation between terms as : ti COMP tj iff ti and tj share the same head word and if one is the variant of the other. The transitive closure COMP* of COMP partitions the whole set of terms into components. These components are not isolated and are linked by transversal CLAS relations implying a change of head word, thus bringing to light the associations between research topics in the corpus. CLAS relations cluster components basing on the following principle : two components Pi and Pj are clustered if the link between them is stronger than the link between either of them and any other component Pk which has not been clustered neither with Pi nor with Pj. We call classification, a partition of terms in such classes. An efficient algorithm has been implemented in Ibekwe- SanJuan (1997) which seeks growing series of 568 such classifications. These series represent more or less fine-grained structurings of the corpus. A more formal description of the CPCL method can be found in Ibekwe-SanJuan (1998). Table 4 shows a component and a class. The component formed around the head word hair reveals the properties (modifiers) associated with this topic but does not tell us anything about its association other topics. The class on the other hand reveals the association of hair with other topics. A component II A class of terms alfalfa root hair curled root hair deformed root hair lucerne root hair root hair alfalfa root hair concomitant root hair curling curled root hair deformed root hair hair deformation lucerne root hair occasional hair curling root deformation root hair root hair curling root hair deformation some root hair curling Table 4. A component and a class. The graph in Figure 1 hereafter shows the global organisation of classes obtained from the classification of the entire corpus (2593 syntactic term variants). External links between classes are given by bold lines for R-Exp and LR-Exp, dotted lines portray head-substitution H-Sub. Only one term from each class is shown for legibility reasons. We observe that classes like 17, 19, 18 and 9 have a lot of external links and seem to be at the core of research topics in the corpus. Classes like 12, 3 and 13 share strong external links with a single class which could indicate privileged thematic relations. The unique link between class 3 and 19 is explained by the fact that 3 represented an emerging topic 4 at the time the corpus was constituted (1993) : the research done around a new gene type (the klebsiella pneumoniae nifb gene). So it was relevant that this class be strongly linked to class 19 without being central. Also, class 10 represented an emerging topic in 1993 : the research for retrotransposable elements which enables the passing from one gene to another. Research topics evolution and transformation can be traced through a chronological analysis of clustered term variants (see Ibekwe-SanJuan 1998). The results obtained can support scientific and technological watch activities. Concluding remarks Syntactic variation relations are promising linguistic phenomena for tracking topic evolution in texts. However, being that clustering is based on syntactic variation relations, the CPCL method cannot detect topics related through semantic or pragmatic relations. 
For instance, the topic depicted by class 8 (glycine max) should have been related to topic 20 (lucerne plant) from a semantic viewpoint. Their separation was caused by the absence of syntactic variations between the constituent terms. Such relations can be brought to light only if further knowledge (semantic) is incorporated into the relations used for clustering. In the future, we will test our clustering method on another corpus of a larger size and extend our study to other variation phenomena as possible topic shifting devices. 4 The interpretations given here are based on an oral communication with a domain information specialist. 569 Acknowledgements Thanks to the reviewers for their constructive comments which I hope, helped improve this paper. References Bourigault D. (1994). LEXTER, un Logiciel d'Extraction Terminologique. Application l'Acquisition des Connaissances ~ partir de Textes. PhD. dissertation, Ecoles des Hautes Etudes en Sciences Sociales, Paris, 352p. Daille B. (1994). Study and implementation of combined techniques for automatic extraction of terminology. The Balancing Act : Combining Symbolic and Statistical Approaches to Language, Proceedings of the "Workshop of the 32nd Annual Meeting of the ACL", Las Cruces, New Mexico, USA, 9p. David S. Plante P. (1991). Le Progiciel TERMINO: De la n~cessit~ d'une analyse morphosyntaxique pour le dgpouillement terminologique de textes, Proceedings of the Colloquium "Les Industries de la Langue", Montr6al Nov. pp. 21-24. Ibekwe-SanJuan F. (1997). Defining a linguistic-based methodology for tracking thematic trends in scientific publications. PhD. Dissertation, University of Stendhal, Grenoble France, 376p. Ibekwe-SanJuan F. (1998). A linguistic and mathematical method for mapping thematic trends from texts. To appear in 13th European Conference on Artificial Intelligence (ECAI'98), Brighton, UK, 23- 28 August 1998, pp. 170-174. Jacquemin C. (1995). A symbolic and surgical acquisition of terms through variation. Workshop on "New approaches to learning for NLP", 14th International Joint Conference on Artificial Intelligence (IJCAI'95), Montrdal, 8p. Katz S.M. Justeson T.S. (1995). Technical terminology: some linguistic properties and an algorithm for identification in text. Journal of Natural Language Engineering, 1/1, 19p. Smadja F. (1993). Retrieving collocations from text : Xtract. Computational Linguistics, 19/1, pp. 143 - 177. Tzoukermann E. Klavans J. Jacquemin C. (1997). Effective use of natural language processing techniques for automatic conflation of multi-words. SIGIR'97, 8p. ~a~a_~_a~_ ........... ............... 18 nts382 nodule ~ ", 1 nodule organogenesis ....... [ [ ~ Z : -- -- =..J~._ 8 glycine max ,, ................ f6-g-a "1:3-genome ......... / I 14 dna._amplification .0"~--_- ~',' I', ~"-..,~.~.3 hair deformation 0 7markerpa~132 " ~ "~. ,' I , "- , , 9p t ~ ~ ,. /" 7" -. [ .) "• 2 biological root , L '~ I ~ \ ~" / J ';'-..10 retrotranspo sableelemenl~\ . . . . .- X ,' I, ,, , 9 sequence information ~ ,, , , .1~ ' / ', ,' I , ~,~ ce,~u..ens,on=,tura ~l i t t nltrogenase actlvl ~ ~ , derepression 17 ~ ¢ = ~ =~" - - 7 - - - ~.- ..... I" - ~" ..... C - - ~- ....... -' ......... 7"~"L- = ~, ' ""~ J , , ' , . 11 bradydlizobium ,, ' ~ " "~." ~ - _ ~ "~ I ~,, ' / ,, " japonicum strain tmda110 4 Ioxl mma ~. "~ "~'-. I ~ ~ ~ ' ~ ," " " - x~ d ~ ," 3 klebsiella pneumoniae ~i " ". ". I ~ ~ " " " Jl/" nifb gene~ 12 doseeffect~'-. - -~-'\ I ~ . ~ ' ~ --'I ..... 
" ~ 19 host range gene 6 high intensity 20 lucerne plant Key R-Exp, LR-Exp .... H-Sub Figure 1. The external view of research topics identified in the corpus (1981-93). 570
1998
92
Information Classification and Navigation Based on 5W1H of the Target Information

Takahiro Ikeda and Akitoshi Okumura and Kazunori Muraki
C&C Media Research Laboratories, NEC Corporation
4-1-1 Miyazaki, Miyamae-ku, Kawasaki, Kanagawa 216

Abstract

This paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, has an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work and the precision of extraction was approximately 82%.

1 Introduction

In recent years, we have seen an explosive growth in the volume of information available through on-line networks and from large capacity storage devices. High-speed and large-scale retrieval techniques have made it possible to receive information through information services such as news clipping and keyword-based retrieval. However, information retrieval is not a purpose in itself, but a means in most cases. In office work, users use retrieval services to create various documents such as proposals and reports. Conventional retrieval services do not provide users with a good access platform to help them achieve their practical purposes (Sakamoto, 1997; Lesk et al., 1997). They have to repeat retrieval operations and classify the data for themselves.

To overcome this difficulty, this paper proposes a method by which 5W1H (who, when, where, what, why, how, and predicate) information can be used to classify and navigate Japanese-language texts. 5W1H information provides users with easy-to-understand classification axes and retrieval keys because it has a set of fundamental elements needed to describe events.

In this paper, we discuss common information retrieval requirements for office work and describe the three functions that our access platform using 5W1H information provides: episodic retrieval, multi-dimensional classification, and overall classification. We then discuss 5W1H extraction methods, and, finally, we report on the results of a six-month trial in which 50 people, linked to a company intranet, used the platform to access newspaper articles.

2 Retrieval Requirements in an Office

Information retrieval is an extremely important part of office work, and particularly crucial in the creation of office documents. The retrieval requirements in office work can be classified into three types.

Episodic viewpoint: We are often required to make an episode, temporal transition data on a certain event. For example, "Company X succeeded in developing a two-gigabyte memory" makes the user want to investigate what kind of events were announced about Company X's memory before this event. The user has to collect the related events and then arrange them in temporal order to make an episode.

Comparative viewpoint: The comparative viewpoint is familiar to office workers. For example, when the user fills out a purchase request form to buy a product, he has to collect comparative information on price, performance and so on, from several companies. Here, the retrieval is done by changing retrieval viewpoints.

Overall viewpoint: An overall viewpoint is necessary when there is a large amount of classification data.
When a user produces a technical analysis report after collecting electronics-related articles from a newspaper over one year, the amount of data is too large to allow global tendencies to be interpreted, such as when the events occurred, what kind of companies were involved, and what type of action was required. Here, users have to repeat retrieval and classification by choosing appropriate keywords to condense classification so that it is not too broad-ranging to understand.

Figure 1: 5W1H classification and navigation. (Diagram not reproduced; it shows 5W1H-indexed text data feeding the three functions: episodic retrieval, multi-dimensional classification, and overall classification.)

3 5W1H Classification and Navigation

Conventional keyword-based retrieval does not consider logical relationships between keywords. For example, the condition "NEC & semiconductor & produce" retrieves an article containing "NEC formed a technical alliance with B company, and B company produced semiconductor X." Mine et al. and Satoh et al. reported that this problem leads to retrieval noise and unnecessary results (Mine et al., 1997; Satoh and Muraki, 1993). This problem makes it difficult to meet the requirements of an office because it produces retrieval noise in these three types of operations.

5W1H information is who, when, where, what, why, how, and predicate information extracted from text data through the 5W1H extraction module using language dictionary and sentence analysis techniques. 5W1H extraction modules assign 5W1H indexes to the text data. The indexes are stored in list form of predicates and arguments (when, who, what, why, where, how) (Lesk et al., 1997). The 5W1H index can suppress retrieval noise because the index considers the logical relationships between keywords. For example, the 5W1H index makes it possible to retrieve texts using the retrieval condition "who: NEC & what: semiconductor & predicate: produce." It can filter out the article containing "NEC formed a technical alliance with B company, and B company produced semiconductor X."

Based on 5W1H information, we propose a 5W1H classification and navigation model which can meet office retrieval requirements. The model has three functions: episodic retrieval, multi-dimensional classification, and overall classification (Figure 1).

3.1 Episodic Retrieval

The 5W1H index can easily do episodic retrieval by choosing a set of related events and arranging the events in temporal order. The results are readable by users as a kind of episode. For example, an NEC semiconductor production episode is made by retrieving texts containing "who: NEC & what: semiconductor & predicate: produce" indexes and sorting the retrieved texts in temporal order (Figure 2). The 5W1H index can suppress the retrieval noise produced by conventional keyword-based retrieval such as "NEC & semiconductor & produce." Also, the result is an easily readable series of events which is able to meet episodic viewpoint requirements in office retrieval.

Figure 2: Episodic retrieval example.
96.10 NEC adjusts semiconductor production downward.
96.12 NEC postpones semiconductor production plant construction.
97.1 NEC shifts semiconductor production to 64 Megabit next generation DRAMs.
97.4 NEC invests ¥40 billion for next generation semiconductor production.
97.5 NEC semiconductor production 18% more than expected.
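As an illustration of this episodic lookup, here is a minimal sketch (our own code; the record fields are assumptions, not the system's actual schema). Note that the noise article about B company is filtered out because its "who" element does not match:

    from datetime import date

    # Hypothetical 5W1H index records: (predicate, {role: value}, date, headline)
    articles = [
        ("produce", {"who": "NEC", "what": "semiconductor"}, date(1997, 5, 12),
         "NEC semiconductor production 18% more than expected."),
        ("produce", {"who": "NEC", "what": "semiconductor"}, date(1996, 10, 21),
         "NEC adjusts semiconductor production downward."),
        ("produce", {"who": "B company", "what": "semiconductor X"}, date(1997, 1, 8),
         "NEC formed a technical alliance with B company, and B company "
         "produced semiconductor X."),
    ]

    def episode(articles, who, what, predicate):
        """Select events whose 5W1H index matches, then sort by date."""
        hits = [a for a in articles
                if a[0] == predicate and a[1].get("who") == who
                and a[1].get("what") == what]
        return [headline for _, _, _, headline in sorted(hits, key=lambda a: a[2])]

    print(episode(articles, "NEC", "semiconductor", "produce"))
    # -> the two NEC headlines, in temporal order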
3.2 Multi-dimensional Classification

The 5W1H index has seven-dimensional axes for classification. Texts are classified into categories on the basis of whether they contain a certain combination of 5W1H elements or not. Though the 5W1H elements create a seven-dimensional space, users are provided with a two-dimensional matrix because this makes it easier for them to understand text distribution. Users can choose a fundamental viewpoint from the 5W1H elements to be the vertical axis. The other elements are arranged on the horizontal axis, as the left matrix of Figure 3 shows. Classification makes it possible to access data from a user's comparative viewpoints by combining 5W1H elements. For example, the cell specified by NEC and PC shows the number of articles containing NEC as a "who" element and PC as a "what" element.

Figure 3: Multi-dimensional classification example. (Matrices not reproduced; the left matrix has "who" values such as NEC on the vertical axis against "what" values such as PC and HD, and the right matrix reverses the axes.)

Users can easily obtain comparable data by switching their fundamental viewpoint from the "who" viewpoint to the "what" viewpoint, for example, as the right matrix of Figure 3 shows. This meets comparative viewpoint requirements in office retrieval.

3.3 Overall Classification

When there are a large number of 5W1H elements, the classification matrix can be packed by using a thesaurus. As 5W1H elements are represented by upper concepts in the thesaurus, the matrix can be condensed. Figure 4 has an example with six "who" elements which are represented by two categories. The matrix provides users with overall classification as well as detailed sub-classification through the selection of appropriate hierarchical levels. This meets overall classification requirements in office retrieval.

Figure 4: Overall classification example.
Electric Company: NEC opens a new internet service. / A Corp. develops a new computer. / B Inc. puts a portable terminal on the market.
Communication: C Telecommunication starts a virtual market. / D Telephone sells a communication adapter.
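A rough sketch (our own code) of how such a matrix could be built and then condensed with a one-level thesaurus; the category map below is a made-up stand-in for the real organization thesaurus:

    from collections import Counter

    docs = [
        {"who": "NEC", "what": "internet service"},
        {"who": "A Corp.", "what": "computer"},
        {"who": "C Telecommunication", "what": "virtual market"},
    ]

    def matrix(docs, vertical, horizontal, rollup=None):
        """Count articles per (vertical, horizontal) cell of the 5W1H matrix.
        rollup optionally maps an element to its upper thesaurus class."""
        cells = Counter()
        for d in docs:
            v, h = d.get(vertical), d.get(horizontal)
            if v is None or h is None:
                continue
            if rollup:
                v = rollup.get(v, v)
            cells[(v, h)] += 1
        return cells

    who_class = {"NEC": "Electric Company", "A Corp.": "Electric Company",
                 "C Telecommunication": "Communication"}  # toy thesaurus level
    print(matrix(docs, "who", "what"))
    print(matrix(docs, "who", "what", rollup=who_class))

Switching the fundamental viewpoint is then just a matter of calling matrix(docs, "what", "who").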
4 5W1H Information Extraction

5W1H extraction was done by a case-based shallow parsing (CBSP) model based on the algorithm used in VENIEX, a Japanese information extraction system (Muraki et al., 1993). CBSP is a robust and effective method of analysis which uses lexical information, expression patterns and case markers in sentences. Figure 5 shows the details of the algorithm for CBSP. In this algorithm, input sentences are first segmented into words by Japanese morphological analysis (Japanese sentences have no blanks between words). Lexical information such as the part of speech, root form and semantic categories is linked to each word. Next, 5W1H elements are extracted by proper noun extraction, pattern expression matching and case-marker matching.

Figure 5: The algorithm for CBSP.

procedure CBSP;
begin
    Apply morphological analysis to the sentence;
    foreach word in the sentence do
    begin
        if the word is a people's name or an organization name then
            Mark the word as a "who" element and push it to the stack;
        else if the word is a place name then
            Mark the word as a "where" element and push it to the stack;
        else if the word matches an organization name pattern then
            Mark the word as a "who" element and push it to the stack;
        else if the word matches a date pattern then
            Mark the word as a "when" element and push it to the stack;
        else if the word is a noun then
            if the next word is が or は then
                Mark the word and the kept unspecified elements as "who" elements and push them to the stack;
            else if the next word is を or に then
                Mark the word and the kept unspecified elements as "what" elements and push them to the stack;
            else
                Keep the word as an unspecified element;
        else if the word is a verb then
        begin
            Fix the word as the predicate element of a 5W1H set;
            repeat
                Pop one marked word from the stack;
                if the 5W1H element corresponding to the mark of the word is not fixed then
                    Fix the word as the 5W1H element corresponding to its mark;
                else
                    break repeat;
            until stack is empty;
        end
    end
end

In the proper noun extraction phase, a 60,050-word proper noun dictionary made it possible to indicate people's names and organization names as "who" elements and place names as "where" elements. For example, NEC and China are respectively extracted as a "who" element and a "where" element from the sentence "NEC produces semiconductors in China."

In the pattern expression matching phase, the system extracts words matching predefined patterns as "who" and "when" elements. There are several typical patterns for organization names and people's names, dates, and places (Muraki et al., 1993). For example, nouns followed by 社 (Co., Inc., Ltd.) or 大学 (Univ.) denote organizations and hence "who" elements. For example, 1998年4月18日 (April 18, 1998) can be identified as a date; "when" elements can be recognized by focusing on the pattern of 年 (year), 月 (month), and 日 (day).

For words which are not extracted as 5W1H elements in the previous phases, the system decides their 5W1H index by case-marker matching. The system checks the relationships between Japanese particles (case markers) and verbs and assigns a 5W1H index to each word according to rules such as: が is a marker of a "who" element and を is a marker of a "what" element. In the example "A社がXを販売する (Company A sells product X)," company A is identified as a "who" element according to the case marker が if it is not specified as a "who" element by proper noun extraction and pattern expression matching. 5W1H elements followed by a verb (predicate) are fixed as a 5W1H set, so that a 5W1H set does not include two elements for the same 5W1H index. A 5W1H element belongs to the same 5W1H set as the nearest predicate after it.
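A runnable approximation of the case-marker step is sketched below (our own simplified Python, not the authors' implementation; it assumes pre-segmented, pre-tagged input rather than a real morphological analyzer, and it skips the proper-noun and pattern phases as well as the unspecified-element handling):

    # A noun is assigned "who" if followed by が/は and "what" if followed by
    # を/に; elements are bound to the nearest following predicate.
    WHO_MARKERS, WHAT_MARKERS = {"が", "は"}, {"を", "に"}

    def cbsp_lite(tagged_words):
        """tagged_words: (surface, pos) pairs, pos in {noun, particle, verb}."""
        sets, pending = [], {}
        padded = tagged_words + [(None, None)]         # sentinel for lookahead
        for (w, pos), (nxt, _) in zip(tagged_words, padded[1:]):
            if pos == "noun":
                if nxt in WHO_MARKERS:
                    pending["who"] = w
                elif nxt in WHAT_MARKERS:
                    pending["what"] = w
            elif pos == "verb":                        # fix a 5W1H set here
                sets.append(dict(pending, predicate=w))
                pending = {}
        return sets

    sentence = [("A社", "noun"), ("が", "particle"), ("X", "noun"),
                ("を", "particle"), ("販売する", "verb")]
    print(cbsp_lite(sentence))
    # -> [{'who': 'A社', 'what': 'X', 'predicate': '販売する'}]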
Web robots collect newspaper articles from spec- ified URLs every day. The data is stored in the database, and a 5WlH index data is made for the data. Currently, 6398 news articles are stored in the databases. Some articles are disseminated to users according to their profiles. Users can browse all the data through WWW browsers and use 5WlH classi- fication and navigation functions by typing sentences or specifying regions in the browsing texts. l ~I Dissemination }~ I f I¢ I I imoosi;o , ~a'ta~a~J IN'DEX ]l I retrieval U S E R S Figure 6: Information access interface structure 5WlH elements are automatically extracted from the typed sentences and specified regions. The ex- tracted 5WlH elements are used as retrieval keys for episodic retrieval, and as axes for multi-dimensional classification and overall classification. 5.1 5W1H Information Extraction "When," "who, .... what," and "predicate" informa- tion has been extracted from 6398 electronics in- dustry news articles since August, 1996. We have evaluated extracted information for 6398 news head- lines. The headline average length is approximately 12 words. Table 1 shows the result of evaluating "who," "what," and "predicate" information and overall extracted information. In this table, the results are classified with re- gard to the presence of corresponding elements in the news headlines. More than 90% of "who," "what," and "predicate" elements can correctly be extracted with our extraction algorithm from headlines having such elements. On the other hand, the algorithm is not highly precise when there is no correspond- ing element in the article. The errors are caused by picking up other elements despite the absence of the element to be extracted. However, the er- rors hardly affect applications such as episodic re- 574 ~ : ~ j , ..... .~., . . . . . [~/lon~] ": ~ • Wl [~/lllS] -~[~t~N~;;'X~'~4~n,'DRAU'.-:~/Yt "- -~'~CM Figure 7: Episodic retrieval example (2) trieval and multi-dimensional classification because they only add unnecessary information and do not remove necessary information. The precision independent of the presence of the element is from 85% to 95% for each, and the overall precision is 82.4%. 5.1.1 Episodic Retrieval Figure 7 is an actual screen of Figure 2, which shows an example of episodic retrieval based on headline news saying, "NEC ~)~-~¢)~::~:J: 0 18%~ (NEC produces 18% more semiconductors than ex- pected.)" The user specifies the region, "NEC ~)¢ ~i~k¢)~i~ (NEC produces semiconductors)" on the headline for episodic retrieval. A "who" element NEC, a "what" element ~i~$ (semiconductor), and a "predicate" element ~ (produce) are episodic re- trieval keys. The extracted results are NEC's semi- conductor production story. The upper frame of the window lists a set of head- lines arranged in temporal order. In each article, NEC is a "who" element, the semiconductor is a "what" element and production is a "predicate" el- ement. By tracing episodic headlines, the user can find that the semiconductor market was not good at the end of 1996 but that it began turning around in 1997. The lower frame shows an article corre- sponding to the headline in the upper frame. When the user clicks the 96/10/21 headline, the complete article is displayed in the lower frame. 5.1.2 Multi-dimensional Classification Figures 8 and 9 show multi-dimensional classifica- tion results based on the headline, "NEC • A ~± • B ~± HB~-g"4'~Y-- ~ ¢) ~]~J{~$~ ~ ~.-~ (NEC, A Co., and B Co. are developing encoded data recov- . . . . . . . . . . . . . . 
5.1.2 Multi-dimensional Classification

Figures 8 and 9 show multi-dimensional classification results based on the headline "NEC, A Co., and B Co. are developing encoded data recovery techniques" (given in Japanese). The "who" elements are NEC, A Co., and B Co., listed on the vertical axis, which is the fundamental axis in the upper frame of Figure 8. The "what" elements are encode, data, recovery, and technique. A "predicate" element is develop. The "what" and "predicate" elements are both arranged on the horizontal axis in the upper frame of Figure 8.

Figure 8: Multi-dimensional classification example (2)
Figure 9: Multi-dimensional classification example (3)

When clicking a cell for "who": NEC and "what": encode, users can see the headlines of the articles containing the above two keywords in the lower frame of Figure 8. When clicking on the "what" cell in the upper frame of Figure 8, the user can switch the fundamental axis from "who" to "what" (Figure 9, upper frame). By switching the fundamental axis, the user can easily see the classification from different viewpoints. On clicking the cell for "what": encode and "predicate": develop, the user finds eight headlines (Figure 9, lower frame). The user can then see different company activities, such as the 97/04/07 headline "C Company has developed data transmission encoding technology using a satellite" (given in Japanese), shown in the lower frame of Figure 9. In this way, a user can classify article headlines by switching 5W1H viewpoints.
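The classification matrix itself can be pictured as a mapping from axis-key pairs to headline lists, so that a cell click is a single lookup. The sketch below reuses the Article structure from the previous sketch; names and layout are illustrative assumptions, not the actual MIIDAS code.

from collections import defaultdict

def build_matrix(articles, vertical="who", horizontal=("what", "predicate")):
    # Map (vertical key, horizontal key) cells to the headlines containing both
    matrix = defaultdict(list)
    for a in articles:
        for v in getattr(a, vertical):
            for axis in horizontal:
                for h in getattr(a, axis):
                    matrix[(v, h)].append(a.headline)
    return matrix

# A cell click such as ("NEC", "encode") reads matrix[("NEC", "encode")];
# switching the fundamental axis re-runs build_matrix with vertical="what".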
5.1.3 Overall Classification

Overall classification is condensed by using an organization thesaurus and a technical thesaurus. The organization thesaurus has three layers and 2800 items, and the technical thesaurus has two layers and 1000 technical terms. "Who" and "what" elements are respectively represented by the upper classes of the organization thesaurus and the technical thesaurus. The upper classes are the vertical and horizontal elements in the multi-dimensional classification matrix. "Predicate" elements are categorized by several frequent predicates based on the user's priorities.

Figure 10 shows the results of overall classification for 250 articles disseminated in April, 1997. Here, "who" elements on the vertical axis are represented by industry categories instead of company names, and "what" elements on the horizontal axis are represented by technical fields instead of technical terms.

Figure 10: Overall classification for 97/4 news
Figure 11: Overall sub-classification for 97/4 news

On clicking the second cell from the top of the "who" elements, electrical and mechanical, in Figure 10, the user can view the subcategorized classification of the electrical and mechanical industries, as indicated in Figure 11. Here, electrical and mechanical is expanded into the subcategories general electric, power electric, home electric, communication, and so on.
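Condensing the matrix with the thesauri amounts to replacing each key by its upper class before aggregating. The sketch below assumes simple dict-based thesauri mapping a company to an industry category and a term to a technical field; the layer structure and all names are illustrative.

from collections import defaultdict

def condense(matrix, org_thesaurus, tech_thesaurus):
    # Roll (company, term) cells up to (industry category, technical field)
    condensed = defaultdict(list)
    for (who, what), headlines in matrix.items():
        industry = org_thesaurus.get(who, who)   # upper class, organization thesaurus
        field = tech_thesaurus.get(what, what)   # upper class, technical thesaurus
        condensed[(industry, field)].extend(headlines)
    return condensed

# e.g. org_thesaurus = {"NEC": "electrical and mechanical"},
#      tech_thesaurus = {"encode": "communication"}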
6 Current Status

The information access platform was exploited during the MIIDAS (Multiple Indexed Information Dissemination and Acquisition Service) project, which NEC used internally (Okumura et al., 1997). A DEC Alpha workstation (300 MHz) is the server machine providing the 5W1H classification and navigation functions for 50 users through WWW browsers. User interaction occurs through CGI and JAVA programs. After a six-month trial by 50 users, four areas for improvement became evident.

1) 5W1H extraction: 5W1H extraction precision was approximately 82% for newspaper headlines. The extraction algorithm should be improved so that it can deal with embedded sentences and compound sentences. Also, dictionaries should be improved in order to be able to deal with different domains such as patent data and academic papers.

2) Episodic retrieval: The interface should be improved so that the user can switch from episodic to normal retrieval in order to compare retrieval data. Episodic retrieval is based on the temporal sorting of a set of related events. At present, geographic arrangement is expected to become a branch function for episodic retrieval. It is possible to arrange each event on a map by using 5W1H index data. This would enable users to trace moving events such as the onset of a typhoon or the escape of a criminal.

3) Multi-dimensional classification: Some users need to edit the matrix for themselves on the screen. Moreover, it is necessary to insert new keywords and delete unnecessary keywords.

7 Related Work

SOM (Self-Organization Map) is an effective automatic classification method for any data represented by vectors (Kohonen, 1990). However, the meaning of each cluster is difficult to understand intuitively. The clusters have no logical meaning because they depend on a keyword set based on the frequency with which keywords occur.

Scatter/Gather clusters information based on user interaction (Hearst and Pederson, 1995; Hearst et al., 1995). Initial cluster sets are based on keyword frequencies.

GALOIS/ULYSSES is a lattice-based classification system in which the user can browse information on the lattice produced by the existence of keywords (Carpineto and Romano, 1995). 5W1H classification and navigation is unique in that it is based on keyword functions, not on the existence of keywords.

Lifestreams manages e-mail by focusing on temporal viewpoints (Freeman and Fertig, 1995). In this sense, the idea is similar to our episodic retrieval, though the purpose and target are different.

Mine et al. and Hyodo and Ikeda reported on the effectiveness of using dependency relations between keywords for retrieval (Mine et al., 1997; Hyodo and Ikeda, 1994). As the 5W1H index is more informative than simple word dependency, it is possible to create more functions. More informative indexing, such as semantic indexing and conceptual indexing, can theoretically provide more sophisticated classification. However, such indexing is not always successful for practical use because of the difficulties of semantic analysis. Consequently, 5W1H is the most appropriate indexing method from the practical viewpoint.

8 Conclusion

This paper proposed a method by which 5W1H (who, when, where, what, why, how, and predicate) information is used to classify and navigate Japanese-language texts. 5W1H information, extracted from text data, provides an access platform with three functions: episodic retrieval, multi-dimensional classification, and overall classification. In a six-month trial, the platform was used by 50 people to access 6400 newspaper articles. The three functions proved to be effective for office documentation work, and the extraction precision was approximately 82%. We intend to make a more quantitative evaluation by surveying more users about the functions. We also plan to improve the 5W1H extraction algorithm, the dictionaries and the user interface.

Acknowledgment

We would like to thank Dr. Satoshi Goto and Dr. Takao Watanabe for their encouragement and continued support throughout this work. We also appreciate the contribution of Mr. Kenji Satoh, Mr. Takayoshi Ochiai, Mr. Satoshi Shimokawara, and Mr. Masahito Abe to this work.

References

C. Carpineto and G. Romano. 1995. A system for conceptual structuring and hybrid navigation of text databases. In AAAI Fall Symposium on AI Application in Knowledge Navigation and Retrieval, pages 20-25.

E. Freeman and S. Fertig. 1995. Lifestreams: Organizing your electric life. In AAAI Fall Symposium on AI Application in Knowledge Navigation and Retrieval, pages 38-44.

M. A. Hearst and J. O. Pederson. 1995. Revealing collection structure through information access interface. In Proceedings of IJCAI'95, pages 2047-2048.

M. A. Hearst, D. R. Karger, and J. O. Pederson. 1995. Scatter/gather as a tool for navigation of retrieval results. In AAAI Fall Symposium on AI Application in Knowledge Navigation and Retrieval, pages 65-71.

Y. Hyodo and T. Ikeda. 1994. Text retrieval system used on structure matching. The Transactions of The Institute of Electronics, Information and Communication Engineers, J77-D-II(5):1028-1030.

T. Kohonen. 1990. The self-organizing map. In Proceedings of IEEE, volume 78, pages 1059-1063.

M. Lesk, D. Cutting, J. Pedersen, T. Noreault, and M. Koll. 1997. Real life information retrieval: commercial search engines. In Proceedings of SIGIR'97, page 333, July.

T. Mine, K. Aso, and M. Amamiya. 1997. Japanese document retrieval system on WWW using dependency relations between words. In Proceedings of PACLING'97, pages 290-215, September.

K. Muraki, S. Doi, and S. Ando. 1993. Description of the VENIEX system as used for MUC-5. In Proceedings of MUC-5, pages 147-159, August.

A. Okumura, T. Ikeda, and K. Muraki. 1997. Selective dissemination of information based on a multiple-ontology. In Proceedings of IJCAI'97 Ontology Workshop, pages 138-145, August.

H. Sakamoto. 1997. Natural language processing technology for information. In JEIDA NLP Workshop, July.

K. Satoh and K. Muraki. 1993. Penstation for idea processing. In Proceedings of NLPRS'93, pages 153-158, December.
A concurrent approach to the automatic extraction of subsegmental primes and phonological constituents from speech

Michael INGLEBY
School of Computing and Mathematics, University of Huddersfield, Queensgate, Huddersfield HD1 3DH, UK
[email protected]

Wiebke BROCKHAUS
Department of German, University of Manchester, Oxford Rd, Manchester M13 9PL, UK
[email protected]

Abstract

We demonstrate the feasibility of using unary primes in speech-driven language processing. Proponents of Government Phonology (one of several phonological frameworks in which speech segments are represented as combinations of relatively few subsegmental primes) claim that primes are acoustically realisable. This claim is examined critically by searching out signatures for primes in multi-speaker speech signal data. In response to a wide variation in the ease of detection of primes, it is proposed that the computational approach to phonology-based, speech-driven software should be organised in stages. After each stage, computational processes like segmentation and lexical access can be launched to run concurrently with later stages of prime detection.

Introduction and overview

In § 1, the subsegmental primes and phonological constituents used in Government Phonology (GP) are described, and the acoustic realisability claims which make GP primes seem particularly attractive to developers of speech-driven software are summarised. We then outline an approach to defining identification signatures for primes (§ 2). Our approach is based on cluster analysis using a set of acoustic cues chosen to reflect familiar events in spectrograms: plosion, frication, excitation, resonance... We note that cues indicating manner of articulation, which change abruptly at segment boundaries, are computationally simple, while those for voicing state and resonance quality are complex and calculable only after signal segmentation. Also, the regions of cue space where the primes cluster (and which serve as their signatures) are disconnected, with separate sub-regions corresponding to the occurrence of a prime in nuclear or non-nuclear segmental positions. A further complication is that GP primes combine asymmetrically in segments: one prime - the HEAD - of the combination is more dominant, while the other element(s) - the OPERATOR(S) - tend to be recessive. This is handled by establishing in cue space a central location and within-cluster variance for each prime. The training sample needed for this consists of segments in which the prime suffers modification only by minimal combination with others, i.e. on its own, or with as few other primes as possible. Then, when a segment containing the prime in less than minimal combination is presented for identification, its location in cue space lies within a restricted number of units of within-cluster variance of the central location of the prime cluster. The number of such distance units determines headedness in the segment, with separate thresholds for occurrence as head and as operator.

In § 3 we describe in more detail the stagewise procedure for identifying, via quadratic discriminants, the primes present in segments. At each stage, we detail the computational processes which are driven by the partial identification achieved by the end of the stage. The processes include segmentation, selection of a lexical cohort by manner class, detection of constituent structure, and detection and repair of the effects of phonological processes on the speech signal.
The prototype, speaker-independent, isolated-word automatic speech recognition (ASR) system is described in § 4. Called 'PhonMaster', it is implemented in C++ using objects which perform separate stages of lexical access and process repair concurrently.

1 Phonological primes and constituents

Much of the phonological research work of the past twenty years has focussed on phonological representations: on the make-up of individual segments and on the prosodic hierarchy binding skeletal positions together. Some researchers (e.g. Anderson and Ewen 1987 and Kaye et al. 1985) have proposed a small set of subsegmental primes which may occur in isolation but can also be compounded to model the many phonologically significant sounds of the world's languages. To give an example, in one version of GP (see Brockhaus et al. 1996), nine primes or ELEMENTS are recognised, viz. the manner elements h (noise) and ? (occlusion), the source elements H (voicelessness), L (non-spontaneous voicing) and N (nasality), and the resonance elements A (low), I (palatal), U (labial) and R (coronal). These elements are phonologically active - they can spread to neighbouring segments, be lenited, etc. The skeletal positions to which elements may be attached (alone or in combination) enter into asymmetric binary relations with each other, so-called GOVERNING relations. A CONSTITUENT is defined as an ordered pair, governor first on the left and governee second on the right. Words are composed of well-formed sequences of constituents. Which skeletal positions may enter into governing relations with each other is mainly determined by the elements which occupy a particular skeletal slot, so elemental make-up is an important factor in the construction of phonological constituents.

GP proponents have claimed that elements, which were originally described in articulatory terms, have audible acoustic identities. As we shall see in § 2, it is possible to define the acoustic signatures of individual elements, so that the presence of an element can be detected by analysis of the speech signal. Picking out elements from the signal is much more straightforward than identifying phonemes. Firstly, elements are subject to less variation due to the contextual effects (e.g. place assimilation) of preceding and following segments than phonemes. Secondly, elements are much smaller in number than phonemes (nine elements compared to c. 44 phonemes in English) and, thirdly, elements, unlike phonemes, have been shown to participate in the kind of phonological processes which lead to variation in pronunciation (see references in Harris 1994). Fourthly, although there is much variation of phoneme inventory from language to language, the element inventory is universal. These four characteristics of its elements, plus the availability of reliable element detection, make a phonological framework such as GP a highly attractive basis for multi-speaker speech-driven software. This includes not only traditional ASR applications (e.g. dictation, database access), but also embraces multilingual speech input, medical (speech therapy) and teaching (computer-assisted language learning) applications.
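To make the representational idea concrete, a segment can be modelled as a set of elements with one distinguished head. The encoding below is our own illustrative sketch, not PhonMaster's internal C++ representation, and the compound example is only a plausible combination, not a claim about any particular analysis.

from dataclasses import dataclass

ELEMENTS = {"h", "?", "H", "L", "N", "A", "I", "U", "R"}  # the nine GP elements

@dataclass(frozen=True)
class Segment:
    head: str                           # the dominant element
    operators: frozenset = frozenset()  # recessive elements, possibly empty

    def elements(self):
        return {self.head} | set(self.operators)

# e.g. the vowel [U] (as in 'hood') is U on its own: Segment(head="U");
# compounds add operators, e.g. Segment(head="A", operators=frozenset({"U"})).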
2 Signatures of GP elements

Table 1 below details the acoustic cues used in PhonMaster. Using training data from five speakers, male and female, synthetic and real, with different regional accents, these cues discriminate between the simplest speech segments containing an element in a minimal combination with others. In the case of a resonance element, say U, the minimal state of combination corresponds to isolated occurrence in a vowel such as [U], as in RP English hood or German Bus. The accuracy of cues such as those in Table 1 for the discrimination of the simplest speech segments has been tested by different researchers using ratios of within-class to between-class variance-covariance and dendrograms (Brockhaus et al. 1996, Williams 1997), as described in PhonMaster's documentation.

The cues are calculated from fast Fourier transforms (FFTs) of speech signals in terms of total amplitude or energy distribution ED across low, middle and high frequency parts of the vocal range, and the angular frequencies w(F) and amplitudes a(F) of formants. The first four cues, phi_1 to phi_4, are properties of a single spectral slice, and the change in these four from slice to slice is logged as phi_5, which peaks at segment boundaries. The duration cue phi_6 is segment-based, computable only after segmentation from the length in slices from boundary to boundary, normalising this length using the JSRU database of the relative durations of segments in different manner classes (see Chalfont 1997). The normalisation is a simple form of time-warping without the computational complexity of dynamic time-warping or Hidden Markov Models (HMMs).

Cue     Label                Definition
phi_1   Energy ratio 1       phi_1 = ED_lo / ED_hi
phi_2   Energy ratio 2       phi_2 = ED_mid / ED_hi
phi_3   Width                phi_3 = (w(F2) - w(F1)) / (w(F3) - w(F2))
phi_4   Fall                 phi_4 = a(F1) / (a(F3) + a(F2))
phi_5   Change               with d(phi) = phi_next-slice - phi_current-slice:
                             phi_5 = d(phi_1) + d(phi_2) + d(phi_3) + d(phi_4)
phi_6   Duration             phi_6 operates with reference to a durations database
phi_7   F1 Trajectory        phi_7 = w(F1)_bound / w(F1)_steady
phi_8   Formant Transition   with Dw = w_steady - w_bound:
                             phi_8 = (Dw(F3) + Dw(F2)) / ...

Table 1. Cues used to define signatures

The other segment-based cues contrast steady-state formant values at the centre of a segment with values at the entrance and exit boundaries. They describe the context of a segment without going to the computational complexity of triphone HMMs (e.g. Young 1996). The PhonMaster approach is not tied to a particular set of cues, so long as the members of the set are concerned with ratios, which vary much less from speaker to speaker than absolute frequencies and intensities. Nor is the approach bound to FFTs - linear predictive coding would extract energy density and formants just as well.

Signatures are defined from cues by locating cluster centres in cue space and defining a quadratic discriminant based on the variance-covariance matrix of the cluster. When elements occur in higher degrees of combination than those selected for the training sample, separate detection thresholds for the distance from the cluster centre are set for occurrence as head and occurrence as operator.
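The matching step can be sketched as a squared Mahalanobis distance to each element's cluster centre, with two thresholds deciding head versus operator occurrence. The means, covariances and threshold values below are assumptions for illustration, not PhonMaster's trained values.

import numpy as np

def mahalanobis_sq(x, mean, cov):
    # Squared Mahalanobis distance of cue vector x from a cluster centre
    d = np.asarray(x) - mean
    return float(d @ np.linalg.inv(cov) @ d)

def element_status(x, mean, cov, head_thresh=2.0, op_thresh=4.0):
    # Returns 'head', 'operator' or None for one element's signature
    dist = mahalanobis_sq(x, mean, cov)
    if dist <= head_thresh:
        return "head"
    if dist <= op_thresh:
        return "operator"
    return None

# x is the cue vector (phi_1 ... phi_8) of a segment; one (mean, cov) pair is
# stored per element, estimated from the minimal-combination training sample.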
3 Stagewise element recognition

The detection of elements in the signal proceeds in three stages, with concurrent processes (lexical access, phonological process repair, ...) being launched after each stage and before the full identity of a segment has been established. The overall architecture of the recognition task is shown in Figure 1. At Stage 1, the recogniser checks for the presence of the manner elements h and ?.

Figure 1. Stagewise cue invocation strategy

This launches the calculation of cues phi_5 (for the automatic segmentation process) and phi_6 (to distinguish vowels from approximants, and to determine vowel length). The ensuing manner class assignment process produces the classes:

Occ  Occlusion (i.e. ? present as head, as in plosives and affricates)
Sfr  Strong fricative (i.e. h present as head, as in [s], [z], [S] and [Z])
Wfr  Weak fricative (i.e. h present as operator, as in plosives and non-sibilant fricatives)
Plo  Plosion (as for Wfr, but interpreted as plosion when directly following Occ - except word-initially)
Nas  Nasal (i.e. ? present as operator)
App  Approximant
SVo  Short vowel
LVo  Long vowel or diphthong
Vow  Vowel (not readily identifiable as being either long or short)

Figure 2. Representation of potential after Stage 1

As soon as such a sequence of manner classes becomes available, repair processes and lexical searches can be launched concurrently. The repair object refers to the constituent structure which can be built on the basis of manner-class information alone and checks its conformance to the universal principles of grammar in GP as well as to language-specific constraints. In cases of conflict with either, a new structure is created to resolve the conflict. For example, the word potential is often realised without a vowel between the first two consonants. This elided vowel would be restored automatically by the repair object, as illustrated in Figure 2, where a nuclear position (N) has been inserted between the two onset (O) positions occupied by the plosives.

Constituent structure is less specific than manner classes (in certain cases, different manner-class sequences are assigned the same constituent structure), so manner classes form the key for lexical access at Stage 1. Zue (1985) reports that, even in a large lexicon of c. 20,000 words, around a third of the words can be identified uniquely by manner class alone. This is the case for languages such as English, German, French and Italian, so the accessing of an individual word may be successful as early as Stage 1, and no further data processing need be carried out.

If, however, as in Figure 3, the manner-class sequence identified is a common one, shared by several words, then the recognition process moves on to Stage 2, where the phonatory properties of the segments identified at Stage 1 are determined.

Figure 3. Lexical search screen for a common manner class sequence (Stage 1)

Continuing with the example in Figure 3, the lexical access object would now discard words such as seed or shade, as neither of them contains the element H (voicelessness in obstruents), whose presence has been detected in both the initial fricative and the final plosive at Stage 2. Again, it may be possible to identify a unique word candidate at the end of Stage 2, but if several candidates are available, recognition moves on to Stage 3. Here, the focus is on the four resonance elements. As the manifestations of U, R, I and A vary between voiced vs. voiceless obstruents vs. sonorants, appropriate cues are invoked for each of these three broad classes (some of the cues reusing information gathered at Stage 1). The detection of certain resonance elements then provides all the necessary information for a final lexical search. In our example, only one word, seep, contains all the elements detected at Stages 1 to 3, as illustrated in Figure 4.

Figure 4. Lexical search screen for a common manner class sequence (Stage 3)
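The staged narrowing of the lexical cohort can be sketched as successive filters over a lexicon keyed by manner-class sequences and element sets. The Entry layout and the example are our own assumptions, not PhonMaster's lexicon format.

from dataclasses import dataclass

@dataclass(frozen=True)
class Entry:
    form: str
    manner: tuple        # e.g. ("Sfr", "LVo", "Plo")
    elements: frozenset  # all elements across the word's segments

def cohort_by_manner(lexicon, manner_seq):
    # Stage 1: all words whose stored manner-class sequence matches
    return [e for e in lexicon if e.manner == tuple(manner_seq)]

def filter_by_elements(cohort, detected):
    # Stages 2 and 3: keep words containing every element detected so far
    return [e for e in cohort if detected <= e.elements]

# A Stage-1 cohort {seed, shade, seep, ...} is cut down once H (Stage 2)
# and then the resonance elements (Stage 3) are added to `detected`.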
Concurrently with this lexical search, repair processes check for the effects of assimilation, allowing adjacent segments (especially in clusters involving nasals and plosives) to share one or more resonance elements, thus resolving possible access problems arising from words such as input /'InpUt/ being realised as ['ImpUt].

4 PhonMaster and its successors

The PhonMaster prototype was implemented in C++ by a PhD student educated in object-oriented design and Windows application programming. It uses standard object-class libraries for screen management, standard relational database tools for control of the lexicon, and standard code for FFT, as in a spectrogram display object. Users may add words using a keypad labelled with IPA symbols. Manner-class sequences and constituent structure are generated automatically. The objects concerned with the extraction of cues from spectra, segmentation, manner-class sequencing, the display of constituent structure, and the repair of the effects of lenition and assimilation are custom built.

PhonMaster does not use corpus trigram statistics (e.g. Young 1996) to disambiguate word lattices, and there is no speaker adaptation. Without these standard ways of enhancing pure pattern-recognition accuracy, its success rate for pure word recognition is around 75%. We are contemplating the addition of pitch cues, which, with duration, would allow the detection of stress, which may further increase accuracy.

Object orientation makes the task of incorporating currently popular pattern recognition methods fairly straightforward. HMMs whose hidden states have cues like ours as observables are obvious things to try. Artificial Neural Nets (ANNs) also fit into the task architecture in various places. Vector quantisation ANNs could be used to learn the best choice of thresholds for head-operator detection and discrimination. ANNs with output nodes based on our quadratic discriminants in place of the more common linear discriminants are also an option, and their output node strengths would be direct measures of the presence of elements.

References

Anderson J.M. and Ewen C.J. (1987) Principles of Dependency Phonology. Cambridge University Press, Cambridge, England, 312 pp.

Brockhaus W.G., Ingleby M. and Chalfont C.R. (1996) Acoustic signatures of phonological primes. Internal report. Universities of Manchester and Huddersfield, England.

Chalfont C.R. (1997) Automatic Speech Recognition: a Government Phonology perspective. PhD dissertation, University of Huddersfield.

Harris J. (1994) English Sound Structure. Blackwell, Oxford, England.

Kaye J.D., Lowenstamm J. and Vergnaud J.-R. (1985) The internal structure of phonological elements: a theory of charm and government. Phonology Yearbook, 2, pp. 305-328.

Williams G. (1997) A pattern recognition model for the phonetic interpretation of elements. SOAS Working Papers in Linguistics and Phonetics, 7, pp. 275-297.

Young S. (1997) A Review of Large-vocabulary Continuous-speech Recognition. IEEE Signal Processing Magazine, September Issue.

Zue V.W. (1985) The use of speech knowledge in Automatic Speech Recognition. Proceedings of the IEEE, 73/11, pp. 1602-1615.
Exploring the Characteristics of Multi-Party Dialogues

Masato Ishizaki
Japan Advanced Institute of Science and Technology
Tatsunokuchi, Noumi, Ishikawa, 923-1292, Japan
[email protected]

Tsuneaki Kato
NTT Communication Science Labs.
2-4, Hikaridai, Seika, Souraku, Kyoto, 619-0237, Japan
[email protected]

Abstract

This paper describes novel results on the characteristics of three-party dialogues, obtained by quantitatively comparing them with those of two-party dialogues. Previous dialogue research has mainly focussed on two-party dialogues, because data collection of multi-party dialogues is difficult and there are very few theories handling them, although research on multi-party dialogues is expected to be of much use in building computer-supported collaborative work environments and computer-assisted instruction systems. In this paper, we first describe our data collection method for multi-party dialogues using a meeting scheduling task, which enables us to compare three-party dialogues with those of two-party. Then we quantitatively compare these two kinds of dialogues on measures such as the number of characters and turns and the patterns of information exchanges. Lastly, we show that the patterns of information exchanges in speaker alternation and initiative-taking can be used to characterise three-party dialogues.

1 Introduction

Previous research on dialogue has mostly focussed on two-party human-human dialogue, with a view to developing practical human-computer dialogue systems. However, our everyday communicative activities involve not only two-party communicative situations but also those with more than two parties (we call this multi-party). For example, it is not unusual for us to chitchat with more than one friend, and business meetings are usually held among more than two participants.

Recently, advances in computer and networking technologies have enabled us to examine the possibility of using computers to assist effective communication in business meetings. In parallel with this line of computer-assisted communication research, autonomous programs called 'agents', which enable users to use computers effectively for solving problems, have been extensively studied. In this research trend, agents are supposed to be distributed among computers, and how they cooperate in problem solving is one of the most important research topics. Previous studies on two-party dialogue can be of some use to the above computer-related communication research, but research on multi-party interaction can contribute more directly to its advances. Furthermore, research on multi-party dialogue is expected to further our understanding of the nature of human communication, in combination with previous and ongoing research on two-party dialogue.

The purpose of this paper is to quantitatively show the characteristics of multi-party dialogues in comparison with those of two-party dialogues, using actual dialogue data. In exploring these characteristics, we will concentrate on the following problems.

What patterns of information exchanges do conversational participants form? When abstracting over the types of speech acts, in two-party dialogues the pattern of information exchanges is that the first and second speakers alternately contribute (A-B-A-B ...). But in multi-party dialogues, for example in three-party dialogues, dialogue does not seem to proceed as A-B-C-A-B-C ..., since this pattern seems to be too inefficient if B tells C what B has been told by A, in which case C will be told the same content twice, and too efficient and strict if A, B and C always initiate new topics in this order, in which case they have no
But in multi- party dialogues, for example, in three-party dialogues, dialogue does not seem to pro- ceed as A-B-C-A-B-C ..., since this pat- tern seems to be too inefficient if B tells C what B are told by A, which C will be told the same content twice, and too efficient and strict if A, B and C always initiate new topics in this order, in which they have no 583 occasions for checking one's understanding. • How do conversational participants take initiative? In business meetings, most of which are of multi-party, chairper- sons usually control the flow of informa- tion for effective and efficient discussions. Are there any differences between in multi- and two-party dialogues? For example, are there any possibilities if in multi-party di- alogues the role of chairpersons emerges from the nature of the dialogues? These are not only problems in exploring multi-party dialogues. For example, we do not know how conversational participants take turns (when do they start to talk)? Or how and when do conversational participants form small subgroups? However, the two problems we will tackle here are very important issues to building computer systems in that they directly relates to topic management in dialogue pro- cessing, which is necessary to correctly process anaphora/ellipsis and effective dialogue control. In the following, firstly, previous research on multi-party dialogues is surveyed. Secondly, our task domain, data collection method, and ba- sic statistics of the collected data are explained. Thirdly, our dialogue coding scheme, coding re- sults and the resultant patterns of information exchanges for two- and multi-party dialogues are shown. Lastly, the patterns of initiative tak- ing behaviour are discussed. 2 Related Studies Sugito and Sawaki (1979) analysed three nat- urally occurring dialogues to characterise lan- guage behaviour of Japanese in shopping situ- ations between a shop assistant and two cus- tomers. They relate various characteristics of their dialogue data such as the number of ut- terances, the types of information exchanges and patterns of initiative taking to the stages or phases of shopping like opening, discussions between customers, clarification by a customer with a shop assistant and closing. Novick and Ward (1993) proposed a compu- tational model to track belief changes of a pilot and an air traffic controller in air traffic control (ATC) communication. ATC might be called multi-party dialogue in terms of the number of conversational participants. An air traffic con- troller exchanges messages with multiple pilots. But this is a rather special case for multi-party dialogues in that all of ATC communication consists of two-party dialogues between a pilot and an air traffic controller. Novick et al. (1996) extended 'contribution graph' and how mutual belief is constructed for multi-party dialogues, which was proposed by Clark (1992). They used their extension to analyse an excerpt of a conversation between Nixon and his brain trust involving the Water- gate scandal. Clark's contribution graph can be thought of as a reformulation of adjacency pairs and insertion sequences in conversation analy- sis from the viewpoint that how mutual belief is constructed, and are devoted to the analysis of two-party dialogues. They proposed to include reactions of non-intended listeners as evidence for constructing mutual belief and modify the notation of the contribution graph. 
Schegloff (1996) pointed out three research topics in multi-party dialogue from the viewpoint of conversation analysis. The first topic involves recipient design: a speaker builds referential expressions so as to be easily understood by the intended listener, which is related to next-speaker selection. The second concerns reasoning by non-intended listeners: when a speaker praises some conversational participant, the remaining participants can infer that the speaker criticises them for not behaving like the praised participant. The third is schism, which can often be seen at parties or in teaching classes. For example, when a speaker continues to tell an uninteresting story for hours, party attendees split off and start to talk to their neighbours locally.

Eggins and Slade (1997) analysed naturally occurring dialogues within a systemic grammar framework to characterise various aspects of communication, such as how attitude is encoded in dialogues, how people negotiate with, support and confront each other, and how people establish group membership.

By and large, there are very few studies of multi-party dialogue in computational linguistics, and the several or more studies of multi-party dialogue in discourse analysis only analyse their example dialogues. As far as we know, there is no research quantitatively comparing the characteristics of multi-party dialogues with those of two-party dialogues. The research topics enumerated for conversation analysis are also of interest to computational linguistics research, but obviously we cannot handle all the problems of multi-party dialogues here. This paper concentrates on the patterns of information exchanges and initiative-taking, which are among the issues directly related to the computational modelling of multi-party dialogues.

3 Data Collection and Basic Statistics

For the purpose of developing distributed autonomous agents which assist users with problem solving, we planned and collected two- and three-party dialogues using a meeting scheduling task. We tried to set up the same problem solving situations for both types of dialogues with respect to the participants' goals, knowledge, gender, age and educational background. Our goal is to develop computational applications in which agents with equal status solve users' problems by exchanging messages, which is the reason why we did not collect dialogue data between participants of different status, such as expert-novice or teacher-pupil.

The experiments were conducted in such a way that, for one task, the subjects were given a list of goals (meetings to be scheduled) and some pieces of information about meeting rooms and equipment like overhead projectors, and were instructed to make a meeting schedule satisfying as many of the participants' constraints as possible. The data were collected by assigning 3 different problems or task settings to 12 parties, each consisting of either two or three subjects, which amounts to 72 dialogues in total. The following conditions were carefully set up to make the dialogue subjects as equal as possible.

- Both two- and three-party groups were constrained to be of the same gender. The same number of dialogues (36 dialogues) were collected for female and male groups.
- The average ages of the female and male subjects were 21.0 (S.D. 1.6) and 20.8 (S.D. 2.1) years. All participants were either university students or graduates.
- Subjects were given the same number of goals and pieces of information (needless to say, the kinds of goals and information are different for each participant in a group).

In these experiments, the dialogues among the subjects were recorded on DAT recorders in a non-face-to-face condition, which excludes the effects of non-linguistic behaviour. The average length of all collected dialogues is 473.5 seconds (approximately 7.9 minutes) and the total amounts to 34094 seconds (approximately 9.5 hours).

There are dialogues in which the participants mistakenly finished before they had satisfied all possible constraints. It is very rare that one party made this sort of mistake in all three task settings assigned to them, but in order to eliminate unknown effects, we excluded all three dialogues of a party if they made a mistake in at least one task setting. For this reason, we limit the target of our analysis to the 18 dialogues each for two- and three-party dialogues which do not have this kind of problem (the average length of the target dialogues is 494.2 seconds, approximately 8.2 minutes).

Table 1 shows the number of hiragana characters(1) and turns for each speaker, and their totals, for two- and three-party dialogues. It illustrates that the total number of characters and turns of three-party dialogues are almost the same as those of two-party dialogues, which indicates that the experimental setup worked as intended between two- and three-party dialogues.

          # of chars.   # of turns
2-party   92637         3572
3-party   93938         3520

Table 1: Total no. of characters and turns in two- and three-party dialogues

(1) This paper uses the number of hiragana characters to assess how much the speakers talk. One hiragana character approximately corresponds to one mora, which has been used as a phonetic unit in Japanese.

Table 2 shows the ANOVA of the number of hiragana characters and turns, calculated separately for the three task settings, to examine whether there are differences in the number of characters and turns between speakers.

          ANOVA of chars.     ANOVA of turns
2-party   3.57, 0.59, 0.02    0.00, 0.00, 0.00
3-party   2.53, 1.47, 0.43    3.91, 1.72, 1.00

Table 2: ANOVA of characters and turns for three problem settings in two- and three-party dialogues

The results indicate that there are statistically no differences at the .05 level in the number of characters and turns between speakers, both in two- and three-party dialogues, except for one task setting with respect to the number of turns in three-party dialogues; even there, there is statistically no difference at the .01 level. Regarding the experimental setup, we can therefore conclude that our setup generally worked as intended.
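For reproducibility, each per-setting test can be sketched as a one-way ANOVA over per-dialogue character (or turn) counts grouped by speaker; scipy's f_oneway is one standard way to obtain the F statistic. The sample arrays below are invented, not the actual counts.

from scipy.stats import f_oneway

# Characters uttered per dialogue, grouped by speaker, for one task setting
# (invented numbers; the real data are the 18 two-party dialogues).
speaker_a = [2570, 2410, 2650, 2480, 2390, 2550]
speaker_b = [2440, 2500, 2610, 2460, 2520, 2430]

f_stat, p_value = f_oneway(speaker_a, speaker_b)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # compare p with .05 and .01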
I # of turns I [2-Pl 92637 I 3572[ [3-P I 93938 I 3520 I Table 1: Total no. of characters and turns in two- and three-party dialogues [[ ANOVA of chars. [ ANOVA of turns 2-p 3.57, 0.59, 0.02 I 0.00, 0.00, 0.00 3-p 2.53, 1.47, 0.43 I 3.91, 1.72, 1.00 Table 2: ANOVA of characters and turns for three problem settings in two- and three-party dialogues kinds of goals and information are differ- ent for each participant in a group). In these experiments, dialogues among the subjects were recorded on DAT recorders in non-face-to-face condition, which excludes the effects of non-linguistic behaviour. The aver- age length of all collected dialogues is 473.5 sec- onds (approximately 7.9 minutes) and the total amounts to 34094 seconds (approximately 9.5 hours). There are dialogues in which participants mistakenly finished before they did not satisfy all possible constraints. It is very rare that one party did this sort of mistakes for all three task settings assigned to them, however in order to eliminate unknown effects, we exclude all three dialogues if they made mistakes in at least one task setting. For this reason, we limit the target of our analysis to 18 dialogues each for two- and three-party dialogues which do not have such kind of problem (the average length of the tar- get dialogues is 494.2 seconds (approximately 8.2 minutes). Table 1 shows the number of hiragana char- acters 1 and turns for each speakers, and its total for two- and three-party dialogues. It il- lustrates that the total number of characters and turns of three-party dialogues are almost the same as those of two-party, which indicates 1 This paper uses the number of hiragana characters to assess how much speakers talk. One hiragana character approximately corresponds to one mora, which has been used as a phonetic unit in Japanese. 585 the experimental setup worked as intended be- tween two- and three-party dialogues. Table 2 shows ANOVA of the number of hiragana char- acters and turns calculated separately for dif- ferent task settings to examine whether there are differences of the number of characters and turns between speakers. The results indicates that there are statistically no differences at .05 level to the number of characters and turns for different speakers both in two- and three-party dialogues except for one task setting as to the number of turns in three-party dialogues. But this are statistically no differences at .01 level. For the experimental setup, we can understand that our setup generally worked as intended. 4 Patterns of Information Exchanges 4.1 Dialogue Coding To examine patterns of information exchanges and initiative taking, we classify utterances from the viewpoint of initiation-response and speech act types. This classification is a modification of the DAMSL coding scheme, which comes out of the standardisation work- shop on discourse coding scheme (Carletta et al., 1997b), and a coding scheme proposed by Japanese standardisation working group on dis- course coding scheme(Ichikawa et al., 1998) adapted to the characteristics of this meeting scheduling task and Japanese. We used two coders to classify utterances in the above 36 dialogues and obtained 70% rough agreement and 55% kappa agreement value. Even in the above discourse coding standardisation groups, they are not at the stage where which agreement value range coding results need to be reliable. 
To make the analysis of our dialogue data robust, we analysed both coders' versions of the dialogues and obtained similar results. As space is limited, instead of discussing both results, we discuss one of them in the following.

On the initiation-response dimension, utterances are examined as to whether they fall into the category of response, which is judged by checking whether cohesive relations can be discerned between the current utterance and a corresponding utterance, if one exists. The corresponding utterance must either be the one just before the current utterance or, in the case of embedding, or insertion sequences, some utterance before the current one. If the current utterance is not judged to be a response, then it falls into the category of initiation.

On the speech act dimension, as in Table 3, utterances are classified into five types each for initiating and responding, two of which are used for both initiating and responding. The hyphenated categories show the adaptation to our task domain and to Japanese. For example, in this task domain, expressions of 'want' regarding the use of some meeting room are hard to distinguish from those of 'proposal' in Japanese, and thus these two categories are combined into the single category 'want-propose'.

Types of speech act for initiating:
  Want-propose (WP), Inform (IF), Request (RQ)
Types of speech act for responding:
  Positive_answer-accept (PA), Negative_answer-reject (NA), Content-answer (CA), Hold (HL)
Types of speech act for both:
  Question-check (QC), Counter_propose (CP), Meta (MT)

Table 3: Types of speech act for coding two- and three-party dialogues

4.2 Patterns of act sequences by speakers

Table 5 shows the frequency ratio of the lengths of act sequences, represented by different speakers, in two- and three-party dialogues. An act sequence is defined to run from a newly initiating utterance to the utterance before the next newly initiating utterance. Let us examine an excerpt from our dialogue data in Table 4, where the first column shows a tentative utterance number, the second the speaker, the third the utterance type, and the fourth an English translation of the utterance.

#  Sp  Type  Utterance
1  E   WP    Well, I want to plan my group's three-hour meeting after a two-hour meeting with Ms. S's group.
2  S   QC    After the meeting?
3  E   PA    Yes.
4  S   PA    Right.
5  E   PA    Right.
6  S   QC    What meetings do you want to plan, Ms. H?
7  H   CA    I want to schedule our group's meeting for two hours.

Table 4: An excerpt from the meeting scheduling dialogues

In this example, there are two act sequences: one from the first to the fifth utterance (E-S-E-S-E) and one from the sixth to the seventh (S-H).

         2 acts   3 acts   4 acts   5 acts   6 acts
2-party  54.2     21.6     11.8     5.3      2.1
3-party  45.1     26.0     12.2     5.4      2.4

Table 5: Frequency ratio (%) for the number of act sequences in two- and three-party dialogues

Our purpose here is to examine how many of the act sequences in three-party dialogues consist of two participants' interaction. Hence we abstract a speaker's name by its position in a sequence: the speaker in the first turn is named A, and those in the second and third turns are named B and C, respectively.
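The renaming and the two-party count can be sketched directly; the example sequences below are invented.

def abstract_pattern(speakers):
    # Rename speakers by order of first appearance: E,S,E,S,E -> "ABABA"
    names = {}
    out = []
    for s in speakers:
        if s not in names:
            names[s] = "ABC"[len(names)]
        out.append(names[s])
    return "".join(out)

def two_party_share(sequences):
    # Share of act sequences involving exactly two distinct speakers
    two = sum(1 for seq in sequences if len(set(seq)) == 2)
    return 100.0 * two / len(sequences)

# abstract_pattern(["E", "S", "E", "S", "E"]) == "ABABA"
# two_party_share([["E", "S", "E"], ["E", "S", "H"]]) == 50.0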
In three act sequences, the frequency ratios of A-B-A and A-B-C are 62.7% and 37.3%, respectively, which signifies the dominance of two-party in- teractions. Likewise, in four, five and six act se- quences, two-party interactions are dominant, 53.2%, 36.7% and 31.8%, both of which are far more frequent than theoretical expected fre- quencies (25%, 12.5 and 6.3%). In three-party dialogues, two-party interactions amounts to 70.6% (45.1%+26.0% x 62.7%+ 12.2% x 53.2%+ 5.4% x 36.7% + 2.4% x 31.8% = 70.6%) against total percentage 91.1% from two to six act se- quences (if extrapolating this number to total 100% is allowable, 77.5% of the total interac- tions are expected to be of two-party). The conclusion here is that two-party inter- actions are dominant in three-party dia- logues. This conclusion holds for our meeting scheduling dialogue data, but intuitively its ap- plicability to other domains seems to be promis- ing, which should obviously need further work. 4.3 Patterns of initiative taking The concept 'initiative' is defined by Whittaker and Stenton (Whittaker and Stenton, 1988) us- ing a classification of utterance types assertions, commands, questions and prompts. The initia- tive was used to analyse behaviour of anaphoric expressions in (Walker and Whittaker, 1990). 3 act sequences [ ABel A°c I 62.7 37.3 4 act sequences 53.2 17.1 16.2 13.5 I 5 act sequences ABABA ABCAB 36.7 16.3 ABABC ABACA 10.2(each) Others 26.6 6 act sequences ABABAB ABCACB ABABAC Others ABCACA 31.8 18.2 9.1(each) 31.8 Table 6: Frequency ratio (%) of 3 to 6 act se- quences in three-party dialogues The algorithm to track the initiative was pro- posed by Chu-Carroll and Brown (1997). The relationship between the initiative and efficiency of task-oriented dialogues was empirically and analytically examined in (Ishizaki, 1997). By their definition, a conversational participant has the initiative when she makes some utterance except for responses to partner's utterance. The reason for this exception is that an utterance following partner's utterance should be thought of as the one elicited by the previous speaker rather than directing a conversation in their own right. A participant does not have the initiative (or partner has the initiative) when she uses a prompt to partner, since she clearly abdicates her opportunity for expressing some propositional content. Table 7 and 8 show the frequency ratios of who takes the initiative and X 2 value calculated from the frequencies for two- and three-party di- alogues. In two-party dialogues, based on its X 2 values, the initiative is not equally distributed between speakers in 5 out of 18 dialogues at .05 rejection level. In three-party dialogues, this occurs in 10 out of 18 dialogues, which signifies the emergence of an initiative-taker or a chair- person. To examine the roles of the participants in detail, the differences of the participants' be- haviour between two- and three party informa- 587 # Sp Type Utterance 1 E WP 2 S 3 E 4 S 5 E 6 S 7 H Well, I want to plan my group's three-hour meeting after a two-hour meeting with Ms. S's group. QC After the meeting? PA Yes. PA Right. PA Right. QC What meetings do you want to plan, Ms. H? CA I want to schedule our group's meeting for two hours. 
Table 4: An excerpt from the meeting scheduling dialogues I °25 J °53 1 J 7.43 f 7°8 1 °71 1 °°2 f I °17 J 7°° I °18 1 °°9 1 4811 °38 1 469 1 1 4°° 1 64° I 37.5 44.7 44.0 25.7' 29.2 42.9 43:8 50.0 48.3 25.0 38.2 39.1 51.9 46.2 53.1 23.4 51.0 36.0 I x = II 3001 53 I 72 I 826 I 112 I 861 25 I .°° I .03 [ 18.0 [ 3.07 I 3.26 I .07 I .45 I .18 I 13.3 I .02 13.92 j Table 7: Frequency ratio (%)of initiative-taking and X 2 values of the frequencies between different speakers in two-party dialogues tion exchanges in Table 9. The table shows the comparison between two and three speaker in- teractions in three-party dialogues as to as who takes the initiative in 3 to 6 act sequences. From this table, we can observe the tendency that E takes the initiative more frequently than S and H for all three problem settings in two-party interaction, and two of three settings in three- party interaction. S has a tendency to take more initiatives in two-party interaction than that in three-party. H's initiative taking behaviour is the other way around to S's. Comparing with S's and H's initiative taking patterns, E can be said to take the initiative constantly irrespective of the number of party in interaction. The conclusion here is that initiative- taking behaviour is more clearly observed in three-party dialogues than those in two-party dialogues. Detailed analysis of the participants' behaviour indicates that there might be differences when the participants take the initiative, which are characterised by the number of participants in interaction. 5 Conclusion and Further Work This paper empirically describes the impor- tant characteristics of three-party dialogues by analysing the dialogue data collected in the task of meeting scheduling domain. The character- istics we found here are (1) two-party inter- actions are dominant in three-party dialogues, and (2) the behaviour of the initiative-taking I H s I E I H I I 2-pi139-1,33.0,31.11 39-1,45.4,43.2 I 21-8,21-6,25.7 l 3-p 30.9, 21.9, 27.0 40.5, 35.9, 32.4 28.6, 42.2, 40.6 Table 9: Frequency ratio (%) of initiative-taking for 3 to 6 act sequences between two- and three-party interaction in three-party dialogues (Three numbers in a box are for three problem settings, respectively.) is emerged more in three-party dialogues than in those of two-party. We will take our find- ings into account in designing a protocol which enables distributed agents to communicate and prove its utility by building computer system applications in the near future. References J. Carletta, A. Isard., S. Isard, J. Kowtko, A. Newlands, G.Doherty-Sneddon, and A. H. Anderson. 1997a. The reliability of a di- alogue structure coding scheme. Computa- tional Linguistics, 23(1):13-32. J. Carletta, N. Dahlb~ick, N. Reithinger, and M. A. Walker. 1997b. Standards for dialogue coding in natural language processing. Tech- nical report. Dagstuhl-Seminar-Report: 167. J. Chu-Carroll and M. K. Brown. 1997. Track- ing initiative in collaborative dialogue inter- actions. 
In Proceedings of the Thirty-fifth An- nual Meeting of the Association for Compu- tational Linguistics and the Eighth Confer- 588 E 26.2 28.1 35.8 13.8 18.5 9.2 14.3 45.8 30.8 [ 51.2 30.4 34.0 39.3 14.5 7.4 56.8 I0.0 [ 54.5 S 57.1 45.3 45.3 34.5 38.9 38.5 25.7 25.0 21.2 I 34.1 46.4 40.4 46.4 54.5 70.4 34.1 42.5 I 36.4 H 16.7 26.6 18.9 51.7 42.6 52.3 60.0 29.2 48.1 14.6 23.2 25.5 14.3 30.9 22.2 9.1 47.5 9.1 X ~ ][ 11.3 4.2 5.70 6.28 5.44 18.9 11.9 3.50 5.81 [ 8.24 4.75 1.57 4.79 13.3 17.6 15.0 8.19 I 13.8 Table 8: Frequency ratio (%) of initiative-taking and X 2 values of the frequencies among different speakers in three-party dialogues ence of of the European Chapter of the Asso- ciation for Computational Linguistics, pages 262-270. H. H. Clark. 1992. Arenas of Language Use. The University of Chicago Press and Center for the Study of Language and Information. S. Eggins and D. Slade. 1997. Analyzing Casual Conversation. Cassell. A. Ichikawa, MI Araki, Y. Horiuchi, M. Ishizaki, S. Itabashi, T. Ito, H. Kashioka, K. Kato, H. Kikuchi, H. Koiso, T. Kumagai, A. Kurematsu, K. Maekawa, K. Mu- rakami, S. Nakazato, M. Tamoto, S. Tutiya, Y. Yamashita, and T. Yoshimura. 1998. Standardising annotation schemes for japanese discourse. In Proceedings of the First International Conference on Language Resources and Evaluation. M. Ishizaki. 1997. Mixed-Initiative Natural Language Dialogue with Variable Commu- nicative Modes. Ph.D. thesis, The Centre for Cognitive Science and The Department of Ar- tificial Intelligence, The University of Edin- burgh. K. Krippendorff. 1980. Content Analysis: An Introduction to its Methodology. Sage Publi- cations. J. R. Landis and G. G. Koch. 1977. The mea- surement of observer agreement for categorial data. Biometrics, 33:159-174. D. G. Novick and K. Ward. 1993. Mutual beliefs of multiple conversants: A computa- tional model of collaboration in air traffic con- trol. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 196-201. D. G. Novick, L. Walton, and K. Ward. 1996. Contribution graphs in multiparty discourse. In Proceedings of International Symposium on Spoken Dialogue, pages 53-56. E. A. Schegloff. 1996. Issues of relevance for discourse analysis: Contingency in action, interaction and co-participant context. In Eduard H. Hovy and Donia R. Scott, edi- tors, Computational and Conversational Dis- course, pages 3-35. Springer-Verlag. S. Sugito and M. Sawaki. 1979. Gengo koudo no kijutsu (description of language behaviour in shopping situations). In Fujio Minami, editor, Gengo to Koudo (Language and Be- haviour), pages 271-319. Taishukan Shoten. (in Japanese). hi. A. Walker and S. Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segment. In Proceedings of the Twenty-eighth Annual Meeting of the Asso- ciation for Computational Linguistics, pages 70-78. S. Whittaker and P. Stenton. 1988. Cues and control in expert-client dialogues. In Proceed- ings of the Twenty-sixth Annual Meeting of the Association for Computational Linguis- tics, pages 123-130. 589
Robust Interaction through Partial Interpretation and Dialogue Management

Arne Jönsson and Lena Strömbäck*
Department of Computer and Information Science
Linköping University, S-58183 Linköping, Sweden
email: [email protected], [email protected]

* Authors are in alphabetical order

Abstract

In this paper we present results on developing robust natural language interfaces by combining shallow and partial interpretation with dialogue management. The key issue is to reduce the effort needed to adapt the knowledge sources for parsing and interpretation to a necessary minimum. In the paper we identify different types of information and present corresponding computational models. The approach utilizes an automatically generated lexicon which is updated with information from a corpus of simulated dialogues. The grammar is developed manually from the same knowledge sources. We also present results from evaluations that support the approach.

1 The Problem

Relying on a traditional deep and complete analysis of the utterances in a natural language interface requires much effort on building grammars and lexicons for each domain. Analyzing a whole utterance also gives problems with robustness, since the grammars need to cope with all possible variations of an utterance. In this paper we present results on developing knowledge-based natural language interfaces for information retrieval applications utilizing shallow and partial interpretation. Similar approaches are proposed in, for instance, the work on flexible parsing (Carbonell and Hayes, 1987) and in speech systems (cf. (Sjölander and Gustafson, 1997; Bennacef et al., 1994)). The interpretation is driven by the information needed by the background system and guided by expectations from a dialogue manager.

The analysis is done by parsing as small parts of the utterance as possible. The information needed by the interpretation module, i.e. grammar and lexicon, is derived from the database of the background system and from information in dialogues collected in Wizard of Oz experiments. We will present the types of information that are needed by the interpretation modules. We will also report on the sizes of the grammars and lexicon, and results from applying the approach to information retrieval systems.

2 Dialogue management

Partial interpretation is particularly well-suited for dialogue systems, as we can utilize information from a dialogue manager on what is expected and use this to guide the analysis. Furthermore, dialogue management allows for focus tracking as well as clarification subdialogues to further improve the interaction (Jönsson, 1997).

In information retrieval systems a common user initiative is a request for domain concept information from the database; users specify a database object, or a set of objects, and ask for the value of a property of that object or set of objects. In the dialogue model this can be modeled in two focal parameters: Objects, related to database objects, and Properties, modeling the domain concept information. The Properties parameter models the domain concept in a sub-parameter termed Aspect, which can be specified in another sub-parameter termed Value. The specification of these parameters in turn depends on information from the user initiative together with context information and the answer from the database system. The action to be carried out by the interface for task-related questions depends on the specification of the values passed to the Objects and Properties parameters (Jönsson, 1997).

We can also distinguish two types of information sources utilized by the dialogue manager: the database with task information, T, and system-related information about the application, S.
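The request passed between the dialogue manager and the background system can be pictured as a small structure over these parameters. The field names below paraphrase the paper's terminology; the rest of the layout is our own illustration.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Properties:
    aspect: str                   # e.g. "price"
    value: Optional[dict] = None  # e.g. {"relation": "less", "arg": 100000}

@dataclass
class Request:
    objects: list                 # database objects in focus, e.g. ["car"]
    properties: Properties
    source: str = "T"             # "T" = task database, "S" = system description

# e.g. Request(objects=["car"],
#              properties=Properties("price", {"relation": "less", "arg": 100000}))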
The action to be carried out by the interface for task-related questions depends on the specification of values passed to the Objects and Properties parameters (Jönsson, 1997).
We can also distinguish two types of information sources utilized by the dialogue manager: the database with task information, T, or system-related information about the application, S.

3 Types of information
We can identify different types of information utilized when interpreting an utterance in a natural language interface to a database system. This information corresponds to the information that needs to be analyzed in user utterances.
Domain concepts are concepts about which the system has information, mainly concepts from the database, T, but also synonyms to such concepts acquired, for instance, from the information base describing the system, S.
In a database query system users also often request information by relating concepts and objects, e.g. which one is the cheapest. We call this type of language construction relational expressions. The relational expressions can be identified from the corpus.
Another common type of expression is numbers. Numbers can occur in various forms, such as dates, object and property values.
Set operations. It is necessary to distinguish utterances such as show all cars costing less than 70 000 from which of these costs less than 70 000. The former should get all cars costing less than 70 000, whereas the latter should utilize the set of cars recorded as Objects by the dialogue manager. In some cases the user uses expressions such as remove all cars more expensive than 70 000, and thus is restricting a set by mentioning the objects that should be removed.
Interactional concepts. This class of concepts consists of words and phrases that concern the interaction, such as Yes, No, etc. (cf. (Byron and Heeman, 1997)).
Task/System expressions. Users can use domain concepts such as explain, indicating that the domain concept is not referring to a request for information from the database, T, but instead from the system description, S.
When acquiring information for the interpreter, three different sources of information can be utilized: 1) background system information, i.e. the database, T, and the information describing the background system's capabilities, S, 2) information from dialogues collected with users of the system, and 3) common sense and prior knowledge on human-computer interaction and natural language dialogue. The various information sources can be used for different purposes (Jönsson, 1993).

4 The interpretation module
The approach we are investigating relies on analyzing as small and crucial parts of the utterances as possible. One of the key issues is to find these parts. In some cases an analysis could consist of one single domain or interactional concept, but for most cases we need to analyze small sub-phrases of an utterance to get a more reliable analysis. This requires flexibility in the processing of the utterances and is a further development of the ideas described in Strömbäck (1994). In this work we have chosen to use PATR-II, but in the future constructions from a more expressive formalism such as EFLUF (Strömbäck, 1997) could be needed. Flexibility in processing is achieved by one extension to ordinary PATR and some additions to a chart parser environment. Our version of PATR allows for a set of unknown words within phrases. This gives general grammar rules and helps avoid the analysis getting stuck on unknown words.
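The following fragment is only a rough, hypothetical rendering of this idea in Python; the implemented system works on chart edges and feature structures, not flat token lists. A phrase pattern is matched while a bounded number of unknown words inside the phrase is tolerated:

    def match_with_unknowns(pattern, tokens, known, max_unknown=2):
        # pattern: list of predicates over tokens; known: the lexicon's vocabulary
        i = skipped = 0
        for tok in tokens:
            if i == len(pattern):
                break
            if pattern[i](tok):
                i += 1                        # pattern element consumed
            elif tok not in known and skipped < max_unknown:
                skipped += 1                  # tolerate an unknown word inside the phrase
            else:
                return False
        return i == len(pattern)

    # e.g. "audi quattro 1985" still matches a manufacturer-year pattern
    # even though "quattro" is not in the vocabulary:
    # match_with_unknowns([lambda t: t == "audi", str.isdigit],
    #                     "audi quattro 1985".split(), {"audi", "1985"})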
In the chart parsing environment it is possible to define which of the inactive edges constitute the result.
The grammar is divided into five grammar modules, where each module corresponds to some information requested by the dialogue manager. The modules can be used independently of each other.
Domain concepts are captured using two grammar modules. The task of these grammars is to find keywords or sub-phrases in the expressions that correspond to the objects and properties in the database. The properties can be concept keywords or relational expressions containing concept keywords. Numbers are typed according to the property they describe, e.g. 40000 denotes a price. To simplify the grammars we only require that the grammar recognizes all objects and properties mentioned. The results of the analyses are filtered through the heuristics that only the most specific objects are presented to the dialogue manager.
Set operations. This grammar module provides a marker to tell the dialogue manager what type of set operation the initiative requests: new, old or restrict. The user's utterance is searched for indicators of any of these three set operators. If no indicators are found we assume that the operator is old. The chart is searched for the first and largest phrase that indicates a set operator (a simplified keyword sketch is given at the end of this section).
Recognizing interactional utterances. Many interactional utterances, such as Thank you, are not necessary to interpret for information retrieval systems. However, Yes/No-expressions are important. They can be recognized by looking for one of the keywords yes or no. One example of this is the utterance No, just the medium sized cars as an answer to whether the user wants to see all cars in a large table. The Yes/No-grammar can conclude that it is a no answer and the property grammar will recognize the phrase medium sized cars.
System/Task recognition. Utterances asking for information about a concept, e.g. Explain the numbers for rust, can be distinguished from utterances requesting information acquired from the background system, e.g. How rust prone are these cars, by defining keywords with a special meaning, such as explain. If any of these keywords are found in an utterance, the dialogue manager will interpret the question as system-related. If not, it will assume that the question is task-related.
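The sketch below approximates the set-operation module with a flat keyword scan. The indicator lists are invented for illustration, and the implemented module instead searches the chart for the first and largest indicating phrase:

    RESTRICT_INDICATORS = {"remove", "except"}   # invented examples
    NEW_INDICATORS = {"show", "all"}              # invented examples

    def set_operator(tokens):
        for tok in tokens:                        # first indicator decides
            if tok in RESTRICT_INDICATORS:
                return "restrict"
            if tok in NEW_INDICATORS:
                return "new"
        return "old"                              # default when no indicator is found

    # set_operator("remove audi 1985 and 1988".split()) -> "restrict"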
5 An example
To illustrate the behaviour of the system, consider an utterance such as show cars costing less than 100000 crowns. The word cars indicates that the set operator is new. The relational expression will be interpreted by the grammar rules:

relprop -> property
  : 0 properties = 1 properties .
relprop -> property comp glue entity
  : 0 properties = 1 properties
  : 0 properties = 2 properties
  : 0 properties = 4 properties
  : 0 properties value arg = 4 value .

This results in two analyses, [Aspect: price] and [Aspect: price, Value: [Relation: less, Arg: 100000]], which, when filtered by the heuristics, present the latter, most specific analysis to the dialogue manager. The dialogue manager inspects the result and, as it is a valid database request, forwards it to the background system. However, too many objects satisfy the request and the dialogue manager initiates a clarification request to the user to further specify the request. The user responds with remove audi 1985 and 1988. The keyword remove triggers the set operator restrict and the objects are interpreted by the rules:

object -> manufacturer
  : 0 object = 1 object .
object -> manufacturer * 2 year
  : 0 object = 1 object
  : 0 object year = 2 year .

This results in three objects: [Manufacturer: audi], [Manufacturer: audi, Year: 1985] and [Manufacturer: audi, Year: 1988]. When filtered, the first interpretation is removed. This is integrated by the dialogue manager to provide a specification of both Objects and Properties, which is passed to the background system, and a correct response can be provided.

6 Empirical evidence for the approach
In this section we present results on partial interpretation¹ for a natural language interface for the CARS application, a system for typed interaction with a relational database with information on second-hand cars. The corpus contains 300 utterances from 10 dialogues. Five dialogues from the corpus were used when developing the interpretation methods, the Development set, and five dialogues were used for evaluation, the Test set.

¹ Results on dialogue management have been presented in Jönsson (1997).

6.1 Results
The lexicon includes information on what type of entity a keyword belongs to, i.e. Objects or Properties. This information is acquired automatically from the database, with synonyms added manually from the background system description.
The automatically generated lexicon of concepts consists of 102 entries describing Objects and Properties. From the system description information base, 23 synonyms to concepts in the database were added to the lexicon. From the Development set another 7 synonyms to concepts in the database, 12 relational concepts and 7 markers were added.
The five grammars were developed manually from the Development set. The object grammar consists of 5 rules and the property grammar of 21 rules. The grammar used for finding set indicators consists of 13 rules. The Yes/No grammar and System/Task grammar need no grammar rules. The time for developing these grammars is estimated at a couple of days.
The obtained grammars and the lexicon of totally 151 entries were tested on both the Development set and on the five new dialogues in the Test set. The results are presented in Table 1. In the first half of the table we present the number of utterances where the Yes/No, System/Task and Set parameters were correctly classified. In the second we present recall and precision for Objects and Properties.

Table 1: Precision and recall for the grammars

             Yes/No   S/T     Set
Devel. set   100%     100%    97,5%
Test set     100%     91,7%   86,1%

             Objects
             Fully                Partial
             Recall   Precision   Recall   Precision
Devel. set   100%     98%         100%     98%
Test set     94,1%    80%         100%     85%

             Properties
             Fully                Partial
             Recall   Precision   Recall   Precision
Devel. set   97%      97%         99%      100%
Test set     59,6%    73,9%       73,7%    91,3%

We have distinguished fully correct interpretations from partially correct ones. A partially correct interpretation provides information on the Aspect but might fail to consider Value restrictions, e.g. provide the Aspect value price but not the Value restriction cheapest for an utterance such as what is the price of the cheapest volvo. This is because cheapest was not in the first five dialogues.
The majority of the problems are due to such missing concepts. We therefore added information from the Test set.
This set provided another 4 concepts, 2 relational concepts, and 1 marker, and led us to believe that we have reached a fairly stable set of concepts. Adding these relational and domain concepts increased the correct recognition of set operations to 95,8%. It also increased the numbers for Properties recall and precision, as seen in Table 2. The other results remained unchanged.

Table 2: Precision and recall when concepts from the test set were added

           Properties
           Fully                Partial
           Recall   Precision   Recall   Precision
Test set   92,3%    79,5%       93,8%    90,6%

To verify the hypothesis that the concepts are conveyed from the database and a small number of dialogues, we analyzed another 10 dialogues from the same setting, but where the users knew that a human interpreted their utterances. From these ten dialogues only another 3 concepts and 1 relational concept were identified. Furthermore, the concepts are borderline cases, such as mapping the concept inside measurement onto the database property coupé, which could well result in a system-related answer if not added to the lexicon.
As a comparison, a traditional non-partial PATR grammar, developed for good coverage on only one of the dialogues, consists of about 200 rules. The lexicon needed to cover all ten dialogues consists of around 470 words, to compare with the 158 of the lexicon used here.
The principles have also been evaluated on a system with information on charter trips to the Greek archipelago, TRAVEL. This corpus contains 540 utterances from 10 dialogues. The information base for TRAVEL consists of texts from travel brochures which contain a lot of information. It includes a total of around 750 different concepts. Testing this lexicon on the corpus of ten dialogues, 20 synonyms were found. When tested on a set of ten dialogues collected with users who knew it was a simulation (cf. the CARS corpus), another 10 synonyms were found. Thus 99% of the concepts utilized in this part of the corpus were captured from the information base and the first ten dialogues. This clearly supports the hypothesis that the relevant concepts can be captured from the background system and a fairly small number of dialogues.
For the TRAVEL application we have also estimated how many of the utterances in the corpus can be analyzed by this model. 90,4% of the utterances can easily be captured by the model. Of the remaining utterances, 4,3% are partly outside the task of the system and a standard system message would be a sufficient response. This leaves only 4,8% of the utterances that can not be handled by the approach.

6.2 Discussion
When processing data from the dialogues we have used a system for lexical error recovery, which corrects user mistakes such as misspellings and segmentation errors. This system utilizes a trained HMM and accounts for most errors (Ingels, 1996). In the results on lexical data presented above we have assumed a system for morphological analysis to handle inflections and compounds.
The approach does not handle anaphora. This can result in erroneous responses; for instance, Show rust for the mercedes will interpret the mercedes as a new set of cars and the answer will contain all mercedeses, not only those in the previous discourse. In the applications studied here this is not a serious problem. However, for other applications it can be important to handle such expressions correctly. One possible solution is to interpret the definite form of object descriptions as a marker for an old set.
The application of the method has only utilized information acquired from the database, from information on the system's capabilities, and from corpus information. The motivation for this was that we wanted to use unbiased information sources. In practice, however, one would like to augment this with common sense knowledge on human-computer interaction, as discussed in Jönsson (1993).

7 Conclusions
We have presented a method for robust interpretation based on a generalization of PATR-II which allows for generalization of grammar rules and partial parsing. This reduces the sizes of the grammar and lexicon, which results in reduced development time and faster computation. The lexical entries corresponding to entities about which a user can obtain information are mainly automatically created from the background system. Furthermore, the system will be fairly robust, as we can invest time in establishing a knowledge base corresponding to most ways in which a user can express a domain concept.

Acknowledgments
This work results from a number of projects on development of natural language interfaces supported by The Swedish Transport & Communications Research Board (KFB) and the joint Research Program for Language Technology (HSFR/NUTEK). We are indebted to Hanna Benjaminsson and Mague Hansen for work on generating the lexicon and developing the parser.

References
S. Bennacef, H. Bonneau-Maynard, J. L. Gauvin, L. Lamel, and W. Minker. 1994. A spoken language system for information retrieval. In Proceedings of ICSLP'94.
Donna K. Byron and Peter A. Heeman. 1997. Discourse marker use in task-oriented spoken dialog. In Proceedings of Eurospeech'97, Rhodes, Greece, pages 2223-2226.
Jaime G. Carbonell and Philip J. Hayes. 1987. Robust parsing using multiple construction-specific strategies. In Leonard Bolc, editor, Natural Language Parsing Systems, pages 1-32. Springer-Verlag.
Peter Ingels. 1996. Connected text recognition using layered HMMs and token passing. In K. Oflazer and H. Somers, editors, Proceedings of the Second Conference on New Methods in Language Processing, pages 121-132, Sept.
Arne Jönsson. 1993. A method for development of dialogue managers for natural language interfaces. In Proceedings of the Eleventh National Conference of Artificial Intelligence, Washington DC, pages 190-195.
Arne Jönsson. 1997. A model for habitable and efficient dialogue management for natural language interaction. Natural Language Engineering, 3(2/3):103-122.
Kåre Sjölander and Joakim Gustafson. 1997. An integrated system for teaching spoken dialogue systems technology. In Proceedings of Eurospeech'97, Rhodes, Greece, pages 1927-1930.
Lena Strömbäck. 1994. Achieving flexibility in unification formalisms. In Proceedings of 15th Int. Conf. on Computational Linguistics (Coling'94), volume II, pages 842-846, August. Kyoto, Japan.
Lena Strömbäck. 1997. EFLUF - an implementation of a flexible unification formalism. In Proc. of ENVGRAM - Computational Environments for Practical Grammar Development, Processing and Integration with other NLP modules, July. Madrid, Spain.
1998
96
Improving Automatic Indexing through Concept Combination and Term Enrichment

Christian Jacquemin*
LIMSI-CNRS, BP 133, F-91403 ORSAY Cedex, FRANCE
[email protected]
* We thank INIST-CNRS for providing us with thesauri and corpora in the agricultural domain and AFIRST for supporting this research through the SKETCHI project.

Abstract
Although indexes may overlap, the output of an automatic indexer is generally presented as a flat and unstructured list of terms. Our purpose is to exploit term overlap and embedding so as to yield a substantial qualitative and quantitative improvement in automatic indexing through concept combination. The increase in the volume of indexing is 10.5% for free indexing and 52.3% for controlled indexing. The resulting structure of the indexed corpus is a partial conceptual analysis.

1 Overview
The method proposed here for improving automatic indexing builds partial syntactic structures by combining overlapping indexes. It is complemented by a method for term acquisition which is described in (Jacquemin, 1996). The text, thus structured, is reindexed; new indexes are produced and new candidates are discovered.
Most NLP approaches to automatic indexing concern free indexing and rely on large-scale shallow parsers with a particular concern for dependency relations (Strzalkowski, 1996). For the purpose of controlled indexing, we exploit the output of a NLP-based indexer and the structural relations between terms and variants in order to (1) enhance the coverage of the indexes, (2) incrementally build an a posteriori conceptual analysis of the document, and (3) interweave controlled indexing, free indexing, and thesaurus acquisition. These 3 goals are achieved by CONPARS (CONceptual PARSer), presented in this paper and illustrated by Figure 1. CONPARS is based on the output of a part-of-speech tagger for French described in (Tzoukermann and Radev, 1997) and FASTR, a controlled indexer (Jacquemin et al., 1997). All the experiments reported in this paper are performed on data in the agricultural domain: [AGRIC], a 1.18-million word corpus, [AGROVOC], a 10,570-term controlled vocabulary, and [AGR-CAND], a 15,875-term list acquired by ACABIT (Daille, 1997) from [AGRIC].

Figure 1: Overall Architecture of CONPARS

2 Basic Controlled Indexing
The preprocessing of the corpus by the tagger yields a morphologically analyzed text with unambiguous syntactic categories. Then, the tagged corpus is automatically indexed by FASTR, which retrieves occurrences of multi-word terms or variants (see Table 1).
Table 1: Indexing of a Sample Sentence

La variation mensuelle de la respiration du sol et ses rapports avec l'humidité et la température du sol ont été analysées dans le sol superficiel d'une forêt tropicale. (The monthly variation of the respiration of the soil and its connections with the moisture and the temperature of the soil have been analyzed in the surface soil of a tropical forest.)

i1  007019  Respiration du sol   Occurrence     respiration du sol (respiration of the soil)
i2  002904  Sol de forêt         Embedding2     sol superficiel d'une forêt (surface soil of a forest)
i3  012670  Humidité du sol      Coordination1  humidité et la température du sol (moisture and the temperature of the soil)
i4  007034  Température du sol   Occurrence     température du sol (temperature of the soil)
i5  007035  Analyse de sol       VerbTransf1    analysées dans le sol (analyzed in the soil)
i6  007809  Forêt tropicale      Occurrence     forêt tropicale (tropical forest)

Each variant is obtained by generating term variations through local transformations composed of an input lexico-syntactic structure and a corresponding output transformed structure. Thus, VerbTransf1 is a verbalization which transforms a Noun-Preposition-Noun term into a verb phrase represented by the variation pattern V4 (Adv? (Prep? Art | Prep) A?) N3:¹

VerbTransf1( N1 Prep2 N3 )                               (1)
  = V4 (Adv? (Prep? Art | Prep) A?) N3
  {MorphFamily(N1) = MorphFamily(V4)}

¹ The following abbreviations are used for the categories: V = verb, N = noun, Art = article, Adv = adverb, Conj = conjunction, Prep = preposition, Punc = punctuation.

The constraint following the output structure states that V4 belongs to the same morphological family as N1, the head noun of the term. VerbTransf1 recognizes analysées[V] dans[Prep] le[Art] sol[N] (analyzed in the soil) as a variant of analyse[N] de[Prep] sol[N] (soil analysis).
Six families of term variations are accounted for by our implementation for French: coordination, compounding/decompounding, term embedding, verbalization (of nouns or adjectives), nominalization (of nouns, adjectives, or verbs), and adjectivization (of nouns, adjectives, or verbs). Each index in Table 1 corresponds to a unique term; it is referenced by its identifier, its string, and a unique variation of one of the aforementioned types (or a plain occurrence).

3 Conceptual Phrase Building
The indexes extracted at the preceding step are text chunks which generally build up a correct syntactic structure: verb phrases for verbalizations and, otherwise, noun phrases. When overlapping, these indexes can be combined and replaced by their head words so as to condense and structure the documents. This process is the reverse operation of the noun phrase decomposition described in (Habert et al., 1996).
The purpose of automatic indexing entails the following characteristics of indexes:
• frequently, indexes overlap or are embedded one in another (with [AGR-CAND], 35% of the indexes overlap with another one and 37% of the indexes are embedded in another one; with [AGROVOC], the rates are respectively 13% and 5%),
• generally, indexes cover only a small fraction of the parsed sentence (with [AGR-CAND], the indexes cover, on average, 15% of the surface; with [AGROVOC], the average coverage is 3%),
• generally, indexes do not correspond to maximal structures and only include part of the arguments of their head word.
Because of these characteristics, the construction of a syntactic structure from indexes is like solving a puzzle with only part of the clues, and with a certain overlap between these clues.

Text Structuring
The construction of the structure consists of the following 3 steps:
Step 1. The syntactic head of terms is determined by a simple noun phrase grammar of the language under study. For French, the following regular expression covers 98% of the term structures in the database [AGROVOC] (Mod is any adjectival modifier and the syntactic head is the noun in bold face):

Mod* N N? (Mod | (Prep Art? Mod* N N? Mod*))*
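Under this pattern the head is the noun of the leading Mod* N N? block, i.e. the first noun of the term. A sketch in Python (the (word, category) representation is our own assumption, using the category labels from the footnote above):

    def syntactic_head(tagged_term):
        """Return the head word of a term matching Mod* N N? (...)*."""
        for word, cat in tagged_term:
            if cat == "N":
                return word          # first noun = syntactic head
        return None

    # syntactic_head([("respiration", "N"), ("de", "Prep"), ("sol", "N")])
    # -> "respiration"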
The second source of knowledge about syntactic heads is embodied in transformations. For instance, the syntactic head of the verbalization in (1) is the verb in bold typeface.
Step 2. A partial relation between the indexes of a sentence is now defined in order to rank in priority the indexes that should be grouped first into structures (the most deeply embedded ones). This definition relies on the relative spatial positions of two indexes i and j and their syntactic heads H(i) and H(j):

Definition 3.1 (Index priority) Let i and j be two indexes in the same sentence. The relative priority ranking of i and j is:

i ~ j ⇔ (i = j)
      ∨ (H(i) = H(j) ∧ i ⊆ j)
      ∨ (H(i) ≠ H(j) ∧ H(i) ∈ j ∧ H(j) ∉ i)

This relation is obviously reflexive. It is neither transitive nor antisymmetric. It can, however, be shown that this relation is not cyclic for 3 elements: i ~ j ∧ j ~ k ⇒ ¬(k ~ i). (This property is not demonstrated here, due to lack of space.)
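Definition 3.1 can be read off directly as a boolean test. In the sketch below (a hypothetical representation), an index is a pair of a head position and a set of word positions:

    def priority(i, j):
        """Does index i have priority over index j (Definition 3.1)?
        i, j are (head_position, set_of_word_positions) pairs."""
        hi, si = i
        hj, sj = j
        if hi == hj:
            return si <= sj        # covers both i = j and head embedding
        return hi in sj and hj not in si

The indexes of a sentence can then be ordered so that higher-priority ones, the most deeply embedded, are grouped into structures first, as Step 3 below requires.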
The linguistic motivations of Definition 3.1 are linked to the composite structure built at Step 3 according to the relative priorities stated by ~. We now examine, in turn, the 4 cases of term overlap:

1. Head embedding: 2 indexes i and j, with a common head word and such that i is embedded into j, build a 2-level structure. This structuring is illustrated by nappe d'eau (sheet of water), which combines with nappe d'eau souterraine (underground sheet of water) and produces the 2-level structure [[nappe d'eau] souterraine] ([underground [sheet of water]]). In this case, i has a higher priority than j; it corresponds to (H(i) = H(j) ∧ i ⊆ j) in Definition 3.1.

2. Argument embedding: 2 indexes i and j, with different head words and such that the head word of i belongs to j and the head word of j does not belong to i, combine so that i becomes a substructure of j. This structuring is illustrated by nappe d'eau, which combines with eau souterraine (underground water) and produces the structure [nappe d'[eau souterraine]] ([sheet of [underground water]]). Here, i has a higher priority than j; it corresponds to (H(i) ≠ H(j) ∧ H(i) ∈ j ∧ H(j) ∉ i) in Definition 3.1.

3. Head overlap: 2 indexes i and j, with a common head word and such that i and j partially overlap, are also combined at Step 3 by making j a substructure of i. This combination is, however, non-deterministic since no priority ordering is defined between these 2 indexes. Therefore, it does not correspond to a condition in Definition 3.1. In our experiments, this structure corresponds to only one situation: a head word with pre- and post-modifiers such as importante activité (intense activity) and activité de dégradation métabolique (activity of metabolic degradation). With [AGR-CAND], this configuration is encountered only 27 times (.1% of the index overlaps) because premodifiers rarely build correct term occurrences in French. Premodifiers generally correspond to occasional characteristics such as size, height, rank, etc.

4. The remaining case of overlapping indexes with different head words and reciprocal inclusions of head words is never encountered. Its presence would undeniably denote a flaw in the calculus of head words.

Step 3. A bottom-up structure of the sentences is incrementally built by replacing indexes by trees. The indexes which are ranked highest by Step 2 are processed first, according to the following bottom-up algorithm:
1. build a depth-1 tree whose daughter nodes are all the words in the current sentence and whose head node is S,
2. for all the indexes i in the current sentence, selected by decreasing order of priority,
(a) mark all the depth-1 nodes which are a lexical leaf of i or which are the head node of a tree with at least one leaf in i,
(b) replace all the marked nodes by a unique tree whose head features are the features of H(i), and whose depth-1 leaves are all the marked nodes.

When considering the sentence given in Table 1, the ordering of the indexes after Step 2 is the following: i2 > i5, i6 > i2, and i4 > i3. (They all result from the argument embedding relation.) The algorithm yields the following structure of the sample sentence (subtrees bracketed under their head words):

...la respiration et ses rapports avec l'humidité ont été analysées...
    [respiration du sol]
    [humidité et la température [température du sol]]
    [analysées dans le sol [sol superficiel d'une forêt [forêt tropicale]]]

Text Condensation
The text structure resulting from this algorithm condenses the text and brings closer words that would otherwise remain separated by a large number of arguments or modifiers. Because of this condensation, a reindexing of the structured text yields new indexes which are not extracted at the first step.
Let us illustrate the gains from reindexing on a sample utterance: l'évolution au cours du temps du sol et des rendements (temporal evolution of soils and productivity). At the first step of indexing, évolution au cours du temps (lit. evolution over time) is recognized as a variant of évolution dans le temps (lit. evolution with time). At the second step of indexing, the daughter nodes of the top-most tree build the condensed text: l'évolution du sol et des rendements (evolution of soils and productivity):

1st step: l'évolution [au cours du temps] du sol et des rendements
2nd step: l'évolution du sol et des rendements

This condensed text allows for another index extraction: évolution du sol et des rendements, a Coordination variant of évolution du rendement (evolution of productivity). This index was not visible at the first step because of the additional modifier au cours du temps (temporal). (Reiterated indexing is preferable to too unconstrained transformations, which burden the system with spurious indexes.)
Both processes, text structuring (presented here) and term acquisition (described in (Jacquemin, 1996)), reinforce each other. On the one hand, acquisition of new terms increases the volume of indexes and thereby improves text structuring by decreasing the non-conceptual surface of the text. On the other hand, text condensation triggers the extraction of new indexes, and thereby furnishes new possibilities for the acquisition of terms.

4 Evaluation
Quantitative evaluation: The volume of indexing is characterized by the surface of the text occupied by terms or their combinations, which we call the conceptual surface. Figure 2 shows the distribution of the sentences in relation to their conceptual surface. For instance, in 8,449 sentences among the 62,460 sentences of [AGRIC], the indexes occupy from 20 to 30% of the surface (3rd column). This figure indicates that the structures built from free indexing are significantly richer than those obtained from controlled indexing. The number of sentences is a decreasing exponential function of their conceptual surface (a linear function with a log scale on the y axis).
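The conceptual surface of a sentence can be computed directly from the word positions covered by its indexes. A small sketch (the data shapes are our own assumption):

    def conceptual_surface(n_words, index_spans):
        """Fraction of the sentence's word positions covered by indexes."""
        covered = set()
        for span in index_spans:             # each span = set of word positions
            covered |= span
        return len(covered) / n_words if n_words else 0.0

    # conceptual_surface(10, [{2, 3}, {3, 4, 5}]) -> 0.4, i.e. 40%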
Figure 3 illustrates how the successive steps of the algorithm contribute to the final size of the incremental indexing. For each mode of indexing, two curves are plotted: the phrases resulting from initial indexing and from reindexing due to text condensation (circles), and the phrases due to term acquisition (asterisks). For instance, at step 3, free indexing yields 309 indexes and reindexing 645. The corresponding percentages are reported in Table 2. The indexing with the poorest initial volume (controlled indexing) is the one that benefits most from term acquisition. Thus, concept combination and term enrichment tend to compensate the deficiencies of the initial term list by extracting more knowledge from the corpus.

Figure 2: Conceptual Surface of Sentences

Figure 3: Step-by-step Number of Phrases

Table 2: Increase in the volume of indexing

            Acquisition  Condensation  Total
Controlled  49.3%        3.0%          52.3%
Free        5.8%         4.7%          10.5%

Qualitative evaluation: Table 3 indicates the number of overlapping indexes in relation to their type. It provides, for each type, the rate of success of the structuring algorithm. This evaluation results from a human scanning of 542 randomly chosen structures.

Table 3: Incremental Structure Building

              Head embedding  Argument embedding  Total
Distribution  27.0%           73.0%               100%
# correct     128             346                 474
Precision     79.0%           91.1%               87.5%

5 Conclusion
This study has presented CONPARS, a tool for enhancing the output of an automatic indexer through index combination and term enrichment. Ongoing work intends to improve the interaction of indexing and acquisition through self-indexing of automatically acquired terms.

References
Béatrice Daille. 1997. Study and implementation of combined techniques for automatic extraction of terminology. In J. L. Klavans and P. Resnik, editors, The Balancing Act: Combining Symbolic and Statistical Approaches to Language, pages 49-66. MIT Press, Cambridge.
Benoît Habert, Elie Naulleau, and Adeline Nazarenko. 1996. Symbolic word clustering for medium size corpora. In Proceedings of COLING'96, pages 490-495, Copenhagen.
Christian Jacquemin, Judith L. Klavans, and Evelyne Tzoukermann. 1997. Expansion of multi-word terms for indexing and retrieval using morphology and syntax. In Proceedings of ACL-EACL'97, pages 24-31.
Christian Jacquemin. 1996. A symbolic and surgical acquisition of terms through variation. In S. Wermter, E. Riloff, and G. Scheler, editors, Connectionist, Statistical and Symbolic Approaches to Learning for NLP, pages 425-438. Springer, Heidelberg.
Tomek Strzalkowski. 1996. Natural language information retrieval. Information Processing & Management, 31(3):397-417.
Evelyne Tzoukermann and Dragomir R. Radev. 1997. Use of weighted finite state transducers in part of speech tagging. In A. Kornai, editor, Extended Finite State Models of Language. Cambridge University Press.
1998
97
Combining a Chinese Thesaurus with a Chinese Dictionary

Ji Donghong
Kent Ridge Digital Labs, 21 Heng Mui Keng Terrace, Singapore 119613
[email protected]
Gong Junping
Department of Computer Science, Ohio State University, Columbus, OH
[email protected]
Huang Changning
Department of Computer Science, Tsinghua University, Beijing 100084, P. R. China
[email protected]

Abstract
In this paper, we study the problem of combining a Chinese thesaurus with a Chinese dictionary by linking the word entries in the thesaurus with the word senses in the dictionary, and propose a similar-word strategy to solve the problem. The method is based on the definitions given in the dictionary, but without any syntactic parsing or sense disambiguation on them at all. As a result, their combination makes the thesaurus specify the similarity between senses which accounts for the similarity between words, produces a kind of semantic classification of the senses defined in the dictionary, and provides reliable information about the lexical items on which the resources don't conform with each other.

1. Introduction
Both 《TongYiCi CiLin》 (Mei et al., 1983) and 《XianDai HanYu CiDian》 (1978) are important Chinese resources, and have been widely used in various Chinese processing systems (e.g., Zhang et al., 1995). As a thesaurus, 《TongYiCi CiLin》 defines semantic categories for words; however, it doesn't specify which sense of a polysemous word is involved in a semantic category. On the other hand, 《XianDai HanYu CiDian》 is an ordinary dictionary which provides definitions of senses while not giving any information about their semantic classification. A manual effort has been made to build a resource for English, i.e., WordNet, which contains both definition and classification information (Miller et al., 1990), but such resources are not available for many other languages, e.g. Chinese. This paper presents an automatic method to combine the Chinese thesaurus with the Chinese dictionary into such a resource, by tagging the entries in the thesaurus with appropriate senses in the dictionary, meanwhile assigning appropriate semantic codes, which stand for semantic categories in the thesaurus, to the senses in the dictionary.
D. Yarowsky has considered a similar problem: to link Roget's categories, an English thesaurus, with the senses in COBUILD, an English dictionary (Yarowsky, 1992). He treats the problem as a sense disambiguation one, with the definitions in the dictionary taken as a kind of context in which the headwords occur, and deals with it based on a statistical model of Roget's categories trained on a large corpus. In our opinion, the method, for a specific word, neglects the difference between its definitions and ordinary contexts: definitions generally contain its synonyms, hyponyms or hypernyms, etc., while ordinary contexts generally contain its collocations. So a model trained on ordinary contexts may not be appropriate for the disambiguation problem in definition contexts.
A seemingly reasonable method for the problem would be a common word strategy, which has been extensively studied by many researchers (e.g., Knight, 1993; Lesk, 1986). The solution would be, for a category, to select those senses whose definitions hold the most common words among all those for its member words. But the words in a category in the Chinese thesaurus may not be similar in a strict way, although similar to some extent, so their definitions may only contain some similar words at most, rather than share many words.
As a result, the common word strategy may not be appropriate for the problem we study here.
In this paper, we extend the idea of the common word strategy further to a similar-word method, based on the intuition that definitions for similar senses generally contain similar words, if not the same ones. Now that the words in a category in the thesaurus are similar to some extent, some of their definitions should contain similar words. We see these words as marks of the category; then the correct sense of a word involved in the category can be identified by checking whether its definition contains such marks. So the key of the method is to determine the marks for a category. Since the marks may be different word tokens, it may be difficult to make them out only based on their frequencies. But since they are similar words, they will belong to the same category in the thesaurus, or hold the same semantic code, so we can locate them by checking their semantic codes.
In implementation, for any category, we first compute a salience value for each code with respect to it, which in fact provides the information about the marks of the category; then we compute distances between the category and the senses of its member words, which reflect whether their definitions contain the marks and how many; finally, we select those senses as tags by checking whether their distances from the category fall within a threshold.
The remainder of this paper is organized as follows: in section 2, we give a formal setting of the problem and present the tagging procedure; in section 3, we explore the issue of threshold estimation for the distances between senses and categories, based on an analysis of the distances between the senses and categories of univocal words; in section 4, we report our experiment results and their evaluation; in section 5, we present some discussions of our methodology; finally, in section 6, we give some conclusions.

2. Problem Setting
The Chinese dictionary provides sense distinctions for 44,389 Chinese words; on the other hand, the Chinese thesaurus divides 64,500 word entries into 12 major, 94 medium and 1428 minor categories, which is in fact a kind of semantic classification of the words.¹ Intuitively, there should be a kind of correspondence between the senses and the entries. The main task of combining the two resources is to locate such correspondences.
Suppose X is a category² in the thesaurus. For any word w ∈ X, let S_w be the set of its senses in the dictionary, and S_X = ∪_{w ∈ X} S_w. For any s ∈ S_X, let DW_s be the set of the definition words in its definition, DW_w = ∪_{s ∈ S_w} DW_s, and DW_X = ∪_{w ∈ X} DW_w. For any word w, let CODE(w) be the set of its semantic codes that are given in the thesaurus³; CODE_s = ∪_{w ∈ DW_s} CODE(w), CODE_w = ∪_{s ∈ S_w} CODE_s, and CODE_X = ∪_{s ∈ S_X} CODE_s.

¹ The electronic versions of the two resources we use now only contain part of the words in them; see section 4.
² We generally use "category" to refer to minor categories in the following text, if no confusion is involved. Furthermore, we also use a semantic code to refer to a category.
³ A category is given a semantic code; a word may belong to several categories, and hold several codes.

For any c ∈ CODE_X, we define its definition salience with respect to X in 1):

1) Sal1(c, X) = |{w | w ∈ X, c ∈ CODE_w}| / |X|
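Formula 1) amounts to a frequency count over the member words of the category. A minimal Python sketch (the mapping code_w, giving each word its set CODE_w, is a hypothetical container):

    def definition_salience(c, X, code_w):
        """Sal1(c, X): share of member words w of X with c in CODE_w."""
        return sum(1 for w in X if c in code_w[w]) / len(X)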
3) lists some semantic codes and their definition salience with respect to the category. 3) Ea02 (0.92), Ea03 (0.76), Dn01 (0.45), Eb04 (0.24), Dn04 (0.14). To define a distance between a category X and a sense s, we first define a distance between any two categories according to the distribution of their member words in a corpus, which consists of 80 million Chinese characters. For any category X, suppose its members are w~, w2 ..... w,, for any w, we first compute its mutual information with each semantic code according to their co-occurrence in a corpus s, then select 10 top semantic codes as its environmental codes', which hold the biggest mutual information with wi. Let NC~ be the set of w/s environmental codes, Cr be the set of all the semantic codes given in the thesaurus, for any ce Cr, we define its context salience with respect to X in 4). 4) Sal,(c, X)'-- /1 ' "/gaoda/" is the Pinyin of the word, and "high and big '' is its English translation. 5 We see each occurrence of a word in the corpus as one occurrence of its codes. Each co-occurrence of a word and a code falls within a 5-word distance. 6 The intuition behind the parameter selection (10) is that the words which can combined with a specific word to form collocations fall in at most 10 categories in the thesaurus. We build a context vector for X in 5), where k=lCTI. 5) CVx=<Salz(ct, X), Salz(cz, X) ..... Sal2(c,, X)> Given two categories X and Y, suppose CVx and cvr are their context vectors respectively, we define their distance dis(X, Y) as 6) based on the cosine of the two vectors. 6) dis(X, Y)=l-cos(cvx, cvr) Let c~ CODEx, we define a distance between c and a sense s in 7). 7) dis(c, s)= Min dis(c, c') c'~ CODE~ Now we define a distance between a category X and a sense s in 8). 8) dis(X, s)= ~ (h c • dis(c, s)) c~CODE x Sal] (c, X) where he= Sal z ( c' , X) c'~CODE x Intuitively, if CODEs contains the salient codes with respect to X, i.e., those with higher salience with respect to X, dis(X, s) will be smaller due to the fact that the contribution of a semantic code to the distance increases with its salience, so s tends to be a correct sense tag of some word. For any category X, let w~X and seSw, if dis(X, s)<T, where T is some threshold, we will tag w by s, and assign the semantic code X to s. 3. Parameter Estimation Now we consider the problem of estimating an appropriate threshold for dis(X, s) to distinguish between the senses of the words in X. To do so, we first extract the words which hold only one code in the thesaurus, and have only one sense in the dictionary T, then check the distances between these senses and categories. The number of such words is 22,028. , This means that the words are regarded as univocal ones by both resources. 602 Tab.1 lists the distribution of the words with respect to the distance in 5 intervals. Intervals [o.o, 0.2) Word num. 8,274 Percent(%) 37.56 [0.2, 0.4) 10,655 48.37 [0.4, 0.6) 339 1.54 [0.6, 0.8) 1172 5.32 [0.8, 1.0] 1588 7.21 all 22,028 100 Tab. I. The distribution of univocal words with respect to dis(X, s) From Tab.l, we can see that for most univocal words, the distance between their senses and categories lies in [0, 0.4]. Let Wv be the set of the univocal words we consider here, for any univocal word we Wv, let sw be its unique sense, and Xw be its univocal category, we call DEN<a. a> point density in interval [tj, t2] as 9), where O<tj<t2<l. 9) DEN<a. 
a>= [{wlw ~ W v ,t, < dis( Xw,s,, ) < t 2 }1 t 2 - t, We define 10) as an object function, and take t" which maximizes DEN, as the threshold. 1 O) DENt = DEN<o. t,- DEN<t. I> The object function is built on the following inference. About the explanation of the words which are regarded as univocal by both Chinese resources, the two resources tend to be in accordance with each other. It means that for most univocal words, their senses should be the correct tags of their entries, or the distance between their categories and senses should be smaller, falling within the under-specified threshold. So it is reasonable to suppose that the intervals within the threshold hold a higher point density, furthermore that the difference between the point density in [0, t*], and that in It', 1 ] gets the biggest value. With t falling in its value set {dis(X, s)}, we get t ° as 0.384, when for 18,653 (84.68%) univocal words, their unique entries are tagged with their unique senses, and for the other univocal words, their entries not tagged with their senses. 4. Results and Evaluation There are altogether 29,679 words shared by the two resources, which hold 35,193 entries in the thesaurus and 36,426 senses in the dictionary. We now consider the 13,165 entries and 14,398 senses which are irrelevant with the 22,028 univocal words. Tab. 2 and 3 list the distribution of the entries with respect to the number of their sense tags, and the distribution of the senses with respect to the number of their code tags respectively. Tag num. 0 Entr 7 1625 Percent (%) 12.34 1 9908 75.26 2 1349 10.25 23 283 2.15 Tab. 2. The distribution of entries with respect to their sense tags Ta~nUlTL 0 Sense 1461 I 10433 72.46 2 2334 16.21 >3 170 Percent (%) 10.15 1.18 Tab. 3. The distribution of senses with respect to their code tags In order to evaluate the efficiency of our method, we define two measures, accuracy rate and loss rate, for a group of entries E as 11) and 12) respectively 8. a We only give the evaluation on the results for entries, the evaluation on the results for senses can be done similarly. 603 IRr n cr l IRr l scr - (Rr • II where RTe is a set of the sense tags for the entries in E produced by the tagging procedure, and CT~ is a set of the sense tags for the entries in E, which are regarded as correct ones somehow. What we expect for the tagging procedure is to select the appropriate sense tags for the entries in the thesaurus, if they really exist in the dictionary. To evaluate the procedure directly proves to be difficult. We turn to deal with it in an indirect way, in particular, we explore the efficiency of the procedure of tagging the entries, when their appropriate sense tags don't exist in the dictionary. This indirect evaluation, on the one hand, can be .carried out automatically in a large scale, on the other hand, can suggest what the direct evaluation entails in some way because that none appropriate tags can be seen as a special tag for the entries, say None 9. In the first experiment, let's consider the 18,653 uniyocal words again which are selected in parameter estimation stage. For each of them, we create a new entry in the thesaurus which is different from its original one. Based on the analysis in section 3, the senses for theses words should only be the correct tags for their corresponding entries, the newly created ones have to take None as their correct tags. 
When creating new entries, we adopt the following 3 different kinds of constraints: i) the new entry belongs to the same medium category with the original one; ii) the new entry belongs to the same major category with the original one; iii) no constraints; With each constraint, we select 5 groups of new 8 A default sense tag for the entries. 604 entries respectively, and carry out the experiment for each group. Tab. 4 lists average accuracy rates and loss rates under different constraints. Constraint Aver. accuracy(%) i) 88.39 ii) 94.75 iii). 95.26 Aver. loss (%) 11.61 5.25 4.74 Tab. 4. Average accuracy, loss rates under different constraints From Tab. 4, we can see that the accuracy rate under constraint i) is a bit less than that under constraint ii) or iii), the reason is that with the created new entries belonging to the same medium category with the original ones, it may be a bit more likely for them to be tagged with the original senses. On the other hand, notice that the accuracy rates and loss rates in Tab.4 are complementary with each other, the reason is that IRTei equals ICTel in such cases. In another experiment, we select 5 groups of 0-tag, 1-tag and 2-tag entries respectively, and each group consists of 20-30 entries. We check their accuracy rates and loss rates manually. Tab. 5 lists the results. Ta~ num. 0 2 Aver. accuracy(%) Aver. loss(%) 94.6 7.3 90.1 5.2 87.6 2.1 Tab. 5. Average accuracy and loss rates under different number of tags Notice that the accuracy rates and loss rates in Tab.5 are not complementary, the reason is that IRT~ doesn't equal ICTel in such cases. In order to explore the main factors affecting accuracy and loss rates, we extract the entries which are not correctly tagged with the senses, and check relevant definitions and semantic codes. The main reasons are: i) No salient codes exist with respect to a category, or the determined are not the expected. This may be attributed to the fact that the words in a category may be not strict synonyms, or that a category may contain too less words, etc. ii) The information provided for a word by the resources may be incomplete. For example, word "~(/quanshu/, all) holds one semantic code Ka06 in the thesaurus, its definition in the dictionary is: ~: /quanshu/ ~[Eb02] /quanbu/ all The correct tag for the entry should be the sense listed above, but in fact, it is tagged with None in the experiment. The reason is that word ~:~ (/quanbu/, all) can be an adverb or an adjective, and should hold two semantic codes, Ka06 and Eb02, corresponding with its adverb and adjective usage respectively, but the thesaurus neglects its adverb usage. If Ka06 is added as a semantic code of word ~_~ (/quanbu/, all), the entry will be successfully tagged with the expected sense. iii) The distance defined between a sense and a category fails to capture the information carded by the order of salient codes, more generally, the information carded by syntactic structures involved. As an example, consider word ~-~ (/yaochuan/), which has two definitions listed in the following. i~ 1) i~[Dal9] ~[Ie01l. /yaochuan/ /yaoyan/ /chuanbo/ hearsay spread the hearsay spreads. 2) ~[Ie01] I~ ~.~-~ [Dal9] /chuanbo/ Idel /yaoyan/ spread of hearsay the hearsay which spreads The two definitions contain the same content words, the difference between them lies in the order of the content words, more generally, lies in the syntactic structures involved in the definitions: the former presents a sub-obj structure, while the latter with a "l~(/de/,of)" structure. 
To distinguish such definitions needs to give more consideration on word order or syntactic structures. 5. Discussions In the tagging procedure, we don't try to carry out any sense disambiguation on definitions due to its known difficulty. Undoubtedly, when the noisy semantic codes taken by some definition words exactly cover the salient ones of a category, they will affect the tagging accuracy. But the probability for such cases may be lower, especially when more than one salient code exists with respect to a category. The distance between two categories is defined according to the distribution of their member words in a corpus. A natural alternative is based on the shortest path from one category to another in the thesaurus (e.g., Lee at al., 1993; Rada et al., 1989), but it is known that the method suffers from the problem of neglecting the wide variability in what a link in the thesaurus entails. Another choice may be information content method (Resnik, 1995), although it can avoid the difficulty faced by shortest path methods, it will make the minor categories within a medium one get a same distance between each other, because the distance is defined in terms of the information content carded by the medium category. What we concern here is to evaluate the dissimilarity between different categories, including those within one medium category, so we make use of semantic code based vectors to define their dissimilarity, which is motivated by Shuetze's word frequency based vectors (Shuetze, 1993). In order to determine appropriate sense tags 605 for a word entry in one category, we estimate a threshold for the distance between a sense and a category. Another natural choice may be to select the sense holding the smallest distance from the category as the correct tag for the entry. But this choice, although avoiding estimation issues, will fail to directly demonstrate the inconsistency between the two resources, and the similarity between two senses with respect to a category. 6. Conclusions In this paper, we propose an automatic method to combine a Chinese thesaurus with a Chinese dictionary. Their combination establishes the correspondence between the entries in the thesaurus and the senses in the dictionary, and provides reliable information about the lexical items on which the two resources are not in accordance with each other. The method uses no language-specific knowledge, and can be applied to other languages. The combination of the two resources can be seen as improvement on both of them. On the one hand, it makes the thesaurus specify the similarity between word senses behind that between words, on the other hand, it produces a semantic classification for the word senses in the dictionary. The method is in fact appropriate for a more general problem: given a set of similar words, how to identify the senses, among all, which account for their similarity. In the problem we consider here, the words fall within a category in the Chinese thesaurus, with similarity to some extent between each other. The work suggests that if the set contains more words, and they are more similar with each other, the result will be more sound. References Knight K. (1993) Building a Large Ontology for Machine Translation. In "Proceedings of DARPA Human Language Conference", Princeton, USA, 185-190. Lesk M. (1986) Automated Word Sense Disambiguation using Machine Readable Dictionaries: How to Tell a Pine Cone from an Ice Cream Cone. In "Proceedings of the ACM SIGDOC Conference", Toronto Ontario. Lee J. 
H., Kim M. H., and Lee Y. J. (1993). Information retrieval based on concept distance in IS-A hierarchies. Journal of Documentation, 49/2. Mei J.J. et al. (1983) TongYiCi CiLin(A Chinese Thesaurus). Shanghai Cishu press, Shanghai. Miller G.A., Backwith R., Felibaum C., Gross D. and Miller K. J. (1990) Introduction to WordNet: An On-line Lexical Database. International Journal of Lexicography, 3(4) (Special Issue). Rada R. and Bicknell E (1989) Ranking documents with a thesaurus. JASIS, 40(5), pp. 304-3 I0. Resnik P. (1995) Using Information Content to Evaluate the similarity in a Taxonomy. In "Proceedings of the 14th International Joint Conference on Artificial Intelligence". Schutze H. (1993) Part-of-speech induction from scratch. In "Proceedings of the 31 st Annual Meeting of the Association for Computational Linguistics", Columbus, OH. XianDai HanYu CiDian(A modern Chinese Dictionary) (1978), Shangwu press, Beijing. Yarowsky D. (1992) Word Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora. In "Proceedings of COLING'92", Nantas, France, pp. 454-460. Zhang J, Huang C. N. Yang E. H. (1994) Construction a Machine Tractable Dictionary from a Machine Readable Dictionary. Communications of Chinese and Oriental Language Information Processing Society, 4(2), pp. 123-130. 606
1998
98
Combining Multiple, Large-Scale Resources in a Reusable Lexicon for Natural Language Generation

Hongyan Jing and Kathleen McKeown
Department of Computer Science, Columbia University, New York, NY 10027, USA
{hjing, kathy}@cs.columbia.edu

Abstract
A lexicon is an essential component in a generation system, but few efforts have been made to build a rich, large-scale lexicon and make it reusable for different generation applications. In this paper, we describe our work to build such a lexicon by combining multiple, heterogeneous linguistic resources which have been developed for other purposes. Novel transformation and integration of resources is required to reuse them for generation. We also applied the lexicon to the lexical choice and realization component of a practical generation application by using a multi-level feedback architecture. The integration of the lexicon and the architecture is able to effectively improve the system's paraphrasing power, minimize the chance of grammatical errors, and simplify the development process substantially.

1 Introduction
Every generation system needs a lexicon, and in almost every case, it is acquired anew. Few efforts in building a rich, large-scale, and reusable generation lexicon have been presented in the literature. Most generation systems are still supported by a small system lexicon, with limited entries and hand-coded knowledge. Although such lexicons are reported to be sufficient for the specific domain in which a generation system works, there are some obvious deficiencies: (1) Hand-coding is time and labor intensive, and introduction of errors is likely. (2) Even though some knowledge, such as syntactic structures for a verb, is domain-independent, often it is re-encoded each time a new application is under development. (3) Hand-coding seriously restricts the scale and expressive power of generation systems. As natural language generation is used in more ambitious applications, this situation calls for an improvement.
Generally, existing linguistic resources are not suitable for direct use in generation. First, most large-scale linguistic resources so far were built for language interpretation applications. They are indexed by words, whereas an ideal generation lexicon should be indexed by the semantic concepts to be conveyed, because the input of a generation system is at the semantic level and the processing during generation is based on semantic concepts, and because the mapping in the generation process is from concepts to words. Second, the knowledge needed for generation exists in a number of different resources, with each resource containing a particular type of information; they can not currently be used simultaneously in a system.
In this paper, we present work on building a rich, large-scale, and reusable lexicon for generation by combining multiple, heterogeneous linguistic resources. The resulting lexicon contains syntactic, semantic, and lexical knowledge, indexed by senses of words as required by generation, including:
• A complete list of syntactic subcategorizations for each sense of a verb, to support surface realization.
• A large variety of transitivity alternations for each sense of a verb, to support paraphrasing.
• Frequency of lexical items and verb subcategorizations, and also selectional constraints derived from a corpus, to support lexical choice.
• Rich lexical relations between lexical concepts, including hyponymy, antonymy, and so on, to support lexical choice.
The construction of the lexicon is semi-automatic, and the lexicon has been used for lexical choice and realization in a practical generation system. In Section 2, we describe the process of building the generation lexicon by combining existing linguistic resources. In Section 3, we show the application of the lexicon by actually using it in a generation system. Finally, we present conclusions and future work.

2 Constructing a generation lexicon by merging linguistic resources

2.1 Linguistic resources

In our selection of resources, we aim primarily for accuracy of the resource, large coverage, and the provision of a particular type of information especially useful for natural language generation. We selected four linguistic resources:

1. The WordNet on-line lexical database (Miller et al., 1990). WordNet is a well-known on-line dictionary, consisting of 121,962 unique words, 99,642 synsets (each synset is a lexical concept represented by a set of synonymous words), and 173,941 senses of words (as of Version 1.6, released in December 1997). It is especially useful for generation because it is based on lexical concepts, rather than words, and because it provides several semantic relationships (hyponymy, antonymy, meronymy, entailment) which are beneficial to lexical choice.

2. English Verb Classes and Alternations (EVCA) (Levin, 1993). EVCA is an extensive linguistic study of diathesis alternations, which are variations in the realization of verb arguments. For example, the alternation "there-insertion" transforms A ship appeared on the horizon to There appeared a ship on the horizon. Knowledge of alternations facilitates the generation of paraphrases. Levin (1993) studies 80 alternations.

3. The COMLEX syntax dictionary (Grishman et al., 1994). COMLEX contains syntactic information for 38,000 English words. The information includes subcategorization and complement restrictions.

4. The Brown Corpus tagged with WordNet senses (Miller et al., 1993). The original Brown Corpus (Kučera and Francis, 1967) has been used as a reference corpus in many computational applications. Part of the Brown Corpus has been tagged with WordNet senses manually by the WordNet group. We use this corpus for frequency measurements and for extracting selectional constraints.

2.2 Combining linguistic resources

In this section, we present an algorithm for merging data from the four resources in a manner that achieves high accuracy and completeness. We focus on verbs, which play the most important role in deciding phrase and sentence structure.

Our algorithm first merges COMLEX and EVCA, producing a list of syntactic subcategorizations and alternations for each verb. Distinctions in these syntactic restrictions according to each sense of a verb are achieved in the second stage, where WordNet is merged with the result of the first step. Finally, the corpus information is added, complementing the static resources with actual usage counts for each syntactic pattern. This allows us to detect rarely used constructs that should be avoided during generation, and possibly to identify alternatives that are not included in the lexical databases.

2.2.1 Merging COMLEX and EVCA

Alternations involve syntactic transformations of verb arguments. They are thus a means to alleviate the usual lack of alternative ways to express the same concept in current generation systems. EVCA has been designed for use by humans, not computers.
We therefore need to convert the information present in Levin's book (Levin, 1993) to a format that can be automatically analyzed. We extracted the relevant information for each verb using the verb classes to which the various verbs are assigned; members of the same class have the same syntactic behavior in terms of allowable alternations. EVCA specifies a mapping between words and word classes, associating each class with alternations and with subcategorization frames. Using the mappings from words to word classes and from word classes to alternations, the alternations for each verb are extracted.

We manually formatted the alternate patterns in each alternation in COMLEX format. The reason for choosing manual formatting rather than automating the process is to guarantee the reliability of the result. In terms of time, the manual formatting process is no more expensive than automation, since the total number of alternations is small (80). When an alternate pattern cannot be represented by the labels in COMLEX, we need to add new labels during the formatting process; this also makes automating the process difficult.

The formatted EVCA consists of sets of applicable alternations and subcategorizations for 3,104 verbs. We show a sample entry for the verb appear in Figure 1. Each verb has 1.9 alternations and 2.4 subcategorizations on average. The maximum number of alternations (13) is realized for the verb "roll".

appear:
((INTRANS)
 (LOCPP)
 (PP)
 (ADJ-PFA-PART)
 (INTRANS THERE-V-SUBJ :ALT There-Insertion)
 (LOCPP THERE-V-SUBJ-LOCPP :ALT There-Insertion)
 (LOCPP LOCPP-V-SUBJ :ALT Locative_Inversion))

Figure 1: Alternations and subcategorizations from EVCA for the verb appear.

The merging of COMLEX and EVCA is achieved by unification, which is possible due to the use of similar representations. Two points are worth mentioning: (a) When a more general form is unified with a specific one, the latter is adopted in the final result. For example, the unification of PP (the verb can take a prepositional phrase) and PP-PRED-RS (the verb can take a prepositional phrase whose subject is the same as the verb's) is PP-PRED-RS. (b) Alternations are validated by the subcategorization information. An alternation is applicable only if both alternate patterns are applicable.

Applying this algorithm to our lexical resources, we obtain rich subcategorization and alternation information for each verb. COMLEX provides most subcategorizations, while EVCA provides certain rare usages of a verb which might be missing from COMLEX. Conversely, the alternations in EVCA are validated by the subcategorizations in COMLEX. The merging operation produces entries for 5,920 verbs, out of 5,583 in COMLEX and 3,104 in EVCA (2,947 words appear in both resources). Each of these verbs is associated with 5.2 subcategorizations and 1.0 alternation on average. Figure 2 is an updated version of Figure 1 after this merging operation.

appear:
((PP-TO-INF-RS :PVAL ("to"))
 (PP-PRED-RS :PVAL ("to" "of" "under" "against" "in favor of" "before" "at"))
 (EXTRAP-TO-NP-S)
 (INTRANS)
 (INTRANS THERE-V-SUBJ :ALT There-Insertion)
 (LOCPP THERE-V-SUBJ-LOCPP :ALT There-Insertion)
 (LOCPP LOCPP-V-SUBJ :ALT Locative_Inversion))

Figure 2: Entry for the verb appear after merging COMLEX with EVCA.
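The following sketch renders the unification step in Python. The startswith test is a simplification of "more specific", and the data layout is assumed, so this is an illustration of the idea rather than the authors' code.

```python
# Illustrative only: merge one verb's COMLEX and EVCA information by
# unification, keeping the more specific of two compatible labels.

def unify_labels(a, b):
    """Return the more specific of two compatible labels, or None.
    Specificity is approximated by label extension, so unifying
    PP with PP-PRED-RS yields PP-PRED-RS, as in the example above."""
    if a == b:
        return a
    if b.startswith(a):
        return b
    if a.startswith(b):
        return a
    return None

def supported(pattern, merged):
    # a pattern counts as supported if it unifies with some merged label
    return any(unify_labels(pattern, m) for m in merged)

def merge_verb(comlex_subcats, evca_subcats, evca_alternations):
    """evca_alternations: {name: (pattern1, pattern2)} for one verb."""
    merged = set(comlex_subcats)
    for e in evca_subcats:
        for c in list(merged):
            u = unify_labels(c, e)
            if u:
                merged.discard(c)
                merged.add(u)
                break
        else:
            merged.add(e)   # a rare usage contributed only by EVCA
    # an alternation is validated only if both alternate patterns are applicable
    alternations = {name: pats for name, pats in evca_alternations.items()
                    if all(supported(p, merged) for p in pats)}
    return merged, alternations
```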
2.2.2 Merging COMLEX/EVCA with WordNet

WordNet is a valuable resource for generation because, most importantly, the synsets provide a mapping between concepts and words. Its inclusion of rich lexical relations also provides a basis for lexical choice. Despite these advantages, the syntactic information in WordNet is relatively poor. Conversely, the result we obtained after combining COMLEX and EVCA has rich syntactic information, but this information is provided at the word level and is thus unsuitable for direct use in generation. These complementary resources are therefore combined in the second stage, where the subcategorizations and alternations from COMLEX/EVCA for each word are assigned to each sense of the word.

Each synset in WordNet is linked with a list of verb frames, each of which represents a simple syntactic pattern and general semantic constraints on verb arguments, e.g., Somebody -s something. The fact that WordNet contains this syntactic information (albeit poor) makes it possible to link the result from COMLEX/EVCA with WordNet.

The merging operation is based on a compatibility matrix, which indicates the compatibility of each subcategorization in COMLEX/EVCA with each verb frame in WordNet. The subcategorizations and alternations listed in COMLEX/EVCA for each word are then assigned to different senses of the word, based on their compatibility with the verb frames listed under that sense of the word in WordNet. For example, if for a certain word the subcategorizations PP-PRED-RS and NP are listed in COMLEX/EVCA, and the verb frame Somebody -s PP is listed for the first sense of the word in WordNet, then PP-PRED-RS will be assigned to the first sense of the word while NP will not. We also keep in the lexicon the general constraints on verb arguments from the WordNet frames. Therefore, for this example, the entry for the first sense of the word indicates that the verb can take a prepositional phrase as a complement, that the subject of the verb is the same as the subject of the prepositional phrase, and that the subject should be in the semantic category "somebody". The result incorporates information from three resources and is more informative than any one of them. An alternation is considered applicable to a word sense if both alternate patterns have matchable verb frames under that sense.

The compatibility matrix is the kernel of the merging operations. The 147x35 matrix (147 subcategorizations from COMLEX/EVCA, 35 verb frames from WordNet) was first constructed manually, based on human understanding. In order to achieve high accuracy, the restrictions for deciding whether a pair of labels is compatible were very strict when the matrix was first constructed. We then used regressive testing to adjust the matrix based on analysis of the merging results. During regressive testing, we first merge WordNet with COMLEX/EVCA using the current version of the compatibility matrix, and write all inconsistencies to a log file. In our case, an inconsistency occurs if a subcategorization or alternation in COMLEX/EVCA for a word cannot be assigned to any sense of the word, or if a verb frame for a word sense does not match any subcategorization for that word. We then analyze the log file and adjust the compatibility matrix accordingly. This process was repeated six times, until the remaining inconsistencies in the log file were no longer due to over-restriction of the compatibility matrix.
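In code, the sense assignment driven by the compatibility matrix might look like the sketch below. The matrix lookup is passed in as a function, and everything else is a simplified assumption about the data.

```python
# Sketch of matrix-driven sense assignment. `compatible(subcat, frame)` stands
# in for a lookup into the hand-built 147x35 compatibility matrix.

def assign_to_senses(word_subcats, wn_senses, compatible):
    """word_subcats: subcat/alternation labels from COMLEX/EVCA for a word.
    wn_senses: {sense_id: [WordNet verb frames for that sense]}."""
    per_sense = {}
    unassigned = set(word_subcats)
    for sense, frames in wn_senses.items():
        per_sense[sense] = [s for s in word_subcats
                            if any(compatible(s, f) for f in frames)]
        unassigned -= set(per_sense[sense])
    # leftovers are exactly the inconsistencies logged for regressive testing
    return per_sense, unassigned
```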
Inconsistencies between WordNet and COMLEX/EVCA result in unmatched subcategorizations or verb frames. On average, 15% of the subcategorizations and alternations for a word cannot be assigned to any sense of the word, mostly due to the incompleteness of the syntactic information in WordNet; 2% of the verb frames for each sense of a word do not match any subcategorization for the word, due either to the incompleteness of COMLEX/EVCA or to erroneous entries in WordNet.

The lexicon at this stage is a rich set of subcategorizations and alternations for each sense of a word, coupled with semantic constraints on verb arguments. Of the 5,920 words in the result of combining COMLEX and EVCA, 5,676 also appear in WordNet, and each word has 2.5 senses on average. After the merging operation, the average number of subcategorizations is refined from 5.2 per verb in COMLEX/EVCA to 3.1 per sense, and the average number of alternations is refined from 1.0 per verb to 0.2 per sense. Figure 3 shows the result for the verb appear after the merging operation.

appear:
sense 1: give an impression
((PP-TO-INF-RS :PVAL ("to") :SO ((sb, -)))
 (TO-INF-RS :SO ((sb, -)))
 (NP-PRED-RS :SO ((sb, -)))
 (ADJP-PRED-RS :SO ((sb, -) (sth, -))))
sense 2: become visible
((PP-TO-INF-RS :PVAL ("to") :SO ((sb, -) (sth, -)))
 ...
 (INTRANS THERE-V-SUBJ :ALT there-insertion :SO ((sb, -) (sth, -))))
sense 8: have an outward expression
((NP-PRED-RS :SO ((sth, -)))
 (ADJP-PRED-RS :SO ((sb, -) (sth, -))))

Figure 3: Entry for the verb appear after merging WordNet with the result from COMLEX and EVCA.

2.3 Corpus analysis

Finally, we enriched the lexicon with language usage information derived from corpus analysis. The corpus used here is the Brown Corpus. The language usage information in the lexicon includes: (1) the frequency of each word sense; (2) the frequency of subcategorizations for each word sense. A parser is used to recognize the subcategorization of a verb. The corpus analysis information complements the subcategorizations from the static resources by marking potentially superfluous entries and supplying entries that are possibly missing from the lexical databases; (3) semantic constraints on verb arguments. The arguments of each verb are clustered based on the hyponymy hierarchy in WordNet. The semantic categories we thus obtain are more specific than the general constraints (animate or inanimate) encoded in the WordNet frame representation. The language usage information is especially useful in lexical choice.
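A simplified rendering of this corpus pass is sketched below. The input format and the parser are hypothetical stand-ins for the sense-tagged Brown corpus and the subcategorization recognizer.

```python
# Illustration of the usage counts gathered in Section 2.3. The tuple format
# and parse_subcat are invented; only the counting logic is the point.
from collections import Counter, defaultdict

def collect_usage(occurrences, parse_subcat):
    """occurrences: iterable of (verb_sense, argument_heads, clause) drawn
    from the sense-tagged corpus; parse_subcat(clause) returns a COMLEX-style
    subcategorization label."""
    sense_freq = Counter()
    subcat_freq = defaultdict(Counter)
    arg_fillers = defaultdict(list)
    for sense, args, clause in occurrences:
        sense_freq[sense] += 1
        subcat_freq[sense][parse_subcat(clause)] += 1
        arg_fillers[sense].extend(args)   # later clustered up the WordNet
                                          # hyponymy hierarchy into categories
    return sense_freq, subcat_freq, arg_fillers
```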
2.4 Discussion

Merging resources is not a new idea, and previous work has investigated the integration of resources for machine translation and interpretation (Klavans et al., 1991; Knight and Luk, 1994). Our work differs from previous work in several respects: for the first time, a generation lexicon is built by this technique; unlike other work, which aims to combine resources containing similar types of information, we select and combine multiple resources containing different types of information; while others combine less well-formatted lexicons such as LDOCE (Longman Dictionary of Contemporary English), we chose well-formatted resources (or manually formatted the resource) so as to obtain reliable and usable results; a semi-automatic rather than fully automatic approach is adopted to ensure accuracy; and corpus-based information is linked with the information from the static resources. By these measures, we are able to acquire an accurate, reusable, rich, and large-scale lexicon for natural language generation.

3 Applications

3.1 Architecture

We applied the lexicon to lexical choice and lexical realization in a practical generation system. First we introduce the architecture for lexical choice and realization, and then describe the overall system.

A multi-level feedback architecture, as shown in Figure 4, was used for lexical choice and realization.

[Figure 4: The Architecture for Lexical Choice and Realization. Stages: Sentence Planner; mapping from semantic concepts to lexical concepts; mapping from lexical concepts to words (consulting WordNet); generation of syntactic paraphrases; surface realization; natural language output.]

We distinguish two types of concepts: semantic concepts and lexical concepts. A semantic concept is the semantic meaning that a user wants to convey, while a lexical concept is a lexical meaning that can be represented by a set of synonymous words, such as the synsets defined in WordNet. Paraphrases are likewise distinguished into three types, according to whether they are at the semantic, lexical, or syntactic level. For example, if asked whether you will be at home tomorrow, the answers "I'll be at work tomorrow", "No, I won't be at home.", and "I'm leaving for vacation tonight" are paraphrases at the semantic level. Paraphrases like "He bought an umbrella" and "He purchased an umbrella" are at the lexical level, since they are acquired by substituting certain words with synonymous words. Paraphrases like "A ship appeared on the horizon" and "On the horizon appeared a ship" are at the syntactic level, since they involve only syntactic transformations. Therefore, all paraphrases introduced by alternations are at the syntactic level. Our architecture includes levels corresponding to these three levels of paraphrasing.

The input to the lexical choice and realization module is represented as semantic concepts. In the first stage, semantic paraphrasing is carried out by mapping semantic concepts to lexical concepts. Generally, semantic-level paraphrases are very complex. They depend on the situation, the domain, and the semantic relations involved. Semantic paraphrases are represented declaratively in a database file which can be edited by the users. The file is indexed by semantic concepts, and under each entry a list of lexical concepts that can be used to realize the semantic concept is provided.

In the second stage, we use the lexical resource that we constructed to choose words for the lexical concepts produced by stage 1. The lexicon is indexed by lexical concepts that point to synsets in WordNet. These synsets represent sets of synonymous words, and it is thus at this stage that lexical paraphrasing is handled. In order to choose which word to use for a lexical concept, we use the domain-independent constraints included in the lexicon as well as domain-specific constraints. The syntactic constraints that come from the detailed subcategorizations linked to each word sense are domain-independent constraints. Subcategorizations are used to check that the input can be realized by the word. For example, if the input has three arguments, then words which take only two arguments cannot be selected. Semantic constraints on verb arguments, derived from WordNet and the corpus, are used to check the agreement of the arguments. For example, if the input subject argument is animate, then words which take only an inanimate subject cannot be selected. Frequency information derived from the corpus is also used to constrain word choice.
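The following sketch shows one way this stage-two filter could be realized over entries like those sketched in Section 2. The two constraint predicates and the frequency-based scoring are assumptions made for the example, not the system's actual logic.

```python
# Illustrative word choice for one lexical concept (WordNet synset). The
# predicates encapsulate the subcategorization and semantic checks; they and
# the scoring rule are our own simplifications.

def choose_word(synonyms, lexicon, realizable, subject_ok):
    """synonyms: words in the synset; lexicon: word -> list of sense entries.
    realizable(frame): the frame can realize the input's arguments.
    subject_ok(frame): the input subject meets the frame's semantic category."""
    best = None
    for word in synonyms:
        for sense in lexicon.get(word, []):
            frames = [f for f in sense.subcats
                      if realizable(f) and subject_ok(f)]
            if frames:
                score = sum(f.corpus_freq for f in frames)  # prefer attested usage
                if best is None or score > best[0]:
                    best = (score, word, frames)
    return best   # None triggers feedback to the higher-level module
```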
Besides the above domain-independent constraints, other constraints specific to a domain might also be needed to choose an appropriate word for a lexical concept. Introducing the combined lexicon at this stage allows us to produce many lexical paraphrases without much effort; it also allows us to separate domain-independent and domain-specific constraints in lexical choice, so that the domain-independent constraints can be reused in each application.

The third stage produces a structure represented as a high-level sentence structure, with subcategorizations and words associated with each sentence. At this stage, the information in the lexical resource about subcategorizations and alternations is applied in order to generate syntactic paraphrases. The output of this stage is then fed directly to the surface realization package, the FUF/SURGE system (Elhadad, 1992; Robin, 1994). To choose which alternate pattern of an alternation to use, we use information such as the focus of the sentence as criteria; when the two alternates are not distinctively different, such as "He knocked the door" and "He knocked at the door", one of them is randomly chosen. The application of the subcategorizations in the lexicon at this stage helps to check that the output is grammatically correct, and the alternations can produce many syntactic paraphrases.

This refinement process is interactive. When a lower level cannot find a possible candidate to realize the higher-level representation, feedback is sent to the higher-level module, which then makes changes accordingly.

3.2 PlanDOC

Using the proposed architecture, we applied the lexicon to a practical generation system, PlanDOC. PlanDOC is an enhancement to Bellcore's LEIS-PLAN network planning product. It transforms lengthy execution traces of engineers' interactions with LEIS-PLAN into human-readable summaries.

For each message in PlanDOC, at least 3 paraphrases are defined at the semantic level. For example, "The base plan called for one fiber activation at CSA 2100" and "There was one fiber activation at CSA 2100" are semantic paraphrases in the PlanDOC domain. At the lexical level, we use synonymous words from WordNet to generate lexical paraphrases. A sample lexical paraphrase for "The base plan called for one fiber activation at CSA 2100" is "The base plan proposed one fiber activation at CSA 2100". Subcategorizations and alternations from the lexicon are then applied at the syntactic level. After three levels of paraphrasing, each message in PlanDOC has over 10 paraphrases on average.

For a specific domain such as PlanDOC, an enormous proportion of a general lexicon like the one we constructed is unrelated to the domain and thus goes unused. On the other hand, domain-specific knowledge may need to be added to the lexicon. The problem of how to adapt a general lexicon to a particular application domain and merge domain ontologies with a general lexicon is outside the scope of this paper, but is discussed in (Jing, 1998).
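As a toy illustration of the syntactic level, the snippet below applies the there-insertion alternation to a deliberately simplified clause representation. The dictionary format is invented for the example and bears no relation to the actual FUF/SURGE input.

```python
# Toy paraphrase via the there-insertion alternation over a flat clause.

def base_order(clause):
    return f"{clause['subj'].capitalize()} {clause['verb']} {clause['loc']}."

def there_insertion(clause):
    return f"There {clause['verb']} {clause['subj']} {clause['loc']}."

clause = {"subj": "a ship", "verb": "appeared", "loc": "on the horizon"}
print(base_order(clause))       # A ship appeared on the horizon.
print(there_insertion(clause))  # There appeared a ship on the horizon.
```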
4 Conclusion

In this paper, we present research on building a rich, large-scale, and reusable lexicon for generation by combining multiple heterogeneous linguistic resources. Novel semi-automatic transformation and integration were used in combining the resources to ensure the reliability of the resulting lexicon. The lexicon, together with a multi-level feedback architecture, is used in a practical generation system, PlanDOC.

The application of the lexicon in a generation system such as PlanDOC has many advantages. First, the paraphrasing power of the system can be greatly improved, due to the introduction of synonyms at the lexical concept level and alternations at the syntactic level. Second, the integration of the lexicon and the flexible architecture enables us to separate the domain-dependent component of the lexical choice module from the domain-independent components, so that they can be reused. Third, the integration of the lexicon with the surface realization system helps in checking for grammatical errors and also simplifies the interface input to the realization system. For these reasons, we were able to develop the PlanDOC system in a short time.

Although the lexicon was developed for generation, it can be applied in other applications too. For example, the syntactic-semantic constraints can be used for word sense disambiguation (Jing et al., 1997); the subcategorizations and alternations from EVCA/COMLEX are better resources for parsing; and WordNet enriched with syntactic information might also be of value to many other applications.

Acknowledgment

This material is based upon work supported by the National Science Foundation under Grant No. IRI 96-19124, IRI 96-18797 and by a grant from Columbia University's Strategic Initiative Fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

Michael Elhadad. 1992. Using Argumentation to Control Lexical Choice: A Functional Unification-Based Approach. Ph.D. thesis, Department of Computer Science, Columbia University.

Ralph Grishman, Catherine Macleod, and Adam Meyers. 1994. COMLEX syntax: Building a computational lexicon. In Proceedings of COLING'94, Kyoto, Japan.

Hongyan Jing, Vasileios Hatzivassiloglou, Rebecca Passonneau, and Kathleen McKeown. 1997. Investigating complementary methods for verb sense pruning. In Proceedings of ANLP'97 Lexical Semantics Workshop, pages 58-65, Washington, D.C., April.

Hongyan Jing. 1998. Applying WordNet to natural language generation. To appear in the Proceedings of the COLING-ACL'98 workshop on the Usage of WordNet in Natural Language Processing Systems, University of Montreal, Montreal, Canada, August.

J. Klavans, R. Byrd, N. Wacholder, and M. Chodorow. 1991. Taxonomy and polysemy. Research Report RC 16443, IBM Research Division, T.J. Watson Research Center, Yorktown Heights, NY 10598.

Kevin Knight and Steve K. Luk. 1994. Building a large-scale knowledge base for machine translation. In Proceedings of AAAI'94.

H. Kučera and W. N. Francis. 1967. Computational Analysis of Present-day American English. Brown University Press, Providence, RI.

Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago, Illinois.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312.

George A. Miller, Claudia Leacock, Randee Tengi, and Ross T. Bunker. 1993. A semantic concordance. Cognitive Science Laboratory, Princeton University.

Jacques Robin. 1994. Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-Based Analysis, Design, Implementation, and Evaluation. Ph.D. thesis, Department of Computer Science, Columbia University. Also Technical Report CU-CS-034-94.
Untangling Text Data Mining

Marti A. Hearst
School of Information Management & Systems
University of California, Berkeley
102 South Hall, Berkeley, CA 94720-4600
http://www.sims.berkeley.edu/~hearst

Abstract

The possibilities for data mining from large text collections are virtually untapped. Text expresses a vast, rich range of information, but encodes this information in a form that is difficult to decipher automatically. Perhaps for this reason, there has been little work in text data mining to date, and most people who have talked about it have either conflated it with information access or have not made use of text directly to discover heretofore unknown information. In this paper I will first define data mining, information access, and corpus-based computational linguistics, and then discuss the relationship of these to text data mining. The intent behind these contrasts is to draw attention to exciting new kinds of problems for computational linguists. I describe examples of what I consider to be real text data mining efforts and briefly outline recent ideas about how to pursue exploratory data analysis over text.

1 Introduction

The nascent field of text data mining (TDM) has the peculiar distinction of having a name and a fair amount of hype but as yet almost no practitioners. I suspect this has happened because people assume TDM is a natural extension of the slightly less nascent field of data mining (DM), also known as knowledge discovery in databases (Fayyad and Uthurusamy, 1999) and information archeology (Brachman et al., 1993). Additionally, there are some disagreements about what actually constitutes data mining.

It turns out that "mining" is not a very good metaphor for what people in the field actually do. Mining implies extracting precious nuggets of ore from otherwise worthless rock. If data mining really followed this metaphor, it would mean that people were discovering new factoids within their inventory databases. However, in practice this is not really the case. Instead, data mining applications tend to be (semi)automated discovery of trends and patterns across very large datasets, usually for the purposes of decision making (Fayyad and Uthurusamy, 1999; Fayyad, 1997). Part of what I wish to argue here is that in the case of text, it can be interesting to take the mining-for-nuggets metaphor seriously.

The various contrasts discussed below are summarized in Table 1.

2 TDM vs. Information Access

It is important to differentiate between text data mining and information access (or information retrieval, as it is more widely known). The goal of information access is to help users find documents that satisfy their information needs (Baeza-Yates and Ribeiro-Neto, 1999). The standard procedure is akin to looking for needles in a needlestack - the problem isn't so much that the desired information is not known, but rather that the desired information coexists with many other valid pieces of information. Just because a user is currently interested in NAFTA and not Furbies does not mean that all descriptions of Furbies are worthless. The problem is one of homing in on what is currently of interest to the user.

As noted above, the goal of data mining is to discover or derive new information from data, finding patterns across datasets, and/or separating signal from noise.
The fact that an information retrieval system can return a document that contains the information a user requested implies that no new discovery is being made: the information had to have already been known to the author of the text; otherwise the author could not have written it down.

I have observed that many people, when asked about text data mining, assume it should have something to do with "making things easier to find on the web". For example, the description of the KDD-97 panel on Data Mining and the Web stated:

... Two challenges are predominant for data mining on the Web. The first goal is to help users in finding useful information on the Web and in discovering knowledge about a domain that is represented by a collection of Web-documents. The second goal is to analyse the transactions run in a Web-based system, be it to optimize the system or to find information about the clients using the system. (http://www.aaai.org/Conferences/KDD/1997/kdd97-schedule.html)

This search-centric view misses the point that we might actually want to treat the information in the web as a large knowledge base from which we can extract new, never-before encountered information (Craven et al., 1998).

On the other hand, the results of certain types of text processing can yield tools that indirectly aid in the information access process. Examples include text clustering to create thematic overviews of text collections (Cutting et al., 1992; Chalmers and Chitson, 1992; Rennison, 1994; Wise et al., 1995; Lin et al., 1991; Chen et al., 1998), automatically generating term associations to aid in query expansion (Peat and Willett, 1991; Voorhees, 1994; Xu and Croft, 1996), and using co-citation analysis to find general topics within a collection or to identify central web pages (White and McCain, 1989; Larson, 1996; Kleinberg, 1998).

Aside from providing tools to aid in the standard information access process, I think text data mining can contribute along another dimension. In the future I hope to see information access systems supplemented with tools for exploratory data analysis. Our efforts in this direction are embodied in the LINDI project, described in Section 5 below.

3 TDM and Computational Linguistics

If we extrapolate from data mining (as practiced) on numerical data to data mining from text collections, we discover that there already exists a field engaged in text data mining: corpus-based computational linguistics! Empirical computational linguistics computes statistics over large text collections in order to discover useful patterns. These patterns are used to inform algorithms for various subproblems within natural language processing, such as part-of-speech tagging, word sense disambiguation, and bilingual dictionary creation (Armstrong, 1994).

It is certainly of interest to a computational linguist that the words "prices, prescription, and patent" are highly likely to co-occur with the medical sense of "drug" while "abuse, paraphernalia, and illicit" are likely to co-occur with the illegal-drug sense of this word (Church and Liberman, 1991). This kind of information can also be used to improve information retrieval algorithms. However, the kinds of patterns found and used in computational linguistics are not likely to be what the general business community hopes for when they use the term text data mining.
Within the computational linguistics framework, efforts in automatic augmentation of existing lexical structures seem to fit the data-mining-as-ore-extraction metaphor. Examples include automatic augmentation of WordNet relations (Fellbaum, 1998) by identifying lexico-syntactic patterns that unambiguously indicate those relations (Hearst, 1998), and automatic acquisition of subcategorization data from large text corpora (Manning, 1993). However, these serve the specific needs of computational linguistics and are not applicable to a broader audience.

4 TDM and Category Metadata

Some researchers have claimed that text categorization should be considered text data mining. Although analogies can be found in the data mining literature (e.g., referring to classification of astronomical phenomena as data mining (Fayyad and Uthurusamy, 1999)), I believe that when applied to text categorization this is a misnomer. Text categorization is a boiling down of the specific content of a document into one (or more) of a set of pre-defined labels. This does not lead to discovery of new information; presumably the person who wrote the document knew what it was about. Rather, it produces a compact summary of something that is already known.

                    Finding Patterns             Finding Nuggets
                                                 Novel       Non-Novel
Non-textual data    standard data mining         ?           database queries
Textual data        computational linguistics    real TDM    information retrieval

Table 1: A classification of data mining and text data mining applications.

However, there are two recent areas of inquiry that make use of text categorization and do seem to fit within the conceptual framework of discovery of trends and patterns within textual data for more general purpose usage.

One body of work uses text category labels (associated with Reuters newswire) to find "unexpected patterns" among text articles (Feldman and Dagan, 1995; Dagan et al., 1996; Feldman et al., 1997). The main approach is to compare distributions of category assignments within subsets of the document collection. For instance, distributions of commodities in country C1 are compared against those of country C2 to see if interesting or unexpected trends can be found. Extending this idea, one country's export trends might be compared against those of a set of countries that are seen as an economic unit (such as the G-7).

Another effort is that of the DARPA Topic Detection and Tracking initiative (Allan et al., 1998). While several of the tasks within this initiative are standard text analysis problems (such as categorization and segmentation), there is an interesting task called On-line New Event Detection, whose input is a stream of news stories in chronological order, and whose output is a yes/no decision for each story, made at the time the story arrives, indicating whether the story is the first reference to a newly occurring event. In other words, the system must detect the first instance of what will become a series of reports on some important topic. Although this can be viewed as a standard classification task (where the class is a binary assignment to the new-event class), it is more in the spirit of data mining, in that the focus is on discovery of the beginning of a new theme or trend.

The reason I consider these examples (using multiple occurrences of text categories to detect trends or patterns) to be "real" data mining is that they use text metadata to tell us something about the world, outside of the text collection itself.
(However, since this application uses metadata associated with text documents, rather than the text directly, it is unclear whether it should be considered text data mining or standard data mining.) The computational linguistics applications tell us how to improve language analysis, but they do not discover more widely usable information.

5 Text Data Mining as Exploratory Data Analysis

Another way to view text data mining is as a process of exploratory data analysis (Tukey, 1977; Hoaglin et al., 1983) that leads to the discovery of heretofore unknown information, or to answers to questions for which the answer is not currently known.

Of course, it can be argued that the standard practice of reading textbooks, journal articles, and other documents helps researchers in the discovery of new information, since this is an integral part of the research process. However, the idea here is to use text for discovery in a more direct manner. Two examples are described below.

5.1 Using Text to Form Hypotheses about Disease

For more than a decade, Don Swanson has eloquently argued why it is plausible to expect new information to be derivable from text collections: experts can only read a small subset of what is published in their fields and are often unaware of developments in related fields. Thus it should be possible to find useful linkages between information in related literatures, if the authors of those literatures rarely refer to one another's work. Swanson has shown how chains of causal implication within the medical literature can lead to hypotheses for causes of rare diseases, some of which have received supporting experimental evidence (Swanson, 1987; Swanson, 1991; Swanson and Smalheiser, 1994; Swanson and Smalheiser, 1997).

For example, when investigating causes of migraine headaches, he extracted various pieces of evidence from titles of articles in the biomedical literature. Some of these clues can be paraphrased as follows:

• stress is associated with migraines
• stress can lead to loss of magnesium
• calcium channel blockers prevent some migraines
• magnesium is a natural calcium channel blocker
• spreading cortical depression (SCD) is implicated in some migraines
• high levels of magnesium inhibit SCD
• migraine patients have high platelet aggregability
• magnesium can suppress platelet aggregability

These clues suggest that magnesium deficiency may play a role in some kinds of migraine headache, a hypothesis which did not exist in the literature at the time Swanson found these links. The hypothesis has to be tested via non-textual means, but the important point is that a new, potentially plausible medical hypothesis was derived from a combination of text fragments and the explorer's medical expertise. (According to Swanson (1991), subsequent study found support for the magnesium-migraine hypothesis (Ramadan et al., 1989).)

This approach has been only partially automated. There is, of course, a potential for combinatorial explosion of potentially valid links. Beeferman (1998) has developed a flexible interface and analysis tool for exploring certain kinds of chains of links among lexical relations within WordNet (see http://www.link.cs.cmu.edu/lexfn). However, sophisticated new algorithms are needed for helping in the pruning process, since a good pruning algorithm will want to take into account various kinds of semantic constraints. This may be an interesting area of investigation for computational linguists.
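To make the chaining idea concrete, the toy sketch below searches for directed chains between two topics over a hand-entered set of assertions. The triples, the relation names, and the search itself are our own illustration of the idea; a real system would additionally have to track relation polarity (inhibits vs. promotes) and prune semantically, as discussed above.

```python
# Toy rendering of literature-based chaining. The assertion triples paraphrase
# the clues listed above; everything else is invented for illustration.

assertions = {
    ("stress", "promotes", "migraine"),
    ("stress", "depletes", "magnesium"),
    ("magnesium", "blocks", "calcium channels"),
    ("magnesium", "inhibits", "spreading cortical depression"),
    ("spreading cortical depression", "promotes", "migraine"),
    ("magnesium", "suppresses", "platelet aggregability"),
    ("platelet aggregability", "accompanies", "migraine"),
}

def chains(start, end, facts, max_len=3):
    """Enumerate directed chains from start to end of up to max_len links."""
    paths, found = [[start]], []
    for _ in range(max_len):
        next_paths = []
        for p in paths:
            for (x, _, y) in facts:
                if x == p[-1] and y not in p:
                    (found if y == end else next_paths).append(p + [y])
        paths = next_paths
    return found

for c in chains("stress", "migraine", assertions):
    print(" -> ".join(c))
# Prints (in some order):
#   stress -> migraine
#   stress -> magnesium -> spreading cortical depression -> migraine
#   stress -> magnesium -> platelet aggregability -> migraine
# The magnesium bridge is the kind of candidate hypothesis that must then be
# tested by non-textual means.
```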
5.2 Using Text to Uncover Social Impact

Switching to an entirely different domain, consider a recent effort to determine the effects of publicly financed research on industrial advances (Narin et al., 1997). After years of preliminary studies and the building of special-purpose tools, the authors found that the technology industry relies more heavily than ever on government-sponsored research results. The authors explored relationships among patent text and the published research literature, using a procedure which was reported as follows in Broad (1997):

The CHI Research team examined the science references on the front pages of American patents in two recent periods - 1987 and 1988, as well as 1993 and 1994 - looking at all the 397,660 patents issued. It found 242,000 identifiable science references and zeroed in on those published in the preceding 11 years, which turned out to be 80 percent of them. Searches of computer databases allowed the linking of 109,000 of these references to known journals and authors' addresses. After eliminating redundant citations to the same paper, as well as articles with no known American author, the study had a core collection of 45,000 papers. Armies of aides then fanned out to libraries to look up the papers and examine their closing lines, which often say who financed the research. That detective work revealed an extensive reliance on publicly financed science.

Further narrowing its focus, the study set aside patents given to schools and governments and zeroed in on those awarded to industry. For 2,841 patents issued in 1993 and 1994, it examined the peak year of literature references, 1988, and found 5,217 citations to science papers. Of these, it found that 73.3 percent had been written at public institutions - universities, government labs and other public agencies, both in the United States and abroad.

Thus a heterogeneous mix of operations was required to conduct complex analyses over large text collections. These operations included:

1. Retrieval of articles from a particular collection (patents) within a particular date range.
2. Identification of the citation pool (articles cited by the patents).
3. Bracketing of this pool by date, creating a new subset of articles.
4. Computation of the percentage of articles that remain after bracketing.
5. Joining these results with those of other collections to identify the publishers of articles in the pool.
6. Elimination of redundant articles.
7. Elimination of articles based on an attribute type (author nationality).
8. Location of full-text versions of the articles.
9. Extraction of a special attribute from the full text (the acknowledgement of funding).
10. Classification of this attribute (by institution type).
11. Narrowing of the set of articles to consider by an attribute (institution type).
12. Computation of statistics over one of the attributes (peak year).
13. Computation of the percentage of articles for which one attribute has been assigned another attribute type (whose citation attribute has a particular institution attribute).

Because all the data was not available online, much of the work had to be done by hand, and special-purpose tools were required to perform the operations; a schematic rendering of how these steps would compose, had the data been online, is sketched below.
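This pipeline sketch is entirely schematic: the record layout and every helper function are invented stand-ins, and the point is only that each hand-executed step becomes one composable operation over sets of records.

```python
# Schematic pipeline for the study above. `patents` and each function in
# `ops` are hypothetical; the step numbers refer to the list above.

def run_study(patents, ops):
    cites = [r for p in patents for r in ops["science_refs"](p)]      # steps 1-2
    pool = [r for r in cites if ops["within_window"](r)]              # step 3
    coverage = len(pool) / len(cites)                                 # step 4
    linked = ops["dedupe"](ops["join_journals"](pool))                # steps 5-6
    core = [r for r in linked if ops["has_us_author"](r)]             # step 7
    papers = [ops["full_text"](r) for r in core]                      # step 8
    funders = [ops["funder_type"](p) for p in papers]                 # steps 9-10
    public = [f for f in funders if f == "public"]                    # step 11
    peak_year = ops["peak_year"](core)                                # step 12
    public_share = len(public) / len(papers)                          # step 13
    return coverage, peak_year, public_share
```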
The main tools for dis- covering new information are of two types: sup- port for issuing sequences of queries and related operations across text collections, and tightly coupled statistical and visualization tools for the examination of associations among concepts that co-occur within the retrieved documents. Both sets of tools make use of attributes as- sociated specifically with text collections and 3LINDI: Linking Information for Novel Discovery and Insight. their metadata. Thus the broadening, narrow- ing, and linking of relations seen in the patent example should be tightly integrated with anal- ysis and interpretation tools as needed in the biomedical example. Following Amant (1996), the interaction paradigm is that of a mixed-initiative balance of control between user and system. The inter- action is a cycle in which the system suggests hypotheses and strategies for investigating these hypotheses, and the user either uses or ignores these suggestions and decides on the next move. We are interested in an important problem in molecular biology, that of automating the discovery of the function of newly sequenced genes (Walker et al., 1998). Human genome researchers perform experiments in which they analyze co-expression of tens of thousands of novel and known genes simultaneously. 4 Given this huge collection of genetic information, the goal is to determine which of the novel genes are medically interesting, meaning that they are co-expressed with already understood genes which are known to be involved in disease. Our strategy is to explore the biomedical literature, trying to formulate plausible hypotheses about which genes are of interest. Most information access systems require the user to execute and keep track of tactical moves, often distracting from the thought-intensive as- pects of the problem (Bates, 1990). The LINDI interface provides a facility for users to build and so reuse sequences of query operations via a drag-and-drop interface. These allow the user to repeat the same sequence of actions for differ- ent queries. In the gene example, this allows the user to specify a sequence of operations to ap- ply to one co-expressed gene, and then iterate this sequence over a list of other co-expressed genes that can be dragged onto the template. (The Visage interface (Derthick et al., 1997) implements this kind of functionality within its information-centric framework.) These include the following operations (see Figure 1): • Iteration of an operation over the items within a set. (This allows each item re- trieved in a previous query to be use as a 4A gene g~ co-expresses with gene g when both are found to be activated in the same cells at the same time with much more likelihood than chance. search terms for a new query.) • Transformation, i.e., applying an operation to an item and returning a transformed item (such as extracting a feature). • Ranking, i.e., applying an operation to a set of items and returning a (possibly) re- ordered set of items with the same cardi- nality. • Selection, i.e., applying an operation to a set of items and returning a (possibly) reordered set of items with the same or smaller cardinality. • Reduction, i.e., applying an operation to one or more sets of items to yield a sin- gleton result (e.g., to compute percentages and averages). 6 Summary For almost a decade the computational linguis- tics community has viewed large text collections as a resource to be tapped in order to produce better text analysis algorithms. 
This system will allow maintenance of several different types of history, including a history of commands issued, a history of strategies employed, and a history of hypotheses tested. For the history view, we plan to use a "spreadsheet" layout (Hendry and Harper, 1997) as well as a variation on a "slide sorter" view, which Visage uses for presentation creation but not for history retention (Roth et al., 1997).

Since gene function discovery is a new area, there is not yet a known set of exploration strategies. So initially the system must help an expert user generate and record good exploration strategies. The user interface provides a mechanism for recording and modifying sequences of actions. These include facilities that refer to metadata structure, allowing, for example, query terms to be expanded by terms one level above or below them in a subject hierarchy. Once a successful set of strategies has been devised, they can be re-used by other researchers and (with luck) by an automated version of the system. The intent is to build up enough strategies that the system will begin to be used as an assistant or advisor (Amant, 1996), ranking hypotheses according to projected importance and plausibility. Thus the emphasis of this system is to help automate the tedious parts of the text manipulation process and to integrate the underlying computationally-driven text analysis with human-guided decision making within exploratory data analysis over text.

6 Summary

For almost a decade the computational linguistics community has viewed large text collections as a resource to be tapped in order to produce better text analysis algorithms. In this paper, I have attempted to suggest a new emphasis: the use of large online text collections to discover new facts and trends about the world itself. I suggest that to make progress we do not need fully artificial intelligent text analysis; rather, a mixture of computationally-driven and user-guided analysis may open the door to exciting new results.

Acknowledgements. Hao Chen, Ketan Mayer-Patel, and Vijayshankar Raman helped design and did all the implementation of the first LINDI prototype.

References

J. Allan, J. Carbonell, G. Doddington, J. Yamron, and Y. Yang. 1998. Topic detection and tracking pilot study: Final report. In Proceedings of the DARPA Broadcast News Transcription and Understanding Workshop, pages 194-218.

Robert St. Amant. 1996. A Mixed-Initiative Planning Approach to Exploratory Data Analysis. Ph.D. thesis, University of Massachusetts, Amherst.

Susan Armstrong, editor. 1994. Using Large Corpora. MIT Press.

Ricardo Baeza-Yates and Berthier Ribeiro-Neto. 1999. Modern Information Retrieval. Addison-Wesley Longman Publishing Company.

Marcia J. Bates. 1990. The berry-picking search: User interface design. In Harold Thimbleby, editor, User Interface Design. Addison-Wesley.

Douglas Beeferman. 1998. Lexical discovery with an enriched semantic network. In Proceedings of the ACL/COLING Workshop on Applications of WordNet in Natural Language Processing Systems, pages 358-364.

R. J. Brachman, P. G. Selfridge, L. G. Terveen, B. Altman, A. Borgida, F. Halper, T. Kirk, A. Lazar, D. L. McGuinness, and L. A. Resnick. 1993. Integrated support for data archaeology. International Journal of Intelligent and Cooperative Information Systems, 2(2):159-185.

William J. Broad. 1997. Study finds public science is pillar of industry. In The New York Times, May 13.

Matthew Chalmers and Paul Chitson. 1992. Bead: Exploration in information visualization. In Proceedings of the 15th Annual International ACM/SIGIR Conference, pages 330-337, Copenhagen, Denmark.
Hsinchen Chen, Andrea L. Houston, Robin R. Sewell, and Bruce R. Schatz. 1998. Internet browsing and searching: User evaluations of category map and concept space techniques. Journal of the American Society for Information Sciences (JASIS), 49(7).

Kenneth W. Church and Mark Y. Liberman. 1991. A status report on the ACL/DCI. In The Proceedings of the 7th Annual Conference of the UW Centre for the New OED and Text Research: Using Corpora, pages 84-91, Oxford.

M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and S. Slattery. 1998. Learning to extract symbolic knowledge from the world wide web. In Proceedings of AAAI.

Douglass R. Cutting, Jan O. Pedersen, David Karger, and John W. Tukey. 1992. Scatter/Gather: A cluster-based approach to browsing large document collections. In Proceedings of the 15th Annual International ACM/SIGIR Conference, pages 318-329, Copenhagen, Denmark.

Ido Dagan, Ronen Feldman, and Haym Hirsh. 1996. Keyword-based browsing and analysis of large document sets. In Proceedings of the Fifth Annual Symposium on Document Analysis and Information Retrieval (SDAIR), Las Vegas, NV.

Mark Derthick, John Kolojejchick, and Steven F. Roth. 1997. An interactive visualization environment for data exploration. In Proceedings of the Third Annual Conference on Knowledge Discovery and Data Mining (KDD), Newport Beach.

Usama Fayyad and Ramasamy Uthurusamy. 1999. Data mining and knowledge discovery in databases: Introduction to the special issue. Communications of the ACM, 39(11), November.

Usama Fayyad. 1997. Editorial. Data Mining and Knowledge Discovery, 1(1).

Ronen Feldman and Ido Dagan. 1995. KDT - knowledge discovery in texts. In Proceedings of the First Annual Conference on Knowledge Discovery and Data Mining (KDD), Montreal.

Ronen Feldman, Will Klosgen, and Amir Zilberstein. 1997. Visualization techniques to explore data mining results for document collections. In Proceedings of the Third Annual Conference on Knowledge Discovery and Data Mining (KDD), Newport Beach.

Christiane Fellbaum, editor. 1998. WordNet: An
In Proceedings of the 14th Annual International ACM//SIGIR Conference, pages 262-269, Chicago. Christopher D. Manning. 1993. Automatic acquisi- tion of a large subcategorization dictionary from corpora. In Proceedings of the 31st Annual Meet- ing of the Association for Computational Lin- gusitics, pages 235-242, Columbus, OH. Francis Narin, Kimberly S. Hamilton, and Dominic Olivastro. 1997. The increasing linkage between us technology and public science. Research Pol- icy, 26(3):317-330. Helen J. Peat and Peter Willett. 1991. The limi- tations of term co-occurence data for query ex- pansion in document retrieval systems. JASIS, 42(5):378-383. N. M. Ramadan, H. Halvorson, A. Vandelinde, and S.R. Levine. 1989. Low brain magnesium in mi- graine. Headache, 29(7):416-419. Earl Rennison. 1994. Galaxy of news: An approach to visualizing and understanding expansive news landscapes. In Proceedings of UIST 94, ACM Symposium on User Interface Software and Tech- nology, pages 3-12, New York. Steven F. Roth, Mei C. Chuah, Stephan Kerped- jiev, John A. Kolojejchick, and Peter Lucas. 1997. Towards an information visualization workspace: Combining multiple means of expression. Human- Computer Interaction, 12(1-2):131-185. Don R. Swanson and N. R. Smalheiser. 1994. As- sessing a gap in the biomedical literature: Mag- nesium deficiency and neurologic disease. Neuro- science Research Communications, 15:1-9. Don R. Swanson and N. R. Smalheiser. 1997. An in- teractive system for finding complementary litera- tures: a stimulus to scientific discovery. Artificial Intelligence, 91:183-203. Don R. Swanson. 1987. Two medical literatures that are logically but not bibliographically con- nected. JASIS, 38(4):228-233. Don R. Swanson. 1991. Complementary structures in disjoint science literatures. In Proceedings of the l~th Annual International ACM//SIGIR Con- ference, pages 280-289. John W. Tukey. 1977. Exploratory Data Analysis. Addison-Wesley Publishing Company. Ellen M. Voorhees. 1994. Query expansion using lexical-semantic relations. In Proceedings of the 17th Annual International A CM//SIGIR Confer- ence, pages 61-69, Dublin, Ireland. Michael G. Walker, Walter Volkmuth, Einat Sprin- zak, David Hodgson, and Tod Klingler. 1998. Prostate cancer genes identified by genome~scale expression analysis. Technical Report (unnum- bered), Incyte Pharmaceuticals, July. H. D. White and K. W. McCain. 1989. Bibliomet- rics. Annual Review of Information Science and Technology, 24:119-186. James A. Wise, James J. Thomas, Kelly Pennock, David Lantrip, Marc Pottier, and Anne Schur. 1995. Visualizing the non-visual: Spatial analysis and interaction with information from text docu- ments. In Proceedings of the Information Visual- ization Symposium 95, pages 51-58. IEEE Com- puter Society Press. J. Xu and W. B. Croft. 1996. Query expansion us- ing local and global document analysis. In SI- GIR '96: Proceedings of the 19th Annual Interna- tional ACM SIGIR Conference on Research and Development in Information Retrieval, pages 4- 11, Zurich. 10
Supervised Grammar Induction using Training Data with Limited Constituent Information*

Rebecca Hwa
Division of Engineering and Applied Sciences, Harvard University
Cambridge, MA 02138 USA
[email protected]

* This material is based upon work supported by the National Science Foundation under Grant No. IRI 9712068. We thank Stuart Shieber for his guidance, and Lillian Lee, Ric Crabbe, and the three anonymous reviewers for their comments on the paper.

Abstract

Corpus-based grammar induction generally relies on hand-parsed training data to learn the structure of the language. Unfortunately, the cost of building large annotated corpora is prohibitively expensive. This work aims to improve the induction strategy when there are few labels in the training data. We show that the most informative linguistic constituents are the higher nodes in the parse trees, typically denoting complex noun phrases and sentential clauses. They account for only 20% of all constituents. For inducing grammars from sparsely labeled training data (e.g., only higher-level constituent labels), we propose an adaptation strategy, which produces grammars that parse almost as well as grammars induced from fully labeled corpora. Our results suggest that for a partial parser to replace human annotators, it must be able to automatically extract higher-level constituents rather than base noun phrases.

1 Introduction

The availability of large hand-parsed corpora such as the Penn Treebank Project has made high-quality statistical parsers possible. However, the parsers risk becoming so tailored to these labeled training data that they cannot reliably process sentences from an arbitrary domain. Thus, while a parser trained on the Wall Street Journal corpus can fairly accurately parse a new Wall Street Journal article, it may not perform as well on a New Yorker article. To parse sentences from a new domain, one would normally directly induce a new grammar from that domain, in which case the training process would require hand-parsed sentences from the new domain. Because parsing a large corpus by hand is a labor-intensive task, it would be beneficial to minimize the number of labels needed to induce the new grammar.

We propose to adapt a grammar already trained on an old domain to the new domain. Adaptation can exploit the structural similarity between the two domains, so that fewer labeled data might be needed to update the grammar to reflect the structure of the new domain. This paper presents a quantitative study comparing direct induction and adaptation under different training conditions. Our goal is to understand the effect of the amounts and types of labeled data on the training process for both induction strategies. For example, how much training data need to be hand-labeled? Must the parse trees for each sentence be fully specified? Are some linguistic constituents in the parse more informative than others?

To answer these questions, we have performed experiments that compare the parsing qualities of grammars induced under different training conditions using both adaptation and direct induction. We vary the number of labeled brackets and the linguistic classes of the labeled brackets. The study is conducted on both a simple Air Travel Information System (ATIS) corpus (Hemphill et al., 1990) and the more complex Wall Street Journal (WSJ) corpus (Marcus et al., 1993).
Our results show that the training examples do not need to be fully parsed for either strategy, but adaptation produces better grammars than direct induction under the conditions of minimally labeled training data. For instance, the most informative brackets, which label constituents higher up in the parse trees, typically identifying complex noun phrases and sentential clauses, account for only 17% of all constituents in ATIS and 21% in WSJ. Trained on this type of label, the adapted grammars parse better than the directly induced grammars and almost as well as those trained on fully labeled data. Training on ATIS sentences labeled with higher-level constituent brackets, a directly induced grammar parses test sentences with 66% accuracy, whereas an adapted grammar parses with 91% accuracy, which is only 2% lower than the score of a grammar induced from fully labeled training data. Training on WSJ sentences labeled with higher-level constituent brackets, a directly induced grammar parses with 70% accuracy, whereas an adapted grammar parses with 72% accuracy, which is 6% lower than the score of a grammar induced from fully labeled training data.

That the most informative brackets are higher-level constituents and make up only one-fifth of all the labels in the corpus has two implications. First, it shows that there is potential reduction of labor for the human annotators. Although the annotator still must process an entire sentence mentally, the task of identifying higher-level structures such as sentential clauses and complex nouns should be less tedious than to fully specify the complete parse tree for each sentence. Second, one might speculate the possibilities of replacing human supervision altogether with a partial parser that locates constituent chunks within a sentence. However, as our results indicate that the most informative constituents are higher-level phrases, the parser would have to identify sentential clauses and complex noun phrases rather than low-level base noun phrases.

2 Related Work on Grammar Induction

Grammar induction is the process of inferring the structure of a language by learning from example sentences drawn from the language. The degree of difficulty in this task depends on three factors. First, it depends on the amount of supervision provided. Charniak (1996), for instance, has shown that a grammar can be easily constructed when the examples are fully labeled parse trees. On the other hand, if the examples consist of raw sentences with no extra structural information, grammar induction is very difficult, even theoretically impossible (Gold, 1967). One could take a greedy approach such as the well-known Inside-Outside re-estimation algorithm (Baker, 1979), which induces locally optimal grammars by iteratively improving the parameters of the grammar so that the entropy of the training data is minimized. In practice, however, when trained on unmarked data, the algorithm tends to converge on poor grammar models. For even a moderately complex domain such as the ATIS corpus, a grammar trained on data with constituent bracketing information produces much better parses than one trained on completely unmarked raw data (Pereira and Schabes, 1992). Part of our work explores the in-between case, when only some constituent labels are available. Section 3 defines the different types of annotation we examine. Second, as supervision decreases, the learning process relies more on search.
The success of the induction depends on the initial parameters of the grammar because a local search strategy may converge to a local minimum. For finding a good initial parameter set, Lari and Young (1990) suggested first estimating the probabilities with a set of regular grammar rules. Their experiments, however, indicated that the main benefit from this type of pretraining is one of run-time efficiency; the improvement in the quality of the induced grammar was minimal. Briscoe and Waegner (1992) argued that one should first hand-design the grammar to encode some linguistic notions and then use the re-estimation procedure to fine-tune the parameters, substituting the cost of hand-labeled training data with that of hand-coded grammar. Our idea of grammar adaptation can be seen as a form of initialization. It attempts to seed the grammar in a favorable search space by first training it with data from an existing corpus. Section 4 discusses the induction strategies in more detail.

A third factor that affects the learning process is the complexity of the data. In their study of parsing the WSJ, Schabes et al. (1993) have shown that a grammar trained on the Inside-Outside re-estimation algorithm can perform quite well on short simple sentences but falters as the sentence length increases. To take this factor into account, we perform our experiments on both a simple domain (ATIS) and a complex one (WSJ). In Section 5, we describe the experiments and report the results.

Category   Labeled Sentence                                                ATIS  WSJ
HighP      (I want (to take (the flight with at most one stop)))           17%   21%
BaseNP     (I) want to take (the flight) with (at most one stop)           27%   29%
BaseP      (I) want to take (the flight) with (at most one) stop           32%   30%
AllNP      (I) want to take ((the flight) with (at most one stop))         37%   43%
NotBaseP   (I (want (to (take (the flight (with (at most one stop)))))))   68%   70%

Table 1: The second column shows how the example sentence ((I) (want (to (take ((the flight) (with ((at most one) stop))))))) is labeled under each category. The third and fourth columns list the percentage break-down of brackets in each category for ATIS and WSJ respectively.

3 Training Data Annotation

The training sets are annotated in multiple ways, falling into two categories. First, we construct training sets annotated with random subsets of constituents consisting of 0%, 25%, 50%, 75% and 100% of the brackets in the fully annotated corpus. Second, we construct training sets in which only a certain type of constituent is annotated. We study five linguistic categories. Table 1 summarizes the annotation differences between the five classes and lists the percentage of brackets in each class with respect to the total number of constituents for ATIS and WSJ (see footnote 1).

In an AllNP training set, all and only the noun phrases in the sentences are labeled. For the BaseNP class, we label only simple noun phrases that contain no embedded noun phrases. Similarly for a BaseP set, all simple phrases made up of only lexical items are labeled. Although there is a high intersection between the set of BaseP labels and the set of BaseNP labels, the two classes are not identical. A BaseNP may contain a BaseP. For the example in Table 1, the phrase "at most one stop" is a BaseNP that contains a quantifier BaseP "at most one." NotBaseP is the complement of BaseP. The majority of the constituents in a sentence belongs to this category, in which at least one of the constituent's sub-constituents is not a simple lexical item.
Finally, in a HighP set, we label only complex phrases that decompose into sub-phrases that may be either another HighP or a BaseP. That is, a HighP constituent does not directly subsume any lexical word. A typical HighP is a sentential clause or a complex noun phrase. The example sentence in Table 1 contains 3 HighP constituents: a complex noun phrase made up of a BaseNP and a prepositional phrase; a sentential clause with an omitted subject NP; and the full sentence.

4 Induction Strategies

To induce a grammar from the sparsely bracketed training data previously described, we use a variant of the Inside-Outside re-estimation algorithm proposed by Pereira and Schabes (1992). The inferred grammars are represented in the Probabilistic Lexicalized Tree Insertion Grammar (PLTIG) formalism (Schabes and Waters, 1993; Hwa, 1998a), which is lexicalized and context-free equivalent. We favor the PLTIG representation for two reasons. First, it is amenable to the Inside-Outside re-estimation algorithm (the equations calculating the inside and outside probabilities for PLTIGs can be found in Hwa (1998b)). Second, its lexicalized representation makes the training process more efficient than a traditional PCFG while maintaining comparable parsing qualities.

Two training strategies are considered: direct induction, in which a grammar is induced from scratch, learning from only the sparsely labeled training data; and adaptation, a two-stage learning process that first uses direct induction to train the grammar on an existing fully labeled corpus before retraining it on the new corpus. During the retraining phase, the probabilities of the grammars are re-estimated based on the new training data. We expect the adaptive method to induce better grammars than direct induction when the new corpus is only partially annotated because the adapted grammars have collected better statistics from the fully labeled data of another corpus.

5 Experiments and Results

We perform two experiments. The first uses ATIS as the corpus from which the different types of partially labeled training sets are generated. Both induction strategies train from these data, but the adaptive strategy pretrains its grammars with fully labeled data drawn from the WSJ corpus. The trained grammars are scored on their parsing abilities on unseen ATIS test sets. We use the non-crossing bracket measurement as the parsing metric. This experiment will show whether annotations of a particular linguistic category may be more useful for training grammars than others. It will also indicate the comparative merits of the two induction strategies trained on data annotated with these linguistic categories. However, pretraining on the much more complex WSJ corpus may be too much of an advantage for the adaptive strategy. Therefore, we reverse the roles of the corpora in the second experiment. The partially labeled data are from the WSJ corpus, and the adaptive strategy is pretrained on fully labeled ATIS data. In both cases, part-of-speech (POS) tags are used as the lexical items of the sentences. Backing off to POS tags is necessary because the tags provide a considerable intersection in the vocabulary sets of the two corpora.

1 For computing the percentage of brackets, the outermost bracket around the entire sentence and the brackets around singleton phrases (e.g., the pronoun "I" as a BaseNP) are excluded because they do not contribute to the pruning of parses.
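Before turning to the experiments, it may help to restate the constraint through which partial brackets guide the Inside-Outside re-estimator of Pereira and Schabes (1992). The following is a minimal editorial sketch, not the author's code: a candidate span may receive probability mass only if it crosses no annotated bracket, so sparser annotation rules out fewer analyses and provides a weaker training signal.

    def crosses(span, bracket):
        """True if two (start, end) intervals overlap without
        one nesting inside the other."""
        (s1, e1), (s2, e2) = span, bracket
        return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1

    def consistent(span, brackets):
        """A span is permitted by a partially bracketed sentence
        iff it crosses none of the annotated brackets."""
        return not any(crosses(span, b) for b in brackets)

Under full annotation every crossing span is excluded; under a sparse condition such as HighP, only spans crossing the few retained brackets are excluded, which is why the choice of which one-fifth of brackets to keep matters so much.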
5.1 Experiment 1: Learning ATIS

The easier learning task is to induce grammars to parse ATIS sentences. The ATIS corpus consists of 577 short sentences with simple structures, and the vocabulary set is made up of 32 POS tags, a subset of the 47 tags used for the WSJ. Due to the limited size of this corpus, ten sets of randomly partitioned train-test-held-out triples are generated to ensure the statistical significance of our results. We use 80 sentences for testing, 90 sentences for held-out data, and the rest for training. Before proceeding with the main discussion on training from the ATIS, we briefly describe the pretraining stage of the adaptive strategy.

5.1.1 Pretraining with WSJ

The idea behind the adaptive method is simply to make use of any existing labeled data. We hope that pretraining the grammars on these data might place them in a better position to learn from the new, sparsely labeled data. In the pretraining stage for this experiment, a grammar is directly induced from 3600 fully labeled WSJ sentences. Without any further training on ATIS data, this grammar achieves a parsing score of 87.3% on ATIS test sentences. The relatively high parsing score suggests that pretraining with WSJ has successfully placed the grammar in a good position to begin training with the ATIS data.

5.1.2 Partially Supervised Training on ATIS

We now return to the main focus of this experiment: learning from sparsely annotated ATIS training data. To verify whether some constituent classes are more informative than others, we could compare the parsing scores of the grammars trained using different constituent class labels. But this evaluation method does not take into account that the distribution of the constituent classes is not uniform. To normalize for this inequity, we compare the parsing scores to a baseline that characterizes the relationship between the performance of the trained grammar and the number of bracketed constituents in the training data. To generate the baseline, we create training data in which 0%, 25%, 50%, 75%, and 100% of the constituent brackets are randomly chosen to be included. One class of linguistic labels is better than another if its resulting parsing improvement over the baseline is higher than that of the other.

The test results of the grammars induced from these different training data are summarized in Figure 1. Graph (a) plots the outcome of using the direct induction strategy, and graph (b) plots the outcome of the adaptive strategy. In each graph, the baseline of random constituent brackets is shown as a solid line. Scores of grammars trained from constituent type specific data sets are plotted as labeled dots. The dotted horizontal line in graph (b) indicates the ATIS parsing score of the grammar trained on WSJ alone.

Figure 1: Parsing accuracies of (a) directly induced grammars and (b) adapted grammars as a function of the number of brackets present in the training corpus. There are 1595 brackets in the training corpus all together.

Comparing the five constituent types, we see that the HighP class is the most informative for the adaptive strategy, resulting in a grammar that scored better than the baseline. The grammars trained on the AllNP annotation performed as well as the baseline for both strategies. Grammars trained under all the other training conditions scored below the baseline. Our results suggest that while an ideal training condition would include annotations of both higher-level phrases and simple phrases, complex clauses are more informative. This interpretation explains the large gap between the parsing scores of the directly induced grammar and the adapted grammar trained on the same HighP data. The directly induced grammar performed poorly because it has never seen a labeled example of simple phrases. In contrast, the adapted grammar was already exposed to labeled WSJ simple phrases, so that it successfully adapted to the new corpus from annotated examples of higher-level phrases. On the other hand, training the adapted grammar on annotated ATIS simple phrases is not successful even though it has seen examples of WSJ higher-level phrases. This also explains why grammars trained on the conglomerate class NotBaseP performed on the same level as those trained on the AllNP class. Although the NotBaseP set contains the most brackets, most of the brackets are irrelevant to the training process, as they are neither higher-level phrases nor simple phrases.

Our experiment also indicates that induction strategies exhibit different learning characteristics under partially supervised training conditions. A side by side comparison of Figure 1 (a) and (b) shows that the adapted grammars perform significantly better than the directly induced grammars as the level of supervision decreases. This supports our hypothesis that pretraining on a different corpus can place the grammar in a good initial search space for learning the new domain. Unfortunately, a good initial state does not obviate the need for supervised training. We see from Figure 1(b) that retraining with unlabeled ATIS sentences actually lowers the grammar's parsing accuracy.

5.2 Experiment 2: Learning WSJ

In the previous section, we have seen that annotations of complex clauses are the most helpful for inducing ATIS-style grammars. One of the goals of this experiment is to verify whether the result also holds for the WSJ corpus, which is structurally very different from ATIS. The WSJ corpus uses 47 POS tags, and its sentences are longer and have more embedded clauses. As in the previous experiment, we construct training sets with annotations of different constituent types and of different numbers of randomly chosen labels. Each training set consists of 3600 sentences, and 1780 sentences are used as held-out data. The trained grammars are tested on a set of 2245 sentences. Figure 2 (a) and (b) summarize the outcomes
of this experiment. Many results of this section are similar to the ATIS experiment. Higher-level phrases still provide the most information; the grammars trained on the HighP labels are the only ones that scored as well as the baseline. Labels of simple phrases still seem the least informative; scores of grammars trained on BaseP and BaseNP remained far below the baseline.

Figure 2: Parsing accuracies of (a) directly induced grammars and (b) adapted grammars as a function of the number of brackets present in the training corpus. There is a total of 46463 brackets in the training corpus.

Different from the previous experiment, however, the AllNP training sets do not seem to provide as much information for this learning task. This may be due to the increase in the sentence complexity of the WSJ, which further de-emphasized the role of the simple phrases. Thus, grammars trained on AllNP labels have comparable parsing scores to those trained on HighP labels. Also, we do not see as big a gap between the scores of the two induction strategies in the HighP case because the adapted grammar's advantage of having seen annotated ATIS base nouns is reduced. Nonetheless, the adapted grammars still perform 2% better than the directly induced grammars, and this improvement is statistically significant (see footnote 2).

Furthermore, grammars trained on NotBaseP do not fall as far below the baseline and have higher parsing scores than those trained on HighP and AllNP. This suggests that for more complex domains, other linguistic constituents such as verb phrases (see footnote 3) become more informative.

A second goal of this experiment is to test the adaptive strategy under more stringent conditions. In the previous experiment, a WSJ-style grammar was retrained for the simpler ATIS corpus. Now, we reverse the roles of the corpora to see whether the adaptive strategy still offers any advantage over direct induction. In the adaptive method's pretraining stage, a grammar is induced from 400 fully labeled ATIS sentences. Testing this ATIS-style grammar on the WSJ test set without further training renders a parsing accuracy of 40%. The low score suggests that fully labeled ATIS data does not teach the grammar as much about the structure of WSJ. Nonetheless, the adaptive strategy proves to be beneficial for learning WSJ from sparsely labeled training sets. The adapted grammars out-perform the directly induced grammars when more than 50% of the brackets are missing from the training data. The most significant difference is when the training data contains no label information at all. The adapted grammar parses with 60.1% accuracy whereas the directly induced grammar parses with 49.8% accuracy.

2 A pair-wise t-test comparing the parsing scores of the ten test sets for the two strategies shows 99% confidence in the difference.

3 We have not experimented with training sets containing only verb phrase labels (i.e., setting a pair of brackets around the head verb and its modifiers). They are a subset of the NotBaseP class.
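The constituent classes that drive both experiments are simple structural properties of a tree, and can be restated procedurally. The following is an editorial sketch under the same assumptions as the earlier sketch (a constituent is a list whose head is its label; leaves are POS-tag strings), not the author's tooling:

    def category(tree):
        """Classify one constituent by the definitions of Section 3:
        BaseP if every child is a lexical item; HighP if no child is
        a lexical item (all of its sub-phrases are phrases themselves);
        otherwise merely a member of NotBaseP."""
        lexical = [isinstance(kid, str) for kid in tree[1:]]
        if all(lexical):
            return 'BaseP'
        if not any(lexical):
            return 'HighP'
        return 'NotBaseP'

Since NotBaseP is defined as the complement of BaseP, a HighP constituent is also in NotBaseP; the function returns the most specific of the three labels.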
78 6 Conclusion and Future Work In this study, we have shown that the structure of a grammar can be reliably learned without having fully specified constituent information in the training sentences and that the most in- formative constituents of a sentence are higher- level phrases, which make up only a small per- centage of the total number of constituents. Moreover, we observe that grammar adaptation works particularly well with this type of sparse but informative training data. An adapted grammar consistently outperforms a directly in- duced grammar even when adapting from a sim- pler corpus to a more complex one. These results point us to three future di- rections. First, that the labels for some con- stituents are more informative than others im- plies that sentences containing more of these in- formative constituents make better training ex- amples. It may be beneficial to estimate the informational content of potential training (un- marked) sentences. The training set should only include sentences that are predicted to have high information values. Filtering out unhelpful sentences from the training set reduces unnec- essary work for the human annotators. Second, although our experiments show that a sparsely labeled training set is more of an obstacle for the direct induction approach than for the grammar adaptation approach, the direct induction strat- egy might also benefit from a two stage learning process similar to that used for grammar adap- tation. Instead of training on a different corpus in each stage, the grammar can be trained on a small but fully labeled portion of the corpus in its first stage and the sparsely labeled por- tion in the second stage. Finally, higher-level constituents have proved to be the most infor- mative linguistic units. To relieve humans from labeling any training data, we should consider using partial parsers that can automatically de- tect complex nouns and sentential clauses. References J.K. Baker. 1979. Trainable grammars for speech recognition. In Proceedings of the Spring Conference of the Acoustical Society of America, pages 547-550, Boston, MA, June. E.J. Briscoe and N. Waegner. 1992. Robust stochastic parsing using the inside-outside al- gorithm. In Proceedings of the AAAI Work- shop on Probabilistically-Based NLP Tech- niques, pages 39-53. E. Charniak. 1996. Tree-bank grammars. In Proceedings of the Thirteenth National Con- ference on Artificial Intelligence, pages 1031- 1036. E. Mark Gold. 1967. Language identification in the limit. Information Control, 10(5):447- 474. C.T. Hemphill, J.J. Godfrey, and G.R. Dod- dington. 1990. The ATIS spoken language systems pilot corpus. In DARPA Speech and Natural Language Workshop, Hidden Valley, Pennsylvania, June. Morgan Kaufmann. R. Hwa. 1998a. An empirical evaluation of probabilistic lexicalized tree insertion gram- mars. In Proceedings of COLING-A CL, vol- ume 1, pages 557-563. R. Hwa. 1998b. An empirical evaluation of probabilistic lexicalized tree insertion gram- mars. Technical Report 06-98, Harvard Uni- versity. Available as cmp-lg/9808001. K. Lari and S.J. Young. 1990. The estima- tion of stochastic context-free grammars us- ing the inside-outside algorithm. Computer Speech and Language, 4:35-56. M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annontated corpus of english: the penn treebank. Computational Linguistics, 19(2):313-330. F. Pereira and Y. Schabes. 1992. Inside- Outside reestimation from partially bracketed corpora. 
In Proceedings of the 30th Annual Meeting of the ACL, pages 128-135, Newark, Delaware. Y. Schabes and R. Waters. 1993. Stochastic lexicalized context-free grammar. In Proceedings of the Third International Workshop on Parsing Technologies, pages 257-266. Y. Schabes, M. Roth, and R. Osborne. 1993. Parsing the Wall Street Journal with the Inside-Outside algorithm. In Proceedings of the Sixth Conference of the European Chapter of the ACL, pages 341-347.
A Meta-Level Grammar: Redefining Synchronous TAG for Translation and Paraphrase

Mark Dras
Microsoft Research Institute
Department of Computer Science
Macquarie University, Australia
[email protected]

Abstract

In applications such as translation and paraphrase, operations are carried out on grammars at the meta level. This paper shows how a meta-grammar, defining structure at the meta level, is useful in the case of such operations; in particular, how it solves problems in the current definition of Synchronous TAG (Shieber, 1994) caused by ignoring such structure in mapping between grammars, for applications such as translation. Moreover, essential properties of the formalism remain unchanged.

1 Introduction

A grammar is, among other things, a device by which it is possible to express structure in a set of entities; a grammar formalism, the constraints on how a grammar is allowed to express this. Once a grammar has been used to express structural relationships, in many applications there are operations which act at a 'meta level' on the structures expressed by the grammar: for example, lifting rules on a dependency grammar to achieve pseudo-projectivity (Kahane et al, 1998), and mapping between synchronised Tree Adjoining Grammars (TAGs) (Shieber and Schabes, 1990; Shieber 1994) as in machine translation or syntax-to-semantics transfer. At this meta level, however, the operations do not themselves exploit any structure. This paper explores how, in the TAG case, using a meta-level grammar to define meta-level structure resolves the flaws in the ability of Synchronous TAG (S-TAG) to be a representation for applications such as machine translation or paraphrase.

This paper is set out as follows. It describes the expressivity problems of S-TAG as noted in Shieber (1994), and shows how these occur also in syntactic paraphrasing. It then demonstrates, illustrated by the relative structural complexity which occurs at the meta level in syntactic paraphrase, how a meta-level grammar resolves the representational problems; and it further shows that this has no effect on the generative capacity of S-TAG.

2 S-TAG and Machine Translation

Synchronous TAG, the mapping between two Tree Adjoining Grammars, was first proposed by Shieber and Schabes (1990). An application proposed concurrently with the definition of S-TAG was that of machine translation, mapping between English and French (Abeillé et al, 1990); work continues in the area, for example using S-TAG for English-Korean machine translation in a practical system (Palmer et al, 1998).

In mapping between, say, English and French, there is a lexicalised TAG for each language (see XTAG, 1995, for an overview of such a grammar). Under the definition of TAG, a grammar contains elementary trees, rather than flat rules, which combine together via the operations of substitution and adjunction (composition operations) to form composite structures--derived trees--which will ultimately provide structural representations for an input string if this string is grammatical. An overview of TAGs is given in Joshi and Schabes (1996).

The characteristics of TAGs make them better suited to describing natural language than Context Free Grammars (CFGs): CFGs are not adequate to describe the entire syntax of natural language (Shieber, 1985), while TAGs are able to provide structures for the constructions problematic for CFGs, and without a much greater generative capacity. Two particular characteristics of TAG that make it well suited to describing natural language are the extended domain of locality (EDL) and factoring recursion from the domain of dependencies (FRD).

Figure 1: Elementary TAG trees (α1: defeated; α2: Garrad; α3: Sumerians; α4: the; β5: cunningly)

In TAG, for instance, information concerning dependencies is given in one tree (EDL): for example, in Figure 1,(1) the information that the verb defeated has subject and object arguments is contained in the tree α1. In a CFG, with rules of the form S -> NP VP and VP -> V NP, it is not possible to have information about both arguments in the same rule unless the VP node is lost. TAG keeps dependencies together, or local, no matter how far apart the corresponding lexical items are. FRD means that recursive information--for example, a sequence of adjectives modifying the object noun of defeated--are factored out into separate trees, leaving dependencies together.

A consequence of the TAG definition is that, unlike CFG, a TAG derived tree is not a record of its own derivation. In CFG, each tree given as a structural description to a string enables the rules applied to be recovered. In a TAG, this is not possible, so each derived tree has an associated derivation tree. If the trees in Figure 1 were composed to give a structural description for Garrad cunningly defeated the Sumerians, the derived tree and its corresponding derivation tree would be as in Figure 2.(2)

Figure 2: Derived and derivation trees, respectively, for Figure 1 (derivation: α1 with daughters α2 (1), β5 (2), α3 (2.2); α4 (1) below α3)

Weir (1988) terms the derived tree, and its component elementary trees, OBJECT-LEVEL TREES; the derivation tree is termed a META-LEVEL TREE, since it describes the object-level trees. The derivation trees are context free (Weir, 1988), that is, they can be expressed by a CFG; Weir showed that applying a TAG yield function to a context free derivation tree (that is, reading the labels off the tree, and substituting or adjoining the corresponding object-level trees as appropriate) will uniquely specify a TAG tree. Schabes and Shieber (1994) characterise this as a function D from derivation trees to derived trees.

The idea behind S-TAG is to take two TAGs and link them in an appropriate way so that when substitution or adjunction occurs in a tree in one grammar, then a corresponding composition operation occurs in a tree in the other grammar. Because of the way TAG's EDL captures dependencies, it is not problematic to have translations more complex than word-for-word mappings (Abeillé et al, 1990). For example, from the Abeillé et al paper, handling argument swap, as in (1), is straightforward. These would be represented by tree pairs as in Figure 3.

(1) The figures use standard TAG notation: $ for nodes requiring substitution, * for foot nodes of auxiliary trees.
(2) In derivation trees, addresses are given using the Gorn addressing scheme, although these are omitted in this paper where the composition operations are obvious.
In these tree pairs, a diacritic ([-/7) represents a link between the trees, such that if a substi- tution or adjunction occurs at one end of the link, a corresponding operation must occur at the other end, which is situated in the other tree of the same tree pair. Thus if the tree for John in a7 is substituted at E] in the left tree of a6, the tree for Jean must be substituted at [-~ in the right tree. The diacritic E] allows a sentential modifier for both trees (e.g. unfortu- nately / malheureusement). The original definition of S-TAG (Shieber and Schabes, 1990), however, had a greater genera- tive capacity than that of its component TAG grammars: even though each component gram- mar could only generate Tree Adjoining Lan- guages (TALs), an S-TAG pairing two TAG grammars could generate non-TALs. Hence, a redefinition was proposed (Shieber, 1994). Un- der this new definition, the mapping between grammars occurs at the meta level: there is an isomorphism between derivation trees, preserv- ing structure at the meta level, which estab- lishes the translation. For example, the deriva- • tion trees for (1) using the elementary trees of Figure 3 is given in Figure 4; there is a clear isomorphism, with a bijection between nodes, and parent-child relationships preserved in the mapping. In translation, it is not always possible to have a bijection between nodes. Take, for example, (2). a[misses] a[man.que ~] s a[John] a[Mary] a[Jean] a[Marie] / Figure 4: Derivation tree pair for Fig 3 (2) a. Hopefully John misses Mary. b. On esp~re que Marie manque Jean. In English, hopefully would be represented by a single tree; in French, on esp~re que typically by two. Shieber (1994) proposed the idea of bounded subderivation to deal with such aber- rant cases--treating the two nodes in the deriva- tion tree representing on esp~re que as singular, and basing the isomorphism on this. This idea of bounded subderivation solves several difficul- ties with the isomorphism requirement, but not all. An example by Shieber demonstrates that translation involving clitics causes problems un- der this definition, as in (3). The partial deriva- tion trees containing the clitic lui and its English parallel are as in Figure 5. (3) a. The doctor treats his teeth. b. Le docteur lui soigne les dents. A potentially unbounded amount of material in- tervening in the branches of the righthand tree means that an isomorphism between the trees cannot be established under Shieber's specifi- cation even with the modification of bounded subderivations. Shieber suggested that the iso- morphism requirement may be overly stringent; 82 o~[treats] a[s~gne] c~[teeth I a[lui] a[dents] a[his] Figure 5: Clitic derivation trees but intuitively, it seems reasonable that what occurs in one grammar should be mirrored in the other in some way, and this reflected in the derivation history. Section 3 looks at representing syntactic para- phrase in S-TAG, where similar problems are encountered; in doing this, it can be seen more clearly than in translation that the difficulty is caused not by the isomorphism requirement it- self but by the fact that the isomorphism does not exploit any of the structure inherent in the derivation trees. 3 S-TAG and Paraphrase Syntactic paraphrase can also be described with S-TAG (Dras, 1997; Dras, forthcoming). The manner of representing paraphrase in S-TAG is similar to the translation representation de- scribed in Section 2. 
The reason for illustrating both is that syntactic paraphrase, because of its structural complexity, is able to illuminate the nature of the problem with S-TAG. In a specific parallel, a difficulty like that of the clitics occurs here also, for example in paraphrases such as (4).

(4) a. The jacket which collected the dust was tweed.
b. The jacket collected the dust. It was tweed.

Tree pairs which could represent the elements in the mapping between (4a) and (4b) are given in Figure 6. It is clearly the case that the trees in the tree pair α9 are not elementary trees, in the same way that on espère que is not represented by a single elementary tree: in both cases, such single elementary trees would violate the Condition on Elementary Tree Minimality (Frank, 1992). The tree pair α9 is the one that captures the syntactic rearrangement in this paraphrase; such a tree pair will be termed the STRUCTURAL MAPPING PAIR (SMP). Taking as a basic set of trees the XTAG standard grammar of English (XTAG, 1995), the derivation tree pair for (4) would be as in Figure 7.(3) Apart from α9, each tree in Figure 6 corresponds to an elementary object-level tree, as indicated by its label; the remaining labels, indicated in bold in the meta-level derivation tree in Figure 7, correspond to the elementary object-level trees forming α9, in much the same way that on espère que is represented by a subderivation comprising an on tree substituted into an espère que tree.

Note that the nodes corresponding to the left tree of the SMP form two discontinuous groups, but these discontinuous groups are clearly related. Dras (forthcoming) describes the conditions under which these discontinuous groupings are acceptable in paraphrase; these discontinuous groupings are treated as a single block with SLOTS connecting the groupings, whose fillers must be of particular types. Fundamentally, however, the structure is the same as for clitics: in one derivation tree the grouped elements are in one branch of the tree, and in the other they are in two separate branches with the possibility of an unbounded amount of intervening material, as described below in Section 4.

4 Meta-Level Structure

Example (5) illustrates why the paraphrase in (4) has the same difficulty as the clitic example in (3) when represented in S-TAG: because unbounded intervening material can occur when promoting arbitrarily deeply embedded relative clauses to sentence level, as indicated by Figure 8, an isomorphism is not possible between derivation trees representing paraphrases such as (4) and (5). Again, the component trees of the SMP are in bold in Figure 8.

(5) a. The jacket which collected the dust which covered the floor was tweed.
b. The jacket which collected the dust was tweed. The dust covered the floor.(4)

(4) The referring expression that is the subject of this second sentence has changed from it in (4) to the dust so the antecedent is clear. Ensuring it is appropriately coreferent, by using two occurrences of the same diacritic in the same tree, necessitates a change in the properties of the formalism unrelated to the one discussed in this paper; see Dras (forthcoming). Assume, for the purpose of this example, that the referring expression is fixed and given, as is the case with it, rather than determined by coindexed diacritics.
Figure 6: S-TAG for (4) (tree pairs for the relative-clause and two-sentence versions, with α10: jacket, α11: the, α12: dust)

Figure 7: Derivation tree pair for example (4)

The paraphrase in (4) and in Figures 6 and 7, and other paraphrase examples, strongly suggest that these more complex mappings are not an aberration that can be dealt with by patching measures such as bounded subderivation. It is clear that the meta level is fundamentally not just for establishing a one-to-one onto mapping between nodes; rather, it is also about defining structures representing, for example, the SMP at this meta level: in an isomorphism between trees in Figure 8, it is necessary to regard the SMP components of each tree as a unitary substructure and map them to each other. The discontinuous groupings should form these substructures regardless of intervening material, and this is suggestive of TAG's EDL.

In the TAG definition, the derivation trees are context free (Weir, 1988), and can be expressed by a CFG. The isomorphism in the S-TAG definition of Shieber (1994) reflects this, by effectively adopting the single-level domain of locality (extended slightly in cases of bounded subderivation, but still effectively a single level), in the way that context free trees are fundamentally made from single level components and grown by concatenation of these single levels. This is what causes the isomorphism requirement to fail, the inability to express substructures at the meta level in order to map between them, rather than just mapping between (effectively) single nodes.
A procedure for automatically constructing a TAG meta-grammar is as follows in Construc- tion 1. The basic idea is that where the node bijection is still appropriate, the grammar re- tains its context free nature (by using single- level TAG trees composed by substitution, mim- icking CFG tree concatenation), but where EDL is required, multi-level TAG initial trees are defined, with TAG auxiliary trees for describ- ing the intervening material. These meta-level trees are then mapped appropriately; this cor- responds to a bijection of nodes at the meta- meta level. For (5), the meta-level grammar for the left projection then looks as in Figure 9, and for the right projection as in Figure 10. • Figure 11 contains the meta-meta-level trees, the tree pair that is the derivation of the meta level, where the mapping is a bijection between nodes. Adding unbounded material would then just be reflected in the meta-meta-level as a list of/3 nodes depending from the j315/j31s nodes in these trees. The question may be asked, Why isn't it the case that the same effect will occur at the meta- meta level that required the meta-grammar in the first place, leading perhaps to an infinite (and useless) sequence? The intuition is that it is the meta-level, rather than anywhere 'higher', which is fundamentally the place to specify structure: the object level specifies the trees, and the meta level specifies the grouping or structure of these trees. Then the mapping takes place on these structures, rather than the object-level trees; hence the need for a grammar at the meta-level but not beyond. Construction 1 To build a TAG metagram- mar: 1. An initial tree in the metagrammar is formed for each part of the derivation tree corresponding to the substructure repre- senting an SMP, including the slots so that a contiguous tree is formed. Any node that links these parts of the derivation tree to other subtrees in the derivation tree is also included, and becomes a substitution node in the metagrammar tree. 2. Auxiliary trees are formed corresponding to the parts of the derivation trees that are slot fillers along with the nodes in the discon- tinuous regions adjacent to the slots; one contiguous auxiliary tree is formed for each bounded sequence of slot fillers within each substructure. These trees also satisfy cer- tain minimality conditions. 3. The remaining metagrammar trees then come from splitting the derivation tree into single-level trees, with the nodes on 85 Ot13: anx0Axl ~NXdxN ~Vvx aDXD ~N0nx0Vnxl ~COMPs aNXdxN$ a14: c~NXdxN I aDXD J315: aNXdxN aDXD ~N0nx0Vnxl ~COMPs aNXdxN, Figure 9: Meta-grammar for (5a) these single-level trees in the metagrammar marked for substitution if the corresponding nodes in the derivation tree have subtrees. The minimality conditions in Step 2 of Con- struction 1 are in keeping with the idea of min- imality elsewhere in TAG (for example, Frank, 1992). The key condition is that meta-level auxiliary trees are rooted in c~-labelled nodes, and have only ~-labelled nodes along the spine. The intuition here is that slots (the nodes which meta-level auxiliary trees adjoin into) must be c~-labelled: fl-labelled trees would not need slots, as the substructure could instead be con- tinuous and the j3-1abelled trees would just ad- join in. So the meta-level auxiliary trees are rooted in c~-labelled trees; but they have only ~- labelled trees in the spine, as they aim to repre- sent the minimal amount of recursive material. 
Notwithstanding these conditions, the construc- tion is quite straightforward. 5 Generative Capacity Weir (1988) showed that there is an infinite pro- gression of TAG-related formalisms, in genera- tive capacity between CFGs and indexed gram- mars. A formalism ~-i in the progression is de- fined by applying the TAG yield function to a derivation tree defined by a grammar formalism ~16; cmx0Axl ~NXdxN ~Vvx /~sPUs I I c~DXD aNXdxN c~NXdxN c~NXdxN$ cqT: aNXdxN I aDXD aNXdxN c~DXD ~N0nx0Vnxl ~COMPs c~NXdxN, Figure 10: Meta-grammar for (5b) 0t14 ~15 a17 ~18/ Figure 11: Derivation tree pair for Fig 3 5~i_1; the generative capacity of ~i is a superset of ~'i-1- Thus using a TAG meta-grammar, as described in Section 4, would suggest that the generative capacity of the object-level formal- ism would necessarily have been increased over that of TAG. However, there is a regular form for TAGs (Rogers, 1994), such that the trees of TAGs in this regular form are local sets; that is, they are context free. The meta-level TAG built by Construction 1 with the appropriate conditions on slots is in this regular form. A proof of this is in Dras (forthcoming); a sketch is as follows. If adjunction may not occur along the spine of another auxiliary tree, the grammar is in regu- lar form. This kind of adjunction does not oc- cur under Construction 1 because all meta-level auxiliary trees are rooted in c~-labelled trees (object-level auxiliary trees), while their spines consist only of p-labelled trees (object-level ini- tial trees). Since the meta-level grammar is context free, despite being expressed using a TAG grammar, this means that the object-level grammar is still 8{} a TAG. 6 Conclusion In principle, a meta-grammar is desirable, as it specifies substructures at a meta level, which is necessary when operations are carried out that are applied at this meta level. In a practical ap- plication, it solves problems in one such formal- ism, S-TAG, when used for paraphrase or trans- lation, as outlined by Shieber (1994). Moreover, the formalism remains fundamentally the same, in specifying mappings between two grammars of restricted generative capacity; and in cases where this is important, it is possible to avoid changing the generative capacity of the S-TAG formalism in applying this meta-grammar. Currently this revised version of the S-TAG for- malism is used as the low-level representation in the Reluctant Paraphrasing framework of Dras (1998; forthcoming). It is likely to also be use- ful in representations for machine translation between languages that are structurally more dissimilar than English and French, and hence more in need of structural definition of object- level constructs; exploring this is future work. References Abeill@, Anne, Yves Schabes and Aravind Joshi. 1990. Using Lexicalized TAGs for Machine Trans- lation. Proceedings of the 13th International Con- ference on Computational Linguistics, 1-6. Dras, Mark. 1997. Representing Paraphrases Using S-TAGs. Proceedings of the 35th Meeting of the As- sociation for Computational Linguistics, 516-518. Dras, Mark. 1998. Search in Constraint-Based Paraphrasing. Natural Language Processing and In- dustrial Applications (NLPq-IA98), 213-219. Dras, Mark. forthcoming. Tree Adjoining Grammar and the Reluctant Paraphrasing of Text. PhD thesis, Macquarie University, Australia. Joshi, Aravind and Yves Schabes. 1996. Tree- Adjoining Grammars. In Grzegorz Rozenberg and • Arto Salomaa (eds.), Handbook of Formal Lan- guages, Vol 3, 69-123. Springer-Verlag. 
New York, NY. Kahane, Sylvain, Alexis Nasr and Owen Ram- bow. 1998. Pseudo-Projectivity: A Polynomi- ally Parsable Non-Projective Dependency Gram- mar. Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, 646-652. Palmer, Martha, Owen Rainbow and Alexis Nasr. 1998. Rapid Prototyping of Domain-Specific Ma- chine Translation Systems. AMTA-98, Langhorne, PA. Rogers, James. 1994. Capturing CFLs with Tree Adjoining Grammars. Proceedings of the 32nd Meet- ing of the Association for Computational Linguis- tics, 155-162. Schabes, Yves and Stuart Shieber. 1994. An Al- ternative Conception of Tree-Adjoining Derivation. Computational Linguistics, 20(1): 91-124. Shieber, Stuart. 1985. Evidence against the context- freeness of natural language. Linguistics and Philos- ophy, 8, 333-343. Shieber, Stuart and Yves Schabes. 1990. Syn- chronous Tree-Adjoining Grammars. Proceedings of the 13th International Conference on Computational Linguistics, 253-258. Shieber, Stuart. 1994. Restricting the Weak- Generative Capacity of Synchronous Tree-Adjoining Grammars. Computational Intelligence, 10(4), 371- 386. Weir, David. 1988. Characterizing Mildly Context- Sensitive Grammar Formalisms. PhD thesis, Uni- versity of Pennsylvania. XTAG. 1995. A Lexicalized Tree Adjoining Gram- mar for English. Technical Report IRCS95-03, Uni- versity of Pennsylvania. 87
Preserving Semantic Dependencies in Synchronous Tree Adjoining Grammar*

William Schuler
University of Pennsylvania
200 South 33rd Street
Philadelphia, PA 19104 USA
[email protected]

Abstract

Rambow, Weir and Vijay-Shanker (Rambow et al., 1995) point out the differences between TAG derivation structures and semantic or predicate-argument dependencies, and Joshi and Vijay-Shanker (Joshi and Vijay-Shanker, 1999) describe a monotonic compositional semantics based on attachment order that represents the desired dependencies of a derivation without underspecifying predicate-argument relationships at any stage. In this paper, we apply the Joshi and Vijay-Shanker conception of compositional semantics to the problem of preserving semantic dependencies in Synchronous TAG translation (Shieber and Schabes, 1990; Abeillé et al., 1990). In particular, we describe an algorithm to obtain the semantic dependencies on a TAG parse forest and construct a target derivation forest with isomorphic or locally non-isomorphic dependencies in O(n^7) time.

1 Introduction

The primary goal of this paper is to solve the problem of preserving semantic dependencies in Isomorphic Synchronous Tree Adjoining Grammar (ISTAG) (Shieber, 1994; Shieber and Schabes, 1990), a variant of Tree Adjoining Grammar (Joshi, 1985) in which source and target elementary trees are assembled into isomorphic derivations. The problem, first described in Rambow, Weir and Vijay-Shanker (Rambow et al., 1995), stems from the fact that the TAG derivation structure - even using a flat adjunction of modifiers (Schabes and Shieber, 1994) - deviates from the appropriate dependency structure in certain cases. This can result in translation errors. For example, if we parse sentence (1),

(1) X is supposed to be able to fly.

using the trees in Figure 1, we get the following derivation:(1)

α:fly
  β1:be-able-to (VP)
    β2:is-supposed-to (VP)

with the auxiliary is-supposed-to adjoining at the VP to predicate over be-able-to and the auxiliary be-able-to adjoining at the VP to predicate over fly. If we then try to assemble an isomorphic tree in a language such as Portuguese (which makes less use of raising verbs) using the ISTAG transfer rules in Figure 2, we will be forced into an ill-formed derivation:

α:voar
  β1:é-capaz-de (VP)
    β2:é-pressuposto-que (S?)

because the raising construction is-supposed-to translates to a bridge construction é-pressuposto-que and cannot adjoin anywhere in the tree for é-capaz-de (the translation of be-able-to) because there is no S-labeled adjunction site. The correct target derivation:

α:voar
  β1:é-capaz-de (VP)
  β2:é-pressuposto-que (S)

Figure 1: Sample elementary trees for "supposed to be able to fly"

which yields the translation in sentence (2),

(2) É pressuposto que X é capaz de voar.

is not isomorphic to the source. Worse, this non-isomorphism is unbounded, because the bridge verb pressuposto may have to migrate across any number of intervening raising verbs to find an ancestor that contains an appropriate adjunction site:

α:fly                          α:voar
  β1:able (VP)                   β1:capaz (VP)
    ...                            ...
      βn-1:going (VP)                βn-1:vai (VP)
        βn:supp. (VP)            βn:press. (S)

This sort of non-local non-isomorphic transfer cannot be handled in a synchronous TAG that has an isomorphism restriction on derivation trees. On the other hand, we do not wish to return to the original non-local formulation of synchronous TAG (Shieber and Schabes, 1990) because the non-local inheritance of links on the derived tree is difficult to implement, and because the non-local formulation can recognize languages beyond the generative power of TAG. Rambow, Weir and Vijay-Shanker themselves introduce D-Tree Grammar (Rambow et al., 1995) and Candito and Kahane introduce the DTG variant Graph Adjunction Grammar (Candito and Kahane, 1998b) in order to solve this problem using a derivation process that mirrors composition more directly, but both involve potentially significantly greater recognition complexity than TAG.

2 Overview

Our solution is to retain ISTAG, but move the isomorphism restriction from the derivation structure to the predicate-argument attachment structure described in (Joshi and Vijay-Shanker, 1999).

This structure represents the composition of semantic predicates for lexicalized elementary trees, each of which contains a 'predicate' variable associated with the situation or entity that the predicate introduces, and a set of 'argument' variables associated with the foot node and substitution sites in the original elementary tree. The predicates are composed by identifying the predicate variable in one predicate with an argument variable in another, so that the two variables refer to the same situation or entity.

Composition proceeds from the bottom up on the derivation tree, with adjuncts traversed in order from the lowest to the highest adjunction site in each elementary tree, in much the same way that a parser produces a derivation. Whenever an initial tree is substituted, its predicate variable is identified in the composed structure with an argument variable of the tree it substitutes into. Whenever an auxiliary tree is adjoined, the predicate variable of the tree it adjoins into is identified in the composed structure with one of its own argument variables. In cases of adjunction, an auxiliary tree's semantics can also specify which variable will become the predicate variable of the composed structure for use in subsequent adjunctions at higher adjunction sites: a modifier auxiliary will return the host tree's original predicate variable, and a predicative auxiliary will return its own predicate variable.(2) Since the traversal must proceed from the bottom up, the attachment of predicates to arguments is neither destructive nor underspecified at any stage in the interpretation.

Figure 2: Synchronous tree pairs for "supposed to be able to fly"

For example, assume the initial tree α:fly has a predicate variable s1, representing the situation of something flying, and an argument variable x1, representing the thing that is flying; and assume the predicative auxiliary tree β1:be-able-to has a predicate variable s2, representing the situation of something being possible, and an argument variable s3, representing the thing that is possible.

* The author would like to thank Karin Kipper, Aravind Joshi, Martha Palmer, Norm Badler, and the anonymous reviewers for their valuable comments. This work was partially supported by NSF Grant SBR-8920230 and ARO Grant DAAH0404-94-GE-0426.
(1) The subject is omitted to simplify the diagram.
(2) See (Schabes and Shieber, 1994) for definitions of modifier and predicative auxiliaries.
If fll is now adjoined into a, the composed structure would have sl identified with s3 (since the situation of flying is the thing that is possible), and s2 as an over- all predicate variable, so if another tree later adjoins into this composed structure rooted on a, it will predicate over s2 (the situation that flying is possible) rather than over a's original predicate variable sl (the situation of flying by itself). Note that Joshi and Vijay-Shanker do not require the predicate and modifier distinc- tions, because they can explicitly specify the fates of any number of predicate variables in a tree's semantic representation. For simplicity, we will limit our discussion to only the two pos- sibilities of predicative and modifier auxiliaries, using one predicate variable per tree. If we represent each such predicate-argument attachment as an arc in a directed graph, we can view the predicate-argument attachment struc- ture of a derivation as a dependency graph, in much the same way as Candito and Kahane interpret the original derivation trees (Candito and Kahane, 1998a). More importantly, we can see that this definition predicts the predicate- argument dependencies for sentences (1) and (2) to be isomorphic: ¢0:supposed-to ¢0:~-pressuposto-que i i ¢1 :be-able-to ¢1 :&capaz-de ¢2:flY ¢2:voar even though their derivation trees are not. This is because the predicative auxiliary for &capaz-de returns its predicate variable to the host tree for subsequent adjunctions, so the aux- iliary tree for g-pressuposto-que can attach it as one of its arguments, just as if it had adjoined directly to the auxiliary, as supposed-to does in English. It is also important to note that Joshi and Vijay-Shanker's definition of TAG composi- tional semantics differs from that of Shieber 9{) and Schabes (Shieber and Schabes, 1990) using Synchronous TAG, in that the former preserves the scope ordering of predicative adjunctions, which may be permuted in the latter, altering the meaning of the sentence. 3 It is precisely this scope-preserving property we hope to ex- ploit in our formulation of a dependency-based isomorphic synchronous TAG in the next two sections. However, as Joshi and Vijay-Shanker suggest, the proper treatment of synchronous translation to logical form may require a multi- component Synchronous TAG analysis in order to handle quantifiers, which is beyond the scope of this paper. For this reason, we will focus on examples in machine translation. 3 Obtaining Source Dependencies If we assume that this attachment structure captures a sentence's semantic dependencies, then in order to preserve semantic dependencies in synchronous TAG translation, we will need to obtain this structure from a source derivation and then construct a target derivation with an isomorphic structure. The first algorithm we present obtains se- mantic dependencies for derivations by keep- ing track of an additional field in each chart item during parsing, corresponding to the pred- icate variable from Section 2. Other than the additional field, the algorithm remains essen- tially the same as the parsing algorithm de- scribed in (Schabes and Shieber, 1994), so it can be applied as a transducer during recogni- tion, or as a post-process on a derivation forest (Vijay-Shanker and Weir, 1993). Once the de- sired dependencies are obtained, the forest may be filtered to select a single most-preferred tree using statistics or rule-based selectional restric- tions on those dependencies. 
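To make the composition scheme just described concrete before turning to the algorithm, here is a minimal sketch in Python. It is our illustration, not the paper's implementation: the dict layout, the kind labels, and the compose helper are invented, and lexical anchors stand in for predicate variables. It shows how the predicative-auxiliary rule, under which the adjoined tree's variable becomes the new predicate variable, yields isomorphic dependency chains for both the stacked English derivation and the distributed Portuguese one.

```python
# Toy sketch (ours, not the paper's code) of bottom-up predicate-argument
# composition, after Joshi and Vijay-Shanker (1999). Derivation trees are
# plain dicts; children are listed from the lowest to the highest
# attachment site, matching the traversal order described above.

def compose(tree, deps):
    """Compose `tree` bottom-up, appending (head, slot, dependent) arcs
    to `deps`; returns the predicate variable of the composed structure."""
    pred_var = tree["pred"]
    for child, site in tree.get("children", []):
        child_var = compose(child, deps)
        if child["kind"] == "initial":
            # substitution: the child fills an argument slot of this tree
            deps.append((tree["pred"], site, child_var))
        elif child["kind"] == "modifier":
            # modifier auxiliary: arc into the host; host variable survives
            deps.append((child["pred"], "FOOT", pred_var))
        else:
            # predicative auxiliary: arc to the host's *current* predicate
            # variable, and the auxiliary's own variable takes over
            deps.append((child["pred"], "FOOT", pred_var))
            pred_var = child_var
    return pred_var

# English, stacked derivation: is-supposed-to adjoins into be-able-to.
able = {"pred": "be-able-to", "kind": "predicative", "children": [
    ({"pred": "is-supposed-to", "kind": "predicative"}, "VP")]}
fly = {"pred": "fly", "kind": "initial", "children": [(able, "VP")]}

# Portuguese, distributed derivation: both auxiliaries adjoin into voar.
voar = {"pred": "voar", "kind": "initial", "children": [
    ({"pred": "e-capaz-de", "kind": "predicative"}, "VP"),
    ({"pred": "e-pressuposto-que", "kind": "predicative"}, "S")]}

for root in (fly, voar):
    deps = []
    compose(root, deps)
    print(deps)  # both yield the chain: top auxiliary -> lower -> verb
```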
4 For calculating dependencies, we define a function arg(~) to return the argument posi- tion associated with a substitution site or foot node ~? in elementary tree V. Let a dependency be defined as a labeled arc (¢, l, ~b), from predi- cate ¢ to predicate ¢ with label I. • For each tree selected by ¢, set the predi- cate variable of each anchor item to ¢. 3See (Joshi and Vijay-Shanker, 1999) for a complete description. 4See (Schuler, 1998) for a discussion of statistically filtering TAG forests using semantic dependencies. • For each substitution of initial tree a¢ with predicate variable w into "),¢ at node address U, emit (¢, arg(v , r/), w) • For each modifier adjunction of auxil- iary tree/3¢ into tree V¢ with predicate vari- able X, emit (¢, arg(p, FOOT), X) and set the predicate variable of the composed item to X. • For each predicative adjunction of aux- iliary tree /3¢ with predicate variable w into tree "),¢ with predicate variable X, emit (¢, arg(/3, FOOT), X) and set the predicate variable of the composed item to w. • For all other productions, propagate the predicate variable up along the path from the main anchor to the root. Since the number of possible values for the additional predicate variable field is bounded by n, where n is the number of lexical items in the input sentence, and none of the produc- tions combine more than one predicate variable, the complexity of the dependency transducing algorithm is O(nT). This algorithm can be applied to the example derivation tree in Section 1, a:fly I /31 :be-able-to(VP) I /32 :is-supposed-to(VP) which resembles the stacked derivation tree for Candito and Kahane's example 5a, "Paul claims Mary said Peter left." First, we adjoin/32 :is-supposed-to at node VP of/31 :be-able-to, which produces the dependency (is-supposed-to,0,be-able-to}. Then we adjoin ~31:be-able-to at node VP of a:fly, which pro- duces the dependency (be-able-to,0,fly). The resulting dependencies are represented graphi- Cally in the dependency structure below: ¢0 :supposed-to I ¢] :be-able-to(0) I ¢2:fly(0) This example is relatively straightforward, simply reversing the direction of adjunction de- pendencies as described in (Candito and Ka- hane, 1998a), but this algorithm can transduce 91 the correct isomorphic dependency structure for the Portuguese derivation as well, similar to the distributed derivation tree in Candito and Ka- hane's example 5b, "Paul claims Mary seems to adore hot dogs," (Rambow et al., 1995), where there is no edge corresponding to the depen- dency between the raising and bridge verbs: c~:voar 81:~-capaz-de(VP) ~2:fi-pressuposto-que(S) We begin by adjoining ~1 :g-capaz-de at node VP of c~:voar, which produces the dependency (~-capaz-de, 0,voar), just as before. Then we ad- join p2:~-pressuposto-que at node S of c~:voar. This time, however, we must observe the predi- cate variable of the chart item for c~:voar which was updated in the previous adjunction, and now references ~-capaz-de instead of voar. 
Be- cause the transduction rule for adjunction uses the predicate variable of the parent instead of just the predicate, the dependency produced by the adjunetion of ~2 is (~-pressuposto-que, 0,~- capaz-de), yielding the graph: As Candito and Kahane point out, this derivation tree does not match the dependency structure of the sentence as described in Mean- ing Text Theory (Mel'cuk, 1988), because there is no edge in the derivation corresponding to the dependency between surprise and have-to (the necessity of Paul's staying is what surprises Mary, not his staying in itself). Using the above algorithm, however, we can still produce the de- sired dependency structure: ¢1 :surprise ¢2:have-to(0) Cs:Mary(1) I Ca:stay(0) I ¢4:Paul(0) by adjoining fl:have-to at node VP of c~2:stay to produce a composed item with have-to as its predicate variable, as well as the depen- dency (have-to, 0,stay/. When a2:stay substi- tutes at node So of c~l:surprise, the resulting dependency also uses the predicate variable of the argument, yielding (surprise, 0,have-to). ¢0 :~-pressuposto-que I ¢1 :~-capaz-de(0) I ¢2:voar(0) The derivation examples above only address the preservation of dependencies through ad- junction. Let us now attempt to preserve both substitution and adjunction dependencies in transducing a sentence based on Candito and Kahane's example 5c, "That Paul has to stay surprised Mary," in order to demonstrate how they interact. 5 We begin with the derivation tree: al :surprise c~2 :stay(S0) c~4 :Mary(NPl) c~a:Paul(NP0) ~:have-to(VP) 5We have replaced want to in the original example with have to in order to highlight the dependency struc- ture and set aside any translation issues related to PRO control. 4 Obtaining Target Derivations Once a source derivation is selected from the parse forest, the predicate-argument dependen- cies can be read off from the items in the forest that constitute the selected derivation. The re- sulting dependency graph can then be mapped to a forest of target derivations, where each predicate node in the source dependency graph is linked to a set of possible elementary trees in the target grammar, each of which is instanti- ated with substitution or adjunction edges lead- ing to other linked sets in the forest. The el- ementary trees in the target forest are deter- mined by the predicate pairs in the transfer lex- icon, and by the elementary trees that can re- alize the translated targets. The substitution and adjunction edges in the target forest are determined by the argument links in the trans- fer lexicon, and by the substitution and adjunc- tion configurations that can realize the trans- lated targets' dependencies. Mapping dependencies into substitutions is relatively straightforward, but we have seen in Section 2 that different adjunction configura- tions (such as the raising and bridge verb ad- 92 junctions in sentences (1) and (2)) can corre- spond to the same dependency graph, so we should expect that some dependencies in our target graph may correspond to more than one adjunction configuration in the target deriva- tion tree. Since a dependency may be realized by adjunctions at up to n different sites, an un- constrained algorithm would require exponen- tial time to find a target derivation in the worst case. 
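The transfer step itself is straightforward to sketch; the hard part is the choice of adjunction configurations, which the algorithm presented next makes tractable. In the toy sketch below, the TRANSFER table, the tree names, and the transfer helper are all invented for the running example.

```python
# Illustrative sketch of the transfer step: each source predicate maps to
# a target predicate with candidate elementary trees, and source
# dependency arcs carry over as target arcs. Entries are invented.

TRANSFER = {
    "fly":            ("voar",              ["alpha:voar"]),
    "be-able-to":     ("e-capaz-de",        ["beta:e-capaz-de(VP)"]),
    "is-supposed-to": ("e-pressuposto-que", ["beta:e-pressuposto-que(S)"]),
}

def transfer(source_deps):
    """Map (head, slot, dependent) source arcs to target arcs plus the
    candidate target trees for every predicate in the graph."""
    target_deps, candidates = [], {}
    for head, slot, dep in source_deps:
        t_head, head_trees = TRANSFER[head]
        t_dep, dep_trees = TRANSFER[dep]
        candidates[t_head] = head_trees
        candidates[t_dep] = dep_trees
        target_deps.append((t_head, slot, t_dep))
    return target_deps, candidates

deps = [("is-supposed-to", "FOOT", "be-able-to"), ("be-able-to", "FOOT", "fly")]
print(transfer(deps))
```

Nothing here commits to a target derivation; it only fixes which target predicates and dependency arcs must be realized.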
In order to reduce this complexity, we present a dynamic programming algorithm for constructing a target derivation forest in time proportional to O(n 4) which relies on a restric- tion that the target derivations must preserve the relative scope ordering of the predicates in the source dependency graph. This restriction carries the linguistic implica- tion that the scope ordering of adjuncts is part of the meaning of a sentence and should not be re-arranged in translation. Since we exploit a notion of locality similar to that of Isomor- phic Synchronous TAG, we should not expect the generative power of our definition to exceed the generative power of TAG, as well. First, we define an ordering of predicates on the source dependency graph corresponding to a depth-first traversal of the graph, originating at the predicate variable of the root of the source derivation, and visiting arguments and modi- fiers in order from lowest to highest scope. In other words, arguments and modifiers will be ordered from the bottom up on the elementary tree structure of the parent, such that the foot node argument of an elementary tree has the lowest scope among the arguments, and the first adjunct on the main (trunk) anchor has the low- est scope among the modifiers. Arguments, which can safely be permuted in translation because their number is finitely bounded, are traversed entirely before the par- ent; and modifiers, which should not be per- muted because they may be arbitrarily numer- ous, are traversed entirely after the parent. This enumeration will roughly correspond to the scoping order for the adjuncts in the source derivation, while preventing substituted trees from interrupting possible scoping configura- tions. We can now identify all the descendants of any elementary tree in a derivation because they will form a consecutive series in the enu- meration described above. It therefore provides a convenient way to generate a target derivation forest that preserves the scoping information in the source, by 'parsing' the scope-ordered string of elementary trees, using indices on this enu- meration instead of on a string yield. It is important to note that in defining this algorithm, we assume that all trees associated with a particular predicate will use the same argument structure as that predicate. 6 We also assume that the set of trees associated with a particular predicate may be filtered by transfer- ring information such as mood and voice from source to target predicates. Apart from the different use of indices, the algorithm we describe is exactly the reverse of the transducer described in Section 3, taking a dependency graph 79 and producing a TAG derivation forest containing exactly the set of derivation trees for which those dependencies hold. Here, as in a parsing algorithm, we define forest items as tuples of (~/¢, 'q, _1_, i,j, X) where a, ~, and 7 are elementary trees with node'O, ¢ and ¢ are predicates, X and w be predicate vari- ables, and T and _1_ are delimiters tbr opening and closing adjunction, but now let i, j, and k refer to the indices on the scoping enumeration described above, instead of on an input string. In order to reconcile scoping ranges for substi- tution, we must also define a function first(C) to return the leftmost (lowest) edge of the ¢'s range in the scope enumeration, and last(C) to return the rightmost (highest) edge of the ¢'s range in the scope enumeration. • For each tree 7 mapped from predicate ¢ at scope i, introduce (~,¢, first(C), i + 1, ¢}. 
• If (¢,arg(7,~),co) E 79, try substitution of c~ into 3': (c~¢, ROOT, T, first(co), last(co), co) 7, ±, , ,-) ~Although this does not hold for certain relative clause elementary trees with wh-extractions as substi- tutions sites (since the wh-site is an argument of the main verb of the clause instead of the foot node), Can- dito and Kahane (Candito and Kahane, 1998b) suggest an alternative analysis which can be extended to TAG by adjoining the relative clause into its wh-word as a predicative adjunct, and adjoining the wh-word into the parent noun phrase as a modifier, so the noun phrase is treated as an argument of the wh-word rather than of the relative clause. 93 • If (¢, arg(/3, FOOT), X) E 79, try modifier adjunction of fl into -),: (V~,~h_l_,i,j,x) (/3¢,ROOT, T,j,k,w) (V¢, ~, -l-, i, k, x) • If (¢, arg(/3, FOOT), X) E 79, try predicative adjunction of/3 into V: (V¢,~,_I_,i,j,x) (/3¢,ROOT, T,j,k,w) (V¢,~,T,i,k,w) • Apply productions for nonterminal projec- tion as in the transducer algorithm, prop- agating index ranges and predicative vari- ables up along the path from the main an- chor to the root. Since none of the productions combine more than three indices and one predicate variable, and since the indices and predicate variable may have no more than n distinct values, the algo- rithm runs in O(n 4) time. Note that one of the indices may be redundant with the predi- cate variable, so a more efficient implementation might be possible in dO(n3). We can demonstrate this algorithm by trans- lating the English dependency graph from Sec- tion 1 into a derivation tree for Portuguese. First, we enumerate the predicates with their relative scoping positions: [3] ¢0:is-supposed-to I [2] ¢l:be-able-to I [i] ¢2:fly Then we construct a derivation forest based on the translated elementary trees a:voar,/31 :d- capaz-de, and /32 :d-pressuposto-que. Beginning at the bottom, we assign to these constituents the relative scoping ranges of 1-2, 2-3, and 3-$, respectively, where $ is a terminal symbol. There is also a dependency from is-supposed- to to be-able-to allowing us to adjoin /32:d- pressuposto-que to /31:d-capaz-de to make it cover the range from 2 to $, but there would be no S node to host its adjunction, so this pos- sibility can not be added to the forest. We can, however, adjoin/32:d-pressuposto-que to the in- stance of a:voar extending to/31 :d-capaz-de that covers the range from 1 to 3, resulting in a com- plete analysis of the entire scope from 1 to $, (from (~:voar to/32:pressuposto) rooted on voar: (O~voar, l,2,..) (/3capaz, 2, 3, ..) (/3press, 3, $, ..) <O~voar ' 1, 3, capaz) <avoar, 1, $, press} which matches the distributed derivation tree where both auxiliary trees adjoin to roar. [1-$]a:voar [2-3]/31:6-capaz-de(VP) [3-$]~2:6-pressup.-que(S) Let us compare this to a translation using the same dependency structure, but different words: [3] ¢0 :is-going-to I [2] ¢l:be-able-to I [1] ¢2:fly Once again we select trees in the target lan- guage, and enumerate them with scoping ranges in a pre-order traversal, but this time the con- struction at scope position 3 must be translated as a raising verb (vai) instead of as a bridge con- struction (d-pressuposto-que): (avoar, l,2,..> (/3capaz,2,3,..> (/3vai,3,$,..> (avoar, l,2,..) (/3capaz,2,3,..> (/3press, 3,$,..> Since there is a dependency from be-able-to to fly, we can adjoin/31:d-capaz-de to a:voar such that it covers the range of scopes from 1 to 3 (from roar to d-capaz-de), so we add this possi- bility to the forest. 
Although we can still adjoin/31 :ser-capaz-de at the VP node of a:voar, we will have nowhere to adjoin /32:vai, since the VP node of a:voar is now occupied, and only one predicative tree may adjoin at any node. 7 (avoar, 1, 2,..) (t3capaz, 2, 3, ..) (/3vai, 3, $, ..) (avoar, 1, 3, capaz> (avoar , l, 2, ..) (/3capaz, 2, 3, -.) (/3;ress, 3,$,..) (avoar, 1, 3, capaz) 7See (Schabes and Shieber, 1994) for the motivations of this restriction. 94 Fortunately, we can also realize the depen- dency between vai and ser-capaz-de by adjoin- ing/32 :vai at the VP. <avo r, l, 2, ..) <13capaz, 2, 3, ..) (/3va , 3, $, ..) < capaz, 2, $, vai) The new instance spanning from 2 to $ (from ~1 :capaz to/32 :vai) can then be adjoined at the VP node of roar, to complete the derivation. ( avoar , 1, 2, ..) (flcapaz, 2, 3,..) (~vai, 3, $,..) (~cap~z, 2, $, vai) (Olvoar , 1, $, vai) This corresponds to the stacked derivation, with p2:vai adjoined to t31:ser-capaz-de and 1~1 :ser-capaz-de adjoined to a:voar: [1-$] a:voar I [2-$] ~1 :ser-capaz-de(VP) I [3-$] ~2 :vai(VP) 5 Conclusion We have presented two algorithms - one for in- terpreting a derivation forest as a semantic de- pendency graph, and the other for realizing a semantic dependency graph as a derivation for- est - that make use of semantic dependencies as adapted from the notion of predicate-argument attachment in (Joshi and Vijay-Shanker, 1999), and we have described how these algorithms can be run together in a synchronous TAG trans- lation system, in CO(n 7) time, using transfer rules predicated on isomorphic or locally non- isomorphic dependency graphs rather than iso- morphic or locally non-isomorphic derivation trees. We have also demonstrated how such a system would be necessary in translating a real-world example that is isomorphic on de- pendency graphs but globally non-isomorphic on derivation trees. This system is currently being implemented as part of the Xtag project at the University of Pennsylvania, and as nat- ural language interface in the Human Modeling and Simulation project, also at Penn. References Anne Abeill6, Yves Schabes, and Aravind K. Joshi. 1990. Using lexicalized tree adjoining grammars for machine translation. In Proceedings of the 13th International Conference on Coraputatio'nal Linguistics (COLING '90), Helsinki, Finland, Au- gust. Marie-Helene Candito and Sylvain Kahane. 1998a. Can the TAG derivation tree represent a semantic graph? In Proceedings of the TAG+4 Workshop, University of Pennsylvania, August. Marie-Helene Candito and Sylvain Kahane. 1998b. Defining DTG derivations to get semantic graphs. In Proceedings of the TAG+~ Workshop, Univer- sity of Pennsylvania, August. Aravind Joshi and K. Vijay-Shanker. 1999. Com- positional Semantics with Lexicalized Tree- Adjoining Grammar (LTAG): How Much Under- specification is Necessary? In Proceedings of the 2nd International Workshop on Computational Semantics. Aravind K. Joshi. 1985. How much context sensitiv- ity is necessary for characterizing structural de- scriptions: Tree adjoining grammars. In L. Kart- tunen D. Dowty and A. Zwicky, editors, Natural language parsing: Psychological, computational and theoretical perspectives, pages 206-250. Cam- bridge University Press, Cambridge, U.K. Anthony S. Kroch. 1989. Asymmetries in long dis- tance extraction in a TAG grammar. In M. Baltin and A. Kroch, editors, Alternative Conceptions of Phrase Structure, pages 66-98. University of Chicago Press. Igor Mel'cuk. 1988. Dependency syntax: theory and practice . 
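The demonstration concludes just below. As an aside, the range-and-site bookkeeping these items encode can be reduced to a few lines of code; this is a toy sketch in which the DEPS set, the single VP site, and the item layout are hard-coded assumptions, and only predicative adjunction with the one-predicative-tree-per-node restriction is modeled.

```python
# Toy reduction of the items above: (tree, low, high, predicate variable,
# occupied sites). DEPS is hard-coded for the example.

DEPS = {("vai", "ser-capaz-de"), ("ser-capaz-de", "voar")}

def adjoin(host, aux, site="VP"):
    """Predicative adjunction of `aux` into `host` at `site`; None when
    ranges, dependencies, or site occupancy rule the combination out."""
    if host is None or aux is None:
        return None
    h_tree, h_lo, h_hi, h_var, h_used = host
    a_tree, a_lo, a_hi, a_var, _ = aux
    if h_hi != a_lo or (a_tree, h_var) not in DEPS or site in h_used:
        return None
    return (h_tree, h_lo, a_hi, a_var, h_used | {site})

voar  = ("voar", 1, 2, "voar", frozenset())
capaz = ("ser-capaz-de", 2, 3, "ser-capaz-de", frozenset())
vai   = ("vai", 3, "$", "vai", frozenset())

print(adjoin(adjoin(voar, capaz), vai))    # None: VP of voar is occupied
print(adjoin(voar, adjoin(capaz, vai)))    # ('voar', 1, '$', 'vai', ...)
```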
State University of NY Press, Albany. Owen Rainbow and Giorgio Satta. 1996. Syn- chronous Models of Language. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics (A CL '96). Owen Rambow, David Weir, and K. Vijay-Shanker. 1995. D-tree grammars. In Proceedings of the 33rd Annual Meeting of the Association for Com- putational Linguistics (A CL '95). Yves Schabes and Stuart M. Shieber. 1994. An al- ternative conception of tree-adjoining derivation. Computational Linguistics, 20(1):91-124. William Schuler. 1998. Expoiting semantic depen- dencies in parsing. Proceedings of the TAG+4 Workshop. Stuart M. Shieber and Yves Schabes. 1990. Syn- chronous tree adjoining grammars. In Proceedings of the 13th International Conference on Compu- tational Linguistics (COLING '90), Helsinki, Fin- land, August. Stuart M. Shieber. 1994. Restricting the weak- generative capability of synchronous tree adjoin- ing grammars. Computational Intelligence, 10(4). K. Vijay-Shanker and D.J. Weir. 1993. The use of shared forests in tree adjoining grammar parsing. In Proceedings of EA CL '93, pages 384-393. 95
Compositional Semantics for Linguistic Formalisms Shuly Wintner* Institute for Research in Cognitive Science University of Pennsylvania 3401 Walnut St., Suite 400A Philadelphia, PA 19018 shuly@:t±nc, cis. upenn, edu Abstract In what sense is a grammar the union of its rules? This paper adapts the notion of com- position, well developed in the context of pro- gramming languages, to the domain of linguis- tic formalisms. We study alternative definitions for the semantics of such formalisms, suggest- ing a denotational semantics that we show to be compositional and fully-abstract. This fa- cilitates a clear, mathematically sound way for defining grammar modularity. 1 Introduction Developing large scale grammars for natural languages is a complicated task, and the prob- lems grammar engineers face when designing broad-coverage grammars are reminiscent of those tackled by software engineering (Erbach and Uszkoreit, 1990). Viewing contemporary linguistic formalisms as very high level declara- tive programming languages, a grammar for a natural language can be viewed as a program. It is therefore possible to adapt methods and techniques of software engineering to the do- main of natural language formalisms. We be- lieve that any advances in grammar engineering must be preceded by a more theoretical work, concentrating on the semantics of grammars. This view reflects the situation in logic program- ming, where developments in alternative defini- tions for predicate logic semantics led to im- plementations of various program composition operators (Bugliesi et al., 1994). This paper suggests a denotational seman- tics tbr unification-based linguistic formalisms and shows that it is compositional and fully- *I am grateful to Nissim Francez for commenting on an em'lier version of this paper. This work was supported by an IRCS Fellowship and NSF grant SBR 8920230. abstract. This facilitates a clear, mathemati- cally sound way for defining grammar modu- larity. While most of the results we report on are probably not surprising, we believe that it is important to derive them directly for linguis- tic formalisms for two reasons. First, practi- tioners of linguistic formMisms usually do not view them as instances of a general logic pro- gramming framework, but rather as first-class programming environments which deserve in- dependent study. Second, there are some cru- cial differences between contemporary linguis- tic formalisms and, say, Prolog: the basic ele- ments -- typed feature-structures -- are more general than first-order terms, the notion of uni- fication is different, and computations amount to parsing, rather than SLD-resolution. The fact that we can derive similar results in this new domain is encouraging, and should not be considered trivial. Analogously to logic programming languages, the denotation of grammars can be defined us- ing various techniques. We review alternative approaches, operational and denotational, to the semantics of linguistic formalisms in sec- tion 2 and show that they are "too crude" to support grammar composition. Section 3 presents an alternative semantics, shown to be compositional (with respect to grammar union, a simple syntactic combination operation on grammars). However, this definition is "too fine": in section 4 we present an adequate, compositional and fully-abstract semantics for linguistic formalisms. For lack of space, some proofs are omitted; an extended version is avail- able as a technical report (Wintner, 1999). 
2 Grammar semantics Viewing grammars as formal entities that share many features with computer programs, it is 9{} natural to consider the notion of semantics of ratification-based formalisms. We review in this se(:tion the operational definition of Shieber et a,1. (1995) and the denotational definition of, e.g., Pereira and Shieber (1984) or Carpenter (1992, pp. 204-206). We show that these def- initions are equivalent and that none of them supports compositionality. 2.1 Basic notions W(, assume familiarity with theories of feature structure based unification grammars, as for- mulated by, e.g., Carpenter (1992) or Shieber (1992). Grammars are defined over typed fea- twre .structures (TFSs) which can be viewed as generalizations of first-order terms (Carpenter, 1991). TFSs are partially ordered by subsump- tion, with ± the least (or most general) TFS. A multi-rooted structure (MRS, see Sikkel (1997) ()r Wintner and Francez (1999)) is a sequence of TFSs, with possible reentrancies among dif- fi;rent elements in the sequence. Meta-variables A,/3 range over TFSs and a, p - over MRSs. MRSs are partially ordered by subsumption, de- n()ted '__', with a least upper bound operation ()f 'an'llfication, denoted 'U', and a greatest lowest t)(mnd denoted 'W. We assume the existence of a. fixed, finite set WORDS of words. A lexicon associates with every word a set of TFSs, its cat- egory. Meta-variable a ranges over WORDS and .w -- over strings of words (elements of WORDS*). Grammars are defined over a signature of types and features, assumed to be fixed below. Definition 1. A rule is an MRS of length greater than or equal to 1 with a designated (fir'st) element, the head o.f the rule. The rest of the elements .form the rule's body (which may be em, pty, in which case the rule is depicted a.s' a TFS). A lexicon is a total .function .from WORDS to .finite, possibly empty sets o.f TFSs. A grammar G = (T¢,/:, A s} is a .finite set of ,rules TO, a lexicon £. and a start symbol A s that is a TFS. Figure 1 depicts an example grammar, 1 sup- pressing the underlying type hierarchy. 2 The definition of unification is lifted to MRSs: let a,p be two MRSs of the same length; the 'Grammars are displayed using a simple description language, where ':' denotes feature values. 2Assmne that in all the example grammars, the types s, n, v and vp are maximal and (pairwise) inconsistent. A '~ = (~:at : .~) { (cat:s) -+ (co, t:n) (cat:vp) ] 7~ = (cat: vp) ---> (c.at: v) (cat: n) vp) + .,,) Z2(John) = Z~(Mary) = {(cat: 'n)} £(sleeps) = £(sleep) = £(lovcs) = {(co, t : v)} Figure 1: An example grammar, G unification of a and p, denoted c, U p, is the most general MRS that is subsmned by both er and p, if it exists. Otherwise, the unification .fails. Definition 2. An MRS (AI,...,A~:) reduces to a TFS A with respect to a gram, mar G (de- noted (At,...,Ak) ~(-~ A) 'li~' th, ere exists a rule p E T~ such, that (B,131,...,B~:) = p ll (_L, A1,..., Ak) and B V- A. Wll, en G is under- stood from. the context it is om, itted. Reduction can be viewed as the bottom-up counterpart of derivation. If f, g, are flmctions over the same (set) do- main, .f + g is )~I..f(I) U .q(I). Let ITEMS = {[w,i,A,j] [ w E WORDS*, A is a TFS and i,j E {0,1,2,3,...}}. Let Z = 2 ITEMS. Meta- variables x, y range over items and I - over sets of items. When 27 is ordered by set inclusion it forms a complete lattice with set union as a least upper bound (lub) operation. A flmction T : 27 -+ 27 is monotone if whenever 11 C_/2, also T(I1) C_ T(I2). 
It is continuous iftbr every chain I1 C_ /2 C_ ..., T(Uj< ~/.i) = Uj<~T(Ij) . If a function T is monotone it has a least fixpoint (Tarski-Knaster theorem); if T is also continu- ous, the fixpoint can be obtained by iterative application of T to the empty set (Kleene the- orem): lfp(T) = TSw, where TI" 0 = 0 and T t n = T(T t (n- 1)) when 'n is a succes- sor ordinal and (_Jk<n(T i" n) when n is a limit ordinal. When the semantics of programming lan- guages are concerned, a notion of observables is called for: Ob is a flmction associating a set of objects, the observables, with every program. The choice of semantics induces a natural equiv- alence operator on grammars: given a semantics 'H', G1 ~ G2 iff ~GI~ = ~G2~. An essential re- quirement of any semantic equivalence is that it 97' be correct (observables-preserving): if G1 -G2, then Ob(G1) = Ob(G2). Let 'U' be a composition operation on gram- mars and '•' a combination operator on deno- rations. A (correct) semantics 'H' is compo- .s'itional (Gaifinan and Shapiro, 1989) if when- ever ~1~ : ~G2~ and ~G3] -- ~G4], also ~G, U G3~ = [G2 U G4]. A semantics is com- mutative (Brogi et al., 1992) if ~G1 UG2] = ~G,~ • [G2~. This is a stronger notion than (:ompositionality: if a semantics is commutative with respect to some operator then it is compo- sitional. 2.2 An operational semantics As Van Emden and Kowalski (1976) note, "to define an operational semantics for a program- ruing language is to define an implementational independent interpreter for it. For predicate logic the proof procedure behaves as such an in- terpreter." Shieber et al. (1995) view parsing as a. deductive process that proves claims about the grammatical status of strings from assumptions derived from the grammar. We follow their in- sight and notation and list a deductive system for parsing unification-based grammars. Definition 3. The deductive parsing system associated with a grammar G = (7~,F.,AS} is defined over ITEMS and is characterized by: Axioms: [a, i, A, i + 1] i.f B E Z.(a) and B K A; [e, i, A, i] if B is an e-rule in T~ and B K_ A Goals: [w, 0, A, [w]] where A ~ A s Inference rules: [wx , i l , A1, ill,..., [Wk, ik, Ak , Jk ] [Wl " " " Wk, i, A, j] if .'h = i1,+1 .for 1 <_ l < k and i = il and J = Jk and (A1,...,Ak) =>a A When an item [w,i,A,j] can be deduced, applying k times the inference rules associ- z~ted with a grammar G, we write F-~[w, i, A, j]. When the number of inference steps is irrele- vant it is omitted. Notice that the domain of items is infinite, and in particular that the num- ber of axioms is infinite. Also, notice that the goal is to deduce a TFS which is subsumed by the start symbol, and when TFSs can be cyclic, there can be infinitely many such TFSs (and, hence, goals) - see Wintner and Francez (1999). Definition 4. The operational denotation o.f a grammar G is EG~o,, = {x IF-v; :,:}. G1 -op G2 iy ]C1 o, = G2Bo , We use the operational semantics to de- fine the language generated by a grammar G: L(G) = {(w,A} [ [w,O,A,l',,[] E [G]o,}. Notice that a language is not merely a set of strings; rather, each string is associated with a TFS through the deduction procedure. Note also that the start symbol A ' does not play a role in this definition; this is equivalent to assuming that the start symbol is always the most general TFS, _k. The most natural observable for a grammar would be its language, either as a set of strings or augmented by TFSs. Thus we take Ob(G) to be L(G) and by definition, the operational semantics '~.] 
op' preserves observables.
2.3 Denotational semantics
In this section we consider denotational semantics through a fixpoint of a transformational operator associated with grammars. This is essentially similar to the definition of Pereira and Shieber (1984) and Carpenter (1992, pp. 204-206). We then show that the denotational semantics is equivalent to the operational one. Associate with a grammar G an operator TG that, analogously to the immediate consequence operator of logic programming, can be thought of as a "parsing step" operator in the context of grammatical formalisms. For the following discussion fix a particular grammar G = ⟨R, £, As⟩.
Definition 5. Let TG : Z → Z be a transformation on sets of items, where for every I ⊆ ITEMS, [w, i, A, j] ∈ TG(I) iff either
• there exist y1, ..., yk ∈ I such that yl = [wl, il, Al, jl] for 1 ≤ l ≤ k and il+1 = jl for 1 ≤ l < k and i1 = i and jk = j and (A1, ..., Ak) ⇒ A and w = w1 ··· wk; or
• i = j and B is an ε-rule in G and B ⊑ A and w = ε; or
• i + 1 = j and |w| = 1 and B ∈ £(w) and B ⊑ A.
For every grammar G, TG is monotone and continuous, and hence its least fixpoint exists and lfp(TG) = TG ↑ ω. Following the paradigm
[] G1 : A s = (cat :.s) (co, t: s) -+ (c.,t: ,,,,) (co, t: vp) C(John) = {((:.t : n)} a2: A s = (_1_) (co, t: vp) -+ (co, t: v) (cat : vp) -+ (cat:v) (cat:n) /:(sleeps) =/:(loves) = {(cat: v)} Go: A s = (&) /:(loves) = {(cat: v)} G4: A s = (_1_) (cat:vp) -+ (co, t:v) (cat:n) C(loves) = {(cat: v)} G1 U G2 : A s = (cat : s) (co, t: ~) -+ (~:o,t: ,,,,) (~.at : vp) (cat : vp) -~ (co, t : v) (cat: vp) --+ (cat: v) (cat: n) /:(John) = {(cat: n)} £(sleeps) = £(loves) = {(cat: v)} G1UGa : A s = (cat : s) (cat: s) --+ (cat: n) (cat: vp) C(John) = {(cat: ',,,)} £(loves) = {(cat: v)} GI U G4 : A s = (cat : s) (co, t: ~) + (co.t: ,,,.) (cat: vp) (co, t : vp) -~ (cat:,,) (co, t : ~) /:(John) = {(cat: n)} /:(loves) = {(cat: v)} Figure 2: Grammar union The implication of the above proposition is that while grammar union might be a natural, well defined syntactic operation on grammars, the standard semantics of grannnars is too coarse to support it. Intuitively, this is because when a grammar G1 includes a particular rule p that is inapplicable for reduction, this rule contributes nothing to the denotation of the grammar. But when G1 is combined with some other grammar, G2, p might be used for reduction in G1 U G2, where it can interact with the rules of G2. We suggest an alternative, fixpoint based semantics for unification based grammars that naturally supports compositionality. 3 A compositional semantics To overcome the problems delineated above, we follow Mancarella and Pedreschi (1988) in con- sidering the grammar transformation operator itself (rather than its fixpoint) as the denota- 99 tion of a grammar. Definition 8. The algebraic denotation o.f G is ffGffa I = Ta. G1 -at G2 iff Tal = TG2. Not only is the algebraic semantics composi- tionM, it is also commutative with respect to grammar union. To show that, a composition operation on denotations has to be defined, and we tbllow Mancarella and Pedreschi (1988) in its definition: Tc;~ • To;., = ),LTc, (~) u Ta2 (5 Theorem 4. The semantics '==-at ' is commuta- tive with respect to grammar union and '•': for e, vcry two grammars G1, G2, [alffat" ~G2ffal = :G I [-J G 2 ff (tl . Proof. It has to be shown that, for every set of items L Tca~a., (I) = Ta, (I)u Ta.,(I). • if x E TG1 (I) U TG~, (I) then either x G Tch (I) or x E Ta.,(I). From the definition of grammar union, x E TG1uG2(I) in any case. • if z E Ta~ua.,(I) then x can be added by either of the three clauses in the definition of Ta. - if x is added by the first clause then there is a rule p G 7~1 U T~2 that li- censes the derivation through which z is added. Then either p E 7~1 or p G T~2, but in any case p would have licensed the same derivation, so either ~ Ta~ (I) or • ~ Ta~ (I). - if x is added by the second clause then there is an e-rule in G1 U G2 due to which x is added, and by the same rationale either x C TG~(I) or x E TG~(I). - if x is added by the third clause then there exists a lexical category in £1 U £2 due to which x is added, hence this category exists in either £1 or £2, and therefore x C TG~ (I) U TG2 (I). [] Since '==-at' is commutative, it is also compo- sitional with respect to grammar union. In- tuitively, since TG captures only one step of the computation, it cannot capture interactions among different rules in the (unioned) grammar, and hence taking To: to be the denotation of G yields a compositional semantics. The Ta operator reflects the structure of the grammar better than its fixpoint. 
In other words, the equivalence relation induced by TG is finer than the relation induced by lfp(Tc). The question is, how fine is the '-al' relation? To make sure that a semantics is not too fine, one usually checks the reverse direction. Definition 9. A fully-abstract equivalence relation '-' is such that G1 =- G'2 'i,.[.-f .for all G, Ob(G1 U G) = Ob(G.e U G). Proposition 5. Th, e semantic equivalence re- lation '--at' is not fully abshuct. Proof. Let G1 be the grammar A~ = ±, £1 = 0, ~ = {(cat: ~) -~ (~:.,t : ,,,,p) (c.,t : vp), (cat: up) -~ (,:..t : ',,.p)} and G2 be the gramm~:r A~ = 2, Z:2 = O, n~ = {(~at : .~) -~ (~,.,t : .,p) (.at: ~p)} • G1 ~at G2: because tbr I = {["John loves Mary",6,(cat : np),9]}, T(;I(I ) = I but To., (I) = O • for all G, Ob(G U G~) = Ob(G [3 G2). The only difference between GUG1 and GUG2 is the presence of the rule (cat : up) -+ (cat : up) in the former. This rule can contribute nothing to a deduction procedure, since any item it licenses must already be deducible. Therefore, any item deducible with G U G1 is also deducible with G U G2 and hence Ob(G U G1) ---- Ob(G U G,2). [] A better attempt would have been to con- sider, instead of TG, the fbllowing operator as the denotation of G: [G]i d = AI.Ta(I) U I. In other words, the semantics is Ta + Id, where Id is the identity operator. Unfortunately, this does not solve the problem, as '~']id' is still not fully-abstract. 100 4 A fully abstract semantics We have shown so far that 'Hfp' is not com- positional, and that 'Hid' is compositional but not fully abstract. The "right" semantics, there- fore, lies somewhere in between: since the choice of semantics induces a natural equivalence on grammars, we seek an equivalence that is cruder thzm 'Hid' but finer than 'H.fp'. In this section we adapt results from Lassez and Maher (1984) a.nd Maher (1988) to the domain of unification- b~Lsed linguistic formalisms. Consider the following semantics for logic programs: rather than taking the operator asso- dated with the entire program, look only at the rules (excluding the facts), and take the mean- ing of a program to be the function that is ob- tained by an infinite applications of the opera- tor associated with the rules. In our framework, this would amount to associating the following operator with a grammar: Definition 10. Let RG : Z -~ Z be a trans- formation on sets o.f items, where .for every [ C ITEMS, [w,i,A,j] E RG(I) iff there exist Yl,...,Yk E I such that yl = [wz,it,Al,jd .for 1 _ < l _ < k and il+t = jl .for 1 < l < k and i, = 1 and.jk = J and (A1,...,Ak) ~ A and "~1) ~ 'tl) 1 • • • ?U k. Th, e functional denotation of a grammar G is /[G~.f,,, = (Re + Id) ~ = End-0 (RG + Id) n. Notice that R w is not RG "[ w: the former is a function "d from sets of items to set of items; the latter is a .set of items. Observe that Rc is defined similarly to Ta (definition 5), ignoring the items added (by Ta) due to e-rules and lexical items. If we define the set of items I'nitc to be those items that are a.dded by TG independently of the argument it operates on, then for every grammar G and ev- ery set of items I, Ta(I) = Ra(I) U Inita. Re- lating the functional semantics to the fixpoint one, we tbllow Lassez and Maher (1984) in prov- ing that the fixpoint of the grammar transfor- mation operator can be computed by applying the fimctional semantics to the set InitG. Definition 11. For G = (hg,£,A~), Initc = {[e,i,A,i] [ B is an e~-rule in G and B E_A} U {[a,i,A,i + 1J I B E £(a) .for B E A} Theorem 6. 
For every grammar G, (R.c + fd.) (z',,.itcd = tb(TG) Proof. We show that tbr every 'n., (T~ + Id) n = (E~.-~ (Re + Id) ~:) (I'nit(;) by induction on Tt. For n = 1, (Tc + Id) ~[ 1 = (Tc~ + Id)((Ta + Id) ~ O) = (Tc, + Id)(O). Clearly, the only items added by TG are due to the second and third clauses of definition 5, which are exactly Inita. Also, (E~=o(Ra + Id)~:)(Initc;) = (Ra + Id) ° (Initc) = I'nitc;. Assume that the proposition holds tbr n- 1, that is, (To + Id) "[ (',, - 1) = t~E'"-2t~'a:=0 txta + Id) k)Unite). Then (Ta + Id) $ n = definition of i" (TG + Id)((Ta + Id) ~[ (v, - 1)) = by the induction hypothesis ~n--2 (Ta + Id)(( k=0(RG + Id)k)(Inita)) = since Ta(I) = Ra(I) U Inita En-2 (Ra + Id)(( k=Q(Rc; + Id)~')(Inita)) U Inita = (Ra + (Ra + Id) k) (1',,,its,)) = (y]n-1/R , Id)h:)(Init(:) k=0 ~, ,G-I- Hence (RG + Id) ~ (Init(; = (27(; + Id) ~ w = lfp( TG ) . [] The choice of 'Hfl~' as the semantics calls for a different notion of' observables. The denota- tion of a grammar is now a flmction which re- flects an infinite number of' applications of the grammar's rules, but completely ignores the e- rules and the lexical entries. If we took the ob- servables of a grammar G to be L(G) we could in general have ~G1].f,. = ~G2]fl~. but Ob(G1) 7 ~ Ob(G2) (due to different lexicons), that is, the semantics would not be correct. However, when the lexical entries in a grammar (including the e- rules, which can be viewed as empty categories, or the lexical entries of traces) are taken as in- put, a natural notion of observables preservation is obtained. To guarantee correctness, we define the observables of a grammar G with respect to a given input. Definition 12. Th, e observables of a gram- mar G = (~,/:,A s} with respect to an in- put set of items I are Ot, (C) = {(',,,,A) I [w,0, d, I 1] e 101 Corollary 7. The semantics '~.~.f ' is correct: 'llf G1 =fn G2 then .for every I, Obl(G1) = Ol, ( a,e ). The above definition corresponds to the pre- vious one in a natural way: when the input is taken to be Inita, the observables of a grammar are its language. Theorem 8. For all G, L(G) = Obinita(G). P'moJ: L(G) = definition of L(G) { (',,,, A) I [w, O, A, I 1] e I[C]lo,,} = definition 4 {(w, A) [ F-c [w, O, A, = by theorem 1 {<w, A> I [,w, 0, A, Iwl] e l.fp(Ta)} = by theorem 6 {(,w, A) I [w, O, A, [wl] e [G]fn(InitG)} = by definition 12 Obt,,,~tc; (G) [] .To show that the semantics 'Hfn' is composi- tional we must define an operator for combining denotations. Unfortunately, the simplest oper- ator, '+', would not do. However, a different operator does the job. Define ~Gl~.f~ • [G2~f~ to 1)e ([[G1]l.fn + [G2~f~) °'. Then 'H.f~' is commuta- tive (and hence compositional) with respect to ~•' and 'U'. Theorem 9. fiG1 U G2~fn = ~Gl]fn " ~G2~.fn. The proof is basically similar to the case of logic programming (Lassez and Maher, 1984) and is detailed in Wintner (1999). Theorem 10. The semantics '~'[fn' is fully abstract: ,for every two grammars G1 and G2, 'llf .for" every grammar G and set of items I, Obr(G1 U G) = ObI(G2 U G), then G1 =fn G2. The proof is constructive: assuming that G t ~f;~ G2, we show a grammar G (which de- t)ends on G1 and G2) such that Obt(G1 U G) ¢ Obr(G2 U G). For the details, see Wintner (1999). 5 Conclusions This paper discusses alternative definitions for the semantics of unification-based linguistic for- malisms, culminating in one that is both com- positional and fully-abstract (with respect to grammar union, a simple syntactic combination operations on grammars). 
This is mostly an adaptation of well-known results from h)gic pro- gramming to the ti'amework of unification-based linguistic tbrmalisms, and it is encouraging to see that the same choice of semantics which is compositional and fiflly-abstra(:t for Prolog turned out to have the same desirable proper- ties in our domain. The functional semantics '~.].f,' defined here assigns to a grammar a fimction which reflects the (possibly infinite) successive application of grammar rules, viewing the lexicon as input to the parsing process. We, believe that this is a key to modularity in grammar design. A gram- mar module has to define a set of items that it "exports", and a set of items that can be "imported", in a similar way to the declaration of interfaces in programming languages. We are currently working out the details of such a definition. An immediate application will fa- cilitate the implementation of grammar devel- opment systems that support modularity in a clear, mathematically sound way. The results reported here can be extended in various directions. First, we are only con- cerned in this work with one composition oper- ator, grammar union. But alternative operators are possible, too. In particular, it would be in- teresting to define an operator which combines the information encoded in two grammar rules, for example by unifying the rules. Such an op- erator would facilitate a separate development of grammars along a different axis: one module can define the syntactic component of a gram- mar while another module would account for the semantics. The composition operator will unify each rule of one module with an associated rule in the other. It remains to be seen whether the grammar semantics we define here is composi- tional and fully abstract with respect to such an operator. A different extension of these results should provide for a distribution of the type hierarchy among several grammar modules. While we as- sume in this work that all grammars are defined 102 over a given signature, it is more realistic to as- sume separate, interacting signatures. We hope to be able to explore these directions in the fu- ture. References Antonio Brogi, Evelina Lamina, and Paola Mello. 1992. Compositional model-theoretic semantics for logic programs. New Genera- tion Computing, 11:1-21. Michele Bugliesi, Evelina Lamina, and Paola Mello. 1994. Modularity in logic pro- gramming. Journal of Logic Programming, 19,20:443 502. Bob Carpenter. 1991. Typed feature struc- tures: A generalization of first-order terms. In Vijai Saraswat and Ueda Kazunori, edi- tors, Logic Programming - Proceedings of the 1991 International Symposium,, pages 187- 201, Cambridge, MA. MIT Press. Bob Carpenter. 1992. The Logic of Typed Fea- ture Structures. Cambridge Tracts in Theo- retical Computer Science. Cambridge Univer- sity Press. Gregor Erbach and Hans Uszkoreit. 1990. Grammar engineering: Problems and prospects. CLAUS report 1, University of the Saarland and German research center for Artificial Intelligence, July. Haim Gaifman and Ehud Shapiro. 1989. Fully abstract compositional semantics for logic programming. In 16th Annual ACM Sym- posium on Principles o.f Logic Programming, pages 134-142, Austin, Texas, January. J.-L. Lassez and M. J. Maher. 1984. Closures and fairness in the semantics of programming logic. Theoretical computer science, 29:167- 184. M. J. Maher. 1988. Equivalences of logic pro- grams. In .Jack Minker, editor, Foundations of Deductive Databases and Logic Program- rain.q, chapter 16, pages 627-658. 
Morgan Kaulinann Publishers, Los Altos, CA. Paolo Mancarella and Dino Pedreschi. 1988. An algebra of logic programs. In Robert A. Kowalski and Kenneth A. Bowen, edi- tors, Logic Programming: Proceedings of the F@h international conference and sympo- ,sium, pages 1006-1023, Cambridge, Mass. MIT Press. Fernando C. N. Pereira and Stuart M. Shieber. 1984. The semantics of grammar formalisms seen as computer languages. In Proceedings of the lOth international con.ference on compu- tational linguistics and the 22nd annual meet- ing o.f the association .for computational lin- guistics, pages 123-129, Stantbrd, CA, July. Stuart Shieber, Yves Schabes, and Fernando Pereira. 1995. Principles and implementation of deductive parsing. Jo'wrr~,al o]" Logic Pro- gramming, 24(1-2):3-36, July/August. Stuart M. Shieber. 1992. Constraint-Based Grammar Form, alism, s. MIT Press, Cam- bridge, Mass. Klaas Sikkel. 1997. Par'sing Schemata. Texts in Theoretical Computer Science - An EATCS Series. Springer Verlag, Berlin. M. H. Van Emden and Robert A. Kowalski. 1976. The semantics of predicate logic as a programming language..Iournal of the Asso- ciation .for Ccrmputing Machinery, 23(4):733- 742, October. Shuly Wintner and Nissim Francez. 1999. Off- line parsability and the well-tbundedness of subsumption. Journal of Logic, Language and In.formation, 8(1):1-16, January. Shuly Wintner. 1999. Compositional semantics for linguistic formalisms. IRCS Report 99-05, Institute for Research in Cognitive Science, University of Pennsylvania, 3401 Wahmt St., Suite 400A, Philadelphia, PA 19018. 103
Inducing a Semantically Annotated Lexicon via EM-Based Clustering Mats Rooth Stefan Riezler Detlef Prescher Glenn Carroll Franz Beil Institut ffir Maschinelle Sprachverarbeitung University of Stuttgart, Germany Abstract We present a technique for automatic induction of slot annotations for subcategorization frames, based on induction of hidden classes in the EM framework of statistical estimation. The models are empirically evalutated by a general decision test. Induction of slot labeling for subcategoriza- tion frames is accomplished by a further applica- tion of EM, and applied experimentally on frame observations derived from parsing large corpora. We outline an interpretation of the learned rep- resentations as theoretical-linguistic decomposi- tional lexical entries. 1 Introduction An important challenge in computational lin- guistics concerns the construction of large-scale computational lexicons for the numerous natu- ral languages where very large samples of lan- guage use are now available. Resnik (1993) ini- tiated research into the automatic acquisition of semantic selectional restrictions. Ribas (1994) presented an approach which takes into account the syntactic position of the elements whose se- mantic relation is to be acquired. However, those and most of the following approaches require as a prerequisite a fixed taxonomy of semantic rela- tions. This is a problem because (i) entailment hierarchies are presently available for few lan- guages, and (ii) we regard it as an open ques- tion whether and to what degree existing designs for lexical hierarchies are appropriate for repre- senting lexical meaning. Both of these consid- erations suggest the relevance of inductive and experimental approaches to the construction of lexicons with semantic information. This paper presents a method for automatic induction of semantically annotated subcatego- rization frames from unannotated corpora. We use a statistical subcat-induction system which estimates probability distributions and corpus frequencies for pairs of a head and a subcat frame (Carroll and Rooth, 1998). The statistical parser can also collect frequencies for the nomi- nal fillers of slots in a subcat frame. The induc- tion of labels for slots in a frame is based upon estimation of a probability distribution over tu- ples consisting of a class label, a selecting head, a grammatical relation, and a filler head. The class label is treated as hidden data in the EM- framework for statistical estimation. 2 EM-Based Clustering In our clustering approach, classes are derived directly from distributional data--a sample of pairs of verbs and nouns, gathered by pars- ing an unannotated corpus and extracting the fillers of grammatical relations. Semantic classes corresponding to such pairs are viewed as hid- den variables or unobserved data in the context of maximum likelihood estimation from incom- plete data via the EM algorithm. This approach allows us to work in a mathematically well- defined framework of statistical inference, i.e., standard monotonicity and convergence results for the EM algorithm extend to our method. The two main tasks of EM-based clustering are i) the induction of a smooth probability model on the data, and ii) the automatic discovery of class-structure in the data. Both of these aspects are respected in our application of lexicon in- duction. The basic ideas of our EM-based clus- tering approach were presented in Rooth (Ms). 
Our approach contrasts with the merely heuristic and empirical justification of similarity-based approaches to clustering (Dagan et al., to appear), for which so far no clear probabilistic interpretation has been given. The probability model we use can be found earlier in Pereira et al. (1993). However, in contrast to this approach, our statistical inference method for clustering is formalized clearly as an EM-algorithm. Approaches to probabilistic clustering similar to ours were presented recently in Saul and Pereira (1997) and Hofmann and Puzicha (1998). There also EM-algorithms for similar probability models have been derived, but applied only to simpler tasks not involving a combination of EM-based clustering models as in our lexicon induction experiment. For further applications of our clustering model see Rooth et al. (1998).
[Figure 1: Class 17: scalar change. A matrix of the 20 most probable nouns and the 30 most probable verbs of class 17 (increase.as:s, increase.aso:o, fall.as:s, pay.aso:o, reduce.aso:o, rise.as:s, exceed.aso:o, ...), with dots marking the verb-noun pairs seen in the training data.]
We seek to derive a joint distribution of verb-noun pairs from a large sample of pairs of verbs v ∈ V and nouns n ∈ N. The key idea is to view v and n as conditioned on a hidden class c ∈ C, where the classes are given no prior interpretation. The semantically smoothed probability of a pair (v, n) is defined to be:
p(v, n) = Σc∈C p(c, v, n) = Σc∈C p(c) p(v|c) p(n|c).
The joint distribution p(c, v, n) is defined by p(c, v, n) = p(c) p(v|c) p(n|c). Note that by construction, conditioning of v and n on each other is solely made through the classes c.
In the framework of the EM algorithm (Dempster et al., 1977), we can formalize clustering as an estimation problem for a latent class (LC) model as follows. We are given: (i) a sample space Y of observed, incomplete data, corresponding to pairs from V×N, (ii) a sample space X of unobserved, complete data, corresponding to triples from C×V×N, (iii) a set X(y) = {x ∈ X | x = (c, y), c ∈ C} of complete data related to the observation y, (iv) a complete-data specification pθ(x), corresponding to the joint probability p(c, v, n) over C×V×N, with parameter-vector θ = (θc, θvc, θnc | c ∈ C, v ∈ V, n ∈ N), (v) an incomplete-data specification pθ(y) which is related to the complete-data specification as the marginal probability pθ(y) = Σx∈X(y) pθ(x).
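A minimal executable rendering of this latent class model may help; the toy parameter tables and function names below are invented. The pair probability marginalizes over the hidden class, and the posterior p(c|v,n) over complete data is the quantity the EM algorithm described next redistributes.

```python
# Minimal latent-class model: v and n are independent given the hidden
# class c, and the pair probability is the marginal over classes.

P_C = {0: 0.6, 1: 0.4}
P_V = {0: {"increase": 0.7, "smile": 0.3}, 1: {"increase": 0.1, "smile": 0.9}}
P_N = {0: {"number": 0.8, "man": 0.2},    1: {"number": 0.1, "man": 0.9}}

def p_pair(v, n):
    """Smoothed probability p(v, n) = sum_c p(c) p(v|c) p(n|c)."""
    return sum(P_C[c] * P_V[c][v] * P_N[c][n] for c in P_C)

def p_class_given_pair(v, n):
    """Posterior p(c | v, n), the distribution the E-step computes."""
    z = p_pair(v, n)
    return {c: P_C[c] * P_V[c][v] * P_N[c][n] / z for c in P_C}

print(p_pair("increase", "number"))        # 0.6*0.7*0.8 + 0.4*0.1*0.1 = 0.34
print(p_class_given_pair("smile", "man"))  # mass concentrates on class 1
```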
" The EM algorithm is directed at finding a value 0 of 0 that maximizes the incomplete- data log-likelihood function L as a func- tion of 0 for a given sample y, i.e., 0 = arg max L(O) where L(O) = lnl-IyP0(y ). 0 As prescribed by the EM algorithm, the pa- rameters of L(e) are estimated indirectly by pro- ceeding iteratively in terms of complete-data es- timation for the auxiliary function Q(0;0(t)), which is the conditional expectation of the complete-data log-likelihood lnps(x) given the observed data y and the current fit of the pa- rameter values 0 (t) (E-step). This auxiliary func- tion is iteratively maximized as a function of O (M-step), where each iteration is defined by the map O(t+l) = M(O(t) = argmax Q(O; 0 (t)) 0 Note that our application is an instance of the EM-algorithm for context-free models (Baum et 105 Class 5 PROB 0.0412 0.0542 0.0340 0.0299 0.0287 0.0264 0.0213 0.0207 0.0167 0.0148 0.0141 0.0133 0.0121 0.0110 0.0106 0.0104 0.0094 0.0092 0.0089 0.0083 0.0083 ~g ?~gg o o(D g g g g o cD o o ~ggggg~gg~Sgggggggg~g ~ .D m ~k.as:s Q • • ..... :11111:: 11: think,as:s • • • • • • • • • • • shake.aso:s • • • • • • • • • • • • • smile.as:s • • ..... 1: : 11:1:1::. reply.as:s • • shrug ..... : : : : : : : : : ° : : wonder.as:s • • • • • • • • • feel.aso:s • • • • • • • • • take.aso:s • • • • .... :1111. :11 : watch.aso:s • • • • • • • • • • • ask.aso:s • • • • • • • • • • • • • • tell.aso:s • • • • • • • • • • • • • look.as:s • • • • • • • • • • • ~ive.~so:s • • • • • • • • • • • hear.aso:s • • • • • • • • • • grin.as:s • • • • • • • • • • • • answer.as:s • • • • • • • • • • _ .~ o ~ . .~ ~ :::''::::.:::::: • • • • • • Q • • • • • • • • • • • • • • • • • • • • • • 1111:11::1.1:11: • ~ • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • :':':':::::.'::: • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • t • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • • Figure 2: Class 5: communicative action al., 1970), from which the following particular- ily simple reestimation formulae can be derived. Let x = (c, y) for fixed c and y. Then M(Ovc) = Evetv)×g Po( lY) Eypo( ly) ' M(On~) = F'vcY×{n}P°(xiy) Eyp0( ly) ' E po( ly) lYl probabilistic context-free grammar of (Carroll and Rooth, 1998) gave for the British National Corpus (117 million words). e6 7o 55 Intuitively, the conditional expectation of the number of times a particular v, n, or c choice is made during the derivation is prorated by the conditionally expected total number of times a choice of the same kind is made. As shown by Baum et al. (1970), these expectations can be calculated efficiently using dynamic program- ming techniques. Every such maximization step increases the log-likelihood function L, and a se- quence of re-estimates eventually converges to a (local) maximum of L. In the following, we will present some exam- ples of induced clusters. Input to the clustering algorithm was a training corpus of 1280715 to- kens (608850 types) of verb-noun pairs partici- pating in the grammatical relations of intransi- tive and transitive verbs and their subject- and object-fillers. The data were gathered from the maximal-probability parses the head-lexicalized Figure 3: Evaluation of pseudo-disambiguation Fig. 2 shows an induced semantic class out of a model with 35 classes. 
At the top are listed the 20 most probable nouns in the $p(n|5)$ distribution and their probabilities, and at left are the 30 most probable verbs in the $p(v|5)$ distribution. 5 is the class index. Those verb-noun pairs which were seen in the training data appear with a dot in the class matrix. Verbs with suffix .as:s indicate the subject slot of an active intransitive. Similarly, .aso:s denotes the subject slot of an active transitive, and .aso:o denotes the object slot of an active transitive. Thus v in the above discussion actually consists of a combination of a verb with a subcat frame slot as:s, aso:s, or aso:o. Induced classes often have a basis in lexical semantics; class 5 can be interpreted as clustering agents, denoted by proper names, "man", and "woman", together with verbs denoting communicative action. Fig. 1 shows a cluster involving verbs of scalar change and things which can move along scales. Fig. 5 can be interpreted as involving different dispositions and modes of their execution.

3 Evaluation of Clustering Models

3.1 Pseudo-Disambiguation

We evaluated our clustering models on a pseudo-disambiguation task similar to that performed in Pereira et al. (1993), but differing in detail. The task is to judge which of two verbs v and v′ is more likely to take a given noun n as its argument, where the pair (v, n) has been cut out of the original corpus and the pair (v′, n) is constructed by pairing n with a randomly chosen verb v′ such that the combination (v′, n) is completely unseen. Thus this test evaluates how well the models generalize over unseen verbs.

The data for this test were built as follows. We constructed an evaluation corpus of (v, n, v′) triples by randomly cutting a test corpus of 3000 (v, n) pairs out of the original corpus of 1280712 tokens, leaving a training corpus of 1178698 tokens. Each noun n in the test corpus was combined with a verb v′ which was randomly chosen according to its frequency such that the pair (v′, n) appeared neither in the training nor in the test corpus. However, the elements v, v′, and n were required to be part of the training corpus. Furthermore, we restricted the verbs and nouns in the evaluation corpus to the ones which occurred at least 30 times and at most 3000 times with some verb-functor v in the training corpus. The resulting 1337 evaluation triples were used to evaluate a sequence of clustering models trained from the training corpus.

The clustering models we evaluated were parametrized in starting values of the training algorithm, in the number of classes of the model, and in the number of iteration steps, resulting in a sequence of 3 × 10 × 6 models. Starting from a lower bound of 50% random choice, accuracy was calculated as the number of times the model decided for p(n|v) > p(n|v′) out of all choices made. Fig. 3 shows the evaluation results for models trained with 50 iterations, averaged over starting values, and plotted against class cardinality. Different starting values had an effect of ±2% on the performance of the test. We obtained a value of about 80% accuracy for models between 25 and 100 classes. Models with more than 100 classes show a small but stable overfitting effect.

[Figure 4: Evaluation on smoothing task — smoothing power plotted against the number of classes.]
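The pseudo-disambiguation test itself needs only the conditional p(n|v) = p(v,n)/p(v), which the model smooths through the classes. A sketch of the accuracy computation over the evaluation triples (again ours, continuing the hypothetical arrays above):

```python
import numpy as np

def p_n_given_v(v, n, p_c, p_vc, p_nc):
    joint    = np.sum(p_c * p_vc[:, v] * p_nc[:, n])   # p(v, n)
    marginal = np.sum(p_c * p_vc[:, v])                # p(v)
    return joint / marginal

def pseudo_disambiguation_accuracy(triples, p_c, p_vc, p_nc):
    """triples: (v, n, v2) with (v, n) seen in training, (v2, n) unseen."""
    hits = sum(p_n_given_v(v, n, p_c, p_vc, p_nc)
               > p_n_given_v(v2, n, p_c, p_vc, p_nc)
               for v, n, v2 in triples)
    return hits / len(triples)
```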
3.2 Smoothing Power

A second experiment addressed the smoothing power of the model by counting the number of (v, n) pairs in the set V × N of all possible combinations of verbs and nouns which received a positive joint probability by the model. The V × N-space for the above clustering models included about 425 million (v, n) combinations; we approximated the smoothing size of a model by randomly sampling 1000 pairs from V × N and returning the percentage of positively assigned pairs in the random sample. Fig. 4 plots the smoothing results for the above models against the number of classes. Starting values had an influence of ±1% on performance. Given the proportion of the number of types in the training corpus to the V × N-space, without clustering we have a smoothing power of 0.14%, whereas for example a model with 50 classes and 50 iterations has a smoothing power of about 93%.

Corresponding to the maximum likelihood paradigm, the number of training iterations had a decreasing effect on the smoothing performance, whereas the accuracy of the pseudo-disambiguation was increasing in the number of iterations. We found a number of 50 iterations to be a good compromise in this trade-off.

4 Lexicon Induction Based on Latent Classes

The goal of the following experiment was to derive a lexicon of several hundred intransitive and transitive verbs with subcat slots labeled with latent classes.

4.1 Probabilistic Labeling with Latent Classes using EM-estimation

To induce latent classes for the subject slot of a fixed intransitive verb, the following statistical inference step was performed. Given a latent class model $p_{LC}(\cdot)$ for verb-noun pairs, and a sample $n_1, \ldots, n_M$ of subjects for a fixed intransitive verb, we calculate the probability of an arbitrary subject $n \in N$ by:

$$p(n) = \sum_{c \in C} p(c,n) = \sum_{c \in C} p(c)\,p_{LC}(n|c).$$

The estimation of the parameter-vector $\theta = (\theta_c \mid c \in C)$ can be formalized in the EM framework by viewing $p(n)$ or $p(c,n)$ as a function of $\theta$ for fixed $p_{LC}(\cdot)$. The re-estimation formulae resulting from the incomplete data estimation for these probability functions have the following form ($f(n)$ is the frequency of n in the sample of subjects of the fixed verb):

$$M(\theta_c) = \frac{\sum_{n \in N} f(n)\,p_\theta(c|n)}{\sum_{n \in N} f(n)}.$$

A similar EM induction process can be applied also to pairs of nouns, thus enabling induction of latent semantic annotations for transitive verb frames. Given an LC model $p_{LC}(\cdot)$ for verb-noun pairs, and a sample $(n_1,n_2)_1, \ldots, (n_1,n_2)_M$ of noun arguments ($n_1$ subjects, and $n_2$ direct objects) for a fixed transitive verb, we calculate the probability of its noun argument pairs by:

$$p(n_1,n_2) = \sum_{c_1,c_2 \in C} p(c_1,c_2,n_1,n_2) = \sum_{c_1,c_2 \in C} p(c_1,c_2)\,p_{LC}(n_1|c_1)\,p_{LC}(n_2|c_2).$$

Again, estimation of the parameter-vector $\theta = (\theta_{c_1 c_2} \mid c_1, c_2 \in C)$ can be formalized in an EM framework by viewing $p(n_1,n_2)$ or $p(c_1,c_2,n_1,n_2)$ as a function of $\theta$ for fixed $p_{LC}(\cdot)$. The re-estimation formulae resulting from this incomplete data estimation problem have the following simple form ($f(n_1,n_2)$ is the frequency of $(n_1,n_2)$ in the sample of noun argument pairs of the fixed verb):

$$M(\theta_{c_1 c_2}) = \frac{\sum_{n_1,n_2 \in N} f(n_1,n_2)\,p_\theta(c_1,c_2|n_1,n_2)}{\sum_{n_1,n_2 \in N} f(n_1,n_2)}.$$

Note that the class distributions $p(c)$ and $p(c_1,c_2)$ for intransitive and transitive models can be computed also for verbs unseen in the LC model.
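The intransitive case of this second EM step is small enough to sketch in full. The code below is ours (it assumes p_nc is the frozen p_LC(n|c) matrix from the pair model) and simply iterates the M(θ_c) formula for one verb's subject sample:

```python
import numpy as np

def label_slot(subject_freqs, p_nc, iterations=50):
    """Induce class weights theta_c for one intransitive verb's subject slot.

    subject_freqs: dict noun index -> frequency f(n) in the verb's sample;
    p_nc: frozen p_LC(n|c) array of shape (C, N) from the pair model.
    """
    C = p_nc.shape[0]
    theta = np.full(C, 1.0 / C)
    nouns = np.array(list(subject_freqs))
    f = np.array([subject_freqs[n] for n in nouns], dtype=float)
    for _ in range(iterations):
        post = theta[:, None] * p_nc[:, nouns]     # p(c, n) per sample noun
        post /= post.sum(axis=0, keepdims=True)    # E-step: p_theta(c|n)
        theta = (post * f).sum(axis=1) / f.sum()   # M-step: M(theta_c)
    return theta
```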
blush 5 0.982975          snarl 5 0.962094
constance      3          mandeville     2
willie   2.99737          jinkwa         2
claudia        2          man      1.99859
maggie         2          scott    1.99761
sarah          2          omalley  1.99755
girl      1.9977          shamlou        1
                          angalo         1
christina      3          corbett        1
ronni          2          southgate      1
gabriel        2          ace            1
bathsheba      2

Figure 6: Lexicon entries: blush, snarl (each verb with its class label, class probability, and the ten nouns of highest estimated frequency).

increase 17 0.923698
number       134.147
demand       30.7322
pressure     30.5844
temperature  25.9691
cost         23.9431
proportion   23.8699
size         22.8108
rate         20.9593
level        20.7651
price        17.9996

Figure 7: Scalar motion increase.

4.2 Lexicon Induction Experiment

Experiments used a model with 35 classes. From maximal probability parses for the British National Corpus derived with a statistical parser (Carroll and Rooth, 1998), we extracted frequency tables for intransitive verb/subject pairs and transitive verb/subject/object triples. The 500 most frequent verbs were selected for slot labeling. Fig. 6 shows two verbs v for which the most probable class label is 5, a class which we earlier described as communicative action, together with the estimated frequencies of $f(n)\,p_\theta(c|n)$ for those ten nouns n for which this estimated frequency is highest.

Fig. 7 shows corresponding data for an intransitive scalar motion sense of increase. Fig. 8 shows the intransitive verbs which take 17 as the most probable label. Intuitively, the verbs are semantically coherent. When compared to Levin (1993)'s 48 top-level verb classes, we found an agreement of our classification with her class of "verbs of changes of state" except for the last three verbs in the list in Fig. 8, which is sorted by probability of the class label.

Similar results for German intransitive scalar motion verbs are shown in Fig. 9. The data for these experiments were extracted from the maximal-probability parses of a 4.1 million word corpus of German subordinate clauses, yielding 418290 tokens (318086 types) of pairs of verbs or adjectives and nouns. The lexicalized probabilistic grammar for German used is described in Beil et al. (1999).

[Figure 5: Class 8: dispositions — matrix of the most probable verbs of the class (require.aso:o, show.aso:o, need.aso:o, involve.aso:o, produce.aso:o, occur.as:s, cause.aso:s, cause.aso:o, affect.aso:s, require.aso:s, mean.aso:o, suggest.aso:o, produce.aso:s, demand.aso:o, reduce.aso:s, reflect.aso:o, involve.aso:s, undergo.aso:o) against its most probable nouns; dots mark verb-noun pairs seen in training.]

decrease  0.977992      drop     0.560727
double    0.948099      grow     0.476524
increase  0.923698      vary     0.42842
decline   0.908378      improve  0.365586
rise      0.877338      climb    0.365374
soar      0.876083      flow     0.292716
fall      0.803479      cut      0.280183
slow      0.672409      mount    0.238182
diminish  0.583314

Figure 8: Scalar motion verbs.

ansteigen     0.741467  (go up)
steigen       0.720221  (rise)
absinken      0.693922  (sink)
sinken        0.656021  (go down)
schrumpfen    0.438486  (shrink)
zurückgehen   0.375039  (decrease)
anwachsen     0.316081  (increase)
stagnieren    0.215156  (stagnate)
wachsen       0.160317  (grow)
hinzukommen   0.154633  (be added)

Figure 9: German intransitive scalar motion verbs.
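The entries of Figs. 6 and 7 follow mechanically from the induced class weights. A small sketch of ours, reusing the output of the label_slot sketch above and the frozen p_nc array (the top-ten cutoff mirrors the figures):

```python
import numpy as np

def lexicon_entry(subject_freqs, theta, p_nc, top=10):
    """Most probable class label for a verb's slot, plus the nouns n
    maximizing the estimated frequency f(n) * p_theta(c|n)."""
    c = int(np.argmax(theta))                  # best class label
    scored = []
    for n, f in subject_freqs.items():
        joint = theta * p_nc[:, n]             # p_theta(c', n) for all c'
        scored.append((f * joint[c] / joint.sum(), n))
    scored.sort(reverse=True)
    return c, theta[c], scored[:top]
```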
We compared the German example of scalar motion verbs to the linguistic classification of verbs given by Schuhmacher (1986) and found an agreement of our classification with the class of "einfache Änderungsverben" (simple verbs of change) except for the verbs anwachsen (increase) and stagnieren (stagnate), which were not classified there at all.

Fig. 10 shows the most probable pair of classes for increase as a transitive verb, together with estimated frequencies for the head filler pair. Note that the object label 17 is the class found with intransitive scalar motion verbs; this correspondence is exploited in the next section.

increase (8, 17) 0.3097650
development - pressure             2.3055
fat - risk                         2.11807
communication - awareness          2.04227
supplementation - concentration    1.98918
increase - number                  1.80559

Figure 10: Transitive increase with estimated frequencies for filler pairs.

5 Linguistic Interpretation

In some linguistic accounts, multi-place verbs are decomposed into representations involving (at least) one predicate or relation per argument. For instance, the transitive causative/inchoative verb increase is composed of an actor/causative verb combining with a one-place predicate in the structure on the left in Fig. 11. Linguistically, such representations are motivated by argument alternations (diathesis), case linking and deep word order, language acquisition, scope ambiguity, by the desire to represent aspects of lexical meaning, and by the fact that in some languages, the postulated decomposed representations are overt, with each primitive predicate corresponding to a morpheme. For references and recent discussion of this kind of theory see Hale and Keyser (1993) and Kural (1996).

[Figure 11: four lexical-entry trees (tree diagrams not reproduced). First tree: linguistic lexical entry for transitive verb increase. Second: corresponding lexical entry with induced classes as relational constants. Third: indexed open class root added as conjunct in transitive scalar motion increase. Fourth: induced entry for related intransitive increase.]

We will sketch an understanding of the lexical representations induced by latent-class labeling in terms of the linguistic theories mentioned above, aiming at an interpretation which combines computational learnability, linguistic motivation, and denotational-semantic adequacy.

The basic idea is that latent classes are computational models of the atomic relation symbols occurring in lexical-semantic representations. As a first implementation, consider replacing the relation symbols in the first tree in Fig. 11 with relation symbols derived from the latent class labeling. In the second tree in Fig. 11, R17 and R8 are relation symbols with indices derived from the labeling procedure of Sect. 4. Such representations can be semantically interpreted in standard ways, for instance by interpreting relation symbols as denoting relations between events and individuals.

Such representations are semantically inadequate for reasons given in philosophical critiques of decomposed linguistic representations; see Fodor (1998) for recent discussion. A lexicon estimated in the above way has as many primitive relations as there are latent classes.
We guess there should be a few hundred classes in an approximately complete lexicon (which would have to be estimated from a corpus of hundreds of millions of words or more). Fodor's arguments, which are based on the very limited degree of genuine interdefinability of lexical items and on Putnam's arguments for contextual determination of lexical meaning, indicate that the number of basic concepts has the order of magnitude of the lexicon itself. More concretely, a lexicon constructed along the above principles would identify verbs which are labelled with the same latent classes; for instance it might identify the representations of grab and touch.

For these reasons, a semantically adequate lexicon must include additional relational constants. We meet this requirement in a simple way, by including as a conjunct a unique constant derived from the open-class root, as in the third tree in Fig. 11. We introduce indexing of the open class root (copied from the class index) in order that homophony of open class roots not result in common conjuncts in semantic representations—for instance, we don't want the two senses of decline exemplified in decline the proposal and decline five percent to have a common entailment represented by a common conjunct. This indexing method works as long as the labeling process produces different latent class labels for the different senses.

The last tree in Fig. 11 is the learned representation for the scalar motion sense of the intransitive verb increase. In our approach, learning the argument alternation (diathesis) relating the transitive increase (in its scalar motion sense) to the intransitive increase (in its scalar motion sense) amounts to learning representations with a common component R17 ∧ increase17. In this case, this is achieved.

6 Conclusion

We have proposed a procedure which maps observations of subcategorization frames with their complement fillers to structured lexical entries. We believe the method is scientifically interesting, practically useful, and flexible because:

1. The algorithms and implementation are efficient enough to map a corpus of a hundred million words to a lexicon.

2. The model and induction algorithm have foundations in the theory of parameterized families of probability distributions and statistical estimation. As exemplified in the paper, learning, disambiguation, and evaluation can be given simple, motivated formulations.

3. The derived lexical representations are linguistically interpretable. This suggests the possibility of large-scale modeling and observational experiments bearing on questions arising in linguistic theories of the lexicon.

4. Because a simple probabilistic model is used, the induced lexical entries could be incorporated in lexicalized syntax-based probabilistic language models, in particular in head-lexicalized models. This provides for potential application in many areas.

5. The method is applicable to any natural language where text samples of sufficient size, computational morphology, and a robust parser capable of extracting subcategorization frames with their fillers are available.

References

Leonard E. Baum, Ted Petrie, George Soules, and Norman Weiss. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics, 41(1):164-171.

Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats Rooth. 1999.
Inside-outside estimation of a lexicalized PCFG for German. In Proceedings of the 37th Annual Meeting of the ACL, Maryland.

Glenn Carroll and Mats Rooth. 1998. Valence induction with a head-lexicalized PCFG. In Proceedings of EMNLP-3, Granada.

Ido Dagan, Lillian Lee, and Fernando Pereira. to appear. Similarity-based models of word cooccurrence probabilities. Machine Learning.

A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38.

Jerry A. Fodor. 1998. Concepts: Where Cognitive Science Went Wrong. Oxford Cognitive Science Series, Oxford.

K. Hale and S.J. Keyser. 1993. Argument structure and the lexical expression of syntactic relations. In K. Hale and S.J. Keyser, editors, The View from Building 20. MIT Press, Cambridge, MA.

Thomas Hofmann and Jan Puzicha. 1998. Unsupervised learning from dyadic data. Technical Report TR-98-042, International Computer Science Institute, Berkeley, CA.

Murat Kural. 1996. Verb Incorporation and Elementary Predicates. Ph.D. thesis, University of California, Los Angeles.

Beth Levin. 1993. English Verb Classes and Alternations. A Preliminary Investigation. The University of Chicago Press, Chicago/London.

Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL, Columbus, Ohio.

Philip Resnik. 1993. Selection and information: A class-based approach to lexical relationships. Ph.D. thesis, University of Pennsylvania, CIS Department.

Francesco Ribas. 1994. An experiment on learning appropriate selectional restrictions from a parsed corpus. In Proceedings of COLING-94, Kyoto, Japan.

Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1998. EM-based clustering for NLP applications. In Inducing Lexicons with the EM Algorithm. AIMS Report 4(3), Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart.

Mats Rooth. Ms. Two-dimensional clusters in grammatical relations. In Symposium on Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity. AAAI 1995 Spring Symposium Series, Stanford University.

Lawrence K. Saul and Fernando Pereira. 1997. Aggregate and mixed-order Markov models for statistical language processing. In Proceedings of EMNLP-2.

Helmut Schuhmacher. 1986. Verben in Feldern. Valenzwörterbuch zur Syntax und Semantik deutscher Verben. de Gruyter, Berlin.
Corpus-Based Linguistic Indicators for Aspectual Classification

Eric V. Siegel
Department of Computer Science, Columbia University, New York, NY 10027

Abstract

Fourteen indicators that measure the frequency of lexico-syntactic phenomena linguistically related to aspectual class are applied to aspectual classification. This group of indicators is shown to improve classification performance for two aspectual distinctions, stativity and completedness (i.e., telicity), over unrestricted sets of verbs from two corpora. Several of these indicators have not previously been discovered to correlate with aspect.

1 Introduction

Aspectual classification maps clauses to a small set of primitive categories in order to reason about time. For example, events such as, "You called your father," are distinguished from states such as, "You resemble your father." These two high-level categories correspond to primitive distinctions in many domains, e.g., the distinction between procedure and diagnosis in the medical domain.

Aspectual classification further distinguishes events according to completedness (i.e., telicity), which determines whether an event reaches a culmination point in time at which a new state is introduced. For example, "I made a fire" is culminated, since a new state is introduced - something is made, whereas, "I gazed at the sunset" is non-culminated.

Aspectual classification is necessary for interpreting temporal modifiers and assessing temporal entailments (Vendler, 1967; Dowty, 1979; Moens and Steedman, 1988; Dorr, 1992), and is therefore a necessary component for applications that perform certain natural language interpretation, natural language generation, summarization, information retrieval, and machine translation tasks.

Aspect introduces a large-scale, domain-dependent lexical classification problem. Although an aspectual lexicon of verbs would suffice to classify many clauses by their main verb only, a verb's primary class is often domain-dependent (Siegel, 1998b). Therefore, it is necessary to produce a specialized lexicon for each domain.

Most approaches to automatically categorizing words measure co-occurrences between open-class lexical items (Schütze, 1992; Hatzivassiloglou and McKeown, 1993; Pereira et al., 1993). This approach is limited since co-occurrence data between open-class lexical items is sparse, and is not specialized for particular semantic distinctions such as aspect.

In this paper, we describe an expandable framework to classify verbs with linguistically-specialized numerical indicators. Each linguistic indicator measures the frequency of a lexico-syntactic marker, e.g. the perfect tense. These markers are linguistically related to aspect, so the indicators are specialized for aspectual classification in particular. We perform an evaluation of fourteen linguistic indicators over unrestricted sets of verbs from two corpora. When used in combination, this group of indicators is shown to improve classification performance for two aspectual distinctions: stativity and completedness. Moreover, our analysis reveals a predictive value for several indicators that have not previously been discovered to correlate with aspect in the linguistics literature.

The following section further describes aspect, and introduces linguistic insights that are exploited by linguistic indicators. The next section describes the set of linguistic indicators evaluated in this paper.
Then, our experimental method and results are given, followed by a discussion and conclusions.

Table 1: Aspectual classes. This table comes from Moens and Steedman (Moens and Steedman, 1988).

              EVENTS                                    STATES
              punctual        extended
Culm          CULMINATION     CULMINATED PROCESS        understand
              (recognize)     (build a house)
Non-Culm      POINT           PROCESS
              (hiccup)        (run, swim)

2 Aspect in Natural Language

Table 1 summarizes the three aspectual distinctions, which compose five aspectual categories. In addition to the two distinctions described in the previous section, atomicity distinguishes events according to whether they have a time duration (punctual versus extended). Therefore, four classes of events are derived: culmination, culminated process, process, and point. These aspectual distinctions are defined formally by Dowty (1979).

Several researchers have developed models that incorporate aspectual class to assess temporal constraints between clauses (Passonneau, 1988; Dorr, 1992). For example, stativity must be identified to detect temporal constraints between clauses connected with when, e.g., in interpreting (1),

(1) She had good strength when objectively tested.

the following temporal relationship holds:

    have  |----------------|
    test       |------|

However, in interpreting (2),

(2) Phototherapy was discontinued when the bilirubin came down to 13.

the temporal relationship is different:

    come         |------|
    discontinue           |------|

These aspectual distinctions are motivated by a series of entailment constraints. In particular, certain lexico-syntactic features of a clause, such as temporal adjuncts and tense, are constrained by and contribute to the aspectual class of the clause (Vendler, 1967; Dowty, 1979). Table 2 illustrates an array of linguistic constraints.

Table 2: Several aspectual markers and associated constraints on aspectual class, primarily from Klavans' summary (1994).

If a clause can occur:                     then it is:
with a temporal adverb (e.g., then)        Event
in progressive                             Extended Event
with a duration in-PP (e.g., in an hour)   Culm Event
in the perfect tense                       Culm Event or State

Each entry in this table describes an aspectual marker and the constraints on the aspectual category of any clause that appears with that marker. For example, a clause must be an extended event to appear in the progressive tense, e.g.,

(3) He was prospering in India. (extended),

which contrasts with,

(4) *You were noticing it. (punctual).

and,

(5) *She was seeming sad. (state).

As a second example, an event must be culminated to appear in the perfect tense, for example,

(6) She had made an attempt. (culm.),

which contrasts with,

(7) *He has cowered down. (non-culm.)

3 Linguistic Indicators

The best way to exploit aspectual markers is not obvious, since, while the presence of a marker in a particular clause indicates a constraint on the aspectual class of the clause, the absence thereof does not place any constraint. Therefore, as with most statistical methods for natural language, the linguistic constraints associated with markers are best exploited by a system that measures co-occurrence frequencies. For example, a verb that appears more frequently in the progressive is more likely to describe an event. Klavans and Chodorow (1992) pioneered the application of statistical corpus analysis to aspectual classification by ranking verbs according to the frequencies with which they occur with certain aspectual markers.

Table 3 lists the linguistic indicators evaluated for aspectual classification.
Each indica- 113 Ling Indicator Example Clause frequency ~tnot" or "never" temporal adverb no subject past/pres partic duration in-PP perfect present tense progressive manner adverb evaluation adverb past tense duration for-PP continuous adverb (not applicable) She can not explain why. I saw to it then. He was admitted. ... blood pressure going up. She built it in an hour. They have landed. I am happy. I am behaving myself. She studied diligently. They performed horribly. I was happy. I sang for ten minutes. She will live indefinitely. Table 3: Fourteen linguistic indicators evaluated for aspectual classification. tor has a unique value for each verb. The first indicator, frequency, is simply the frequency with which each verb occurs over the entire corpus. The remaining 13 indicators measure how frequently each verb occurs in a clause with the named linguistic marker. For exam- ple, the next three indicators listed measure the frequency with which verbs 1) are modified by not or never, 2) are modified by a temporal ad- verb such as then or frequently, and 3) have no deep subject (e.g., passive phrases such as, "She was admitted to the hospital"). Further details regarding these indicators and their linguistic motivation is given by Siegel (1998b). There are several reasons to expect superior classification performance when employing mul- tiple linguistic indicators in combination rather than using them individually. While individ- ual indicators have predictive value, they are predictively incomplete. This incompleteness has been illustrated empirically by showing that some indicators help for only a subset of verbs (Siegel, 1998b). Such incompleteness is due in • part to sparsity and noise of data when com- puting indicator values over a corpus with lim- ited size and some parsing errors. However, this incompleteness is also a consequence of the lin- guistic characteristics of various indicators. For example: • Aspectual coercion such as iteration com- promises indicator measurements (Moens and Steedman, 1988). For example, a punctual event appears with the progres- sive in, "She was sneezing for a week." (point --, process --. culminated process) In this example, for a week can only modify an extended event, requiring the first coer- cion. In addition, this for-PP also makes an event culminated, causing the second transformation. • Some aspectual markers such as the pseudo-cleft and manner adverbs test for intentional events, and therefore are not compatible with all events, e.g., "*I died diligently." • The progressive indicator's predictiveness for stativity is compromised by the fact that many location verbs can appear with the progressive, even in their stative sense, e.g. "The book was lying on the shelf." (Dowty, 1979) • Several indicators measure phenomena that are not linguistically constrained by any aspectuM category, e.g., the present tense, frequency and not/never indicators. 4 Method and Results In this section, we evaluate the set of fourteen linguistic indicators for two aspec- tual distinctions: stativity and completed- ness. Evaluation is over corpora of med- ical reports and novels, respectively. This data is summarized in Table 4 (available at www. CS. columbia, edu/~evs/YerbData). First, linguistic indicators are each evalu- ated individually. A training set is used to se- lect indicator value thresholds for classification. Then, we report the classification performance achieved by combining multiple indicators. 
In this case, the training set is used to optimize a model for combining indicators. In both cases, evaluation is performed over a separate test set of clauses.

The combination of indicators is performed by four standard supervised learning algorithms: decision tree induction (Quinlan, 1986), CART (Friedman, 1977), log-linear regression (Santner and Duffy, 1989) and genetic programming (GP) (Cramer, 1985; Koza, 1992). A pilot study showed no further improvement in accuracy or recall tradeoff by additional learning algorithms: Naive Bayes (Duda and Hart, 1973), Ripper (Cohen, 1995), ID3 (Quinlan, 1986), C4.5 (Quinlan, 1993), and metalearning to combine learning methods (Chan and Stolfo, 1993).

                      stativity             completedness
corpus:               3,224 med reports     10 novels
size (words):         1,159,891             846,913
parsed clauses:       97,973                75,289
training:             739 (634 events)      307 (196 culm)
testing:              739 (619 events)      308 (195 culm)
verbs in test set:    222                   204
clauses excluded:     be and have           stative

Table 4: Two classification problems on different data sets.

4.1 Stativity

Our experiments are performed across a corpus of 3,224 medical discharge summaries. A medical discharge summary describes the symptoms, history, diagnosis, treatment and outcome of a patient's visit to the hospital. These reports were parsed with the English Slot Grammar (ESG) (McCord, 1990), resulting in 97,973 clauses that were parsed fully with no self-diagnostic errors (ESG produced error messages on 12,877 of this corpus' 51,079 complex sentences).

Be and have, the two most popular verbs, covering 31.9% of the clauses in this corpus, are handled separately from all other verbs. Clauses with be as their main verb, comprising 23.9% of the corpus, always denote a state. Clauses with have as their main verb, composing 8.0% of the corpus, are highly ambiguous, and have been addressed separately by considering the direct object of such clauses (Siegel, 1998a).

4.1.1 Manual Marking

1,851 clauses from the parsed corpus were manually marked according to stativity. As a linguistic test for marking, each clause was tested for readability with "What happened was..."¹ A comparison between human markers for this test performed over a different corpus is reported below in Section 4.2.1. Of these, 373 clauses were rejected because of parsing problems. This left 1,478 clauses, divided equally into training and testing sets.

¹ Manual labeling followed a strict set of linguistically-motivated guidelines, e.g., negations were ignored (Siegel, 1998b).

Linguistic Indicator   Stative Mean   Event Mean   T-test P-value
frequency              932.89         667.57       0.0000
"not" or "never"       4.44%          1.56%        0.0000
temporal adverb        1.00%          2.70%        0.0000
no subject             36.05%         57.56%       0.0000
past/pres partic       20.98%         15.37%       0.0005
duration in-PP         0.16%          0.60%        0.0018
perfect                2.27%          3.44%        0.0054
present tense          11.19%         8.94%        0.0901
progressive            1.79%          2.69%        0.0903
manner adverb          0.00%          0.03%        0.1681
evaluation adverb      0.69%          1.19%        0.1766
past tense             62.85%         65.69%       0.2314
duration for-PP        0.59%          0.61%        0.8402
continuous adverb      0.04%          0.03%        0.8438

Table 5: Indicators discriminate between states and events.

83.8% of clauses with main verbs other than be and have are events, which thus provides a baseline method of 83.8% for comparison. Since our approach examines only the main verb of a clause, classification accuracy over the test cases has a maximum of 97.4% due to the presence of verbs with multiple classes.
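Computing the indicator values themselves is a single pass over the parsed clauses. The sketch below is ours, and the clause record fields are invented stand-ins for parser output, but it shows the intended bookkeeping: one frequency count plus thirteen marker proportions per verb.

```python
from collections import defaultdict

# Hypothetical marker names, one per indicator of Table 3 after frequency.
MARKERS = ["not_never", "temporal_adverb", "no_subject", "participle",
           "duration_in_pp", "perfect", "present_tense", "progressive",
           "manner_adverb", "evaluation_adverb", "past_tense",
           "duration_for_pp", "continuous_adverb"]

def indicator_values(clauses):
    """Map each verb stem to [frequency] + 13 marker proportions.
    Each clause is assumed to be a dict with a 'verb' key and a
    boolean flag per marker (hypothetical field names)."""
    counts = defaultdict(lambda: [0] * (len(MARKERS) + 1))
    for clause in clauses:
        row = counts[clause["verb"]]
        row[0] += 1                                   # frequency indicator
        for i, marker in enumerate(MARKERS, start=1):
            row[i] += bool(clause.get(marker))
    return {verb: [row[0]] + [c / row[0] for c in row[1:]]
            for verb, row in counts.items()}
```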
4.1.2 Individual Indicators

The values of the indicators listed in Table 5 were computed, for each verb, across the 97,973 parsed clauses from our corpus of medical discharge summaries.

The second and third columns of Table 5 show the average value for each indicator over stative and event clauses, as measured over the training examples. For example, 4.44% of stative clauses are modified by either not or never, but only 1.56% of event clauses were so modified.

The fourth column shows the results of T-tests that compare indicator values over stative training cases to those over event cases for each indicator. As shown, the differences in stative and event means are statistically significant (p < .01) for the first seven indicators.

Each indicator was tested individually for classification accuracy by establishing a classification threshold over the training data, and validating performance over the testing data using the same threshold. Only the frequency indicator succeeded in significantly improving classification accuracy by itself, achieving an accuracy of 88.0%. This improvement in accuracy was achieved simply by discriminating the popular verb show as a state, but classifying all other verbs as events. Although many domains may primarily use show as an event, its appearances in medical discharge summaries, such as, "His lumbar puncture showed evidence of white cells," primarily utilize show to denote a state.

4.1.3 Indicators in Combination

Three machine learning methods successfully combined indicator values, improving classification accuracy over the baseline measure. As shown in Table 6, the decision tree attained the highest accuracy, 93.9%. Binomial tests showed this to be a significant improvement over the 88.0% accuracy achieved by the frequency indicator alone, as well as over the other two learning methods. No further improvement in classification performance was achieved by CART.

         acc      States recall  States prec   Events recall  Events prec
dt       93.9%    74.2%          86.4%         97.7%          95.1%
GP       91.2%    47.4%          97.3%         99.7%          90.7%
llr      86.7%    34.2%          68.3%         96.9%          88.4%
bl       83.8%    0.0%           100.0%        100.0%         83.8%
bl2      94.5%    69.2%          95.4%         99.4%          94.3%

Table 6: Comparison of three learning methods and two performance baselines, distinguishing states from events.

The increase in the number of stative clauses correctly classified, i.e. stative recall, illustrates an even greater improvement over the baseline. As shown in Table 6, the three learning methods achieved stative recalls of 74.2%, 47.4% and 34.2%, as compared to the 0.0% stative recall achieved by the baseline, while only a small loss in recall over event clauses was suffered. The baseline does not classify any stative clauses correctly because it classifies all clauses as events.

Classification performance is equally competitive without the frequency indicator, although this indicator appears to dominate over others. When decision tree induction was employed to combine only the 13 indicators other than frequency, the resulting decision tree achieved 92.4% accuracy and 77.5% stative recall.
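As a rough stand-in for the decision tree induction used here (we are not reproducing the exact Quinlan-style learner or its settings), scikit-learn's CART-style tree shows the shape of the combination step over the 14-dimensional indicator vectors:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def combine_indicators(X_train, y_train, X_test, y_test):
    """X: NumPy arrays, one 14-dimensional indicator vector per clause's
    main verb; y: 1 = stative, 0 = event.
    Returns test accuracy and stative recall."""
    tree = DecisionTreeClassifier().fit(X_train, y_train)
    pred = tree.predict(X_test)
    accuracy = float(np.mean(pred == y_test))
    stative_recall = float(np.mean(pred[y_test == 1] == 1))
    return accuracy, stative_recall
```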
Of these, 109 were rejected because of parsing problems, and 160 rejected because they de- scribed states. The remaining 615 clauses were divided into training and test sets such that the distribution of classes was equal. The baseline method in this case achieves 63.3% accuracy. The linguistic test was selected for this task by Passonneau (1988): If a clause in the past progressive necessarily entails the past tense reading, the clause describes a non-culminated event. For example, We were talking just like men (non-culm.) entails that We talked just like men, but The woman was building a house (culm.) does not necessarily entail that The woman built a house. Cross-checking between linguists shows high agreement. In particular, in a pilot study manually annotating 89 clauses from this corpus according to stativity, two lin- guists agreed 81 times. Of 57 clauses agreed to be events, 46 had agreement with respect to completedness. The verb say (point), which occurs nine times in the test set, was initially marked incorrectly as culminated, since points are non-extended and therefore cannot be placed in the progres- sive. After some initial experimentation, we cor- rected the class of each occurrence of say in the data. 4.2.2 Individual Indicators Table 7 is analogous to Table 5 for complete- ness. The differences in culminated and non- culminated means are statistically significant (p < .05) for the first six indicators. However, for completedness, no indicator was shown to sig- nificantly improve classification accuracy over the baseline. 116 Linguistic Culm Non-Culm T-test Indicator Mean Mean P-value perfect 7.87% 2.88% 0.0000 temporal adverb 5.60% 3.41% 0.0000 manner adverb 0.19% 0.61% 0.0008 progressive 3.02% 5.03% 0.0031 past/pres partic 14.03% 17.98% 0.0080 no subject 30.77% 26.55% 0.0241 duration in-PP 0.27% 0.06% 0.0626 present tense 17.18% 14.29% 0.0757 duration for-PP 0.34% 0.49% 0.1756 continuous adverb 0.10% 0.49% 0.2563 frequency 345.86 286.55 0.5652 "not" or "never" 3.41% 3.15% 0.6164 evaluation adverb 0.46% 0.39% 0.7063 past tense 53.62% 54.36% 0.7132 Table 7: Indicators discriminate between culmi- nated and non-culminated events. acc Culminated Non-Culm recall prec recall prec CART 74.0% 86.2% 76.0% 53.1% 69.0% llr 70.5% 83.1% 73.6% 48.7% 62.5% lit2 67.2% 81.5% 71.0% 42.5% 57.1% GP 68.6% 77.3% 74.2% 53.6% 57.8% dt 68.5% 86.2% 70.6% 38.1% 61.4% bl 63.3% 100.0% 63.3% 0.0% 100.0% b12 70.8% 94.9% 69.8% 29.2% 76.7% Table 8: Comparison of four learning methods and two performance baselines, distinguishing cul- minated from non-culminated events. 4.2.3 Indicators in Combination As shown in Table 8, the highest accuracy, 74.0%, was attained by CART. A binomial test shows this is a significant improvement over the 63.3% baseline. The increase in non-culminated recall illus- trates a greater improvement over the baseline. As shown in Table 8, non-culminated recalls of up to 53.6% were achieved by the learning meth- ods, compared to 0.0%, achieved by the base- line. Additionally, a non-culminated F-measure of 61.9 was achieved by GP, when optimizing for F-Measure, improving over 53.7 attained by the optimal uninformed baseline. F-measure com- putes a tradeoff between recall and precision (Van Rijsbergen, 1979). In this work, we weigh recall and precision equally, in which case, recall*precision F - measure = (recall+precision)f2 Automatic methods highly prioritized the perfect indicator. 
The induced decision tree uses the perfect indicator as its first discriminator, log-linear regression ranked the perfect indica- tor as fourth out of fourteen, function trees cre- ated by GP include the perfect indicator as one of five indicators used together to increase clas- sification performance, and the perfect indicator tied as most highly correlated with completed- ness (cf. Table 7). 5 Discussion Since certain verbs are aspectually ambiguous, and, in this work, clauses are classified by their main verb only, a second baseline approach would be to simply memorize the majority as- pect of each verb in the training set, and classify verbs in the test set accordingly. In this case, test verbs that did not appear in the training set would be classified according to majority class. However, classifying verbs and clauses accord- ing to numerical indicators has several impor- tant advantages over this baseline: • Handles rare or unlabeled verbs. The results we have shown serve to estimate classification performance over "unseen" verbs that were not included in the super- vised training sample. Once the system has been trained to distinguish by indi- cator values, it can automatically classify any verb that appears in unlabeled cor- pora, since measuring linguistic indicators for a verb is fully automatic. This also ap- plies to verbs that are underrepresented in the training set. For example, one node of the resulting decision tree trained to distinguish according to stativity identifies 19 stative test cases without misclassifying any of 27 event test cases with verbs that occur only one time each in the training set. • Success when training doesn't include test verbs. To test this, all test verbs were eliminated from the training set, and log-linear regression was trained over this smaller set to distinguish according to com- pletedness. The result is shown in Table 8 ("llr2"). Accuracy remained higher than the baseline "br' (bl2 not applicable), and the recall tradeoff is felicitous. . Improved performance. Memorizing majority aspect does not achieve as high an accuracy as the linguistic indicators for 117 completedness, nor does it achieve as wide a recall tradeff for both stativity and com- pletedness. These results are indicated as the second baselines ("bl2") in tables 6 and 8, respectively. • Scalar values assigned to each verb al- low the tradeoff between recall and preci- sion to be selected for particular applica- tions by selecting the classification thresh- old. For example, in a separate study, op- timizing for F-measure resulted in a more dramatic tradeoff in recall values as com- pared to those attained when optimizing for accuracy (Siegel, 1998b). Moreover, such scalar values can provide input to sys- tems that perform reasoning on fuzzy or uncertainty knowledge. • This framework is expandable since additional indicators can be introduced by measuring the frequencies of additional aspectual markers. Furthermore, indica- tors measured over multiple clausal con- stituents, e.g., main verb-object pairs, al- leviate verb ambiguity and sparsity and improve classification performance (Siegel, 1998b). 6 Conclusions We have developed a full-scale system for aspec- tual classification with multiple linguistic indi- cators. Once trained, this system can automati- cally classify all verbs appearing in a corpus, in- cluding "unseen" verbs that were not included in the supervised training sample. 
This frame- work is expandable, since additional lexico- syntactic markers may also correlate with as- pectual class. Future work will extend this ap- proach to other semantic distinctions in natural language. Linguistic indicators successfully exploit lin- guistic insights to provide a much-needed method for aspectual classification. When com- bined with a decision tree to classify according to stativity, the indicators achieve an accuracy of 93.9% and stative recall of 74.2%. When com- bined with CART to classify according to com- pletedness, indicators achieved 74.0% accuracy and 53.1% non-culminated recall. A favorable tradeoff in recall presents an ad- vantage for applications that weigh the identi- fication of non-dominant classes more heavily (Cardie and Howe, 1997). For example, cor- rectly identifying occurrences of for that denote event durations relies on positively identifying non-culminated events. A system that summa- rizes the duration of events which incorrectly classifies "She ran (for a minute)" as culmi- nated will not detect that "for a minute" de- scribes the duration of the run event. This is be- cause durative for-PPs that modify culminated events denote the duration of the ensuing state, e.g., I leJt the room for a minute. (Vendler, 1967) Our analysis has revealed several insights re- garding individual indicators. For example, both duration in-PP and manner adverb are particularly valuable for multiple aspectual dis- tinctions - they were ranked in the top two po- sitions by log-linear modeling for both stativity and completedness. We have discovered several new linguistic in- dicators that are not traditionally linked to as- pectual class. In particular, verb frequency with no deep subject was positively correlated with both stativity and completedness. Moreover, four other indicators are newly linked to stativ- ity: (1) Verb frequency, (2) occurrences modi- fied by "not" or "never", (3) occurrences in the past or present participle, and (4) occurrences in the perfect tense. Additionally, another three were newly linked to completedness: (1) occur- rences modified by a manner adverb, (2) occur- rences in the past or present participle, and (3) occurrences in the progressive. These new correlations can be understood in pragmatic terms. For example, since points (non-culminated, punctual events, e.g., hiccup) are rare, punctual events are likely to be cul- minated. Therefore, an indicator that discrim- inates events according to extendedness, e.g., the progressive, past/present participle, and du- ration for-PP, is likely to also discriminate be- tween culminated and non-culminated events. As a second example, the not/never indica- tor correlates with stativity in medical reports because diagnoses (i.e., states) are often ruled out in medical discharge summaries, e.g., "The patient was not hypertensive," but procedures (i.e., events) that were not done are not usu- ally mentioned, e.g., '~.An examination was not performed." 118 Acknowledgements Kathleen R. McKeown was extremely helpful regard- ing the formulation of this work and Judith L. Kla- vans regarding linguistic techniques, and they, along with Min-Yen Kan and Dragomir R. Radev provided useful feedback on an earlier draft of this paper. 
This research was supported in part by the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation), the Office of Naval Research under contract N00014-95-1-0745 and by the National Science Foundation under contract GER-90-24069.

References

C. Cardie and N. Howe. 1997. Improving minority class prediction using case-specific feature weights. In D. Fisher, editor, Proceedings of the Fourteenth International Conference on Machine Learning. Morgan Kaufmann.

P.K. Chan and S.J. Stolfo. 1993. Toward multistrategy parallel and distributed learning in sequence analysis. In Proceedings of the First International Conference on Intelligent Systems for Molecular Biology.

W. Cohen. 1995. Fast effective rule induction. In Proc. 12th Intl. Conf. Machine Learning, pages 115-123.

N. Cramer. 1985. A representation for the adaptive generation of simple sequential programs. In J. Grefenstette, editor, Proceedings of the [First] International Conference on Genetic Algorithms. Lawrence Erlbaum.

B.J. Dorr. 1992. A two-level knowledge representation for machine translation: lexical semantics and tense/aspect. In James Pustejovsky and Sabine Bergler, editors, Lexical Semantics and Knowledge Representation. Springer Verlag, Berlin.

D.R. Dowty. 1979. Word Meaning and Montague Grammar. D. Reidel, Dordrecht, W. Germany.

R.O. Duda and P.E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York.

J.H. Friedman. 1977. A recursive partitioning decision rule for non-parametric classification. IEEE Transactions on Computers.

V. Hatzivassiloglou and K. McKeown. 1993. Towards the automatic identification of adjectival scales: clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the ACL, Columbus, Ohio, June. Association for Computational Linguistics.

J.L. Klavans and M. Chodorow. 1992. Degrees of stativity: the lexical representation of verb aspect. In Proceedings of the 14th International Conference on Computational Linguistics.

J.L. Klavans. 1994. Linguistic tests over large corpora: aspectual classes in the lexicon. Technical report, Columbia University Dept. of Computer Science. Unpublished manuscript.

J.R. Koza. 1992. Genetic Programming: On the programming of computers by means of natural selection. MIT Press, Cambridge, MA.

M.C. McCord. 1990. SLOT GRAMMAR. In R. Studer, editor, International Symposium on Natural Language and Logic. Springer Verlag.

M. Moens and M. Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2).

R.J. Passonneau. 1988. A computational model of the semantics of tense and aspect. Computational Linguistics, 14(2).

F. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Conference of the ACL, Columbus, Ohio. Association for Computational Linguistics.

J.R. Quinlan. 1986. Induction of decision trees. Machine Learning, 1(1):81-106.

J.R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA.

T.J. Santner and D.E. Duffy. 1989. The Statistical Analysis of Discrete Data. Springer-Verlag, New York.

H. Schütze. 1992. Dimensions of meaning. In Proceedings of Supercomputing.

E.V. Siegel and K.R. McKeown. 1996. Gathering statistics to aspectually classify sentences with a genetic algorithm. In K. Oflazer and H.
Somers, editors, Proceedings of the Second International Conference on New Methods in Language Processing, Ankara, Turkey, Sept. Bilkent University.

E.V. Siegel. 1997. Learning methods for combining linguistic indicators to classify verbs. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, Providence, RI, August.

E.V. Siegel. 1998a. Disambiguating verbs with the WordNet category of the direct object. In Proceedings of the Usage of WordNet in Natural Language Processing Systems Workshop, Montreal, Canada.

E.V. Siegel. 1998b. Linguistic Indicators for Language Understanding: Using machine learning methods to combine corpus-based indicators for aspectual classification of clauses. Ph.D. thesis, Columbia University.

C.J. Van Rijsbergen. 1979. Information Retrieval. Butterworths, London.

Z. Vendler. 1967. Verbs and times. In Linguistics in Philosophy. Cornell University Press, Ithaca, NY.
Automatic construction of a hypernym-labeled noun hierarchy from text

Sharon A. Caraballo
Dept. of Computer Science, Brown University, Providence, RI 02912
sc@cs.brown.edu

Abstract

Previous work has shown that automatic methods can be used in building semantic lexicons. This work goes a step further by automatically creating not just clusters of related words, but a hierarchy of nouns and their hypernyms, akin to the hand-built hierarchy in WordNet.

1 Introduction

The purpose of this work is to build something like the hypernym-labeled noun hierarchy of WordNet (Fellbaum, 1998) automatically from text using no other lexical resources. WordNet has been an important research tool, but it is insufficient for domain-specific text, such as that encountered in the MUCs (Message Understanding Conferences). Our work develops a labeled hierarchy based on a text corpus.

In this project, nouns are clustered into a hierarchy using data on conjunctions and appositives appearing in the Wall Street Journal. The internal nodes of the resulting tree are then labeled with hypernyms for the nouns clustered underneath them, also based on data extracted from the Wall Street Journal. The resulting hierarchy is evaluated by human judges, and future research directions are discussed.

2 Building the noun hierarchy

The first stage in constructing our hierarchy is to build an unlabeled hierarchy of nouns using bottom-up clustering methods (see, e.g., Brown et al. (1992)). Nouns are clustered based on conjunction and appositive data collected from the Wall Street Journal corpus. Some of the data comes from the parsed files 2-21 of the Wall Street Journal Penn Treebank corpus (Marcus et al., 1993), and additional parsed text was obtained by parsing the 1987 Wall Street Journal text using the parser described in Charniak et al. (1998).

From this parsed text, we identified all conjunctions of noun phrases (e.g., "executive vice-president and treasurer" or "scientific equipment, apparatus and disposables") and all appositives (e.g., "James H. Rosenfield, a former CBS Inc. executive" or "Boeing, a defense contractor"). The idea here is that nouns in conjunctions or appositives tend to be semantically related, as discussed in Riloff and Shepherd (1997) and Roark and Charniak (1998). Taking the head words of each NP and stemming them results in data for about 50,000 distinct nouns.

A vector is created for each noun containing counts for how many times each other noun appears in a conjunction or appositive with it. We can then measure the similarity of the vectors for two nouns by computing the cosine of the angle between these vectors, as

$$\cos(v,w) = \frac{v \cdot w}{|v|\,|w|}.$$

To compare the similarity of two groups of nouns, we define similarity as the average of the cosines between each pair of nouns made up of one noun from each of the two groups:

$$sim(A,B) = \frac{\sum_{v,w} \cos(v,w)}{size(A)\,size(B)},$$

where v ranges over all vectors for nouns in group A, w ranges over the vectors for group B, and size(x) represents the number of nouns which are descendants of node x.

We want to create a tree of all of the nouns in this data using standard bottom-up clustering techniques as follows. Put each noun into its own node. Compute the similarity between each pair of nodes using the cosine method. Find the two most similar nouns and combine them by giving them a common parent (and removing the child nodes from future consideration).
In practice, we cannot follow exactly that algorithm, because maintaining a list of the cosines between every pair of nodes requires a tremendous amount of memory. With 50,000 nouns, we would initially require a 50,000 x 50,000 array of values (or a triangular array of about half this size). With our current hardware, the largest array we can comfortably handle is about 100 times smaller; that is, we can build a tree starting from approximately 5,000 nouns.

We handle this limitation by processing the nouns in batches. Initially 5,000 nouns are read in. We cluster these until we have 2,500 nodes. Then 2,500 more nouns are read in, to bring the total to 5,000 again, and once again we cluster until 2,500 nodes remain. This process is repeated until all nouns have been processed.

Since the lowest-frequency nouns are clustered based on very little information and have a greater tendency to be clustered badly, we chose to filter some of these out. By reducing the number of nouns to be read, a much nicer structure is obtained. We now only consider nouns with a vector of length at least 2. There are approximately 20,000 nouns as the leaves in our final binary tree structure. Our next step is to try to label each of the internal nodes with a hypernym describing its descendant nouns.

3 Assigning hypernyms

Following WordNet, a word A is said to be a hypernym of a word B if native speakers of English accept the sentence "B is a (kind of) A." To determine possible hypernyms for a particular noun, we use the same parsed text described in the previous section. As suggested in Hearst (1992), we can find some hypernym data in the text by looking for conjunctions involving the word "other", as in "X, Y, and other Zs" (patterns 3 and 4 in Hearst). From this phrase we can extract that Z is likely a hypernym for both X and Y. This data is extracted from the parsed text, and for each noun we construct a vector of hypernyms, with a value of 1 if a word has been seen as a hypernym for this noun and 0 otherwise. These vectors are associated with the leaves of the binary tree constructed in the previous section.

For each internal node of the tree, we construct a vector of hypernyms by adding together the vectors of its children. We then assign a hypernym to this node by simply choosing the hypernym with the largest value in this vector; that is, the hypernym which appeared with the largest number of the node's descendant nouns. (In case of ties, the hypernyms are ordered arbitrarily.) We also list the second- and third-best hypernyms, to account for cases where a single word does not describe the cluster adequately, or cases where there are a few good hypernyms which tend to alternate, such as "country" and "nation". (There may or may not be any kind of semantic relationship among the hypernyms listed: because of the method of selecting hypernyms, they may be synonyms of each other, have hypernym-hyponym relationships of their own, or be completely unrelated.) If a hypernym has occurred with only one of the descendant nouns, it is not listed as one of the best hypernyms, since we have insufficient evidence that the word could describe this class of nouns. Not every node has sufficient data to be assigned a hypernym.
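The labeling step is equally simple to sketch. The code below reuses the hypothetical Node class from the previous sketch and assumes leaf_hypernyms maps each noun to the set of words seen as its hypernyms in "X, Y, and other Zs" patterns; it is an illustration under those assumptions, not the authors' implementation, and recomputes counts per node for clarity rather than caching them.

```python
from collections import Counter

def hypernym_counts(node, leaf_hypernyms):
    # Leaf vectors are 0/1 indicators; an internal node's vector is the
    # sum of its children's vectors, as described above.
    if node.label is not None:
        return Counter(leaf_hypernyms.get(node.label, ()))
    total = Counter()
    for child in node.children:
        total.update(hypernym_counts(child, leaf_hypernyms))
    return total

def label_node(node, leaf_hypernyms, k=3):
    counts = hypernym_counts(node, leaf_hypernyms)
    # Candidates supported by only one descendant noun are discarded.
    ranked = [(h, c) for h, c in counts.most_common() if c >= 2]
    return [h for h, _ in ranked[:k]]   # may be empty: node stays unlabeled
```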
4 Compressing the tree

The labeled tree constructed in the previous section tends to be extremely redundant. Recall that the tree is binary. In many cases, a group of nouns really does not have an inherent tree structure, for example, a cluster of countries. Although it is possible that a reasonable tree structure could be created with subtrees of, say, European countries, Asian countries, etc., recall that we are using single-word hypernyms. A large binary tree of countries would ideally have "country" (or "nation") as the best hypernym at every level. We would like to combine these subtrees into a single parent labeled "country" or "nation", with each country appearing as a leaf directly beneath this parent. (Obviously, the tree will no longer be binary.)

Another type of redundancy can occur when an internal node is unlabeled, meaning a hypernym could not be found to describe its descendant nouns. Since the tree's root is labeled, somewhere above this node there is necessarily a node labeled with a hypernym which applies to its descendant nouns, including those which are descendants of this node. We want to move this node's children directly under the nearest labeled ancestor.

We compress the tree using the following very simple algorithm: in depth-first order, examine the children of each internal node. If the child is itself an internal node, and it either has no best hypernym or the same three best hypernyms as its parent, delete this child and make its children into children of the parent instead.
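One possible rendering of this compression pass, again in Python and assuming each node additionally carries a hypernyms attribute (the possibly empty ordered list produced by label_node above):

```python
def compress(node):
    """Flatten redundant structure in place, in depth-first order."""
    i = 0
    while i < len(node.children):
        child = node.children[i]
        if child.children and (not child.hypernyms
                               or child.hypernyms == node.hypernyms):
            # Splice the grandchildren in where the child was.
            node.children[i:i + 1] = child.children
            # Do not advance i: the promoted nodes must be re-examined.
        else:
            compress(child)
            i += 1
    return node
```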
5 Results and evaluation

There are 20,014 leaves (nouns) and 654 internal nodes in the final tree (reduced from 20,013 internal nodes in the uncompressed tree). The top-level node in our learned tree is labeled "product/analyst/official". (Recall from the previous discussion that we do not assume any kind of semantic relationship among the hypernyms listed for a particular cluster.) Since these hypernyms are learned from the Wall Street Journal, they are domain-specific labels rather than the more general "thing/person". However, if the hierarchy were to be used for text from the financial domain, these labels may be preferred.

The next level of the hierarchy, the children of the root, is as shown in Table 1. ("Conductor" seems out-of-place on this list; see the next section for discussion.)

Table 1: The children of the root node.

Hypernyms                    # nouns
vision                            22
bank/group/bond                   95
conductor                         51
problem                          151
apparel/clothing/knitwear        113
item/paraphernalia/car           226
felony/charge/activity           109
system                            47
official/product/right            88
official/company/product      10,266
product/factor/service         6,056
agency/area                       60
event/item                       135
animal/group/people              188
country/nation/producer          348
product/item/crop                300
diversion                        130
problem/drug/disorder            306
wildlife                          35

These numbers do not add up to 20,014 because 1,288 nouns are attached directly to the root, meaning that they could not be clustered to any greater level of detail. These tend to be nouns for which little data was available, generally proper nouns (e.g., Reindel, Yaghoubi, Igoe).

To evaluate the hierarchy, 10 internal nodes dominating at least 20 nouns were selected at random. For each of these nodes, we randomly selected 20 of the nouns from the cluster under that node. Three human judges were asked to evaluate, for each noun and each of the (up to) three hypernyms listed as "best" for that cluster, whether they were actually in a hyponym-hypernym relation. The judges were students working in natural language processing or computational linguistics at our institution who were not directly involved in the research for this project. Five "noise" nouns randomly selected from elsewhere in the tree were also added to each cluster without the judges' knowledge, to verify that the judges were not overly generous. Some nouns, especially proper nouns, were not recognized by the judges. For any noun that was not evaluated by at least two judges, we evaluated the noun/hypernym pair by examining the appearances of that noun in the source text and verifying that the hypernym was correct for the predominant sense of the noun.

Table 2 presents the results of this evaluation. The table lists only results for the actual candidate hyponym nouns, not the noise words. The "Hypernym 1" column indicates whether the "best" hypernym was considered correct, while the "Any hypernym" column indicates whether any of the listed hypernyms were accepted. Within those columns, "majority" lists the opinion of the majority of judges, and "any" indicates the hypernyms that were accepted by even one of the judges.

Table 2: The results of the judges' evaluation.

                                  Hypernym 1            Any hypernym
Three best hypernyms            majority     any      majority     any
worker/craftsmen/personnel         13        13          13        13
cost/expense/area                   7        10           9        10
cost/operation/problem              6         8          11        17
legislation/measure/proposal        3         5           9        18
benefit/business/factor             2         2           2         5
factor                              2         7           2         7
lawyer                             14        14          14        14
firm/investor/analyst              13        13          14        14
bank/firm/station                   0         0          15        17
company                             6         6           6         6
AVERAGE                      6.6/33.0%  7.8/39.0%  9.5/47.5%  12.1/60.5%

The "Hypernym 1/any" column can be used to compare results to Riloff and Shepherd (1997). For five hand-selected categories, each with a single hypernym, and the 20 nouns their algorithm scored as the best members of each category, at least one judge marked on average about 31% of the nouns as correct. Using randomly-selected categories and randomly-selected category members, we achieved 39%.

By the strictest criteria, our algorithm produces correct hyponyms for a randomly-selected hypernym 33% of the time. Roark and Charniak (1998) report that for a hand-selected category, their algorithm generally produces 20% to 40% correct entries. Furthermore, if we loosen our criteria to consider also the second- and third-best hypernyms, 60% of the nouns evaluated were assigned to at least one correct hypernym according to at least one judge.

The "bank/firm/station" cluster consists largely of investment firms, which were marked as incorrect for "bank", resulting in the poor performance on the Hypernym 1 measures for this cluster. The last cluster in the list, labeled "company", is actually a very good cluster of cities that, because of sparse data, was assigned a poor hypernym. Some of the suggestions in the following section might correct this problem.

Of the 50 noise words, a few were actually rated as correct as well, as shown in Table 3. This is largely because the noise words were selected truly at random, so that a noise word for the "company" cluster may not have been in that particular cluster but may still have appeared under a "company" hypernym elsewhere in the hierarchy.

Table 3: The results of the judges' evaluation of noise words.

                     Hypernym 1            Any hypernym
                   majority     any      majority     any
noise words         1/2.0%    4/8.0%      2/4.0%    4/8.0%

6 Discussion and future directions

Future work should benefit greatly from using data on the hypernyms of hypernyms. In our current tree, the best hypernym for the entire tree is "product"; however, many times nodes deeper in the tree are given this label also.
For example, we have a cluster including many forms of currency, but because there is little data for these particular words, the only hypernym found was "product". However, the parent of this node has the best hypernym "currency". If we knew that "product" was a hypernym of "currency", we could detect that the parent node's label is more specific and simply absorb the child node into the parent. Furthermore, we may be able to use data on the hypernyms of hypernyms to give better labels to some nodes that are currently labeled simply with the best hypernyms of their subtrees, such as a node labeled "product/analyst" which has two subtrees, one labeled "product" and containing words for things, the other labeled "analyst" and containing names of people. We would like to instead label this node something like "entity". It is not yet clear whether corpus data will provide sufficient data for hypernyms at such a high level of the tree, but depending on the intended application for the hierarchy, this level of generality might not be required.

As noted in the previous section, one major spurious result is a cluster of 51 nouns, mainly people, which is given the hypernym "conductor". The reason for this is that few of the nouns appear with hypernyms, and two of them (Giulini and Ozawa) appear in the same phrase listing conductors, thus giving "conductor" a count of two, sufficient to be listed as the only hypernym for the cluster. It might be useful to have some stricter criterion for hypernyms, say, that they occur with a certain percentage of the nouns below them in the tree. Additional hypernym data would also be helpful in this case, and should be easily obtainable by looking for other patterns in the text as suggested by Hearst (1992).

Because the tree is built in a binary fashion, when, e.g., three clusters should all be distinct children of a common parent, two of them must merge first, giving an artificial intermediate level in the tree. For example, in the current tree a cluster with best hypernym "agency" and one with best hypernym "exchange" (as in "stock exchange") have a parent with the two best hypernyms "agency/exchange", rather than both of these nodes simply being attached to the next level up with best hypernym "group". It might be possible to correct for this situation by comparing the hypernyms for the two clusters and, if there is little overlap, deleting their parent node and attaching them to their grandparent instead.

It would be useful to try to identify terms made up of multiple words, rather than just using the head nouns of the noun phrases. Not only would this provide a more useful hierarchy, or at least perhaps one that is more useful for certain applications, but it would also help to prevent some errors. Hearst (1992) gives an example of a potential hyponym-hypernym pair "broken bone/injury".
Using our algorithm, we would learn that "injury" is a hypernym of "bone". Ideally, this would not appear in our hierarchy since a more common hypernym would be chosen instead, but it is possible that in some cases a bad hypernym would be found based on multiple-word phrases. A discussion of the difficulties in deciding how much of a noun phrase to use can be found in Hearst.

Ideally, a useful hierarchy should allow for multiple senses of a word, and this is an area which can be explored in future work. However, domain-specific text tends to greatly constrain which senses of a word will appear, and if the learned hierarchy is intended for use with the same type of text from which it was learned, it is possible that this would be of limited benefit.

We used parsed text for these experiments because we believed we would get better results and the parsed data was readily available. However, it would be interesting to see if parsing is necessary or if we can get equivalent or nearly-equivalent results doing some simpler text processing, as suggested in Ahlswede and Evens (1988). Both Hearst (1992) and Riloff and Shepherd (1997) use unparsed text.

7 Related work

Pereira et al. (1993) used clustering to build an unlabeled hierarchy of nouns. Their hierarchy is constructed top-down, rather than bottom-up, with nouns being allowed membership in multiple clusters. Their clustering is based on verb-object relations rather than on the noun-noun relations that we use. Future work on our project will include an attempt to incorporate verb-object data as well in the clustering process. The tree they construct is also binary with some internal nodes which seem to be "artificial", but for evaluation purposes they disregard the tree structure and consider only the leaf nodes. Unfortunately, it is difficult to compare their results to ours since their evaluation is based on the verb-object relations.

Riloff and Shepherd (1997) suggested using conjunction and appositive data to cluster nouns; however, they approximated this data by just looking at the nearest NP on each side of a particular NP. Roark and Charniak (1998) built on that work by actually using conjunction and appositive data for noun clustering, as we do here. (They also use noun compound data, but in a separate stage of processing.) Both of these projects have the goal of building a single cluster of, e.g., vehicles, and both use seed words to initialize a cluster with nouns belonging to it.

Hearst (1992) introduced the idea of learning hypernym-hyponym relationships from text and gives several examples of patterns that can be used to detect these relationships, including those used here, along with an algorithm for identifying new patterns. This work shares with ours the feature that it does not need large amounts of data to learn a hypernym; unlike in much statistical work, a single occurrence is sufficient.

The hyponym-hypernym pairs found by Hearst's algorithm include some that Hearst describes as "context and point-of-view dependent," such as "Washington/nationalist" and "aircraft/target". Our work is somewhat less sensitive to this kind of problem since only the most common hypernym of an entire cluster of nouns is reported, so much of the noise is filtered.

8 Conclusion

We have shown that hypernym hierarchies of nouns can be constructed automatically from text with similar performance to semantic lexicons built automatically for hand-selected hypernyms.
With the addition of some improvements we have identified, we believe that these automatic methods can be used to construct truly useful hierarchies. Since the hierarchy is learned from sample text, it could be trained on domain-specific text to create a hierarchy that is more applicable to a particular domain than a general-purpose resource such as WordNet.

9 Acknowledgments

Thanks to Eugene Charniak for helpful discussions and for the data used in this project. Thanks also to Brian Roark, Heidi J. Fox, and Keith Hall for acting as judges in the project evaluation. This research is supported in part by NSF grant IRI-9319516 and by ONR grant N0014-96-1-0549.

References

Thomas Ahlswede and Martha Evens. 1988. Parsing vs. text processing in the analysis of dictionary definitions. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 217-224.

Peter F. Brown, Vincent J. Della Pietra, Peter V. DeSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18:467-479.

Eugene Charniak, Sharon Goldwater, and Mark Johnson. 1998. Edge-based best-first chart parsing. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 127-133. Association for Computational Linguistics.

Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.

Marti A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the Fourteenth International Conference on Computational Linguistics.

Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19:313-330.

Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 183-190.

Ellen Riloff and Jessica Shepherd. 1997. A corpus-based approach for building semantic lexicons. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 117-124.

Brian Roark and Eugene Charniak. 1998. Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction. In COLING-ACL '98: 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics: Proceedings of the Conference, pages 1110-1116.
Using aggregation for selecting content when generating referring expressions

John A. Bateman
Sprach- und Literaturwissenschaften, University of Bremen, Bremen, Germany
e-mail: bateman@uni-bremen.de

Abstract

Previous algorithms for the generation of referring expressions have been developed specifically for this purpose. Here we introduce an alternative approach based on a fully generic aggregation method also motivated for other generation tasks. We argue that the alternative contributes to a more integrated and uniform approach to content determination in the context of complete noun phrase generation.

1 Introduction

When generating referring expressions (RE), it is generally considered necessary to provide sufficient information so that the reader/hearer is able to identify the intended referent. A number of broadly related referring expression algorithms have been developed over the past decade based on the natural metaphor of 'ruling out distractors' (Reiter, 1990; Dale and Haddock, 1991; Dale, 1992; Dale and Reiter, 1995; Horacek, 1995). These special-purpose algorithms constitute the 'standard' approach to determining content for RE-generation at this time; they have been developed solely for this purpose and have evolved to meet some specialized problems. In particular, it was found early on that the most ambitious RE goal, that of always providing the maximally concise referring expression necessary for the context ('full brevity'), is NP-hard; subsequent work on RE-generation has therefore attempted to steer a course between computational tractability and coverage. One common feature of the favored algorithmic simplifications is their incrementality: potential descriptions are successively refined (usually non-destructively) to produce the final RE, which therefore may or may not be minimal. This is also often motivated on grounds of psychological plausibility.

In this paper, we introduce a completely different metaphor for determining RE-content that may be considered in contrast to, or in combination with, previous approaches. The main difference lies in an orientation to the organization of a data set as a whole rather than to individual components as revealed during incremental search. Certain opportunities for concise expression that may otherwise be missed are then effectively isolated. The approach applies results from the previously unrelated generation task of 'aggregation', which is concerned with the grouping together of structurally related information.

2 The aggregation-based metaphor

Aggregation in generation has hitherto generally consisted of lists of more or less ad hoc, or case-specific, rules that group together particular pre-specified configurations (cf. Dalianis and Hovy (1996) and Shaw (1998)); however, Bateman et al. (1998) provide a more rigorous and generic foundation for aggregation by applying results from data summarization originally developed for multimedia information presentation (Kamps, 1997). Bateman et al. set out a general-purpose method for constructing aggregation lattices which succinctly represent all possible structural aggregations for any given data set ('structural' aggregation refers to opportunities for grouping inherent in the structure of the data, ignoring additional opportunities for grouping that might be found by modifying the data inferentially). The application of the aggregation-based metaphor to RE-content determination is motivated by the observation that if something is a 'potential distractor' for some intended referent, then it is equally, under appropriate conditions, a candidate for aggregation together with the intended referent.
That is, what makes something a distractor is precisely the same as that which makes it a potential co-member of some single grouping created by structural aggregation. To see this, consider the following simple example discussed by Dale and Reiter (1995), consisting of three objects with various properties, re-represented here in a simple association list format (this style of presentation is not particularly perspicuous, but space precludes providing intelligible graphics, especially for the more complex situations used as examples below; in case of difficulties, we recommend quickly sketching the portrayed situation as a memory aid):

(o1 (type dog) (size small) (color black))
(o2 (type dog) (size large) (color white))
(o3 (type cat) (size small) (color black))

To successfully refer to the first object o1, sufficient information must be given so as to 'rule out' the possible distractors: therefore, type alone is not sufficient, since this fails to rule out o2, nor is any combination of size or color sufficient, since these fail to rule out o3. Successful RE's are 'the small dog' or 'the black dog' and not 'the small one', 'the dog', or 'the black one'.

Considering the data set from the aggregation perspective, we ask instead how to refer most succinctly to all of the objects o1, o2, o3. There are two basic alternatives, indicated by bracketing in the following (the exact rendering of these variants in English or any other language is not at issue here):

1. (A (small black and a large white) dog) and (a small black cat).
2. (A small black (dog and cat)) and (a large white dog).

The former groups together o1 and o2 on the basis of their shared type, while the latter groups together o1 and o3 on the basis of their shared size and color properties. Significantly, these are just the possible sources of distraction that Dale and Reiter discuss.

The set of possible aggregations can be determined from an aggregation lattice corresponding to the data set. We construct the lattice using methods developed in Formal Concept Analysis (FCA) (Wille, 1982). For the example at hand, the aggregation lattice is built up as follows. The set of objects is considered as a relation table where the columns represent the object attributes and their values, and the rows represent the individual objects. Since the attributes (e.g., 'color', 'size', etc.) can take multiple values (e.g., 'large', 'small'), this representation of the data is called a multivalued context. This is then converted into a one-valued context by comparing all rows of the table pairwise and, for each attribute (i.e., each column in the table), entering one distinguished value (e.g., T or 1) if the corresponding values of the attributes compared are identical, and another distinguished value (nil or 0) if they are not. The one-valued context for the objects o1-o3 is thus:

object pairs    type    size    color
o1-o2             1       0       0
o1-o3             0       1       1
o2-o3             0       0       0

This indicates that objects o1 and o2 have equal values for their type attribute but otherwise not, while o1 and o3 have equal values for both their size and color attributes but not for their type attributes. The one-valued context readily supports the derivation of formal concepts.
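The conversion from a multivalued to a one-valued context is mechanical and easy to prototype. Below is a minimal Python sketch under the assumption that each object is a dictionary of attribute-value pairs; all names are illustrative, not from the original paper.

```python
from itertools import combinations

def one_valued_context(objects):
    """Compare all object rows pairwise; a cell is 1 iff the two
    objects agree on that attribute's value (T/nil in the text)."""
    attributes = sorted({a for obj in objects.values() for a in obj})
    context = {}
    for (n1, o1), (n2, o2) in combinations(sorted(objects.items()), 2):
        context[(n1, n2)] = {a: int(o1.get(a) == o2.get(a) and a in o1)
                             for a in attributes}
    return context

objects = {
    "o1": {"type": "dog", "size": "small", "color": "black"},
    "o2": {"type": "dog", "size": "large", "color": "white"},
    "o3": {"type": "cat", "size": "small", "color": "black"},
}
# one_valued_context(objects)[("o1", "o3")]
#   -> {'color': 1, 'size': 1, 'type': 0}
```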
A formal concept is defined in FCA as an extension-intension pair (A, B), where the extension is a subset A of the set of objects and the intension is a subset B of the set of attributes. For any given concept, each element of the extension must accept all attributes of the intension. Visually, this corresponds to permuting any rows and columns of the one-valued context and noting all the maximally 'filled' (i.e., containing 1's or T's) rectangles. A 'subconcept' relation, '≤_FCA', is defined over the set of formal concepts thus:

(A, B) \le_{\mathrm{FCA}} (A^*, B^*) \quad \text{iff} \quad A \subseteq A^* \; (\Leftrightarrow \; B^* \subseteq B)

The main theorem of FCA then shows that ≤_FCA induces a complete lattice structure over the set of formal concepts. The resulting lattice for the present example is shown in Figure 1.

[Figure 1: Simple aggregation lattice. One node is labeled {TYPE} with m(o1)=m(o2), the other {COLOR, SIZE} with m(o1)=m(o3).]

Each node is shown labeled with two pieces of information: the intension and the extension. The intensions consist simply of the sets of properties involved. The representations of the extensions emphasize the function of the nodes in the lattice, i.e., that the indicated objects (e.g., o1 and o2 for the leftmost node) are equal with respect to all the attributes contained in the intension (e.g., type for the leftmost node).

This lattice may be construed as an aggregation lattice because the functional redundancies that are captured are precisely those redundancies that indicate opportunities for structurally-induced aggregation. The leftmost node shows that the attribute type may be aggregated if we describe o1 together with o2, and the rightmost node shows that {color, size} may be aggregated when describing o1 and o3.

Now, given the equivalence between aggregation possibilities and 'distractors', we can also use the lattice to drive RE-content determination. Assume again that we wish to refer to object o1. In essence, a combination of attributes must be selected that is not subject to aggregation; any combination susceptible to aggregation will necessarily 'confuse' the objects for which the aggregation holds when only one of the objects, or co-aggregates, is mentioned. For example, the rightmost node shows that an RE with the content size&color(o1), e.g., 'the small black thing', confuses o1 and o3. To select attributes that are appropriate, we first examine the minimal nodes of the lattice to see if any of these do not 'impinge' (i.e., have no aggregation consequences; we make this more precise below) on the intended referent. In this case, however, all these nodes do mention o1 and so no strong preference for the RE-content is delivered by the data set itself. This appears to us to be the correct characterization of the reference situation: precisely which attributes are selected should now be determined by factors not attributable to 'distraction' but rather by more general communicative goals involving discourse and the requirements of the particular language. The resulting attribute combinations are then checked against the aggregation lattice for their referential effectiveness in a manner reminiscent of the incremental approach of previous algorithms. Selection of type is not sufficient, but the addition of either color or size is (type ∧ color = ⊥ and type ∧ size = ⊥, where ⊥ is the bottom of the lattice).

The reference situation is quite different when we wish to refer to either o2 or o3. For both of these cases there exists a non-impinging node (the right- and leftmost nodes respectively). This establishes immediate attribute preferences based on the organizational properties of the data. Content-determination for o2 should include at least size or color ('the white thing', 'the large thing') and for o3 at least type ('the cat'). These RE's are minimal.
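The formal concepts themselves can be enumerated naively by closing the row intents under intersection, which is adequate for contexts of this size. The sketch below consumes the context dictionary from the previous sketch; it is a toy illustration (worst-case exponential in the number of attributes), not an efficient FCA algorithm.

```python
def aggregation_concepts(context):
    # Intents of concepts are exactly the intersections of row intents
    # (plus the full attribute set, the bottom concept's intent).
    attributes = frozenset(a for row in context.values() for a in row)
    intents = {attributes}
    for row in context.values():
        intent = frozenset(a for a, v in row.items() if v)
        intents |= {intent & other for other in intents} | {intent}
    concepts = []
    for intent in sorted(intents, key=len):
        extent = [pair for pair, row in context.items()
                  if all(row[a] for a in intent)]
        concepts.append((extent, sorted(intent)))
    return concepts
```

Run on the dog/cat context above, this yields exactly the four concepts of Figure 1's lattice: the top (all pairs, empty intent), the bottom (empty extent, all attributes), and the two labeled nodes {type} and {color, size}.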
3 Examples of aggregation-driven RE-content determination

In this section, we briefly summarize some more significant examples of RE-content determination using aggregation. Length limitations will require some shortcuts to be taken in the discussion, and we will not follow up all of the alternative RE's that can be motivated.

3.1 Minimal descriptions

Dale and Reiter (1995) consider a number of variant algorithms that deviate from full brevity in order to achieve more attractive computational behavior. The first variant they consider relies on a 'Greedy Heuristic' (Dale, 1989; Johnson, 1974); they illustrate that this algorithm sacrifices minimality by constructing an RE for object o1 in the context of the following properties concerning a set of seven cups of varying size (large, small), color (red, green, blue) and material (paper, plastic):

(o1 (size large)(color red)(material plastic))
(o2 (size small)(color red)(material plastic))
(o3 (size small)(color red)(material paper))
(o4 (size medium)(color red)(material paper))
(o5 (size large)(color green)(material paper))
(o6 (size large)(color blue)(material paper))
(o7 (size large)(color blue)(material plastic))

The greedy algorithm produces 'the large red plastic cup' although the true minimum description is 'the large red cup'.

The aggregation-based approach to the same data set provides an interesting contrast in result. The aggregation lattice for the data is given in Figure 2. The lattice is constructed as before: first by converting the multivalued context of the original data set to a one-valued context, and then by imposing the subconcept relation over the complete set of formal concepts.

[Figure 2: Aggregation lattice for the 'seven cups' example. Among the legible node labels are {COLOR} with m(o1)=m(o2)=m(o3)=m(o4) and {SIZE} with m(o1)=m(o5)=m(o6)=m(o7); further nodes carry equalities such as m(o1)=m(o2), m(o3)=m(o4), m(o6)=m(o7), m(o1)=m(o7), and m(o5)=m(o6).]

The nodes of the lattice are also labeled as before, although we rely here on the formal properties of the lattice to avoid redundant labeling. For example, the two sets of attribute equalities given for node 1 (one relating o2 and o3, the other relating o6 and o7) apply to both color (inherited from node 2) and size (inherited from node 4); we do not, therefore, repeat the labeling of properties for node 1. Similarly, and due to the bidirectionality inherent in the subconcept definition, the attribute equalities of node 1 are also 'inherited' upwards both to node 2 and to node 4. The attribute equalities of node 4 therefore include contributions from both node 1 and node 6. We will generally indicate in the labeling only the additional information arising from the structure of the lattice, and even then only when it is relevant to the discussion. So for node 4 we indicate that o1, o5, o6 and o7 now form a single attribute equality set made up of three contributions: one from node 1 (o6 and o7) and two from node 6. Their combination in a single set is only possible at node 4 because node 4 is a superconcept of both node 1 and node 6. The other attribute equality set for node 1 (o2 and o3) does not add further information at node 4 and so is left implicit in node 4's labeling.
The labeling or non-labeling of redundant information has of course no formal consequences for the information contained in the lattice.

To determine RE-content appropriate for referring to object o1, we again look for minimal (i.e., nearest the bottom) concepts, or aggregation sets, that do not 'impinge' on o1. The only node satisfying this requirement is node 1. This tells us that the set of possible co-aggregates for o1 with respect to the properties {size & color} is empty, which is equivalent to stating that there are no objects in the data set which might be confused with o1 if size&color(o1) forms the RE-content. Thus, 'the large red cup' may be directly selected, and this is precisely the true minimal RE for this data set.

3.2 Relational descriptions: restricting recursion

One early extension of the original RE-algorithms was the treatment of data sets involving relations (Dale and Haddock, 1991). Subsequently, Horacek (1995) has argued that the extension proposed possesses several deficits involving both the extent of coverage and its behavior. In particular, Horacek notes that "it is not always necessary that each entity directly or indirectly related to the intended referent and included in the description be identified uniquely" (p. 49). Partially to handle such situations, Horacek provides a further related algorithm that is intended to improve on the original and which he illustrates in action with reference to a rather more complex situation involving two tables with a variety of cups and bottles on them. One table (t1) has two bottles and a cup on it; another (t2) has only a cup. Information is also given concerning the relative positions of the cups and bottles.

The situation that Horacek identifies as problematic occurs when the reference task is to refer to the table t1 and the RE-algorithm has decided to include the bottles that are on this table as part of its description. This is an appropriate decision since the presence of these bottles is the one distinguishing feature of the selected table. But it is sufficient for the identification of t1 for bottles to be mentioned at all: there is no need for either or both of the bottles to be distinguished more specifically. An RE-algorithm should therefore avoid attempting this additional, unnecessary reference task.

To form an aggregation lattice for this fact set, we extend our data representation to deal with relations as well as attributes. This is limited to 'reifying' the relations and labeling them with 'instance variables' as commonly done in input expressions for generation systems (Kasper, 1989). For convenience, we also at this point fold in the type information directly, as would be normal for a typed semantic representation. This gives the set of facts g7-g12 shown at the top of Figure 3 (this representation is isomorphic to a set of SPL specifications of the form (g7 / on :arg1 (b1 / bottle) :arg2 (t1 / table)), etc.):

(g7 (pred on)(arg1 b1)(arg1type bottle)(arg2 t1)(arg2type table))
(g8 (pred on)(arg1 b2)(arg1type bottle)(arg2 t1)(arg2type table))
(g9 (pred on)(arg1 c1)(arg1type cup)(arg2 t1)(arg2type table))
(g10 (pred on)(arg1 c2)(arg1type cup)(arg2 t2)(arg2type table))
(g11 (pred left-of)(arg1 b1)(arg1type bottle)(arg2 c1)(arg2type cup))
(g12 (pred left-of)(arg1 c1)(arg1type cup)(arg2 b2)(arg2type bottle))

[Figure 3: Aggregation lattice for the example from Horacek (1995). Among the legible node labels: {ARG2TYPE} with m(g7)=m(g8)=m(g10), {ARG2} with m(g7)=m(g8)=m(g9), m(g9)=m(g10), m(g7)=m(g8), {ARG1TYPE} with m(g8)=m(g11) and m(g10)=m(g12), and {ARG1} with m(g7)=m(g11) and m(g9)=m(g12).]
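Reification of this kind is a small mechanical step. A hedged Python sketch, with a hypothetical TYPES table supplying the folded-in type information:

```python
TYPES = {"b1": "bottle", "b2": "bottle", "c1": "cup",
         "c2": "cup", "t1": "table", "t2": "table"}

def reify(fact_id, pred, arg1, arg2):
    # Turn a relation into an attribute row so that the machinery for
    # attribute-based lattices applies unchanged.
    return fact_id, {"pred": pred,
                     "arg1": arg1, "arg1type": TYPES[arg1],
                     "arg2": arg2, "arg2type": TYPES[arg2]}

facts = dict(reify(*f) for f in [
    ("g7", "on", "b1", "t1"), ("g8", "on", "b2", "t1"),
    ("g9", "on", "c1", "t1"), ("g10", "on", "c2", "t2"),
    ("g11", "left-of", "b1", "c1"), ("g12", "left-of", "c1", "b2"),
])
# `facts` has the same shape as the object tables of Section 2, so it
# can be fed to one_valued_context() from the earlier sketch.
```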
Once the data set is in this form, aggregation lattice construction may proceed as described above; the result is also shown in Figure 3. This lattice reflects the more complex reference situation represented by the data set and its possible aggregations: for example, node 7 shows that the facts {g7, g8, g9, g10} may be aggregated with respect to both arg2type ('table': node 5) and pred ('on': node 6). Node 3, in contrast, shows that the two distinct sets {g9, g10} and {g7, g8} (again inherited upwards from node 2) may both individually (but not collectively) also be aggregated with pred, arg2type, and additionally with arg1type ('cup': node 4).

We first consider the reference task described by Horacek, i.e., identifying the object t1. Now that we are dealing with relations, the objects to be referred to generally occur as values of 'attributes', that is, as entries in the data table, rather than as entire rows. In order to construct an appropriate RE we need to find relations that describe the intended referent and which do not allow aggregation with other relations describing other, conflicting referents. We also need to indicate explicitly that the RE-content should not avail itself of the literal instance variables: these are to remain internal to the lattice and to RE-construction so that individuals remain distinct. We therefore distinguish between 'public' and 'private' attributes: public attributes are available for driving linguistic expression, private attributes are not. If we were not to impose this distinction, then referring expressions such as 'the table t1' would be seen as appropriate and probably minimal descriptions! (This might well be appropriate behavior in some contexts, in which case the variables would be declared public.) An aggregation set that does not involve a private attribute will be called a public concept.

The first step in constructing an RE is now to identify the relations/events in which the intended referent is involved (here {g7, g8, g9}) and to specify the positions (both private and public) that the referent holds in these. We call the set of potentially relevant relations the reference information source set (RISS). In the present case, the same argument position is held by the intended referent t1 for all RISS-members, i.e., privately arg2 and publicly arg2type. Next, we proceed as before to find a non-impinging, minimal aggregate set. However, we can now define 'non-impinging' more accurately. A non-impinging node is one for which there is at least one public superconcept fulfilling the following condition: the required superconcept may not bring any RISS-non-member together as co-aggregate with any RISS-member drawn from the originating aggregation set with respect to the specified public attribute of the intended referent.

By these definitions, both the minimal nodes of the lattice are non-impinging. However, node 2 is more supportive of minimal RE's and we will only follow this path here; formal indications of minimality are given by the depth and number of paths leading from the node used for aggregation to the top of the aggregation lattice (since any resulting description then combines discriminatory power from each of its chains of superconcepts) and the number of additional facts that are taken over and above the original RISS-members.
Node 2 is therefore the 'default' choice simply given a requirement of brevity, although the generation process is free to ignore this if other communicative goals so decide. There are two public superconcepts for node 2: both of nodes 7 and 3 inherit arg2type from node 5 but do not themselves contain a private attribute. Of these, only node 7 brings one of the originating RISS-members (i.e., g7 and g8 from node 2) into an aggregation set with a RISS-non-member (g10). Node 2 is therefore non-impinging via node 3. The attributes that may be aggregated at node 2 are arg2 (node 2 ≤_FCA node 8), arg2type (2 ≤_FCA 5), pred (2 ≤_FCA 6) and arg1type (2 ≤_FCA 4). Since this includes arg2, the private position of the intended referent, we know that the data set does not support aggregation for g7 and g8 with respect to any other distracting value for arg2, and so g7 and g8, both collectively and individually, are appropriate and sufficient RE's for t1. Rendering these in English would give us:

g7 or g8:     'the table with a bottle on it'
g7 plus g8:   'the table with some bottles on it'

The precise rendering of the bottles depends on other generator decisions; important here is only the fact that it is known that we do not need to uniquely identify which bottles are in question. More identifying information for arg1 (the bottles b1 and b2) would be necessary only if an aggregation with other arg2's (e.g., other tables) were possible, but it is not, and so the type information is already sufficient to produce an RE with no unwanted aggregation possibilities. The aggregation-based approach will not, therefore, go on to consider further facts unless there is an explicit communicative intention to do so.

3.3 Relational descriptions: when further information is necessary

In this final example we show that the behavior above does not preclude information being added when it is in fact necessary. We show this by adapting Horacek's set of facts slightly to create a different aggregation lattice: we move one of the bottles (b2) over to the other table t2, placing it to the right of the cup. The modified facts and the new aggregation lattice are shown in Figure 4:

(g8' (pred on)(arg1 b2)(arg1type bottle)(arg2 t2)(arg2type table))
(g12' (pred left-of)(arg1 c2)(arg1type cup)(arg2 b2)(arg2type bottle))

[Figure 4: Aggregation lattice for the modified example situation from Horacek. Among the legible node labels: {PRED}, {ARG1TYPE} with m(g8')=m(g11), m(g11)=m(g12'), m(g9)=m(g12'), {ARG2TYPE} with m(g8')=m(g9), {ARG1} with m(g7)=m(g11) and m(g10)=m(g12'), and {ARG2} with m(g7)=m(g9); the remainder of the diagram is not recoverable.]

Here a few concepts have moved in response to the revised reference situation: for example, arg2type (node 3) is now a direct subconcept of pred, indicating that in the revised data set there is a functional relationship between the two attributes: all co-aggregates with respect to arg2type are necessarily also co-aggregates with respect to pred. In the previous example this did not hold because there were also facts with shared pred and non-shared arg2type (facts g11 and g12: node 6).
Node 4 is impinging in its own right since it sanctions ag- gregation of both the RIss-members it mentions with non-members with respect to arg2type (node 3) and argltype (node 6); this deficit is then inherited upwards. Node 5 is impinging by virtue of its first and only available public superconcept, node 3, which sanctions as co- aggregates {gT, g8 ~, gg, gl0} with respect to arg2type. Neither node 4 nor node 5 can there- fore support appropriate RE's. Only node 2 is non-impinging, since it does not sanction aggre- gation involving arg2type or arg2, and is the only available basis for an effective RE with the revised data set. To construct the RE we take the RISS-member of node 2 (i.e., gT) and consider it and the aggre- gations it sanctions as candidate material. Node 2 indicates that g7 may be aggregated with gll with respect to argltype; such an aggregation is guaranteed not to invoke a false referent for argl because it is non-impinging. Moreover, we can infer that g? alone is insufficient since nodes 3 and 4 indicate that g7 is a co-aggregate with facts with non-equal argl values (e.g., gSr), and so aggregation is in fact necessary. The RE then combines: (g7 (pred on)(argl bl)(argltype bottle) (arg2 tl)(arg2type table)) (g11 (pred left-of)(argl bl)(argltype bottle) (arg2 cl)(arg2type cup)) to produce 'the table on which a bottle is to the left of a cup'. This is the only RE that will iden- tify the required table in this highly symmetri- • cal context. No further information is sought because there are no further aggregations pos- sible with respect to arg2 and so the reference is unique; it is also minimal. 4 Discussion and Conclusion One important feature of the proposed ap- proach is its open-nature with respect to the rest of the generation process. The mechanisms described attempt only to factor out one recur- rent problem of generation, namely organizing instantial data to reveal the patterns of con- trast and similarity. In this way, RE-generation is re-assimilated and seen in a somewhat more general light than previously. In terms of the implementation and complex- ity of the approach, it is clear that it cuts the cake rather differently from previous algo- rithms/approaches. Some cases of efficient ref- erence may be read-off directly from the lat- tice; others may require explicit construction and trial of RE-content more reminiscent of the previous algorithms. In fact, the aggregation lattice may in such cases be usefully considered in combination with those algorithms, providing an alternative method for checking the consis- tency of intermediate steps. Here one impor- tant difference between the current approach and previous attempts at maintaining consis- tency is the re-orientation from an incremental procedure to a more static 'overview' of the re- lationships present, thus providing a promising avenue for the exploration of referring strategies with a wider 'domain of locality'. This re-orientation is also reflected in the differing computational complexity of the ap- proaches: the run-time behavior of the previ- ous algorithms is highly dependent on the fi- nal result (number of properties known true of the referent, number of attributes mentioned in the RE), whereas the run-time of the cur- rent approach is more closely tied to the data set as a whole, particularly to the number of facts (rid) and the number of attributes (ha). 
Test runs involving lattice construction for random data sets ranging from 10 to 120 objects, with a number of attributes ranging from 5 to 15 (each with 5-7 possible values), showed that a simple experimental algorithm constructed for uncovering the formal concepts constituting the aggregation lattices had a typical run-time approximately proportional to n_a·n_d². Although worst-case behavior for both this and the lattice construction component is substantially slower, there are now efficient standard algorithms and implementations available that mitigate the problem even when manipulating quite sizeable data sets. (A useful summary and collection of pointers to complexity results and efficient algorithms is given by Vogt (1996); formal techniques for minimizing the size of the data set that is used for further processing are also given there.) For the sizes of data sets that occur when considering an RE, time-complexity is not likely to present a problem.

Nevertheless, for larger data sets the approach given here is undoubtedly considerably slower than the simplified algorithms reported both by Dale and Reiter and by Horacek. However, in contrast to those approaches, it relies only on generic, non-RE-specific methods. The approach also, as suggested above, appears under certain conditions to effectively deliver maximally concise RE's; just what these conditions are and whether they can be systematically exploited remain for future research. Finally, since the use of aggregation lattices has been argued for other generation tasks (Bateman et al., 1998), some of the 'cost' of deployment may in fact turn out to be shared, making a direct comparison solely with the RE-task in any case inappropriate. Other generation constraints might then also naturally contribute to restricting the overall size of the data sets to be considered, perhaps even to within acceptable practical limits.

Acknowledgements

This paper was improved by the anonymous comments of reviewers for both the ACL and the European Natural Language Generation Workshop (1999). Remaining errors and obscurities are my own.

References

John Bateman, Thomas Kamps, Jörg Kleinz, and Klaus Reichenberger. 1998. Communicative goal-driven NL generation and data-driven graphics generation: an architectural synthesis for multimedia page generation. In Proceedings of the 1998 International Workshop on Natural Language Generation, pages 8-17. Niagara-on-the-Lake, Canada.

Robert Dale and Nicholas Haddock. 1991. Generating referring expressions involving relations. In Proceedings of the 1991 Meeting of the European Chapter of the Association for Computational Linguistics, pages 161-166, Berlin.

Robert Dale and Ehud Reiter. 1995. Computational interpretations of the Gricean maxims in the generation of referring expressions. Cognitive Science, 19:233-263.

Robert Dale. 1989. Cooking up referring expressions. In Proceedings of the Twenty-Seventh Annual Meeting of the Association for Computational Linguistics, Vancouver, British Columbia.

Robert Dale. 1992. Generating referring expressions: constructing descriptions in a domain of objects and processes. Bradford Books, MIT Press, Cambridge, Massachusetts.

Hercules Dalianis and Eduard Hovy. 1996. Aggregation in natural language generation. In Giovanni Adorni and Michael Zock, editors, Trends in natural language generation: an artificial intelligence perspective, pages 88-105. Springer-Verlag.

Helmut Horacek. 1995.
More on generating referring expressions. In Proceedings of the Fifth European Workshop on Natural Language Generation, pages 43-58, Leiden, The Netherlands.

D. Johnson. 1974. Approximate algorithms for combinatorial problems. Journal of Computer and Systems Sciences, 9.

Thomas Kamps. 1997. A constructive theory for diagram design and its algorithmic implementation. Ph.D. thesis, Darmstadt University of Technology, Germany.

Robert T. Kasper. 1989. A flexible interface for linking applications to PENMAN's sentence generator. In Proceedings of the DARPA Workshop on Speech and Natural Language.

Ehud Reiter. 1990. Generating descriptions that exploit a user's domain knowledge. In R. Dale, C. Mellish, and M. Zock, editors, Current Research in Natural Language Generation. Academic Press, London.

James Shaw. 1998. Clause aggregation using linguistic knowledge. In Proceedings of the 1998 International Workshop on Natural Language Generation, pages 138-147. Niagara-on-the-Lake, Canada.

Frank Vogt. 1996. Formale Begriffsanalyse mit C++. Datenstrukturen und Algorithmen. Springer-Verlag.

R. Wille. 1982. Restructuring lattice theory: an approach based on hierarchies of concept. In I. Rival, editor, Ordered Sets, pages 445-470. Reidel, Dordrecht/Boston.
Ordering Among Premodifiers

James Shaw and Vasileios Hatzivassiloglou
Department of Computer Science, Columbia University, New York, N.Y. 10027, USA
{shaw, vh}@cs.columbia.edu

Abstract

We present a corpus-based study of the sequential ordering among premodifiers in noun phrases. This information is important for the fluency of generated text in practical applications. We propose and evaluate three approaches to identify sequential order among premodifiers: direct evidence, transitive closure, and clustering. Our implemented system can make over 94% of such ordering decisions correctly, as evaluated on a large, previously unseen test corpus.

1 Introduction

Sequential ordering among premodifiers affects the fluency of text; e.g., "large foreign financial firms" and "zero-coupon global bonds" are desirable, while "foreign large financial firms" and "global zero-coupon bonds" sound odd. The difficulties in specifying a consistent ordering of adjectives have already been noted by linguists [Whorf 1956; Vendler 1968]. During the process of generating complex sentences by combining multiple clauses, there are situations where multiple adjectives or nouns modify the same head noun. The text generation system must order these modifiers the way domain experts use them to ensure the fluency of the text. For example, the description of the age of a patient precedes his ethnicity and gender in the medical domain, as in "a 50 year-old white female patient". Yet general lexicons such as WordNet [Miller et al. 1990] and COMLEX [Grishman et al. 1994] do not store such information.

In this paper, we present automated techniques for addressing this problem of determining, given two premodifiers A and B, the preferred ordering between them. Our methods rely on and generalize empirical evidence obtained from large corpora, and are evaluated objectively on such corpora. They are informed and motivated by our practical need for ordering multiple premodifiers in the MAGIC system [Dalal et al. 1996]. MAGIC utilizes coordinated text, speech, and graphics to convey information about a patient's status after coronary bypass surgery; it generates concise but complex descriptions that frequently involve four or more premodifiers in the same noun phrase.

To demonstrate that a significant portion of noun phrases have multiple premodifiers, we extracted all the noun phrases (NPs, excluding pronouns) in a two million word corpus of medical discharge summaries and a 1.5 million word Wall Street Journal (WSJ) corpus (see Section 4 for a more detailed description of the corpora). In the medical corpus, out of 612,718 NPs, 12% have multiple premodifiers and 6% contain solely multiple adjectival premodifiers. In the WSJ corpus, the percentages are a little lower: 8% and 2%, respectively. These percentages imply that one in ten NPs contains multiple premodifiers, while one in 25 contains just multiple adjectives.

Traditionally, linguists study the premodifier ordering problem using a class-based approach. Based on a corpus, they propose various semantic classes, such as color, size, or nationality, and specify a sequential order among the classes. However, it is not always clear how to map premodifiers to these classes, especially in domain-specific applications.
This justifies the exploration of empirical, corpus-based alternatives, where the ordering between A and B is determined either from direct prior evidence in the corpus or indirectly through other words whose relative order to A and B has already been established. The corpus-based approach lacks the ontological knowledge used by linguists, but uses a much larger amount of direct evidence, provides answers for many more premodifier orderings, and is portable to different domains.

In the next section, we briefly describe prior linguistic research on this topic. Sections 3 and 4 describe the methodology and corpus used in our analysis, while the results of our experiments are presented in Section 5. In Section 6, we demonstrate how we incorporated our ordering results in a general text generation system. Finally, Section 7 discusses possible improvements to our current approach.

2 Related Work

The order of adjectives (and, by analogy, nominal premodifiers) seems to be outside of the grammar; it is influenced by factors such as polarity [Malkiel 1959], scope, and collocational restrictions [Bache 1978]. Linguists [Goyvaerts 1968; Vendler 1968; Quirk and Greenbaum 1973; Bache 1978; Dixon 1982] have performed manual analyses of (small) corpora and pointed out various tendencies, such as the facts that underived adjectives often precede derived adjectives, and shorter modifiers precede longer ones. Given the difficulty of adequately describing all factors that influence the order of premodifiers, most earlier work is based on placing the premodifiers into broad semantic classes, and specifying an order among these classes. More than ten classes have been proposed, with some of them further broken down into subclasses. Though not all these studies agree on the details, they demonstrate that there is fairly rigid regularity in the ordering of adjectives. For example, Goyvaerts [1968, p. 27] proposed the order quality ≺ size/length/shape ≺ old/new/young ≺ color ≺ nationality ≺ style ≺ gerund ≺ denominal (where A ≺ B stands for "A precedes B"); Quirk and Greenbaum [1973, p. 404] the order general ≺ age ≺ color ≺ participle ≺ provenance ≺ noun ≺ denominal; and Dixon [1982, p. 24] the order value ≺ dimension ≺ physical property ≺ speed ≺ human propensity ≺ age ≺ color.

Researchers have also looked at adjective ordering across languages [Dixon 1982; Frawley 1992]. Frawley [1992], for example, observed that English, German, Hungarian, Polish, Turkish, Hindi, Persian, Indonesian, and Basque all order value before size and both of those before color.

As with most manual analyses, the corpora used in these analyses are relatively small compared with modern corpus-based studies. Furthermore, different criteria were used to arrive at the classes. To illustrate, the adjective "beautiful" can be classified into at least two different classes, because the phrase "beautiful dancer" can be transformed from either the phrase "dancer who is beautiful" or "dancer who dances beautifully".

Several deep semantic features have been proposed to explain the regularity in the positional behavior of adjectives. Teyssier [1968] first proposed that adjectival functions, i.e., identification, characterization, and classification, affect adjective order. Martin [1970] carried out psycholinguistic studies of adjective ordering. Frawley [1992] extended the work by Kamp [1975] and proposed that intensional modifiers precede extensional ones.
However, while these studies offer insights at the complex phenomenon of adjective ordering, they cannot be directly mapped to a computational proce- dure. On the other hand, recent computational work on sentence planning [Bateman et al. 1998; Shaw 1998b] indicates that generation re- search has progressed to a point where hard problems such as ellipsis, conjunctions, and or- dering of paradigmatically related constituents are addressed. Computational corpus stud- ies related to adjectives were performed by [Justeson and Katz 1991; Hatzivassiloglou and McKeown 1993; Hatzivassiloglou and McKeown 1997], but none was directly on the ordering problem. [Knight and Hatzivassiloglou 1995] and [Langkilde and Knight 1998] have proposed models for incorporating statistical information into a text generation system, an approach that is similar to our way of using the evidence ob- tained from corpus in our actual generator. 3 Methodology In this section, we discuss how we obtain the premodifier sequences from the corpus for anal- ysis and the three approaches we use for estab- lishing ordering relationships: direct corpus ev- idence, transitive closure, and clustering analy- sis. The result of our analysis is embodied in a 136 function, compute_order(A, B), which returns the sequential ordering between two premodi- tiers, word A and word B. To identify orderings among premodifiers, premodifier sequences are extracted from sim- plex NPs. A simplex NP is a maximal noun phrase that includes premodifiers such as de- terminers and possessives but not post-nominal constituents such as prepositional phrases or relative clauses. We use a part-of-speech tag- ger [Brill 1992] and a finite-state grammar to extract simplex NPs. The noun phrases we ex- tract start with an optional determiner (DT) or possessive pronoun (PRP$), followed by a se- quence of cardinal numbers (CDs), adjectives (JJs), nouns (NNs), and end with a noun. We include cardinal numbers in NPs to capture the ordering of numerical information such as age and amounts. Gerunds (tagged as VBG) or past participles (tagged as VBN), such as "heated" in "heated debate", are considered as adjectives if the word in front of them is a determiner, possessive pronoun, or adjective, thus separat- ing adjectival and verbal forms that are con- flared by the tagger. A morphology module transforms plural nouns and comparative and superlative adjectives into their base forms to ensure maximization of our frequency counts. There is a regular expression filter which re- moves obvious concatenations of simplex NPs such as "takeover bid last week" and "Tylenol 40 milligrams". After simplex NPs are extracted, sequences of premodifiers are obtained by dropping deter- miners, genitives, cardinal numbers and head nouns. Our subsequent analysis operates on the resulting premodifier sequences, and involves three stages: direct evidence, transitive closure, and clustering. We describe each stage in more detail in the following subsections. 3.1 Direct Evidence Our analysis proceeds on the hypothesis that the relative order of two premodifiers is fixed and independent of context. Given two premod- ifiers A and B, there are three possible under- lying orderings, and our system should strive to find which is true in this particular case: ei- ther A comes before B, B comes before A, or the order between A and B is truly unimpor- tant. 
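As a concrete illustration of the extraction pipeline just described, the following sketch derives premodifier sequences from tagged simplex NPs and tallies their ordered pairs. It is our own simplified rendering, not the authors' implementation: the tag filter approximates the finite-state grammar, and all function names are invented.

    from collections import defaultdict

    def premodifier_sequence(tagged_np):
        """Keep adjective and noun premodifiers; drop DT/PRP$/CD and the head noun.

        tagged_np is a list of (word, tag) pairs for one simplex NP, e.g.
        [("the", "DT"), ("swollen", "JJ"), ("red", "JJ"), ("mass", "NN")].
        """
        return [w for w, tag in tagged_np[:-1] if tag in ("JJ", "NN")]

    def count_ordered_pairs(sequences):
        """Build the Count matrix described below: count[a][b] = #(a seen before b)."""
        count = defaultdict(lambda: defaultdict(int))
        for seq in sequences:
            for i in range(len(seq)):
                for j in range(i + 1, len(seq)):
                    count[seq[i]][seq[j]] += 1
        return count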
Our first stage relies on frequency data collected from a training corpus to predict the order of adjective and noun premodifiers in an unseen test corpus. To collect direct evidence on the order of premodifiers, we extract all the premodifiers from the corpus as described in the previous subsection. We first transform the premodifier sequences into ordered pairs. For example, the phrase "well-known traditional brand-name drug" has three ordered pairs, "well-known ≺ traditional", "well-known ≺ brand-name", and "traditional ≺ brand-name". A phrase with n premodifiers will have $\binom{n}{2}$ ordered pairs. From these ordered pairs, we construct a w × w matrix Count, where w is the number of distinct modifiers. The cell [A, B] in this matrix represents the number of occurrences of the pair "A ≺ B", in that order, in the corpus.

Assuming that there is a preferred ordering between premodifiers A and B, one of the cells Count[A, B] and Count[B, A] should be much larger than the other, at least if the corpus becomes arbitrarily large. However, given a corpus of a fixed size there will be many cases where the frequency counts will both be small. This data sparseness problem is exacerbated by the inevitable occurrence of errors during the data extraction process, which will introduce some spurious pairs (and orderings) of premodifiers. We therefore apply probabilistic reasoning to determine when the data is strong enough to decide that A ≺ B or B ≺ A. Under the null hypothesis that the order of the two premodifiers is arbitrary, the number of times we have seen one of them follows the binomial distribution with parameter p = 0.5. The probability that we would see the actually observed number of cases with A ≺ B, say m, among n pairs involving A and B is

    P = \sum_{k=m}^{n} \binom{n}{k} p^k (1-p)^{n-k}    (1)

which for the special case p = 0.5 becomes

    P = \left(\frac{1}{2}\right)^n \sum_{k=m}^{n} \binom{n}{k}    (2)

If this probability is low, we reject the null hypothesis and conclude that A indeed precedes (or follows, as indicated by the relative frequencies) B.

3.2 Transitivity

As we mentioned before, sparse data is a serious problem in our analysis. For example, the matrix of frequencies for adjectives in our training corpus from the medical domain is 99.8% empty: only 9,106 entries in the 2,232 × 2,232 matrix contain non-zero values. To compensate for this problem, we exploit the transitive properties of ordered pairs by computing the transitive closure of the ordering relation. Utilizing transitivity information corresponds to making the inference that A ≺ C follows from A ≺ B and B ≺ C, even if we have no direct evidence for the pair (A, C), provided that there is no contradictory evidence to this inference either. This approach allows us to fill from 15% (WSJ) to 30% (medical corpus) of the entries in the matrix.

To compute the transitive closure of the order relation, we map our underlying data to special cases of commutative semirings [Pereira and Riley 1997]. Each word is represented as a node of a graph, while arcs between nodes correspond to ordering relationships and are labeled with elements from the chosen semiring. This formalism can be used for a variety of problems, using appropriate definitions of the two binary operators (collection and extension) that operate on the semiring's elements. For example, the all-pairs shortest-paths problem in graph theory can be formulated in a min-plus semiring over the real numbers with the operator min for collection and + for extension.
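To make the semiring formulation concrete, here is a minimal sketch of a generic closure computation with the collection and extension operators passed in as parameters; it is a generic version of the Floyd-Warshall algorithm referenced below, written by us for illustration rather than taken from the paper.

    def semiring_closure(arcs, nodes, collect, extend):
        """Generic Floyd-Warshall over a commutative semiring.

        arcs:    dict mapping (u, v) pairs to semiring elements (arc labels)
        collect: combines alternative paths (e.g., min for shortest paths)
        extend:  combines consecutive arcs along a path (e.g., + for shortest paths)
        """
        d = dict(arcs)
        for k in nodes:
            for i in nodes:
                for j in nodes:
                    if (i, k) in d and (k, j) in d:
                        via_k = extend(d[(i, k)], d[(k, j)])
                        d[(i, j)] = collect(d[(i, j)], via_k) if (i, j) in d else via_k
        return d

    # All-pairs shortest paths: collect=min, extend=lambda x, y: x + y
    # Boolean transitive closure over {0, 1}: collect=max, extend=min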
Similarly, finding the transitive closure of a binary relation can be formulated in a max-min semiring or an or-and semiring over the set {0, 1}. Once the proper operators have been chosen, the generic Floyd-Warshall algorithm [Aho et al. 1974] can solve the corresponding problem without modifications.

We explored three semirings appropriate to our problem. First, we apply the statistical decision procedure of the previous subsection and assign to each pair of premodifiers either 0 (if we don't have enough information about their preferred ordering) or 1 (if we do). Then we use the or-and semiring over the {0, 1} set; in the transitive closure, the ordering A ≺ B will be present if at least one path connecting A and B via ordered pairs exists. Note that it is possible for both A ≺ B and B ≺ A to be present in the transitive closure.

This model involves conversions of the corpus evidence for each pair into hard decisions on whether one of the words in the pair precedes the other. To avoid such early commitments, we use a second, refined model for transitive closure where the arc from A to B is labeled with the probability that A indeed precedes B. The natural extension of the ({0, 1}, or, and) semiring when the set of labels is replaced with the interval [0, 1] is then ([0, 1], max, min). We estimate the probability that A precedes B as one minus the probability of reaching that conclusion in error, according to the statistical test of the previous subsection (i.e., one minus the sum specified in equation (2)). We obtained similar results with this estimator and with the maximum likelihood estimator (the ratio of the number of times A appeared before B to the total number of pairs involving A and B).

Finally, we consider a third model in which we explore an alternative to transitive closure. Rather than treating the number attached to each arc as a probability, we treat it as a cost, the cost of erroneously assuming that the corresponding ordering exists. We assign to an edge (A, B) the negative logarithm of the probability that A precedes B; probabilities are estimated as in the previous paragraph. Then our problem becomes identical to the all-pairs shortest-path problem in graph theory; the corresponding semiring is ((0, +∞), min, +). We use logarithms to address computational precision issues stemming from the multiplication of small probabilities, and negate the logarithms so that we cast the problem as a minimization task (i.e., we find the path in the graph that minimizes the total sum of negative log probabilities, and therefore maximizes the product of the original probabilities).

3.3 Clustering

As noted earlier, traditional linguistic work on the ordering problem puts words into semantic classes and generalizes the task from ordering between specific words to ordering the corresponding classes. We follow a similar, but evidence-based, approach for the pairs of words that neither direct evidence nor transitivity can resolve. We compute an order similarity measure between any two premodifiers, reflecting whether the two words share the same pattern of relative order with other premodifiers for which we have sufficient evidence. For each pair of premodifiers A and B, we examine every other premodifier in the corpus, X; if both A ≺ X and B ≺ X, or both A ≻ X and B ≻ X, one point is added to the similarity score between A and B. If on the other hand A ≺ X and B ≻ X, or A ≻ X and B ≺ X, one point is subtracted.
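A minimal sketch of this similarity computation follows; the three-valued precedes() interface is our assumption, and the sketch already incorporates the caveat stated in the next sentence, skipping any third word X whose order relative to A or B has not been established.

    def order_similarity(a, b, precedes, vocabulary):
        """Agreements minus disagreements in relative order with third words.

        precedes(x, y) is assumed to return True (x before y), False (y before x),
        or None when neither direct evidence nor transitivity settled the pair.
        """
        score = 0
        for x in vocabulary:
            if x == a or x == b:
                continue
            ax, bx = precedes(a, x), precedes(b, x)
            if ax is None or bx is None:
                continue  # X contributes nothing without sufficient evidence
            score += 1 if ax == bx else -1
        return score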
X does not contribute to the similarity score if there is not sufficient prior evidence for the relative order of X and A, or of X and B. This procedure closely parallels non-parametric distributional tests such as Kendall's τ [Kendall 1938].

The similarity scores are then converted into dissimilarities and fed into a non-hierarchical clustering algorithm [Späth 1985], which separates the premodifiers into groups. This is achieved by minimizing an objective function, defined as the sum of within-group dissimilarities over all groups. In this manner, premodifiers that are closely similar in terms of sharing the same relative order with other premodifiers are placed in the same group.

Once classes of premodifiers have been induced, we examine every pair of classes and decide which precedes the other. For two classes C1 and C2, we extract all pairs of premodifiers (x, y) with x in C1 and y in C2. If we have evidence (either direct or through transitivity) that x ≺ y, one point is added in favor of C1 ≺ C2; similarly, one point is subtracted if x ≻ y. After all such pairs have been considered, we can then predict the relative order between words in the two clusters which we haven't seen together earlier. This method makes (weak) predictions for any pair (A, B) of words, except if (a) both A and B are placed in the same cluster; (b) no ordered pairs (x, y) with one element in the class of A and one in the class of B have been identified; or (c) the evidence for one class preceding the other is in the aggregate equally strong in both directions.

4 The Corpus

We used two corpora for our analysis: hospital discharge summaries from 1991 to 1997 from the Columbia-Presbyterian Medical Center, and the January 1996 part of the Wall Street Journal (WSJ) corpus from the Penn TreeBank [Marcus et al. 1993]. To facilitate comparisons across the two corpora, we intentionally limited ourselves to only one month of the WSJ corpus, so that approximately the same amount of data would be examined in each case. The text in each corpus is divided into a training part (2.3 million words for the medical corpus and 1.5 million words for the WSJ) and a test part (1.2 million words for the medical corpus and 1.6 million words for the WSJ).

All domain-specific markup was removed, and the text was processed by the MXTERMINATOR sentence boundary detector [Reynar and Ratnaparkhi 1997] and Brill's part-of-speech tagger [Brill 1992]. Noun phrases and pairs of premodifiers were extracted from the tagged corpus according to the methods of Section 3. From the medical corpus, we retrieved 934,823 simplex NPs, of which 115,411 have multiple premodifiers and 53,235 multiple adjectives only. The corresponding numbers for the WSJ corpus were 839,921 NPs, 68,153 NPs with multiple premodifiers, and 16,325 NPs with just multiple adjectives.

We separately analyze two groups of premodifiers: adjectives, and adjectives plus nouns modifying the head noun. Although our techniques are identical in both cases, the division is motivated by our expectation that the task will be easier when modifiers are limited to adjectives, because nouns tend to be harder to match correctly with our finite-state grammar and the input data is sparser for nouns.

5 Results

We applied the three ordering algorithms proposed in this paper to the two corpora separately for adjectives and adjectives plus nouns. For our first technique of directly using evidence from a separate training corpus, we filled the Count matrix (see Section 3.1) with the frequencies of each ordering for each pair of premodifiers using the training corpora. Then, we calculated which of those pairs correspond to a true underlying order relation, i.e., pass the statistical test of Section 3.1 with the probability given by equation (2) less than or equal to 50%. We then examined each instance of ordered premodifiers in the corresponding test corpus, and counted how many of those the direct evidence method could predict correctly. Note that if A and B occur sometimes as A ≺ B and sometimes as B ≺ A, no prediction method can get all those instances correct. We elected to follow this evaluation approach, which lowers the apparent scores of our method, rather than forcing each pair in the test corpus into one unambiguous category (A ≺ B, B ≺ A, or arbitrary).

    Corpus                             Test pairs   Direct evidence          Transitivity (max-min)   Transitivity (min-plus)
    Medical / adjectives                   27,670   92.67% (88.20%-98.47%)   89.60% (94.94%-91.79%)   94.93% (97.20%-96.16%)
    Financial / adjectives                  9,925   75.41% (53.85%-98.37%)   79.92% (72.76%-90.79%)   80.77% (76.36%-90.18%)
    Medical / adjectives and nouns         74,664   88.79% (80.38%-98.35%)   87.69% (90.86%-91.50%)   90.67% (91.90%-94.27%)
    Financial / adjectives and nouns       62,383   65.93% (35.76%-95.27%)   69.61% (56.63%-84.51%)   71.04% (62.48%-83.55%)

Table 1: Accuracy of direct-evidence and transitivity methods on different data strata of our test corpora. In each case, overall accuracy is listed first, and then, in parentheses, the percentage of the test pairs that the method has an opinion for (rather than randomly assigning a decision because of lack of evidence) and the accuracy of the method within that subset of test cases.

Under this evaluation method, stage one of our system achieves, on adjectives in the medical domain, 98.47% correct decisions on pairs for which a determination of order could be made. Since 11.80% of the total pairs in the test corpus involve previously unseen combinations of adjectives and/or new adjectives, the overall accuracy is 92.67%. The corresponding accuracy on data for which we can make a prediction and the overall accuracy are 98.35% and 88.79% for adjectives plus nouns in the medical domain, 98.37% and 75.41% for adjectives in the WSJ data, and 95.27% and 65.93% for adjectives plus nouns in the WSJ data. Note that the WSJ corpus is considerably more sparse, with 64.24% unseen combinations of adjective and noun premodifiers in the test part. Using lower thresholds in equation (2) results in a lower percentage of cases for which the system has an opinion but a higher accuracy for those decisions. For example, a threshold of 25% results in the ability to predict 83.72% of the test adjective pairs in the medical corpus with 99.01% accuracy for these cases.

We subsequently applied the transitivity stage, testing the three semiring models discussed in Section 3.2. Early experimentation indicated that the or-and model performed poorly, which we attribute to the extensive propagation of decisions (once a decision in favor of the existence of an ordering relationship is made, it cannot be revised even in the presence of conflicting evidence). Therefore we report results below for the other two semiring models. Of those, the min-plus semiring achieved higher performance. That model offers additional predictions for 9.00% of adjective pairs and 11.52% of adjective-plus-noun pairs in the medical corpus, raising the overall accuracy of our predictions to 94.93% and 90.67%, respectively. Overall accuracy in the WSJ test data was 80.77% for adjectives and 71.04% for adjectives plus nouns. Table 1 summarizes the results of these two stages.

Finally, we applied our third, clustering approach on each data stratum. Due to data sparseness and computational complexity issues, we clustered the most frequent words in each set of premodifiers (adjectives or adjectives plus nouns), selecting those that occurred at least 50 times in the training part of the corpus being analyzed. We report results for the adjectives selected in this manner (472 frequent adjectives from the medical corpus and 307 adjectives from the WSJ corpus). For these words, the information collected by the first two stages of the system covers most pairs. Out of the 111,176 (= 472 · 471 / 2) possible pairs in the medical data, the direct evidence and transitivity stages make predictions for 105,335 (94.76%); the corresponding number for the WSJ data is 40,476 out of 46,971 possible pairs (86.17%).

The clustering technique makes ordering predictions for a part of the remaining pairs: on average, depending on how many clusters are created, this method produces answers for 80% of the ordering cases that remained unanswered after the first two stages in the medical corpus, and for 54% of the unanswered cases in the WSJ corpus. Its accuracy on these predictions is 56% on the medical corpus, and slightly worse than the baseline 50% on the WSJ corpus; this latter, aberrant result is due to a single, very frequent pair, chief executive, in which executive is consistently mistagged as an adjective by the part-of-speech tagger.

Qualitative analysis of the third stage's output indicates that it identifies many interesting relationships between premodifiers; for example, the pair of most similar premodifiers on the basis of positional information is left and right, which clearly fall in a class similar to the semantic classes manually constructed by linguists. Other sets of adjectives with strongly similar members include {mild, severe, significant} and {cardiac, pulmonary, respiratory}.

We conclude our empirical analysis by testing whether a separate model is needed for predicting adjective order in each different domain. We trained the first two stages of our system on the medical corpus and tested them on the WSJ corpus, obtaining an overall prediction accuracy of 54% for adjectives and 52% for adjectives plus nouns. Similar results were obtained when we trained on the financial domain and tested on medical data (58% and 56%). These results are not much better than what would have been obtained by chance, and are clearly inferior to those reported in Table 1. Although the two corpora share a large number of adjectives (1,438 out of 5,703 total adjectives in the medical corpus and 8,240 in the WSJ corpus), they share only 2 to 5% of the adjective pairs. This empirical evidence indicates that adjectives are used differently in the two domains, and hence domain-specific probabilities must be estimated, which increases the value of an automated procedure for the prediction task.
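Pulling the three stages together, compute_order can be pictured as the cascade sketched below. This is a hypothetical reconstruction: the stage interfaces, return values, and the use of the 50% threshold from equation (2) reflect our reading of the description above, not published code.

    from math import comb

    def direct_evidence(n_ab, n_ba, threshold=0.5):
        """Stage 1: the binomial test of Section 3.1, equation (2)."""
        n = n_ab + n_ba
        if n == 0:
            return None
        m = max(n_ab, n_ba)
        p_null = 0.5 ** n * sum(comb(n, k) for k in range(m, n + 1))
        if p_null <= threshold:
            return "A<B" if n_ab >= n_ba else "B<A"
        return None  # not enough evidence for a hard decision

    def compute_order(a, b, count, closure, cluster_order):
        decision = direct_evidence(count[a][b], count[b][a])
        if decision is not None:
            return decision
        if (a, b) in closure:   # stage 2: pairs added by the transitive closure
            return "A<B"
        if (b, a) in closure:
            return "B<A"
        return cluster_order(a, b)  # stage 3: weak class-based prediction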
6 Using Ordered Premodifiers in Text Generation

Extracting sequential ordering information for premodifiers is an off-line process, the results of which can be easily incorporated into the overall generation architecture. We have integrated the function compute_order(A, B) into our multimedia presentation system MAGIC [Dalal et al. 1996] in the medical domain and resolved numerous premodifier ordering tasks correctly. Example cases where the statistical prediction module was helpful in producing a more fluent description in MAGIC include placing age information before ethnicity information and the latter before gender information, as well as specific ordering preferences, such as "thick" before "yellow" and "acute" before "severe". MAGIC's output is being evaluated by medical doctors, who provide us with feedback on different components of the system, including the fluency of the generated text and its similarity to human-produced reports.

(a) "John is a diabetic male white 74-year-old hypertensive patient with a red swollen mass in the left groin."
(b) "John is a 74-year-old hypertensive diabetic white male patient with a swollen red mass in the left groin."

Figure 1: (a) Output of the generator without our ordering module, containing several errors. (b) Output of the generator with our ordering module.

Lexicalization is inherently domain dependent, so traditional lexica cannot be ported across domains without major modifications. Our approach, in contrast, is based on words extracted from a domain corpus and not on concepts, and therefore it can be easily applied to new domains. In our MAGIC system, aggregation operators, such as conjunction, ellipsis, and transformations of clauses to adjectival phrases and relative clauses, are performed to combine related clauses together and increase conciseness [Shaw 1998a; Shaw 1998b]. We wrote a function, reorder_premod(...), which is called after the aggregation operators, takes the whole lexicalized semantic representation, and reorders the premodifiers right before the linguistic realizer is invoked. Figure 1 shows the difference in the output produced by our generator with and without the ordering component.

7 Conclusions and Future Work

We have presented three techniques for exploiting prior corpus evidence in predicting the order of premodifiers within noun phrases. Our methods expand on observable data, by inferring new relationships between premodifiers even for combinations of premodifiers that do not occur in the training corpus. We have empirically validated our approach, showing that we can predict order with more than 94% accuracy when enough corpus data is available. We have also implemented our procedure in a text generator, producing more fluent output sentences.

We are currently exploring alternative ways to integrate the classes constructed by the third stage of our system into our generator. In the future, we will experiment with semantic (rather than positional) clustering of premodifiers, using techniques such as those proposed in [Hatzivassiloglou and McKeown 1993; Pereira et al. 1993]. The qualitative analysis of the output of our clustering module shows that positional and semantic classes frequently overlap, and we are interested in measuring the extent of this phenomenon quantitatively. Conditioning the premodifier ordering on the head noun is another promising approach, at least for very frequent nouns.
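Returning to the integration point of Section 6, the role of reorder_premod can be pictured as sorting a list of premodifiers with compute_order as a pairwise comparator. The sketch below is our own simplification: the actual MAGIC function operates on a full lexicalized semantic representation rather than a flat list.

    from functools import cmp_to_key

    def reorder_premodifiers(premods, compute_order):
        """Sort premodifiers using pairwise compute_order decisions.

        compute_order(a, b) is assumed to return "A<B", "B<A", or None
        (no preference), as in the cascade sketched in Section 5.
        """
        def cmp(a, b):
            decision = compute_order(a, b)
            if decision == "A<B":
                return -1
            if decision == "B<A":
                return 1
            return 0
        return sorted(premods, key=cmp_to_key(cmp))

    # With an order where age precedes ethnicity precedes gender,
    # ["white", "74-year-old", "male"] -> ["74-year-old", "white", "male"]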
8 Acknowledgments

We are grateful to Kathy McKeown for numerous discussions during the development of this work. The research is supported in part by the National Library of Medicine under grant R01-LM06593-01 and the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation). Any opinions, findings, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the above agencies.

References

Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman. The Design and Analysis of Computer Algorithms. Addison-Wesley, Reading, Massachusetts, 1974.
Carl Bache. The Order of Premodifying Adjectives in Present-Day English. Odense University Press, 1978.
John A. Bateman, Thomas Kamps, Jorg Kleinz, and Klaus Reichenberger. Communicative Goal-Driven NL Generation and Data-Driven Graphics Generation: An Architectural Synthesis for Multimedia Page Generation. In Proceedings of the 9th International Workshop on Natural Language Generation, pages 8-17, 1998.
Eric Brill. A Simple Rule-Based Part of Speech Tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy, 1992. Association for Computational Linguistics.
Mukesh Dalal, Steven K. Feiner, Kathleen R. McKeown, Desmond A. Jordan, Barry Allen, and Yasser al Safadi. MAGIC: An Experimental System for Generating Multimedia Briefings about Post-Bypass Patient Status. In Proceedings of the 1996 Annual Fall Symposium of the American Medical Informatics Association (AMIA-96), pages 684-688, Washington, D.C., October 26-30, 1996.
R. M. W. Dixon. Where Have All the Adjectives Gone? Mouton, New York, 1982.
William Frawley. Linguistic Semantics. Lawrence Erlbaum Associates, Hillsdale, New Jersey, 1992.
D. L. Goyvaerts. An Introductory Study on the Ordering of a String of Adjectives in Present-Day English. Philologica Pragensia, 11:12-28, 1968.
Ralph Grishman, Catherine Macleod, and Adam Meyers. COMLEX Syntax: Building a Computational Lexicon. In Proceedings of the 15th International Conference on Computational Linguistics (COLING-94), Kyoto, Japan, 1994.
Vasileios Hatzivassiloglou and Kathleen McKeown. Towards the Automatic Identification of Adjectival Scales: Clustering Adjectives According to Meaning. In Proceedings of the 31st Annual Meeting of the ACL, pages 172-182, Columbus, Ohio, June 1993. Association for Computational Linguistics.
Vasileios Hatzivassiloglou and Kathleen McKeown. Predicting the Semantic Orientation of Adjectives. In Proceedings of the 35th Annual Meeting of the ACL, pages 174-181, Madrid, Spain, July 1997. Association for Computational Linguistics.
John S. Justeson and Slava M. Katz. Co-occurrences of Antonymous Adjectives and Their Contexts. Computational Linguistics, 17(1):1-19, 1991.
J. A. W. Kamp. Two Theories of Adjectives. In E. L. Keenan, editor, Formal Semantics of Natural Language. Cambridge University Press, Cambridge, England, 1975.
Maurice G. Kendall. A New Measure of Rank Correlation. Biometrika, 30(1-2):81-93, June 1938.
Kevin Knight and Vasileios Hatzivassiloglou. Two-Level, Many-Paths Generation. In Proceedings of the 33rd Annual Meeting of the ACL, pages 252-260, Boston, Massachusetts, June 1995. Association for Computational Linguistics.
Irene Langkilde and Kevin Knight. Generation that Exploits Corpus-Based Statistical Knowledge. In Proceedings of the 36th Annual Meeting of the ACL and the 17th International Conference on Computational Linguistics (ACL/COLING-98), pages 704-710, Montreal, Canada, 1998.
Yakov Malkiel. Studies in Irreversible Binomials. Lingua, 8(2):113-160, May 1959. Reprinted in [Malkiel 1968].
Yakov Malkiel. Essays on Linguistic Themes. Blackwell, Oxford, 1968.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330, 1993.
J. E. Martin. Adjective Order and Juncture. Journal of Verbal Learning and Verbal Behavior, 9:379-384, 1970.
George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. Introduction to WordNet: An On-Line Lexical Database. International Journal of Lexicography (special issue), 3(4):235-312, 1990.
Fernando C. N. Pereira and Michael D. Riley. Speech Recognition by Composition of Weighted Finite Automata. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, pages 431-453. MIT Press, Cambridge, Massachusetts, 1997.
Fernando Pereira, Naftali Tishby, and Lillian Lee. Distributional Clustering of English Words. In Proceedings of the 31st Annual Meeting of the ACL, pages 183-190, Columbus, Ohio, June 1993. Association for Computational Linguistics.
Randolph Quirk and Sidney Greenbaum. A Concise Grammar of Contemporary English. Harcourt Brace Jovanovich, Inc., London, 1973.
Jeffrey C. Reynar and Adwait Ratnaparkhi. A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proceedings of the 5th Applied Natural Language Conference (ANLP-97), Washington, D.C., April 1997.
James Shaw. Clause Aggregation Using Linguistic Knowledge. In Proceedings of the 9th International Workshop on Natural Language Generation, pages 138-147, 1998.
James Shaw. Segregatory Coordination and Ellipsis in Text Generation. In Proceedings of the 36th Annual Meeting of the ACL and the 17th International Conference on Computational Linguistics (ACL/COLING-98), pages 1220-1226, Montreal, Canada, 1998.
Helmuth Späth. Cluster Dissection and Analysis: Theory, FORTRAN Programs, Examples. Ellis Horwood, Chichester, England, 1985.
J. Teyssier. Notes on the Syntax of the Adjective in Modern English. Behavioral Science, 20:225-249, 1968.
Zeno Vendler. Adjectives and Nominalizations. Mouton and Co., The Netherlands, 1968.
Benjamin Lee Whorf. Language, Thought, and Reality; Selected Writings. MIT Press, Cambridge, Massachusetts, 1956.
Bilingual Hebrew-English Generation of Possessives and Partitives: Raising the Input Abstraction Level

Yael Dahan Netzer and Michael Elhadad
Ben Gurion University, Department of Mathematics and Computer Science, Beer Sheva, 84105, Israel
{yaeln|elhadad}@cs.bgu.ac.il

Abstract

Syntactic realization grammars have traditionally attempted to accept inputs with the highest possible level of abstraction, in order to facilitate the work of the components (sentence planner) preparing the input. Recently, however, the search for higher abstraction has been challenged (Elhadad and Robin, 1996) (Lavoie and Rambow, 1997) (Busemann and Horacek, 1998). In this paper, we contribute to the issue of selecting the "ideal" abstraction level in the input to a syntactic realization grammar by considering the case of partitives and possessives in a bilingual Hebrew-English generation grammar. In the case of bilingual generation, the ultimate goal is to provide a single input structure, where only the open-class lexical entries are specific to the language. In that case, the minimal abstraction required must cover the different syntactic constraints of the two languages.

We present a contrastive analysis of the syntactic realizations of possessives and partitives in Hebrew and English and conclude by presenting an input specification for complex NPs which is slightly more abstract than the one used in SURGE. We define two main features, possessor and ref-set, and discuss how the grammar handles complex syntactic co-occurrence phenomena based on this input. We conclude by evaluating how appropriate the resulting input specification language is for both languages.

1 Introduction

One of the first issues to address when selecting a syntactic realization component is whether its input specification language fits the desired application. Traditionally, syntactic realization components have attempted to raise the abstraction level of input specifications for two reasons: (1) to preserve the possibility of paraphrasing and (2) to make it easy for the sentence planner to map from semantic data to syntactic input.

As new applications appear that cannot start generation from a semantic input because such an input is not available (for example, re-generation of sentences from syntactic fragments to produce summaries (Barzilay et al., 1999) or generation of complex NPs in a hybrid template system for business letters (Gedalia, 1996)), this motivation has lost some of its strength. Consequently, "shallow surface generators" have recently appeared (Lavoie and Rambow, 1997) (Busemann and Horacek, 1998) that require an input considerably less abstract than those required by more traditional realization components such as SURGE (Elhadad and Robin, 1996) or KPML (Bateman, 1997).

In this paper, we contribute to the debate on selecting an appropriate level of abstraction by considering the case of bilingual generation. We present results obtained while developing the HUGG syntactic realization component for Hebrew (Dahan-Netzer, 1997). One of the goals of this system is to design a generator with an input specification language as similar as possible to that of an English generator, SURGE in our case. The ideal scenario for bilingual generation is illustrated in Figure 1. It consists of the following steps:

1. Prepare an input specification in one language
2. Translate all the lexical entries (function words do not appear)
3. Generate with any grammar

John gave a book to Mary
John natan sefer le-Mary

[cat clause
 proc [type composite
       relation-type possessive]
 partic [agent [cat proper, lex "John", gender masculine]
         affected [1] [cat proper, lex "Mary", gender feminine]
         possessor [1]
         possessed [cat common, lex "book" / "sefer"]]]

Figure 1: Ideal scenario for bilingual generation

In the example, the same input structure is used and the generator can produce sentences in both languages if only the lexical items are translated.

Consider the following paraphrase in English for the same input: John gave Mary a book. The Hebrew grammar does not produce such a paraphrase, as there is no equivalent in Hebrew to the dative move alternation. In this case, we conclude that the input abstraction level is appropriate. In contrast, if the input had specified a structure such as indirect-object(prep=to/le, np=Mary), then it would not have been abstract enough to serve as a bilingual input structure.

Similarly, the English possessive marker is very close to the Hebrew "construct state" (smixut):

The King's palace
Armon ha-melex
Palace-cs the-king

The following input structure seems, therefore, appropriate for both languages:

[cat common
 lex "palace" / "armon"
 possessor [definite yes
            lex "king" / "melex"]]

There are, however, divergences between the use of smixut in Hebrew and of the possessive marker in English:

Segovia's pupil / The pupil of Segovia
* talmyd segovyah
talmyd Sel segovyah

? The house's windows / The windows of the house
Halonot ha-bayit
ha-Halonot Sel ha-bayit

Our goal, therefore, is to design an input structure that is abstract enough to let the grammar decide whether to use a possessive marker vs. an of-construct in English, or a Sel-construct vs. a smixut construction in Hebrew.

A similar approach has been adopted in generation (Bateman, 1997), (Bateman et al., 1991) and in machine translation, most notably in (Dorr, 1994). Dorr focuses on divergences at the clause level, as illustrated by the following example:

I like Mary
Maria me gusta a mi
Mary pleases me

Dorr selects a representation structure based on Jackendoff's Lexical Conceptual Structures (LCS) (Jackendoff, 1990).

In the KPML system, the proposed solution is based on the systemic notion of "delicacy", and the assumption is that low-delicacy input features (the most abstract ones) remain common to the two target languages, while high-delicacy features would differ.

In this paper, we focus on the input specification for complex NPs. The main reason for this choice is that the input for NPs in SURGE has remained close to English syntax (low abstraction). It consists of the following main sub-constituents: head, classifier, describer, qualifier and determiner.

In previous work (Elhadad, 1996), we discuss how to map a more abstract domain-specific representation to the SURGE input structure within a sentence planner. When moving to a bilingual generator, we have found the need for a higher level of abstraction to avoid encoding language-specific knowledge in the sentence planners. We specifically discuss here the following decisions:

- How to realize a possessive relation: John's shirt vs. the shirt of John
- How to realize a partitive relation: all the kids vs. all of the kids

In the rest of the paper, we first present basic contrastive data and existing analyses about possessives and partitives in Hebrew and English. We then present the input features we have designed to cover possessives and partitives in both languages and discuss how these features are used to account for the main decisions required of the realizer. We conclude with an evaluation of the bilingual input structure on a set of 100 sample input structures for complex NPs in the two languages, and of the divergences that remain in the generated NPs. In conclusion, this bilingual analysis has helped us identify important abstractions that lead to more fluent generation in both languages.

2 Possessives and Partitives in Hebrew and English

This section briefly presents data on possessives and partitives in English and Hebrew. These observations delimit the questions we address in the paper: when is a genitive construct used to express possessives, and when is an explicit partitive used.

2.1 Possessives in English

Possessives can be realized in two basic structures: as part of the determiner sequence (Halliday, 1994) (as either a possessive pronoun or a full NP marked with apostrophe-s as a genitive marker) or as a construct "NP of NP".

In addition to possessive, the genitive marker can realize several semantic relations (Quirk et al., 1985, pp. 192-203): subjective genitive (the boy's application = the boy applied), genitive of origin (the girl's story = the girl told a story), objective genitive, and descriptive genitive (a women's college = a college for women). As a consequence of this versatility, the general decision of apostrophe vs. of is not trivial: Quirk claims that the higher on the gender scale, i.e., the more animate the noun, the more the possessor tends to be realized as an inflected genitive:

- Person's name: Segovia's pupil
- Person's nouns: the boy's new shirt
- Collective nouns: the nation's social security
- Higher animals: the horse's neck
- Geographical names: Europe's future
- Locative nouns: the school's history
- Temporal nouns: the decade's event

This decision also interacts with other realization decisions: if several modifiers must be attached to the same head, they can compete for the same slot in the syntactic structure. In such cases, the decision is one of preference ranking: The boy's application of last year vs. last year's application of the boy.

2.2 Possessives in Hebrew

Possessives in Hebrew can be realized by three syntactic constructions:

construct state:  cadur ha-tynok (ball the-baby)
free genitive:    ha-cadur Sel ha-tynok (the ball of the baby)
double genitive:  cadur-o Sel ha-tynok (ball-his of the-baby)

The construct state (called smixut) is similar to the apostrophe marker in English: it involves a noun adjacent to another noun or noun phrase, without any marker (like a preposition) between them (Berman, 1978). The head noun in the construct form generally undergoes morphological changes: yaldah - yaldat. Smixut is, on the one hand, very productive in Hebrew and yet very constrained (Dahan-Netzer and Elhadad, 1998b).

Free genitive constructs use a prepositional phrase with the preposition Sel. Many studies treat Sel as a case marker only (cf. (Berman, 1978) (Yzhar, 1993) (Borer, 1988)).

The choice of one of the three forms seems to be stylistic and to vary in spoken and written Hebrew (cf. (Berman, 1978), (Glinert, 1989), (Ornan, 1964), and discussion in (Seikevicz, 1979)). But, in addition to these pragmatic factors, and as is the case for the English genitive, the construct state can realize a wide variety of semantic relations (Dahan-Netzer and Elhadad, 1998b), (Azar, 1985), (Levi, 1976). The selection is also a matter of preference ranking among competitors for the same syntactic slot. For example, we have shown in (Dahan-Netzer and Elhadad, 1998b) that the semantic relations that can be realized by a construct state are the ones defined as classifier in SURGE. Therefore, the co-occurrence of such a relation with another classifier leads to a competition for the syntactic slot of "classifier" and also contributes to the decision of how to realize a possessive. Consider the following example:

[cat common
 head [lex "Simlah" / "dress"]
 classifier [lex "Sabat"]
 possessor [cat common
            lex "yalda" / "girl"]]

If only the possessor is provided in the input, it can be mapped to a construct state:

Simlat ha-yaldah
dress-cs the-girl
the girl's dress

If a classifier is provided in addition, the construct-state slot is not available anymore¹, and the free genitive construct must be used:

Simlat ha-Sabat Sel ha-yaldah
dress-cs the-Shabat of the-girl
The Shabat dress of the girl

¹ If the classifier had been specified in the input as a semantic relation, as discussed in (Dahan-Netzer and Elhadad, 1998b), an alternative realization (The girl's dress for Shabat) could have been obtained.

2.3 Partitives in English

The partitive relation denotes a subset of the thing to which the head of a noun phrase refers. A partitive relation can be realized in two main ways: as part of the pre-determiner sequence (Halliday, 1994), (Winograd, 1983), using quantifiers that have a partitive meaning (e.g., some/most/many/one-third (of the) children), or using a construction of the form "a measure/X of Y". There are three subtypes of the partitive construction ((Quirk et al., 1985, p. 130), (Halliday, 1994)): measure (a mile of cable), typical partitives (a loaf of bread, a slice of cake), and general partitives (a piece/bit/an item of X).

In the syntactic structure of a partitive construction, the part is the head of the phrase (and determines agreement), but the Thing is what is being measured. This creates an interesting difference between the logical and syntactic structure of the NP.

(Mel'cuk and Perstov, 1987) defines the elective surface syntactic relation, which connects an of-phrase to superlative adjectives or numerals. An elective phrase is an elliptical structure: the rightmost [string] of the strings. It can be headed by an adjective in superlative form (the poorest among the nation), a numeral (45 of these 256 sentences), an ordinal (the second of three), or a quantitative word having the feature elect: all, most, some of... The elective relation can be used recursively (Many of the longest of the first 45 of these 256 sentences).

In the case of quantifier-partitives, one must decide whether to use an explicitly partitive construct (some of the children) or not (some children). The structure that does not use of is used for generic NPs (when the head is non-definite: most children). For specific reference, the of-construction is optional with nouns and obligatory with pronouns:

all (of) the meat
all of it

2.4 Partitives in Hebrew

There are two possible ways to express partitivity in Hebrew: using a construction of the form "X me-Y", or using a partitive quantifier. In contrast to English, quantifiers that are marked as partitive cannot be used in an explicitly partitive structure:

rov ha-yeladym - * rov me-ha-yeladym - most of the children
Se'ar ha-yeladym - * Se'ar me-ha-yeladym - the rest of the children
col ha-yeladym - * col me-ha-yeladym - all of the children

Conversely, a quantifier that is not marked as partitive can be used in an explicitly partitive structure:

harbeh yeladym - many children
harbeh me-ha-yeladym - many of the children
mewat ha-yeladym - few the-children
mewat me-ha-yeladym - few of the-children

There are complex restrictions in Hebrew on the co-occurrence of several determiners in the same NP and on their relative ordering within the NP. To explain them, Glinert (Glinert, 1989) adopts a functional perspective, quite appropriate to the needs of a generation system, and identifies a general pattern for the NP, which we use as a basis for the mapping rules in HUGG:

[partitive determiner amount head classifiers describers post-det/quant qualifiers]

Yzhar and Doron (Doron, 1991) (Yzhar, 1993) distinguish between two sets of determiners, which they call D and Q quantifiers. The distinction is based on syntactic features, such as position, ability to be modified, ability to participate in partitive structures, and the requirement to agree in number and gender with the head. This distinction is used to explain co-occurrence restrictions, the order of appearance of D vs. Q quantifiers, and the recursive structure of D determiners: D determiners can be layered on top of other D determiners. A single Q quantifier can occur in an NP, and it remains attached closest to the head. In (Dahan-Netzer, 1997) and (Dahan-Netzer and Elhadad, 1998a), we have refined the D/Q classification and preferred using functional criteria: we map the Q quantifiers to the "amount" category defined by Glinert, and the D set is split into the partitive and determiner categories, each with a different function. Of these, only partitives are recursive.

Given these observations, the following decisions must be left "open" in the input to the realizer: how to map a possessor to different realizations; in which order to place co-occurring quantifiers; and whether to use an explicit of construct for partitive quantifiers. The input specification language must also enforce that only acceptable recursive structures be expressible.

3 Defining an Abstract Input for NP Realization

3.1 Input Features

The input structure for NPs we adopt is split into four groups of features, which appear in Figure 2:

- Head or reference-set: defines the thing or set referred to by the NP
- Qualifying: adds information to the thing
- Identifying: identifies the thing among other possible referents
- Quantifying: determines the quantity or amount of the thing.

The main modifications from the existing SURGE input structure are the introduction of the ref-set feature and the update of the usage of the possessor feature. For both of these features, the main requirement on the realizer is to properly handle cases of "competition" for the same restricted syntactic slot, as illustrated in the Shabat dress example above.

The possible realizations of possessor are controlled by the feature realize-possessor-as: free-genitive, bound or double-genitive. Defaults (unmarked cases) vary between the two languages, and the co-occurrence constraints also vary, because each form is mapped to different syntactic slots. For example, a bound possessor is mapped to the determiner slot in English, while in Hebrew it is mapped to a classifier slot.

Qualifying features (English realization / Hebrew realization):
  classifier:  Leather shoe / nawal wor;  Electric chair / cise' HaSmaly
  describer:   Pretty boy / yeled yafeh
  qualifier:   A story about a cat / sypur wal Hatul;  A story I read / sypur S-kar'aty
  possessor:   The king's palace / Armon ha-melex;  A palace of a king / Armon Sel melex;  The book of his / Armono Selo

Identifying features:
  distance:           That boy / yeled zeh
  ordinal:            The third child / ha-yeled ha-SlySy
  status (deictic2):  The same child / Oto yeled
  definite yes/no:    The/a book / (ha-)sefer
  selective yes/no:   Some/∅ children

Quantifying features:
  total +/-/none:     All/No/∅ children / col ha-yeladym, af eHad me-ha-yeladym
  cardinal:           The three children / SloSet ha-yeladym
  fraction:           One-third of the children / SlyS me-ha-yeladym
  multiplier:         Twice his weight / ciflaym miSkalo
  degree +:           (The) many cars / harbeh mexonyot, ha-mexonyot ha-rabot
  degree -:           A little butter / kZaT Hem'ah
  degree none:        Some children / camah yeladym
  comparative yes:    More cars / yoter mexonyot
  superlative yes:    The most cars / rov ha-mexonyot
  evaluative yes:     Too many cars / yoter m-day mexonyot
  orientation -:      Few cars / mewaT mexonyot

Figure 2: Input features

When possessives are realized as free genitives, they are mapped to the slot of qualifiers, usually in the front position. Borochovsky (Borochovsky, 1986) discusses exceptions to this ordering rule in Hebrew:

Vawadah l-wirwurym Sel ha-miSTarah
The commission for-appeals of the-police
* Vawadah Sel ha-miSTarah l-wirwurym

In this example, the purpose-modifier is "closer" semantically to the head than the possessor. The ordering decision must rely on semantic information (purpose) that is not available in our general input structure (cf. (Dahan-Netzer and Elhadad, 1998b) for an even more abstract proposal).

Realization rules in each language take into account the restrictions on possible mappings for the possessor by unifying the feature realize-possessive-as based on the lexical properties of both the head and the possessor:

Construct state not ok for a possessive relation with a proper name:
  ? Simlat Hanah - ? dress-cs Hanah
Double possessive ok for person names and a possessor relation:
  Simlatah Sel Hanah - dress-cs-her of Hanah
Double possessive not ok for a non-possessive relation:
  * Simlatah Sel ha-Sabat - * dress-cs-her of the-Shabat

Similarly, the possible realizations of the partitive are controlled by the feature realize-partitive-as: of or quantifier. Quantifiers are classified along the portion/amount dimension. This system can be realized either lexically, by quantifiers marked as partitive, or by using an explicit partitive syntactic structure "X me-Y" / "X of Y". Because the realization grammar uses the knowledge of which word realizes which function, the distinction among partitive quantifiers, amount quantifiers and determiners predicts the order of the words in the Hebrew NP. The standard order is:

[partitive determiner amount head]

As noted above, only partitives can enter into recursive structures, in both Hebrew and English. Accordingly, our input specification language enforces the constraint that only a single amount and a single identification feature can be present simultaneously. Whenever a partitive quantifier is desired, the input specification must include a ref-set feature instead of the head. This enforces the constraint that partitives yield recursive constructs, similarly to Mel'cuk's elective relation. Such recursive structures are illustrated in the following example:

wasarah me-col ha-mafgynym
ten of-all the-demonstrators
Ten of all of the demonstrators

[cat np
 cardinal [value 10]
 ref-set [total +
          ref-set [definite yes
                   lex "mafgyn"]]]

The input is abstract enough to let the realization grammar decide whether to build an explicitly partitive construction. This decision depends on the lexical features of the realizing quantifiers and is different in English and Hebrew, as discussed above.

Additional realization rules take into account further co-occurrence restrictions. For example, in Hebrew, if the "portion" part is modified with adjectives, then an explicitly partitive construction must be used:

ha-rov ha-gadol mi-beyn ha-yeladym
the-most the-big of-from the-children
The vast majority of the children

In summary, we have presented a set of input features for complex NPs that includes the abstract possessor and ref-set features. These two features can be mapped to different syntactic slots. Realization rules in the grammar control the mapping of these features based on complex co-occurrence restrictions. They also take into account the lexical properties of specific quantifiers and determiners when deciding whether to use explicitly partitive constructions. Finally, the input structure enforces that only partitive relations can enter into recursive structures. Both HUGG in Hebrew and SURGE in English have been adapted to support this modified input specification.

4 Conclusion

To evaluate whether the proposed input structure is appropriate as a bilingual specification, we have tested our generation system on a set of 100 sample inputs for complex NPs in English and Hebrew. In the experiment, we only translated open-class lexical items, thus following the "ideal scenario" discussed in the Introduction. Despite the divergences between their surface syntactic structures, the input structures produced valid complex NPs in both languages in all cases.

We identified the following open problems in the resulting sample: the selection of the unmarked realization option and the determination of the default value of the definite feature remain difficult and vary a lot between the two languages.

This case study has demonstrated that the methodology of contrastive analysis of similar semantic relations in two languages with dissimilar syntactic realizations is a fruitful way to define a well-founded input specification language for syntactic realization.

References

M. Azar. 1985. Classification of Hebrew compounds. In R. Nir, editor, Academic Teaching of Contemporary Hebrew. International Center for University Teaching of Jewish Civilization, Jerusalem. (in Hebrew).
R. Barzilay, K. McKeown, and M. Elhadad. 1999. Information fusion in the context of multi-document summarization. In Proceedings of ACL '99, Maryland, June. ACL.
J.A. Bateman, C.M. Matthiessen, K. Nanri, and L. Zeng. 1991. The re-use of linguistic resources across languages in multilingual generation components. In IJCAI 1991, pages 966-971, Sydney, Australia. Morgan Kaufmann.
J.A. Bateman. 1997. KPML Development Environment: multilingual linguistic resource development and sentence generation. GMD, IPSI, Darmstadt, Germany, release 1.1 edition. www.darmstadt.gmd.de/publish/komet/kpml.html.
R. Aronson Berman. 1978. Modern Hebrew Structure. University Publishing Projects, Tel Aviv.
H. Borer. 1988. On morphological parallelism between compounds and constructs. In Geert Booij and Jaap Van Marle, editors, Yearbook of Morphology 1, pages 45-65. Foris Publications, Dordrecht, Holland.
E. Borochovsky. 1986. The hierarchy of modifiers after the noun. Leshonenu, 50. (in Hebrew).
S. Busemann and H. Horacek. 1998. A flexible shallow approach to text generation. In INLG'98, pages 238-247, Niagara-on-the-Lake, Canada, August.
Y. Dahan-Netzer and M. Elhadad. 1998a. Generating determiners and quantifiers in Hebrew. In Proceedings of the Workshop on Computational Approaches to Semitic Languages, Montreal, Canada, August. ACL.
Y. Dahan-Netzer and M. Elhadad. 1998b. Generation of noun compounds in Hebrew: Can syntactic knowledge be fully encapsulated? In INLG'98, pages 168-177, Niagara-on-the-Lake, Canada, August.
Y. Dahan-Netzer. 1997. HUGG - Unification-based Grammar for the Generation of Hebrew noun phrases. Master's thesis, Ben Gurion University, Beer Sheva, Israel. (in Hebrew).
E. Doron. 1991. The NP structure. In U. Ornan, E. Doron, and A. Ariely, editors, Hebrew Computational Linguistics. Ministry of Science. (in Hebrew).
B. Dorr. 1994. Machine translation divergences: A formal description and proposed solution. Journal of Computational Linguistics, 20(4):597-663.
M. Elhadad and J. Robin. 1996. An overview of SURGE: a re-usable comprehensive syntactic realization component. In INLG'96, Brighton, UK. (demonstration session).
M. Elhadad. 1996. Lexical choice for complex noun phrases: Structure, modifiers and determiners. Machine Translation, 11:159-184.
R. Gedalia. 1996. Automatic generation of business letters: Combining word-based and template-based nlg through the distinct handling of referring expressions. Master's thesis, Ben Gurion University, Beer Sheva, Israel. (in Hebrew).
L. Glinert. 1989. The Grammar of Modern Hebrew. Cambridge University.
M. A. K. Halliday. 1994. An Introduction to Functional Grammar. Edward Arnold, London, second edition.
R.S. Jackendoff. 1990. Semantic Structures. MIT Press, Cambridge, MA.
B. Lavoie and O. Rambow. 1997. A fast and portable realizer for text generation systems. In ANLP'97, Washington, DC. www.cogentex.com/systems/realpro.
J.N. Levi. 1976. A semantic analysis of Hebrew compound nominals. In Peter Cole, editor, Studies in Modern Hebrew Syntax and Semantics. North-Holland, Amsterdam.
I.A. Mel'cuk and N.V. Perstov. 1987. Surface-Syntax of English, a Formal Model in the Meaning Text Theory. Benjamins, Amsterdam/Philadelphia.
U. Ornan. 1964. The Nominal Phrase in Modern Hebrew. Ph.D. thesis, Hebrew University, Jerusalem. (in Hebrew).
R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman.
C. Seikevicz. 1979. The Possessive Construction in Modern Hebrew: A Sociolinguistic Approach. Ph.D. thesis, Georgetown University, Washington, D.C.
T. Winograd. 1983. Language as a Cognitive Process: Syntax, volume I. Addison-Wesley, Reading, MA.
D. Yzhar. 1993. Computational grammar for noun phrases in Hebrew. Master's thesis, Hebrew University, Jerusalem. (in Hebrew).
AUTOMATIC SPEECH RECOGNITION AND ITS APPLICATION TO INFORMATION EXTRACTION

Sadaoki Furui
Department of Computer Science, Tokyo Institute of Technology
2-12-1, Ookayama, Meguro-ku, Tokyo, 152-8552 Japan
[email protected]

ABSTRACT

This paper describes recent progress and the author's perspectives of speech recognition technology. Applications of speech recognition technology can be classified into two main areas, dictation and human-computer dialogue systems. In the dictation domain, automatic broadcast news transcription is now actively investigated, especially under the DARPA project. The broadcast news dictation technology has recently been integrated with information extraction and retrieval technology, and many application systems, such as automatic voice document indexing and retrieval systems, are under development. In the human-computer interaction domain, a variety of experimental systems for information retrieval through spoken dialogue are being investigated. In spite of the remarkable recent progress, we are still behind our ultimate goal of understanding free conversational speech uttered by any speaker under any environment. This paper also describes the most important research issues that we should attack in order to advance towards our ultimate goal of fluent speech recognition.

1. INTRODUCTION

The field of automatic speech recognition has witnessed a number of significant advances in the past 5-10 years, spurred on by advances in signal processing, algorithms, computational architectures, and hardware. These advances include the widespread adoption of a statistical pattern recognition paradigm, a data-driven approach which makes use of a rich set of speech utterances from a large population of speakers, the use of stochastic acoustic and language modeling, and the use of dynamic programming-based search methods. A series of (D)ARPA projects have been a major driving force of the recent progress in research on large-vocabulary, continuous-speech recognition. Specifically, dictation of read newspaper speech, such as North American business newspapers including the Wall Street Journal (WSJ), and conversational speech recognition using an Air Travel Information System (ATIS) task were actively investigated. More recent DARPA programs are broadcast news dictation and natural conversational speech recognition using the Switchboard and Call Home tasks. Research on human-computer dialogue systems, the Communicator program, has also started [1]. Various other systems have been actively investigated in the US, Europe and Japan, stimulated by the DARPA projects. Most of them can be classified into either dictation systems or human-computer dialogue systems.

Figure 1 shows the mechanism of state-of-the-art speech recognizers [2]. Common features of these systems are the use of cepstral parameters and their regression coefficients as speech features, triphone HMMs as acoustic models, vocabularies of several thousand or several tens of thousands of entries, and stochastic language models such as bigrams and trigrams. Such methods have been applied not only to English but also to French, German, Italian, Spanish, Chinese and Japanese. Although there are several language-specific characteristics, similar recognition results have been obtained.

Fig. 1 - Mechanism of state-of-the-art speech recognizers (acoustic analysis turns the speech input into an observation sequence x1..xT; a global search, drawing on the phoneme inventory, pronunciation lexicon and language model, maximizes P(x1..xT | w1..wk) * P(w1..wk) over word sequences w1..wk to produce the recognized word sequence).

The remainder of this paper is organized as follows. Section 2 describes recent progress in broadcast news dictation and its application to information extraction, and Section 3 describes human-computer dialogue systems. In spite of the remarkable recent progress, we are still far behind our ultimate goal of understanding free conversational speech uttered by any speaker under any environment. Section 4 describes how to increase the robustness of speech recognition, and Section 5 describes perspectives of linguistic modeling for spontaneous speech recognition/understanding. Section 6 concludes the paper.

2. BROADCAST NEWS DICTATION AND INFORMATION EXTRACTION

2.1 DARPA Broadcast News Dictation Project

With the introduction of the broadcast news test bed to the DARPA project in 1995, the research effort took a profound step forward. Many of the deficiencies of the WSJ domain were resolved in the broadcast news domain [3]. Most importantly, the fact that broadcast news is a real-world domain of obvious value has led to rapid technology transfer of speech recognition into other research areas and applications. Since the variations in speaking style and accent as well as in channel and environment conditions are totally unconstrained, broadcast news is a superb stress test that requires new algorithms to work across widely varying conditions. Algorithms need to solve a specific problem without degrading any other condition. Another advantage of this domain is that news is easy to collect and the supply of data is boundless. The data is found speech; it is completely uncontrived.

2.2 Japanese Broadcast News Dictation System

We have been developing a large-vocabulary continuous-speech recognition (LVCSR) system for Japanese broadcast-news speech transcription [4][5]. This is part of joint research with the NHK broadcasting company whose goal is the closed-captioning of TV programs. The broadcast-news manuscripts that were used for constructing the language models were taken from the period between July 1992 and May 1996, and comprised roughly 500k sentences and 22M words. To calculate word n-gram language models, we segmented the broadcast-news manuscripts into words by using a morphological analyzer, since Japanese sentences are written without spaces between words. A word-frequency list was derived for the news manuscripts, and the 20k most frequently used words were selected as vocabulary words. This 20k vocabulary covers about 98% of the words in the broadcast-news manuscripts. We calculated bigrams and trigrams and estimated unseen n-grams using Katz's back-off smoothing method.

Japanese text is written with a mixture of three kinds of characters: Chinese characters (Kanji) and two kinds of Japanese characters (Hira-gana and Kata-kana). Most Kanji have multiple readings, and correct readings can only be decided according to context. Conventional language models usually assign equal probability to all possible readings of each word. This causes recognition errors because the assigned probability is sometimes very different from the true probability. We therefore constructed a language model that depends on the readings of words in order to take into account the frequency and context-dependency of the readings.
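To make the modeling recipe above concrete, here is a minimal sketch of vocabulary selection and bigram estimation with a simplified absolute-discount back-off. It is illustrative only: it is not the NHK/Tokyo Tech pipeline, and true Katz back-off (used in the paper) derives its discounts from Good-Turing counts and renormalizes the freed mass over unseen successors, both of which this toy version omits.

```python
from collections import Counter

def build_bigram_lm(sentences, vocab_size=20000, discount=0.5):
    """Toy bigram model with absolute-discount back-off to unigrams.
    `sentences` is an iterable of word lists (already segmented, as the
    Japanese manuscripts are by the morphological analyzer)."""
    unigrams, bigrams = Counter(), Counter()
    for words in sentences:
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    # keep the most frequent words; map the rest to a single <unk> token
    vocab = {w for w, _ in unigrams.most_common(vocab_size)}
    norm = lambda w: w if w in vocab else "<unk>"
    uni, bi, succ = Counter(), Counter(), Counter()
    for w, c in unigrams.items():
        uni[norm(w)] += c
    for (w1, w2), c in bigrams.items():
        bi[(norm(w1), norm(w2))] += c
    for (w1, _w2) in bi:
        succ[w1] += 1                     # distinct successors of w1
    total = sum(uni.values())

    def prob(w2, w1):
        w1, w2 = norm(w1), norm(w2)
        if bi[(w1, w2)] > 0:              # seen bigram: discounted estimate
            return (bi[(w1, w2)] - discount) / uni[w1]
        # back off: mass freed by discounting, redistributed via unigrams
        alpha = discount * succ[w1] / uni[w1] if uni[w1] else 1.0
        return alpha * uni[w2] / total
    return prob
```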
Broadcast news speech includes filled pauses at the beginning and in the middle of sentences, which cause recognition errors in our language models that use news manuscripts written prior to broadcasting. To cope with this problem, we introduced filled-pause modeling into the language model.

News speech data, from TV broadcasts in July 1996, were divided into two parts, a clean part and a noisy part, and were separately evaluated. The clean part consisted of utterances with no background noise, and the noisy part consisted of utterances with background noise. The noisy part included spontaneous speech such as reports by correspondents. We extracted 50 male utterances and 50 female utterances for each part, yielding four evaluation sets: male-clean (m/c), male-noisy (m/n), female-clean (f/c), female-noisy (f/n). Each set included utterances by five or six speakers. All utterances were manually segmented into sentences.

Table 1 shows the experimental results for the baseline language model (LM1) and the new language models. LM2 is the reading-dependent language model, and LM3 is a modification of LM2 by filled-pause modeling. For clean speech, LM2 reduced the word error rate by 4.7% relative to LM1, and the LM3 model reduced the word error rate by 10.9% relative to LM2 on average.

Table 1 - Experimental results of Japanese broadcast news dictation with various language models (word error rate [%]).

Language model   m/c    m/n    f/c    f/n
LM1              17.6   37.2   14.3   41.2
LM2              16.8   35.9   13.6   39.3
LM3              14.2   33.1   12.9   38.1

2.3 Information Extraction in the DARPA Project

News is filled with events, people, and organizations and all manner of relations among them. The great richness of material and the naturally evolving content in broadcast news have leveraged its value into areas of research well beyond speech recognition. In the DARPA project, the Spoken Document Retrieval (SDR) track of TREC and the Topic Detection and Tracking (TDT) program are supported by the same materials and systems that have been developed in the broadcast news dictation arena [3]. BBN's Rough'n'Ready system extracts structural features of broadcast news. CMU's Informedia [6], MITRE's Broadcast Navigator, and SRI's Maestro have all exploited the multimedia features of news, producing a wide range of capabilities for browsing news archives interactively. These systems integrate various diverse speech and language technologies including speech recognition, speaker change detection, speaker identification, name extraction, topic classification and information retrieval.

2.4 Information Extraction from Japanese Broadcast News

Summarizing transcribed news speech is useful for retrieving or indexing broadcast news. We investigated a method for extracting topic words from nouns in the speech recognition results on the basis of a significance measure [4][5]. The extracted topic words were compared with "true" topic words, which were given by three human subjects. The results are shown in Figure 2. When the top five topic words were chosen (recall = 13%), 87% of them were correct on average.

Fig. 2 - Topic word extraction results (precision [%] plotted against recall [%], for topic words extracted from recognized speech and from text).
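The significance measure itself is not reproduced in this excerpt, so the sketch below substitutes a tf-idf-style score as an assumed stand-in; it ranks the nouns of one transcribed story against background document frequencies.

```python
import math
from collections import Counter

def topic_words(doc_nouns, background_doc_freq, n_background_docs, top_n=5):
    """Rank the nouns of a transcribed news story by a tf-idf-like
    significance score. The actual measure used in the paper is not
    spelled out here, so this scoring function is an assumption.
    `background_doc_freq` maps a word to the number of background
    documents containing it."""
    tf = Counter(doc_nouns)
    def score(word):
        idf = math.log(n_background_docs / (1 + background_doc_freq.get(word, 0)))
        return tf[word] * idf
    return sorted(tf, key=score, reverse=True)[:top_n]
```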
3. HUMAN-COMPUTER DIALOGUE SYSTEMS

3.1 Typical Systems in the US and Europe

Recently a number of sites have been working on human-computer dialogue systems. The following are typical examples.

(a) The View4You system at the University of Karlsruhe
The University of Karlsruhe focuses its speech research on a content-addressable multimedia information retrieval system, under a multi-lingual environment, where queries and multimedia documents may appear in multiple languages [7]. The system is called "View4You" and their research is conducted in cooperation with the Informedia project at CMU [6]. In the View4You system, German and Serbo-Croatian public newscasts are recorded daily. The newscasts are automatically segmented and an index is created for each of the segments by means of automatic speech recognition. The user can query the system in natural language by keyboard or through a speech utterance. The system returns a list of segments which is sorted by relevance with respect to the user query. By selecting a segment, the user can watch the corresponding part of the news show on his/her computer screen. The system overview is shown in Fig. 3.

Fig. 3 - System overview of the View4You system (satellite broadcasts are MPEG-coded and segmented; a speech recognizer transcribes the audio segments, and a video query server, supported by a thesaurus and Internet newspaper text, matches user queries against the indexed segments).

(b) The SCAN speech-content-based audio navigator at AT&T Labs
SCAN (Speech Content based Audio Navigator) is a spoken document retrieval system developed at AT&T Labs integrating speaker-independent, large-vocabulary speech recognition with information retrieval to support query-based retrieval of information from speech archives [8]. Initial development focused on the application of SCAN to the broadcast news domain. An overview of the system architecture is provided in Fig. 4. The system consists of three components: (1) a speaker-independent large-vocabulary speech recognition engine which segments the speech archive and generates transcripts, (2) an information-retrieval engine which indexes the transcriptions and formulates hypotheses regarding document relevance to user-submitted queries, and (3) a graphical user interface which supports search and local contextual navigation based on the machine-generated transcripts and graphical representations of query-keyword distribution in the retrieved speech transcripts. The speech recognition component of SCAN includes an intonational phrase boundary detection module and a classification module. These subcomponents preprocess the speech data before passing the speech to the recognizer itself.

Fig. 4 - Overview of the SCAN spoken document system architecture (intonational phrase boundary detection and classification feed the recognizer, whose transcripts support information retrieval through the user interface).

(c) The GALAXY-II conversational system at MIT
GALAXY is a client-server architecture developed at MIT for accessing on-line information using spoken dialogue [9]. It has served as the testbed for developing human language technology at MIT for several years. Recently, they have initiated a significant redesign of the GALAXY architecture to make it easier for researchers to develop their own applications, using either exclusively their own servers or intermixing them with servers developed by others. This redesign was done in part due to the fact that GALAXY has been designed as the first reference architecture for the new DARPA Communicator program. The resulting configuration of the GALAXY-II architecture is shown in Fig. 5.
The boxes in this figure represent various human language technology servers as well as information and domain servers. The label in italics next to each box identifies the corresponding MIT system component. Interactions between servers are mediated by the hub and managed in the hub script. A particular dialogue session is initiated by a user either through interaction with a graphical interface at a Web site, through direct telephone dialup, or through a desktop agent.

Fig. 5 - Architecture of GALAXY-II (a hub mediating servers for speech recognition (SUMMIT), frame construction (TINA), context tracking (Discourse), dialogue management (D-Server), application back-ends (I-Server), language generation (GENESIS), text-to-speech conversion (DECTALK & ENVOICE), and audio/telephone serving).

(d) The ARISE train travel information system at LIMSI
The ARISE (Automatic Railway Information Systems for Europe) project aims at developing prototype telephone information services for train travel information in several European countries [10]. In collaboration with the Vecsys company and with the SNCF (the French Railways), LIMSI has developed a prototype telephone service providing timetables, simulated fares and reservations, and information on reductions and services for the main French intercity connections. A prototype French/English service for the high-speed trains between Paris and London is also under development. The system is based on the spoken language systems developed for the RailTel project [11] and the ESPRIT Mask project [12]. Compared to the RailTel system, the main advances in ARISE are in dialogue management, confidence measures, the inclusion of an optional spell mode for city/station names, and barge-in capability to allow more natural interaction between the user and the machine.

The speech recognizer uses n-gram back-off language models estimated on the transcriptions of spoken queries. Since the amount of language model training data is small, some grammatical classes, such as cities, days and months, are used to provide more robust estimates of the n-gram probabilities. A confidence score is associated with each hypothesized word, and if the score is below an empirically determined threshold, the hypothesized word is marked as uncertain. The uncertain words are ignored by the understanding component or used by the dialogue manager to start clarification subdialogues.

3.2 Designing a Multimodal Dialogue System for Information Retrieval

We have recently investigated a paradigm for designing multimodal dialogue systems [13]. An example task of the system was to retrieve particular information about different shops in the Tokyo Metropolitan area, such as their names, addresses and phone numbers. The system accepted speech and screen touching as input, and presented retrieved information on a screen display or by synthesized speech, as shown in Fig. 6.

Fig. 6 - Multimodal dialogue system structure for information retrieval (speech and touch-screen input feed a dialogue manager, with screen display and speech synthesizer output).

The speech recognition part was modeled by an FSN (finite state network) consisting of keywords and fillers, both of which were implemented by the DAWG (directed acyclic word-graph) structure. The number of keywords was 306, consisting of district names and business names. The fillers accepted roughly 100,000 non-keywords/phrases occurring in spontaneous speech. A variety of dialogue strategies were designed and evaluated based on an objective cost function having a set of actions and states as parameters. The expected dialogue cost
was calculated for each strategy, and the best strategy was selected according to the keyword recognition accuracy.

4. ROBUST SPEECH RECOGNITION

4.1 Automatic adaptation

Ultimately, speech recognition systems should be capable of robust, speaker-independent or speaker-adaptive, continuous speech recognition. Figure 7 shows the main causes of acoustic variation in speech [14]. It is crucial to establish methods that are robust against voice variation due to individuality, the physical and psychological condition of the speaker, telephone sets, microphones, network characteristics, additive background noise, speaking styles, and so on. Figure 8 shows the main methods for making speech recognition systems robust against voice variation. It is also important for the systems to impose few restrictions on tasks and vocabulary.

Fig. 7 - Main causes of acoustic variation in speech (noise: other speakers, background noise, reverberations; channel: distortion, echoes, dropouts; speaker: voice quality, pitch, gender, dialect; speaking style: stress/emotion, speaking rate, Lombard effect; task/context: man-machine dialogue, dictation, free conversation, interview; phonetic/prosodic context; microphone: distortion, electrical noise, directional characteristics).

Fig. 8 - Main methods to cope with voice variation in speech recognition (microphone level: close-talking microphone, microphone array; analysis and feature extraction: auditory models (EIH, SMC, PLP), adaptive filtering, noise subtraction, comb filtering; feature-level normalization/adaptation: cepstral mean normalization, delta cepstra, RASTA; model-level normalization/adaptation: noise addition, HMM (de)composition (PMC), model transformation (MLLR), Bayesian adaptive learning; distance/similarity measures over reference templates/models: frequency weighting, weighted cepstral distance, cepstrum projection measure; robust matching: word spotting, utterance verification; linguistic processing: language model adaptation).

To solve these problems, it is essential to develop automatic adaptation techniques. Extraction and normalization of (adaptation to) voice individuality is one of the most important issues [14]. A small percentage of people occasionally cause systems to produce exceptionally low recognition rates. This is an example of the "sheep and goats" phenomenon. Speaker adaptation (normalization) methods can usually be classified into supervised (text-dependent) and unsupervised (text-independent) methods. Unsupervised, on-line, instantaneous/incremental adaptation is ideal, since the system works as if it were a speaker-independent system, and it performs increasingly better as it is used. However, since we have to adapt many phonemes using a limited set of utterances containing only a limited number of phonemes, it is crucial to use reasonable modeling of speaker-to-speaker variability or constraints. Modeling of the mechanism of speech production is expected to provide a useful modeling of speaker-to-speaker variability.
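Of the feature-level techniques listed in Fig. 8, cepstral mean normalization is perhaps the simplest to state. A minimal sketch follows (batch and incremental variants, using numpy; the decay constant in the on-line variant is an illustrative choice, not a value from the paper).

```python
import numpy as np

def cmn(cepstra):
    """Batch cepstral mean normalization: subtract the per-utterance mean
    of each cepstral coefficient, a simple compensation for stationary
    channel distortion. `cepstra` is a float array (frames, coefficients)."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

def online_cmn(cepstra, alpha=0.995):
    """Incremental variant using an exponentially-weighted running mean,
    usable on-line when the utterance end is not yet known."""
    mean = np.zeros(cepstra.shape[1])
    out = np.empty_like(cepstra)
    for t, frame in enumerate(cepstra):
        mean = alpha * mean + (1 - alpha) * frame
        out[t] = frame - mean
    return out
```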
4.2 On-line speaker adaptation in broadcast news dictation

Since, in broadcast news, each speaker utters several sentences in succession, the recognition error rate can be reduced by adapting acoustic models incrementally within a segment that contains only one speaker. We applied on-line, unsupervised, instantaneous and incremental speaker adaptation combined with automatic detection of speaker changes [4]. The MLLR [15]-MAP [16] and VFS (vector-field smoothing) [17] methods were instantaneously and incrementally carried out for each utterance.

The adaptation process is as follows. For the first input utterance, the speaker-independent model is used for both recognition and adaptation, and the first speaker-adapted model is created. For the second input utterance, the likelihood value of the utterance given the speaker-independent model and that given the speaker-adapted model are calculated and compared. If the former value is larger, the utterance is considered to be the beginning of a new speaker, and another speaker-adapted model is created. Otherwise, the existing speaker-adapted model is incrementally adapted. For the succeeding input utterances, speaker changes are detected in the same way by comparing the acoustic likelihood values of each utterance obtained from the speaker-independent model and some speaker-adapted models. If the speaker-independent model yields a larger likelihood than any of the speaker-adapted models, a speaker change is detected and a new speaker-adapted model is constructed. Experimental results show that the adaptation reduced the word error rate by 11.8% relative to the speaker-independent models.

5. PERSPECTIVES OF LANGUAGE MODELING

5.1 Language modeling for spontaneous speech recognition

One of the most important issues for speech recognition is how to create language models (rules) for spontaneous speech. When recognizing spontaneous speech in dialogues, it is necessary to deal with variations that are not encountered when recognizing speech that is read from texts. These variations include extraneous words, out-of-vocabulary words, ungrammatical sentences, disfluency, partial words, repairs, hesitations, and repetitions. It is crucial to develop robust and flexible parsing algorithms that match the characteristics of spontaneous speech. A paradigm shift from the present transcription-based approach to a detection-based approach will be important to solve such problems [2]. How to extract contextual information, predict users' responses, and focus on key words are very important issues. Stochastic language modeling, such as bigrams and trigrams, has been a very powerful tool, so it would be very effective to extend its utility by incorporating semantic knowledge. It would also be useful to integrate unification grammars and context-free grammars for efficient word prediction.

Style shifting is also an important problem in spontaneous speech recognition. In typical laboratory experiments, speakers are reading lists of words rather than trying to accomplish a real task. Users actually trying to accomplish a task, however, use a different linguistic style. Adaptation of linguistic models according to tasks, topics and speaking styles is a very important issue, since collecting a large linguistic database for every new task is difficult and costly.
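One inexpensive form of such linguistic-model adaptation, given only a small amount of task-specific data, is to interpolate a large general model with a small task-specific one. This is a generic technique rather than a method proposed in this paper; a sketch:

```python
def interpolated_lm(p_general, p_task, lam=0.7):
    """Linear interpolation of a general LM with a small task-specific LM.
    `p_general` and `p_task` map (w2, w1) -> probability; the weight `lam`
    would be tuned on held-out task data (e.g. by EM). Illustrative only."""
    def prob(w2, w1):
        return lam * p_general(w2, w1) + (1 - lam) * p_task(w2, w1)
    return prob
```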
5.2 Message-Driven Speech Recognition

State-of-the-art automatic speech recognition systems employ the criterion of maximizing P(W|X), where W is a word sequence and X is an acoustic observation sequence. This criterion is reasonable for dictating read speech. However, the ultimate goal of automatic speech recognition is to extract the underlying messages of the speaker from the speech signals. Hence we need to model the process of speech generation and recognition as shown in Fig. 9 [18], where M is the message (content) that a speaker intended to convey.

Fig. 9 - A communication-theoretic view of speech generation and recognition (a message source M passes through a linguistic channel - language, vocabulary, grammar, semantics, context, habits - and an acoustic channel - speaker, reverberation, noise, transmission characteristics, microphone - before reaching the speech recognizer).

According to this model, the speech recognition process is represented as the maximization of the following a posteriori probability [4][5]:

    max_M P(M|X) = max_M Σ_W P(M|W) P(W|X)                         (1)

Using Bayes' rule, Eq. (1) can be expressed as

    max_M P(M|X) = max_M Σ_W P(X|W) P(W|M) P(M) / P(X)             (2)

For simplicity, we can approximate the equation as

    max_M P(M|X) ≈ max_{M,W} P(X|W) P(W|M) P(M) / P(X)             (3)

P(X|W) is calculated using hidden Markov models in the same way as in usual recognition processes. We assume that P(M) has a uniform probability for all M. Therefore, we only need to consider further the term P(W|M). We assume that P(W|M) can be expressed as follows:

    P(W|M) = P(W)^λ * P'(W|M)^(1-λ)                                (4)

where λ, 0 ≤ λ ≤ 1, is a weighting factor. P(W), the first term of the right-hand side, represents the part of P(W|M) that is independent of M and can be given by a general statistical language model. P'(W|M), the second term of the right-hand side, represents the part of P(W|M) that depends on M. We consider that M is represented by a co-occurrence of words, based on the distributional hypothesis by Harris [19]. Since this approach formulates P'(W|M) without explicitly representing M, it can use information about the speaker's message M without being affected by the quantization problem of topic classes. This new formulation of speech recognition was applied to the Japanese broadcast news dictation, and it was found that word error rates for the clean set were slightly reduced by this method.
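Read as an N-best rescoring procedure, Eqs. (3)-(4) amount to reranking recognizer hypotheses with an interpolated language score. The sketch below is a schematic reading only: it assumes per-hypothesis acoustic log-probabilities are available and stubs out the co-occurrence-based term log P'(W|M).

```python
def message_driven_rescore(nbest, log_p_general, log_p_cooc, lam=0.8):
    """Rerank an N-best list with Eq. (4):
    log P(W|M) = lam * log P(W) + (1 - lam) * log P'(W|M).
    `nbest` holds (words, acoustic_logprob) pairs; `log_p_general`
    scores a word sequence under a general LM; `log_p_cooc` is a
    stand-in for the co-occurrence-based term, which the paper derives
    from word co-occurrence statistics rather than explicit topic classes."""
    def score(entry):
        words, acoustic_logprob = entry
        lm = lam * log_p_general(words) + (1 - lam) * log_p_cooc(words)
        return acoustic_logprob + lm
    return max(nbest, key=score)
```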
6. CONCLUSIONS

Speech recognition technology has made remarkable progress in the past 5-10 years. Based on this progress, various application systems have been developed using dictation and spoken dialogue technology. One of the most important applications is information extraction and retrieval. Using speech recognition technology, broadcast news can be automatically indexed, producing a wide range of capabilities for browsing news archives interactively. Since speech is the most natural and efficient communication method between humans, automatic speech recognition will continue to find applications, such as meeting/conference summarization, automatic closed captioning, and interpreting telephony. It is expected that speech recognizers will become the main input device of the "wearable" computers that are now actively investigated.

In order to materialize these applications, we have to solve many problems. The most important issue is how to make speech recognition systems robust against acoustic and linguistic variation in speech. In this context, a paradigm shift from speech recognition to understanding, in which the underlying messages of the speaker, that is, the meaning/context that the speaker intended to convey, are extracted instead of transcribing all the spoken words, will be indispensable.

REFERENCES

[1] http://fofoca.mitre.org
[2] S. Furui: "Future directions in speech information processing", Proc. 16th ICA and 135th Meeting ASA, Seattle, pp. 1-4 (1998)
[3] F. Kubala: "Broadcast news is good news", DARPA Broadcast News Workshop, Virginia (1999)
[4] K. Ohtsuki, S. Furui, N. Sakurai, A. Iwasaki and Z.-P. Zhang: "Improvements in Japanese broadcast news transcription", DARPA Broadcast News Workshop, Virginia (1999)
[5] K. Ohtsuki, S. Furui, A. Iwasaki and N. Sakurai: "Message-driven speech recognition and topic-word extraction", Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Phoenix, pp. 625-628 (1999)
[6] M. Witbrock and A. G. Hauptmann: "Speech recognition and information retrieval: Experiments in retrieving spoken documents", Proc. DARPA Speech Recognition Workshop, Virginia, pp. 160-164 (1997). See also http://www.informedia.cs.cmu.edu/
[7] T. Kemp, P. Geutner, M. Schmidt, B. Tomaz, M. Weber, M. Westphal and A. Waibel: "The interactive systems labs View4You video indexing system", Proc. Int. Conf. Spoken Language Processing, Sydney, pp. 1639-1642 (1998)
[8] J. Choi, D. Hindle, J. Hirschberg, I. Magrin-Chagnolleau, C. Nakatani, F. Pereira, A. Singhal and S. Whittaker: "SCAN - speech content based audio navigator: a systems overview", Proc. Int. Conf. Spoken Language Processing, Sydney, pp. 2867-2870 (1998)
[9] S. Seneff, E. Hurley, R. Lau, C. Pao, P. Schmid and V. Zue: "GALAXY-II: a reference architecture for conversational system development", Proc. Int. Conf. Spoken Language Processing, Sydney, pp. 931-934 (1998)
[10] L. Lamel, S. Rosset, J. L. Gauvain and S. Bennacef: "The LIMSI ARISE system for train travel information", Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., Phoenix, pp. 501-504 (1999)
[11] L. F. Lamel, S. K. Bennacef, S. Rosset, L. Devillers, S. Foukia, J. J. Gangolf and J. L. Gauvain: "The LIMSI RailTel system: Field trial of a telephone service for rail travel information", Speech Communication, 23, pp. 67-82 (1997)
[12] J. L. Gauvain, J. J. Gangolf and L. Lamel: "Speech recognition for an information kiosk", Proc. Int. Conf. Spoken Language Processing, Philadelphia, pp. 849-852 (1998)
[13] S. Furui and K. Yamaguchi: "Designing a multimodal dialogue system for information retrieval", Proc. Int. Conf. Spoken Language Processing, Sydney, pp. 1191-1194 (1998)
[14] S. Furui: "Recent advances in robust speech recognition", Proc. ESCA-NATO Workshop on Robust Speech Recognition for Unknown Communication Channels, Pont-a-Mousson, France, pp. 11-20 (1997)
[15] C. J. Leggetter and P. C. Woodland: "Maximum likelihood linear regression for speaker adaptation of continuous density hidden Markov models", Computer Speech and Language, pp. 171-185 (1995)
[16] J.-L. Gauvain and C.-H. Lee: "Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains", IEEE Trans. on Speech and Audio Processing, 2, 2, pp. 291-298 (1994)
[17] K. Ohkura, M. Sugiyama and S. Sagayama: "Speaker adaptation based on transfer vector field smoothing with continuous mixture density HMMs", Proc. Int. Conf. Spoken Language Processing, Banff, pp. 369-372 (1992)
[18] B.-H. Juang: "Automatic speech recognition: Problems, progress & prospects", IEEE Workshop on Neural Networks for Signal Processing (1996)
[19] Z. S. Harris: "Co-occurrence and transformation in linguistic structure", Language, 33, pp. 283-340 (1957)
A Method for Word Sense Disambiguation of Unrestricted Text

Rada Mihalcea and Dan I. Moldovan
Department of Computer Science and Engineering
Southern Methodist University
Dallas, Texas, 75275-0122
{rada,moldovan}@seas.smu.edu

Abstract

Selecting the most appropriate sense for an ambiguous word in a sentence is a central problem in Natural Language Processing. In this paper, we present a method that attempts to disambiguate all the nouns, verbs, adverbs and adjectives in a text, using the senses provided in WordNet. The senses are ranked using two sources of information: (1) the Internet for gathering statistics for word-word co-occurrences and (2) WordNet for measuring the semantic density for a pair of words. We report an average accuracy of 80% for the first ranked sense, and 91% for the first two ranked senses. Extensions of this method for larger windows of more than two words are considered.

1 Introduction

Word Sense Disambiguation (WSD) is an open problem in Natural Language Processing. Its solution impacts other tasks such as discourse, reference resolution, coherence, inference and others. WSD methods can be broadly classified into three types:

1. WSD that makes use of the information provided by machine-readable dictionaries (Cowie et al., 1992), (Miller et al., 1994), (Agirre and Rigau, 1995), (Li et al., 1995), (McRoy, 1992);
2. WSD that uses information gathered from training on a corpus that has already been semantically disambiguated (supervised training methods) (Gale et al., 1992), (Ng and Lee, 1996);
3. WSD that uses information gathered from raw corpora (unsupervised training methods) (Yarowsky, 1995), (Resnik, 1997).

There are also hybrid methods that combine several sources of knowledge such as lexicon information, heuristics, collocations and others (McRoy, 1992), (Bruce and Wiebe, 1994), (Ng and Lee, 1996), (Rigau et al., 1997).

Statistical methods produce high-accuracy results for a small number of preselected words. A lack of widely available semantically tagged corpora almost excludes supervised learning methods. A possible solution for the automatic acquisition of sense-tagged corpora has been presented in (Mihalcea and Moldovan, 1999), but the corpora acquired with this method have not yet been tested for statistical disambiguation of words. On the other hand, disambiguation using unsupervised methods has the disadvantage that the senses are not well defined. So far, none of the statistical methods disambiguates adjectives or adverbs.

In this paper, we introduce a method that attempts to disambiguate all the nouns, verbs, adjectives and adverbs in a text, using the senses provided in WordNet (Fellbaum, 1998). To our knowledge, there is only one other method, recently reported, that disambiguates unrestricted words in texts (Stetina et al., 1998).

2 A word-word dependency approach

The method presented here takes advantage of the sentence context. The words are paired and an attempt is made to disambiguate one word within the context of the other word. This is done by searching the Internet with queries formed using different senses of one word, while keeping the other word fixed. The senses are ranked simply by the order provided by the number of hits. A good accuracy is obtained, perhaps because the number of texts on the Internet is so large. In this way, all the words are processed and the senses are ranked. We use the ranking of senses to curb the computational complexity in the step that follows. Only the most promising senses are kept.

The next step is to refine the ordering of senses by using a completely different method, namely the semantic density. This is measured by the number of common words that are within a semantic distance of two or more words. The closer the semantic relationship between two words, the higher the semantic density between them. We introduce the semantic density because it is relatively easy to measure it on an MRD like WordNet. A metric is introduced in this sense which, when applied to all possible combinations of the senses of two or more words, ranks them.

An essential aspect of the WSD method presented here is that it provides a ranking of possible associations between words instead of a binary yes/no decision for each possible sense combination. This allows for a controllable precision, as other modules may be able to distinguish later the correct sense association from such a small pool.

3 Contextual ranking of word senses

Since the Internet contains the largest collection of texts electronically stored, we use the Internet as a source of corpora for ranking the senses of the words.

3.1 Algorithm 1

For a better explanation of this algorithm, we provide the steps below with an example. We considered the verb-noun pair "investigate report"; in order to make the understanding of these examples easier, we took into consideration only the first two senses of the noun report. These two senses, as defined in WordNet, appear in the synsets {report#1, study} and {report#2, news report, story, account, write up}.

INPUT: semantically untagged word1 - word2 pair (W1 - W2)
OUTPUT: ranking of the senses of one word
PROCEDURE:

STEP 1. Form a similarity list for each sense of one of the words. Pick one of the words, say W2, and using WordNet, form a similarity list for each sense of that word. For this, use the words from the synset of each sense and the words from the hypernym synsets. Consider, for example, that W2 has m senses; thus W2 appears in m similarity lists:

(W2^1, W2^1(1), W2^1(2), ..., W2^1(k1))
(W2^2, W2^2(1), W2^2(2), ..., W2^2(k2))
...
(W2^m, W2^m(1), W2^m(2), ..., W2^m(km))

where W2^1, W2^2, ..., W2^m are the senses of W2, and W2^i(s) represents the synonym number s of the sense W2^i as defined in WordNet.

Example: The similarity lists for the first two senses of the noun report are:
(report, study)
(report, news report, story, account, write up)

STEP 2. Form W1 - W2^i(s) pairs. The pairs that may be formed are:

(W1 - W2^1, W1 - W2^1(1), W1 - W2^1(2), ..., W1 - W2^1(k1))
(W1 - W2^2, W1 - W2^2(1), W1 - W2^2(2), ..., W1 - W2^2(k2))
...
(W1 - W2^m, W1 - W2^m(1), W1 - W2^m(2), ..., W1 - W2^m(km))

Example: The pairs formed with the verb investigate and the words in the similarity lists of the noun report are:
(investigate-report, investigate-study)
(investigate-report, investigate-news report, investigate-story, investigate-account, investigate-write up)

STEP 3. Search the Internet and rank the senses W2^i. A search performed on the Internet for each set of pairs as defined above results in a value indicating the frequency of occurrences for W1 and the sense of W2. In our experiments we used (AltaVista, 1996) since it is one of the most powerful search engines currently available. Using the operators provided by AltaVista, query forms are defined for each W1 - W2^i set above:

(a) ("W1 W2^i" OR "W1 W2^i(1)" OR "W1 W2^i(2)" OR ... OR "W1 W2^i(ki)")
(b) ((W1 NEAR W2^i) OR (W1 NEAR W2^i(1)) OR (W1 NEAR W2^i(2)) OR ... OR (W1 NEAR W2^i(ki)))

for all 1 ≤ i ≤ m. Using one of these queries, we get the number of hits for each sense i of W2, and this provides a ranking of the m senses of W2 as they relate with W1.
Example: The types of query that can be formed using the verb investigate and the similarity lists of the noun report are shown below. After each query, we indicate the number of hits obtained by a search on the Internet, using AltaVista.

(a) ("investigate report" OR "investigate study") (478)
    ("investigate report" OR "investigate news report" OR "investigate story" OR "investigate account" OR "investigate write up") (~81)
(b) ((investigate NEAR report) OR (investigate NEAR study)) (34880)
    ((investigate NEAR report) OR (investigate NEAR news report) OR (investigate NEAR story) OR (investigate NEAR account) OR (investigate NEAR write up)) (15ss4)

A similar algorithm is used to rank the senses of W1 while keeping W2 constant (undisambiguated). Since these two procedures are done over a large corpus (the Internet), and with the help of similarity lists, there is little correlation between the results produced by the two procedures.

3.1.1 Procedure Evaluation

This method was tested on 384 pairs: 200 verb-noun (files br-a01, br-a02), 127 adjective-noun (file br-a01), and 57 adverb-verb (file br-a01), extracted from SemCor 1.6 of the Brown corpus. Using query form (a) on AltaVista, we obtained the results shown in Table 1. The table indicates the percentages of correct senses (as given by SemCor) ranked by us in top 1, top 2, top 3, and top 4 of our list. We concluded that by keeping the top four choices for verbs and nouns and the top two choices for adjectives and adverbs, we cover with high percentage (mid and upper 90's) all relevant senses. Looking from a different point of view, the meaning of the procedure so far is that it excludes the senses that do not apply, and this can save a considerable amount of computation time, as many words are highly polysemous.

Table 1: Statistics gathered from the Internet for 384 word pairs.

            top 1   top 2   top 3   top 4
noun        76%     83%     86%     98%
verb        60%     68%     86%     87%
adjective   79.8%   93%
adverb      87%     97%

We also used the query form (b), but the results obtained were similar; using the operator NEAR, a larger number of hits is reported, but the sense ranking remains more or less the same.
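AltaVista and its query operators are long gone, so the sketch below hides the search engine behind a hypothetical hit_count callable and otherwise follows query form (a) of Algorithm 1.

```python
def rank_senses(w1, similarity_lists, hit_count):
    """Algorithm 1 with query form (a): rank the senses of W2 in the
    context of W1 by search-engine hit counts. `similarity_lists[i]` is
    the similarity list for sense i of W2 (synset plus hypernym words);
    `hit_count(query)` is a hypothetical callable standing in for the
    AltaVista interface used in the paper."""
    scored = []
    for sense_index, synonyms in enumerate(similarity_lists):
        query = " OR ".join('"{} {}"'.format(w1, syn) for syn in synonyms)
        scored.append((hit_count(query), sense_index))
    scored.sort(reverse=True)              # most hits first
    return [sense_index for _, sense_index in scored]

# e.g. rank_senses("investigate",
#                  [["report", "study"],
#                   ["report", "news report", "story", "account", "write up"]],
#                  hit_count=my_search_api)   # my_search_api is hypothetical
```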
3.2 Conceptual density algorithm

A measure of the relatedness between words can be a knowledge source for several decisions in NLP applications. The approach we take here is to construct a linguistic context for each sense of the verb and noun, and to measure the number of common nouns shared by the verb and the noun contexts. In WordNet each concept has a gloss that acts as a micro-context for that concept. This is a rich source of linguistic information that we found useful in determining conceptual density between words.

3.2.1 Algorithm 2

INPUT: semantically untagged verb - noun pair and a ranking of noun senses (as determined by Algorithm 1)
OUTPUT: sense-tagged verb - noun pair
PROCEDURE:

STEP 1. Given a verb-noun pair V - N, denote with <v1, v2, ..., vh> and <n1, n2, ..., nl> the possible senses of the verb and the noun using WordNet.

STEP 2. Using Algorithm 1, the senses of the noun are ranked. Only the first t possible senses indicated by this ranking will be considered. The rest are dropped to reduce the computational complexity.

STEP 3. For each possible pair vi - nj, the conceptual density is computed as follows:
(a) Extract all the glosses from the sub-hierarchy including vi (the rationale for selecting the sub-hierarchy is explained below).
(b) Determine the nouns from these glosses. These constitute the noun-context of the verb. Each such noun is stored together with a weight w that indicates the level in the sub-hierarchy of the verb concept in whose gloss the noun was found.
(c) Determine the nouns from the noun sub-hierarchy including nj.
(d) Determine the conceptual density Cij of common concepts between the nouns obtained at (b) and the nouns obtained at (c) using the metric:

    Cij = ( Σ_{k=1..|cdij|} w_k ) / log(descendants_j)        (1)

where:
- |cdij| is the number of common concepts between the hierarchies of vi and nj;
- w_k are the levels of the nouns in the hierarchy of the verb vi;
- descendants_j is the total number of words within the hierarchy of the noun nj.

STEP 4. Cij ranks each pair vi - nj, for all i and j.

Rationale

1. In WordNet, a gloss explains a concept and provides one or more examples with typical usage of that concept. In order to determine the most appropriate noun and verb hierarchies, we performed some experiments using SemCor and concluded that the noun sub-hierarchy should include all the nouns in the class of nj. The sub-hierarchy of the verb vi is taken as the hierarchy of the highest hypernym hi of the verb vi. It is necessary to consider a larger hierarchy than just the one provided by synonyms and direct hyponyms. As we replaced the role of a corpus with glosses, better results are achieved if more glosses are considered. Still, we do not want to enlarge the context too much.

2. As the nouns with a big hierarchy tend to have a larger value for |cdij|, the weighted sum of common concepts is normalized with respect to the dimension of the noun hierarchy. Since the size of a hierarchy grows exponentially with its depth, we used the logarithm of the total number of descendants in the hierarchy, i.e. log(descendants_j).

3. We also took into consideration and experimented with a few other metrics. But after running the program on several examples, the formula from Algorithm 2 provided the best results.
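A sketch of the metric in Eq. (1) using NLTK's WordNet interface follows. It uses a later WordNet than the 1.6 release used in the paper and a naive gloss tokenizer, so it will not reproduce the paper's numbers; it is meant only to make the computation concrete.

```python
import math
from collections import deque
from nltk.corpus import wordnet as wn   # requires nltk.download('wordnet')

def subtree_with_levels(root):
    """Breadth-first walk of a synset's hyponym sub-hierarchy,
    yielding (synset, level), with the root at level 1."""
    seen, queue = {root}, deque([(root, 1)])
    while queue:
        synset, level = queue.popleft()
        yield synset, level
        for hypo in synset.hyponyms():
            if hypo not in seen:
                seen.add(hypo)
                queue.append((hypo, level + 1))

def conceptual_density(verb_synset, noun_synset):
    """Sketch of Eq. (1): sum of the levels at which common concepts are
    found in the verb's gloss context, divided by the log of the size of
    the noun sense's sub-hierarchy. Glosses are split on whitespace (the
    paper notes glosses are not POS-tagged), so some noise is expected."""
    gloss_words = {}                        # word -> level where first seen
    for synset, level in subtree_with_levels(verb_synset):
        for word in synset.definition().lower().split():
            gloss_words.setdefault(word.strip('.,;:()"'), level)
    noun_tree = list(subtree_with_levels(noun_synset))
    noun_words = {l.lower() for s, _ in noun_tree for l in s.lemma_names()}
    common = [lvl for w, lvl in gloss_words.items() if w in noun_words]
    if not common or len(noun_tree) < 2:    # guard against log(1) = 0
        return 0.0
    return sum(common) / math.log(len(noun_tree))

# e.g. conceptual_density(wn.synset('revise.v.01'), wn.synsets('law', wn.NOUN)[1])
```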
The results are summarized in Table 2: (1) [cdij[ - the number of common concepts between the verb and noun hierarchies; (2) descendantsj the total number of nouns within the hierarchy of each sense nj; and (3) the conceptual density Cij for each pair ni - vj derived using the formula presented above. ladij I descendantsj Cij n2 n3 1"$2 I"$3 n2 1"$3 5 4 975 1265 0.30 0.28 0 0 975 1265 0 0 Table 2: Values used in computing the concep- tual density and the conceptual density Cij The largest conceptual density C12 = 0.30 corresponds to V 1 -- n2:revise#l~2 - law#2/5 (the notation #i/n means sense i out of n pos- 155 sible tion Cor, senses given by WordNet). This combina- of verb-noun senses also appears in Sem- file br-a01. 5 Evaluation and comparison with other methods 5.1 Tests against SemCor The method was tested on 384 pairs selected from the first two tagged files of SemCor 1.6 (file br-a01, br-a02). From these, there are 200 verb-noun pairs, 127 adjective-noun pairs and 57 adverb-verb pairs. In Table 3, we present a summary of the results. top 1 top 2 top 3 top 4 noun 86.5% 96% 97% 98% verb 67% 79% 86% 87% adjective 79.8% 93% adverb 87% 97% Table 3: Final results obtained for 384 word pairs using both algorithms. Table 3 shows the results obtained using both algorithms; for nouns and verbs, these results are improved with respect to those shown in Table 1, where only the first algorithm was ap- plied. The results for adjectives and adverbs are the same in both these tables; this is because the second algorithm is not used with adjectives and adverbs, as words having this part of speech are not structured in hierarchies in WordNet, but in clusters; the small size of the clusters limits the applicability of the second algorithm. Discussion of results When evaluating these results, one should take into consideration that: 1. Using the glosses as a base for calculat- ing the conceptual density has the advantage of eliminating the use of a large corpus. But a dis- advantage that comes from the use of glosses is that they are not part-of-speech tagged, like some corpora are (i.e. Treebank). For this rea- son, when determining the nouns from the verb glosses, an error rate is introduced, as some verbs (like make, have, go, do) are lexically am- biguous having a noun representation in Word- Net as well. We believe that future work on part-of-speech tagging the glosses of WordNet will improve our results. 2. The determination of senses in SemCor was done of course within a larger context, the context of sentence and discourse. By working only with a pair of words we do not take advan- tage of such a broader context. For example, when disambiguating the pair protect court our method picked the court meaning "a room in which a law court sits" which seems reasonable given only two words, whereas SemCor gives the court meaning "an assembly to conduct judicial business" which results from the sentence con- text (this was our second choice). In the next section we extend our method to more than two words disambiguated at the same time. 5.2 Comparison with other methods As indicated in (Resnik and Yarowsky, 1997), it is difficult to compare the WSD methods, as long as distinctions reside in the approach considered (MRD based methods, supervised or unsupervised statistical methods), and in the words that are disambiguated. 
A method that disambiguates unrestricted nouns, verbs, adverbs and adjectives in texts is presented in (Stetina et al., 1998); it attempts to exploit sen- tential and discourse contexts and is based on the idea of semantic distance between words, and lexical relations. It uses WordNet and it was tested on SemCor. Table 4 presents the accuracy obtained by other WSD methods. The baseline of this com- parison is considered to be the simplest method for WSD, in which each word is tagged with its most common sense, i.e. the first sense as defined in WordNet. Base Stetina Yarowsky Our line method noun 80.3% 85.7% 93.9% 86.5% verb 62.5% 63.9% 67% adjective 81.8% 83.6% 79.8 adverb 84.3% 86.5% 87% AVERAOE I 77% I 80% I 180.1%1 Table 4: A comparison with other WSD meth- ods. As it can be seen from this table, (Stetina et al., 1998) reported an average accuracy of 85.7% for nouns, 63.9% for verbs, 83.6% for adjectives and 86.5% for adverbs, slightly less than our re- sults. Moreover, for applications such as infor- mation retrieval we can use more than one sense combination; if we take the top 2 ranked com- binations our average accuracy is 91.5% (from Table 3). Other methods that were reported in the lit- 156 erature disambiguate either one part of speech word (i.e. nouns), or in the case of purely statis- tical methods focus on very limited number of words. Some of the best results were reported in (Yarowsky, 1995) who uses a large training corpus. For the noun drug Yarowsky obtains 91.4% correct performance and when consider- ing the restriction "one sense per discourse" the accuracy increases to 93.9%, result represented in the third column in Table 4. 6 Extensions 6.1 Noun-noun and verb-verb pairs The method presented here can be applied in a similar way to determine the conceptual density within noun-noun pairs, or verb-verb pairs (in these cases, the NEAR operator should be used for the first step of this algorithm). 6.2 Larger window size We have extended the disambiguation method to more than two words co-occurrences. Con- sider for example: The bombs caused damage but no injuries. The senses specified in SemCor, are: la. bomb(#1~3) cause(#1//2) damage(#1~5) iujury ( #1/4 ) For each word X, we considered all possible combinations with the other words Y from the sentence, two at a time. The conceptual density C was computed for the combinations X -Y as a summation of the conceptual densities be- tween the sense i of the word X and all the senses of the words Y. The results are shown in the tables below where the conceptual den- sity calculated for the sense #i of word X is presented in the column denoted by C#i: X - Y C#1 0#2 C#3 bomb-cause 0.57 0 0 bomb-damage 5.09 0.13 0 bomb-injury 2.69 0.15 0 SCORE 8.35 0.28 0 By selecting the largest values for the con- ceptual density, the words are tagged with their senses as follows: lb. bomb(#1/3) cause(#1/2) damage(#1~5) iuju, (#e/4) X-Y cause-bomb cause-damage cause-injury SCORE c#I 5.16 12.83 12.63 30.62 C#2 1.34 2.64 1.75 5.73 X - Y C#1 damage-bomb 5.60 damage-cause 1.73 damage-injury 9.87 SCORE 17.20 c#2 2.14 2.63 2.57 7.34 C#3 C#4 C#5 1.95 0.88 2.16 0.17 0.16 3.80 3.24 1.56 7.59 5.36 2.60 13.55 Note that the senses for word injury differ from la. to lb.; the one determined by our method (#2/4) is described in WordNet as "an acci- dent that results in physical damage or hurt" (hypernym: accident), and the sense provided in SemCor (#1/4) is defined as "any physical damage'(hypernym: health problem). 
This is a typical example of a mismatch caused by the fine granularity of senses in Word- Net which translates into a human judgment that is not a clear cut. We think that the sense selection provided by our method is jus- tified, as both damage and injury are objects of the same verb cause; the relatedness of dam- age(#1/5) and injury(#2/~) is larger, as both are of the same class noun.event as opposed to injury(#1~4) which is of class noun.state. Some other randomly selected examples con- sidered were: 2a. The te,~orists(#l/1) bombed(#l/S) the embassies(#1~1). 2b. terrorist(#1~1) bomb(#1~3) embassy(#1~1) 3a. A car-bomb(#1~1) exploded(#2/lO) in ]rout of PRC(#I/1) embassy(#1/1). 3b. car-bomb(#1/1) explode(#2/lO) PRC(#I/1) embassy(#1~1) 4a. The bombs(#1~3) broke(#23~27) windows(#l/4) and destroyed(#2~4) the two vehicles(#1~2). 4b. bomb(#1/3) break(#3/27) window(#1/4) destroy(#2/4) vehicle(# l/2) where sentences 2a, 3a and 4a are extracted from SemCor, with the associated senses for each word, and sentences 2b, 3b and 4b show the verbs and the nouns tagged with their senses by our method. The only discrepancy is for the 157 X - Y C#I C#2 C#3 C#4 injury-bomb 2.35 5.35 0.41 2.28 injury-cause 0 4.48 0.05 0.01 injury-damage 5.05 10.40 0.81 9.69 SCORE 7.40 20.23 1.27 11.98 word broke and perhaps this is due to the large number of its senses. The other word with a large number of senses explode was tagged cor- rectly, which was encouraging. 7 Conclusion WordNet is a fine grain MRD and this makes it more difficult to pinpoint the correct sense com- bination since there are many to choose from and many are semantically close. For appli- cations such as machine translation, fine grain disambiguation works well but for information extraction and some other applications this is an overkill, and some senses may be lumped to- gether. The ranking of senses is useful for many applications. References E. Agirre and G. Rigau. 1995. A proposal for word sense disambiguation using conceptual distance. In Proceedings of the First Inter- national Conference on Recent Advances in Natural Language Processing, Velingrad. Altavista. 1996. Digital equipment corpora- tion. "http://www.altavista.com". R. Bruce and J. Wiebe. 1994. Word sense disambiguation using decomposable models. In Proceedings of the Thirty Second An- nual Meeting of the Association for Computa- tional Linguistics (ACL-9~), pages 139-146, LasCruces, NM, June. J. Cowie, L. Guthrie, and J. Guthrie. 1992. Lexical disambiguation using simulated an- nealing. In Proceedings of the Fifth Interna- tional Conference on Computational Linguis- tics COLING-92, pages 157-161. C. Fellbaum. 1998. WordNet, An Electronic Lexical Database. The MIT Press. W. Gale, K. Church, and D. Yarowsky. 1992. One sense per discourse. In Proceedings of the DARPA Speech and Natural Language Work- shop, Harriman, New York. X. Li, S. Szpakowicz, and M. Matwin. 1995. A wordnet-based algorithm for word seman- tic sense disambiguation. In Proceedings of the Forteen International Joint Conference on Artificial Intelligence IJCAI-95, Montreal, Canada. S. McRoy. 1992. Using multiple knowledge sources for word sense disambiguation. Com- putational Linguistics, 18(1):1-30. R. Mihalcea and D.I. Moldovan. 1999. An au- tomatic method for generating sense tagged corpora. In Proceedings of AAAI-99, Or- lando, FL, July. (to appear). G. Miller, M. Chodorow, S. Landes, C. Leacock, and R. Thomas. 1994. Using a semantic con- cordance for sense identification. 
In Proceedings of the ARPA Human Language Technology Workshop, pages 240-243.
H.T. Ng and H.B. Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In Proceedings of the Thirty-Fourth Annual Meeting of the Association for Computational Linguistics (ACL-96), Santa Cruz.
P. Resnik and D. Yarowsky. 1997. A perspective on word sense disambiguation methods and their evaluation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What and How?, Washington, DC, April.
P. Resnik. 1997. Selectional preference and sense disambiguation. In Proceedings of the ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, What and How?, Washington, DC, April.
G. Rigau, J. Atserias, and E. Agirre. 1997. Combining unsupervised lexical knowledge methods for word sense disambiguation. Computational Linguistics.
J. Stetina, S. Kurohashi, and M. Nagao. 1998. General word sense disambiguation method based on a full sentential context. In Usage of WordNet in Natural Language Processing, Proceedings of the COLING-ACL Workshop, Montreal, Canada, July.
D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the Thirty-Third Annual Meeting of the Association for Computational Linguistics.
A Knowledge-free Method for Capitalized Word Disambiguation Andrei Mikheev* Harlequin Ltd., Lismore House, 127 George Street, Edinburgh EH72 4JN, UK mikheev@harlequin, co. uk Abstract In this paper we present an approach to the dis- ambiguation of capitalized words when they are used in the positions where capitalization is ex- pected, such as the first word in a sentence or after a period, quotes, etc.. Such words can act as proper names or can be just capitalized vari- ants of common words. The main feature of our approach is that it uses a minimum of pre- built resources and tries to dynamically infer the disambiguation clues from the entire docu- ment. The approach was thoroughly tested and achieved about 98.5% accuracy on unseen texts from The New York Times 1996 corpus. 1 Introduction Disambiguation of capitalized words in mixed- case texts has hardly received much attention in the natural language processing and infor- mation retrieval communities, but in fact it plays an important role in many tasks. Cap- italized words usually denote proper names - names of organizations, locations, people, arti- facts, etc. - but there are also other positions in the text where capitalization is expected. Such ambiguous positions include the first word in a sentence, words in all-capitalized titles or ta- ble entries, a capitalized word after a colon or open quote, the first capitalized word in a list- entry, etc. Capitalized words in these and some other positions present a case of ambiguity - they can stand for proper names as in "White later said ...", or they can be just capitalized common words as in "White elephants are ...". Thus the disambiguation of capitalized words in the ambiguous positions leads to the identifica- tion of proper names I and in this paper we will * Also at HCRC, University of Edinburgh 1This is not entirely true - adjectives derived from lo- cations such as American, French, etc., are always writ- use these two terms interchangeably. Note that this task, does not involve the classification of proper names into semantic categories (person, organization, location, etc.) which is the objec- tive of the Named Entity Recognition task. Many researchers observed that commonly used upper/lower case normalization does not necessarily help document retrieval. Church in (Church, 1995) among other simple text nor- malization techniques studied the effect of case normalization for different words and showed that "...sometimes case variants refer to the same thing (hurricane and Hurricane), some- times they refer to different things (continental and Continental) and sometimes they don't re- fer to much of anything (e.g. anytime and Any- time)." Obviously these differences are due to the fact that some capitalized words stand for proper names (such as Continental- the name of an airline) and some don't. Proper names are the main concern of the Named Entity Recognition subtask (Chinchor, 1998) of Information Extraction. There the dis- ambiguation of the first word of a sentence (and in other ambiguous positions) is one of the cen- tral problems. For instance, the word "Black" in the sentence-initial position can stand for a person's surname but can also refer to the colour. Even in multi-word capitalized phrases the first word can belong to the rest of the phrase or can be just an external modifier. In the sentence "Daily, Mason and Partners lost their court case" it is clear that "Daily, Mason and Partners" is the name of a company. 
In the sentence "Unfortunately, Mason and Partners lost their court case" the name of the company does not involve the word "unfortunately", but ten capitalized but in fact can stand for an adjective (American president) as well as a proper noun (he was an American). 159 the word "Daily" is just as common a word as "unfortunately". Identification of proper names is also impor- tant in Machine Translation because normally proper names should be transliterated (i.e. pho- netically translated) rather than properly (se- mantically) translated. In confidential texts, such as medical records, proper names must be identified and removed before making such texts available to unauthorized people. And in gen- eral, most of the tasks which involve different kinds of text analysis will benefit from the ro- bust disambiguation of capitalized words into proper names and capitalized common words. Despite the obvious importance of this prob- lem, it was always considered part of larger tasks and, to the authors' knowledge, was not studied closely with full attention. In the part- of-speech tagging field, the disambiguation of capitalized words is treated similarly to the disambiguation of common words. However, as Church (1988) rightly pointed out "Proper nouns and capitalized words are particularly problematic: some capitalized words are proper nouns and some are not. Estimates from the Brown Corpus can be misleading. For exam- ple, the capitalized word "Acts" is found twice in Brown Corpus, both times as a proper noun (in a title). It would be misleading to infer from this evidence that the word "Acts" is al- ways a proper noun." Church then proposed to include only high frequency capitalized words in the lexicon and also label words as proper nouns if they are "adjacent to" other capital- ized words. For the rest of capitalized common words he suggested that a small probability of proper noun interpretation should be assumed and then one should hope that the surrounding context will help to make the right assignment. This approach is successful for some cases but, as we pointed out above, a sentence-initial cap- italized word which is adjacent to other capital- ized words is not necessarily a part of a proper name, and also many common nouns and plural nouns can be used as proper names (e.g. Rid- ers) and their contextual expectations are not too different from their usual parts of speech. In the Information Extraction field the dis- ambiguation of capitalized words in the am- biguous positions was always tightly linked to the classification of the proper names into se- mantic classes such as person name, location, company name, etc. and to the resolution of coreference between the identified and classi- fied proper names. This gave rise to the meth- ods which aim at these tasks simultaneously. (Mani&MacMillan, 1995) describe a method of using contextual clues such as appositives ("PERSON, the daughter of a prominent local physician") and felicity conditions for identify- ing names. The contextual clues themselves are then tapped for data concerning the referents of the names. The advantage of this approach is that these contextual clues not only indicate whether a capitalized word is a proper name, but they also determine its semantic class. The disadvantage of this method is in the cost and difficulty of building a wide-coverage set of con- textual clues and the dependence of these con- textual clues on the domain and text genre. 
Contextual clues are very sensitive to the specific lexical and syntactic constructions, and the clues developed for news-wire texts are not useful for legal or medical texts.

In this paper we present a novel approach to the problem of capitalized word disambiguation. The main feature of our approach is that it uses a minimum of pre-built resources and tries to dynamically infer the disambiguation clues from the entire document under processing. This makes our approach domain and genre independent and thus inexpensive to apply when dealing with unrestricted texts. This approach was used in a named entity recognition system (Mikheev et al., 1998) where it proved to be one of the key factors in the system achieving a nearly human performance in the 7th Message Understanding Conference (MUC'7) evaluation (Chinchor, 1998).

2 Bottom-Line Performance

In general, the disambiguation of capitalized words in mixed-case texts doesn't seem to be too difficult: if a word is capitalized in an unambiguous position, e.g., not after a period or other punctuation which might require the following word to be capitalized (such as quotes or brackets), it is a proper name or part of a multi-word proper name. However, when a capitalized word is used in a position where it is expected to be capitalized, for instance, after a period or in a title, our task is to decide whether it acts as a proper name or as the expected capitalized common word.

                 All Words        Known Words      Unknown Words
                 tokens  types    tokens  types    tokens  types
  Total Words    2,677   665      2,012   384      665     281
  Proper Names   826     339      171     68       655     271
  Common Words   1,851   326      1,841   316      10      10

Table 1: Distribution of capitalized word-tokens/word-types in the ambiguous positions.

The first obvious strategy for deciding whether a capitalized word in an ambiguous position is a proper name or not is to apply lexicon lookup (possibly enhanced with a morphological word guesser, e.g., (Mikheev, 1997)) and mark as proper names the words which are not listed in the lexicon of common words. Let us investigate this strategy in more detail. In our experiments we used a corpus of 100 documents (64,337 words) from The New York Times 1996. This corpus was balanced to represent different domains and was used for the formal test run of the 7th Message Understanding Conference (MUC'7) (Chinchor, 1998) in the Named Entity Recognition task.

First we ran a simple zoner which identified ambiguous positions for capitalized words - capitalized words after a period, quotes, colon, semicolon, in all-capital sentences and titles and in the beginnings of itemized list entries. The 64,337-word corpus contained 2,677 capitalized words in ambiguous positions, out of which 2,012 were listed in the lexicon of English common words. Ten common words were not listed in the lexicon and not guessed by our morphological guesser: "Forecasters", "Benchmark", "Eeverybody", "Liftoff", "Downloading", "Pretax", "Hailing", "Birdbrain", "Opting" and "Standalone". In all our experiments we did not try to disambiguate between singular and plural proper names and we also did not count as an error the adjectival reading of words which are always written capitalized (e.g. American, Russian, Okinawian, etc.). The distribution of proper names among the ambiguous capitalized words is shown in Table 1.

Table 1 allows one to estimate the performance of the lexicon lookup strategy which we take as the bottom-line.
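As a rough illustration of this bottom-line strategy (a sketch, not the paper's implementation), the following marks a capitalized word in an ambiguous position as a proper name exactly when it is missing from the common-word lexicon; the tokenizer, the zoner output and the lexicon are assumed inputs, and all names here are hypothetical:

```python
def bottom_line_tagger(tokens, ambiguous, common_lexicon):
    """Baseline: in an ambiguous position, call a capitalized word a
    proper name iff its lowercase form is not in the common-word lexicon.

    tokens: list of word strings
    ambiguous: set of token indices flagged by a sentence/title zoner
    common_lexicon: set of known common words (lowercase)
    """
    labels = {}
    for i, tok in enumerate(tokens):
        if not tok[:1].isupper():
            continue
        if i not in ambiguous:
            # Capitalized in an unambiguous position: treat as (part of)
            # a proper name outright.
            labels[i] = "PROPER"
        else:
            # Ambiguous position: fall back to lexicon lookup.
            labels[i] = "COMMON" if tok.lower() in common_lexicon else "PROPER"
    return labels

# Tiny usage example with hypothetical data: "Riders" opens the sentence
# (ambiguous), "White" is mid-sentence (unambiguous).
toks = ["Riders", "said", "that", "White", "agreed", "."]
print(bottom_line_tagger(toks, ambiguous={0},
                         common_lexicon={"riders", "said", "that", "white", "agreed"}))
# {0: 'COMMON', 3: 'PROPER'} - exactly the kind of error the paper targets
```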
First, using this strategy we would wrongly assign the ten common words which were not listed in the lexicon. More damaging is the blind assignment of the common word category to the words listed in the lexicon: out of 2,012 known word-tokens 171 actually were used as proper names. This in total would give us 181 errors out of 2,677 tries - about a 6.76% misclassification error on capitalized word-tokens in the ambiguous positions.

The lexicon lookup strategy can be enhanced by accounting for the immediate context of the capitalized words in question. However, capitalized words in the ambiguous positions are not easily disambiguated by their surrounding part-of-speech context as attempted by part-of-speech taggers. For instance, many surnames are at the same time nouns or plural nouns in English and thus in both variants can be followed by a past tense verb. Capitalized words in the phrases Sails rose ... or Feeling himself ... can easily be interpreted either way, and only knowledge of semantics disallows the plural noun interpretation of Stars can read. Another challenge is to decide whether the first capitalized word belongs to the group of the following proper nouns or is an external modifier and therefore not a proper noun. For instance, All American Bank is a single phrase but in All State Police the word "All" is an external modifier and can be safely decapitalized. One might argue that a part-of-speech tagger can capture that in the first case the word "All" modified a singular proper noun ("Bank") and hence is not grammatical as an external modifier, and in the second case it is a grammatical external modifier since it modifies a plural proper noun ("Police"), but a simple counter-example - All American Games - defeats this line of reasoning.

The third challenge is of a more local nature - it reflects a capitalization convention adopted by the author. For instance, words which reflect the occupation of a person can be used in an honorific mode, e.g. "Chairman Mao" vs. "ATT chairman Smith" or "Astronaut Mario Runko" vs. "astronaut Mario Runko". When such a phrase opens a sentence, looking at the sentence only, even a human classifier has troubles in making a decision.

To evaluate the performance of part-of-speech taggers on the proper-noun identification task we ran an HMM trigram tagger (Mikheev, 1997) and the Brill tagger (Brill, 1995) on our corpus. Both taggers used the Penn Treebank tag-set and were trained on the Wall Street Journal corpus (Marcus et al., 1993). Since for our task the mismatch between plural proper noun (NNPS) and singular proper noun (NNP) was not important we did not count this as an error. Depending on the smoothing technique, the HMM tagger performed in the range of 5.3%-4.5% misclassification error on capitalized common words in the ambiguous positions, and the Brill tagger showed a similar pattern when we varied the lexicon acquisition heuristics. The taggers handled well the cases when a potential adjective was followed by a verb or adverb ("Golden added ...") but they got confused with a potential noun followed by a verb or adverb ("Butler was ..." vs. "Safety was ..."), probably because the taggers could not distinguish between concrete and mass nouns. Not surprisingly the taggers did not do well on potential plural nouns and gerunds - none of them were assigned as a proper noun.
The taggers also could not handle well the case when a potential noun or adjective was followed by another capitalized word ("General Accounting Office"). In general, when the taggers did not have strong lexical preferences, apart from several obvious cases they tended to assign a common word category to known capitalized words in the ambiguous positions, and the performance of the part-of-speech tagging approach was only about 2% superior to the simple bottom-line strategy.

3 Our Knowledge-Free Method

As we discussed above, the bad news (well, not really news) is that virtually any common word can potentially act as a proper name or part of a multi-word proper name. Fortunately, there is good news too: ambiguous things are usually unambiguously introduced at least once in the text unless they are part of common knowledge presupposed to be known by the readers. This is an observation which can be applied to a broader class of tasks. For example, people are often referred to by their surnames (e.g. "Black") but usually introduced at least once in the text either with their first name ("John Black") or with their title/profession affiliation ("Mr. Black", "President Bush"), and it is only when their names are common knowledge that they don't need an introduction (e.g. "Castro", "Gorbachev").

In the case of proper name identification we are not concerned with the semantic class of a name (e.g. whether it is a person name or location) but we simply want to distinguish whether this word in this particular occurrence acts as a proper name or part of a multi-word proper name. If we restrict our scope only to a single sentence, we might find that there is just not enough information to make a confident decision. For instance, Riders in the sentence "Riders said later ..." is equally likely to be a proper noun, a plural proper noun or a plural common noun, but if in the same text we find "John Riders" this sharply increases the proper noun interpretation, and conversely if we find "many riders" this suggests the plural noun interpretation. Thus our suggestion is to look at the unambiguous usage of the words in question in the entire document.

3.1 The Sequence Strategy

Our first strategy for the disambiguation of capitalized words in ambiguous positions is to explore sequences of proper nouns in unambiguous positions. We call it the Sequence Strategy. The rationale behind this is that if we detect a phrase of two or more capitalized words and this phrase starts from an unambiguous position, we can be reasonably confident that even when the same phrase starts from an unreliable position all its words still have to be grouped together and hence are proper nouns. Moreover, this applies not just to the exact replication of such a phrase but to any partial ordering of its words of size two or more preserving their sequence. For instance, if we detect a phrase Rocket Systems Development Co. in the middle of a sentence, we can mark words in the sub-phrases Rocket Systems, Rocket Systems Co., Rocket Co., Systems Development, etc. as proper nouns even if they occur at the beginning of a sentence or in other ambiguous positions (a sketch of this sub-phrase generation follows below).
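As a minimal sketch of that expansion step (not the paper's actual implementation; the function name and data layout are assumptions for illustration):

```python
from itertools import combinations

def proper_name_subsequences(phrase):
    """Generate all order-preserving sub-phrases of size >= 2 from a
    capitalized phrase seen in an unambiguous position, e.g.
    "Rocket Systems Development Co." -> "Rocket Systems",
    "Rocket Systems Co.", "Rocket Co.", "Systems Development", ...

    Short lower-cased words (as in "The Phantom of the Opera") may sit
    inside a span; following the restriction stated in the text, every
    generated sub-phrase must start and end with a capitalized word.
    """
    words = phrase.split()
    subs = set()
    for size in range(2, len(words) + 1):
        for idxs in combinations(range(len(words)), size):
            sub = [words[i] for i in idxs]
            if sub[0][:1].isupper() and sub[-1][:1].isupper():
                subs.add(" ".join(sub))
    return subs

print(sorted(proper_name_subsequences("Rocket Systems Development Co.")))
```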
A span of capitalized words can also include lower-cased words of length three or shorter. This allows us to capture phrases like A & M, The Phantom of the Opera, etc. We generate partial orders from such phrases in a similar way but insist that every generated sub-phrase should start and end with a capitalized word.

To make the Sequence Strategy robust to potential capitalization errors in the document we also use a set of negative evidence. This set is essentially a set of all lower-cased words of the document with their following words (bigrams). We don't attempt here to build longer sequences and their partial orders because we cannot in general restrict the scope of dependencies in such sequences. The negative evidence is then used together with the positive evidence of the Sequence Strategy and blocks the proper name assignment when controversy is found. For instance, if in a document the system detects a capitalized phrase "The President" in an unambiguous position, then it will be assigned as a proper name even if found in ambiguous positions in the same document. To be more precise, the method will assign the word "The" as a proper noun since it should be grouped together with the word "President" into a single proper name. However, if in the same document the system detects alternative evidence, e.g. "the President" or "the president", it then blocks such assignment as unsafe.

The Sequence Strategy is extremely useful when dealing with names of organizations since many of them are multi-word phrases composed from common words. And indeed, as is shown in Table 2, the precision of this strategy was 100% and the recall about 7.5%: out of 826 proper names in ambiguous positions, 62 were marked and all of them were marked correctly. If we concentrate only on difficult cases when proper names are at the same time common words of English, the recall of the Sequence Strategy rises to 18.7%: out of 171 common words which acted as proper names, 32 were correctly marked. Among such words were "News" from "News Corp.", "Rocket" from "Rocket Systems Co.", "Coast" from "Coast Guard" and "To" from "To B. Super".

[Table 2: Disambiguated capitalized word-tokens/types in the ambiguous positions. The table breaks down proper names, common words and totals (token/type counts, for all words and for lexicon-known words) by assignment step: All Ambiguous; Sequence Strategy (+/-); Single Word Assignment (+/-); Stop-List Assignment (+/-); Lexicon Lookup Assignment (+/-); Left Unassigned. The original cell layout was lost in extraction; the key figures are quoted in the surrounding text.]

3.2 Single Word Assignment

The Sequence Strategy is accurate, but it covers only a part of potential proper names in ambiguous positions and at the same time it does not cover cases when capitalized words do not act as proper names. For this purpose we developed another strategy which also uses information from the entire document.
We call this strategy Single Word Assignment, and it can be summarized as follows: if we detect a word which in the current document is seen capitalized in an unambiguous position and at the same time is not used lower-cased, this word in this particular document, even when used capitalized in ambiguous positions, is very likely to stand for a proper name as well. And conversely, if we detect a word which in the current document is used only lower-cased in unambiguous positions, it is extremely unlikely that this word will act as a proper name in an ambiguous position, and thus such a word can be marked as a common word. The only consideration here should be made for high frequency sentence-initial words which do not normally act as proper names: even if such a word is observed in a document only as a proper name (usually as part of a multi-word proper name), it is still not safe to mark it as a proper name in ambiguous positions. Note, however, that these words can still be marked as proper names (or rather as parts of proper multi-word names) by the Sequence Strategy. To build such a list of stop-words we ran the Sequence Strategy and Single Word Assignment on the Brown Corpus (Francis & Kucera, 1982), and reliably collected the 100 most frequent sentence-initial words.

Table 2 shows the success of the Single Word Assignment strategy: it marked 511 proper names of which 510 were marked correctly, and it marked 1,273 common words of which 1,270 were marked correctly. The only word which was incorrectly marked as a proper name was the word "Insurance" in "Insurance company ..." because in the same document there was a proper phrase "China-Pacific Insurance Co." and no lower-cased occurrences of the word "insurance" were found. The three words incorrectly marked as common words were: "Defence" in "Defence officials ..", "Trade" in "Trade Representation office .." and "Satellite" in "Satellite Business News". Five out of the ten words which were not listed in the lexicon ("Pretax", "Benchmark", "Liftoff", "Downloading" and "Standalone") were correctly marked as common words because they were found to exist lower-cased in the text. In general the error rate of the assignment by this method was 4 out of 1,784, which is about 0.2%. It is interesting to mention that when we ran Single Word Assignment without the stop-list, it incorrectly marked as proper names only three extra common words ("For", "People" and "MORE").

3.3 Taking Care of the Rest

After Single Word Assignment we applied a simple strategy of marking as common words all unassigned words which were found in the stop-list of the most frequent sentence-initial words. This gave us no errors and covered an extra 298 common words. In fact, we could use this strategy before Single Word Assignment, since the words from the stop-list are not marked at that point anyway. Note, however, that the Sequence Strategy still has to be applied prior to the stop-list assignment.

Among the words which failed to be assigned by either of our strategies were 243 proper names, but only 30 of them were in fact ambiguous, since they were listed in the lexicon of common words. So at this point we marked as proper names all unassigned words which were not listed in the lexicon of common words. This gave us 223 correct assignments and 5 incorrect ones - the remaining five out of these ten common words which were not listed in the lexicon.
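A compact sketch of Single Word Assignment under the stated rules might look as follows; the counting helpers and the stop-list are again illustrative assumptions rather than the paper's code:

```python
def single_word_assignment(word, doc_stats, stop_list):
    """Classify one capitalized word found in an ambiguous position.

    doc_stats maps a lowercase word to a pair of counts gathered from
    the *unambiguous* positions of the current document:
      (times seen capitalized, times seen lower-cased)
    Returns "PROPER", "COMMON", or None (left for later strategies).
    """
    if word.lower() in stop_list:
        return None            # too risky: frequent sentence-initial word
    cap, low = doc_stats.get(word.lower(), (0, 0))
    if cap > 0 and low == 0:
        return "PROPER"        # only ever seen capitalized in this document
    if low > 0 and cap == 0:
        return "COMMON"        # only ever seen lower-cased in this document
    return None                # conflicting or no evidence

# Usage with hypothetical document statistics:
stats = {"insurance": (2, 0), "pretax": (0, 1)}
print(single_word_assignment("Insurance", stats, stop_list={"the", "for"}))  # PROPER
print(single_word_assignment("Pretax", stats, stop_list={"the", "for"}))    # COMMON
```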
So, in total, by the combination of the described methods we achieved a precision of

  correctly_assigned / all_assigned = 2363 / (2363 + 9) = 99.62%

and a recall of

  all_assigned / total_ambiguous = (2363 + 9) / 2677 = 88.7%.

Now we have to decide what to do with the remaining 305 words which failed to be assigned. Among such words there are 275 common words and 30 proper names, so if we simply mark all these words as common words we will increase our recall to 100% with some decrease in precision - from 99.62% down to 98.54%. Among the unclassified proper names there were a few which could be dealt with by a part-of-speech tagger: "Gray, chief ...", "Gray said ...", "Bill Lattanzi ...", "Bill Wade ...", "Bill Gates ...", "Burns, an ..." and "... Golden added". Another four unclassified proper names were capitalized words which followed the "U.S." abbreviation, e.g. "U.S. Supreme Court". This is a difficult case even for sentence boundary disambiguation systems ((Mikheev, 1998), (Palmer & Hearst, 1997) and (Reynar & Ratnaparkhi, 1997)) which are built for exactly that purpose, i.e., to decide whether a capitalized word which follows an abbreviation is attached to it or whether there is a sentence boundary between them. The "U.S." abbreviation is one of the most difficult ones because it can be as often seen at the end of a sentence as in the beginning of multi-word proper names. Another nine unclassified proper names were stable phrases like "Foreign Minister", "Prime Minister", "Congressional Republicans", "Holy Grail", etc. mentioned just once in a document. And, finally, about seven or eight unclassified proper names were difficult to account for at all, e.g. "Sate-owned" or "Freeman Zhang". Some of the above mentioned proper names could be resolved if we accumulate multi-word proper names across several documents, i.e., we can use information from one document when we deal with another. This can be seen as an extension to our Sequence Strategy, with the only difference that the proper noun sequences have to be taken not only from the current document but from the cache memory, and all multi-word proper names identified in a document are to be appended to that cache. When we tried this strategy on our test corpus we were able to correctly assign 14 out of the 30 remaining proper names, which increased the system's precision on the corpus to 99.13% with 100% recall.

4 Discussion

In this paper we presented an approach to the disambiguation of capitalized common words when they are used in positions where capitalization is expected. Such words can act as proper names or can be just capitalized variants of common words. The main feature of our approach is that it uses a minimum of pre-built resources - we use only a list of common words of English and a list of the most frequent words which appear in sentence-starting positions. Both of these lists were acquired without any human intervention. To compensate for the lack of pre-acquired knowledge, the system tries to infer disambiguation clues from the entire document itself. This makes our approach domain independent and closely targeted to each document. Initially our method was developed using the training data of the MUC-7 evaluation and tested on the withheld test-set as described in this paper. We then applied it to the Brown Corpus and achieved similar results with degradation of only 0.7% in precision, mostly due to the text zoning errors and unknown words.
We deliberately shaped our approach so it does not rely on pre-compiled statistics but rather acts by analogy. This is because the most interesting events are inherently infrequent and, hence, are difficult to collect reliable statistics for, and at the same time pre-compiled statistics would be smoothed across multiple documents rather than targeted to a specific document.

The main strategy of our approach is to scan the entire document for unambiguous usages of words which have to be disambiguated. The fact that the pre-built resources are used only at the latest stages of processing (Stop-List Assignment and Lexicon Lookup Assignment) ensures that the system can handle unknown words and disambiguate even very implausible proper names. For instance, it correctly assigned five out of ten unknown common words. Among the difficult cases resolved by the system were a multi-word proper name "To B. Super" where both "To" and "Super" were correctly identified as proper nouns, and a multi-word proper name "The Update" where "The" was correctly identified as part of the magazine name. Both "To" and "The" were listed in the stop-list and therefore were very implausible to classify as proper nouns, but nevertheless the system handled them correctly. In its generic configuration the system achieved precision of 99.62% with recall of 88.7%, and precision of 98.54% with 100% recall. When we enhanced the system with a multi-word proper name cache memory the performance improved to 99.13% precision with 100% recall. This is a statistically significant improvement against the bottom-line performance which fared about 94% precision with 100% recall.

One of the key factors to the success of the proposed method is an accurate zoning of the documents. Since our method relies on the capitalization in unambiguous positions, such positions should be robustly identified. In the general case this is not too difficult, but one should take care of titles, quoted speech and list entries - otherwise, if treated as ordinary text, they can provide false candidates for capitalization. Our method in general is not too sensitive to capitalization errors: the Sequence Strategy is complemented with the negative evidence. This, together with the fact that it is rare for several words to appear by mistake more than once, makes this strategy robust. The Single Word Assignment strategy uses the stop list which includes the most frequent common words. This screens out many potential errors. One notable difficulty for Single Word Assignment is words which denote profession/title affiliations. These words modifying a person name might require capitalization - "Sheriff John Smith" - but in the same document they can appear lower-cased - "the sheriff". When the capitalized variant occurs only as sentence initial, our method predicts that it should be decapitalized. This, however, is an extremely difficult case even for human indexers - some writers tend to use certain professions such as Sheriff, Governor, Astronaut, etc., as honorific affiliations and others tend to do otherwise. This is a generally difficult case for Single Word Assignment - when a word is used as a proper name and as a common word in the same document, and especially when one of these usages occurs only in an ambiguous position. For instance, in a document about steel the only occurrence of "Steel Company" happened to start a sentence.
This led to an erroneous assignment of the word "Steel" as a common noun. Another example: in a document about "the Acting Judge", the word "acting" in a sentence "Acting on behalf .." was wrongly classified as a proper name.

The described approach is very easy to implement and it does not require training or installation of other software. The system can be used as it is and, by implementing the cache memory of multi-word proper names, it can be targeted to a specific domain. The system can also be used as a pre-processor to a part-of-speech tagger or a sentence boundary disambiguation program which can try to apply more sophisticated methods to unresolved capitalized words. In fact, as a by-product of its performance, our system disambiguated about 17% (9 out of 60) of ambiguous sentence boundaries when an abbreviation was followed by a capitalized word. Apart from collecting an extensive cache of multi-word proper names, another useful strategy which we are going to test in the future is to collect a list of common words which, at the beginning of a sentence, act most frequently as proper names and to use such a list in a similar fashion to the list of stop-words. Such a list can be collected completely automatically, but this requires a corpus or corpora much larger than the Brown Corpus because the relevant sentences are rather infrequent. We are also planning to investigate the sensitivity of our method to the document size in more detail.

References

E. Brill. 1995. Transformation-based error-driven learning and natural language parsing: a case study in part-of-speech tagging. In Computational Linguistics 21(4), pp. 543-565.

N. Chinchor. 1998. Overview of MUC-7. In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference held in Fairfax, VA, April 29-May 1, 1998. www.muc.saic.com/muc_7_proceedings/overview.html

K. Church. 1995. One Term Or Two? In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'95), Seattle.

K. Church. 1988. A Stochastic parts program and noun-phrase parser for unrestricted text. In Proceedings of the Second ACL Conference on Applied Natural Language Processing (ANLP'88), Austin, Texas.

W. Francis and H. Kucera. 1982. Frequency Analysis of English Usage. Boston MA: Houghton Mifflin.

D. D. Palmer and M. A. Hearst. 1997. Adaptive Multilingual Sentence Boundary Disambiguation. In Computational Linguistics, 23(2), pp. 241-269.

I. Mani and T. R. MacMillan. 1995. Identifying Unknown Proper Names in Newswire Text. In B. Boguraev and J. Pustejovsky, eds., Corpus Processing for Lexical Acquisition, MIT Press.

M. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. In Computational Linguistics, vol. 19(2), ACL.

A. Mikheev. 1998. Feature Lattices for Maximum Entropy Modelling. In Proceedings of the 36th Conference of the Association for Computational Linguistics (ACL/COLING'98), pp. 848-854. Montreal, Quebec.

A. Mikheev. 1997. Automatic Rule Induction for Unknown Word Guessing. In Computational Linguistics 23(3), pp. 405-423.

A. Mikheev. 1997. LT POS - the LTG part of speech tagger. Language Technology Group, University of Edinburgh. www.ltg.ed.ac.uk/software/pos

A. Mikheev, C. Grover and M. Moens. 1998. Description of the LTG system used for MUC-7. In Seventh Message Understanding Conference (MUC-7): Proceedings of a Conference held in Fairfax, VA, April 29-May 1, 1998.
www.muc.saic.com/muc_7_proceedings/ltg-muc7.ps

J. C. Reynar and A. Ratnaparkhi. 1997. A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proceedings of the Fifth ACL Conference on Applied Natural Language Processing (ANLP'97), Washington D.C., ACL.
Dynamic Nonlocal Language Modeling via Hierarchical Topic-Based Adaptation

Radu Florian and David Yarowsky
Computer Science Department and Center for Language and Speech Processing, Johns Hopkins University
Baltimore, Maryland 21218
{rflorian,yarowsky}@cs.jhu.edu

Abstract

This paper presents a novel method of generating and applying hierarchical, dynamic topic-based language models. It proposes and evaluates new cluster generation, hierarchical smoothing and adaptive topic-probability estimation techniques. These combined models help capture long-distance lexical dependencies. Experiments on the Broadcast News corpus show significant improvement in perplexity (10.5% overall and 33.5% on target vocabulary).

1 Introduction

Statistical language models are core components of speech recognizers, optical character recognizers and even some machine translation systems (Brown et al., 1990). The most common language modeling paradigm used today is based on n-grams, local word sequences. These models make a Markovian assumption on word dependencies; usually that word predictions depend on at most m previous words. Therefore they offer the following approximation for the computation of a word sequence probability:

  P(w_1^N) = \prod_{i=1}^{N} P(w_i | w_{i-m+1}^{i-1})

where w_i^j denotes the sequence w_i ... w_j; a common size for m is 3 (trigram language models).

Even if n-grams were proved to be very powerful and robust in various tasks involving language models, they have a certain handicap: because of the Markov assumption, the dependency is limited to very short local context. Cache language models (Kuhn and de Mori (1992), Rosenfeld (1994)) try to overcome this limitation by boosting the probability of the words already seen in the history; trigger models (Lau et al. (1993)), even more general, try to capture the interrelationships between words. Models based on syntactic structure (Chelba and Jelinek (1998), Wright et al. (1993)) effectively estimate intra-sentence syntactic word dependencies.

The approach we present here is based on the observation that certain words tend to have different probability distributions in different topics. We propose to compute the conditional language model probability as a dynamic mixture model of K topic-specific language models:

  P(w_i | w_1^{i-1}) = \sum_{t=1}^{K} P(t | w_1^{i-1}) \cdot P(w_i | t, w_{i-m+1}^{i-1})
                    = \sum_{t=1}^{K} P(t | w_1^{i-1}) \cdot P_t(w_i | w_{i-m+1}^{i-1})    (1)

[Figure 1: Conditional probability of the word peace given manually assigned Broadcast News topics. Empirical observation: lexical probabilities are sensitive to topic and subtopic; the plot shows P(peace | subtopic) across the major topics and subtopics from the Broadcast News corpus.]

The motivation for developing topic-sensitive language models is twofold. First, empirically speaking, many n-gram probabilities vary substantially when conditioned on topic (such as in the case of content words following several function words). A more important benefit, however, is that even when a given bigram or trigram probability is not topic sensitive, as in the case of sparse n-gram statistics, the topic-sensitive unigram or bigram probabilities may constitute a more informative backoff estimate than the single global unigram or bigram estimates. Discussion of these important smoothing issues is given in Section 4.

Finally, we observe that lexical probability distributions vary not only with topic but with subtopic too, in a hierarchical manner.
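To make equation (1) concrete, here is a minimal sketch of the mixture computation, assuming the K topic-conditional n-gram models and the topic detector already exist (all names are illustrative, not the authors' code):

```python
def mixture_word_prob(word, history, topic_models, topic_posterior, m=3):
    """Equation (1): P(w | h) = sum_t P(t | h) * P_t(w | truncated h).

    topic_models: list of K callables P_t(word, context_tuple)
    topic_posterior: callable returning [P(t | full history)] of length K
    m: n-gram order; each topic model conditions on the last m-1 words
    """
    context = tuple(history[-(m - 1):])   # w_{i-m+1}^{i-1}
    weights = topic_posterior(history)    # dynamic, uses the full discourse
    return sum(w * p_t(word, context) for w, p_t in zip(weights, topic_models))
```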
For example, consider the variation of the probability of the word peace given major news topic distinctions (e.g. BUSINESS and INTERNATIONAL news) as illustrated in Figure 1. There is substantial subtopic probability variation for peace within INTERNATIONAL news (the word usage is 50-times more likely in INTERNATIONAL:MIDDLE-EAST than INTERNATIONAL:JAPAN). We propose methods of hierarchical smoothing of P(w_i | topic_t) in a topic-tree to capture this subtopic variation robustly.

1.1 Related Work

Recently, the speech community has begun to address the issue of topic in language modeling. Lowe (1995) utilized the hand-assigned topic labels for the Switchboard speech corpus to develop topic-specific language models for each of the 42 Switchboard topics, and used a single topic-dependent language model to rescore the lists of N-best hypotheses. Error-rate improvement over the baseline language model of 0.44% was reported.

Iyer et al. (1994) used bottom-up clustering techniques on discourse contexts, performing sentence-level model interpolation with weights updated dynamically through an EM-like procedure. Evaluation on the Wall Street Journal (WSJ0) corpus showed a 4% perplexity reduction and 7% word error rate reduction. In Iyer and Ostendorf (1996), the model was improved by model probability reestimation and interpolation with a cache model, resulting in better dynamic adaptation and an overall 22%/3% perplexity/error rate reduction due to both components.

Seymore and Rosenfeld (1997) reported significant improvements when using a topic detector to build specialized language models on the Broadcast News (BN) corpus. They used TF-IDF and Naive Bayes classifiers to detect the most similar topics to a given article and then built a specialized language model to rescore the N-best lists corresponding to the article (yielding an overall 15% perplexity reduction using document-specific parameter re-estimation, and no significant word error rate reduction). Seymore et al. (1998) split the vocabulary into 3 sets: general words, on-topic words and off-topic words, and then used a non-linear interpolation to compute the language model. This yielded an 8% perplexity reduction and 1% relative word error rate reduction.

In collaborative work, Mangu (1997) investigated the benefits of using an existing Broadcast News topic hierarchy extracted from topic labels as a basis for language model computation. Manual tree construction and hierarchical interpolation yielded a 16% perplexity reduction over a baseline unigram model. In a concurrent collaborative effort, Khudanpur and Wu (1999) implemented clustering and topic-detection techniques similar to those presented here and computed a maximum entropy topic-sensitive language model for the Switchboard corpus, yielding 8% perplexity reduction and 1.8% word error rate reduction relative to a baseline maximum entropy trigram model.

2 The Data

The data used in this research is the Broadcast News (BN94) corpus, consisting of radio and TV news transcripts from the year 1994. From the total of 30,226 documents, 20,226 were used for training and the other 10,000 were used as test and held-out data. The vocabulary size is approximately 120k words.
3 Optimizing Document Clustering for Language Modeling

For the purpose of language modeling, the topic labels assigned to a document or segment of a document can be obtained either manually (by topic-tagging the documents) or automatically, by using an unsupervised algorithm to group similar documents in topic-like clusters. We have utilized the latter approach, for its generality and extensibility, and because there is no reason to believe that the manually assigned topics are optimal for language modeling.

3.1 Tree Generation

In this study, we have investigated a range of hierarchical clustering techniques, examining extensions of hierarchical agglomerative clustering, k-means clustering and top-down EM-based clustering. The latter underperformed on evaluations in Florian (1998) and is not reported here.

A generic hierarchical agglomerative clustering algorithm proceeds as follows: initially each document has its own cluster. Repeatedly, the two closest clusters are merged and replaced by their union, until there is only one top-level cluster. Pairwise document similarity may be based on a range of functions, but to facilitate comparative analysis we have utilized standard cosine similarity,

  d(D_1, D_2) = \frac{\langle D_1, D_2 \rangle}{\|D_1\| \cdot \|D_2\|},

and IR-style term vectors (see Salton and McGill (1983)). This procedure outputs a tree in which documents on similar topics (indicated by similar term content) tend to be clustered together. The difference between average-linkage and maximum-linkage algorithms manifests in the way the similarity between clusters is computed (see Duda and Hart (1973)). A problem that appears when using hierarchical clustering is that small centroids tend to cluster with bigger centroids instead of other small centroids, often resulting in highly skewed trees such as shown in Figure 2, α=0. To overcome the problem, we devised two alternative approaches for computing the intercluster similarity:

• Our first solution minimizes the attraction of large clusters by introducing a normalizing factor α to the inter-cluster distance function:

    d(C_1, C_2) = \frac{\langle c(C_1), c(C_2) \rangle}{N(C_1)^{\alpha} \|c(C_1)\| \cdot N(C_2)^{\alpha} \|c(C_2)\|}    (2)

  where N(C_k) is the number of vectors (documents) in cluster C_k and c(C_i) is the centroid of the i-th cluster. Increasing α improves tree balance as shown in Figure 2, but as α becomes large the forced balancing degrades cluster quality.°

• A second approach we explored is to perform basic smoothing of term vector weights, replacing all 0's with a small value ε. By decreasing initial vector orthogonality, this approach facilitates attraction to small centroids, and leads to more balanced clusters as shown in Figure 3.

[Figure 2: As α increases, the trees become more balanced, at the expense of forced clustering (trees shown for α = 0, 0.3, 0.5).]

[Figure 3: Tree-balance is also sensitive to the smoothing parameter ε (trees shown for ε = 0, 0.15, 0.3, 0.7).]

Instead of stopping the process when the desired number of clusters is obtained, we generate the full tree for two reasons: (1) the full hierarchical structure is exploited in our language models and (2) once the tree structure is generated, the objective function we used to partition the tree differs from that used when building the tree. Since the clustering procedure turns out to be rather expensive for large datasets (both in terms of time and memory), only 10000 documents were used for generating the initial hierarchical structure.

° Section 3.2 describes the choice of optimum α.
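A schematic version of this agglomerative step with the size-normalized similarity of equation (2) might look as follows; this is a sketch under the assumption of dense, non-zero term vectors, and none of these names come from the paper:

```python
import numpy as np

def normalized_similarity(c1, n1, c2, n2, alpha):
    """Equation (2): cosine similarity damped by cluster sizes N^alpha."""
    num = float(np.dot(c1, c2))
    den = (n1 ** alpha) * np.linalg.norm(c1) * (n2 ** alpha) * np.linalg.norm(c2)
    return num / den

def agglomerate(vectors, alpha=0.3):
    """Merge the two most similar clusters until one root remains.
    Returns the merge sequence (a simplistic dendrogram encoding)."""
    clusters = [(v.astype(float), 1) for v in vectors]   # (centroid, size)
    merges = []
    while len(clusters) > 1:
        i, j = max(
            ((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
            key=lambda ab: normalized_similarity(*clusters[ab[0]], *clusters[ab[1]], alpha),
        )
        (ci, ni), (cj, nj) = clusters[i], clusters[j]
        merged = ((ci * ni + cj * nj) / (ni + nj), ni + nj)  # size-weighted centroid
        merges.append((i, j))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return merges
```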
3.2 Optimizing the Hierarchical Structure

To be able to compute accurate language models, one has to have sufficient data for the relative frequency estimates to be reliable. Usually, even with enough data, a smoothing scheme is employed to ensure that P(w_i | w_1^{i-1}) > 0 for any given word sequence w_1^i. The trees obtained from the previous step have documents in the leaves, therefore not enough word mass for proper probability estimation. But, on the path from a leaf to the root, the internal nodes grow in mass, ending with the root where the counts from the entire corpus are stored. Since our intention is to use the full tree structure to interpolate between the in-node language models, we proceeded to identify a subset of internal nodes of the tree which contain sufficient data for language model estimation. The criterion for choosing the nodes to collapse involves a goodness function, such that the cut¹ is a solution to a constrained optimization problem, given the constraint that the resulting tree has exactly k leaves. Let this evaluation function be g(n), where n is a node of the tree, and suppose that we want to minimize it. Let g(n, k) be the minimum cost of creating k leaves in the subtree of root n. When the evaluation function g(n) satisfies the locality condition that it depends solely on the values g(n_j, ·) (where (n_j)_{j=1..k} are the children of node n), g(root) can be computed efficiently using dynamic programming²:

  g(n, 1) = g(n)
  g(n, k) = \min_{j_1, \ldots, j_k \ge 1} h(g(n_1, j_1), \ldots, g(n_k, j_k))    (3)

Let us assume for a moment that we are interested in computing a unigram topic-mixture language model. If the topic-conditional distributions have high entropy (e.g. the histogram of P(w|topic) is fairly uniform), topic-sensitive language model interpolation will not yield any improvement, no matter how well the topic detection procedure works. Therefore, we are interested in clustering documents in such a way that the topic-conditional distribution P(w|topic) is maximally skewed. With this in mind, we selected the evaluation function to be the conditional entropy of a set of words (possibly the whole vocabulary) given the particular classification. The conditional entropy of some set of words W given a partition C is

  H(W|C) = -\sum_{i=1}^{|C|} P(C_i) \sum_{w \in W \cap C_i} P(w|C_i) \cdot \log(P(w|C_i))
         = -\frac{1}{T} \sum_{i=1}^{|C|} \sum_{w \in W \cap C_i} c(w, C_i) \cdot \log(P(w|C_i))    (4)

where c(w, C_i) is the TF-IDF factor of word w in class C_i and T is the size of the corpus. Let us observe that the conditional entropy does satisfy the locality condition mentioned earlier. Given this objective function, we identified the optimal tree cut using the dynamic programming technique described above. We also optimized different parameters (such as α and choice of linkage method).

[Figure 4: Conditional entropy for different α, cluster sizes (64, 77 and 100 clusters) and linkage methods (average-linkage and maximum-linkage cases).]

¹ the collection of nodes that collapse
² h is an operator through which the values g(n_1, j_1), ..., g(n_k, j_k) are combined, such as \sum or \prod
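As an illustration, the tree-cut recursion of equation (3), with h = Σ and a caller-supplied node cost g, can be sketched as follows (node layout and field names are assumptions for the example; the child split is shown for binary trees, which agglomerative clustering produces):

```python
import functools

def best_cut(node, k, g, children):
    """g(node, k): minimal total cost of cutting the subtree at `node`
    into exactly k leaves, with h = sum.  `g` scores a collapsed node;
    `children` maps a node to its child list (empty for tree leaves)."""
    @functools.lru_cache(maxsize=None)
    def rec(n, kk):
        if kk == 1:
            return g(n)              # collapse the whole subtree into n
        kids = children(n)
        if not kids:
            return float("inf")      # a document leaf cannot be split further
        left, right = kids           # binary-tree assumption
        # distribute kk leaves among the two children (each gets >= 1)
        return min(rec(left, j) + rec(right, kk - j) for j in range(1, kk))
    return rec(node, k)
```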
Figure 4 illustrates that for a range of cluster sizes, maximal linkage clustering with α=0.15-0.3 yields optimal performance given the objective function in equation (2). The effect of varying α is also shown graphically in Figure 5. Successful tree construction for language modeling purposes will minimize the conditional entropy of P(W|C). This is most clearly illustrated for the word politics, where the tree generated with α = 0.3 maximally focuses documents on this topic into a single cluster. The other words shown also exhibit this desirable highly skewed distribution of P(W|C) in the cluster tree generated when α = 0.3.

Another investigated approach was k-means clustering (see Duda and Hart (1973)) as a robust and proven alternative to hierarchical clustering. Its application, with both our automatically derived clusters and Mangu's manually derived clusters (Mangu (1997)) used as initial partitions, actually yielded a small increase in conditional entropy and was not pursued further.

4 Language Model Construction and Evaluation

Estimating the language model probabilities is a two-phase process. First, the topic-sensitive language model probabilities P(w_i | t, w_{i-m+1}^{i-1}) are computed during the training phase. Then, at run-time, or in the testing phase, topic is dynamically identified by computing the probabilities P(t | w_1^{i-1}) as in Section 4.2 and the final language model probabilities are computed using Equation (1). The tree used in the following experiments was generated using average-linkage agglomerative clustering, using parameters that optimize the objective function in Section 3.

4.1 Language Model Construction

The topic-specific language model probabilities are computed in a four phase process:

1. Each document is assigned to one leaf in the tree, based on the similarity to the leaves' centroids (using the cosine similarity). The document counts are added to the selected leaf's count.

2. The leaf counts are propagated up the tree such that, in the end, the counts of every internal node are equal to the sum of its children's counts. At this stage, each node of the tree has an attached language model - the relative frequencies.

3. In the root of the tree, a discounted Good-Turing language model is computed (see Katz (1987), Chen and Goodman (1998)).

4. m-gram smooth language models are computed for each node n other than the root by three-way interpolating between the m-gram language model in the parent parent(n), the (m-1)-gram smooth language model in node n and the m-gram relative-frequency estimate in node n:

     P_n(w_m | w_1^{m-1}) = \lambda_1(w_1^{m-1}) \cdot P_{parent(n)}(w_m | w_1^{m-1})
                          + \lambda_2(w_1^{m-1}) \cdot P_n(w_m | w_2^{m-1})
                          + \lambda_3(w_1^{m-1}) \cdot f_n(w_m | w_1^{m-1})    (5)

   with \lambda_1(w_1^{m-1}) + \lambda_2(w_1^{m-1}) + \lambda_3(w_1^{m-1}) = 1 for each node n in the tree.

Based on how \lambda_k(w_1^{m-1}) depend on the particular node n and the word history w_1^{m-1}, various models can be obtained. We investigated two approaches (a sketch of the interpolation step follows below): a bigram model in which the λ's are fixed over the tree, and a more general trigram model in which the λ's adapt using an EM reestimation procedure.

  Case 1: f_node(w_1) ≠ 0

    P_node(w_2 | w_1) =
      P_root(w_2 | w_1)                                                       if w_2 ∈ F(w_1)
      [\lambda_1 f_node(w_2 | w_1) + \lambda_2 P_node(w_2)
        + (1 - \lambda_1 - \lambda_2) P_{parent(node)}(w_2 | w_1)] \gamma_node(w_1)   if w_2 ∈ R(w_1)
      \alpha_node(w_1) P_node(w_2)                                            if w_2 ∈ U(w_1)

  Case 2: f_node(w_1) = 0

    P_node(w_2 | w_1) =
      P_root(w_2 | w_1)                                                       if w_2 ∈ F(w_1)
      [\lambda_2 P_node(w_2) + (1 - \lambda_2) P_{parent(node)}(w_2 | w_1)] \gamma_node(w_1)   if w_2 ∈ R(w_1)
      \alpha_node(w_1) P_node(w_2)                                            if w_2 ∈ U(w_1)

  where \gamma_node(w_1) and \alpha_node(w_1) are normalization factors computed in each case such that the probabilities sum to 1, and F(w_1), R(w_1) and U(w_1) are the "fixed", "free" and "unknown" spaces defined in Section 4.1.1.

Figure 5: Basic Bigram Language Model Specifications
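A recursive sketch of the smoothing in equation (5), assuming per-node count tables and fixed interpolation weights, follows; the node fields and the unigram fallback for the lowest order are illustrative simplifications, not the authors' implementation:

```python
def smoothed_prob(node, ngram, lambdas, root_prob):
    """Equation (5): three-way interpolation of the parent model, the
    lower-order model in the same node, and the in-node relative frequency.

    node: tree node with .counts (ngram tuple -> count),
          .context_totals (context tuple -> count), and .parent
    ngram: tuple (w_1, ..., w_m); lambdas: (l1, l2, l3) summing to 1
    root_prob: callable giving the discounted Good-Turing root estimate
    """
    if node.parent is None:
        return root_prob(ngram)          # recursion bottoms out at the root
    l1, l2, l3 = lambdas
    context = ngram[:-1]
    denom = node.context_totals.get(context, 0)
    rel_freq = node.counts.get(ngram, 0) / denom if denom else 0.0
    # (m-1)-gram smooth model in the *same* node; unigrams fall back to
    # the in-node relative frequency in this simplified sketch.
    lower_order = (smoothed_prob(node, ngram[1:], lambdas, root_prob)
                   if len(ngram) > 1 else rel_freq)
    parent = smoothed_prob(node.parent, ngram, lambdas, root_prob)
    return l1 * parent + l2 * lower_order + l3 * rel_freq
```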
4.1.1 Bigram Language Model

Not all words are topic sensitive. Mangu (1997) observed that closed-class function words (FW), such as the, of, and with, have minimal probability variation across different topic parameterizations, while most open-class content words (CW) exhibit substantial topic variation. This leads us to divide the possible word pairs into two classes (topic-sensitive and not) and compute the λ's in Equation (5) in such a way that the probabilities in the former set are constant in all the models. To formalize this:

• F(w_1) = {w_2 ∈ V | (w_1, w_2) is fixed} - the "fixed" space;
• R(w_1) = {w_2 ∈ V | (w_1, w_2) is free/variable} - the "free" space;
• U(w_1) = {w_2 ∈ V | (w_1, w_2) was never seen} - the "unknown" space.

The imposed restriction is, then: for every word w_1 and any word w_2 ∈ F(w_1), P_n(w_2 | w_1) = P_root(w_2 | w_1) in any node n. The distribution of bigrams in the training data is as follows, with roughly 30% of bigram probabilities allowed to vary in the topic-sensitive models:

  Model   Bigram-type   Example        Freq.
  fixed   p(FW|FW)      p(the|·)       45.3%   least topic sensitive
  fixed   p(FW|CW)      p(of|·)        24.8%
  free    p(CW|CW)      p(air|cold)     5.3%
  free    p(CW|FW)      p(air|the)     24.5%   most topic sensitive

This approach raises one interesting issue: the language model in the root assigns some probability mass to the unseen events, equal to the singletons' mass (see Good (1953), Katz (1987)). In our case, based on the assumptions made in the Good-Turing formulation, we considered that the ratio of the probability mass that goes to the unseen events and the one that goes to seen, free events should be fixed over the nodes of the tree. Let β be this ratio. Then the language model probabilities are computed as in Figure 5.

4.1.2 Ngram Language Model Smoothing

In general, n-gram language model probabilities can be computed as in formula (5), where the \lambda_k(w_1^{m-1}) are adapted both for the particular node n and the word history w_1^{m-1}. The proposed dependency on the history is realized through the history count c(w_1^{m-1}) and the relevance of the history w_1^{m-1} to the topic in the nodes n and parent(n). The intuition is that if a history is as relevant in the current node as in the parent, then the estimates in the parent should be given more importance, since they are better estimated. On the other hand, if the history is much more relevant in the current node, then the estimates in the node should be trusted more. The mean adapted λ for a given height h in the tree is shown in Figure 6. This is consistent with the observation that splits in the middle of the tree tend to be most informative, while those closer to the leaves suffer from data fragmentation, and hence give relatively more weight to their parent.

[Figure 6: Mean of the estimated λs at node height h, in the unigram case.]

As before, since not all the m-grams are expected to be topic-sensitive, we use a method to ensure that those m-grams are kept "fixed" to minimize noise and modeling effort. In this case, though, two language models with different support are used: one
One class of transformations we investigated, that directly address the previous problem, adjusts the similarities such that closer topics weigh more and more distant ones weigh less. Therefore, f is chosen such that I(=~} < ~-~ for ~E1 < X2 ¢~ s¢.~)- ~ - (7) f(zl) < for zz < z2 X I ~ ag 2 that is, ~ should be a monotonically increas- ing function on the interval [0, 1], or, equivalently f (x) = x. g (x), g being an increasing function on [0,1]. Choices for g(x) include x, z~(~f > 0), log (z), e z . Another way of solving this problem is through the scaling operator f' (xi) = ,~-mm~ By apply- max zi --min zi " ing this operator, minimum values (corresponding to low-relevancy topics) do not receive any mass at all, and the mass is divided between the more relevant topics. For example, a combination of scaling and g(x) = x ~ yields: p( jlwi-l! = ($im('w~--l't')--min~Sim('w~--l'tk) )"Y (8) A third class of transformations we investigated considers only the closest k topics in formula (6) and ignores the more distant topics. 4.3 Language Model Evaluation Table 1 briefly summarizes a larger table of per- formance measured on the bigram implementation 3Due to unimportant word co-occurrences of this adaptive topic-based LM. For the default parameters (indicated by *), a statistically signif- icant overall perplexity decrease of 10.5% was ob- served relative to a standard bigram model mea- sured on the same 1000 test documents. System- atically modifying these parameters, we note that performance is decreased by using shorter discourse contexts (as histories never cross discourse bound- aries, 5000-word histories essentially correspond to the full prior discourse). Keeping other parame- ters constant, g(x) = x outperforms other candidate transformations g(x) = 1 and g(x) = e z. Absence of k-nn and use of scaling both yield minor perfor- mance improvements. It is important to note that for 66% of the vo- cabulary the topic-based LM is identical to the core bigram model. On the 34% of the data that falls in the model's target vocabulary, however, perplexity reduction is a much more substantial 33.5% improve- ment. The ability to isolate a well-defined target subtask and perform very well on it makes this work especially promising for use in model combination. 5 Conclusion In this paper we described a novel method of gen- erating and applying hierarchical, dynamic topic- based language models. Specifically, we have pro- posed and evaluated hierarchical cluster genera- tion procedures that yield specially balanced and pruned trees directly optimized for language mod- eling purposes. We also present a novel hierar- chical interpolation algorithm for generating a lan- guage model from these trees, specializing in the hierarchical topic-conditional probability estimation for a target topic-sensitive vocabulary (34% of the entire vocabulary). We also propose and evalu- ate a range of dynamic topic detection procedures based on several transformations of content-vector similarity measures. These dynamic estimations of P(topici[history) are combined with the hierarchical estimation of P(wordj Itopici, history) in a product across topics, yielding a final probability estimate 173 of P(wordj Ihistory) that effectively captures long- distance lexical dependencies via these intermediate topic models. Statistically significant reductions in perplexity are obtained relative to a baseline model, both on the entire text (10.5%) and on the target vocabulary (33.5%). 
This large improvement on a readily isolatable subset of the data bodes well for further model combination.

Acknowledgements

The research reported here was sponsored by National Science Foundation Grant IRI-9618874. The authors would like to thank Eric Brill, Eugene Charniak, Ciprian Chelba, Fred Jelinek, Sanjeev Khudanpur, Lidia Mangu and Jun Wu for suggestions and feedback during the progress of this work, and Andreas Stolcke for use of his hierarchical clustering tools as a basis for some of the clustering software developed here.
A Second-Order Hidden Markov Model for Part-of-Speech Tagging

Scott M. Thede and Mary P. Harper
School of Electrical and Computer Engineering, Purdue University
West Lafayette, IN 47907
{thede, harper}@ecn.purdue.edu

Abstract

This paper describes an extension to the hidden Markov model for part-of-speech tagging using second-order approximations for both contextual and lexical probabilities. This model increases the accuracy of the tagger to state of the art levels. These approximations make use of more contextual information than standard statistical systems. New methods of smoothing the estimated probabilities are also introduced to address the sparse data problem.

1 Introduction

Part-of-speech tagging is the act of assigning each word in a sentence a tag that describes how that word is used in the sentence. Typically, these tags indicate syntactic categories, such as noun or verb, and occasionally include additional feature information, such as number (singular or plural) and verb tense. The Penn Treebank documentation (Marcus et al., 1993) defines a commonly used set of tags.

Part-of-speech tagging is an important research topic in Natural Language Processing (NLP). Taggers are often preprocessors in NLP systems, making accurate performance especially important. Much research has been done to improve tagging accuracy using several different models and methods, including: hidden Markov models (HMMs) (Kupiec, 1992), (Charniak et al., 1993); rule-based systems (Brill, 1994), (Brill, 1995); memory-based systems (Daelemans et al., 1996); maximum-entropy systems (Ratnaparkhi, 1996); path voting constraint systems (Tür and Oflazer, 1998); linear separator systems (Roth and Zelenko, 1998); and majority voting systems (van Halteren et al., 1998).

This paper describes various modifications to an HMM tagger that improve the performance to an accuracy comparable to or better than the best current single classifier taggers. This improvement comes from using second-order approximations of the Markov assumptions. Section 2 discusses a basic first-order hidden Markov model for part-of-speech tagging and extensions to that model to handle out-of-lexicon words. The new second-order HMM is described in Section 3, and Section 4 presents experimental results and conclusions.

2 Hidden Markov Models

A hidden Markov model (HMM) is a statistical construct that can be used to solve classification problems that have an inherent state sequence representation. The model can be visualized as an interlocking set of states. These states are connected by a set of transition probabilities, which indicate the probability of traveling between two given states. A process begins in some state, then at discrete time intervals, the process "moves" to a new state as dictated by the transition probabilities. In an HMM, the exact sequence of states that the process generates is unknown (i.e., hidden). As the process enters each state, one of a set of output symbols is emitted by the process. Exactly which symbol is emitted is determined by a probability distribution that is specific to each state. The output of the HMM is a sequence of output symbols.

2.1 Basic Definitions and Notation

According to (Rabiner, 1989), there are five elements needed to define an HMM:

1. N, the number of distinct states in the model. For part-of-speech tagging, N is the number of tags that can be used by the system. Each possible tag for the system corresponds to one state of the HMM.
2. M, the number of distinct output symbols in the alphabet of the HMM. For part-of-speech tagging, M is the number of words in the lexicon of the system.

3. A = {aij}, the state transition probability distribution. The probability aij is the probability that the process will move from state i to state j in one transition. For part-of-speech tagging, the states represent the tags, so aij is the probability that the model will move from tag ti to tj -- in other words, the probability that tag tj follows ti. This probability can be estimated using data from a training corpus.

4. B = {bj(k)}, the observation symbol probability distribution. The probability bj(k) is the probability that the k-th output symbol will be emitted when the model is in state j. For part-of-speech tagging, this is the probability that the word wk will be emitted when the system is at tag tj (i.e., P(wk | tj)). This probability can be estimated using data from a training corpus.

5. π = {πi}, the initial state distribution. πi is the probability that the model will start in state i. For part-of-speech tagging, this is the probability that the sentence will begin with tag ti.

When using an HMM to perform part-of-speech tagging, the goal is to determine the most likely sequence of tags (states) that generates the words in the sentence (sequence of output symbols). In other words, given a sentence V, calculate the sequence U of tags that maximizes P(V | U). The Viterbi algorithm is a common method for calculating the most likely tag sequence when using an HMM. This algorithm is explained in detail by Rabiner (1989) and will not be repeated here.

2.2 Calculating Probabilities for Unknown Words

In a standard HMM, when a word does not occur in the training data, the emit probability for the unknown word is 0.0 in the B matrix (i.e., bj(k) = 0.0 if wk is unknown). Being able to accurately tag unknown words is important, as they are frequently encountered when tagging sentences in applications. Most work in the area of unknown words and tagging deals with predicting part-of-speech information based on word endings and affixation information, as shown by work in (Mikheev, 1996), (Mikheev, 1997), (Weischedel et al., 1993), and (Thede, 1998). This section highlights a method devised for HMMs, which differs slightly from previous approaches.

To create an HMM to accurately tag unknown words, it is necessary to determine an estimate of the probability P(wk | ti) for use in the tagger. The probability P(word contains sj | tag is ti) is estimated, where sj is some "suffix" (a more appropriate term would be word ending, since the sj's are not necessarily morphologically significant, but this terminology is unwieldy). This new probability is stored in a matrix C = {cj(k)}, where cj(k) = P(word has suffix sk | tag is tj), which replaces bj(k) in the HMM calculations for unknown words. This probability can be estimated by collecting suffix information from each word in the training corpus.

In this work, suffixes of length one to four characters are considered, up to a maximum suffix length of two characters less than the length of the given word. An overall count of the number of times each suffix/tag pair appears in the training corpus is used to estimate emit probabilities for words based on their suffixes, with some exceptions.
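A minimal sketch of this counting step follows. The helper names are ours; the sketch folds in the length cutoff for very short words, which is one of the exceptions described in the next paragraph (the separate treatment of capitalized, hyphenated, and numeric words is omitted here).

from collections import defaultdict

def suffixes(word, max_len=4):
    # Suffixes of length 1 to 4, capped at two characters less than the word
    # length; words of length four or less yield nothing (see exceptions below).
    if len(word) <= 4:
        return []
    longest = min(max_len, len(word) - 2)
    return [word[-k:] for k in range(1, longest + 1)]

def count_suffix_tags(tagged_corpus):
    # Count suffix/tag pairs over (word, tag) training data. Normalizing
    # counts[(s, t)] by tag_totals[t] gives the C matrix estimate of
    # P(word has suffix s | tag is t).
    counts = defaultdict(int)
    tag_totals = defaultdict(int)
    for word, tag in tagged_corpus:
        sfx = suffixes(word)
        if not sfx:
            continue
        tag_totals[tag] += 1
        for s in sfx:
            counts[(s, tag)] += 1
    return counts, tag_totals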
When estimating suffix probabilities, words with length four or less are not likely to contain any word-ending information that is valuable for classification, so they are ignored. Unknown words are presumed to be open-class, so words that are not tagged with an open-class tag are also ignored.

When constructing our suffix predictor, words that contain hyphens, are capitalized, or contain numeric digits are separated from the main calculations. Estimates for each of these categories are calculated separately. For example, if an unknown word is capitalized, the probability distribution estimated from capitalized words is used to predict its part of speech. However, capitalized words at the beginning of a sentence are not classified in this way -- the initial capitalization is ignored. If a word is not capitalized and does not contain a hyphen or numeric digit, the general distribution is used. Finally, when predicting the possible part of speech for an unknown word, all possible matching suffixes are used with their predictions smoothed (see Section 3.2).

3 The Second-Order Model for Part-of-Speech Tagging

The model described in Section 2 is an example of a first-order hidden Markov model. In part-of-speech tagging, it is called a bigram tagger. This model works reasonably well in part-of-speech tagging, but captures a more limited amount of the contextual information than is available. Most of the best statistical taggers use a trigram model, which replaces the bigram transition probability aij = P(τp = tj | τp-1 = ti) with a trigram probability aijk = P(τp = tk | τp-1 = tj, τp-2 = ti). This section describes a new type of tagger that uses trigrams not only for the context probabilities but also for the lexical (and suffix) probabilities. We refer to this new model as a full second-order hidden Markov model.

3.1 Defining New Probability Distributions

The full second-order HMM uses a notation similar to a standard first-order model for the probability distributions. The A matrix contains state transition probabilities, the B matrix contains output symbol distributions, and the C matrix contains unknown word distributions. The π matrix is identical to its counterpart in the first-order model. However, the definitions of A, B, and C are modified to enable the full second-order HMM to use more contextual information to model part-of-speech tagging. In the following sections, there are assumed to be P words in the sentence, with τp and vp being the p-th tag and word in the sentence, respectively.

3.1.1 Contextual Probabilities

The A matrix defines the contextual probabilities for the part-of-speech tagger. As in the trigram model, instead of limiting the context to a first-order approximation, the A matrix is defined as follows: A = {aijk}, where

aijk = P(τp = tk | τp-1 = tj, τp-2 = ti), 1 ≤ p ≤ P

Thus, the transition matrix is now three dimensional, and the probability of transitioning to a new state depends not only on the current state, but also on the previous state. This allows a more realistic context-dependence for the word tags. For the boundary cases of p = 1 and p = 2, the special tag symbols NONE and SOS are used.

3.1.2 Lexical and Suffix Probabilities

The B matrix defines the lexical probabilities for the part-of-speech tagger, while the C matrix is used for unknown words.
Similarly to the trigram extension to the A matrix, the approximation for the lexical and suffix probabilities can also be modified to include second-order information as follows: B = {bij(k)} and C = {cij(k)}, where

bij(k) = P(vp = wk | τp = tj, τp-1 = ti)
cij(k) = P(vp has suffix sk | τp = tj, τp-1 = ti)

for 1 ≤ p ≤ P. In these equations, the probability of the model emitting a given word depends not only on the current state but also on the previous state. To our knowledge, this approach has not been used in tagging. SOS is again used in the p = 1 case.

3.2 Smoothing Issues

While the full second-order HMM is a more precise approximation of the underlying probabilities for the model, a problem can arise from sparseness of data, especially with lexical estimations. For example, the size of the B matrix is T²W, which for the WSJ corpus is approximately 125,000,000 possible tag/tag/word combinations. In an attempt to avoid sparse data estimation problems, the probability estimates for each distribution are smoothed. There are several methods of smoothing discussed in the literature. These methods include the additive method (discussed by (Gale and Church, 1994)); the Good-Turing method (Good, 1953); the Jelinek-Mercer method (Jelinek and Mercer, 1980); and the Katz method (Katz, 1987).

These methods are all useful smoothing algorithms for a variety of applications. However, they are not appropriate for our purposes. Since we are smoothing trigram probabilities, the additive and Good-Turing methods are of limited usefulness, since neither takes into account bigram or unigram probabilities. Katz smoothing seems a little too granular to be effective in our application -- the broad spectrum of possibilities is reduced to three options, depending on the number of times the given event occurs. It seems that smoothing should be based on a function of the number of occurrences. Jelinek-Mercer accommodates this by smoothing the n-gram probabilities using differing coefficients (λ's) according to the number of times each n-gram occurs, but this requires holding out training data for the λ's. We have implemented a model that smooths with lower order information by using coefficients calculated from the number of occurrences of each trigram, bigram, and unigram without training. This method is explained in the following sections.

3.2.1 State Transition Probabilities

To estimate the state transition probabilities, we want to use the most specific information. However, that information may not always be available. Rather than using a fixed smoothing technique, we have developed a new method that uses variable weighting. This method attaches more weight to triples that occur more often. The formula for the estimate P̂ of P(τp = tk | τp-1 = tj, τp-2 = ti) is:

P̂ = k3 · (N3/C2) + (1 - k3) · k2 · (N2/C1) + (1 - k3)(1 - k2) · (N1/C0)

which depends on the following numbers:

N1 = number of times tk occurs
N2 = number of times the sequence tjtk occurs
N3 = number of times the sequence titjtk occurs
C0 = total number of tags that appear
C1 = number of times tj occurs
C2 = number of times the sequence titj occurs

where:

k2 = (log(N2 + 1) + 1) / (log(N2 + 1) + 2), and
k3 = (log(N3 + 1) + 1) / (log(N3 + 1) + 2)

The formulas for k2 and k3 are chosen so that the weighting for each element in the equation for P̂ changes based on how often that element occurs in the training data. Notice that the coefficients of the probabilities in the equation for P̂ sum to one. This guarantees that the value returned for P̂ is a valid probability. After this value is calculated for all tag triples, the values are normalized so that Σ_{tk∈T} P̂ = 1, creating a valid probability distribution.

The value of this smoothing technique becomes clear when the triple in question occurs very infrequently, if at all. Consider calculating P̂ for the tag triple CD RB VB. The information for this triple is:

N1 = 33,277 (number of times VB appears)
N2 = 4,335 (number of times RB VB appears)
N3 = 0 (number of times CD RB VB appears)
C0 = 1,056,892 (total number of tags)
C1 = 46,994 (number of times RB appears)
C2 = 160 (number of times CD RB appears)

Using these values, we calculate the coefficients k2 and k3:

k2 = (log(4,335 + 1) + 1) / (log(4,335 + 1) + 2) = 4.637 / 5.637 = 0.823
k3 = (log(0 + 1) + 1) / (log(0 + 1) + 2) = 1/2 = 0.500

Using these values, we calculate the probability P̂:

P̂ = k3 · (N3/C2) + (1 - k3) · k2 · (N2/C1) + (1 - k3)(1 - k2) · (N1/C0)
   = 0.500 · 0.000 + 0.412 · 0.092 + 0.088 · 0.031
   = 0.041

If smoothing were not applied, the probability would have been 0.000, which would create problems for tagger generalization. Smoothing allows tag triples that were not encountered in the training data to be assigned a probability of occurrence.
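The following sketch restates this estimator directly; the function names are ours, and base-10 logarithms reproduce the worked example above.

import math

def weight(n):
    # k coefficient: attaches more weight to events seen more often.
    return (math.log10(n + 1) + 1) / (math.log10(n + 1) + 2)

def smoothed_context(n1, n2, n3, c0, c1, c2):
    # Estimate P-hat(tk | tj, ti) before the final normalization over tk.
    # n3/c2, n2/c1, and n1/c0 are the raw trigram, bigram, and unigram estimates.
    k2, k3 = weight(n2), weight(n3)
    tri = n3 / c2 if c2 else 0.0
    bi = n2 / c1 if c1 else 0.0
    uni = n1 / c0
    return k3 * tri + (1 - k3) * k2 * bi + (1 - k3) * (1 - k2) * uni

# The CD RB VB example from the text:
# smoothed_context(33277, 4335, 0, 1056892, 46994, 160) evaluates to about 0.041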
3.2.2 Lexical and Suffix Probabilities

For the lexical and suffix probabilities, we do something somewhat different than for context probabilities. Initial experiments that used a formula similar to that used for the contextual estimates performed poorly. This poor performance was traced to the fact that smoothing allowed too many words to be incorrectly tagged with tags that did not occur with that word in the training data (over-generalization). As an alternative, we calculated the smoothed probability P̂ for words as follows:

P̂ = ((log(N3 + 1) + 1) / (log(N3 + 1) + 2)) · (N3/C2) + (1 / (log(N3 + 1) + 2)) · (N2/C1)

where:

N2 = number of times word wk occurs with tag tj
N3 = number of times word wk occurs with tag tj preceded by tag ti
C1 = number of times tj occurs
C2 = number of times the sequence titj occurs

Notice that this method assigns a probability of 0.0 to a word/tag pair that does not appear in the training data. This prevents the tagger from trying every possible combination of word and tag, something which both increases running time and decreases the accuracy. We believe the low accuracy of the original smoothing scheme emerges from the fact that smoothing the lexical probabilities too far allows the contextual information to dominate at the expense of the lexical information. A better smoothing approach for lexical information could possibly be created by using some sort of word class idea, such as the genotype idea used in (Tzoukermann and Radev, 1996), to improve our P̂ estimate.
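A sketch of this lexical estimate, under the same base-10 convention as above (names ours):

import math

def smoothed_lexical(n2, n3, c1, c2):
    # Estimate P-hat(word | tj, ti). A word/tag pair unseen in training
    # (n2 == 0, and hence n3 == 0) keeps probability 0.0, which prevents
    # the over-generalization described in the text.
    denom = math.log10(n3 + 1) + 2
    tri = n3 / c2 if c2 else 0.0
    bi = n2 / c1 if c1 else 0.0
    return ((math.log10(n3 + 1) + 1) / denom) * tri + (1 / denom) * bi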
In addition to choosing the above approach for smoothing the C matrix for unknown words, there is an additional issue of choosing which suffix to use when predicting the part of speech. There are many possible answers, some of which are considered by (Thede, 1998): use the longest matching suffix, use an entropy measure to determine the "best" affix to use, or use an average. A voting technique for cij(k) was determined that is similar to that used for contextual smoothing but is based on different length suffixes. Let s4 be the length four suffix of the given word. Define s3, s2, and s1 to be the length three, two, and one suffixes respectively. If the length of the word is six or more, these four suffixes are used. Otherwise, suffixes up to length n - 2 are used, where n is the length of the word. Determine the longest suffix of these that matches a suffix in the training data, and calculate the new smoothed probability:

P̂ij(sk) = f(Nk) · ĉij(sk) + (1 - f(Nk)) · P̂ij(sk-1), 1 ≤ k ≤ 4

where:

• f(x) = (log(x + 1) + 1) / (log(x + 1) + 2)
• Nk = the number of times the suffix sk occurs in the training data
• ĉij(sk) = the estimate of cij(sk) from the previous lexical smoothing

After calculating P̂, it is normalized. Thus, suffixes of length four are given the most weight, and a suffix receives more weight the more times it appears. Information provided by suffixes of length one to four is used in estimating the probabilities, however.

3.3 The New Viterbi Algorithm

Modification of the lexical and contextual probabilities is only the first step in defining a full second-order HMM. These probabilities must also be combined to select the most likely sequence of tags that generated the sentence. This requires modification of the Viterbi algorithm. First, the variables δ and ψ from (Rabiner, 1989) are redefined, as shown in Figure 1. These new definitions take into account the added dependencies of the distributions of A, B, and C. We can then calculate the most likely tag sequence using the modification of the Viterbi algorithm shown in Figure 1. The running time of this algorithm is O(NT³), where N is the length of the sentence, and T is the number of tags. This is asymptotically equivalent to the running time of a standard trigram tagger that maximizes the probability of the entire tag sequence.

THE SECOND-ORDER VITERBI ALGORITHM

The variables:

• δp(i,j) = max over τ1,...,τp-2 of P(τ1, ..., τp-2, τp-1 = ti, τp = tj, v1, ..., vp), 2 ≤ p ≤ P
• ψp(i,j) = argmax over τ1,...,τp-2 of P(τ1, ..., τp-2, τp-1 = ti, τp = tj, v1, ..., vp), 2 ≤ p ≤ P

The procedure:

1. δ1(i,j) = πi · bij(v1) if v1 is known, and πi · cij(v1) if v1 is unknown, 1 ≤ i,j ≤ N
   ψ1(i,j) = 0, 1 ≤ i,j ≤ N

2. δp(j,k) = max over 1 ≤ i ≤ N of [δp-1(i,j) · aijk] · bjk(vp) if vp is known, and max over 1 ≤ i ≤ N of [δp-1(i,j) · aijk] · cjk(vp) if vp is unknown, 1 ≤ i,j,k ≤ N, 2 ≤ p ≤ P
   ψp(j,k) = argmax over 1 ≤ i ≤ N of [δp-1(i,j) · aijk], 1 ≤ i,j,k ≤ N, 2 ≤ p ≤ P

3. P* = max over 1 ≤ i,j ≤ N of δP(i,j)
   τ*P = the j maximizing δP(i,j); τ*P-1 = the i maximizing δP(i,j)

4. τ*p = ψp+2(τ*p+1, τ*p+2), p = P-2, P-3, ..., 2, 1

Figure 1: Second-Order Viterbi Algorithm
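A direct transcription of Figure 1 into code might look as follows. This is a sketch under our own conventions: tags are integer indices, emit(p, j, k) stands in for the bjk/cjk lookup (choosing the B or C matrix according to whether words[p] is known), and sentences are assumed to contain at least two words.

def second_order_viterbi(words, pi, A, emit, num_tags):
    # delta[p][j][k] is the best probability of a tag sequence ending with
    # tags j, k at positions p-1, p, following the recursion of Figure 1.
    P, N = len(words), num_tags
    assert P >= 2
    delta = [[[0.0] * N for _ in range(N)] for _ in range(P)]
    psi = [[[0] * N for _ in range(N)] for _ in range(P)]
    for i in range(N):                     # step 1
        for j in range(N):
            delta[0][i][j] = pi[i] * emit(0, i, j)
    for p in range(1, P):                  # step 2
        for j in range(N):
            for k in range(N):
                best = max(range(N), key=lambda i: delta[p - 1][i][j] * A[i][j][k])
                psi[p][j][k] = best
                delta[p][j][k] = delta[p - 1][best][j] * A[best][j][k] * emit(p, j, k)
    # step 3: best final tag pair
    t_prev, t_last = max(((i, j) for i in range(N) for j in range(N)),
                         key=lambda ij: delta[P - 1][ij[0]][ij[1]])
    seq = [t_prev, t_last]
    for p in range(P - 3, -1, -1):         # step 4: backtrace
        seq.insert(0, psi[p + 2][seq[0]][seq[1]])
    return seq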
4 Experiment and Conclusions

The new tagging model is tested in several different ways. The basic experimental technique is a 10-fold cross validation. The corpus in question is randomly split into ten sections, with nine of the sections combined to train the tagger and the tenth for testing. The results of the ten possible training/testing combinations are merged to give an overall accuracy measure. The tagger was tested on two corpora -- the Brown corpus (from the Treebank II CD-ROM (Marcus et al., 1993)) and the Wall Street Journal corpus (from the same source). Comparing results for taggers can be difficult, especially across different researchers. Care has been taken in this paper that, when comparing two systems, the comparisons are from experiments that were as similar as possible and that differences are highlighted in the comparison.

First, we compare the results on each corpus of four different versions of our HMM tagger: a standard (bigram) HMM tagger, an HMM using second-order lexical probabilities, an HMM using second-order contextual probabilities (a standard trigram tagger), and a full second-order HMM tagger. The results from both corpora for each tagger are given in Table 1.

Comparison on Brown Corpus
Tagger Type                    Known    Unknown  Overall
Standard Bigram                95.94%   80.61%   95.60%
Second-Order Lexical only      96.23%   81.42%   95.90%
Second-Order Contextual only   96.41%   82.69%   96.11%
Full Second-Order HMM          96.62%   83.46%   96.33%

Comparison on WSJ Corpus
Tagger Type                    Known    Unknown  Overall
Standard Bigram                96.52%   82.40%   96.25%
Second-Order Lexical only      96.80%   83.63%   96.54%
Second-Order Contextual only   96.90%   84.10%   96.65%
Full Second-Order HMM          97.09%   84.88%   96.86%

% Error Reduction of Second-Order HMM
System Type Compared           Brown    WSJ
Bigram                         16.6%    16.3%
Lexical Trigrams Only          10.5%    9.2%
Contextual Trigrams Only       5.7%     6.3%

Table 1: Comparison between Taggers on the Brown and WSJ Corpora

As might be expected, the full second-order HMM had the highest accuracy levels. The model using only second-order contextual information (a standard trigram model) was second best, the model using only second-order lexical information was third, and the standard bigram HMM had the lowest accuracies. The full second-order HMM reduced the number of errors on known words by around 16% over bigram taggers (raising the accuracy about 0.6-0.7%), and by around 6% over conventional trigram taggers (accuracy increase of about 0.2%). Similar results were seen in the overall accuracies. Unknown word accuracy rates were increased by around 2-3% over bigrams.

The full second-order HMM tagger is also compared to other researchers' taggers in Table 2. It is important to note that both SNOW, a linear separator model (Roth and Zelenko, 1998), and the voting constraint tagger (Tür and Oflazer, 1998) used training data that contained full lexical information (i.e., no unknown words), as well as training and testing data that did not cover the entire WSJ corpus. This use of a full lexicon may have increased their accuracy beyond what it would have been if the model were tested with unknown words. The standard trigram tagger data is from (Weischedel et al., 1993). The MBT (Daelemans et al., 1996) did not include numbers in the lexicon, which accounts for the inflated accuracy on unknown words. Table 2 compares the accuracies of the taggers on known words, unknown words, and overall accuracy. The table also contains two additional pieces of information. The first indicates if the corresponding tagger was tested using a closed lexicon (one in which all words appearing in the testing data are known to the tagger) or an open lexicon (not all words are known to the system). The second indicates whether a hold-out method (such as cross-validation) was used, and whether the tagger was tested on the entire WSJ corpus or a reduced corpus.

Tagger Type                                  Known   Unknown    Overall  Lexicon  Testing Method
Standard Trigram (Weischedel et al., 1993)   96.7%   85.0%      96.3%    open     full WSJ [1]
MBT (Daelemans et al., 1996)                 96.7%   90.6% [2]  96.4%    open     fixed WSJ cross-validation
Rule-based (Brill, 1994)                     --      82.2%      96.6%    open     fixed full WSJ [3]
Maximum-Entropy (Ratnaparkhi, 1996)          97.1%   85.6%      96.6%    open     fixed full WSJ [3]
Full Second-Order HMM                        97.2%   84.9%      96.9%    open     full WSJ cross-validation
SNOW (Roth and Zelenko, 1998)                --      --         97.5%    closed   fixed subset of WSJ [4]
Voting Constraints (Tür and Oflazer, 1998)   --      --         --       closed   subset of WSJ cross-validation [5]
Full Second-Order HMM                        --      --         98.05%   closed   full WSJ cross-validation

[1] The full WSJ is used, but the paper does not indicate whether a cross-validation was performed.
[2] MBT did not place numbers in the lexicon, so all numbers were treated as unknown words.
[3] Both the rule-based and maximum-entropy models use the full WSJ for training/testing with only a single test set.
[4] SNOW used a fixed subset of WSJ for training and testing with no cross-validation.
[5] The voting constraints tagger used a subset of WSJ for training and testing with cross-validation.

Table 2: Comparison between Full Second-Order HMM and Other Taggers
Two cross-validation tests with the full second-order HMM were run: the first with an open lexicon (created from the training data), and the second where the entire WSJ lexicon was used for each test set. These two tests allow more direct comparisons between our system and the others. As shown in the table, the full second-order HMM has improved overall accuracies on the WSJ corpus to state-of-the-art levels -- 96.9% is the greatest accuracy reported on the full WSJ for an experiment using an open lexicon. Finally, using a closed lexicon, the full second-order HMM achieved an accuracy of 98.05%, the highest reported for the WSJ corpus for this type of experiment.

The accuracy of our system on unknown words is 84.9%. This accuracy was achieved by creating separate classifiers for capitalized, hyphenated, and numeric digit words: tests on the Wall Street Journal corpus with the full second-order HMM show that the accuracy rate on unknown words without separating these types of words is only 80.2%. [6] This is below the performance of our bigram tagger that separates the classifiers. Unfortunately, unknown word accuracy is still below some of the other systems. This may be due in part to experimental differences. It should also be noted that some of these other systems use hand-crafted rules for unknown words, whereas our system uses only statistical data. Adding additional rules to our system could result in comparable performance. Improving our model on unknown words is a major focus of future research.

[6] Mikheev (1997) also separates suffix probabilities into different estimates, but fails to provide any data illustrating the implied accuracy increase.

In conclusion, a new statistical model, the full second-order HMM, has been shown to improve part-of-speech tagging accuracies over current models. This model makes use of second-order approximations for a hidden Markov model and improves the state of the art for taggers with no increase in asymptotic running time over traditional trigram taggers based on the hidden Markov model. A new smoothing method is also explained, which allows the use of second-order statistics while avoiding sparse data problems.

References

Eric Brill. 1994. A report of recent progress in transformation-based error-driven learning. Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 722-727.

Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics, 21(4):543-565.

Eugene Charniak, Curtis Hendrickson, Neil Jacobson, and Mike Perkowitz. 1993. Equations for part-of-speech tagging. Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 784-789.

Walter Daelemans, Jakub Zavrel, Peter Berck, and Steven Gillis. 1996. MBT: A memory-based part of speech tagger-generator. Proceedings of the Fourth Workshop on Very Large Corpora, pages 14-27.

William A. Gale and Kenneth W. Church. 1994.
What's wrong with adding one? In Corpus-Based Research into Language. Rodopi, Amsterdam.

I. J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40:237-264.

Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. Proceedings of the Workshop on Pattern Recognition in Practice.

Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, 35(3):400-401.

Julian Kupiec. 1992. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6(3):225-242.

Mitchell Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

Andrei Mikheev. 1996. Unsupervised learning of word-category guessing rules. Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 327-334.

Andrei Mikheev. 1997. Automatic rule induction for unknown-word guessing. Computational Linguistics, 23(3):405-423.

Lawrence R. Rabiner. 1989. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, pages 257-286.

Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 133-142.

Dan Roth and Dmitry Zelenko. 1998. Part of speech tagging using a network of linear separators. Proceedings of COLING-ACL '98, pages 1136-1142.

Scott M. Thede. 1998. Predicting part-of-speech information about unknown words using statistical methods. Proceedings of COLING-ACL '98, pages 1505-1507.

Gökhan Tür and Kemal Oflazer. 1998. Tagging English by path voting constraints. Proceedings of COLING-ACL '98, pages 1277-1281.

Evelyne Tzoukermann and Dragomir R. Radev. 1996. Using word class for part-of-speech disambiguation. Proceedings of the Fourth Workshop on Very Large Corpora, pages 1-13.

Hans van Halteren, Jakub Zavrel, and Walter Daelemans. 1998. Improving data driven wordclass tagging by system combination. Proceedings of COLING-ACL '98, pages 491-497.

Ralph Weischedel, Marie Meteer, Richard Schwartz, Lance Ramshaw, and Jeff Palmucci. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19:359-382.
The CommandTalk Spoken Dialogue System*

Amanda Stent, John Dowding, Jean Mark Gawron, Elizabeth Owen Bratt, and Robert Moore
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
{stent,dowding,gawron,owen,bmoore}@ai.sri.com

* This research was supported by the Defense Advanced Research Projects Agency under Contract N66001-94-C-6046 with the Space and Naval Warfare Systems Center. The views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either express or implied, of the Defense Advanced Research Projects Agency of the U.S. Government.

1 Introduction

CommandTalk (Moore et al., 1997) is a spoken-language interface to the ModSAF battlefield simulator that allows simulation operators to generate and execute military exercises by creating forces and control measures, assigning missions to forces, and controlling the display (Ceranowicz, 1994). CommandTalk consists of independent, cooperating agents interacting through SRI's Open Agent Architecture (OAA) (Martin et al., 1998). This architecture allows components to be developed independently, and then flexibly and dynamically combined to support distributed computation. Most of the agents that compose CommandTalk have been described elsewhere (for more detail, see (Moore et al., 1997)). This paper describes extensions to CommandTalk to support spoken dialogue. While we make no theoretical claims about the nature and structure of dialogue, we are influenced by the theoretical work of (Grosz and Sidner, 1986) and will use terminology from that tradition when appropriate. We also follow (Chu-Carroll and Brown, 1997) in distinguishing task initiative and dialogue initiative.

Section 2 demonstrates the dialogue capabilities of CommandTalk by way of an extended example. Section 3 describes how language in CommandTalk is modeled for understanding and generation. Section 4 describes the architecture of the dialogue manager in detail. Section 5 compares CommandTalk with other spoken dialogue systems.

2 Example Dialogues

The following examples constitute a single extended dialogue illustrating the capabilities of the dialogue manager with regard to structured dialogue, clarification and correction, changes in initiative, integration of speech and gesture, and sensitivity to events occurring in the underlying simulated world. [1]

Ex. 1: Confirmation
U 1  Create a point named Checkpoint 1 at 64 53
S 2  (rising tone)
U 3  Create a CEV at Checkpoint 1
S 4  (rising tone)
U 5  Create a CEV here <click>
S 6  (rising tone) I will create CEV at FQ 643 576

Utterances 1 and 3 illustrate typical successful interactions between an operator and the system. When no exceptional event occurs, CommandTalk does not respond verbally. However, it does provide an audible tone to indicate that it has completed processing. For a successful command, it produces a rising tone, marked "(rising tone)" in utterances 2 and 4. For an unsuccessful command it produces a falling tone, marked "(falling tone)", as in utterances 12 and 14. [2]

[1] U indicates a user utterance as recognized by the system, and S indicates a system utterance. Bracketed items in italics are gestures.
[2] The rising and falling tones come from the pre-dialogue version of the system. They were added because we observed that users did not always notice when the system had not understood them correctly, and a textual error message alone did not always get the user's attention.
These tones also perform basic grounding behavior.

Utterance 6 demonstrates a case where, although the system successfully completed the command, it chose to provide an explicit confirmation. Explicit confirmations can be given at any time. In this case, the system chose to give the confirmation because it performed a nontrivial reference, resolving "here" to the map coordinates given by the gesture, FQ 643 576. Similar situations in which the system gives an explicit confirmation are the resolution of pronouns and elided, definite or plural noun phrases.

Ex. 2: Correction
U 7  Put Objective Golf here <click>
S 8  (rising tone) I will locate Objective Golf at FQ 658 583
U 9  I meant Objective Alpha
S 10 (rising tone) I will locate Objective Alpha at FQ 658 583

Example 2 contains a correction sub-dialogue. Utterance 9 is a correction of utterance 7, and is interpreted as though the operator had said "Put Objective Alpha here". This illustrates two points. First, since utterance 7 was successful, the system undoes its effects (that is, deletes Objective Golf) before creating Objective Alpha. Second, although the edited utterance contains the word "here", the gesture that was used to resolve that is no longer available. The system keeps track of gestural information along with linguistic information in its representation of context in order to interpret corrections.

Ex. 3: Structured Sub-dialogue
U 11 CEV advance to Objective Golf
S 12 (falling tone) There are 2 CEVs. Do you mean 100A11 or 100A12?
U 13 100A11
S 14 (falling tone) There is no point named Objective Golf. Which point should 100A11 proceed to?
U 15 I said Objective Alpha
S 16 (rising tone) 100A11 will proceed to Objective Alpha

Example 3 illustrates a structured discourse segment containing two subsegments. Utterance 11 is uninterpretable for two reasons: the reference to "CEV" is ambiguous, and Objective Golf does not exist. The first difficulty is resolved in discourse segment 12-13, and the second in discourse segment 14-16. Notice that the operator is not required to answer the question posed by the system in utterance 14, but is free to correct the system's misunderstanding of utterance 11 even though it is not the immediately prior utterance. This is true because utterance 13 (the most recent utterance) is interpreted as if the operator had said "100A11 advance to Objective Golf".

Ex. 4: Required Argument
U 17 A12 conduct crater breach facing north
S 18 (falling tone) Which point should 100A12 conduct crater breach oriented north at?
U 19 <click>
S 20 (rising tone) 100A12 will conduct crater breach oriented north at FQ 542 595

Example 4 demonstrates a case where, although there are no errors in the operator's utterance, the system requires additional information before it can execute the command. Also note that the question asked by the system in utterance 18 is answered with an isolated gesture.

Ex. 5: Delayed Response
U 21 A13 continue to Checkpoint 1 in a column formation.
S 22 (falling tone) There is no A13. Which unit should proceed in a column formation to Checkpoint 1?
U 23 What units are there?
S 24 (rising tone) There are 2 units: 100A11 and 100A12.
U 25 A11 proceed to Checkpoint 1.
S 26 (rising tone) 100A11 will proceed in a column formation to Checkpoint 1.

In example 5, the system asks a question but the operator needs to perform some other activity before answering it. The question asked by the system in utterance 22 is answered by the operator in utterance 25.
Due to the intervening material, the most natural way to answer the question posed in utterance 22 is with a complete sentence, rather than an isolated noun phrase.

Ex. 6: Changes in the Simulated World
U 27 A12 advance to Objective Bravo.
S 28 (falling tone) There is no Objective Bravo. What point should 100A12 advance to?
U 29 Create it at 635 545.
S 30 (rising tone)
S 31 Should 100A12 proceed to Objective Bravo?
U 32 Yes.
S 33 (rising tone) 100A12 will proceed to Objective Bravo.

Example 6 demonstrates the use of a guard, or test to see if a situation holds. In utterance 27, a presupposition failure occurs, leading to the open proposition expressed in utterance 28. A guard, associated with the open proposition, tests to see if the system can successfully resolve "Objective Bravo". Rather than answering the question in utterance 28, the operator chooses to create Objective Bravo. The system then tests the guard, which succeeds because Objective Bravo now exists. The system therefore takes dialogue initiative by asking the operator in utterance 31 if that operator would like to carry out the original command. Although, in this case, the simulated world changed in direct response to a linguistic act, in general the world can change for a variety of reasons, including the operator's activities on the GUI or the activities of other operators.

3 Language Interpretation and Generation

The language used in CommandTalk is derived from a single grammar using Gemini (Dowding et al., 1993), a unification-based grammar formalism. This grammar is used to provide all the language modeling capabilities of the system, including the language model used in the speech recognizer, the syntactic and semantic interpretation of user utterances (Dowding et al., 1994), and the generation of system responses (Shieber et al., 1990).

For speech recognition, Gemini uses the Nuance speech recognizer. Nuance accepts language models written in a Grammar Specification Language (GSL) format that allows context-free, as well as the more commonly used finite-state, models. [3] Using a technique described in (Moore, 1999), we compile a context-free covering grammar into GSL format from the main Gemini grammar.

This approach of using a single grammar source for both sides of the dialogue has several advantages. First, although there are differences between the language used by the system and that used by the speaker, there is a large degree of overlap, and encoding the grammar once is efficient. Second, anecdotal evidence suggests that the language used by the system influences the kind of language that speakers use in response. This gives rise to a consistency problem if the language models used for interpretation and generation are developed independently.

The grammar used in CommandTalk contains features that allow it to be partitioned into a set of independent top-level grammars. For instance, CommandTalk contains related, but distinct, grammars for each of the four armed services (Army, Navy, Air Force, and Marine Corps). The top-level grammar currently in use by the speech recognizer can be changed dynamically. This feature is used in the dialogue manager to change the top-level grammar, depending on the state of the dialogue.
Currently in CommandTalk, for each service there are two main grammars, one in which the user is free to give any top-level command, and another that contains everything in the first grammar, plus isolated noun phrases of the semantic types that can be used as answers to wh-questions, as well as answers to yes/no questions.

3.1 Prosody

A separate Prosody agent annotates the system's utterances to provide cues to the speech synthesizer about how they should be produced. It takes as input an utterance to be spoken, along with its parse tree and logical form. The output is an expression in the Spoken Text Markup Language (STML) [4] that annotates the locations and lengths of pauses and the locations of pitch changes.

[3] GSL grammars that are context-free cannot contain indirect left-recursion.
[4] See http://www.cstr.ed.ac.uk/projects/ssml.html for details.
Unlike in dialogue analyses carried out on completed dialogues (Grosz and Sidner, 1986), the dialogue manager needs to maintain a stack of all open discourse segments at each point in an on-going dialogue. When a system allows corrections, it can be difficult to determine when a user has completed a discourse segment. Ex. 7: Consecutive Corrections U 34 S 35 U 36 S 37 U 38 S 39 Center on Objective Charlie ® There is no point named Objec- tive Charlie. What point should I center on? 95 65 ® I will center on FQ 950 650 I said 55 65 ® I will center on FQ 550 650 In example 7, for instance, when the user an- swers the question in utterance 36, the system will pop the frame corresponding to utterances 34-35 off the stack. However, the information in that frame is necessary to properly interpret the correction in utterance 38. Without some other mechanism it would be unsafe to ever pop a 186 frame from the stack, and the stack would grow indefinitely. Since the dialogue stack represents our best guess as to the set of currently open dis- course segments, we want to allow the system to pop frames from the stack when it believes dis- course segments have been closed. We make use of another representation, the dialogue trail, to let us to recover from these moves if they prove to be incorrect. The dialogue trail acts as a history of all di- alogue stack operations performed. Using the trail, we record enough information to be able to restore the dialogue stack to any previous configuration (each trail entry records one op- eration taken, the top of the dialog stack before the operation, and the top of the dialog stack after). Unlike the stack, the dialogue trail rep- resents the entire history of the dialogue, not just the set of currently open propositions. The fact that the dialogue trail can grow arbitrarily long has not proven to be a problem in practice since the system typically does not look past the top item in the trail. 4.2 Finite State Machines Each stack frame in the dialogue manager con- tains a unique dialogue state identifier. These states form a collection of finite-state machines (FSMs), where each FSM describes the turns comprising a particular discourse segment. The dialogue stack is reminiscent of a recursive tran- sition network, in that the stack records the sys- tem's progress through a series of FSMs in par- allel. However, in this case, the stack operations are not dictated explicitly by the labels on the FSMs, but stack push operations correspond to the onset of a discourse segment, and stack pop operations correspond to the conclusion of a dis- course segment. Most of the FSMs currently used in Com- mandTalk coordinate dialogue initiative. These FSMs have a very simple structure of at most two states. For instance, there are FSMs rep- resenting discourse segments for clarification questions (utterances 23-24), reference failures (utterances 27-28), corrections (utterances 9- 10), and guards becoming true (utterances 31- 33). CommandTalk currently uses 22 such small FSMs. Although they each have a very simple structure, they compose naturally to support more complex dialogues. In these sub-dialogues the user retains the task initiative, but the sys- tem may temporarily take the dialogue initia- tive. This set of FSMs comprises the core dia- logue competence of the system. In a similar way, more complex FSMs can be designed to support more structured dia- logues, in which the system may take more of the task initiative. 
The additional structure im- posed varies from short 2-3 turn interactions to longer "form-filling" dialogues. We currently have three such FSMs in CommandTalk: The Embark/Debark command has four re- quired parameters; a user may have diffi- culty expressing them all in a single utter- ance. CommandTalk will query the user for missing parameters to fill in the structure of the command. The Infantry Attack command has a num- ber of required parameters, a potentially unbounded number of optional parameters, and some constraints between optional ar- guments (e.g., two parameters are each op- tional, but if one is specified then the other must be also). The Nine Line Brief is a strMght-forward form-filling command with nine parameters that should be provided in a specified or- der. When the system interprets a new user ut- terance that is not a correction, the next alter- native is that it is a continuation of the current discourse segment. Simple examples of this kind of transition occur when the user is answering a question posed by the system, or when the user has provided the next entry in a form-filling di- alogue. Once the transition is recognized, the current frame on top of the stack is popped. If the next state is not a final state, then a new frame is pushed corresponding to the next state. If it is a final state, then a new frame is not created, indicating the end of the discourse seg- ment. The last alternative for a new user utterance is that it is the onset of a new discourse segment. During the course of interpretation of the ut- terance, the conditions for entering one or more new FSMs may be satisfied by the utterance. These conditions may be linguistic, such as pre- supposition failures, or can arise from events that occur in the simulation, as when a guard 187 is tested in example 6. Each potential FSM has a corresponding priority (error, warning, or good). An FSM of the highest priority will be chosen to dictate the system's response. One last decision that must be made is whether the new discourse segment is a subseg- ment of the current segment, or if it should be a sibling of that segment. The heuristic that- we use is to consider the new segment a subseg- ment if the discourse frame on top of the stack contains an open proposition (as in utterance 23). In this case, we push the new frame on the stack. Otherwise, we consider the previous seg- ment to now be closed (as in utterance 3), and we pop the frame corresponding to it prior to pushing on the new frame. 4.3 Mechanisms for Reference CommandTalk employs two mechanisms for maintaining local context and performing refer- ence: a list of salient objects in the simulation, and focus spaces of linguistic items used in the dialogue. Since CommandTalk is controlling a dis- tributed simulation, events can occur asyn- chronously with the operator's linguistic acts, and objects may become available for reference independently of the on-going dialogue. For in- stance, if an enemy unit suddenly appears on the operator's display, that unit is available for immediate reference, even if no prior linguistic reference to it has been made. The ModSAF agent notifies the dialogue manager whenever an object is created, modified, or destroyed, and these objects are stored in a salience list in or- der of recency. The salience list can also be up- dated when simulation objects are referred to using language. The salience list is not part of the dialogue stack. 
It does not reflect attentional state; rather, it captures recency and "known" infor- mation. While the salience list contains only entities that directly correspond to objects in the sim- ulation, focus spaces contain representations of entities realized in linguistic acts, including ob- jects not directly represented in the simulation. This includes objects that do not exist (yet), as in "Objective Bravo" in utterance 28, which is referred to with a pronoun in utterance 29, and sets of objects introduced by plural noun phrases. All items referred to in an utterance are stored in a focus space associated with that utterance in the stack frame. There is one focus space per utterance. Focus spaces can be used during the genera- tion of pronouns and definite noun phrases. Al- though at present CommandTalk does not gen- erate pronouns (we choose to err on the side of verbosity, to avoid potential confusion due to misrecognitions), focus spaces could be used to make intelligent decisions about when to use a pronoun or a definite reference. In particular, while it might be dangerous to generate a pro- noun referring to a noun phrase that the user has used, it would be appropriate to use a pro- noun to refer to a noun phrase that the system has used. Focus spaces are also used during the inter- pretation of responses and corrections. In these cases the salience list reflects what is known now, not what was known at the time the ut- terance being corrected or clarified was made. The focus spaces reflect what was known and in focus at that earlier time; they track atten- tional state. For instance, imagine example 6 had instead been: Ex. 6b: U 4O S 41 U 42 Focusing A14 advance there. ® There is no A14. Which unit should advance to Checkpoint 1? Create CEV at 635 545 and name it A14. At the end of utterance 42 the system will reinterpret utterance 40, but the most recent location in the salience list is FQ 635 545 rather than Checkpoint 1. The system uses the focus space to determine the referent for "there" at the time utterance 40 was originally made. In conclusion, CommandTalk's dialogue man- ager uses a dialogue stack and trail, refer- ence mechanisms, and finite state machines to handle a wide range of different kinds of di- alogue, including form-filling dialogues, free- flowing mixed-initiative dialogues, and dia- logues involving multi-modality. 5 Related Work CommandTalk differs from other recent spoken language systems in that it is a command and control application. It provides a particularly 188 interesting environment in which to design spo- ken dialogue systems in that it supports dis- tributed stochastic simulations, in which one operator controls a certain collection of forces while other operators simultaneously control other allied and/or opposing forces, and unex- pected events can occur that require responses in real time. Other applications (Litman et al., 1998; Walker et al., 1998) have been in domains that were sufficiently limited (e.g., queries about train schedules, or reading email) that the sys- tem could presume much about the user's goals, and make significant contributions to task ini- tiative. However, the high number of possible commands available in CommandTalk, and the more abstract nature of the user's high-level goals (to carry out a simulation of a complex military engagement) preclude the system from taking significant task initiative in most cases. 
The system most closely related to Com- mandTalk in terms of dialogue use is TRIPS (Ferguson and Allen, 1998), although there are several important differences. In contrast to TRIPS, in CommandTalk gestures are fully in- corporated into the dialogue state. Also, Com- mandTalk provides the same language capabil- ities for user and system utterances. Unlike other simulation systems, such as QuickSet (Cohen et al., 1997), CommandTalk has extensive dialogue capabilities. In Quick- Set, the user is required to confirm each spoken utterance before it is processed by the system (McGee et al., 1998). Our earlier work on spoken dialogue in the air travel planning domain (Bratt et al., 1995) (and related systems) interpreted speaker utterances in context, but did not support structured dia- logues. The technique of using dialogue context to control the speech recognition state is similar to one used in (Andry, 1992). 6 Future Work We have discussed some aspects of Com- mandTalk that make it especially suited to han- dle different kinds of interactions. We have looked at the use of a dialogue stack, salience information, and focus spaces to assist inter- pretation and generation. We have seen that structured dialogues can be represented by com- posing finite-state models. We have briefly dis- cussed the advantages of using the same gram- mar for all linguistic aspects of the system. It is our belief that most of the items discussed could easily be transferred to a different domain. The most significant difficulty with this work is that it has been impossible to perform a for- mal evaluation of the system. This is due to the difficulty of collecting data in this domain, which requires speakers who are both knowl- edgeable about the domain and familiar with ModSAF. CommandTalk has been used in sim- ulations of real military exercises, but those ex- ercises have always taken place in classified en- vironments where data collection is not permit- ted. To facilitate such an evaluation, we are cur- rently porting the CommandTalk dialogue man- ager to the domain of air travel planning. There is a large body of existing data in that domain (MADCOW, 1992), and speakers familiar with the domain are easily available. The internal representation of actions in CommandTalk is derived from ModSAF. We would like to port that to a domain-independent representation such as frames or explicit repre- sentations of plans. Finally, there are interesting options regard- ing the finite state model. We are investigating other representations for the semantic contents of a discourse segment, such as frames or active templates. 7 Acknowledgments We would like to thank Andrew Kehler, David Israel, Jerry Hobbs, and Sharon Goldwater for comments on an earlier version of this paper, and we have benefited from the very helpful comments from several anonymous reviewers. References F. Andry. 1992. Static and Dynamic Predic- tions: A Method to Improve Speech Under- standing in Cooperative Dialogues. In Pro- ceedings of the International Conference on Spoken Language Processing, Banff, Canada. H. Bratt, J.Dowding, and K. Hunicke-Smith. 1995. The SRI Telephone ATIS System. In Proceedings of the Spoken Language Sys- terns Technology Workshop, pages 218-220, Austin, Texas. A. Ceranowicz. 1994. Modular Semi- Automated Forces. In J.D. Tew et al., 189 editor, Proceedings of the Winter Simulation Conference, pages 755-761. J. Chu-Carroll and M. Brown. 1997. Tracking Initiative in Collaborative Dialogue Interac- tions. 
In Proceedings of the Thirty-Fifth An- nual Meeting of the A CL and 8th Conference of the European Chapter of the ACL, Madrid, Spain. P. Cohen, M. Johnston, D. McGee, S. Oviatt, J. Pittman, I. Smith, L. Chen, and J. Clow. 1997. QuickSet: Multimodal Interaction for Distributed Applications. In Proceedings of the Fifth Annual International Multimodal Conference, Seattle, WA. J. Dowding, J. Gawron, D. Appelt, L. Cherny, R. Moore, and D. Moran. 1993. Gemini: A Natural Language System for Spoken Lan- guage Understanding. In Proceedings of the Thirty-First Annual Meeting of the ACL, Columbus, OH. Association for Computa- tional Linguistics. J. Dowding, R. Moore, F. Andry, and D. Moran. 1994. Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser. In Proceed- ings of the Thirty-Second Annual Meeting of the A CL, Las Cruces, New Mexico. Associa- tion for Computational Linguistics. G. Ferguson and J. Allen. 1998. TRIPS: An Intelligent Integrated Problem-Solving Assis- tant. In Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI- 98), Madison, WI. B. Grosz and C. Sidner. 1986. Attention, Inten- tions, and the Structure of Discourse. Com- putational Linguistics, 12(3):175-204. D. Litman, S. Pan, and M. Walker. 1998. Eval- uating Response Strategies in a Web-Based Spoken Dialogue Agent. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 780- 786, Montreal, Canada. MADCOW. 1992. Multi-Site Data Collection for a Spoken Language Corpus. In Proceed- ings of the DARPA Speech and Natural Lan- guage Workshop, pages 200-203, Harriman, New York. D. Martin, A. Cheyer, and D. Moran. 1998. Building Distributed Software Systems with the Open Agent Architecture. In Proceed- ings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Blackpool, Lan- cashire, UK. The Practical Application Com- pany Ltd. D. McGee, P. Cohen, and S. Oviatt. 1998. Con- firmation in Multimodal Systems. In Proceed- ings of the 38th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 823-829, Montreal, Canada. R. Moore, J. Dowding, H. Bratt, J. Gawron, Y. Gorfu, and A. Cheyer. 1997. Com- mandTalk: A Spoken-Language Interface for Battlefield Simulations. In Proceedings of the Fifth Conference on Applied Natural Lan- guage Processing, pages 1-7, Washington, DC. Association for Computational Linguis- tics. R. Moore. 1999. Using Natural Language Knowledge Sources in Speech Recognition. In Keith Ponting, editor, Speech Pattern Pro- cessing. Springer-Verlag. S. M. Shieber, G. van Noord, R. Moore, and F. Pereira. 1990. A Semantic Head- Driven Generation Algorithm for Unification- Based Formalisms. Computational Linguis- tics, 16(1), March. M. Walker, J. Fromer, and S. Narayanan. 1998. Learning Optimal Dialogue Strategies: A Case Study of a Spoken Dialogue Agent for Email. In Proceedings of the 38th An- nual Meeting of the Association for Compu- tational Linguistics, pages 1345-1351, Mon- treal, Canada. 190
Construct Algebra: Analytical Dialog Management

Alicia Abella and Allen L. Gorin
AT&T Labs Research
180 Park Ave. Bldg 103
Florham Park, NJ 07932

Abstract

In this paper we describe a systematic approach for creating a dialog management system based on a Construct Algebra, a collection of relations and operations on a task representation. These relations and operations are analytical components for building higher level abstractions called dialog motivators. The dialog manager, consisting of a collection of dialog motivators, is entirely built using the Construct Algebra.

1 INTRODUCTION

The dialog manager described in this paper implements a novel approach to the problem of dialog management. There are three major contributions: the task knowledge representation, a Construct Algebra and a collection of dialog motivators. The task knowledge representation exploits object-oriented paradigms. The dialog motivators provide the dialog manager with the dialog strategies that govern its behavior. The Construct Algebra provides the building blocks needed to create new dialog motivators and analyze them.

The first main component of this dialog manager is the task knowledge representation. The task knowledge is encoded in objects. These objects form an inheritance hierarchy that defines the relationships that exist among these objects. The dialog manager exploits this inheritance hierarchy in determining what queries to pose to the user. No explicit states and transitions need to be defined using this framework (Bennacef et al., 1996; Meng and et. al., 1996; Sadek et al., 1996). A change to the dialog does not require a change to the dialog manager, but more simply, a change to the inheritance hierarchy.

The second main component of this dialog manager is the collection of dialog motivators. The dialog motivators determine what actions need to be taken (e.g. ask a confirmation question). The dialog motivators are founded on a theoretical framework called a Construct Algebra. The Construct Algebra allows a designer to add new motivators in a principled way. Creating a new application requires defining the inheritance hierarchy and perhaps additional dialog motivators not encompassed in the existing collection.

This dialog manager has been used for two applications. The first is a spoken dialog system that enables a user to respond to the open-ended prompt How may I help you? (HMIHY) (Gorin et al., 1997). The system recognizes the words the customer has said (Riccardi and Bangalore, 1998) and extracts the meaning of these words (Wright et al., 1998) to determine what service they want, conducting a dialog (Abella and Gorin, 1997; Abella et al., 1996) to effectively engage the customer in a conversation that will result in providing the service they requested. The second application is Voice Post Query (VPQ) (Buntschuh et al., 1998), which provides spoken access to the information in a large personnel database (> 120,000 entries). A user can ask for employee information such as phone number, fax number, work location, or ask to call an employee. These applications are significantly different but they both use the same dialog manager.

2 Task Representation

Information about the task is defined using an object inheritance hierarchy. The inheritance hierarchy defines the relationships that exist amongst the task knowledge. Objects are defined to encode the hierarchy; a minimal sketch of one such hierarchy is given below.
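The sketch below is our own illustration of this style of task encoding, not AT&T's code; the class names are hypothetical renderings of the paper's HMIHY objects.

```python
class Construct:
    """Root object from which all task objects inherit the algebra's
    relations and operations (sketch; only the data shape is stubbed)."""
    def __init__(self, value=None, body=()):
        self.value = value        # object-specific value, possibly NULL
        self.body = list(body)    # sub-objects

class Billing(Construct): pass
class Collect(Billing): pass        # COLLECT is-a BILLING
class CallingCard(Billing): pass    # CALLING_CARD is-a BILLING
class DialForMe(Construct): pass

# The dialog manager can exploit the hierarchy without explicit dialog
# states: an input understood only as generic Billing (not one of its
# subclasses) signals that the billing method still needs to be asked for.
def needs_clarification(obj: Construct) -> bool:
    return type(obj) is Billing and obj.value is None
```

Changing the dialog then amounts to editing this hierarchy (adding or specializing classes) rather than rewiring state transitions, which is the reuse claim made above.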
This representation adheres to the principles of object-oriented design as described in (Booch, 1994). Each of the objects has three partitions. The first partition contains the name of the object, the second contains a list of variables with associated values that are specific to the object, and the third partition contains any methods associated with the object. For simplicity of illustration we will not include any of the methods. Each of the objects inherits its methods from a higher level object called the Construct. The Construct's methods are the relations and operations that will be described in section 4.

The result of the speech recognizer is sent to the spoken language understanding (SLU) module. The SLU module extracts the meaning of the user's utterance and produces a list of possible objects with associated confidence scores that is interpreted by the dialog manager. The dialog manager then uses the inheritance hierarchy and an algorithm fully described in (Abella and Gorin, 1997) to produce a set of semantically consistent inputs to be used by the dialog manager. (An understanding of this algorithm is not necessary for the understanding of the work described in this paper.) The input is represented as a boolean expression of constructs extracted from the utterance. This input is then manipulated by the dialog motivators to produce an appropriate action, which most often consists of playing a prompt to the user or generating a query to a database.

3 The Construct

A construct is the dialog manager's general knowledge representation vehicle. The task knowledge is encoded as a hierarchy of constructs. The construct itself is represented as a tree structure which allows for the building of a containment hierarchy. It consists of two parts, a head and a body. Figure 1 illustrates a construct example for HMIHY: a DIAL_FOR_ME head whose body contains a FORWARD_NUMBER construct (with value 555-1234) and a BILLING construct (with value NULL).

[Figure 1: A construct example for HMIHY]

The DIAL_FOR_ME construct is the head and it has two constructs for its body, FORWARD_NUMBER and BILLING. These two constructs represent the two pieces of information necessary to complete a call. If a user calls requesting to place a call, it is the DIAL_FOR_ME construct that is created, with the generic BILLING construct and the FORWARD_NUMBER construct with its value set to empty. The dialog manager will then ask for the forward number and for the type of billing method. In figure 1 the dialog manager has received a response to the forward number request.

4 Construct Algebra

The construct algebra defines a collection of elementary relations and operations on a set of constructs. These relations and operations are then used to build the larger processing units that we call the dialog motivators. The set of dialog motivators defines the application. In this section we formally define these relations and operations.

4.1 The Construct

Definition 1 (Head). A head is an ordered pair <name, value>, where name belongs to some set of predefined names, N, and value belongs to some set of predefined values, V. A value may be NULL (not assigned a value).

Definition 2 (Construct). A construct is defined recursively as an ordered pair <head, body> where body is a (possibly empty) set of constructs.

4.2 Relations

The Construct Algebra defines six relations in the set of constructs. In each of the definitions, c1 and c2 are constructs.
Note that the symbols ⊆ and ⊂, introduced here, should not be understood in their usual "subset" and "proper subset" interpretation but will be described in definitions 4 and 5.

Definition 3 (Equality). Two constructs are equal, denoted c1 = c2, when head(c1) = head(c2) and body(c1) = body(c2).

Definition 3 requires that the heads of c1 and c2 be equal. Recall that the head of a construct is an ordered pair <name, value>, which means that their names and values must be equal. A value may be empty (NULL) and by definition be equal to any other value. The equality of bodies means that a bijective mapping exists from the body of c1 into the body of c2 such that elements associated with this mapping are equal.

Definition 4 (Restriction). c1 is a restriction of c2, denoted c1 ⊆ c2, when

  head(c1) = head(c2) and
  (∃f : body(c1) → body(c2)) (f is 1 to 1 ∧ (∀b1 ∈ body(c1)) (b1 ⊆ f(b1)))

Intuitively, c1 can be obtained by "pruning" elements of c2. The second part of the definition, (∃f : ...), is what differentiates ⊆ from =. It is required that a mapping f between the bodies of c1 and c2 exist with the following properties:

• f is 1 to 1. In other words, different elements of the body of c1, call them b1, are associated with different elements of the body of c2, call them b2.

• The elements of the body of c1 are restrictions of the elements of the body of c2. In other words, b1 ⊆ b2, where b1 are elements from the body of c1 and b2 are elements from the body of c2.

Figure 2 illustrates an example.

[Figure 2: STREET and PHONE_NUMBER are "pruned" from a PERSON construct c2 to obtain c1.]

Definition 5 (Containment). c1 is contained in c2, denoted c1 ⊂ c2, when

  c1 ⊆ c2 or (∃b2 ∈ body(c2)) (c1 ⊂ b2)

We assume that c1 ⊂ c2 either if c1 is a restriction of c2 or if c1 is contained in any element of the body of c2. Figure 3 gives an example. The AMBIGUITY construct represents the fact that the system is not sure whether the user has requested a COLLECT call or a CALLING_CARD call. This would trigger a clarifying question from the dialog manager.

[Figure 3: c1 ⊂ c2, where c2 is an AMBIGUITY construct over COLLECT and CALLING_CARD.]

Definition 6 (Generalization). c2 is a generalization of c1, denoted c1 ↪ c2, when

  head(c1) ↪ head(c2) and
  (∃f : body(c2) → body(c1)) (f is 1 to 1 ∧ (∀b2 ∈ body(c2)) (f(b2) ↪ b2))

The generalization of heads means that the name of c2 is on the inheritance path of c1 and their values are equal. Intuitively, c2 is an ancestor of c1 or, in object-oriented terms, "c1 is-a c2". Note the similarity of this relation to ⊆. Figure 4 illustrates an example: BILLING is a generalization of CALLING_CARD, or in other words CALLING_CARD is-a BILLING.

[Figure 4: c1 ↪ c2, where c1 is a CALLING_CARD construct with CARD_NUMBER 8485417 and c2 is a BILLING construct.]

Definition 7 (Symmetric Generalization). c1 is a symmetric generalization of c2, denoted c1 ~ c2, when

  c1 ↪ c2 or c2 ↪ c1

This definition simply removes the directionality of ↪. In other words, either "c1 is-a c2" or "c2 is-a c1".

Definition 8 (Containment Generalization). c1 is a containment generalization of c2, denoted c1 ⇝ c2, when some b2 is contained in c2 (b2 ⊂ c2) and c1 is a symmetric generalization of b2 (c1 ~ b2). An example is illustrated in figure 5: BILLING is contained in DIAL_FOR_ME and is a symmetric generalization of CALLING_CARD.

[Figure 5: c1 ⇝ c2, where c1 is a CALLING_CARD construct and c2 is a DIAL_FOR_ME construct containing BILLING.]
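A compact way to see how these relations compose is to execute them on toy constructs. The following sketch is our own rendering (the paper gives no code); it implements Definitions 4 and 5 over a minimal construct type `C`, whose name is ours.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class C:
    """A construct: a <name, value> head plus a body of sub-constructs."""
    name: str
    value: Optional[str] = None
    body: List["C"] = field(default_factory=list)

def heads_equal(c1: C, c2: C) -> bool:
    # NULL (None) is equal to any value by definition.
    return c1.name == c2.name and (
        c1.value is None or c2.value is None or c1.value == c2.value)

def restriction(c1: C, c2: C) -> bool:
    """Definition 4: c1 ⊆ c2 if c1 can be obtained by pruning c2."""
    if not heads_equal(c1, c2):
        return False
    def match(rest: List[C], targets: List[C]) -> bool:
        # Backtracking search for a 1-to-1 mapping f with b1 ⊆ f(b1).
        if not rest:
            return True
        b1, *more = rest
        return any(restriction(b1, b2) and
                   match(more, targets[:i] + targets[i + 1:])
                   for i, b2 in enumerate(targets))
    return match(c1.body, c2.body)

def containment(c1: C, c2: C) -> bool:
    """Definition 5: c1 ⊂ c2 if c1 ⊆ c2 or c1 is contained in some
    element of c2's body."""
    return restriction(c1, c2) or any(containment(c1, b2) for b2 in c2.body)
```

For instance, `containment(C("BILLING"), dial_for_me)` would hold whenever a BILLING construct sits anywhere in a DIAL_FOR_ME tree, which is exactly the ingredient Definition 8 builds on.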
4.3 Operations

The Construct Algebra consists of two operations, union (∪) and projection (\).

Definition 9 (Union, ∪). We will define this operation in several steps. Each step is a progression towards a more general definition.

Definition 9.1 (Union of values, v1 ∪ v2).

  v1 ∪ v2 = v1,          if v1 = v2 and v1 ≠ NULL
          = v2,          if v1 = v2 and v1 = NULL
          = not defined, if v1 ≠ v2

Recall that by definition, NULL is equal to any other value.

Definition 9.2 (Union of heads). We define head(c1) ∪ head(c2) only in the case c1 ↪ c2, which is all that is needed for a definition of ∪.

  head(c1) ∪ head(c2) = <name(c1), value(c1) ∪ value(c2)>

Definition 9.3 (c1 ∪ c2). If c1 ↪ c2,

  c1 ∪ c2 = ( head(c1) ∪ head(c2),
              {f(b2) ∪ b2 | b2 ∈ body(c2)} ∪
              {b1 | b1 ∈ body(c1) ∧ (∀b2 ∈ body(c2)) (b1 ≠ f(b2))} )

In this definition the head of the resulting construct is the union of the heads of the operands. The body of the resulting construct consists of two parts. The first part is a set of unions (denoted f(b2) ∪ b2 in the definition above) where b2 spans the body of the second operand c2 and f is a mapping from Definition 6. Recall that the mapping f associates elements of body(c1) with elements of body(c2) such that f(b2) ↪ b2 for b2 ∈ body(c2), so the union f(b2) ∪ b2 is (recursively) defined in Definition 9.3. The second part of the body of the resulting construct consists of those elements b1 of body(c1) that no element from body(c2) maps into through the mapping f. In other words, the second part of the body consists of those elements "left behind" in body(c1) after the mapping f. Figure 6 illustrates an example. The union operation results in a construct with the head CALLING_CARD and a body that contains both CARD_NUMBER and EXPIRATION. The CARD_NUMBER constructs from c1 and c2 can be combined because the value of CARD_NUMBER from c1 is NULL. The construct EXPIRATION is added because it does not exist on the body of c2.

[Figure 6: c1 ∪ c2 if c1 ↪ c2, where c1 is CALLING_CARD (CARD_NUMBER: NULL, EXPIRATION: 2/99), c2 is CALLING_CARD (CARD_NUMBER: 1239834), and the result is CALLING_CARD (CARD_NUMBER: 1239834, EXPIRATION: 2/99).]

Definition 9.4 (c1 ∪ c2). If c1 ~ c2,

  c1 ∪ c2 = c1 ∪ c2 (as in Definition 9.3), if c1 ↪ c2
          = c2 ∪ c1,                        if c2 ↪ c1

Definition 9.5 (c1 ∪ c2). If c1 ⇝ c2,

  c1 ∪ c2 = c1 ∪ c2 (as in Definition 9.4),                   if c1 ~ c2
          = ( head(c2), {c1 ∪ b2 | b2 ∈ body(c2) ∧ c1 ~ b2} ∪
                        {b2 | b2 ∈ body(c2) ∧ not c1 ~ b2} ), otherwise

Figure 7 illustrates this union. The head of the resulting construct is the head of c2, which is DIAL_FOR_ME. The resulting construct no longer has BILLING but rather CALLING_CARD, since BILLING is a generalization of CALLING_CARD. In addition the resulting construct contains the construct FORWARD_NUMBER because it remains from DIAL_FOR_ME.

[Figure 7: c1 ∪ c2 if c1 ⇝ c2, where c1 is CALLING_CARD (with CARD_NUMBER and EXPIRATION) and c2 is DIAL_FOR_ME (with FORWARD_NUMBER and BILLING).]

Definition 9.6 (c1 ∪ c2). In the general case,

  c1 ∪ c2 = c1 ∪ c2 (as in Definition 9.5), if c1 ⇝ c2
          = c2 ∪ c1,                        if c2 ⇝ c1
          = ((REP, NULL), {c1, c2}),        otherwise

In this definition REP is a construct used to represent the union of those constructs that do not satisfy any of the aforementioned conditions. By definition REP has a value of NULL and the body consists of the constructs c1 and c2.

Definition 10 (Projection, \).

  c1 \ c2 = ((AMBIGUITY, NULL), {b1 ∪ c2 | b1 ⊂ c1 ∧ b1 ~ c2}), if c2 ⇝ c1
          = c1,                                                  otherwise

Figure 8 illustrates an example of an ambiguous construct and the result of the projection operation. The construct is AMBIGUITY because all the elements of its body have the value of 6151 for DEPT. In this example, c2 contains the construct LAST_NAME with the value of Smith. There are 2 constructs on the body of c1 that are in the relation b2 ⊂ c1, in other words have the value Smith for LAST_NAME. Therefore the result is an AMBIGUITY construct with two elements on its body, both with the LAST_NAME value of Smith.

[Figure 8: Projection operation example.]
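The NULL-filling behavior of Definition 9.1 and the body merging of Definition 9.3 can be sketched as follows. This reuses the construct type `C` from the previous sketch, is ours rather than the authors' implementation, and simplifies the mapping f to a match on construct names (the paper's f, from Definition 6, also respects the inheritance hierarchy).

```python
class UndefinedUnion(Exception):
    """Raised when two non-NULL values conflict (Definition 9.1)."""

def union_values(v1, v2):
    # NULL (None) is equal to any value, so a NULL slot is simply
    # filled by the other operand's value.
    if v1 is None:
        return v2
    if v2 is None or v1 == v2:
        return v1
    raise UndefinedUnion((v1, v2))

def union_same_name(c1: C, c2: C) -> C:
    """Simplified Definition 9.3: merge matched body elements, keep the
    leftovers of body(c1), and append the unmatched elements of body(c2)."""
    remaining = {b2.name: b2 for b2 in c2.body}   # assumes unique names
    body = []
    for b1 in c1.body:
        if b1.name in remaining:
            body.append(union_same_name(b1, remaining.pop(b1.name)))
        else:
            body.append(b1)                        # "left behind" in body(c1)
    body.extend(remaining.values())                # unmatched in body(c2)
    return C(c1.name, union_values(c1.value, c2.value), body)
```

On the Figure 6 example, `union_same_name(C("CALLING_CARD", body=[C("CARD_NUMBER"), C("EXPIRATION", "2/99")]), C("CALLING_CARD", body=[C("CARD_NUMBER", "1239834")]))` yields a CALLING_CARD carrying both CARD_NUMBER 1239834 and EXPIRATION 2/99, as the text describes.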
5 Dialog Motivators

A dialog motivator determines what action the dialog manager needs to take in conducting its dialog with a user. The dialog manager for HMIHY currently consists of 5 dialog motivators. They are disambiguation, confirmation, error handling (recovery from misrecognition or misunderstanding, and silence), missing information and context switching. VPQ uses two additional motivators; they are continuation and database querying.

The disambiguation motivator determines when there is ambiguous semantic information, like conflicting billing methods. Confirmation is used when the SLU returns a result with low confidence. Error handling takes on three forms. There is error recovery when the speech recognizer has likely misrecognized what the user has said (low confidence scores associated with the recognition results), when the user falls silent, and when the user says something the SLU does not expect or does not handle. Missing information determines what information to ask about in order to complete a transaction. Context switching is the ability of the system to realize when the user has changed his/her mind or realizes that it has misunderstood, and allows the user to correct it. The continuation motivator determines when it is valid to offer the user the choice to query the system for additional information. Database querying decides when the system has acquired enough information to query a database for the requested information.

5.1 Disambiguation Motivator

Figure 9 illustrates how the disambiguation motivator is created using the Construct Algebra:

  cQ: construct used for disambiguation, cQ ⊂ c
  cA: user response

  Dk(c, C_IDK) = c,                    if c ⊈ AMBIGUITY
               = Dk+1(c, C_IDK),       if cA ↪ ERROR
               = Dk+1(c, C_IDK ∪ cQ),  if cA = IDK
               = c \ cA,               if cA ⇝ c
               = cA,                   otherwise

[Figure 9: Disambiguation Motivator]

The disambiguation motivator is called with the current construct c and a set of constructs called C_IDK that represents information that the user does not know (IDK — "I Don't Know"); in other words, the user explicitly responds to a prompt with the phrase "I don't know" or its equivalent. (The phrases chosen are based on trials.)

  Input: A sequence of semantic inputs from the SLU module in response to a prompt
  Output: Complete construct c (no need for further dialog)
  Repeat
    For all dialog motivators DMi
      if DMi applies to c
        Perform action(DMi, c)
        Apply Dialog Manager to get cA
        Using Construct Algebra, combine c and cA into c
  Until no motivator applies
  Return c

[Figure 10: Dialog Manager algorithm]

The motivator runs through several checks on the construct c. The first is to check to see if in fact the motivator applies, or in other words if c is a restriction of AMBIGUITY. If it is not, then the motivator simply returns c without changing it. The second step is to check to see if the ERROR construct is a generalization of cA, where cA represents the user's response. The ERROR construct represents an error condition like silence or misrecognition. If it is, then it goes on to the next motivator because this motivator does not apply to error conditions. If cA equals the IDK construct, then this means that the user did not know the answer to our query and we add the construct used for disambiguation, cQ, to the set of constructs C_IDK.
If however, CA is in the containment generalization rela- tion with c then the projection operation is applied and the result is returned. If CA is not in this relation then this indicates a con- text switch on the part of the user and the disambiguation motivator returns CA as the result. All other motivators are constructed in a similar fashion. An application can use these motivators or create new ones that are ap- plication specific using the operations and relations of the Construct Algebra. 197 System" VPQ. What can I do for you? User: I need the phone number for Klein. System- I have more than 20 listings for Klein. Can you please say the first name? User: William. System" I have 2 listings for William Klein. Can you tell me the person's work location? User: Bedminster System" The phone number for William Klein is 973 345 5432. Would you like more information? User: No. System" Thank you for using VPQ. Figure 11: A sample dialog for VPQ 6 Dialog Manager The input to the dialog manager is a collec- tion of semantic input generated by the SLU. Figure 10 illustrates the algorithm used by the dialog manager. The output is the com- plete construct c which no longer requires further dialog. The algorithm loops through all the dialog motivators determining which one needs to be applied to c. If it finds a mo- tivator that applies then it will perform the necessary action (e.g. play a prompt or do a database lookup). The algorithm repeats itself to obtain CA (the construct answer). In other words, the construct that results from the action is subject to the dialog motiva- tors starting from the beginning. Once CA has been found to be complete it is combined with c using Construct Algebra to produce a new construct. This new construct c also goes through the loop of dialog motivators and the procedure continues until no moti- vator applies and the algorithm returns the final construct c. 6.1 Example To illustrate how the dialog manager func- tions we will use an example from VPQ. Figure 11 illustrates a sample dialog with the system. The sequence of motivators for VPQ is error handling, confirmation, miss- ing information, database querying and dis- ambiguation. The construct that is created as a result of the user's initial utterance is shown in figure 12. All the information needed to do a database lookup is found in the user's utterance, namely the piece of in- formation the user is seeking and the name of the person. Therefore the first motivator that applies is database querying. This moti- vator creates the database query and based on the result creates the construct CA. The construct CA is then searched by each of the motivators beginning again with error han- dling. The motivator that applies to CA is the disambiguation motivator because there are more than 20 people in the database whose last name is pronounced Klein, in- cluding Klein, Cline and Kline. The dis- ambiguation motivator searches through CA to determine, based on preset parameters, which piece of information is most useful for the disambiguation process as well as which piece of information the user is likely to know, which is selected when the inheritance hierarchy is designed. For VPQ this includes asking about the first name and work loca- tion. In this example the dialog manager searches the database entries and determines that the most discriminating piece of infor- mation is the first name. 
Once the user re- sponds with the first name there are still 2 possible candidates and it asks for the next piece of information which is work location. Had the user not known the work location the system would have read out the phone number of both people since the total num- ber of matches is less than 3. If the num- ber of entries after disambiguation remains greater than 3 the system refers the user to a live operator during work hours. 7 Conclusion In this paper we have described a novel ap- proach to dialog management. The task knowledge representation defined intuitively and without the need to define call flows in the traditional finite-state approach. The Construct Algebra serves as the building blocks from which the dialog motivators that drive the dialog system are comprised. Building a new application will only require the designer to define the objects (e.g. COL- 198 Figure 12: Sample construct for VPQ. LECT, CREDIT etc.) and the inheritance hierarchy. The Construct Algebra serves as an analytical tool that allows the dialog mo- tivators to be formally defined and analyzed and provides an abstraction hierarchy that hides the low-level details of the implemen- tation and pieces together the dialog motiva- tors. This same dialog manager is currently being used by two very different applications (HMIHY and VPQ). A.L. Gorin, G. Riccardi, and J.H. Wright. 1997. How May I Help You? Speech Com- munciation. Helen Meng and Senis Busayapongchai et. al. 1996. Wheels: A conversational sys- tem in the automobile classifieds domain. International Conference on Spoken Lan- guage Processing. G. Riccardi and S. Bangalore. 1998. Au- tomatic acquisision of phrase grammars for stochastic language modeling. In Proc. ACL Workshop on Very Large Corpora, Montreal. M.D. Sadek, A. Ferrieux, A. Cozannet, P. Bretier, F. Panaget, and J. Simonin. 1996. Effective Human-Computer Co- operative Spoken Dialogue: the AGS Demonstrator. International Conference on Spoken Language Processing. Jerry Wright, Allen L. Gorin, and Alicia Abella. 1998. Spoken language under- standing within dialogs using a graphical model of task structure. In Proc. ICSLP Sydney. References / Alicia Abella and Allen L. Gorin. 1997. Generating semantically consistent inputs to a dialog manager. In Proc. EuroSpeech Rhodes, Greece. A. Abella, M. K. Brown, and B. Buntschuh. 1996. Development principles for dialog- based interfaces. European Conference on Artificial Intelligence. S. Bennacef, L. Devillers, S. Rosset, and L. Lamel. 1996. Dialog in the rail- tel telephone-based system. International Conference on Spoken Language Process- ing. Grady Booch. 1994. Object-Oriented Anal- ysis and Design with Applications. Ben- jamin Cummings. B. Buntschuh, C. Kamm, G. DiFabbrizio, A. Abella, M. Mohri, S. Narayan, I. Zelj- vokic, R.D. Sharp, J. Wright, S. Marcus, J. Shaffer, R. Duncan, and J.G. Wilpon. 1998. VPQ: A spoken language interface to large scale directory information. In Proc. ICSLP Sydney. 199
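Before leaving this system, the control loop of Figure 10 is worth pinning down in code. The sketch below is our own paraphrase under invented interfaces (`Motivator`, `get_answer`, `combine` are assumed names), not the authors' implementation.

```python
from typing import Callable, List, Optional

class Motivator:
    """One dialog motivator: a guard plus an action (illustrative stub)."""
    def __init__(self, applies: Callable, act: Callable):
        self.applies = applies
        self.act = act

def dialog_manager(c, motivators: List[Motivator],
                   get_answer: Callable, combine: Callable):
    """Sketch of the loop in Figure 10. `get_answer` stands for applying
    the dialog manager recursively to the user's response; `combine` is
    the Construct Algebra union."""
    while True:
        m: Optional[Motivator] = next(
            (m for m in motivators if m.applies(c)), None)
        if m is None:
            return c           # no motivator applies: construct complete
        m.act(c)               # e.g., play a prompt or query the database
        c_a = get_answer()     # user's (already understood) response
        c = combine(c, c_a)    # combine c and cA into a new c
```

The design point this makes concrete is that the loop itself is domain-independent: HMIHY and VPQ differ only in the motivator list and the inheritance hierarchy handed to it.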
Understanding Unsegmented User Utterances in Real-Time Spoken Dialogue Systems

Mikio Nakano, Noboru Miyazaki, Jun-ichi Hirasawa, Kohji Dohsaka, Takeshi Kawabata*
NTT Laboratories
3-1 Morinosato-Wakamiya, Atsugi 243-0198, Japan
nakano@atom.brl.ntt.co.jp, nmiya@atom.brl.ntt.co.jp, jun@idea.brl.ntt.co.jp, dohsaka@atom.brl.ntt.co.jp, kaw@nttspch.hil.ntt.co.jp

(*Current address: NTT Laboratories, 1-1 Hikarino-oka, Yokosuka 239-0847, Japan)

Abstract

This paper proposes a method for incrementally understanding user utterances whose semantic boundaries are not known and responding in real time even before boundaries are determined. It is an integrated parsing and discourse processing method that updates the partial result of understanding word by word, enabling responses based on the partial result. This method incrementally finds plausible sequences of utterances that play crucial roles in the task execution of dialogues, and utilizes beam search to deal with the ambiguity of boundaries as well as syntactic and semantic ambiguities. The results of a preliminary experiment demonstrate that this method understands user utterances better than an understanding method that assumes pauses to be semantic boundaries.

1 Introduction

Building a real-time, interactive spoken dialogue system has long been a dream of researchers, and the recent progress in hardware technology and speech and language processing technologies is making this dream a reality. It is still hard, however, for computers to understand unrestricted human utterances and respond appropriately to them. Considering the current level of speech recognition technology, system-initiative dialogue systems, which prohibit users from speaking unrestrictedly, are preferred (Walker et al., 1998). Nevertheless, we are still pursuing techniques for understanding unrestricted user utterances because, if the accuracy of understanding can be improved, systems that allow users to speak freely could be developed and these would be more useful than systems that do not.

Most previous spoken dialogue systems (e.g. systems by Allen et al. (1996), Zue et al. (1994) and Peckham (1993)) assume that the user makes one utterance unit in each speech interval, unless the push-to-talk method is used. Here, by utterance unit we mean a phrase from which a speech act representation is derived; it corresponds to a sentence in written language. We also use speech act in this paper to mean a command that updates the hearer's belief state about the speaker's intention and the context of the dialogue. In this paper, a system using this assumption is called an interval-based system.

The above assumption no longer holds when no restrictions are placed on the way the user speaks. This is because utterance boundaries (i.e., semantic boundaries) do not always correspond to pauses and techniques based on other acoustic information are not perfect. Utterance boundaries thus cannot be identified prior to parsing, and so the timing of determining parsing results to update the belief state is unclear. On the other hand, responding to a user utterance in real time requires understanding it and updating the belief state in real time; thus, it is impossible to wait for subsequent inputs to determine boundaries.

Abandoning full parsing and adopting keyword-based or fragment-based understanding could prevent this problem. This would, however, sacrifice the accuracy of understanding because phrases across the pauses could not be syntactically analyzed. There is, therefore, a need for a method based on full parsing that enables real-time understanding of user utterances without boundary information. This paper presents incremental significant-utterance-sequence search (ISSS), a method that
There is, therefore, a need for a method based on full parsing that enables real-time un- derstanding of user utterances without boundary information. This paper presents incremental significant- utterance-sequence search (ISSS), a method that 200 enables incremental understanding of user utter- ances word by word by finding plausible sequences of utterances that play crucial roles in the task ex- ecution of dialogues. The method utilizes beam search to deal with the ambiguity of boundaries as well as syntactic and semantic ambiguities. Since it outputs the partial result of understanding that is the most plausible whenever a word hypothesis is in- putted, the response generation module can produce responses at any appropriate time. A comparison of an experimental spoken dialogue system using ISSS with an interval-based system shows that the method is effective. 2 Problem A dilemma is addressed in this paper. First, it is diffi- cult to identify utterance boundaries in spontaneous speech in real time using only pauses. Observation of human-human dialogues reveals that humans of- ten put pauses in utterances and sometimes do not put pauses at utterance boundaries. The following human utterance shows where pauses might appear in an utterance. I'd like to make a reservation for a con- ference room (pause) for, uh (pause) this afternoon (pause) at about (pause) say (pause) 2 or 3 o'clock (pause) for (pause) 15 people As far as Japanese is concerned, several studies have pointed out that speech intervals in dialogues are not always well-formed substrings (Seligman et al., 1997; Takezawa and Morimoto, 1997). On the other hand, since parsing results can- not be obtained unless the end of the utterance is identified, making real-time responses is impossi- ble without boundary information. For example, consider the utterance "I'd like to book Meeting Room 1 on Wednesday". It is expected that the system should infer the user wants to reserve the room on 'Wednesday this week' if this utterance was made on Monday. In real conversations, however, there is no guarantee that 'Wednesday' is the final word of the utterance. It might be followed by the phrase 'next week', in which case the system made a mistake in inferring the user's intention and must backtrack and re-understand. Thus, it is not possible to determine the interpretation unless the utterance boundary is identified. This problem is more serious in head-final languages such as Japanese because function words that represent negation come after content words. Since there is no explicit clue in- dicating an utterance boundary in unrestricted user utterances, the system cannot make an interpretation and thus cannot respond appropriately. Waiting for a long pause enables an interpretation, but prevents response in real time. We therefore need a way to reconcile real-time understanding and analysis without boundary clues. 3 Previous Work Several techniques have been proposed to segment user utterances prior to parsing. They use into- nation (Wang and Hirschberg, 1992; Traum and Heeman, 1997; Heeman and Allen, 1997) and prob- abilistic language models (Stolcke et al., 1998; Ramaswamy and Kleindienst, 1998; Cettolo and Falavigna, 1998). Since these methods are not perfect, the resulting segments do not always cor- respond to utterances and might not be parsable because of speech recognition errors. 
In addition, since the algorithms of the probabilistic methods are not designed to work in an incremental way, they cannot be used in real-time analysis in a straightfor- ward way. Some methods use keyword detection (Rose, 1995; Hatazaki et al., 1994; Seto et al., 1994) and key-phrase detection (Aust et al., 1995; Kawahara et al., 1996) to understand speech mainly because the speech recognition score is not high enough. The lack of the full use of syntax in these ap- proaches, however, means user utterances might be misunderstood even if the speech recognition gave the correct answer. Zechner and Waibel (1998) and Worm (1998) proposed understanding utterances by combining partial parses. Their methods, however, cannot syntactically analyze phrases across pauses since they use speech intervals as input units. Al- though Lavie et al. (1997) proposed a segmentation method that combines segmentation prior to parsing and segmentation during parsing, but it suffers from the same problem. In the parser proposed by Core and Schubert (1997), utterances interrupted by the other dialogue participant are analyzed based on recta-rules. It is unclear, however, how this parser can be incorpo- 201 rated into a real-time dialogue system; it seems that it cannot output analysis results without boundary clues. 4 Incremental Significant-Utterance- Sequence Search Method 4.1 Overview The above problem can be solved by incremen- tal understanding, which means obtaining the most plausible interpretation of user utterances every time a word hypothesis is inputted from the speech recog- nizer. For incremental understanding, we propose incremental significant-utterance-sequence search (ISSS), which is an integrated parsing and dis- course processing method. ISSS holds multiple possible belief states and updates those belief states when a word hypothesis is inputted. The response generation module produces responses based on the most likely belief state. The timing of responses is determined according to the content of the belief states and acoustic clues such as pauses. In this paper, to simplify the discussion, we as- sume the speech recognizer incrementally outputs elements of the recognized word sequence. Need- less to say, this is impossible because the most likely word sequence cannot be found in the midst of the recognition; only networks of word hypotheses can be outputted. Our method for incremental process- ing, however, can be easily generalized to deal with incremental network input, and our experimental system utilizes the generalized method. 4.2 Significant-Utterance Sequence A significant utterance (SU) in the user's speech is a phrase that plays a crucial role in performing the task in the dialogue. An SU may be a full sentence or a subsentential phrase such as a noun phrase or a verb phrase. Each SU has a speech act that can be considered a command to update the belief state. SU is defined as a syntactic category by the grammar for linguistic processing, which includes semantic inference rules. Any phrases that can change the belief state should be defined as SUs. Two kinds of SUs can be considered; domain-related ones that express the user's intention about the task of the dialogue and dialogue-related ones that express the user's attitude with respect to the progress of the dia- logue such as confirmation and denial. 
Considering a meeting room reservation system, examples of domain-related SUs are "I need to book Room 2 on Wednesday", "I need to book Room 2", and "Room 2" and dialogue-related ones are "yes", "no", and "Okay". User utterances are understood by finding a se- quence of SUs and updating the belief state based on the sequence. The utterances in the sequence do not overlap. In addition, they do not have to be adjacent to each other, which leads to robustness against speech recognition errors as in fragment- based understanding (Zechner and Waibel, 1998; Worm, 1998). The belief state can be computed at any point in time if a significant-utterance sequence for user utterances up to that point in time is given. The belief state holds not only the user's intention but also the history of system utterances, so that all discourse information is stored in it. Consider, for example, the following user speech in a meeting room reservation dialogue. I need to, uh, book Room 2, and it's on Wednesday. The most likely significant-utterance sequence con- sists of "I need to, uh, book Room 2" and "it's on Wednesday". From the speech act representation of these utterances, the system can infer the user wants to book Room 2 on Wednesday. 4.3 Finding Significant-Utterance Sequences SUs are identified in the process of understanding. Unlike ordinary parsers, the understanding mod- ule does not try to determine whether the whole input forms an SU or not, but instead determines where SUs are. Although this can be considered a kind of partial parsing technique (McDonald, 1992; Lavie, 1996; Abney, 1996), the SUs obtained by ISSS are not always subsentential phrases; they are sometimes full sentences. For one discourse, multiple significant-utterance sequences can be considered. "Wednesday next week" above illustrates this well. Let us assume that the parser finds two SUs, "Wednesday" and "Wednesday next week". Then three significant- utterance sequences are possible: one consisting of "Wednesday", one consisting of "Wednesday next 202 week", and one consisting of no SUs. The second sequence is obviously the most likely at this point, but it is not possible to choose only one sequence and discard the others in the midst of a dialogue. We therefore adopt beam search. Priorities are assigned to the possible sequences, and those with low priorities are neglected during the search. 4.4 ISSS Algorithm The ISSS algorithm is based on shift-reduce parsing. The basic data structure is context, which represents search information and is a triplet of the following data. stack: A push-down stack used in a shift- reduce parser. belief state: A set of the system's beliefs about the user's intention with re- spect to the task of the dialogue and dialogue history. priority: A number assigned to the con- text. Accordingly, the algorithm is as follows. (I) Create a context in which the stack and the belief state are empty and the priority is zero. (II) For each input word, perform the following process. 1. Obtain the lexical feature structure for the word and push it to the stacks of all existing contexts. 2. For each context, apply rules as in a shift-reduce parser. When a shift-reduce conflict or a reduce-reduce conflict occur, the context is duplicated and different operations are performed on them. When a reduce operation is performed, increase the priority of the context by the priority assigned to the rule used for the reduce operation. 3. 
For each context, if the top of the stack is an SU, empty the stack and update the belief state according to the content of the SU. Increase the priority by the square of the length (i.e., the number of words) of this SU. (I) SU [day: ?x] -~ NP [sort: day, sem: ?x] (priority: 1) (11) NP[sort: day] :~ NP [sort: day] NP [sort: week] (priority: 2) Figure 1: Rules used in the example. . Discard contexts with low priority so that the number of remaining contexts will be the beam width or less. Since this algorithm is based on beam search, it works in real time if Step (II) is completed quickly enough, which is the case in our experimental sys- tem. The priorities for contexts are determined using a general heuristics based on the length of SUs and the kind of rules used. Contexts with longer SUs are preferred. The reason we do not use the length of an SU, but its square instead, is that the system should avoid regarding an SU as consisting of several short SUs. Although this heuristics seems rather simple, we have found it works well in our experimental systems. Although some additional techniques, such as discarding redundant contexts and multiplying a weight w (w > 1) to the priority of each context after the Step 4, are effective, details are not discussed here for lack of space. 4.5 Response Generation The contexts created by the utterance understanding module can also be accessed by the response gener- ation module so that it can produce responses based on the belief state in the context with the highest priority at a point in time. We do not discuss the tim- ing of the responses here, but, generally speaking, a reasonable strategy is to respond when the user pauses. In Japanese dialogue systems, producing a backchannel is effective when the user's intention is not clear at that point in time, but determining the content of responses in a real-time spoken dialogue system is also beyond the scope of this paper. 4.6 A Simple Example Here we explain ISSS using a simple example. Consider again "Wednesday next week". To sim- plify the explanation, we assume the noun phrase 203 Inputs Wednesday next week time (la) (2a) priority:0 stack priority:0 no changes [ NP(Wednesday) J ''''~'~ (2b) priority: 1 belief state ( ) (2c) ~ priority:2 I I day:Wednesday "~ this week j/ (3a) priority:0 I NP(Wednesday) I NP(next week) ( ) (n) (3b) priority:2 I NP(next week) I ( " (day:Wednesday) ~ this week Figure 2: Execution of ISSS. (4a) priority:0 no changes (4b) priority:2 [ NP(WednesdaYnext week) ~ (4b) priority:2 no changes ( ) (1) (4c) priority:3 (4d) priority:7 I I I I (~ay:Wednesday next week ) (4e) priority:2 no changes 'next week' is one word. The speech recognizer incrementally sends to the understanding module the word hypotheses 'Wednesday' and 'next week'. The rules used in this example are shown in Figure 1. They are unification-based rules. Not all features and semantic constraints are shown. In this exam- ple, nouns and noun phrases are not distinguished. The ISSS execution is shown in Figure 2. When 'Wednesday' is inputted, its lexical feature structure is created and pushed to the stack. Since Rule (I) can be applied to this stack, (2b) in Figure 2 is created. The top of the stack in (2b) is an SU, thus (2c) is created, whose belief state contains the user's intention of meeting room reservation on Wednes- day this week. We assume that 'Wednesday' means Wednesday this week by default if this utterance was made on Monday, and this is described in the additional conditions in Rule (I). 
After 'next week' is inputted, NP is pushed to the stacks of all con- texts, resulting in (3a) and (3b). Then Rule (II) is applied to (3a), making (4b). Rule (I) can be applied to (4b), and then (4c) is created and is turned into (4d), which has the highest priority. Before 'next week' is inputted, the interpretation that the user wants to book a room on Wednesday this week has the highest priority, and then after that, the interpretation that the user wants to book a room on Wednesday next week has the highest Dialogue ) C s~,,~ Control ontext Utterance I Response Understanding (ISSS method) Generation Wor / hypotheses/ ~ i o n I peec "eco nition I I eoc o uction I l \ User utterance System utterance Figure 3: Architecture of the experimental systems. priority. Thus, by this method, the most plausible interpretation can be obtained in an incremental way. 5 Implementation Using ISSS, we have developed several experimen- tal Japanese spoken dialogue systems, including a meeting room reservation system. The architecture of the systems is shown in Fig- ure 3. The speech recognizer uses HMM-based continuous speech recognition directed by a regular 204 grammar (Noda et al., 1998). This grammar is weak enough to capture spontaneously spoken utterances, which sometimes include fillers and self-repairs, and allows each speech interval to be an arbitrary num- ber of arbitrary bunsetsu phrases.l The grammar contains less than one hundred words for each task; we reduced the vocabulary size so that the speech recognizer could output results in real time. The speech recognizer incrementally outputs word hy- potheses as soon as they are found in the best-scored path in the forward search (Hirasawa et al., 1998; G6rz et al., 1996). Since each word hypothesis is accompanied by the pointer to its preceding word, the understanding module can reconstruct word se- quences. The newest word hypothesis determines the word sequence that is acoustically most likely at a point in time. 2 The utterance understanding module works based on ISSS and uses a domain-dependent unification grammar with a context-free backbone that is based on bunsetsu phrases. This grammar is more re- strictive than the grammar for speech recognition, but covers phenomena peculiar to spoken language such as particle omission and self-repairs. A be- lief state is represented by a frame (Bobrow et al., 1977); thus, a speech act representation is a command for changing the slot value of a frame. Although a more sophisticated model would be re- quired for the system to engage in a complicated dialogue, frame representations are sufficient for our tasks. The response generation module is invoked when the user pauses, and plans responses based on the belief state of the context with the highest priority. The response strategy is similar to that of previous frame-based dialogue systems (Bobrow et al., 1977). The speech production module out- puts speech according to orders from the response generation module. Figure 4 shows the transcription of an example dialogue of a reservation system that was recorded in the experiment explained below. As an example of SUs across pauses, "gozen-jftji kara gozen-jaichiji made (from 10 a.m. to 11 a.m.)" in U5 and U7 IA bunsetsu phrase is a phrase that consists of one content word and a number (possibly zero) of function words. 2A method for utilizing word sequences other than the most likely one and integrating acoustic scores and ISSS priorities remains as future work. 
SI: donoy6na goy6ken de sh6ka (May I 5.69-7.19 help you?) U2: kaigishitsu no yoyaku o onegaishimasu 7.79-9.66 (I'd like to book a meeting room.) [hai s~desu gogoyoji made (That's right, to 4 p.m.)] $3: hal (uh-huh) 10.06-10.32 U4: e konshO no suiy6bi (Well, Wednesday 11.75-13.40 this week) [iie konsh~ no suiyObi (No, Wednesday this week)] $5: hal (uh-huh) 14.04-14.31 U5: gozen-jfiji kara (from 10 a.m.) [gozen-jftji kara (from 10 a.m.)] 15.13-16.30 $6: hal (uh-huh) 17.15-17.42 U7: gozen-jfiichiji made (to 11 a.m.) 18.00-19.46 [gozen-j~ichiji made (to 11 a.m. )] $8: hai (uh-huh) 19.83-20.09 U9: daisan- (three) 20.54-21.09 [daisan-kaigishitu (Meeting Room 3)] S10: hal (uh-huh) 21.92-22.19 U11: daisan-kaigishitu o onegaishimasu (I'd 21.52-23.59 like to book Meeting Room 3) [failure] S12: hal (uh-huh) 24.05-24.32 U13: yoyaku o onegaishimasu (Please book 25.26-26.52 it) [janiji (12 o 'clock)] S14: hai (uh-huh) 27.09-27.36 UI5: yoyaku shitekudasai (Please book it) 31.72-32.65 [yoyaku shitekudasai (Please book it)] S16:konsh0 no suiybbi gozen-j0ji kara 33.62-39.04 gozen-jOichiji made daisan-kaigi- shitu toyOkotode yoroshT-deshbka (Wednesday this week, from 10 a.m. to 11 a.m., meeting room 3, OK?) U17: hai (yes) 40.85--41.10 [hai (yes)] S18: kashikomarimashit& (All right) 41.95--43.00 Figure 4: Example dialogue. S means a system utterance and U a user utterance. Recognition results are enclosed in square brackets. The figures in the rightmost column are the start and end times (in seconds) of utterances. was recognized. Although the SU '~ianiji yoyaku shitekudasai (12 o'clock, please book it)" in U13 and U15 was syntactically recognized, the system could not interpret it well enough to change the frame because of grammar limitations. The reason why the user hesitated to utter U15 is that S14 was not what the user had expected. We conducted a preliminary experiment to in- vestigate how ISSS improves the performance of spoken dialogue systems. Two systems were com- 205 pared: one that uses ISSS (system A), and one that requires each speech interval to be an SU (an interval-based system, system B). In system B, when a speech interval was not an SU, the frame was not changed. The dialogue task was a meet- ing room reservation. Both systems used the same speech recognizer and the same grammar. There were ten subjects and each carried out a task on the two systems, resulting in twenty dialogues. The subjects were using the systems for the first time. They carried out one practice task with system B beforehand. This experiment was conducted in a computer terminal room where the machine noise was somewhat adverse to speech recognition. A meaningful discussion on the success rate of utter- ance segmentation is not possible because of the recognition errors due to the small coverage of the recognition grammar. 3 All subjects successfully completed the task with system A in an average of 42.5 seconds, and six subjects did so with system B in an average of 55.0 seconds. Four subjects could not complete the task in 90 seconds with system B. Five subjects completed the task with system A 1.4 to 2.2 times quicker than with system B and one subject com- pleted it with system B one second quicker than with system A. A statistical hypothesis test showed that times taken to carry out the task with system A are significantly shorter than those with system B (Z = 3.77, p < .0001). 4 The order in which the subjects used the systems had no significant effect. 
In addition, user impressions of system A were generally better than those of system B. Although there were some utterances that the system misun- derstood because of grammar limitations, excluding the data for the three subjects who had made those utterances did not change the statistical results. The reason it took longer to carry out the tasks 3About 50% of user speech intervals were not covered by the recognition grammar due to the small vocabulary size of the recognition grammar. For the remaining 50% of the intervals, the word error rate of recognition was about 20%. The word error rate is defined as 100 * ( substitutions + deletions + insertions ) / ( correct + substitutions + deletions ) (Zechner and Waibel, 1998). 4In this test, we used a kind of censored mean which is computed by taking the mean of the logarithms of the ratios of the times only for the subjects that completed the tasks with both systems. The population distribution was estimated by the bootstrap method (Cohen, 1995). with system B is that, compared to system A, the probability that it understood user utterances was much lower. This is because the recognition results of speech intervals do not always form one SU. About 67% of all recognition results of user speech intervals were SUs or fillers. 5 Needless to say, these results depend on the recog- nition grammar, the grammar for understanding, the response strategy and other factors. It has been suggested, however, that assuming each speech in- terval to be an utterance unit could reduce system performance and that ISSS is effective. 6 Concluding Remarks This paper proposed ISSS (incremental significant- utterance-sequence search), an integrated incremen- tal parsing and discourse processing method that en- ables both the understanding of unsegmented user utterances and real-time responses. This paper also reported an experimental result which suggested that ISSS is effective. It is also worthwhile men- tioning that using ISSS enables building spoken di- alogue systems with less effort because it is possible to define significant utterances without considering where pauses might appear. Acknowledgments We would like to thank Dr. Ken'ichiro Ishii, Dr. Norihiro Hagita, and Dr. Kiyoaki Aikawa, and the members of the Dialogue Understanding Research Group for their helpful comments. We used the speech recognition engine REX developed by NTI" Cyber Space Laboratories and would like to thank those who helped us use it. Thanks also go to the subjects of the experiment. Comments by the anonymous reviewers were of great help. References Steven Abney. 1996. Partial parsing via finite-state cas- cades. In Proceedings of the ESSLLI '96 Robust Parsing Workshop, pages 8-15. James E Allen, Bradford W. Miller, Eric K. Ringger, and Teresa Sikorski. 1996. A robust system for natural spoken dialogue. In Proceedings of ACL-96, pages 62-70. Harald Aust, Martin Oerder, Frank Seide, and Volker Steinbiss. 1995. The Philips automatic train timetable information system. Speech Communication, 17:249- 262. 5Note that 91% of user speech intervals were well-formed substrings (not necessary SUs). 206 Daniel G. Bobrow, Ronald M. Kaplan, Martin Kay, Donald A. Norman, Henry Thompson, and Terry Winograd. 1977. GUS, a frame driven dialog system. Artificial Intelligence, 8:155-173. Mauro Cettolo and Daniele Falavigna. 1998. Automatic detection of semantic boundaries based on acoustic and lexical knowledge. In Proceedings of ICSLP-98, pages 1551-1554. Paul R. Cohen. 1995. 
Mark G. Core and Lenhart K. Schubert. 1997. Handling speech repairs and other disruptions through parser metarules. In Working Notes of AAAI Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 23-29.
Günther Görz, Marcus Kesseler, Jörg Spilker, and Hans Weber. 1996. Research on architectures for integrated speech/language systems in Verbmobil. In Proceedings of COLING-96, pages 484-489.
Kaichiro Hatazaki, Farzad Ehsani, Jun Noguchi, and Takao Watanabe. 1994. Speech dialogue system based on simultaneous understanding. Speech Communication, 15:323-330.
Peter A. Heeman and James F. Allen. 1997. Intonational boundaries, speech repairs, and discourse markers: Modeling spoken dialog. In Proceedings of ACL/EACL-97.
Jun-ichi Hirasawa, Noboru Miyazaki, Mikio Nakano, and Takeshi Kawabata. 1998. Implementation of coordinative nodding behavior on spoken dialogue systems. In Proceedings of ICSLP-98, pages 2347-2350.
Tatsuya Kawahara, Chin-Hui Lee, and Biing-Hwang Juang. 1996. Key-phrase detection and verification for flexible speech understanding. In Proceedings of ICSLP-96, pages 861-864.
Alon Lavie, Donna Gates, Noah Coccaro, and Lori Levin. 1997. Input segmentation of spontaneous speech in JANUS: A speech-to-speech translation system. In Elisabeth Maier, Marion Mast, and Susann LuperFoy, editors, Dialogue Processing in Spoken Language Systems, pages 86-99. Springer-Verlag.
Alon Lavie. 1996. GLR*: A Robust Grammar-Focused Parser for Spontaneously Spoken Language. Ph.D. thesis, School of Computer Science, Carnegie Mellon University.
David D. McDonald. 1992. An efficient chart-based algorithm for partial-parsing of unrestricted texts. In Proceedings of the Third Conference on Applied Natural Language Processing, pages 193-200.
Yoshiaki Noda, Yoshikazu Yamaguchi, Tomokazu Yamada, Akihiro Imamura, Satoshi Takahashi, Tomoko Matsui, and Kiyoaki Aikawa. 1998. The development of speech recognition engine REX. In Proceedings of the 1998 IEICE General Conference D-14-9, page 220. (in Japanese).
Jeremy Peckham. 1993. A new generation of spoken language systems: Results and lessons from the SUNDIAL project. In Proceedings of Eurospeech-93, pages 33-40.
Ganesh N. Ramaswamy and Jan Kleindienst. 1998. Automatic identification of command boundaries in a conversational natural language user interface. In Proceedings of ICSLP-98, pages 401-404.
R. C. Rose. 1995. Keyword detection in conversational speech utterances using hidden Markov model based continuous speech recognition. Computer Speech and Language, 9:309-333.
Marc Seligman, Junko Hosaka, and Harald Singer. 1997. "Pause units" and analysis of spontaneous Japanese dialogues: Preliminary studies. In Elisabeth Maier, Marion Mast, and Susann LuperFoy, editors, Dialogue Processing in Spoken Language Systems, pages 100-112. Springer-Verlag.
Shigenobu Seto, Hiroshi Kanazawa, Hideaki Shinchi, and Yoichi Takebayashi. 1994. Spontaneous speech dialogue system TOSBURG-II and its evaluation. Speech Communication, 15:341-353.
Andreas Stolcke, Elizabeth Shriberg, Rebecca Bates, Mari Ostendorf, Dilek Hakkani, Madelaine Plauche, Gökhan Tür, and Yu Lu. 1998. Automatic detection of sentence boundaries and disfluencies based on recognized words. In Proceedings of ICSLP-98, pages 2247-2250.
Toshiyuki Takezawa and Tsuyoshi Morimoto. 1997. Dialogue speech recognition method using syntactic rules based on subtrees and preterminal bigrams. Systems and Computers in Japan, 28(5):22-32.
David R. Traum and Peter A. Heeman. 1997. Utterance units in spoken dialogue. In Elisabeth Maier, Marion Mast, and Susann LuperFoy, editors, Dialogue Processing in Spoken Language Systems, pages 125-140. Springer-Verlag.
Marilyn A. Walker, Jeanne C. Fromer, and Shrikanth Narayanan. 1998. Learning optimal dialogue strategies: A case study of a spoken dialogue agent for email. In Proceedings of COLING-ACL'98.
Michelle Q. Wang and Julia Hirschberg. 1992. Automatic classification of intonational phrase boundaries. Computer Speech and Language, 6:175-196.
Karsten L. Worm. 1998. A model for robust processing of spontaneous speech by integrating viable fragments. In Proceedings of COLING-ACL'98, pages 1403-1407.
Klaus Zechner and Alex Waibel. 1998. Using chunk based partial parsing of spontaneous speech in unrestricted domains for reducing word error rate in speech recognition. In Proceedings of COLING-ACL'98, pages 1453-1459.
Victor Zue, Stephanie Seneff, Joseph Polifroni, Michael Phillips, Christine Pao, David Goodine, David Goddeau, and James Glass. 1994. PEGASUS: A spoken dialogue interface for on-line air travel planning. Speech Communication, 15:331-340.
Should we Translate the Documents or the Queries in Cross-language Information Retrieval?

J. Scott McCarley
IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598
[email protected]

Abstract

Previous comparisons of document and query translation suffered from the differing quality of machine translation in the two opposite directions. We avoid this difficulty by training identical statistical translation models for both translation directions using the same training data. We investigate information retrieval between English and French, incorporating both translation directions into both document translation and query translation-based information retrieval, as well as into hybrid systems. We find that hybrids of document and query translation-based systems outperform query translation systems, even human-quality query translation systems.

1 Introduction

Should we translate the documents or the queries in cross-language information retrieval? The question is more subtle than the implied two alternatives. The need for translation has itself been questioned: although non-translation based methods of cross-language information retrieval (CLIR), such as cognate-matching (Buckley et al., 1998) and cross-language Latent Semantic Indexing (Dumais et al., 1997), have been developed, the most common approaches have involved coupling information retrieval (IR) with machine translation (MT). (For convenience, we refer to dictionary-lookup techniques and interlingua (Diekema et al., 1999) as "translation" even if these techniques make no attempt to produce coherent or sensibly-ordered language; this distinction is important in other areas, but a stream of words is adequate for IR.) Translating the documents into the query's language(s) and translating the queries into the documents' language(s) represent two extreme approaches to coupling MT and IR.

These two approaches are neither equivalent nor mutually exclusive. They are not equivalent because machine translation is not an invertible operation. Query translation and document translation become equivalent only if each word in one language is translated into a unique word in the other language. In fact, machine translation tends to be a many-to-one mapping, in the sense that finer shades of meaning are distinguishable in the original text than in the translated text. This effect is readily observed, for example, by machine translating the translated text back into the original language. These two approaches are not mutually exclusive, either. We find that a hybrid approach combining both directions of translation produces performance superior to either direction alone. Thus our answer to the question posed by the title is: both.

Several arguments suggest that document translation should be competitive or superior to query translation. First, MT is error-prone. Typical queries are short and may contain key words and phrases only once. When these are translated inappropriately, the IR engine has no chance to recover. Translating a long document offers the MT engine many more opportunities to translate key words and phrases. If only some of these are translated appropriately, the IR engine has at least a chance of matching these to query terms.
The second argument is that the tendency of MT engines to produce fewer distinct words than were contained in the original document (the output vocabulary is smaller than the input vocabulary) also indicates that machine translation should preferably be applied to the documents. Note the types of preprocessing in use by many monolingual IR engines: stemming (or morphological analysis) of documents and queries reduces the number of distinct words in the document index, while query expansion techniques increase the number of distinct words in the query.

Query translation is probably the most common approach to CLIR. Since MT is frequently computationally expensive and the document sets in IR are large, query translation requires fewer computer resources than document translation. Indeed, it has been asserted that document translation is simply impractical for large-scale retrieval problems (Carbonell et al., 1997), or that document translation will only become practical in the future as computer speeds improve. In fact, we have developed fast MT algorithms (McCarley and Roukos, 1998) expressly designed for translating large collections of documents and queries in IR. Additionally, we have used them successfully on the TREC CLIR task (Franz et al., 1999). Commercially available MT systems have also been used in large-scale document translation experiments (Oard and Hackett, 1998). Previously, large-scale attempts to compare query translation and document translation approaches to CLIR (Oard, 1998) have suggested that document translation is preferable, but the results have been difficult to interpret. Note that in order to compare query translation and document translation, two different translation systems must be involved. For example, if queries are in English and documents are in French, then the query translation IR system must incorporate English⇒French translation, whereas the document translation IR system must incorporate French⇒English. Since familiar commercial MT systems are "black box" systems, the quality of translation is not known a priori. The present work avoids this difficulty by using statistical machine translation systems for both directions that are trained on the same training data using identical procedures. Our study of document translation is the largest comparative study of document and query translation of which we are currently aware. We also investigate both query and document translation for both translation directions within a language pair.

We built and compared three information retrieval systems: one based on document translation, one based on query translation, and a hybrid system that used both translation directions. In fact, the "score" of a document in the hybrid system is simply the arithmetic mean of its scores in the query and document translation systems. We find that the hybrid system outperforms either one alone. Many different hybrid systems are possible because of a tradeoff between computer resources and translation quality. Given finite computer resources and a collection of documents much larger than the collection of queries, it might make sense to invest more computational resources into higher-quality query translation. We investigate this possibility in its limiting case: the quality of human translation exceeds that of MT; thus monolingual retrieval (queries and documents in the same language) represents the ultimate limit of query translation.
Surprisingly, we find that the hybrid system involving fast document translation and monolingual retrieval continues to outperform monolingual retrieval. We thus conclude that the hybrid system of query and document translation will outperform a pure query translation system no matter how high the quality of the query translation.

2 Translation Model

The algorithm for fast translation, which has been described previously in some detail (McCarley and Roukos, 1998) and used with considerable success in TREC (Franz et al., 1999), is a descendant of IBM Model 1 (Brown et al., 1993). Our model captures important features of more complex models, such as fertility (the number of French words output when a given English word is translated), but ignores complexities such as distortion parameters that are unimportant for IR. Very fast decoding is achieved by implementing it as a direct-channel model rather than as a source-channel model. The basic structure of the English⇒French model is the probability distribution

p(n_i, f_1 ... f_{n_i} | e_i, context(e_i))    (1)

of the fertility n_i of an English word e_i and a set of French words f_1 ... f_{n_i} associated with that English word, given its context. Here we regard the context of a word as the preceding and following non-stop words; our approach can easily be extended to other types of contextual features. This model is trained on approximately 5 million sentence pairs of Hansard (Canadian parliamentary) and UN proceedings which have been aligned on a sentence-by-sentence basis by the methods of (Brown et al., 1991), and then further aligned on a word-by-word basis by methods similar to (Brown et al., 1993). The French⇒English model can be described by simply interchanging English and French notation above. It is trained separately on the same training data, using identical procedures.
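A minimal sketch of how such a direct-channel model can be applied at decoding time is given below. The table layout, the back-off to an empty context, and the pass-through of unknown words are illustrative assumptions; the paper specifies only the distribution in equation (1) and the use of preceding and following non-stop words as context.

```python
from collections import defaultdict

# table[(word, prev, next)] -> list of (probability, tuple_of_target_words);
# each tuple's length is the fertility n_i of equation (1).
table = defaultdict(list)

def context(tokens, i, stopwords):
    """Preceding and following non-stop words, as in the paper."""
    prev = next_ = None
    for j in range(i - 1, -1, -1):
        if tokens[j] not in stopwords:
            prev = tokens[j]; break
    for j in range(i + 1, len(tokens)):
        if tokens[j] not in stopwords:
            next_ = tokens[j]; break
    return prev, next_

def translate(tokens, stopwords):
    """Replace each source word by its most probable target word group.

    Because the model is a direct channel, decoding is a single lookup per
    word; this is what makes whole-collection document translation fast.
    """
    output = []
    for i, e in enumerate(tokens):
        prev, next_ = context(tokens, i, stopwords)
        candidates = table.get((e, prev, next_)) or table.get((e, None, None))
        if candidates:
            prob, target_words = max(candidates)
            output.extend(target_words)  # fertility = len(target_words)
        else:
            output.append(e)  # pass unknown words through unchanged
    return output
```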
3 Information Retrieval Experiments

The document sets used in our experiments were the English and French parts of the document set used in the TREC-6 and TREC-7 CLIR tracks. The English document set consisted of 3 years of AP newswire (1988-1990), comprising 242,918 stories originally occupying 759 MB. The French document set consisted of the same 3 years of SDA (a Swiss newswire service), comprising 141,656 stories and originally occupying 257 MB. Identical query sets and appropriate relevance judgments were available in both English and French. The 22 topics from TREC-6 were originally constructed in English and translated by humans into French. The 28 topics from TREC-7 were originally constructed (7 each from four different sites) in English, French, German, and Italian, and human-translated into all four languages. We have no knowledge of which TREC-7 queries were originally constructed in which language. The queries contain three SGML fields (<topic>, <description>, <narrative>), which allows us to contrast short (<description> field only) and long (all three fields) forms of the queries. Queries from TREC-7 appear to be somewhat "easier" than queries from TREC-6, across both document sets. This difference is not accounted for simply by the number of relevant documents, since there were considerably fewer relevant French documents per TREC-7 query than per TREC-6 query.

With this set of resources, we performed two different sets of CLIR experiments, denoted EqFd (English queries retrieving French documents) and FqEd (French queries retrieving English documents). In both EqFd and FqEd we employed both techniques (translating the queries, translating the documents). We emphasize that the query translation in EqFd was performed with the same English⇒French translation system as the document translation in FqEd, and that the document translation in EqFd was performed with the same French⇒English translation system as the query translation in FqEd. We further emphasize that both translation systems were built from the same training data, and thus are as close to identical quality as can likely be attained. Note also that the results presented are not the TREC-7 CLIR task, which involved both cross-language information retrieval and the merging of documents retrieved from sources in different languages.

Preprocessing of documents includes part-of-speech tagging and morphological analysis. (The training data for the translation models was preprocessed identically, so that the translation models translated between morphological root words rather than between words.) Our information retrieval system consists of first-pass scoring with the Okapi formula (Robertson et al., 1995) on unigrams and symmetrized bigrams (with en, des, de, and - allowed as connectors), followed by a second-pass re-scoring using local context analysis (LCA) as a query expansion technique (Xu and Croft, 1996). Our primary basis for comparison of the results of the experiments was TREC-style average precision after the second pass, although we have checked that our principal conclusions follow on the basis of first-pass scores, and on the precision at rank 20. In the query translation experiments, our implementation of query expansion corresponds to the post-translation expansion of (Ballesteros and Croft, 1997), (Ballesteros and Croft, 1998). All adjustable parameters in the IR system were left unchanged from their values in our TREC ad-hoc experiments (Chan et al., 1997), (Franz and Roukos, 1998), (Franz et al., 1999) or cited papers (Xu and Croft, 1996), except for the number of documents used as the basis for the LCA, which was estimated at 15 from scaling considerations. Average precision for both query and document translation was noted to be insensitive to this parameter (as previously observed in other contexts) and not to favor one or the other method of CLIR.
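For reference, the first-pass scoring has the standard Okapi form. The sketch below uses the usual textbook BM25 parameterization; the paper cites Robertson et al. (1995) but does not state its exact parameter values, and the bigram indexing and the LCA second pass are omitted here.

```python
import math

# Standard Okapi BM25 scoring, shown as a hedged illustration of the
# "first pass" described above. k1 and b are common defaults, assumed here.
def bm25(query_terms, doc_tf, doc_len, avg_doc_len, df, n_docs, k1=1.2, b=0.75):
    score = 0.0
    for term in query_terms:
        tf = doc_tf.get(term, 0)
        if tf == 0 or term not in df:
            continue
        idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5))
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm
    return score
```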
4 Results

In experiment EqFd, document translation outperformed query translation, as seen in columns qt and dt of Table 1. In experiment FqEd, query translation outperformed document translation, as seen in columns qt and dt of Table 2. The relative performances of query and document translation, in terms of average precision, do not differ between long and short forms of the queries, contrary to expectations that query translation might fare better on longer queries. A more sophisticated translation model, incorporating more nonlocal features into its definition of context, might reveal a difference in this aspect.

Table 1: Experiment EqFd: English queries retrieving French documents. All numbers are TREC average precisions.

EqFd       | qt     | dt     | qt + dt | ht     | ht + dt
trec6.d    | 0.2685 | 0.2819 | 0.2976  | 0.3494 | 0.3548
trec6.tdn  | 0.2981 | 0.3379 | 0.3425  | 0.3823 | 0.3664
trec7.d    | 0.3296 | 0.3345 | 0.3532  | 0.3611 | 0.4021
trec7.tdn  | 0.3826 | 0.3814 | 0.4063  | 0.4072 | 0.4192

qt: query translation system; dt: document translation system; qt + dt: hybrid system combining qt and dt; ht: monolingual baseline (equivalent to human translation); ht + dt: hybrid system combining ht and dt.

Table 2: Experiment FqEd: French queries retrieving English documents. All numbers are TREC average precisions; systems are labeled as in Table 1.

FqEd       | qt     | dt     | qt + dt | ht     | ht + dt
trec6.d    | 0.3271 | 0.2992 | 0.3396  | 0.2873 | 0.3369
trec6.tdn  | 0.3666 | 0.3390 | 0.3743  | 0.3889 | 0.4016
trec7.d    | 0.4014 | 0.3926 | 0.4264  | 0.4377 | 0.4475
trec7.tdn  | 0.4541 | 0.4384 | 0.4739  | 0.4812 | 0.4937

A simple explanation is that in both experiments, French⇒English translation outperformed English⇒French translation. It is surprising that the difference in performance is this large, given that the training of the translation systems was identical. Reasons for this difference could lie in the structure of the languages themselves; for example, the French tendency to use phrases such as pomme de terre for potato may hinder retrieval based on the Okapi formula, which tends to emphasize matching unigrams. However, separate monolingual retrieval experiments indicate that the advantages gained by indexing bigrams in the French documents were not only too small to account for the difference between the retrieval experiments involving opposite translation directions, but were in fact smaller than the gains made by indexing bigrams in the English documents. The fact that French is a more highly inflected language than English is unlikely to account for the difference, since both the translation systems and the IR system used morphologically analyzed text. Differences in the quality of the preprocessing steps in each language, such as tagging and morphing, are more difficult to account for, in the absence of standard metrics for these tasks. However, we believe that differences in preprocessing for each language have only a small effect on retrieval performance. Furthermore, these differences are likely to be compensated for by the training of the translation algorithm: since its training data was preprocessed identically, a translation engine trained to produce language in a particular style of morphing is well suited for matching translated documents with queries morphed in the same style. A related concern is "matching" between the translation model training data and the retrieval set: the English AP documents might have been more similar to the Hansard than the Swiss SDA documents. All of these concerns heighten the importance of studying both translation directions within the language pair.

On a query-by-query basis, the scores are quite correlated, as seen in Fig. 1. On TREC-7 short queries, the average precisions of query and document translation are within 0.1 of each other on 23 of the 28 queries, on both FqEd and EqFd. The remaining outlier points tend to be accounted for by simple translation errors (e.g., vol d'oeuvres d'art → flight art on TREC-7 query CL036). With the limited number of queries available, it is not clear whether the difference in retrieval results between the two translation directions is a result of small effects across many queries, or is principally determined by the few outlier points.

We remind the reader that the query translation and document translation approaches to CLIR are not symmetrical. Information is distorted in a different manner by the two approaches, and thus a combination of the two approaches may yield new information. We have investigated this aspect by developing a hybrid system in which the score of each document is the mean of its (normalized) scores from both the query and document translation experiments. (A more general linear combination would perhaps be more suitable if the average precisions of the two retrievals differed substantially.)
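The hybrid combination itself is simple enough to state in a few lines. In this sketch the per-run normalization by the top score is an assumption; the paper says only that the scores are normalized before averaging.

```python
# Hybrid scoring as described above: each document's hybrid score is the
# arithmetic mean of its normalized scores from the query-translation run
# and the document-translation run.
def hybrid_scores(qt_run, dt_run):
    """qt_run, dt_run: dicts mapping doc_id -> raw retrieval score."""
    def normalize(run):
        top = max(run.values()) or 1.0
        return {doc: s / top for doc, s in run.items()}
    qt, dt = normalize(qt_run), normalize(dt_run)
    docs = set(qt) | set(dt)
    return {doc: 0.5 * (qt.get(doc, 0.0) + dt.get(doc, 0.0)) for doc in docs}

# Toy usage with made-up scores:
ranked = sorted(hybrid_scores({"d1": 12.0, "d2": 7.5},
                              {"d1": 3.1, "d3": 4.0}).items(),
                key=lambda kv: -kv[1])
```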
We observe that the hybrid systems which combine query translation and document translation outperform both query translation and document translation individually, on both sets of documents. (See column qt + dt of Tables 1 and 2.)

Given the tradeoff between computer resources and quality of translation, some would propose that correspondingly more computational effort should be put into query translation. From this point of view, a document translation system based on fast MT should be compared with a query translation system based on higher-quality, but slower, MT. We can meaningfully investigate this limit by regarding the human-translated versions of the TREC queries as the extreme high-quality limit of machine translation. In this task, monolingual retrieval (the usual baseline for judging the degree to which translation degrades retrieval performance in CLIR) can be regarded as the extreme high-quality limit of query translation.

[Figure 1: Scatterplot of average precision of document translation vs. query translation; the plot itself is not recoverable from this copy.]

Nevertheless, document translation provides another source of information, since the context-sensitive aspects of the translation account for context in a manner distinct from current algorithms of information retrieval. Thus we performed a further set of experiments in which we mix document translation and monolingual retrieval. Surprisingly, we find that the hybrid system outperforms the pure monolingual system. (See columns ht and ht + dt of Tables 1 and 2.) Thus we conclude that a mixture of document translation and query translation can be expected to outperform pure query translation, even very high quality query translation.

5 Conclusions and Future Work

We have performed experiments to compare query and document translation-based CLIR systems using statistical translation models that are trained identically for both translation directions. Our study is the largest comparative study of document translation and query translation of which we are aware; furthermore, we have contrasted query and document translation systems in both directions within a language pair. We find no clear advantage for either the query translation system or the document translation system; instead, French⇒English translation appears advantageous over English⇒French translation, in spite of the identical procedures used in constructing both. However, a hybrid system incorporating both directions of translation outperforms either. Furthermore, by incorporating human query translations rather than machine translations, we show that the hybrid system continues to outperform query translation. We have based our conclusions on comparisons of TREC-style average precisions of retrieval with a two-pass IR system; the same conclusions follow if we instead compare precisions at rank 20 or average precisions from first-pass (Okapi) scores. Thus we conclude that even in the limit of extremely high quality query translation, it will remain advantageous to incorporate both document and query translation into a CLIR system.
Future work will involve investigating translation direction differences in retrieval performance for other language pairs, and for statistical translation systems trained from comparable, rather than parallel, corpora.

6 Acknowledgments

This work is supported by NIST grant no. 70NANB5H1174. We thank Scott Axelrod, Martin Franz, Salim Roukos, and Todd Ward for valuable discussions.

References

L. Ballesteros and W.B. Croft. 1997. Phrasal translation and query expansion techniques for cross-language information retrieval. In 20th Annual ACM SIGIR Conference on Information Retrieval.
L. Ballesteros and W.B. Croft. 1998. Resolving ambiguity for cross-language retrieval. In 21st Annual ACM SIGIR Conference on Information Retrieval.
P.F. Brown, J.C. Lai, and R.L. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics.
P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19:263-311.
C. Buckley, M. Mitra, J. Walz, and C. Cardie. 1998. Using clustering and superconcepts within SMART: TREC-6. In E.M. Voorhees and D.K. Harman, editors, The 6th Text REtrieval Conference (TREC-6).
J.G. Carbonell, Y. Yang, R.E. Frederking, R.D. Brown, Yibing Geng, and Danny Lee. 1997. Translingual information retrieval: A comparative evaluation. In Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence.
E. Chan, S. Garcia, and S. Roukos. 1997. TREC-5 ad-hoc retrieval using k nearest-neighbors re-scoring. In E.M. Voorhees and D.K. Harman, editors, The 5th Text REtrieval Conference (TREC-5).
A. Diekema, F. Oroumchian, P. Sheridan, and E. Liddy. 1999. TREC-7 evaluation of Conceptual Interlingua Document Retrieval (CINDOR) in English and French. In E.M. Voorhees and D.K. Harman, editors, The 7th Text REtrieval Conference (TREC-7).
S. Dumais, T.A. Letsche, M.L. Littman, and T.K. Landauer. 1997. Automatic cross-language retrieval using latent semantic indexing. In AAAI Symposium on Cross-Language Text and Speech Retrieval.
M. Franz and S. Roukos. 1998. TREC-6 ad-hoc retrieval. In E.M. Voorhees and D.K. Harman, editors, The 6th Text REtrieval Conference (TREC-6).
M. Franz, J.S. McCarley, and S. Roukos. 1999. Ad hoc and multilingual information retrieval at IBM. In E.M. Voorhees and D.K. Harman, editors, The 7th Text REtrieval Conference (TREC-7).
J.S. McCarley and S. Roukos. 1998. Fast document translation for cross-language information retrieval. In D. Farwell, E. Hovy, and L. Gerber, editors, Machine Translation and the Information Soup, page 150.
D.W. Oard and P. Hackett. 1998. Document translation for cross-language text retrieval at the University of Maryland. In E.M. Voorhees and D.K. Harman, editors, The 6th Text REtrieval Conference (TREC-6).
D.W. Oard. 1998. A comparative study of query and document translation for cross-language information retrieval. In D. Farwell, E. Hovy, and L. Gerber, editors, Machine Translation and the Information Soup, page 472.
S.E. Robertson, S. Walker, S. Jones, M.M. Hancock-Beaulieu, and M. Gatford. 1995. Okapi at TREC-3. In E.M. Voorhees and D.K. Harman, editors, The 3rd Text REtrieval Conference (TREC-3).
Jinxi Xu and W. Bruce Croft. 1996. Query expansion using local and global document analysis. In 19th Annual ACM SIGIR Conference on Information Retrieval.
Resolving Translation Ambiguity and Target Polysemy in Cross-Language Information Retrieval

Hsin-Hsi Chen, Guo-Wei Bian and Wen-Cheng Lin
Department of Computer Science and Information Engineering, National Taiwan University, Taipei, TAIWAN, R.O.C.
E-mail: [email protected], {gwbian, denislin}@nlg2.csie.ntu.edu.tw

Abstract

This paper deals with the translation ambiguity and target polysemy problems together. Two monolingual balanced corpora are employed to learn word co-occurrence for translation ambiguity resolution, and augmented translation restrictions for target polysemy resolution. Experiments show that the model achieves 62.92% of monolingual information retrieval, a 40.80% improvement over the select-all model. With target polysemy resolution added, the retrieval performance is about a 10.11% improvement over the model resolving translation ambiguity only.

1. Introduction

Cross-language information retrieval (CLIR) (Oard and Dorr, 1996; Oard, 1997) deals with the use of queries in one language to access documents in another. Due to the differences between source and target languages, query translation is usually employed to unify the language in queries and documents. In query translation, translation ambiguity is a basic problem to be resolved. A word in a source query may have more than one sense. Word sense disambiguation identifies the correct sense of each source word, and lexical selection translates it into the corresponding target word. The above procedure is similar to the lexical choice operation in a traditional machine translation (MT) system. However, there is a significant difference between the applications of MT and CLIR. In MT, readers interpret the translated results. If the target word has more than one sense, readers can disambiguate its meaning automatically. Comparatively, in CLIR the translated result is sent to a monolingual information retrieval system. The target polysemy adds extraneous senses and affects the retrieval performance.

Several different approaches have been proposed for query translation. The dictionary-based approach exploits machine-readable dictionaries and selection strategies like select all (Hull and Grefenstette, 1996; Davis, 1997), randomly select N (Ballesteros and Croft, 1996; Kwok 1997) and select best N (Hayashi, Kikui and Susaki, 1997; Davis 1997). Corpus-based approaches exploit sentence-aligned corpora (Davis and Dunning, 1996) and document-aligned corpora (Sheridan and Ballerini, 1996). These two approaches are complementary: the dictionary provides translation candidates, and the corpus provides context to fit user intention. Coverage of dictionaries, alignment performance and domain shift of the corpus are the major problems of these two approaches. Hybrid approaches (Ballesteros and Croft, 1998; Bian and Chen, 1998; Davis 1997) integrate both lexical and corpus knowledge.

All the above approaches deal with the translation ambiguity problem in query translation. Few touch on translation ambiguity and target polysemy together. This paper will study the multiplication effects of translation ambiguity and target polysemy in cross-language information retrieval systems, and propose a new translation method to resolve these problems. Section 2 shows the effects of translation ambiguity and target polysemy in Chinese-English and English-Chinese information retrieval. Section 3 presents several models to resolve the translation ambiguity and target polysemy problems. Section 4 demonstrates the experimental results, and compares the performances of the proposed models. Section 5 concludes the remarks.

2. Effects of Ambiguities

Translation ambiguity and target polysemy are two major problems in CLIR. Translation ambiguity results from the source language, and target polysemy occurs in the target language. Take Chinese-English information retrieval (CEIR) and English-Chinese information retrieval (ECIR) as examples. The former uses Chinese queries to retrieve English documents, while the latter employs English queries to retrieve Chinese documents.

To explore the difficulties in the query translation of different languages, we gathered the sense statistics of English and Chinese words. Table 1 shows the degree of word sense ambiguity (in terms of number of senses) in English and in Chinese, respectively. A Chinese thesaurus, i.e., tong2yi4ci2ci2lin2 (Mei, et al., 1982), and an English thesaurus, i.e., Roget's thesaurus, are used to count the statistics of the senses of words. On the average, an English word has 1.687 senses, and a Chinese word has 1.397 senses. If the top 1000 high-frequency words are considered, the English words have 3.527 senses, and the bi-character Chinese words have only 1.504 senses. In summary, a Chinese word is comparatively unambiguous, so that translation ambiguity is not serious but target polysemy is serious in CEIR. In contrast, an English word is usually ambiguous. Translation disambiguation is important in ECIR.

Table 1. Statistics of Chinese and English Thesaurus

                                         English Thesaurus  Chinese Thesaurus
Total Words                              29,380             53,780
Average # of Senses                      1.687              1.397
Average # of Senses for Top 1000 Words   3.527              1.504

Consider an example in CEIR. The Chinese word yin2hang2 is unambiguous, but its English translation "bank" has 9 senses (Longman, 1978). When the Chinese word yin2hang2 is issued, it is translated into the English counterpart "bank" by dictionary lookup without difficulty, and then "bank" is sent to an IR system. The IR system will retrieve documents that contain this word. Because "bank" is not disambiguated, irrelevant documents will be reported. On the contrary, when "bank" is submitted to an ECIR system, we must disambiguate its meaning first. If we can find that its correct translation is yin2hang2, the subsequent operation is very simple. That is, yin2hang2 is sent into an IR system, and then documents containing yin2hang2 will be presented. In this example, translation disambiguation should be done rather than target polysemy resolution.

The above example does not mean translation disambiguation is not required in CEIR. Some Chinese words may have more than one sense. For example, yun4dong4 has the following meanings (Lai and Lin, 1987): (1) sport, (2) exercise, (3) movement, (4) motion, (5) campaign, and (6) lobby. Each corresponding English word may in turn have more than one sense. For example, "exercise" may mean a question or set of questions to be answered by a pupil for practice; the use of a power or right; and so on. The multiplication effects of translation ambiguity and target polysemy make query translation harder.

3. Translation Ambiguity and Polysemy Resolution Models

In recent work, Ballesteros and Croft (1998) and Bian and Chen (1998) employ dictionaries and co-occurrence statistics trained from target-language documents to deal with translation ambiguity. We will follow our previous work (Bian and Chen, 1998), which combines the dictionary-based and corpus-based approaches for CEIR.
A bilingual dictionary provides the translation equivalents of each query term, and the word co-occurrence information trained from a target-language text collection is used to disambiguate the translation. This method considers the content around the translation equivalents to decide the best target word. The translation of a query term can be disambiguated using the co-occurrence of the translation equivalents of this term and other terms. We adopt mutual information (Church, et al., 1989) to measure the strength. This disambiguation method produces good translations even when multi-term phrases are not found in the bilingual dictionary, or the phrases are not identified in the source language.

Before further discussion, we take Chinese-English information retrieval as an example. Consider the Chinese query yin2hang2 ('bank') against an English collection again. The ambiguity grows from none (source side) to 9 senses (target side) during query translation. How to carry the knowledge from the source side over to the target side is an important issue. To avoid the problem of target polysemy in query translation, we have to restrict the use of a target word by augmenting it with some other words that usually co-occur with it. That is, we have to make a context for the target word. In our method, the contextual information is derived from the source word. We collect the frequently accompanying nouns and verbs for each word in a Chinese corpus. Those words that co-occur with a given word within a window are selected. The word association strength of a word and its accompanying words is measured by mutual information.

For each word C in a Chinese query, we augment it with a sequence of Chinese words trained in the above way. Let these words be CW1, CW2, ..., and CWm. Assume the corresponding English translations of C, CW1, CW2, ..., and CWm are E, EW1, EW2, ..., and EWm, respectively. EW1, EW2, ..., and EWm form an augmented translation restriction of E for C. In other words, the list (E, EW1, EW2, ..., EWm) is called an augmented translation result for C. EW1, EW2, ..., and EWm are a pseudo English context produced from the Chinese side. Consider the Chinese word yin2hang2 again. Some strongly co-related Chinese words in the ROCLING balanced corpus are tie1xian4, ling3chu1, li3ang2, ya1hui4, hui4dui4, etc. Thus the augmented translation restriction of "bank" is (rebate, show out, Lyons, negotiate, transfer, ...).
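To make the construction concrete, the following sketch shows one way to derive a restriction for a source word from a monolingual corpus. The corpus format, the POS filtering, and the cutoff of five companions are illustrative assumptions rather than the paper's exact settings, and the single-translation dictionary lookup sidesteps the restriction-translation ambiguity discussed next.

```python
import math
from collections import Counter

# Build an augmented translation restriction from a monolingual
# source-language corpus. Assumed inputs: `sentences` are lists of
# (token, pos) pairs; only nouns/verbs are kept; each accompanying word
# is mapped through a bilingual dictionary of lists of translations.
def build_restriction(word, sentences, bilingual_dict, window=3, top_m=5):
    unigrams, pairs, n_pairs = Counter(), Counter(), 0
    for sent in sentences:
        toks = [t for t, pos in sent if pos.startswith(("N", "V"))]
        unigrams.update(toks)
        for i, t in enumerate(toks):
            for u in toks[i + 1:i + window]:   # co-occurrence window
                pairs[tuple(sorted((t, u)))] += 1
                n_pairs += 1
    n = sum(unigrams.values())

    def mi(other):  # pointwise mutual information MI(word, other)
        joint = pairs.get(tuple(sorted((word, other))), 0)
        if not joint:
            return float("-inf")
        return math.log2((joint / n_pairs) /
                         ((unigrams[word] / n) * (unigrams[other] / n)))

    companions = sorted((o for o in unigrams if o != word), key=mi, reverse=True)
    # Translating the strongest companions yields the pseudo target context.
    return [bilingual_dict[c][0] for c in companions[:top_m] if c in bilingual_dict]
```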
Unfortunately, the query translation is not so simple. A word C in a query Q may itself be ambiguous. Besides, the accompanying words CWi (1 <= i <= m) trained from the Chinese corpus may be translated into more than one English word. An augmented translation restriction may add erroneous patterns when a word in a restriction has more than one sense. Thus we devise several models to study the effects of augmented restrictions. Figure 1 shows the different models and the model refinement procedure. A Chinese query may go through the translation ambiguity resolution module (left-to-right), the target polysemy resolution module (top-down), or both (i.e., the two modules are integrated at the right corner). In the following, we show how each module operates independently, and how the two modules are combined.

For a Chinese query composed of n words C1, C2, ..., Cn, we find the corresponding English translation equivalents in a Chinese-English bilingual dictionary. To study the propagation errors from the translation ambiguity resolution part in the experiments, we consider the following two alternatives:

(a) select all (do-nothing): This strategy does nothing about translation disambiguation. All the English translation equivalents for the n Chinese words are selected and submitted to a monolingual information retrieval system.

(b) co-occurrence model (Co-Model): We adopt the strategy discussed previously for translation disambiguation (Bian and Chen, 1998). This method considers the content around the English translation equivalents to decide the best target equivalent.

For the target polysemy resolution part in Figure 1, we also consider two alternatives. In the first alternative (called the A model), we augment restrictions to all the words, no matter whether they are ambiguous or not. In the second alternative (called the U model), we neglect those Cs that have more than one English translation. Assume Cσ(1), Cσ(2), ..., Cσ(p) (p < n) have only one English translation. The restrictions are augmented to Cσ(1), Cσ(2), ..., Cσ(p) only.

We apply the above corpus-based method to find the restriction for each English word selected by the translation ambiguity resolution model. Recall that the restrictions are derived from a Chinese corpus. The accompanying words trained from the Chinese corpus may be translated into more than one English word, so translation ambiguity may also occur when translating the restrictions. Three alternatives are considered. In the U1 (or A1) model, only the terms without ambiguity, i.e., those whose Chinese and English words are in one-to-one correspondence in the Chinese-English bilingual dictionary, are added. In the UT (or AT) model, the terms with the same parts of speech (POSes) are added; that is, POS is used to select the English word. In the UTT (or ATT) model, we use mutual information to select the top 10 accompanying terms of a Chinese query word, and POS is used to obtain the augmented translation restriction.
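The selection step at the heart of the Co-Model (and reused later when the WCO models disambiguate the restriction words) can be sketched as follows. This is a minimal illustration assuming a precomputed mutual-information function over English word pairs; Bian and Chen (1998) describe the actual procedure, which may differ in detail.

```python
# Co-occurrence-based translation disambiguation (Co-Model sketch): for each
# query word, choose the translation candidate whose total association with
# the other words' candidates is highest. `mi(e1, e2)` is assumed to return
# the mutual information of two English words estimated from the
# target-language corpus.
def co_model(candidates, mi):
    """candidates: list of candidate lists, one per source query word."""
    chosen = []
    for i, cands in enumerate(candidates):
        others = [c for j, cs in enumerate(candidates) if j != i for c in cs]
        best = max(cands, key=lambda e: sum(mi(e, o) for o in others))
        chosen.append(best)
    return chosen
```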
They are determined by the following formula, where n is number of words in Q and mk is the number of words in a restriction for Ek. 1 weight(Ei) - n+l 1 weight(EWij) = n (n + 1) * E mk k=l Thus six new models, i.e., A1W, ATW, ATTW, U1W, UTW and UTTW, are derived. Finally, we apply Co-model again to disambiguate the pseudo contexts and devise six new models (A1WCO, ATWCO, ATTWCO, U1WCO, UTWCO, and UTTWCO). In these six models, only one restriction word will be selected from the words EWil, EWiz, ..., EWim i via disambiguation with other restrictions. 4. Experimental Results To evaluate the above models, we employ TREC-6 text collection, TREC topics 301-350 (Harman, 1997), and Smart information retrieval system (Salton and Buckley, 1988). The text collection contains 556,077 documents, and is about 2.2G bytes. Because the goal is to evaluate the performance of Chinese-English information retrieval on different models, we translate the 50 English queries into Chinese by human. The topic 332 is considered as an example in the following. The original English version and the human-translated Chinese version are shown. A TREC topic is composed of several fields. Tags <num>, <title>, <des>, and <narr> denote topic number, title, description, and narrative fields. Narrative provides a complete description of document relevance for the 218 assessors. In our experiments, only the fields of title and description are used to generate queries. <top> <num> Number: 332 <title> Income Tax Evasion <desc> Description: This query is looking for investigations that have targeted evaders of U.S. income tax. <narr> Narrative: A relevant document would mention investigations either in the U.S. or abroad of people suspected of evading U.S. income tax laws. Of particular interest are investigations involving revenue from illegal activities, as a strategy to bring known or suspected criminals to justice. </top> <top> <num> Number: 332 <C-title> <C-desc> Description: <C-narr> Narrative: .~l~ ~.&.~-~- ° :~,J-~, ~ ~ ~ - ~ ~ ~.~-~ , </top> Totally, there are 1,017 words (557 distinct words) in the title and description fields of the 50 translated TREC topics. Among these, 401 words have unique translations and 616 words have multiple translation equivalents in our Chinese-English bilingual dictionary. Table 2 shows the degree of word sense ambiguity in English and in Chinese, respectively. On the average, an English query term has 2.976 senses, and a Chinese query term has 1.828 senses only. In our experiments, LOB corpus is employed to train the co-occurrence statistics for translation ambiguity resolution, and ROCLING balanced corpus (Huang, et al., 1995) is employed to train the restrictions for target polysemy resolution. The mutual information tables are trained using a window size 3 for adjacent words. Table 3 shows the query translation of TREC topic 332. For the sake of space, only title field is shown. In Table 3(a), the first two rows list the original English query and the Chinese query. Rows 3 and 4 demonstrate the English translation by select-all model and co-occurrence model by resolving translation ambiguity only. Table 3(b) shows the augmented translation results using different models. Here, both translation ambiguity and target polysemy are resolved. The following lists the selected restrictions in A1 model. 
i~_~(evasion): ~.~_N (N: poundage), ~/t~_N (N: scot), ~.tkV (V: stay) ?~-(income): I~g~_N (N: quota) ~(tax): i/~_V (N: evasion), I~_N (N:surtax), ~t ~,_N (N: surplus), ,g'~_N (N: sales tax) Augmented translation restrictions (poundage, scot, stay), (quota), and (evasion, surtax, surplus, sales tax) are added to "evasion", "income", and "tax", respectively. From Longman dictionary, we know there are 3 senses, 1 sense, and 2 senses for "evasion", "income", and "tax", respectively. Augmented restrictions are used to deal with target polysemy problem. Compared with A1 model, only "evasion" is augmented with a translation restriction in U1 model. This is because " "~ ~ " (tao21uo4) has only one translation and "?~-" (suo3de2) and "~" (sui4) have more than one translation. Similarly, the augmented translation restrictions are omitted in the other U-models. Now we consider AT model. The Chinese restrictions, which have the matching POSes, are listed below: i~ (evasion): ~_N (N: poundage), ~l~t~0~,_N (N: scot), L~_V (V: stay), ~N (N: droit, duty, geld, tax), li~l~f~ N (N: custom, douane, tariff), /~.~ V (V: avoid, elude, wangle, welch, welsh; N: avoidance, elusion, evasion, evasiveness, miss, runaround, shirk, skulk), i.~)~_V (V: contravene, infract, infringe; N: contravention, infraction, infringement, sin, violation) ~" ~- (income): ~_V (V: impose; N: division), ~.&~,_V (V: assess, put, tax; N: imposition, taxation), ~A~_N (N: Swiss, Switzer), i~_V (V: minus, subtract), I~I[$~_N (N: quota), I~l ~_N (N: commonwealth, folk, land, nation, nationality, son, subject) (tax): I~h~_N (N: surtax), .~t~g, N (N: surplus), ~'~ _N (N: sales tax), g~V (V: abase, alight, debase, descend), r~_N (N: altitude, loftiness, tallness; ADJ: high; ADV: loftily), ~V (V: comprise, comprize, embrace, encompass), -~V (V: compete, emulate, vie; N: conflict, contention, duel, strife) Table 2. Statistics of TREC Topics 301-350 # of Distinct Words Average # of Senses Original English Topics 500 (370 words found in our dictionary) 2.976 Human-translated Chinese Topics 557 (389 words found in our dictionary) 1.828 219 Table 3. 
Query Translation of Title Field of TREC Topic 332 (a) Resolving Translation Ambiguity Only original English query income tax evasion Chinese translation by human ~ (tao21uo4) ?~- (suo3de2) $~, (sui4) by select all model (evasion), (earning, finance, income, taking), (droit, duty, geld, tax) by co-occurrence model evasion, income, tax (b) Resolving both Translation Ambiguity and Target Polysemy by AI model by UI model by AT model by UT model :by ATT model by UTT model b-y ATWCO model by UTWCO model by ATTWCO model by UTTWCO model (evasion, poundage, scot, stay), (income, quota), (tax, evasion, surtax, surplus, sales tax) (evasion, poundage, scot, stay), (income), (tax) (evasion; poundage; scot; stay; droit, duty, geld, tax; custom, douane, tariff; avoid, elude, wangle, welch, welsh; contravene, infract, infringe), (income; impose; assess, put, tax; Swiss, Switzer; minus subtract; quota; commonwealth, folk, land, nation, nationality, son, subject), (tax; surtax; surplus; sales tax; abase, alight, debase, descend; altitude, loftiness, tallness; comprise, comprize, embrace, encompass; compete, emulate, vie) (evasion; poundage, scot, stay, droit, duty, geld, tax, custom, douane, tariff, avoid, elude, wangle, welch, welsh, contravene, infract, infringe), (income), (tax) (evasion, poundage, scot, stay, droit, duty, geld, tax, custom, douane, tariff), (income), (tax) (evasion, poundage, scot, stay, droit, duty, geld, tax, custom, douane, tariff), (income), (tax) (evasion, tax), (income, land), (tax, surtax) (evasion, poundage), (income), (tax) (evasion, tax), (income), (tax) (evasion, poundage), (income), (tax) Those English words whose POSes are the same as the corresponding Chinese restrictions are selected as augmented translation restriction. For example, the translation of"~"_V (tao2bi4) has two possible POSes, i.e., V and N, so only "avoid", "elude", "wangle", "welch", and "welsh" are chosen. The other terms are added in the similar way. Recall that we use mutual information to select the top 10 accompanying terms of a Chinese query term in ATT model. The 5 ~ row shows that the augmented translation restrictions for "?)i"~-" (suo3de2) and "~," (sui4) are removed because their top 10 Chinese accompanying terms do not have English translations of the same POSes. Finally, we consider ATWCO model. The words "tax", "land", and "surtax" are selected from the three lists in 3 rd row of Table 3(b) respectively, by using word co-occurrences. Figure 2 shows the number of relevant documents on the top 1000 retrieved documents for Topics 332 and 337. The performances are stable in all of the +weight (W) models and the enhanced CO restriction (WCO) models, even there are different number of words in translation restrictions. Especially, the enhanced CO restriction models add at most one translated restriction word for each query tenn. They can achieve the similar performance to those models that add more translated restriction words. Surprisingly, the augmented translation results may perform better than the monolingual retrieval. Topic 337 in Figure 2 is an example. Table 4 shows the overall performance of 18 different models for 50 topics. Eleven-point average precision on the top 1000 retrieved documents is adopted to measure the performance of all the experiments. The monolingual information retrieval, i.e., the original English queries to English text collection, is regarded as a baseline model. The performance is 0.1459 under the specified environment. 
The select-all model, i.e., all the translation equivalents are passed without disambiguation, has 0.0652 average precision. About 44.69% of the performance of the monolingual information retrieval is achieved. When co-occurrence model is employed to resolve translation ambiguity, 0.0831 average precision (56.96% of monolingual information retrieval) is reported. Compared to do-nothing model, the performance is 27.45% increase. Now we consider the treatment of translation ambiguity and target polysemy together. Augmented restrictions are formed in A1, AT, ATT, U1, UT and UTT models, however, their performances are worse than Co-model (translation disambiguation only). The major 220 Figure 2. The Retrieved Performances of Topics 332 and 337 90 80 70 60 50 40 30 20 10 0 # of relevant documents are retrieved - ~ < < model = . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Table 4. Performance of Different Models (11-point Average Precision) + 3 3 2 -=-,7 I; Monolingual IR Resolving Resolving Translation Ambiguity Translation Ambiguity and Target Polysemy S¢!e6( English .... UnambigU~ W6rds All W0rds All Co Mode! UI LIT UTT Ai AT ATT i i i ' i i .... i ..... i i i' i i i i 0.0797 0.0574 0.0709 .... 0.0674 0.0419 " 0.0660 (54.63%) (39.34%) (48.59% (46.20%) (28.72%) (45.24% 0.1459 0.0652 0.0831 (44.69%) (56.96%) ,, U!WCO UTWCO~ !UTTWCO A1WCO A~W.CO 0.0916 0.0915 0.0914 0.0914 0.0913 0.0914 (62.78%) (62.71%) (62.65%) (62.65%) (62.58%), (62.65%) ~ Weight, E~lishi~0 M0d~i for ÷ Weighti English Co Mod~l for Resection Translation Res~ietion Translation ATTWCO 0.0918 0.0917 0.0915 0.0917 0.0917 0.0915 (62.92%) (62.85%) (62.71%) (62.85%) (62.85%) (62.71%) reason is the restrictions may introduce errors. That can be found from the fact that models U 1, UT, and UTT are better than A1, AT, and ATT. Because the translation of restriction from source language (Chinese) to target language (English) has the translation ambiguity problem, the models (U1 and A1) introduce the unambiguous restriction terms and perform better than other models. Controlled augmentation shows higher performance than uncontrolled augmentation. When different weights are assigned to the original English translation and the augmented restrictions, all the models are improved significantly. The performances of A1W, ATW, ATTW, U1W, UTW, and UTTW are about 10.11% addition to the model for translation disambiguation only. Of these models, the performance change from model AT to model ATW is drastic, i.e., from 0.0419 (28.72%) to 0.0913 (62.58%). It tells us the original English translation plays a major role, but the augmented restriction still has a significant effect on the performance. We know that restriction for each English translation presents a pseudo English context. Thus we apply the co-occurrence model again on the pseudo English contexts. The performances are increased a little. These models add at most one translated restriction word for each query term, but their performances are better than those models that adding more translated restriction words. It tells us that a good translated restriction word for each query term is enough for resolving target polysemy problem. U1WCO, which is the best in these experiments, gains 62.92% of monolingual information retrieval, and 40.80% increase to the do-nothing model (select- all). 221 5. Concluding Remarks This paper deals with translation ambiguity and target polysemy at the same time. 
5. Concluding Remarks

This paper deals with translation ambiguity and target polysemy at the same time. We utilize two monolingual balanced corpora to learn useful statistical data, i.e., word co-occurrence for translation ambiguity resolution, and translation restrictions for target polysemy resolution. An aligned bilingual corpus or a special-domain corpus is not required in this design. Experiments show that resolving both translation ambiguity and target polysemy gains about a 10.11% performance improvement over the method for translation disambiguation alone in cross-language information retrieval. We also analyze the two factors: word sense ambiguity in the source language (translation ambiguity), and word sense ambiguity in the target language (target polysemy). The statistics of word sense ambiguities have shown that target polysemy resolution is critical in Chinese-English information retrieval.

This treatment is very suitable for translating the very short queries found on the Web, which are 1.5-2 words on average (Pinkerton, 1994; Fitzpatrick and Dent, 1997). Because the major components of queries are nouns, at least one word of a short query of length 1.5-2 words is a noun. Besides, most Chinese nouns are unambiguous, so translation ambiguity is comparatively less serious, but target polysemy is critical in Chinese-English Web retrieval. The translation restrictions, which introduce pseudo contexts, are helpful for target polysemy resolution. The application of this method to cross-language Internet searching, its applicability to other language pairs, and the effects of human-computer interaction on resolving translation ambiguity and target polysemy will be studied in the future.

References
Ballesteros, L. and Croft, W.B. (1996) "Dictionary-based Methods for Cross-Lingual Information Retrieval." Proceedings of the 7th International DEXA Conference on Database and Expert Systems Applications, 791-801.
Ballesteros, L. and Croft, W.B. (1998) "Resolving Ambiguity for Cross-Language Retrieval." Proceedings of 21st ACM SIGIR, 64-71.
Bian, G.W. and Chen, H.H. (1998) "Integrating Query Translation and Document Translation in a Cross-Language Information Retrieval System." Machine Translation and Information Soup, Lecture Notes in Computer Science, No. 1529, Springer-Verlag, 250-265.
Church, K. et al. (1989) "Parsing, Word Associations and Typical Predicate-Argument Relations." Proceedings of International Workshop on Parsing Technologies, 389-398.
Davis, M.W. (1997) "New Experiments in Cross-Language Text Retrieval at NMSU's Computing Research Lab." Proceedings of TREC-5, 39-1-39-19.
Davis, M.W. and Dunning, T. (1996) "A TREC Evaluation of Query Translation Methods for Multi-lingual Text Retrieval." Proceedings of TREC-4.
Fitzpatrick, L. and Dent, M. (1997) "Automatic Feedback Using Past Queries: Social Searching." Proceedings of 20th ACM SIGIR, 306-313.
Harman, D.K. (1997) TREC-6 Proceedings, Gaithersburg, Maryland.
Hayashi, Y., Kikui, G., and Susaki, S. (1997) "TITAN: A Cross-linguistic Search Engine for the WWW." Working Notes of AAAI-97 Spring Symposium on Cross-Language Text and Speech Retrieval, 58-65.
Huang, C.R., et al. (1995) "Introduction to Academia Sinica Balanced Corpus." Proceedings of ROCLING VIII, Taiwan, 81-99.
Hull, D.A. and Grefenstette, G. (1996) "Querying Across Languages: A Dictionary-based Approach to Multilingual Information Retrieval." Proceedings of the 19th ACM SIGIR, 49-57.
Kwok, K.L. (1997) "Evaluation of an English-Chinese Cross-Lingual Retrieval Experiment." Working Notes of AAAI-97 Spring Symposium on Cross-Language Text and Speech Retrieval, 110-114.
Working Notes of the AAAI-97 Spring Symposium on Cross-Language Text and Speech Retrieval, 110-114.
Lai, M. and Lin, T.Y. (1987) The New Lin Yutang Chinese-English Dictionary. Panorama Press Ltd, Hong Kong.
Longman (1978) Longman Dictionary of Contemporary English. Longman Group Limited.
Mei, J., et al. (1982) Tong2yi4ci2ci2lin2. Shanghai Dictionary Press.
Oard, D.W. (1997) "Alternative Approaches for Cross-Language Text Retrieval." Working Notes of the AAAI-97 Spring Symposium on Cross-Language Text and Speech Retrieval, 131-139.
Oard, D.W. and Dorr, B.J. (1996) A Survey of Multilingual Text Retrieval. Technical Report UMIACS-TR-96-19, University of Maryland, Institute for Advanced Computer Studies. http://www.ee.umd.edu/medlab/filter/papers/mlir.ps.
Pinkerton, B. (1994) "Finding What People Want: Experiences with the WebCrawler." Proceedings of WWW.
Salton, G. and Buckley, C. (1988) "Term Weighting Approaches in Automatic Text Retrieval." Information Processing and Management, Vol. 24, No. 5, 513-523.
Sheridan, P. and Ballerini, J.P. (1996) "Experiments in Multilingual Information Retrieval Using the SPIDER System." Proceedings of the 19th ACM SIGIR, 58-65.
Using Mutual Information to Resolve Query Translation Ambiguities and Query Term Weighting

1 Myung-Gil Jang, 2 Sung Hyon Myaeng and 1 Se Young Park
1 Dept. of Knowledge Information, Electronics and Telecommunications Research Institute, 161 Kajong-Dong, Yusong-Gu, Taejon, Korea 305-350, {mgjang, sypark}@etri.re.kr
2 Dept. of Computer Science, Chungnam National University, 220 Gung-Dong, Yusong-Gu, Taejon, Korea 305-764, [email protected]

Abstract
An easy way of translating queries in one language to the other for cross-language information retrieval (IR) is to use a simple bilingual dictionary. Because of the general-purpose nature of such dictionaries, however, this simple method yields a severe translation ambiguity problem. This paper describes the degree to which this problem arises in Korean-English cross-language IR and suggests a relatively simple yet effective method for disambiguation using mutual information statistics obtained only from the target document collection. In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language. Our experimental results based on the TREC-6 collection show that this method can achieve up to 85% of the monolingual retrieval case and 96% of the manual disambiguation case.

Introduction
Cross-language information retrieval (IR) enables a user to retrieve documents written in diverse languages using queries expressed in his or her own language. For cross-language IR, either queries or documents are translated to overcome the language differences. Although it is possible to apply a high-quality machine translation system to documents as in Oard & Hackett (1997), query translation has emerged as the more popular method because it is much simpler and more economical than document translation. Query translation can be done in one or more of three approaches: a dictionary-based approach, a thesaurus-based approach, or a corpus-based approach. There are three problems that a cross-language IR system using a query translation method must solve (Grefenstette, 1998). The first problem is to figure out how a term expressed in one language might be written in another. The second problem is to determine which of the possible translations should be retained. The third problem is to determine how to properly weight the importance of translation alternatives when more than one is retained. For cross-language IR between Korean and English, i.e., between Korean queries and English documents, an easy way to handle query translation is to use a Korean-English machine-readable dictionary (MRD), because such bilingual MRDs are more widely available than other resources such as parallel corpora. However, it has been known that with a simple use of bilingual dictionaries in other language pairs, retrieval effectiveness can be only 40%-60% of that of monolingual retrieval (Ballesteros & Croft, 1997). It is obvious that other additional resources need to be used for better performance. This paper focuses on the last two problems: pruning translations and calculating the weights of translation alternatives. We first describe the overall query translation process and the extent to which the ambiguity problem arises in Korean-English cross-language IR. We then propose a relatively simple yet effective method for resolving translation ambiguity using mutual information (MI) (Church and Hanks, 1990) statistics obtained only from the target document collection.
In this method, mutual information is used not only to select the best candidate but also to assign a weight to query terms in the target language.

1 Overall Query Translation Process
Our Korean-to-English query translation scheme works in four stages: keyword selection, dictionary-based query translation, bilingual word sense disambiguation, and query term weighting. Although none of the common resources such as dictionaries, thesauri, and corpora alone is complete enough to produce high-quality English queries, we decided to use a bilingual dictionary at the second stage and a target-language corpus for the third and fourth stages. Our strategy was to try not to depend on scarce resources, to make the approach practical. Figure 1 shows the four stages of Korean-to-English query translation.

Fig. 1. Four Stages for Korean-to-English Query Translation (Korean query, then keyword selection, dictionary-based query translation, bilingual word disambiguation, and query term weighting, producing the English query; the original diagram is not recoverable from the text).

1.1 Keyword Selection
At the first stage, Korean keywords to be fed into the query translation process are extracted from a quasi-natural language query. This keyword selection is done with a morphological analyzer and a stochastic part-of-speech (POS) tagger for the Korean language (Shin et al., 1996). The role of the tagger is to help select the exact morpheme sequence from the multiple candidate sequences generated by the morphological analysis. This process of employing a morphological analysis and a tagger is crucial for selecting legitimate query words from the topic statements, because Korean is an agglutinative language. Without the tagger, all the extraneous candidate keywords generated by the morphological analyzer would have to be entered into the translation process, which in and of itself would generate extraneous words, due to one-to-many mappings in the bilingual dictionary.

1.2 Dictionary-Based Query Translation
The second stage does the actual query translation based on a dictionary look-up, applying both word-by-word translation and phrase-level translation. For the correct identification of phrases in a Korean query, it would help to identify the lexical relations and produce statistical information on pairs of words in a text corpus as in Smadja (1993). Since the bilingual dictionary lacks some words that are essential for a correct interpretation of the Korean query, it is important to identify unknown words such as foreign words and transliterate them into English strings that can be matched against an English dictionary (Jeong et al., 1997).

1.3 Selection of the Correct Translations
At the word disambiguation stage, we filter out the extraneous words generated blindly by the dictionary lookup process. In addition to the POS tagger, we employ a bilingual word disambiguation technique using the co-occurrence information extracted from the collection of target documents. More specifically, the mutual information statistics between pairs of words are used to determine whether English words from different sets generated by the translation process are "compatible". In a sense, we make use of the mutual disambiguation effect among query terms. More details are described in Section 3.

1.4 Query Term Weighting
Finally, we apply our query term weighting technique to produce the final target query. The term weighting scheme basically reflects the degree of association between the translated terms: we give a high or low term weight according to the degree of mutual association between query terms. This is another area where we make use of mutual information obtained from a text corpus. The result of the four stages is a set of query terms to be used in a vector-space retrieval model.
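Viewed as code, the four stages form a simple pipeline. The following Python sketch is only an illustration of that control flow under stated assumptions: the whitespace tokenizer is a toy stand-in for the morphological analyzer and POS tagger of stage 1, and the three-entry dictionary is built from the example discussed in Section 2 below; the actual analyzer, tagger, and MRD are separate components not reproduced here.

    def select_keywords(query):
        # Toy stand-in for stage 1 (morphological analysis + POS tagging);
        # the real components produce legitimate Korean keywords.
        return query.split()

    def lookup(keywords, dictionary):
        # Stage 2: word-by-word dictionary lookup, yielding one candidate
        # set of English translations per Korean keyword.
        return [dictionary.get(k, [k]) for k in keywords]

    bilingual = {  # entries taken from the example in Section 2
        "ja-dong-cha": ["motorcar", "automobile", "car"],
        "gong-gi": ["air", "atmosphere", "empty vessel", "bowl"],
        "oh-yum": ["pollution", "contamination"],
    }
    candidates = lookup(select_keywords("ja-dong-cha gong-gi oh-yum"), bilingual)
    print(candidates)  # stages 3 and 4 (Sections 3-4) operate on these sets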
2 Analysis of Translation Ambiguity
Although an easy way to find translations of query terms is to use a bilingual dictionary, this method alone suffers from problems caused by translation ambiguity, since there are often one-to-many correspondences in a bilingual dictionary. For example, in a Korean query consisting of the three words "ja-dong-cha gong-gi oh-yum", which means air pollution caused by automobiles, each word can be translated into multiple English words when a Korean-English dictionary is used in a straightforward way. The first word of the query, "ja-dong-cha", can be translated into semantically similar but different English words like "motorcar", "automobile", and "car". The second word, "gong-gi", a homonymous word, can be translated into English words with different meanings: "air", "atmosphere", "empty vessel", and "bowl". And the last word, "oh-yum", can be translated into two English words, "pollution" and "contamination". Retaining multiple candidate words can be useful in promoting recall in a monolingual IR system, but previous research indicates that failure to disambiguate the meanings of the words can hurt retrieval effectiveness tremendously. For instance, it is obvious that a phrase like empty vessel would change the meaning of the query entirely. Even a word like contamination, a synonym of pollution, may end up retrieving unrelated documents due to the slight differences in meaning.

Table 1. The Degree of Ambiguities (cells marked "--" are not recoverable from the text)

                     Words                                    Word Pairs
           # in S. Lang.  # in T. Lang.  Average    # in S. Lang.  # in T. Lang.  Average
                                         Ambiguity                                Ambiguity
  Title         48            158          3.29          291           3212          8.83
  Short        112            447          3.99           --           1459         16.03
  Long         462           1835          3.97           --           6196         14.65

Table 1 shows the extent to which ambiguity occurs in our query translation when a Korean-English dictionary is used blindly after the morphological analysis and tagging. The three rows, title, short, and long, indicate three different ways of composing queries from the topic statements in the TREC collection. The left half shows the average number of English words per Korean word for each query type, whereas the right half shows the average number of word pairs in English that can be formed from a single word pair in Korean. The latter indicates that the disambiguation process has to select one out of more than 9 possible pairs on average, regardless of which part of the topic statements is used for formal query generation.

3 Query Translation and Mutual Information
Our strategy for cross-language IR aims at practicality in that we try not to depend on scarce resources. Along the same line of reasoning, we opted for a disambiguation approach that requires only a collection of documents in the target language, which is always available in any cross-language IR environment. Since the goal of disambiguation is to select the best pair among many alternatives as described above, the mutual information statistic is a natural choice for judging the degree to which two words co-occur within a certain text boundary.
It would be reasonable to choose the pair of words that are most strongly associated with each other, thereby eliminating those translations that are not likely to be correct ones. Mutual information values are calculated based on word co-occurrence statistics and used as a measure of the correlation between words. The mutual information MI(x, y) is defined by the following formula (Church and Hanks, 1990):

  MI(x, y) = log_2 [ p(x, y) / (p(x) p(y)) ] = log_2 [ N f_w(x, y) / (f(x) f(y)) ]    (1)

Here x and y are words occurring within a window of w words. The probabilities p(x) and p(y) are estimated by counting the numbers of observations of x and y in a corpus, f(x) and f(y), and normalizing each by N, the size of the corpus. The joint probability p(x, y) is estimated by counting the number of times, f_w(x, y), that x is followed by y within a window of w words, and normalizing it by N. In our application to query translation, the joint co-occurrence frequency f_w(x, y) uses a 6-word window, which seems large enough to capture semantic relations between query terms as well as fixed expressions (idioms such as bread and butter). We ensure that the word x is followed by the word y within the same sentence only. In our query translation scheme, MI values are used to select the most likely translations after each Korean query word has been translated into one or more English words. Our use of MI values is based on the assumption that when two words co-occur in the same query, they are likely to co-occur in the same affinity in documents. Conversely, two words that do not co-occur in the same affinity are not likely to show up in the same query. In a sense, we are conjecturing that mutual information can reveal some degree of semantic association between words. Table 2 gives some examples of MI values for the alternative word pairs for translated queries of the TREC-6 Cross-Language IR Track. These MI values were extracted from the English text corpus consisting of 1988-1990 AP news, which contains 116,759,540 words.

Table 2. Examples of MI(x, y) Values

  Word x        Word y         f(x)     f(y)    f(x,y)   MI(x,y)
  respiratory   ailment         716     1134       74    9.272506
  teddy         bear            679     7932      262    8.644690
  fossil        fuel            676    13176      333    8.381424
  air           pollution     52216     4878      890    6.011214
  research      development   24278    24213     1317    5.566768
  AIDS          spread        18575    10199      212    4.872597
  ivory         trade          1885    86608       84    4.095613
  environment   protection     7771    13139       36    3.717652
  bear          doll           7932     1394        3    3.455646
  region        country       21093   103833      358    2.948925
  point         interest      30419    51917      107    2.068232
  law           terrorism     70182     4762       20    1.944089
  treatment     result        13432    38055       22    1.614487
  terrorism     government     4762   193977       29    1.299005
  opinion       news           9124    82220       21    1.184332
  food          life          32222    40625       30    0.984281
  copy          price          6803    90594       10    0.638950
  labor         information   26571    30245       11    0.468861

When MI(x, y) is large, the word association is strong and produces credible results for the disambiguation of translations. However, if MI(x, y) < 0, we can predict that the words x and y are in complementary distribution.
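The following Python sketch shows one way to collect the counts and evaluate equation (1), assuming tokenized sentences as input. One caveat: although equation (1) is stated with log base 2, the values tabulated in Table 2 above are reproduced exactly by the natural logarithm; since the base is only a constant rescaling, it does not affect the ranking of candidate pairs or the placement of a threshold.

    import math
    from collections import Counter

    def cooccurrence_counts(sentences, window=6):
        # f(x) for every word, and f_w(x, y) for x followed by y within a
        # window of `window` words, restricted to the same sentence.
        f, f_pair = Counter(), Counter()
        for tokens in sentences:
            f.update(tokens)
            for i, x in enumerate(tokens):
                for y in tokens[i + 1:i + window]:  # y follows x in the window
                    f_pair[(x, y)] += 1
        return f, f_pair

    def mutual_information(f_x, f_y, f_xy, n):
        # Equation (1): MI(x, y) = log( n * f_w(x, y) / (f(x) * f(y)) ).
        if min(f_x, f_y, f_xy) == 0:
            return float("-inf")  # no evidence of association
        return math.log(n * f_xy / (f_x * f_y))

    # Checking two rows of Table 2 (AP 1988-1990, N = 116,759,540 words):
    N = 116759540
    print(mutual_information(52216, 4878, 890, N))  # air/pollution -> ~6.0112
    print(mutual_information(716, 1134, 74, N))     # respiratory/ailment -> ~9.2725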
4 Disambiguation and Weight Calculation
We can alleviate the translation ambiguity by discriminating against those word pairs with low MI values. The word pair with the highest MI value is considered to be the correct one among all the candidates in the two sets. Since a query is likely to be targeted at a single concept, regardless of how broad or narrow it is, we conjecture that words describing the concept are likely to have a high degree of association. Although we use the mutual information statistic to measure the association, others such as those used by Ballesteros & Croft (1998) can be considered. In the example of Section 2, each Korean word has multiple English translations due to translation ambiguity. Figure 2 shows the MI values calculated for the word pairs comprising the translations of the original query. The words under w1, w2, and w3 are the translations of the three query words, respectively. The lines indicate that mutual information values are available for the pairs, and the numbers show some of the significant MI values for the corresponding pairs among all the possible pairs.

Fig. 2. An Example of Word Pairs with MI Values (a network in which the candidate translations under w1, w2, and w3, e.g., motorcar/automobile/car, air/atmosphere/empty vessel/bowl, and pollution/contamination, are connected by lines labeled with MI values; the diagram itself is not recoverable from the text).

Our bilingual word disambiguation and weighting schemes rely on both relative and absolute magnitudes of the MI values. The algorithm first looks for the pair with the highest MI value and then selects the best candidates before and after that pair by comparing the MI values of the pairs that are connected with the initially chosen pair. This process is applied to the words immediately before or after the chosen pair in order to limit the effect of a choice that may be incorrect. It should be noted that the words not chosen in this process are not used in the translated query unless their MI values are greater than a threshold. As described below, we assume that the candidates not in the first tier may still be useful if they are strongly associated with the adjacent word selected. For example, the word pair <air, pollution>, which has the bold line representing the strongest association in the column, is chosen first. Then the three MI values for the pairs containing air are compared to select the <automobile, air> pair, resulting in <automobile, air, pollution>. If there were additional columns in the example, the same process would be applied to the rest of the network. There are three reasons why query term weighting is of some value in addition to the pruning of conceptually unrelated terms. First, our word selection method is not guaranteed to give the correct translation. The method gives a reasonable result only when two consecutive query terms are actually used together in many documents, which is a hypothesis yet to be confirmed for its validity. Second, there may be more than one strong association, and their degrees may differ from each other by a large magnitude. Third, seemingly extraneous terms may serve as a recall-enhancing device with a query expansion effect. The basic idea of our term weighting scheme is to give a large weight to the best candidate and divide the remaining quantity to assign equal weights to the rest of the candidates. In other words, the weight for the best candidate, W_b, is either 1, if the MI value is greater than a threshold, or expressed as follows:

  W_b = f(x) * 0.5 / (theta + 1) + 0.5    (2)

Here x and theta are an MI value and a threshold, respectively. The factor f(x) gives the smallest integer greater than the MI value, so that the resulting weight is the same for all candidates whose MI values are within a certain interval. Once the value of W_b is calculated, the weight for the rest of the candidates is calculated as follows:

  W_r = (1 - W_b) / (n - 1)    (3)

where n is the number of candidates. It should be noted that W_b + sum of W_r = 1. Based on our observation of the calculated MI values, we chose to use 3.0 as the cut-off value in choosing the best candidate and assigning a fairly high weight.
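A minimal Python sketch of the greedy selection and of the weighting in equations (2)-(3) follows. Several details are assumptions: tie-breaking and the exact extension order are not fully specified in the text, unseen pairs are given MI of negative infinity here, and the threshold theta is left as a parameter because the worked example discussed below (W_b = 0.83) is only reproduced with theta = 2, while the text also mentions a 3.0 cut-off.

    import math

    def choose_translations(candidates, mi):
        # Greedy disambiguation: pick the adjacent pair with the highest MI,
        # then extend left and right by comparing the MI values of pairs
        # connected to the words already chosen.
        # candidates: one list of English candidates per Korean query word;
        # mi: dict mapping (x, y) word pairs to MI values.
        def score(x, y):
            return mi.get((x, y), float("-inf"))
        n = len(candidates)
        if n < 2:
            return [c[0] for c in candidates]  # nothing to compare against
        i, x, y = max(((i, x, y) for i in range(n - 1)
                       for x in candidates[i] for y in candidates[i + 1]),
                      key=lambda t: score(t[1], t[2]))
        chosen = {i: x, i + 1: y}
        for j in range(i - 1, -1, -1):          # extend to the left
            chosen[j] = max(candidates[j], key=lambda w: score(w, chosen[j + 1]))
        for j in range(i + 2, n):               # extend to the right
            chosen[j] = max(candidates[j], key=lambda w: score(chosen[j - 1], w))
        return [chosen[k] for k in range(n)]

    def term_weights(cands, best, best_mi, theta):
        # Equations (2)-(3): W_b = f(x)*0.5/(theta+1) + 0.5, where f(x) is the
        # smallest integer greater than the MI value (W_b = 1 above the cut-off);
        # the remaining candidates share 1 - W_b equally.
        if len(cands) == 1:
            return {best: 1.0}
        w_b = 1.0 if best_mi > theta else (math.floor(best_mi) + 1) * 0.5 / (theta + 1) + 0.5
        w_r = (1.0 - w_b) / (len(cands) - 1)
        return {w: (w_b if w == best else w_r) for w in cands}

For instance, term_weights(["motorcar", "automobile", "car"], "automobile", best_mi=1.5, theta=2) yields about 0.83 for automobile and about 0.08 for each of the other two candidates, in line with the weights reported for the Fig. 2 example below.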
The cut-off value was determined purely based on the data we obtained; it can vary with the range of MI values when different corpora are used. In the example of Fig. 2, the word pair candidates between w1 and w2 are (motorcar, air), (automobile, air), and (car, air). Because the weight of the word pair (automobile, air) is W_b = 0.83, the word "automobile" receives a relatively higher term weight than the other two words, "motorcar" and "car". Finally, the optimal English query set with term weights, <(motorcar, 0.085), (automobile, 0.83), (car, 0.085)>, is generated for the translations of w1.

5 Experiments
We developed a system implementing our cross-language IR techniques and conducted some basic experiments using the collection from the Cross-Language Track of TREC-6. The 24 English queries are comprised of three fields: titles, descriptions, and narratives. These English queries were manually translated into Korean queries so that we could pretend the Korean queries had been generated by human users for cross-language IR. In order to compare cross-language IR and monolingual IR, we used the Smart 11.0 system developed by Cornell University. Our goal was to examine the efficacy of the disambiguation and term weighting schemes in our query translation. We ran our system with three sets of queries, differentiated by query length: 'title' queries with title fields only, 'short' queries with description fields only, and 'long' queries with all three fields. Retrieval effectiveness measured with 11-point average precision was used for comparison against the baseline of monolingual retrieval using the original English queries. Table 3 gives the experimental results for the four types of query set. The result for "Translated Query I" was generated with only the keyword selection and dictionary-based query translation stages. The result for "Translated Query II" was generated after all the stages of our word disambiguation and query term weighting were done. And the result for the manually disambiguated query set was generated by manually selecting the best candidate terms from Translated Query I.

Table 3. Experimental Results (11pt. P = 11-point average precision; C/M(%) = percentage of the monolingual result)

                      Title                Short                Long
  Query Sets      11pt. P  C/M(%)      11pt. P  C/M(%)      11pt. P  C/M(%)
  Original Query   0.3251    -          0.3189    -          0.2821    -
  Tran. Query I    0.2290   70.44       0.2143   67.20       0.1587   56.26
  Tran. Query II   0.2675   82.28       0.2698   84.60       0.2232   79.12
  M.Disam. Query   0.2779   85.48       0.3002   94.14       0.2433   86.25

The performance of Translated Query set I was about 70%, 67%, and 56% of monolingual retrieval for the three cases, respectively. The performance of Translated Query set II was about 82%, 85%, and 79% of monolingual retrieval for the three cases, respectively. The performance of the manually disambiguated queries, 85%, 94%, and 86% of monolingual retrieval for the three cases, respectively, can be treated as the upper limit for cross-language retrieval. The reason why they are not 100% is attributed to several factors: 1) the inaccuracy of the manual translation of the original English queries into the Korean queries, 2) the inaccuracy of the Korean morphological analyzer and the tagger in generating query words, and 3) the inaccuracy in generating candidate terms using the bilingual dictionary. The difference between Translated Query I and Translated Query II indicates that the MI-based disambiguation and term weighting schemes are effective in enhancing retrieval effectiveness.
In addition, the results show that the use of these query translation schemes is more effective with long queries than with shorter queries. This is expected because the longer the queries are, the more contextual information can be used for mutual disambiguation.

Conclusion
It has been known that query translation using a simple bilingual dictionary leads to a more than 40% drop in retrieval effectiveness due to translation ambiguity. Our query translation method uses mutual information extracted from the 1988-1990 AP corpus in order to solve the problems of bilingual word disambiguation and query term weighting. The experiments using the test collection of the TREC-6 Cross-Language Track show that the method improves retrieval effectiveness in Korean-to-English cross-language IR. The performance can be up to 85% of the monolingual retrieval case. We also found that we obtained the largest percentage increase with long queries. While the experimental results are very promising, there are several issues to be explored. First, we need to test how effectively the method can be applied. Second, we intend to experiment with other co-occurrence metrics, instead of the mutual information statistic, for possible improvement. This investigation is motivated by our observation of some counter-intuitive MI values. Third, we also plan on using different algorithms for choosing the terms and calculating the weights. In addition, we plan to use the pseudo relevance feedback method that has been proven to be effective in monolingual retrieval: terms in some top-ranked documents are added to the original query on the assumption that at least some, if not all, of the documents are relevant to the original query and that the terms appearing in the documents are useful in representing the user's information need. Here we need to determine a threshold value for the number of top-ranked documents for our cross-language retrieval situation, among other issues.

References
Douglas W. Oard and Paul Hackett (1997). Document Translation for Cross-Language Text Retrieval at the University of Maryland, The Sixth Text Retrieval Conference (TREC-6), NIST.
Gregory Grefenstette (1998). Cross-Language Information Retrieval, Kluwer Academic Publishers.
Lisa Ballesteros and W. Bruce Croft (1997). Phrasal Translation and Query Expansion Techniques for Cross-lingual Information Retrieval, SIGIR'97.
Lisa Ballesteros and W. Bruce Croft (1998). Resolving Ambiguity for Cross-language Retrieval, SIGIR'98.
Kenneth W. Church and Patrick Hanks (1990). Word Association Norms, Mutual Information, and Lexicography, Computational Linguistics, Vol. 16, No. 1, pp. 22-29.
Joong-Ho Shin, Young-Soek Han and Key-Sun Choi (1996). A HMM Part of Speech Tagger for Korean with Word Phrasal Relations, In Proceedings of Recent Advances in Natural Language Processing.
Frank Smadja (1993). Retrieving Collocations from Text: Xtract, Computational Linguistics, Vol. 19, No. 1, pp. 143-177.
Jeong, K. S., Kwon, Y. H. and Myaeng, S. H. (1997). Construction of Equivalence Classes through Automatic Extraction and Identification of Foreign Words, In Proceedings of NLPRS'97, Phuket, Thailand.
The Lexical Component of Natural Language Processing

George A. Miller
Cognitive Science Laboratory
Princeton University

Abstract
Computational linguistics is generally considered to be the branch of engineering that uses computers to do useful things with linguistic signals, but it can also be viewed as an extended test of computational theories of human cognition; it is this latter perspective that psychologists find most interesting. Language provides a critical test for the hypothesis that physical symbol systems are adequate to perform all human cognitive functions. As yet, no adequate system for natural language processing has approached human levels of performance. Of the various problems that natural language processing has revealed, polysemy is probably the most frustrating. People deal with polysemy so easily that potential ambiguities are overlooked, whereas computers must work hard to do far less well. A linguistic approach generally involves a parser, a lexicon, and some ad hoc rules for using linguistic context to identify the context-appropriate sense. A statistical approach generally involves the use of word co-occurrence statistics to create a semantic hyperspace where each word, regardless of its polysemy, is represented as a single vector. Each approach has strengths and limitations; some combination is often proposed. Various possibilities will be discussed in terms of their psychological plausibility.
Analysis System of Speech Acts and Discourse Structures Using Maximum Entropy Model*

Won Seug Choi, Jeong-Mi Cho and Jungyun Seo
Dept. of Computer Science, Sogang University
Sinsu-dong 1, Mapo-gu, Seoul, Korea, 121-742
{dolhana, jmcho}@nlprep.sogang.ac.kr, [email protected]

* This work was supported by KOSEF under the contract 97-0102-0301-3.

Abstract
We propose a statistical dialogue analysis model to determine discourse structures as well as speech acts using a maximum entropy model. The model can automatically acquire probabilistic discourse knowledge from a discourse-tagged corpus to resolve ambiguities. We propose the idea of tagging discourse segment boundaries to represent the structural information of discourse. Using this representation we can effectively combine speech act analysis and discourse structure analysis in one framework.

Introduction
To understand a natural language dialogue, a computer system must be sensitive to the speaker's intentions indicated through utterances. Since identifying the speech acts of utterances is very important for identifying speakers' intentions, it is an essential part of a dialogue analysis system. It is difficult, however, to infer the speech act from a surface utterance, since an utterance may represent more than one speech act according to the context. Most work done in the past on dialogue analysis has analyzed speech acts based on knowledge such as recipes for plan inference and domain-specific knowledge (Litman (1987), Carberry (1989), Hinkelman (1990), Lambert (1991), Lambert (1993), Lee (1998)). Since these knowledge-based models depend on costly hand-crafted knowledge, they are difficult to scale up and expand to other domains. Recently, machine learning models using a discourse-tagged corpus have been utilized to analyze speech acts in order to overcome such problems (Nagata (1994a), Nagata (1994b), Reithinger (1997), Lee (1997), Samuel (1998)). Machine learning offers promise as a means of associating features of utterances with particular speech acts, since computers can automatically analyze large quantities of data and consider many different feature interactions. These models are based on features such as cue phrases, change of speaker, short utterances, utterance length, speech act tag n-grams, and word n-grams. In many cases, the speech act of an utterance is influenced by the context of the utterance, i.e., the previous utterances, so it is very important to reflect information about the context in the model. Discourse structures of dialogues are usually represented as hierarchical structures, which reflect embedded sub-dialogues (Grosz (1986)) and provide very useful context for speech act analysis. For example, utterance 7 in Figure 1 has several possible surface speech acts, such as acknowledge, inform, and response. Such an ambiguity can be resolved by analyzing the context. If we consider the n utterances linearly adjacent to utterance 7, i.e., utterances 6, 5, etc., as context, we will get acknowledge or inform with high probability as the speech act of utterance 7. However, as shown in Figure 1, utterance 7 is a response to utterance 2, which is hierarchically recent to utterance 7 according to the discourse structure of the dialogue. If we know the discourse structure of the dialogue, we can determine the speech act of utterance 7 as response. Some researchers have used the structural information of discourse in speech act analysis (Lee (1997), Lee (1998)).
It is not, however, enough to cover various dialogues, since they used a restricted rule-based model such as RDTN (Recursive Dialogue Transition Networks) for discourse structure analysis. Most of the previous related works, to our knowledge, tried to determine the speech act of an utterance, but did not consider statistical models for determining the discourse structure of a dialogue.

Figure 1: An example of a dialogue with speech acts
  1) User : I would like to reserve a room.                          request
  2) Agent: What kind of room do you want?                           ask-ref
  3) User : What kind of room do you have?                           ask-ref
  4) Agent: We have single and double rooms.                         response
  5) User : How much are those rooms?                                ask-ref
  6) Agent: Single costs 30,000 won and double costs 40,000 won.     response
  7) User : A single room, please.                 acknowledge / inform / response

In this paper, we propose a dialogue analysis model to determine both the speech acts of utterances and the discourse structure of a dialogue using a maximum entropy model. In the proposed model, the speech act analysis and the discourse structure analysis are combined in one framework so that they can easily provide feedback to each other. For the discourse structure analysis, we suggest a statistical model with discourse segment boundaries (DSBs), similar to the idea of gaps suggested for statistical parsing (Collins (1996)). For training, we use a corpus tagged with various discourse knowledge. To overcome the problem of data sparseness, which is common for corpus-based work, we use split partial context as well as whole context. After explaining the tagged dialogue corpus we used in section 1, we discuss the statistical models in detail in section 2. In section 3, we explain experimental results. Finally, we conclude in section 4.

1 Discourse tagging
In this paper, we use a Korean dialogue corpus transcribed from recordings in real fields such as hotel reservation, airline reservation and tour reservation. This corpus consists of 528 dialogues, 10,285 utterances (19.48 utterances per dialogue). Each utterance in the dialogues is manually annotated with discourse knowledge such as speaker (SP), syntactic pattern (ST), speech act (SA) and discourse structure (DS) information. Figure 2 shows a part of the annotated dialogue corpus.¹ SP has the value either "User" or "Agent" depending on the speaker.

Figure 2: A part of the annotated dialogue corpus (the /KS/ lines, containing the original Korean sentences, are not reproduced here)
  /SP/User
  /EN/I'm a student and registered for a language course at University of Georgia in U.S.
  /ST/[decl,be,present,no,none,none]
  /SA/introducing-oneself
  /DS/[2]
  /SP/User
  /EN/I have some questions about lodgings.
  /ST/[decl,paa,present,no,none,none]
  /SA/ask-ref
  /DS/[2] --> Continue
  /SP/Agent
  /EN/There is a dormitory in University of Georgia for language course students.
  /ST/[decl,pvg,present,no,none,none]
  /SA/response
  /DS/[2]
  /SP/User
  /EN/Then, is meal included in tuition fee?
  /ST/[yn_quest,pvg,present,no,none,then]
  /SA/ask-if
  /DS/[2.1]

¹ KS represents the Korean sentence and EN represents the translated English sentence.

The syntactic pattern consists of selected syntactic features of an utterance, which approximate the utterance. In a real dialogue, a speaker can express identical content with different surface utterances according to personal linguistic sense. The syntactic pattern generalizes these surface utterances using syntactic features. The syntactic pattern used in Lee (1997) consists of four syntactic features, Sentence Type, Main-Verb, Aux-Verb and Clue-Word, because these features provide strong cues for inferring speech acts.
We add two more syntactic features, Tense and Negative Sentence, to the syntactic pattern and elaborate the values of the syntactic features. Table 1 shows the syntactic features of a syntactic pattern with their possible values. The syntactic features are automatically extracted from the corpus using a conventional parser (Kim (1994)). Manual tagging of speech acts and discourse structure information was done by graduate students majoring in dialogue analysis and post-processed for consistency. The classification of speech acts is very subjective without an agreed criterion. In this paper, we classified the 17 types of speech acts that appear in the dialogue corpus. Table 2 shows the distribution of speech acts in the tagged dialogue corpus. Discourse structures are determined by focusing on the subject of the current dialogue and are hierarchically constructed according to the subject. The discourse structure information tagged in the corpus is an index that represents the hierarchical structure of discourse, reflecting the depth of the indentation of discourse segments. The proposed system transforms this index information into discourse segment boundary (DSB) information to acquire various statistical information. In section 2.2.1, we describe the DSBs in detail.

Table 1: Syntactic features used in the syntactic pattern
  Syntactic feature   Values                                           Notes
  Sentence Type       decl, imperative, wh_question, yn_question       The mood of an utterance.
  Main-Verb           pvg, pvd, paa, pad, be, know, ask, etc.          The type of the main verb. For special
                      (total 88 kinds)                                 verbs, lexical items are used.
  Tense               past, present, future                            The tense of an utterance.
  Negative Sentence   Yes or No                                        Yes if an utterance is negative.
  Aux-Verb            serve, seem, want, will, etc. (total 31 kinds)   The modality of an utterance.
  Clue-Word           Yes, No, OK, etc. (total 26 kinds)               A special word used in utterances with
                                                                       particular speech acts.

Table 2: The distribution of speech acts in the corpus
  Speech Act Type      Ratio(%)    Speech Act Type        Ratio(%)
  Accept                 2.49      Introducing-oneself      6.75
  Acknowledge            5.75      Offer                    0.40
  Ask-confirm            3.16      Opening                  6.58
  Ask-if                 5.36      Promise                  2.42
  Ask-ref               13.39      Reject                   1.07
  Closing                3.39      Request                  4.96
  Correct                0.03      Response                24.73
  Expressive             5.64      Suggest                  1.98
  Inform                11.90      Total                  100.00

2 Statistical models
We construct two statistical models: one for speech act analysis and the other for discourse structure analysis. We integrate the two models using the maximum entropy model. In the following subsections, we describe these models in detail.

2.1 Speech act analysis model
Let U_{1,n} denote a dialogue consisting of a sequence of n utterances, U_1, U_2, ..., U_n, and let S_i denote the speech act of U_i. With these notations, P(S_i | U_{1,i}) means the probability that S_i becomes the speech act of utterance U_i given a sequence of utterances U_1, U_2, ..., U_i. We can approximate the probability P(S_i | U_{1,i}) by the product of the sentential probability P(U_i | S_i) and the contextual probability P(S_i | U_{1,i-1}, S_{1,i-1}). Also, we can approximate P(S_i | U_{1,i-1}, S_{1,i-1}) by P(S_i | S_{1,i-1}) (Charniak (1993)):

  P(S_i | U_{1,i}) = P(S_i | S_{1,i-1}) P(U_i | S_i)    (1)

It has been widely believed that there is a strong relation between the speaker's speech act and the surface utterance expressing that speech act (Hinkelman (1989), Andernach (1996)). That is, the speaker utters a sentence which best expresses his/her intention (speech act), so that the hearer can easily infer what the speaker's speech act is. The sentential probability P(U_i | S_i) represents the relationship between speech acts and the features of surface sentences. Therefore, we approximate the sentential probability using the syntactic pattern P_i:

  P(U_i | S_i) = P(P_i | S_i)    (2)

The contextual probability P(S_i | S_{1,i-1}) is the probability that an utterance with speech act S_i is uttered given that utterances with speech acts S_1, S_2, ..., S_{i-1} were previously uttered. Since it is impossible to consider all preceding utterances S_1, S_2, ..., S_{i-1} as contextual information, we use an n-gram model. Generally, dialogues have a hierarchical discourse structure, so we approximate the context as the speech acts of the n utterances that are hierarchically recent to the utterance. An utterance A is hierarchically recent to an utterance B if A is adjacent to B in the tree structure of the discourse (Walker (1996)). Equation (3) represents the approximated contextual probability in the case of using trigrams, where U_j and U_k are hierarchically recent to the utterance U_i, with 1 <= j < k <= i-1.
The sentential probability P(UilSO represents the relationship between the speech acts and the features of surface sentences. Therefore, we approximate the sentential probability using the syntactic pattern Pi" P(Ui I Si) = P(PiISi) (2) The contextual probability P(Si I $1, ~ - 1) is the probability that utterance with speech act S i is uttered given that utterances with speech act $1, $2 ..... S/- 1 were previously uttered. Since it is impossible to consider all preceding utterances $1, $2 ..... Si - ~ as contextual information, we use the n-gram model. Generally, dialogues have a hierarchical discourse structure. So we approximate the context as speech acts of n utterances that are hierarchically recent to the utterance. An utterance A is hierarchically recent to an utterance B if A is adjacent to B in the tree structure of the discourse (Walker (1996)). Equation (3) represents the approximated contextual probability in the case of using trigram where Uj and U~ are hierarchically recent to the utterance U, where l<j<k<i-1. 232 P(Si I S],, - ,) = P(Si I Sj, Sk) (3) As a result, the statistical model for speech act analysis is represented in equation (4). P(S, I U,, 0 = P(Si I S,,, - ,)P(Ui I S,) = P(Si IS j, Sk)P(Pi [St) (4) 2.2 Discourse structure analysis model 2.2.1 Discourse segment boundary tagging We define a set of discourse segment boundaries (DSBs) as the markers for discourse structure tagging. A DSB represents the relationship between two consecutive utterances in a dialogue. Table 3 shows DSBs and their meanings, and Figure 3 shows an example of DSB tagged dialogue. DSB Meaning DE Start a new dialogue DC Continue a dialogue SS Start a sub-dialogue nE End n level sub-dialogues nB nE and then SS Table 3: DSBs and their meanings DS DSB 1) User : I would like to reserve a room. I NULL 2) Agent : What kind of room do you want? 1.1 SS 3) User : What kind of room do you have? 1.1.1 SS 4) Agent : We have single and double rooms. 1.1.1 DC 5) User : How much are those rooms? 1.!.2 I B 6) Agent : Single costs 30,000 won and double costs 40,000 won. 1.1.2 DC 7) User : A single room, please. I. 1 1E Figure 3: An example of DSB tagging Since the DSB of an utterance represents a relationship between the utterance and the previous utterance, the DSB of utterance 1 in the example dialogue becomes NULL. By comparing utterance 2 with utterance 1 in Figure 3, we know that a new sub-dialogue starts at utterance 2. Therefore the DSB of utterance 2 becomes SS. Similarly, the DSB of utterance 3 is SS. Since utterance 4 is a response for utterance 3, utterance 3 and 4 belong to the same discourse segment. So the DSB of utterance 4 becomes DC. Since a sub-dialogue of one level (i.e., the DS 1.1.2) consisting of utterances 3 and 4 ends, and new sub-dialogue starts at utterance 5. Therefore, the DSB of utterance 5 becomes lB. Finally, utterance 7 is a response for utterance 2, i.e., the sub-dialogue consisting of utterances 5 and 6 ends and the segment 1.1 is resumed. Therefore the DSB of utterance 7 becomes 1E. 2.2.2 Statistical model for discourse structure analysis We construct a statistical model for discourse structure analysis using DSBs. In the training phase, the model transforms discourse structure (DS) information in the corpus into DSBs by comparing the DS information of an utterance with that of the previous utterance. After transformation, we estimate probabilities for DSBs. 
In the analyzing process, the goal of the system is simply determining the DSB of a current utterance using the probabilities. Now we describe the model in detail. Let G, denote the DSB of U,. With this notation, P(GilU],O means the probability that G/ becomes the DSB of utterance U~ given a sequence of utterances U~, U 2 ..... Ui. As shown in the equation (5), we can approximate P(GilU~,O by the product of the sentential probability P(Ui I Gi) and the contextual probability P( Gi I U ], i - ]. GI, i - ]) : P(GilU1, i) = P(Gi I U], i - ], Gi, i - OP(Ui I Gi) (5) In order to analyze discourse structure, we consider the speech act of each corresponding utterance. Thus we can approximate each utterance by the corresponding speech act in the sentential probability P(Ui I Gi): P(Ui I G0 --- P(SilGO (6) 233 Let F, be a pair of the speech act and DSB of U, to simplify notations: Fi ::- (Si, Gi) (7) We can approximate the contextual probability P(GilUl.i-i, Gl.i-l) as equation (8) in the case of using trigram. P(Gi IUl, i-l,Gl, i-1) = P(Gi I FI, i - 1) = P(Gi I Fi - 2, Fi - l) (8) As a result, the statistical model for the discourse structure analysis is represented as equation (9). P(Gi I UI. i) = P(Gi IUl.i-i, Gl.i-OP(Ui IGi) = P(G, I F~ - 2, F, - OP(& I GO (9) 2.3 Integrated dialogue analysis model Given a dialogue UI,., P(Si, Gi IUl, i) means the probability that S~ and G i will be, respectively, the speech act and the DSB of an utterance U/ given a sequence of utterances Ut, U2 ..... U~. By using a chain rule, we can rewrite the probability as in equation (10). P(Si, Gi I UI, i) = P(SiIUI, i)P(GiISi, UI, i) (10) In the right hand side (RHS) of equation (10), the first term is equal to the speech act analysis model shown in section 2.1. The second term can be approximated as the discourse structure analysis model shown in section 2.2 because the discourse structure analysis model is formulated by considering utterances and speech acts together. Finally the integrated dialogue analysis model can be formulated as the product of the speech act analysis model and the discourse structure analysis model: e(Si, Gi I Ul.i) = P(S, I ULi)P(Gi I Ul.i) = P(S, I Sj, &)P(P, I SO x P(G~ I Fi - 2, F~ - OP(Si I GO (10 2.4 Maximum entropy model All terms in RHS of equation (11) are represented by conditional probabilities. We estimate the probability of each term using the following representative equation: P(a lb)= P(a,b) y~ P(a', b) a (12) We can evaluate P(a,b) using maximum entropy model shown in equation (13) (Reynar 1997). P(a,b) = lrI" I Ot[ '(''b) i=1 where 0 < c~ i < oo, i = { 1,2 ..... k } (13) In equation (13), a is either a speech act or a DSB depending on the term, b is the context (or history) of a, 7r is a normalization constant, and is the model parameter corresponding to each feature functionf. In this paper, we use two feature functions: unified feature function and separated feature function. The former uses the whole context b as shown in equation (12), and the latter uses partial context split-up from the whole context to cope with data sparseness problems. Equation (14) and (15) show examples of these feature functions for estimating the sentential probability of the speech act analysis model. 
The feature functions for the contextual probability can be constructed in a similar way to those for the sentential probability. These are unified feature functions with feature trigrams and separated feature functions with distance-1 bigrams and distance-2 bigrams. Equation (16) shows an example of a unified feature function, and equations (17) and (18), which are derived by splitting the condition on b in equation (16), show examples of separated feature functions for the contextual probability of the speech act analysis model:

  f(a, b) = 1 if a = response and b = (User:request, Agent:ask-ref); 0 otherwise    (16)
  where b is the information of U_j and U_k defined in equation (3)

  f(a, b) = 1 if a = response and b_{-1} = Agent:ask-ref; 0 otherwise    (17)
  where b_{-1} is the information of U_k defined in equation (3)

  f(a, b) = 1 if a = response and b_{-2} = User:request; 0 otherwise    (18)
  where b_{-2} is the information of U_j defined in equation (3)

Similarly, we can construct feature functions for the discourse structure analysis model. For the sentential probability of the discourse structure analysis model, the unified feature function is identical to the separated feature function, since the whole context includes only a speech act. Using the separated feature functions, we can resolve the data sparseness problem when there are not enough training examples to which the unified feature functions are applicable.
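Contextual feature functions can be sketched the same way, again in Python, with the context represented as hypothetical (speaker, speech act) pairs for U_j and U_k. Equations (17)-(18) simply split the condition of equation (16), so that a partially matching context can still contribute evidence.

    def unified_context_feature(a, b):
        # Equation (16): fires on the full two-utterance context (U_j, U_k).
        return int(a == "response" and
                   b == (("User", "request"), ("Agent", "ask-ref")))

    def separated_context_features(a, b):
        # Equations (17)-(18): distance-2 part (U_j) and distance-1 part (U_k).
        b2, b1 = b
        f17 = int(a == "response" and b1 == ("Agent", "ask-ref"))
        f18 = int(a == "response" and b2 == ("User", "request"))
        return f17, f18

    context = (("User", "request"), ("Agent", "ask-ref"))
    print(unified_context_feature("response", context))      # 1
    print(separated_context_features("response", context))   # (1, 1)
    # With only a partial match, the unified feature is silent but one
    # separated feature still fires:
    partial = (("Agent", "inform"), ("Agent", "ask-ref"))
    print(unified_context_feature("response", partial))      # 0
    print(separated_context_features("response", partial))   # (1, 0)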
3 Experiments and results
In order to test the proposed model, we used the tagged corpus described in section 1. The corpus is divided into a training corpus with 428 dialogues, 8,349 utterances (19.51 utterances per dialogue), and a testing corpus with 100 dialogues, 1,936 utterances (19.36 utterances per dialogue). Using the Maximum Entropy Modeling Toolkit (Ristad 1996), we estimated the model parameter alpha_i corresponding to each feature function f_i in equation (13). We experimented with two models for each analysis model: Model I uses only the unified feature functions, and Model II uses the unified feature functions and the separated feature functions together. Among the ways to combine the unified feature functions with the separated feature functions, we chose the combination in which a separated feature function is used only when there is no training example applicable to the unified feature function. First, we tested the speech act analysis model and the discourse structure analysis model. Tables 4 and 5 show the results for each analysis model. The results shown in Table 4 were obtained by using the correct structural information of discourse, i.e., DSBs, as marked in the tagged corpus. Similarly, those in Table 5 were obtained by using the correct speech act information from the tagged corpus.

Table 4. Results of speech act analysis
                        Accuracy (Closed test)      Accuracy (Open test)
  Candidates            Top-1       Top-3           Top-1       Top-3
  Lee (1997)            78.59%      97.88%
  Samuel (1998)                                     73.17%
  Reithinger (1997)                                 74.70%
  Model I               90.65%      99.66%          81.61%      93.18%
  Model II              90.65%      99.66%          83.37%      95.35%

Table 5. Results of discourse structure analysis
                        Accuracy (Open test)
  Candidates            Top-1       Top-3
  Model I               81.51%      98.55%
  Model II              83.21%      99.02%

In the closed test in Table 4, the results of Model I and Model II are the same, since the probabilities of the unified feature functions always exist in this case. As shown in Table 4, the proposed models show better results than previous work, Lee (1997). As shown in Tables 4 and 5, Model II shows better results than Model I in all cases. We believe that the separated feature functions are effective against the data sparseness problem. In the open test in Table 4, it is difficult to compare the proposed model directly with previous works like Samuel (1998) and Reithinger (1997), because the test data used in those works consists of English dialogues while we use Korean dialogues. Furthermore, the speech acts used in the experiments are different. We will test our model using the same data with the same speech acts as used in those works in future work. We also tested the integrated dialogue analysis model, in which the speech act and discourse structure analysis models are integrated. The integrated model uses Model II for each analysis model, because it showed better performance. In this model, after the system determines the speech act and DSB of an utterance, it uses the results to process the next utterance, recursively. The experimental results are shown in Table 6. As shown in Table 6, the results of the integrated model are worse than the results of each analysis model alone. For the top-1 candidate, the performance of the speech act analysis fell by about 2.89% and that of the discourse structure analysis by about 7.07%. Nevertheless, the integrated model still shows better performance than previous work in speech act analysis.

Table 6. Results of the integrated analysis model
                                             Accuracy (Open test)
                                             Top-1       Top-3
  Result of speech act analysis              80.48%      94.58%
  Result of discourse structure analysis     76.14%      95.45%

Conclusion
In this paper, we propose a statistical dialogue analysis model which can perform both speech act analysis and discourse structure analysis using a maximum entropy model. The model can automatically acquire discourse knowledge from a discourse-tagged corpus to resolve ambiguities. We defined DSBs to represent the structural relationship of discourse between two consecutive utterances in a dialogue and used them for statistically analyzing both the speech act of an utterance and the discourse structure of a dialogue. By using the separated feature functions together with the unified feature functions, we could alleviate the data sparseness problems and improve the system performance. The model can, we believe, analyze dialogues more effectively than previous works, because it manages speech act analysis and discourse structure analysis at the same time using the same framework.

Acknowledgements
The authors are grateful to the anonymous reviewers for their valuable comments on this paper. Without their comments, we might have missed important mistakes made in the original draft.

References
Andernach, T. 1996. A Machine Learning Approach to the Classification of Dialogue Utterances. Proceedings of NeMLaP-2.
Berger, Adam L., Stephen A. Della Pietra, and Vincent J. Della Pietra. 1996.
A Maximum Entropy Approach to Natural Language Processing. Computational Linguistics, 22(1):39-71.
Carberry, Sandra. 1989. A Pragmatics-Based Approach to Ellipsis Resolution. Computational Linguistics, 15(2):75-96.
Charniak, Eugene. 1993. Statistical Language Learning. A Bradford Book, The MIT Press, Cambridge, Massachusetts, London, England.
Collins, M. J. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191.
Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, Intentions, and the Structure of Discourse. Computational Linguistics, 12(3):175-204.
Hinkelman, E. A. 1990. Linguistic and Pragmatic Constraints on Utterance Interpretation. Ph.D. Dissertation, University of Rochester, Rochester, New York.
Hinkelman, E. A. and J. F. Allen. 1989. Two Constraints on Speech Act Ambiguity. Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 212-219.
Kim, Chang-Hyun, Jae-Hoon Kim, Jungyun Seo, and Gil Chang Kim. 1994. A Right-to-Left Chart Parsing for Dependency Grammar using Headable Path. Proceedings of the 1994 International Conference on Computer Processing of Oriental Languages (ICCPOL), pages 175-180.
Lambert, Lynn. 1993. Recognizing Complex Discourse Acts: A Tripartite Plan-Based Model of Dialogue. Ph.D. Dissertation, The University of Delaware, Newark, Delaware.
Lambert, Lynn and Sandra Carberry. 1991. A Tripartite Plan-based Model of Dialogue. Proceedings of ACL, pages 47-54.
Lee, Jae-won, Jungyun Seo, and Gil Chang Kim. 1997. A Dialogue Analysis Model With Statistical Speech Act Processing For Dialogue Machine Translation. Proceedings of Spoken Language Translation (Workshop in conjunction with (E)ACL'97), pages 10-15.
Lee, Hyunjung, Jae-Won Lee and Jungyun Seo. 1998. Speech Act Analysis Model of Korean Utterances for Automatic Dialog Translation. Journal of Korea Information Science Society (B): Software and Applications, 25(10):1443-1552 (in Korean).
Litman, Diane J. and James F. Allen. 1987. A Plan Recognition Model for Subdialogues in Conversations. Cognitive Science, pages 163-200.
Nagata, M. and T. Morimoto. 1994a. First steps toward statistical modeling of dialogue to predict the speech act type of the next utterance. Speech Communication, 15:193-203.
Nagata, M. and T. Morimoto. 1994b. An information-theoretic model of discourse for next utterance type prediction. Transactions of the Information Processing Society of Japan, 35(6):1050-1061.
Reithinger, N. and M. Klesen. 1997. Dialogue act classification using language models. Proceedings of EuroSpeech-97, pages 2235-2238.
Reynar, J. C. and A. Ratnaparkhi. 1997. A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 16-19.
Ristad, E. 1996. Maximum Entropy Modeling Toolkit. Technical Report, Department of Computer Science, Princeton University.
Samuel, Ken, Sandra Carberry, and K. Vijay-Shanker. 1998. Computing Dialogue Acts from Features with Transformation-Based Learning. Applying Machine Learning to Discourse Processing: Papers from the 1998 AAAI Spring Symposium, Stanford, California, pages 90-97.
Walker, Marilyn A. 1996. Limited Attention and Discourse Structure. Computational Linguistics, 22(2):255-264.
Measuring Conformity to Discourse Routines in Decision-Making Interactions

Sherri L. Condon                Claude G. Čech                  William R. Edwards
Department of English           Department of Psychology        Center for Advanced Computer Studies
[email protected]               [email protected]                [email protected]
University of Southwestern Louisiana / Université des Acadiens
Lafayette, LA 70504

Abstract
In an effort to develop measures of discourse-level management strategies, this study examines a measure of the degree to which decision-making interactions consist of sequences of utterance functions that are linked in a decision-making routine. The measure is applied to 100 dyadic interactions elicited in both face-to-face and computer-mediated environments with systematic variation of task complexity and message-window size. Every utterance in the interactions is coded according to a system that identifies decision-making functions and other routine functions of utterances. Markov analyses of the coded utterances make it possible to measure the relative frequencies with which sequences of 2 and 3 utterances trace a path in a Markov model of the decision routine. These proportions suggest that interactions in all conditions adhere to the model, although we find greater conformity in the computer-mediated environments, which is probably due to increased processing and attentional demands for greater efficiency. The results suggest that measures based on Markov analyses of coded interactions can provide useful measures for comparing discourse-level properties, for correlating discourse features with other textual features, and for analyses of discourse management strategies.

Introduction
Increasingly, research in computational linguistics has contributed to knowledge about the organization and processing of human interaction through quantitative analyses of annotated texts and dialogues (e.g., Carletta et al., 1997; Cohen et al., 1990; Maier et al., 1997; Nakatani et al., 1995; Passonneau, 1996; Walker, 1996). This program of research presents opportunities to examine the relation between linguistic form and pragmatic functions using large corpora to test hypotheses and to detect covariance among discourse features. For example, Di Eugenio et al. (1997) demonstrate that utterances coded as acceptances were more likely to corefer to an item in a previous turn. Grosz and Hirschberg (1992) investigate intonational correlates of discourse structure. These researchers recognize that discourse-level structures and strategies influence syntactic and phonological encoding. The regularities observed can be exploited to resolve language processing problems such as ambiguity and coreference, to integrate high-level planning with encoding and interpretation strategies, or to refine statistics-based systems. In order to identify and utilize discourse-based structures and strategies, researchers need methods of linking observable forms with discourse functions, and our focus on discourse management strategies has motivated similar goals. Condon & Čech (1996a,b) use annotated decision-making interactions to investigate properties of discourse routines and to examine the effects of communication features such as screen size on computer-mediated interactions (Čech & Condon, 1997). In this paper we present a method for measuring the degree to which an interaction conforms to a discourse routine, which not only allows more refined analyses of routine behavior, but also permits fine-grained comparison of discourses obtained under different conditions.
In our research, discourse routines have emerged as a fundamental strategy for managing verbal interaction, resulting in the kind of behavior that researchers label adjacency pairs, such as question/answer or request/compliance, as well as more complex sequences of functions. Discourse routines occur when a particular act or function is routinely continued by another, and as "predictable defaults," routine continuations maximize efficiency by requiring minimal encoding while receiving highest priority among possible interpretations. Moreover, discourse routines can be exploited by failing to conform to routine expectations (Schegloff, 1986). Consequently, interactions will not necessarily conform to routines at every opportunity, which raises the problem of measuring the extent to which they do conform. Condon et al. (1997) develop a measure based on Markov analyses of coded interactions, and the measure is employed here with a larger corpus in which students engage in a more complex decision-making task. These measures provide evidence for the claim that participants in computer-mediated decision-making interactions rely on a simple decision routine more than participants in face-to-face decision-making interactions. The measures suggest that conformity to the routine is not strongly affected by any of the other variables examined in the study (task complexity, screen size), even though some participants in the computer-mediated conditions of the more complex task adopted turn management strategies that would be untenable in face-to-face interaction.

Data Collection
The initial corpus of 32 interactions involving simple decision-making tasks was obtained under conditions which were similar, but not identical, to the conditions under which the 68 interactions involving a more complex task were obtained. One obvious difference is that participants in the first study completed 2 simple tasks planning a social event (a getaway weekend, a barbecue), while participants in the second study completed a single, more complex task: planning a televised ceremony to present the MTV music video awards. Furthermore, all interactions in the first study were mixed-sex pairs, whereas interactions in the MTV study include mixed and same-sex pairs. All participants were native English speakers at the University of Southwestern Louisiana who received credit in Introductory Psychology classes for their participation.

In both studies, the dyads who interacted face-to-face sat together at a table with a tape recorder, while the pairs who interacted electronically were seated at microcomputers in separate rooms. The latter communicated by typing messages which appeared on the sender's monitor as they were typed, but did not appear on the receiver's monitor until the sender pressed a SEND key. The software incorporated this feature to provide well-defined turns and to make it possible to capture and change messages in future studies. In addition, to minimize message permanence and more closely approximate face-to-face interaction, text on the screen is always produced by only one participant at a time. In the original study, the message area was approximately 4 lines long, and it was not clear how much this factor influenced our results. Consequently, in the MTV study, the message area of the screen was either 4, 10, or 18 lines.
Other differences in the computer-mediated conditions of the two studies include differences in the arrangement of information on the screen, such as a brief description of the MTV problem which remained at the bottom of the screen. We also used an answer form in the first study, but not the second. More details about the communication systems in the two studies are provided in Condon & Čech (1996a) and Čech & Condon (1998).

Data Analysis
Face-to-face interactions were transcribed from audio recordings into computer files using a set of conventions established in a training manual (Condon & Čech, 1992). All interactions were divided into utterance units defined as single clauses with all complements and adjuncts, including sentential complements and subordinate clauses. Interjections like yeah, now, well, and ok were considered to be separate utterances due to the salience of their interactional, as opposed to propositional, content.

The coding system includes categories for request routines and a decision routine involving 3 acts or functions (Condon, 1986; Condon & Čech, 1996a,b). We believe that the decision routine observed in the interactions instantiates a more general schema for decision-making that may be routinized in various ways. In the abstract schema, each decision has a goal; proposals to satisfy the goal must be provided, these proposals must be evaluated, and there must be conventions for determining, from the evaluations, whether the proposals are adopted as decisions. Routines make it possible to map from the general schema to sequences of routine utterance functions. Default principles associated with routines can determine the encoding of these routine functions in sequences of utterances. According to the model we are developing, a sequence of routine continuations is mapped into a sequence of adjacent utterances in one-to-one fashion by default. If the routine specifies that a routine continuation must be provided by a different speaker, as in adjacency pairs, then the default is for the different speaker to produce the routine continuation immediately after the first pair-part. Since these are defaults, we can expect that they may be weakened or overridden in specific circumstances. At the same time, if our reasoning is correct, we should be able to find evidence of routines operating in the manner we have described.

(1) provides an excerpt from a computer-mediated interaction in which utterances are labeled to illustrate the routine sequence. P1 and P2 designate first and second speaker (an utterance that is a continuation by the same speaker is not annotated for speaker).

(1) a. P1: [orientation] who should win best Alternative video.
    b. P2: [suggestion] Pres. of the united states
    c. P1: [agreement] ok
    d. P2: [orientation] who else should nominate.
    e.     [suggestion] bush. goo-goodolls oasis
    f. P1: [agreement] sounds good, [...]

and (2) provides an annotated excerpt from a face-to-face interaction.

(2) a. P1: [orientation] who's going to win?
    b.     [suggestion] Mariah?
    c. P2: [agreement] yeah probably
    d. P1: [orientation] alright Mariah wins what song?
    e. P2: [suggestion] uh Fantasy or whatever?
    f. P1: [agreement] that's it that's the same song I was thinking of
    g.     [orientation] alright alternative?
    h.     [suggestion] Alanis?

Coded as "Orients Suggestion," orientations like (1a, 2a) establish goals for each decision, while suggestions like (1b, e) and (2b, e, h) formulate proposals within these constraints.
Agreements like (1c, f) and (2c, f), which are coded "Agrees with Suggestion," and disagreements ("Disagrees with Suggestion") evaluate a proposal and establish consensus. The routine does not specify that a suggestion which routinely continues an orientation must be produced by a different speaker: the suggestion may be elicited from a different speaker, as in (1a, b) and (2d, e), or it may be provided by the same speaker, as in (1d, e) and (2a, b). However, an agreement that routinely continues a suggestion is produced by a different speaker, as (1b, c), (1e, f), (2b, c) and (2e, f) attest.

Other routine functions are also classified in the coding system. Utterances coded as "Requests Action" propose behaviors in the speech event, such as (3).

(3) a. well list your two down there (oral)
    b. ok, now we need to decide another band to perform (computer-mediated)
    c. Give some suggestions (computer-mediated)

Utterances coded as "Requests Information" seek information not already provided in the discourse, as in (1a, 2a). Utterances that seek confirmation or verification of provided information, however, are coded as "Requests Validation." The category "Elaborates-Repeats" serves as a catch-all for utterances with comprehensible content that do not function as requests or suggestions or as responses to these. Two categories are included to assess affective functions: "Requests/Offers Personal Information" for personal comments not required to complete the task and "Jokes-Exaggerates" for utterances that inject humor. The category "Discourse Marker" is used for a limited set of forms: ok, well, anyway, so, now, let's see, and alright. Another category, Metalanguage, was used to code utterances about the talk, such as (3b, c).

In the initial corpus, the categories described above are organized into 3 classes: MOVE, RESPONSE, and OTHER, and each utterance was assigned a function in each of these three groups of categories. In cases involving no clear function in a class, the utterance was assigned a No Clear code. A complete list of categories is presented at the bottom of Figure 1, and more complete descriptions can be found in Condon and Čech (1992). In the modified system used to code the MTV corpus, the criteria for classifying all of these categories remain the same.

The data were coded by students who received course credit as research assistants. Coders were trained by coding and discussing excerpts from the data. Reliability tests were administered frequently during the coding process. Reliability scores were high (80-100% agreement with a standard) for frequently occurring move and response functions, discourse markers, and the two categories designed to identify affective functions. Scores for infrequent move and response functions, metalanguage, and orientations were somewhat less reliable.

Results
In the initial study, the 16 face-to-face interactions produced a corpus of 4141 utterances (ave. 259 per discourse), while the 16 computer-mediated interactions consisted of 918 utterances (ave. 57). In the MTV study, the 8 face-to-face interactions produced 3593 utterances (ave. 449), the 20 interactions in the 4-line condition included 2556 utterances (ave. 128), the 20 interactions in the 10-line condition produced 3041 utterances (ave. 152), and the 20 interactions in the 18-line condition included 2498 utterances (ave. 125). Clearly, completing the more complex MTV task required more talk.
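Since the Markov measure introduced later treats each utterance's three codes as a triple, the following is a minimal sketch of one way to represent coded utterances and to compute the per-class code proportions of the kind plotted in Figures 1 and 2. The codes follow the Figure 1 legend, but the tiny sample interaction is hypothetical and the representation is our own illustration, not the authors' software.

    from collections import Counter

    # Each coded utterance carries one function from each class, here as a
    # (MOVE, RESPONSE, OTHER) triple; "NC" marks a No Clear code in a class.
    utterances = [
        ("RI", "NC", "OS"),   # e.g. "who should win best Alternative video."
        ("SA", "NC", "NC"),   # e.g. "Pres. of the united states"
        ("NC", "AS", "DM"),   # e.g. "ok"
    ]

    def class_proportions(utterances, class_index):
        """Proportion of each code within one class (MOVE=0, RESPONSE=1, OTHER=2)."""
        counts = Counter(u[class_index] for u in utterances)
        total = sum(counts.values())
        return {code: n / total for code, n in counts.items()}

    print(class_proportions(utterances, 0))   # MOVE-class proportions

Unlike the analyses of variance reported below, this sketch does not exclude the No Clear codes; it is only meant to make the triple representation concrete.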
Figure 1 presents proportions of utterance functions averaged per interaction for each modality in the initial study. Analyses of variance that treated discourse (dyad) as the random variable were performed on the data within each of the three broad categories, excluding the No Clear MOVE/RESPONSE/OTHER functions, where inclusion would force levels of the between-discourse factor to the same value. We found no significant effect of problem type or order (for details see Condon & Čech, 1996). However, the interaction of function type with discourse modality was significant at the .001 level for all three (MOVE, RESPONSE, OTHER) function classes. Tests of simple effects of modality type for each function indicated that only four proportions were identical in the two modalities: Requests Validation in the MOVE class, Disagrees in the RESPONSE class, and, in the OTHER class, Personal Information and Jokes-Exaggerates.

Figure 2 presents the proportions of utterance functions for the MTV corpus using the same categories of functions as in Figure 1.

[Figure 1: Proportions of code categories in face-to-face (squares) and computer-mediated interactions (asterisks) in the original study.]

[Figure 2: Proportions of code categories in face-to-face (triangles), 4-line (squares) and 18-line (circles) conditions.]

Key to the code categories in both figures —
MOVE functions: SA Suggests Action; RA Requests Action; RV Requests Validation; RI Requests Information; ER Elaborates, Repeats.
RESPONSE functions: AS Agrees with Suggestion; DS Disagrees with Suggestion; CR Complies with Request; AO Acknowledges Only.
OTHER functions: DM Discourse Marker; ML Metalanguage; OS Orients Suggestion; PI Personal Information; JE Jokes, Exaggerates.

The similarity of the results in the two figures is remarkable, especially considering the differences in methods of data collection described above. First, it can be observed that the screen size in the MTV condition did not influence the proportions of functions in the 4-line and 18-line conditions: the results in both those conditions are nearly identical. Second, similar differences are obtained between face-to-face and computer-mediated conditions in both corpora. For example, all of the computer-mediated interactions produced suggestions at a proportion of approximately .3, while the face-to-face interactions produced suggestions at closer to half that frequency. Similar patterns of difference between face-to-face and computer-mediated conditions occur in both corpora for the 3 types of requests in the coding system, too.

We anticipated an increase in discourse management functions due to the complexity of the task, and the increase in metalanguage from .05 to .15 in the face-to-face conditions suggests that the more complex task pressured participants to engage in more explicit management strategies. In the computer-mediated interactions, the proportion of functions coded as metalanguage also increases with the complexity of the task, though not as much. The greater proportion of discourse markers in the computer-mediated interactions also reflects an increase in discourse management activity for the more complex task. The failure to observe an increase in the proportion of utterances coded as "Orients Suggestion" in the MTV interactions is probably a result of the emergence of a turn strategy not observed in the interactions with simpler decision-making tasks.
Specifically, while all of the computer-mediated interactions in the initial study and many of the computer-mediated interactions in the MTV study consisted of relatively short turns, some of the latter display a strategy of employing long turns in which participants encode routine functions for several decisions in the same turn, as in (4).

(4) Best Female Video Either we could have Celine Dione's song It's all coming back to me or the other one that was in that movie up close and personal. Any of the clips with her in them would be good. Toni Braxton with that song..gosh I can't think of any of the names of anybody's songs. And show the same clip as before. What about jewel. Who will save your soul. Personally I think she should win we could use the clip of her playing the guitar in the bathroom. We need one more female singer. Did we pick who should present the award? I think Bush should play after the award.

These more parallel management strategies can reduce the number of orientations if a single orientation can hold for several suggestions and a single agreement can accept them all. Of course, this is exactly what happens when participants provide a list of suggestions in a short turn, too. Therefore, the parallel strategy is a minor modification of the decision routine, but it may influence the proportions of routine functions by reducing the number of orientations and agreements.

In fact, the proportions of utterances coded as "Agrees with Suggestion" and "Complies with Request" are lower in the computer-mediated MTV interactions than in the computer-mediated interactions of the initial corpus. Though these proportions are still slightly higher than those in the face-to-face MTV condition, preserving the pattern observed in the initial corpus, the differences are smaller. These differences are reflected even more dramatically if we compare the ratios of suggestions to agreements in the MTV corpus. At approximately 1.5, the ratio of suggestions to agreements in the face-to-face condition of the MTV study resembles the ratio in the face-to-face condition of the earlier study (1.64). Similarly, the ratio of suggestions to agreements in the computer-mediated interactions of the original study is 1.71. In contrast, the ratios of suggestions to agreements in the 4- and 18-line conditions of the MTV corpus are much larger, both at approximately 2.5. We believe that much of the difference observed is the result of longer turns employing parallel decision management in the MTV corpus.

These results raise the question of the extent to which the interactions conform to the model of the decision routine we have described. The measure developed in Condon et al. (1997) begins by combining the 3 code annotations as a triple and treating those triples as the output of a probabilistic source. Then 0-, 1st- and 2nd-order Markov analyses are performed on the resulting sequences of triples. While the 0-order analyses simply give the proportions of each triple in the interactions, the 1st-order analyses make it possible to examine adjacent pairs of triples to determine the probability that a particular combination of functions will be followed by another particular combination of functions. Similarly, the 2nd-order analyses examine sequences of 3 utterances.
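A minimal sketch of this conformity measure follows, assuming each utterance has already been reduced to its routine function — O(rientation), S(uggestion), A(greement), or other — and assuming the model's routine continuations are orientation→suggestion and suggestion→agreement; the label sequence is hypothetical, and the transition set would need to mirror the exact structure of the model in Figure 3, discussed next, which may license further transitions.

    # Allowed routine continuations in the decision-routine model:
    # an orientation is continued by a suggestion, a suggestion by an agreement.
    CONTINUES = {("O", "S"), ("S", "A")}
    ROUTINE = {"O", "S", "A"}

    def conformity(labels):
        """Proportions of 1-, 2- and 3-utterance events that trace the model."""
        p0 = sum(x in ROUTINE for x in labels) / len(labels)
        pairs = list(zip(labels, labels[1:]))
        p1 = sum(p in CONTINUES for p in pairs) / len(pairs)
        triples = list(zip(labels, labels[1:], labels[2:]))
        p2 = sum((a, b) in CONTINUES and (b, c) in CONTINUES
                 for a, b, c in triples) / len(triples)
        return p0, p1, p2

    # Hypothetical interaction: orientation, suggestion, agreement, ...
    print(conformity(["O", "S", "A", "O", "S", "S", "A", "X"]))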
[Figure 3: A More Complex Decision Routine Based on Frequency Analyses — Orientation → Suggestion → Agreement.]

Examination of the 2nd-order analyses in the original study revealed that all of the 7 most frequent sequences of 3 utterances trace a path in the model in Figure 3. Using the model in Figure 3, we then calculated the proportions of 0-, 1st- and 2nd-order sequences that trace a path through the model. Of course, the 0-order frequencies simply provide the proportions of utterances that are coded as either orientations, suggestions or agreements, but the 1st- and 2nd-order analyses make it possible to examine the extent to which pairs and sequences of 3 utterances conform to the model in Figure 3.

Table 1 presents the results of obtaining the measure just described from the initial corpus of face-to-face and computer-mediated interactions.

Table 1: Proportions of utterance events averaged per discourse (standard deviations in parentheses) that conform to the model in Figure 3, from the original corpus.

    Markov Order            | Oral      | Electronic
    0 (Single Function)     | .34 (.09) | .53 (.13)
    1 (Sequence of Two)     | .16 (.06) | .32 (.13)
    2 (Sequence of Three)   | .07 (.04) | .21 (.11)

The proportions therefore reflect the average (and standard deviation) per discourse of events that conform to a sequence of routine continuations in Figure 3. Since conforming to the model is less and less likely as more functions are linked in sequence, it is not surprising that the proportions decrease as the order of the Markov analysis increases. Still, it is encouraging that the proportions of routine continuations in the 1st-order analyses are approximately equal to the proportions of suggestions in the two types of interactions, since the latter provide an estimate of the number of opportunities to engage in the routine.

Table 2 presents the results of computing the same analyses on the face-to-face, 4-line, 10-line, and 18-line computer-mediated interactions in the MTV corpus.

Table 2: Proportions of utterance events averaged per discourse (standard deviations in parentheses) that conform to the model in Figure 3, from the MTV corpus.

    Markov Order            | Oral      | 4-line    | 10-line   | 18-line
    0 (Single Function)     | .29 (.07) | .50 (.12) | .48 (.11) | .45 (.11)
    1 (Sequence of Two)     | .11 (.05) | .27 (.10) | .25 (.10) | .21 (.11)
    2 (Sequence of Three)   | .04 (.03) | .17 (.10) | .14 (.08) | .12 (.10)

The 0-order results are much the same for both corpora, with about 1/3 of the utterances in face-to-face interactions functioning in the decision routine compared to about 1/2 in the computer-mediated interactions. Similarly, proportions of utterance pairs that conform to the routine remain fairly close to the proportions of suggestions in each condition. Screen size appears to have no effect on the results obtained with this measure.

Conclusions
The results are promising both as evidence for our theory of routines and as an initial attempt to devise a measure of conformity to routines. In particular, the fact that an additional corpus with a more complex task has provided measures which are very similar to those obtained in the initial corpus increases our confidence that these methods are tapping into some stable phenomena. Moreover, the similarities of the conformity measures in Tables 1 and 2 occur in spite of the emergence of new computer-mediated discourse management strategies in which long turns encode decision sequences in parallel.
Though these strategies seem to have a strong effect on the ratio of suggestions to agreements in the computer-mediated interactions of the MTV corpus, the conformity measures are still quite similar to the measures obtained in the computer-mediated interactions of the initial study.

The MTV data also confirm the result obtained in the original study that computer-mediated interactions rely more heavily on routines than face-to-face interactions. The much higher conformity measures for all three Markov orders provide clear evidence for this claim with respect to the decision routine. Moreover, a comparison of Figures 1 and 2 shows that the computer-mediated interactions have higher proportions of requests, especially requests for information. If these proportions are indicative of the extent to which request routines are relied on in the interactions, then these data also support the claim that computer-mediated interactions rely on discourse routines more than face-to-face interactions. Given our claims about the effectiveness of discourse routines, it makes sense that participants in an unfamiliar communication environment will employ their most efficient strategies.

The conformity measure that has been devised does not make use of all the information available in the Markov analyses, and we continue to experiment with different measures. It seems clear that Markov analyses can provide sensitive measures that will be useful for identifying differences between interactions and for measuring the effects of experimental factors on interactions.

References
Carletta, J.; Dahlback, N.; Reithinger, N.; and Walker, M. 1997. Standards for dialogue coding in natural language processing. Report no. 167, Dagstuhl-Seminar.
Cohen, P. R.; Morgan, J.; and Pollack, M., eds. 1990. Intentions in Communication. Cambridge, MA: MIT Press.
Čech, C. and Condon, S. 1998. Message size constraints on discourse planning in synchronous computer-mediated communication. Behavior Research Methods, Instruments, & Computers, 30, 255-263.
Condon, S. 1986. The discourse functions of OK. Semiotica, 60: 73-101.
Condon, S., and Čech, C. 1992. Manual for Coding Decision-Making Interactions. Rev. 1995. Unpublished manuscript available at the Discourse Resource Initiative website at http://www.georgetown.edu/luperfoy/Discourse-Treebank/dri-home.html
Condon, S., and Čech, C. 1996a. Functional comparison of face-to-face and computer-mediated decision-making interactions. In Herring, S. (ed.), Computer-Mediated Communication: Linguistic, Social, and Cross-Cultural Perspectives. Philadelphia: John Benjamins.
Condon, S., and Čech, C. 1996b. Discourse management in face-to-face and computer-mediated decision-making interactions. Electronic Journal of Communication/La Revue Electronique de Communication, 6, 3.
Condon, S., Čech, C., and Edwards, W. 1997. Discourse routines in decision-making interactions. Paper presented to the AAAI Fall Symposium on Communicative Action in Humans and Machines.
Di Eugenio, B.; Jordan, P.; Thomason, R.; and Moore, J. 1997. Reconstructed intentions in collaborative problem solving dialogues. Paper presented to the AAAI Fall Symposium on Communicative Action in Humans and Machines.
Grosz, B. and Hirschberg, J. 1992. Some intonational characteristics of discourse structure. In Proceedings of the International Conference on Spoken Language Processing, Banff, Canada, pages 429-432.
Maier, E.; Mast, M.; and Luperfoy, S., eds. 1997. Dialogue Processing in Spoken Language Systems, Lecture Notes in Artificial Intelligence.
Springer Verlag.
Nakatani, C., Hirschberg, J. and Grosz, B. 1995. Discourse structure in spoken language: Studies on speech corpora. Paper presented to the AAAI 1995 Spring Symposium Series: Empirical Methods in Discourse Interpretation and Generation.
Passonneau, R. 1996. Using centering to relax Gricean informational constraints on discourse anaphoric noun phrases. Language and Speech, 39(2-3), 229-264.
Schegloff, E. 1986. The routine as achievement. Human Studies, 9: 111-151.
Walker, M. 1996. Inferring acceptance and rejection in dialog by default rules of inference. Language and Speech, 39(2-3), 265-304.
1999
31
Development and Use of a Gold-Standard Data Set for Subjectivity Classifications
Janyce M. Wiebe† and Rebecca F. Bruce‡ and Thomas P. O'Hara†
†Department of Computer Science and Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003
‡Department of Computer Science, University of North Carolina at Asheville, Asheville, NC 28804-8511
wiebe, tomohara@cs.nmsu.edu, bruce@cs.unca.edu

Abstract
This paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using statistical techniques. Bias-corrected tags are formulated and successfully used to guide a revision of the coding manual and develop an automatic classifier.

1 Introduction
This paper presents a case study of analyzing and improving intercoder reliability in discourse tagging using the statistical techniques presented in (Bruce and Wiebe, 1998; Bruce and Wiebe, to appear). Our approach is data driven: we refine our understanding and presentation of the classification scheme guided by the results of the intercoder analysis. We also present the results of a probabilistic classifier developed on the resulting annotations.

Much research in discourse processing has focused on task-oriented and instructional dialogs. The task addressed here comes to the fore in other genres, especially news reporting. The task is to distinguish sentences used to objectively present factual information from sentences used to present opinions and evaluations. There are many applications for which this distinction promises to be important, including text categorization and summarization. This research takes a large step toward developing a reliably annotated gold standard to support experimenting with such applications.

This research is also a case study of analyzing and improving manual tagging that is applicable to any tagging task. We perform a statistical analysis that provides information that complements the information provided by Cohen's Kappa (Cohen, 1960; Carletta, 1996). In particular, we analyze patterns of agreement to identify systematic disagreements that result from relative bias among judges, because they can potentially be corrected automatically. The corrected tags serve two purposes in this work. They are used to guide the revision of the coding manual, resulting in improved Kappa scores, and they serve as a gold standard for developing a probabilistic classifier. Using bias-corrected tags as gold-standard tags is one way to define a single best tag when there are multiple judges who disagree.

The coding manual and data from our experiments are available at: http://www.cs.nmsu.edu/~wiebe/projects.

In the remainder of this paper, we describe the classification being performed (in section 2), the statistical tools used to analyze the data and produce the bias-corrected tags (in section 3), the case study of improving intercoder agreement (in section 4), and the results of the classifier for automatic subjectivity tagging (in section 5).

2 The Subjective and Objective Categories
We address evidentiality in text (Chafe, 1986), which concerns issues such as what is the source of information, and whether information is being presented as fact or opinion. These questions are particularly important in news reporting, in which segments presenting opinions and verbal reactions are mixed with segments presenting objective fact (van Dijk, 1988; Kan et al., 1998).
The definitions of the categories in our coding manual are intention-based: "If the primary intention of a sentence is objective presentation of material that is factual to the reporter, the sentence is objective. Otherwise, the sentence is subjective." [Footnote 1: The category specifications in the coding manual are based on our previous work on tracking point of view (Wiebe, 1994), which builds on Banfield's (1982) linguistic theory of subjectivity.]

We focus on sentences about private states, such as belief, knowledge, emotions, etc. (Quirk et al., 1985), and sentences about speech events, such as speaking and writing. Such sentences may be either subjective or objective. From the coding manual: "Subjective speech-event (and private-state) sentences are used to communicate the speaker's evaluations, opinions, emotions, and speculations. The primary intention of objective speech-event (and private-state) sentences, on the other hand, is to objectively communicate material that is factual to the reporter. The speaker, in these cases, is being used as a reliable source of information."

Following are examples of subjective and objective sentences:
1. At several different levels, it's a fascinating tale. Subjective sentence.
2. Bell Industries Inc. increased its quarterly to 10 cents from seven cents a share. Objective sentence.
3. Northwest Airlines settled the remaining lawsuits filed on behalf of 156 people killed in a 1987 crash, but claims against the jetliner's maker are being pursued, a federal judge said. Objective speech-event sentence.
4. The South African Broadcasting Corp. said the song "Freedom Now" was "undesirable for broadcasting." Subjective speech-event sentence.

In sentence 4, there is no uncertainty or evaluation expressed toward the speaking event. Thus, from one point of view, one might have considered this sentence to be objective. However, the object of the sentence is not presented as material that is factual to the reporter, so the sentence is classified as subjective.

Linguistic categorizations usually do not cover all instances perfectly. For example, sentences may fall on the borderline between two categories. To allow for uncertainty in the annotation process, the specific tags used in this work include certainty ratings, ranging from 0, for least certain, to 3, for most certain. As discussed below in section 3.2, the certainty ratings allow us to investigate whether a model positing additional categories provides a better description of the judges' annotations than a binary model does.

Subjective and objective categories are potentially important for many text processing applications, such as information extraction and information retrieval, where the evidential status of information is important. In generation and machine translation, it is desirable to generate text that is appropriately subjective or objective (Hovy, 1987). In summarization, subjectivity judgments could be included in document profiles, to augment automatically produced document summaries, and to help the user make relevance judgments when using a search engine. In addition, they would be useful in text categorization. In related work (Wiebe et al., in preparation), we found that article types, such as announcement and opinion piece, are significantly correlated with the subjective and objective classification.

Our subjective category is related to but differs from the statement-opinion category of the Switchboard-DAMSL discourse annotation project (Jurafsky et al., 1997), as well as the gives-opinion category of Bales' (1950) model of small-group interaction. All involve expressions of opinion, but while our category specifications focus on evidentiality in text, theirs focus on how conversational participants interact with one another in dialog.

3 Statistical Tools
Table 1 presents data for two judges. The rows correspond to the tags assigned by judge 1 and the columns correspond to the tags assigned by judge 2. Let nij denote the number of sentences that judge 1 classifies as i and judge 2 classifies as j, and let p̂ij be the probability that a randomly selected sentence is categorized as i by judge 1 and j by judge 2. Then, the maximum likelihood estimate of p̂ij is nij/n++, where n++ = Σij nij = 504. Table 1 shows a four-category data configuration,
Our subjective category is related to but dif- fers from the statement-opinion category of the Switchboard-DAMSL discourse annotation project (Jurafsky et al., 1997), as well as the gives opinion category of Bale's (1950) model of small-group interaction. All involve expres- sions of opinion, but while our category spec- ifications focus on evidentiality in text, theirs focus on how conversational participants inter- act with one another in dialog. 3 Statistical Tools Table 1 presents data for two judges. The rows correspond to the tags assigned by judge 1 and the columns correspond to the tags assigned by judge 2. Let nij denote the number of sentences that judge 1 classifies as i and judge 2 classi- fies as j, and let/~ij be the probability that a randomly selected sentence is categorized as i by judge 1 and j by judge 2. Then, the max- imum likelihood estimate of 15ij is ~ where n_l_ + , n++ = ~ij nij = 504. Table 1 shows a four-category data configu- 247 Judge 1 = D Sub j2,3 Subjoj Objo,1 Obj2,3 Judge 2 = J Sub j2,3 Subjoa Objoa Obj2,3 n13 = 15 n14 = 4 rill = 158 n12 = 43 n21 =0 n22 =0 n23 =0 n24 =0 n31 = 3 n32 = 2 n33 = 2 n34 = 0 n41 = 38 n42 -- 48 n43 = 49 n44 = 142 n+z = 199 n+2 = 93 n+3 = 66 n+4 = 146 nl+ = 220 n2+ = 0 n3+ = 7 n4+ = 277 n++ = 504 Table 1: Four-Category Contingency Table ration, in which certainty ratings 0 and 1 are combined and ratings 2 and 3 are combined. Note that the analyses described in this section cannot be performed on the two-category data configuration (in which the certainty ratings are not considered), due to insufficient degrees of freedom (Bishop et al., 1975). Evidence of confusion among the classifica- tions in Table 1 can be found in the marginal totals, ni+ and n+j. We see that judge 1 has a relative preference, or bias, for objective, while judge 2 has a bias for subjective. Relative bias is one aspect of agreement among judges. A second is whether the judges' disagreements are systematic, that is, correlated. One pattern of systematic disagreement is symmetric disagree- ment. When disagreement is symmetric, the differences between the actual counts, and the counts expected if the judges' decisions were not correlated, are symmetric; that is, 5n~j = 5n~i for i ~ j, where 5ni~ is the difference from inde- pendence. Our goal is to correct correlated disagree- ments automatically. We are particularly in- terested in systematic disagreements resulting from relative bias. We test for evidence of such correlations by fitting probability models to the data. Specifically, we study bias using the model for marginal homogeneity, and sym- metric disagreement using the model for quasi- symmetry. When there is such evidence, we propose using the latent class model to correct the disagreements; this model posits an unob- served (latent) variable to explain the correla- tions among the judges' observations. The remainder of this section describes these models in more detail. All models can be eval- uated using the freeware package CoCo, which was developed by Badsberg (1995) and is avail- able at: http://web.math.auc.dk/-jhb/CoCo. 3.1 Patterns of Disagreement A probability model enforces constraints on the counts in the data. The degree to which the counts in the data conform to the constraints is called the fit of the model. In this work, model fit is reported in terms of the likelihood ra- tio statistic, G 2, and its significance (Read and Cressie, 1988; Dunning, 1993). The higher the G 2 value, the poorer the fit. 
We will consider model fit to be acceptable if its reference sig- nificance level is greater than 0.01 (i.e., if there is greater than a 0.01 probability that the data sample was randomly selected from a popula- tion described by the model). Bias of one judge relative to another is evi- denced as a discrepancy between the marginal totals for the two judges (i.e., ni+ and n+j in Table 1). Bias is measured by testing the fit of the model for marginal homogeneity: ~i+ = P+i for all i. The larger the G 2 value, the greater the bias. The fit of the model can be evaluated as described on pages 293-294 of Bishop et al. (1975). Judges who show a relative bias do not al- ways agree, but their judgments may still be correlated. As an extreme example, judge 1 may assign the subjective tag whenever judge 2 assigns the objective tag. In this example, there is a kind of symmetry in the judges' re- sponses, but their agreement would be low. Pat- terns of symmetric disagreement can be identi- fied using the model for quasi-symmetry. This model constrains the off-diagonal counts, i.e., the counts that correspond to disagreement. It states that these counts are the product of a 248 table for independence and a symmetric table, nij = hi+ × )~+j ×/~ij, such that /kij = )~ji. In this formula, )~i+ × ,k+j is the model for inde- pendence and ),ij is the symmetric interaction term. Intuitively, /~ij represents the difference between the actual counts and those predicted by independence. This model can be evaluated using CoCo as described on pages 289-290 of Bishop et al. (1975). 3.2 Producing Bias-Corrected Tags We use the latent class model to correct sym- metric disagreements that appear to result from bias. The latent class model was first intro- duced by Lazarsfeld (1966) and was later made computationally efficient by Goodman (1974). Goodman's procedure is a specialization of the EM algorithm (Dempster et al., 1977), which is implemented in the freeware program CoCo (Badsberg, 1995). Since its development, the latent class model has been widely applied, and is the underlying model in various unsupervised machine learning algorithms, including Auto- Class (Cheeseman and Stutz, 1996). The form of the latent class model is that of naive Bayes: the observed variables are all con- ditionally independent of one another, given the value of the latent variable. The latent variable represents the true state of the object, and is the source of the correlations among the observed variables. As applied here, the observed variables are the classifications assigned by the judges. Let B, D, J, and M be these variables, and let L be the latent variable. Then, the latent class model is: p(b,d,j,m,l) = p(bll)p(dll)p(jll)p(mll)p(l ) (by C.I. assumptions) p( b, l )p( d, l )p(j , l )p( m, l) p(t)3 (by definition) The parameters of the model are {p(b, l),p(d, l),p(j, l),p(m, l)p(l)}. Once es- timates of these parameters are obtained, each clause can be assigned the most probable latent category given the tags assigned by the judges. The EM algorithm takes as input the number of latent categories hypothesized, i.e., the num- ber of values of L, and produces estimates of the parameters. For a description of this process, see Goodman (1974), Dawid & Skene (1979), or Pedersen & Bruce (1998). Three versions of the latent class model are considered in this study, one with two latent categories, one with three latent categories, and one with four. 
We apply these models to three data configurations: one with two categories (subjective and objective with no certainty ratings), one with four categories (subjective and objective with coarse-grained certainty ratings, as shown in Table 1), and one with eight categories (subjective and objective with fine-grained certainty ratings). All combinations of model and data configuration are evaluated, except the four-category latent class model with the two-category data configuration, due to insufficient degrees of freedom.

In all cases, the models fit the data well, as measured by G². The model chosen as final is the one for which the agreement among the latent categories assigned to the three data configurations is highest, that is, the model that is most consistent across the three data configurations.

4 Improving Agreement in Discourse Tagging
Our annotation project consists of the following steps [Footnote 2: The results of the first three steps are reported in (Bruce and Wiebe, to appear).]:

1. A first draft of the coding instructions is developed.
2. Four judges annotate a corpus according to the first coding manual, each spending about four hours.
3. The annotated corpus is statistically analyzed using the methods presented in section 3, and bias-corrected tags are produced.
4. The judges are given lists of sentences for which their tags differ from the bias-corrected tags. Judges M, D, and J participate in interactive discussions centered around the differences. In addition, after reviewing his or her list of differences, each judge provides feedback, agreeing with the bias-corrected tag in many cases, but arguing for his or her own tag in some cases. Based on the judges' feedback, 22 of the 504 bias-corrected tags are changed, and a second draft of the coding manual is written.
5. A second corpus is annotated by the same four judges according to the new coding manual. Each spends about five hours.
6. The results of the second tagging experiment are analyzed using the methods described in section 3, and bias-corrected tags are produced for the second data set.

Two disjoint corpora are used in steps 2 and 5, both consisting of complete articles taken from the Wall Street Journal Treebank Corpus (Marcus et al., 1993). In both corpora, judges assign tags to each non-compound sentence and to each conjunct of each compound sentence, 504 in the first corpus and 500 in the second. The segmentation of compound sentences was performed manually before the judges received the data.

Judges J and B, the first two authors of this paper, are NLP researchers. Judge M is an undergraduate computer science student, and judge D has no background in computer science or linguistics. Judge J, with help from M, developed the original coding instructions, and judge J directed the process in step 4.

The analysis performed in step 3 reveals strong evidence of relative bias among the judges. Each pairwise comparison of judges also shows a strong pattern of symmetric disagreement. The two-category latent class model produces the most consistent clusters across the data configurations. It, therefore, is used to define the bias-corrected tags.

In step 4, judge B was excluded from the interactive discussion for logistical reasons.
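The pairwise agreement scores discussed next (Table 2) are standard Cohen's Kappa; as a reference point, a minimal sketch of that computation, with hypothetical tag sequences, is:

    from collections import Counter

    def cohens_kappa(tags_a, tags_b):
        """Cohen's (1960) Kappa for two judges' tag sequences."""
        n = len(tags_a)
        p_obs = sum(a == b for a, b in zip(tags_a, tags_b)) / n
        fa, fb = Counter(tags_a), Counter(tags_b)
        p_exp = sum(fa[c] * fb[c] for c in fa) / (n * n)   # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical subjective (S) / objective (O) tags for two judges.
    print(cohens_kappa("SSOOSO", "SOOOSO"))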
Discussion is apparently important, because, although B's Kappa values for the first study are on par with the others, B's Kappa values for agreement with the other judges change very little from the first to the second study (this is true across the range of certainty values). In contrast, agreement among the other judges noticeably improves. Because judge B's poor performance in the second tagging experiment is linked to a difference in procedure, judge B's tags are excluded from our subsequent analysis of the data gathered during the second tagging experiment.

Table 2: Pairwise Kappa (κ) Scores (with % of corpus covered)

Certainty values 0, 1, 2 or 3 (100% of corpus covered in both studies):
    M & D: Study 1 κ = 0.60; Study 2 κ = 0.76
    M & J: Study 1 κ = 0.63; Study 2 κ = 0.67
    D & J: Study 1 κ = 0.57; Study 2 κ = 0.65
    B & J: Study 1 κ = 0.62; Study 2 κ = 0.64
    B & M: Study 1 κ = 0.60; Study 2 κ = 0.59
    B & D: Study 1 κ = 0.58; Study 2 κ = 0.59

Certainty values 1, 2 or 3:
    M & D: Study 1 κ = 0.62 (96%); Study 2 κ = 0.84 (92%)
    M & J: Study 1 κ = 0.78 (81%); Study 2 κ = 0.81 (81%)
    D & J: Study 1 κ = 0.67 (84%); Study 2 κ = 0.72 (82%)

Certainty values 2 or 3:
    M & D: Study 1 κ = 0.67 (89%); Study 2 κ = 0.88 (64%)
    M & J: Study 1 κ = 0.76 (68%); Study 2 κ = 0.89 (81%)
    D & J: Study 1 κ = 0.87 (67%); Study 2 κ = 0.88 (62%)

Table 2 shows the changes, from study 1 to study 2, in the Kappa values for pairwise agreement among the judges. The best results are clearly for the two who are not authors of this paper (D and M). The Kappa value for the agreement between D and M considering all certainty ratings reaches 0.76, which allows tentative conclusions on Krippendorff's scale (1980). If we exclude the sentences with certainty rating 0, the Kappa values for pairwise agreement between M and D and between J and M are both over 0.8, which allows definite conclusions on Krippendorff's scale. Finally, if we only consider sentences with certainty 2 or 3, the pairwise agreements among M, D, and J all have high Kappa values, 0.87 and over.

We are aware of only one previous project reporting intercoder agreement results for similar categories, the Switchboard-DAMSL project mentioned above. While their Kappa results are very good for other tags, the opinion-statement tagging was not very successful: "The distinction was very hard to make by labelers, and
In these experiments, the system considers naive Bayes, full independence, full interdepen- dence, and models generated from those using forward and backward search. The model se- lected is the one with the highest accuracy on a held-out portion of the training data. 10-fold cross validation is performed. The data is partitioned randomly into 10 different SFor the analysis in Table 3, certainty ratings 0 and 1, and 2 and 3 are combined. Similar results are obtained when all ratings are treated as distinct. sets. On each fold, one set is used for testing, and the other nine are used for training. Fea- ture selection, model selection, and parameter estimation are performed anew on each fold. The following are the potential features con- sidered on each fold. A binary feature is in- cluded for each of the following: the presence in the sentence of a pronoun, an adjective, a cardinal number, a modal other than will, and an adverb other than not. We also include a binary feature representing whether or not the sentence begins a new paragraph. Finally, a fea- ture is included representing co-occurrence of word tokens and punctuation marks with the subjective and objective classification. 4 There are many other features to investigate in future work, such as features based on tags assigned to previous utterances (see, e.g., (Wiebe et al., 1997; Samuel et al., 1998)), and features based on semantic classes, such as positive and neg- ative polarity adjectives (Hatzivassiloglou and McKeown, 1997) and reporting verbs (Bergler, 1992). The data consists of the concatenation of the two corpora annotated with bias-corrected tags as described above. The baseline accuracy, i.e., the frequency of the more frequent class, is only 51%. The results of the experiments are very promising. The average accuracy across all folds is 72.17%, more than 20 percentage points higher than the baseline accuracy. Interestingly, the system performs better on the sentences for which the judges are certain. In a post hoc anal- ysis, we consider the sentences from the second data set for which judges M, J, and D rate their certainty as 2 or 3. There are 299/500 such sen- tences. For each fold, we calculate the system's accuracy on the subset of the test set consisting of such sentences. The average accuracy of the subsets across folds is 81.5%. Taking human performance as an upper bound, the system has room for improvement. The average pairwise percentage agreement be- tween D, J, and M and the bias-corrected tags in the entire data set is 89.5%, while the system's percentage agreement with the bias-corrected tags (i.e., its accuracy) is 72.17%. aThe per-class enumerated feature representation from (Wiebe et ai., 1998) is used, with 60% as the con- ditional independence cutoff threshold. 251 6 Conclusion This paper demonstrates a procedure for auto- matically formulating a single best tag when there are multiple judges who disagree. The procedure is applicable to any tagging task in which the judges exhibit symmetric disagree- ment resulting from bias. We successfully use bias-corrected tags for two purposes: to guide a revision of the coding manual, and to develop an automatic classifier. The revision of the cod- ing manual results in as much as a 16 point im- provement in pairwise Kappa values, and raises the average agreement among the judges to a Kappa value of over 0.87 for the sentences that can be tagged with certainty. 
Using only simple features, the classifier achieves an average accuracy 21 percentage points higher than the baseline, in 10-fold cross validation experiments. In addition, the average accuracy of the classifier is 81.5% on the sentences the judges tagged with certainty. The strong performance of the classifier and its consistency with the judges demonstrate the value of this approach to developing gold-standard tags.

7 Acknowledgements
This research was supported in part by the Office of Naval Research under grant number N00014-95-1-0776. We are grateful to Matthew T. Bell and Richard A. Wiebe for participating in the annotation study, and to the anonymous reviewers for their comments and suggestions.

References
J. Badsberg. 1995. An Environment for Graphical Models. Ph.D. thesis, Aalborg University.
R. F. Bales. 1950. Interaction Process Analysis. University of Chicago Press, Chicago, IL.
Ann Banfield. 1982. Unspeakable Sentences: Narration and Representation in the Language of Fiction. Routledge & Kegan Paul, Boston.
S. Bergler. 1992. Evidential Analysis of Reported Speech. Ph.D. thesis, Brandeis University.
Y. M. Bishop, S. Fienberg, and P. Holland. 1975. Discrete Multivariate Analysis: Theory and Practice. The MIT Press, Cambridge.
R. Bruce and J. Wiebe. 1998. Word sense distinguishability and inter-coder agreement. In Proc. 3rd Conference on Empirical Methods in Natural Language Processing (EMNLP-98), pages 53-60, Granada, Spain, June. ACL SIGDAT.
R. Bruce and J. Wiebe. 1999. Decomposable modeling in natural language processing. Computational Linguistics, 25(2).
R. Bruce and J. Wiebe. to appear. Recognizing subjectivity: A case study of manual tagging. Natural Language Engineering.
J. Carletta. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22(2):249-254.
W. Chafe. 1986. Evidentiality in English conversation and academic writing. In Wallace Chafe and Johanna Nichols, editors, Evidentiality: The Linguistic Coding of Epistemology, pages 261-272. Ablex, Norwood, NJ.
P. Cheeseman and J. Stutz. 1996. Bayesian classification (AutoClass): Theory and results. In Fayyad, Piatetsky-Shapiro, Smyth, and Uthurusamy, editors, Advances in Knowledge Discovery and Data Mining. AAAI Press/MIT Press.
J. Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20:37-46.
A. P. Dawid and A. M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, 28:20-28.
A. Dempster, N. Laird, and D. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39 (Series B):1-38.
T. Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):75-102.
L. Goodman. 1974. Exploratory latent structure analysis using both identifiable and unidentifiable models. Biometrika, 61:2:215-231.
V. Hatzivassiloglou and K. McKeown. 1997. Predicting the semantic orientation of adjectives. In ACL-EACL 1997, pages 174-181, Madrid, Spain, July.
Eduard Hovy. 1987. Generating Natural Language under Pragmatic Constraints. Ph.D. thesis, Yale University.
D. Jurafsky, E. Shriberg, and D. Biasca. 1997. Switchboard SWBD-DAMSL shallow-discourse-function annotation coders manual, draft 13. Technical Report 97-01, University of Colorado Institute of Cognitive Science.
M.-Y. Kan, J. L. Klavans, and K. R. McKeown. 1998.
Linear segmentation and segment significance. In Proc. 6th Workshop on Very Large Corpora (WVLC-98), pages 197-205, Montreal, Canada, August. ACL SIGDAT.
K. Krippendorff. 1980. Content Analysis: An Introduction to its Methodology. Sage Publications, Beverly Hills.
P. Lazarsfeld. 1966. Latent structure analysis. In S. A. Stouffer, L. Guttman, E. Suchman, P. Lazarsfeld, S. Star, and J. Claussen, editors, Measurement and Prediction. Wiley, New York.
D. Litman. 1996. Cue phrase classification using machine learning. Journal of Artificial Intelligence Research, 5:53-94.
M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Ted Pedersen and Rebecca Bruce. 1998. Knowledge lean word-sense disambiguation. In Proc. of the 15th National Conference on Artificial Intelligence (AAAI-98), Madison, Wisconsin, July.
R. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A Comprehensive Grammar of the English Language. Longman, New York.
T. Read and N. Cressie. 1988. Goodness-of-fit Statistics for Discrete Multivariate Data. Springer-Verlag Inc., New York, NY.
K. Samuel, S. Carberry, and K. Vijay-Shanker. 1998. Dialogue act tagging with transformation-based learning. In Proc. COLING-ACL 1998, pages 1150-1156, Montreal, Canada, August.
T. A. van Dijk. 1988. News as Discourse. Lawrence Erlbaum, Hillsdale, NJ.
J. Wiebe, R. Bruce, and L. Duan. 1997. Probabilistic event categorization. In Proc. Recent Advances in Natural Language Processing (RANLP-97), pages 163-170, Tsigov Chark, Bulgaria, September.
J. Wiebe, K. McKeever, and R. Bruce. 1998. Mapping collocational properties into machine learning features. In Proc. 6th Workshop on Very Large Corpora (WVLC-98), pages 225-233, Montreal, Canada, August. ACL SIGDAT.
J. Wiebe, J. Klavans, and M.-Y. Kan. in preparation. Verb profiles for subjectivity judgments and text classification.
J. Wiebe. 1994. Tracking point of view in narrative. Computational Linguistics, 20(2):233-287.
1999
32
Dependency Parsing with an Extended Finite State Approach
Kemal Oflazer
Department of Computer Engineering, Bilkent University, Ankara, 06533, Turkey (ko@cs.bilkent.edu.tr)
Computing Research Laboratory, New Mexico State University, Las Cruces, NM, 88003 USA (ko@crl.nmsu.edu)

Abstract
This paper presents a dependency parsing scheme using an extended finite state approach. The parser augments the input representation with "channels" so that links representing syntactic dependency relations among words can be accommodated, and iterates on the input a number of times to arrive at a fixed point. Intermediate configurations violating various constraints of projective dependency representations, such as no crossing links and no independent items except the sentential head, are filtered via finite state filters. We have applied the parser to dependency parsing of Turkish.

1 Introduction
Recent advances in the development of sophisticated tools for building finite state systems (e.g., the XRCE Finite State Tools (Karttunen et al., 1996), the AT&T Tools (Mohri et al., 1998)) have fostered the development of quite complex finite state systems for natural language processing. In the last several years, there have been a number of studies on developing finite state parsing systems (Koskenniemi, 1990; Koskenniemi et al., 1992; Grefenstette, 1996; Ait-Mokhtar and Chanod, 1997). There have also been a number of approaches to natural language parsing using extended finite state approaches, in which a finite state engine is applied multiple times to the input, or various derivatives thereof, until some stopping condition is reached. Roche (1997) presents an approach for parsing in which the input is iteratively bracketed using a finite state transducer. Abney (1996) presents a finite state parsing approach in which a tagged sentence is parsed by transducers which progressively transform the input to sequences of symbols representing phrasal constituents. This paper presents an approach to dependency parsing using an extended finite state model resembling the approaches of Roche and Abney. The parser produces outputs that encode a labeled dependency tree representation of the syntactic relations between the words in the sentence. We assume that the reader is familiar with the basic concepts of finite state transducers (FSTs hereafter), finite state devices that map between two regular languages U and L (Kaplan and Kay, 1994).

2 Dependency Syntax
Dependency approaches to syntactic representation use the notion of syntactic relation to associate surface lexical items. The book by Mel'čuk (1988) presents a comprehensive exposition of dependency syntax. Computational approaches to dependency syntax have recently become quite popular (e.g., a workshop dedicated to computational approaches to dependency grammars was held at the COLING/ACL'98 conference). Järvinen and Tapanainen have demonstrated an efficient wide-coverage dependency parser for English (Tapanainen and Järvinen, 1997; Järvinen and Tapanainen, 1998). The work of Sleator and Temperley (1991) on link grammar, an essentially lexicalized variant of dependency grammar, has also proved to be interesting in a number of aspects. Dependency-based statistical language modeling and analysis have also become quite popular in statistical natural language processing (Lafferty et al., 1992; Eisner, 1996; Chelba et al., 1997).
Robinson (1970) gives four axioms for well-formed dependency structures, which have been assumed in almost all computational approaches. In a dependency structure of a sentence (i) one and only one word is independent, i.e., not linked to some other word, (ii) all others depend directly on some word, (iii) no word depends on more than one other, and (iv) if a word A depends directly on B, and some word C intervenes between them (in linear order), then C depends directly on A or on B, or on some other intervening word. This last condition of projectivity (or various extensions of it; see e.g., Lai and Huang (1994)) is usually assumed by most computational approaches to dependency grammars as a constraint for filtering configurations, and has also been used as a simplifying condition in statistical approaches for inducing dependencies from corpora (e.g., Yüret (1998)).

3 Turkish
Turkish is an agglutinative language where a sequence of inflectional and derivational morphemes gets affixed to a root (Oflazer, 1993). Derivations are very productive, and the syntactic relations that a word is involved in as a dependent or head element are determined by the inflectional properties of the one or more (intermediate) derived forms.

[Figure 1: Links and Inflectional Groups.]

In this work, we assume that a Turkish word is represented as a sequence of inflectional groups (IGs hereafter), separated by ^DBs denoting derivation boundaries, in the following general form:

root+Infl1^DB+Infl2^DB+...^DB+Infln

where the Infli denote relevant inflectional features, including the part-of-speech for the root or any of the derived forms. For instance, the derived determiner sağlamlaştırdığımızdaki[1] would be represented as:[2]

saglam+Adj^DB+Verb+Become^DB+Verb+Caus+Pos^DB+Adj+PastPart+P1sg^DB+Noun+Zero+A3sg+Pnon+Loc^DB+Det

This word has 6 IGs:
1. saglam+Adj
2. +Verb+Become
3. +Verb+Caus+Pos
4. +Adj+PastPart+P1sg
5. +Noun+Zero+A3sg+Pnon+Loc
6. +Det

A sentence would then be represented as a sequence of the IGs making up the words. An interesting observation that we can make about Turkish is that, when a word is considered as a sequence of IGs, syntactic relation links only emanate from the last IG of a (dependent) word, and land on one of the IGs of the (head) word on the right (with minor exceptions), as exemplified in Figure 1. A second observation is that, with minor exceptions, the dependency links between the IGs, when drawn above the IG sequence, do not cross. Figure 2 shows a dependency tree for a sentence laid on top of the words segmented along IG boundaries.

4 Finite State Dependency Parsing
The approach relies on augmenting the input with "channels" that (logically) reside above the IG sequence, and "laying" links representing dependency relations in these channels, as depicted in Figure 3 a). The parser operates in a number of iterations.

[1] Literally, "(the thing existing) at the time we caused (something) to become strong". Obviously this is not a word that one would use every day. Turkish words found in typical text average about 3-4 morphemes including the stem.
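Since the IGs of a word are delimited by ^DB markers in the analyzer output, recovering the IG sequence is a simple string operation. The following is a minimal Python sketch under that assumption (the function name split_igs is ours for illustration):

  def split_igs(analysis):
      # Split root+Infl1^DB+Infl2^DB+...^DB+Infln into its inflectional
      # groups; the first IG carries the root, later ones start with '+'.
      return analysis.split("^DB")

  print(split_igs("saglam+Adj^DB+Verb+Become^DB+Verb+Caus+Pos"))
  # ['saglam+Adj', '+Verb+Become', '+Verb+Caus+Pos']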
At each iteration of the parser, a new empty channel is "stacked" on top of the input, and any possible links are established using these channels, until no new links can be added. An abstract view of this is presented in parts b) through e) of Figure 3.

[Figure 3: Channels and Links. a) The input sequence of IGs is augmented with symbols to represent channels. b) Links are embedded in channels. c) New channels are "stacked" on top of each other, d) so that links that cannot be accommodated in lower channels can be established.]

4.1 Representing Channels and Syntactic Relations
The sequence (or the chart) of IGs is produced by a morphological analyzer FST, with each IG being augmented by two pairs of delimiter symbols, as <(IG)>. Word-final IGs, the IGs that links will emanate from, are further augmented with a special marker @. Channels are represented by pairs of matching symbols that surround the <...( and the )...> pairs. Symbols for new channels (upper channels in Figure 3) are stacked so that the symbols for the topmost channels are those closest to the (...).[3] The channel symbol 0 indicates that the channel segment is not used, while 1 indicates that the channel is used by a link that starts at some IG on the left and ends at some IG on the right, that is, the link is just crossing over the IG. If a link starts from an IG (ends on an IG), then a start (stop) symbol denoting the syntactic relation is used on the right (left) side of the IG. The syntactic relations (along with the symbols used) that we currently encode in our parser are the following:[4] S (Subject), O (Object), M (Modifier, adv/adj), P (Possessor), C (Classifier), D (Determiner), T (Dative Adjunct), L (Locative Adjunct), A (Ablative Adjunct) and I (Instrumental Adjunct). For instance, with three channels, the two IGs of bahçedeki in Figure 2 would be represented as <MD0(bahçe+Noun+A3sg+Pnon+Loc)000> <000(+Det@)00d>. The M and the D to the left of the first IG indicate the incoming modifier and determiner links, and the d on the right of the second IG indicates the outgoing determiner link.

[Figure 2: Dependency links in an example Turkish sentence; the last line shows the final POS for each word.]

[2] The morphological features other than the obvious POS are: +Become: become verb, +Caus: causative verb, +PastPart: derived past participle, +P1sg: 1sg possessive agreement, +A3sg: 3sg number-person agreement, +Zero: zero derivation with no overt morpheme, +Pnon: no possessive agreement, +Loc: locative case, +Pos: positive polarity.
[3] At any time, the number of channel symbols on both sides of an IG are the same.
[4] We use the lower case symbol to mark the start of the link and the upper case symbol to encode the end of the link.

4.2 Components of a Parser Stage
The basic strategy of a parser stage is to recognize by a rule (encoded as a regular expression) a dependent IG and a head IG, and link them by modifying the "topmost" channel between those two.
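The string encoding of Section 4.1 is mechanical enough to sketch directly. A minimal Python illustration, assuming the channel-symbol conventions above (the function name render_ig is ours):

  def render_ig(ig, left, right, word_final=False):
      # Wrap one IG in its delimiters, its left/right channel symbol
      # strings, and the word-final marker '@' where applicable.
      marker = "@" if word_final else ""
      return "<" + left + "(" + ig + marker + ")" + right + ">"

  print(render_ig("bahce+Noun+A3sg+Pnon+Loc", "MD0", "000"))
  # <MD0(bahce+Noun+A3sg+Pnon+Loc)000>
  print(render_ig("+Det", "000", "00d", word_final=True))
  # <000(+Det@)00d>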
To achieve this:
1. we put temporary brackets to the left of the dependent IG and to the right of the head IG, making sure that (i) the last channel in that segment is free, and (ii) the dependent is not already linked (at one of the lower channels),
2. we mark the channels of the start, intermediate and ending IGs with the appropriate symbols encoding the relation thus established by the brackets,
3. we remove the temporary brackets.

A typical linking rule looks like the following:[5]

[LL IG1 LR] [ML IG2 MR]* [RL IG3 RR] (->) "{S" ... "S}"

This rule says: (optionally) bracket (with {S and S}) any occurrence of morphological pattern IG1 (the dependent), skipping over any number of occurrences of pattern IG2, finally ending with a pattern IG3 (the governor). The symbols L(eft)L(eft), LR, ML, MR, RL and RR are regular expressions that encode constraints on the bounding channel symbols. For instance, LR is the pattern

"@" ")" "0" ["0" | 1]* ">"

which checks that (i) this is a word-final IG (it has a "@"), (ii) the right side "topmost" channel is empty (the channel symbol nearest to ")" is "0"), and (iii) the IG is not linked to any other in any of the lower channels (the only symbols on the right side are 0s and 1s). For instance, the example rule

[LL NominativeNominalA3pl LR] [ML AnyIG MR]* [RL [FiniteVerbA3sg | FiniteVerbA3pl] RR] (->) "{S" ... "S}"

is used to bracket a segment starting with a plural nominative nominal, as subject of a finite verb on the right with either +A3sg or +A3pl number-person agreement (allowed in Turkish). The regular expression NominativeNominalA3pl matches any nominal IG with nominative case and A3pl agreement, while the regular expression [FiniteVerbA3sg | FiniteVerbA3pl] matches any finite verb IG with either A3sg or A3pl agreement. The regular expression AnyIG matches any IG. All the rules are grouped together into a parallel bracketing rule defined as follows:

Bracket = [ Pattern1 (->) "{Rel1" ... "Rel1}",
            Pattern2 (->) "{Rel2" ... "Rel2}",
            ... ];

which will produce all possible bracketings of the input IG sequence.[6]

[5] We use the XRCE Regular Expression Language syntax; see http://www.xrce.xerox.com/research/mltt/fst/fssyntax.html for details.
[6] {Reli and Reli} are pairs of brackets; there is a distinct pair for each syntactic relation to be identified by these rules.

4.3 Filtering Crossing Link Configurations
The bracketings produced by Bracket contain configurations that may have crossing links. This happens when the left side channel symbols of the IG immediately right of an open bracket contain the symbol 1 for one of the lower channels, indicating a link entering the region, or when the right side channel symbols of the IG immediately to the left of a close bracket contain the symbol 1 for one of the lower channels, indicating a link exiting the segment, i.e., either or both of the following patterns appear in the bracketed segment:

(i)  {S < ... 1 ... 0 ( ... ) ...
(ii) ... ( ... ) 0 ... 1 ... > S}

Configurations generated by bracketing are filtered by FSTs implementing suitable regular expressions that reject inputs having crossing links. A second configuration that may appear is the following: a rule may attempt to put a link in the topmost channel even though the corresponding segment is not utilized in a previous channel, e.g., the corresponding segment in one of the previous channels may be all 0s. A further constraint filters such cases to prevent redundant configurations from proliferating in later iterations of the parser.[7]
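The crossing-link condition itself is easy to state over link end positions. Below is a minimal Python sketch of that check (the function name crosses and the position-pair representation are ours for illustration; the parser performs the equivalent test with finite state filters over the channel symbols):

  def crosses(existing_links, i, j):
      # A new link over positions (i, j) crosses an existing link (k, l)
      # iff exactly one endpoint of (k, l) lies strictly inside (i, j)
      # and no endpoint is shared.
      for k, l in existing_links:
          inside_k = i < k < j
          inside_l = i < l < j
          if inside_k != inside_l and not (k in (i, j) or l in (i, j)):
              return True
      return False

  links = [(2, 5)]
  print(crosses(links, 3, 7))  # True: (2,5) and (3,7) cross
  print(crosses(links, 1, 6))  # False: (2,5) is nested inside (1,6)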
For these two configuration constraints we define FilterConfigs as[8]

FilterConfigs = [FilterCrossingLinks .o. FilterEmptySegments];

We can now define one phase (of one iteration) of the parser as:

Phase = Bracket .o. FilterConfigs .o. MarkChannels .o. RemoveTempBrackets;

The transducer MarkChannels modifies the channel symbols in the bracketed segments to either the syntactic relation start or end symbol, or a 1, depending on the IG. Finally, the transducer RemoveTempBrackets removes the brackets.[9] The formulation up to now does not allow us to bracket an IG on two consecutive non-overlapping links in the same channel. We would need a bracketing configuration like

... {S < ... > {M < ... > S} ... < ... > M} ...

but this would not be possible within Bracket, as patterns check that no other brackets are within their segment of interest. Simply composing the Phase transducer with itself without introducing a new channel solves this problem, giving us a one-stage parser, i.e.,

Parse = Phase .o. Phase;

4.4 Enforcing Syntactic Constraints
The rules linking the IGs are overgenerating, in that they may generate configurations that violate various general or language specific constraints. For instance, more than one subject or one object may attach to a verb, more than one determiner or possessor may attach to a nominal, an object may attach to a passive verb (conjunctions are handled in the manner described in Järvinen and Tapanainen (1998)), or a nominative pronoun may be linked as a direct object (which is not possible in Turkish), etc. Constraints preventing these can be encoded in the bracketing patterns, but doing so results in complex and unreadable rules. Instead, each can be implemented as a finite state filter which operates on the outputs of Parse by checking the symbols denoting the relations. For instance, we can define the following regular expression for filtering out configurations where two determiners are attached to the same IG:[10]

AtMostOneDet = [ "<" [ ~[[$"D"]^>1] & LeftChannelSymbols* ] "(" AnyIG ("@") ")" RightChannelSymbols* ">" ]*;

The FST for this regular expression makes sure that all configurations that are produced have at most one D symbol among the left channel symbols.[11] Many other syntactic constraints (e.g., only one object to a verb) can be formulated similarly. All such constraints Cons1, Cons2, ..., ConsN can then be composed to give one FST that enforces all of them:

SyntacticFilter = [Cons1 .o. Cons2 .o. Cons3 .o. ... .o. ConsN];

[7] This constraint is a bit trickier since one has to check that the same number of channels on both sides are empty; we limit ourselves to the last 3 channels in the implementation.
[8] .o. denotes the transducer composition operator. We also use, for exposition purposes, =, instead of the XRCE define command.
[9] The details of these regular expressions are quite uninteresting.
[10] LeftChannelSymbols and RightChannelSymbols denote the sets of symbols that can appear on the left and right side channels.
[11] The crucial portion at the beginning says "For any IG it is not the case that there is more than one substring containing D among the left channel symbols of that IG."
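The effect of a filter like AtMostOneDet is easy to emulate procedurally. A minimal Python sketch of the same check over an already-decoded configuration, assuming links are given as (dependent, head, relation) triples (this representation and the function name are our own illustration; the actual system compiles the constraint into an FST):

  def at_most_one(links, relation="D"):
      # Accept a configuration only if no two links with the given
      # relation label land on the same head IG (cf. AtMostOneDet).
      heads = [head for (_, head, rel) in links if rel == relation]
      return len(heads) == len(set(heads))

  print(at_most_one([(1, 4, "D"), (2, 4, "M")]))  # True
  print(at_most_one([(1, 4, "D"), (3, 4, "D")]))  # False: two determiners on IG 4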
4.5 Iterative application of the parser
Full parsing consists of iterative applications of the Parse and SyntacticFilter FSTs. Let Input be a transducer that represents the word sequence, and let

LastChannelNotEmpty =
   ["<" LeftChannelSymbols+ "(" AnyIG ("@") ")" RightChannelSymbols+ ">"]*
 - ["<" LeftChannelSymbols* 0 "(" AnyIG ("@") ")" 0 RightChannelSymbols* ">"]*;

be a transducer which detects whether any configuration has at least one link established in the last channel added (i.e., not all of the "topmost" channel symbols are 0s). Let MorphologicalDisambiguator be a reductionistic finite state disambiguator which performs accurate but very conservative local disambiguation and multi-word construct coalescing, to reduce morphological ambiguity without making any errors. The iterative application of the parser can now be given (in pseudo-code) as:

# Map sentence to a transducer representing a chart of IGs
M = [Sentence .o. MorphologicalAnalyzer] .o. MorphologicalDisambiguator;
repeat {
    M = M .o. AddChannel .o. Parse .o. SyntacticFilter;
} until ([M .o. LastChannelNotEmpty].l == {})
M = M .o. OnlyOneUnlinked;
Parses = M.l;

This procedure iterates until the most recently added channel of every configuration generated is unused (i.e., the (lower regular) language recognized by M .o. LastChannelNotEmpty is empty). The step after the loop, M = M .o. OnlyOneUnlinked, enforces the constraint that in a correct dependency parse all except one of the word-final IGs have to link as a dependent to some head. This transduction filters all other configurations (and usually there are many of them, due to the optionality in the bracketing step). Then Parses, defined as the (lower) language of the resulting FST, has all the strings that encode the IGs and the links.

4.6 Robust Parsing
It is possible that, either because of grammar coverage or ungrammatical input, a parse with only one unlinked word-final IG may not be found. In such cases Parses above would be empty. One may however opt to accept parses with k > 1 unlinked word-final IGs when there are no parses with fewer than k unlinked word-final IGs (for some small k). This can be achieved by using the lenient composition operator (Karttunen, 1998). Lenient composition, notated as .O., is used with a generator-filter combination. When a generator transducer G is leniently composed with a filter transducer F, the resulting transducer, G .O. F, has the following behavior when an input is applied: if any of the outputs of G in response to the input string satisfies the filter F, then G .O. F produces just these as output; otherwise, G .O. F outputs what G outputs. Let Unlinked_i denote a regular expression which accepts parse configurations with less than or equal to i unlinked word-final IGs. For instance, for i = 2, this would be defined as follows:

~[[$["<" LeftChannelSymbols* "(" AnyIG "@" ")" ["0" | 1]* ">"]]^>2];

which rejects configurations having more than 2 word-final IGs whose right channel symbols contain only 0s and 1s, i.e., which do not link to some other IG as a dependent. Replacing the line M = M .o. OnlyOneUnlinked with, for instance,

M = M .O. Unlinked_1 .O. Unlinked_2 .O. Unlinked_3;

will have the parser produce outputs with up to 3 unlinked word-final IGs, when there are no outputs with a smaller number of unlinked word-final IGs. Thus it is possible to recover some of the partial dependency structures when a full dependency structure is not available for some reason.
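The Unlinked_i filters simply count word-final IGs that never link out as dependents. A minimal Python sketch of that count over the string encoding, assuming IGs are given as (word_final, right_channel_symbols) pairs (our own representation for illustration):

  def unlinked_count(igs):
      # A word-final IG is unlinked as a dependent iff its right-side
      # channel symbols contain only '0's and '1's (no relation symbol).
      return sum(1 for final, right in igs
                 if final and set(right) <= {"0", "1"})

  # Two word-final IGs; the first links out via a determiner link 'd'.
  print(unlinked_count([(True, "00d"), (True, "000"), (False, "010")]))  # 1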
The caveat would be, however, that since Unlinked_1 is a very strong constraint, any relaxation would increase the number of outputs substantially.

5 Experiments with dependency parsing of Turkish
Our work to date has mainly consisted of developing and implementing the representation and finite state techniques involved here, along with a non-trivial grammar component. We have tested the resulting system and grammar on a corpus of 50 Turkish sentences, 20 of which were also used for developing and testing the grammar. These sentences had 4 to 24 words, with an average of about 12 words. The grammar has two major components. The morphological analyzer is a full-coverage analyzer built using XRCE tools, slightly modified to generate outputs as a sequence of IGs for a sequence of words. When an input sentence (again represented as a transducer denoting a sequence of words) is composed with the morphological analyzer (see the pseudo-code above), a transducer for the chart representing all IGs for all morphological ambiguities (remaining after morphological disambiguation) is generated. The dependency relations are described by a set of about 30 patterns much like the ones exemplified above. The rules are almost all non-lexical, establishing links of the types listed earlier. Conjunctions are handled by linking the left conjunct to the conjunction, and linking the conjunction to the right conjunct (possibly at a different channel). There is an additional set of about 25 finite state constraints that impose various syntactic and configurational constraints. The resulting Parser transducer has 2707 states and 27,713 transitions, while the SyntacticConstraints transducer has 28,894 states and 302,354 transitions. The combined transducer for morphological analysis and (very limited) disambiguation has 87,475 states and 218,082 arcs. Table 1 presents our results for parsing this set of 50 sentences. The number of iterations also counts the last iteration, in which no new links are added. Inspired by Lin's notion of structural complexity (Lin, 1996), measured by the total length of the links in a dependency parse, we ordered the parses of a sentence using this measure. In 32 out of 50 sentences (64%), the correct parse was either the top ranked parse or among the top ranked parses with the same measure. In 13 out of 50 parses (26%), the correct parse was not among the top ranked parses, but was ranked lower. Since smaller structural complexity requires, for example, verbal adjuncts to attach to the nearest verb wherever possible, topicalization of such items, which brings them to the beginning of the sentence, will generate a long(er) link to the verb (at the end), increasing complexity. In 5 out of 50 sentences (10%), the correct parse was not available among the parses generated, mainly due to grammar coverage. The parses generated in these cases used other (morphological) ambiguities of certain lexical items to arrive at some parse within the confines of the grammar. The finite state transducers compile in about 2 minutes on an Apple Macintosh 250 MHz PowerBook. Parsing takes about a second per iteration, including lookup in the morphological analyzer. With completely (and manually) morphologically disambiguated input, parsing is instantaneous. Figure 4 presents the input and the output of the parser for a sample Turkish sentence.
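The structural complexity measure used for the ranking above is just the sum of link lengths, which makes it easy to reproduce. A minimal sketch, assuming a parse is a list of (dependent, head) IG positions (our own representation for illustration):

  def structural_complexity(parse):
      # Total length of the dependency links in a parse (Lin, 1996).
      return sum(abs(head - dep) for dep, head in parse)

  def rank_parses(parses):
      # Smaller total link length ranks first; ties share the top rank.
      return sorted(parses, key=structural_complexity)

  parses = [[(1, 2), (2, 5)], [(1, 5), (2, 5)]]
  print([structural_complexity(p) for p in rank_parses(parses)])  # [4, 7]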
Figure 5 shows the output of the parser, processed with a Perl script to provide a more human-consumable presentation.

Figure 4: Sample input and output of the parser.
Input sentence: Dünya Bankası Türkiye Direktörü hükümetin izlediği ekonomik programın sonucunda önemli adımların atıldığını söyledi.
English: World Bank Turkey Director said that as a result of the economic program followed by the government, important steps were taken.
After 3 iterations the parser produces two parses; the only difference between the two parses is in the locative adjunct attachment (to the verb at vs. the verb söyle). [The two parse strings, which encode each IG together with its channel symbols, are omitted here.]

Table 1: Statistics from parsing 50 Turkish sentences.
Avg. words/sentence:     11.7 (4 - 24)
Avg. IGs/sentence:       16.4 (5 - 36)
Avg. parser iterations:   5.2 (3 - 8)
Avg. parses/sentence:    23.9 (1 - 132)

6 Discussion and Conclusions
We have presented the architecture and implementation of a novel extended finite state dependency parser, with results from Turkish. We have formulated, but not yet implemented at this stage, two extensions. Crossing dependency links are very rare in Turkish and almost always occur when an adjunct of a verb cuts into a certain position of a (discontinuous) noun phrase. We can solve this by allowing such adjuncts to use a special channel "below" the IG sequence, so that limited crossing link configurations can be allowed. Links where the dependent is to the right of its head, which can happen with some of the word order variations (with backgrounding of some dependents of the main verb), can similarly be handled with a right-to-left version of Parse which is applied during each iteration, but these cases are very rare. In addition to the reductionistic disambiguator that we have used just prior to parsing, we have implemented a number of heuristics to limit the number of potentially spurious configurations that result because of optionality in bracketing, mainly by enforcing obligatory bracketing for immediately sequential dependency configurations (e.g., the complement of a postposition is immediately before it). Such heuristics force such dependencies to appear in the first channel and hence prune many potentially useless configurations popping up in later stages. The robust parsing technique has been very instrumental during the process, mainly in the debugging of the grammar, but we have not made any substantial experiments with it yet.
7 Acknowledgments
This work was partially supported by a NATO Science for Stability Program Project Grant, TU-LANGUAGE, made to Bilkent University. A portion of this work was done while the author was visiting the Computing Research Laboratory at New Mexico State University. The author thanks Lauri Karttunen of Xerox Research Centre Europe, Grenoble, for making available the XRCE Finite State Tools.

[Figure 5: Dependency tree for the second parse.]

References
Steven Abney. 1996. Partial parsing via finite state cascades. In Proceedings of the ESSLLI'96 Robust Parsing Workshop.
Salah Ait-Mokhtar and Jean-Pierre Chanod. 1997. Incremental finite-state parsing. In Proceedings of ANLP'97, pages 72-79, April.
Ciprian Chelba et al. 1997. Structure and estimation of a dependency language model. In Proceedings of Eurospeech'97.
Jason Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), pages 340-345, August.
Gregory Grefenstette. 1996. Light parsing as finite-state filtering. In ECAI'96 Workshop on Extended Finite State Models of Language, August.
Timo Järvinen and Pasi Tapanainen. 1998. Towards an implementable dependency grammar. In Proceedings of the COLING/ACL'98 Workshop on Processing Dependency-based Grammars, pages 1-10.
Ronald M. Kaplan and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378, September.
Lauri Karttunen, Jean-Pierre Chanod, Gregory Grefenstette, and Anne Schiller. 1996. Regular expressions for language engineering. Natural Language Engineering, 2(4):305-328.
Lauri Karttunen. 1998. The proper treatment of optimality theory in computational linguistics. In Lauri Karttunen and Kemal Oflazer, editors, Proceedings of the International Workshop on Finite State Methods in Natural Language Processing (FSMNLP), June.
Kimmo Koskenniemi, Pasi Tapanainen, and Atro Voutilainen. 1992. Compiling and using finite-state syntactic rules. In Proceedings of the 14th International Conference on Computational Linguistics, COLING-92, pages 156-162.
Kimmo Koskenniemi. 1990. Finite-state parsing and disambiguation. In Proceedings of the 13th International Conference on Computational Linguistics, COLING-90, pages 229-233.
John Lafferty, Daniel Sleator, and Davy Temperley. 1992. Grammatical trigrams: A probabilistic model of link grammars. In Proceedings of the 1992 AAAI Fall Symposium on Probabilistic Approaches to Natural Language.
Bong Yeung Tom Lai and Changning Huang. 1994. Dependency grammar and the parsing of Chinese sentences. In Proceedings of the 1994 Joint Conference of 8th ACLIC and 2nd PaFoCol.
Dekang Lin. 1996. On the structural complexity of natural language sentences.
In Proceedings of the 16th International Conference on Computational Linguistics (COLING-96).
Igor A. Mel'čuk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press.
Mehryar Mohri, Fernando Pereira, and Michael Riley. 1998. A rational design for a weighted finite-state transducer library. In Lecture Notes in Computer Science, 1436. Springer Verlag.
Kemal Oflazer. 1993. Two-level description of Turkish morphology. In Proceedings of the Sixth Conference of the European Chapter of the Association for Computational Linguistics, April. A full version appears in Literary and Linguistic Computing, Vol. 9, No. 2, 1994.
Jane J. Robinson. 1970. Dependency structures and transformational rules. Language, 46(2):259-284.
Emmanuel Roche. 1997. Parsing with finite state transducers. In Emmanuel Roche and Yves Schabes, editors, Finite-State Language Processing, chapter 8. The MIT Press.
Daniel Sleator and Davy Temperley. 1991. Parsing English with a link grammar. Technical Report CMU-CS-91-196, Computer Science Department, Carnegie Mellon University.
Pasi Tapanainen and Timo Järvinen. 1997. A non-projective dependency parser. In Proceedings of ANLP'97, pages 64-71, April.
Deniz Yüret. 1998. Discovery of Linguistic Relations Using Lexical Attraction. Ph.D. thesis, Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology.
1999
33
A Unification-based Approach to Morpho-syntactic Parsing of Agglutinative and Other (Highly) Inflectional Languages
Gábor Prószéky ([email protected]) and Balázs Kis ([email protected])
MorphoLogic, Késmárki u. 8., Budapest, Hungary, H-1118, http://www.morphologic.hu

Abstract
This paper introduces a new approach to morpho-syntactic analysis through Humor 99 (High-speed Unification Morphology), a reversible and unification-based morphological analyzer which has already been integrated with a variety of industrial applications. Humor 99 successfully copes with problems of agglutinative (e.g. Hungarian, Turkish, Estonian) and other (highly) inflectional languages (e.g. Polish, Czech, German) very effectively. The authors conclude the paper by arguing that the approach used in Humor 99 is general enough to be well suitable for a wide range of languages, and can serve as basis for higher-level linguistic operations such as shallow parsing.

Introduction
There are several linguistic phenomena that are possible to process by means of morphological tools for agglutinative and other highly inflectional languages, while processing the same features requires syntactic parsers in case of other languages such as English. This paper provides a brief description of Humor 99, first presenting a general theoretical background of the system. This is followed by examples of the most recent applications (in addition to those listed earlier), where the authors argue that the approach used in Humor 99 is general enough to be well suitable for a wide range of languages, and can serve as basis for higher-level linguistic operations such as shallow or even full parsing.

1 Affix arrays rather than affixes
Segmentation of a word-form in Humor 99 is based on surface patterns, that is, typical sequences of separate suffix morphemes are analyzed as a whole. For example, the English nominal ending string ers' (NtoV+PL+POSS) is a complex affix handled as an atomic string in Humor 99.[1] The string ers' is generated from er+s+'s in an earlier development phase by a dedicated utility. The generator is able to make a finite set of affix sequences from an (even recursive) description.[2] Running this utility can be considered the learning phase of the algorithm. The resulting suffix combinations are stored in a compressed internal lexicon structure that guarantees very fast searching.[3] The entire algorithm shows features similar to the hypothesis according to which most segments of word-forms in agglutinative languages are handled as "Gestalts" by native speakers, instead of being parsed on-line.[4] This idea is not new in the literature: according to Bybee, "a psycholinguistic argument for treating (some) ending sequences as wholes comes from the observation that children acquiring inflectional languages seldom make errors involving the order of morphemes in a word."

[1] We use mainly English examples in spite of the fact that English morphology is simpler than the morphologies of agglutinative and highly inflectional languages.
[2] Depth of the recursive process can be given as a parameter. The method is similar to the one of Goldberg & Kálmán (1992) used in the BUG system: the description is theoretically infinite, but there is a finite performance limit when running.
[3] The idea has something in common with the PC-Kimmo based analyzer of the University of Pennsylvania (Karp et al. 1992). Our compression ratio is around 20%.
(Bybee 1985) Another source is Karlsson: "The endings and entries are often listed as wholes, especially in close-knit combinations.[5] Such combinations are often subject to bi-directional dependencies that are hard to capture otherwise" (Karlsson 1986).

2 Allomorphs rather than base forms
Karlsson (1986) shows several ways in which lexical forms of words may be constructed: full listing, minimal listing, methods with unique lexical forms, and methods with phonologically distinct stem variants. Full listing does not need rules at all, but it is implausible for agglutinative languages. Minimal listings need a quite large rule system in case of highly inflectional languages, although their lexicons are relatively small. In methods based on unique lexical forms allowing diacritics and morpho-phonemes (Koskenniemi 1983, Abondolo 1988), paradigms are represented by a single base form.[6] Our approach is close to the minimal listing methods, but fewer rules are needed. Finally, the representation presented here regards phonologically distinct bound variants of a base form as separate stems.[7] There are two known important variants of this method: one using technical stems -- that is, strings that linguists do not consider stem variants -- and another using real allomorphs. The former was applied in the TEXFIN system of Karttunen (1981), the latter was used by Karlsson (1986). This is the method we have chosen for the Humor 99 system. Humor 99 lexicons contain stem allomorphs (generated by the learning phase mentioned above) instead of single stems. Relations among allomorphs of the same base form (e.g. wolf, wolv) are, however, important for syntax, semantics, and the end-user. An online morphological parser need not be directly concerned with the derivation of allomorphs from their base forms; for example, it does not matter how happi is derived from happy before -ly. This phenomenon - a consequence of the orthographical system - is handled by the off-line linguistic process of Humor 99, which makes the analysis much faster. This method is close to the lexicon compilation used in finite-state models.

3 Paradigm groups and paradigms
Concatenation of stem allomorphs and suffix allomorphs is licensed with the help of the following two factors: continuation classes[8] defined by paradigm descriptions, and classes of surface allomorphs. The latter is a cross-classification of the paradigms according to phonological and graphemic properties of the surface forms. Both verbal and nominal stem allomorphs can be characterized by sets of suffix allomorphs that can follow them. When describing the behavior of stems, all suffix combinations beginning with the same morpheme are considered equivalent, because the only relevant pieces of information come from the suffix that immediately follows the stem.

[4] Psycholinguists are interested in testing this hypothesis with native speakers (Pléh, pers. comm.)
[5] A good example is the linguistic tradition handling number and person combinations of Hungarian definite conjugation.
[6] That is why it is very difficult to add new entries to the lexicons automatically in real NLP environments.
[7] Actual two-level (and some other) descriptions apply similar methods in order to cope with morphotactic problems that cannot be treated phonologically in an elegant way.
[8] Similar to the two-level descriptions' continuation classes (Koskenniemi 1983).

Example 1:
Word-form     Humor's real-time segmentation   Humor's output segmentation
humidity      humid + ity                      humid + ity
humidity's    humid + ity's                    humid + ity + 's
humidities    humid + ities                    humid + iti + es
humidities'   humid + ities'                   humid + iti + es'

Example 2 (a simplified default paradigm of adjectives; the table is flattened in the source):
Features=Values: Nbr=Pl, Deriv=Adv, Deriv=Abstr, Deg=Comp, Deg=Super
Morphemes: s, ly, ness, er, est
Stems (Cat=Nom): Subcat=N: fish, house; Subcat=Adj: green, happy; Subcat=Adv: ...
[The +/- cells of the original table, marking which feature values each stem admits, are not recoverable.]

E.g. from the point of view of the preceding stem (humid), morpheme combinations like
262 Example I Example 2 Word'form l humidity humidi~ ' s humidities humidities' Humor's real-time Humor's output segmentation .... segmentation humid + ity humid + ity humid + ity's humid + it)/+ 's humid + ities humid + iti + es humid + ities' humid + iti + es' ~es Features= ÷/- Values Nbr=Pl Deriv=Adv Deriv=Abstr [ Deg=Comp Deg=Super , Mo~hme S Hess er est Subcat=-N fish house + Stems !0 Ca~Nom Subeat=-Adj green happy + + + + + + + Subcat=Adv like ity+SG, ity+PL, ity+SG+GEN, ity+PL+GEN behave as ity itself (Example 1). Therefore, every affix array is represented by its starting affix 9. Each equivalence class and each paradigm is given an abstract name, that is, each existing set of equivalence classes can have its own abstract name. Example 2 shows a simplified default paradigm of adjectives. For instance, the stem green belongs to the paradigm that can be de- scribed by the set {Deriv=Abstr, Deg=Comp, Deg=Super}, er is a suffix belonging to {Deg=Comp}, thus the word-form greener is morphotactically licensed by the unifiability of the two structures: the feature 'Deg' occurs in both with the same value. It is possible to con- struct a net - a partial ordering of paradigm sets - according to the degree and sort of defectivity. The Subsumption hierarchy is useful in aggluti- native languages where allomorph paradigms of various stem classes might behave the same way although they have been derived by different morphonological processes. 9 There is an equivalence relation on the set of affix arrays. l0 Nom means nominal, N, Adj and Adv as usual. Some remarks to the sample words: greens does exist, but as a lexical noun. Some affixed forms, like happily, happier, The scheme shown in Example 2 would better suit languages like Hungarian, but here we try to demonstrate constructing morphological classes without naming them. The (partial) paradigm net based on Example 2 can be the following: CLASShappy > CLASS green > CLASS far > > CLASS~sh CLASShou~ > CLASS ~sh This classsification might be used by traditional linguists for creating definitions (or rather nam- ing conventions) of morpheme classes that are more precise than usual. 4 Unifiability without unification Features used for checking appropriate properties of stems and suffixes are relevant attributes of morpho-graphemic behavior. Checking 'appro- priateness' is based on unification, or, strictly speaking, checking unifiability of the adequate features of stems and suffixes. A phonologically and ortographically motivated allomorph-based variant of Example 3 is shown by Example 4. happiest, farther, farthest, are influenced also by phonological and/or orthographical processes. 263 Example 3 Features= • +/- Values Lex=Base Nbr=PI s ~es Deg=Comp i • Deg=Super Deriv=Adv ly Deriv=Abstr ness er est Subcat=N Stem Atlomorphs Cat=Nom Subcat=-Adj fish house + + - + green happy happi + + - + + + . + . + + . + Subcat=Adv far farth + Features (morpho-phonological properties) are used to characterize both stem and suffix allo- morphs. A list of Feature=Value pairs shows the morphological structure of the morphemes green and er: green." 
green: [Cat=Nom, Lex=Base, Subcat=Adj, Deriv=Abstr, Deg={Comp, Super}]
er: [Cat=Nom, Subcat={Adj, Adv}, Deg=Comp]

They are unifiable, thus the word-form greener is also morpho-phonologically licensed:[11]

INPUT: greener
OUTPUT: green[A] + er[CMP]

The most important advantage of this feature-based method is that possible paradigms and morpho-phonological types need not be defined previously; only the classification criteria have to be clarified. Since the number of these criteria is around a few dozen (in case of a language with rather complicated morphology), the number of theoretically possible paradigm classes is several millions or more. According to our practice, linguists choose about 10-20 orthogonal properties, which produce 2^10-2^20 possible classes, but, in fact, most of these hypothetical classes are empty in the language chosen. The implemented morphological analyzer provides the user with more detailed category information (lexical, morpho-syntactic, semantic, etc.) according to the case illustrated by Example 4 (see next page). Allomorphs happy and ly cannot be unified because of contradicting values of Allom, but happi and ly can. If the unifiability check is successful, the base form is reconstructed (according to the Base information: happi -> happy) and the output information (that is, the category code in our case) is returned:

INPUT: happyly
OUTPUT: *happyly
INPUT: happily
OUTPUT: happy[A]=happi+ly [A2ADV]

As we have seen, lexical information has a central role in Humor, because only a single rule - unifiability checking - is to be applied.

[11] Unifiability in Humor 99 is defined as follows: An f feature of the D description can have either a single value or a set of values. An f feature of the D description has compatible values in the E description iff one of the values of f can be found among the values of f in the E description. D and E are unifiable iff every f feature of the E description has compatible values in the D description.
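The definition in footnote 11 translates directly into code. A minimal Python sketch, assuming feature structures are dicts mapping feature names to a value or a set of values (this representation and the function name are ours for illustration):

  def unifiable(d, e):
      # D and E are unifiable iff every feature of E has a compatible
      # value in D, i.e. the (sets of) values intersect (footnote 11).
      # Features absent from D are assumed compatible (our reading).
      def values(x):
          return x if isinstance(x, (set, frozenset)) else {x}
      return all(f not in d or values(d[f]) & values(e[f]) for f in e)

  green = {"Cat": "Nom", "Lex": "Base", "Subcat": "Adj",
           "Deriv": "Abstr", "Deg": {"Comp", "Super"}}
  er = {"Cat": "Nom", "Subcat": {"Adj", "Adv"}, "Deg": "Comp"}
  print(unifiable(green, er))  # True: 'greener' is licensed

  happy = {"Allom": "y"}
  ly = {"Allom": "i"}
  print(unifiable(happy, ly))  # False: contradicting Allom values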
265 Example 6 shows how a meta-dictionary can be drawn up to handle the above structure. 12 Example 6 [% indicates the starting state; $ indicates ending (or ac- cepting) states] START : % PREFIX -> STEM REQUIRED STEM1 -> STEM~ PASSED STEM_REQUIRED : STEM1 -> STEM1 PASSED STEMI_PASSED : $ STEM2 -> AFFIXES POSSIBLE DERIV AFF -> INFL AFF POSSIBLE INFL AFF -> END -- -- AFFIXES_POSSIBLE : $ DERIV AFF -> INFL AFF POSSIBLE INFL AFF -> END -- -- INFL AFF POSSIBLE:$ INFL AFF -> END END : $ Here is an example how Humor's analyzer reacts to a typical construction of an agglutinative lan- guage (Hungarian): elsz6mlt6gdpezgethettem. ("I could use a computer to make fun for a while"): INPUT: elsz~tmit6g~pezgethettem INTERNAL SEGMENTATION: el[PREFIX]+sz~mit6[STEM 1 ]+g~p[STEM2]+ +ezgethet[DERIV.AFF.]+tem[INFL.AFF] OUTPUT: eI[VPREF]+s~it6[ADJ]+g~p[N]+ez[N2V]+ +get[FREQ]+het[OPT]+tem[PAST-SG- 1 ] 6 Comparison with other methods There are only a few general, reversible mor- phological systems that are suitable for more than a single language. In addition to the well-known two-level morphology (Koskenniemi 1983) and its modifications (Karttunen 1993) it is worth mentioning the Nabu system (Slocum 1988). There are some morphological description sys- tems showing some features in common with Humor 99 - like paradigmatic morphology (Cal- der 1989), or the Paradigm Description Language (Anick & Artemieff 1992) - but they don't have 12 The meta-dictionary shown in the example compiles with Humor's lexicon compiler without any changes. large-scale implementations. Two-level mor- phology is a reversible, orthography-based sys- tem that has several advantages from a linguist's point of view. Namely, the morpho-phone- mic/graphemic rules can be formalized in a gen- eral and very elegant way. It also has computa- tional advantages, but the lexicons must contain entries with extra symbols and other sophisti- cated elements in order to produce the necessary surface forms. Non-linguist users need an easy- to-extend dictionary into which words can be in- serted (almost) automatically. The lexical basis of Humor 99 contains surface characters only - no transformations are applied -, while the meta- dictionary mechanism retains many advantages of the two-level systems. It means in the practice that users can add entries to the running system without re-compiling it. The compilation time of a Humor 99 dictionary is usually 1-2 minutes (for 100,000 basic entries) on an average PC, which is another advantage (at least, for the linguist) when comparing it with other two-level systems. The result of the com- pilation is a compressed structure that can be used by any Humor 99 applications. The com- pression ratio is less than 20% in terms of lexicon size compared to the source material. The size of the dictionary has very little affect on the speed of the run-time system because the tree-based searching algorithm is enhanced with a special paging mechanism developed exclusively for this purpose. 7 Recent applications of the Humor 99 system There are several applications of Humor 99 - most of them are fully implemented, some others are still in a planning phase. For the time being, our research focuses on two applications, both serving one larger goal: the improvement of translation support of morphologically complex languages. This paper does not cover industrial applications such as spelling checkers, hyphen- ators, thesauri etc., since these modules have 266 been on the market for several years. 
The fol- lowing sections briefly describe (1) linguistic stemming for searching purposes, (2) an en- hancement to the Humor 99 morphological ana- lyzer that can act as a shallow or full parser in translation support systems. Linguistic stemming may be considered as a normalizer function which 'normalizes' word forms into canonic lexical forms, thus enabling searching systems to find any form of a specific word in an information base regardless of the word form entered in the search expression. In languages where a single lexical item can take thousands of possible forms, it is essential to have this normalization in electronic dictionaries used for translation support. However, it is these languages where linguistic stemming is impossi- ble without morphological analysis - otherwise several billions of word forms would have to be included in a single database. Thus stemming is a combination of the morphological analysis and a post-processing phase where the actual stems (lexical forms) are extracted from the analysis re- suits. Both the analysis and the extraction phase have to be very precise, otherwise false stems may be returned, and, in case of an electronic dictionary, wrong articles may be retrieved. In languages where words consist of several parts (i.e. productive compounding and/or sequences of derivative suffixes are possible), there might be a lot of possible stems of a single word form - the degree of disambiguity within a single word form can be much higher than in languages hav- ing less complex morphologies. Extraction is based on the results of morphologi- cal analysis where the original word form is seg- mented into morphemes, with each morpheme having a category label and a lexical form. From the segmented results, this phase selects mor- phemes with stem categories (adjective, noun, verb etc.). Example 7 shows a typical stemming problem where the computer is not entitled to choose between the different possible stems. In these cases, all stems must be returned. Choice is a task of either the end-user or a disambiguator module that is based on the context of the word. Example 7 There are two possible segmentations of the Hungarian word 'szemetek': szemetek = szem[N] + etek[Poss-P3 ] in English: 'your eyes' ('you' in plural) szemetek = szemdt[N]=szemet + ek[Pl] in English: 'pieces of rubbish' The two possible stems are: 'szem' (eye) and 'szemdt' (rubbish). 8 An enhancement: shallow and full parsing with HumorESK HumorESK (Humor Enhanced with Syntactic Knowledge) is a twofold application of Humor 99 that is used for shallow and full parsing. 13 The first point of using the morphological analyzer in the parser is to get as much linguistic information about a single word form as possible. The second point is using the basic principles of the mor- phological analyzer to implement the parser it- self. This means that we either collect or generate phrase patterns on different linguistic levels (noun phrases, prepositional phrases, verbal phrases etc.), and compile a Humor-like lexicon of them. On a specific linguistic level each atomic element of a pattern actually corresponds to a (more) complex structure on a lower linguis- tic level. Example 8 shows how a noun phrase pattern can be constructed from the result of the morphological analysis. 
Example 8 Surface string: the big bad wolves Morphological analysis: the[Det] big[Adj] bad[Adj] wolf[N]=wolve+s[PL] Noun phrase pattern: [Det] [Adj] [Adj] [N] [PL] 13 In our environment, shallow parsing of noun phra- ses - noun phrase extraction - is already implemented. 267 The example is quite simplified, and does not show an important aspect of the parser, namely, it retains the unification-based approach introduced in the morphological analyzer. This means that all atomic elements in a phrase pattern have three feature structures; two for the concatenation of two adjacent symbols, and one that describes the global ('phrase-wide') behavior of the symbol in question. After recognizing a phrase pattern (where recognition includes surface order li- censing based on unifiability checking), another licensing step is performed, based on the global features of each phrase element. This step (1) may reflect the internal hierarchy of symbols within the phrase, (2) sometimes includes actual unification of feature structures. Thus a single higher-level symbol can be generated from the phrase pattern that inherits features from the lower levels. The parser is still in development, although there is an implementation that is being tested together with the dictionary system. References Abondolo, D. M. Hungarian Inflectional Mor- phology. Akad6miai, Budapest. (1988) Anick, Peter & Susan Artemieff A High-level Morphological Description Language Exploit- ing Inflectional Paradigms. Proceedings of COLING-92, Nantes: 67-73. (1992) Beesley, K. R. Constraining Separated Morpho- tactic Dependencies In Finite State Grammars. Proceedings of the International Workshop on Finite State Methods in Natural Language Processing: 41-49 (1998) Bybee, J. L. Morphology. A Study of the Relation between Meaning and Form. Benjamins, Am- sterdam. (1985) Calder, J. Paradigmatic Morphology. Proceed- ings of 4th Conference of EACL 89:58-65 (1989) Carter, D. Rapid Development of Morphological Descriptions for Full Language Processing Systems. Proceedings of EACL 95:202-209 (1995) Goldberg, J. & K~ilm~in, L. The First BUG Re- port. Proceedings of COLING-92: 945-949 (1992) J~ippinen, H. and Ylilammi, M. Associative Model of Morphological Analysis: An Em- pirical Inquiry. Computational Linguistics 12(4): 257-252 (1986) Karlsson, F. A Paradigm-based Morphological Analyzer. Papers from the Fifth Scandinavian Conference of Computational Linguistics, Helsinki: 95-112 (1986) Karp, D. & Schabes, Y. A Wide Coverage Public Domain Morphological Analyzer for English. Proceedings of COLING-92: 950-95 5 (1992) Karttunen, L., Root, R. and Uszkoreit, H. Mor- phological Analysis of Finnish by Computer. Proceedings of the 71st Annual Meeting of the SASS. Albuquerque, New Mexico. (1981) Karttunen, L.Finite-State Lexicon Compiler. Technical Report. ISTL-NLTT-1993-04-02. Xerox PARC, Palo Alto, California (1993) Koskenniemi, K. Two-level Morphology: A Gen- eral Computational Model for Word-form Recognition and Production. Univ. of Hel- sinki, Dept. of Gen. Ling., Publications No.11. (1983) Oflazer, K. Two-Level Description of Turkish Morphology. Proceedings of EACL-93. (1993) Slocum, J. Morphological Processing in the Nabu System. Proceedings of the 2nd Applied Natu- ral Language Processing: 228-234 ( 1988) Voutilainen, A. Does Tagging Help Parsing? A Case Study on Finite State Parsing. Proceed- ings of the International Workshop on Finite State Methods in Natural Language Process- ing." 25-36 (1998) Zajac, R. 
Feature Structures, Unification and Finite-State Transducers. Proceedings of the International Workshop on Finite State Methods in Natural Language Processing: 101-109 (1998)
1999
34
Inside-Outside Estimation of a Lexicalized PCFG for German
Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler and Mats Rooth
Institut für Maschinelle Sprachverarbeitung, University of Stuttgart

Abstract
The paper describes an extensive experiment in inside-outside estimation of a lexicalized probabilistic context free grammar for German verb-final clauses. Grammar and formalism features which make the experiment feasible are described. Successive models are evaluated on precision and recall of phrase markup.

1 Introduction
Charniak (1995) and Carroll and Rooth (1998) present head-lexicalized probabilistic context free grammar formalisms, and show that they can effectively be applied in inside-outside estimation of syntactic language models for English, the parameterization of which encodes lexicalized rule probabilities and syntactically conditioned word-word bigram collocates. The present paper describes an experiment where a slightly modified version of Carroll and Rooth's model was applied in a systematic experiment on German, which is a language with rich inflectional morphology and free word order (or rather, compared to English, free-er phrase order). We emphasize techniques which made it practical to apply inside-outside estimation of a lexicalized context free grammar to such a language. These techniques relate to the treatment of argument cancellation and scrambled phrase order; to the treatment of case features in category labels; to the category vocabulary for nouns, articles, adjectives and their projections; to lexicalization based on uninflected lemmata rather than word forms; and to exploitation of a parameter-tying feature.

2 Corpus and morphology
The data for the experiment is a corpus of German subordinate clauses extracted by regular expression matching from a 200 million token newspaper corpus. The clause length ranges between four and 12 words. Apart from infinitival VPs as verbal arguments, there are no further clausal embeddings, and the clauses do not contain any punctuation except for a terminal period. The corpus contains 4128873 tokens and 450526 clauses, which yields an average of 9.16456 tokens per clause. Tokens are automatically annotated with a list of part-of-speech (PoS) tags using a computational morphological analyser based on finite-state technology (Karttunen et al. (1994), Schiller and Stöckert (1995)).

A problem for practical inside-outside estimation of an inflectional language like German arises with the large number of terminal and low-level non-terminal categories in the grammar resulting from the morpho-syntactic features of words. Apart from major class (noun, adjective, and so forth), the analyser provides an ambiguous word with a list of possible combinations of inflectional features like gender, person, number (cf. the top part of Fig. 1 for an example ambiguous between nominal and adjectival PoS; the PoS is indicated following the '+' sign). In order to reduce the number of parameters to be estimated, and to reduce the size of the parse forest used in inside-outside estimation, we collapsed the inflectional readings of adjectives, adjective derived nouns, article words, and pronouns to a single morphological feature (see the bottom of Fig. 1 for an example). This reduced the number of low-level categories, as exemplified in Fig. 2: das has one reading as an article and one as a demonstrative; westdeutschen has one reading as an adjective, with its morphological feature N indicating the inflectional suffix. We use the special tag UNTAGGED, indicating that the analyser fails to provide a tag for the word. The vast majority of UNTAGGED words are proper names not recognized as such. These gaps in the morphology have little effect on our experiment.

3 Grammar
The grammar is a manually developed headed context-free phrase structure grammar for German subordinate clauses with 5508 rules and
2: das has one reading as an article and one as a demonstrative; westdeutschen has one reading as an adjective, with its morphological feature N indicating the inflectional suffix. We use the special tag UNTAGGED indicating that the analyser fails to provide a tag for the word. The vast majority of UNTAGGED words are proper names not recognized as such. These gaps in the morphology have little effect on our experiment. 3 Grammar The grammar is a manually developed headed context-free phrase structure grammar for Ger- man subordinate clauses with 5508 rules and 269 analyze> Deutsche i. deutsch'ADJ.Pos+NN.Fem.Akk.Sg 2. deutsch^ADJ.Pos+NN.Fem.Nom.Sg 3. deutsch^ADJ.Pos+NN.Masc.Nom. Sg. Sw 4. deutsch^ADJ.Pos+NN.Neut.Akk.Sg. Sw 5. deutsch^ADJ.Pos+NN.Neut.Nom. Sg.Sw 6. deutsch-ADJ.Pos+NN.NoGend.Akk.Pi.St 7. deutsch^ADJ.Pos+NN.NoGend.Nom.Pl.St 8. *deutsch+ADJ.Pos.Fem.Akk.Sg 9. *deutsch+ADJ.Pos.Fem.Nom.Sg i0. *deutsch+ADJ.Pos.Masc.Nom.Sg.Sw ii. *deutsch+ADJ.Pos.Neut.Akk.Sg.Sw 12. *deutsch+ADJ.Pos.Neut.Nom.Sg. Sw 13. *deutsch+ADJ.Pos.NoGend.Akk.Pi.St 14. *deutsch+ADJ.Pos.NoGend.Nom.Pl.St ==> Deutsche { ADJ.E, NNADJ.E } Figure 1: Collapsing Inflectional Features w~hrend { ADJ.Adv, ADJ.Pred, KOUS, APPR.Dat, APPR.Gen } sich { PRF.Z } das { DEMS.Z, ART.Def.Z } Preisniveau { NN.Neut.NotGen. Sg } dem { DEMS.M, ART.Def.M } westdeutschen { ADJ.N } snn~dlere { VVFIN } { PER } Figure 2: Corpus Clip 562 categories, 209 of which are terminal cat- egories. The formalism is that of Carroll and Rooth (1998), henceforth C+R: mother -> non-heads head' non-heads (freq) The rules are head marked with a prime. The non-head sequences may be empty, freq is a rule frequency, which is initialized randomly and subsequently estimated by the inside outside- algorithm. To handle systematic patterns re- lated to features, rules were generated by Lisp functions, rather than being written directly in the above form. With very few exceptions (rules for coordination, S-rule), the rules do not have more than two daughters. Grammar development is facilitated by a chart browser that permits a quick and efficient discovery of grammar bugs (Carroll, 1997a). Fig. 3 shows that the ambiguity in the chart is quite considerable even though grammar and corpus are restricted. For the entire corpus, we com- puted an average 9202 trees per clause. In the chart browser, the categories filling the cells in- dicate the most probable category for that span with their estimated frequencies. The pop-up window under IP presents the ranked list of all possible categories for the covered span. Rules (chart edges) with frequencies can be viewed with a further menu. In the chart browser, colors are used to display frequencies (between 0 and 1) estimated by the inside-outside algorithm. This allows properties shared across tree analyses to be checked at a glance; often grammar and es- timation bugs can be detected without mouse operations. The grammar covers 88.5~o of the clauses and 87.9% of the tokens contained in the corpus. Parsing failures are mainly due to UNTAGGED words contained in 6.6% of the failed clauses, the pollution of the corpus by infinitival con- structions (~1.3%), and a number of coordina- tions not covered by the grammar (~1.6%). 3.1 Case features and agreement On nominal categories, in addition to the four cases Nom, Gen, Dat, and Akk, case features with a disjunctive interpretation (such as Dir for Nom or Akk) are used. The grammar is writ- ten in such a way that non-disjunctive features are introduced high up in the tree. 
3.1 Case features and agreement
On nominal categories, in addition to the four cases Nom, Gen, Dat, and Akk, case features with a disjunctive interpretation (such as Dir for Nom or Akk) are used. The grammar is written in such a way that non-disjunctive features are introduced high up in the tree. This results in some reduction in the size of the parse forest, and some parameter pooling. Essentially the full range of agreement inside the noun phrase is enforced. Agreement between the nominative NP and the tensed verb (e.g. in number) is not enforced by the grammar, in order to control the number of parameters and rules.
For noun phrases we employ Abney's chunk grammar organization (Abney, 1996). The noun chunk (NC) is an approximately non-recursive projection that excludes post-head complements and (adverbial) adjuncts introduced higher than pre-head modifiers and determiners, but includes participial pre-modifiers with their complements. Since we perform complete context-free parsing, parse forest construction, and inside-outside estimation, chunks are not motivated by deterministic parsing. Rather, they facilitate evaluation and graphical debugging, by tending to increase the span of constituents with high estimated frequency.

[Figure 3: Chart browser. Word-by-word gloss of the clause: 'that Sarajevo over the airport with the essentials supplied will can']

class  #   frame types
VPA    15  n, na, nad, nai, nap, nar, nd, ndi, ndp, ndr, ni, nir, np, npr, nr
VPP    13  d, di, dp, dr, i, ir, n, nd, ni, np, p, pr, r
VPI    10  a, ad, ap, ar, d, dp, dr, p, pr, r
VPK    2   i, n

Figure 4: Number and types of verb frames

[Figure 5: Coding of canonical and scrambled argument order.
canonical:  (VPA.na.na (NP.Nom) (VPA.na.a (NP.Akk) (VPA.na)))
scrambled:  (VPA.na.na (NP.Akk) (VPA.na.n (NP.Nom) (VPA.na)))]

3.2 Subcategorisation frames of verbs
The grammar distinguishes four subcategorisation frame classes: active (VPA), passive (VPP), infinitival (VPI) frames, and copula constructions (VPK). A frame may have maximally three arguments. Possible arguments in the frames are nominative (n), dative (d) and accusative (a) NPs, reflexive pronouns (r), PPs (p), and infinitival VPs (i). The grammar does not distinguish plain infinitival VPs from zu-infinitival VPs. The grammar is designed to partially distinguish different PP frames relative to the prepositional head of the PP. A distinct category for the specific preposition becomes visible only when a subcategorized preposition is cancelled from the subcat list. This means that specific prepositions do not figure in the evaluation discussed below. The number and the types of frames in the different frame classes are given in Figure 4.
German, being a language with comparatively free phrase order, allows for scrambling of arguments. Scrambling is reflected in the particular sequence in which the arguments of the verb frame are saturated. Compare Figure 5 for an example of a canonical subject-object order in an active transitive frame and its scrambled object-subject order. The possibility of scrambling verb arguments yields a substantial increase in the number of rules in the grammar (e.g. 102 combinatorially possible argument rules for all VPA frames). Adverbs and non-subcategorized PPs are introduced as adjuncts to VP categories which do not saturate positions in the subcat frame.
In earlier experiments, we employed a flat clausal structure, with rules for all permutations of complements. As the number of frames increased, this produced prohibitively many rules, particularly with the inclusion of adjuncts.
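As an illustration of how such argument-cancellation rules can be generated mechanically (the paper used Lisp functions; this Python generator and its category spelling are our assumption, following the naming of Figures 4-5):

from itertools import permutations

ARG_CAT = {"n": "NP.Nom", "a": "NP.Akk", "d": "NP.Dat"}

def argument_rules(frame: str):
    """Yield binary rules saturating the arguments of an active frame
    (e.g. 'na') in every possible surface order."""
    rules = set()
    for order in permutations(frame):
        remaining = frame
        for arg in order:
            mother = f"VPA.{frame}.{remaining}"
            child = remaining.replace(arg, "", 1)
            head = f"VPA.{frame}.{child}" if child else f"VPA.{frame}"
            rules.add(f"{mother} -> {ARG_CAT[arg]} {head}'")
            remaining = child
    return sorted(rules)

for r in argument_rules("na"):
    print(r)
# VPA.na.a -> NP.Akk VPA.na'
# VPA.na.n -> NP.Nom VPA.na'
# VPA.na.na -> NP.Akk VPA.na.n'
# VPA.na.na -> NP.Nom VPA.na.a'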
4 Parameters
The parameterization is as in C+R, with one significant modification. Parameters consist of (i) rule parameters, corresponding to right-hand sides conditioned by parent category and parent head; (ii) lexical choice parameters for non-head children, corresponding to child lemma conditioned by child category, parent category, and parent head lemma. See C+R or Charniak (1995) for an explanation of how such parameters define a probabilistic weighting of trees. The change relative to C+R is that lexicalization is by uninflected lemma rather than word form. This reduces the number of lexical parameters, giving more acceptable model sizes and eliminating splitting of estimated frequencies among inflectional forms. Inflected forms are generated at the leaves of the tree, conditioned on terminal category and lemma. This results in a third family of parameters, though usually the choice of inflected form is deterministic.
A parameter pooling feature is used for argument filling where all parent categories of the form VP.x.y are mapped to a category VP.x in defining lexical choice parameters. The consequence is e.g. that an accusative daughter of a nominative-accusative verb uses the same lexical choice parameter, whether a default or scrambled word order is used. (This feature was used by C+R for their phrase trigram grammar, not in the linguistic part of their grammar.) Not all desirable parameter pooling can be expressed in this way, though; for instance rule parameters are not pooled, and so get split when the parent category bears an inflectional feature.

5 Estimation
The training of our probabilistic CFG proceeds in three steps: (i) unlexicalized training with the supar parser, (ii) bootstrapping a lexicalized model from the trained unlexicalized one with the ultra parser, and finally (iii) lexicalized training with the hypar parser (Carroll, 1997b). Each of the three parsers uses the inside-outside algorithm. supar and ultra use an unlexicalized weighting of trees, while hypar uses a lexicalized weighting of trees. ultra and hypar both collect frequencies for lexicalized rule and lexical choice events, while supar collects only unlexicalized rule frequencies.
Our experiments have shown that training an unlexicalized model first is worth the effort. Despite our use of a manually developed grammar that does not have to be pruned of superfluous rules like an automatically generated grammar, the lexicalized model is notably better when preceded by unlexicalized training (see also Ersan and Charniak (1995) for related observations). A comparison of immediate lexicalized training (without prior training of an unlexicalized model) and our standard training regime that involves preliminary unlexicalized training speaks in favor of our strategy (cf. the different 'lex 0' and 'lex 2' curves in Figures 8 and 9). However, the amount of unlexicalized training has to be controlled in some way. A standard criterion to measure overtraining is to compare log-likelihood values on held-out data of subsequent iterations.
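The stopping criterion just described can be sketched as follows. The model interface (inside_outside_iteration, cross_entropy) is hypothetical, since the actual training used the supar/ultra/hypar tools:

def train_until_overtrained(model, train_data, heldout_data, max_iter=100):
    prev_ce = float("inf")
    for it in range(1, max_iter + 1):
        model.inside_outside_iteration(train_data)   # one EM step
        ce = model.cross_entropy(heldout_data)       # bits per token
        print(f"{it}: {ce:.4f}")
        if ce >= prev_ce:          # held-out cross-entropy stopped falling:
            break                  # overtraining threshold reached
        prev_ce = ce
    return model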
While the log-likelihood value of the training data is theoretically guaranteed to converge through subsequent iterations, a decreasing log-likelihood value of the held-out data indicates overtraining. Instead of log-likelihood, we use the inversely proportional cross-entropy measure. Fig. 6 shows comparisons of different sizes of training and heldout data (training/heldout): (A) 50k/50k, (B) 500k/500k, (C) 4.1M/500k. The overtraining effect is indicated by the increase in cross-entropy from the penultimate to the ultimate iteration in the tables. Overtraining results for lexicalized models are not yet available.

     A             B             C
 1: 52.0199    1: 53.7654    1: 49.8165
 2: 25.3652    2: 26.3184    2: 23.1008
 3: 24.5905    3: 25.5035    3: 22.4479
 :  :          :  :          :  :
13: 24.2872   55: 25.0548   70: 22.1445
14: 24.2863   56: 25.0549   80: 22.1443
15: 24.2861   57: 25.0549   90: 22.1443
16: 24.2861   58: 25.0549   95: 22.1443
17: 24.2867   59: 25.0550   96: 22.1444

Figure 6: Overtraining (iteration: cross-entropy on heldout data)

However, a comparison of precision/recall measures on categories of different complexity through iterative unlexicalized training shows that the mathematical criterion for overtraining may lead to bad results from a linguistic point of view. While we observed more or less converging precision/recall measures for lower-level structures such as noun chunks, iterative unlexicalized training up to the overtraining threshold turned out to be disastrous for the evaluation of complex categories that depend on almost the entire span of the clause. The recognition of subcategorization frames through 60 iterations of unlexicalized training shows a massive decrease in precision/recall from the best to the last iteration, even dropping below the results with the randomly initialized grammar (see Fig. 9).

[Figure 7: Chart browser for manual NC labelling]

5.1 Training regime
We compared lexicalized training with respect to different starting points: a random unlexicalized model, the trained unlexicalized model with the best precision/recall results, and an unlexicalized model that comes close to the cross-entropy overtraining threshold. The details of the training steps are as follows:
(1) 0, 2 and 60 iterations of unlexicalized parsing with supar;
(2) lexicalization with ultra using the entire corpus;
(3) 23 iterations of lexicalized parsing with hypar.
The training was done on four machines (two 167 MHz UltraSPARC and two 296 MHz SUNW UltraSPARC-II). Using the grammar described here, one iteration of supar on the entire corpus takes about 2.5 hours, lexicalization and generating an initial lexicalized model takes more than six hours, and an iteration of lexicalized parsing can be done in 5.5 hours.

6 Evaluation
For the evaluation, a total of 600 randomly selected clauses were manually annotated by two labellers. Using a chart browser, the labellers filled the appropriate cells with category names of NCs and those of maximal VP projections (cf. Figure 7 for an example of NC-labelling). Subsequent alignment of the labellers' decisions resulted in a total of 1353 labelled NC categories (with four different cases). The total of 584 labelled VP categories subdivides into 21 different verb frames with 340 different lemma heads. The dominant frames are active transitive (164 occurrences) and active intransitive (117 occurrences). They represent almost half of the annotated frames. Thirteen frames occur less than ten times, five of which just once.

6.1 Methodology
To evaluate iterative training, we extracted maximum probability (Viterbi) trees for the 600-clause test set in each iteration of parsing. For extraction of a maximal probability parse in unlexicalized training, we used Schmid's lopar parser (Schmid, 1999).
Trees were mapped to a database of parser-generated markup guesses, and we measured precision and recall against the manually annotated category names and spans. Precision gives the ratio of correct guesses over all guesses, and recall the ratio of correct guesses over the number of phrases identified by human annotators. Here, we render only the precision/recall results on pairs of category names and spans, neglecting less interesting measures on spans alone. For the figures of adjusted recall, the number of unparsed misses has been subtracted from the number of possibilities.

[Figure 8: Precision/recall measures on NC cases, plotted over training iterations for the 'lex 02', 'unlex', 'lex 00', and 'lex 60' regimes]

[Figure 9: Precision measures on all verb frames, plotted over training iterations for the same regimes]

In the following, we focus on the combination of the best unlexicalized model and the lexicalized model that is grounded on the former.

6.2 NC Evaluation
Figure 8 plots precision/recall for the training runs described in Section 5.1, with lexicalized parsing starting after 0, 2, or 60 unlexicalized iterations. The best results are achieved by starting with lexicalized training after two iterations of unlexicalized training. Of a total of 1353 annotated NCs with case, 1103 are correctly recognized in the best unlexicalized model and 1112 in the last lexicalized model. With a number of 1295 guesses in the unlexicalized and 1288 guesses in the final lexicalized model, we gain 1.2% in precision (85.1% vs. 86.3%) and 0.6% in recall (81.5% vs. 82.1%) through lexicalized training. Adjustment to parsed clauses yields 88% vs. 89.2% in recall. As shown in Figure 8, the gain is achieved already within the first iteration; it is equally distributed between corrections of category boundaries and labels.
The comparatively small gain with lexicalized training could be viewed as evidence that the chunking task is too simple for lexical information to make a difference. However, we find about 7% revised guesses from the unlexicalized to the first lexicalized model. Currently, we do not have a clear picture of the newly introduced errors.
The plots labeled "00" are results for lexicalized training starting from a random initial grammar. The precision measure of the first lexicalized model falls below that of the unlexicalized random model (74%), only recovering through lexicalized training to equalize the precision measure of the random model (75.6%). This indicates that some degree of unlexicalized initialization is necessary, if a good lexicalized model is to be obtained.
Skut and Brants (1998) report 84.4% recall and 84.2% precision for NP and PP chunking without case labels. While these are numbers for a simpler problem and are slightly below ours, they are figures for an experiment on unrestricted sentences. A genuine comparison has to await extension of our model to free text.
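The measures described in Section 6.1 amount to set intersection over (category, span) pairs, as in this sketch (the data layout is our assumption):

def evaluate(guesses, gold, unparsed_misses=0):
    """guesses/gold: sets of (category, start, end) triples."""
    correct = len(guesses & gold)
    precision = correct / len(guesses)
    recall = correct / len(gold)
    # adjusted recall: subtract gold phrases in unparsed clauses
    adj_recall = correct / (len(gold) - unparsed_misses)
    return precision, recall, adj_recall

gold = {("NC.Nom", 0, 2), ("VPA.na", 2, 6)}
guesses = {("NC.Nom", 0, 2), ("VPA.nd", 2, 6)}
print(evaluate(guesses, gold))  # (0.5, 0.5, 0.5)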
6.3 Verb Frame Evaluation
Figure 9 gives results for verb frame recognition under the same training conditions. Again, we achieve best results by lexicalising the second unlexicalized model. Of a total of 584 annotated verb frames, 384 are correctly recognized in the best unlexicalized model and 397 through subsequent lexicalized training. Precision for the best unlexicalized model is 68.4%. This is raised by 2% to 70.4% through lexicalized training; recall is 65.7%/68%; adjustment by 41 unparsed misses makes for 70.4%/72.8% in recall. The rather small improvements are in contrast to 88 differences in parser markup, i.e. 15.7%, between the unlexicalized and second lexicalized model. The main gain is observed within the first two iterations (cf. Figure 9; for readability, we dropped the recall curves when more or less parallel to the precision curves). Results for lexicalized training without prior unlexicalized training are better than in the NC evaluation, but fall short of our best results by more than 2%.
The most notable observation in verb frame evaluation is the decrease of precision of frame recognition in unlexicalized training from the second iteration onward. After several dozen iterations, results are 5% below a random model and 14% below the best model. The primary reason for the decrease is the mistaken revision of adjoined PPs to argument PPs. E.g. the required number of 164 transitive frames is missed by 76, while the parser guesses 64 VPA.nap frames in the final iteration against the annotator's baseline of 12. In contrast, lexicalized training generally stabilizes w.r.t. frame recognition results after only a few iterations.

[Figure 10: Precision measures on non-PP frames]

The plot labeled "lex 60" gives precision for a lexicalized training starting from the unlexicalized model obtained with 60 iterations, which measured by linguistic criteria is a very poor state. As far as we know, lexicalized EM estimation never recovers from this bad state.

6.4 Evaluation of non-PP Frames
Because examination of individual cases showed that PP attachments are responsible for many errors, we did a separate evaluation of non-PP frames. We filtered out all frames labelled with a PP argument from both the maximal probability parses and the manually annotated frames (91 filtered frames), measuring precision and recall against the remaining 493 labeller-annotated non-PP frames.
For the best lexicalized model, we find somewhat but not excessively better results than those of the evaluation of the entire set of frames. Of 527 guessed frames in parser markup, 382 are correct, i.e. a precision of 72.5%. The recall figure of 77.5% is considerably better since overgeneration of 34 guesses is neglected. The differences with respect to different starting points for lexicalization emulate those in the evaluation of all frames.
The rather spectacular-looking precision and recall differences in unlexicalized training confirm what was observed for the full frame set. From the first trained unlexicalized model throughout unlexicalized training, we find a steady increase in precision (70% in the first trained model to 78% in the final model) against a sharp drop in recall (78% peak in the second model vs. 50% in the final).
Considering our above remarks on the difficulties of frame recognition in unlexicalized training, the sharp drop in recall is to be expected: since recall measures the correct parser guesses against the annotator's baseline, the tendency to favor PP arguments over PP adjuncts leads to a loss in guesses when PP frames are abandoned. Similarly, the rise in precision is mainly explained by the decreasing number of guesses when cutting out non-PP frames. For further discussion of what happens with individual frames, we refer the reader to (Beil et al., 1998).
One systematic result in these plots is that performance of lexicalized training stabilizes after a few iterations. This is consistent with what happens with rule parameters for individual verbs, which are close to their final values within five iterations.

7 Conclusion
Our principal result is that scrambling-style free-er phrase order, case morphology and subcategorization, and NP-internal gender, number and case agreement can be dealt with in a head-lexicalized PCFG formalism by means of carefully designed categories and rules which limit the size of the packed parse forest and give desirable pooling of parameters. Hedging this, we point out that we made compromises in the grammar (notably, in not enforcing nominative-verb agreement) in order to control the number of categories, rules, and parameters.
A second result is that iterative lexicalized inside-outside estimation appears to be beneficial, although the precision/recall increments are small. We believe this is the first substantial investigation of the utility of iterative lexicalized inside-outside estimation of a lexicalized probabilistic grammar involving a carefully built grammar where parses can be evaluated by linguistic criteria.
A third result is that using too many unlexicalized iterations (more than two) is detrimental. A criterion using cross-entropy overtraining on held-out data dictates many more unlexicalized iterations, and this criterion is therefore inappropriate.
Finally, we have clear cases of lexicalized EM estimation being stuck in linguistically bad states. As far as we know, the model which gave the best results could also be stuck in a comparatively bad state. We plan to experiment with other lexicalized training regimes, such as ones which alternate between different training corpora.
The experiments are made possible by improvements in parser and hardware speeds, the carefully built grammar, and evaluation tools. In combination, these provide a unique environment for investigating training regimes for lexicalized PCFGs. Much work remains to be done in this area, and we feel that we are just beginning to develop understanding of the time course of parameter estimation, and of the general efficacy of EM estimation of lexicalized PCFGs as evaluated by linguistic criteria.
We believe our current grammar of German could be extended to a robust free-text chunk/phrase grammar in the style of the English grammar of Carroll and Rooth (1998) with about a month's work, and to a free-text grammar treating verb-second clauses and additional complementation structures (notably extraposed clausal complements) with about one year of additional grammar development and experiment. These increments in the grammar could easily double the number of rules. However, this would probably not pose a problem for the parsing and estimation software.
References
Steven Abney. 1996. Chunk stylebook. Technical report, SfS, Universität Tübingen.
Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats Rooth. 1998. Inside-outside estimation of a lexicalized PCFG for German. In Inducing Lexicons with the EM Algorithm. AIMS Report 4(3), IMS, Universität Stuttgart.
Glenn Carroll and Mats Rooth. 1998. Valence induction with a head-lexicalized PCFG. In Proceedings of EMNLP-3, Granada.
Glenn Carroll. 1997a. Manual pages for charge, hyparCharge, and tau. IMS, Universität Stuttgart.
Glenn Carroll. 1997b. Manual pages for supar, ultra, hypar, and genDists. IMS, Universität Stuttgart.
E. Charniak. 1995. Parsing with context-free grammars and word statistics. Technical Report CS-95-28, Department of Computer Science, Brown University.
M. Ersan and E. Charniak. 1995. A statistical syntactic disambiguation program and what it learns. Technical Report CS-95-29, Department of Computer Science, Brown University.
Lauri Karttunen, Todd Yampol, and Gregory Grefenstette. 1994. INFL Morphological Analyzer/Generator 3.2.9 (3.6.4). Xerox Corporation.
Anne Schiller and Chris Stöckert. 1995. DMOR. IMS, Universität Stuttgart.
Helmut Schmid. 1999. Manual page for lopar. IMS, Universität Stuttgart.
Wojciech Skut and Thorsten Brants. 1998. A maximum-entropy partial parser for unrestricted text. In Proceedings of the Sixth Workshop on Very Large Corpora, Montreal, Quebec.
1999
35
A Part of Speech Estimation Method for Japanese Unknown Words using a Statistical Model of Morphology and Context

Masaaki NAGATA
NTT Cyber Space Laboratories
1-1 Hikari-no-oka Yokosuka-Shi Kanagawa, 239-0847 Japan
nagata@nttnly.isl.ntt.co.jp

Abstract
We present a statistical model of Japanese unknown words consisting of a set of length and spelling models classified by the character types that constitute a word. The point is quite simple: different character sets should be treated differently, and the changes between character types are very important, because Japanese script has both ideograms like Chinese (kanji) and phonograms like English (katakana). Both word segmentation accuracy and part of speech tagging accuracy are improved by the proposed model. The model can achieve 96.6% tagging accuracy if unknown words are correctly segmented.

1 Introduction
In Japanese, around 95% word segmentation accuracy is reported by using a word-based language model and Viterbi-like dynamic programming procedures (Nagata, 1994; Yamamoto, 1996; Takeuchi and Matsumoto, 1997; Haruno and Matsumoto, 1997). About the same accuracy is reported in Chinese by statistical methods (Sproat et al., 1996). But there has been relatively little improvement in recent years because most of the remaining errors are due to unknown words.
There are two approaches to solve this problem: to increase the coverage of the dictionary (Fung and Wu, 1994; Chang et al., 1995; Mori and Nagao, 1996) and to design a better model for unknown words (Nagata, 1996; Sproat et al., 1996). We take the latter approach. To improve word segmentation accuracy, (Nagata, 1996) used a single general-purpose unknown word model, while (Sproat et al., 1996) used a set of specific word models, such as for plurals, personal names, and transliterated foreign words.
The goal of our research is to assign a correct part of speech to an unknown word as well as identifying it correctly. In this paper, we present a novel statistical model for Japanese unknown words. It consists of a set of word models for each part of speech and word type. We classified Japanese words into nine orthographic types based on the character types that constitute a word. We find that by making different models for each word type, we can better model the length and spelling of unknown words.
In the following sections, we first describe the language model used for Japanese word segmentation. We then describe a series of unknown word models, from the baseline model to the one we propose. Finally, we prove the effectiveness of the proposed model by experiment.

2 Word Segmentation Model
2.1 Baseline Language Model and Search Algorithm
Let the input Japanese character sequence be $C = c_1 \ldots c_m$, and segment it into the word sequence $W = w_1 \ldots w_n$.¹ The word segmentation task can be defined as finding the word segmentation $\hat{W}$ that maximizes the joint probability of the word sequence given the character sequence, $P(W|C)$. Since the maximization is carried out with a fixed character sequence $C$, the word segmenter only has to maximize the joint probability of the word sequence $P(W)$:

$\hat{W} = \arg\max_W P(W|C) = \arg\max_W P(W)$   (1)

We call $P(W)$ the segmentation model. We can use any type of word-based language model for $P(W)$, such as a word ngram or class-based ngram. We used the word bigram model in this paper. So, $P(W)$ is approximated by the product of word bigram probabilities $P(w_i|w_{i-1})$:
$P(W) \approx P(w_1|\langle bos\rangle) \prod_{i=2}^{n} P(w_i|w_{i-1}) \cdot P(\langle eos\rangle|w_n)$   (2)

Here, the special symbols <bos> and <eos> indicate the beginning and the end of a sentence, respectively. [Footnote 1: In this paper, we define a word as a combination of its surface form and part of speech. Two words are considered to be equal only if they have the same surface form and part of speech.]

Table 1: Examples of word bigrams including unknown word tags
word bigram                            frequency
の/no/particle <U-noun>                6783
<U-verb> し/shi/inflection             1052
<U-number> 円/yen/suffix                407
<U-adjectival-verb> な/na/inflection    405
<U-adjective> い/i/inflection           182
<U-adverb> と/to/particle               139

Basically, the word bigram probabilities of the word segmentation model are estimated by computing the relative frequencies of the corresponding events in the word-segmented training corpus, with appropriate smoothing techniques. The maximization search can be efficiently implemented by using the Viterbi-like dynamic programming procedure described in (Nagata, 1994).

2.2 Modification to Handle Unknown Words
To handle unknown words, we made a slight modification to the above word segmentation model. We have introduced unknown word tags <U-t> for each part of speech t. For example, <U-noun> and <U-verb> represent an unknown noun and an unknown verb, respectively.
If $w_i$ is an unknown word whose part of speech is $t$, the word bigram probability $P(w_i|w_{i-1})$ is approximated as the product of the word bigram probability $P(\langle U\text{-}t\rangle|w_{i-1})$ and the probability of $w_i$ given that it is an unknown word whose part of speech is $t$, $P(w_i|\langle U\text{-}t\rangle)$:

$P(w_i|w_{i-1}) = P(\langle U\text{-}t\rangle|w_{i-1})\, P(w_i|\langle U\text{-}t\rangle, w_{i-1}) \approx P(\langle U\text{-}t\rangle|w_{i-1})\, P(w_i|\langle U\text{-}t\rangle)$   (3)

Here, we made the assumption that the spelling of an unknown word solely depends on its part of speech and is independent of the previous word. This is the same assumption made in the hidden Markov model, which is called output independence.
The probabilities $P(\langle U\text{-}t\rangle|w_{i-1})$ can be estimated from the relative frequencies in the training corpus whose infrequent words are replaced with their corresponding unknown word tags based on their parts of speech. [Footnote 2: Throughout this paper, we use the term "infrequent words" to represent words that appeared only once in the corpus. They are also called "hapax legomena" or "hapax words". It is well known that the characteristics of hapax legomena are similar to those of unknown words (Baayen and Sproat, 1996).]
Table 1 shows examples of word bigrams including unknown word tags. Here, a word is represented by a list of surface form, pronunciation, and part of speech, which are delimited by a slash '/'. The first example "の/no/particle <U-noun>" will appear in the most frequent form of Japanese noun phrases, "A の B", which corresponds to "B of A" in English. As Table 1 shows, word bigrams whose infrequent words are replaced with their corresponding part of speech-based unknown word tags are a very important information source for the contexts where unknown words appear.

3 Unknown Word Model
3.1 Baseline Model
The simplest unknown word model depends only on the spelling. We think of an unknown word as a word having a special part of speech <UNK>. Then, the unknown word model is formally defined as the joint probability of the character sequence $w_i = c_1 \ldots c_k$ if it is an unknown word. Without loss of generality, we decompose it into the product of the word length probability and the word spelling probability given its length:

$P(w_i|\langle UNK\rangle) = P(c_1 \ldots c_k|\langle UNK\rangle) = P(k|\langle UNK\rangle)\, P(c_1 \ldots c_k \mid k, \langle UNK\rangle)$   (4)

where $k$ is the length of the character sequence. We call $P(k|\langle UNK\rangle)$ the word length model, and $P(c_1 \ldots c_k \mid k, \langle UNK\rangle)$ the word spelling model.
In order to estimate the entropy of English, (Brown et al., 1992) approximated $P(k|\langle UNK\rangle)$ by a Poisson distribution whose parameter is the average word length $\lambda$ in the training corpus, and $P(c_1 \ldots c_k \mid k, \langle UNK\rangle)$ by the product of character zerogram probabilities. This means all characters in the character set are considered to be selected independently and uniformly:

$P(c_1 \ldots c_k|\langle UNK\rangle) \approx \frac{\lambda^k}{k!} e^{-\lambda} p^k$   (5)

where $p$ is the inverse of the number of characters in the character set. If we assume JIS-X-0208 is used as the Japanese character set, $p = 1/6879$.
Since the Poisson distribution is a single-parameter distribution with a lower bound, it is appropriate to use it as a first-order approximation to the word length distribution. But the Brown model has two problems. It assigns a certain amount of probability mass to zero-length words, and it is too simple to express morphology.
For Japanese word segmentation and OCR error correction, (Nagata, 1996) proposed a modified version of the Brown model. Nagata also assumed that the word length probability obeys the Poisson distribution, but he moved the lower bound from zero to one:

$P(k|\langle UNK\rangle) \approx \frac{(\lambda-1)^{k-1}}{(k-1)!} e^{-(\lambda-1)}$   (6)
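For concreteness, the two length models of Equations (5) and (6) can be sketched as follows (the code is our illustration; lam = 4.8 is the average hapax word length reported later in the paper):

import math

def brown_length_prob(k: int, lam: float) -> float:
    # Eq. (5)'s Poisson factor: lam^k / k! * exp(-lam); assigns mass to k = 0
    return lam**k / math.factorial(k) * math.exp(-lam)

def nagata_length_prob(k: int, lam: float) -> float:
    # Eq. (6): shifted Poisson, defined for k >= 1
    return (lam - 1)**(k - 1) / math.factorial(k - 1) * math.exp(-(lam - 1))

lam = 4.8
print([round(nagata_length_prob(k, lam), 3) for k in range(1, 7)])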
We call P(kI<UNK> ) the word length model, and P(cz... ck Ik, <UNK>) the word spelling model. In order to estimate the entropy of English, (Brown et al., 1992) approximated P(kI<UNK> ) by a Poisson distribution whose parameter is the average word length A in the training corpus, and P(cz... cklk, <UNK>) by the product of character zerogram probabilities. This means all characters in the character set are considered to be selected inde- pendently and uniformly. )k P(Cl . . .ckI<UNK> ) -~ -~. e-~p k (5) where p is the inverse of the number of characters in the character set. If we assume JIS-X-0208 is used as the Japanese character set, p = 1/6879. Since the Poisson distribution is a single parame- ter distribution with lower bound, it is appropriate to use it as a first order approximation to the word length distribution. But the Brown model has two problems. It assigns a certain amount of probability mass to zero-length words, and it is too simple to express morphology. For Japanese word segmentation and OCR error correction, (Nagata, 1996) proposed a modified ver- sion of the Brown model. Nagata also assumed the word length probability obeys the Poisson distribu- tion. But he moved the lower bound from zero to one. ()~ - I) k-1 P(k]<UNK>) ~ (k- 1)! e-()~-l) (6) 278 Instead of zerogram, He approximated the word spelling probability P(Cl...ck[k, <UNK>) by the product of word-based character bigram probabili- ties, regardless of word length. P(cl... cklk, <UNK>) P(Cll<bow> ) YI~=2 P(cilc,_~)P( <eow>lc~) (7) where <bow> and <eow> are special symbols that indicate the beginning and the end of a word. 3.2 Correction of Word Spelling Probabilities We find that Equation (7) assigns too little proba- bilities to long words (5 or more characters). This is because the lefthand side of Equation (7) represents the probability of the string cl ... Ck in the set of all strings whose length are k, while the righthand side represents the probability of the string in the set of all possible strings (from length zero to infinity). Let Pb(cz ...ck]<UNK>) be the probability of character string Cl...ck estimated from the char- acter bigram model. Pb(cl... ckI<UNK>) -- P(Cl]<bow>) 1-I~=2 P(c~lc,-1)P( <e°w>lck) (8) Let Pb (kl <UNK>) be the sum of the probabilities of all strings which are generated by the character bigram model and whose length are k. More appro- priate estimate for P(cl... cklk, <UNK>) is, P(cl... cklk, <UNK>) ~ Pb(cl ... ckI<UNK>) Pb(kI<UNK>) (9) But how can we estimate Pb(kI<UNK>)? It is difficult to compute it directly, but we can get a rea- sonable estimate by considering the unigram case. If strings are generated by the character unigram model, the sum of the probabilities of all length k strings equals to the probability of the event that the end of word symbol <eow> is selected after a character other than <eow> is selected k - 1 times. Pb(k[<UNK>) ~ (1 -P(<eow>))k-ZP(<eow>)(10) Throughout in this paper, we used Equation (9) to compute the word spelling probabilities. 3.3 Japanese Orthography and Word Length Distribution In word segmentation, one of the major problems of the word length model of Equation (6) is the decom- position of unknown words. When a substring of an unknown word coincides with other word in the dic- tionary, it is very likely to be decomposed into the dictionary word and the remaining substring. 
We find that the reason for the decomposition is that the word length model does not reflect the variation of the word length distribution resulting from the Japanese orthography.

[Figure 1: Word length distribution of unknown words and its estimate by a Poisson distribution (probabilities from raw counts of hapax words vs. Poisson estimates, over word lengths of 1-10 characters)]

[Figure 2: Word length distribution of kanji words and katakana words]

Figure 1 shows the word length distribution of infrequent words in the EDR corpus, and the estimate of the word length distribution by Equation (6), whose parameter ($\lambda = 4.8$) is the average word length of infrequent words. The empirical and the estimated distributions agree fairly well. But the estimates by Poisson are smaller than the empirical probabilities for shorter words (<= 4 characters), and larger for longer words (> 4 characters). This is because we represented all unknown words by one length model.
Among the words that are constituted by more than two character types, only the kanji-hiragana and hiragana-kanji sequences are morphemes and others are compound words in a strict sense although they part of speech character bigram frequency noun number adjectival-verb verb adjective adverb <eow> <bow> 1 <eow> ~'J <eow> b <eow> 0 <eow> 1343 484 327 213 69 63 are identified as words in the EDR corpus 3 Therefore, we classified Japanese words into 9 word types based on the character types that consti- tute a word: <sym>, <num>, <alpha>, <hira>, <kata>, and <kan> represent a sequence of sym- bols, numbers, alphabets, hiraganas, katakanas, and kanjis, respectively. <kan-hira> and <hira-kan> represent a sequence of kanjis followed by hiraganas and that of hiraganas followed by kanjis, respec- tively. The rest are classified as <misc>. The resulting unknown word model is as follows. We first select the word type, then we select the length and spelling. P(Cl ...ckI<UNK>) = P( <WT>I<UNK> )P(kI<WT> , dUNK>) P(cl... cklk, <WT>, <UNK>) (11) 3.4 Part of Speech and Word Morphology It is obvious that the beginnings and endings of words play an important role in tagging part of speech. Table 3 shows examples of common char- acter bigrams for each part of speech in the infre- quent words of the EDR corpus. The first example in Table 3 shows that words ending in ' --' are likely to be nouns. This symbol typically appears at the end of transliterated Western origin words written in katakana. It is natural to make a model for each part of speech. The resulting unknown word model is as follows. P(Cl .. • ck]<U-t>) = P(k]<U-t>)P(Cl... cklk, <U-t>) (12) By introducing the distinction of word type to the model of Equation (12), we can derive a more sophis- ticated unknown word model that reflects both word 3 When a Chinese character is used to represent a seman- tically equivalent Japanese verb, its root is written in the Chinese character and its inflectional suffix is written in hi- ragana. This results in kanji-hiragana sequence. When a Chinese character is too difficult to read, it is transliterated in hiragana. This results in either hiragana-kanji or kanji- hiragana sequence. 280 type and part of speech information. This is the un- known word model we propose in this paper. It first selects the word type given the part of speech, then the word length and spelling. P(cl... c l<U-t>) = P( <WT>I<U-t> )P(kI<WT>, <U-t>) P(Cl... cklk, <WT>, <U-t>) (13) Table 4: The amount of training and test sets sentences word tokens char tokens training set 100,000 2,460,188 3,897,718 test set-1 test set-2 100,000 5,000 2,465,441 122,064 3,906,260 192,818 The first factor in the righthand side of Equa- tion (13) is estimated from the relative frequency of the corresponding events in the training corpus. p(<WT>I<U_t> ) = C(<WT>, <U-t>) C(<U-t>) (14) Here, C(.) represents the counts in the corpus. To estimate the probabilities of the combinations of word type and part of speech that did not appeared in the training corpus, we used the Witten-Bell method (Witten and Bell, 1991) to obtain an esti- mate for the sum of the probabilities of unobserved events. We then redistributed this evenly among all unobserved events a The second factor of Equation (13) is estimated from the Poisson distribution whose parameter '~<WT>,<U-t> is the average length of words whose word type is <WT> and part of speech is <U-t>. P(kI<WT>, <U-t>) = ()~<WW>,<U-t>-l) u-1 e--(A<WW>,<U.t>-l) (15) (k-l)! 
If the combinations of word type and part of speech that did not appeared in the training corpus, we used the average word length of all words. To compute the third factor of Equation (13), we have to estimate the character bigram probabilities that are classified by word type and part of speech. Basically, they are estimated from the relative fre- quency of the character bigrams for each word type and part of speech. f(cilci-1, <WT>, <U-t>) = C(<WT>,<U-t>,ci_ 1 ,cl) C(<WT>,<U-t>,ci_l) (16) However, if we divide the corpus by the combina- tion of word type and part of speech, the amount of each training data becomes very small. Therefore, we linearly interpolated the following five probabili- ties (Jelinek and Mercer, 1980). P(c~lci_l, <WT>, <U-t>) = 4 The Witten-Bell method estimates the probability of ob- serving novel events to be r/(n+r), where n is the total num- ber of events seen previously, and r is the number of symbols that are distinct. The probability of the event observed c times is c/(n + r). oqf(ci, <WT>, <U-t>) +a2f(ci 1Ci-1, <WT>, <U-t>) +a3f(ci) + aaf(cilci_,) + ~5(1/V) (17) Where ~1+(~2+~3+cq+c~5 --- 1. f(ci, <WT>, <U-t>) and f(ci[ci-t, <WT>, <U-t>) are the relative frequen- cies of the character unigram and bigram for each word type and part of speech, f(ci) and f(cilci_l) are the relative frequencies of the character unigram and bigram. V is the number of characters (not to- kens but types) appeared in the corpus. 4 Experiments 4.1 Training and Test Data for the Language Model We used the EDR Japanese Corpus Version 1.0 (EDR, 1991) to train the language model. It is a manually word segmented and tagged corpus of ap- proximately 5.1 million words (208 thousand sen- tences). It contains a variety of Japanese sentences taken from newspapers, magazines, dictionaries, en- cyclopedias, textbooks, etc.. In this experiment, we randomly selected two sets of 100 thousand sentences. The first 100 thousand sentences are used for training the language model. The second 100 thousand sentences are used for test- ing. The remaining 8 thousand sentences are used as a heldout set for smoothing the parameters. For the evaluation of the word segmentation ac- curacy, we randomly selected 5 thousand sentences from the test set of 100 thousand sentences. We call the first test set (100 thousand sentences) "test set-l" and the second test set (5 thousand sentences) "test set-T'. Table 4 shows the number of sentences, words, and characters of the training and test sets. There were 94,680 distinct words in the training test. We discarded the words whose frequency was one, and made a dictionary of 45,027 words. Af- ter replacing the words whose frequency was one with the corresponding unknown word tags, there were 474,155 distinct word bigrams. We discarded the bigrams with frequency one, and the remaining 175,527 bigrams were used in the word segmentation model. As for the unknown word model, word-based char- acter bigrams are computed from the words with 281 Table 5: Cross entropy (CE) per word and character perplexity (PP) of each unknown word model unknown word model CE per word char PP Poisson+zerogram 59.4 2032 Poisson+bigram 37.8 128 WT+Poisson+bigram 33.3 71 frequency one (49,653 words). There were 3,120 dis- tinct character unigrams and 55,486 distinct char- acter bigrams. We discarded the bigram with fre- quency one and remaining 20,775 bigrams were used. 
There were 12,633 distinct character unigrams and 80,058 distinct character bigrams when we classified them for each word type and part of speech. We discarded the bigrams with frequency one and re- maining 26,633 bigrams were used in the unknown word model. Average word lengths for each word type and part of speech were also computed from the words with frequency one in the training set. 4.2 Cross Entropy and Perplexity Table 5 shows the cross entropy per word and char- acter perplexity of three unknown word model. The first model is Equation (5), which is the combina- tion of Poisson distribution and character zerogram (Poisson + zerogram). The second model is the combination of Poisson distribution (Equation (6)) and character bigram (Equation (7)) (Poisson + bi- gram). The third model is Equation (11), which is a set of word models trained for each word type (WT + Poisson + bigram). Cross entropy was computed over the words in test set-1 that were not found in the dictionary of the word segmentation model (56,121 words). Character perplexity is more intu- itive than cross entropy because it shows the average number of equally probable characters out of 6,879 characters in JIS-X-0208. Table 5 shows that by changing the word spelling model from zerogram to big-ram, character perplex- ity is greatly reduced. It also shows that by making a separate model for each word type, character per- plexity is reduced by an additional 45% (128 -~ 71). This shows that the word type information is useful for modeling the morphology of Japanese words. 4.3 Part of Speech Prediction Accuracy without Context Figure 3 shows the part of speech prediction accu- racy of two unknown word model without context. It shows the accuracies up to the top 10 candidates. The first model is Equation (12), which is a set of word models trained for each part of speech (POS + Poisson + bigram). The second model is Equa- tion (13), which is a set of word models trained for Part of Speech Estimation Accuracy 0.95 ~"~ ...... ~'**"" 0.9 /'"" 0.85 0.8 ~- / ~ + WT + Poisson + bigram -e-- N I// POS + Poisson + bigram --~--- 0.75 [/ 0.65 1 2 3 4 5 6 7 8 9 10 Rank Figure 3: Accuracy of part of speech estimation each part of speech and word type (POS + WT + Poisson + bigram). The test words are the same 56,121 words used to compute the cross entropy. Since these unknown word models give the prob- ability of spelling for each part of speech P(wlt), we used the empirical part of speech probability P(t) to compute the joint probability P(w, t). The part of speech t that gives the highest joint probability is selected. = argmtaxP(w,t ) = P(t)P(wlt ) (18) The part of speech prediction accuracy of the first and the second model was 67.5% and 74.4%, respec- tively. As Figure 3 shows, word type information improves the prediction accuracy significantly. 4.4 Word Segmentation Accuracy Word segmentation accuracy is expressed in terms of recall and precision as is done in the previous research (Sproat et al., 1996). Let the number of words in the manually segmented corpus be Std, the number of words in the output of the word segmenter be Sys, and the number of matched words be M. Recall is defined as M/Std, and precision is defined as M/Sys. Since it is inconvenient to use both recall and precision all the time, we also use the F-measure to indicate the overall performance. It is calculated by F= (f~2+l.0) xPxR f~2 x P + R (19) where P is precision, R is recall, and f~ is the relative importance given to recall over precision. 
We set 282 Table 6: Word segmentation accuracy of all words rec prec F Poisson+bigram 94.5 93.1 93.8 WT+Poisson+bigram 94.4 93.8 94.1 POS+Poisson+bigram 94.4 93.6 94.0 POS+WT+Poisson+bigram 94.6 93.7 94.1 Table 7: Word segmentation accuracy of unknown words 64.1%. Other than the usual recall/precision measures, we defined another precision (prec2 in Table 8), which roughly correspond to the tagging accuracy in English where word segmentation is trivial. Prec2 is defined as the percentage of correctly tagged un- known words to the correctly segmented unknown words. Table 8 shows that tagging precision is im- proved from 88.2% to 96.6%. The tagging accuracy in context (96.6%) is significantly higher than that without context (74.4%). This shows that the word bigrams using unknown word tags for each part of speech are useful to predict the part of speech. rec prec F Poisson + bigram 31.8 65.0 42.7 WT+Poisson+bigram 45.5 62.0 52.5 POS+Poisson+bigram 39.7 61.5 48.3 POS+WT+Poisson+bigram 42.0 66.4 51.4 f~ = 1.0 throughout this experiment. That is, we put equal importance on recall and precision. Table 6 shows the word segmentation accuracy of four unknown word models over test set-2. Com- pared to the baseline model (Poisson + bigram), by using word type and part of speech information, the precision of the proposed model (POS + WT + Pois- son + bigram) is improved by a modest 0.6%. The impact of the proposed model is small because the out-of-vocabulary rate of test set-2 is only 3.1%. To closely investigate the effect of the proposed unknown word model, we computed the word seg- mentation accuracy of unknown words. Table 7 shows the results. The accuracy of the proposed model (POS + WT + Poisson + bigram) is signif- icantly higher than the baseline model (Poisson + bigram). Recall is improved from 31.8% to 42.0% and precision is improved from 65.0% to 66.4%. Here, recall is the percentage of correctly seg- mented unknown words in the system output to the all unknown words in the test sentences. Precision is the percentage of correctly segmented unknown words in the system's output to the all words that system identified as unknown words. Table 8 shows the tagging accuracy of unknown words. Notice that the baseline model (Poisson + bigram) cannot predict part of speech. To roughly estimate the amount of improvement brought by the proposed model, we applied a simple tagging strat- egy to the output of the baseline model. That is, words that include numbers are tagged as numbers, and others are tagged as nouns. Table 8 shows that by using word type and part of speech information, recall is improved from 28.1% to 40.6% and precision is improved from 57.3% to 5 Related Work Since English uses spaces between words, unknown words can be identified by simple dictionary lookup. So the topic of interest is part of speech estimation. Some statistical model to estimate the part of speech of unknown words from the case of the first letter and the prefix and suffix is proposed (Weischedel et al., 1993; Brill, 1995; Ratnaparkhi, 1996; Mikheev, 1997). On the contrary, since Asian languages like Japanese and Chinese do not put spaces between words, previous work on unknown word problem is focused on word segmentation; there are few studies estimating part of speech of unknown words in Asian languages. 
The cues used for estimating the part of speech of unknown words for Japanese in this paper are ba- sically the same for English, namely, the prefix and suffix of the unknown word as well as the previous and following part of speech. The contribution of this paper is in showing the fact that different char- acter sets behave differently in Japanese and a better word model can be made by using this fact. By introducing different length models based on character sets, the number of decomposition errors of unknown words are significantly reduced. In other words, the tendency of over-segmentation is cor- rected. However, the spelling model, especially the character bigrams in Equation (17) are hard to es- timate because of the data sparseness. This is the main reason of the remaining under-segmented and over-segmented errors. To improve the unknown word model, feature- based approach such as the maximum entropy method (Ratnaparkhi, 1996) might be useful, be- cause we don't have to divide the training data into several disjoint sets (like we did by part of speech and word type) and we can incorporate more lin- guistic and morphological knowledge into the same probabilistic framework. We are thinking of re- implementing our unknown word model using the maximum entropy method as the next step of our research. 283 Table 8: Part of speech tagging accuracy of unknown words (the last column represents the percentage of correctly tagged unknown words in the correctly segmented unknown words) rec prec F prec2 Poisson+bigram 28.1 57.3 37.7 88.2 WT+Poisson+bigram 37.7 51.5 43.5 87.9 POS+Poisson+bigram 37.5 58.1 45.6 94.3 POS+WT+Poisson+bigram 40.6 64.1 49.7 96.6 6 Conclusion We present a statistical model of Japanese unknown words using word morphology and word context. We find that Japanese words are better modeled by clas- sifying words based on the character sets (kanji, hi- ragana, katakana, etc.) and its changes. This is because the different character sets behave differ- ently in many ways (historical etymology, ideogram vs. phonogram, etc.). Both word segmentation ac- curacy and part of speech tagging accuracy are im- proved by treating them differently. References Harald Baayen and Richard Sproat. 1996. Estimat- ing lexical priors for low-frequency morphologi- cally ambiguous forms. Computational Linguis- tics, 22(2):155-166. Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, Jennifer C. Lal, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguis- tics, 18(1):31-40. Jing-Shin Chang, Yi-Chung Lin, and Keh-Yih Su. 1995. Automatic construction of a Chinese elec- tronic dictionary. In Proceedings of the Third Workshop on Very Large Corpora, pages 107-120. EDR. 1991. EDR electronic dictionary version 1 technical guide. Technical Report TR2-003, Japan Electronic Dictionary Research Institute. Pascale Fung and Dekai Wu. 1994. Statistical aug- mentation of a Chinese machine-readable dictio- nary. In Proceedings of the Second Workshop on Very Large Corpora, pages 69-85. Masahiko Haruno and Yuji Matsumoto. 1997. Mistake-driven mixture of hierachical tag context trees. In Proceedings of the 35th ACL and 8th EA CL, pages ~ 230-237. F. Jelinek and R. L. Mercer. 1980. Interpolated esti- mation of Markov source parameters from sparse data. 
In Proceedings of the Workshop on Pattern Recognition in Practice, pages 381-397.
Andrei Mikheev. 1997. Automatic rule induction for unknown-word guessing. Computational Linguistics, 23(3):405-423.
Shinsuke Mori and Makoto Nagao. 1996. Word extraction from corpora and its part-of-speech estimation using distributional analysis. In Proceedings of the 16th International Conference on Computational Linguistics, pages 1119-1122.
Masaaki Nagata. 1994. A stochastic Japanese morphological analyzer using a forward-DP backward-A* N-best search algorithm. In Proceedings of the 15th International Conference on Computational Linguistics, pages 201-207.
Masaaki Nagata. 1996. Context-based spelling correction for Japanese OCR. In Proceedings of the 16th International Conference on Computational Linguistics, pages 806-811.
Adwait Ratnaparkhi. 1996. A maximum entropy model for part-of-speech tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 133-142.
Richard Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377-404.
Koichi Takeuchi and Yuji Matsumoto. 1997. HMM parameter learning for Japanese morphological analyzer. Transactions of the Information Processing Society of Japan, 38(3):500-509. (In Japanese.)
Ralph Weischedel, Marie Meteer, Richard Schwartz, Lance Ramshaw, and Jeff Palmucci. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19(2):359-382.
Ian H. Witten and Timothy C. Bell. 1991. The zero-frequency problem: Estimating the probabilities of novel events in adaptive text compression. IEEE Transactions on Information Theory, 37(4):1085-1094.
Mikio Yamamoto. 1996. A re-estimation method for stochastic language modeling from ambiguous observations. In Proceedings of the Fourth Workshop on Very Large Corpora, pages 155-167.
1999
36
Memory-Based Morphological Analysis

Antal van den Bosch and Walter Daelemans
ILK / Computational Linguistics, Tilburg University
{antalb,walter}@kub.nl

Abstract
We present a general architecture for efficient and deterministic morphological analysis based on memory-based learning, and apply it to the morphological analysis of Dutch. The system makes direct mappings from letters in context to rich categories that encode morphological boundaries, syntactic class labels, and spelling changes. Both precision and recall of labeled morphemes are over 84% on held-out dictionary test words and estimated to be over 93% in free text.

1 Introduction
Morphological analysis is an essential component in language engineering applications ranging from spelling error correction to machine translation. Performing a full morphological analysis of a wordform is usually regarded as a segmentation of the word into morphemes, combined with an analysis of the interaction of these morphemes that determines the syntactic class of the wordform as a whole. The complexity of wordform morphology varies widely among the world's languages, but is regarded as quite high even in relatively simple cases, such as English. Many wordforms in English and other western languages contain ambiguities in their morphological composition that can be quite intricate. General classes of linguistic knowledge that are usually assumed to play a role in this disambiguation process are knowledge of (i) the morphemes of a language, (ii) the morphotactics, i.e., constraints on how morphemes are allowed to attach, and (iii) spelling changes that can occur due to morpheme attachment.
State-of-the-art systems for morphological analysis of wordforms are usually based on two-level finite-state transducers (FSTs, Koskenniemi (1983)). Even with the availability of sophisticated development tools, the cost and complexity of hand-crafting two-level rules is high, and the representation of concatenative compound morphology with continuation lexicons is difficult. As in parsing, there is a trade-off between coverage and spurious ambiguity in these systems: the more sophisticated the rules become, the more needless ambiguity they introduce.
In this paper we present a learning approach which models morphological analysis (including compounding) of complex wordforms as sequences of classification tasks. Our model, MBMA (Memory-Based Morphological Analysis), is a memory-based learning system (Stanfill and Waltz, 1986; Daelemans et al., 1997). Memory-based learning is a class of inductive, supervised machine learning algorithms that learn by storing examples of a task in memory. Computational effort is invested on a "call-by-need" basis for solving new examples (henceforth called instances) of the same task. When new instances are presented to a memory-based learner, it searches for the best-matching instances in memory, according to a task-dependent similarity metric. When it has found the best matches (the nearest neighbors), it transfers their solution (classification, label) to the new instance. Memory-based learning has been shown to be quite adequate for various natural-language processing tasks such as stress assignment (Daelemans et al., 1994), grapheme-phoneme conversion (Daelemans and Van den Bosch, 1996; Van den Bosch, 1997), and part-of-speech tagging (Daelemans et al., 1996b).
The paper is structured as follows. First, we give a brief overview of Dutch morphology in Section 2.
We then turn to a description of MBMA in Section 3. In Section 4 we present the experimental outcomes of our study with MBMA. Section 5 summarizes our findings, reports briefly on a partial study of English showing that the approach is applicable to other languages, and lists our conclusions.

2 Dutch Morphology

The processes of Dutch morphology include inflection, derivation, and compounding. Inflection of verbs, adjectives, and nouns is mostly achieved by suffixation, but a circumfix also occurs in the Dutch past participle (e.g. ge+werk+t as the past participle of the verb werken, to work). Irregular inflectional morphology is due to relics of ablaut (vowel change) and to suppletion (mixing of different roots in inflectional paradigms). Processes of derivation in Dutch morphology occur by means of prefixation and suffixation. Derivation can change the syntactic class of wordforms. Compounding in Dutch is concatenative (as in German and Scandinavian languages): words can be strung together almost unlimitedly, with only a few morphotactic constraints, e.g., rechtsinformaticatoepassingen (applications of computer science in law). In general, a complex wordform inherits its syntactic properties from its right-most part (the head). Several spelling changes occur: apart from the closed set of spelling changes due to irregular morphology, a number of spelling changes is predictably due to morphological context. The spelling of long vowels varies between double and single (e.g. ik loop, I run, versus wij lop+en, we run); the spelling of root-final consonants can be doubled (e.g. ik stop, I stop, versus wij stopp+en, we stop); and there is variation between s and z and between f and v (e.g. huis, house, versus huizen, houses). Finally, between the parts of a compound, a linking morpheme may appear (e.g. staat+s+loterij, state lottery). For a detailed discussion of morphological phenomena in Dutch, see De Haas and Trommelen (1993). Previous approaches to Dutch morphological analysis have been based on finite-state transducers (e.g., Xerox's morphological analyzer), or on parsing with context-free word grammars interleaved with exploration of possible spelling changes (e.g. Heemskerk and van Heuven (1993); or see Heemskerk (1993) for a probabilistic variant).

3 Applying memory-based learning to morphological analysis

Most linguistic problems can be seen as context-sensitive mappings from one representation to another (e.g., from text to speech; from a sequence of spelling words to a parse tree; from a parse tree to logical form; from source language to target language, etc.) (Daelemans, 1995). This is also the case for morphological analysis. Memory-based learning algorithms can learn mappings (classifications) if a sufficient number of instances of these mappings is presented to them.

We drew our instances from the CELEX lexical data base (Baayen et al., 1993). CELEX contains a large lexical data base of Dutch wordforms, and features a full morphological analysis for 247,415 of them. We took each wordform and its associated analysis, and created task instances using a windowing method (Sejnowski and Rosenberg, 1987). Windowing transforms each wordform into as many instances as it has letters. Each instance focuses on one letter, and includes a fixed number of left and right neighbor letters, chosen here to be five. Consequently, each instance spans eleven letters, which is also the average word length in the CELEX data base.
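To make the windowing step concrete, the following is a minimal sketch of how such instances could be generated. The function name, the "_" padding symbol, and the label list are our own illustrative choices; only the window of five left and five right letters follows the text, and the per-letter classes are those shown in Table 1 below.

```python
def windowed_instances(word, labels, context=5):
    """Turn a wordform into one instance per letter.

    `labels` is assumed to hold one class per letter: the full MBMA
    label at each morpheme-initial letter and "0" elsewhere.
    """
    pad = "_" * context            # out-of-word positions, as in Table 1
    padded = pad + word + pad
    instances = []
    for i, focus in enumerate(word):
        left = padded[i:i + context]
        right = padded[i + context + 1:i + 2 * context + 1]
        instances.append((left, focus, right, labels[i]))
    return instances

# 15 letters -> 15 instances for "abnormaliteiten"
labels = ["A+Da"] + ["0"] * 7 + ["N_A,"] + ["0"] * 4 + ["m"] + ["0"]
for inst in windowed_instances("abnormaliteiten", labels)[:3]:
    print(inst)
```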
Moreover, we estimated from exploratory data analysis that this context would contain enough information to allow for adequate disambiguation.

To illustrate the construction of instances, Table 1 displays the 15 instances derived from the Dutch example word abnormaliteiten (abnormalities) and their associated classes. The class of the first instance is "A+Da", which says that (i) the morpheme starting in a is an adjective ("A")¹, and (ii) an a was deleted at its end ("+Da"). The coding thus tells us that the first morpheme is the adjective abnormaal. The second morpheme, iteit, has class "N_A,". This complex tag indicates that when iteit attaches to the right of an adjective (encoded by "A,"), the new combination becomes a noun ("N_"). Finally, the third morpheme is en, which is a plural inflection (labeled "m" in CELEX).

    instance  left context   focus  right context  TASK
     1        _ _ _ _ _      a      b n o r m      A+Da
     2        _ _ _ _ a      b      n o r m a      0
     3        _ _ _ a b      n      o r m a l      0
     4        _ _ a b n      o      r m a l i      0
     5        _ a b n o      r      m a l i t      0
     6        a b n o r      m      a l i t e      0
     7        b n o r m      a      l i t e i      0
     8        n o r m a      l      i t e i t      0
     9        o r m a l      i      t e i t e      N_A,
    10        r m a l i      t      e i t e n      0
    11        m a l i t      e      i t e n _      0
    12        a l i t e      i      t e n _ _      0
    13        l i t e i      t      e n _ _ _      0
    14        i t e i t      e      n _ _ _ _      m
    15        t e i t e      n      _ _ _ _ _      0

Table 1: Instances with morphological analysis classifications derived from abnormaliteiten, analyzed as [abnormaal]A[iteit]N_A,[en]m.

¹ CELEX features the following syntactic tags: noun (N), adjective (A), quantifier/numeral (Q), verb (V), article (D), pronoun (O), adverb (B), preposition (P), conjunction (C), interjection (J), and abbreviation (X).

This way we generated an instance base of 2,727,462 instances. Within these instances, 2422 different class labels occur. The most frequently occurring class label is "0", occurring in 72.5% of all instances. The three most frequent non-null labels are "N" (6.9%), "V" (3.6%), and "m" (1.6%). Most class labels combine a syntactic or inflectional tag with a spelling change, and generally have a low frequency.

When a wordform is listed in CELEX as having more than one possible morphological labeling (e.g., a morpheme may be N or V; the inflection -en may be plural for nouns or infinitive for verbs), these labels are joined into ambiguous classes ("N/V") and the first generated example is labeled with this ambiguous class. Ambiguity in syntactic and inflectional tags occurs in 3.6% of all morphemes in our CELEX data.

The memory-based learning algorithm used within MBMA is IB1-IG (Daelemans and Van den Bosch, 1992; Daelemans et al., 1997), an extension of IB1 (Aha et al., 1991). IB1-IG constructs a data base of instances in memory during learning. New instances are classified by IB1-IG by matching them to all instances in the instance base, and calculating for each match the distance between the new instance X and the memory instance Y,

    Δ(X, Y) = Σ_{i=1..n} W(f_i) · δ(x_i, y_i),

where W(f_i) is the weight of the ith feature, and δ(x_i, y_i) is the distance between the values of the ith feature in instances X and Y. When the values of the instance features are symbolic, as with our linguistic tasks, the simple overlap distance function δ is used: δ(x_i, y_i) = 0 if x_i = y_i, else 1. The (most frequently occurring) classification of the memory instance Y with the smallest Δ(X, Y) is then taken as the classification of X.

The weighting function W(f_i) computes for each feature, over the full instance base, its information gain, a notion from information theory; cf. Quinlan (1986). In short, the information gain of a feature expresses its relative importance compared to the other features in performing the mapping from input to classification. When information gain is used in the similarity function, instances that match on important features are regarded as more alike than instances that match on unimportant features.

In our experiments, we are primarily interested in the generalization accuracy of trained models, i.e., the ability of these models to use their accumulated knowledge to classify new instances that were not in the training material.
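A minimal sketch of this classification step, assuming the formula above: information-gain feature weights plus weighted-overlap nearest-neighbor lookup. All names are ours; the gain computation is the textbook definition; and the brute-force search stands in for the optimized retrieval used in practice (e.g., IGTree, cited in the references). Ties between equidistant neighbors are broken arbitrarily here, whereas the paper takes the most frequent class among the nearest neighbors.

```python
import math
from collections import Counter, defaultdict

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(instances, labels, i):
    # H(C) minus the expected entropy after splitting on feature i
    by_value = defaultdict(list)
    for inst, lab in zip(instances, labels):
        by_value[inst[i]].append(lab)
    rest = sum(len(ls) / len(labels) * entropy(ls) for ls in by_value.values())
    return entropy(labels) - rest

def ib1_ig_classify(train, train_labels, x):
    n_feat = len(x)
    w = [information_gain(train, train_labels, i) for i in range(n_feat)]
    def dist(y):
        # weighted overlap: delta = 0 on a match, 1 on a mismatch
        return sum(w[i] for i in range(n_feat) if x[i] != y[i])
    best = min(range(len(train)), key=lambda j: dist(train[j]))
    return train_labels[best]
```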
A method that gives a good estimate of the generalization performance of an algorithm on a given instance base is 10-fold cross-validation (Weiss and Kulikowski, 1991). This method generates, on the basis of an instance base, 10 subsequent partitionings into a training set (90%) and a test set (10%), resulting in 10 experiments.

4 Experiments: MBMA of Dutch wordforms

As described, we performed 10-fold cross-validation experiments in an experimental matrix in which MBMA is applied to the full instance base, using a context width of five left and five right context letters. We structure the presentation of the experimental outcomes as follows. First, we give the generalization accuracies on test instances and test words obtained in the experiments, including measurements of generalization accuracy when class labels are interpreted at lower levels of granularity. While the latter measures give a rough idea of system accuracy, more insight is provided by two additional analyses. First, precision and recall rates of morphemes are given. We then provide prediction accuracies of syntactic word classes. Finally, we provide estimations of free-text accuracies.

4.1 Generalization accuracies

The percentages of correctly classified test instances are displayed in the top line of Table 2, showing an error on test instances of about 4.1% (markedly better than the baseline error of 27.5% obtained by always guessing the most frequent class "0"), which translates into an error at the word level of about 35%.

The output of MBMA can also be viewed at lower levels of granularity. We have analyzed MBMA's output at the three following lower granularity levels:

1. Only decide, per letter, whether a segmentation occurs at that letter, and if so, whether it marks the start of a derivational stem or an inflection. This can be derived straightforwardly from the full-task class labeling.

2. Only decide, per letter, whether a segmentation occurs at that letter. Again, this can be derived straightforwardly. This task implements segmentation of a complex wordform into morphemes.

3. Only check whether the desired spelling change is predicted correctly. Because of the irregularity of many spelling changes this is a hard task.

The results from these analyses are displayed in Table 2 under the top line. First, Table 2 shows that the error on the lower-granularity tasks that exclude detailed syntactic labeling and spelling-change prediction is about 1.1% on test instances, and roughly 10% on test words. Second, making the distinction between inflections and other morphemes is almost as easy as just determining whether there is a boundary at all. Third, the relatively low score on correctly predicted spelling changes, 80.95%, indicates that it is particularly hard to generalize from stored instances of spelling changes to new ones. This is in accordance with the common linguistic view on spelling-change exceptions.
When, for instance, a past-tense form of a verb involves a real exception (e.g., the past tense of Dutch brengen, to bring, is bracht), it is often the case that this exception generalizes only to a few other forms of the same verb (brachten, gebracht) and not to any other word that is not derived from the same stem, while the memory-based learning approach is not aware of such constraints. A post-processing step that checks whether the proposed morphemes are also listed in a morpheme lexicon would correct many of these errors, but has not been included here.

4.2 Precision and recall of morphemes

Precision is the percentage of morphemes predicted by MBMA that are actually morphemes in the target analysis; recall is the percentage of morphemes in the target analysis that are also predicted by MBMA. Precision and recall of morphemes can again be computed at different levels of granularity. Table 3 displays these computed values. The results show that both precision and recall of fully-labeled morphemes within test words are relatively low. It comes as no surprise that the level of 84% recalled fully-labeled morphemes, including spelling information, is not much higher than the level of 80% correctly recalled spelling changes (see Table 2). When word-class information, type of inflection, and spelling changes are discarded, precision and recall of basic segment types become quite accurate: over 94%.

                                                                 instances        words
    class labeling granularity   labeling example                %      ±        %      ±
    full morphological analysis  [abnormaal]A[iteit]N_A,[en]m    95.88  0.04     64.63  0.24
    derivation/inflection        [abnormal]deriv[iteit]deriv[en]infl
                                                                 98.83  0.02     89.62  0.17
    segmentation                 [abnormal][iteit][en]           98.97  0.02     90.69  0.02
    spelling changes             +Da                             80.95  0.40

Table 2: Generalization accuracies in terms of the percentage of correctly classified test instances and words, with standard deviations (±), of MBMA applied to full Dutch morphological analysis and three lower-granularity tasks derived from MBMA's full output. The example word abnormaliteiten is shown according to the different labeling granularities (and only its single spelling change on the bottom line).

    task variation                precision (%)   recall (%)
    full morphological analysis   84.33           83.76
    derivation/inflection         94.72           94.07
    segmentation                  94.83           94.18

Table 3: Precision and recall of morphemes, derived from the classification output of MBMA applied to the full task and two lower-granularity variations of Dutch morphological analysis, using a context width of five left and right letters.

4.3 Predicting the syntactic class of wordforms

Since MBMA predicts the syntactic label of morphemes, and since complex Dutch wordforms generally inherit their syntactic properties from their right-most morpheme, MBMA's syntactic labeling can be used to predict the syntactic class of the full wordform. When accurate, this functionality can be an asset in handling unknown words in part-of-speech tagging systems. The results, displayed in Table 4, show that about 91.2% of all test words are assigned the exact tag they also have in CELEX (including ambiguous tags such as "N/V"; 1.3% of wordforms in the CELEX dataset have an ambiguous syntactic tag). When MBMA's output is also considered correct if it predicts at least one of the possible tags listed in CELEX, the accuracy on test words is 91.6%.
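Read concretely, the word-class prediction amounts to scanning the analysis from the right for the first derivational tag. The sketch below is our own reconstruction, not code from the paper; in particular, the lower-case test for inflectional labels and the treatment of complex tags like "N_A," are assumptions.

```python
def wordform_tag(analysis):
    """Predict a word's syntactic class from its rightmost tagged morpheme.

    `analysis` is a list of (morpheme, label) pairs as output by MBMA.
    Inflectional labels are assumed to be lower-case (CELEX-style, e.g.
    "m") and are skipped, since Dutch wordforms inherit their class from
    the rightmost derivational head; this test is our simplification.
    """
    for morpheme, label in reversed(analysis):
        if label[0].isupper():          # derivational tag, possibly "N/V"
            return label.split("_")[0]  # "N_A," -> "N"
    return None

print(wordform_tag([("abnormaal", "A"), ("iteit", "N_A,"), ("en", "m")]))
# -> "N": abnormaliteiten is a noun
```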
These accuracies compare favorably with a related (yet strictly incomparable) approach that predicts the word class from the (ambiguous) part-of-speech tags of the two surrounding words, the first letter, and the final three letters of Dutch words, viz. 71.6% on unknown words in texts (Daelemans et al., 1996a).

    syntactic class prediction     correct test words (%)   ±
    exact                          91.24                    0.21
    exact or among alternatives    91.60                    0.21

Table 4: Average prediction accuracies (with standard deviations) of MBMA on syntactic classes of test words. The top line displays exact matches with CELEX tags; the bottom line also includes predictions that are among CELEX alternatives.

4.4 Free text estimation

Although some of the above-mentioned accuracy results, especially the precision and recall of fully-labeled morphemes, seem not very high, they should be seen in the context of the test they are derived from: they stem from held-out portions of dictionary words. In texts sampled from real-life usage, words are typically shorter and morphologically less complex, and a relatively small set of words re-occurs very often. It is therefore relevant for our study to have an estimate of the performance of MBMA on real texts. We generate such an estimate following these considerations: new, unseen text is bound to contain many words that are in the 245,000-word CELEX data base, but also some number of unknown words. The morphological analyses of known words are simply retrieved by the memory-based learner from memory. Due to some ambiguity in the class labeling in the data base itself, retrieval accuracy will be somewhat below 100%. The morphological analyses of unknown words are assumed to be as accurate as was tested in the above-mentioned experiments: they can be said to be of the type of dictionary words in the 10% held-out test sets of the 10-fold cross-validation experiments. CELEX bases its wordform frequency information on word counts made on the 42,380,000-word Dutch INL corpus, and 5.06% of these wordforms are tokens that occur only once. We assume that this can be extrapolated to the estimate that in real texts, 5% of the words do not occur in the 245,000 words of the CELEX data base. Therefore, a sensible estimate of the accuracy of memory-based learners on real text is a weighted sum comprised of 95% of the reproduction accuracy (i.e., the accuracy on the training set itself) and 5% of the generalization accuracy as reported earlier.
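The weighted-sum estimate is simple arithmetic, sketched below. The 85% reproduction figure in the usage line is not reported in the paper; it is back-solved from the estimated 84.2% of Table 5 and the 64.63% held-out word accuracy of Table 2.

```python
def free_text_estimate(reproduction_acc, generalization_acc,
                       known_fraction=0.95):
    """Weighted estimate of accuracy on running text.

    known_fraction: share of text tokens assumed to be in CELEX (95%),
    derived from the 5.06% hapax rate in the INL corpus.
    """
    return (known_fraction * reproduction_acc
            + (1 - known_fraction) * generalization_acc)

# Back-solving from Table 5 (84.2% estimated) and Table 2 (64.63% on
# held-out words) suggests a full-task word-level reproduction accuracy
# of roughly 85%:
print(free_text_estimate(0.852, 0.6463))  # ~0.842
```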
Table 5 summarizes the estimated generalization accuracy results computed on the results of MBMA. First, the percentages of correct instances and words are estimated to be above 98% for the full task; in terms of words, it is estimated that 84% of all words are fully correctly analyzed. When lower-granularity classification tasks are discerned, accuracies on words are estimated to exceed 96% (on instances, fewer than 1% errors are estimated). Moreover, precision and recall of morphemes on the full task are estimated to be above 93%. A considerable surplus is obtained by memory retrieval in the estimated percentage of correct spelling changes: 93%. Finally, the prediction of the syntactic tags of wordforms would be about 97% correct according to this estimate.

We briefly note that Heemskerk (1993) reports a correct word score of 92% on free-text test material yielded by the probabilistic morphological analyzer MORPA. MORPA segments wordforms, decides whether a morpheme is a stem, an affix, or an inflection, detects spelling changes, and assigns a syntactic tag to the wordform. We have not made a conversion of our output to Heemskerk's (1993) format. Moreover, a proper comparison would demand the same test data, but we believe that the 92% corresponds roughly to our MBMA estimates of 97.2% correct syntactic tags, 93.1% correct spelling changes, and 96.7% correctly segmented words.

                                              Estimate
    correct instances, full task              98.4%
    correct words, full task                  84.2%
    correct instances, derivation/inflection  99.6%
    correct words, derivation/inflection      96.7%
    correct instances, segmentation           99.6%
    correct words, segmentation               96.7%
    precision of fully-labeled morphemes      93.6%
    recall of fully-labeled morphemes         93.2%
    precision of deriv./infl. morphemes       98.5%
    recall of deriv./infl. morphemes          98.0%
    precision of segments                     98.5%
    recall of segments                        97.9%
    correct spelling changes                  93.1%
    correct syntactic wordform tags           97.2%

Table 5: Estimations of accuracies on real text, derived from the generalization accuracies of MBMA on full Dutch morphological analysis.

5 Conclusions

We have demonstrated the applicability of memory-based learning to morphological analysis, by reformulating the problem as a classification task in which letter sequences are classified as marking different types of morpheme boundaries. The generalization performance of memory-based learning algorithms on the task is encouraging, given that the tests are done on held-out (dictionary) words. Estimates of free-text performance give indications of high accuracies: 84.6% correct fully-analyzed words (64.6% on unseen words), and 96.7% correctly segmented and coarsely-labeled words (about 90% for unseen words). Precision and recall of fully-labeled morphemes is estimated in real texts to be over 93% (about 84% for unseen words). Finally, the prediction of (possibly ambiguous) syntactic classes of unknown wordforms in the test material was shown to be 91.2% correct; the corresponding free-text estimate is 97.2% correctly-tagged wordforms.

In comparison with the traditional approach, which is not immune to costly hand-crafting and spurious ambiguity, the memory-based learning approach applied to a reformulation of the problem as a classification task of the segmentation type has a number of advantages:

• it presupposes no more linguistic knowledge than is explicitly present in the corpus used for training, i.e., it avoids a knowledge-acquisition bottleneck;

• it is language-independent, as it functions on any morphologically analyzed corpus in any language;

• learning is automatic and fast;

• processing is deterministic, non-recurrent (i.e., it does not retry analysis generation) and fast, and is only linearly related to the length of the wordform being processed.

The language-independence of the approach can be illustrated by means of the following partial results on MBMA of English. We performed experiments on 75,745 English wordforms from CELEX and predicted the lower-granularity tasks of predicting morpheme boundaries (Van den Bosch et al., 1996). Experiments yielded 88.0% correctly segmented test words when deciding only on the location of morpheme boundaries, and 85.6% correctly segmented test words discerning between derivational and inflectional morphemes. Both results are roughly comparable to the 90% reported here (but note the difference in training set size).
A possible limitation of the approach may be the fact that it cannot return more than one possible segmentation for a wordform. E.g., the compound word kwartslagen can be interpreted as either kwart+slagen (quarter turns) or kwarts+lagen (quartz layers). The memory-based approach would select one segmentation. However, true segmentation ambiguity of this type is very rare in Dutch. Labeling ambiguity occurs more often (3.6% of all morphemes), and the current approach simply produces ambiguous tags. However, it is possible for our approach to return distributions of possible classes, if desired, and it is also possible to "unpack" ambiguous labelings into lists of possible morphological analyses of a wordform. If, for example, MBMA's output for the word bakken (bake, an infinitive or plural verb form, or bins, a plural noun) were [bak]V/N[en]tm/i/m, then this output could be expanded unambiguously into the noun analysis [bak]N[en]m (plural) and the two verb readings [bak]V[en]i (infinitive) and [bak]V[en]tm (present tense plural).

Points of future research are comparisons with other morphological analyzers and lemmatizers; applications of MBMA to other languages (particularly those with radically different morphologies); and qualitative analyses of MBMA's output in relation to linguistic predictions of errors and markedness of exceptions.

Acknowledgements

This research was done in the context of the "Induction of Linguistic Knowledge" (ILK) research programme, supported partially by the Netherlands Organization for Scientific Research (NWO). The authors wish to thank Ton Weijters and the members of the Tilburg ILK group for stimulating discussions. A demonstration version of the morphological analysis system for Dutch is available via ILK's homepage http://ilk.kub.nl.

References

D. W. Aha, D. Kibler, and M. Albert. 1991. Instance-based learning algorithms. Machine Learning, 6:37-66.

R. H. Baayen, R. Piepenbrock, and H. van Rijn. 1993. The CELEX lexical data base on CD-ROM. Linguistic Data Consortium, Philadelphia, PA.

W. Daelemans and A. Van den Bosch. 1992. Generalisation performance of backpropagation learning on a syllabification task. In M. F. J. Drossaers and A. Nijholt, editors, Proc. of TWLT3: Connectionism and Natural Language Processing, pages 27-37, Enschede. Twente University.

W. Daelemans and A. Van den Bosch. 1996. Language-independent data-oriented grapheme-to-phoneme conversion. In J. P. H. Van Santen, R. W. Sproat, J. P. Olive, and J. Hirschberg, editors, Progress in Speech Processing, pages 77-89. Springer-Verlag, Berlin.

W. Daelemans, S. Gillis, and G. Durieux. 1994. The acquisition of stress: a data-oriented approach. Computational Linguistics, 20(3):421-451.

W. Daelemans, J. Zavrel, and P. Berck. 1996a. Part-of-speech tagging for Dutch with MBT, a memory-based tagger generator. In K. van der Meer, editor, Informatiewetenschap 1996, Wetenschappelijke bijdrage aan de Vierde Interdisciplinaire Onderzoeksconferentie Informatiewetenschap, pages 33-40, The Netherlands. TU Delft.

W. Daelemans, J. Zavrel, P. Berck, and S. Gillis. 1996b. MBT: A memory-based part of speech tagger generator. In E. Ejerhed and I. Dagan, editors, Proc. of Fourth Workshop on Very Large Corpora, pages 14-27. ACL SIGDAT.

W. Daelemans, A. Van den Bosch, and A. Weijters. 1997. IGTree: using trees for compression and classification in lazy learning algorithms. Artificial Intelligence Review, 11:407-423.

W. Daelemans. 1995.
Memory-based lexical acquisition and processing. In P. Steffens, editor, Machine Translation and the Lexicon, Lecture Notes in Artificial Intelligence, pages 85-98. Springer-Verlag, Berlin.

W. De Haas and M. Trommelen. 1993. Morfologisch handboek van het Nederlands: Een overzicht van de woordvorming. SDU, 's-Gravenhage, The Netherlands.

J. Heemskerk and V. van Heuven. 1993. MORPA: A morpheme lexicon-based morphological parser. In V. van Heuven and L. Pols, editors, Analysis and synthesis of speech; Strategic research towards high-quality speech generation, pages 67-85. Mouton de Gruyter, Berlin.

J. Heemskerk. 1993. A probabilistic context-free grammar for disambiguation in morphological parsing. In Proceedings of the 6th Conference of the EACL, pages 183-192.

K. Koskenniemi. 1983. Two-level morphology: a general computational model for word-form recognition and production. Ph.D. thesis, University of Helsinki.

J. R. Quinlan. 1986. Induction of decision trees. Machine Learning, 1:81-106.

T. J. Sejnowski and C. S. Rosenberg. 1987. Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168.

C. Stanfill and D. Waltz. 1986. Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1228, December.

A. Van den Bosch, W. Daelemans, and A. Weijters. 1996. Morphological analysis as classification: an inductive-learning approach. In K. Oflazer and H. Somers, editors, Proceedings of the Second International Conference on New Methods in Natural Language Processing, NeMLaP-2, Ankara, Turkey, pages 79-89.

A. Van den Bosch. 1997. Learning to pronounce written words: A study in inductive language learning. Ph.D. thesis, Universiteit Maastricht.

S. Weiss and C. Kulikowski. 1991. Computer systems that learn. San Mateo, CA: Morgan Kaufmann.
Two Accounts of Scope Availability and Semantic Underspecification

Alistair Willis and Suresh Manandhar
Department of Computer Science, University of York, York YO10 5DD, UK
{agw, suresh}@cs.york.ac.uk

Abstract

We propose a formal system for representing the available readings of sentences displaying quantifier scope ambiguity, in which partial scopes may be expressed. We show that using a theory of scope availability based upon the function-argument structure of a sentence allows a deterministic, polynomial-time test for the availability of a reading, while solving the same problem within theories based on the well-formedness of sentences in the meaning language has been shown to be NP-hard.

1 Introduction

The phenomenon of quantifier scope ambiguity has been discussed extensively within computational and theoretical linguistics. Given a sentence displaying quantifier scope ambiguity, such as Every man loves a woman, part of the problem of representing the sentence's meaning is to distinguish between the two possible meanings:

    ∀x(man(x) → ∃y(woman(y) ∧ love(x, y)))
    ∃y(woman(y) ∧ ∀x(man(x) → love(x, y)))

where every man loves a (possibly) different woman, or where a single woman is loved by every man.

One aspect of the problem is the generation of all available readings in a suitable representation language. Cooper (1983) described a system of "storing" the quantifiers as λ-expressions during the parsing process and retrieving them at the sentence level; different orders of quantifier retrieval generate different readings of the sentence. However, Cooper's method generates logical forms in which variables are not correctly bound by their quantifiers, and so do not correspond to a correct sentence meaning. This problem is rectified by nested storage (Keller, 1986) and the Hobbs and Shieber (1987) algorithm. However, the linguistic assumptions underlying these approaches have recently been questioned. Park (1995) has argued that the availability of readings is determined not by the well-formedness of sentences in the meaning language, but by the function-argument relationships within the sentence. His theory proposes that only a subset of the well-formed sentences generated by nested storage are available to a speaker of English. Although the theories have different generative power, it is difficult to find linguistic data that convincingly proves either theory correct.

In the absence of persuasive linguistic data, it is reasonable to ask whether other grounds exist for choosing to work with either of the two theories. This paper considers the application of both theories to the problem of underspecified meaning representation, and the question of determining whether a set of constraints represents an available reading of an ambiguous sentence or not. We show that a constraint language based upon Park's linguistic theory (Willis and Manandhar, 1999) solves this problem in polynomial time, and contrast this with recent work based on dominance constraints which shows that using the more permissive theory of availability to solve the same problems leads to NP-hardness.

2 Underspecification

A recent area of interest has been underspecified representations of an ambiguous sentence's meaning, for example Quasi-Logical Form (QLF) (Alshawi and Crouch, 1992) and Underspecified Discourse Representation Theory (UDRT) (Reyle, 1995). We shall characterise the desirable properties of an underspecified meaning representation as:
1. the meaning of a sentence should be represented in a way that is not committed to any one of the possible (intended) meanings of the sentence, and

2. it should be possible to incrementally introduce partial information about the meaning, if such information is available, without the need to undo work that has already been done.

A principal aim of systems providing an underspecified representation of quantifier scope is the ability to represent partial scopings. That is, it should be possible to state that some of the quantifiers have some scope relative to each other, while remaining uncommitted to the relative scope of the remaining quantifiers. However, representations which simply allow partial scopes to be stated without further analysis do not adequately capture the behaviour of quantifiers in a sentence. Consider the sentence Every representative of a company saw most samples, represented in the style of QLF:

    _:see(<+i every x _:rep.of(x, <+j exists y co(y)>)>, <+k most z sample(z)>)

A fully scoped logical form of this QLF is:

    [+i,+k,+j]:see(<+i every x rep.of(x, <+j exists y co(y)>)>, <+k most z sample(z)>)

where the list of quantifier labels indicates the relative scope of quantifiers at that point in the sentence. Although this formula is well formed in the QLF language, it does not correspond to a well-formed sentence of logic, seeming closer to the formula:

    every(x, rep.of(x, y), most(z, sample(z), exists(y, co(y), see(x, z))))

where the variable y does not appear in the scope of its quantifier. A language such as QLF will generally allow this scoping to be expressed, even though it does not correspond to a reading available to a speaker. In QLF semantics, a scoping which does not give rise to any well-formed readings is considered "uninterpretable"; i.e., there is no interpretation in which an evaluation function maps the QLF onto a truth value.

Our aim is to present a system in which there is a straightforward computational test of whether a well-formed reading of a sentence exists in which a partial scoping is satisfied, without requiring recourse to the final logical form. The language CLLS (Egg et al., 1998) has recently been developed which correctly generates the well-formed readings by using dominance constraints over trees. Readings of a sentence can be represented using a tree, where dominance represents outscoping, and quantifiers are represented using binary trees whose daughters correspond to the quantifiers' restriction and scope. So for the current example, Every representative of a company saw most samples, the reading

    every(x, a(y, co(y), rep.of(x, y)), most(z, sample(z), see(x, z)))

can be represented by the tree in Figure 1, where the restrictions of a and most have been omitted for clarity. Domination in the tree represents outscoping in the logical form.

Figure 1: Representing relative scope as a tree.

Underspecification can be captured by defining dominance constraints between nodes representing the quantifiers and relations in a sentence. Readings of the sentence with a free variable are avoided by asserting that each relation containing a variable must be dominated by that variable's quantifier, and an available reading of the sentence is represented by a tree in which all the dominance constraints are satisfied.
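A toy rendering of this idea (ours, not CLLS itself): readings as trees, dominance as the ancestor relation, and constraints read off the variables, so rep.of (containing x and y) must sit below every and a, and see (containing x and z) below every and most.

```python
def find(tree, label):
    """Return the subtree rooted at the node carrying `label`, if any."""
    lab, children = tree
    if lab == label:
        return tree
    for child in children:
        sub = find(child, label)
        if sub is not None:
            return sub
    return None

def dominates(tree, a, b):
    """True iff the node labeled `a` properly dominates the one labeled `b`."""
    sub = find(tree, a)
    return a != b and sub is not None and find(sub, b) is not None

def satisfies(tree, constraints):
    return all(dominates(tree, a, b) for a, b in constraints)

constraints = [("every", "rep.of"), ("a", "rep.of"),
               ("every", "see"), ("most", "see")]

# Figure 1's reading: every(x, a(y, co(y), rep.of(x,y)), most(..., see))
good = ("every", [("a", [("rep.of", [])]), ("most", [("see", [])])])
# An ill-formed variant with rep.of outside the subtree of a:
bad = ("every", [("rep.of", []), ("a", [("most", [("see", [])])])])

print(satisfies(good, constraints), satisfies(bad, constraints))  # True False
```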
So the ill-formed readings of the sentence can be avoided by stating that the relation rep.of is dominated by the restriction of every and the scope of a, while see is dominated by the scopes of both every and most. This is represented in Figure 2, where the dominance constraints are illustrated by dotted lines.

Figure 2: Representing available scopes with dominance constraints.

Further partial scope information can be introduced with additional dominance constraints. So the partial scope requirement that most should outscope every would be captured by a constraint stating that the node representing most should dominate the node representing every in the constraints' solution. It has been shown (Koller et al., 1998) that determining the consistency of these constraints is NP-hard. In the rest of this paper, we show that an alternative theory of scope availability yields a constraint system that can be solved in polynomial time.

3 Alternative Account of Availability

The NP-hardness result of the previous section arises from the assumption that the availability of scopings is determined by the well-formedness of the associated logical forms. Park (1995) has proposed an alternative theory of scope availability which states that available scopes are accounted for by the relative scopes of arguments around relations, whereby quantifiers may not move across NP boundaries. For example, consider the sentence Every representative of a company saw most samples, containing two relations, saw and of. Around saw, every (representative of a company) can outscope most (samples), or vice versa; and around of, every (representative) can outscope a (company), or vice versa. Park generalises this observation to the claim that for any n-ary relation in a sentence, there are n! possible orderings of quantified arguments around that relation. Other quantifiers in the sentence may not "intercalate" between those which are single arguments to a relation. So in the example sentence there are four possible scopes, because there are 2! = 2 scopings around saw and 2! = 2 scopings around of. What is not possible is a reading where a outscopes most, which in turn outscopes every; although this can be represented by a well-formed sentence of logic (with no unbound variables), it is not available to a speaker of English.

By using this theory as the basis of underspecification, we can say:

• underspecification is to be captured by allowing different possible relative scope assignments around the predicates, and

• partial scopes between arbitrary quantifiers in the sentence will be translated into the equivalent scoping of quantifiers around their predicates.

The chosen representation will be based upon a sentence's quantifiers and relations (for example, verbs and prepositions). Quantifiers and the relations which determine their relative scope are represented by a set of elements under a strict partial order, where the ordering represents the relative scopes. A strict order will be taken to be transitive, antisymmetric and irreflexive. However, because the interaction between the predicates in the sentence has implications for possible scopings, it is also necessary to consider the relationships between the ordered sets.

Consider again the sentence Every man loves a woman. The quantifiers and relation in this sentence can be represented by a set of elements {every, a, love}.
A strict partial order, ≻, is defined over the set, which states that the relation love must be outscoped by both quantifiers:

    ({every, a, love}, (every ≻ love, a ≻ love))

The partial order states that both quantifiers outscope the verb, but says nothing about their scopes relative to each other. This represents a completely underspecified meaning. An unambiguous reading of the sentence is represented when ≻ defines a total order on the set. So if the relation every ≻ a were added, the reading

    ∀x.man(x) → ∃y.woman(y) ∧ love(x, y)
    every ≻ a ≻ love

would be represented. Alternatively, adding a ≻ every to the underspecified form would represent the reading

    ∃y.woman(y) ∧ ∀x.man(x) → love(x, y)
    a ≻ every ≻ love

The introduction of a further relation which does not lead to a well-formed sentence (such as love ≻ every) is shown by the irreflexivity of ≻ being violated.

While using a single set of elements correctly accounts for the possible scopes of quantifiers in the sentences discussed so far, relative clauses and prepositional attachment to NPs are more complex. Consider the sentence Every representative of a company saw most samples. The presence of two binary relations, of and saw, implies that there should be 2!·2! = 4 readings. Continuing with the system developed so far, these possibilities could be represented by a pair of strictly partially ordered sets:

    ({every, most, see}, (every ≻ see, most ≻ see))
    ({every, a, of}, (every ≻′ of, a ≻′ of))

where the four possible ways of completing the strict orders on the sets correspond to the four available readings.

To represent relative scope between arbitrary quantifiers in the sentence, a further transitive relation, ·>, is defined. Say that if (S, ≻) is a strictly partially ordered set in the structure where x, y ∈ S and x ≻ y, then x ·> y. So, for example, consider the pair of strictly partially ordered sets:

    ({every, most, see}, (every ≻ most ≻ see))
    ({every, a, of}, (a ≻′ every ≻′ of))

which would represent the reading (in a format similar to generalised quantifiers):

    a(y, every(x, rep.of(x, y), most(z, sample(z), see(x, z))))

The orders on the sets state that every ·> most ·> see and a ·> every ·> of, and from the transitivity of ·> it can be inferred (correctly) that a ·> most. Similarly, given the ambiguous sentence and the partial scope requirement that a should outscope most, the required partial scope can be obtained by adding the relations a ≻′ every and every ≻ most.

The transitivity of ·> is not enough to capture all the available scope information. Suppose it were required that most should outscope a. There are two readings of the sentence which satisfy this partial scope, those being:

    most(z, sample(z), every(x, a(y, co(y), rep.of(x, y)), see(x, z)))

and

    most(z, sample(z), a(y, co(y), every(x, rep.of(x, y), see(x, z)))).

These readings are precisely those for which the object of see outscopes its subject; the partial scope is captured by the pair:

    ({every, most, see}, (most ≻ every ≻ see))
    ({every, a, of}, (every ≻′ of, a ≻′ of))

where there is no additional information about the relative scope of every and a. However, the transitivity of ·> alone does not capture the fact that most ·> a follows from most ·> every. We remedy this by defining a domination relation. In the current case, say that every dominates a, which means that a is nested within the QNP whose head quantifier is every.
Then, because quantifiers may not "intercalate" across NP boundaries, anything that outscopes every also outscopes anything that every dominates (here, a); if most outscopes one it must outscope both. We capture this behaviour by putting the sets into a tree structure, where each of the nodes is one of the strictly ordered sets representing the scopes around a relation. For any node N, each of the daughter nodes has (exactly) one element in common with N; otherwise, any element appears only once in the structure. So, consider again the sentence Every representative of a company saw most samples. The scope information of the underspecified form is represented by the tree:

    ({every, most, see}, (every ≻ see, most ≻ see))
                    |
    ({every, a, of}, (every ≻′ of, a ≻′ of))

Now, say that an element X dominates another element Y (denoted X ↪ Y) if X and Y are (distinct) elements in a set at some node, and X is also in the parent node. Also, ↪ is transitive and irreflexive. So in the example given: every ↪ a and every ↪ of, but not every ↪ every.

We can now extend the definition of ·> by saying that:

    if (P, ≻) is a node in the tree, and x, y ∈ P and x ≻ y, then x ·> y and x ·> z, where z is any term that y dominates. Also, ·> is transitive and irreflexive.

This captures the scoping behaviour for nested quantifiers. So from the ambiguous representation of scopes:

    ({every, most, see}, (most ≻ every ≻ see))
                    |
    ({every, a, of}, (every ≻′ of, a ≻′ of))

where most ≻ every and every ↪ a, it is possible to infer correctly that most ·> a, whatever the relation is between every and a.
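The ·> construction can be checked mechanically. This small sketch (our own illustration) computes ·> for the tree just given and confirms that most ·> a follows even though no single node orders most and a directly:

```python
def transitive_closure(pairs):
    closure = set(pairs)
    while True:
        extra = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if extra <= closure:
            return closure
        closure |= extra

# Node orders as explicit pairs, plus the tree's domination links.
node_orders = {("most", "every"), ("every", "see"),   # {every, most, see}
               ("every", "of"), ("a", "of")}          # {every, a, of}
dominated = transitive_closure({("every", "a"), ("every", "of")})

# x .> y for each ordered pair, plus x .> z for every z dominated by y.
outscopes = set(node_orders)
for x, y in node_orders:
    outscopes |= {(x, z) for (d, z) in dominated if d == y}
outscopes = transitive_closure(outscopes)

print(("most", "a") in outscopes)  # True
```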
4 Formal Definition of Scope Representations

We now provide a formal description of the structures described in section 3. The definition is divided into two parts. First a scope structure is defined, which is a tree structure whose nodes are sets under a strict order, and which describes the correct possible scopings of quantified arguments around their relations. Next, a scope representation is defined, which is the pair of a scope structure and an outscoping relation, ·>_S, defined over all the elements in the structure.

The analysis presented here differs from that of the previous section in that the nodes in the scope structures are sets under a strict total order, rather than under a partial order. The structures therefore represent unambiguous readings of the sentence. Underspecification will then be captured in the constraint language, rather than in the underlying structures, as discussed in section 5.

A scope structure is a finite tree, where each node of the tree is a finite, non-empty set of elements, P, taken from a set O = {α, β, γ, ...}, under a strict total order. For any node, each daughter node is also a strictly ordered set, such that each daughter set d_i has exactly one element in common with P, a different element for each of the d_i. An element can only appear once in the tree, unless it is the common element between a mother and a daughter. So:

    [tree diagram, not recoverable from the scan]

is a correct scope structure, because no element appears twice except α and δ, which appear in mother/daughter pairs (the ordering relations have been omitted for clarity). A scope structure is defined as a triple (P, ≻, D), where P is a set of elements, ≻ is a strict total order over P, and D is the set of daughters. We say that an element occurs in a scope structure if it is a member of the set at any node in the scope structure.

If O is a (countable) set of elements, then scope structures can be recursively defined as follows:

• If S = (P_S, ≻_S, {}), where P_S is a finite, non-empty subset of O and ≻_S is a strict total order on P_S, then S is a scope structure, where:
  1. if x ∈ P_S, then x occurs in S.

• If R and S are scope structures such that R = (P_R, ≻_R, D_R) and S = (P_S, ≻_S, D_S), where no element occurs in both R and S, and there is some element a such that a ∈ P_R, then if T = (P_T, ≻_T, D_T), where P_T = {a} ∪ P_S, D_T = {R} ∪ D_S, and ≻_T is a strict total order on P_T, then T is a scope structure, where:
  1. if some element x occurs in either R or S, then x occurs in T;
  2. if some element x occurs in R and x ≠ a, then a dominates x in T;
  3. if x and y occur in R and x dominates y in R, then x dominates y in T;
  4. if x and y occur in S and x dominates y in S, then x dominates y in T.

If S is a scope structure, then a node in S is defined as:

• If S is a scope structure such that S = (P_S, ≻_S, D_S), then:
  – (P_S, ≻_S) is a node in S;
  – if d_i ∈ D_S, then any node in d_i is a node in S.

Having defined scope structures, we now define a scope representation, which is a pair (S, ·>_S), where S is a scope structure and ·>_S is a relation between pairs of elements which occur in S. ·>_S represents outscoping between any pair of elements in the structure, rather than just between elements at a common node.

If S is a scope structure such that S = (P_S, ≻_S, D_S), then (S, ·>_S) is a scope representation, where ·>_S is the minimum relation such that:

• if (P, ≻_P) is a node in S and x, y ∈ P and x ≻_P y, then x ·>_S y;
• if (P, ≻_P) is a node in S and x, y ∈ P and x ≻_P y, then if z is an element which occurs in S and y dominates z in S, then x ·>_S z;
• ·>_S is transitive.

If (S, ·>_S) is a well-formed scope representation, then ·>_S is a strict partial order over the set of elements which occur in S.

5 Constraints for Scope Underspecification

We now consider a constraint language for representing the available scopes in a sentence. The structure of the sentence can be defined in terms of common arguments to a relation (which is represented by membership of a common set in the scope structure) and the domination relation. The constraint language is:

    φ, ψ ::= x ∘ y     common set membership
           | x ↪ y     domination
           | x ▷ y     outscoping
           | φ ∧ ψ     conjunction

where x, y are members of a (countable) set of constants, CON = {x, y, z, ...}.

It is intended that these constraints be defined over terms in an underspecified semantic representation, such as QLF or UDRT, with a function mapping grammatical objects in the representation onto members of CON. Representing the quantifiers and relations in the sentence is sufficient for our current needs. Constraints of the form x ∘ y (where ∘ is symmetric) state either that x and y represent common arguments to a relation, or that x and y represent a relation and a quantifier which quantifies over it. Constraints of the form x ↪ y indicate that x is the head quantifier of a complex NP in which y, another grammatical object (either a quantifier or a relation), is nested.

So, for example, consider again the sentence Every representative of a company saw most samples, and assume that the terms in the underspecified representation representing the grammatical objects every, exists, most, rep.of and see map onto the elements e, a, m, o and s respectively, where {e, a, m, o, s} ⊂ CON.
Then the constraint representing the fully underspecified meaning is:

    e ∘ s ∧ m ∘ s ∧ e ∘ m ∧ s ∘ e ∧ s ∘ m ∧ m ∘ e ∧
    e ∘ o ∧ a ∘ o ∧ e ∘ a ∧ o ∘ e ∧ o ∘ a ∧ a ∘ e ∧
    e ↪ a ∧ e ↪ o ∧ e ▷ s ∧ e ▷ o ∧ m ▷ s ∧ a ▷ o

Note that the symmetry of ∘ is stated explicitly in the constraint. The (underspecified) constraint is generated either from the grammar or directly from the underspecified structure, so the inference rules for determining the availability of a partial scope only generate constraints of the form X ▷ Y. These rules are discussed further in section 6. Underspecification is now captured within the constraint language; note the parallels between the constraints of the form X ▷ Y in this example and the partial orders used in section 3.

The satisfiability of the constraints is given in terms of the scope representations defined in section 4. A scope representation (S, ·>_S) satisfies a constraint of the form X ∘ Y if there is a node (P, ≻_P) in S such that X′, Y′ ∈ P and X′ ≠ Y′, where some assignment function maps X and Y onto X′ and Y′. Similarly, constraints of the form X ↪ Y are satisfied if X′ dominates Y′ in S, and constraints of the form X ▷ Y are satisfied if X′ ·>_S Y′. So the above constraint is satisfied by the set of scope structures of the form:

    ({every, most, see}, ≻)
               |
    ({every, a, of}, ≻′)

where the assignment function maps the constants e, a, m, o and s onto the elements every, a, most, of and see respectively, and where every ≻ see, most ≻ see, every ≻′ of and a ≻′ of.

We can now define the semantics of the constraint language. An interpretation function [[·]]^I maps constants of the constraint language onto elements which occur in S, and wffs of the constraint language onto one of the pair of values {t, f}. I is a pair ⟨Φ, A⟩, where Φ is a scope representation such that Φ = (S, ·>_S), and A is a function mapping constants of the constraint language onto the set of elements which occur in S. The denotation of the constraints is then given by:

• [[X]]^I = A(X), if X is a constant in the constraint language.
• [[X ∘ Y]]^I = t if there is a node (P, ≻_P) in S such that [[X]]^I ∈ P and [[Y]]^I ∈ P and [[X]]^I ≠ [[Y]]^I; otherwise [[X ∘ Y]]^I = f.
• [[X ↪ Y]]^I = t if [[X]]^I dominates [[Y]]^I in S; otherwise [[X ↪ Y]]^I = f.
• [[X ▷ Y]]^I = t if [[X]]^I ·>_S [[Y]]^I; otherwise [[X ▷ Y]]^I = f.
• [[φ ∧ ψ]]^I = t if [[φ]]^I = t and [[ψ]]^I = t; otherwise [[φ ∧ ψ]]^I = f.

Satisfiability. A constraint set Δ is satisfiable iff there is at least one I such that [[φ]]^I = t for all constraints φ ∈ Δ. The satisfiability of a constraint set represents the existence of a reading of the sentence which respects the partial scoping.

6 Availability of Partial Scopes

We now turn to the question of determining whether a partial scoping is available. In section 3 it was stated that scope availability is accounted for by the relative scope of quantifiers around their predicates. It turns out (although we do not prove it here) that for any partial scoping, there is a necessary and sufficient set of scopings of quantifiers around their relations that gives the partial scoping. For example, we showed that for the sentence Every representative of a company saw most samples, the readings where most outscopes a are exactly those where the object of see outscopes its subject. Therefore, from the constraint most ▷ a, it should be possible to infer most ▷ every. The aim of the constraint solver is to determine what scopings of quantifiers around their relations are required to obtain the required partial scoping, and therefore to state whether the partial scope is available.
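Before turning to the rules, it helps to fix a concrete encoding of these constraints. In this sketch (ours), frozensets capture the symmetry of ∘, and ↪ and ▷ are sets of ordered pairs; the elements are those of the example just given.

```python
# Every representative of a company saw most samples:
# e = every, a = a/exists, m = most, o = rep.of, s = see
co_args = {frozenset(p) for p in
           [("e", "m"), ("e", "s"), ("m", "s"),    # node {every, most, see}
            ("e", "a"), ("e", "o"), ("a", "o")]}   # node {every, a, of}
dominates = {("e", "a"), ("e", "o")}               # e dominates a and o
base_scopes = {("e", "s"), ("m", "s"),             # each quantifier
               ("e", "o"), ("a", "o")}             # outscopes its relation
```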
A set of rules is defined on the constraints, so that additional scope information may be inferred. The introduction of further scope constraints does not affect scope information already present (monotonicity). The rules are given in Figure 3, where Γ represents any conjunction of literals and the associativity and commutativity of ∧ are assumed.

    S1:    Γ ∧ X ∘ Y ∧ X ↪ X′ ∧ X′ ▷ Y  ⊢  X ▷ Y ∧ X′ ▷ X
    S2:    Γ ∧ X ∘ Y ∧ Y ↪ Y′ ∧ X ▷ Y′  ⊢  X ▷ Y
    S3:    Γ ∧ X ∘ Y ∧ X ↪ X′ ∧ Y ↪ Y′ ∧ X′ ▷ Y′  ⊢  X′ ▷ X ∧ X ▷ Y
    Trans: Γ ∧ X ▷ Y ∧ Y ▷ Z  ⊢  X ▷ Z
    Dom:   Γ ∧ X ∘ Y ∧ X ▷ Y ∧ Y ↪ Z  ⊢  X ▷ Z

    where Γ is any conjunction of literals.

    Figure 3: Rules of inference.

The inference rules S1, S2 and S3 operate by recursively reducing an (arbitrary) outscoping constraint X ▷ Z to X ▷ Y ∧ Y ▷ Y′, where Y and Y′ represent arguments to a common relation, and Y′ either dominates or is equal to Z. Repeated application of these rules gives the set of scopings of quantifiers around their relations required for the initial partial scoping. The rules Trans and Dom then generate the remaining possible scope constraints. If a scope is unavailable, then completing the transitive closure of ▷ across the structure yields a constraint of the form X ▷ X. We then say that:

• A constraint set is in normal form iff applying the rules S1, S2, S3, Trans and Dom does not yield any new constraints.

If Γ is a constraint set in normal form, then:

• Γ represents an available scoping iff it does not contain a constraint of the form X ▷ X;

• Γ represents a complete scoping iff it represents an available scoping and, for every constraint of the form X ∘ Y, there is either a constraint X ▷ Y or a constraint Y ▷ X.

The condition for a scoping to be available follows from the irreflexivity of ·>. The condition for a scoping to be complete states that if two elements are arguments to a relation, or are a relation and one of its arguments, then they must have scope relative to each other. This corresponds to considering sets under a total order, rather than under a partial order.

Complexity issues. Let Γ be a constraint representing an available scoping of a sentence, and let X ▷ Y be a constraint representing a partial scope between two terms in that sentence. Then the worst case of applying the inference rules to Γ ∧ X ▷ Y to saturation turns out to be equivalent to completing the transitive closure of ▷, which is known to be soluble in better than O(n³) time (Cormen et al., 1990), where n is the number of elements in the structure.

Application of rules S1, S2 and S3 to completion can be done in linear time: if X ▷ Y is a constraint between two arbitrary quantifiers X and Y for which X ∘ Y does not hold, then exactly one of the rules S1, S2 or S3 applies (lack of space prevents us proving this here); if X ∘ Y holds, then none of these three rules applies. Each application of S1, S2 or S3 adds at most two new constraints, of which at most one is a scope constraint X ▷ Y′ for which X ∘ Y′ does not hold, and at most n − 1 such constraints are generated.

Application of the rules S1, S2 and S3 thus reduces an arbitrary partial scope to relative scopes of arguments around their relations. If a scoping is unavailable, this is signalled by the irreflexivity of ▷ being violated. Testing for this requires that the transitive closure of ▷ be completed, which, as noted, is soluble in better than cubic time. We conclude that testing for the availability of a partial scope in this framework can be achieved in better than cubic time in the worst case.
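Using the encoding above, the availability test is saturation followed by a reflexivity check. The sketch below is our own rendering of the Figure 3 rules as reconstructed (the dispatch on whether X ∘ Y holds follows the discussion of S1-S3); it is a brute-force fixpoint rather than the sub-cubic closure cited from Cormen et al.

```python
from itertools import product

def saturate(co, dom, scopes):
    """Close the outscoping pairs in `scopes` under Trans, Dom, and S1-S3."""
    s = set(scopes)
    while True:
        new = set()
        for (x, y) in s:
            new |= {(x, z) for (y2, z) in s if y2 == y}           # Trans
            if frozenset((x, y)) in co:
                new |= {(x, z) for (d, z) in dom if d == y}       # Dom
            else:
                for (p, c) in dom:
                    if c == x and frozenset((p, y)) in co:        # S1
                        new |= {(p, y), (x, p)}
                    if c == y and frozenset((x, p)) in co:        # S2
                        new.add((x, p))
                for (px, cx), (py, cy) in product(dom, repeat=2):
                    if cx == x and cy == y and frozenset((px, py)) in co:
                        new |= {(x, px), (px, py)}                # S3
        if new <= s:
            return s
        s |= new

def available(co, dom, scopes):
    return all(x != y for (x, y) in saturate(co, dom, scopes))

# most over a is available; a over most over every is not (derives a > a):
print(available(co_args, dominates, base_scopes | {("m", "a")}))   # True
print(available(co_args, dominates,
                base_scopes | {("a", "m"), ("m", "e")}))           # False
```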
7 Conclusion and Comments

A desirable property for an underspecified representation of quantifier scope ambiguity is that there should be a computationally efficient test for whether a partial scope is available or not. We have shown that accepting a theory of availability which states that scope availability is determined by the function-argument structure of a sentence allows the development of a test for availability which is polynomial in the number of quantifiers and relations in a sentence, while the corresponding problem for theories of availability based upon the logical well-formedness of meaning representations has been shown to be NP-hard.

Acknowledgements

The authors would like to thank Alan Frisch, Mark Steedman and three anonymous reviewers for useful comments. The first author is funded by an EPSRC grant.

References

H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proceedings of the 30th Annual Meeting of the ACL, pages 32-39, Newark, Delaware.

R. Cooper. 1983. Quantification and Syntactic Theory. Reidel.

T. Cormen, C. Leiserson, and R. Rivest. 1990. Introduction to Algorithms. The MIT Press, Cambridge, Massachusetts.

M. Egg, J. Niehren, P. Ruhrberg, and F. Xu. 1998. Constraints over lambda-structures in semantic underspecification. In Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meeting of the ACL, Montreal, Canada.

J. Hobbs and S. Shieber. 1987. An algorithm for generating quantifier scopings. Computational Linguistics, 13.

W. Keller. 1986. Nested Cooper storage: The proper treatment of quantification in ordinary noun phrases. In U. Reyle and C. Rohrer, editors, Natural Language Parsing and Linguistic Theory, Studies in Linguistics and Philosophy, pages 432-437. Reidel.

A. Koller, J. Niehren, and R. Treinen. 1998. Dominance constraints: Algorithms and complexity. In Third International Conference on Logical Aspects of Computational Linguistics (LACL '98), Grenoble, France.

J. C. Park. 1995. Quantifier scope and constituency. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 205-212, Cambridge, MA.

U. Reyle. 1995. On reasoning with ambiguities. In Proceedings of the EACL, Dublin.

A. Willis and S. Manandhar. 1999. The availability of partial scopings in an underspecified semantic representation. In 3rd International Workshop on Computational Semantics, Tilburg, the Netherlands, January.
Alternating Quantifier Scope in CCG*

Mark Steedman
Division of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh EH8 9LW, UK
steedman@cogsci.ed.ac.uk

* Early versions of this paper were presented to audiences at Brown U., NYU, and Karlova U. Prague. Thanks to Jason Baldridge, Gann Bierner, Tim Fernando, Kit Fine, Polly Jacobson, Mark Johnson, Aravind Joshi, Richard Kayne, Shalom Lappin, Alex Lascarides, Suresh Manandhar, Jaruslav Peregrin, Jong Park, Anna Szabolcsi, Bonnie Webber, Alistair Willis, and the referees for helpful comments. The work was supported in part by ESRC grant M423284002.

Abstract

The paper shows that movement, or equivalent computational structure-changing operations of any kind at the level of logical form, can be dispensed with entirely in capturing quantifier scope ambiguity. It offers a new semantics whereby the effects of quantifier scope alternation can be obtained by an entirely monotonic derivation, without type-changing rules. The paper follows Fodor (1982), Fodor and Sag (1982), and Park (1995, 1996) in viewing many apparent scope ambiguities as arising from referential categories rather than true generalized quantifiers.

1 Introduction

It is standard to assume that the ambiguity of sentences like (1) is to be accounted for by assigning two logical forms which differ in the scopes assigned to the quantifiers, as in (2a,b):¹

(1) Every boy admires some saxophonist.

(2) a. ∀x.boy′x → ∃y.saxophonist′y ∧ admires′yx
    b. ∃y.saxophonist′y ∧ ∀x.boy′x → admires′yx

¹ The notation uses juxtaposition fa to indicate application of a functor f to an argument a. Constants are distinguished from variables by a prime, and semantic functors like admires′ are assumed to be "Curried". A convention of "left associativity" is assumed, so that admires′yx is equivalent to (admires′y)x.

The question then arises of how a grammar/parser can assign all and only the correct interpretations to sentences with multiple quantifiers. This process has on occasion been explained in terms of "quantifier movement" or essentially equivalent computational operations of "quantifying in" or "storage" at the level of logical form. However, such accounts present a problem for monostratal and monotonic theories of grammar like CCG that try to do away with movement or the equivalent in syntax. Having eliminated non-monotonic operations from the syntax, to have to restore them at the level of logical form would be dismaying, given the strong assumptions of transparency between syntax and semantics from which the monotonic theories begin. Given the assumptions of syntactic/semantic transparency and monotonicity that are usual in the Frege-Montague tradition, it is tempting to try to use nothing but the derivational combinatorics of surface grammar to deliver all the readings for ambiguous sentences like (1). Two ways to restore monotonicity have been proposed, namely: enriching the notion of derivation via type-changing operations, or enriching the lexicon and the semantic ontology.

It is standard in the Frege-Montague tradition to begin by translating expressions like "every boy" and "some saxophonist" into "generalized quantifiers", in effect exchanging the roles of arguments like NPs and functors like verbs by a process of "type-raising" the former. In terms of the notation and assumptions of Combinatory Categorial Grammar (CCG; Steedman 1996), the standard way to incorporate generalized quantifiers into the semantics of CG determiners is to transfer type-raising to the lexicon, assigning the following categories to determiners like every and some, making them functions from nouns to "type-raised" noun phrases, where the latter are simply the syntactic types corresponding to a generalized quantifier:

(3) every := (T/(T\NP))/N : λp.λq.∀x.px → qx
    every := (T\(T/NP))/N : λp.λq.∀x.px → qx

(4) some := (T/(T\NP))/N : λp.λq.∃x.px ∧ qx
    some := (T\(T/NP))/N : λp.λq.∃x.px ∧ qx

(T is a variable over categories unique to each individual occurrence of the raised categories (3) and (4), abbreviating a finite number of different raised types. We will distinguish such distinct variables as T, T′, as necessary.)
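As a concrete gloss on how the categories in (3) and (4) deliver the readings in (2), generalized quantifiers can be modelled as higher-order functions over a toy domain. This is our own illustration of the standard semantics, not Steedman's machinery; the model is chosen so that the two application orders come apart.

```python
# A toy model: two boys, two saxophonists, and who admires whom.
boys = {"b1", "b2"}
saxophonists = {"s1", "s2"}
admires = {("b1", "s1"), ("b2", "s2")}  # each boy admires a different sax

# Generalized quantifiers as functions from a restriction to a function
# from a scope to a truth value, after Montague-style type-raising.
def every(restr):
    return lambda scope: all(scope(x) for x in restr)

def some(restr):
    return lambda scope: any(scope(x) for x in restr)

# (2a): every boy outscopes some saxophonist -- true in this model
wide_every = every(boys)(lambda x: some(saxophonists)(
    lambda y: (x, y) in admires))

# (2b): some saxophonist outscopes every boy -- false: no shared sax
wide_some = some(saxophonists)(lambda y: every(boys)(
    lambda x: (x, y) in admires))

print(wide_every, wide_some)  # True False
```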
In terms of the notation and assumptions of Combinatory Categorial Grammar (CCG, Steedman 1996), the standard way to incorporate generalized quantifiers into the semantics of CG determiners is to transfer type-raising to the lexicon, assigning the following categories to determiners like every and some, making them functions from nouns to "type-raised" noun-phrases, where the latter are simply the syntactic types corresponding to a generalized quantifier:

(3) every := (T/(T\NP))/N : λp.λq.∀x.px → qx
    every := (T\(T/NP))/N : λp.λq.∀x.px → qx

(4) some := (T/(T\NP))/N : λp.λq.∃x.px ∧ qx
    some := (T\(T/NP))/N : λp.λq.∃x.px ∧ qx

(T is a variable over categories unique to each individual occurrence of the raised categories (3) and (4), abbreviating a finite number of different raised types. We will distinguish such distinct variables as T, T′, as necessary.)

Because CCG adds rules of function composition to the rules of functional application that are standard in pure Categorial Grammar, the further inclusion of type-raised arguments engenders derivations in which objects command subjects, as well as more traditional ones in which the reverse is true. Given the categories in (3) and (4), these alternative derivations will deliver the two distinct logical forms shown in (2), entirely monotonically and without involving structure-changing operations.

However, linking derivation and scope as simply and directly as this makes the obviously false prediction that in sentences where there is no ambiguity of CCG derivation there should be no scope ambiguity. In particular, object topicalization and object right node raising are derivationally unambiguous in the relevant respects, and force the displaced object to command the rest of the sentence in derivational terms. So they should only have the wide scope reading of the object quantifier. This is not the case:

(5) a. Some saxophonist, every boy admires.
    b. Every boy admires, and every girl detests, some saxophonist.

Both sentences have a narrow scope reading in which every individual has some attitude towards some saxophonist, but not necessarily the same saxophonist. This observation appears to imply that even the relatively free notion of derivation provided by CCG is still too restricted to explain all ambiguities arising from multiple quantifiers.

Nevertheless, the idea that semantic quantifier scope is limited by syntactic derivational scope has some very attractive features. For example, it immediately explains why scope alternation is both unbounded and sensitive to island constraints. There is a further property of sentence (5b) which was first observed by Geach (1972), and which makes it seem as though scope phenomena are strongly restricted by surface grammar. While the sentence has one reading where all of the boys and girls have strong feelings toward the same saxophonist (say, John Coltrane), and another reading where their feelings are all directed at possibly different saxophonists, it does not have a reading where the saxophonist has wide scope with respect to every boy, but narrow scope with respect to every girl; that is, where the boys all admire John Coltrane, but the girls all detest possibly different saxophonists. There does not even seem to be a reading involving separate wide-scope saxophonists respectively taking scope over boys and girls, for example where the boys all admire Coltrane and the girls all detest Lester Young.
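These two truth conditions can be checked mechanically. The following sketch is illustrative only and not from the paper: the toy model, the relation encoding, and the names are all invented for the example. On a model where each boy admires a different saxophonist, (2a) holds while (2b) fails; note also that applying such independent enumeration separately to each conjunct of (5b) would generate exactly the mixed readings just observed to be absent.

```python
# Illustrative toy model (not from the paper): admires'yx means "x admires y",
# encoded here as (x, y) pairs.
boys = {"b1", "b2"}
saxophonists = {"s1", "s2"}
admires = {("b1", "s1"), ("b2", "s2")}  # each boy admires a different saxophonist

# (2a): forall x. boy'x -> exists y. saxophonist'y & admires'yx
reading_2a = all(any((x, y) in admires for y in saxophonists) for x in boys)

# (2b): exists y. saxophonist'y & forall x. boy'x -> admires'yx
reading_2b = any(all((x, y) in admires for x in boys) for y in saxophonists)

print(reading_2a, reading_2b)  # True False: the two scopings come apart
```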
These observations are very hard to reconcile with semantic theories that invoke powerful mechanisms like abstraction or "Quantifying In" and its relatives, or "Quantifier Movement." For example, if quantifiers are mapped from syntactic levels to canonical subject, object etc. position at predicate-argument structure in both conjuncts in (5b), and then migrate up the logical form to take either wide or narrow scope, then it is not clear why some saxophonist should have to take the same scope in both conjuncts. The same applies if quantifiers are generated in situ, then lowered to their surface position.²

Related observations led Partee and Rooth (1983), and others to propose considerably more general use of type-changing operations than are required in CCG, engendering considerably more flexibility in derivation than seems to be required by the purely syntactic phenomena that have motivated CCG up till now.³ While the tactic of including such order-preserving type-changing operations in the grammar remains a valid alternative for a monotonic treatment of scope alternation in CCG and related forms of categorial grammar, there is no doubt that it complicates the theory considerably. The type-changing operations necessarily engender infinite sets of categories for each word, requiring heuristics based on (partial) orderings on the operations concerned, and raising questions about completeness and practical parsability. All of these questions have been addressed by Hendriks and others, but the result has been to dramatically raise the ratio of mathematical proofs to sentences analyzed.

It seems worth exploring an alternative response to these observations concerning interactions of surface structure and scope-taking. The present paper follows Fodor (1982), Fodor and Sag (1982), and Park (1995, 1996) in explaining scope ambiguities in terms of a distinction between true generalized quantifiers and other purely referential categories. For example, in order to capture the narrow-scope object reading for Geach's right node raised sentence (5b), in whose CCG derivation the object must command everything else, the present paper follows Park in assuming that the narrow scope reading arises from a non-quantificational interpretation of some saxophonist, one which gives rise to a reading indistinguishable from a narrow scope reading when it ends up in the object position at the level of logical form. The obvious candidate for such a non-quantificational interpretation is some kind of referring expression.

The claim that many noun-phrases which have been assumed to have a single generalized quantifier interpretation are in fact purely referential is not new. Recent literature on the semantics of natural quantifiers has departed considerably from the earlier tendency for semanticists to reduce all semantic distinctions of nominal meaning such as de dicto/de re, reference/attribution, etc. to distinctions in scope of traditional quantifiers.

²Such observations have been countered by the invocation of a "parallelism condition" on coordinate sentences, a rule of a very expressively powerful "transderivational" kind that one would otherwise wish to avoid.
³For example, in order to obtain the narrow scope object reading for sentence (5b), Hendriks (1993) subjects the category of the transitive verb to "argument lifting" to make it a function over a type-raised object type, and the coordination rule must be correspondingly semantically generalized.
There is widespread recognition that many such distinctions arise instead from a rich ontology of different types of (collective, distributive, intensional, group-denoting, arbitrary, etc.) individual to which nominal expressions refer. (See for example Webber 1978, Barwise and Perry 1980, Fodor and Sag 1982, Fodor 1982, Fine 1985, and papers in the recent collection edited by Szabolcsi 1997.)

One example of such non-traditional entity types (if an idea that apparently originates with Aristotle can be called non-traditional) is the notion of "arbitrary objects" (Fine 1985). An arbitrary object is an object with which properties can be associated but whose extensional identity in terms of actual objects is unspecified. In this respect, arbitrary objects resemble the Skolem terms that are generated by inference rules like Existential Elimination in proof theories of first-order predicate calculus.

The rest of the paper will argue that arbitrary objects so interpreted are a necessary element of the ontology for natural language semantics, and that their involvement in CCG explains not only scope alternation (including occasions on which scope alternation is not available), but also certain cases of anomalous scopal binding which are unexplained under any of the alternatives discussed so far.

2 Donkeys as Skolem Terms
One example of an indefinite that is probably better analyzed as an arbitrary object than as a quantified NP occurs in the following famous sentence, first brought to modern attention by Geach (1962):

(6) Every farmer who owns a donkeyᵢ beats itᵢ.

The pronoun looks as though it might be a variable bound by an existential quantifier associated with a donkey. However, no purely combinatoric analysis in terms of the generalized quantifier categories offered earlier allows this, since the existential cannot both remain within the scope of the universal, and come to c-command the pronoun, as is required for true bound pronominal anaphora, as in:

(7) Every farmerᵢ in the room thinks that sheᵢ deserves a subsidy.

One popular reaction to this observation has been to try to generalize the notion of scope, as in Dynamic Predicate Logic (DPL). Others have pointed out that donkey pronouns in many respects look more like non-bound-variable or discourse-bound pronouns, in examples like the following:

(8) Everybody who knows Gilbertᵢ likes himᵢ.

I shall assume for the sake of argument that "a donkey" translates at predicate-argument structure as something we might write as arb′donkey′. I shall assume that the function arb′ yields a Skolem term, that is, a term applying a unique functor to all variables bound by universal quantifiers in whose extent arb′donkey′ falls. Call it sk_donkey x in this case, where sk_donkey maps individual instantiations of x (that is, the variable bound by the generalized quantifier every farmer) onto objects with the property donkey in the database.⁴

An ordinary discourse-bound pronoun may be bound to this arbitrary object, but unless the pronoun is in the scope of the quantifiers that bind any variables in the Skolem term, it will include a variable that is outside the scope of its binder, and fail to refer. This analysis is similar to but distinct from the analyses of Cooper (1979) and Heim (1990), who assume that a donkey translates as a quantified expression, and that the entire subject every farmer who owns a donkey establishes a contextually salient function mapping farmers to donkeys, with the donkey/E-type pronoun specifically of the type of such functions. However, by making the pronoun refer instead to a Skolem term or arbitrary object, we free our hands to make the inferences we draw on the basis of such sentences sensitive to world knowledge. For example, if we hear the standard donkey sentence and know that farmers may own more than one donkey, we will probably infer on the basis of knowledge about what makes people beat an arbitrary donkey that she beats all of them. On the other hand, we will not make a parallel inference on the basis of the following sentence (attributed to Jeff Pelletier), and the knowledge that some people have more than one dime in their pocket.

(9) Everyone who had a dime in their pocket put it in the parking meter.

The reason is that we know that the reason for putting a dime into a parking meter, unlike the reason for beating a donkey, is voided by the act itself.

The proposal to translate indefinites as Skolem term-like discourse entities is anticipated in much early work in Artificial Intelligence and Computational Linguistics, including Kay (1973), Woods (1975, p. 76-77), VanLehn (1978), and Webber (1983, p. 353, cf. Webber 1978, p. 2.52), and also by Chierchia (1995), Schlenker (1998), and in unpublished work by Kratzer. Skolem functors are closely related to, but distinct from, "Choice Functions" (see Reinhart 1997, Winter 1997, Sauerland 1998, and Schlenker 1998 for discussion. Webber's 1978 analysis is essentially a choice functional analysis, as is Fine's.)

3 Scope Alternation and Skolem Entities
If indefinites can be assumed to have a referential translation as an arbitrary object, rather than a meaning related to a traditional existential generalized quantifier, then other supposed quantifiers, such as some/a few/two saxophonists, may also be better analyzed as referential categories.

We will begin by assuming that some is not a quantifier, but rather a determiner of a (singular) arbitrary object. It therefore has the following pair of subject and complement categories:

(10) a. some := (T/(T\NP))/N : λp.λq.q(arb′p)
     b. some := (T\(T/NP))/N : λp.λq.q(arb′p)

In this pair of categories, the constant arb′ is the function identified earlier from properties p to entities of type e with that property, such that those entities are functionally related to any universally quantified NPs that have scope over them at the level of logical form. If arb′p is not in the extent of any universal quantifier, then it yields a unique arbitrary constant individual.

We will assume that every has at least the generalized quantifier determiner given at (3), repeated here:

(11) a. every := (T/(T\NP))/N : λp.λq.∀x.px → qx
     b. every := (T\(T/NP))/N : λp.λq.∀x.px → qx

These assumptions, as in Park's related account, provide everything we need to account for all and only the readings that are actually available for the Geach sentence (5b), repeated here:

(12) Every boy admires, and every girl detests, some saxophonist.

The "narrow-scope saxophonist" reading of this sentence results from the (backward) referential category (10b) applying to the translation of Every boy admires and every girl detests of type S/NP (whose derivation is taken as read), as in (13).

⁴I assume that arb′ "knows" what scopes it is in by the same mechanism whereby a bound variable pronoun "knows" about its binder. Whatever this mechanism is, it does not have the power of movement, abstraction, or storage. An arbitrary object is deterministically bound to all scoping universals.
(13) Every boy admires and every girl detests := S/NP : λx.and′(∀y.boy′y → admires′xy)(∀z.girl′z → detests′xz)
     some saxophonist := S\(S/NP) : λq.q(arb′sax′)
     S : and′(∀y.boy′y → admires′(arb′sax′)y)(∀z.girl′z → detests′(arb′sax′)z)   (<)
     S : and′(∀y.boy′y → admires′(sk_sax1 y)y)(∀z.girl′z → detests′(sk_sax2 z)z)   (evaluation)

Crucially, if we evaluate the latter logical form with respect to a database after this reduction, as indicated by the dotted underline, then for each boy and girl that we examine and test for the property of admiring/detesting an arbitrary saxophonist, we will find (or in the sense of Lewis (1979) "accommodate", or add to our database) a potentially different individual, dependent via the Skolem functors sk_sax1 and sk_sax2 upon that boy or girl. Each conjunct thereby gives the appearance of including a variable bound by an existential within the scope of the universal.

The "wide-scope saxophonist" reading arises from the same categories as follows. If Skolemization can act after reduction of the object, when the arbitrary object is within the scope of the universal, then it can also act before, when it is not in scope, to yield a Skolem constant, as in (14).

(14) Every boy admires and every girl detests := S/NP : λx.and′(∀y.boy′y → admires′xy)(∀z.girl′z → detests′xz)
     some saxophonist := S\(S/NP) : λq.q(arb′sax′) ⇒ λq.q(sk_sax)   (evaluation)
     S : and′(∀y.boy′y → admires′ sk_sax y)(∀z.girl′z → detests′ sk_sax z)   (<)

Since the resultant logical form is in all important respects model-theoretically equivalent to the one that would arise from a wide scope existential quantification, we can entirely eliminate the quantifier reading (4) for some, and regard it as bearing only the arbitrary object reading (10).⁵

Consistent with Geach's observation, these categories do not yield a reading in which the boys admire the same wide scope saxophonist but the girls detest possibly different ones. Nor do they yield one in which the girls also all detest the same saxophonist, but not necessarily the one the boys admire. Both facts are necessary consequences of the monotonic nature of CCG as a theory of grammar, without any further assumptions of parallelism conditions.

In the case of the following scope-inverting relative of the Geach example, the outcome is subtly different.

(15) Some woman likes and some man detests every saxophonist.

The scope-inverting reading arises from the evaluation of the arbitrary woman and man after combination with every saxophonist, within the scope of the universal:

(16) ∀x.saxophonist′x → and′(likes′x(sk_woman x))(detests′x(sk_man x))

The reading where some woman and some man appear to have wider scope than every saxophonist arises from evaluation of (the interpretation of) the residue of right node raising, some woman likes and some man detests, before combination with the generalized quantifier every saxophonist. This results in two Skolem constants, say sk_woman and sk_man, liking and detesting every saxophonist, again without the involvement of a true existential quantifier:

(17) ∀x.saxophonist′x → and′(likes′x sk_woman)(detests′x sk_man)

⁵Similar considerations give rise to apparent wide and narrow scope versions of the existential donkey in (6).

These readings are obviously correct. However,
since Skolemization of the arbitrary man and woman has so far been assumed to be free to occur any time, it seems to be predicted that one arbitrary object might become a Skolem constant in advance of reduction with the object, while the other might do so after. This would give rise to further readings in which only one of some man or some woman takes wide scope, for example:⁶

(18) ∀x.saxophonist′x → and′(likes′x sk_woman)(detests′x(sk_man x))

Steedman (1991) shows on the basis of possible accompanying intonation contours that the coordinate fragments like Some woman likes and some man detests that result from right node raising are identical with information structural units of utterances, usually the "theme". In the present framework, readings like (18) can therefore be eliminated without parallelism constraints, by the further assumption that Skolemization/binding of arbitrary objects can only be done over complete information structural units, that is, entire themes, rhemes, or utterances. When any such unit is resolved in this way, all arbitrary objects concerned are obligatorily bound.⁷

While this account of indefinites might appear to mix derivation and evaluation in a dangerous way, this is in fact what we would expect from a monotonic semantics that supports the use of incremental semantic interpretation to guide parsing, as humans appear to (see below).

Further support for a non-quantificational analysis of indefinites can be obtained from the observation that certain nominals that have been talked of as quantifiers entirely fail to exhibit scope alternations of the kind just discussed. One important class is the "non-specific" or "non-group-denoting counting" quantifiers, including the upward-monotone, downward-monotone, and non-monotone quantifiers (Barwise and Cooper 1981) such as at least three, few, exactly five and at most two in examples like the following, which are of a kind discussed by Liu (1990), Stabler (1997), and Beghelli and Stowell (1997):

(19) a. Some linguist can program in at most two programming languages.
     b. Most linguists speak at least three/few/exactly five languages.

In contrast to true quantifiers like most and every, these quantified NP objects appear not to be able to invert or take wide scope over their subjects. That is, unlike some linguist can program in every programming language, which has a scope-inverting reading meaning that every programming language is known by some linguist, (19a) has no reading meaning that there are at most two programming languages that are known to any linguist, and (19b) cannot mean that there are at least three/few/exactly five languages that most linguists speak. Beghelli and Stowell (1997) account for this behavior in terms of different "landing sites" (or in GB terms "functional projections") at the level of LF for the different types of quantifier.

⁶The non-availability of such readings has also been used to argue for parallelism constraints. Quite apart from the theoretically problematic nature of such constraints, they must be rather carefully formulated if they are not to exclude perfectly legal conjunction of narrow scope existentials with explicitly referential NPs, as in the following:
(i) Some woman likes, and Fred detests, every saxophonist.
⁷I am grateful to Gann Bierner for pointing me towards this solution.
However, another alternative is to believe that in syntactic terms these noun-phrases have the same category as any other but in semantic terms they are (plural) arbitrary objects rather than quantifiers, like some, a few, six and the like. This in turn means that they cannot engender dependency in the arbitrary object arising from some linguist in (19a). As a result the sentence has a single meaning, to the effect that there is an arbitrary linguist who can program in at most two programming languages.

4 Computing Available Readings
We may assume (at least for English) that even the non-standard constituents created by function composition in CCG cannot increase the number of quantifiable arguments for an operator beyond the limit of three or so imposed by the lexicon. It follows that the observation of Park (1995, 1996) that only quantified arguments of a single (possibly composed) function can freely alternate scope places an upper bound on the number of readings. The logical form of an n-quantifier sentence is a term with an operator of valency 1, 2 or 3, whose argument(s) must either be quantified expressions or terms with an operator of valency 1, 2 or 3, and so on. The number of readings for an n-quantifier sentence is therefore bounded by the number of nodes in a single spanning tree with a branching factor b of up to three and n leaves. This number is given by a polynomial whose dominating term is b^(log_b n); that is, it is linear in n, albeit with a rather large constant (since nodes correspond to up to 3! = 6 readings). For the relatively small n that we in practice need to cope with, this is still a lot of readings in the worst case.

However, the actual number of readings for real sentences will be very much lower, since it depends on how many true quantifiers are involved, and in exactly what configuration they occur. For example, the following three-quantifier sentence is predicted to have not 3! = 6 but only 4 distinct readings, since the non-quantifiers exactly three girls and some book cannot alternate scope with each other independently of the truly quantificational dependency-inducing Every boy.

(20) Every boy gave exactly three girls some book.

This is an important saving for the parser, as redundant analyses can be eliminated on the basis of identity of logical forms, a standard method of eliminating such "spurious ambiguities". Similarly, as well as the restrictions that we have seen introduced by coordination, the SVO grammar of English means (for reasons discussed in Steedman 1996) that embedded subjects in English are correctly predicted neither to extract nor take scope over their matrix subject in examples like the following:

(21) a. *a boy who(m) I know that admires John Coltrane
     b. Somebody knows that every boy admires some saxophonist.

As Cooper 1983 points out, the latter has no readings where every boy takes scope over somebody. This three-quantifier sentence therefore has not 3! = 6, not 2!·2! = 4, but only 2!·1 = 2 readings. Bayer (1996) and Kayne (1998) have noted related restrictions on scope alternation that would otherwise be allowed for arguments that are marooned in mid verb-group in German. Since such embeddings are crucial to obtaining proliferating readings, it is likely that in practice the number of available readings is usually quite small.
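The effect of this restriction on the count of readings can be made concrete with a small sketch. The following is illustrative only, not the paper's algorithm, and the encoding is invented: for (20), the only scope freedom is whether each non-quantifier's arbitrary object is bound by the single true quantifier (narrow) or Skolemized to a constant (wide), which yields 2² = 4 readings rather than 3! = 6.

```python
from itertools import product

# Toy illustration (not from the paper): for (20), each non-quantifier's
# Skolem term is either dependent on "every boy" (True) or constant (False).
arbitrary_objects = ["exactly three girls", "some book"]

readings = set(product((True, False), repeat=len(arbitrary_objects)))
print(len(readings))  # 4 distinct readings, not 3! = 6
```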
It is interesting to speculate finally on the relation of the above account of the available scope readings with proposals to minimize search during process- ing by building "underspecified" logical forms by Reyle (1992), and others cited in Willis and Man- andhar (1999). There is a sense in which arbitrary individuals are themselves under-specified quanti- tiers, which are disambiguated by Skolemization. However, under the present proposal, they are dis- ambiguated during the derivation itself. The alternative of building a single under- specified logical form can under some circum- stances dramatically reduce the search space and increase efficiency of parsing--for example with distributive expressions in sentences like Six girls ate .five pizzas, which are probably intrinsically un- specified. However, few studies of this kind have looked at the problems posed by the restrictions on available readings exhibited by sentences like (5b). The extent to which inference can be done with the under-specified representations themselves for the quantifier alternations in question (as opposed to distributives) is likely to be very limited. If they are to be disambiguated efficiently, then the disam- biguated representations must embody or include those restrictions. However, the restriction that Geach noted seems intrinsically disjunctive, and hence appears to threaten efficiency in both parsing with, and disambiguation of, under-specified repre- sentations. The fact that relatively few readings are available and that they are so tightly related to surface struc- ture and derivation means that the technique of in- cremental semantic or probabilistic disambiguation of fully specified partial logical forms mentioned earlier may be a more efficient technique for com- puting the contextually relevant readings. For ex- ample, in processing (22) (adapted from Hobbs and Shieber 1987), which Park 1995 claims to have only four readings, rather than the five predicted by their account, such a system can build both readings for the S/NP every representative of three companies saw and decide which is more likely, before build- ing both compatible readings of the whole sentence and similarly resolving with respect to statistical or contextual support: (22) Every representative of three companies saw some sample. 5 Conclusion The above observations imply that only those so- called quantifiers in English which can engender dependency-inducing scope inversion have interpre- tations corresponding to genuine quantifiers. The others are not quantificationai at all, but are various types of arbitrary individuals translated as Skolem terms. These give the appearance of taking nar- row scope when they are bound to truly quantified variables, and of taking wide scope when they are unbound, and therefore "take scope everywhere." Available readings can be computed monotonically from syntactic derivation alone. The notion of syn- tactic derivation embodied in CCG is the most pow- erful limitation on the number of available read- ings, and allows all logical-form level constraints on scope orderings to be dispensed with, a result related to, but more powerful than, that of Pereira (1990). References Barwise, Jon and Cooper, Robin, 1981. "General- ized Quantifiers and Natural Language." Linguis- tics and Philosophy 4:159-219. Barwise, Jon and Perry, John, 1980. "Situations and Attitudes." Journal of Philosophy 78:668-691. Bayer, Josef, 1996. Directionality and Logical Form: On the Scope of Focusing Particles and Wh-in-situ. 
Dordrecht: Kluwer.
Beghelli, Filippo and Stowell, Tim, 1997. "Distributivity and Negation: the Syntax of Each and Every." In Anna Szabolcsi (ed.), Ways of Scope-Taking, Dordrecht: Kluwer. 71-107.
Chierchia, Gennaro, 1995. Dynamics of Meaning. Chicago, IL: Chicago University Press.
Cooper, Robin, 1979. "The Interpretation of Pronouns." In Frank Heny and Helmut Schnelle (eds.), The Nature of Syntactic Representation, New York, NY: Academic Press, volume 10 of Syntax and Semantics.
Cooper, Robin, 1983. Quantification and Syntactic Theory. Dordrecht: Reidel.
Fine, Kit, 1985. Reasoning with Arbitrary Objects. Oxford: Oxford University Press.
Fodor, Janet Dean, 1982. "The Mental Representation of Quantifiers." In Stanley Peters and Esa Saarinen (eds.), Processes, Beliefs, and Questions, Dordrecht: Reidel. 129-164.
Fodor, Janet Dean and Sag, Ivan, 1982. "Referential and Quantificational Indefinites." Linguistics and Philosophy 5:355-398.
Geach, Peter, 1962. Reference and Generality. Ithaca, NY: Cornell University Press.
Geach, Peter, 1972. "A Program for Syntax." In Donald Davidson and Gilbert Harman (eds.), Semantics of Natural Language, Dordrecht: Reidel. 483-497.
Heim, Irene, 1990. "E-Type Pronouns and Donkey Anaphora." Linguistics and Philosophy 13:137-177.
Hendriks, Herman, 1993. Studied Flexibility: Categories and Types in Syntax and Semantics. Ph.D. thesis, Universiteit van Amsterdam.
Hobbs, Jerry and Shieber, Stuart, 1987. "An Algorithm for Generating Quantifier Scopings." Computational Linguistics 13:47-63.
Kay, Martin, 1973. "The MIND System." In Randall Rustin (ed.), Natural Language Processing, New York: Algorithmics Press, volume 8 of Courant Computer Science Symposium. 155-188.
Kayne, Richard, 1998. "Overt vs. Covert Movement." Syntax 1:1-74.
Lewis, David, 1979. "Scorekeeping in a Language Game." Journal of Philosophical Logic 8:339-359.
Liu, Feng-Hsi, 1990. Scope and Dependency in English and Chinese. Ph.D. thesis, University of California, Los Angeles.
Park, Jong, 1995. "Quantifier Scope and Constituency." In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Boston. Palo Alto, Calif.: Morgan Kaufmann, 205-212.
Park, Jong, 1996. A Lexical Theory of Quantification in Ambiguous Query Interpretation. Ph.D. thesis, University of Pennsylvania. Tech Report MS-CIS-96-26/IRCS-96-27, University of Pennsylvania.
Partee, Barbara and Rooth, Mats, 1983. "Generalized Conjunction and Type Ambiguity." In R. Bäuerle et al. (eds.), Meaning, Use, and Interpretation of Language, Berlin: de Gruyter.
Pereira, Fernando, 1990. "Categorial Semantics and Scoping." Computational Linguistics 16:1-10.
Reinhart, Tanya, 1997. "Quantifier Scope: How Labor is Divided between QR and Choice Functions." Linguistics and Philosophy 20(4):335-397.
Reyle, Uwe, 1992. "On Reasoning with Ambiguities." In Proceedings of the 7th Conference of the European Chapter of the Association for Computational Linguistics, Dublin. 1-8.
Sauerland, Uli, 1998. The Meaning of Chains. Ph.D. thesis, MIT, Cambridge, MA.
Schlenker, Philippe, 1998. "Skolem Functions and the Scope of Indefinites." In Proceedings of the 1998 Conference of the North-East Linguistics Society. to appear.
Stabler, Ed, 1997. "Computing Quantifier Scope." In Anna Szabolcsi (ed.), Ways of Scope-Taking, Dordrecht: Kluwer. 155-182.
Steedman, Mark, 1991. "Structure and Intonation." Language 67:262-296.
Steedman, Mark, 1996. Surface Structure and Interpretation. Cambridge, Mass.: MIT Press. Linguistic Inquiry Monograph 30.
Szabolcsi, Anna (ed.), 1997. Ways of Scope-Taking. Dordrecht: Kluwer.
VanLehn, Kurt, 1978. Determining the Scope of English Quantifiers. Master's thesis, MIT. AI-TR-483, Artificial Intelligence Laboratory, MIT.
Webber, Bonnie Lynn, 1978. A Formal Approach to Discourse Anaphora. Ph.D. thesis, Harvard. publ. Garland 1979.
Webber, Bonnie Lynn, 1983. "So What Can We Talk About Now?" In Michael Brady and Robert Berwick (eds.), Computational Models of Discourse, Cambridge, MA: MIT Press. 331-371.
Willis, Alistair and Manandhar, Suresh, 1999. "Two Accounts of Scope Availability and Semantic Underspecification." In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics. College Park, MD, June, to appear.
Winter, Yoad, 1997. "Choice Functions and the Scopal Semantics of Indefinites." Linguistics and Philosophy 20(4):399-467.
Woods, William, 1975. "What's in a Link: Foundations for Semantic Networks." In Daniel Bobrow and Alan Collins (eds.), Representation and Understanding: Readings in Cognitive Science, New York: Academic Press. 35-82.
Measures of Distributional Similarity
Lillian Lee
Department of Computer Science, Cornell University, Ithaca, NY 14853-7501
llee@cs.cornell.edu

Abstract
We study distributional similarity measures for the purpose of improving probability estimation for unseen cooccurrences. Our contributions are three-fold: an empirical comparison of a broad range of measures; a classification of similarity functions based on the information that they incorporate; and the introduction of a novel function that is superior at evaluating potential proxy distributions.

1 Introduction
An inherent problem for statistical methods in natural language processing is that of sparse data: the inaccurate representation in any training corpus of the probability of low frequency events. In particular, reasonable events that happen to not occur in the training set may mistakenly be assigned a probability of zero. These unseen events generally make up a substantial portion of novel data; for example, Essen and Steinbiss (1992) report that 12% of the test-set bigrams in a 75%-25% split of one million words did not occur in the training partition.

We consider here the question of how to estimate the conditional cooccurrence probability P(v|n) of an unseen word pair (n, v) drawn from some finite set N × V. Two state-of-the-art technologies are Katz's (1987) backoff method and Jelinek and Mercer's (1980) interpolation method. Both use P(v) to estimate P(v|n) when (n, v) is unseen, essentially ignoring the identity of n.

An alternative approach is distance-weighted averaging, which arrives at an estimate for unseen cooccurrences by combining estimates for cooccurrences involving similar words:¹

(1) P̂(v|n) = Σ_{m∈S(n)} sim(n, m) P(v|m) / Σ_{m∈S(n)} sim(n, m)

where S(n) is a set of candidate similar words and sim(n, m) is a function of the similarity between n and m. We focus on distributional rather than semantic similarity (e.g., Resnik (1995)) because the goal of distance-weighted averaging is to smooth probability distributions; although the words "chance" and "probability" are synonyms, the former may not be a good model for predicting what cooccurrences the latter is likely to participate in.

There are many plausible measures of distributional similarity. In previous work (Dagan et al., 1999), we compared the performance of three different functions: the Jensen-Shannon divergence (total divergence to the average), the L1 norm, and the confusion probability. Our experiments on a frequency-controlled pseudoword disambiguation task showed that using any of the three in a distance-weighted averaging scheme yielded large improvements over Katz's backoff smoothing method in predicting unseen cooccurrences. Furthermore, by using a restricted version of model (1) that stripped incomparable parameters, we were able to empirically demonstrate that the confusion probability is fundamentally worse at selecting useful similar words. D. Lin also found that the choice of similarity function can affect the quality of automatically-constructed thesauri to a statistically significant degree (1998a) and the ability to determine common morphological roots by as much as 49% in precision (1998b).

¹The term "similarity-based", which we have used previously, has been applied to describe other models as well (L. Lee, 1997; Karov and Edelman, 1998).

These empirical results indicate that investigating different similarity measures can lead to improved natural language processing.
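Equation (1) translates directly into code. The following sketch is a hypothetical illustration rather than the paper's implementation; sim, S, and cond_prob are placeholder stand-ins for a concrete similarity function, neighbor-selection scheme, and base language model.

```python
# Minimal sketch of distance-weighted averaging (equation (1)).

def smoothed_prob(v, n, sim, S, cond_prob):
    """Estimate P(v|n) from the distributions of words similar to n."""
    neighbors = S(n)  # candidate similar words S(n)
    norm = sum(sim(n, m) for m in neighbors)
    return sum(sim(n, m) * cond_prob(v, m) for m in neighbors) / norm
```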
On the other hand, while there have been many similarity measures proposed and analyzed in the information retrieval literature (Jones and Furnas, 1987), there has been some doubt expressed in that community that the choice of similarity metric has any practical impact:

Several authors have pointed out that the difference in retrieval performance achieved by different measures of association is insignificant, providing that these are appropriately normalised. (van Rijsbergen, 1979, pg. 38)

But no contradiction arises because, as van Rijsbergen continues, "one would expect this since most measures incorporate the same information". In the language-modeling domain, there is currently no agreed-upon best similarity metric because there is no agreement on what the "same information", the key data that a similarity function should incorporate, is.

The overall goal of the work described here was to discover these key characteristics. To this end, we first compared a number of common similarity measures, evaluating them in a parameter-free way on a decision task. When grouped by average performance, they fell into several coherent classes, which corresponded to the extent to which the functions focused on the intersection of the supports (regions of positive probability) of the distributions. Using this insight, we developed an information-theoretic metric, the skew divergence, which incorporates the support-intersection data in an asymmetric fashion. This function yielded the best performance overall: an average error rate reduction of 4% (significant at the .01 level) with respect to the Jensen-Shannon divergence, the best predictor of unseen events in our earlier experiments (Dagan et al., 1999).

Our contributions are thus three-fold: an empirical comparison of a broad range of similarity metrics using an evaluation methodology that factors out inessential degrees of freedom; a proposal, building on this comparison, of a characteristic for classifying similarity functions; and the introduction of a new similarity metric incorporating this characteristic that is superior at evaluating potential proxy distributions.

2 Distributional Similarity Functions
In this section, we describe the seven distributional similarity functions we initially evaluated.² For concreteness, we choose N and V to be the set of nouns and the set of transitive verbs, respectively; a cooccurrence pair (n, v) results when n appears as the head noun of the direct object of v. We use P to denote probabilities assigned by a base language model (in our experiments, we simply used unsmoothed relative frequencies derived from training corpus counts).

Let n and m be two nouns whose distributional similarity is to be determined; for notational simplicity, we write q(v) for P(v|n) and r(v) for P(v|m), their respective conditional verb cooccurrence probabilities.

Figure 1 lists several familiar functions. The cosine metric and Jaccard's coefficient are commonly used in information retrieval as measures of association (Salton and McGill, 1983). Note that Jaccard's coefficient differs from all the other measures we consider in that it is essentially combinatorial, being based only on the sizes of the supports of q, r, and q · r rather than the actual values of the distributions.

Previously, we found the Jensen-Shannon divergence (Rao, 1982; J.
Lin, 1991) to be a useful measure of the distance between distributions:

JS(q, r) = ½ [D(q ‖ avg_{q,r}) + D(r ‖ avg_{q,r})]

The function D is the KL divergence, which measures the (always nonnegative) average inefficiency in using one distribution to code for another (Cover and Thomas, 1991):

D(p1(V) ‖ p2(V)) = Σ_v p1(v) log (p1(v)/p2(v))

The function avg_{q,r} denotes the average distribution avg_{q,r}(v) = (q(v) + r(v))/2; observe that its use ensures that the Jensen-Shannon divergence is always defined. In contrast, D(q ‖ r) is undefined if q is not absolutely continuous with respect to r (i.e., the support of q is not a subset of the support of r).

²Strictly speaking, some of these functions are dissimilarity measures, but each such function f can be recast as a similarity function via the simple transformation C − f, where C is an appropriate constant. Whether we mean f or C − f should be clear from context.
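Both divergences can be transcribed directly. The sketch below is illustrative, not the paper's code: distributions are represented as dictionaries from verbs to probabilities, and the KL sum is restricted to the support of its first argument, following the convention 0 log 0 = 0.

```python
import math

def kl(p1, p2):
    # D(p1 || p2); raises ZeroDivisionError exactly when p1 is not
    # absolutely continuous with respect to p2.
    return sum(p1[v] * math.log(p1[v] / p2[v]) for v in p1 if p1[v] > 0)

def js(q, r):
    # Always defined: avg(v) > 0 wherever q(v) > 0 or r(v) > 0.
    vocab = set(q) | set(r)
    qd = {v: q.get(v, 0.0) for v in vocab}
    rd = {v: r.get(v, 0.0) for v in vocab}
    avg = {v: (qd[v] + rd[v]) / 2 for v in vocab}
    return 0.5 * (kl(qd, avg) + kl(rd, avg))
```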
Variations of the value difference metric (Stanfill and Waltz, 1986) have been employed for supervised disam- biguation (Ng and H.B. Lee, 1996; Ng, 1997); but it is not reasonable in language modeling to expect training data tagged with correct prob- abilities. The Dice coej~cient (Smadja et al., 1996; D. Lin, 1998a, 1998b) is monotonic in Jac- card's coefficient (van Rijsbergen, 1979), so its inclusion in our experiments would be redun- dant. Finally, we did not use the KL divergence because it requires a smoothed base language model. SZero would also be a reasonable choice, since it in- dicates zero correlation between q and r. However, it would then not be clear how to average in the estimates of negatively correlated words in equation (1). 27 3 Empirical Comparison We evaluated the similarity functions intro- duced in the previous section on a binary dec- ision task, using the same experimental frame- work as in our previous preliminary compari- son (Dagan et al., 1999). That is, the data consisted of the verb-object cooccurrence pairs in the 1988 Associated Press newswire involv- ing the 1000 most frequent nouns, extracted via Church's (1988) and Yarowsky's process- ing tools. 587,833 (80%) of the pairs served as a training set from which to calculate base probabilities. From the other 20%, we pre- pared test sets as follows: after discarding pairs occurring in the training data (after all, the point of similarity-based estimation is to deal with unseen pairs), we split the remaining pairs into five partitions, and replaced each noun- verb pair (n, vl) with a noun-verb-verb triple (n, vl, v2) such that P(v2) ~ P(vl). The task for the language model under evaluation was to reconstruct which of (n, vl) and (n, v2) was the original cooccurrence. Note that by con- struction, (n, Vl) was always the correct answer, and furthermore, methods relying solely on uni- gram frequencies would perform no better than chance. Test-set performance was measured by the error rate, defined as T(# of incorrect choices + (# of ties)/2), where T is the number of test triple tokens in the set, and a tie results when both alternatives are deemed equally likely by the language model in question. To perform the evaluation, we incorporated each similarity function into a decision rule as follows. For a given similarity measure f and neighborhood size k, let 3f, k(n) denote the k most similar words to n according to f. We define the evidence according to f for the cooc- currence ( n, v~) as Ef, k(n, vi) = [(m E SLk(n) : P(vilm) > l }l • Then, the decision rule was to choose the alter- native with the greatest evidence. The reason we used a restricted version of the distance-weighted averaging model was that we sought to discover fundamental differences in behavior. Because we have a binary decision task, Ef,k(n, vl) simply counts the number of k nearest neighbors to n that make the right de- cision. If we have two functions f and g such that Ef,k(n, Vl) > Eg,k(n, vi), then the k most similar words according to f are on the whole better predictors than the k most similar words according to g; hence, f induces an inherently better similarity ranking for distance-weighted averaging. The difficulty with using the full model (Equation (1)) for comparison purposes is that fundamental differences can be obscured by issues of weighting. For example, suppose the probability estimate ~v(2 -Ll(q, r)). r(v) (suitably normalized) performed poorly. 
We would not be able to tell whether the cause was an inherent deficiency in the L1 norm or just a poor choice of weight function; perhaps (2 − L1(q, r))² would have yielded better estimates.

Figure 2 shows how the average error rate varies with k for the seven similarity metrics introduced above. As previously mentioned, a steeper slope indicates a better similarity ranking. All the curves have a generally upward trend but always lie far below backoff (51% error rate). They meet at k = 1000 because S_{f,1000}(n) is always the set of all nouns. We see that the functions fall into four groups: (1) the L2 norm; (2) Kendall's τ; (3) the confusion probability and the cosine metric; and (4) the L1 norm, Jensen-Shannon divergence, and Jaccard's coefficient.

Figure 2: Similarity metric performance. Error bars denote the range of error rates over the five test sets. Backoff's average error rate was 51%.

We can account for the similar performance of various metrics by analyzing how they incorporate information from the intersection of the supports of q and r. (Recall that we are using q and r for the conditional verb cooccurrence probabilities of two nouns n and m.) Consider the following supports (illustrated in Figure 3):

V_q = {v ∈ V : q(v) > 0}
V_r = {v ∈ V : r(v) > 0}
V_qr = {v ∈ V : q(v)r(v) > 0} = V_q ∩ V_r

Figure 3: Supports on V

We can rewrite the similarity functions from Section 2 in terms of these sets, making use of the identities Σ_{v∈V_q\V_qr} q(v) + Σ_{v∈V_qr} q(v) = Σ_{v∈V_r\V_qr} r(v) + Σ_{v∈V_qr} r(v) = 1. Table 1 lists these alternative forms in order of performance.

L2(q, r) = √(Σ_{v∈V_q} q(v)² − 2 Σ_{v∈V_qr} q(v)r(v) + Σ_{v∈V_r} r(v)²)

τ(q, r) = [2|V_qr| |V \ (V_q ∪ V_r)| − 2|V_q \ V_qr| |V_r \ V_qr|
          + Σ_{v1∈(V_q Δ V_r)} Σ_{v2∈V_qr} sign[(q(v1) − q(v2))(r(v1) − r(v2))]
          + Σ_{v1∈V_qr} Σ_{v2∈V_q∪V_r} sign[(q(v1) − q(v2))(r(v1) − r(v2))]] / (2·C(|V|, 2))

conf(q, r, P(m)) = P(m) Σ_{v∈V_qr} q(v)r(v)/P(v)

cos(q, r) = Σ_{v∈V_qr} q(v)r(v) (Σ_{v∈V_q} q(v)² Σ_{v∈V_r} r(v)²)^(−1/2)

L1(q, r) = 2 − Σ_{v∈V_qr} (q(v) + r(v) − |q(v) − r(v)|)

JS(q, r) = log 2 + ½ Σ_{v∈V_qr} (h(q(v) + r(v)) − h(q(v)) − h(r(v))),   h(x) = −x log x

Jac(q, r) = |V_qr| / |V_q ∪ V_r|

Table 1: Similarity functions, written in terms of sums over supports and grouped by average performance. \ denotes set difference; Δ denotes symmetric set difference.

We see that for the non-combinatorial functions, the groups correspond to the degree to which the measures rely on the verbs in V_qr. The Jensen-Shannon divergence and the L1 norm can be computed simply by knowing the values of q and r on V_qr. For the cosine and the confusion probability, the distribution values on V_qr are key, but other information is also incorporated. The statistic τ_a takes into account all verbs, including those that occur neither with n nor m. Finally, the Euclidean distance is quadratic in verbs outside V_qr; indeed, Kaufman and Rousseeuw (1990) note that it is "extremely sensitive to the effect of one or more outliers" (pg. 117).

The superior performance of Jac(q, r) seems to underscore the importance of the set V_qr. Jaccard's coefficient ignores the values of q and r on V_qr; but we see that simply knowing the size of V_qr relative to the supports of q and r leads to good rankings.
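The support-based rewrites are easy to check numerically. The following sketch is illustrative only (the example distributions are invented); it computes the three supports for dictionary-valued distributions, as in the earlier sketch, and verifies the Table 1 form of the L1 norm, which holds whenever q and r each sum to one.

```python
def supports(q, r):
    v_q = {v for v, p in q.items() if p > 0}
    v_r = {v for v, p in r.items() if p > 0}
    return v_q, v_r, v_q & v_r  # V_q, V_r, V_qr

def l1_direct(q, r):
    vocab = set(q) | set(r)
    return sum(abs(q.get(v, 0.0) - r.get(v, 0.0)) for v in vocab)

def l1_from_support(q, r):  # Table 1 form; assumes q and r each sum to 1
    _, _, v_qr = supports(q, r)
    return 2 - sum(q[v] + r[v] - abs(q[v] - r[v]) for v in v_qr)

def jaccard(q, r):
    v_q, v_r, v_qr = supports(q, r)
    return len(v_qr) / len(v_q | v_r)

q = {"eat": 0.5, "buy": 0.5}
r = {"eat": 0.25, "sell": 0.75}
assert abs(l1_direct(q, r) - l1_from_support(q, r)) < 1e-12
```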
4 The Skew Divergence
Based on the results just described, it appears that it is desirable to have a similarity function that focuses on the verbs that cooccur with both of the nouns being compared. However, we can make a further observation: with the exception of the confusion probability, all the functions we compared are symmetric, that is, f(q, r) = f(r, q). But the substitutability of one word for another need not be symmetric. For instance, "fruit" may be the best possible approximation to "apple", but the distribution of "apple" may not be a suitable proxy for the distribution of "fruit".⁴

In accordance with this insight, we developed a novel asymmetric generalization of the KL divergence, the α-skew divergence:

s_α(q, r) = D(r ‖ α·q + (1 − α)·r)

for 0 ≤ α ≤ 1. It can easily be shown that s_α depends only on the verbs in V_qr. Note that at α = 1, the skew divergence is exactly the KL divergence, and s_{1/2} is twice one of the summands of JS (note that it is still asymmetric).

We can think of α as a degree of confidence in the empirical distribution q; or, equivalently, (1 − α) can be thought of as controlling the amount by which one smooths q by r. Thus, we can view the skew divergence as an approximation to the KL divergence to be used when sparse data problems would cause the latter measure to be undefined.

Figure 4 shows the performance of s_α for α = .99. It performs better than all the other functions; the difference with respect to Jaccard's coefficient is statistically significant, according to the paired t-test, at all k (except k = 1000), with significance level .01 at all k except 100, 400, and 1000.

⁴On a related note, an anonymous reviewer cited the following example from the psychology literature: we can say Smith's lecture is like a sleeping pill, but "not the other way round".

Figure 4: Performance of the skew divergence with respect to the best functions from Figure 2.
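The skew divergence is a one-liner given the kl helper from the earlier sketch; the following is an illustrative transcription of the definition rather than the paper's code, with a comment recording why s_α is finite for α < 1.

```python
def skew(q, r, alpha=0.99):
    # s_alpha(q, r) = D(r || alpha*q + (1 - alpha)*r)
    vocab = set(q) | set(r)
    rd = {v: r.get(v, 0.0) for v in vocab}
    mix = {v: alpha * q.get(v, 0.0) + (1 - alpha) * rd[v] for v in vocab}
    return kl(rd, mix)  # finite for alpha < 1: mix(v) > 0 wherever r(v) > 0
```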
5 Discussion
In this paper, we empirically evaluated a number of distributional similarity measures, including the skew divergence, and analyzed their information sources. We observed that the ability of a similarity function f(q, r) to select useful nearest neighbors appears to be correlated with its focus on the intersection V_qr of the supports of q and r. This is of interest from a computational point of view because V_qr tends to be a relatively small subset of V, the set of all verbs. Furthermore, it suggests downplaying the role of negative information, which is encoded by verbs appearing with exactly one noun, although the Jaccard coefficient does take this type of information into account.

Our explicit division of V-space into various support regions has been implicitly considered in other work. Smadja et al. (1996) observe that for two potential mutual translations X and Y, the fact that X occurs with translation Y indicates association; X's occurring with a translation other than Y decreases one's belief in their association; but the absence of both X and Y yields no information. In essence, Smadja et al. argue that information from the union of supports, rather than just the intersection, is important. D. Lin (1997; 1998a) takes an axiomatic approach to determining the characteristics of a good similarity measure. Starting with a formalization (based on certain assumptions) of the intuition that the similarity between two events depends on both their commonality and their differences, he derives a unique similarity function schema. The definition of commonality is left to the user (several different definitions are proposed for different tasks).

We view the empirical approach taken in this paper as complementary to Lin's. That is, we are working in the context of a particular application, and, while we have no mathematical certainty of the importance of the "common support" information, we did not assume it a priori; rather, we let the performance data guide our thinking.

Finally, we observe that the skew metric seems quite promising. We conjecture that appropriate values for α may inversely correspond to the degree of sparseness in the data, and intend in the future to test this conjecture on larger-scale prediction tasks. We also plan to evaluate skewed versions of the Jensen-Shannon divergence proposed by Rao (1982) and J. Lin (1991).

6 Acknowledgements
Thanks to Claire Cardie, Jon Kleinberg, Fernando Pereira, and Stuart Shieber for helpful discussions, the anonymous reviewers for their insightful comments, Fernando Pereira for access to computational resources at AT&T, and Stuart Shieber for the opportunity to pursue this work at Harvard University under NSF Grant No. IRI9712068.

References
Claire Cardie. 1993. A case-based approach to knowledge acquisition for domain-specific sentence analysis. In 11th National Conference on Artificial Intelligence, pages 798-803.
Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.
Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Second Conference on Applied Natural Language Processing, pages 136-143.
Thomas M. Cover and Joy A. Thomas. 1991. Elements of Information Theory. John Wiley.
Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1995. Contextual word similarity and estimation from sparse data. Computer Speech and Language, 9:123-152.
Ido Dagan, Lillian Lee, and Fernando Pereira. 1999. Similarity-based models of cooccurrence probabilities. Machine Learning, 34(1-3):43-69.
Ute Essen and Volker Steinbiss. 1992. Cooccurrence smoothing for stochastic language modeling. In ICASSP 92, volume 1, pages 161-164.
Jean Dickinson Gibbons. 1993. Nonparametric Measures of Association. Sage University Paper series on Quantitative Applications in the Social Sciences, 07-091. Sage Publications.
Ralph Grishman and John Sterling. 1993. Smoothing of automatically generated selectional constraints. In Human Language Technology: Proceedings of the ARPA Workshop, pages 254-259.
Vasileios Hatzivassiloglou and Kathleen McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering of adjectives according to meaning. In 31st Annual Meeting of the ACL, pages 172-182.
Vasileios Hatzivassiloglou. 1996. Do we need linguistics when we have statistics? A comparative analysis of the contributions of linguistic cues to a statistical word grouping system. In Judith L. Klavans and Philip Resnik, editors, The Balancing Act, pages 67-94. MIT Press.
Don Hindle. 1990. Noun classification from predicate-argument structures. In 28th Annual Meeting of the ACL, pages 268-275.
Frederick Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice.
William P. Jones and George W. Furnas. 1987. Pictures of relevance. Journal of the American Society for Information Science, 38(6):420-442.
Yael Karov and Shimon Edelman. 1998. Similarity-based word sense disambiguation. Computational Linguistics, 24(1):41-59.
Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400-401, March.
Leonard Kaufman and Peter J. Rousseeuw. 1990. Finding Groups in Data: An Introduction to Cluster Analysis. John Wiley and Sons.
Lillian Lee. 1997. Similarity-Based Approaches to Natural Language Processing. Ph.D. thesis, Harvard University.
Dekang Lin. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In 35th Annual Meeting of the ACL, pages 64-71.
Dekang Lin. 1998a. Automatic retrieval and clustering of similar words. In COLING-ACL '98, pages 768-773.
Dekang Lin. 1998b. An information theoretic definition of similarity. In Machine Learning: Proceedings of the Fifteenth International Conference (ICML '98).
Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on Information Theory, 37(1):145-151.
Alpha K. Luk. 1995. Statistical sense disambiguation with relatively small corpora using dictionary definitions. In 33rd Annual Meeting of the ACL, pages 181-188.
Hwee Tou Ng and Hian Beng Lee. 1996. Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach. In 34th Annual Meeting of the ACL, pages 40-47.
Hwee Tou Ng. 1997. Exemplar-based word sense disambiguation: Some recent improvements. In Second Conference on Empirical Methods in Natural Language Processing (EMNLP-2), pages 208-213.
C. Radhakrishna Rao. 1982. Diversity: Its measurement, decomposition, apportionment and analysis. Sankhyā: The Indian Journal of Statistics, 44(A):1-22.
Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of IJCAI-95, pages 448-453.
Gerard Salton and Michael J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill.
Frank Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38.
Craig Stanfill and David Waltz. 1986. Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1228.
K. Sugawara, M. Nishimura, K. Toshioka, M. Okochi, and T. Kaneko. 1985. Isolated word recognition using hidden Markov models. In ICASSP 85, pages 1-4.
C. J. van Rijsbergen. 1979. Information Retrieval. Butterworths, second edition.
Jakub Zavrel and Walter Daelemans. 1997. Memory-based learning: Using similarity for smoothing. In 35th Annual Meeting of the ACL, pages 436-443.
Automatic Detection of Poor Speech Recognition at the Dialogue Level

Diane J. Litman, Marilyn A. Walker and Michael S. Kearns
AT&T Labs Research, 180 Park Ave, Bldg 103, Florham Park, N.J. 07932
{diane,walker,mkearns}@research.att.com

Abstract

The dialogue strategies used by a spoken dialogue system strongly influence performance and user satisfaction. An ideal system would not use a single fixed strategy, but would adapt to the circumstances at hand. To do so, a system must be able to identify dialogue properties that suggest adaptation. This paper focuses on identifying situations where the speech recognizer is performing poorly. We adopt a machine learning approach to learn rules from a dialogue corpus for identifying these situations. Our results show a significant improvement over the baseline and illustrate that both lower-level acoustic features and higher-level dialogue features can affect the performance of the learning algorithm.

1 Introduction

Builders of spoken dialogue systems face a number of fundamental design choices that strongly influence both performance and user satisfaction. Examples include choices between user, system, or mixed initiative, and between explicit and implicit confirmation of user commands. An ideal system wouldn't make such choices a priori, but rather would adapt to the circumstances at hand. For instance, a system detecting that a user is repeatedly uncertain about what to say might move from user to system initiative, and a system detecting that speech recognition performance is poor might switch to a dialogue strategy with more explicit prompting, an explicit confirmation mode, or keyboard input mode. Any of these adaptations might have been appropriate in dialogue D1 from the Annie system (Kamm et al., 1998), shown in Figure 1.

In order to improve performance through such adaptation, a system must first be able to identify, in real time, salient properties of an ongoing dialogue that call for some useful change in system strategy. In other words, adaptive systems should try to automatically identify actionable properties of ongoing dialogues.

Previous work has shown that speech recognition performance is an important predictor of user satisfaction, and that changes in dialogue behavior impact speech recognition performance (Walker et al., 1998b; Litman et al., 1998; Kamm et al., 1998). Therefore, in this work, we focus on the task of automatically detecting poor speech recognition performance in several spoken dialogue systems developed at AT&T Labs. Rather than hand-crafting rules that classify speech recognition performance in an ongoing dialogue, we take a machine learning approach. We begin with a collection of system logs from actual dialogues that were labeled by humans as having had "good" or "bad" speech recognition (the training set). We then apply standard machine learning algorithms to this training set in the hope of discovering, in a principled manner, classifiers that can automatically detect poor speech recognition during novel dialogues.

In order to train such classifiers, we must provide them with a number of "features" of dialogues derived from the system logs that might allow the system to automatically identify poor recognition performance.
In addition to identifying features that provide the best quantitative solutions, we are also interested in comparing the performance of classifiers derived solely from acoustic features or from "high-level" dialogue features, and from combinations of these and other feature types. Note that we are free to invent as many features as we like, as long as they can be computed in real time from the raw system logs.

Since the dialogue systems we examine use automatic speech recognition (ASR), one obvious feature available in the system log is a per-utterance score from the speech recognizer representing its "confidence" in its interpretation of the user's utterance (Zeljkovic, 1996). For dialogue D1, the recognizer's output and the associated confidence scores are in the last two columns of Figure 1.

Figure 1: Example dialogue excerpt D1 with Annie (ASR interpretation and ASR confidence shown in brackets for user utterances).

A1: Annie here.
U1: Hello Annie, could you give me the information on a meeting I have tomorrow? [REJECT, -4.84]
A2: What was that?
U2: I need to find out uh, a meeting time and place. Kim sent me a message on it. [REJECT, -4.20]
A3: For all options say help me out.
U3: Help me out. [REJECT, -6.35]
A4: Annie here.
U4: Help. [help, -1.74]
A5: Help. Say call or say send a message, followed by a label or a number. Or say get my messages...
U5: Get messages. (user interrupts A5) [get me sips, -3.33]
A6: Speech and Image Processing Lab.
U6: Help. [help, -1.78]
A7: Help. Say the first and last name ...
U7: Kim. [annie, -3.81]

These confidence measures are based on the recognizer's language and acoustic models. The confidence scores are typically used by the system to decide whether it believes it has correctly understood the user's utterance. When the confidence score falls below a threshold defined for each system, the utterance is considered a rejection (e.g., utterances U1, U2, and U3 in D1). Note that since our classification problem is defined by speech recognition performance, it might be argued that this confidence feature (or features derived from it) suffices for accurate classification. However, an examination of the transcript in D1 suggests that other useful features might be derived from global or high-level properties of the dialogue history, such as features representing the system's repeated use of diagnostic error messages (utterances A2 and A3), or the user's repeated requests for help (utterances U4 and U6).

Although the work presented here focuses exclusively on the problem of automatically detecting poor speech recognition, a solution to this problem clearly suggests system reaction, such as the strategy changes mentioned above. In this paper, we report on our initial experiments, with particular attention paid to the problem definition and methodology, the best performance we obtain via a machine learning approach, and the performance differences between classifiers based on acoustic and higher-level dialogue features.

2 Systems, Data, Methods

The learning experiments that we describe here use the machine learning program RIPPER (Cohen, 1996) to automatically induce a "poor speech recognition performance" classification model from a corpus of spoken dialogues. (We also ran experiments using the machine learning program BOOSTEXTER (Schapire and Singer, to appear), with results similar to those presented below.)
RIPPER (like other learning programs, such as c5.0 and CART) takes as input the names of a set of classes to be learned, the names and possible values of a fixed set of features, and training data specifying the class and feature values for each example in a training set, and outputs a classification model for predicting the class of future examples from their feature representation. In RIPPER, the classification model is learned using greedy search guided by an information gain metric, and is expressed as an ordered set of if-then rules. We use RIPPER for our experiments because it supports the use of "set-valued" features for representing text, and because if-then rules are often easier for people to understand than decision trees (Quinlan, 1993). Below we describe our corpus of dialogues, the assignment of classes to each dialogue, the extraction of features from each dialogue, and our learning experiments.

Corpus: Our corpus consists of a set of 544 dialogues (over 40 hours of speech) between humans and one of three dialogue systems: ANNIE (Kamm et al., 1998), an agent for voice dialing and messaging; ELVIS (Walker et al., 1998b), an agent for accessing email; and TOOT (Litman and Pan, 1999), an agent for accessing online train schedules. Each agent was implemented using a general-purpose platform for phone-based spoken dialogue systems (Kamm et al., 1997). The dialogues were obtained in controlled experiments designed to evaluate dialogue strategies for each agent. The experiments required users to complete a set of application tasks in conversations with a particular version of the agent. The experiments resulted in both a digitized recording and an automatically produced system log for each dialogue.

Class Assignment: Our corpus is used to construct the machine learning classes as follows. First, each utterance that was not rejected by automatic speech recognition (ASR) was manually labeled as to whether it had been semantically misrecognized or not. (These utterance labelings were produced during a previous set of experiments investigating the performance evaluation of spoken dialogue systems (Walker et al., 1997; Walker et al., 1998a; Walker et al., 1998b; Kamm et al., 1998; Litman et al., 1998; Litman and Pan, 1999).) This was done by listening to the recordings while examining the corresponding system log. If the recognizer's output did not correctly capture the task-related information in the utterance, it was labeled as a misrecognition. For example, in Figure 1 U4 and U6 would be labeled as correct recognitions, while U5 and U7 would be labeled as misrecognitions. Note that our labeling is semantically based; if U5 had been recognized as "play messages" (which invokes the same application command as "get messages"), then U5 would have been labeled as a correct recognition. Although this labeling needs to be done manually, the labeling is based on objective criteria.

Next, each dialogue was assigned a class of either good or bad, by thresholding on the percentage of user utterances that were labeled as ASR semantic misrecognitions. We use a threshold of 11% to balance the classes in our corpus, yielding 283 good and 261 bad dialogues. (This threshold is consistent with a threshold inferred from human judgements (Litman, 1998).) Our classes thus reflect relative goodness with respect to a corpus. Dialogue D1 in Figure 1 would be classified as "bad", because U5 and U7 (29% of the user utterances) are misrecognized.
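The class assignment itself is a simple threshold computation over the hand labels; a minimal sketch, where the function name, argument names, and data layout are our own assumptions:

```python
def label_dialogue(num_misrecognized, num_user_utterances, threshold=0.11):
    """Assign the good/bad class by thresholding the percentage of user
    utterances hand-labeled as ASR semantic misrecognitions.
    E.g., dialogue D1: 2 of 7 utterances (29%) -> "bad"."""
    return "bad" if num_misrecognized / num_user_utterances > threshold else "good"
```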
Feature Extraction: Our corpus is used to construct the machine learning features as follows. Each dialogue is represented in terms of the 23 primitive features in Figure 2. In RIPPER, feature values are continuous (numeric), set-valued, or symbolic. Feature values were automatically computed from system logs, based on five types of knowledge sources: acoustic, dialogue efficiency, dialogue quality, experimental parameters, and lexical. Previous work correlating misrecognition rate with acoustic information, as well as our own hypotheses about the relevance of other types of knowledge, contributed to our features.

• Acoustic Features: mean confidence, pmisrecs%1, pmisrecs%2, pmisrecs%3, pmisrecs%4
• Dialogue Efficiency Features: elapsed time, system turns, user turns
• Dialogue Quality Features: rejections, timeouts, helps, cancels, bargeins (raw); rejection%, timeout%, help%, cancel%, bargein% (normalized)
• Experimental Parameters Features: system, user, task, condition
• Lexical Features: ASR text

Figure 2: Features for spoken dialogues.

The acoustic, dialogue efficiency, and dialogue quality features are all numeric-valued. The acoustic features are computed from each utterance's confidence (log-likelihood) scores (Zeljkovic, 1996). Mean confidence represents the average log-likelihood score for utterances not rejected during ASR. The four pmisrecs% (predicted percentage of misrecognitions) features represent different (coarse) approximations to the distribution of log-likelihood scores in the dialogue. Each pmisrecs% feature uses a fixed threshold value to predict whether a non-rejected utterance is actually a misrecognition, then computes the percentage of user utterances in the dialogue that correspond to these predicted misrecognitions. (Recall that our dialogue classifications were determined by thresholding on the percentage of actual misrecognitions.) For instance, pmisrecs%1 predicts that if a non-rejected utterance has a confidence score below -2 then it is a misrecognition. Thus in Figure 1, utterances U5 and U7 would be predicted as misrecognitions using this threshold. The four thresholds used for the four pmisrecs% features are -2, -3, -4, -5, and were chosen by hand from the entire dataset to be informative.

The dialogue efficiency features measure how quickly the dialogue is concluded, and include elapsed time (the dialogue length in seconds), and system turns and user turns (the number of turns for each dialogue participant).

mean confidence  -2.7
pmisrecs%1       29
pmisrecs%2       29
pmisrecs%3       0
pmisrecs%4       0
elapsed time     300
system turns     7
user turns       7
rejections       3
timeouts         0
helps            2
cancels          0
bargeins         1
rejection%       43
timeout%         0
help%            29
cancel%          0
bargein%         14
system           annie
user             mike
task             day 1
condition        novices without tutorial
ASR text         REJECT REJECT REJECT help get me sips help annie

Figure 3: Feature representation of dialogue D1.
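As an illustration, here is a minimal sketch of how the mean confidence and one pmisrecs% feature could be derived from per-utterance scores; the threshold and the definitions follow the text, while the list-based data layout is our own assumption:

```python
def acoustic_features(confidences, rejected, threshold=-2):
    """confidences: per-user-utterance ASR log-likelihood scores;
    rejected: parallel booleans, True if the system rejected the utterance.
    Returns (mean confidence over non-rejected utterances, pmisrecs%)."""
    accepted = [c for c, r in zip(confidences, rejected) if not r]
    if not accepted:
        return 0.0, 0.0  # degenerate dialogue: every utterance rejected
    mean_confidence = sum(accepted) / len(accepted)
    # pmisrecs%: a non-rejected utterance whose score falls below the
    # fixed threshold is predicted to be a misrecognition.
    predicted = sum(1 for c in accepted if c < threshold)
    pmisrecs = 100.0 * predicted / len(confidences)
    return mean_confidence, pmisrecs
```

On dialogue D1 (scores -4.84, -4.20, -6.35, -1.74, -3.33, -1.78, -3.81, with U1-U3 rejected) this reproduces the Figure 3 values: mean confidence -2.7 and pmisrecs%1 of 29.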
The dialogue quality features attempt to capture aspects of the naturalness of the dialogue. Rejections represents the number of times that the system plays special rejection prompts, e.g., utterances A2 and A3 in dialogue D1. This occurs whenever the ASR confidence score falls below a threshold associated with the ASR grammar for each system state (where the threshold was chosen by the system designer). The rejections feature differs from the pmisrecs% features in several ways. First, the pmisrecs% thresholds are used to determine misrecognitions rather than rejections. Second, the pmisrecs% thresholds are fixed across all dialogues and are not dependent on system state. Third, a system rejection event directly influences the dialogue via the rejection prompt, while the pmisrecs% thresholds have no corresponding behavior.

Timeouts represents the number of times that the system plays special timeout prompts because the user hasn't responded within a pre-specified time frame. Helps represents the number of times that the system responds to a user request with a (context-sensitive) help message. Cancels represents the number of user's requests to undo the system's previous action. Bargeins represents the number of user attempts to interrupt the system while it is speaking. (Since the system automatically detects when a bargein occurs, this feature could have been automatically logged. However, because our system did not log bargeins, we had to hand-label them.) In addition to raw counts, each feature is represented in normalized form by expressing the feature as a percentage. For example, rejection% represents the number of rejected user utterances divided by the total number of user utterances.

In order to test the effect of having the maximum amount of possibly relevant information available, we also included a set of features describing the experimental parameters for each dialogue (even though we don't expect rules incorporating such features to generalize). These features capture the conditions under which each dialogue was collected. The experimental parameters features each have a different set of user-defined symbolic values. For example, the value of the feature system is either "annie", "elvis", or "toot", and gives RIPPER the option of producing rules that are system-dependent.

The lexical feature ASR text is set-valued, and represents the transcript of the user's utterances as output by the ASR component.

Learning Experiments: The final input for learning is training data, i.e., a representation of a set of dialogues in terms of feature and class values. In order to induce classification rules from a variety of feature representations, our training data is represented differently in different experiments. Our learning experiments can be roughly categorized as follows. First, examples are represented using all of the features in Figure 2 (to evaluate the optimal level of performance). Figure 3 shows how dialogue D1 from Figure 1 is represented using all 23 features. Next, examples are represented using only the features in a single knowledge source (to comparatively evaluate the utility of each knowledge source for classification), as well as using features from two or more knowledge sources (to gain insight into the interactions between knowledge sources). Finally, examples are represented using feature sets corresponding to hypotheses in the literature (to empirically test theoretically motivated proposals).

The output of each machine learning experiment is a classification model learned from the training data. To evaluate these results, the error rates of the learned classification models are estimated using the resampling method of cross-validation (Weiss and Kulikowski, 1991). In 25-fold cross-validation, the total set of examples is randomly divided into 25 disjoint test sets, and 25 runs of the learning program are performed. Thus, each run uses the examples not in the test set for training and the remaining examples for testing. An estimated error rate is obtained by averaging the error rate on the testing portion of the data from each of the 25 runs.
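A minimal sketch of this resampling estimate, with the RIPPER training-and-testing step abstracted behind a callback; the interface and names are our own assumptions:

```python
import random

def cross_validation_error(examples, train_and_test, k=25, seed=0):
    """examples: list of (features, label) pairs.
    train_and_test(train, test) -> error rate of a classifier trained
    on `train` and evaluated on `test` (e.g., a wrapper around RIPPER).
    Returns the k-fold cross-validation error estimate."""
    shuffled = examples[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]  # k disjoint test sets
    errors = []
    for i, test in enumerate(folds):
        train = [ex for j, fold in enumerate(folds) if j != i for ex in fold]
        errors.append(train_and_test(train, test))
    return sum(errors) / k  # average error over the k runs
```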
3 Results

Figure 4 summarizes our most interesting experimental results. For each feature set, we report accuracy rates and standard errors resulting from cross-validation. (Accuracy rates are statistically significantly different when the accuracies plus or minus twice the standard error do not overlap; see (Cohen, 1995), p. 134.)

Features Used                           Accuracy (Standard Error)
BASELINE                                52%
REJECTION%                              54.5% (2.0)
EFFICIENCY                              61.0% (2.2)
EXP-PARAMS                              65.5% (2.2)
DIALOGUE QUALITY (NORMALIZED)           65.9% (1.9)
MEAN CONFIDENCE                         68.4% (2.0)
EFFICIENCY + NORMALIZED QUALITY         69.7% (1.9)
ASR TEXT                                72.0% (1.7)
PMISRECS%3                              72.6% (2.0)
EFFICIENCY + QUALITY + EXP-PARAMS       73.4% (1.9)
ALL FEATURES                            77.4% (2.2)

Figure 4: Accuracy rates for dialogue classifiers using different feature sets, 25-fold cross-validation on 544 dialogues. We use SMALL CAPS to indicate feature sets, and ITALICS to indicate primitive features listed in Figure 2.

It is clear that performance depends on the features that the classifier has available. The BASELINE accuracy rate results from simply choosing the majority class, which in this case means predicting that the dialogue is always "good". This leads to a 52% BASELINE accuracy.

The REJECTION% accuracy rates arise from a classifier that has access to the percentage of dialogue utterances in which the system played a rejection message to the user. Previous research suggests that this acoustic feature predicts misrecognitions because users modify their pronunciation in response to system rejection messages in such a way as to lead to further misunderstandings (Shriberg et al., 1992; Levow, 1998). However, despite our expectations, the REJECTION% accuracy rate is not better than the BASELINE at our desired level of statistical significance.

Using the EFFICIENCY features does improve the performance of the classifier significantly above the BASELINE (61%). These features, however, tend to reflect the particular experimental tasks that the users were doing.

The EXP-PARAMS (experimental parameters) features are even more specific to this dialogue corpus than the efficiency features: these features consist of the name of the system, the experimental subject, the experimental task, and the experimental condition (dialogue strategy or user expertise). This information alone allows the classifier to substantially improve over the BASELINE classifier, by identifying particular experimental conditions (mixed initiative dialogue strategy, or novice users without tutorial) or systems that were run with particularly hard tasks (TOOT) with bad dialogues, as in Figure 5. Since with the exception of the experimental condition these features are specific to this corpus, we wouldn't expect them to generalize.

if (condition = mixed) then bad
if (system = toot) then bad
if (condition = novices without tutorial) then bad
default is good

Figure 5: EXP-PARAMS rules.

The normalized DIALOGUE QUALITY features result in a similar improvement in performance (65.9%; the normalized versions of the quality features did better than the raw versions). However, unlike the efficiency and experimental parameters features, the normalization of the dialogue quality features by dialogue length means that rules learned on the basis of these features are more likely to generalize.
Adding the efficiency and normalized quality feature sets together (EFFICIENCY + NORMALIZED QUALITY) results in a significant performance improvement (69.7%) over EFFICIENCY alone. Figure 6 shows that this results in a classifier with three rules: one based on quality alone (percentage of cancellations), one based on efficiency alone (elapsed time), and one that consists of a boolean combination of efficiency and quality features (elapsed time and percentage of rejections). The learned ruleset says that if the percentage of cancellations is greater than 6%, classify the dialogue as bad; if the elapsed time is greater than 282 seconds, and the percentage of rejections is greater than 6%, classify it as bad; if the elapsed time is less than 90 seconds, classify it as bad (this last rule indicates dialogues too short for the user to have completed the task; note that it could not be applied to adapting the system's behavior during the course of the dialogue); otherwise classify it as good. When multiple rules are applicable, RIPPER resolves any potential conflict by using the class that comes first in the ordering; when no rules are applicable, the default is used.

if (cancel% > 6) then bad
if (elapsed time > 282 secs) ∧ (rejection% > 6) then bad
if (elapsed time < 90 secs) then bad
default is good

Figure 6: EFFICIENCY + NORMALIZED QUALITY rules.

We discussed our acoustic REJECTION% results above, based on using the rejection thresholds that each system was actually run with. However, a posthoc analysis of our experimental data showed that our systems could have rejected substantially more misrecognitions with a rejection threshold that was lower than the thresholds picked by the system designers. (Of course, changing the thresholds in this way would have also increased the number of rejections of correct ASR outputs.) Recall that the PMISRECS% experiments explored the use of different thresholds to predict misrecognitions. The best of these acoustic thresholds was PMISRECS%3, with accuracy 72.6%. This classifier learned that if the predicted percentage of misrecognitions using the threshold for that feature was greater than 8%, then the dialogue was predicted to be bad, otherwise it was good. This classifier performs significantly better than the BASELINE, REJECTION% and EFFICIENCY classifiers.

Similarly, MEAN CONFIDENCE is another acoustic feature, which averages confidence scores over all the non-rejected utterances in a dialogue. Since this feature is not tuned to the applications, we did not expect it to perform as well as the best PMISRECS% feature. However, the accuracy rate for the MEAN CONFIDENCE classifier (68.4%) is not statistically different than that for the PMISRECS%3 classifier. Furthermore, since the feature does not rely on picking an optimal threshold, it could be expected to better generalize to new dialogue situations.

The classifier trained on (noisy) ASR lexical output (ASR TEXT) has access only to the speech recognizer's interpretation of the user's utterances. The ASR TEXT classifier achieves 72% accuracy, which is significantly better than the BASELINE, REJECTION% and EFFICIENCY classifiers. Figure 7 shows the rules learned from the lexical feature alone. The rules include lexical items that clearly indicate that a user is having trouble, e.g. help and cancel. They also include lexical items that identify particular tasks for particular systems, e.g. the lexical item p-m identifies a task in TOOT.
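Since RIPPER's output is an ordered decision list, applying a learned model like those in Figures 6-9 amounts to first-match rule evaluation with a default class. A minimal sketch using the Figure 6 rules as data; the dict-based feature encoding is our own assumption:

```python
# The three Figure 6 rules, in RIPPER's learned order.
FIGURE6_RULES = [
    (lambda d: d["cancel%"] > 6, "bad"),
    (lambda d: d["elapsed time"] > 282 and d["rejection%"] > 6, "bad"),
    (lambda d: d["elapsed time"] < 90, "bad"),
]

def classify(dialogue_features, rules=FIGURE6_RULES, default="good"):
    # First-match semantics: the earliest applicable rule decides;
    # if no rule applies, the default class is used.
    for predicate, label in rules:
        if predicate(dialogue_features):
            return label
    return default
```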
if (ASR text contains cancel) then bad
if (ASR text contains the) ∧ (ASR text contains get) ∧ (ASR text contains TIMEOUT) then bad
if (ASR text contains today) ∧ (ASR text contains on) then bad
if (ASR text contains the) ∧ (ASR text contains p-m) then bad
if (ASR text contains to) then bad
if (ASR text contains help) ∧ (ASR text contains the) ∧ (ASR text contains read) then bad
if (ASR text contains help) ∧ (ASR text contains previous) then bad
if (ASR text contains about) then bad
if (ASR text contains change-strategy) then bad
default is good

Figure 7: ASR TEXT rules.

Note that the performance of many of the classifiers is statistically indistinguishable, e.g. the performance of the ASR TEXT classifier is virtually identical to the classifier PMISRECS%3 and the EFFICIENCY + QUALITY + EXP-PARAMS classifier. The similarity between the accuracies for a range of classifiers suggests that the information provided by different feature sets is redundant. As discussed above, each system and experimental condition resulted in dialogues that contained lexical items that were unique to it, making it possible to identify experimental conditions from the lexical items alone. Figure 8 shows the rules that RIPPER learned when it had access to all the features except for the lexical and acoustic features. In this case, RIPPER learns some rules that are specific to the TOOT system.

if (cancel% > 4) ∧ (system = toot) then bad
if (system turns ≥ 26) ∧ (rejection% ≥ 5) then bad
if (condition = mixed) ∧ (user turns > 12) then bad
if (system = toot) ∧ (user turns > 14) then bad
if (cancels > 1) ∧ (timeout% ≥ 11) then bad
if (elapsed time ≤ 87 secs) then bad
default is good

Figure 8: EFFICIENCY + QUALITY + EXP-PARAMS rules.

Finally, the last row of Figure 4 suggests that a classifier that has access to ALL FEATURES may do better (77.4% accuracy) than those classifiers that have access to acoustic features only (72.6%) or to lexical features only (72%). Although these differences are not statistically significant, they show a trend (p < .08). This supports the conclusion that different feature sets provide redundant information, and could be substituted for each other to achieve the same performance. However, the ALL FEATURES classifier does perform significantly better than the EXP-PARAMS, DIALOGUE QUALITY (NORMALIZED), and MEAN CONFIDENCE classifiers. Figure 9 shows the decision rules that the ALL FEATURES classifier learns. Interestingly, this classifier does not find the features based on experimental parameters to be good predictors when it has other features to choose from. Rather it combines features representing acoustic, efficiency, dialogue quality and lexical information.

if (mean confidence ≤ -2.2) ∧ (pmisrecs%4 ≥ 6) then bad
if (pmisrecs%3 ≥ 7) ∧ (ASR text contains yes) ∧ (mean confidence ≤ -1.9) then bad
if (cancel% ≥ 4) then bad
if (system turns ≥ 29) ∧ (ASR text contains message) then bad
if (elapsed time ≤ 90) then bad
default is good

Figure 9: ALL FEATURES rules.

4 Discussion

The experiments presented here establish several findings. First, it is possible to give an objective definition for poor speech recognition at the dialogue level, and to apply machine learning to build classifiers detecting poor recognition solely from features of the system log.
Second, with appropriate sets of features, these classifiers significantly outperform the baseline percentage of the majority class. Third, the comparable performance of classifiers constructed from rather different feature sets (such as acoustic and lexical features) suggests that there is some redundancy between these feature sets (at least with respect to the task). Fourth, the fact that the best estimated accuracy was achieved using all of the features suggests that even problems that seem inherently acoustic may best be solved by exploiting higher-level information.

This work differs from previous work in focusing on behavior at the (sub)dialogue level, rather than on identifying single misrecognitions at the utterance level (Smith, 1998; Levow, 1998; van Zanten, 1998). The rationale is that a single misrecognition may not warrant a global change in dialogue strategy, whereas a user's repeated problems communicating with the system might warrant such a change. While we are not aware of any other work that has applied machine learning to detecting patterns suggesting that the user is having problems over the course of a dialogue, (Levow, 1998) has applied machine learning to identifying single misrecognitions. We are currently extending our feature set to include acoustic-prosodic features such as those used by Levow, in order to predict misrecognitions at both the dialogue level as well as the utterance level.

We are also interested in the extension and generalization of our findings in a number of additional directions. In other experiments, we demonstrated the utility of allowing the user to dynamically adapt the system's dialogue strategy at any point(s) during a dialogue. Our results show that dynamic adaptation clearly improves system performance, with the level of improvement sometimes a function of the system's initial dialogue strategy (Litman and Pan, 1999). Our next step is to incorporate classifiers such as those presented in this paper into a system in order to support dynamic adaptation according to recognition performance. Another area for future work would be to explore the utility of using alternative methods for classifying dialogues as good or bad. For example, the user satisfaction measures we collected in a series of experiments using the PARADISE evaluation framework (Walker et al., 1998c) could serve as the basis for such an alternative classification scheme. More generally, in the same way that learning methods have found widespread use in speech processing and other fields where large corpora are available, we believe that the construction and analysis of spoken dialogue systems is a ripe domain for machine learning applications.

5 Acknowledgements

Thanks to J. Chu-Carroll, W. Cohen, C. Kamm, M. Kan, R. Schapire, Y. Singer, B. Srinivas, and S. Whittaker for help with this research and/or paper.

References

Paul R. Cohen. 1995. Empirical Methods for Artificial Intelligence. MIT Press, Boston.
William Cohen. 1996. Learning trees and rules with set-valued features. In 14th Conference of the American Association of Artificial Intelligence, AAAI.
C. Kamm, S. Narayanan, D. Dutton, and R. Ritenour. 1997. Evaluating spoken dialog systems for telecommunication services. In 5th European Conference on Speech Technology and Communication, EUROSPEECH 97.
Candace Kamm, Diane Litman, and Marilyn A. Walker. 1998. From novice to expert: The effect of tutorials on user expertise with spoken dialogue systems.
In Proceedings of the International Conference on Spoken Language Processing, ICSLP98.
Gina-Anne Levow. 1998. Characterizing and recognizing spoken corrections in human-computer dialogue. In Proceedings of the 36th Annual Meeting of the Association of Computational Linguistics, COLING/ACL 98, pages 736-742.
Diane J. Litman and Shimei Pan. 1999. Empirically evaluating an adaptable spoken dialogue system. In Proceedings of the 7th International Conference on User Modeling (UM).
Diane J. Litman, Shimei Pan, and Marilyn A. Walker. 1998. Evaluating response strategies in a web-based spoken dialogue agent. In Proceedings of ACL/COLING 98: 36th Annual Meeting of the Association of Computational Linguistics, pages 780-787.
Diane J. Litman. 1998. Predicting speech recognition performance from dialogue phenomena. Presented at the American Association for Artificial Intelligence Spring Symposium Series on Applying Machine Learning to Discourse Processing.
J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.
Robert E. Schapire and Yoram Singer. To appear. Boostexter: A boosting-based system for text categorization. Machine Learning.
Elizabeth Shriberg, Elizabeth Wade, and Patti Price. 1992. Human-machine problem solving using spoken language systems (SLS): Factors affecting performance and user satisfaction. In Proceedings of the DARPA Speech and NL Workshop, pages 49-54.
Ronnie W. Smith. 1998. An evaluation of strategies for selectively verifying utterance meanings in spoken natural language dialog. International Journal of Human-Computer Studies, 48:627-647.
G. Veldhuijzen van Zanten. 1998. Adaptive mixed-initiative dialogue management. Technical Report 52, IPO, Center for Research on User-System Interaction.
Marilyn Walker, Donald Hindle, Jeanne Fromer, Giuseppe Di Fabbrizio, and Craig Mestel. 1997. Evaluating competing agent strategies for a voice email agent. In Proceedings of the European Conference on Speech Communication and Technology, EUROSPEECH97.
M. Walker, J. Fromer, G. Di Fabbrizio, C. Mestel, and D. Hindle. 1998a. What can I say: Evaluating a spoken language interface to email. In Proceedings of the Conference on Computer Human Interaction (CHI 98).
Marilyn A. Walker, Jeanne C. Fromer, and Shrikanth Narayanan. 1998b. Learning optimal dialogue strategies: A case study of a spoken dialogue agent for email. In Proceedings of the 36th Annual Meeting of the Association of Computational Linguistics, COLING/ACL 98, pages 1345-1352.
Marilyn A. Walker, Diane J. Litman, Candace A. Kamm, and Alicia Abella. 1998c. Evaluating spoken dialogue agents with PARADISE: Two case studies. Computer Speech and Language, 12(3).
S. M. Weiss and C. Kulikowski. 1991. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. San Mateo, CA: Morgan Kaufmann.
Ilija Zeljkovic. 1996. Decoding optimal state sequences with smooth state likelihoods. In International Conference on Acoustics, Speech, and Signal Processing, ICASSP 96, pages 129-132.
Automatic Identification of Non-compositional Phrases

Dekang Lin
Department of Computer Science, University of Manitoba, Winnipeg, Manitoba, Canada, R3T 2N2, [email protected]
and UMIACS, University of Maryland, College Park, Maryland, 20742, [email protected]

Abstract

Non-compositional expressions present a special challenge to NLP applications. We present a method for automatic identification of non-compositional expressions using their statistical properties in a text corpus. Our method is based on the hypothesis that when a phrase is non-compositional, its mutual information differs significantly from the mutual information values of phrases obtained by substituting one of the words in the phrase with a similar word.

1 Introduction

Non-compositional expressions present a special challenge to NLP applications. In machine translation, word-for-word translation of non-compositional expressions can result in very misleading (sometimes laughable) translations. In information retrieval, expansion of words in a non-compositional expression can lead to a dramatic decrease in precision without any gain in recall. Less obviously, non-compositional expressions need to be treated differently than other phrases in many statistical or corpus-based NLP methods. For example, an underlying assumption in some word sense disambiguation systems, e.g., (Dagan and Itai, 1994; Li et al., 1995; Lin, 1997), is that if two words occurred in the same context, they are probably similar. Suppose we want to determine the intended meaning of "product" in "hot product". We can find other words that are also modified by "hot" (e.g., "hot car") and then choose the meaning of "product" that is most similar to meanings of these words. However, this method fails when non-compositional expressions are involved. For instance, using the same algorithm to determine the meaning of "line" in "hot line", the words "product", "merchandise", "car", etc., would lead the algorithm to choose the "line of product" sense of "line".

We present a method for automatic identification of non-compositional expressions using their statistical properties in a text corpus. The intuitive idea behind the method is that the metaphorical usage of a non-compositional expression causes it to have a different distributional characteristic than expressions that are similar to its literal meaning.

2 Input Data

The input to our algorithm is a collocation database and a thesaurus. We briefly describe the process of obtaining this input. More details about the construction of the collocation database and the thesaurus can be found in (Lin, 1998).

We parsed a 125-million word newspaper corpus with Minipar (available at http://www.cs.umanitoba.ca/~lindek/minipar.htm), a descendent of Principar (Lin, 1993; Lin, 1994), and extracted dependency relationships from the parsed corpus. A dependency relationship is a triple: (head type modifier), where head and modifier are words in the input sentence and type is the type of the dependency relation. For example, (1a) is an example dependency tree and the set of dependency triples extracted from (1a) are shown in (1b).

(1) a. (dependency tree for "John married Peter's sister", with arcs labeled subj, compl, and gen)
    b. (marry V:subj:N John), (marry V:compl:N sister), (sister N:gen:N Peter)

There are about 80 million dependency relationships in the parsed corpus. The frequency counts of dependency relationships are filtered with the log-likelihood ratio (Dunning, 1993). We call a dependency relationship a collocation if its log-likelihood ratio is greater than a threshold (0.5).
The number of unique collocations in the resulting database (available at http://www.cs.umanitoba.ca/~lindek/nlldemo.htm) is about 11 million. Using the similarity measure proposed in (Lin, 1998), we constructed a corpus-based thesaurus (available at the same address) consisting of 11839 nouns, 3639 verbs and 5658 adjectives/adverbs which occurred in the corpus at least 100 times.

3 Mutual Information of a Collocation

We define the probability space to consist of all possible collocation triples. We use |H R M| to denote the frequency count of all the collocations that match the pattern (H R M), where H and M are either words or the wild card (*) and R is either a dependency type or the wild card. For example:
• |marry V:compl:N sister| is the frequency count of (marry V:compl:N sister).
• |marry V:compl:N *| is the total frequency count of collocations in which the head is marry and the type is V:compl:N (the verb-object relation).
• |* * *| is the total frequency count of all collocations extracted from the corpus.

To compute the mutual information in a collocation, we treat a collocation (head type modifier) as the conjunction of three events:
A: (* type *)
B: (head * *)
C: (* * modifier)
The mutual information of a collocation is the logarithm of the ratio between the probability of the collocation and the probability that events A, B, and C co-occur if we assume B and C are conditionally independent given A:

(2) mutualInfo(head, type, modifier)
    = log [ P(A,B,C) / (P(B|A) P(C|A) P(A)) ]
    = log [ (|head type modifier| / |* * *|) / ((|head type *| / |* type *|) × (|* type modifier| / |* type *|) × (|* type *| / |* * *|)) ]
    = log [ (|head type modifier| × |* type *|) / (|head type *| × |* type modifier|) ]

4 Mutual Information and Similar Collocations

In this section, we use several examples to demonstrate the basic idea behind our algorithm. Consider the expression "spill gut". Using the automatically constructed thesaurus, we find the following top-10 most similar words to the verb "spill" and the noun "gut":
spill: leak 0.153, pour 0.127, spew 0.125, dump 0.118, pump 0.098, seep 0.096, burn 0.095, explode 0.094, burst 0.092, spray 0.091;
gut: intestine 0.091, instinct 0.089, foresight 0.085, creativity 0.082, heart 0.079, imagination 0.076, stamina 0.074, soul 0.073, liking 0.073, charisma 0.071;
The collocation "spill gut" occurred 13 times in the 125-million-word corpus. The mutual information of this collocation is 6.24. Searching the collocation database, we find that it does not contain any collocation in the form (sim_v(spill) V:compl:N gut) nor (spill V:compl:N sim_n(gut)), where sim_v(spill) is a verb similar to "spill" and sim_n(gut) is a noun similar to "gut". This means that the phrases, such as "leak gut", "pour gut", ... or "spill intestine", "spill instinct", either did not appear in the corpus at all, or did not occur frequently enough to pass the log-likelihood ratio test.
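A minimal sketch of equation (2), assuming the collocation counts, including the wildcard patterns, have been tabulated in a dictionary; the table layout is our own assumption:

```python
import math

def mutual_info(counts, head, typ, modifier):
    """counts maps (head, type, modifier) triples -- with '*' as the
    wild card -- to frequency counts, as in equation (2)."""
    c_htm = counts[(head, typ, modifier)]  # |head type modifier|
    c_t = counts[('*', typ, '*')]          # |* type *|
    c_ht = counts[(head, typ, '*')]        # |head type *|
    c_tm = counts[('*', typ, modifier)]    # |* type modifier|
    return math.log(c_htm * c_t / (c_ht * c_tm))
```

In a real system the wildcard counts would be precomputed marginals over the triple table rather than stored entries, but the arithmetic is the same.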
The top-10 most similar words to "red" and "tape" in our the- saurus are: red: yellow 0.164, purple 0.149, pink 0.146, green 0.136, blue 0.125, white 0.122, color 0.118, or- ange 0.111, brown 0.101, shade 0.094; tape: videotape 0.196, cassette 0.177, videocassette 0.168, video 0.151, disk 0.129, recording 0.117, disc 0.113, footage 0.111, recorder 0.106, audio 0.106; The following table shows the frequency and mutual information of "red tape" and word combinations in which one of "red" or "tape" is substituted by a similar word: Table 1: red tape mutual verb object freq info red tape 259 5.87 yellow tape 12 3.75 orange tape 2 2.64 black tape 9 1.07 Even though many other similar combinations ex- ist in the collocation database, they have very differ- ent frequency counts and mutual information values than "red tape". Finally, consider a compositional phrase: "eco- nomic impact". The top-10 most similar words are: economic: financial 0.305, political 0.243, social 0.219, fiscal 0.209, cultural 0.202, budgetary 0.2, technological 0.196, organizational 0.19, ecological 0.189, monetary 0.189; impact: effect 0.227, implication 0.163, conse- quence 0.156, significance 0.146, repercussion 0.141, fallout 0.141, potential 0.137, ramifica- tion 0.129, risk 0.126, influence 0.125; The frequency counts and mutual information val- ues of "economic impact" and phrases obtained by replacing one of "economic" and "impact" with a similar word are in Table 4. Not only many combi- nations are found in the corpus, many of them have very similar mutual information values to that of 318 Table 2: economic impact verb economic financial political social budgetary ecological economic economic economic economic economic economic economic economic economic object impact impact impact impact impact impact effect implication consequence significance fallout repercussion potential ramification risk mutual freq info 171 1.85 127 1.72 46 0.50 15 0.94 8 3.20 4 2.59 84 0.70 17 0.80 59 1.88 10 0.84 7 1.66 7 1.84 27 1.24 8 2.19 17 -0.33 nomial distribution can be accurately approximated by a normal distribution (Dunning, 1993). Since all the potential non-compositional expressions that we are considering have reasonably large frequency counts, we assume their distributions are normal. Let Ihead 1;ype modifier I = k and 1. * .1 = n. The maximum likelihood estimation of the true proba- bility p of the collocation (head type modifier) is /5 = ~. Even though we do not know what p is, since p is (assumed to be) normally distributed, there is N% chance that it falls within the interval k_.4_ZN _ k.4_z N n ,~, n V n n n n where ZN is a constant related to the confidence level N and the last step in the above derivation is due to the fact that k is very small. Table 3 shows the z~ values for a sample set of confidence intervals. "economic impact". In fact, the difference of mu- tual information values appear to be more impor- tant to the phrasal similarity than the similarity of individual words. For example, the phrases "eco- nomic fallout" and "economic repercussion" are in- tuitively more similar to "economic impact" than "economic implication" or "economic significance", even though "implication" and "significance" have higher similarity values to "impact" than "fallout" and "repercussion" do. 
These examples suggest that one possible way to separate compositional phrases and non- compositional ones is to check the existence and mu- tual information values of phrases obtained by sub- stituting one of the words with a similar word. A phrase is probably non-compositional if such sub- stitutions are not found in the collocation database or their mutual information values are significantly different from that of the phrase. 5 Algorithm In order to implement the idea of separating non- compositional phrases from compositional ones with mutual information, we must use a criterion to de- termine whether or not the mutual information val- ues of two collocations are significantly different. Al- though one could simply use a predetermined thresh- old for this purpose, the threshold value will be to- tally arbitrary, b-hrthermore, such a threshold does not take into account the fact that with different fre- quency counts, we have different levels confidence in the mutual information values. We propose a more principled approach. The fre- quency count of a collocation is a random variable with binomial distribution. When the frequency count is reasonably large (e.g., greater than 5), a bi- Table 3: Sample ZN values IN% 150% 80% 90% 95% 98% 99% I zg 0.67 1.28 1.64 1.96 2.33 2.58 We further assume that the estimations of P(A), P(B]A) and P(CIA ) in (2) are accurate. The confi- dence interval for the true probability gives rise to a confidence interval for the true mutual information (mutual information computed using the true proba- bilities instead of estimations). The upper and lower bounds of this interval are obtained by substituting k with k+z~v'-g and k-z~vff in (2). Since our con- n n n fidence of p falling between k+,~v~ is N%, we can I% have N% confidence that the true mutual informa- tion is within the upper and lower bound. We use the following condition to determine whether or not a collocation is compositional: (3) A collocation a is non-compositional if there does not exist another collocation/3 such that (a) j3 is obtained by substituting the head or the modifier in a with a similar word and (b) there is an overlap between the 95% confidence interval of the mutual information values of a and f~. For example, the following table shows the fre- quency count, mutual information (computed with the most likelihood estimation) and the lower and upper bounds of the 95% confidence interval of the true mutual information: freq. mutual lower upper verb-object count info bound bound make difference 1489 2.928 2.876 2.978 make change 1779 2.194 2.146 2.239 319 Since the intervals are disjoint, the two colloca- tions are considered to have significantly different mutual information values. 6 Evaluation There is not yet a well-established methodology for evaluating automatically acquired lexical knowl- edge. One possibility is to compare the automati- cally identified relationships with relationships listed in a manually compiled dictionary. For example, (Lin, 1998) compared automatically created the- saurus with the WordNet (Miller et al., 1990) and Roget's Thesaurus. However, since the lexicon used in our parser is based on the WordNet, the phrasal words in WordNet are treated as a single word. For example, "take advantage of" is treated as a transitive verb by the parser. As a result, the extracted non-compositional phrases do not usu- ally overlap with phrasal entries in the WordNet. Therefore, we conducted the evaluation by manu- ally examining sample results. 
This method was also used to evaluate automatically identified hy- ponyms (Hearst, 1998), word similarity (Richardson, 1997), and translations of collocations (Smadja et al., 1996). Our evaluation sample consists of 5 most frequent open class words in the our parsed corpus: {have, company, make, do, take} and 5 words whose fre- quencies are ranked from 2000 to 2004: {path, lock, resort, column, gulf}. We examined three types of dependency relationships: object-verb, noun-noun, and adjective-noun. A total of 216 collocations were extracted, shown in Appendix A. We compared the collocations in Appendix A with the entries for the above 10 words in the NTC's English Idioms Dictionary (henceforth NTC-EID) (Spears and Kirkpatrick, 1993), which contains ap- proximately 6000 definitions of idioms. For our eval- uation purposes, we selected the idioms in NTC-EID that satisfy both of the following two conditions: (4) a. the head word of the idiom is one of the above 10 words. b. there is a verb-object, noun-noun, or adjective-noun relationship in the idiom and the modifier in the phrase is not a variable. For example, "take a stab at something" is included in the evaluation, whereas "take something at face value" is not. There are 249 such idioms in NTC-EID, 34 of which are also found in Appendix A (they are marked with the '+' sign in Appendix A). If we treat the 249 en- tries in NTC-EID as the gold standard, the precision and recall of the phrases in Appendix A are shown in Table 4, To compare the performance with manually compiled dictionaries, we also compute the precision and recall of the entries in the Longman Dictionary of English Idioms (LDOEI) (Long and Summers, 1979) that satisfy the two conditions in (4). It can be seen that the overlap between manually compiled dictionaries are quite low, reflecting the fact that dif- ferent lexicographers may have quite different opin- ion about which phrases are non-compositional. Precision Recall Parser Errors Appendix A 15.7% 13.7% 9.7% LDOEI 39.4% 20.9% N.A. Table 4: Evaluation Results The collocations in Appendix A are classified into three categories. The ones marked with '+' sign are found in NTC-EID. The ones marked with 'x' are parsing errors (we retrieved from the parsed cor- pus all the sentences that contain the collocations in Appendix A and determine which collocations are parser errors). The unmarked collocations satisfy the condition (3) but are not found in NTC-EID. Many of the unmarked collocation are clearly id- ioms, such as "take (the) Fifth Amendment" and "take (its) toll", suggesting that even the most com- prehensive dictionaries may have many gaps in their coverage. The method proposed in this paper can be used to improve the coverage manually created lexical resources. Most of the parser errors are due to the incom- pleteness of the lexicon used by the parser. For ex- ample, "opt" is not listed in the lexicon as a verb. The lexical analyzer guessed it as a noun, causing the erroneous collocation "(to) do opt". The col- location "trig lock" should be "trigger lock". The lexical analyzer in the parser analyzed "trigger" as the -er form of the adjective "trig" (meaning well- groomed). Duplications in the corpus can amplify the effect of a single mistake. For example, the following dis- claimer occurred 212 times in the corpus. "Annualized average rate of return after ex- penses for the past 30 days: not a forecast of future returns" The parser analyzed '% forecast of future returns" as [S [NP a forecast of future] [VP returns]]. 
As a result, (return V:subj :N forecast) satisfied the condition (3). Duplications can also skew the mutual informa- tion of correct dependency relationships. For ex- ample, the verb-object relationship between "take" and "bride" passed the mutual information filter be- cause there are 4 copies of the article containing this phrase. If we were able to throw away the duplicates and record only one count of "take-bride", it would have not pass the mutual information filter (3). 320 The fact that systematic parser errors tend to pass the mutual information filter is both a curse and a blessing. On the negative side, there is no obvious way to separate the parser errors from true non-compositional expressions. On the positive side, the output of the mutual information filter has much higher concentration of parser errors than the database that contains millions of collocations. By manually sifting through the output, one can con- struct a list of frequent parser errors, which can then be incorporated into the parser so that it can avoid making these mistakes in the future. Manually go- ing through the output is not unreasonable, because each non-compositional expression has to be individ- ually dealt with in a lexicon anyway. To find out the benefit of using the dependency relationships identified by a parser instead of simple co-occurrence relationships between words, we also created a database of the co-occurrence relationship between part-of-speech tagged words. We aggre- gated all word pairs that occurred within a 4-word window of each other. The same algorithm and simi- larity measure for the dependency database are used to construct a thesaurus using the co-occurrence database. Appendix B shows all the word pairs that satisfies the condition (3) and that involve one of the 10 words {have, company, make, do, take, path, lock, resort, column, gulf}. It is clear that Appendix B contains far fewer true non-compositional phrases than Appendix A. 7 Related Work There have been numerous previous research on ex- tracting collocations from corpus, e.g., (Choueka, 1988) and (Smadja, 1993). They do not, however, make a distinction between compositional and non- compositional collocations. Mutual information has often been used to separate systematic associations from accidental ones. It was also used to compute the distributional similarity between words CHin - dle, 1990; Lin, 1998). A method to determine the compositionality of verb-object pairs is proposed in (Tapanainen et al., 1998). The basic idea in there is that "if an object appears only with one verb (of few verbs) in a large corpus we expect that it has an idiomatic nature" (Tapanainen et al., 1998, p.1290). For each object noun o, (Tapanainen et al., 1998) computes the distributed frequency DF(o) and rank the non-compositionality of o according to this value. Using the notation introduced in Section 3, DF(o) is computed as follows: DF(o) = ~ Iv,, v:compl:~, ol a n b i=1 where {vl,v2,... ,vn} are verbs in the corpus that took o as the object and where a and b are constants. The first column in Table 5 lists the top 40 verb- object pairs in (Tapanainen et ai., 1998). The "mi" column show the result of our mutual information filter. The '+' sign means that the verb-object pair is also consider to be non-compositional according to mutual information filter (3). The '-' sign means that the verb-object pair is present in our depen- dency database, but it does not satisfy condition (3). 
The first column in Table 5 lists the top 40 verb-object pairs in (Tapanainen et al., 1998). The "mi" column shows the result of our mutual information filter: the '+' sign means that the verb-object pair is also considered to be non-compositional according to the mutual information filter (3); the '-' sign means that the verb-object pair is present in our dependency database, but does not satisfy condition (3). For each '-' marked pair, the "similar collocation" column provides a similar collocation with a similar mutual information value (i.e., the reason why the pair is not considered to be non-compositional). The '◇' marked pairs are not found in our collocation database for various reasons. For example, "finish seventh" is not found because "seventh" is normalized as "_NUM", "have a go" is not found because "a go" is not an entry in our lexicon, and "take advantage" is not found because "take advantage of" is treated as a single lexical item by our parser. The ✓ marks in the "ntc" column in Table 5 indicate that the corresponding verb-object pair is an idiom in (Spears and Kirkpatrick, 1993). It can be seen that none of the verb-object pairs in Table 5 that are filtered out by condition (3) is listed as an idiom in NTC-EID.

verb-object               mi   ntc   similar collocation
take toll                 +
go bust                   +
make plain                +
mark anniversary          -          celebrate anniversary
finish seventh            ◇
make inroad               -          make headway
do homework               -          do typing
have hesitation           -          have misgiving
give birth                +    ✓
have a=go                 ◇    ✓
make mistake              -          make miscalculation
go so=far=as              ◇
take precaution           +
look as=though            ◇
commit suicide            -          commit crime
pay tribute               -          pay homage
take place                +    ✓
make mockery              +
make headway              -          make inroad
take wicket               ◇
cost $                    -          cost million
have qualm                -          have misgiving
make pilgrimage           -          make foray
take advantage            ◇    ✓
make debut                +
have second=thought       ◇    ✓
do job                    -          do work
finish sixth              ◇
suffer heartattack        ◇
decide whether            ◇
have impact               -          have effect
have chance               -          have opportunity
give warn                 ◇
have sexual=intercourse   -          have sex
take plunge               +
have misfortune           -          share misfortune
thank goodness            +
have nothing              ◇
make money                -          make profit
strike chord              +    ✓

Table 5: Comparison with (Tapanainen et al., 1998)

8 Conclusion

We have presented a method to identify non-compositional phrases. The method is based on the assumption that non-compositional phrases have a significantly different mutual information value than the phrases that are similar to their literal meanings. Our experiment shows that this hypothesis is generally true. However, many collocations resulting from systematic parser errors also tend to possess this property.

Acknowledgements

The author wishes to thank the ACL reviewers for their helpful comments and suggestions. This research was partly supported by Natural Sciences and Engineering Research Council of Canada grant OGP121338.

References

Y. Choueka. 1988. Looking for needles in a haystack or locating interesting collocational expressions in large textual databases. In Proceedings of the RIAO Conference on User-Oriented Content-Based Text and Image Handling, Cambridge, MA, March 21-24.

Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. Computational Linguistics, 20(4):563-596.

Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74, March.

Marti A. Hearst. 1998. Automated discovery of WordNet relations. In C. Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 131-151. MIT Press.

Donald Hindle. 1990. Noun classification from predicate-argument structures. In Proceedings of ACL-90, pages 268-275, Pittsburgh, Pennsylvania, June.

Xiaobin Li, Stan Szpakowicz, and Stan Matwin. 1995. A WordNet-based algorithm for word sense disambiguation. In Proceedings of IJCAI-95, pages 1368-1374, Montreal, Canada, August.

Dekang Lin. 1993. Principle-based parsing without overgeneration. In Proceedings of ACL-93, pages 112-120, Columbus, Ohio.
Dekang Lin. 1994. Principar - an efficient, broad-coverage, principle-based parser. In Proceedings of COLING-94, pages 482-488, Kyoto, Japan.

Dekang Lin. 1997. Using syntactic dependency as local context to resolve word sense ambiguity. In Proceedings of ACL/EACL-97, pages 64-71, Madrid, Spain, July.

Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL-98, pages 768-774, Montreal.

T. H. Long and D. Summers, editors. 1979. Longman Dictionary of English Idioms. Longman Group Ltd.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235-244.

Stephen D. Richardson. 1997. Determining Similarity and Inferring Relations in a Lexical Knowledge Base. Ph.D. thesis, The City University of New York.

Frank Smadja, Kathleen R. McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1):1-38, March.

Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-178.

R. A. Spears and B. Kirkpatrick. 1993. NTC's English Idioms Dictionary. National Textbook Company.

Pasi Tapanainen, Jussi Piitulainen, and Timo Järvinen. 1998. Idiomatic object usage and support verbs. In Proceedings of COLING/ACL-98, pages 1289-1293, Montreal, Canada.

Appendix A

Among the collocations in which the head word is one of {have, company, make, do, take, path, lock, resort, column, gulf}, the 216 collocations in the following table are considered by our program to be idioms (i.e., they satisfy condition (3)). The codes in the remark column are explained as follows: ×: parser errors; +: collocations found in NTC-EID.
collocation  remark
(to) have (the) decency
(to) have (all the) earmark(s)
(to) have enough  +
(to) have falling  +
have figuring  ×
have giving  ×
(to) have (a) lien (against)
(to) have (all the) making(s) (of)
(to) have plenty
(to) have (a) record
have working  ×
have wrought  ×
(a) holding company
(a) touring company
(a) insurance company
Sinhalese make  ×
mistake make  ×
mos make  ×
(to) make abrasive
(to) make acquaintance
(to) make believer (out of)
(to) make bow
(to) make (a) case
(to) make (a) catch
(to) make (a) dash
(to) make (one's) debut
(to) make (up) (the) Dow Jones Industrial Average
(to) make (a) duplicate
(to) make enemy
(to) make (an) error
(to) make (an) exception  +
(to) make (an) excuse
(to) make (a) fool  +
(to) make (a) fortune
(to) make friend  +
(to) make (a) fuss  +
(to) make (a) grab
(to) make grade  +
(to) make (a) guess
(to) make hay  +
(to) make headline(s)
(to) make (a) killing  +
(to) make (a) living  +
(to) make (a) long-distance call
(to) make (one's) mark
(to) make (no) mention
(to) make (one's) mind (up)  +
(to) make (a) mint
(to) make (a) mockery (of)
(to) make noise
(to) make (a) pitch  +
(to) make plain  ×
(to) make (a) point  +
(to) make preparation(s)
(to) make (no) pretense
(to) make (a) pun
(to) make referral(s)
(to) make (the) round(s)
(to) make (a) run (at)  +
(to) make savings and loan association  ×
(to) make (no) secret
(to) make (up) sect
(to) make sense  +
(to) make (a) shamble(s) (of)
(to) make (a) showing
(to) make (a) splash
(to) make (a) start
(to) make (a) stop
(to) make (a) tackle
(to) make (a) turn
(to) make (a) virtue (of)
(to) make wonder  ×
(to) do (an) about-face  +
(to) do at home  ×
(to) do bargain-hunting
(to) do both
(to) do business
(to) do (a) cameo
(to) do casting
(to) do damage
(to) do deal(s)
(to) do (the) deed
(to) do (a) disservice
(to) do either
(to) do enough
(to) do (a) favor
(to) do honor(s)  +
(to) do I.  ×
(to) do (an) imitation
(to) do justice  +
(to) do OK
(to) do opt  ×
(to) do puzzle
do Santos  ×
(to) do stunt(s)
(to) do (the) talking
(to) do (the) trick
(to) do (one's) utmost (to)
(to) do well
(to) do wonder(s)
(to) do (much) worse
do you
(the) box-office take
(to) take aim
(to) take back
(to) take (the) bait
(to) take (a) beating
(to) take (a) bet
(to) take (a) bite
(to) take (a) bow
(to) take (someone's) breath (away)
(to) take (the) bride (on honeymoon)
(to) take charge
(to) take command
(to) take communion
(to) take countermeasure
(to) take cover
(to) take (one's) cue
(to) take custody
(to) take (a) dip
(to) take (a) dive
(to) take (some) doing
(to) take (a) drag
(to) take exception
(to) take (the Gish Road) exit
(to) take (the) factor (into account)
(to) take (the) Fifth Amendment
(to) take forever
(to) take (the) form (of)
(to) take forward
(to) take (a) gamble
(to) take (a) genius (to figure out)
(to) take (a) guess
(to) take (the) helm
(to) take (a) hit
(to) take (a) holiday
(to) take (a) jog
(to) take knock(s)
(to) take a lap
(to) take (the) lead
(to) take (the) longest
(to) take (a) look
(to) take lying
(to) take measure
(to) take (a) nosedive
(to) take note (of)
(to) take oath
(to) take occupancy
(to) take part
(to) take (a) pick
(to) take place
(to) take (a) pledge
(to) take plunge
(to) take (a) poke (at)
(to) take possession
(to) take (a) pounding
(to) take (the) precaution(s)
(to) take private  ×
(to) take profit
(to) take pulse
(to) take (a) quiz
(to) take refuge
(to) take root  +
(to) take sanctuary
(to) take seconds
(to) take shape
(to) take (a) shine
(to) take side(s)  +
(to) take (a) sip
(to) take (a) snap
(to) take (the) sting (out of)
(to) take (12) stitch(es)
(to) take (a) swing (at)
(to) take (its) toll
(to) take (a) tumble
(to) take (a) turn  +
(to) take (a) vote
(to) take (a) vow
(to) take whatever
(a) beaten path
mean path
(a) career path
(a) flight path
(a) garden path
(a) growth path
(an) air lock
(a) power lock
(a) trig lock
(a) virtual lock
(a) combination lock
(a) door lock
(a) rate lock
(a) safety lock
(a) shift lock
(a) ship lock
(a) window lock
(to) lock horns
(to) lock key
(a) last resort
(a) christian resort
(a) destination resort
(an) entertainment resort
(a) ski resort
(a) spinal column
(a) syndicated column
(a) change column
(a) gossip column
(a) Greek column
(a) humor column
(the) net-income column
(the) society column
(the) steering column
(the) support column
(a) tank column
(a) win column
(a) stormy gulf  +

Appendix B (results obtained without a parser)

collocation by proximity
have[V] B[N]
have[V] companion[N]
have[V] conversation[N]
have[V] each[N]
have[V] impact[N]
have[V] legend[N]
have[V] Magellan[N]
have[V] midyear[N]
have[V] orchestra[N]
have[V] precinct[N]
have[V] quarter[N]
have[V] shame[N]
have[V] year end[N]
have[V] zoo[N]
mix[N] company[N]
softball[N] company[N]
electronic[A] make[N]
lost[A] make[N]
no more than[A] make[N]
sure[A] make[N]
circus[N] make[N]
flaw[N] make[N]
recommendation[N] make[N]
shortfall[N] make[N]
way[N] make[N]
make[V] arrest[N]
make[V] mention[N]
make[V] progress[N]
make[V] switch[N]
do[V] Angolan[N]
do[V] damage[N]
do[V] FSX[N]
do[V] hair[N]
do[V] harm[N]
do[V] interior[N]
do[V] justice[N]
do[V] prawn[N]
do[V] worst[N]
place[N] take[N]
take[V] precaution[N]
moral[A] path[N]
temporarily[A] path[N]
Amtrak[N] path[N]
door[N] path[N]
reconciliation[N] path[N]
trolley[N] path[N]
up[A] lock[N]
barrel[N] lock[N]
key[N] lock[N]
love[N] lock[N]
step[N] lock[N]
lock[V] Eastern[N]
lock[V] nun[N]
complex[A] resort[N]
international[N] resort[N]
Taba[N] resort[N]
desk-top[A] column[N]
incorrectly[A] column[N]
income[N] column[N]
smoke[N] column[N]
resource[N] gulf[N]
stream[N] gulf[N]
Deep Read: A Reading Comprehension System

Lynette Hirschman, Marc Light, Eric Breck and John D. Burger
The MITRE Corporation
202 Burlington Road
Bedford, MA USA 01730
{lynette, light, ebreck, john}@mitre.org

Abstract

This paper describes initial work on Deep Read, an automated reading comprehension system that accepts arbitrary text input (a story) and answers questions about it. We have acquired a corpus of 60 development and 60 test stories of 3rd to 6th grade material; each story is followed by short-answer questions (an answer key was also provided). We used these to construct and evaluate a baseline system that uses pattern matching (bag-of-words) techniques augmented with additional automated linguistic processing (stemming, name identification, semantic class identification, and pronoun resolution). This simple system retrieves the sentence containing the answer 30-40% of the time.

1 Introduction

This paper describes our initial work exploring reading comprehension tests as a research problem and an evaluation method for language understanding systems. Such tests can take the form of standardized multiple-choice diagnostic reading skill tests, as well as fill-in-the-blank and short-answer tests. Typically, such tests ask the student to read a story or article and to demonstrate her/his understanding of that article by answering questions about it. For an example, see Figure 1.

Reading comprehension tests are interesting because they constitute "found" test material: these tests are created in order to evaluate children's reading skills, and therefore, test materials, scoring algorithms, and human performance measures already exist. Furthermore, human performance measures provide a more intuitive way of assessing the capabilities of a given system than current measures of precision, recall, F-measure, operating curves, etc. In addition, reading comprehension tests are written to test a range of skill levels. With proper choice of test material, it should be possible to challenge systems to successively higher levels of performance. For these reasons, reading comprehension tests offer an interesting alternative to the kinds of special-purpose, carefully constructed evaluations that have driven much recent research in language understanding. Moreover, the current state-of-the-art in computer-based language understanding makes this project a good choice: it is beyond current systems' capabilities, but tractable.

Library of Congress Has Books for Everyone

(WASHINGTON, D.C., 1964) - It was 150 years ago this year that our nation's biggest library burned to the ground. Copies of all the written books of the time were kept in the Library of Congress. But they were destroyed by fire in 1814 during a war with the British.

That fire didn't stop book lovers. The next year, they began to rebuild the library. By giving it 6,457 of his books, Thomas Jefferson helped get it started.

The first libraries in the United States could be used by members only. But the Library of Congress was built for all the people. From the start, it was our national library.

Today, the Library of Congress is one of the largest libraries in the world. People can find a copy of just about every book and magazine printed.

Libraries have been with us since people first learned to write. One of the oldest to be found dates back to about 800 years B.C. The books were written on tablets made from clay. The people who took care of the books were called "men of the written tablets."
1. Who gave books to the new library?
2. What is the name of our national library?
3. When did this library burn down?
4. Where can this library be found?
5. Why were some early people called "men of the written tablets"?

Figure 1: Sample Remedia™ Reading Comprehension Story and Questions

Our simple bag-of-words approach picked an appropriate sentence 30-40% of the time with only a few months work, much of it devoted to infrastructure. We believe that by adding additional linguistic and world knowledge sources to the system, it can quickly achieve primary-school-level performance, and within a few years, "graduate" to real-world applications.

Reading comprehension tests can serve as a testbed, providing an impetus for research in a number of areas:

• Machine learning of lexical information, including subcategorization frames, semantic relations between words, and pragmatic import of particular words.
• Robust and efficient use of world knowledge (e.g., temporal or spatial relations).
• Rhetorical structure, e.g., causal relationships between propositions in the text, particularly important for answering why and how questions.
• Collaborative learning, which combines a human user and the reading comprehension computer system as a team. If the system can query the human, this may make it possible to circumvent knowledge acquisition bottlenecks for lexical and world knowledge. In addition, research into collaboration might lead to insights about intelligent tutoring.

Finally, reading comprehension evaluates systems' abilities to answer ad hoc, domain-independent questions; this ability supports fact retrieval, as opposed to document retrieval, which could augment future search engines - see Kupiec (1993) for an example of such work.

There has been previous work on story understanding that focuses on inferential processing, common sense reasoning, and world knowledge required for in-depth understanding of stories. These efforts concern themselves with specific aspects of knowledge representation, inference techniques, or question types - see Lehnert (1983) or Schubert (to appear). In contrast, our research is concerned with building systems that can answer ad hoc questions about arbitrary documents from varied domains.

We report here on our initial pilot study to determine the feasibility of this task. We purchased a small (hard copy) corpus of development and test materials (about 60 stories in each) consisting of remedial reading materials for grades 3-6; these materials are simulated news stories, followed by short-answer "5W" questions: who, what, when, where, and why questions.¹ We developed a simple, modular, baseline system that uses pattern matching (bag-of-words) techniques and limited linguistic processing to select the sentence from the text that best answers the query. We used our development corpus to explore several alternative evaluation techniques, and then evaluated on the test set, which was kept blind.

2 Evaluation

We had three goals in choosing evaluation metrics for our system. First, the evaluation should be automatic. Second, it should maintain comparability with human benchmarks. Third, it should require little or no effort to prepare new answer keys. We used three metrics, P&R, HumSent, and AutSent, which satisfy these constraints to varying degrees. P&R was the precision and recall on stemmed content words,² comparing the system's response at the word level to the answer key provided by the test's publisher.
HumSent and AutSent compared the sentence chosen by the system to a list of acceptable answer sentences, scoring one point for a response on the list, and zero points otherwise. In all cases, the score for a set of questions was the average of the scores for each question. For P&R, the answer key from the publisher was used unmodified. The answer key for HumSent was compiled by a human annotator, who examined the texts and chose the sentence(s) that best answered the question, even where the sentence also contained additional (unnecessary) information. For AutSent, an automated routine replaced the human annotator, examining the texts and choosing the sentences, this time based on which one had the highest recall compared against the published answer key.

¹ These materials consisted of levels 2-5 of "The 5 W's" written by Linda Miller, which can be purchased from Remedia Publications, 10135 E. Via Linda #D124, Scottsdale, AZ 85258.

² Precision and recall are defined as follows:

P = (# of matching content words) / (# of content words in system response)
R = (# of matching content words) / (# of content words in answer key)

Repeated words in the answer key match or fail together. All words are stemmed and stop words are removed. At present, the stop-word list consists of forms of be, have, and do, personal and possessive pronouns, the conjunctions and, or, the prepositions to, in, at, of, the articles a and the, and the relative and demonstrative pronouns this, that, and which.

Query: What is the name of our national library?
Story extract:
1. But the Library of Congress was built for all the people.
2. From the start, it was our national library.
Answer key: Library of Congress

Figure 2: Extract from story

For P&R we note that in Figure 2, there are two content words in the answer key (library and congress) and sentence 1 matches both of them, for 2/2 = 100% recall. There are seven content words in sentence 1, so it scores 2/7 = 29% precision. Sentence 2 scores 1/2 = 50% recall and 1/6 = 17% precision.

The human preparing the list of acceptable sentences for HumSent has a problem. Sentence 2 responds to the question, but requires pronoun coreference to give the full answer (the antecedent of it). Sentence 1 contains the words of the answer, but the sentence as a whole doesn't really answer the question. In this and other difficult cases, we have chosen to list no answers for the human metric, in which case the system receives zero points for the question. This occurs 11% of the time in our test corpus. The question is still counted, meaning that the system receives a penalty in these cases. Thus the highest score a system could achieve for HumSent is 89%. Given that our current system can only respond with sentences from the text, this penalty is appropriate. The automated routine for preparing the answer key in AutSent selects as the answer key the sentence(s) with the highest recall (here sentence 1). Thus only sentence 1 would be counted as a correct answer.
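The word-level P&R metric is simple enough to state as code. Below is a minimal Python sketch of the computation just illustrated; the stop-word list is abbreviated and the stemmer is stubbed out as the identity function, both simplifying assumptions rather than the authors' actual implementation.

```python
# A minimal sketch of word-level P&R, assuming a trivial stemmer and an
# abbreviated stop-word list (the paper's list is longer).
STOP_WORDS = {"be", "is", "was", "the", "a", "of", "to", "in", "at",
              "it", "our", "and", "or", "this", "that", "which"}

def stem(word):
    return word  # placeholder; the system used Abney's (1997) stemmer

def content_words(text):
    return [stem(w) for w in text.lower().split() if w not in STOP_WORDS]

def p_and_r(system_response, answer_key):
    resp = content_words(system_response)
    key = content_words(answer_key)
    matching = sum(1 for w in key if w in resp)
    precision = matching / len(resp) if resp else 0.0
    recall = matching / len(key) if key else 0.0
    return precision, recall

# The worked example from Figure 2: 2/7 = 29% precision, 2/2 = 100% recall.
p, r = p_and_r("But the Library of Congress was built for all the people",
               "Library of Congress")
print(round(p, 2), round(r, 2))  # 0.29 1.0
```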
We have implemented all three metrics. HumSent and AutSent are comparable with human benchmarks, since they provide a binary score, as would a teacher for a student's answer. In contrast, the precision and recall scores of P&R lack such a straightforward comparability. However, word recall from P&R (called AnsWdRecall in Figure 3) closely mimics the scores of HumSent and AutSent. The correlation coefficient for AnsWdRecall to HumSent in our test set is 98%, and from HumSent to AutSent is also 98%. With respect to ease of answer key preparation, P&R and AutSent are clearly superior, since they use the publisher-provided answer key. HumSent requires human annotation for each question. We found this annotation to be of moderate difficulty. Finally, we note that precision, as well as recall, will be useful to evaluate systems that can return clauses or phrases, possibly constructed, rather than whole sentence extracts as answers.

Since most national standardized tests feature a large multiple-choice component, many available benchmarks are multiple-choice exams. Also, although our short-answer metrics do not impose a penalty for incorrect answers, multiple-choice exams, such as the Scholastic Aptitude Tests, do. In real-world applications, it might be important that the system be able to assign a confidence level to its answers. Penalizing incorrect answers would help guide development in that regard. While we were initially concerned that adapting the system to multiple-choice questions would endanger the goal of real-world applicability, we have experimented with minor changes to handle the multiple-choice format. Initial experiments indicate that we can use essentially the same system architecture for both short-answer and multiple-choice tests.

3 System Architecture

The process of taking short-answer reading comprehension tests can be broken down into the following subtasks:

• Extraction of information content of the question.
• Extraction of information content of the document.
• Searching for the information requested in the question against information in the document.

A crucial component of all three of these subtasks is the representation of information in text. Because our goal in designing our system was to explore the difficulty of various reading comprehension exams and to measure baseline performance, we tried to keep this initial implementation as simple as possible.

3.1 Bag-of-Words Approach

Our system represents the information content of a sentence (both question and text sentences) as the set of words in the sentence. The word sets are considered to have no structure or order and contain unique elements. For example, the representation for (1a) is the set in (1b).

1a (Sentence): By giving it 6,457 of his books, Thomas Jefferson helped get it started.
1b (Bag): {6,457 books by get giving helped his it Jefferson of started Thomas}

Extraction of information content from text, both in documents and questions, then consists of tokenizing words and determining sentence boundary punctuation. For English written text, both of these tasks are relatively easy although not trivial - see Palmer and Hearst (1997). The search subtask consists of finding the best match between the word set representing the question and the sets representing sentences in the document. Our system measures the match by the size of the intersection of the two word sets. For example, the question in (2a) would receive an intersection score of 1 because of the mutual set element books.

2a (Question): Who gave books to the new library?
2b (Bag): {books gave library new the to who}

Because match size does not produce a complete ordering on the sentences of the document, we additionally prefer sentences that first match on longer words, and second, occur earlier in the document.

3.2 Normalizations and Extensions of the Word Sets

In this section, we describe extensions to the extraction approach described above; in the next section we discuss the performance benefits of these extensions. The most straightforward extension is to remove function or stop words, such as the, of, a, etc. from the word sets, reasoning that they offer little semantic information and only muddle the signal from the more contentful words. Similarly, one can use stemming to remove inflectional affixes from the words: such normalization might increase the signal from contentful words. For example, the intersection between (1b) and (2b) would include give if inflection were removed from gave and giving. We used a stemmer described by Abney (1997).

A different type of extension is suggested by the fact that who questions are likely to be answered with words that denote people or organizations. Similarly, when and where questions are answered with words denoting temporal and locational words, respectively. By using name taggers to identify person, location, and temporal information, we can add semantic class symbols to the question word sets marking the type of the question and then add corresponding class symbols to the word sets whose sentences contain phrases denoting the proper type of entity. For example, due to the name Thomas Jefferson, the word set in (1b) would be extended by :PERSON, as would the word set (2b) because it is a who question. This would increase the matching score by one. The system makes use of the Alembic automated named entity system (Vilain and Day 1996) for finding named entities.

In a similar vein, we also created a simple common noun classification module using WordNet (Miller 1990). It works by looking up all nouns of the text and adding person or location classes if any of a noun's senses is subsumed by the appropriate WordNet class. We also created a filtering module that ranks sentences higher if they contain the appropriate class identifier, even though they may have fewer matching words, e.g., if the bag representation of a sentence does not contain :PERSON, it is ranked lower as an answer to a who question than sentences which do contain :PERSON. Finally, the system contains an extension which substitutes the referent of personal pronouns for the pronoun in the bag representation. For example, if the system were to choose the sentence He gave books to the library, the answer returned and scored would be Thomas Jefferson gave books to the library, if He were resolved to Thomas Jefferson. The current system uses a very simplistic pronoun resolution system which matches he, him, his, she and her to the nearest prior person named entity.
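To make the matching procedure concrete, here is a minimal Python sketch of bag-of-words sentence selection with the :PERSON class-symbol extension. The fixed name list stands in for the Alembic tagger and is purely an illustrative assumption; the tie-break on longer words is omitted for brevity.

```python
# Minimal sketch of bag-of-words answer-sentence selection. A fixed name
# list stands in for the Alembic named-entity tagger used by Deep Read.
KNOWN_PERSONS = {"thomas jefferson"}  # illustrative stand-in for a tagger

def bag(sentence):
    return {w.strip('?,.!"') for w in sentence.lower().split()}

def with_person_symbol(sentence, words):
    # Mark who-questions and sentences containing a person name with :PERSON.
    if "who" in words or any(p in sentence.lower() for p in KNOWN_PERSONS):
        words.add(":PERSON")
    return words

def best_sentence(question, sentences):
    q = with_person_symbol(question, bag(question))
    scored = [(len(q & with_person_symbol(s, bag(s))), -i, s)
              for i, s in enumerate(sentences)]
    return max(scored)[2]  # largest overlap; earlier sentence breaks ties

story = ["That fire didn't stop book lovers.",
         "By giving it 6,457 of his books, Thomas Jefferson helped get it started."]
print(best_sentence("Who gave books to the new library?", story))
```

On this toy story, the second sentence wins with an overlap of 2 (books and :PERSON), even though no inflected word matches.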
The most straightforward extension is to remove function or stop words, such as the, of, a, etc. from the word sets, reasoning that they offer little semantic information and only muddle the signal from the more contentful words. Similarly, one can use stemming to remove inflectional affixes from the words: such normalization might increase the signal from contentful words. For example, the intersection between (lb) and (2b) would include give if inflection were removed from gave and giving. We used a stemmer described by Abney (1997). A different type of extension is suggested by the fact that who questions are likely to be answered with words that denote people or organizations. Similarly, when and where questions are answered with words denoting temporal and locational words, respectively. By using name taggers to identify person, location, and temporal information, we can add semantic class symbols to the question word sets marking the type of the question and then add corresponding class symbols to the word sets whose sentences contain phrases denoting the proper type of entity. For example, due to the name Thomas Jefferson, the word set in (lb) would be extended by :PERSON, as would the word set (2b) because it is a who question. This would increase the matching score by one. The system makes use of the Alembic automated named entity system (Vilain and Day 1996) for finding named entities. In a similar vein, we also created a simple common noun classification module using WordNet (Miller 1990). It works by looking up all nouns of the text and adding person or location classes if any of a noun's senses is subsumed by the appropriate WordNet class. We also created a filtering module that ranks sentences higher if they contain the appropriate class identifier, even though they may have fewer matching words, e.g., if the bag representation of a sentence does not contain :PERSON, it is ranked lower as an answer to a who question than sentences which do contain :PERSON. Finally, the system contains an extension which substitutes the referent of personal pronouns for the pronoun in the bag representation. For example, if the system were to choose the sentence He gave books to the library, the answer returned and scored would be Thomas Jefferson gave books to the library, if He were resolved to Thomas Jefferson. The current system uses a very simplistic pronoun resolution system which 328 0.5 .......................................................................................................................... 0.45 0.4 0.35 0.3 0.25 0.2 )( Ans Wd Recall / -~-Hurn Sent Ace ]_ --o-.Aut Sent Acc i i i i t = i i i i ~ i ff + d -" " P d " " ~," ." ~" Figure 3: Effect of Linguistic Modules on System Performance matches he, him, his, she and her to the nearest prior person named entity. 4 Experimental Results Our modular architecture and automated scoring metrics have allowed us to explore the effect of various linguistic sources of information on overall system performance. We report here on three sets of findings: the value added from the various linguistic modules, the question- specific results, and an assessment of the difficulty of the reading comprehension task. 4.1 Effectiveness of Linguistic Modules We were able to measure the effect of various linguistic techniques, both singly and in combination with each other, as shown in Figure 3 and Table 1. The individual modules are indicated as follows: Name is the Alembic named tagger described above. NameHum is hand-tagged named entity. 
Stem is Abney's automatic stemming algorithm. Filt is the filtering module. Pro is automatic name and personal pronoun coreference. ProHum is hand- tagged, full reference resolution. Sem is the WordNet-based common noun semantic classification. We computed significance using the non- parametric significance test described by Noreen (1989). The following performance improvements of the AnsWdRecall metric were statistically significant results at a confidence level of 95%: Base vs. NameStem, NameStem vs. FiltNameHumStem, and FiltNameHumStem vs. FiltProHumNameHumStem. The other adjacent performance differences in Figure 3 are suggestive, but not statistically significant. Removing stop words seemed to hurt overall performance slightly--it is not shown here. Stemming, on the other hand, produced a small but fairly consistent improvement. We compared these results to perfect stemming, which made little difference, leading us to conclude that our automated stemming module worked well enough. Name identification provided consistent gains. The Alembic name tagger was developed for newswire text and used here with no modifications. We created hand-tagged named entity data, which allowed us to measure the performance of Alembic: the accuracy (F- measure) was 76.5; see Chinchor and Sundheim (1993) for a description of the standard MUC scoring metric. This also allowed us to simulate perfect tagging, and we were able to determine how much we might gain by improving the name tagging by tuning it to this domain. As the results indicate, there would be little gain from improved name tagging. However, some modules that seemed to have little effect with automatic name tagging provided small gains with perfect name tagging, specifically WordNet common noun semantics and automatic pronoun resolution. 329 When used in combination with the filtering module, these also seemed to help. Similarly, the hand-tagged reference resolution data allowed us to evaluate automatic coreference resolution. The latter was a combination of name coreference, as determined by Alembic, and a heuristic resolution of personal pronouns to the most recent prior named person. Using the MUC coreference scoring algorithm (see Vilain et al. 1995), this had a precision of 77% and a recall of 18%. 3 The use of full, hand- tagged reference resolution caused a substantial increase of the AnsWdRecall metric. This was because the system substitutes the antecedent for all referring expressions, improving the word- based measure. This did not, however, provide an increase in the sentence-based measures. Finally, we plan to do similar human labeling experiments for semantic class identification, to determine the potential effect of this knowledge source. 4.2 Question-Specific Analysis Our results reveal that different question- types behave very differently, as shown in Figure 4. Why questions are by far the hardest (performance around 20%) because they require understanding of rhetorical structure and because answers tend to be whole clauses (often occurring as stand-alone sentences) rather than phrases embedded in a context that matches the query closely. On the other hand, who and when queries benefit from reliable person, name, and time extraction. Who questions seem to benefit most dramatically from perfect name tagging combined with filtering and pronoun resolution. 
What questions show relatively little benefit from the various linguistic techniques, probably because there are many types of what question, most of which are not answered by a person, time or place. Finally, where question results are quite variable, perhaps because location expressions often do not include specific place names. 3 The low recall is attributable to the fact that the heuristic asigned antecedents only for names and pronouns, and completely ignored definite noun phrases and plural pronous. 4.3 Task Difficulty These results indicate that the sample tests are an appropriate and challenging task. The simple techniques described above provide a system that finds the correct answer sentence almost 40% of the time. This is much better than chance, which would yield an average score of about 4-5% for the sentence metrics, given an average document length of 20 sentences. Simple linguistic techniques enhance the baseline system score from the low 30% range to almost 40% in all three metrics. However, capturing the remaining 60% will clearly require more sophisticated syntactic, semantic, and world knowledge sources. 5 Future Directions Our pilot study has shown that reading comprehension is an appropriate task, providing a reasonable starting level: it is tractable but not trivial. Our next steps include: • Application of these techniques to a standardized multiple-choice reading comprehension test. This will require some minor changes in strategy. For example, in preliminary experiments, our system chose the answer that had the highest sentence matching score when composed with the question. This gave us a score of 45% on a small multiple- choice test set. Such tests require us to deal with a wider variety of question types, e.g., What is this story about? This will also provide an opportunity to look at rejection measures, since many tests penalize for random guessing. • Moving from whole sentence retrieval towards answer phrase retrieval. This will allow us to improve answer word precision, which provides a good measure of how much extraneous material we are still returning. • Adding new linguistic knowledge sources. We need to perform further hand annotation experiments to determine the effectiveness of semantic class identification and lexical semantics. • Encoding more semantic information in our representation for both question and document sentences. This information could be derived from syntactic analysis, including noun chunks, verb chunks, and clause groupings. 330 Parameters Ans Wd Acc Hum Sent Acc Hum Right Aut Sent Acc Aut Right Base 0.29 0.28 84 0.28 85 Stem 0.29 0.29 86 0.28 84 Name 0.33 0.31 92 0.31 93 NameStem 0.33 0.32 97 !0.31 92 NameHum 0.33 0.32 96 0.32 95 NameHumStem 0.34 0.33 98 0.31 94 FiltProNameStem 0.34 0.33 98 0.32 95 ProNameStem 0.34 0.33 100 0.32 95 ProNameHumStem 0.35 0.34 102 0.33 98 FiltNameHumStem 0.37 0.35 104 0.34 103 FiltSernNameHumStem 0.37 0.35 104 !0.34 103 FiltProNameHumStem 0.38 0.36 107 0.35 106 FiltProHumNameHumStem 0.42 0.36 109 0.35 105 Table 1: Evaluations (3 Metrics) from Combinations of Linguistic Modules #Q 300 300 '300 300 300 300 300 300 300 300 300 ;300 300 • who - .X- .what --e--where ~&-- when --It--why 0.6 0.5 0.4 0,3 0.2 0.1 * / / / / / / / / / / Figure 4: AnsWdRecall Performance by Query Type 331 Cooperation with educational testing and content providers. We hope to work together with one or more major publishers. 
This will provide the research community with a richer collection of training and test material, while also providing educational testing groups with novel ways of checking and benchmarking their tests.

6 Conclusion

We have argued that taking reading comprehension exams is a useful task for developing and evaluating natural language understanding systems. Reading comprehension uses found material and provides human-comparable evaluations which can be computed automatically with a minimum of human annotation. Crucially, the reading comprehension task is neither too easy nor too hard, as the performance of our pilot system demonstrates. Finally, reading comprehension is a task that is sufficiently close to information extraction applications such as ad hoc question answering, fact verification, situation tracking, and document summarization, that improvements on the reading comprehension evaluations will result in improved systems for these applications.

7 Acknowledgements

We gratefully acknowledge the contribution of Lisa Ferro, who prepared much of the hand-tagged data used in these experiments.

References

Abney, Steven (1997). The SCOL manual version 0.1b. Manuscript.

Chinchor, Nancy and Beth Sundheim (1993). "MUC-5 Evaluation Metrics," Proc. Fifth Message Understanding Conference (MUC-5). Morgan Kaufman Publishers.

Kupiec, Julian (1993). "MURAX: A Robust Linguistic Approach for Question Answering Using an On-Line Encyclopedia," Proceedings of the 16th Intl. ACM SIGIR Conf. on Research and Development in Information Retrieval (SIGIR-93), pp. 181-190, Pittsburgh, PA.

Lehnert, Wendy, Michael Dyer, Peter Johnson, C.J. Yang, and Steve Harley (1983). "BORIS - an Experiment in In-Depth Understanding of Narratives," Artificial Intelligence, vol. 20, no. 1.

Miller, George (1990). "WordNet: an On-line Lexical Database." International Journal of Lexicography.

Noreen, Eric (1989). Computer Intensive Methods for Testing Hypotheses. John Wiley & Sons.

Palmer, David and Marti A. Hearst (1997). "Adaptive Multilingual Sentence Boundary Disambiguation." Computational Linguistics, vol. 23, no. 2, pp. 241-268.

Schubert, Lenhart and Chung Hee Hwang (to appear). "Episodic Logic Meets Little Red Riding Hood: A Comprehensive, Natural Representation for Language Understanding," in L. Iwanska and S.C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language, MIT/AAAI Press.

Vilain, Marc and David Day (1996). "Finite-State Parsing by Rule Sequences." International Conference on Computational Linguistics (COLING-96). Copenhagen, Denmark, August. The International Committee on Computational Linguistics.

Vilain, Marc, John Burger, John Aberdeen, Dennis Connolly, Lynette Hirschman (1995). "A Model-Theoretic Coreference Scoring Scheme." Proc. Sixth Message Understanding Conference (MUC-6). Morgan Kaufman Publishers.
Mixed Language Query Disambiguation

Pascale FUNG, LIU Xiaohu and CHEUNG Chi Shun
HKUST Human Language Technology Center
Department of Electrical and Electronic Engineering
University of Science and Technology, HKUST
Clear Water Bay, Hong Kong
{pascale, lxiaohu, eepercy}@ee.ust.hk

Abstract

We propose a mixed language query disambiguation approach by using co-occurrence information from monolingual data only. A mixed language query consists of words in a primary language and a secondary language. Our method translates the query into monolingual queries in either language. Two novel features for disambiguation, namely contextual word voting and 1-best contextual word, are introduced and compared to a baseline feature, the nearest neighbor. Average query translation accuracy for the two features is 81.37% and 83.72%, compared to the baseline accuracy of 75.50%.

1 Introduction

Online information retrieval is now prevalent because of the ubiquitous World Wide Web. The Web is also a powerful platform for another application - interactive spoken language query systems. Traditionally, such systems were implemented on stand-alone kiosks. Now we can easily use the Web as a platform. Information such as airline schedules, movie reservation, car trading, etc., can all be included in HTML files, to be accessed by a generic spoken interface to the Web browser (Zue, 1995; DiDio, 1997; Raymond, 1997; Fung et al., 1998a). Our team has built a multilingual spoken language interface to the Web, named SALSA (Fung et al., 1998b; Fung et al., 1998a; Ma and Fung, 1998). Users can use speech to surf the net via various links as well as issue search commands such as "Show me the latest movie of Jacky Chan". The system recognizes commands and queries in English, Mandarin and Cantonese, as well as mixed language sentences.

Until recently, most search engines handled keyword-based queries where the user types in a series of strings without syntactic structure. The choice of key words in this case determines the success rate of the search. In many situations, the key words are ambiguous. To resolve ambiguity, query expansion is usually employed to look for additional keywords. We believe that a more useful search engine should allow the user to input natural language sentences. Sentence-based queries are useful because (1) they are more natural to the user and (2) more importantly, they provide more contextual information, which is important for query understanding. To date, the few sentence-based search engines do not seem to take advantage of context information in the query, but merely extract key words from the query sentence (AskJeeves, 1998; ElectricMonk, 1998).

In addition to the need for better query understanding methods for a large variety of domains, it has also become important to handle queries in different languages. Cross-language information retrieval has emerged as an important area as the amount of non-English material is ever increasing (Oard, 1997; Grefenstette, 1998; Ballesteros and Croft, 1998; Picchi and Peters, 1998; Davis, 1998; Hull and Grefenstette, 1996). One of the important tasks of cross-language IR is to translate queries from one language to another. The original query and the translated query are then used to match documents in both the source and target languages. Target language documents are either glossed or translated by other systems. According to (Grefenstette, 1998), the three main problems of query translation are:
1. generating translation candidates,
2. weighting translation candidates, and
3. pruning translation alternatives for document matching.

In cross-language IR, key word disambiguation is even more critical than in monolingual IR (Ballesteros and Croft, 1998), since the wrong translation can lead to a large amount of garbage documents in the target language, in addition to the garbage documents in the source language. Once again, we believe that sentence-based queries provide more information than mere key words in cross-language IR.

In both monolingual IR and cross-language IR, the query sentence or key words are assumed to be consistently in one language only. This makes sense in cases where the user is more likely to be a monolingual person who is looking for information in any language. It is also easier to implement a monolingual search engine. However, we suggest that the typical user of a cross-language IR system is likely to be bilingual to some extent. Most Web users in the world know some English. In fact, since English still constitutes 88% of the current web pages, speakers of another language would like to find English contents as well as contents in their own language. Likewise, English speakers might want to find information in another language. A typical example is a Chinese user looking for information on an American movie; s/he might not know the Chinese name of that movie, so his/her query for this movie is likely to be in mixed language.

Mixed language queries are also prevalent in spoken language. We have observed this to be a common phenomenon among users of our SALSA system. The colloquial Hong Kong language is Cantonese with mixed English words. In general, a mixed language consists of a sentence mostly in the primary language with some words in a secondary language. We are interested in translating such mixed language queries into monolingual queries unambiguously.

In this paper, we propose a mixed language query disambiguation approach which makes use of the co-occurrence information of words between those in the primary language and those in the secondary language. We describe the overall methodology in Section 2. In Sections 2.1-2.3, we present the solutions to the three disambiguation problems. In Section 2.3 we present three different discriminative features for disambiguation, ranging from the baseline model (Section 2.3.1), to the voting scheme (Section 2.3.2), and finally the 1-best model (Section 2.3.3). We describe our evaluation experiments in Section 3, and present the results in Section 4. We then conclude in Section 5.

2 Methodology

Mixed language query translation is halfway between query translation and query disambiguation in that not all words in the query need to be translated.

There are two ways to use the disambiguated mixed language queries. In one scenario, all secondary language words are translated unambiguously into the primary language, and the resulting monolingual query is processed by a general IR system. In another scenario, the primary language words are converted into the secondary language and the query is passed to another IR system in the secondary language. Our method allows for both general and cross-language IR from a mixed language query.

To draw a parallel to the three problems of query translation, we suggest that the three main problems of mixed language disambiguation are:

1. generating translation candidates in the primary language,
2. weighting translation candidates, and
3. pruning translation alternatives for query translation.

Co-occurrence information between neighboring words and words in the same sentence has been used in phrase extraction (Smadja, 1993; Fung and Wu, 1994), phrasal translation (Smadja et al., 1996; Kupiec, 1993; Wu, 1995; Dagan and Church, 1994), target word selection (Liu and Li, 1997; Tanaka and Iwasaki, 1996), domain word translation (Fung and Lo, 1998; Fung, 1998), sense disambiguation (Brown et al., 1991; Dagan et al., 1991; Dagan and Itai, 1994; Gale et al., 1992a; Gale et al., 1992b; Gale et al., 1992c; Schütze, 1992; Gale et al., 1993; Yarowsky, 1995), and even recently for query translation in cross-language IR as well (Ballesteros and Croft, 1998). Co-occurrence statistics are collected from either bilingual parallel and non-parallel corpora (Smadja et al., 1996; Kupiec, 1993; Wu, 1995; Tanaka and Iwasaki, 1996; Fung and Lo, 1998), or monolingual corpora (Smadja, 1993; Fung and Wu, 1994; Liu and Li, 1997; Schütze, 1992; Yarowsky, 1995). As we noted in (Fung and Lo, 1998; Fung, 1998), parallel corpora are rare in most domains. We want to devise a method that uses only monolingual data in the primary language to train co-occurrence information.

2.1 Translation candidate generation

Without loss of generality, we suppose the mixed language sentence consists of the words S = {E1, E2, ..., C, ..., En}, where C is the only secondary language word.¹ Since in our method we want to find the co-occurrence information between all Ei and C from a monolingual corpus, we need to translate the latter into the primary language word Ec. This corresponds to the first problem in query translation - translation candidate generation. We generate translation candidates of C via an online bilingual dictionary. All translations of the secondary language word C, comprising multiple senses, are taken together as a set {Eci}.

¹ In actual experiments, each sentence can contain multiple secondary language words.

2.2 Translation candidate weighting

Problem two in query translation is to weight all translation candidates for C. In our method, the weights are based on co-occurrence information. The hypothesis is that the correct translations of C should co-occur frequently with the contextual words Ei, and incorrect translations of C should co-occur rarely with the contextual words. Obviously, other information such as the syntactic relationship between words or part-of-speech tags could be used as weights too. However, it is difficult to parse and tag a mixed language sentence. The only information we can use to disambiguate C is the co-occurrence information between its translation candidates {Eci} and E1, E2, ..., En.

Mutual information is a good measure of the co-occurrence relationship between two words (Gale and Church, 1993). We first compute the mutual information between any word pair from a monolingual corpus in the primary language,² using the following formula, where E is a word and f(E) is the frequency of word E:

MI(Ei, Ej) = log [ f(Ei, Ej) / (f(Ei) * f(Ej)) ]     (1)

Ei and Ej can be either neighboring words or any two words in the sentence.

² This corpus does not need to be in the same domain as the testing data.
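As a rough illustration of equation (1), the Python sketch below builds an MI table from sentence-internal word pairs in a monolingual corpus. Treating f as raw counts rather than probabilities, and the natural-log base, are simplifying assumptions on our part; the toy corpus is invented.

```python
import math
from collections import Counter
from itertools import combinations

def mi_table(sentences):
    # MI(Ei, Ej) = log( f(Ei, Ej) / (f(Ei) * f(Ej)) ), raw counts for f.
    f_word, f_pair = Counter(), Counter()
    for sent in sentences:
        words = set(sent.lower().split())
        f_word.update(words)
        f_pair.update(combinations(sorted(words), 2))
    return {(a, b): math.log(n / (f_word[a] * f_word[b]))
            for (a, b), n in f_pair.items()}

corpus = ["show me the latest movie of jacky chan",
          "the latest movie is showing now",
          "jacky chan stars in the movie"]
mi = mi_table(corpus)
print(mi[("chan", "jacky")])   # words that always co-occur score higher
print(mi[("movie", "the")])    # than frequent words that merely overlap
```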
2.3 Translation candidate pruning

The last problem in query translation is selecting the target translation. In our approach, we need to choose a particular Ec from {Eci}. We call this pruning process translation disambiguation. We present and compare three unsupervised statistical methods in this paper. The first, baseline method is similar to (Dagan et al., 1991; Dagan and Itai, 1994; Ballesteros and Croft, 1998; Smadja et al., 1996), where we use the nearest neighboring word of the secondary language word C as the feature for disambiguation. In the second method, we choose all contextual words as the disambiguating feature. In the third method, the most discriminative contextual word is selected as the feature.

2.3.1 Baseline: single neighboring word as disambiguating feature

The first disambiguating feature we present here is similar to the statistical feature in (Dagan et al., 1991; Smadja et al., 1996; Dagan and Itai, 1994; Ballesteros and Croft, 1998), namely the co-occurrence with neighboring words. We do not use any syntactic relationship as in (Dagan and Itai, 1994) because such relationships are not available for mixed-language sentences. The assumption here is that the most powerful word for disambiguating a word is the one next to it. Based on mutual information, the primary language target word for C is chosen from the set {Eci}. Suppose the nearest neighboring word for C in S is Ey; we select the target word Ecr such that the mutual information between Ecr and Ey is maximum:

r = argmax_i MI(Eci, Ey)     (2)

Ey is taken to be either the left or the right neighbor of our target word.

This idea is illustrated in Figure 1: MI1, represented by the solid line, is greater than MI2, represented by the dotted line. Ey is the neighboring word for C. Since MI1 is greater than MI2, Ec1 is selected as the translation of C.

Figure 1: The neighboring word as disambiguating feature

2.3.2 Voting: multiple contextual words as disambiguating feature

The baseline method uses only the neighboring word to disambiguate C. Are one or two neighboring words really sufficient for disambiguation?

The intuition for choosing the nearest neighboring word Ey as the disambiguating feature for C is based on the assumption that they are part of a phrase or collocation term, and that there is only one sense per collocation (Dagan and Itai, 1994; Yarowsky, 1993). However, in most cases where C is a single word, there might be some other words which are more useful for disambiguating C. In fact, such long-distance dependencies occur frequently in natural language (Rosenfeld, 1995; Huang et al., 1993). Another reason against using a single neighboring word comes from (Gale and Church, 1994), where it is argued that as many as 100,000 context words might be needed to have high disambiguation accuracy. (Schütze, 1992; Yarowsky, 1995) all use multiple context words as discriminating features. We have also demonstrated in our domain translation task that multiple context words are useful (Fung and Lo, 1998; Fung and McKeown, 1997).

Based on the above arguments, we enlarge the disambiguation window to be the entire sentence instead of only one word to the left or right. We use all the contextual words in the query sentence. Each contextual word "votes" by its mutual information with all translation candidates.

Suppose there are n primary language words in S = E1, E2, ..., C, ..., En. As shown in Figure 2, we compute mutual information scores between all Eci and all Ej, where Eci is one of the translation candidates for C and Ej is one of the n words in S. A mutual information score matrix is shown in Table 1, where MIjci is the mutual information score between contextual word Ej and translation candidate Eci.
        Ec1      Ec2      ...   Ecm
E1      MI1c1    MI1c2    ...   MI1cm
E2      MI2c1    MI2c2    ...   MI2cm
...     ...      ...      ...   ...
Ej      MIjc1    MIjc2    ...   MIjcm
...     ...      ...      ...   ...
En      MInc1    MInc2    ...   MIncm

Table 1: Mutual information between all translation candidates and words in the sentence

For each row j in Table 1, the largest scoring MIjci receives a vote. The rest of the row get zeros. At the end, we sum up all the ones in each column. The column i receiving the highest vote is chosen as the one representing the real translation.

Figure 2: Voting for the best translation

To illustrate this idea, Table 2 shows that candidate 2 is the correct translation for C. There are four candidates of C and four contextual words to disambiguate C.

        Ec1  Ec2  Ec3  Ec4
E1      0    1    0    0
E2      1    0    0    0
E3      0    0    0    1
E4      0    1    0    0

Table 2: Candidate 2 is the correct translation

2.3.3 1-best contextual word as disambiguating feature

In the above voting scheme, a candidate receives either one vote or zero votes from each contextual word equally, no matter how these words are related to C. As an example, in the query "Please show me the latest dianying/movie of Jacky Chan", the and Jacky are considered to be equally important. We believe, however, that if the most powerful word is chosen for disambiguation, we can expect better performance. This is related to the concept of "trigger pairs" in (Rosenfeld, 1995) and Singular Value Decomposition in (Schütze, 1992).

In (Dagan and Itai, 1994), syntactic relationships are used to find the most powerful "trigger word". Since syntactic relationships are unavailable in a mixed language sentence, we have to use other types of information. In this method, we want to choose the best trigger word among all contextual words. Referring again to Table 1, MIjci is the mutual information score between contextual word Ej and translation candidate Eci. We compute the disambiguation contribution ratio for each context word Ej. For each row j in Table 1, the largest MI score MIjci' and the second largest MI score MIjci'' are chosen to yield the contribution for word Ej, which is the ratio between the two scores:

Contribution(Ej) = MIjci' / MIjci''     (3)

If the ratio between MIjci' and MIjci'' is close to one, we reason that Ej is not discriminative enough as a feature for disambiguating C. On the other hand, if the ratio between MIjci' and MIjci'' is noticeably greater than one, we can use Ej as the feature to disambiguate {Eci} with high confidence. We choose the word Ey with maximum contribution as the disambiguating feature, and select the target word Ecr, whose mutual information score with Ey is the highest, as the translation for C:

r = argmax_i MI(Ey, Eci)     (4)

This method is illustrated in Figure 3: since E2 is the contextual word with the highest contribution score, the candidate Eci is chosen such that the mutual information between E2 and Eci is the largest.

Figure 3: The best contextual word as disambiguating feature
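The following Python sketch implements both selection rules over a precomputed MI matrix (rows = contextual words, columns = translation candidates). The tiny matrix is invented for illustration, mirrors the voting pattern of Table 2, and assumes positive MI values so the contribution ratio is well defined.

```python
# Minimal sketch of the voting (2.3.2) and 1-best (2.3.3) selection rules.
# mi[j][i] is the Table 1 matrix: MI between context word Ej and candidate
# Eci. The numbers below are made up for illustration.

def vote_select(mi):
    votes = [0] * len(mi[0])
    for row in mi:
        votes[row.index(max(row))] += 1   # each context word casts one vote
    return votes.index(max(votes))

def one_best_select(mi):
    # Pick the row (context word) whose largest/second-largest MI ratio is
    # highest, then the candidate that word prefers.
    best_row = max(mi, key=lambda r: sorted(r)[-1] / sorted(r)[-2])
    return best_row.index(max(best_row))

mi = [[0.2, 0.9, 0.1, 0.3],   # E1 votes for Ec2
      [1.4, 0.3, 0.2, 0.1],   # E2 votes for Ec1 (and is most discriminative)
      [0.1, 0.2, 0.3, 0.4],   # E3 votes for Ec4
      [0.3, 0.8, 0.2, 0.2]]   # E4 votes for Ec2
print(vote_select(mi))      # 1 -> Ec2, as in the Table 2 illustration
print(one_best_select(mi))  # 0 -> Ec1, since E2's ratio 1.4/0.3 dominates
```

Note that the two rules can disagree, which is exactly the behavior the experiments in Section 4 compare.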
The primary language is English and the secondary language is chosen to be Chi- nese. Some English words in the original sen- tences are selected randomly and translated into Chinese words manually to produce the test- ing data. These axe the mixed language sen- tences. 500 testing sentences are extracted from the ARPA ATIS corpus. The ratio of Chinese words in the sentences varies from 10% to 65%. We carry out three sets of experiments using the three different features we have presented in this paper. In each experiment, the percentage of primary language words in the sentence is incrementally increased at 5% steps, from 35% to 90%. We note the accuracy of unambiguous translation at each step. Note that at the 35% stage, the primary language is in fact Chinese. 4 Evaluation results One advantage of using the artificially gener- ated mixed-language test set is that it becomes very easy to evaluate the performance of the disambiguation/translation algorithm. We just need to compare the translation output with the original ATIS sentences. The experimental results are shown in Fig- ure 4. The horizontal axis represents the per- centage of English words in the testing data and the vertical axis represents the translation ac- curacy. Translation accuracy is the ratio of the number of secondary language (Chinese) words disambiguated correctly over the number of all 337 secondary language (Chinese) words present in the testing sentences. The three different curves represent the accuracies obtained from the base- line feature, the voting model, and the 1-best model. O.85 1 i 0,8 VoOng ~-. ba~ine .e.- m B"" ..u .. .. i i i i i i ~ i a ~ of primary l.a~uiita Words Figure 4: 1-best is the most discriminating fea- ture We can see that both voting contextual words and the 1-best contextual words are more pow- erful discriminant than the baseline neighboring word. The 1-best feature is most effective for disambiguating secondary language words in a mixed-language sentence. 5 Conclusion and Discussion Mixed-language query occurs very often in both spoken and written form, especially in Asia. Such queries are usually in complete sentences instead of concatenated word strings because they are closer to the spoken language and more natural for user. A mixed-language sentence consists of words mostly in a primary language and some in a secondary language. However, even though mixed-languages are in sentence form, they are difficult to parse and tag be- cause those secondary language words introduce an ambiguity factor. To understand a query can mean finding the matched document, in the case of Web search, or finding the corresponding se- mantic classes, in the case of an interactive sys- tem. In order to understand a mixed-language query, we need to translate the secondary lan- guage words into primary language unambigu- ously. In this paper, we present an approach of mixed,language query disambiguation by us- ing co-occurrence information obtained from a monolingual corpus. Two new types of dis- ambiguation features are introduced, namely voting contextual words and 1-best contextual word. These two features are compared to the baseline feature of a single neighboring word. Assuming the primary language is English and the secondary language Chinese, our experi- ments on English-Chinese mixed language show that the average translation accuracy for the baseline is 75.50%, for the voting model is 81.37% and for the 1-best model, 83.72%. The baseline method uses only the neighbor- ing word to disambiguate C. 
5 Conclusion and Discussion

Mixed-language queries occur very often in both spoken and written form, especially in Asia. Such queries are usually in complete sentences instead of concatenated word strings because they are closer to the spoken language and more natural for the user. A mixed-language sentence consists of words mostly in a primary language and some in a secondary language. However, even though mixed-language queries are in sentence form, they are difficult to parse and tag because the secondary language words introduce an ambiguity factor. To understand a query can mean finding the matched document, in the case of Web search, or finding the corresponding semantic classes, in the case of an interactive system. In order to understand a mixed-language query, we need to translate the secondary language words into the primary language unambiguously. In this paper, we present an approach to mixed-language query disambiguation using co-occurrence information obtained from a monolingual corpus. Two new types of disambiguation features are introduced, namely voting contextual words and the 1-best contextual word. These two features are compared to the baseline feature of a single neighboring word. Assuming the primary language is English and the secondary language Chinese, our experiments on English-Chinese mixed language show that the average translation accuracy for the baseline is 75.50%, for the voting model 81.37%, and for the 1-best model 83.72%.

The baseline method uses only the neighboring word to disambiguate C. The assumption is that the neighboring word is the most semantically relevant. This method leaves out an important feature of natural language: long-distance dependency. Experimental results show that it is not sufficient to use only the nearest neighboring word for disambiguation.

The performance of the voting method is better than the baseline because more contextual words are used. The results are consistent with the ideas in (Gale and Church, 1994; Schütze, 1992; Yarowsky, 1995).

In our experiments, it is found that the 1-best contextual word is even better than multiple contextual words. This seemingly counter-intuitive result leads us to believe that choosing the most discriminative single word is even more powerful than using multiple contextual words equally. We believe that this is consistent with the idea of using "trigger pairs" in (Rosenfeld, 1995) and Singular Value Decomposition in (Schütze, 1992).

We can conclude that sometimes long-distance contextual words are more discriminant than immediate neighboring words, and that multiple contextual words can contribute to better disambiguation. Our results support our belief that natural sentence-based queries are less ambiguous than keyword-based queries. Our method using multiple disambiguating contextual words can take advantage of syntactic information even when parsing or tagging is not possible, such as in the case of mixed-language queries.

Other advantages of our approach include: (1) the training is unsupervised and no domain-dependent data is necessary, (2) neither bilingual corpora nor mixed-language corpora are needed for training, and (3) it can generate monolingual queries in both primary and secondary languages, enabling true cross-language IR.

In our future work, we plan to analyze the various "discriminating words" contained in a mixed language or monolingual query to find out which class of words contributes more to the final disambiguation. We also want to test the significance of the co-occurrence information of all contextual words between themselves in the disambiguation task. Finally, we plan to develop a general mixed-language and cross-language understanding framework for both document retrieval and interactive tasks.

References

AskJeeves. 1998. http://www.askjeeves.com.

Lisa Ballesteros and W. Bruce Croft. 1998. Resolving ambiguity for cross-language retrieval. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 64-71, Melbourne, Australia, August.

P. Brown, J. Lai, and R. Mercer. 1991. Aligning sentences in parallel corpora. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics.

Ido Dagan and Kenneth W. Church. 1994. Termight: Identifying and translating technical terminology. In Proceedings of the 4th Conference on Applied Natural Language Processing, pages 34-40, Stuttgart, Germany, October.

Ido Dagan and Alon Itai. 1994. Word sense disambiguation using a second language monolingual corpus. In Computational Linguistics, pages 564-596.

Ido Dagan, Alon Itai, and Ulrike Schwall. 1991. Two languages are more informative than one. In Proceedings of the 29th Annual Conference of the Association for Computational Linguistics, pages 130-137, Berkeley, California.

M. Davis. 1998. Free resources and advanced alignment for cross-language text retrieval. In Proceedings of the 6th Text Retrieval Conference (TREC-6), NIST, Gaithersburg, MD, November.
Laura DiDio. 1997. OS/2 lets users talk back to 'net. page 12.

ElectricMonk. 1998. http://www.electricmonk.com.

Pascale Fung and Yuen Yee Lo. 1998. An IR approach for translating new words from non-parallel, comparable texts. In Proceedings of the 36th Annual Conference of the Association for Computational Linguistics, pages 414-420, Montreal, Canada, August.

Pascale Fung and Kathleen McKeown. 1997. Finding terminology translations from non-parallel corpora. In The 5th Annual Workshop on Very Large Corpora, pages 192-202, Hong Kong, August.

Pascale Fung and Dekai Wu. 1994. Statistical augmentation of a Chinese machine-readable dictionary. In Proceedings of the Second Annual Workshop on Very Large Corpora, pages 69-85, Kyoto, Japan, June.

Pascale Fung, CHEUNG Chi Shuen, LAM Kwok Leung, LIU Wai Kat, and LO Yuen Yee. 1998a. A speech assisted online search agent (SALSA). In ICSLP.

Pascale Fung, CHEUNG Chi Shuen, LAM Kwok Leung, LIU Wai Kat, LO Yuen Yee, and MA Chi Yuen. 1998b. SALSA, a multilingual speech-based web browser. In The First AEARU Web Technology Workshop, November.

Pascale Fung. 1998. A statistical view of bilingual lexicon extraction: from parallel corpora to non-parallel corpora. In Proceedings of the Third Conference of the Association for Machine Translation in the Americas, Pennsylvania, October.

William A. Gale and Kenneth W. Church. 1993. A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(1):75-102.

William A. Gale and Kenneth W. Church. 1994. Discrimination decisions in 100,000 dimensional spaces. Current Issues in Computational Linguistics: In honour of Don Walker, pages 429-550.

W. Gale, K. Church, and D. Yarowsky. 1992a. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Conference of the Association for Computational Linguistics. Association for Computational Linguistics.

W. Gale, K. Church, and D. Yarowsky. 1992b. Using bilingual materials to develop word sense disambiguation methods. In Proceedings of TMI 92.

W. Gale, K. Church, and D. Yarowsky. 1992c. Work on statistical methods for word sense disambiguation. In Proceedings of AAAI 92.

W. Gale, K. Church, and D. Yarowsky. 1993. A method for disambiguating word senses in a large corpus. In Computers and Humanities, volume 26, pages 415-439.

Gregory Grefenstette, editor. 1998. Cross-language Information Retrieval. Kluwer Academic Publishers.

Xuedong Huang, Fileno Alleva, Hsiao-Wuen Hon, Mei-Yuh Hwang, Kai-Fu Lee, and Ronald Rosenfeld. 1993. The SPHINX-II speech recognition system: an overview. Computer, Speech and Language, pages 137-148.

David A. Hull and Gregory Grefenstette. 1996. A dictionary-based approach to multilingual information retrieval. In Proceedings of the 19th International Conference on Research and Development in Information Retrieval, pages 49-57.

Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Conference of the Association for Computational Linguistics, pages 17-22, Columbus, Ohio, June.

Xiaohu Liu and Sheng Li. 1997. Statistic-based target word selection in English-Chinese machine translation. Journal of Harbin Institute of Technology, May.

Chi Yuen Ma and Pascale Fung. 1998. Using English phoneme models for Chinese speech recognition. In International Symposium on Chinese Spoken Language Processing.

D. W. Oard. 1997. Alternative approaches for cross-language text retrieval. In AAAI Symposium on Cross-Language Text and Speech Retrieval. American Association for Artificial Intelligence, March.

Eugenio Picchi and Carol Peters. 1998. Cross-language information retrieval: a system for comparable corpus querying. In Gregory Grefenstette, editor, Cross-language Information Retrieval, pages 81-92. Kluwer Academic Publishers.

Raymond Lau. 1997. WebGalaxy: Beyond point and click - a conversational interface to a browser. In Computer Networks & ISDN Systems, pages 1385-1393.

Ronald Rosenfeld. 1995. A Corpus-Based Approach to Language Learning. Ph.D. thesis, Carnegie Mellon University.

Hinrich Schütze. 1992. Dimensions of meaning. In Proceedings of Supercomputing '92.

Frank Smadja, Kathleen McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 21(4):1-38.

Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177.

Kumiko Tanaka and Hideya Iwasaki. 1996. Extraction of lexical translations from non-aligned corpora. In Proceedings of COLING 96, Copenhagen, Denmark, July.

Dekai Wu. 1995. Grammarless extraction of phrasal translation examples from parallel texts. In Proceedings of TMI 95, Leuven, Belgium, July.

D. Yarowsky. 1993. One sense per collocation. In Proceedings of ARPA Human Language Technology Workshop, Princeton.

D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Conference of the Association for Computational Linguistics, pages 189-196. Association for Computational Linguistics.

Victor Zue. 1995. Spoken language interfaces to computers: Achievements and challenges. In The 33rd Annual Meeting of the Association for Computational Linguistics, Boston, June.
Syntagmatic and Paradigmatic Representations of Term Variation

Christian Jacquemin
LIMSI-CNRS, BP 133, 91403 ORSAY Cedex, FRANCE
email: jacquemin@limsi.fr

Abstract

A two-tier model for the description of morphological, syntactic and semantic variations of multi-word terms is presented. It is applied to term normalization of French and English corpora in the medical and agricultural domains. Five different sources of morphological and semantic knowledge are exploited (MULTEXT, CELEX, AGROVOC, WordNet 1.6, and the Microsoft Word97 thesaurus).

1 Introduction

In the classical approach to text retrieval, terms are assigned to queries and documents. The terms are generated by a process called automatic indexing. Then, given a query, the similarity between the query and the documents is computed and a ranked list of documents is produced as output of the system for information access (Salton and McGill, 1983). The similarity between queries and documents depends on the terms they have in common. The same concept can be formulated in many different ways, known as variants, which should be conflated in order to avoid missing relevant documents. For this purpose, this paper proposes a novel model of term variation that integrates linguistic knowledge and performs accurate term normalization. It relies on previous or ongoing linguistic studies on this topic (Sparck Jones and Tait, 1984; Jacquemin et al., 1997; Hamon et al., 1998). Terms are described in a two-tier framework composed of a paradigmatic level and a syntagmatic level that account for the three linguistic dimensions of term variability (morphology, syntax, and semantics). Term variants are extracted from tagged corpora through FASTR, a unification-based transformational parser described in (Jacquemin et al., 1997); FASTR can be downloaded from www.limsi.fr/Individu/jacquemi/FASTR.

Four experiments are performed on the French and the English languages and a measure of precision is provided for each of them. Two experiments are made on a French corpus [AGRIC] composed of 1.2 x 10^6 words of scientific abstracts in the agricultural domain and two on an English corpus [MEDIC] composed of 1.3 x 10^6 words of scientific abstracts in the medical domain. The two experiments in the French language are [AGRIC] + Word97 and [AGRIC] + AGROVOC. In the former, synonymy links are extracted from the Microsoft Word97 thesaurus; in the latter, semantic classes are extracted from the AGROVOC thesaurus, a thesaurus specialized in the agricultural domain (AGROVOC, 1995). In both experiments, morphological data are produced by a stemming algorithm applied to the MULTEXT lexical database (MULTEXT, 1998). The two experiments on the English language are [MEDIC] + WordNet 1.6 and [MEDIC] + Word97; they correspond to two different sources of semantic knowledge. In both cases, the morphological data are extracted from CELEX (CELEX, 1998).

2 Term Variation: Representation and Exploitation

Terms and variations are represented in two parallel frameworks illustrated by Figure 1. While terms are described by a unique pair composed of a structure (at the syntagmatic level) and a set of lexical items (at the paradigmatic level), a variation is represented by a pair of such pairs: one of them is the source term (or normalized term) and the other one is the target term (or variant).
The syntagmatic description of a term is a context-free rule; it is complemented with lexical information embedded in a feature structure denoted by constraints between paths and values. For instance, the term speed measurement is represented by:

    Syntagm:  { N0 -> N2 N1 }
    Paradigm: { <N1 lemma> = measurement
                <N2 lemma> = speed }                (1)

This term is a noun phrase composed of a head noun N1 and a modifier N2; the lemmas are given by the constraints at the paradigmatic level. This framework is similar to the unification-based representation of context-free grammars of (Shieber, 1992).

[Figure 1: Two-level description of terms and variations. A normalized term and its variant are related by a transformation at the syntagmatic level and by morphological and semantic links between their lexical items at the paradigmatic level.]

At the syntagmatic level, variations are represented by a source and a target structure. At the paradigmatic level, the lexical elements of variations are not instantiated in order to ensure higher generality. Instead, links between lexical elements are provided. They denote morphological and/or semantic relations between lexical items in the source and target structures of the variation. For example, the variation that associates a Noun-Noun term such as speed_N2 measurement_N1 with a verbal form of the head word and a synonym of the argument, such as measuring_V1 maximal_A shortening_N velocity_N'2, is given by:

    Syntagm:  { (N0 -> N2 N1) => (V0 -> V1 (Prep? Det? (A|N|Part)*) N'2) }
    Paradigm: { <N1 root> = <V1 root>
                <N2 sem> = <N'2 sem> }              (2)

If this variation is instantiated with the term given in (1), it recognizes the lexico-syntactic structure

    V1 (Prep? Det? (A|N|Part)*) N'2                 (3)

in which V1 and measurement are morphologically related, and N'2 and speed are semantically related. The target structure is under-specified in order to describe several possible instantiations with a single expression and is therefore called a candidate variation. In this example, a regular expression is used to under-specify the structure (A stands for adjective, N for noun, Prep for preposition, V for verb, Det for determiner, Part for participle, and Adv for adverb); another solution would be to use quasi-trees with extended dependencies (Vijay-Shanker, 1992).
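The pair-of-pairs representation of (1) and (2) can be pictured with simple data structures. The following Python sketch is only illustrative: FASTR's actual encoding is a unification-based formalism, and all names here are our own.

```python
# A sketch of the two-tier representation, assuming simple Python
# structures stand in for the feature-structure formalism.
from dataclasses import dataclass

@dataclass
class Term:                      # pair (1): structure + lexical constraints
    syntagm: str                 # e.g. "N0 -> N2 N1"
    paradigm: dict               # e.g. {"N1.lemma": "measurement", ...}

@dataclass
class Variation:                 # pair of pairs: source pattern + target pattern
    source: str                  # structure the normalized term must match
    target: str                  # under-specified candidate-variation structure
    links: dict                  # paradigmatic links between the two structures

speed_measurement = Term("N0 -> N2 N1",
                         {"N1.lemma": "measurement", "N2.lemma": "speed"})
noun_to_verb = Variation("N0 -> N2 N1",
                         "V0 -> V1 (Prep? Det? (A|N|Part)*) N2'",
                         {"V1.root": "N1.root", "N2'.sem": "N2.sem"})
```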
3 Paradigmatic relations

As illustrated by Figure 2 and Formula (2), there are two types of paradigmatic relations between lemmas involved in the definition of term variations: morphological and semantic relations. The morphological family of a lemma l is denoted by the set F_M(l) and its semantic family by the set F_SL(l) or F_SC(l).

[Figure 2: Paradigmatic links between lemmas.]

Roughly speaking, two words are morphologically related if and only if they share the same root. In the preceding example, to measure and measurement are in the same morphological family because their common root is to measure. Let L be the set of lemmas; morphological roots define a binary relation M from L to L that associates each lemma with its root(s): M ∈ L -> L. M is not a function because compound lemmas have more than one root. The morphological family F_M(l) of a lemma l is the set of lemmas (including l) which share a common root with l:

    ∀l ∈ L, F_M(l) = {l' ∈ L | ∃r ∈ L, (l, r) ∈ M ∧ (l', r) ∈ M} = M^-1(M({l}))    (4)

(P(L) is the power-set of L, the set of its subsets.) There are principally two types of semantic relations: direct links through a binary relation S_L ∈ L -> L, or classes C ∈ P(P(L)). In the case of semantic links, the semantic family F_SL(l) of a lemma l is the set of lemmas (including l) which are linked to l:

    F_SL ∈ L -> P(L)
    ∀l ∈ L, F_SL(l) = {l' ∈ L | (l, l') ∈ S_L} ∪ {l} = S_L({l}) ∪ {l}    (5)

In the case of semantic classes, the semantic family F_SC(l) of a lemma l is the union of all the classes to which it belongs:

    ∀l ∈ L, F_SC(l) = ∪_{(c ∈ C) ∧ (l ∈ c)} c ∪ {l}    (6)

Links and classes are equivalent; the choice of either model depends on the type of available semantic data. In the experiments reported here, direct links are used to represent data extracted from the word processor Microsoft Word97 because they are provided as lists of synonyms associated with each lemma. Conversely, the synsets extracted from WordNet 1.6 (Fellbaum, 1998) are classes of disambiguated lemmas and, therefore, correspond to the second technique. With respect to the definitions of semantic and morphological families given in this section, the candidate variant (3) is such that V1 ∈ F_M(measurement) and N'2 ∈ F_SL(speed) or N'2 ∈ F_SC(speed).

4 Morphological and Semantic Families

In the experiments on the English corpora, the CELEX database is used to calculate morphological families. As for semantic families, either WordNet 1.6 or the thesaurus of Microsoft Word97 is used.

Morphological Links from CELEX

In the CELEX morphological database (CELEX, 1998), each lemma is associated with a morphological structure that contains one or more root lemmas. These roots are used to calculate morphological families according to Formula (4). For example, the morphological family F_M(measurement_N) of the lemmas with measure_V as root word is {commensurable_A, commensurably_Adv, countermeasure_N, immeasurable_A, immeasurably_Adv, incommensurable_A, measurable_A, measurably_Adv, measure_N, measureless_A, measurement_N, mensurable_A, tape-measure_N, yard-measure_N, measure_V}.

Semantic Classes from WordNet

Two sources of semantic knowledge are used for the English language: the WordNet 1.6 thesaurus and the thesaurus of the word processor Microsoft Word97. In the WordNet thesaurus, disambiguated words are grouped into sets of synonyms (called synsets) that can be used for a class-based approach to semantic relations. For example, each of the five disambiguated meanings of the polysemous noun speed belongs to a different synset. In our approach, words are not disambiguated and, therefore, the semantic family of speed is calculated as the union of the synsets in which one of its senses is included. Through Formula (6), the semantic family of speed based on WordNet is: F_SC(speed_N) = {speed_N, speeding_N, hurrying_N, hastening_N, swiftness_N, fastness_N, velocity_N, amphetamine_N}.
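The family computations of Formulas (4) and (6) amount to an inverse image under the root relation and a union of classes. The sketch below uses toy stand-ins for the CELEX root relation and the WordNet synsets; the real resources are large lexical databases, not Python literals.

```python
# A sketch of Formulas (4) and (6) over toy resources.

roots = {  # lemma -> set of root lemmas (relation M; compounds may have several)
    "measure": {"measure"}, "measurement": {"measure"},
    "measurable": {"measure"}, "countermeasure": {"counter", "measure"},
}
synsets = [  # each synset is a class of (here undisambiguated) lemmas
    {"speed", "velocity", "swiftness"},
    {"speed", "amphetamine"},
]

def morphological_family(lemma):
    """F_M(l) = M^-1(M({l})): all lemmas sharing a root with l."""
    return {l for l, rs in roots.items() if rs & roots.get(lemma, set())} | {lemma}

def semantic_family_classes(lemma):
    """F_SC(l): union of the classes containing l (Formula (6))."""
    family = {lemma}
    for c in synsets:
        if lemma in c:
            family |= c
    return family

print(morphological_family("measurement"))   # includes measure, measurable, ...
print(semantic_family_classes("speed"))      # includes velocity, amphetamine, ...
```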
Semantic Links from Microsoft Word97

For assisting document edition, the word processor Microsoft Word97 has a command that returns the synonyms of a selected word. We have used this facility to build lists of synonyms. For example, F_SL(speed_N) = {speed_N, swiftness_N, velocity_N, quickness_N, rapidity_N, acceleration_N, alacrity_N, celerity_N} (Formula (5)). Eight other synonyms of the word speed are provided by Word97, but they are not included in this semantic family because they are not categorized as nouns in CELEX.

5 Variations

The linguistic transformations for the English language presented in this section are somewhat simplified for the sake of conciseness. First, we focus on binary terms, which represent 91.3% of the occurrences of multi-word terms in the English corpus [MEDIC]. Second, simplifications in the combinations of types of variations are motivated by corpus explorations in order to focus on the most productive families of variations.

The 3 Dimensions of Linguistic Variations

There are as many types of morphological relations as pairs of syntactic categories of content words. Since the syntactic categories of content words are noun (N), verb (V), adjective (A), and adverb (Adv), there are potentially sixteen different pairs of morphological links. (Associations of identical categories must be taken into consideration. For example, Noun-Noun associations correspond to morphological links between substantive nouns such as agent/process: promoter/promotion.) Morphological relations are further divided into simple relations if they associate two words in the same position and crossed relations if they associate a head word and an argument. Combining categories and positions, there are, in all, 64 different types of morphological relations.

In (Hamon et al., 1998), three types of semantic relations are studied: a link between the two head words, a link between the two arguments, or two parallel links between heads and arguments. These authors report that double links are rare and that their quality is low. They represent only 5% of the semantic variations on a French corpus and they are extracted with a precision of only 9%. We will therefore focus on single semantic links. Since we are only concerned with synonyms, only two types of semantic links are studied: synonymous heads or synonymous arguments.

The last dimension of term variability is the structural transformation at the syntagmatic level. The source structure of the variation must match a term structure. There are basically two structures of binary terms: X1 N2 compounds, in which X1 is a noun, an adjective or a participle, and N1 Prep N2 terms. According to (Jacquemin et al., 1997), there are three types of syntactic variations in French: coordinations (Coor), insertions of modifiers (Modif), and compounding/decompounding (Comp). Each of these syntactic variations is further subdivided into finer categories.

Multi-dimensional Linguistic Variations

The overall picture of term variations is obtained by combining the 64 types of morphological relations, the two types of semantic relations and the three types of syntactic variations (and their sub-types). There are different constraints on these combinations that limit the number of possible variations:

1. Morphological and semantic links must operate on different words. For example, if the head word is transformed by a morphological link, the only word available for a semantic link is the argument word.

2. The target syntactic structure must be compatible with the morphological transformations. For example, if a noun is transformed into a verb, the target structure must be a verb phrase.

These two constraints influence the way in which a variation can be defined by combining different types of elementary modifications.
Firstly, lexical relations are defined at the paradigmatic level: morphological links, semantic links or identical words. Then a syntactic structure that is compatible with the categories of the target words is chosen. The list of variations used for binary compound terms in English is given in Table 1. It has been experimentally refined through a progressive corpus-based tuning. The Synt column gives the target syntactic structure. The Morph column describes the morphological link: a source and a target syntactic category and the syntactic positions of the source and target lemmas. The Sem column indicates whether the variation involves a semantic link and the position of the lemmas concerned by the link (both lemmas must have an identical position). The Pattern column gives the target syntactic structure as a function of the source structure, which is either X1 N2, A1 N2, or N1 N2 (punctuations are noted Pu and coordinating conjunctions CC). For example, Variation #42 transforms an Adjective-Noun term A1 N2 into

    N1 ((CC Det?)? Prep Det? (A|N|Part)^0-3) N'2

where N1 is a noun in the morphological family of A1 (noted F_M(A1)_N) and N'2 is semantically related to N2 (noted F_S(N2)). This variation recognizes malignancy in orbital tumours as a variant of malignant tumor because malignancy and malignant are morphologically related, tumour and tumor are semantically related, and malignancy_N in_Prep orbital_A tumours_N matches the target pattern (sketched in the code below). Variation #56 is a more elaborated version of variation (2) given in Section 2.

Sample Syntactico-semantic Variants from [MEDIC]

The first 36 variations in Table 1 do not contain any morphological link. They are built as follows. Firstly, the different structures of noun phrases are used as target structures. Twelve structures are proposed: head coordination (#1), argument coordination (#4), enumeration with conjunction (#7), enumeration without conjunction (#10), etc. Then each transformation is enriched with additional semantic links between the head words or between the argument words. Semantic links between argument words are found in variations #(3n + 2), 0 <= n <= 11, and between head words in variations #(3n), 1 <= n <= 12. (Due to the lack of space, only variations #2 and #3, constructed on top of variation #1, are shown in Table 1.) Sample variants from [MEDIC] for the first 36 variations are given in Table 2. Some variations have not matched any variant in the whole corpus.

Sample Morpho-syntactico-semantic Variants

Morpho-syntactico-semantic variations are numbered #37 to #62 in Table 1. Only 10 of the 64 possible morphological associations are found in the list of morphological links: Noun to Adjective on arguments (#37), Adjective to Noun on arguments (#39), etc. Each of these variations is doubled by adding a semantic link between the words that are not morphologically related. For example, variation #40 is deduced from variation #39 by adding a semantic link between the head words. Sample variants are given in Table 3.
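A rough Python rendering of how Variation #42 can be instantiated and matched is given below. The regular-expression encoding and the toy families are simplifying assumptions on our part, not FASTR's implementation.

```python
# A sketch of instantiating Variation #42 for the term "malignant tumor",
# assuming a lemmatized, part-of-speech tagged input.
import re

def match_variation_42(tagged, a1, n2, morph_family, sem_family):
    """tagged: list of (lemma, pos). Target pattern:
    N1 ((CC Det?)? Prep Det? (A|N|Part){0,3}) N2'
    with N1 in F_M(a1) and N2' in F_S(n2)."""
    pos = " ".join(p for _, p in tagged)
    pattern = r"\bN( CC( Det)?)? Prep( Det)?( (?:A|N|Part)){0,3} N\b"
    for m in re.finditer(pattern, pos):
        first = pos[:m.start()].count(" ")        # token index of N1
        last = first + m.group(0).count(" ")      # token index of N2'
        if tagged[first][0] in morph_family(a1) and tagged[last][0] in sem_family(n2):
            return tagged[first:last + 1]
    return None

fm = lambda l: {"malignant", "malignancy"}        # toy F_M(malignant)
fs = lambda l: {"tumor", "tumour"}                # toy F_S(tumor)
sent = [("malignancy", "N"), ("in", "Prep"), ("orbital", "A"), ("tumour", "N")]
print(match_variation_42(sent, "malignant", "tumor", fm, fs))
```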
Pattern 1 Coot -- 2 Coor -- Arg 3 Coor -- Head 4 Coor -- 7 Coor -- 10 Coor -- 13 Coor -- 16 Modif -- 19 Modif -- 22 Modif -- 25 Modif -- 28 Modif -- 31 Perm -- 34 Perm -- 37 Modif N--+A (Arg) -- 38 Modif N-+A (Arg) Head 39 Modif A-+N (Arg) -- 40 Modif A-+N (Arg) Head 41 Perm A--+N (Arg) -- 42 Perm A--+N (Arg) Head 43 Perm A--~N (Arg) -- 44 Perm A--4N (Arg) Head 45 Modif A-4Adv (Arg) -- 46 Modif A-+Adv (Arg) Head 47 Modif A-~A (Arg) -- 48 Modif A-~A (Arg) Head 49 Modif N-4N (Head) -- 50 Modif N-~N (Head) Arg 51 Modif N-+N (Arg) -- 52 Modif N~N (Arg) Head 53 Perm N-4N (Head) -- 54 Perm N-~N (Head) Arg 55 VP N--~V (Head) -- 56 VP N~V (Head) Arg 57 VP N--~V (Head) -- 58 VP N--~V (Head) Arg 59 NP N--cV (Head) -- 60 NP N-~V (Head) Arg 61 NP V--oN (Arg) -- 62 NP V--~N (Arg) Head Xl[sin] ((AINIPart) °-3 N Pu[','] ? CC) N2 Fs(X1)[sin] ((AINIPart) °-3 N Pu[','] ? CC) N2 Xl[sin] ((AINIPart) °-3 N Pu[','] ? CC) Fs(N2) X~[sin] (CC (AIN]Part) °-3) N2 X1 (Pu (A]NIPaxt) Pu ? CC (AINIPart)) N2 Xl[sin] (Pu (AINIPart) Pu (AINIPart) Pu ? CC (A[NIPart)) N~ Xl[sin] ((AINIPaxt) °-3 N Pu[','] CC) N2 X1 [sin] ((AIN]Part) °-3) N2 Xl[sin] (N Prep Det ? A T) N2 Xl[sin] (Pu[')'] (AIN]Part) ?) N2 X~[sin] (Pu['('] CC ? (AINIPaxt) ~-2 Pu[')']) N2 X,[sin] (Pu[','] (AINIPart)) N2 N: (V['be']lPu['(']) X1 N~ (V ? Prep Det ? (AIN]Paxt) °-3 ((N) CC Det?) ?) N1 FM(N1)A ((A]NIPart) °-3) N2 FM(Nz)A ((A[N]Paxt) °-3) Fs(N2) FM(A1)N ((AINIPart) °-3) N2 FM(Az)r~ ((AINIPart) °-3) Fs(N~) FM(At)N ((CC Det?) ? Prep Det ? (AINIPart) °-3) N2 FM(A1)N ((CC Det?) ? Prep Det ? (AINIPart) °-3) Fs(N2) N2 ((Prep Det?) ? (AIN]Paxt) °-3) FM(A1)N Fs(N2) ((Prep Det?) ? (AINIPart) °-3) FM(A1)N FM(A1)Adv ((AINIPart) °-a) N~ FM(A1)Adv ((AINIPart) °-3) Fs(N2) FM(A1)A ((AINIPart) °-3) N2 FM(A1)A ((AINIPart) °-a) Fs(N2) X1 ((AINIPart) °-3) FM(N2)N Fs(X1) ((AINIPaxt) °-a) FM(N2)N FM(N1)N ((AINIPart) °-a) N2 FM(N1)N ((AIN]Part) °-3) Fs(N2) FM(N2)N (Prep (AINIPart) °-3) N1 FM(N2)N (Prep (AINIPart) °-3) Fs(N1) FM(N2)v (Adv ? Prep ? (Det (N) ? Prep) ? Det ? (AINIPaxt) °-a) N1 FM(N2)v (Adv ? Prep ? (Det (N) ? Prep) ? Det ? (AINIPart) °-3) Fs(Nt) Nt ((N) ? V['be'] 7) FM(N2)v Fs(N1) ((N) ? V['be'] 7) FM(N~)v As ((AIN]Part) °-~ ((N) Prep) ?) FM(N~)v Fs(At) ((AIN[Part) °-2 ((N) Prep) ?) FM(N2)v FM(V1)N ((AINIPart) °-3) N2 FM (Vt)N ((AINIPart)°-3)Fs (N~) 6 Evaluation We provide two evaluations of term variant confla- tion. First, we calculate precision rates through a manual scanning of the variants. Secondly, we eval- uate the numbers of variations extracted through the four experiments. Precision Because of the large volumes of data, only experi- ments on the French corpus are evaluated. [AGRIC] + AGROVOC produces 2,739 variations and 2,485 of them are selected as correct. Since the number of synonym links proposed by Word97 is higher, the number of variants produced by [AGRIC] + Word97 is higher: 3,860. 3,110 of them are accepted after human inspection. The two experiments produce the same set of non- semantic variants (syntactic and morpho-syntactic variants). Associated values of precision are re- ported in Tables 4 and 5. The semantic variations are divided into two subsets: "pure" semantic vari- ations and semantic variations involving a syntactic transformation and/or a morphological link. Their precisions are given in Tables 6 and 7. As far as precision is concerned, these tables show that variations are divided into two levels of qual- ity. 
On the one hand, syntactic, morpho-syntactic and pure semantic variations are extracted with a high level of precision (above 78%, see the "Total" values in Tables 4 to 6).

Table 2: Sample variants from [MEDIC] using the variations from Table 1 (#1 to #36).

#   Term                        Variant
1   cell differentiation        cell growth and differentiation
2   primary response            basal secretory activity and response
3   pressure decline            pressure rise and fall
4   adipose tissue              adipose or fibroadipose tissue
5   extensive resection         wide or radical resection
6   clinical test               clinical and histologic examinations
7   adipic acid                 adipic, suberic and sebacic acids
8   morphological change        morphologic, ultrastructural and immunologic changes
9   clinical test               clinical, radiographic, and arthroscopic examination
10  electrical property         electrical, mechanical, thermal and spectroscopic properties
12  hypothesis test             hypothesis, comparability, randomized and non-randomized trials
16  acidic protein              acidic epidermal protein
17  absorbed dose               ingested human doses
18  cylindrical shape           cylindrical fiberglass cast
19  assisted ventilation        assisted modes of mechanical ventilation
20  genetic disease             hereditary transmission of the disease
21  early pregnancy             early stage of gestation
22  intertrochanteric fracture  intertrochanteric) femoral fractures
25  arteriovenous fistula       arteriovenous (AV) fistulas
27  pressure measurement        pressure (SBP) measure
28  identification test         identification, sensory tests
29  electrical stimulus         electric, acoustic stimuli
31  combined treatment          treatments were combined
32  genetic disease             disease is familial
33  increased dose              dosage was increased
34  acrylonitrile copolymer     copolymer of acrylonitrile
35  development area            areas of growth
36  cell death                  destruction of the virus-infected cell

Table 3: Sample variants from [MEDIC] using the variations from Table 1 (#37 to #62).

#   Term                        Variant
37  cell component              cellular component
38  work place                  workable space
39  embryonic development       embryo development
40  angular measurement         angles measure
41  deficient diet              deficiency in the diet
42  malignant tumor             malignancy in orbital tumours
43  cerebral cortex             cortex of the cerebrum
44  surgical advancement        advance in middle ear surgery
45  inappropriate secretion     inappropriately high TSH secretion
46  genetic variant             genetically determined variance
47  fatty meal                  fat meals
48  optical system              optic Nd-YAG laser unit
49  drug addiction              drug addicts
50  simultaneous measurement    concurrent measures
51  saline solution             salt solution
52  flow limit                  airflow limitation
53  bile reflux                 flux of bile
55  measurement technique       measuring technique
57  age estimation              estimating gestational age
58  density measurement         measured COHb concentrations
59  blood coagulation           blood coagulated
60  concentration measurement   density was measured
61  combined treatment          combination treatment

Table 4: Precision of syntactic variant extraction ([AGRIC] corpus).

Coor     Modif    Comp     Total
97.2%    88.7%    98.0%    95.7%

Table 5: Precision of morpho-syntactic variant extraction ([AGRIC] corpus).

A to N   N to A   N to N   N to V   Total
68.5%    69.6%    92.1%    75.3%    84.6%
For each experi- ment and for each type of variation, three values are reported: the number of variants v of this type and two percentages indicating the ratio of these vari- ants. The first percentage is ~ in which V is the total number of variants produced by this experi- v in which T ment. The second percentage is is the number of (non-variant) term occurrences ex- tracted by this experiment. Word97 AGROVOC Coor + sem 44.8% 62.6% Modif Jr sem 55.6% 87.5% A to N -1- sem 44.9% 0.0% N to A + sere 21.3% 0.0% N to N d- sem 0.0% 60.0% N to V d- sere 24.2% 44.4% Total 29.4% 55.0% combination of semantic links with syntax or with morphology results in poor precision (55% precision in average with the AGROVOC semantic links and 29.4% precision with the Word97 links, see line "To- tal" in Table 7). The lower precision of hybrid variations is due to a cumulative effect of semantic shift through com- bined variations. For instance, former un rdseau continu (build a continuous network) is incorrectly extracted as a variant of formation permanente (con- tinuing education) through a Noun-to-Verb varia- tion with a semantic link between argument words. The verb former and the associated deverbal noun formation are two polysemous words. In formation permanente, the meaning is related to a human ac- tivity (to train) while, in former un rdseau continu, the meaning is related to a physical construction (to build). Despite the relatively poor precision of hybrid variations, the average precision of term conflation is high because hybrid variations only represent a small fraction of term variations (5.4% and 0.9%, see lines '% sem" in Table 8 below). The average precision on [AGRIC] + Word97 is 79.8% and the average precision on [AGRIC] + AGROVOC is 91.1%. The exploitation of semantic links extracted from WordNet in term variant extraction does not suffer from the problem of ambiguity pointed out for query expansion in (Voorhees, 1998). The robustness to polysemy is due to the fact that we are dealing with multiword terms that build restricted linguistic con- The last line of Table 8 shows that variants rep- resent a significant proportion of term occurrences (from 27.3% to 37.3%). The distribution of the different types of variants depends the semantic database and on the language under study. Word- Net 1.6 is a productive source of knowledge for the extraction of semantic variants: In the experiment [MEDIC] + WordNet, semantic variants represent 58.6% of the variants, while they only represent 4.9% of the variants in the [AGRIC] + AGROVOC exper- iment. These values are reported in the line "Tot. Sem" of Table 8. Such results confirm the relevance of non-specialized semantic links in the extraction of specialized semantic variants (Hamon et al., 1998). 7 Conclusion The model proposed in this study offers a simple and generic framework for the expression of com- plex term variations. The evaluation proposed at the end of this paper shows that term variations are extracted with an excellent precision for the three types of elementary variations: syntactic, morpho- syntactic and semantic variations. The best perfor- mance is obtained with WordNet as source of seman- tic knowledge. Ongoing work on German, Japanese and Spanish shows that such a transformational and paradigmatic description of term variability applies to other languages than French and English reported in this study. Acknowledgement We would like to thank Jean Royaut@ and Xavier Polanco (INIST-CNRS) for their helpful collabora- tion. 
We are also grateful to B6atrice Daille (IRIN) for running her termer ACABIT on the data and to Olivier Ferret (LIMSI) for the Word97 macro- function used to extract the thesaurus. References AGROVOC. 1995. Thdsaurus Agricole Multi- lingue. Organisation de Nations Unies pour l'Alimentation et l'Agriculture, Roma. 347 Table 8: Numbers of term variants. [AGRIC] [AGRIC] [MEDIC] [MEDIC] + Word97 + AGROVOC + WordNet + Word97 v v v v v v v v V ~" VTT V V V'~T V V V~T V V V'~T Terms (T) Coor Modif Comp Perm Tot. Synt AtoA A to Adv AtoN NtoA NtoN NtoV VtoN Tot. Mor Sem Arg Sem Head Coor + sem Modif + sere Perm + sere A to A + sem A to Adv + s. A to N + sere N to A + sem N to N + sem N to V + sere N to V + sere Tot. Sem Variants (V) 5325 x 63.1% 173 5.6% 2.1% 346 11.1% 4.1% 1045 33.6% 12.4% × X X 1564 50.3% 18.5% 5325 x 68.2% 173 7.0% 2.2% 346 14.0% 4.4% 1045 42.1% 13.4% × X X 1564 62.9% 20.0% 25561 x 62.7% 531 3.5% 1.3% 1985 13,1% 4.9% X X X 1146 7,5% 2.8% 3662 24.1% 9.0% 25561 x 72.7% 531 5.5% 1.5% 1985 20.7% 5.6% × X X 1146 11.9% 3.3% 3662 38.1% 10.4% 17 0.5% 0.2% × × X 89 2.9% 1.1% 78 2.5% 0.9% 545 17.5% 6.5% 70 2.2% 0.8% × X X 17 0.7% 0.2% X × X 89 3.6% 1.1% 78 3.1% 1.0% 545 21.9% 7.0% 70 2.8% 0.9% )< × × 191 1.3% 0.5% 35 0.2% 0.1% 640 4.2% 1.6% 102 0.7% 0.3% 416 2.7% 1.0% 1230 8.1% 3.0% 21 0.1% 0.1% 191 2.0% 0.5% 35 0.3% 0.1% 640 6.7% 1.8% 102 1.1% 0.3% 416 4.3% 1.2% 1230 12.8% 3.5% 21 0.2% 0.1% 2635 27.4% 7.5% 799 25.7% 9.5% 180 5.8% 2.1% 397 12.8% 4.7% 30 1.0% 0.4% 100 3.1% 1.2% X X × 0 0.0% O.0% 0 0.0% 0.0% 22 0.7% 0.3% 10 0.3% 0.1% 0 O.0% 0.O% 8 0.3% 0.1% )< X × 747 24.0% 8.9% 3110 X 36.9% 799 32.2% 10.2% 16 0.6% 0.2% 84 3.4% 1.1% 5 0.2% 0.1% 7 0.3% 0.1% X X × 0 0.0% 0.0% 0 0.0% 0.0% 0 O.O% O.0% 0 0.0% O.O% 6 0.2% 0.1% 4 0.2% 0.1% × X × 122 4.9% 1.6% 2485 x 31.8% 2635 17.3% 6.5% 912 6.0% 2.2% 2555 16.8% 6.3% 183 1.2% 0.4% 3467 22.8% 8.5% 788 5.2% 1.9% 82 0.5% 0.2% 22 0.1% 0.1% 256 1.7% 0.6% 72 0.5% 0.2% 102 0.7% 0.3% 454 3.0% 1.1% 11 0.1% 0.0% 8904 58.6% 21.8% 15201 X 37.3% 629 6.6% 1.8% 698 7.3% 2.0% 102 1.1% 0.3% 1067 11.1% 3.0% 369 3.8% 1.0% 42 0.4% 0.1% 8 0.1% 0.0% 118 1.2% 0.3% 28 0.3% 0.1% 58 0.6% 0.2% 185 1.9% 0.5% 2 0.0% 0.0% 3306 34.4% %9.4 9603 x 27.3% CELEX. 1998. www. talc. upenn, edu/ readme_fi tes/ce fez. teatime, htmt. Consor- tium for Lexical Resources, UPenn. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cam- bridge, MA. Thierry Hamon, Adeline Nazarenko, and Cdcile Gros. 1998. A step towards the detection of se- mantic variants of terms in technical documents. In Proceedings, COLING-A CL'98, pages 498-504, Montreal. Christian Jacquemin, Judith L. Klavans, and Eve- lyne Tzoukermann. 1997. Expansion of multi- word terms for indexing and retrieval using mor- phology and syntax. In ACL - EACL'97, pages 24-31, Madrid. MULTEXT. 1998. www..~p t. univ-ai~, fv/ p~'ojects/muttezt/. Laboratoire Parole et Langage, Aix-en-Provence. Gerard Salton and Michael J. McGill. 1983. In- troduction to Modern Information Retrieval. Mc- Graw Hill, New York, NY. Stuart N. Shieber. 1992. Constraint-Based For- malisms. A Bradford Book. MIT Press, Cam- bridge, MA. Karen Sparck Jones and John I. Tait. 1984. Auto- matic search term variant generation. Journal of Documentation, 40(1):50-66. K. Vijay-Shanker. 1992. Using descriptions of trees in a Tree Adjoining Grammar. Computational Linguistics, 18(4):481-518, December. Ellen M. Voorhees. 1998. Using wordnet for text retrieval. 
In Christiane Fellbaum, editor, Word- Net: An Electronic Lexical Database, pages 285- 303. MIT Press, Cambridge, MA. 348
Less is more: Eliminating index terms from subordinate clauses

Simon H. Corston-Oliver and William B. Dolan
Microsoft Research, One Microsoft Way, Redmond WA 98052
{simonco, billdol}@microsoft.com

Abstract

We perform a linguistic analysis of documents during indexing for information retrieval. By eliminating index terms that occur only in subordinate clauses, index size is reduced by approximately 30% without adversely affecting precision or recall. These results hold for two corpora: a sample of the world wide web and an electronic encyclopedia.

1 Introduction

Efforts to exploit natural language processing (NLP) to aid information retrieval (IR) have generally involved augmenting a standard index of lexical terms with more complex terms that reflect aspects of the linguistic structure of the indexed text (Fagan 1988, Katz 1997, Arampatzis et al. 1998, Strzalkowski et al. 1998, inter alia). This paper shows that NLP can benefit information retrieval in a very different way: rather than increasing the size and complexity of an IR index, linguistic information can make it possible to store less information in the index. In particular, we demonstrate that robust NLP technology makes it possible to omit substantial portions of a text from the index without dramatically affecting precision or recall.

This research is motivated by insights from Rhetorical Structure Theory (RST) (Mann & Thompson 1986, 1988). An RST analysis is a dependency analysis of the structure of a text, whose leaf nodes are the propositions encoded in clauses. In this structural analysis, some propositions in the text, called "nuclei," are more centrally important in realizing the writer's communicative goals, while other propositions, called "satellites," are less central in realizing those goals, and instead provide additional information about the nuclei in a manner consistent with the discourse relation between the nucleus and the satellite. This asymmetry has an analogue in sentence structure: main clauses tend to represent nuclei, while subordinate clauses tend to represent satellites (Matthiessen and Thompson 1988, Corston-Oliver 1998).

From the perspective of discourse analysis, the task of information retrieval can be viewed as attempting to identify the "aboutness," or global topicality, of a document in order to determine the relevance of the document as a response to a user's query. Given an RST analysis of a document, we would expect that for the purposes of predicting document relevance, information that occurs in nucleic propositions ought to be more useful than information that occurs in satellite propositions. To test this expectation, we experimented with eliminating from an IR index those terms that occurred in certain kinds of subordinate clauses.

2 System description

At the core of the Microsoft English Grammar (MEG) is a broad-coverage parser that produces conventional phrase structure analyses augmented with grammatical relations; this parser is the basis for the grammar checker in Microsoft Word (Heidorn 1999). Syntactic analyses undergo further processing in order to derive logical forms (LFs), which are graph structures that describe labeled dependencies among content words in the original input. LFs normalize certain syntactic alternations (e.g. active/passive) and resolve both intrasentential anaphora and long-distance dependencies.

Over the past two years we have been exploring the use of MEG LFs as a means of improving IR precision.
This work, which is embodied in a natural language query feature in the Microsoft Encarta 99 encyclopedia, augments a traditional keyword document index with a second index that contains linguistically informed terms. Two types of terms are stored in this linguistic index:

1. LF triples. These are subgraphs extracted from the LF. Each triple has the form word1-relation-word2, describing a dependency relation between two content words. For example, for the sentence Abraham Lincoln, the president, was assassinated by John Wilkes Booth, we extract the following LF triples:

    assassinate--LSubj--John_Wilkes_Booth
    assassinate--LObj--Abraham_Lincoln
    Abraham_Lincoln--Equiv--president

(LSubj denotes a logical subject, LObj a logical object and Equiv an equivalence relation.)

2. Subject terms. These are terms that indicate which words served as the grammatical head of a surface syntactic subject in the document, for example:

    Subject: Abraham_Lincoln

This linguistic index is used to postfilter the output of a conventional statistical search algorithm. An input natural language query is first submitted to the statistical search algorithm as a set of content words, resulting in a ranked set of documents. This ranked set is then re-ranked by attempting to find overlap between the set of linguistic terms stored for each of these documents and corresponding linguistic terms determined by processing the query in MEG. Documents that contain linguistic matches are heuristically ranked according to the nature of the match. Documents that fail to match do not receive a rank, and are typically not displayed to the user. The process of building a secondary linguistic index and matching terms from the query is referred to as natural language matching (NLM) in the discussion below. NLM has been used to filter documents retrieved by several different search technologies operating on different genres of text.
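A minimal sketch of the postfiltering step follows. Since the system's matching heuristics are not described in detail, a simple overlap count stands in for them here, and all names are our own illustrative choices.

```python
# A sketch of the NLM postfilter, assuming the linguistic index maps each
# document id to its set of LF triples and subject terms.

def nlm_rerank(ranked_doc_ids, linguistic_index, query_terms):
    """Re-rank a statistically retrieved list by linguistic-term overlap;
    documents with no match receive no rank and are dropped."""
    scored = []
    for doc_id in ranked_doc_ids:
        overlap = linguistic_index.get(doc_id, set()) & query_terms
        if overlap:                      # unmatched documents are filtered out
            scored.append((len(overlap), doc_id))
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored]

index = {"doc1": {("assassinate", "LObj", "Abraham_Lincoln"),
                  ("Subject", "Abraham_Lincoln")},
         "doc2": {("Subject", "John_Wilkes_Booth")}}
query = {("assassinate", "LObj", "Abraham_Lincoln")}
print(nlm_rerank(["doc2", "doc1"], index, query))   # -> ['doc1']
```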
Since NLM was intended for use in consumer products, it was important to minimize index size. We needed an algorithm that would enable us to achieve reductions in index size without adversely affecting precision and recall. At the time when we were conducting these experiments, there did not exist any sufficiently large publicly available corpora of questions and relevant documents for the two genres of interest to us: the world wide web and encyclopedia text. We therefore gathered queries and documents for a web sample (section 3.2) and Encarta 99 (section 3.3), and had non-linguists perform double-blind evaluations of relevance.

Three implementation-specific aspects of the NLM index should be noted. First, in order to limit index size, duplicate instances of a term occurring in the same document are stored only once. Second, because of the particular compression scheme used to build the index, all terms require the same number of bits for storage, regardless of the length or number of words they contain. Third, the top ten percent of the NLM terms were suppressed, by analogy with stop words in conventional indexing schemes. Such high frequency terms tended not to be good predictors of document relevance.

3 Experiments

We conducted experiments in which we eliminated terms from the NLM index, and then measured precision and recall. The experiments were performed on two test corpora: web pages returned by the Alta Vista search service (section 3.2) and articles from the Encarta electronic encyclopedia (section 3.3).

3.1 The kinds of subordinate clauses

In order to test the hypothesis that information contained in subordinate clauses is less useful for IR than matrix clause information, we modified the indexing algorithm so that it eliminated terms that occurred in certain kinds of subordinate clauses. We experimented with the following clause types:

Abbreviated Clause (ABBCL): Until further indicated, lunch will be served at 1 p.m.
Complement Clause (COMPCL): I told the telemarketer that you weren't home.
Adverbial Clause (ADVCL): After John went home, he ate dinner.
Infinitival Clause (INFCL): John decided to go home.
Relative Clause (RELCL): I saw the man, who was wearing a green hat.
Present Participial Clause (PRPRTCL): Napoleon attacked the fleet, destroying it completely.

In the experiments described below, terms were eliminated from documents during indexing; a sketch of this filtering step follows the example queries below. However, terms were never eliminated from the queries.

3.2 Alta Vista experiments

We gathered 120 natural language queries from colleagues for submission to Alta Vista. (Alta Vista's main search page, http://altavista.com, encourages users to submit natural language queries.) The queries averaged 3.7 content words, with a standard deviation of 1.7; words like "know" and "find", which are common in natural language queries, are included in these counts. The following are illustrative of the queries submitted:

Are there any air-conditioned hotels in Bali?
Has anyone ported Eliza to Win95?
What are the current weather conditions at Steven's Pass?
What makes a cat purr?
Where is Xian?
When will the next non-rerun showing of Star Trek air?
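The filtering step can be pictured as follows. The sketch assumes the parser has already annotated each term occurrence with its enclosing clause types; it is not the actual MEG-based implementation, and the set of eliminated clause types is configurable per experiment.

```python
# A sketch of the indexing-time filter over clause-annotated terms.

ELIMINATED = {"ABBCL", "COMPCL", "ADVCL", "INFCL", "RELCL", "PRPRTCL"}

def index_terms(parsed_doc, eliminated=ELIMINATED):
    """parsed_doc: list of (term, clause_types) pairs, one per occurrence.
    A term survives if at least one occurrence lies outside the eliminated
    clause types; duplicates are stored only once, as in the NLM index."""
    kept = set()
    for term, clause_types in parsed_doc:
        if not (set(clause_types) & eliminated):  # a main-clause occurrence
            kept.add(term)
    return kept

doc = [("stop", []),                 # main clause occurrence -> indexed
       ("home", ["COMPCL"]),         # only in a complement clause -> dropped
       ("dinner", ["ADVCL"]), ("dinner", [])]   # survives via main-clause use
print(index_terms(doc))              # -> {'stop', 'dinner'}
```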
Although fewer terms were omitted (28.8% 4 versus 31.4% when all terms in 4 TelTflS eliminated from a subordinate clause in one sentence might persist in the index if they occurred in the main clause of another sentence in the same document, hence a reduction of slightly less than 33.3%. 351 subordinate clauses were eliminated) the detrimental effect on F-measure was 5.3 times greater than when terms occuring in subordinate clauses were deleted. Table 1 Alta Vista: Effects of eliminating subordinate clauses Algorithm Precision Recall F % Change in F 5 Baseline NLM 34.3 43.2 38.24 0.00 Subordinate clauses 35.9 40.2 37.93 -0.82 % Change in index size 0.0 -31.4 Table 2 Alta Vista: Average effect of eliminating one third of terms Precision Recall F % Change % Change in in F index size 36.9 36.4 36.65 -4.34 -28.8 In the second comparison experiment, we tested the converse of the operation described in the discussion of Table 1 above: we eliminated all search terms from the main clauses of documents, leaving only search terms that occurred in subordinate clauses. Table 3 shows the dramatic effect of this operation: as we expected, the index size was greatly reduced (by 73.8%). However, F- measure was seriously affected, by more than two thirds, or -68.99%. The effect on F- measure is primarily due to the severe impact on recall, which fell from a tolerable baseline of 43.2% to an unacceptable 7.5%. Comparing the reduction in index size to the reduction when subordinate clause information was eliminated (73.8% versus 31.4%, a factor of approximately 2:1) to the reduction in F- measure (-68.99 versus -0.82, a factor of approximately 84:1), it is clear that the impact on F-measure from eliminating terms in main clauses is disproportionate to the reduction in index size. Table 3 Alta Vista: Effect of diminating main clauses Precision Recall F % Change % Change in in F index size 28.3 7.5 11.86 -68.99 -73.8 Table 4 isolates the effects of deleting each kind of subordinate clause. Most remarkable is the fact that eliminating terms that only occur in relative clauses (RELCL) yields a 7.3% reduction in index size while actually improving F-measure. Also worthy of special note is the fact that two kinds of subordinate clauses can be eliminated with no perceptible effect on F- measure: eliminating complement clauses (COMPCL), yields a reduction in index size of 7.4%, and eliminating present participial clauses (PRPRTCL) yields a reduction in index size of 4.2%. 5 F is calculated from the underlying figures, to minimise the effects of rounding errors. 352 Table 4 Alta Vista: Effect of eliminating different kinds of subordinate clauses Algorithm Precision Recall F % Change % Change in in F index size Baseline NLM 34.3 43.2 38.24 0.00 0.0 ADVCL 34.6 42.1 37.98 -0.67 -7.0 ABBCL ~ 34.3 43.2 38.24 0.00 -0.3 INFCL 34.8 42.1 38.10 -0.36 -11.8 RELCL 34.9 42.6 38.37 0.33 -7.3 COMPCL 34.5 42.9 38.24 0.00 -7.4 PRPRTCL 34.5 42.9 38.24 0.01 -4.2 Because of interactions among the different clause types, the effects illustrated in Table 4 are not additive. For example, an infinitival clause (INFCL) may contain a noun phrase with an embedded relative clause (RELCL). Elimination of all terms in the infinitival clause would therefore also lead to elimination of terms in the relative clause. 3.3 Encarta experiments We gathered 348 queries from middle- school students for submission to Encarta, an electronic encyclopedia. The queries averaged 3.4 content words, with a standard deviation of 1.4. 
3.3 Encarta experiments

We gathered 348 queries from middle-school students for submission to Encarta, an electronic encyclopedia. The queries averaged 3.4 content words, with a standard deviation of 1.4. The following are illustrative of the queries submitted:

How many people live in Nebraska?
How many valence electrons does sodium have?
I need to know where hyenas live.
In what event is Amy VanDyken the closest to the world record in swimming?
What color is a giraffe's tongue?
What is the life-expectancy of an elephant?

We indexed the text of the Encarta articles, approximately 33,000 files containing approximately 576,000 sentences, using a simple statistical indexing engine. We then submitted each query and gathered the first thirty ranked documents, for a total of 5,218 documents. We constructed an NLM index for the documents returned and, in a second pass, filtered documents using NLM. In the discussion below, recall is calculated as a percentage of the relevant documents that the statistical search returned.

Table 5 compares the baseline NLM accuracy (indexing all terms) to the accuracy of eliminating terms that occurred in subordinate clauses. The reduction in index size (29.0%) is comparable to the reduction observed in the Alta Vista experiment (31.4%). However, the effect on F-measure of eliminating terms from subordinate clauses is more marked (-4.91%) than in the Alta Vista experiment (-0.82%).

Table 5 Encarta: Effects of eliminating subordinate clauses

Algorithm            Precision   Recall   F       % Change in F   % Change in index size
Baseline NLM         39.2        29.0     33.34   0.00            0.0
Subordinate clauses  41.1        25.9     31.78   -4.91           -29.0

The impact on F-measure is still substantially less than the average of three runs during which arbitrary non-overlapping thirds of the terms were eliminated, as illustrated in Table 6. This arbitrary deletion of terms results in an 11.57% reduction in F-measure compared to the baseline, approximately 2.4 times greater than the impact of eliminating material in subordinate clauses.

Table 6 Encarta: Effects of eliminating one third of terms

Precision   Recall   F       % Change in F   % Change in index size
40.2        23.8     29.88   -11.57          -29.5

As Table 7 shows, eliminating terms from main clauses and retaining information in subordinate clauses has a profound effect on recall for the Encarta corpus. As with the Alta Vista experiment (section 3.2), it is instructive to compare the results in Table 7 to the results obtained when terms in subordinate clauses were deleted (Table 5). Approximately 2.7 times as many terms were eliminated from the index, yet the effect on F-measure is almost thirteen times worse.

Table 7 Encarta: Effect of eliminating main clauses

Precision   Recall   F       % Change in F   % Change in index size
40.9        7.4      12.53   -62.41          -77.1

Table 8 isolates the effects for Encarta of eliminating terms from each kind of subordinate clause. It is interesting to compare the reduction in index size and the relative change in F-measure for Encarta, a relatively homogeneous corpus of academic articles, to the heterogeneous web sample of section 3.2. For both corpora, eliminating terms that only occur in abbreviated clauses (ABBCL) or present participial clauses (PRPRTCL) results in modest reductions in index size without negatively affecting F-measure. Eliminating terms from adverbial clauses (ADVCL) or infinitival clauses (INFCL) also produces similar effects on the two corpora: a reduction in index size with a modest (less than 1%) reduction in F-measure. Relative clauses (RELCL) and complement clauses (COMPCL), however, behave differently across the two corpora. In both cases, the effects on F-measure are positive for web documents and negative for Encarta articles.
Table 8 isolates the effects for Encarta of eliminating terms from each kind of subordinate clause. It is interesting to compare the reduction in index size and the relative change in F-measure for Encarta, a relatively homogeneous corpus of academic articles, to the heterogeneous web sample of section 3.2. For both corpora, eliminating terms that only occur in abbreviated clauses (ABBCL) or present participial clauses (PRPRTCL) results in modest reductions in index size without negatively affecting F-measure. Eliminating terms from adverbial clauses (ADVCL) or infinitival clauses (INFCL) also produces similar effects on the two corpora: a reduction in index size with a modest (less than 1%) reduction in F-measure. Relative clauses (RELCL) and complement clauses (COMPCL), however, behave differently across the two corpora. In both cases, the effects on F-measure are positive for web documents and negative for Encarta articles.

The negative impact of the elimination of material from relative clauses in Encarta can perhaps be attributed to the pervasive use of non-restrictive relative clauses in the definitional encyclopedia text, as illustrated by the relative clauses in the following examples:

  Sargon II (ruled 722-705 BC), who followed Tiglath-pileser's successor, Shalmaneser V (ruled 727-722 BC), to the throne, extended Assyrian domination in all directions, from southern Anatolia to the Persian Gulf.

  Amaral, Tarsila do (1886-1973), Brazilian painter whose works were instrumental in the development of modernist painting in Brazil.

  After the so-called Boston Tea Party in 1773, when Bostonians destroyed tea belonging to the East India Company, Parliament enacted four measures as an example to the other rebellious colonies.

Another peculiar characteristic of the Encarta corpus, namely the pervasive use of complement-taking nominal expressions such as "the belief that" and "the fact that", possibly explains the negative impact of the elimination of complement clause material in Table 8.

Table 8 Encarta: Effect of eliminating different kinds of subordinate clauses

Algorithm     Precision  Recall  F      % Change in F  % Change in index size
Baseline NLM  39.2       29.0    33.34   0.00            0.0
ADVCL         39.9       28.4    33.18  -0.47           -5.8
ABBCL         39.6       29.0    33.48   0.43           -0.4
INFCL         40.0       28.3    33.15  -0.57           -9.2
RELCL         39.7       28.2    32.98  -1.10           -9.5
COMPCL        38.9       28.3    32.76  -1.75           -3.3
PRPRTCL       39.8       29.0    33.55   0.64           -5.5

4 Discussion

Although the results presented in section 3 are compelling, it may be possible to refine the identification of clauses from which index terms can be eliminated. In particular, complement clauses subordinate to speech act verbs would appear from failure analysis to warrant special attention. For example, in the following sentence our linguistic intuitions suggest that the content of the complement clause is more informative than the attribution to a speaker in the main clause:

  John said that the President would not resign in disgrace.

Of course, more fine-grained distinctions of this type can only be made given sufficiently rich linguistic analyses as input. Another compelling topic for future research would be the impact of less sophisticated analyses to identify various kinds of subordinate clauses.

The terms eliminated in the experiments presented in this paper were linguistic in nature. However, we would expect similar results if conventional word-based terms were eliminated in similar fashion. In future research, we intend to experiment with eliminating terms from a conventional statistical engine, combining this technique with the standard method of eliminating high-frequency index terms. Rather than eliminating terms from an index, it may also prove fruitful to investigate weighting terms according to the kind of clause in which they occur.

5 Conclusions

We have demonstrated that, as implicitly predicted by RST, index terms may be eliminated from certain kinds of subordinate clauses without substantially affecting precision or recall. Rather than using NLP to generate more index terms, we have found tremendous gains from systematically eliminating terms. The exact severity of the impact on precision and recall that results from eliminating terms varies by genre. In all cases, however, the systematic elimination of subordinate clause material is substantially better than arbitrary deletion of index terms or the deletion of index terms that occur only in main clauses.
Future research will attempt to refine the analysis of the kinds of subordinate clauses from which index terms can be omitted, and to integrate these findings with conventional statistical IR algorithms.

Acknowledgements

Our thanks go to Lisa Braden-Harder, Susan Dumais, Raman Chandrasekar, Eric Ringger, Monica Corston-Oliver, Lucy Vanderwende and the three anonymous reviewers for their help and comments on an earlier draft of this paper, and to Jing Lou for assistance in configuring a test environment.

References

Arampatzis, A. T., T. Tsoris, C. H. A. Koster, T. P. Van Der Weide. (1998) "Phrase-based information retrieval", Information Processing and Management 34:693-707.

Corston-Oliver, S. H. (1998) Computing Representations of the Structure of Written Discourse. Ph.D. dissertation. University of California, Santa Barbara.

Fagan, J. L. (1988) Experiments in Automatic Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-syntactic Methods. Ph.D. dissertation. Cornell University.

Heidorn, G. (1999) "Intelligent writing assistance." To appear in Dale, R., H. Moisl and H. Somers (eds.), A Handbook of Natural Language Processing Techniques. Marcel Dekker.

Katz, B. (1997) "Annotating the World Wide Web Using Natural Language." Proceedings of RIAO 97, Computer-assisted Information Search on Internet, McGill University, Quebec, Canada, 25-27 June 1997. Vol. 1:136-155.

Mann, W. C. and Thompson, S. A. (1986) "Relational Propositions in Discourse." Discourse Processes 9:57-90.

Mann, W. C. and Thompson, S. A. (1988) "Rhetorical Structure Theory: Toward a functional theory of text organization." Text 8:243-281.

Matthiessen, C. and Thompson, S. A. (1988) "The structure of discourse and 'subordination'." In Haiman, J. and S. A. Thompson (eds.), Clause Combining in Grammar and Discourse. John Benjamins: Amsterdam and Philadelphia. 275-329.

Strzalkowski, T., G. Stein, G. B. Wise, J. Perez-Carballo, P. Tapanainen, T. Jarvinen, A. Voutilainen, J. Karlgren. (1997) Natural Language Information Retrieval: TREC-7 Report.

Van Rijsbergen, C. J. (1980) Information Retrieval. Butterworths: London and Boston.
Statistical Models for Topic Segmentation

Jeffrey C. Reynar[1]
Microsoft Corporation
One Microsoft Way
Redmond, WA 98052 USA
[email protected]

[1] This work was conducted as part of my Ph.D. thesis work at the University of Pennsylvania.

Abstract

Most documents are about more than one subject, but many NLP and IR techniques implicitly assume documents have just one topic. We describe new clues that mark shifts to new topics, novel algorithms for identifying topic boundaries and the uses of such boundaries once identified. We report topic segmentation performance on several corpora as well as improvement on an IR task that benefits from good segmentation.

Introduction

Dividing documents into topically-coherent sections has many uses, but the primary motivation for this work comes from information retrieval (IR). Documents in many collections vary widely in length and while the shortest may address one topic, modest length and long documents are likely to address multiple topics or be comprised of sections that address various aspects of the primary topic. Despite this fact, most IR systems treat documents as indivisible units and index them in their entirety.

This is problematic for two reasons. First, most relevance metrics are based on word frequency, which can be viewed as a function of the topic being discussed (Church and Gale, 1995). (For example, the word header is rare in general English, but it enjoys higher frequency in documents about soccer.) In general, word frequency is a good indicator of whether a document is relevant to a query, but consider a long document containing only one section relevant to a query. If a keyword is used only in the pertinent section, its overall frequency in the document will be low and, as a result, the document as a whole may be judged irrelevant despite the relevance of one section.

The second reason it would be beneficial to index sections of documents is that, once a search engine has identified a relevant document, users would benefit from direct access to the relevant sections. This problem is compounded when searching multimedia documents. If a user wants to find a particular news item in a database of radio or television news programs, they may not have the patience to suffer through a 30 minute broadcast to find the one minute clip that interests them.

Dividing documents into sections based on topic addresses both of these problems. IR engines can index the resulting sections just like documents and subsequently users can peruse those sections their search engine deems relevant. In the next section we will discuss the nature of our approach, then briefly describe previous work, discuss various indicators of topic shifts, outline novel algorithms based on them and present our results.

1 Our Approach

We treat the process of creating documents as an instance of the noisy channel model. In this idealization, prior to writing, the author has in mind a collection of disjoint topics that she intends to address. During the writing process, due to the goals of writing smooth prose and knitting her document into a coherent whole, she blurs the boundaries between these topics. Thus, we assume there is a correct segmentation that has been hidden from our view. Our goal, therefore, is to model the clues about the original segmentation that were not obliterated while writing. We view segmentation as a labeling task.
Given the text of a document and a collection of putative topic boundary locations -- which could correspond to sentence boundaries, paragraph boundaries, pauses between utterances, changes in speaker or some arbitrary list of choice points -- we label each of them as either the location of a topic boundary or not. We perform this labeling using statistical algorithms that combine diverse sources of evidence to determine the likelihood of a topic boundary.

2 Previous Work

Much research has been devoted to the task of structuring text -- that is, dividing texts into units based on information within the text. This work falls roughly into two categories. Topic segmentation focuses on identifying topically-coherent blocks of text several sentences through several paragraphs in length (e.g. see Hearst, 1994). The prime motivation for identifying such units is to improve performance on language-processing or IR tasks. Discourse segmentation, on the other hand, is often finer-grained, and focuses on identifying relations between utterances (e.g. Grosz and Sidner, 1986 or Hirschberg and Grosz, 1992).

Many topic segmentation algorithms have been proposed in the literature. There is not enough space to review them all here, so we will focus on describing a representative sample that covers most of the features used to predict the location of boundaries. See (Reynar, 1998) for a more thorough review.

Youmans devised a technique called the Vocabulary Management Profile based on the location of first uses of word types. He posited that large clusters of first uses frequently followed topic boundaries since new topics generally introduce new vocabulary items (Youmans, 1991). Morris and Hirst developed an algorithm (Morris and Hirst, 1991) based on lexical cohesion relations (Halliday and Hasan, 1976). They used Roget's 1977 Thesaurus to identify synonyms and other cohesion relations. Kozima defined a measure called the Lexical Cohesion Profile (LCP) based on spreading activation within a semantic network derived from a machine-readable dictionary. He identified topic boundaries where the LCP score was low (Kozima, 1993). Hearst developed a technique called TextTiling that automatically divides expository texts into multi-paragraph segments using the vector space model from IR (Hearst, 1994). Topic boundaries were positioned where the similarity between the block of text before and after the boundary was low. In previous work (Reynar, 1994), we described a method of finding topic boundaries using an optimisation algorithm based on word repetition that was inspired by a visualization technique known as dotplotting (Helfman, 1994). Ponte and Croft predict topic boundaries using a model of likely topic length and a query expansion technique called Local Content Analysis that maps sets of words into a space of concepts (Ponte and Croft, 1997). Richmond, Smith and Amitay designed an algorithm for topic segmentation that weighted words based on their frequency within a document and subsequently used these weights in a formula based on the distance between repetitions of word types (Richmond et al., 1997). Beeferman, Berger and Lafferty used the relative performance of two statistical language models and cue words to identify topic boundaries (Beeferman et al., 1997).

3 New Clues for Topic Segmentation

Prior work on topic segmentation has exploited many different hints about where topic boundaries lie.
The algorithms we present use many cues from the literature as well as novel ones. Our approach is statistical in nature and weights evidence based on its utility in segmenting a training corpus. As a result, we do not use clues to form hard and fast rules. Instead, they all contribute evidence used to either increase or decrease the likelihood of proposing a topic boundary between two regions of text.

3.1 Domain-specific Cue Phrases

Many discourse segmentation techniques (e.g. Hirschberg and Litman, 1993) as well as some topic segmentation algorithms rely on cue words and phrases (e.g. Beeferman et al., 1997), but the types of cue words used vary greatly. Those we employ are highly domain specific. Taking an example from the broadcast news domain where we will demonstrate the effectiveness of our algorithms, the phrase joining us is a good indicator that a topic shift has just occurred because news anchors frequently say things such as joining us to discuss the crisis in Kosovo is Congressman... when beginning new stories. Consequently, our algorithms use the presence of phrases such as this one to boost the probability of a topic boundary having occurred.

  joining us
  good evening
  brought to you by
  this just in
  welcome back
  <person name> <station>
  this is <person name>

Table 1: A sampling of domain-specific cue phrases we employ.

Some cue phrases are more complicated and contain word sequences of particular types. Not surprisingly, the phrase this is is common in broadcast news. When it is followed by a person's name, however, it serves as a good clue that a topic is about to end. This is <person name> is almost always said when a reporter is signing off after finishing an on-location report. Generally such signoffs are followed by the start of new news stories. A sampling of the cue phrases we use is found in Table 1. Since our training corpus was relatively small we identified these by hand, but on a different corpus we induced them automatically (Reynar, 1998). The results we present later in the paper rely solely on manually identified cue phrases.

Identifying complex cue phrases involves pattern matching and determining whether particular word sequences belong to various classes. To address this, we built a named entity recognition system in the spirit of those used for the Message Understanding Conference evaluations (e.g. Bikel et al., 1997). Our named entity recognizer used a maximum entropy model, built with Adwait Ratnaparkhi's tools (Ratnaparkhi, 1996), to label word sequences as either person, place, company or none of the above based on local cues including the surrounding words and whether honorifics (e.g. Mrs. or Gen.) or corporate designators (e.g. Corp. or Inc.) were present. Our algorithm's labelling accuracy of 96.0% by token was sufficient for our purposes, but performance is not directly comparable to the MUC competitors'. Though we trained from the same data, we preprocessed the data to remove punctuation and capitalization so the model could be applied to broadcast news data that lacked these helpful clues. We separately identified television network acronyms using simple regular expressions.
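To make the pattern-matching step concrete, a complex cue phrase such as this is <person name> can be recognized by combining a literal pattern with the named-entity labels just described. The following sketch is our illustration, not the system's implementation; it assumes a hypothetical preprocessing step that has replaced recognized names with class tags such as PERSON:

    import re

    # Hypothetical labelled text: recognized names replaced by class tags.
    labelled = "we will be right back this is PERSON for CNN in Kosovo"

    SIGNOFF = re.compile(r"\bthis is PERSON\b")   # reporter signing off
    RESUME = re.compile(r"\bwelcome back\b")      # new segment beginning

    def cue_phrase_features(window):
        # Binary cue-phrase features for one region of labelled text.
        return {"signoff": bool(SIGNOFF.search(window)),
                "resume": bool(RESUME.search(window))}

    print(cue_phrase_features(labelled))   # {'signoff': True, 'resume': False}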
3.2 Word Bigram Frequency

Many topic segmentation algorithms in the literature use word frequency (e.g. Hearst, 1994; Reynar, 1994; Beeferman et al., 1997). An obvious extension to using word frequency is to use the frequency of multi-word phrases. Such phrases are useful because they approximate word sense disambiguation techniques. Algorithms that rely exclusively on word frequency might be fooled into suggesting that two stretches of text containing the word plant were part of the same story simply because of the rarity of plant and the low odds that two adjacent stories contained it due to chance. However, if plant in one section participated in bigrams such as wild plant, native plant and woody plant but in the other section was only in the bigrams chemical plant, manufacturing plant and processing plant, the lack of overlap between sets of bigrams could be used to decrease the probability that the two sections of text were in the same story. We limited the bigrams we used to those containing two content words.

3.3 Repetition of Named Entities

The named entities we identified for use in cue phrases are also good indicators of whether two sections are likely to be in the same story or not. Companies, people and places figure prominently in many documents, particularly those in the domain of broadcast news. The odds that different stories discuss the same entities are generally low. There are obviously exceptions -- the President of the U.S. may figure in many stories in a single broadcast -- but nonetheless the presence of the same entities in two blocks of text suggests that they are likely to be part of the same story.

3.4 Pronoun Usage

In her dissertation, Levy described a study of the impact of the type of referring expressions used, the location of first mentions of people and the gestures speakers make upon the cohesiveness of discourse (Levy, 1984). She found a strong correlation between the types of referring expressions people used, in particular how explicit they were, and the degree of cohesiveness with the preceding context. Less cohesive utterances generally contained more explicit referring expressions, such as definite noun phrases or phrases consisting of a possessive followed by a noun, while more cohesive utterances more frequently contained zeroes and pronouns. We will use the converse of Levy's observation about pronouns to gauge the likelihood of a topic shift. Since Levy generally found pronouns in utterances that exhibited a high degree of cohesion with the prior context, we assume that the presence of a pronoun among the first words immediately following a putative topic boundary provides some evidence that no topic boundary actually exists there.

4 Our Algorithms

We designed two algorithms for topic segmentation. The first is based solely on word frequency and the second combines the results of the first with other sources of evidence. Both of these algorithms are applied to text following some preprocessing including tokenization, conversion to lowercase and the application of a lemmatizer (Karp et al., 1992).

4.1 Word Frequency Algorithm

Our word frequency algorithm uses Katz's G model (Katz, 1996). The G model stipulates that words occur in documents either topically or non-topically. The model defines topical words as those that occur more than 1 time, while non-topical words occur only once. Counterexamples of these uses of topical and nontopical, of course, abound. We use the G model, shown below, to determine the probability that a particular word, w, occurred k times in a document. We trained the model from a corpus of 78 million words of Wall Street Journal text and smoothed the parameters using Dan Melamed's implementation of Good-Turing smoothing (Gale and Sampson, 1995) and additional ad hoc smoothing to account for unknown words.
Pr(k, w) = (1 - \alpha_w)\delta_{k,0} + \alpha_w(1 - \gamma_w)\delta_{k,1} + \alpha_w\gamma_w \frac{1}{B_w - 1}\left(1 - \frac{1}{B_w - 1}\right)^{k-2}(1 - \delta_{k,0})(1 - \delta_{k,1})

\alpha_w is the probability that a document contains at least 1 occurrence of word w. \gamma_w is the probability that w is used topically in a document given that it occurs at all. B_w is the average number of occurrences in documents with more than 1 occurrence of w. \delta_{x,y} is a function with value 1 if x = y and 0 otherwise.

The simplest way to view the G model is to decompose it into 3 separate terms that are summed. The first term is the probability of zero occurrences of a word, the second is the probability of one occurrence and the third is the probability of any number of occurrences greater than one.

To detect topic boundaries, we used the model to answer this simple question. Is it more or less likely that the words following a putative topic boundary were generated independently of those before it? Given a potential topic boundary, we call the text before the boundary region 1 and the text after it region 2. For the sake of our algorithm, the size of these regions was fixed at 230 words -- the average size of a topic segment in our training corpus, 30 files from the HUB-4 Broadcast News Corpus annotated with topic boundaries by the LDC (HUB-4, 1996). Since the G model, unlike language models used for speech recognition, computes the probability of a bag of words rather than a word sequence, we can use it to compute the probability of some text given knowledge of what words have occurred before that text.

We computed two probabilities with the model. P_same is the probability that region 1 and region 2 discuss the same subject matter and hence that there is no topic boundary between them. P_new is the probability that they discuss different subjects and are separated by a topic boundary. P_same, therefore, is the probability of seeing the words in region 2 given the context, called C, of region 1. P_new is the probability of seeing the words in region 2 independent of the words in region 1. Formulae for P_same and P_new are shown below.

P_{same} = \prod_w Pr(k, w | C)        P_{new} = \prod_w Pr(k, w)

Boundaries were placed where P_new was greater than P_same by a certain threshold. The threshold was used to trade precision for recall and vice versa when identifying topic boundaries. The most natural threshold is a very small nonzero value, which is equivalent to placing a boundary wherever P_new is greater than P_same.

Computing P_new is straightforward, but P_same requires computing conditional probabilities of the number of occurrences of each word in region 2 given the number in region 1. The formulae for the conditional probabilities are shown in Table 2. We do not have space to derive these formulae here, but they can be found in (Reynar, 1998). M is a normalizing term required to make the conditional probabilities sum to 1. In the table, x+ means x occurrences or more.

Occurrences    Occurrences    Conditional
in region 1    in region 2    probability
0              0              \alpha(1 - \gamma)
0              2+             \alpha\gamma \frac{1}{B-1}(1 - \frac{1}{B-1})^{k-2}
1              0              1 - \gamma
1              1+             \gamma \frac{1}{B-1}(1 - \frac{1}{B-1})^{k-2}
2+             0+             \frac{1}{M(B-1)}(1 - \frac{1}{B-1})^{k-2}

Table 2: Conditional probabilities used to compute P_same.
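Read operationally, the model places mass (1 - \alpha_w) on zero occurrences, \alpha_w(1 - \gamma_w) on exactly one, and spreads the remaining \alpha_w\gamma_w mass geometrically over counts of two or more. A direct transcription in Python (ours; the parameter values below are invented for illustration, whereas the real ones were estimated from the WSJ corpus and smoothed as described above):

    import math

    def g_model_prob(k, alpha, gamma, B):
        # Pr(k, w) for a word w with parameters alpha, gamma and B.
        if k == 0:
            return 1.0 - alpha
        if k == 1:
            return alpha * (1.0 - gamma)
        # Geometric decay over counts k >= 2; the three branches sum to 1.
        p = 1.0 / (B - 1.0)
        return alpha * gamma * p * (1.0 - p) ** (k - 2)

    def log_p_new(region2_counts, params):
        # log P_new: the words of region 2 generated independently of
        # region 1. region2_counts maps words to counts; params maps words
        # to (alpha, gamma, B). P_same would be computed analogously from
        # the conditional probabilities of Table 2.
        return sum(math.log(g_model_prob(k, *params[w]))
                   for w, k in region2_counts.items())

    params = {"plant": (0.02, 0.6, 3.5)}      # invented values
    print(log_p_new({"plant": 2}, params))    # log(0.02 * 0.6 * 0.4)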
4.2 A Maximum Entropy Model

Our second algorithm is a maximum entropy model that uses these features:

  • Did our word frequency algorithm suggest a topic boundary?
  • Which domain cues (such as Joining us or This is <person>) were present?
  • How many content word bigrams were common to both regions adjoining the putative topic boundary?
  • How many named entities were common to both regions?
  • How many content words in both regions were synonyms according to WordNet (Miller et al., 1990)?
  • What percentage of content words in the region after the putative boundary were first uses?
  • Were pronouns used in the first five words after the putative topic boundary?

We trained this model from 30 files of HUB-4 data that was disjoint from our test data.

5 Evaluation

We will present results for broadcast news data and for identifying chapter boundaries labelled by authors.

5.1 HUB-4 Corpus Performance

Table 3 shows the results of segmenting the test portion of the HUB-4 corpus, which consisted of transcribed broadcasts divided into segments by the LDC. We measured performance by comparing our segmentation to the gold standard annotation produced by the LDC. The row labelled Random guess shows the performance of a baseline algorithm that randomly guessed boundary locations with probability equal to the fraction of possible boundary sites that were boundaries in the gold standard. The row TextTiling shows the performance of the publicly available version of that algorithm (Hearst, 1994). Optimization is the algorithm we proposed in (Reynar, 1994). Word frequency and Max. Ent. Model are the algorithms we described above. Our word frequency algorithm does better than chance, TextTiling and our previous work, and our maximum entropy model does better still. See (Reynar, 1998) for graphs showing the effects of trading precision for recall with these models.

Algorithm        Precision  Recall
Random guess     0.16       0.16
TextTiling       0.21       0.41
Optimization     0.36       0.20
Word Frequency   0.55       0.52
Max. Ent. Model  0.59       0.60

Table 3: Performance on the HUB-4 English corpus.
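The precision and recall figures in these tables score hypothesized boundary positions against the gold standard annotation. A minimal sketch of such scoring (ours), under the simplifying assumption that a guess counts as correct only when it coincides exactly with an annotated boundary:

    def boundary_precision_recall(guessed, reference):
        # guessed, reference: sets of candidate-site indices marked as
        # boundaries by the algorithm and by the gold standard annotation.
        correct = len(guessed & reference)
        precision = correct / len(guessed) if guessed else 0.0
        recall = correct / len(reference) if reference else 0.0
        return precision, recall

    print(boundary_precision_recall({3, 7, 12}, {3, 9, 12, 15}))  # ~(0.67, 0.5)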
To reiterate, we used our word frequency model with a total of 3 parameters trained from English newswire text to segment Spanish broadcast news data We believe that the G model, which captures the notion of burstiness very well, is a good model for segmentation. However, the more important lesson from this work is that the concept of burstiness alone can be used to segment texts. Segmentation performance is better when models have accurate measures of the likelihood of 0, 1 and 2 or more occurrences of a word. However, the mere fact that content words are bursty and are relatively unlikely to appear in neighboring regions of a document unless those two regions are about the same topic is sufficient to segment many texts. This explains our ability to segment Spanish broadcast news using a 3 parameter model trained from English newswire data. 5.2 Recovering Authorial Structure Authors endow some types of documents with structure as they write. They may divide documents into chapters, chapters into sections, sections into subsections and so forth. We exploited these structures to evalUate topic segmentation techniques by comparing algorithmic determinations of structure to the author's original divisions. This method of evaluation is especially useful because numerous documents are now available in electronic form. We tested our word frequency algorithm on four randomly selected texts from Project Gutenberg. The four texts were Thomas Paine's pamphlet Common Sense which was published in 1791, the first .volume of Decline and Fall of the Roman Empire by Edward Gibbon, G.K. Chesterton's book Orthodoxy. and Herman Melville's classic Moby Dick. We permitted the algorithm to guess boundaries only between paragraphs, which were marked by blank lines in each document. To assess performance, we set the number of boundaries to be guessed to the number the authors themselves had identified. As a result, this evaluation focuses solely on the algorithm's ability to rank candidate boundaries and not on its adeptness at determining how many boundaries to select. To evaluate performance, we computed the accuracy of the algorithm's guesses compared to the chapter boundaries the authors identified. The documents we used for this evaluation may have contained legitimate topic boundaries which did not correspond to chapter boundaries, but we scored guesses at those boundaries incorrect. Table 4 presents results for the four works. Our algorithm performed better than randomly assigning boundaries for each of the documents except the pamphlet Common Sense. Performance on the other three works was significantly better than chance and ranged from an improvement of a factor of three in accuracy over the baseline to a factor of nearly 9 for the lengthy Decline and Fall of the Roman Empire. 362 Work Common Sense Decline and Fall Moby Dick Orthodoxy Combined # of Boundaries 7 Word Frequency 0.00 Random 0.36 53 0.21 0.0024 132 0.55 0.173 8 0.25 0.033 200 0.059 0.43 Table 4: Accuracy of the Word Frequency algorithm on identifying chapter boundaries. 5.3 IR Task Performance The data from the HUB-4 corpus was also used for the TREC Spoken document retrieval task. We tested the utility of our segmentations by comparing IR performance when we indexed documents, the segments annotated by the LDC and the segments identified by our algorithms. We modified SMART (Buckley, 1985) to perform better normalization for variations in document length (Singhal et al., 1996) prior to conducting our IR experiments. 
This IR task is atypical in that there is only 1 relevant document in the collection for each query. Consequently, performance is measured by determining the average rank determined by the IR system for the document relevant to each query. Perfect performance would be an average rank of 1, hence lower average ranks are better. Table 5 presents our results. Note that indexing the segments identified by our algorithms was better than indexing entire documents and that our best algorithm even outperformed indexing the gold standard annotation produced by the LDC. Method Documents Annotator segments Word frequency model Max. Ent. Model Average Rank 9.52 8.42 9.48 7.54 Table 5: Performance on an IR task. Lower numbers are better. Conclusion We described two new algorithms for topic segmentation. The first, based solely on word frequency, performs better than previous algorithms on broadcast news data. It performs well on speech recognized English despite recognition errors. Most surprisingly, a version of our first model that requires little training data could segment Spanish broadcast news documents as well---even with parameters estimated from English documents. Our second technique, a statistical model that combined numerous clues about segmentation, performs better than the first, but requires segmented training data. We showed an improvement on a simple IR task to demonstrate the potential of topic segmentation algorithms for improving IR. Other potential uses of these algorithms include better language modeling by building topic-based language models, improving NLP algorithms (e.g. coreference resolution), summarization, hypertext linking (Salton and Buckley, 1992), automated essay grading (Burstein et al., 1997) and topic detection and tracking (TDT program committee, 1998). Some of these are discussed in (Reynar, 1998), and others will be addressed in future work. Acknowledgements My thanks to the anonymous reviewers and the members of my thesis committee, Mitch Marcus, Aravind Joshi, Mark Liberman, Julia Hirschberg and Lyle Ungar for useful feedback. Thanks also to Dan Melamed for use of his smoothing tools and to Adwait Ratnaparkhi for use of his maximum entropy modelling software. References Beeferman, D., Berger, A., and Lafferty, J. (1997). Text segmentation using exponential models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, pages 35-46, Providence, Rhode Island. Bikel, D.M., Miller, S., Schwartz, R., and Weischedel, R. (1997). Nymble: a high-performance learning name-finder. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 194-201, Washington, D.C. Buckley, C. (1985). Implementation of the SMART information retrieval system. Technical Report Technical Report 85-686, Cornell University. 363 Burstein, J., Wolff, S., Lu, C., and Kaplan, R. (1997). An automatic scoring system for advanced placement biology essays. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 174-181, Washington, D.C. Church, K.W. and Gale, W.A. (1995). Inverse document frequency (IDF): A measure of deviations from Poisson. tn Yarowsky, D. and Church, K., editors, Proceedings of the Third Workshop on Very Large Corpora, pages 121- 130. Association for Computational Linguistics. Gale, W. and Sampson, G. (1995). Good-Turing smoothing without tears. Journal of Quantitative Linguistics, 2. Grosz, B. J. and Sidner, C.L. (1986). Attention, Intentions and the Structure of Discourse. 
Computational Linguistics, 12 (3): 175-204. Halliday, M. and Hasan, R. (1976). Cohesion in English. Longman Group, New York. Hearst, M.A. (1994). Multi-paragraph segmentation of expository text. In Proceedings of the 32 ~" Annual Meeting of the Association for Computational Linguistics, pages 9-16, Las Cruces, New Mexico. Helfman, J.I. (1994). Similarity patterns in language. In IEEE Symposium on Visual Languages. Hirschberg, J. and Grosz, B. (1992). Intonational features of local and global discourse. In Proceedings of the Workshop on Spoken Language Systems, pages 441-446. DARPA. Hirschberg, J. and Litman, D. (1993). Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530. HUB-4 Program Committee (1996). The 1996 HUB-4 annotation specification for evaluation of speech recognition on broadcast news, version 3.5. Karp, D., Schabes, Y., Zaidel, M. and Egedi, D. (1992). A Freely Available Wide Coverage Morphological Analyzer for English. Proceedings of the 15 'h International Conference on Computational Linguistics. Nantes, France. Katz, S.M. (1996). Distribution of content words and phrases in text and language modeling. Natural Language Engineering, 2(1): 15-59. Kozima, H. (1993). Text segmentation based on similarity between words. In Proceedings of the 31 ~' Annual Meeting of the Association for Computational Linguistics, Student Session, pages 286-288. Levy, E.T. (1984). Communicating Thematic Structure in Narrative Discourse: The Use of Referring Terms and Gestures. Ph.D. thesis, University of Chicago. Miller, G.A., Beckwith, R., Fellbaum, C., Gross, D., and Miller, K. (1990). Five papers on WordNet. Technical report, Cognitive Science Laboratory, Princeton University. Morris, J. and Hirst, G. (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(I):21-42. Ponte, J.M. and Croft, W.B. (1997). Text segmentation by topic. In European Conference on Digital Libraries, pages 113-125, Pisa, Italy. Ratnaparkhi, A. (1996). A maximum entropy model for part-of-speech tagging. In Proceedings of the First Conference on Empirical Methods in Natural Language Processing, pages 133-142, University of Pennsylvania. Reynar, J.C. (1994). An automatic method of finding topic boundaries. In Proceedings of the 32 nd Annual Meeting of the Association for Computational Linguistics, Student Session, pages 331-333, Las Cruces, New Mexico. Reynar, J.C. (1998). Topic Segmentation: Algorithms and Applications. Ph.D. thesis, University of Pennsylvania, Department of Computer Science. Richmond, K., Smith, A., and Amitay, E. (1997). Detecting subject boundaries within text: A language independent statistical approach. In Exploratory Methods in Natural Language Processing, pages 47-54, Providence, Rhode Island. Salton, G. and Buckley, C. (1992). Automatic text structuring experiments. In Jacobs, P.S., editor, Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval, pages 199-210. Lawrence Erlbaum Associates, Hillsdale, New Jersey. Singhal, A., Buckley, C., and Mitra, M. (1996). Pivoted document length normalization. In Proceedings of the A CM-SIGIR Conference on Research and Development in Information Retrieval, pages 21- 29, Zurich, Switzerland. ACM. TDT Program Committee (1998). Topic Detection and Tracking Phase 2 Evaluation Plan, version 2.1. Youmans, G. (1991). A new tool for discourse analysis: The vocabulary management profile. Language, 67(4):763-789. 
A Decision-Based Approach to Rhetorical Parsing

Daniel Marcu
Information Sciences Institute and Department of Computer Science
University of Southern California
4676 Admiralty Way, Suite 1001
Marina del Rey, CA 90292-6601
marcu@isi.edu

Abstract

We present a shift-reduce rhetorical parsing algorithm that learns to construct rhetorical structures of texts from a corpus of discourse-parse action sequences. The algorithm exploits robust lexical, syntactic, and semantic knowledge sources.

1 Introduction

The application of decision-based learning techniques over rich sets of linguistic features has improved significantly the coverage and performance of syntactic (and to various degrees semantic) parsers (Simmons and Yu, 1992; Magerman, 1995; Hermjakob and Mooney, 1997). In this paper, we apply a similar paradigm to developing a rhetorical parser that derives the discourse structure of unrestricted texts.

Crucial to our approach is the reliance on a corpus of 90 texts which were manually annotated with discourse trees and the adoption of a shift-reduce parsing model that is well-suited for learning. Both the corpus and the parsing model are used to generate learning cases of how texts should be partitioned into elementary discourse units and how discourse units and segments should be assembled into discourse trees.

2 The Corpus

We used a corpus of 90 rhetorical structure trees, which were built manually using rhetorical relations that were defined informally in the style of Mann and Thompson (1988): 30 trees were built for short personal news stories from the MUC7 co-reference corpus (Hirschman and Chinchor, 1997); 30 trees for scientific texts from the Brown corpus; and 30 trees for editorials from the Wall Street Journal (WSJ). The average number of words for each text was 405 in the MUC corpus, 2029 in the Brown corpus, and 878 in the WSJ corpus. Each MUC text was tagged by three annotators; each Brown and WSJ text was tagged by two annotators.

The rhetorical structure assigned to each text is a (possibly non-binary) tree whose leaves correspond to elementary discourse units (edus), and whose internal nodes correspond to contiguous text spans. Each internal node is characterized by a rhetorical relation, such as ELABORATION and CONTRAST. Each relation holds between two non-overlapping text spans called NUCLEUS and SATELLITE. (There are a few exceptions to this rule: some relations, such as SEQUENCE and CONTRAST, are multinuclear.) The distinction between nuclei and satellites comes from the empirical observation that the nucleus expresses what is more essential to the writer's purpose than the satellite. Each node in the tree is also characterized by a promotion set that denotes the units that are important in the corresponding subtree. The promotion sets of leaf nodes are the leaves themselves. The promotion sets of internal nodes are given by the union of the promotion sets of the immediate nuclei nodes.

Edus are defined functionally as clauses or clause-like units that are unequivocally the NUCLEUS or SATELLITE of a rhetorical relation that holds between two adjacent spans of text. For example, "because of the low atmospheric pressure" in text (1) is not a fully fleshed clause. However, since it is the SATELLITE of an EXPLANATION relation, we treat it as elementary.
  (1) [Only the midday sun at tropical latitudes is warm enough] [to thaw ice on occasion,] [but any liquid water formed in this way would evaporate almost instantly] [because of the low atmospheric pressure.]

Some edus may contain parenthetical units, i.e., embedded units whose deletion does not affect the understanding of the edu to which they belong. For example, the unit "which I have received from John" in (2) is parenthetic.

  (2) This book, which I have received from John, is the best book that I have read in a while.

The annotation process was carried out using a rhetorical tagging tool. The process consisted in assigning edu and parenthetical unit boundaries, in assembling edus and spans into discourse trees, and in labeling the relations between edus and spans with rhetorical relation names from a taxonomy of 71 relations. No explicit distinction was made between intentional, informational, and textual relations. In addition, we also marked two constituency relations that were ubiquitous in our corpora and that often subsumed complex rhetorical constituents. These relations were ATTRIBUTION, which was used to label the relation between a reporting and a reported clause, and APPOSITION. Marcu et al. (1999) discuss in detail the annotation tool and protocol and assess the inter-judge agreement and the reliability of the annotation.

3 The parsing model

We model the discourse parsing process as a sequence of shift-reduce operations. As front-end, the parser uses a discourse segmenter, i.e., an algorithm that partitions the input text into edus. The discourse segmenter, which is also decision-based, is presented and evaluated in section 4.

The input to the parser is an empty stack and an input list that contains a sequence of elementary discourse trees, edts, one edt for each edu produced by the discourse segmenter. The status and rhetorical relation associated with each edt is UNDEFINED, and the promotion set is given by the corresponding edu. At each step, the parser applies a SHIFT or a REDUCE operation. Shift operations transfer the first edt of the input list to the top of the stack. Reduce operations pop the two discourse trees located on the top of the stack; combine them into a new tree updating the statuses, rhetorical relation names, and promotion sets associated with the trees involved in the operation; and push the new tree on the top of the stack.

Assume, for example, that the discourse segmenter partitions a text given as input as shown in (3). (Only the edus numbered from 12 to 19 are shown.) Figure 1 shows the actions taken by a shift-reduce discourse parser starting with step i. At step i, the stack contains 4 partial discourse trees, which span units [1,11], [12,15], [16,17], and [18], and the input list contains the edts that correspond to units whose numbers are higher than or equal to 19.

  (3) ... [Close parallels between tests and practice tests are common,12] [some educators and researchers say.13] [Test-preparation booklets, software and worksheets are a booming publishing subindustry.14] [But some practice products are so similar to the tests themselves that critics say they represent a form of school-sponsored cheating.15] ["If I took these preparation booklets into my classroom,16] [I'd have a hard time justifying to my students and parents that it wasn't cheating,"17] [says John Kaminsky,18] [a Traverse City, Mich., teacher who has studied test coaching.19] ...

At step i the parser decides to perform a SHIFT operation.
As a result, the edt corresponding to unit 19 becomes the top of the stack. At step i + 1, the parser performs a REDUCE-APPOSITION-NS operation, which combines edts 18 and 19 into a discourse tree whose nucleus is unit 18 and whose satellite is unit 19. The rhetorical relation that holds between units 18 and 19 is APPOSITION. At step i + 2, the trees that span over units [16,17] and [18,19] are combined into a larger tree, using a REDUCE-ATTRIBUTION-NS operation. As a result, the status of the tree [16,17] becomes NUCLEUS and the status of the tree [18,19] becomes SATELLITE. The rhetorical relation between the two trees is ATTRIBUTION. At step i + 3, the trees at the top of the stack are combined using a REDUCE-ELABORATION-NS operation. The effect of the operation is shown at the bottom of figure 1.

Figure 1: Example of a sequence of shift-reduce operations that concern the discourse parsing of text (3).

In order to enable a shift-reduce discourse parser to derive any discourse tree, it is sufficient to implement one SHIFT operation and six types of REDUCE operations, whose operational semantics is shown in figure 2. For each possible pair of nuclearity assignments NUCLEUS-SATELLITE (NS), SATELLITE-NUCLEUS (SN), and NUCLEUS-NUCLEUS (NN) there are two possible ways to attach the tree located at position top in the stack to the tree located at position top - 1. If one wants to create a binary tree whose immediate children are the trees at top and top - 1, an operation of type REDUCE-NS, REDUCE-SN, or REDUCE-NN needs to be employed. If one wants to attach the tree at top as an extra child of the tree at top - 1, thus creating or modifying a non-binary tree, an operation of type REDUCE-BELOW-NS, REDUCE-BELOW-SN, or REDUCE-BELOW-NN needs to be employed. Figure 2 illustrates how the statuses and promotion sets associated with the trees involved in the reduce operations are affected in each case.

Figure 2: The reduce operations supported by our parsing model.
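A schematic rendering of these operations (ours, for illustration, not the paper's implementation): each tree carries a status, a relation name and a promotion set, and the six reduce operations differ only in the statuses they assign and in whether they create a new binary node or attach below an existing one.

    from dataclasses import dataclass, field

    @dataclass
    class Tree:
        status: str = "UNDEFINED"       # NUCLEUS, SATELLITE or UNDEFINED
        relation: str = "UNDEFINED"     # e.g. ELABORATION
        promotion: set = field(default_factory=set)
        children: list = field(default_factory=list)

    def shift(stack, input_list):
        # Move the first edt of the input list to the top of the stack.
        stack.append(input_list.pop(0))

    def reduce_ns(stack, relation):
        # REDUCE-<relation>-NS: the tree below the top becomes the nucleus,
        # the top becomes the satellite; the new node's promotion set is
        # inherited from its nucleus.
        satellite, nucleus = stack.pop(), stack.pop()
        nucleus.status, satellite.status = "NUCLEUS", "SATELLITE"
        stack.append(Tree(relation=relation,
                          promotion=set(nucleus.promotion),
                          children=[nucleus, satellite]))

    # REDUCE-SN mirrors reduce_ns; REDUCE-NN marks both children as nuclei
    # and takes the union of their promotion sets; the three REDUCE-BELOW
    # variants attach the popped tree as an extra child of the tree below.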
Since the labeled data that we relied upon was sparse, we grouped the relations that shared some rhetorical meaning into clusters of rhetorical similarity. For example, the cluster named CONTRAST contained the contrast-like rhetorical relations of ANTITHESIS, CONTRAST, and CONCESSION. The cluster named EVALUATION-INTERPRETATION contained the rhetorical relations of EVALUATION and INTERPRETATION. And the cluster named OTHER contained rhetorical relations such as QUESTION-ANSWER, PROPORTION, RESTATEMENT, and COMPARISON, which were used very seldom in the corpus. The grouping process yielded 17 clusters, each characterized by a generalized rhetorical relation name. These names were: APPOSITION-PARENTHETICAL, ATTRIBUTION, CONTRAST, BACKGROUND-CIRCUMSTANCE, CAUSE-REASON-EXPLANATION, CONDITION, ELABORATION, EVALUATION-INTERPRETATION, EVIDENCE, EXAMPLE, MANNER-MEANS, ALTERNATIVE, PURPOSE, TEMPORAL, LIST, TEXTUAL, and OTHER.

In the work described in this paper, we attempted to automatically derive rhetorical structure trees that were labeled with relation names that corresponded to the 17 clusters of rhetorical similarity. Since there are 6 types of reduce operations and since each discourse tree in our study uses relation names that correspond to the 17 clusters of rhetorical similarity, it follows that our discourse parser needs to learn what operation to choose from a set of 6 × 17 + 1 = 103 operations (the 1 corresponds to the SHIFT operation).

4 The discourse segmenter

4.1 Generation of learning examples

The discourse segmenter we implemented processes an input text one lexeme (word or punctuation mark) at a time and recognizes sentence and edu boundaries and beginnings and ends of parenthetical units. We used the leaves of the discourse trees that were built manually in order to derive the learning cases. To each lexeme in a text, we associated one learning case, using the features described in section 4.2. The classes to be learned, which are associated with each lexeme, are sentence-break, edu-break, start-paren, end-paren, and none.

4.2 Features used for learning

To partition a text into edus and to detect parenthetical unit boundaries, we relied on features that model both the local and global contexts.

The local context consists of a window of size 5 that enumerates the Part-Of-Speech (POS) tags of the lexeme under scrutiny and the two lexemes found immediately before and after it. The POS tags are determined automatically, using the Brill tagger (1995). Since discourse markers, such as because and and, have been shown to play a major role in rhetorical parsing (Marcu, 1997), we also consider a list of features that specify whether a lexeme found within the local contextual window is a potential discourse marker. The local context also contains features that estimate whether the lexemes within the window are potential abbreviations.

The global context reflects features that pertain to the boundary identification process. These features specify whether a discourse marker that introduces expectations (Cristea and Webber, 1997) (such as although) was used in the sentence under consideration, whether there are any commas or dashes before the estimated end of the sentence, and whether there are any verbs in the unit under consideration.

A binary representation of the features that characterize both the local and global contexts yields learning examples with 2417 features/example.

4.3 Evaluation

We used the C4.5 program (Quinlan, 1993) in order to learn decision trees and rules that classify lexemes as boundaries of sentences, edus, or parenthetical units, or as non-boundaries. We learned both from binary (when we could) and non-binary representations of the cases.[1] In general the binary representations yielded slightly better results than the non-binary representations and the tree classifiers were slightly better than the rule-based ones. Due to space constraints, we show here (in table 1) only accuracy results that concern non-binary, decision-tree classifiers. The accuracy figures were computed using a ten-fold cross-validation procedure. In table 1, B1 corresponds to a majority-based baseline classifier that assigns none to all lexemes, and B2 to a baseline classifier that assigns a sentence boundary to every DOT lexeme and a non-boundary to all other lexemes.

[1] Learning from binary representations of features in the Brown corpus was too computationally expensive to terminate -- the Brown data file had about 0.5GBytes.

Corpus  # cases  B1(%)  B2(%)  Acc(%)
MUC     14362    91.28  93.1   96.24±0.06
WSJ     31309    92.39  94.6   97.14±0.10
Brown   72092    93.84  96.8   97.87±0.04

Table 1: Performance of a discourse segmenter that uses a decision-tree, non-binary classifier.

Figure 3 shows the learning curve that corresponds to the MUC corpus. It suggests that more data can increase the accuracy of the classifier.

Figure 3: Learning curve for discourse segmenter (the MUC corpus).

The confusion matrix shown in table 2 corresponds to a non-binary-based tree classifier that was trained on cases derived from 27 Brown texts and that was tested on cases derived from 3 different Brown texts, which were selected randomly. The matrix shows that the segmenter has problems mostly with identifying the beginning of parenthetical units and the intra-sentential edu boundaries; for example, it correctly identifies only 133 of the 220 edu boundaries. The performance is high with respect to recognizing sentence boundaries and ends of parenthetical units. The performance with respect to identifying sentence boundaries appears to be close to that of systems aimed at identifying only sentence boundaries (Palmer and Hearst, 1997), whose accuracy is in the range of 99%.

Action              (a)   (b)   (c)   (d)   (e)
sentence-break (a)  272                       4
edu-break (b)             133     3          84
start-paren (c)                   4          26
end-paren (d)                          20     6
none (e)              2    38     1     4  7555

Table 2: Confusion matrix for the decision-tree, non-binary classifier (the Brown corpus).
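At run time, then, the segmenter amounts to a single pass over the lexemes of a text: each lexeme is mapped to its feature vector and classified into one of the five classes. A schematic version (ours; the feature extractor and the trained C4.5 classifier are passed in as black boxes):

    CLASSES = ("sentence-break", "edu-break", "start-paren", "end-paren", "none")

    def segment(lexemes, extract_features, classify):
        # extract_features(lexemes, i): the local/global context features
        # of section 4.2 for the lexeme at position i.
        # classify(features): one of CLASSES, from the trained classifier.
        boundaries = []
        for i in range(len(lexemes)):
            label = classify(extract_features(lexemes, i))
            if label != "none":
                boundaries.append((i, label))
        return boundaries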
5 The shift-reduce action identifier

5.1 Generation of learning examples

The learning cases were generated automatically, in the style of Magerman (1995), by traversing in-order the final rhetorical structures built by annotators and by generating a sequence of discourse parse actions that used only SHIFT and REDUCE operations of the kinds discussed in section 3. When a derived sequence is applied as described in the parsing model, it produces a rhetorical tree that is a one-to-one copy of the original tree that was used to generate the sequence. For example, the tree at the bottom of figure 1 -- the tree found at the top of the stack at step i + 4 -- can be built if the following sequence of operations is performed: {SHIFT 12; SHIFT 13; REDUCE-ATTRIBUTION-NS; SHIFT 14; REDUCE-JOINT-NN; SHIFT 15; REDUCE-CONTRAST-SN; SHIFT 16; SHIFT 17; REDUCE-CONDITION-SN; SHIFT 18; SHIFT 19; REDUCE-APPOSITION-NS; REDUCE-ATTRIBUTION-NS; REDUCE-ELABORATION-NS.}

5.2 Features used for learning

To make decisions with respect to parsing actions, the shift-reduce action identifier focuses on the three topmost trees in the stack and the first edt in the input list. We refer to these trees as the trees in focus. The identifier relies on the following classes of features.

Structural features.

  • Features that reflect the number of trees in the stack and the number of edts in the input list.
  • Features that describe the structure of the trees in focus in terms of the type of textual units that they subsume (sentences, paragraphs, titles); the number of immediate children of the root nodes; the rhetorical relations that link the immediate children of the root nodes, etc.[2]

Lexical (cue-phrase-like) and syntactic features.

  • Features that denote the actual words and POS tags of the first and last two lexemes of the text spans subsumed by the trees in focus.
  • Features that denote whether the first and last units of the trees in focus contain potential discourse markers and the position of these markers in the corresponding textual units (beginning, middle, or end).

Operational features.

  • Features that specify what the last five parsing operations performed by the parser were.[3]

Semantic-similarity-based features.

  • Features that denote the semantic similarity between the textual segments subsumed by the trees in focus. This similarity is computed by applying, in the style of Hearst (1997), a cosine-based metric on the morphed segments.
  • Features that denote Wordnet-based measures of similarity between the bags of words in the promotion sets of the trees in focus. We use 14 Wordnet-based measures of similarity, one for each Wordnet relation (Fellbaum, 1998). Each of these similarities is computed using a metric similar to the cosine-based metric. Wordnet-based similarities reflect the degree of synonymy, antonymy, meronymy, hyponymy, etc. between the textual segments subsumed by the trees in focus. We also use 14 × 13/2 relative Wordnet-based measures of similarity, one for each possible pair of Wordnet-based relations. For each pair of Wordnet-based measures of similarity wr1 and wr2, each relative measure (feature) takes the value <, =, or >, depending on whether the Wordnet-based similarity wr1 between the bags of words in the promotion sets of the trees in focus is lower than, equal to, or higher than the Wordnet-based similarity wr2 between the same bags of words. For example, if both the synonymy- and meronymy-based measures of similarity are 0, the relative similarity between the synonymy and meronymy of the trees in focus will have the value =.

[2] The identifier assumes that each sentence break that ends in a period and is followed by two '\n' characters, for example, is a paragraph break; and that a sentence break that does not end in a punctuation mark and is followed by two '\n' characters is a title.

[3] We could generate these features because, for learning, we used sequences of shift-reduce operations and not discourse trees.

A binary representation of these features yields learning examples with 2789 features/example.
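Schematically, each learning case pairs the features of the current parsing configuration with the action the annotator-derived sequence takes next. The sketch below (ours) shows only the shape of the feature extraction; lexical(tree) and similarity(tree_a, tree_b) are caller-supplied stand-ins for the cue-phrase and cosine/Wordnet computations described above:

    def action_features(stack, input_list, last_operations, lexical, similarity):
        # The trees in focus: the three topmost trees in the stack plus the
        # first edt of the input list.
        focus = stack[-3:] + input_list[:1]
        features = {
            "stack_size": len(stack),                  # structural features
            "input_size": len(input_list),
            "last_ops": tuple(last_operations[-5:]),   # operational features
        }
        for i, tree in enumerate(focus):
            features["lexical_%d" % i] = lexical(tree)  # cue-phrase-like
        for i in range(len(focus)):                     # semantic similarity
            for j in range(i + 1, len(focus)):
                features["sim_%d_%d" % (i, j)] = similarity(focus[i], focus[j])
        return features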
5.3 Evaluation

The shift-reduce action identifier uses the C4.5 program in order to learn decision trees and rules that specify how discourse segments should be assembled into trees. In general, the tree-based classifiers performed slightly better than the rule-based classifiers. Due to space constraints, we present here only performance results that concern the tree classifiers. Table 3 displays the accuracy of the shift-reduce action identifiers, determined for each of the three corpora by means of a ten-fold cross-validation procedure. In table 3, the B3 column gives the accuracy of a majority-based classifier, which chooses action SHIFT in all cases. Since choosing only the action SHIFT never produces a discourse tree, in column B4, we present the accuracy of a baseline classifier that chooses shift-reduce operations randomly, with probabilities that reflect the probability distribution of the operations in each corpus.

Corpus   # cases   B3(%)   B4(%)   Acc(%)
MUC        1996    50.75    26.9   61.12±1.61
WSJ        4360    50.34    27.3   61.65±0.41
Brown      8242    50.18    28.1   61.81±0.48

Table 3: Performance of the tree-based, shift-reduce action classifiers.

Figure 4 shows the learning curve that corresponds to the MUC corpus. As in the case of the discourse segmenter, this learning curve also suggests that more data can increase the accuracy of the shift-reduce action identifier.

[Figure 4: Learning curve for the shift-reduce action identifier (the MUC corpus).]

6 Evaluation of the rhetorical parser

Obviously, by applying the two classifiers sequentially, one can derive the rhetorical structure of any text. Unfortunately, the performance results presented in sections 4 and 5 only suggest how well the discourse segmenter and the shift-reduce action identifier perform with respect to individual cases. They say nothing about the performance of a rhetorical parser that relies on these classifiers.

In order to evaluate the rhetorical parser as a whole, we partitioned randomly each corpus into two sets of texts: 27 texts were used for training and the last 3 texts were used for testing. The evaluation employs labeled recall and precision measures, which are extensively used to study the performance of syntactic parsers. Labeled recall reflects the number of correctly labeled constituents identified by the rhetorical parser with respect to the number of labeled constituents in the corresponding manually built tree. Labeled precision reflects the number of correctly labeled constituents identified by the rhetorical parser with respect to the total number of labeled constituents identified by the parser. We computed labeled recall and precision figures with respect to the ability of our discourse parser to identify elementary units, hierarchical text spans, text span nuclei and satellites, and rhetorical relations.
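As a rough sketch of how such figures can be computed, the fragment below (reusing the Node class from the earlier sketch, and again only an assumed illustration) extracts labeled constituents from a tree and scores a predicted tree against a manually built one:

```python
def constituents(tree: Node, start: int = 0):
    """Return the set of (start, end, label, nuclearity) spans of a tree
    and the position just past its last elementary unit."""
    if not tree.children:                    # leaf = one elementary unit
        return {(start, start + 1, tree.label, tree.nuclearity)}, start + 1
    spans, pos = set(), start
    for child in tree.children:
        child_spans, pos = constituents(child, pos)
        spans |= child_spans
    spans.add((start, pos, tree.label, tree.nuclearity))
    return spans, pos

def labeled_recall_precision(gold: Node, predicted: Node):
    ref, _ = constituents(gold)
    sys, _ = constituents(predicted)
    matched = len(ref & sys)                 # correctly labeled constituents
    return matched / len(ref), matched / len(sys)
```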
Table 4 displays results obtained using segmenters and shift-reduce action identifiers that were trained either on 27 texts from each corpus and tested on 3 unseen texts from the same corpus; or that were trained on 27 × 3 texts from all corpora and tested on 3 unseen texts from each corpus. The training and test texts were chosen randomly. Table 4 also displays results obtained using a manual discourse segmenter, which identified correctly all edus. Since all texts in our corpora were manually annotated by multiple judges, we could also compute an upper-bound of the performance of the rhetorical parser by calculating for each text in the test corpus and each judge the average labeled recall and precision figures with respect to the discourse trees built by the other judges. Table 4 displays these upper-bound figures as well.

Table 4: Performance of the rhetorical parser: labeled (R)ecall and (P)recision, given below as R/P. The segmenter is either Decision-Tree-Based (DT) or Manual (M); "Judges" rows give the human upper bound for each corpus.

MUC corpus (Judges: units 88.0/88.0; spans 84.4/84.4; nuclearity 79.1/83.5; relations 78.6/78.6)
  DT, trained on MUC:  units 37.1/100.0;  spans 38.2/61.0;  nuclearity 25.5/51.5;  relations 14.9/28.7
  DT, trained on All:  units 75.4/96.9;   spans 70.9/72.8;  nuclearity 58.3/68.9;  relations 38.4/45.3
  M,  trained on MUC:  units 100.0/100.0; spans 87.5/82.3;  nuclearity 68.8/78.2;  relations 72.4/62.8
  M,  trained on All:  units 100.0/100.0; spans 84.8/73.5;  nuclearity 71.0/69.3;  relations 66.5/53.9

WSJ corpus (Judges: units 85.1/86.8; spans 79.9/80.1; nuclearity 67.6/77.1; relations 73.1/73.3)
  DT, trained on WSJ:  units 18.1/95.8;   spans 34.0/65.8;  nuclearity 21.6/54.0;  relations 13.0/34.3
  DT, trained on All:  units 25.1/79.6;   spans 40.1/66.3;  nuclearity 30.3/58.5;  relations 17.3/36.0
  M,  trained on WSJ:  units 100.0/100.0; spans 83.4/84.2;  nuclearity 63.7/79.9;  relations 56.3/57.9
  M,  trained on All:  units 100.0/100.0; spans 83.0/85.0;  nuclearity 69.0/82.4;  relations 59.8/63.2

Brown corpus (Judges: units 89.5/88.5; spans 80.6/79.5; nuclearity 67.6/75.8; relations 69.7/68.3)
  DT, trained on Brown: units 60.5/79.4;   spans 57.3/63.3;  nuclearity 44.6/57.3;  relations 26.7/35.3
  DT, trained on All:   units 44.2/80.3;   spans 44.7/59.1;  nuclearity 33.2/51.8;  relations 15.7/25.7
  M,  trained on Brown: units 100.0/100.0; spans 81.1/73.4;  nuclearity 60.1/67.0;  relations 59.5/45.5
  M,  trained on All:   units 100.0/100.0; spans 80.8/77.5;  nuclearity 60.0/72.0;  relations 51.8/44.7

The results in table 4 primarily show that errors in the discourse segmentation stage affect significantly the quality of the trees our parser builds. When a segmenter is trained only on 27 texts (especially for the MUC and WSJ corpora, which have shorter texts than the Brown corpus), it has very low performance. Many of the intra-sentential edu boundaries are not identified, and as a consequence, the overall performance of the parser is low. When the segmenter is trained on 27 × 3 texts, its performance increases significantly with respect to the MUC and WSJ corpora, but decreases with respect to the Brown corpus. This can be explained by the significant differences in style and discourse marker usage between the three corpora. When a perfect segmenter is used, the rhetorical parser determines hierarchical constituents and assigns them a nuclearity status at levels of performance that are not far from those of humans. However, the rhetorical labeling of discourse spans is even in this case about 15-20% below human performance.

These results suggest that the features that we use are sufficient for determining the hierarchical structure of texts and the nuclearity statuses of discourse segments. However, they are insufficient for determining correctly the elementary units of discourse and the rhetorical relations that hold between discourse segments.

7 Related work

The rhetorical parser presented here is the first that employs learning methods and a thorough evaluation methodology. All previous parsers aimed at determining the rhetorical structure of unrestricted texts (Sumita et al., 1992; Kurohashi and Nagao, 1994; Marcu, 1997; Corston-Oliver, 1998) employed manually written rules. Because of the lack of discourse corpora, these parsers did not evaluate the correctness of the discourse trees they built per se, but rather their adequacy for specific purposes: experiments carried out by Miike et al. (1994) and Marcu (1999) showed only that the discourse structures built by rhetorical parsers (Sumita et al., 1992; Marcu, 1997) can be used successfully in order to improve retrieval performance and summarize text.

8 Conclusion

In this paper, we presented a shift-reduce rhetorical parsing algorithm that learns to construct rhetorical structures of texts from tagged data. The parser has two components: a discourse segmenter, which identifies the elementary discourse units in a text; and a shift-reduce action identifier, which determines how these units should be assembled into rhetorical structure trees.

Our results suggest that a high-performance discourse segmenter would need to rely on more training data and more elaborate features than the ones described in this paper -- the learning curves did not converge to performance limits.
If one's goal is, however, to construct discourse trees whose leaves are sentences (or units that can be identified at high levels of performance), then the segmenter described here appears to be adequate. Our results also suggest that the rich set of features that constitute the foundation of the action identifier are sufficient for constructing discourse hierarchies and for assigning to discourse segments a rhetorical status of nucleus or satellite at levels of performance that are close to those of humans. However, more research is needed in order to approach human performance in the task of assigning to segments correct rhetorical relation labels.

Acknowledgements. I am grateful to Ulf Hermjakob, Kevin Knight, and Eric Breck for comments on previous drafts of this paper.

References

Eric Brill. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-565.
Simon H. Corston-Oliver. 1998. Beyond string matching and cue phrases: Improving efficiency and coverage in discourse analysis. The AAAI Spring Symposium on Intelligent Text Summarization, pages 9-15.
Dan Cristea and Bonnie L. Webber. 1997. Expectations in incremental discourse processing. In Proceedings of ACL/EACL'97, pages 88-95.
Christiane Fellbaum, editor. 1998. Wordnet: An Electronic Lexical Database. The MIT Press.
Marti A. Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.
Ulf Hermjakob and Raymond J. Mooney. 1997. Learning parse and translation decisions from examples with rich context. In Proceedings of ACL/EACL'97, pages 482-489.
Lynette Hirschman and Nancy Chinchor. 1997. MUC-7 Coreference Task Definition.
Sadao Kurohashi and Makoto Nagao. 1994. Automatic detection of discourse structure by checking surface information in sentences. In Proceedings of COLING'94, volume 2, pages 1123-1127.
David M. Magerman. 1995. Statistical decision-tree models for parsing. In Proceedings of ACL'95, pages 276-283.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Daniel Marcu. 1997. The rhetorical parsing of natural language texts. In Proceedings of ACL/EACL'97, pages 96-103.
Daniel Marcu. 1999. Discourse trees are good indicators of importance in text. In Inderjeet Mani and Mark Maybury, editors, Advances in Automatic Text Summarization. The MIT Press. To appear.
Daniel Marcu, Estibaliz Amorrortu, and Magdalena Romera. 1999. Experiments in constructing a corpus of discourse trees. The ACL'99 Workshop on Standards and Tools for Discourse Tagging.
Seiji Miike, Etsuo Itoh, Kenji Ono, and Kazuo Sumita. 1994. A full-text retrieval system with a dynamic abstract generation function. In Proceedings of SIGIR'94, pages 152-161.
David D. Palmer and Marti A. Hearst. 1997. Adaptive multilingual sentence boundary disambiguation. Computational Linguistics, 23(2):241-269.
J. Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers.
R.F. Simmons and Yeong-Ho Yu. 1992. The acquisition and use of context-dependent grammars for English. Computational Linguistics, 18(4):391-418.
K. Sumita, K. Ono, T. Chino, T. Ukita, and S. Amano. 1992. A discourse structure analyzer for Japanese text. In Proceedings of the International Conference on Fifth Generation Computer Systems, volume 2, pages 1133-1140.
Corpus-Based Identification of Non-Anaphoric Noun Phrases

David L. Bean and Ellen Riloff
Department of Computer Science
University of Utah
Salt Lake City, Utah 84112
{bean,riloff}@cs.utah.edu

Abstract

Coreference resolution involves finding antecedents for anaphoric discourse entities, such as definite noun phrases. But many definite noun phrases are not anaphoric because their meaning can be understood from general world knowledge (e.g., "the White House" or "the news media"). We have developed a corpus-based algorithm for automatically identifying definite noun phrases that are non-anaphoric, which has the potential to improve the efficiency and accuracy of coreference resolution systems. Our algorithm generates lists of non-anaphoric noun phrases and noun phrase patterns from a training corpus and uses them to recognize non-anaphoric noun phrases in new texts. Using 1600 MUC-4 terrorism news articles as the training corpus, our approach achieved 78% recall and 87% precision at identifying such noun phrases in 50 test documents.

1 Introduction

Most automated approaches to coreference resolution attempt to locate an antecedent for every potentially coreferent discourse entity (DE) in a text. The problem with this approach is that a large number of DE's may not have antecedents. While some discourse entities such as pronouns are almost always referential, definite descriptions1 may not be. Earlier work found that nearly 50% of definite descriptions had no prior referents (Vieira and Poesio, 1997), and we found that number to be even higher, 63%, in our corpus. Some non-anaphoric definite descriptions can be identified by looking for syntactic clues like attached prepositional phrases or restrictive relative clauses. But other definite descriptions are non-anaphoric because readers understand their meaning due to common knowledge. For example, readers of this paper will probably understand the real world referents of "the F.B.I.," "the White House," and "the Golden Gate Bridge." These are instances of definite descriptions that a coreference resolver does not need to resolve because they each fully specify a cognitive representation of the entity in the reader's mind.

One way to address this problem is to create a list of all non-anaphoric NPs that could be used as a filter prior to coreference resolution, but hand coding such a list is a daunting and intractable task. We propose a corpus-based mechanism to identify non-anaphoric NPs automatically. We will refer to non-anaphoric definite noun phrases as existential NPs (Allen, 1995). Our algorithm uses statistical methods to generate lists of existential noun phrases and noun phrase patterns from a training corpus. These lists are then used to recognize existential NPs in new texts.

1 In this work, we define a definite description to be a noun phrase beginning with the.

2 Prior Research

Computational coreference resolvers fall into two categories: systems that make no attempt to identify non-anaphoric discourse entities prior to coreference resolution, and those that apply a filter to discourse entities, identifying a subset of them that are anaphoric. Those that do not practice filtering include decision tree models (Aone and Bennett, 1996), (McCarthy and Lehnert, 1995) that consider all possible combinations of potential anaphora and referents. Exhaustively examining all possible combinations is expensive and, we believe, unnecessary.
Of those systems that apply filtering prior to coreference resolution, the nature of the filtering varies. Some systems recognize when an anaphor and a candidate antecedent are incompatible. In SRI's probabilistic model (Kehler, 1997), a pair of extracted templates may be removed from consideration because an outside knowledge base indicates contradictory features. Other systems look for particular constructions using certain trigger words. For example, pleonastic2 pronouns are identified by looking for modal adjectives (e.g. "necessary") or cognitive verbs (e.g. "It is thought that...") in a set of patterned constructions (Lappin and Leass, 1994), (Kennedy and Boguraev, 1996).

The ARCE battalion command has reported that about 50 peasants of various ages have been kidnapped by terrorists of the Farabundo Marti National Liberation Front [FMLN] in San Miguel Department. According to that garrison, the mass kidnapping took place on 30 December in San Luis de la Reina. The source added that the terrorists forced the individuals, who were taken to an unknown location, out of their residences, presumably to incorporate them against their will into clandestine groups.

Figure 1: Anaphoric and Non-Anaphoric NPs (definite descriptions highlighted.)

A more recent system (Vieira and Poesio, 1997) recognizes a large percentage of non-anaphoric definite noun phrases (NPs) during the coreference resolution process through the use of syntactic cues and case-sensitive rules. These methods were successful in many instances, but they could not identify them all. The existential NPs that were missed were existential to the reader, not because they were modified by particular syntactic constructions, but because they were part of the reader's general world knowledge.

Definite noun phrases that do not need to be resolved because they are understood through world knowledge can represent a significant portion of the existential noun phrases in a text. In our research, we found that existential NPs account for 63% of all definite NPs, and 24% of them could not be identified by syntactic or lexical means. This paper details our method for identifying existential NPs that are understood through general world knowledge. Our system requires no hand coded information and can recognize a larger portion of existential NPs than Vieira and Poesio's system.

3 Definite NP Taxonomy

To better understand what makes an NP anaphoric or non-anaphoric, we found it useful to classify definite NPs into a taxonomy. We first classified definite NPs into two broad categories: referential NPs, which have prior referents in the texts, and existential NPs, which do not. In Figure 1, examples of referential NPs are "the mass kidnapping," "the terrorists" and "the individuals," while examples of existential NPs are "the ARCE battalion command" and "the Farabundo Marti National Liberation Front." (The full taxonomy can be found in Figure 2.) We should clarify an important point. When we say that a definite NP is existential, we say this because it completely specifies a cognitive representation of the entity in the reader's mind. That is, suppose "the F.B.I." appears in both sentence 1 and sentence 7 of a text. Although there may be a cohesive relationship between the noun phrases, because each of them completely specifies its referent independently, we consider them to be non-anaphoric.

2 Pronouns that are semantically empty, e.g. "It is clear that...."
Definite Noun Phrases
  - Referential
  - Existential
    - Independent
      - Syntactic
      - Semantic
    - Associative

Figure 2: Definite NP Taxonomy

We further classified existential NPs into two categories, independent and associative, which are distinguished by their need for context. Independent existentials can be understood in isolation. Associative existentials are inherently associated with an event, action, object or other context.3 In a text about a basketball game, for example, we might find "the score," "the hoop" and "the bleachers." Although they may not have direct antecedents in the text, we understand what they mean because they are all associated with basketball games. In isolation, a reader would not necessarily understand the meaning of "the score" because context is needed to disambiguate the intended word sense and provide a complete specification.

Because associative NPs represent less than 10% of the existential NPs in our corpus, our efforts were directed at automatically identifying independent existentials. Understanding how to identify independent existential NPs requires that we have an understanding of why these NPs are existential. We classified independent existentials into two groups, semantic and syntactic. Semantically independent NPs are existential because they are understood by readers who share a collective understanding of current events and world knowledge. For example, we understand the meaning of "the F.B.I." without needing any other information. Syntactically independent NPs, on the other hand, gain this quality because they are modified structurally. For example, in "the man who shot Liberty Valence," "the man" is existential because the relative clause uniquely identifies its referent.

3 Our taxonomy mimics Prince's (Prince, 1981) in that our independent existentials roughly equate to her new class, our associative existentials to her inferable class, and our referentials to her evoked class.

4 Mining Existential NPs from a Corpus

Our goal is to build a system that can identify independent existential noun phrases automatically. In the previous section, we observed that "existentialism" can be granted to a definite noun phrase either through syntax or semantics. In this section, we introduce four methods for recognizing both classes of existentials.

4.1 Syntactic Heuristics

We began by building a set of syntactic heuristics that look for the structural cues of restrictive premodification and restrictive postmodification. Restrictive premodification is often found in noun phrases in which a proper noun is used as a modifier for a head noun, for example, "the U.S. president." "The president" itself is ambiguous, but "the U.S. president" is not. Restrictive postmodification is often represented by restrictive relative clauses, prepositional phrases, and appositives. For example, "the president of the United States" and "the president who governs the U.S." are existential due to a prepositional phrase and a relative clause, respectively.

We also developed syntactic heuristics to recognize referential NPs. Most NPs of the form "the <number> <noun>" (e.g., "the 12 men") have an antecedent, so we classified them as referential. Also, if the head noun of the NP appeared earlier in the text, we classified the NP as referential. This method, then, consists of two groups of syntactic heuristics.
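As a concrete illustration of the two referential (rule-out) tests just described, here is a minimal sketch; the NP representation (a list of lowercase tokens plus a head noun) is an assumption made for the example, not the authors' implementation:

```python
def looks_referential(np_tokens, head_noun, earlier_head_nouns):
    """Apply the two rule-out heuristics from section 4.1."""
    # "the <number> <noun>", e.g. "the 12 men", usually has an antecedent
    if (len(np_tokens) >= 3 and np_tokens[0] == "the"
            and np_tokens[1].isdigit()):
        return True
    # a head noun seen earlier in the text suggests a prior mention
    return head_noun in earlier_head_nouns

print(looks_referential(["the", "12", "men"], "men", set()))        # True
print(looks_referential(["the", "president"], "president", set()))  # False
```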
The first group, which we refer to as the rule-in heuristics, contains seven heuristics that identify restrictive premodification or postmodification, thus targeting existential NPs. The second group, referred to as the rule-out heuristics, contains two heuristics that identify referential NPs.

4.2 Sentence One Extractions (S1)

Most referential NPs have antecedents that precede them in the text. This observation is the basis of our first method for identifying semantically independent NPs. If a definite NP occurs in the first sentence4 of a text, we assume the NP is existential. Using a training corpus, we create a list of presumably existential NPs by collecting the first sentence of every text and extracting all definite NPs that were not classified by the syntactic heuristics. We call this list the S1 extractions.

4.3 Existential Head Patterns (EHP)

While examining the S1 extractions, we found many similar NPs, for example "the Salvadoran Government," "the Guatemalan Government," and "the U.S. Government." The similarities indicate that some head nouns, when premodified, represent existential entities. By using the S1 extractions as input to a pattern generation algorithm, we built a set of Existential Head Patterns (EHPs) that identify such constructions. These patterns are of the form "the <x+>5 <noun1 ... nounN>" such as "the <x+> government" or "the <x+> Salvadoran government." Figure 3 shows the algorithm for creating EHPs.

4 Many of the texts we used were newspaper articles and all headers, including titles and bylines, were stripped before processing.

5 <x+> = one or more words

1. For each NP of more than two words, build a candidate pattern of the form "the <x+> headnoun." Example: if the NP was "the new Salvadoran government," the candidate pattern would be "the <x+> government."
2. Apply that pattern to the corpus, count how many times it matches an NP.
3. If possible, grow the candidate pattern by inserting the word to the left of the headnoun, e.g. the candidate pattern now becomes "the <x+> Salvadoran government."
4. Reapply the pattern to the corpus, count how many times it matches an NP. If the new count is less than the last iteration's count, stop and return the prior pattern. If the new count is equal to the last iteration's count, return to step 3.

This iterative process has the effect of recognizing compound head nouns.

Figure 3: EHP Algorithm
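The following is a minimal sketch of the pattern-growing loop in Figure 3, assuming the corpus has been reduced to a list of NPs, each represented as a tuple of lowercase words; it is an illustration, not the authors' code:

```python
def grow_ehp(np_words, corpus_nps):
    """Grow an Existential Head Pattern for one NP, per Figure 3.
    np_words:   e.g. ("the", "new", "salvadoran", "government")
    corpus_nps: list of NPs, each a tuple of lowercase words."""
    def count_matches(tail):
        # "the <x+> tail": 'the', then one or more words, then the tail
        return sum(1 for np in corpus_nps
                   if np[0] == "the"
                   and len(np) > len(tail) + 1
                   and np[-len(tail):] == tail)

    tail = (np_words[-1],)                 # step 1: start with the head noun
    count = count_matches(tail)            # step 2
    while len(tail) < len(np_words) - 1:
        candidate = (np_words[-len(tail) - 1],) + tail   # step 3: grow left
        new_count = count_matches(candidate)             # step 4
        if new_count < count:
            break                          # fewer matches: keep prior pattern
        tail, count = candidate, new_count
    return ("the", "<x+>") + tail
```

Run over every S1 extraction of more than two words and deduplicated, such a loop would yield the EHP list.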
4.4 Definite-Only List (DO)

It also became clear that some existentials never appear in indefinite constructions. "The F.B.I.," "the contrary," "the National Guard" are definite NPs which are rarely, if ever, seen in indefinite constructions. The chances that a reader will encounter "an F.B.I." are slim to none. These NPs appeared to be perfect candidates for a corpus-based approach. To locate "definite-only" NPs we made two passes over the corpus. The first pass produced a list of every definite NP and its frequency. The second pass counted indefinite uses of all NPs cataloged during the first pass. Knowing how often an NP was used in definite and indefinite constructions allowed us to sort the NPs, first by the probability of being used as a definite (its definite probability), and second by definite-use frequency. For example, "the contrary" appeared high on this list because its head noun occurred 15 times in the training corpus, and every time it was in a definite construction. From this, we created a definite-only list by selecting those NPs which occurred at least 5 times and only in definite constructions. Examples from the three methods can be found in the Appendix.
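A minimal sketch of this two-pass computation; the interface (NP strings with their articles stripped, so definite and indefinite uses of the same phrase share a key) is an assumption for illustration:

```python
from collections import Counter

def build_definite_only_list(definite_uses, indefinite_uses, min_freq=5):
    """Two-pass construction of the definite-only (DO) list.
    definite_uses / indefinite_uses: lists of article-stripped NP strings
    (so 'the contrary' and 'a contrary' both map to 'contrary')."""
    definite = Counter(definite_uses)       # pass 1: definite NP frequencies
    indefinite = Counter(indefinite_uses)   # pass 2: indefinite uses
    do_list = []
    for np, d in definite.items():
        definite_prob = d / (d + indefinite[np])
        if d >= min_freq and definite_prob == 1.0:
            do_list.append(np)
    return do_list
```

The same definite probability computed here is reused by the vaccine described in section 4.5.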
We applied an early allowance threshold of three sentences - occasionally existential NPs occuring under this threshold were classified as existential, and those that occurred above were left unclassified. Figure 4 details the vaccine's algorithm. 5 Algorithm & Training We trained and tested our methods on the Latin American newswire articles from MUC- 4 (MUC-4 Proceedings, 1992). The training set contained 1,600 texts and the test set contained 50 texts. All texts were first parsed by SUN- DANCE, our heuristic-based partial parser de- veloped at the University of Utah. We generated the S1 extractions by process- ing the first sentence of all training texts. This produced 849 definite NPs. Using these NPs as Vaccine Vaccine~ I DO EHP I ~' /\ Unresolved Marked referential existential definite NPs definite NPs Figure 6: Recognizing Existential NPs input to the existential head pattern algorithm, we generated 297 EHPs. The DO list was built by using only those NPs which appeared at least 5 times in the corpus and 100% of the time as definites. We generated the DO list in two iter- ations, once for head nouns alone and once for full NPs, resulting in a list of 65 head nouns and 321 full NPs 6. Once the methods had been trained, we clas- sifted each definite NP in the test set as referen- tial or existential using the algorithm in Figure 5. Figure 6 graphically represents the main el- ements of the algorithm. Note that we applied vaccines to the S1 and EHP lists, but not to the DO list because gaining entry to the DO list is much more difficult -- an NP must occur at least 5 times in the training corpus, and every time it must occur in a definite construction. 6The full NP list showed best performance using pa- rameters of 5 and 75%, not the 5 and 100% used to create the head noun only list. 377 Method Tested 0. Baseline 1. Syntactic Heuristics 2. Syntactic Heuristics + S1 3. Syntactic Heuristics + EHP 4. Syntactic Heuristics + DO 5. Syntactic Heuristics + S1 + EHP 6. Syntactic Heuristics + S1 + EHP + DO 7. Syntactic Heuristics + S1 + EHP + DO + Va(70/25) 8. Syntactic Heuristics + S1 + EHP + DO + Vb(50/25) Recall 100% 43.0% 66.3% 60.7% 69.2% 79.9% 81.7% 77.7% 79.1% Precision 72.2% 93.1% 84.3% 87.3% 83.9% 82.2% 82.2% 86.6% 84.5% Figure 7: Evaluation Results To evaluate the performance of our algorithm, we hand-tagged each definite NP in the 50 test texts as a syntactically independent existential, a semantically independent existential, an asso- ciative existential or a referential NP. Figure 8 shows the distribution of definite NP types in the test texts. Of the 1,001 definite NPs tested, 63% were independent existentials, so removing these NPs from the coreference resolution pro- cess could have substantial savings. We mea- sured the accuracy of our classifications using recall and precision metrics. Results are shown in Figure 7. 478 Independent existential, syntactic 48% 53 Independent existential, semantic 15% Associative existential 9% ::1 Referential 28% Total Figure 8: NP Distribution As a baseline measurement, we considered the accuracy of classifying every definite NP as ex- istential. Given the distribution of definite NP types in our test set, this would result in recall of 100% and precision of 72%. Note that we are more interested in high measures of preci- sion than recall because we view this method to be the precursor to a coreference resolution algorithm. 
Incorrectly removing an anaphoric NP means that the coreference resolver would never have a chance to resolve it, on the other hand, non-anaphoric NPs that slip through can still be ruled as non-anaphoric by the corefer- ence resolver. We first evaluated our system using only the syntactic heuristics, which produced only 43% recall, but 92% precision. Although the syn- tactic heuristics are a reliable way to identify existential definite NPs, they miss 57% of the true existentials. 6 Evaluation We expected the $1, EHP, and DO methods to increase coverage. First, we evaluated each method independently (on top of the syntac- tic heuristics). The results appear in rows 2-4 of Figure 7. Each method increased recall to between 61-69%, but decreased precision to 84- 87%. All of these methods produced a substan- tial gain in recall at some cost in precision. Next, we tried combining the methods to make sure that they were not identifying ex- actly the same set of existential NPs. When we combined the S1 and EHP heuristics, recall increased to 80% with precision dropping only slightly to 82%. When we combined all three methods (S1, EHP, and DO), recall increased to 82% without any corresponding loss of preci- sion. These experiments show that these heuris- tics substantially increase recall and are identi- fying different sets of existential NPs. Finally, we tested our vaccine algorithm to see if it could increase precision without sacri- ficing much recall. We experimented with two variations: Va used an upper definite probabil- ity threshold of 70% and ~ used an upper def- inite probability threshold of 50%. Both vari- ations used a lower definite probability thresh- old of 25%. The results are shown in rows 7-8 of Figure 7. Both vaccine variations increased precision by several percentage points with only a slight drop in recall. In previous work, the system developed by Vieria & Poesio achieved 74% recall and 85% precision for identifying "larger situation and unfamiliar use" NPs. This set of NPs does not correspond exactly to our definition of existen- tial NPs because we consider associative NPs 378 to be existential and they do not. Even so, our results are slightly better than their previous re- sults. A more equitable comparison is to mea- sure our system's performance on only the in- dependent existential noun phrases. Using this measure, our algorithm achieved 81.8% recall with 85.6% precision using Va, and achieved 82.9% recall with 83.5% precision using Vb. 7 Conclusions We have developed several methods for auto- matically identifying existential noun phrases using a training corpus. It accomplishes this task with recall and precision measurements that exceed those of the earlier Vieira & Poesio system, while not exploiting full parse trees, ap- positive constructions, hand-coded lists, or case sensitive text z. In addition, because the sys- tem is fully automated and corpus-based, it is suitable for applications that require portabil- ity across domains. Given the large percentage of non-anaphoric discourse entities handled by most coreference resolvers, we believe that us- ing a system like ours to filter existential NPs has the potential to reduce processing time and complexity and improve the accuracy of coref- erence resolution. Shalom Lappin and Herbert J. Leass. 1994. An al- gorithm for pronomial anaphora resolution. Com- putational Linguistics, 20(4):535-561. Joseph F. McCarthy and Wendy G. Lehnert. 1995. Using Decision Trees for Coreference Resolution. 
In Proceedings of the l~th International Joint Conference on Artificial Intelligence (IJCAI-95), pages 1050-1055. Ellen F. Prince. 1981. Toward a taxonomy of given- new information. In Peter Cole, editor, Radical Pragmatics, pages 223-255. Academic Press. Brian Roark and Eugene Charniak. 1998. Noun- phrase co-occurence statistics for semi-automatic semantic lexcon construction. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics. R. Vieira and M. Poesio. 1997. Processing defi- nite descriptions in corpora. In S. Botley and M. McEnery, editors, Corpus-based and Compu- tational Approaches to Discourse Anaphora. UCL Press. References James Allen. 1995. Natural Language Understand- ing. Benjamin/Cummings Press, Redwood City, CA. Chinatsu Aone and Scott William Bennett. 1996. Applying Machine Learning to Anaphora Reso- lution. In Connectionist, Statistical, and Sym- bolic Approaches to Learning for Natural Lan- guage Understanding, pages 302-314. Springer- Verlag, Berlin. Andrew Kehler. 1997. Probabilistic coreference in information extraction. In Proceedings of the Sec- ond Conference on Empirical Methods in Natural Language Processing (EMNLP-97). Christopher Kennedy and Branimir Boguraev. 1996. Anaphor for everyone: Pronomial anaphora reso- lution without a parser. In Proceedings of the 16th International Conference on Computational Lin- guistics (COLING-96). ~Case sensitive text can have a significant positive ef- fect on performance because it helps to identify proper nouns. Proper nouns can then be used to look for restric- tive premodification, something that our system cannot take advantage of because the MUC-4 corpus is entirely in uppercase. 379 Appendix Examples from the $1, EHP, & DO lists. $1 Extractions Existential Head Patterns Definite-Only NPs THE FMLN TERRORISTS THE <X+> NATIONAL CAPITOL THE STATE DEPARTMENT THE NATIONAL CAPITOL THE <X+> AFFAIR THE PAST 16 YEARS THE FMLN REBELS THE <X+> ATTACKS THE CENTRAL AMERICAN UNIVERSITY THE NATIONAL REVOLUTIONARY NETWORK THE <X-.b> AUTHORITIES THE MEDIA THE PAVON PRISON FARM THE <X--b> INSTITUTE THE 6TH INFRANTRY BRIGADE THE FMLN TERRORIST LEADERS THE THE CUSCATLAN RADIO NETWORK THE THE PAVON REHABILITATION FARM THE THE PLO THE THE TELA AGREEMENTS THE THE SALVADORAN ARMY THE THE COLOMBIAN GUERRILLA MOVEMENTS THE THE COLOMBIAN ARMY THE THE RELIGIOUS MONTHLY MAGAZINE 30 GIORNI THE THE REVOLUTIONARY LEFT THE <X+> GOVERNMENT <X+> COMMUNITY <X+> STRUCTURE < X.-[- > PATROL <X+> BORDER <X+> SQUARE < X--b> COMMAND <X+> SENATE <X-bY NETWORK <X-bY LEADERS THE PAST FEW HOURS THE U.N. SECRETARY GENERAL THE PENTAGON THE CONTRARY THE MRTA THE CARIBBEAN THE USS THE DRUG TRAFFICKING MAFIA THE MAQUILIGUAS THE MAYORSHIP THE PERUVIAN ARMY THE CENTRAL AMERICAN PEOPLES THE GUATEMALAN ARMY THE BUSINESS SECTOR THE HONDURAN ARM THE ANTICOMMUNIST ACTION ALLIANCE THE DEMOCRATIC SYSTEM THE U.S. THE BUSH ADMINISTRATION THE CATHOLIC CHURCH THE WAR THE <X-F> RESULT THE <X-.I-> SECURITY THE <X+> CRIMINALS THE <X--b> HOSPITAL THE <X+> CENTER THE <X+> REPORTS THE <X+> ELN THE <X+> AGREEMENTS THE <X--b> CONSTITUTION THE <X+> PEOPLES THE <X+> EMBASSY THE SANDINISTS THE LATTER THE WOUNDED THE SAME THE CITIZENRY THE KREMLIN THE BEST THE NEXT THE MEANTIME THE COUNTRYSIDE THE NAVY 380
An Efficient Statistical Speech Act Type Tagging System for Speech Translation Systems

Hideki Tanaka and Akio Yokoo
ATR Interpreting Telecommunications Research Laboratories
2-2, Hikaridai, Seika-cho, Soraku-gun, Kyoto, 619-0288, Japan
{tanakah|ayokoo}@itl.atr.co.jp

Abstract

This paper describes a new efficient speech act type tagging system. This system covers the tasks of (1) segmenting a turn into the optimal number of speech act units (SA units), and (2) assigning a speech act type tag (SA tag) to each SA unit. Our method is based on a theoretically clear statistical model that integrates linguistic, acoustic and situational information. We report tagging experiments on Japanese and English dialogue corpora manually labeled with SA tags. We then discuss the performance difference between the two languages. We also report on some translation experiments on positive response expressions using SA tags.

1 Introduction

This paper describes a statistical speech act type tagging system that utilizes linguistic, acoustic and situational features. This work can be viewed as a study on automatic "Discourse Tagging" whose objective is to assign tags to discourse units in texts or dialogues. Discourse tagging is studied mainly from two different viewpoints, i.e., linguistic and engineering viewpoints. The work described here belongs to the latter group. More specifically, we are interested in automatically recognizing the speech act types of utterances and in applying them to speech translation systems.

Several studies on discourse tagging to date have been motivated by engineering applications. The early studies by Nagata and Morimoto (1994) and Reithinger and Maier (1995) showed the possibility of predicting dialogue act tags for next utterances with statistical methods. These studies, however, presupposed properly segmented utterances, which is not a realistic assumption. In contrast to this assumption, automatic utterance segmentation (or discourse segmentation) is desired here.

Discourse segmentation in linguistics, whether manual or automatic, has also received keen attention because such segmentation provides the foundation of higher discourse structures (Grosz and Sidner, 1986).

Discourse segmentation has also received keen attention from the engineering side because the natural language processing systems that follow the speech recognition system are designed to accept linguistically meaningful units (Stolcke and Shriberg, 1996). There has been a lot of research following this line such as (Stolcke and Shriberg, 1996) (Cettolo and Falavigna, 1998), to only mention a few. We can take advantage of these studies as a pre-process for tagging. In this paper, however, we propose a statistical tagging system that optimally performs segmentation and tagging at the same time. Previous studies like (Litman and Passonneau, 1995) have pointed out that the use of a multiple information source can contribute to better segmentation and tagging, and so our statistical model integrates linguistic, acoustic and situational information.

The problem can be formalized as a search problem on a word graph, which can be efficiently handled by an extended dynamic programming algorithm. Actually, we can efficiently find the optimal solution without limiting the search space at all. The results of our tagging experiments indicated a high performance for Japanese but a considerably lower performance for the English corpora.

This work also reports on the use of speech act type tags for translating Japanese and English positive response expressions. Positive responses quite often appear in task-oriented dialogues like those in our tasks. They are often highly ambiguous and problematic in speech translation. We will show that these expressions can be effectively translated with the help of dialogue information, which we call speech act type tags.

2 The Problems

In this section, we briefly explain our speech act type tags and the tagged data and then formally define the tagging problem.
This work also reports on the use of speech act type tags for translating Japanese and English positive response expressions. Positive responses quite often appear in task-oriented dialogues like those in our tasks. They are often highly ambiguous and problematic in speech translation. We will show that these ex- pressions can be effectively translated with the help of dialogue information, which we call speech act type tags. 2 The Problems In this section, we briefly explain our speech act type tags and the tagged data and then formally define the tagging problem. 381 2.1 Data and Tags The data used in this study is a collection of tran- scribed dialogues on a travel arrangement task be- tween Japanese and English speakers mediated by interpreters (Morimoto et al., 1994). The tran- scriptions were separated by language, i.e., En- glish and Japanese, and the resultant two corpora share the same content. Both transcriptions went through morphological analysis, which was manually checked. The transcriptions have clear turn bound- aries (TB's). Some of the Japanese and English dialogue files were manually segmented into speech act units (SA units) and assigned with speech act type tags (SA tags). The SA tags represent a speaker's intention in an utterance, and is more or less similar to the traditional illocutionary force type (Searle, 1969). The SA tags for the Japanese language were based on the set proposed by Seligman et al. (1994) and had 29 types. The English SA tags were based on the Japanese tags, but we redesigned and reduced the size to 17 types. We believed that an excessively detailed tag classification would decrease the inter- coder reliability and so pruned some detailed tags) The following lines show an example of the English tagged dialogues. Two turns uttered by a hotel clerk and a customer were Segmented into SA units and assigned with SA tags. <clerk's turn> Hello, (expressive) New York City Hotel, (inform) may I help you ? (offer) <customer(interpreter)'s turn> Hello, (expressive) my name is Hiroko Tanaka (inform) and I would like to make a reservation for a room at your hotel. (desire) The tagging work to the dialogue was conducted by experts who studied the tagging manual before- hand. The manual described the tag definitions and turn segmentation strategies and gave examples. The work involved three experts for the Japanese corpus and two experts for the English corpus. 2 The result was checked and corrected by one ex- pert for each language. Therefore, since the work was done by one expert, the inter-coder tagging in- stability was suppressed to a minimum. As the re- sult of the tagging, we obtained 95 common dialogue files with SA tags for Japanese and English and used them in our experiments. 1Japanese tags, for example, had four tags mainly used for dialogue endings: thank, offer-follow-up, good- wishes, and farewell, most of which were reduced to ex- pressive in English. 2They did not listen to the recorded sounds in either case. 2.2 Problem Formulation Our tagging system assumes an input of a word se- quence for a dialogue produced by a speech recog- nition system. The word sequence is accompanied with clear turn boundaries. Here, the words do not contain any punctuation marks. The word sequence can be viewed as a sequence of quadruples: "'" (Wi-1, li-1, ai-1, si-1), (wi, li, ai, 8i)... where wi represents a surface wordform, and each vector represents the following additional informa- tion for wi. 
l_i: canonical form and part of speech of w_i (linguistic feature)
a_i: pause duration measured in milliseconds after w_i (acoustic feature)
s_i: speaker's identification for w_i such as clerk or customer (situational feature)

Therefore, an utterance like Hello I am John Phillips and ... uttered by a customer is viewed as a sequence like (Hello, (hello, INTER), 100, customer), (I, (i, PRON), 0, customer), (am, (be, BE), 0, customer), ....

From here, we will denote a word sequence as W = w_1, w_2, ..., w_i, ..., w_n for simplicity. However, note that W is a sequence of quadruples as described above.

The task of speech act type tagging in this paper covers two tasks: (1) segmentation of a word sequence into the optimal number of SA units, and (2) assignment of an SA tag to each SA unit. Here, the input is a word sequence with clear TB's, and our tagger takes each turn as a process unit.3

In this paper, an SA unit is denoted as u and the sequence is denoted as U. An SA tag is denoted as t and the sequence is denoted as T. x_s^e represents a sequence of x starting from s to e. Therefore, t_1^j represents a tag sequence from 1 to j.

The task is now formally addressed as follows: find the best SA unit sequence U and tag sequence T for each turn when a word sequence W with clear TB's is given. We will treat this problem with the statistical model described in the next section.

3 Although we do not explicitly represent TB's in a word sequence in the following discussions, one might assume virtual TB markers like @ in the word sequence.

3 Statistical Model

The problem addressed in Section 2 can be formalized as a search problem in a word graph that holds all possible combinations of SA units in a turn. We take a probabilistic approach to this problem, which formalizes it as finding a path (U, T) in the word graph that maximizes the probability P(U, T | W). This is formally represented in equation (1), and the probability is naturally decomposed into the product of two terms as in equation (3). The first probability in equation (3) represents an arbitrary word sequence constituting one SA unit u_j, given h_j (the history of SA units and tags from the beginning of a dialogue, h_j = u_1^{j-1}, t_1^{j-1}) and input W. The second probability represents the current SA unit u_j bearing a particular SA tag t_j, given u_j, h_j, and W.

(U, T) = \argmax_{U,T} P(U, T \mid W)                                         (1)
       = \argmax_{U,T} \prod_{j=1}^{k} P(u_j, t_j \mid h_j, W)                (2)
       = \argmax_{U,T} \prod_{j=1}^{k} P(u_j \mid h_j, W) \times P(t_j \mid u_j, h_j, W)   (3)

We call the first term the "unit existence probability" P_E and the second term the "tagging probability" P_T. Figure 1 shows a simplified image of the probability calculation in a word graph, where we have finished processing the word sequence w_1^{s-1}. Now, we estimate the probability for the word sequence w_s^{s+p-1} constituting an SA unit u_j and having a particular SA tag t_j. Because of the problem of sparse data, these probabilities are hard to directly estimate from the training corpus. We will use the following approximation techniques.

3.1 Unit Existence Probability

The probability of unit existence P_E is actually equivalent to the probability that the word sequence w_s, ..., w_{s+p-1} exists as one SA unit given h_j and W (Fig. 1). We then approximate P_E by

P_E \approx P(B_{w_{s-1}, w_s} = 1 \mid h_j, W) \times P(B_{w_{s+p-1}, w_{s+p}} = 1 \mid h_j, W) \times \prod_{m=s}^{s+p-2} P(B_{w_m, w_{m+1}} = 0 \mid h_j, W)   (4)

where the random variable B_{w_x, w_{x+1}} takes the binary values 1 and 0. A value of 1 corresponds to the existence of an SA unit boundary between w_x and w_{x+1}, and a value of 0 to the non-existence of an SA unit boundary. P_E is approximated by the product of two types of probabilities: for a word sequence break at both ends of an SA unit and for a non-break inside the unit. Notice that the probabilities of the former type adjust an unfairly high probability estimation for an SA unit that is made from a short word sequence.

The estimation of P_E is now reduced to that of P(B_{w_x, w_{x+1}} \mid h_j, W). This probability is estimated by a probabilistic decision tree, and we have

P(B_{w_x, w_{x+1}} \mid h_j, W) \approx P(B_{w_x, w_{x+1}} \mid \Phi_E(h_j, W))

where \Phi_E is a decision tree that categorizes h_j, W into equivalent classes (Jelinek, 1997). We modified a C4.5 (Quinlan, 1993) style algorithm to produce probabilities and used it for this purpose. The decision tree is known to be effective for the data sparseness problem and can take different types of parameters such as discrete and continuous values, which is useful since our word sequence contains both types of features.

Through preliminary experiments, we found that h_j (the past history of tagging results) was not useful and discarded it. We also found that the probability was well estimated by the information available in a short range of r words around w_x, which is stored in W' = w_{x-r+1}^{x+r}. Actually, the attributes used to develop the tree were the surface wordforms and parts of speech for w_{x-r+1}^{x+r} and the pause duration between w_x and w_{x+1}. The word range r was set from 1 to 3 as we will report in sub-section 5.3. As a result, we obtained the final form of P_E as

P_E \approx P(B_{w_{s-1}, w_s} = 1 \mid \Phi_E(W')) \times P(B_{w_{s+p-1}, w_{s+p}} = 1 \mid \Phi_E(W')) \times \prod_{m=s}^{s+p-2} P(B_{w_m, w_{m+1}} = 0 \mid \Phi_E(W'))   (5)

3.2 Tagging Probability

The tagging probability P_T was estimated by the following formula utilizing a decision tree \Phi_T. Two functions named f and g were also utilized to extract information from the word sequence in u_j.

P_T \approx P(t_j \mid \Phi_T(f(u_j), g(u_j), t_{j-1}, ..., t_{j-m}))   (6)

As this formula indicates, we only used information available with u_j and the m previous SA tags in h_j. The function f(u_j) outputs the speaker's identification of u_j. The function g(u_j) extracts cue words for the SA tags from u_j using a cue word list. The cue word list was extracted from a training corpus that was manually labeled with the SA tags. For each SA tag, the 10 most dependent words were extracted with a χ²-test. After converting these into canonical forms, they were conjoined.

To develop a statistical decision tree, we used an input table whose attributes consisted of a cue word list, a speaker's identification, and the m previous tags. The value for each cue word was binary: 1 was set when the utterance u_j contained the word, and 0 otherwise. The effects of f(u_j), g(u_j), and length m on the tagging performance will be reported in sub-section 5.3.
This algorithm was originally de- veloped for a statistical Japanese morphological an- alyzer whose tasks are to determine boundaries in an input character sequence having no separators and to give an appropriate part of speech tag to each word, i.e., a character sequence unit. This algorithm can handle arbitrary lengths of histories of pos tags and words and efficiently produce n-best results. We can see a high similarity between our task and Japanese morphological analysis. Our task requires the segmentation of a word sequence instead of a character sequence and the assignment of an SA tag instead of a pos tag. The main difference is that a word dictionary is available with a morphological analyzer. Thanks to its dictionary, a morphological analyzer can assume possible morpheme boundaries. 4 Our tagger, on the other hand, has to assume that any word se- quence in a turn can constitute an SA unit in the search. This difference, however, does not require any essential change in the search algorithm. 5 Tagging Experiments 5.1 Data Profile We have conducted several tagging experiments on both the Japanese and English corpora described in sub-section 2.1. Table 1 shows a summary of the 95 files used in the experiments. In the experiments described below, we used morpheme sequences for input instead of word sequences and showed the cor- responding counts. The average number of SA units per turn was 2.68 for Japanese and 2.31 for English. The aver- age number of boundary candidates per turn was 18 for Japanese and 12.7 for English. The number of tag types, the average number of SA units, and the average number of SA boundary candidates in- dicated that the Japanese data were more difficult to process. 4Als0, the probability for the existence of a word can be directly estimated from the corpus. Table 1: Counts in both corpora. Counts Japanese English Turn 2,020 2,020 SA unit 5,416 4,675 Morpheme 38,418 27,639 POS types 30 33 SA tag type 29 17 5.2 Evaluation Methods We used "labeled bracket matching" for evalua- tion (Nagata, 1994). The result of tagging can be viewed as a set of labeled brackets, where brack- ets correspond to turn segmentation and their labels correspond to SA tags. With this in mind, the eval- uation was done in the following way. We counted the number of brackets in the correct answer, de- noted as R (reference). We also counted the num- ber of brackets in the tagger's output, denoted as S (system). Then the number of matching brackets was counted and denoted as M (match). Thus, we could define the precision rate with M/S and the recall rate with M/R. The matching was judged in two ways. One was "segmentation match": the positions of both start- ing and ending brackets (boundaries) were equal. The other was "segmentation+tagging match": the tags of both brackets were equal in addition to the segmentation match. The proposed evaluation simultaneously con- firmed both the starting and ending positions of an SA unit and was more severe than methods that only evaluate one side of the boundary of an SA unit. Notice that the precision and recall for the segmen- tation+tagging match is bounded by those of the segmentation match. 5.3 Tagging Results The total tagging performance is affected by the two probability terms PE and PT, both of which contain the parameters in Table 2. To find the best param- 384 Table 2: Parameters in probability terms. PE PT x+r Wx-r+l r: word range f(uj): speaker of uj g(uj): cue words in uj tj-1 ... 
tj_,~ : previous SA tags Table 4: T-scores for segmentation accuracies. Recall Precision A B C A B C B 2.84 - - B 1.25 - - C 2.71 0.12 - C 0.83 0.44 - D 2.57 0.28 0.17 D 0.74 0.39 0.01 Table 3: Average accuracy for segmentation match. Parameter Recall rate % Precision rate % A 89.50 91.99 B 91.89 92.92 C 92.00 92.57 D 92.20 92.58 Table 5: Average accuracy for seg.+tag, match. Parameter Recall rate % Precision rate % E 72.25 72.70 F 74.91 75.35 G 74.83 75.29 H 74.50 74.96 eter set and see the effect of each parameter, we conducted the following two types of experiments. I Change the parameters for PE with fixed pa- rameters for PT The effect of the parameters in PE was mea- sured by the segmentation match. II Change the parameters for PT with fixed pa- rameters for PE The effect of the parameters in PT was mea- sured by the segmentation+tagging match. Now, we report the details with the Japanese set. 5.3.1 Effects of DE with Japanese Data We fixed the parameters for PT as f(uj), g(uj), tj-1, i.e., a speaker's identification, cue words in the current SA unit, and the SA tag of the previous SA unit. The unit existence probability was estimated using the following parameters. (A): Surface wordforms and pos's ofw~ +1, i.e., word range r = 1 (B): Surface wordforms and pos's of w x+2 i.e., word x-i, range r ---- 2 (C): (h) with a pause duration between wx, Wx+l (D): (U) with a pause duration between wx, wx+l Under the above conditions, we conducted 10-fold cross-validation tests and measured the average re- call and precision rates in the segmentation match, which are listed in Table 3. We then conducted l-tests among these average scores. Table 4 shows the l-scores between different parameter conditions. In the following discussions, we will use the following l-scores: t~=0.0~5(18) -- 2.10 and t~=0.05(18) = 1.73. We can note the following features from Tables 3 and 4. • recall rate (B), (C), and (D) showed statistically signif- icant (two-sided significance level of 5%, i.e., t > 2.10) improvement from (A). (D) did not show significant improvement from either (B) nor (C). • precision rate Although (n) and (C) did not improve from (A) with a high statistical significance, we can observe the tendency of improvement. (D) did not show a significant difference from (B) or (C). We can, therefore, say that (B) and (C) showed equally significant improvement from (A): expansion of the word range r from I to 2 and using pause infor- mation with word range 1. The combination of word range 2 and pause (D), however, did not show any significant differences from (B) or (C). We believe that the combination resulted in data sparseness. 5.3.2 Effects of PT with Japanese Data For the Type II experiments, we set the parame- ters for PE as condition (C): surface wordforms and pos's of wx TM and a pause duration between w~ and w~+l. Then, PT was estimated using the following parameters. (E): Cue words in utterance uj, i.e., g(uj) (F): (S) with tj_ 1 (G): (E) with tj_l and tj_2 (H): (E) with tj-1 and a speaker's identification f(uj) The recall and precision rates for the segmenta- tion÷tagging match were evaluated in the same way as in the previous experiments. The results are shown in Table 5. The l-scores among these param- eter setting are shown in Table 6. We can observe the following features. • recall rate (F) and (G) showed an improvement from (E) with a two-sided significance level of 10% (1 > 385 Table 6: T-scores for seg.+tag, accuracies. 
Here, we can say that t_{j-1} together with the cue words (condition (F)) played the dominant role in SA tag assignment, and that the further addition of the history t_{j-2} (G) or of the speaker's identification f(uj) (H) did not yield significant improvements.

5.3.3 Summary of Japanese Tagging Experiments

As a concise summary, the best recall and precision rates for the segmentation match were obtained with conditions (B) and (C): approximately 92% and 93%, respectively. The best recall and precision rates for the segmentation+tagging match were 74.91% and 75.35%, respectively (Table 5, condition (F)). We consider these figures quite satisfactory considering the severity of our evaluation scheme.

5.3.4 English Tagging Experiment

We will briefly discuss the experiments with the English data, which were similar to the Japanese ones. For the SA unit segmentation, we varied the word range r from 1 to 3 while fixing the parameters for PT to (H); the best results were obtained with word range r = 2, i.e., condition (B). The recall rate was 71.92% and the precision rate was 78.10%.5

We then conducted exactly the same tagging experiments as for Japanese, fixing the parameter for PE to (B). Experiments with condition (H) showed the best score: a recall rate of 53.17% and a precision rate of 57.75%. We obtained lower performance than for Japanese. This was somewhat surprising, since we had expected English to be easier to process. The lower performance in segmentation affected the total tagging performance. We discuss the difference further in section 7.

5Experiments with pause information were not conducted.

6 Application of SA Tags to Speech Translation

In this section, we briefly discuss an application of SA tags to a machine translation task, which is one of the motivations of the automatic tagging research described in the previous sections.

We dealt with the translation problem of positive responses appearing in both Japanese and English dialogues. Japanese positive responses like Hai and Soudesuka, and English ones like Yes and I see, appear quite often in our corpus. Since our dialogues were collected from the travel arrangement domain, which can basically be viewed as a sequence of question-answer pairs, they naturally contain many of these expressions.

These expressions are highly ambiguous in word sense. For example, Hai can mean Yes (accept), Uh huh (acknowledgment), hello (greeting), and so on. Incorrect translation of such an expression could confuse the dialogue participants. These expressions, however, are short and do not contain enough clues for proper translation in themselves, so some other contextual information is inevitably required.

We assume that SA tags can provide such necessary information, since the translations can be distinguished by the SA tags given in parentheses in the above examples.

We conducted a series of experiments to verify whether positive responses can be properly translated using SA tags together with other situational information. We assumed that SA tags are properly assigned to these expressions and used the manually tagged corpus described in Table 1 for the experiments. We collected the Japanese positive responses from the SA units in the corpus.
After assigning an English translation to each expression, we categorized these expressions into several representative forms. For example, the surface Japanese expression Ee, Kekkou desu was categorized under the representative form Kekkou. We prepared such data for the English positive responses as well. The sizes of the Japanese and English data in representative forms (equivalent to SA units) are shown in Table 7. Notice that 1,968 out of 5,416 Japanese SA units and 1,037 out of 4,675 English SA units are positive responses. The Japanese data contained 16 types of English translations and the English data contained 12 types of Japanese translations in total.

We examined the effects of all possible combinations of the following four features on translation accuracy. We trained decision trees with a C4.5-type algorithm (Quinlan, 1993), using these features (in all possible combinations) as attributes.

(I) Representative form of the positive response
(J) SA tag of the positive response
(K) SA tag of the SA unit previous to the positive response
(L) Speaker (Hotel/Clerk)

Table 7: Representative forms and their counts.

  Japanese            freq.    English        freq.
  Kekkou              69       I understand   6
  Soudesu ka          192      Great          5
  Hai                 930      Okay           240
  Soudesu             120      I see          136
  Mochiron            7        All right      136
  Soudesu ne          16       Very well      13
  Shouchi             30       Certainly      27
  Wakarimashita       304      Yes            359
  Kashikomarimashita  300      Fine           52
                               Right          10
                               Sure           44
                               Very good      9
  Total               1,968    Total          1,037

Table 8: Accuracies with one feature.

  Feature   J to E (%)   E to J (%)
  I         54.83        46.96
  J         51.73        34.33
  K         73.02        55.35
  L         40.09        37.80

We now show some of the results. Table 8 gives the accuracy when using a single feature as the attribute. We can naturally take the accuracy obtained with feature (I) as the baseline.

At first sight the result is strange, in that the SA tags of the previous SA units (K) were far more effective than the SA tags of the positive responses themselves (J). This phenomenon can be explained by the variety of tag types given to the utterances. Positive response expressions of the same representative form carry at most a few SA tag types, say two, whereas the previous SA units can have many SA tag types. If a positive response expression has five translations, they cannot all be distinguished with only two SA tags.

Table 9 shows the best feature combination at each number of features from 1 to 4. The best feature combinations were exactly the same for both translation directions, Japanese to English and vice versa. The percentages are the average accuracies obtained by 10-fold cross-validation, and the t-score in each row indicates the effect of adding one feature to the combination in the row above. We again regard a t-score greater than 2.01 as significant (two-sided significance level of 5%).

Table 9: Best performance for each number of features.

  Features   J to E (%)   t       E to J (%)   t
  K          73.02        -       55.35        -
  K,I        88.51        15.42   60.66        3.10
  K,I,L      88.92        0.51    65.58        2.49
  K,I,L,J    88.21        0.75    66.74        0.55

The accuracy for Japanese-to-English translation saturated with the two features (K) and (I); the further addition of any feature did not bring significant improvement. The SA tag of the positive response itself did not contribute.

The accuracy for English-to-Japanese translation saturated with the three features (K), (I), and (L). The speaker's identification proved to be effective, unlike in the Japanese-to-English direction; this is due to the necessity of controlling politeness in the Japanese translations according to the speaker. The SA tag of the positive response did not contribute here either.
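The experiment can be reproduced in outline with any decision-tree learner. The sketch below uses scikit-learn, whose trees are CART-style rather than C4.5, and invents a few training instances; the feature names, tag values, and speaker labels are placeholders for the corpus data described above.

    from sklearn.feature_extraction import DictVectorizer
    from sklearn.tree import DecisionTreeClassifier

    # Toy instances: features (I), (K), (L) -> English translation.
    examples = [
        ({"form": "Hai", "prev_tag": "ask-action", "speaker": "clerk"}, "Yes"),
        ({"form": "Hai", "prev_tag": "inform", "speaker": "clerk"}, "Uh huh"),
        ({"form": "Kekkou", "prev_tag": "offer", "speaker": "customer"}, "Fine"),
    ]
    feats, labels = zip(*examples)

    vec = DictVectorizer()   # one-hot encodes the categorical features
    clf = DecisionTreeClassifier().fit(vec.fit_transform(feats), labels)

    query = {"form": "Hai", "prev_tag": "ask-action", "speaker": "clerk"}
    print(clf.predict(vec.transform([query])))   # -> ['Yes']

Feature subsets such as (K, I) versus (K, I, L) can then be compared by simply dropping keys from the feature dictionaries.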
These results suggest that, when we implement the positive response translation system together with the SA tagging system, the SA tag information of the previous SA unit and the speaker's information should be kept in addition to the representative forms.

7 Related Works and Discussions

We discuss the tagging work in this section. In sub-section 5.3, we showed that Japanese segmentation into SA units was quite successful with lexical information alone, but that English segmentation was not as successful.

Although we do not know of any experiments directly comparable to ours, recent work reported by Cettolo and Falavigna (1998) seems to be similar. In that paper, they worked on finding semantic boundaries in Italian dialogues in the appointment scheduling task. Their semantic boundary nearly corresponds to our SA unit boundary. Cettolo and Falavigna (1998) reported recall and precision rates of 62.8% and 71.8%, respectively, obtained with insertion and deletion of boundary markers. These scores are clearly lower than our results for the Japanese segmentation match.

Although we should not jump to a generalization, we are tempted to say that Japanese dialogues are easier to segment than those of Western languages. With this in mind, we would like to discuss our study. First of all, was the manual segmentation quality the same for both corpora? As we explained in sub-section 2.1, both corpora were tagged by experts, and the entire result was checked by one of them for each language. Therefore, we believe there was no gap in quality significant enough to explain the segmentation performance.

Secondly, which lexical information yielded such a performance gap? We investigated the effects of part-of-speech tags and morphemes in the segmentation of both languages. We conducted the same 10-fold cross-validation tests as in sub-section 5.3 and obtained 82.29% (recall) and 86.16% (precision) for Japanese under condition (B'), which used only the POS's in w_{x-1} through w_{x+2} for the PE calculation. English, in contrast, marked rates of 65.63% (recall) and 73.35% (precision) under the same condition. These results indicate the outstanding effectiveness of Japanese POS's in segmentation. Indeed, some POS's, such as the ending particle (shu-jyoshi), clearly indicate sentence endings, and we consider that they played important roles in the segmentation. English, on the other hand, does not seem to have such strongly segment-indicating POS's.

Given that lexical information is important in English segmentation (Stolcke and Shriberg, 1996), what other information can help improve it? Hirschberg and Nakatani (1996) showed that prosodic information helps human discourse segmentation. Litman and Passonneau (1995) addressed the usefulness of a "multiple knowledge source" in human and automatic discourse segmentation. Venditti and Swerts (1996) stated that the intonational features of many Indo-European languages help cue the structure of spoken discourse. Cettolo and Falavigna (1998) reported improvements in Italian semantic boundary detection with acoustic information. All of these works indicate that the use of acoustic or prosodic information is promising, so this is surely one of our future directions.

The use of higher syntactic information is another of our directions. The SA unit should be a meaningful syntactic unit, although its degree of meaningfulness may be lower than in written texts. This property can easily be incorporated in our probability term PE.

8 Conclusions

We have described a new, efficient statistical speech act type tagging system based on a statistical model used in Japanese morphological analyzers. This system integrates linguistic, acoustic, and situational features and efficiently performs optimal segmentation of a turn and tagging. Several tagging experiments showed that the system segmented turns and assigned speech act type tags at high accuracy rates for Japanese data. Comparatively lower performance was obtained for English data, and we discussed the performance difference. We also examined the effect of the parameters of the statistical models on tagging performance. We finally showed that the SA tags in this paper are useful in translating the positive responses that often appear in task-oriented dialogues such as ours.

Acknowledgment

The authors would like to thank Mr. Yasuo Tanida for the excellent programming work and Dr. Seiichi Yamamoto for stimulating discussions.

References

M. Cettolo and D. Falavigna. 1998. Automatic detection of semantic boundaries based on acoustic and lexical knowledge. In ICSLP '98, volume 4, pages 1551-1554.

B. J. Grosz and C. L. Sidner. 1986. Attention, intentions and the structure of discourse. Computational Linguistics, 12(3):175-204, July-September.

J. Hirschberg and C. H. Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. In 34th Annual Meeting of the Association for Computational Linguistics, pages 286-293.

F. Jelinek. 1997. Statistical Methods for Speech Recognition, chapter 10. The MIT Press.

D. J. Litman and R. J. Passonneau. 1995. Combining multiple knowledge sources for discourse segmentation. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 108-115.

T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki. 1994. A speech and language database for speech translation research. In ICSLP '94, pages 1791-1794.

M. Nagata and T. Morimoto. 1994. An information-theoretic model of discourse for next utterance type prediction. Transactions of Information Processing Society of Japan, 35(6):1050-1061.

M. Nagata. 1994. A stochastic Japanese morphological analyzer using a forward-DP and backward-A* N-best search algorithm. In Proceedings of Coling94, pages 201-207.

J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.

N. Reithinger and E. Maier. 1995. Utilizing statistical dialogue act processing in Verbmobil. In 33rd Annual Meeting of the Association for Computational Linguistics, pages 116-121.

J. R. Searle. 1969. Speech Acts. Cambridge University Press.

M. Seligman, L. Fais, and M. Tomokiyo. 1994. A bilingual set of communicative act labels for spontaneous dialogues. Technical Report TR-IT-0081, ATR-ITL.

A. Stolcke and E. Shriberg. 1996. Automatic linguistic segmentation of conversational speech. In ICSLP '96, volume 2, pages 1005-1008.

J. Venditti and M. Swerts. 1996. Intonational cues to discourse structure in Japanese. In ICSLP '96, volume 2, pages 725-728.
1999
49
Distributional Similarity Models: Clustering vs. Nearest Neighbors

Lillian Lee
Department of Computer Science, Cornell University, Ithaca, NY 14853-7501
llee@cs.cornell.edu

Fernando Pereira
A247, AT&T Labs - Research, 180 Park Avenue, Florham Park, NJ 07932-0971
pereira@research.att.com

Abstract

Distributional similarity is a useful notion in estimating the probabilities of rare joint events. It has been employed both to cluster events according to their distributions, and to directly compute averages of estimates for distributional neighbors of a target event. Here, we examine the tradeoffs between model size and prediction accuracy for cluster-based and nearest-neighbors distributional models of unseen events.

1 Introduction

In many statistical language-processing problems, it is necessary to estimate the joint probability or cooccurrence probability of events drawn from two prescribed sets. Data sparseness can make such estimates difficult when the events under consideration are sufficiently fine-grained, for instance, when they correspond to occurrences of specific words in given configurations. In particular, in many practical modeling tasks, a substantial fraction of the cooccurrences of interest have never been seen in training data. In most previous work (Jelinek and Mercer, 1980; Katz, 1987; Church and Gale, 1991; Ney and Essen, 1993), this lack of information is addressed by reserving some mass in the probability model for unseen joint events, and then assigning that mass to those events as a function of their marginal frequencies.

An intuitively appealing alternative to relying on marginal frequencies alone is to combine estimates of the probabilities of "similar" events. More specifically, a joint event (x, y) would be considered similar to another (x', y) if the distributions of Y given x and Y given x' (the cooccurrence distributions of x and x') meet an appropriate definition of distributional similarity. For example, one can infer that the bigram "after ACL-99" is plausible, even if it has never occurred before, from the fact that the bigram "after ACL-95" has occurred, if "ACL-99" and "ACL-95" have similar cooccurrence distributions.

For concreteness and experimental evaluation, we focus in this paper on a particular type of cooccurrence, that of a main verb and the head noun of its direct object in English text. Our main goal is to obtain estimates p^(v|n) of the conditional probability of a main verb v given a direct object head noun n, which can then be used in particular prediction tasks.

In previous work, we and our co-authors have proposed two different probability estimation methods that incorporate word similarity information: distributional clustering and nearest-neighbors averaging. Distributional clustering (Pereira et al., 1993) assigns to each word a probability distribution over the clusters to which it may belong, and characterizes each cluster by a centroid, which is an average of cooccurrence distributions of words weighted according to cluster membership probabilities. Cooccurrence probabilities can then be derived either from a membership-weighted average of the clusters to which the words in the cooccurrence belong, or just from the highest-probability cluster. In contrast, nearest-neighbors averaging1 (Dagan et al., 1999) does not explicitly cluster words.
Rather, a given cooccurrence probability is estimated by averaging the probabilities of the set of cooccurrences most similar to the target cooccurrence. That is, while both methods involve appealing to similar "witnesses" (in the clustering case, these witnesses are the centroids; for nearest-neighbors averaging, they are the most similar words), in nearest-neighbors averaging the witnesses vary for different cooccurrences, whereas in distributional clustering the same set of witnesses is used for every cooccurrence (see Figure 1).

1In previous papers, we have used the term "similarity-based", but this term would cause confusion in the present article.

We thus see that distributional clustering and nearest-neighbors averaging are complementary approaches. Distributional clustering generally creates a compact representation of the data, namely, the cluster membership probability tables and the cluster centroids. Nearest-neighbors averaging, on the other hand, associates a specific set of similar words with each word and thus typically increases the amount of storage required. In a way, it is clustering taken to the limit: each word forms its own cluster.

In previous work, we have shown that both distributional clustering and nearest-neighbors averaging can yield improvements of up to 40% with respect to Katz's (1987) state-of-the-art backoff method in the prediction of unseen cooccurrences. In the case of nearest-neighbors averaging, we have also demonstrated perplexity reductions of 20% and statistically significant improvements in speech recognition error rate. Furthermore, each method has generated some discussion in the literature (Hofmann et al., 1999; Baker and McCallum, 1998; Ide and Veronis, 1998). Given the relative success of these methods and their complementarity, it is natural to wonder how they compare in practice.

Several authors (Schütze, 1993; Dagan et al., 1995; Ide and Veronis, 1998) have suggested that clustering methods, by reducing data to a small set of representatives, might perform less well than nearest-neighbors averaging-type methods. For instance, Dagan et al. (1995, p. 124) argue:

  This [class-based] approach, which follows long traditions in semantic classification, is very appealing, as it attempts to capture "typical" properties of classes of words. However ... it is not clear that word co-occurrence patterns can be generalized to class co-occurrence parameters without losing too much information.

Furthermore, early work on class-based language models was inconclusive (Brown et al., 1992).

In this paper, we present a detailed comparison of distributional clustering and nearest-neighbors averaging on several large datasets, exploring the tradeoff in similarity-based modeling between memory usage on the one hand and estimation accuracy on the other. We find that the performances of the two methods are in general very similar: with respect to Katz's back-off, they both provide average error reductions of up to 40% on one task and up to 7% on a related, but somewhat more difficult, task. Only in a fairly unrealistic setting did nearest-neighbors averaging clearly beat distributional clustering, but even in this case, both methods were able to achieve average error reductions of at least 18% in comparison to back-off.
Therefore, previous claims that clustering methods are necessarily inferior are not strongly supported by the evidence of these experiments, although it is of course possible that the situation may be different for other tasks.

2 Two models

We now survey the distributional clustering (section 2.1) and nearest-neighbors averaging (section 2.2) models. Section 2.3 examines the relationships between these two methods.

2.1 Clustering

The distributional clustering model that we evaluate in this paper is a refinement of our earlier model (Pereira et al., 1993). The new model has important theoretical advantages over the earlier one and interesting mathematical properties, which will be discussed elsewhere. Here, we will outline the main motivation for the model, the iterative equations that implement it, and their practical use in clustering.

The model involves two discrete random variables N (nouns) and V (verbs) whose joint distribution we have sampled, and a new unobserved discrete random variable C representing probabilistic clusters of elements of N. The role of the hidden variable C is specified by the conditional distribution p(c|n), which can be thought of as the probability that n belongs to cluster c. We want to preserve in C as much as possible of the information that N has about V, that is, to maximize the mutual information2 I(V, C). On the other hand, we would also like to control the degree of compression of C relative to N, that is, the mutual information I(C, N). Furthermore, since C is intended to summarize N in its role as a predictor of V, it should carry no information about V that N does not already have. That is, V should be conditionally independent of C given N, which allows us to write

  p(v|c) = Σ_n p(v|n) p(n|c) .    (1)

The distribution p(V|c) is the centroid for cluster c. It can be shown that I(V, C) is maximized subject to fixed I(C, N) and the above conditional independence assumption when

  p(c|n) = (p(c) / Z_n) exp[-β D(p(V|n) || p(V|c))] ,    (2)

where β is the Lagrange multiplier associated with fixed I(C, N), Z_n is the normalization

  Z_n = Σ_c p(c) exp[-β D(p(V|n) || p(V|c))] ,

and D is the Kullback-Leibler (KL) divergence, which measures the distance, in an information-theoretic sense, between two distributions q and r:

  D(q || r) = Σ_v q(v) log (q(v) / r(v)) .

2I(X, Y) = Σ_x Σ_y P(x, y) log (P(x, y) / (P(x) P(y))).

[Figure 1: Difference between clustering and nearest neighbors. Although A and B belong mostly to the same cluster (dotted ellipse), the two nearest neighbors to A are not the nearest two neighbors to B.]

The main behavioral difference between this model and our previous one is the p(c) factor in (2), which tends to sharpen cluster membership distributions. In addition, our earlier experiments used a uniform marginal distribution for the nouns instead of the marginal distribution in the actual data, in order to make clustering more sensitive to informative but relatively rare nouns. While neither difference leads to major changes in clustering results, we prefer the current model for its better theoretical foundation.

For fixed β, equations (2) and (1), together with Bayes's rule and marginalization, can be used in a provably convergent iterative reestimation process for p(N|c), p(V|c) and p(C). These distributions form the model for the given β.
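The reestimation loop can be written down compactly. The following NumPy sketch implements one sweep of equations (1) and (2) for a fixed β; the array shapes are our own choices, and annealing and cluster splitting are not included.

    import numpy as np

    def reestimate(pv_given_n, pn, pc, pv_given_c, beta, eps=1e-12):
        """One reestimation sweep for fixed beta.

        pv_given_n: (N, V) rows p(.|n);   pn: (N,) marginal p(n)
        pc: (K,) cluster priors;          pv_given_c: (K, V) centroids
        """
        # D(p(V|n) || p(V|c)) for every noun-cluster pair -> (N, K)
        ratio = np.log((pv_given_n[:, None, :] + eps) / (pv_given_c[None, :, :] + eps))
        kl = (pv_given_n[:, None, :] * ratio).sum(axis=2)
        # Eq. (2): p(c|n) proportional to p(c) exp(-beta * KL), normalized by Z_n
        pc_given_n = pc * np.exp(-beta * kl)
        pc_given_n /= pc_given_n.sum(axis=1, keepdims=True)
        # Bayes's rule and marginalization give p(c) and p(n|c) ...
        pc = pc_given_n.T @ pn
        pn_given_c = (pc_given_n * pn[:, None]).T / pc[:, None]
        # ... and Eq. (1) gives the new centroids p(v|c)
        pv_given_c = pn_given_c @ pv_given_n
        return pc, pc_given_n, pv_given_c

Iterating this sweep to convergence at each β in the annealing schedule reproduces the training regime described next.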
It is easy to see that for β = 0, p(c|n) does not depend on the cluster distribution p(V|c), so the natural number of clusters (distinct values of C) is one. At the other extreme, for very large β the natural number of clusters is the same as the number of nouns. In general, a higher value of β corresponds to a larger number of clusters.

The natural number of clusters k and the probabilistic model for different values of β are estimated as follows. We specify an increasing sequence {β_i} of β values (the "annealing" schedule), starting with a very low value β_0 and increasing slowly (in our experiments, β_0 = 1 and β_{i+1} = 1.1 β_i). Assuming that the natural number of clusters and the model for β_i have been computed, we set β = β_{i+1} and split each cluster into two twins by taking small random perturbations of the original cluster centroids. We then apply the iterative reestimation procedure until convergence. If two twins end up with significantly different centroids, we conclude that they are now separate clusters. Thus, for each i we have a number of clusters k_i and a model relating those clusters to the data variables N and V.

A cluster model can be used to estimate p(v|n) when v and n have not occurred together in training. We consider two heuristic ways of doing this estimation:

- all-cluster weighted average: p^(v|n) = Σ_c p(v|c) p(c|n)

- nearest-cluster estimate: p^(v|n) = p(v|c*), where c* maximizes p(c|n).

2.2 Nearest-neighbors averaging

As noted earlier, the nearest-neighbors averaging method is an alternative to clustering for estimating the probabilities of unseen cooccurrences. Given an unseen pair (n, v), we calculate an estimate p^(v|n) as an appropriate average of p(v|n'), where n' is distributionally similar to n. Many distributional similarity measures can be considered (Lee, 1999). In this paper, we focus on the one that gave the best results in our earlier work (Dagan et al., 1999), the Jensen-Shannon divergence (Rao, 1982; Lin, 1991). The Jensen-Shannon divergence of two discrete distributions p and q over the same domain is defined as

  JS(p, q) = (1/2) [ D(p || (p+q)/2) + D(q || (p+q)/2) ] .

It is easy to see that JS(p, q) is always defined. In previous work, we used the estimate

  p^(v|n) = (1/α_n) Σ_{n' in S(n,k)} p(v|n') exp(-β J(n, n')) ,

where J(n, n') = JS(p(V|n), p(V|n')), β and k are tunable parameters, S(n, k) is the set of k nouns with the smallest Jensen-Shannon divergence to n, and α_n is a normalization term. However, in the present work we use the simpler unweighted average

  p^(v|n) = (1/k) Σ_{n' in S(n,k)} p(v|n') ,    (3)

and examine the effect of the choice of k on modeling performance. By eliminating extra parameters, this restricted formulation allows a more direct comparison of nearest-neighbors averaging to distributional clustering, as discussed in the next section. Furthermore, our earlier experiments showed that an exponentially decreasing weight has much the same effect on performance as a bound on the number of nearest neighbors participating in the estimate.
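A small sketch of the Jensen-Shannon computation and of estimate (3), with the cooccurrence distributions stored as NumPy vectors in a dictionary keyed by noun; the data layout is an illustrative assumption.

    import numpy as np

    def kl(p, q, eps=1e-12):
        return float(np.sum(p * np.log((p + eps) / (q + eps))))

    def js(p, q):
        avg = 0.5 * (p + q)
        return 0.5 * (kl(p, avg) + kl(q, avg))

    def nn_estimate(n, pv_given_n, k):
        """Unweighted average of p(.|n') over the k nearest neighbors of n, Eq. (3)."""
        neighbors = sorted((m for m in pv_given_n if m != n),
                           key=lambda m: js(pv_given_n[n], pv_given_n[m]))[:k]
        return sum(pv_given_n[m] for m in neighbors) / k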
2.3 Discussion

In the previous two sections, we presented two complementary paradigms for incorporating distributional similarity information into cooccurrence probability estimates. Now, one cannot always draw conclusions about the relative fitness of two methods simply from head-to-head performance comparisons; for instance, one method might actually make use of inherently more informative statistics but produce worse results because the authors chose a suboptimal weighting scheme. In the present case, however, we are working with two models which, while representing opposite extremes in terms of generalization, share enough features to make the comparison meaningful.

First, both models use linear combinations of cooccurrence probabilities for similar entities. Second, each has a single free parameter k, and the two k's enjoy a natural inverse correspondence: a large number of clusters in the distributional clustering case results in only the closest centroids contributing significantly to the cooccurrence probability estimate, whereas a large number of neighbors in the nearest-neighbors averaging case means that relatively distant words are consulted. And finally, the two distance functions are similar in spirit: both are based on the KL divergence to some type of averaged distribution. We have thus attempted to eliminate functional form, number and type of parameters, and choice of distance function from playing a role in the comparison, increasing our confidence that we are truly comparing paradigms and not implementation details.

What are the fundamental differences between the two methods? From the foregoing discussion it is clear that distributional clustering is theoretically more satisfying and depends on a single model complexity parameter. On the other hand, nearest-neighbors averaging in its most general form offers more flexibility in defining the set of most similar words and their relative weights (Dagan et al., 1999). Also, its training phase requires little computation, as opposed to the iterative reestimation procedure employed to build the cluster model. But the key difference is the amount of data compression, or equivalently the amount of generalization, produced by the two models. Clustering yields a far more compact representation of the data when k, the model size parameter, is smaller than |N|. As noted above, various authors have conjectured that this data reduction must inevitably result in lower performance in comparison to nearest-neighbor methods, which store the most specific information for each individual word. Our experiments aim to explore this hypothesized generalization-accuracy tradeoff.

3 Evaluation

3.1 Methodology

We compared the two similarity-based estimation techniques at the following decision task, which evaluates their ability to choose the more likely of two unseen cooccurrences. Test instances consist of noun-verb-verb triples (n, v1, v2), where both (n, v1) and (n, v2) are unseen cooccurrences, but (n, v1) is more likely (how this is determined is discussed below). For each test instance, the language model probabilities p^1 = p^(v1|n) and p^2 = p^(v2|n) are computed; the result of the test is either correct (p^1 > p^2), incorrect (p^1 < p^2), or a tie (p^1 = p^2). Overall performance is measured by the error rate on the entire test set, defined as

  (1/T) (number of incorrect choices + (number of ties)/2) ,

where T is the number of test triples, not counting multiplicities.

Our global experimental design was to run ten-fold cross-validation experiments comparing distributional clustering, nearest-neighbors averaging, and Katz's backoff (the baseline) on the decision task just outlined. All results we report below are averages over the ten train-test splits. For each split, test triples were created from the held-out test set. Each model used the training set to calculate all basic quantities (e.g., p(v|n) for each verb and noun), but not to train k.
Then, the performance of each similarity-based model was evaluated on the test triples for a sequence of settings of k.

We expected that clustering performance with respect to the baseline would initially improve and then decline; that is, we conjectured that the model would overgeneralize at small k but overfit the training data at large k. In contrast, for nearest-neighbors averaging, we hypothesized monotonically decreasing performance curves: using only the very most similar words would yield high performance, whereas including more distant, uninformative words would result in lower accuracy. From previous experience, we believed that both methods would do well with respect to backoff.

3.2 Data

In order to implement the experimental methodology just described, we employed the following data preparation method:

1. Gather verb-object pairs using the CASS partial parser (Abney, 1996).

2. Partition the set of pairs into ten folds.

3. For each test fold,
   (a) discard seen pairs and duplicates;
   (b) discard pairs with unseen nouns or unseen verbs;
   (c) for each remaining (n, v1), create a triple (n, v1, v2) such that (n, v2) is less likely.

Step 3b is necessary because neither the similarity-based methods nor backoff handle novel unigrams gracefully. We instantiated this schema in three ways:

AP89 We retrieved 1,577,582 verb-object pairs from 1989 Associated Press (AP) newswire, discarding singletons (pairs occurring only once) as is commonly done in language modeling. We split this set by type3, which does not realistically model how new data occurs in real life, but does conveniently guarantee that the entire test set is unseen. In step 3c, all (n, v2) were found such that (n, v1) occurred at least twice as often as (n, v2) in the test fold; this gives reasonable reassurance that n is indeed more likely to cooccur with v1, even though (n, v2) is plausible (since it did in fact occur).

3When a corpus is split by type, all instances of a given type must end up in the same partition. If the split is by token, then instances of the same type may end up in different partitions. For example, for the corpus "a b a c", "a b" + "a c" is a valid split by token, but not by type.
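For reference, the decision-task scoring of section 3.1 is only a few lines of code; here p_hat stands for whichever estimator (backoff, clustering, or nearest-neighbors averaging) is being evaluated.

    def decision_error_rate(triples, p_hat):
        """Error rate over (n, v1, v2) triples, counting ties as half an error."""
        incorrect = ties = 0
        for n, v1, v2 in triples:
            p1, p2 = p_hat(v1, n), p_hat(v2, n)
            if p1 < p2:
                incorrect += 1
            elif p1 == p2:
                ties += 1
        return (incorrect + ties / 2) / len(triples)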
Instead of selecting v2 from cooccur- rences (n, v2) in the held-out set, test triples were constructed using v2 that never cooccurred with n in either the training or the test data. That is, each test triple represented a choice between a plausible cooccurrence (n, Vl) and an implausible ("fake") cooccurrence (n, v2). To ensure a large differential between the two al- ternatives, we further restricted (n, Vl) to occur at least twice (in the test fold). We also chose v2 from the set of 50 most frequent verbs, resulting in much higher error rates for backoff. 3.3 Results We now present evaluation results ordered by relative difficulty of the decision task. Figure 2 shows the performance of distribu- tional clustering and nearest-neighbors averag- ing on the AP90fake data (in all plots, error bars represent one standard deviation). Recall that the task here was to distinguish between plau- sible and implausible cooccurrences, making it 38 a somewhat easier problem than that posed in the AP89 and AP90unseen experiments. Both similarity-based methods improved on the base- line error (which, by construction of the test triples, was guaranteed to be high) by as much as 40%. Also, the curves have the shapes pre- dicted in section 3.1. all clu'sters nearest cluster 5'0 ,~0 ,~0 2~0 2;0 ~0 g0 ,~ k Figure 2: Average error reduction with respect to backoff on AP90fake test sets. We next examine our AP89 experiment re- sults, shown in Figure 3. The similarity-based methods clearly outperform backoff, with the best error reductions occurring at small k for both types of models. Nearest-neighbors aver- aging appears to have the advantage over dis- tributional clustering, and the nearest cluster method yields lower error rates than the aver- aged cluster method (the differences are statisti- cally significant according to the paired t-test). We might hypothesize that nearest-neighbors averaging is better in situations of extreme spar- sity of data. However, these results must be taken with some caution given their unrealistic type-based train-test split. A striking feature of Figure 3 is that all the curves have the same shape, which is not at all what we predicted in section 3.1. The reason ] 10 all clusters nearest cluster nearest neighbors 25 o , , , , , , 5 100 150 200 250 300 350 400 k Figure 3: Average error reduction with respect to backoff on AP89 test sets. 0.26 0.26 0.24 0.23 0.22 0.21 0.2 0.1~ that the very most similar words are appar- ently not as informative as slightly more dis- tant words is due to recall errors. Observe that if (n, vl) and (n, v2) are unseen in the train- ing data, and if word n' has very small Jensen- Shannon divergence to n, then chances are that n ~ also does not occur with either Vl or v2, re- sulting in an estimate of zero probability for both test cooccurrences. Figure 4 proves that this is the case: if zero-ties are ignored, then the error rate curve for nearest-neighbors averaging has the expected shape. Of course, clustering is not prone to this problem because it automati- cally smoothes its probability estimates. average error over APe9, normal vs. precision results nearest neighbors nearest neighbors. Ignodng recall errors • ' 0 ' ' ' ' ' ' 100 150 200 250 300 350 400 k Figure 4: Average error (not error reduction) using nearest-neighbors averaging on AP89, showing the effect of ignoring recall mistakes. Finally, Figure 5 presents the results of 39 our AP90unseen experiments. 
Again, the use of similarity information provides better-than- baseline performance, but, due to the relative difficulty of the decision task in these exper- iments (indicated by the higher baseline er- ror rate with respect to AP89), the maximum average improvements are in the 6-8% range. The error rate reductions posted by weighted- average clustering, nearest-centroid clustering, and nearest-neighbors averaging are all well within the standard deviations of each other. I all clusters nearest cluster nearest neighbors -2 0 50 100 150 200 250 300 350 400 k Figure 5: Average error reduction with respect to backoff on AP90unseen test sets. As in the AP89 case, the nonmonotonicity of the nearest- neighbors averaging curve is due to recall errors. 4 Conclusion In our experiments, the performances of distri- butional clustering and nearest-neighbors aver- aging proved to be in general very similar: only in the unorthodox AP89 setting did nearest- neighbors averaging clearly yield better error rates. Overall, both methods achieved peak per- formances at relatively small values of k, which is gratifying from a computational point of view. Some questions remain. We observe that distributional clustering seems to suffer higher variance. It is not clear whether this is due to poor estimates of the KL divergence to cen- troids, and thus cluster membership, for rare nouns, or to noise sensitivity in the search for cluster splits. Also, weighted-average clustering never seems to outperform the nearest-centroid method, suggesting that the advantages of prob- abilistic clustering over "hard" clustering may be computational rather than in modeling el- fectiveness (Boolean clustering is NP-complete (Brucker, 1978)). Last but not least, we do not yet have a principled explanation for the similar performance of nearest-neighbors averaging and distributional clustering. Further experiments, especially in other tasks such as language mod- eling, might help tease apart the two methods or better understand the reasons for their simi- larity. 5 Acknowledgements We thank the anonymous reviewers for their helpful comments and Steve Abney for help with extracting verb-object pairs with his parser CASS. References Steven Abney. 1996. Partial parsing via finite-state cascades. In Proceedings of the ESSLLI '96 Ro- bust 15arsing Workshop. L. Douglas Baker and Andrew Kachites McCallum. 1998. Distributional clustering of words for text classification. In Plst Annual International A CM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '98), pages 96- 103. Peter F. Brown, Vincent J. DellaPietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural lan- guage. Computational Linguistics, 18(4):467-479, December. Peter Brucker. 1978. On the complexity of clus- tering problems. In Rudolf Henn, Bernhard H. Korte, and Werner Oettli, editors, Optimization and Operations Research, number 157 in Lecture Notes in Economics and Mathematical Systems. Springer-Verlag, Berlin. Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating proba- bilities of English bigrams. Computer Speech and Language, 5:19-54. Ido Dagan, Shaul Marcus, and Shaul Markovitch. 1995. Contextual word similarity and estimation from sparse data. Computer Speech and Lan- guage, 9:123-152. Ido Dagan, Lillian Lee, and Fernando Pereira. 1999. Similarity-based models of word cooccurrence probabilities. 
Machine Learning, 34(1-3):43-69. Thomas Hofmann, Jan Puzicha, and Michael I. Jor- dan. 1999. Learning from dyadic data. In Ad- vances in Neural Information Processing Systems 11. MIT Press. To appear. Nancy Ide and Jean Veronis. 1998. Introduction to the special issue on word sense disambiguation: 40 The state of the art. Computational Linguistics, 24(1):1-40, March. Frederick Jelinek and Robert L. Mercer. 1980. Inter- polated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, Amsterdam, May. North Holland. Slava M. Katz. 1987. Estimation of probabilities from sparse data for the language model com- ponent of a speech recognizer. IEEE Transac- tions on Acoustics, Speech and Signal Processing, ASSP-35(3):400-401, March. Lillian Lee. 1999. Measures of distributional simi- larity. In 37th Annual Meeting of the ACL, Som- erset, New Jersey. Distributed by Morgan Kauf- mann, San Francisco. Jianhua Lin. 1991. Divergence measures based on the Shannon entropy. IEEE Transactions on In- formation Theory, 37(1):145-151. Hermann Ney and Ute Essen. 1993. Estimating 'small' probabilities by leaving-one-out. In Third European Conference On Speech Communication and Technology, pages 2239-2242, Berlin, Ger- many. Fernando C. N. Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In 31st Annual Meeting of the ACL, pages 183-190, Somerset, New Jersey. Association for Computational Linguistics. Distributed by Mor- gan Kaufmann, San Francisco. C. Radhakrishna Rao. 1982. Diversity: Its measure- ment, decomposition, apportionment and analy- sis. SankyhS: The Indian Journal of Statistics, 44(A):1-22. Hinrich Schiitze. 1993. Word space. In S. J. Hanson, J. D. Cowan, and C. L. Giles, editors, Advances in Neural Information Processing Systems 5, pages 895-902. Morgan Kaufmann, San Francisco.
1999
5
Projecting Corpus-Based Semantic Links on a Thesaurus*

Emmanuel Morin
IRIN, 2, chemin de la houssinière - BP 92208, 44322 NANTES Cedex 3, FRANCE
morin@irin.univ-nantes.fr

Christian Jacquemin
LIMSI-CNRS, BP 133, 91403 ORSAY Cedex, FRANCE
jacquemin@limsi.fr

Abstract

Hypernym links acquired through an information extraction procedure are projected on multi-word terms through the recognition of semantic variations. The quality of the projected links resulting from corpus-based acquisition is compared with projected links extracted from a technical thesaurus.

1 Motivation

In the domain of corpus-based terminology, there are two main topics of research: term acquisition (the discovery of candidate terms) and automatic thesaurus construction (the addition of semantic links to a term bank). Several studies have focused on the automatic acquisition of terms from corpora (Bourigault, 1993; Justeson and Katz, 1995; Daille, 1996). The output of these tools is a list of unstructured multi-word terms. On the other hand, contributions to the automatic construction of thesauri provide classes or links between single words. Classes are produced by clustering techniques based on similar word contexts (Schütze, 1993) or similar distributional contexts (Grefenstette, 1994). Links result from the automatic acquisition of relevant predicative or discursive patterns (Hearst, 1992; Basili et al., 1993; Riloff, 1993). Predicative patterns yield predicative relations such as cause or effect, whereas discursive patterns yield non-predicative relations such as generic/specific or synonymy links.

*The experiments presented in this paper were performed on [AGRO], a 1.3-million word French corpus of scientific abstracts in the agricultural domain. The termer used for multi-word term acquisition is ACABIT (Daille, 1996). It has produced 15,875 multi-word terms composed of 4,194 single words. For expository purposes, some examples are taken from [MEDIC], a 1.56-million word English corpus of scientific abstracts in the medical domain.

The main contribution of this article is to bridge the gap between term acquisition and thesaurus construction by offering a framework for organizing multi-word candidate terms with the help of automatically acquired links between single-word terms. Through the extraction of semantic variants, the semantic links between single words are projected on multi-word candidate terms. As shown in Figure 1, the input to the system is a tagged corpus. A partial ontology between single-word terms and a set of multi-word candidate terms are produced after the first step. In a second step, layered hierarchies of multi-word terms are constructed through corpus-based conflation of semantic variants. Even though we focus here on generic/specific relations, the method would apply similarly to any other type of semantic relation.

[Figure 1: Overview of the system for hierarchy projection.]

The study is organized as follows. First, the method for corpus-based acquisition of semantic links is presented. Then, the tool for semantic term normalization is described, together with its application to semantic link projection. The last section analyzes the results on an agricultural corpus and evaluates the quality of the induced semantic links.

2 Iterative Acquisition of Hypernym Links

We first present the system for corpus-based information extraction that produces hypernym links between single words. This system is built on previous work on the automatic extraction of hypernym links through shallow parsing (Hearst, 1992; Hearst, 1998). In addition, our system incorporates a technique for the automatic generalization of lexico-syntactic patterns. As illustrated by Figure 2, the system has two functionalities:

1. The corpus-based acquisition of lexico-syntactic patterns with respect to a specific conceptual relation, here hypernymy.

2. The extraction of pairs of conceptually related terms through a database of lexico-syntactic patterns.

Shallow Parser and Classifier

A shallow parser is complemented with a classifier for the purpose of discovering new patterns through corpus exploration. This procedure, inspired by (Hearst, 1992; Hearst, 1998), is composed of 7 steps:

1. Select manually a representative conceptual relation, e.g. the hypernym relation.

2. Collect a list of pairs of terms linked by the previous relation. This list of pairs of terms can be extracted from a thesaurus, a knowledge base, or specified manually. For instance, the hypernym relation neocortex IS-A vulnerable area is used here.

3. Find sentences in which conceptually related terms occur. These sentences are lemmatized, and noun phrases are identified. They are represented as lexico-syntactic expressions. For instance, the previous relation HYPERNYM(vulnerable area, neocortex) is used to extract from the corpus [MEDIC] the sentence Neuronal damage were found in the selectively vulnerable areas such as neocortex, striatum, hippocampus and thalamus. The sentence is then transformed into the following lexico-syntactic expression:1

  NP find in NP such as LIST    (1)

4. Find a common environment that generalizes the lexico-syntactic expressions extracted at the third step. This environment is calculated with the help of a similarity function and a generalization procedure that produce candidate lexico-syntactic patterns. For instance, from the previous expression and at least one other similar expression, the following candidate lexico-syntactic pattern is deduced:

  NP such as LIST    (2)

5. Validate the candidate lexico-syntactic patterns by an expert.

6. Use these validated patterns to extract additional candidate pairs of terms.

7. Validate the candidate pairs of terms by an expert, and go to step 3.

1NP stands for a noun phrase, and LIST for a succession of noun phrases.

Through this technique, eleven of the lexico-syntactic patterns extracted from [AGRO] were validated by an expert. These patterns are exploited by the information extractor, which produces 774 different pairs of conceptually related terms. 82 of these pairs were manually selected for the subsequent steps of our study because they construct significant pieces of ontology. They correspond to ten topics (trees, chemical elements, cereals, enzymes, fruits, vegetables, polyols, polysaccharides, proteins and sugars).

Automatic Classification of Lexico-syntactic Patterns

Let us detail the fourth step of the preceding algorithm, which automatically acquires lexico-syntactic patterns by clustering similar patterns.
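Before turning to that, here is an illustration of step 6: applying the validated pattern (2) to a chunked sentence in order to extract candidate hypernym pairs. The Python fragment below is a sketch; the bracket notation for noun phrases is our own stand-in for the shallow parser's output.

    import re

    def extract_hypernym_pairs(chunked_sentence):
        """Instances of HYPERNYM(x, y) licensed by the pattern 'NP such as LIST'."""
        m = re.search(r"\[([^\]]+)\] such as (.+)", chunked_sentence)
        if not m:
            return []
        hypernym = m.group(1)
        return [(hypernym, h) for h in re.findall(r"\[([^\]]+)\]", m.group(2))]

    sent = ("[Neuronal damage] were found in the selectively [vulnerable areas] "
            "such as [neocortex], [striatum], [hippocampus] and [thalamus]")
    print(extract_hypernym_pairs(sent))
    # [('vulnerable areas', 'neocortex'), ('vulnerable areas', 'striatum'), ...]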
above, pattern (1) is acquired from the relation HYPER- NYM( vulnerable area, neocortex ). Similarly, from the relation HYPERNYM(complication, infection), the sentence: Therapeutic complications such as infection, recurrence, and loss of support of the articular surface have continued to plague the treatment of giant cell tumor is extracted through corpus exploration. A second lexico-syntactic expression is inferred: NP such as LIST continue to plague NP (3) Lexico-syntactic expressions (1) and (3) can be abstracted as: 2 A = AIA2 " • Aj • .. Ak • "An HYPERNYM(Aj, Ak), k > j + 1 and (4) B : B1 B2 "" Bj .... B k .... B n, HYPERNYM(Bj,, B k,), k' > j' + 1 (5) Let Sire(A, B) be a function measuring the similarity of lexico-syntactic expressions A and B that relies on the following hypothesis: Hypothesis 2.1 (Syntactic isomorphy) If two lexico-syntactic expressions A and B represent the same pattern then, the items Aj and Bj,, and the items Ak and B k, have the same syntactic function. 2Ai is the ith item of the lexico-syntactic expression A, and n is the number of items in A. An item can be either a lemma, a punctuation mark, a symbol, or a tag (N P, LIST, etc.). The relation k > j 4-1 states that there is at least one item between Aj and Ak. I winl(A) i wiFq_)ln2fA win3(A) I A = A1 A2 ...... Aj ... Ak ......... An B = B1 B2 ... Bj'. ....... Bk'... Bn' Figure 3: Comparison of two expressions Let Winl(A) be the window built from the first through j-1 words, Win2 (A) be the window built from words ranking from j+l th through k- lth words, and Win3(A) be the window built from k+lth through nth words (see Figure 3). The similarity function is defined as follows: 3 Sim(A, B) = E Sim(Wini(A), Wini(B)) (6) i=1 The function of similarity between lexico- syntactic patterns Sim(Wini(A),Wini(B)) is defined experimentally as a function of the longest common string. After the evaluation of the similarity mea- sure, similar expressions are clustered. Each cluster is associated with a candidate pattern. For instance, the sentences introduced earlier generate the unique candidate lexico-syntactic pattern: NP such as LIST (7) We now turn to the projection of automat- ically extracted semantic links on multi-word terms. 3 3For more information on the PROMI~THEE system, in 391 3 Semantic Term Normalization The 774 hypernym links acquired through the iterative algorithm described in the preceding section are thus distributed: 24.5% between two multi-word terms, 23.6% between two single- word terms, and the remaining ones between a single-word term and a multi-word term. Since the terms produced by the termer are only multi-word terms, our purpose in this section is to design a technique for the expansion of links between single-word terms to links be- tween multi-word terms. Given a link between fruit and apple, our purpose is to infer a simi- lar link between apple juice and fruit juice, be- tween any apple N and fruit N, or between ap- ple N1 and fruit N2 with N1 semantically related to N 2. Semantic Variation The extension of semantic links between sin- gle words to semantic links between multi-word terms is semantic variation and the process of grouping semantic variants is semantic normal- ization. The fact that two multi-word terms wlw2 and w 1~ w 2~ contain two semantically- related word pairs (wl,w~) and (w2,w~) does not necessarily entail that Wl w2 and w~ w~ are se- mantically close. 
The three following require- ments should be met: Syntactic isomorphy The correlated words must occupy similar syntactic positions: both must be head words or both must be arguments with similar thematic roles. For example, procddd d'dlaboration (process of elaboration) is not a variant dlaboration d'une mdthode (elaboration of a process) even though procddd and mdthode are syn- onymous, because procddd is the head word of the first term while mdthode is the argu- ment in the second term. Unitary semantic relationship The corre- lated words must have similar meanings in both terms. For example, analyse du rayonnement (analysis of the radiation) is not semantically related with analyse de l'influence (analysis of the influence) even particular a complete description of the generalization patterns process, see the following related publication: (Morin, 1999). though rayonnement and influence are se- mantically related. The loss of semantic relationship is due to the polysemy of ray- onnement in French which means influence when it concerns a culture or a civilization and radiation in physics. Holistic semantic relationship The third criterion verifies that the global meanings of the compounds are close. For example, the terms inspection des aliments (food inspection) and contrSle alimentaire (food control) are not synonymous. The first one is related to the quality of food and the second one to the respect of norms. The three preceding constraints can be trans- lated into a general scheme representing two semantically-related multi-word terms: Definition 3.1 (Semantic variants) Two multi-word terms Wl W2 and W~l w~2 are semantic variants of each other if the three following constraints are satisfied: 4 1. wl and Wll are head words and w2 and wl2 are arguments with similar thematic roles. 2. Some type of semantic relation $ holds be- tween Wl and w~ and/or between w2 and wl2 (synonymy, hypernymy, etc.). The non semantically related words are either iden- tical or morphologically related. 3. The compounds wl w2 and Wrl wt2 are also linked by the semantic relation S. Corpus-based Semantic Normalization The formulation of semantic variation given above is used for corpus-based acquisition of semantic links between multi-word terms. For each candidate term Wl w2 produced by the ter- mer, the set of its semantic variants satisfying the constraints of Definition 3.1 is extracted from a corpus. In other words, a semantic normalization of the corpus is performed based on corpus-based semantic links between single words and variation patterns defined as all the 4wl w2 is an abbreviated notation for a phrase that contains the two content words wl and w2 such that one of both is the head word and the other one an argument. For the sake of simplicity, only binary terms are consid- ered, but our techniques would straightforwardly extend to n-ary terms with n > 3. 392 licensed combinations of morphological, syntac- tic and semantic links. An exhaustive list of variation patterns is pro- vided for the English language in (Jacquemin, 1999). Let us illustrate variant extraction on a sample variation: 5 Nt Prep N2 -+ M(N1,N) Adv ? A ? Prep_Ar.t ? A ? S(N2) Through this pattern, a semantic variation is found between composition du fruit (fruit com- position) and composgs chimiques de la graine (chemical compounds of the seed). 
An exhaustive list of variation patterns is provided for the English language in (Jacquemin, 1999). Let us illustrate variant extraction on a sample variation:5

N1 Prep N2 -> M(N1, N) Adv? A? Prep Art? A? S(N2)

Through this pattern, a semantic variation is found between composition du fruit (fruit composition) and composés chimiques de la graine (chemical compounds of the seed). It relies on the morphological relation between the nouns composé (compound, M(N1, N)) and composition (composition, N1), and on the semantic relation (a part/whole relation) between graine (seed, S(N2)) and fruit (fruit, N2). In addition to the morphological and semantic relations, the categories of the words in the semantic variant composés/N chimiques/A de/Prep la/Art graine/N satisfy the regular expression (the categories that are realized are the ones marked on each word).

5The symbols for part-of-speech categories are N (Noun), A (Adjective), Art (Article), Prep (Preposition), Punc (Punctuation), Adv (Adverb).
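Read as a regular expression over part-of-speech tags, the sample variation can be matched mechanically. A rough Python sketch follows; the word/TAG string encoding and the same_family (M) and sem_linked (S) lookups are illustrative assumptions standing in for the morphological and semantic resources.

```python
import re

# a candidate variant as a tagged string,
# e.g. "composés/N chimiques/A de/Prep la/Art graine/N"
VARIATION = re.compile(
    r"^(?P<n1>\S+)/N(?: \S+/Adv)?(?: \S+/A)?"
    r" \S+/Prep(?: \S+/Art)?(?: \S+/A)? (?P<n2>\S+)/N$"
)

def instantiates(tagged, n1, n2, same_family, sem_linked):
    """Does the tagged sequence realize M(N1,N) Adv? A? Prep Art? A? S(N2)?"""
    m = VARIATION.match(tagged)
    return (bool(m)
            and same_family(m.group("n1"), n1)   # morphological relation M
            and sem_linked(m.group("n2"), n2))   # semantic relation S
```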
Related Work

Semantic normalization is presented as semantic variation in (Hamon et al., 1998) and consists in finding relations between multi-word terms based on semantic relations between single-word terms. Our approach differs from this preceding work in that we exploit domain-specific corpus-based links instead of general-purpose dictionary synonymy relationships. Another original contribution of our approach is that we exploit simultaneously morphological, syntactic, and semantic links in the detection of semantic variation in a single and cohesive framework. We thus cover a larger spectrum of linguistic phenomena: morpho-semantic variations such as contenu en isotope (isotopic content), a variant of teneur isotopique (isotopic composition); syntactico-semantic variants such as contenu en isotope, a variant of teneur en isotope (isotopic content); and morpho-syntactico-semantic variants such as dureté de la viande (toughness of the meat), a variant of résistance et la rigidité de la chair (lit. resistance and stiffness of the flesh).

4 Projection of a Single Hierarchy on Multi-word Terms

Depending on the semantic data, two modes of representation are considered: a link mode in which each semantic relation between two words is expressed separately, and a class mode in which semantically related words are grouped into classes. The first mode corresponds to synonymy links in a dictionary or to generic/specific links in a thesaurus such as (AGROVOC, 1995). The second mode corresponds to the synsets in WordNet (Fellbaum, 1998) or to the semantic data provided by the information extractor. Each class is composed of hyponyms sharing a common hypernym (named co-hyponyms) and all their common hypernyms. The list of classes is given in Table 1.

Table 1: The twelve semantic classes acquired from the [AGRO] corpus

Classes           | Hypernyms and co-hyponyms
trees             | arbre, bouleau, chêne, érable, hêtre, orme, peuplier, pin, poirier, pommier, sapin, épicéa
chemical elements | élément, calcium, potassium, magnésium, manganèse, sodium, arsenic, chrome, mercure, sélénium, étain, aluminium, fer, cadmium, cuivre
cereals           | céréale, maïs, mil, sorgho, blé, orge, riz, avoine
enzymes           | enzyme, aspartate, lipase, protéase
fruits            | fruit, banane, cerise, citron, figue, fraise, kiwi, noix, olive, orange, poire, pomme, pêche, raisin
olives            | fruit, olive, Amellau, Chemlali, Chétoui, Lucques, Picholine, Sevillana, Sigoise
apples            | fruit, pomme, Cortland, Delicious, Empire, McIntosh, Spartan
vegetables        | légume, asperge, carotte, concombre, haricot, pois, tomate
polyols           | polyol, glycérol, sorbitol
polysaccharides   | polysaccharide, amidon, cellulose, styrène, éthylbenzène
proteins          | protéine, chitinase, glucanase, thaumatin-like, fibronectine, glucanase
sugars            | sucre, lactose, maltose, raffinose, glucose, saccharose

Analysis of the Projection

Through the projection of single-word hierarchies on multi-word terms, the semantic relation can be modified in two ways:

Transfer. The links between concepts (such as fruits) are transferred to another conceptual domain (such as juices) located at a different place in the taxonomy. Thus the link between fruit and apple is transferred to a link between fruit juice and apple juice, two hyponyms of juice. This modification results from a semantic normalization of argument words.

Specialization. The links between concepts (such as fruits) are specialized into parallel relations between more specific concepts located lower in the hierarchy (such as dried fruits). Thus the link between fruit and apple is specialized as a link between dried fruits and dried apples. This modification is obtained through semantic normalization of head words.

The Transfer or the Specialization of a given hierarchy between single words to a hierarchy between multi-word terms generally does not preserve the full set of links. In Figure 4, the initial hierarchy between plant products is only partially projected through Transfer on juices or dryings of plant products, and through Specialization on fresh and dried plant products. Since multi-word terms are more specific than single-word terms, they tend to occur less frequently in a corpus. Thus only some of the possible projected links are observed through corpus exploration.

[Figure 4: Projected links on multi-word terms (the hierarchy is extracted from (AGROVOC, 1995)). The figure shows the hierarchy under produit végétal (plant products) with Transfer projections such as jus de fruit (fruit juice), jus de pomme (apple juice) and séchage de céréale (cereal drying), and Specialization projections such as fruit frais (fresh fruits), fruit sec (dried fruits), raisin frais (fresh grapes) and raisin sec (dried grapes).]
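Transfer and Specialization amount to substituting the linked words into the argument and head slots of attested terms. A hedged Python outline follows; representing terms as binary (head, argument) pairs, and ignoring the morphological and semantic relaxations of the full method, are assumptions of this sketch.

```python
def project(links, corpus_terms):
    """Project single-word links (hyper, hypo) onto attested binary terms."""
    terms = set(corpus_terms)
    transfers, specializations = [], []
    for hyper, hypo in links:
        for head, arg in terms:
            # Transfer: the link fills the argument slot,
            # e.g. fruit/apple -> (juice, fruit) and (juice, apple)
            if arg == hyper and (head, hypo) in terms:
                transfers.append(((head, hyper), (head, hypo)))
            # Specialization: the link fills the head slot,
            # e.g. fruit/apple -> (fruit, dried) and (apple, dried)
            if head == hyper and (hypo, arg) in terms:
                specializations.append(((hyper, arg), (hypo, arg)))
    return transfers, specializations
```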
5 Evaluation

Projection of Corpus-based Links

Table 2 shows the results of the projection of corpus-based links. The first column indicates the semantic class from Table 1. The next three columns indicate the number of multi-word links projected through Specialization, the number of correct links and the corresponding value of precision. The same values are provided for Transfer projections in the following three columns.

Table 2: Precision of the projection of corpus-based links

                  | Specialization           | Transfer
Classes           | Occ.  Correct  Precision | Occ.  Correct  Precision
trees             | 0     -        -         | 3     3        100.0%
chemical elements | 8     4        50.0%     | 101   99       98.0%
cereals           | 6     1        16.7%     | 76    65       85.5%
enzymes           | 3     3        100.0%    | 29    20       69.0%
fruits            | 32    20       62.5%     | 214   172      80.4%
olives            | 4     1        25.0%     | 10    8        80.0%
apples            | 4     1        25.0%     | 16    12       75.0%
vegetables        | 3     2        66.7%     | 3     3        100.0%
polyols           | 0     -        -         | 0     -        -
polysaccharides   | 3     1        33.3%     | 13    11       84.6%
proteins          | 0     -        -         | 8     6        75.0%
sugars            | 13    11       84.6%     | 34    26       76.5%
Total             | 77    45       58.4%     | 507   425      83.8%

Transfer projections are more frequent (507 links) than Specializations (77 links). Some classes, such as chemical elements, cereals and fruits, are very productive because they are composed of generic terms. Other classes, such as trees, vegetables, polyols or proteins, yield few semantic variations. They tend to contain more specific or less frequent terms.

The average precision of Specializations is relatively low (58.4% on average) with a high standard deviation (between 16.7% and 100%). Conversely, the precision of Transfers is higher (83.8% on average) with a smaller standard deviation (between 69.0% and 100%). Since Transfers are almost ten times more numerous than Specializations, the overall precision of projections is high: 80.5%.

In addition to relations between multi-word terms, the projection of single-word hierarchies on multi-word terms yields new candidate terms: the variants of candidate terms produced at the first step. For instance, séchage de la banane (banana drying) is a semantic variant of séchage de fruits (fruit drying) which is not provided by the first step of the process. As in the case of links, the production of multi-word terms is more important with Transfers (345 multi-word terms) than Specializations (72 multi-word terms) (see Table 3). In all, 417 relevant multi-word terms are acquired through semantic variation.

Table 3: Production of new terms and correct links through the projection of links

               | Corpus-based links | Thesaurus-based links
               | Terms  Relations   | Terms  Relations
Initial links  | 96     94          | 162    159
Specialization | 72     30          | 49     18
Transfer       | 345    167         | 256    70
Total          | 417    197         | 305    88

Comparison with AGROVOC Links

In order to compare the projection of corpus-based links with the projection of links extracted from a thesaurus, a similar study was made using semantic links from the thesaurus (AGROVOC, 1995).6 The results of this second experiment are very similar to the first experiment. Here, the precision of Specializations is similar (57.8% for 45 links inferred), while the precision of Transfers is slightly lower (72.4% for 326 links inferred). Interestingly, these results show that links resulting from the projection of a thesaurus have a significantly lower precision (70.6%) than projected corpus-based links (80.5%).

6(AGROVOC, 1995) is composed of 15,800 descriptors, but only single-word terms found in the corpus [AGRO] are used in this evaluation (1,580 descriptors). From these descriptors, 168 terms representing 4 topics (cultivation, plant anatomy, plant products and flavorings) are selected for the purpose of evaluation.

A study of Table 3 shows that, while 197 projected links are produced from 94 corpus-based links (ratio 2.1), only 88 such projected links are obtained through the projection of 159 links from AGROVOC (ratio 0.6). Actually, the ratio of projected links is higher with corpus-based links than with thesaurus links, because corpus-based links better represent the ontology embodied in the corpus and associate more easily with other single words to produce projected hierarchies.
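The headline percentages and ratios quoted above follow directly from the counts in Tables 2 and 3; as a quick arithmetic check (no new data, just the published counts):

```python
spec_occ, spec_ok = 77, 45       # Specialization projections, Table 2
trans_occ, trans_ok = 507, 425   # Transfer projections, Table 2

print(f"{(spec_ok + trans_ok) / (spec_occ + trans_occ):.1%}")  # 80.5% overall
print(f"{197 / 94:.1f}")   # 2.1  projected/initial ratio, corpus-based links
print(f"{88 / 159:.1f}")   # 0.6  projected/initial ratio, AGROVOC links
```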
1999
50
Acquiring Lexical Generalizations from Corpora: A Case Study for Diathesis Alternations Maria Lapata School of Cognitive Science Division of Informatics, University of Edinburgh 2 Buccleuch Place, Edinburgh EH8 9LW, UK [email protected] Abstract This paper examines the extent to which verb diathesis alternations are empirically attested in corpus data. We automatically acquire alternating verbs from large balanced corpora by using partial- parsing methods and taxonomic information, and discuss how corpus data can be used to quantify lin- guistic generalizations. We estimate the productiv- ity of an alternation and the typicality of its mem- bers using type and token frequencies. 1 Introduction Diathesis alternations are changes in the realization of the argument structure of a verb that are some- times accompanied by changes in meaning (Levin, 1993). The phenomenon in English is illustrated in (1)-(2) below. (1) a. John offers shares to his employees. b. John offers his employees shares. (2) a. Leave a note for her. b. Leave her a note. Example (1) illustrates the dative alternation, which is characterized by an alternation between the prepositional frame 'V NP1 to NP2' and the double object frame 'V NP 1 NP2'. The benefactive alterna- tion (cf. (2)) is structurally similar to the dative, the difference being that it involves the preposition for rather than to. Levin (1993) assumes that the syntactic realiza- tion of a verb's arguments is directly correlated with its meaning (cf. also Pinker (1989) for a similar pro- posal). Thus one would expect verbs that undergo the same alternations to form a semantically co- herent class. Levin's study on diathesis alternations has influenced recent work on word sense disam- biguation (Dorr and Jones, 1996), machine transla- tion (Dang et al., 1998), and automatic lexical ac- quisition (McCarthy and Korhonen, 1998; Schulte im Walde, 1998). The objective of this paper is to investigate the ex- tent to which diathesis alternations are empirically attested in corpus data. Using the dative and bene- factive alternations as a test case we attempt to de- termine: (a) if some alternations are more frequent than others, (b) if alternating verbs have frame pref- erences and (c) what the representative members of an alternation are. In section 2 we describe and evaluate the set of automatic methods we used to acquire verbs under- going the dative and benefactive alternations. We assess the acquired frames using a filtering method presented in section 3. The results are detailed in section 4. Sections 5 and 6 discuss how the derived type and token frequencies can be used to estimate how productive an alternation is for a given verb se- mantic class and how typical its members are. Fi- nally, section 7 offers some discussion on future work and section 8 conclusive remarks. 2 Method 2.1 The parser The part-of-speech tagged version of the British Na- tional Corpus (BNC), a 100 million word collec- tion of written and spoken British English (Burnard, 1995), was used to acquire the frames characteris- tic of the dative and benefactive alternations. Sur- face syntactic structure was identified using Gsearch (Keller et al., 1999), a tool which allows the search of arbitrary POS-tagged corpora for shallow syntac- tic patterns based on a user-specified context-free grammar and a syntactic query. It achieves this by combining a left-corner parser with a regular ex- pression matcher. 
Depending on the grammar specification (i.e., recursive or not), Gsearch can be used as a full context-free parser or a chunk parser. Depending on the syntactic query, Gsearch can parse full sentences, identify syntactic relations (e.g., verb-object, adjective-noun) or even single words (e.g., all indefinite pronouns in the corpus). Gsearch outputs all corpus sentences containing substrings that match a given syntactic query.

Given two possible parses that begin at the same point in the sentence, the parser chooses the longest match. If there are two possible parses that can be produced for the same substring, only one parse is returned. This means that if the number of ambiguous rules in the grammar is large, the correctness of the parsed output is not guaranteed.

2.2 Acquisition

We used Gsearch to extract tokens matching the patterns 'V NP1 NP2', 'V NP1 to NP2', and 'V NP1 for NP2' by specifying a chunk grammar for recognizing the verbal complex and NPs. POS-tags were retained in the parser's output, which was post-processed to remove adverbials and interjections.

Examples of the parser's output are given in (3). Although there are cases where Gsearch produces the right parse (cf. (3a)), the parser wrongly identifies as instances of the double object frame tokens containing compounds (cf. (3b)), bare relative clauses (cf. (3c)) and NPs in apposition (cf. (3d)). Sometimes the parser attaches prepositional phrases to the wrong site (cf. (3e)) and cannot distinguish between arguments and adjuncts (cf. (3f)) or between different types of adjuncts (e.g., temporal (cf. (3f)) versus benefactive (cf. (3g))). Erroneous output also arises from tagging mistakes.

(3) a. The police driver [V shot] [NP Jamie] [NP a look of enquiry] which he missed.
    b. Some also [V offer] [NP a free bus] [NP service], to encourage customers who do not have their own transport.
    c. A Jaffna schoolboy [V shows] [NP a drawing] [NP he] made of helicopters strafing his home town.
    d. For the latter catalogue Barr [V chose] [NP the Surrealist writer] [NP Georges Hugnet] to write a historical essay.
    e. It [V controlled] [NP access] [PP to [NP the vault]].
    f. Yesterday he [V rang] [NP the bell] [PP for [NP a long time]].
    g. Don't [V save] [NP the bread] [PP for [NP the birds]].

We identified erroneous subcategorization frames (cf. (3b)-(3d)) by using linguistic heuristics and a process for compound noun detection (cf. section 2.3). We disambiguated the attachment site of PPs (cf. (3e)) using Hindle and Rooth's (1993) lexical association score (cf. section 2.4). Finally, we recognized benefactive PPs (cf. (3g)) by exploiting the WordNet taxonomy (cf. section 2.5).

2.3 Guessing the double object frame

We developed a process which assesses whether the syntactic patterns (called cues below) derived from the corpus are instances of the double object frame.

Linguistic Heuristics. We applied several heuristics to the parser's output which determined whether corpus tokens were instances of the double object frame. The 'Reject' heuristics below identified erroneous matches (cf. (3b-d)), whereas the 'Accept' heuristics identified true instances of the double object frame (cf. (3a)).

1. Reject if cue contains at least two proper names adjacent to each other (e.g., killed Henry Phipps).
2. Reject if cue contains possessive noun phrases (e.g., give a showman's award).
3. Reject if cue's last word is a pronoun or an anaphor (e.g., ask the subjects themselves).
4. Accept if verb is followed by a personal or indefinite pronoun (e.g., found him a home).
5. Accept if verb is followed by an anaphor (e.g., made herself a snack).
6. Accept if cue's surface structure is either 'V MOD1 NP MOD NP' or 'V NP MOD NP' (e.g., send Bailey a postcard).
7. Cannot decide if cue's surface structure is 'V MOD* N N+' (e.g., offer a free bus service).

1Here MOD represents any prenominal modifier (e.g., articles, pronouns, adjectives, quantifiers, ordinals).

Compound Noun Detection. Tokens identified by heuristic (7) were dealt with separately by a procedure which guesses whether the nouns following the verb are two distinct arguments or parts of a compound. This procedure was applied only to noun sequences of length 2 and 3 which were extracted from the parser's output2 and compared against a compound noun dictionary (48,661 entries) compiled from WordNet. 13.9% of the noun sequences were identified as compounds in the dictionary.

2Tokens containing noun sequences with length larger than 3 (450 in total) were considered negative instances of the double object frame.

For sequences of length 2 not found in WordNet, we used the log-likelihood ratio (G-score) to estimate the lexical association between the nouns, in order to determine if they formed a compound noun. We preferred the log-likelihood ratio to other statistical scores, such as the association ratio (Church and Hanks, 1990) or chi-square, since it adequately takes into account the frequency of the co-occurring words and is less sensitive to rare events and corpus size (Dunning, 1993; Daille, 1996). We assumed that two nouns cannot be disjoint arguments of the verb if they are lexically associated. On this basis, tokens were rejected as instances of the double object frame if they contained two nouns whose G-score had a p-value less than 0.05.

A two-step process was applied to noun sequences of length 3: first their bracketing was determined and second the G-score was computed between the single noun and the 2-noun sequence. We inferred the bracketing by modifying an algorithm initially proposed by Pustejovsky et al. (1993). Given three nouns n1, n2, n3, if either [n1 n2] or [n2 n3] are in the compound noun dictionary, we built structures [[n1 n2] n3] or [n1 [n2 n3]] accordingly; if both [n1 n2] and [n2 n3] appear in the dictionary, we chose the most frequent pair; if neither [n1 n2] nor [n2 n3] appear in WordNet, we computed the G-score for [n1 n2] and [n2 n3] and chose the pair with the highest value (p < 0.05). Tables 1 and 2 display a random sample of the compounds the method found (p < 0.05).

Table 1: Random sample of two-word compounds

G-score | 2-word compound
1967.68 | bank manager
775.21  | tax liability
87.02   | income tax
45.40   | book reviewer
30.58   | designer gear
29.94   | safety plan
24.04   | drama school

Table 2: Random sample of three-word compounds

G-score | 3-word compound
574.48  | [[energy efficiency] office]
382.92  | [[council tax] bills]
77.78   | [alcohol [education course]]
48.84   | [hospital [out-patient department]]
36.44   | [[turnout suppressor] function]
32.35   | [[nature conservation] resources]
23.98   | [[quality amplifier] circuits]
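For reference, here is a self-contained version of the G-score (Dunning, 1993) over a 2-by-2 contingency table. The formulation is the standard log-likelihood ratio; the only paper-specific element is the decision level, since p < 0.05 corresponds, for one degree of freedom, to a critical value of about 3.84. The count interface is an assumption of this sketch.

```python
from math import log

def g_score(n12, n1, n2, n):
    """Log-likelihood ratio for a noun pair.
    n12: pair frequency, n1/n2: marginal noun frequencies, n: total bigrams."""
    observed = [[n12, n1 - n12],
                [n2 - n12, n - n1 - n2 + n12]]
    rows, cols = [n1, n - n1], [n2, n - n2]
    g = 0.0
    for i in (0, 1):
        for j in (0, 1):
            o = observed[i][j]
            if o > 0:  # each cell contributes o * log(observed / expected)
                g += o * log(o * n / (rows[i] * cols[j]))
    return 2.0 * g

def is_compound(n12, n1, n2, n, critical=3.84):
    """Reject the double object reading when the two nouns are associated."""
    return g_score(n12, n1, n2, n) > critical
```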
2.3.1 Evaluation

The performance of the linguistic heuristics and the compound detection procedure were evaluated by randomly selecting approximately 3,000 corpus tokens which were previously accepted or rejected as instances of the double object frame. Two judges decided whether the tokens were classified correctly. The judges' agreement on the classification task was calculated using the Kappa coefficient (Siegel and Castellan, 1988), which measures inter-rater agreement among a set of coders making category judgments.

The Kappa coefficient of agreement (K) is the ratio of the proportion of times, P(A), that k raters agree to the proportion of times, P(E), that we would expect the raters to agree by chance (cf. (4)). If there is complete agreement among the raters, then K = 1.

K = (P(A) - P(E)) / (1 - P(E)) (4)

Precision figures3 (Prec) and inter-judge agreement (Kappa) are summarized in table 3. In sum, the heuristics achieved a high accuracy in classifying cues for the double object frame. Agreement on the classification was good given that the judges were given minimal instructions and no prior training.

3Throughout the paper the reported percentages are the average of the judges' individual classifications.

Table 3: Precision of heuristics, compound noun detection and lexical association

Method            | Prec  | Kappa
Reject heuristics | 96.9% | K = 0.76, N = 1000
Accept heuristics | 73.6% | K = 0.82, N = 1000
2-word compounds  | 98.9% | K = 0.83, N = 553
3-word compounds  | 99.1% | K = 0.70, N = 447
Verb attach-to    | 74.4% | K = 0.78, N = 494
Noun attach-to    | 80.0% | K = 0.80, N = 500
Verb attach-for   | 73.6% | K = 0.85, N = 630
Noun attach-for   | 36.0% | K = 0.88, N = 500

2.4 Guessing the prepositional frames

In order to consider verbs with prepositional frames as candidates for the dative and benefactive alternations, the following requirements needed to be met:

1. the PP must be attached to the verb;
2. in the case of the 'V NP1 to NP2' structure, the to-PP must be an argument of the verb;
3. in the case of the 'V NP1 for NP2' structure, the for-PP must be benefactive.4

4Syntactically speaking, benefactive for-PPs are not arguments but adjuncts (Jackendoff, 1990) and can appear on any verb with which they are semantically compatible.

In order to meet requirements (1)-(3), we first determined the attachment site (e.g., verb or noun) of the PP and secondly developed a procedure for distinguishing benefactive from non-benefactive PPs.

Several approaches have statistically addressed the problem of prepositional phrase ambiguity, with comparable results (Hindle and Rooth, 1993; Collins and Brooks, 1995; Ratnaparkhi, 1998). Hindle and Rooth (1993) used a partial parser to extract (v, n, p) tuples from a corpus, where p is the preposition whose attachment is ambiguous between the verb v and the noun n. We used a variant of the method described in Hindle and Rooth (1993), the main difference being that we applied their lexical association score (a log-likelihood ratio which compares the probability of noun versus verb attachment) in an unsupervised non-iterative manner. Furthermore, the procedure was applied to the special case of tuples containing the prepositions to and for only.

2.4.1 Evaluation

We evaluated the procedure by randomly selecting 2,124 tokens containing to-PPs and for-PPs for which the procedure guessed verb or noun attachment. The tokens were disambiguated by two judges. Precision figures are reported in table 3.
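The lexical association score compares how strongly the preposition is attracted by the verb and by the noun. Below is a simplified, non-iterative Python sketch in the spirit of Hindle and Rooth (1993); the add-0.5 smoothing and the count interface are assumptions of this illustration, and several refinements of the original method are omitted.

```python
from math import log2

def lexical_association(v, n, p, f_v_p, f_v, f_n_p, f_n):
    """log2 of P(p|v) / P(p|n); positive values favor verb attachment.
    f_v_p(v, p): verb-preposition counts, f_v(v): verb counts,
    with noun analogues f_n_p and f_n."""
    p_given_v = (f_v_p(v, p) + 0.5) / (f_v(v) + 1.0)
    p_given_n = (f_n_p(n, p) + 0.5) / (f_n(n) + 1.0)
    return log2(p_given_v / p_given_n)
```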
The lexical association score was highly accurate on guessing both verb and noun attachment for to-PPs. Further evaluation revealed that for 98.6% (K = 0.9, N = 494, k = 2) of the tokens classified as instances of verb attachment, the to-PP was an argument of the verb, which meant that the log-likelihood ratio satisfied both requirements (1) and (2) for to-PPs. A low precision of 36% was achieved in detecting instances of noun attachment for for-PPs.

One reason for this is the polysemy of the preposition for: for-PPs can be temporal, purposive, benefactive or causal adjuncts and consequently can attach to various sites. Another difficulty is that benefactive for-PPs semantically license both attachment sites.

To further analyze the poor performance of the log-likelihood ratio on this task, 500 tokens containing for-PPs were randomly selected from the parser's output and disambiguated. Of these 73.9% (K = 0.9, N = 500, k = 2) were instances of verb attachment, which indicates that verb attachments outnumber noun attachments for for-PPs, and therefore a higher precision for verb attachment (cf. requirement (1)) can be achieved without applying the log-likelihood ratio, but instead classifying all instances as verb attachment.

2.5 Benefactive PPs

Although surface syntactic cues can be important for determining the attachment site of prepositional phrases, they provide no indication of the semantic role of the preposition in question. This is particularly the case for the preposition for, which can have several roles besides the benefactive.

Two judges discriminated benefactive from non-benefactive PPs for 500 tokens, randomly selected from the parser's output. Only 18.5% (K = 0.73, N = 500, k = 2) of the sample contained benefactive PPs. An analysis of the nouns headed by the preposition for revealed that 59.6% were animate, 17% were collective, 4.9% denoted locations, and the remaining 18.5% denoted events, artifacts, body parts, or actions. Animate, collective and location nouns account for 81.5% of the benefactive data.

We used the WordNet taxonomy (Miller et al., 1990) to recognize benefactive PPs (cf. requirement (3)). Nouns in WordNet are organized into an inheritance system defined by hypernymic relations. Instead of being contained in a single hierarchy, nouns are partitioned into a set of semantic primitives (e.g., act, animal, time) which are treated as the unique beginners of separate hierarchies. We compiled a "concept dictionary" from WordNet (87,642 entries), where each entry consisted of the noun and the semantic primitive distinguishing each noun sense (cf. table 4).

We considered a for-PP to be benefactive if the noun headed by for was listed in the concept dictionary and the semantic primitive of its prime sense (Sense 1) was person, animal, group or location. PPs with head nouns not listed in the dictionary were considered benefactive only if their head nouns were proper names. Tokens containing personal, indefinite and anaphoric pronouns were also considered benefactive (e.g., build a home for him).
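With a current WordNet API, the concept-dictionary test can be approximated by the lexicographer file of a noun's first sense. The NLTK-based sketch below is an illustration and not the resource actually compiled for the experiments; pronoun handling is omitted.

```python
from nltk.corpus import wordnet as wn

BENEFACTIVE_PRIMES = {"noun.person", "noun.animal", "noun.group", "noun.location"}

def benefactive_for_pp(head_noun):
    """Prime sense of the noun headed by 'for' must be
    person, animal, group or location."""
    synsets = wn.synsets(head_noun, pos=wn.NOUN)
    if synsets:                       # noun is listed in the dictionary
        return synsets[0].lexname() in BENEFACTIVE_PRIMES
    return head_noun[:1].isupper()    # unlisted nouns: proper names only
```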
Erroneous frames can be the result of tagging errors, parsing mistakes, or errors introduced by the heuristics and proce- dures we used to guess syntactic structure. We discarded verbs for which we had very little evidence (frame frequency = 1) and applied a rela- tive frequency cutoff: the verb's acquired frame fre- quency was compared against its overall frequency in the BNC. Verbs whose relative frame frequency was lower than an empirically established thresh- old were discarded. The threshold values varied from frame to flame but not from verb to verb and were determined by taking into account for each frame its overall frame frequency which was es- timated from the COMLEX subcategorization dic- tionary (6,000 verbs) (Grishman et al., 1994). This meant that the threshold was higher for less frequent frames (e.g., the double object frame for which only 79 verbs are listed in COMLEX). We also experimented with a method suggested by Brent (1993) which applies the binomial test on frame frequency data. Both methods yielded comparable results. However, the relative frequency threshold worked slightly better and the results re- ported in the following section are based on this method. 4 Results We acquired 162 verbs for the double object frame, 426 verbs for the 'V NP1 to NP2' frame and 962 for the 'V NPl for NP2' frame. Membership in al- ternations was judged as follows: (a) a verb partic- ipates in the dative alternation if it has the double object and 'V NP1 to NP2' frames and (b) a verb Dative Alternation Alternating V NPI NP2 allot, assign, bring, fax, feed, flick, give, grant, guarantee, leave, lend offer, owe, take pass, pay, render, repay, sell, show, teach, tell, throw, toss, write, serve, send, award allocate, bequeath, carry, catapult, cede, concede, drag, drive, extend, ferry, fly, haul, hoist, issue, lease, peddle, pose, preach, push, relay, ship, tug, yield V NPI to NP2 ask, chuck, promise, quote, read, shoot, slip Benefactive Alternation Alternating bake, build, buy, cast, cook, earn, fetch, find, fix, forge, gain, get, keep, knit, leave, make, pour, save procure, secure, set, toss, win, write V NPI NP2 arrange, assemble, carve, choose, compile, design, develop, dig, gather, grind, hire, play, prepare, reserve, run, sew V NP1 for NP2 boil, call, shoot Table 5: Verbs common in corpus and Levin participates in the benefactive alternation if it has the double object and 'V NP1 for NP2' frames. Ta- ble 5 shows a comparison of the verbs found in the corpus against Levin's list of verbs; 5 rows 'V NP1 to NP2' and 'V NP1 for NP2' contain verbs listed as alternating in Levin but for which we acquired only one frame. In Levin 115 verbs license the dative and 103 license the benefactive alternation. Of these we acquired 68 for the dative and 43 for the benefactive alternation (in both cases including verbs for which only one frame was acquired). The dative and benefactive alternations were also acquired for 52 verbs not listed in Levin. Of these, 10 correctly alternate (cause, deliver, hand, refuse, report and set for the dative alternation and cause, spoil, afford and prescribe for the benefactive), and 12 can appear in either frame but do not alter- nate (e.g., appoint, fix, proclaim). For 18 verbs two frames were acquired but only one was correct (e.g., swap and forgive which take only the double object frame), and finally 12 verbs neither alternated nor had the acquired frames. A random sample of the acquired verb frames and their (log-transformed) frequencies is shown in figure 1. 
Table 5 shows a comparison of the verbs found in the corpus against Levin's list of verbs;5 rows 'V NP1 to NP2' and 'V NP1 for NP2' contain verbs listed as alternating in Levin but for which we acquired only one frame.

5The comparisons reported henceforth exclude verbs listed in Levin with overall corpus frequency less than 1 per million.

Table 5: Verbs common in corpus and Levin

Dative Alternation
Alternating: allot, assign, bring, fax, feed, flick, give, grant, guarantee, leave, lend, offer, owe, take, pass, pay, render, repay, sell, show, teach, tell, throw, toss, write, serve, send, award
V NP1 NP2: allocate, bequeath, carry, catapult, cede, concede, drag, drive, extend, ferry, fly, haul, hoist, issue, lease, peddle, pose, preach, push, relay, ship, tug, yield
V NP1 to NP2: ask, chuck, promise, quote, read, shoot, slip

Benefactive Alternation
Alternating: bake, build, buy, cast, cook, earn, fetch, find, fix, forge, gain, get, keep, knit, leave, make, pour, save, procure, secure, set, toss, win, write
V NP1 NP2: arrange, assemble, carve, choose, compile, design, develop, dig, gather, grind, hire, play, prepare, reserve, run, sew
V NP1 for NP2: boil, call, shoot

In Levin 115 verbs license the dative and 103 license the benefactive alternation. Of these we acquired 68 for the dative and 43 for the benefactive alternation (in both cases including verbs for which only one frame was acquired). The dative and benefactive alternations were also acquired for 52 verbs not listed in Levin. Of these, 10 correctly alternate (cause, deliver, hand, refuse, report and set for the dative alternation and cause, spoil, afford and prescribe for the benefactive), and 12 can appear in either frame but do not alternate (e.g., appoint, fix, proclaim). For 18 verbs two frames were acquired but only one was correct (e.g., swap and forgive, which take only the double object frame), and finally 12 verbs neither alternated nor had the acquired frames. A random sample of the acquired verb frames and their (log-transformed) frequencies is shown in figure 1.

[Figure 1: Random sample of acquired frequencies for the dative and benefactive alternations (log frequencies of the NP-PP_to, NP-PP_for and NP-NP frames per verb).]

Levin defines 10 semantic classes of verbs for which the dative alternation applies (e.g., GIVE verbs, verbs of FUTURE HAVING, SEND verbs), and 5 classes for which the benefactive alternation applies (e.g., BUILD, CREATE, PREPARE verbs), assuming that verbs participating in the same class share certain meaning components.

We partitioned our data according to Levin's predefined classes. Figure 2 shows for each semantic class the number of verbs acquired from the corpus against the number of verbs listed in Levin.

[Figure 2: Semantic classes for the dative and benefactive alternations (for each class, the number of verbs listed in Levin versus the number of alternating verbs acquired from the corpus).]

As can be seen in figure 2, Levin and the corpus approximate each other for verbs of FUTURE HAVING (e.g., guarantee), verbs of MESSAGE TRANSFER (e.g., tell) and BRING-TAKE verbs (e.g., bring). The semantic classes of GIVE (e.g., sell), CARRY (e.g., drag), SEND (e.g., ship), GET (e.g., buy) and PREPARE (e.g., bake) verbs are also fairly well represented in the corpus, in contrast to SLIDE verbs (e.g., bounce), for which no instances were found. Note that the corpus and Levin did not agree with respect to the most popular classes licensing the dative and benefactive alternations: THROWING (e.g., toss) and BUILD verbs (e.g., carve) are the biggest classes in Levin allowing the dative and benefactive alternations respectively, in contrast to FUTURE HAVING and GET verbs in the corpus. This can be explained by looking at the average corpus frequency of the verbs belonging to the semantic classes in question: FUTURE HAVING and GET verbs outnumber THROWING and BUILD verbs by a factor of two to one.

5 Productivity

The relative productivity of an alternation for a semantic class can be estimated by calculating the ratio of acquired to possible verbs undergoing the alternation (Aronoff, 1976; Briscoe and Copestake, 1996):

P(acquired|class) = f(acquired, class) / f(class) (5)

We express the productivity of an alternation for a given class as f(acquired, class), the number of verbs which were found in the corpus and are members of the class, over f(class), the total number of verbs which are listed in Levin as members of the class (Total). The productivity values (Prod) for both the dative and the benefactive alternation (Alt) are summarized in table 6.

Note that productivity is sensitive to class size. The productivity of BRING-TAKE verbs is estimated to be 1 since it contains only 2 members which were also found in the corpus. This is intuitively correct, as we would expect the alternation to be more productive for specialized classes.

The productivity estimates discussed here can be potentially useful for treating lexical rules probabilistically, and for quantifying the degree to which language users are willing to apply a rule in order to produce a novel form (Briscoe and Copestake, 1996).
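Equation (5) is a plain ratio over the class counts of Table 6; a two-line sketch reproducing two cases from the table:

```python
def productivity(f_acquired_class, f_class):
    """Equation (5): P(acquired | class)."""
    return f_acquired_class / f_class

print(productivity(2, 2))    # BRING-TAKE: 2 of 2 members acquired -> 1.0
print(productivity(7, 30))   # THROWING: 7 of 30 members -> 0.23...
```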
Table 6: Productivity estimates and typicality values for the dative and benefactive alternation

Dative alternation
Class         | Total | Alt | Prod | Typ
BRING-TAKE    | 2     | 2   | 1    | 0.327
FUTURE HAVING | 19    | 17  | 0.89 | 0.313
GIVE          | 15    | 9   | 0.6  | 0.55
M. TRANSFER   | 17    | 10  | 0.58 | 0.66
CARRY         | 15    | 6   | 0.4  | 0.056
DRIVE         | 11    | 3   | 0.27 | 0.03
THROWING      | 30    | 7   | 0.23 | 0.658
SEND          | 23    | 3   | 0.13 | 0.181
INSTR. COM.   | 18    | 1   | 0.05 | 0.648
SLIDE         | 5     | 0   | 0    | 0

Benefactive alternation
Class       | Total | Alt | Prod  | Typ
GET         | 33    | 17  | 0.51  | 0.54
PREPARE     | 26    | 9   | 0.346 | 0.55
BUILD       | 35    | 12  | 0.342 | 0.34
PERFORMANCE | 19    | 1   | 0.05  | 0.56
CREATE      | 20    | 2   | 0.1   | 0.05

6 Typicality

Estimating the productivity of an alternation for a given class does not incorporate information about the frequency of the verbs undergoing the alternation. We propose to use frequency data to quantify the typicality of a verb or verb class for a given alternation. The underlying assumption is that a verb is typical for an alternation if it is equally frequent for both frames which are characteristic for the alternation. Thus the typicality of a verb can be defined as the conditional probability of the frame given the verb:

P(frame_i|verb) = f(frame_i, verb) / Σ_n f(frame_n, verb) (6)

We calculate P(frame_i|verb) by dividing f(frame_i, verb), the number of times the verb was attested in the corpus with frame_i, by Σ_n f(frame_n, verb), the overall number of times the verb was attested. In our case a verb has two frames, hence P(frame_i|verb) is close to 0.5 for typical verbs (i.e., verbs with balanced frequencies) and close to either 0 or 1 for peripheral verbs, depending on their preferred frame.

Consider the verb owe as an example (cf. figure 1). 648 instances of owe were found, of which 309 were instances of the double object frame. By dividing the latter by the former we can see that owe is highly typical of the dative alternation: its typicality score for the double object frame is 0.48.

By taking the average of P(frame_i|verb) for all verbs which undergo the alternation and belong to the same semantic class, we can estimate how typical this class is for the alternation. Table 6 illustrates the typicality (Typ) of the semantic classes for the two alternations. (The typicality values were computed for the double object frame.) For the dative alternation, the most typical class is GIVE, and the most peripheral is DRIVE (e.g., ferry). For the benefactive alternation, PERFORMANCE (e.g., sing), PREPARE (e.g., bake) and GET (e.g., buy) verbs are the most typical, whereas CREATE verbs (e.g., compose) are peripheral, which seems intuitively correct.

7 Future Work

The work reported in this paper relies on frame frequencies acquired from corpora using partial-parsing methods. For instance, frame frequency data was used to estimate whether alternating verbs exhibit different preferences for a given frame (typicality). However, it has been shown that corpus idiosyncrasies can affect subcategorization frequencies (cf. Roland and Jurafsky (1998) for an extensive discussion). This suggests that different corpora may give different results with respect to verb alternations. For instance, the to-PP frame is poorly represented in the syntactically annotated version of the Penn Treebank (Marcus et al., 1993). There are only 26 verbs taking the to-PP frame, of which 20 have frame frequency of 1. This indicates that a very small number of verbs undergoing the dative alternation can be potentially acquired from this corpus. In future work we plan to investigate the degree to which corpus differences affect the productivity and typicality estimates for verb alternations.

8 Conclusions

This paper explored the degree to which diathesis alternations can be identified in corpus data via shallow syntactic processing.
Alternating verbs were acquired from the BNC by using Gsearch as a chunk parser. Erroneous frames were discarded by applying linguistic heuristics, statistical scores (the log-likelihood ratio) and large-scale lexical resources (e.g., WordNet).

We have shown that corpus frequencies can be used to quantify linguistic intuitions and lexical generalizations such as Levin's (1993) semantic classification. Furthermore, corpus frequencies can make explicit predictions about word use. This was demonstrated by using the frequencies to estimate the productivity of an alternation for a given semantic class and the typicality of its members.

Acknowledgments

The author was supported by the Alexander S. Onassis Foundation and the UK Economic and Social Research Council. Thanks to Chris Brew, Frank Keller, Alex Lascarides and Scott McDonald for valuable comments.

References

Mark Aronoff. 1976. Word Formation in Generative Grammar. Linguistic Inquiry Monograph 1. MIT Press, Cambridge, MA.
Michael Brent. 1993. From grammar to lexicon: Unsupervised learning of lexical syntax. Computational Linguistics, 19(3):243-262.
Ted Briscoe and Ann Copestake. 1996. Controlling the application of lexical rules. In Proceedings of the ACL SIGLEX Workshop on Breadth and Depth of Semantic Lexicons, pages 7-19, Santa Cruz, CA.
Lou Burnard, 1995. Users Guide for the British National Corpus. British National Corpus Consortium, Oxford University Computing Service.
Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.
COLING/ACL 1998. Proceedings of the 17th International Conference on Computational Linguistics and 36th Annual Meeting of the Association for Computational Linguistics, Montréal.
Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proceedings of the 3rd Workshop on Very Large Corpora, pages 27-38.
Béatrice Daille. 1996. Study and implementation of combined techniques for automatic extraction of terminology. In Judith Klavans and Philip Resnik, editors, The Balancing Act: Combining Symbolic and Statistical Approaches to Language, pages 49-66. MIT Press, Cambridge, MA.
Hoa Trang Dang, Karin Kipper, Martha Palmer, and Joseph Rosenzweig. 1998. Investigating regular sense extensions based on intersective Levin classes. In COLING/ACL 1998, pages 293-299.
Bonnie J. Dorr and Doug Jones. 1996. Role of word sense disambiguation in lexical acquisition: Predicting semantics from syntactic cues. In Proceedings of the 16th International Conference on Computational Linguistics, pages 322-327, Copenhagen.
Ted Dunning. 1993. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.
Ralph Grishman, Catherine Macleod, and Adam Meyers. 1994. Comlex syntax: Building a computational lexicon. In Proceedings of the 15th International Conference on Computational Linguistics, pages 268-272, Kyoto.
Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.
Ray Jackendoff. 1990. Semantic Structures. MIT Press, Cambridge, MA.
Frank Keller, Martin Corley, Steffan Corley, Matthew W. Crocker, and Shari Trewin. 1999. Gsearch: A tool for syntactic investigation of unparsed corpora. In Proceedings of the EACL Workshop on Linguistically Interpreted Corpora, Bergen.
Beth Levin. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
Diana McCarthy and Anna Korhonen. 1998. Detecting verbal participation in diathesis alternations. In COLING/ACL 1998, pages 1493-1495. Student Session.
George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235-244.
Steven Pinker. 1989. Learnability and Cognition: The Acquisition of Argument Structure. MIT Press, Cambridge, MA.
James Pustejovsky, Sabine Bergler, and Peter Anick. 1993. Lexical semantic techniques for corpus analysis. Computational Linguistics, 19(3):331-358.
Adwait Ratnaparkhi. 1998. Unsupervised statistical models for prepositional phrase attachment. In Proceedings of the 17th International Conference on Computational Linguistics, pages 1079-1085.
Douglas Roland and Daniel Jurafsky. 1998. How verb subcategorization frequencies are affected by corpus choice. In COLING/ACL 1998, pages 1122-1128.
Sabine Schulte im Walde. 1998. Automatic semantic classification of verbs according to their alternation behaviour. Master's thesis, Institut für Maschinelle Sprachverarbeitung, University of Stuttgart.
Sidney Siegel and N. J. Castellan. 1988. Non Parametric Statistics for the Behavioral Sciences. McGraw-Hill, New York.
1999
51
Charting the Depths of Robust Speech Parsing W. Kasper t, B. Kiefer t, H.-U. Kriegert, C. J. Rupp$, and K. L. Worm $ tGerman Research Center for Artificial Intelligence (DFKI) $Computational Linguistics Department, Universit~t des Saarlandes {kasper, kiefer, krieger}@dfki, de and {c j, worm}@coli, uni-sb, de Abstract We describe a novel method for coping with un- grammatical input based on the use of chart-like data structures, which permit anytime process- ing. Priority is given to deep syntactic anal- ysis. Should this fail, the best partial analy- ses are selected, according to a shortest-paths algorithm, and assembled in a robust process- ing phase. The method has been applied in a speech translation project with large HPSG grammars. 1 Introduction This paper describes a new method of deal- ing robustly with deficient speech or text in- put, which may be due to recognition errors, spontaneous speech phenomena, or ungrammat- ical constructions. Two key features of this ap- proach are: • the priority given to a deep and restrictive grammatical analysis, • and the use of chart-like data structures at each level of processing. The initial input is taken from a Word Hy- pothesis Graph, or WHG, (Oerder and Ney, 1993) from which the best ranked paths are successively selected until a result is found or a time limit proportional to the length of the utterance 1 is reached. Each path is parsed with an incremental chart parser that uses a Head- Driven Phrase Structure grammar (HPSG). The parser is adapted to input from WHGs and optimized to meet the needs of real-time speech processing. Since the goal of the parsing compo- nent is to process as many WHG paths as pos- sible, in order to find a grammatical utterance 1This is currently up to four times real time. and analyze it with highest accuracy, neither relaxation of the constraints imposed by the grammar nor repair rules are used at this stage. If the analysis of the current path is successful, the parsing process is complete. However, in most cases there is no spanning and syntacti- cally correct analysis. So a sequence of partial analyses is determined by incrementally eval- uating the passive edges in the parser's chart. These analyzed fragments are passed on to a robust semantic processing component for fur- ther treatment, while the next best WHG path is analyzed by the parser 2. Robust semantic processing similarly builds up a chart-like data structure including analyzed fragments and the results of applying robustness rules at the se- mantic level. After the first path of the WHG has been (unsuccessfully) analyzed, processing in both the restrictive parser and the robust- ness component proceeds in parallel, with the aid of a parallel virtual machine, until one of the following conditions is fulfilled: 1. a spanning grammatical analysis is found, 2. all the WHG paths have been explored, or 3. the time limit is reached. In the case of either of the latter two condi- tions, robust semantic processing is allowed a limited time to complete processing and then the best result or sequence of results is selected from its chart. Our approach has been implemented in VERBMOBIL (Wahlster, 1993), a large scale re- search project in the area of spoken language 2This means that the maximal sequential delay be- tween parsing and robust semantics processing is the parse time for one path. Similarly, the limit on pars- ing time, essentially, applies to both components 405 translation. 
Its goal is to develop a system that translates negotiation dialogues between Ger- man, English, and Japanese speakers in face- to-face or video conferencing situations. This application highlights the basic problem asso- ciated with machine processing of spontaneous speech, namely that the input to the natural language processing component is perturbed by two influences: 1. Speakers make mistakes, correct them- selves during speaking, produce false starts and use ungrammatical constructions. 2. The acoustic signal produced by a human speaker is mapped by a speech recognizer onto a written form; this mapping is rarely completely correct. This introduces two levels of uncertainty into the processing of speech, which make the task of linguistically analyzing a spoken utterance in a speech processing system doubly hard. In ad- dition, the dialogue context imposes strict time constraints, as the overall system must attempt to emulate real time performance. The strategy we adopt responds to time con- straints by universally incorporating an anytime property (Dean and Boddy, 1988) into the se- lection procedures. As will be seen, this prop- erty derives from the way in which intermedi- ate results are stored and the selections which can be made from among these. However, the overriding priority of this same strategy is to maximize the chance that a truly grammatical path will be found and analyzed, if one exists in the WHG. This means that while we have implemented extensive mechanisms to achieve robustness, their design, and in particular the separation of processing into a restrictive parser and a robust postprocessor, are subservient to the cases where a fully grammatical analysis is possible, since these results are in any case bet- ter. These decisions may be in conflict with much of the literature on robust parsing (e.g., (Hindle, 1983; Hipp, 1993; Heinecke et al., 1998)), but the alternative of relaxing the pars- ing constraints would appear to be a dead end in the context of the VERBMOBIL architecture. In the first place, the chances of locating the best grammatical path in the lattice would be reduced, e.g., by the acceptance of a preceding ungrammatical one. Secondly, a more liberal parser would raise the spectre of an explosion of edges in the parser's chart, so that in fact less paths could be processed overall, regardless of their quality. Either of these conditions could prove fatal. This paper focuses on the aspects of the VERBMOBIL analysis component which ensure that the most accurate results available are pro- vided to the system as a whole. We first de- scribe the basic inventory we need to explain our approach: the unification-based bottom-up chart parser, the HPSG grammar, and the in- terface terms which are exchanged between the parser and the robust semantic processing. Af- ter that, we come to the basic algorithm which determines best partial analyses. We also give an example of how the evaluation function on edges might look. In section 4, we focus on the robust semantic processing whose task is to store and combine the partial results, before choosing a final result out of a set of possible candidates. We end this paper by presenting empirical results on the usefulness of our ap- proach. 2 Preliminaries 2.1 The Chart Parser The parser used in the system is a bottom- up chart parser. Since the grammar is a pure unification-based grammar, there is no context- free backbone and the chart edges are labelled with typed feature structures. 
At the moment, there is no local ambiguity packing of chart edges. Therefore, the worst case complexity of parsing is potentially exponential, but since the parser employs a best-first strategy, exponential behavior is rarely found in practice. The parser provides a flexible priority system for guiding the parsing process, using parsing tasks on an agenda. A parsing task represents the combination of a passive chart edge and an active chart edge or a rule. When such a com- bination succeeds, new tasks are generated and for each new task, a priority is assigned. This priority system helps to obtain good par- tial results, even in cases where the search space cannot be fully explored due to parsing time re- strictions. A higher time bound would allow either the processing of more WHG paths or a more elaborate analysis of the given input, both 406 of which may lead to better results. The deci- sion when to switch to the next best path of a given WHG depends on the length of the input and on the time already used. After the pars- ing of one path is finished, the passive edges of the chart form a directed acyclic graph which is directly used as input to compute best partial analyses. We note here that the parser processes the n- best paths of a WHG fully incrementally. I.e., when the analysis of a new input path begins, only those input items are added to the chart that have not been part of a previously treated path. Everything else that has been computed up to that point remains in the chart and can be used to process the new input without being recomputed. 2.2 The HPSG Grammars The grammars for English, German, and Japanese follow the paradigm of HPSG (Pol- lard and Sag, 1994) which is the most advanced unification-based grammatical theory based on typed feature structures. The fundamental con- cept is that of a sign, a structure incorporating information from all levels of linguistic analysis, such as phonology, morphology, syntax, and se- mantics. This structure makes all information simultaneously available and provides declara- tive interfaces between these levels. The gram- mars use Minimal Recursion Semantics (Copes- take et al., 1996) as the semantic representation formalism, allowing us to deal with ambiguity by underspecification. To give an impression of the size of gram- mars, we present the numbers for the German grammar. It consists of 2,389 types, 76 rule schemata, 4,284 stems and an average of six entries per stem. Morphological information is computed online which further increases the lex- ical ambiguity. 2.3 Partial Analyses and the Syntax-Semantics Interface Our architecture requires that the linguistic analysis module is capable of delivering not just analyses of complete utterances, but also of phrases and even of lexical items in the special interface format of VITs (VERBMOBIL Interface Terms) (Bos et al., 1998). There are three con- siderations which the interface has to take into account: 1. Only maximal projections, i.e., complete phrases, are candidates for robust process- ing. This qualifies, e.g., prepositional and noun phrases. On the other hand, this approach leaves gaps in the coverage of the input string as not every word needs to be dominated by a maximal projec- tion. In particular, verbal projections be- low the sentential level usually are incom- plete phrases. The use of intermediate, in- complete projections is avoided for several reasons: • intermediate projections are highly grammar and language specific and • there are too many of them. 2. 
Phrases must be distinguished from elliptical utterances. A major difference is that elliptical utterances express a speech act. E.g., a prepositional phrase can be a complete utterance expressing an answer to a question (On Monday.) or a question itself (On Monday?). If the phrase occurs in a sentence, it is not associated with a speech act of its own. This distinction is dealt with in the grammars by specifying special types for these complete utterances, phrases, and lexical items.

3. For robust processing, the interface must export a certain amount of information from syntax and morphology together with the semantics of the phrase. In addition, it is necessary to represent semantically empty parts of speech, e.g., separable verb prefixes in German.

3 Computing Best Partial Analyses

In contrast to a traditional parser which never comes up with an analysis for input not covered by the grammar, our approach focuses on partial analyses without giving up the correctness of the overall deep grammar. These partial analyses are combined in a later stage (see Section 4) to form total analyses. But what is a partial analysis? Obviously a derivation (sub)tree licensed by the grammar which covers a continuous part of the input (i.e., a passive parser edge). But not every passive edge is a good candidate, since otherwise we would end up with perhaps thousands of them. Our approach lies in between these two extremes: computing a connected sequence of best partial analyses which cover the whole input. The idea here is to view the set of passive edges of a parser as a directed graph which needs to be evaluated according to a user-defined (and therefore grammar and language specific) metric. Using this graph, we then compute the shortest paths w.r.t. the evaluation function, i.e., paths through this graph with minimum cost.

Since this graph is acyclic and topologically sorted (vertices are integers and edges always connect a vertex to a larger vertex), we have chosen the DAG-shortest-path algorithm (Cormen et al., 1990) which runs in O(V + E). This fast algorithm is a solution to the single-source shortest-paths problem. We modified and extended this algorithm to cope with the needs we encountered in speech parsing: (i) one can use several start and end vertices (e.g., in the case of n-best chains or WHGs); (ii) all best shortest paths are returned (i.e., we obtain a shortest-path subgraph); and (iii) evaluation and selection of the best edges is done incrementally, as is the case for parsing the n-best chains (i.e., only new passive edges entered into the chart are evaluated and may be selected by our shortest-path algorithm).

We now sketch the basic algorithm. Let G = (V, E) denote the graph of passive edges, S the set of start vertices, F the set of end vertices, and let n be the vertex with the highest number (remember, vertices are integers): n = max(V). In the algorithm, we make use of two global vectors of length n which store information associated with each vertex: dist keeps track of the distance of a vertex to one of the start vertices (the so-called shortest-path estimate), whereas pred records the predecessors of a given vertex. weight defines the cost of an edge and is assigned its value during the evaluation stage of our algorithm, according to the user-defined function Estimate. Finally, Adj consists of all vertices adjacent to a given vertex (we use an adjacency-list representation).
Clearly, before computing the shortest path, the distance of a vertex to one of the start vertices is infinity, except for the start vertices themselves, and there is of course no shortest-path subgraph yet (pred(v) ← ∅).

Initialise-Single-Source(G, 𝒮) :⟺
  global dist, pred;
  for each v ∈ V(G) do
    dist(v) ← ∞; pred(v) ← ∅
  od;
  for each s ∈ 𝒮 do dist(s) ← 0 od.

After initialization, we perform evaluation and relaxation on every passive edge, taken in topologically sorted order. Relaxing an edge (u, v) means checking whether we can improve the shortest path(s) to v via u. There are two cases to consider: either we overwrite the shortest-path estimate for v since the new one is better (and so have a new predecessor for v, viz., u), or the shortest-path estimate is as good as the old one, hence we have to add u to the predecessors of v. In case the shortest-path estimate is worse, there is clearly nothing to do.

Relax(u, v) :⟺
  global dist, pred;
  if dist(v) > dist(u) + weight(u, v)
  then do
    dist(v) ← dist(u) + weight(u, v);
    pred(v) ← {u}
  od
  else do
    when dist(v) = dist(u) + weight(u, v) do
      pred(v) ← pred(v) ∪ {u}
    od
  od fi.

The shortest paths are then determined by estimating and relaxing edges, beginning with the start vertices 𝒮. The shortest-path subgraph is stored in pred and can be extracted by walking from the end vertices ℰ 'back' to the start vertices.

DAG-Shortest-Paths(G, 𝒮, ℰ) :⟺
  global pred;
  Initialise-Single-Source(G, 𝒮);
  for each u ∈ V(G) \ ℰ taken in topologically sorted order do
    for each v ∈ Adj(u) do
      weight(u, v) ← Estimate(u, v);
      Relax(u, v)
    od
  od;
  return pred.

After we have determined the shortest-path subgraph, the feature structures associated with these edges are selected and transformed to the corresponding VITs, which are then sent to the robust semantic processing component.

This approach has an important property: even if certain parts of the input have not undergone at least one rule application, there are still lexical edges which help to form a best path through the passive edges. Hence, this approach shows anytime behavior, which is a necessary requirement in time-critical (speech) applications: even if the parser is interrupted at a certain point, we can always return a shortest path up to that moment through our chart.

Let us now give an example of what the evaluation function on edges (i.e., derivation trees) might look like[3]:
• n-ary trees (n > 1) with utterance status (e.g., NPs, PPs): value 1
• lexical items: value 2
• otherwise: value ∞

[3] This is a slightly simplified form of the evaluation that is actually used for the German grammar.

If available, other properties, such as prosodic information or probabilistic scores, can also be utilized in the evaluation function to determine the best edges.

[Figure 1: Computing best partial analyses. Note that the paths PR and QR are chosen, but not ST, although S is the longest edge. By using uniform costs, all three paths would be selected.]

Depending on the evaluation, our method does not necessarily favor paths with longest edges, as the example in Figure 1 shows -- the above strategy instead prefers paths containing no lexical edges (where this is possible), and there might be several such paths having the same cost. Longest (sub)paths, however, can be obtained by employing an exponential function during the evaluation of an edge e ∈ E: Estimate(e) = −(max(V) − min(V))^length(e).
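A minimal Python sketch of the modified algorithm above (our own rendering; the edge encoding and the toy estimate function are assumptions for illustration, not the actual VERBMOBIL implementation):

    from collections import defaultdict

    INF = float("inf")

    def dag_shortest_paths(n, adj, starts, ends, estimate):
        # adj maps a vertex u to a list of (v, edge) pairs for the passive
        # edges leaving u; vertices are integers 0..n in topological order.
        dist = defaultdict(lambda: INF)
        pred = defaultdict(set)            # the shortest-path subgraph
        for s in starts:
            dist[s] = 0.0
        for u in range(n + 1):             # topologically sorted traversal
            if u in ends or dist[u] == INF:
                continue                   # skip end vertices, unreached ones
            for v, edge in adj.get(u, ()):
                w = estimate(edge)         # evaluation stage
                if w == INF:
                    continue
                if dist[u] + w < dist[v]:       # strictly better: overwrite
                    dist[v], pred[v] = dist[u] + w, {u}
                elif dist[u] + w == dist[v]:    # equally good: keep both
                    pred[v].add(u)
        return pred

    def toy_estimate(edge):
        # the simplified metric above: phrasal edges 1, lexical edges 2
        return 1.0 if edge == "phrase" else 2.0 if edge == "lex" else INF

Extracting the actual best partial analyses then amounts to following pred from each reachable end vertex back to a start vertex.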
4 Robust Semantic Processing

The second phase of processing, after producing a set of partial analyses, consists of assembling and combining the fragments, where possible. We call this robust semantic processing (Worm and Rupp, 1998), since the structures being dealt with are semantic representations (VITs) and the rules applied refer primarily to the semantic content of fragments, though they also consider syntactic and prosodic information, e.g., about irregular boundaries.

This phase falls into three tasks:
1. storing the partial analyses from the parser,
2. combining them on the basis of a set of rules, and
3. selecting a result.

For storing partial results, both delivered from the parser or constructed later, we make use of a chart-like data structure we call VIT hypothesis graph (VHG), since it bears a resemblance to the WHG which is input to the parser. It is organized according to WHG vertices. We give an example in Figure 2, which will be explained in 4.1.

Combination of partial results takes place using a set of rules which describe how fragmentary analyses can be combined. There are language-independent rules, e.g., describing the combination of a semantic functor with a possible argument, and language-specific ones, such as those for dealing with self-corrections in German. Each operation carried out delivers a confidence value which influences the score assigned to an edge.

The overall mechanism behind the robust semantic processing resembles that of a chart parser. It runs in parallel with the HPSG parser; each time the parser delivers partial results, they are handed over and processed, while the parser may continue to look for a better path in the WHG. The processing strategy is agenda-based, giving priority to new parser results.

[Figure 2: The VHG for the first example. Only three VITs are delivered by the parser (the shortest path), although the number of passive edges is 217.]

Selection of a result means that the best edge covering the whole input, or if that has not been achieved, an optimal sequence of edges has to be selected. We use a simple graph search algorithm which finds the path with the highest sum of individual scores. Note that the robust semantic processing has the anytime property as well: as soon as the first partial result has been entered into the chart, a result can be delivered on demand.

4.1 An Example

Consider the utterance (1), where the case of the NP den halben Tag ('half the day') is accusative and thus does not match the subcategorization requirements of the verb passen ('suit'), which would require nominative.

(1) Paßt Ihnen den halben Tag?
    'Does half the day suit you?'

According to the grammar, this string is ill-formed, thus no complete analysis can be achieved. However, the parser delivers fragments for paßt, Ihnen, and den halben Tag.

(2) verb_arg_r ::
      [[type(V1, verbal), missing_arg(V1)],
       [type(V2, term), possible_arg(V2, V1)]]
      --->
      [apply_fun(V1, V2, V3), assign_mood(V3, V4)] & V4.

When these results are stored, the rule in (2) will combine the verb with its first argument, Ihnen.
Each rule consists of three parts: a mnemonic rule name, tests on a sequence of input VITs, and the operations performed to construct the output VIT. The first separator is ::, the second --->. A further application of the same rule accounts for the second argument, den halben Tag. However, the confidence value for the second combination will reflect the violation of the case requirement. The resulting edge spans the whole input and is selected as output. The corresponding VHG is shown in Figure 2.

4.2 Bridging

Not all cases can be handled as simply. Often, there are partial results in the input which cannot be integrated into a spanning result. In these cases, a mechanism called bridging is applied. Consider the self-correction in (3).

(3) Ich treffe ... habe einen Termin am Montag.
    'I (will) meet ... have an appointment on Monday.'

Again, the parser will only find partial results. Combinations of ich with treffe lead nowhere; the combination of the second verb with the NP does not lead to a complete analysis either (cf. Figure 3). Note that if a nominal argument can bind several argument roles, for each such reading there is a passive edge in the VHG. Its score reflects to what degree the selectional requirements of the verb, in terms of the required case and sortal restrictions, have been met.

If no spanning result exists when all rules have been applied, the bridging mechanism produces new active edges which extend edges already present. Here, it extends the active edge aiming to combine ich with a verbal functor to end after treffe, thus allowing for a combination with the VP already built, habe einen Termin am Montag.

[Figure 3: The VHG for the second example.]

Extending the active edges from left to right corresponds to the linear nature of self-corrections, in which material to the right replaces some to the left.

4.3 Scoring and Result Selection

The scoring function for edges takes into account their length, the coverage of the edge, the number of component edges it consists of, and the confidence value for the operation which created it. It has to satisfy the following property, which is illustrated in Figure 4: if there are two edges which together span an interval (edges a and b) and another edge which has been built from them (edge c), the latter should get a better score than the sequence of the original two edges. If there is another edge from the parser which again spans the complete interval (edge d), it should get a better score than the edge built from the two components.

[Figure 4: Requirements for the scoring function.]
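The paper does not spell out the scoring formula, so the following Python sketch is only one scheme (the constants and their combination are our assumptions) that yields the ordering Figure 4 demands for such binary combinations:

    BASE, EDGE_COST, PART_COST, DOUBT_COST = 10000.0, 100.0, 10.0, 50.0

    def edge_score(n_words, n_components, confidence):
        # A path's score is the sum of its edges' scores, so a fixed
        # per-edge cost (EDGE_COST) makes fewer edges preferable.  Because
        # EDGE_COST > PART_COST + DOUBT_COST, an edge c built from two
        # parts (n_components == 2, confidence <= 1) outscores the two-edge
        # sequence [a, b] it was built from, while a pure parser edge d
        # over the same words (n_components == 1, confidence == 1)
        # outscores c in turn.
        assert 0.0 < confidence <= 1.0
        return (n_words * BASE
                - EDGE_COST
                - PART_COST * (n_components - 1)
                - DOUBT_COST * (1.0 - confidence))

For example, with a = edge_score(2, 1, 1.0) and b = edge_score(3, 1, 1.0), the combined edge_score(5, 2, 0.9) exceeds a + b, and the spanning parser edge edge_score(5, 1, 1.0) exceeds it again.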
The selection is done in two different ways. If there is more than one spanning result, the scores of the spanning results are weighted according to a statistical model describing sequence probabilities based on semantic predicates (Ruland et al., 1998) and the best is selected. Otherwise, the best sequence, i.e., the one with the highest score, is chosen in quadratic time, using a standard graph search algorithm.

5 Empirical Results

For an intermediate evaluation of the robust semantic processing phase, we ran our system consisting of HPSG parser and robust semantic processing on a dialogue from the VERBMOBIL corpus of spontaneous appointment negotiation dialogues, producing WHGs from the original recorded audio data. The dialogue consists of 90 turns. These 90 turns were split into 130 segments according to pauses by the speech recognizer. The segments received 213 segment analyses, i.e., there are 1.6 analyses per segment on average. 172 (80.8%) of these were generated by the parser and 41 (19.2%) were assembled from parser results by robust semantic processing. Of these 41 results, 34 (83%) were sensibly improved, while 7 (17%) did not represent a real improvement.

This evaluation is local in the sense that we only consider the input-output behaviour of robust semantic processing. We do this in order to exclude the effects of insufficiencies introduced by other modules in the system, since they would distort the picture. For this same reason, the criterion we apply is whether the result delivered is a sensible combination of the fragments received, without reference to the original utterance or the translation produced. However, in the long run we plan to compare the complete system's behaviour with and without the robust processing strategy.

6 Conclusion

The approach to the robust analysis of spoken language input that we have described above exhibits three crucial properties.
1. The restrictive parser is given the maximum opportunity of finding a correct analysis for a grammatical sequence of word hypotheses, where this exists.
2. The robustness component assembles partial analyses as a fallback, if no grammatical sequence can be found.
3. Almost arbitrary time constraints can be supported, though, obviously, more processing time would usually improve the results.

The latter property depends directly on the chart-like data structures used at each level of processing. Whether it be the input WHG, the VHG for robust processing or, most significantly, the parser's chart, each is formally a directed acyclic graph, and each permits a selection of the best intermediate result at virtually any stage in processing, for a given evaluation function.

The relatively efficient processing of WHG input achieved by parsing and robustness components working in parallel depends quite heavily on the successive processing of ranked WHG paths, effectively as alternative input strings.

Acknowledgments

We would like to thank the anonymous ACL reviewers for their detailed comments. This research was supported by the German Federal Ministry for Education and Research under grants nos. 01 IV 701 R4 and 01 IV 701 V0.

References

Johan Bos, C. J. Rupp, Bianka Buschbeck-Wolf, and Michael Dorna. 1998. Managing information at linguistic interfaces. In Proc. of the 17th COLING/36th ACL, pages 160-166, Montréal, Canada.

Ann Copestake, Dan Flickinger, and Ivan A. Sag. 1996. Minimal recursion semantics, an introduction. Ms, Stanford.

Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. MIT Press, Cambridge, MA.

Thomas Dean and Mark Boddy. 1988. An analysis of time-dependent planning. In Proceedings of the 7th National Conference on Artificial Intelligence, AAAI-88, pages 49-54.
Johannes Heinecke, Jürgen Kunze, Wolfgang Menzel, and Ingo Schröder. 1998. Eliminative parsing with graded constraints. In Proc. of the 17th COLING/36th ACL, pages 526-530, Montréal, Canada.

Donald Hindle. 1983. Deterministic parsing of syntactic non-fluencies. In Proc. of the 21st ACL, pages 123-128, Cambridge, MA.

Dwayne Richard Hipp. 1993. Design and Development of Spoken Natural-Language Dialog Parsing Systems. Ph.D. thesis, Department of Computer Science, Duke University, Durham, NC.

Martin Oerder and Hermann Ney. 1993. Word graphs: An efficient interface between continuous-speech recognition and language understanding. In Proc. Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP), Minneapolis, MN. IEEE Signal Processing Society.

Carl Pollard and Ivan A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago.

Tobias Ruland, C. J. Rupp, Jörg Spilker, Hans Weber, and Karsten L. Worm. 1998. Making the most of multiplicity: A multi-parser multi-strategy architecture for the robust processing of spoken language. In Proc. of the 1998 International Conference on Spoken Language Processing (ICSLP 98), Sydney, Australia.

Wolfgang Wahlster. 1993. VERBMOBIL: translation of face-to-face dialogs. In Proc. MT Summit IV, pages 127-135, Kobe, Japan, July.

Karsten L. Worm and C. J. Rupp. 1998. Towards robust understanding of speech by combination of partial analyses. In Proc. of the 13th ECAI, pages 190-194, Brighton, UK.
A Syntactic Framework for Speech Repairs and Other Disruptions

Mark G. Core and Lenhart K. Schubert
Department of Computer Science, University of Rochester, Rochester, NY 14627
mcore, schubert@cs.rochester.edu

Abstract

This paper presents a grammatical and processing framework for handling the repairs, hesitations, and other interruptions in natural human dialog. The proposed framework has proved adequate for a collection of human-human task-oriented dialogs, both in a full manual examination of the corpus, and in tests with a parser capable of parsing some of that corpus. This parser can also correct a pre-parser speech repair identifier, resulting in a 4.8% increase in recall.

1 Motivation

The parsers used in most dialog systems have not evolved much past their origins in handling written text, even though they may have to deal with speech repairs, speakers collaborating to form utterances, and speakers interrupting each other. This is especially true of machine translators and meeting analysis programs that deal with human-human dialog. Speech recognizers have started to adapt to spoken dialog (versus read speech). Recent language models (Heeman and Allen, 1997), (Stolcke and Shriberg, 1996), (Siu and Ostendorf, 1996) take into account the fact that word co-occurrences may be disrupted by editing terms[1] and speech repairs (take the tanker I mean the boxcar).

[1] Here, we define editing terms as a set of 30-40 words that signal hesitations (um) and speech repairs (I mean) and give meta-comments on the utterance (right).

These language models detect repairs as they process the input; however, like past work on speech repair detection, they do not specify how speech repairs should be handled by the parser. (Hindle, 1983) and (Bear et al., 1992) performed speech repair identification in their parsers, and removed the corrected material (reparandum) from consideration. (Hindle, 1983) states that repairs are available for semantic analysis but provides no details on the representation to be used.

Clearly repairs should be available for semantic analysis as they play a role in dialog structure. For example, repairs can contain referents that are needed to interpret subsequent text: have the engine take the oranges to Elmira, um, I mean, take them to Corning. (Brennan and Williams, 1995) discusses the role of fillers (a type of editing term) in expressing uncertainty, and (Schober, 1999) describes how editing terms and speech repairs correlate with planning difficulty. Clearly this is information that should be conveyed to higher-level reasoning processes. An additional advantage to making the parser aware of speech repairs is that it can use its knowledge of grammar and the syntactic structure of the input to correct errors made in pre-parser repair identification.

Like Hindle's work, the parsing architecture presented below uses phrase structure to represent the corrected utterance, but it also forms a phrase structure tree containing the reparandum. Editing terms are considered separate utterances that occur inside other utterances. So for the partial utterance, take the ban- um the oranges, three constituents would be produced, one for um, another for take the ban-, and a third for take the oranges.
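To make the intended segmentation concrete, here is a small illustrative encoding (ours, not the parser's internal representation) of the three constituents produced for such a partial utterance:

    def segment_repair(words, rep_span, edit_span):
        # rep_span marks the reparandum, edit_span the editing term.
        rep_start, rep_end = rep_span
        edit_start, edit_end = edit_span
        editing   = words[edit_start:edit_end]            # separate utterance
        started   = words[:rep_end]                       # what was started
        corrected = words[:rep_start] + words[edit_end:]  # reparandum skipped
        return editing, started, corrected

    # "take the ban- um the oranges"
    words = ["take", "the", "ban-", "um", "the", "oranges"]
    editing, started, corrected = segment_repair(words, (1, 3), (3, 4))
    # editing   == ["um"]
    # started   == ["take", "the", "ban-"]
    # corrected == ["take", "the", "oranges"]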
Another complicating factor of dialog is the presence of more than one speaker. This paper deals with the two speaker case, but the principles presented should apply generally. Sometimes the second speaker needs to be treated independently, as in the case of backchannels (um-hm) or failed attempts to grab the floor. Other times, the speakers interact to collaboratively form utterances or correct each other. The next step in language modeling will be to decide whether speakers are collaborating or whether a second speaker is interrupting the context with a repair or backchannel. Parsers must be able to form phrase structure trees around interruptions such as backchannels as well as treat interruptions as continuations of the first speaker's input.

This paper presents a parser architecture that works with a speech repair identifying language model to handle speech repairs, editing terms, and two speakers. Section 2 details the allowable forms of collaboration, interruption, and speech repair in our model. Section 3 gives an overview of how this model is implemented in a parser. This topic is explored in more detail in (Core and Schubert, 1998). Section 4 discusses the applicability of the model to a test corpus, and section 5 includes examples of trees output by the parser. Section 6 discusses the results of using the parser to correct the output of a pre-parser speech repair identifier.

2 What is a Dialog

From a traditional parsing perspective, a text is a series of sentences to be analyzed. An interpretation for a text would be a series of parse trees and logical forms, one for each sentence. An analogous view is often taken of dialog; dialog is a series of "utterances" and a dialog interpretation is a series of parse trees and logical forms, one for each successive utterance. Such a view either disallows editing terms, repairs, interjected acknowledgments and other disruptions, or else breaks semantically complete utterances into fragmentary ones. We analyze dialog in terms of a set of utterances covering all the words of the dialog. As explained below, utterances can be formed by more than one speaker and the words of two utterances may be interleaved.

We define an utterance here as a sentence, phrasal answer (to a question), editing term, or acknowledgment. Editing terms and changes of speaker are treated specially. Speakers are allowed to interrupt themselves to utter an editing term. These editing terms are regarded as separate utterances. At changes of speaker, the new speaker may: 1) add to what the first speaker has said, 2) start a new utterance, or 3) continue an utterance that was left hanging at the last change of speaker (e.g., because of an acknowledgment). Note that a speaker may try to interrupt another speaker and succeed in uttering a few words but then give up if the other speaker does not stop talking. These cases are classified as incomplete utterances and are included in the interpretation of the dialog.

Except in utterances containing speech repairs, each word can only belong to one utterance. Speech repairs are intra-utterance corrections made by either speaker. The reparandum is the material corrected by the repair. We form two interpretations of an utterance with a speech repair. One interpretation includes all of the utterance up to the reparandum end but stops at that point; this is what the speaker started to say, and will likely be an incomplete utterance. The second interpretation is the corrected utterance and skips the reparandum. In the example, you should take the boxcar I mean the tanker to Corning, the reparandum is the boxcar. Based on our previous rules, the editing term I mean is treated as a separate utterance. The two interpretations produced by the speech repair are the utterance, you should take the tanker to Corning, and the incomplete utterance, you should take the boxcar.

3 Dialog Parsing

The modifications required to a parser to implement this definition of dialog are relatively straightforward.
Based on our previous rules the edit- ing term I mean is treated as a separate ut- terance. The two interpretations produced by the speech repair are the utterance, you should take the tanker to Coming, and the incomplete utterance, you should take the boxcar. 3 Dialog Parsing The modifications required to a parser to implement this definition of dialog are relatively straightforward. At changes of 414 speaker, copies are made of all phrase hypotheses (arcs in a chart parser, for example) ending at the previous change of speaker. These copies are extended to the current change of speaker. We will use the term contribution (contr) here to refer to an uninterrupted sequence of words by one speaker (the words between speaker changes). In the example below, consider change of speaker (cos) 2. Copies of all phrase hypotheses ending at change of speaker 1 are extended to end at change of speaker 2. In this way, speaker A can form a phrase from contr-1 and contr-3 skipping speaker B's interruption, or contr-1, contr-2, and contr-3 can all form one constituent. At change of speaker 3, all phrase hypotheses ending at change of speaker 2 are extended to end at change of speaker 3 except those hypotheses that were extended from the pre- vious change of speaker. Thus, an utterance cannot be formed from only contr-1 and contr-4. This mechanism implements the rules for speaker changes given in section 2: at each change of speaker, the new speaker can either build on the last contribution, build on their last contribution, or start a new utterance. A: contr-1 contr-3 B: contr-2 contr-4 cos 1 2 3 These rules assume that changes of speaker are well defined points of time, meaning that words of two speakers do not overlap. In the experiments of this paper, a corpus was used where word endings were time-stamped (word beginnings are unavail- able). These times were used to impose an ordering; if one word ends before another it is counted as being before the other word. Clearly, this could be inaccurate given that words may overlap. Moreover, speakers may be slow to interrupt or may anticipate the first speaker and interrupt early. However, this approximation works fairly well as dis- cussed in section 4. Other parts of the implementation are ac- complished through metarules. The term metarule is used because these rules act not on words but grammar rules. Consider the editing term metarule. When an editing term is seen 2, the metarule extends copies of all phrase hypotheses ending at the edit- ing term over that term to allow utterances to be formed around it. This metarule (and our other metarules) can be viewed declar- atively as specifying allowable patterns of phrase breakage and interleaving (Core and Schubert, 1998). This notion is different from the traditional linguistic conception of metarules as rules for generating new PSRs from given PSRs. ~ Procedurally, we can think of metarules as creating new (discon- tinuous) pathways for the parser's traversal of the input, and this view is readily imple- mentable. The repair metarule, when given the hypo- thetical start and end of a reparandum (say from a language model such as (Heeman and Allen, 1997)), extends copies of phrase hy- potheses over the reparandum allowing the corrected utterance to be formed. In case the source of the reparandum information gave a false alarm, the alternative of not skipping the reparandum is still available. 
For each utterance in the input, the parser needs to find an interpretation that starts at the first word of the input and ends at the last word. 4 This interpretation may have been produced by one or more applications of the repair metarule allowing the interpre- tation to exclude one or more reparanda. For each reparandum skipped, the parser needs to find an interpretation of what the user started to say. In some cases, what the user started to say is a complete constituent: take 2The parser's lexicon has a list of 35 editing terms that activate the editing term metarule. 3For instance, a traditional way to accommodate editing terms might be via a metarule, X -> Y Z ==> X -> Y editing-term Z, where X varies over categories and Y and Z vary over se- quences of categories. However, this would produce phrases containing editing terms as constituents, whereas in our approach editing terms are separate utterances. 4In cases of overlapping utterances, it will take multiple interpretations (one for each utterance) to extend across the input. 415 the oranges I mean take the bananas. Other- wise, the parser needs to look for an incom- plete interpretation ending at the reparan- dum end. Typically, there will be many such interpretations; the parser searches for the longest interpretations and then ranks them based on their category: UTT > S > VP > PP, and so on. The incomplete interpreta- tion may not extend all the way to the start of the utterance in which case the process of searching for incomplete interpretations is repeated. Of course the search process is re- stricted by the first incomplete constituent. If, for example, an incomplete PP is found then any additional incomplete constituent would have to expect a PP. Figure 1 shows an example of this process on utterance 62 from TRAINS dialog d92a- 1.2 (Heeman and Allen, 1995). Assuming perfect speech repair identification, the re- pair metarule will be fired from position 0 to position 5 meaning the parser needs to find an interpretation starting at position 5 and ending at the last position in the input. This interpretation (the corrected utterance) is shown under the words in figure 1. The parser then needs to find an interpretation of what the speaker started to say. There are no complete constituents ending at posi- tion 5. The parser instead finds the incom- plete constituent ADVBL -> adv • ADVBL. Our implementation is a chart parser and ac- cordingly incomplete constituents are repre- sented as arcs. This arc only covers the word through so another arc needs to be found. The arc S -> S • ADVBL expects an ADVBL and covers the rest of the input, completing the interpretation of what the user started to say (as shown on the top of figure 1). The editing terms are treated as separate utter- ances via the editing term metarule. 4 Verification of the Framework To test this framework, data was examined from 31 TRAINS 93 dialogs (Heeman and Allen, 1995), a series of human-human prob- lem solving dialogs in a railway transporta- tion domain. 5 There were 3441 utterances, 6 19189 words, 259 examples of overlapping utterances, and 495 speech repairs. The framework presented above covered all the overlapping utterances and speech repairs with three exceptions. Ordering the words of two speakers strictly by word ending points neglects the fact that speakers may be slow to interrupt or may anticipate the original speaker and inter- rupt early. The latter was a problem in utterances 80 and 81 of dialog d92a-l.2 as shown below. 
The numbers in the last row represent times of word endings; for example, so ends at 255.5 seconds into the dialog. Speaker s uttered the complement of u's sentence before u had spoken the verb. 80 u: so the total is 81 s: five 255.5 255.56 255.83 256 256.61 However, it is important to examine the context following: 82 s: that is right s: okay 83 u: five 84 s: so total is five The overlapping speech was confusing enough to the speakers that they felt they needed to reiterate utterances 80 and 81 in the next utterances. The same is true of the other two such examples in the corpus. It may be the case that a more sophisticated model of interruption will not be necessary if speakers cannot follow completions that lag or precede the correct interruption area. 5 The Dialog Parser Implementation In addition to manually checking the ad- equacy of the framework on the cited TRAINS data, we tested a parser imple- SSpecifically, the dialogs were d92-1 through d92a-5.2 and d93-10.1 through d93-14.1 6This figure does not count editing term utter- ances nor utterances started in the middle of another speaker's utterance. 416 broken-S S -> S eADVBL broken-ADVBL S ADVBL -> adv • ADVBL adv UTT UTI" s: we will take them through um let us see do we want to take them through to Dansville aux NP VP S Figure 1: Utterance 62 of d92a-1.2 mented as discussed in section 3 on the same data. The parser was a modified version of the one in the TRIPS dialog system (Fer- guson and Allen, 1998). Users of this sys- tem participate in a simulated evacuation scenario where people must be transported along various routes to safety. Interactions of users with TRIPS were not investigated in detail because they contain few speech re- pairs and virtually no interruptions. T But, the domains of TRIPS and TRAINS are sim- ilar enough to allow us run TRAINS exam- ples on the TRIPS parser. One problem, though, is the grammat- ical coverage of the language used in the TRAINS domain. TRIPS users keep their utterances fairly simple (partly because of speech recognition problems) while humans talking to each other in the TRAINS do- main felt no such restrictions. Based on a 100-utterance test set drawn randomly from the TRAINS data, parsing accuracy is 62% 8 However, 37 of these utterances are one word ~The low speech recognition accuracy encourages users to produce short, carefully spoken utterances leading to few speech repairs. Moreover, the system does not speak until the user releases the speech in- put button, and once it responds will not stop talk- ing even if the user interrupts the response. This virtually eliminates interruptions. 8The TRIPS parser does not always return a unique utterance interpretation. The parser was counted as being correct if one of the interpretations it returned was correct. The usual cause of failure was the parser finding no interpretation. Only 3 fail- ures were due to the parser returning only incorrect interpretations. long (okay, yeah, etc.) and 5 utterances were question answers (two hours, in Elmira); thus on interesting utterances, accuracy is 34.5%. Assuming perfect speech repair de- tection, only 125 of the 495 corrected speech repairs parsed. 9 Of the 259 overlapping utterances, 153 were simple backchannels consisting only of editing terms (okay, yeah) spoken by a second speaker in the middle of the first speaker's utterance. If the parser's grammar handles the first speaker's utterance these can be parsed, as the second speaker's in- terruption can be skipped. 
The experiments focused on the 106 overlapping utterances that were more complicated. In only 24 of these cases did the parser's grammar cover both of the overlapping utterances. One of these examples, utterances utt39 and 40 from d92a-3.2 (see below), involves three independently formed utterances that overlap. We have omitted the beginning of s's utterance, so that would be five a.m. for space reasons. Figure 2 shows the syntactic structure of s's utterance (a relative clause) under the words of the utterance, u's two utterances are shown above the words of figure 2. The purpose of this figure is to show how interpretations can be formed around interruptions by another speaker and how these interruptions themselves form interpretations. The specific syntactic 9In 19 cases, the parser returned interpretation(s) but they were incorrect but not included in the above figure. 417 UTT u: and then I go back to Avon s: via Dansville UTT Figure 3: Utterances 132 and 133 from d92a- 5.2 structure of the utterances is not shown. Typically, triangles are used to represent a parse tree without showing its internal structure. Here, polygonal structures must be used due to the interleaved nature of the utterances. s: when it would get to bath u: okay how about to dansville Figure 3 is an example of a collaboratively built utterance, utterances 132 and 133 from d92a-5.2, as shown below, u's interpretation of the utterance (shown below the words in figure 3) does not include s's contribution because until utterance 134 (where u utters right) u has not accepted this continuation. u: and then I go back to avon s: via dansville 6 Rescoring a Pre-parser Speech Repair Identifier One of the advantages of providing speech repair information to the parser is that the parser can then use its knowledge of gram- mar and the syntactic structure of the input to correct speech repair identification errors. As a preliminary test of this assumption, we used an older version of Heeman's language model (the current version is described in (Heeman and Allen, 1997)) and connected it to the current dialog parser. Because the parser's grammar only covers 35% of input sentences, corrections were only made based on global grammaticality. The effectiveness of the language module without the parser on the testing corpus is shown in table 1. i° The testing corpus con- i°Note, current versions of this language model perform significantly better. sisted of TRAINS dialogs containing 541 re- pairs, 3797 utterances, and 20,069 words, ii For each turn in the input, the language model output the n-best predictions it made (up to 100) regarding speech repairs, part of speech tags, and boundary tones. The parser starts by trying the language model's first choice. If this results in an in- terpretation covering the input, that choice is selected as the correct answer. Otherwise the process is repeated with the model's next choice. If all the choices are exhausted and no interpretations are found, then the first choice is selected as correct. This approach is similar to an experiment in (Bear et al., 1992) except that Bear et al. were more in- terested in reducing false alarms. Thus, if a sentence parsed without the repair then it was ruled a false alarm. Here the goal is to increase recall by trying lower probability alternatives when no parse can be found. The results of such an approach on the test corpus are listed in table 2. 
Recall increases by 4.8% (13 cases out of 541 repairs) show- ing promise in the technique of rescoring the output of a pre-parser speech repair iden- tifier. With a more comprehensive gram- mar, a strong disambiguation system, and the current version of Heeman's language model, the results should get better. The drop in precision is a worthwhile tradeoff as the parser is never forced to accept posited repairs but is merely given the option of pur- suing alternatives that include them. Adding actual speech repair identification (rather than assuming perfect identification) gives us an idea of the performance improve- ment (in terms of parsing) that speech repair handling brings us. Of the 284 repairs cor- rectly guessed in the augmented model, 79 parsed, i2 Out of 3797 utterances, this means that 2.1% of the time the parser would have failed without speech repair informa- nSpecifically the dialogs used were d92-1 through d92a-5.2; d93-10.1 through d93-10.4; and d93-11.1 through d93-14.2. The language model was never simultaneously trained and tested on the same data. i2In 11 cases, the parser returned interpretation(s) but they were incorrect and not included in the above figure. 418 s: when it UTT UTT would u: o~ay s: g e ~ l e S [rel] Figure 2: Utterances 39 and 40 of d92a-3.2 repairs correctly guessed false alarms missed recall precision 271 215 270 50.09% 55.76% Table 1: Heeman's Speech Repair Results repairs correctly guessed false alarms missed recall precision 284 371 257 52.50% 43.36% Table 2: Augmented Speech Repair Results tion. Although failures due to the gram- mar's coverage are much more frequent (38% of the time), as the parser is made more ro- bust, these 79 successes due to speech re- pair identification will become more signifi- cant. Further evaluation is necessary to test this model with an actual speech recognizer rather than transcribed utterances. 7 Conclusions Traditionally, dialog has been treated as a series of single speaker utterances, with no systematic allowance for speech repairs and editing terms. Such a treatment can- not adequately deal with dialogs involving more than one human (as appear in ma- chine translation or meeting analysis), and will not allow single user dialog systems to progress to more natural interactions. The simple set of rules given here allows speakers to collaborate to form utterances and pre- vents an interruption such as a backchannel response from disrupting the syntax of an- other speaker's utterance. Speech repairs are captured by parallel phrase structure trees, and editing terms are represented as separate utterances occurring inside other utterances. Since the parser has knowledge of gram- mar and the syntactic structure of the input, it can boost speech repair identification per- formance. In the experiments of this paper, the parser was able to increase the recall of a pre-parser speech identifier by 4.8%. An- other advantage of giving speech repair in- formation to the parser is that the parser can then include reparanda in its output and a truer picture of dialog structure can be formed. This can be crucial if a pronoun an- tecedent is present in the reparandum as in have the engine take the oranges to Elmira, urn, I mean, take them to Coming. In ad- dition, this information can help a dialog system detect uncertainty and planning dif- ficultly in speakers. The framework presented here is sufficient to describe the 3441 human-human utter- ances comprising the chosen set of TRAINS dialogs. 
More corpus investigation is necessary before we can claim the framework provides broad coverage of human-human dialog. Another necessary test of the framework is extension to dialogs involving more than two speakers.

Long term goals include further investigation into the TRAINS corpus and attempting full dialog analysis rather than experimenting with small groups of overlapping utterances. Another long term goal is to weigh the current framework against a purely robust parsing approach (Rosé and Levin, 1998), (Lavie, 1995) that treats out of vocabulary/grammar phenomena in the same way as editing terms and speech repairs. Robust parsing is critical to a parser such as the one described here, which has a coverage of only 62% on fluent utterances. In our corpus, the speech repair to utterance ratio is 14%. Thus, problems due to the coverage of the grammar are more than twice as likely as speech repairs. However, speech repairs occur with enough frequency to warrant separate attention. Unlike grammar failures, repairs are generally signaled not only by ungrammaticality, but also by pauses, editing terms, parallelism, etc.; thus an approach specific to speech repairs should perform better than just using a robust parsing algorithm to deal with them.

Acknowledgments

This work was supported in part by National Science Foundation grants IRI-9503312 and 5-28789. Thanks to James Allen, Peter Heeman, and Amon Seagull for their help and comments on this work.

References

J. Bear, J. Dowding, and E. Shriberg. 1992. Integrating multiple knowledge sources for detection and correction of repairs in human-computer dialog. In Proc. of the 30th annual meeting of the Association for Computational Linguistics (ACL-92), pages 56-63.

S. E. Brennan and M. Williams. 1995. The feeling of another's knowing: Prosody and filled pauses as cues to listeners about the metacognitive states of speakers. Journal of Memory and Language, 34:383-398.

M. Core and L. Schubert. 1998. Implementing parser metarules that handle speech repairs and other disruptions. In D. Cook, editor, Proc. of the 11th International FLAIRS Conference, Sanibel Island, FL, May.

G. Ferguson and J. F. Allen. 1998. TRIPS: An intelligent integrated problem-solving assistant. In Proc. of the National Conference on Artificial Intelligence (AAAI-98), pages 26-30, Madison, WI, July.

P. Heeman and J. Allen. 1995. The TRAINS 93 dialogues. TRAINS Technical Note 94-2, Department of Computer Science, University of Rochester, Rochester, NY 14627-0226.

Peter A. Heeman and James F. Allen. 1997. Intonational boundaries, speech repairs, and discourse markers: Modeling spoken dialog. In Proc. of the 35th Annual Meeting of the Association for Computational Linguistics, pages 254-261, Madrid, July.

D. Hindle. 1983. Deterministic parsing of syntactic non-fluencies. In Proc. of the 21st annual meeting of the Association for Computational Linguistics (ACL-83), pages 123-128.

A. Lavie. 1995. GLR*: A Robust Grammar Focused Parser for Spontaneously Spoken Language. Ph.D. thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.

C. P. Rosé and L. S. Levin. 1998. An interactive domain independent approach to robust dialogue interpretation. In Proc. of the 36th Annual Meeting of the Association for Computational Linguistics, Montreal, Quebec, Canada.

M. Schober. 1999. Speech disfluencies in spoken language systems: A dialog-centered approach.
In NSF Human Computer Interaction Grantees' Workshop (HCIGW 99), Orlando, FL.

M.-h. Siu and M. Ostendorf. 1996. Modeling disfluencies in conversational speech. In Proceedings of the 4th International Conference on Spoken Language Processing (ICSLP-96), pages 386-389.

Andreas Stolcke and Elizabeth Shriberg. 1996. Statistical language modeling for speech disfluencies. In Proceedings of the International Conference on Audio, Speech and Signal Processing (ICASSP), May.
Efficient probabilistic top-down and left-corner parsing†

Brian Roark and Mark Johnson
Cognitive and Linguistic Sciences, Box 1978, Brown University, Providence, RI 02912, USA
brian-roark@brown.edu, mj@cs.brown.edu

† This material is based on work supported by the National Science Foundation under Grant No. SBR-9720368.

Abstract

This paper examines efficient predictive broad-coverage parsing without dynamic programming. In contrast to bottom-up methods, depth-first top-down parsing produces partial parses that are fully connected trees spanning the entire left context, from which any kind of non-local dependency or partial semantic interpretation can in principle be read. We contrast two predictive parsing approaches, top-down and left-corner parsing, and find both to be viable. In addition, we find that enhancement with non-local information not only improves parser accuracy, but also substantially improves the search efficiency.

1 Introduction

Strong empirical evidence has been presented over the past 15 years indicating that the human sentence processing mechanism makes on-line use of contextual information in the preceding discourse (Crain and Steedman, 1985; Altmann and Steedman, 1988; Britt, 1994) and in the visual environment (Tanenhaus et al., 1995). These results lend support to Mark Steedman's (1989) "intuition" that sentence interpretation takes place incrementally, and that partial interpretations are being built while the sentence is being perceived. This is a very commonly held view among psycholinguists today.

Many possible models of human sentence processing can be made consistent with the above view, but the general assumption that must underlie them all is that explicit relationships between lexical items in the sentence must be specified incrementally. Such a processing mechanism stands in marked contrast to dynamic programming parsers, which delay construction of a constituent until all of its sub-constituents have been completed, and whose partial parses thus consist of disconnected tree fragments. For example, such parsers do not integrate a main verb into the same tree structure as its subject NP until the VP has been completely parsed, and in many cases this is the final step of the entire parsing process. Without explicit on-line integration, it would be difficult (though not impossible) to produce partial interpretations on-line. Similarly, it may be difficult to use non-local statistical dependencies (e.g. between subject and main verb) to actively guide such parsers.

Our predictive parser does not use dynamic programming, but rather maintains fully connected trees spanning the entire left context, which make explicit the relationships between constituents required for partial interpretation. The parser uses probabilistic best-first parsing methods to pursue the most likely analyses first, and a beam-search to avoid the non-termination problems typical of non-statistical top-down predictive parsers.

There are two main results. First, this approach works and, with appropriate attention to specific algorithmic details, is surprisingly efficient. Second, not just accuracy but also efficiency improves as the language model is made more accurate. This bodes well for future research into the use of other non-local (e.g. lexical and semantic) information to guide the parser. In addition, we show that the improvement in accuracy associated with left-corner parsing over top-down is attributable to the non-local information supplied by the strategy, and can thus be obtained through other methods that utilize that same information.
In addition, we show that the improvement in accuracy associated with left-corner parsing over top-down is attributable to the non-local information supplied by the strategy, and can thus be obtained through other methods that utilize that same information. 421 2 Parser architecture The parser proceeds incrementally from left to right, with one item of look-ahead. Nodes are expanded in a standard top-down, left-to-right fashion. The parser utilizes: (i) a probabilis- tic context-free grammar (PCFG), induced via standard relative frequency estimation from a corpus of parse trees; and (ii) look-ahead prob- abilities as described below. Multiple compet- ing partial parses (or analyses) are held on a priority queue, which we will call the pending heap. They are ranked by a figure of merit (FOM), which will be discussed below. Each analysis has its own stack of nodes to be ex- panded, as well as a history, probability, and FOM. The highest ranked analysis is popped from the pending heap, and the category at the top of its stack is expanded. A category is ex- panded using every rule which could eventually reach the look-ahead terminal. For every such rule expansion, a new analysis is created 1 and pushed back onto the pending heap. The FOM for an analysis is the product of the probabilities of all PCFG rules used in its deriva- tion and what we call its look-ahead probabil- ity (LAP). The LAP approximates the product of the probabilities of the rules that will be re- quired to link the analysis in its current state with the look-ahead terminal 2. That is, for a grammar G, a stack state [C1 ... C,] and a look- ahead terminal item w: (1) LAP --- PG([C1. . . Cn] -~ wa) We recursively estimate this with two empir- ically observed conditional probabilities for ev- ery non-terminal Ci on the stack: /~(Ci 2+ w) and/~(Ci -~ e). The LAP approximation for a given stack state and look-ahead terminal is: (2) PG([Ci . .. Ca] wot) P(Ci w) + When the topmost stack category of an analy- sis matches the look-ahead terminal, the termi- nal is popped from the stack and the analysis 1We count each of these as a parser state (or rule expansion) considered, which can be used as a measure of efficiency. 2Since this is a non-lexicalized grammar, we are tak- ing pre-terminal POS markers as our terminal items. is pushed onto a second priority queue, which we will call the success heap. Once there are "enough" analyses on the success heap, all those remaining on the pending heap are discarded. The success heap then becomes the pending heap, and the look-ahead is moved forward to the next item in the input string. When the end of the input string is reached, the analysis with the highest probability and an empty stack is returned as the parse. If no such parse is found, an error is returned. The specifics of the beam-search dictate how many analyses on the success heap constitute "enough". One approach is to set a constant beam width, e.g. 10,000 analyses on the suc- cess heap, at which point the parser moves to the next item in the input. A problem with this approach is that parses towards the bottom of the success heap may be so unlikely relative to those at the top that they have little or no chance of becoming the most likely parse at the end of the day, causing wasted effort. 
An al- ternative approach is to dynamically vary the beam width by stipulating a factor, say 10 -5, and proceed until the best analysis on the pend- ing heap has an FOM less than 10 -5 times the probability of the best analysis on the success heap. Sometimes, however, the number of anal- yses that fall within such a range can be enor- mous, creating nearly as large of a processing burden as the first approach. As a compromise between these two approaches, we stipulated a base beam factor a (usually 10-4), and the ac- tual beam factor used was a •/~, where/3 is the number of analyses on the success heap. Thus, when f~ is small, the beam stays relatively wide, to include as many analyses as possible; but as /3 grows, the beam narrows. We found this to be a simple and successful compromise. Of course, with a left recursive grammar, such a top-down parser may never terminate. If no analysis ever makes it to the success heap, then, however one defines the beam-search, a top-down depth-first search with a left-recursive grammar will never terminate. To avoid this, one must place an upper bound on the number of analyses allowed to be pushed onto the pend- ing heap. If that bound is exceeded, the parse fails. With a left-corner strategy, which is not prey to left recursion, no such upper bound is necessary. 422 (a) (b) (c) (d) NP NP DT+JJ+JJ NN DT NP-DT DT+JJ JJ cat the JJ NP-DT-JJ DT JJ happy fat JJ NN I I I I the fat happy cat NP NP DT NP-DT DT NP-DT l the JJ NP-DT-JJ tLe JJ NP-DT-JJ _J fat JJ NP-DT-JJ-JJ fiat JJ NP-DT-JJ-JJ happy NN happy NN NP-DT-JJ-JJ-NN I I I cat cat e Figure 1: Binaxized trees: (a) left binaxized (LB); (b) right binaxized to binary (RB2); (c) right binaxized to unary (RB1); (d) right binarized to nullaxy (RB0) 3 Grammar transforms Nijholt (1980) characterized parsing strategies in terms of announce points: the point at which a parent category is announced (identified) rel- ative to its children, and the point at which the rule expanding the parent is identified. In pure top-down parsing, a parent category and the rule expanding it are announced before any of its children. In pure bottom-up parsing, they are identified after all of the children. Gram- mar transforms are one method for changing the announce points. In top-down parsing with an appropriately binaxized grammar, the pax- ent is identified before, but the rule expanding the parent after, all of the children. Left-corner parsers announce a parent category and its ex- panding rule after its leftmost child has been completed, but before any of the other children. 3.1 Delaying rule identification through binarization Suppose that the category on the top of the stack is an NP and there is a determiner (DT) in the look-ahead. In such a situation, there is no information to distinguish between the rules NP ~ DT JJ NN andNP--+DT JJ NNS. If the decision can be delayed, however, until such a time as the relevant pre-terminal is in the look-ahead, the parser can make a more in- formed decision. Grammar binaxization is one way to do this, by allowing the parser to use a rule like NP --+ DT NP-DT, where the new non-terminal NP-DT can expand into anything that follows a DT in an NP. The expansion of NP-DT occurs only after the next pre-terminal is in the look-ahead. Such a delay is essential for an efficient implementation of the kind of incremental parser that we are proposing. There axe actually several ways to make a grammar binary, some of which are better than others for our parser. 
The first distinction that can be drawn is between what we will call left binaxization (LB) versus right binaxization (RB, see figure 1). In the former, the leftmost items on the righthand-side of each rule are grouped together; in the latter, the rightmost items on the righthand-side of the rule are grouped to- gether. Notice that, for a top-down, left-to-right parser, RB is the appropriate transform, be- cause it underspecifies the right siblings. With LB, a top-down parser must identify all of the siblings before reaching the leftmost item, which does not aid our purposes. Within RB transforms, however, there is some variation, with respect to how long rule under- specification is maintained. One method is to have the final underspecified category rewrite as a binary rule (hereafter RB2, see figure lb). An- other is to have the final underspecified category rewrite as a unary rule (RB1, figure lc). The last is to have the final underspecified category rewrite as a nullaxy rule (RB0, figure ld). No- tice that the original motivation for RB, to delay specification until the relevant items are present in the look-ahead, is not served by RB2, because the second child must be specified without being present in the look-ahead. RB0 pushes the look- ahead out to the first item in the string after the constituent being expanded, which can be use- ful in deciding between rules of unequal length, e.g. NP---+ DT NN and NP ~ DT NN NN. Table 1 summarizes some trials demonstrat- 423 Binarization Rules in Percent of Avg. States Avg. Labelled Avg. MLP Ratio of Avg. Grammar Sentences Considered Precision and Labelled Prob to Avg. Parsed* Recall t Prec/Rec t MLP Prob t None 14962 34.16 19270 .65521 .76427 .001721 LB 37955 33.99 96813 .65539 .76095 .001440 I~B1 29851 91.27 10140 .71616 .72712 .340858 RB0 41084 97.37 13868 .73207 .72327 .443705 Beam Factor = 10 -4 *Length ~ 40 (2245 sentences in F23 Avg. length -- 21.68) tof those sentences parsed Table 1: The effect of different approaches to binarization ing the effect of different binarization ap- proaches on parser performance. The gram- mars were induced from sections 2-21 of the Penn Wall St. Journal Treebank (Marcus et al., 1993), and tested on section 23. For each transform tested, every tree in the training cor- pus was transformed before grammar induc- tion, resulting in a transformed PCFG and look- ahead probabilities estimated in the standard way. Each parse returned by the parser was de- transformed for evaluation 3. The parser used in each trial was identical, with a base beam factor c~ = 10 -4. The performance is evaluated using these measures: (i) the percentage of can- didate sentences for which a parse was found (coverage); (ii) the average number of states (i.e. rule expansions) considered per candidate sentence (efficiency); and (iii) the average la- belled precision and recall of those sentences for which a parse was found (accuracy). We also used the same grammars with an exhaustive, bottom-up CKY parser, to ascertain both the accuracy and probability of the maximum like- lihood parse (MLP). We can then additionally compare the parser's performance to the MLP's on those same sentences. As expected, left binarization conferred no benefit to our parser. Right binarization, in con- trast, improved performance across the board. RB0 provided a substantial improvement in cov- erage and accuracy over RB1, with something of a decrease in efficiency. 
This efficiency hit is partly attributable to the fact that the same tree has more nodes with RB0. Indeed, the efficiency improvement with right binarization over the standard grammar is even more interesting in light of the great increase in the size of the grammars.

It is worth noting at this point that, with the RB0 grammar, this parser is now a viable broad-coverage statistical parser, with good coverage, accuracy, and efficiency. (The very efficient bottom-up statistical parser detailed in Charniak et al. (1998) measured efficiency in terms of total edges popped. An edge (or, in our case, a parser state) is considered when a probability is calculated for it, and we felt that this was a better efficiency measure than simply those popped. As a baseline, their parser considered an average of 2216 edges per sentence in section 22 of the WSJ corpus (p.c.).) Next we considered the left-corner parsing strategy.

3.2 Left-corner parsing

Left-corner (LC) parsing (Rosenkrantz and Lewis II, 1970) is a well-known strategy that uses both bottom-up evidence (from the left corner of a rule) and top-down prediction (of the rest of the rule). Rosenkrantz and Lewis showed how to transform a context-free grammar into a grammar that, when used by a top-down parser, follows the same search path as an LC parser. These LC grammars allow us to use exactly the same predictive parser to evaluate top-down versus LC parsing. Naturally, an LC grammar performs best with our parser when right binarized, for the same reasons outlined above. We use transform composition to apply first one transform, then another to the output of the first. We denote this A o B, where (A o B)(t) = B(A(t)). After applying the left-corner transform, we then binarize the resulting grammar, i.e. LC o RB. (Given that the LC transform involves nullary productions, the use of RB0 is not needed, i.e. nullary productions need only be introduced from one source. Thus binarization with left corner is always to unary (RB1).)

Another probabilistic LC parser investigated (Manning and Carpenter, 1997), which utilized an LC parsing architecture (not a transformed grammar), also got a performance boost through right binarization. This, however, is equivalent to RB o LC, which is a very different grammar from LC o RB. Given our two binarization orientations (LB and RB), there are four possible compositions of binarization and LC transforms:

(a) LB o LC
(b) RB o LC
(c) LC o LB
(d) LC o RB

Table 2 shows left-corner results over various conditions. (Option (c) is not the appropriate kind of binarization for our parser, as argued in the previous section, and so is omitted.) Interestingly, options (a) and (d) encode the same information, leading to nearly identical performance. (The difference is due to the introduction of vacuous unary rules with RB.) As stated before, right binarization moves the rule announce point from before to after all of the children. The LC transform is such that LC o RB also delays parent identification until after all of the children.

Table 2: Left-corner results

Transform | Rules in Grammar | Pct. of Sentences Parsed* | Avg. States Considered | Avg. Labelled Precision and Recall† | Avg. MLP Labelled Prec/Rec† | Ratio of Avg. Prob to Avg. MLP Prob†
Left Corner (LC) | 21797 | 91.75 | 9000 | .76399 | .78156 | .175928
LB o LC | 53026 | 96.75 | 7865 | .77815 | .78056 | .359828
LC o RB | 53494 | 96.7 | 8125 | .77830 | .78066 | .359439
LC o RB o ANN | 55094 | 96.21 | 7945 | .77854 | .78094 | .346778
RB o LC | 86007 | 93.38 | 4675 | .76120 | .80529 | .267330

Beam factor = 10^-4. *Length ≤ 40 (2245 sentences in F23; avg. length = 21.68). †Of those sentences parsed.
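The left-corner transform itself is mechanical enough to sketch. The following is our own unfiltered rendering of the Rosenkrantz and Lewis schema (new symbols A-X mean "an A whose left corner X has already been found"); a practical version would restrict the A-X pairs to those in the left-corner relation, which this sketch does not do:

```python
def left_corner_transform(rules, nonterminals, terminals):
    """LC transform of a CFG given as (lhs, rhs_tuple) pairs.

    Schema:  A   -> a  A-a       for every terminal a
             A-B -> beta  A-X    for every rule X -> B beta
             A-A -> (empty)
    A top-down parser using the output follows the same search path as a
    left-corner parser on the original grammar.
    """
    out = []
    for a in nonterminals:
        for t in terminals:
            out.append((a, (t, f"{a}-{t}")))                     # scan a left-corner terminal
        for lhs, rhs in rules:
            head, beta = rhs[0], rhs[1:]
            out.append((f"{a}-{head}", beta + (f"{a}-{lhs}",)))  # climb one rule
        out.append((f"{a}-{a}", ()))                             # goal reached: emit epsilon
    return out

def compose(*transforms):
    """Transform composition as in the text: compose(A, B)(g) = B(A(g))."""
    def run(g):
        for f in transforms:
            g = f(g)
        return g
    return run
```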
The transform LC o RB o ANN moves the parent announce point back to the left corner by introducing unary rules at the left corner that simply identify the parent of the binarized rule. This allows us to test the effect of the position of the parent announce point on the performance of the parser. As we can see, however, the effect is slight, with similar performance on all measures.

RB o LC performs with higher accuracy than the others when used with an exhaustive parser, but seems to require a massive beam in order to even approach performance at the MLP level. Manning and Carpenter (1997) used a beam width of 40,000 parses on the success heap at each input item, which must have resulted in an order of magnitude more rule expansions than what we have been considering up to now, and yet their average labelled precision and recall (.7875) still fell well below what we found to be the MLP accuracy (.7987) for the grammar. We are still investigating why this grammar functions so poorly when used by an incremental parser.

3.3 Non-local annotation

Johnson (1998) discusses the improvement of PCFG models via the annotation of non-local information onto non-terminal nodes in the trees of the training corpus. One simple example is to copy the parent node onto every non-terminal, e.g. the rule S → NP VP becomes S → NP^S VP^S. The idea here is that the distribution of rules of expansion of a particular non-terminal may differ depending on the non-terminal's parent. Indeed, it was shown that this additional information improves the MLP accuracy dramatically.

We looked at two kinds of non-local information annotation: parent (PA) and left-corner (LCA). Left-corner parsing gives improved accuracy over top-down or bottom-up parsing with the same grammar. Why? One reason may be that the ancestor category exerts the same kind of non-local influence upon the parser that the parent category does in parent annotation. To test this, we annotated the left-corner ancestor category onto every leftmost non-terminal category. The results of our annotation trials are shown in table 3.

Table 3: Non-local annotation results

Transform | Rules in Grammar | Pct. of Sentences Parsed* | Avg. States Considered | Avg. Labelled Precision and Recall† | Avg. MLP Labelled Prec/Rec† | Ratio of Avg. Prob to Avg. MLP Prob†
RB0 | 41084 | 97.37 | 13868 | .73207 | .72327 | .443705
PA o RB0 | 63467 | 95.19 | 8596 | .79188 | .79759 | .486995
LC o RB | 53494 | 96.7 | 8125 | .77830 | .78066 | .359439
LCA o RB0 | 58669 | 96.48 | 11158 | .77476 | .78058 | .495912
PA o LC o RB | 80245 | 93.52 | 4455 | .81144 | .81833 | .484428

Beam factor = 10^-4. *Length ≤ 40 (2245 sentences in F23; avg. length = 21.68). †Of those sentences parsed.

There are two important points to notice from these results. First, with PA we get not only the previously reported improvement in accuracy, but additionally a fairly dramatic decrease in the number of parser states that must be visited to find a parse. That is, the non-local information not only improves the final product of the parse, but it guides the parser more quickly to the final product. The annotated grammar has 1.5 times as many rules, and would slow a bottom-up CKY parser proportionally. Yet our parser actually considers far fewer states en route to the more accurate parse.
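Parent annotation is a one-pass transform over the training trees. A minimal sketch (ours, with trees again as (label, children) pairs; whether pre-terminals are also annotated is a design choice we leave out here):

```python
def parent_annotate(tree, parent=None):
    """Copy the parent label onto each non-terminal (PA), so that a rule
    S -> NP VP in the training trees becomes S -> NP^S VP^S."""
    label, children = tree
    if len(children) == 1 and isinstance(children[0], str):
        return tree                                   # leave pre-terminals bare
    new_label = f"{label}^{parent}" if parent else label
    return (new_label, [parent_annotate(child, label) for child in children])
```

LCA annotation would instead thread the left-corner ancestor's label down the leftmost spine only.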
Second, LC-annotation gives nearly all of the accuracy gain of left-corner parsing (the rest could very well be within noise), in support of the hypothesis that the ancestor information was responsible for the observed accuracy improvement. This result suggests that if we can determine the information that is being annotated by the troublesome RB o LC transform, we may be able to get the accuracy improvement with a relatively narrow beam. Parent annotation before the LC transform gave us the best performance of all, with very few states considered on average, and excellent accuracy for a non-lexicalized grammar.

4 Accuracy/Efficiency tradeoff

One point that deserves to be made is that there is something of an accuracy/efficiency tradeoff with regard to the base beam factor. The results given so far were at 10^-4, which functions pretty well for the transforms we have investigated. Figures 2 and 3 show four performance measures for four of our transforms at base beam factors of 10^-3, 10^-4, 10^-5, and 10^-6. There is a dramatically increasing efficiency burden as the beam widens, with varying degrees of payoff. With the top-down transforms (RB0 and PA o RB0), the ratio of the average probability to the MLP probability does improve substantially as the beam grows, yet with only marginal improvements in coverage and accuracy. Increasing the beam seems to do less with the left-corner transforms.

5 Conclusions and Future Research

We have examined several probabilistic predictive parser variations, and have shown the approach in general to be a viable one, both in terms of the quality of the parses, and the efficiency with which they are found. We have shown that the improvement of the grammars with non-local information not only results in better parses, but guides the parser to them much more efficiently, in contrast to dynamic programming methods. Finally, we have shown that the accuracy improvement that has been demonstrated with left-corner approaches can be attributed to the non-local information utilized by the method.

This is relevant to the study of the human sentence processing mechanism insofar as it demonstrates that it is possible to have a model which makes explicit the syntactic relationships between items in the input incrementally, while still scaling up to broad coverage. Future research will include:

• lexicalization of the parser
• utilization of fully connected trees for additional syntactic and semantic processing
• the use of syntactic predictions in the beam for language modeling
• an examination of predictive parsing with a left-branching language (e.g. German)

In addition, it may be of interest to the psycholinguistic community if we introduce a time variable into our model, and use it to compare such competing sentence processing models as race-based and competition-based parsing.

Figure 2: Changes in performance with beam factor variation. [Graphs omitted: average states considered per sentence and percentage of sentences parsed, as a function of base beam factor, for RB0, LC o RB, PA o RB0, and PA o LC o RB.]

References

G. Altmann and M. Steedman. 1988. Interaction with context during human sentence processing. Cognition, 30:198-238.

M. Britt. 1994. The interaction of referential ambiguity and argument structure.
Journal of Memory and Language, 33:251-283.

E. Charniak, S. Goldwater, and M. Johnson. 1998. Edge-based best-first chart parsing. In Proceedings of the Sixth Workshop on Very Large Corpora, pages 127-133.

S. Crain and M. Steedman. 1985. On not being led up the garden path: The use of context by the psychological parser. In D. Dowty, L. Karttunen, and A. Zwicky, editors, Natural Language Parsing. Cambridge University Press, Cambridge, UK.

M. Johnson. 1998. PCFG models of linguistic tree representations. Computational Linguistics, 24:617-636.

C. Manning and B. Carpenter. 1997. Probabilistic parsing using left corner language models. In Proceedings of the Fifth International Workshop on Parsing Technologies.

Figure 3: Changes in performance with beam factor variation. [Graphs omitted: average labelled precision and recall, and average ratio of parse probability to maximum likelihood probability, as a function of base beam factor, for RB0, LC o RB, PA o RB0, and PA o LC o RB.]

M.P. Marcus, B. Santorini, and M.A. Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

A. Nijholt. 1980. Context-free Grammars: Covers, Normal Forms, and Parsing. Springer Verlag, Berlin.

S.J. Rosenkrantz and P.M. Lewis II. 1970. Deterministic left corner parsing. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata, pages 139-152.

M. Steedman. 1989. Grammar, interpretation, and processing from the lexicon. In W. Marslen-Wilson, editor, Lexical representation and process. MIT Press, Cambridge, MA.

M. Tanenhaus, M. Spivey-Knowlton, K. Eberhard, and J. Sedivy. 1995. Integration of visual and linguistic information during spoken language comprehension. Science, 268:1632-1634.
A Selectionist Theory of Language Acquisition

Charles D. Yang*
Artificial Intelligence Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected]

* I would like to thank Julie Legate, Sam Gutmann, Bob Berwick, Noam Chomsky, John Frampton, and John Goldsmith for comments and discussion. This work is supported by an NSF graduate fellowship.

Abstract

This paper argues that developmental patterns in child language be taken seriously in computational models of language acquisition, and proposes a formal theory that meets this criterion. We first present developmental facts that are problematic for statistical learning approaches which assume no prior knowledge of grammar, and for traditional learnability models which assume the learner moves from one UG-defined grammar to another. In contrast, we view language acquisition as a population of grammars associated with "weights", that compete in a Darwinian selectionist process. Selection is made possible by the variational properties of individual grammars; specifically, their differential compatibility with the primary linguistic data in the environment. In addition to a convergence proof, we present empirical evidence in child language development, that a learner is best modeled as multiple grammars in co-existence and competition.

1 Learnability and Development

A central issue in linguistics and cognitive science is the problem of language acquisition: How does a human child come to acquire her language with such ease, yet without high computational power or favorable learning conditions? It is evident that any adequate model of language acquisition must meet the following empirical conditions:

• Learnability: such a model must converge to the target grammar used in the learner's environment, under plausible assumptions about the learner's computational machinery, the nature of the input data, sample size, and so on.

• Developmental compatibility: the learner modeled in such a theory must exhibit behaviors that are analogous to the actual course of language development (Pinker, 1979).

It is worth noting that the developmental compatibility condition has been largely ignored in the formal studies of language acquisition. In the rest of this section, I show that if this condition is taken seriously, previous models of language acquisition have difficulties explaining certain developmental facts in child language.

1.1 Against Statistical Learning

An empiricist approach to language acquisition has (re)gained popularity in computational linguistics and cognitive science; see Stolcke (1994), Charniak (1995), Klavans and Resnik (1996), de Marcken (1996), Bates and Elman (1996), Seidenberg (1997), among numerous others. The child is viewed as an inductive and "generalized" data processor such as a neural network, designed to derive structural regularities from the statistical distribution of patterns in the input data without prior (innate) specific knowledge of natural language. Most concrete proposals of statistical learning employ expensive and specific computational procedures such as compression, Bayesian inferences, propagation of learning errors, and usually require a large corpus of (sometimes pre-processed) data. These properties immediately challenge the psychological plausibility of the statistical learning approach.
In the present discussion, however, we are not concerned with this but simply grant that someday, someone might devise a statistical learning scheme that is psychologically plausible and also succeeds in converging to the target language. We show that even if such a scheme were possible, it would still face serious challenges from the important but often ignored requirement of developmental compatibility.

One of the most significant findings in child language research of the past decade is that different aspects of syntactic knowledge are learned at different rates. For example, consider the placement of the finite verb in French, where inflected verbs precede negation and adverbs:

Jean voit souvent/pas Marie.
'Jean sees often/not Marie.'

This property of French is mastered as early as the 20th month, as evidenced by the extreme rarity of incorrect verb placement in child speech (Pierce, 1992). In contrast, some aspects of language are acquired relatively late. For example, the requirement of using a sentential subject is not mastered by English children until as late as the 36th month (Valian, 1991), when English children stop producing a significant number of subjectless sentences.

When we examine the adult speech to children (transcribed in the CHILDES corpus; MacWhinney and Snow, 1985), we find that more than 90% of English input sentences contain an overt subject, whereas only 7-8% of all French input sentences contain an inflected verb followed by negation/adverb. A statistical learner, one which builds knowledge purely on the basis of the distribution of the input data, predicts that English obligatory subject use should be learned (much) earlier than French verb placement - exactly the opposite of the actual findings in child language.

Further evidence against statistical learning comes from the Root Infinitive (RI) stage (Wexler, 1994; inter alia) in children acquiring certain languages. Children in the RI stage produce a large number of sentences where matrix verbs are not finite - ungrammatical in adult language and thus appearing infrequently in the primary linguistic data, if at all. It is not clear how a statistical learner will induce non-existent patterns from the training corpus. In addition, in the acquisition of verb-second (V2) in Germanic grammars, it is known (e.g. Haegeman, 1994) that at an early stage, children use a large proportion (50%) of verb-initial (V1) sentences, a marked pattern that appears only sparsely in adult speech. Again, an inductive learner purely driven by corpus data has no explanation for these disparities between child and adult languages.

Empirical evidence as such poses a serious problem for the statistical learning approach. It seems a mistake to view language acquisition as an inductive procedure that constructs linguistic knowledge, directly and exclusively, from the distributions of input data.

1.2 The Transformational Approach

Another leading approach to language acquisition, largely in the tradition of generative linguistics, is motivated by the fact that although child language is different from adult language, it is different in highly restrictive ways. Given the input to the child, there are logically possible and computationally simple inductive rules to describe the data that are never attested in child language. Consider the following well-known example. Forming a question in English involves inversion of the auxiliary verb and the subject:

Is the man t tall?
where "is" has been fronted from the position t, the position it assumes in a declarative sentence. A possible inductive rule to describe the above sentence is this: front the first auxiliary verb in the sentence. This rule, though logically possible and computationally simple, is never attested in child language (Chomsky, 1975; Crain and Nakayama, 1987; Crain, 1991): that is, children are never seen to produce sentences like:

* Is the cat that the dog t chasing is scared?

where the first auxiliary is fronted (the first "is"), instead of the auxiliary following the subject of the sentence (here, the second "is" in the sentence).

Acquisition findings like these lead linguists to postulate that the human language capacity is constrained in a finite prior space, the Universal Grammar (UG). Previous models of language acquisition in the UG framework (Wexler and Culicover, 1980; Berwick, 1985; Gibson and Wexler, 1994) are transformational, borrowing a term from evolution (Lewontin, 1983), in the sense that the learner moves from one hypothesis/grammar to another as input sentences are processed. (Note that the transformational approach is not restricted to UG-based models; for example, Brill's influential work (1993) is a corpus-based model which successively revises a set of syntactic rules upon presentation of partially bracketed sentences. Note, however, that the state of the learning system at any time is still a single set of rules, that is, a single "grammar".) Learnability results can be obtained for some psychologically plausible algorithms (Niyogi and Berwick, 1996). However, the developmental compatibility condition still poses serious problems.

Since at any time the state of the learner is identified with a particular grammar defined by UG, it is hard to explain (a) the inconsistent patterns in child language, which cannot be described by any single adult grammar (e.g. Brown, 1973); and (b) the smoothness of language development (e.g. Pinker, 1984; Valian, 1991; inter alia), whereby the child gradually converges to the target grammar, rather than the abrupt jumps that would be expected from binary changes in hypotheses/grammars.

Having noted the inadequacies of the previous approaches to language acquisition, we will propose a theory that aims to meet the language learnability and language development conditions simultaneously. Our theory draws inspiration from Darwinian evolutionary biology.

2 A Selectionist Model of Language Acquisition

2.1 The Dynamics of Darwinian Evolution

Essential to Darwinian evolution is the concept of variational thinking (Lewontin, 1983). First, differences among individuals are viewed as "real", as opposed to deviant from some idealized archetypes, as in pre-Darwinian thinking. Second, such differences result in variance in operative functions among individuals in a population, thus allowing forces of evolution such as natural selection to operate. Evolutionary changes are therefore changes in the distribution of variant individuals in the population. This contrasts with Lamarckian transformational thinking, in which individuals themselves undergo direct changes (transformations) (Lewontin, 1983).

2.2 A population of grammars

Learning, including language acquisition, can be characterized as a sequence of states in which the learner moves from one state to another. Transformational models of language acquisition identify the state of the learner as a single grammar/hypothesis.
As noted in section 1, this makes it difficult to explain the inconsistency in child language and the smoothness of language development.

We propose that the learner be modeled as a population of "grammars", the set of all principled language variations made available by the biological endowment of the human language faculty. Each grammar G_i is associated with a weight p_i, 0 ≤ p_i ≤ 1, and Σ p_i = 1. In a linguistic environment E, the weight p_i(E, t) is a function of E and the time variable t, the time since the onset of language acquisition. We say that

Definition: Learning converges if ∀ε, 0 < ε < 1, ∀G_i: |p_i(E, t+1) − p_i(E, t)| < ε.

That is, learning converges when the composition and distribution of the grammar population are stabilized. Particularly, in a monolingual environment E_T in which a target grammar T is used, we say that learning converges to T if lim_{t→∞} p_T(E_T, t) = 1.

2.3 A Learning Algorithm

Write E → s to indicate that a sentence s is an utterance in the linguistic environment E. Write s ∈ G if a grammar G can analyze s, which, in a narrow sense, is parsability (Wexler and Culicover, 1980; Berwick, 1985). Suppose that there are altogether N grammars in the population. For simplicity, write p_i for p_i(E, t) at time t, and p_i' for p_i(E, t+1) at time t+1. Learning takes place as follows:

The Algorithm: Given an input sentence s, the child selects a grammar G_i with probability p_i.

• If s ∈ G_i:
  p_i' = p_i + γ(1 − p_i)
  p_j' = (1 − γ)p_j, for j ≠ i

• If s ∉ G_i:
  p_i' = (1 − γ)p_i
  p_j' = γ/(N − 1) + (1 − γ)p_j, for j ≠ i

Comment: The algorithm is the Linear reward-penalty (LR-P) scheme (Bush and Mosteller, 1958), one of the earliest and most extensively studied stochastic algorithms in the psychology of learning. It is real-time and on-line, and thus reflects the rather limited computational capacity of the child language learner, by avoiding sophisticated data processing and the need for a large memory to store previously seen examples. Many variants and generalizations of this scheme are studied in Atkinson et al. (1965), and their thorough mathematical treatments can be found in Narendra and Thathachar (1989).

The algorithm operates in a selectionist manner: grammars that succeed in analyzing input sentences are rewarded, and those that fail are punished. In addition to the psychological evidence for such a scheme in animal and human learning, there is neurological evidence (Hubel and Wiesel, 1962; Changeux, 1983; Edelman, 1987; inter alia) that the development of neural substrate is guided by the exposure to specific stimulus in the environment in a Darwinian selectionist fashion.
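A direct transcription of the update rule (a sketch in our own notation; the function name and the choice of learning rate γ = 0.01 are ours, not the paper's):

```python
def lrp_update(p, i, analyzed, gamma=0.01):
    """One Linear reward-penalty step over the grammar weights p
    (a list summing to 1).  Grammar i was probabilistically selected;
    `analyzed` says whether it could analyze the input sentence."""
    n = len(p)
    q = list(p)
    if analyzed:                         # reward G_i, decay the others
        q[i] = p[i] + gamma * (1 - p[i])
        for j in range(n):
            if j != i:
                q[j] = (1 - gamma) * p[j]
    else:                                # punish G_i, redistribute its loss
        q[i] = (1 - gamma) * p[i]
        for j in range(n):
            if j != i:
                q[j] = gamma / (n - 1) + (1 - gamma) * p[j]
    return q
```

In both branches the new weights still sum to 1, so the learner's state remains a probability distribution over the grammar population.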
2 The main result is as follows: Theorem: e2 if I 1-V(cl+c2) l< 1 (1) t_~ooPl_tlim () - C1 "[- C2 Proof sketch: Computing E[pl(t + 1) [ pl(t)] as a function of Pl (t) and taking expectations on both 2Claxk's model and the present one share an important feature: the outcome of acquisition is determined by the dif- ferential compatibilities of individual grammars. The choice of the GA introduces various psychological and linguistic as- sumptions that can not be justified; see Dresher (1999) and Yang (1999). Furthermore, no formal proof of convergence is given. 431 sides give E[pl(t + 1) = [1 - ~'(el -I- c2)]E~Ol(t)] + 3'c2 (2) Solving [2] yields [11. Comment 1: It is easy to see that Pl ~ 1 (and p2 ~ 0) when cl = 0 and c2 > 0; that is, the learner converges to the target grammar T1, which has a penalty probability of 0, by definition, in a mono- lingual environment. Learning is robust. Suppose that there is a small amount of noise in the input, i.e. sentences such as speaker errors which are not compatible with the target grammar. Then cl > 0. If el << c2, convergence to T1 is still ensured by [1]. Consider a non-uniform linguistic environment in which the linguistic evidence does not unambigu- ously identify any single grammar; an example of this is a population in contact with two languages (grammars), say, T1 and T2. Since Cl > 0 and c2 > 0, [1] entails that pl and P2 reach a stable equilibrium at the end of language acquisition; that is, language learners are essentially bi-lingual speakers as a result of language contact. Kroch (1989) and his colleagues have argued convincingly that this is what happened in many cases of diachronic change. In Yang (1999), we have been able to extend the acquisition model to a population of learners, and formalize Kroch's idea of grammar competition over time. Comment 2: In the present model, one can di- rectly measure the rate of change in the weight of the target grammar, and compare with developmental findings. Suppose T1 is the target grammar, hence cl = 0. The expected increase of Pl, APl is com- puted as follows: E[Apl] = c2PlP2 (3) Since P2 = 1 - pl, APl [3] is obviously a quadratic function of pl(t). Hence, the growth of Pl will pro- duce the familiar S-shape curve familiar in the psy- chology of learning. There is evidence for an S-shape pattern in child language development (Clahsen, 1986; Wijnen, 1999; inter alia), which, if true, sug- gests that a selectionist learning algorithm adopted here might indeed be what the child learner employs. 2.5 Unambiguous Evidence is Unnecessary One way to ensure convergence is to assume the ex- istence of unambiguous evidence (cf. Fodor, 1998): sentences that are only compatible with the target grammar but not with any other grammar. Unam- biguous evidence is, however, not necessary for the proposed model to converge. It follows from the the- orem [1] that even if no evidence can unambiguously identify the target grammar from its competitors, it is still possible to ensure convergence as long as all competing grammars fail on some proportion of in- put sentences; i.e. they all have positive penalty probabilities. Consider the acquisition of the target, a German V2 grammar, in a population of grammars below: 1. German: SVO, OVS, XVSO 2. English: SVO, XSVO 3. Irish: VSO, XVSO 4. Hixkaryana: OVS, XOVS We have used X to denote non-argument categories such as adverbs, adjuncts, etc., which can quite freely appear in sentence-initial positions. 
2.5 Unambiguous Evidence is Unnecessary

One way to ensure convergence is to assume the existence of unambiguous evidence (cf. Fodor, 1998): sentences that are only compatible with the target grammar but not with any other grammar. Unambiguous evidence is, however, not necessary for the proposed model to converge. It follows from the theorem [1] that even if no evidence can unambiguously identify the target grammar from its competitors, it is still possible to ensure convergence as long as all competing grammars fail on some proportion of input sentences; i.e. they all have positive penalty probabilities. Consider the acquisition of the target, a German V2 grammar, in a population of grammars below:

1. German: SVO, OVS, XVSO
2. English: SVO, XSVO
3. Irish: VSO, XVSO
4. Hixkaryana: OVS, XOVS

We have used X to denote non-argument categories such as adverbs, adjuncts, etc., which can quite freely appear in sentence-initial positions. Note that none of the patterns in (1) could conclusively distinguish German from the other three grammars. Thus, no unambiguous evidence appears to exist. However, if SVO, OVS, and XVSO patterns appear in the input data at positive frequencies, the German grammar has a higher overall "fitness value" than other grammars by the virtue of being compatible with all input sentences. As a result, German will eventually eliminate competing grammars.

2.6 Learning in a Parametric Space

Suppose that natural language grammars vary in a parametric space, as cross-linguistic studies suggest (although different theories of grammar, e.g. GB, HPSG, LFG, TAG, have different ways of instantiating this idea). We can then study the dynamical behaviors of grammar classes that are defined in these parametric dimensions. Following (Clark, 1992), we say that a sentence s expresses a parameter α if a grammar must have set α to some definite value in order to assign a well-formed representation to s. Convergence to the target value of α can be ensured by the existence of evidence (s) defined in the sense of parameter expression. The convergence to a single grammar can then be viewed as the intersection of parametric grammar classes, converging in parallel to the target values of their respective parameters.

3 Some Developmental Predictions

The present model makes two predictions that cannot be made in the standard transformational theories of acquisition:

1. As the target gradually rises to dominance, the child entertains a number of co-existing grammars. This will be reflected in distributional patterns of child language, under the null hypothesis that the grammatical knowledge (in our model, the population of grammars and their respective weights) used in production is that used in analyzing linguistic evidence. For grammatical phenomena that are acquired relatively late, child language consists of the output of more than one grammar.

2. Other things being equal, the rate of development is determined by the penalty probabilities of competing grammars relative to the input data in the linguistic environment [3].

In this paper, we present longitudinal evidence concerning the prediction in (2). (In Yang (1999), we show that a child learner, en route to her target grammar, entertains multiple grammars; for example, a significant portion of English child language shows characteristics of a topic-drop optional subject grammar like Chinese, before children learn that subject use in English is obligatory, at around the 3rd birthday.) To evaluate developmental predictions, we must estimate the penalty probabilities of the competing grammars in a particular linguistic environment. Here we examine the developmental rate of French verb placement, an early acquisition (Pierce, 1992), that of English subject use, a late acquisition (Valian, 1991), and that of the Dutch V2 parameter, also a late acquisition (Haegeman, 1994).

Using the idea of parameter expression (section 2.6), we estimate the frequency of sentences that unambiguously identify the target value of a parameter. For example, sentences that contain finite verbs preceding adverb or negation ("Jean voit souvent/pas Marie") are an unambiguous indication for the [+] value of the verb-raising parameter. A grammar with the [-] value for this parameter is incompatible with such sentences and, if probabilistically selected by the learner for grammatical analysis, will be punished as a result. Based on the CHILDES corpus, we estimate that such sentences constitute 8% of all French adult utterances to children. This suggests that unambiguous evidence at 8% of all input data is sufficient for a very early acquisition: in this case, the target value of the verb-raising parameter is correctly set.
We therefore have a direct explanation of Brown's (1973) observation that in the acquisition of fixed word order languages such as English, word order errors are "trifingly few". For example, English children are never seen to produce word order variations other than SVO, the target grammar, nor do they fail to front Wh-words in question formation. Virtually all English sentences display rigid word order, e.g. the verb almost always (immediately) precedes the object, which gives a very high rate of unambiguous evidence (perhaps close to 100%, far greater than the 8% which is sufficient for a very early acquisition, as in the case of French verb raising), sufficient to drive out other word order grammars very early on.

Consider then the acquisition of the subject parameter in English, which requires a sentential subject. Languages like Italian, Spanish, and Chinese, on the other hand, have the option of dropping the subject. Therefore, sentences with an overt subject are not necessarily useful in distinguishing English from optional subject languages. (Notice that this presupposes the child's prior knowledge of and access to both obligatory and optional subject grammars.) However, there exists a certain type of English sentence that is indicative (Hyams, 1986):

There is a man in the room.
Are there toys on the floor?

The subject of these sentences is "there", a non-referential lexical item that is present for purely structural reasons - to satisfy the requirement in English that the pre-verbal subject position must be filled. Optional subject languages do not have this requirement, and do not have expletive-subject sentences. Expletive sentences therefore express the [+] value of the subject parameter. Based on the CHILDES corpus, we estimate that expletive sentences constitute 1% of all English adult utterances to children.

Note that before the learner eliminates optional subject grammars on the cumulative basis of expletive sentences, she has probabilistic access to multiple grammars. This is fundamentally different from stochastic grammar models, in which the learner has probabilistic access to generative rules. A stochastic grammar is not a developmentally adequate model of language acquisition. As discussed in section 1.1, more than 90% of English sentences contain a subject: a stochastic grammar model will overwhelmingly bias toward the rule that generates a subject. English children, however, go through a long period of subject drop. In the present model, child subject drop is interpreted as the presence of the true optional subject grammar, in co-existence with the obligatory subject grammar.

Lastly, we consider the setting of the Dutch V2 parameter. As noted in section 2.5, there appears to be no unambiguous evidence for the [+] value of the V2 parameter: SVO, VSO, and OVS grammars, members of the [-V2] class, are each compatible with certain proportions of expressions produced by the target V2 grammar. However, observe that despite its compatibility with some input patterns, an OVS grammar cannot survive long in the population of competing grammars. This is because an OVS grammar has an extremely high penalty probability.
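The relevant penalty probabilities can be read off pattern frequencies directly. The sketch below is our own illustration, using the pattern sets from the grammar list in section 2.5 and the CHILDES-based Dutch frequencies reported in the next paragraph:

```python
def penalty_probabilities(grammars, pattern_freqs):
    """Penalty probability of each grammar: the chance that a random
    input pattern is one the grammar cannot analyze.  `grammars` maps a
    grammar name to the set of patterns it licenses; `pattern_freqs`
    maps patterns to relative frequencies in the input."""
    total = sum(pattern_freqs.values())
    return {name: sum(f for pat, f in pattern_freqs.items() if pat not in ok) / total
            for name, ok in grammars.items()}

dutch_input = {"SVO": 0.65, "XVSO": 0.34, "OVS": 0.013}
grammars = {
    "V2 (German/Dutch)": {"SVO", "OVS", "XVSO"},
    "SVO (English)":     {"SVO", "XSVO"},
    "VSO (Irish)":       {"VSO", "XVSO"},
    "OVS (Hixkaryana)":  {"OVS", "XOVS"},
}
print(penalty_probabilities(grammars, dutch_input))
# The V2 grammar is never penalized; the OVS grammar fails on almost
# everything; SVO and VSO survive longer but are eventually punished
# by the OVS patterns that only the V2 grammar licenses.
```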
Examination of CHILDES shows that OVS patterns consist of only 1.3% of all input sentences to children, whereas SVO patterns constitute about 65% of all utterances, and XVSO, about 34%. Therefore, only the SVO and VSO grammars, members of the [-V2] class, are "contenders" alongside the (target) V2 grammar, by virtue of being compatible with significant portions of input data. But notice that OVS patterns do penalize both SVO and VSO grammars, and are only compatible with the [+V2] grammars.

In the selectionist model, the rarity of OVS sentences predicts that the acquisition of the V2 parameter in Dutch is a relatively late phenomenon. Furthermore, because the frequency (1.3%) of Dutch OVS sentences is comparable to the frequency (1%) of English expletive sentences, we expect that the Dutch V2 grammar is successfully acquired roughly at the same time when English children have adult-level subject use (around age 3; Valian, 1991). Although I am not aware of any report on the timing of the correct setting of the Dutch V2 parameter, there is evidence in the acquisition of German, a similar language, that children are considered to have successfully acquired V2 by the 36-39th month (Clahsen, 1986). Under the model developed here, this is not a coincidence.

4 Conclusion

To recapitulate, this paper first argues that considerations of language development must be taken seriously to evaluate computational models of language acquisition. Once we do so, both statistical learning approaches and traditional UG-based learnability studies are empirically inadequate. We proposed an alternative model which views language acquisition as a selectionist process in which grammars form a population and compete to match linguistic expressions present in the environment. The course and outcome of acquisition are determined by the relative compatibilities of the grammars with input data; such compatibilities, expressed in penalty probabilities and unambiguous evidence, are quantifiable and empirically testable, allowing us to make direct predictions about language development.

The biologically endowed linguistic knowledge enables the learner to go beyond unanalyzed distributional properties of the input data. We argued in section 1.1 that it is a mistake to model language acquisition as directly learning the probabilistic distribution of the linguistic data. Rather, language acquisition is guided by particular input evidence that serves to disambiguate the target grammar from the competing grammars. The ability to use such evidence for grammar selection is based on the learner's linguistic knowledge. Once such knowledge is assumed, the actual process of language acquisition is no more remarkable than generic psychological models of learning. The selectionist theory, if correct, shows an example of the interaction between domain-specific knowledge and domain-neutral mechanisms, which combine to explain properties of language and cognition.

References

Atkinson, R., G. Bower, and E. Crothers. (1965). An Introduction to Mathematical Learning Theory. New York: Wiley.

Bates, E. and J. Elman. (1996). Learning rediscovered: A perspective on Saffran, Aslin, and Newport.
Science 274: 5294.

Berwick, R. (1985). The Acquisition of Syntactic Knowledge. Cambridge, MA: MIT Press.

Brill, E. (1993). Automatic grammar induction and parsing free text: a transformation-based approach. ACL Annual Meeting.

Brown, R. (1973). A First Language. Cambridge, MA: Harvard University Press.

Bush, R. and F. Mosteller (1958). Stochastic Models for Learning. New York: Wiley.

Charniak, E. (1995). Statistical Language Learning. Cambridge, MA: MIT Press.

Chomsky, N. (1975). Reflections on Language. New York: Pantheon.

Changeux, J.-P. (1983). L'Homme Neuronal. Paris: Fayard.

Clahsen, H. (1986). Verbal inflections in German child language: Acquisition of agreement markings and the functions they encode. Linguistics 24: 79-121.

Clark, R. (1992). The selection of syntactic knowledge. Language Acquisition 2: 83-149.

Crain, S. and M. Nakayama (1987). Structure dependency in grammar formation. Language 63: 522-543.

Dresher, E. (1999). Charting the learning path: cues to parameter setting. Linguistic Inquiry 30: 27-67.

Edelman, G. (1987). Neural Darwinism: The Theory of Neuronal Group Selection. New York: Basic Books.

Fodor, J. D. (1998). Unambiguous triggers. Linguistic Inquiry 29: 1-36.

Gibson, E. and K. Wexler (1994). Triggers. Linguistic Inquiry 25: 355-407.

Haegeman, L. (1994). Root infinitives, clitics, and truncated structures. Language Acquisition.

Hubel, D. and T. Wiesel (1962). Receptive fields, binocular interaction and functional architecture in the cat's visual cortex. Journal of Physiology 160: 106-54.

Hyams, N. (1986). Language Acquisition and the Theory of Parameters. Dordrecht: Reidel.

Klavans, J. and P. Resnik (eds.) (1996). The Balancing Act. Cambridge, MA: MIT Press.

Kroch, A. (1989). Reflexes of grammar in patterns of language change. Language Variation and Change 1: 199-244.

Lewontin, R. (1983). The organism as the subject and object of evolution. Scientia 118: 65-82.

de Marcken, C. (1996). Unsupervised language acquisition. Ph.D. dissertation, MIT.

MacWhinney, B. and C. Snow (1985). The Child Language Data Exchange System. Journal of Child Language 12: 271-296.

Narendra, K. and M. Thathachar (1989). Learning Automata. Englewood Cliffs, NJ: Prentice Hall.

Niyogi, P. and R. Berwick (1996). A language learning model for finite parameter space. Cognition 61: 162-193.

Pierce, A. (1992). Language Acquisition and Syntactic Theory: A Comparative Analysis of French and English Child Grammar. Boston: Kluwer.

Pinker, S. (1979). Formal models of language learning. Cognition 7: 217-283.

Pinker, S. (1984). Language Learnability and Language Development. Cambridge, MA: Harvard University Press.

Seidenberg, M. (1997). Language acquisition and use: Learning and applying probabilistic constraints. Science 275: 1599-1604.

Stolcke, A. (1994). Bayesian Learning of Probabilistic Language Models. Ph.D. thesis, University of California at Berkeley, Berkeley, CA.

Valian, V. (1991). Syntactic subjects in the early speech of American and Italian children. Cognition 40: 21-82.

Wexler, K. (1994). Optional infinitives, head movement, and the economy of derivation in child language. In Lightfoot, D. and N. Hornstein (eds.), Verb Movement. Cambridge: Cambridge University Press.

Wexler, K. and P. Culicover (1980). Formal Principles of Language Acquisition. Cambridge, MA: MIT Press.

Wijnen, F. (1999). Verb placement in Dutch child language: A longitudinal analysis. Ms., University of Utrecht.

Yang, C. (1999).
The variational dynamics of natural language: Acquisition and use. Technical report, MIT AI Lab.
The grapho-phonological system of written French: Statistical analysis and empirical validation

Marielle Lange
Laboratory of Experimental Psychology, Université Libre de Bruxelles
Av. F.D. Roosevelt 50, B-1050 Bruxelles, Belgium
[email protected]

Alain Content
Laboratory of Experimental Psychology, Université Libre de Bruxelles
Av. F.D. Roosevelt 50, B-1050 Bruxelles, Belgium
[email protected]

Abstract

The processes through which readers evoke mental representations of phonological forms from print constitute a hotly debated and controversial issue in current psycholinguistics. In this paper we present a computational analysis of the grapho-phonological system of written French, and an empirical validation of some of the obtained descriptive statistics. The results provide direct evidence demonstrating that both grapheme frequency and grapheme entropy influence performance on pseudoword naming. We discuss the implications of those findings for current models of phonological coding in visual word recognition.

Introduction

One central characteristic of alphabetic writing systems is the existence of a direct mapping between letters or letter groups and phonemes. In most languages, although to a varying extent, the mapping from print to sound can be characterized as quasi-systematic (Plaut, McClelland, Seidenberg, & Patterson, 1996; Chater & Christiansen, 1998). Thus, descriptively, in addition to a large body of regularities (e.g. the grapheme CH in French regularly maps onto /ʃ/), one generally observes isolated deviations (e.g. CH in CHAOS maps onto /k/) as well as ambiguities. In some cases but not always, these difficulties can be alleviated by considering higher-order regularities such as local orthographic environment (e.g., C maps onto /k/ or /s/ as a function of the following letter), phonotactic and phonological constraints, as well as morphological properties (cf. PH in PHASE vs. SHEPHERD). One additional difficulty stems from the fact that the graphemes, the orthographic counterparts of phonemes, can consist either of single letters or of letter groups, as the previous examples illustrate.

Psycholinguistic theories of visual word recognition have taken the quasi-systematicity of writing into account in two opposite ways. In one framework, generally known as dual-route theories (e.g. Coltheart, 1978; Coltheart, Curtis, Atkins, & Haller, 1993), it is assumed that dominant mapping regularities are abstracted to derive a tabulation of grapheme-phoneme correspondence rules, which may then be looked up to derive a pronunciation for any letter string. Because the rule table only captures the dominant regularities, it needs to be complemented by lexical knowledge to handle deviations and ambiguities (i.e., CHAOS, SHEPHERD). The opposite view, based on the parallel distributed processing framework, assumes that the whole set of grapho-phonological regularities is captured through differentially weighted associations between letter coding and phoneme coding units of varying sizes (Seidenberg & McClelland, 1989; Plaut, Seidenberg, McClelland & Patterson, 1996).

These opposing theories have nourished an ongoing complex empirical debate for a number of years. This controversy constitutes one instance of a more general issue in cognitive science, which bears upon the proper explanation of rule-like behavior. Is the language user's capacity to exploit print-sound regularities, for instance to generate a plausible pronunciation for a new, unfamiliar string of letters, best explained by knowledge of abstract all-or-none rules, or of the statistical structure of the language? We believe that, in the field of visual word processing, the lack of precise quantitative descriptions of the mapping system is one factor that has impeded resolution of these issues.

In this paper, we present a descriptive analysis of the grapheme-phoneme mapping system of the French orthography, and we further explore the sensitivity of adult human readers to some characteristics of this mapping. The results indicate that human naming performance is influenced by the frequency of graphemic units in the language and by the predictability of their mapping to phonemes. We argue that these results implicate the availability of graded knowledge of grapheme-phoneme mappings and hence, that they are more consistent with a parallel distributed approach than with the abstract rules hypothesis.

1. Statistical analysis of grapho-phonological correspondences of French

1.1. Method

Tables of grapheme-phoneme associations (henceforth, GPA) were derived from a corpus of 18,510 French one-to-three-syllable words from the BRULEX Database (Content, Mousty, & Radeau, 1990), which contains orthographic and phonological forms as well as word frequency statistics. As noted above, given that graphemes may consist of several letters, the segmentation of letter strings into graphemic units is a non-trivial operation. A semi-automatic procedure similar to the rule-learning algorithm developed by Coltheart et al. (1993) was used to parse words into graphemes. First, grapheme-phoneme associations are tabulated for all trivial cases, that is, words which have exactly the same number of graphemes and phonemes (i.e. PAR, /paR/). Then a segmentation algorithm is applied to the remaining unparsed words in successive passes. The aim is to select words for which the addition of a single new GPA would resolve the parsing. After each pass, the new hypothesized associations are manually checked before inclusion in the GPA table.

The segmentation algorithm proceeds as follows. Each unparsed word in the corpus is scanned from left to right, starting with larger letter groups, in order to find a parsing based on tabulated GPAs which satisfies the phonology. If this fails, a new GPA will be hypothesized if there is only one unassigned letter group and one unassigned phoneme and their positions match. For instance, the single-letter grapheme-phoneme associations tabulated at the initial stage would be used to mark the P-/p/ and R-/R/ correspondences in the word POUR (/puR/) and isolate OU-/u/ as a new plausible association.
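The core of the left-to-right, longest-first parsing step can be sketched as follows. This is our reconstruction of the procedure described above, not the authors' code; it ignores silent letters and positional constraints, which a full implementation would need:

```python
def parse_word(spelling, phonemes, gpa, max_len=4):
    """Segment `spelling` into tabulated graphemes whose phonemes spell
    out `phonemes`, scanning left to right and trying larger letter
    groups first.  `gpa` maps grapheme -> set of attested phonemes.
    Returns the grapheme list, or None if no parse exists."""
    if not spelling:
        return [] if not phonemes else None
    for size in range(min(max_len, len(spelling)), 0, -1):
        g = spelling[:size]
        if g in gpa and phonemes and phonemes[0] in gpa[g]:
            rest = parse_word(spelling[size:], phonemes[1:], gpa, max_len)
            if rest is not None:
                return [g] + rest
    return None

gpa = {"p": {"p"}, "ou": {"u"}, "r": {"R"}, "o": {"o"}, "u": {"y"}}
print(parse_word("pour", ["p", "u", "R"], gpa))   # ['p', 'ou', 'r']
```

A word left unparsed with exactly one unmatched letter group and one unmatched phoneme would then yield a new candidate GPA for manual checking, as in the OU-/u/ example.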
Is the language user's capacity to exploit print-sound regularities, for instance to generate a plausible pronunciation for a new, unfamiliar string of letters, best explained by knowledge of abstract all-or-none rules, or of the 436 statistical structure of the language? We believe that, in the field of visual word processing, the lack of precise quantitative descriptions of the mapping system is one factor that has impeded resolution of these issues. In this paper, we present a descriptive analysis of the grapheme-phoneme mapping system of the French orthography, and we further explore the sensitivity of adult human readers to some characteristics of this mapping. The results indicate that human naming performance is influenced by the frequency of graphemic units in the language and by the predictability of their mapping to phonemes. We argue that these results implicate the availability of graded knowledge of grapheme-phoneme mappings and hence, that they are more consistent with a parallel distributed approach than with the abstract rules hypothesis. . Statistical analysis of grapho- phonological correspondences of French 1.1. Method Tables of grapheme-phoneme associations (henceforth, GPA) were derived from a corpus of 18.510 French one-to-three-syllable words from the BRULEX Database (Content, Mousty, & Radeau, 1990), which contains orthographic and phonological forms as well as word frequency statistics. As noted above, given that graphemes may consist of several letters, the segmentation of letter strings into graphemic units is a non-trivial operation. A semi-automatic procedure similar to the rule-learning algorithm developed by Coltheart et al. (1993) was used to parse words into graphemes. First, grapheme-phoneme associations are tabulated for all trivial cases, that is, words which have exactly the same number of graphemes and phonemes (i.e. PAR,/paR/). Then a segmentation algorithm is applied to the remaining unparsed words in successive passes. The aim is to select words for which the addition of a single new GPA would resolve the parsing. After each pass, the new hypothesized associations are manually checked before inclusion in the GPA table. The segmentation algorithm proceeds as follows. Each unparsed word in the corpus is scanned from left to right, starting with larger letter groups, in order to find a parsing based on tabulated GPAs which satisfies the phonology. If this fails, a new GPA will be hypothesized if there is only one unassigned letter group and one unassigned phoneme and their positions match. For instance, the single-letter grapheme-phoneme associations tabulated at the initial stage would be used to mark the P-/p/and R-/R/correspondences in the word POUR (/puRl) and isolate OU-/u/as a new plausible association. When all words were parsed into graphemes, a 80 70 60 50 40 30 20 10 0 Grapheme-Phoneme Association Probability Figure 1. Distribution of Grapheme-Phoneme Association probablity, based on type measures. 70 Grapheme Entropy (H) 60 ! Most unpredictable graphemes 50 ! (H • .90) Vowels: e, oe, u, ay, eu, 'i 40 Consonants: x, s, t, g, II, c 3o 20 10 o o ~ d o d o d o o d . . . . . . . . Figure2. Dis~ibutionof~aphemeEn~y(H) values, b~on~eme~rcs. 
437 Predictibility of Grapheme-Phoneme Associations in French GPA probability GPA probability H (type) H (token) (type) (token) Numberof pmnunci=ions M SD M SD M SD M SD M SD All 1.70 (1.26) .60 (.42) .60 (.43) .27 (.45) .23 (.42) Vowels 1.66 (1.12) .60 (.41) .60 (.44) .29 (.48) .21 (.41) Consonants 1.76 (1.23) .60 (.42) .60 (.42) .25 (.42) .26 (.44) Table I. Number of different pronunciations of a grapheme, grapheme-phoneme association (GPA) probability, and entropy (H) values, by type and by token, for French polysyllabic words. final pass through the whole corpus computed grapheme-phoneme association frequencies, based both on a type count (the number of words containing a given GPA) and a token count (the number of words weighted by word frequency). Several statistics were then extracted to provide a quantitative description of the grapheme-phoneme system of French. (1) Grapheme frequency, the number of occurrences of the grapheme in the corpus, independently of its phonological value. (2) Number of alternative pronunciations for each grapheme. (3) Grapheme entropy as measured by H, the information statistic proposed by Shannon (1948) and previously used by Treiman, Mullennix, Bijeljac-Babic, & Richmond-Welty (1995). This measure is based on the probability distribution of the phoneme set for a given grapheme and reflects the degree of predictability of its pronunciation. H is minimal and equals 0 when a grapheme is invariably associated to one phoneme (as for J and/3/)- H is maximal and equals logs n when there is total uncertainty. In this particular case, n would correspond to the total number of phonemes in the language (thus, since there are 46 phonemes, max H = 5.52). (4) Grapheme-phoneme association probability, which is the GPA frequency divided by the total grapheme frequency. (5) Association dominance rank, which is the rank of a given grapheme- phoneme association among the phonemic alternatives for a grapheme, ordered by decreasing probability. 1.2. Results Despite its well-known complexity and ambiguity in the transcoding from sound to spelling, the French orthography is generally claimed to be very systematic in the reverse conversion of spelling to sound. The latter claim is confirmed by the present analysis. The grapheme-phoneme associations system of French is globally quite predictable. The GPA table includes 103 graphemes and 172 associations, and the mean association probability is relatively high (i.e., 0.60). Furthermore, a look at the distribution of grapheme-phoneme association probabilities (Figure 1) reveals that more than 40% of the associations are completely regular and unambiguous. When multiple pronunciations exist (on average, 1.70 pronunciations for a grapheme), the alternative pronunciations are generally characterized by low GPA probability values (i.e., below 0.15). The predictability of GPAs is confirmed by a very low mean entropy value. The mean entropy value for all graphemes is 0.27. As a comparison point, if each grapheme in the set was associated with two phonemes with probabilities of 0.95 and 0.05, the mean H value would be 0.29. There is no notable difference between vowel and consonant predictability. Finally, it is worth noting that in general, the descriptive statistics are similar for type and token counts. 2. Empirical study: Grapheme frequency and grapheme entropy To assess readers' sensitivity to grapheme frequency and grapheme entropy we collected naming latencies for pseudowords contrasted on those two dimensions. 
438 I I • II I • Grapheme Frequency Grapheme Entropy Low High Low High Latencies Immediate Naming 609 (75) 585 (66) 596 (72) 644 (93) Delayed Naming 335 (42) 342 (53) 333 (51) 360 (54) Delta Scores 274 (94) 243 (84) 263 (94) 284 (105) Errors Immediate Naming 8.1 (7.0) 8.9 (5.8) 9.2 (4.7) 14.2 (7.3) Dela~ced Namin~ 2.7 ~3.41 3.9 ~5.7) 2.5 ~2.4 / 8.0 ~6.3 / Table 2. Average reaction times and errors for the grapheme frequency and grapheme entropy (uncertainty) manipulations (standard deviations are indicated into parentheses) in the immediate and delayed naming tasks. 2.1. Method Participants. Twenty French-speaking students from the Free University of Brussels took part in the experiment for course credits. All had normal or corrected to normal vision. Materials. Two lists of 64 pseudowords were constructed. The first list contrasted grapheme frequency and the second manipulated grapheme entropy. The grapheme frequency and grapheme entropy estimates for pseudowords were computed by averaging respectively grapheme frequency or grapheme entropy across all graphemes in the letter string. Low and high values items were selected among the lowest 30% and highest 30% values in a database of about 15.000 pseudowords constructed by combining phonotactically legal consonant and vocalic clusters. The frequency list comprised 32 pairs of items. In each pair, one pseudoword had a high averaged grapheme frequency, and the other had a low averaged grapheme frequency, with entropy kept constant. Similarly, the entropy list included 32 pairs of pseudowords with contrasting average values of entropy and close values of average grapheme frequency. In addition, stimuli in a matched pair were controlled for a number of orthographic properties known to influence naming latency (number of letters and phonemes; lexical neighborhood size; number of body friends; positional and non positional bigram frequency; grapheme segmentation probability; grapheme complexity). Procedure. Participants were tested individually in a computerized situation (PC and MEL experimentation software). They were successively tested in a immediate naming and a delayed naming task with the same stimuli. In the immediate naming condition, participants were instructed to read aloud pseudowords as quickly and as accurately as possible, and we recorded response times and errors. In the delayed naming task, the same stimuli were presented in a different random order, but participants were required to delay their overt response until a response signal appeared on screen. The delay varied randomly from trial to trial between 1200 and 1500 msec. Since participants are instructed to fully prepare their response for overt pronunciation during the delay period, the delayed naming procedure is meant to provide an estimate of potential artefactual differences between stimulus sets due to articulatory factors and to differential sensitivity of the microphone to various onset phonemes. Pseudowords were presented in a random order, different for each participant, with a pause after blocks of 32 stimuli. They were displayed in lower case, in white on a black background. In the immediate naming task, each trial began with a fixation sign (*) presented at the center of the screen for 300 msec. It was followed by a black screen for 200 msee and then a pseudoword which stayed on the screen until the vocal response triggered the microphone or for a maximum delay of 2000 msec. An interstimulus screen was finally presented for 1000 msee. 
In the delayed naming task, the fixation point and the black screen were followed by a pseudoword presented for 1500 msec, followed by a random delay between 1300 and 1500 msec. After this variable delay, a go signal (####) was displayed in the center of the screen until a vocal response triggered the microphone or for a maximum duration of 2000 msec. Pronunciation errors, hesitations, and triggering of the microphone by extraneous noises were noted by hand by the experimenter during the experiment.

2.2. Results

Data associated with inappropriate triggering of the microphone were discarded from the error analyses. In addition, for the response time analyses, pronunciation errors, hesitations, and anticipations in the delayed naming task were eliminated. Latencies outside an interval of two standard deviations above and below the mean by subject and condition were replaced by the corresponding mean. Average reaction times and error rates were then computed by subjects and by items in both the immediate naming and the delayed naming tasks. By-subjects and by-items analyses of variance (F1 and F2, respectively) were performed with grapheme frequency and grapheme entropy as within-subject factors.

                        Grapheme Frequency        Grapheme Entropy
                        Low          High         Low          High
Latencies
  Immediate Naming      609 (75)     585 (66)     596 (72)     644 (93)
  Delayed Naming        335 (42)     342 (53)     333 (51)     360 (54)
  Delta Scores          274 (94)     243 (84)     263 (94)     284 (105)
Errors
  Immediate Naming      8.1 (7.0)    8.9 (5.8)    9.2 (4.7)    14.2 (7.3)
  Delayed Naming        2.7 (3.4)    3.9 (5.7)    2.5 (2.4)    8.0 (6.3)

Table 2. Average reaction times and errors for the grapheme frequency and grapheme entropy (uncertainty) manipulations (standard deviations in parentheses) in the immediate and delayed naming tasks.

Grapheme frequency. For naming latencies, pseudowords of low grapheme frequency were read 24 msec more slowly than pseudowords of high grapheme frequency. This difference was highly significant both by subjects and by items; F1(1, 19) = 24.4, p < .001, F2(1, 31) = 7.5, p < .001. On delayed naming times, the same comparison gave a nonsignificant difference of -7 msec. For pronunciation errors, there was no significant difference in the immediate naming task. In the delayed naming task, pseudowords of low mean grapheme frequency caused 1.2% more errors than high ones. This difference was marginally significant by items, but not significant by subjects; F2(1, 31) = 3.1, p < .1.

Grapheme entropy. In the immediate naming task, high-entropy pseudowords were read 48 msec more slowly than low-entropy pseudowords; F1(1, 19) = 45.4, p < .001, F2(1, 31) = 16.2, p < .001. In the delayed naming task, the same comparison showed a significant difference of 27 msec; F1(1, 19) = 22.9, p < .001, F2(1, 31) = 12.5, p < .005. Because of this articulatory effect, delta scores were computed by subtracting delayed naming times from immediate naming times. A significant difference of 21 msec was found on delta scores; F1(1, 19) = 5.7, p < .05, F2(1, 31) = 4.7, p < .05. The pattern of results was similar for errors. In the immediate naming task, high-entropy pseudowords caused 5% more errors than low-entropy pseudowords. This effect was significant by subjects but not by items; F1(1, 19) = 7.4, p < .05, F2(1, 31) = 2.1, p > .1. The effect was 6.5% in the delayed naming task and was significant by subjects and items; F1(1, 19) = 17.2, p < .001, F2(1, 31) = 8.3, p < .01.
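The preprocessing just described (outlier replacement and delta scores) can be sketched in a few lines. This is our illustration of the analysis, with invented latencies; the data structures are assumptions, not the authors' actual analysis code.

```python
from statistics import mean, stdev

def trim(latencies):
    """Replace latencies beyond 2 SD of the (subject-by-condition) mean
    with that mean, as in the analysis described above."""
    m, sd = mean(latencies), stdev(latencies)
    return [rt if abs(rt - m) <= 2 * sd else m for rt in latencies]

def delta_score(immediate, delayed):
    """Delta score: immediate minus delayed naming, per condition, to
    factor out articulatory and microphone-onset effects."""
    return mean(trim(immediate)) - mean(trim(delayed))

# One subject in one condition (invented latencies, in msec); the
# 1500 msec outlier is replaced by the condition mean before averaging.
imm = [580, 595, 600, 605, 610, 615, 620, 625, 630, 640, 655, 1500]
dly = [352, 360, 341, 349, 365, 339, 347, 358, 344, 351, 356, 348]
print(delta_score(imm, dly))
```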
2.3. Discussion

Clear effects of the grapheme frequency and grapheme entropy manipulations were obtained on immediate naming latencies. In both manipulations, the stimuli in the contrasted lists were selected pairwise to be as equivalent as possible in terms of potentially important variables. A difference between high- and low-entropy pseudowords was also observed in the delayed naming condition. The latter effect is probably due to phonetic characteristics of the initial consonants in the stimuli. Some evidence confirming this interpretation is adduced from a further control experiment in which participants were required to repeat the same stimuli, presented auditorily, after a variable response delay. The 27 msec difference in the visual delayed naming condition was closely reproduced with auditory stimuli, indicating that the effect in the delayed naming condition is unrelated to print-to-sound conversion processes. Despite this unexpected bias, when the influence of phonetic factors was eliminated by computing the difference between immediate and delayed naming, a significant effect of 21 msec remained, demonstrating that entropy affects grapheme-phoneme conversion.

These findings are incompatible with current implementations of the dual-route theory (Coltheart et al., 1993). The "central dogma" of this theory is that the performance of human subjects on pseudowords is accounted for by an analytic process based on grapheme-phoneme conversion rules. Both findings are at odds with the additional core assumptions that (1) only dominant mappings are retained as conversion rules, and (2) there is no place for ambiguity or predictability in the conversion. In a recent paper, Rastle and Coltheart (1999) note that "One refinement of dual-route modeling that goes beyond DRC in its current form is the idea that different GPC rules might have different strengths, with the strength of the correspondence being a function of, for example, the proportion of words in which the correspondence occurs. Although simple to implement, we have not explored the notion of rule strength in the DRC model because we are not aware of any work which demonstrates that any kind of rule-strength variable has effects on naming latencies when other variables known to affect such latencies such as neighborhood size (e.g., Andrews, 1992) and string length (e.g., Weekes, 1997) are controlled." We believe that the present results provide the evidence that was called for, and they should incite dual-route modelers to abandon the idea of all-or-none rules, which was a central theoretical assumption of these models compared to connectionist ones.

As the DRC model is largely based on interactive activation principles, the most natural way to account for graded effects of grapheme frequency and pronunciation predictability would be to introduce grapheme and phoneme units in the nonlexical system. Variations in the resting activation level of grapheme detectors as a function of frequency of occurrence, and differences in the strength of the connections between graphemes and phonemes as a function of association probability, would then explain the grapheme frequency and grapheme entropy effects. However, an implementation of rule strength in the conversion system of the kind suggested considerably modifies its processing mechanism, notably by replacing the serial table look-up selection of graphemes by a parallel activation process. Such a change is highly likely to induce non-trivial consequences for predicted performance. Furthermore, and contrary to the suggestion that the introduction of rule strength would amount to a mere implementational adaptation of no theoretical importance, we consider that it would impose a substantial restatement of the theory, because it violates the core assumption of the approach, namely, that language users induce all-or-none rules from the language to which they are exposed.
Hence, the cost of such a (potential) improvement in descriptive adequacy is the loss of explanatory value from a psycholinguistic perspective. As Seidenberg stated, "[we are] not claiming that data of the sort presented [here] cannot in principle be accommodated within a dual route type of model. In the absence of any constraints on the introduction of new pathways or recognition processes, models in the dual route framework can always be adapted to fit the empirical data. Although specific proposals might be refuted on the basis of empirical data, the general approach cannot." (Seidenberg, 1985, p. 244).

The difficulty of accounting for the present findings within the dual-route approach contrasts with the straightforward explanation they receive in the PDP framework. As has often been emphasized, rule-strength effects emerge as a natural consequence of learning and processing mechanisms in parallel distributed systems (see Van Orden, Pennington, & Stone, 1990; Plaut et al., 1996). In this framework, rule-governed behavior is explained by the gradual encoding of the statistical structure that governs the mapping between orthography and phonology.

Conclusions

In this paper, we presented a semi-automatic procedure to segment words into graphemes and to tabulate the characteristics of grapheme-phoneme mappings for the French writing system. In current work, the same method has been applied to French and English materials, allowing us to provide more detailed descriptions of the similarities and differences between the two languages. Most previous work in French (e.g., Véronis, 1986) and English (Venezky, 1970) has focused mainly on the extraction of a rule set. One important feature of our endeavor is the extraction of several quantitative graded measures of grapheme-phoneme mappings (see also Berndt, Reggia, & Mitchum, 1987, for similar work in American English). In the empirical investigation, we have shown how the descriptive data can be used to probe human readers' written word processing. The results demonstrate that the descriptive statistics capture some important features of the processing system and thus provide an empirical validation of the approach. Most interestingly, the sensitivity of human processing to the degree of regularity and frequency of grapheme-phoneme associations provides a new argument in favor of models in which knowledge of print-to-sound mapping is based on a large set of graded associations rather than on correspondence rules.

Acknowledgements

This research was supported by a research grant from the Direction Générale de la Recherche Scientifique, Communauté française de Belgique (ARC 96/01-203). Marielle Lange is a research assistant at the Belgian National Fund for Scientific Research (FNRS).

References

Andrews, S. (1992). Frequency and neighborhood effects on lexical access: Lexical similarity or orthographic redundancy? Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 234-254.

Berndt, R. S., Reggia, J. A., & Mitchum, C. C. (1987). Empirically derived probabilities for grapheme-to-phoneme correspondences in English. Behavior Research Methods, Instruments, & Computers, 19, 1-9.

Chater, N., & Christiansen, M. H. (1998). Connectionism and natural language processing. In S. Garrod & M. Pickering (Eds.), Language Processing. London, UK: University College London Press.

Coltheart, M. (1978). Lexical access in simple reading tasks. In G. Underwood (Ed.), Strategies of Information Processing (pp. 151-216). London: Academic Press.
Coltheart, M., Curtis, B., Atkins, P., & Haller, M. (1993). Models of reading aloud: Dual-route and parallel-distributed-processing approaches. Psychological Review, 100, 589-608.

Content, A., Mousty, P., & Radeau, M. (1990). Brulex. Une base de données lexicales informatisée pour le français écrit et parlé [Brulex, a lexical database for written and spoken French]. L'Année Psychologique, 90, 551-566.

Plaut, D. C., McClelland, J. L., Seidenberg, M. S., & Patterson, K. E. (1996). Understanding normal and impaired word reading: Computational principles in quasi-regular domains. Psychological Review, 103, 56-115.

Rastle, K., & Coltheart, M. (1999). Serial and strategic effects in reading aloud. Journal of Experimental Psychology: Human Perception and Performance (April 1999, in press).

Seidenberg, M. S. (1985). The time course of information activation and utilization in visual word recognition. In D. Besner, T. G. Waller, & E. M. MacKinnon (Eds.), Reading Research: Advances in Theory and Practice (Vol. 5, pp. 199-252). New York: Academic Press.

Seidenberg, M. S., & McClelland, J. L. (1989). A distributed, developmental model of word recognition and naming. Psychological Review, 96, 523-568.

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423, 623-656.

Treiman, R., Mullennix, J., Bijeljac-Babic, R., & Richmond-Welty, E. D. (1995). The special role for rimes in the description, use, and acquisition of English orthography. Journal of Experimental Psychology: General, 124, 107-136.

Van Orden, G. C., Pennington, B. F., & Stone, G. O. (1990). Word identification in reading and the promise of subsymbolic psycholinguistics. Psychological Review, 97, 488-522.

Venezky, R. L. (1970). The structure of English orthography. The Hague, The Netherlands: Mouton.

Véronis, J. (1986). Étude quantitative sur le système graphique et phonologique du français [A quantitative study of the graphic and phonological system of French]. Cahiers de Psychologie Cognitive, 6, 501-531.

Weekes, B. (1997). Differential effects of letter number on word and nonword naming latency. Quarterly Journal of Experimental Psychology, 50A, 439-456.
Learning to Recognize Tables in Free Text

Hwee Tou Ng, Chung Yong Lim, Jessica Li Teng Koo
DSO National Laboratories
20 Science Park Drive, Singapore 118230
{nhweetou, lchungyo, kliteng}@dso.org.sg

Abstract

Many real-world texts contain tables. In order to process these texts correctly and extract the information contained within the tables, it is important to identify the presence and structure of tables. In this paper, we present a new approach that learns to recognize tables in free text, including the boundary, rows, and columns of tables. When tested on Wall Street Journal news documents, our learning approach outperforms a deterministic table recognition algorithm that identifies tables based on a fixed set of conditions. Our learning approach is also more flexible and easily adaptable to texts in different domains with different table characteristics.

1 Introduction

Tables are present in many real-world texts. Some information, such as statistical data, is best presented in tabular form. A check on the more than 100,000 Wall Street Journal (WSJ) documents collected in the ACL/DCI CD-ROM reveals that at least an estimated one in 30 documents contains tables.

Tables present a unique challenge to information extraction systems. At the very least, the presence of tables must be detected so that they can be skipped over. Otherwise, processing the lines that constitute tables as if they were normal "sentences" is at best misleading and at worst may lead to erroneous analysis of the text.

As tables contain important data and information, it is critical for an information extraction system to be able to extract the information embodied in tables. This can be accomplished only if the structure of a table, including its rows and columns, is identified.

That table recognition is an important step in information extraction has been recognized in (Appelt and Israel, 1997). Recently, there is also a greater realization within the computational linguistics community that the layout and types of information (such as tables) contained in a document are important considerations in text processing (see the call for participation (Power and Scott, 1999) for the 1999 AAAI Fall Symposium Series). However, despite the omnipresence of tables and their importance, there is surprisingly little work in computational linguistics on algorithms to recognize tables. The only research that we are aware of is the work of (Hurst and Douglas, 1997; Douglas and Hurst, 1996; Douglas et al., 1995). Their method is essentially a deterministic algorithm that relies on spaces and special punctuation symbols to identify the presence and structure of tables.

However, tables are notoriously idiosyncratic. The main difficulty in table recognition is that there are so many different and varied ways in which tables can show up in real-world texts. Any deterministic algorithm based on a fixed set of conditions is bound to fail on tables with unforeseen layout and structure in some domains. In contrast, we present a new approach in this paper that learns to recognize tables in free text. As our approach is adaptive and trainable, it is more flexible and easily adapted to texts in different domains with different table characteristics.

2 Task Definition

The input to our table recognition program consists of plain texts in ASCII characters. Examples of input texts are shown in Figures 1 to 3. They are document fragments that contain tables. Figures 1 and 2 are taken from the Wall Street Journal documents in the ACL/DCI CD-ROM, whereas Figure 3 is taken from the patent documents in the TIPSTER IR Text Research Collection Volume 3 CD-ROM. (The extracted document fragments appear in a slightly edited form in this paper due to space constraints.)

In Figure 1, we added horizontal 2-digit line numbers "Line nn:" and vertical single-digit line numbers "n" for ease of reference to any line in this document. We will use this document to illustrate the details of our learning approach throughout this paper. We refer to a horizontal line as hline and a vertical line as vline in the rest of this paper.

Each input text may contain zero, one, or more tables. A table consists of one or more hlines. For example, in Figure 1, hlines 13-18 constitute a table. Each table is subdivided into columns and rows.
Figure 1 and 2 are taken from the Wall Street Journal documents in the ACL/DCI CD-ROM, whereas Figure 3 is taken from the patent documents in the TIPSTER IR Text Research Collection Volume 3 CD-ROM. 1 In Figure 1, we added horizontal 2-digit line num- bers "Line nn:" and vertical single-digit line num- bers "n" for ease of reference to any line in this doc- ument. We will use this document to illustrate the details of our learning approach throughout this pa- per. We refer to a horizontal line as hline and a vertical line as vline in the rest of this paper. Each input text may contain zerQ, one or more tables. A table consists of one or more hlines. For example, in Figure 1, hlines 13-18 constitute a ta- ble. Ear~ table is subdivided into columns and rows. 1 The extracted document fragments appear in a slightly edited form in this paper due to space constraint. 443 Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line Line 1234567890123456789012345678901234567890123456789012345678901234567890 01: Raw-steel production by the nation's mills increased 4~ last week to 02:1,833,000 tons from 1,570,000 tons the previous week, the American Iron and Steel Institute said. 03: 04: 05: 06: 07: 08: 09: I0: Last week's output fell 9.5~ from the 1,804,000 tons produced a year earlier. The industry used 75.8X of its capability last week, compared with 71.9~ the previous week and 72.3~ a year earlier. 11: The American Iron and Steel Institute reported: 12: 13: Net tons Capability 14: produced utilization 15: Week to March 14 .............. 1,633,000 75.8~ 16: Week to March 7 ............... 1,570,000 71.9~ 17: Year to date .................. 15,029,000 66.9~ 18: Year earlier to date .......... 18,431,000 70.8~ 19: The capability utilization rate is a calculation designed 20:to indicate at what percent of its production capability the 21:industry is operating in a given week. Figure l:Wail Street Journ~ document fragment How Some. Highly Conditional 'Bids' Fared Stock's 'Bid'* Initial Date** Reaction*** Bidder (Target Company) TWAICarl Ic~h- (USAir Group) $52 +5 3/8 to 49 1/8 3/4/87 Outcome Bid, seen a ploy to get USAir to buy TWA, is shelved Monday with USAir at 45 i/4; closed Wed. at 44 1/2 Columbia Ventures (Harnischfeger) $19 +1/2 to 18 1/4 Harnischfeger rejects 2/23/87 bid Feb. 26 with stock at 18 3/8; closed Wed. at 17 5/8 Figure 2: Wail Street Journal document fragment Each column of a table consists of one or more vlines. For example, there are three columns in the table in Figure 1: vlines 4-23, 36-45, and 48-58. Each row of a table consists of one or more hlines. For ex- ample, there are five rows in the table in Figure 1: hlines 13-14, 15, 16, 17, and 18. More specifically, the task of table recognition is to identify the boundaries, columns and rows of ta- bles within an input text. For example, given the in- put text in Figure 1, our table recognition program will identify one table with the following boundary, columns and rows: I. Boundary: Mines 13-18 2. Columns: vlines 4-23, 36--45, and 48-58 3. Rows: hlines 13-14, 15, 16, 17, and 18 Figure 1 to 3 illustrate some of the dh~iculties of table recognition. The table in Figure I uses a string of contiguous punctuation symbols "." instead of blank space characters in between two columns. 
In Figure 2, the rows of the table can contain caption or title information, like "How Some Highly Con- ditionai 'Bids' Fared", or header information like "Stock's Initial Reaction***" and "Outcome", or 444 side walls of the tray to provide even greater protection from convective heat transfer. Preferred construction materials are shown in Table 1: TABLE 1 Component Material Stiffener Paperboard having a thickness of about 6 and 30 mil (between about 6 and 30 point chip board). Insulation Mineral wool, having a density of between 2.5 and 6.0 pounds per cubic foot and a thickness of between 1/4 and 1 and 1/4 inch. Plastic sheets Polyethylene, having a thickness of between 1 and 4 mil; coated with a reflective finish on the exterior surfaces, such as aluminum having a thickness of between 90 and 110 Angstroms applied using a standard technique such as vacuum deposition. The stiffener 96 makes a smaller contribution to the insulation properties of the blanket 92, than does the insulator 98. As stated above, the Figure 3: Patent body content information like "$52" and "+5 3/8 to 49 1/8". Each row containing body content infor- mation consists of several hlines -- information on "Outcome" spans several hlines. In Figure 3, strings of contiguous dashes "-" occur within the table. Fur- thermore, the two columns within the table appear right next to each other -- there are no blank vlines separating the two columns. Worse still, some words from the first column like "Insulation" and "Plastic sheets" spill over to the second column. Notice that there may or may not be any blank lines or delimiters that immediately precede or follow a table within an input text. In this paper, we assume that our input texts are plain texts that do not contain any formatting codes, such as those found in an SGML or HTML docu- ment. There is a large number of documents that fall under the plain text category, and these are the kinds of texts that our approach to table recognition handles. The work of (Hurst and Douglas, 1997; Douglas and Hurst, 1996; Douglas et al., 1995) also deals with plain texts. 3 Approach A table appearing in plain text is essentially a two dimensional entity. Typically, the author of the text uses the <newline> character to separate adjacent hlines and a row is formed from one or more of such hlines. Similarly, blank space characters or some document fragment special punctuation characters are used to delimit the columns. 2 However, the specifics of how exactly this is done can vary widely across texts, as exem- plified by the tables in Figure 1 to 3. Instead of resorting to an ad-hoc method to rec- ognize tables, we present a new approach in this pa- per that learns to recognize tables in plain text. Our learning method uses purely surface features like the proportion of the kinds of characters and their rela- tive locations in a line and across lines to recognize tables. It is domain independent and does not rely on any domain-specific knowledge. We want to in- vestigate how high an accuracy we can achieve based purely on such surface characteristics. The problem of table recognition is broken down into 3 subproblems: recognizing table boundary, col- umn, and row, in that order. Our learning approach treats eac~ subproblem as a separate classification problem and relies on sample training texts in which the table boundaries, columns, and rows have been correctly identified. We built a graphical user inter- face in which such markup by human annotators can be readily done. 
With our X-window based GUI, a typical table can be annotated with its boundary, column, and row demarcation within a minute. From these sample annotated texts, training examples in the form of feature-value vectors with correctly assigned classes are generated. One set of training examples is generated for each subproblem of recognizing table boundary, column, and row. Machine learning algorithms are used to build classifiers from the training examples, one classifier per subproblem. After training is completed, the table recognition program will use the learned classifiers to recognize tables in new, previously unseen input texts.

We now describe in detail the feature extraction process, the learning algorithms, and how tables in new texts are recognized. The following classes of characters are referred to throughout the rest of this section:

• Space character: the character " " (i.e., the character obtained by typing the space bar on the keyboard).
• Alphanumeric character: one of the following characters: "A" to "Z", "a" to "z", and "0" to "9".
• Special character: any character that is not a space character and not an alphanumeric character.
• Separator character: one of the following characters: ".", "*", and "%".

3.1 Feature Extraction

3.1.1 Boundary

Every hline in an input text generates one training example for the subproblem of table boundary recognition. Every hline within (outside) a table generates a positive (negative) example. Each training example consists of a set of 27 feature values. The first nine feature values are derived from the immediately preceding hline H-1, the second nine from the current hline H0, and the last nine from the immediately following hline H1. (For the purpose of generating the feature values for the first and last hline in a text, we assume that the text is padded with a line of blank space characters before the first line and after the last line.) For a given hline H, its nine features and their associated values are given in Table 1.

Feature  Description
F1       Whether H consists of only space characters. Possible values are
         t (if H is a blank line) or f (otherwise).
F2       The number of leading (or initial) space characters in H.
F3       The first non-space character in H. Possible values are one of the
         following special characters: ()[]{}<>+-*/=~!@#$%^& or N (if the
         first non-space character is not one of the above special characters).
F4       The last non-space character in H. Possible values are the same as F3.
F5       Whether H consists entirely of one special character only. Possible
         values are either one of the special characters listed in F3 (if H
         consists only of that special character) or N (otherwise).
F6       The number of segments in H with two or more contiguous space characters.
F7       The number of segments in H with three or more contiguous space characters.
F8       The number of segments in H with two or more contiguous separator characters.
F9       The number of segments in H with three or more contiguous separator characters.

Table 1: Feature values for table boundary

To illustrate, the feature values of the training example generated by line 16 in Figure 1 are:

f, 3, N, %, N, 4, 3, 1, 1, f, 3, N, %, N, 4, 3, 1, 1, f, 3, N, %, N, 3, 3, 1, 1

Line 16 generated the feature values f, 3, N, %, N, 4, 3, 1, 1. Since line 16 does not consist of only space characters, the value of F1 is f. There are three space characters before the word "Week" in line 16, so the value of F2 is 3. Since the first non-space character in line 16 is "W" and it is not one of the listed special characters, the value of F3 is "N". The last non-space character in line 16 is "%", which becomes the value of F4. Since line 16 does not consist of only special characters, the value of F5 is "N". There are four segments in line 16 such that each segment consists of two or more contiguous space characters: a segment of three contiguous space characters before the word "Week"; a segment of two contiguous space characters after the punctuation characters "..." and before the number "1,570,000"; a segment of three contiguous space characters between the two numbers "1,570,000" and "71.9%"; and a last segment of contiguous space characters trailing the number "71.9%". The values of the remaining features of line 16 are similarly determined. Finally, lines 15 and 17 generated the feature values f, 3, N, %, N, 4, 3, 1, 1 and f, 3, N, %, N, 3, 3, 1, 1, respectively.

The features attempt to capture some recurring characteristics of lines that constitute tables. Lines with only space characters or special characters tend to delimit tables or are part of tables. Lines within a table tend to begin with some number of leading space characters. Since columns within a table are separated by contiguous space characters or special characters, we use segments of such contiguous characters as features indicative of the presence of tables.
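The Table 1 feature definitions are mechanical enough to be captured in a few lines of code. The following Python fragment is our own sketch (not the authors' implementation); the special- and separator-character sets follow the lists above, and a full 27-value example would simply concatenate the triple for H-1, H0, and H1.

```python
import re

SPECIALS = set("()[]{}<>+-*/=~!@#$%^&")   # the F3 list above
SEPARATORS = set(".*%")                   # separator characters above

def hline_features(line):
    stripped = line.strip(" ")
    f1 = "t" if not stripped else "f"                 # F1: blank line?
    f2 = len(line) - len(line.lstrip(" "))            # F2: leading spaces
    f3 = stripped[0] if stripped and stripped[0] in SPECIALS else "N"
    f4 = stripped[-1] if stripped and stripped[-1] in SPECIALS else "N"
    if stripped and len(set(stripped)) == 1 and stripped[0] in SPECIALS:
        f5 = stripped[0]                              # F5: one special char only
    else:
        f5 = "N"
    f6 = len(re.findall(" {2,}", line))               # F6: runs of 2+ spaces
    f7 = len(re.findall(" {3,}", line))               # F7: runs of 3+ spaces
    seps = "".join(c if c in SEPARATORS else " " for c in line)
    f8 = len(re.findall(r"\S{2,}", seps))             # F8: runs of 2+ separators
    f9 = len(re.findall(r"\S{3,}", seps))             # F9: runs of 3+ separators
    return [f1, f2, f3, f4, f5, f6, f7, f8, f9]

line16 = "   Week to March 7 ...............  1,570,000   71.9%   "
print(hline_features(line16))  # ['f', 3, 'N', '%', 'N', 4, 3, 1, 1]
```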
The features attempt to capture some recurring characteristics of lines that constitute tables. Lines with only space characters or special characters tend to delimit tables or are part of tables. Lines within a table tend to begin with some number of leading space characters. Since columns within a table are separated by contiguous space characters or special characters, we use segments of such contiguous char- acters as features indicative of the presence of tables. 3.1.2 Column Every vline within a table generates one training ex- ample for the subproblem of table column recogni- tion. Each vline can belong to exactly one of five classes: 1. Outside any column 2. First line of a column 3. Within a column (but neither the first nor last line) 4. Last line of a column 5. First and last line of a column (i.e., the column consists of only one line) Note that it is possible for one column to imme- diately follow another (as is the case in Figure 3). Thus a two-class representation is not adequate here, since there would be no way to distinguish between two adjoining columns versus one contiguous column using only two classes. 4 The start and end of a column in a table is typ- ically characterized by a transition from a vline of 4For the identification of table boundary, we assume in this paper that there is some hline separating any two tables, and so a two-class representation for table boundary suffices. 446 Feature Description F1 F2 F3 Whether H consists of only space characters. Possible values are t (if H is a blank line) or f (otherwise). The number of leading (or initial) space characters in H. The first non-space character in H. Possible values are one of the following special characters: 0[]{}<> +-*/=~!@#$%A& or N (if the first non-space character is not one of the above special characters). F4 The last non-space character in H. Possible values are the same as F3. F5 Whether H consists entirely of one special character only. Possible values are either one of the special characters listed in F3 (if H only consists of that special character) or N (if H does not consist of one special character only). F6 The number of segments in H with two or more contiguous space characters. F7 The number of segments in H with three or more contiguous space characters. F8 The number of segments in H with two or more contiguous separator characters. F9 The number of segments in H with three or more contiguous separator characters. Table 1: Feature values for table boundary space (or special) characters to a vline with mixed al- phanumeric and space characters. That is, the tran- sition of character types across adjacent vlines gives an indication of the demarcation of table columns. Thus, we use character type transition as the fea- tures to identify table columns. Each training example consists of a set of six fea- ture values. The first three feature values are derived from comparing the immediately preceding vline V-z and the current vline V0, while the last three feature values are derived from comparing V0 with the im- mediately following vline Vl.S Let Vj and Vj+ 1 be any two adjacent vlines. Suppose Vj = Clj...ci,j...c~,#, and Vj+I = Czj+l ... cij+l ... cm,j+z where m is the number of hlines that constitute a table. Then the three feature values that are derived from the two vlines Vj and 1~+1 are determined by counting the proportion of two horizontally ad- jacent characters c~,j and cij+l (1 < i < m) that satisfy some condition on the type of the two char- acters. 
The precise conditions on the three features are given in Table 2. To illustrate, the feature values of vline 4 in Fig- ure 1 are: 0.333, 0, 0.667, 0.333, 0, 0 and its class is 2 (first line of a column). In de- riving the feature values, only hlines 13-18, the lines that constitute the table, are considered (i.e., m = 6). For the first three feature values, F1 = 2/6 since there are two space-character-to-space- character transitions from vline 3 to 4 (namely, on hlines 13 and 14); F2 = 0 since there is no al- phanumeric character or special character in vline 5For the purpose of generating the feature values for the first and last vline in a table, we assume that the table is padded with a vline of blank space characters before the first vline and after the last vline. 3; F3 = 4/6, since there are four space-character-to- alphanumeric-character transitions from vline 3 to 4 (namely, on hlines 15-18). Similarly, the last 3 fea- ture values are derived by examining the character transitions from vline 4 to 5. 3.1.3 Row Every hline within a table generates one training ex- ample for the subproblem of table row recognition. Unlike table columns, every hline within a table be- longs to some row in our formulation of the row recognition problem. As such, each hline belongs to exactly one of two classes: 1. First hline of a row 2. Subsequent hline of a row (not the first line) The layout of a typical table is such that its rows tend to record repetitive or similar data or informa- tion. We use this clue in designing the features for table row recognition. Since the information within a row may span multiple hlines, as the "Outcome" information in Figure 2 illustrates, we use the first hline of a row as the basis for comparison across rows. If two hlines are similar, then they belong to two separate rows; otherwise, they belong to the same row. Similarity is measured by character type transitions, as in the case of table column recogni- tion. More specifically, to generate a training example for a hline H, we compare H with H ~, where H ~ is the first hline of the immediately preceding row if H is the first hline of the current row, and H ~ is the first hline of the current row if H is not the first hline of the current row. 6 Each training example consists of a set of four feature values F1,..., F4. F1, F2, and F3 are de- termined by comparing H and H ~ while F4 is de- termined solely from H. Let H = Ci,l ... cid.., ci,n ~H ~ = H for the very first hline within a table. 447 Feature Description F1 F2 cij is a space character and ei,jq_ 1 is a space character; or ci,j is a special character and ci,j+l is a special character cij is an alphanumeric character or a special character, and ci,j+l is a space char- acter F3 ci,j is a space character, and cl,j+l is an alphanumeric character or a special char- acter Table 2: Feature values for table column and H' = Ci',1 . . . Ci',j... Ci',n, where n is the number of vlines of the table. The values of F1,..., F3 are determined by counting the proportion of the pairs of characters ci, j and cl,j (1 _< j < n) that satisfy some condition on the type of the two characters, as listed in Table 3. Let ci,k be the first non-space character in H. Then the value of F4 is kin. To illustrate, the feature values of hline 16 in Fig- ure 1 are: 0.236, 0.018, 0.018, 0.018 and its class is 1 (first line of a row). There are 55 vlines in the table, so n = 55. 
Since hline 16 is the first line of a row, it is compared with hline 15, the first hline of the immediately preceding row, to generate F1, F2, and F3. F1 = 13/55, since there are 13 space-character-to-space-character transitions from hline 15 to 16. F2 = F3 = 1/55, since there is only one alphanumeric-character-to-space-character transition ("4" to space character in vline 19) and one space-character-to-special-character transition (space character to "." in vline 20). The first non-space character is "W" in the first vline within the table, so k = 1.

3.2 Learning Algorithms

We used the C4.5 decision tree induction algorithm (Quinlan, 1993) and the backpropagation algorithm for artificial neural nets (Rumelhart et al., 1986) as the learning algorithms to generate the classifiers. Both algorithms are representative state-of-the-art learning algorithms for symbolic and connectionist learning.

We used all the default learning parameters in the C4.5 package. For backpropagation, the learning parameters are: hidden units = 2, epochs = 1000, learning rate = 0.35, and momentum term = 0.5. We also used log n-bit encoding for the symbolic features and normalized the numeric features to [0 ... 1] for backpropagation.

3.3 Recognizing Tables in New Texts

3.3.1 Boundary

Every hline generates a test example, and a classifier assigns the example as either positive (within a table) or negative (outside a table).

3.3.2 Column

After the table boundary has been identified, classification proceeds from the first (leftmost) vline to the last (rightmost) vline in a table. For each vline, a classifier will return one of five classes for the test example generated from the current vline.

Sometimes, the class assigned by a classifier to the current vline may not be logically consistent with the classes assigned up to that point. For instance, it is not logically consistent if the previous vline is of class 1 (outside any column) and the current vline is assigned class 4 (last line of a column). When this happens, for the backpropagation algorithm, the class which is logically consistent and has the highest score is assigned to the current vline; for C4.5, one of the logically consistent classes is randomly chosen.

3.3.3 Row

The first hline of a table always starts a new active row (class 1). Thereafter, a given hline is compared with the first hline of the current active row. If the classifier returns class 1 (first hline of a row), then a new active row is started and the current hline is the first hline of this new row. If the classifier returns class 2 (subsequent hline of a row), then the current active row grows to include the current hline.
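The row features of Table 3 and the active-row loop just described fit in a few lines. The sketch below is ours, not the authors' implementation; `classify` is a hypothetical stand-in for the trained C4.5 or backpropagation classifier, and hlines are assumed to be padded to the table width n.

```python
def row_features(h_prev, h):
    """Four features of Table 3, comparing hline h against h_prev
    (the first hline of the relevant row)."""
    n = len(h)
    f1 = sum(a == " " and b == " " for a, b in zip(h_prev, h)) / n
    f2 = sum(a != " " and b == " " for a, b in zip(h_prev, h)) / n
    f3 = sum(a == " " and b != " " for a, b in zip(h_prev, h)) / n
    k = next((i + 1 for i, c in enumerate(h) if c != " "), n)  # 1-based
    return [f1, f2, f3, k / n]

def segment_rows(hlines, classify):
    """Active-row decoding of Section 3.3.3; classify returns 1 or 2."""
    rows = [[hlines[0]]]                  # first hline always opens a row
    for h in hlines[1:]:
        first_of_active = rows[-1][0]
        if classify(row_features(first_of_active, h)) == 1:
            rows.append([h])              # class 1: h starts a new row
        else:
            rows[-1].append(h)            # class 2: h extends the active row
    return rows
```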
Let A be the number of hlines identified by the hu- man annotator as being part of some table. Let B 448 Feature Description F1 cl, j is a space character and ci,j is a space character F2 F3 F4 ci,,j is an alphanumeric character or a special character, and ci,j is a space character ci,,j is a space character, and ci,j is an alphanumeric character or a special character kin Table 3: Feature values for table row be the number of Mines identified by the program as being part of some table. Let C be the number of Mines identified by both the human annotator and the program as being part of some table. Then recall R = C/A and precision P = C/B. The accuracy of table boundary recognition is defined as the F mea- sure, where F = 2RP/(R + P). The accuracy of recognizing table column (row) is defined similarly, by comparing the class assigned by the human anno- tator and the program to every vline (hline) within a table. 4.2 Deterministic Algorithms To determine how well our learning approach per- forms, we also implemented deterministic algorithms for recognizing table boundary, column, and row. The intent is to compare the accuracy achieved by our learning approach to that of the baseline deter- ministic algorithms. These deterministic algorithms are described below. 4.2.1 Boundary A Mine is considered part of a table if at least one character of Mine is not a space character and if any of the following conditions is met: * The ratio of the position of the first non-space character in hline to the length of hline exceeds some pre-determined threshold (0.25) • Hline consists entirely of one special character. . Hline contains three or more segments, each consisting of two or more contiguous space char- acters. • Hline contains two or more segments, each con- sisting of two or more contiguous separator characters. 4.2.2 Column All vlines within a table that consist of entirely space characters are considered not part of any col- umn. The remaining vlines within the table are then grouped together to form the columns. 4.2.3 Row The deterministic algorithm to recognize table row is similar to the recognition algorithm of the learn- ing approach given in Section 3.3.3, except that the classifier is replaced by one that computes the pro- portion of character type transitions. All characters in the two hlines under consideration are grouped into four types: space characters, special characters, alphabetic characters, or digits. If the proportion of characters that change type exceeds some pre- determined threshold (0.5), then the two Mines be- long to the same row. 4.3 Results We evaluated the accuracy of our learning approach on each subproblem of table boundary, column, and row recognition. For each subproblem, we conducted ten random trials and then averaged the accuracy over the ten trials. In each random trial, 20% of the texts are randomly chosen to serve as the texts for testing, and the remaining 80% texts are used for training. We plot the learning curve as each clas- sifter is given increasing number of training texts. Figure 4 to 6 summarize the average accuracy over ten random trials for each subproblem. Besides the accuracy for the C4.5 and backpropagation classi- tiers, we also show the accuracy of the deterministic algorithms. The results indicate that our learning approach outperforms the deterministic algorithms for all sub- problems. The accuracy of the deterministic algo- rithms is about 70%, whereas the maximum accu- racy achieved by the learning approach ranges over 85% - 95%. 
No one learning algorithm clearly out- performs the other, with C4.5 giving higher accu- racy on recognizing table boundary and column, and backpropagation performing better at recognizing table row. To test the generality of our learning approach, we also evaluated it on 50 technical patent docu- ments from the TIPSTER Volume 3 CD-ROM. To test how well a classifier that is trained on one do- main of texts will generalize to work on a different domain, we also tested the accuracy of our learn- ing approach on patent texts after training on WSJ texts only, and vice versa. Space constraint does not permit us to present the detailed empirical results in this paper, but suffice to say that we found that our learning approach is able to generalize well to work on different domains of texts. 5 Future Work Currently, our table row recognition does not dis- tinguish among the different types of rows, such as title (or caption) row, header row, and content row. We would like to extend our method to make such 449 95 90 85 8O ~ ~° 65 6O 55 50 0 , , , , , & ................ Y ............... e ....,,"~ ....... .......~ .......... T C4.5 -'---e ..... ~,~ Bp -..---x. ..... I °-i ........... 10 20 30 40 50 60 70 80 Number of training examples 90 85 80 75 70 65 60 55 50 I I i i I I i / .. ~ .....~ . . . . . '""....~ ................ ,X ................ "X .............. ~:! ~:' " .............. """ • ................. • ................. "~" " -0 "" ""-'Q ........... C4.5 ----e ..... Bp "-"~ ..... Det ----'~ ..... I0 I I I,, I I I 1 20 30 40 50 60 70 80 Number of training examples Figure 4: Learning curve of boundary identification accuracy on WSJ texts Figure 6: Learning curve of row identification accu- racy on WSJ texts 90 85 8O 7~ 70 55 5O 45 ~ 0 10 , , , , , & ............... 4 ............... .-0 . . . . . . . . . . . . . . . . .,.. ..... - ............ Q .... t'" .... X. ...," . " .... ./ C4.5 ..-..-o ..... Bp --'-~ ..... Det --.-~ ..... ' 3'o ' 6'0 ' 20 40 70 80 Number of training examples Figure 5: Learning curve of column identification accuracy on WSJ texts distinction. We would also like to investigate the effectiveness of other learning algorithms, such as exemplar-based methods, on the task of table recog- nition. 6 Conclusion In this paper, we present a new approach that learns to recognize tables in free text, including the bound- ary, rows and columns of tables. When tested on Wall Street Journal news documents, our learning approach outperforms a deterministic table recogni- tion algorithm that identifies tables based on a fixed set of conditions. Our learning approach is also more flexible and easily adaptable to texts in different do- mains with different table characteristics. References Douglas Appelt and David Israel. 1997. Tutorial notes on building information extraction systems. Tutorial held at the Fifth Conference on Applied Natural Language Processing. Shona Douglas and Matthew Hurst. 1996. Layout & language: Lists and tables in technical doc- uments. In Proceedings o.f the A CL SIGPARSE Workshop on Punctuation in Computational Lin- guistics, pages 19-24. Shona Douglas, Matthew Hurst, and David Quinn. 1995. Using natural language processing for iden- tifying and interpreting tables in plain text. In Fourth Annual ~qymposium on Document Analy- sis and Information Retrieval, pages 535-545. Matthew Hurst and Shona Douglas. 1997. Layout & language: Preliminary experiments in assigning logical structure to table cells. 
In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 217-220.

Richard Power and Donia Scott. 1999. Using layout for the generation, understanding or retrieval of documents. Call for participation at the 1999 AAAI Fall Symposium Series.

John Ross Quinlan. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann, San Francisco, CA.

David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. 1986. Learning internal representation by error propagation. In David E. Rumelhart and James L. McClelland, editors, Parallel Distributed Processing, Volume 1, pages 318-362. MIT Press, Cambridge, MA.
A semantically-derived subset of English for hardware verification

Alexander Holt and Ewan Klein
HCRC Language Technology Group
Division of Informatics
University of Edinburgh
alexander.holt@ed.ac.uk
ewan.klein@ed.ac.uk

Abstract

To verify hardware designs by model checking, circuit specifications are commonly expressed in the temporal logic CTL. Automatic conversion of English to CTL requires the definition of an appropriately restricted subset of English. We show how the limited semantic expressibility of CTL can be exploited to derive a hierarchy of subsets. Our strategy avoids potential difficulties with approaches that take existing computational semantic analyses of English as their starting point -- such as the need to ensure that all sentences in the subset possess a CTL translation.

1 Specifications in Natural Language

Mechanised formal specification and verification tools can significantly aid system design in both software and hardware (Clarke and Wing, 1996). One well-established approach to verification, particularly of hardware and protocols, is temporal model checking, which allows the designer to check that certain desired properties hold of the system (Clarke and Emerson, 1981). In this approach, specifications are expressed in a temporal logic and systems are represented as finite state transition systems. (In practice, it turns out to be preferable to use a symbolic representation of the state model, thereby avoiding the state explosion problem (McMillan, 1993).) An efficient search method determines whether the desired property is true in the model provided by the transition system; if not, it provides a counterexample.

Despite the undoubted success of temporal model checking as a technique, the requirement that specifications be expressed in temporal logic has proved an obstacle to its take-up by circuit designers, and therefore alternative interfaces involving graphics and natural language have been explored. In this paper, we address some of the challenges raised by converting English specifications into temporal logic as a prelude to hardware verification.

One general approach to this kind of task exploits existing results in the computational analysis of natural language semantics, including contextual phenomena such as anaphora and ellipsis, in order to bridge the gap between informal specifications in English and formal specifications in some target formalism (Fuchs and Schwitter, 1996; Schwitter and Fuchs, 1996; Pulman, 1996; Nelken and Francez, 1996). English input sentences are initially mapped into a general-purpose semantic formalism such as Discourse Representation Theory (Kamp and Reyle, 1993) or the Core Language Engine's quasi logical form (Alshawi, 1992), at which point context dependencies are resolved. The output of this stage then undergoes a further mapping into the application-specific language which expresses formal specifications. One system which departs from this framework is presented by Fantechi et al. (1994), whose grammar contains special-purpose rules for recognising constructions that map directly into ACTL formulas (ACTL is an action-based branching temporal logic which, despite the name, is not directly related to the CTL language that we discuss below), and which can trigger clarification dialogues with the user in the case of a one-to-many mapping. Independently, the interface may require the user to employ a controlled language, in which syntax and lexicon are restricted in order to minimise ambiguity with respect to the formal specification language (Macias and Pulman, 1995; Fuchs and Schwitter, 1996; Schwitter and Fuchs, 1996).
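To make the model-checking setup just described concrete, the following Python fragment is a toy explicit-state sketch of our own. The transition system, its labels, and the fixpoint formulations are invented for illustration and are unrelated to the paper's figures; as noted above, production checkers use symbolic state representations rather than this kind of enumeration.

```python
# Toy Kripke structure: atomic propositions per state, and a total
# transition relation (every state has at least one successor).
LABELS = {"s0": {"a", "b"}, "s1": {"b", "c"}, "s2": {"c"}}
SUCC = {"s0": ["s1", "s2"], "s1": ["s0"], "s2": ["s2"]}

def sat_AX(f_states):
    """States where 'AX f' holds: every successor satisfies f."""
    return {s for s in SUCC if all(t in f_states for t in SUCC[s])}

def sat_AG(f_states):
    """States where 'AG f' holds: greatest fixpoint, repeatedly dropping
    states with a successor outside the set."""
    holds = set(f_states)
    while True:
        keep = {s for s in holds if all(t in holds for t in SUCC[s])}
        if keep == holds:
            return holds
        holds = keep

def sat_AF(f_states):
    """States where 'AF f' holds: least fixpoint, repeatedly adding
    states all of whose successors are already in the set."""
    holds = set(f_states)
    while True:
        grow = holds | {s for s in SUCC if all(t in holds for t in SUCC[s])}
        if grow == holds:
            return holds
        holds = grow

b_states = {s for s, props in LABELS.items() if "b" in props}
print(sat_AX(b_states))  # {'s1'}: only s1's successors all satisfy b
print(sat_AG(b_states))  # set(): no state satisfies 'AG b'
print(sat_AF(b_states))  # {'s0', 's1'}: from s2, b never becomes true
```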
The design of a controlled language is one method of addressing the key problem pointed out by Pulman (1996, p. 235), namely to ensure that an English input has a valid translation into the target formalism; this is the problem that we focus on here. Inevitably, we need to pay some attention to the syntactic and semantic properties of our target formalism, and this is the topic of the next section.

2 CTL Specification and Model Checking

While early attempts to use temporal logics for verification had explored both linear and branching models of time, Clarke et al. (1986) showed that the branching temporal logic CTL (Computation Tree Logic) allowed efficient model checking in place of laborious proof construction methods. (Subsequently, model-checking methods which use linear temporal logic have been developed. While theoretically less efficient than those based on CTL, they may turn out to be effective in practice (Vardi, 1998).) In models of CTL, the temporal order relation < defines a tree which branches towards the future. As pointed out by Thomason (1984), branching time provides a basis for formalising the intuition that statements of necessity and possibility are often non-trivially tensed. As we move forward through time, certain possible worlds (i.e., paths in the tree) are eliminated, and thus what was possible at t is no longer available as an option at some t' later than t.

CTL uses formulas beginning with A to express necessity. AG f is true at a time t just in case f is true along all paths that branch forward from the tree at t (true globally). AF f holds when, on all paths, f is true at some time in the future. AX f is true at t when f is true at the next time point, along all paths. Finally, A[f U g] holds if, for each path, g is true at some time, and from now until that point f is true.

[Figure 1: A CTL structure]

Figure 1, from Clarke et al. (1986), illustrates a CTL model structure, with the relation < represented by arrows between circles (states), and the atomic propositions holding at a state being the letters contained in the circle. A CTL structure gives rise to an infinite computation tree, and Figure 2 shows the initial part of such a tree corresponding to Figure 1, when s0 is selected as the initial state. States correspond to points of time in the course of a computation, and branches represent non-determinism. Formulas of CTL are either true or false with respect to any given model; see Table 1 for three examples interpreted at s0 in the Figure 1 structure.

[Figure 2: Computation tree]

formula          sense                                                     at s0
AX c             for all paths, at the next state c is true                true
AG b             for all paths, globally b is true                         false
AF(AX(a ∧ b))    for all paths, eventually there is a state from which,    true
                 for all paths, at the following state a and b are true

Table 1: Interpretation of CTL formulas

3 Data

One of our key tasks has been to collect an initial sample of specifications in English, so as to identify linguistic constructions and usages typical of specification discourse. We currently have a corpus of around a hundred sentences, most of which were elicited by asking suitably qualified respondents to describe the behaviour manifested by timing diagrams. An example of such a diagram is displayed in Figure 3, which is adapted from one of Fisler's (1996, p. 5). The horizontal axis of the diagram indicates the passing of time (as measured by clock cycles) and the vertical axis indicates the transition of signals between the states of high and low. (A signal is a time-varying value present at some point in the circuit.)

[Figure 3: Timing diagram for pulsing circuit]

[Figure 4: Timing diagram for handshaking protocol]

In Figure 3, the input signal i makes a transition from high to low which after a one-cycle delay triggers a unit-duration pulse on the output signal o. (1a-b) give two possible English descriptions of the regularity illustrated by Figure 3,

(1) a. A pulse of width one is generated on the output o one cycle after it detects a falling edge on input i.
    b. If i is high and then is low on the next cycle, then o is low and after one cycle becomes high and then after one more cycle becomes low.

while (2) is a CTL description.

(2) AG(i → AX(¬i → (¬o ∧ AX(o ∧ AX ¬o))))

A noteworthy difference between the two English renderings is that the first is clearly more abstract than the second. Description (1b) is closer to the CTL formula (2), and consequently easier to translate into CTL. (Our system does not yet resolve anaphoric references, as in (1a). There are existing English-to-CTL systems which do, however, such as that of Nelken and Francez (1996).)

For another example of the same phenomenon, consider the timing diagram in Figure 4. As before, sentences (3a-b) give two possible English descriptions of the regularity illustrated by Figure 4,

(3) a. Every request is eventually acknowledged and once a request is acknowledged the request is eventually deasserted and eventually after that the acknowledge signal goes low.
    b. If r rises then after one cycle eventually a rises and then after one cycle eventually r falls and then after one cycle eventually a falls.

which can be rendered in CTL as (4).

(4) AG(¬r ∧ AX r → AF(¬a ∧ AX(a ∧ AF(r ∧ AX(¬r ∧ AF(a ∧ AX ¬a))))))

Example (3b) parallels (1b) in being closer to CTL than its (a) counterpart. Nevertheless, (3b) is ontologically richer than CTL in an important respect, in that it makes reference to the event predicates rise and fall.

4 Defining a Controlled Language

Even confining our attention to hardware specifications of the level of complexity examined so far, we can conclude that there are some kinds of English locutions which will map rather directly into CTL, whereas others have a much less direct relation. What is the nature of this indirect relation? Our claim in this paper is that we can give semantically-oriented characterisations of the relation between complexity in English sentences and their suitability for inclusion in a controlled language for hardware verification. Moreover, this semantic orientation yields a hierarchy of subsets of English. (This hierarchy is a theoretical entity constructed for our specific purposes, of course, not a general linguistic hypothesis about English.)

Our first step in developing an English-to-CTL conversion system was to build a prototype based on the Alvey Natural Language Tools Grammar (Grover et al., 1993). The Alvey grammar is a broad-coverage grammar of English using GPSG-style rules, and maps into an event-based, unscoped semantic representation. For this application, we used a highly restricted lexicon and simplified the grammar in a number of ways (for example: fewer coordination rules; no deontic readings of modals). Tidhar (1998) reports an initial experiment in taking the semantic output generated from a small set S of English specifications, and converting it into CTL. Given that the Alvey grammar will produce plausible semantic readings for a much larger set S', the challenge is to characterise an intermediate set S̄, with S ⊂ S̄ ⊂ S', that would admit a translation φ into formulas of CTL. Let's assume that we have a reverse translation φ⁻¹ from CTL to English; then we would like S̄ = range(φ⁻¹).

4.1 Transliteration

Now suppose that φ⁻¹ is a literal translation from CTL to English. That is, we recurse on the formulas of CTL, choosing a canonical lexical item or phrase in English as a direct counterpart to each constituent of the CTL formula. In fact, we have implemented such a translation as a DCG ctl2eng. To illustrate, ctl2eng maps the formula (2) into (5):

(5) globally if i is high then after 1 cycle if i is low then o is low and after 1 cycle o is high and after 1 cycle o is low

Let φ1⁻¹ be the function defined by ctl2eng; then we call L1 = range(φ1⁻¹) the canonical transliteration level of English. We can be confident that it is possible to build a translation φ1 which will map any sentence in L1 into a formula of CTL.

L1 can be trivially augmented by adding near-synonymous lexical and syntactic variants. For example, i is high can be replaced by signal i holds, and after 1 cycle ... by 1 cycle later .... This adds no semantic complexity. We call this language (notated L1+) the augmented transliteration level.

One potential problem with defining φ1 in this way is that the sentences generated by ctl2eng soon become structurally ambiguous. We can solve this either by generating unambiguous paraphrases, or by analysing the relevant class of ambiguities and making sure that φ1 is able to provide all relevant CTL interpretations.

These languages contain only sentences. Hardware specifications often have the form of multi-sentence discourses, however. Such discourses, and the additional phenomena they introduce, occur at higher levels of our language hierarchy, and we presently lack any detailed analysis of them in the terms of this paper.

4.2 Compositional indirect semantics

We'll say that an English input expression has compositional indirect semantics just in case

1. there is a compositional mapping to CTL, but where
2. the semantics of the English is ontologically richer than the intended CTL translation.

The best way to explain these notions is by way of some examples. First, consider expressions like the nouns pulse, edge and the verbs rise, fall. These refer to certain kinds of event. For example, an edge denotes the event where a signal changes between two distinct states; from high at time t to low at time t + 1, or conversely. In CTL, the notion of an edge on signal i corresponds approximately to the following expression:

(6) (i ∧ AX ¬i) ∨ (¬i ∧ AX i)

(Approximately, in the sense that one cannot simply substitute this expression arbitrarily into a larger formula, as it depends on the syntactic context -- for example, whether it occurs in the antecedent or consequent of an implication.) Similarly, a pulse can be analysed in terms of a rising edge followed by a falling edge.

What do we mean by saying that there is a compositional mapping of locutions at this level to CTL? Our claim is that they can be algorithmically converted into pure CTL without reference to unbounded context. What do we mean by saying that these English expressions involve a richer ontology than CTL? If compositional mapping holds, then clearly we are not forced to augment the standard models for CTL in order to interpret them (although this route might be desirable for other reasons). Rather, we are saying that the 'natural' ontology for these expressions is richer than that allowed for CTL, even if reduction is possible. (There is a further kind of ontological richness in English at this level, involving the relation between events, rather than the events themselves. Space prohibits a closer examination here.)

4.3 Non-compositional indirect semantics

We consider the conversion to involve non-compositional indirect semantics when there is some aspect of non-locality in the domain of the translation function. That is, some form of inference is required -- probably involving domain-specific axioms or general temporal axioms -- in order to obtain a CTL formula from the English expression. Here are two examples.

The first comes from sentence (3a), where the use of eventually might normally be taken to correspond directly to the CTL operator AF. However, because of the domain of (3a) -- a handshaking protocol, evidenced by the use of the verbs acknowledge and request -- it is in fact more accurate to require an extra AX in the CTL. This ensures that the three transitions cannot occur at the same time. We see here an example of domain-specific interpretation conventions that our system needs to be aware of. Clearly, it must incorporate them in such a way that users are still able to reliably predict how the system will react to their English specifications.

The second example is

(7) From one cycle after i changes until it changes again x and y are different.

In this case there is an interaction between a non-local linguistic phenomenon and something specific to the CTL conversion, namely how to make the right connection between the first and the second changes.

4.4 Language hierarchy

Table 2 summarises the main proposals of this section. The left-hand column lists the hierarchy of postulated sublanguages, in increasing order of semantic expressiveness. The middle column tries to calibrate this expressiveness.

level   expressiveness   examples
L1      pure CTL         i is high; after 1 cycle
L1+     pure CTL         i holds; 1 cycle later
L2      extended CTL     i rises; there is a pulse of unit duration
L3      full SR?         r is eventually acknowledged

Table 2: Language hierarchy

By 'extended CTL', we mean a superset of CTL which is syntactically augmented to allow formulas such as rise(p), fall(p), discussed earlier, and pulse(p, v, n), where p is an atom, v is a Boolean indicating a high or low value, and n is a natural number indicating duration. The semantic clauses would have to be correspondingly augmented -- as carried out, for example, by Nelken and Francez (1996) for rise(p) and fall(p). By 'full SR', we are hypothesising that it would be necessary to invoke a general semantic representation language for English.

We have constructed a context-free grammar for L2, in order to obtain a concrete approximation to a controlled subset of English for expressing specifications. There are two cautionary observations. First, as just indicated, L2 maps directly not into CTL, but into extended CTL. Second, our grammar for L2 ignores some subtleties of English syntax and morphology -- for example, subject-verb agreement; modal auxiliary subcategorisation; varieties of verb phrase modification by adverbs; and forms of anaphora.

These defects in our CFG for L2 are not fundamental problems, however. The device of using the ctl2eng mapping to define a sublanguage is a specific methodology for finding a semantically motivated sublanguage. As such, it is only an approximation to the language that we wish our system to deal with. This CFG is not the grammar used by our parser (which can, in fact, deal with many of the details of English syntax just mentioned). We may, therefore, introduce a language L2+ which corrects the grammatical errors of L2 and extends it with some degree of anaphora and ellipsis. We note that it would be useful to have a firmer theoretical grasp on the relations between our sublanguages; we have ongoing work in this area.

5 Conclusion

Much work on controlled languages has been motivated by the ambition to "find the right trade-off between expressiveness and processability" (Schwitter and Fuchs, 1996). An alternative, suggested by what we have proposed here, is to bring into play a hierarchy of controlled languages, ordered by the degree to which they semantically approximate the target formalism. Each point in the hierarchy brings different trade-offs between expressiveness and tractability, and evaluating their different merits will depend heavily on the particular task within a generic application domain, as well as on the class of users.

As a final remark, we wish to point out that there may be advantages in identifying plausible restrictions on the target formalism. Dwyer et al. (1998a; 1998b) have convincingly argued that users of formal verification languages make use of recurring specification patterns. That is, rather than drawing on the full complexity of languages such as CTL, documented specifications tend to fall into much simpler formulations which express commonly desired properties. In future work, we plan to investigate specification patterns as a further source of constraints that propagate backwards into the controlled English, perhaps providing additional mechanisms for dealing with apparent ambiguity in user input.

Acknowledgements

The work reported here has been carried out as part of PROSPER (Proof and Specification Assisted Design Environments), ESPRIT Framework IV LTR 26241, http://www.dcs.gla.ac.uk/prosper/. Thanks to Marc Moens, Claire Grover, Mike Fourman, Dirk Hoffman, Tom Melham, Thomas Kropf, Mike Gordon, and our ACL reviewers.

References

Hiyan Alshawi, editor. 1992. The Core Language Engine. MIT Press.

Edmund M. Clarke and E. Allen Emerson. 1981. Synthesis of synchronization skeletons for branching time temporal logic. In Logic of Programs: Workshop, Yorktown Heights, NY, May 1981, volume 131 of Lecture Notes in Computer Science. Springer-Verlag.

Edmund M. Clarke and Jeanette M. Wing. 1996. Formal methods: State of the art and future directions. ACM Computing Surveys, 28(4):626-643.

Edmund M. Clarke, E. Allen Emerson, and A. Prasad Sistla. 1986. Automatic verification of finite-state concurrent systems using temporal logic specifications. ACM Transactions on Programming Languages and Systems, 8(2):244-263.

Matthew B. Dwyer, George S. Avrunin, and James C. Corbett. 1998a. Patterns in property specifications for finite-state verification. Technical Report KSU CIS TR-98-9, Department of Computing and Information Sciences, Kansas State University.

Matthew B. Dwyer, George S. Avrunin, and James C. Corbett.
1998b. Property specification patterns for finite-state verification. In M. Ardis, editor, Proceedings of the Second Workshop on Formal Methods in Software Practice, pages 7-15. A. Fantechi, S. Gnesi, G. Ristori, M. Carenini, M. Marino, and P. Moreschini. 1994. Assisting requirement formalization by means of natural language translation. Formal Methods in System Design, 4:243-263. Kathryn Fisler. 1996. A Unified Approach to Hard- ware Verification through a Heterogeneous Logic of Design Diagrams. Ph.D. thesis, Department of Computer Science, Indiana University. Norbert E. Fuchs and Rolf Schwitter. 1996. Attempto Controlled English (ACE). In CLAW 96: First International Workshop on Controlled Language Applications. Centre for Computa- tional Linguistics, Katholieke Universiteit Leu- ven, Belgium. Claire Grover, John Carroll, and Ted Briscoe. 1993. The Alvey Natural Language Tools Grammar (4th release). Technical Report 284, Computer Laboratory, University of Cambridge. Hans Kamp and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Modeltheoretic Se- mantics of Natural Language, Formal Logic and Discourse Representation Theory. Number 42 in Studies in Linguistics and Philosophy. Kluwer. Benjamin Macias and Stephen G. Pulman. 1995. A method for controlling the production of specifications in natural language. The Computer Journal, 38(4):310-318. Kenneth L. Macmillan. 1993. Symbolic Model Checking. Kluwer. Rani Nelken and Nissim Francez. 1996. Translat- ing natural language system specifications into temporal logic via DRT. Technical Report LCL- 96-2, Laboratory for Computational Linguistics, Technion, Israel Institute of Technology. Stephen G. Pulman. 1996. Controlled language for knowledge representation. In CLAW 96: Proceedings of the First International Workshop on Controlled Language Applications, pages 233-242. Centre for Computational Linguistics, Katholieke Universiteit Leuven, Belgium. Rolf Schwitter and Norbert E. Fuchs. 1996. Attempto -- from specifications in controlled natural language towards executable specifica- tions. In GI EMISA Workshop. Nattirlichsprach- licher Entwurf von Informations-systemen, Tutz- ing, Germany. Richmond H. Thomason. 1984. Combinations of tense and modality. In D. Gabbay and E Guenthner, editors, Handbook of Philosophical Logic. Volume II: Extensions of Classical Logic, volume 146 of Synthese Library, chapter 11.3, pages 89-134. D. Reidel. Dan Tidhar. 1998. ALVEY to CTL translation -- A preparatory study for finite-state verification natural language interface. Msc dissertation, De- partment of Linguistics, University of Edinburgh. Moshe Y. Vardi. 1998. Linear vs. branching time: A complexity-theoretic perspective. In LICS'98: Proceedings of the Annual IEEE Symposium on Logic in Computer Science. Indiana University. 456
1999
58
Efficient Parsing for Bilexical Context-Free Grammars and Head Automaton Grammars* Jason Eisner Dept. of Computer ~ Information Science University of Pennsylvania 200 South 33rd Street, Philadelphia, PA 19104 USA j eisner@linc, cis. upenn, edu Giorgio Satta Dip. di Elettronica e Informatica Universit£ di Padova via Gradenigo 6/A, 35131 Padova, Italy satt a@dei, unipd, it Abstract Several recent stochastic parsers use bilexical grammars, where each word type idiosyncrat- ically prefers particular complements with par- ticular head words. We present O(n 4) parsing algorithms for two bilexical formalisms, improv- ing the prior upper bounds of O(n5). For a com- mon special case that was known to allow O(n 3) parsing (Eisner, 1997), we present an O(n 3) al- gorithm with an improved grammar constant. 1 Introduction Lexicalized grammar formalisms are of both theoretical and practical interest to the com- putational linguistics community. Such for- malisms specify syntactic facts about each word of the language--in particular, the type of arguments that the word can or must take. Early mechanisms of this sort included catego- rial grammar (Bar-Hillel, 1953) and subcatego- rization frames (Chomsky, 1965). Other lexi- calized formalisms include (Schabes et al., 1988; Mel'~uk, 1988; Pollard and Sag, 1994). Besides the possible arguments of a word, a natural-language grammar does well to specify possible head words for those arguments. "Con- vene" requires an NP object, but some NPs are more semantically or lexically appropriate here than others, and the appropriateness depends largely on the NP's head (e.g., "meeting"). We use the general term bilexical for a grammar that records such facts. A bilexical grammar makes many stipulations about the compatibil- ity of particular pairs of words in particular roles. The acceptability of "Nora convened the " The authors were supported respectively under ARPA Grant N6600194-C-6043 "Human Language Technology" and Ministero dell'Universitk e della Ricerca Scientifica e Tecnologica project "Methodologies and Tools of High Performance Systems for Multimedia Applications." party" then depends on the grammar writer's assessment of whether parties can be convened. Several recent real-world parsers have im- proved state-of-the-art parsing accuracy by re- lying on probabilistic or weighted versions of bilexical grammars (Alshawi, 1996; Eisner, 1996; Charniak, 1997; Collins, 1997). The ra- tionale is that soft selectional restrictions play a crucial role in disambiguation, i The chart parsing algorithms used by most of the above authors run in time O(nS), because bilexical grammars are enormous (the part of the grammar relevant to a length-n input has size O(n 2) in practice). Heavy probabilistic pruning is therefore needed to get acceptable runtimes. But in this paper we show that the complexity is not so bad after all: • For bilexicalized context-free grammars, O(n 4) is possible. • The O(n 4) result also holds for head au- tomaton grammars. • For a very common special case of these grammars where an O(n 3) algorithm was previously known (Eisner, 1997), the gram- mar constant can be reduced without harming the O(n 3) property. Our algorithmic technique throughout is to pro- pose new kinds of subderivations that are not constituents. We use dynamic programming to assemble such subderivations into a full parse. 2 Notation for context-free grammars The reader is assumed to be familiar with context-free grammars. 
Our notation fol- 1Other relevant parsers simultaneously consider two or more words that are not necessarily in a dependency relationship (Lafferty et al., 1992; Magerman, 1995; Collins and Brooks, 1995; Chelba and Jelinek, 1998). 457 lows (Harrison, 1978; Hopcroft and Ullman, 1979). A context-free grammar (CFG) is a tuple G = (VN, VT, P, S), where VN and VT are finite, disjoint sets of nonterminal and terminal sym- bols, respectively, and S E VN is the start sym- bol. Set P is a finite set of productions having the form A --+ a, where A E VN, a E (VN U VT)*. If every production in P has the form A -+ BC or A --+ a, for A,B,C E VN,a E VT, then the grammar is said to be in Chomsky Normal Form (CNF). 2 Every language that can be generated by a CFG can also be generated by a CFG in CNF. In this paper we adopt the following conven- tions: a, b, c, d denote symbols in VT, w, x, y de- note strings in V~, and a, ~,... denote strings in (VN t_J VT)*. The input to the parser will be a CFG G together with a string of terminal sym- bols to be parsed, w = did2.., dn. Also h,i,j,k denote positive integers, which are assumed to be ~ n when we are treating them as indices into w. We write wi,j for the input substring di'." dj (and put wi,j = e for i > j). A "derives" relation, written =~, is associated with a CFG as usual. We also use the reflexive and transitive closure of o, written ~*, and define L(G) accordingly. We write a fl 5 =~* a75 for a derivation in which only fl is rewritten. 3 Bilexical context-free grammars We introduce next a grammar formalism that captures lexical dependencies among pairs of words in VT. This formalism closely resem- bles stochastic grammatical formalisms that are used in several existing natural language pro- cessing systems (see §1). We will specify a non- stochastic version, noting that probabilities or other weights may be attached to the rewrite rules exactly as in stochastic CFG (Gonzales and Thomason, 1978; Wetherell, 1980). (See §4 for brief discussion.) Suppose G = (VN, VT, P,T[$]) is a CFG in CNF. 3 We say that G is bilexical iff there exists a set of "delexicalized nonterminals" VD such that VN = {A[a] : A E VD,a E VT} and every production in P has one of the following forms: 2Production S --~ e is also allowed in a CNF grammar if S never appears on the right side of any production. However, S --+ e is not allowed in our bilexical CFGs. ,awe have a more general definition that drops the restriction to CNF, but do not give it here. • A[a] ~ B[b] C[a] (1) • A[a] --+ C[a] B[b] (2) • A[a] ~ a (3) Thus every nonterminal is lexicalized at some terminal a. A constituent of nonterminal type A[a] is said to have terminal symbol a as its lex- ical head, "inherited" from the constituent's head child in the parse tree (e.g., C[a]). Notice that the start symbol is necessarily a lexicalized nonterminal, T[$]. Hence $ appears in every string of L(G); it is usually convenient to define G so that the language of interest is actually L'(G) = {x: x$ E L(G)}. Such a grammar can encode lexically specific preferences. 
For example, P might contain the productions • VP [solve] --+ V[solve] NP[puzzles] • NP[puzzles] --+ DEW[two] N[puzzles] • V[solve] ~ solve • N[puzzles] --4 puzzles • DEW[two] --+ two in order to allow the derivation VP[solve] ~* solve two puzzles, but meanwhile omit the sim- ilar productions • VP[eat] -+ V[eat] NP[puzzles] • VP[solve] --~ V[solve] NP[goat] • VP[sleep] -+ V[sleep] NP[goat] • NP[goat] -+ DET[two] N[goat] since puzzles are not edible, a goat is not solv- able, "sleep" is intransitive, and "goat" cannot take plural determiners. (A stochastic version of the grammar could implement "soft prefer- ences" by allowing the rules in the second group but assigning them various low probabilities.) The cost of this expressiveness is a very large grammar. Standard context-free parsing algo- rithms are inefficient in such a case. The CKY algorithm (Younger, 1967; Aho and Ullman, 1972) is time O(n 3. IPI), where in the worst case IPI = [VNI 3 (one ignores unary productions). For a bilexical grammar, the worst case is IPI = I VD 13. I VT 12, which is large for a large vocabulary VT. We may improve the analysis somewhat by observing that when parsing dl ... dn, the CKY algorithm only considers nonterminals of the form A[di]; by restricting to the relevant pro- ductions we obtain O(n 3. IVDI 3. min(n, IVTI)2). 458 We observe that in practical applications we always have n << IVTI. Let us then restrict our analysis to the (infinite) set of input in- stances of the parsing problem that satisfy re- lation n < IVTI. With this assumption, the asymptotic time complexity of the CKY algo- rithm becomes O(n 5. IVDt3). In other words, it is a factor of n 2 slower than a comparable non-lexicalized CFG. 4 Bilexical CFG in time O(n 4) In this section we give a recognition algorithm for bilexical CNF context-free grammars, which runs in time O(n 4. max(p, IVDI2)) = O(n 4. IVDI3). Here p is the maximum number of pro- ductions sharing the same pair of terminal sym- bols (e.g., the pair (b, a) in production (1)). The new algorithm is asymptotically more efficient than the CKY algorithm, when restricted to in- put instances satisfying the relation n < IVTI. Where CKY recognizes only constituent sub- strings of the input, the new algorithm can rec- ognize three types of subderivations, shown and described in Figure l(a). A declarative specifi- cation of the algorithm is given in Figure l(b). The derivability conditions of (a) are guaran- teed by (b), by induction, and the correctness of the acceptance condition (see caption) follows. This declarative specification, like CKY, may be implemented by bottom-up dynamic pro- gramming. We sketch one such method. For each possible item, as shown in (a), we maintain a bit (indexed by the parameters of the item) that records whether the item has been derived yet. All these bits are initially zero. The algo- rithm makes a single pass through the possible items, setting the bit for each if it can be derived using any rule in (b) from items whose bits are already set. At the end of this pass it is straight- forward to test whether to accept w (see cap- tion). The pass considers the items in increas- ing order of width, where the width of an item in (a) is defined as max{h,i,j} -min{h,i,j}. Among items of the same width, those of type A should be considered last. The algorithm requires space proportional to the number of possible items, which is at most na]VDI 2. 
Each of the five rule templates can instantiate its free variables in at most n4p or (for COMPLETE rules) n41VDI 2 different ways, each of which is tested once and in constant time; so the runtime is O(n 4 max(p, IVDI2)). By comparison, the CKY algorithm uses only the first type of item, and relies on rules whose B C inputs are pairs .~.~ . z~::~ . Such rules can be instantiated in O(n 5) different ways for a fixed grammar, yielding O(n 5) time complexity. The new algorithm saves a factor of n by com- bining those two constituents in two steps, one of which is insensitive to k and abstracts over its possible values, the other of which is insensitive to h ~ and abstracts over its possible values. It is straightforward to turn the new O(n 4) recognition algorithm into a parser for stochas- tic bilexical CFGs (or other weighted bilexical CFGs). In a stochastic CFG, each nonterminal A[a] is accompanied by a probability distribu- tion over productions of the form A[a] --+ ~. A T is just a derivation (proof tree) of lZ~n ,.o parse and its probability--like that of any derivation we find--is defined as the product of the prob- abilities of all productions used to condition in- ference rules in the proof tree. The highest- probability derivation for any item can be re- constructed recursively at the end of the parse, provided that each item maintains not only a bit indicating whether it can be derived, but also the probability and instantiated root rule of its highest-probability derivation tree. 5 A more efficient variant We now give a variant of the algorithm of §4; the variant has the same asymptotic complexity but will often be faster in practice. Notice that the ATTACH-LEFT rule of Fig- ure l(b) tries to combine the nonterminal label B[dh,] of a previously derived constituent with every possible nonterminal label of the form C[dh]. The improved version, shown in Figure 2, restricts C[dh] to be the label of a previously de- rived adjacent constituent. This improves speed if there are not many such constituents and we can enumerate them in O(1) time apiece (using a sparse parse table to store the derived items). It is necessary to use an agenda data struc- ture (Kay, 1986) when implementing the declar- ative algorithm of Figure 2. Deriving narrower items before wider ones as before will not work here because the rule HALVE derives narrow items from wide ones. 459 (a) A i4 , A A h z j (i g h <j, A E VD) (i < j <h,A, C E VD) (h < i < j, A, C E VD) is derived iff A[dh] ~* wi,j is derived iff A[dh] ~ B[dh,]C[dh] ~* wi,jC[dh] for some B, h' is derived iff A[dh] ~ C[dh]B[dh,] ~* C[dh]wi,j for some B, h' (b) STAaT: ~ A[dh] ~ dh h@h ATTACH-LEFT: B A ./Q". c ~ 3 h ATTACH-RIGHT: B .4 h ~ 3 A[dh] -~ B[dh,]C[dh] A[dh] -~ C[dh]B[dh,] COMPLETE-RIGHT: COMPLETE-LEFT: A C 3 h j A iz k C A A iz@k Figure 1: An O(n 4) recognition algorithm for CNF bilexical CFG. (a) Types of items in the parse table (chart). The first is syntactic sugar for the tuple [A, A, i, h,j], and so on. The stated conditions assume that dl,...dn are all distinct. (b) Inference rules. The algorithm derives the item below -- if the items above -- have already been derived and any condition to the right of is met. It accepts input w just if item I/k, T, 1, h, n] is derived for some h such that dh -= $. (a) A A i//]h ( i <_ h, A e VD) A h~ (h < j, A E VD) ,~. ~C (i _< j < h, A,C E VD) 3 h A A C ~ . 
(h < i < j, A,C E VD) h ~ 3 (i < h _< j, A E VD) is derived iff A[dh] ~* wi,j is derived iff A[dh] ~* wi,j for some j _> h is derived iff A[dh] ~* w~,j for some i _< h is derived iff A[dh] ~ B[dh,]C[dh] ~* wi,jC[dh] ~* wi,k for some B, h ~, k is derived iff A[dh] ~ C[dh]B[dh,] ~* C[dh]wi,j ~* Wk,j for some B, h ~, k (b) As in Figure l(b) above, but add HALVE and change ATTACH-LEFT and ATTACH-RIGHT as shown. HALVE: ATTACH-LEFT: ATTACH-RIGHT: A B C C B A A A A[dh] ---4 B[dh,]V[dh] d d[dh] ---+ C[dh]B[dh,] Figure 2: A more efficient variant of the O(n 4) algorithm in Figure 1, in the same format. 460 6 Multiple word senses Rather than parsing an input string directly, it is often desirable to parse another string related by a (possibly stochastic) transduction. Let T be a finite-state transducer that maps a mor- pheme sequence w E V~ to its orthographic re- alization, a grapheme sequence v~. T may re- alize arbitrary morphological processes, includ- ing affixation, local clitic movement, deletion of phonological nulls, forbidden or dispreferred k-grams, typographical errors, and mapping of multiple senses onto the same grapheme. Given grammar G and an input @, we ask whether E T(L(G)). We have extended all the algo- rithms in this paper to this case: the items sim- ply keep track of the transducer state as well. Due to space constraints, we sketch only the special case of multiple senses. Suppose that the input is ~ =dl ... dn, and each di has up to • g possible senses. Each item now needs to track its head's sense along with its head's position in @. Wherever an item formerly recorded a head position h (similarly h~), it must now record a pair (h, dh) , where dh E VT is a specific sense of d-h. No rule in Figures 1-2 (or Figure 3 below) will mention more than two such pairs. So the time complexity increases by a factor of O(g2). 7 Head automaton grammars in time O(n 4) In this section we show that a length-n string generated by a head automaton grammar (A1- shawi, 1996) can be parsed in time O(n4). We do this by providing a translation from head automaton grammars to bilexical CFGs. 4 This result improves on the head-automaton parsing algorithm given by Alshawi, which is analogous to the CKY algorithm on bilexical CFGs and is likewise O(n 5) in practice (see §3). A head automaton grammar (HAG) is a function H : a ~ Ha that defines a head au- tomaton (HA) for each element of its (finite) domain. Let VT =- domain(H) and D = {~, +-- -}. A special symbol $ E VT plays the role of start symbol. For each a E VT, Ha is a tuple (Qa, VT, (~a, In, Fa), where • Qa is a finite set of states; 4Translation in the other direction is possible if the HAG formalism is extended to allow multiple senses per word (see §6). This makes the formalisms equivalent. • In, Fa C Qa are sets of initial and final states, respectively; • 5a is a transition function mapping Qa x VT × D to 2 Qa, the power set of Qa. A single head automaton is an acceptor for a language of string pairs (z~, Zr) E V~ x V~. In- formally, if b is the leftmost symbol of Zr and q~ E 5a(q, b, -~), then Ha can move from state q to state q~, matching symbol b and removing it from the left end of Zr. Symmetrically, if b is the rightmost symbol of zl and ql E 5a(q, b, ~---) then from q Ha can move to q~, matching symbol b and removing it from the right end of zl.5 More formally, we associate with the head au- tomaton Ha a "derives" relation F-a, defined as a binary relation on Qa × V~ x V~. 
For ev- ery q E Q, x,y E V~, b E VT, d E D, and q' E ~a(q, b, d), we specify that (q, xb, y) ~-a (q',x,Y) if d =+-; (q, x, by) ~-a (q', x, y) if d =--+. The reflexive and transitive closure of F-a is writ- ten ~-~. The language generated by Ha is the set L(Ha) = {<zl,Zr) I (q, zl,Zr) I-; (r,e,e), qEIa, rEFa}. We may now define the language generated by the entire grammar H. To generate, we ex- pand the start word $ E VT into xSy for some (x, y) E L(H$), and then recursively expand the words in strings x and y. More formally, given H, we simultaneously define La for all a E VT to be minimal such that if (x,y) E L(Ha), x r E Lx, yl ELy, then x~ay ~ E La, where Lal...ak stands for the concatenation language Lal "'" La k. Then H generates language L$. We next present a simple construction that transforms a HAG H into a bilexical CFG G generating the same language. The construc- tion also preserves derivation ambiguity. This means that for each string w, there is a linear- time 1-to-1 mapping between (appropriately de- ~Alshawi (1996) describes HAs as accepting (or equiv- alently, generating) zl and z~ from the outside in. To make Figure 3 easier to follow, we have defined HAs as accepting symbols in the opposite order, from the in- side out. This amounts to the same thing if transitions are reversed, Is is exchanged with Fa, and any transi- tion probabilities are replaced by those of the reversed Markov chain. 461 fined) canonical derivations of w by H and canonical derivations of w by G. We adopt the notation above for H and the components of its head automata. Let VD be an arbitrary set of size t = max{[Qa[ : a • VT}, and for each a, define an arbitrary injection fa : Qa --+ YD. We define G -- (VN, VT, P,T[$]), where (i) VN = {A[a] : A • VD, a • VT}, in the usual manner for bilexical CFG; (ii) P is the set of all productions having one of the following forms, where a, b • VT: • A[a] --+ B[b] C[a] where A = fa(r), B = fb(q'), C = f~(q) for some qr • Ib, q • Qa, r • 5a(q, b, +-) • A[a] -~ C[a] Bib] where A = fa(r), B = fb(q'), C = fa(q) for some q' • Ib, q • Qa, r • 5a (q, b,--+) ] • A[a --+ a where A = fa(q) for some q • Fa (iii) T = f$(q), where we assume WLOG that I$ is a singleton set {q}. We omit the formal proof that G and H admit isomorphic derivations and hence gen- erate the same languages, observing only that if (x,y) = (bib2... bj, bj+l.., bk) E L(Ha)-- a condition used in defining La above--then g[a] 3" BI[bl]"" Bj[bj]aBj+l[bj+l]... Bk[bk], for any A, B1,... Bk that map to initial states in Ha, Hbl,... Hb~ respectively. In general, G has p = O(IVDI 3) = O(t3). The construction therefore implies that we can parse a length-n sentence under H in time O(n4t3). If the HAs in H happen to be deterministic, then in each binary production given by (ii) above, symbol A is fully determined by a, b, and C. In this case p = O(t2), so the parser will operate in time O(n4t2). We note that this construction can be straightforwardly extended to convert stochas- tic HAGs as in (Alshawi, 1996) into stochastic CFGs. Probabilities that Ha assigns to state q's various transition and halt actions are copied onto the corresponding productions A[a] --~ c~ of G, where A = fa(q). 8 Split head automaton grammars in time O(n 3) For many bilexical CFGs or HAGs of practical significance, just as for the bilexical version of link grammars (Lafferty et al., 1992), it is possi- ble to parse length-n inputs even faster, in time O(n 3) (Eisner, 1997). 
In this section we de- scribe and discuss this special case, and give a new O(n 3) algorithm that has a smaller gram- mar constant than previously reported. A head automaton Ha is called split if it has no states that can be entered on a +-- transi- tion and exited on a ~ transition. Such an au- tomaton can accept (x, y) only by reading all of y--immediately after which it is said to be in a flip state--and then reading all of x. For- mally, a flip state is one that allows entry on a --+ transition and that either allows exit on a e-- transition or is a final state. We are concerned here with head automa- ton grammars H such that every Ha is split. These correspond to bilexical CFGs in which any derivation A[a] 3" xay has the form A[a] 3" xB[a] =~* xay. That is, a word's left dependents are more oblique than its right de- pendents and c-command them. Such grammars are broadly applicable. Even if Ha is not split, there usually exists a split head automaton H~ recognizing the same language. H a' exists iff {x#y : {x,y) e L(Ha)} is regular (where # ¢ VT). In particular, H~a must exist unless Ha has a cycle that includes both +-- and --+ transitions. Such cycles would be necessary for Ha itself to accept a formal language such as {(b n, c n) : n > 0}, where word a takes 2n de- pendents, but we know of no natural-language motivation for ever using them in a HAG. One more definition will help us bound the complexity. A split head automaton Ha is said to be g-split if its set of flip states, denoted Qa C_ Qa, has size < g. The languages that can be recognized by g-split HAs are those that can g be written as [Ji=l Li x Ri, where the Li and Ri are regular languages over VT. Eisner (1997) actually defined (g-split) bilexical grammars in terms of the latter property. 6 6That paper associated a product language Li x Ri, or equivalently a 1-split HA, with each of g senses of a word (see §6). One could do the same without penalty in our present approach: confining to l-split automata would remove the g2 complexity factor, and then allowing g 462 We now present our result: Figure 3 specifies an O(n3g2t 2) recognition algorithm for a head automaton grammar H in which every Ha is g-split. For deterministic automata, the run- time is O(n3g2t)--a considerable improvement on the O(n3g3t 2) result of (Eisner, 1997), which also assumes deterministic automata. As in §4, a simple bottom-up implementation will suffice. s For a practical speedup, add . ["'. as an an- h j tecedent to the MID rule (and fill in the parse table from right to left). Like our previous algorithms, this one takes two steps (ATTACH, COMPLETE) to attach a child constituent to a parent constituent. But instead of full constituents--strings xd~y E Ld~--it uses only half-constituents like xdi and diy. Where CKY combines z~ i h jj+ln we save two degrees of freedom i, k (so improv- ing O(n 5) to O(n3)) and combine, ,~:~...~J; n 2J~1 n The other halves of these constituents can be at- tached later, because to find an accepting path for (zl, Zr) in a split head automaton, one can separately find the half-path before the flip state (which accepts zr) and the half-path after the flip state (which accepts zt). These two half- paths can subsequently be joined into an ac- cepting path if they have the same flip state s, i.e., one path starts where the other ends. An- notating our left half-constituents with s makes this check possible. 
9 Final remarks We have formally described, and given faster parsing algorithms for, three practical gram- matical rewriting systems that capture depen- dencies between pairs of words. All three sys- tems admit naive O(n 5) algorithms. We give the first O(n 4) results for the natural formalism of bilexical context-free grammar, and for AI- shawi's (1996) head automaton grammars. For the usual case, split head automaton grammars or equivalent bilexical CFGs, we replace the O(n 3) algorithm of (Eisner, 1997) by one with a smaller grammar constant. Note that, e.g., all senses would restore the g2 factor. Indeed, this approach gives added flexibility: a word's sense, unlike its choice of flip state, is visible to the HA that reads it. three models in (Collins, 1997) are susceptible to the O(n 3) method (cf. Collins's O(nh)). Our dynamic programming techniques for cheaply attaching head information to deriva- tions can also be exploited in parsing formalisms other than rewriting systems. The authors have developed an O(nT)-time parsing algorithm for bilexicalized tree adjoining grammars (Schabes, 1992), improving the naive O(n s) method. The results mentioned in §6 are related to the closure property of CFGs under generalized se- quential machine mapping (Hopcroft and Ull- man, 1979). This property also holds for our class of bilexical CFGs. References A. V. Aho and J. D. Ullman. 1972. The Theory of Parsing, Translation and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ. H. Alshawi. 1996. Head automata and bilingual tiling: Translation with minimal representations. In Proc. of ACL, pages 167-176, Santa Cruz, CA. Y. Bar-Hillel. 1953. A quasi-arithmetical notation for syntactic description. Language, 29:47-58. E. Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proc. o] the l~th AAAI, Menlo Park. C. Chelba and F. Jelinek. 1998. Exploiting syntac- tic structure for language modeling. In Proc. of COLING-ACL. N. Chomsky. 1965. Aspects of the Theory o] Syntax. MIT Press, Cambridge, MA. M. Collins and J. Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proe. of the Third Workshop on Very Large Corpora, Cambridge, MA. M. Collins. 1997. Three generative, lexicalised mod- els for statistical parsing. In Proc. of the 35th A CL and 8th European A CL, Madrid, July. J. Eisner. 1996. An empirical comparison of proba- bility models for dependency grammar. Technical Report IRCS-96-11, IRCS, Univ. of Pennsylvania. J. Eisner. 1997. Bilexical grammars and a cubic- time probabilistic parser. In Proceedings of the ~th Int. Workshop on Parsing Technologies, MIT, Cambridge, MA, September. R. C. Gonzales and M. G. Thomason. 1978. Syntac- tic Pattern Recognition. Addison-Wesley, Read- ing, MA. M. A. Harrison. 1978. Introduction to Formal Lan- guage Theory. Addison-Wesley, Reading, MA. J. E. Hopcroft and J. D. Ullman. 1979. Introduc- tion to Automata Theory, Languages and Com- putation. Addison-Wesley, Reading, MA. 463 (a) q q i4 q h q s:6 h h (h < j, q E Qdh) (i <_ h, q E Qdh U {F}, s E (~dh) (h < h', q E Qdh, s' E Qd h,) (h' < h, q • Qdh, s • Qd~, s' • Q. 
dh) is derived iff dh : I z ~ q where Whq_l, j E L~ is derived iff dh : q ( x s where W~,h-1 E Lx is derived iff dh : I xdh~ q and dh, : F ( Y S I where WhTl,h'-i ~ Lzy is derivediffdh, : I =~ s ~ and dh : q ~h,Y s where WhTl,h'--I E ixy (b) START: - - q E Ida MID: -- q s h 'h hA h 8 E Odh FINISH: ATTACH-RIGHT: q F h [~ _ l i ~h', r E 5d~ (q, dh,, --->) r ATTACH-LEFT: s ~ q ' s' E Qdh,, r E 5dh (q, dh,, t--) r s:6 h h F s (e) Accept input w just if l z~'nandn n '~" COMPLETE-RIGHT: q COMPLETE-LEFT: S I h hl~i q F q i h h h q i4 are derived for some h, s such that dh ---- $. q F -- q E Fdh Figure 3: An O(n 3) recognition algorithm for split head automaton grammars. The format is as in Figure 1, except that (c) gives the acceptance condition. The following notation indicates that a head automaton can consume a string x from its left or right input: a : q x) qr means that (q, e, x) ~-a (q', e, c), and a : I x ~ q, means this is true for some q E Ia. Similarly, a : q' ~ x q means that (q, x, e) t-* (q~, c, c), and a : F (x q means this is true for some q~ E Fa. The special symbol F also appears as a literal in some items, and effectively means "an unspecified final state." M. Kay. 1986. Algorithm schemata and data struc- tures in syntactic processing. In K. Sparck Jones B. J. Grosz and B. L. Webber, editors, Natu- ral Language Processing, pages 35-70. Kaufmann, Los Altos, CA. J. Lafferty, D. Sleator, and D. Temperley. 1992. Grammatical trigrams: A probabilistic model of link grammar. In Proc. of the AAAI Conf. on Probabilistic Approaches to Nat. Lang., October. D. Magerman. 1995. Statistical decision-tree mod- els for parsing. In Proceedings of the 33rd A CL. I. Mel'~uk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press. C. Pollard and I. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press. Y. Schabes, A. Abeill@, and A. Joshi. 1988. Parsing strategies with 'lexicalized' grammars: Applica- tion to Tree Adjoining Grammars. In Proceedings of COLING-88, Budapest, August. Yves Schabes. 1992. Stochastic lexicalized tree- adjoining grammars. In Proc. of the l~th COL- ING, pages 426-432, Nantes, France, August. C. S. Wetherell. 1980. Probabilistic languages: A review and some open questions. Computing Sur- veys, 12(4):361-379. D. H. Younger. 1967. Recognition and parsing of context-free languages in time n 3. Information and Control, 10(2):189-208, February. 464
1999
59
Discourse Relations: A Structural and Presuppositional Account Using Lexicalised TAG* Bonnie Webber Univ of Edinburgh [email protected] Alistair Knott Univ of Otago [email protected] Matthew Stone Rutgers Univ [email protected] Aravind Joshi Univ of Pennsylvania joshi @cis.upenn.edu Abstract We show that discourse structure need not bear the full burden of conveying discourse relations by showing that many of them can be explained non- structurally in terms of the grounding of anaphoric presuppositions (Van der Sandt, 1992). This simpli- fies discourse structure, while still allowing the real- isation of a full range of discourse relations. This is achieved using the same semantic machinery used in deriving clause-level semantics. 1 Introduction Research on discourse structure has, by and large, attempted to associate all meaningful relations between propositions with structural connections between discourse clauses (syntactic clauses or structures composed of them). Recognising that this could mean multiple structural connections between clauses, Rhetorical Structure Theory (Mann and Thompson, 1988) simply stipulates that only a single relation may hold. Moore and Pollack (1992) argue that both informational (semantic) and inten- tional relations can hold between clauses simultan- eously and independently. This suggests that factor- ing the two kinds of relations might lead to a pair of structures, each still with no more than a single structural connection between any two clauses. But examples of multiple semantic relations are easy to find (Webber et al., 1999). Having struc- ture account for all of them leads to the complex- ities shown in Figure 1, including the crossing de- pendencies shown in Fig. l c. These structures are no longer trees, making it difficult to define a com- positional semantics. This problem would not arise if one recognised additional, non-structural means of conveying se- mantic relations between propositions and modal * Our thanks to Mark Steedman, Katja Markert, Gann Bierner and three ACL'99 reviewers for all their useful comments. operators on propositions. This is what we do here: Structurally, we assume a "bare bones" dis- course structure built up from more complex elements (LTAG trees) than those used in many other approaches. These structures and the op- erations used in assembling them are the basis for compositional semantics. Stimulated by structural operations, inference based on world knowledge, usage conventions, etc., can then make defeasible contributions to discourse interpretation that elaborate the non- defeasible propositions contributed by com- positional semantics. Non-structurally, we take additional semantic relations and modal operators to be conveyed through anaphoric presuppositions (Van der Sandt, 1992) licensed by information that speaker and hearer are taken to share. A main source of shared knowledge is the interpreta- tion of the on-going discourse. Because the entity that licences (or "discharges") a given presupposition usually has a source within the discourse, the presupposition seems to link the clause containing the presupposition-bearing (p-bearing) element to that source. However, as with pronominal and definite NP anaphora, while attentional constraints on their interpret- ation may be influenced by structure, the links themselves are not structural. The idea of combining compositional semantics with defeasible inference is not new. 
Neither is the idea of taking certain lexical items as anaphorically presupposing an eventuality or a set of eventualities: It is implicit in all work on the anaphoric nature of tense (cf. Partee (1984), Webber (1988), inter alia) and modality (Stone, 1999). What is new is the way we enable anaphoric presupposition to contribute to semantic relations and modal operators, in a way 41 Ci Ci (a) R1 R 2 C I Ci C k Ci C i C k C m (b) (c) Figure 1: Multiple semantic links (R j) between discourse clauses (Ci): (a) back to the same discourse clause; (b) back to different discourse clauses; (c) back to different discourse clauses, with crossing dependencies. that does not lead to the violations of tree structure mentioned earlier.t We discuss these differences in more detail in Section 2, after describing the lexicalised frame- work that facilitates the derivation of discourse se- mantics from structure, inference and anaphoric presuppositions. Sections 3 and 4 then present more detailed semantic analyses of the connectives for ex- ample and otherwise. Finally, in Section 5, we sum- marize our arguments for the approach and suggest a program of future work. 2 Framework In previous papers (Cristea and Webber, 1997; Webber and Joshi, 1998; Webber et al., 1999), we have argued for using the more complex structures (elementary trees) of a Lexicalized Tree-Adjoining Grammar (LTAG) and its operations (adjoining and substitution) to associate structure and semantics with a sequence of discourse clauses. 2 Here we briefly review how it works. In a lexicalized TAG, each elementary tree has at least one anchor. In the case of discourse, the an- chor for an elementary tree may be a lexical item, punctuation or a feature structure that is lexically null. The semantic contribution of a lexical anchor includes both what it presupposes and what it as- serts (Stone and Doran, 1997; Stone, 1998; Stone and Webber, 1998). A feature structure anchor will either unify with a lexical item with compatible fea- tures (Knott and Mellish, 1996), yielding the previ- ous case, or have an empty realisation, though one 1One may still need to admit structures having both a link back and a link forward to different clauses (Gardent, 1997). But a similar situation can occur within the clause, with rel- ative clause dependencies - from the verb back to the relative pronoun and forward to a trace - so the possibility is not unmo- tivated from the perspective of syntax. 2We take this to be only the most basic level of discourse structure, producing what are essentially extended descriptions of situations/events. Discourse may be further structured with respect to speaker intentions, genre-specific presentations, etc. that maintains its semantic features. The initial elementary trees used here corres- pond, by and large, to second-order predicate- argument structures - i.e., usually binary predicates on propositions or eventualities - while the auxil- iary elementary trees provide further information (constraints) added through adjoining. Importantly, we bar crossing structural connec- tions. Thus one diagnostic for taking a predicate argument to be anaphoric rather than structural is whether it can derive from across a structural link. The relation in a subordinate clause is clearly struc- tural: Given two relations, one realisable as "Al- though o¢ [3, the other realisable as "Because y ~5", they cannot together be realised as "Although ~ be- cause y [3 &" with the same meaning as "Although o¢ [3. Because y 8". 
The same is true of certain re- lations whose realisation spans multiple sentences, such as ones realisable as "On the one hand oz. On the other hand 13." and "Not only T- But also &" To- gether, they cannot be realised as "On the one hand o¢. Not only T. On the other hand 13. But also &" with the same meaning as in strict sequence. Thus we take such constructions to be structural as well (Webber and Joshi, 1998; Webber et al., 1999). On the other hand, the p-bearing adverb "then", which asserts that one eventuality starts after the culmination of another, has only one of its argu- ments coming structurally. The other argument is presupposed and thus able to come from across a structural boundary, as in (1) a. On the one hand, John loves Barolo. b. So he ordered three cases of the '97. c. On the other hand, because he's broke, d. he then had to cancel the order. Here, "then" asserts that the "cancelling" event in (d) follows the ordering event in (b). Because the link to (b) crosses the structural link in the parallel construction, we take this argument to come non- 42 structurally through anaphoric presupposition. 3 Now we illustrate briefly how short discourses built from LTAG constituents get their semantics. For more detail, see (Webber and Joshi, 1998; Webber et al., 1999). For more information on com- positional semantic operations on LTAG derivation trees, see (Joshi and Vijay-Shanker, 1999). (2) a. You shouldn't trust John because he never returns what he borrows. b. You shouldn't trust John. He never returns what he borrows. C. You shouldn't trust John because, for ex- ample, he never returns what he borrows. d. You shouldn't trust John. For example, he never retums what he borrows. Here A will stand for the LTAG parse tree for "you shouldn't trust John" and a, its derivation tree, and B will stand for the LTAG parse tree for "he never returns what he borrows" and 13, its derivation tree. The explanation of Example 2a is primarily struc- tural. It involves an initial tree (y) anchored by "be- cause" (Figure 2). Its derived tree comes from A substituting at the left-hand substitution site of y (in- dex 1) and B at the right-hand substitution site (in- dex 3). Semantically, the anchor of y ("because") asserts that the situation associated with the argu- ment indexed 3 (B) is the cause of that associated with the argument indexed 1 (A). The explanation of Example 2b is primarily struc- tural as well. It employs an auxiliary tree (y) anchored by "." (Figure 3). Its derived tree comes from B substituting at the right-hand substitution site (index 3) of ),, and "f adjoining at the root of A (index 0). Semantically, adjoining B to A via y simply implies that B continues the description of the situation associated with A. The general infer- ence that this stimulates leads to a defeasible con- tribution of causality between them, which can be denied without a contradiction - e.g. (3) You shouldn't trust John. He never returns what he borrows. But that's not why you shouldn't trust him. Presupposition comes into play in Example 2c. This example adds to the elements used in Ex- 3The fact that the events deriving from (b) and (d) appear to have the same temporal relation in the absence of "then" just shows that tense is indeed anaphoric and has no trouble crossing structural boundaries either. ample 2a, an auxiliary tree anchored by "for ex- ample" (8), which adjoins at the root of B (Fig- ure 4). 
"For example" contributes both a presup- position and an assertion, as described in more de- tail in Section 3. Informally, "for example" presup- poses a shared set of eventualities, and asserts that the eventuality associated with the clause it adjoins to, is a member of that set. In Example 2c, the set is licensed by "because" as the set of causes/reasons for the situation associated with its first argument. Thus, associated with the derivation of (2c) are the assertions that the situation associated with B is a cause for that associated with A and that the situ- ation associated with B is one of a set of such causes. Finally, Example 2d adds to the elements used in Example 2b, the same auxiliary tree anchored by "for example" (~5). As in Example 2b, the causal- ity relation between the interpretations of B and A comes defeasibly from general inference. Of in- terest then is how the presupposition of "for ex- ample" is licenced - that is, what provides the shared set or generalisation that the interpretation of B is asserted to exemplify. It appears to be li- cenced by the causal relation that has been inferred to hold between the eventualities denoted by B and A, yielding a set of causes/reasons for A. Thus, while we do not yet have a complete char- acterisation of how compositional semantics, de- feasible inference and anaphoric presupposition in- teract, Examples 2c and 2d illustrate one signific- ant feature: Both the interpretive contribution of a structural connective like "because" and the defeas- ible inference stimulated by adjoining can license the anaphoric presupposition of a p-bearing element like "for example". Recently, Asher and Lascarides (1999) have de- scribed a version of Structured Discourse Repres- entation Theory (SDRT) that also incorporates the semantic contributions of both presuppositions and assertions. In this enriched version of SDRT, a pro- position can be linked to the previous discourse via multiple rhetorical relations such as background and defeasible consequence. While there are similarities between their approach and the one presented here, the two differ in significant ways: • Unlike in the current approach, Asher and Las- carides (1999) take all connections (of both as- serted and presupposed material) to be struc- tural attachments through rhetorical relations. The relevant rhetorical relation may be inher- 43 ~/(because) ~ [ ] ~ ~ A because because Figure 2: Derivation of Example 2a. The derivation tree is shown below the arrow, and the derived tree, to its right. 0,," s B Figure 3: Derivation of Example 2b ent in the p-bearing element (as with "too") or it may have to be inferred. • Again unlike the current approach, all such at- tachments (of either asserted or presupposed material) are limited to the right frontier of the evolving SDRT structure. We illustrate these differences through Example 1 (repeated below), with the p-bearing element "then", and Example 5, with the p-bearing ele- ment "too". Both examples call into question the claim that material licensing presuppositions is con- strained to the right frontier of the evolving dis- course structure. (4) a. On the one hand, John loves Barolo. b. So he ordered three cases of the '97. c. On the other hand, because he's broke, d. he then had to cancel the order. (5) (a) I have two brothers. (b) John is a history major. (c) He likes water polo, (d) and he plays the drums. (e) Bill is in high school. (f) His main interest is drama. (g) He too studies his- tory, (h) but he doesn't like it much. 
In Example 1, the presupposition of "then" in (d) is licensed by the eventuality evoked by (b), which would not be on the right frontier of any structural analysis. If "too" is taken to presuppose shared knowledge of a similar eventuality, then the "too" in Example 5(g) finds that eventuality in (b), which is also unlikely to be on the right frontier of any structural analysis. 4 4The proposal in (Asher and Lascarides, 1999) to alter an 44 With respect to the interpretation of "too", Asher and Lascarides take it to presuppose a parallel rhet- orical relation between the current clause and some- thing on the right frontier. From this instantiated rhetorical relation, one then infers that the related eventualities are similar. But if the right frontier constraint is incorrect and the purpose of positing a rhetorical relation like parallel is to produce an assertion of similarity, then one might as well take "too" as directly presupposing shared knowledge of a similar eventuality, as we have done here. Thus, we suggest that the insights presented in (Asher and Lascarides, 1999) have a simpler explanation. Now, before embarking on more detailed ana- lyses of two quite different p-bearing adverbs, we should clarify the scope of the current approach in terms of the range of p-bearing elements that can create non-structural discourse links. We believe that systematic study, perhaps starting with the 350 "cue phrases" given in (Knott, 1996, Appendix A), will show which of them use presup- position in realising discourse relations. It is likely that these might include: • temporal conjunctions and adverbial connect- ives presupposing an eventuality that stands in a particular temporal relation to the one cur- rently in hand, such as "then", "later", "mean- while", "afterwards", "beforehand"'; • adverbial connectives presupposing shared knowledge of a generalisation or set, such existing SDRT analysis in response to a p-bearing element, would seem superfluous if its only role is to re-structure the right frontier to support the claimed RF constraint. T B A ~, [] ~, because for example . y (because) o/I 3 B % % Figure 4: Derivation of Example 2c T for example . C~ OJ o,,I 3 I 5 for example B Figure 5: Derivation of Example 2d as "for example", "first...second...", "for in- stance"; • adverbial connectives presupposing shared knowledge of an abstraction, such as "more specifically", "in particular"; • adverbial connectives presupposing a comple- mentary modal context, such as "otherwise"; • adverbial connectives presupposing an altern- ative to the current eventuality, such as "in- stead" and "rather". 5 For this study, one might be able to use the structure-crossing test given in Section 2 to distin- guish a relation whose arguments are both given structurally from a relation which has one of its arguments presupposed. (Since such a test won't distinguish p-bearing connectives such as "mean- while" from non-relational adverbials such as "at dawn" and "tonight", the latter will have to be ex- cluded by other means, such as the (pre-theoretical) test for relational phrases given in (Knott, 1996).) • 3 For example We take "For example, P" to presuppose a quanti- fied proposition G, and to assert that this proposition is a generalisation of the proposition rc expressed by the sentence P. (We will write generalisation(rt, G.) 
A precise definition of generalisation is not neces- sary for the purposes of this paper, and we will as- sume the following simple definition: 5Gann Bierner, personal communication • generalisation(rc, G) iff (i) G is a quantified proposition of the form Q I (x, a(x), b(x)); (ii) it allows the inference of a proposition G r of the form Q2 (x, a(x), b(x) ); and (iii) G' is inferrable from G (through having a weaker quantifier). The presupposed proposition G can be licensed in different ways, as the following examples show: (6) a. John likes many kinds of wine. For ex- ample, he likes Chianti. b. John must be feeling sick, because, for ex- ample, he hardly ate a thing at lunch. c. Because John was feeling sick, he did not for example go to work. d. Why don't we go to the National Gallery. Then, for example, we can go to the White House. Example 6a is straightforward, in that the pre- supposed generalisation "John likes many kinds of wine" is presented explicitly in the text. 6 In the re- maining cases, the generalisation must be inferred. In Example 6b, "because" licenses the generalisa- tion that many propositions support the proposi- 6Our definition of generalisation works as follows for this example: the proposition n introduced by "for ex- ample" is likes(john, chianti), the presupposed proposition G is many(x, wine(x),likes(john,x), and the weakened pro- position G I is some(x, wine(x),likes(john,x). ~ allows G I to be inferred, and G also allows G ~ to be inferred, hence generalisation(rc, G) is true. 45 tion that John must be feeling sick, while in Ex- ample 6c, it licences the generalisation that many propositions follow from his feeling sick. We can represent both generalisations using the meta-level predicate, evidence(rt, C), which holds iff a premise rc is evidence for a conclusion C. In Example 6d, the relevant generalisation in- volves possible worlds associated jointly with the modality of the first clause and "then" (Webber et al., 1999). For consistency, the semantic interpreta- tion of the clause introduced by "for example" must make reference to the same modal base identified by the generalisation. There is more on modal bases in the next section. 4 Otherwise Our analysis of "otherwise" assumes a modal se- mantics broadly following Kratzer (1991 ) and Stone (1999), where a sentence is asserted with respect to a set of possible worlds. The semantics of "other- wise ct" appeals to two sets of possible worlds. One is W0, the set of possible worlds consistent with our knowledge of the real world. The other, Wp, is that set of possible worlds consistent with the condition C that is presupposed, t~ is then asserted with re- spect to the complement set Wo - Wp. Of interest then is C - what it is that can serve as the source licensing this presupposition. 7 There are many sources for such a presupposi- tion, including if-then constructions (Example 7a- 7b), modal expressions (Examples 7c- 7d) and in- finitival clauses (Example 7e) (7) a. If the light is red, stop. Otherwise, go straight on. b. If the light is red, stop. Otherwise, you might get run over c. Bob [could, may, might] be in the kitchen. Otherwise, try in the living room. d. You [must, should] take a coat with you. Otherwise you'll get cold. e. It's useful to have a fall-back position. Oth- erwise you're stuck. 7There is another sense of "otherwise" corresponding to "in other respects", which appears either as an adjective phrase modifier (e.g. 
"He's an otherwise happy boy.") or a clausal modifier (e.g., "The physical layer is different, but otherwise it's identical to metropolitan networks."). What is presupposed here are one or more actual properties of the situation under discussion. each of which introduces new possibilities that are consistent with our knowledge of the real world (W0), that may then be further described through modal subordination (Roberts, 1989; Stone and Hardt, 1999). That such possibilities must be consistent with Wo (i.e., why the semantics of "otherwise" is not simply defined in terms of W r) can be seen by con- sidering the counterfactual variants of 7a-7d, with "had been", "could have been" or "should have taken". (Epistemic "must" can never be counterfac- tual.) Because counterfactuals provide an alternat- ive to reality, W e is not a subset of W0 - and we correctly predict a presupposition failure for "other- wise". For example, corresponding to 7a we have: (8) If the light had been red, John would have stopped. #Otherwise, he went straight on. The appropriate connective here - allowing for what actually happened - is "as it is" or "as it was". 8 As with "for example", "otherwise" is compat- ible with a range of additional relations linking dis- course together as a product of discourse structure and defeasible inference. Here, the clauses in 7a and 7c provide a more complete description of what to do in different circumstances, while those in 7b, 7d and 7e involve an unmarked "because", as did Ex- ample 2d. Specifically, in 7d, the "otherwise" clause asserts that the hearer is cold across all currently possible worlds where a coat is not taken. With the proposition understood that the hearer must not get cold (i.e., that only worlds where the hearer is not cold are compatible with what is required), this allows the inference (modus tollens) that only the worlds where the hearer takes a coat are compat- ible with what is required. As this is the proposi- tion presented explicitly in the first clause, the text is compatible with an inferential connective like "be- cause". (Similar examples occur with "epistemic" because.) Our theory correctly predicts that such discourse relations need not be left implicit, but can instead be explicitly signalled by additional connectives, as in 8There is a reading of the conditional which is not coun- terfactual, but rather a piece of free indirect speech report- ing on John's train of thought prior to encountering the light. This reading allows the use of "otherwise" with John's thought providing the base set of worlds W0, and "otherwise" then in- troducing a complementary condition in that same context: If the light had been red, John would have stopped. Otherwise, he would have carded straight on. But as it turned out, he never got to the light. 46 (9) You should take a coat with you because oth- erwise you'll get cold. and earlier examples. (Note that "Otherwise P" may yield an im- plicature, as well as having a presupposition, as in (10) John must be in his room. Otherwise, his light would be off. Here, compositional semantics says that the second clause continues the description of the situation par- tially described by the first clause. General infer- ence enriches this with the stronger, but defeasible conclusion that the second clause provides evidence for the first. Based on the presupposition of "oth- erwise", the "otherwise" clause asserts that John's light would be off across all possible worlds where he was not in his room. 
In addition, however, implicature related to the evidence relation between the clauses contributes the conclusion that the light in John's room is on. The point here is only that presupposition and implicature are distinct mechanisms, and it is only presupposition that we are focussing on in this work.)

5 Conclusion

In this paper, we have shown that discourse structure need not bear the full burden of discourse semantics: part of it can be borne by other means. This keeps discourse structure simple and able to support a straightforward compositional semantics. Specifically, we have argued that the notion of anaphoric presupposition that was introduced by van der Sandt (1992) to explain the interpretation of various definite noun phrases could also be seen as underlying the semantics of various discourse connectives. Since these presuppositions are licensed by eventualities taken to be shared knowledge, a good source of which is the interpretation of the discourse so far, anaphoric presupposition can be seen as carrying some of the burden of discourse connectivity and discourse semantics in a way that avoids crossing dependencies.

There is, potentially, another benefit to factoring the sources of discourse semantics in this way: while, cross-linguistically, inference and anaphoric presupposition are likely to behave similarly, structure (as in syntax) is likely to be more language specific. Thus a factored approach has a better chance of providing a cross-linguistic account of discourse than one that relies on a single premise.

Clearly, more remains to be done. First, the approach demands a precise semantics for connectives, as in the work of Grote (1998), Grote et al. (1997), Jayez and Rossari (1998) and Lagerwerf (1998).

Secondly, the approach demands an understanding of the attentional characteristics of presuppositions. In particular, preliminary study seems to suggest that p-bearing elements differ in what source can license them, where this source can be located, and what can act as distractors for this source. In fact, these differences seem to resemble the range of differences in the information status (Prince, 1981; Prince, 1992) or familiarity (Gundel et al., 1993) of referential NPs. Consider, for example:

(11) I got in my old Volvo and set off to drive cross-country and see as many different mountain ranges as possible. When I got to Arkansas, for example, I stopped in the Ozarks, although I had to borrow another car to see them because Volvos handle badly on steep grades.

Here, the definite NP-like presupposition of the "when" clause (that getting to Arkansas is shared knowledge) is licensed by driving cross-country; the presupposition of "for example" (that stopping in the Ozarks exemplifies some shared generalisation) is licensed by seeing many mountain ranges, and the presupposition of "another" (that an alternative car to this one is shared knowledge) is licensed by my Volvo. This suggests a corpus annotation effort for anaphoric presuppositions, similar to ones already in progress on co-reference.

Finally, we should show that the approach has practical benefit for NL understanding and/or generation. But the work to date surely shows the benefit of an approach that narrows the gap between discourse syntax and semantics and that of the clause.

References

Nicholas Asher and Alex Lascarides. 1999. The semantics and pragmatics of presupposition. Journal of Semantics, to appear.

Dan Cristea and Bonnie Webber. 1997.
Expectations in incremental discourse processing. In Proc. 35th Annual Meeting of the Association for Computational Linguistics, pages 88-95, Madrid, Spain. Morgan Kaufmann.

Claire Gardent. 1997. Discourse tree adjoining grammars. CLAUS report nr. 89, University of the Saarland, Saarbrücken.

Brigitte Grote, Nils Lenke, and Manfred Stede. 1997. Ma(r)king concessions in English and German. Discourse Processes, 24(1):87-118.

Brigitte Grote. 1998. Representing temporal discourse markers for generation purposes. In Coling/ACL Workshop on Discourse Relations and Discourse Markers, pages 22-28, Montreal, Canada.

Jeanette Gundel, N.A. Hedberg, and R. Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language, 69:274-307.

Jacques Jayez and Corinne Rossari. 1998. Pragmatic connectives as predicates. In Patrick Saint-Dizier, editor, Predicative Structures in Natural Language and Lexical Knowledge Bases, pages 306-340. Kluwer Academic Press, Dordrecht.

Aravind Joshi and K. Vijay-Shanker. 1999. Compositional semantics with lexicalized tree-adjoining grammar (LTAG). In Proc. 3rd Int'l Workshop on Computational Semantics, Tilburg, Netherlands, January.

Alistair Knott and Chris Mellish. 1996. A feature-based account of the relations signalled by sentence and clause connectives. Language and Speech, 39(2-3):143-183.

Alistair Knott. 1996. A Data-driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, Department of Artificial Intelligence, University of Edinburgh.

Angelika Kratzer. 1991. Modality. In A. von Stechow and D. Wunderlich, editors, Semantics: An International Handbook of Contemporary Research, pages 639-650. de Gruyter.

Luuk Lagerwerf. 1998. Causal Connectives have Presuppositions. Holland Academic Graphics, The Hague, The Netherlands. Ph.D. thesis, Catholic University of Brabant.

William Mann and Sandra Thompson. 1988. Rhetorical structure theory. Text, 8(3):243-281.

Johanna Moore and Martha Pollack. 1992. A problem for RST: The need for multi-level discourse analysis. Computational Linguistics, 18(4):537-544.

Barbara Partee. 1984. Nominal and temporal anaphora. Linguistics and Philosophy, 7(3):287-324.

Ellen Prince. 1981. Toward a taxonomy of given-new information. In Peter Cole, editor, Radical Pragmatics, pages 223-255. Academic Press.

Ellen Prince. 1992. The ZPG letter: Subjects, definiteness and information-status. In Susan Thompson and William Mann, editors, Discourse Description: Diverse Analyses of a Fundraising Text, pages 295-325. John Benjamins.

Craige Roberts. 1989. Modal subordination and pronominal anaphora in discourse. Linguistics and Philosophy, 12(6):683-721.

Matthew Stone and Christine Doran. 1997. Sentence planning as description using tree adjoining grammar. In Proc. 35th Annual Meeting of the Association for Computational Linguistics, pages 198-205, Madrid, Spain. Morgan Kaufmann.

Matthew Stone and Daniel Hardt. 1999. Dynamic discourse referents for tense and modals. In Harry Bunt, editor, Computational Semantics, pages 287-299. Kluwer.

Matthew Stone and Bonnie Webber. 1998. Textual economy through closely coupled syntax and semantics. In Proceedings of the Ninth International Workshop on Natural Language Generation, pages 178-187, Niagara-on-the-Lake, Canada.

Matthew Stone. 1998. Modality in Dialogue: Planning, Pragmatics and Computation. Ph.D. thesis, Department of Computer & Information Science, University of Pennsylvania.

Matthew Stone. 1999.
Reference to possible worlds. RuCCS Report 49, Center for Cognitive Science, Rutgers University.

Rob van der Sandt. 1992. Presupposition projection as anaphora resolution. Journal of Semantics, 9:333-377.

Bonnie Webber and Aravind Joshi. 1998. Anchoring a lexicalized tree-adjoining grammar for discourse. In Coling/ACL Workshop on Discourse Relations and Discourse Markers, pages 86-92, Montreal, Canada.

Bonnie Webber, Alistair Knott, and Aravind Joshi. 1999. Multiple discourse connectives in a lexicalized grammar for discourse. In 3rd Int'l Workshop on Computational Semantics, pages 309-325, Tilburg, The Netherlands.

Bonnie Webber. 1988. Tense as discourse anaphor. Computational Linguistics, 14(2):61-73.
An Earley-style Predictive Chart Parsing Method for Lambek Grammars

Mark Hepple
Department of Computer Science, University of Sheffield, Regent Court, 211 Portobello Street, Sheffield S1 4DP, UK
[hepple@dcs.shef.ac.uk]

Abstract

We present a new chart parsing method for Lambek grammars, inspired by a method for D-Tree grammar parsing. The formulae of a Lambek sequent are firstly converted into rules of an indexed grammar formalism, which are used in an Earley-style predictive chart algorithm. The method is non-polynomial, but performs well for practical purposes, much better than previous chart methods for Lambek grammars.

1 Introduction

We present a new chart parsing method for Lambek grammars. The starting point for this work is the observation, in (Hepple, 1998), of certain similarities between categorial grammars and the D-Tree grammar (DTG) formalism of Rambow et al. (1995a). On this basis, we have explored adapting the DTG parsing approach of Rambow et al. (1995b) for use with the Lambek calculus. The resulting method is one in which the formulae of a Lambek sequent that is to be proven are first converted to produce rules of a formalism which combines ideas from the multiset-valued linear indexed grammar formalism of Rambow (1994), with the Lambek calculus span labelling scheme of Morrill (1995), and with the first-order compilation method for categorial parsing of Hepple (1996). The resulting 'grammar' is then parsed using an Earley-style predictive chart algorithm which is adapted from Rambow et al. (1995b).

2 The Lambek Calculus

We are concerned with the implicational (or 'product-free') fragment of the associative Lambek calculus (Lambek, 1958). A natural deduction formulation is provided by the following rules of elimination and introduction, which correspond to steps of functional application and abstraction, respectively (as the term labelling reveals). The rules are sensitive to the order of assumptions. In the [/I] (resp. [\I]) rule, [B] indicates a discharged or withdrawn assumption, which is required to be the rightmost (resp. leftmost) of the proof.

/E: from A/B : a and B : b (in that order), infer A : (ab)
\E: from B : b and B\A : a (in that order), infer A : (ab)
/I: from a proof of A : a whose rightmost assumption is [B : v], infer A/B : λv.a, discharging that assumption
\I: from a proof of A : a whose leftmost assumption is [B : v], infer B\A : λv.a, discharging that assumption

For example, given which: rel/(s/np), mary: np and ate: (np\s)/np, plus a hypothetical [np]: ate and [np] combine by /E to give np\s; mary and np\s combine by \E to give s; discharging [np] by /I gives s/np, which combines with the category of which by /E to give rel.

The above proof illustrates 'hypothetical reasoning', i.e. the presence of additional assumptions ('hypotheticals') in proofs that are subsequently discharged. It is because of this phenomenon that standard chart methods are inadequate for the Lambek calculus: hypotheticals don't belong at any position on the single ordering over lexical categories by which standard charts are organised. [Footnote 1: In effect, hypotheticals belong on additional suborderings, which can connect into the main ordering of the chart at various positions, generating a branching, multi-dimensional ordering scheme.] The previous chart methods for the Lambek calculus deal with this problem in different ways. The method of König (1990, 1994) places hypotheticals on separate 'minicharts' which can attach into other (mini)charts where combinations are possible. The method requires rather complicated book-keeping. The method of Hepple (1992) avoids this complicated book-keeping, and also rules out some useless subderivations allowed by König's method, but does so at the cost of computing a representation of all the possible category sequences that might be tested in an exhaustive sequent proof search.
Neither of these methods exhibits performance that would be satisfactory for practical use. [Footnote 2: Morrill (1996) provides a somewhat different tabular method for Lambek parsing within the proof net deduction framework, in an approach where proof net checking is made by unifying labels marked on literals. The approach tabulates MGUs for the labels of contiguous subsegments of a proof net.]

3 Some Preliminaries

3.1 First-order Compilation for Categorial Parsing

Hepple (1996) introduces a method of first-order compilation for implicational linear logic, to provide a basis for efficient theorem proving of various categorial formalisms. Implicational linear logic is similar to the Lambek calculus, except having only a single non-directional implication -o. The idea of first-order compilation is to eliminate the need for hypothetical reasoning by simplifying higher-order formulae (whose presence requires hypothetical reasoning) to first-order formulae. This involves excising the subformulae that correspond to hypotheticals, leaving a first-order residue. The excised subformulae are added as additional assumptions. For example, a higher-order formula (Z -o Y) -o X simplifies to Z + (Y -o X), allowing proof (a) to be replaced by (b): in (a), the hypothetical [Z] combines with Z -o W and W -o Y to derive Y, [Z] is discharged to give Z -o Y, and (Z -o Y) -o X then yields X; in (b), the additional assumption Z combines with Z -o W and W -o Y to derive Y directly, and Y -o X then yields X, with no hypothetical reasoning.

The method faces two key problems: avoiding invalid deduction and getting an appropriate semantics for the combination. To avoid invalid deduction, an indexing scheme is used to ensure that a hypothetical must be used to derive the argument of the residue functor from which it was excised (e.g. Z must be used to derive the argument Y of Y -o X), a condition satisfied in proof (b). To get the same semantics with compilation as without, the semantic effects of the introduction rule are compiled into the terms of the formulae produced, e.g. (Z -o Y) -o X : w gives Z : z plus Y -o X : λu.w(λz.u). Terms are combined, not using standard application/β-reduction, but rather an operation λx.g + h ⇒ g[h//x], where a variant of substitution is used that allows 'accidental' variable capture. Thus when Y -o X combines with its argument, whose derivation includes Z, the latter's variable becomes bound, e.g. λu.w(λz.u) + x(yz) ⇒ w(λz.x(yz)).

3.2 Multiset-valued Linear Indexed Grammar

Rambow (1994) introduces the multiset-valued linear indexed grammar formalism ({}-LIG). Indices are stored in an unordered multiset representation (c.f. the stack of conventional linear indexed grammar). The contents of the multiset at any mother node in a tree is distributed amongst its daughter nodes in a linear fashion, i.e. each index is passed to precisely one daughter. Rules take the form A0[m0] → A1[m1] ... An[mn]. The multiset of indices m0 is required to be present in, and is removed from, the multiset context of the mother node in a tree. For each daughter Ai, the indices mi are added into whatever other indices are inherited to that daughter. Thus, a rule A[] → B[1] C[] (where [] indicates an empty multiset) can license the use of a rule D[1] → a within the derivation of its daughter B[1], and so the indexing system allows the encoding of dominance relations.

4 A New Chart Parsing Method for Lambek Grammars

4.1 Lambek to SLMG Conversion

The first task of the parsing approach is to convert the antecedent formulae of the sequent to be proved into a collection of rules of a formalism I call Span Labelled Multiset Grammar (SLMG).
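To make the multiset discipline of {}-LIG concrete before turning to the conversion itself, here is a small sketch (our own illustration with invented names; the paper contains no code) of the two bookkeeping steps: checking that a rule's m0 is available at the mother, and distributing the leftover indices linearly among the daughters.

```python
from collections import Counter
from itertools import product

def rule_applies(mother_ctx, m0):
    """A rule A0[m0] -> A1[m1]...An[mn] can rewrite a node only if m0 is
    contained in the node's multiset context (m0 is then removed)."""
    return not (Counter(m0) - Counter(mother_ctx))

def daughter_contexts(mother_ctx, m0, daughter_ms):
    """Enumerate the linear distributions of the leftover indices: each
    one goes to exactly one daughter, on top of the indices mi that the
    rule itself contributes to that daughter."""
    leftover = list((Counter(mother_ctx) - Counter(m0)).elements())
    n = len(daughter_ms)
    for assignment in product(range(n), repeat=len(leftover)):
        contexts = [Counter(mi) for mi in daughter_ms]
        for index, d in zip(leftover, assignment):
            contexts[d][index] += 1
        yield contexts

# The rule A[] -> B[1] C[] at a node with an inherited index 2:
# index 1 always goes to B, while index 2 may go to either daughter.
for ctxs in daughter_contexts([2], [], [[1], []]):
    print([sorted(c.elements()) for c in ctxs])
```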
For digestibility, I will present the conversion process in three stages. (I will assume that in any sequent Γ ⇒ A to be proved, the succedent A is atomic. Any sequent not in this form is easily converted to one, of equivalent theoremhood, which is.)

Method:
  (A:(i-j))^p = A:(i-j), where A atomic
  (A/B:(h-i))^p = (A:(h-j))^p / (B:(i-j))^p̄
  (B\A:(h-i))^p = (B:(j-h))^p̄ \ (A:(j-i))^p
  where j is a new variable/constant as p is +/−

Example:
  (X/(Y/Z):(0-1))^+ = X:(0-h)/(Y:(1-k)/Z:(h-k))
  (W:(1-2))^+ = W:(1-2)
  ((W\Y)/Z:(2-3))^+ = (W:(i-2)\Y:(i-j))/Z:(3-j)

Figure 1: Phase 1 of conversion (span labelling)

Firstly, directional types are labelled with span information using the labelling scheme of Morrill (1995) (which is justified in relation to relational algebraic models for the Lambek calculus (van Benthem, 1991)). An antecedent Xi in X1 ... Xn ⇒ X0 has basic span (h-i) where h = (i − 1). The labelled formula is computed from (Xi:(h-i))^+ using the polar translation functions shown in Figure 1 (where p̄ denotes the complementary polarity to p). [Footnote 3: The constants produced in the translation correspond to 'new' string positions, which make up the additional suborderings on which hypotheticals are located. The variables produced in the translation become instantiated to some string constant during an analysis, fixing the position at which an additional subordering becomes 'attached to' another (sub)ordering.] As an example, Figure 1 also shows the results of converting the antecedents of X/(Y/Z), W, (W\Y)/Z ⇒ X (where k is a constant and i, j variables). [Footnote 4: The idea of implementing categorial grammar as a non-directional logic, but associating atomic types with string position pairs (i.e. spans) to handle word order, is used in Pareschi (1988), although in that approach all string positions instantiate to values on a single ordering (i.e. integers 0 − n for a string of length n), which is not sufficient for Lambek calculus deductions.]

The second stage of the conversion is adapted from the first-order compilation method of Hepple (1996), discussed earlier, modified to handle directional formulae and using a modified indexation scheme to record dependencies between residue formulae and excised hypotheticals (one where both the residue and the hypothetical record the dependency). For this procedure, the 'atomic type plus span label' units that result from the previous stage are treated as atomic units. The procedure τ is defined by the cases shown in Figure 2 (although the method is perhaps best understood from the example also shown there). Its input is a pair (T, t), T a span labelled formula, t its associated term. [Footnote 5: Note that the "+" of (A + Γ) in (τ0) simply pairs together the single compiled formula A with the set Γ of compiled formulae, where A is the main residue of the input formula and Γ its derived hypotheticals.]
For a functor such as B\(((A\X)/D)/C), we can easily pro- ject the sequence of arguments it requires: 5Note that the "+" of (A + F) in (TO) simply pairs together the single compiled formula A with the set F of compiled formulae, where A is the main residue of the input formula and F its derived hypotheticals. 467 Method: (Tla) Q-lb) (~-2a) (v2b) (v3a) T((T,t))=AUF where T((O,T,t))=A+F T((m,X/Y,t)) = T((m,X/(Y:O),t)) where Y has no index set as for (Tla) modulo directionality of connective T((m, Xa/(Y:ml), t)) = (m, X2/(Y:ml), Av.s) + F where Y atomic, T((m, X1, (tv))) = (re, X2, s) + F, v a fresh variable as for (T2a) modulo directionality of connective v((m,X/((Y/Z):rni),t)) = A + (B U F U A) where w, v fresh variables, i a fresh multiset index, m2 = i U rnl v((m, X/(Y:m2), Aw.t(Av.w))) = A + F, T((i, Z, v)) = B + A (~'3b)-(T3d) as for (T3a) modulo directionality of,connectives Example: T((X:(O-h)/(Y:(1-k)/Z:(h-k)), si)) = T((W:(1--2),s2)) = ~(((W:(i-2)\Y:(i-j))/Z:(3-j), s3)) = (0, X:(O,h)/(Y:(1-k):{1}), Au.sl(Az.u)) ({1},Z:(h-k)),z) } (q}, W:(1-2), s2) (~, ( (W:( i-2):O) \ Y:( i-j) ) / ( Z:( 3-j):O), AvAw.( sa v w) ) Figure 2: Phase 2 of conversion (first-order compilation) A,B,B\(((A\X)/D)/C),C,D =~ X. If the functor was the lexical category of a word w, it might be viewed as fulfilling a role akin to a PS rule such as X --+ A B w C D. For the present approach, with explicit span labelling, there is no need to include a rhs element to mark the position of the functor (or word) itself, so the corresponding production would be more akin to X -+ A B C D. For an atomic formula, the corresponding production will have an empty rhs, e.g. A --4 0 .6 The left and right hand side units of SLMG productions all take the form Aim] (i-j), where A is an atomic type, m is a set of indices (if m is empty, the unit may be written A[](i-j)), 6Note that 0 is used rather than e to avoid the sug- gestion of the empty string, which it is not -- matters to do with the 'string' are handled solely within the span labelling. This point is reinforced by observing that the 'string language' generated by a collection SLMG pro- ductions will consist only of (nonempty) sequences of 0's. The real import of a SLMG derivation is not its ter- minal Yield, but rather the instantiation of span labels that it induces (for string matters), and its structure (for semantic matters). and (i-j) a span label. For a formula (m, T, t) resulting after first-order compilation, the rhs elements of the corresponding production cor- respond to the arguments (if any) of T, whereas its lhs combines the result type (plus span) of T with the multiset m. For our running ex- ample X/(Y/Z), W, (W\Y)/Z =~ X, the formu- lae resulting from the second phase (by first- order compilation) give rise to productions as shown in Figure 3. The associated semantic term for each rule is intended to be applied to the semantics if its daughters in their left-to- right order (which may require some reordering of the outermost lambdas c.f. the terms of the first-order formulae, e.g. as for the last rule). A sequent X1...Xn =~ Xo is proven if we can build a SLMG tree with root X0[](0-n) in which the SLMG rules derived from the ante- cedents are each used precisely once, and which induces a consistent binding over span variables. For our running example, the required deriva- tion, shown below, yields the correct interpret- ation Sl(AZ.S3 z s2). Note that 'linear resource use', i.e. 
that each rule must be used precisely once, is enforced by the span labelling scheme and does not need to be separately stipulated. Thus, the span (0-n) is marked on the root of the derivation. To bridge this span, the main residues of the antecedent formulae must all participate (since each 'consumes' a basic subspan of the main span) and they in turn require participation of their hypotheticals via the indexing scheme.

Example:
  (∅, X:(0-h)/(Y:(1-k):{1}), λu.s1(λz.u))  gives  X[](0-h) → Y[1](1-k) : λu.s1(λz.u)
  ({1}, Z:(h-k), z)  gives  Z[1](h-k) → 0 : z
  (∅, W:(1-2), s2)  gives  W[](1-2) → 0 : s2
  (∅, ((W:(i-2):∅)\Y:(i-j))/(Z:(3-j):∅), λvλw.(s3 v w))  gives  Y[](i-j) → W[](i-2) Z[](3-j) : λwλv.(s3 v w)

Figure 3: Phase 3 of conversion (converting to SLMG productions)

The required derivation for the running example is:

  X[](0-3)
    Y[1](1-k)
      W[](1-2)
        0
      Z[1](3-k)
        0

4.2 The Earley-style Parsing Method

The chart parsing method to be presented is derived from the Earley-style DTG parsing method of Rambow et al. (1995), and in some sense both simplifies and complicates their method. In effect, we abstract from their method a simpler one for Earley-style parsing of {}-LIG (which is a simpler formalism than the Linear Prioritized Multiset Grammar (LPMG) into which they compile DTG), and then extend this method to handle the span labelling of SLMG. A key difference of the new approach as compared to standard chart methods is that the usual external notion of span is dispensed with, and the combination of edges is instead regimented in terms of the explicit span labelling of categories in rules. The unification of span labels requires edges to carry explicit binding information for span variables. We use R to denote the set of rules derived from the sequent, and E the set of edges in the chart. The general form of edges is ⟨(m1, m2), θ, r, (A → Γ • Δ)⟩ where (A → Γ Δ) ∈ R, θ is a substitution over span variables, r is a restrictor set identifying span variables whose values are required non-locally (explained below), and m1, m2 are multisets.

In a {}-LIG or SLMG tree, there is no restriction on how the multiset indices associated with any non-terminal node can be distributed amongst its daughters. Rather than cashing out the possible distributions as alternative edges in the predictor step, we can instead, in effect, 'thread' the multiset through the daughters, i.e. passing the entire multiset down to the first daughter, and passing any indices that are not used there on to the next daughter, and so on. For an edge ⟨(m1, m2), θ, r, (A → Γ • Δ)⟩, m1 corresponds to the multiset context at the time the ancestor edge with dotted rule (A → • Γ Δ) was introduced, and m2 is the current multiset for passing on to the daughters in Δ. We call m1 the initial multiset and m2 the current multiset.

The chart method employs the rules shown in Figure 4. We shall consider each in turn.

Initialisation: The rule recorded on the edge in this chart rule is not a real one (i.e. ∉ R), but serves to drive the parsing process via the prediction of edges for rules that can derive X0[](0-n). A successful proof of the sequent is shown if the completed chart contains an inactive edge for the special goal category, i.e. there is some edge ⟨(∅, ∅), ∅, ∅, (GOAL[](*-*) → Δ •)⟩ ∈ E.

Prediction: The current multiset of the predicting edge is passed onto the new edge as its initial multiset. The latter's current multiset (m6) may differ from its initial one due either to the removal of an index to license the new rule's use (i.e. if
m5 is non-empty), or to the addition of indices from the predicting edge's next rhs unit (i.e. if m4 is non-empty).

Initialisation:
  if the initial sequent is X1 ... Xn ⇒ X0
  then ⟨(∅, ∅), ∅, ∅, (GOAL[](*-*) → • X0[](0-n))⟩ ∈ E

Prediction:
  if ⟨(m1, m2), θ1, r1, (A[m3](e-f) → Γ • B[m4](g-h), Δ)⟩ ∈ E and (B[m5](i-j) → Δ′) ∈ R
  then ⟨(m2, m6), θ2, r2, (B[m5](g-(hθ)) → • (Δ′θ))⟩ ∈ E
  where θ = θ1 + MGU((g-h), (i-j)); m5 ⊆ m2 ∪ m4; m6 = (m2 ∪ m4) − m5;
        r2 = nlv(m2 ∪ m4); θ2 = θ/(r2 ∪ dauglnlv(Δ′))

Completer:
  if ⟨(m1, m2), θ1, r1, (A[m3](f-g) → Γ • B[m4](i-h), Δ)⟩ ∈ E and ⟨(m2, m5), θ2, r2, (B[m6](i-j) → Δ′ •)⟩ ∈ E
  then ⟨(m1, m5), θ3, r1, (A[m3](f-(gθ)) → Γ, B[m4](i-j) • (Δθ))⟩ ∈ E
  where θ = θ1 + θ2 + MGU(h, j); m5 ⊆ m2; m6 ⊆ m2 ∪ m4; θ3 = θ/(r1 ∪ dauglnlv(Δ))

Figure 4: Chart rules

(Note the 'sloppy' use of set, rather than explicitly multiset, notation. The present approach is such that the same index should never appear in both of two unioned sets, so there is in practice little difference.) The line θ = θ1 + MGU((g-h), (i-j)) checks that the corresponding span labels unify, and that the resulting MGU can consistently augment the binding context of the predicting edge. This augmented binding is used to instantiate span variables in the new edge where possible. It is a characteristic of this parsing method, with top-down left-to-right traversal and associated propagation of span information, that the left span index of the next daughter sought by any active edge is guaranteed to be instantiated, i.e. g above is a constant.

Commonly the variables appearing in SLMG rules have only local significance and so their substitutions do not need to be carried around with edges. For example, an active edge might require two daughters B[](g-h) C[](h-i). A substitution for h that comes from combining with an inactive edge for B[](g-h) can be immediately applied to the next daughter C[](h-i), and so does not need to be carried explicitly in the binding of the resulting edge. However, a situation where two occurrences of a variable appear in different rules may arise as a result of first-order compilation, which will sometimes (but not always) separate a variable occurrence in the hypothetical from another in the residue. For the rule set of our running example, we find an occurrence of h in both the first and second rule (corresponding to the main residue and hypothetical of the initial higher-order functor). The link between the two rules is also indicated by the indexing system. It turns out that for each index there is at most one variable that may appear in the two rules linked by the index. The identity of the 'non-local variables' that associate with each index can be straightforwardly computed off the SLMG grammar (or during the conversion process). The function nlv returns the set of non-local variables that associate with a multiset of indices. The line r2 = nlv(m2 ∪ m4) computes the set of variables whose values may need to be passed non-locally, i.e. from the predicting edge down to the predicted edge, or from an inactive edge that results from combination of this predicted edge up to the active edge that consumes it. This 'restrictor set' is used in reducing the substitution θ to cover only those variables whose values need to be stored with the edge. The only case where a substitution needs to be retained for a variable that is not in the restrictor set arises regarding the next daughter it seeks. For example, an active edge might require two daughters B[](g-h) C[1](k-i), where the second's index links it to a hypothetical with span (k-h).
Here, a substitution for h from a combination for the first daughter cannot be immediately applied and so should be retained until a combination is made for the second daughter. The function call dauglnlv(Δ) returns the set of non-local variables associated with the multiset indices of the next daughter in Δ (or the empty set if Δ is empty). There may be at most one variable in this set that appears in the substitution θ. The line θ2 = θ/(r2 ∪ dauglnlv(Δ)) reduces the substitution to cover only the variables whose values need to be stored. Failing to restrict the substitution in this way undermines the compaction of derivations by the chart, i.e. so that we find edges in the chart corresponding to the same subderivation, but which are not recognised as such during parsing due to them recording incompatible substitutions.

Completer: Recall from the prediction step that the predicted edge's current multiset may differ from its initial multiset due to the addition of indices from the predicting edge's next rhs unit (i.e. m4 in the prediction rule). Any such added indices must be 'used up' within the subderivation of that rhs element which is realised by the combinations of the predicted edge. This requirement is checked by the condition m5 ⊆ m2. The treatment of substitutions here is very much as for the prediction rule, except that both input edges contribute their own substitution. Note that for the inactive edge (as for all inactive edges), both components of the span (i-j) will be instantiated, so we need only unify the right index of the two spans; the left indices can simply be checked for atomic identity. This observation is important to efficient implementation of the algorithm, for which most effort is in practice expended on the completer step. Active edges should be indexed (i.e. hashed) with respect to the (atomic) type and left span index of the next rhs element sought. For inactive edges, the type and left span index of the lhs element should be used. For the completer step when an active edge is added, we need only access inactive edges that are hashed on the same type/left span index to consider for combination, all others can be ignored, and vice versa for the addition of an inactive edge.

It is notable that the algorithm has no scanning rule, which is due to the fact that the positions of 'lexical items' or antecedent categories are encoded in the span labels of rules, and need no further attention. In the (Rambow et al., 1995) algorithm, the scanning component also deals with epsilon productions. Here, rules with an empty rhs are dealt with by prediction, by allowing an edge added for a rule with an empty rhs to be treated as an inactive edge (i.e. we equate "() •" and "• ()").

If the completed chart indicates a successful analysis, it is straightforward to compute the proof terms of the corresponding natural deduction proofs, given a record of which edges were produced by combination of which other edges, or by prediction from which rule. Thus, the term for a predicted edge is simply that of the rule in R, whereas a term for an edge produced by a completer step is arrived at by combining a term of the active edge with one for the inactive edge (using the special substitution operation that allows 'accidental binding' of variables, as discussed earlier). Of course, a single edge may compact multiple alternative subproofs, and so return multiple terms.
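As a concrete rendering of this bookkeeping (our own sketch with invented field names; the paper gives no code, and multisets are approximated by frozensets, in line with the 'sloppy' set notation noted above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    initial_ms: frozenset  # m1: multiset context when the ancestor edge was introduced
    current_ms: frozenset  # m2: indices still available for the remaining daughters
    theta: tuple           # restricted substitution over span variables
    restr: frozenset       # r: span variables whose values are needed non-locally
    lhs: tuple             # (type, index_set, (left, right)) for the rule's lhs
    found: tuple           # daughters to the left of the dot
    sought: tuple          # daughters to the right of the dot

def hash_key(edge):
    """Active edges are indexed on the atomic type and instantiated left
    span index of the next daughter sought; inactive edges on their lhs.
    The completer then only retrieves candidates that can combine."""
    typ, _, (left, _right) = edge.sought[0] if edge.sought else edge.lhs
    return (typ, left)
```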
Note that the approach has no problem in handling multiple lexical assignments: they simply result in multiple rules generated off the same basic span of the chart.

5 Efficiency and Complexity

The method is shown to be non-polynomial by considering a simple class of examples of the form X1, ..., Xn-1, a ⇒ a, where each Xi is a/(a/(a\a)). Each such Xi gives a hypothetical whose dependency is encoded by a multiset index. Examination of the chart reveals spans for which there are multiple edges, differing in their 'initial' multiset (and other ways), there being one edge for each subset of the indices deriving from the antecedents X1, ..., Xn-2, i.e. giving 2^(n-2) distinct edges. This non-polynomial number of edges results in non-polynomial time for the completer step, and in turn for the algorithm as a whole. Hence, this approach does not resolve the open question of the polynomial time parsability of the Lambek calculus. Informally, however, these observations are suggestive of a possible locus of difficulty in achieving such a result. Thus, the hope for polynomial time parsability of the Lambek calculus comes from it being an ordered 'list-like' system, rather than an unordered 'bag-like' system, but in the example just discussed, we observe 'bag-like' behaviour in a compact encoding (the multiset) of the dependencies of hypothetical reasoning.

We should note that the DTG parsing method of (Rambow et al., 1995), from which the current approach is derived, is polynomial time. This follows from the fact that their compilation applies to a preset DTG, giving rise to a fixed maximal set of distinct indices in the LPMG that the compilation generates. This fixed set of indices gives rise to a very large, but polynomial, worst-case upper limit on the number of edges in a chart, which in turn yields a polynomial time result. A key difference for the present approach is that our task is to parse arbitrary initial sequents, and hence we do not have the fixed initial grammar that is the basis of the Rambow et al. complexity result.

  x0/a/(x1/(a/a)), x1/(x2/(a/a)), x2/(a/a), a/a, a/a, a/a, a/a, a/a, a ⇒ x0

Figure 5: Example for comparison of methods

For practical comparison to the previous Lambek chart methods, consider the highly ambiguous artificial example shown in Figure 5 (which has six readings). König (1994) reports that a Prolog implementation of her method, running on a major workstation, produces 300 edges in 50 seconds. A Prolog implementation of the current method, on a current major workstation, produces 75 edges in less than a tenth of a second. Of course, the increase in computing power over the years makes the times not strictly comparable, but still a substantial speed up is indicated. The difference in the number of edges suggests that the König method is suboptimal in its compaction of alternative derivations.

References

van Benthem, J. 1991. Language in Action: Categories, Lambdas and Dynamic Logic. Studies in Logic and the Foundations of Mathematics, vol 130, North-Holland, Amsterdam.

Hepple, M. 1992. 'Chart Parsing Lambek Grammars: Modal Extensions and Incrementality.' Proc. of COLING-92.

Hepple, M. 1996. 'A Compilation-Chart Method for Linear Categorial Deduction.' Proc. COLING-96, Copenhagen.

Hepple, M. 1998. 'On Some Similarities Between D-Tree Grammars and Type-Logical Grammars.' Proc. Fourth Workshop on Tree-Adjoining Grammars and Related Frameworks.

König, E.
1990. 'The complexity of parsing with extended categorial grammars.' Proc. of COLING-90.

König, E. 1994. 'A Hypothetical Reasoning Algorithm for Linguistic Analysis.' Journal of Logic and Computation, Vol. 4, No. 1.

Lambek, J. 1958. 'The mathematics of sentence structure.' American Mathematical Monthly 65, 154-170.

Morrill, G. 1995. 'Higher-order Linear Logic Programming of Categorial Deduction.' Proc. of EACL-7, Dublin.

Morrill, G. 1996. 'Memoisation for Categorial Proof Nets: Parallelism in Categorial Processing.' Research Report LSI-96-24-R, Universitat Politècnica de Catalunya.

Pareschi, R. 1988. 'A Definite Clause Version of Categorial Grammar.' Proc. 26th ACL.

Rambow, O. 1994. 'Multiset-valued linear index grammars.' Proc. ACL'94.

Rambow, O., Vijay-Shanker, K. & Weir, D. 1995a. 'D-Tree Grammars.' Proc. ACL-95.

Rambow, O., Vijay-Shanker, K. & Weir, D. 1995b. 'Parsing D-Tree Grammars.' Proc. Int. Workshop on Parsing Technologies.
A Bag of Useful Techniques for Efficient and Robust Parsing

Bernd Kiefer†, Hans-Ulrich Krieger†, John Carroll‡, and Rob Malouf*
†German Research Center for Artificial Intelligence (DFKI), Stuhlsatzenhausweg 3, D-66123 Saarbrücken
‡Cognitive and Computing Sciences, University of Sussex, Falmer, Brighton BN1 9QH, UK
*Center for the Study of Language and Information, Stanford University, Ventura Hall, Stanford, CA 94305-4115, USA
{kiefer, krieger}@dfki.de, johnca@cogs.susx.ac.uk, malouf@csli.stanford.edu

Abstract

This paper describes new and improved techniques which help a unification-based parser to process input efficiently and robustly. In combination these methods result in a speed-up in parsing time of more than an order of magnitude. The methods are correct in the sense that none of them rule out legal rule applications.

1 Introduction

This paper describes several generally-applicable techniques which help a unification-based parser to process input efficiently and robustly. As well as presenting a number of new methods, we also report significant improvements we have made to existing techniques. The methods preserve correctness in the sense that they do not rule out legal rule applications. In particular, none of the techniques involve statistical or approximate processing. We also claim that these methods are independent of the concrete parser and neutral with respect to a given unification-based grammar theory/formalism.

How can we gain reasonable efficiency in parsing when using large integrated grammars with several thousands of huge lexicon entries? Our belief is that there is no single method which achieves this goal alone. Instead, we have to develop and use a set of "cheap" filters which are correct in the above sense. As we indicate in section 10, combining these methods leads to a speed-up in parsing time (and reduction of space consumption) of more than an order of magnitude when applied to a mature, well engineered unification-based parsing system.

We have implemented our methods as extensions to a HPSG grammar development environment (Uszkoreit et al., 1994) which employs a sophisticated typed feature formalism (Krieger and Schäfer, 1994; Krieger and Schäfer, 1995) and an advanced agenda-based bottom-up chart parser (Kiefer and Scherf, 1996). A specialized runtime version of this system is currently used in VERBMOBIL as the primary deep analysis component.¹

In the next three sections, we report on transformations we have applied to the knowledge base (grammar/lexicon) and on modifications in the core formalism (unifier, type system). In Sections 5-8, we describe how a given parser can be extended to filter out possible rule applications efficiently before performing "expensive" unification. Section 9 shows how to compute best partial analyses in order to gain a certain level of robustness. Finally, we present empirical results to demonstrate the efficiency gains, and speculate on extensions we intend to work on in the near future. Within the different sections, we refer to three corpora we have used to measure the effects of our methods. The reference corpora for English, German, and Japanese consist of 1200-5000 samples.

2 Precompiling the Lexicon

Lexicon entries in the development system are small templates that are loaded and expanded on demand by the typed feature structure system. Thereafter, all lexical rules are applied to the expanded feature structures.
The results of these two computations form the input of the analysis stage. [Footnote 1: VERBMOBIL (Wahlster, 1993) deals with the translation of spontaneously spoken dialogues, where only a minor part consists of "sentences" in a linguistic sense. Current languages are English, German, and Japanese. Some of the methods were originally developed in the context of another HPSG environment, the LKB (Copestake, 1998). This lends support to our claims of their independence from a particular parser or grammar engine.]

In order to save space and time in the runtime system, the expansion and the application of lexical rules is now done off-line. In addition, certain parts of the feature structure are deleted, since they are only needed to restrict the application of lexical rules (see also section 7 for a similar approach). For each stem, all results are stored in compact form as one compiled LISP file, which allows us to access and load a requested entry rapidly with almost no restriction on the size of the lexicon. Although load time is small (see Figure 1), the most frequently used entries are cached in main memory, reducing effort in the lexicon stage to a minimum. We continue to compute morphological information online, due to the significant increase of entries (a factor of 10 to 20 for German), which is not justifiable considering the minimal computation time for this operation.

             German     English    Japanese
# stems      4269       3754       1875
space        10.3 KB    10.8 KB    5.4 KB
entries      6          2.2        2.1
load time    25.8 msec  29.5 msec  7.5 msec

Figure 1: Space and time requirements; space, entries and load time values are per stem

3 Improvements in unification

Unification is the single most expensive operation performed in the course of parsing. Up to 90% of the CPU time expended in parsing a sentence using a large-scale unification based grammar can go into feature structure and type unification. Therefore, any improvements in the efficiency of unification would have direct consequences for the overall performance of the system.

One key to reducing the cost of unification is to find the simplest set of operations that meet the needs of grammar writers but still can be efficiently implemented. The unifier which was part of the original HPSG grammar development system mentioned in the introduction (described by (Backofen and Krieger, 1993)) provided a number of advanced features, including distributed (or named) disjunctions (Dörre and Eisele, 1990) and support for full backtracking. While these operations were sometimes useful, they also made the unifier much more complex than was really necessary.

The unification algorithm used by the current system is a modification of Tomabechi's (Tomabechi, 1991) "quasi-destructive" unification algorithm. Tomabechi's algorithm is based on the insight that unification often fails, and copying should only be performed when the unification is going to succeed. This makes it particularly well suited to chart-based parsing.
Tomabechi avoids these problems by simulating non-destructiveness without in- curring the overhead necessary to support back- tracking. First, it performs a destructive (but reversible) check that the two structures are compatible, and only when that succeeds does it produce an output structure. Thus, no out- put structures are built until it is certain that the unification will ultimately succeed. While an improvement over simple destruc- tive unification, Tomabechi's approach still suf- fers from what Kogure (Kogure, 1990) calls re- dundant copying. The new feature structures produced in the second phase of unification in- clude copies of all the substructures of the in- put graphs, even when these structures are un- changed. This can be avoided by reusing parts of the input structures in the output structure (Carroll and Malouf, 1999) without introducing significant bookkeeping overhead. To keep things as simple and efficient as pos- sible, the improved unifier also only supports conjunctive feature structures. While disjunc- tions can be a convenient descriptive tool for writing grammars, they are not absolutely nec- essary. When using a typed grammar formal- ism, most disjunctions can be easily put into the type hierarchy. Any disjunctions which cannot be removed by introducing new supertypes can be eliminated by translating the grammar into 474 disjunctive normal form (DNF). Of course, the ratio of the number of rules and lexical entries in the original grammar and the DNFed grammar depends on the 'style' of the grammar writer, the particular grammatical theory used, the number of disjunction alternatives, and so on. However, context management for distributed disjunctions requires enormous overhead when compared to simple conjunctive unification, so the benefits of using a simplified unifier out- weigh the cost of moving to DNF. For the Ger- man and Japanese VERBMOBIL grammars, we got 1.4-3× more rules and lexical entries, but by moving to a sophisticated conjunctive unifier we obtained an overall speed-up of 2-5. 4 Precompiling Type Unification After changing the unification engine, type uni- fication now became a big factor in processing: nearly 50% of the overall unification and copy- ing time was taken up by the computation of the greatest lower bounds (GLBs). Although we have in the past computed GLBs online effi- ciently with bit vectors, off-line computation is of course superior. The feasibility of the latter method depends on the number of types T of a grammar. The English grammar employs 6000 types which re- sults in 36,000,000 possible GLBs. Our exper- iments have shown, however, that only 0.5%- 2% of the type unifications were successful and only these GLBs need to be entered into the GLB table. In our implementation, accessing an arbitrary GLB takes less than 0.002 msec, compared to 15 msec of 'expensive' bit vector computation (following (A'/t-Kaci et al., 1989)) which also produces a lot of memory garbage. Our method, however, does not consume any memory and works as follows. We first assign a unique code (an integer) to every type t E 7-. After that, the GLB of s and t is assigned the following code (again an integer, in fact a fixnum): code(s) × ITI + code(t). This array- like encoding guarantees that a specific code is given away to a GLB at most once. Finally, this code together with the GLB is stored in a hash table. Hence, type unification costs are mini- mized: two symbol table lookups, one addition, one multiplication, and a hash table lookup. 
In order to access a unique maximal lower bound (= GLB), we must require that the type hierarchy is a lower semilattice (or bounded complete partial order). This is often not the case, but this deficiency can be overcome either by pre-computing the missing types (an efficient implementation of this takes approximately 25 seconds for the English grammar) or by making the online table lookup more complex.

A naive implementation of the off-line computation (compute the GLBs for T × T) only works for small grammars. Since type unification is a commutative operation (glb(s, t) = glb(t, s); s, t ∈ T), we can improve the algorithm by computing glb(s, t) for only one ordering of each pair. A second improvement is due to the following fact: if the GLB of s and t is bottom, we do not have to compute the GLBs of the subtypes of both s and t, since they are guaranteed to fail. Even with these improvements, the GLB computation of a specific grammar took more than 50 CPU hours, due to the special 'topology' of the type hierarchy. However, not even the failing GLBs need to be computed (which take much of the time). When starting with the leaves of the type hierarchy, we can compute maximal components w.r.t. the supertype relation: by following the subsumption links upwards, we obtain sets of types, s.t. for a given component C, we can guarantee that glb(s, t) ≠ ⊥, for all s, t ∈ C. This last technique has helped us to drop the off-line computation time to less than one CPU hour.

Overall, when using the off-line GLBs, we obtained a parsing speed-up of 1.5, compared to the bit vector computation. [Footnote 2: An alternative approach to improving the speed of type unification would be to implement the GLB table as a cache, rather than pre-computing the table's contents exhaustively. Whether this works well in practice or not depends on the efficiency of the primitive glb(s, t) computation; if the latter were relatively slow then the parser itself would run slowly until the cache was sufficiently full that cache hits became predominant.]

5 Precompiling Rule Filters

The aim of the methods described in this and the next section is to avoid failing unifications by applying cheap 'filters' (i.e., methods that are cheaper than unification). The first filter we want to describe is a rule application filter. We have used this method for quite a while, and it has proven both efficient and easy to employ. Our rule application filter is a function that
Thus, access costs are minimized and no additional memory is used at run-time. The filters for the three languages are computed off-line in less than one minute and rule out 50% to 60% of the failing unifications during parsing, saving about 45% of the parsing time. 6 Dynamic Unification Filtering ('Quick Check') Our second filter (which we have dubbed the 'quick check') exploits the fact that unification fails more often at certain points in feature structures than at others. For example, syn- tactic features such as CAW(egory) are very fre- quent points of failure, whereas unification al- most never fails on semantic features which are used merely to accumulate pieces of the logical form. Since all substructures are typed, uni- fication failure is manifested by a type clash when attempting a type unification. The quick check is invoked before each unification attempt to check the most frequent failure points, each stored as a feature path. The technique works as follows. First, there is an off-line stage, in which a modified unifi- cation engine is used that does not return im- mediately after a single type unification failure, but instead records in a global data structure the paths at which all such failures occurred. Using this modified system a set of sentences is parsed, and the n paths with the highest failure counts are saved. It is exactly these paths that are used later in filtering. During parsing, when an active chart item (i.e., a rule schema or a partly instantiated rule schema) and a passive chart item (a lexical entry or previously-built constituent) are combined, the parser has to unify the feature structure of the passive item into the substructure of the ac- tive item that corresponds to the argument to be filled. If either of the two structures has not been seen before, the parser associates with it a vector of length n containing the types at the end of the previously determined paths. The first position of the vector contains the type cor- responding to the most frequently failing path, the second position the second most frequently failing path, and so on. Otherwise, the existing vectors of types are retrieved. Corresponding elements in the vectors are then type-unified, and full unification of the feature structures is performed only if all the type unifications suc- ceed. Clearly, when considering the number of paths n used for this technique, there is a trade- off between the time savings from filtered uni- fications and the effort required to create the vectors and compare them. The main factors involved are the speed of type unification and the percentage of unification attempts filtered out (the 'filter rate') with a given set of paths. The optimum number of paths cannot be de- termined analytically. Our English, German and Japanese grammars use between 13 to 22 paths for quick check filtering, the precise num- ber having been established by experimenta- tion. The paths derived for these grammars are somewhat surprising, and in many cases do not fit in with the intuitions of the grammar-writers. In particular, some of the paths are very long (of length ten or more). Optimal sets of paths for grammars of this complexity could not be produced manually. The technique will only be of benefit if type unification is computationally cheap--as indeed it is in our implementation (section 4)--and if the filter rate is high (otherwise the extra work 476 performed essentially just duplicates work car- ried out later in unification). 
There is also overlap between the quick check and the rule filter (previous section), since they are applied at the same point in processing. We have found that (given a reasonable number of paths) the quick check is the more powerful filter of the two, because it functions dynamically, taking into account feature instantiations that occur during the parsing process; but the rule filter is still valuable if executed first, since it is a single, very fast table lookup. Applying both filters, the filter rate ranges from 95% to over 98%. Thus almost all failing unifications are avoided. Compared to the system with only rule application filtering, parse time is reduced by approximately 75%.³

[Footnote 3: There are refinements of the technique which we have implemented and which in practice produce additional benefits; we will report these in a subsequent paper. Briefly, they involve an improvement to the path collection method, and the storage of other information besides types in the vectors.]

7 Reducing Feature Structure Size via Restrictors

The 'category' information that is attached to each chart item of the parser consists of a single feature structure. Thus a rule is implemented by a feature structure where the daughters have to be unified into predetermined substructures. Although this implementation is along the lines of HPSG, it has the drawback that the tree structure that is already present in the chart items is duplicated in the feature structures.

Since HPSG requires all relevant information to be contained in the SYNSEM feature of the mother structure, the unnecessary daughters only increase the size of the overall feature structure without constraining the search space. Due to the Locality Principle of HPSG (Pollard and Sag, 1987, p. 145ff), they can therefore be legally removed in fully instantiated items. The situation is different for active chart items, since daughters can affect their siblings.

To be independent of a certain grammatical theory or implementation, we use restrictors similar to (Shieber, 1985) as a flexible and easy-to-use specification to perform this deletion. A positive restrictor is an automaton describing the paths in a feature structure that will remain after restriction (the deletion operation), whereas a negative restrictor specifies the parts to be deleted. Both kinds of restrictors can be used in our system.

In addition to the removal of the tree structure, the grammar writer can specify the restrictor further to remove features that are only used locally and do not play a role in further derivation. It is worth noting that this method is only correct if the specified restrictor does not remove paths that would lead to future unification failures. The reduction in size results in a speed-up in unification itself, but also in copying and memory management.

As already mentioned in section 2, there exists a second restrictor to get rid of unnecessary parts of the lexical entries after lexicon processing. The speed gain using the restrictors in parsing ranges from 30% for the German system to 45% for English. (A schematic sketch of the restriction operation is given after the following paragraph.)

8 Limiting the Number of Initial Chart Items

Since the number of lexical entries per stem has a direct impact on the number of parsing hypotheses (in the worst case leading to an exponential increase), it would be a good idea to have a cheap mechanism at hand that helps to limit these initial items.
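As noted at the end of the previous section, here is a minimal sketch of positive restriction, assuming feature structures encoded as nested Python dicts; the encoding is our own illustration (reentrancy and negative restrictors are omitted), not the system's implementation.

    def restrict(fs, restrictor):
        """Return a copy of fs containing only the features licensed by
        the (positive) restrictor; everything else is deleted.
        `restrictor` mirrors fs's nesting; the value True keeps a whole
        subtree."""
        if restrictor is True or not isinstance(fs, dict):
            return fs                      # keep atomic values / whole subtree
        return {feat: restrict(val, restrictor[feat])
                for feat, val in fs.items() if feat in restrictor}

    # e.g. keep only SYNSEM on a fully instantiated passive item,
    # dropping the duplicated daughter tree:
    # restricted = restrict(item_fs, {"SYNSEM": True})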
The technique we have implemented is based on the following observation: in order to contribute to a reading, certain items (concrete lexicon entries, but also classes of entries) require the existence of other items, such that the non-existence of one allows a safe deletion of the other (and vice versa). In German, for instance, prefix verbs require the right separable prefixes to be present in the chart, but a potential prefix likewise requires its prefix verb. Note that such a technique operates in a much larger context (in fact, the whole chart) than a local rule application filter or the quick-check method.

The method works as follows. In a preprocessing step, we first separate the chart items which encode prefix verbs from those items which represent separable prefixes. Since both specify the morphological form of the prefix, a set-exclusive-or operation yields exactly the items which can be safely deleted from the chart.

Let us give some examples to see the usefulness of this method. In the sentence Ich komme morgen (I (will) come tomorrow), komme maps onto 97 lexical entries (remember, komme might encode prefix verbs such as ankommen (arrive), zurückkommen (come back), etc.), although here none of the prefix verb readings are valid, since a prefix is missing. Using the above method, only 8 of the 97 lexical entries will remain in the chart. The sentence Ich komme morgen an (I (will) arrive tomorrow) results in 8+7 entries for komme (8 entries for the come reading together with 7 entries for the arrive reading of komme) and 3 prepositional readings plus 1 prefix entry for an. However, in Der Mann wartet an der Tür (The man is waiting at the door), only the three prepositional readings for an come into play, since no prefix verb anwartet exists. Although there are no English prefix verbs, the method also works for verbs requiring certain particles, such as come, come along, come back, come up, etc. The parsing time for the second example goes down by a factor of 2.4; the overall saving w.r.t. our reference corpus is 17% of the parsing time (i.e., a speed-up factor of 1.2).

9 Computing Best Partial Analyses

Given deficient, ungrammatical, or spontaneous input, a traditional parser is not able to deliver a useful result. To overcome this disadvantage, our approach focuses on partial analyses which are combined in a later stage to form total analyses, without giving up the correctness of the overall deep grammar. But what can be considered a good partial analysis? Obviously a (sub)tree licensed by the grammar which covers a continuous part of the input (i.e., a passive parser edge). But not every passive edge is a good candidate, since otherwise we would end up with perhaps thousands of them. Instead, our approach computes an 'optimal' connected sequence of partial analyses which covers the whole input. The idea here is to view the set of passive edges as a directed graph and to compute shortest paths w.r.t. a user-defined estimation function. Since this graph is acyclic and topologically sorted, we have chosen the DAG-shortest-path algorithm (Cormen et al., 1990), which runs in O(V + E).
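Before the speech-specific modifications listed next, here is a minimal sketch of the plain DAG-shortest-path relaxation over passive edges, under our own encoding assumptions: edges are (start, end, item) triples over the chart's already topologically ordered string positions, and `estimate` is the user-defined cost function (the concrete values used in the German grammar appear in the bullet list below).

    INF = float("inf")

    def best_partial_analyses(edges, n_vertices, estimate):
        """DAG shortest path, O(V + E): minimal total cost of a connected
        sequence of passive edges covering positions 0..n_vertices."""
        dist = [INF] * (n_vertices + 1)
        dist[0] = 0
        by_start = {}
        for s, e, item in edges:
            by_start.setdefault(s, []).append((e, item))
        for v in range(n_vertices):        # vertices in topological order
            if dist[v] == INF:
                continue
            for end, item in by_start.get(v, []):
                dist[end] = min(dist[end], dist[v] + estimate(item))
        return dist[n_vertices]

Recovering the best path(s) themselves would additionally record predecessor edges; that is part of modification (ii) below.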
We have modified this algorithm to cope with the needs we have encountered in speech parsing: (i) one can use several start and end vertices (e.g., in the case of n-best chains or word graphs); (ii) all best shortest paths are returned (i.e., we obtain a shortest-path subgraph); (iii) estimation and selection of the best edges is done incrementally when parsing n-best chains (i.e., only new passive edges entered into the chart are estimated and perhaps selected). This approach has one important property: even if certain parts of the input have not undergone at least one rule application, there are still lexical edges which help to form a best path through the passive edges. This means that we can interrupt parsing at any time, but still obtain a useful result.

Let us give an example of how the estimation function on edges (= trees) might look (this estimation is actually used in the German grammar):

• n-ary tree (n > 1) with utterance status (e.g., NPs, PPs): value 1
• lexical items: value 2
• otherwise: value ∞

This approach does not always favor paths with the longest edges, as the example in Figure 2 shows; instead it prefers paths containing no lexical edges (where this is possible), and there might be several such paths having the same cost. Longest (sub)paths, however, can be obtained by employing an exponential estimation function. Other properties, such as prosodic information or probabilistic scores, could also be utilized in the estimation function. A detailed description of the approach can be found in (Kasper et al., 1999).

[Figure 2: Computing best partial analyses. Note that the paths PR and QR are chosen, but not ST, although S is the longest edge.]

10 Conclusions and Further Work

The collection of methods described in this paper has enabled us to unite deep linguistic analysis with speech processing. The overall speed-up compared to the original system is about a factor of 10 up to 25. Below we present some absolute timings to give an impression of the current system's performance.

                    German    English   Japanese
    # sentences       5106       1261       1917
    # words              7        6.7        7.2
    # lex. entries    40.9       25.6       69.8
    # chart items     1024        234        565
    # results          5.8       12.4       53.6
    time first      1.46 s     0.24 s      0.9 s
    time overall    4.53 s     1.38 s     4.42 s

In the table, the last six rows are average values per sentence; time first and time overall are the mean CPU times to compute the first result and the whole search space respectively. # lex. entries and # chart items give an impression of the lexical and syntactic ambiguity of the respective grammars.⁴

The German and Japanese corpora and half of the English corpus consist of transliterations of spoken dialogues used in the VERBMOBIL project. These dialogues are real-world dialogues about appointment scheduling and vacation planning. They contain a variety of syntactic as well as spontaneous speech phenomena. The remaining half of the English corpus is taken from a manually constructed test suite, which may explain some of the differences in absolute parse time.

Most of the methods are corpus-independent, except for the quick-check filter, which requires a training corpus, and the use of a purely conjunctive grammar, which will do worse in cases of great amounts of syntactic ambiguity because there is currently no ambiguity packing in the parser. For the quick check, we have observed that a random subset of the corpora with about one to two hundred sentences is enough to obtain a filter with a nearly optimal filter rate.
Although the actual efficiency gain will vary for differently implemented grammars, we are certain that these techniques will lead to substantial improvements in almost every unification-based system. It is, for example, quite unlikely that unification failures are equally distributed over the different nodes of the grammar's feature structure, which is the most important prerequisite for the quick-check filter to work. Avoiding disjunctions usually requires a reworking of the grammar, which will pay off in the end.

[Footnote 4: The computations were made using a 300 MHz Sun UltraSPARC 2 with Solaris 2.5. The whole system is programmed in Franz Allegro Common Lisp.]

We have shown that the combination of algorithmic methods together with some discipline in grammar writing can lead to a practical high-performance analysis system, even with large general grammars for different languages. There is, however, room for further improvements. We intend to generalize to other cases the technique for removing unnecessary lexical items. A detailed investigation of the quick-check method and its interaction with the rule application filter is planned for the near future. Since almost all failing unifications are avoided through the use of filtering techniques, we will now focus on methods to reduce the number of chart items that do not contribute to any analysis; for instance, by computing context-free or regular approximations of the HPSG grammars (e.g., (Nederhof, 1997)).

Acknowledgments

The research described in this paper has greatly benefited from a very fruitful collaboration with the HPSG group of CSLI at Stanford University. This cooperation is part of the deep linguistic processing effort within the BMBF project VERBMOBIL. Special thanks are due to Stefan Müller for discussing the topic of German prefix verbs. Thanks to Dan Flickinger, who provided us with several English phenomena. We also want to thank Nicolas Nicolov for reading a version of this paper. Stephan Oepen's and Mark-Jan Nederhof's fruitful comments have helped us a lot. Finally, we want to thank the anonymous ACL reviewers for their comments. This research was supported by the German Federal Ministry for Education, Science, Research and Technology under grant no. 01 IV 701 V0, by a UK EPSRC Advanced Fellowship to the third author, and in part by work supported by the National Science Foundation under grant number IRI-9612682.

References

Hassan Aït-Kaci, Robert Boyer, Patrick Lincoln, and Roger Nasr. 1989. Efficient implementation of lattice operations. ACM Transactions on Programming Languages and Systems, 11(1):115-146, January.

Rolf Backofen and Hans-Ulrich Krieger. 1993. The TDL/UDiNe system. In R. Backofen, H.-U. Krieger, S.P. Spackman, and H. Uszkoreit, editors, Report of the EAGLES Workshop on Implemented Formalisms at DFKI, Saarbrücken, pages 67-74. DFKI Research Report D-93-27.

John Carroll and Robert Malouf. 1999. Efficient graph unification for parsing feature-based grammars. University of Sussex and Stanford University.

Ann Copestake. 1998. The (new) LKB system. Ms, Stanford University, http://www-csli.stanford.edu/~aac/newdoc.pdf.

Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. MIT Press, Cambridge, MA.

Jochen Dörre and Andreas Eisele. 1990. Feature logic with disjunctive unification. In Proceedings of the 13th International Conference on Computational Linguistics, COLING-90, volume 3, pages 100-105.
Walter Kasper, Bernd Kiefer, Hans-Ulrich Krieger, C.J. Rupp, and Karsten L. Worm. 1999. Charting the depths of robust speech parsing. In Proceedings of the ACL-99 Thematic Session on Robust Sentence-Level Interpretation.

Bernd Kiefer and Oliver Scherf. 1996. Gimme more HQ parsers. The generic parser class of DISCO. Unpublished draft. German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany.

Kiyoshi Kogure. 1990. Strategic lazy incremental copy graph unification. In Proceedings of the 13th International Conference on Computational Linguistics (COLING '90), pages 223-228, Helsinki.

Hans-Ulrich Krieger and Ulrich Schäfer. 1994. TDL--a type description language for constraint-based grammars. In Proceedings of the 15th International Conference on Computational Linguistics, COLING-94, pages 893-899. An enlarged version of this paper is available as DFKI Research Report RR-94-37.

Hans-Ulrich Krieger and Ulrich Schäfer. 1995. Efficient parameterizable type expansion for typed feature formalisms. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, IJCAI-95, pages 1428-1434. DFKI Research Report RR-95-18.

Mark-Jan Nederhof. 1997. Regular approximations of CFLs: A grammatical view. In Proceedings of the 5th International Workshop on Parsing Technologies, IWPT'97, pages 159-170.

Carl Pollard and Ivan A. Sag. 1987. Information-Based Syntax and Semantics. Vol. I: Fundamentals. CSLI Lecture Notes, Number 13. Center for the Study of Language and Information, Stanford.

Stuart M. Shieber. 1985. Using restriction to extend parsing algorithms for complex-feature-based formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, ACL-85, pages 145-152.

Hideto Tomabechi. 1991. Quasi-destructive graph unification. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, volume 29, pages 315-322.

Hans Uszkoreit, Rolf Backofen, Stephan Busemann, Abdel Kader Diagne, Elizabeth A. Hinkelman, Walter Kasper, Bernd Kiefer, Hans-Ulrich Krieger, Klaus Netter, Günter Neumann, Stephan Oepen, and Stephen P. Spackman. 1994. DISCO--an HPSG-based NLP system and its application for appointment scheduling. In Proceedings of COLING-94, pages 436-440. DFKI Research Report RR-94-38.

Wolfgang Wahlster. 1993. VERBMOBIL--translation of face-to-face dialogs. Research Report RR-93-34, German Research Center for Artificial Intelligence (DFKI), Saarbrücken, Germany. Also in Proc. MT Summit IV, 127-135, Kobe, Japan, July 1993.
Semantic Analysis of Japanese Noun Phrases: A New Approach to Dictionary-Based Understanding

Sadao Kurohashi and Yasuyuki Sakai
Graduate School of Informatics, Kyoto University
Yoshida-honmachi, Sakyo, Kyoto, 606-8501, Japan
kuro@i.kyoto-u.ac.jp

Abstract

This paper presents a new method of analyzing Japanese noun phrases of the form N1 no N2. The Japanese postposition no roughly corresponds to of, but it has much broader usage. The method exploits a definition of N2 in a dictionary. For example, rugby no coach can be interpreted as a person who teaches technique in rugby. We illustrate the effectiveness of the method by the analysis of 300 test noun phrases.

1 Introduction

The semantic analysis of Japanese noun phrases of the form N1 no N2 is one of the difficult problems which cannot be solved by the current efforts of many researchers. Roughly speaking, the Japanese noun phrase N1 no N2 corresponds to the English noun phrase N2 of N1. However, the Japanese postposition no has much broader usage than of, as follows:

    watashi 'I' no kuruma 'car'              (possession)
    tsukue 'desk' no ashi 'leg'              (whole-part)
    gray no seihuku 'uniform'                (modification)
    senmonka 'expert' no chousa 'study'      (agent)
    rugby no coach                           (subject)
    yakyu 'baseball' no senshu 'player'      (category)
    kaze 'cold' no virus                     (result)
    ryokou 'travel' no jyunbi 'preparation'  (purpose)
    toranpu 'card' no tejina 'trick'         (instrument)

The conventional approach to this problem was to classify semantic relations, such as possession, whole-part, modification, and others. Then, classification rules were crafted by hand, or detected from relation-tagged examples by a machine learning technique (Shimazu et al., 1987; Sumita et al., 1990; Tomiura et al., 1995; Kurohashi et al., 1998).

The problem in such an approach is how to set up the semantic relations. For example, the above examples and their classification came from the IPA nominal dictionary (Information-Technology Promotion Agency, Japan, 1996). Is it possible to find clear boundaries among subject, category, result, purpose, instrument, and the others? No matter how fine-grained the relations we set up, we always encounter phrases which are on the boundary or belong to two or more relations. This paper proposes a completely different approach to the task, which exploits the semantic role information of nouns in an ordinary dictionary.

2 Semantic Roles of Nouns

The meaning of a word can be recognized by its relationship with its semantic roles. In the case of verbs, the arguments of the predicates constitute the semantic roles, and a considerable number of studies have been made. For example, case grammar theory is a semantic valence theory that describes the logical form of a sentence in terms of a predicate and a series of case-labeled arguments such as agent, object, location, source, and goal (Fillmore, 1968). Furthermore, a wide-coverage dictionary describing the semantic roles of verbs in machine-readable form has been constructed with a great deal of labor (Ikehara et al., 1997).

Not only verbs but also nouns can have semantic roles. For example, a coach is a coach of some sport; a virus is a virus causing some disease.
Unlike the case of verbs, no semantic-role dictionary for nouns has been constructed so far. However, in many cases, the semantic roles of nouns are described in an ordinary dictionary for human beings. For example, RSK (Reikai Shougaku Kokugojiten) (Tadika, 1997), a Japanese dictionary for children, gives the definitions of the words coach and virus as follows:¹

coach: a person who teaches technique in some sport
virus: a living thing even smaller than bacteria which causes infectious disease like influenza

[Footnote 1: Although our method handles Japanese noun phrases by using Japanese definition sentences, in this paper we use their English translations for the explanation. In some sense, the essential point of our method is language-independent.]

If an NLP system can utilize these definitions as they are, we do not need to take the trouble of constructing a semantic-role dictionary for nouns in a special format for machine use.

3 Interpretation of N1 no N2 using a Dictionary

Semantic-role information on nouns in an ordinary dictionary can be utilized to solve the difficult problem in the semantic analysis of N1 no N2 phrases. In other words, we can say the problem disappears.

For example, rugby no coach can be interpreted by the definition of coach as follows: the dictionary describes that the noun coach has a semantic role of sport, and the phrase rugby no coach specifies that the sport is rugby. That is, the interpretation of the phrase can be regarded as matching rugby in the phrase to some sport in the coach definition. Furthermore, based on this interpretation, we can paraphrase rugby no coach into a person who teaches technique in rugby, by replacing some sport in the definition with rugby.

Kaze 'cold' no virus is also easily interpreted based on the definition of virus, linking kaze 'cold' to infectious disease.

Such a dictionary-based method can handle the interpretation of most phrases on which conventional classification-based analysis failed. As a result, we can arrange the diversity of N1 no N2 senses simply as in Table 1.

Table 1: Semantic relations in N1 no N2

    Relation        Noun phrase N1 no N2                           Verb phrase
    Semantic-role   rugby no coach; kaze 'cold' no virus;          hon-wo 'book-ACC' yomu 'read'
                    tsukue 'desk' no ashi 'leg';
                    ryokou 'travel' no jyunbi 'preparation'
    Agent           senmonka 'expert' no chousa 'study'            kare-ga 'he-NOM' yomu 'read'
    Possession      watashi 'I' no kuruma 'car'
    Belonging       gakkou 'school' no sensei 'teacher'
    Time            aki 'autumn' no hatake 'field'                 3ji-ni 'at 3 o'clock' yomu 'read'
    Place           Kyoto no mise 'store'                          heya-de 'in room' yomu 'read'
    Modification    gray no seihuku 'uniform';                     isoide 'hurriedly' yomu 'read'
                    huzoku 'attached' no neji 'screw';
                    ki 'wooden' no hako 'box'
    Complement      kimono no jyosei 'lady';
                    nobel-sho 'Nobel prize' no kisetsu 'season'

The semantic-role relation is a relation in which N1 fills a semantic role of N2. When N2 is an action noun, an object-action relation is also regarded as a semantic-role relation. On the other hand, in the agent, possession and belonging relations, N1 and N2 have a weaker relationship. In theory, any action can be done by anyone (my study, his reading, etc.); anything can be possessed by anyone (my pen, his feeling, etc.); and anyone can belong to any organization (I can belong to a university, he can belong to any community, etc.). The difference between the semantic-role relation and the agent, possession and belonging relations can correspond to the difference between the agent and the object of verbs.
In general, the object has a stronger relationship with a verb than the agent, which leads to several asymmetrical linguistic phenomena. The time and place relations have a much clearer correspondence to optional cases for verbs. A modification relation is also parallel to modifiers for verbs. If a phrase has a modification relation, it can be paraphrased into N2 is N1: gray no seihuku 'uniform' is paraphrased into seihuku 'uniform' is gray.

The last relation, the complement relation, is the most difficult to interpret. The relation between N1 and N2 does not come from N1's semantic roles, nor is it as weak as the other relations. For example, kimono no jyosei 'lady' means a lady wearing a kimono, and nobel-sho 'Nobel prize' no kisetsu 'season' means a season when the Nobel prizes are awarded. Since the automatic interpretation of the complement relation is much more difficult than that of the other relations, it is beyond the scope of this paper.

4 Analysis Method

Once we can arrange the diversity of N1 no N2 senses as in Table 1, their analysis becomes very simple, consisting of the following two modules:

1. Dictionary-based analysis (abbreviated to DBA hereafter) for semantic-role relations.
2. Semantic feature-based analysis (abbreviated to SBA hereafter) for some semantic-role relations and all other relations.

After briefly introducing the resources employed, we explain the algorithm of the two analyses.

4.1 Resources

4.1.1 RSK

RSK (Reikai Shougaku Kokugojiten), a Japanese dictionary for children, is used to find the semantic roles of nouns in DBA. The reason why we use a dictionary for children is that, generally speaking, the definition sentences of such a dictionary are described with basic words, which helps the system find links between N1 and a semantic role of a head word.

All definition sentences in RSK were analyzed by JUMAN, a Japanese morphological analyzer, and KNP, a Japanese syntactic and case analyzer (Kurohashi and Nagao, 1994; Kurohashi and Nagao, 1998). Then, a genus word for a head word, like a person for coach, was detected in the definition sentences by simple rules: in a Japanese definition sentence, the last word is a genus word in almost all cases; if there is a noun coordination at the end, all of those nouns are regarded as genus words.

4.1.2 NTT Semantic Feature Dictionary

NTT Communication Science Laboratories (NTT CS Lab) constructed a semantic feature tree, whose 3,000 nodes are semantic features, and a nominal dictionary containing about 300,000 nouns, each of which is given one or more appropriate semantic features. Figure 1 shows the upper levels of the semantic feature tree.

SBA uses the dictionary to specify the conditions of rules. DBA also uses the dictionary to calculate the similarity between two words. Suppose that the words X and Y have the semantic features $S_X$ and $S_Y$ respectively, that their depths in the semantic tree are $d_X$ and $d_Y$, and that the depth of their lowest (most specific) common node is $d_c$; then the similarity between X and Y, $sim(X, Y)$, is calculated as follows:

$sim(X, Y) = (d_c \times 2)/(d_X + d_Y)$

If $S_X$ and $S_Y$ are the same, the similarity is 1.0, the maximum score under this criterion.

4.1.3 NTT Verb Case Frame Dictionary

NTT CS Lab also constructed a case frame dictionary for 6,000 verbs, using the semantic features described above. For example, a case frame of the verb kakou-suru (process) is as follows:

N1 (AGENT)-ga N2 (CONCRETE)-wo kakou-suru 'N1 process N2'

where ga and wo are the Japanese nominative and accusative case markers.
The frame describes that the verb kakou-suru takes two cases: nouns with the AGENT semantic feature can fill the ga-case slot, and nouns with the CONCRETE semantic feature can fill the wo-case slot. KNP utilizes the case frame dictionary for the case analysis.

[Figure 1: The upper levels of NTT Semantic Feature Dictionary. NOUN divides into CONCRETE and ABSTRACT; CONCRETE into AGENT (with HUMAN and ORGANIZATION below it), PLACE, and CONCRETE; ABSTRACT into ABSTRACT, EVENT, and ABSTRACT RELATION (with TIME, POSITION, QUANTITY, etc. below it).]

4.2 Algorithm

Given an input phrase N1 no N2, both DBA and SBA are applied to the input, and then the two analyses are integrated.

4.2.1 Dictionary-based Analysis

Dictionary-based analysis (DBA) tries to find a correspondence between N1 and a semantic role of N2 by utilizing RSK, through the following process:

1. Look up N2 in RSK and obtain the definition sentences of N2.
2. For each word w in the definition sentences other than the genus words, do the following steps:
   2.1. When w is a noun which shows a semantic role explicitly, like kotogara 'thing', monogoto 'matter', nanika 'something', and N1 does not have the semantic feature HUMAN or TIME, give 0.9 to their correspondence.²
   2.2. When w is another noun, calculate the similarity between N1 and w by using the NTT Semantic Feature Dictionary (as described in Section 4.1.2), and give the similarity score to their correspondence.
   2.3. When w is a verb which has a vacant case slot, and the semantic constraint for the slot meets the semantic feature of N1, give 0.5 to their correspondence.
3. If we could not find a correspondence with a score of 0.6 or more by step 2, look up the genus word in RSK, obtain its definition sentences, and repeat step 2 again. (The looking up of a genus word is done only once.)
4. Finally, if the best correspondence score is 0.5 or more, DBA outputs the best correspondence, which can be a semantic-role relation of the input; if not, DBA outputs nothing.

[Footnote 2: For the present, the parameters in the algorithm were given empirically, not optimized by a learning method.]

For example, the input rugby no coach is analyzed as follows (figures in parentheses attached to words indicate the similarity scores):

(1) rugby no coach
coach: a person who teaches technique(0.21) in some sport(1.0)

Rugby, technique and sport have the semantic features SPORT, METHOD and SPORT respectively in the NTT Semantic Feature Dictionary. The lowest common node between SPORT and METHOD is ABSTRACT, and based on these semantic features, the similarity between rugby and technique is calculated as 0.21. On the other hand, the similarity between rugby and sport is calculated as 1.0, since they have the same semantic feature. The case analysis finds that all case slots of teach are filled in the definition sentence. As a result, DBA outputs the correspondence between rugby and sport as a possible semantic-role relation of the input.
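To make the similarity definition concrete, here is a self-contained Python sketch over an invented toy fragment of the feature tree; because the real tree has 3,000 nodes, the toy depths (and hence the scores) differ from the paper's 0.21.

    # Toy fragment of a semantic feature tree as a child -> parent map.
    # The fragment and its depths are invented for illustration only.
    PARENT = {"CONCRETE": "NOUN", "ABSTRACT": "NOUN",
              "SPORT": "ABSTRACT", "METHOD": "ABSTRACT"}

    def ancestors(node):
        """Path from node up to the root, inclusive."""
        chain = [node]
        while node in PARENT:
            node = PARENT[node]
            chain.append(node)
        return chain

    def sim(sx, sy):
        """sim(X, Y) = (d_c * 2) / (d_X + d_Y); depth counted from the root."""
        ax = ancestors(sx)
        ay = set(ancestors(sy))
        lowest_common = next(n for n in ax if n in ay)
        depth = lambda n: len(ancestors(n))
        return (depth(lowest_common) * 2) / (depth(sx) + depth(sy))

    print(sim("SPORT", "SPORT"))             # 1.0, same feature
    print(round(sim("SPORT", "METHOD"), 2))  # 0.67 in this toy tree

The DBA loop then simply keeps the best-scoring word of the definition (0.9 for explicit role nouns, sim for other nouns, 0.5 for a verb with a suitable vacant slot) and reports it if it reaches the 0.5 threshold.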
On the other hand, bunshou 'writings' no tatsujin 'expert' is an example in which N1 corresponds to a vacant case slot of the predicate outstanding:

(2) bunshou 'writings' no tatsujin 'expert'
expert: a person being outstanding (at φ: 0.50)

Puroresu 'pro wrestling' no chukei 'relay' is an example in which the looking up of the genus word broadcast leads to the correct analysis:

(3) puroresu 'pro wrestling' no chukei 'relay'
relay: a relay broadcast
broadcast: a radio(0.0) or television(0.0) presentation of news(0.48), entertainment(0.87), music(0.50) and others

4.2.2 Semantic Feature-based Analysis

Since diverse relations in N1 no N2 are handled by DBA, the remaining relations can be detected by simple rules checking the semantic features of N1 and/or N2. The following rules are applied one by one to the input phrase. Once the input phrase meets a condition, SBA outputs the relation in the rule, and the subsequent rules are not applied any more.

1. N1:HUMAN, N2:RELATIVE → semantic-role (relative), e.g. kare 'he' no oba 'aunt'
2. N1:HUMAN, N2:PERSONAL_RELATION → semantic-role (personal relation), e.g. kare 'he' no tomodachi 'friend'
3. N1:HUMAN, N2:HUMAN → modification (apposition), e.g. gakusei 'student' no kare 'he'
4. N1:ORGANIZATION, N2:HUMAN → belonging, e.g. gakkou 'school' no sensei 'teacher'
5. N1:AGENT, N2:EVENT → agent, e.g. senmonka 'expert' no chousa 'study'
6. N1:MATERIAL, N2:CONCRETE → modification (material), e.g. ki 'wood' no hako 'box'
7. N1:TIME, N2:*³ → time, e.g. aki 'autumn' no hatake 'field'
8. N1:COLOR, QUANTITY, or FIGURE, N2:* → modification, e.g. gray no seihuku 'uniform'
9. N1:*, N2:QUANTITY → semantic-role (attribute), e.g. hei 'wall' no takasa 'height'
10. N1:*, N2:POSITION → semantic-role (position), e.g. tsukue 'desk' no migi 'right'
11. N1:AGENT, N2:* → possession, e.g. watashi 'I' no kuruma 'car'
12. N1:PLACE or POSITION, N2:* → place, e.g. Kyoto no mise 'store'

[Footnote 3: * meets any noun.]

Rules 1, 2, 9 and 10 are for certain semantic-role relations. We use these rules because these relations can be analyzed more accurately by using explicit semantic features rather than based on a dictionary.

4.2.3 Integration of Two Analyses

Usually, either DBA or SBA outputs some relation. In rare cases, neither analysis outputs any relation, which means analysis failure. When both DBA and SBA output some relations, the results are integrated as follows (basically, if the output of one analysis is more reliable, the output of the other analysis is discarded):

If a semantic-role relation is detected by SBA, discard the output from DBA.
Else if a correspondence with a score of 0.95 or more is detected by DBA, discard the output from SBA.
Else if some relation is detected by SBA, discard the output from DBA if the correspondence score is 0.8 or less.

In the case of the following example, rojin 'old person' no shozo 'portrait', both analyses were accepted by the above criteria.
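A sketch of SBA as an ordered rule table together with the integration logic of Section 4.2.3; the rule list is abridged (rules 2, 6, 8, 9, 10 and several others are omitted) and the feature tests are simplified stand-ins for NTT dictionary lookups.

    # Abridged rule table; the first matching rule wins. '*' matches any noun.
    SBA_RULES = [
        ("HUMAN", "RELATIVE", "semantic-role (relative)"),
        ("HUMAN", "HUMAN", "modification (apposition)"),
        ("ORGANIZATION", "HUMAN", "belonging"),
        ("AGENT", "EVENT", "agent"),
        ("TIME", "*", "time"),
        ("AGENT", "*", "possession"),
        ("PLACE", "*", "place"),
    ]

    def sba(n1_features, n2_features):
        for f1, f2, relation in SBA_RULES:
            if ((f1 == "*" or f1 in n1_features) and
                    (f2 == "*" or f2 in n2_features)):
                return relation
        return None

    def integrate(dba_result, sba_result):
        """dba_result: (role_word, score) or None; sba_result: str or None."""
        if sba_result and sba_result.startswith("semantic-role"):
            return [sba_result]              # SBA's role rules are preferred
        if dba_result and dba_result[1] >= 0.95:
            return [dba_result]              # very reliable DBA correspondence
        if sba_result and dba_result and dba_result[1] <= 0.8:
            return [sba_result]              # weak DBA output is discarded
        return [r for r in (dba_result, sba_result) if r]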
Table 2: Experimental results of N1 no N2 analysis.

                          Correct   R is correct, but the    R was detected,   R was not detected,
    Relation (R)                    detected correspondence  but incorrect     though R is possibly
                                    was incorrect                              correct
    Semantic-role (DBA)     137            19                      21                 19
    Semantic-role (SBA)      15            --                       2                  0
    Agent                    10            --                       1                  2
    Possession               32            --                       7                  0
    Belonging                12            --                       1                  2
    Time                     20            --                       1                  0
    Place                    23            --                       7                  2
    Modification             20            --                       3                 21

(4) rojin 'old person' no shozo 'portrait'
DBA: portrait: a painting(0.17) or photograph(0.17) of a face(0.18) or figure(0.0) of a real person(0.84)
SBA: N1:AGENT, N2:* → possession

DBA interpreted the phrase as a portrait on which an old person was painted; SBA detected the possession relation, which means an old person possesses a portrait. One of these interpretations would be preferred depending on context, but this is a perfect analysis of the kind expected for N1 no N2 analysis.

5 Experiment and Discussion

5.1 Experimental Evaluation

We collected 300 test N1 no N2 phrases from the EDR dictionary (Japan Electronic Dictionary Research Institute Ltd., 1995), the IPA dictionary (Information-Technology Promotion Agency, Japan, 1996), and the literature on N1 no N2 phrases, paying attention so that they had enough diversity in their relations. Then, we analyzed the test phrases with our system, and checked the analysis results by hand.

Table 2 shows the reasonably good results of both DBA and SBA. The precision of DBA, the ratio of correct analyses to detected analyses, was 77% (=137/(137+19+21)); the recall of DBA, the ratio of correct analyses to potential semantic-role relations, was 78% (=137/(137+19+19)). The result of SBA is also good, excepting the modification relation.

Some phrases were given two or more relations. On average, 1.1 relations were given to one phrase. The ratio at which at least one correct relation was detected was 81% (=242/300); the ratio at which all possibly correct relations were detected and no incorrect relation was detected was 73% (=219/300).

5.2 Discussion of Correct Analysis

The success ratio above was reasonably good, but we would like to emphasize the many interesting and promising examples in the analysis results.

(5) mado 'window' no curtain 'curtain'
curtain: a hanging cloth that can be drawn to cover a window(1.0) in a room(0.83), to divide a room(0.83), etc.

(6) osetsuma 'living room' no curtain 'curtain'
curtain: a hanging cloth that can be drawn to cover a window(0.82) in a room(1.0), to divide a room(1.0), etc.

(7) oya 'parent' no isan 'legacy'
legacy: property left on the death of the owner(0.84)

Mado 'window' no curtain must embarrass conventional classification-based methods; it might be place, whole-part, purpose, or some other relation like being close. However, DBA can clearly explain the relation. Osetsuma 'living room' no curtain is another interestingly analyzed phrase. DBA not only interprets it in a simple sense, but also provides us with the more interesting information that the curtain might be being used for partition in the living room.

The analysis result of oya 'parent' no isan 'legacy' is also interesting. Again, not only the correct analysis, but also additional information was given by DBA. That is, the analysis result tells us that the parent died. Such information would facilitate intelligent performance in a dialogue system analyzing:

User: I bought a brand-new car by the legacy from my parent.
System: Oh, when did your parent die? I didn't know that.
By examining these analysis results, we can conclude that the dictionary-based understanding approach can provide us with much richer information than the conventional classification-based approaches.

5.3 Discussion of Incorrect Analysis

It is possible to classify some of the causes of the incorrect analyses arising from our method. One problem is that a definition sentence does not always describe the semantic roles well, as follows:

(8) shiire 'stocking' no saikaku 'resourcefulness'
resourcefulness: the ability to use one's head(0.18) cleverly

Saikaku 'resourcefulness' can be the ability for some task, but the definition says nothing about that. On the other hand, the definition of sainou 'talent' is clearer about the semantic role, as shown below. Consequently, shiire 'stocking' no sainou 'talent' can be interpreted correctly by DBA.

(9) shiire 'stocking' no sainou 'talent'
talent: power and skill, esp. to do something(0.90)

This represents an elementary problem of our method. Out of 175 phrases which should be interpreted as a semantic-role relation based on the dictionary, 13 were not analyzed correctly because of this type of problem. However, such a problem can be solved by revising the definition sentences, of course in natural language. This is a humanly reasonable task, very different from the conventional approach, where the classification would have to be reconsidered or the classification rules modified.

Another problem is that sometimes the similarity calculated with the NTT semantic feature dictionary is not high enough for a correspondence, as follows:

(10) ume 'ume flowers' no meisho 'famous place'
famous place: a place being famous for scenery(0.20), etc.

In some cases the structure of the NTT semantic feature dictionary is questionable; in some cases a definition sentence is too rigid; in other cases an input phrase is a bit metaphorical.

As for SBA, most relations can be detected well by simple rules. However, it is not possible to detect a modification relation accurately only by using the NTT semantic feature dictionary, because modifier and non-modifier nouns are often mixed in the same semantic feature category. Another suitable resource should be incorporated; one possibility is to use the dictionary definition of N1.

6 Related Work

From the viewpoint of the semantic roles of nouns, several related research efforts have been conducted: the mental space theory discusses the functional behavior of nouns (Fauconnier, 1985); the generative lexicon theory accounts for the problem of creative word senses based on the qualia structure of a word (Pustejovsky, 1995); and Dahl et al. (1987) and Macleod et al. (1997) discussed the treatment of nominalizations. Compared with these studies, the point of this paper is that an ordinary dictionary can be a useful resource for the semantic roles of nouns.

Our approach using an ordinary dictionary is similar to the approach used to create MindNet (Richardson et al., 1998). However, the semantic analysis of noun phrases is a much more specialized and suitable application of utilizing dictionary entries.

7 Conclusion

The paper proposed a method of analyzing Japanese N1 no N2 phrases based on a dictionary, interpreting obscure phrases very clearly. The method can be applied to the analysis of compound nouns, like baseball player. Roughly speaking, the semantic diversity in compound nouns is a subset of that in N1 no N2 phrases. Furthermore, the method must be applicable to the analysis of English noun phrases.
The translated explanations in this paper naturally indicate the possibility.

Acknowledgments

The research described in this paper was supported in part by JSPS-RFTF96P00502 (The Japan Society for the Promotion of Science, Research for the Future Program) and Grant-in-Aid for Scientific Research 10143209.

References

Deborah A. Dahl, Martha S. Palmer, and Rebecca J. Passonneau. 1987. Nominalizations in PUNDIT. In Proceedings of the 25th Annual Meeting of ACL, pages 131-139, Stanford, California.

Gilles Fauconnier. 1985. Mental Spaces: aspects of meaning construction in natural language. The MIT Press.

Charles J. Fillmore. 1968. The case for case. Holt, Rinehart and Winston, New York.

Satoru Ikehara, Masahiro Miyazaki, Satoshi Shirai, Akio Yokoo, Hiromi Nakaiwa, Kentarou Ogura, Yoshifumi Oyama, and Yoshihiko Hayashi, editors. 1997. Japanese Lexicon. Iwanami Publishing.

Information-Technology Promotion Agency, Japan. 1996. Japanese Nouns: A Guide to the IPA Lexicon of Basic Japanese Nouns.

Japan Electronic Dictionary Research Institute Ltd. 1995. EDR Electronic Dictionary Specifications Guide.

Sadao Kurohashi and Makoto Nagao. 1994. A syntactic analysis method of long Japanese sentences based on the detection of conjunctive structures. Computational Linguistics, 20(4).

Sadao Kurohashi and Makoto Nagao. 1998. Building a Japanese parsed corpus while improving the parsing system. In Proceedings of the First International Conference on Language Resources & Evaluation, pages 719-724.

Sadao Kurohashi, Masaki Murata, Yasunori Yata, Mitsunobu Shimada, and Makoto Nagao. 1998. Construction of a Japanese nominal semantic dictionary using "A NO B" phrases in corpora. In Proceedings of the COLING-ACL '98 workshop on the Computational Treatment of Nominals.

Catherine Macleod, Adam Meyers, Ralph Grishman, Leslie Barrett, and Ruth Reeves. 1997. Designing a dictionary of derived nominals. In Proceedings of Recent Advances in Natural Language Processing, Tzigov Chark, Bulgaria.

James Pustejovsky. 1995. The Generative Lexicon. The MIT Press.

Stephen D. Richardson, William B. Dolan, and Lucy Vanderwende. 1998. MindNet: acquiring and structuring semantic information from text. In Proceedings of COLING-ACL '98.

Akira Shimazu, Shozo Naito, and Hirosato Nomura. 1987. Semantic structure analysis of Japanese noun phrases with adnominal particles. In Proceedings of the 25th Annual Meeting of ACL, pages 123-130, Stanford, California.

Eiichiro Sumita, Hitoshi Iida, and Hideo Kohyama. 1990. Translating with examples: A new approach to machine translation. In Proceedings of the 3rd TMI, pages 203-212.

Jyunichi Tadika, editor. 1997. Reikai Shougaku Kokugojiten (Japanese dictionary for children). Sanseido.

Yoichi Tomiura, Teigo Nakamura, and Toru Hitaka. 1995. Semantic structure of Japanese noun phrases NP no NP (in Japanese). Transactions of Information Processing Society of Japan, 36(6):1441-1448.
Lexical Semantics to Disambiguate Polysemous Phenomena of Japanese Adnominal Constituents

Hitoshi Isahara and Kyoko Kanzaki
Communications Research Laboratory
588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe, Hyogo, 651-2401, Japan
{isahara, kanzaki}@crl.go.jp

Abstract

We exploit and extend the Generative Lexicon Theory to develop a formal description of adnominal constituents in a lexicon which can deal with linguistic phenomena found in Japanese adnominal constituents. We classify the problematic behavior into "static disambiguation" and "dynamic disambiguation" tasks. Static disambiguation can be done using lexical information in a dictionary, whereas dynamic disambiguation requires inferences at the knowledge representation level.

1 Introduction

Natural language processing must disambiguate polysemous constituents in the input sentences. A good description of the information necessary for disambiguation in the lexicon is crucial for high-quality NLP systems. This paper discusses the treatment of linguistic phenomena in Japanese adnominal constituents, and it focuses on how to generate the same semantic representation from different syntactic structures, and how to generate different semantic representations from a semantically ambiguous sentence. We exploit and extend the Generative Lexicon Theory (Pustejovsky, 1995; Bouillon, 1996) to develop a formal description of adnominal constituents in a lexicon which can offer a solution to these problems.

We classify the problematic behavior of Japanese adnominal constituents into "static disambiguation" and "dynamic disambiguation" tasks. Whereas static disambiguation can be done using the lexical information in a dictionary, dynamic disambiguation needs inferences at the knowledge representation level. This paper mainly discusses dynamic disambiguation.

2 Classification of the Usage of Japanese Adnominal Constituents

On consideration of the syntactic relations between adnominal constituents and their head nouns, we find that some adnominal constituents can appear both in the attributive and predicative positions (Sakuma, 1967; Martin, 1975; Makino and Tsutsui, 1986). However, some adjectives express different meanings when they appear in one or the other position, and some adjectives can appear only in one of these two positions (Hashimoto and Aoyama, 1992).

We have classified the semantic relations between adnominal constituents and their modified nouns, based on whether paraphrasing from the attributive position to the predicative position is possible or not. There are three possibilities (Ad. = adnominal constituent; N = head noun of the noun phrase which is modified by Ad.):

(Type A) A paraphrase can be made without changing the modifying relations semantically.
Ad. + N → N ga Ad. (N is Ad.)

(Type B) A paraphrase can be made only when a noun is restricted by its context: the presence of modifiers or determiners, e.g., articles.
Ad. + N → sono N wa Ad. (that N is Ad.)

(Type C) A paraphrase cannot be made at all, i.e., only the attributive position is available.
Ad. + N → *none

We can classify the semantic relations between adnominal constituents and their head nouns into three types by the use of paraphrase. Paraphrases exist for both Type A and Type B; however, a paraphrase cannot be made at all for Type C. This difference is based on the fact that adnominal constituents in Types A and B modify the referents of their modified nouns, while adnominal constituents in Type C do not modify their head nouns directly.
Type C adnominal constituents modify (a) only a part of the meanings which their modified nouns allow, (b) the contents of the referents of their modified nouns, or (c) the states of being of the referents of their modified nouns. In this paper, we do not describe the semantic relations of (b) in detail, but discuss the semantic relations of (a) and (c) in the following section. There is a set of adnominal constituents which has the function of both adnominal and adverbial constituents (Teramura, 1991), and the third relation (c) above is the adverbial semantic relation which holds between adnominal constituents and their head nouns.

3 Classification of Problematic Behavior of Japanese Adnominal Constituents

It is important for the analysis of adjectives to consider what the head noun denotes in the sentence (Bouillon, 1996). Also, when we analyze word meanings, it is important to take both context and our world knowledge into account (Pustejovsky, 1995; Lascarides and Copestake, 1998). In this section, the behavior of Japanese adnominal constituents is classified into three types, depending on how the semantic representation of noun phrases is generated from information in the lexicon (Kanzaki and Isahara, 1997; Kanzaki and Isahara, 1998). The types are: (1) the type where one must infer the attribute of the modified noun which is expressed by the adnominal constituent, (2) the type which necessitates inferences that change the structure of the semantic representation, and (3) the type whose adnominal constituents do not add information to the modified nouns but constrain the relations between constituents in the text. These types are explained in this section. Both semantic types A and B correspond to syntactic types 1 and 2; Type C corresponds to type 3.

3.1 Adnominal Constituents that Express the Attributes of the Modified Noun [Static disambiguation]

This is the case where an adnominal constituent modifies a head noun semantically. Adnominal constituents modify nominals syntactically, and most of them also modify their head nouns semantically. Here, the "analysis" of the relationship between adnominal constituents and their head nouns concerns the choice of the particular attribute of the noun which the adnominal constituent modifies. There are two types of inferences for disambiguation.

3.1.1 Adnominal Constituents that Express Unique Inherent Attributes of the Modified Noun

This is the case in which the relation between the adnominal constituent and its modified noun, i.e., which slot of the modified noun the modifier fills, can be predicted. In Example 1, yuruyaka_na (gentle) is the attribute value of an instance of the concept keisha (slope). The instance keisha (slope) involves a unique inherent attribute, i.e., "the angle (degree) of the slope"; therefore yuruyaka_na (gentle) is taken to be a value on the scale of the slope. The noun in this example has a unique inherent attribute whose value is a number or intensity.

Example 1: yuruyaka_na keisha (gentle slope)

[Semantic representation: YURUYAKA_NA (gentle) fills the unique degree slot of KEISHA (slope).]

3.1.2 Adnominal Constituents that Express One of the Major Attributes of the Modified Noun

This is the case in which the NLP system must identify the slot of the modified noun which is filled by the modifier. Most nouns do not have a unique inherent attribute but have several attributes that adnominal constituents may embody.
In Example 2, otoko (man) has several major attributes, e.g., name, age, character, and physique. An understanding system must choose a suitable attribute (i.e., physique in this example) from these attributes to plug the information into.

Example 2: oogara_na otoko (large man)

[Semantic representation: OOGARA_NA (large) fills the physique slot of OTOKO (man), which also carries name, age and character slots.]

These types of adjectives can appear both in the predicative position and in the attributive position without changing their meanings (Sakuma, 1967; Teramura, 1991; Hashimoto and Aoyama, 1992). Oogara_na (large) in Example 2 can appear in the predicative position, i.e., sono otoko wa oogara_da (that man is large), with the same meaning that the man has a big physique.

We cannot decide on one particular attribute of the head noun without suitable semantic information. Also, still another problem remains here, namely to identify whether the sentence needs a generic reading or whether it represents an instance of the concept.

3.2 Adnominal Constituents that Express Attributes of the Situation Inferred from the Modified Noun [Dynamic disambiguation 1]

In some cases, adnominal constituents do not modify instances of the nouns themselves, but modify, instead, instances of events, situations, or knowledge that are inferred from (the context of) the modified noun.

3.2.1 The Case in which New Elements must be Inferred in the Semantic Representation

There are cases in which we have to infer new elements in the semantic representation so as to represent the semantic relations between adnominal constituents and their modified nouns.

In Example 3, the adjective modifies some event participated in by the household members. A house cannot have a temporal scale as an attribute; however, an event, in this example spring-cleaning, can be inferred from the context, and therefore the adjective hayai (early) can modify the event, e.g., the beginning time of the spring-cleaning.

However, its computational implementation is not so simple, because there are metonymic extensions going on in this example. For example, even if an NLP system can find "spring-cleaning" in the context as an event whose "beginning time" is "early," the system must infer the people living there from "house" and identify them as the agent of the spring-cleaning. Some of these inferences are done using syntactic structure in English; however, that is not possible in Japanese. Such metonymic extensions are essential for determining the nature of the modifier/modified relationships in Japanese (Matsumoto, 1993).

Example 3: (oosoji_no) hayai ie ((spring-cleaning) early house), "the house whose members begin spring-cleaning early"

[Semantic representation: HAYAI (early) fills the beginning-time slot of OOSOJI (spring-cleaning), whose agent HITO (person) is a member of IE (house).]
Example 4 ijo_na sensei-jutsu_no aikosha abnormal astrology enthusiast, one who likes something very much As a whole interpretation IJO AIKOSHA SENSEI-JUTSU abnormal) (enthusiast) (astrology) I Specific interpretation IJO AIKO-SURU SENSEI-JUTSU (abnormal) (like) (astrology) object object ~ [ To treat the "specific" interpretation, the system has to perform the concept conversion (Isahara and Uchida, 1995) shown in Figure 1. As for the "as a whole" interpretation, an adnomi- nal constituent modifies an extension of the modifiee (e.g., what is abnormal is a person who is an astrol- ogy enthusiast). Therefore, the object slot of (an instance of) "abnormal" is filled by (an instance of) "enthusiast." In the "specific" interpretation, how- ever, an adnominal constituent modifies part of the intensions to which the modifiee refers (e.g., what is abnormal is the way that person likes something). An analysis module converts the semantic structure (Figure 1) and the object slot of (an instance of) "abnormal" is filled by (an instance of) "like" which is extracted by the concept conversion. 1There is one more interpretation that "an enthusiast who likes abnormal astrology," however, this interpretation is odd in this example. 491 AIKOSHA (enthusiast) something Concept Conversion AIKO-SURU like) something object~ 'l agent----[-] [ ] ~HITO (perTn) "enthusiast" J Figure 1: Concept Conversion The concept conversion is, in a sense, a paraphrase of the original expression. The concept conversion is also useful in analyzing Example 5. Example 5 sensei-jutsu_no ijo_na aikosha astrology abnormal enthusiast Example 5 is not ambiguous, i.e., the only inter- pretation is % person who likes astrology abnor- mally," because the "as a whole" interpretation is not possible. Example 5 can be paraphrased into the phrase shown in Example 6. If I-~ (sensei- jutsu, astrology)A is semantically an object of r~ ~-"f B (aiko_suru, like)A, r~$~c (ijo_ni, abnor- mally)2 cannot modify r~ (mono, person)J , be- cause the dependencies in this interpretation cross each other. Example 6 sensei-julsu_wo ijo_ni aiko_suru mono astrology abnormally like person Example 7 exhibits the adnominal constituent F~?~ (ijo, abnormal)J in a predicative position. Using the extension of the Late Closure strategy (Frazier, 1979), only the "as a whole" interpretation is possible. Example 7 aikosha_ga ijo_da enthusiast abnormal "The enthusiast is abnormal." 3.3 Adnominal Constituents that Constrain the Relations between Constituents in the Text [Dynamic disambiguation 2] 3.3.1 Adnominal Constituents that do not Add Information to their Modified Nouns Directly Adnominal constituents mostly modify nouns syn- tactically and also semantically. However, some adnominal constituents work differently, i.e., they modify nouns syntactically but not semantically. Japanese nominal adjectivals F~: (junsui_na, pure)A, F~.~::~: (kanzen_na, perfect/complete).] and [-:~ < (mattaku, entire).] are typical examples of this type. 1-i~4~ (junsui_na, pure)A in Examples 8-10 and [-~.~: (kanzen_na, complete)_l in Examples 11-13 play different semantic roles. Example 8 junsui_na pure "pure water" Example 9 t~at ekkyo_wa border transgression mizu water junsui_na seiji_bomei datta. pure political (copula, flight past) "The border transgression was a pure political flight." Example 10 junsui_na churitsu_wa mutsukashii. pure/strict neutrality difficult "Strict neutrality is difficult." Example 11 kanzen_na shisutemu dewa nai. 
(complete, system, (copula), (negation)), "This is not a complete (perfect) system."

Example 12: nousakumotsu_wa kanzen_na syohizai dearu (farm products, complete, consumer products, (copula)), "Farm products are nothing but consumer products."

Example 13: kanzen_na mujin_no yakata (complete, uninhabited, house), "absolutely uninhabited house"

In Example 8, junsui_na (pure) describes the purity of the water, i.e., it describes something within the "water" concept. The adnominal constituent oogara_na (large) in Example 2 expresses a value of an attribute of the modified noun, i.e., otoko (man). In contrast, the adnominal constituent junsui_na (pure) in Example 8 does not express a value of an attribute of the modified noun, i.e., mizu (water), but expresses the way some values fill attributes of this modified noun: "nothing but water is a filler of an attribute of the referent." In Example 11, kanzen_na (complete) describes the completeness of a system in the same way, i.e., it describes something within the "system" concept, e.g., the function of the system. (Case 1)

In Example 9, junsui_na (pure) does not add information as to the purity of this political flight; rather, it expresses that there is only one purpose (or motivation), i.e., political flight, which explains this "border transgression." In other words, there is no other motivation, such as sightseeing or economic reasons, which would explain this action. Junsui_na (pure) describes something outside of the "political flight" concept. In Example 12, kanzen_na (complete) plays a very similar role to that in Example 9. It notes that there is only one purpose, i.e., consumer products, which describes "farm products." In other words, there is no other usage, such as raw materials, for these products. (Case 2)

Both referents in Examples 8 and 9 are still "water" or "political flight" even if they are not "pure"; however, Example 10 means that strict neutrality is difficult, and a "not pure" neutrality is not a neutrality in the strict sense of the word. Junsui_na (pure) describes the concept "neutrality" itself. As for Example 13, "not absolutely" uninhabited is likewise not uninhabited in the strict sense of the word. (Case 3)

There are similar phenomena involving many other adnominal constituents in Japanese. A formal treatment of these phenomena will be discussed in Section 4.1.

3.3.2 Adnominal Constituents which Represent a State of Being

Some adnominal constituents, e.g., rippa_na (splendid), can be used in the attributive position so as to express the state of the modified noun. In Example 14, the adnominal constituent rippa_na (splendid) does not describe aspects of the island itself, but the nature of what is required for it to be considered an island. In other words, "this really is an island, not a large rock."

Example 14: rippa_na shima (splendid island)
"Once this ocean mountain is elevated, or, as we described above, its top appears above the ocean from the sea level falling, it will be a real island."
Whereas the adnominal constituents in Examples 8 and 11 can appear both in the attributive position and in the predicative position without changing their meanings, and those in Examples 9, 10, 12 and 13 cannot appear in the predicative position without changing their meanings, when this rippa_na (splendid) occurs in a predicative position, i.e., shima_ga rippa_da, it means that "the island is splendid," a state of the island.²

As rippa_na shima (splendid island) without context has two interpretations, i.e., describing aspects of an island itself ("the island is splendid") and describing the nature of what is required for it to be considered an island, when an NLP system analyzes this noun phrase it has to choose a suitable interpretation from these two possibilities in the context of the semantic relations between adnominal constituents and their modified nouns. Furthermore, in order to interpret the semantic relations between adnominal constituents and their modified nouns, it is sometimes necessary to infer instances of newly introduced concepts using both contextual and world knowledge. Example 3, hayai ie (early house), in Section 2.2.1 illustrates this. It is important for a lexical semantic system to take both context and our world knowledge into account. We should analyze the semantic functions of lexical items from several points of view.

²"Real" is a similar example in English. "A real friend" means "true friend," and "His friend is real" means "his friend is not imaginary."

4 Formal Treatment of Problematic Phenomena of Japanese Adnominal Constituents

In this section we discuss the formal treatment of the phenomena described in Section 3.3.1, i.e., Cases 1, 2 and 3.

4.1 Hypothesis and Definition

To handle these phenomena, we have established the following hypothesis and definition.

[HYPOTHESIS]
(a) There is something which can be shared by a plural number of constituents, e.g., there is some semantic definition which can contain/represent/embody/refer to various items.
(b) junsui_na (pure) works to constrain this number to one.

Extending the Generative Lexicon format, something pure is represented as

  λx[stg(x) ∧ Telic = !1 λe[φa, φb, φc, ...]]

Here, '!1' is a function which restricts the number of its elements to one.

[DEFINITION]
junsui_na (pure) is represented as

  pure ⇒ λSemN.λNewArg.[p(SemN, NewArg)]  (1)

Here, SemN and NewArg are underspecified types. In syntax, an adnominal constituent takes a noun as a syntactic argument and returns the same syntactic category (i.e., a noun). Semantically, it takes the semantics of the noun first, and returns the semantics of a one-place function; that is, it narrows the semantic definition of the noun. Starting from (1), suppose we define 'p' as follows:

Case 1 (SemN is constitutive/mass material, therefore NewArg is too.):

  p ⇒ ∀y.[¬SemN(y) → ¬(y ∈ NewArg)]  (2)

This is logically equivalent to the following:

  p ⇒ ¬∃y.[¬SemN(y) ∧ (y ∈ NewArg)]  (3)

In Example 8, junsui_na mizu (pure water), SemN is water and NewArg is some liquid referred to by this example sentence. That is, "anything that is not water does not exist in this liquid."
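A toy model-theoretic reading of '!1' and of Case 1 can be given over finite sets of individuals; the names and representation below are our own illustration, not the formalism of the paper.

```python
# A toy reading of '!1' and of Case 1 ((2)/(3)) over finite sets.

def only(xs):
    """'!1': restrict a collection to exactly one element, else fail."""
    xs = set(xs)
    if len(xs) != 1:
        raise ValueError("!1 applies only to singleton collections")
    return next(iter(xs))

def pure_case1(sem_n, new_arg):
    """p of (2)/(3): nothing in NewArg falls outside SemN."""
    return all(sem_n(y) for y in new_arg)

is_water = lambda y: y.startswith("water")
liquid = {"water-part-1", "water-part-2"}
print(only({"political flight"}))                    # '!1' on a single view
print(pure_case1(is_water, liquid))                  # True: 'pure water'
print(pure_case1(is_water, liquid | {"alcohol"}))    # False: impure mixture
```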
Case 2 (SemN is an individual entity/event.):

  p ⇒ ∀y.[¬(y = SemN) → ¬view(NewArg, y)]  (4)

In Example 9, ekkyo_wa junsui_na seiji_bomei datta, SemN is "a political flight." The sentence refers to the fact that "the border transgression is a pure political flight." Thus it is associated with the interpretation of NewArg that there is only one view of this action (border transgression), i.e., "political flight." It seems that the semantics of "pure" shares the basic logical structure seen in (2) and (4); however, Case 3 requires a different treatment.

Case 3 (SemN is a predicate/state.): If SemN is a predicate/state P, NewArg is generated as a sortal array of P and ¬P. The binary predicate is coerced into a polar predicate.

As for Example 10, neutrality is originally a binary sortal predicate, that is, ∀P[neutrality(P) ∨ neutrality(¬P)]. In this case, neutrality is coerced into two polar predicates, i.e., α, which denotes "strictly neutral," and β, which denotes "strictly not neutral." ¬α and ¬β denote "not strictly" neutral, or a range of situations which can be considered as neutral.

4.2 Adnominal Constituents and Adverbial Constituents

Japanese nominal adjectivals, such as junsui (pure), are inflected as follows:³

  junsui_na (pure) - adnominal
  junsui_ni (purely) - adverbial
  junsui_sa (purity) - nominal

The nominal adjectival junsui (pure(ly)) modifies seiji-bomei (political flight) syntactically in Example 15 (adnominal) and modifies datta (copula, past) syntactically in Example 16 (adverbial). These two sentences have different syntactic structures; however, they have almost the same meaning.⁴ Descriptions in a lexicon of nominal adjectivals, such as junsui (pure(ly)), must be able to explain this kind of linguistic phenomenon.

³These expressions belong to the same syntactic category, nominal adjectival. In English, on the contrary, the adnominal constituent "pure" is an adjective and the adverbial constituent "purely" is an adverb.

⁴Readers might think that the Japanese copula in general syntactically takes a noun and returns some kind of verb phrase. Then, as in the case of the English copula, the semantics of the Japanese copula would be "transparent," and the function of "pure," taking either the adnominal or the adverbial form, should apply to the semantics of the common noun, which is indistinguishable from other one-place verbs. However, some Japanese adjectives, e.g., akai (red), can be used only as an adnominal constituent: akai hako da (red-ADNOMINAL box copula) vs. *akaku hako da (red-ADVERBIAL box copula). The copula in Examples 15-17 has a meaning similar to the verb "exist"; therefore, it is not "transparent." Thus, it is necessary to analyze each of these sentences differently, as we would sentences with ordinary verbs.

Example 15
junsui_na seiji-bomei datta.
pure political-flight (copula, past)

Example 16
junsui_ni seiji-bomei datta.
purely political-flight (copula, past)

Example 17
seiji-bomei datta.
political-flight (copula, past)

A nominal refers to an extension of a thing with one or several intension(s). A copula refers to an instance of a state, which is a subconcept of an event. This state also has one or several extension(s) of events. The meanings of Examples 15, 16 and 17 are a function (or mapping) from extensions, i.e., "the border transgression," to intensions, i.e., "alternative views about a certain event." Example 17, "the border transgression was a political flight," without "pure," corresponds to alternative views about "the border transgression," where the particular view as "political flight" is positively asserted and others are left unstipulated. Example 17 can thus be represented as follows:

  state1(views = extension1(views = political flight, intension12, ...)
                 extension2(views = intension21, intension22, ...)
                 extension3(views = intension31, intension32, ...)
                 ...)
junsui (pure(ly)) in its adnominal usage (Example 15) corresponds to the views of an extension and constrains the number of intensions to one by using the function '!1' introduced in Section 4.1, as shown in the following:

  extension1(views = intension1, intension2, ...)
  → extension1(views = intension1)

Then Example 15 is represented as follows:

  state1(views = extension1(views = political flight)
                 extension2(views = intension21, intension22, ...)
                 extension3(views = intension31, intension32, ...)
                 ...)

junsui (pure(ly)) in its adverbial usage (Example 16) corresponds to a state and singles out one extension using the function '!1', as the following shows:

  state1(views = extension1, extension2, ...)
  → state1(views = extension1)

Then Example 16 is represented as follows:

  state1(views = extension1(views = political flight, intension21, ...))

Strictly speaking, these three example sentences represent different meanings. However, one tends to take no notice of this difference in daily conversation. Here, we introduce a new hypothesis to explain the similarity of these representations.

[HYPOTHESIS]
Extensions and intensions which are not mentioned by overt expressions are not stressed in the context. They contribute little to the interpretation of a sentence.

Therefore, Examples 15, 16 and 17 can be represented similarly as follows:

  state1(views = extension1(views = political flight))

The above simplification for Example 17 was done entirely following the above hypothesis; however, parts of the simplifications for Examples 15 and 16 were dependent on the presence of "pure." Therefore, the reliability of these simplifications differs. To discuss this interesting fact further is, however, beyond the scope of this paper.
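The two uses of '!1' can be sketched over a simple dictionary encoding of the views representation; the Python structures below are made up for illustration and are not part of the original system.

```python
# A sketch of the views representation and the two uses of '!1'.

state1 = {  # Example 17: all views left open
    "extension1": ["political flight", "intension12"],
    "extension2": ["intension21", "intension22"],
}

def pure_adnominal(state, ext, view):
    """Example 15: restrict one extension's views to a single intension."""
    s = {k: list(v) for k, v in state.items()}
    s[ext] = [view]
    return s

def pure_adverbial(state, ext):
    """Example 16: single out one extension of the state."""
    return {ext: list(state[ext])}

print(pure_adnominal(state1, "extension1", "political flight"))
print(pure_adverbial(state1, "extension1"))
```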
5 Conclusion

This paper discussed the treatment of linguistic phenomena involving Japanese adnominal constituents. It focused on how to generate the same semantic representation from different syntactic structures, and how to generate different semantic representations from a semantically ambiguous sentence. We classified the characteristics of adnominal constituents into (1) the type where one must infer what attribute of the modified noun is expressed by the adnominal constituent, (2) the type that necessitates inferences that change the structure of the semantic representation, and (3) the type where the adnominal constituents do not add information to their modified nouns but constrain the relations between constituents in the text.

To achieve good results in natural language processing, e.g., high-quality machine translation, we have to consult lexicons based on concepts, and so we exploited a concept representation method based on Generative Lexicon Theory and a concept conversion module. Using these techniques, we explained how the semantic ambiguities of adnominal constituents can be dealt with by analyzing the modification relations between adnominal constituents and their modified nouns.

For a more precise explanation of adnominal expressions within our framework, it would be necessary to treat (1) the scope of negation, (2) negation and the position of adnominal constituents, i.e., attributive and predicative position, and (3) disambiguation with regard to the context and the position of adnominal constituents.

Acknowledgment

We would like to thank Dr. James Pustejovsky of Brandeis University and Dr. Ann Copestake of CSLI for their extensive discussions on the formal treatment of the linguistic phenomena treated in this paper.

References

P. Bouillon. 1996. Mental state adjectives: the perspective of generative lexicon. In Proc. of COLING-96.
L. Frazier. 1979. On Comprehending Sentences: Syntactic Parsing Strategies. Ph.D. thesis, UMass at Amherst.
M. Hashimoto and F. Aoyama. 1992. Keiyoshi no 3tsu no yoho (three usages of adjectives). Keiryo Kokugogaku (Mathematical Linguistics), 18(5). (in Japanese).
H. Isahara and Y. Uchida. 1995. Analysis, generation and semantic representation in contrast -- a context-based machine translation system. Systems and Computers in Japan, 26(14).
K. Kanzaki and H. Isahara. 1997. Lexical semantics for adnominal constituents in Japanese. In Proc. of the Natural Language Processing Pacific Rim Symposium.
K. Kanzaki and H. Isahara. 1998. The semantic connection between adnominal and adverbial usage of Japanese adnominal constituents. In Proc. of the Workshop on "Lexical Semantics in Context: Corpus, Inference and Discourse" at the 10th European Summer School in Logic, Language and Information.
A. Lascarides and A. Copestake. 1998. Pragmatics and word meaning. Journal of Linguistics, 34(2).
S. Makino and M. Tsutsui. 1986. A Dictionary of Basic Japanese Grammar. The Japan Times.
S. Martin. 1975. A Reference Grammar of Japanese. Yale University Press.
Y. Matsumoto. 1993. Nihongo meisi-ku koozoo no goyooronteki koosatu (pragmatics of Japanese noun phrases). Nihongogaku (Japanese Linguistics), 12(11). (in Japanese).
J. Pustejovsky. 1995. The Generative Lexicon. The MIT Press.
K. Sakuma. 1967. Nihonteki Hyogen no Gengo Kagaku (Linguistics of Japanese Expressions). Kosei-sya Kosei-kaku. (in Japanese).
H. Teramura. 1991. Nihongo no shintakusu to imi III (Japanese syntax and meanings III). Kuroshio shuppan.
Computational Lexical Semantics, Incrementality, and the So-called Punctuality of Events

Patrick CAUDAL
TALANA, UFRL, Université Paris 7
2, place Jussieu
75251 Paris Cedex 05, France
caudal@linguist.jussieu.fr

Abstract

The distinction between achievements and accomplishments is known to be an empirically important but subtle one. It is argued here to depend on the atomicity (rather than punctuality) of events, and to be strongly related to incrementality (i.e., to event-object mapping functions). A computational treatment of incrementality and atomicity is discussed in the paper, and a number of related empirical problems considered, notably lexical polysemy in verb-argument relationships.

Introduction

Ever since Vendler (1957) introduced it, the so-called punctuality of achievements has been the object of many theoretical contests. After having demonstrated that punctuality actually breaks up into two distinct notions, namely non-durativity and atomicity, I will argue here for a compositional semantic account of the latter. I will show that (non-)atomicity interacts closely with the notion of incrementality, as formulated in Dowty (1991), and that this property of verbs should be lexically encoded, although it is subject both to semantics- and pragmatics-driven variations. I will finally discuss the formal specifications an NLP system could use to make predictions about atomicity and incrementality.

1. On Vendler's so-called achievements

Vendler (1957) defined achievements and accomplishments as respectively punctual and durative. He based his claims on two main tests, noting that at <time expression> adverbials combine with achievements but not accomplishments, whereas finish combines with accomplishments but not achievements:

(1a) At what time did you reach the top? At noon sharp.
(1b) At what moment did you spot the plane? At 10:53 A.M.
(2a) *John finished leaving.
(2b) John finished drawing the circle.

Dowty (1986) and Moens and Steedman (1988) decisively questioned the coherence of the class of achievement verbs, arguing that not all of them are non-durative. As noted above, Vendler identifies punctual events through the conjunction of the (positive) at and (negative) finish tests. However, they do not always yield comparable results:

(3a) Karpov beat Kasparov at 10.00 P.M.
(3b) *The Allies beat Germany at 10.00 P.M.
(4a) *Karpov finished beating Kasparov.
(4b) The Allies finished beating Germany.

The at test fails to characterize (3b) as an achievement because it is durative, whereas (3a) passes this very test because it is non-durative. On the contrary, the finish test in (4) yields an identical result for the beating of a chess player and that of a whole nation. It appears thus that the finish test does not indicate non-durativity, contrary to the at test, which refuses durative events, and that telic events such as (3b) fall outside Vendler's classification, since they fail both the finish test (unlike accomplishments) AND the at test (unlike achievements). Since it is desirable that achievements should include events such as (3b), durativity should not be considered a necessary property of achievements. The salient common point between (3a) and (3b) is that both events lack proper subparts, i.e., are atomic. Atomicity should thus be regarded as the defining property of achievements; it can be tested with finish.
2. Atomicity as a semantic issue

Many authors, including Verkuyl (1993) and Jackendoff (1996), have denied atomicity any semantic content, and have argued that it is a pragmatic category. I do not intend to claim here that atomicity is not subject to pragmatic constraints. The following examples identify one such constraint, i.e., the relative size of arguments of verbs of consumption:

(5a) ??John finished eating the raspberry.
(5b) The bird finished eating the raspberry.

(5a) suggests that raspberries are so small with respect to a human 'eater' that eat denotes an atomic event. But the same does not hold true of birds (cf. (5b)). No attention will be paid to this kind of pragmatic constraint in this paper. Yet I will demonstrate here that atomicity does possess a semantic content, and that therefore it can be regarded as an aspectual category. Consider the following examples:¹

(6a) *The soldier finished crossing the border.
(6b) The soldiers finished crossing the border.
(7a) *John finished slamming the door open.
(7b) John finished slamming the doors open.

The plural NPs the soldiers and the doors possess proper subparts, along which the crossing and slamming events in (6b) and (7b) are measured, making those events non-atomic (there are several distinct subevents of one door being slammed, and of one soldier crossing the border); compare with the atomic (6a) and (7a), where those very NPs are singular. The variation in noun quantification being a semantic one, atomicity should clearly receive some form of semantic content. Moreover, it should be noted that atomic events are not compatible with the progressive perfect, whereas non-atomic ones freely combine with it:²

(8a) *The soldier has been crossing the border. (OK with iterative, non-atomic reading)
(8b) The soldiers have been crossing the border.

Those facts support a semantic view of atomicity.³

¹Similar examples were proposed by Declerck (1979), but were discussed in terms of durativity, and not of atomicity.
²Complementary tests, such as the different readings of in adverbials, will not be studied here for want of space.
³Caudal (1998) discusses at length related examples involving collection-referring nouns (e.g., orchestra or regiment), and shows that they behave similarly, cf. The regiment finished crossing the border.

3. Towards a semantic account: (non-)atomicity and incrementality

The above data suggests an interesting solution to this puzzle: atomicity seems to be related to the notion of incrementality, as formulated in Dowty (1991) (see also graduality in Krifka 1992). To my knowledge, the concept of incrementality (originally proposed to account for the telicity of events) has never been discussed in the light of that of atomicity, although this is an obvious thing to do, both concepts being about the presence or absence of subevents in the internal structure of events. I will undertake to bridge this gap here.

3.1 Incrementality and delimiting arguments

Dowty defines incrementality as a property of verbs whose development can be measured along the inner structure of one of their arguments (which he calls the incremental theme):

(9) John drank a glass of beer.

In (9), the development of the drinking event can be measured along the subparts of the glass of beer. Each subpart of the incremental theme argument is mapped onto a subpart of the corresponding event (a fact which Dowty (1991) and Krifka (1992) refer to as event-object homomorphism).
Dowty (1991) rejects ostensibly the possibility to treat as incremental themes the patient arguments of so-called punctual (i.e., achievement) verbs, such as slam open. According to him, incremental themes should be able to undergo a gradual change of state 4. Unfortunately, Dowty does not consider examples such as (7b), which exhibit an incremental behaviour although they include this very kind of patient argument. I will therefore reject Dowty's objection, and regard (7b) as incremental. It follows naturally from the above definition that incrementality entails non-atomicity: it implies that a situation's development possesses proper subparts, and therefore that it is non- atomic. But does non-atomicity entail incrementality, conversely ? I.e., are those two notions equivalent ? If not, how should they be connected ? In order to answer those questions in the following sections, I will make use of a rough feature-based notation: [+/-ATM] will express atomicity/non-atomicity, and [+/-INC] incrementality/non-incrementality. 3.2 Non-atomicity with incrementality I will call delimiting arguments the arguments of a verb serving as 'measures' (or 'odometers') for the corresponding event (e.g. the internal arguments of drink or slam open). It should be noted that this term is broader than that of incremental theme, since it includes e.g., patient arguments of so-called punctual verbs, which Dowty refused to regard as incremental themes. For the sake of simplicity, I will focus in this paper exclusively on internal delimiting arguments : (lOa) (lOb) (lla) (llb) John finished eating his apple. John finished eating his apples. *John finished throwing his stone. John finished throwing his stones. 4 Cf. Dowty (1991:568): Many traditional Themes...are not Incremental Themes. Many achievement verbs entail a definite change of state in one of their arguments...but never in distinguishable separate stages, i.e. subevents. (10) shows that eat can be [-ATM],[+INC] both with a definite singular and plural delimiting argument, whereas (11) shows that throw can be [-ATM],[+INC] only with a definite plural delimiting argument. The development of eating his apple is measured in (10a) along the quantity of apple remaining to eat, whereas that of throwing his stones in (lib) is measured along the successive individual stones being thrown away. I will extend the notion of incrementality to this latter kind of event-object mapping. Under this view, incrementality arises from delimiting arguments, and not only fore incremental themes. However, I will distinguish two types of incrementality, thereby preserving a distinction between Dowty's incrementality and the extension I proposed. I will call m-incrementality (for quantity of matter- incrementali~) the type of incrementality exhibited by (10a) and i-incrementality (for individual-inerementalitv) that exhibited by (lib). At least two classes of verbs can be distinguished in this respect" verbs like eat are capable of m-incrementality, i.e., incrementality with individual-referring delimiting arguments (they have an incremental themes in the sense of Dowty), whereas verbs like throw are only capable of i-incrementality, i.e., incrementality with collection-referring delimiting arguments (they lack an incremental theme in the sense of Dowty). Of course, non-atomicity can follow from either i or m-incrementality. Another type of incremental non-atomic events can be found in path-movement verbs : (12) Mary walked the Appalachian trail. 
(Tenny 1994) The development of the walking event can be measured along the explicit path argument the Appalachian trail in (12). It is therefore [-ATM],[+INC]. White (1994) proposed a generalized path-based incremental theme role to account for the semantic behaviour of both patient and path delimiting arguments, fairly akin to the present one, since it crucially relies on a similar individual / quantity of matter distinction. One could conclude at this point that 499 the present account of incrementality is sufficient to predict (non-)atomicity, and that non-atomicity and incrementality are equivalent notions. If that is right, then non-incremental events should be non-atomic. However, I will show in 3.3 that it is not the case. 3.3 Non-atomicity without inerementality Some non-atomic events lack a delimiting argument, so that the type of non-atomicity involved seems unrelated to incrementality : (13) John finished digesting his pudding. (14) John finished cooking the chicken. (15) John finished registering his son at the university. Contrary to (10) and (llb) , neither (13), (14) nor (15) are (necessarily) measured along the subparts of their patient arguments. (13) and (14) are rather measured along the state of the latter, which vary as time passes. In this sense, his pudding and the chicken do not behave like delimiting arguments, and those non-atomic situations are non-incremental ([-ATM],[-INC]). Some sort of non-argumental odometer seems to be required. In the case of (13) and (14), digest and cook receive a scalar result state, i.e., one that varies as time passes: John's chicken becomes (as a whole) closer to being (finally) cooked as time passes in (14), and John's pudding gradually turns (as a whole, and not bit by bit) into nutriments inside his stomach in (13) (see Caudal (1999a/b) for a treatment of such data). I will refer to this kind of incremental-like reading as scalarity. If one considers (15), things are somewhat different, as there exists some sort of predetermined series of stages through which one should pass in order to register at the university: John's son is closer and closer to being registered at the university as his father goes through them. I will refer to this kind of data as gradual scenarios. I will turn now to the computational treatment of incremental non-atomic events (section 4), before suggesting some ways of accounting for non-incremental non-atomic ones (section 5). 4. A formal, computational treatment of incremental non-atomic events A formal and computational treatment of incremental non-atomic events will be formulated here, relying on model-theoretic logics and on the Generative Lexicon framework (GL henceforth ; see Pustejovsky (1995) for an introduction). I will first discuss a few theoretical notions related to the internal structure of objects and events, in order to formalize m and i-incrementality. I will leave aside the treatment of incremental path- arguments, referring the interested reader to White (1994). 4.1 Internal structure of objects and events : Link's part-of operators Following Link (1983), I will oppose individuals (i.e., the denotata of nouns referring to individual entities) and collections (i.e., the denotata of definite plural NPs, collectives, etc. ; see Caudal (1998a)). Let A be the domain of entities (events or objects), structured as a semi- lattice. Let individual_.part_of be a partial order relation on individual entities (henceforth i-part or <i), connecting an individual to the collection it belongs to. 
Let Idi be the join operation on individuals and collections, y a collection and x an individual, such that x is an i-part of y. The definition of the meronymic operator <i was formulated by Link as follows : (16) Vx,y [x <i Y ---> x Ui y = y] Following again G. Link, I will define similarly a partial order relation on non-individual parts, m-part (or -<m), which connects an individual and its non-individual parts (e.g. a slab of stone to a rock). All those operators will apply both to events and objects in the model (events being reified). As a consequence, collection-referring NPs as well as i-incremental events are endowed with i-parts, whereas individual-referring NPs and m-incremental events possess m-parts. I will argue that incrementality depends both on lexical information and structural composition. Whether events will receive (or not) an incremental reading is determined at the structural level, depending on the interaction of 500 a verb with its delimiting arguments (modulo pragmatic constraints). I will now describe the lexical component of this compositional procedure. 4.2 Encoding incrementality within the Generative Lexicon framework I will propose here to encode lexically whether verbs are capable of m-incrementality or i-incrementality. It should be noted that although the ability to exhibit m-incrementality seems to be a constant lexical property, any potentially incremental verb can receive an i-incremental reading (but recall that not all verbs can be read incrementally). In the spirit of Krifka's object- event mapping functions (see K_rifka 1992), I will assume an i-inc aspectual semantic role function that relates the i-parts of an argument to the development of an event (causing it to become i-incremental with an appropriate delimiting argument), and a m-inc function that relates the m-parts of an argument to the development of an event (causing it to become m-incremental with an appropriate delimiting argument). The following event/object mapping predicate MAP-I (applying only to i-inc aspectual roles) can be derived from Krifka's MAP-O/E (mapping to objects/events) predicates (see Krifka 1992:39) by replacing his standard partial order operator with --<i : (17) MAP-I : VR[MAP-I(R) ~ MAP-Ei (R) ^ MAP-Oi (R)] VR[MAP-Ei (R) ~-~ Ve,x,x' [R(e,x) ^ x'<i x ----> He' [e' <i e ^ R(e',x')] ] ] VR[MAP-Oi (R) <---> Ve,e',x [R(e,x) ^ e'<i e ---> qx' [ x'<i x ^ R(e',x')] ] ] A similar formulation can be given for m-incrementality ; replace --<i with -<m in (17). Thus, by combining Link's part-of operators with Krifka's event-object mapping functions, atomicity construal functions can be formulated. Finally, GL will provide us with the proper computational lexical machinery in which to insert those functions : ! will propose to encode those aspectual roles within the argument structure (ARGSTR) feature in GL, by making them bear directly on the relevant argument position. The following entries for eat and throw illustrate such an encoding for internal arguments (again, external arguments are left aside for the sake of simplicity) : throw :ARGSTR = EVENTSTR = QUALIA = eat ARGSTR = EVENTSTR = QUALIA = ~GI = x-'ind G2 y: Ind, i-inc (y, el) ~ i = e~:throw_a~t 2 e2 : Binary_RStag~ AGENTIVE = throw_act(ez,x,y) ~A~ G1 = x=ind G2 y: ind, m-inc (y, ex) 2 e2 : binary-RStage~ AGENTIVE = eat act(ex,x,y) =/i-inc (x, e) indicates that the internal structures of subevent e and argument x are related by an homorphic mapping. 
If x possesses proper subparts, then e will be incremental ; the whole point remains that incrementality is lexically licensed but structurally construed. The Binary_RStage subevent refers to the complex result state (Result Stage ; cf. Caudal 1999b) attached to a transition such as eat. Its binary structure expresses a change-of-state. I will now consider some difficulties related to lexical polysemy and verb-argument relationships. 4.3 Lexical polysemy and incrementality I assume here that the incrementality functions i-inc / m-inc are lexically specified. Yet the full story is a lot more complicated. Much data suggests that those functions can be modified or determined (when they are lexically underspecified) in context. An overview of a number of problems and a tentative treatment within GL will be proposed here. 4.3.1 Co-composition and inerementality The machinery proposed above is not sufficient to account for subtle cases of lexical polysemy originating in the interaction between the verb and its arguments. Some data would be best treated in terms of co-compostion within GL 5 : 5 Roughly, co-composition refers to cases of lexical polysemy in which a lexical item receives a 'new' 501 (18a) (18b) *Le moteur acheva de produire un bruit dtrange. The engine finished emitting a strange noise. Yannig acheva de produire son article. Yannig finished writing his paper. The French verb produire yields an i-incremental reading in (18a), vs. a m-incremental reading in (18b). Arguably, produire means 'to cause to come into existence', and therefore makes use of the content of the AGENTIVE qualia role (i.e., the qualia role indicating how a type is brought into existence) of its internal argument to determine the corresponding 'creation' event. The AGENTIVE roles of bruit and article can be represented as follows : (19) Fbrult ARGI =.. sound I I A R G S T R = ~ UALIA AGENT IVE = | ~4 t_sound (e, y, x)J (20) IAR rticle GSTR = ARGI = x : info I UALIA= AGENTIVE = write(e,y,x)~ By virtue of the co-composition operation involving events specified in the AGENTIVE of bruit and article, produire interacts differently with its internal argument, and receives different event structures. The e~_ e_so~-aa (e, y, z) event in (19) comes along an i-inc function mapping the internal argument x onto e, while the wriee(e,y,x) event in (20) comes along an ,--inc function mapping z onto e. In fact, the whole event structure of those AGENTIVE roles together with their incrementality functions override those lexically specified by default for produire. Another limit of GL until recent work (cf. Asher and Pustejovsky 1999) was its inability to construe more versatile qualia role information. Consider the following case of co-composition : sense (i.e., one not considered to be lexicalized) through the contribution of another lexical item with which it combines. See Pustejovsky (1995). (2 la) Yannigfinished hiding the bike. (2 lb) * Yannigfinished hiding the truth. Hide x arguably means 'to remove x from accessibility', and obviously the notion of 'accessibility' diverges when x is a physical object (21a) or a proposition (21b). This kind of phenomenological information might be encoded in the FORMAL role for the corresponding super-types and triggered in this context, but a detailed implementation still has to be worked out. See Asher and Pustejovsky (1999) for a discussion of such issues. 
4.3.2 Other cases of polysemy Last but not least, many cases of apparent polysemy in the incrementality functions actually arise from the coercion of affected arguments : (22a) Yannig a fini de ranger sa chambre. Yannig finished tidying up his room. (22b) * Yannig a fini de ranger son livre. (gradual scenarios being left aside) Yannig finished putting away his book. Ranger receives an incremental reading with chambre in (22a), and no incremental reading in (22b), so that it seems to be properly neither i-incremental nor m-incremental. The way out of this puzzle is the following : ranger is lexically encoded as capable of i-incrementality but not of m-incrementality, and the aspectual polysemy of ranger sa chambre originates in the polysemy of chambre. Although there is no question that chambre normally refers to an individual, its meaning is coerced into a collective one in (22a). More precisely, chambre is coerced from an individual real estate sense (immovable_phys obj) to a collection sense involving the individual objects possibly enclosed within a room (movable_phys_obj), since only the latter is compatible with ranger. One way of accounting for such coercions within GL would be to associate with the CONST qualia role of chambre such a collection of instances of the movable_phys__obj type, the CONST role describing the meronymic constitution of a type. 502 In fact, the ability to trigger this very kind of coercion seems to be a general property of verbs addressing their arguments through their FORMAL role (i.e., requiring natural types - centrally defined through their CONST and FORMAL - and not functional types - centrally defined through their AGENTIVE and TELIC ; see Pustejovsky 1999). Such verbs are usually able to access their arguments' semantics as individuals through their FORMAL role, and as collections of individuals through their CONST role, if the FORMAL individual does not meet the selectional restrictions imposed by the verb, or other semantic constraints. See Caudal (1998) for detailed evidence of this, and for a tentative solution within GL to the problems raised by the polysemy of collective nouns (e.g., regiment, police and forest), which exhibit a similar behaviour, i.e., can either refer to individuals or to collections. Finally, it should be noted that homeomeronymic nouns (i.e., whose parts and whole refer to the same lexical type, e.g. estate or property seen as land surfaces, or quantity of matter nouns, such as gold or milk ; see Winston et al, (1987)) offer other interesting properties w.r.t, to incrementality/atomicity. I will not discuss them here for want of space. To put it in a nutshell, even prima facie individual-referring nouns such as chambre can behave like collection-referring ones under certain circumstances, making i-incremental readings of normally atomic events possible. Let us move now to some concluding remarks about non-incremental non-atomic events. 5. On the formal treatment of non- incremental non-atomic events I have shown above that the notion of incrementality fell short of explaining the non- atomicity of (13), (14), and (15). I will suggest here a solution based on an extended conception of result states. The non-incremental, non-atomic events discussed in 3.3 seem to fall into at least two distinct subclasses : scalar events (cf. (13)/(14)) vs. "gradual scenario" events (cf. (15)). I will focus on the former class, the latter class originating clearly in a pragmatic phenomenon 6. 
It should be noted that many resultative constructions (e.g., pound the metal flat; see Levin and Rappaport 1995) also receive scalar readings, making the phenomenon a fairly widespread one. \ It is a fact that the notions of affectedness and incrementality / event-object mapping do not apply to scalar events. Affectedness indicates that an argument undergoes an incremental (cf. eat) or a definite change of state (cf. throw), and not a gradual bu___!t total one, as in the case of scalar verbs (their delimiting arguments are gradually changing as a whole, and not bit by bit). (14) is telic and non-atomic because the chicken goes through successive states of 'cookedness' (i.e., result states) before reaching a final state, and not because of some event- object mapping function in the spirit of Krifka (1992). Therefore, the telicity of scalar events can only be explained by reference to this scalar change of state, which entails itself a scalar result state. Encoding a richer information about result states in the lexical entries of such verbs, as proposed in Caudal (1999a/b), would allow us to account elegantly for this kind of non-atomic, non-incremental, telic readings of events. This new conception of result states provide us with a unified account 7 of (non)-atomicity, incrementality and telicity - a result which generalized paths cannot achieve for reasons exposed above, and others not discussed here. Indeed, even the non-incremental, non-atomic events studied in 3.3 (except (15), but then again this is a pragmatic issue) can also be accounted for in this manner, and path-argument verbs can also be analysed in terms of result states if changes of location undergone by arguments are treated as changes-of-state. 6 Note that contrary to scalar events and incremental events, "gradual scenarios" do not combine with the progressive perfect, of. *John has been registering his son at the university. This fact suggests that they should be set apart from other non-atomic events, and possibly receive subevents of a different kind. 7 See Caudal (1999b), where incremental vs. scalar RStages are introduced. 503 Conclusion It has been demonstrated in this paper that the so-called punctuality of achievements should be reduced to the notion of atomicity. Formal means to calculate it within an NLP system have been discussed; see White (1994) for a computational implementation of related interest, in a similar spirit. The machinery exposed above can be used to predict whether an event should be considered as an accomplishment (non-atomic event; possesses subevents) or an achievement (atomic event; lacks any subevent). The above developments revealed that (non-)atomicity is at least partly amenable to a compositional semantic procedure, and does not fall altogether under the scope of pragmatics. It has been shown to be directly related to incrementality in many cases, though not in all cases. In order to construe incremental non- atomic events, I proposed to encode m-incrementality vs. i-incrementality in the lexicon, before discussing the accessibility of the internal structure of delimiting argument NPs ; I suggested a solution to the problems raised by the polysemous internal structure of certain nouns. Finally, a tentative result-state based account of non-incremental non-atomic events has been proposed. I even claimed that it can explain all types of non-atomicity and even incrementality in a unified way, and therefore might surpass all the existing accounts of event structure. References Asher, N. 
Caudal, P. (1998) Using Complex Lexical Types to Model the Polysemy of Collective Nouns within the Generative Lexicon. Proceedings of DEXA98, IEEE Computer Society, Los Alamitos, pp. 154-160.
Caudal, P. (1999a) Resultativity in French - A Study in Contrastive Linguistics. Paper presented at the 29th Linguistic Symposium on Romance Languages, University of Michigan, Ann Arbor, MI, April.
Caudal, P. (1999b) Result Stages and the Lexicon: The Proper Treatment of Event Structure. Proceedings of the 9th Conference of the European Chapter of the Association for Computational Linguistics, Bergen, Norway, June.
Declerck, R. (1979) Aspect and the bounded/unbounded (telic/atelic) distinction. Linguistics 17, pp. 761-794.
Dowty, D. (1986) The Effects of Aspectual Class on the Temporal Structure of Discourse: Semantics or Pragmatics? Linguistics and Philosophy, 9, pp. 37-61.
Dowty, D. (1991) Thematic Proto-Roles and Argument Selection. Language 67/3, pp. 547-619.
Jackendoff, R. (1996) The Proper Treatment of Measuring Out, Telicity and Perhaps Even Quantification in English. Natural Language and Linguistic Theory, 14, pp. 305-354.
Krifka, M. (1992) Thematic Relations as Links between Nominal Reference and Temporal Constitution. In Lexical Matters, I. Sag and A. Szabolcsi, eds., CSLI, Stanford, CA, pp. 29-53.
Levin, B. and M. Rappaport Hovav (1995) Unaccusativity: At the Syntax - Lexical Semantics Interface. MIT Press, Cambridge, MA.
Link, G. (1983) The Logical Analysis of Plurals and Mass Terms. In R. Bäuerle, C. Schwarze and A. von Stechow (eds.), Meaning, Use and Interpretation of Language, Walter de Gruyter, Berlin, pp. 302-323.
Moens, M. and M. Steedman (1988) Temporal Ontology and Temporal Reference. Computational Linguistics, 14/2, pp. 15-28.
Pustejovsky, J. (1995) The Generative Lexicon. MIT Press, Cambridge, MA.
Pustejovsky, J. (1999) Decomposition and Type Construction. Ms., Brandeis University.
Tenny, C. (1994) Aspectual Roles and the Syntax-Semantics Interface. Kluwer, Dordrecht.
Vendler, Z. (1957) Verbs and Times. The Philosophical Review, 66, pp. 143-160.
Verkuyl, H. (1993) A Theory of Aspectuality. Cambridge University Press, Cambridge.
White, M. (1994) A Computational Approach to Aspectual Composition. Unpublished Ph.D. dissertation, Institute for Research in Cognitive Science, University of Pennsylvania, Philadelphia.
Winston, M.E., R. Chaffin and D. Herrmann (1987) A taxonomy of part-whole relations. Cognitive Science, 11, pp. 417-444.

Acknowledgements

Many thanks to James Pustejovsky for the very fruitful discussions we had about incrementality.
A Statistical Parser for Czech*

Michael Collins
AT&T Labs-Research, Shannon Laboratory, 180 Park Avenue, Florham Park, NJ 07932
mcollins@research.att.com

Jan Hajič
Institute of Formal and Applied Linguistics, Charles University, Prague, Czech Republic
[email protected]

Lance Ramshaw
BBN Technologies, 70 Fawcett St., Cambridge, MA 02138
iramshaw@bbn.com

Christoph Tillmann
Lehrstuhl für Informatik VI, RWTH Aachen, D-52056 Aachen, Germany
tillmann@informatik.rwth-aachen.de

Abstract

This paper considers statistical parsing of Czech, which differs radically from English in at least two respects: (1) it is a highly inflected language, and (2) it has relatively free word order. These differences are likely to pose new problems for techniques that have been developed on English. We describe our experience in building on the parsing model of (Collins 97). Our final results - 80% dependency accuracy - represent good progress towards the 91% accuracy of the parser on English (Wall Street Journal) text.

1 Introduction

Much of the recent research on statistical parsing has focused on English; languages other than English are likely to pose new problems for statistical methods. This paper considers statistical parsing of Czech, using the Prague Dependency Treebank (PDT) (Hajič, 1998) as a source of training and test data (the PDT contains around 480,000 words of general news, business news, and science articles annotated for dependency structure). Czech differs radically from English in at least two respects:

• It is a highly inflected (HI) language. Words in Czech can inflect for a number of syntactic features: case, number, gender, negation and so on. This leads to a very large number of possible word forms, and consequent sparse data problems when parameters are associated with lexical items. On the positive side, inflectional information should provide strong cues to parse structure; an important question is how to parameterize a statistical parsing model in a way that makes good use of inflectional information.

• It has relatively free word order (FWO). For example, a subject-verb-object triple in Czech can generally appear in all 6 possible surface orders (SVO, SOV, VSO etc.).

Other Slavic languages (such as Polish, Russian, Slovak, Slovene, Serbo-Croatian, Ukrainian) also show these characteristics. Many European languages exhibit FWO and HI phenomena to a lesser extent. Thus the techniques and results found for Czech should be relevant to parsing several other languages.

This paper first describes a baseline approach, based on the parsing model of (Collins 97), which recovers dependencies with 72% accuracy. We then describe a series of refinements to the model, giving an improvement to 80% accuracy, with around 82% accuracy on newspaper/business articles. (As a point of comparison, the parser achieves 91% dependency accuracy on English (Wall Street Journal) text.)

*This material is based upon work supported by the National Science Foundation under Grant No. #IIS-9732388, and was carried out at the 1998 Workshop on Language Engineering, Center for Language and Speech Processing, Johns Hopkins University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or The Johns Hopkins University. The project has also had support at various levels from the following grants and programs: Grant Agency of the Czech Republic grants No. 405/96/0198 and 405/96/K214, and Ministry of Education of the Czech Republic Project No. VS96151. We would also like to thank Eric Brill, Barbora Hladká, Frederick Jelinek, Doug Jones, Cynthia Kuo, Oren Schwartz, and Daniel Zeman for many useful discussions during and after the workshop.
This paper first describes a baseline approach, based on the parsing model of (Collins 97), which recovers dependencies with 72% accuracy. We then describe a series of refinements to the model, giv- ing an improvement to 80% accuracy, with around 82% accuracy on newspaper/business articles. (As a point of comparison, the parser achieves 91% de- pendency accuracy on English (Wall Street Journal) text.) 505 2 Data and Evaluation The Prague Dependency Treebank PDT (Haji~, 1998) has been modeled after the Penn Treebank (Marcus et al. 93), with one important excep- tion: following the Praguian linguistic tradition, the syntactic annotation is based on dependencies rather than phrase structures. Thus instead of "non- terminal" symbols used at the non-leaves of the tree, the PDT uses so-called analytical functions captur- ing the type of relation between a dependent and its governing node. Thus the number of nodes is equal to the number of tokens (words + punctuation) plus one (an artificial root node with rather techni- cal function is added to each sentence). The PDT contains also a traditional morpho-syntactic anno- tation (tags) at each word position (together with a lemma, uniquely representing the underlying lexicai unit). As Czech is a HI language, the size of the set of possible tags is unusually high: more than 3,000 tags may be assigned by the Czech morphological analyzer. The PDT also contains machine-assigned tags and lemmas for each word (using a tagger de- scribed in (Haji~ and Hladka, 1998)). For evaluation purposes, the PDT has been di- vided into a training set (19k sentences) and a de- velopment/evaluation test set pair (about 3,500 sen- tences each). Parsing accuracy is defined as the ratio of correct dependency links vs. the total number of dependency links in a sentence (which equals, with the one artificial root node added, to the number of tokens in a sentence). As usual, with the develop- ment test set being available during the development phase, all final results has been obtained on the eval- uation test set, which nobody could see beforehand. 3 A Sketch of the Parsing Model The parsing model builds on Model 1 of (Collins 97); this section briefly describes the model. The parser uses a lexicalized grammar -- each non- terminal has an associated head-word and part-of- speech (POS). We write non-terminals as X (x): X is the non-terminal label, and x is a (w, t> pair where w is the associated head-word, and t as the POS tag. See figure 1 for an example lexicalized tree, and a list of the lexicalized rules that it contains. Each rule has the form 1 : P(h) --+ L,~(l,)...Ll(ll)H(h)Rl(rl)...Rm(rm) (1) IWith the exception of the top rule in the tree, which has the f0rmTOP -+ H(h). H is the head-child of the phrase, which inher- its the head-word h from its parent P. L1...Ln and R1...Rm are left and right modifiers of H. Either n or m may be zero, and n = m = 0 for unary rules. For example, in S (bought,VBD) -+ NP (yesterday,NN) NP (IBM, NNP) VP (bought, VBD) : n=2 m=0 P=S H=VP LI = NP L2 = NP l I = <IBM, NNP> 12 = <yesterday, NN> h = <bought, VBD) The model can be considered to be a variant of Probabilistic Context-Free Grammar (PCFG). In PCFGs each role cr --+ fl in the CFG underlying the PCFG has an associated probability P(/3la ). In (Collins 97), P(/~lo~) is defined as a product of terms, by assuming that the right-hand-side of the rule is generated in three steps: 1. Generate the head constituent label of the phrase, with probability 79H( H I P, h ). 2. 
(Collins 97) describes a series of refinements to this basic model: the addition of "distance" (a conditioning feature indicating whether or not a modifier is adjacent to the head); the addition of subcategorization parameters (Model 2), and parameters that model wh-movement (Model 3); and estimation techniques that smooth various levels of back-off (in particular using POS tags as word-classes, allowing the model to learn generalizations about POS classes of words). Search for the highest probability tree for a sentence is achieved using a CKY-style parsing algorithm.

[Figure 1: A lexicalized parse tree, and the list of the rules it contains:
TOP → S(bought,VBD)
S(bought,VBD) → NP(yesterday,NN) NP(IBM,NNP) VP(bought,VBD)
NP(yesterday,NN) → NN(yesterday)
NP(IBM,NNP) → NNP(IBM)
VP(bought,VBD) → VBD(bought) NP(Lotus,NNP)
NP(Lotus,NNP) → NNP(Lotus)]

4 Parsing the Czech PDT

Many statistical parsing methods developed for English use lexicalized trees as a representation (e.g., (Jelinek et al. 94; Magerman 95; Ratnaparkhi 97; Charniak 97; Collins 96; Collins 97)); several (e.g., (Eisner 96; Collins 96; Collins 97; Charniak 97)) emphasize the use of parameters associated with dependencies between pairs of words. The Czech PDT contains dependency annotations, but no tree structures. For parsing Czech we considered a strategy of converting dependency structures in training data to lexicalized trees, then running the parsing algorithms originally developed for English. A key point is that the mapping from lexicalized trees to dependency structures is many-to-one. As an example, figure 2 shows an input dependency structure, and three different lexicalized trees with this dependency structure.

The choice of tree structure is crucial in determining the independence assumptions that the parsing model makes. There are at least 3 degrees of freedom when deciding on the tree structures:

1. How "flat" should the trees be? The trees could be as flat as possible (as in figure 2(a)), or binary branching (as in trees (b) or (c)), or somewhere between these two extremes.
2. What non-terminal labels should the internal nodes have?
3. What set of POS tags should be used?

4.1 A Baseline Approach

To provide a baseline result we implemented what is probably the simplest possible conversion scheme:

1. The trees were as flat as possible, as in figure 2(a).
2. The non-terminal labels were "XP", where X is the first letter of the POS tag of the head-word for the constituent. See figure 3 for an example.
3. The part-of-speech tags were the major category for each word (the first letter of the Czech POS set, which corresponds to broad category distinctions such as verb, noun etc.).

The baseline approach gave a result of 71.9% accuracy on the development test set.
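The baseline conversion itself is mechanical. Below is an illustrative sketch (not the authors' code) that builds the flattest tree of figure 2(a) with XP labels; for simplicity it projects a phrase for every dependent word, a slight simplification of the figure, where some function words remain bare POS leaves.

```python
# A sketch of the baseline conversion: the flattest lexicalized tree from a
# dependency structure, labelling each constituent XP from the head's POS.

def flat_tree(words, tags, parents):
    """parents[i] is the index of word i's head, or -1 for the root."""
    def build(i):
        left = [j for j in range(i) if parents[j] == i]
        right = [j for j in range(i + 1, len(words)) if parents[j] == i]
        return {"label": tags[i][0] + "P", "head": words[i],
                "children": [build(j) for j in left]
                            + [(tags[i], words[i])]      # the head word itself
                            + [build(j) for j in right]}
    return build(parents.index(-1))

# "I saw the man": I->saw, saw->START, the->man, man->saw (figure 2 input)
print(flat_tree(["I", "saw", "the", "man"],
                ["N", "V", "D", "N"],
                [1, -1, 3, 1]))
```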
The part of speech tags were the major cate- gory for each word (the first letter of the Czech POS set, which corresponds to broad category distinctions such as verb, noun etc.). The baseline approach gave a result of 71.9% accu- racy on the development test set. 507 Input: sentence with part of speech tags: UN saw/V the/D man/N (N=noun, V=verb, D=determiner) dependencies (word ~ Parent): (I =~ saw), (saw =:~ START), (the =~ man), (man =¢, saw> Output: a lexicalized tree (a) X(saw) (b) X(saw) (c) N X(saw) X(I) V X(man) I [ I ~ I V X(man) N saw D N [ [ I I saw D N I the man [ [ the man X(saw) X(saw) X(man) N V D N I I I I I saw the man Figure 2: Converting dependency structures to lexicalized trees with equivalent dependencies. The trees (a), (b) and (c) all have the input dependency structure: (a) is the "flattest" possible tree; (b) and (c) are binary branching structures. Any labels for the non-terminals (marked X) would preserve the dependency structure. VP(saw) NP(I) V NP(man) N saw D N I I I I the man Figure 3: The baseline approach for non-terminal labels. Each label is XP, where X is the POS tag for the head-word of the constituent. '4.2 Modifications to the Baseline Trees While the baseline approach is reasonably success- ful, there are some linguistic phenomena that lead to clear problems. This section describes some tree transformations that are linguistically motivated, and lead to improvements in parsing accuracy. 4.2.1 Relative Clauses In the PDT the verb is taken to be the head of both sentences and relative clauses. Figure 4 illustrates how the baseline transformation method can lead to parsing errors in relative clause cases. Figure 4(c) shows the solution to the problem: the label of the relative clause is changed to SBAR, and an addi- tional vP level is added to the right of the relative pronoun. Similar transformations were applied for relative clauses involving Wh-PPs (e.g., "the man to whom I gave a book"), Wh-NPs (e.g., "the man whose book I read") and Wh-Adverbials (e.g., "the place where I live"). 4.2.2 Coordination The PDT takes the conjunct to be the head of coor- dination structures (for example, and would be the head of the NP dogs and cats). In these cases the baseline approach gives tree structures such as that in figure 5(a). The non-terminal label for the phrase is JP (because the head of the phrase, the conjunct and, is tagged as J). This choice of non-terminal is problematic for two reasons: (1) the JP label is assigned to all co- ordinated phrases, for example hiding the fact that the constituent in figure 5(a) is an NP; (2) the model assumes that left and right modifiers are generated independently of each other, and as it stands will give unreasonably high probability to two unlike phrases being coordinated. To fix these problems, the non-terminal label in coordination cases was al- tered to be the same as that of the second conjunct (the phrase directly to the right of the head of the phrase). See figure 5. A similar transformation was made for cases where a comma was the head of a phrase. 4.2.3 Punctuation Figure 6 shows an additional change concerning commas. This change increases the sensitivity of the model to punctuation. 4.3 Model Alterations This section describes some modifications to the pa- rameterization of the model. 508 (a) VP NP V NP John likes Mary VP Z P V NP I I [ I who likes Tim (b) VP VP Z VP NP V NP P V NP I I t I I I John likes Mary who likes Tim a) JP(a) b) NP(a) NP(hl) J NP(h 2) NP(hl) J NP(h 2) I I i I I I and . . . . . . 
4.3 Model Alterations

This section describes some modifications to the parameterization of the model.

4.3.1 Preferences for dependencies that do not cross verbs

The model of (Collins 97) had conditioning variables that allowed the model to learn a preference for dependencies which do not cross verbs. From the results in table 3, adding this condition improved accuracy by about 0.9% on the development set.

4.3.2 Punctuation for phrasal boundaries

The parser of (Collins 96) used punctuation as an indication of phrasal boundaries. It was found that if a constituent Z -> <..X Y..> has two children X and Y separated by a punctuation mark, then Y is generally followed by a punctuation mark or the end of sentence marker. The parsers of (Collins 96,97) encoded this as a hard constraint. In the Czech parser we added a cost of -2.5 (log probability)² to structures that violated this constraint.

² This value was optimized on the development set.

4.3.3 First-Order (Bigram) Dependencies

The model of section 3 made the assumption that modifiers are generated independently of each other. This section describes a bigram model, where the context is increased to consider the previously generated modifier ((Eisner 96) also describes use of bigram statistics). The right-hand-side of a rule is now assumed to be generated in the following three step process:

1. Generate the head label, with probability P_H(H | P, h).

2. Generate left modifiers with probability ∏_{i=1..n+1} P_L(L_i(l_i) | L_{i-1}, P, h, H), where L_0 is defined as a special NULL symbol. Thus the previous modifier, L_{i-1}, is added to the conditioning context (in the previous model the left modifiers had probability ∏_{i=1..n+1} P_L(L_i(l_i) | P, h, H)).

3. Generate right modifiers using a similar bigram process.

Introducing bigram-dependencies into the parsing model improved parsing accuracy by about 0.9% (as shown in Table 3).

Table 1: The 13-character encoding of the Czech POS tags.
1. main part of speech        8. person
2. detailed part of speech    9. tense
3. gender                    10. degree of comparison
4. number                    11. negativeness
5. case                      12. voice
6. possessor's gender        13. variant/register
7. possessor's number

4.4 Alternative Part-of-Speech Tagsets

Part of speech (POS) tags serve an important role in statistical parsing by providing the model with a level of generalization as to how classes of words tend to behave, what roles they play in sentences, and what other classes they tend to combine with.
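Since each of the 13 positions in Table 1 carries one morphological field, a tag string can be unpacked mechanically. The following sketch is illustrative only; the function and field names are our own, not part of the PDT distribution, and the example tag is the one discussed in section 4.4.1 below.

```python
# Hypothetical decoder for the 13-character tags of Table 1.

FIELDS = ["main_pos", "detailed_pos", "gender", "number", "case",
          "poss_gender", "poss_number", "person", "tense",
          "degree", "negativeness", "voice", "variant"]

def decode_tag(tag):
    """Map a tag such as 'NNMP1-----A--' onto the 13 fields of Table 1."""
    assert len(tag) == 13
    return dict(zip(FIELDS, tag))

t = decode_tag("NNMP1-----A--")
print(t["main_pos"], t["gender"], t["number"], t["case"], t["negativeness"])
# -> N M P 1 A  (noun, masculine, plural, nominative, affirmative)
```

The reduced tagsets discussed next amount to keeping only a few of these fields.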
Statistical parsers of English typically make use of the roughly 50 POS tags used in the Penn Treebank corpus, but the Czech PDT corpus provides a much richer set of POS tags, with over 3000 possible tags defined by the tagging system and over 1000 tags actually found in the corpus. Using that large a tagset with a training corpus of only 19,000 sentences would lead to serious sparse data problems. It is also clear that some of the distinctions being made by the tags are more important than others for parsing. We therefore explored different ways of extracting smaller but still maximally informative POS tagsets.

4.4.1 Description of the Czech Tagset

The POS tags in the Czech PDT corpus (Hajič and Hladká, 1997) are encoded in 13-character strings. Table 1 shows the role of each character. For example, the tag NNMP1-----A-- would be used for a word that had "noun" as both its main and detailed part of speech, that was masculine, plural, nominative (case 1), and whose negativeness value was "affirmative".

Within the corpus, each word was annotated with all of the POS tags that would be possible given its spelling, using the output of a morphological analysis program, and also with the single one of those tags that a statistical POS tagging program had predicted to be the correct tag (Hajič and Hladká, 1998). Table 2 shows a phrase from the corpus, with the alternative possible tags and machine-selected tag for each word. In the training portion of the corpus, the correct tag as judged by human annotators was also provided.

Table 2: Corpus POS tags for "the representatives of the Parliament approved".
Form         Dictionary Tags                                  Machine Tag
poslanci     NNMP1-----A--, NNMP5-----A--, NNMP7-----A--,     NNMP1-----A--
             NNMS3-----A--, NNMS6-----A--
Parlamentu   NNIS2-----A--, NNIS3-----A--, NNIS6-----A-1      NNIS2-----A--
schválili    VpMP---XR-AA-                                    VpMP---XR-AA-

4.4.2 Selection of a More Informative Tagset

In the baseline approach, the first letter, or "main part of speech", of the full POS strings was used as the tag. This resulted in a tagset with 13 possible values.

A number of alternative, richer tagsets were explored, using various combinations of character positions from the tag string. The most successful alternative was a two-letter tag whose first letter was always the main POS, and whose second letter was the case field if the main POS was one that displays case, while otherwise the second letter was the detailed POS. (The detailed POS was used for the main POS values D, J, V, and X; the case field was used for the other possible main POS values.) This two-letter scheme resulted in 58 tags, and provided about a 1.1% parsing improvement over the baseline on the development set.

Even richer tagsets that also included the person, gender, and number values were tested without yielding any further improvement, presumably because the damage from sparse data outweighed the value of the additional information present.

4.4.3 Explorations toward Clustered Tagsets

An entirely different approach, rather than searching by hand for effective tagsets, would be to use clustering to derive them automatically. We explored two different methods, bottom-up and top-down, for automatically deriving POS tag sets based on counts of governing and dependent tags extracted from the parse trees that the parser constructs from the training data.
Neither tested approach resulted in any improvement in parsing performance compared to the hand-designed "two letter" tagset, but the implementations of each were still only preliminary, and a clustered tagset more adroitly derived might do better.

4.4.4 Dealing with Tag Ambiguity

One final issue regarding POS tags was how to deal with the ambiguity between possible tags, both in training and test. In the training data, there was a choice between using the output of the POS tagger or the human annotator's judgment as to the correct tag. In test data, the correct answer was not available, but the POS tagger output could be used if desired. This turns out to matter only for unknown words, as the parser is designed to do its own tagging, for words that it has seen in training at least 5 times, ignoring any tag supplied with the input. For "unknown" words (seen less than 5 times), the parser can be set either to believe the tag supplied by the POS tagger or to allow equally any of the dictionary-derived possible tags for the word, effectively allowing the parse context to make the choice. (Note that the rich inflectional morphology of Czech leads to a higher rate of "unknown" word forms than would be true in English; in one test, 29.5% of the words in test data were "unknown".)

Our tests indicated that if unknown words are treated by believing the POS tagger's suggestion, then scores are better if the parser is also trained on the POS tagger's suggestions, rather than on the human annotator's correct tags. Training on the correct tags results in 1% worse performance. Even though the POS tagger's tags are less accurate, they are more like what the parser will be using in the test data, and that turns out to be the key point. On the other hand, if the parser allows all possible dictionary tags for unknown words in test material, then it pays to train on the actual correct tags.

In initial tests, this combination of training on the correct tags and allowing all dictionary tags for unknown test words somewhat outperformed the alternative of using the POS tagger's predictions both for training and for unknown test words. When tested with the final version of the parser on the full development set, those two strategies performed at the same level.

5 Results

We ran three versions of the parser over the final test set: the baseline version, the full model with all additions, and the full model with everything but the bigram model. The baseline system on the final test set achieved 72.3% accuracy. The final system achieved 80.0% accuracy³: a 7.7% absolute improvement and a 27.8% relative improvement. The development set showed very similar results: a baseline accuracy of 71.9% and a final accuracy of 79.3%. Table 3 shows the relative improvement of each component of the model.⁴

Table 3: A breakdown of the results on the development set.
Modification          Improvement
Coordination          +2.6%
Relative clauses      +1.5%
Punctuation           -0.1%
Enriched POS tags     +1.1%
Punctuation           +0.4%
Verb crossing         +0.9%
Bigram                +0.9%
Total change          +7.4%
Total relative error reduction: 26%

Table 4: Breakdown of the results by genre. Note that although the Science section only contributes 25% of the sentences in test data, it contains much longer sentences than the other sections and therefore accounts for 38% of the dependencies in test data.
Genre       Proportion (Sentences/Dependencies)   Accuracy
Newspaper   50%/44%                               81.4%
Business    25%/19%                               81.4%
Science     25%/38%                               76.0%
Table 4 shows the results on the development set by genre. It is inter- esting to see that the performance on newswire text is over 2% better than the averaged performance. The Science section of the development set is con- siderably harder to parse (presumably because of longer sentences and more open vocabulary). 3The parser fails to give an analysis on some sentences, be- cause the search space becomes too large. The baseline system missed 5 sentences; the full system missed 21 sentences; the full system minus bigrams missed 2 sentences. To score the full system we took the output from the full system minus bi- grams when the full system produced no output (to prevent a heavy penalty due to the 21 missed sentences). The remaining 2 unparsed sentences (5 in the baseline case) had all dependen- cies attached to the root. 4We were surprised to see this slight drop in accuracy for the punctuation tree modification. Earlier tests on a different development set, with less training data and fewer other model alterations had shown a good improvement for this feature. 511 5.1 Comparison to Previous Results The main piece of previous work on parsing Czech that we are aware of is described in (Kubofi 99). This is a rule-based system which is based on a man- ually designed set of rules. The system's accuracy is not evaluated on a test corpus, so it is difficult to compare our results to theirs. We can, however, make some comparison of the results in this paper to those on parsing English. (Collins 99) describes results of 91% accuracy in recovering dependen- cies on section 0 of the Penn Wall Street Journal Treebank, using Model 2 of (Collins 97). This task is almost certainly easier for a number of reasons: there was more training data (40,000 sentences as opposed to 19,000); Wall Street Journal may be an easier domain than the PDT, as a reasonable pro- portion of sentences come from a sub-domain, fi- nancial news, which is relatively restricted. Unlike model 1, model 2 of the parser takes subcategoriza- tion information into account, which gives some im- provement on English and might well also improve results on Czech. Given these differences, it is dif- ficult to make a direct comparison, but the overall conclusion seems to be that the Czech accuracy is approaching results on English, although it is still somewhat behind. 6 Conclusions The 80% dependency accuracy of the parser repre- sents good progress towards English parsing perfor- mance. A major area for future work is likely to be an improved treatment of morphology; a natural approach to this problem is to consider more care- fully how POS tags are used as word classes by the model. We have begun to investigate this is- sue, through the automatic derivation of POS tags through clustering or "splitting" approaches. It might also be possible to exploit the internal struc- ture of the POS tags, for example through incremen- tal prediction of the POS tag being generated; or to exploit the use of word lemmas, effectively split- ting word-word relations into syntactic dependen- cies (POS tag-POS tag relations) and more seman- tic (lemma-lemma) dependencies. References E. Charniak. 1997. Statistical Parsing with a Context-free Grammar and Word Statistics. Pro- ceedings of the Fourteenth National Conference on Artificial Intelligence, AAAI Press/MIT Press, Menlo Park (1997). M. Collins. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. 
Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191.
M. Collins. 1997. Three Generative, Lexicalised Models for Statistical Parsing. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics, pages 16-23.
M. Collins. 1999. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. Thesis, University of Pennsylvania.
J. Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. Proceedings of COLING-96, pages 340-345.
Jan Hajič. 1998. Building a Syntactically Annotated Corpus: The Prague Dependency Treebank. Issues of Valency and Meaning (Festschrift for Jarmila Panevová). Carolina, Charles University, Prague, pp. 106-132.
Jan Hajič and Barbora Hladká. 1997. Tagging of Inflective Languages: a Comparison. In Proceedings of ANLP'97, pages 136-143, Washington, DC.
Jan Hajič and Barbora Hladká. 1998. Tagging Inflective Languages: Prediction of Morphological Categories for a Rich, Structured Tagset. In Proceedings of ACL/COLING'98, Montreal, Canada, Aug. 5-9, pp. 483-490.
F. Jelinek, J. Lafferty, D. Magerman, R. Mercer, A. Ratnaparkhi, S. Roukos. 1994. Decision Tree Parsing using a Hidden Derivation Model. Proceedings of the 1994 Human Language Technology Workshop, pages 272-277.
V. Kuboň. 1999. A Robust Parser for Czech. Technical Report 6/1999, ÚFAL, Matematicko-fyzikální fakulta Karlovy univerzity, Prague.
D. Magerman. 1995. Statistical Decision-Tree Models for Parsing. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283.
M. Marcus, B. Santorini and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
A. Ratnaparkhi. 1997. A Linear Observed Time Statistical Parser Based on Maximum Entropy Models. In Proceedings of the Second Conference on Empirical Methods in Natural Language Processing, Brown University, Providence, Rhode Island.
Automatic Compensation for Parser Figure-of-Merit Flaws*
Don Blaheta and Eugene Charniak
{dpb, ec}@cs.brown.edu
Department of Computer Science
Box 1910 / 115 Waterman St.--4th floor
Brown University
Providence, RI 02912

* This research was funded in part by NSF Grant IRI-9319516 and ONR Grant N0014-96-1-0549.

Abstract

Best-first chart parsing utilises a figure of merit (FOM) to efficiently guide a parse by first attending to those edges judged better. In the past it has usually been static; this paper will show that with some extra information, a parser can compensate for FOM flaws which otherwise slow it down. Our results are faster than the prior best by a factor of 2.5; and the speedup is won with no significant decrease in parser accuracy.

1 Introduction

Sentence parsing is a task which is traditionally rather computationally intensive. The best known practical methods are still roughly cubic in the length of the sentence--less than ideal when dealing with nontrivial sentences of 30 or 40 words in length, as frequently found in the Penn Wall Street Journal treebank corpus.

Fortunately, there is now a body of literature on methods to reduce parse time so that the exhaustive limit is never reached in practice.¹ For much of the work, the chosen vehicle is chart parsing. In this technique, the parser begins at the word or tag level and uses the rules of a context-free grammar to build larger and larger constituents. Completed constituents are stored in the cells of a chart according to their location and length. Incomplete constituents ("edges") are stored in an agenda. The exhaustion of the agenda definitively marks the completion of the parsing algorithm, but the parse needn't take that long; already in the early work on chart parsing, (Kay, 1970) suggests that by ordering the agenda one can find a parse without resorting to an exhaustive search.

¹ An exhaustive parse always "overgenerates" because the grammar contains thousands of extremely rarely applied rules; these are (correctly) rejected even by the simplest parsers, eventually, but it would be better to avoid them entirely.

The introduction of statistical parsing brought with it an obvious tactic for ranking the agenda: (Bobrow, 1990) and (Chitrao and Grishman, 1990) first used probabilistic context free grammars (PCFGs) to generate probabilities for use in a figure of merit (FOM). Later work introduced other FOMs formed from PCFG data (Kochman and Kupin, 1991); (Magerman and Marcus, 1991); and (Miller and Fox, 1994).

More recently, we have seen parse times lowered by several orders of magnitude. The (Caraballo and Charniak, 1998) article considers a number of different figures of merit for ordering the agenda, and ultimately recommends one that reduces the number of edges required for a full parse into the thousands. (Goldwater et al., 1998) (henceforth [Gold98]) introduces an edge-based technique (instead of constituent-based), which drops the average edge count into the hundreds. However, if we establish "perfection" as the minimum number of edges needed to generate the correct parse--47.5 edges on average in our corpus--we can hope for still more improvement. This paper looks at two new figures of merit, both of which take the [Gold98] figure (of "independent" merit) as a starting point in calculating a new figure of merit for each edge, taking into account some additional information. Our work further lowers the average edge count, bringing it from the hundreds into the dozens.
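To fix intuitions about the agenda mechanics described above, here is a generic best-first chart-parsing skeleton in Python. It is a sketch under our own naming and simplifications -- in particular, real implementations of the dynamic figures of merit introduced later must re-score or lazily re-evaluate agenda entries -- and it is not the parser used in this paper.

```python
# Generic best-first chart-parsing loop; a simplified sketch, not the
# authors' implementation.
import heapq

def best_first_parse(initial_edges, fom, extend, is_complete):
    """Repeatedly pop the highest-FOM edge; stop at the first full parse.

    fom(edge) returns the figure of merit (higher = better);
    extend(edge, chart) yields new edges from combining with the chart.
    """
    counter = 0            # tie-breaker so heapq never compares edge objects
    agenda = []
    for e in initial_edges:
        heapq.heappush(agenda, (-fom(e), counter, e)); counter += 1
    chart, popped = [], 0
    while agenda:
        _, _, edge = heapq.heappop(agenda)
        popped += 1        # "edges popped": the work measure used in this paper
        chart.append(edge)
        if is_complete(edge):
            return edge, popped
        for new in extend(edge, chart):
            heapq.heappush(agenda, (-fom(new), counter, new)); counter += 1
    return None, popped
```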
2 Figure of independent merit

(Caraballo and Charniak, 1998) and [Gold98] use a figure which indicates the merit of a given constituent or edge, relative only to itself and its children but independent of the progress of the parse--we will call this the edge's independent merit (IM). The philosophical backing for this figure is that we would like to rank an edge based on the value

P(N^i_{j,k} | t_{0,n}),    (1)

where N^i_{j,k} represents an edge of type i (NP, S, etc.), which encompasses words j through k-1 of the sentence, and t_{0,n} represents all n part-of-speech tags, from 0 to n-1. (As in the previous research, we simplify by looking at a tag stream, ignoring lexical information.) Given a few basic independence assumptions (Caraballo and Charniak, 1998), this value can be calculated as

P(N^i_{j,k} | t_{0,n}) = β(N^i_{j,k}) α(N^i_{j,k}) / P(t_{0,n}),    (2)

with β and α representing the well-known "inside" and "outside" probability functions:

β(N^i_{j,k}) = P(t_{j,k} | N^i_{j,k})    (3)
α(N^i_{j,k}) = P(t_{0,j}, N^i_{j,k}, t_{k,n}).    (4)

Unfortunately, the outside probability is not calculable until after a parse is completed. Thus, the IM is an approximation; if we cannot calculate the full outside probability (the probability of this constituent occurring with all the other tags in the sentence), we can at least calculate the probability of this constituent occurring with the previous and subsequent tag. This approximation, as given in (Caraballo and Charniak, 1998), is

P(N^i_{j,k} | t_{j-1}) β(N^i_{j,k}) P(t_k | N^i_{j,k}) / (P(t_{j,k} | t_{j-1}) P(t_k | t_{k-1})).    (5)

Of the five values required, P(N^i_{j,k} | t_{j-1}), P(t_k | t_{k-1}), and P(t_k | N^i_{j,k}) can be observed directly from the training data; the inside probability is estimated using the most probable parse for N^i_{j,k}, and the tag sequence probability is estimated using a bitag approximation.

Two different probability distributions are used in this estimate, and the PCFG probabilities in the numerator tend to be a bit lower than the bitag probabilities in the denominator; this is more of a factor in larger constituents, so the figure tends to favour the smaller ones. To adjust the distributions to counteract this effect, we will use a normalisation constant η as in [Gold98]. Effectively, the inside probability β is multiplied by η^{k-j}, preventing the discrepancy and hence the preference for shorter edges. In this paper we will use η = 1.3 throughout; this is the factor by which the two distributions differ, and was also empirically shown to be the best tradeoff between number of popped edges and accuracy (in [Gold98]).

3 Finding FOM flaws

Clearly, any improvement to be had would need to come through eliminating the incorrect edges before they are popped from the agenda--that is, improving the figure of merit. We observed that the FOMs used tended to cause the algorithm to spend too much time in one area of a sentence, generating multiple parses for the same substring, before it would generate even one parse for another area. The reason for that is that the figures of independent merit are frequently good as relative measures for ranking different parses of the same section of the sentence, but not so good as absolute measures for ranking parses of different substrings.

For instance, if the word "there" as an NP in "there's a hole in the bucket" had a low probability, it would tend to hold up the parsing of a sentence; since the bi-tag probability of "there" occurring at the beginning of a sentence is very high, the denominator of the IM would overbalance the numerator.
(Note that this is a contrived example--the actual problem cases are more obscure.) Of course, a different figure of independent merit might have different characteristics, but with many of them there will be cases where the figure is flawed, causing a single, vital edge to remain on the agenda while the parser 'thrashes' around in other parts of the sentence with higher IM values. We could characterise this observation as follows:

Postulate 1: The longer an edge stays in the agenda without any competitors, the more likely it is to be correct (even if it has a low figure of independent merit).

A better figure, then, would take into account whether a given piece of text had already been parsed or not. We took two approaches to finding such a figure.

4 Compensating for flaws

4.1 Experiment 1: Table lookup

In one approach to the problem, we tried to start our program with no extra information and train it statistically to counter the problem mentioned in the previous section. There are four values mentioned in Postulate 1: correctness, time (amount of work done), number of competitors, and figure of independent merit. We defined them as follows:

Correctness. The obvious definition is that an edge N^i_{j,k} is correct if a constituent N^i_{j,k} appears in the parse given in the treebank. There is an unobvious but unfortunate consequence of choosing this definition, however; in many cases (especially with larger constituents), the "correct" rule appears just once in the entire corpus, and is thus considered too unlikely to be chosen by the parser as correct. If the "correct" parse were never achieved, we wouldn't have any statistic at all as to the likelihood of the first, second, or third competitor being better than the others. If we define "correct" for the purpose of statistics-gathering as "in the MAP parse", the problem is diminished. Both definitions were tried for gathering statistics, though of course only the first was used for measuring accuracy of output parses.

Work. Here, the most logical measure for amount of work done is the number of edges popped off the agenda. We use it both because it is conveniently processor-independent and because it offers us a tangible measure of perfection (47.5 edges--the average number of edges in the correct parse of a sentence).

Competitorship. At the most basic level, the competitors of a given edge N^i_{j,k} would be all those edges N^{i'}_{m,n} such that m ≤ j and n ≥ k. Initially we only considered an edge a 'competitor' if it met this definition and were already in the chart; later we tried considering an edge to be a competitor if it had a higher independent merit, no matter whether it be in the agenda or the chart. We also tried a hybrid of the two.

Merit. The independent merit of an edge is defined in section 2. Unlike earlier work, which used what we call "Independent Merit" as the FOM for parsing, we use this figure as just one of many sources of information about a given edge.

Given our postulate, the ideal figure of merit would be

P(correct | W, C, IM).    (6)

We can save information about this probability for each edge in every parse; but to be useful in a statistical model, the IM must first be discretised, and all three prior statistics need to be grouped, to avoid sparse data problems. We bucketed all three logarithmically, with bases 4, 2, and 10, respectively. This gives us the following approximation:

P(correct | ⌊log_4 W⌋, ⌊log_2 C⌋, ⌊log_10 IM⌋).    (7)

To somewhat counteract the effect of discretising the IM figure, each time we needed
(7) To somewhat counteract the effect of dis- cretising the IM figure, each time we needed 515 FOM = P(correct][log 4 WJ, [log2CJ, [logao IM])([logmI]Y -lOgloI]k 0 + P (correct l [log4 WJ, [log2 CJ, [log o IM]) (loglo IM- [log o IMJ) (8) to calculate a figure of merit, we looked up the table entry on either side of the IM and interpolated. Thus the actual value used as a figure of merit was that given in equation (8). Each trial consisted of a training run and a testing run. The training runs consisted of using a grammar induced on treebank sec- tions 2-21 to run the edge-based best-first algorithm (with the IM alone as figure of merit) on section 24, collecting the statis- tics along the way. It seems relatively obvi- ous that each edge should be counted when it is created. But our postulate involves edges which have stayed on the agenda for a long time without accumulating competi- tors; thus we wanted to update our counts when an edge happened to get more com- petitors, and as time passed. Whenever the number of edges popped crossed into a new logarithmic bucket (i.e. whenever it passed a power of four), we re-counted every edge in the agenda in that new bucket. In ad- dition, when the number of competitors of a given edge passed a bucket boundary (power of two), that edge would be re-counted. In this manner, we had a count of exactly how many edges--correct or not--had a given IM and a given number of competitors at a given point in the parse. Already at this stage we found strong evi- dence for our postulate. We were paying par- ticular attention to those edges with a low IM and zero competitors, because those were the edges that were causing problems when the parser ignored them. When, considering this subset of edges, we looked at a graph of the percentage of edges in the agenda which were correct, we saw an increase of orders of magnitude as work increased--see Figure 1. For the testing runs, then, we used as fig- ure of merit the value in expression 8. Aside from that change, we used the same edge- based best-first parsing algorithm as before. The test runs were all made on treebank sec- 0.12 0.1 0.08 G,~ O.Oe 0 1~0 0.04 =o 0.02 . [ IoglolM J = -4 . L IoglolM J = -5 ¢ [ IoglolM J = -6 L IoglolM J = -7 o L IoglolM J = -8 ,.~ ~ 2'.s • ~.5 ~ ~.s log4 edges popped 4.5 Figure 1: Zero competitors, low IM-- Proportion of agenda edges correct vs. work tion 22, with all sentences longer than 40 words thrown out; thus our results can be directly compared to those in the previous work. We made several trials, using different def- initions of 'correct' and 'competitor', as de- scribed above. Some performed much bet- ter than others, as seen in Table 1, which gives our results, both in terms of accuracy and speed, as compared to the best previous result, given in [Gold98]. The trial descrip- tions refer back to the multiple definitions given for 'correct' and 'competitor' at the beginning of this section. While our best speed improvement (48.6% of the previous minimum) was achieved with the first run, it is associated with a significant loss in ac- curacy. Our best results overall, listed in the last row of the table, let us cut the edge count by almost half while reducing labelled precision/recall by only 0.24%. 
4.2 Experiment 2: Demeriting

We hoped, however, that we might be able to find a way to simplify the algorithm such that it would be easier to implement and/or faster to run, without sacrificing accuracy.

Table 1: Performance of various statistical schemata.
Trial description                   Labelled   Labelled  Change in   Edges    Percent
                                    Precision  Recall    LP/LR avg.  popped   of std.
[Gold98] standard                   75.814%    73.334%               229.73
Correct, Chart competitors          74.982%    72.920%   -.623%      111.59   48.6%
Correct, higher-merit competitors   75.588%    73.190%   -.185%      135.23   58.9%
Correct, Chart or higher-merit      75.433%    73.152%   -.282%      128.94   56.1%
MAP, higher-merit competitors       75.365%    73.220%   -.239%      120.47   52.4%

To that end, we looked over the data, viewing it as (among other things) a series of "planes" seen by setting the amount of work constant (see Figure 2). Viewed like this, the original algorithm behaves like a scan line, parallel to the competitor axis, scanning for the one edge with the highest figure of (independent) merit. However, one look at figure 2 dramatically confirms our postulate that an edge with zero competitors can have an IM orders of magnitude lower than an edge with many competitors, and still be more likely to be correct. Effectively, then, under the table lookup algorithm, the scan line is not parallel to the competitor axis, but rather angled so that the low-IM low-competitor items pass the scan before the high-IM high-competitor items. This can be simulated by multiplying each edge's independent merit by a demeriting factor δ per competitor (thus a total of δ^C). Its exact value would determine the steepness of the scan line.

[Figure 2: Stats at 64-255 edges popped -- proportion of edges correct, plotted over ⌊log_10 IM⌋ and ⌊log_2 competitors⌋.]

Each trial consisted of one run, an edge-based best-first parse of treebank section 22 (with sentences longer than 40 words thrown out, as before), using the new figure of merit:

δ^C · P(N^i_{j,k} | t_{j-1}) η^{k-j} β(N^i_{j,k}) P(t_k | N^i_{j,k}) / (P(t_{j,k} | t_{j-1}) P(t_k | t_{k-1})).    (9)

This idea works extremely well. It is, predictably, easier to implement; somewhat surprisingly, though, it actually performs better than the method it approximates. When δ = .7, for instance, the accuracy loss is only .28%, comparable to the table lookup result, but the number of edges popped drops to just 91.23, or 39.7% of the prior result found in [Gold98]. Using other demeriting factors gives similarly dramatic decreases in edge count, with varying effects on accuracy--see Figures 3 and 4.

[Figure 3: Edges popped vs. δ.]
[Figure 4: Precision and recall vs. δ.]

It is not immediately clear as to why demeriting improves performance so dramatically over the table lookup method. One possibility is that the statistical method runs into too many sparse data problems around the fringe of the data set--were we able to use a larger data set, we might see the statistics approach the curve defined by the demeriting. Another is that the bucketing is too coarse, although the interpolation along the independent merit axis would seem to mitigate that problem.
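In code, demeriting is a one-line wrapper around any independent merit, which is part of its appeal over the table-lookup scheme. A minimal sketch (function name ours):

```python
# Minimal sketch of demeriting: equation (9) is just the independent
# merit scaled by delta per competitor.

def demerited_fom(im, competitors, delta=0.7):
    """delta = 0.7 was the paper's best speed/accuracy trade-off."""
    return im * delta ** competitors

# An edge with no competitors keeps its merit; ten competitors cut it to
# about 2.8% of its value, steering work toward still-unparsed spans.
print(demerited_fom(1e-5, 0), demerited_fom(1e-5, 10))
```

No training table is needed; the only parameter is δ.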
5 Conclusion

In the prior work, we see the average edge cost of a chart parse reduced from 170,000 or so down to 229.7. This paper gives a simple modification to the [Gold98] algorithm that further reduces this count to just over 90 edges, less than two times the perfect minimum number of edges. In addition to speeding up tag-stream parsers, it seems reasonable to assume that the demeriting system would work in other classes of parsers such as the lexicalised model of (Charniak, 1997)--as long as the parsing technique has some sort of demeritable ranking system, or at least some way of paying less attention to already-filled positions, the kernel of the system should be applicable. Furthermore, because of its ease of implementation, we strongly recommend the demeriting system to those working with best-first parsing.

References

Robert J. Bobrow. 1990. Statistical agenda parsing. In DARPA Speech and Language Workshop, pages 222-224.
Sharon Caraballo and Eugene Charniak. 1998. New figures of merit for best-first probabilistic chart parsing. Computational Linguistics, 24(2):275-298, June.
Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Proceedings of the Fourteenth National Conference on Artificial Intelligence, pages 598-603, Menlo Park. AAAI Press/MIT Press.
Mahesh V. Chitrao and Ralph Grishman. 1990. Statistical parsing of messages. In DARPA Speech and Language Workshop, pages 263-266.
Sharon Goldwater, Eugene Charniak, and Mark Johnson. 1998. Best-first edge-based chart parsing. In 6th Annual Workshop for Very Large Corpora, pages 127-133.
Martin Kay. 1970. Algorithm schemata and data structures in syntactic processing. In Barbara J. Grosz, Karen Sparck Jones, and Bonnie Lynn Webber, editors, Readings in Natural Language Processing, pages 35-70. Morgan Kaufmann, Los Altos, CA.
Fred Kochman and Joseph Kupin. 1991. Calculating the probability of a partial parse of a sentence. In DARPA Speech and Language Workshop, pages 273-240.
David M. Magerman and Mitchell P. Marcus. 1991. Parsing the voyager domain using Pearl. In DARPA Speech and Language Workshop, pages 231-236.
Scott Miller and Heidi Fox. 1994. Automatic grammar acquisition. In Proceedings of the Human Language Technology Workshop, pages 268-271.
Automatic Identification of Word Translations from Unrelated English and German Corpora Reinhard Rapp University of Mainz, FASK D-76711 Germersheim, Germany rapp @usun2.fask.uni-mainz.de Abstract Algorithms for the alignment of words in translated texts are well established. How- ever, only recently new approaches have been proposed to identify word translations from non-parallel or even unrelated texts. This task is more difficult, because most statistical clues useful in the processing of parallel texts cannot be applied to non-par- allel texts. Whereas for parallel texts in some studies up to 99% of the word align- ments have been shown to be correct, the accuracy for non-parallel texts has been around 30% up to now. The current study, which is based on the assumption that there is a correlation between the patterns of word co-occurrences in corpora of different lan- guages, makes a significant improvement to about 72% of word translations identified correctly. 1 Introduction Starting with the well-known paper of Brown et al. (1990) on statistical machine translation, there has been much scientific interest in the alignment of sentences and words in translated texts. Many studies show that for nicely parallel corpora high accuracy rates of up to 99% can be achieved for both sentence and word alignment (Gale & Church, 1993; Kay & R/Sscheisen, 1993). Of course, in practice - due to omissions, transpositions, insertions, and replacements in the process of translation - with real texts there may be all kinds of problems, and therefore ro- bustness is still an issue (Langlais et al., 1998). Nevertheless, the results achieved with these algorithms have been found useful for the corn- pilation of dictionaries, for checking the con- sistency of terminological usage in translations, for assisting the terminological work of trans- lators and interpreters, and for example-based machine translation. By now, some alignment programs are offered commercially: Translation memory tools for translators, such as IBM's Translation Manager or Trados' Translator's Workbench, are bundled or can be upgraded with programs for sentence alignment. Most of the proposed algorithms first con- duct an alignment of sentences, that is, they lo- cate those pairs of sentences that are translations of each other. In a second step a word alignment is performed by analyzing the correspondences of words in each pair of sentences. The algo- rithms are usually based on one or several of the following statistical clues: 1. correspondence of word and sentence order 2. correlation between word frequencies 3. cognates: similar spelling of words in related languages All these clues usually work well for parallel texts. However, despite serious efforts in the compilation of parallel corpora (Armstrong et al., 1998), the availability of a large-enough par- allel corpus in a specific domain and for a given pair of languages is still an exception. Since the acquisition of monolingual corpora is much easier, it would be desirable to have a program that can determine the translations of words from comparable (same domain) or possibly unrelated monolingnal texts of two languages. This is what translators and interpreters usually do when preparing terminology in a specific field: They read texts corresponding to this field in both languages and draw their conclusions on word correspondences from the usage of the 519 terms. 
Of course, the translators and interpreters can understand the texts, whereas our programs are only considering a few statistical clues. For non-parallel texts the first clue, which is usually by far the strongest of the three men- tioned above, is not applicable at all. The second clue is generally less powerful than the first, since most words are ambiguous in natural lan- guages, and many ambiguities are different across languages. Nevertheless, this clue is ap- plicable in the case of comparable texts, al- though with a lower reliability than for parallel texts. However, in the case of unrelated texts, its usefulness may be near zero. The third clue is generally limited to the identification of word pairs with similar spelling. For all other pairs, it is usually used in combination with the first clue. Since the first clue does not work with non-parallel texts, the third clue is useless for the identification of the majority of pairs. For unrelated languages, it is not applicable anyway. In this situation, Rapp (1995) proposed using a clue different from the three mentioned above: His co-occurrence clue is based on the as- sumption that there is a correlation between co- occurrence patterns in different languages. For example, if the words teacher and school co- occur more often than expected by chance in a corpus of English, then the German translations of teacher and school, Lehrer and Schule, should also co-occur more often than expected in a corpus of German. In a feasibility study he showed that this assumption actually holds for the language pair English/German even in the case of unrelated texts. When comparing an English and a German co-occurrence matrix of corresponding words, he found a high corre- lation between the co-occurrence patterns of the two matrices when the rows and columns of both matrices were in corresponding word order, and a low correlation when the rows and col- umns were in random order. The validity of the co-occurrence clue is ob- vious for parallel corpora, but - as empirically shown by Rapp - it also holds for non-parallel corpora. It can be expected that this clue will work best with parallel corpora, second-best with comparable corpora, and somewhat worse with unrelated corpora. In all three cases, the problem of robustness - as observed when applying the word-order clue to parallel corpo- ra- is not severe. Transpositions of text seg- ments have virtually no negative effect, and omissions or insertions are not critical. How- ever, the co-occurrence clue when applied to comparable corpora is much weaker than the word-order clue when applied to parallel cor- pora, so larger corpora and well-chosen sta- tistical methods are required. After an attempt with a context heterogeneity measure (Fung, 1995) for identifying word translations, Fung based her later work also on the co-occurrence assumption (Fung & Yee, 1998; Fung & McKeown, 1997). By presup- posing a lexicon of seed words, she avoids the prohibitively expensive computational effort en- countered by Rapp (1995). The method des- cribed here - although developed independently of Fung's work- goes in the same direction. Conceptually, it is a trivial case of Rapp's matrix permutation method. By simply assuming an initial lexicon the large number of permu- tations to be considered is reduced to a much smaller number of vector comparisons. The main contribution of this paper is to describe a practical implementation based on the co-occur- rence clue that yields good results. 
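The co-occurrence clue itself can be illustrated with a toy example before the approach is laid out. The following sketch uses two invented three-sentence "corpora"; it shows only that the English pair teacher/school and its German counterpart Lehrer/Schule exhibit parallel co-occurrence counts, which is the correlation the method exploits at scale.

```python
# Toy illustration of the co-occurrence clue; both "corpora" are invented.
from itertools import combinations
from collections import Counter

def cooc_counts(sentences):
    counts = Counter()
    for s in sentences:
        for a, b in combinations(sorted(set(s.split())), 2):
            counts[(a, b)] += 1
    return counts

english = ["the teacher went to school",
           "the teacher talked to the class",
           "school was closed"]
german = ["der Lehrer ging zur Schule",
          "der Lehrer sprach mit der Klasse",
          "die Schule war geschlossen"]

en, de = cooc_counts(english), cooc_counts(german)
# teacher/school co-occur in English exactly where Lehrer/Schule do in German:
print(en[("school", "teacher")], de[("Lehrer", "Schule")])   # -> 1 1
```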
2 Approach As mentioned above, it is assumed that across languages there is a correlation between the co- occurrences of words that are translations of each other. If - for example - in a text of one language two words A and B co-occur more of- ten than expected by chance, then in a text of another language those words that are transla- tions of A and B should also co-occur more fre- quently than expected. This is the only statisti- cal clue used throughout this paper. It is further assumed that there is a small dictionary available at the beginning, and that our aim is to expand this base lexicon. Using a corpus of the target language, we first compute a co-occurrence matrix whose rows are all word types occurring in the corpus and whose col- unms are all target words appearing in the base lexicon. We now select a word of the source language whose translation is to be determined. Using our source-language corpus, we compute 520 a co-occurrence vector for this word. We trans- late all known words in this vector to the target language. Since our base lexicon is small, only some of the translations are known. All un- known words are discarded from the vector and the vector positions are sorted in order to match the vectors of the target-language matrix. With the resulting vector, we now perform a similar- ity computation to all vectors in the co-occur- rence matrix of the target language. The vector with the highest similarity is considered to be the translation of our source-language word. 3 Simulation 3.1 Language Resources To conduct the simulation, a number of resour- ces were required. These are 1. a German corpus 2. an English corpus 3. a number of German test words with known English translations 4. a small base lexicon, German to English As the German corpus, we used 135 million words of the newspaper Frankfurter Allgemeine Zeitung (1993 to 1996), and as the English corpus 163 million words of the Guardian (1990 to 1994). Since the orientation of the two newspapers is quite different, and since the time spans covered are only in part overlapping, the two corpora can be considered as more or less unrelated. For testing our results, we started with a list of 100 German test words as proposed by Rus- sell (1970), which he used for an association experiment with German subjects. By looking up the translations for each of these 100 words, we obtained a test set for evaluation. Our German/English base lexicon is derived from the Collins Gem German Dictionary with about 22,300 entries. From this we eliminated all multi-word entries, so 16,380 entries re- mained. Because we had decided on our test word list beforehand, and since it would not make much sense to apply our method to words that are already in the base lexicon, we also re- moved all entries belonging to the 100 test words. 3.2 Pre-processing Since our corpora are very large, to save disk space and processing time we decided to remove all function words from the texts. This was done on the basis of a list of approximately 600 German and another list of about 200 English function words. These lists were compiled by looking at the closed class words (mainly ar- ticles, pronouns, and particles) in an English and a German morphological lexicon (for details see Lezius, Rapp, & Wettler, 1998) and at word frequency lists derived from our corpora. 
1 By eliminating function words, we assumed we would lose little information: Function words are often highly ambiguous and their co-occur- rences are mostly based on syntactic instead of semantic patterns. Since semantic patterns are more reliable than syntactic patterns across language families, we hoped that eliminating the function words would give our method more generality. We also decided to lemmatize our corpora. Since we were interested in the translations of base forms only, it was clear that lemmatization would be useful. It not only reduces the sparse- data problem but also takes into account that German is a highly inflectional language, whereas English is not. For both languages we conducted a partial lemmatization procedure that was based only on a morphological lexicon and did not take the context of a word form into account. This means that we could not lem- matize those ambiguous word forms that can be derived from more than one base form. How- ever, this is a relatively rare case. (According to Lezius, Rapp, & Wettler, 1998, 93% of the to- kens of a German text had only one lemma.) Al- though we had a context-sensitive lemmatizer for German available (Lezius, Rapp, & Wettler, 1998), this was not the case for English, so for reasons of symmetry we decided not to use the context feature. I In cases in which an ambiguous word can be both a content and a function word (e.g., can), preference was given to those interpretations that appeared to occur more frequently. 521 3.3 Co-occurrence Counting For counting word co-occurrences, in most other studies a fixed window size is chosen and it is determined how often each pair of words occurs within a text window of this size. However, this approach does not take word order within a window into account. Since it has been empiri- cally observed that word order of content words is often similar between languages (even be- tween unrelated languages such as English and Chinese), and since this may be a useful statisti- cal clue, we decided to modify the common ap- proach in the way proposed by Rapp (1996, p. 162). Instead of computing a single co-occur- rence vector for a word A, we compute several, one for each position within the window. For example, if we have chosen the window size 2, we would compute a first co-occurrence vector for the case that word A is two words ahead of another word B, a second vector for the case that word A is one word ahead of word B, a third vector for A directly following B, and a fourth vector for A following two words after B. If we added up these four vectors, the result would be the co-occurrence vector as obtained when not taking word order into account. However, this is not what we do. Instead, we combine the four vectors of length n into a single vector of length 4n. Since preliminary experiments showed that a window size of 3 with consideration of word order seemed to give somewhat better results than other window types, the results reported here are based on vectors of this kind. However, the computational methods described below are in the same way applicable to window sizes of any length with or without consideration of word order. 3.4 Association Formula Our method is based on the assumption that there is a correlation between the patterns of word co-occurrences in texts of different lan- guages. However, as Rapp (1995) proposed, this correlation may be strengthened by not using the co-occurrence counts directly, but association strengths between words instead. 
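A minimal sketch of the pre-processing just described, with tiny placeholder lists standing in for the roughly 600 German and 200 English function words and for the morphological lexicons used in the (context-free) lemmatization:

```python
# Sketch of the pre-processing step; the word lists are tiny placeholders
# for the full function-word lists and morphological lexicons.

GERMAN_FUNCTION_WORDS = {"der", "die", "das", "den", "und", "zu"}
LEMMAS = {"ging": "gehen", "sprach": "sprechen", "Schulen": "Schule"}

def preprocess(tokens):
    """Drop function words, then apply context-free lemma lookup."""
    return [LEMMAS.get(t, t) for t in tokens if t not in GERMAN_FUNCTION_WORDS]

print(preprocess("der Lehrer ging zu den Schulen".split()))
# -> ['Lehrer', 'gehen', 'Schule']
```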
The idea is to eliminate word-frequency effects and to empha- size significant word pairs by comparing their observed co-occurrence counts with their ex- pected co-occurrence counts. In the past, for this purpose a number of measures have been pro- posed. They were based on mutual information (Church & Hanks, 1989), conditional probabili- ties (Rapp, 1996), or on some standard statisti- cal tests, such as the chi-square test or the log- likelihood ratio (Dunning, 1993). For the pur- pose of this paper, we decided to use the log- likelihood ratio, which is theoretically well justified and more appropriate for sparse data than chi-square. In preliminary experiments it also led to slightly better results than the con- ditional probability measure. Results based on mutual information or co-occurrence counts were significantly worse. For efficient compu- tation of the log-likelihood ratio we used the fol- lowing formula: 2 kiiN - 2 log ~ = ~ ki~ log c~Rj i,j~{l,2} kilN -- kl2N = kll log c-~-+kl2 log c, R2 • k21N -- k22 N + k21 log ~ + g22 log c2R2 where C 1 =kll +k12 C 2 =k21 +k22 R l = kit + k2t Rz = ki2 + k22 N=kll+k12+k21+k22 with parameters kij expressed in terms of corpus frequencies: kl~ = frequency of common occurrence of word A and word B kl2 = corpus frequency of word A - kll k21 = corpus frequency of word B - kll k22 = size of corpus (no. of tokens) - corpus frequency of A - corpus frequency of B All co-occurrence vectors were transformed us- ing this formula. Thereafter, they were nor- malized in such a way that for each vector the sum of its entries adds up to one. In the rest of the paper, we refer to the transformed and nor- malized vectors as association vectors. 2 This formulation of the log-likelihood ratio was pro- posed by Ted Dunning during a discussion on the corpora mailing list (e-mail of July 22, 1997). It is faster and more mnemonic than the one in Dunning (1993). 522 3.5 Vector Similarity To determine the English translation of an un- known German word, the association vector of the German word is computed and compared to all association vectors in the English association matrix. For comparison, the correspondences between the vector positions and the columns of the matrix are determined by using the base lexicon. Thus, for each vector in the English matrix a similarity value is computed and the English words are ranked according to these values. It is expected that the correct translation is ranked first in the sorted list. For vector comparison, different similarity measures can be considered. Salton & McGill (1983) proposed a number of measures, such as the Cosine coefficient, the Jaccard coefficient, and the Dice coefficient (see also Jones & Fur- nas, 1987). For the computation of related terms and synonyms, Ruge (1995), Landauer and Dumais (1997), and Fung and McKeown (1997) used the cosine measure, whereas Grefenstette (1994, p. 48) used a weighted Jaccard measure. We propose here the city-block metric, which computes the similarity between two vectors X and Y as the sum of the absolute differences of corresponding vector positions: S:Z[Xi -Yi[ i=l In a number of experiments we compared it to other similarity measures, such as the cosine measure, the Jaccard measure (standard and bi- nary), the Euclidean distance, and the scalar product, and found that the city-block metric yielded the best results. This may seem sur- prising, since the formula is very simple and the computational effort smaller than with the other measures. 
It must be noted, however, that the other authors applied their similarity measures directly to the (log of the) co-occurrence vec- tors, whereas we applied the measures to the as- sociation vectors based on the log-likelihood ratio. According to our observations, estimates based on the log-likelihood ratio are generally more reliable across different corpora and lan- guages. 3.6 Simulation Procedure The results reported in the next section were obtained using the following procedure: 1. Based on the word co-occurrences in the German corpus, for each of the 100 German test words its association vector was com- puted. In these vectors, all entries belonging to words not found in the English part of the base lexicon were deleted. 2. Based on the word co-occurrences in the English corpus, an association matrix was computed whose rows were all word types of the corpus with a frequency of 100 or higher 3 and whose columns were all English words occurring as first translations of the German words in the base lexicon. 4 3. Using the similarity function, each of the German vectors was compared to all vectors of the English matrix. The mapping between vector positions was based on the first trans- lations given in the base lexicon. For each of the German source words, the English vo- cabulary was ranked according to the re- suiting similarity value. 3 The limitation to words with frequencies above 99 was introduced for computational reasons to reduce the number of vector comparisons and thus speed up the program. (The English corpus contains 657,787 word types after lemmatization, which leads to extremely large matrices.) The purpose of this limitation was not to limit the number of translation candidates considered. Experiments with lower thresholds showed that this choice has little effect on the results to our set of test words. 4 This means that alternative translations of a word were not considered. Another approach, as conducted by Fung & Yee (1998), would be to consider all possible translations listed in the lexicon and to give them equal (or possibly descending) weight. Our decision was motivated by the observation that many words have a salient first translation and that this translation is listed first in the Collins Gem Dictio- nary German-English. We did not explore this issue further since in a small pocket dictionary only few ambiguities are listed. 523 4 Results and Evaluation Table 1 shows the results for 20 of the 100 Ger- man test words. For each of these test words, the top five translations as automatically generated are listed. In addition, for each word its ex- pected English translation from the test set is given together with its position in the ranked lists of computed translations. The positions in the ranked lists are a measure for the quality of the predictions, with a 1 meaning that the pre- diction is correct and a high value meaning that the program was far from predicting the correct word. If we look at the table, we see that in many cases the program predicts the expected word, with other possible translations immediately following. For example, for the German word Hiiuschen, the correct translations bungalow, cottage, house, and hut are listed. In other cases, typical associates follow the correct translation. For example, the correct translation of Miid- chen, girl, is followed by boy, man, brother, and lady. This behavior can be expected from our associationist approach. 
Unfortunately, in some cases the correct translation and one of its strong associates are mixed up, as for example with Frau, where its correct translation, woman, is listed only second after its strong associate man. Another example of this typical kind of error is pfeifen, where the correct translation whistle is listed third after linesman and referee.

Let us now look at some cases where the program did particularly badly. For Kohl we had expected its dictionary translation cabbage, but -- given that a substantial part of our newspaper corpora consists of political texts -- we do not need to further explain why our program lists Major, Kohl, Thatcher, Gorbachev, and Bush, state leaders who were in office during the time period the texts were written. In other cases, such as Krankheit and Whisky, the simulation program simply preferred the British usage of the Guardian over the American usage in our test set: Instead of sickness, the program predicted disease and illness, and instead of whiskey it predicted whisky.

A much more severe problem is that our current approach cannot properly handle ambiguities: For the German word weiß it does not predict white, but instead know. The reason is that weiß can also be third person singular of the German verb wissen (to know), which in newspaper texts is more frequent than the color white. Since our lemmatizer is not context-sensitive, this word was left unlemmatized, which explains the result.

To be able to compare our results with other work, we also did a quantitative evaluation. For all test words we checked whether the predicted translation (first word in the ranked list) was identical to our expected translation. This was true for 65 of the 100 test words. However, in some cases the choice of the expected translation in the test set had been somewhat arbitrary. For example, for the German word Straße we had expected street, but the system predicted road, which is a translation quite as good. Therefore, as a better measure for the accuracy of our system we counted the number of times where an acceptable translation of the source word is ranked first. This was true for 72 of the 100 test words, which gives us an accuracy of 72%. In another test, we checked whether an acceptable translation appeared among the top 10 of the ranked lists. This was true in 89 cases.⁵

For comparison, Fung & McKeown (1997) report an accuracy of about 30% when only the top candidate is counted. However, it must be emphasized that their result has been achieved under very different circumstances. On the one hand, their task was more difficult because they worked on a pair of unrelated languages (English/Japanese) using smaller corpora and a random selection of test words, many of which were multi-word terms. Also, they predetermined a single translation as being correct. On the other hand, when conducting their evaluation, Fung & McKeown limited the vocabulary they considered as translation candidates to a few hundred terms, which obviously facilitates the task.

⁵ We did not check for the completeness of the translations found (recall), since this measure depends very much on the size of the dictionary used as the standard.
German test word   expected translation (rank)   top five translations as automatically generated
Baby               baby (1)                       baby, child, mother, daughter, father
Brot               bread (1)                      bread, cheese, meat, food, butter
Frau               woman (2)                      man, woman, boy, friend, wife
gelb               yellow (1)                     yellow, blue, red, pink, green
Häuschen           cottage (2)                    bungalow, cottage, house, hut, village
Kind               child (1)                      child, daughter, son, father, mother
Kohl               cabbage (17074)                Major, Kohl, Thatcher, Gorbachev, Bush
Krankheit          sickness (86)                  disease, illness, Aids, patient, doctor
Mädchen            girl (1)                       girl, boy, man, brother, lady
Musik              music (1)                      music, dance, theatre, musical, song
Ofen               stove (3)                      heat, oven, stove, house, burn
pfeifen            whistle (3)                    linesman, referee, whistle, blow, offside
Religion           religion (1)                   religion, culture, faith, religious, belief
Schaf              sheep (1)                      sheep, cattle, cow, pig, goat
Soldat             soldier (1)                    soldier, army, troop, force, civilian
Straße             street (2)                     road, street, city, town, walk
süß                sweet (1)                      sweet, smell, delicious, taste, love
Tabak              tobacco (1)                    tobacco, cigarette, consumption, nicotine, drink
weiß               white (46)                     know, say, thought, see, think
Whisky             whiskey (11)                   whisky, beer, Scotch, bottle, wine

Table 1: Results for 20 of the 100 test words (for full list see http://www.fask.uni-mainz.de/user/rappl)

5 Discussion and Conclusion

The method described can be seen as a simple case of the gradient descent method proposed by Rapp (1995), which does not need an initial lexicon but is computationally prohibitively expensive. It can also be considered as an extension from the monolingual to the bilingual case of the well-established methods for semantic or syntactic word clustering as proposed by Schütze (1993), Grefenstette (1994), Ruge (1995), Rapp (1996), Lin (1998), and others. Some of these authors perform a shallow or full syntactic analysis before constructing the co-occurrence vectors. Others reduce the size of the co-occurrence matrices by performing a singular value decomposition. However, in as yet unpublished work we found that, at least for the computation of synonyms and related words, neither syntactic analysis nor singular value decomposition leads to significantly better results than the approach described here when applied to the monolingual case (see also Grefenstette, 1993), so we did not try to include these methods in our system. Nevertheless, both methods are of technical value since they lead to a reduction in the size of the co-occurrence matrices.

Future work has to approach the difficult problem of ambiguity resolution, which has not been dealt with here. One possibility would be to semantically disambiguate the words in the corpora beforehand, another to look at co-occurrences between significant word sequences instead of co-occurrences between single words.

To conclude, let us add some speculation by mentioning that the ability to identify word translations from non-parallel texts can be seen as an indicator in favor of the associationist view of human language acquisition (see also Landauer & Dumais, 1997, and Wettler & Rapp, 1993). It gives us an idea of how it is possible to derive the meaning of unknown words from texts by only presupposing a limited number of known words and then iteratively expanding this knowledge base. One possibility to get the process going would be to learn vocabulary lists as in school, another to simply acquire the names of items in the physical world.

Acknowledgements

I thank Manfred Wettler, Gisela Zunker-Rapp, Wolfgang Lezius, and Anita Todd for their support of this work.

References

Armstrong, S.; Kempen, M.; Petitpierre, D.; Rapp, R.; Thompson, H. (1998).
Multilingual Corpora for Cooperation. Proceedings of the 1st International Conference on Linguistic Resources and Evaluation (LREC), Granada, Vol. 2, 975-980.
Brown, P.; Cocke, J.; Della Pietra, S. A.; Della Pietra, V. J.; Jelinek, F.; Lafferty, J. D.; Mercer, R. L.; Rossin, P. S. (1990). A statistical approach to machine translation. Computational Linguistics, 16(2), 79-85.
Church, K. W.; Hanks, P. (1989). Word association norms, mutual information, and lexicography. In: Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics. Vancouver, British Columbia, 76-83.
Dunning, T. (1993). Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1), 61-74.
Fung, P. (1995). Compiling bilingual lexicon entries from a non-parallel English-Chinese corpus. Proceedings of the 3rd Annual Workshop on Very Large Corpora, Boston, Massachusetts, 173-183.
Fung, P.; McKeown, K. (1997). Finding terminology translations from non-parallel corpora. Proceedings of the 5th Annual Workshop on Very Large Corpora, Hong Kong, 192-202.
Fung, P.; Yee, L. Y. (1998). An IR approach for translating new words from nonparallel, comparable texts. In: Proceedings of COLING-ACL 1998, Montreal, Vol. 1, 414-420.
Gale, W. A.; Church, K. W. (1993). A program for aligning sentences in bilingual corpora. Computational Linguistics, 19(3), 75-102.
Grefenstette, G. (1993). Evaluation techniques for automatic semantic extraction: comparing syntactic and window based approaches. In: Proceedings of the Workshop on Acquisition of Lexical Knowledge from Text, Columbus, Ohio.
Grefenstette, G. (1994). Explorations in Automatic Thesaurus Discovery. Dordrecht: Kluwer.
Jones, W. P.; Furnas, G. W. (1987). Pictures of relevance: a geometric analysis of similarity measures. Journal of the American Society for Information Science, 38(6), 420-442.
Kay, M.; Röscheisen, M. (1993). Text-Translation Alignment. Computational Linguistics, 19(1), 121-142.
Landauer, T. K.; Dumais, S. T. (1997). A solution to Plato's problem: the latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2), 211-240.
Langlais, P.; Simard, M.; Véronis, J. (1998). Methods and practical issues in evaluating alignment techniques. In: Proceedings of COLING-ACL 1998, Montreal, Vol. 1, 711-717.
Lezius, W.; Rapp, R.; Wettler, M. (1998). A freely available morphology system, part-of-speech tagger, and context-sensitive lemmatizer for German. In: Proceedings of COLING-ACL 1998, Montreal, Vol. 2, 743-748.
Lin, D. (1998). Automatic Retrieval and Clustering of Similar Words. In: Proceedings of COLING-ACL 1998, Montreal, Vol. 2, 768-773.
Rapp, R. (1995). Identifying word translations in non-parallel texts. In: Proceedings of the 33rd Meeting of the Association for Computational Linguistics. Cambridge, Massachusetts, 320-322.
Rapp, R. (1996). Die Berechnung von Assoziationen. Hildesheim: Olms.
Ruge, G. (1995). Human memory models and term association. Proceedings of the ACM SIGIR Conference, Seattle, 219-227.
Russell, W. A. (1970). The complete German language norms for responses to 100 words from the Kent-Rosanoff word association test. In: L. Postman, G. Keppel (eds.): Norms of Word Association. New York: Academic Press, 53-94.
Salton, G.; McGill, M. (1983). Introduction to Modern Information Retrieval. New York: McGraw-Hill.
Schütze, H. (1993). Part-of-speech induction from scratch.
In: Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Columbus, Ohio, 251-258.
Wettler, M.; Rapp, R. (1993). Computation of word associations based on the co-occurrences of words in large corpora. In: Proceedings of the 1st Workshop on Very Large Corpora, Columbus, Ohio, 84-93.
Mining the Web for Bilingual Text

Philip Resnik
Dept. of Linguistics / Institute for Advanced Computer Studies
University of Maryland, College Park, MD 20742
resnik@umiacs.umd.edu

Abstract

STRAND (Resnik, 1998) is a language-independent system for automatic discovery of text in parallel translation on the World Wide Web. This paper extends the preliminary STRAND results by adding automatic language identification, scaling up by orders of magnitude, and formally evaluating performance. The most recent end-product is an automatically acquired parallel corpus comprising 2491 English-French document pairs, approximately 1.5 million words per language.

1 Introduction

Text in parallel translation is a valuable resource in natural language processing. Statistical methods in machine translation (e.g. (Brown et al., 1990)) typically rely on large quantities of bilingual text aligned at the document or sentence level, and a number of approaches in the burgeoning field of cross-language information retrieval exploit parallel corpora either in place of or in addition to mappings between languages based on information from bilingual dictionaries (Davis and Dunning, 1995; Landauer and Littman, 1990; Hull and Oard, 1997; Oard, 1997). Despite the utility of such data, however, sources of bilingual text are subject to such limitations as licensing restrictions, usage fees, restricted domains or genres, and dated text (such as 1980's Canadian politics); or such sources simply may not exist for language pairs of interest.

[Footnote: This work was supported by Department of Defense contract MDA90496C1250, DARPA/ITO Contract N66001-97-C-8540, and a research grant from Sun Microsystems Laboratories. The author gratefully acknowledges the comments of the anonymous reviewers, helpful discussions with Dan Melamed and Doug Oard, and the assistance of Jeff Allen in the French-English experimental evaluation.]

Although the majority of Web content is in English, it also shows great promise as a source of multilingual content. Using figures from the Babel survey of multilinguality on the Web (http://www.isoc.org/), it is possible to estimate that as of June, 1997, there were on the order of 63000 primarily non-English Web servers, ranging over 14 languages. Moreover, a follow-up investigation of the non-English servers suggests that nearly a third contain some useful cross-language data, such as parallel English on the page or links to parallel English pages -- the follow-up also found pages in five languages not identified by the Babel study (Catalan, Chinese, Hungarian, Icelandic, and Arabic; Michael Littman, personal communication). Given the continued explosive increase in the size of the Web, the trend toward business organizations that cross national boundaries, and high levels of competition for consumers in a global marketplace, it seems impossible not to view multilingual content on the Web as an expanding resource. Moreover, it is a dynamic resource, changing in content as the world changes. For example, Diekema et al., in a presentation at the 1998 TREC-7 conference (Voorhees and Harman, 1998), observed that the performance of their cross-language information retrieval was hurt by lexical gaps such as Bosnia/Bosnie -- this illustrates a highly topical missing pair in their static lexical resource (which was based on WordNet 1.5).
And Gey et al., also at TREC-7, observed that in doing cross-language retrieval using commercial machine translation systems, gaps in the lexicon (their example was acupuncture/Akupunktur) could make the difference between precision of 0.08 and precision of 0.83 on individual queries.

Resnik (1998) presented an algorithm called STRAND (Structural Translation Recognition for Acquiring Natural Data) designed to explore the Web as a source of parallel text, demonstrating its potential with a small-scale evaluation based on the author's judgments. After briefly reviewing the STRAND architecture and preliminary results (Section 2), this paper goes beyond that preliminary work in two significant ways. First, the framework is extended to include a filtering stage that uses automatic language identification to eliminate an important class of false positives: documents that appear structurally to be parallel translations but are in fact not in the languages of interest. The system is then run on a somewhat larger scale and evaluated formally for English and Spanish using measures of agreement with independent human judges, precision, and recall (Section 3). Second, the algorithm is scaled up more seriously to generate large numbers of parallel documents, this time for English and French, and again subjected to formal evaluation (Section 4). The concrete end result reported here is an automatically acquired English-French parallel corpus of Web documents comprising 2491 document pairs, approximately 1.5 million words per language (without markup), containing little or no noise.

2 STRAND Preliminaries

This section is a brief summary of the STRAND system and previously reported preliminary results (Resnik, 1998).

[Figure 1: The STRAND architecture]

The STRAND architecture is organized as a pipeline, beginning with a candidate generation stage that (over-)generates candidate pairs of documents that might be parallel translations. (See Figure 1.) The first implementation of the generation stage used a query to the Altavista search engine to generate pages that could be viewed as "parents" of pages in parallel translation, by asking for pages containing one portion of anchor text (the readable material in a hyperlink) containing the string "English" within a fixed distance of another anchor text containing the string "Spanish". (The matching process was case-insensitive.) This generated many good pairs of pages, such as those pointed to by hyperlinks reading Click here for English version and Click here for Spanish version, as well as many bad pairs, such as university pages containing links to English Literature in close proximity to Spanish Literature.

The candidate generation stage is followed by a candidate evaluation stage that represents the core of the approach, filtering out bad candidates from the set of generated page pairs. It employs a structural recognition algorithm exploiting the fact that Web pages in parallel translation are invariably very similar in the way they are structured -- hence the 's' in STRAND. For example, see Figure 2. The structural recognition algorithm first runs both documents through a transducer that reduces each to a linear sequence of tokens corresponding to HTML markup elements, interspersed with tokens representing undifferentiated "chunks" of text.
For example, the transducer would replace the HTML source text <TITLE>ACL'99 Conference Home Page</TITLE> with the three tokens [BEGIN:TITLE], [Chunk:24], and [END:TITLE]. The number inside the chunk token is the length of the text chunk, not counting whitespace; from this point on only the length of the text chunks is used, and therefore the structural filtering algorithm is completely language independent.

[Figure 2: Structural similarity in parallel translations on the Web]

Given the transducer's output for each document, the structural filtering stage aligns the two streams of tokens by applying a standard, widely available dynamic programming algorithm for finding an optimal alignment between two linear sequences. [Footnote 1: Known to many programmers as diff.] This alignment matches identical markup tokens to each other as much as possible, identifies runs of unmatched tokens that appear to exist only in one sequence but not the other, and marks pairs of non-identical tokens that were forced to be matched to each other in order to obtain the best alignment possible. [Footnote 2: An anonymous reviewer observes that diff has no preference for aligning chunks of similar lengths, which in some cases might lead to a poor alignment when a good one exists. This could result in a failure to identify true translations and is worth investigating further.] At this point, if there were too many unmatched tokens, the candidate pair is taken to be prima facie unacceptable and immediately filtered out.

Otherwise, the algorithm extracts from the alignment those pairs of chunk tokens that were matched to each other in order to obtain the best alignments. [Footnote 3: Chunk tokens with exactly equal lengths are excluded; see (Resnik, 1998) for reasons and other details of the algorithm.] It then computes the correlation between the lengths of these non-markup text chunks. As is well known, there is a reliably linear relationship in the lengths of text translations -- small pieces of source text translate to small pieces of target text, medium to medium, and large to large. Therefore we can apply a standard statistical hypothesis test, and if p < .05 we can conclude that the lengths are reliably correlated and accept the page pair as likely to be translations of each other. Otherwise, this candidate page pair is filtered out. [Footnote 4: The level of significance (p < .05) was the initial selection during algorithm development, and never changed. This, the unmatched-tokens threshold for prima facie rejection due to mismatches (20%), and the maximum distance between hyperlinks in the generation stage (10 lines), are parameters of the algorithm that were determined during development using a small amount of arbitrarily selected French-English data downloaded from the Web. These values work well in practice and have not been varied systematically; their values were fixed in advance of the preliminary evaluation and have not been changed since.]
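The whole evaluation stage is compact enough to sketch. In the sketch below, Python's difflib stands in for the diff-style dynamic programming alignment, a Pearson correlation test stands in for the "standard statistical hypothesis test", the thresholds come from Footnote 4, and the exact bookkeeping for unmatched tokens is a guess rather than a reconstruction of STRAND's internals:

```python
import difflib
from html.parser import HTMLParser
from scipy.stats import pearsonr

class Linearizer(HTMLParser):
    """Reduce a page to ('BEGIN', tag), ('END', tag), ('CHUNK', length) tokens."""
    def __init__(self):
        super().__init__()
        self.tokens = []
    def handle_starttag(self, tag, attrs):
        self.tokens.append(('BEGIN', tag))
    def handle_endtag(self, tag):
        self.tokens.append(('END', tag))
    def handle_data(self, data):
        length = len(''.join(data.split()))   # chunk length without whitespace
        if length:
            self.tokens.append(('CHUNK', length))

def linearize(html_text):
    parser = Linearizer()
    parser.feed(html_text)
    return parser.tokens

def structurally_parallel(html1, html2, max_mismatch=0.20, alpha=0.05):
    t1, t2 = linearize(html1), linearize(html2)
    matcher = difflib.SequenceMatcher(None, t1, t2, autojunk=False)
    unmatched, pairs = 0, []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op in ('delete', 'insert'):        # runs existing in only one stream
            unmatched += (i2 - i1) + (j2 - j1)
        elif op == 'replace':                 # non-identical tokens forced together
            for a, b in zip(t1[i1:i2], t2[j1:j2]):
                if a[0] == 'CHUNK' and b[0] == 'CHUNK':
                    pairs.append((a[1], b[1]))
            unmatched += abs((i2 - i1) - (j2 - j1))
    if unmatched > max_mismatch * (len(t1) + len(t2)):
        return False                          # prima facie rejection
    if len(pairs) < 3:
        return False                          # too few chunk pairs to test
    r, p = pearsonr([a for a, _ in pairs], [b for _, b in pairs])
    return r > 0 and p < alpha                # lengths reliably correlated?
```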
In the preliminary evaluation, I generated a test set containing 90 English-Spanish candidate pairs, using the candidate generation stage as just described. I evaluated these candidates by hand, identifying 24 as true translation pairs. [Footnote 5: The complete test set and my judgments for this preliminary evaluation can be found at http://umiacs.umd.edu/~resnik/amta98/.] Of these 24, STRAND identified 15 as true translation pairs, for a recall of 62.5%. Perhaps more important, it only generated 2 additional translation pairs incorrectly, for a precision of 15/17 = 88.2%.

3 Adding Language Identification

In the original STRAND architecture, additional filtering stages were envisaged as possible (see Figure 1), including such language-dependent processes as automatic language identification and content-based comparison of structurally aligned document segments using cognate matching or existing bilingual dictionaries. Such stages were initially avoided in order to keep the system simple, lightweight, and independent of linguistic resources. However, in conducting an error analysis for the preliminary evaluation, and further exploring the characteristics of parallel Web pages, it became evident that such processing would be important in addressing one large class of potential false positives. Figure 3 illustrates: it shows two documents that are generated by looking for "parent" pages containing hyperlinks to English and Spanish, which pass the structural filter with flying colors. The problem is potentially acute if the generation stage happens to yield up many pairs of pages that come from on-line catalogues or other Web sites having large numbers of pages with a conventional structure.

[Figure 3: Structurally similar pages that are not translations]
There is, of course, an obvious solution that will handle most such cases: making sure that the two pages are actually written in the languages they are supposed to be written in. In order to filter out candidate page pairs that fail this test, statistical language identification based on character n-grams was added to the system (Dunning, 1994). Although this does introduce a need for language-specific training data for the two languages under consideration, it is a very mild form of language dependence: Dunning and others have shown that when classifying strings on the order of hundreds or thousands of characters, which is typical of the non-markup text in Web pages, it is possible to discriminate languages with accuracy in the high 90% range for many or most language pairs given as little as 50k characters per language as training material.

For the language filtering stage of STRAND, the following criterion was adopted: given two documents d1 and d2 that are supposed to be in languages L1 and L2, keep the document pair iff Pr(L1|d1) > Pr(L2|d1) and Pr(L2|d2) > Pr(L1|d2). For English and Spanish, this translates as a simple requirement that the "English" page look more like English than Spanish, and that the "Spanish" page look more like Spanish than English. Language identification is performed on the plain-text versions of the pages. Character 5-gram models for languages under consideration are constructed using 100k characters of training data from the European Corpus Initiative (ECI), available from the Linguistic Data Consortium (LDC).
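A minimal sketch of this filter follows. The unigram-over-5-grams scoring with add-one smoothing is a deliberately crude stand-in for Dunning's (1994) model, and equal priors are assumed so that comparing posteriors reduces to comparing likelihoods; the stricter multi-language criterion adopted later in Section 4 is included as a variant.

```python
import math
from collections import Counter

def train_char_ngrams(text, n=5):
    """Character 5-gram counts from ~100k characters of training text."""
    counts = Counter(text[i:i+n] for i in range(len(text) - n + 1))
    return counts, sum(counts.values())

def log_prob(text, model, n=5):
    """Score a document under a language model (add-one smoothed)."""
    counts, total = model
    lp = 0.0
    for i in range(len(text) - n + 1):
        lp += math.log((counts[text[i:i+n]] + 1) / (total + len(counts) + 1))
    return lp

def keep_pair(d1, d2, model_l1, model_l2):
    """Pr(L1|d1) > Pr(L2|d1) and Pr(L2|d2) > Pr(L1|d2); with equal priors
    the posterior comparison reduces to a likelihood comparison."""
    return (log_prob(d1, model_l1) > log_prob(d1, model_l2) and
            log_prob(d2, model_l2) > log_prob(d2, model_l1))

# Stricter criterion from Section 4: d1 must score higher under L1 than
# under *every* other language in a wider set, and likewise d2 for L2.
def keep_pair_strict(d1, d2, models, l1='english', l2='french'):
    return (all(log_prob(d1, models[l1]) > log_prob(d1, m)
                for lang, m in models.items() if lang != l1) and
            all(log_prob(d2, models[l2]) > log_prob(d2, m)
                for lang, m in models.items() if lang != l2))
```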
For present purposes it was required neither that the document pair represent a perfect transla- tion (whatever that might be), nor even nec- essarily a good one: STR,AND was being tested not on its ability to determine translation qual- ity, which might or might not be a criterion for inclusion in a parallel corpus, but rather its abil- ity to facilitate the task of locating page pairs that one might reasonably include in a corpus undifferentiated by quality (or potentially post- filtered manually). The judges were permitted three responses: • Yes: translations of each other • No: not translations of each other • Unable to tell When computing evaluation measures, page pairs classified in the third category by a hu- man judge, for whatever reason, were excluded from consideration. Comparison N Pr(Agree) J1, J2: 106 0.85 0.70 J1, STRAND: 165 0.91 0.79 J2, STRAND: 113 0.81 0.61 J1 f3 J2, STRAND: 90 0.91 0.82 Table 1: English-Spanish evaluation Table 1 shows agreement measures between the two judges, between STRAND and each individual judge, and the agreement between STRAND and the intersection of the two judges' annotations -- that is, STRAND evaluated against only those cases where the two judges agreed, which are therefore the items we can re- gard with the highest confidence. The table also shows Cohen's to, an agreement measure that corrects for chance agreement (Carletta, 1996); the most important t¢ value in the table is the value of 0.7 for the two human judges, which can be interpreted as sufficiently high to indi- cate that the task is reasonably well defined. (As a rule of thumb, classification tasks with < 0.6 are generally thought of as suspect in this regard.) The value of N is the number of pairs that were included, after excluding those for which the human judgement in the compar- ison was undecided. Since the cases where the two judges agreed can be considered the most reliable, these were used as the basis for the computation of recall and precision. For this reason, and because the human-judged set included only a sample of the full set evaluated by STRAND, it was nec- essary to extrapolate from the judged (by both judges) set to the full set in order to compute recall/precision figures; hence these figures are reported as estimates. Precision is estimated as the proportion of pages judged GOOD by STRAND that were also judged to be good (i.e. "yes") by both judges -- this figure is 92.1% Recall is estimated as the number of pairs that should have been judged GOOD by STRAND (i.e. that recieved a "yes" from both judges) that STRAND indeed marked GOOD -- this fig- ure is 47.3%. These results can be read as saying that of ev- ery 10 document pairs included by STRAND in a parallel corpus acquired fully automatically from the Web, fewer than 1 pair on average was included in error. Equivalently, one could say that the resulting corpus contains only about 531 8% noise. Moreover, at least for the confidently judged cases, STRAND is in agreement with the combined human judgment more often than the human judges agree with each other. The recall figure indicates that for every true translation pair it accepts, STRAND must also incorrectly re- ject a true translation pair. Alternatively, this can be interpreted as saying that the filtering process has the system identifying about half of the pairs it could in principle have found given the candidates produced by the genera- tion stage. 
Error analysis suggests that recall could be increased (at a possible cost to pre- cision) by making structural filtering more in- telligent; for example, ignoring some types of markup (such as italics) when computing align- ments. However, I presume that if the number M of translation pairs on the Web is large, then half of M is also large. Therefore I focus on in- creasing the total yield by attempting to bring the number of generated candidate pairs closer to M, as described in the next section. 4 Scaling Up Candidate Generation The preliminary experiments and the new ex- periment reported in the previous section made use of the Altavista search engine to locate "par- ent" pages, pointing off to multiple language versions of the same text. However, the same basic mechanism is easily extended to locate "sibling" pages: cases where the page in one language contains a link directly to the trans- lated page in the other language. Exploration of the Web suggests that parent pages and sib- ling pages cover the major relationships between parallel translations on the Web. Some sites with bilingual text are arranged according to a third principle: they contain a completely sep- arate monolingual sub-tree for each language, with only the single top-level home page point- ing off to the root page of single-language ver- sion of the site. As a first step in increasing the number of generated candidate page pairs, STRAND was extended to permit both parent and sibling search criteria. Relating monolin- gual sub-trees is an issue for future work. In principle, using Altavista queries for the candidate generation stage should enable STRAND to locate every page pair in the A1- tavista index that meets the search criteria. This likely to be an upper bound on the can- Comparison N Pr(Agree) J1, J2: 267 0.98 0.95 J1, STRAND: 273 0.84 0.65 J2, STRAND: 315 0.85 0.63 J1 N J2, STRAND: 261 0.86 0.68 Table 2: English-French evaluation didates that can be obtained without building a Web crawler dedicated to the task, since one of Altavista's distinguishing features is the size of its index. In practice, however, the user inter- face for Altavista appears to limit the number of hits returned to about the first 1000. It was possible to break this barrier by using a feature of Altavista's "Advanced Search": including a range of dates in a query's selection criteria. Having already redesigned the STRAND gener- ation component to permit multiple queries (in order to allow search for both parent and sibling pages), each query in the query set was trans- formed into a set of mutually exclusive queries based on a one-day range; for example, one ver- sion of a query would restrict the result to pages last updated on 30 November 1998, the next 29 November 1998, and so forth. Although the solution granularity was not perfect -- searches for some days still bumped up against the 1000-hit maximum -- use of both parent and sibling queries with date-range re- stricted queries increased the productivity of the candidate generation component by an or- der of magnitude. The scaled-up system was run for English-French document pairs in late November, 1998, and the generation component produced 16763 candidate page pairs (with du- plicates removed), an 18-fold increase over the previous experiment. After eliminating 3153 page pairs that were either exact duplicates or irretrievable, STRAND'S structural filtering removed 9820 candidate page pairs, and the language identification component removed an- other 414. 
The remaining pairs identified as GOOD -- i.e. those that STRAND considered to be parallel translations -- comprise a paral- lel corpus of 3376 document pairs. A formal evaluation, conducted in the same fashion as the previous experiment, yields the agreement data in Table 2. Using the cases where the two human judgments agree as ground truth, precision of the system is esti- mated at 79.5%, and recall at 70.3%. 532 Comparison N Pr(Agree) i¢ J1, J2: 267 0.98 0.95 J1, STRAND: 273 0.88 0.70 J2, STRAND: 315 0.88 0.69 J1 N J2, STRAND: 261 0.90 0.75 Table 3: English-French evaluation with stricter language ID criterion A look at STRAND'S errors quickly identifies the major source of error as a shortcoming of the language identification module: its implicit assumption that every document is either in En- glish or in French. This assumption was vi- olated by a set of candidates in the test set, all from the same site, that pair Dutch pages with French. The language identification cri- terion adopted in the previous section requires only that the Dutch pages look more like En- glish than like French, which in most cases is true. This problem is easily resolved by train- ing the existing language identification compo- nent with a wider range of languages, and then adopting a stricter filtering criterion requiring that Pr(Englishldl ) > Pr(Lldl ) for every lan- guage L in that range, and that d2 meet the corresponding requirement for French. 6 Doing so leads to the results in Table 3. This translates into an estimated 100% pre- cision against 64.1% recall, with a yield of 2491 documents, approximately 1.5 million words per language as counted after removal of HTML markup. That is, with a reasonable though admittedly post-hoc revision of the language identification criterion, comparison with human subjects suggests the acquired corpus is non- trivial and essentially noise free, and moreover, that the system excludes only a third of the pages that should have been kept. Naturally this will need to be verified in a new evaluation on fresh data. SLanguage ID across a wide range of languages is not. difficult to obtain. E.g. see the 13-language set of the freely available CMU stochastic language iden- tifier (http://www.cs.cmu.edu/,,~dougb/ident.html), the 18-language set of the Sun Language ID Engine (ht tp: / /www.sunlabs.com /research /ila/ demo /index.html ), or the 31-language set of the XRCE Language Identifier (http://www.rxrc.xerox.com/research/ mltt/Tools/guesser.html). Here I used the language ID method of the previous section trained with profiles of Danish, Dutch, English, French, German, Italian, Norwegian, Portuguese, Spanish, and Swedish. 5 Conclusions This paper places acquisition of parallel text from the Web on solid empirical footing, mak- ing a number of contributions that go beyond the preliminary study. The system has been extended with automated language identifica- tion, and scaled up to the point where a non- trivial parallel corpus of English and French can be produced completely automatically from the World Wide Web. In the process, it was discov- ered that the most lightweight use of language identification, restricted to just the the language pair of interest, needed to be revised in favor of a strategy that includes identification over a wide range of languages. 
Rigorous evaluation using human judges suggests that the technique pro- duces an extremely clean corpus -- noise esti- mated at between 0 and 8% -- even without hu- man intervention, requiring no more resources per language than a relatively small sample of text used to train automatic language identifi- cation. Two directions for future work are appar- ent. First, experiments need to be done using languages that are less common on the Web. Likely first pairs to try include English-Korean, English-Italian, and English-Greek. Inspection of Web sites -- those with bilingual text identi- fied by STRAND and those without -- suggests that the strategy of using Altavista to generate candidate pairs could be improved upon signifi- cantly by adding a true Web crawler to "mine" sites where bilingual text is known to be avail- able, e.g. sites uncovered by a first pass of the system using the Altavista engine. I would con- jecture that for English-French there is an order of magnitude more bilingual text on the Web than that uncovered in this early stage of re- search. A second natural direction is the applica- tion of Web-based parallel text in applications such as lexical acquisition and cross-language information retrieval -- especially since a side- effect of the core STRAND algorithm is aligned "chunks", i.e. non-markup segments found to correspond to each other based on alignment of the markup. Preliminary experiments using even small amounts of these data suggest that standard techniques, such as cross-language lex- ical association, can uncover useful data. 533 References P. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, and P. Roossin. 1990. A statistical approach to ma- chine translation. Computational Linguistics, 16(2):79-85. Jean Carletta. 1996. Assessing agreement on classification tasks: the Kappa statis- tic. Computational Linguistics, 22(2):249- 254, June. Mark Davis and Ted Dunning. 1995. A TREC evaluation of query translation methods for multi-lingual text retrieval. In Fourth Text Retrieval Conference (TREC-4). NIST. Ted Dunning. 1994. Statistical identification of language. Computing Research Laboratory Technical Memo MCCS 94-273, New Mexico State University, Las Cruces, New Mexico. David A. Hull and Douglas W. Oard. 1997. Symposium on cross-language text and speech retrieval. Technical Report SS-97-04, American Association for Artificial Intelli- gence, Menlo Park, CA, March. Thomas K. Landauer and Michael L. Littman. 1990. Fully automatic cross-language docu- ment retrieval using latent semantic indexing. In Proceedings of the Sixth Annual Confer- ence of the UW Centre for the New Oxford English Dictionary and Text Research, pages pages 31-38, UW Centre for the New OED and Text Research, Waterloo, Ontario, Octo- ber. Douglas W. Oar& 1997. Cross-language text retrieval research in the USA. In Third DELOS Workshop. European Research Con- sortium for Informatics and Mathematics March. Philip Resnik. 1998. Parallel strands: A pre- liminary investigation into mining the web for bilingual text. In Proceedings of the Third Conference of the Association for Machine Translation in the Americas, AMTA-98, in Lecture Notes in Artificial Intelligence, 1529, Langhorne, PA, October 28-31. E. M. Voorhees and D. K. Harman. 1998. The seventh Text REtrieval Conference (TREC-7). NIST special publication, Galthersburg, Maryland, November 9-11. http ://trec. nist. gov/pubs, html. 534
Estimators for Stochastic "Unification-Based" Grammars

Mark Johnson, Cognitive and Linguistic Sciences, Brown University
Stuart Geman, Applied Mathematics, Brown University
Stephen Canon, Cognitive and Linguistic Sciences, Brown University
Zhiyi Chi, Dept. of Statistics, The University of Chicago
Stefan Riezler, Institut für Maschinelle Sprachverarbeitung, Universität Stuttgart

Abstract

Log-linear models provide a statistically sound framework for Stochastic "Unification-Based" Grammars (SUBGs) and stochastic versions of other kinds of grammars. We describe two computationally-tractable ways of estimating the parameters of such grammars from a training corpus of syntactic analyses, and apply these to estimate a stochastic version of Lexical-Functional Grammar.

1 Introduction

Probabilistic methods have revolutionized computational linguistics. They can provide a systematic treatment of preferences in parsing. Given a suitable estimation procedure, stochastic models can be "tuned" to reflect the properties of a corpus. On the other hand, "Unification-Based" Grammars (UBGs) can express a variety of linguistically-important syntactic and semantic constraints. However, developing Stochastic "Unification-based" Grammars (SUBGs) has not proved as straightforward as might be hoped.

[Footnote: This research was supported by the National Science Foundation (SBR-9720368), the US Army Research Office (DAAH04-96-BAA5), and the Office of Naval Research (N00014-97-1-0249).]

The simple "relative frequency" estimator for PCFGs yields the maximum likelihood parameter estimate, which is to say that it minimizes the Kullback-Leibler divergence between the training and estimated distributions. On the other hand, as Abney (1997) points out, the context-sensitive dependencies that "unification-based" constraints introduce render the relative frequency estimator suboptimal: in general it does not maximize the likelihood and it is inconsistent.

Abney (1997) proposes a Markov Random Field or log linear model for SUBGs, and the models described here are instances of Abney's general framework. However, the Monte-Carlo parameter estimation procedure that Abney proposes seems to be computationally impractical for reasonable-sized grammars. Sections 3 and 4 describe two new estimation procedures which are computationally tractable. Section 5 describes an experiment with a small LFG corpus provided to us by Xerox PARC. The log linear framework and the estimation procedures are extremely general, and they apply directly to stochastic versions of HPSG and other theories of grammar.

2 Features in SUBGs

We follow the statistical literature in using the term feature to refer to the properties that parameters are associated with (we use the word "attribute" to refer to the attributes or features of a UBG's feature structure). Let Ω be the set of all possible grammatical or well-formed analyses. Each feature f maps a syntactic analysis ω ∈ Ω to a real value f(ω). The form of a syntactic analysis depends on the underlying linguistic theory. For example, for a PCFG ω would be a parse tree, for a LFG ω would be a tuple consisting of (at least) a c-structure, an f-structure and a mapping from c-structure nodes to f-structure elements, and for a Chomskyian transformational grammar ω would be a derivation.

Log-linear models are models in which the log probability is a linear combination of feature values (plus a constant).
PCFGs, Gibbs distributions, Maximum-Entropy distributions and Markov Random Fields are all examples of log-linear models. A log-linear model associates each feature f_j with a real-valued parameter θ_j. A log-linear model with m features is one in which the likelihood P(ω) of an analysis ω is:

  P_θ(ω) = (1/Z_θ) e^{Σ_{j=1}^{m} θ_j f_j(ω)}

  Z_θ = Σ_{ω' ∈ Ω} e^{Σ_{j=1}^{m} θ_j f_j(ω')}

While the estimators described below make no assumptions about the range of the f_i, in the models considered here the value of each feature f_i(ω) is the number of times a particular structural arrangement or configuration occurs in the analysis ω, so f_i(ω) ranges over the natural numbers.

For example, the features of a PCFG are indexed by productions, i.e., the value f_i(ω) of feature f_i is the number of times the ith production is used in the derivation ω. This set of features induces a tree-structured dependency graph on the productions which is characteristic of Markov Branching Processes (Pearl, 1988; Frey, 1998). This tree structure has the important consequence that simple "relative-frequencies" yield maximum-likelihood estimates for the θ_i.

Extending a PCFG model by adding additional features not associated with productions will in general add additional dependencies, destroy the tree structure, and substantially complicate maximum likelihood estimation.

This is the situation for a SUBG, even if the features are production occurrences. The unification constraints create non-local dependencies among the productions and the dependency graph of a SUBG is usually not a tree. Consequently, maximum likelihood estimation is no longer a simple matter of computing relative frequencies. But the resulting estimation procedures (discussed in detail, shortly), albeit more complicated, have the virtue of applying to essentially arbitrary features -- of the production or non-production type. That is, since estimators capable of finding maximum-likelihood parameter estimates for production features in a SUBG will also find maximum-likelihood estimates for non-production features, there is no motivation for restricting features to be of the production type.

Linguistically there is no particular reason for assuming that productions are the best features to use in a stochastic language model. For example, the adjunct attachment ambiguity in (1) results in alternative syntactic structures which use the same productions the same number of times in each derivation, so a model with only production features would necessarily assign them the same likelihood. Thus models that use production features alone predict that there should not be a systematic preference for one of these analyses over the other, contrary to standard psycholinguistic results.

1.a Bill thought Hillary [VP [VP left] yesterday]
1.b Bill [VP [VP thought Hillary left] yesterday]

There are many different ways of choosing features for a SUBG, and each of these choices makes an empirical claim about possible distributions of sentences. Specifying the features of a SUBG is as much an empirical matter as specifying the grammar itself. For any given UBG there are a large (usually infinite) number of SUBGs that can be constructed from it, differing only in the features that each SUBG uses.

In addition to production features, the stochastic LFG models evaluated below used the following kinds of features, guided by the principles proposed by Hobbs and Bear (1995).
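Before turning to the specific features, note that while the global partition function Z_θ above is intractable for a realistic SUBG, the conditional distribution over the parses of a single yield only requires normalizing over that (enumerable) parse set, which is the computation the estimators in Section 3 rely on. A minimal sketch, with hypothetical names and feature vectors assumed precomputed:

```python
import math

def conditional_parse_probs(feature_vectors, theta):
    """P_theta(w | y): normalize exp(theta . f(w)) over the parses w of a
    single yield y. feature_vectors holds one m-dimensional list of
    feature values per parse; theta is the parameter vector."""
    scores = [sum(t * f for t, f in zip(theta, fv)) for fv in feature_vectors]
    top = max(scores)                    # subtract max for numerical stability
    exps = [math.exp(s - top) for s in scores]
    z = sum(exps)                        # per-yield normalizer, not global Z
    return [e / z for e in exps]
```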
Adjunct and argument features indicate adjunct and argument attachment respectively, and permit the model to capture a general argument attachment preference. In addition, there are specialized adjunct and argument features corresponding to each grammatical function used in LFG (e.g., SUBJ, OBJ, COMP, XCOMP, ADJUNCT, etc.). There are features indicating both high and low attachment (determined by the complexity of the phrase being attached to). Another feature indicates non-right-branching nonterminal nodes. There is a feature for non-parallel coordinate structures (where parallelism is measured in constituent structure terms). Each f-structure attribute-atomic value pair which appears in any feature structure is also used as a feature. We also use a number of features identifying syntactic structures that seem particularly important in these corpora, such as a feature identifying NPs that are dates (it seems that date interpretations of NPs are preferred). We would have liked to have included features concerning specific lexical items (to capture head-to-head dependencies), but we felt that our corpora were so small
Every step of the pro- posed procedure (corresponding to a single step of gradient ascent) requires a very large number of PCFG samples: samples must be found that correspond to well-formed SUBGs; many such samples are required to bring the Metropolis al- gorithm to (near) equilibrium; many samples are needed at equilibrium to properly estimate E0(Ij). The idea of a gradient ascent of the likelihood (2) is appealing--a simple calculation reveals that the likelihood is concave and therefore free of local maxima. But the gradient (in partic- ular, Ee(fj)) is intractable. This motivates an alternative strategy involving a data-based esti- mate of E0(fj): Ee(fj) = Ee(Ee(fj(w)ly(w))) (4) 1 = - ~ Ea(fj(w)ly(w) =yd(5) 72 i=l,...,n where y(w) is the yield belonging to the syn- tactic analysis w, and Yi = y(wi) is the yield belonging to the i'th sample in the training cor- pus. The point is that Ee(fj(w)ly(w ) = Yi) is gen- erally computable. In fact, if f~(y) is the set of well-formed syntactic structures that have yield y (i.e., the set of possible parses of the string y), then Eo(fj( o)ly( ,) = = Ew'Ef~(yi) f J(w') e~-~k=x ...... Ok$1,(w') Hence the calculation of the conditional expec- tations only involves summing over the possible syntactic analyses or parses f~(Yi) of the strings in the training corpus. While it is possible to construct UBGs for which the number of pos- sible parses is unmanageably high, for many grammars it is quite manageable to enumerate the set of possible parses and thereby directly evaluate Eo(f j(w)ly(w ) = Yi). Therefore, we propose replacing the gradient, (3), by fj(w) - ~ Eo(fj(w)lY(W) = Yi) (6) i=l,...,n and performing a gradient ascent. Of course (6) is no longer the gradient of the likelihood func- 537 tion, but fortunately it is (exactly) the gradient of (the log of) another criterion: PLo(~) = II Po(w = wily(w) = yi) (7) i=l,...,n Instead of maximizing the likelihood of the syn- tactic analyses over the training corpus, we maximize the conditional likelihood of these analyses given the observed yields. In our exper- iments, we have used a conjugate-gradient op- timization program adapted from the one pre- sented in Press et al. (1992). Regardless of the pragmatic (computational) motivation, one could perhaps argue that the conditional probabilities Po(wly ) are as use- ful (if not more useful) as the full probabili- ties P0(w), at least in those cases for which the ultimate goal is syntactic analysis. Berger et al. (1996) and Jelinek (1997) make this same point and arrive at the same estimator, albeit through a maximum entropy argument. The problem of estimating parameters for log-linear models is not new. It is especially dif- ficult in cases, such as ours, where a large sam- ple space makes the direct computation of ex- pectations infeasible. Many applications in spa- tial statistics, involving Markov random fields (MRF), are of this nature as well. In his seminal development of the MRF approach to spatial statistics, Besag introduced a "pseudo- likelihood" estimator to address these difficul- ties (Besag, 1974; Besag, 1975), and in fact our proposal here is an instance of his method. In general, the likelihood function is replaced by a more manageable product of conditional likeli- hoods (a pseudo-likelihood--hence the designa- tion PL0), which is then optimized over the pa- rameter vector, instead of the likelihood itself. 
In many cases, as in our case here, this substitution side steps much of the computational burden without sacrificing consistency (more on this shortly).

What are the asymptotics of optimizing a pseudo-likelihood function? Look first at the likelihood itself. For large n:

  (1/n) log L_θ(ω̃) = (1/n) log Π_{i=1}^{n} P_θ(ω_i)
                    = (1/n) Σ_{i=1}^{n} log P_θ(ω_i)
                    → ∫ P_{θ_0}(ω) log P_θ(ω) dω   (8)

where θ_0 is the true (and unknown) parameter vector. Up to a constant, (8) is the negative of the Kullback-Leibler divergence between the true and estimated distributions of syntactic analyses. As sample size grows, maximizing likelihood amounts to minimizing divergence. As for pseudo-likelihood:

  (1/n) log PL_θ(ω̃) = (1/n) Σ_{i=1}^{n} log P_θ(ω = ω_i | y(ω) = y_i)
                     → E_{θ_0}[ ∫ P_{θ_0}(ω|y) log P_θ(ω|y) dω ]

So that maximizing pseudo-likelihood (at large samples) amounts to minimizing the average (over yields) divergence between the true and estimated conditional distributions of analyses given yields.

Maximum likelihood estimation is consistent: under broad conditions the sequence of distributions P_θ̂_n, associated with the maximum likelihood estimator for θ_0 given the samples ω_1,...,ω_n, converges to P_{θ_0}. Pseudo-likelihood is also consistent, but in the present implementation it is consistent for the conditional distributions P_{θ_0}(ω|y(ω)) and not necessarily for the full distribution P_{θ_0} (see Chi (1998)).

It is not hard to see that pseudo-likelihood will not always correctly estimate P_{θ_0}. Suppose there is a feature f_i which depends only on yields: f_i(ω) = f_i(y(ω)). (Later we will refer to such features as pseudo-constant.) In this case, the derivative of PL_θ(ω̃) with respect to θ_i is zero; PL_θ(ω̃) contains no information about θ_i. In fact, in this case any value of θ_i gives the same conditional distribution P_θ(ω|y(ω)); θ_i is irrelevant to the problem of choosing good parses.

Despite the assurance of consistency, pseudo-likelihood estimation is prone to over fitting when a large number of features is matched against a modest-sized training corpus. One particularly troublesome manifestation of over fitting results from the existence of features which, relative to the training set, we might term "pseudo-maximal": Let us say that a feature f is pseudo-maximal for a yield y iff
Om, with diagonal covariance, and with standard deviation aj for 0j equal to 7 times the maximum value of fj found in any parse in the training corpus. (We experimented with other values for aj, but the choice seems to have little effect). Thus instead of maximizing the log pseudo-likelihood, we choose 0 to maxi- mize /3z 2 log PL0(~) - ~ 2avJ2 (9) j=l,...,m J 4 A maximum correct estimator for log linear models The pseudo-likelihood estimator described in the last section finds parameter values which maximize the conditional probabilities of the observed parses (syntactic analyses) given the observed sentences (yields) in the training cor- pus. One of the empirical evaluation measures we use in the next section measures the num- ber of correct parses selected from the set of all possible parses. This suggests another pos- sible objective function: choose ~ to maximize the number Co (~) of times the maximum likeli- hood parse (under 0) is in fact the correct parse, in the training corpus. Co(~) is a highly discontinuous function of 0, and most conventional optimization algorithms perform poorly on it. We had the most suc- cess with a slightly modified version of the sim- ulated annealing optimizer described in Press et al. (1992). This procedure is much more com- putationally intensive than the gradient-based pseudo-likelihood procedure. Its computational difficulty grows (and the quality of solutions de- grade) rapidly with the number of features. 5 Empirical evaluation Ron Kaplan and Hadar Shemtov at Xerox PArtC provided us with two LFG parsed corpora. The Verbmobil corpus contains appointment plan- ning dialogs, while the Homecentre corpus is drawn from Xerox printer documentation. Ta- ble 1 summarizes the basic properties of these corpora. These corpora contain packed c/f- structure representations (Maxwell III and Ka- plan, 1995) of the grammatical parses of each sentence with respect to Lexical-Functional grammars. The corpora also indicate which of these parses is in fact the correct parse (this information was manually entered). Because slightly different grammars were used for each corpus we chose not to combine the two corpora, although we used the set of features described in section 2 for both in the experiments described below. Table 2 describes the properties of the features used for each corpus. In addition to the two estimators described above we also present results from a baseline es- timator in which all parses are treated as equally likely (this corresponds to setting all the param- eters Oj to zero). We evaluated our estimators using held-out test corpus ~test. We used two evaluation measures. In an actual parsing application a SUBG might be used to identify the correct parse from the set of grammatical parses, so our first evaluation measure counts the number Co(~test) of sentences in the test corpus ~test whose maximum likelihood parse under the es- timated model 0 is actually the correct parse. If a sentence has 1 most likely parses (i.e., all 1 parses have the same conditional probability) and one of these parses is the correct parse, then we score 1/l for this sentence. The second evaluation measure is the pseudo- 539 Number of sentences Number of ambiguous sentences Number of parses of ambiguous sentences Verbmobil corpus Homecentre corpus 540 980 314 481 3245 3169 Table 1: Properties of the two corpora used to evaluate the estimators. 
                                       Verbmobil corpus   Homecentre corpus
  Number of features                   191                227
  Number of rule features              59                 57
  Number of pseudo-constant features   19                 41
  Number of pseudo-maximal features    12                 4
  Number of pseudo-minimal features    8                  5

Table 2: Properties of the features used in the stochastic LFG models. The numbers of pseudo-maximal and pseudo-minimal features do not include pseudo-constant features.

The second evaluation measure is the pseudo-likelihood itself, $\mathrm{PL}_{\hat\theta}(\tilde\omega_{test})$. The pseudo-likelihood of the test corpus is the likelihood of the correct parses given their yields, so pseudo-likelihood measures how much of the probability mass the model puts onto the correct analyses. This metric seems more relevant to applications where the system needs to estimate how likely it is that the correct analysis lies in a certain set of possible parses; e.g., ambiguity-preserving translation and human-assisted disambiguation. To make the numbers more manageable, we actually present the negative logarithm of the pseudo-likelihood rather than the pseudo-likelihood itself, so smaller is better.

Because of the small size of our corpora we evaluated our estimators using a 10-way cross-validation paradigm. We randomly assigned sentences of each corpus into 10 approximately equal-sized subcorpora, each of which was used in turn as the test corpus. We evaluated on each subcorpus the parameters that were estimated from the 9 remaining subcorpora that served as the training corpus for this run. The evaluation scores from each subcorpus were summed in order to provide the scores presented here.

Table 3 presents the results of the empirical evaluation. The superior performance of both estimators on the Verbmobil corpus probably reflects the fact that the non-rule features were designed to match both the grammar and content of that corpus. The pseudo-likelihood estimator performed better than the correct-parses estimator on both corpora under both evaluation metrics. There seems to be substantial overlearning in all these models; we routinely improved performance by discarding features. With a small number of features the correct-parses estimator typically scores better than the pseudo-likelihood estimator on the correct-parses evaluation metric, but the pseudo-likelihood estimator always scores better on the pseudo-likelihood evaluation metric.

6 Conclusion

This paper described a log-linear model for SUBGs and evaluated two estimators for such models. Because estimators that can estimate rule features for SUBGs can also estimate other kinds of features, there is no particular reason to limit attention to rule features in a SUBG. Indeed, the number and choice of features strongly influences the performance of the model. The estimated models are able to identify the correct parse from the set of all possible parses approximately 50% of the time.

We would have liked to introduce features corresponding to dependencies between lexical items. Log-linear models are well-suited for lexical dependencies, but because of the large number of such dependencies substantially larger corpora will probably be needed to estimate such models.¹
¹Alternatively, it may be possible to use a simpler non-SUBG model of lexical dependencies estimated from a much larger corpus as the reference distribution with respect to which the SUBG model is defined, as described in Jelinek (1997).

                               Verbmobil corpus              Homecentre corpus
                               C(w~test)   -log PL(w~test)   C(w~test)   -log PL(w~test)
  Baseline estimator           9.7%        533               15.2%       655
  Pseudo-likelihood estimator  58.7%       396               58.8%       583
  Correct-parses estimator     53.7%       469               53.2%       604

Table 3: An empirical evaluation of the estimators. $C(\tilde\omega_{test})$ is the number of maximum likelihood parses of the test corpus that were the correct parses, and $-\log \mathrm{PL}(\tilde\omega_{test})$ is the negative logarithm of the pseudo-likelihood of the test corpus.

However, there may be applications which can benefit from a model that performs even at this level. For example, in a machine-assisted translation system a model like ours could be used to order possible translations so that more likely alternatives are presented before less likely ones. In the ambiguity-preserving translation framework, a model like this one could be used to choose between sets of analyses whose ambiguities cannot be preserved in translation.

References

Steven P. Abney. 1997. Stochastic Attribute-Value Grammars. Computational Linguistics, 23(4):597-617.
Adam L. Berger, Vincent J. Della Pietra, and Stephen A. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.
J. Besag. 1974. Spatial interaction and the statistical analysis of lattice systems (with discussion). Journal of the Royal Statistical Society, Series D, 36:192-236.
J. Besag. 1975. Statistical analysis of non-lattice data. The Statistician, 24:179-195.
Zhiyi Chi. 1998. Probability Models for Complex Systems. Ph.D. thesis, Brown University.
Brendan J. Frey. 1998. Graphical Models for Machine Learning and Digital Communication. The MIT Press, Cambridge, Massachusetts.
Jerry R. Hobbs and John Bear. 1995. Two principles of parse preference. In Antonio Zampolli, Nicoletta Calzolari, and Martha Palmer, editors, Linguistica Computazionale: Current Issues in Computational Linguistics: In Honour of Don Walker, pages 503-512. Kluwer.
Frederick Jelinek. 1997. Statistical Methods for Speech Recognition. The MIT Press, Cambridge, Massachusetts.
John T. Maxwell III and Ronald M. Kaplan. 1995. A method for disjunctive constraint satisfaction. In Mary Dalrymple, Ronald M. Kaplan, John T. Maxwell III, and Annie Zaenen, editors, Formal Issues in Lexical-Functional Grammar, number 47 in CSLI Lecture Notes Series, chapter 14, pages 381-481. CSLI Publications.
Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Mateo, California.
William H. Press, Saul A. Teukolsky, William T. Vetterling, and Brian P. Flannery. 1992. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, Cambridge, England, 2nd edition.
Unifying Parallels

Claire Gardent
Computational Linguistics
University of the Saarland
Saarbrücken, Germany
claire@coli.uni-sb.de

Abstract

I show that the equational treatment of ellipsis proposed in (Dalrymple et al., 1991) can further be viewed as modeling the effect of parallelism on semantic interpretation. I illustrate this claim by showing that the account straightforwardly extends to a general treatment of sloppy identity on the one hand, and to deaccented foci on the other. I also briefly discuss the results obtained in a prototype implementation.

1 Introduction

(Dalrymple et al., 1991; Shieber et al., 1996) (henceforth DSP) present a treatment of VP-ellipsis which can be sketched as follows. An elliptical construction involves two phrases (usually clauses) which are in some sense structurally parallel. Whereas the first clause (we refer to it as the source) is semantically complete, the second (or target) clause is missing semantic material which can be recovered from the source.

Formally the analysis consists of two components: the representation of the overall discourse (i.e. source and target clauses) and an equation which permits recovering the missing semantics.

  Representation: $S \wedge R(T_1,\dots,T_n)$
  Equation: $R(S_1,\dots,S_n) = S$

$S$ is the semantic representation of the source, $S_1,\dots,S_n$ and $T_1,\dots,T_n$ are the semantic representations of the parallel elements in the source and target respectively, and $R$ represents the relation to be recovered. The equation is solved using Higher-Order Unification (HOU): given any solvable equation $M = N$, HOU yields a substitution of terms for free variables that makes $M$ and $N$ equal in the theory of $\alpha\beta\eta$-identity.

The following example illustrates the workings of this analysis:

(1) Jon likes Sarah and Peter does too.

In this case the semantic representation and the equation associated with the overall discourse are:

  Representation: $like(j,s) \wedge R(p)$
  Equation: $R(j) = like(j,s)$

For this equation, HOU yields the substitution¹ $\{R \leftarrow \lambda x.like(x,s)\}$ and as a result, the resolved semantics of the target is $\lambda x.like(x,s)(p) \equiv like(p,s)$.

¹As (Dalrymple et al., 1991) themselves observe, HOU also yields other, linguistically invalid, solutions. For a proposal on how to solve this over-generation problem, see (Gardent and Kohlhase, 1996b; Gardent et al., 1999).

The DSP approach has become very influential in computational linguistics for two main reasons. First, it accounts for a wide range of observations concerning the interaction of VP-ellipsis, quantification and anaphora. Second, it bases semantic construction on a tool, HOU, which is both theoretically and computationally attractive. Theoretically, HOU is well-defined and well-understood; this permits a clear understanding of both the limitations and the predictions of the approach. Computationally, it has both a declarative and a procedural interpretation; this supports both transparency and implementation.

In this paper, I start (section 2) by clarifying the relationship between DSP's proposal and the semantic representation of discourse anaphors. In sections 3 and 4, I then show that the HOU-treatment of ellipsis naturally extends to provide:

• A treatment of the interaction between parallelism and focus, and
• A general account of sloppy identity.

Section 6 concludes and compares the approach with related work.
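The way HOU solves a matching equation like $R(j) = like(j,s)$ can be illustrated with a small program. The sketch below is my own encoding and enumerates abstractions rather than running Huet's full procedure; it covers only second-order equations of the form $R(a) = t$ with a single parallel element.

```python
from itertools import combinations

# Object-level terms are tuples such as ('like', 'j', 's');
# constants (including the parallel element 'j') are strings.
def occurrences(term, sym, path=()):
    """Paths of all occurrences of the constant sym in term."""
    if term == sym:
        return [path]
    if isinstance(term, tuple):
        return [p for i, t in enumerate(term)
                  for p in occurrences(t, sym, path + (i,))]
    return []

def replace(term, path, new):
    if not path:
        return new
    i = path[0]
    return term[:i] + (replace(term[i], path[1:], new),) + term[i + 1:]

def solve(sym, rhs):
    """All solutions R = lam x. body of the equation R(sym) = rhs:
    abstract any subset of the occurrences of sym in rhs."""
    occs = occurrences(rhs, sym)
    sols = []
    for k in range(len(occs) + 1):
        for subset in combinations(occs, k):
            body = rhs
            for p in subset:
                body = replace(body, p, 'x')
            sols.append(('lam', 'x', body))
    return sols

for s in solve('j', ('like', 'j', 's')):
    print(s)
```

For $R(j) = like(j,s)$ this returns both $\lambda x.like(j,s)$ and $\lambda x.like(x,s)$; the first is one of the linguistically invalid solutions mentioned in footnote 1.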
2 Representing discourse anaphors

The main tenet of the DSP approach is that interpreting an elliptical clause involves recovering a relation from the source clause and applying it to the target elements. This leaves open the question of how this procedure relates to sentence-level semantic construction and in particular to the semantic representation of VP-ellipsis. Consider for instance the following example:

(2) Jon runs but Peter doesn't.

Under the DSP analysis, the unresolved semantics of (2) is (3)a and equation (3)b is set up. HOU yields the solution given in (3)c and as a result, the semantics of the target clause Peter doesn't is (3)d.

(3) a. $pos(run(jon)) \wedge R(neg)(peter)$
    b. $R(pos)(jon) = pos(run(jon))$
    c. $\{R \leftarrow \lambda O\lambda x.O(run(x))\}$
    d. $\lambda O\lambda x.O(run(x))(neg)(peter) \equiv neg(run(peter))$

It is unclear how the semantic representation (3)a comes about. Under a Montague-type approach where syntactic categories map onto semantic types, the semantic type of a VP-ellipsis is ⟨et⟩, the type of properties of individuals, i.e. unary relations, not binary ones. And under a standard treatment of subject NPs and auxiliaries, one would expect the representation of the target clause to be $neg(P(peter))$, not $P(neg)(peter)$. There is thus a discrepancy between the representation DSP posit for the target and the semantics generated by a standard, Montague-style semantic construction module.

Furthermore, although DSP only apply their analysis to VP-ellipsis, they have in mind a much broader range of applications:

  [...] many other elliptical phenomena and related phenomena subject to multiple readings akin to the strict and sloppy readings discussed here may be analysed using the same techniques (Dalrymple et al., 1991, page 450).

In particular, one would expect the HOU-analysis to support a general theory of sloppy identity. For instance, one would expect it to account for the sloppy interpretation (I'll kiss you if you don't want me to kiss you) of (4).

(4) I'll [help you]₁ if you [want me to₁]₂. I'll kiss you if you don't₂.

But for such cases, the discrepancy between the semantic representation generated by semantic construction and the DSP representation of the target is even more obvious. Assuming help and kiss are the parallel elements, the equation generated by the DSP proposal is:

$R(h) = wt(you, h(i, you)) \rightarrow h(i, you)$

and accordingly, the semantic representation of the target is $\neg R(k)$, which is in stark contrast with what one could reasonably expect from a standard semantic construction process, namely $\neg P(you) \rightarrow k(i, you)$.

What is missing is a constraint which states that the representation of the target must unify with the semantic representation generated by the semantic construction component. If we integrate this constraint into the DSP account, we get the following representations and constraints:

(5) Representation: $S \wedge R(T_1,\dots,T_n)$
    Equations: $R(S_1,\dots,S_n) = S$
               $R(T_1,\dots,T_n) = T$

where $T$ is the semantic representation generated for the target by the semantic construction module. The second equation requires that this representation $T$ unifies with the representation of the target postulated by DSP.

With this clarification in mind, example (2) is handled as follows. The semantic representation of (2) is (6)a, where the semantic representation of the target clause is the representation one would expect from a standard Montague-style semantic construction process. The equations are as given in (6)b-c, where $C$ represents the semantics shared by the parallel structures and $P$ the VP-ellipsis. HOU then yields the solution in (6)d: the value of $C$ is that relation shared by the two structures, i.e. a binary relation as in DSP. However the value of $P$ (the semantic representation of the VPE) is a property, as befits a verbal phrase.

(6) a. $pos(run(jon)) \wedge neg(P(peter))$
    b. $C(pos)(jon) = pos(run(jon))$
    c. $C(neg)(peter) = neg(P(peter))$
    d. $\{C \leftarrow \lambda O\lambda x.O(run(x)),\ P \leftarrow \lambda x.run(x)\}$
    e. $\lambda O\lambda x.O(run(x))(neg)(peter) \equiv neg(run(peter))$
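That (6)d indeed solves both equations can be checked mechanically. In the following sketch, object-level terms are tuples and λ-terms are Python functions (an encoding of my own); the asserts verify (6)b-c and read off the property denoted by the ellipsis.

```python
# Object-level term builders as tuples; lambda-terms as Python functions.
run = lambda x: ('run', x)
pos = lambda p: ('pos', p)
neg = lambda p: ('neg', p)

# Candidate solution for the source equation (6)b: lam O lam x. O(run(x))
C = lambda O: lambda x: O(run(x))

assert C(pos)('jon') == pos(run('jon'))    # (6)b holds

# Applying C on the target side fixes the ellipsis meaning:
target = C(neg)('peter')
assert target == neg(run('peter'))          # = neg(run(peter))

# Reading off P from neg(P(peter)) = neg(run(peter)), as in (6)d:
P = lambda x: run(x)
assert neg(P('peter')) == target
print(target)                                # ('neg', ('run', 'peter'))
```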
In sum, provided one equation is added to the DSP system, the relation between the HOU-approach to VP-ellipsis and standard Montague-style semantic construction becomes transparent. Furthermore it also becomes immediately obvious that the DSP approach does indeed generalise to a much wider range of data than just VP-ellipsis. The key point is that there is now not just one, but several, free variables coming into play; and that although the free variable $C$ always represents the semantics shared by two parallel structures, the free variable(s) occurring in the semantic representation of the target may represent any kind of unresolved discourse anaphors, not just ellipsis. Consider the following example for instance:

(7) Jon₁ took his₁ wife to the station. No, BILL took his wife to the station.

There is no ellipsis in the target, yet the discourse is ambiguous between a strict and a sloppy interpretation² and one would expect the HOU-analysis to extend to such cases. Which indeed is the case. The analysis goes as follows.

²I assume that in the target took his wife to the station is deaccented. In such cases, it is clear that the ambiguity of his is restricted by parallelism, i.e. is a sloppy/strict ambiguity rather than just an ambiguity in the choice of antecedent.

As for ellipsis, anaphors in the source are resolved, whereas discourse anaphors in the target are represented using free variables (alternatively, we could resolve them first and let HOU filter unsuitable resolutions out). Specifically, the target pronoun his is represented by the free variable $X$ and therefore we have the following representation and equations:

  Representation: $tk(j, wife\_of(j), s) \wedge tk(b, wife\_of(X), s)$
  Equations: $C(j) = tk(j, wife\_of(j), s)$
             $C(b) = tk(b, wife\_of(X), s)$

HOU yields inter alia two solutions for these equations, the first yielding a strict and the second a sloppy reading:

  $\{C \leftarrow \lambda z.tk(z, wife\_of(j), s),\ X \leftarrow j\}$
  $\{C \leftarrow \lambda z.tk(z, wife\_of(z), s),\ X \leftarrow b\}$

Thus the HOU-approach captures cases of sloppy identity which do not involve ellipsis. More generally, the HOU-approach can be viewed as modeling the effect of parallelism on interpretation. In what follows, I substantiate this claim by considering two such cases: first, the interaction of parallelism and sloppy identity and second, the interaction of parallelism and focus.

3 Parallelism and Focus

Since (Jackendoff, 1972), it is widely agreed that focus can affect the truth-conditions of a sentence³. The following examples illustrate this, where upper-case letters indicate prosodic prominence and thereby focus.

(8) a. Jon only introduced MARY to Sue.
    b. Jon only introduced Mary to SUE.

Whereas (8a) says that the only person introduced by Jon to Sue is Mary, (8b) states that the only person Jon introduced Mary to is Sue.

³The term focus has been put to many different uses. Here I follow (Jackendoff, 1972) and use it to refer to the semantics of that part of the sentence which is (or contains an element that is) prosodically prominent.
To capture this effect of focus on semantics, a focus value⁴ is used which in essence is the set of semantic objects obtained by making an appropriate substitution in the focus position. For instance, in (Gardent and Kohlhase, 1996a), the focus value of (8a) is defined with the help of the equation:

  Focus Value Equation: $Sem = X(F)$

where $Sem$ is the semantics of the sentence without the focus operator (e.g. $intro(j,m,s)$ for (8)), $F$ represents the focus and $X$ helps determine the value of the focus variable (written $\overline{X}$) as follows:

Definition 3.1 (Focus value) Let $X = \lambda x.\phi$ be the value defined by the focus value equation and $T$ be the type of $x$; then the focus value derivable from $X$, written $\overline{X}$, is $\{\phi \mid x \in \mathit{wff}_T\}$.

⁴This focus value is defined and termed differently by different authors: Jackendoff (Jackendoff, 1972) calls it the presuppositional set, Rooth (Rooth, 1992b) the Alternative Set and Krifka (Krifka, 1992) the Ground.

Given (8a), the focus value equation is thus (9)a with solution (9)b; the focus value derived from it is (9)c and the semantics of (8a) is (9)d, which given (9)c is equivalent to (9)e.

(9) a. $intro(j,m,s) = X(m)$
    b. $\{X \leftarrow \lambda x.intro(j,x,s)\}$
    c. $\overline{X} = \{intro(j,x,s) \mid x \in \mathit{wff}\}$
    d. $\forall P[P \in \overline{X} \wedge P \rightarrow P = intro(j,m,s)]$
    e. $\forall P[P \in \{intro(j,x,s) \mid x \in \mathit{wff}\} \wedge P \rightarrow P = intro(j,m,s)]$

In English: the only proposition of the form John introduced x to Sue that is true is the proposition John introduced Mary to Sue.
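Definition 3.1 and the truth conditions in (9)d can be made concrete by restricting wff to a small finite domain. The sketch below (domain, facts and tuple encoding are all invented for illustration) builds the focus value of (8a) and evaluates the only-condition against a toy model.

```python
# A finite toy domain and a toy model of the true facts; both invented.
domain = ['m', 's', 'b']
facts = {('intro', 'j', 'm', 's')}          # Jon introduced Mary to Sue

# Focus value of (8a): abstract the focus position (definition 3.1).
X = lambda x: ('intro', 'j', x, 's')        # X = lam x. intro(j, x, s)
Xbar = [X(d) for d in domain]               # {intro(j, x, s) | x in domain}

# (9)d: every true proposition in Xbar equals intro(j, m, s).
only_holds = all(p == ('intro', 'j', 'm', 's')
                 for p in Xbar if p in facts)
print(only_holds)                           # True for this model
```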
Now consider the following example:

(10) a. Jon only likes MARY.
     b. No, PETER only likes Mary.

In a deaccenting context, the focus might be part of the deaccented material and therefore not prosodically prominent. Thus in (10)b, the semantic focus Mary is deaccented because of the partial repetition of the previous utterance. Because they all use focus to determine the focus value and thereby the semantics of sentences such as (8a), focus deaccenting is a challenge for most theories of focus. So for instance, in the HOU-analysis of both (Pulman, 1997) and (Gardent and Kohlhase, 1996a), the right-hand side of the focus equation for (10b) becomes $FV(F)$ where neither $FV$ (the focus value) nor $F$ (the focus) are known. As a result, the equation is untyped and cannot be solved by Huet's algorithm (Huet, 1976).

The solution is simple: if there is no focus, there is no focus equation. After all, it is the presence of a focus which triggers the formation of a focus value. But how do we determine the interpretation of (10b)? Without a focus equation, the focus value remains unspecified and the representation of (10b) is:

$\forall P[P \in FV \wedge P \rightarrow P = like(p,m)]$

which is underspecified with respect to $FV$.

(Rooth, 1992a) convincingly argues that deaccenting and VP-ellipsis are constrained by the same semantic redundancy constraint (and that VP-ellipsis is additionally subject to a syntactic constraint on the reconstructed VP). Moreover, (Gardent, 1999) shows that the equational constraints defined in (5) adequately characterise the redundancy constraint which holds for both VPE and deaccenting. Now example (10b) clearly is a case of deaccenting: because it repeats the VP of (10a), the VP only likes Mary in (10b) is deaccented. Hence the redundancy constraint holding for both VPE and deaccenting and encoded in (5) applies⁵:

$C(j) = \forall P[P \in \{like(j,x)\} \wedge P \rightarrow P = like(j,m)]$
$C(p) = \forall P[P \in FV \wedge P \rightarrow P = like(p,m)]$

⁵For lack of space, I shorten $\{like(j,x) \mid x \in \mathit{wff}\}$ to $\{like(j,x)\}$.

These equations are solved by the following substitution:

$\{C \leftarrow \lambda z.\forall P[P \in \{like(z,x)\} \wedge P \rightarrow P = like(z,m)],\ FV \leftarrow \{like(p,x)\}\}$

so that the interpretation of (10b) is correctly fixed to:

$\forall P[P \in \{like(p,x)\} \wedge P \rightarrow P = like(p,m)]$

Thus, the HOU approach to deaccenting makes appropriate predictions about the interpretation of "second occurrence expressions"⁶ (SOEs) such as (10b). It predicts that for these cases, the focus value of the source is inherited by the target through unification. Intuitively, a sort of "parallelism constraint" is at work which equates the interpretation of the repeated material in an SOE with that of its source counterpart.

⁶The terminology is borrowed from (Krifka, 1995) and refers to expressions which partially or totally repeat a previous expression.

Such an approach is in line with (Krifka, 1992), which argues that the repeated material in an SOE is an anaphor resolving to its source counterpart. It is also partially in line with Rooth's account in that it similarly posits an initially underspecified semantics for the target; it is more specific than Rooth's however, as it lifts this underspecification by unification. The difference is best illustrated by an example:

(11) ?? Jon only likes SARAH. No, PETER only likes Mary.

Provided only likes Mary is deaccented, this discourse is ill-formed (unless the second speaker knows Sarah and Mary to denote the same individual). Under the HOU-analysis this falls out of the fact that the redundancy constraint cannot be satisfied, as there is no unifying substitution for the following equations:

$C(j) = \forall P[P \in \{like(j,x)\} \wedge P \rightarrow P = like(j,s)]$
$C(p) = \forall P[P \in FV \wedge P \rightarrow P = like(p,m)]$

In contrast, Rooth's approach does not capture the ill-formedness of (11) as it places no constraint on the interpretation of PETER only likes Mary other than that given by the compositional semantics of the sentence, namely:

$\forall P[P \in FV \wedge P \rightarrow P = like(p,m)]$

where $FV$ represents the quantification domain of only and is pragmatically determined. Without going into the details of Rooth's treatment of focus, let it suffice to say that the first clause does actually provide the appropriate antecedent for this pragmatic anaphor, so that despite its ill-formedness, (11) is assigned a full-fledged interpretation.

Nonetheless there are cases where pragmatic liberalism is necessary. Thus consider Rooth's notorious example:

(12) People who GROW rice usually only EAT rice.

This is understood to mean that people who grow rice usually eat nothing else than rice. But as the focus (RICE) and focus value ($\lambda x.eat(pwgr, x)$) that need to be inherited by the target VP only EAT rice are simply not available from the previous context, the redundancy constraint on deaccenting fails to predict this and hence fails to further specify the underspecified meaning of (12). A related case in point is:

(13) We are supposed to TAKE maths and semantics, but I only LIKE semantics.

Again the focus on LIKE is a contrastive focus which does not contribute information on the quantification domain of only.
In other words, although the intended meaning of the but-clause is of all the subjects that I like, the only subject I like is semantics, the given prosodic focus on LIKE fails to establish the appropriate set of alternatives, namely: all the subjects that I like. Such cases clearly involve inference, possibly a reasoning along the following lines: the but conjunction indicates an expectation denial. The expectation is that if x takes maths and semantics then x likes maths and semantics. This expectation is thus made salient by the discourse context and provides in fact the set of alternatives necessary to interpret only, namely the set $\{like(i, sem), like(i, maths)\}$. To be more specific, consider the representation of I only like semantics:

$\forall P[P \in FV \wedge P \rightarrow P = like(i, sem)]$

By resolving $FV$ to the set of propositions $\{like(i, sem), like(i, maths)\}$, we get the appropriate meaning, namely:

$\forall P[P \in \{like(i, sem), like(i, maths)\} \wedge P \rightarrow P = like(i, sem)]$

Following (Rooth, 1992b), I assume that in such cases, the quantification domains of both usually and only are pragmatically determined. The redundancy constraint on deaccenting still holds but it plays no role in determining these particular quantification domains.

4 Sloppy identity

As we saw in section 2, an important property of DSP's analysis is that it predicts sloppy/strict ambiguity for VP-ellipsis, whereby the multiple solutions generated by HOU capture the multiple readings allowed by natural language. As (Hobbs and Kehler, 1997; Hardt, 1996) have shown however, sloppy identity is not necessarily linked to VP-ellipsis. Essentially, it can occur whenever, in a parallel configuration, the antecedent of an anaphor/ellipsis itself contains an anaphor/ellipsis whose antecedent is a parallel element. Here are some examples.

(14) Jon₁ [took his₁ wife to the station]₂. No, BILL [took his wife to the station]₂. (Bill took Bill's wife to the station)

(15) Jon₁ spent [his₁ paycheck]₂ but Peter saved it₂. (Peter saved Peter's paycheck)

(16) I'll [help you]₁ if you [want me to₁]₂. I'll kiss you if you don't₂. (I'll kiss you if you don't want me to kiss you)

Because the HOU-analysis reconstructs the semantics common to source and target rather than (solely) the semantics of VP-ellipses, it can capture the full range of sloppy/strict ambiguity illustrated above (and, as (Gardent, 1997) shows, some of the additional examples listed in (Hobbs and Kehler, 1997)). Consider for instance example (16). The ellipsis in the target has an antecedent want me to which itself contains a VPE whose antecedent (help you) has a parallel counterpart in the target. As a result, the target ellipsis has a sloppy interpretation as well as a strict one: it can either denote the same property as its antecedent VP want me to help you, or its sloppy copy, namely want me to kiss you.

The point to note is that in this case, sloppy interpretation results from a parallelism between VPs, not, as is more usual, from a parallelism between NPs. This poses no particular problem for the HOU-analysis.
As usual, the parallel elements (help and kiss) determine the equational constraints so that we have the following equalities⁷:

$C(h) = wt(you, h(i, you)) \rightarrow h(i, you)$
$C(k) = P(you) \rightarrow k(i, you)$

⁷For simplicity, I've omitted polarity information.

Resolution of the first equation yields $\lambda R.wt(you, R(i, you)) \rightarrow R(i, you)$ as a possible value for $C$ and consequently, the value for $C(k)$ is:

$C(k) = wt(you, k(i, you)) \rightarrow k(i, you)$

Therefore a possible substitution for $P$ is $\{P \leftarrow \lambda x.wt(x, k(i, x))\}$ and the VPE occurring in the target can indeed be assigned the sloppy interpretation x want me to kiss x.

Now consider example (15). The pronoun it occurring in the second clause has a sloppy interpretation in that it can be interpreted as meaning Peter's paycheck, rather than Jon's paycheck. In the literature such pronouns are known as paycheck pronouns and are treated as introducing a definite whose restriction is pragmatically given (cf. e.g. (Cooper, 1979)). We can capture this intuition by assigning paycheck pronouns the following representation:

$Pro \rightsquigarrow \lambda Q.\exists x[P(x) \wedge \forall y[P(y) \rightarrow y = x] \wedge Q(x)]$ with $P \in \mathit{wff}_{(e \rightarrow t)}$

That is, paycheck pronouns are treated as definites whose restriction ($P$) is a variable of type $(e \rightarrow t)$. Under this assumption, (15) is assigned the following equations⁸:

$C(j, sp) = \exists_1 x[pc\_of(x, j) \wedge sp(j, x)]$
$C(p, sa) = \exists_1 x[P(x) \wedge sa(p, x)]$

⁸I abbreviate $\lambda Q.\exists x[P(x) \wedge \forall y[P(y) \rightarrow y = x] \wedge Q(x)]$ to $\lambda Q.\exists_1 x[P(x) \wedge Q(x)]$.

Resolving the first equation yields $\lambda y.\lambda O.\exists_1 x[pc\_of(x, y) \wedge O(y, x)]$ as a value for $C$, and therefore we have that:

$C(p, sa) = \exists_1 x[pc\_of(x, p) \wedge sa(p, x)]$
$\{P \leftarrow \lambda y.pc\_of(y, p)\}$

That is, the target clause is correctly assigned the sloppy interpretation: Peter saved Peter's paycheck.

Thus the HOU-treatment of parallelism can account for both paycheck pronouns and examples such as (16). Though lack of space prevents showing how the other cases of sloppy identity are handled, the general point should be clear: because the HOU-approach associates sloppy identity with parallelism rather than with VP-ellipsis, it can capture a fairly wide range of data providing some reasonable assumptions are made about the representations of ellipses and anaphors.

5 Implementation

It is known that for the typed lambda-calculus, HOU is only semi-decidable, so that the unification algorithm need not terminate for unsolvable problems. Fortunately, the class of equations that is needed for semantic construction is a very restricted class for which much better results hold. In particular, the fact that free variables only occur on the left-hand side of our equations reduces the problem of finding solutions to higher-order matching, a problem which is decidable for the subclass of third-order formulae (Dowek, 1992).

These theoretical considerations have been put into practice in the research prototype CHoLI, a system which permits testing the HOU-approach to semantic construction. Briefly, the system can: parse a sequence of sentences and return its semantic representation, interactively build the relevant equations (parallel elements are entered by the user and the corresponding equations are computed by the system) and solve them by means of HOU.

The test-suite includes approximately one hundred examples and covers the following phenomena:

• VP-ellipsis and its interaction with anaphora, proper nouns (e.g., Mary, Paul) and control verbs (i.e., verbs such as try whose subject "controls", i.e. is co-referential with, some other element in the verb complement).
• Deaccenting and its interaction with anaphora, VP-ellipsis, context and sloppy/strict ambiguity.
• Focus with varying and ambiguous foci. It is currently being extended to sentences with multiple foci and the interaction with deaccenting.

As mentioned in section 2, the HOU-approach sometimes over-generates and yields solutions which are linguistically invalid. However, as (Gardent et al., 1999) shows, this shortcoming can be remedied using Higher-Order Colored Unification (HOCU) rather than straight HOU. In CHoLI both an HOU and an HOCU algorithm can be used and all examples have been tested with and without colors. In all cases, colors cut down the number of generated readings to exactly those readings which are linguistically acceptable.

6 Conclusion

It should by now be clear that the DSP-treatment of ellipsis is better seen as a treatment of the effect of semantic parallelism: the equations constrain the interpretation of parallel structures and as a side effect, a number of linguistic phenomena are predicted, e.g. VPE-resolution, sloppy/strict ambiguity and focus value inheritance in the case of SOEs.

There are a number of proposals (Hobbs and Kehler, 1997; Prüst et al., 1994; Asher, 1993; Asher et al., 1997) adopting a similar approach to parallelism and semantics, of which the most worked out is undoubtedly (Hobbs and Kehler, 1997). (Hobbs and Kehler, 1997) presents a general theory of parallelism and shows that it provides both a fine-grained analysis of the interaction between VP-ellipsis and pronominal anaphora and a general account of sloppy identity. The approach is couched in the "interpretation as abduction" framework and consists in proving by abduction that two properties (i.e. sentence or clause meanings) are similar. Because it interleaves a co-recursion on semantic structures with full inferencing (to prove similarity between semantic entities), Hobbs and Kehler's approach is more powerful than the HOU-approach, which is based on a strictly syntactic operation (no semantic reasoning occurs). Furthermore, because it can represent coreferences explicitly, it achieves a better account of the interaction between VP-ellipsis and anaphora (in particular, it accounts for the infamous "missing reading puzzles" of ellipsis (Fiengo and May, 1994)).

On the other hand, the equational approach provided by the HOU-treatment of parallelism naturally supports the interaction of distinct phenomena. We have seen that it correctly captures the interaction of parallelism and focus. Further afield, (Niehren et al., 1997) shows that context unification supports a purely equational treatment of the interaction between ellipsis and quantification, whereas (Shieber et al., 1996) presents a very extensive HOU-based treatment of the interaction between scope and ellipsis.

Acknowledgments

I wish to thank the ACL anonymous referees for some valuable comments; and Stephan Thater, Ralf Debusman and Karsten Konrad for their implementation of CHoLI. The research presented in this paper was funded by the DFG in SFB-378, Project C2 (LISA).

References

Nicholas Asher. 1993. Reference to Abstract Objects in Discourse. Kluwer, Dordrecht.
Nicholas Asher, Daniel Hardt, and Joan Busquets. 1997. Discourse parallelism, scope and ellipsis. In Proceedings of SALT'97, Palo Alto.
Robin Cooper. 1979. The interpretation of pronouns. In F. Heny and H.S. Schnelle, editors, Syntax and Semantics, number 10, pages 61-93.
Mary Dalrymple, Stuart Shieber, and Fernando Pereira. 1991. Ellipsis and higher-order unification. Linguistics & Philosophy, 14:399-452.
Gilles Dowek. 1992. Third order matching is decidable. In Proceedings of the 7th Annual IEEE Symposium on Logic in Computer Science (LICS-7), pages 2-10. IEEE Computer Society Press.
Robert Fiengo and Robert May. 1994. Indices and Identity. MIT Press, Cambridge.
Claire Gardent and Michael Kohlhase. 1996a. Focus and higher-order unification. In Proceedings of COLING'96, Copenhagen.
Claire Gardent and Michael Kohlhase. 1996b. Higher-order coloured unification and natural language semantics. In Proceedings of ACL'96, Santa Cruz.
Claire Gardent, Michael Kohlhase, and Karsten Konrad. 1999. Higher-order coloured unification: a linguistic application. Technique et Science Informatiques, 18(2):181-209.
Claire Gardent. 1997. Sloppy identity. In Christian Retoré, editor, Logical Aspects of Computational Linguistics, pages 188-207. Springer.
Claire Gardent. 1999. Deaccenting and higher-order unification. University of the Saarland. Submitted for publication.
Daniel Hardt. 1996. Dynamic interpretation of VP ellipsis. To appear in Linguistics and Philosophy.
J. Hobbs and A. Kehler. 1997. A theory of parallelism and the case of VP ellipsis. In Proceedings of ACL, Madrid.
Gérard P. Huet. 1976. Résolution d'Équations dans des Langages d'ordre 1, 2, ..., ω. Thèse d'État, Université de Paris VII.
Ray S. Jackendoff. 1972. Semantic Interpretation in Generative Grammar. The MIT Press.
Manfred Krifka. 1992. A compositional semantics for multiple focus constructions. In Joachim Jacobs, editor, Informationsstruktur und Grammatik. Heidelberg. Sonderheft 4.
Manfred Krifka. 1995. Focus and/or context: A second look at second occurrence expressions. Unpublished ms., University of Texas, Austin, February.
Joachim Niehren, Manfred Pinkal, and Peter Ruhrberg. 1997. A uniform approach to underspecification and parallelism. In Proceedings of ACL'97, pages 410-417, Madrid, Spain.
H. Prüst, R. Scha, and M. van den Berg. 1994. Discourse grammar and verb phrase anaphora. Linguistics & Philosophy, 17:261-327.
Steve Pulman. 1997. Higher order unification and the interpretation of focus. Linguistics & Philosophy, 20:73-115.
Mats Rooth. 1992a. Ellipsis redundancy and reduction redundancy. In Steve Berman and Arild Hestvik, editors, Proceedings of the Stuttgart Ellipsis Workshop, University of Stuttgart.
Mats Rooth. 1992b. A theory of focus interpretation. Natural Language Semantics, pages 75-116.
Stuart Shieber, Fernando Pereira, and Mary Dalrymple. 1996. Interaction of scope and ellipsis. Linguistics & Philosophy, 19:527-552.
Relating Probabilistic Grammars and Automata

Steven Abney  David McAllester  Fernando Pereira
AT&T Labs-Research
180 Park Ave
Florham Park NJ 07932
{abney, dmac, pereira}@research.att.com

Abstract

Both probabilistic context-free grammars (PCFGs) and shift-reduce probabilistic pushdown automata (PPDAs) have been used for language modeling and maximum likelihood parsing. We investigate the precise relationship between these two formalisms, showing that, while they define the same classes of probabilistic languages, they appear to impose different inductive biases.

1 Introduction

Current work in stochastic language models and maximum likelihood parsers falls into two main approaches. The first approach (Collins, 1998; Charniak, 1997) uses directly the definition of stochastic grammar, defining the probability of a parse tree as the probability that a certain top-down stochastic generative process produces that tree. The second approach (Briscoe and Carroll, 1993; Black et al., 1992; Magerman, 1994; Ratnaparkhi, 1997; Chelba and Jelinek, 1998) defines the probability of a parse tree as the probability that a certain shift-reduce stochastic parsing automaton outputs that tree. These two approaches correspond to the classical notions of context-free grammars and nondeterministic pushdown automata respectively. It is well known that these two classical formalisms define the same language class. In this paper, we show that probabilistic context-free grammars (PCFGs) and probabilistic pushdown automata (PPDAs) define the same class of distributions on strings, thus extending the classical result to the stochastic case. We also touch on the perhaps more interesting question of whether PCFGs and shift-reduce parsing models have the same inductive bias with respect to the automatic learning of model parameters from data. Though we cannot provide a definitive answer, the constructions we use to answer the equivalence question involve blow-ups in the number of parameters in both directions, suggesting that the two models impose different inductive biases.

We are concerned here with probabilistic shift-reduce parsing models that define probability distributions over word sequences, and in particular the model of Chelba and Jelinek (1998). Most other probabilistic shift-reduce parsing models (Briscoe and Carroll, 1993; Black et al., 1992; Magerman, 1994; Ratnaparkhi, 1997) give only the conditional probability of a parse tree given a word sequence. Collins (1998) has argued that those models fail to capture the appropriate dependency relations of natural language. Furthermore, they are not directly comparable to PCFGs, which define probability distributions over word sequences.

To make the discussion somewhat more concrete, we now present a simplified version of the Chelba-Jelinek model. Consider the following sentence:

  The small woman gave the fat man her sandwich.

The model under discussion is based on shift-reduce PPDAs. In such a model, shift transitions generate the next word $w$ and its associated syntactic category $X$ and push the pair $(X, w)$ on the stack. Each shift transition is followed by zero or more reduce transitions that combine topmost stack entries. For example the stack elements (Det, the), (Adj, small), (N, woman) can be combined to form the single entry (NP, woman) representing the phrase "the small woman". In general each stack entry consists of a syntactic category and a head word. After generating the prefix "The small woman gave the fat man" the stack might contain the sequence (NP, woman)(V, gave)(NP, man).
After generating the prefix "The small woman gave the fat man" the stack might contain the sequence (NP, woman)<Y, gave)(NP, man). The Chelba-Jelinek model then executes a shift tran- 542 S --+ (S, admired) (S, admired) --+ (NP, Mary)(VP, admired) (VP, admired) -+ (V, admired)(Np, oak) (NP, oak) -+ (Det, the)(N, oak) (N, oak) -+ (Adj, towering> (N, oak> (N, oak> -~ (Adj, strong>(N, oak> (N, oak) -+ (hdj, old>(N, oak) (NP, Mary) -+ Mary (N, oak) -+ oak Figure 1: Lexicalized context-free grammar sition by generating the next word. This is done in a manner similar to that of a trigram model except that, rather than generate the next word based on the two preceding words, it generates the next word based on the two top- most stack entries. In this example the Chelba- Jelinek model generates the word "her" from (V, gave)(NP, man) while a classical trigram model would generate "her" from "fat man". We now contrast Chelba-Jelinek style mod- els with lexicalized PCFG models. A PCFG is a context-free grammar in which each produc- tion is associated with a weight in the interval [0, 1] and such that the weights of the produc- tions from any given nonterminal sum to 1. For instance, the sentence Mary admired the towering strong old oak can be derived using a lexicalized PCFG based on the productions in Figure 1. Production probabilities in the PCFG would reflect the like- lihood that a phrase headed by a certain word can be expanded in a certain way. Since it can be difficult to estimate fully these likelihoods, we might restrict ourselves to models based on bilexical relationships (Eisner, 1997), those be- tween pairs of words. The simplest bilexical re- lationship is a bigram statistic, the fraction of times that "oak" follows "old". Bilexical rela- tionships for a PCFG include that between the head-word of a phrase and the head-word of a non-head immediate constituent, for instance. In particular, the generation of the above sen- tence using a PCFG based on Figure 1 would exploit a bilexical statistic between "towering" and "oak" contained in the weight of the fifth production. This bilexical relationship between "towering" and "oak" would not be exploited in either a trigram model or in a Chelba-Jelinek style model. In a Chelba-Jelinek style model one must generate "towering" before generating "oak" and then "oak" must be generated from (Adj, strong), (Adj, old). In this example the Chelba-Jelinek model behaves more like a clas- sical trigram model than like a PCFG model. This contrast between PPDAs and PCFGs is formalized in theorem 1, which exhibits a PCFG for which no stochastic parameterization of the corresponding shift-reduce parser yields the same probability distribution over strings. That is, the standard shift-reduce translation from CFGs to PDAs cannot be generalized to the stochastic case. We give two ways of getting around the above difficulty. The first is to construct a top-down PPDA that mimics directly the process of gen- erating a PCFG derivation from the start sym- bol by repeatedly replacing the leftmost non- terminal in a sentential form by the right-hand side of one of its rules. Theorem 2 states that any PCFG can be translated into a top- down PPDA. Conversely, theorem 3 states that any PPDA can be translated to a PCFG, not just those that are top-down PPDAs for some PCFG. Hence PCFGs and general PPDAs de- fine the same class of stochastic languages. Unfortunately, top-down PPDAs do not al- low the simple left-to-right processing that mo- tivates shift-reduce PPDAs. 
A second way around the difficulty formalized in theorem 1 is to encode additional information about the derivation context with richer stack and state alphabets. Theorem 7 shows that it is thus possible to translate an arbitrary PCFG to a shift-reduce PPDA. The construction requires a fair amount of machinery including proofs that any PCFG can be put in Chomsky normal form, that weights can be renormalized to ensure that the result of grammar transformations can be made into PCFGs, that any PCFG can be put in Greibach normal form, and, finally, that a Greibach normal form PCFG can be converted to a shift-reduce PPDA. The construction also involves a blow-up in the size of the shift-reduce parsing automaton. This suggests that some languages that are con- cisely describable by a PCFG are not concisely describable by a shift-reduce PPDA, hence that the class of PCFGs and the class of shift-reduce PPDAs impose different inductive biases on the 543 CF languages. In the conversion from shift- reduce PPDAs to PCFGs, there is also a blow- up, if a less dramatic one, leaving open the pos- sibility that the biases are incomparable, and that neither formalism is inherently more con- cise. Our main conclusion is then that, while the generative and shift-reduce parsing approaches are weakly equivalent, they impose different in- ductive biases. 2 Probabilistic and Weighted Grammars For the remainder of the paper, we fix a terminal alphabet E and a nonterminal alphabet N, to which we may add auxiliary symbols as needed. A weighted context-free grammar (WCFG) consists of a distinguished start symbol S E N plus a finite set of weighted productions of the form X -~ a, (alternately, u : X --~ a), where X E N, a E (Nt2E)* and the weight u is a non- negative real number. A probabilistic context- free grammar (PCFG) is a WCFG such that for all X, )-~u:x-~a u = 1. Since weights are non- negative, this also implies that u <_ 1 for any individual production. A PCFG defines a stochastic process with sentential forms as states, and leftmost rewrit- ing steps as transitions. In the more general case of WCFGs, we can no longer speak of stochastic processes; but weighted parse trees and sets of weighted parse trees are still well- defined notions. We define a parse tree to be a tree whose nodes are labeled with productions. Suppose node ~ is labeled X -~ a[Y1,...,Yn], where we write a[Y1,...,Yn] for a string whose nonter- minal symbols are Y1,...,Y~. We say that ~'s nonterminal label is X and its weight is u. The subtree rooted at ~ is said to be rooted in X. ~ is well-labeled just in case it has n children, whose nonterminal labels are Y1,..., Yn, respectively. Note that a terminal node is well-labeled only if a is empty or consists exclusively of terminal symbols. We say a WCFG G admits a tree d just in case all nodes of d are well-labeled, and all labels are productions of G. Note that no requirement is placed on the nonterminal of the root node of d; in particular, it need not be S. We define the weight of a tree d, denoted Wa(d), or W(d) if G is clear from context, to be the product of weights of its nodes. The depth r(d) of d is the length of the longest path from root to leaf in d. The root production it(d) is the label of the root node. The root symbol p(d) is the left-hand side of ~r(d). The yield a(d) of the tree d is defined in the standard way as the string of terminal symbols "parsed" by the tree. It is convenient to treat the functions 7r, p, a, and r as random variables over trees. 
We write, for example, $\{\rho = X\}$ as an abbreviation for $\{d \mid \rho(d) = X\}$; and $W_G(\rho = X)$ represents the sum of weights of such trees. If the sum diverges, we set $W_G(\rho = X) = \infty$. We call $\|X\|_G = W_G(\rho = X)$ the norm of $X$, and $\|G\| = \|S\|_G$ the norm of the grammar. A WCFG $G$ is called convergent if $\|G\| < \infty$. If $G$ is a PCFG then $\|G\| = W_G(\rho = S) \le 1$; that is, all PCFGs are convergent. A PCFG $G$ is called consistent if $\|G\| = 1$. A sufficient condition for the consistency of a PCFG is given in (Booth and Thompson, 1973). If $\Phi$ and $\Psi$ are two sets of parse trees such that $0 < W_G(\Psi) < \infty$ we define $P_G(\Phi \mid \Psi)$ to be $W_G(\Phi \cap \Psi)/W_G(\Psi)$. For any terminal string $y$ and grammar $G$ such that $0 < W_G(\rho = S) < \infty$ we define $P_G(y)$ to be $P_G(\sigma = y \mid \rho = S)$.

3 Stochastic Push-Down Automata

We use a somewhat nonstandard definition of pushdown automaton for convenience, but all our results hold for a variety of essentially equivalent definitions. In addition to the terminal alphabet $\Sigma$, we will use sets of stack symbols and states as needed. A weighted push-down automaton (WPDA) consists of a distinguished start state $q_0$, a distinguished start stack symbol $X_0$ and a finite set of transitions of the following form, where $p$ and $q$ are states, $a \in \Sigma \cup \{\epsilon\}$, $X$ and $Z_1, \dots, Z_n$ are stack symbols, and $w$ is a nonnegative real weight:

$X, p \xrightarrow{a,w} Z_1 \cdots Z_n, q$

A WPDA is a probabilistic push-down automaton (PPDA) if all weights are in the interval [0, 1] and for each pair of a stack symbol $X$ and a state $p$ the sum of the weights of all transitions of the form $X, p \xrightarrow{a,w} Z_1 \cdots Z_n, q$ equals 1. A machine configuration is a pair $\langle\beta, q\rangle$ of a finite sequence $\beta$ of stack symbols (a stack) and a machine state $q$. A machine configuration is called halting if the stack is empty. If $M$ is a PPDA containing the transition $X, p \xrightarrow{a,w} Z_1\cdots Z_n, q$ then any configuration of the form $\langle\beta X, p\rangle$ has probability $w$ of being transformed into the configuration $\langle\beta Z_1 \cdots Z_n, q\rangle$, where this transformation has the effect of "outputting" $a$ if $a \ne \epsilon$. A complete execution of $M$ is a sequence of transitions between configurations starting in the initial configuration $\langle X_0, q_0\rangle$ and ending in a configuration with an empty stack. The probability of a complete execution is the product of the probabilities of the individual transitions between configurations in that execution. For any PPDA $M$ and $y \in \Sigma^*$ we define $P_M(y)$ to be the sum of the probabilities of all complete executions outputting $y$. A PPDA $M$ is called consistent if $\sum_{y \in \Sigma^*} P_M(y) = 1$.

We first show that the well known shift-reduce conversion of CFGs into PDAs cannot be made to handle the stochastic case. Given a (non-probabilistic) CFG $G$ in Chomsky normal form we define a (non-probabilistic) shift-reduce PDA $SR(G)$ as follows. The stack symbols of $SR(G)$ are taken to be nonterminals of $G$ plus the special symbols $\top$ and $\bot$. The states of $SR(G)$ are in one-to-one correspondence with the stack symbols and we will abuse notation by using the same symbols for both states and stack symbols. The initial stack symbol is $\bot$ and the initial state is (the state corresponding to) $\bot$. For each production of the form $X \rightarrow a$ in $G$ the PDA $SR(G)$ contains all shift transitions of the following form:

$Y, Z \xrightarrow{a} YZ, X$

The PDA $SR(G)$ also contains the following termination transitions, where $S$ is the start symbol of $G$:

$\bot, S \xrightarrow{\epsilon} , \top$
$\bot, \top \xrightarrow{\epsilon} , \top$

Note that if $G$ consists entirely of productions of the form $S \rightarrow a$ these transitions suffice. More generally, for each production of the form $X \rightarrow YZ$ in $G$ the PDA $SR(G)$ contains the following reduce transitions:

$Y, Z \xrightarrow{\epsilon} , X$

All reachable configurations are in one of the following four forms, where the first is the initial configuration, the second is a template for all intermediate configurations with $\alpha \in N^*$, and the last two are terminal configurations:

$\langle\bot, \bot\rangle$, $\langle\bot\bot\alpha, X\rangle$, $\langle\bot, \top\rangle$, $\langle\epsilon, \top\rangle$

Furthermore, a configuration of the form $\langle\bot\bot\alpha, X\rangle$ can be reached after outputting $y$ if and only if $\alpha X \stackrel{*}{\Rightarrow} y$. In particular, the machine can reach configuration $\langle\bot\bot, S\rangle$ outputting $y$ if and only if $S \stackrel{*}{\Rightarrow} y$. So the machine $SR(G)$ generates the same language as $G$.
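The construction of $SR(G)$ is mechanical. The following sketch generates its transition set for a small invented CNF grammar; a transition is encoded (my own convention) as (top stack symbol, state, output or None, replacement for the top symbol, next state), with TOP and BOT standing in for $\top$ and $\bot$.

```python
# CNF grammar (invented example): terminal rules X -> a, binary rules X -> Y Z.
terminal_rules = [('X', 'a')]             # X -> a
binary_rules = [('S', 'Y', 'X')]          # S -> Y X
nonterminals = {'S', 'X', 'Y'}
stack_syms = nonterminals | {'TOP', 'BOT'}

def sr_transitions(start='S'):
    ts = []
    # Shift: for each X -> a, all (Y, Z) --a--> (Y Z, X): push Z, enter X.
    for lhs, a in terminal_rules:
        for Y in stack_syms:
            for Z in stack_syms:
                ts.append((Y, Z, a, (Y, Z), lhs))
    # Reduce: for each X -> Y Z, (Y, Z) --eps--> (pop, X).
    for lhs, Y, Z in binary_rules:
        ts.append((Y, Z, None, (), lhs))
    # Termination: (BOT, S) and (BOT, TOP) pop to state TOP.
    ts.append(('BOT', start, None, (), 'TOP'))
    ts.append(('BOT', 'TOP', None, (), 'TOP'))
    return ts

print(len(sr_transitions()))              # 28 transitions for this grammar
```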
More generally, for each production of the form X -+ YZ in G the PDA SR(G) contains the following reduce transitions. Y, Z -~, X All reachable configurations are in one of the following four forms where the first is the initial configuration, the second is a template for all intermediate configurations with a E N*, and the last two are terminal configurations. <1, 1>, <11., x>, <I,T>, T> Furthermore, a configuration of the form (l_l_a, X) can be reached after outputting y if and only if aX :~ y. In particular, the machine can reach configuration (±_L, S) outputting y if and only if S :~ y. So the machine SR(G) generates the same language as G. We now show that the shift-reduce transla- tion of CFGs into PDAs does not generalize to the stochastic case. For any PCFG G we define the underlying CFG to be the result of erasing all weights from the productions of G. Theorem 1 There exists a consistent PCFG G in Chomsky normal .form with underlying CFG G' such that no consistent weighting M of the PDA SR(G ~) has the property that PM(Y) = Pa(u) for all U e To prove the theorem take G to be the fol- lowing grammar. 1_ 1_ S -~ AX1, S 3+ BY1 X, -~ CX2, X2 -~ CA Yl Cy2, Y2 A, C B A-~ a, S-~ b, C-~ c Note that G generates acca and bccb each with probability ½. Let M be a consistent PPDA whose transitions consist of some weight- ing of the transitions of SR(G'). We will as- sume that PM(Y) = PG(Y) for all y E E* and derive a contradiction. Call the nonter- minals A, B, and C preterminals. Note that the only reduce transitions in SR(G ~) com- bining two preterminals are C, A -~,X2 and C, B -~,Y2. Hence the only machine configu- ration reachable after outputting the sequence ace is (.I__LAC, C>. If PM(acca) -- ½ and PM(accb) -- 0 then the machine in configuration (.I_±AC, C> must deterministically move to con- figuration (I±ACC, A>. But this implies that configuration (IIBC, C> also deterministically moves to configuration <±±BCC, A> so we have PM(bccb) -= 0 which violates the assumptions about M. ,, Although the standard shift-reduce transla- tion of CFGs into PDAs fails to generalize to the stochastic case, the standard top-down con- version easily generalizes. A top-down PPDA is one in which only ~ transitions can cause the stack to grow and transitions which output a word must pop the stack. 545 Theorem 2 Any string distribution definable by a consistent PCFG is also definable by a top- down PPDA. Here we consider only PCFGs in Chom- sky normal form--the generalization to arbi- trary PCFGs is straightforward. Any PCFG in Chomsky normal form can be translated to a top-down PPDA by translating each weighted production of the form X --~ YZ to the set of expansion moves of the form W, X ~ WZ, Y and each production of the form X -~ a to the set of pop moves of the form Z, X 72-'~, Z. • We also have the following converse of the above theorem. Theorem 3 Any string distribution definable by a consistent PPDA is definable by a PCFG. The proof, omitted here, uses a weighted ver- sion of the standard translation of a PDA into a CFG followed by a renormalization step using lemma 5. We note that it does in general in- volve an increase in the number of parameters in the derived PCFG. In this paper we are primarily interested in shift-reduce PPDAs which we now define for- mally. In a shift-reduce PPDA there is a one- to-one correspondence between states and stack symbols and every transition has one of the fol- lowing two forms. 
In this paper we are primarily interested in shift-reduce PPDAs, which we now define formally. In a shift-reduce PPDA there is a one-to-one correspondence between states and stack symbols and every transition has one of the following two forms:

$Y, Z \xrightarrow{a,w} YZ, X \qquad a \in \Sigma$
$Y, Z \xrightarrow{\epsilon,w} , X$

Transitions of the first type are called shift transitions and transitions of the second type are called reduce transitions. Shift transitions output a terminal symbol and push a single symbol on the stack. Reduce transitions are $\epsilon$-transitions that combine two stack symbols. The above theorems leave open the question of whether shift-reduce PPDAs can express arbitrary context-free distributions. Our main theorem is that they can. To prove this some additional machinery is needed.

4 Chomsky Normal Form

A PCFG is in Chomsky normal form (CNF) if all productions are either of the form $X \xrightarrow{u} a$, $a \in \Sigma$, or $X \xrightarrow{u} Y_1 Y_2$, $Y_1, Y_2 \in N$. Our next theorem states, in essence, that any PCFG can be converted to Chomsky normal form.

Theorem 4 For any consistent PCFG $G$ with $P_G(\epsilon) < 1$ there exists a consistent PCFG $C(G)$ in Chomsky normal form such that, for all $y \in \Sigma^+$:

$$P_{C(G)}(y) = P_G(y \mid y \ne \epsilon) = \frac{P_G(y)}{1 - P_G(\epsilon)}$$

To prove the theorem, note first that, without loss of generality, we can assume that all productions in $G$ are of one of the forms $X \xrightarrow{u} YZ$, $X \xrightarrow{u} Y$, $X \xrightarrow{u} a$, or $X \xrightarrow{u} \epsilon$. More specifically, any production not in one of these forms must have the form $X \xrightarrow{u} \alpha\beta$ where $\alpha$ and $\beta$ are nonempty strings. Such a production can be replaced by $X \xrightarrow{u} AB$, $A \xrightarrow{1} \alpha$, and $B \xrightarrow{1} \beta$ where $A$ and $B$ are fresh nonterminal symbols. By repeatedly applying this binarization transformation we get a grammar in the desired form defining the same distribution on strings.

We now assume that all productions of $G$ are in one of the above four forms. This implies that a node in a $G$-derivation has at most two children. A node with two children will be called a branching node. Branching nodes must be labeled with a production of the form $X \xrightarrow{u} YZ$. Because $G$ can contain productions of the form $X \xrightarrow{u} \epsilon$ there may be arbitrarily large $G$-derivations with empty yield. Even $G$-derivations with nonempty yield may contain arbitrarily large subtrees with empty yield. A branching node in a $G$-derivation will be called ephemeral if either of its children has empty yield. Any $G$-derivation $d$ with $|\sigma(d)| \ge 2$ must contain a unique shallowest non-ephemeral branching node, labeled by some production $X \xrightarrow{u} YZ$. In this case, define $\beta(d) = YZ$. Otherwise ($|\sigma(d)| < 2$), let $\beta(d) = \sigma(d)$.

We say that a nonterminal $X$ is nontrivial in the grammar $G$ if $P_G(\sigma \ne \epsilon \mid \rho = X) > 0$. We now define the grammar $G'$ to consist of all productions of the following form, where $X$, $Y$, and $Z$ are nontrivial nonterminals of $G$ and $a$ is a terminal symbol appearing in $G$:

$X \xrightarrow{P_G(\beta = YZ \mid \rho = X,\, \sigma \ne \epsilon)} YZ$
$X \xrightarrow{P_G(\beta = a \mid \rho = X,\, \sigma \ne \epsilon)} a$

We leave it to the reader to verify that $G'$ has the property stated in theorem 4. ∎

The above proof of theorem 4 is non-constructive in that it does not provide any way of computing the conditional probabilities $P_G(\beta = YZ \mid \rho = X,\, \sigma \ne \epsilon)$ and $P_G(\beta = a \mid \rho = X,\, \sigma \ne \epsilon)$. However, it is not difficult to compute probabilities of the form $P_G(\Phi \mid \rho = X,\, \tau \le t+1)$ from probabilities of the form $P_G(\Phi \mid \rho = X,\, \tau \le t)$, and $P_G(\Phi \mid \rho = X)$ is the limit as $t$ goes to infinity of $P_G(\Phi \mid \rho = X,\, \tau \le t)$. We omit the details here.
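For the special case $\Phi = \{\sigma = \epsilon\}$, the omitted computation is a simple fixpoint iteration: $q_t(X)$, the total weight of derivations from $X$ with empty yield and depth at most $t$, increases to $W_G(\sigma = \epsilon \wedge \rho = X)$, which for a consistent PCFG is $P_G(\sigma = \epsilon \mid \rho = X)$. A sketch with an invented three-rule grammar:

```python
# Weighted productions: (lhs, weight, rhs), rhs a list of symbols;
# lowercase strings are terminals, uppercase are nonterminals.
rules = [('S', 0.5, ['S', 'S']), ('S', 0.3, ['a']), ('S', 0.2, [])]

def empty_yield_probs(rules, iterations=100):
    """q[X] ~= P_G(sigma = eps | rho = X), computed as the limit of the
    depth-bounded weights q_t, as sketched in the text."""
    nts = {lhs for lhs, _, _ in rules}
    q = {X: 0.0 for X in nts}
    for _ in range(iterations):
        new = {X: 0.0 for X in nts}
        for lhs, u, rhs in rules:
            p = u
            for sym in rhs:
                p *= q[sym] if sym in nts else 0.0   # a terminal blocks eps
            new[lhs] += p
        q = new
    return q

q = empty_yield_probs(rules)
# Least solution of q = 0.5 q^2 + 0.2, i.e. q = 1 - sqrt(0.6) ~ 0.2254;
# X is nontrivial iff q[X] < 1.
print(q['S'])
```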
A nonterminal X is nonempty in G if G contains X -~ a where u > 0 and a contains only terminal symbols, or G contains X -~ o~[Y1, ..., Yk] where u > 0 and each 1~ is (recursively) nonempty. A WCFG G is proper if every nonterminal is both reachable and nonempty. It is possible to efficiently com- pute the set of reachable and nonempty non- terminals in any grammar. Furthermore, the subset of productions involving only nontermi- nals that are both reachable and nonempty de- fines the same weight distribution on strings. So without loss of generality we need only con- sider proper WCFGs. A reweighting of G is any WCFG derived from G by changing the weights of the productions of G. Lemma 5 For any convergent proper WCFG G, there exists a reweighting G t of G such that G ~ is a consistent PCFG such that for all ter- minal strings y we have PG' (Y) = Pa (Y). Proof." Since G is convergent, and every non- terminal X is reachable, we must have IIXIla < oo. We now renormalize all the productions from X as follows. For each production X -~ a[Y1,..., Yn] we replace u by ¢ = II IIG IIXIla To show that G' is a PCFG we must show that the sum of the weights of all productions For any parse tree d admitted by G let d ~ be the corresponding tree admitted by G ~, that is, the result of reweighting the pro- ductions in d. One can show by induc- tion on the depth of parse trees that if p(d) = X then Wc,(d') = [-~GWG(d). Therefore IIXIIG, = ~~{d[p(d)=X} WG,(d') -~ ~ ~{alo(e)=x} Wa(d) = = 1. In par- ticular, Ilaql = IlSlla,- 1, that is, G' is consis- tent. This implies that for any terminal string Y we have PG'(Y) = li-~Wa,(a = y, p = S) = Wa,(a = y, p = S). Furthermore, for any tree d with p(d) = S we have Wa,(d') = ~[~cWa(d) and so WG,(a = y, p = S) - ~WG(a = y, p = S) = Pc(Y). " 6 Greibach Normal Form A PCFG is in Greibach normal form (GNF) if every production X -~ a satisfies (~ E EN*. The following holds: Theorem 6 For any consistent PCFG G in CNF there exists a consistent PCFG G ~ in GNF such that Pc,(Y) = Pa(Y) for y e E*. Proof: A left corner G-derivation from X to Y is a G-derivation from X where the leftmost leaf, rather than being labeled with a produc- tion, is simply labeled with the nonterminal Y. For example, if G contains the productions X ~ YZ and Z -~ a then we canconstruct a left corner G-derivation from X to Y by build- ing a tree with a root labeled by X Z.~ YZ, a left child labeled with Y and a right child la- beled with Z -~ a. The weight of a left corner G-derivation is the product of the productions on the nodes. A tree consisting of a single node labeled with X is a left corner G-derivation from X toX. For each pair of nonterminals X, Y in G we introduce a new nonterminal symbol X/Y. 547 The H-derivations from X/Y will be in one to one correspondence with the left-corner G- derivations from X to Y. For each production in G of the form X ~ a we include the following in H where S is the start symbol of G: S --~ a S/X We also include in H all productions of the fol- lowing form where X is any nonterminal in G: x/x If G consists only of productions of the form S -~ a these productions suffice. More gener- ally, for each nonterminal X/Y of H and each pair of productions U ~ YZ, W ~-~ a we in- clude in H the following: X/Y ~2 a Z/W X/U Because of the productions X/X -~ e, WH(# : X/X) > 1 , and H is not quite in GNF. These two issues will be addressed momentarily. 
Standard arguments can be used to show that the $H$-derivations from $X/Y$ are in one-to-one correspondence with the left corner $G$-derivations from $X$ to $Y$. Furthermore, this one-to-one correspondence preserves weight: if $d$ is the $H$-derivation rooted at $X/Y$ corresponding to the left corner $G$-derivation from $X$ to $Y$ then $W_H(d)$ is the product of the weights of the productions in the $G$-derivation. The weight-preserving one-to-one correspondence between left-corner $G$-derivations from $X$ to $Y$ and $H$-derivations from $X/Y$ yields the following:

$$W_H(a\alpha) = \sum_{(S \xrightarrow{u} a\, S/X) \in H} u\; W_H(\sigma = \alpha \mid \rho = S/X) = P_G(a\alpha)$$

Lemma 5 implies that we can reweight the proper subset of $H$ (the reachable and nonempty productions of $H$) so as to construct a consistent PCFG $J$ with $P_J(\alpha) = P_G(\alpha)$. To prove theorem 6 it now suffices to show that the productions of the form $X/X \xrightarrow{w} \epsilon$ can be eliminated from the PCFG $J$. Indeed, we can eliminate the $\epsilon$ productions from $J$ in a manner similar to that used in the proof of theorem 4. A node in a $J$-derivation is ephemeral if it is labeled $X \xrightarrow{w} \epsilon$ for some $X$. We now define a function $\gamma$ on $J$-derivations $d$ as follows. If the root of $d$ is labeled with $X \xrightarrow{w} aYZ$ then we have four subcases. If neither child of the root is ephemeral then $\gamma(d)$ is the string $aYZ$. If only the left child is ephemeral then $\gamma(d)$ is $aZ$. If only the right child is ephemeral then $\gamma(d)$ is $aY$, and if both children are ephemeral then $\gamma(d)$ is $a$. Analogously, if the root is labeled with $X \xrightarrow{w} aY$, then $\gamma(d)$ is $aY$ if the child is not ephemeral and $a$ otherwise. If the root is labeled with $X \xrightarrow{w} \epsilon$ then $\gamma(d)$ is $\epsilon$. A nonterminal $X$ in $J$ will be called trivial if $P_J(\gamma = \epsilon \mid \rho = X) = 1$. We now define the final grammar $K$ to consist of all productions of the following form where $X$, $Y$, and $Z$ are nontrivial nonterminals appearing in $J$ and $a$ is a terminal symbol appearing in $J$:

$$X \xrightarrow{P_J(\gamma = a \,\mid\, \rho = X,\; \gamma \neq \epsilon)} a$$
$$X \xrightarrow{P_J(\gamma = aY \,\mid\, \rho = X,\; \gamma \neq \epsilon)} aY$$
$$X \xrightarrow{P_J(\gamma = aYZ \,\mid\, \rho = X,\; \gamma \neq \epsilon)} aYZ$$

As in section 4, for every nontrivial nonterminal $X$ in $K$ and terminal string $\alpha$ we have $P_K(\sigma = \alpha \mid \rho = X) = P_J(\sigma = \alpha \mid \rho = X, \sigma \neq \epsilon)$. In particular, since $P_J(\epsilon) = P_G(\epsilon) = 0$, we have the following:

$$P_K(\alpha) = P_K(\sigma = \alpha \mid \rho = S) = P_J(\sigma = \alpha \mid \rho = S) = P_J(\alpha) = P_G(\alpha)$$

The PCFG $K$ is the desired PCFG in Greibach normal form.

The construction in this proof is essentially the standard left-corner transformation (Rosenkrantz and Lewis, 1970), as extended by Salomaa and Soittola (1978, theorem 2.3) to algebraic formal power series.

7 The Main Theorem

We can now prove our main theorem.

Theorem 7 For any consistent PCFG $G$ there exists a shift-reduce PPDA $M$ such that $P_M(y) = P_G(y)$ for all $y \in \Sigma^*$.

Let $G$ be an arbitrary consistent PCFG. By theorems 4 and 6, we can assume that $G$ consists of productions of the form $S \xrightarrow{w} \epsilon$ and $S \xrightarrow{1-w} S'$ plus productions in Greibach normal form not mentioning $S$. We can then replace the rule $S \xrightarrow{1-w} S'$ with all rules of the form $S \xrightarrow{(1-w)w'} \alpha$ where $G$ contains $S' \xrightarrow{w'} \alpha$. We now assume without loss of generality that $G$ consists of a single production of the form $S \xrightarrow{w} \epsilon$ plus productions in Greibach normal form not mentioning $S$ on the right hand side.

The stack symbols of $M$ are of the form $W_\alpha$ where $\alpha \in N^*$ is a proper suffix of the right hand side of some production in $G$. For example, if $G$ contains the production $X \xrightarrow{w} aYZ$ then the symbols of $M$ include $W_{YZ}$, $W_Z$, and $W_\epsilon$. The initial state is $W_S$ and the initial stack symbol is $\bot$. We have assumed that $G$ contains a unique production of the form $S \xrightarrow{w} \epsilon$. We include the following transition in $M$ corresponding to this production:
$$\bot, W_S \xrightarrow{w} \epsilon, \top$$

Then, for each rule of the form $X \xrightarrow{w} a\beta$ in $G$ and each symbol of the form $W_{X\alpha}$ we include the following in $M$:

$$Z, W_{X\alpha} \xrightarrow[a]{w} Z\; W_{X\alpha}, W_\beta$$

We also include all "post-processing" rules of the following form:

$$W_{X\alpha}, W_\epsilon \xrightarrow{1} \epsilon, W_\alpha \qquad \bot, W_\epsilon \xrightarrow{1} \epsilon, \top \qquad \bot, \top \xrightarrow{1} \epsilon, \top$$

Note that all reduction transitions are deterministic with the single exception of the first rule listed above. The nondeterministic shift transitions of $M$ are in one-to-one correspondence with the productions of $G$. This yields the property that $P_M(y) = P_G(y)$.

8 Conclusions

The relationship between PCFGs and PPDAs is subtler than a direct application of the classical constructions relating general CFGs and PDAs. Although PCFGs can be concisely translated into top-down PPDAs, we conjecture that there is no concise translation of PCFGs into shift-reduce PPDAs. Conversely, there appears to be no concise translation of shift-reduce PPDAs to PCFGs. Our main result is that PCFGs and shift-reduce PPDAs are intertranslatable, hence weakly equivalent. However, the non-conciseness of our translations is consistent with the view that stochastic top-down generation models are significantly different from shift-reduce stochastic parsing models, affecting the ability to learn a model from examples.

References

Alfred V. Aho and Jeffrey D. Ullman. 1972. The Theory of Parsing, Translation and Compiling, volume I. Prentice-Hall, Englewood Cliffs, New Jersey.
Ezra Black, Fred Jelinek, John Lafferty, David Magerman, Robert Mercer, and Salim Roukos. 1992. Towards history-based grammars: Using richer models for probabilistic parsing. In Proceedings of the 5th DARPA Speech and Natural Language Workshop.
Taylor Booth and Richard Thompson. 1973. Applying probability measures to abstract languages. IEEE Transactions on Computers, C-22(5):442-450.
Ted Briscoe and John Carroll. 1993. Generalized probabilistic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguistics, 19(1):25-59.
Eugene Charniak. 1997. Statistical parsing with a context-free grammar and word statistics. In Fourteenth National Conference on Artificial Intelligence, pages 598-603. AAAI Press/MIT Press.
Ciprian Chelba and Fred Jelinek. 1998. Exploiting syntactic structure for language modeling. In COLING-ACL '98, pages 225-231.
Michael Collins. 1998. Head-Driven Statistical Models for Natural Language Parsing. Ph.D. thesis, University of Pennsylvania.
Jason Eisner. 1997. Bilexical grammars and a cubic-time probabilistic parser. In Proceedings of the International Workshop on Parsing Technologies.
David M. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Department of Computer Science, Stanford University.
Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Claire Cardie and Ralph Weischedel, editors, Second Conference on Empirical Methods in Natural Language Processing (EMNLP-2), Somerset, New Jersey. Association for Computational Linguistics.
Daniel J. Rosenkrantz and Philip M. Lewis II. 1970. Deterministic left corner parser. In IEEE Conference Record of the 11th Annual Symposium on Switching and Automata Theory, pages 139-152.
Arto Salomaa and Matti Soittola. 1978. Automata-Theoretic Aspects of Formal Power Series. Springer-Verlag, New York.
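Although the paper gives no code, the renormalization of Section 5 is directly implementable. The following is a minimal Python sketch, under our own grammar encoding (a dict mapping each nonterminal to (weight, rhs) pairs) and assuming a convergent, proper WCFG: the partition values $\|X\|_G$ are approximated by value iteration from below, and each weight is rescaled as in the proof of Lemma 5.

    def inside_mass(productions, iters=200):
        """Approximate ||X||_G for every nonterminal X by value iteration.

        Starting from zero, Z[X] <- sum over rules X ->u rhs of
        u * prod of Z[Y] over nonterminals Y in rhs; this converges
        from below to the total weight of derivations rooted at X.
        """
        Z = {x: 0.0 for x in productions}
        for _ in range(iters):
            for x, rules in productions.items():
                total = 0.0
                for u, rhs in rules:
                    mass = u
                    for sym in rhs:
                        if sym in productions:
                            mass *= Z[sym]
                    total += mass
                Z[x] = total
        return Z

    def renormalize(productions):
        """Reweight a convergent proper WCFG into an equivalent PCFG:
        u' = u * prod_i ||Y_i|| / ||X|| (Lemma 5)."""
        Z = inside_mass(productions)
        new = {}
        for x, rules in productions.items():
            new_rules = []
            for u, rhs in rules:
                mass = u
                for sym in rhs:
                    if sym in productions:
                        mass *= Z[sym]
                new_rules.append((mass / Z[x], rhs))
            new[x] = new_rules
        return new

Properness matters here: it guarantees Z[x] > 0 for every nonterminal, so the division is always defined.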
1999
70
Information Fusion in the Context of Multi-Document Summarization

Regina Barzilay and Kathleen R. McKeown, Dept. of Computer Science, Columbia University, New York, NY 10027, USA
Michael Elhadad, Dept. of Computer Science, Ben-Gurion University, Beer-Sheva, Israel

Abstract

We present a method to automatically generate a concise summary by identifying and synthesizing similar elements across related text from a set of multiple documents. Our approach is unique in its usage of language generation to reformulate the wording of the summary.

1 Introduction

Information overload has created an acute need for summarization. Typically, the same information is described by many different online documents. Hence, summaries that synthesize common information across documents and emphasize the differences would significantly help readers. Such a summary would be beneficial, for example, to a user who follows a single event through several newswires. In this paper, we present research on the automatic fusion of similar information across multiple documents using language generation to produce a concise summary.

We propose a method for summarizing a specific type of input: news articles presenting different descriptions of the same event. Hundreds of news stories on the same event are produced daily by news agencies. Repeated information about the event is a good indicator of its importance, and can be used for summary generation.

Most research on single document summarization, particularly for domain independent tasks, uses sentence extraction to produce a summary (Lin and Hovy, 1997; Marcu, 1997; Salton et al., 1991). In the case of multi-document summarization of articles about the same event, the original articles can include both similar and contradictory information. Extracting all similar sentences would produce a verbose and repetitive summary, while extracting some similar sentences could produce a summary biased towards some sources. Instead, we move beyond sentence extraction, using a comparison of extracted similar sentences to select the phrases that should be included in the summary and sentence generation to reformulate them as new text. Our work is part of a full summarization system (McKeown et al., 1999), which extracts sets of similar sentences, themes (Eskin et al., 1999), in the first stage for input to the components described here.

Our model for multi-document summarization represents a number of departures from traditional language generation. Typically, language generation systems have access to a full semantic representation of the domain. A content planner selects and orders propositions from an underlying knowledge base to form text content. A sentence planner determines how to combine propositions into a single sentence, and a sentence generator realizes each set of combined propositions as a sentence, mapping from concepts to words and building syntactic structure. Our approach differs in the following ways:

Content planning operates over full sentences, producing sentence fragments. Thus, content planning straddles the border between interpretation and generation. We preprocess the similar sentences using an existing shallow parser (Collins, 1996) and a mapping to predicate-argument structure. The content planner finds an intersection of phrases by comparing the predicate-argument structures; through this process it selects the phrases that can adequately convey the common information of the theme.
It also orders selected phrases and augments them with information needed for clarification (entity descriptions, temporal references, and newswire source references).

Sentence generation begins with phrases. Our task is to produce fluent sentences that combine these phrases, arranging them in novel contexts. In this process, new grammatical constraints may be imposed and paraphrasing may be required. We developed techniques to map the predicate-argument structure produced by the content planner to the functional representation expected by FUF/SURGE (Elhadad, 1993; Robin, 1994) and to integrate new constraints on realization choice, using surface features in place of the semantic or pragmatic ones typically used in sentence generation.

On 3th of September 1995, 120 hostages were released by Bosnian Serbs. Serbs were holding over 250 U.N. personnel. Bosnian serb leader Radovan Karadjic said he expected "a sign of goodwill" from the international community. U.S. F-16 fighter jet was shot down by Bosnian Serbs. Electronic beacon signals, which might have been transmitted by a downed U.S. fighter pilot in Bosnia, were no longer being received. After six days, O'Grady, downed pilot, was rescued by Marine force. The mission was carried out by CH-53 helicopters with an escort of missile- and rocket-armed Cobra helicopters.

Figure 1: Summary produced by our system using 12 news articles as input.

An example summary automatically generated by the system from our corpus of themes is shown in Figure 1. We collected a corpus of themes, which was divided into a training portion and a testing portion. We used the training data for identification of paraphrasing rules on which our comparison algorithm is built. The system we describe has been fully implemented and tested on a variety of input articles; there are, of course, many open research issues that we are continuing to explore.

In the following sections, we provide an overview of existing multi-document summarization systems, then we detail our sentence comparison technique and describe the sentence generation component. We provide examples of generated summaries and conclude with a discussion of evaluation.

2 Related Work

Automatic summarizers typically identify and extract the most important sentences from an input article. A variety of approaches exist for determining the salient sentences in the text: statistical techniques based on word distribution (Salton et al., 1991), symbolic techniques based on discourse structure (Marcu, 1997), and semantic relations between words (Barzilay and Elhadad, 1997). Extraction techniques can work only if summary sentences already appear in the article. Extraction cannot handle the task we address, because summarization of multiple documents requires information about similarities and differences across articles.

While most of the summarization work has focused on single articles, a few initial projects have started to study multi-document summarization. In constrained domains, e.g., terrorism, a coherent summary of several articles can be generated when a detailed semantic representation of the source text is available. For example, information extraction systems can be used to interpret the source text. In this framework, (Radev and McKeown, 1998) use generation techniques to highlight changes over time across input articles about the same event.
In an arbitrary domain, statistical techniques are used to identify similarities and differences across documents. Some approaches directly exploit word distribution in the text (Salton et al., 1991; Carbonell and Goldstein, 1998). Recent work (Mani and Bloedorn, 1997) exploits semantic relations between text units for content representation, such as synonymy and co-reference. A spreading activation algorithm and graph matching are used to identify similarities and differences across documents. The output is presented as a set of paragraphs with similar and unique words highlighted. However, if the same information is mentioned several times in different documents, much of the summary will be redundant. While some researchers address this problem by selecting a subset of the repetitions (Carbonell and Goldstein, 1998), this approach is not always satisfactory. As we will see in the next section, we can both eliminate redundancy from the output and retain balance through the selection of common information.

On Friday, a U.S. F-16 fighter jet was shot down by a Bosnian Serb missile while policing the no-fly zone over the region. A Bosnian Serb missile shot down a U.S. F-16 over northern Bosnia on Friday. On the eve of the meeting, a U.S. F-16 fighter was shot down while on a routine patrol over northern Bosnia. O'Grady's F-16 fighter jet, based in Aviano, Italy, was shot down by a Bosnian Serb SA-6 anti-aircraft missile last Friday and hopes had diminished for finding him alive despite intermittent electronic signals from the area which later turned out to be a navigational beacon.

Figure 2: A collection of similar sentences -- a theme.

3 Content Selection: Theme Intersection

To avoid redundant statements in a summary, we could select one sentence from the set of similar sentences that meets some criteria (e.g., a threshold number of common content words). Unfortunately, any representative sentence usually includes embedded phrases containing information that is not common to other similar sentences. Therefore, we need to intersect the theme sentences to identify the common phrases and then generate a new sentence. Phrases produced by theme intersection will form the content of the generated summary.

Given the theme shown in Figure 2, how can we determine which phrases should be selected to form the summary content? For our example theme, the problem is to determine that only the phrase "On Friday, U.S. F-16 fighter jet was shot down by a Bosnian Serb missile" is common across all sentences. The first sentence includes the clause; however, in other sentences, it appears in different paraphrased forms, such as "A Bosnian Serb missile shot down a U.S. F-16 on Friday." Hence, we need to identify similarities between phrases that are not identical in wording, but do report the same fact. If paraphrasing rules are known, we can compare the predicate-argument structure of the sentences and find common parts. Finally, having selected the common parts, we must decide how to combine phrases, whether additional information is needed for clarification, and how to order the resulting sentences to form the summary.

3.1 An Algorithm for Theme Intersection

In order to identify theme intersections, sentences must be compared.
To do this, we need a sentence representation that emphasizes sentence features that are relevant for comparison, such as dependencies between sentence constituents, while ignoring irrelevant features such as constituent ordering. Since predicate-argument structure is a natural way to represent constituent dependencies, we chose a dependency-based representation called DSYNT (Kittredge and Mel'čuk, 1983). An example of a sentence and its DSYNT tree is shown in Figure 3. Each non-auxiliary word in the sentence has a node in the DSYNT tree, and this node is connected to its direct dependents. Grammatical features of each word are also kept in the node. In order to facilitate comparison, words are kept in canonical form.

[Figure 3 shows the DSYNT of the sentence "U.S. fighter was shot by missile.": the root node shoot (class: verb, voice: passive, tense: past, polarity: +) dominates the noun nodes fighter (with dependent U.S.) and missile; grammatical features such as definiteness are recorded at each node.]

In order to construct a DSYNT we first run our sentences through Collins' robust statistical parser (Collins, 1996). We developed a rule-based component that transforms the phrase-structure output of the parser to a DSYNT representation. Functional words (determiners and auxiliaries) are eliminated from the tree and the corresponding syntactic features are updated.

The comparison algorithm starts with all sentence trees rooted at verbs from the input DSYNT, and traverses them recursively: if two nodes are identical, they are added to the output tree, and their children are compared. Once a full phrase (a verb with at least two constituents) has been found, it is added to the intersection. If nodes are not identical, the algorithm tries to apply an appropriate paraphrasing rule from a set of rules described in the next section. For example, if the phrases "group of students" and "students" are compared, then the omit empty head rule is applicable, since "group" is an empty noun and can be dropped from the comparison, leaving two identical words, "students". If there is no applicable paraphrasing rule, then the comparison is finished and the intersection result is empty. (A schematic implementation of this traversal is given below.)

All the sentences in the theme are compared in pairs. Then, these intersections are sorted according to their frequencies and all intersections above a given threshold result in the theme intersection. For the theme in Figure 2, the intersection result is "On Friday, a U.S. F-16 fighter jet was shot down by Bosnian Serb missile."¹

¹To be exact, the result of the algorithm is a DSYNT that linearizes as this sentence.
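Here is a minimal Python sketch of the recursive traversal described in Section 3.1. The Node class and the paraphrase-rule interface are our own simplifications for illustration; the actual system operates over full DSYNT trees with grammatical features:

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        lemma: str                      # canonical form of the word
        children: list = field(default_factory=list)

    def intersect(a, b, paraphrase_rules):
        """Return the common subtree of two DSYNT nodes, or None if they differ."""
        if a.lemma == b.lemma:
            shared = Node(a.lemma)
            for ca in a.children:
                # keep a child if it matches some child of the other tree
                matches = (intersect(ca, cb, paraphrase_rules) for cb in b.children)
                best = next((m for m in matches if m is not None), None)
                if best is not None:
                    shared.children.append(best)
            return shared
        for rule in paraphrase_rules:   # e.g. omit empty head: "group of students" -> "students"
            rewritten = rule(a, b)      # returns a rewritten (a, b) pair, or None
            if rewritten is not None:
                return intersect(rewritten[0], rewritten[1], paraphrase_rules)
        return None                     # no rule applies: empty intersection

In the full system, an intersection only survives if it contains a verb with at least two constituents, and pairwise intersections are then ranked by frequency across the theme.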
3.2 Paraphrasing Rules Derived from Corpus Analysis

Identification of theme intersection requires collecting the paraphrasing patterns which occur in our corpus. Paraphrasing is defined as the alternative ways a human speaker can choose to "say the same thing" by using linguistic knowledge (as opposed to world knowledge) (Iordanskaja et al., 1991). Paraphrasing has been widely investigated in the generation community (Iordanskaja et al., 1991; Robin, 1994). (Dras, 1997) considered sets of paraphrases required for text transformation in order to meet external constraints such as length or readability. (Jacquemin et al., 1997) investigated morphology-based paraphrasing in the context of a term recognition task. However, there is no general algorithm capable of identifying a sentence as a paraphrase of another.

In our case, such a comparison is less difficult since theme sentences are a priori close semantically, which significantly constrains the kinds of paraphrasing we need to check. In order to verify this assumption, we analyzed paraphrasing patterns through themes of our training corpus derived from the Topic Detection and Tracking corpus (Allan et al., 1998). Overall, 200 pairs of sentences conveying the same information were analyzed. We found that 85% of the paraphrasing is achieved by syntactic and lexical transformations. Examples of paraphrasing that require world knowledge are presented below:

1. "The Bosnian Serbs freed 121 U.N. soldiers last week at Zvornik" and "Bosnian Serb leaders freed about one-third of the U.N. personnel"
2. "Sheinbein showed no visible reaction to the ruling." and "Samuel Sheinbein showed no reaction when Chief Justice Aharon Barak read the 3-2 decision"

Since "surface" level paraphrasing comprises the vast majority of paraphrases in our corpus and is easier to identify than those requiring world knowledge, we studied paraphrasing patterns in the corpus. We found the following most frequent paraphrasing categories:

1. ordering of sentence components: "Tuesday they met..." and "They met ... Tuesday";
2. main clause vs. a relative clause: "...a building was devastated by the bomb" and "...a building, devastated by the bomb";
3. realization in different syntactic categories, e.g., classifier vs. apposition: "Palestinian leader Arafat" and "Arafat, Palestinian leader"; "Pentagon speaker" and "speaker from the Pentagon";
4. change in grammatical features: active/passive, time, number: "...a building was devastated by the bomb" and "...the bomb devastated a building";
5. head omission: "group of students" and "students";
6. transformation from one part of speech to another: "building devastation" and "...building was devastated";
7. using semantically related words such as synonyms: "return" and "alight"; "regime" and "government".

The patterns presented above cover 82% of the syntactic and lexical paraphrases (which is, in turn, 70% of all variants). These categories form the basis for the paraphrasing rules used by our intersection algorithm.

The majority of these categories can be identified in an automatic way. However, some of the rules can only be approximated to a certain degree. For example, identification of similarity based on semantic relations between words depends on the coverage of the thesaurus. We identify word similarity using synonym relations from WordNet. Currently, paraphrasing using part of speech transformations is not supported by the system. All other paraphrase classes we identified are implemented in our algorithm for theme intersection.

3.3 Temporal Ordering

A property that is unique to multi-document summarization is the effect of time perspective (Radev and McKeown, 1998). When reading an original text, it is possible to retrieve the correct temporal sequence of events, which is usually available explicitly. However, when we put pieces of text from different sources together, we must provide the correct time perspective to the reader, including the order of events, the temporal distance between events, and correct temporal references.

In single-document summarization, one of the possible orderings of the extracted information is provided by the input document itself. However, in the case of multiple-document summarization, some events may not be described in the same article. Furthermore, the order between phrases can change significantly from one article to another. For example, in a set of articles about the Oklahoma bombing from our training set, information about the "bombing" itself, "the death toll" and "the suspects" appear in three different orders in the articles.
This phenomenon can be explained by the fact that the order of the sentences is highly influenced by the focus of the article.

One possible discourse strategy for summaries is to base the ordering of sentences on the chronological order of events. To find the time an event occurred, we use the publication date of the phrase referring to the event. This gives us the best approximation to the order of events without carrying out a detailed interpretation of temporal references to events in the article, which are not always present. Typically, an event is first referred to on the day it occurred. Thus, for each phrase, we must find the earliest publication date in the theme, create a "time stamp", and order phrases in the summary according to this time stamp.

Temporal distance between events is an essential part of the summary. For example, in the summary in Figure 1 about a "U.S. pilot downed in Bosnia", the lengthy duration between "the helicopter was shot down" and "the pilot was rescued" is the main point of the story. We want to identify significant time gaps between events, and include them in the summary. To do so, we compare the time stamps of the themes, and when the difference between two subsequent time stamps exceeds a certain threshold (currently two days), the gap is recorded. A time marker will be added to the output summary for each gap, for example "According to a Reuters report on the 10/21".

Another time-related issue that we address is normalization of temporal references in the summary. If the word "today" is used twice in the summary, and each time it refers to a different date, then the resulting summary can be misleading. Time references such as "today" and "Monday" are clear in the context of a source article, but can be ambiguous when extracted from the article. This ambiguity can be corrected by substitution of the temporal reference with the full time/date reference, such as "10/21". By corpus analysis, we collected a set of patterns for identification of ambiguous dates. However, we currently don't handle temporal references requiring inference to resolve (e.g., "the day before the plane crashed," "around Christmas").

4 Sentence Generation

The input to the sentence generator is a set of phrases that are to be combined and realized as a sentence. Input features for each phrase are determined by the information recovered by shallow analysis during content planning. Because this input structure and the requirements on the generator are quite different from those of typical language generators, we had to address the design of the input language specification and its interaction with existing features in a new way, instead of using the existing SURGE syntactic realization component in a "black box" manner.

As an example, consider the case of temporal modifiers. The DSYNT for an input phrase will simply note that it contains a prepositional phrase. FUF/SURGE, our language generator, requires that the input contain a semantic role, circumstantial, which in turn contains a temporal feature. The labelling of the circumstantial as time allows SURGE to make the following decisions, given a sentence such as "After they made an emergency landing, the pilots were reported missing.":

• The selection of the position of the time circumstantial in front of the clause
• The selection of the mood of the embedded clause as "finite"

The semantic input also provides a solid basis to authorize sophisticated revisions to a base input.
If the sentence planner decides to adjoin a source to the clause, SURGE can decide to move the time circumstantial to the end of the clause, leading to: "According to Reuters on Thursday night, the pilots were reported missing after making an emergency landing." Without such paraphrasing ability, which might be decided based on the semantic roles time and source, the system would have to generate an awkward sentence with both circumstantials appearing one after another at the front of the sentence.

While in the typical generation scenario above the generator can make choices based on semantic information, in our situation the generator has only a low-level syntactic structure, represented as a DSYNT. It would seem at first glance that realizing such an input should be easier for the syntactic realization component; the generator in that case is left with little else to do than just linearizing the input specification. The task we had to solve, however, is more difficult for two reasons:

1. The input specification we define must allow the sentence planner to perform revisions; that is, to attach new constituents (such as source) to a base input specification without taking into account all possible syntactic interactions between the new constituent and existing ones;

2. SURGE relies on semantic information to make decisions and verify that these decisions are compatible with the rest of the sentence structure. When the semantic information is not available, it is more difficult to predict that the decisions are compatible with the input provided in syntactic form.

We modified the input specification language for FUF/SURGE to account for these problems. We added features that indicate the ordering of circumstantials in the output. Ordering of circumstantials can easily be derived from their ordering in the input. Thus, we label circumstantials with the features front-i (i-th circumstantial at the front of the sentence) and end-i (i-th circumstantial at the end), where i indicates the relative ordering of the circumstantial within the clause.

In addition, if possible, when mapping input phrases to a SURGE syntactic input, the sentence planner tries to determine the semantic type of a circumstantial by looking up the preposition (for example: "after" indicates a "time" circumstantial). This allows FUF/SURGE to map the syntactic category of the circumstantial to the semantic and syntactic features expected by SURGE. However, in cases where the preposition is ambiguous (e.g., "in" can indicate "time" or "location") the generator must rely solely on ordering circumstantials based on the ordering found in the input.

We have modified SURGE to accept this type of input: in all places where SURGE checks the semantic type of the circumstantial before making choices, we verified that the absence of the corresponding input feature would not lead to an inappropriate default being selected. In summary, this new application for syntactic realization highlights the need for supporting hybrid inputs of variable abstraction levels. The implementation benefited from the bidirectional nature of FUF unification in the handling of hybrid constraints and required little change to the existing SURGE grammar. While we used circumstantials to illustrate the issues, we also handled revision for a variety of other categories in the same manner.

5 Evaluation

Evaluation of multi-document summarization is difficult.
First, we have not yet found an existing collection of human-written summaries of multiple documents which could serve as a gold standard. We have begun a joint project with the Columbia Journalism School which will provide such data in the future. Second, methods used for evaluation of extraction-based systems are not applicable for a system which involves text regeneration. Finally, the manual effort needed to develop test beds and to judge system output is far more extensive than for single document summarization; consider that a human judge would have to read many input articles (our largest test set contained 27 input articles) to rate the validity of a summary.

Consequently, the evaluation that we performed to date is limited. We performed a quantitative evaluation of our content-selection component. In order to prevent noisy input from the theme construction component from skewing the evaluation, we manually constructed 26 themes, each containing 4 sentences on average. Far more training data is needed to tune the generation portion. While we have tuned the system to perform with minor errors on the manual set of themes we have created (the missing article in the fourth sentence of the summary in Figure 1 is an example), we need more robust input data from the theme construction component, which is still under development, to train the generator before beginning large scale testing. One problem in improving output is determining how to recover from errors in tools used in early stages of the process, such as the tagger and the parser.

5.1 Intersection Component

The evaluation task for the content selection stage is to measure how well we identify common phrases throughout multiple sentences. Our algorithm was compared against intersections extracted by human judges from each theme, producing 39 sentence-level predicate-argument structures. Our intersection algorithm identified 29 (74%) predicate-argument structures and was able to identify correctly 69% of the subjects, 74% of the main verbs, and 65% of the other constituents in our list of model predicate-argument structures. We present system accuracy separately for each category, since identifying a verb or a subject is, in most cases, more important than identifying other sentence constituents.

6 Conclusions and Future Work

In this paper, we presented an implemented algorithm for multi-document summarization which moves beyond the sentence extraction paradigm. Assuming a set of similar sentences as input, extracted from multiple documents on the same event (McKeown et al., 1999; Eskin et al., 1999), our system identifies common phrases across sentences and uses language generation to reformulate them as a coherent summary. The use of generation to merge similar information is a new approach that significantly improves the quality of the resulting summaries, reducing repetition and increasing fluency.

The system we have developed serves as a point of departure for research in a variety of directions. First is the need to use learning techniques to identify paraphrasing patterns in corpus data. As a first pass, we found paraphrasing rules manually. This initial set might allow us to automatically identify more rules and increase the performance of our comparison algorithm. From the generation side, our main goal is to make the generated summary more concise, primarily by combining clauses together.
We will be investigating what factors influence the combination process and how they can be computed from input articles. Part of combination will involve increasing coherence of the generated text through the use of connectives, anaphora or lexical relations (Jing, 1999).

One interesting problem for future work is the question of how much context to include from a sentence from which an intersected phrase is drawn. Currently, we include no context, but in some cases context is crucial even though it is not a part of the intersection. This is the case, for example, when the context negates, or denies, the embedded sub-clause which matches a sub-clause in another negating context. In such cases, the resulting summary is actually false. This occurs just once in our test cases, but it is a serious error. Our work will characterize the types of contextual information that should be retained and will develop algorithms for the case of negation, among others.

Acknowledgments

We would like to thank Yael Dahan-Netzer for her help with SURGE. This material is based upon work supported by the National Science Foundation under grant No. IRI-96-1879. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

References

James Allan, Jaime Carbonell, George Doddington, Jon Yamron, and Y. Yang. 1998. Topic detection and tracking pilot study: Final report. In Proceedings of the Broadcast News Understanding and Transcription Workshop, pages 194-218.
Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the ACL Workshop on Intelligent Scalable Text Summarization, pages 10-17, Madrid, Spain, August. ACL.
Jaime Carbonell and Jade Goldstein. 1998. The use of MMR, diversity-based reranking for reordering documents and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, Melbourne, Australia, August.
Michael Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, California.
Mark Dras. 1997. Reluctant paraphrase: Textual restructuring under an optimisation model. In Proceedings of PACLING97, pages 98-104, Ohme, Japan.
Michael Elhadad. 1993. Using Argumentation to Control Lexical Choice: A Functional Unification Implementation. Ph.D. thesis, Department of Computer Science, Columbia University, New York.
Eleazar Eskin, Judith Klavans, and Vasileios Hatzivassiloglou. 1999. Detecting similarity by applying learning over indicators. Submitted.
Lidija Iordanskaja, Richard Kittredge, and Alain Polguere. 1991. Natural Language Generation in Artificial Intelligence and Computational Linguistics, chapter 11. Kluwer Academic Publishers.
Christian Jacquemin, Judith L. Klavans, and Evelyne Tzoukermann. 1997. Expansion of multi-word terms for indexing and retrieval using morphology and syntax. In Proceedings of the 35th Annual Meeting of the ACL, pages 24-31, Madrid, Spain, July. ACL.
Hongyan Jing. 1999. Summary generation through intelligent cutting and pasting of the input document. PhD thesis proposal.
Richard Kittredge and Igor A. Mel'čuk. 1983. Towards a computable model of meaning-text relations within a natural sublanguage.
In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), pages 657-659, Karlsruhe, West Germany, August.
Chin-Yew Lin and Eduard Hovy. 1997. Identifying topics by position. In Proceedings of the 5th ACL Conference on Applied Natural Language Processing, pages 283-290, Washington, D.C., April.
Inderjeet Mani and Eric Bloedorn. 1997. Multi-document summarization by graph search and matching. In Proceedings of the Fourteenth National Conference on Artificial Intelligence (AAAI-97), pages 622-628, Providence, Rhode Island. AAAI.
Daniel Marcu. 1997. From discourse structures to text summaries. In Proceedings of the ACL Workshop on Intelligent Scalable Text Summarization, pages 82-88, Madrid, Spain, August. ACL.
Kathleen R. McKeown, Judith Klavans, Vasileios Hatzivassiloglou, Regina Barzilay, and Eleazar Eskin. 1999. Towards multi-document summarization by reformulation: Progress and prospects. Submitted.
Dragomir R. Radev and Kathleen R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):469-500, September.
Jacques Robin. 1994. Revision-Based Generation of Natural Language Summaries Providing Historical Background: Corpus-Based Analysis, Design, Implementation, and Evaluation. Ph.D. thesis, Department of Computer Science, Columbia University, NY.
Gerald Salton, James Allan, Chris Buckley, and Amit Singhal. 1991. Automatic analysis, theme generation, and summarization of machine-readable texts. Science, 264:1421-1426, June.
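The temporal ordering strategy of Section 3.3 above is easy to prototype: stamp each theme with the earliest publication date among its sentences, sort, and emit a marker when consecutive stamps are more than two days apart. The sketch below is our own illustration; the input encoding and the bracketed marker format are assumptions, not the system's actual output format:

    from datetime import timedelta

    def order_phrases(themes, gap=timedelta(days=2)):
        """themes: list of (phrase, dates) pairs, where dates are the
        publication dates (datetime.date) of the theme's sentences."""
        stamped = sorted((min(dates), phrase) for phrase, dates in themes)
        ordered, prev = [], None
        for stamp, phrase in stamped:
            if prev is not None and stamp - prev > gap:
                # significant time gap between events: emit an explicit marker
                ordered.append("[time marker: %s]" % stamp.strftime("%m/%d"))
            ordered.append(phrase)
            prev = stamp
        return ordered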
1999
71
Improving Summaries by Revising Them

Inderjeet Mani, Barbara Gates, and Eric Bloedorn
The MITRE Corporation, 11493 Sunset Hills Rd., Reston, VA 22090, USA
{imani,blgates,bloedorn}@mitre.org

Abstract

This paper describes a program which revises a draft text by aggregating together descriptions of discourse entities, in addition to deleting extraneous information. In contrast to knowledge-rich sentence aggregation approaches explored in the past, this approach exploits statistical parsing and robust coreference detection. In an evaluation involving revision of topic-related summaries using informativeness measures from the TIPSTER SUMMAC evaluation, the results show gains in informativeness without compromising readability.

1 Introduction

Writing improves with revision. Authors are familiar with the process of condensing a long paper into a shorter one: this is an iterative process, with the results improved over successive drafts. Professional abstractors carry out substantial revision and editing of abstracts (Cremmins 1996). We therefore expect revision to be useful in automatic text summarization. Prior research exploring the use of revision in summarization, e.g., (Gabriel 1988), (Robin 1994), (McKeown et al. 1995), has focused mainly on structured data as the input. Here, we examine the use of revision in summarization of text input.

First, we review some summarization terminology. Summarization can be viewed as a text-to-text reduction operation involving three main condensation operations: selection of salient portions of the text, aggregation of information from different portions of the text, and abstraction of specific information with more general information (Mani and Maybury 1999). In revising draft summaries, these condensation operations, as well as stylistic rewording of sentences, play an important role. Summaries can be used to indicate what topics are addressed in the source text, and thus can be used to alert the user as to the source content (the indicative function). Summaries can also be used to cover the concepts in the source text to the extent possible given the compression requirements for the summary (the informative function). Summaries can be tailored to a reader's interests and expertise, yielding topic-related summaries, or they can be aimed at a particular, usually broad, readership community, as in the case of (so-called) generic summaries. Revision here applies to generic and topic-related informative summaries, intended for publishing and dissemination.

Our approach to revision is to construct an initial draft summary of a source text and then to add to the draft additional background information. Rather than concatenate material in the draft (as surface-oriented, sentence extraction summarizers do), information in the draft is combined and excised based on revision rules involving aggregation (Dalianis and Hovy 1996) and elimination operations. Elimination can increase the amount of compression (summary length/source length) available, while aggregation can potentially gather and draw in relevant background information, in the form of descriptions of discourse entities from different parts of the source. We therefore hypothesize that these operations can result in packing in more information per unit compression than possible by concatenation. Rather than opportunistically adding as much background information as can fit in the available compression, as in (Robin 1994), our approach adds background information from the source text to the draft based on an information weighting function.
Rather than opportunistically adding as much background information that can fit in the available compression, as in (Robin 1994), our approach adds background informa- tion from the source text to the draft based on an information weighting function. Our revision approach assumes input sen- tences are represented as syntactic trees whose 558 nodes are annotated with coreference informa- tion. In order to provide open-domain cover- age the approach does not assume a meaning- level representation of each sentence, and so, un- like many generation systems, the system does not represent and reason about what is being said 1. Meaning-dependent revision operations are restricted to situations where it is clear from coreference that the same entity is being talked about. There are several criteria our revision model needs to satisfy. The final draft needs to be informative, coherent, and grammatically well- formed. Informativeness is explored in Sec- tion 4.2. We can also strive to guarantee, based on our revision rule set, that each revision will be syntactically well-formed. Regarding coher- ence, revision alters rhetorical structure in a way which can produce disfiuencies. As rhetori- cal structure is hard to extract from the source 2, our program instead uses coreference to guide the revision, and attempts to patch the coher- ence by adjusting references in revised drafts. 2 The Revision Program The summary revision program takes as input a source document, a draft summary specifi- cation, and a target compression rate. Using revision rules, it generates a revised summary draft whose compression rate is no more than above the target compression rate. The initial draft summary (and background) are specified in terms of a task-dependent weighting function which indicates the relative importance of each of the source document sentences. The program repeatedly selects the highest weighted sentence from the source and adds it to the initial draft until the given compression percentage of the source has been extracted, rounded to the near- est sentence. Next, for each rule in the sequence of revision rules, the program repeatedly applies the rule until it can no longer be applied. Each rule application results in a revised draft. The program selects sentences for rule application by giving preference to higher weighted sentences. 1Note that professional abstractors do not attempt to fully "understand" the text - often extremely technical material, but use surface-level features as above as well as the overall discourse structure of the text (Cremmins 1996). 2However, recent progress on this problem (Marcu 1997) is encouraging. A unary rule applies to a single sentence. A bi- nary rule applies to a pair of sentences, at least one of which must be in the draft, and where the first sentence precedes the second in the input. Control over sentence complexity is imposed by failing rule application when the draft sentence is too long, the parse tree is too deep 3, or if more than two relative clauses would be stacked to- gether. The program terminates when there are no more rules to apply or when the revised draft exceeds the required compression rate by more than 5. The syntactic structure of each source sen- tence is extracted using Apple Pie 7.2 (Sekine 1998), a statistical parser trained on Penn Tree- bank data. It was evaluated by (Sekine 1998) as having 79% F-score accuracy (parseval) on short sentences (less than 40 words) from the Treebank. 
The syntactic structure of each source sentence is extracted using Apple Pie 7.2 (Sekine 1998), a statistical parser trained on Penn Treebank data. It was evaluated by (Sekine 1998) as having 79% F-score accuracy (parseval) on short sentences (less than 40 words) from the Treebank. An informal assessment we made of the accuracy of the parser (based on intuitive judgments) on our own data sets of news articles suggests about 66% of the parses were acceptable, with almost half of the remaining parsing errors being due to part-of-speech tagging errors, many of which could be fixed by preprocessing the text. To establish coreference between proper names, named entities are extracted from the document, along with coreference relations, using SRA's NameTag 2.0 (Krupka 1995), a MUC-6 fielded system. In addition, we implemented our own coreference extension: a singular definite NP (e.g., beginning with "the", and not marked as a proper name) is marked by our program as coreferential (i.e., in the same coreference equivalence class) with the last singular definite or singular indefinite atomic NP with the same head, provided they are within a distance γ of each other. On a corpus of 90 documents, drawn from the TIPSTER evaluation described in Section 4.1 below, this coreference extension scored 94% precision (470 valid coreference classes/501 total coreference classes) on definite NP coreference. Most of the errors were caused by different sequences of words between the determiner and the noun phrase head word (e.g., "the factory" -- "the cramped five-story pre-1915 factory" is OK, but "the virus program" -- "the graduate computer science program" isn't). Also, "he" (likewise "she") is marked, subject to γ, as coreferential with the last person name mentioned, with gender agreement enforced when the person's first name's gender is known (from NameTag's list of common first names)⁴.

⁴However, this very naive method was excluded from ...

rule-name: rel-clause-intro-which-1
patterns:
  ?X1            # first sentence pattern
  ?Y1 ?Y2 ?Y3    # second sentence pattern
tests:
  label-NP ?X1 ;
  not entity-class ?X1 person ;
  label-S ?Y1 ; root ?Y1 ;
  label-NP ?Y2 ; label-VP ?Y3 ;
  adjacent-sibling ?Y2 ?Y3 ;
  parent-child ?Y1 ?Y2 ;
  parent-child ?Y1 ?Y3 ;
  coref ?X1 ?Y2
actions:
  subs ?X1 (NP ?X1 (, -COMMA-)
            (SBAR (WHNP (WP which)) (S ?Y3))
            (, -COMMA-)) ;
  elim-root-of ?Y1   # removes second sentence

Figure 2: Relative Clause Introduction Rule showing Aggregation and Elimination operations.

3 Revision Rules

The revision rules carry out three types of operations. Elimination operations eliminate constituents from a sentence. These include elimination of parentheticals, and of sentence-initial PPs and adverbial phrases satisfying lexical tests (such as "In particular,", "Accordingly,", "In conclusion,", etc.)⁵.

Aggregation operations combine constituents from two sentences, at least one of which must be a sentence in the draft, into a new constituent which is inserted into the draft sentence. The basis for combining sentences is that of referential identity: if there is an NP in sentence i which is coreferential with an NP in sentence j, then sentences i and j are candidates for aggregation. The most common form of aggregation is expressed as tree-adjunction (Joshi 1998) (Dras 1999). Figures 1 and 2 show a relative clause introduction rule which turns a VP of a (non-embedded) sentence whose subject is coreferential with an NP of an earlier (draft) sentence into a relative clause modifier of the draft sentence NP.
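To make the rule in Figure 2 concrete, here is a schematic Python rendering of its aggregation action: the donor sentence's VP is adjoined into the draft NP as a which-relative, and the donor sentence is then eliminated. The tree encoding is our own assumption for illustration, not the system's:

    class Tree:
        def __init__(self, label, children, parent=None):
            self.label, self.children, self.parent = label, children, parent
            for c in children:
                if isinstance(c, Tree):
                    c.parent = self

    def rel_clause_intro_which(draft_np, second_root):
        """Aggregation action of Figure 2: NP -> (NP , which VP ,), then
        eliminate the donor sentence. Plain strings stand in for leaf tokens."""
        subj, vp = second_root.children        # assumes a binary [NP VP] clause
        rel = Tree("SBAR", [Tree("WHNP", ["which"]), Tree("S", [vp])])
        # splice the new NP in place of the draft NP (subs ?X1 ...)
        parent = draft_np.parent
        new_np = Tree("NP", [draft_np, ",", rel, ","])
        parent.children[parent.children.index(draft_np)] = new_np
        new_np.parent = parent
        # remove the donor sentence from its tree (elim-root-of ?Y1)
        if second_root.parent is not None:
            second_root.parent.children.remove(second_root)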
Other appositive phrase insertion rules include copying and inserting nonrestrictive relative clause modifiers (e.g., "Smith, who...,"), appositive modifiers of proper names (e.g., "Peter G. Neumann, a computer security expert familiar with the case,..."), and proper name appositive modifiers of definite NPs (e.g., "The network, named ARPANET, is operated by ..").

Smoothing operations apply to a single sentence, performing transformations so as to arrive at more compact, stylistically preferred sentences. There are two types of smoothing. Reduction operations simplify coordinated constituents. Ellipsis rules include subject ellipsis, which lowers the coordination from a pair of clauses with coreferential subjects to their VPs (e.g., "The rogue computer program destroyed files over a five month period and the program infected close to 100 computers at NASA facilities" ⇒ "The rogue computer program destroyed files over a five month period and infected close to 100 computers at NASA facilities"). It usually applies to the result of an aggregation rule which conjoins clauses whose subjects are coreferential. Relative clause reduction includes rules which apply to clauses whose VPs begin with "be" (e.g., "which is" is deleted) or "have" (e.g., "which have" ⇒ "with"), as well as, for other verbs, a rule deleting the relative pronoun and replacing the verb with its present participle (i.e., "which V" ⇒ "V+ing"). Coordination rules include relative clause coordination. Reference adjustment operations fix up the results of other revision operations in order to improve discourse-level coherence, and as a result, they are run last⁶. They include substitution of a proper name with a name alias if the name is mentioned earlier, expansion of a pronoun with a coreferential proper name in a parenthetical ("pronoun expansion"), and replacement of a definite NP with a coreferential indefinite if the definite occurs without a prior indefinite ("indefinitization").

⁵Such lexical tests help avoid misrepresenting the meaning of the sentence.
⁶Such operations have been investigated earlier by (Robin 1994).

[Figure 1 (diagram): Relative Clause Introduction showing tree NP2 being adjoined into tree S. A draft sentence tree (S over NP and VP) and another sentence tree (S1 over NP1 and VP1), with NP2 coreferential across them, yield a result sentence in which VP1 is adjoined under the draft NP as an SBAR relative clause.]

4 Evaluation

Evaluation of text summarization and other such NLP technologies, where there may be many acceptable outputs, is a difficult task. Recently, the U.S. government conducted a large-scale evaluation of summarization systems as part of its TIPSTER text processing program (Mani et al. 1999), which included both an extrinsic (relevance assessment) evaluation and an intrinsic (coverage of key ideas) evaluation. The test set used in the latter (Q&A) evaluation, along with several automatically scored measures of informativeness, has been reused in evaluating the informativeness of our revision component.

4.1 Background: TIPSTER Q&A Evaluation

In this Q&A evaluation, the summarization system, given a document and a topic, needed to produce an informative, topic-related summary that contained the correct answers found in that document to a set of topic-related questions. These questions covered "obligatory" information that has to be provided in any document judged relevant to the topic. The topics chosen (3 in all) were drawn from the TREC (Harman and Voorhees 1996) data sets. For each topic, 30 relevant TREC documents were chosen as the source texts for topic-related summarization.
The principal tasks of each Q&A evaluator were to prepare the questions and answer keys and to score the system summaries. To construct the answer key, each evaluator marked off any passages in the text that provided an answer to a question (an example is shown in Table 1). Two kinds of scoring were carried out. In the first, a manual method, the answer to each question was judged Correct, Partially Correct, or Missing based on guidelines involving a human comparison of the summary of a document against the set of tagged passages for that question in the answer key for that document. The second method of scoring was an automatic method. This program⁷ took as input a key file and a summary to be scored, and returned an informativeness score on four different metrics. The key file includes tags identifying passages in the file which answer certain questions. The scoring uses the overlap measures shown in Table 2⁸. The automatically computed V4 through V7 informativeness scores were strongly correlated with the human-evaluated scores (Pearson r > .97, p < 0.0001). Given this correlation, we decided to use these informativeness measures.

4.2 Revision Evaluation: Informativeness

To evaluate the revised summaries, we first converted each summary into a weighting function which scored each full-text sentence in the summary's source in terms of its similarity to the most similar summary sentence. The weight of a source document sentence s given a summary is the match score of s's best-matching summary sentence, where the match score is the percentage of content word occurrences in s that are also found in the summary sentence. Thus, we constructed an idealized model of each summary as a sentence extraction function. Since some of the participants truncated and occasionally mangled the source text (in addition, Penn carried out pronoun expansion), we wanted to avoid having to parse and apply revision rules to such relatively ill-formed material.

⁷The program was reimplemented by us for use in the revision evaluation.
⁸Passage matching here involves a sequential match with stop words and punctuation removed.

Title: Computer Security
Description: Identify instances of illegal entry into sensitive computer networks by nonauthorized personnel.
Narrative: Illegal entry into sensitive computer networks is a serious and potentially menacing problem. Both 'hackers' and foreign agents have been known to acquire unauthorized entry into various networks. Items relative to this subject would include but not be limited to instances of illegally entering networks containing information of a sensitive nature to specific countries, such as defense or technology information, international banking, etc. Items of a personal nature (e.g. credit card fraud, changing of college test scores) should not be considered relevant.

Questions
1) Who is the known or suspected hacker accessing a sensitive computer or computer network?
2) How is the hacking accomplished or putatively achieved?
3) Who is the apparent target of the hacker?
4) What did the hacker accomplish once the violation occurred? What was the purpose in performing the violation?
5) What is the time period over which the breakins were occurring?

As a federal grand jury decides whether he should be prosecuted, <Q1>a graduate student</Q1> linked to a ''virus'' that disrupted computers nationwide <Q5>last month</Q5> has been teaching his lawyer about the technical subject and turning down offers for his life story ..... No charges have been filed against <Q1>Morris</Q1>, who reportedly told friends that he designed the virus that temporarily clogged about <Q3>6,000 university and military computers</Q3> <Q2>linked to the Pentagon's Arpanet network</Q2> ......

Table 1: Q&A Topic 258, topic-related questions, and part of a relevant source document showing answer key annotations.
Overlap Metric  Definition
V4              full credit if the text spans for all tagged key passages are found in their entirety in the summary
V5              full credit if the text spans for all tagged key passages are found in their entirety in the summary; half credit if the text spans for all tagged key passages are found in some combination of full or truncated form in the summary
V6              full credit if the text spans for all tagged key passages are found in some combination of full or truncated form in the summary
V7              percentage of credit assigned that is commensurate with the extent to which the text spans for tagged key passages are present in the summary

Table 2: Informativeness measures for Automatic Scoring of each question that has an answer according to the key.

Party          FOG Before  FOG After  Kincaid Before  Kincaid After
CGI/CMU        16.49       15.50      13.22           12.23
Cornell/SabIR  15.51       15.08      12.15           11.71
GE             15.43       15.14      12.13           11.87
ISI            19.57       17.94      16.18           14.51
NMSU           16.54       15.52      13.32           12.30
SRA            15.59       15.29      12.26           11.99
UPenn          16.29       16.21      12.93           12.83
Mean           16.48       15.82      13.15           12.51

Table 3: Readability of Summaries Before (Original Summary) and After Revision (A+E). Overall, both FOG and Kincaid scores show a slight but statistically significant drop on revision (p < 0.05).

Figure 3: Gains in Compression-Normalized Informativeness of revised summaries compared to initial drafts. E = elimination, A = aggregation. A, E, and A+E are shown in the order V4, V5, V6, and V7. (Bars show the proportion of summaries that Lose, Maintain, or Win.)

<s1> Researchers today tried to trace a "virus" that infected computer systems nationwide, <Q4>slowing machines in universities, a NASA and nuclear weapons lab and other federal research centers linked by a Defense Department computer network.</Q4> <s3> Authorities said the virus, which <FROM S16> <Q3>the virus infected only unclassified computers</Q3> and <FROM S15> <Q3>the virus affected the unclassified, non-secured computer systems</Q3> (and which <FROM S19> <Q4>the virus was "mainly just slowing down systems) and slowing data",</Q4> apparently <Q4>destroyed no data but temporarily halted some research.</Q4> <s14> The computer problem also was discovered late Wednesday at the <Q3>Lawrence Livermore National Laboratory in Livermore, Calif.</Q3> <s15> <s20> "the developer was clearly a very high order hacker,", <FROM S25> <Q1>a graduate student</Q1> <Q2>who made making a programming error in designing the virus, causing the program to replicate faster than expected</Q2> or computer buff, said John McAfee, chairman of the Computer Virus Industry Association in Santa Clara, Calif. <s24> The Times reported today that the anonymous caller an anonymous caller to the paper said his associate was responsible for the attack and had meant it to be harmless.

Figure 4: A revised summary specified in terms of an original draft (plain text) with added (boldface) and deleted (italics) spans. Sentence <s> and Answer Key <Q> tags are overlaid.

The weight of a source document sentence s given a summary is the match score of s's best-matching summary sentence, where the match score is the percentage of content word occurrences in s that are also found in the summary sentence. Thus, we constructed an idealized model of each summary as a sentence extraction function. Since some of the participants truncated and occasionally mangled the source text (in addition, Penn carried out pronoun expansion), we wanted to avoid having to parse and apply revision rules to such relatively ill-formed material.
This idealization is highly appropriate, for each of the summarizers considered9 did carry out sentence extraction; in addition, it helps level the playing field, avoiding penalization of individual summarizers simply because we didn't cater to the particular form of their summary. Each summary was revised by calling the revision program with the full-text source, the original compression rate of the summary, and the summary weighting function (i.e., with the weight for each source sentence). The 630 revised summaries (3 topics x 30 documents per topic x 7 participant summaries per document) were then scored against the answer keys using the overlap measures above. The documents consisted of AP, Wall Street Journal, and Financial Times news articles from the TREC (Harman and Voorhees 1996) collection.

9TextWise, which extracted named entities rather than passages, was excluded.

The rules used in the system are very general, and were not modified for the evaluation except for turning off most of the reference adjustment rules, as we wished to evaluate that component separately. Since the answer keys typically do not contain names of commentators, we wanted to focus the algorithm away from such names (otherwise, it would aggregate information around those commentators). As a result, special rules were written in the revision rule language to detect commentator names in reported speech ("X said that ..", "X said ...", ", said X..", etc.), and these names were added to a stoplist for use in entityhood and coreference tests during regular revision rule application.

Figure 3 shows the percentage of losses, maintains, and wins in informativeness against the initial draft (i.e., the result of applying the compression to the sentence weighting function). Informativeness using V7 is measured by V710 normalized for compression as:

    nV7 = V7 * (1 - sl/s0)    (1)

where sl is the summary length and s0 is the source length. This initial draft is in itself not as informative as the original summary: in all cases except for Penn on 257, the initial draft either maintains or loses informativeness compared to the original summary.

10V7 computes for each question the percentage of its answer passages completely covered by the summary. This normalization is extended similarly for V4 through V6.

As Figure 3 reveals (e.g., for nV7), revising the initial draft using elimination rules only (E) results in summaries which are less informative than the initial draft 65% of the time, suggesting that these rules are removing informative material. Revising the initial draft using aggregation rules alone (A), by contrast, results in more informative summaries 47% of the time, and equally informative summaries another 13% of the time. This is due to aggregation folding additional informative material into the initial draft when it can. Inspection of the output summaries, an example of which is shown in Figure 4, confirms the folding-in behavior of aggregation. Finally, revising the initial draft using both aggregation and elimination rules (A+E) does no more than maintain the informativeness of the initial draft, suggesting A and E are canceling each other out. The same trend is observed for nV4 through nV6, confirming that the relative gain in informativeness due to aggregation is robust across a variety of (closely related) measures.
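A minimal Python sketch of the two ingredients just described, reusing the content_words helper from the earlier sketch; the exact tokenization and tie-breaking of the real system are assumptions here:

def sentence_weight(source_sentence, summary_sentences):
    # Weight of a full-text source sentence: the match score of its
    # best-matching summary sentence, i.e., the percentage of content
    # word occurrences in the source sentence that also occur in that
    # summary sentence.
    src = content_words(source_sentence)
    if not src:
        return 0.0
    best = 0.0
    for t in summary_sentences:
        bag = set(content_words(t))
        best = max(best, sum(w in bag for w in src) / len(src))
    return best

def nv7(v7_score, summary_length, source_length):
    # Equation (1): compression-normalized informativeness.
    return v7_score * (1.0 - summary_length / source_length)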
Of course, if the revised summaries were instead radically different in wording from the original drafts, such informativeness measures would, perhaps, fall short. It is also worth noting that the impact of aggregation is modulated by the current control strategy; we don't know what the upper bound is on how well revision could do given other control regimes. Overall, then, while the results are hardly dramatic, they are certainly encouraging.11

11Similar results hold while using a variety of other compression normalization metrics.

4.3 Revision Evaluation: Readability

Inspection of the results of revision indicates that the syntactic well-formedness revision criterion is satisfied to a very great extent. Improper extraction from coordinated NPs is an issue (see Figure 4), but we expect additional revision rules to handle such cases. Coherence disfluencies do occur; for example, since we don't resolve possessive pronouns or plural definites, we can get infelicitous revisions like "A computer virus, which entered *their computers through ARPANET, infected systems from MIT." Other limitations in definite NP coreference can and do result in infelicitous reference adjustments. For one thing, we don't link definites to proper name antecedents, resulting in inappropriate indefinitization (e.g., "Bill Gates ... *A computer tycoon"). In addition, the "same head word" test doesn't of course address inferential relationships between the definite NP and its antecedent (even when the antecedent is explicitly mentioned), again resulting in inappropriate indefinitization (e.g., "The program ... *a developer", and "The developer ... *An anonymous caller said *a very high order hacker was a graduate student").

To measure fluency without conducting an elaborate experiment involving human judgments, we fell back on some extremely coarse measures based on word and sentence length computed by the (gnu) unix program style (Cherry 1981). The FOG index sums the average sentence length with the percentage of words over 3 syllables, with a "grade" level over 12 indicating difficulty for the average reader. The Kincaid index, intended for technical text, computes a weighted sum of sentence length and word length. As can be seen from Table 3, there is a slight but significant lowering of scores on both metrics, revealing that according to these metrics revision is not resulting in more complex text. This suggests that elimination rather than aggregation is mainly responsible for this.

5 Conclusion

This paper demonstrates that recent advances in information extraction and robust parsing can be exploited effectively in an open-domain model of revision inspired by work in natural language generation. In the future, instead of relying on adjustment rules for coherence, it may be useful to incorporate a level of text planning. We also hope to enrich the background information by merging information from multiple text and structured data sources.

References

Cherry, L.L., and Vesterman, W. 1981. Writing Tools: The STYLE and DICTION programs. Computer Science Technical Report 91, Bell Laboratories, Murray Hill, N.J.

Cremmins, E. T. 1996. The Art of Abstracting. Information Resources Press.

Dalianis, H., and Hovy, E. 1996. Aggregation in Natural Language Generation. In Zock, M., and Adorni, G., eds., Trends in Natural Language Generation: an Artificial Intelligence Perspective, pages 88-105. Lecture Notes in Artificial Intelligence, Number 1036, Springer Verlag, Berlin.
Dras, M. 1999. Tree Adjoining Grammar and the Reluctant Paraphrasing of Text. Ph.D. Thesis, Macquarie University, Australia.

Gabriel, R. 1988. Deliberate Writing. In McDonald, D.D., and Bolc, L., eds., Natural Language Generation Systems, Springer-Verlag, NY.

Harman, D.K. and E.M. Voorhees. 1996. The Fifth Text REtrieval Conference (TREC-5). National Institute of Standards and Technology NIST SP 500-238.

Joshi, A. K. and Schabes, Y. 1996. Tree-Adjoining Grammars. In Rozenberg, G., and Salomaa, A., eds., Handbook of Formal Languages, Vol. 3, 69-123. Springer-Verlag, NY.

Krupka, G. 1995. SRA: Description of the SRA System as Used for MUC-6. Proceedings of the Sixth Message Understanding Conference (MUC-6), Columbia, Maryland, November 1995.

Marcu, D. 1997. From discourse structures to text summaries. In Mani, I. and Maybury, M., eds., Proceedings of the ACL/EACL'97 Workshop on Intelligent Scalable Text Summarization.

Mani, I. and M. Maybury, eds. 1999. Advances in Automatic Text Summarization. MIT Press.

Mani, I., Firmin, T., House, D., Klein, G., Hirschman, L., and Sundheim, B. 1999. The TIPSTER SUMMAC Text Summarization Evaluation. Proceedings of EACL'99, Bergen, Norway, June 8-12, 1999.

McKeown, K., J. Robin, and K. Kukich. 1995. Generating Concise Natural Language Summaries. Information Processing and Management, 31, 5, 703-733.

Robin, J. 1994. Revision-based generation of natural language summaries providing historical background: corpus-based analysis, design and implementation. Ph.D. Thesis, Columbia University.

Sekine, S. 1998. Corpus-based Parsing and Sublanguage Studies. Ph.D. Dissertation, New York University.
Designing a Task-Based Evaluation Methodology for a Spoken Machine Translation System

Kavita Thomas
Language Technologies Institute
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213, USA
kavita@cs.cmu.edu

Abstract

In this paper, I discuss issues pertinent to the design of a task-based evaluation methodology for a spoken machine translation (MT) system processing human to human communication rather than human to machine communication. I claim that system-mediated human to human communication requires new evaluation criteria and metrics based on goal complexity and the speaker's prioritization of goals.

1 Introduction

Task-based evaluations for spoken language systems focus on evaluating whether the speaker's task is achieved, rather than evaluating utterance translation accuracy or other aspects of system performance. Our MT project focuses on the travel reservation domain and facilitates on-line translation of speech between clients and travel agents arranging travel plans. Our prior evaluations (Gates et al., 1996) have focused on end-to-end translation accuracy at the utterance level (i.e., the fraction of utterances translated perfectly, acceptably, and unacceptably). While this method of evaluation conveys translation accuracy, it does not give any information about how many of the client's travel arrangement goals have been conveyed, nor does it take into account the complexity of the speaker's goals and task, or the priority that they assign to their goals; for example, the same end-to-end score for two dialogues may hide the fact that in one dialogue the speakers were able to communicate their most important goals while in the other they were only able to communicate successfully the less important goals.

One common approach to evaluating spoken language systems focusing on human-machine dialogue is to compare system responses to correct reference answers; however, as discussed by (Walker et al., 1997), the set of reference answers for any particular user query is tied to the system's dialogue strategy. Evaluation methods independent of dialogue strategy have focused on measuring the extent to which systems for interactive problem solving aid users via log-file evaluations (Polifroni et al., 1992), quantifying repair attempts via turn correction ratio, tracking user detection and correction of system errors (Hirschman and Pao, 1993), and considering transaction success (Shriberg et al., 1992). (Danieli and Gerbino, 1995) measure the dialogue module's ability to recover from partial failures of recognition or understanding (i.e., implicit recovery) and inappropriate utterance ratio; (Simpson and Fraser, 1993) discuss applying turn correction ratio, transaction success, and contextual appropriateness to dialogue evaluations, and (Hirschman et al., 1990) discuss using task completion time as a black box evaluation metric.

Current literature on task-based evaluation methodologies for spoken language systems primarily focuses on human-computer interactions rather than system-mediated human-human interactions. For a multilingual MT system, speakers communicate via the system, which translates their responses and generates the output in the target language via speech synthesis.
Measuring solution quality (Sikorski and Allen, 1995), transaction success, or contextual appropriateness is meaningless, since we are not interested in measuring how efficient travel agents are in responding to clients' queries, but rather, how well the system conveys the speakers' goals. Likewise, task completion time will not capture task success for MT dialogues, since it is dependent on dialogue strategies and speaker styles. Task-based evaluation methodologies for MT systems must focus on whether goals are communicated, rather than whether they are achieved.

2 Goals of a Task-Based Evaluation Methodology for an MT System

The goal of a task-based evaluation for an MT system is to convey whether speakers' goals were translated correctly. An advantage of focusing on goal translation is that it allows us to compare dialogues where the speakers employ different dialogue strategies. In our project, we focus on three issues in goal communication: (1) distinction of goals based on subgoal complexity, (2) distinction of goals based on the speaker's prioritization, and (3) distinction of goals based on domain.

3 Prioritization of Goals

While we want to evaluate whether speakers' important goals are translated correctly, this is sometimes difficult to ascertain, since not only must the speaker's goals be concisely describable and circumscribable, but also they must not change while she is attempting to achieve her task. Speakers usually have a prioritization of goals that cannot be predicted in advance and which differs between speakers; for example, if one client wants to book a trip to Tokyo, it may be imperative for him to book the flight tickets at the least, while reserving rooms in a hotel might be of secondary importance, and finding out about sights in Tokyo might be of lowest priority. However, his goals could be prioritized in the opposite order, or could change if he finds one goal too difficult to communicate and abandons it in frustration.

If we insist on eschewing the unreliability issues inherent in asking the client about the priority of his goals after the dialogue has terminated (and he has perhaps forgotten his earlier priority assignment), we cannot rely on an invariant prioritization of goals across speakers or across a dialogue. The only way we can predict the speaker's goals at the time he is trying to communicate them is in cases where his goals are not communicated and he attempts to repair them. We can distinguish between cases in which goal communication succeeds or fails, and we can count the number of repair attempts in both cases. The insight is that speakers will attempt to repair higher priority goals more than lower priority goals, which they will abandon sooner.

The number of repair attempts per goal quantifies the speaker's priority per goal to some degree. We can capture this information in a simple metric that distinguishes between goals that eventually succeed or fail with at least one repair attempt. Goals that eventually succeed with tg repair attempts can be given a score of 1/tg, which has a maximum score of 1 when there is only one repair attempt, and decays to 0 as the number of repair attempts goes to infinity. Similarly, we can give a score of -(1 - 1/tg) to goals that are eventually abandoned with tg repair attempts; this has a maximum of 0 when there is only a single repair attempt and goes to -1 as tg goes to infinity.
So the overall dialogue score becomes the average over all goals of the difference between these two metrics, with a maximum score of 1 and a minimum score of -1.

    score(goal) =  1/tg            for a successful goal
                  -(1 - 1/tg)      for an unsuccessful goal    (1)

    score(dialogue) = (1 / n_goals) * SUM over goals of score(goal)    (2)

4 Complexity of Goals

Another factor to be considered is goal complexity; clearly we want to distinguish between dialogues with the same main goals but in which some have many subgoals while others have few subgoals with little elaboration. For instance, one traveller going to Tokyo may be satisfied with simply specifying his departure and arrival times for the outgoing and return laps of his flight, while another may have the additional subgoals of wanting a two-day stopover in London, vegetarian meals, and aisle seating in the non-smoking section. In the metric above, both goals and subgoals are treated in the same way (i.e., the sum over goals includes subgoals), and we are not weighting their scores any differently.

While many subgoals require that the main goal they fall under be communicated for them to be communicated, it is also true that for some speakers, communicating just the main goal and not the subgoal may be a communication failure. For example, if it is crucial for a speaker to get a stopover in London, even if his main goal (requesting a return flight from New York to Tokyo) is successfully communicated, he will view the communication attempt a failure unless the system communicates the stopover successfully also. On the other hand, communicating the subgoal (e.g., a stopover in London) without communicating the main goal is nonsensical - the travel agent will not know what to make of "a stopover in London" without the accompanying main goal requesting the flight to Tokyo.

However, even if two dialogues have the same goals and subgoals, the complexity of the translation task may differ; for example, if in one dialogue (A) the speaker communicates a single goal or subgoal per speaker turn, while in the other (B) the speaker communicates the goal and all its subgoals in the same speaker turn, it is clear that the dialogue in which the entire goal structure is conveyed in the same speaker turn will be the more difficult translation task. We need to be able to account for the average goal complexity per speaker turn in a dialogue and scale the above metric accordingly; if dialogues A and B have the same score according to the given metric, we should boost the score of B to reflect that it has required a more rigorous translation effort. A first attempt would be to simply multiply the score of the dialogue by the average subgoal complexity per main goal per speaker turn in the dialogue, where Nmg is the number of main goals in a speaker turn and Nsg is the number of subgoals. In the metric below, the average subgoal complexity is 1 for speaker turns in which there are no subgoals, and increases as the number of subgoals in the speaker turn increases.

    score'(dialogue) = score(dialogue) * (1 / n_spkturns) * SUM over spkturns of [(Nsg + Nmg) / Nmg]    (3)
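Equations (1)-(3) are straightforward to operationalize. The following Python sketch is an illustration, with the input encodings assumed rather than taken from the paper: goals is a list of (tg, succeeded) pairs, and turns is a list of (Nsg, Nmg) pairs per speaker turn.

def goal_score(tg, succeeded):
    # Equation (1): tg is the number of repair attempts (tg >= 1).
    return 1.0 / tg if succeeded else -(1.0 - 1.0 / tg)

def dialogue_score(goals):
    # Equation (2): average of the goal scores over all goals and subgoals.
    return sum(goal_score(tg, ok) for tg, ok in goals) / len(goals)

def scaled_dialogue_score(goals, turns):
    # Equation (3): scale by the average subgoal complexity per speaker
    # turn; the factor is 1 for turns with no subgoals and grows with the
    # number of subgoals per main goal.
    avg_complexity = sum((nsg + nmg) / nmg for nsg, nmg in turns) / len(turns)
    return dialogue_score(goals) * avg_complexity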
5 Our Task-Based Evaluation Methodology

Scoring a dialogue is a coding task; scorers will need to be able to distinguish goals and subgoals in the domain. We want to minimize training for scorers while maximizing agreement between them. To do so, we list a predefined set of main goals (e.g., making flight arrangements or hotel bookings) and group together all subgoals that pertain to these main goals in a two-level tree. Although this formalization sacrifices subgoal complexity, we are unable to determine this without predefining a subgoal hierarchy, and we want to avoid predefining subgoal priority, which is set by assigning a subgoal hierarchy. After familiarizing themselves with the set of main goals and their accompanying subgoals, scorers code a dialogue by distinguishing in a speaker turn between the main goals and subgoals, whether they are successfully communicated or not, and the number of repair attempts in successive speaker turns. Scorers must also indicate which domain each goal falls under; we distinguish goals as in-domain (i.e., referring to the travel-reservation domain), out-of-domain (i.e., unrelated to the task in any way), and cross-domain (i.e., discussing the weather, common polite phrases, accepting, negating, opening or closing the dialogue, or asking for repeats).

The distinction between domains is important in that we can separate in-domain goals from cross-domain goals; cross-domain goals often serve a meta-level purpose in the dialogue. We can thus evaluate performance over all goals while maintaining a clear performance measure for in-domain goals. Scores should be calculated separately based on domain, since this will indicate system performance more specifically, and provide a useful metric for grammar developers to compare subsequent and current domain scores for dialogues from a given scenario.

In a large scale evaluation, multiple pairs of speakers will be given the same scenario (i.e., a specific task to try and accomplish; e.g., flying to Frankfurt, arranging a stay there for 2 nights, sightseeing at the museums, then flying on to Tokyo); domain scores will then be calculated and averaged over all speakers.

Actual evaluation is performed on transcripts of dialogues labelled with information from system logs; this enables us to see the original utterance (human transcription) and evaluate the correctness of the target output. If we wish to, log-file evaluations also permit us to evaluate the system in a glass-box approach, evaluating individual system components separately (Simpson and Fraser, 1993).

6 Conclusions and Future Work

This work describes an initial attempt to account for some of the significant issues in a task-based evaluation methodology for an MT system. Our choice of metric reflects separate domain scores, factors in subgoal complexity, and normalizes all counts to allow for comparison among dialogues that differ in dialogue strategy, subgoal complexity, number of goals, and speaker-prioritization of goals. The proposed metric is a first attempt and describes work in progress; we have attempted to present the simplest possible metric as an initial approach.

There are many issues that need to be addressed; for instance, we do not take into account optimality of translations. Although we are interested in goal communication and not utterance translation quality, the disadvantage of the current approach is that our optimality measure is binary, and does not give any information about how well-phrased the translated text is. More significantly, we have not resolved whether to use metric (1) for both subgoals and goals together, or to score them separately.
The proposed metric does not reflect that communicating main goals may be essential to communicating their subgoals. It also does not account for the possible complexity introduced by multiple main goals per speaker turn. We also do not account for the possibility that in an unsuccessful dialogue, a speaker may become more frustrated as the dialogue proceeds, and her relative goal priorities may no longer be reflected in the number of repair attempts. We may also want to further distinguish in-domain scores based on sub-domain (e.g., flights, hotels, events). Perhaps most importantly, we still need to conduct a full-scale evaluation with the above metric with several scorers and speaker pairs across different versions of the system to be able to provide actual results.

7 Acknowledgements

I would like to thank my advisor Lori Levin, Alon Lavie, Monika Woszczyna, and Aleksandra Slavkovic for their help and suggestions with this work.

References

M.Danieli and E.Gerbino. 1995. Metrics for evaluating dialogue strategies in a spoken language system. In Proceedings of the 1995 AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, pages 34-39.

L.Hirschman, D.Dahl, D.P.McKay, L.M.Norton, M.C.Linebarger. 1990. Beyond class A: A proposal for automatic evaluation of discourse. In Proceedings of the Speech and Natural Language Workshop, pages 109-113.

L.Hirschman and C.Pao. 1993. The cost of errors in a spoken language system. In Proceedings of the Third European Conference on Speech Communication and Technology, pages 1419-1422.

J.Polifroni, L.Hirschman, S.Seneff, and V.Zue. 1992. Experiments in evaluating interactive spoken language systems. In Proceedings of the DARPA Speech and NL Workshop, pages 28-31.

E.Shriberg, E.Wade, and P.Price. 1992. Human-machine problem solving using spoken language systems (SLS): Factors affecting performance and user satisfaction. In Proceedings of the DARPA Speech and NL Workshop, pages 49-54.

T.Sikorski and J.Allen. 1995. A task-based evaluation of the TRAINS-95 dialogue system. Technical report, University of Rochester.

A.Simpson and N.A.Fraser. 1993. Black box and glass box evaluation of the SUNDIAL system. In Proceedings of the Third European Conference on Speech Communication and Technology, pages 1423-1426.

M.Walker, D.J.Litman, C.A.Kamm, and A.Abella. 1997. PARADISE: A framework for evaluating spoken dialogue agents. Technical Report TR 97.26.1, AT and T Technical Reports.

D.Gates, A.Lavie, L.Levin, A.Waibel, M.Gavalda, L.Mayfield, M.Woszczyna, P.Zhan. 1996. End-to-end Evaluation in JANUS: a Speech-to-speech Translation System. In Proceedings of the 12th European Conference on Artificial Intelligence, Workshop on Dialogue, Budapest, Hungary.
Robust, Finite-State Parsing for Spoken Language Understanding

Edward C. Kaiser
Center for Spoken Language Understanding
Oregon Graduate Institute
PO Box 91000
Portland OR 97291
kaiser@cse.ogi.edu

Abstract

Human understanding of spoken language appears to integrate the use of contextual expectations with acoustic level perception in a tightly-coupled, sequential fashion. Yet computer speech understanding systems typically pass the transcript produced by a speech recognizer into a natural language parser with no integration of acoustic and grammatical constraints. One reason for this is the complexity of implementing that integration. To address this issue we have created a robust, semantic parser as a single finite-state machine (FSM). As such, its run-time action is less complex than other robust parsers that are based on either chart or generalized left-right (GLR) architectures. Therefore, we believe it is ultimately more amenable to direct integration with a speech decoder.

1 Introduction

An important goal in speech processing is to extract meaningful information: in this, the task is understanding rather than transcription. For extracting meaning from spontaneous speech, full coverage grammars tend to be too brittle. In the 1992 DARPA ATIS task competition, CMU's Phoenix parser was the best scoring system (Issar and Ward, 1993). Phoenix operates in a loosely-coupled architecture on the 1-best transcript produced by the recognizer. Conceptually it is a semantic case-frame parser (Hayes et al., 1986). As such, it allows slots within a particular case-frame to be filled in any order, and allows out-of-grammar words between slots to be skipped over. Thus it can return partial parses -- as frames in which only some of the available slots have been filled.

Humans appear to perform robust understanding in a tightly-coupled fashion. They build incremental, partial analyses of an utterance as it is being spoken, in a way that helps them to meaningfully interpret the acoustic evidence. To move toward machine understanding systems that tightly-couple acoustic features and structural knowledge, researchers like Pereira and Wright (1997) have argued for the use of finite-state acceptors (FSAs) as an efficient means of integrating structural knowledge into the recognition process for limited domain tasks.

We have constructed a parser for spontaneous speech that is at once both robust and finite-state. It is called PROFER, for Predictive, RObust, Finite-state parsER. Currently PROFER accepts a transcript as input. We are modifying it to accept a word-graph as input. Our aim is to incorporate PROFER directly into a recognizer.

For example, using a grammar that defines sequences of numbers (each of which is less than ten thousand and greater than ninety-nine and contains the word "hundred"), inputs like the following string can be robustly parsed by PROFER:

Input: first I've got twenty ahhh thirty yaaaaaa thirty ohh wait no twenty twenty nine hundred two errr three ahhh four and then two hundred ninety uhhhhh let me be sure here yaaaa ninety seven and last is five oh seven uhhh I mean six

Parse-tree:
[fsType:number_type,
 hundred_fs: [decade:[twenty,nine],hundred,four],
 hundred_fs: [two,hundred,decade:[ninety,seven]],
 hundred_fs: [five,hundred,six]]
• For each "slot" (i.e., "As" element) filled in the parse-tree's case-frame structure, there were several words both before and after the required word, hundred, that had to be skipped-over. This aspect of robust parsing is akin to phrase-spotting. • In mapping the words, "five oh seven uhhh I mean six," the parser had to choose a later-in-the-input parse (i.e., "[five, hun- dred, six]") over a heuristically equivalent earlier-in-the-input parse (i.e., "[five, hun- dred, seven]"). This aspect of robust pars- ing is akin to dynamic programming (i.e., finding all possible start and end points for all possible patterns and choosing the best). 2 Robust Finite-state Parsing CMU's Phoenix system is implemented as a re- cursive transition network (RTN). This is sim- ilar to Abney's system of finite-state-cascades (1996). Both parsers have a "stratal" system of levels. Both are robust in the sense of skipping over out-of-grammar areas, and building up structural islands of certainty. And both can be fairly described as run-time chart-parsers. How- ever, Abney's system inserts bracketing and tag- ging information by means of cascaded trans- ducers, whereas Phoenix accomplishes the same thing by storing state information in the chart edges themselves -- thus using the chart edges like tokens. PROFER is similar to Phoenix in this regard. Phoenix performs a depth-first search over its textual input, while Abney's "chunking" and "attaching" parsers perform best-first searches (1991). However, the demands of a tightly- coupled, real-time system argue for a breadth- first search-strategy, which in turn argues for the use of a finite-state parser, as an efficient means of supporting such a search strategy. PROFER is a strictly sequential, breadth-first parser. PROFER uses a regular grammar formalism for defining the patterns that it will parse from the input, as illustrated in Figures 1 and 2. Net name tags correspond to bracketed (i.e., "tagged") elements in the output. Aside from ............. l ~.~¢:3 °"" 7 ......... "; ::::::::::::::::::::: ................ : .................................... .................... , ....................... ............. ' i .... rip.gin ','~i ~. ])~.'., i~:::ii~]);;~.: .I rewrite patterns ] ! ! Figure 1: Formalism net names, a grammar definition can also con- tain non-terminal rewrite names and terminals. Terminals are directly matched against input• Non-terminal rewrite names group together sev- eral rewrite patterns (see Figure 2), just as net names can be used to do, but rewrite names do not appear in the output. Each individual rewrite pattern defines a "conjunction" of particular terms or sub- patterns that can be mapped from the input into the non-terminal at the head of the pattern block, as illustrated in (Figure 1). Whereas, the list of patterns within a block represents a "dis- junction" (Figure 2). ~i iii !i ~agt,a ,'~i [id] ................................................. .. ~ ~ ...... ~ . ~:~:~ (two) "]ii~i :.::::i~~ ii;i; ~| [ii::: i ~ :] ........... ; .............................................................................................. ........... {~! ii::~i] Figure 2: Formalism Since not all Context-Free Grammar (CFG) expressions can be translated into regular ex- pressions, as illustrated in Figure 3, some re- strictions are necessary to rule out the possibil- ity of "center-embedding" (see the right-most block in Figure 3). 
The restriction is that nei- ther a net name nor a rewrite name can appear in one of its own descendant blocks of rewrite patterns. Even with this restriction it is still possible to define regular grammars that allow for self- 574 Figure 3: Context-Free translations to embedding to any finite depth, by copying the net or rewrite definition and giving it a unique name for each level of self-embedding desired. For example, both grammars illustrated in Fig- ure 4 can robustly parse inputs that contain some number of a's followed by a matching number of b's up to the level of embedding de- fined, which in both of these cases is four deep. EXAMPLE: nets EXAMPLE: rewrites [se] [ser] (a [se_one] b) (a SE_ONE b) (a b) (a b) [se_one] SE_0NE (a [se_t~o] b) (a SE_TWO b) (a b) (a b) [se_two] SE_TWO (a [se_three] b) (a SE_THREE b) (a b) (a b) [se_three] SE_THREE (a b) (a b) INPUT : INPUT: a c a b d e b ac abd eb PARSE: PARSE: se: [a,se_one: [a,b] ,b] set: [a,a,b,b] Figure 4: Finite self-embedding. 3 The Power of Regular Grammars Tomita (1986) has argued that context-free grammars (CFGs) are over-powered for natu- ral language. Chart parsers are designed to deal with the worst case of very-deep or infi- nite self-embedding allowed by CFGs. How- ever, in natural language this worst case does not occur. Thus, broad coverage Generalized Left-Right (GLR) parsers based on Tomita's al- gorithm, which ignore the worst case scenario, case-flame style regular expressions. are in practice more efficient and faster than comparable chart-parsers (Briscoe and Carroll, 1993). PROFER explicitly disallows the worst case of center-self-embedding that Tomita's GLR de- sign allows -- but ignores. Aside from infinite center-self-embedding, a regular grammar for- malism like PROFER's can be used to define every pattern in natural language definable by a GLR parser. 4 The Compilation Process The following small grammar will serve as the basis for a high-level description of the compi- lation process. [s] (n Iv] n) (p Iv] p) Iv] (v) In Kaiser et al. (1999) the relationship be- tween PROFER's compilation process and that of both Pereira and Wright's (1997) FSAs and CMU's Phoenix system has been described. Here we wish to describe what happens dur- ing PROFER's compilation stage in terms of the Left-Right parsing notions of item-set for- mation and reduction. As compilation begins the FSM always starts at state 0:0 (i.e., net 0, start state 0) and tra- verses an arc labeled by the top-level net name to the 0:1 state (i.e., net 0, final state 1), as il- lustrated in Figure 5. This initial arc is then re- written by each of its rewrite patterns (Fig- ure 5). As each new net within the grammar descrip- tion is encountered it receives a unique net-ID number, the compilation descends recursively into that new sub-net (Figure 5), reads in its 575 •. .................................................... ,° Figure 5: Definition expansion. grammar description file, and compiles it. Since rewrite names are unique only within the net in which they appear, they can be processed iter- atively during compilation, whereas net names must be processed recursively within the scope of the entire grammar's definition to allow for re-use. As each element within a rewrite pattern is encountered a structure describing its exact context is filled in. All terminals that appear in the same context are grouped together as a "context-group" or simply "context." So arcs in the final FSM are traversed by "contexts" not terminals. 
When a net name itself traverses an arc it is glued into place contextually with e arcs (i.e., NULL arcs) (Figure 6). Since net names, like any other pattern element, are wrapped inside of a context structure before being situated in the FSM, the same net name can be re-used inside of many different contexts, as in Figure 6. Figure 6: Contextualizing sub-nets. As the end of each net definition file is reached, all of its NULL arcs are removed. Each initial state of a sub-net is assumed into its par- ent state -- which is equivalent to item-set for- mation in that parent state (Figure 7 left-side). Each final state of a sub-net is erased, and its incoming arcs are rerouted to its terminal par- ent's state, thus performing a reduction (Fig- ure 7 right-side). Figure 7: Removing NULL arcs. 5 The Parsing Process At run-time, the parse proceeds in a strictly breadth-first manner (Figure 8,(Kaiser et al., 1999)). Each destination state within a parse is named by a hash-table key string com- posed of a sequence of "net:state" combina- tions that uniquely identify the location of that state within the FSM (see Figure 8). These "net:state" names effectively represent a snap- shot of the stack-configuration that would be seen in a parallel GLR parser. PROFER deals with ambiguity by "split- ting" the branches of its graph-structured stack (as is done in a Generalized Left-Right parser (Tomita, 1986)). Each node within the graph- structured stack holds a "token" that records the information needed to build a bracketed parse-tree for any given branch. When partial-paths converge on the same state within the FSM they are scored heuris- tically, and all but the set of highest scoring partial paths are pruned away. Currently the heuristics favor interpretations that cover the most input with the fewest slots. Command line parameters can be used to refine the heuristics, so that certain kinds of structures be either min- imized or maximized over the parse. Robustness within this scheme is achieved by allowing multiple paths to be propagated in par- allel across the input space. And as each such 576 ..... - !I T Figure 8: The parsing process. partial-path is extended, it is allowed to skip- over terms in the input that are not licensed by the grammar. This allows all possible start and end times of all possible patterns to be consid- ered. 6 Discussion Many researchers have looked at ways to im- prove corpus-based language modeling tech- niques. One way is to parse the training set with a structural parser, build statistical mod- els of the occurrence of structural elements, and then use these statistics to build or augment an n-gram language model. Gillet and Ward (1998) have reported reduc- tions in perplexity using a stochastic context- free grammar (SCFG) defining both simple se- mantic "classes" like dates and times, and de- generate classes for each individual vocabulary word. Thus, in building up class statistics over a corpus parsed with their grammar they are able to capture both the traditional n-gram word se- quences plus statistics about semantic class se- quences. Briscoe has pointed out that using stochas- tic context-free grammars (SCFGs) as the ba- sis for language modeling, "...means that in- formation about the probability of a rule apply- ing at a particular point in a parse derivation is lost" (1993). For this reason Briscoe developed a GLR parser as a more "natural way to obtain a finite-state representation ..." 
on which the statistics of individual "reduce" actions could be determined. Since PROFER's state names effectively represent the stack-configurations of a parallel GLR parser it also offers the ability to perform the full-context statistical parsing that Briscoe has called for. Chelba and Jelinek (1999) use a struc- tural language model (SLM) to incorporate the longer-range structural knowledge represented in statistics about sequences of phrase-head- word/non-terminal-tag elements exposed by a tree-adjoining grammar. Unlike SCFGs their statistics are specific to the structural context in which head-words occur. They have shown both reduced perplexity and improved word er- ror rate (WER) over a conventional tri-gram system. One can also reduce complexity and improve word-error rates by widening the speech recog- nition problem to include modeling not only the word sequence, but the word/part-of-speech (POS) sequence. Heeman and Allen (1997) has shown that doing so also aids in identifying speech repairs and intonational boundaries in spontaneous speech. However, all of these approaches rely on corpus-based language modeling, which is a large and expensive task. In many practical uses of spoken language technology, like using simple structured dialogues for class room instruction (as can be done with the CSLU toolkit (Sutton et al., 1998)), corpus-based language modeling may not be a practical possibility. In structured dialogues one approach can be to completely constrain recognition by the known expectations at a given state. Indeed, the CSLU toolkit provides a generic recognizer, which accepts a set of vocabulary and word se- quences defined by a regular grammar on a per- state basis. Within this framework the task of a recognizer is to choose the best phonetic path through the finite-state machine defined by the regular grammar. Out-of-vocabulary words are accounted for by a general purpose "garbage" phoneme model (Schalkwyk et al., 1996). We experimented with using PROFER in the same way; however, our initial attempts to do so did not work well. The amount of informa- tion carried in PROFER's token's (to allow for bracketing and heuristic scoring of the seman- tic hypotheses) requires structures that are an order of magnitude larger than the tokens in a typical acoustic recognizer. When these large tokens are applied at the phonetic-level so many 577 are needed that a memory space explosion oc- curs. This suggests to us that there must be two levels of tokens: small, quickly manipulated to- kens at the acoustic level (i.e., lexical level), and larger, less-frequently used tokens at the struc- tural level (i.e., syntactic, semantic, pragmatic level). 7 Future Work In the MINDS system Young et al. (1989) re- ported reduced word error rates and large re- ductions in perplexity by using a dialogue struc- ture that could track the active goals, topics and user knowledge possible in a given dialogue state, and use that knowledge to dynamically create a semantic case-frame network, whose transitions could in turn be used to constrain the word sequences allowed by the recognizer. Our research aim is to maximize the effective- ness of this approach. 
Therefore, we hope to: • expand the scope of PROFER's structural definitions to include not only word pat- terns, but intonation and stress patterns as well, and • consider how build to general language models that complement the use of the cat- egorial constraints PROFER can impose (i.e., syllable-level modeling, intonational boundary modeling, or speech repair mod- eling). Our immediate efforts are focused on consider- ing how to modify PROFER to accept a word- graph as input -- at first as part of a loosely- -coupled system, and then later as part of an integrated system in which the elements of the word-graph are evaluated against the structural constraints as they are created. 8 Conclusion We have presented our finite-state, robust parser, PROFER, described some of its work- ings, and discussed the advantages it may offer for moving towards a tight integration of robust natural language processing with a speech de- coder -- those advantages being: its efficiency as an FSM and the possibility that it may pro- vide a useful level of constraint to a recognizer independent of a large, task-specific language model. 9 Acknowledgements The author was funded by the Intel Research Council, the NSF (Grant No. 9354959), and the CSLU member consortium. We also wish to thank Peter Heeman and Michael Johnston for valuable discussions and support. References s. Abney. 1991. Parsing by chunks. In R. Berwick, S. Abney, and C. Termy, editors, Principle.Based Pars- ing. Kluwer Academic Publishers. S. Abney. 1996. Partial parsing via finite-state cas- cades. In Proceedings o/ the ESSLLI '96 Robust Pars- ing Workshop. T. Briscoe and J. Carroll. 1993. Generalized probabilis- tic LR parsing of natural language (corpora) with unification-based grammars. Computational Linguis- tics, 19(1):25-59. C. Chelba and F. Jelinek. 1999. Recognition perfor- mance of a structured language model. In The Pro- ceedings o/ Eurospeech '99 (to appear), September. J. Gillet and W. Ward. 1998. A language model combin- ing trigrams and stochastic context-free grammars. In Proceedings of ICSLP '98, volume 6, pgs 2319-2322. P. J. Hayes, A. G. Hauptmann, J. G. Carbonell, and M. Tomita. 1986. Parsing spoken language: a semantic caseframe approach. In l l th International Con]erence on Computational Linguistics, Proceedings of Coling '86, pages 587-592. P. A. Heeman and J. F. Allen. 1997. Intonational bound- aries, speech repairs, and discourse markers: Model- ing spoken dialog. In Proceedings o~ the 35th Annual Meeting o] the Association ]or Computational Lin- guistics, pages 254-261. S. Issar and W. Ward. 1993. Cmu's robust spoken lan- guage understanding system. In Eurospeech '93, pages 2147-2150. E. Kaiser, M. Johnston, and P. Heeman. 1999. Profer: Predictive, robust finite-state parsing for spoken lan- guage. In Proceedings o/ ICASSP '99. F. C. N. Pereira and R. N. Wright. 1997. Finite-state ap- proximations of phrase-structure grammars. In Em- manuel Roche and Yves Schabes, editors, Finite-State Language Processing, pages 149-173. The MIT Press. J. Schalkwyk, L. D. Colton, and M. Fanty. 1996. The CSLU-sh toolkit for automatic speech recognition: Technical report no. CSLU-011-96, August. S. Sutton, R. Cole, J. de Villiers, J. Schalkwyk, P. Ver- meulen, M. Macon, Y. Yan, E. Kaiser, B. Rundle, K. Shobaki, P. Hosom, A. Kain, J. Wouters, M. Mas- saro, and M. Cohen. 1998. Universal speech tools: the cslu toolkit". In Proceedings of ICSLP '98, pages 3221-3224, Nov.. M. Tomita. 1986. 
Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems. Kluwer Academic Publishers.

S. R. Young, A. G. Hauptmann, W. H. Ward, E. T. Smith, and P. Werner. 1989. High level knowledge sources in usable speech recognition systems. Communications of the ACM, 32(2):183-194, February.
Packing of Feature Structures for Efficient Unification of Disjunctive Feature Structures

Yusuke Miyao
Department of Information Science, University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033 Japan
E-mail: yusuke@is.s.u-tokyo.ac.jp

Abstract

This paper proposes a method for packing feature structures, which automatically collapses equivalent parts of lexical/phrasal feature structures of HPSG into a single packed feature structure. This method avoids redundant repetition of unification of those parts. Preliminary experiments show that this method can significantly improve unification speed in parsing.

1 Introduction

Efficient treatment of syntactic/semantic ambiguity is a key to making efficient parsers for wide-coverage grammars. In feature-structure-based grammars1, such as HPSG (Pollard and Sag, 1994), ambiguity is expressed not only by manually-tailored disjunctive feature structures, but also by enumerating non-disjunctive feature structures. In addition, there is ambiguity caused by non-determinism when applying lexical/grammar rules. As a result, a large number of lexical/phrasal feature structures are required to express ambiguous syntactic/semantic structures. Without efficient processing of these feature structures, a sufficient parsing speed is unattainable.

1In this paper we consider typed feature structures described in (Carpenter, 1992).

This paper proposes a method for packing feature structures, which is an automatic optimization method for parsers based on feature structure unification. This method automatically extracts equivalent parts of feature structures and collapses them into a single packed feature structure. A packed feature structure can be processed more efficiently because we can avoid redundant repetition of unification of the equivalent parts of original feature structures.

There have been many studies on efficient unification of disjunctive feature structures (Kasper and Rounds, 1986; Hasida, 1986; Dörre and Eisele, 1990; Nakano, 1991; Blache, 1997; Blache, 1998). All of them suppose that disjunctive feature structures should be given by grammar writers or lexicographers. However, it is not practical to specify all ambiguity using only manually-tailored disjunctive feature structures in grammar development. Where disjunctive feature structures cannot be given explicitly, those algorithms lose their advantages. Hence, an automatic conversion method, such as the packing method described hereafter, is required for further optimization of those systems. In addition, this packing method converts general feature structures to a suitable form for a simple and efficient unification algorithm, which is also described in this paper.

Griffith (Griffith, 1995; Griffith, 1996) points out the same problem and proposes a compilation method for feature structures called modularization. However, modularization is very time-consuming, and is not suitable for optimizing feature structures produced during parsing. An earlier paper of mine (Miyao et al., 1998) also discusses the same problem and proposes another packing method. However, that method can pack only pre-specified parts of input feature structures, and this characteristic limits the overall efficiency gain. The new method in this paper can pack any kind of feature structures as far as possible, and is more general than the previous method.

2 Data Structure and Algorithms

This section describes the data structure of packed feature structures, and the algorithms for packing and unification of packed feature structures. Throughout this section, I will refer to examples from the XHPSG system (Tateisi et al., 1998), an HPSG-based grammar for English.
2 Data Structure and Algorithms This section describes the data structure of packed feature structures, and the algorithms for packing and unification of packed feature structures. Through of this section, I will refer to examples from the XHPSG system (Tateisi 579 PHON <'o'ed~o'~ r F F.E~O verb ]] • CArl HEAD / L,I r:- [] =_=o,,]> / I u~ /VAL I L SYNSEM ILOC~LI p ICOMP$ Am / I L LSPR <> / ::l tv~. rcred.edl -I I. I-:'"' LA~G~ [~J - word PHON <'cre~eo'> s~se~ .NONLOC I INHERISLASH ~T~ " ',~ocd PHON <'cr~led> r FHEAO ,,~, I I P FCATIHEAD r.o~ -I- /~T L: I.SUm <Lco~ [] ~o~J, ! Iv" m ; " CATI HEAD noun NONLOCII~HERISLASH<[cONT [] nom_obJ] > FHE~ verb "1 CATI HEAD noun SUBJ < : > r [] 1/ / >// L LSPR < > J J r r.~ .~ ]] • CATI HF.),D ~o~ t., I <[co,, [] _oJ> 1// I ''~ /VAL/coMP ~ noun -I>/// -1:1 / I: L LSPR <> J J/ I Figure 1:4 out of 37 lexical entries which the XHPSG system assigns to the word "credited". Parts shaded with the same pattern are equivalent. et al., 1998), an HPSG-based grammar for En- glish. 2.1 Packed Feature Structure Figure 1 shows 4 out of 37 lexical entries which the XHPSG system assigns to the word "cred- ited". These lexical entries have various equiva- lent parts in their respective feature structures. In Figure 1, equivalent parts are shaded with the same pattern. Figure 2 shows a packed feature structure for the lexical entries shown in Figure 1. Note that the equivalent parts of the original feature struc- tures are collapsed into a feature structure seg- ment, which is denoted by Si in Figure 2. So is a special segment called the root segment, which "word ; PHON <'crecl~ad'> I" ['HEAD ,~b / / [SU=<[CATI"EAD"°"] 1 So : LOCAL CAT VAL CONT A, 1 / L LSPR o l LCOm LNONLOCI INHERI SLASH A, S, : nomobj rcreditedl ] S= : I~] S, : LARG1 AT] $ 4 : noun i-CATIH~O no.n'l S e : nomobj S 1, " < > s,: ,o~_o~j L~ A,o J ~,'-> S I' I a2-* S ~/ I/%-) S, D,=Iz~s-*Ss/ D=_IzS,-*S,, I ~,'* S,ol -I ~5c-* S, LL~,-* S,J I z36-~$6 kZ~o-* S e I/%-* S 31 I ~--" S =/ I/%-* S o/ I A~-*S,| D,_IZ~,-*S,ol D~ =1 A,-* S.I I ~5~'-* S ~/ -I ~Sr* S 5/ I ZS,-" S,/ I zSs--* S 6/ I ~Se-" S ,/ LZS,~ S U LZ~9_~ S ,j Figure 2: A packed feature structure expressing the same information as the set of feature structures in Figure 1. Shaded parts correspond to the parts with the same pattern in Figure 1. describes the root nodes of all original feature structures. Each segment can have disjunctive nodes, which are denoted by Ai. For example, 53 has two disjunctive nodes, A 5 and A6. A de- pendency function, denoted by Di, is a mapping from a disjunctive node to a segment, and each Di corresponds to one original feature structure. We can obtain each original feature structure by replacing each disjunctive node with the output of the respective dependency function. For applying the unification algorithm de- scribed in Section 2.3, we introduce a con- dition on segments: a segment cannot have inter- or intra-segment shared nodes. For ex- ample, the disjunctive node i 1 in Figure 2 must be introduced for satisfying this con- dition, even though the value of this node is the same in all the original feature struc- tures. This is because this path is structure- shared with another path (SYNSEHILOCALJCONT j ARG1 and SYNSEHJLOCALJCONTJARG2). Structure- sharing in original feature structures is instead expressed by letting the dependency function return the same value for different inputs. For example, result values of applying D1 to A1 and A7 are both S1. 
The reason why we introduce this condition is to guarantee that a disjunctive node in the 580 r_ IPHON <'cmd~e~> So:/ FCAT F HEAD verb 0 T credited/ L P" L,.o, ,,,J $1 : John $2 : Yusuke D,=E At-~S,3 D2=EA,-~S2] Figure 3: A sample packed feature structure. If it is unified with the top feature structure in Figure 1, a new disjunctive node must he introduced to SYNSRM I LOCALICATJVALJSUBJ IFIRSTICONT. result of unification will appear only at a path where a disjunctive node appears in either of the input feature structures at the same path. For example, suppose we unify the top feature struc- ture in Figure 1 with the packed feature struc- ture in Figure 3. In the result of unification, a new disjunctive node must appear at SYNSEM I LOCALJCATIVALJSUBJJFIRSTJCONT , while no dis- junctive nodes appear in either of the input fea- ture structures at this path. By introducing such a disjunctive node in advance, we can sim- plify the algorithm for unification described in Section 2.3. Below I first describe the algorithm for pack- ing feature structures, and then the algorithm for unification of packed feature structures. 2.2 Algorithm for Packing The procedure pack_feature_structures in Figure 4 describes the algorithm for packing two packed feature structures, denoted by (S',:D') and (,9", D"). ,9' and S" denote sets of seg- ments, and 7)' and 7)" denote sets of depen- dency functions. We start from comparing the types of the root nodes of both feature struc- tures. If either of the nodes is a disjunctive node (Case 1 ), we compare the type of the other fea- ture structure with the type of each disjunct, and recursively pack nodes with the same type if they exist (Case 1.1). Otherwise, we just add the other feature structure to the disjunc- tive node as a new disjunct (Case 1.2). If the types of the nodes are equivalent (Case 2), we collapse them into one node, and apply packing recursively to all of their subnodes. If they are not equivalent (Case 3), we create a new dis- junctive node at this node, and let each original procedure pack.~eatureJtructures((S', Do), (S", D")) begin ~o ~ s'. s~' ~ s" 7:) := ~)t U "/3 II re~ura (S, D) end procedure pach(F s, F H) hesin i~ F / (or F Is) is disjzuction then if BG(G E diojuncts(F'). G a.d F" ha~e equivalent types) 1;hen S := SUdiojuncts(F') pack(G. F") Y~" := {DID" E DH,D = D" U(F' -- F")} else S := SUdisjuncts(FI)u{F/'} 7)" := {DID 'I E ~9", D = D" u(F' -- F")} endi:f else i:f F/ and F" ha~e equivalent types then F' := F" ~oreach f in features(F I) pack(:foUoe(.f, F'), :follou(.f, F")) eloe S:= SU{F',F"} F := 4io3uuctiYe-node D' := {DID' E ~)',D = D' U(F -- F')} D" := {DID" 6 D",D = D" U(F -- F")} endif cud disjuucts: return a set of disjuncts of the disjunctive node :features: return a set of features :folios: return a substructure reached by the specified feature • Cuae 1 • Case 1,1 • (:~.ue 1.2 • Case 2 • Cese 3 Figure 4: Algorithm for packing two packed feature structures (S',:D') and (S", $)"). feature structure from this node become a new segment. For simplicity, Figure 4 omits the algorithm for introducing disjunctive nodes into shared nodes. We can easily create disjunctive nodes in such places by preprocessing input feature structures in the following way. First each input feature structure is converted to a packed fea- ture structure in advance by converting shared nodes to disjunctive nodes. Then the above algorithm can be applied to these converted packed feature structures. 
2.3 Algorithm for Unification

Below I describe the algorithm for unification of packed feature structures, referring to the example in Figure 2. Suppose that we are unifying this packed feature structure with the feature structure in Figure 5. This example considers unification of a non-packed feature structure with a packed feature structure, although the algorithm is capable of unifying two packed feature structures.

[Figure 5: A sample feature structure to be unified with the packed feature structure in Figure 2.]

The process itself is described by the procedure unify_packed_feature_structures in Figure 6. It is quite similar to a normal unification algorithm. The only difference is the part that handles disjunctive nodes.

[Figure 6: Algorithm for unifying two packed feature structures (S', D') and (S'', D'').]

    procedure unify_packed_feature_structures((S', D'), (S'', D''))
    begin
      S := ∅, D := ∅
      foreach D' ∈ D' and D'' ∈ D''
      NEXT:
      begin
        push_segment_stack(S0' ∈ S', S0'' ∈ S'')
        do until segment_stack is empty
        begin
          pop_segment_stack(S', S'')
          if S' is a disjunction then S' := D'(S')              ... (1)
          if S'' is a disjunction then S'' := D''(S'')
          SEGMENT_UNIFY:
          if already_unified(S', S'') then                      ... (2)
            S := restore_unify_result(S', S'')
            S' := S, S'' := S                                   ... (3)
          else if S := unify(S', S'') fails then
            goto NEXT
          else
            S := S ∪ {S}
            set_unify_result(S, S', S'')
            S' := S, S'' := S                                   ... (4)
          endif
        end
        D := D ∪ {D' ∪ D''}
      end
      return (S, D)
    end

    procedure unify(F', F'')
    begin
      if F' or F'' is a disjunction then                        ... (5)
        F := disjunctive-node
        push_segment_stack(F', F'')
      else
        NODE_UNIFY:
        F := unify_type(F', F'')
        foreach f in features(F)
          follow(f, F) := unify(follow(f, F'), follow(f, F''))
      endif
      return F
    end

    already_unified: true when the unification is already computed
    restore_unify_result: restore the result of unification from the table
    set_unify_result: store the result of unification into the table
    unify_type: return the unification of both types

When we reach a disjunctive node, we put it onto a stack (segment_stack), and postpone further unification from this node ((5) in Figure 6). In this example, we put Δ1, Δ2, Δ3, and Δ4 onto the stack. At the end of the entire unification, we apply a dependency function to each member of the stack, and unify every resulting segment with the corresponding part of the other feature structure ((1) in Figure 6). In this example, we apply D1 to Δ1, which returns segment S1. We therefore unify S1 with the feature structure tagged as [1] in Figure 5. Disjunction is expressed by non-determinism when applying the dependency functions.
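Before continuing the walkthrough, here is a compact sketch that mirrors this control structure for the simplified Node/Disj/PackedFS encoding used earlier. The memo table plays the role of already_unified/restore_unify_result; type unification as string equality and the handling of postponed results are simplifying assumptions, not the paper's implementation (a full version would also splice the memoized segment results back into the output):

    class UnificationFailure(Exception):
        pass

    def unify_packed(p, g):
        """Unify packed feature structure p (a PackedFS) with a plain
        feature structure g (a Node): one attempt per dependency function,
        discarding failures, as in the non-deterministic scheme above."""
        results = []
        for D in p.dependencies:
            stack, memo = [], {}
            try:
                root = unify(p.segments[0], g, stack)   # may postpone disjunctions
                while stack:
                    a, b = stack.pop()
                    if isinstance(a, Disj):             # (1) apply the dependency function
                        a = p.segments[D[a.index]]
                    key = (id(a), id(b))
                    if key not in memo:                 # (2) skip already-computed segments
                        memo[key] = unify(a, b, stack)
                results.append(root)
            except UnificationFailure:
                pass
        return results

    def unify(a, b, stack):
        if isinstance(a, Disj) or isinstance(b, Disj):  # (5) postpone disjunctive nodes
            stack.append((a, b))
            return a if isinstance(a, Disj) else b      # keep the disjunctive placeholder
        if a.type != b.type:                            # toy type unification: equality
            raise UnificationFailure()
        out = Node(a.type, dict(a.features))
        for f, v in b.features.items():
            out.features[f] = unify(out.features[f], v, stack) if f in out.features else v
        return out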
[Figure 7: Intermediate data structure after unifying Δ1 with [1]. Disjunction is expressed by non-determinism when applying the dependency functions. When we unify a feature structure segment for Δ2, we unify S2 if we are applying D1, or S3 if D2.]

Figure 7 shows the intermediate data structure after unifying Δ1 with [1]. We are now focusing on the disjunctive node Δ2, which is now on the top of segment_stack. When we are applying D1, we unify S2 with the corresponding feature structure [2]. Should we instead apply D2, S3 would be unified.

A benefit of this unification algorithm is that we can skip unification of feature structure segments whose unification is already computed ((2) in Figure 6). For example, we unify segment S0 with the other feature structure only once. We can also skip unification of S1 and S10 for D2, because the result is already computed. This operation preserves the validity of unification because each segment does not have inter- or intra-segment shared nodes, thanks to the condition we previously introduced.

[Figure 8: Intermediate data structure after the unification of Δ4. Because the result of applying D1 to Δ7 has already been overwritten by the result of unifying S1 with [1], we unify this resulting feature structure with [3] for D1.]

Note that this method can correctly unify feature structures with reentrancies. For example, Figure 8 shows the intermediate data structure after unifying Δ4, where the process has reached Δ7 and [3]. The result of the application of D1 to Δ7 is the result of unifying S1 with [1], because S1 is overwritten with the result of this previous unification ((3) and (4) in Figure 6). Hence, we unify [3] with this result.

The above unification algorithm is applied to every combination of dependency functions. The result of the entire unification is shown in Figure 9.

[Figure 9: The resulting packed feature structure of unifying the packed feature structure of Figure 2 with the feature structure of Figure 5.]

3 Experiments

I implemented the algorithms for packing and unification in LiLFeS (Makino et al., 1998). LiLFeS is one of the fastest inference engines for processing feature structure logic, and efficient parsers have already been realized using this system. For performance evaluation I measured the execution time for a part of the application of grammar rules (i.e. schemata) of XHPSG.

Table 1 shows the execution time for unifying the resulting feature structure of applying schemata to lexical entries of "Mary" as a left daughter, with lexical entries of "credited"/"walked" as right daughters. Unification of packed feature structures achieved a speed-up by a factor of 6.4 to 8.4, compared to the naive approach. Table 2 shows the number of unification routine calls. NODE_UNIFY shows the number of nodes for which unification of types is computed. As can be seen, it is significantly reduced.
On the other hand, SEGMENT_UNIFY shows the number of check operations testing whether a unification is already computed. The figures show that the number of node unification operations is significantly reduced by the packing method, and that segment unification operations account for most of the time taken by the unification.

Table 1: Execution time for unification. "Test data" shows the word used for the experiment. "# of LEs" shows the number of lexical entries assigned to the word. "Naive" shows the time for unification with a naive method. "PFS" shows the time for unification of packed feature structures (PFS). "Improvement" shows the ratio (Naive)/(PFS).

    Test data | # of LEs | Naive (msec.) | PFS (msec.) | Improvement (factor)
    credited  |    37    |     36.5      |     5.7     |        6.4
    walked    |    79    |     77.2      |     9.2     |        8.4

Table 2: The number of calls to each part of the unification routines. "Naive" shows the number of node unification operations in the naive unification algorithm (corresponding to NODE_UNIFY of my algorithm). NODE_UNIFY and SEGMENT_UNIFY are specified in Figure 6.

    Test data | Naive | NODE_UNIFY | SEGMENT_UNIFY
    credited  | 30929 |    256     |     5095
    walked    | 65709 |    265     |    10603

These results indicate that unification speed can be improved further by reducing the number of segment unifications. The data structure of dependency functions has to be improved, and dependency functions can be packed. I observed that at least a quarter of the segment unification operations could be suppressed. This is left for future work.

4 Conclusion

The packing method I described in this paper automatically extracts equivalent parts from feature structures and collapses them into a single packed feature structure. It reduces redundant repetition of unification operations on the equivalent parts. I implemented this method in LiLFeS, and achieved a speed-up of the unification process by a factor of 6.4 to 8.4. For realizing efficient NLP systems, I am currently building an efficient parser by integrating the packing method with the compilation method for HPSG (Torisawa and Tsujii, 1996). While the compilation method reduces the number of unification operations during parsing, it cannot prevent inefficiency caused by ambiguity. The packing method will overcome this problem, and will hopefully enable us to realize practical and efficient NLP systems.

References

Philippe Blache. 1997. Disambiguating with controlled disjunctions. In Proc. International Workshop on Parsing Technologies.

Philippe Blache. 1998. Parsing ambiguous structures using controlled disjunctions and unary quasi-trees. In Proc. COLING-ACL'98, pages 124-130.

Bob Carpenter. 1992. The Logic of Typed Feature Structures. Cambridge University Press.

Jochen Dörre and Andreas Eisele. 1990. Feature logic with disjunctive unification. In Proc. 13th COLING, volume 2, pages 100-105.

John Griffith. 1995. Optimizing feature structure unification with dependent disjunctions. In Proc. Workshop on Grammar Formalism for NLP at ESSLLI-94, pages 37-59.

John Griffith. 1996. Modularizing contexted constraints. In Proc. COLING'96, pages 448-453.

Kôiti Hasida. 1986. Conditioned unification for natural language processing. In Proc. 11th COLING, pages 85-87.

Robert T. Kasper and William C. Rounds. 1986. A logical semantics for feature structures. In Proc. 24th ACL, pages 257-266.

Takaki Makino, Minoru Yoshida, Kentaro Torisawa, and Jun'ichi Tsujii. 1998. LiLFeS — towards a practical HPSG parser. In Proc. COLING-ACL'98, pages 807-811.

Yusuke Miyao, Kentaro Torisawa, Yuka Tateisi, and Jun'ichi Tsujii. 1998.
Packing of feature structures for optimizing the HPSG-style grammar translated from TAG. In Proc. TAG+4 Workshop, pages 104-107.

Mikio Nakano. 1991. Constraint projection: An efficient treatment of disjunctive feature descriptions. In Proc. 29th ACL, pages 307-314.

C. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press.

Yuka Tateisi, Kentaro Torisawa, Yusuke Miyao, and Jun'ichi Tsujii. 1998. Translating the XTAG English grammar to HPSG. In Proc. TAG+4 Workshop, pages 172-175.

Kentaro Torisawa and Jun'ichi Tsujii. 1996. Computing phrasal-signs in HPSG prior to parsing. In Proc. 16th COLING, pages 949-955.
1999
75
Parsing preferences with Lexicalized Tree Adjoining Grammars: exploiting the derivation tree

Alexandra KINYON
TALANA, Universite Paris 7, case 7003, 2pl Jussieu, 75005 Paris, France
[email protected]

Abstract

Since Kimball (73), parsing preference principles such as "Right Association" (RA) and "Minimal Attachment" (MA) have often been formulated with respect to constituent trees. We present 3 preference principles based on "derivation trees" within the framework of LTAGs. We argue they remedy some shortcomings of the former approaches and account for widely accepted heuristics (e.g. argument/modifier, idioms...).

Introduction

The inherent characteristics of LTAGs (i.e. lexicalization, adjunction, an extended domain of locality and "mildly context-sensitive" power) make them attractive for Natural Language Processing: LTAGs are parsable in polynomial time and allow an elegant and psycholinguistically plausible representation of natural language(1). Large coverage grammars were developed for English (Xtag group (95)) and French (Abeillé (91)). Unfortunately, "large" grammars yield high ambiguity rates: Doran & al. (94) report 7.46 parses/sentence on a WSJ corpus of 18730 sentences using a wide coverage English grammar. Srinivas & al. (95) formulate domain-independent heuristics to rank parses. But this approach is practical, English-oriented, not explicitly linked to psycholinguistic results, and does not fully exploit "derivation" information. In this paper, we present 3 disambiguation principles which exploit derivation trees.

1. Brief presentation of LTAGs

A LTAG consists of a finite set of elementary trees of finite depth. Each elementary tree must «anchor» one or more lexical item(s). The principal anchor is called «head»; other anchors are called «co-heads». All leaves in elementary trees are either «anchor», «foot node» (noted *) or «substitution node» (noted ↓). These trees are of 2 types: auxiliary or initial(2). A tree has at most 1 foot node; such a tree is an auxiliary tree. Trees that are not auxiliary are initial. Elementary trees combine with 2 operations: substitution and adjunction. Substitution is compulsory and is used essentially for arguments (subject, verb and noun complements). It consists in replacing in a tree (elementary or not) a node marked for substitution with an initial tree that has a root of the same category. Adjunction is optional (although it can be forbidden or made compulsory using specific constraints) and deals essentially with determiners, modifiers, auxiliaries, modals, and raising verbs (e.g. seem). It consists in inserting in a tree, in place of a node X, an auxiliary tree with a root of the same category. The descendants of X then become the descendants of the foot node of the auxiliary tree. Contrary to context-free rewriting rules, the history of derivation must be made explicit since the same derived tree can be obtained using different derivations. This is why parsing LTAGs yields a derivation tree, from which a derived tree (i.e. constituent tree) can be obtained (Figure 1)(3). Branches in a derivation tree are unordered.

(1) e.g. Frank (92) discusses the psycholinguistic relevance of adjunction for Children Language Acquisition; Joshi (90) discusses psycholinguistic results on crossed and serial dependencies.
(2) Traditionally, initial trees are called α and auxiliary trees β.
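As a concrete illustration of the two combination operations, here is a minimal sketch over ordered labeled trees; the representation and helper names are my own, not part of any LTAG implementation discussed here:

    from dataclasses import dataclass, field

    @dataclass
    class TNode:
        label: str
        children: list = field(default_factory=list)
        subst: bool = False      # substitution node (noted with a down arrow)
        foot: bool = False       # foot node of an auxiliary tree (noted *)

    def all_nodes(t):
        yield t
        for c in t.children:
            yield from all_nodes(c)

    def substitute(tree, site, initial):
        """Replace a substitution node anywhere in `tree` by an initial
        tree whose root carries the same category."""
        assert site.subst and site.label == initial.label
        for n in all_nodes(tree):
            n.children = [initial if c is site else c for c in n.children]

    def adjoin(site, auxiliary):
        """Adjoin an auxiliary tree at `site`: the children of `site`
        move under the auxiliary tree's foot node."""
        foot = next(n for n in all_nodes(auxiliary) if n.foot)
        assert site.label == auxiliary.label == foot.label
        foot.children, foot.foot = site.children, False
        site.children = list(auxiliary.children)

    # "Yesterday John left": adjoin beta_yesterday (S -> Adv S*) at the root S.
    s = TNode("S", [TNode("N", [TNode("John")]), TNode("V", [TNode("left")])])
    beta = TNode("S", [TNode("Adv", [TNode("yesterday")]), TNode("S", foot=True)])
    adjoin(s, beta)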
Moreover, linguistic constraints on the well-formedness of elementary trees have been formulated:
• Predicate Argument Cooccurrence Principle: there must be a leaf node for each realized argument of the head of an elementary tree.
• Semantic consistency: no elementary tree is semantically void.
• Semantic minimality: an elementary tree corresponds at most to one semantic unit.

2. Former results on parsing preferences

A vast literature addresses parsing preferences. Structural approaches introduced 2 principles: RA accounts for the preferred reading of the ambiguous sentence (a): "yesterday" attaches to "left" and not to "said" (Kimball (73)). MA accounts for the preferred reading of (b): "for Sue" attaches to "bought" and not to "flowers" (Frazier & Fodor (78)).

(a) Tom said that Joe left yesterday
(b) Tom bought the flowers for Sue

These structural principles have been criticized, though: among other things, the interaction between these principles is unclear. This type of approach lacks provision for integration with semantics and/or pragmatics (Schubert (84)), does not clearly establish the distinction between arguments and modifiers (Ferreira & Clifton (86)) and is English-biased: evidence against RA has been found for Spanish (Cuetos & Mitchell (88)) and Dutch (Brysbaert & Mitchell (96)).

Some parsing preferences are widely accepted, though: the idiomatic interpretation of a sentence is favored over its literal interpretation (Gibbs & Nayak (89)); arguments are preferred over modifiers (Abney (89), Britt & al. (92)). Additionally, lexical factors (e.g. frequency of subcategorization for a given verb) have been shown to influence parsing preferences (Hindle & Rooth (93)). It is striking that these three most consensual types of syntactic preferences turn out to be difficult to formalize by resorting only to "constituent trees", but easy to formalize in terms of LTAGs. Before explaining our approach, we must underline that the examples(4) presented later on are not necessarily counter-examples to RA and/or MA, but just illustrations: our goal is not to further criticize RA and MA, but to show that problems linked to these "traditional" structural approaches do not automatically condemn all structural approaches.

3 Three preference principles based on derivation trees

For the sake of brevity, we will not develop the importance of "lexical factors", but just note that LTAGs are obviously well suited to represent that type of preference because of strong lexicalization(5). To account for the "idiomatic" vs "literal" and for the "argument" vs "modifier" preferences, we formulate three parsing preference principles based on the shape of derivation trees:

1. Prefer the derivation tree with the fewer number of nodes
2. Prefer to attach an α-tree low(6)
3. Prefer the derivation tree with the fewer number of β-tree nodes

Principle 1 takes precedence over principle 2, and principle 2 takes precedence over principle 3. A sketch of how these ordered principles could be applied to rank competing derivation trees is given below.

(3) Our examples follow linguistic analyses presented in (Abeillé (91)), except that we substitute sentential complements when no extraction occurs. Thus we use no VP node and no Wh nor NP traces. But this has no incidence on the application of our preference principles.
(4) These examples are kept simple on purpose, for the sake of clarity. Also, "lexical preferences" and "structural preferences" are not necessarily antagonistic and can both be used for practical purposes.
(6) By low we mean "as far as possible from the root".
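A minimal sketch of how the three ordered preference principles above could be used to rank candidate derivation trees; the tree encoding and counting helpers are assumptions, not part of the paper:

    from dataclasses import dataclass, field

    @dataclass
    class DerivNode:
        kind: str                          # "alpha" (initial) or "beta" (auxiliary)
        children: list = field(default_factory=list)

    def walk(n, depth=0):
        yield n, depth
        for c in n.children:
            yield from walk(c, depth + 1)

    def preference_key(derivation):
        ns = list(walk(derivation))
        n_nodes = len(ns)                                           # Principle 1
        alpha_height = -sum(d for n, d in ns if n.kind == "alpha")  # Principle 2: deeper alpha attachment preferred
        n_beta = sum(1 for n, _ in ns if n.kind == "beta")          # Principle 3
        # Lexicographic comparison encodes the precedence 1 > 2 > 3.
        return (n_nodes, alpha_height, n_beta)

    # best = min(candidate_derivations, key=preference_key)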
3.1 What these principles account for

Principle 1 accounts for the preference "idiomatic" over "literal": in LTAGs, all the set elements of an idiomatic expression are present in a single elementary tree. Figure 1 shows the 2 derivation trees obtained when parsing "Yesterday John kicked the bucket". The preferred one (i.e. the idiomatic interpretation) has fewer nodes.

[FIGURE 1: Illustration of Principle 1. Elementary trees for "Yesterday John kicked the bucket"; the preferred derivation tree (idiomatic α-kicked-the-bucket) and the dispreferred one (literal α-kicked); both derivation trees yield the same derived tree.](7)

(7) In derivation trees, plain lines indicate an adjunction, dotted lines a substitution.

Principle 2 says to attach an argument low (e.g. to the direct object of the main verb) rather than high (e.g. to the verb). In (c1), "of the demonstration" attaches to "organizer" rather than to "suspects", while in (c2) "of the crime" can only attach to the verb. Figure 2 shows how Principle 2 yields the preferred derivation tree for sentence (c1). Similarly, in sentence (d1) "to whom" attaches to "say" rather than to "give", while in (d2) it attaches to "give" since "think" cannot take a PP complement. This agrees with psycholinguistic results such as "filled gap effects" (Crain & Fodor (85)).

(c1) John suspects the organizer of the demonstration
(c2) John suspects Bill of the crime
(d1) To whom does Mary say that John gives flowers.
(d2) To whom does Mary think that John gives flowers.

[FIGURE 2: Illustration of Principle 2. Elementary trees for "John suspects the organizer of the demonstration", the preferred and dispreferred derivation trees, and the corresponding derived trees.]

Principle 3 prefers arguments over modifiers. Figure 3 shows that Principle 3 predicts the preferred derivation tree for (e): "to be honest" argument of "prefer", ruling out "to be honest" as a sentence modifier (i.e. "To be honest, he prefers his daughter").

(e) John prefers his daughter to be honest.

[FIGURE 3: Illustration of Principle 3. Elementary trees for "John prefers his daughter to be honest", the preferred and dispreferred derivation trees, and the corresponding derived trees.]

These three principles aim at attaching arguments as accurately as possible and do not deal with "strict" modifier attachment, for the following reasons:
• There is a lack of agreement concerning the validity of preference principles for "modifier attachment".
• Principle 3, which deals the most with modifier attachment, turned out the least conclusive when confronted with empirical data.
• We wanted to evaluate how attaching arguments correctly affects ambiguity, all other factors remaining unchanged.

4 Some results

French sentences from the test suite developed in the TSNLP project (Estival & Lehman (96)) were originally parsed using XTAG with a domain-independent wide-coverage grammar for French (Abeillé & Candito (99)). We kept the 1074 grammatical ones (i.e. noted "1" in the TSNLP terminology) of category S or augmented to S (excluding coordination) that were accepted. A human picked one or more "correct" derivations for each sentence parsed(8). Principle 1, and then Principles 1 & 2, were applied on the derivation trees to eliminate some derivations. Table 1 shows the results obtained.

TABLE 1: Results for TSNLP

                                                  | Before applying | After applying | After applying
                                                  | principles      | principle 1    | principles 1&2
    Total # of sentences                          | 1074            | 1074           | 1074
    Total # of derivations                        | 3057            | 2474           | 2334
    # of sentences with at least 1 correct parse  | 1070 (99.6%)    | 1055 (98.2%)   | 1054 (98.1%)
    # of ambiguous sentences                      | 537             | 427            | 424
    # of non-ambiguous sentences                  | 537             | 647            | 650
    # of partially disambiguated sentences        | n.a.            | 89             | 86
    # of parses / sentence                        | 2.85            | 2.3            | 2.17

(8) More than one derivation was deemed "correct" when non-spurious ambiguity remained in modifier attachment (e.g. He saw the man with a telescope).

4.1 Comments on the results

After disambiguating with Principles 1 and 2, the proportion of sentences with at least one parse judged correct by a human only marginally decreased, while the average number of parses per sentence went down from 2.85 to 2.17 (i.e. -24%). Since "strict modifier attachment" is orthogonal to our concern, a sentence such as (f) still yields 5 derivations, partly because of spurious ambiguity, partly because of adverbial attachment (i.e. "hier" attached to S or to V).

(f) Il a travaillé hier (He worked yesterday)

Therefore most sentences aren't disambiguated by Principles 1 or 2, especially those anchoring an intransitive verb. For sentences that are affected by at least one of these two principles, the average number of parses per sentence goes down from 6.76 to 2.94 after applying both principles (i.e. -56.5%) (Table 2).

TABLE 2: Results for sentences affected by at least one principle

                                                       | Before applying | After applying | After applying
                                                       | principles      | principle 1    | principles 1&2
    # of sentences affected by at least one principle  | 189             | 189            | 189
    # of derivations                                   | 1279            | 696            | 556
    # of parses / sentence                             | 6.77            | 3.68           | 2.94

4.2 The gap between theory and practice

Surprisingly, Principle 1 was used in only one case to prefer an idiomatic interpretation, but proved very useful in preferring arguments over modifiers: derivation trees with arguments often have fewer nodes because of co-heads. For instance, it systematically favored the attachment of "by" phrases as passive with agent. Principle 2 favored lower attachment of arguments as in (g) but proved useful only in conjunction with Principle 1: it provided further disambiguation by selecting derivation trees among those with an equally low number of nodes.

(g) L'ingénieur obtient l'accord de l'entreprise (The engineer obtains the agreement of the company / from the company)

Principle 3 did not prove as useful as the two others: first, it aims at favoring arguments over modifiers, but these cases were already handled by Principle 1 (again because of co-heads). Second, it consistently made wrong predictions in cases of lexical ambiguity (e.g. it favored "être" as a copula rather than as an auxiliary, although the auxiliary is much more common in French). Therefore we have postponed testing it until further refinement is found.

5 Conclusion

We have presented three application-independent, domain-independent and language-independent disambiguation principles formulated in terms of derivation trees within the framework of LTAGs. Since they are straightforward to implement, these principles can be used for parse-ranking applications or integrated into a parser to reduce non-determinism. Preliminary results are encouraging as to the soundness of at least two of these principles. Further work will focus on testing these principles on larger corpora (e.g. Le Monde) as well as on other languages, and on refining them for practical purposes (e.g. addition of frequency information and principles for modifier attachment). Since it is the first time to our knowledge that parsing preferences are formulated in terms of derivation trees, it would also be interesting to see how this could be adapted to dependency-based parsing.

References

Abeillé A. (1991) Une grammaire lexicalisée d'arbres adjoints pour le français. PhD dissertation, Université Paris 7.
Abeillé A., Candito M.H. (1999) FTAG: A LTAG for French. In Tree Adjoining Grammars, Abeillé, Rambow (eds), CSLI, Stanford.
Abney S. (1989) A computational model of human parsing. Journal of Psycholinguistic Research, 18, 129-144.
Britt M., Perfetti C., Garrod S., Rayner K. (1992) Parsing and discourse: Context effects and their limits. Journal of Memory and Language, 31, 293-314.
Brysbaert M., Mitchell D.C. (1996) Modifier attachment in sentence parsing: Evidence from Dutch. Quarterly Journal of Experimental Psychology, 49a, 664-695.
Crain S., Fodor J.D. (1985) How can grammars help parsers? In Natural Language Parsing, 94-127, D. Dowty, L. Karttunen, A. Zwicky (eds), Cambridge University Press.
Cuetos F., Mitchell D.C. (1988) Cross-linguistic differences in parsing: restrictions on the use of the Late Closure strategy in Spanish. Cognition, 30, 73-105.
Doran C., Egedi D., Hockey B.A., Srinivas B., Zaidel M. (1994) XTAG System — a wide coverage grammar for English. COLING'94, Kyoto, Japan.
Estival D., Lehman S. (1997) TSNLP: des jeux de phrases test pour le TALN. TAL 38:1, 115-172.
Ferreira F., Clifton C. (1986) The independence of syntactic processing. Journal of Memory and Language, 25, 348-368.
Frank R. (1992) Syntactic Locality and Tree Adjoining Grammar: Grammatical Acquisition and Processing Perspectives. PhD dissertation, University of Pennsylvania.
Frazier L., Fodor J.D. (1978) "The sausage machine": a new two stage parsing model. Cognition 6.
Gibbs R., Nayak (1989) Psycholinguistic studies on the syntactic behaviour of idioms. Cognitive Psychology, 21, 100-138.
Hindle D., Rooth M. (1993) Structural ambiguity and lexical relations. Computational Linguistics, 19, pp. 103-120.
Joshi A. (1990) Processing crossed and serial dependencies: an automaton perspective on the psycholinguistic results. Language and Cognitive Processes, 5:1, 1-27.
Kimball J. (1973) Seven principles of surface structure parsing in natural language. Cognition 2.
Schubert L. (1984) On parsing preferences. COLING'84, Stanford, 247-250.
Srinivas B., Doran C., Kulick S. (1995) Heuristics and parse ranking. 4th International Workshop on Parsing Technologies, Prague, Czech Republic.
Xtag group (1995) A LTAG for English. Technical Report IRCS 95-03, University of Pennsylvania.
1999
76
Cohesion and Collocation: Using Context Vectors in Text Segmentation

Stefan Kaufmann
CSLI, Stanford University
Linguistics Dept., Bldg. 460
Stanford, CA 94305-2150, U.S.A.
[email protected]

Abstract

Collocational word similarity is considered a source of text cohesion that is hard to measure and quantify. The work presented here explores the use of information from a training corpus in measuring word similarity and evaluates the method in the text segmentation task. An implementation, the VecTile system, produces similarity curves over texts using pre-compiled vector representations of the contextual behavior of words. The performance of this system is shown to improve over that of the purely string-based TextTiling algorithm (Hearst, 1997).

1 Background

The notion of text cohesion rests on the intuition that a text is "held together" by a variety of internal forces. Much of the relevant linguistic literature is indebted to Halliday and Hasan (1976), where cohesion is defined as a network of relationships between locations in the text, arising from (i) grammatical factors (co-reference, use of pro-forms, ellipsis and sentential connectives), and (ii) lexical factors (reiteration and collocation). Subsequent work has further developed this taxonomy (Hoey, 1991) and explored its implications in such areas as paragraphing (Longacre, 1979; Bond and Hayes, 1984; Stark, 1988), relevance (Sperber and Wilson, 1995) and discourse structure (Grosz and Sidner, 1986).

The lexical variety of cohesion is semantically defined, invoking a measure of word similarity. But this is hard to measure objectively, especially in the case of collocational relationships, which hold between words primarily because they "regularly co-occur." Halliday and Hasan refrained from a deeper analysis, but hinted at a notion of "degrees of proximity in the lexical system, a function of the probability with which one tends to co-occur with another." (p. 290) The VecTile system presented here is designed to utilize precisely this kind of lexical relationship, relying on observations on a large training corpus to derive a measure of similarity between words and text passages.

2 Related Work

Previous approaches to calculating cohesion differ in the kind of lexical relationship they quantify and in the amount of semantic knowledge they rely on. Topic parsing (Hahn, 1990) utilizes both grammatical cues and semantic inference based on pre-coded domain-specific knowledge. More general approaches assess word similarity based on thesauri (Morris and Hirst, 1991) or dictionary definitions (Kozima, 1994).

Methods that solely use observations of patterns in vocabulary use include vocabulary management (Youmans, 1991) and the blocks algorithm implemented in the TextTiling system (Hearst, 1997). The latter is compared below with the system introduced here.

A good recent overview of previous approaches can be found in Chapters 4 and 5 of (Reynar, 1998).

3 The Method

3.1 Context Vectors

The VecTile system is based on the WordSpace model of (Schütze, 1997; Schütze, 1998). The idea is to represent words by encoding the environments in which they typically occur in texts. Such a representation can be obtained automatically and often provides sufficient information to make deep linguistic analysis unnecessary. This has led to promising results in information retrieval and related areas (Flournoy et al., 1998a; Flournoy et al., 1998b).
Given a dictionary W and a relatively small set C of meaningful "content" words, for each pair in W × C, the number of times is recorded that the two co-occur within some measure of distance in a training corpus. This yields a |C|-dimensional vector for each w ∈ W. The direction that the vector has in the resulting |C|-dimensional space then represents the collocational behavior of w in the training corpus. In the present implementation, |W| = 20,500 and |C| = 1000. For computational efficiency and to avoid the high number of zero values in the resulting matrix, the matrix is reduced to 100 dimensions using Singular-Value Decomposition (Golub and van Loan, 1989).

[Figure 1: Example of a VecTile similarity plot, with paragraph numbers along the bottom and section breaks marked on the horizontal axis.]

As a measure of similarity in collocational behavior between two words, the cosine between their vectors is computed. Given two n-dimensional vectors V, W:

    cos(\vec{V}, \vec{W}) = \frac{\sum_{i=1}^{n} v_i w_i}{\sqrt{\sum_{i=1}^{n} v_i^2} \sqrt{\sum_{i=1}^{n} w_i^2}}    (1)

3.2 Comparing Window Vectors

In order to represent pieces of text larger than single words, the vectors of the constituent words are added up. This yields new vectors in the same space, which can again be compared against each other and against word vectors. If the word vectors in two adjacent portions of text are added up, then the cosine between the two resulting vectors is a measure of the lexical similarity between the two portions of text.

The VecTile system uses word vectors based on co-occurrence counts on a corpus of New York Times articles. Two adjacent windows (200 words each in this experiment) move over the input text, and at pre-determined intervals (every 10 words), the vectors associated with the words in each window are added up, and the cosine between the resulting window vectors is assigned to the gap between the windows in the text. High values indicate lexical closeness. Troughs in the resulting similarity curve mark spots with low cohesion.

3.3 Text Segmentation

To evaluate the performance of the system and facilitate comparison with other approaches, it was used in text segmentation. The motivating assumption behind this test is that cohesion reinforces the topical unity of subparts of text and lack of it correlates with their boundaries; hence if a system correctly predicts segment boundaries, it is indeed measuring cohesion. For want of a way of observing cohesion directly, this indirect relationship is commonly used for purposes of evaluation.

4 Implementation

The implementation of the text segmenter resembles that of the TextTiling system (Hearst, 1997). The words from the input are stemmed and associated with their context vectors. The similarity curve over the text, obtained as described above, is smoothed out by a simple low-pass filter, and low points are assigned depth scores according to the difference between their values and those of the surrounding peaks. The mean and standard deviation of those depth scores are used to calculate a cutoff below which a trough is judged to be near a section break. The nearest paragraph boundary is then marked as a section break in the output.

An example of a text similarity curve is given in Figure 1. Paragraph numbers are inside the plot at the bottom. Speaker judgments by five subjects are inserted in five rows in the upper half.
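A compact sketch of the sliding-window similarity computation just described; the vector table, window size and step are stand-ins for the parameters above, and this is an illustration rather than the VecTile code itself:

    import numpy as np

    def similarity_curve(tokens, vectors, window=200, step=10):
        """Cosine between summed context vectors of two adjacent windows,
        computed every `step` tokens. `vectors` maps word -> np.ndarray;
        out-of-vocabulary words are simply skipped."""
        def window_vector(words):
            vs = [vectors[w] for w in words if w in vectors]
            return np.sum(vs, axis=0) if vs else None

        curve = []
        for gap in range(window, len(tokens) - window + 1, step):
            left = window_vector(tokens[gap - window:gap])
            right = window_vector(tokens[gap:gap + window])
            if left is None or right is None:
                curve.append((gap, 0.0))
                continue
            cos = float(left @ right /
                        (np.linalg.norm(left) * np.linalg.norm(right)))
            curve.append((gap, cos))   # low values suggest a cohesion trough
        return curve

Smoothing the curve, assigning depth scores to its troughs and thresholding them by mean and standard deviation would then follow, as described in Section 4.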
Table 1: Precision and recall on the text segmentation task

    Text # | TextTiling   | VecTile      | Subjects
           | Prec | Rec   | Prec | Rec   | Prec | Rec
    1      |  60  |  50   |  60  |  50   |  75  |  77
    2      |  14  |  20   | 100  |  80   |  76  |  76
    3      |  50  |  50   |  50  |  50   |  72  |  73
    4      |  25  |  50   |  10  |  25   |  70  |  75
    5      |  10  |  25   |  40  |  50   |  70  |  74
    avg    |  32  |  40   |  52  |  51   |  73  |  75

The crucial difference between this and the TextTiling system is that the latter builds window vectors solely by counting the occurrences of strings in the windows. Repetition is rewarded by the present approach, too, as identical words contribute most to the similarity between the block vectors. However, similarity scores can be high even in the absence of pure string repetition, as long as the adjacent windows contain words that co-occur frequently in the training corpus. Thus what a direct comparison between the systems will show is whether the addition of collocational information gleaned from the training corpus sharpens or blunts the judgment.

For comparison, the TextTiling algorithm was implemented and run with the same window size (200) and gap interval (10).

5 Evaluation

5.1 The Task

In a pilot study, five subjects were presented with five texts from a popular-science magazine, all between 2,000 and 3,400 words, or between 20 and 35 paragraphs, in length. Section headings and any other clues were removed from the layout. Paragraph breaks were left in place. Thus the task was not to find paragraph breaks, but breaks between multi-paragraph passages that according to the subject's judgment marked topic shifts. All subjects were native speakers of English.(1)

(1) The instructions read: "You will be given five magazine articles of roughly equal length with section breaks removed. Please mark the places where the topic seems to change (draw a line between paragraphs). Read at normal speed, do not take much longer than you normally would. But do feel free to go back and reconsider your decisions (even change your markings) as you go along. Also, for each section, suggest a headline of a few words that captures its main content. If you find it hard to decide between two places, mark both, giving preference to one and indicating that the other was a close rival."

5.2 Results

To obtain an "expert opinion" against which to compare the algorithms, those paragraph boundaries were marked as "correct" section breaks which at least three out of the five subjects had marked. (Three out of seven (Litman and Passonneau, 1995; Hearst, 1997) or 30% (Kozima, 1994) are also sometimes deemed sufficient.) For the two systems as well as the subjects, precision and recall with respect to the set of "correct" section breaks were calculated. The results are listed in Table 1.

The context vectors clearly led to an improved performance over the counting of pure string repetitions. The simple assignment of section breaks to the nearest paragraph boundary may have led to noise in some cases; moreover, it is not really part of the task of measuring cohesion. Therefore the texts were processed again, this time moving the windows over whole paragraphs at a time, calculating gap values at the paragraph gaps. For each paragraph break, the number of subjects who had marked it as a section break was taken as an indicator of the "strength" of the boundary. There was a significant negative correlation between the values calculated by both systems and that measure of strength, with r = -.338 (p = .0002) for the VecTile system and r = -.220 (p = .0172) for TextTiling.
In other words, deep gaps in the similarity measure are associated with strong agreement between subjects that the spot marks a section boundary. Although r² is low in both cases, the VecTile system yields more significant results.

5.3 Discussion and Further Work

The results discussed above need further support with a larger subject pool, as the level of agreement among the judges was at the low end of what can be considered significant. This is shown by the Kappa coefficients, measured against the expert opinion and listed in Table 2. The overall average was .594.

Table 2: Kappa coefficients

    Text #    | Subject 1 | Subject 2 | Subject 3 | Subject 4 | Subject 5 | All
    1         |   .775    |   .629    |   .596    |   .444    |   .642    | .617
    2         |   .723    |   .649    |   .491    |   .753    |   .557    | .635
    3         |   .859    |   .121    |   .173    |   .538    |   .738    | .486
    4         |   .870    |   .532    |   .635    |   .299    |   .870    | .641
    5         |   .833    |   .500    |   .625    |   .423    |   .500    | .576
    All texts |   .814    |   .491    |   .508    |   .481    |   .675    | .594

Despite this caveat, the results clearly show that adding collocational information from the training corpus improves the prediction of section breaks, hence, under common assumptions, the measurement of lexical cohesion. It is likely that these encouraging results can be further improved. Following are a few suggestions of ways to do so.

Some factors work against the context vector method. For instance, the system currently has no mechanism to handle words that it has no context vectors for. Often it is precisely the co-occurrence of uncommon words not in the training corpus (personal names, rare terminology etc.) that ties text together. Such cases pose no challenge to the string-based system, but the VecTile system cannot utilize them. The best solution might be a hybrid system with a backup procedure for unknown words.

Another point to note is how well the much simpler TextTiling system compares. Indeed, a close look at the figures in Table 1 reveals that the better results of the VecTile system are due in large part to one of the texts, viz. #2. Considering the additional effort and resources involved in using context vectors, the modest boost in performance might often not be worth the effort in practice. This suggests that pure string repetition is a particularly strong indicator of similarity, and the vector-based system might benefit from a mechanism to give those vectors a higher weight than co-occurrences of merely similar words.

Another potentially important parameter is the nature of the training corpus. In this case, it consisted mainly of news texts, while the texts in the experiment were scientific expository texts. A more homogeneous setting might have further improved the results.

Finally, the evaluation of results in this task is complicated by the fact that "near-hits" (cases in which a section break is off by one paragraph) do not have any positive effect on the score. This problem has been dealt with in the Topic Detection and Tracking (TDT) project by a more flexible score that becomes gradually worse as the distance between hypothesized and "real" boundaries increases (TDT, 1997a; TDT, 1997b).

Acknowledgements

Thanks to Stanley Peters, Yasuhiro Takayama, Hinrich Schütze, David Beaver, Edward Flemming and three anonymous reviewers for helpful discussion and comments, to Stanley Peters for office space and computational infrastructure, and to Raymond Flournoy for assistance with the vector space.

References

S.J. Bond and J.R. Hayes. 1984. Cues people use to paragraph text. Research in the Teaching of English, 18:147-167.
Raymond Flournoy, Ryan Ginstrom, Kenichi Imai, Stefan Kaufmann, Genichiro Kikui, Stanley Peters, Hinrich Schütze, and Yasuhiro Takayama. 1998a. Personalization and users' semantic expectations. ACM SIGIR'98 Workshop on Query Input and User Expectations, Melbourne, Australia.

Raymond Flournoy, Hiroshi Masuichi, and Stanley Peters. 1998b. Cross-language information retrieval: Some methods and tools. In D. Hiemstra, F. de Jong, and K. Netter, editors, TWLT 13 Language Technology in Multimedia Information Retrieval, pages 79-83.

Talmy Givón, editor. 1979. Discourse and Syntax. Academic Press.

G. H. Golub and C. F. van Loan. 1989. Matrix Computations. Johns Hopkins University Press.

Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.

Udo Hahn. 1990. Topic parsing: Accounting for text macro structures in full-text analysis. Information Processing & Management, 26:135-170.

Michael A.K. Halliday and Ruqaiya Hasan. 1976. Cohesion in English. Longman.

Marti Hearst. 1997. TextTiling: Segmenting text into multi-paragraph subtopic passages. Computational Linguistics, 23(1):33-64.

Michael Hoey. 1991. Patterns of Lexis in Text. Oxford University Press.

Hideki Kozima. 1994. Computing Lexical Cohesion as a Tool for Text Analysis. Ph.D. thesis, University of Electro-Communications.

Chin-Yew Lin. 1997. Robust Automatic Topic Identification. Ph.D. thesis, University of Southern California. [Online] http://www.isi.edu/~cyl/thesis/thesis.html [1999, April 24].

Diane J. Litman and Rebecca J. Passonneau. 1995. Combining multiple knowledge sources for discourse segmentation. In Proceedings of the 33rd ACL, pages 108-115.

L.E. Longacre. 1979. The paragraph as a grammatical unit. In Givón (1979), pages 115-134.

Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indication of the structure of text. Computational Linguistics, 17(1):21-48.

Jeffrey C. Reynar. 1998. Topic Segmentation: Algorithms and Applications. Ph.D. thesis, University of Pennsylvania. [Online] http://www.cis.edu/~jcreynar/research.html [1999, April 24].

K. Richmond, A. Smith, and E. Amitay. 1997. Detecting subject boundaries within text: A language independent statistical approach. In Proceedings of The Second Conference on Empirical Methods in Natural Language Processing (EMNLP-2).

Hinrich Schütze. 1997. Ambiguity Resolution in Language Learning. CSLI.

Hinrich Schütze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97-123.

Dan Sperber and Deidre Wilson. 1995. Relevance: Communication and Cognition. Harvard University Press, 2nd edition.

Heather Stark. 1988. What do paragraph markings do? Discourse Processes, 11(3):275-304.

1997a. The TDT Pilot Study Corpus Documentation, version 1.3, 10. Distributed by the Linguistic Data Consortium.

1997b. The Topic Detection and Tracking (TDT) Pilot Study Evaluation Plan, 10. Distributed by the Linguistic Data Consortium.

Gilbert Youmans. 1991. A new tool for discourse analysis: The vocabulary-management profile. Language, 47(4):763-789.
1999
77
Using Linguistic Knowledge in Automatic Abstracting

Horacio Saggion
Département d'Informatique et Recherche Opérationnelle
Université de Montréal
CP 6128, Succ. Centre-Ville
Montréal, Québec, Canada, H3C 3J7
Fax: +1-514-343-5834
[email protected]

Abstract

We present work on the automatic generation of short indicative-informative abstracts of scientific and technical articles. The indicative part of the abstract identifies the topics of the document while the informative part of the abstract elaborates some topics according to the reader's interest by motivating the topics, describing entities and defining concepts. We have defined our method of automatic abstracting by studying a corpus of professional abstracts. The method also considers the reader's interest as essential in the process of abstracting.

1 Introduction

The idea of producing abstracts or summaries by automatic means is not new; several methodologies have been proposed and tested for automatic abstracting, including among others: word distribution (Luhn, 1958); rhetorical analysis (Marcu, 1997); and probabilistic models (Kupiec et al., 1995). Even though some approaches produce acceptable abstracts for specific tasks, it is generally agreed that the problem of coherent selection and expression of information in automatic abstracting remains (Johnson, 1995). One of the main problems is how to ensure the preservation of the message of the original text if sentences picked up from distant parts of the source text are juxtaposed and presented to the reader. Rino and Scott (1996) address the problem of coherent selection for gist preservation; however, they depend on the availability of a complex meaning representation which in practice is difficult to obtain from the raw text.

In our work, we are concerned with the automatic generation of short indicative-informative abstracts for technical and scientific papers. We base our methodology on a study of a corpus of professional abstracts and source or parent documents. Our method also considers the reader's interest as essential in the process of abstracting.

2 The Corpus

The production of professional abstracts has long been an object of study (Cremmins, 1982). In particular, it has been argued that structural parts of parent documents such as introductions and conclusions are important in order to obtain the information for the topical sentence (Endres-Niggemeyer et al., 1995). We have been investigating which kind of information is reported in professional abstracts, as well as where the information lies in parent documents and how it is conveyed. In Figure 1, we show a professional abstract from the "Computer and Control Abstracts" journal; this kind of abstract aims to alert readers to the existence of a new article in a particular field. The example contains information about the author's interest, the author's development and the overview of the parent document. All the information reported in this abstract was found in the introduction of its parent document.

In order to study the aforementioned aspects, we have manually aligned sentences of 100 professional abstracts with sentences of parent documents containing the information reported in the abstract. In a previous study (Saggion and Lapalme, 1998), we have shown that 72% of the information in professional abstracts lies in titles, captions, first sections and last sections of parent documents, while the rest of the information was found in author abstracts and other sections.
These results suggest that some structural sections are particularly important in order to select information for an abstract, but also that it is not enough to produce a good informative abstract (i.e. we hardly find the results of an investigation in the introduction of a research paper).

    The production of understandable and maintainable expert systems using the current generation of multiparadigm development tools is addressed. This issue is discussed in the context of COMPASS, a large and complex expert system that helps maintain an electronic telephone exchange. As part of the work on COMPASS, several techniques to aid maintainability were developed and successfully implemented. Some of the techniques were new, others were derived from traditional software engineering but modified to fit the rapid prototyping approach of expert system building. An overview of the COMPASS project is presented, software problem areas are identified, solutions adopted in the final system are described and how these solutions can be generalized is discussed.

Figure 1: Professional Abstract: CCA 58293 (1990 vol.25 no.293). Parent Document: "Maintainability Techniques in Developing Large Expert Systems." D.S. Prerau et al., IEEE Expert, vol.5, no.3, p.71-80, June 1990.

3 Conceptual and Linguistic Information

The complex process of scientific discovery that starts with the identification of a research problem and eventually ends with an answer to the problem (Bunge, 1967) would generally be disseminated in a technical or scientific paper: a complex record of knowledge containing, among others, references to the following concepts: the author, the author's affiliation, other authors, the author's development, the author's interest, the research article and its components (sections, figures, tables, etc.), the problem under consideration, the author's solution, others' solutions, the topics of the research article, the motivation for the study, the importance of the study, what the author found, what the author thinks, what others have done, and so forth.

Those concepts are systematically selected for inclusion in professional abstracts. We have noted that some of them are lexically marked, while others appear as arguments of predicates conveying specific relations in the domain of discourse. For example, in an expression such as "We found significant reductions in ..." the verb "find" takes as an argument a result, and in the expression "The lack of a library severely limits the impact of ..." the verb "limit" entails a problem. We have used our corpus and a set of more than 50 complete technical articles in order to deduce a conceptual model and to gather lexical information conveying concepts and relations. Although our conceptual model does not deal with all the intricacies of the domain, we believe it covers most of the important information relevant for an abstract. In order to obtain linguistic expressions marking concepts and relations, we have tagged our corpus with a POS tagger (Foster, 1991) and we have used a thesaurus (Vianna, 1980) to semantically classify the lexical items (most of them are polysemous). Figure 2 gives an overview of some concepts, relations and lexical items so far identified.
The information we collected allows the definition of patterns of two kinds: (i) linguistic patterns for the identification of noun groups and verb groups; and (ii) domain-specific patterns for the identification of entities and relations in the conceptual model. This allows for the identification of complex noun groups such as "The TIGER condition monitoring system" in the sentence "The TIGER gas turbine condition monitoring system addresses the performance monitoring aspects", and the interpretation of strings such as "University of Montreal" as a reference to an institution and verb forms such as "have presented" as a reference to a predicate possibly introducing the topic of the document. The patterns have been specified according to the linguistic constructions found in the corpus and then expanded to cope with other valid linguistic patterns, though not observed in our data.

Figure 2: Some Conceptual and Linguistic Information

    Concept/Relation  | Explanation                                | Lexical items
    make known        | The author marks the topic of the document | describe, expose, present, ...
    study             | The author is engaged in study             | analyze, examine, explore, ...
    express interest  | The author is interested in                | address, concern, interest, ...
    experiment        | The author is engaged in experimentation   | experiment, test, try out, ...
    identify goal     | The author identifies the research goal    | necessary, focus on, ...
    explain           | The author gives explanations              | explain, interpret, justify, ...
    define            | A concept is being defined                 | define, be, ...
    describe          | An entity is being described               | compose, form, ...
    authors           | The authors of the article                 | We, I, author, ...
    paper             | The technical article                      | article, here, paper, study, ...
    institutions      | The authors' affiliation                   | University, Université, ...
    other researchers | Other researchers                          | Proper Noun (Year), ...
    problem           | The problem under consideration            | difficulty, issue, problem, ...
    method            | The method used in the study               | equipment, methodology, ...
    results           | The results obtained                       | result, find, reveal, ...
    hypotheses        | The assumptions of the author              | assumption, hypothesis, ...

4 Generating Abstracts

It is generally accepted that there is no such thing as an ideal abstract, but different kinds of abstracts for different purposes and tasks (McKeown et al., 1998). We aim at the generation of a type of abstract well recognized in the literature: short indicative-informative abstracts. The indicative part identifies the topics of the document (what the authors present, discuss, address, etc.) while the informative part elaborates some topics according to the reader's interest by motivating the topics, describing entities, defining concepts and so on. This kind of abstract could be used in tasks such as accessing the content of the document and deciding if the parent document is worth reading.

Our method of automatic abstracting relies on:
• the identification of sentences containing domain-specific linguistic patterns;
• the instantiation of templates using the selected sentences;
• the identification of the topics of the document; and
• the presentation of the information using re-generation techniques.

The templates represent different kinds of information we have identified as important for inclusion in an abstract. A sketch of the pattern-based sentence selection that feeds these templates is given below.
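A minimal sketch of this kind of lexically-driven sentence selection; the marker lists are abbreviated from Figure 2, and the tokenization and matching strategy are simplifications of the system's transducers, not the actual implementation:

    import re

    # Abbreviated marker lists from Figure 2 (verb lemmas signalling a relation).
    MARKERS = {
        "make known": ["describe", "expose", "present"],
        "study": ["analyze", "examine", "explore"],
        "results": ["result", "find", "reveal"],
    }

    def select_sentences(sentences, category):
        """Return sentences containing a lexical marker of the given category.
        A real system would lemmatize and use POS information; here we match
        inflected forms with a crude regular expression."""
        lemmas = MARKERS[category]
        pattern = re.compile(r"\b(" + "|".join(lemmas) + r")(s|ed|es|ing)?\b", re.I)
        return [s for s in sentences if pattern.search(s)]

    # Example: find candidate "topic of the document" sentences.
    sents = ["This paper describes the Active Telepresence System.",
             "The system was tested on five texts."]
    print(select_sentences(sents, "make known"))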
They are classified into: indicative templates, used to represent concepts and relations usually present in indicative abstracts, such as "the topic of the document", "the structure of the document", "the identification of main entities", "the problem", "the need for research", "the identification of the solution", "the development of the author" and so on; and informative templates, representing concepts that appear in informative abstracts, such as "entity/concept definition", "entity/concept description", "entity/concept relevance", "entity/concept function", "the motivation for the work", "the description of the experiments", "the description of the methodology", "the results", "the main conclusions" and so on.

Associated with each template is a set of rules used to identify potential sentences which could be used to instantiate the template. For example, the rules for the topic of the document template specify a search for the category make known in the introduction and conclusion of the paper, while the rules for the entity description template specify a search for the describe category in all the text. Only sentences matching specific patterns are retained in order to instantiate the templates, and this reduces in part the problem of polysemy of the lexical items.

The overall process of automatic abstracting, shown in Figure 3, is composed of the following steps:

Pre-processing and Interpretation: The raw text is tagged and transformed into a structured representation allowing the following processes to access the structure of the text (words, groups of words, titles, sentences, paragraphs, sections, and so on). Domain-specific transducers are applied in order to identify possible concepts in the discourse domain (such as the authors, the paper, references to other authors, institutions and so on) and linguistic transducers are applied in order to identify noun groups and verb groups. Afterwards, semantic tags marking discourse domain relations and concepts are added to the different elements of the structure. Additionally, the process extracts noun groups, computes noun group distribution (assigning a weight to each noun group) and generates the topical structure of the paper: a structure with n + 1 components, where n is the number of sections in the document. Component i (0 ≤ i ≤ n) contains the noun groups extracted from the title of section i (0 indicates the title of the document). The structure is used in the selection of the content for the indicative abstract.

Indicative Selection: Its function is to identify potential topics of the document and to construct a pool of "propositions" introducing the topics. The indicative templates are used to this end: sentences are selected, filtered and used to instantiate the templates using patterns identified during the analysis of the corpus. The instantiated templates obtained in this step constitute the indicative data base. Each template contains, in addition to its specific slots, the following: the topic candidate slot, which is filled in with the noun groups of the sentence used for instantiation; the weight slot, filled in with the sum of the weights of the noun groups in the topic candidate slot; and the position slot, filled in with the position of the sentence (section number and sentence number) which instantiated the template.
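The slot structure just described can be made concrete with a small sketch; the field names follow the description above, while the sentence representation is an assumption about what the pre-processing step would deliver:

    from dataclasses import dataclass

    @dataclass
    class Sentence:                        # produced by pre-processing (assumed shape)
        section: int
        index: int
        noun_groups: list

    @dataclass
    class IndicativeTemplate:
        kind: str                          # e.g. "topic of the document"
        slots: dict                        # template-specific slots (predicate, who, what, ...)
        topic_candidates: list             # noun groups of the instantiating sentence
        weight: float                      # sum of the noun-group weights
        position: tuple                    # (section number, sentence number)

    def instantiate(kind, sentence, slots, ng_weights):
        """Build a template from a selected sentence, filling the topic
        candidate, weight and position slots as described above."""
        return IndicativeTemplate(
            kind=kind,
            slots=slots,
            topic_candidates=list(sentence.noun_groups),
            weight=sum(ng_weights.get(ng, 0.0) for ng in sentence.noun_groups),
            position=(sentence.section, sentence.index),
        )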
In Figure 4, the "topic of the document" template appears instantiated using the sentence "this paper describes the Active Telepresence System with an integrated AR system to enhance the operator's sense of presence in hazardous environments." In order to select the content for the indicative abstract, the system looks for a "match" between the topical structure and the templates in the indicative data base: the system tries all the matches between noun groups in the topical structure and noun groups in the topic candidate slots. One template is selected for each component of the topical structure: the template with the most matches. The selected templates constitute the content of the indicative abstract, and the noun groups in their topic candidate slots constitute the potential topics.

Informative Selection: This process aims to confirm which of the potential topics computed by the indicative selection are actual topics (i.e. topics the system could informatively expand according to the reader's interest) and produces a pool of "propositions" elaborating the topics. All informative templates are used in this step; the process considers sentences containing the potential topics and matching informative patterns. The instantiated informative templates constitute the informative data base, and the potential topics appearing in the informative templates form the topics of the document.

Generation: This is a two-step process. First, in the indicative generation, the templates selected by the indicative selection are presented to the reader in a short text which contains the topics identified by the informative selection and the kinds of information the user could ask for. Second, in the informative generation, the reader selects some of the topics, asking for specific types of information. The informative templates associated with the selected topics are used to present the required information to the reader, using expansion operators such as the "description" operator, whose effect is to present the description of the selected topic.

[Figure 3: System Architecture — flow diagram: the raw text passes through pre-processing and interpretation; the indicative selection uses the noun groups and the topical structure to build the indicative data base and the potential topics; the informative selection builds the informative data base; the generation step produces the indicative abstract and, after the user selects topics, the informative abstract.]

Templates and Instantiated Slots:

Topic of the document template:
  Main predicate: "describes": DESCRIBE
  Where: nil
  Who: "This paper": PAPER
  What: "the Active Telepresence System with an integrated AR system to enhance the operator's sense of presence in hazardous environments"
  Position: Number 1 from "Conclusion" Section
  Topic candidates: "the Active Telepresence System", "an integrated AR system", "the operator's sense", "presence", "hazardous environments"
  Weight: ...

Entity description template:
  Main predicate: "consist of": CONSIST OF
  Topical entity: "The Active Telepresence System"
  Related entities: "three distinct elements", "the stereo head", "its controller", "the display device"
  Position: Number 4 from "The Active Telepresence System" Section
  Weight: ...

Figure 4: Some Instantiated Templates for the article "Augmenting reality for telerobotics: unifying real and virtual worlds", J. Pretlove, Industrial Robot, vol. 25, issue 6, 1998.
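Continuing the sketch above (reusing the hypothetical IndicativeTemplate class), the matching step of the indicative selection could be read as follows; this is an illustrative rendering of the description, not the system's code.

    def select_indicative_content(topical_structure, indicative_db):
        """One template per component of the topical structure: the
        template whose topic-candidate slot shares the most noun groups
        with that component."""
        selected = []
        for component in topical_structure:
            component_ngs = set(component)
            best, best_matches = None, 0
            for template in indicative_db:
                matches = len(component_ngs & set(template.topic_candidates))
                if matches > best_matches:
                    best, best_matches = template, matches
            if best is not None and best not in selected:
                selected.append(best)
        # the noun groups in the selected templates' topic-candidate
        # slots are the potential topics passed to informative selection
        potential_topics = {ng for t in selected for ng in t.topic_candidates}
        return selected, potential_topics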
For example, if the "topic of the document" template (Figure 4) is selected by the informative selection, the following indicative text will be presented:

  Describes the Active Telepresence System with an integrated AR system to enhance the operator's sense of presence in hazardous environments.
  Topics: Active Telepresence System (description); AR system (description); AR (definition)

If the reader chooses to expand the description of the topic "Active Telepresence System", the following text will be presented:

  The Active Telepresence System consists of three distinct elements: the stereo head, its controller and the display device.

The pre-processing and interpretation steps are currently implemented. We are testing the processes of indicative and informative selection, and we are developing the generation step.

5 Discussion

In this paper, we have presented a new method of automatic abstracting based on the results obtained from the study of a corpus of professional abstracts and parent documents. In order to implement the model, we rely on techniques from finite-state processing, on the instantiation of templates and on re-generation techniques. Paice and Jones (1993) have already used templates representing specific information in a restricted domain in order to generate indicative abstracts; we instead aim at the generation of indicative-informative abstracts for domain-independent texts. Radev and McKeown (1998) also used instantiated templates, but in order to produce summaries of multiple documents; they focus on the generation of the text, while we address the overall process of automatic abstracting. We are testing our method on long technical articles found on the Web. Some outstanding issues are: the problem of co-reference, the problem of polysemy of the lexical items, the re-generation techniques, and the evaluation of the methodology, which will be based on the judgment of readers.

Acknowledgments

I would like to thank my adviser, Prof. Guy Lapalme, for encouraging me to present this work. This work is supported by the Agence Canadienne de Développement International (ACDI) and the Ministerio de Educación de la Nación de la República Argentina, Resolución 1041/96.

References

M. Bunge. 1967. Scientific Research I: The Search for System. Springer-Verlag New York Inc.

E.T. Cremmins. 1982. The Art of Abstracting. ISI Press.

B. Endres-Niggemeyer, E. Maier, and A. Sigel. 1995. How to implement a naturalistic model of abstracting: Four core working steps of an expert abstractor. Information Processing & Management, 31(5):631-674.

G. Foster. 1991. Statistical lexical disambiguation. Master's thesis, McGill University, School of Computer Science.

F. Johnson. 1995. Automatic abstracting research. Library Review, 44(8):28-36.

J. Kupiec, J. Pedersen, and F. Chen. 1995. A trainable document summarizer. In Proc. of the 18th ACM-SIGIR Conference, pages 68-73.

H.P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165.

D. Marcu. 1997. From discourse structures to text summaries. In Proceedings of the ACL'97/EACL'97 Workshop on Intelligent Scalable Text Summarization, pages 82-88, Madrid, Spain, July 11.

K. McKeown, D. Jordan, and V. Hatzivassiloglou. 1998. Generating patient-specific summaries of on-line literature. In Intelligent Text Summarization. Papers from the 1998 AAAI Spring Symposium, Technical Report SS-98-06, pages 34-43, Stanford (CA), USA, March 23-25. The AAAI Press.

C.D. Paice and P.A. Jones. 1993. The identification of important concepts in highly structured technical papers. In R. Korfhage, E. Rasmussen, and P. Willett, editors, Proc. of the 16th ACM-SIGIR Conference, pages 69-78.

D.R. Radev and K.R. McKeown. 1998. Generating natural language summaries from multiple on-line sources. Computational Linguistics, 24(3):469-500.

L.H.M. Rino and D. Scott. 1996. A discourse model for gist preservation. In D.L. Borges and C.A.A. Kaestner, editors, Proceedings of the 13th Brazilian Symposium on Artificial Intelligence, SBIA'96, Advances in Artificial Intelligence, pages 131-140. Springer, October 23-25, Curitiba, Brazil.

H. Saggion and G. Lapalme. 1998. Where does information come from? Corpus analysis for automatic abstracting. In RIFRA'98, Rencontre Internationale sur l'Extraction, le Filtrage et le Résumé Automatique, pages 72-83.

F. de M. Vianna, editor. 1980. Roget's II: The New Thesaurus. Houghton Mifflin Company, Boston.