Functional Centering

Michael Strube & Udo Hahn
Freiburg University, Computational Linguistics Lab
Europaplatz 1, D-79085 Freiburg, Germany
{strube,hahn}@coling.uni-freiburg.de

Abstract

Based on empirical evidence from a free word order language (German) we propose a fundamental revision of the principles guiding the ordering of discourse entities in the forward-looking centers within the centering model. We claim that grammatical role criteria should be replaced by indicators of the functional information structure of the utterances, i.e., the distinction between context-bound and unbound discourse elements. This claim is backed up by an empirical evaluation of functional centering.

1 Introduction

The centering model has evolved as a methodology for the description and explanation of the local coherence of discourse (Grosz et al., 1983; 1995), with focus on pronominal and nominal anaphora. Though several cross-linguistic studies have been carried out (cf. the enumeration in Grosz et al. (1995)), an almost canonical scheme for the ordering on the forward-looking centers has emerged, one that reflects well-known regularities of fixed word order languages such as English. With the exception of Walker et al. (1990; 1994) for Japanese, Turan (1995) for Turkish, Rambow (1993) for German and Cote (1996) for English, only grammatical roles are considered and the (partial) ordering in Table 1 [footnote 1] is taken for granted.

    subject > dir-object > indir-object > complement(s) > adjunct(s)

Table 1: Grammatical Role Based Ranking on the Cf

[Footnote 1: Table 1 contains the most explicit ordering of grammatical roles we are aware of and has been taken from Brennan et al. (1987). Often, the distinction between complements and adjuncts is collapsed into the category "others" (cf., e.g., Grosz et al. (1995)).]

Our work on the resolution of anaphora (Strube & Hahn, 1995; Hahn & Strube, 1996) and textual ellipsis (Hahn et al., 1996), however, is based on German, a free word order language, in which grammatical role information is far less predictive for the organization of centers. Rather, for establishing proper referential relations, the functional information structure of the utterances becomes crucial (different perspectives on functional analysis are brought forward in Daneš (1974b) and Dahl (1974)). We share the notion of functional information structure as developed by Daneš (1974a). He distinguishes between two crucial dichotomies, viz. given information vs. new information (constituting the information structure of utterances) on the one hand, and theme vs. rheme on the other (constituting the thematic structure of utterances; cf. Halliday & Hasan (1976, pp.325-6)). Daneš refers to a definition given by Halliday (1967) to avoid the confusion likely to arise in the use of these terms: "[...] while given means what you were talking about (or what I was talking about before), theme means what I am talking about (now) [...]" (Halliday, 1967, p.212). Daneš concludes that the distinction between given information and theme is justified, while the distinction between new information and rheme is not. Thus, we arrive at a trichotomy between given information, theme and rheme (the latter being equivalent to new information). We here subscribe to these considerations, too, and will return in Section 3 to these notions in order to rephrase them more explicitly by using the terminology of the centering model.
In this paper, we intend to make two contributions to the centering approach. The first one, the introduction of functional notions of information structure in the centering model, is methodological in nature. The second one concerns an empirical issue in that we demonstrate how a functional model of centering can successfully be applied to the analysis of several forms of anaphoric text phenomena.

At the methodological level, we develop arguments that (at least for free word order languages) grammatical role indicators should be replaced by functional role patterns to more adequately account for the ordering of discourse entities in center lists. In Section 3 we elaborate on the particular information structure criteria underlying a function-based center ordering. We also make a second, even more general methodological claim for which we have gathered some preliminary, though still not conclusive evidence. Based on a re-evaluation of empirical arguments discussed in the literature on centering, we stipulate that replacing grammatical criteria by functional ones is also a reasonable strategy for fixed word order languages. Grammatical role constraints can indeed be rephrased by functional ones, which is simply due to the fact that grammatical roles and the information structure patterns, as we define them, coincide in these kinds of languages. Hence, the proposal we make seems more general than the ones currently under discussion in that, given a functional framework, fixed and free word order languages can be accounted for by the same ordering principles. As a consequence, we argue against Walker et al.'s (1994, p.227) stipulation, which assumes that the Cf ranking is the only parameter of the centering theory which is language-dependent. Instead, we claim that functional centering constraints for the Cf ranking are possibly universal.

The second major contribution of this paper is related to the unified treatment of specific text phenomena. It consists of an equally balanced treatment of intersentential (pro)nominal anaphora and textual ellipsis (also called functional or partial anaphora). The latter phenomenon (cf. the examples given in the next section), in particular, is usually only sketchily dealt with in the centering literature, e.g., by asserting that the entity in question "is realized but not directly realized" (Grosz et al., 1995, p.217). Furthermore, the distinction between those two kinds of realization is generally delegated to the underlying semantic theory. We will develop arguments how to locate elliptical discourse entities and resolve textual ellipsis properly at the center level. The ordering constraints we supply account for all of the above mentioned types of anaphora in a precise way, including (pro)nominal anaphora (Strube & Hahn, 1995; Hahn & Strube, 1996). This claim will be validated by a substantial body of empirical data (cf. Section 4).

2 Types of Anaphora Considered

Text phenomena, e.g., textual forms of ellipsis and anaphora, are a challenging issue for the design of parsers for text understanding systems, since imperfect recognition facilities result in either referentially incoherent or invalid text knowledge representations.
At the conceptual level, textual ellipsis relates a quasi-anaphoric expression to its extrasentential antecedent by conceptual attributes (or roles) associated with that antecedent (see, e.g., the relation between "Akkus" (accumulator) and "316LT", a particular notebook, in (1b) and (1a)). Thus, it complements the phenomenon of nominal anaphora, where an anaphoric expression is related to its antecedent in terms of conceptual generalization (as, e.g., "Rechner" (computer) in (1c) refers to "316LT" in (1a) mediated by the textual ellipsis in (1b)). The resolution of text-level nominal (and pronominal) anaphora contributes to the construction of referentially valid text knowledge bases, while the resolution of textual ellipsis yields referentially coherent text knowledge bases.

(1) a. Ein Reserve-Batteriepack versorgt den 316LT ca. 2 Minuten mit Strom.
       (A reserve battery pack - supplies - the 316LT - for approximately 2 minutes - with power.)
    b. Der Status des Akkus wird dem Anwender angezeigt.
       (The status of the accumulator - is - to the user - indicated.)
    c. Ca. 30 Minuten vor der Entleerung beginnt der Rechner 5 Sekunden zu beepen.
       (Approximately 30 minutes - before the discharge - starts - the computer - for 5 seconds - to beep.)
    d. 5 Minuten bevor er sich ausschaltet, faengt die Low-Battery-LED an zu blinken.
       (5 minutes - before - it - itself - turns off - begins - the low-battery-LED - to flash.)

In the case of textual ellipsis, the missing conceptual link between two discourse elements occurring in adjacent utterances must be inferred in order to establish the local coherence of the discourse (for an early statement of that idea, cf. Clark (1975)). In the surface form of utterance (1b) the information is missing that "Akkus" (accumulator) links up with "316LT". This relation can only be made explicit if conceptual knowledge about the domain, viz. the relation part-of between the concepts ACCUMULATOR and 316LT, is available (see Hahn et al. (1996) for a more detailed treatment of text ellipsis resolution).

3 Principles of Functional Centering

Within the framework of the centering model (Grosz et al., 1995), we distinguish each utterance's backward-looking center (Cb(Un)) and its forward-looking centers (Cf(Un)). The ranking imposed on the elements of the Cf reflects the assumption that the most highly ranked element of Cf(Un) - the preferred center Cp(Un) - is the most preferred antecedent of an anaphoric or elliptical expression in Un+1, while the remaining elements are partially ordered according to decreasing preference for establishing referential links. Hence, the most important single construct of the centering model is the ordering of the list of forward-looking centers (Walker et al., 1994).

The main difference between Grosz et al.'s work and our proposal concerns the criteria for ranking the forward-looking centers. While Grosz et al. assume that grammatical roles are the major determinant for the ranking on the Cf, we claim that for languages with relatively free word order (such as German), it is the functional information structure (IS) of the utterance in terms of the context-boundedness or unboundedness of discourse elements. The centering data structures and the notion of context-boundedness can be used to redefine Danes' (1974a) trichotomy between given information, theme and new information (rheme).
The Cb(Un), the most highly ranked element of Cf(Un-1) realized in Un, corresponds to the element which represents the given information. The theme of Un is represented by the preferred center Cp(Un), the most highly ranked element of Cf(Un). The theme/rheme hierarchy of Un is represented by Cf(Un), which - in our approach - is partly determined by the Cf(Un-1): the rhematic elements of Un are the ones not contained in Cf(Un-1) (unbound discourse elements); they express the new information in Un. The ones contained in Cf(Un-1) and Cf(Un) (bound discourse elements) are thematic, with the theme/rheme hierarchy corresponding to the ranking in the Cfs. The distinction between context-bound and unbound elements is important for the ranking on the Cf, since bound elements are generally ranked higher than any other non-anaphoric elements (cf. also Hajicova et al. (1992)).

An alternative definition of theme and rheme in the context of the centering approach is proposed by Rambow (1993). In his approach the theme corresponds to the Cb, and the theme/rheme hierarchy can be derived from those elements of Cf(Un-1) that are realized in Un. Rambow does not distinguish, however, between the information structure and the thematic structure of utterances, which leads to problems when a change of the criteria for recognizing the thematic structure is envisaged. Our approach is flexible enough to accommodate other conceptions of theme/rheme as defined, e.g., by Hajicova et al. (1995), since this change affects only the thematic but not the information structure of utterances.

    bound element(s)  >  unbound element(s)

    anaphora  >ISbound  (possessive pronoun xor elliptical antecedent)
              >ISbound  (elliptical expression xor head of anaphoric expression)

    nom head1  >prec  nom head2  >prec  ...  >prec  nom headn

Table 2: Functional Ranking Constraints on the Cf

The rules holding for the ranking on the Cf, derived from a German language corpus, are summarized in Table 2. They are organized into three layers [footnote 2]. At the top level, the first relation in Table 2 denotes the basic relation for the overall ranking of information structure (IS) patterns. Accordingly, any context-bound expression in the utterance Un-1 is given the highest preference as a potential antecedent of an anaphoric or elliptical expression in Un, while any unbound expression is ranked below the context-bound expressions.

The second relation depicted in Table 2, >ISbound, denotes preference relations dealing exclusively with multiple occurrences of (resolved) anaphora, i.e., bound elements, in the preceding utterance. >ISbound distinguishes among different forms of context-bound elements (viz. anaphora, possessive pronouns and textual ellipses) and their associated preference order. The final element of >ISbound is either the elliptical expression or the head of an anaphoric expression which is used as a possessive determiner, a Saxon genitive, a prepositional or a genitival attribute (cf. the ellipsis in (2c): "die Ladezeit" (the charge time) vs. "seine Ladezeit" (its charge time) or "die Ladezeit des Akkus" (the accumulator's charge time)).

[Footnote 2: Disregarding coordinations, the ordering we propose induces a strict ordering on the entities in a center list.]
[Footnote 3: Minuten (minutes) is excluded from the Cf for reasons concerning the processing of complex sentences (cf. Strube (1996)).]

For illustration purposes, consider text fragment (1) and the corresponding Cb/Cf data in Table 3 [footnote 3]: In (1d) the pronoun "er" (it) might be resolved to "Akku" (accumulator) or "Rechner" (computer), since both fulfill the agreement condition for pronoun resolution.
Now, "der Rechner" (computer) figures as a nominal anaphor, already resolved to DELL-316LT, while "Akku" (accumulator) is only the antecedent of the elliptical expression "der Entleerung" (discharge). Therefore, the preferred antecedent of "er" (it) is determined as Rechner (computer).

The bottom level of Table 2 specifies >prec, which covers the preference order for multiple occurrences of the same type of any information structure pattern, e.g., the occurrence of two anaphora or two unbound elements (all heads in an utterance are ordered by linear precedence relative to their text position). In sentence (2b), two nominal anaphors occur, "Akku" (accumulator) and "Rechner" (computer). The textual ellipsis "Ladezeit" (charge time) in (2c) has to be resolved to the most preferred element of the Cf of (2b), viz. the entity denoted by "Akku" (accumulator) (cf. Table 4). Note that "Rechner" (computer) is the subject of the sentence, though it is not the preferred antecedent, since "Akku" (accumulator) precedes "Rechner" (computer) and is anaphoric as well.

(2) a. Der 316LT wird mit einem NiMH-Akku bestueckt.
       (The 316LT is - with a NiMH-accumulator - equipped.)
    b. Durch diesen neuartigen Akku wird der Rechner fuer ca. 4 Stunden mit Strom versorgt.
       (Because of this new type of accumulator - is the computer - for approximately 4 hours - with power - provided.)
    c. Darueberhinaus ist die Ladezeit mit 1,5 Stunden sehr kurz.
       (Also - is - the charge time of 1.5 hours - quite short.)

    (1a) Cb: DELL-316LT: 316LT                                                     CONTINUE
         Cf: [DELL-316LT: 316LT, RESERVE-BATTERY-PACK: Reserve-Batteriepack,
              TIME-UNIT-PAIR: 2 Minuten, POWER: Strom]
    (1b) Cb: DELL-316LT: --                                                        CONTINUE
         Cf: [DELL-316LT: --, ACCU: Akku, STATUS: Status, USER: Anwender]
    (1c) Cb: DELL-316LT: Rechner                                                   CONTINUE
         Cf: [DELL-316LT: Rechner, ACCU: --, DISCHARGE: Entleerung,
              TIME-UNIT-PAIR: 30 Minuten, TIME-UNIT-PAIR: 5 Sekunden]
    (1d) Cb: DELL-316LT: er                                                        CONTINUE
         Cf: [DELL-316LT: er, LOW-BATTERY-LED: Low-Battery-LED]

Table 3: Centering Data for Text Fragment (1)

    (2a) Cb: DELL-316LT: 316LT                                                     CONTINUE
         Cf: [DELL-316LT: 316LT, NIMH-ACCU: NiMH-Akku]
    (2b) Cb: DELL-316LT: Rechner                                                   RETAIN
         Cf: [NIMH-ACCU: Akku, DELL-316LT: Rechner,
              TIME-UNIT-PAIR: 4 Stunden, POWER: Strom]
    (2c) Cb: NIMH-ACCU: --                                                         SMOOTH-SHIFT
         Cf: [NIMH-ACCU: --, CHARGE-TIME: Ladezeit, TIME-UNIT-PAIR: 1,5 Stunden]

Table 4: Centering Data for Text Fragment (2)

Given these basic relations, we may formulate the composite relation >IS (Table 5). It states the conditions for the comprehensive ordering of items on the Cf (x and y denote lexical heads).

    >IS := { (x,y) | if x and y both represent the same type of IS pattern,
                       then the relation >prec applies to x and y;
                     else if x and y both represent different forms of bound elements,
                       then the relation >ISbound applies to x and y;
                     else the basic relation of Table 2 (bound before unbound)
                       applies to x and y }

Table 5: Information Structure Relation
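Operationally, Tables 2 and 5 define a comparator over the discourse entities of an utterance. The following Python sketch is our own illustration of that layered ordering, not the authors' implementation; the attribute names ("bound", "form", "position") and the dictionary encoding of entities are hypothetical.

```python
from functools import cmp_to_key

# Rank of the different forms of bound elements under >ISbound (Table 2);
# equal rank encodes the "xor" alternatives at the same level.
BOUND_FORM_RANK = {
    "anaphora": 0,
    "possessive_pronoun": 1, "elliptical_antecedent": 1,
    "elliptical_expression": 2, "head_of_anaphoric_np": 2,
}

def is_order(x, y):
    """Return -1 if x outranks y on the Cf, else 1 (the >IS relation of Table 5)."""
    same_type = x["bound"] == y["bound"] and (
        not x["bound"]
        or BOUND_FORM_RANK[x["form"]] == BOUND_FORM_RANK[y["form"]])
    if same_type:
        # Same type of IS pattern: linear precedence (>prec) decides.
        return -1 if x["position"] < y["position"] else 1
    if x["bound"] and y["bound"]:
        # Different forms of bound elements: >ISbound decides.
        return -1 if BOUND_FORM_RANK[x["form"]] < BOUND_FORM_RANK[y["form"]] else 1
    # Otherwise the basic relation applies: bound before unbound.
    return -1 if x["bound"] else 1

def rank_cf(entities):
    """Order the forward-looking centers of an utterance."""
    return sorted(entities, key=cmp_to_key(is_order))

# Utterance (2b): both "Akku" and "Rechner" are anaphoric (bound), and
# "Akku" precedes "Rechner" in the text, so it heads the Cf (cf. Table 4).
cf_2b = rank_cf([
    {"head": "Rechner", "bound": True, "form": "anaphora", "position": 5},
    {"head": "Akku", "bound": True, "form": "anaphora", "position": 3},
    {"head": "4 Stunden", "bound": False, "form": None, "position": 7},
])
assert [e["head"] for e in cf_2b] == ["Akku", "Rechner", "4 Stunden"]
```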
4 Evaluation

In this section, we first describe the empirical and methodological framework in which our evaluation experiments were embedded, and then turn to a discussion of evaluation results and the conclusions we draw from the data.

4.1 Evaluation Framework

The test set for our evaluation experiment consisted of three different text sorts: 15 product reviews from the information technology (IT) domain (one of the two main corpora at our lab), one article from the German news magazine Der Spiegel, and the first two chapters of a short story by the German writer Heiner Mueller [footnote 4]. The evaluation was carried out manually in order to circumvent error chaining [footnote 5]. Table 6 summarizes the total numbers of anaphors, textual ellipses, utterances and words in the test set.

               anaphors   ellipses   utterances   words
    IT            308        294         451       5542
    Spiegel       102         25          82       1468
    Mueller       153         20          87        867
    Total         563        339         620       7877

Table 6: Test Set

[Footnote 4: Liebesgeschichte. In Heiner Mueller, Geschichten aus der Produktion 2. Berlin: Rotbuch Verlag, pp. 57-63.]
[Footnote 5: A performance evaluation of the current anaphora and ellipsis resolution capacities of our system is reported in Hahn et al. (1996).]

Given this test set, we compared three major approaches to centering, viz. the original model whose ordering principles are based on grammatical role indicators only (the so-called canonical model) as characterized by Table 1, an "intermediate" model which can be considered a naive approach to free word order languages, and, of course, the functional model based on information structure constraints as stated in Table 2. For reasons discussed below, augmented versions of the naive and the canonical approaches will also be considered. They are characterized by the additional constraint that elliptical antecedents are ranked higher than elliptical expressions (short: "ante > express").

For the evaluation of a centering algorithm on naturally occurring text it is necessary to specify how to deal with complex sentences. In particular, methods for the interaction between intra- and intersentential anaphora resolution have to be defined, since the centering model is concerned only with the latter case (see Suri & McCoy (1994)). We use an approach as described by Strube (1996) for the evaluation.

Since most of the anaphors in these texts are nominal anaphors, the resolution of which is much more restricted than that of pronominal anaphors, the rate of success for the whole anaphora resolution process is not significant enough for a proper evaluation of the functional constraints. The reason for this lies in the fact that nominal anaphors are far more constrained by conceptual criteria than pronominal anaphors. So the chance to properly resolve a nominal anaphor, even at lower ranked positions in the center lists, is greater than for pronominal anaphors. Since we shift our evaluation criteria away from simple anaphora resolution success data to structural conditions based on the proper ordering of center lists (in particular, we focus on the most highly ranked item of the forward-looking centers), these criteria compensate for the high proportion of nominal anaphora that occur in our test sets.

The types of centering transitions we make use of (cf. Table 7) are taken from Walker et al. (1994).

                                    Cb(Un) = Cp(Un)    Cb(Un) /= Cp(Un)
    Cb(Un) = Cb(Un-1)
      or Cb(Un-1) undefined         CONTINUE           RETAIN
    Cb(Un) /= Cb(Un-1)              SMOOTH-SHIFT       ROUGH-SHIFT

Table 7: Transition Types
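Stated as code, Table 7 amounts to a pair of binary tests. The sketch below is our own illustration (with discourse entities encoded as plain Python values), not part of the evaluated system.

```python
def transition(cb_now, cp_now, cb_prev):
    """Transition type of utterance U_n according to Table 7.

    cb_now  -- Cb(U_n)
    cp_now  -- Cp(U_n), the most highly ranked element of Cf(U_n)
    cb_prev -- Cb(U_n-1), or None if undefined
    """
    cb_kept = cb_prev is None or cb_now == cb_prev
    if cb_now == cp_now:
        return "CONTINUE" if cb_kept else "SMOOTH-SHIFT"
    return "RETAIN" if cb_kept else "ROUGH-SHIFT"

# The RETAIN step (2b) of Table 4: the Cb stays DELL-316LT,
# but the preferred center of (2b) is the NiMH accumulator.
assert transition("DELL-316LT", "NIMH-ACCU", "DELL-316LT") == "RETAIN"
```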
4.2 Evaluation Results

In Table 8 we give the numbers of centering transitions between the utterances in the three test sets. The first column contains those which are generated by the naive approach (such a proposal was made by Gordon et al. (1993) as well as by Rambow (1993) who, nevertheless, restricts it to the German middlefield only). We simply ranked the elements of Cf according to their text position. While it is usually assumed that the elliptical expression ranks above its antecedent (Grosz et al., 1995, p.217), we assume the contrary. The second column contains the results of this modification with respect to the naive approach. In the third column of Table 8 we give the numbers of transitions which are generated by the canonical constraints as stated by Grosz et al. (1995, p.214, 217). The fourth column supplies the results of the same modification as was used for the naive approach, viz. elliptical antecedents are ranked higher than elliptical expressions. The fifth column shows the results which are generated by the functional constraints from Table 2.

First, we examine the error data for anaphora resolution for the five cases. All approaches have 99 errors in common. These are due to underspecifications at different levels, e.g., the failure to account for prepositional anaphors (16), plural anaphors (8), anaphors which refer to a member of a set (14), sentence anaphors (21), and anaphors which refer to the global focus (12). Only 6 errors of the functional approach are directly caused by an inappropriate ordering of the Cf, while the naive approach leads to 10 errors and the canonical to 7. When the antecedent of an elliptical expression is ranked above the elliptical expression itself, the error rate of these two augmented approaches increases to 12 and 9, respectively.

We now turn to the distribution of transition types for the different approaches. The centering model assumes a preference order among these transitions, e.g., CONTINUE ranks above RETAIN, and RETAIN ranks above SHIFT. This preference order reflects the presumed inference load put on the hearer or speaker to coherently decode or encode a discourse. Since the functional approach generates a larger number of CONTINUE transitions, we interpret this as a first rough indication that this approach provides for more efficient processing than its competitors.

But this reasoning is not entirely conclusive. Counting single occurrences of transition types, in general, does not reveal the entire validity of the center lists. Instead, considering adjacent transition pairs gives a more reliable picture, since depending on the text sort considered (e.g., technical vs. news magazine vs. literary texts) certain sequences of transition types may be entirely plausible, though they include transitions which, when viewed in isolation, seem to imply considerable inferencing load (cf. Table 8). For instance, a CONTINUE transition which follows a CONTINUE transition is a sequence which requires the lowest processing costs. But a CONTINUE transition which follows a RETAIN transition implies higher processing costs than a SMOOTH-SHIFT transition following a RETAIN transition. This is due to the fact that a RETAIN transition ideally predicts a SMOOTH-SHIFT in the following utterance. In this case the SMOOTH-SHIFT is the "least effort" transition, because only the first element of the Cf of the preceding utterance has to be checked to perform the SMOOTH-SHIFT transition, while in the case of CONTINUE at least one more check has to be performed. Hence, we claim that no one particular centering transition is preferred over another. Instead, we postulate that some centering transition pairs are preferred over others.
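The cheap/expensive classification developed in the remainder of this subsection (and spelled out case by case in Table 9) reduces to a single equality check. The following sketch is our own Python illustration, not the authors' code.

```python
def pair_cost(cb_now, cp_prev):
    """Cost of the transition pair (U_i-1, U_i): cheap iff Cb(U_i) = Cp(U_i-1)."""
    return "cheap" if cb_now == cp_prev else "expensive"

# After a RETAIN in U_i-1 (where Cb != Cp), promoting Cp(U_i-1) to
# Cb(U_i) -- a SMOOTH-SHIFT -- is the cheap continuation, while keeping
# the old Cb -- a CONTINUE -- is expensive.
```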
Following this 274 IT Transition Types naive naive & ante > express CONTINUE 49 167 RETAIN 269 158 SMOOTH-SHIFT 32 41 ROUGH-SHIFT 39 23 Errors ' 69 70 canonical 102 226 24 37 68 canonic~ & functional an~ > express 197 309 131 25 35 51 26 4 69 67 Spiegel Miiller CONTINUE 17 RETAIN 42 SMOOTH-SHIFT 9 ROUGH-SHIFT 7 Errors 18 CONTINUE 31 RETAIN 19 SMOOTH-SHIFT 15 ROUGH-SItlFT 14 Errors 22 CONTINUE 97 RETAIN 330 SMOOTH-SHIFT 56 ROUGH-SHIFT 60 Errors (specific errors) 109 (10) 28 32 9 6 19 31 19 17 12 22 37 28 7 3 16 32 18 15 14 22 43 50 23 12 8 13 1 0 17 f6 32 36 18 15 16 18 13 10 22 22 272 395 172 52 59 82 40 14 108 (9) 105 (6) 226 171 209 272 67 46 41 54 111 (12) 106 (7) Table 8: Numbers of Centering Transitions line of argumentation, we here propose to classify all occurrences of centering transition pairs with respect to the costs they imply. The cost-based evaluation of different C! orderings refers to evaluation criteria which form an intrinsic part of the centering model 6. Transition pairs hold for two immediately succes- sive utterances. We distinguish between two types of transition pairs, cheap ones and expensive ones. We call a transition pair cheap if the backward-looking center of the current utterance is correctly predicted by the preferred center of the immediately preced- ing utterance, i.e., Cb(Ui) = Gp(Ui_l),i = 2...n. Transition pairs are called expensive if the backward- looking center of the current utterance is not correctly predicted by the preferred center of the immediately preceding utterance, i.e., Cb(Ui) # Gp(Ui_l),i = 2... n. Table 9 contains a detailed synopsis of cheap and expensive transition pairs. In particular, chains of the RETAIN transition in passages where the Cb does not change (passages with constant theme) show that the canonical ordering constraints for the forward- looking centers are not appropriate, The numbers of centering transition pairs generated by the different approaches are shown in Table 10, In general, the functional approach shows the best re- 6As a consequence of this postulate, we have to rede- fine Rule 2 of the Centering Constraints (Grosz et al., 1995, p.215) appropriately, which gives an informal characteriza- tion of a preference for sequences of CONTINUE over se- quences of RETAIN arid, similarly, sequences of RETAIN over sequences of SHIFT. Our specification for the case of text interpretation says that cheap transitions are preferred over expensive ones, with cheap and expensive transitions as defined in Table 9. Suits, while the naive and the canonical approaches work reasonably well for the literary text, but exhibit a poor performance for the texts from the IT domain and the news magazine. The results for the latter ap- proaches become only slightly more positive with the modification of ranking the antecedent of a textual el- lipsis above the elliptical expression, but they do not compare to the results of the functional approach. We were also interested in finding out whether the functional ordering we propose possibly "includes" the grammatical role based criteria discussed so far. We, therefore, re-evaluated the examples already an- notated with Gb/C! data available in the literature (for the English language, we considered all exam- pies from Grosz et al. (1995) and Brennan et al. (1987); for Japanese we took the data from Walker et al. (1994)). Surprisingly enough, all examples of Grosz et al. (1995) passed the test successfully. Only with respect to the troublesome Alfa Romeo driving scenario (cf. Brennan et al. 
(1987, p.157)) do our constraints fail to properly rank the elements of the third sentence Cf of that example [footnote 7]. Note also that these results were achieved without having recourse to extra constraints, e.g., the shared property constraint to account for anaphora parallelism (Kameyama, 1986). We applied our constraints to Japanese examples in the same way. Again we abandoned all extra constraints set up in these studies, e.g., the Zero Topic Assignment (ZTA) rule and the special role of empathy

[Footnote 7: In essence, the very specific problem addressed by that example seems to be that Friedman has not been previously introduced in the local discourse segment and is only accessible via the global focus.]

    Transition in Ui    preceded in Ui-1 by:
                        --          CONTINUE    RETAIN      SMOOTH-SHIFT   ROUGH-SHIFT
    CONTINUE            cheap       cheap       expensive   cheap          expensive
    RETAIN              expensive   cheap       expensive   expensive      expensive
    SMOOTH-SHIFT                    expensive   cheap       expensive      cheap
    ROUGH-SHIFT                     expensive   expensive   expensive      expensive

Table 9: Costs for Transition Pairs

    Cost type             naive    naive &     canonical   canonical &   functional
                                   ante>expr               ante>expr
    IT        cheap         72       180          129          236           321
              expensive    317       209          260          153            68
    Spiegel   cheap         25        36           45           51            62
              expensive     50        39           30           24            13
    Mueller   cheap         45        48           46           48            55
              expensive     34        31           33           31            24
    Total     cheap        142       264          220          335           438
              expensive    401       279          323          208           105

Table 10: Cost Values for Centering Transition Pair Types

verbs (Walker et al., 1994). However, the results our constraints generate are the same as those generated by Walker et al. including these model extensions. Only a single problematic case remains, viz. example (30) of Walker et al. (1994, p.214) causes the same problems they described (discourse-initial utterance, semantic or world knowledge should be available). Even for the crucial examples (32)-(36) of Walker et al. (1994, p.216-221) our constraints generate the same Cfs as Walker et al.'s constraints with ZTA.

To summarize the results of our empirical evaluation, we first claim that our proposal based on functional criteria leads to substantially better and -- with respect to the inference load placed on the text understander, whether human or machine -- more plausible results for languages with free word order than the structural constraints given by Grosz et al. (1995) and those underlying a naive approach. We base these observations on an evaluation approach which considers transition pairs in terms of the inference load specific pairs imply. Second, we have gathered some evidence, still far from being conclusive, that the functional constraints on centering seem to incorporate the structural constraints for English and the modified structural constraints for Japanese. Hence, we hypothesize that functional constraints on centering might constitute a general mechanism for treating free and fixed word order languages by the same descriptive mechanism. This claim, however, has to be further substantiated by additional cross-linguistic empirical studies.

5 Comparison with Related Approaches

The centering model (Grosz et al., 1983; 1995) is concerned with the interactions between the local coherence of discourse and the choices of referring expressions. Crucial for the centering model is the way the forward-looking centers are organized. Despite several cross-linguistic studies a kind of "standard" has emerged based on the study of English (cf. Table 1 in Section 1). Only few of these cross-linguistic studies have led to changes in the basic order of discourse entities, the work of Walker et al.
(1990; 1994) being the most far-reaching exception. They consider the role of expressive means in Japanese to indicate topic status and the speaker's perspective, thus introducing functional notions, viz. TOPIC and EMPATHY, into the discussion. German, the object language we deal with, is also a free word order language like Japanese (possibly even more constrained). Our basic revision of the ordering scheme completely abandons grammatical role information and replaces it with entirely functional notions reflecting the information structure of the utterances in the discourse. Interestingly enough, several extra assumptions introduced to account, e.g., for anaphora parallelism (e.g., the shared property constraint formulated by Kameyama (1986)) can be eliminated without affecting the correctness of anaphora resolutions. Rambow (1993) has presented a theme/rheme distinction within the centering model to which we fully subscribe. His proposal concerning the centering analysis of German (already referred to as the "naive" approach; cf. Section 4) is limited, however, to the German middlefield and, hence, incomplete.

A common topic of criticism relating to focusing approaches to anaphora resolution has been the diversity of data structures they require, which are likely to hide the underlying linguistic regularities. Focusing algorithms prefer the discourse element already in focus for anaphora resolution, thus considering context-boundedness, too. But the items of the focus lists are either ordered by thematic roles (Sidner, 1983) or grammatical roles (Suri & McCoy, 1994; Dahl & Ball, 1990). Dahl & Ball (1990) improve the focusing mechanism by simplifying its data structures and, thus, their proposal is more closely related to the centering model than any other focusing mechanism. But their approach still relies upon grammatical information for the ordering of the centering list, while we use only the functional information structure as the guiding principle.

6 Conclusion

In this paper, we provided an account for ordering the forward-looking centers which is entirely based on functional notions, grounded on the information structure of utterances in a discourse. We motivated our proposal by the constraints which hold for a free word order language such as German and derived our results from data-intensive empirical studies of (real-world) expository texts. We have gathered preliminary evidence that the functional ordering of discourse entities in the centers seems to coincide with the grammatical roles of fixed word order languages. We also augmented the ordering criteria of the forward-looking centers such that they account not only for (pro)nominal but also for functional anaphora (textual ellipsis), an issue that, so far, has only been sketchily dealt with in the centering framework. The extensions we propose have been validated by the empirical analysis of real-world expository texts of considerable length. We thus follow methodological principles of corpus-based studies that have been successfully exercised in the work of Passonneau (1993). Still open are proper descriptions of deictic expressions, proper names (cf. the Alfa Romeo driving scenario), and plural or generic definite noun phrases. An anaphora resolution module and an ellipsis handler based on this functional centering model have been implemented as part of a comprehensive text parser for German.

Acknowledgments
We would like to thank our colleagues in our group for fruitful discussions and Jon Alcantara (Cambridge, UK) for re-reading the final version via Internet. This work has been funded by LGFG Baden-Wuerttemberg (M. Strube).

References

Brennan, S. E., M. W. Friedman & C. J. Pollard (1987). A centering approach to pronouns. In Proc. of ACL-87, pp. 155-162.
Clark, H. H. (1975). Bridging. In Proc. of TINLAP-1, pp. 169-174.
Cote, S. (1996). Ranking forward-looking centers. In E. Prince, A. Joshi & M. Walker (Eds.), Centering in Discourse. Oxford: Oxford University Press (to appear).
Dahl, D. A. & C. N. Ball (1990). Reference resolution in PUNDIT. In P. Saint-Dizier & S. Szpakowicz (Eds.), Logic and Logic Grammars for Language Processing, pp. 168-184. Chichester, U.K.: Ellis Horwood.
Dahl, O. (Ed.) (1974). Topic and Comment, Contextual Boundness, and Focus. Hamburg: Buske.
Danes, F. (1974a). Functional sentence perspective and the organization of the text. In F. Danes (Ed.), Papers on Functional Sentence Perspective, pp. 106-128. Prague: Academia.
Danes, F. (Ed.) (1974b). Papers on Functional Sentence Perspective. Prague: Academia.
Gordon, P. C., B. J. Grosz & L. A. Gilliom (1993). Pronouns, names, and the centering of attention in discourse. Cognitive Science, 17:311-347.
Grosz, B. J., A. K. Joshi & S. Weinstein (1983). Providing a unified account of definite noun phrases in discourse. In Proc. of ACL-83, pp. 44-50.
Grosz, B. J., A. K. Joshi & S. Weinstein (1995). Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
Hahn, U. & M. Strube (1996). Incremental centering and center ambiguity. In Proc. of the 18th Annual Conference of the Cognitive Science Society. La Jolla, CA.
Hahn, U., M. Strube & K. Markert (1996). Bridging textual ellipses. In Proc. of COLING-96.
Hajicova, E., V. Kubon & P. Kubon (1992). Stock of shared knowledge: A tool for solving pronominal anaphora. In Proc. of COLING-92, Vol. 1, pp. 127-133.
Hajicova, E., H. Skoumalova & P. Sgall (1995). An automatic procedure for topic-focus identification. Computational Linguistics, 21(1):81-94.
Halliday, M. A. K. (1967). Notes on transitivity and theme in English, Part 2. Journal of Linguistics, 3:199-244.
Halliday, M. A. K. & R. Hasan (1976). Cohesion in English. London: Longman.
Kameyama, M. (1986). A property-sharing constraint in centering. In Proc. of ACL-86, pp. 200-206.
Passonneau, R. J. (1993). Getting and keeping the center of attention. In M. Bates & R. Weischedel (Eds.), Challenges in Natural Language Processing, pp. 179-227. Cambridge, UK: Cambridge University Press.
Rambow, O. (1993). Pragmatic aspects of scrambling and topicalization in German. In IRCS Workshop on Centering in Discourse. Univ. of Pennsylvania, 1993.
Sidner, C. L. (1983). Focusing in the comprehension of definite anaphora. In M. Brady & R. Berwick (Eds.), Computational Models of Discourse, pp. 267-330. Cambridge, MA: MIT Press.
Strube, M. (1996). Processing complex sentences in the centering framework. In this volume.
Strube, M. & U. Hahn (1995). ParseTalk about sentence- and text-level anaphora. In Proc. of EACL-95, pp. 237-244.
Suri, L. Z. & K. F. McCoy (1994). RAFT/RAPR and centering: A comparison and discussion of problems related to processing complex sentences. Computational Linguistics, 20(2):301-317.
Turan, U. (1995). Null vs. Overt Subjects in Turkish: A Centering Approach. (Ph.D. thesis). University of Pennsylvania.
Walker, M. A., M. Iida & S.
Cote (1990). Centering in Japanese discourse. In Proc. of COLING-90, Appendix, 6pp.
Walker, M. A., M. Iida & S. Cote (1994). Japanese discourse and the process of centering. Computational Linguistics, 20(2):193-233.
Mechanisms for Mixed-Initiative Human-Computer Collaborative Discourse

Curry I. Guinn
Department of Computer Science, Duke University, Box 90129, Durham, NC 27708
Center for Digital Systems Engineering, Research Triangle Institute, Box 12194, Research Triangle Park, NC 27709
cig@cs.duke.edu / cig@rti.org

Abstract

In this paper, we examine mechanisms for automatic dialogue initiative setting. We show how to incorporate initiative changing in a task-oriented human-computer dialogue system, and we evaluate the effects of initiative both analytically and via computer-computer dialogue simulation.

1 Motivation: Initiative in Task-Oriented Dialogues

Efficient human-computer dialogue requires immediate utterance-by-utterance accommodation to the needs of the interaction. The participants must continuously focus all concerns on the goals of the task and avoid extraneous paths. This means that the dialogue initiative should always pass immediately to the participant who is best able to handle the current task. An agent is said to have dialogue initiative over a mutual goal when that agent controls how that goal will be solved by the collaborators. In the implemented voice dialogue system "The Circuit Fix-it Shop" (Smith et al., 1992; Smith and Hipp, 1995), the dialogue fragment given in Table 1 occurs between a human user and a computer collaborator. Utterances 2 and 4 indicate that the computer is directing the search for the missing wire in the faulty circuit. However, in Utterance 5 the user takes the initiative and suggests a different wire than has been proposed by the computer. In this paper we will present a theory explaining how initiative changes between participants and how computational agents can evaluate who should be in control of solving a goal.

    1  U  The LED is displaying an alternately flashing one and the top corner of a seven.
    2  C  There is supposed to be a wire between connector nine nine and connector one zero zero.
    3  U  It is there.
    4  C  There is supposed to be a wire between connector nine eight and connector one zero two.
    5  U  There is no wire on connector one zero four. [emphasis mine]
    6  C  There is supposed to be a wire between connector one zero four and connector one zero two.

Table 1: A Dialogue Fragment in the Circuit Fix-it Shoppe Domain

2 Automating Dialogue Initiative

Many implemented dialogue systems are question-answer systems with fixed initiative where one agent is in control and the other agent is passive (the master-slave assumption). For instance, in the LADDER system (Hendrix et al., 1978) the user has the initiative while the computer is passive. In contrast, the VODIS system (Young and Proctor, 1989) has the computer taking the initiative while the user is passive. Some dialogue systems (like GUS (Bobrow et al., 1977)) allow for a more mixed-initiative interaction; however, the places where the user can take the initiative are limited and defined a priori. The dialogue model of Smith (1995) allows for either the computer or the user to assume degrees of initiative; however, Smith presents no algorithm for the computer to change initiative during a dialogue. Our model of mixed-initiative dialogue allows either participant to be in control of the dialogue at any point in time.

Like Smith (1995) we believe that the level of initiative in the dialogue should mirror the level of initiative in the task (which is a corollary to Grosz's (1978) observation that the structure of a dialogue mirrors the structure of the underlying task). Unlike previous research in dialogue initiative, however, we attach an initiative level to each goal in the task tree. Thus an agent may have initiative over one goal but not another. As goals get pushed and popped from the problem-solving stack, initiative changes accordingly. Thus many initiative changes are done implicitly based on which goal is being solved.
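This per-goal bookkeeping can be made concrete with a small sketch. The following Python fragment is our own illustration, not code from the implemented system; the goal names and the two-valued initiative marker are hypothetical.

```python
class Goal:
    def __init__(self, name, initiative):
        self.name = name                # e.g., "fix_circuit"
        self.initiative = initiative    # "computer" or "user"

class ProblemSolvingStack:
    def __init__(self):
        self._stack = []

    def push(self, goal):
        self._stack.append(goal)

    def pop(self):
        # Goal satisfied: control reverts implicitly to whoever
        # holds initiative for the goal now on top.
        return self._stack.pop()

    def initiative(self):
        """Whoever holds initiative for the top goal controls the dialogue."""
        return self._stack[-1].initiative if self._stack else None

stack = ProblemSolvingStack()
stack.push(Goal("fix_circuit", "computer"))
# Asking the collaborator to satisfy a subgoal passes initiative for it
# (the rule is spelled out in Section 2.1 below).
stack.push(Goal("observe(switch)", "user"))
assert stack.initiative() == "user"
stack.pop()                             # subgoal answered
assert stack.initiative() == "computer"
```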
Unlike previous research in dialogue initiative, however, we attach an initiative level to each goal in the task tree. Thus an agent may have initiative over one goal but not another. As goals get pushed and popped from the problem- solving stack, initiative changes accordingly. Thus many initiative changes are done implicitly based on which goal is being solved. 2.1 The Setting of Initiative Levels In our model of dialogue, initiative levels for each goal are defined during the interaction based on 1) explicit and implicit initiative-changing utterances and 2) competency evaluation. Explicit and Implicit Initiative-Changing Ut- terances Several researchers (Whittaker and Sten- ton, 1988; Walker and Whittaker, 1990) have noted that dialogue control can be exchanged through overt cues in the discourse. Our model concentrates on two specific dialogue cues: questions and answers. When an agent AI asks another agent A2 to satisfy a goal G, agent A2 gains initiative over goal G and all subgoals of G until agent A2 passes control of one of those subgoals back to agent A1. A similar initiative-setting mechanism is fired if agent A1 an- nounces that it cannot satisfy goal G. When a goal has been answered (satisfied) the problem-solving stack is popped. The initiative will now belong to whomever the initiative is for the goal on top of the stackJ. In the following human-computer dialogue gathered by Smith and Hipp(Smith and Hipp, 1995), U: How do I fix this circuit? C: What is the LED displaying? U: Nothing. C: What is the switch at? U: Where is the switch? C: In the lower left corner. U: The switch is down. C: Put the switch up. all initiative changes can be accounted for by ex- plicit initiative-changing utterances or by popping of the problem-solving stack due to goal resolution as illustrated in Figure 1. Competency Evaluation for Initiative Set- ting How does an agent decide whether to ask its collaborator for help? An obvious approach is to ask for help when the agent is unable to satisfy a goal on its own. This approach is the basic mechanism for several dialogue systems (Young et al., 1989; Smith iSince each participant is carrying out initiative eval- uation independently, there may be conflicts on who should be in control. Numerous researchers have stud- ied how negotiation may be used to resolve these con- flicts (Guinn, 1994; Guinn, 1993a; Lambert and Car- berry, 1992; McRoy, 1993; Sidner, 1993) and Hipp, 1995; Guinn, 1994). An additional ap- proach is to ask the collaborator for help if it is be- lieved that the collaborator has a better chance of solving the goal (or solving it more efficiently). Such an evaluation requires knowledge of the collaborat- ing agent's capabilities as well as an understanding of the agent's own capabilities. Our methodology for evaluating competency in- volves a probabilistic examination of the search space of the problem domain. In the process of solv- ing a goal, there may be many branches that can be taken in an attempt to prove a goal. Rather than selecting a branch at random, intelligent behavior involves evaluating (by some criteria) each possible branch that may lead toward the solution of a goal to determine which branch is more likely to lead to a solution. In this evaluation, certain important fac- tors are examined to weight various branches. For example, during a medical exam, a patient may com- plain of dizziness, nausea, fever, headache, and itchy feet. The doctor may know of thousands of possible diseases, conditions, allergies, etc. 
To narrow the search, the doctor will try to find a pathology that accounts for these symptoms. There may be some diseases that account for all 5 symptoms, others that might account for 4 out of the 5 symptoms, and so on. In this manner, the practitioner sorts and prunes his list of possible pathologies. Competency evaluation will be based on how likely an agent's branch will be successful (based on a weighted factor analysis) and how likely the collaborator's branch will be successful (based on a weighted factor analysis and a probabilistic model of the collaborator's knowledge). In Section 3 we will sketch out how this calculation is made, present several mode selection schemes based on this factor analysis, and show the results of analytical evaluation of these schemes. In Section 4 we will present the methodology and results of using these schemes in a simulated dialogue environment.

3 Mathematical Analysis of Efficiency

Our model of best-first search assumes that for each goal there exists a set of n factors, f1, ..., fn, which are used to guide the search through the problem-solving space. Associated with each factor are two weights: wi, which is the percentage of times a successful branch will have that factor, and xi, which is the percentage of all branches that satisfy fi. If an agent, a, knows percentages q1^a, ..., qn^a of the knowledge concerning factors f1, ..., fn, respectively, then, assuming independence of factors and using Bayes' rule, the agent can calculate the success likelihood of each
[Note: xi(1-qa) is the probability that the branch satisfies factor fi but the agent does not know this fact.] We define the sorted list of branches for a goal G that an agent knows, [b~,... , b~], where for each be~, p(b~) is the likelihood that branch b~ will result in success where p(b~) >= p(b~), Vi < j. 3.1 Efficiency Analysis of Dialogue Initiative For efficient initiative-setting, it is also necessary to establish the likelihood of success for one's col- laborator's lSt-ranked branch, 2nd-ranked branch, and so on. This calculation is difficult because the agent does not have direct access to its collabora- tor's knowledge. Again, we will rely on a proba- bilistic analysis. Assume that the agent does not know exactly what is in the collaborator's knowledge but does know the degree to which the collaborator knows about the factors related to a goal. Thus, in the medical domain, the agent may know that the collaborator knows more about diseases that account for dizziness and nausea, less about diseases that cause fever and headache, and nothing about dis- eases that cause itchy feet. For computational pur- poses these degrees of knowledge for each factor can be quantified: the agent, a, may know percentage q~ of the knowledge about diseases that cause dizzi- ness while the collaborator, c, knows percentage qC of the knowledge about these diseases. Suppose the agent has 1) a user model that states that the col- laborator knows percentages q{, q~,..., q~, about fac- tors fl,f2,...,fm respectively and 2) a model of the domain which states the approximate number of branches, N'. Assuming independence, the ex- pected number of branches which satisfy all n factors is ExpAUN = N" l-Ii=l Xi" Given that a branch sat- isfies all n factors, the likelihood that the collabora- tor will know that branch is rZin_l qC. Therefore, the expected number of branches for which the collabo- rator knows all n factors is ExpAllN I~i~=1 qg. The probability that one of these branches is a success- producing branch is 1-[L~I 1-wi ~ (from Equa- tion 1). By computing similar probabilities for each combination of factors, the agent can compute the likelihood that the collaborator's first branch will be a successful branch, and so on. A more detailed he- count of this evaluation is given by Guinn (1993b; 1994). We have investigated four initiative-setting schemes using this analysis. These schemes do not necessarily correspond to any observable human-human or human-computer dialogue behav- ior. Rather, they provide a means for exploring pro- posed dialogue initiative schemes. Random In Random mode, one agent is given ini- tiative at random in the event of a conflict. This scheme provides a baseline for initiative setting algorithms. Hopefully, a proposed algorithm will do better than Random. SingleSelection In SingleSelection mode, the more knowledgeable agent (defined by which agent has the greater total percentage of knowledge) is given initiative. The initiative is set through- out the dialogue. Once a leader is chosen, the participants act in a master-slave fashion. Continuous In Continuous mode, the more knowl- edgeable agent (defined by which agent's first- ranked branch is more likely to succeed) is ini- tially given initiative. If that branch fails, this agent's second-ranked branch is compared to the other agent's first-ranked branch with the winner gaining initiative. 
In general if Agent 1 is working on its ith-ranked branch and Agent 2 is working on its jth-ranked branch, we compare A1 A1 p (hi) to Oracle In Oracle mode, an all-knowing mediator selects the agent that has the correct branch ranked highest in its list of branches. This scheme is an upper bound on the effectiveness of initiative setting schemes. No initiative setting algorithm can do better. As knowledge is varied between participants we see some significant differences between the various strategies. Figure 2 summarizes this analysis. The x and y axis represent the amount of knowledge that each agent is given 2, and the z axis represents the percentage of branches explored from a single goal. SingleSelection and Continuous modes perform sig- nificantly better than Random mode. On aver- age Continuous mode results in 40% less branches searched per goal than Random. Continuous mode 2This distribution is normalized to insure that all the knowledge is distributed between each agent. Agent 1 will have ql + (1 ql)(1- 2 - q ) ql+q2 percent of the knowl- edge while Agent 2 will have q2 + (1 - ql)(1 - q2) q~ ql "~-q2 percent of the knowledge. If ql + q2 = O, then set ql -= q2 -= 0.5. 281 E q.., 1. Rando~ o. ~::, • $ingleSdcctioa x m Co~tiw~o~ x x x x x x x x : X:.*MXX::XX: XMXXXXXX: X x::x:::,:::x::: I~-, C1 ~ x ~ x x x x x x : ~ u I ~ 0 X-axis: q i Z-axis: E~ect.e4pezceat~g¢ of q~ o.7~ bzaaches explozed ~. Figure 2: An Analytical Comparison of Dialogue Initiative-Setting Schemes performs between 15-20% better than SingleSelec- tion. The large gap between Oracle and Continuous is due to the fact that Continuous initiative selection is only using limited probabilistic information about the knowledge of each agent. 4 Computer Simulations The dialogue model outlined in this paper has been implemented, and computer-computer dia- logues have been carried out to evaluate the model and judge the effectiveness of various dialogue initia- tive schemes. In a methodology similar to that used by Power (1979), Carletta (1992) and Walker (1993), knowledge is distributed by a random process be- tween agents, and the resulting interaction between these collaborating agents is observed. This method- ology allows investigators to test different aspects of a dialogue theory. Details of this experimental strat- egy are given by Guinn (1995). 4.1 The Usage of Computer-Computer Dialogues The use of computer-computer simulations to study and build human-computer dialogue systems is controversial. Since we are building computa- tional models of dialogue, it is perfectly reason- able to explore these computational models through computer-computer simulations. The difficulty lies in what these simulations say about human- computer or computer-computer dialogues. This author argues that computer-computer simulations are one layer in the multi-layer process of build- ing human-computer dialogue systems. Computer- computer simulations allow us to evaluate our com- putational models and explore issues that can not be resolved analytically. These simulations can help us prune out some mechanisms and suggest mech- anisms that may work well in a human-computer system. For instance, if the dialogues that result in a computer-computer conversation are incoherent to a human observer, this suggests that the dialogue mechanisms employed may be inappropriate for a human-computer system. 
On the other hand, if the resulting dialogues are coherent and exhibit features that are desired in a human-computer system, this suggests that these mechanisms may work well in a human-computer system. The final test, of course, must be in the implementation of a human-computer dialogue system. The scarcity of such systems sug- gests that it is an extremely expensive process to build a functional human-computer dialogue system, and computer-computer simulations can assist in re- ducing these costs. 4.2 Simulation Results A murder mystery domain was created with 22 suspects. An annotated example dialogue from a computer-computer collaboration in this domain is presented in Figure 3. Agents were given partial information through a random process. By using either Random mode or Continuous mode, we can evaluate the effect of those mechanisms in this ex- perimental environment. Table 2 presents results computed from 5008 collaborations where the agents 282 Speaker Utterance Watson Holmes Watson Holmes Watson Holmes Watson Hohnes Holmes Watson Holmes Watson Holmes Watson Holmes Watson Holmes Watson Watson Holmes Watson Holmes Watson Who is the murderer of Lord Dunsmore? ~ Is it the case that Suspect16 is the murderer of Lord Dunsmore? Is it the case that Suspectl6 had a motive to murder Lord Dunsmore? Suspectl6 had a motive to murder Lord Dunsmore. Is it the case that Suspectl6 had access to the poison? I am unable to determine whether Suspectl6 had access to the poison. I am unable to determine whether Suspectl6 is the murderer of Lord Dunsmore. I am unable to determine whether Suspectl6 is the murderer of Lord Dunsmore. Who is the murderer of Lord Dunsmore? b Is it the case that Suspectl0 is the murderer of Lord Dunsmore? Is it the case that Suspectl0 had a motive to murder Lord Dunsmore? Suspectl0 had a motive to murder Lord Dunsmore. Is it the case that Suspectl0 had access to the poison? Suspectl0 had access to the poison. Is it the case that Suspectl0 had an opportunity to administer the poison? I am unable to determine whether Suspectl0 had an opportunity to administer the poison. I am unable to determine whether Suspectl0 is the murderer of Lord Dunsmore. I am unable to determine whether Suspectl0 is the murderer of Lord Dunsmore. Is it the case that Suspect9 is the murderer of Lord Dunsmore? Is it the case that Suspect7 is the murderer of Lord Dunsmore? c I have proven that Suspect9 has a motive to murder Lord Dunsmore and Suspect9 had access to the poison, d I have proven that Suspect7 had access to the poison, Suspect7 had an opportunity to administer the poison, and Suspect7 has a criminal disposition. ~ Suspect7 is the murderer of Lord Dunsmore. f awatson gives control of the investigation over to Holmes. Each participant uses the Continuous Mode algorithm to determine who should be in control. bHolmes is giving up control of directing the investigation here. CHolmes is challenging Watson's investigative choice. dwatson negotiates for his choice. eHolmes negotiates for his choice. fWatson now has enough information to prove that Suspect7 is the murderer. Figure 3: A Sample Dialogue 283 had to communicate to solve the task. Random Continuous Times (secs) 82.398 44.528 of Utterances 39.921 26.650 ~uspects Examined 6.188 3.412 Table 2: Data on 5008 Non-trivial Dialogues from the Murder Mystery Domain 5 Extension to Human-Computer Dialogues Currently, two spoken-dialogue human-computer systems are being developed using the underlying algorithms described in this paper. 
5 Extension to Human-Computer Dialogues

Currently, two spoken-dialogue human-computer systems are being developed using the underlying algorithms described in this paper. The Duke Programming Tutor instructs introductory computer science students how to write simple Pascal programs by providing multiple modes of input and output (voice/text/graphics) (Bierman et al., 1996). The Advanced Maintenance Assistant and Trainer (AMAT), currently being developed by Research Triangle Institute for the U.S. Army, allows a maintenance trainee to converse with a computer assistant in the diagnosis and repair of a virtual M1A1 tank. While still in prototype development, preliminary results suggest that the algorithms that were successful for efficient computer-computer collaboration are capable of participating in coherent human-machine interaction. Extensive testing remains to be done to determine the actual gains in efficiency due to various mechanisms.

One tenet of our theory is that proper initiative setting requires an effective user model. There are several mechanisms we are exploring in acquiring the kind of user model information necessary for the previously described dialogue mode algorithms. Stereotypes (Rich, 1979; Chin, 1989) are a valuable tool in domains where user classification is possible and relevant. For instance, in the domain of military equipment maintenance, users can be easily classified by rank, years of experience, equipment familiarity and so on. An additional source of user model information can be dynamically obtained in environments where the user interacts for an extended period of time. A tutoring/training system has the advantage of knowing exactly what lessons a student has taken and how well the student did on individual lessons and questions. Dynamically modifying the user model based on on-going problem solving is difficult. One mechanism that may prove particularly effective is negotiating problem-solving strategies (Guinn, 1994). The quality of a collaborator's negotiation reflects the quality of its underlying knowledge. There is a tradeoff in that negotiation is expensive, both in terms of time and computational complexity. Thus, a synthesis of user modeling techniques will probably be required for effective and efficient collaboration.

6 Acknowledgements

Work on this project has been supported by grants from the National Science Foundation (NSF-IRI-92-21842), the Office of Naval Research (N00014-94-1-0938), and ACT II funding from STRICOM for the Combat Service Support Battlelab.

References

A. Bierman, C. Guinn, M. Fulkerson, G. Keim, Z. Liang, D. Melamed, and K. Rajagopalan. 1996. Goal-oriented multimedia dialogue with variable initiative. Submitted for publication.

D.G. Bobrow, R.M. Kaplan, M. Kay, D.A. Norman, H. Thompson, and T. Winograd. 1977. GUS, a frame driven dialog system. Artificial Intelligence, 8:155-173.

J. Carletta. 1992. Planning to fail, not failing to plan: Risk-taking and recovery in task-oriented dialogue. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 896-900, Nantes, France.

D.N. Chin. 1989. KNOME: Modeling what the user knows in UC. In A. Kobsa and W. Wahlster, editors, User Models in Dialog Systems, pages 74-107. Springer-Verlag, New York.

B.J. Grosz. 1978. Discourse analysis. In D. Walker, editor, Understanding Spoken Language, chapter IX, pages 235-268. Elsevier, North-Holland, New York, NY.

C.I. Guinn. 1993a. Conflict resolution in collaborative discourse.
In Computational Models of Conflict Management in Cooperative Problem Solving, Workshop Proceedings from the 13th International Joint Conference on Artificial Intelligence, Chambery, France, August.

Curry I. Guinn. 1993b. A computational model of dialogue initiative in collaborative discourse. Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice, Papers from the 1993 Fall Symposium Series, Technical Report FS-93-05.

Curry I. Guinn. 1994. Meta-Dialogue Behaviors: Improving the Efficiency of Human-Machine Dialogue -- A Computational Model of Variable Initiative and Negotiation in Collaborative Problem-Solving. Ph.D. thesis, Duke University.

Curry I. Guinn. 1995. The role of computer-computer dialogues in human-computer dialogue system development. AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, Technical Report SS-95-06.

G.G. Hendrix, E.D. Sacerdoti, D. Sagalowicz, and J. Slocum. 1978. Developing a natural language interface to complex data. ACM Transactions on Database Systems, pages 105-147, June.

L. Lambert and S. Carberry. 1992. Modeling negotiation subdialogues. Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 193-200.

S. McRoy. 1993. Misunderstanding and the negotiation of meaning. Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice, Papers from the 1993 Fall Symposium Series, AAAI Technical Report FS-93-05, September.

R. Power. 1979. The organization of purposeful dialogues. Linguistics, 17.

E. Rich. 1979. User modeling via stereotypes. Cognitive Science, 3:329-354.

C.L. Sidner. 1993. The role of negotiation in collaborative activity. Human-Computer Collaboration: Reconciling Theory, Synthesizing Practice, Papers from the 1993 Fall Symposium Series, AAAI Technical Report FS-93-05, September.

R.W. Smith and D.R. Hipp. 1995. Spoken Natural Language Dialog Systems: A Practical Approach. Oxford University Press, New York.

R.W. Smith, D.R. Hipp, and A.W. Biermann. 1992. A dialog control algorithm and its performance. In Proceedings of the 3rd Conference on Applied Natural Language Processing.

M. Walker and S. Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 70-78.

M.A. Walker. 1993. Informational Redundancy and Resource Bounds in Dialogue. Ph.D. thesis, University of Pennsylvania.

S. Whittaker and P. Stenton. 1988. Cues and control in expert-client dialogues. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 123-130.

S.J. Young and C.E. Proctor. 1989. The design and implementation of dialogue control in voice operated database inquiry systems. Computer Speech and Language, 3:329-353.

S.R. Young, A.G. Hauptmann, W.H. Ward, E.T. Smith, and P. Werner. 1989. High level knowledge sources in usable speech recognition systems. Communications of the ACM, pages 183-194, August.
A Prosodic Analysis of Discourse Segments in Direction-Giving Monologues

Julia Hirschberg
AT&T Laboratories, 2C-409
600 Mountain Avenue
Murray Hill, NJ 07974

Christine H. Nakatani*
Harvard University
33 Oxford Street
Cambridge, MA 02138

* The second author was partially supported by NSF Grants No. IRI-90-09018, No. IRI-93-08173, and No. CDA-94-01024 at Harvard University and by AT&T Bell Laboratories.

Abstract

This paper reports on corpus-based research into the relationship between intonational variation and discourse structure. We examine the effects of speaking style (read versus spontaneous) and of discourse segmentation method (text-alone versus text-and-speech) on the nature of this relationship. We also compare the acoustic-prosodic features of initial, medial, and final utterances in a discourse segment.

1 Introduction

This paper presents empirical support for the assumption long held by computational linguists, that intonation can provide valuable cues for discourse processing. The relationship between intonational variation and discourse structure has been explored in a new corpus of direction-giving monologues. We examine the effects of speaking style (read versus spontaneous) and of discourse segmentation method (text-alone versus text-and-speech) on the nature of this relationship. We also compare the acoustic-prosodic features of initial, medial, and final utterances in a discourse segment. A better understanding of the role of intonation in conveying discourse structure will enable improvements in the naturalness of intonational variation in text-to-speech systems as well as in algorithms for recognizing discourse structure in speech-understanding systems.

2 Theoretical and Empirical Foundations

It has long been assumed in computational linguistics that discourse structure plays an important role in Natural Language Understanding tasks such as identifying speaker intentions and resolving anaphoric reference. Previous research has found that discourse structural information can be inferred from orthographic cues in text, such as paragraphing and punctuation; from linguistic cues in text or speech, such as CUE PHRASES1 (Cohen, 1984; Reichman, 1985; Grosz and Sidner, 1986; Passonneau and Litman, 1993; Passonneau and Litman, to appear) and other lexical cues (Hinkelman and Allen, 1989); from variation in referring expressions (Linde, 1979; Levy, 1984; Grosz and Sidner, 1986; Webber, 1988; Song and Cohen, 1991; Passonneau and Litman, 1993), tense, and aspect (Schubert and Hwang, 1990; Song and Cohen, 1991); from knowledge of the domain, especially for task-oriented discourses (Grosz, 1978); and from speaker intentions (Carberry, 1990; Litman and Hirschberg, 1990; Lochbaum, 1994). Recent methods for automatic recognition of discourse structure from text have incorporated thesaurus-based and other information retrieval techniques to identify changes in topic (Morris and Hirst, 1991; Yarowsky, 1991; Iwańska et al., 1991; Hearst, 1994; Reynar, 1994).

Parallel investigations on prosodic/acoustic cues to discourse structure have investigated the contributions of features such as pitch range, pausal duration, amplitude, speaking rate, and intonational contour to signaling topic change. Variation in pitch range has often been seen as conveying 'topic structure' in discourse. Brown et al. (1980) found that subjects typically started new topics relatively high in their pitch range and finished topics by compressing their range.
Silverman (1987) found that manipulation of pitch range alone, or in conjunction with pausal duration between utterances, facilitated the disambiguation of ambiguous topic structures. Avesani and Vayra (1988) also found variation in pitch range in professional recordings which appeared to correlate with topic structure, and Ayers (1992) found that pitch range correlates with hierarchical topic structure more closely in read than spontaneous conversational speech. Duration of pause between utterances or phrases has also been identified as an indicator of topic structure, with longer pauses marking major topic shifts (Lehiste, 1979; Brown, Currie, and Kenworthy, 1980; Avesani and Vayra, 1988; Passonneau and Litman, 1993); Woodbury (1987), however, found no such correlation in his data. Amplitude was also found to increase at the start of a new topic and decrease at the end (Brown, Currie, and Kenworthy, 1980). Swerts and colleagues (1992) found that melody and duration can pre-signal the end of a discourse unit, in addition to marking the discourse-unit-final utterance itself. And speaking rate has been found to correlate with structural variation; in several studies (Lehiste, 1980; Brubaker, 1972; Butterworth, 1975) segment-initial utterances exhibited slower rates, and segment-final, faster rates. Swerts and Ostendorf (1995), however, report negative rate results.

1 Also called DISCOURSE MARKERS or DISCOURSE PARTICLES, these are items such as now, first, and by the way, which explicitly mark discourse structure.

In general, these studies have lacked an independently-motivated notion of discourse structure. With few exceptions, they rely on intuitive analyses of topic structure; operational definitions of discourse-level properties (e.g., interpreting paragraph breaks as discourse segment boundaries); or 'theory-neutral' discourse segmentations, where subjects are given instructions to simply mark changes in topic. Recent studies have focused on the question of whether discourse structure itself can be empirically determined in a reliable manner, a prerequisite to investigating linguistic cues to its existence. An intention-based theory of discourse was used in (Hirschberg and Grosz, 1992; Grosz and Hirschberg, 1992) to identify intonational correlates of discourse structure in news stories read by a professional speaker. Discourse structural elements were determined by experts in the Grosz and Sidner (1986) theory of discourse structure, based on either text alone or text and speech. This study revealed strong correlations of aspects of pitch range, amplitude, and timing with features of global and local structure for both segmentation methods. Passonneau and Litman (to appear) analyzed correlations of pause, as well as cue phrases and referential relations, with discourse structure; their segmenters were asked to identify speakers' communicative "actions". The present study addresses issues of speaking style and segmentation method while exploring in more detail than previous studies the prosodic parameters that characterize initial, medial, and final utterances in a discourse segment.

3 Methods

3.1 The Boston Directions Corpus

The current investigation of discourse and intonation is based on analysis of a corpus of spontaneous and read speech, the Boston Directions Corpus.2

2 The Boston Directions Corpus was designed and collected in collaboration with Barbara Grosz.
This corpus comprises elicited monologues produced by multiple non-professional speakers, who were given written instructions to perform a series of nine increasingly complex direction-giving tasks. Speakers first explained simple routes such as getting from one station to another on the subway, and progressed gradually to the most complex task of planning a round-trip journey from Harvard Square to several Boston tourist sights. Thus, the tasks were designed to require increasing levels of planning complexity. The speakers were provided with various maps, and could write notes to themselves as well as trace routes on the maps. For the duration of the experiment, the speakers were in face-to-face contact with a silent partner (a confederate) who traced on her map the routes described by the speakers. The speech was subsequently orthographically transcribed, with false starts and other speech errors repaired or omitted; subjects returned several weeks after their first recording to read aloud from transcriptions of their own directions.

3.2 Acoustic-Prosodic Analysis

For this paper, the spontaneous and read recordings for one male speaker were acoustically analyzed; fundamental frequency and energy were calculated using Entropic speech analysis software. The prosodic transcription, a more abstract representation of the intonational prominences, phrasing, and melodic contours, was obtained by hand-labeling. We employed the ToBI standard for prosodic transcription (Pitrelli, Beckman, and Hirschberg, 1994), which is based upon Pierrehumbert's theory of American English intonation (Pierrehumbert, 1980). The ToBI transcription provided us with a breakdown of the speech sample into minor or INTERMEDIATE PHRASES (Pierrehumbert, 1980; Beckman and Pierrehumbert, 1986). This level of prosodic phrase served as our primary unit of analysis for measuring both speech and discourse properties. The portion of the corpus we report on consists of 494 and 552 intermediate phrases for read and spontaneous speech, respectively.

3.3 Discourse Segmentation

In our research, the Grosz and Sidner (1986) theory of discourse structure, hereafter G&S, provides a foundation for segmenting discourses into constituent parts. According to this model, at least three components of discourse structure must be distinguished. The utterances composing the discourse divide into segments that may be embedded relative to one another. These segments and the embedding relationships between them form the LINGUISTIC STRUCTURE. The embedding relationships reflect changes in the ATTENTIONAL STATE, the dynamic record of the entities and attributes that are salient during a particular part of the discourse. Changes in linguistic structure, and hence attentional state, depend on the discourse's INTENTIONAL STRUCTURE; this structure comprises the intentions or DISCOURSE SEGMENT PURPOSES (DSPs) underlying the discourse and relations between DSPs.

Two methods of discourse segmentation were employed by subjects who had expertise in the G&S theory. Following Hirschberg and Grosz (1992), three subjects labeled from text alone (group T) and three labeled from text and speech (group S). Other than this difference in input modality, all subjects received identical written instructions. The text for each task was presented with line breaks corresponding to intermediate phrase boundaries (i.e., ToBI BREAK INDICES of level 3 or higher (Pitrelli, Beckman, and Hirschberg, 1994)).
In the instructions, subjects were essentially asked to analyze the linguistic and intentional structures by segmenting the discourse, identifying the DSPs, and specifying the hierarchical relationships among segments.

4 Results

4.1 Discourse Segmentation

4.1.1 Raw Agreement

Labels on which all labelers in the group agreed are termed the CONSENSUS LABELS.3 The consensus labels for segment-initial (SBEG), segment-final (SF), and segment-medial (SCONT, defined as neither SBEG nor SF) phrase labels are given in Table 1.4

                       SBEG   SF    SCONT   Total
READ (N=494)
  Text alone (T)       14%    11%   32%     57%
  Text & Speech (S)    18%    14%   49%     80%
SPON (N=552)
  Text alone (T)       13%    10%   40%     61%
  Text & Speech (S)    15%    13%   54%     81%

Table 1: Percentage of Consensus Labels by Segment Boundary Type

3 Use of consensus labels is a conservative measure of labeler agreement. Results in (Passonneau and Litman, 1993) and (Swerts, 1995) show that with a larger number of labelers, notions of BOUNDARY STRENGTH can be employed.

4 Consensus percentages for the three types in Table 1 do not necessarily sum to the total consensus agreement percentage, since a phrase is both segment-initial and segment-final when it makes up a segment by itself.

Note that group T and group S segmentations differ significantly, in contrast to earlier findings of Hirschberg and Grosz (1992) on a corpus of read-aloud news stories and in support of informal findings of Swerts (1995). Table 1 shows that group S produced significantly more consensus boundaries for both read (p<.001, χ2=58.8, df=1) and spontaneous (p<.001, χ2=55.4, df=1) speech than did group T. When the read and spontaneous data are pooled, group S agreed upon significantly more SBEG boundaries (p<.05, χ2=4.7, df=1) as well as SF boundaries (p<.05, χ2=4.4, df=1) than did group T. Further, it is not the case that text-alone segmenters simply chose to place fewer boundaries in the discourse; if this were so, then we would expect a high percentage of SCONT consensus labels where no SBEGs or SFs were identified. Instead, we find that the number of consensus SCONTs was significantly higher for text-and-speech labelings than for text-alone (p<.001, χ2=49.1, df=1). It appears that the speech signal can help disambiguate among alternate segmentations of the same text. Finally, the data in Table 1 show that spontaneous speech can be segmented as reliably as its read counterpart, contrary to Ayers' (1992) results.
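These χ2 comparisons can be reproduced directly from the percentages in Table 1. The sketch below is ours, written for illustration: it converts the consensus-label rates back into counts and applies a 2x2 chi-square test with Yates's continuity correction (scipy's default for 2x2 tables); the resulting statistic for the read data matches the χ2=58.8 reported above.

    from scipy.stats import chi2_contingency

    def consensus_chi2(rate_s, rate_t, n):
        """2x2 test: consensus vs. non-consensus labels for
        group S (text-and-speech) and group T (text-alone)."""
        s_agree = round(rate_s * n)
        t_agree = round(rate_t * n)
        table = [[s_agree, n - s_agree],
                 [t_agree, n - t_agree]]
        chi2, p, df, _ = chi2_contingency(table)
        return chi2, p, df

    # Read speech: 80% vs. 57% consensus labels over N=494 phrases.
    chi2, p, df = consensus_chi2(0.80, 0.57, 494)
    print(f"chi2={chi2:.1f}, df={df}, p={p:.2g}")  # chi2 ~ 58.8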
4.1.2 Inter-labeler Reliability

Comparisons of inter-labeler reliability, that is, the reproducibility of a coding scheme given multiple labelers, provide another perspective on the segmentation data. How best to measure inter-labeler reliability for discourse segmentation tasks, especially for hierarchical segmentation, is an open research question (Passonneau and Litman, to appear; Carletta, 1995; Flammia and Zue, 1995; Rotondo, 1984; Swerts, 1995). For comparative purposes, we explored several measures proposed in the literature, namely, COCHRAN'S Q and the KAPPA (κ) COEFFICIENT (Siegel and Castellan, 1988). Cochran's Q, originally proposed in (Hirschberg and Grosz, 1992) to measure the likelihood that similarity among labelings was due to chance, was not useful in the current study; all tests of similarity using this metric (pairwise, or comparing all labelers) gave probability near zero. We concluded that this statistic did not serve, for example, to capture the differences observed between labelings from text alone versus labelings from text and speech.

Recent discourse annotation studies (Isard and Carletta, 1995; Flammia and Zue, 1995) have measured reliability using the κ coefficient, which factors out chance agreement taking the expected distribution of categories into account. This coefficient is defined as

    κ = (P_O - P_E) / (1 - P_E)

where P_O represents the observed agreement and P_E represents the expected agreement. Typically, values of .7 or higher for this measure provide evidence of good reliability, while values of .8 or greater indicate high reliability. Isard and Carletta (1995) report pairwise κ scores ranging from .43 to .68 in a study of naive and expert classifications of types of 'moves' in the Map Task dialogues. For theory-neutral discourse segmentations of information-seeking dialogues, Flammia (Flammia and Zue, 1995) reports an average pairwise κ of .45 for five labelers and of .68 for the three most similar labelers.

An important issue in applying the κ coefficient is how one calculates the expected agreement using prior distributions of categories. We first calculated the prior probabilities for our data based simply on the distribution of SBEG versus non-SBEG labels for all labelers on one of the nine direction-giving tasks in this study, with separate calculations for the read and spontaneous versions. This task, which represented about 8% of the data for both speaking styles, was chosen because it was midway in planning complexity and in length among all the tasks. Using these distributions, we calculated κ coefficients for each pair of labelers in each condition for the remaining eight tasks in our corpus. The observed percentage of SBEG labels, prior distribution for SBEG, average of the pairwise κ scores, and standard deviations for those scores are presented in Table 2.

                       % SBEG   P_E   Avg. κ   s.d.
READ
  Text alone           .38      .53   .56      .08
  Text & Speech        .35      .55   .81      .01
SPON
  Text alone           .39      .52   .63      .04
  Text & Speech        .35      .55   .80      .03

Table 2: Comparison of Average κ Coefficients for SBEGs

The average κ scores for group T segmenters indicate weak inter-labeler reliability. In contrast, average κ scores for group S are .8 or better, indicating a high degree of inter-labeler reliability. Thus, application of this somewhat stricter reliability metric confirms that the availability of speech critically influences how listeners perceive discourse structure.

The calculation of reliability for SBEG versus non-SBEG labeling in effect tests the similarity of linearized segmentations and does not speak to the issue of how similar the labelings are in terms of hierarchical structure. Flammia has proposed a method for generalizing the use of the κ coefficient for hierarchical segmentation that gives an upper-bound estimate on inter-labeler agreement.5 We applied this metric to our segmentation data, calculating weighted averages for pairwise κ scores averaged for each task. Results for each condition, together with the lowest and highest average κ scores over the tasks, are presented in Table 3.

                       Weighted Average κ   Low   High
READ
  Text alone           0.51                 .22   .67
  Text & Speech        0.70                 .48   .87
SPON
  Text alone           0.53                 .19   .60
  Text & Speech        0.74                 .63   1.00

Table 3: Comparison of Weighted Average κ Coefficients and Extrema for Each Condition Using Flammia's Metric

Once again, averaged scores of .7 or better for text-and-speech labelings, for both speaking styles, indicate markedly higher inter-labeler reliability than do scores for text-alone labelings, which averaged .51 and .53.
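The κ computation itself is a one-liner. In the sketch below, which is ours and purely illustrative, the observed agreement P_O is back-computed from the published κ and P_E values, since the paper reports only the latter two; for example, the read text-alone condition (P_E=.53, κ=.56) implies P_O of about .79.

    def kappa(p_o, p_e):
        """Kappa coefficient: chance-corrected agreement."""
        return (p_o - p_e) / (1 - p_e)

    def observed_from_kappa(k, p_e):
        """Invert kappa to recover the observed agreement."""
        return k * (1 - p_e) + p_e

    p_o = observed_from_kappa(0.56, 0.53)   # ~0.79 (illustrative)
    print(round(kappa(p_o, 0.53), 2))       # 0.56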
5 Flammia uses a flexible definition of segment match to calculate pairwise observed agreement: roughly, a segment in one segmentation is matched if both its SBEG and SF correspond to segment boundary locations in the other segmentation.

4.2 Intonational Features of Segments

4.2.1 Phrase Classes and Features

For purposes of intonational analysis, we take advantage of the high degree of agreement among our discourse labelers and include in each segment boundary class (SBEG, SF, and SCONT) only the phrases whose classification all subjects agreed upon. We term these the CONSENSUS-LABELED PHRASES, and compare their features to those of all phrases not in the relevant class (i.e., non-consensus-labeled phrases and consensus-labeled phrases of the other types). Note that there were one-third fewer consensus-labeled phrases for text-alone labelings than for text-and-speech (see Table 1). We examined the following acoustic and prosodic features of SBEG, SCONT, and SF consensus-labeled phrases: f0 maximum and f0 average;6 rms (energy) maximum and rms average; speaking rate (measured in syllables per second); and duration of preceding and subsequent silent pauses. As for the segmentation analyses, we compared intonational correlates of segment boundary types not only for group S versus group T, but also for spontaneous versus read speech. While correlates have been identified in read speech, they have been observed in spontaneous speech only rarely and descriptively.

6 We calculated f0 maximum in two ways: as simple f0 peak within the intermediate phrase and also as f0 maximum measured at the rms maximum of the sonorant portion of the nuclear-accented syllable in the intermediate phrase (HIGH F0 in the ToBI framework (Pitrelli, Beckman, and Hirschberg, 1994)). The latter measure proved more robust, so we report results based on this metric. The same applies to measurement of rms maximum. Average f0 and rms were calculated over the entire intermediate phrase.

                 Max F0       Avg F0     Max RMS      Avg RMS    Rate       Preceding  Subsequent
                 (at HighF0)  (phrasal)  (at HighF0)  (phrasal)             Pause      Pause
SBEG    Read     higher       higher     higher       higher                longer     shorter
        Spon     higher       higher     higher       higher                longer     shorter
SCONT   Read     lower**      lower**    lower        lower                 shorter†   shorter†
        Spon     lower                                           slower*    shorter*   shorter†
SF      Read     lower        lower      lower        lower      faster*†   shorter    longer
        Spon     lower        lower      lower        lower      faster†    shorter    longer

Table 4: Acoustic-Prosodic Correlates of Consensus Labelings from Text Alone

                 Max F0       Avg F0     Max RMS      Avg RMS    Rate       Preceding  Subsequent
                 (at HighF0)  (phrasal)  (at HighF0)  (phrasal)             Pause      Pause
SBEG    Read     higher       higher     higher       higher                longer     shorter
        Spon     higher       higher     higher       higher                longer     shorter
SCONT   Read     lower        lower†                  lower                 shorter†   shorter†
        Spon     lower                                           slower†    shorter†   shorter†
SF      Read     lower        lower      lower        lower      faster*    shorter    longer
        Spon     lower        lower      lower        lower      faster     shorter    longer

Table 5: Acoustic-Prosodic Correlates of Consensus Labelings from Text and Speech

4.2.2 Global Intonational Correlates

We found strong correlations for consensus SBEG, SCONT, and SF phrases for all conditions. Results for group T are given in Table 4, and for group S, in Table 5.7

Consensus SBEG phrases in all conditions possess significantly higher maximum and average f0, higher maximum and average rms, shorter subsequent pause, and longer preceding pause. For consensus SCONT phrases, we found some differences between read and spontaneous speech for both labeling methods. Features for group T included significantly lower f0 maximum and average and lower rms maximum and average for read speech, but only lower f0 maximum for the spontaneous condition. Group S features for SCONT were identical to group T except for the absence of a correlation for maximum rms. While SCONT phrases for both speaking styles exhibited significantly shorter preceding and subsequent pauses than other phrases, only the spontaneous condition showed a significantly slower rate. For consensus SF phrases, we again found similar patterns for both speaking styles and both labeling methods, namely lower f0 maximum and average, lower rms maximum and average, faster speaking rate, shorter preceding pauses, and longer subsequent pauses.

7 T-tests were used to test for statistical significance of difference in the means of two classes of phrases. Results reported are significant at the .005 level or better, except where '*' indicates significance at the .03 level or better. Results were calculated using one-tailed t-tests, except where '†' indicates a two-tailed test.

While it may appear somewhat surprising that results for both labeling methods match so closely, in fact, correlations for text-and-speech labels presented in Table 5 were almost invariably statistically stronger than those for text-alone labels presented in Table 4. These more robust results for text-and-speech labelings occur even though the data set of consensus text-and-speech labels is considerably larger than the data set of consensus text-alone labels.

4.2.3 Local Intonational Correlates

With a view toward automatically segmenting a spoken discourse, we would like to directly classify phrases of all three discourse categories. But SCONT and SF phrases exhibit similar prominence features and appear distinct from each other only in terms of timing differences. A second issue is whether such classification can be done 'on-line.' To address both of these issues, we made pairwise comparisons of consensus-labeled phrase groups using measures of relative change in acoustic-prosodic parameters over a local window of two consecutive phrases. Table 6 presents significant findings on relative changes in f0, loudness (measured in decibels), and speaking rate, from prior to current intermediate phrase.8

                         Max F0 Change   Max DB Change   Rate Change
                         (at HighF0s)    (at HighF0s)
SBEG vs. SCONT   Read    increase        increase
                 Spon    increase        increase
SCONT vs. SF     Read    increase*       increase*
                 Spon    increase        increase
SBEG vs. SF      Read    increase        increase
                 Spon    increase        decrease*†

Table 6: Acoustic-Prosodic Change from Preceding Phrase for Consensus Labelings from Text and Speech

8 We present results only for text-and-speech labelings; results for text-alone were quite similar. Note that 'increase' means that there is a significantly greater increase in f0, rms, or rate from prior to current phrase for category 1 than for category 2 of the comparison, and 'decrease' means that there is a significantly greater decrease. T-tests again were one-tailed unless marked by †, and significance levels were .002 or better except those marked by *, which were at .01 or better.

First, note that SBEG is distinguished from both SCONT and SF in terms of f0 change and db change from prior phrase; that is, while SBEG phrases are distinguished on a variety of measures from all other phrases (including non-consensus-labeled phrases) in Table 5, this table shows that SBEGs are also distinguishable directly from each of the other consensus-labeled categories. Second, while SCONT and SF appear to share prominence features in Table 5, Table 6 reveals differences between SCONT and SF in amount of f0 and db change. Thus, in addition to lending themselves to on-line processing, local measures may also capture valuable prominence cues to distinguish between segment-medial and segment-final phrases.
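Both the per-phrase feature vector of Section 4.2.1 and the two-phrase change measures of Table 6 are straightforward to compute. The sketch below is ours, not the Entropic-based pipeline used in the study; it assumes f0 and energy tracks sampled at a fixed frame rate, uses the simple f0 peak rather than the HIGH F0 measure of footnote 6, and derives the local change features from consecutive feature vectors.

    import math
    import numpy as np

    FRAME_RATE = 100  # assumed frames per second

    def phrase_features(f0, rms, start, end, n_syllables,
                        prev_end, next_start):
        """Feature vector for one intermediate phrase.

        f0, rms: per-frame tracks (f0 = 0 where unvoiced);
        start/end: phrase boundaries in seconds;
        prev_end/next_start: neighboring phrase edges, for pauses."""
        lo, hi = int(start * FRAME_RATE), int(end * FRAME_RATE)
        voiced = f0[lo:hi][f0[lo:hi] > 0]
        return {
            "f0_max": voiced.max(),
            "f0_avg": voiced.mean(),
            "rms_max": rms[lo:hi].max(),
            "rms_avg": rms[lo:hi].mean(),
            "rate": n_syllables / (end - start),   # syllables/sec
            "preceding_pause": start - prev_end,
            "subsequent_pause": next_start - end,
        }

    def change_features(prev, cur):
        """Relative change from the prior phrase, computable on-line."""
        return {
            "f0_change": cur["f0_max"] - prev["f0_max"],
            "db_change": 20 * math.log10(cur["rms_max"] / prev["rms_max"]),
            "rate_change": cur["rate"] - prev["rate"],
        }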
5 Conclusion

Although this paper reports results from only a single speaker, the findings are promising. We have demonstrated that a theory-based method for discourse analysis can provide reliable segmentations of spontaneous as well as read speech. In addition, the availability of speech in the text-and-speech labeling method led to significantly higher reliability scores. The stronger correlations found for intonational features of the text-and-speech labelings suggest not only that discourse labelers make use of prosody in their analyses, but also that obtaining such data can lead to more robust modeling of the relationship between intonation and discourse structure.

The following preliminary results can be considered for incorporation in such a model. First, segment-initial utterances differ from medial and final utterances in both prominence and rhythmic properties. Segment-medial and segment-final utterances are distinguished more clearly by rhythmic features, primarily pause. Finally, all correlations found for global parameters can also be computed based on relative change in acoustic-prosodic parameters in a window of two phrases.

Ongoing research is addressing the development of automatic classification algorithms for discourse boundary type; the role of prosody in conveying hierarchical relationships among discourse segments; individual speaker differences; and discourse segmentation methods that can be used by naive subjects.

References

Avesani, Cinzia and Mario Vayra. 1988. Discorso, segmenti di discorso e un'ipotesi sull'intonazione. In corso di stampa negli Atti del Convegno Internazionale "Sull'Interpunzione", Vallecchi, Firenze, Maggio, pages 8-53.

Ayers, Gayle M. 1992. Discourse functions of pitch range in spontaneous and read speech. Paper presented at the Linguistic Society of America Annual Meeting.

Beckman, Mary and Janet Pierrehumbert. 1986. Intonational structure in Japanese and English. Phonology Yearbook, 3:15-70.

Brown, G., K. Currie, and J. Kenworthy. 1980. Questions of Intonation. University Park Press, Baltimore.

Brubaker, R. S. 1972. Rate and pause characteristics of oral reading. Journal of Psycholinguistic Research, 1(2):141-147.

Butterworth, B. 1975. Hesitation and semantic planning in speech. Journal of Psycholinguistic Research, 4:75-87.

Carberry, Sandra. 1990. Plan Recognition in Natural Language Dialogue. MIT Press, Cambridge MA.

Carletta, Jean C. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational Linguistics, 22(2), to appear.

Cohen, Robin. 1984. A computational theory of the function of clue words in argument understanding. In Proceedings of the 10th International Conference on Computational Linguistics, pages 251-255, Stanford.

Flammia, Giovanni and Victor Zue. 1995. Empirical evaluation of human performance and agreement in parsing discourse constituents in spoken dialogue. In Proceedings of EUROSPEECH-95, Volume 3, pages 1965-1968, Madrid.

Grosz, B. and J. Hirschberg. 1992. Some intonational characteristics of discourse structure.
In Proceedings of the 2nd International Conference on Spoken Language Processing, pages 429-432, Banff, October.

Grosz, Barbara. 1978. Discourse analysis. In D. Walker, editor, Understanding Spoken Language. Elsevier, pages 235-268.

Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.

Hearst, Marti A. 1994. Context and Structure in Automated Full-Text Information Access. Ph.D. thesis, University of California at Berkeley. Available as Report No. UCB/CSD-94/836.

Hinkelman, E. and J. Allen. 1989. Two constraints on speech act ambiguity. In Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, pages 212-219, Vancouver.

Hirschberg, J. and B. Grosz. 1992. Intonational features of local and global discourse structure. In Proceedings of the Speech and Natural Language Workshop, pages 441-446, Harriman NY, DARPA, Morgan Kaufmann, February.

Isard, Amy and Jean Carletta. 1995. Transaction and action coding in the map task corpus. Research paper HCRC/RP-65, March, Human Communication Research Centre, University of Edinburgh, Edinburgh.

Iwańska, Lucia, Douglas Appelt, Damaris Ayuso, Kathy Dahlgren, Bonnie Glover Stalls, Ralph Grishman, George Krupka, Christine Montgomery, and Ellen Riloff. 1991. Computational aspects of discourse in the context of MUC-3. In Proceedings of the Third Message Understanding Conference (MUC-3), pages 256-282, San Mateo, CA, Morgan Kaufmann, May.

Lehiste, I. 1979. Perception of sentence and paragraph boundaries. In B. Lindblom and S. Oehman, editors, Frontiers of Speech Research. Academic Press, London, pages 191-201.

Lehiste, I. 1980. Phonetic characteristics of discourse. Paper presented at the Meeting of the Committee on Speech Research, Acoustical Society of Japan.

Levy, Elena. 1984. Communicating Thematic Structure in Narrative Discourse: The Use of Referring Terms and Gestures. Ph.D. thesis, The University of Chicago, Chicago, June.

Linde, C. 1979. Focus of attention and the choice of pronouns in discourse. In T. Givon, editor, Syntax and Semantics, Vol. 12: Discourse and Syntax. The Academic Press, New York, pages 337-354.

Litman, Diane and Julia Hirschberg. 1990. Disambiguating cue phrases in text and speech. In Proceedings of the 13th International Conference on Computational Linguistics, pages 251-256, Helsinki, August.

Lochbaum, Karen. 1994. Using Collaborative Plans to Model the Intentional Structure of Discourse. Ph.D. thesis, Harvard University. Available as Tech Report TR-25-94.

Morris, J. and G. Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17:21-48.

Passonneau, R. and D. Litman. 1993. Feasibility of automated discourse segmentation. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, pages 148-155, Columbus.

Passonneau, Rebecca J. and Diane J. Litman. To appear. Empirical analysis of three dimensions of spoken discourse: Segmentation, coherence and linguistic devices. In E. Hovy and D. Scott, editors, Burning Issues in Discourse. Springer Verlag.

Pierrehumbert, Janet B. 1980. The Phonology and Phonetics of English Intonation. Ph.D. thesis, Massachusetts Institute of Technology, September. Distributed by the Indiana University Linguistics Club.

Pitrelli, John, Mary Beckman, and Julia Hirschberg. 1994.
Evaluation of prosodic transcription labeling reliability in the ToBI framework. In Proceedings of the 3rd International Conference on Spoken Language Processing, volume 2, pages 123-126, Yokohama.

Reichman, R. 1985. Getting Computers to Talk Like You and Me: Discourse Context, Focus, and Semantics. Bradford, The Massachusetts Institute of Technology, Cambridge.

Reynar, Jeffrey C. 1994. An automatic method of finding topic boundaries. In Proceedings of the Student Session of the 32nd Annual Meeting of the Association for Computational Linguistics, pages 331-333, Las Cruces, NM.

Rotondo, John A. 1984. Clustering analysis of subjective partitions of text. Discourse Processes, 7:69-88.

Schubert, L. K. and C. H. Hwang. 1990. Picking reference events from tense trees. In Proceedings of the Speech and Natural Language Workshop, pages 34-41, Hidden Valley PA. DARPA.

Siegel, S. and N. John Castellan, Jr. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw-Hill, New York, second edition.

Silverman, K. 1987. The Structure and Processing of Fundamental Frequency Contours. Ph.D. thesis, Cambridge University, Cambridge UK.

Song, F. and R. Cohen. 1991. Tense interpretation in the context of narrative. In Proceedings of the 9th National Conference of the American Association for Artificial Intelligence, pages 131-136.

Swerts, M., R. Geluykens, and J. Terken. 1992. Prosodic correlates of discourse units in spontaneous speech. In Proceedings of the International Conference on Spoken Language Processing, pages 421-428, Banff, Canada, October.

Swerts, Marc. 1995. Combining statistical and phonetic analyses of spontaneous discourse segmentation. In Proceedings of the XIIth International Congress of Phonetic Sciences, volume 4, pages 208-211, Stockholm, August.

Swerts, Marc and Mari Ostendorf. 1995. Discourse prosody in human-machine interactions. In Proceedings ESCA Workshop on Spoken Dialogue Systems: Theories and Applications, pages 205-208, Vigsø, Denmark, May/June.

Webber, B. 1988. Discourse deixis: Reference to discourse segments. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 113-122, Buffalo.

Woodbury, Anthony C. 1987. Rhetorical structure in a central Alaskan Yupik Eskimo traditional narrative. In J. Sherzer and A. Woodbury, editors, Native American Discourse: Poetics and Rhetoric, pages 176-239, Cambridge University Press, Cambridge UK.

Yarowsky, David. 1991. Personal communication, December.
An Information Structural Approach to Spoken Language Generation

Scott Prevost
The Media Laboratory
Massachusetts Institute of Technology
20 Ames Street
Cambridge, Massachusetts 02139-4307 USA
prevost@media.mit.edu

Abstract

This paper presents an architecture for the generation of spoken monologues with contextually appropriate intonation. A two-tiered information structure representation is used in the high-level content planning and sentence planning stages of generation to produce efficient, coherent speech that makes certain discourse relationships, such as explicit contrasts, appropriately salient. The system is able to produce appropriate intonational patterns that cannot be generated by other systems which rely solely on word class and given/new distinctions.

1 Introduction

While research on generating coherent written text has flourished within the computational linguistics and artificial intelligence communities, research on the generation of spoken language, and particularly intonation, has received somewhat less attention. In this paper, we argue that commonly employed models of text organization, such as schemata and rhetorical structure theory (RST), do not adequately address many of the issues involved in generating spoken language. Such approaches fail to consider contextually bound focal distinctions that are manifest through a variety of different linguistic and paralinguistic devices, depending on the language.

In order to account for such distinctions of focus, we employ a two-tiered information structure representation as a framework for maintaining local coherence in the generation of natural language. The higher tier, which delineates the theme, that which links the utterance to prior utterances, and the rheme, that which forms the core contribution of the utterance to the discourse, is instrumental in determining the high-level organization of information within a discourse segment. Dividing semantic representations into their thematic and rhematic parts allows propositions to be presented in a way that maximizes the shared material between utterances.

The lower tier in the information structure representation specifies the semantic material that is in "focus" within themes and rhemes. Material may be in focus for a variety of reasons, such as to emphasize its "new" status in the discourse, or to contrast it with other salient material. Such focal distinctions may affect the linguistic presentation of information. For example, the it-cleft in (1) may mark John as standing in contrast to some other recently mentioned person. Similarly, in (2), the pitch accent on red may mark the referenced car as standing in contrast to some other car inferable from the discourse context.1

(1) It was John who spoke first.

(2) Q: Which car did Mary drive?
    A: (MARY drove)th (the RED car.)rh
        L+H*  LH(%)        H*     LL$

By appealing to the notion that the simple rise-fall tune (H* LL%) very often accompanies the rhematic material in an utterance and the rise-fall-rise tune often accompanies the thematic material (Steedman, 1991; Prevost and Steedman, 1994), we present a spoken language generation architecture for producing short spoken monologues with contextually appropriate intonation.

2 Information Structure

Information structure refers to the organization of information within an utterance.
In particular, it defines how the information conveyed by a sentence is related to the knowledge of the interlocutors and the structure of their discourse. Sentences conveying the same propositional content in different contexts need not share the same information structure. That is, information structure refers to how the semantic content of an utterance is packaged, and amounts to instructions for updating the models of the discourse participants. The realization of information structure in a sentence, however, differs from language to language. In English, for example, intonation carries much of the burden of information structure, while languages with freer word order, such as Catalan (Engdahl and Vallduvi, 1994) and Turkish (Hoffman, 1995), convey information structure syntactically.

1 In this example, and throughout the remainder of the paper, the intonation contour is informally noted by placing prosodic phrases in parentheses and marking pitch accented words with capital letters. The tunes are more formally annotated with a variant of (Pierrehumbert, 1980) notation described in (Prevost, 1995). Three different pause lengths are associated with boundaries in the modified notation. '(%)' marks intra-utterance boundaries with very little pausing, '%' marks intra-utterance boundaries associated with clauses demarcated by commas, and '$' marks utterance-final boundaries.

2.1 Information Structure and Intonation

The relationship between intonational structure and information structure is illustrated by (3) and (4). In each of these examples, the answer contains the same string of words but different intonational patterns and information structural representations.

(3) Q: I know the AMERICAN amplifier produces MUDDY treble,
       (But WHAT)   (does the BRITISH amplifier produce?)
        L+H* L(H%)   H*                               LL$
    A: (The BRITISH amplifier produces)   (CLEAN treble.)
        L+H*                     L(H%)     H*       LL$
       [theme, theme-focus on BRITISH]    [rheme, rheme-focus on CLEAN]

(4) Q: I know the AMERICAN amplifier produces MUDDY treble,
       (But WHAT)   (produces CLEAN treble?)
        L+H* L(H%)   H*                 LL$
    A: (The BRITISH amplifier)   (produces CLEAN treble.)
        H*               L(L%)    L+H*               LH$
       [rheme, rheme-focus on BRITISH]   [theme, theme-focus on CLEAN]

The theme of each utterance is considered to be represented by the material repeated from the question. That is, the theme of the answer is what links it to the question and defines what the utterance is about. The rheme of each utterance is considered to be represented by the material that is new or forms the core contribution of the utterance to the discourse. By mapping the rise-fall tune (H* LL%) onto rhemes and the rise-fall-rise tune (L+H* LH%) onto themes (Steedman, 1991; Prevost and Steedman, 1994), we can easily identify the string of words over which these two prominent tunes occur directly from the information structure. While this mapping is certainly overly simplistic, the results presented in Section 4.3 demonstrate its appropriateness for the class of simple declarative sentences under investigation.

Knowing the strings of words to which these two tunes are to be assigned, however, does not provide enough information to determine the location of the pitch accents (H* and L+H*) within the tunes. Moreover, the simple mapping described above does not account for the frequently occurring cases in which thematic material bears no pitch accents and is consequently unmarked intonationally.
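The theme/rheme-to-tune mapping can be made concrete in a few lines of code. The sketch below is ours, not the system's CCG-based implementation; constituents are simple dicts, and the focused words are assumed to have been identified by the focus tier.

    def assign_tune(constituent):
        """Map a theme to L+H* LH% and a rheme to H* LL% (Steedman
        1991; Prevost and Steedman 1994), placing the pitch accent
        on each focused word."""
        accent, boundary = (("L+H*", "LH%")
                            if constituent["role"] == "theme"
                            else ("H*", "LL%"))
        marked = [f"{w.upper()}/{accent}" if w in constituent["foci"] else w
                  for w in constituent["words"]]
        return " ".join(marked) + " " + boundary

    theme = {"role": "theme", "foci": {"british"},
             "words": ["the", "british", "amplifier", "produces"]}
    rheme = {"role": "rheme", "foci": {"clean"},
             "words": ["clean", "treble"]}
    print(assign_tune(theme), "|", assign_tune(rheme))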
Previous approaches to the problem of determining where to place accents have utilized heuristics based on "givenness." That is, content-bearing words (e.g. nouns and verbs) which had not been previously mentioned (or whose roots had not been previously mentioned) were assigned accents, while function words were de-accented (Davis and Hirschberg, 1988; Hirschberg, 1990). While these heuristics account for a broad range of intonational possibilities, they fail to account for accentual patterns that serve to contrast entities or propositions that were previously "given" in the discourse. Consider, for example, the intonational pattern in (5), in which the pitch accent on amplifier in the response cannot be attributed to its being "new" to the discourse.

(5) Q: Do critics prefer the BRITISH amplifier
                             L* H
       or the AMERICAN amplifier?
              H*             LL$
    A: They prefer the AMERICAN amplifier.
                       H*            LL$

For the determination of pitch accent placement, we rely on a secondary tier of information structure which identifies focused properties within themes and rhemes. The theme-foci and the rheme-foci mark the information that differentiates properties or entities in the current utterance from properties or entities established in prior utterances. Consequently, the semantic material bearing "new" information is considered to be in focus. Furthermore, the focus may include semantic material that serves to contrast an entity or proposition from alternative entities or propositions already established in the discourse. While the types of pitch accents (H* or L+H*) are determined by the theme/rheme delineation and the aforementioned mapping onto tunes, the locations of pitch accents are determined by the assignment of foci within the theme and rheme, as illustrated in (3) and (4). Note that it is in precisely those cases where thematic material, which is "given" by default, does not contrast with any other previously established properties or entities that this material is intonationally unmarked, as in (6).

(6) Q: Which amplifier does Scott PREFER?
                                  H*   LL$
    A: (He prefers)th (the BRITISH amplifier.)rh
                            H*              LL$

2.2 Contrastive Focus Algorithm

The determination of contrastive focus, and consequently the determination of pitch accent locations, is based on the premise that each object in the knowledge base is associated with a set of alternatives from which it must be distinguished if reference is to succeed. The set of alternatives is determined by the hierarchical structure of the knowledge base. For the present implementation, only properties with the same parent or grandparent class are considered to be alternatives to one another.

Given an entity x and a referring expression for x, the contrastive focus feature for its semantic representation is computed on the basis of the contrastive focus algorithm described in (7), (8) and (9). The data structures and notational conventions are given below.

(7) DEList: a collection of discourse entities that have been evoked in prior discourse, ordered by recency. The list may be limited to some size k so that only the k most recent discourse entities pushed onto the list are retrievable.

    ASet(x): the set of alternatives for object x, i.e., those objects that belong to the same class as x, as defined in the knowledge base.

    RSet(x, S): the set of alternatives for object x as restricted by the referring expressions in DEList and the set of properties S.

    CSet(x, S): the subset of properties of S to be accented for contrastive purposes.

    Props(x): a list of properties for object x, ordered by the grammar so that nominal properties take precedence over adjectival properties.
Props(z): a list of properties for object x, ordered by the grammar so that nomi- nal properties take precedence over ad- jectival properties. The algorithm, which assigns contrastive focus in both thematic and thematic constituents, begins by isolating the discourse entities in the given con- stituent. For each such entity x, the structures de- fined above are initialized as follows: (8) Props(x) :-- [P I P(x) is true in KB ] ASet(x) :-- {y I aZt(x, y)}, x's alternatives RSet(x,{}) :-- {x}U{y [ y 6 ASet(x) ~ y E DEiist}, evoked alternatives CSet(x,{}) := {} The algorithm appears in pseudo-code in (9). 2 (9) S := {} for each P in Props(x) RSet(x, S u {P}) := {Y I Y e RSet(x, S) ~ P(y)} if RSet(x, S U {P}) = RSet(x, S) then % no restrictions were made % based on property P. CSet(x, S O {P}) := CSet(z, S) else % property P eliminated some % members of the RSeL CSet(x, S U {P}) := CSe~(x, S) U {P} endif S:=SU{P} endfor In other words, given an object x, a list of its prop- erties and a set of alternatives, the set of alternatives is restricted by including in the initial RSet only x and those objects that are explicitly referenced in the prior discourse. Initially, the set of properties to be contrasted (CSe~) is empty. Then, for each property of x in turn, the RSet is restricted to include only those objects satisfying the given property in the knowledge base. If imposing this restriction on the RSet for a given property decreases the cardinality of the RSe~, then the property serves to distinguish x from other salient alternatives evoked in the prior discourse, and is therefore added to the contrast set. Conversely, if imposing the restriction on the RSet for a given property does not change the RSet, the property is not necessary for distinguishing x from its alternatives, and is not added to the CSet. Based on this contrastive focus algorithm and the mapping between information structure and into- nation described above, we can view information structure as the representational bridge between dis- course and intonational variability. The following sections elucidate how such a formalism can be in- tegrated into the computational task of generating spoken language. 2An in-depth discussion of the algorithm and numer- ous examples ate presented in (Prevost, 1995). 296 3 Generation Architecture The task of natural language generation (NLG) has often been divided into three stages: content plan- ning, in which high-level goals are satisfied and dis- course structure is determined, sentence planning, in which high-level abstract semantic representations are mapped onto representations that more fully constrain the possible sentential realizations (Ram- bow and Korelsky, 1992; Reiter and Mellish, 1992; Meteer, 1991), and surface generation, in which the high-level propositions are converted into sentences. The selection and organization of propositions and their divisions into theme and rheme are de- termined by the content planner, which maintains discourse coherence by stipulating that semantic in- formation must be shared between consecutive utter- ances whenever possible. That is, the content plan- ner ensures that the theme of an utterance links it to material in prior utterances. The process of determining foci within themes and rhemes can be divided into two tasks: determining which discourse entities or propositions are in fo- cus, and determining how their linguistic realizations should be marked to convey that focus. 
3 Generation Architecture

The task of natural language generation (NLG) has often been divided into three stages: content planning, in which high-level goals are satisfied and discourse structure is determined; sentence planning, in which high-level abstract semantic representations are mapped onto representations that more fully constrain the possible sentential realizations (Rambow and Korelsky, 1992; Reiter and Mellish, 1992; Meteer, 1991); and surface generation, in which the high-level propositions are converted into sentences.

The selection and organization of propositions and their divisions into theme and rheme are determined by the content planner, which maintains discourse coherence by stipulating that semantic information must be shared between consecutive utterances whenever possible. That is, the content planner ensures that the theme of an utterance links it to material in prior utterances.

The process of determining foci within themes and rhemes can be divided into two tasks: determining which discourse entities or propositions are in focus, and determining how their linguistic realizations should be marked to convey that focus. The first of these tasks can be handled in the content phase of the NLG model described above. The second of these tasks, however, relies on information, such as the construction of referring expressions, that is often considered the domain of the sentence planning stage. For example, although two discourse entities e1 and e2 can be determined to stand in contrast to one another by appealing only to the discourse model and the salient pool of knowledge, the method of contrastively distinguishing between them by the placement of pitch accents cannot be resolved until the choice of referring expressions has been made. Since referring expressions are generally taken to be in the domain of the sentence planner (Dale and Haddock, 1991), the present approach resolves issues of contrastive focus assignment at the sentence planning stage as well.

During the content generation phase, the content of the utterance is planned based on the previous discourse. While template-based systems (McKeown, 1985) have been widely used, rhetorical structure theory (RST) approaches (Mann and Thompson, 1986; Hovy, 1993), which organize texts by identifying rhetorical relations between clause-level propositions from a knowledge base, have recently flourished. Sibun (1991) offers yet another alternative in which propositions are linked to one another not by rhetorical relations or pre-planned templates, but rather by physical and spatial properties represented in the knowledge base.

The present framework for organizing the content of a monologue is a hybrid of the template and RST approaches. The implementation, which is presented in the following section, produces descriptions of objects from a knowledge base with context-appropriate intonation that makes proper distinctions of contrast between alternative, salient discourse entities. Certain constraints, such as the requirement that objects be identified or defined at the beginning of a description, are reminiscent of McKeown's schemata. Rather than imposing strict rules on the order in which information is presented, the order is determined by domain-specific knowledge, the communicative intentions of the speaker, and beliefs about the hearer's knowledge. Finally, the system includes a set of rhetorical constraints that may rearrange the order of presentation for information in order to make certain rhetorical relationships salient. While this approach has proven effective in the present implementation, further research is required to determine its usefulness for a broader range of discourse types.

4 The Prolog Implementation

The monologue generation program produces text and contextually-appropriate intonation contours to describe an object from the knowledge base. The system exhibits the ability to intonationally contrast alternative entities and properties that have been explicitly evoked in the discourse even when they occur with several intervening sentences.

4.1 Content Generation

The architecture for the monologue generation program is shown in Figure 1, in which arrows represent the computational flow and lines represent dependencies among modules. The remainder of this section contains a description of the computational path through the system with respect to a single example. The input to the program is a goal to describe an object from the knowledge base, which in this case contains a variety of facts about hypothetical stereo components. In addition, the input provides a communicative intention for the goal which may affect its ultimate realization, as shown in (10). For example, given the goal describe(x), the intention persuade-to-buy(hearer, x) may result in a radically different monologue than the intention persuade-to-sell(hearer, x).

(10) Goal:  describe e1
     Input: generate(intention(bel(h1, good-to-buy(e1))))

Information from the knowledge base is selected to be included in the output by a set of relations that determines the degree to which knowledge base facts and rules support the communicative intention of the speaker. For example, suppose the system "believes" that conveying the proposition in (11) moderately supports the intention of making hearer h1 want to buy e1, and further that the rule in (12) is known by h1.
In addition, the input provides a communicative intention for the goal which may affect its ultimate realization, as shown in (10). For example, given the goal describe(x), the intention persuade-to-buy(hearer, x) may result in a radically different monologue than the intention persuade-to-sell(hearer, x).

(10) Goal: describe e1
     Input: generate(intention(bel(h1, good-to-buy(e1))))

Information from the knowledge base is selected to be included in the output by a set of relations that determines the degree to which knowledge-base facts and rules support the communicative intention of the speaker. For example, suppose the system "believes" that conveying the proposition in (11) moderately supports the intention of making hearer h1 want to buy e1, and further that the rule in (12) is known by h1.

[Figure 1: An Architecture for Monologue Generation. Omitted block diagram linking communicative goals and intentions, the sentence planner, accent assignment rules, the surface generator (CCG), the prosodically annotated monologue, and the spoken output.]

(11) bel(h1, holds(rating(X, powerful)))

(12) holds(rating(X, powerful)) :-
         holds(produce(X, Y)),
         holds(isa(Y, watts-per-channel)),
         holds(amount(Y, Z)),
         number(Z),
         Z >= 100.

The program then consults the facts in the knowledge base, verifies that the property does indeed hold, and consequently includes the corresponding facts in the set of properties to be conveyed to the hearer, as shown in (13).

(13) holds(produce(e1, e7)).
     holds(isa(e7, watts-per-channel)).
     holds(amount(e7, 100)).

The content generator starts with a simple description template that specifies that an object is to be explicitly identified or defined before other propositions concerning it are put forth. Other relevant propositions concerning the object in question are then linearly organized according to beliefs about how well they contribute to the overall intention. Finally, a small set of rhetorical predicates rearranges the linear ordering of propositions so that sets of sentences that stand in some interesting rhetorical relationship to one another will be realized together in the output. These rhetorical predicates employ information structure to assist in maintaining the coherence of the output. For example, the conjunction predicate specifies that propositions sharing the same theme or rheme be realized together in order to avoid excessive topic shifting. The contrast predicate specifies that pairs of themes or rhemes that explicitly contrast with one another be realized together. The result is a set of properties roughly ordered by the degree to which they support the given intention, as shown in (14).

(14) holds(defn(isa(e1, amplifier)))
     holds(design(e1, solid-state), pres)
     holds(cost(e1, e9), pres)
     holds(produce(e1, e7), pres)
     holds(contrast(praise(e4, e1), revile(e5, e1)), past)

The top-level propositions shown in (14) were selected by the program because the hearer (h1) is believed to be interested in the design of the amplifier and the reviews the amplifier has received. Moreover, the belief that the hearer is interested in buying an expensive, powerful amplifier justifies including information about its cost and power rating. Different sets of propositions would be generated for other (perhaps thriftier) hearers. Additionally, note that the propositions praise(e4, e1) and revile(e5, e1) are combined into the larger proposition contrast(praise(e4, e1), revile(e5, e1)).
This is accomplished by the rhetorical constraints, which determine the two propositions to be contrastive because e4 and e5 belong to the same set of alternative entities in the knowledge base and praise and revile belong to the same set of alternative propositions in the knowledge base.

The next phase of content generation recognizes the dependency relationships between the properties to be conveyed based on shared discourse entities. This phase, which represents an extension of the rhetorical constraints, arranges propositions to ensure that consecutive utterances share semantic material (cf. (McKeown et al., 1994)). This rule, which in effect imposes a strong bias for Centering Theory's continue and retain transitions (Grosz et al., 1986), determines the theme-rheme segmentation for each proposition.

4.2 Sentence Planning

After the coherence constraints from the previous section are applied, the sentence planner is responsible for making decisions concerning the form in which propositions are realized. This is accomplished by the following simple set of rules. First, definitional isa properties are realized by the matrix verb. Other isa properties are realized by nouns or noun phrases. Top-level properties (such as those in (14)) are realized by the matrix verb. Finally, embedded properties (those evoked for building referring expressions for discourse entities) are realized by adjectival modifiers if possible and otherwise by relative clauses.

While there are certainly a number of linguistically interesting aspects to the sentence planner, the most important aspect for the present purposes is the determination of theme-foci and rheme-foci. The focus assignment algorithm employed by the sentence planner, which has access to both the discourse model and the knowledge base, works as follows. First, each property or discourse entity in the semantic and information structural representations is marked as either previously mentioned or new to the discourse. This assignment is made with respect to two data structures, the discourse entity list (DEList), which tracks the succession of entities through the discourse, and a similar structure for evoked properties. Certain aspects of the semantic form are considered unaccentable because they correspond to the interpretations of closed-class items such as function words. Items that are assigned focus based on their "newness" are assigned the o focus operator, as shown in (15).

(15) Semantics: defn(isa(oe1, oc1))
     Theme: oe1
     Rheme: λx.isa(x, oc1)
     Supporting Props: isa(c1, oamplifier)
                       odesign(c1, osolid-state)

The second step in the focus assignment algorithm checks for the presence of contrasting propositions in the ISStore, a structure that stores a history of information structure representations. Propositions are considered contrastive if they contain two contrasting pairs of discourse entities, or if they contain one contrasting pair of discourse entities as well as contrasting functors.

Discourse entities are determined to be contrastive if they belong to the same set of alternatives in the knowledge base, where such sets are inferred from the isa-links that define class hierarchies. While the present implementation only considers entities with the same parent or grandparent class to be alternatives for the purposes of contrastive stress, a graduated approach that entails degrees of contrastiveness may also be possible.
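As an illustration of the alternative-set criterion just described, the following small sketch (our own, in Python; the isa mapping shown is a hypothetical stand-in for the knowledge base's class hierarchy) treats two entities as alternatives when they share a parent or grandparent class:

    def ancestors(entity, isa, depth=2):
        # Classes reachable from entity via at most `depth` isa-links.
        result, frontier = set(), {entity}
        for _ in range(depth):
            frontier = {isa[e] for e in frontier if e in isa}
            result |= frontier
        return result

    def are_alternatives(e1, e2, isa):
        # Entities contrast if their ancestor class sets overlap.
        return bool(ancestors(e1, isa) & ancestors(e2, isa))

    # e1 and e2 are both instances of the class of amplifiers,
    # so they form a contrasting pair.
    isa = {"e1": "amplifier", "e2": "amplifier", "amplifier": "component"}
    assert are_alternatives("e1", "e2", isa)

A graduated variant could instead return the depth at which the two ancestor sets first intersect, yielding the degrees of contrastiveness mentioned above.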
The effects of the focus assignment algorithm are easily shown by examining the generation of an utterance that contrasts with the utterance shown in (15). That is, suppose the generation program has finished generating the output corresponding to the examples in (10) through (15) and is assigned the new goal of describing entity e2, a different amplifier. After applying the second step of the focus assignment algorithm, contrasting discourse entities are marked with the • contrastive focus operator, as shown in (16). Since e1 and e2 are both instances of the class amplifiers and c1 and c2 both describe the class amplifiers itself, these two pairs of discourse entities are considered to stand in contrastive relationships.

(16) Semantics: defn(isa(•e2, •c2))
     Theme: •e2
     Rheme: λx.isa(x, •c2)
     Supporting Props: class(c2, amplifier)
                       design(c2, otube)

While the previous step of the algorithm determined which abstract discourse entities and properties stand in contrast, the third step uses the contrastive focus algorithm described in Section 2 to determine which elements need to be contrastively focused for reference to succeed. This algorithm determines the minimal set of properties of an entity that must be "focused" in order to distinguish it from other salient entities. For example, although the representation in (16) specifies that e2 stands in contrast to some other entity, it is the property of e2 having a tube design rather than a solid-state design that needs to be conveyed to the hearer. After applying the third step of the focus assignment to (16), the result appears as shown in (17), with "tube" contrastively focused as desired.

(17) Semantics: defn(isa(•e2, •c2))
     Theme: •e2
     Rheme: λx.isa(x, •c2)
     Supporting Props: isa(c2, amplifier)
                       design(c2, •tube)

The final step in the sentence planning phase of generation is to compute a representation that can serve as input to a surface form generator based on Combinatory Categorial Grammar (CCG) (Steedman, 1991), as shown in (18). 3

(18) Theme: np(3, s) : (e1^S)^thm(e1, •λx.h(e1)^s)^u/rh
     Rheme: s : (decl^pres)^indef(e1, (amplifier(c1) & •tube(c1))^isa(e1, c1))\np(3, s) : e1^rh

4.3 Results

Given the focus-marked output of the sentence planner, the surface generation module consults a CCG grammar which encodes the information structure/intonation mapping and dictates the generation of both the syntactic and prosodic constituents. The result is a string of words and the appropriate prosodic annotations, as shown in (19). The output of this module is easily translated into a form suitable for a speech synthesizer, which produces spoken output with the desired intonation. 4

(19) The X5       is a TUBE amplifier.
     L+H* L(H%)        H*c  LL$

3 A complete description of the CCG generator can be found in (Prevost and Steedman, 1993). CCG was chosen as the grammatical formalism because it licenses non-traditional syntactic constituents that are congruent with the bracketings imposed by information structure and intonational phrasing, as illustrated in (3).

4 The system currently uses the AT&T Bell Laboratories TTS system, but the implementation is easily adaptable to other synthesizers.

The modules described above and shown in Figure 1 are implemented in Quintus Prolog. The system produces the types of output shown in (20) and (21), which should be interpreted as a single (two-paragraph) monologue satisfying a goal to describe two different objects. 5
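The information structure/intonation mapping itself can be pictured with a small sketch. This is our own simplification in Python, not the CCG grammar the system actually consults: it hard-wires the theme and rheme tunes seen in (19) through (21) (L+H* accents with an LH%-type boundary for themes, H* accents with an LL%-type boundary for rhemes, and a trailing "c" marking a contrastive accent) rather than deriving them grammatically:

    def annotate(words, constituent, focus):
        # focus: per-word (focused, contrastive) flags.
        accent = "L+H*" if constituent == "theme" else "H*"
        boundary = "LH%" if constituent == "theme" else "LL%"
        annotated = []
        for word, (focused, contrastive) in zip(words, focus):
            mark = accent + ("c" if contrastive else "") if focused else ""
            annotated.append((word, mark))
        return annotated, boundary

    # "The X5 is a TUBE amplifier": theme "the X5", rheme the remainder.
    theme, tb = annotate(["the", "X5"], "theme",
                         [(False, False), (True, False)])
    rheme, rb = annotate(["is", "a", "tube", "amplifier"], "rheme",
                         [(False, False), (False, False),
                          (True, True), (False, False)])

Running this yields an L+H* on "X5" and an H*c on "tube", matching (19).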
Note that both paragraphs include very similar types of information, but radically different intonational contours, due to the discourse context. In fact, if the intonational patterns of the two examples are interchanged, the resulting speech sounds highly unnatural.

(20) a. Describe the x4.
     b. The X4       is a SOLID-state AMPLIFIER.
        L+H* L(H%)        H*          H*  LL$
        It COSTS EIGHT HUNDRED DOLLARS,
           H*    H*    H*      H*    LL%
        and PRODUCES ONE hundred watts-per-CHANNEL.
            H*       H*                    H*  LL$
        It was PRAISED by STEREOFOOL, an AUDIO JOURNAL,
               H*c        !H*c   LH%     H*    H*  LH%
        but was REVILED by AUDIOFAD, ANOTHER audio journal.
                H*c        !H*c LH%  H*            LL$

(21) a. Describe the x5.
     b. The X5      is a TUBE amplifier.
        L+H* L(H%)       H*c  LL$
        IT    costs NINE hundred dollars,
        L+H*c L(H%) H*c          LH%
        produces TWO hundred watts-per-channel,
                 H*c                 LH%
        and was praised by Stereofool AND Audiofad.
                                      H*c          LL$

Several aspects of the output shown above are worth noting. Initially, the program assumes that the hearer has no specific knowledge of any particular objects in the knowledge base. Note, however, that every proposition put forth by the generator is assumed to be incorporated into the hearer's set of beliefs. Consequently, the descriptive phrase "an audio journal," which is new information in the first paragraph, is omitted from the second. Additionally, when presenting the proposition 'Audiofad is an audio journal,' the generator is able to recognize the similarity with the corresponding proposition about Stereofool (i.e., both propositions are abstractions over the single-variable open proposition 'X is an audio journal'). The program therefore interjects the other property and produces "another audio journal."

5 The implementation assigns slightly higher pitch to accents bearing the subscript c (e.g., H*c), which mark contrastive focus as determined by the algorithm described above and in (Prevost, 1995).

Several aspects of the contrastive intonational effects in these examples also deserve attention. Because of the content generator's use of the rhetorical contrast predicate, items are eligible to receive stress in order to convey contrast before the contrasting items are even mentioned. This phenomenon is clearly illustrated by the clause "PRAISED by STEREOFOOL" in (20), which is contrastively stressed before "REVILED by AUDIOFAD" is uttered. Such situations are produced only when the contrasting propositions are gathered by the content planner in a single invocation of the generator and identified as contrastive when the rhetorical predicates are applied. Moreover, unlike systems that rely solely on word class and given/new distinctions for determining accentual patterns, the system is able to produce contrastive accents on pronouns despite their "given" status, as shown in (21).

5 Conclusions

The generation architecture described above and implemented in Quintus Prolog produces paragraph-length spoken monologues concerning objects in a simple knowledge base. The architecture relies on a mapping between a two-tiered information structure representation and intonational tunes to produce speech that makes appropriate contrastive distinctions prosodically. The process of natural language generation, in accordance with much of the recent literature in the field, is divided into three processes: high-level content planning, sentence planning, and surface generation. Two points concerning the role of intonation in the generation process are emphasized.
First, since intonational phrasing is dependent on the division of utterances into theme and rheme, and since this division relates consecutive sentences to one another, matters of information structure (and hence intonational phrasing) must be largely resolved during the high-level planning phase. Second, since accentual decisions are made with respect to the particular linguistic realizations of discourse properties and entities (e.g., the choice of referring expressions), these matters cannot be fully resolved until the sentence planning phase.

6 Acknowledgments

The author is grateful for the advice and helpful suggestions of Mark Steedman, Justine Cassell, Kathy McKeown, Aravind Joshi, Ellen Prince, Mark Liberman, Matthew Stone, Beryl Hoffman and Kris Thórisson, as well as the anonymous ACL reviewers. Without the AT&T Bell Laboratories TTS system, and the patient advice on its use from Julia Hirschberg and Richard Sproat, this work would not have been possible. This research was funded by NSF grants IRI91-17110 and IRI95-04372 and the generous sponsors of the MIT Media Laboratory.

References

Bolinger, D. (1989). Intonation and Its Uses. Stanford University Press.

Culicover, P. and Rochemont, M. (1983). Stress and focus in English. Language, 59:123-165.

Dale, R. and Haddock, N. (1991). Content determination in the generation of referring expressions. Computational Intelligence, 7(4):252-265.

Davis, J. and Hirschberg, J. (1988). Assigning intonational features in synthesized spoken discourse. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 187-193, Buffalo.

Engdahl, E. and Vallduví, E. (1994). Information packaging and grammar architecture: A constraint-based approach. In Engdahl, E., editor, Integrating Information Structure into Constraint-Based and Categorial Approaches (DYANA-2 Report R.1.3.B). CLLI, Amsterdam.

Grosz, B. J., Joshi, A. K., and Weinstein, S. (1986). Towards a computational theory of discourse interpretation. Unpublished manuscript.

Gussenhoven, C. (1983b). On the Grammar and Semantics of Sentence Accent. Foris, Dordrecht.

Halliday, M. (1970). Language structure and language function. In Lyons, J., editor, New Horizons in Linguistics, pages 140-165. Penguin.

Hirschberg, J. (1990). Accent and discourse context: Assigning pitch accent in synthetic speech. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 952-957.

Hoffman, B. (1995). The Computational Analysis of the Syntax and Interpretation of 'Free' Word Order in Turkish. PhD thesis, University of Pennsylvania, Philadelphia.

Hovy, E. (1993). Automated discourse generation using discourse structure relations. Artificial Intelligence, 63:341-385.

Mann, W. and Thompson, S. (1986). Rhetorical structure theory: Description and construction of text structures. In Kempen, G., editor, Natural Language Generation: New Results in Artificial Intelligence, Psychology and Linguistics, pages 279-300. Kluwer Academic Publishers, Boston.

McKeown, K., Kukich, K., and Shaw, J. (1994). Practical issues in automatic documentation generation. In Proceedings of the Fourth ACL Conference on Applied Natural Language Processing, pages 7-14, Stuttgart. Association for Computational Linguistics.

McKeown, K. R. (1985). Text Generation: Using Discourse Strategies and Focus Constraints to Generate Natural Language Text. Cambridge University Press, Cambridge.

Meteer, M. (1991).
Bridging the generation gap between text planning and linguistic realization. Computational Intelligence, 7(4):296-304.

Pierrehumbert, J. (1980). The Phonology and Phonetics of English Intonation. PhD thesis, Massachusetts Institute of Technology. Distributed by Indiana University Linguistics Club, Bloomington, IN.

Prevost, S. (1995). A Semantics of Contrast and Information Structure for Specifying Intonation in Spoken Language Generation. PhD thesis, University of Pennsylvania.

Prevost, S. and Steedman, M. (1993). Generating contextually appropriate intonation. In Proceedings of the 6th Conference of the European Chapter of the Association for Computational Linguistics, pages 332-340, Utrecht.

Prevost, S. and Steedman, M. (1994). Specifying intonation from context for speech synthesis. Speech Communication, 15:139-153.

Rambow, O. and Korelsky, T. (1992). Applied text generation. In Proceedings of the Third Conference on Applied Natural Language Processing (ANLP-1992), pages 40-47.

Reiter, E. and Mellish, C. (1992). Using classification to generate text. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, pages 265-272.

Robin, J. (1993). A revision-based generation architecture for reporting facts in their historical context. In Horacek, H. and Zock, M., editors, New Concepts in Natural Language Generation: Planning, Realization and Systems, pages 238-265. Pinter Publishers, New York.

Rochemont, M. (1986). Focus in Generative Grammar. John Benjamins, Philadelphia.

Sibun, P. (1991). The Local Organization and Incremental Generation of Text. PhD thesis, University of Massachusetts.

Steedman, M. (1991a). Structure and intonation. Language, pages 260-296.
Morphological Cues for Lexical Semantics

Marc Light
Seminar für Sprachwissenschaft
Universität Tübingen
Wilhelmstr. 113, D-79085 Tübingen, Germany
light@sfs.nphil.uni-tuebingen.de

Abstract

Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.

1 Introduction

Some natural language processing (NLP) tasks can be performed with only coarse-grained semantic information about individual words. For example, a system could utilize word frequency and a word cooccurrence matrix in order to perform information retrieval. However, many NLP tasks require at least a partial understanding of every sentence or utterance in the input and thus have a much greater need for lexical semantics. Natural language generation, providing a natural language front end to a database, information extraction, machine translation, and task-oriented dialogue understanding all require lexical semantics. The lexical semantic information commonly utilized includes verbal argument structure and selectional restrictions, corresponding nominal semantic class, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner.

Machine-readable dictionaries do not include much of this information and it is difficult and time consuming to encode it by hand. As a consequence, current NLP systems have only small lexicons and thus can only operate in restricted domains. Automated methods for acquiring lexical semantics could increase both the robustness and the portability of such systems. In addition, such methods might provide insight into human language acquisition.

After considering different possible approaches to acquiring lexical semantics, this paper concludes that a "surface cueing" approach is currently the most promising. It then introduces morphological cueing, a type of surface cueing, and discusses an implementation. It concludes by evaluating morphological cues with respect to a list of desiderata for good surface cues.

2 Approaches to Acquiring Lexical Semantics

One intuitively appealing idea is that humans acquire the meanings of words by relating them to semantic representations resulting from perceptual or cognitive processing. For example, in a situation where the father says Kim is throwing the ball and points at Kim who is throwing the ball, a child might be able to learn what throw and ball mean. In the human language acquisition literature, Grimshaw (1981) and Pinker (1989) advocate this approach; others have described partial computer implementations: Pustejovsky (1988) and Siskind (1990). However, this approach cannot yet provide for the automatic acquisition of lexical semantics for use in NLP systems, because the input required must be hand coded: no current artificial intelligence system has the perceptual and cognitive capabilities required to produce the needed semantic representations.

Another approach would be to use the semantics of surrounding words in an utterance to constrain the meaning of an unknown word.
Borrowing an example from Pinker (1994), upon hearing I glipped the paper to shreds, one could guess that the meaning of glip has something to do with tearing. Similarly, one could guess that filp means something like eat upon hearing I filped the delicious sandwich and now I'm full. These guesses are cued by the meanings of paper, shreds, sandwich, delicious, full, and the partial syntactic analysis of the utterances that contain them. Granger (1977), Berwick (1983), and Hastings (1994) describe computational systems that implement this approach. However, this approach is hindered by the need for a large amount of initial lexical semantic information and the need for a robust natural language understanding system that produces semantic representations as output, since producing this output requires precisely the lexical semantic information the system is trying to acquire.

A third approach does not require any semantic information related to perceptual input or the input utterance. Instead it makes use of fixed correspondences between surface characteristics of language input and lexical semantic information: surface characteristics serve as cues for the lexical semantics of the words. For example, if a verb is seen with a noun phrase subject and a sentential complement, it often has verbal semantics involving spatial perception and cognition, e.g., believe, think, worry, and see (Fisher, Gleitman, and Gleitman, 1991; Gleitman, 1990). Similarly, the occurrence of a verb in the progressive tense can be used as a cue for the non-stativeness of the verb (Dorr and Lee, 1992); stative verbs cannot appear in the progressive tense (e.g., *Mary is loving her new shoes). Another example is the use of patterns such as "NP, NP, ..., and other NP" to find lexical semantic information such as hyponymy (Hearst, 1992). Temples, treasuries, and other important civic buildings is an example of this pattern, and from it the information that temples and treasuries are types of civic buildings would be cued. Finally, inducing lexical semantics from distributional data (e.g., (Brown et al., 1992; Church et al., 1989)) is also a form of surface cueing. It should be noted that the set of fixed correspondences between surface characteristics and lexical semantic information, at this point, has to be acquired through the analysis of the researcher; the issue of how the fixed correspondences can be automatically acquired will not be addressed here.

The main advantage of the surface cueing approach is that the input required is currently available: there is an ever increasing supply of on-line text, which can be automatically part-of-speech tagged, assigned shallow syntactic structure by robust partial parsing systems, and morphologically analyzed, all without any prior lexical semantics.

A possible disadvantage of surface cueing is that surface cues for a particular piece of lexical semantics might be difficult to uncover or they might not exist at all. In addition, the cues might not be present for the words of interest. Thus, it is an empirical question whether easily identifiable abundant surface cues exist for the needed lexical semantic information. The next section explores the possibility of using derivational affixes as surface cues for lexical semantics.

3 Morphological Cues for Lexical Semantic Information

Many derivational affixes only apply to bases with certain semantic characteristics and only produce derived forms with certain semantic characteristics.
For example, the verbal prefix un- applies to telic verbs and produces telic derived forms. Thus, it is possible to use un- as a cue for telicity. By searching a sufficiently large corpus we should be able to identify a number of telic verbs. Examples from the Brown corpus include clasp, coil, fasten, lace, and screw.

A more implementation-oriented description of the process is the following: (i) analyze affixes by hand to gain fixed correspondences between affix and lexical semantic information, (ii) collect a large corpus of text, (iii) tag it with part-of-speech tags, (iv) morphologically analyze its words, (v) assign word senses to the base and the derived forms of these analyses, and (vi) use this morphological structure plus the fixed correspondences to assign semantics to both the base senses and the derived form senses.

Step (i) amounts to doing a semantic analysis of a number of affixes, the goal of which is to find semantic generalizations for an affix that hold for a large percentage of its instances. Finding the right generalizations and stating them explicitly can be time consuming, but it is only performed once. Tagging the corpus is necessary to make word sense disambiguation and morphological analysis easier. Word sense disambiguation is necessary because one needs to know which sense of the base is involved in a particular derived form, more specifically, to which sense one should assign the feature cued by the affix. For example, stress can be either a noun (the stress on the third syllable) or a verb (the advisor stressed the importance of finishing quickly). Since the suffix -ful applies to nominal bases, only a noun reading is possible as the stem of stressful and thus one would attach the lexical semantics cued by -ful to the noun sense. However, stress has multiple readings even as a noun: it also has the reading exemplified by the new parent was under a lot of stress. Only this reading is possible for stressful.

In order to produce the results presented in the next section, the above steps were performed as follows. A set of 18 affixes were analyzed by hand, providing the fixed correspondences between cue and semantics. The cued lexical semantic information was axiomatized using Episodic Logic (Hwang and Schubert, 1993), a situation-based extension of standard first-order logic. The Penn Treebank version of the Brown corpus (Marcus, Santorini, and Marcinkiewicz, 1993) served as the corpus. Only its words and part-of-speech tags were utilized. Although these tags were corrected by hand, part-of-speech tagging can be automatically performed with an error rate of 3 to 4 percent (Merialdo, 1994; Brill, 1994). The Alvey morphological analyzer (Ritchie et al., 1992) was used to assign morphological structure. It uses a lexicon with just over 62,000 entries. This lexicon was derived from a machine-readable dictionary but contains no semantic information. Word sense disambiguation for the bases and derived forms that could not be resolved using part-of-speech tags was not performed. However, there exist systems for such word sense disambiguation which do not require explicit lexical semantic information (Yarowsky, 1993; Schütze, 1992).

Let us consider an example. One sense of the suffix -ize applies to adjectival bases (e.g., centralize). This sense of the affix will be referred to as -Aize. (A related but different sense applies to nouns, e.g., glamorize. The part-of-speech of the base is used to disambiguate these two senses of -ize.)
First, the regular expressions ".*IZ(E|ING|ES|ED)$" and "^V.*" are used to collect tokens from the corpus that were likely to have been derived using -ize. The Alvey morphological analyzer is then applied to each type. It strips off -Aize from a word if it can find an entry with a reference form of the appropriate orthographic shape that has the features "uninflected," "latinate," and "adjective." It may also build an appropriate base using other affixes, e.g., [[tradition -al] -Aize]. 1 Finally, all derived forms are assigned the lexical semantic feature CHANGE-OF-STATE and all the bases are assigned the lexical semantic feature IZE-DEPENDENT. Only the CHANGE-OF-STATE feature will be discussed here. It is defined by the axiom below.

For all predicates P with features CHANGE-OF-STATE and DYADIC:

    ∀x,y,e [P(x,y)**e →
        [∃e1 : [at-end-of(e1, e) ∧ cause(e, e1)] [rstate(P)(y)**e1]
         ∧ ∃e2 : at-beginning-of(e2, e) [¬rstate(P)(y)**e2]]]

The operator ** is analogous to ⊨ in situation semantics; it indicates, among other things, that a formula describes an event. P is a place holder for the semantic predicate corresponding to the word sense which has the feature. It is assumed that each word sense corresponds to a single semantic predicate. The axiom states that if a CHANGE-OF-STATE predicate describes an event, then the result state of this predicate holds at the end of this event and that it did not hold at the beginning, e.g., if one wants to formalize something it must be non-formal to begin with and will be formal after. The result state of an -Aize predicate is the predicate corresponding to its base; this is stated in another axiom.

1 In an alternative version of the method, the morphological analyzer is also able to construct a base on its own when it is unable to find an appropriate base in its lexicon. However, these "new" bases seldom correspond to actual words and thus the results presented here were derived using a morphological analyzer configured to use only bases that are directly in its lexicon or can be constructed from words in its lexicon.

Precision figures for the method were collected as follows. The method returns a set of normalized (i.e., uninflected) word/feature pairs. A human then determines which pairs are "correct," where correct means that the axiom defining the feature holds for the instances (tokens) of the word (type). Because of the lack of word senses, the semantics assigned to a particular word is only considered correct if it holds for all senses occurring in the relevant derived word tokens. 2 For example, the axiom above must hold for all senses of centralize occurring in the corpus in order for the centralize/CHANGE-OF-STATE pair to be correct. The axiom for IZE-DEPENDENT must hold only for those senses of central that occur in the tokens of centralize for the central/IZE-DEPENDENT pair to be correct. This definition of correct was constructed, in part, to make relatively quick human judgements possible. It should also be noted that the semantic judgements require that the semantics be expressed in a precise way. This discipline is enforced in part by requiring that the features be axiomatized in a denotational logic. Another argument for such an axiomatization is that many NLP systems utilize a denotational logic for representing semantic information and thus the axioms provide a straightforward interface to the lexicon.
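A schematic rendering of this collection-and-labeling step is given below. It is our own sketch in Python, not the actual system: alvey_strip_Aize stands in for the Alvey analyzer and assign for attaching a feature (and its defining axiom) to a word.

    import re

    IZE_TOKEN = re.compile(r".*IZ(E|ING|ES|ED)$")

    def collect_ize_pairs(tagged_corpus, alvey_strip_Aize, assign):
        # Types of verb tokens that look -ize-derived.
        types = {word for (word, tag) in tagged_corpus
                 if tag.startswith("V") and IZE_TOKEN.match(word.upper())}
        pairs = []
        for derived in types:
            base = alvey_strip_Aize(derived)  # e.g. "centralize" -> "central"
            if base is not None:              # an adjectival base was found
                pairs.append(assign(derived, "CHANGE-OF-STATE"))
                pairs.append(assign(base, "IZE-DEPENDENT"))
        return pairs

The human evaluation step then checks, for each returned pair, whether the feature's axiom holds of the relevant tokens.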
To return to our example, as shown in Table 1, there were 63 -Aize derived words (types), of which 78 percent conform to the CHANGE-OF-STATE axiom. Of the bases, 80 percent conform to the IZE-DEPENDENT axiom, which will be discussed in the next section. Among the conforming words were equalize, stabilize, and federalize. Two words that seem to be derived using the -ize suffix but do not conform to the CHANGE-OF-STATE axiom are penalize and socialize (with the guests). A different sort of non-conformity is produced when the morphological analyzer finds a spurious parse. For example, it analyzed subsidize as [sub- [side -ize]] and thus produced the sidize/CHANGE-OF-STATE pair, which for the relevant tokens was incorrect. In the first sort, the non-conformity arises because the cue does not always correspond to the relevant lexical semantic information. In the second sort, the non-conformity arises because a cue has been found where one does not exist. A system that utilizes a lexicon so constructed is interested primarily in the overall precision of the information contained within, and thus the results presented in the next section conflate these two types of false positives.

2 Although this definition is required for many cases, in the vast majority of the cases the derived form and its base have only one possible sense (e.g., stressful).

4 Results

This section starts by discussing the semantics of 18 derivational affixes: re-, un-, de-, -ize, -en, -ify, -le, -ate, -ee, -er, -ant, -age, -ment, mis-, -able, -ful, -less, and -ness. Following this discussion, a table of precision statistics for the performance of these surface cues is presented. Due to space limitations, the lexical semantics cued by these affixes can only be loosely specified. However, they have been axiomatized in a fashion exemplified by the CHANGE-OF-STATE axiom above (see (Light, 1996; Light, 1992)).

The verbal prefixes un-, de-, and re- cue aspectual information for their base and derived forms. Some examples from the Brown corpus are unfasten, unwind, decompose, defocus, reactivate, and readapt. Above it was noted that un- is a cue for telicity. In fact, both un- and de- cue the CHANGE-OF-STATE feature for their base and derived forms; the CHANGE-OF-STATE feature entails the TELIC feature. In addition, for un- and de-, the result state of the derived form is the negation of the result state of the base (NEG-OF-BASE-IS-RSTATE), e.g., the result of unfastening something is the opposite of the result of fastening it. As shown by examples like reswim the last lap, re- only cues the TELIC feature for its base and derived forms: the lap might have been swum previously, and thus the negation of the result state does not have to have held previously (Dowty, 1979). For re-, the result state of the derived form is the same as that of the base (RSTATE-EQ-BASE-RSTATE), e.g., the result of reactivating something is the same as activating it. In fact, if one reactivates something then it is also being activated: the derived form entails the base (ENTAILS-BASE). Finally, for re-, the derived form entails that its result state held previously, e.g., if one recentralizes something then it must have been central at some point previous to the event of recentralization (PRESUPS-RSTATE).

The suffixes -Aize, -Nize, -en, -Aify, and -Nify all cue the CHANGE-OF-STATE feature for their derived form, as was discussed for -Aize above.
Some exemplars are centralize, formalize, categorize, colonize, brighten, stiffen, falsify, intensify, mummify, and glorify. For -Aize, -en, and -Aify a bit more can be said about the result state: it is the base predicate (RSTATE-EQ-BASE), e.g., the result of formalizing something is that it is formal. Finally, -Aize, -en, and -Aify cue the following feature for their bases: if a state holds of some individual then either an event described by the derived form predicate occurred previously or the predicate was always true of the individual (IZE-DEPENDENT), e.g., if something is central then either it was centralized or it was always central.

The "suffixes" -le and -ate should really be called verbal endings since they are not suffixes in English, i.e., if one strips them off one is seldom left with a word. (Consequently, only regular expressions were used to collect types; the morphological analyzer was not used.) Nonetheless, they cue lexical semantics and are easily identified. Some examples are chuckle, dangle, alleviate, and assimilate. The ending -ate cues a CHANGE-OF-STATE verb and -le an ACTIVITY verb.

The derived forms produced by -ee, -er, and -ant all refer to participants of an event described by their base (PART-IN-E). Some examples are appointee, deportee, blower, campaigner, assailant, and claimant. In addition, the derived form of -ee is also sentient of this event and non-volitional with respect to it (Barker, 1995).

The nominalizing suffixes -age and -ment both produce derived forms that refer to something resulting from an event of the verbal base predicate. Some examples are blockage, seepage, marriage, payment, restatement, shipment, and treatment. The derived forms of -age entail that an event occurred and refer to something resulting from it (EVENT-AND-RESULTANT), e.g., seepage entails that seeping took place and that the seepage resulted from this seeping. Similarly, the derived forms of -ment entail that an event took place and refer either to this event, the proposition that the event occurred, or something resulting from the event (REFERS-TO-E-OR-PROP-OR-RESULT), e.g., a restatement entails that a restating occurred and refers either to this event, the proposition that the event occurred, or to the actual utterance or written document resulting from the restating event. (This analysis is based on (Zucchi, 1989).)

The verbal prefix mis-, e.g., miscalculate and misquote, cues the feature that an action is performed in an incorrect manner (INCORRECT-MANNER). The suffix -able cues a feature that it is possible to perform some action (ABLE-TO-BE-PERFORMED), e.g., something is enforceable if it is possible that something can enforce it (Dowty, 1979). The words derived using -ness refer to a state of something having the property of the base (STATE-OF-HAVING-PROP-OF-BASE), e.g., in Kim's fierceness at the meeting yesterday was unusual, the word fierceness refers to a state of Kim being fierce. The suffix -ful marks its base as abstract (ABSTRACT): careful, peaceful, powerful, etc. In addition, it marks its derived form as the antonym of a form derived by -less if one exists (LESS-ANTONYM). The suffix -less marks its derived forms with the analogous feature (FUL-ANTONYM). Some examples are colorful/less, fearful/less, harmful/less, and tasteful/less.

The precision statistics for the individual lexical semantic features discussed above are presented in Table 1 and Table 2.
Lexical semantic information was collected for 2535 words (bases and derived forms). One way to summarize these tables is to calculate a single precision number for all the features in a table, i.e., average the number of correct types for each affix, sum these averages, and then divide this sum by the total number of types. Using this statistic it can be said that if a random word is derived, its features have a 76 percent chance of being true, and if it is a stem of a derived form, its features have an 82 percent chance of being true.

Computing recall requires finding all true tokens of a cue. This is a labor-intensive task. It was performed for the verbal prefix re-, and the recall was found to be 85 percent. The majority of the missed re- verbs were due to the fact that the system only looked at verbs starting with RE and not at other parts of speech, e.g., many nominalizations such as reaccommodation contain the re- morphological cue. However, increasing recall by looking at all open class categories would probably decrease precision. Another cause of reduced recall is that some stems were not in the Alvey lexicon or could not be properly extracted by the morphological analyzer. For example, -Nize could not be stripped from hypothesize because Alvey failed to reconstruct hypothesis from hypothes. However, for the affixes discussed here, 89 percent of the bases were present in the Alvey lexicon.

5 Evaluation

Good surface cues are easy to identify, abundant, and correspond to the needed lexical semantic information (Hearst (1992) identifies a similar set of desiderata). With respect to these desiderata, derivational morphology is both a good cue and a bad cue. Let us start with why it is a bad cue: there may be no derivational cues for the lexical semantics of a particular word. This is not the case for other surface cues, e.g., distributional cues exist for every word in a corpus. In addition, even if a derivational cue does exist, the reliability (on average approximately 76 percent) of the lexical semantic information is too low for many NLP tasks. This unreliability is due in part to the inherent exceptionality of lexical generalization and thus can be improved only partially.

However, derivational morphology is a good cue in the following ways. It provides exactly the type of lexical semantics needed for many NLP tasks: the affixes discussed in the previous section cued nominal semantic class, verbal aspectual class, antonym relationships between words, sentience, etc. In addition, working with the Brown corpus (1.1 million words) and 18 affixes provided such information for over 2500 words. Since corpora with over 40 million words are common and English has over 40 common derivational affixes, one would expect to be able to increase this number by an order of magnitude. In addition, most English words are either derived themselves or serve as bases of at least one derivational affix. 3 Finally, for some NLP tasks, 76 percent reliability may be adequate.

3 The following experiment supports this claim. Just over 400 open class words were picked randomly from the Brown corpus and the derived forms were marked by hand. Based on this data, a random open class word in the Brown corpus has a 17 percent chance of being derived, a 56 percent chance of being a stem of a derived form, and an 8 percent chance of being both.
Table 1: Derived words

    Feature                        Affix   Types   Precision
    TELIC                          re-     164     91%
    RSTATE-EQ-BASE-RSTATE          re-     164     65%
    ENTAILS-BASE                   re-     164     65%
    PRESUPS-RSTATE                 re-     164     65%
    CHANGE-OF-STATE                un-     23      100%
    NEG-OF-BASE-IS-RSTATE          un-     23      91%
    CHANGE-OF-STATE                de-     35      34%
    NEG-OF-BASE-IS-RSTATE          de-     35      20%
    CHANGE-OF-STATE                -Aize   63      78%
    RSTATE-EQ-BASE                 -Aize   63      75%
    CHANGE-OF-STATE                -Nize   86      56%
    ACTIVITY                       -le     71      55%
    CHANGE-OF-STATE                -en     36      100%
    RSTATE-EQ-BASE                 -en     36      97%
    CHANGE-OF-STATE                -Aify   17      94%
    RSTATE-EQ-BASE                 -Aify   17      58%
    CHANGE-OF-STATE                -Nify   21      67%
    CHANGE-OF-STATE                -ate    365     48%
    PART-IN-E                      -ee     22      91%
    SENTIENT                       -ee     22      82%
    NON-VOLITIONAL                 -ee     22      68%
    PART-IN-E                      -er     471     85%
    PART-IN-E                      -ant    21      81%
    EVENT-AND-RESULTANT            -age    43      58%
    REFERS-TO-E-OR-PROP-OR-RESULT  -ment   166     88%
    INCORRECT-MANNER               mis-    21      86%
    ABLE-TO-BE-PERFORMED           -able   148     84%
    STATE-OF-HAVING-PROP-OF-BASE   -ness   307     97%
    FUL-ANTONYM                    -less   22      77%
    LESS-ANTONYM                   -ful    22      77%

Table 2: Base words

    Feature          Affix   Types   Precision
    TELIC            re-     164     91%
    CHANGE-OF-STATE  un-     23      91%
    CHANGE-OF-STATE  de-     33      36%
    IZE-DEPENDENT    -Aize   64      80%
    IZE-DEPENDENT    -en     36      72%
    IZE-DEPENDENT    -Aify   15      40%
    ABSTRACT         -ful    76      93%

In addition, some affixes are much more reliable cues than others, and thus if higher reliability is required then only the affixes with high precision might be used.

The above discussion makes it clear that morphological cueing provides only a partial solution to the problem of acquiring lexical semantic information. However, as mentioned in Section 2, there are many types of surface cues which correspond to a variety of lexical semantic information. A combination of cues should produce better precision where the same information is indicated by multiple cues. For example, the morphological cue re- indicates telicity and, as mentioned above, the syntactic cue of the progressive tense indicates non-stativity (Dorr and Lee, 1992). Since telicity is a type of non-stativity, the information is mutually supportive. In addition, using many different types of cues should provide a greater variety of information in general. Thus morphological cueing is best seen as one type of surface cueing that can be used in combination with others to provide lexical semantic information.

6 Acknowledgements

A portion of this work was performed at the University of Rochester Computer Science Department and supported by ONR/ARPA research grant number N00014-92-J-1512.

References

Barker, Chris. 1995. The semantics of -ee. In Proceedings of the SALT conference.

Berwick, Robert. 1983. Learning word meanings from examples. In Proceedings of the 8th International Joint Conference on Artificial Intelligence (IJCAI-83).

Brill, Eric. 1994. Some advances in transformation-based part of speech tagging. In Proceedings of the Twelfth National Conference on Artificial Intelligence: American Association for Artificial Intelligence (AAAI).

Brown, Peter F., Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4).

Church, Kenneth, William Gale, Patrick Hanks, and Donald Hindle. 1989. Parsing, word associations and typical predicate-argument relations. In International Workshop on Parsing Technologies, pages 389-98.
Dorr, Bonnie J. and Ki Lee. 1992. Building a lexicon for machine translation: Use of corpora for aspectual classification of verbs. Technical Report CS-TR-2876, University of Maryland.

Dowty, David. 1979. Word Meaning and Montague Grammar. Reidel.

Fisher, Cynthia, Henry Gleitman, and Lila R. Gleitman. 1991. On the semantic content of subcategorization frames. Cognitive Psychology, 23(3):331-392.

Gleitman, Lila. 1990. The structural sources of verb meanings. Language Acquisition, 1:3-55.

Granger, R. 1977. Foulup: a program that figures out meanings of words from context. In Proceedings of the 5th International Joint Conference on Artificial Intelligence.

Grimshaw, Jane. 1981. Form, function, and the language acquisition device. In C. L. Baker and J. J. McCarthy, editors, The Logical Problem of Language Acquisition. MIT Press.

Hastings, Peter. 1994. Automatic Acquisition of Word Meaning from Context. Ph.D. thesis, University of Michigan.

Hearst, Marti. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the Fifteenth International Conference on Computational Linguistics (COLING).

Hwang, Chung Hee and Lenhart Schubert. 1993. Episodic logic: a comprehensive natural representation for language understanding. Minds and Machines, 3(4):381-419.

Light, Marc. 1992. Rehashing Re-. In Proceedings of the Eastern States Conference on Linguistics. Cornell University Linguistics Department Working Papers.

Light, Marc. 1996. Morphological Cues for Lexical Semantics. Ph.D. thesis, University of Rochester, Rochester, NY.

Marcus, Mitchell, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.

Merialdo, Bernard. 1994. Tagging English text with a probabilistic model. Computational Linguistics, 20(2):155-172.

Pinker, Steven. 1989. Learnability and Cognition: The Acquisition of Argument Structure. MIT Press.

Pinker, Steven. 1994. How could a child use verb syntax to learn verb semantics? Lingua, 92:377-410.

Pustejovsky, James. 1988. Constraints on the acquisition of semantic knowledge. International Journal of Intelligent Systems, 3:247-268.

Ritchie, Graeme D., Graham J. Russell, Alan W. Black, and Steve G. Pulman. 1992. Computational Morphology: Practical Mechanisms for the English Lexicon. MIT Press.

Schütze, Hinrich. 1992. Word sense disambiguation with sublexical representations. In Statistically-Based NLP Techniques (American Association for Artificial Intelligence Workshop, July 12-16, 1992, San Jose, CA), pages 109-113.

Siskind, Jeffrey M. 1990. Acquiring core meanings of words, represented as Jackendoff-style conceptual structures, from correlated streams of linguistic and non-linguistic input. In Proceedings of the 28th Meeting of the Association for Computational Linguistics.

Yarowsky, David. 1993. One sense per collocation. In Proceedings of the ARPA Workshop on Human Language Technology. Morgan Kaufmann.

Zucchi, Alessandro. 1989. The Language of Propositions and Events: Issues in the Syntax and the Semantics of Nominalization. Ph.D. thesis, University of Massachusetts, Amherst, MA.
The Rhythm of Lexical Stress in Prose

Doug Beeferman
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA
dougb+@cs.cmu.edu

Abstract

"Prose rhythm" is a widely observed but scarcely quantified phenomenon. We describe an information-theoretic model for measuring the regularity of lexical stress in English texts, and use it in combination with trigram language models to demonstrate a relationship between the probability of word sequences in English and the amount of rhythm present in them. We find that the stream of lexical stress in text from the Wall Street Journal has an entropy rate of less than 0.75 bits per syllable for common sentences. We observe that the average number of syllables per word is greater for rarer word sequences, and to normalize for this effect we run control experiments to show that the choice of word order contributes significantly to stress regularity, and increasingly so with lexical probability.

1 Introduction

Rhythm inheres in creative output, asserting itself as the meter in music, the iambs and trochees of poetry, and the uniformity in distances between objects in art and architecture. More subtly, there is widely believed to be rhythm in English prose, reflecting the arrangement of words, whether deliberate or subconscious, to enhance the perceived acoustic signal or reduce the burden of remembrance for the reader or author.

In this paper we describe an information-theoretic model based on lexical stress that substantiates this common perception and relates stress regularity in written speech (which we shall equate with the intuitive notion of "rhythm") to the probability of the text itself. By computing the stress entropy rate for both a set of Wall Street Journal sentences and a version of the corpus with randomized intra-sentential word order, we also find that word order contributes significantly to rhythm, particularly within highly probable sentences. We regard this as a first step in quantifying the extent to which metrical properties influence syntactic choice in writing.

1.1 Basics

In speech production, syllables are emitted as pulses of sound synchronized with movements of the musculature in the rib cage. Degrees of stress arise from variations in the amount of energy expended by the speaker to contract these muscles, and from other factors such as intonation. Perceptually, stress is more abstractly defined, and it is often associated with "peaks of prominence" in some representation of the acoustic input signal (Ochsner, 1989).

Stress as a lexical property, the primary concern of this paper, is a function that maps a word to a sequence of discrete levels of physical stress, approximating the relative emphasis given each syllable when the word is pronounced. Phonologists distinguish between three levels of lexical stress in English: primary, secondary, and what we shall call weak for lack of a better substitute for unstressed. For the purposes of this paper we shall regard stresses as symbols fused serially in time by the writer or speaker, with words acting as building blocks of predefined stress sequences that may be arranged arbitrarily but never broken apart.

The culminative property of stress states that every content word has exactly one primary-stressed syllable, and that whatever syllables remain are subordinate to it.
Monosyllabic function words such as the and of usually receive weak stress, while content words get one strong stress and possibly many secondary and weak stresses.

It has been widely observed that strong and weak tend to alternate at "rhythmically ideal disyllabic distances" (Kager, 1989a). "Ideal" here is a complex function involving production, perception, and many unknowns. Our concern is not to pinpoint this ideal, nor to answer precisely why it is sought by speakers and writers, but to gauge to what extent it is sought. We seek to investigate, for example, whether the avoidance of primary stress clash, the placement of two or more strongly stressed syllables in succession, influences syntactic choice. In the Wall Street Journal corpus we find such sentences as "The fol-low-ing is-sues re-cent-ly were filed with the Se-cur-i-ties and Ex-change Com-mis-sion". The phrase "recently were filed" can be syntactically permuted as "were filed recently", but this clashes filed with the first syllable of recently. The chosen sentence avoids consecutive primary stresses. Kager postulates, with a decidedly information-theoretic undertone, that the resulting binary alternation is "simply the maximal degree of rhythmic organization compatible with the requirement that adjacent stresses are to be avoided" (Kager, 1989a).

Certainly we are not proposing that a hard decision based only on metrical properties of the output is made to resolve syntactic choice ambiguity, in the case above or in general. Clearly semantic emphasis has its say in the decision. But it is our belief that rhythm makes a nontrivial contribution, and that the tools of statistics and information theory will help us to estimate it formally. Words are the building blocks. How much do their selection (diction) and their arrangement (syntax) act to enhance rhythm?

1.2 Past models and quantifications

Lexical stress is a well-studied subject at the intra-word level. Rules governing how to map a word's orthographic or phonetic transcription to a sequence of stress values have been searched for and studied from rules-based, statistical, and connectionist perspectives. Word-external stress regularity has been denied this level of attention. Patterns in phrases and compound words have been studied by Halle (Halle and Vergnaud, 1987) and others, who observe and reformulate such phenomena as the emphasis of the penultimate constituent in a compound noun (National Center for Supercomputing Applications, for example). Treatment of lexical stress across word boundaries is scarce in the literature, however.

Though prose rhythm inquiry is more than a hundred years old (Ochsner, 1989), it has largely been dismissed by the linguistic community as irrelevant to formal models, as a mere curiosity for literary analysis. This is partly because formal methods of inquiry have failed to present a compelling case for the existence of regularity (Harding, 1976).

Past attempts to quantify prose rhythm may be classified as perception-oriented or signal-oriented. In both cases the studies have typically focussed on regularities in the distance between peaks of prominence, or interstress intervals, either perceived by a human subject or measured in the signal. The former class of experiments relies on the subjective segmentation of utterances by a necessarily limited number of participants: subjects tapping out the rhythms they perceive in a waveform on a recording device, for example (Kager, 1989b).
To say nothing of the psychoacoustic biases this methodology introduces, it relies on too little data for anything but a sterile set of means and variances.

Signal analysis, too, has not yet been applied to very large speech corpora for the purpose of investigating prose rhythm, though the technology now exists to lend efficiency to such studies. The experiments have been of smaller scope and geared toward detecting isochrony, regularity in absolute time. Jassem et al. (Jassem, Hill, and Witten, 1984) use statistical techniques such as regression to analyze the duration of what they term rhythm units. Jassem postulates that speech is composed of extra-syllable narrow rhythm units with roughly fixed duration independent of the number of syllable constituents, surrounded by variable-length anacruses. Abercrombie (Abercrombie, 1967) views speech as composed of metrical feet of variable length that begin with, and are conceptually highlighted by, a single stressed syllable.

Many experiments lead to the common conclusion that English is stress-timed: there is some regularity in the absolute duration between strong stress events. In contrast to postulated syllable-timed languages like French, in which we find exactly the inverse effect, speakers of English tend to expand and to contract syllable streams so that the duration between bounding primary stresses matches the other intervals in the utterance. It is unpleasant for production and perception alike, however, when too many weak-stressed syllables are forced into such an interval, or when this amount of "padding" varies wildly from one interval to the next. Prose rhythm analysts so far have not considered the syllable stream independent of syllabic, phonemic, or interstress duration. In particular, they have not measured the regularity of the purely lexical stream. They have instead continually re-answered questions concerning isochrony.

Given that speech can be divided into interstress units of roughly equal duration, we believe the more interesting question is whether a speaker or writer modifies his diction and syntax to fit a regular number of syllables into each unit. This question can only be answered by a lexical approach, an approach that pleasingly lends itself to efficient experimentation with very large amounts of data.

2 Stress entropy rate

We regard every syllable as having either strong or weak stress, and we employ a purely lexical, context-independent mapping, a pronunciation dictionary¹, to tell us which syllables in a word receive which level of stress. We base our experiments on a binary-valued symbol set $\Sigma_1 = \{W, S\}$ and on a ternary-valued symbol set $\Sigma_2 = \{W, S, P\}$, where 'W' indicates weak stress, 'S' indicates strong stress, and 'P' indicates a pause. Abstractly, the dictionary maps words to sequences of symbols from {primary, secondary, unstressed}, which we interpret by downsampling to our binary system: primary stress is strong, non-stress is weak, and secondary stress ('2') we allow to be either weak or strong depending on the experiment we are conducting. We represent a sentence as the concatenation of the stress sequences of its constituent words, with 'P' symbols (for the $\Sigma_2$ experiments) breaking the stream where natural pauses occur.

¹We use the 116,000-entry CMU Pronouncing Dictionary version 0.4 for all experiments in this paper.

Figure 2: A 5-gram model viewed as a first-order Markov chain
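To make the dictionary mapping concrete, the sketch below converts a sentence into a binary stress stream. This is an illustrative reconstruction, not the author's code: the helper names (load_stress_dict, to_binary, sentence_to_stream) and the assumption that the lexicon file follows the CMU dictionary's "WORD PHONE1 PHONE2 ..." layout, with a stress digit on each vowel phone, are ours.

    def load_stress_dict(path):
        """Parse a CMU-style dictionary into {word: stress-digit string}.

        A line such as 'EXPLAIN  IH0 K S P L EY1 N' yields 'EXPLAIN' -> '01':
        the digit on each vowel phone is its lexical stress level
        (1 = primary, 2 = secondary, 0 = unstressed). Only the first
        listed pronunciation is kept, as the paper does.
        """
        lexicon = {}
        with open(path) as f:
            for line in f:
                if line.startswith(';'):          # comment lines
                    continue
                word, *phones = line.split()
                if word in lexicon:               # keep first pronunciation only
                    continue
                lexicon[word] = ''.join(p[-1] for p in phones if p[-1].isdigit())
        return lexicon

    def to_binary(stress_digits, secondary='W'):
        """Downsample 0/1/2 stress levels to the {W, S} alphabet;
        secondary stress maps to 'W' or 'S' per the experiment."""
        table = {'0': 'W', '1': 'S', '2': secondary}
        return ''.join(table[d] for d in stress_digits)

    def sentence_to_stream(words, lexicon, secondary='W'):
        """Concatenate per-word stress blocks; return None if any word
        is unknown, in which case the sentence is discarded."""
        blocks = [lexicon.get(w.upper()) for w in words]
        if any(b is None for b in blocks):
            return None
        return ''.join(to_binary(b, secondary) for b in blocks)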
Traditional approaches to lexical language modeling provide insight on our analogous problem, in which the input is a stream of syllables rather than words and the values are drawn from a vocabulary $\Sigma$ of stress levels. We wish to create a model that yields approximate values for probabilities of the form $p(s_k \mid s_0, s_1, \ldots, s_{k-1})$, where $s_i \in \Sigma$ is the stress symbol at syllable $i$ in the text. A model with separate parameters for each history is prohibitively large, as the number of possible histories grows exponentially with the length of the input; and for the same reason it is impossible to train on limited data. Consequently we partition the history space into equivalence classes, and the stochastic n-gram approach that has served lexical language modeling so well treats two histories as equivalent if they end in the same $n - 1$ symbols.

As Figure 2 demonstrates, an n-gram model is simply a stationary Markov chain of order $k = n - 1$, or equivalently a first-order Markov chain whose states are labeled with tuples from $\Sigma^k$. To gauge the regularity and compressibility of the training data we can calculate the entropy rate of the stochastic process as approximated by our model, an upper bound on the expected number of bits needed to encode each symbol in the best possible encoding. Techniques for computing the entropy rate of a stationary Markov chain are well known in information theory (Cover and Thomas, 1991). If $\{X_i\}$ is a Markov chain with stationary distribution $\mu$ and transition matrix $P$, then its entropy rate is $H(X) = -\sum_{i,j} \mu_i P_{ij} \log P_{ij}$.

The probabilities in $P$ can be trained by accumulating, for each $(s_1, s_2, \ldots, s_k) \in \Sigma^k$, the k-gram count $C(s_1, s_2, \ldots, s_k)$ in the training data, and normalizing by the (k-1)-gram count $C(s_1, s_2, \ldots, s_{k-1})$. The stationary distribution $\mu$ satisfies $\mu P = \mu$, or equivalently $\mu_k = \sum_j \mu_j P_{jk}$ (Parzen, 1962). In general, finding $\mu$ for a large state space requires an eigenvector computation, but in the special case of an n-gram model it can be shown that the value in $\mu$ corresponding to the state $(s_1, s_2, \ldots, s_k)$ is simply the k-gram frequency $C(s_1, s_2, \ldots, s_k)/N$, where $N$ is the number of symbols in the data.² We therefore can compute the entropy rate of a stress sequence in time linear in both the amount of data and the size of the state space. This efficiency will enable us to experiment with values of n as large as seven; for larger values the amount of training data, not time, is the limiting factor.

3 Methodology

The training procedure entails simply counting the number of occurrences of each n-gram in the training data and computing the stress entropy rate by the method described. As we treat each sentence as an independent event, no cross-sentence n-grams are kept: only those that fit between sentence boundaries are counted.
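The computation just described fits in a few lines; the sketch below is ours (the list-of-streams input format is an assumption), but the arithmetic is exactly the Markov-chain entropy rate given above.

    import math
    from collections import Counter

    def stress_entropy_rate(streams, n):
        """Entropy rate, in bits per syllable, of an n-gram stress model.

        H = -sum over n-grams of mu(h) P(s|h) log2 P(s|h), where the
        stationary probability of a history h is its (n-1)-gram
        frequency and P(s|h) = C(h+s) / C(h). No cross-sentence
        n-grams are counted.
        """
        k = n - 1
        hist, full = Counter(), Counter()
        total = 0
        for stream in streams:
            for i in range(len(stream) - k):
                hist[stream[i:i + k]] += 1        # history count C(h)
                full[stream[i:i + k + 1]] += 1    # n-gram count C(h+s)
                total += 1
        rate = 0.0
        for gram, c in full.items():
            p_joint = c / total                   # mu(h) * P(s|h)
            p_cond = c / hist[gram[:k]]           # P(s|h)
            rate -= p_joint * math.log2(p_cond)
        return rate

For example, stress_entropy_rate(streams, 6) corresponds to the order-6 computation used in the experiments below.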
3.1 The meaning of stress entropy rate

We regard these experiments as computing the entropy rate of a Markov chain, estimated from training data, that approximately models the emission of symbols from a random source. The entropy rate bounds how compressible the training sequence is, and not precisely how predictable unseen sequences from the same source would be. To measure the efficacy of these models in prediction it would be necessary to divide the corpus, train a model on one subset, and measure the entropy rate of the other with respect to the trained model. Compression can take place off-line, after the entire training set is read, while prediction cannot "cheat" in this manner.

But we claim that our results predict how effective prediction would be, for the small state space in our Markov model and the huge amount of training data translate to very good state coverage. In language modeling, unseen words and unseen n-grams are a serious problem, and are typically combated with smoothing techniques such as the backoff model and the discounting formula offered by Good and Turing. In our case, unseen "words" never occur, for the tiniest of realistic training sets will cover the binary or ternary vocabulary. Coverage of the n-gram set is complete for our prose training texts for n as high as eight; nor do singleton states (counts that occur only once), which are the bases of Turing's estimate of the frequency of untrained states in new data, occur until n = 7.

²This ignores edge effects, for $\sum_{s} C(s_1, s_2, \ldots, s_k) = N - k + 1$, but this discrepancy is negligible when N is very large.

Lis-ten to me close-ly I'll en-deav-or to ex-plain
 S   W   S  S   S   W   S    W   S   W  S   W  S   P
what sep-ar-ates a char-la-tan from a Char-le-magne
 W    S  W   2   W   S   W  W   S   W   S   W   2   P

Figure 1: A song lyric exemplifies a highly regular stress stream (from the musical Pippin by Stephen Schwartz).

3.2 Lexicalizing stress

Lexical stress is the "backbone of speech rhythm" and the primary tool for its analysis (Baum, 1952). While the precise acoustical prominences of syllables within an utterance are subject to certain word-external hierarchical constraints observed by Halle (Halle and Vergnaud, 1987) and others, lexical stress is a local property. The stress patterns of individual words within a phrase or sentence are generally context independent.

One source of error in our method is the ambiguity for words with multiple phonetic transcriptions that differ in stress assignment. Highly accurate techniques for part-of-speech labeling could be used for stress pattern disambiguation when the ambiguity is purely lexical, but often the choice, in both production and perception, is dialectal. It would be straightforward to divide among all alternatives the count for each n-gram that includes a word with multiple stress patterns, but in the absence of reliable frequency information to weight each pattern we chose simply to use the pronunciation listed first in the dictionary, which is judged by the lexicographer to be the most popular. Very little accuracy is lost in making this assumption. Of the 115,966 words in the dictionary, 4635 have more than one pronunciation; of these, 1269 have more than one distinct stress pattern; of these, 525 have different primary stress placements. This smallest class has a few common words (such as "refuse" used as a noun and as a verb), but most either occur infrequently in text (obscure proper nouns, for example), or have a primary pronunciation that is overwhelmingly more common than the rest.

4 Experiments

The efficiency of the n-gram training procedure allowed us to exploit a wealth of data (over 60 million syllables) from 38 million words of Wall Street Journal text. We discarded sentences not completely covered by the pronunciation dictionary, leaving 36.1 million words and 60.7 million syllables for experimentation.

Our first experiments used the binary $\Sigma_1$ alphabet. The maximum entropy rate possible for this process is one bit per syllable, and given the unigram distribution of stress values in the data (55.2% are primary), an upper bound of slightly over 0.99 bits can be computed.
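As a check (our arithmetic, not the paper's), the quoted unigram bound is just the binary entropy of the marginal stress distribution:

    $H \le -0.552 \log_2 0.552 - 0.448 \log_2 0.448 \approx 0.992$ bits per syllable,

which is indeed slightly over 0.99 bits.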
Examining the 4-gram frequencies for the entire corpus (Figure 3a) sharpens this substantially, yielding an entropy rate estimate of 0.846 bits per syllable. Most frequent among the 4-grams are the patterns WSWS and SWSW, consistent with the principle of binary alternation mentioned in section 1. The 4-gram estimate matches quite closely with the estimate of 0.852 bits that can be derived from the distribution of word stress patterns excerpted in Figure 3b. But both measures overestimate the entropy rate by ignoring longer-range dependencies that become evident when we use larger values of n. For n = 6 we obtain a rate of 0.795 bits per syllable over the entire corpus.

Since we had several thousand times more data than is needed to make reliable estimates of stress entropy rate for values of n less than 7, it was practical to subdivide the corpus according to some criterion, and calculate the stress entropy rate for each subset as well as for the whole. We chose to divide at the sentence level and to partition the 1.59 million sentences in the data based on a likelihood measure suitable for testing the hypothesis from section 1.

A lexical trigram backoff-smoothed language model was trained on separate data to estimate the language perplexity of each sentence in the corpus. Sentence perplexity is the inverse of sentence probability normalized for length, $PP(S) = P(S)^{-1/|S|}$, where $P(S)$ is the probability of the sentence according to the language model and $|S|$ is its word count. This measure gauges the average "surprise" after revealing each word in the sentence as judged by the trigram model. The question of whether more probable word sequences are also more rhythmic can be approximated by asking whether sentences with lower perplexity have lower stress entropy rate.

Each sentence in the corpus was assigned to one of one hundred bins according to its perplexity: sentences with perplexity between 0 and 10 were assigned to the first bin; between 10 and 20, the second; and so on. Sentences with perplexity greater than 1000, which numbered roughly 106 thousand out of 1.59 million, were discarded from all experiments, as 10-unit bins at that level captured too little data for statistical significance. A histogram showing the amount of training data (in syllables) per perplexity bin is given in Figure 4.

Figure 3: (a) The corpus frequencies of all binary stress 4-grams (based on 60.7 million syllables), with secondary stress mapped to "weak" (W):

    WWWW  0.78%   WWWS  2.94%   WWSW  6.97%   WWSS  3.71%
    WSWW  6.91%   WSWS 11.00%   WSSW  6.16%   WSSS  6.06%
    SWWW  2.96%   SWWS  7.80%   SWSW 11.21%   SWSS  8.48%
    SSWW  3.94%   SSWS  8.59%   SSSW  6.25%   SSSS  6.27%

(b) The corpus frequencies of the top six lexical stress patterns:

    S 45.87%   SW 18.94%   W 9.54%   SWW 5.74%   WS 5.14%   WSW 4.54%

Figure 4: The amount of training data, in syllables, in each perplexity bin. The bin at perplexity level pp contains all sentences in the corpus with perplexity no less than pp and no greater than pp + 10. The smallest count (at bin 990) is 50662.
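The binning step is straightforward to sketch. Here sentence_logprob2 is an assumed interface to the trigram language model, returning log2 P(S); everything else follows the definitions above.

    from collections import defaultdict

    def perplexity(logprob2, n_words):
        """PP(S) = P(S)^(-1/|S|), from log2 P(S) and the word count."""
        return 2.0 ** (-logprob2 / n_words)

    def bin_by_perplexity(sentences, sentence_logprob2, width=10, cutoff=1000):
        """Assign sentences to bins [b*width, (b+1)*width); discard
        sentences whose perplexity exceeds the cutoff."""
        bins = defaultdict(list)
        for words in sentences:
            pp = perplexity(sentence_logprob2(words), len(words))
            if pp > cutoff:
                continue
            bins[int(pp // width)].append(words)
        return bins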
It is crucial to detect and understand potential sources of bias in the methodology so far. It is clear that the perplexity bins are well trained, but not yet that they are comparable with each other. Figure 5 shows the average number of syllables per word in sentences that appear in each bin. That this function is roughly increasing agrees with our intuition that sequences with longer words are rarer. But it biases our perplexity bins at the extremes. Early bins, with sequences that have a small syllable rate per word (1.57 in the 0 bin, for example), are predisposed to a lower stress entropy rate, since primary stresses, which occur roughly once per word, are more frequent. Later bins are also likely to be prejudiced in that direction, for the inverse reason: the increasing frequency of multisyllabic words makes it more and more fashionable to transit to a weak-stressed syllable following a primary stress, sharpening the probability distribution and decreasing entropy.

Figure 5: The average number of syllables per word for each perplexity bin.

This is verified when we run the stress entropy rate computation for each bin. The results for n-gram models of orders 3 through 7, for the case in which secondary lexical stress is mapped to the "weak" level, are shown in Figure 6.

All of the rates calculated are substantially less than a bit, but this only reflects the stress regularity inherent in the vocabulary and in word selection, and says nothing about word arrangement. The atomic elements in the text stream, the words, contribute regularity independently. To determine how much is contributed by the way they are glued together, we need to remove the bias of word choice.

For this reason we settled on a model size, n = 6, and performed a variety of experiments with both the original corpus and with a control set that contained exactly the same bins with exactly the same sentences, but mixed up. Each sentence in the control set was permuted with a pseudorandom sequence of swaps based on an insensitive function of the original; that is to say, identical sentences in the corpus were shuffled the same way, and sentences differing by only one word were shuffled similarly. This allowed us to keep steady the effects of multiple copies of the same sentence in the same perplexity bin. More importantly, these tests hold everything constant (diction, syllable count, syllable rate per word) except for syntax, the arrangement of the chosen words within the sentence. Comparing the unrandomized results with this control experiment allows us, therefore, to factor out everything but word order. In particular, subtracting the stress entropy rates of the original sentences from the rates of the randomized sentences gives us a figure, relative entropy, that estimates how many bits we save by knowing the proper word order given the word choice. The results for these tests for weak and strong secondary stress are shown in Figures 7 and 8, including the difference curves between the randomized-word and original entropy rates.

Figure 6: n-gram stress entropy rates (3-gram through 7-gram models) for $\Sigma_1$, weak secondary stress.
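One plausible way to realize such an "insensitive" permutation is to seed a random generator with a digest of the sentence itself, so that identical sentences are always shuffled identically. This is a reconstruction, not the author's procedure, and it does not capture the further property that sentences differing by one word are shuffled similarly.

    import hashlib
    import random

    def control_shuffle(words):
        """Permute a sentence with swaps that are a deterministic
        function of the sentence itself, so repeated sentences in
        the corpus receive the same permutation."""
        digest = hashlib.md5(' '.join(words).encode()).hexdigest()
        rng = random.Random(int(digest, 16))
        shuffled = list(words)
        rng.shuffle(shuffled)
        return shuffled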
The consistently positive difference function demonstrates that there is some extra stress regularity to be had with proper word order, about a hundredth of a bit on average. The difference is small indeed, but its consistency over hundreds of well-trained data points puts the observation on statistically solid ground.

The negative slopes of the difference curves suggest a more interesting conclusion: as sentence perplexity increases, the gap in stress entropy rate between syntactic sentences and randomly permuted sentences narrows. Restated inversely, using entropy rates for randomly permuted sentences as a baseline, sentences with higher sequence probability are relatively more rhythmical in the sense of our definition from section 1.

To supplement the $\Sigma_1$ binary vocabulary tests we ran the same experiments with $\Sigma_2 = \{W, S, P\}$, introducing a pause symbol to examine how stress behaves near phrase boundaries. Commas, dashes, semicolons, colons, ellipses, and all sentence-terminating punctuation in the text, which were removed in the $\Sigma_1$ tests, were mapped to a single pause symbol for $\Sigma_2$. Pauses in the text arise not only from semantic constraints but also from physiological limitations. These include the "breath groups" of syllables that influence both vocalized and written production (Ochsner, 1989). The results for these experiments are shown in Figures 9 and 10. Expectedly, adding the symbol increases the confusion and hence the entropy, but the rates remain less than a bit. The maximum possible rate for a ternary sequence is $\log_2 3 \approx 1.58$.

The experiments in this section were repeated with a larger perplexity interval that partitioned the corpus into 20 bins, each covering 50 units of perplexity. The resulting curves mirrored the finer-grain curves presented here.

5 Conclusions and future work

We have quantified lexical stress regularity, measured it in a large sample of written English prose, and shown there to be a significant contribution from word order that increases with lexical perplexity. This contribution was measured by comparing the entropy rate of lexical stress in natural sentences with randomly permuted versions of the same. Randomizing the word order in this way yields a fairly crude baseline, as it produces asyntactic sequences in which, for example, single-syllable function words can unnaturally clash. To correct for this we modified the randomization algorithm to permute only open-class words and to fix in place determiners, particles, pronouns, and other closed-class words. We found the entropy rates to be consistently midway between the fully randomized and unrandomized values. But even this constrained randomization is weaker than what we'd like. Ideally we should factor out semantics as well as word choice, comparing each sentence in the corpus with its grammatical variations. While this is a difficult experiment to do automatically, we're hoping to approximate it using a natural language generation system based on link grammar under development by the author.

Also, we're currently testing other data sources, such as the Switchboard corpus of telephone speech (Godfrey, Holliman, and McDaniel, 1992), to measure the effects of rhythm in more spontaneous and grammatically relaxed texts.

6 Acknowledgments

Comments from John Lafferty, Georg Niklfeld, and Frank Dellaert contributed greatly to this paper. The work was supported in part by an ARPA AASERT award, number DAAH04-95-1-0475.
Figure 7: 6-gram stress entropy rates and difference curve for $\Sigma_1$, weak secondary stress

Figure 8: 6-gram entropy rates and difference curve for $\Sigma_1$, strong secondary stress

Figure 9: 6-gram entropy rates and difference curve for $\Sigma_2$, weak secondary stress

Figure 10: 6-gram entropy rates and difference curve for $\Sigma_2$, strong secondary stress

References

Abercrombie, D. 1967. Elements of general phonetics. Edinburgh University Press.

Baum, P. F. 1952. The Other Harmony of Prose. Duke University Press.

Cover, T. M. and J. A. Thomas. 1991. Elements of information theory. John Wiley & Sons, Inc.

Godfrey, J., E. Holliman, and J. McDaniel. 1992. Switchboard: Telephone speech corpus for research and development. In Proc. ICASSP-92, pages I-517-520.

Halle, M. and J. Vergnaud. 1987. An essay on stress. The MIT Press.

Harding, D. W. 1976. Words into rhythm: English speech rhythm in verse and prose. Cambridge University Press.

Jassem, W., D. R. Hill, and I. H. Witten. 1984. Isochrony in English speech: its statistical validity and linguistic relevance. In D. Gibbon and H.
Richter, editors, Intonation, rhythm, and accent: Studies in Discourse Phonology. Walter de Gruyter, pages 203-225.

Kager, R. 1989a. A metrical theory of stress and destressing in English and Dutch. Foris Publications.

Kager, R. 1989b. The rhythm of English prose. Foris Publications.

Ochsner, R.S. 1989. Rhythm and writing. The Whitson Publishing Company.

Parzen, E. 1962. Stochastic processes. Holden-Day.

| 1996 | 40 |
An Empirical Study of Smoothing Techniques for Language Modeling

Stanley F. Chen
Harvard University
Aiken Computation Laboratory
33 Oxford St.
Cambridge, MA 02138
sfc@eecs.harvard.edu

Joshua Goodman
Harvard University
Aiken Computation Laboratory
33 Oxford St.
Cambridge, MA 02138
goodman@eecs.harvard.edu

Abstract

We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.

1 Introduction

Smoothing is a technique essential in the construction of n-gram language models, a staple in speech recognition (Bahl, Jelinek, and Mercer, 1983) as well as many other domains (Church, 1988; Brown et al., 1990; Kernighan, Church, and Gale, 1990). A language model is a probability distribution over strings P(s) that attempts to reflect the frequency with which each string s occurs as a sentence in natural text. Language models are used in speech recognition to resolve acoustically ambiguous utterances. For example, if we have that P(it takes two) >> P(it takes too), then we know ceteris paribus to prefer the former transcription over the latter.

While smoothing is a central issue in language modeling, the literature lacks a definitive comparison between the many existing techniques. Previous studies (Nadas, 1984; Katz, 1987; Church and Gale, 1991; MacKay and Peto, 1995) only compare a small number of methods (typically two) on a single corpus and using a single training data size. As a result, it is currently difficult for a researcher to intelligently choose between smoothing schemes.

In this work, we carry out an extensive empirical comparison of the most widely used smoothing techniques, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We carry out experiments over many training data sizes on varied corpora using both bigram and trigram models. We demonstrate that the relative performance of techniques depends greatly on training data size and n-gram order. For example, for bigram models produced from large training sets Church-Gale smoothing has superior performance, while Katz smoothing performs best on bigram models produced from smaller data. For the methods with parameters that can be tuned to improve performance, we perform an automated search for optimal values and show that sub-optimal parameter selection can significantly decrease performance. To our knowledge, this is the first smoothing work that systematically investigates any of these issues.

In addition, we introduce two novel smoothing techniques: the first belonging to the class of smoothing models described by Jelinek and Mercer, the second a very simple linear interpolation method. While being relatively simple to implement, we show that these methods yield good performance in bigram models and superior performance in trigram models.

We take the performance of a method m to be its cross-entropy on test data

$-\frac{1}{N_T} \sum_{i=1}^{l_T} \log P_m(t_i)$

where $P_m$ denotes the language model produced with method m and where the test data T is composed of sentences $(t_1, \ldots, t_{l_T})$ and contains a total of $N_T$ words. The entropy is inversely related to the average probability a model assigns to sentences in the test data, and it is generally assumed that lower entropy correlates with better performance in applications.
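In code, the evaluation metric is a length-normalized sum of log probabilities over the test sentences. The model interface below (model_logprob2, returning log2 Pm(ti)) is an assumption for illustration; taking logs base 2 reports the result in bits per word.

    def cross_entropy(test_sentences, model_logprob2):
        """Cross-entropy in bits per word: -(1/N_T) sum_i log2 Pm(t_i).

        N_T is the total word count of the test data; lower is better.
        """
        total, n_words = 0.0, 0
        for sentence in test_sentences:
            total += model_logprob2(sentence)
            n_words += len(sentence)
        return -total / n_words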
1.1 Smoothing n-gram Models

In n-gram language modeling, the probability of a string P(s) is expressed as the product of the probabilities of the words that compose the string, with each word probability conditional on the identity of the last n - 1 words, i.e., if $s = w_1 \cdots w_l$ we have

$P(s) = \prod_{i=1}^{l} P(w_i \mid w_1^{i-1}) \approx \prod_{i=1}^{l} P(w_i \mid w_{i-n+1}^{i-1})$   (1)

where $w_i^j$ denotes the words $w_i \cdots w_j$. Typically, n is taken to be two or three, corresponding to a bigram or trigram model, respectively.¹

Consider the case n = 2. To estimate the probabilities $P(w_i \mid w_{i-1})$ in equation (1), one can acquire a large corpus of text, which we refer to as training data, and take

$P_{ML}(w_i \mid w_{i-1}) = \frac{P(w_{i-1} w_i)}{P(w_{i-1})} = \frac{c(w_{i-1} w_i)/N_S}{c(w_{i-1})/N_S} = \frac{c(w_{i-1} w_i)}{c(w_{i-1})}$

where $c(\alpha)$ denotes the number of times the string $\alpha$ occurs in the text and $N_S$ denotes the total number of words. This is called the maximum likelihood (ML) estimate for $P(w_i \mid w_{i-1})$.

While intuitive, the maximum likelihood estimate is a poor one when the amount of training data is small compared to the size of the model being built, as is generally the case in language modeling. For example, consider the situation where a pair of words, or bigram, say burnish the, doesn't occur in the training data. Then we have $P_{ML}(the \mid burnish) = 0$, which is clearly inaccurate, as this probability should be larger than zero. A zero bigram probability can lead to errors in speech recognition, as it disallows the bigram regardless of how informative the acoustic signal is. The term smoothing describes techniques for adjusting the maximum likelihood estimate to hopefully produce more accurate probabilities.

As an example, one simple smoothing technique is to pretend each bigram occurs once more than it actually did (Lidstone, 1920; Johnson, 1932; Jeffreys, 1948), yielding

$P_{+1}(w_i \mid w_{i-1}) = \frac{c(w_{i-1} w_i) + 1}{c(w_{i-1}) + |V|}$

where V is the vocabulary, the set of all words being considered. This has the desirable quality of preventing zero bigram probabilities. However, this scheme has the flaw of assigning the same probability to, say, burnish the and burnish thou (assuming neither occurred in the training data), even though intuitively the former seems more likely because the word the is much more common than thou.

To address this, another smoothing technique is to interpolate the bigram model with a unigram model $P_{ML}(w_i) = c(w_i)/N_S$, a model that reflects how often each word occurs in the training data. For example, we can take

$P_{interp}(w_i \mid w_{i-1}) = \lambda P_{ML}(w_i \mid w_{i-1}) + (1 - \lambda) P_{ML}(w_i)$

getting the behavior that bigrams involving common words are assigned higher probabilities (Jelinek and Mercer, 1980).

¹To make the term $P(w_i \mid w_{i-n+1}^{i-1})$ meaningful for i < n, one can pad the beginning of the string with a distinguished token. In this work, we assume there are n - 1 such distinguished tokens preceding each sentence.

2 Previous Work

The simplest type of smoothing used in practice is additive smoothing (Lidstone, 1920; Johnson, 1932; Jeffreys, 1948), where we take

$P_{add}(w_i \mid w_{i-n+1}^{i-1}) = \frac{c(w_{i-n+1}^{i}) + \delta}{c(w_{i-n+1}^{i-1}) + \delta |V|}$   (2)

and where Lidstone and Jeffreys advocate $\delta = 1$. Gale and Church (1990; 1994) have argued that this method generally performs poorly.

The Good-Turing estimate (Good, 1953) is central to many smoothing techniques. It is not used directly for n-gram smoothing because, like additive smoothing, it does not perform the interpolation of lower- and higher-order models essential for good performance. Good-Turing states that an n-gram that occurs r times should be treated as if it had occurred $r^*$ times, where $r^* = (r + 1) \frac{n_{r+1}}{n_r}$ and where $n_r$ is the number of n-grams that occur exactly r times in the training data.

Katz smoothing (1987) extends the intuitions of Good-Turing by adding the interpolation of higher-order models with lower-order models. It is perhaps the most widely used smoothing technique in speech recognition.

Church and Gale (1991) describe a smoothing method that combines the Good-Turing estimate with bucketing, the technique of partitioning a set of n-grams into disjoint groups, where each group is characterized independently through a set of parameters. Like Katz, models are defined recursively in terms of lower-order models. Each n-gram is assigned to one of several buckets based on its frequency predicted from lower-order models. Each bucket is treated as a separate distribution and Good-Turing estimation is performed within each, giving corrected counts that are normalized to yield probabilities.
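Equation (2) above takes only a few lines to implement. In this sketch (ours, not the paper's code), counts maps n-gram tuples to frequencies, with the empty tuple assumed to hold the total token count so the same table serves every order.

    def p_additive(counts, context, word, delta, vocab_size):
        """P_add(w | context) = (c(context+w) + delta) / (c(context) + delta*|V|).

        delta = 1 gives the classic add-one (Laplace) estimate.
        """
        num = counts.get(context + (word,), 0) + delta
        den = counts.get(context, 0) + delta * vocab_size
        return num / den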
Gale and Church (1990; 1994) have argued that this method generally performs poorly. The Good-Turing estimate (Good, 1953) is cen- tral to many smoothing techniques. It is not used directly for n-gram smoothing because, like additive smoothing, it does not perform the interpolation of lower- and higher-order models essential for good performance. Good-Turing states that an n-gram that occurs r times should be treated as if it had occurred r* times, where r* = (r + 1)n~+l and where n~ is the number of n-grams that. occur exactly r times in the training data. Katz smoothing (1987) extends the intuitions of Good-Turing by adding the interpolation of higher- order models with lower-order models. It is perhaps the most widely used smoothing technique in speech recognition. Church and Gale (1991) describe a smoothing method that combines the Good-Turing estimate with bucketing, the technique of partitioning a set, of n-grams into disjoint groups, where each group is characterized independently through a set of pa- rameters. Like Katz, models are defined recursively in terms of lower-order models. Each n-gram is as- signed to one of several buckets based on its fre- quency predicted from lower-order models. Each bucket is treated as a separate distribution and Good-Turing estimation is performed within each, giving corrected counts that are normalized to yield probabilities. 311 Nd bucketing 2 ° * ~ ° % ° o °$ o • . ° .~ °e o *°*° * ° • ** o~,~L.s °o . • o oO o ~ o ° *b ;.°*~a-:.. • . ° • % a t ...,~;e.T¢: ° . .. : ° ° % o% **° ~ - ° ~°~ ° o o ° °• °~ ° o* ° o o o , , ,i , , ,i , , ,i , " . . . . 0 lo 100 1000 10000 100000 0.o01 r~rn~¢ of counts in distN~t]on new bucketing ., . . ., oeW~ o . 6'V, *°Na, o * * I * , , I , , * I , , * I , * 0.01 0.1 1 10 average r~n-zem count in dis~bution r~nus One Figure 1: )~ values for old and new bucketing schemes for Jelinek-Mercer smoothing; each point represents a single bucket The other smoothing technique besides Katz smoothing widely used in speech recognition is due to Jelinek and Mercer (1980). They present a class of smoothing models that involve linear interpola- tion, e.g., Brown et al. (1992) take i--1 PML(Wi IWi-n+l) "Iv ~Wi__ 1 i--1 i-- n-]-I P~ /W i-1 , (1-- )~to~-~ ) inte~pt i wi_n+2) (3) i-- u-I-1 That is, the maximum likelihood estimate is inter- polated with the smoothed lower-order distribution, which is defined analogously. Training a distinct I ~-1 for each wi_,~+li-1 is not generally felicitous; Wi--n-{-1 Bahl, Jelinek, and Mercer (1983) suggest partition- i-1 ing the 1~,~-~ into buckets according to c(wi_~+l), i-- n-l-1 where all )~w~-~ in the same bucket are constrained i-- n-l-1 to have the same value. To yield meaningful results, the data used to esti- mate the A~!-, need to be disjoint from the data ~-- n"l-1 used to calculate PML .2 In held-out interpolation, one reserves a section of the training data for this purpose. Alternatively, aelinek and Mercer describe a technique called deleted interpolation where differ- ent parts of the training data rotate in training either PML or the A,o!-' ; the results are then averaged. z-- n-[-I Several smoothing techniques are motivated within a Bayesian framework, including work by Nadas (1984) and MacKay and Peto (1995). 3 Novel Smoothing Techniques Of the great many novel methods that we have tried, two techniques have performed especially well. 2When the same data is used to estimate both, setting all )~ ~-~ to one yields the optimal result. 
3 Novel Smoothing Techniques

Of the great many novel methods that we have tried, two techniques have performed especially well.

3.1 Method average-count

This scheme is an instance of Jelinek-Mercer smoothing. Referring to equation (3), recall that Bahl et al. suggest bucketing the $\lambda_{w_{i-n+1}^{i-1}}$ according to $c(w_{i-n+1}^{i-1})$. We have found that partitioning the $\lambda_{w_{i-n+1}^{i-1}}$ according to the average number of counts per non-zero element, $\frac{c(w_{i-n+1}^{i-1})}{|\{w_i : c(w_{i-n+1}^{i}) > 0\}|}$, yields better results.

Intuitively, the less sparse the data for estimating $P_{ML}(w_i \mid w_{i-n+1}^{i-1})$, the larger $\lambda_{w_{i-n+1}^{i-1}}$ should be. While larger $c(w_{i-n+1}^{i-1})$ generally correspond to less sparse distributions, this quantity ignores the allocation of counts between words. For example, we would consider a distribution with ten counts distributed evenly among ten words to be much more sparse than a distribution with ten counts all on a single word. The average number of counts per word seems to more directly express the concept of sparseness.

In Figure 1, we graph the value of $\lambda$ assigned to each bucket under the original and new bucketing schemes on identical data. Notice that the new bucketing scheme results in a much tighter plot, indicating that it is better at grouping together distributions with similar behavior.

3.2 Method one-count

This technique combines two intuitions. First, MacKay and Peto (1995) argue that a reasonable form for a smoothed distribution is

$P_{one}(w_i \mid w_{i-n+1}^{i-1}) = \frac{c(w_{i-n+1}^{i}) + \alpha P_{one}(w_i \mid w_{i-n+2}^{i-1})}{c(w_{i-n+1}^{i-1}) + \alpha}$

The parameter $\alpha$ can be thought of as the number of counts being added to the given distribution, where the new counts are distributed as in the lower-order distribution. Secondly, the Good-Turing estimate can be interpreted as stating that the number of these extra counts should be proportional to the number of words with exactly one count in the given distribution. We have found that taking

$\alpha = \gamma \left[ n_1(w_{i-n+1}^{i-1}) + \beta \right]$   (4)

works well, where $n_1(w_{i-n+1}^{i-1}) = |\{w_i : c(w_{i-n+1}^{i}) = 1\}|$ is the number of words with exactly one count, and where $\beta$ and $\gamma$ are constants.
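Method one-count reduces to a short recursion. The sketch below is ours: n1 is a callable returning the number of words with exactly one count after the given context, counts[()] is assumed to hold the total token count, and a single (beta, gamma) pair stands in for the per-order parameters used in the actual implementation.

    def p_one(counts, n1, context, word, beta, gamma, vocab_size):
        """Method one-count: (c(context+w) + alpha*P_lower) / (c(context) + alpha),
        with alpha = gamma * (n1(context) + beta) per equation (4).
        The recursion ends at the uniform n = 0 distribution."""
        alpha = gamma * (n1(context) + beta)      # assumes beta, gamma > 0
        if context:
            lower = p_one(counts, n1, context[1:], word,
                          beta, gamma, vocab_size)
        else:
            lower = 1.0 / vocab_size
        c_full = counts.get(context + (word,), 0)
        c_ctx = counts.get(context, 0)
        return (c_full + alpha * lower) / (c_ctx + alpha)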
4 Experimental Methodology

4.1 Data

We used the Penn treebank and TIPSTER corpora distributed by the Linguistic Data Consortium. From the treebank, we extracted text from the tagged Brown corpus, yielding about one million words. From TIPSTER, we used the Associated Press (AP), Wall Street Journal (WSJ), and San Jose Mercury News (SJM) data, yielding 123, 84, and 43 million words respectively. We created two distinct vocabularies, one for the Brown corpus and one for the TIPSTER data. The former vocabulary contains all 53,850 words occurring in Brown; the latter vocabulary consists of the 65,173 words occurring at least 70 times in TIPSTER.

For each experiment, we selected three segments of held-out data along with the segment of training data. One held-out segment was used as the test data for performance evaluation, and the other two were used as development test data for optimizing the parameters of each smoothing method. Each piece of held-out data was chosen to be roughly 50,000 words. This decision does not reflect practice very well, as when the training data size is less than 50,000 words it is not realistic to have so much development test data available. However, we made this decision to prevent us having to optimize the training versus held-out data tradeoff for each data size. In addition, the development test data is used to optimize typically very few parameters, so in practice small held-out sets are generally adequate, and perhaps can be avoided altogether with techniques such as deleted estimation.

4.2 Smoothing Implementations

In this section, we discuss the details of our implementations of various smoothing techniques. Due to space limitations, these descriptions are not comprehensive; a more complete discussion is presented in Chen (1996). The titles of the following sections include the mnemonic we use to refer to the implementations in later sections. Unless otherwise specified, for those smoothing models defined recursively in terms of lower-order models, we end the recursion by taking the n = 0 distribution to be the uniform distribution $P_{unif}(w_i) = 1/|V|$. For each method, we highlight the parameters (e.g., $\lambda_n$ and $\delta$ below) that can be tuned to optimize performance. Parameter values are determined through training on held-out data.

4.2.1 Baseline Smoothing (interp-baseline)

For our baseline smoothing method, we use an instance of Jelinek-Mercer smoothing where we constrain all $\lambda_{w_{i-n+1}^{i-1}}$ to be equal to a single value $\lambda_n$ for each n, i.e.,

$P_{base}(w_i \mid w_{i-n+1}^{i-1}) = \lambda_n P_{ML}(w_i \mid w_{i-n+1}^{i-1}) + (1 - \lambda_n) P_{base}(w_i \mid w_{i-n+2}^{i-1})$

4.2.2 Additive Smoothing (plus-one and plus-delta)

We consider two versions of additive smoothing. Referring to equation (2), we fix $\delta = 1$ in plus-one smoothing. In plus-delta, we consider any $\delta$.

4.2.3 Katz Smoothing (katz)

While the original paper (Katz, 1987) uses a single parameter k, we instead use a different k for each n > 1, $k_n$. We smooth the unigram distribution using additive smoothing with parameter $\delta$.

4.2.4 Church-Gale Smoothing (church-gale)

To smooth the counts $n_r$ needed for the Good-Turing estimate, we use the technique described by Gale and Sampson (1995). We smooth the unigram distribution using Good-Turing without any bucketing.

Instead of the bucketing scheme described in the original paper, we use a scheme analogous to the one described by Bahl, Jelinek, and Mercer (1983). We make the assumption that whether a bucket is large enough for accurate Good-Turing estimation depends on how many n-grams with non-zero counts occur in it. Thus, instead of partitioning the space of $P(w_{i-1})P(w_i)$ values in some uniform way as was done by Church and Gale, we partition the space so that at least $c_{min}$ non-zero n-grams fall in each bucket.

Finally, the original paper describes only bigram smoothing in detail; extending this method to trigram smoothing is ambiguous. In particular, it is unclear whether to bucket trigrams according to $P(w_{i-2}^{i-1})P(w_i)$ or $P(w_{i-2}^{i-1})P(w_i \mid w_{i-1})$. We chose the former; while the latter may yield better performance, our belief is that it is much more difficult to implement and that it requires a great deal more computation.

4.2.5 Jelinek-Mercer Smoothing (interp-held-out and interp-del-int)

We implemented two versions of Jelinek-Mercer smoothing, differing only in what data is used to train the $\lambda$'s. We bucket the $\lambda_{w_{i-n+1}^{i-1}}$ according to $c(w_{i-n+1}^{i-1})$, as suggested by Bahl et al. Similar to our Church-Gale implementation, we choose buckets to ensure that at least $c_{min}$ words in the data used to train the $\lambda$'s fall in each bucket. In interp-held-out, the $\lambda$'s are trained using held-out interpolation on one of the development test sets. In interp-del-int, the $\lambda$'s are trained using the relaxed deleted interpolation technique described by Jelinek and Mercer, where one word is deleted at a time. In interp-del-int, we bucket an n-gram according to its count before deletion, as this turned out to significantly improve performance.
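Both katz and church-gale rest on the Good-Turing adjusted counts introduced in section 2. A bare-bones version (ours) is below; real implementations first smooth the counts-of-counts curve, as in the Gale and Sampson technique cited above, while this sketch simply falls back to the raw count when $n_{r+1}$ is zero.

    from collections import Counter

    def good_turing_counts(ngram_counts):
        """Adjusted counts r* = (r + 1) * n_{r+1} / n_r, where n_r is
        the number of distinct n-grams occurring exactly r times."""
        n = Counter(ngram_counts.values())        # counts of counts
        adjusted = {}
        for gram, r in ngram_counts.items():
            if n.get(r + 1):
                adjusted[gram] = (r + 1) * n[r + 1] / n[r]
            else:
                adjusted[gram] = float(r)         # unsmoothed fallback
        return adjusted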
4.2.6 Novel Smoothing Methods (new-avg-count and new-one-count)

The implementation new-avg-count, corresponding to smoothing method average-count, is identical to interp-held-out except that we use the novel bucketing scheme described in section 3.1. In the implementation new-one-count, we have different parameters $\beta_n$ and $\gamma_n$ in equation (4) for each n.
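The only difference between interp-held-out and new-avg-count is the key on which contexts are grouped into buckets; the two keys can be sketched as below (our formulation; the input is assumed to be the list of non-zero successor counts for one context).

    def old_bucket_key(successor_counts):
        """interp-held-out groups contexts by c(context), the total count."""
        return sum(successor_counts)

    def avg_count_key(successor_counts):
        """new-avg-count groups contexts by the average number of counts
        per non-zero element: c(context) / |{w : c(context + w) > 0}|."""
        if not successor_counts:
            return 0.0
        return sum(successor_counts) / len(successor_counts)

In either scheme, bucket boundaries are then chosen so that each bucket receives at least $c_{min}$ items, as described in section 4.2.5.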
5 Results

In Figure 2, we display the performance of the interp-baseline method for bigram and trigram models on TIPSTER, Brown, and the WSJ subset of TIPSTER. In Figures 3-6, we display the relative performance of various smoothing techniques with respect to the baseline method on these corpora, as measured by difference in entropy. In the graphs on the left of Figures 2-4, each point represents an average over ten runs; the error bars represent the empirical standard deviation over these runs. Due to resource limitations, we only performed multiple runs for data sets of 50,000 sentences or less. Each point on the graphs on the right represents a single run, but we consider sizes up to the amount of data available. The graphs on the bottom of Figures 3-4 are close-ups of the graphs above, focusing on those algorithms that perform better than the baseline. To give an idea of how these cross-entropy differences translate to perplexity, each 0.014 bits corresponds roughly to a 1% change in perplexity.

In each run except as noted below, optimal values for the parameters of the given technique were searched for using Powell's search algorithm as realized in Numerical Recipes in C (Press et al., 1988, pp. 309-317). Parameters were chosen to optimize the cross-entropy of one of the development test sets associated with the given training set. To constrain the search, we searched only those parameters that were found to affect performance significantly, as verified through preliminary experiments over several data sizes. For katz and church-gale, we did not perform the parameter search for training sets over 50,000 sentences due to resource constraints, and instead manually extrapolated parameter values from optimal values found on smaller data sizes. We ran interp-del-int only on sizes up to 50,000 sentences due to time constraints.

From these graphs, we see that additive smoothing performs poorly and that methods katz and interp-held-out consistently perform well. Our implementation church-gale performs poorly except on large bigram training sets, where it performs the best. The novel methods new-avg-count and new-one-count perform well uniformly across training data sizes, and are superior for trigram models. Notice that while performance is relatively consistent across corpora, it varies widely with respect to training set size and n-gram order.

The method interp-del-int performs significantly worse than interp-held-out, though they differ only in the data used to train the $\lambda$'s. However, we delete one word at a time in interp-del-int; we hypothesize that deleting larger chunks would lead to more similar performance.

In Figure 7, we show how the values of the parameters $\delta$ and $c_{min}$ affect the performance of methods katz and new-avg-count, respectively, over several training data sizes. Notice that poor parameter setting can lead to very significant losses in performance, and that optimal parameter settings depend on training set size.

To give an informal estimate of the difficulty of implementation of each method, in Table 1 we display the number of lines of C++ code in each implementation, excluding the core code common across techniques.

    Method            Lines
    interp-baseline³    400
    plus-one             40
    plus-delta           40
    katz                300
    church-gale        1000
    interp-held-out     400
    interp-del-int      400
    new-avg-count       400
    new-one-count        50

Table 1: Implementation difficulty of various methods in terms of lines of C++ code

6 Discussion

To our knowledge, this is the first empirical comparison of smoothing techniques in language modeling of such scope: no other study has used multiple training data sizes, corpora, or has performed parameter optimization.

³To implement the baseline method, we just used the interp-held-out code as it is a special case. Written anew, it probably would have been about 50 lines.

Figure 2: Baseline cross-entropy on test data; graph on left displays averages over ten runs for training sets up to 50,000 sentences, graph on right displays single runs for training sets up to 10,000,000 sentences

Figure 3: Trigram model on TIPSTER data; relative performance of various methods with respect to baseline; graphs on left display averages over ten runs for training sets up to 50,000 sentences, graphs on right display single runs for training sets up to 10,000,000 sentences; top graphs show all algorithms, bottom graphs zoom in on those methods that perform better than the baseline method
x _~_.._~...i" "~"~ "0.1610 o lO0O 1OOOO 10oooo le+06 le+07 sentences of training data (-25 words/sentonce) Figure 3: Trigram model on TIPSTER data; relative performance of various methods with respect to baseline; graphs on left display averages over ten runs for training sets up to 50,000 sentences, graphs on right display single runs for training sets up to 10,000,000 sentences; top graphs show all algorithms, bottom graphs zoom in on those methods that perform better than the baseline method 315 E =o average over ten runs at each size, up to 50,000 senlences 5 . . . . . . . . , . . . . . . . . , • • - 4.6 .... ~- ................. "~'" plus-o'n~ ............. ~ .... 4 ........... 3.S " 3 ....... t~.,, " "-~ ........... ~ plus4el~ 2.5 ........... * 1. I church*gale ' 0.5 0 . . . . . -05 100 1000 10000 sentences of training data (-26 words/sentence) average over t~ runs at each size, up to 50,000 sentences 0.02 . . . . . . . , . . . . . . . , . . • -0.02 "" "~... humh*gale ~- ..... ......~ ............ {. .............. -0.00 ..... ~ ............. -'~-'~" :=L::~" T n:w:n2ount / ~ ........ 1 .0.14 100 1000 10000 sentences of training data (-26 words/sentence) single run at each size, up to 10,000,000 sentences 5 , • ., . • .. ., . • ., . • 4 " .. 3.5 "~"'~"... I~US*one 2.5 t''°'""'~'""'o.........." ""o. "*. 1 f , ~ " "~" " p us-de ta church-gale "'~... O.5 "~-.. --._o. ~3.6 T , ,k~tz, taterp-he~-out., interp~, el~tat, ,~ew,~zvg~ou ' ...... ~ne.~ount ! ...... ,ow), l 100 1000 10000 100000 le+06 le+07 sentences of training data (-26 words/sentence) stagle r~n at each size, up to 10,OCO,O00 sentences 0.02 • • •, . , • ., ., . . . o I -0.02 church*gale ...~:,'~ -0.04 ~ " " ,¢ " .. ". n erp-he d-out . " ~,~ -~ Interprdel-mt ..- .* ~'" ........ ~ ..~ .~./.- ~0~ -0.1 new-one-count ..~D -...B'" .~.. • . -..~ j.~.~..:.::;$., ~Om 1 2 ew-avg-count .0.14 ' , , I , , , i , , , m , . , I , , 10o 10oo 1ooo0 1oo000 le+06 le+07 sentences of training data (-25 wo~ds/sentence) Figure 4: Bigram model on TIPSTER data; relative performance of various methods with respect to baseline; graphs on left display averages over ten runs for training sets up to 50,000 sentences, graphs on right display single runs for training sets up to 10,000,000 sentences; top graphs show all algorithms, bottom graphs zoom in on those methods that perform better than the baseline method bigram model 0.02 . . . . . . , . . . . . . , 0 i -0.02 church-gale interprdel-int .0,04 ..-~, ............... ~. .0.00 .0.o8 ...z" ......... ~*.. -0.12 "= . ~inte~p~held-out lew-a-~n~t...~--~272~::'z--.. "n~:orpe-~t a .......... D .......... -0.16 -0.18 . . . . . . . . i . . . . . . . . i 100 1000 IOQO0 sentences of training data (-21 words~sentence) tzigram model 0 ....................................................................................................................................................................... .0.02 -0.06 katz .--~" "'~'"'""'" • ..-:::.. ......... .,,<.:..-" -0.06 .-" ........ ...i r ~ t e t p..<1 el-~ip_t .......... .0.12 :::.-.. "'~=.... ......... ~ ........... Q.. interp*held-out ............ 7~.=: =-P~::.... " ......... e ............... o ........... e ........... ~ ............... .0.14 - ~ - =':::=*~-~_.___ ~ new-one-count -0.1600 1000 10000 sentences of traJelng data (-21 words/sentence) Figure 5: Bigram and trigram models on Brown corpus; relative performance of various methods with respect to baseline 316 bigram model tdgram model "(, ~ .......................... 
o 0 i ~ 0.02 -0.02 hurch-gale ~ '--nt erp~J el-int "~ -0.04 .. . . . . ~ -0.02 inte rp-d el-int ".. ~ inte rpheld~out ... ~"" ~ "'-~ E -0.06 • " ., ~,~. ] ~ - .- ~,. "'A. -- -0.06 .-'-'katz ""-~ "-.= .. -0°3 ' :i: ,.. ........ ::.>~,.- .~. ..... .. ~ "'- -::::.., y -oo i -. .............. -0.14 • """ "~ " • " -k-atz -0.13 ~ -018 ........ = ' ' , 1oo 1000 10oo0 100000 le+06 10o 1000 10000 100000 le+0o sentences of training data (-25 words/sentence) sentences of t relelr~g data (-25 words/sentence) Figure 6: Bigram and trigram models on Wall Street Journal corpus; relative performance of various methods with respect to baseline z ~C == .=_ performance el katz with respect to delta 1.6 . . . . , • .., . . , . .., • .., . . 10O senl 1.4 1.2 1 10,0O0 sent 0.8 1,0O0 sent ..a 0.6 /' .,.~/" 0.4 / .-" .ED,O00 sent)< 0.2 "'"'d:. / .." ~'" e . . . . I , , ,i , , ,r , , ,I , , ,i , , , 0.0Ol o.01 0.1 1 lO 10o 1000 delta -0.0O -0.07 ==- -O.08 2 -0.O3 -0.1 -0.11 -0.12 -0.13 performance of new-avg-c~nt with respect to c-min . . ., . . ., . . x\ \ / ~'\ lO.000,000 sent /" / /" x\, , .o ",\ // ,,," .... / '"6. ..'"' 2 l """'u, 1 OO3,0OO sent " / j/10,0O0 sent 10 100 tO00 10(00 100000 minimum number of counts per bucket Figure 7: Performance of katz and new-avg-count with respect to parameters ~ and Cmin, respectively characterize the relative performance of two tech- niques, it is necessary to consider multiple training set sizes and to try both bigram and trigram mod- els. Multiple runs should be performed whenever possible to discover whether any calculated differ- ences are statistically significant. Furthermore, we show that sub-optimM parameter selection can also significantly affect relative performance. We find that the two most widely used techniques, Katz smoothing and Jelinek-Mercer smoothing, per- form consistently well across training set sizes for both bigram and trigram models, with Katz smooth- ing performing better on trigram models produced from large training sets and on bigram models in general. These results question the generality of the previous reference result concerning Katz smooth- ing: Katz (1987) reported that his method slightly outperforms an unspecified version of Jelinek-Mercer smoothing on a single training set of 750,000 words. Furthermore, we show that Church-Gale smooth- ing, which previously had not been compared with common smoothing techniques, outperforms all ex- isting methods on bigram models produced from large training sets. Finally, we find that our novel methods average-count and one-count are superior to existing methods for trigram models and perform well on bigram models; method one-count yields marginally worse performance but is extremely easy to implement. In this study, we measure performance solely through the cross-entropy of test data; it would be interesting to see how these cross-entropy differ- ences correlate with performance in end applications such as speech recognition. In addition, it would be interesting to see whether these results extend to fields other than language modeling where smooth- ing is used, such as prepositional phrase attachment (Collins and Brooks, 1995), part-of-speech tagging (Church, 1988), and stochastic parsing (Magerman, 1994). 317 Acknowledgements The authors would like to thank Stuart Shieber and the anonymous reviewers for their comments on pre- vious versions of this paper. 
We would also like to thank William Gale and Geoffrey Sampson for supplying us with code for "Good-Turing frequency estimation without tears." This research was supported by the National Science Foundation under Grant No. IRI-93-50192 and Grant No. CDA-94-01024. The second author was also supported by a National Science Foundation Graduate Student Fellowship.

References

Bahl, Lalit R., Frederick Jelinek, and Robert L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-5(2):179-190, March.

Brown, Peter F., John Cocke, Stephen A. DellaPietra, Vincent J. DellaPietra, Frederick Jelinek, John D. Lafferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Computational Linguistics, 16(2):79-85, June.

Brown, Peter F., Stephen A. DellaPietra, Vincent J. DellaPietra, Jennifer C. Lai, and Robert L. Mercer. 1992. An estimate of an upper bound for the entropy of English. Computational Linguistics, 18(1):31-40, March.

Chen, Stanley F. 1996. Building Probabilistic Models for Natural Language. Ph.D. thesis, Harvard University. In preparation.

Church, Kenneth. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, pages 136-143.

Church, Kenneth W. and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating probabilities of English bigrams. Computer Speech and Language, 5:19-54.

Collins, Michael and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In David Yarowsky and Kenneth Church, editors, Proceedings of the Third Workshop on Very Large Corpora, pages 27-38, Cambridge, MA, June.

Gale, William A. and Kenneth W. Church. 1990. Estimation procedures for language context: poor estimates are worse than none. In COMPSTAT, Proceedings in Computational Statistics, 9th Symposium, pages 69-74, Dubrovnik, Yugoslavia, September.

Gale, William A. and Kenneth W. Church. 1994. What's wrong with adding one? In N. Oostdijk and P. de Haan, editors, Corpus-Based Research into Language. Rodopi, Amsterdam.

Gale, William A. and Geoffrey Sampson. 1995. Good-Turing frequency estimation without tears. Journal of Quantitative Linguistics, 2(3). To appear.

Good, I.J. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3 and 4):237-264.

Jeffreys, H. 1948. Theory of Probability. Clarendon Press, Oxford, second edition.

Jelinek, Frederick and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Proceedings of the Workshop on Pattern Recognition in Practice, Amsterdam, The Netherlands: North-Holland, May.

Johnson, W.E. 1932. Probability: deductive and inductive problems. Mind, 41:421-423.

Katz, Slava M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400-401, March.

Kernighan, M.D., K.W. Church, and W.A. Gale. 1990. A spelling correction program based on a noisy channel model. In Proceedings of the Thirteenth International Conference on Computational Linguistics, pages 205-210.

Lidstone, G.J. 1920. Note on the general case of the Bayes-Laplace formula for inductive or a posteriori probabilities.
Transactions of the Faculty of Actuaries, 8:182-192. MacKay, David J. C. and Linda C. Peto. 1995. A hi- erarchical Dirichlet language model. Natural Lan- guage Engineering, 1(3):1-19. Magerman, David M. 1994. Natural Language Pars- ing as Statistical Pattern Recognition. Ph.D. the- sis, Stanford University, February. Nadas, Arthur. 1984. Estimation of probabilities in the language model of the IBM speech recognition system. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-32(4):859-861, Au- gust. Press, W.H., B.P. Flannery, S.A. Teukolsky, and W.T. Vetterling. 1988. Numerical Recipes in C. Cambridge University Press, Cambridge. 318 | 1996 | 41 |
Minimizing Manual Annotation Cost In Supervised Training From Corpora

Sean P. Engelson and Ido Dagan
Department of Mathematics and Computer Science
Bar-Ilan University
52900 Ramat Gan, Israel
{engelson, dagan}@bimacs.cs.biu.ac.il

Abstract

Corpus-based methods for natural language processing often use supervised training, requiring expensive manual annotation of training corpora. This paper investigates methods for reducing annotation cost by sample selection. In this approach, during training the learning program examines many unlabeled examples and selects for labeling (annotation) only those that are most informative at each stage. This avoids redundantly annotating examples that contribute little new information. This paper extends our previous work on committee-based sample selection for probabilistic classifiers. We describe a family of methods for committee-based sample selection, and report experimental results for the task of stochastic part-of-speech tagging. We find that all variants achieve a significant reduction in annotation cost, though their computational efficiency differs. In particular, the simplest method, which has no parameters to tune, gives excellent results. We also show that sample selection yields a significant reduction in the size of the model used by the tagger.

1 Introduction

Many corpus-based methods for natural language processing (NLP) are based on supervised training -- acquiring information from a manually annotated corpus. Therefore, reducing annotation cost is an important research goal for statistical NLP. The ultimate reduction in annotation cost is achieved by unsupervised training methods, which do not require an annotated corpus at all (Kupiec, 1992; Merialdo, 1994; Elworthy, 1994). It has been shown, however, that some supervised training prior to the unsupervised phase is often beneficial. Indeed, fully unsupervised training may not be feasible for certain tasks. This paper investigates an approach for optimizing the supervised training (learning) phase, which reduces the annotation effort required to achieve a desired level of accuracy of the trained model.

In this paper, we investigate and extend the committee-based sample selection approach to minimizing training cost (Dagan and Engelson, 1995). When using sample selection, a learning program examines many unlabeled (not annotated) examples, selecting for labeling only those that are most informative for the learner at each stage of training (Seung, Opper, and Sompolinsky, 1992; Freund et al., 1993; Lewis and Gale, 1994; Cohn, Atlas, and Ladner, 1994). This avoids redundantly annotating many examples that contribute roughly the same information to the learner.

Our work focuses on sample selection for training probabilistic classifiers. In statistical NLP, probabilistic classifiers are often used to select a preferred analysis of the linguistic structure of a text (for example, its syntactic structure (Black et al., 1993), word categories (Church, 1988), or word senses (Gale, Church, and Yarowsky, 1993)). As a representative task for probabilistic classification in NLP, we experiment in this paper with sample selection for the popular and well-understood method of stochastic part-of-speech tagging using Hidden Markov Models.

We first review the basic approach of committee-based sample selection and its application to part-of-speech tagging.
This basic approach gives rise to a family of algorithms (including the original algorithm described in (Dagan and Engelson, 1995)) which we then describe. First, we describe the 'simplest' committee-based selection algorithm, which has no parameters to tune. We then generalize the selection scheme, allowing more options to adapt and tune the approach for specific tasks. The paper compares the performance of several instantiations of the general scheme, including a batch selection method similar to that of Lewis and Gale (1994). In particular, we found that the simplest version of the method achieves a significant reduction in annotation cost, comparable to that of other versions. We also evaluate the computational efficiency of the different variants, and the number of unlabeled examples they consume. Finally, we study the effect of sample selection on the size of the model acquired by the learner.

2 Probabilistic Classification

This section presents the framework and terminology assumed for probabilistic classification, as well as its instantiation for stochastic bigram part-of-speech tagging.

A probabilistic classifier classifies input examples e by classes c in C, where C is a known set of possible classes. Classification is based on a score function, FM(c, e), which assigns a score to each possible class of an example. The classifier then assigns the example to the class with the highest score. FM is determined by a probabilistic model M. In many applications, FM is the conditional probability function, PM(c|e), specifying the probability of each class given the example, but other score functions that correlate with the likelihood of the class are often used.

In stochastic part-of-speech tagging, the model assumed is a Hidden Markov Model (HMM), and input examples are sentences. The class c to which a sentence is assigned is a sequence of the parts of speech (tags) for the words in the sentence. The score function is typically the joint (or conditional) probability of the sentence and the tag sequence.1 The tagger then assigns the sentence to the tag sequence which is most probable according to the HMM.

The probabilistic model M, and thus the score function FM, are defined by a set of parameters, {αi}. During training, the values of the parameters are estimated from a set of statistics, S, extracted from a training set of annotated examples. We denote a particular model by M = {ai}, where each ai is a specific value for the corresponding αi.

In bigram part-of-speech tagging the HMM model M contains three types of parameters: transition probabilities P(ti→tj) giving the probability of tag tj occurring after tag ti, lexical probabilities P(t|w) giving the probability of tag t labeling word w, and tag probabilities P(t) giving the marginal probability2 of a tag occurring. The values of these parameters are estimated from a tagged corpus which provides a training set of labeled examples (see Section 4.1).

1 This gives the Viterbi model (Merialdo, 1994), which we use here.
2 This version of the method uses Bayes' theorem (P(wi|ti) is proportional to P(ti|wi)/P(ti)) (Church, 1988).

3 Evaluating Example Uncertainty

A sample selection method needs to evaluate the expected usefulness, or information gain, of learning from a given example. The methods we investigate approach this evaluation implicitly, measuring an example's informativeness as the uncertainty in its classification given the current training data (Seung, Opper, and Sompolinsky, 1992; Lewis and Gale, 1994; MacKay, 1992). The reasoning is that if an example's classification is uncertain given current training data then the example is likely to contain unknown information useful for classifying similar examples in the future.

We investigate the committee-based method, where the learning algorithm evaluates an example by giving it to a committee containing several variant models, all 'consistent' with the training data seen so far. The more the committee members agree on the classification of the example, the greater our certainty in its classification. This is because when the training data entails a specific classification with high certainty, most (in a probabilistic sense) classifiers consistent with the data will produce that classification.

The committee-based approach was first proposed in a theoretical context for learning binary non-probabilistic classifiers (Seung, Opper, and Sompolinsky, 1992; Freund et al., 1993). In this paper, we extend our previous work (Dagan and Engelson, 1995) where we applied the basic idea of the committee-based approach to probabilistic classification. Taking a Bayesian perspective, the posterior probability of a model, P(M|S), is determined given statistics S from the training set (and some prior distribution for the models). Committee members are then generated by drawing models randomly from P(M|S). An example is selected for labeling if the committee members largely disagree on its classification. This procedure assumes that one can sample from the models' posterior distribution, at least approximately.

To illustrate the generation of committee members, consider a model containing a single binomial parameter α (the probability of a success), with estimated value a. The statistics S for such a model are given by N, the number of trials, and x, the number of successes in those trials. Given N and x, the 'best' parameter value may be estimated by one of several estimation methods. For example, the maximum likelihood estimate for α is a = x/N, giving the model M = {a} = {x/N}. When generating a committee of models, however, we are not interested in the 'best' model, but rather in sampling the distribution of models given the statistics. For our example, we need to sample the posterior density of estimates for α, namely P(α = a|S). Sampling this distribution yields a set of estimates scattered around x/N (assuming a uniform prior), whose variance decreases as N increases. In other words, the more statistics there are for estimating the parameter, the more similar are the parameter values used by different committee members.

For models with multiple parameters, parameter estimates for different committee members differ more when they are based on low training counts, and they agree more when based on high counts. Selecting examples on which the committee members disagree contributes statistics to currently uncertain parameters whose uncertainty also affects classification.

It may sometimes be difficult to sample P(M|S) due to parameter interdependence. Fortunately, models used in natural language processing often assume independence between most model parameters. In such cases it is possible to generate committee members by sampling the posterior distribution for each independent group of parameters separately.

4 Bigram Part-Of-Speech Tagging

4.1 Sampling model parameters

In order to generate committee members for bigram tagging, we sample the posterior distributions for transition probabilities, P(ti→tj), and for lexical probabilities, P(t|w) (as described in Section 2). Both types of the parameters we sample have the form of multinomial distributions. Each multinomial random variable corresponds to a conditioning event and its values are given by the corresponding set of conditioned events. For example, a transition probability parameter P(ti→tj) has conditioning event ti and conditioned event tj.

Let {ui} denote the set of possible values of a given multinomial variable, and let S = {ni} denote a set of statistics extracted from the training set for that variable, where ni is the number of times that the value ui appears in the training set for the variable, defining N = Σi ni. The parameters whose posterior distributions we wish to estimate are αi = P(ui).

The maximum likelihood estimate for each of the multinomial's distribution parameters, αi, is âi = ni/N. In practice, this estimator is usually smoothed in some way to compensate for data sparseness. Such smoothing typically reduces slightly the estimates for values with positive counts and gives small positive estimates for values with a zero count. For simplicity, we describe here the approximation of P(αi = ai|S) for the unsmoothed estimator.3

We approximate the posterior P(αi = ai|S) by first assuming that the multinomial is a collection of independent binomials, each of which corresponds to a single value ui of the multinomial; we then normalize the values so that they sum to 1. For each such binomial, we approximate P(αi = ai|S) as a truncated normal distribution (restricted to [0,1]), with estimated mean µ = ni/N and variance σ² = µ(1-µ)/N.4

3 In the implementation we smooth the MLE by interpolation with a uniform probability distribution, following Merialdo (1994). Approximate adaptation of P(αi = ai|S) to the smoothed version of the estimator is simple.
4 The normal approximation, while easy to implement, can be avoided. The posterior probability P(αi = ai|S) for the multinomial is given exactly by the Dirichlet distribution (Johnson, 1972) (which reduces to the Beta distribution in the binomial case). In this work we assumed a uniform prior distribution for each model parameter; we have not addressed the question of how to best choose a prior for this problem.

To generate a particular multinomial distribution, we randomly choose values for the binomial parameters ai from their approximated posterior distributions (using the simple sampling method given in (Press et al., 1988, p. 214)), and renormalize them so that they sum to 1. Finally, to generate a random HMM given statistics S, we choose values independently for the parameters of each multinomial, since all the different multinomials in an HMM are independent.

4.2 Examples in bigram training

Typically, concept learning problems are formulated such that there is a set of training examples that are independent of each other. When training a bigram model (indeed, any HMM), this is not true, as each word is dependent on that before it. This problem is solved by considering each sentence as an individual example. More generally, it is possible to break the text at any point where tagging is unambiguous. We thus use unambiguous words (those with only one possible part of speech) as example boundaries in bigram tagging. This allows us to train on smaller examples, focusing training more on the truly informative parts of the corpus.
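The committee-generation procedure just described can be made concrete with a short sketch. The following Python code draws one committee member's version of a single multinomial parameter vector (e.g., the transition distribution out of one tag) using the truncated-normal approximation and renormalization; all function and variable names are our own, smoothing of the MLE is omitted, and the variance multiplier t anticipates the temperature parameter introduced in Section 5.

import math
import random

def sample_multinomial(counts, t=1.0):
    # Draw each P(u_i) from a normal approximation to its posterior,
    # truncated to [0, 1], then renormalize the draws to sum to 1.
    # `counts` are the training counts n_i (assumed not all zero);
    # `t` multiplies the posterior variance.
    n = sum(counts)
    draws = []
    for n_i in counts:
        mu = n_i / n                          # maximum likelihood estimate
        sigma = math.sqrt(t * mu * (1.0 - mu) / n)
        a = random.gauss(mu, sigma)
        draws.append(min(max(a, 0.0), 1.0))   # truncate to [0, 1]
    total = sum(draws)
    return [a / total for a in draws]

# Example: a committee of 5 variants of one transition distribution.
committee = [sample_multinomial([30, 5, 1]) for _ in range(5)]

Note how the sketch reflects the discussion above: the larger the total count n, the smaller the posterior variance, and the more the committee members agree on this parameter.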
5 Selection Algorithms

Within the committee-based paradigm there exist different methods for selecting informative examples. Previous research in sample selection has used either sequential selection (Seung, Opper, and Sompolinsky, 1992; Freund et al., 1993; Dagan and Engelson, 1995), or batch selection (Lewis and Catlett, 1994; Lewis and Gale, 1994). We describe here general algorithms for both sequential and batch selection.

Sequential selection examines unlabeled examples as they are supplied, one by one, and measures the disagreement in their classification by the committee. Those examples determined to be sufficiently informative are selected for training. Most simply, we can use a committee of size two and select an example when the two models disagree on its classification. This gives the following, parameter-free, two member sequential selection algorithm, executed for each unlabeled input example e:

1. Draw 2 models randomly from P(M|S), where S are statistics acquired from previously labeled examples;
2. Classify e by each model, giving classifications c1 and c2;
3. If c1 differs from c2, select e for annotation;
4. If e is selected, get its correct label and update S accordingly.

This basic algorithm needs no parameters. If desired, it is possible to tune the frequency of selection, by changing the variance of P(M|S) (or the variance of P(αi = ai|S) for each parameter), where larger variances increase the rate of disagreement among the committee members. We implemented this effect by employing a temperature parameter t, used as a multiplier of the variance of the posterior parameter distribution.

A more general algorithm results from allowing (i) a larger number of committee members, k, in order to sample P(M|S) more precisely, and (ii) more refined example selection criteria. This gives the following general sequential selection algorithm, executed for each unlabeled input example e:

1. Draw k models {Mi} randomly from P(M|S) (possibly using a temperature t);
2. Classify e by each model Mi giving classifications {ci};
3. Measure the disagreement D over {ci};
4. Decide whether to select e for annotation, based on the value of D;
5. If e is selected, get its correct label and update S accordingly.

It is easy to see that two member sequential selection is a special case of general sequential selection, where any disagreement is considered sufficient for selection. In order to instantiate the general algorithm for larger committees, we need to define (i) a measure for disagreement (Step 3), and (ii) a selection criterion (Step 4).

Our approach to measuring disagreement is to use the vote entropy, the entropy of the distribution of classifications assigned to an example ('voted for') by the committee members. Denoting the number of committee members assigning c to e by V(c, e), the vote entropy is:

D = - (1 / log k) Σc (V(c, e) / k) log (V(c, e) / k)

(Dividing by log k normalizes the scale for the number of committee members.) Vote entropy is maximized when all committee members disagree, and is zero when they all agree.
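The vote-entropy measure and the two selection criteria described next (thresholded and randomized selection) are simple to implement; a minimal sketch follows, in which the helper names, the vote representation (a mapping from tag to number of votes), and the default parameters are our own assumptions rather than the paper's code.

import math
import random
from collections import Counter

def vote_entropy(votes, k):
    # Normalized entropy of committee votes for one word.
    # `votes` maps each tag to the number of the k committee members
    # voting for it.  Returns 0 when all agree, 1 when all disagree.
    d = 0.0
    for v in votes.values():
        p = v / k
        d -= p * math.log(p)
    return d / math.log(k)

def select_example(tag_votes_per_word, k, threshold=None, gain=None):
    # Average per-word vote entropy over the example's word sequence.
    # With `threshold` set this is thresholded selection; with `gain`
    # set it is randomized selection with probability p = gain * D.
    d = sum(vote_entropy(v, k) for v in tag_votes_per_word)
    d /= len(tag_votes_per_word)
    if threshold is not None:
        return d >= threshold
    return random.random() < gain * d

# Example: a 3-word example judged by a committee of k = 5 models.
votes = [Counter({'NN': 3, 'VB': 2}),
         Counter({'DT': 5}),
         Counter({'JJ': 2, 'NN': 2, 'VBG': 1})]
print(select_example(votes, k=5, threshold=0.2))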
In bigram tagging, each example consists of a sequence of several words. In our system, we measure D separately for each word, and use the average entropy over the word sequence as a measurement of disagreement for the example. We use the average entropy rather than the entropy over the entire sequence, because the number of committee members is small with respect to the total number of possible tag sequences. Note that we do not look at the entropy of the distribution given by each single model to the possible tags (classes), since we are only interested in the uncertainty of the final classification (see the discussion in Section 7).

We consider two alternative selection criteria (for Step 4). The simplest is thresholded selection, in which an example is selected for annotation if its vote entropy exceeds some threshold θ. The other alternative is randomized selection, in which an example is selected for annotation based on the flip of a coin biased according to the vote entropy -- a higher vote entropy entailing a higher probability of selection. We define the selection probability as a linear function of vote entropy: p = gD, where g is an entropy gain parameter. The selection method we used in our earlier work (Dagan and Engelson, 1995) is randomized sequential selection using this linear selection probability model, with parameters k, t and g.

An alternative to sequential selection is batch selection. Rather than evaluating examples individually for their informativeness, a large batch of examples is examined, and the m best are selected for annotation. The batch selection algorithm, executed for each batch B of N examples, is as follows:

1. For each example e in B:
   (a) Draw k models randomly from P(M|S);
   (b) Classify e by each model, giving classifications {ci};
   (c) Measure the disagreement De for e over {ci};
2. Select for annotation the m examples from B with the highest De;
3. Update S by the statistics of the selected examples.

This procedure is repeated sequentially for successive batches of N examples, returning to the start of the corpus at the end. If N is equal to the size of the corpus, batch selection selects the m globally best examples in the corpus at each stage (as in (Lewis and Catlett, 1994)). On the other hand, as N decreases, batch selection becomes closer to sequential selection.

6 Experimental Results

This section presents results of applying committee-based sample selection to bigram part-of-speech tagging, as compared with complete training on all examples in the corpus. Evaluation was performed using the University of Pennsylvania tagged corpus from the ACL/DCI CD-ROM I. For ease of implementation, we used a complete (closed) lexicon which contains all the words in the corpus.

[Figure 1: Training versus accuracy. In batch, random, and thresholded runs, k = 5 and t = 50. (a) Number of ambiguous words selected for labeling versus classification accuracy achieved. (b) Accuracy versus number of words examined from the corpus (both labeled and unlabeled).]

The committee-based sampling algorithm was initialized using the first 1,000 words from the corpus, and then sequentially examined the following examples in the corpus for possible labeling. The training set consisted of the first million words in the corpus, with sentence ordering randomized to compensate for inhomogeneity in corpus composition. The test set was a separate portion of the corpus, consisting of 20,000 words. We compare the amount of training required by different selection methods to achieve a given tagging accuracy on the test set, where both the amount of training and tagging accuracy are measured over ambiguous words.5

5 Note that most other work on tagging has measured accuracy over all words, not just ambiguous ones. Complete training of our system on 1,000,000 words gave us an accuracy of 93.5% over ambiguous words, which corresponds to an accuracy of 95.9% over all words in the test set, comparable to other published results on bigram tagging.

The effectiveness of randomized committee-based selection for part-of-speech tagging, with 5 and 10 committee members, was demonstrated in (Dagan and Engelson, 1995). Here we present and compare results for batch, randomized, thresholded, and two member committee-based selection.

Figure 1 presents the results of comparing the several selection methods against each other. The plots shown are for the best parameter settings that we found through manual tuning for each method. Figure 1(a) shows the advantage that sample selection gives with regard to annotation cost. For example, complete training requires annotated examples containing 98,000 ambiguous words to achieve a 92.6% accuracy (beyond the scale of the graph), while the selective methods require only 18,000-25,000 ambiguous words to achieve this accuracy. We also find that, to a first approximation, all selection methods considered give similar results. Thus, it seems that a refined choice of the selection method is not crucial for achieving large reductions in annotation cost.

This equivalence of the different methods also largely holds with respect to computational efficiency. Figure 1(b) plots classification accuracy versus number of words examined, instead of those selected. We see that while all selective methods are less efficient in terms of examples examined than complete training, they are comparable to each other. Two member selection seems to have a clear, though small, advantage.

[Figure 2: Evaluating batch selection, for m = 5. (a) Accuracy achieved versus batch size at different numbers of selected training words. (b) Accuracy versus number of words examined from the corpus for different batch sizes.]

In Figure 2 we investigate further the properties of batch selection. Figure 2(a) shows that accuracy increases with batch size only up to a point, and then starts to decrease. This result is in line with theoretical difficulties with batch selection (Freund et al., 1993) in that batch selection does not account for the distribution of input examples. Hence, once batch size increases past a point, the input distribution has too little influence on which examples are selected, and hence classification accuracy decreases. Furthermore, as batch size increases, computational efficiency, in terms of the number of examples examined to attain a given accuracy, decreases tremendously (Figure 2(b)).

[Figure 3: The size of the trained model, measured by the number of frequency counts > 0, plotted (y-axis) versus classification accuracy achieved (x-axis). (a) Lexical counts (freq(t, w)). (b) Bigram counts (freq(t1→t2)).]

The ability of committee-based selection to focus on the more informative parts of the training corpus is analyzed in Figure 3. Here we examined the number of lexical and bigram counts that were stored (i.e., were non-zero) during training, using the two member selection algorithm and complete training. As the graphs show, the sample selection method achieves the same accuracy as complete training with fewer lexical and bigram counts. This means that many counts in the data are less useful for correct tagging, as replacing them with smoothed estimates works just as well.6 Committee-based selection ignores such counts, focusing on parameters which improve the model. This behavior has the practical advantage of reducing the size of the model significantly (by a factor of three here). Also, the average count is lower in a model constructed by selective training than in a fully trained model, suggesting that the selection method avoids using examples which increase the counts for already known parameters.

6 As noted above, we smooth the MLE estimates by interpolation with a uniform probability distribution (Merialdo, 1994).

7 Discussion

Why does committee-based sample selection work? Consider the properties of those examples that are selected for training. In general, a selected training example will contribute data to several statistics, which in turn will improve the estimates of several parameter values. An informative example is therefore one whose contribution to the statistics leads to a significantly useful improvement of model parameter estimates. Model parameters for which acquiring additional statistics is most beneficial can be characterized by the following three properties:

1. The current estimate of the parameter is uncertain due to insufficient statistics in the training set. Additional statistics would bring the estimate closer to the true value.
2. Classification of examples is sensitive to changes in the current estimate of the parameter. Otherwise, even if the current value of the parameter is very uncertain, acquiring additional statistics will not change the resulting classifications.

3. The parameter affects classification for a large proportion of examples in the input. Parameters that affect only few examples have low overall utility.

The committee-based selection algorithms work because they tend to select examples that affect parameters with the above three properties. Property 1 is addressed by randomly drawing the parameter values for committee members from the posterior distribution given the current statistics. When the statistics for a parameter are insufficient, the variance of the posterior distribution of the estimates is large, and hence there will be large differences in the values of the parameter chosen for different committee members. Note that property 1 is not addressed when uncertainty in classification is only judged relative to a single model7 (as in, e.g., (Lewis and Gale, 1994)).

7 The use of a single model is also criticized in (Cohn, Atlas, and Ladner, 1994).

Property 2 is addressed by selecting examples for which committee members highly disagree in classification (rather than measuring disagreement in parameter estimates). Committee-based selection thus addresses properties 1 and 2 simultaneously: it acquires statistics just when uncertainty in current parameter estimates entails uncertainty regarding the appropriate classification of the example. Our results show that this effect is achieved even when using only two committee members to sample the space of likely classifications. By appropriate classification we mean the classification given by a perfectly-trained model, that is, one with accurate parameter values.

Note that this type of uncertainty, regarding the identity of the appropriate classification, is different than uncertainty regarding the correctness of the classification itself. For example, sufficient statistics may yield an accurate 0.51 probability estimate for a class c in a given example, making it certain that c is the appropriate classification. However, the certainty that c is the correct classification is low, since there is a 0.49 chance that c is the wrong class for the example. A single model can be used to estimate only the second type of uncertainty, which does not correlate directly with the utility of additional training.

Finally, property 3 is addressed by independently examining input examples which are drawn from the input distribution. In this way, we implicitly model the distribution of model parameters used for classifying input examples. Such modeling is absent in batch selection, and we hypothesize that this is the reason for its lower effectiveness.

8 Conclusions

Annotating large textual corpora for training natural language models is a costly process. We propose reducing this cost significantly using committee-based sample selection, which reduces redundant annotation of examples that contribute little new information. The method can be applied in a semi-interactive process, in which the system selects several new examples for annotation at a time and updates its statistics after receiving their labels from the user.
The implicit modeling of uncertainty makes the selection system generally applicable and quite simple to implement.

Our experimental study of variants of the selection method suggests several practical conclusions. First, it was found that the simplest version of the committee-based method, using a two-member committee, yields reduction in annotation cost comparable to that of the multi-member committee. The two-member version is simpler to implement, has no parameters to tune and is computationally more efficient. Second, we generalized the selection scheme giving several alternatives for optimizing the method for a specific task. For bigram tagging, comparative evaluation of the different variants of the method showed similar large reductions in annotation cost, suggesting the robustness of the committee-based approach. Third, sequential selection, which implicitly models the expected utility of an example relative to the example distribution, worked in general better than batch selection. The latter was found to work well only for small batch sizes, where the method mimics sequential selection. Increasing batch size (approaching 'pure' batch selection) reduces both accuracy and efficiency. Finally, we studied the effect of sample selection on the size of the trained model, showing a significant reduction in model size.

8.1 Further research

Our results suggest applying committee-based sample selection to other statistical NLP tasks which rely on estimating probabilistic parameters from an annotated corpus. Statistical methods for these tasks typically assign a probability estimate, or some other statistical score, to each alternative analysis (a word sense, a category label, a parse tree, etc.), and then select the analysis with the highest score. The score is usually computed as a function of the estimates of several 'atomic' parameters, often binomials or multinomials, such as:

• In word sense disambiguation (Hearst, 1991; Gale, Church, and Yarowsky, 1993): P(s|f), where s is a specific sense of the ambiguous word in question w, and f is a feature of occurrences of w. Common features are words in the context of w or morphological attributes of it.

• In prepositional-phrase (PP) attachment (Hindle and Rooth, 1993): P(a|f), where a is a possible attachment, such as an attachment to a head verb or noun, and f is a feature, or a combination of features, of the attachment. Common features are the words involved in the attachment, such as the head verb or noun, the preposition, and the head word of the PP.

• In statistical parsing (Black et al., 1993): P(r|h), the probability of applying the rule r at a certain stage of the top down derivation of the parse tree given the history h of the derivation process.

• In text categorization (Lewis and Gale, 1994; Iwayama and Tokunaga, 1994): P(t|C), where t is a term in the document to be categorized, and C is a candidate category label.

Applying committee-based selection to supervised training for such tasks can be done analogously to its application in the current paper.8 Furthermore, committee-based selection may be attempted also for training non-probabilistic classifiers, where explicit modeling of information gain is typically impossible. In such contexts, committee members might be generated by randomly varying some of the decisions made in the learning algorithm.

8 Measuring disagreement in full syntactic parsing is complicated. It may be approached by similar methods to those used for parsing evaluation, which measure the disagreement between the parser's output and the correct parse.
Another important area for future work is in developing sample selection methods which are independent of the eventual learning method to be applied. This would be of considerable advantage in developing selectively annotated corpora for general research use. Recent work on heterogeneous uncertainty sampling (Lewis and Catlett, 1994) supports this idea, using one type of model for example selection and a different type for classification.

Acknowledgments. We thank Yoav Freund and Yishay Mansour for helpful discussions. The first author gratefully acknowledges the support of the Fulbright Foundation.

References

Black, Ezra, Fred Jelinek, John Lafferty, David Magerman, Robert Mercer, and Salim Roukos. 1993. Towards history-based grammars: using richer models for probabilistic parsing. In Proc. of the Annual Meeting of the ACL, pages 31-37.

Church, Kenneth W. 1988. A stochastic parts program and noun phrase parser for unrestricted text. In Proc. of ACL Conference on Applied Natural Language Processing.

Cohn, David, Les Atlas, and Richard Ladner. 1994. Improving generalization with active learning. Machine Learning, 15:201-221.

Dagan, Ido and Sean Engelson. 1995. Committee-based sampling for training probabilistic classifiers. In Proc. Int'l Conference on Machine Learning, July.

Elworthy, David. 1994. Does Baum-Welch re-estimation improve taggers? In Proc. of ACL Conference on Applied Natural Language Processing, pages 53-58.

Freund, Y., H. S. Seung, E. Shamir, and N. Tishby. 1993. Information, prediction, and query by committee. In Advances in Neural Information Processing, volume 5. Morgan Kaufmann.

Gale, William, Kenneth Church, and David Yarowsky. 1993. A method for disambiguating word senses in a large corpus. Computers and the Humanities, 26:415-439.

Hearst, Marti. 1991. Noun homograph disambiguation using local context in large text corpora. In Proc. of the Annual Conference of the UW Center for the New OED and Text Research, pages 1-22.

Hindle, Donald and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.

Iwayama, M. and T. Tokunaga. 1994. A probabilistic model for text categorization based on a single random variable with multiple values. In Proceedings of the 4th Conference on Applied Natural Language Processing.

Johnson, Norman L. 1972. Continuous Multivariate Distributions. John Wiley & Sons, New York.

Kupiec, Julian. 1992. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6:225-242.

Lewis, David D. and Jason Catlett. 1994. Heterogeneous uncertainty sampling for supervised learning. In Proc. Int'l Conference on Machine Learning.

Lewis, David D. and William A. Gale. 1994. A sequential algorithm for training text classifiers. In Proc. of the ACM SIGIR Conference.

MacKay, David J. C. 1992. Information-based objective functions for active data selection. Neural Computation, 4.

Merialdo, Bernard. 1994. Tagging text with a probabilistic model. Computational Linguistics, 20(2):155-172.

Press, William H., Brian P. Flannery, Saul A. Teukolsky, and William T. Vetterling. 1988. Numerical Recipes in C. Cambridge University Press.

Seung, H. S., M. Opper, and H. Sompolinsky. 1992. Query by committee. In Proc. ACM Workshop on Computational Learning Theory.
Unsupervised Learning of Word-Category Guessing Rules

Andrei Mikheev
HCRC Language Technology Group
University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW, Scotland, UK
Andrei.Mikheev@ed.ac.uk

Abstract

Words unknown to the lexicon present a substantial problem to part-of-speech tagging. In this paper we present a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for unknown words. Three complementary sets of word-guessing rules are induced from the lexicon and a raw corpus: prefix morphological rules, suffix morphological rules and ending-guessing rules. The learning was performed on the Brown Corpus data and rule-sets, with a highly competitive performance, were produced and compared with the state-of-the-art.

1 Introduction

Words unknown to the lexicon present a substantial problem to part-of-speech (POS) tagging of real-world texts. Taggers assign a single POS-tag to a word-token, provided that it is known what parts-of-speech this word can take on in principle. So, first words are looked up in the lexicon. However, 3 to 5% of word tokens are usually missing in the lexicon when tagging real-world texts. This is where word-POS guessers take their place -- they employ the analysis of word features, e.g. word leading and trailing characters, to figure out its possible POS categories. A set of rules which, on the basis of ending characters of unknown words, assign them with sets of possible POS-tags is supplied with the Xerox tagger (Kupiec, 1992). A similar approach was taken in (Weischedel et al., 1993) where an unknown word was guessed given the probabilities for an unknown word to be of a particular POS, its capitalisation feature and its ending. In (Brill, 1995) a system of rules which uses both ending-guessing and more morphologically motivated rules is described. The best of these methods are reported to achieve 82-85% tagging accuracy on unknown words, e.g. (Brill, 1995; Weischedel et al., 1993).

The major topic in the development of word-POS guessers is the strategy which is to be used for the acquisition of the guessing rules. A rule-based tagger described in (Voutilainen, 1995) is equipped with a set of guessing rules which has been hand-crafted using knowledge of English morphology and intuition. A more appealing approach is an empirical automatic acquisition of such rules using available lexical resources. In (Zhang&Kim, 1990) a system for the automated learning of morphological word-formation rules is described. This system divides a string into three regions and from training examples infers their correspondence to underlying morphological features. Brill (Brill, 1995) outlines a transformation-based learner which learns guessing rules from a pre-tagged training corpus. A statistical-based suffix learner is presented in (Schmid, 1994). From a pre-tagged training corpus it constructs the suffix tree where every suffix is associated with its information measure. Although the learning process in these and some other systems is fully unsupervised and the accuracy of obtained rules reaches current state-of-the-art, they require specially prepared training data -- a pre-tagged training corpus, training examples, etc.

In this paper we describe a new fully automatic technique for learning part-of-speech guessing rules.
This technique does not require specially prepared training data and employs fully unsupervised statistical learning using the lexicon supplied with the tagger and word-frequencies obtained from a raw corpus. The learning is implemented as a two-staged process with feedback. First, setting certain parameters, a set of guessing rules is acquired; then it is evaluated and the results of evaluation are used for re-acquisition of a better tuned rule-set.

2 Guessing Rules Acquisition

As was pointed out above, one of the requirements in many techniques for automatic learning of part-of-speech guessing rules is specially prepared training data -- a pre-tagged training corpus, training examples, etc. In our approach we decided to reuse the data which come naturally with a tagger, viz. the lexicon. Another source of information which is used and which is not prepared specially for the task is a text corpus. Unlike other approaches we don't require the corpus to be pre-annotated but use it in its raw form. In our experiments we used the lexicon and word-frequencies derived from the Brown Corpus (Francis&Kucera, 1982). There are a number of reasons for choosing the Brown Corpus data for training. The most important ones are that the Brown Corpus provides a model of general multi-domain language use, so general language regularities can be induced from it, and second, many taggers come with data trained on the Brown Corpus which is useful for comparison and evaluation. This, however, by no means restricts the described technique to that or any other tag-set, lexicon or corpus. Moreover, despite the fact that the training is performed on a particular lexicon and a particular corpus, the obtained guessing rules are supposed to be domain and corpus independent and the only training-dependent feature is the tag-set in use.

The acquisition of word-POS guessing rules is a three-step procedure which includes the rule extraction, rule scoring and rule merging phases. At the rule extraction phase, three sets of word-guessing rules (morphological prefix guessing rules, morphological suffix guessing rules and ending-guessing rules) are extracted from the lexicon and cleaned from coincidental cases. At the scoring phase, each rule is scored in accordance with its accuracy of guessing and the best scored rules are included into the final rule-sets. At the merging phase, rules which have not scored high enough to be included into the final rule-sets are merged into more general rules, then re-scored and, depending on their score, added to the final rule-sets.

2.1 Rule Extraction Phase

2.1.1 Extraction of Morphological Rules.

Morphological word-guessing rules describe how one word can be guessed given that another word is known. For example, the rule:

[un (VBD VBN) (JJ)]

says that prefixing the string "un" to a word, which can act as past form of verb (VBD) and participle (VBN), produces an adjective (JJ). For instance, by applying this rule to the word "undeveloped", we first segment the prefix "un" and if the remaining part "developed" is found in the lexicon as (VBD VBN), we conclude that the word "undeveloped" is an adjective (JJ). The first POS-set in a guessing rule is called the initial class (I-class) and the POS-set of the guessed word is called the resulting class (R-class). In the example above (VBD VBN) is the I-class of the rule and (JJ) is the R-class.

In English, as in many other languages, morphological word formation is realised by affixation: prefixation and suffixation. Although sometimes the affixation is not just a straightforward concatenation of the affix with the stem,1 the majority of cases clearly obey simple concatenative regularities. So, we decided first to concentrate only on simple concatenative cases. There are two kinds of morphological rules to be learned: suffix rules (As) -- rules which are applied to the tail of a word, and prefix rules (Ap) -- rules which are applied to the beginning of a word. For example:

As: [ed (NN VB) (JJ VBD VBN)]

says that if by stripping the suffix "ed" from an unknown word we produce a word with the POS-class (NN VB), the unknown word is of the class (JJ VBD VBN). This rule works, for instance, for [book → booked], [water → watered], etc.

1 Consider an example: try - tried.

To extract such rules a special operator ∇ is applied to every pair of words from the lexicon. It tries to segment an affix by leftmost string subtraction for suffixes and rightmost string subtraction for prefixes. If the subtraction results in a non-empty string it creates a morphological rule by storing the POS-class of the shorter word as the I-class and the POS-class of the longer word as the R-class. For example:

[booked (JJ VBD VBN)] ∇ [book (NN VB)] → As: [ed (NN VB) (JJ VBD VBN)]
[undeveloped (JJ)] ∇ [developed (VBD VBN)] → Ap: [un (VBD VBN) (JJ)]

The ∇ operator is applied to all possible lexicon-entry pairs and if a rule produced by such an application has already been extracted from another pair, its frequency count (f) is incremented. Thus two different sets of guessing rules -- prefix and suffix morphological rules together with their frequencies -- are produced. Next, from these sets of guessing rules we need to cut out infrequent rules which might bias the further learning process. To do that we eliminate all the rules with the frequency f less than a certain threshold θ.2 Such filtering reduces the rule-sets more than tenfold and does not leave clearly coincidental cases among the rules.

2 Usually we set this threshold quite low: 2-4.

2.1.2 Extraction of Ending Guessing Rules.

Unlike morphological guessing rules, ending-guessing rules do not require the main form of an unknown word to be listed in the lexicon. These rules guess a POS-class for a word just on the basis of its ending characters and without looking up its stem in the lexicon. Such rules are able to cover more unknown words than morphological guessing rules but their accuracy will not be as high. For example, an ending-guessing rule

Ae: [ing - (JJ NN VBG)]

says that if a word ends with "ing" it can be an adjective, a noun or a gerund. Unlike a morphological rule, this rule does not ask to check whether the substring preceding the "ing"-ending is a word with a particular POS-tag. Thus an ending-guessing rule looks exactly like a morphological rule apart from the I-class which is always void.

To collect such rules we set the upper limit on the ending length equal to five characters and thus collect from the lexicon all possible word-endings of length 1, 2, 3, 4 and 5, together with the POS-classes of the words where these endings were detected to appear. This is done by the operator Δ. For example, from the word [different (JJ)] the Δ operator will produce five ending-guessing rules: [t - (JJ)]; [nt - (JJ)]; [ent - (JJ)]; [rent - (JJ)]; [erent - (JJ)]. The Δ operator is applied to each entry in the lexicon in the way described for the ∇ operator of the morphological rules, and then infrequent rules with f < θ are filtered out.
2.2 Rule Scoring Phase

Of course, not all acquired rules are equally good as plausible guesses about word-classes: some rules are more accurate in their guesses and some rules are more frequent in their application. So, for every acquired rule we need to estimate whether it is an effective rule which is worth retaining in the final rule-set. For such estimation we perform a statistical experiment as follows: for every rule we calculate the number of times this rule was applied to a word token from a raw corpus and the number of times it gave the right answer. Note that the task of the rule is not to disambiguate a word's POS but to provide all and only possible POSs it can take on. If the rule is correct in the majority of times it was applied it is obviously a good rule. If the rule is wrong most of the times it is a bad rule which should not be included into the final rule-set.

To perform this experiment we take one-by-one each rule from the rule-sets produced at the rule extraction phase, take each word token from the corpus and guess its POS-set using the rule if the rule is applicable to the word. For example, if a guessing rule strips a particular suffix and a current word from the corpus does not have this suffix, we classify this word and rule as incompatible and the rule as not applicable to that word. If the rule is applicable to the word we perform look-up in the lexicon for this word and then compare the result of the guess with the information listed in the lexicon. If the guessed POS-set is the same as the POS-set stated in the lexicon, we count it as success, otherwise it is failure. The value of a guessing rule, thus, closely correlates with its estimated proportion of success p̂, which is the proportion of all positive outcomes (m) of the rule application to the total number of the trials (n), which are, in fact, attempts to apply the rule to all the compatible words in the corpus. We also smooth p̂ so as not to have zeros in positive or negative outcome probabilities:

p̂ = (m + 0.5) / (n + 1)

The p̂ estimate is a good indicator of rule accuracy. However, it frequently suffers from large estimation error due to insufficient training data. For example, if a rule was detected to work just twice and the total number of observations was also two, its estimate p̂ is very high (1, or 0.83 for the smoothed version) but clearly this is not a very reliable estimate because of the tiny size of the sample. Several smoothing methods have been proposed to reduce the estimation error. For different reasons all these smoothing methods are not very suitable in our case. In our approach we tackle this problem by calculating the lower confidence limit πL for the rule estimate. This can be seen as the minimal expected value of p̂ for the rule if we were to draw a large number of samples. Thus with certain confidence α we can assume that if we used more training data, the rule estimate p̂ would be no worse than the πL limit. The lower confidence limit πL is calculated as:

πL = p̂ - z(1-α)/2 * sp = p̂ - z(1-α)/2 * sqrt(p̂(1-p̂)/n)

This function favours the rules with higher estimates obtained over larger samples. Even if one rule has a high estimate but that estimate was obtained over a small sample, another rule with a lower estimate but over a large sample might be valued higher. Note also that since p̂ itself is smoothed we will not have zeros in positive (p̂) or negative (1-p̂) outcome probabilities. This estimation of the rule value in fact resembles that used by (Tzoukermann et al., 1995) for scoring POS-disambiguation rules for the French tagger. The main difference between the two functions is that there the z value was implicitly assumed to be 1, which corresponds to a confidence of 68%. A more standard approach is to adopt a rather high confidence value in the range of 90-95%. We adopted 90% confidence, for which z(1-0.90)/2 = z0.05 = 1.65. Thus we can calculate the score for the ith rule as:

scorei = p̂i - 1.65 * sqrt(p̂i(1-p̂i)/ni)

Another important consideration for scoring a word-guessing rule is that the longer the affix or ending of the rule the more confident we are that it is not a coincidental one, even on small samples. For example, if the estimate for the word-ending "o" was obtained over a sample of 5 words and the estimate for the word-ending "fulness" was also obtained over a sample of 5 words, the latter case is more representative even though the sample size is the same. Thus we need to adjust the estimation error in accordance with the length of the affix or ending. A good way to do that is to divide it by a value which increases along with the increase of the length. After several experiments we obtained:

scorei = p̂i - 1.65 * sqrt(p̂i(1-p̂i)/ni) / (1 + log(|Si|))

where |Si| is the length of the affix or ending of the ith rule. When the length of the affix or ending is 1, the estimation error is not changed, since log(1) is 0. For the rules with the affix or ending length of 2 the estimation error is reduced by 1 + log(2) = 1.3, for the length 3 this will be 1 + log(3) = 1.48, etc. The longer the length the smaller the sample which will be considered representative enough for a confident rule estimation. Setting the threshold θs at a certain level lets only the rules whose score is higher than the threshold be included into the final rule-sets. The method for setting up this threshold is based on empirical evaluations of the rule-sets and is described in Section 3.

2.3 Rule Merging Phase

Rules which have scored lower than the threshold θs can be merged into more general rules which, if scored above the threshold, are also included into the final rule-sets. We can merge two rules which have scored below the threshold and have the same affix (or ending) and the same initial class (I).3 The score of the resulting rule will be higher than the scores of the merged rules since the number of positive observations increases and the number of the trials remains the same. After a successful application of the merging, the resulting rule substitutes the two merged ones. To perform such rule-merging over a rule-set, first, the rules which have not been included into the final set are sorted by their score and best-scored rules are merged first. This is done recursively until the score of the resulting rule exceeds the threshold, in which case it is added to the final rule-set. This process is applied until no merges can be done to the rules which have scored below the threshold.

3 For ending-guessing rules this is always true, so only the ending itself counts.
3 Direct Evaluation Stage

There are two important questions which arise at the rule acquisition stage -- how to choose the scoring threshold θs, and what is the performance of the rule-sets produced with different thresholds. The task of assigning a set of POS-tags to a word is actually quite similar to the task of document categorisation, where a document should be assigned with a set of descriptors which represent its contents. The performance of such assignment can be measured in:

recall - the percentage of POSs which were assigned correctly by the guesser to a word;

precision - the percentage of POSs the guesser assigned correctly over the total number of POSs it assigned to the word;

coverage - the proportion of words which the guesser was able to classify, but not necessarily correctly.

In our experiments we measured word precision and word recall (micro-average). There were two types of data in use at this stage. First, we evaluated the guessing rules against the actual lexicon: every word from the lexicon, except for closed-class words and words shorter than five characters,4 was guessed by the different guessing strategies and the results were compared with the information the word had in the lexicon. In the other evaluation experiment we measured the performance of the guessing rules against the training corpus. For every word we computed its metrics exactly as in the previous experiment. Then we multiplied these results by the corpus frequency of this particular word and averaged them. Thus the most frequent words had the greatest influence on the aggregate measures.

4 The actual size of the filtered lexicon was 47,659 entries out of 53,015 entries of the original lexicon.

First, we concentrated on finding the best thresholds θs for the rule-sets. To do that, for each rule-set produced using different thresholds we recorded the three metrics and chose the set with the best aggregate. In Table 1 some results of that experiment are shown. The best thresholds were detected: for ending rules - 75 points, for suffix rules - 60, and for prefix rules - 80. One can notice a slight difference in the results obtained over the lexicon and the corpus. The corpus results are better because the training technique explicitly targeted the rule-sets to the most frequent cases of the corpus rather than the lexicon. On average, ending-guessing rules were detected to cover over 96% of the unknown words. The precision of 74% can be roughly interpreted as follows: for words which take on three different POSs in their POS-class, the ending-guessing rules will assign four, but in 95% of the times (recall) the three required POSs will be among the four assigned by the guess. In comparison with the Xerox word-ending guesser taken as the base-line model we detect a substantial increase in the precision by about 22% and a welcome increase in coverage by about 6%. This means that the Xerox guesser creates more ambiguity for the disambiguator, assigning five instead of three POSs in the example above. It can also handle 6% fewer unknown words, which, in fact, might decrease its performance even lower. In comparison with the ending-guessing rules, the morphological rules have much better precision and hence better accuracy of guessing. Virtually every word which can be guessed by the morphological rules is guessed exactly correctly (97% recall and 97% precision). Not surprisingly, the coverage of morphological rules is much lower than that of the ending-guessing ones - for the suffix rules it is less than 40% and for the prefix rules about 5-6%.
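The word-level metrics defined above are straightforward to compute. The sketch below follows the evaluation procedure described in this section (per-word precision and recall, optionally weighted by corpus frequency and averaged); the guesser and lexicon interfaces are our own assumptions.

def guess_metrics(guessed, true):
    # Precision and recall of one guess: `guessed` and `true` are
    # sets of POS tags (the guess and the lexicon's POS-set).
    hit = len(guessed & true)
    return hit / len(guessed), hit / len(true)

def average_metrics(words, guesser, lexicon, freq=None):
    # Average precision, recall and coverage over a word list.
    # `guesser(word)` returns a POS-set, or None when no rule applies;
    # with `freq` given, each word is weighted by its corpus frequency,
    # as in the corpus-based evaluation described above.
    p = r = covered = total = 0.0
    for w in words:
        weight = freq.get(w, 0) if freq else 1.0
        total += weight
        guess = guesser(w)
        if guess is None:
            continue
        covered += weight
        prec, rec = guess_metrics(guess, lexicon[w])
        p += weight * prec
        r += weight * rec
    return p / covered, r / covered, covered / total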
After obtaining the optimal rule-sets, we performed the same experiment on a word-sample which was not included in the training lexicon and corpus. We gathered about three thousand words from the lexicon developed for the Wall Street Journal corpus [5] and collected the frequencies of these words in that corpus. In this experiment we obtained similar metrics, apart from the coverage, which dropped by about 0.5% for the Ending 75 and Xerox rule-sets and by 7% for the Suffix 60 rule-set. This did not come as a surprise, since many main forms required by the suffix rules were missing from the lexicon.

In the next experiment we evaluated whether the morphological rules add any improvement when they are used in conjunction with the ending-guessing rules. We also evaluated in detail whether a conjunctive application with the Xerox guesser would boost the performance. As in the previous experiment, we measured the precision, recall and coverage both on the lexicon and on the corpus. Table 2 demonstrates some results of this experiment. The first part of the table shows that when the Xerox guesser is applied before the E75 guesser we measure a drop in performance; when the Xerox guesser is applied after the E75 guesser, no significant changes in performance are noticed. This actually proves that the E75 rule-set fully supersedes the Xerox rule-set. The second part of the table shows that the cascading application of the morphological rule-sets together with the ending-guessing rules increases the overall precision of the guessing by a further 5%. This makes the improvements against the base-line Xerox guesser 28% in precision and 7% in coverage.

[5] These words were not listed in the training lexicon.
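The cascades evaluated in Table 2 could be wired up as below: the most accurate rule-sets fire first, and later ones only see words the earlier ones could not classify. The rule-set objects (prefix80, suffix60, ending75) are hypothetical placeholders.

```python
def cascade(*rule_sets):
    """Compose guessers so that each word is handled by the first
    rule-set that can classify it (e.g. P80, then S60, then E75)."""
    def guess(word):
        for rules in rule_sets:
            pos_set = rules(word)   # each rule-set returns a POS-set or None
            if pos_set:
                return pos_set
        return None
    return guess

# e.g. the best-performing combination from Table 2:
# p80_s60_e75 = cascade(prefix80, suffix60, ending75)
```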
4 Tagging Unknown Words

The direct evaluation of the rule-sets gave us the grounds for the comparison and selection of the best performing guessing rule-sets. The task of unknown word guessing is, however, a subtask of the overall part-of-speech tagging process. Thus we are mostly interested in how the advantage of one rule-set over another will affect the tagging performance. So we performed an independent evaluation of the impact of the word guessers on tagging accuracy. In this evaluation we tried two different taggers. First, we used a tagger which was a C++ re-implementation of the LISP-implemented HMM Xerox tagger described in (Kupiec, 1992). The other tagger was the rule-based tagger of Brill (Brill, 1995). Both taggers come with data and word-guessing components pre-trained on the Brown Corpus [6]. This gave us a search space of four combinations: the Xerox tagger equipped with the original Xerox guesser, Brill's tagger with its original guesser, the Xerox tagger with our cascading P80+S60+E75 guesser, and Brill's tagger with the cascading guesser. For words which failed to be guessed by the guessing rules we applied the standard method of classifying them as common nouns (NN) if they are not capitalised inside a sentence and as proper nouns (NP) otherwise. As the base-line result we measured the performance of the taggers with all words known, on the same word sample.

In the evaluation of tagging accuracy on unknown words we pay attention to two metrics. First, we measure the accuracy of tagging solely on unknown words:

$$UnknownScore = \frac{CorrectlyTaggedUnknownWords}{TotalUnknownWords}$$

This metric gives us the exact measure of how the tagger has done on unknown words. In this case, however, we do not account for the known words which were mis-tagged because of the guessers. To put that aspect into perspective, we also measure the overall tagging performance:

$$TotalScore = \frac{CorrectlyTaggedWords}{TotalWords}$$

Since the Brown Corpus model is a general language model, it, in principle, does not put restrictions on the type of text it can be used for, although its performance might be slightly lower than that of a model specialised for a particular sublanguage. Here we want to stress that our primary task was not to evaluate the taggers themselves but rather their performance with the word-guessing modules, so we did not worry too much about tuning the taggers for the texts and used the Brown Corpus model instead. We tagged several texts of different origins, none taken from the Brown Corpus. These texts were not seen at the training phase, which means that neither the taggers nor the guessers had been trained on them, and they naturally had words unknown to the lexicon. For each text we performed two tagging experiments. In the first experiment we tagged the text with the Brown Corpus lexicon supplied with the taggers and hence had only those unknown words which naturally occur in this text. In the second experiment we tagged the same text with a lexicon which contained only closed-class [7] and short [8] words. This small lexicon contained only 5,456 entries out of the 53,015 entries of the original Brown Corpus lexicon. All other words were considered unknown and had to be guessed by the guessers.

We obtained quite stable results in these experiments. Here is a typical example of tagging a text of 5,970 words. This text was detected to have 347 unknown words. First, we tagged the text with the four different combinations of the taggers and the word-guessers, using the full-fledged lexicon. The results of this tagging are summarised in Table 3. When using the Xerox tagger with its original guesser, 63 unknown words were incorrectly tagged and the accuracy on the unknown words was measured at 81.8%. When the Xerox tagger was equipped with our cascading guesser, its accuracy on unknown words increased by almost 9%, up to 90.5%. The same situation was detected with Brill's tagger, which in general was slightly more accurate than the Xerox one [9]. The cascading guesser performed better than Brill's original guesser by about 8%, boosting the performance on the unknown words from 84.5% [10] to 92.2%. The accuracy of the taggers on the set of 347 unknown words when they were made known to the lexicon was detected at 98.5% for both taggers.

In the second experiment we tagged the same text in the same way but with the small lexicon. Out of the 5,970 words of the text, 2,215 were unknown to the small lexicon. The results of this tagging are summarised in Table 4. The accuracy of the taggers on the 2,215 unknown words when they were made known to the lexicon was much lower than in the previous experiment: 90.3% for the Xerox tagger and 91.5% for Brill's tagger. Naturally, the performance of the guessers was also lower than in the previous experiment, in addition to the fact that many "semi-closed" class adverbs like "however", "instead", etc., were missing from the small lexicon. The accuracy of the tagging on unknown words dropped by about 5% in general. The best results on unknown words were again obtained with the cascading guesser (86%-87.45%), and Brill's tagger again did better than the Xerox one, by 1.5%.

[6] Since Brill's tagger was trained on the Penn tag-set (Marcus et al., 1993), we provided an additional mapping.
[7] Articles, prepositions, conjunctions, etc.
[8] Shorter than 5 characters.
[9] This, however, was not an entirely fair comparison because of the differences between the tag-sets in use by the taggers. The Xerox tagger was trained on the original Brown Corpus tag-set, which makes more distinctions between categories than the Penn Brown Corpus tag-set.
[10] This figure agrees with the 85% quoted by Brill (Brill, 1994).
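Putting the guess-or-fallback step and the two scores together gives the following sketch; the helper names and data layout are assumptions.

```python
def tag_class(word, lexicon, guesser, sentence_initial=False):
    """POS-class for a word: lexicon lookup, then the cascading
    guesser, then the standard NN/NP fallback described above."""
    if word in lexicon:
        return lexicon[word]
    guess = guesser(word)
    if guess:
        return guess
    capitalised = word[:1].isupper() and not sentence_initial
    return {"NP"} if capitalised else {"NN"}

def tagging_scores(predicted, gold, unknown_flags):
    """predicted/gold: per-token tags; unknown_flags: True where the
    token was unknown to the lexicon."""
    total_ok = sum(p == g for p, g in zip(predicted, gold))
    unk = [(p, g) for p, g, u in zip(predicted, gold, unknown_flags) if u]
    unk_ok = sum(p == g for p, g in unk)
    return total_ok / len(gold), unk_ok / len(unk)  # TotalScore, UnknownScore
```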
Two types of mis-taggings caused by the guessers occurred. The first type is when a guesser provided a broader POS-class for an unknown word and the tagger had difficulties with the disambiguation of such a broader class. This is especially the case with the "ing" words, which, in general, can act as nouns, adjectives and gerunds, and only direct lexicalization can restrict the search space, as in the case of the word "going", which cannot be an adjective but only a noun or a gerund. The second type of mis-tagging was caused by wrong assignments of POSs by the guesser. Usually this is the case with irregular words like, for example, "cattle", which was wrongly guessed as a singular noun (NN) but in fact is a plural noun (NNS).

5 Discussion and Conclusion

We presented a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for words unknown to the lexicon. This technique does not require specially prepared training data, and uses for training the lexicon and word frequencies collected from a raw corpus. Using these training data, three types of guessing rules are learned: prefix morphological rules, suffix morphological rules and ending-guessing rules. To select the best performing guessing rule-sets we suggested an evaluation methodology which is solely dedicated to the performance of part-of-speech guessers.

Evaluation of tagging accuracy on unknown words, using texts unseen by the guessers and the taggers at the training phase, showed that tagging with the automatically induced cascading guesser was consistently more accurate than previously quoted results known to the author (85%). The cascading guesser outperformed the guesser supplied with the Xerox tagger by about 8-9% and the guesser supplied with Brill's tagger by about 6-7%. Tagging accuracy on unknown words using the cascading guesser was detected at 90-92% when tagging with the full-fledged lexicon and at 86-88% when tagging with the closed-class and short-word lexicon. When the unknown words were made known to the lexicon, the accuracy of tagging was detected at 96-98% and 90-92% respectively. This makes the accuracy drop caused by the cascading guesser less than 6% in general. Another important conclusion from the evaluation experiments is that the morphological guessing rules do improve the guessing performance. Since they are more accurate than ending-guessing rules, they are applied before the ending-guessing rules and improve the precision of the guesses by about 5%. This, in turn, results in about 2% higher accuracy of tagging on unknown words.

The acquired guessing rules employed in our cascading guesser are, in fact, of a standard nature and in one form or another are used in other POS-guessers. There are, however, a few points which make the rule-sets acquired by the technique presented here more accurate:

• the learning of such rules is done from the lexicon rather than from a tagged corpus, because the guesser's task is akin to lexicon lookup;
• there is a well-tuned statistical scoring procedure which accounts for rule features and frequency distribution;
• there is an empirical way to determine an optimal collection of rules, since acquired rules are subject to rigorous direct evaluation in terms of precision, recall and coverage;
• rules are applied cascadingly, using the most accurate rules first.

One of the most important issues in the induction of guessing rule-sets is the choice of the right data for training. In our approach, guessing rules are extracted from the lexicon, and the actual corpus frequencies of word usage then allow for discrimination between rules which are no longer productive (but have left their imprint on the basic lexicon) and rules that are productive in real-life texts. Thus the major factor in the learning process is the lexicon. Since guessing rules are meant to capture general language regularities, the lexicon should be as general as possible (listing all possible POSs for a word) and as large as possible. The corresponding corpus should include most of the words from the lexicon and be large enough to obtain reliable estimates of the word-frequency distribution. Our experiments with the lexicon and word frequencies derived from the Brown Corpus, which can be considered a general model of English, resulted in guessing rule-sets which proved to be domain and corpus independent [11], producing similar results on test texts of different origins.

Although in general the performance of the cascading guesser is only 6% worse than the lookup of a general language lexicon, there is room for improvement. First, in the extraction of the morphological rules we did not attempt to model non-concatenative cases. In English, however, since most letter mutations occur at the last letter of the main word, it is possible to account for this. So our next goal is to extract morphological rules with one-letter mutations at the end. This would account for cases like "try - tries", "reduce - reducing", "advise - advisable". We expect it to increase the coverage of the suffix morphological rules and hence contribute to the overall guessing accuracy. Another avenue for improvement is to provide the guessing rules with the probabilities of emission of POSs from their resulting POS-classes. This information can be compiled automatically and might also improve the accuracy of tagging unknown words.

The described rule acquisition and evaluation methods are implemented as a modular set of C++ and AWK tools, and the guesser is easily extendable to sublanguage-specific regularities and retrainable to new tag-sets and other languages, provided that these languages have affixational morphology. Both the software and the produced guessing rule-sets are available by contacting the author.

[11] But tag-set dependent.

6 Acknowledgements

Some of the research reported here was funded as part of EPSRC project IED4/1/5808 "Integrated Language Database". I would also like to thank Chris Brew for helpful discussions on the issues related to this paper.

References

E. Brill. 1994. Some Advances in Transformation-Based Part of Speech Tagging. In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), Seattle, WA.
E. Brill. 1995. Transformation-based error-driven learning and Natural Language processing: a case study in part-of-speech tagging. Computational Linguistics, 21(4), pp. 543-565.
W. Francis and H. Kucera. 1982. Frequency Analysis of English Usage. Houghton Mifflin, Boston.
J. Kupiec. 1992. Robust Part-of-Speech Tagging Using a Hidden Markov Model. Computer Speech and Language.
M. Marcus, M.A. Marcinkiewicz, and B. Santorini. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2), pp. 313-329.
H. Schmid. 1994. Part of Speech Tagging with Neural Networks. In Proceedings of the International Conference on Computational Linguistics, pp. 172-176, Kyoto, Japan.
E. Tzoukermann, D.R. Radev, and W.A. Gale. 1995. Combining Linguistic Knowledge and Statistical Learning in French Part of Speech Tagging. In EACL SIGDAT Workshop, pp. 51-59, Dublin, Ireland.
A. Voutilainen. 1995. A Syntax-Based Part-of-Speech Analyser. In Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics (EACL), pp. 157-164, Dublin, Ireland.
R. Weischedel, M. Meteer, R. Schwartz, L. Ramshaw, and J. Palmucci. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19(2), pp. 359-382.
Byoung-Tak Zhang and Yung-Taek Kim. 1990. Morphological Analysis and Synthesis by Automated Discovery and Acquisition of Linguistic Rules. In Proceedings of the 13th International Conference on Computational Linguistics, pp. 431-435, Helsinki, Finland.

             Recall                Precision             Coverage
           Lexicon    Corpus     Lexicon    Corpus     Lexicon    Corpus
Xerox      0.956313   0.944526   0.460761   0.523965   0.917698   0.893275
Ending 75  0.945726   0.952016   0.675122   0.745339   0.977089   0.961040
Suffix 60  0.957610   0.973520   0.919796   0.979351   0.375970   0.320996
Prefix 80  0.955748   0.978515   0.922534   0.977633   0.049558   0.058372

Table 1: Results obtained at the evaluation of the acquired rule-sets over the training lexicon and the training corpus. Guessing rule-sets produced using different confidence thresholds were compared. Best-scored rule-sets detected: Prefix 80 - prefix morphological rules which scored over 80 points, Suffix 60 - suffix morphological rules which scored over 60 points, and Ending 75 - ending-guessing rules which scored over 75 points. As the base-line model we took the ending guesser developed by Xerox (X).

                      Lexicon                            Corpus
Guessing Strategy   Precision   Recall     Coverage    Precision   Recall     Coverage
Xerox (X)           0.460761    0.956331   0.917698    0.523965    0.944526   0.893275
Ending 75 (E75)     0.675122    0.945726   0.977089    0.745339    0.952016   0.961040
X+E75               0.470249    0.957830   0.989843    0.519715    0.949789   0.969023
E75+X               0.670741    0.943319   0.989843    0.743932    0.951541   0.969023
P80+E75             0.687126    0.946208   0.977488    0.748922    0.951563   0.961040
S60+E75             0.734143    0.945015   0.979686    0.792901    0.951015   0.963289
P80+S60+E75         0.745504    0.945445   0.980086    0.796252    0.950562   0.963289

Table 2: Results of the cascading application of the rule-sets over the training lexicon and training corpus. P80 - prefix rule-set scored over 80 points, S60 - suffix rule-set scored over 60 points, E75 - ending-guessing rule-set scored over 75 points. As the base-line model we took the ending guesser developed by Xerox (X). The first part of the table shows that the E75 rule-set outperforms and fully supersedes the Xerox rule-set. The second part shows that the cascading application of the morphological rule-sets together with the ending-guessing rules increases the performance by about 5% in precision.

Tagger   Guessing       Total    Unkn.   Total      Unkn.      Total    Unkn.
         strategy       words    words   mistag.    mistag.    Score    Score
Xerox    Xerox          5,970    347     324        63         94.3%    81.8%
Xerox    P80+S60+E75    5,970    347     292        33         95.1%    90.5%
Brill    Brill          5,970    347     246        54         95.9%    84.5%
Brill    P80+S60+E75    5,970    347     219        27         96.3%    92.2%

Table 3: Results of tagging a text with 347 unknown words by four different combinations of two taggers and three word-guessing modules, using the Brown Corpus model. The accuracy of tagging the unknown words when they were made known to the lexicon was detected at 98.5% for both taggers.

Tagger   Guessing       Total    Unkn.   Total      Unkn.      Total     Unkn.
         strategy       words    words   mistag.    mistag.    Score     Score
Xerox    Xerox          5,970    2,215   556        516        90.7%     76.7%
Xerox    P80+S60+E75    5,970    2,215   332        309        94.44%    86.05%
Brill    Brill          5,970    2,215   464        410        93.1%     81.5%
Brill    P80+S60+E75    5,970    2,215   327        287        94.52%    87.45%

Table 4: Results of tagging the same text as in Table 3 by four different combinations of two taggers and three word-guessing modules, using the Brown Corpus model and the lexicon which contained only closed-class and short words. The accuracy of tagging the unknown words when they were made known to the lexicon was detected at 90.3% for the Xerox tagger and at 91.5% for Brill's tagger.
Linguistic Structure as Composition and Perturbation

Carl de Marcken
MIT AI Laboratory, NE43-769
545 Technology Square
Cambridge, MA, 02139, USA
[email protected]

Abstract

This paper discusses the problem of learning language from unprocessed text and speech signals, concentrating on the problem of learning a lexicon. In particular, it argues for a representation of language in which linguistic parameters like words are built by perturbing a composition of existing parameters. The power of the representation is demonstrated by several examples in text segmentation and compression, acquisition of a lexicon from raw speech, and the acquisition of mappings between text and artificial representations of meaning.

1 Motivation

Language is a robust and necessarily redundant communication mechanism. Its redundancies commonly manifest themselves as predictable patterns in speech and text signals, and it is largely these patterns that enable text and speech compression. Naturally, many patterns in text and speech reflect interesting properties of language. For example, the is both an unusually frequent sequence of letters and an English word. This suggests using compression as a means of acquiring underlying properties of language from surface signals. The general methodology of language-learning-by-compression is not new. Some notable early proponents included Chomsky (1955), Solomonoff (1960) and Harris (1968), and compression has been used as the basis for a wide variety of computer programs that attack unsupervised learning in language; see (Olivier, 1968; Wolff, 1982; Ellison, 1992; Stolcke, 1994; Chen, 1995; Cartwright and Brent, 1994) among others.

1.1 Patterns and Language

Unfortunately, while surface patterns often reflect interesting linguistic mechanisms and parameters, they do not always do so. Three classes of examples serve to illustrate this.

1.1.1 Extralinguistic Patterns

The sequence it was a dark and stormy night is a pattern in the sense that it occurs in text far more frequently than the frequencies of its letters would suggest, but that does not make it a lexical or grammatical primitive: it is the product of a complex mixture of linguistic and extra-linguistic processes. Such patterns can be indistinguishable from desired ones. For example, in the Brown corpus (Francis and Kucera, 1982) scratching her nose occurs 5 times, a corpus-specific idiosyncrasy. This phrase has the same structure as the idiom kicking the bucket. It is difficult to imagine any induction algorithm learning kicking the bucket from this corpus without also (mistakenly) learning scratching her nose.

1.1.2 The Definition of Interesting

This discussion presumes there is a set of desired patterns to extract from input signals. What is this set? For example, is kicking the bucket a proper lexical unit? The answer depends on factors external to the unsupervised learning framework. For the purposes of machine translation or information retrieval this sequence is an important idiom, but with respect to speech recognition it is unremarkable. Similar questions could be asked of subword units like syllables. Plainly, the answer depends on the learning context, and not on the signal itself.

1.1.3 The Definition of Pattern

Any statistical definition of pattern depends on an underlying model. For instance, the sequence the dog occurs much more frequently than one would expect given an independence assumption about letters. But for a model with knowledge of syntax and word frequencies, there is nothing remarkable about the phrase. Since all existing models have flaws, patterns will always be learned that are artifacts of imperfections in the learning algorithm.

These examples seem to imply that unsupervised induction will never converge to ideal grammars and lexicons. While there is truth to this, the rest of this paper describes a representation of language that bypasses many of the apparent difficulties.

[Figure 1: A compositional representation. The phrase [national football league] decomposes into the words [national], [football] and [league], each of which is further decomposed in the lexicon, bottoming out in the terminal characters.]

[Figure 2: A coding of the first few words of a hypothetical lexicon: each entry pairs a code with a length and pointers to its component words. The first two columns can be coded succinctly, leaving the cost of pointers to component words as the dominant cost of both the lexicon and the representation of the input.]

2 A Compositional Representation

The examples in sections 1.1.1 and 1.1.2 seem to imply that any unsupervised language learning program that returns only one segmentation of the input is bound to make many mistakes. And section 1.1.3 implies that decisions about linguistic units must be made relative to their representations. Both problems can be solved if linguistic units (for now, words in the lexicon) are built by composition of other units. For example, kicking the bucket might be built by composing kicking, the and bucket [1]. Of course, if a word is merely the composition of its parts, there is nothing interesting about it and no reason to include it in the lexicon. So the motivation for including a word in the lexicon must be that it functions differently from its parts. Thus a word is a perturbation of a composition. In the case of kicking the bucket the perturbation is one of both meaning and frequency. For scratching her nose the perturbation may just be frequency [2].

[1] A simple composition operator is concatenation, but in section 6 a more interesting one is discussed.
[2] Naturally, an unsupervised learning algorithm with no access to meaning will not treat them differently.

This is a very natural representation from the viewpoint of language. It correctly predicts that both phrases inherit their sound and syntax from their component words. At the same time it leaves open the possibility that idiosyncratic information will be attached to the whole, as with the meaning of kicking the bucket. This structure is very much like the class hierarchy of a modern programming language. It is not the same thing as a context-free grammar, since each word does not act in the same way as the default composition of its components.

Figure 1 illustrates a recursive decomposition (under concatenation) of the phrase national football league. The phrase is broken into three words, each of which is also decomposed in the lexicon. This process bottoms out in the terminal characters. This is a real decomposition achieved by a program described in section 4. Not shown are the perturbations (in this case merely frequency changes) that distinguish each word from its parts.
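As a sketch of the data structure this section describes (the field names are hypothetical, and only the frequency perturbation of the simple instantiation used later is modelled):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Word:
    """A lexical entry as a perturbed composition."""
    components: List["Word"] = field(default_factory=list)  # empty for terminals
    surface: str = ""      # terminal character, if any
    count: float = 0.0     # perturbed frequency, not derived from the parts

    def spell(self) -> str:
        # Composition by concatenation: the default behaviour of the whole.
        if not self.components:
            return self.surface
        return "".join(c.spell() for c in self.components)
```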
This general framework extends to other perturbations. For example, the word wanna is naturally thought of as a composition of want and to with a sound change. And in speech, the three different words to, two and too may well inherit the sound of a common ancestor while introducing new syntactic and semantic properties.

2.1 Coding

Of course, for this representation to be more than an intuition, both the composition and perturbation operators must be exactly specified. In particular, a code must be designed that enables a word (or a sentence) to be expressed in terms of its parts. As a simple example, suppose that the composition operator is concatenation, that terminals are characters, and that the only perturbation operator is the ability to express the frequency of a word independently of the frequency of its parts. Then to code either a sentence of the input or a (nonterminal) word in the lexicon, the number of component words in the representation must be written, followed by a code for each component word. Naturally, each word in the lexicon must be associated with its code, and under a near-optimal coding scheme like a Huffman code, the code length will be related to the frequency of the word. Thus, associating a word with a code substitutes for writing down the frequency of the word. Furthermore, if words are written down in order of decreasing frequency, a Huffman code for a large lexicon can be specified using a negligible number of bits. This and the near-negligible cost of writing down word lengths will not be discussed further. Figure 2 presents a portion of an encoding of a hypothetical lexicon.

2.2 MDL

Given a coding scheme and a particular lexicon (and a parsing algorithm), it is in theory possible to calculate the minimum-length encoding of a given input. Part of the encoding will be devoted to the lexicon, the rest to representing the input in terms of the lexicon. The lexicon that minimizes the combined description length of the lexicon and the input maximally compresses the input. In the sense of Rissanen's minimum description-length (MDL) principle (Rissanen, 1978; Rissanen, 1989), this lexicon is the theory that best explains the data, and one can hope that the patterns in the lexicon reflect the underlying mechanisms and parameters of the language that generated the input.

2.3 Properties of the Representation

Representing words in the lexicon as perturbations of compositions has a number of desirable properties.

• The choice of composition and perturbation operators captures a particular detailed theory of language. They can be used, for instance, to reference sophisticated phonological and morphological mechanisms.
• The length of the description of a word is a measure of its linguistic plausibility, and can serve as a buffer against learning unnatural coincidences.
• Coincidences like scratching her nose do not exclude desired structure, since they are further broken down into components from which they inherit properties.
• Structure is shared: the words blackbird and blackberry can share the common substructure associated with black, such as its sound and meaning. As a consequence, data is pooled for estimation, and representations are compact.
• Common irregular forms are compiled out. For example, if went is represented in terms of go (presumably to save the cost of unnecessarily reproducing syntactic and semantic properties), the complex sound change need only be represented once, not every time went is used.
• Since parameters (words) have compact representations, they are cheap from a description-length standpoint, and many can be included in the lexicon. This allows learning algorithms to fit detailed statistical properties of the data.
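The cost accounting behind sections 2.1-2.2 can be sketched directly: under a near-optimal code, a pointer to word w costs about -log2 p(w) bits. This is a simplified sketch that ignores the near-negligible length fields and any perturbation costs.

```python
import math

def pointer_bits(parse, prob):
    """Bits to write a sequence of words (a sentence, or one lexical
    entry's component list) as pointers into the lexicon."""
    return sum(-math.log2(prob[w]) for w in parse)

def total_description_length(lexicon_parses, input_parses, prob):
    # The quantity the search of section 3 tries to minimize:
    # the cost of the lexicon plus the cost of the input in its terms.
    return (sum(pointer_bits(p, prob) for p in lexicon_parses) +
            sum(pointer_bits(p, prob) for p in input_parses))
```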
This coding scheme is very similar to that found in popular dictionary-based compression schemes like LZ78 (Ziv and Lempel, 1978). It is capable of compressing a sequence of identical characters of length n to size O(log n). However, in contrast to compression schemes like LZ78 that use deterministic rules to add parameters to the dictionary (and do not arrive at linguistically plausible parameters), it is possible to perform more sophisticated searches in this representation.

    Start with a lexicon of terminals.
    Iterate
        Iterate (EM)
            Parse input and words using current lexicon.
            Use word counts to update frequencies.
        Add words to the lexicon.
        Iterate (EM)
            Parse input and words using current lexicon.
            Use word counts to update frequencies.
        Delete words from the lexicon.

Figure 3: An iterative search algorithm. Two iterations of the inner loops are usually sufficient for convergence, and for the tests described in this paper, after 10 iterations of the outer loop there is little change in the lexicon in terms of either compression performance or structure.

3 A Search Algorithm

Since the class of possible lexicons is infinite, the minimization of description length is necessarily heuristic. Given a fixed lexicon, the expectation-maximization algorithm (Dempster et al., 1977) can be used to arrive at a (locally) optimal set of frequencies and codelengths for the words in the lexicon. For composition by concatenation, the algorithm reduces to the special case of the Baum-Welch procedure (Baum et al., 1970) discussed in (Deligne and Bimbot, 1995). In general, however, the parsing and reestimation involved in EM can be considerably more complicated. To update the structure of the lexicon, words can be added to or deleted from it if this is predicted to reduce the description length of the input. This algorithm is summarized in figure 3 [3].

3.1 Adding and Deleting Words

For words to be added to the lexicon, two things are needed. The first is a means of hypothesizing candidate new words. The second is a means of evaluating candidates. One reasonable means of generating candidates is to look at pairs (or triples) of words that are composed in the parses of words and sentences of the input. Since words are built by composing other words and act like their composition, a new word can be created from such a pair and substituted in place of the pair wherever the pair appears. For example, if water and melon are frequently composed, then a good candidate for a new word is water ∘ melon = watermelon, where ∘ is the concatenation operator.

[3] For the composition operators and test sets we have looked at, using single (Viterbi) parses produces almost exactly the same results (in terms of both compression and lexical structure) as summing probabilities over multiple parses.
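The outer search of Figure 3, sketched with hypothetical helper names:

```python
def learn_lexicon(corpus, outer=10, inner=2):
    lexicon = initial_terminals(corpus)        # start with single characters
    for _ in range(outer):
        for _ in range(inner):
            em_reestimate(lexicon, corpus)     # parse, count, update frequencies
        add_candidate_words(lexicon, corpus)   # e.g. frequently composed pairs
        for _ in range(inner):
            em_reestimate(lexicon, corpus)
        delete_unhelpful_words(lexicon, corpus)
    return lexicon
```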
In order to evaluate whether the addition of such a new word is likely to reduce the description length of the input, it is necessary to record during the EM step the extra statistics of how many times the composed pairs occur in parses. The effect on description length of adding a new word cannot be exactly computed. Its addition will not only affect other words, but may also cause other words to be added or deleted. Furthermore, it is more computationally efficient to add and delete many words simultaneously, and this complicates the estimation of the change in description length.

Fortunately, simple approximations of the change are adequate. For example, if Viterbi analyses are being used, then the new word watermelon will completely take the place of all compositions of water and melon. This reduces the counts of water and melon accordingly, though they are each used once in the representation of watermelon. If it is assumed that no other word counts change, these assumptions allow one to predict the counts and probabilities of all words after the change. Since the codelength of a word w with probability p(w) is approximately -log p(w), the total estimated change in description length of adding a new word W to a lexicon L is

$$\Delta = -c'(W)\log p'(W) + \mathrm{d.l.}(changes) + \sum_{w}\big({-c'(w)\log p'(w)} + c(w)\log p(w)\big)$$

where c(w) is the count of the word w, primes indicate counts and probabilities after the change, and d.l.(changes) represents the cost of writing down the perturbations involved in the representation of W. If Δ < 0 the word is predicted to reduce the total description length and is added to the lexicon. Similar heuristics can be used to estimate the benefit of deleting words [4].

3.2 Search Properties

A significant source of problems in traditional grammar induction techniques is local minima (de Marcken, 1995a; Pereira and Schabes, 1992; Carroll and Charniak, 1992). The search algorithm described above avoids many of these problems. The reason is that hidden structure is largely a "compile-time" phenomenon. During parsing, all that is important about a word is its surface form and codelength. The internal representation does not matter. Therefore, the internal representation is free to reorganize at any time; it has been decoupled. This allows structure to be built bottom-up, or for structure to emerge inside already existing parameters. Furthermore, since parameters (words) encode surface patterns, it is relatively easy to determine when they are useful, and their use is limited. They usually do not have competing roles, in contrast, for instance, to hidden nodes in neural networks. And since there is no fixed number of parameters, when words do start to have multiple disparate uses, they can be split, with common substructure shared. Finally, since add and delete cycles can compensate for initial mistakes, inexact heuristics can be used for adding and deleting words.

[4] See (de Marcken, 1995b) for more detailed discussion of these estimations. The actual formulas used in the tests presented in this paper are slightly more complicated than presented here.
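Computed literally, the estimate above looks like the following sketch; the bookkeeping of which words count as affected is an assumption, and the paper notes its actual formulas are slightly more complicated.

```python
import math

def add_word_delta(W, perturbation_bits, affected,
                   count, prob, new_count, new_prob):
    """Estimated change in description length from adding word W.
    affected: words (e.g. W's components) whose counts shift."""
    change = -new_count[W] * math.log2(new_prob[W]) + perturbation_bits
    for w in affected:
        change += (-new_count[w] * math.log2(new_prob[w])
                   + count[w] * math.log2(prob[w]))
    return change  # add W only if this comes out negative
```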
4 Concatenation Results

The simplest reasonable instantiation of the composition-and-perturbation framework is with the concatenation operator and frequency perturbation. This instantiation is easily tested on problems of text segmentation and compression. Given a text document, the search algorithm can be used to learn a lexicon that minimizes its description length. For testing purposes, spaces are removed from the input text, and true words are defined to be the minimal sequences bordered by spaces in the original input. The search algorithm parses the input as it compresses it, and can therefore output a segmentation of the input in terms of words drawn from the lexicon. These words are themselves decomposed in the lexicon, and can be considered to form a tree that terminates in the characters of the sentence. This tree can have no more than O(n) nodes for a sentence with n characters, though there are O(n²) possible "true words" in the input sentence; thus, the tree contains considerable information. Define recall to be the percentage of true words that occur at some level of the segmentation tree. Define crossing-brackets to be the percentage of true words that violate the segmentation-tree structure [5].

The search algorithm was applied to two texts: a lowercase version of the million-word Brown corpus with spaces and punctuation removed, and 4 million characters of Chinese news articles in a two-byte-per-character format. In the case of Chinese, which contains no inherent separators like spaces, segmentation performance is measured relative to another computer segmentation program that had access to a (human-created) lexicon. The algorithm was given the raw encoding and had to deduce the internal two-byte structure. In the case of the Brown corpus, word recall was 90.5% and crossing-brackets was 1.7%. For the Chinese, word recall was 96.9% and crossing-brackets was 1.3%. In both cases, most of the unfound words were words that occurred only once in the corpus. Thus, the algorithm has done an extremely good job of learning words and properly using them to segment the input. Furthermore, the crossing-brackets measure indicates that the algorithm has made very few clear mistakes. Of course, the hierarchical lexical representation does not make a commitment to which levels are "true words" and which are not; about 5 times more internal nodes exist than true words. Experiments in section 5 demonstrate that for most applications this is not only not a problem, but desirable. Figure 4 displays some of the lexicon learned from the Brown corpus.

Rank    Word
0       [s]
1       [the]
2       [and]
3       [a]
4       [of]
5       [in]
6       [to]
500     [students]
501     [material]
502     [tun]
503     [words]
504     [period]
505     [class]
506     [question]
5000    [[ling][them]]
5001    [[mon][k]]
5002    [[re][lax]]
5003    [[rig][id]]
5004    [[connect][ed]]
5005    [[i][k]]
5006    [[hu][t]]
26000   [[pleural][blood][supply]]
26001   [[anordinary][happy][family]]
26002   [[f][eas][ibility][of]]
26003   [[lunar][brightness][distribution]]
26004   [[primarily][diff][using]]
26005   [[sodium][tri][polyphosphate]]
26006   [[charcoal][broil][ed]]

Figure 4: Sections of the lexicon learned from the Brown corpus, ranked by frequency. The words in the less-frequent half are listed with their first-level decomposition. Word 5000 causes crossing-bracket violations, and words 26002 and 26006 have internal structure that causes recall violations.

The algorithm was also run as a compressor on a lower-case version of the Brown corpus with spaces and punctuation left in. All bits necessary for exactly reproducing the input were counted. Compression performance is 2.12 bits/char, significantly lower than popular algorithms like gzip (2.95 bits/char). This is the best text compression result on this corpus that we are aware of, and should not be confused with lower figures that do not include the cost of parameters. Furthermore, because the compressed text is stored in terms of linguistic units like words, it can be searched, indexed, and parsed without decompression.

[5] The true word moon in the input [the][moon] is a crossing-bracket violation of them in the segmentation tree [[th[em]][o[on]]].
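The two tree-based measures might be computed over character spans as follows; this is a sketch in which spans are (start, end) offsets and node_spans is assumed to hold one span per segmentation-tree node.

```python
def crosses(a, b):
    (s1, e1), (s2, e2) = a, b
    return s1 < s2 < e1 < e2 or s2 < s1 < e2 < e1

def segmentation_scores(node_spans, true_spans):
    """node_spans: set of spans of every node in the segmentation tree;
    true_spans: spans of the true (space-delimited) words."""
    recall = sum(t in node_spans for t in true_spans) / len(true_spans)
    crossing = sum(any(crosses(t, n) for n in node_spans)
                   for t in true_spans) / len(true_spans)
    return recall, crossing
```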
5 Learning Meanings

Unsupervised learning algorithms are rarely used in isolation. The goal of this work has been to explain how linguistic units like words can be learned, so that other processes can make use of these units. In this section a means of learning the mappings between words and artificial representations of meanings is described. The composition-and-perturbation framework encompasses this application neatly. Imagine that text utterances are paired with representations of meaning [6], and that the goal is to find the minimum-length description of both the text and the meaning. If there is mutual information between the meaning and text portions of the input, then better compression is achieved if the two streams are compressed simultaneously. If a text word can have some associated meaning, then writing down that word to account for some portion of text also accounts for some portion of the meaning of that text. The remaining meaning can be written down more succinctly. Thus, there is an incentive to associate meaning with sound, although of course the association pays a price in the description of the lexicon.

Although it is obviously a naive simplification, many of the interesting properties of the compositional representation surface even when meanings are treated as sets of arbitrary symbols. A word is now both a character sequence and a set of symbols. The composition operator concatenates the characters and unions the meaning symbols. Of course, there must be some way to alter the default meaning of a word. One way to do this is to explicitly write out any symbols that are present in the word's meaning but not in its components, or vice versa. Thus, the word red {RED} might be represented as r ∘ e ∘ d + RED. Given an existing word berry {BERRY}, the red berry cranberry {RED BERRY} can be represented as c ∘ r ∘ a ∘ n ∘ berry{BERRY} + RED.

5.1 Results

To test the algorithm's ability to infer word meanings, 10,000 utterances from an unsegmented textual database of mothers' speech to children were paired with representations of meaning, constructed by assigning a unique symbol to each root word in the vocabulary. For example, the sentence and what is he painting a picture of? is paired with the unordered meaning AND WHAT BE HE PAINT A PICTURE OF. In the first experiment, the algorithm received these pairs with no noise or ambiguity, using an encoding of meaning symbols such that each symbol's length was 10 bits. After 8 iterations of training without meaning and then a further 8 iterations with, the text sequences were parsed again without access to the true meaning. The meanings of the resulting word sequences were compared with the true meanings. Symbol accuracy was 98.9%; recall was 93.6%. Used to differentiate the true meaning from the meanings of the previous 20 sentences, the program selected correctly 89.1% of the time, or ranked the true meaning tied for first 10.8% of the time.

A second test was performed in which the algorithm received three possible meanings for each utterance: the true one and the meanings of the two surrounding utterances. A uniform prior was used. Symbol accuracy was again 98.9%; recall was 75.3%. The final lexicon includes extended phrases, but meanings tend to filter down to the proper level. For instance, although the words duck, ducks, the ducks and ducks drink all exist and contain the meaning DUCK, the symbol is only written into the description of duck. All others inherit it.

[6] This framework is easily extended to handle multiple ambiguous meanings (with and without priors) and noise, but these extensions will not be discussed here.
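A sketch of this meaning machinery, extending the earlier Word sketch with symbol sets; the plus/minus fields record the explicitly written perturbations, as in c ∘ r ∘ a ∘ n ∘ berry{BERRY} + RED.

```python
class MeaningfulWord:
    def __init__(self, components, plus=frozenset(), minus=frozenset()):
        self.components = components   # empty for terminal characters
        self.plus = plus               # symbols written explicitly into the entry
        self.minus = minus             # symbols explicitly removed

    def meaning(self):
        inherited = set()
        for c in self.components:
            inherited |= c.meaning()   # composition unions the symbol sets
        return (inherited | self.plus) - self.minus

# cranberry = c o r o a o n o berry{BERRY} + RED
# cranberry.meaning() == {"BERRY", "RED"}
```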
Similar results hold for similar experiments on the Brown corpus. For example, scratching her nose inherits its meaning completely from its parts, while kicking the bucket does not. This is exactly the result argued for in the motivation section of this paper, and it illustrates why occasional extra words in the lexicon are not a problem for most applications.

6 Other Applications and Current Work

We have performed other experiments using this representation and search algorithm, on tasks in unsupervised learning from speech and grammar induction. Figure 5 contains a small portion of a lexicon learned from 55,000 utterances of continuous speech by multiple speakers. The utterances are taken from dictated Wall Street Journal articles. The concatenation operator was used with phonemes as terminals. A second layer was added to the framework to map from phonemes to speech; these extensions are described in more detail in (de Marcken, 1995b). The sound model of each phoneme was learned separately using supervised training on different, segmented speech. Although the phoneme model is extremely poor, many words are recognizable, and this is the first significant lexicon learned directly from spoken speech without supervision.

[Figure 5: Some words from a lexicon learned from 55,000 utterances of continuous, dictated Wall Street Journal articles, listed as phoneme sequences with their first-level decompositions. Although many words are seemingly random, words representing million dollars, Goldman-Sachs, thousand, etc. are learned. Furthermore, as word 8950 (long time) shows, they are often properly decomposed into components.]

If the composition operator makes use of context, then the representation extends naturally to a more powerful form of context-free grammars, where composition is tree-insertion. In particular, if each word is associated with a part-of-speech, and parts of speech are permissible terminals in the lexicon, then "words" become production rules. For example, a word might be VP → take off NP, represented in terms of the composition of VP → V P NP, V → take and P → off. Furthermore, VP → V P NP may be represented in terms of VP → V PP and PP → P NP. In this way syntactic structure emerges in the internal representation of words. This sort of grammar offers significant advantages over context-free grammars in that non-independent rule expansions can be accounted for. We are currently looking at various methods for automatically acquiring parts of speech; in initial experiments some of the first such classes learned are the class of vowels, of consonants, and of verb endings.

7 Conclusions

No previous unsupervised language-learning procedure has produced structures that match so closely with linguistic intuitions. We take this as a vindication of the perturbation-of-compositions representation. Its ability to capture the statistical and linguistic idiosyncrasies of large structures without sacrificing the obvious regularities within them makes it a valuable tool for a wide variety of induction problems.

References

Leonard E. Baum, Ted Petrie, George Soules, and Norman Weiss. 1970. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics, 41:164-171.
Glenn Carroll and Eugene Charniak. 1992. Learning probabilistic dependency grammars from labelled text. In Working Notes, Fall Symposium Series, AAAI, pages 25-31.
Timothy Andrew Cartwright and Michael R. Brent. 1994. Segmenting speech without a lexicon: Evidence for a bootstrapping model of lexical acquisition. In Proc. of the 16th Annual Meeting of the Cognitive Science Society, Hillsdale, New Jersey.
Stanley F. Chen. 1995. Bayesian grammar induction for language modeling. In Proc. 33rd Annual Meeting of the Association for Computational Linguistics, pages 228-235, Cambridge, Massachusetts.
Noam A. Chomsky. 1955. The Logical Structure of Linguistic Theory. Plenum Press, New York.
Carl de Marcken. 1995a. Lexical heads, phrase structure and the induction of grammar. In Third Workshop on Very Large Corpora, Cambridge, Massachusetts.
Carl de Marcken. 1995b. The unsupervised acquisition of a lexicon from continuous speech. A.I. Memo 1558, MIT Artificial Intelligence Lab., Cambridge, Massachusetts.
Sabine Deligne and Frederic Bimbot. 1995. Language modeling by variable length sequences: Theoretical formulation and evaluation of multigrams. In Proceedings of the International Conference on Speech and Signal Processing, volume 1, pages 169-172.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, B(39):1-38.
T. Mark Ellison. 1992. The Machine Learning of Phonological Structure. Ph.D. thesis, University of Western Australia.
W. N. Francis and H. Kucera. 1982. Frequency analysis of English usage: lexicon and grammar. Houghton-Mifflin, Boston.
Zellig Harris. 1968. Mathematical Structure of Language. Wiley, New York.
Donald Cort Olivier. 1968. Stochastic Grammars and Language Acquisition Mechanisms. Ph.D. thesis, Harvard University, Cambridge, Massachusetts.
Fernando Pereira and Yves Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proc. 29th Annual Meeting of the Association for Computational Linguistics, pages 128-135, Berkeley, California.
Jorma Rissanen. 1978. Modeling by shortest data description. Automatica, 14:465-471.
Jorma Rissanen. 1989. Stochastic Complexity in Statistical Inquiry. World Scientific, Singapore.
R. J. Solomonoff. 1960. The mechanization of linguistic learning. In Proceedings of the 2nd International Conference on Cybernetics, pages 180-193.
Andreas Stolcke. 1994. Bayesian Learning of Probabilistic Language Models. Ph.D. thesis, University of California at Berkeley, Berkeley, CA.
J. Gerald Wolff. 1982. Language acquisition, data compression and generalization. Language and Communication, 2(1):57-89.
J. Ziv and A. Lempel. 1978. Compression of individual sequences by variable rate coding. IEEE Transactions on Information Theory, 24:530-538.
Generating an LTAG out of a principle-based hierarchical representation Marie-H~l~ne Candito TALANA and UFRL, Universit6 Paris 7 2, place Jussieu, Tour centrale 8~me 6tage piece 801 75251 Paris cedex 05 FRANCE marie-helene.candito @ linguist.jussieu.fr Abstract Lexicalized Tree Adjoining Grammars have proved useful for NLP. However, numerous redundancy problems face LTAGs developers, as highlighted by Vijay-Shanker and Schabes (92). We present and a tool that automatically generates the tree families of an LTAG. It starts from a compact hierarchical organization of syntactic descriptions that is linguistically motivated and carries out all the relevant combinations of linguistic phenomena. 1 Lexicalized TAGs Lexicalized Tree Adjoining Grammar (LTAG) is a formalism integrating lexicon and grammar (Joshi, 87; Schabes et al., 88), which has proved useful for NLP. Linguists have developed over the years sizeable LTAG grammars, especially for English (XTAG group, 95) and French (Abeill6, 91). In this formalism, the lexical items are associated with elementary trees representing their maximal projection. Features structures are associated with the trees, that are combined with substitution and adjunction. Adjunction allows the extended domain of locality of the formalism : all trees anchored by a predicate contain nodes for all its arguments. Such a lexicalized formalism needs a practical organization. LTAGs consist of a morphological lexicon, a syntactic lexicon of lemmas and a set of tree schemata, i.e. trees in which the lexical anchor is missing. In the syntactic lexicon, lemrnas select the tree schemata they can anchor 1. The set of tree schemata forms the syntactic part of the grammar. The tree schemata selected by predicative items are grouped into families, and collectively selected. A tree family contains the different possible trees for a given canonical subcategorization. Along with the "canonical" trees, a family contains the ones that would be transformationally related in a movement- base approach. These are first the trees where a "redistribution" of the syntactic function of the arguments has occurred, for instance the passive trees, or 1At grammar use, the words of the sentence to be parsed are associated with the relevant tree schemata to form complete lexicalized trees. middle (for French) or dative shift (for English), leading to an "actual subcategorization" different from the canonical one. Secondly, a family may contain the trees with extracted argument (or cliticized in French). In the syntactic lexicon, a particular lemma may select a family only partially. For instance a lemma might select the transitive family, ruling out the passive trees. On the other hand, the features appearing in the tree schemata are common to every lemma selecting these trees. The idiosyncratic features (attached to the anchor or upper in the tree) are introduced in the syntactic lexicon. 2 Development and maintenance problems with LTAGs This extreme lexicalization entails that a sizeable LTAG comprises hundreds of elementary trees (over 600 for the cited large grammars). And as highlighted by Vijay-Shanker and Schabes (92), information on syntactic structures and associated features equations is repeated in dozens of tree schemata (hundreds for subject- verb agreement for instance). Redundancy makes the tasks of LTAG writing, extending or updating very difficult, especially because all combinations of phenomena must be handled. 
And, in addition to the practical problem of grammar storage, redundancy makes it hard to get a clear vision of the theoretical and practical choices on which the grammar is based. 3 Existing solutions A few solutions have been proposed for the problems described above. They use two main devices for lexicon representation : inheritance networks and lexical rules. But for LTAG representation, inheritance networks have to include phrase-structure information also, and lexical rules become "lexico-syntactic rules". Vijay-Shanker and Schabes, (92) have first proposed a scheme for LTAG representation. Implemented work is also described in (Becker, 93; 95) and (Evans et al., 95). The three cited solutions give an efficient representation (without redundancy) of an LTAG, but have in our opinion two major deficiencies. First these solutions use inheritance networks and lexical rules in a purely technical way. They give no principle about the form of the hierarchy or the lexical rules, whereas we 342 believe that addressing the practical problem of redundancy should give the opportunity of formalizing the well-formedness of elementary trees and of tree families. And second, the generative aspect of these solutions is not developed. Certainly the lexical rules are proposed as a tool for generation of new schemata or new classes in a inheritance network. But the automatic triggering, ordering and bounding of the lexical rules is not discussed 2. 4 Proposed solution : a principle-based representation and a generation system We propose a system for the writing and/or the updating of an LTAG. It comprises a principled and hierarchical representation of lexico-syntactic structures. Using this hierarchy and principles of well-formedness, the tool carries out automatically the relevant crossings of linguistic phenomena to generate the tree families. This solution not only addresses the problem of redundancy but also gives a more principle-based representation of an LTAG. The implementation of the principles gives a real generative power to the tool. Due to a lack of space we cannot develop all the aspects of this work 3. After a brief description of the organization of the syntactic hierarchy, we will focus on the use of partial descriptions of trees. 4.1 Organization of the hierarchy The proposed organization of the hierarchy follows from the linguistic principles of well-formedness of elementary TAG trees, mainly the predicate-arguments co-occurrence principle (Kroch and Joshi, 85; Abeillt, 91) : the trees for a predicative item contain positions for all its arguments. But for a given predicate, we expect the canonical arguments to remain constant through redistribution of functions. The canonical subject (argument 0) in a passive construction, even when unexpressed, is still an argument of the predicate. So the principle should be a principle of predicate-functions co-occurrence : the trees for a predicative item contain positions for all the functions of its actual subcategorization. This reformulated principle presupposes the definition of an actual subcategorization, given the canonical subcategorization of a predicate. This presupposition and the predicate-functions co-occurrence principle are fulfilled by organizing the hierarchy along the three following dimensions : dimension 1 : canonical subcategorization frame This dimension defines the types of canonical subcategorization. 
Its classes contain information on the arguments of a predicate, their index, their possible categories and their canonical syntactic function.

dimension 2 : redistribution of syntactic functions
This dimension defines the types of redistribution of functions (including the case of no redistribution at all). The association of a canonical subcategorization frame and a compatible redistribution gives an actual subcategorization, namely a list of argument-function pairs, that have to be locally realized.

dimension 3 : syntactic realizations of functions
It expresses the way the different syntactic functions are positioned at the phrase-structure level (in canonical, cliticized, extracted position...).

These three dimensions constitute the core hierarchy. Out of this syntactic database, and following principles of well-formedness, the generator creates elementary trees. This is a two-step process: it first creates some terminal classes with inherited properties only; they are totally defined by their list of super-classes. Then it translates these terminal classes into the relevant elementary tree schemata, in the XTAG [4] format, so that they can be used for parsing.

[Footnote 4: XTAG (Paroubek et al., 92) is a tool for writing and using LTAGs, including among other things a tree editor and a syntactic parser.]

Tree schemata generation respects the predicate-functions co-occurrence principle. The corresponding terminal classes are created first by associating a canonical subcat (dimension 1) with a compatible redistribution, including the case of no redistribution (dimension 2). Then, for each function defined in the actual subcat, exactly one realization of function is picked up in dimension 3. The generation is made family by family; this is simply achieved by fixing the canonical subcat frame (dimension 1). At the development stage, generation can also be done following other criteria: for instance, all passive trees or all trees with extracted complements can be generated.
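The two-step crossing just outlined can be illustrated with a toy sketch. The following Python fragment is an assumption-laden miniature, not the tool itself: each dimension is reduced to plain labels (the real classes carry partial tree descriptions and feature equations), and the names DIM1, DIM2, DIM3 and the sample values are invented for the example. It shows only the combinatorics: one compatible redistribution per canonical frame, then exactly one realization per function of the actual subcategorization.

    from itertools import product

    # Dimension 1: canonical subcategorization frames (toy values).
    DIM1 = {"n0Vn1": [("arg0", "subject"), ("arg1", "object")]}

    # Dimension 2: redistributions of functions, including none at all.
    DIM2 = {
        "no-redistribution": lambda subcat: subcat,
        "passive": lambda subcat: [("arg1", "subject"), ("arg0", "by-object")],
    }

    # Dimension 3: syntactic realizations available for each function.
    DIM3 = {
        "subject": ["canonical", "extracted"],
        "object": ["canonical", "cliticized", "extracted"],
        "by-object": ["canonical", "unexpressed"],
    }

    def generate_family(frame):
        """Terminal classes for one family: the canonical frame crossed
        with a compatible redistribution, then exactly one realization
        per function of the resulting actual subcategorization."""
        for redistribution, apply_redistribution in DIM2.items():
            actual_subcat = apply_redistribution(DIM1[frame])
            functions = [function for _, function in actual_subcat]
            for realizations in product(*(DIM3[f] for f in functions)):
                yield frame, redistribution, tuple(zip(functions, realizations))

    # Generating family by family amounts to fixing the frame:
    for terminal_class in generate_family("n0Vn1"):
        print(terminal_class)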
4.2 Formal choices : monotonic inheritance network and partial descriptions of trees

The generation process described above is quite powerful in the context of LTAGs, because it carries out automatically all the relevant crossings of linguistic phenomena. These crossings are precisely the major source of redundancy in LTAGs. Because of this generative device, we do not need to introduce lexico-syntactic rules, and thus we do not have to face the problems of ordering and bounding their application. Further, as was mentioned in section 1, lexical idiosyncrasies are handled in the syntactic lexicon, and not in the set of tree schemata. So, to represent this set hierarchically, we do not think that nonmonotonicity is linguistically justified. We have thus chosen monotonicity, which gives more transparency and improves declarativity. We follow here Vijay-Shanker and Schabes (92) and use partial descriptions of trees (Rogers and Vijay-Shanker, 94) [5]. A partial description is a set of constraints that characterizes a set of trees. Adding information to the description monotonically reduces the set of satisfying trees. The partial descriptions of Rogers and Vijay-Shanker (94) use three relations: left-of, parent and dominance (represented with a dashed line). A dominance link can be further specified as a path of length superior or equal to zero. These links are useful to underspecify a relation between two nodes at a general level, a relation that will be specified at either a lower or a lateral level.

[Footnote 5: Vijay-Shanker & Schabes (92) used the partial descriptions introduced in (Rogers & Vijay-Shanker, 92), but we have used the more recent version of (Rogers & Vijay-Shanker, 94). The difference lies principally in the definition of quasi-trees, first seen as partial models of trees and later as distinguished sets of constraints.]

Figure 1 shows a partial description representing a sentence with a nominal subject in canonical position, giving no other information about possible other complements. The underspecified link between the S and V nodes allows for either the presence or the absence of a cliticized complement on the verb. In the case of a clitic, the path between the S and V nodes can be specified with the description of Figure 2. Then, if we have the information that the nodes labelled respectively S and V in Figures 1 and 2 are the same, the conjunction of the two descriptions is equivalent to the description of Figure 3.

[Figures 1, 2 and 3: partial descriptions of trees. Figure 1: a nominal subject in canonical position, with an underspecified dominance link between the S and V nodes; Figure 2: a cliticized complement on the verb; Figure 3: the conjunction of the two descriptions. Diagrams not reproduced.]

This example shows the declarativity obtained with partial descriptions that use large dominance links. The inheritance of the descriptions of Figures 1 and 2 is order independent. Without large dominance links, an order of inheritance of the classes describing a subject in canonical position and a cliticized complement would have to be predefined.

In the hierarchy of syntactic descriptions we propose, the partial description associated with a class is the unification of the class's own description with all inherited partial descriptions. Identity of nodes is stated in our system by "naming" both nodes in the same way, since in descriptions of trees, nodes are referred to by constants. Two nodes, in two conjunct descriptions, referred to by the same constant are the same node. Equality of nodes can also be inferred, mainly using the fact that a tree node has only one direct parent node. We have added atomic features associated with each constant, such as category, index, canonical syntactic function and actual syntactic function. In the conjunction of two descriptions, the identification of two nodes known to be the same requires the unification of such features. In case of failure, the whole conjunction leads to an unsatisfiable description. A terminal class is translated into its corresponding elementary tree(s) by taking the minimal satisfying tree(s) of the partial description of the class [6].

[Footnote 6: Intuitively, the remaining underspecified links are taken to be paths of minimal length. See Rogers and Vijay-Shanker (94).]
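A miniature of the conjunction operation just described is sketched below. It is an illustrative assumption rather than the actual implementation: descriptions are reduced to a set of relation triples plus a feature dictionary per node constant, and the two sample descriptions loosely follow Figures 1 and 2. As in the text, identically named constants denote the same node, and a feature clash makes the whole conjunction unsatisfiable.

    def unify_features(f1, f2):
        """Unify two atomic feature sets; None signals a clash."""
        merged = dict(f1)
        for key, value in f2.items():
            if key in merged and merged[key] != value:
                return None
            merged[key] = value
        return merged

    def conjoin(desc1, desc2):
        """Conjunction of two partial descriptions: union of the
        constraints (parent / left-of / dominance triples) and
        unification of the features of identically named constants."""
        constraints = desc1["constraints"] | desc2["constraints"]
        features = dict(desc1["features"])
        for node, feats in desc2["features"].items():
            merged = unify_features(features.get(node, {}), feats)
            if merged is None:
                return None  # the whole description is unsatisfiable
            features[node] = merged
        return {"constraints": constraints, "features": features}

    # Roughly Figure 1: subject in canonical position, with an
    # underspecified (dominance) link between S-internal Vr and V0.
    subject_canonical = {
        "constraints": {("parent", "S", "N"), ("left-of", "N", "Vr"),
                        ("dominance", "Vr", "V0")},
        "features": {"N": {"cat": "N", "function": "subject"}},
    }
    # Roughly Figure 2: a cliticized complement on the verb.
    clitic_complement = {
        "constraints": {("parent", "Vr", "Cl"), ("left-of", "Cl", "V0"),
                        ("parent", "Vr", "V0")},
        "features": {"Cl": {"cat": "Cl"}},
    }
    # Sharing the constants Vr and V0 makes the two descriptions
    # conjoin, order-independently, into the analogue of Figure 3.
    print(conjoin(subject_canonical, clitic_complement))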
4.3 Application to the French LTAG

The tool was used to generate tree families of the French grammar, using a hand-written hierarchy of syntactic descriptions. This task is facilitated by the guidelines given on the form of the hierarchy. Out of about 90 hand-written classes, the tool generates 730 trees for the 17 families for verbs without sentential complements [7], 400 of which were present in the pre-existing grammar. We have added phenomena such as some causative constructions or free order of complements. The proposed type of hierarchy is meant to be universal, and we are currently working on its application to Italian.

[Footnote 7: By the time of the conference, we will be able to give figures for the families with sentential complements also.]

5 References

A. Abeillé. 1991. Une grammaire lexicaliste d'Arbres Adjoints pour le français. PhD thesis, Univ. Paris 7.
T. Becker. 1993. HyTAG: a new type of Tree Adjoining Grammars for Hybrid Syntactic Representation of Free Order Languages. PhD thesis, Univ. of Saarbrücken.
T. Becker. 1994. Patterns in Metarules. Proc. of TAG+3.
M-H. Candito. To appear. A principle-based hierarchical representation of LTAGs. Proc. of COLING'96, Copenhagen.
R. Evans, G. Gazdar and D. Weir. 1995. Encoding Lexicalized Tree Adjoining Grammar with a Nonmonotonic Inheritance Hierarchy. Proc. of ACL'95, Boston.
A. Joshi. 1987. Introduction to Tree Adjoining Grammar. In A. Manaster Ramer (ed), The Mathematics of Language, J. Benjamins, pp. 87-114.
A. Kroch and A. Joshi. 1985. The linguistic relevance of Tree Adjoining Grammars. Technical report, Univ. of Pennsylvania.
P. Paroubek, Y. Schabes and A. Joshi. 1992. XTAG - A graphical Workbench for developing Tree Adjoining Grammars. Proc. of 3-ANLP, Trento.
J. Rogers and K. Vijay-Shanker. 1992. Reasoning with descriptions of trees. Proc. of ACL'92, pp. 72-80.
J. Rogers and K. Vijay-Shanker. 1994. Obtaining trees from their descriptions: an application to Tree-Adjoining Grammars. Computational Intelligence, vol. 10, No. 4.
Y. Schabes, A. Abeillé and A. Joshi. 1988. Parsing strategies with lexicalized grammars: Tree Adjoining Grammars. Proc. of COLING'88, Budapest, vol. 2.
K. Vijay-Shanker and Y. Schabes. 1992. Structure Sharing in Lexicalized Tree Adjoining Grammar. Proc. of COLING'92, Nantes, pp. 205-211.
XTAG research group. 1995. A lexicalized TAG for English. Technical Report IRCS 95-03, Univ. of Pennsylvania.
Using Parsed Corpora for Structural Disambiguation in the TRAINS Domain

Mark Core
Department of Computer Science
University of Rochester
Rochester, New York 14627
[email protected]

Abstract

This paper describes a prototype disambiguation module, KANKEI, which was tested on two corpora of the TRAINS project. In ambiguous verb phrases of form V ... NP PP or V ... NP adverb(s), the two corpora have very different PP and adverb attachment patterns; in the first, the correct attachment is to the VP 88.7% of the time, while in the second, the correct attachment is to the NP 73.5% of the time. KANKEI uses various n-gram patterns of the phrase heads around these ambiguities, and assigns parse trees (with these ambiguities) a score based on a linear combination of the frequencies with which these patterns appear with NP and VP attachments in the TRAINS corpora. Unlike previous statistical disambiguation systems, this technique thus combines evidence from bigrams, trigrams, and the 4-gram around an ambiguous attachment. In the current experiments, equal weights are used for simplicity but results are still good on the TRAINS corpora (92.2% and 92.4% accuracy). Despite the large statistical differences in attachment preferences in the two corpora, training on the first corpus and testing on the second gives an accuracy of 90.9%.

1 Introduction

The goal of the TRAINS project is to build a computerized planning assistant that can interact conversationally with its user. The current version of this planning assistant, TRAINS 95, is described in (Allen et al., 1995); it passes speech input onto a parser whose chart is used by the dialog manager and other higher-level reasoning components. The planning problems handled involve moving several trains from given starting locations to specified destinations on a map display (showing a network of rail lines in the eastern United States). The 95 dialogs are a corpus of people's utterances to the TRAINS 95 system; they contain 773 instances of PP or adverb postmodifiers that can attach to either NPs or VPs. Many of these cases were unambiguous, as there was no NP following the VP, or the NP did not follow a verb. Only 275 utterances contained ambiguous constructions, and in 73.5% of these the correct PP/adverb attachment was to the NP.

One goal of the TRAINS project is to enhance the TRAINS 95 system sufficiently to handle the more complex TRAINS 91-93 dialogs. This corpus was created between 1991 and 1993 from discussions between humans on transportation problems involving trains. The dialogs deal with time constraints and the added complexity of using engines to pick up boxcars and commodities to accomplish delivery goals. This corpus contains 3201 instances of PP or adverb postmodifiers that can attach to either NPs or VPs. 1573 of these examples contained both an NP and a VP to which the postmodifier could attach. The postmodifier attached to the VP in 88.7% of these examples. On average, a postmodifier attachment ambiguity appears in the 91-93 dialogs after about 54 words, which is more frequent than the 74-word average of the 95 dialogs. This suggests that a disambiguation module is going to become necessary for the TRAINS system. This is especially true since some of the methods used by TRAINS 95 to recover from parse errors will not work in a more complex domain. For instance in the 95 dialogs, a PP of form at city-name can be assumed to give the current location of an engine that is to be moved.
However, this is not true of the 91-93 dialogs, where actions such as load often take at city-name as adjuncts.

2 Methodology

KANKEI [1] is a first attempt at a TRAINS disambiguation module. Like the systems in (Hindle and Rooth, 1993) and (Collins and Brooks, 1995), KANKEI records attachment statistics on information extracted from a corpus. This information consists of phrase head patterns around the possible locations of PP/adverb attachments. Figure 1 shows how the format of these patterns allows for combinations including a verb, NP-head (rightmost NP before the postmodifier), and either the preposition and head noun in the PP, or one or more adverbs [2].

[Footnote 1: From the Japanese word, kankei, meaning "relation."]
[Footnote 2: Examples of trailing adverb pairs are first off and right now.]

verb NP-head (preposition obj-head | adverb1 adverb2)

Figure 1: Format of an attachment pattern

These patterns are similar to ones used by the disambiguation systems in (Collins and Brooks, 1995) and (Brill and Resnik, 1994), except that Brill and Resnik form rules from these patterns while KANKEI and the system of Collins and Brooks use the attachment statistics of multiple patterns. While KANKEI combines the statistics of multiple patterns to make a disambiguation decision, Collins and Brooks' model is a backed-off model that uses 4-gram statistics where possible, 3-gram statistics if no 4-gram statistics are available, and bigram statistics otherwise.

Most items in this specification are optional. The only requirement is that patterns have at least two items: a preposition or adverb and a verb or NP-head. The singular forms of nouns and the base forms of verbs are used. These patterns (with hyphens separating the items) form keys to two hash tables; one records attachments to NPs while the other records attachments to VPs. Numbers are stored under these keys to record how often such a pattern was seen in a not necessarily ambiguous VP or NP attachment. Sentence 1 instantiates the longest possible pattern, a 4-gram that here consists of need, orange, in, and Elmira.

1) I need the oranges in Elmira.

The TRAINS corpora are much too small for KANKEI to rely only on the full pattern of phrase heads around an ambiguous attachment. While searching for attachment statistics for sentence 1, KANKEI will check its hash tables for the key need-orange-in-Elmira. If it relied entirely on full patterns, then if the pattern had not been seen, KANKEI would have to randomly guess the attachment. Such a technique will be referred to as full matching. Normally KANKEI will do partial matching, i.e., if it cannot find a pattern such as need-orange-in-Elmira, it will look for smaller partial patterns, which here would be: need-in, orange-in, orange-in-Elmira, need-in-Elmira, and need-orange-in. The frequency with which NP and VP attachment occurs for these patterns is totaled to see if one attachment is preferred. Currently, we count partial patterns equally, but in future refinements we would like to choose weights more judiciously; for instance, we would expect shorter patterns such as need-in to carry less weight than longer ones. The need to choose weights is a drawback of the approach. However, the intuition is that one source of evidence is insufficient for proper disambiguation. Future work needs to further test this hypothesis.
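The lookup-and-vote step just described can be sketched in a few lines. The following Python fragment is illustrative only: the function names and the tie-breaking shortcut are assumptions (the real KANKEI is a Common Lisp program and falls back on a default guess), but the key scheme, i.e. hyphen-joined phrase heads indexing two attachment tables whose counts are summed with equal weights, follows the description above.

    # Two hash tables map hyphen-joined head patterns to attachment counts.
    np_counts: dict = {}
    vp_counts: dict = {}

    def candidate_patterns(verb, np_head, prep, obj_head):
        """The full 4-gram plus the partial patterns named in the text;
        each pattern pairs the preposition with a verb or NP-head."""
        tuples = [
            (verb, np_head, prep, obj_head),  # need-orange-in-Elmira
            (verb, prep),                     # need-in
            (np_head, prep),                  # orange-in
            (np_head, prep, obj_head),        # orange-in-Elmira
            (verb, prep, obj_head),           # need-in-Elmira
            (verb, np_head, prep),            # need-orange-in
        ]
        return ["-".join(t) for t in tuples]

    def choose_attachment(verb, np_head, prep, obj_head, default="VP"):
        """Total the NP and VP frequencies over all matching patterns,
        counting each pattern equally, and prefer the larger total."""
        np_total = vp_total = 0
        for key in candidate_patterns(verb, np_head, prep, obj_head):
            np_total += np_counts.get(key, 0)
            vp_total += vp_counts.get(key, 0)
        if np_total == vp_total:
            return default  # e.g. the corpus-wide default guess
        return "NP" if np_total > vp_total else "VP"

    # Sentence 1, "I need the oranges in Elmira":
    print(choose_attachment("need", "orange", "in", "Elmira"))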
The statistics used by KANKEI for partial or full matching can be obtained in various ways. One is to use the same kinds of full and partial pattern matching in training as are used in disambiguation. This is called comprehensive training. Another method, called raw training, is to record only full patterns for ambiguous and unambiguous attachments in the corpus. (Note that full patterns can be as small as bigrams, such as when an adverb follows an NP acting as a subject.) Although raw training only collects a subset of the data collected by comprehensive training, it still gives KANKEI some flexibility when disambiguating phrases. If the full pattern of an ambiguity has not been seen, KANKEI can test whether a partial pattern of this ambiguous attachment occurred as an unambiguous attachment in the training corpus.

Like the disambiguation system of (Brill and Resnik, 1994), KANKEI can also use word classes for some of the words appearing in its patterns. The rudimentary set of noun word classes used in this project is composed of city and commodity classes and a train class including cars and engines.

3 Measure of Success

One hope of this project is to make generalizations across corpora of different domains. Thus, experiments included trials where the 91-93 dialogs were used to predict the 95 dialogs [3] and vice versa. Experiments on the effect of training and testing KANKEI on the same set of dialogs used cross validation; several trials were run with a different part of the corpus being held out each time. In all these cases, the use of partial patterns and word classes was varied in an attempt to determine their effect. Tables 1, 2, and 3 show the results for the best parameter settings from these experiments.

[Footnote 3: 91-93 dialogs were used for training and the 95 dialogs for testing.]

[Table 1: Results of training with the 93 dialogs and testing on the 95 dialogs. Four parameter settings varying word classes, raw training, and partial matching, with default guesses VP/NP; most percentage cells are illegible in the source, the surviving accuracy figures being 86.9 and 85.5.]

[Table 2: Results of training and testing on the 95 dialogs. Five settings of the same parameters, with default guesses NP/VP; the percentage rows are illegible in the source.]

[Table 3: Results of training and testing on the 93 dialogs. Four settings, all with default guess VP; the surviving accuracy figures are 91.0 and 91.0.]

The rows labeled % by Default give the portion of the total success rate (last row) accounted for by KANKEI's default guess. The results of training on the 95 data and testing on the 93 data are not shown because the best results were no better than always attaching to the VP. Notice that all of these results involve either word classes or partial patterns. There is a difference of at least 30 attachments (1.9% accuracy) between the best results in these tables and the results that did not use word classes or partial patterns. Thus, it appears that at least one of these methods of generalization is needed for this high-dimensional space. The 93 dialogs predicted attachments in the 95 test data with a success rate of 90.9%, which suggests that KANKEI is capable of making generalizations that are independent of the corpus from which they were drawn. The overall accuracy is high: the 95 data was able to predict itself with an accuracy of 92.2%, while the 93 data predicted itself with an accuracy of 92.4%.

4 Discussion and Future Work

The results for the TRAINS corpora are encouraging.
We would also like to explore how KANKEI performs in a more general domain such as the Wall Street Journal corpus from the Penn Treebank. We could then compare results with Collins and Brooks' disambiguation system, which was also tested using the Penn Treebank's Wall Street Journal corpus. Weighting the n-grams in a nonuniform manner should improve accuracy on the TRAINS corpora as well as in more general domains. (Alshawi and Carter, 1994) address a related problem, weighting scores from different disambiguation systems to obtain a single rating for a parse tree. They achieved good results using a hill climbing technique to explore the space of possible weights. Another possible technique for combining evidence is the maximum-entropy technique of (Wu, 1993). We are also considering using logical forms (instead of words and word classes) in collocation patterns.

The integration of KANKEI with the TRAINS parser needs to be completed. As a first attempt, when the TRAINS parser tries to extend the arcs associated with the rules VP -> VP (PP|ADV) and NP -> NP (PP|ADV), KANKEI will adjust the probabilities of these arcs based on attachment statistics [4]. Ultimately, the TRAINS disambiguation module will contain functions measuring rule habituation and distance effects. Then it will become necessary to weight the scores of each disambiguation technique according to its effectiveness. The ability to adjust probabilities based on evidence seen is an advantage over rule-based approaches. This advantage is obtained at the expense of storing all the patterns seen.

[Footnote 4: The TRAINS parser is probabilistic, although the probabilities are parse scores, not formal probabilities.]

Acknowledgments

This work was supported in part by National Science Foundation grant IRI-95033312. Len Schubert's supervision and many helpful suggestions are gratefully acknowledged. Thanks also to James Allen for his helpful comments.

References

James Allen, George Ferguson, Bradford Miller, and Eric Ringger. 1995. Spoken dialogue and interactive planning. In Proc. of the ARPA Spoken Language Technology Workshop, Austin, TX.
Hiyan Alshawi and David Carter. 1994. Training and scaling preference functions. Computational Linguistics, 20(4):635-648.
Eric Brill and Philip Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Proc. of the 15th International Conference on Computational Linguistics, Kyoto, Japan.
Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proc. of the 3rd Workshop on Very Large Corpora, pages 27-38, Boston, MA.
Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.
Dekai Wu. 1993. Estimating probability distributions over hypotheses with variable unification. In Proc. of the 11th National Conference on Artificial Intelligence, pages 790-795, Washington D.C.
Subdeletion in Verb Phrase Ellipsis

Paul G. Donecker
Villanova University
800 Lancaster Avenue
Villanova, PA 19085
[email protected]

Abstract

This paper stems from an ongoing research project on verb phrase ellipsis [1]. The project's goals are to implement a verb phrase ellipsis resolution algorithm, automatically test the algorithm on corpus data, then automatically evaluate the algorithm against human-generated answers. The paper will establish the current status of the algorithm based on this automatic evaluation, categorizing current problem situations. An algorithm to handle one of these problems, the case of subdeletion, will be described and evaluated. The algorithm attempts to detect and solve subdeletion by locating adjuncts of similar types in a verb phrase ellipsis and corresponding antecedent.

[Footnote 1: This research was supported in part by NSF Career Grant, no. IRI-9502257.]

1. Introduction

A verb phrase ellipsis (VPE) exists when a sentence has an auxiliary verb but no verb phrase (VP). For example, in the sentence "Gather ye rosebuds while ye may," "may" is the beginning of a VPE. Its antecedent is "gather ye rosebuds." The research described in this paper is part of a project to automate the resolution of VPE occurrences, and also to automate the evaluation of the success of the VPE resolution (Hardt 1995). Based on these evaluations of the algorithm, several distinct categories of error situations have been determined. We have focused on errors in which the program selects the correct head verb as antecedent. These cases can be divided into the following categories: 1) too much material included from the antecedent, 2) not enough material included from the antecedent, 3) discontinuous antecedents, and 4) miscellaneous. For a subset of case 1, subdeletion, an algorithm derived from (Lappin and McCord, 1990) is evaluated in regard to the Brown Corpus.

2. Background

Previous studies on evaluating discourse processing (e.g., Walker, 1989; Hobbs, 1978) have involved subjectively examining test cases to determine correctness. With the development of resources such as the Penn Treebank (Marcus, Santorini, and Marcinkiewicz, 1993), it has become possible to automate empirical tests of discourse processing systems to obtain a more objective measure of their success. Towards this end, an algorithm was implemented in a Common Lisp program called VPEAL (Verb Phrase Ellipsis Antecedent Locator) (Hardt, 1995), drawing on the Penn Treebank as input. The portion of the Penn Treebank examined--the Brown Corpus, about a million words--contains about 400 VPEs. Furthermore, to automatically evaluate the algorithm, utilities were developed to automatically test the output of VPEAL for correctness.

The most recent version of VPEAL contained 18 sub-parts for ranking and choosing antecedents. Testing the program's performance involved finding the percentage of correct antecedents found by any or all of these algorithms. This was achieved by having human coders read plain text versions of the parsed passages, marking what they felt to be the antecedent. Antecedents selected by VPEAL were considered correct if they matched the antecedents selected by the coders. The remainder of this paper will describe the categories of errors observed, then describe an approach to reducing one category of errors.

3. Categories of Errors

The most recent version of VPEAL correctly selects 257 out of 380 antecedents from the Brown Corpus. We have divided the errors into the following categories:
A. Incorrect verb: 90 cases. In these cases, VPEAL selected an incorrect head verb for the antecedent. The causes of these errors are being evaluated.

B. Incorrect antecedent but correct verb: 33 cases. VPEAL selected the correct verb to head the antecedent, but the selected antecedent was either incomplete or included incorrect information. These cases can be further divided into: 1) too much material included from the antecedent, 2) not enough material included from the antecedent, 3) discontinuous antecedents, and 4) miscellaneous. These subcategories are described below.

1. Too much material is included from the antecedent: 11 cases.

Example (excerpt from Penn Treebank): produce humorous effects in his novels and tales as they did in the writing of Longstreet and Hooper and Harris
VPE: did
VPEAL's antecedent: produce humorous effects in his novels and tales
Coder's antecedent: produce humorous effects

Normally, an entire verb phrase is selected as the antecedent. In these cases, though, part of the selected antecedent was not required by the VPE. The most common situation (6 cases), as in the above example, was subdeletion--when the VPE structure contains a noun phrase or prepositional phrase which substitutes for a corresponding structure in the antecedent verb phrase.

2. Not enough material is included from the antecedent: 10 cases.

Example (excerpt from Penn Treebank): But even if we can not see the repulsive characteristics in this new image of America, foreigners can
VPE: can
VPEAL's antecedent: see the repulsive characteristics
Coder's antecedent: see the repulsive characteristics in this new image of America

By default, only text contained by the selected verb phrase is included in the antecedent. In these cases, however, human coders have selected text that is adjacent to, but not parsed as contained by, the verb phrase as part of the antecedent. It can be argued that these errors are not the fault of the VPEAL algorithm--that if text is parsed as not being a part of the verb phrase then it should still not be included when the verb phrase is chosen as the antecedent. If the above prepositional phrase "in this new image of America" were parsed as part of the verb phrase--as indeed it should have been--then the algorithm would have derived the correct antecedent.

3. Discontinuous antecedents--the correct antecedent is split into two parts: 5 cases.

Example (excerpt from Penn Treebank): representing as I do today my wife
VPE: do
VPEAL's antecedent: representing
Coder's antecedent: representing my wife

This situation is similar to B2 in that the antecedent is incorrect because text not contained by the selected verb phrase should be included in the antecedent. In these cases, however, the reason the omitted text is not contained by the antecedent verb phrase is that an interposing phrase (in the example above, the VPE itself) occurs in the middle of the antecedent.

4. Miscellaneous: 7 cases.

4. Improving Performance in the Case of Subdeletion

This section describes an algorithm to reduce the errors in error category B1 caused by subdeletion. The problem category occurred when prepositional phrases and noun phrases in the antecedent verb phrases were unnecessary because of analogous phrases adjacent to the VPE. The proposed solution was to check whether the VPE has a sister node that is a prepositional phrase or noun phrase.
If it does, and a phrase of the same type exists as a sister node to the head verb in the antecedent, then the phrase in the antecedent is removed. This is essentially the strategy outlined by Lappin and McCord (1990). Following are the specific steps to implementing the algorithm:

1. Check if there are any prepositional phrases or noun phrases that are sister nodes to the antecedent head verb.
2. Check if there are any prepositional phrases or noun phrases that are sister nodes to the VPE head verb.
3. If a prepositional phrase or noun phrase is found in step 1, and a phrase of the same type is found in step 2, then remove the phrase found in step 1 from the antecedent.

For example, refer to the example from error case B.1. Step 1 would locate the noun phrase "humorous effects" and the prepositional phrase "in his novels and tales" as sister nodes to the antecedent head verb "produce." Step 2 would locate the prepositional phrase "in the writing of Longstreet and Hooper and Harris" as a sister node to the VPE head verb "did." Step 3 would determine that a prepositional phrase exists after both the antecedent's head verb and the VPE and therefore would delete "in his novels and tales" from the antecedent, resulting in the correct antecedent, "produce humorous effects."

This algorithm will correctly handle the 6 cases of subdeletion in the Brown Corpus. However, examples can be constructed for which this algorithm does not account. In the sentence "Julie drove to school on Friday, and Laura did on Saturday," for example, the VPE is "did" and the correct antecedent is "drove to school." In this example, two prepositional phrases--"to school" and "on Friday"--follow the antecedent's head verb "drove." A prepositional phrase, "on Saturday," also exists following the VPE's head verb. Following the above algorithm, both prepositional phrases "to school" and "on Friday" would be deleted, resulting in an incorrect antecedent. The algorithm makes no provisions for cases containing multiple prepositional phrases and noun phrases. Fortunately, such situations seem rare, as none were found in the Brown Corpus. More significantly, the algorithm also assumes that analogous phrases following the antecedent and VPE always imply subdeletion. That is, it assumes that a prepositional phrase or noun phrase following the VPE always implies that like phrases should be deleted from the antecedent. Again, it is possible to imagine a counterexample, for example, "Dad stayed in the Hilton like Mom did in Pittsburgh." Here, the above algorithm would incorrectly remove the prepositional phrase "in the Hilton." The expectation was that these counterexamples would be less frequent than the cases in which the algorithm would correctly remove unwanted text. A manual sampling of VPEs in the Brown Corpus showed this to be true. When the algorithm was implemented, however, the number of correct answers improved to 258, an increase of 1. In addition to solving the 6 cases of subdeletion, the algorithm introduced 5 errors; each of these new errors involved a noun phrase or prepositional phrase in the VPE that did not require the deletion of a counterpart in the antecedent. For example, one of the newly introduced errors occurred in the fragment "...creaking in the fog as it had for thirty years." The prepositional phrase "for thirty years" in the VPE caused the removal of the phrase "in the fog" from the antecedent, even though the phrases are not parallel in meaning.
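The three steps above translate almost directly into code. The sketch below is hypothetical: it assumes a parse-tree node interface (a sisters() method and a label attribute) that VPEAL does not necessarily expose, and it simply returns the antecedent phrases to strip. Note that, exactly as discussed, it removes every same-type phrase and so reproduces the multiple-phrase failure mode of the "Julie drove to school on Friday" example.

    class Node:
        """Minimal stand-in for a parse-tree node (hypothetical interface)."""
        def __init__(self, label, sisters=()):
            self.label = label
            self._sisters = list(sisters)
        def sisters(self):
            return self._sisters

    def subdeletion_trim(antecedent_verb, vpe_verb):
        """Steps 1-3: if a PP or NP is a sister of both head verbs,
        mark the antecedent phrase(s) of that type for removal."""
        to_remove = []
        for phrase_type in ("PP", "NP"):
            in_antecedent = [s for s in antecedent_verb.sisters()
                             if s.label == phrase_type]       # step 1
            in_vpe = [s for s in vpe_verb.sisters()
                      if s.label == phrase_type]              # step 2
            if in_antecedent and in_vpe:                      # step 3
                to_remove.extend(in_antecedent)
        return to_remove

    # Error case B.1: "produce humorous effects in his novels and tales
    # as they did in the writing of Longstreet and Hooper and Harris"
    produce = Node("V", sisters=[Node("NP"), Node("PP")])  # "humorous effects", "in his novels..."
    did = Node("V", sisters=[Node("PP")])                  # "in the writing of..."
    print(len(subdeletion_trim(produce, did)))  # 1: only the antecedent PP is removed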
These results imply that the structure of a sentence alone is insufficient to detect subdeletion. It is possible, however, that a larger sample of relevant examples would suggest the best choice (to delete or not to delete) in the absence of additional information. Towards these ends, other corpora in the Penn Treebank will be examined with VPEAL. Also, newer versions of the Treebank include semantic tags on adjunct phrases, which will aid in preventing the misidentification of subdeletion described above.

5. Conclusion

Improving the results of the VPEAL program is an iterative process. We have categorized the errors occurring in VPEAL. An algorithm for solving the error category of subdeletion was described and examined. Potential problem situations for the algorithm were also presented. Empirical evaluation of the algorithm indicates that a purely syntactic approach to detecting subdeletion is probably insufficient. Additional approaches to the problem of subdeletion were suggested.

Bibliography

Hardt, Daniel. 1995. An empirical approach to VP ellipsis.
Hobbs, Jerry. 1978. Resolving pronoun references. Lingua, 44:311-388.
Lappin, Shalom and Michael McCord. 1990. Anaphora Resolution in Slot Grammar. Computational Linguistics, 16(4).
Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2).
Walker, Marilyn. 1989. Evaluating discourse processing algorithms. In Proceedings, 27th Annual Meeting of the ACL, Vancouver, Canada.
Using textual clues to improve metaphor processing

Stéphane Ferrari
LIMSI-CNRS
PO Box 133
F-91403 Orsay cédex, FRANCE
ferrari@limsi.fr

Abstract

In this paper, we propose a textual clue approach to help metaphor detection, in order to improve the semantic processing of this figure. Previous work in the domain studied only semantic regularities, overlooking an obvious set of regularities. A corpus-based analysis shows the existence of surface regularities related to metaphors. These clues can be characterized by syntactic structures and lexical markers. We present an object-oriented model for representing the textual clues that were found. This representation is designed to help the choice of a semantic processing, in terms of possible non-literal meanings. A prototype implementing this model is currently under development, within an incremental approach allowing step-by-step evaluations [1].

[Footnote 1: This work takes part in a research project sponsored by the AUPELF-UREF (Francophone Agency for Education and Research).]

1 Introduction

Metaphor is a frequently used figure of speech, reflecting common cognitive processes. Most previous work in Natural Language Understanding (NLU) looked for regularities only on the semantic side of this figure, as shown in a brief overview in section 2. This resulted in complex semantic processings, not based on any previous robust detection, or requiring large and exhaustive knowledge bases. Our aim is to provide NLU systems with a set of heuristics for choosing the most adequate semantic processing, as well as to give some probabilistic clues for disambiguating the possibly multiple meaning representations.

A corpus-based analysis we made showed the existence of textual clues in relation with metaphors. These clues, mostly lexical markers combined with syntactic structures, are easy to spot, and can provide a first set of detection heuristics. We propose, in section 3, an object-oriented model for representing these clues and their properties, in order to integrate them in a NLU system. For each class, attributes give information for spotting the clues and, when possible, the source and the target of the metaphor, using the results of a syntactic parsing. A prototype, STK, partially implementing the model, is currently under development, within an incremental approach. It is already used to evaluate the relevance of the clues.

In conclusion, we will discuss how the model can help choosing the adequate semantic analysis to process at the sentence level or disambiguating multiple meaning representations, providing probabilities for non-literal meanings.

2 Classical methods: a brief overview

The classical NLU points of view on metaphor have pointed out the multiple kinds of relations between what is called the source and the target of the metaphor, but rarely discuss the problem of detecting the figure that bears the metaphor. For our purpose, we choose to present these approaches in two main groups, depending on how they initiate the semantic processing.

The previous works led to a classification introduced by Dan Fass (Fass, 1991). In the comparison view, the metaphor corresponds to an analogy between the structures representing the source and the target of the figure, as in Gentner's works (Gentner, 1988) and their implementation (Falkenhainer et al., 1989). The interaction view, as in Hobbs (Hobbs, 1991), points at the novelty brought by the metaphor. Fass also distinguishes a selection restrictions violations view presenting the metaphor as a kind of anomaly. We would argue that the two previous views already considered metaphor as a kind of anomaly. Indeed, the semantic analyses proposed for dealing with metaphors were processed depending on the results of another, say a "classical" one [2].

[Footnote 2: We prefer to call it a classical rather than literal meaning processing because it can deal with some conventional metaphors, even if not explicitly mentioned.]
Fass also distinguishes a selection restric- tions violations view presenting the metaphor as a kind of anomaly. We would argue that the two pre- vious views already considered metaphor as a kind of anomaly. Indeed, the semantic anMysis proposed for dealing with metaphors were processed depend- ing on the results of another, say a "classical" one 2. 2We prefer to call it a classical rather than literal meanings processing because it can deal with some con- ventional metaphors, even if not explicitly mentioned. 351 Thereby, detecting a metaphor meant detecting an anomaly in the meaning representation issued from such a classical analysis. Fass proposed a method for discriminating literal meanings, metaphors, metonymies and "anomalies", merging different points of view (Fass, 1991). In this approach, multiple semantic analysis can be pro- cessed, resulting in possibly multiple meaning repre- sentations. In (Prince and Sabah, 1992), a method to overcome similar kinds of ambiguities reveal the difficulties encountered if no previous detection is made. James Martin's approach (Martin, 1992), called the conventional view by Fass, is based on Lakoff's theory on cognitive metaphors (Lakoff and Johnson, 1980). It requires a specific knowledge rep- resentation base and also results in multiple repre- sentation meanings. Detecting a metaphor is mean- ingless here, and conventional metaphoric meanings can be viewed as polysemies. Martin revealed at least that the heuristic of the ill-formness of mean- ing representations issued from classical analysis is not sufficient at all to deal with all the possible metaphors. In our point of view, all the previous approaches were founded. The main remaining problem, how- ever, is to choose an adequate processing when con- fronted with a metaphor, and thus, to detect the metaphors before trying to build their meaning rep- resentation. This can be partially solved using tex- tual clues. 3 Textual clues: object oriented description If the classical views of the metaphor overlook the textual clues, in other domains, especially those concerning explanation, they have been wisely re- introduced. In (Pery-Woodley, 1990), Pery-Woodley shows the existence of such clues related to the explanatory discourse. They can help in generat- ing explanations in natural language as well as in modelling the student in a intelligent tutoring sys- tem (Daniel et al., 1992). A corpus of 26 explana- tory texts in French, of about 200 words each, has been collected under a shared research project be- tween psychologists and computer scientists, in or- der to study metaphors and analogies in teaching. The analysis we made showed the existence of tex- tual clues in relation with metaphoric contexts and analogies (e.g. "like", "such as", "illustrated by"). They can be characterized by syntactic regularities (e.g. the comparative is used in structures such as "less than", "more than"; the identification is made through attributes or appositions, ...). They also involve lexical markers (e.g. "literMy", "illustrat- ing", "metaphorically" ,). These properties, already found in the previous works, can help detecting the clues themselves. Studying the relation between the syntactic regularities and the lexical markers, one can observe that the first build the ground where to find the second. We thus propose an object-oriented model for representing these clues. 
A generic textual clue can thereby be described by the two following attributes:

• the Surface Syntactic Pattern (SSP) representing the syntactic regularity, with a label on the item where to find the lexical marker
• the Lexical Marker (LM) itself

Typically, the word "metaphor" itself can be used as a lexical marker in expressions such as "to extend the conventional metaphor, pruning such a tree means to generalize". On the other hand, "metaphor" will not be a marker if used as the subject of the sentence, like in this one. Thus, describing the syntactic regularities surrounding a lexical marker improves its relevance as a marker. We propose to represent this relevance for probabilistic purposes. Each clue that was found is currently being evaluated on a large corpus (about 450,000 words). The frequencies of use of the lexical markers in metaphoric contexts are represented in the relevance attribute (see example below).

The syntactic structures may also give information about the source and the target of the metaphor. For instance, in the sentence "Yesterday, at home, Peter threw himself on the dessert like a lion.", the subject inherits the properties of speed and voracity of a lion attacking its victim. It is here possible to spot the source and the target of the metaphor using the syntactic properties of the comparison. Two attributes are added to textual clues related to metaphors, corresponding to the elements of the sentence bearing the source and the target.

Example of a textual clue representation:

  type:      metaphor-analogy
  name:      B.2.2.2
  comment:   comparison involving the meaning of a marker; adjective, attribute of the object, object before the verb
  SSP:       GN0 GN1 Vx Adj0 [prep] GN2
  LM:        Adj0: pareil (meaning "similar")
  target:    GN1
  source:    GN2
  LM relevance: (15/28)
      number of occurrences: 28
      conventional metaphors: 3
      new metaphors: 2
      metaphoric contexts: 12
      total: 15

Notations: GN and GV stand for nominal or verbal groups, Adj and Adv for adjectives and adverbs, and prep for prepositions.

The model has been partially implemented in a tool, STK, for detecting the textual clues related to metaphors and adding specific marks when found. In its current version, STK allows us to tokenize, tag, and search for lexical markers in large corpora. The tagger we use is the one developed by Eric Brill (Brill, 1992), with a set of tags indicating the grammatical categories as well as other information such as the number and the gender for nouns and adjectives. It is evaluated under the GRACE [3] protocol for corpus-oriented tools assigning grammatical categories. It is currently used for the evaluation of the textual clues that were found. The latter can be easily retrieved using STK, avoiding lexical ambiguities. They are then analyzed by hand, in order to determine their relevance attribute. In the previous example of a textual clue, the relevance values are issued from this corpus-based analysis.

[Footnote 3: GRACE stands for "Grammars and Resources for Corpora Analysis and their Evaluation". It is a national research project for the development of tools for French language processing.]
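The attribute structure above maps naturally onto a small class definition. The following Python rendering is a sketch under assumptions: the field names paraphrase the attributes of the example record, and nothing in the text says that STK itself is organized this way.

    from dataclasses import dataclass

    @dataclass
    class TextualClue:
        clue_type: str       # e.g. "metaphor-analogy"
        name: str            # position in the clue taxonomy, e.g. "B.2.2.2"
        ssp: str             # Surface Syntactic Pattern, labelling the marker slot
        lexical_marker: str  # the marker filling that slot
        target: str          # SSP element bearing the target of the metaphor
        source: str          # SSP element bearing the source of the metaphor
        relevance: float     # frequency of metaphoric use among occurrences

    # The "pareil" clue from the example record:
    pareil = TextualClue(
        clue_type="metaphor-analogy",
        name="B.2.2.2",
        ssp="GN0 GN1 Vx Adj0 [prep] GN2",
        lexical_marker='Adj0: pareil (meaning "similar")',
        target="GN1",
        source="GN2",
        relevance=15 / 28,   # 15 metaphoric uses out of 28 occurrences
    )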
4 Conclusion, perspectives

If textual clues give information about possible non-literal meanings, metaphors and analogies, one may argue they do not allow for a robust detection. Indeed, a textual clue is not sufficient to prove the presence of such figures of speech. The relevance of each clue can be used to help disambiguate multiple meaning representations when they occur. This must not be the only disambiguation tool, but when no other is available, it provides NLU systems with a probabilistic method. Our future work will focus on the study of the relation between the metaphors introduced by a clue and others that are not conventional. The guideline is that novel metaphors not introduced by a clue at the sentence level may have been introduced previously in the text.

References

Brill, E. (1992). A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento. ACL.
Daniel, M., Nicaud, L., Prince, V., and Pery-Woodley, M. (1992). Apport du style Linguistique à la Modélisation Cognitive de l'Élève. Lecture Notes in Computer Science, 608:252-260. Proceedings of the International Conference on Intelligent Tutoring Systems (ITS-92), Montréal.
Falkenhainer, B., Forbus, K., and Gentner, D. (1989). The Structure-Mapping Engine: Algorithm and Examples. Artificial Intelligence, 41:1-63.
Fass, D. (1991). met*: A Method for Discriminating Metonymy and Metaphor by Computer. Computational Linguistics, 17(1):49-90.
Fass, D., Hinkelman, E., and Martin, J., editors (1991). Proceedings of the IJCAI Workshop on Computational Approaches to Non-Literal Language, Sydney, Australia.
Gentner, D. (1988). Analogical Inference and Analogical Access. In: Analogica, chapter 3, pages 63-88. Edited by Prieditis A., Pitman Publishing, London, Morgan Kaufmann Publishers, Inc., Los Altos, California.
Hobbs, J. (1991). Metaphor and abduction. In (Fass et al., 1991), pages 52-61.
Lakoff, G. and Johnson, M. (1980). Metaphors we live by. University of Chicago Press, Chicago, U.S.A.
Martin, J. (1992). Computer Understanding of Conventional Metaphoric Language. Cognitive Science, 16:233-270.
Pery-Woodley, M. (1990). Textual clues for user modelling in an intelligent tutoring system. Master's thesis, University of Manchester, England, Great Britain.
Prince, V. and Sabah, G. (1992). Coping with Vague and Fuzzy Words: A Multi-Expert Natural Language System which Overcomes Ambiguities. In Acts of PRICAI'92, Seoul, Korea. September 1992.
On Reversing the Generation Process in Optimality Theory

J. Eric Fosler
U.C. Berkeley and International Computer Science Institute
1947 Center Street, Suite 600, Berkeley, CA 94704
fosler@icsi.berkeley.edu

Abstract

Optimality Theory, a constraint-based phonology and morphology paradigm, has allowed linguists to make elegant analyses of many phenomena, including infixation and reduplication. In this work-in-progress, we build on the work of Ellison (1994) to investigate the possibility of using OT as a parsing tool that derives underlying forms from surface forms.

[Figure 1: Search Spaces Within Different Paradigms; (a) Derivational Phonology, (b) Optimality Theory. Diagrams not reproduced.]

1 Introduction

Optimality Theory (Prince and Smolensky, 1993) is a constraint-based phonological and morphological system that allows violable constraints in deriving output surface forms from underlying forms. In OT a system of constraints selects an "optimal" surface output from a set of candidates. The methodology allows succinct analyses of phenomena such as infixation and reduplication that were difficult to describe under sets of transformational rules.

Several computational methods for OT have been produced within the short amount of time since Prince and Smolensky's paper (Ellison, 1994; Tesar, 1995; Hammond, 1995). These systems were designed as generation systems, deriving surface forms from an underlying lexicon. There have, however, been no computational models of OT parsers that derive underlying forms from the surface form [1]. In this work, we lay the theoretical groundwork for using OT as a parsing tool.

[Footnote 1: Some of the computational work in OT confusingly uses the term "parsing" to refer to generation.]

2 Comparing Derivational Methods to Optimality Theory

In traditional computational phonology/morphology systems such as two-level phonology (Koskenniemi, 1983), grammars that generate surface forms are invertible, allowing parsing back into underlying forms. In a derivational framework, the grammar converts underlying forms to surface outputs via transformations; the input and output share the same space (Figure 1a). In the one-level version of OT that most computational methods use, the space is populated with candidate outputs created by a generator function GEN operating on input strings. The search then narrows in on an optimal output (Figure 1b) using evaluation constraints in a process called EVAL; successively smaller boundaries are cut out by the constraints until only one candidate remains. It is easy to see why the derivational method can be run backward: it just retraces derivational links in the graph. It is not obvious, though, how the input can be found from the search space in OT.

3 Tagalog Infixation

Infixation has traditionally been a difficult problem for computational models that use two-level phonology (Sproat, 1992). Infixation in Tagalog, however, has been modeled using OT (McCarthy and Prince, 1995). In Tagalog, the um affix can appear as a prefix, or "move" slightly into the word to which it is attaching (French, 1988):

  Root      With um       Gloss
  alis      um-alis       "leave"
  sulat     s-um-ulat     "write"
  gradwet   gr-um-adwet   "graduate"

McCarthy and Prince analyze um as a prefix which moves into a word to reduce the number of coda consonants. They postulate two competing constraints, ALIGN-PREFIX and the higher-ranked NOCODA. ALIGN-PREFIX states that the prefix should remain as close to the front of the word as possible. NOCODA penalizes syllables with coda consonants. In the OT derivation of grumadwet from um+gradwet (Figure 2), the winning candidate violates NOCODA twice, while the first two candidates violate it three times. The final candidate is pruned since it violates the ALIGN constraint more times than the winner.
In the OT derivation of grumsdwet from um+gradwet (Figure 2), the winning candidate violates NOCODA twice, while the first two candidates violate it three times. The final candidate is pruned since it violates the ALIGN constraint more times than the winner. 354 Candidates NoCoda um.grad.wet • * .! gum.rad.wet • • .! V / gru.mad.wet ** gra.dum.wet ** Align ,,III~ ,,,,,L;;;;;;; i,[:ll * * *!* Figure 2: OT Evaluation for Tagalog Infixation (Morphenm Structure) ( PPWWWWWWW~ I / WPPWWWW~ { WWPP~ (Syllable Structure) { NC00NCONC/[ | 0NCONCONC~ ~ 00NONCONC} (Phoneme Strtlcturc) x umgradwet I [ ~ gumradwetl • grumadwet / Candl Cand2 Cand3 Figure 3: Candidate outputs for um+gradwet in an FST 4 Ellison's Conversion Method Ellison (1994) provides a paradigm for converting Opti- mality Theory constraints into Finite State Transducers. He requires that EVAL constraints output binary marks when ranking candidates and be describable as a regular language; the output of GEN must also be describable by a regular language. As Ellison points out, most con- straints can be reformulated to be binary. He is able to build FST representations for the constraints that he considers, showing them to be regular. For the Tagalog example, GEN will output the regu- lar language shown in Figure 3 for the first three candi- dates (umgradwet, gumradwet, and grumadwet). 2 Each candidate consists of segments associated with a syllable structure position and a morpheme structure marker. 3 We now consider the ALIGN-PREFIX constraint, re- stricting the prefix to occur as early in the word as pos- sible. This is encoded as an FST that writes marks on an output "Harmony Marks" tape. A 'T' is written for any word (W) morphological material that precedes prefix (P) material, and a "0" is written for any other segment. (Molpheme St~ctm'e) 0 (Syllable Structure) • (Phoneme Structure) • Figure 4: ALIGN-PREFIX FST Regular Language The regular language generated by this FST (Figure 4) has a very simple structure. Any Ws before Ps on the Morpheme Structure tape get a harmony violation mark. Taking the product of this language with the optimal candidate scores the candidate (Figure 5). The harmony marks include two non-harmonicmarks (i.e. "l"s); inthe OT tableau in Figure 2, we see that ALIGN also gives two marks to the optimal candidate. We can encode a similar FST for NOCODA. This FST examines the syllable structure tape to give harmony marks (Figure 6)-- codas (Cs) get a harmony violation mark, onsets (O) and nuclei (N) are unmarked. As in the OT tableau, the winning candidate (Figure 7) violates NOCODA twice. 2For brevity, we are not considering other candidates. aWe have extended Ellison's work by adding a third tape that marks segments as belonging to the prefix or to the word. (Harm°nyMarks) (1 1 0 0 0 0 0 0 i ) w (Morpheme Structure) W P P W W W W (Syllable S~cture) 0 0 N 0 N C 0 N (Phoneme Structure) g r u m a d w e Figure 5: Scoring of gruma~lwet by ALIGN-PREFIX -ony-,(ll °1o ) (Morpheme Structure) • • (Syllable S~ucmm) o N C (Phoneme Structure) • • • Figure 6: Regular Language generated by NOCODA Once the OT constraints are represented as FSTs, combining all of the EVAL constraints into one trans- ducer is a straightforward product. Ellison augments the product procedure so that harmony marks are con- catenated by the resulting transducer. We have used two different types of harmony marks in the ALIGN-PREFIX and NOCODA FSTs, representing the ranking of the two rules as suggested by McCarthy and Prince. 
We have used two different types of harmony marks in the ALIGN-PREFIX and NOCODA FSTs, representing the ranking of the two rules as suggested by McCarthy and Prince. The higher-ranked NOCODA constraint outputs "2" marks while ALIGN-PREFIX outputs "1" marks [4]. Harmonic comparisons between the candidates will consider the candidates with the smallest number of "2" marks first, followed by the smallest number of "1" marks. Marks are not added together; rather, the count of each type of mark is the deciding factor in evaluation [5].

[Footnote 4: Ellison uses only one type of mark and determines rank ordering from the relative positions of marks for each output segment. These two methods are equivalent.]
[Footnote 5: One "2" is worse than two "1"s.]

The output of GEN and the constraints of EVAL are combined into a single transducer by taking the product of all of the FSTs. For the Tagalog example, the output rankings for the candidates are shown in Figure 8. Using the harmonic marks to prune the resulting transducer reveals the optimal candidate (Figure 9).

Figure 8: Output of OT-FST System

  (Morpheme Structure)  PPWWWWWWW | WPPWWWWWW | WWPPWWWWW
  (Syllable Structure)  NCOONCONC | ONCONCONC | OONONCONC
  (Phoneme Structure)   umgradwet | gumradwet | grumadwet
  [Harmony Marks rows not recoverable from the source]

Figure 9: Pruned Output of OT-FST System

  (Harmony Marks)       1 0 1 0 0 0 0 0 0 0 0 2 0 0 0 0 0 2
  (Morpheme Structure)  W W P P W W W W W
  (Syllable Structure)  O O N O N C O N C
  (Phoneme Structure)   g r u m a d w e t

5 Extensions to Parsing

Ellison's approach gives us an elegant method of performing OT generation using finite state automata. Nevertheless, the system cannot parse the output string back into underlying forms. In a derivational paradigm (Figure 1a), the input and output forms are enclosed in the same space. The derivational grammar is a transform that one can invert using FSTs, searching for the input using the output.

Ellison's FSTs transform output candidates to harmony marks; even so, the inversion of these FSTs is useless. The crucial point is that GEN hides the surface-form-to-candidate mapping; in Ellison's system the EVAL portion of the system only combines with the output of GEN, so the mapping is lost. For invertibility it is critical that the FST have access to both input and output forms.
The CORR constraint requires that for every element in the surface phoneme string there is a segment in the underlying word or prefix, and vice versa. MATCH constrains the surface string phonemes to match6 those in the word and prefix, and vice versa (Figure 11). Using these constraints, the OT-FSTs should be able to generate and parse in the Tagalog example.

[Figure 10: Adding word and prefix tapes. Morpheme structure: W W P P W W W W W; syllable structure: O O N O N C O N C; surface phoneme structure: g r u m a d w e t; word phonemes: (g r a d w e t); prefix phonemes: (u m).]

The additional computational complexity for implementing this type of system may be quite large; the search space for determining unknown strings at parse time will make for a slow implementation unless suitable heuristics are found for searching over each type of string. Systems of this type are likely to become even more complex as more information such as moraic structure is added. We envision that these heuristics will be based on the harmony mark scoring of the FST, but the exact nature of this is left to future work.

6Here we mean be identical to; this definition can be extended with features and underspecified elements.

[Figure 11: The CORR and MATCH faithfulness constraints, relating the morpheme structure and surface phoneme tapes to the underlying word and prefix tapes.]

6 Conclusions & Future Work

Current Computational Optimality Theory systems provide solutions for OT generation, but deriving underlying forms from surface forms is not possible within these systems. In order to extend any generation system to an OT parsing system, two-level Optimality Theory should be a critical component, since it moves the hidden relationship between input and output out of GEN and into EVAL. With two-level OT, the mapping from input to output can be directly operated upon by computational theories.

We have proposed using two-level OT to extend Ellison's technique for representing constraints as finite state transducers. By explicitly representing the input-to-output mapping using two-level OT, we have laid the theoretical groundwork for recovering underlying forms from surface forms.

In future work, we will implement the extensions to Ellison's algorithm allowing us to morphologically analyze cases like the Tagalog example. Search complexity will, however, be an issue in the implementation of the system; after an initial brute-force implementation, work must be focused on determining how the harmony marks can be used to heuristically guide the parser search.

Acknowledgments

We would like to thank Dan Jurafsky, Orhan Orgun, Sharon Inkelas, Nelson Morgan, Su-Lin Wu, and three anonymous ACL reviewers for comments, suggestions, and support.

References

T.M. Ellison. Phonological derivation in optimality theory. In COLING-94, 1994.
K. French. Insights into Tagalog: Reduplication, Infixation, and Stress from Nonlinear Phonology. M.A. Thesis, Summer Institute of Linguistics and University of Texas, Arlington, 1988.
M. Hammond. Syllable parsing in English and French. Rutgers Optimality Archive, 1995.
L. Karttunen. Kimmo: A general morphological processor. In Texas Linguistics Forum 22, 1983.
K. Koskenniemi. Two-Level Morphology: A General Computational Model for Word-Form Recognition and Production. Ph.D. thesis, University of Helsinki, 1983.
J. McCarthy and A. Prince. Prosodic morphology, parts 1 and 2. Prosodic Morphology Workshop, OTS, Utrecht, 1994.
J. McCarthy and A. Prince. Prosodic morphology. In J. Goldsmith, editor, Handbook of Phonological Theory, pages 318-366. Basil Blackwell Ltd., 1995.
O. Orgun. Containment: Why and why not. Unpublished ms., U. of California-Berkeley, Department of Linguistics, July 1994.
A. Prince and P. Smolensky. Optimality theory. Unpublished ms., Rutgers University, 1993.
R. Sproat. Morphology and Computation. MIT Press, Cambridge, MA, 1992.
B. Tesar. Computational Optimality Theory. Ph.D. Thesis, U. of Colorado-Boulder, Department of Computer Science, 1995.
| 1996 | 49 |
From Submit to Submitted via Submission: On Lexical Rules in Large-Scale Lexicon Acquisition. Evelyne Viegas, Boyan Onyshkevych§, Victor Raskin§§, Sergei Nirenburg Computing Research Laboratory, New Mexico State University, Las Cruces, NM 88003, USA viegas, boyan, raskin, sergei@crl.nmsu.edu

§ also of US Department of Defense, Attn R525, Fort Meade, MD 20755, USA and Carnegie Mellon University, Pittsburgh, PA, USA. §§ also of Purdue University NLP Lab, W Lafayette, IN 47907, USA.

Abstract

This paper deals with the discovery, representation, and use of lexical rules (LRs) during large-scale semi-automatic computational lexicon acquisition. The analysis is based on a set of LRs implemented and tested on the basis of Spanish and English business- and finance-related corpora. We show that, though the use of LRs is justified, they do not come cost-free. Semi-automatic output checking is required, even with blocking and preemption procedures built in. Nevertheless, large-scope LRs are justified because they facilitate the unavoidable process of large-scale semi-automatic lexical acquisition. We also argue that the place of LRs in the computational process is a complex issue.

1 Introduction

This paper deals with the discovery, representation, and use of lexical rules (LRs) in the process of large-scale semi-automatic computational lexicon acquisition. LRs are viewed as a means to minimize the need for costly lexicographic heuristics, to reduce the number of lexicon entry types, and generally to make the acquisition process faster and cheaper. The findings reported here have been implemented and tested on the basis of Spanish and English business- and finance-related corpora.

The central idea of our approach - that there are systematic paradigmatic meaning relations between lexical items, such that, given an entry for one such item, other entries can be derived automatically - is certainly not novel. In modern times, it has been reintroduced into linguistic discourse by the Meaning-Text group in their work on lexical functions (see, for instance, (Mel'čuk, 1979)). It has been lately incorporated into computational lexicography in (Atkins, 1991), (Ostler and Atkins, 1992), (Briscoe and Copestake, 1991), (Copestake and Briscoe, 1992), (Briscoe et al., 1993).

Pustejovsky (Pustejovsky, 1991, 1995) has coined an attractive term to capture these phenomena: one of the declared objectives of his 'generative lexicon' is a departure from sense enumeration to sense derivation with the help of lexical rules. The generative lexicon provides a useful framework for potentially infinite sense modulation in specific contexts (cf. (Leech, 1981), (Cruse, 1986)), due to type coercion (e.g., (Pustejovsky, 1993)) and similar phenomena. Most LRs in the generative lexicon approach, however, have been proposed for small classes of words and explain such grammatical and semantic shifts as +count to -count or -common to +common.

While shifts and modulations are important, we find that the main significance of LRs is their promise to aid the task of massive lexical acquisition.

Section 2 below outlines the nature of LRs in our approach and their status in the computational process. Section 3 presents a fully implemented case study, the morpho-semantic LRs. Section 4 briefly reviews the cost factors associated with LRs; the argument in it is based on another case study, the adjective-related LRs, which is especially instructive since it may mislead one into thinking that
LRs are unconditionally beneficial. 2 Nature of Lexical Rules 2.1 Ontological-Semantic Background Our approach to NLP can be characterized as ontology-driven semantics (see, e.g., (Nirenburg and Levin, 1992)). The lexicon for which our LRs are in- troduced is intended to support the computational specification and use of text meaning representa- tions. The lexical entries are quite complex, as they must contain many different types of lexical knowledge that may be used by specialist processes for automatic text analysis or generation (see, e.g., 32 (Onyshkevych and Nirenburg, 1995), for a detailed description). The acquisition of such a lexicon, with or without the assistance of LRs, involves a substan- tial investment of time and resources. The meaning of a lexical entry is encoded in a (lexieal) semantic representation language (see, e.g., (Nirenburg et al., 1992)) whose primitives are predominantly terms in an independently motivated world model, or ontol- ogy (see, e.g., (Carlson and Nirenburg, 1990) and (Mahesh and Nirenburg, 1995)). The basic unit of the lexicon is a 'superentry,' one for each citation form holds, irrespective of its lexi- cal class. Word senses are called 'entries.' The LR processor applies to all the word senses for a given superentry. For example, p~vnunciar has (at least) two entries (one could be translated as "articulate" and one as "declare"); the LR generator, when ap= plied to the superentry, would produce (among oth- ers) two forms of pronunciacidn, derived from each of those two senses/entries. The nature of the links in the lexicon to the ontol- ogy is critical to 'the entire issue of LRs. Represen- tations of lexical meaning may be defined in terms of any number of ontological primitives, called con= cepts. Any of the concepts in the ontology may be used (singly or in combination) in a lexical meaning representation. No necessary correlation is expected between syn- tactic category and properties and semantic or onto- logical classification and properties (and here we def- initely part company with syntax-driven semantics- see, for example, (Levin, 1992), (Dorr, 1993) -pretty much along the lines established in (Nirenburg and Levin, 1992). For example, although meanings of many verbs are represented through reference to on- tological EVENTs and a number of nouns are rep- resented by concepts from the OBJECT sublattice~ frequently nominal meanings refer to EVENTs and verbal meanings to OBJECTs. Many LRs produce entries in which the syntactic category of the input form is changed; however, in our model, the seman- tic category is preserved in many of these LRs. For example, the verb destroy may be represented by an EVENT, as will the noun destruction (naturally, with a different linking in the syntax-semantics in- terface). Similarly, destroyer (as a person) would be represented using the same event with the addi- tion of a HUMAN as a filler of the agent case role. This built-in transcategoriality strongly facilitates applications such as interlingual MT, as it renders vacuous many problems connected with category mismatches (Kameyama et al., 1991) and misalign- ments or divergences (Dorr, 1995), (Held, 1993) that plague those paradigms in MT which do not rely on extracting language-neutral text meaning represen- tations. This transcategoriality is supported by LRs. 
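As a rough illustration of this transcategoriality - our sketch, not the actual lexicon formalism of the system - the entries for destroy, destruction and destroyer can be represented as sharing one ontological EVENT, with the agentive noun adding a HUMAN role filler; the class and field names here are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Entry:
        sense: str                 # e.g. "destroy-V1"
        cat: str                   # syntactic category: "V", "N", ...
        concept: str               # ontological primitive, e.g. "DESTROY-EVENT"
        roles: dict = field(default_factory=dict)

    destroy_v1 = Entry("destroy-V1", "V", "DESTROY-EVENT")
    # The deverbal noun keeps the EVENT concept; only the syntactic
    # category and the syntax-semantics linking change.
    destruction_n1 = Entry("destruction-N1", "N", "DESTROY-EVENT")
    # The agentive noun adds a HUMAN filler for the agent case role.
    destroyer_n1 = Entry("destroyer-N1", "N", "DESTROY-EVENT",
                         roles={"agent": "HUMAN"})

    # A superentry is simply the set of senses under one citation form.
    superentry = {"destroy": [destroy_v1],
                  "destruction": [destruction_n1],
                  "destroyer": [destroyer_n1]}

The point of the sketch is that nothing about the ontological concept changes when an LR shifts the syntactic category, which is what makes category mismatches in interlingual MT largely vacuous.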
2.2 Approaches to LRs and Their Types In reviewing the theoretical and computational lin- guistics literature on LRs, one notices a number of different delimitations of LRs from morphology, syn- tax, lexicon, and processing. Below we list three parameters which highlight the possible differences among approaches to LRs. 2.2.1 Scope of Phenomena Depending on the paradigm or approach, there are phenomena which may be more-or less-appropriate for treatment by LRs than by syntactic transfor- mations, lexical enumeration, or other mechanisms. LRs offer greater generality and productivity at the expense of overgeneration, i.e., suggesting inappro- priate forms which need to be weeded out before ac- tual inclusion in a lexicon. The following phenomena seem to be appropriate for treatment with LRs: • Inflected Forms- Specifically, those inflectional phenomena which accompany changes in sub- categorization frame (passivization, dative al- ternation, etc.). • Word Formation- The production of derived forms by LR is illustrated in a case study be- low, and includes formation of deverbal nom- inals (destruction, running), agentive nouns (catcher). Typically involving a shift in syn- tactic category, these LRs are often less pro- ductive than inflection-oriented ones. Conse- quently, derivational LRs are even more prone to overgeneration than inflectional LRs. • Regular Polysemy - This set of phenomena includes regular polysemies or regular non- metaphoric and non-metonymic alternations such as those described in (Apresjan, 1974), (Pustejovsky, 1991, 1995), (Ostler and htkins, 1992) and others. 2.2.2 When Should LRs Be Applied? Once LRs are defined in a computational scenario, a decision is required about the time of application of those rules. In a particular system, LRs can be applied at acquisition time, at lexicon load time and at run time. • Acquisition Time - The major advantage of this strategy is that the results of any LR expansion can be checked by the lexicon acquirer, though at the cost of substantial additional time. Even with the best left-hand side (LHS) conditions (see below), the lexicon acquirer may be flooded by new lexical entries to validate. During the re- view process, the lexicographer can accept the generated form, reject it as inappropriate, or make minor modifications. If the LR is being used to build the lexicon up from scratch, then mechanisms used by Ostler and Atkins (Ostler and Atkins, 1992) or (Briscoe et al., 1995), such as blocking or preemption, are not available as 33 automatic mechanisms for avoiding overgenera- tion. • Lexicon Load Time - The LRs can be applied to the base lexicon at the time the lexicon is loaded into the computational system. As with run-time loading, the risk is that overgenera- tion will cause more degradation in accuracy than the missing (derived) forms if the LRs were not applied in the first place. If the LR inven- tory approach is used or if the LHS constraints are very good (see below), then the overgener- ation penalty is minimized, and the advantage of a large run-time lexicon is combined with ef- ficiency in look-up and disk savings. • Run Time - Application of LRs at run time raises additional difficulties by not supporting an index of all the head forms to be used by the syntactic and semantic processes. 
For example, if there is an LR which produces abusive-adj2 from abuse-v1, the adjectival form will be unknown to the syntactic parser, and its production would only be triggered by failure recovery mechanisms -- if direct lookup failed and the reverse morphological process identified abuse-v1 as a potential source of the entry needed.

A hybrid scenario of LR use is also plausible, where, for example, LRs apply at acquisition time to produce new lexical entries, but may also be available at run time as an error recovery strategy to attempt generation of a form or word sense not already found in the lexicon.

2.2.3 LR Triggering Conditions

For any of the LR application opportunities itemized above, a methodology needs to be developed for the selection of the subset of LRs which are applicable to a given lexical entry (whether base or derived). Otherwise, the LRs will grossly overgenerate, resulting in inappropriate entries, computational inefficiency, and degradation of accuracy. Two approaches suggest themselves.

• LR Itemization - The simplest mechanism of rule triggering is to include in each lexicon entry an explicit list of applicable rules. LR application can be chained, so that the rule chains are expanded, either statically, in the specification, or dynamically, at application time. This approach avoids any inappropriate application of the rules (overgeneration), though at the expense of tedious work at lexicon acquisition time. One drawback of this strategy is that if a new LR is added, each lexical entry needs to be revisited and possibly updated.

• Rule LHS Constraints - The other approach is to maintain a bank of LRs, and rely on their LHSs to constrain the application of the rules to only the appropriate cases; in practice, however, it is difficult to set up the constraints in such a way as to avoid over- or undergeneration a priori. Additionally, this approach (at least, when applied after acquisition time) does not allow explicit ordering of word senses, a practice preferred by many lexicographers to indicate relative frequency or salience; this sort of information can be captured by other mechanisms (e.g., using frequency-of-occurrence statistics). This approach does, however, capture the paradigmatic generalization that is represented by the rule, and simplifies lexical acquisition.

3 Morpho-Semantics and Constructive Derivational Morphology: a Transcategorial Approach to Lexical Rules

In this section, we present a case study of LRs based on constructive derivational morphology. Such LRs automatically produce word forms which are polysemous, such as the Spanish generador 'generator,' either the artifact or someone who generates. The LRs have been tested in a real world application, involving the semi-automatic acquisition of a Spanish computational lexicon of about 35,000 word senses.

We accelerated the process of lexical acquisition1 by developing morpho-semantic LRs which, when applied to a lexeme, produced an average of 25 new candidate entries. Figure 1 below illustrates the overall process of generating new entries from a citation form, by applying morpho-semantic LRs.

Generation of new entries usually starts with verbs. Each verb found in the corpora is submitted to the morpho-semantic generator which produces all its morphological derivations and, based on a detailed set of tested heuristics, attaches to each form an appropriate semantic LR
label; for instance, the nominal form comprador will be among the ones generated from the verb comprar and the semantic LR "agent-of" is attached to it. The mechanism of rule application is illustrated below.

The form list generated by the morpho-semantic generator is checked against three MRDs (Collins Spanish-English, Simon and Schuster Spanish-English, and Larousse Spanish) and the forms found in them are submitted to the acquisition process. However, forms not found in the dictionaries are not discarded outright because the MRDs cannot be assumed to be complete and some of these "rejected" forms can, in fact, be found in corpora or in the input text of an application system. This mechanism works because we rely on linguistic clues and therefore our system does not grossly overgenerate candidates.

1See (Viegas and Nirenburg, 1995) for the details on the acquisition process to build the core Spanish lexicon, and (Viegas and Beale, 1996) for the details on the conceptual and technological tools used to check the quality of the lexicon.

[Figure 1: Automatic Generation of New Entries, showing the path from a verb list file (e.g., comprar) through the morpho-semantic generator to a derived verb list file (e.g., comprar,v,LR1event; compra,n,LR2event), which is then split into accepted and rejected forms.]

[Figure 2: Partial entry for the Spanish lexical item comprar: comprar-V1, cat: V, dfn: "acquire the possession or right by paying or promising to pay", ex: "Anoche compró una nueva empresa", admin: jlongwel "18/1 15:42:44", with a syn-sem zone linking the syntactic subject and object to the agent (human) and theme (object) of the concept "buy".]

The Lexical Rule Processor is an engine which produces a new entry from an existing one, such as the new entry compra (Figure 3) produced from the verb entry comprar (Figure 2) after applying the LR2event rule.2 The acquirer must check the definition and enter an example, but the rest of the information is simply retained. The LEXICAL-RULES zone specifies the morpho-semantic rule which was applied to produce this new entry and the verb it has been applied to.

The morpho-semantic generator produces all predictable morphonological derivations with their morpho-lexico-semantic associations, using three major sources of clues: 1) word-forms with their corresponding morpho-semantic classification; 2) stem alternations and 3) construction mechanisms. The patterns of attachment include unification, concatenation and output rules.3 For instance beber can be derived into beb[e]dero, beb[e]dor, beb[i]do, beb[i]da, volver into vuelto, and comunicar into telecomunicación, etc. All affixes are assigned semantic features. For instance, the morpho-semantic rule LRpolarity_negative is at least attached to all verbs belonging to the -Aa class of Spanish verbs, whose initial stem is of the form 'con', 'tra', or 'fir' with the corresponding allomorph in- attached to it (incontrolable, intratable, ...).

2We used the typed feature structures (tfs) as described in (Pollard and Sag, 1987). We do not illustrate inheritance of information across partial lexical entries.

3The derivation of stem alternations is beyond the scope of this paper, and is discussed in (Viegas et al., 1996).

Figure 4 below shows the derivational morphology output for comprar, with the associated lexical rules which are later used to actually generate the entries.
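The generate-then-check mechanism just described can be sketched as follows; the suffix rules and the toy MRD word list are illustrative stand-ins for the actual resources, and all names are hypothetical:

    SUFFIX_RULES = [            # (suffix, POS of output, semantic LR label)
        ("a",    "n",   "LR2event"),
        ("ador", "n",   "LR2agent_of"),
        ("able", "adj", "LR3feasibility_att"),
    ]

    def derive(verb_stem: str):
        # Generate candidate (form, pos, lr) triples for a verb stem.
        for suffix, pos, lr in SUFFIX_RULES:
            yield verb_stem + suffix, pos, lr

    def check_against_mrds(candidates, mrd_wordlist: set):
        # Split candidates into accepted and rejected forms; rejected
        # forms are kept around in case they turn up in corpora later.
        accepted, rejected = [], []
        for form, pos, lr in candidates:
            bucket = accepted if form in mrd_wordlist else rejected
            bucket.append((form, pos, lr))
        return accepted, rejected

    mrds = {"compra", "comprador", "comprable"}     # toy dictionary
    accepted, rejected = check_against_mrds(derive("compr"), mrds)
    # accepted -> [("compra", "n", "LR2event"),
    #              ("comprador", "n", "LR2agent_of"),
    #              ("comprable", "adj", "LR3feasibility_att")]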
Lexical rules4 were applied to 1056 verb citation forms with 1263 senses among them. The rules helped acquire an average of 25 candidate new entries per verb sense, thus producing a total of 31,680 candidate entries.

From the 26 different citation forms shown in Figure 4, only 9 forms (see Figure 5), featuring 16 new entries, have been accepted after checking.5 For instance, comprable, adj, LR3feasibility-attribute1, is morphologically derived from comprar, and adds to the semantics of comprar the shade of meaning of possibility. In this example no forms rejected by the dictionaries were found in the corpora, and therefore there was no reason to generate these new entries. However, the citation forms supercompra, precompra, precomprado, autocomprar actually appeared in other corpora, so that entries for them could be generated automatically at run time.

4We developed about a hundred morpho-semantic rules, described in (Viegas et al., 1996).

5The results of the derivational morphology program output are checked against existing corpora and dictionaries, automatically.

[Figure 3: Partial entry for the Spanish lexical item compra, generated automatically: compra-N1, with the definition inherited from comprar-V1 ("acquire the possession or right by paying or promising to pay"), sem: "buy", admin: "11/12 20:33:02", and a LEX-RUL zone recording comprar-V1 and "LR2event".]

4 The Cost of Lexical Rules

It is clear by now that LRs are most useful in large-scale acquisition. In the process of Spanish acquisition, 20% of all entries were created from scratch by H-level lexicographers and 80% were generated by LRs and checked by research associates. It should be made equally clear, however, that the use of LRs is not cost-free. Besides the effort of discovering and implementing them, there is also the significant time and effort expenditure on the procedure of semi-automatic checking of the results of the application of LRs to the basic entries, such as those for the verbs.

The shifts and modulations studied in the literature in connection with the LRs and generative lexicon have also been shown to be not problem-free: sometimes the generation processes are blocked, or preempted, for a variety of lexical, semantic and other reasons (see (Ostler and Atkins, 1992)). In fact, the study of blocking processes, their view as systemic rather than just a bunch of exceptions, is by itself an interesting enterprise (see (Briscoe et al., 1995)).

Obviously, similar problems occur in real-life large-scale lexical rules as well. Even the most seemingly regular processes do not typically go through in 100% of all cases.
This makes the LR-affected entries not generable fully automatically and this is why each application of an LR to a qualifying phe- 36 Derived form II POS I Lexical Rule comprar v lrlevent compra n lr2eventSb compra n lr2theme_oLevent9b comprado n lr2reputation_attla comprador n lr2reputation_att2c comprador n lr2social_role_rel2c comprado n lr2theme_of_event la comprado axtj lr3event_telicla comprable adj lr3feasibility_ att 1 compradero adj lr3feasibility_att2c compradizo adj lr3feasibility_att3c comprado adj lr3reputation_ art 1 a comprador adj lr3reputation_att2c comprador adj lr3social_ role_relc malcomprar I[ v neg_evM_attitudel lr 1event malcomprado adj lr3event_telicla subcomprar I v part_oLrelation3 lrlevent subcomprado I adj lr3event_telicla autocomprar v agent_beneficiarylb lrlevent autocompra n lr2event8b autocompra n lr2theme_oLevent9b autocomprado adj lr3event_telicla recomprar v aspect_iter_semelfact 1 lrlevent recompra n lr2eventSb recompra n lr2theme_oLevent9b recomprado adj lr3event_telicla supercomprar v evM_attitude6 lrlevent supercompra n lr2eventSb supercompra n lr2theme_oLevent9b supercomprado adj lr3event_telicla precomprar v before_temporal_rel5 lrlevent precompra n Ir2eventSb precompra n lr2theme_oLevent9b precomprado adj lr3event_telicla deseomprar v opp_rel2 lrlevent descompra n lr2event8b descompra n lr2theme_of_event9b descomprado adj lr3event_telicla compraventa n lr2p_eventSb lr2s_eventSb Figure 4: Morpho-semantic Output. Derived form [[ POS [ Lexical Rule comprar v lrlevent comprado n lr2theme_oLevent 1 a compra n lr2event8b comprado n lr2reputation_attla comprador n lr2agent_of2c comprador n lr2sociaJ_role_rel2c compra n lr2theme_oLevent9b comprable adj lr3feasibility_att ] compradero adj lr3feasibility_att2c compradizo adj lr3feasibility_att3c I comprado adj lr3agent_ofla comprador adj lr3reputation_att2c comprador adj lr3social_role_rel2c comprado adj lr3event_telicla recomprar v aspectiter_semelfact I lrlevent , recompra n lr2event8b recompra n lr2theme_of_event9b compraventa l[ n [ lr2p_event8b lr2s_event8b Figure 5: Dictionary Checking Output. nomenon must be checked manually in the process of acquisition. Adjectives provide a good case study for that. The acquisition of adjectives in general (see (Raskin and Nirenburg, 1995)) results in the discovery and ap- plication of several large-scope lexical rules, and it appears that no exceptions should be expected. Ta- ble 1 illustrates examples of LRs discovered and used in adjective entries. The first three and the last rule are truly large- scope rules. Out of these, the -able rule seems to be the most homogeneous and 'error-proof.' Around 300 English adjectives out of the 6,000 or so, which occur in the intersection of LDOCE and the 1987-89 Wall Street Journal corpora, end in -able. About 87% of all the -able adjectives are like read- able: they mean, basically, something that can be read. In other words, they typically modify the noun which is the theme (or beneficiary, if animate) of the verb from which the adjective is derived: One can read the book.-The book is readable. The temptation to mark all the verbs as capable of assuming the suffix -able (or -ible) and forming adjectives with this type of meaning is strong, but it cannot be done because of various forms of blocking or preemption. Verbs like kill, relate, or necessitate do not form such adjectives comfortably or at all. 
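A minimal sketch of how such blocking and preemption might be encoded for the -able rule follows; the lists below are illustrative, not the ones actually used in the reported lexicon:

    BLOCKED = {"kill", "relate", "necessitate"}
    SUPPLETIVE = {"hear": ["audible"], "read": ["legible"]}

    def able_forms(verb: str) -> list:
        # Candidate -able/-ible adjectives for a verb; empty if blocked.
        if verb in BLOCKED:
            return []                        # preemption: no candidates at all
        forms = [verb + "able"]              # default productive form
        forms += SUPPLETIVE.get(verb, [])    # Latinate doublets, if any
        return forms

    print(able_forms("read"))   # ['readable', 'legible']
    print(able_forms("hear"))   # ['hearable', 'audible'] -- "hearable"
                                # would be weeded out during the
                                # semi-automatic checking step
    print(able_forms("kill"))   # []

Even with such lists in place, every generated candidate still needs the semi-automatic check described below, since the suffix rule says nothing yet about which semantic role the modified noun plays.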
Adjectives like audible or legible do conform to the formula above, but they are derived, as it were, from suppletive verbs, hear and read, respectively. More distressingly, however, a complete acquisition pro- cess for these adjectives uncovers 17 different com- binations of semantic roles for the nouns modified by the -ble adjectives, involving, besides the "stan- dard" theme or beneficiary roles, the agent, experi- encer, location, and even the entire event expressed by the verb. It is true that some of these combi- nations are extremely rare (e.g. perishable), and all together they account for under 40 adjectives. The point remains, however, that each case has to be checked manually (well, semi-automatically, because the same tools that we have developed for acquisi- tion are used in checking), so that the exact meaning of the derived adjective with regard to that of the verb itself is determined. It turns out also that, for a polysemous verb, the adjective does not necessarily inherit all its meanings (e.g., perishable again). 5 Conclusion In this paper, we have discussed several aspects of the discovery, representation, and implementation of LRs, where, we believe, they count, namely, in the actual process of developing a realistic-size, real-life NLP system. Our LRs tend to be large-scope rules, which saves us a lot of time and effort on massive lexical acquisition. Research reported in this paper has exhibited a finer grain size of description of morphemic seman- tics by recognizing more meaning components of non-root morphemes than usually acknowledged. The reported research concentrated on lexical rules for derivational morphology. The same mecha- nism has been shown, in small-scale experiments, to work for other kinds of lexical regularities, notably cases of regular polysemy (e.g., (Ostler and Atkins, 1992), (Apresjan, 1974)). Our treatment of transcategoriality allows for a lexicon superentry to contain senses which are not simply enumerated. The set of entries in a superen- try can be seen as an hierarchy of a few "original" senses and a number of senses derived from them according to well-defined rules. Thus, the argument between the sense-enumeration and sense-derivation schools in computational lexicography may be shown to be of less importance than suggested by recent lit- erature. Our lexical rules are quite different from the lex- ical rules used in lexical]y-based grammars (such as (GPSG, (Gazdar et al., 1985) or sign-based theories (HPSG, (Pollard and Sag, 1987)), as the latter can rather be viewed as linking rules and often deal with issues such as subcategorization. The issue of when to apply the lexical rules in a computational environment is relatively new. More studies must be made to determine the most bene- ficial place of LRs in a computational process. Finally, it is also clear that each LR comes at a cer- tain human-labor and computational expense, and if the applicability, or "payload," of a rule is limited, its use may not be worth the extra effort. We cannot say at this point that LRs provide any advantages in computation or quality of the deliverables. What 37 LRs Applied to Entry Type 1 Entry Type 2 Examples Comparative All scalars Event-Based Adjs Positive '.Degree Adj. 
Entry corresponding to one semantic role of the underlying verb Verbs taking the -able suffix to form an adj Comparative Degree Semantic Role Shifter Family of LR's -Able LR Human Organs LR Size Importance LR -Sealed LR Negative LR Event-Based Adjs Size adjs Size adjs VeryTrueScalars (age, size, price,) All adjs Adjs denoting general human size Basic size adjs True scalar adjectives Positive adjs Adj. entry corresponding to another semantic role of the underlying verb Adjs formed with the help of -able from these verbs (including "suppletivism" ) Adjs denoting the corresponding size of all or some external organs Figurative meanings of same adjectives Adj-scale(d) good-better big-bigger abusive noticeable noticeable vulnerable undersized-l-2 buxom-l-2 big-l-2 modest- modest(ly)- -price(d)old -old-age Corresponding noticeable Negative adjectives unnoticeable Table 1: Lexical Rules for Adjectives. we do know is that, when used justifiably and main- tained at a large scope, they facilitate tremendously the costly but unavoidable process of semi-automatic lexical acquisition. 6 Acknowledgements This work has been supported in part by Depart- merit of Defense under contract number MDA-904- 92-C-5189. We would like to thank Margarita Gon- zales and Jeff Longwell for their help and implemen- tation of the work reported here. We are also grate- ful to anonymous reviewers and the Mikrokosmos team from CRL. References Ju. D. Apresjan 1976 Regular Polysemy Linguistics vol 142, pp. 5-32. B. T. S. Atkins 1991 Building a lexicon:The con- tribution of lexicography In B. Boguraev (ed.), "Building a Lexicon", Special Issue, International Journal of Lexicography 4:3, pp. 167-204. E. J. Briscoe and A. Copestake 1991 Sense exten- sions as lexical rules In Proceedings of the IJCAI Workshop on Computational Approaches to Non- Literal Language. Sydney, Australia, pp. 12-20. E. J. Briscoe, Valeria de Paiva, and Ann Copestake (eds.) 1993 Inheritance, Defaults, and the Lexi- con. Cambridge: Cambridge University Press. E. J. Briscoe, Ann Copestake, and Alex Las- carides. 1995. Blocking. In P. Saint-Dizier and E.Viegas, Computational Lcxical Semantics. Cam- bridge University Press. Lynn Carlson and Sergei Nirenburg 1990. World Modeling for NLP. Center for Machine Trans- lation, Carnegie Mellon University, Tech Report CMU-CMT-90-121. Ann Copestake and Ted Briscoe 1992 Lexical operations in a unification-based framework. In J. Pustejovsky and S. Bergler (eds), Lexical Se- mantics and Knowledge Repres~:ntation. Berlin: Springer, pp. 101-119. D. A. Cruse 1986 Lexical Semantics Cambridge: Cambridge University Press. Bonnie Dorr 1993 Machine Translation: A View from the Lexicon Cambridge, MA: M.I.T. Press. Bonnie Dorr 1995 A lexical-semantic solution to the divergence problem in machine translation. In St-Dizier P. and Viegas E. (eds), Computational Lezical Semantics: CUP. Gerald Gazdar, E. Klein, Geoffrey Pullum and Ivan Sag 1985 Generalized Phrase Structure Gram- mar. Blackwell: Oxford. 38 Ulrich Heid 1993 Le lexique : quelques probl@mes de description et de repr@sentation lexieale pour la traduction automatique. In Bouillon, P. and Clas, A. (eds), La Traductique: AUPEL-UREF. M. Kameyama, R. Ochitani and S. Peters 1991 Re- solving Translation Mismatches With Information Flow. Proceedings of ACL'91. Geoffrey Leech 1981 Semantics. Cambridge: Cam- bridge University Press. Beth Levin 1992 Towards a Le~cical Organization of English Verbs Chicago: University of Chicago Press. Igor' Mel'~uk 1979. Studies in Dependency Syntax. 
Ann Arbor, MI: Karoma. Kavi Mahesh and Sergei Nirenburg 1995 A sit- uated ontology for practical NLP. Proceedings of the Workshop on Basic Ontological Issues in Knowledge Sharing, International Joint Confer- ence on Artificial Intelligence (IJCAI-95), Mon- treal, Canada, August 1995. Sergei Nirenburg and Lori Levin 1992 Syntax- Driven and Ontology-Driven Lexical Semantics In J. Pustejovsky and S. Bergler (eds), Lexical Se- mantics and Knowledge Representation. Berlin: Springer, pp. 5-20. Sergei Nirenburg and Victor Raskin 1986 A Metric for Computational Analysis of Meaning: Toward an Applied Theory of Linguistic Semantics Pro- ceedings of COLING '86. Bonn, F.R.G.: Univer- sity of Bonn, pp. 338-340 Sergei Nirenburg, Jaime Carbonell, Masaru Tomita, and Kenneth Goodman 1992 Machine Transla- tion: A Knowledge-Based Approach. San Mateo CA: Morgan Kaufmann Publishers. Boyan Onyshkevysh and Sergei Nirenburg 1995 A Lexicon for Knowledge-based MT Machine Translation 10: 1-2. Nicholas Ostler and B. T. S. Atkins 1992 Pre- dictable meaning shift: Some linguistic properties of lexical implication rules In J. Pustejovsky and S. Bergler (eds), Lexical Semantics and Knowledge Representation. Berlin: Springer, pp. 87-100. C. Pollard and I. Sag. 1987 An Information.based Approach to Syntax and Semantics: Volume 1 Fundamentals. CSLI Lecture Notes 13, Stanford CA. James Pustejovsky 1991 The generative lexicon. Computational Linguistics 17:4, pp. 409-441. James Pustejovsky 1993 Type coercion and [exical selection. In James Pustejovsky (ed.), Semantics and the Lexicon. Dordrecht-Boston: Kluwer, pp. 73-94. James Pustejovsky 1995 The Generative Lexicon. Cambridge, MA: MIT Press. Victor Raskin 1987 What Is There in Linguis- tic Semantics for Natural Language Processing? In Sergei Nirenburg (ed.), Proceedings of Natu- ral Language Planning Workshop. Blue Mountain Lake, N.Y.: RADC, pp. 78-96. Victor Raskin and Sergei Nirenburg 1995 Lexieal Semantics of Adjectives: A Microtheory of Adjec- tival Meaning. MCCS-95-28, CRL, NMSU, Las Cruces, N.M. Evelyne Viegas and Sergei Nirenburg 1995 Acquisi- tion semi-automatique du lexique. Proceedings of "Quatri~mes Journ@es scientifiques de Lyon", Lez- icologie Langage Terminologie, Lyon 95, France. Evelyne Viegas, Margarita Gonzalez and Jeff Long- well 1996 Morpho-semanlics and Constructive Derivational Morphology: a Transcategorial Ap- proach to Lexical Rules. Technical Report MCCS- 96-295, CRL, NMSU. Evelyne Viegas and Stephen Beale 1996 Multi- linguality and Reversibility in Computational Se- mantic Lexicons Proceedings of INLG'96, Sussex, England. 39 | 1996 | 5 |
A Synopsis of Learning to Recognize Names Across Languages Anthony F. Gallippi University of Southern California University Park, EEB 234 Los Angeles, CA 90089 USA gallippi @ aludra.usc.edu Abstract The development of natural language processing (NLP) systems that perform machine translation (MT) and information retrieval (IR) has highlighted the need for the automatic recognition of proper names. While vari- ous name recognizers have been developed, they suffer from being too limited; some only recognize one name class, and all are language specific. This work devel- ops an approach to multilingual name recognition that uses machine learning and a portable framework to simplify the porting task by maximizing reuse and au- tomation. 1 Introduction Proper names represent a unique challenge for MT and IR systems. They are not found in dictionaries, are very large in number, come and go every day, and ap- pear in many alias forms. For these reasons, list based matching schemes do not achieve desired performance levels. Hand coded heuristics can be developed to achieve high accuracy, however this approach lacks portability. Much human effort is needed to port the system to a new domain. A desirable approach is one that maximizes reuse and minimizes human effort. This paper presents an approach to proper name recognition that uses machine learning and a language independent framework. Knowledge incorporated into the framework is based on a set of measurable linguistic characteristics, or fea- tures. Some of this knowledge is constant across lan- guages. The rest can be generated automatically through machine learning techniques. Whether a phrase (or word) is a proper name, and what type of proper name it is (company name, loca- tion name, person name, date, other) depends on (1) the internal structure of the phrase, and (2) the surrounding context. Internal: 'qVlr. Brandon" Context: 'The new compan.~= Safetek, will make air bags." The person title "Mr." reliably shows "Mr. Brandon" to be a person name. "Safetek" can be recognized as a company name by utilizing the preceding contextual phrase and appositive "The new company,". The recognition task can be broken down into de- limitation and classification. Delimitation is the de- termination of the boundaries of the proper name, while classification serves to provide a more specific category. Original: Delimit: John Smith, chairman of Safetek, announced his resignation yesterday. <PN> John Smith </PN>, chairman of <PN> Safetek </PN> , announced his resignation yesterday. Classify: <person> John Smith </person>, chairman of <company> Safetek </company>, announced his resignation yesterday. During the delimit step, proper name boundaries are identified. Next, the delimited names are categorized. 2 Method The approach taken here is to utilize a data-driven knowledge acquisition strategy based on decision trees which uses contextual information. This differs from other approaches (Farwell et al., 1994; Kitani & Mita- mura, 1994; McDonald, 1993; Rau, 1992) which attempt to achieve this task by: (1) hand-coded heuris- tics, (2) list-based matching schemes, (3) human-gen- erated knowledge bases, and (4) combinations thereof. Delimitation occurs through the application of phrasal templates. These templates, built by hand, use logical operators (AND, OR, etc.) to combine features strongly associated with proper names, including: proper noun, ampersand, hyphen, and comma. 
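As an illustration, a toy rendering of such a delimitation template - our sketch, not one of the system's actual hand-built templates - treats a token as name-internal if it is a proper noun or a connective, and collects maximal runs of such tokens:

    def is_name_token(tok: dict) -> bool:
        # One toy template: proper nouns continue a name, as do the
        # ampersand and hyphen connectives (a fuller template would
        # also require the connective to be flanked by proper nouns).
        return tok["pos"] == "NNP" or tok["text"] in {"&", "-"}

    def delimit(tokens: list) -> list:
        # Return (start, end) spans of maximal runs of name tokens.
        spans, start = [], None
        for i, tok in enumerate(tokens + [{"pos": "", "text": ""}]):
            if is_name_token(tok) and start is None:
                start = i
            elif not is_name_token(tok) and start is not None:
                spans.append((start, i))
                start = None
        return spans

    toks = [{"text": "Smith", "pos": "NNP"}, {"text": "&", "pos": "CC"},
            {"text": "Co.", "pos": "NNP"}, {"text": "announced", "pos": "VBD"}]
    print(delimit(toks))    # [(0, 3)]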
In addi- tion, ambiguities with delimitation are handled by in- cluding other predictive features within the templates. To acquire the knowledge required for classifica- tion, each word is tagged with all of its associated fea- tures. Various types of features indicate the type of name: parts of speech (POS), designators, 357 Figure 1. Multilingual development system. morphology, syntax, semantics, and more. Designators are features which alone provide strong evidence for or against a particular name type. Examples include "Co." (company), "Dr." (person), and "County" (location). Features are derived through automated and manual techniques. On-line lists can quickly provide useful features such as cities, family names, nationalities, etc. Proven POS taggers (Farwell et al., 1994; Brill, 1992; Matsumoto et al., 1992) predetermine POS features. Other features are derived through statistical measures and hand analysis. A decision tree is built (for each name class) from the initial feature set using a recursive partitioning al- gorithm (Quinlan, 1986; Breiman et al., 1984) that uses the following function as its splitting criterion: -p*log2(p) - (1-p)*log2(1-p) (1) where p represents the proportion of names within a tree node belonging to the class for which the tree is built. The feature which minimizes the weighted sum of this function across both child nodes resulting from a split is chosen. A multitree approach was chosen over learning a single tree for all name classes because it allows for the straightforward association of features within the tree with specific name classes, and facili- tates troubleshooting. Once built, the trees are all ap- plied individually, and then the results are merged. Trees typically contained 100 or more nodes. In order to work with another language, the follow- ing resources are needed: (1) pre-tagged training text in the new language using same tags as before, (2) a tokenizer for non-token languages, (3) a POS tagger (plus translation of the tags to a standard POS conven- tion), and (4) translation of designators and lexical (list-based) features. Figure 1 shows the working development system. The starting point is training text which has been pre- tagged with the locations of all proper names. The tok- enizer separates punctuation from words. For non-to- ken languages (no spaces between words), it also sepa- rates contiguous characters into constituent words. The POS tagger (Brill, 1992; Farwell et. al., 1994; Matsu- moto et al, 1992) attaches parts of speech. The set of derived features is attached. Names are delimited using a set of POS based hand-coded templates. A de- cision tree is built based on the existing feature set and the specified level of context to be considered. The generated tree is applied to test data and scored. Hand analysis of results leads to the discovery of new fea- tures. The new features are added to the tokenized training text, and the process repeats. Language-specific modules are highlighted with bold borders. Feature translation occurs through the utilization of: on-line resources, dictionaries, atlases, bilingual speakers, etc. The remainder is constant across languages: a language independent core, and an optimally derived feature set for English. Parts of the development system that are executed by hand appear shaded. Everything else is automatic. 3 Experiment The system was first built for English and then ported to Spanish and Japanese. 
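Before turning to the individual experiments, the node-splitting step behind equation (1) can be sketched as follows; this is a reconstruction of the standard recursive-partitioning computation, not code from the described system, and the helper names are ours:

    from math import log2

    def impurity(p: float) -> float:
        # -p*log2(p) - (1-p)*log2(1-p), with 0*log2(0) taken as 0.
        return sum(-q * log2(q) for q in (p, 1 - p) if q > 0)

    def split_score(left: list, right: list) -> float:
        # Weighted impurity of a candidate split; each list holds booleans
        # marking whether an example belongs to the target name class.
        def node(n):
            return impurity(sum(n) / len(n)) * len(n) if n else 0.0
        return (node(left) + node(right)) / (len(left) + len(right))

    # The feature minimizing split_score over the examples at a node is
    # chosen, and the procedure recurses on both resulting children.
    examples = [True, True, False, False, True, False]
    print(split_score(examples[:3], examples[3:]))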
For English, the training text consisted of 50 messages obtained from the English Joint Ventures (E/V) domain MUC-5 corpus of the US Advanced Research Projects Agency (ARPA). This data was hand-tagged with the locations of companies, persons, locations, dates, and "other". The test set con- sisted of 10 new messages from the same corpus. Experimental results were obtained by applying the generated trees to test texts. Proper names which are voted into more than one class are handled by choosing the highest priority class. Priorities are determined based on the independent accuracy of each tree. The metrics used were recall (R), precision (P), and an averaging measure, P&R, defined as: P&R = 2*P*R/(P+R) (2) Obtained results for English compare to the English re- suits of Rau (1992) and McDonald (1993). The 358 weighted average of P&R for companies, persons, lo- cations, and dates is 94.0% (see Table 2). The date grammar is rather small in comparison to other name classes, hence the performance for dates was perfect. Locations, by contrast, exhibited the low- est performance. This can be attributed mainly to: (I) locations are commonly associated with commas, which can create ambiguities with delimitation, and (2) locations made up a small percentage of all names in the training set, which could have resulted in overfit- ting of the built tree to the training data. Three experiments were conducted for Spanish. First, the English trees, generated from the feature set optimized for English, are applied to the Spanish text (E-E-S). In the second experiment, new Spanish- specific trees are generated from the feature set optimized for English and applied to the Spanish test text (S-E-S). The third experiment proceeds like the second, except that minor adjustments and additions are made to the feature set with the goal of improving performance (S-S-S). The additional resources required for the first Spanish experiment (E-E-S) are a Spanish POS tagger (Farwell et aL, 1994) and also the translated feature set (including POS) optimally derived for English. The second and third Spanish experiments (S-E-S, S-S-S) require in addition pre-tagged Spanish training text us- ing the same tags as for English. The additional features derived for S-S-S are shown in Table 1 (FN/LN=given/family name, NNP=proper noun, DE="de"). Only a few new features allows for significant performance improvement. Table 1. Spanish specific features for S-S-S. Type Feature Instances How many List Companies "IBM", "AT&T', ... 100 Keyword "del" (OF THE) 1 Template Person < FN DE LN > 1 Person < FN DE NNP > 1 Date < Num OF MM > 1 Date <Num OF MM OF Num> 1 The same three experiments are being conducted for Japanese. The first two, E-E-J and J-E-J, have been completed; J-J-J is in progress. Table 2 summarizes performance results and compares them to other work. Acknowledgments The author would like to offer special thanks and grati- tude to Eduard Hovy for all ofhis support, direction, and encouragement from the onset of this work. Thanks also to Kevin Knight for his early suggestions, and to the Information Sciences Institute for use of their facilities and resources. Table 2. Performance comparison to other work. 
System | Language | Class | R | P | P&R
Rau | English | Com | NA | 95 | NA
PNF (McDonald) | English | Com, Pers, Loc, Date | NA | NA | "Near 100%"
Panglyzer | Spanish | NA | NA | 80 | NA
MAJESTY | Japanese | Com | 84.3 | 81.4 | 82.8
MAJESTY | Japanese | Pers | 93.1 | 98.6 | 95.8
MAJESTY | Japanese | Loc | 92.6 | 96.8 | 94.7
MNR (Gallippi) | English | Com | 97.6 | 91.6 | 94.5
MNR (Gallippi) | English | Pers | 98.2 | 100 | 99.1
MNR (Gallippi) | English | Loc | 85.7 | 91.7 | 88.6
MNR (Gallippi) | English | Date | 100 | 100 | 100
MNR (Gallippi) | English | (Avg) | | | 94.0
MNR | Spanish | Com | 74.1 | 90.9 | 81.6
MNR | Spanish | Pers | 97.4 | 79.2 | 87.4
MNR | Spanish | Loc | 93.1 | 87.5 | 89.4
MNR | Spanish | Date | 100 | 100 | 100
MNR | Spanish | (Avg) | | | 89.2
MNR | Japanese | Com | 60.0 | 60.0 | 60.0
MNR | Japanese | Pers | 86.5 | 84.9 | 85.7
MNR | Japanese | Loc | 80.4 | 82.1 | 81.3
MNR | Japanese | Date | 90.0 | 94.7 | 92.3
MNR | Japanese | (Avg) | | | 83.1

References

Breiman, L., Friedman, J.H., Olshen, R.A., and Stone, C.J. 1984. Classification and Regression Trees. Wadsworth International Group.
Brill, E. 1992. A Simple Rule-Based Part of Speech Tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, ACL.
Farwell, D., Helmreich, S., Jin, W., Casper, M., Hargrave, J., Molina-Salgado, H., and Weng, F. 1994. Panglyzer: Spanish Language Analysis System. In Proceedings of the Conference of the Association of Machine Translation in the Americas (AMTA). Columbia, MD.
Kitani, T. and Mitamura, T. 1994. An Accurate Morphological Analysis and Proper Name Identification for Japanese Text Processing. In Transactions of Information Processing Society of Japan, Vol. 35, No. 3, pp. 404-413.
Matsumoto, Y., Kurohashi, S., Taegi, H. and Nagao, M. 1992. JUMAN Users' Manual Version 0.8, Nagao Laboratory, Kyoto University.
McDonald, D. 1993. Internal and External Evidence in the Identification and Semantic Categorization of Proper Names. In Proceedings of the SIGLEX workshop on "Acquisition of Lexical Knowledge from Text", pp. 32-43.
Quinlan, J.R. 1986. Induction of Decision Trees. In Machine Learning, pp. 81-106.
| 1996 | 50 |
An Application of WordNet to Prepositional Attachment Sanda M. Harabagiu University of Southern California Department of Electrical Engineering-Systems Los Angeles, CA 90089-2562 harabagi~usc.edu Abstract This paper presents a method for word sense disambiguation and coherence under- standing of prepositional relations. The method relies on information provided by WordNet 1.5. We first classify preposi- tional attachments according to semantic equivalence of phrase heads and then ap- ply inferential heuristics for understanding the validity of prepositional structures. 1 Problem description In this paper, we address the problem of disam- biguation and understanding prepositional attach- ment. The arguments of prepositional relations are automatically categorized into semantically equiva- lent classes of WordNet (Miller and Teibel, 1991) concepts. Then by applying inferential heuristics on each class, we establish semantic connections be- tween arguments that explain the validity of that prepositional structure. The method uses informa- tion provided by WordNet, such as semantic rela- tions and textual glosses. We have collected prepositional relations from the Wall Street Journal tagged articles of the PENN TREEBANK. Here, we focus on preposition of, the most frequently used preposition in the corpus. 2 Classes of prepositional relations Since most of the prepositional attachments obey the principle of locality (Wertmer, 1991), we consid- ered only the case of prepositional phrases preceded by noun or verb phrases. We scanned the corpus and filtered the phrase heads to create C, an ad hoc collection of sequences < noun prep noun > and < verb prep noun >. This collection is divided into classes of prepositional relations, using the following definitions: Definition 1: Two prepositional structures < noun1 prep noun2 > and < noun3 prep noun4 > belong to the same class if one of the following conditions holds: • noun1, and noun2 are hypernym/hyponym of noun3, and noun4 respectively, or • noun1, and noun2 have a common hyper- nym/hyponym and with noun3, and noun4, re- spectively. A particular case is when noun1 (noun2) and noun3 (noun4) are synonyms. Definition 2: Two prepositional structures <: verb1 prep noun1 > and < verb2 prep noun2 > be- long to the same class if one of the following condi- tions holds: • verb1, and noun1 are hypernym/hyponym of verb2, and noun2, respectively or • verb1, and noun1 have a common hyper- nym/hyponym with verb2, and noun2, respec- tively. A particular case is when the verbs or the nouns are synonyms, respectively. The main benefit and reason for grouping prepo- sitional relations into classes is the possibility to disambiguate the words surrounding prepositions. When classes of prepositional structures are iden- tified, two possibilities arise: 1. A class contains at least two prepositional se- quences from the collection g. In this case, all sequences in that class are disambiguated, be- cause for each pair (< nouni prep nounj > , < nounk prep nounq >), nouni and nounk (and nounj and nounq respectively) are in one of the following relations: (a) they are synonyms, and point to one synset that is their meaning. (b) they belong to synsets that are in hyper- nym/hyponym relation. (c) they belong to synsets that have a common hypernym/hyponym. In cases (a), (b) and (c), since words are as- sociated to synsets, their meanings are disam- biguated. The same applies for classes of prepo- sitional sequences < verb prep noun >. 
[Figure 1: WordNet application of prepositional selection constraints for "acquisition of company". Sense 1 of acquisition = {acquisition, acquiring, getting}, glossed "the act of contracting or assuming or acquiring possession of something", is linked (HR1) to the verb synset {buy, purchase, take}, glossed "obtain by purchase; acquire by means of a financial transaction" (HR3), whose hyponym {take over, buy out} is glossed "take over ownership of; of corporations and companies" (HR2). Sense 1 of company, an object of that verb, is a hyponym of {business, concern, business concern}, as is {corporation}.]

2. A class contains only one sequence. We disregard these classes from our study, since in this class it is not possible to disambiguate the words.

The collection C has 9511 < noun of noun > sequences, out of which 2158 have at least one of the nouns tagged as a proper noun. 602 of these sequences have both nouns tagged as proper nouns. Due to the fact that WordNet's coverage of proper nouns is rather sparse, only 34% of these sequences were disambiguated. Successful cases are < House of Representatives >, < University of Pennsylvania > or < Museum of Art >. Sequences that couldn't be disambiguated comprise < Aerospaciale of France > or < Kennedy of Massachusetts >. A small disambiguation rate of 28% covers the rest of the 1566 sequences relating a proper noun to a common noun. A successful disambiguation occurred for < hundreds of Californians > or < corporation of Vancouver >. Sequences like < aftermath of Iran-Contra > or < acquisition of Merryl Linch > weren't disambiguated. The results of the disambiguation of the rest of the 7353 sequences comprising only common nouns are more encouraging. A total of 473 classes were devised, out of which 131 had only one element, yielding a disambiguation rate of 72.3%. The number of elements in a class varies from 2 to 68.

Now that we found disambiguated classes of prepositional structures, we provide some heuristics to better understand why the prepositional relations are valid.

3 Selectional Heuristics on WordNet

In this section we focus on semantic connections between the words of prepositional structures. Consider for example acquisition of company. Figure 1 illustrates some of the relevant semantic connections that can be drawn from WordNet when analyzing this prepositional structure. We note that noun acquisition is semantically connected to the verb acquire, which is related to the concept {buy, purchase, take}, a hypernym of {take over, buy out}. Typical objects for buy out are corporations and companies, both hypernyms of concern. Thus, at a more abstract level, we understand acquisition of company as an action performed on a typical object. Such relations hold for an entire class of prepositional structures.

What we want is to have a mechanism that extracts the essence of such semantic connections, and be able to provide the inference that the elements of this class are all sequences of < nouni prep nounj >, with nounj always an object of the action described by nouni. Our approach to establish semantic paths is based on inferential heuristics on WordNet. Using several heuristics one can find common properties of a prepositional class. The classification procedure disambiguates both nouns as follows: the word acquisition has four senses in WordNet, but it is found in its synset number 1. The word company appears in its synset number 1.
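The class-membership test of Definition 1 can be sketched with the NLTK WordNet interface; note that this queries a modern WordNet release rather than the WordNet 1.5 used here (so sense numbering may differ), only direct hypernyms are checked, and the function names are ours:

    from nltk.corpus import wordnet as wn

    def related(s1, s2) -> bool:
        # True if two synsets are identical, in a direct
        # hypernym/hyponym relation, or share a direct hypernym.
        if s1 == s2:
            return True
        h1, h2 = set(s1.hypernyms()), set(s2.hypernyms())
        return s2 in h1 or s1 in h2 or bool(h1 & h2)

    def same_class(pair1, pair2) -> bool:
        # Definition 1 test for <noun1 of noun2> vs <noun3 of noun4>:
        # some sense of each head must be related to a sense of the
        # corresponding head in the other sequence.
        heads_ok = any(related(a, b)
                       for a in wn.synsets(pair1[0], "n")
                       for b in wn.synsets(pair2[0], "n"))
        deps_ok = any(related(a, b)
                      for a in wn.synsets(pair1[1], "n")
                      for b in wn.synsets(pair2[1], "n"))
        return heads_ok and deps_ok

    print(same_class(("acquisition", "company"), ("addition", "business")))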
The gloss of acquisition satisfies the prerequisite of HR1:

Heuristic Rule 1 (HR1) If the textual gloss of a noun concept begins with the expression "the act of" followed by the gerund of a verb, then the respective noun concept describes an action represented by the verb from the gloss.

This heuristic applies 831 times in WordNet, showing that nouns like accomplishment, dispatch or subsidization describe actions.

Table 1: Semantic relations for prepositional structures < N1 of N2 > from the Wall Street Journal articles of the PENN Treebank.

Nr.crt. | Features for < N1 > of < N2 > | Example
1 | N2 is the object of the action described by N1 | acquisition of company
2 | N2 is the agent of the action described by N1 | approval of authorities
3 | N1 is the agent of the action with object N2 | author of paper
4 | N1 is the agent of the action with purpose the action described by N2 | activists of support
5 | N1 is the object of an action whose agent is N2 | record of athlete
6 | N2 describes the action with the theme N1 | allegations of fraud
7 | N1 is the location of the activity described by N2 | place of business
8 | N1 describes an action occurring at the time described by N2 | acquisition of 1995
9 | N1 is the consequence of a phenomenon described by N2 | impact of earthquake
10 | N1 is the output of an action described by N2 | result of study

Thus acquisition is a description of any of the verbal expressions contract possession, assume possession and acquire possession. The role of company is recovered using another heuristic:

Heuristic Rule 2 (HR2) The gloss of a verb may contain multiple textual explanations for that concept, which are separated by semicolons. If one such explanation takes one of the forms:
• of noun1
• of noun1 and noun2
• of noun1 or noun2
then noun1 and noun2, respectively, are objects of that verb.

Heuristic HR2 applies 134 times in WordNet, providing objects for such verbs as generalize, exfoliate or laicize. The noun company is recognized as an object of the synset {take over, buy out}, and so is corporation. Both of them are hyponyms of {business, concern, business concern}, which thus fills in the object role. Because of that, both company and corporation from the gloss of {take over, buy out} are disambiguated and point to their first corresponding synsets. Due to the inheritance property, company is an object of any hypernyms of {take over, buy out}. One such hypernym, {buy, purchase, take}, also meets the requirements of HR3:
The method is appealing because it uses WordNet (which is publicly available and applicable to broad English) and is scalable. A plausible explanation of prepositional attachment may be provided and the lexical disambiguation of the phrase heads is possible. The method may be improved by using additional attachment locations as provided by the transformations proposed in (Brill and Resnik, 1994).

References

Eric Brill and Philip Resnik. 1994. A Rule-Based Approach to Prepositional Phrase Attachment Disambiguation. In Proceedings of COLING-94.

George Miller and Daniel Teibel. 1991. A proposal for lexical disambiguation. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 395-399, Washington, D.C.

Philip Resnik. 1995. Disambiguating Noun Groupings with Respect to WordNet Senses. In Proceedings of the Third Workshop on Very Large Corpora, pages 54-68, MIT, Cambridge, Massachusetts, June.

Stefan Wermter. 1991. Integration of Semantic and Syntactic Constraints for Structural Noun Phrase Disambiguation. In Proceedings of IJCAI-91, pages 1486-1491.
Towards Testing the Syntax of Punctuation

Bernard Jones*
Centre for Cognitive Science
University of Edinburgh
2 Buccleuch Place
Edinburgh EH8 9LW
United Kingdom
[email protected]

Abstract

Little work has been done in NLP on the subject of punctuation, owing mainly to a lack of a good theory on which computational treatments could be based. This paper describes early work in progress to try to construct such a theory. Two approaches to finding the syntactic function of punctuation marks are discussed, and procedures are described by which the results from these approaches can be tested and evaluated both against each other as well as against other work. Suggestions are made for the use of these results, and for future work.

1 Background

The field of punctuation has been almost completely ignored within Natural Language Processing, with perhaps the exception of the sentence-final full-stop (period). This is because there is no coherent theory of punctuation on which a computational treatment could be based. As a result, most contemporary systems simply strip out punctuation in input text, and do not put any marks into generated texts.

Intuitively, this seems very wrong, since punctuation is such an integral part of many written languages. If text in the real world (a newspaper, for example) were to appear without any punctuation marks, it would appear very stilted, ambiguous or infantile. Therefore it is likely that any computational system that ignores these extra textual cues will suffer a degradation in performance, or at the very least a great restriction in the class of linguistic data it is able to process.

Several studies have already shown the potential for using punctuation within NLP. Dale (1991) has shown the benefits of using punctuation in the fields of discourse structure and semantics, and Jones (1994) has shown in the field of syntax that using a grammar that includes punctuation yields around two orders of magnitude fewer parses than one which does not. Further work has been carried out in this area, particularly by Briscoe and Carroll (1995), to show more accurately the contribution that usage of punctuation can make to the syntactic analysis of text.

* This work was carried out under an award from the (UK) ESRC. Thanks are also due to Lex Holt, Henry Thompson, Ted Briscoe and anonymous reviewers.

The main problem with these studies is that there is little available in terms of a theory of punctuation on which computational treatments could be based, and so they have somewhat ad hoc, idiosyncratic treatments. The only account of punctuation available is that of Nunberg (1990), which although it provides a useful basis for a theory is a little too vague to be used as the basis of any implementation.

Therefore it seems necessary to develop a new theory of punctuation that is suitable for computational implementation. Some work has already been carried out, showing the variety of punctuation marks and their orthographic interaction (Jones, 1995), but this paper describes the continuation of this research to determine the true syntactic function of punctuation marks in text.

There are two possible angles to the problem of the syntactic function of punctuation: an observational one, and a theoretical one. Both approaches were adopted, in order to be able to evaluate the results of each approach against those of the other, and in the hope that the results of both approaches could be combined. Thus the approaches are described one after the other here.
2 Corpus-based Approach

The best data source for observation of grammatical punctuation usage is a large, parsed corpus. It ensures a wide range of real language is covered, and because of its size it should minimise the effect of any errors or idiosyncrasies on the part of editors, parsers and transcribers. Since these corpora are almost all hand-produced, some errors and idiosyncrasies are inevitable -- one important part of the analysis is therefore to identify possible instances of these, and if they are clear, to remove them from the results.

The corpus chosen was the Dow Jones section of the Penn Treebank (size: 1.95 million words). The bracketings were analysed so that each node with a punctuation mark as its immediate daughter is reported, with its other daughters abbreviated to their categories, as in (1) - (3); a sketch of this extraction appears after the rule list below.

(1) [NP [NP the following] :] ==> [NP = NP :]
(2) [S [PP In Edinburgh] , [S ...]] ==> [S = PP , S]
(3) [NP [NP Bob] , [NP ...] ,] ==> [NP = NP , NP ,]

In this fashion each sentence was broken down into a set of such category-patterns, resulting in a set of different category-patterns for each punctuation symbol, which were then processed to extract the underlying rule patterns which represent all the ways that punctuation behaves in this corpus, and are good indicators of how the punctuation marks might behave in the rest of language.

There were 12,700 unique category-patterns extracted from the corpus for the five most common marks of point punctuation, ranging from 9,320 for the comma to 425 for the dash. These were then reduced to just 137 underlying rule-patterns for the colon, semicolon, dash, comma and full-stop.

Even some of these underlying rule-patterns, however, were questionable, since their incidence is very low (maybe once in the whole corpus) or their form is so linguistically strange as to call into doubt their correctness (possibly idiosyncratic mis-parses), as in (4).

(4) [ADVP = PP , NP]

Therefore all the patterns were checked against the original corpus to recover the original sentences. The sentences for patterns with low incidence and those whose correctness was questionable were carefully examined to determine whether there was any justification for a particular rule-pattern, given the content of the sentence.

For example, the NP=NP:VP rule-pattern was removed since all the verb phrases occurring in this pattern were imperative ones, which can legitimately act as sentences (5). Therefore instances of this rule application were covered by the NP=NP:S rule-pattern. A detailed account of the removal of idiosyncratic, incorrect and exceptional rule-patterns, with justifications, is reported in (Jones, 1996).

(5) [...] the show's distributor, Viacom Inc, is giving an ultimatum: either sign new long-term commitments to buy future episodes or risk losing "Cosby" to a competitor.

After this further pruning procedure, the number of rule-patterns was reduced to just 79, more than half of which related to the comma. It was now possible to postulate some generalisations about the use of the various punctuation marks from this reduced set of rule-patterns. These generalised punctuation rules, described in more detail in (Jones, 1996), are given below for colons (6), semicolons (7), full-stops (8), dashes (9, 10), commas (11), basic quotation (12) and stress-markers (13-15).

(6)  X = X : {NP | S | ADJP}                X : {NP, S}
(7)  S = S ; S                              S : {NP, S, VP, PP}
(8)  T = T .
(9)  D = D -- D                             D : {NP, S, VP, PP, ADJP}
(10) E = E -- {NP | S | VP | PP} --         E : {NP, S}
(11) C = C , *        C = * , C             C : {NP, S, VP, PP, ADJP, ADVP}
(12) Q = " Q "                              Q : *
(13) Z = Z ?                                Z : *
(14) Y = Y !                                Y : *
(15) W = W ...                              W : *
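To make the category-pattern extraction of Section 2 concrete (the step that produced examples (1)-(3)), here is a minimal sketch over Penn-Treebank-style bracketings. It is a hypothetical reconstruction rather than the tooling actually used for the study: it assumes NLTK's `Tree` reader, a simple set of punctuation tags, and it abbreviates sister nodes to their bare category labels.

```python
from nltk import Tree

PUNCT_TAGS = {",", ":", ".", "``", "''"}   # Penn Treebank punctuation POS tags

def category_patterns(tree):
    """Yield (mother, daughters) for each node with a punctuation daughter,
    sisters abbreviated to their category labels."""
    for node in tree.subtrees():
        if node.label() in PUNCT_TAGS:      # skip the punctuation nodes themselves
            continue
        kids = [k.label() if isinstance(k, Tree) else k for k in node]
        if any(k in PUNCT_TAGS for k in kids):
            yield node.label(), kids

# A simplified bracketing in the spirit of example (2):
s = Tree.fromstring(
    "(S (PP (IN In) (NP (NNP Edinburgh))) (, ,) (S (NP (PRP it)) (VP (VBD rained))))")
for mother, kids in category_patterns(s):
    print(f"[{mother} = {' '.join(kids)}]")   # prints: [S = PP , S]
```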
3 A Theoretical Approach

The theoretical starting point is that punctuation seems to occur at a phrasal level, i.e. it comes immediately before or after a phrasal level lexical item (e.g. a noun phrase). However, this is a rather general definition, so we need to examine the problem more exactly.

Punctuation could occur adjacent to any complex structure. However, we want to prevent occurrences such as (16). Conversely, punctuation could occur only adjacent to maximal level phrases (e.g. NP, VP). However, this rules out correct cases like (17).

(16) The, new toy ...
(17) He does, surprisingly, like fish.

Clearly we need something stricter than the first approach, but more relaxed than the second. The notion of headedness seems to be involved, so we can postulate that only non-head structures can have punctuation attached. This system still does not rule out examples like (18), however, so further refinement is necessary. The answer seems to be to look at the level of head daughter and mother categories under X-bar theory (Jackendoff, 1977). Attachment of punctuation to the non-head daughter only seems to be legal when mother and head-daughter are of the same bar level (and indeed more often than not they are identical categories), regardless of what that bar level is.

(18) the, big, man

From this theoretical approach it appears that punctuation could be described as being adjunctive (i.e. those phrases to which punctuation is attached serve an adjunctive function). Furthermore, conjunctive uses of punctuation (19, 20), conventionally regarded as being distinct from other more grammatical uses (the adjunctive ones), can also be made to function via the theoretical principles formed here.

(19) dogs, cats, fish and mice
(20) most, or many, examples ...

4 Testing -- Work in Progress

The next stage of this research is to test the results of both these approaches to see if they work, and also to compare their results. Since the results of the two studies do not seem incompatible, it should prove possible to combine them, and it will be interesting to see if the results from using the combined approaches differ at all from the results of using the approaches individually. It will also be useful to compare the results with those of studies that have a less formal basis for their treatments of punctuation, e.g. (Briscoe and Carroll, 1995).

For this reason the best way to test the results of these approaches to punctuation's role in syntax is to incorporate them into otherwise identical grammars and study the coverage of the grammars in parsing and the quality and accuracy of the parses. For ease of comparison with other studies, the best parsing framework to use will be the Alvey Tools' Grammar Development Environment (GDE) (Carroll et al., 1991), which allows for rapid prototyping and easy analysis of parses.

The corpus of sentences to run the grammars over should ideally be large, and consist mainly of real text from external sources. To avoid dealing with idiosyncratic tagging of words, and over-complicated sentences, we shall follow Briscoe and Carroll (1995) rather than Jones (1994) and use 35,000 prepared sentences from the Susanne corpus rather than the Spoken English Corpus.
5 Further Work

The theoretical approach not only seems to confirm the reality of the generalised punctuation rules derived observationally, since they all seem to have an adjunctive nature, but it also gives us a framework with which those generalised rules could be included in proper, linguistically-based grammars.

Results of testing will show whether either of the approaches is better on its own, and how they perform when they are combined, and will, hopefully, show an improvement in performance over the ad-hoc methods used previously. The development of a theory of punctuation can then progress with investigations into the semantic function of punctuation marks, to ultimately form a theory that will be of great use to the NLP community.

References

Edward Briscoe and John Carroll. 1995. Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels. In Proceedings of the ACL/SIGPARSE 4th International Workshop on Parsing Technologies, pages 48-58, Prague.

John Carroll, Edward Briscoe and Claire Grover. 1991. A Development Environment for Large Natural Language Grammars. Technical Report 233, Cambridge University Computer Laboratory.

Robert Dale. 1991. Exploring the Role of Punctuation in the Signalling of Discourse Structure. In Proceedings of the Workshop on Text Representation and Domain Modelling, pages 110-120, Technical University Berlin.

Ray Jackendoff. 1977. X-bar Syntax: A Study of Phrase Structure. MIT Press, Cambridge, MA.

Bernard Jones. 1994. Exploring the Role of Punctuation in Parsing Real Text. In Proceedings of the 15th International Conference on Computational Linguistics (COLING-94), pages 421-425, Kyoto, Japan, August.

Bernard Jones. 1995. Exploring the Variety and Use of Punctuation. In Proceedings of the 17th Annual Cognitive Science Conference, pages 619-624, Pittsburgh, Pennsylvania, July.

Bernard Jones. 1996. Towards a Syntactic Account of Punctuation. To appear in Proceedings of the 16th International Conference on Computational Linguistics (COLING-96), Copenhagen, Denmark, August.

Geoffrey Nunberg. 1990. The Linguistics of Punctuation. CSLI Lecture Notes 18, Stanford, California.
Using Terminological Knowledge Representation Languages to Manage Linguistic Resources

Pamela W. Jordan
Intelligent Systems Program
University of Pittsburgh
Pittsburgh PA 15260
[email protected]

Abstract

I examine how terminological languages can be used to manage linguistic data during NL research and development. In particular, I consider the lexical semantics task of characterizing semantic verb classes and show how the language can be extended to flag inconsistencies in verb class definitions, identify the need for new verb classes, and identify appropriate linguistic hypotheses for a new verb's behavior.

1 Introduction

Problems with consistency and completeness can arise when writing a wide-coverage grammar or analyzing lexical data, since both tasks involve working with large amounts of data. Since terminological knowledge representation languages have been valuable for managing data in other applications, such as a software information system that manages a large knowledge base of plans (Devanbu and Litman, 1991), it is worthwhile considering how these languages can be used in linguistic data management tasks. In addition to inheritance, terminological systems provide a criterial semantics for links and automatic classification, which inserts a new concept into a taxonomy so that it directly links to concepts more general than it and more specific than it (Woods and Schmolze, 1992).

Terminological languages have been used in NLP applications for lexical representation (Burkert, 1995) and grammar representation (Brachman and Schmolze, 1991), and to assist in the acquisition and maintenance of domain-specific lexical semantics knowledge (Ayuso et al., 1987). Here I explore additional linguistic data management tasks. In particular I examine how a terminological language such as Classic (Brachman et al., 1991) can assist a lexical semanticist with the management of verb classes. In conclusion, I discuss ways in which terminological languages can be used during grammar writing.

Consider the tasks that confront a lexical semanticist. The regular participation of verbs belonging to a particular semantic class in a limited number of syntactic alternations is crucial in lexical semantics. A popular research direction assumes that the syntactic behavior of a verb is systematically influenced by its meaning (Levin, 1993; Hale and Keyser, 1987) and that any set of verbs whose members pattern together with respect to syntactic alternations should form a semantically coherent class (Levin, 1993). Once such a class is identified, the meaning component that the member verbs share can be identified. This gives further insight into lexical representation for the words in the class (Levin, 1993).

Terminological languages can support three important functions in this domain. First, the process of representing the system in a taxonomic logic can serve as a check on the rigor and precision of the original account. Once the account is represented, the terminological system can flag inconsistencies. Second, the classifier can identify an existing verb class that might explain an unassigned verb's behavior. That is, given a set of syntactically analyzed sentences that exemplify the syntactic alternations allowed and disallowed for that verb, the classifier will provide appropriate linguistic hypotheses. Third, the classifier can identify the need for new verb classes by flagging verbs that are not members of any existing, defined verb classes.
Together, these functions provide tools for the lexical semanticist that are potentially very useful.

The second and third of these three functions can be provided in two steps: (1) classifying each alternation for a particular verb according to the type of semantic mapping allowed for the verb and its arguments; and (2) either identifying the verb class that has the given pattern of classified alternations or using the pattern to form the definition of a new verb class.

2 Sentence Classification

The usual practice in investigating the alternation patterning of a verb is to construct example sentences in which simple, illustrative noun phrases are used as arguments of a verb. The sentences in (1) exemplify two familiar alternations of give.

(1) a. John gave Mary a book.
    b. John gave a book to Mary.

Such sentences exemplify an alternation that belongs to the alternation pattern of their verb.¹ I will call this the alternation type of the test sentence.

To determine the alternation type of a test sentence, the sentence must be syntactically analyzed so that its grammatical functions (e.g. subject, object) are marked. Then, given semantic feature information about the words filling those grammatical functions (GFs), and information about the possible argument structures for the verb in the sentence and the semantic feature restrictions on these arguments, it is possible to find the argument structures appropriate to the input sentence. Consider the sentences and descriptions shown below for pour:

(2) a. [Mary_subj] poured [Tina_obj] [a glass of milk_io].
    b. [Mary_subj] poured [a glass of milk_obj] for [Tina_ppo].

pour1: subj -> agent[volitional]
       obj  -> recipient[volitional]
       io   -> patient[liquid]

pour2: subj -> agent[volitional]
       obj  -> patient[liquid]
       ppo  -> recipient[volitional]

Given the semantic type restrictions and the GFs, pour1 describes (2a) and pour2, (2b). The mapping from the GFs to the appropriate argument structure is similar to lexical rules in the LFG syntactic theory, except that here I semantically type the arguments. To indicate the alternation types for these sentences, I call sentence (2a) a benefactive-ditransitive and sentence (2b) a benefactive-transitive.

Classifying a sentence by its alternation type requires linguistic and world knowledge. World knowledge is used in the definitions of nouns and verbs in the lexicon and describes high-level entities, such as events, and animate and inanimate objects. Properties (such as LIQUID) are used to define specialized entities. For example, the property NON-CONSUMABLE (SMALL CAPITALS indicate Classic concepts in my implementation) specializes a LIQUID-ENTITY to define PAINT and distinguish it from WATER, which has the property that it is CONSUMABLE. Specialized EVENT entities are used in the definition of verbs in the lexicon and represent the argument structures for the verbs.

The linguistic knowledge needed to support sentence classification includes the definitions of (1) verb types such as intransitive, transitive and ditransitive; (2) verb definitions; and (3) concepts that define the links between the GFs and verb argument structures as represented by events.

¹ In the examples that I will consider, and in most examples used by linguists to test alternation patterns, there will only be one verb; this is the verb to be tested.
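Before turning to the Classic encoding, here is a minimal sketch of the GF-to-argument-structure linking just described. It is hypothetical Python, not the author's LISP/Classic implementation: the verb senses for pour, their slot order, and the feature lexicon are written out directly, and a linking is accepted when every required argument slot is filled by a GF filler that satisfies the slot's semantic type restriction.

```python
# Semantic type restrictions per verb sense: GF -> (role, required type).
POUR_SENSES = {
    "pour1": {"subj": ("agent", "volitional"),
              "obj":  ("recipient", "volitional"),
              "io":   ("patient", "liquid")},
    "pour2": {"subj": ("agent", "volitional"),
              "obj":  ("patient", "liquid"),
              "ppo":  ("recipient", "volitional")},
}

# Toy lexicon of semantic features standing in for the world-knowledge taxonomy.
FEATURES = {"Mary": {"volitional"}, "Tina": {"volitional"},
            "a glass of milk": {"liquid"}}

def legal_linkings(gf_fillers, senses=POUR_SENSES):
    """Return the senses whose required arguments are all filled by
    GF fillers of the right semantic type (the legal-linking idea)."""
    legal = []
    for sense, slots in senses.items():
        if set(slots) == set(gf_fillers) and all(
                required in FEATURES.get(gf_fillers[gf], set())
                for gf, (_, required) in slots.items()):
            legal.append(sense)
    return legal

# (2a): ditransitive; only pour1 links legally.
print(legal_linkings({"subj": "Mary", "obj": "Tina", "io": "a glass of milk"}))
# (2b): transitive with a PP; only pour2 links legally.
print(legal_linkings({"subj": "Mary", "obj": "a glass of milk", "ppo": "Tina"}))
```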
Verb types (SUBCATEGORIZATIONS) are defined according to the GFs found in the sentence. For example, (2a) classifies as DITRANSITIVE and (2b) as a specialized TRANSITIVE with a PP. Once the verb type is identified, verb definitions (VERBs) are needed to provide the argument structures. A VERB can have multiple senses which are instances of EVENTs; for example the verb "pour" can have the senses pour or prepare, with the required arguments shown below.² Note that pour1 and pour2 in (2) are subcategorizations of prepare.

pour:    pourer[volitional]
         pouree[inanimate-container]
         poured[inanimate-substance]

prepare: preparer[volitional]
         preparee[liquid]
         prepared[volitional]

For a sentence to classify as a particular ALTERNATION, a legal linking must exist between an EVENT and the SUBCATEGORIZATION. Linking involves restricting the fillers of the GFs in the SUBCATEGORIZATION to be the same as the arguments in an EVENT. In Classic, the same-as restriction is limited so that either both attributes must be filled already with the same instance or the concept must already be known as a LEGAL-LINKING. Because of this I created a test (written in LISP) to identify a LEGAL-LINKING. The test inputs are the sentence predicate and GF fillers arranged in the order of the event arguments against which they are to be tested. A linking is legal when at least one of the events associated with the verb can be linked in the indicated way, and all the required arguments are filled.

Once a sentence passes the linking test, and classifies as a particular ALTERNATION, a rule associated with the ALTERNATION classifies it as a specialization of the concept. This causes the EVENT arguments to be filled with the appropriate GF fillers from the SUBCATEGORIZATION. A side-effect of the alternation classification is that the EVENT classifies as a specialized EVENT and indicates which sense of the verb is used in the sentence.

3 Semantic Class Classification

The semantic class of the verb can be identified once the example sentences are classified by their alternation type. Specialized VERB-CLASSes are defined by their good and bad alternations. Note that VERB defines one verb whereas VERB-CLASS describes a set of verbs (e.g. the spray/load class). Which ALTERNATIONs are associated with a VERB-CLASS is a matter of linguistic evidence; the linguist discovers these associations by testing examples for grammaticality. To assist in this task, I provide two tests, have-instances-of and have-no-instances-of.

² For generality in the implementation, I use arg1 ... argn for all event definitions instead of agent ... patient or preparer ... preparee.

The have-instances-of test for an ALTERNATION searches a corpus of good sentences or bad sentences and tests whether at least one instance of the specified ALTERNATION, for example a benefactive-ditransitive, is present. A bad sentence with all the required verb arguments will classify as an ALTERNATION despite the ungrammatical syntactic realization, while a bad sentence with missing required arguments will only classify as a SUBCATEGORIZATION. The have-no-instances-of test for a SUBCATEGORIZATION searches a corpus of bad sentences and tests whether at least one instance of the specified SUBCATEGORIZATION, for example TRANSITIVE, is present as the most specific classification.

4 Discussion

The ultimate test of this approach is in how well it will scale up. The linguist may choose to add knowledge as it is needed or may prefer to do this work in batches.
To support the batch approach, it may be useful to extract detailed subcategorization information from English learner's dictionaries. Also it will be necessary to decide what semantic features are needed to restrict the fillers of the argument structures. Finally, there is the problem of collecting complete sets of example sentences for a verb. In general, a corpus of tagged sentences is inadequate since it rarely includes negative examples and is not guaranteed to exhibit the full range of alternations. In applications where a domain-specific corpus is available (e.g. the Kant MT project (Mitamura et al., 1993)), the full range of relevant alternations is more likely. However, the lack of negative examples still poses a problem and would require the project linguist to create appropriate negative examples or manually adjust the class definitions for further differentiation.

While I have focused on a lexical research tool, an area I will explore in future work is how classification could be used in grammar writing. One task for which a terminological language is appropriate is flagging inconsistent rules. When writing and maintaining a large grammar, inconsistent rules are one type of grammar-writing bug that occurs. For example, the following three rules are inconsistent, since feature1 of NP and feature1 of VP would not unify in rule 1 given the values assigned in 2 and 3.

1) S  -> NP VP    <NP feature1> = <VP feature1>
2) NP -> det N    <N feature1> = +    <NP> = <N>
3) VP -> V        <V feature1> = -    <VP> = <V>

5 Conclusion

I have shown how a terminological language, such as Classic, can be used to manage lexical semantics data during analysis with two minor extensions. First, a test to identify LEGAL-LINKINGs is necessary since this cannot be directly expressed in the language, and second, set membership tests, have-instances-of and have-no-instances-of, are necessary since this type of expressiveness is not provided in Classic. While the solution of several knowledge acquisition issues would result in a friendlier tool for a linguistics researcher, the tool still performs a useful function.

References

Damaris M. Ayuso, Varda Shaked, and Ralph Weischedel. 1987. An environment for acquiring semantic information. In Proceedings of 25th ACL, pages 32-40.

Ronald J. Brachman and James Schmolze. 1991. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9:171-216.

Ronald J. Brachman, Deborah L. McGuinness, Peter F. Patel-Schneider, and Lori A. Resnik. 1991. Living with CLASSIC: When and how to use a KL-ONE-like language. In John F. Sowa, editor, Principles of Semantic Networks, pages 401-456. Morgan Kaufmann, San Mateo, CA.

Gerrit Burkert. 1995. Lexical semantics and terminological knowledge representation. In Patrick Saint-Dizier and Evelyne Viegas, editors, Computational Lexical Semantics. Cambridge University Press.

Premkumar Devanbu and Diane J. Litman. 1991. Plan-based terminological reasoning. In James F. Allen, Richard Fikes, and Erik Sandewall, editors, KR '91: Principles of Knowledge Representation and Reasoning, pages 128-138. Morgan Kaufmann, San Mateo, CA.

K. L. Hale and S. J. Keyser. 1987. A view from the middle. Center for Cognitive Science, MIT. Lexicon Project Working Papers 10.

B. Levin. 1993. English verb classes and alternations: a preliminary investigation. University of Chicago Press.

T. Mitamura, E. Nyberg, and J. Carbonell. 1993.
Automated corpus analysis and the acquisition of large, multi-lingual knowledge bases for MT. In Proceedings of TMI-93.

William A. Woods and James G. Schmolze. 1992. The KL-ONE family. In Fritz Lehmann, editor, Semantic Networks in Artificial Intelligence, pages 133-177. Pergamon Press, Oxford.
Transitivity and Foregrounding in News Articles: experiments in information retrieval and automatic summarising

Roderick Kay and Ruth Aylett
Information Technology Institute
University of Salford
Manchester M5 4WT
United Kingdom
{rnk,R.Aylett}@iti.salford.ac.uk

Abstract

This paper describes an on-going study which applies the concept of transitivity to news discourse for text processing tasks. The complex notion of transitivity is defined and the relationship between transitivity and information foregrounding is explained. A sample corpus of news articles has been coded for transitivity. The corpus is being used in two text processing experiments.

1 Introduction

The basic hypothesis of this study is that the degree of transitivity associated with a clause indicates the level of importance of a clause in a narrative text. For this assumption to form the basis of a practical implementation, transitivity must be objectively defined and the definition must be able to be processed automatically.

The notion of transitivity clearly has many implications for text processing, in particular information retrieval and automatic summarising, because it can be used to grade information in a document according to importance. In an information retrieval context, it means that a transitivity index could influence a decision about the relevance of a document to a query. In automatic summarising, it means that less important information could be sieved according to transitivity, leaving only the most important information to form the basis of a summary.

News discourse was chosen because it is narrative based and therefore broadly applicable to the notion of transitivity. There has also been extensive research in the structural characteristics of this text type (Duszak, 1995; Kay and Aylett, 1994; Bell, 1991). However, the study poses a challenge in the sense that the notion of transitivity has previously been exemplified with relatively simple sentences presenting action sequences. A central question is how well the concept can be transferred to a domain which, although narrative based, diverges into commentary and analysis.

2 Definition of Transitivity

Transitivity is usually considered to be a property of an entire clause (Hopper and Thompson, 1980). It is, broadly, the notion that an activity is transferred from an agent to a patient. It is therefore inherently linked with a clause containing two participants in which an action is highly effective. The concept of transitivity has been defined in terms of the following parameters:

                        HIGH                     LOW
A. Participants         2 or more participants   1 participant
B. Kinesis              action                   non-action
C. Aspect               telic                    atelic
D. Punctuality          punctual                 non-punctual
E. Volitionality        volitional               non-volitional
F. Affirmation          affirmative              negative
G. Mode                 realis                   irrealis
H. Agency               A high in potency        A low in potency
I. Affectedness of O    O totally affected       O not affected
J. Individuation of O   O highly individuated    O non-individuated

Each component of transitivity contributes to the overall effectiveness or 'intensity' with which an action is transferred from one participant to another.

A. There must be at least two participants for an action to be transferred.
B. Transferable actions can be contrasted with non-transferable states, e.g. he pushed her; he thought about her.
C. An action is either wholly or partially completed according to whether it is telic or atelic, e.g. I played the piano; I am playing the piano.
D. Punctual actions have no transitional phase between start and end point, having a greater effect on their patients, e.g. he kicked the door; he opened the door.
E. An action is more effective if it is volitional, e.g. he bought the present; he forgot the present.
F. An affirmative action has greater transitivity than a negative action, e.g. he called the boy; he didn't call the boy.
G. An action which is realis (occurring in the real world) is more effective than an action which is irrealis (occurring in a non-real contingency world), e.g. they attacked the enemy; they might attack the enemy.
H. Participants high in agency transfer an action more effectively than participants low in agency, e.g. he shocked me; the price shocked me.
I. A patient is wholly or partially affected, e.g. I washed the dishes; I washed some of the dishes.
J. Individuation refers to the distinctiveness of the object from the agent and of the object from its own background. The following properties contribute to the individuation of an object:

    INDIVIDUATED             NON-INDIVIDUATED
    proper                   common
    human, animate           inanimate
    concrete                 abstract
    singular                 plural
    count                    mass
    referential, definite    non-referential

Based on these components, clauses can be classified as more or less transitive. In English, as a whole, transitivity is indicated by a cluster of features associated with a clause.

The concept of foreground and background information is based on the idea that in narrative discourse some parts are more essential than others. Certain sections of a narrative are crucially linked with the temporal sequence of events which form the backbone of a text. This material is normally foregrounded. In contrast, the contextual information relating to characters and environment is backgrounded.
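As a minimal illustration of how the Hopper and Thompson parameters can be operationalised, the sketch below scores a hand-coded clause by counting how many of the ten components take their high value, which is one plausible reading of the paper's later statement that clause weights are "numerically equivalent to the number of transitivity features". The feature names and the idea of representing a coded clause as a dictionary are assumptions for illustration; in the study itself the coding is done by hand.

```python
# The ten transitivity components of Hopper and Thompson (1980).
COMPONENTS = ["participants", "kinesis", "aspect", "punctuality",
              "volitionality", "affirmation", "mode", "agency",
              "affectedness_of_O", "individuation_of_O"]

def transitivity_score(clause):
    """Number of components coded 'high' in a hand-coded clause."""
    return sum(1 for c in COMPONENTS if clause.get(c) == "high")

# An illustrative hand-coding of a highly transitive clause.
clause = {"participants": "high", "kinesis": "high", "aspect": "high",
          "punctuality": "high", "volitionality": "high",
          "affirmation": "high", "mode": "high", "agency": "high",
          "affectedness_of_O": "high", "individuation_of_O": "high"}
print(transitivity_score(clause))  # 10
```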
2 or more participants 1 participant action non-action telic atelic punctual non-punctual volitional non-volitional affirmative negative realis irrealis A high in potency A low in potency 0 totally affected 0 not affected 0 highly individuated 0 non-individuated 369 on their patients, e.g. he kicked the door; he opened the door. E. An action is more effective if it is volitional, e.g. he bought the present; he forgot the present. F. An affirmative action has greater transitivity than a negative action, e.g. he called the boy; he didn't call the boy. G. An action which is realis (occurring in the real world) is more effective than an action which is ir- realis (occurring in a non-real contingency world), e.g. they attacked the enemy; they might attack the enemy. H. Participants high in agency transfer an action more effectively than participants low in agency, e.g. he shocked me; the price shocked me. I. A patient is wholly or partially affected, e.g. I washed the dishes; I washed some of the dishes. J. Individuation refers to the distinctiveness of the object from the agent and of the object from its own background. The following properties contribute to the individuation of an object. INDIVIDUATED NON-INDIVIDUATED proper common human, animate inanimate concrete abstract singular plural count mass referential, definite non-referential Based on these components, clauses can be clas- sified as more or less transitive. In English, as a whole, transitivity is indicated by a cluster of fea- tures associated with a clause. The concept of foreground and background infor- mation is based on the idea that in narrative dis- course some parts are more essential than others. Certain sections of a narrative are crucially linked with the temporal sequence of events which form the backbone of a text. This material is normally foregrounded. In contrast, the contextual informa- tion relating to characters and environment is back- grounded. 3 Transitivity and Text Processing The relationship between transitivity and fore- grounding has potential for text processing, in par- ticular, information retrieval and automatic sum- marising. If it is possible to identify which clauses are central to a text, the information can be used to contribute to a relevance assessment or as the basis for a derived summary. 3.1 Information Retrieval The standard model of text retrieval is based on the identification of matching query/document terms which are weighted according to their distribution throughout a text database. This model has also been enhanced by a number of linguistic techniques: expansion of query/document terms according to thesaurus relations, synonyms, etc. The proposal for this study is to code matching query/document terms for the transitivity value of the clause in which they occur, as a starting point for producing comparative term weights based on linguistic features. Terms which are less central to a discourse will, on this basis, be given lower scores because they occur in low transitivity clauses. The net result will be to produce a document ranking order which more closely represents the importance of the documents to a user. There is also potential for producing a transitivity index for an entire doc- ument as well as for individual clauses so that this measure could also feature in a relevance assessment. 
3.2 Automatic Summarising

The fundamental task in automatic summarising is to identify the most important sections of a text so that these can be extracted and possibly modified to provide a summary. The notion of transitivity provides a measure against which clauses can be scored. The highest scoring clauses, either above a threshold value or on a comparative basis, can then be identified as the basic clauses of a summary. These can either be extracted raw or in context, with pronominal references resolved and any logical antecedents included. A previous study in this area (Decker, 1985) extracted clauses and sentences on the basis of syntactic patterns which broadly correlate with certain features of transitivity. The present study focuses on the semantic features of transitivity rather than associated syntax.

4 Experimental Procedure

The feasibility of using transitivity as a tool in text processing will be assessed by two experiments using the same corpus. Clauses in the corpus must be hand-coded for transitivity. The difficulties encountered in this process will determine the basis for future automation. For the information retrieval task, only the clauses containing query/document matching terms will be coded for transitivity. For the automatic summarising experiment all sentences within a text will be coded.

For the information retrieval experiment, ten queries are put to a newspaper database: a demonstration system running on WAIS (Wide Area Information Server), carrying two weeks of articles from the Times newspaper from 1993 and 1994. The results of the queries are downloaded in their initial ranked order (ranked by a host ranking algorithm) and re-ranked by a serial batch processor written in C++. The processor identifies the transitivity features associated with each matching clause and produces a ranked output of documents based on the weights assigned to each clause in which the search terms occur. The weights assigned to each clause are numerically equivalent to the number of transitivity features associated with each clause. The total transitivity weight for an entire document is the sum of clause weights normalised by document length. The output dataset consists of a total of 185 news articles, an average of 18.5 per batch. Each set of articles is ranked by volunteers. The articles are ranked for their degree of relevance to a query in two ways: on a scale of one to ten; and comparatively, by the degree of relevance of an article against all other articles. All terms are treated as equal so that discrimination between documents is based purely on accumulative transitivity scores. The performance of the ranking technique is evaluated according to two precision measures: the Spearman rank correlation coefficient (rho) and the CRE (Coefficient of Ranking Effectiveness) (Noreault et al., 1977).

For the automatic summarising experiment, ten articles are taken from the corpus at random. Summaries are produced by extracting clauses according to transitivity scores. In the initial implementation, transitivity scores will be equal to the number of transitivity features associated with the main clause of each sentence. The selection of sentences for a summary will be based, initially, on comparative transitivity scores and a reduction factor which will determine the number of sentences selected based on the length of a document.
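The sentence-selection step just described might look like the following sketch: each sentence carries the transitivity score of its main clause, a reduction factor fixes how many sentences survive, and the survivors are emitted in their original order. The particular reduction-factor value and the tie-breaking behaviour are illustrative assumptions.

```python
def summarise(sentences, reduction=0.25):
    """Select the top-scoring sentences (score of the main clause),
    keeping a 'reduction' fraction of the document, in text order."""
    n_keep = max(1, round(len(sentences) * reduction))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: sentences[i][1], reverse=True)
    keep = sorted(ranked[:n_keep])          # restore original text order
    return [sentences[i][0] for i in keep]

# sentences: (text, transitivity score of the main clause)
article = [("The rebels seized the airport on Friday.", 9),
           ("Observers had long expected trouble.", 4),
           ("Officials might review security policy.", 3),
           ("Troops recaptured the runway overnight.", 8)]
print(summarise(article, reduction=0.5))
# ['The rebels seized the airport on Friday.',
#  'Troops recaptured the runway overnight.']
```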
Summaries will be analysed and assessed by volunteers for coverage, in terms of the original text, and comprehensibility as a separate text. The summaries will be compared against summaries of the same texts compiled by the syntactic technique mentioned previously, and also against summaries consisting of the first paragraph of each news article.

The study is currently at the end of the coding stage for the information retrieval experiment.

References

A. Bell. 1991. The language of news media. Basil Blackwell, Oxford.

N. Decker. 1985. The use of syntactic clues in discourse processing. In Proceedings of the 23rd meeting of the ACL, pages 315-323.

A. Duszak. 1995. On variation in news-text prototypes: some evidence from English, Polish, and German. Discourse Processes, 19: 465-483.

G. Green. 1979. Organization, goals and comprehensibility in narratives: news writing, a case study. Technical Report 132, The Centre for the Study of Reading, University of Illinois at Urbana-Champaign.

P. Hopper, S. Thompson. 1980. Transitivity in grammar and discourse. Language, 56: 251-299.

R. Kay, R. Aylett. 1994. A text grammar for news reports. Working Papers in Language and Linguistics, No 2, European Studies Research Institute, University of Salford.

T. Noreault, M. Koll, M. McGill. 1977. Automatic ranked output from Boolean searches in SIRE. Journal of the American Society for Information Science, 27(6): 333-339.
The Selection of the Most Probable Dependency Structure in Japanese Using Mutual Information

Eduardo de Paiva Alves
University of Electro-Communications
1-5-1 Chofugaoka Chofushi Tokyo Japan
[email protected]

Abstract

We use a statistical method to select the most probable structure or parse for a given sentence. It takes as input the dependency structures generated for the sentence by a dependency grammar, finds all triples of modifier, particle and modificant relations, calculates the mutual information of each relation and chooses the structure for which the product of the mutual information of its relations is the highest.

1 Introduction

Computer Aided Instruction (CAI) systems are important and effective tools, especially for teaching foreign languages. Many students of Japanese as a foreign language are aware of the Computer Assisted TEchnical Reading System (CATERS) that provides helpful information for reading texts in science and technology fields (Kano and Yamamoto, 1995).

One of the difficulties in learning Japanese lies in recognizing dependency relations in Japanese sentences. This is because the language allows relatively free word orders. Take an example from a leading newspaper (given here in its English translation):

We would like to expect a prompt study of the causes, based on a national investigation

To understand this sentence it is necessary to know that the word for "investigation" modifies "based" but not "expect", and that "based" modifies "study" but not "cause". CATERS is useful because it provides such information through several user-friendly functions. As effective as it is for foreign students, however, the texts in CATERS are fixed and the dependency structure of every sentence in them is all hand-coded. This inability to handle new text poses a serious problem in its general applicability and extensibility.

This paper describes a method for selecting the right or most probable structure for a Japanese sentence among multiple probable structures generated by Restricted Dependency Grammar (RDG) (Fukumoto, 1992). If this method works, then its results will be quite valuable for facilitating the development of new texts for CAI systems like CATERS.

2 Background

As pointed out earlier, the dependency relations of elements in Japanese sentences are fairly complicated due to relatively free word orders. RDG is designed to determine dependency relations among words and phrases in sentences. To do so, it classifies the phrases according to grammatical categories and syntactic attributes. However, it fails to reject semantically unacceptable dependency structures. The inevitable consequence is that RDG often produces multiple parses even for a simple sentence.

Kurohashi and Nagao (1993) try to determine the dependency relations of a sentence by means of sample sentences. When the sentence is structurally ambiguous, they determine its structure by comparing it to structurally similar patterns taken from a manually generated set of examples and calculating similarity values.

Our method, on the contrary, uses a statistical approach to select the most probable structure or parse of a given sentence. It takes as input dependency structures generated by RDG for a sentence, finds all of the modifier-particle-modificant relations, calculates their mutual information and chooses the structure for which the product of the mutual information of its relations is the highest.

In order to calculate the mutual information for any modifier-particle-modificant pattern, we use the Conceptual Dictionary (CD)¹ to build a taxonomic hierarchy of the modifiers which occur
In order to calculate the mutual information for any modifier-particle-modificant pattern, we use the Conceptual Dictionary: (CD) to build a tax- onomic hierarchy of the modifiers which occur 1The Co-occurrence Dictionary and Conceptual Dic- tionary used in the process are part of a set of machine readable :]apanese dictionaries compiled by the .Japan Electronic Dictionary Research Institute (EDIt, 1993). The Conceptual Dictionary is a set of graphs consisting of 400,000 concepts and a number of taxonomic as well as functional relations between them. The Co-occurrence Dictionary consist of a list of 1,100,000 dependency re- lations (modifier, particle and modificant) taken from a corpus. Each entry includes syntactic information, con- cept identifiers (a numerical code) and the number of occurrences in the corpus. 372 with the particle-modificant sub-pattern in the Co- occurrence Dictionary (COD). The mutual informa- tion for any pattern is the maximum mutual infor- mation between the sub-pattern and the concepts in the taxonomic hierarchy which generalize the modi- tier in the pattern. Resnik and Hearst (1993) use a similar approach to calculate preferences for prepositional phrase at- tachment. While they use data on word groups, our method directly uses word co-occurrence data to es- timate the preferences using the CD to identify the most adequate grouping for each relation. While Kurohashi and Nagao compare the sentence with a single sample of patterns, we use all occur- rehces of the pattern in COD to calculate the mu- tual information. Our approach automatically ex- tracts the occurrences from the dictionary as well as builds the taxonomic hierarchy. Unlike Kurohashi and Nagao (1993), which uses only verb and adjec- tive patterns, we cover all dependency relations. 3 Selecting the Most Probable Structure RDG identifies all possible dependency structures which consist of modifier-modificant relations be- tween elements in a sentence. The arcs in the fol- lowing example show modifier-modificant relations which can be combined into six different dependency structures. I I, , ,InJ , n&tional investig&tion based cause pro--~ s ~ ~ Our objective is to develop a method to automat- ically select the correct dependency structures accu- rately or at least those which have the highest prob- ability of being correct. We evaluate the various possible structures according to the mutual infor- mation between modifiers and particle-modificants. In some cases there is no particle and the modifi- cant directly precedes the modifier (see example in section 3.2). To calculate the mutual information for each relation, we obtain form the COD the con- ceptual identifiers (a numerical code) for the mod- ifiers that appear with the particle-modifica~t and the number of their occurrences in the corpus. If the pattern is not present, backing off, we search this in- formation for the modificant only. For each of those concept identifiers we obtain from the CD all gen- eralizers (concept identifiers that express a similar meaning in a more general way) and build a taxo- nomic hierarchy with them. Using the number of occurrences obtained, we calculate the mutual infor- mation for the concepts in the taxonomic hierarchy. We also build a taxonomic hierarchy for the modi- fier that appears with the particle-modificant in the sentence. 
Then comparing these two taxonomic hi- erarchies (one for the modifiers in the COD, one for the modifers in the sentence), we look for the con- cept identifier common to both hierarchies that has the highest mutual information. This is the mutual information for the relation itself. For each depen- dency structure we calculate a score by multiplying the mutual information for all ambiguous relations (the non-ambiguous do not contribute to the evalua- tion). The dependency structure with highest prob- ability of being correct is the one with the highest score. Since all structures have the same number of relations, this multiplication reflects the likelyhood of the structure. 3.1 The Algorithm The process described above is written in an algo- rithmic form as follows: 1. Select the ambiguous relations (those with more than one modificant) for each structure. 2. Search COD for the particle-nmdificant sub- pattern, in the corresponding positions. If there is no entry, search for the modificant only. 3. Obtain from the COD the concept identifiers for the modificant (there may be multiple mean- ings) and the concept identifiers with the num- ber of their occurrences in the corpus for the modifiers which occur with the particle- modificant pattern. 4. For each modificant concept identifer, build a taxonomic hierarchy with its modifiers using CD to find the generalizer for each concept iden- tifier. 5. Calculate the mutual information 2 for all the concept identifiers in the taxonomic hierarchies. 6. For the modifiers in the sentence, extract their concept identifiers from COD and build the tax- onomic hierarchies using CD to find the gener- alizers for each concept identifier. 7. For each relation (modifier-particle-modificant pattern), search the concept identifier that gen- eralizes the modifier word and has maximum nmtual information. This value is the mutual information for the relation. 8. For each dependency structure, multiply the mutual information of its ambiguous depen- dency relations to obtain the score for that structure. 9. Arrange the structures according to their scores. 2The mutual information tells how much information one outcome &ives about the other and is given by the formula: I(Wl, w2) ---- In kp-(-~) ] (1) 373 3.2 Examples The following figure shows the output from RDG for a given sentence. The arrows in the figure indicate the dependency relations. ~ ~ -- work people stress structure - lllIIOY~lOn progress grow worse The ambiguous relations are ~$i~g~ ~J~./v'C, and ~ A~-~ ¢) ~ "~. Accordingly the occurrences for the modificants in these relations (~O~O, ~ (, (, ~, and ~©7o) are extracted from COD, ob- taining a list of modifier concept identifiers with the number of their occurrences. Note that in the pat- tern ~ ( A and /~ ( ~ b l/7, the modificant pre- cedes the modifier. The following figure shows some modifiers for ~ ((work) with their number of oc- currences. person wom~n mother drive each person f~ctory wife f~ct worker 32 18 6 6 3 3 3 2 2 2 Next, the taxonomic hierarchy for each particle- modificant is built and the mutual information cal- culated for each concept identifier. An extra£t of the hierarchy for ~ ( is shown in the following figure. ~ (0.0)-~ ~2~ 2 n ~ pseudo-stilq life ~,~'~7o:TF~ 71 life ~.bstract product huma, n or similar /~ live body relative to action human # )~ (3.61) ~ (3.40) person force Next the generalizers for (~, A, and ;~ b P~) are searched in the hierarchies for their modificants to obtain the mutual information for the relations. 
For ~ (A (working person) it happened to be the concept A (person) itself with mutual information of 3.61. For ~ ( 5~ b l~ 5~ (working stress) the match occurred for ~ (force) giving a mutual information of 0.69. Multiply the mutual information for all the depen- dency relations in each structure. For the example sentence the mutual information for the ambiguous relations are as follows: ~-~.95 .~- --'~ ")©~'C'~]'z From this the algorithm selects the parse with highest score which is drawn in thick lines. The next figure shows the result for the first example sentence. 1.60 3.40 sudden relation deep heart disease pressure more than 10"/0 was 4 Results and Evaluation We have applied our method to 35 sentences taken from a leading newspaper and included with RDG software. The average number of dependency struc- tures per sentence is 8.68. The method we used se- lected the correct structures for 25 sentences. The correct structures for 8 sentences were found as the second most probable structure by the method. In another experiment, we parsed 70 sentences us- ing a grammar similar to the one used in Kurohashi and Nagao (1993). Our method selected the most likely relation among the multiple generated in 95~, of the cases. Although the size of the test data is small, we say that our method provided a way to identify the most probable structure more efficiently than RDG. Since the sentences used are extracted from a newspaper, it's also general in its applicability. Therefore it can be used in preparing teaching materials such as the structures used by a CAI system such as CATERS, saving the instructor of hand-coding them. In future work we shall extract the co-occurrences directly from the corpora, and use other grouping techniques to replace the CD. 5 Acknowledgments I am thankful to my thesis advisor Dr. T. Furugori and the anonymous referees for their suggestions and comments. References Fukumoto, F.; Sano H., Saitoh, Y.; and Fukumoto J. 1992. A Framework for Dependency Gram- mar based on the word's modifiability level - Restricted Dependency Grammar. In Trans. IPS Japan, 33(10), (in Japanese). Resnik,P. and Hearst M. 1993. Structural Ambiguity and Conceptual Relations. In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives. Ohio State University. Japan Electronic Dictiona~'y Research Institute, Ltd. 1993. EDR Electronic DictionaiT Specifications Guide (in Japanese). Kano, C. and Yanlamoto, H. 1995. A System for Reading Scientific and Technical Texts, Class- room, Instruction and Evaluation. In Jinbunka- gaku to computer 27(1) (in Japanese). Kurohashi, S., and Nagao, M. 1993. Structural Disambiguation in Japanese by Evaluating Case Structures based on Examples in Case Frame Dic- tionary. In Proceedings of IVCPT93. 374 | 1996 | 55 |
Incremental Parser Generation for Tree Adjoining Grammars*

Anoop Sarkar
University of Pennsylvania
Department of Computer and Information Science
200 S. 33rd St., Philadelphia PA 19104-6389, USA
[email protected]

Abstract

This paper describes the incremental generation of parse tables for the LR-type parsing of Tree Adjoining Languages (TALs). The algorithm presented handles modifications to the input grammar by updating the parser generated so far. In this paper, a lazy generation of LR-type parsers for TALs is defined in which parse tables are created by need while parsing. We then describe an incremental parser generator for TALs which responds to modification of the input grammar by updating parse tables built so far.

1 LR Parser Generation

Tree Adjoining Grammars (TAGs) are tree rewriting systems which combine trees with the single operation of adjoining. (Schabes and Vijay-Shanker, 1990) describes the construction of an LR parsing algorithm for TAGs.¹ Parser generation here is taken to be the construction of LR(0) tables (i.e., without any lookahead) for a particular TAG.² The moves made by the parser can be explained by an automaton which is weakly equivalent to TAGs, called the Bottom-Up Embedded Pushdown Automaton (BEPDA) (Schabes and Vijay-Shanker, 1990).³ Storage in a BEPDA is a sequence of stacks, where new stacks can be introduced above and below the top stack in the automaton. Recognition of adjunction is equivalent to the unwrap move shown in Fig. 1.

* This work is partially supported by NSF grant NSF-STC SBR 8920230, ARPA grant N00014-94 and ARO grant DAAH04-94-G0426. Thanks to Breck Baldwin, Dania Egedi, Jason Eisner, B. Srinivas and the three anonymous reviewers for their valuable comments.
¹ Familiarity with TAGs and their parsing techniques is assumed throughout the paper; see (Schabes and Joshi, 1991) for an introduction. We assume that our definition of TAG does not have the substitution operation. See (Aho et al., 1986) for details on LR parsing.
² The algorithm described here can be extended to use SLR(1) tables (Schabes and Vijay-Shanker, 1990).
³ Note that the LR(0) tables considered here are deterministic and hence correspond to a subset of the TALs. Techniques developed in (Tomita, 1986) can be used to resolve nondeterminism in the parser.

[Figure 1: Recognition of adjunction in a BEPDA.]

The LR parser of (Schabes and Vijay-Shanker, 1990) uses a parsing table and a sequence of stacks (Fig. 1) to parse the input. The parsing table encodes the actions taken by the parser as follows (using two GOTO functions):

• Shift to a new state, pushed onto a new stack which appears on top of the current sequence of stacks. The current input token is removed.
• Resume Right when the parser has reached right and below a node (in a dotted tree, explained below) on which an auxiliary tree has been adjoined. The GOTO_foot function encodes the proper state such that the string to the right of the footnode can be recognized.
• Reduce Root, where the parser executes an unwrap move to recognize adjunction (Fig. 1). The proper state for the parser after adjunction is given by the GOTO_right function.
• Accept and Error functions as in conventional LR parsing.

There are four positions for a dot associated with a symbol in a dotted tree: left above, left below, right below and right above. A dotted tree has one such dotted symbol. The tree traversal in Fig.
2 scans the frontier of the tree from left to right while trying to recognize possible adjunctions between the above and below positions of the dot. Adjunction on a node is recorded by marking it with an asterisk.⁴

[Figure 2: Left to right dotted tree traversal.]

The parse table is built as a finite state automaton (FSA) with each state defined to be a set of dotted trees. The closure operations on states in the parse table are defined in Fig. 3. All the states in the parse table must be closed under these operations.⁵ The FSA is built as follows: in state 0 put all the initial trees with the dot left and above the root. The state is then closed. New states are built by three transitions:

si {•a} --a--> sj {a•}, where a is a terminal symbol;
si {A•} --β_right--> sj {A•}, where β can adjoin at node A;
si {•A} --β_foot--> sj {A•}, where A is a footnode.

Entries in the parse table are determined as follows:

• a shift for each transition in the FSA.
• resume right iff there is a node B* with the dot right and below it.
• reduce root iff there is a rootnode of an auxiliary tree with the dot right and above it.
• accept and error with the usual interpretation.

The items created in each state before closure applies are called the kernels of each state in the FSA. The initial trees with the dot left and above the root form the kernel for state 0.

⁴ For example, B*. This differs from the usual notation for marking a footnode with an asterisk.
⁵ Fig. 5 is a partial FSA for the grammar in Fig. 4.

2 Lazy Parser Generation

The algorithm described so far assumes that the parse table is precompiled before the parser is used. Lazy parser generation generates only those parts of the parser that become necessary during actual parsing. The approach is an extension of the algorithm for CFGs given in (Heering et al., 1990; Heering et al., 1989). To modify the LR parsing strategy given earlier we move the closure and computation of transitions from the table generation stage to the LR parser. The lazy technique expands a kernel state only when the parser, looking at the current input, indicates so. For example, a TAG and corresponding FSA are shown in Fig. 4 (na rules out adjunction at a node).⁶ Computation of closure and transitions in the state occurs while parsing, as in Fig. 5, which is the result of the LR parser expanding the FSA in Fig. 4 while parsing the string aec. The modified parse function checks the type of the state and may expand the kernel states while parsing a sentence. Memory use in the lazy technique is greater as the FSA is needed during parsing and parser generation.

⁶ Unexpanded kernel states are marked with a bold-faced outline, acceptance states with double-lines.

[Figure 4: TAG G where L(G) = {a^n e c^n}, and the corresponding FSA after lazy parse table generation.]

[Figure 5: The FSA after parsing the string aec.]
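The lazy scheme can be pictured with a small sketch: states carry their kernel items, closure and outgoing transitions are computed only on first use, and the computed expansion is cached. This is an illustrative reconstruction — the `grammar.closure` and `grammar.goto_kernels` operations are hypothetical stand-ins for the dotted-tree machinery of Fig. 3, not the paper's implementation — but the expand-on-demand control flow is the point, and the same memoised structure is what the incremental scheme of the next section reverts to kernel form.

```python
class State:
    """An FSA state holding kernel items; closure and transitions
    are computed lazily, on first use, and then cached."""
    def __init__(self, kernel):
        self.kernel = frozenset(kernel)
        self.items = None          # closure, filled on demand
        self.transitions = None    # label -> State, filled on demand

    def expand(self, grammar):
        if self.items is None:     # only expanded when the parser asks
            self.items = grammar.closure(self.kernel)
            self.transitions = {label: State(kernel)
                                for label, kernel
                                in grammar.goto_kernels(self.items).items()}
        return self.transitions

    def revert(self):
        """Return the state to kernel form (used by incremental updating)."""
        self.items = None
        self.transitions = None

def parse_step(state, symbol, grammar):
    # The modified parse function: expand the kernel state if necessary,
    # then follow the transition for the current input symbol.
    transitions = state.expand(grammar)
    return transitions.get(symbol)   # None signals an error action
```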
[Figure 6: New tree added to G with L(G) = {a^n b^m e c^n d^m}.]

3 Incremental Parser Generation
An incremental parser generator responds to grammar updates by throwing away only that information from the FSA of the old grammar that is inconsistent in the updated grammar. Incremental behaviour is obtained by selecting the states in the parse table affected by the change in the grammar and returning them to their kernel form (i.e., removing the items added by the closure operations). The parse table FSA will now become a disconnected graph. The lazy parser will expand the states using the new grammar. All states in the disconnected graph are kept as the lazy parser will reconnect with those states (when the transitions between states are computed) that are unaffected by the change in the grammar.

[Figure 3: Closure Operations.]

Consider the addition of a tree to the grammar (deletion will be similar).
• For an initial tree α, return state 0 to kernel form, adding α with the dot left and above the root node. Also return all states where a possible Left Completion on α can occur to their kernel form.
• For an auxiliary tree β, return all states where a possible Adjunction Prediction on β can occur and all states with a β_right transition to their kernel form.
For example, the addition of the tree in Fig. 6 causes the FSA to fragment into the disconnected graph in Fig. 7. It is crucial to keep the disconnected states around; consider the re-expansion of a single state in Fig. 8. All states compatible with the modified grammar are eventually reused.

[Figure 7: The parse table after the addition of γ.]
[Figure 8: The parse table after expansion of state 0 with the modified grammar.]

The approach presented above causes certain states to become unreachable from the start state⁷. Frequent modifications of a grammar can cause many unreachable states. A garbage collection scheme defined in (Heering et al., 1990) can be used here which avoids overregeneration by retaining unreachable states.

4 Conclusion
What we have described above is work in progress in implementing an LR-type parser for a wide-coverage lexicalized grammar of English using TAGs (XTAG Group, 1995). Incremental parser generation allows the addition and deletion of elementary trees from a TAG without recompilation of the parse table for the updated grammar. This allows precompilation of top-down dependencies such as the prediction of adjunction while having the flexibility given by Earley-style parsers.

⁷Quantitative results on the performance of the algorithm presented are forthcoming.

References
Aho, Alfred V., Ravi Sethi and Jeffrey D. Ullman, Compilers: Principles, Techniques and Tools, Addison Wesley, Reading, MA, 1986.
Heering, Jan, Paul Klint and Jan Rekers, Incremental Generation of Parsers, In IEEE Transactions on Software Engineering, vol. 16, no. 12, pp. 1344-1350, 1990.
Heering, Jan, Paul Klint and Jan Rekers, Incremental Generation of Parsers, In ACM SIGPLAN Notices (SIGPLAN '89 Conference on Programming Language Design and Implementation), vol. 24, no. 7, pp. 179-191, 1989.
Schabes, Yves and K. Vijay-Shanker, Deterministic Left to Right Parsing of Tree Adjoining Languages, In 28th Meeting of the Association for Computational Linguistics (ACL '90), Pittsburgh, PA, 1990.
Schabes, Yves and Aravind K. Joshi, Parsing with Lexicalized Tree Adjoining Grammars, In Tomita, Masaru (ed.) Current Issues in Parsing Technologies, Kluwer Academic, Dordrecht, The Netherlands, 1991.
Tomita, Masaru, Efficient Parsing for Natural Language: A Fast Algorithm for Practical Systems, Kluwer Academic, Dordrecht, The Netherlands, 1986.
XTAG Research Group, A Lexicalized Tree Adjoining Grammar for English, IRCS Technical Report 95-03, University of Pennsylvania, Philadelphia, PA, 1995.
| 1996 | 56 |
Processing Complex Sentences in the Centering Framework
Michael Strube
Freiburg University, Computational Linguistics Lab
Europaplatz 1, D-79085 Freiburg, Germany
[email protected]

Abstract
We extend the centering model for the resolution of intra-sentential anaphora and specify how to handle complex sentences. An empirical evaluation indicates that the functional information structure guides the search for an antecedent within the sentence.

1 Introduction
The centering model (Grosz et al., 1995) focuses on the resolution of inter-sentential anaphora. Since intra-sentential anaphora occur at high rates in real-world texts, the model has to be extended for the resolution of anaphora at the sentence level. However, the centering framework is not fully specified to handle complex sentences (Suri & McCoy, 1994). This underspecification corresponds to the lack of a precise definition of the expression utterance, a term always used but intentionally left undefined¹. Therefore, the centering algorithms currently under discussion are not able to handle naturally occurring discourse. Possible strategies for treating sentence-level anaphora within the centering framework are
1. processing sentences linearly one clause at a time (as suggested by Grosz et al. (1995)),
2. preference for sentence-external antecedents which are proposed by the centering mechanism,
3. preference for sentence-internal antecedents which are filtered by the usual binding criteria,
4. a mixed-mode which prefers only a particular set of sentence-internal over sentence-external antecedents (e.g. Suri & McCoy (1994)).
The question arises as to which strategy fits best for the interaction between the resolution of intra- and inter-sentential anaphora. In my contribution, evidence for a mixed-mode strategy is brought forward, which favors a particular set of sentence-internal antecedents given by functional criteria.

¹Cf. the sketchy statements by Brennan et al. (1987, p.155): "[...] U is an utterance (not necessarily a full clause) [...]", and by Grosz et al. (1995, p.209): "U need not to be a full clause."

2 Constraints on Sentential Anaphora
Our studies on German texts have revealed that the functional information structure of the sentence, considered in terms of the context-boundedness of discourse elements, is the major determinant for the ranking on the forward-looking centers (Cf(Un)) (Strube & Hahn, 1996). Hence, context-bound discourse elements are generally ranked higher in the Cf than any other non-anaphoric element. The functional information structure has impact not only on the resolution of inter-sentential anaphora, but also on the resolution of intra-sentential anaphora. Hence, the most preferred antecedent of an intra-sentential anaphor is a phrase which is also anaphoric. Consider sentences (1) and (2) and the corresponding centering data in Table 1 (Cb: backward-looking center; the first element of the pairs denotes the discourse entity, the second element the surface). In sentence (1), a nominal anaphor occurs, der T3100SX (a particular notebook). In sentence (2), another nominal anaphor appears, der Rechner (the computer), which is resolved to T3100SX from the previous sentence. In the matrix clause, the pronoun er (it) co-specifies the already resolved anaphor der Rechner in the subordinate clause.
(1) Ist der Resume-Modus aktiviert, schaltet sich der T3100SX selbständig ab.
(If the resume mode is active - switches - itself - the T3100SX - automatically - off.)
(2) Bei späterem Einschalten des Rechners arbeitet er sofort an der alten Stelle weiter.
(The - later - turning on - of the computer - it - resumes working - at exactly the same place.)

(1) Cb: T3100SX: T3100SX
    Cf: [T3100SX: T3100SX]
(2) Cb: T3100SX: er
    Cf: [T3100SX: er, TURN-ON: Einschalten, PLACE: Stelle]
Table 1: Centering Data for Sentences (1) and (2)

This example illustrates our hypothesis that intra-sentential anaphors preferably co-specify context-bound discourse elements. In order to empirically strengthen this argument, we have examined several texts of different types: 15 texts from the information technology (IT) domain, one text from the German news magazine Der Spiegel, and the first chapters of a short story by the German writer Heiner Müller² (cf. Table 2). In the texts, 65 intra-sentential anaphors occur, 58 of them (89,2%) have an antecedent which is a resolved anaphor, while only 32 of them (49,2%) have an antecedent which is the subject of the matrix clause (cf. Table 3).

          text ana.   sent. ana.   anaphors   words
IT           284          24          308      5542
Spiegel       90          12          102      1468
Müller       124          29          153       867
             498          65          563      7877
Table 2: Distribution of Anaphors in the Text Corpus

These data indicate that an approach based on grammatical roles (Suri & McCoy, 1994) is inappropriate for the German language, while an approach based on the functional information structure seems preferable. In addition, we maintain that exchanging grammatical with functional criteria is also a reasonable strategy for fixed word order languages. They can be rephrased in terms of functional criteria, simply due to the fact that grammatical roles and the information structure patterns we defined, unless marked, coincide in these languages.

          cont.-bound   ¬bound   subj.   ¬subj.
IT             20           4      16       8
Spiegel        10           2       6       6
Müller         28           1      10      19
               58           7      32      33
Table 3: Types of Intra-Sentential Antecedents

Since the strategy described above is valid only for complex sentences which consist of a matrix clause and one or more subordinate clauses, compound sentences which consist of main clauses must be considered. Each of these sentences is processed by our algorithm in linear order, one clause at a time with the usual centering operations. Compound sentences which consist of multiple full clauses also have multiple Cb/Cf data. Now, we are able to define the expression utterance in a satisfactory manner: An utterance U is a simple sentence, a complex sentence, or each full clause of a compound sentence³. The Cf of an utterance is computed only with respect to the matrix clause. Given these findings, complex sentences can be processed at three stages (2a-2c; transitions from one stage to the next occur only when a suitable antecedent has not been found at the previous stage):
1. For resolving an anaphor in the first clause of Un, propose the elements of Cf(Un-1) in the given order.
2. For resolving an anaphor in a subsequent clause of Un,
   (a) propose already context-bound elements of Un from left to right⁴.
   (b) propose the elements of Cf(Un-1) in the given order.
   (c) propose all elements of Un not yet checked from left to right.
3. Compute the Cf(Un), considering only the elements of the matrix clause of Un.

²Liebesgeschichte. In Heiner Müller, Geschichten aus der Produktion 2, Berlin: Rotbuch Verlag, pp.57-63.
³We do not consider dialogues with elliptical utterances.
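The proposal order can be rendered schematically as follows; this is a sketch under obvious assumptions (utterances as lists of discourse elements, a context-boundedness test supplied externally, and the binding filter of footnote 4 applied by the caller), not the author's implementation.

# Sketch of the three-stage proposal order defined above.  The returned
# list is ordered by preference; the caller tests candidates in this order
# and stops at the first suitable antecedent.
def propose_antecedents(clause_index, U_n, Cf_prev, is_context_bound):
    if clause_index == 0:                                   # step 1
        return list(Cf_prev)
    ranked = [e for e in U_n if is_context_bound(e)]        # step 2a
    ranked += [e for e in Cf_prev if e not in ranked]       # step 2b
    ranked += [e for e in U_n if e not in ranked]           # step 2c
    return ranked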
3 Evaluation
In order to evaluate the functional approach to the resolution of intra-sentential anaphora within the centering model, we compared it to the other approaches mentioned in Section 1, employing the test set referred to in Table 2. Note that we tried to eliminate error chaining and false positives (for some remarks on evaluating discourse processing algorithms, cf. Walker (1989); we consider her results as a starting point for our proposal).
First, we examine the errors which all strategies have in common (for the success rate, cf. Table 4). 99 errors are caused by underspecification at different levels, e.g., prepositional anaphors (16), plural anaphors (8), anaphors which refer to a member of a set (14), sentence anaphors (21), and anaphors which refer to a global focus (12) are not yet included in the mechanism. In 9 cases, any strategy will choose the false antecedent.
The most interesting cases are the ones for which the performance of the different strategies varies. The linear approach generates 40 additional errors in the anaphora resolution, which are caused only by the ordering strategy to process each clause of sentences with the centering mechanism. The approach which prefers inter-sentential anaphora causes 60 additional errors. Note that this strategy performs remarkably well at first sight. For 44 of the errors it chooses an inter-sentential antecedent which is, on the surface, identical to the correct intra-sentential antecedent. We count these 44 resolutions as false positives, since the anaphor has been resolved to the false discourse entity. The approach which prefers intra-sentential antecedents causes 27 additional errors. These errors occur whenever an inter-sentential anaphor can be resolved with an incorrect intra-sentential antecedent.

⁴We abstract here from the syntactic criteria for filtering out some elements of the current sentence by applying binding criteria (Strube & Hahn, 1995). Syntactic constraints like control phenomena override the preferences given by the context.
(1995) and the two approaches which prefer inter-sentential or intra- sentential antecedents. 4 Comparison to Related Work Crucial for the evaluation of the centering model (Grosz et al., 1995) and its applicability to naturally occurring discourse is the lack of a specification con- ceming how to handle complex sentences and intra- sentential anaphora. Grosz et al. suggest the pro- cessing of sentences linearly one clause at a time. We have shown that such an approach is not appro- priate for some types of complex sentences. Suri & McCoy (1994) argue in the same manner, but we consider the functional approach for languages with free word order superior to their grammatical criteria, while, for languages with fixed word order, both ap- proaches should give the same results. Hence, our ap- proach seems to be more generally applicable. Other approaches which integrate the resolution of sentence- and text-level anaphora are based on salience metrics (Haji~ov~i et al., 1992; Lappin & Leass, 1994). We consider such metrics to be a method which detracts from the exact linguistic specifications as we propose them. At first sight, grammar theories like GB (Chomsky, 1981) or I-IPSG (Pollard & Sag, 1994), are the best choice for resolving anaphora at the sentence-level. But these grammar theories only give filters for ex- cluding some elements from consideration. Neither gives any preference for a particular antecedent at the sentence-level, nor do they consider text anaphora. 5 Conclusions In this paper, we gave a specification for handling complex sentences in the centering model based on the functional information structure of utterances in dis- course. We motivated our proposal by the constraints which hold for a free word order language (German) and derived our results from data-intensive empirical studies of real texts of different types. Some issues remain open: the evaluation of the functional approach for languages with fixed word or- der, a fine-grained analysis of subordinate clauses as Suri & McCoy (1994) presented for SX because SY clauses, and, in general, the solution for the cases which cause errors in our evaluation. Acknowledgments. This work has been funded by LGFG Baden-Wiirttemberg. I would like to thank my colleagues in the C£27~" group for fruitful discussions. I would also like to thank Jon Alcantara (Cambridge) who kindly took the role of the native speaker via Internet. References Brerman, S. E., M. W. Friedman & C. J. Pollard (1987). A centering approach to pronouns. In Proc. of ACL-87, pp. 155-162. Chornsky, N. (1981). Lectures on Government and Binding. Dordreeht: Foris. Grosz, B. J., A. K. Joshi & S. Weinstein (1995). Center- ing: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225. Hajitov~, E., V. Kubofi & P. Kubofi (1992). Stock of shared knowledge: A tool for solving pronominal anaphora. In Proc. ofCOLING-92, Vol. 1, pp. 127-133. Lappin, S. & H. J. Leass (1994). An algorithm for pronom- inal anaphora resolution. Computational Linguistics, 20(4):535-561. Pollard, C. & I. V. Sag (1994). Head-Driven Phrase Struc- ture Grammar. Chicago, ILL: Chicago Univ. Press. Strobe, M. & U. Hahn (1995). ParseTalk about sentence- and text-level anaphora. In Proc. of EACL-95, pp. 237- 244. Strube, M. & U. Hahn (1996). Functional centering. In this volume. Suri, L. Z. & K. F. McCoy (1994). RAVr/RAPR and center- ing: A comparison and discussion of problems related to processing complex sentences. ComputationalLin- guistics, 20(2):301-317. 
Walker, M. A. (1989). Evaluating discourse processing algorithms. In Proc. of ACL-89, pp. 251-261. | 1996 | 57 |
Maximizing Top-down Constraints for Unification-based Systems
Noriko Tomuro
School of Computer Science, Telecommunications and Information Systems
DePaul University, Chicago, IL 60604
[email protected]

Abstract
A left-corner parsing algorithm with top-down filtering has been reported to show very efficient performance for unification-based systems. However, due to the non-termination of parsing with left-recursive grammars, top-down constraints must be weakened. In this paper, a general method of maximizing top-down constraints is proposed. The method provides a procedure to dynamically compute *restrictor*, a minimum set of features involved in an infinite loop for every propagation path; thus top-down constraints are maximally propagated.

1 Introduction
A left-corner parsing algorithm with top-down filtering has been reported to show very efficient performance for unification-based systems (Carroll, 1994). In particular, top-down filtering seems to be very effective in increasing parse efficiency (Shann, 1991). Ideally all top-down expectation should be propagated down to the input word so that unsuccessful rule applications are pruned at the earliest time. However, in the context of unification-based parsing, left-recursive grammars have the formal power of a Turing machine, therefore detection of all infinite loops due to left-recursion is impossible (Shieber, 1992). So, top-down constraints must be weakened in order for parsing to be guaranteed to terminate.
In order to solve the nontermination problem, Shieber (1985) proposes restrictor, a statically predefined set of features to consider in propagation, and restriction, a filtering function which removes the features not in restrictor from top-down expectation. However, not only does this approach fail to provide a method to automatically generate the restrictor set, it may weaken the predictive power of top-down expectation more than necessary: a globally defined restrictor can only specify the least common features for all propagation paths.
In this paper, a general method of maximizing top-down constraints is proposed. The method provides a procedure to dynamically compute *restrictor*, a minimum set of features involved in an infinite loop, for every propagation path. Features in this set are selected by the detection function, and will be ignored in top-down propagation. Using *restrictor*, only the relevant features particular to the propagation path are ignored, thus top-down constraints are maximally propagated.

2 Notation
We use notation from the PATR-II formalism (Shieber, 1986) and (Shieber, 1992). Directed acyclic graphs (dags) are adopted as the representation model. The symbol ≐ is used to represent the equality relation in the unification equations, and the symbol · used in the form of p1 · p2 represents the path concatenation of p1 and p2. The subsumption relation is defined as "Dag D subsumes dag D' if D is more general than D'." The unification of D and D' is notated by D ⊔ D'. The extraction function D/p1 extracts the subdag under path p1 for a given D, and the embedding function D \ p1 injects D into the enclosing dag D' such that D'/p1 = D. The filtering function p is similar to (Shieber, 1992): p(D) returns a copy of D in which some features may be removed. Note that in this paper *restrictor* specifies the features to be removed by p, whereas in (Shieber, 1985, 1992) restrictor specifies the features to be retained by restriction which is equivalent to p.

3 Top-down Propagation
Top-down propagation can be precomputed to form a reachability table. Each entry in the table is a compiled dag which represents the relation between a non-terminal category and a rule used to rewrite the constituents in the reachability relation (i.e., reflexive, transitive closure of the left-corner path). For example, consider the following fragment of a grammar used in the syntax/semantics integrated system called LINK (Lytinen, 1992):
3 Top-down Propagation Top-down propagation can be precomputed to form a teachability table. Each entry in the table is a compiled dag which represents the relation between a non-terminal category and a rule used to rewrite the constituents in the teachability relation (i.e., re- flexive, transitive closure of the left-corner path). For example, consider the following fragment of a grammar used in the syntax/semantics integrated system called LINK (Lytinen, 1992): 381 s~m ~ own~ D(1) em D(2) Ic (~._.~ead owner D(3) Figure 1: DAGs used in the example he° rZ= sere ~ owfler D(4) rl : NPo -+ NP1 POS NP2 (NPo head) = (NP2 head) (YPo head sem owner) - (NP1 head sem) (This rule is used to parse phrases such as "Kris's desk" .) The dag D(1) in Figure 11 represents the initial application of rl to the category NP. Note that the subdag under the lc arc is the rule used to rewrite the constituent on the left-corner path, and the paths from the top node represent which top- down constraints are propagated to the lower level. Top-down propagation works as follows: given a dad D that represents a teachability relation and a rule dad R whose left-hand side category (i.e., root) is the same as D's left-corner category (i.e., under its (lc 1) path), the resulting dag is D1 = p(D') U (R \ lc), where D' is a copy of D in which all the numbered arcs and lc arc are deleted and the subdag which used to be under the (lc 1) path is promoted to lie under the lc arc. Dags after the next two recursive applications of rl (D(2) and D(3) respectively 2) are shown in Figure 1. Notice the filtering function p is applied only to D'. In the case when p(D') = nil, the top node in D1 will have no connections to the rule dag under the lc arc. This means no top-down constraints are propagated to the lower level, therefore the parsing becomes pure bottom-up. In many unification-based systems, subsumption is used to avoid redundancy: a dag is recorded in the table if it is not subsumed by any other one. Therefore, if a newly created dag is incompatible or more general than existing dags, rule application continues. In the above example, D(2) is incompat- ible with D(1) and therefore gets entered into the table. The owner arc keeps extending in the subse- quent recursive applications (as in D(3)), thus the propagation goes into an infinite loop. 1 Category symbols are directly indicated in the dad nodes for simplicity. 2In this case, p is assumed to be an identity function. 3.1 Proposed Method Let A be a dag created by the first application of the rule R and B be a dad created by the second application during the top-down propagation. 3 In the proposed method, A and B are first checked for subsumption. If B is subsumed by A, the propaga- tion for this path terminates. Otherwise a possible loop is detected. The detection function (described in the next subsection) is called on A and B and selected features are added to the .restrictor. set. 4 Then, using the updated *restrictor*, propagation is re-done from A. When R is applied again yielding B', while B' is not subsumed by A, the following process is re- peated: if B' is incompatible with A, the detection function is called on A and B' and propagation is re- done from A. If B' is more general than A, then A is replaced by B' (thereby keeping the most general dag for the path) and propagation is re-done from B'. Otherwise the process stops for this propagation path. 
Thus, the propagation will terminate when enough features are detected, or when *restrictor* includes all the (finite number of) features in the grammar⁵.
In the example, when the detection function is called on D(1) and D(2) after the first recursive application, the feature owner is selected and added to *restrictor*. After the propagation is re-done from D(1), the resulting dag D(4) becomes more general than D(1)⁶. Then D(1) is replaced by D(4), and the propagation is re-done once again. This time it results in the same D(4), therefore the propagation terminates.

³In the case of indirect recursion, there are some intervening rule applications between A and B.
⁴A separate *restrictor* must be kept for each propagation path.
⁵In reality, the category feature will never be in *restrictor* because the same rule R is applied to derive both A and B'.
⁶Remember D(4) = p(D(1)') ⊔ (r1 \ lc) where p filters out the owner arc.

3.2 Detection Function
The detection function compares two dags X and Y by checking every constraint (unification equation) x in X with any inconsistent or more general constraint y in Y. If such a constraint is found, the function selects a path in x or y and detects its last arc/feature as being involved in the possible loop⁷.
If x is the path constraint p1 ≐ p2, where p1 and p2 are paths of length ≥ 1, features may be detected in the following cases⁸:
• (case 1) If both p1 and p2 exist in Y, and there exists a more general constraint y in Y in the form p1 · p3 ≐ p2 · p3 (length of p3 is also ≥ 1), the path p3 is selected;
• (case 2) If both p1 and p2 exist in Y, but the subdag under p1 and the subdag under p2 do not unify, or if neither p1 nor p2 exists in Y, whichever of p1 or p2 does not contain the lc arc, or either if they both contain the lc arc, is selected; and
• (case 3) If either p1 or p2 does not exist in Y, the one which does not exist in Y is selected.
If x is the constant constraint p1 ≐ c (where c is some constant), features may be detected in the following cases:
• (case 4) If there exists an incompatible constraint y of the form p1 ≐ d where d ≠ c in Y, or if there is no path p1 in Y, p1 is selected; and
• (case 5) If there exists an incompatible constraint y of the form p1 · p2 ≐ c, then p2 is selected.

⁷This scheme may be rather conservative.
⁸Note the cases in this section do not represent all possible situations.

4 Related Work
A similar solution to the nontermination problem with unification grammars in Prolog is proposed in (Samuelsson, 1993). In this method, an operation called anti-unification (often referred to as generalization as the counterpart of unification) is applied to the root and leaf terms of a cyclic propagation, and the resulting term is stored in the reachability table as the result of applying restriction on both terms. Another approach taken in (Haas, 1989) eliminates the cyclic propagation by replacing the features in the root and leaf terms with new variables.
The method proposed in this paper is more general than the above approaches: if the selection ordering is imposed in the detection function, features in *restrictor* can be collected incrementally as the cyclic propagations are repeated. Thus, this method is able to create a less restrictive *restrictor* than these other approaches.

5 Discussion and Future Work
The proposed method has an obvious difficulty: the complexity caused by the repeated propagations could become overwhelming for some grammars.
However, in the experiment on the LINK system using a fairly broad grammar (over 130 rules), precompilation terminated with only a marginally longer processing time.
In the experiment, all features (around 40 syntactic/semantic features) except for one in the example in this paper were able to be used in propagation. In the preliminary analysis, the number of edges entered into the chart has decreased by 30% compared to when only the category feature (i.e., context-free backbone) was used in propagation.
For future work, we intend to apply the proposed method to other grammars. By doing the empirical analysis of precompilation and parse efficiency for different grammars, we will be able to conclude the practical applicability of the proposed method. We also intend to do more exhaustive case analysis and investigate the selection ordering of the detection function. Although the current definition covers most cases, it is by no means complete.

References
Carroll, J. (1994). Relating complexity to practical performance in parsing with wide-coverage unification grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, pp. 287-293.
Haas, A. (1989). A parsing algorithm for unification grammar. Computational Linguistics, 15(4), pp. 219-232.
Lytinen, S. (1992). A unification-based, integrated natural language processing system. Computers and Mathematics with Applications, 23(6-9), pp. 403-418.
Samuelsson, C. (1993). Avoiding non-termination in unification grammars. In Proceedings of Natural Language Understanding and Logic Programming IV, Nara, Japan.
Shann, P. (1991). Experiments with GLR and chart parsing. In Tomita, M. (ed.), Generalized LR Parsing. Boston: Kluwer Academic Publishers, pp. 17-34.
Shieber, S. (1985). Using restriction to extend parsing algorithms for complex-feature-based formalisms. In Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, Chicago, IL, pp. 145-152.
Shieber, S. (1986). An Introduction to Unification-Based Approaches to Grammar. Stanford, CA: Center for the Study of Language and Information.
Shieber, S. (1992). Constraint-based Grammar Formalisms. Cambridge, MA: MIT Press.
| 1996 | 58 |
Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach
Hwee Tou Ng
Defence Science Organisation
20 Science Park Drive, Singapore 118230
[email protected]
Hian Beng Lee
Defence Science Organisation
20 Science Park Drive, Singapore 118230
[email protected]

Abstract
In this paper, we present a new approach for word sense disambiguation (WSD) using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation. We tested our WSD program, named LEXAS, on both a common data set used in previous work, as well as on a large sense-tagged corpus that we separately constructed. LEXAS achieves a higher accuracy on the common data set, and performs better than the most frequent heuristic on the highly ambiguous words in the large corpus tagged with the refined senses of WORDNET.

1 Introduction
One important problem of Natural Language Processing (NLP) is figuring out what a word means when it is used in a particular context. The different meanings of a word are listed as its various senses in a dictionary. The task of Word Sense Disambiguation (WSD) is to identify the correct sense of a word in context. Improvement in the accuracy of identifying the correct word sense will result in better machine translation systems, information retrieval systems, etc. For example, in machine translation, knowing the correct word sense helps to select the appropriate target words to use in order to translate into a target language.
In this paper, we present a new approach for WSD using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense, including part of speech (POS) of neighboring words, morphological form, the unordered set of surrounding words, local collocations, and verb-object syntactic relation.
To evaluate our WSD program, named LEXAS (LEXical Ambiguity-resolving System), we tested it on a common data set involving the noun "interest" used by Bruce and Wiebe (Bruce and Wiebe, 1994). LEXAS achieves a mean accuracy of 87.4% on this data set, which is higher than the accuracy of 78% reported in (Bruce and Wiebe, 1994).
Moreover, to test the scalability of LEXAS, we have acquired a corpus in which 192,800 word occurrences have been manually tagged with senses from WORDNET, which is a public domain lexical database containing about 95,000 word forms and 70,000 lexical concepts (Miller, 1990). These sense-tagged word occurrences consist of the 191 most frequently occurring and most ambiguous nouns and verbs. When tested on this large data set, LEXAS performs better than the default strategy of picking the most frequent sense. To our knowledge, this is the first time that a WSD program has been tested on such a large scale, yielding results better than the most frequent heuristic on highly ambiguous words with the refined sense distinctions of WORDNET.

2 Task Description
The input to a WSD program consists of unrestricted, real-world English sentences. In the output, each word occurrence w is tagged with its correct sense (according to the context) in the form of a sense number i, where i corresponds to the i-th sense definition of w as given in some dictionary. The choice of which sense definitions to use (and according to which dictionary) is agreed upon in advance.
The choice of which sense definitions to use (and according to which dictionary) is agreed upon in ad- vance. For our work, we use the sense definitions as given in WORDNET, which is comparable to a good desk- top printed dictionary in its coverage and sense dis- tinction. Since WO•DNET only provides sense def- initions for content words, (i.e., words in the parts of speech (POS) noun, verb, adjective, and adverb), LEXAS is only concerned with disambiguating the sense of content words. However, almost all existing work in WSD deals only with disambiguating con- tent words too. LEXAS assumes that each word in an input sen- 40 tence has been pre-tagged with its correct POS, so that the possible senses to consider for a content word w are only those associated with the particu- lar POS of w in the sentence. For instance, given the sentence "A reduction of principal and interest is one way the problem may be solved.", since the word "interest" appears as a noun in this sentence, LEXAS will only consider the noun senses of "inter- est" but not its verb senses. That is, LEXAS is only concerned with disambiguating senses of a word in a given POS. Making such an assumption is reason- able since POS taggers that can achieve accuracy of 96% are readily available to assign POS to un- restricted English sentences (Brill, 1992; Cutting et al., 1992). In addition, sense definitions are only available for root words in a dictionary. These are words that are not morphologically inflected, such as "interest" (as opposed to the plural form "interests"), "fall" (as opposed to the other inflected forms like "fell", "fallen", "falling", "falls"), etc. The sense of a mor- phologically inflected content word is the sense of its uninflected form. LEXAS follows this convention by first converting each word in an input sentence into its morphological root using the morphological ana- lyzer of WORD NET, before assigning the appropriate word sense to the root form. 3 Algorithm LEXAS performs WSD by first learning from a train- ing corpus of sentences in which words have been pre-tagged with their correct senses. That is, it uses supervised learning, in particular exemplar-based learning, to achieve WSD. Our approach has been fully implemented in the program LExAs. Part of the implementation uses PEBLS (Cost and Salzberg, 1993; Rachlin and Salzberg, 1993), a public domain exemplar-based learning system. LEXAS builds one exemplar-based classifier for each content word w. It operates in two phases: training phase and test phase. In the training phase, LEXAS is given a set S of sentences in the training corpus in which sense-tagged occurrences of w ap- pear. For each training sentence with an occurrence of w, LEXAS extracts the parts of speech (POS) of words surrounding w, the morphological form of w, the words that frequently co-occur with w in the same sentence, and the local collocations containing w. For disambiguating a noun w, the verb which takes the current noun w as the object is also iden- tified. This set of values form the features of an ex- ample, with one training sentence contributing one training example. Subsequently, in the test phase, LEXAS is given new, previously unseen sentences. 
For a new sen- tence containing the word w, LI~XAS extracts from the new sentence the values for the same set of fea- tures, including parts of speech of words surround- 41 ing w, the morphological form of w, the frequently co-occurring words surrounding w, the local colloca- tions containing w, and the verb that takes w as an object (for the case when w is a noun). These values form the features of a test example. This test example is then compared to every train- ing example. The sense of word w in the test exam- ple is the sense of w in the closest matching train- ing example, where there is a precise, computational definition of "closest match" as explained later. 3.1 Feature Extraction The first step of the algorithm is to extract a set F of features such that each sentence containing an oc- currence of w will form a training example supplying the necessary values for the set F of features. Specifically, LEXAS uses the following set of fea- tures to form a training example: L3, L2, LI, 1~i, R2, R3, M, KI, . . . , Kin, el,..., 69, V 3.1.1 Part of Speech and Morphological Form The value of feature Li is the part of speech (POS) of the word i-th position to the left of w. The value of Ri is the POS of the word i-th position to the right of w. Feature M denotes the morphological form of w in the sentence s. For a noun, the value for this feature is either singular or plural; for a verb, the value is one of infinitive (as in the uninflected form of a verb like "fall"), present-third-person-singular (as in "falls"), past (as in "fell"), present-participle (as in "falling") or past-participle (as in "fallen"). 3.1.2 Unordered Set of Surrounding Words Kt, • •., Km are features corresponding to a set of keywords that frequently co-occur with word w in the same sentence. For a sentence s, the value of feature Ki is one if the keyword It'~ appears some- where in sentence s, else the value of Ki is zero. The set of keywords K1,..., Km are determined based on conditional probability. All the word to- kens other than the word occurrence w in a sen- tence s are candidates for consideration as keywords. These tokens are converted to lower case form before being considered as candidates for keywords. Let cp(ilk ) denotes the conditional probability of sense i of w given keyword k, where Ni,k cp(ilk) = N~ Nk is the number of sentences in which keyword k co- occurs with w, and Ni,k is the number of sentences in which keyword k co-occurs with w where w has sense i. For a keyword k to be selected as a feature, it must satisfy the following criteria: 1. cp(ilk ) >_ Mi for some sense i, where M1 is some predefined minimum probability. 2. The keyword k must occur at least M2 times in some sense i, where /1//2 is some predefined minimum value. 3. Select at most M3 number of keywords for a given sense i if the number of keywords satisfy- ing the first two criteria for a given sense i ex- ceeds M3. In this case, keywords that co-occur more frequently (in terms of absolute frequency) with sense i of word w are selected over those co-occurring less frequently. Condition 1 ensures that a selected keyword is in- dicative of some sense i of w since cp(ilk) is at least some minimum probability M1. Condition 2 reduces the possibility of selecting a keyword based on spu- rious occurrence. Condition 3 prefers keywords that co-occur more frequently if there is a large number of eligible keywords. For example, M1 = 0.8, Ms = 5, M3 = 5 when LEXAS was tested on the common data set reported in Section 4.1. 
To illustrate, when disambiguating the noun "in- terest", some of the selected keywords are: ex- pressed, acquiring, great, attracted, expressions, pursue, best, conflict, served, short, minority, rates, rate, bonds, lower, payments. 3.1.3 Local Collocations Local collocations are common expressions con- taining the word to be disambiguated. For our pur- pose, the term collocation does not imply idiomatic usage, just words that are frequently adjacent to the word to be disambiguated. Examples of local collo- cations of the noun "interest" include "in the interest of", "principal and interest", etc. When a word to be disambiguated occurs as part of a collocation, its sense can be frequently determined very reliably. For example, the collocation "in the interest of" always implies the "advantage, advancement, favor" sense of the noun "interest". Note that the method for extraction of keywords that we described earlier will fail to find the words "in", "the", "of" as keywords, since these words will appear in many different po- sitions in a sentence for many senses of the noun "interest". It is only when these words appear in the exact order "in the interest of" around the noun "interest" that strongly implies the "advantage, ad- vancement, favor" sense. There are nine features related to collocations in an example. Table 1 lists the nine features and some collocation examples for the noun "interest". For ex- ample, the feature with left offset = -2 and right off- set = 1 refers to the possible collocations beginning at the word two positions to the left of "interest" and ending at the word one position to the right of "interest". An example of such a collocation is "in the interest of". The method for extraction of local collocations is similar to that for extraction of keywords. For each 42 Left Offset Right Offset Collocation Example -1 -1 accrued interest 1 1 interest rate -2 -1 principal and interest -1 1 national interest in 1 2 interest and dividends -3 -1 sale of an interest -2 in the interest of -1 2 an interest in a 1 3 interest on the bonds Table 1: Features for Collocations of the nine collocation features, LEXAS concatenates the words between the left and right offset positions. Using similar conditional probability criteria for the selection of keywords, collocations that are predic- tive of a certain sense are selected to form the pos- sible values for a collocation feature. 3.1.4 Verb-Object Syntactic Relation LEXAS also makes use of the verb-object syntactic relation as one feature V for the disambiguation of nouns. If a noun to be disambiguated is the head of a noun group, as indicated by its last position in a noun group bracketing, and if the word immediately preceding the opening noun group bracketing is a verb, LEXAS takes such a verb-noun pair to be in a verb-object syntactic relation. Again, using similar conditional probability criteria for the selection of keywords, verbs that are predictive of a certain sense of the noun to be disambiguated are selected to form the possible values for this verb-object feature V. Since our training and test sentences come with noun group bracketing, determining verb-object re- lation using the above heuristic can be readily done. In future work, we plan to incorporate more syntac- tic relations including subject-verb, and adjective- headnoun relations. We also plan to use verb- object and subject-verb relations to disambiguate verb senses. 
3.2 Training and Testing The heart of exemplar-based learning is a measure of the similarity, or distance, between two examples. If the distance between two examples is small, then the two examples are similar. We use the following definition of distance between two symbolic values vl and v2 of a feature f: e(vl, v2) = I c1' cl c2, c. I i=1 Cl,i is the number of training examples with value vl for feature f that is classified as sense i in the training corpus, and C1 is the number of training examples with value vl for feature f in any sense. C2,i and C2 denote similar quantities for value v2 of feature f. n is the total number of senses for a word W. This metric for measuring distance is adopted from (Cost and Salzberg, 1993), which in turn is adapted from the value difference metric of the ear- lier work of (Stanfill and Waltz, 1986). The distance between two examples is the sum of the distances between the values of all the features of the two ex- amples. During the training phase, the appropriate set of features is extracted based on the method described in Section 3.1. From the training examples formed, the distance between any two values for a feature f is computed based on the above formula. During the test phase, a test example is compared against allthe training examples. LEXAS then deter- mines the closest matching training example as the one with the minimum distance to the test example. The sense of w in the test example is the sense of w in this closest matching training example. If there is a tie among several training examples with the same minimum distance to the test exam- ple, LEXAS randomly selects one of these training examples as the closet matching training example in order to break the tie. 4 Evaluation To evaluate the performance of LEXAS, we con- ducted two tests, one on a common data set used in (Bruce and Wiebe, 1994), and another on a larger data set that we separately collected. 4.1 Evaluation on a Common Data Set To our knowledge, very few of the existing work on WSD has been tested and compared on a common data set. This is in contrast to established practice in the machine learning community. This is partly because there are not many common data sets pub- licly available for testing WSD programs. One exception is the sense-tagged data set used in (Bruce and Wiebe, 1994), which has been made available in the public domain by Bruce and Wiebe. This data set consists of 2369 sentences each con- taining an occurrence of the noun "interest" (or its plural form "interests") with its correct sense man- ually tagged. The noun "interest" occurs in six dif- ferent senses in this data set. Table 2 shows the distribution of sense tags from the data set that we obtained. Note that the sense definitions used in this data set are those from Longman Dictionary of Con- temporary English (LDOCE) (Procter, 1978). This does not pose any problem for LEXAS, since LEXAS only requires that there be a division of senses into different classes, regardless of how the sense classes are defined or numbered. POS of words are given in the data set, as well as the bracketings of noun groups. These are used to determine the POS of neighboring words and the LDOCE sense Frequency Percent 1: readiness to give 361 15% attention 2: quality of causing 11 <1% attention to be given 3: activity, subject, etc. 67 3% which one gives time and attention to 178 4: advantage, advancement, or favor 5: a share (in a company, business, etc.) 
499 6: money paid for the use 1253 of money 8% 21% 53% Table 2: Distribution of Sense Tags verb-object syntactic relation to form the features of examples. In the results reported in (Bruce and Wiebe, 1994), they used a test set of 600 randomly selected sentences from the 2369 sentences. Unfortunately, in the data set made available in the public domain, there is no indication of which sentences are used as test sentences. As such, we conducted 100 random trials, and in each trial, 600 sentences were randomly selected to form the test set. LEXAS is trained on the remaining 1769 sentences, and then tested on a separate test set of sentences in each trial. Note that in Bruce and Wiebe's test run, the pro- portion of sentences in each sense in the test set is approximately equal to their proportion in the whole data set. Since we use random selection of test sen- tences, the proportion of each sense in our test set is also approximately equal to their proportion in the whole data set in our random trials. The average accuracy of LEXAS over 100 random trials is 87.4%, and the standard deviation is 1.37%. In each of our 100 random trials, the accuracy of LEXAS is always higher than the accuracy of 78% reported in (Bruce and Wiebe, 1994). Bruce and Wiebe also performed a separate test by using a subset of the "interest" data set with only 4 senses (sense 1, 4, 5, and 6), so as to compare their results with previous work on WSD (Black, 1988; Zernik, 1990; Yarowsky, 1992), which were tested on 4 senses of the noun "interest". However, the work of (Black, 1988; Zernik, 1990; Yarowsky, 1992) were not based on the present set of sentences, so the comparison is only suggestive. We reproduced in Table 3 the results of past work as well as the clas- sification accuracy of LEXAS, which is 89.9% with a standard deviation of 1.09% over 100 random trials. In summary, when tested on the noun "interest", LEXAS gives higher classification accuracy than pre- vious work on WSD. In order to evaluate the relative contribution of the knowledge sources, including (1) POS and mor- 43 WSD research Accuracy Black (1988) 72% Zernik (1990) 70% Yarowsky (1992) 72% Bruce & Wiebe (1994) 79% LEXhS (1996) 89% Table 3: Comparison with previous results Knowledge Source POS & morpho surrounding words collocations verb-object Mean Accuracy 77.2% 62.0% 80.2% 43.5% Std Dev 1.44% 1.82% 1.55% 1.79% Table 4: Relative Contribution of Knowledge Sources phological form; (2) unordered set of surrounding words; (3) local collocations; and (4) verb to the left (verb-object syntactic relation), we conducted 4 sep- arate runs of 100 random trials each. In each run, we utilized only one knowledge source and compute the average classification accuracy and the standard deviation. The results are given in Table 4. Local collocation knowledge yields the highest ac- curacy, followed by POS and morphological form. Surrounding words give lower accuracy, perhaps be- cause in our work, only the current sentence forms the surrounding context, which averages about 20 words. Previous work on using the unordered set of surrounding words have used a much larger window, such as the 100-word window of (Yarowsky, 1992), and the 2-sentence context of (Leacock et al., 1993). Verb-object syntactic relation is the weakest knowl- edge source. Our experimental finding, that local collocations are the most predictive, agrees with past observa- tion that humans need a narrow window of only a few words to perform WSD (Choueka and Lusignan, 1985). 
The processing speed of LEXAS is satisfactory. Running on an SGI Unix workstation, LEXAS can process about 15 examples per second when tested on the "interest" data set. 4.2 Evaluation on a Large Data Set Previous research on WSD tend to be tested only on a dozen number of words, where each word fre- quently has either two or a few senses. To test the scalability of LEXAS, we have gathered a corpus in which 192,800 word occurrences have been manually tagged with senses from WoRDNET 1.5. This data set is almost two orders of magnitude larger in size than the above "interest" data set. Manual tagging was done by university undergraduates majoring in Linguistics, and approximately one man-year of ef- forts were expended in tagging our data set. These 192,800 word occurrences consist of 121 nouns and 70 verbs which are the most frequently oc- curring and most ambiguous words of English. The 121 nouns are: action activity age air area art board body book business car case center cen- tury change child church city class college community company condition cost coun- try course day death development differ- ence door effect effort end example experi- ence face fact family field figure foot force form girl government ground head history home hour house information interest job land law level life light line man mate- rial matter member mind moment money month name nation need number order part party picture place plan point pol- icy position power pressure problem pro- cess program public purpose question rea- son result right room school section sense service side society stage state step student study surface system table term thing time town type use value voice water way word work world The 70 verbs are: add appear ask become believe bring build call carry change come consider continue determine develop draw expect fall give go grow happen help hold indicate involve keep know lead leave lie like live look lose mean meet move need open pay raise read receive remember require return rise run see seem send set show sit speak stand start stop strike take talk tell think turn wait walk want work write For this set of nouns and verbs, the average num- ber of senses per noun is 7.8, while the average num- ber of senses per verb is 12.0. We draw our sen- tences containing the occurrences of the 191 words listed above from the combined corpus of the 1 mil- lion word Brown corpus and the 2.5 million word Wall Street Journal (WSJ) corpus. For every word in the two lists, up to 1,500 sentences each con- taining an occurrence of the word are extracted from the combined corpus. In all, there are about 113,000 noun occurrences and about 79,800 verb oc- currences. This set of 121 nouns accounts for about 20% of all occurrences of nouns that one expects to encounter in any unrestricted English text. Simi- larly, about 20% of all verb occurrences in any unre- stricted text come from the set of 70 verbs chosen. We estimate that there are 10-20% errors in our sense-tagged data set. To get an idea of how the sense assignments of our data set compare with those provided by WoRDNET linguists in SEMCOR, the sense-tagged subset of Brown corpus prepared by Miller et al. (Miller et al., 1994), we compare 44 Test set BC50 WSJ6 Sense 1 40.5% 44.8% Most Frequent LEXAS 47.1% 54.0% 63.7% 68.6% Table 5: Evaluation on a Large Data Set a subset of the occurrences that overlap. Out of 5,317 occurrences that overlap, about 57% of the sense assignments in our data set agree with those in SEMCOR. 
This should not be too surprising, as it is widely believed that sense tagging using the full set of refined senses found in a large dictionary like WORDNET involve making subtle human judg- ments (Wilks et al., 1990; Bruce and Wiebe, 1994), such that there are many genuine cases where two humans will not agree fully on the best sense assign- ments. We evaluated LEXAS on this larger set of noisy, sense-tagged data. We first set aside two subsets for testing. The first test set, named BC50, consists of 7,119 occurrences of the 191 content words that oc- cur in 50 text files of the Brown corpus. The second test set, named WSJ6, consists of 14,139 occurrences of the 191 content words that occur in 6 text files of the WSJ corpus. We compared the classification accuracy of LEXAS against the default strategy of picking the most fre- quent sense. This default strategy has been advo- cated as the baseline performance level for compar- ison with WSD programs (Gale et al., 1992). There are two instantiations of this strategy in our current evaluation. Since WORDNET orders its senses such that sense 1 is the most frequent sense, one pos- sibility is to always pick sense 1 as the best sense assignment. This assignment method does not even need to look at the training sentences. We call this method "Sense 1" in Table 5. Another assignment method is to determine the most frequently occur- ring sense in the training sentences, and to assign this sense to all test sentences. We call this method "Most Frequent" in Table 5. The accuracy of LEXAS on these two test sets is given in Table 5. Our results indicate that exemplar-based classi- fication of word senses scales up quite well when tested on a large set of words. The classification accuracy of LEXAS is always better than the default strategy of picking the most frequent sense. We be- lieve that our result is significant, especially when the training data is noisy, and the words are highly ambiguous with a large number of refined sense dis- tinctions per word. The accuracy on Brown corpus test files is lower than that achieved on the Wall Street Journal test files, primarily because the Brown corpus consists of texts from a wide variety of genres, including newspaper reports, newspaper editorial, biblical pas- sages, science and mathematics articles, general fic- tion, romance story, humor, etc. It is harder to dis- 45 ambiguate words coming from such a wide variety of texts. 5 Related Work There is now a large body of past work on WSD. Early work on WSD, such as (Kelly and Stone, 1975; Hirst, 1987) used hand-coding of knowledge to per- form WSD. The knowledge acquisition process is la- borious. In contrast, LEXAS learns from tagged sen- tences, without human engineering of complex rules. The recent emphasis on corpus based NLP has re- sulted in much work on WSD of unconstrained real- world texts. One line of research focuses on the use of the knowledge contained in a machine-readable dictionary to perform WSD, such as (Wilks et al., 1990; Luk, 1995). In contrast, LEXAS uses super- vised learning from tagged sentences, which is also the approach taken by most recent work on WSD, in- cluding (Bruce and Wiebe, 1994; Miller et al., 1994; Leacock et al., 1993; Yarowsky, 1994; Yarowsky, 1993; Yarowsky, 1992). The work of (Miller et al., 1994; Leacock et al., 1993; Yarowsky, 1992) used only the unordered set of surrounding words to perform WSD, and they used statistical classifiers, neural networks, or IR-based techniques. 
5 Related Work

There is now a large body of past work on WSD. Early work on WSD, such as (Kelly and Stone, 1975; Hirst, 1987), used hand-coding of knowledge to perform WSD. The knowledge acquisition process is laborious. In contrast, LEXAS learns from tagged sentences, without human engineering of complex rules.

The recent emphasis on corpus-based NLP has resulted in much work on WSD of unconstrained real-world texts. One line of research focuses on the use of the knowledge contained in a machine-readable dictionary to perform WSD, such as (Wilks et al., 1990; Luk, 1995). In contrast, LEXAS uses supervised learning from tagged sentences, which is also the approach taken by most recent work on WSD, including (Bruce and Wiebe, 1994; Miller et al., 1994; Leacock et al., 1993; Yarowsky, 1994; Yarowsky, 1993; Yarowsky, 1992).

The work of (Miller et al., 1994; Leacock et al., 1993; Yarowsky, 1992) used only the unordered set of surrounding words to perform WSD, and they used statistical classifiers, neural networks, or IR-based techniques. The work of (Bruce and Wiebe, 1994) used parts of speech (POS) and morphological form, in addition to surrounding words. However, the POS used are abbreviated POS, and only in a window of ±2 words. No local collocation knowledge is used. A probabilistic classifier is used in (Bruce and Wiebe, 1994).

That local collocation knowledge provides important clues to WSD is pointed out in (Yarowsky, 1993), although it was demonstrated only on performing binary (or very coarse) sense disambiguation.

The work of (Yarowsky, 1994) is perhaps the most similar to our present work. However, his work used decision lists to perform classification, in which only the single best disambiguating evidence that matched a target context is used. In contrast, we use exemplar-based learning, where the contributions of all features are summed up and taken into account in coming up with a classification. We also include the verb-object syntactic relation as a feature, which is not used in (Yarowsky, 1994). Although the work of (Yarowsky, 1994) can be applied to WSD, the results reported in (Yarowsky, 1994) only dealt with accent restoration, which is a much simpler problem. It is unclear how Yarowsky's method will fare on WSD of a common test data set like the one we used, nor has his method been tested on a large data set with highly ambiguous words tagged with the refined senses of WORDNET.

The work of (Miller et al., 1994) is the only prior work we know of which attempted to evaluate WSD on a large data set and using the refined sense distinction of WORDNET. However, their results show no improvement (in fact a slight degradation in performance) when using surrounding words to perform WSD as compared to the most frequent heuristic. They attributed this to insufficient training data in SEMCOR. In contrast, we adopt a different strategy of collecting the training data set. Instead of tagging every word in a running text, as is done in SEMCOR, we only concentrate on the set of 191 most frequently occurring and most ambiguous words, and collect large enough training data for these words only. This strategy yields better results, as indicated by the better performance of LEXAS compared with the most frequent heuristic on this set of words.

Most recently, Yarowsky used an unsupervised learning procedure to perform WSD (Yarowsky, 1995), although this was only tested on disambiguating words into binary, coarse sense distinctions. The effectiveness of unsupervised learning on disambiguating words into the refined sense distinctions of WORDNET needs to be further investigated.

The work of (McRoy, 1992) pointed out that a diverse set of knowledge sources is important to achieve WSD, but no quantitative evaluation was given on the relative importance of each knowledge source. No previous work has reported any such evaluation either.

The work of (Cardie, 1993) used a case-based approach that simultaneously learns part of speech, word sense, and concept activation knowledge, although the method is only tested on domain-specific texts with domain-specific word senses.

6 Conclusion

In this paper, we have presented a new approach for WSD using an exemplar-based learning algorithm. This approach integrates a diverse set of knowledge sources to disambiguate word sense. When tested on a common data set, our WSD program gives higher classification accuracy than previous work on WSD.
When tested on a large, separately collected data set, our program performs better than the default strategy of picking the most frequent sense. To our knowledge, this is the first time that a WSD program has been tested on such a large scale, yielding results better than the most frequent heuristic on highly ambiguous words with the refined senses of WORDNET.

7 Acknowledgements

We would like to thank: Dr Paul Wu for sharing the Brown Corpus and Wall Street Journal Corpus; Dr Christopher Ting for downloading and installing WORDNET and SEMCOR, and for reformatting the corpora; the 12 undergraduates from the Linguistics Program of the National University of Singapore for preparing the sense-tagged corpus; and Prof K. P. Mohanan for his support of the sense-tagging project.

References

Ezra Black. 1988. An experiment in computational discrimination of English word senses. IBM Journal of Research and Development, 32(2):185-194.

Eric Brill. 1992. A simple rule-based part of speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, pages 152-155.

Rebecca Bruce and Janyce Wiebe. 1994. Word-sense disambiguation using decomposable models. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico.

Claire Cardie. 1993. A case-based approach to knowledge acquisition for domain-specific sentence analysis. In Proceedings of the Eleventh National Conference on Artificial Intelligence, pages 798-803, Washington, DC.

Y. Choueka and S. Lusignan. 1985. Disambiguation by short contexts. Computers and the Humanities, 19:147-157.

Scott Cost and Steven Salzberg. 1993. A weighted nearest neighbor algorithm for learning with symbolic features. Machine Learning, 10(1):57-78.

Doug Cutting, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. A practical part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, pages 133-140.

William Gale, Kenneth Ward Church, and David Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics, Newark, Delaware.

Graeme Hirst. 1987. Semantic Interpretation and the Resolution of Ambiguity. Cambridge University Press, Cambridge.

Edward Kelly and Phillip Stone. 1975. Computer Recognition of English Word Senses. North-Holland, Amsterdam.

Claudia Leacock, Geoffrey Towell, and Ellen Voorhees. 1993. Corpus-based statistical sense resolution. In Proceedings of the ARPA Human Language Technology Workshop.

Alpha K. Luk. 1995. Statistical sense disambiguation with relatively small corpora using dictionary definitions. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts.

Susan W. McRoy. 1992. Using multiple knowledge sources for word sense discrimination. Computational Linguistics, 18(1):1-30.

George A. Miller, Ed. 1990. WordNet: An on-line lexical database. International Journal of Lexicography, 3(4):235-312.

George A. Miller, Martin Chodorow, Shari Landes, Claudia Leacock, and Robert G. Thomas. 1994. Using a semantic concordance for sense identification. In Proceedings of the ARPA Human Language Technology Workshop.

Paul Procter et al. 1978. Longman Dictionary of Contemporary English.

John Rachlin and Steven Salzberg. 1993. PEBLS 3.0 User's Guide.

C. Stanfill and David Waltz. 1986.
Toward memory-based reasoning. Communications of the ACM, 29(12):1213-1228.

Yorick Wilks, Dan Fass, Cheng-Ming Guo, James E. McDonald, Tony Plate, and Brian M. Slator. 1990. Providing machine tractable dictionary tools. Machine Translation, 5(2):99-154.

David Yarowsky. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 454-460, Nantes, France.

David Yarowsky. 1993. One sense per collocation. In Proceedings of the ARPA Human Language Technology Workshop.

David Yarowsky. 1994. Decision lists for lexical ambiguity resolution: Application to accent restoration in Spanish and French. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico.

David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts.

Uri Zernik. 1990. Tagging word senses in corpus: the needle in the haystack revisited. Technical Report 90CRD198, GE R&D Center.
INVITED TALK

Eye Movements and Spoken Language Comprehension

Michael K. Tanenhaus*
Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
[email protected]

Michael J. Spivey-Knowlton
Department of Psychology, Cornell University, Ithaca, NY 14853
[email protected]

Kathleen M. Eberhard
Department of Psychology, University of Notre Dame, Notre Dame, IN 46556
[email protected]

Julie C. Sedivy
Department of Linguistics, University of Rochester, Rochester, NY 14627
[email protected]

Paul D. Allopenna
Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
[email protected]

James S. Magnuson
Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627
[email protected]

Abstract

We present an overview of recent work in which eye movements are monitored as people follow spoken instructions to move objects or pictures in a visual workspace. Subjects naturally make saccadic eye-movements to objects that are closely time-locked to relevant information in the instruction. Thus the eye-movements provide a window into the rapid mental processes that underlie spoken language comprehension. We review studies of reference resolution, word recognition, and pragmatic effects on syntactic ambiguity resolution. Our studies show that people seek to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Moreover, referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured.

Introduction

Many important questions about language comprehension can only be answered by examining processes that are closely time-locked to the linguistic input. These processes take place quite rapidly and they are largely opaque to introspection. As a consequence, psycholinguists have increasingly turned to experimental methods designed to tap real-time language processing. These include a variety of reading time measures as well as paradigms in which subjects monitor the incoming speech for targets or respond to visually presented probes. The hope is that these "on-line" measures can provide information that can be used to inform and evaluate explicit computational models of language processing.

Although on-line measures have provided increasingly fine-grained information about the time-course of language processing, they are also limited in some important respects. Perhaps the most serious limitation is that they cannot be used to study language in natural tasks with real-world referents. This makes it difficult to study how interpretation develops. Moreover, the emphasis on processing "decontextualized" language may be underestimating the importance of interpretive processes in immediate language processing.

Recently, we have been exploring a new paradigm for studying spoken language comprehension. Participants in our experiments follow spoken instructions to touch or manipulate objects in a visual workspace while we monitor their eye-movements using a lightweight camera mounted on a headband. The camera, manufactured by Applied Scientific Laboratories, provides an infrared image of the eye at 60 Hz. The center of the pupil and the corneal reflection are tracked to determine the orbit of the eye relative to the head. Accuracy is better than one degree of arc, with virtually unrestricted head and body movements [Ballard, Hayhoe, and Pelz, 1995].
Instructions are spoken into a microphone connected to a Hi-8 VCR. The VCR also records the participant's field of view from a "scene" camera mounted on the headband. The participant's gaze fixation is superimposed on the video image. We analyze the video record frame by frame to determine the location and timing of eye movements with respect to critical words in the instruction. We find that subjects make eye-movements to objects in the visual workspace that are closely time-locked to relevant information in the instruction. Thus the timing and patterns of the eye movements provide a window into comprehension processes as the speech unfolds. Unlike most of the on-line measures that have been used to study spoken language processing in the past, our procedure can be used to examine comprehension during natural tasks with real-world referents [Tanenhaus, Spivey-Knowlton, Eberhard, & Sedivy, 1996].

In the remainder of this paper, we review some of our recent work using the visual world paradigm. We will focus on three areas: (a) reference resolution; (b) word recognition; and (c) the interaction of referential context and syntactic ambiguity resolution.

Reference Resolution

Evidence for Incremental Interpretation

In order to investigate the time course with which people establish reference, we use different displays to manipulate where in an instruction the referent of a definite noun phrase becomes unique. The timing and patterns of the eye-movements clearly show that people establish reference incrementally by continuously evaluating the information in the instruction against the alternatives in the visual workspace. For example, in one experiment [Eberhard, Spivey-Knowlton, Sedivy & Tanenhaus, 1995], participants were told to touch one of four blocks. The blocks varied along three dimensions: marking (plain or starred), color (pink, yellow, blue and red), and shape (square or rectangle). The instructions referred to the block using a definite noun phrase with adjectives (e.g., "Touch the starred yellow square."). The display determined which word in the noun phrase disambiguated the target block with respect to the visual alternatives. For example, the earliest point of disambiguation would be after "starred" if only one of the blocks was starred, after "yellow" if only one of the starred blocks was yellow, and after "square" if there were two starred yellow blocks, only one of which was a square (instructions with definite noun phrases always had a unique referent).

An instruction began with subjects looking at a fixation cross. We then measured the latency from the beginning of the noun phrase until the onset of the eye-movement to the target object. Subjects made eye-movements before touching the target block on about 75% of the trials. Eye-movement latencies increased monotonically as the point of disambiguation shifted from the marking adjective to the color adjective to the head noun. Moreover, eye-movements were launched within 300 milliseconds of the end of the disambiguating word. It takes about 200 milliseconds from the point that an eye-movement is programmed until the eye actually begins to move. On average, then, participants began programming an eye-movement to the target block once they had heard the disambiguating word and before they had finished hearing the next word in the instruction.

We used the same logic in an experiment with displays containing more objects and syntactically more complex instructions [Eberhard et al., 1995].
Participants were instructed to move miniature playing cards placed on slots on a 5x5 vertical board. Seven cards were displayed on each trial. A trial consisted of a sequence of three instructions. On the instructions of interest, there were two cards of the same suit and denomination in the display. The target card was disambiguated using a restrictive relative clause, e.g., "Put the five of hearts that is below the eight of clubs above the three of diamonds." Figure 1 shows one of the displays for this instruction.

[Figure 1: Display of cards in which there are two fives of hearts. As each five of hearts is below a different numbered card, the above instruction becomes unambiguous at "eight".]

The display determined the point of disambiguation in the instruction. For the display in Figure 1, the point of disambiguation occurs after the word "eight" because only one of the fives is below an eight. We also used an early point of disambiguation display, in which only one of the potential target cards was immediately below a "context" card, and a late point of disambiguation display, in which the denomination of the "context" card disambiguated the target (i.e., one five was below an eight of spades and the other was below an eight of clubs).

Participants always made an eye-movement to the target card before reaching for it. We again found a clear point of disambiguation effect. The mean latency of the eye-movement that preceded the hand movement to the target card (measured from a common point in the instruction) increased monotonically with the point of disambiguation.

In addition, participants made sequences of eye-movements which made it clear that interpretation was taking place continuously. We quantified this by examining the probability that the subject would be looking at (fixating on) particular classes of cards during segments of the instruction. For example, during the noun phrase that introduced the potential targets, "the five of hearts", nearly all of the fixations were on one of the potential target cards. During the beginning of the relative clause "...that is below the...", most of the fixations were to one of the context cards (i.e., the card that was above or below a potential target card). Shortly after the disambiguating word, the fixations shifted to the target card.

Contrastive focus

The presence of a circumscribed set of referents in a visual model makes it possible to use eye-movements to examine how presuppositional information associated with intonation is used in on-line comprehension [Sedivy, Tanenhaus, Spivey-Knowlton, Eberhard & Carlson, 1995]. For example, semantic analyses of contrast have converged on a representation of contrastive focus which involves the integration of presupposed and asserted information [e.g., Rooth, 1992; Kratzer, 1991; Krifka, 1991]. Thus a speaker uttering "Computational linguists give good talks" is making an assertion about computational linguists. However, a speaker who says "COMPUTATIONAL linguists give good talks" is both complimenting the community of computational linguists and making a derogatory comparison with a presupposed set of contrasting entities (perhaps the community of non-computationally oriented linguists).
We explored whether contrast sets are computed on-line by asking whether contrastive focus could be used to disambiguate among potential referents, using a variation on the point of disambiguation manipulation described earlier. We used displays with objects that could differ along three dimensions: size (large or small), color (red, blue and yellow), and shape (circles, triangles and squares). Each display contained four objects [see Sedivy et al., 1995 for details]. Consider now the display illustrated in Figure 2, which contains a small yellow triangle, a large blue circle, and two red squares, one large and one small. With the instruction "Touch the large red square." the point of disambiguation comes after "red". After "large" there are still two possible referents: the large red square and the large blue circle. After "red" only the large red square is a possible referent. However, with the instruction "Touch the LARGE red square", contrastive focus on "large" restricts felicitous reference to objects that have a contrast member differing along the dimension indicated by the contrast (size). In the display in Figure 2, the small red square contrasts with the large red square. However, the display does not contain a contrast element for the large blue circle. Thus, if people use contrastive stress to compute a contrast set on-line, then they should have sufficient information to determine the target object after hearing the size adjective. Thus eye-movements to the target object should be faster with contrastive stress. That is, in fact, what we found. Latencies to launch a saccade to the target were faster with contrastive stress than with neutral stress.

However, there is a possible objection to an interpretation invoking contrast sets. One could argue that stress simply focused participants' attention on the size dimension, allowing them to restrict attention to the large objects. To rule out this alternative, we also included displays with two contrast sets: e.g., two red squares, one large and one small, and two blue circles, one large and one small. With a two-contrast display, contrastive focus is still felicitous. However, the point of disambiguation now does not come until after the color adjective, for instructions with contrastive stress and with neutral stress. Under these conditions, we found no effect of contrast. The interaction between type of display and stress provides clear evidence that participants were computing contrast sets rapidly enough to select among potential referents.

[Figure 2: Display with one large and one small red square. The large circle is blue; the small triangle is yellow.]

Word Recognition

The time course of spoken word recognition is strongly influenced by both the properties of the word itself (e.g., its frequency) and the set of words to which it is phonetically similar. Recognition of a spoken word occurs shortly after the auditory input uniquely specifies a lexical candidate [Marslen-Wilson, 1987]. For polysyllabic words, this is often prior to the end of the word. For example, the word "elephant" would be recognized shortly after the phoneme /f/. Prior to that, the auditory input would be consistent with the beginnings of several words, including "elephant", "elegant", "eloquent" and "elevator". Most models of spoken word recognition account for these data by proposing that multiple lexical candidates are activated as the speech stream unfolds. Recognition then takes place with respect to the set of competing activated candidates. However, models differ in how the candidate set is defined. In some models, such as Marslen-Wilson's classic Cohort model, competition takes place in a strictly "left-to-right" fashion [Marslen-Wilson, 1987]. Thus the competitor set for "paddle" would contain "padlock", which has the same initial phonemes as "paddle", but would not include a phonetically similar word that did not overlap in its initial phonemes, such as a rhyming word like "saddle". In contrast, activation models such as TRACE [McClelland & Elman, 1986] assume that competition can occur throughout the word, and thus rhyming words would also compete for activation.
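The difference between the two competitor definitions is easy to make concrete. The sketch below is ours, not part of the original work, and it uses orthographic prefixes and suffixes as a crude stand-in for the phonemic representations these models actually operate over.

```python
# Sketch (ours): cohort vs. rhyme competitors, with orthography
# standing in for phonemes (a simplification).
def cohort_competitors(word, lexicon, k=3):
    """Words sharing the first k segments with `word` (Cohort-style)."""
    return [w for w in lexicon if w != word and w[:k] == word[:k]]

def rhyme_competitors(word, lexicon, k=4):
    """Words sharing the final k segments (also active in TRACE-style models)."""
    return [w for w in lexicon if w != word and w[-k:] == word[-k:]]

lexicon = ["paddle", "padlock", "saddle", "castle"]
print(cohort_competitors("paddle", lexicon))  # ['padlock']
print(rhyme_competitors("paddle", lexicon))   # ['saddle']
```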
Our initial experiments used real objects and instructions such as "Pick up the candy". We manipulated whether or not the display contained an object with a name that began with the same phonetic sequence as the target object [Tanenhaus, Spivey-Knowlton, Eberhard & Sedivy, 1995; Spivey-Knowlton, Tanenhaus, Eberhard & Sedivy, 1995]. Examples of objects with overlapping initial phonemes were "candy" and "candle", and "doll" and "dolphin". An eye-movement to the target object typically began shortly after the word ended, indicating that programming of the eye-movement often began before the end of the word. The presence of a competitor increased the latency of eye-movements to the target and induced frequent false launches to the competitor. The timing of these eye-movements indicated that they were programmed during the "ambiguous" segment of the target word. These results demonstrated that the two objects with similar names were, in fact, competing as the target word unfolded. Moreover, they highlight the sensitivity of the eye-movement paradigm.

In ongoing work, we are exploring more fine-grained questions about the time-course of lexical activation. For example, in an experiment in progress [Allopenna, Magnuson & Tanenhaus, 1996], the stimuli are line drawings of objects presented on a computer screen (see Figure 3). On each trial, participants are shown a set of four objects and asked to "pick up" one of the objects with the mouse and move it to a specified location on the grid. The paddle was the target object for the trial shown in Figure 3. The display includes a "cohort" competitor sharing initial phonemes with the target (padlock), a rhyme competitor (saddle), and an unrelated object (castle).

[Figure 3: Sample display for the instruction "Pick up the paddle."]

Figure 4 shows the probability that the eye is fixating on the target and the cohort competitor as the spoken target word unfolds. Early on in the speech stream, the eye is on the fixation cross, where subjects are told to look at the beginning of the trial. The probability of a fixation to the target word and the cohort competitor then increases. As the target word unfolds, the probability that the eye is fixated on the target increases compared to the cohort competitor. These data replicate our initial experiments and show how eye-movements can be used to trace the time course of spoken word recognition. Our preliminary data also make it clear that rhyme competitors attract fixations, as predicted by activation models.

[Figure 4: Probabilities of eye-fixations on the target and the cohort competitor as a function of time (0-2500 ms) in a competitor trial.]
Reference and Syntactic Ambiguity Resolution

There has been an unresolved debate in the language processing community about whether there are initial stages in syntactic processing that are strictly encapsulated from influences of referential and pragmatic context. The strongest evidence for encapsulated processing modules has come from studies using sentences with brief syntactic "attachment" ambiguities in which readers have clear preferences for interpretations associated with particular syntactic configurations. For example, in the instruction "Put the apple on the towel...", people prefer to attach the prepositional phrase "on the towel" to the verb "put", rather than to the noun phrase "the apple", thus interpreting it as the argument of the verb (encoding the thematic relation of Goal), rather than as a modifier of the noun. If the instruction continues "Put the apple on the towel into the box", the initial preference for a verb-phrase attachment is revealed by clear "garden-path" effects when "into" is encountered. Encapsulated models account for this preference in terms of principles such as pursue the simplest attachment first, or initially attach a phrase as an argument rather than as an adjunct. In contrast, constraint-based models attribute these preferences to the strength of multiple interacting constraints, including those provided by discourse context [for a recent review, see Tanenhaus and Trueswell, 1995].

An influential proposal, most closely associated with Crain and Steedman [1985], is that pragmatically driven expectations about reference are an important source of discourse constraint. For example, a listener hearing "put the apple..." might reasonably assume that there is a single apple and thus expect to be told where to put the apple (the verb-phrase attachment). However, in a context in which there was more than one apple, the listener might expect to be told which of the apples is the intended referent and thus prefer the noun-phrase attachment.

Numerous experiments have investigated whether or not the referential context established by a discourse context can modify attachment preferences. These studies typically introduce the context in a short paragraph and examine eye-movements to the disambiguating words in target sentences containing the temporary ambiguity. While some studies have shown effects of discourse context, others have not. In particular, strong syntactic preferences persist momentarily, even when the referential context introduced by the discourse supports the normally less-preferred attachment. For example, the preference to initially attach a prepositional phrase to a verb requiring a goal argument (e.g., "put") cannot be overridden by linguistic context. These results have been taken as strong evidence for an encapsulated syntactic processing system.

However, typical psycholinguistic experiments may be strongly biased against finding pragmatic effects on syntactic processing. For example, the context may not be immediately accessible because it has to be represented in memory. Moreover, readers may not consider the context to be relevant when the ambiguous region of the sentence is being processed. We reasoned that a relevant visual context that was available for the listener to interrogate as the linguistic input unfolded might influence initial syntactic analysis even though the same information might not be effective when introduced linguistically. Sample instructions are illustrated by the examples in (1).
(1) a. Put the apple on the towel in the box.
    b. Put the apple that's on the towel in the box.

In sentence (1a), the first prepositional phrase, "on the towel", is ambiguous as to whether it modifies the noun phrase ("the apple"), thus specifying the location of the object to be picked up, or whether it modifies the verb, thus introducing the goal location. In example (1b) the word "that's" disambiguates the phrase as a modifier, serving as an unambiguous control condition.

These instructions were paired with three types of display contexts. Each context contained four sets of real objects placed on a horizontal board. Sample displays for the instructions presented in (1) are illustrated in Figures 5, 6, and 7. Three of the objects were the same across all of the displays. Each display contained the target object (an apple on a towel), the correct goal (a box), and an incorrect goal (another towel). In the one-referent display (Figure 5) there was only one possible referent for the definite noun phrase "the apple", the apple on the towel. Upon hearing the phrase "the apple", participants can immediately identify the object to be moved because there is only one apple, and thus they are likely to assume that "on the towel" is specifying the goal. In the two-referent display (Figure 6), there was a second possible referent (an apple on a napkin). Thus, "the apple" could refer to either of the two apples, and the phrase "on the towel" provides modifying information that specifies which apple is the correct referent. Under these conditions a listener seeking to establish reference should interpret the prepositional phrase "on the towel" as providing disambiguating information about the location of the apple. In the three-and-one display (Figure 7), we added an apple cluster. The uniqueness presupposition associated with the definite noun phrase should bias the listener to assume that the single apple (the apple on the towel) is the intended referent for the theme argument. However, it is more felicitous to use a modifier with this instruction. This display was used to test whether even relatively subtle pragmatic effects would influence syntactic processing.

Strikingly different fixation patterns among the visual contexts revealed that the ambiguous phrase "on the towel" was initially interpreted as the goal in the one-referent context but as a modifier in the two-referent and the three-and-one contexts [for details see Spivey-Knowlton et al., 1995; Tanenhaus et al., 1995]. In the one-referent context, subjects looked at the incorrect goal (e.g., the irrelevant towel) on 55% of the trials shortly after hearing the ambiguous prepositional phrase, whereas they never looked at the incorrect goal with the unambiguous instruction. In contrast, when the context contained two possible referents, subjects rarely looked at the incorrect goal, and there were no differences between the ambiguous and unambiguous instructions. Similar results obtained for the three-and-one context.

Figures 5 and 6 summarize the most typical sequences of eye-movements and their timing in relation to words in the ambiguous instructions for the one-referent and the two-referent contexts, respectively. In the one-referent context, subjects first looked at the target object (the apple) 500 ms after hearing "apple", then looked at the incorrect goal (the towel) 484 ms after hearing "towel". In contrast, with the unambiguous instruction, the first look to a goal did not occur until 500 ms after the subject heard the word "box". In the two-referent context, subjects often looked at both apples, reflecting the fact that the referent of "the apple" was temporarily ambiguous. Subjects looked at the incorrect object on 42% of the unambiguous trials and on 61% of the ambiguous trials. In contrast, in the one-referent context, subjects rarely looked at the incorrect object (0% and 6% of the trials for the ambiguous and unambiguous instructions, respectively). In the two-referent context, subjects selected
In the two-referent context, subjects often looked at both apples, reflecting the fact that the referent of "the apple" was temporarily mnbiguous. Subjects looked at the incorrect object on 42% of the unambiguous trials and on 61% of the mnbiguous trials. In contrast, in the one-referent context, subjects rarely looked at the incorrect object (0% and 6% of die trials for die ambiguous and unambiguous instructions, respectively). In the two-referent context, subjects selected 52 the correct referent as quickly for the ambiguous instruction as for the unambiguous instruction providing additional evidence that the first prepositional phrase was immediately interpreted as a modifier. The three-and-one context provided additional information. Typical sequences of eye-movements for this context are presented in Figure 7. Participants rarely looked at the apple cluster, making their initial eye-movement to the apple on the towel. The next eye-movement was to the box for both the ambiguous and unambiguous instruction. These data also rule out a possible objection to the results from the two referent condition. One could argue that participants were, in fact, temporarily misparsing the prepositional phrase as the goal. However, this misanalysis might not be reflected in eye-movements to the towel because the eye was already in transit, moving between the two apples. However, in the three-and-one condition, the eye remains on the referent throughout the prepositional phrase. Given the sensitivity of eye-movements to probabilistic information, e.g., false launches to cohort and rhyme competitors, it is difficult to argue that the participants experienced a temporary garden-path that was too brief to influence eye-movements. "Put the apple on the towel in the box." A B C D =,,._ I I , I , I , I ,e,.- ms 0 500 1000 1500 2000 2500 "Put the apple that's on the towel in the box." A' B' I I , I I , I I rns 0 500 1000 1500 2000 2500 =p,,,.- Figure 5: Typical sequence of eye movements in the one- referent context for the ambiguous and unambiguous instructions. Letters on the timeline show when in the instruction each eye movement occurred, as determined by mean latency of that type of eye movement (A' and B' correspond to the unambiguous instruction). "Put the apple on the towel in the box." A B C re= I I I I I I .e-- ms 0 500 1000 1500 2000 2500 "Put the apple that's on the towel in the box." I I I A. I B ; C, rns 0 500 1000 1500 2000 2500 "~"- Figure 6: Typical sequence of eye movements in the two- referent context. Note that the sequence and the timing of eye movements, relative to the nouns in the speech stream, did not differ for the ambiguous and unambiguous instructions. A B © ©© "Put the apple on the towel in the box." A B i=~ I , I , I , I , I I , r ms 0 500 1000 1500 2000 2500 "Put the apple that's on the towel in the box." I . I I A. I I , BI ms 0 500 1000 1500 2000 2500 Figure 7: Typical sequence of eye movements in the three- and-one context. Note that the sequence and the timing of eye movements, relative to the nouns in the speech stream, did not differ for the ambiguous and unambiguous instructions. 53 Conclusion We have reviewed results establishing that, with well- defined tasks, eye-movements can be used to observe under natural conditions the rapid mental processes that underlie spoken language comprehension. 
We believe that this paradigm will prove valuable for addressing questions on a full spectrum of topics in spoken language comprehension, ranging from the uptake of acoustic information during word recognition to conversational interactions during cooperative problem solving. Our results demonstrate that in natural contexts people interpret spoken language continuously, seeking to establish reference with respect to their behavioral goals during the earliest moments of linguistic processing. Thus our results provide strong support for models that allow continuous interpretation. Our experiments also show that referentially relevant non-linguistic information immediately affects how the linguistic input is initially structured. Given these results, approaches to language comprehension that emphasize fully encapsulated processing modules are unlikely to prove fruitful. More promising are approaches in which grammatical constraints are integrated into processing systems that coordinate linguistic and non-linguistic information as the linguistic input is processed.

Acknowledgments

* This paper summarizes work that the invited talk by the first author (MKT) was based upon. Supported by NIH resource grant 1-P41-RR09283; NIH HD27206 to MKT; NIH F32DC00210 to PDA; NSF Graduate Research Fellowships to MJS-K and JSM; and a Canadian Social Science Research Fellowship to JCS.

References

Allopenna, P. D., Magnuson, J. S., & Tanenhaus, M. K. (1996). Watching spoken language perception: Using eye-movements to track lexical access. Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society. Mahwah, NJ: Erlbaum.

Ballard, D., Hayhoe, M., & Pelz, J. (1995). Memory representations in natural tasks. Journal of Cognitive Neuroscience, 7, 68-82.

Crain, S., & Steedman, M. (1985). On not being led up the garden path. In Dowty, Karttunen & Zwicky (eds.), Natural Language Parsing. Cambridge, MA: Cambridge U. Press.

Eberhard, K., Spivey-Knowlton, M., Sedivy, J., & Tanenhaus, M. (1995). Eye movements as a window into real-time spoken language comprehension in natural contexts. Journal of Psycholinguistic Research, 24, 409-436.

Kratzer, J. (1991). Representation of focus. In A. von Stechow & D. Wunderlich (Eds.), Semantik: Ein internationales Handbuch der zeitgenössischen Forschung. Berlin: Walter de Gruyter.

Krifka, M. (1991). A compositional semantics for multiple focus constructions. Proceedings of Semantics and Linguistic Theory (SALT) I, Cornell Working Papers, 11.

Marslen-Wilson, W. D. (1987). Functional parallelism in spoken word-recognition. Cognition, 25, 71-102.

McClelland, J. L., & Elman, J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18, 1-86.

Rooth, M. (1992). A theory of focus interpretation. Natural Language Semantics, 1, 75-116.

Sedivy, J., Tanenhaus, M., Spivey-Knowlton, M., Eberhard, K., & Carlson, G. (1995). Using intonationally-marked presuppositional information in on-line language processing: Evidence from eye movements to a visual model. Proceedings of the 17th Annual Conference of the Cognitive Science Society (pp. 375-380). Hillsdale, NJ: Erlbaum.

Spivey-Knowlton, M., Tanenhaus, M., Eberhard, K., & Sedivy, J. (1995). Eye-movements accompanying language and action in a visual context: Evidence against modularity. Proceedings of the 17th Annual Conference of the Cognitive Science Society (pp. 25-30). Hillsdale, NJ: Erlbaum.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1995).
Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634.

Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M., & Sedivy, J. C. (1996). Using eye-movements to study spoken language comprehension: Evidence for visually mediated incremental interpretation. In T. Inui & J. L. McClelland (eds.), Attention and Performance XVI: Information Integration in Perception and Communication, 457-478. Cambridge, MA: MIT Press.

Tanenhaus, M., & Trueswell, J. (1995). Sentence comprehension. In J. Miller & P. Eimas (Eds.), Handbook of Perception and Cognition, Volume 11: Speech, Language and Communication, 217-262. New York: Academic Press.
A FULLY STATISTICAL APPROACH TO NATURAL LANGUAGE INTERFACES

Scott Miller, David Stallard, Robert Bobrow, Richard Schwartz
BBN Systems and Technologies
70 Fawcett Street
Cambridge, MA 02138
[email protected], [email protected], [email protected], [email protected]

Abstract

We present a natural language interface system which is based entirely on trained statistical models. The system consists of three stages of processing: parsing, semantic interpretation, and discourse. Each of these stages is modeled as a statistical process. The models are fully integrated, resulting in an end-to-end system that maps input utterances into meaning representation frames.

1. Introduction

A recent trend in natural language processing has been toward a greater emphasis on statistical approaches, beginning with the success of statistical part-of-speech tagging programs (Church 1988), and continuing with other work using statistical techniques, such as BBN PLUM (Weischedel et al. 1993) and NYU Proteus (Grishman and Sterling 1993). More recently, statistical methods have been applied to domain-specific semantic parsing (Miller et al. 1994), and to the more difficult problem of wide-coverage syntactic parsing (Magerman 1995). Nevertheless, most natural language systems remain primarily rule based, and even systems that do use statistical techniques, such as AT&T Chronus (Levin and Pieraccini 1995), continue to require a significant rule-based component. Development of a complete end-to-end statistical understanding system has been the focus of several ongoing research efforts, including (Miller et al. 1995) and (Koppelman et al. 1995). In this paper, we present such a system.

The overall structure of our approach is conventional, consisting of a parser, a semantic interpreter, and a discourse module. The implementation and integration of these elements is far less conventional. Within each module, every processing step is assigned a probability value, and very large numbers of alternative theories are pursued in parallel. The individual modules are integrated through an n-best paradigm, in which many theories are passed from one stage to the next, together with their associated probability scores. The meaning of a sentence is determined by taking the highest scoring theory from among the n-best possibilities produced by the final stage in the model.

Some key advantages to statistical modeling techniques are:

• All knowledge required by the system is acquired through training examples, thereby eliminating the need for hand-written rules. In parsing, for example, it is sufficient to provide the system with examples specifying the correct parses for a set of training examples. There is no need to specify an exact set of rules or a detailed procedure for producing such parses.

• All decisions made by the system are graded, and there are principled techniques for estimating the gradations. The system is thus free to pursue unusual theories, while remaining aware of the fact that they are unlikely. In the event that a more likely theory exists, then the more likely theory is selected, but if no more likely interpretation can be found, the unlikely interpretation is accepted.

The focus of this work is primarily to extract sufficient information from each utterance to give an appropriate response to a user's request. A variety of problems regarded as standard in computational linguistics, such as quantification, reference and the like, are thus ignored.
To evaluate our approach, we trained an experimental system using data from the Air Travel Information (ATIS) domain (Bates et al. 1990; Price 1990). The selection of ATIS was motivated by three concerns. First, a large corpus of ATIS sentences already exists and is readily available. Second, ATIS provides an existing evaluation methodology, complete with independent training and test corpora, and scoring programs. Finally, evaluating on a common corpus makes it easy to compare the performance of the system with those based on different approaches. We have evaluated our system on the same blind test sets used in the ARPA evaluations (Pallett et al. 1995), and present a preliminary result at the conclusion of this paper.

The remainder of the paper is divided into four sections, one describing the overall structure of our models, and one for each of the three major components of parsing, semantic interpretation and discourse.

2. Model Structure

Given a string of input words $W$ and a discourse history $H$, the task of a statistical language understanding system is to search among the many possible discourse-dependent meanings $M_D$ for the most likely meaning $M_0$:

  $M_0 = \arg\max_{M_D} P(M_D \mid W, H)$.

Directly modeling $P(M_D \mid W, H)$ is difficult because the gap that the model must span is large. A common approach in non-statistical natural language systems is to bridge this gap by introducing intermediate representations such as parse structure and pre-discourse sentence meaning. Introducing these intermediate levels into the statistical framework gives:

  $M_0 = \arg\max_{M_D} \sum_{M_S, T} P(M_D \mid W, H, M_S, T)\, P(M_S, T \mid W, H)$

where $T$ denotes a semantic parse tree, and $M_S$ denotes pre-discourse sentence meaning. This expression can be simplified by introducing two independence assumptions:

1. Neither the parse tree $T$, nor the pre-discourse meaning $M_S$, depends on the discourse history $H$.

2. The post-discourse meaning $M_D$ does not depend on the words $W$ or the parse structure $T$, once the pre-discourse meaning $M_S$ is determined.

Under these assumptions,

  $M_0 = \arg\max_{M_D} \sum_{M_S, T} P(M_D \mid H, M_S)\, P(M_S, T \mid W)$.

Next, the probability $P(M_S, T \mid W)$ can be rewritten using Bayes rule as:

  $P(M_S, T \mid W) = \frac{P(M_S, T)\, P(W \mid M_S, T)}{P(W)}$

leading to:

  $M_0 = \arg\max_{M_D} \sum_{M_S, T} P(M_D \mid H, M_S)\, \frac{P(M_S, T)\, P(W \mid M_S, T)}{P(W)}$.

Now, since $P(W)$ is constant for any given word string, the problem of finding the meaning $M_D$ that maximizes

  $\sum_{M_S, T} P(M_D \mid H, M_S)\, \frac{P(M_S, T)\, P(W \mid M_S, T)}{P(W)}$

is equivalent to finding the $M_D$ that maximizes

  $\sum_{M_S, T} P(M_D \mid H, M_S)\, P(M_S, T)\, P(W \mid M_S, T)$.

Thus,

  $M_0 = \arg\max_{M_D} \sum_{M_S, T} P(M_D \mid H, M_S)\, P(M_S, T)\, P(W \mid M_S, T)$.

We now introduce a third independence assumption:

3. The probability of words $W$ does not depend on meaning $M_S$, given that parse $T$ is known.

This assumption is justified because the word tags in our parse representation specify both semantic and syntactic class information. Under this assumption:

  $M_0 = \arg\max_{M_D} \sum_{M_S, T} P(M_D \mid H, M_S)\, P(M_S, T)\, P(W \mid T)$.

Finally, we assume that most of the probability mass for each discourse-dependent meaning is focused on a single parse tree and on a single pre-discourse meaning. Under this (Viterbi) assumption, the summation operator can be replaced by the maximization operator, yielding:

  $M_0 = \arg\max_{M_D} \left( \max_{M_S, T} P(M_D \mid H, M_S)\, P(M_S, T)\, P(W \mid T) \right)$.

This expression corresponds to the computation actually performed by our system, which is shown in Figure 1. Processing proceeds in three stages:

1. Word string $W$ arrives at the parsing model. The full space of possible parses $T$ is searched for n-best candidates according to the measure $P(T)\, P(W \mid T)$. These parses, together with their probability scores, are passed to the semantic interpretation model.

2. The constrained space of candidate parses $T$ (received from the parsing model), combined with the full space of possible pre-discourse meanings $M_S$, is searched for n-best candidates according to the measure $P(M_S, T)\, P(W \mid T)$. These pre-discourse meanings, together with their associated probability scores, are passed to the discourse model.

3. The constrained space of candidate pre-discourse meanings $M_S$ (received from the semantic interpretation model), combined with the full space of possible post-discourse meanings $M_D$, is searched for the single candidate that maximizes $P(M_D \mid H, M_S)\, P(M_S, T)\, P(W \mid T)$, conditioned on the current history $H$. The discourse history is then updated and the post-discourse meaning is returned.

[Figure 1: Overview of statistical processing: the word string flows through the parsing model (scored by $P(T)\,P(W \mid T)$), the semantic interpretation model ($P(M_S, T)\,P(W \mid T)$), and the discourse model ($P(M_D \mid M_S, H)\,P(M_S, T)\,P(W \mid T)$).]
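The control structure implied by this derivation is a cascade of n-best searches. The sketch below (ours, not the authors' implementation) shows that cascade in outline; the three model-specific functions passed as arguments are hypothetical placeholders for the components described in the following sections.

```python
# Sketch (ours) of the n-best cascade. The three model arguments are
# hypothetical placeholders for the components of Sections 3-5:
#   parse_nbest(words, n)        -> [(tree, P(T)*P(W|T)), ...]
#   interpret_nbest(tree, p, n)  -> [(ms, tree, P(Ms,T)*P(W|T)), ...]
#   discourse_best(ms, history)  -> (md, P(Md|H,Ms))
def understand(words, history, parse_nbest, interpret_nbest,
               discourse_best, n=10):
    parses = parse_nbest(words, n)                    # stage 1
    meanings = []
    for tree, p_tw in parses:                         # stage 2
        meanings.extend(interpret_nbest(tree, p_tw, n))
    meanings = sorted(meanings, key=lambda m: -m[2])[:n]
    scored = []
    for ms, tree, p in meanings:                      # stage 3
        md, p_d = discourse_best(ms, history)
        scored.append((p_d * p, md))                  # full product measure
    return max(scored, key=lambda c: c[0])[1]
```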
The full space of possible parses T is searched for n-best candidates according to the measure P(T)P(WIT). These parses, together with their probability scores, are passed to the semantic interpretation model. 2. The constrained space of candidate parses T (received from the parsing model), combined with the full space of possible pre-discourse meanings Ms, is searched for n-best candidates according to the measure P(M s,T) P(W I T). These pre-discourse meanings, together with their associated probability scores, are passed to the discourse model. Thus, ___ Parsing ~ lnterpretati°n I f[ Model Model j \ Model y \ / / / P(T)P(WIT) P(Ms,T)P(WIT) P(MolMs,H)P(Ms,T)P(WIT) Figure 1: Overview of statistical processing. 56 3. The constrained space of candidate pre-discourse meanings Ms (received from the semantic interpretation model), combined with the full space of possible post- discourse meanings Mo, is searched for the single candidate that maximizes P( M o I H, M s) P( M s,T) P(W I T), conditioned on the current history H. The discourse history is then updated and the post-discourse meaning is returned. We now proceed to a detailed discussion of each of these three stages, beginning with parsing. 3. Parsing Our parse representation is essentially syntactic in form, patterned on a simplified head-centered theory of phrase structure. In content, however, the parse trees are as much semantic as syntactic. Specifically, each parse node indicates both a semantic and a syntactic class (excepting a few types that serve purely syntactic functions). Figure 2 shows a sample parse of a typical ATIS sentence. The semantic/syntactic character of this representation offers several advantages: 1. Annotation: Well-founded syntactic principles provide a framework for designing an organized and consistent annotation schema. 2. Decoding: Semantic and syntactic constraints are simultaneously available during the decoding process; the decoder searches for parses that are both syntactically and semantically coherent. 3. Semantic Interpretation: Semantic/syntactic parse trees are immediately useful to the semantic interpretation process: semantic labels identify the basic units of meaning, while syntactic structures help identify relationships between those units. 3.1 Statistical Parsing Model The parsing model is a probabilistic recursive transition network similar to those described in (Miller et ai. 1994) and (Seneff 1992). The probability of a parse tree T given a word string Wis rewritten using Bayes role as: P(T) P(W I T) P(TIW) = P(W) Since P(W) is constant for any given word string, candidate parses can be ranked by considering only the product P(T) P(W I 7"). The probability P(T) is modeled by state transition probabilities in the recursive transition network, and P(W I T) is modeled by word transition probabilities. * State transition probabilities have the form P(state n I staten_l, stateup) . For example, P(location/pp I arrival/vp-head, arrival/vp) is the probability of a location/pp following an arrival/vp- head within an arrival/vp constituent. • Word transition probabilities have the form P(word n I wordn_ l,tag) . For example, P("class" I "first", class-of-service/npr) is the probability of the word sequence "first class" given the tag class-of-service/npr. Each parse tree T corresponds directly with a path through the recursive transition network. 
The probability P(T) P(W I 1") is simply the product of each transition /wh-question // // // / / 1 / / / / ~v~P a~re / I / /wh-head /aux /det /np-head /comp /vp-head /prep /apt I I I I I I I I When do the flights that leave from Boston /vp /vp ation p Q arrival location city /vp-head /prep /npr J J I arrive in Atlanta Figure 2: A sample parse tree. 57 probability along the path corresponding to T. 3.2 Training the Parsing Model Transition probabilities are estimated directly by observing occurrence and transition frequencies in a training corpus of annotated parse trees. These estimates are then smoothed to overcome sparse data limitations. The semantic/syntactic parse labels, described above, provide a further advantage in terms of smoothing: for cases of undertrained probability estimates, the model backs off to independent syntactic and semantic probabilities as follows: Ps(semlsyn n I semlsynn_ 1 ,semlsyn up) = ~.( semlsyn n I semlsynn_ l ,seral syn up) x P(semlsyn n I semlsynn_ 1 ,sem/syn up) + (1 - ,].(semlsyn n I semlsynn_ ! ,semlsyn up) X P(sem n I semup) P(syn n I synn_l,synup) where Z is estimated as in (Placeway et al. 1993). Backing off to independent semantic and syntactic probabilities potentially provides more precise estimates than the usual strategy of backing off directly form bigram to unigram models. 3.3 Searching the Parsing Model In order to explore the space of possible parses efficiently, the parsing model is searched using a decoder based on an adaptation of the Earley parsing algorithm (Earley 1970). This adaptation, related to that of (Stolcke 1995), involves reformulating the Earley algorithm to work with probabilistic recursive transition networks rather than with deterministic production rules. For details of the decoder, see (Miller 1996). 4. Semantic Interpretation Both pre-discourse and post-discourse meanings in our current system are represented using a simple frame representation. Figure 3 shows a sample semantic frame corresponding to the parse in Figure 2. Air-Transportation Show: (Arrival-Time) Origin: (City "Boston") Destination: (City "Atlanta") Figure 3: A sample semantic frame. Recall that the semantic interpreter is required to compute P(Ms,T) P(WIT ). The conditional word probability P(WIT) has already been computed during the parsing phase and need not be recomputed. The current problem, then, is to compute the prior probability of meaning Ms and parse T occurring together. Our strategy is to embed the instructions for constructing Ms directly into parse T o resulting in an augmented tree structure. For example, the instructions needed to create the frame shown in Figure 3 are: 1. Create an Air-Transportation frame. 2. Fill the Show slot with Arrival-Time. 3. Fill the Origin slot with (City "Boston") 4. Fill the Destination slot with (City "Atlanta") These instructions are attached to the parse tree at the points indicated by the circled numbers (see Figure 2). The probability P(Ms,T ) is then simply the prior probability of producing the augmented tree structure. 4.1 Statistical Interpretation Model Meanings Ms are decomposed into two parts: the frame type FT, and the slot fillers S. The frame type is always attached to the topmost node in the augmented parse tree, while the slot filling instructions are attached to nodes lower down in the tree. Except for the topmost node, all parse nodes are required to have some slot filling operation. For nodes that do not directly trigger any slot fill operation, the special operation null is attached. 
The probability P(Ms, T) is then: P( Ms,T) = P( FT, S,T)= P( FT) P(T I FT) P(S I FT, T). Obviously, the prior probabilities P(FT) can be obtained directly from the training data. To compute P(T I FT), each of the state transitions from the previous parsing model are simply rescored conditioned on the frame type. The new state transition probabilities are: P(state n I staten_ t, stateup, FT) . To compute P(S I FT, T) , we make the independence assumption that slot filling operations depend only on the frame type, the slot operations already performed, and on the local parse structure around the operation. This local neighborhood consists of the parse node itself, its two left siblings, its two right siblings, and its four immediate ancestors. Further, the syntactic and semantic components of these nodes are considered independently. Under these assumptions, the probability of a slot fill operation is: P(slot n I FT, Sn_l,semn_ 2 ..... sem n ..... semn+2, Synn-2 ..... synn ..... Synn+2, semupl ..... semup4, Synupl ..... synup4 ) and the probability P(S I FT, T) is simply the product of all such slot fill operations in the augmented tree. 4.2 Training the Semantic Interpretation Model Transition probabilities are estimated from a training corpus of augmented trees. Unlike probabilities in the parsing model, there obviously is not sufficient training data to estimate slot fill probabilities directly. Instead, these probabilities are estimated by statistical decision trees similar 58 to those used in the Spatter parser (Magerman 1995). Unlike more common decision tree classifiers, which simply classify sets of conditions, statistical decision trees give a probability distribution over all possible outcomes. Statistical decision trees are constructed in a two phase process. In the first phase, a decision tree is constructed in the standard fashion using entropy reduction to guide the construction process. This phase is the same as for classifier models, and the distributions at the leaves are often extremely sharp, sometimes consisting of one outcome with probability I, and all others with probability 0. In the second phase, these distributions are smoothed by mixing together distributions of various nodes in the decision tree. As in (Magerman 1995), mixture weights are determined by deleted interpolation on a separate block of training data. 4.3 Searching the Semantic Interpretation Model Searching the interpretation model proceeds in two phases. In the first phase, every parse T received from the parsing model is rescored for every possible frame type, computing P(T I FT) (our current model includes only a half dozen different types, so this computation is tractable). Each of these theories is combined with the corresponding prior probability P(FT) yielding P(FT) P(T I FT). The n-best of these theories are then passed to the second phase of the interpretation process. This phase searches the space of slot filling operations using a simple beam search procedure. For each combination of FT and T, the beam search procedure considers all possible combinations of fill operations, while pruning partial theories that fall beneath the threshold imposed by the beam limit. The surviving theories are then combined with the conditional word probabilities P(W I T), computed during the parsing model. The final result of these steps is the n-best set of candidate pre-discourse meanings, scored according to the measure P(M s,T) P(WIT). 5. 
5. Discourse Processing

The discourse module computes the most probable post-discourse meaning of an utterance from its pre-discourse meaning and the discourse history, according to the measure: P(M_D | H, M_S) P(M_S, T) P(W | T). Because pronouns can usually be ignored in the ATIS domain, our work does not treat the problem of pronominal reference. Our probability model is instead shaped by the key discourse problem of the ATIS domain, which is the inheritance of constraints from context. This inheritance phenomenon, similar in spirit to one-anaphora, is illustrated by the following dialogue:

USER1: I want to fly from Boston to Denver.
SYSTEM1: <displays Boston to Denver flights>
USER2: Which flights are available on Tuesday?
SYSTEM2: <displays Boston to Denver flights for Tuesday>

In USER2, it is obvious from context that the user is asking about flights whose ORIGIN is BOSTON and whose DESTINATION is DENVER, and not all flights between any two cities. Constraints are not always inherited, however. For example, in the following continuation of this dialogue:

USER3: Show me return flights from Denver to Boston,

it is intuitively much less likely that the user means the "on Tuesday" constraint to continue to apply. The discourse history H simply consists of the list of all post-discourse frame representations for all previous utterances in the current session with the system. These frames are the source of candidate constraints to be inherited. For most utterances, we make the simplifying assumption that we need only look at the last (i.e. most recent) frame in this list, which we call M_E.

5.1 Statistical Discourse Model

The statistical discourse model maps a 23 element input vector X onto a 23 element output vector Y. These vectors have the following interpretations:
• X represents the combination of previous meaning M_E and the pre-discourse meaning M_S.
• Y represents the post-discourse meaning M_D.
Thus, P(M_D | H, M_S) = P(Y | X). The 23 elements in vectors X and Y correspond to the 23 possible slots in the frame schema. Each element in X can have one of five values, specifying the relationship between the filler of the corresponding slot in M_E and M_S:
INITIAL - slot filled in M_S but not in M_E
TACIT - slot filled in M_E but not in M_S
REITERATE - slot filled in both M_E and M_S; value the same
CHANGE - slot filled in both M_E and M_S; value different
IRRELEVANT - slot not filled in either M_E or M_S
Output vector Y is constructed by directly copying all fields from input vector X except those labeled TACIT. These direct copying operations are assigned probability 1. For fields labeled TACIT, the corresponding field in Y is filled with either INHERITED or NOT-INHERITED. The probability of each of these operations is determined by a statistical decision tree model. The discourse model contains 23 such statistical decision trees, one for each slot position. An ordering is imposed on the set of frame slots, such that inheritance decisions for slots higher in the order are conditioned on the decisions for slots lower in the order. The probability P(Y | X) is then the product of all 23 decision probabilities:

P(Y | X) = P(y_1 | X) P(y_2 | X, y_1) ... P(y_23 | X, y_1, y_2, ..., y_22).

5.2 Training the Discourse Model

The discourse model is trained from a corpus annotated with both pre-discourse and post-discourse semantic frames. Corresponding pairs of input and output (X, Y) vectors are computed from these annotations, which are then used to train the 23 statistical decision trees. The training procedure for estimating these decision tree models is similar to that used for training the semantic interpretation model.
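To make the vector encoding concrete, here is one possible rendering of how X could be computed from M_E and M_S and how the output vectors Y over the TACIT slots could be scored. This is an assumption-laden sketch, not the original implementation: frames are plain dictionaries, and p_inherit stands in for the 23 trained decision trees (its signature is our invention).

```python
from itertools import product

def encode_x(m_e, m_s, slots):
    """Relate the previous frame m_e and pre-discourse frame m_s."""
    x = {}
    for slot in slots:
        in_e, in_s = slot in m_e, slot in m_s
        if in_s and not in_e:   x[slot] = "INITIAL"
        elif in_e and not in_s: x[slot] = "TACIT"
        elif in_e and in_s:
            x[slot] = "REITERATE" if m_e[slot] == m_s[slot] else "CHANGE"
        else:                   x[slot] = "IRRELEVANT"
    return x

def best_y(x, slots, p_inherit):
    """Exhaustively score output vectors: only TACIT slots vary, all
    other fields are copied with probability 1."""
    tacit = [s for s in slots if x[s] == "TACIT"]
    best, best_p = None, -1.0
    for choices in product(["INHERITED", "NOT-INHERITED"], repeat=len(tacit)):
        y, p = dict(x), 1.0
        for slot, choice in zip(tacit, choices):
            y[slot] = choice
            p *= p_inherit(slot, choice, x, y)   # one decision tree per slot
        if p > best_p:
            best, best_p = y, p
    return best, best_p
```

The exhaustive enumeration mirrors the feasibility remark in the text: the search space is 2^k for k TACIT fields, and k is normally small.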
5.3 Searching the Discourse Model

Searching the discourse model begins by selecting a meaning frame M_E from the history stack H, and combining it with each pre-discourse meaning M_S received from the semantic interpretation model. This process yields a set of candidate input vectors X. Then, for each vector X, a search process exhaustively constructs and scores all possible output vectors Y according to the measure P(Y | X) (this computation is feasible because the number of TACIT fields is normally small). These scores are combined with the pre-discourse scores P(M_S, T) P(W | T), already computed by the semantic interpretation process. This computation yields:

P(Y | X) P(M_S, T) P(W | T),

which is equivalent to:

P(M_D | H, M_S) P(M_S, T) P(W | T).

The highest scoring theory is then selected, and a straightforward computation derives the final meaning frame M_D from output vector Y.

6. Experimental Results

We have trained and evaluated the system on a common corpus of utterances collected from naive users in the ATIS domain. In this test, the system was trained on approximately 4000 ATIS 2 and ATIS 3 sentences, and then evaluated on the December 1994 test material (which was held aside as a blind test set). The combined system produced an error rate of 21.6%. Work on the system is ongoing, however, and interested parties are encouraged to contact the authors for more recent results.

7. Conclusion

We have presented a fully trained statistical natural language interface system, with separate models corresponding to the classical processing steps of parsing, semantic interpretation and discourse. Much work remains to be done in order to refine the statistical modeling techniques, and to extend the statistical models to additional linguistic phenomena such as quantification and anaphora resolution.

8. Acknowledgments

We wish to thank Robert Ingria for his effort in supervising the annotation of the training corpus, and for his helpful technical suggestions. This work was supported by the Advanced Research Projects Agency and monitored by the Office of Naval Research under Contract No. N00014-91-C-0115, and by Ft. Huachuca under Contract Nos. DABT63-94-C-0061 and DABT63-94-C-0063. The content of the information does not necessarily reflect the position or the policy of the Government and no official endorsement should be inferred.

9. References

Bates, M., Boisen, S., and Makhoul, J. "Developing an Evaluation Methodology for Spoken Language Systems." Speech and Natural Language Workshop, Hidden Valley, Pennsylvania, 102-108.
Church, K. "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text." Second Conference on Applied Natural Language Processing, Austin, Texas.
Earley, J. (1970). "An Efficient Context-Free Parsing Algorithm." Communications of the ACM, 6, 451-455.
Grishman, R., and Sterling, J. "Description of the Proteus System as Used for MUC-5." Fifth Message Understanding Conference, Baltimore, Maryland, 181-194.
Koppelman, J., Pietra, S. D., Epstein, M., Roukos, S., and Ward, T. "A statistical approach to language modeling for the ATIS task." Eurospeech 1995, Madrid.
Levin, E., and Pieraccini, R. "CHRONUS: The Next Generation." Spoken Language Systems Technology Workshop, Austin, Texas, 269-271.
Magerman, D. "Statistical Decision Tree Models for Parsing."
33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, Massachusetts, 276-283.
Miller, S. (1996). "Hidden Understanding Models," Northeastern University, Boston, MA.
Miller, S., Bates, M., Bobrow, R., Ingria, R., Makhoul, J., and Schwartz, R. "Recent Progress in Hidden Understanding Models." Spoken Language Systems Technology Workshop, Austin, Texas, 276-280.
Miller, S., Bobrow, R., Ingria, R., and Schwartz, R. "Hidden Understanding Models of Natural Language." 32nd Annual Meeting of the Association for Computational Linguistics, Las Cruces, New Mexico, 25-32.
Pallett, D., Fiscus, J., Fisher, W., Garofolo, J., Lund, B., Martin, A., and Przybocki, M. "1994 Benchmark Tests for the ARPA Spoken Language Program." Spoken Language Systems Technology Workshop, Austin, Texas.
Placeway, P., Schwartz, R., Fung, P., and Nguyen, L. "The Estimation of Powerful Language Models from Small and Large Corpora." IEEE ICASSP, 33-36.
Price, P. "Evaluation of Spoken Language Systems: the ATIS Domain." Speech and Natural Language Workshop, Hidden Valley, Pennsylvania, 91-95.
Seneff, S. (1992). "TINA: A Natural Language System for Spoken Language Applications." Computational Linguistics, 18(1), 61-86.
Stolcke, A. (1995). "An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities." Computational Linguistics, 21(2), 165-201.
Weischedel, R., Ayuso, D., Boisen, S., Fox, H., Ingria, R., Matsukawa, T., Papageorgiou, C., MacLaughlin, D., Kitagawa, M., Sakai, T., Abe, J., Hosihi, H., Miyamoto, Y., and Miller, S. "Description of the PLUM System as Used for MUC-5." Fifth Message Understanding Conference, Baltimore, Maryland, 93-107.
| 1996 | 8 |
A Robust System for Natural Spoken Dialogue

James F. Allen, Bradford W. Miller, Eric K. Ringger, Teresa Sikorski
Dept. of Computer Science, University of Rochester, Rochester, NY 14627
{james, miller, ringger, sikorski}@cs.rochester.edu
http://www.cs.rochester.edu/research/trains/

Abstract

This paper describes a system that leads us to believe in the feasibility of constructing natural spoken dialogue systems in task-oriented domains. It specifically addresses the issue of robust interpretation of speech in the presence of recognition errors. Robustness is achieved by a combination of statistical error post-correction, syntactically- and semantically-driven robust parsing, and extensive use of the dialogue context. We present an evaluation of the system using time-to-completion and the quality of the final solution that suggests that most native speakers of English can use the system successfully with virtually no training.

1. Introduction

While there has been much research on natural dialogue, there have been few working systems because of the difficulties in obtaining robust behavior. Given over twenty years of research in this area, if we can't construct a robust system even in a simple domain then that bodes ill for progress in the field. In particular, without some working systems, we are very limited in how we can evaluate the worth of different models. The prime goal of the work reported here was to demonstrate that it is feasible to construct robust spoken natural dialogue systems. We were not seeking to develop new theories, but rather to develop techniques to enable existing theories to be applied in practice. We chose a domain and task that was as simple as possible yet couldn't be solved without the collaboration of the human and system. In addition, there were three fundamental requirements:
• the system must run in near real-time;
• the user should need minimal training and not be constrained in what can be said; and
• the dialogue should have a concrete result that can be independently evaluated.
The second constraint means we must handle natural dialogue, namely dialogue as people use it rather than a constrained form of interaction determined by the system (which is often called a dialogue). We can only control the complexity of the dialogue by controlling the complexity of the task. Increasing the task complexity naturally increases the complexity of the dialogue. This paper reports on the first stage of this process, working with a highly simplified domain. At the start of this experiment in November 1994, we had no idea whether it was possible. While researchers were reporting good accuracy (upwards of 95%) for speech systems in simple question-answering tasks, our domain was considerably different with a much more spontaneous form of interaction. We also knew that it would not be possible to directly use general models of plan recognition to aid in speech act interpretation (as in Allen & Perrault, 1980, Litman & Allen 1987, Carberry 1990), as these models would not lend themselves to real-time processing. Similarly, it would not be feasible to use general planning models for the system back-end and for planning its responses. We could not, on the other hand, completely abandon the ideas underlying the plan-based approach, as we knew of no other theory that could provide an account for the interactions. Our approach was to try to retain the overall structure of plan-based systems, but to use domain-specific reasoning techniques to provide real-time performance.
Dialogue systems are notoriously hard to evaluate as there is no well-defined "correct answer". So we cannot give end-to-end accuracy measures as is typically done to measure the performance of speech recognition systems and parsing systems. This is especially true when evaluating dialogue robustness, which results from many different sources: correcting speech recognition errors, using semantic knowledge to interpret fragments, and using dialogue strategies to keep the dialogue flowing efficiently despite recognition and interpretation errors. The approach we take is to use task-based evaluation. We measure how well the system does at helping the user solve the problem. The two most telling measures are time-to-completion and the quality of the final solution. In the evaluation described later in this paper, we show that all our subjects were able to use TRAINS-95 to solve problems with only minimal training. We also evaluated the overall effectiveness of our robust processing techniques by comparing spoken dialogues with keyboard dialogues by the same subjects. Even with a 30% word error rate (including insertion errors), speech turned out to be considerably more efficient than keyboard input in this domain.

2. The Task and Example Session

The domain in TRAINS-95 is simple route planning. The user is given a map on a screen showing cities, connections and the locations of a set of trains (see Figure 1), and a verbal specification of a set of destination cities where trains are needed. The task is to find the most efficient set of routes possible. The route planner used by the system is deliberately weak so that interaction is needed to find good plans. Specifically, the planner cannot find routes longer than four hops without an intermediate city, and when it can generate a route, it randomly selects among the possibilities. Typically, the examples in papers on dialogue systems are carefully hand-constructed and designed to show off specific system capabilities. Such examples, however, give little feeling for how robust a system is. Much more revealing is how a system responds to unplanned interactions with naive users, and how it deals with the interaction when things start to go wrong. Thus, we won't present any examples in which everything went smoothly. Rather, we present a dialogue that stresses the system's capabilities, better illuminating what works and what areas require further improvement. The dialogue below was with a non-computer person who had used the system a few times before. In the transcripts, we show the input as it was produced by the speech recognizer in ALL CAPS followed by what was actually said. If the recognition was perfect we just show the speech recognition output. The dialogue contains a significant number of errors and has a 29.5% word error rate. The system display is also shown at key points.¹ In this problem, the engines start at Detroit, Montreal and Albany and the goal is to get trains to Milwaukee, Lexington and Washington.

S: Ok. I think I'm ready to start.
U: OKAY LET'S SEE CONTAIN FROM DETROIT TO WASHINGTON
   Okay. Let's take a train from Detroit to Washington.
S: What route would you like to get from Detroit to Washington?
U: LET'S GO VIA TOLEDO AND PITTSBURGH
S: (displays Figure 2) The terminal at city Scranton is delaying traffic due to localized heavy winds. The terminal at city Baltimore is delaying traffic due to localized heavy winds. An additional five hours will be needed to travel through them due to decreased visibility.
U: NOW LET'S TAKE THE TRAIN FROM DETROIT TO WASHINGTON D S_X ANY
   No. Let's take the train from Detroit to Washington via Cincinnati.
S: (displays Figure 3) Okay.
U: OKAY IT THAT'S OKAY NOW
   Okay that - that's okay now
S: Pardon Me?
U: UP OKAY NOW LET'S TAKE THE TRAIN FROM MONTREAL TO LEXINGTON
   Okay. Now let's take the train from Montreal to Lexington.

¹We apologize for the figures. The actual system uses color displays. The best way to present examples is by video. A QuickTime movie can be found in http://www.cs.rochester.edu/research/trains/ACL96.

[Figure 1: The initial scenario]
[Figure 2: The proposed route]
[Figure 3: The corrected route]

S: I need help choosing a route from Montreal to Lexington.
U: LET'S GO P_M TO TRY
   Let's go via Detroit
S: (displays a route to Detroit but not on to Lexington) Is this OK?
U: YES NOW LET'S GO TO LEXINGTON
S: Yep.
U: OKAY NOW ADD STATE THE LAST TRAIN AND GO FROM ALBANY TO O O'S
   Okay Now let's take the last train and go from Albany to Milwaukee.
S: No. What city are you trying to have the engine at Montreal arrive at, if I caught your drift. I don't understand where we are sending the engine at Albany to.
U: ME A JET ADD ALBANY NEEDS TO GO TO MILWAUKEE
   The engine at Albany needs to go to Milwaukee.
S: Tell me a route to use to get from Albany to Milwaukee, please.
U: I'D GO VIA BUFFALO
   Uh - Go via Buffalo.
S: (displays a route through Toronto and Buffalo) Yes.
U: GO B_X SYRACUSE AT BUFFALO
   Go via Syracuse and Buffalo.
S: (displays Figure 4) yeah.
U: THAT'S COULD I CAN
   That's good. I'm done.
S: Hey, its the programming.
U: I_NEED DONE
   I'm done.

3. The System

The TRAINS-95 system is organized as shown in Figure 5. At the top are the I/O facilities. The speech recognition system is the Sphinx-II system from CMU (Huang et al, 1993). The speech synthesizer is a commercial product: the TRUETALK system from Entropic. The rest of the system was built at Rochester. The display supports a communication language that allows other modules to control the content of the display. It also handles keyboard input. The speech recognition output is passed through the post-processor described in section 4. The parser, described in section 5, accepts input either from the post-processor (for speech) or the display manager (for keyboard), and produces a set of speech act interpretations that are passed to the discourse manager, described in section 6. The discourse manager breaks into a range of subcomponents handling reference, speech act interpretation and planning (the verbal reasoner), and the back-end of the system: the problem solver and domain reasoner. When a speech act is planned for output, it is passed to the generator, which constructs a sentence and passes this to both the speech synthesizer and the display. The generator is a simple template-based system. It uses templates associated with different speech act forms that are instantiated with descriptions of the particular objects involved. The form of these descriptions is defined for each class of objects in the domain.

[Figure 4: The final routes]
[Figure 5: The TRAINS-95 System Architecture — a block diagram connecting speech recognition, the post-processor, the parser, the display manager, the generator and speech generation with the discourse manager and its subcomponents: reference, the verbal reasoner, and the problem solver with the domain reasoner.]

In order to stress the system in our robustness evaluation, we used the ATIS language model provided from CMU.
This system yields an overall word error rate of 30% on TRAINS-95 dialogues, as opposed to a 20% error rate that we can currently obtain by using language models trained on our TRAINS corpus. While this accuracy rate is significantly lower than often reported in the literature, remember that most speech recognition results are reported for read speech, or for constrained dialogue applications such as ATIS. Natural dialogue involves a more spontaneous form of interaction that is much more difficult to interpret.

4. Statistical Error Post-Correction

The following are examples of speech recognition (SR) errors that occurred in the sample dialogue. In each, the words tagged REF indicate what was actually said, while those tagged with HYP indicate what the speech recognition system proposed, and HYP' indicates the output of SPEECHPP, our post-processor. While the corrected transcriptions are not perfect, they are typically a better approximation of the actual utterance. As the first example shows, some recognition errors are simple word-for-word confusions:

HYP: GO B_X SYRACUSE AT BUFFALO
HYP': GO VIA SYRACUSE VIA BUFFALO
REF: GO VIA SYRACUSE AND BUFFALO

In the next example, a single word was replaced by more than one smaller word:

HYP: LET'S GO P_M TO TRY
HYP': LET'S GO P_M TO DETROIT
REF: LET'S GO VIA DETROIT

The post-processor yields fewer errors by effectively refining and tuning the vocabulary used by the speech recognizer. To achieve this, we adapted some techniques from statistical machine translation (such as Brown et al., 1990) in order to model the errors that Sphinx-II makes in our domain. Briefly, the model consists of two parts: a channel model, which accounts for errors made by the SR, and the language model, which accounts for the likelihood of a sequence of words being uttered in the first place. More precisely, given an observed word sequence o from the speech recognizer, SPEECHPP finds the most likely original word sequence by finding the sequence s that maximizes Prob(o|s) × Prob(s), where
• Prob(s) is the probability that the user would utter sequence s, and
• Prob(o|s) is the probability that the SR produces the sequence o when s was actually spoken.
For efficiency, it is necessary to estimate these distributions with relatively simple models by making independence assumptions. For Prob(s), we train a word-bigram "back-off" language model (Katz, 87) from hand-transcribed dialogues previously collected with the TRAINS-95 system. For P(o|s), we build a channel model that assumes independent word-for-word substitutions; i.e.,

Prob(o | s) = ∏_i Prob(o_i | s_i)

The channel model is trained by automatically aligning the hand transcriptions with the output of Sphinx-II on the utterances in the (SPEECHPP) training set and by tabulating the confusions that occurred. We use a Viterbi beam-search to find the s that maximizes the expression. This technique is widely known so is not described here (see Forney (1973) and Lowerre (1986)). Having a relatively small number of TRAINS-95 dialogues for training, we wanted to investigate how well the data could be employed in models for both the SR and the SPEECHPP. We ran several experiments to weigh our options.
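To make the noisy-channel formulation concrete, the following sketch performs a simple Viterbi-style search for the s maximizing Prob(o|s) × Prob(s). This is our illustration, not the SPEECHPP code: the channel and bigram tables are assumed to have been estimated from aligned transcriptions, and the flooring constant is an arbitrary stand-in for proper smoothing.

```python
def correct(observed, channel, bigram, vocab):
    """Viterbi search over source-word sequences.
    channel[(o, s)] ~ P(o|s); bigram[(prev, s)] ~ P(s|prev)."""
    paths = {"<s>": (1.0, [])}          # best (prob, words) ending in each word
    for o in observed:
        new_paths = {}
        for prev, (p, words) in paths.items():
            for s in vocab:
                q = p * channel.get((o, s), 1e-9) * bigram.get((prev, s), 1e-9)
                if q > new_paths.get(s, (0.0, []))[0]:
                    new_paths[s] = (q, words + [s])
        paths = new_paths
    return max(paths.values())[0], max(paths.values())[1]

# toy usage with hypothetical tables: "b_x" is a recognizer confusion for "via"
channel = {("b_x", "via"): 0.3, ("syracuse", "syracuse"): 0.9,
           ("b_x", "b_x"): 0.1}
bigram = {("<s>", "via"): 0.2, ("via", "syracuse"): 0.4}
print(correct(["b_x", "syracuse"], channel, bigram, ["via", "b_x", "syracuse"]))
```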
[Figure 6: Post-processing Evaluation — word accuracy plotted against the number of TRAINS-95 words in the training set (0 to 25,000), with curves for SPEECHPP + augmented Sphinx-II (top), augmented Sphinx-II alone (middle), and SPEECHPP + baseline Sphinx-II (bottom).]

For a baseline, we built a class-based back-off language model for Sphinx-II using only transcriptions of ATIS spoken utterances. Using this model, the performance of Sphinx-II alone on TRAINS-95 data was 58.7%. Note that this figure is lower than our previously mentioned average of 70%, since we were unable to exactly replicate the ATIS model from CMU. First, we used varying amounts of training data exclusively for building models for the SPEECHPP; this scenario would be most relevant if the speech recognizer were a black-box and we did not know how to train its model(s). Second, we used varying amounts of the training data exclusively for augmenting the ATIS data to build language models for Sphinx-II. Third, we combined the methods, using the training data both to extend the language models for Sphinx-II and to then train SPEECHPP on the newly trained SR. The results of the first experiment are shown by the bottom curve of Figure 6, which indicates the performance of the SPEECHPP with the baseline Sphinx-II. The first point comes from using approx. 25% of the available training data in the SPEECHPP models. The second and third points come from using approx. 50% and 75%, respectively, of the available training data. The curve clearly indicates that the SPEECHPP does a reasonable job of boosting our word recognition rates over baseline Sphinx-II and performance improves with additional training data. We did not train with all of our available data, since the remainder was used for testing to determine the results via repeated leave-one-out cross-validation. The error bars in the figure indicate 95% confidence intervals. Similarly, the results of the second experiment are shown by the middle curve. The points reflect the performance of Sphinx-II (without SPEECHPP) when using 25%, 50%, and 75% of the available training data in its LM. These results indicate that equivalent amounts of training data can be used with greater impact in the language model of the SR than in the post-processor. Finally, the outcome of the third experiment is reflected in the uppermost curve. Each point indicates the performance of the SPEECHPP using a set of models trained on the behavior of Sphinx-II for the corresponding point from the second experiment. The results from this experiment indicate that even if the language model of the SR can be modified, then the post-processor trained on the same new data can still significantly improve word recognition accuracy on a separate test set. Hence, whether the SR's models are tunable or not, the post-processor is in neither case redundant. Since these experiments were performed, we have enhanced the channel model by relaxing the constraint that replacement errors be aligned on a word-by-word basis. We employ a fertility model (Brown et al, 1990) that indicates how likely each word is to map to multiple words or to a partial word in the SR output. This extension allows us to better handle the second example above, replacing TO TRY with DETROIT. For more details, see Ringger and Allen (1996).

5. Robust Parsing

Given that speech recognition errors are inevitable, robust parsing techniques are essential. We use a pure bottom-up parser (using the system described in (Allen, 1995)) in order to identify the possible constituents at any point in the utterance based on syntactic and semantic restrictions.
Every constituent in each grammar rule specifies both a syntactic category and a semantic category, plus other features to encode co-occurrence restrictions as found in many grammars. The semantic features encode selectional restrictions, most of which are domain-independent. For example, there is no general rule for PP attachment in the grammar. Rather there are rules for temporal adverbial modification (e.g., at eight o'clock), locational modification (e.g., in Chicago), and so on. The end result of parsing is a sequence of speech acts rather than a syntactic analysis. Viewing the output as a sequence of speech acts has significant impact on the form and style of the grammar. It forces an emphasis on encoding semantic and pragmatic features in the grammar. There are, for instance, numerous rules that encode specific conventional speech acts (e.g., That's good is a CONFIRM, Okay is a CONFIRM/ACKNOWLEDGE, Let's go to Chicago is a SUGGEST, and so on). Simply classifying such utterances as sentences would miss the point. Thus the parser computes a set of plausible speech act interpretations based on the surface form, similar to the model described in Hinkelman & Allen (1989). We use a hierarchy of speech acts that encode different levels of vagueness, including a TELL act that indicates content without an identifiable illocutionary force. This allows us to always have an illocutionary force that can be refined as more of the utterance is processed. The final interpretation of an utterance is the sequence of speech acts that provides the "minimal covering" of the input - i.e., the shortest sequence that accounts for the input. Even if an utterance was completely uninterpretable, the parser would still produce output - a TELL act with no content. For example, consider an utterance from the sample dialogue that was garbled: OKAY NOW I TAKE THE LAST TRAIN IN GO FROM ALBANY TO IS. The best sequence of speech acts to cover this input consists of three acts:
1. a CONFIRM/ACKNOWLEDGE (OKAY)
2. a TELL, with content to take the last train (NOW I TAKE THE LAST TRAIN)
3. a REQUEST to go from Albany (GO FROM ALBANY)
Note that the TO IS at the end of the utterance is simply ignored as it is uninterpretable. While not present in the output, the presence of unaccounted words will lower the parser's confidence score that it assigns to the interpretation. The actual utterance was Okay now let's take the last train and go from Albany to Milwaukee. Note that while the parser is not able to reconstruct the complete intentions of the user, it has extracted enough to continue the dialogue in a reasonable fashion by invoking a clarification subdialogue. Specifically, it has correctly recognized the confirmation of the previous exchange (act 1), and recognized a request to move a train from Albany (act 3). Act 2 is an incorrect analysis, and results in the system generating a clarification question that the user ends up ignoring. Thus, as far as furthering the dialogue, the system has done reasonably well.
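One way to picture the "minimal covering" computation is as a shortest-path problem over chart edges, where skipping an unaccounted word carries a penalty. The following dynamic program is a hypothetical rendering of that idea, not the actual parser; the edge format and the costs are our assumptions.

```python
def minimal_covering(n_words, edges, skip_cost=1.5):
    """edges: (start, end, act) chart items over word positions 0..n_words.
    Returns (cost, acts): the cheapest sequence covering the input,
    counting one unit per act and skip_cost per skipped word."""
    INF = float("inf")
    best = [(INF, [])] * (n_words + 1)
    best[0] = (0.0, [])
    for i in range(n_words):
        cost, acts = best[i]
        if cost == INF:
            continue
        # option 1: skip an uninterpretable word
        if cost + skip_cost < best[i + 1][0]:
            best[i + 1] = (cost + skip_cost, acts)
        # option 2: extend with a speech-act edge starting here
        for start, end, act in edges:
            if start == i and cost + 1.0 < best[end][0]:
                best[end] = (cost + 1.0, acts + [act])
    return best[n_words]

# toy usage: 8 words, three edges, one trailing uninterpretable word
edges = [(0, 1, "CONFIRM"), (1, 4, "TELL"), (4, 7, "REQUEST")]
print(minimal_covering(8, edges))   # skips the final word at a cost
```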
6. Robust Speech Act Processing

The dialogue manager is responsible for interpreting the speech acts in context, formulating responses, and maintaining the system's idea of the state of the discourse. It maintains a discourse state that consists of a goal stack with similarities to the plan stack of Litman & Allen (1987) and the attentional state of Grosz & Sidner (1986). Each element of the stack captures
1. the domain or discourse goal motivating the segment
2. the object focus and history list for the segment
3. information on the status of problem solving activity (e.g., has the goal been achieved yet or not).
A fundamental principle in the design of TRAINS-95 was a decision that, when faced with ambiguity it is better to choose a specific interpretation and run the risk of making a mistake as opposed to generating a clarification subdialogue. Of course, the success of this strategy depends on the system's ability to recognize and interpret subsequent corrections if they arise. Significant effort was made in the system to detect and handle a wide range of corrections, both in the grammar, the discourse processing and the domain reasoning. In later systems, we plan to specifically evaluate the effectiveness of this strategy. The discourse processing is divided into reference resolution, verbal reasoning, problem solving and domain reasoning. Reference resolution, other than having the obvious job of identifying the referents of noun phrases, also may reinterpret the parser's assignment of illocutionary force if it has additional information to draw upon. One way we attain robustness is by having overlapping realms of responsibility: one module may be able to do a better job resolving a problem because it has an alternative view of it. On the other hand, it's important to recognize another module's expertise as well. It could be disastrous to combine two speech acts that arise from I really <garbled> think that's good, for instance, since the garbled part may include don't. Since speech recognition may substitute important words one for the other, it's important to keep in mind that speech acts that have no firm illocutionary force due to grammatical problems may have little to do with what the speaker actually said. The verbal reasoner is organised as a set of prioritized rules that match patterns in the input speech acts and the discourse state. These rules allow robust processing in the face of partial or ill-formed input as they match at varying levels of specificity, including rules that interpret fragments that have no identified illocutionary force. For instance, one rule would allow a fragment such as to Avon to be interpreted as a suggestion to extend a route, or an identification of a new goal. The prioritized rules are used in turn until an acceptable result is obtained. The problem solver handles all speech acts that appear to be requests to constrain, extend or change the current plan. It is also based on a set of prioritized rules, this time dealing with plan corrections and extensions. These rules match against the speech act, the problem solving state, and the current state of the domain. If fragmentary information is supplied, the problem solver attempts to incorporate the fragment into what it knows about the current state of the plan. As an example of the discourse processing, consider how the system handles the user's first utterance in the dialogue, OKAY LET'S SEND CONTAIN FROM DETROIT TO WASHINGTON. From the parser we get three acts:
1. a CONFIRM/ACKNOWLEDGE (OKAY)
2. a TELL involving mostly uninterpretable words (LET'S SEND CONTAIN)
3. a TELL act that mentions a route (FROM DETROIT TO WASHINGTON)
The discourse manager sets up its initial conversation state and passes the act to reference for identification of particular objects, and then hands the acts to the verbal reasoner. Because there is nothing on the discourse stack, the initial confirm has no effect. (Had there been something on the stack, e.g.
a question of a plan, the initial confirm might have been taken as an answer to the question, or a confirm of the plan, respectively). The following empty TELL act is uninterpretable and hence ignored. While it is possible to claim the "send" could be used to indicate the illocutionary force of the following fragment, and that a "container" might even be involved, the fact that the parser separated out the speech act indicates there may have been other fragments lost. The last speech act could be a suggestion of a new goal to move from Detroit to Washington. After checking that there is an engine at Detroit, this interpretation is accepted. The planner is unable to generate a path between these points (since it is greater than four hops). It returns two items:
1. an identification of the speech act as a suggestion of a goal to take a train from Detroit to Washington
2. a signal that it couldn't find a path to satisfy the goal
The discourse context is updated and the verbal reasoner generates a response to clarify the route desired, which is realized in the system's response What route would you like to get from Detroit to Washington? As another example of robust processing, consider an interaction later in the dialogue in which the user's response no is misheard as now: Now let's take the train from Detroit to Washington do S_X Albany (instead of No let's take the train from Detroit to Washington via Cincinnati). Since no explicit rejection is identified due to the recognition error, this utterance looks like a confirm and continuation of the plan. Thus the problem solver is called to extend the path with the currently focused engine (engine1) from Detroit to Washington. The problem solver realizes that engine1 isn't currently in Detroit, so this can't be a route extension. In addition, there is no other engine at Detroit, so this is not plausible as a focus shift to a different engine. Since engine1 originated in Detroit, it then decides to reinterpret the utterance as a correction. Since the utterance adds no new constraints, but there are the cities that were just mentioned as having delays, it presumes the user is attempting to avoid them, and invokes the domain reasoner to plan a new route avoiding the congested cities. The new path is returned and presented to the user. While the response does not address the user's intention to go through Cincinnati due to the speech recognition errors, it is a reasonable response to the problem the user is trying to solve. In fact, the user decides to accept the proposed route and forget about going through Cincinnati. In other cases, the user might persevere and continue with another correction such as No, through Cincinnati. Robustness arises in the example because the system uses its knowledge of the domain to produce a reasonable response. Note these examples both illustrate the "strong commitment" model. We believe it is easier to correct a poor plan, than having to keep trying to explain a perfect one, particularly in the face of
Task performance was evaluated in terms of two metrics: the amount of time taken to arrive at a solution and the quality of the solution. Solution quality for our domain is determined by the amount of time needed to travel the routes. Sixteen subjects for the experiment were recruited from undergraduate computer science courses. None of the subjects had ever used the system before. The procedure was as follows: • The subject viewed an online tutorial lasting 2.4 minutes. • The subject was then allowed a few minutes to practice both speech and keyboard input. • All subjects were given identical sets of 5 tasks to perform, in the same order. Half of the subjects were asked to use speech first, keyboard second, speech third and keyboard fourth. The other half used keyboard first and then alternated. All subjects were given a choice of whether to use speech or keyboard input to accomplish the final task. • After performing the final task, the subject completed a questionnaire. An analysis of the experiment results shows that the plans generated when speech input was used are of similar quality to those generated when keyboard input was used. However, the time needed to develop plans was significantly lower when speech input was used. Overall, problems were solved using speech in 68% of the time needed to solve them using the keyboard. Figure 7 shows the task completion time results, and Figure 8 gives the solution quality results, each broken out by task. Of the 16 subjects, 12 selected speech as the input medium for the final task and 4 selected keyboard input. Three of the four selecting keyboard input had actually experienced better or similar performance using keyboard input during the first four tasks. The fourth subject indicated on his questionnaire that he believed he could solve the problem more quickly using the keyboard; however, that subject had solved the two tasks using speech input 19% faster than the two tasks he solved using keyboard input. Figure 7: Time to Completion by Task 30 ................................................................................................................ [ i Speech Keyboard 20 10 /11!/!i.I/ii TI T2 T3 T4 Av 1-4 T5 Av Figure 8 : Length of Solution by Task Of the 80 tasks attempted, there were 7 in which the stated goals were not met. In each unsuccessful attempt, the subject was using speech input. There was no particular task that was troublesome and no particular subject that had difficulty. Seven different subjects had a task where the goals were not met, and each of the five tasks was left unaccomplished at least once. A review of the transcripts for the unsuccessful attempts revealed that in three cases, the subject misinterpreted the system's actions, and ended the dialogue believing the goals were met. Each of the other four unsuccessful attempts resulted from a common sequence of events: after the system proposed an inefficient route, word recognition errors caused the system to misinterpret rejection of the proposed route as acceptance. The subsequent subdialogues intended to improve the route were interpreted to be extensions to the route, causing the route to "overshoot" the intended destination. This suggests that, while our robustness techniques were effective on average, the errors do create a higher variance in the effectiveness of the interaction. These problems reveal a need for better handling of corrections, especially as resumptions of previous topics. 
More details on the evaluation can be found in (Sikorski & Allen, forthcoming).

8. Discussion

There are few systems that attempt to handle unconstrained natural dialogue. In most current speech systems, the interaction is driven by a template filling mechanism (e.g., the ATIS systems (ARPA, 1995), BeRP (Jurafsky et al, 1994), Pegasus (Seneff et al, 1995)). Some of these systems support system-initiated questions to elicit missing information in the template, but that is the extent of the mixed initiative interaction. Specifically, there is no need for goal management because the goal is fixed throughout the dialogue. In addition, there is little support for clarification and correction subdialogues. The Duke system (Smith and Hipp, 1994) uses a more general model based on a reasoning system, but allows only a limited vocabulary and grammar and requires extensive training to use. Our approach here is clearly bottom-up. We have attempted to build a fully functional system in the simplest domain possible and focused on the problems that most significantly degraded overall performance. This leaves us open to the criticism that we are not using the most sophisticated models available. For instance, consider our generation strategy. Template-based generation is clearly inadequate for many generation tasks. In fact, when starting the project we thought generation would be a major problem. However, the problems we expected have not arisen. While we could clearly improve the output of the system even in this small domain, the current generator does not appear to drag the system's performance down. We approached other problems similarly. We tried the simplest approaches first and then only generalized those algorithms whose inadequacies clearly degrade the performance of the system. Likewise, we view the evaluation as only a very preliminary first step. While our evaluation appears similar to HCI experiments on whether speech or keyboard is a more effective interface in general (cf. Oviatt and Cohen, 1991), this comparison was not our goal. Rather, we used the modality switch as a way of manipulating the error rate and the degree of spontaneity. While keyboard performance is not perfect because of typos (we had a 5% word error rate on keyboard), it is considerably less error prone than speech. All we conclude from this experiment is that our robust processing techniques are sufficiently good that speech is a viable interface in such tasks even with high word error rates. In fact, it appears to be more efficient in this application than keyboard. In contrast to the results of Rudnicky (1993), who found users preferred speech even when less efficient, our subjects generally preferred the most efficient modality for them (which in a majority of cases was speech). Despite the limitations of the current evaluation, we are encouraged by this first step. It seems obvious to us that progress in dialogue systems is intimately tied to finding suitable evaluation measures. And task-based evaluation seems one of the most promising candidates. It measures the impact of proposed techniques directly rather than indirectly with an abstract accuracy figure. Another area where we are open to criticism is that we used algorithms specific to the domain in order to produce effective intention recognition, disambiguation, and domain planning. Thus, the success of the system may be a result of the domain and say little about the plan-based approach to dialogue.
To be honest, with the current system, it is hard to defend ourselves against this. This is a first step in what we see as a long ongoing process. To look at it another way: if we couldn't build a successful system by employing whatever means available, then there is little hope for finding more effective general solutions. We are addressing this problem in our current research: we are developing a domain-independent plan reasoning "shell" that manages the plan recognition, evaluation and construction around which the dialogue system is structured. This shell provides the abstract model of problem solving upon which the dialogue manager is built. It is then instantiated by domain specific reasoning algorithms to perform the actual searches, constraint checking and intention recognition for a specific application. The structure of the model remains constant across domains, but the actual details of constructing plans remain domain specific. Our next iteration of this process, TRAINS-96, involves adding complexity to the dialogues by increasing the complexity of the task. Specifically, we are adding distances and travel times between cities, several new modes of transportation (trucks and planes) with associated costs, and simple cargoes to be transported and possibly transferred between different vehicles. The expanded domain will require a much more sophisticated ability to answer questions, to display complex information concisely, and will stress our abilities to track plans and identify focus shifts. While there are clearly many places in which our current system requires further work, it does set a new standard for spoken dialogue systems. More importantly, it allows us to address new research issues in a much more systematic way, supported by empirical evaluation.

Acknowledgements

This work was supported in part by ONR/ARPA grants N0004-92-J-1512 and N00014-95-1-1088, and NSF grant IRI-9503312. Many thanks to Alex Rudnicky, Ronald Rosenfeld and Sunil Issar at CMU for providing the Sphinx-II system and related tools. This work would not have been possible without the efforts of George Ferguson on the TRAINS system infrastructure and model of problem solving.

References

J. F. Allen. 1995. Natural Language Understanding, 2nd Edition, Benjamin-Cummings, Redwood City, CA.
J. F. Allen, G. Ferguson, B. Miller, and E. Ringger. 1995. Spoken dialogue and interactive planning. In Proc. ARPA SLST Workshop, Morgan Kaufmann.
J. F. Allen and C. R. Perrault. 1980. Analyzing intention in utterances, Artificial Intelligence 15(3):143-178.
ARPA, 1995. Proceedings of the Spoken Language Systems Technology Workshop, Jan. 1995. Distributed by Morgan Kaufmann.
P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer and P. S. Roossin. 1990. A Statistical Approach to Machine Translation. Computational Linguistics 16(2):79-85.
S. Carberry. 1990. Plan Recognition in Natural Language Dialogue, MIT Press, Cambridge, MA.
P. R. Cohen and C. R. Perrault. 1979. Elements of a plan-based theory of speech acts, Cognitive Science 3.
G. M. Ferguson, J. F. Allen and B. W. Miller, 1996. TRAINS-95: Towards a Mixed-Initiative Planning Assistant, to appear in Proc. Third Conference on Artificial Intelligence Planning Systems (AIPS-96).
G. E. Forney, Jr. 1973. The Viterbi Algorithm. Proc. of IEEE 61:266-278.
B. Grosz and C. Sidner. 1986. Attention, intention and the structure of discourse. Computational Linguistics 12(3).
E. Hinkelman and J. F. Allen.
1989. Two Constraints on Speech Act Ambiguity, Proc. ACL.
X. D. Huang, F. Alleva, H. W. Hon, M. Y. Hwang, K. F. Lee, and R. Rosenfeld. 1993. The Sphinx-II Speech Recognition System. Computer, Speech and Language.
D. Jurafsky, C. Wooters, G. Tajchman, J. Segal, A. Stolcke, E. Fosler and N. Morgan. 1994. The Berkeley Restaurant Project, Proc. ICSLP-94.
S. M. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. In IEEE Transactions on Acoustics, Speech, and Signal Processing. IEEE. pp. 400-401.
D. Litman and J. F. Allen. 1987. A plan recognition model for subdialogues in conversation. Cognitive Science 11(2):163-200.
B. Lowerre and R. Reddy. 1986. The Harpy Speech Understanding System. Reprinted in Waibel and Lee, 1990: 576-586.
S. L. Oviatt and P. R. Cohen. 1991. The contributing influence of speech and interaction on human discourse patterns. In J.W. Sullivan and S.W. Tyler (eds), Intelligent User Interfaces. Addison-Wesley, NY, NY.
E. K. Ringger and J. F. Allen. 1996. A Fertility Channel Model for Post-Correction of Continuous Speech Recognition. To appear in Proc. 1996 ICSLP, IEEE, October, 1996.
A. Rudnicky. 1993. Mode Preference in a Simple Data-Retrieval Task, Proc. of ARPA Workshop on Human Language Technology, Dist. by Morgan Kaufmann.
S. Seneff, V. Zue, J. Polifroni, C. Pao, L. Hetherington, D. Goddeau, and J. Glass. 1995. The Preliminary Development of a Displayless PEGASUS System. Proc. SLST Workshop, Jan. 1995. Morgan Kaufmann.
R. Smith and R. D. Hipp. 1994. Spoken Natural Language Dialog Systems: A Practical Approach, Oxford University Press.
A. Waibel and K. F. Lee, editors. 1990. Readings in Speech Recognition. Morgan Kaufmann, CA.

Appendix A: Transcript of post-processor repairs in the dialogue.

HYP: OKAY LET'S SEE CONTAIN FROM DETROIT TO WASHINGTON
HYP': OKAY LET'S SEND CONTAIN FROM DETROIT TO WASHINGTON
REF: OKAY LET'S TAKE THE TRAIN FROM DETROIT TO WASHINGTON

HYP: LET'S GO VIA TOLEDO AND PITTSBURGH
HYP': LET'S GO VIA TOLEDO AND PITTSBURGH
REF: LET'S GO VIA TOLEDO AND PITTSBURGH

HYP: NOW LET'S TAKE THE TRAIN FROM DETROIT TO WASHINGTON D S_X ANY
HYP': NOW LET'S TAKE THE TRAIN FROM DETROIT TO WASHINGTON DO S_X ALBANY
REF: NO LET'S TAKE THE TRAIN FROM DETROIT TO WASHINGTON VIA CINCINNATI

HYP: OKAY IT THAT'S OKAY NOW
HYP': OKAY IT THAT'S OKAY NOW
REF: OKAY THAT - THAT'S OKAY NOW

HYP: UP OKAY NOW LET'S TAKE THE TRAIN FROM MONTREAL TO LEXINGTON
HYP': UP OKAY NOW LET'S TAKE THE TRAIN FROM MONTREAL TO LEXINGTON
REF: OKAY NOW LET'S TAKE THE TRAIN FROM MONTREAL TO LEXINGTON

HYP: LET'S GO P_M TO TRY
HYP': LET'S GO P_M TO DETROIT
REF: LET'S GO VIA DETROIT

HYP: YES NOW LET'S GO TO LEXINGTON
HYP': YES NOW LET'S GO TO LEXINGTON
REF: YES NOW LET'S GO TO LEXINGTON

HYP: OKAY NOW ADD STATE THE LAST TRAIN AND GO FROM ALBANY TO O_O'S
HYP': OKAY NOW I TAKE THE LAST TRAIN IN GO FROM ALBANY TO IS
REF: OKAY NOW LET'S TAKE THE LAST TRAIN AND GO FROM ALBANY TO MILWAUKEE

HYP: ME A JET ADD ALBANY NEEDS TO GO TO MILWAUKEE
HYP': ME THE JET AT ALBANY INSTEAD TO GO TO MILWAUKEE
REF: THE ENGINE AT ALBANY NEEDS TO GO TO MILWAUKEE

HYP: I'D GO VIA BUFFALO
HYP': UH GO VIA BUFFALO
REF: UH GO VIA BUFFALO

HYP: GO B_X SYRACUSE AT BUFFALO
HYP': GO VIA SYRACUSE VIA BUFFALO
REF: GO VIA SYRACUSE AND BUFFALO

HYP: THAT'S COULD I CAN
HYP': THAT'S GREAT UH CAN
REF: THAT'S GOOD I'M DONE

HYP: I_NEED DONE
HYP': I'M DONE
REF: I'M DONE
| 1996 | 9 |
Interleaving Universal Principles And Relational Constraints Over Typed Feature Logic

Thilo Götz and Detmar Meurers
SFB 340, Universität Tübingen, Kleine Wilhelmstraße 113, 72074 Tübingen, Germany.
{tg, dm}@sfs.nphil.uni-tuebingen.de

Abstract

We introduce a typed feature logic system providing both universal implicational principles as well as definite clauses over feature terms. We show that such an architecture supports a modular encoding of linguistic theories and allows for a compact representation using underspecification. The system is fully implemented and has been used as a workbench to develop and test large HPSG grammars. The techniques described in this paper are not restricted to a specific implementation, but could be added to many current feature-based grammar development systems.

Introduction

A significant part of the development of formalisms for computational linguistics has been concerned with finding the appropriate data structures to model the linguistic entities. The first order terms of Prolog and DCGs were replaced by feature structures in PATR style systems, and in recent years systems using typed feature structures have been developed. Following the tradition of DCG and PATR, these typed feature systems are generally definite clause based, e.g., CUF (Dörre and Dorna 1993), or phrase structure based, e.g., ALE (Carpenter and Penn 1994). Instead of permitting the grammar writer to express universal well-formedness constraints directly, the systems require the grammar writer to express relational constraints and attach them locally at the appropriate places in the grammar. (ALE has a restricted form of universal constraints; see the comparison section.) We believe there are several reasons why the advances in the linguistic data structures should entail the development of systems offering more expressive means for designing grammars. Using universal implicative constraints, or universal principles as they are usually called in the linguistic literature, grammatical generalisations can be expressed in a more compact and modular way. Another advantage of an architecture including principles is that it computationally realizes the architecture assumed in Pollard and Sag (1994) for HPSG. It thus becomes possible to develop and test HPSG grammars in a computational system without having to re-code them as phrase structure or definite clause grammars. The architecture can also serve as extended architecture for principle based parsing (e.g., Stabler and Johnson 1993) since it facilitates the implementation of GB-style universal principles. Offering both more perspicuous grammar code and closeness to linguistic theory, it seems well motivated to explore an architecture which allows both relational constraints and universal restrictions to be expressed. Our implementation is based on the idea to compile implicational constraints into a relational representation (Götz and Meurers 1995) where calls to the constraint solver are made explicit. This allows for an integration of implicational and relational constraints and a uniform evaluation strategy. Efficient processing is achieved through user-supplied delay patterns that work on both relations and implicational constraints, as well as preferred execution of deterministic goals at run-time. The paper is organised as follows. We will start out by illustrating our architecture with an example.
We then go on to describe the key aspects of the implementation. We compare our work to other approaches before presenting some conclusions and open issues in the last section.

Motivating the architecture

Consider the Head Feature Principle (HFP) of Pollard and Sag (1994) as an introductory example for a grammatical principle. The HFP requires that in a headed construction the head features of the mother are identified with the head features of the head daughter. In a typed feature logic³ this may be expressed by the principle shown in Fig. 1.

phrase ∧ dtrs : headed-struc →
  synsem : loc : cat : head : X ∧ dtrs : head-dtr : synsem : loc : cat : head : X
Figure 1: A Head-Feature Principle

In CUF, we can encode the HFP as a clause defining a unary relation hfp as shown in Fig. 2.⁴

hfp := synsem : loc : cat : head : X ∧ dtrs : head-dtr : synsem : loc : cat : head : X
Figure 2: A relation encoding the HFP

For the relation hfp to take effect, calls to it need to be attached at the appropriate places. Expressing grammatical constraints in such a way is both time consuming and error prone. Suppose we want to define the unary relation wf-phrase to hold of all grammatical phrases. In case all grammatical phrases are constrained by a term φ and some relation P, we can define the relation wf-phrase shown in Fig. 3.

wf-phrase := phrase ∧ φ ∧ P
Figure 3: Defining the relation wf-phrase

To specify that φ ∧ P holds for all phrases while the HFP only holds for headed phrases, we now have to manually split the definition into two clauses, the subcase we want to attach the HFP to and the other one. This is both inelegant and, barring a clever indexing scheme, inefficient. Using universal principles on the other hand, the global grammar organisation does not need to account for every possible distinction. The organisation of the data structure as typed feature structures already provides the necessary structure and the grammatical constraints only need to enforce additional constraints on the relevant subsets. Thus, the implicational constraint encoding the HFP shown in Fig. 1 constrains only the headed phrases, and the non-headed ones do not need to be considered.

³Following King (1989) and Carpenter (1992), we use a typed feature logic with appropriateness restrictions for the domains and ranges of features. For space reasons we cannot provide a formal definition of the logic here, but the interested reader is referred to Carpenter (1992) for an exposition.
⁴Throughout the paper and as syntax for the system discussed here we use the functional style notation of CUF (Dörre and Dorna 1993) for our relations, where a designated result argument is made explicit. The denotation of a relation thus is a set of objects just like the denotation of any other feature term.

wf-phrase := phrase ∧ dtrs : headed-struc ∧ hfp ∧ φ ∧ P
wf-phrase := phrase ∧ dtrs : ¬headed-struc ∧ φ ∧ P
Figure 4: Splitting up the wf-phrase relation to accommodate the HFP call

Finally, a system providing both universal principles and relational constraints at the same level offers a large degree of flexibility. While in HPSG theories the principles usually form the main part of the grammar and relations such as append are used as auxiliary constraints called by the principles, a more traditional kind of grammar for which one prefers a relational organisation can also be expressed more compactly by adding some universal principles constraining the arguments of the relations to the relational core.
Both kinds of interaction are possible in the non-layered architecture we propose. With respect to our example, the first kind of interaction would be obtained by also expressing the general restriction on phrase as a universal constraint as shown in Fig. 5, while the more traditional kind of grammar would keep the relation defining well-formed phrases shown in Fig. 3 and combine it with the universal constraint of Fig. 1 in order to avoid splitting up the relation as was shown in Fig. 4. The proper interaction of relational and universal constraints then needs to be taken care of by the system.

phrase → φ ∧ P

Figure 5: A universal constraint on phrases

An example grammar

To further motivate our approach, we now show how to code a simple principle based grammar in our framework. Figure 6 shows the inheritance hierarchy of types which we will use for our example, with the appropriateness conditions attached.

Figure 6: A type hierarchy. (The figure is not legible in this version; it shows subtypes of the top type including list, with subtypes e-list and ne-list bearing HD and TL features, the bar levels zero, one and two, the head types verb, noun, adj and prp, the atoms arthur, sleeps, loves and tintagel, and sign with features HEAD, BAR:level and SUBCAT:list, whose subtypes are word, with PHON:atom, and phrase, with HEAD-DTR:sign and COMP-DTR:sign.)

Our example grammar consists of some universal principles, phrase structure rules and a lexicon. The lexicon and phrase structure rules are encoded in the wfs (well-formed sign) relation shown in Fig. 7, and the implicational principles in Fig. 8. The wfs predicate takes three arguments: a difference list pair threading the string through the tree in the style of DCGs, and the syntactic category (sign) as functional style result argument.5 The analysis tree is encoded in two daughters features as part of the syntactic categories in the style of HPSG. Clause 1 of wfs combines a verbal projection with its subject, and clause 2 with its complements. The lexical entries for the verbs "loves" and "sleeps" are specified in clauses 3 and 4, respectively. Finally, clause 5 defines lexical entries for the proper names "Arthur" and "Tintagel".

5 We use standard abbreviatory bracket notation for lists.

1. wfs(P0,P) := phrase ∧ head:verb ∧ subcat:[] ∧ bar:two
               ∧ comp-dtr:wfs(P0,P1) ∧ head-dtr:wfs(P1,P)
2. wfs(P0,P) := phrase ∧ head:verb ∧ subcat:ne-list
               ∧ head-dtr:wfs(P0,P1) ∧ comp-dtr:wfs(P1,P)
3. wfs([X|Y],Y) := word ∧ head:verb ∧ bar:zero
               ∧ subcat:[head:noun, head:noun] ∧ phon:(loves ∧ X)
4. wfs([X|Y],Y) := word ∧ head:verb ∧ bar:zero
               ∧ subcat:[head:noun] ∧ phon:(sleeps ∧ X)
5. wfs([X|Y],Y) := word ∧ head:noun ∧ bar:two
               ∧ subcat:[] ∧ phon:((arthur ∨ tintagel) ∧ X)

Figure 7: Phrase structure rules and the lexicon
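To make the difference list threading concrete, the following is a small runnable Python rendering of the wfs relation of Fig. 7. It is an illustration of the string-threading idea only, not of the constraint-based system described here: signs are simplified to (head, bar, subcat) triples, the relevant principles of Fig. 8 are folded directly into the clauses, and a crude depth bound stands in for the delay-based control discussed in the implementation section.

LEXICON = {  # word -> (head, bar, subcat), cf. clauses 3-5 of Fig. 7
    "arthur":   ("noun", "two", ()),
    "tintagel": ("noun", "two", ()),
    "sleeps":   ("verb", "zero", ("noun",)),
    "loves":    ("verb", "zero", ("noun", "noun")),
}

def wfs(p0, depth=4):
    """Yield (sign, p) pairs; the sign spans the difference list p0 minus p."""
    if depth == 0:
        return
    if p0 and p0[0] in LEXICON:            # lexical clauses: wfs([X|Y], Y)
        yield LEXICON[p0[0]], p0[1:]
    # Clause 2: unsaturated verbal head first, then one saturated complement;
    # the subcat principle (Fig. 8, constraint 6) pops one requirement.
    for (h, _, sub), p1 in wfs(p0, depth - 1):
        if h == "verb" and sub:
            for (ch, cbar, csub), p in wfs(p1, depth - 1):
                if ch == sub[0] and cbar == "two" and not csub:
                    yield ("verb", "one", sub[1:]), p
    # Clause 1: the subject (comp-dtr) first, then a head needing just it.
    for (ch, cbar, csub), p1 in wfs(p0, depth - 1):
        if ch == "noun" and cbar == "two" and not csub:
            for (h, _, sub), p in wfs(p1, depth - 1):
                if h == "verb" and sub == ("noun",):
                    yield ("verb", "two", ()), p

def parses(words):
    return [sign for sign, rest in wfs(tuple(words)) if not rest]

print(parses(["arthur", "sleeps"]))             # [('verb', 'two', ())]
print(parses(["arthur", "loves", "tintagel"]))  # [('verb', 'two', ())]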
Now consider the principles defined in Fig. 8. Constraints 1-4 encode a simple version of X-bar theory, constraint 5 ensures the propagation of categorial information along the head path, and constraint 6 ensures that complements obey the subcategorization requirements of the heads.

1. bar:zero → word
2. bar:one → head-dtr:bar:(¬two)
3. bar:two → subcat:[]
4. phrase → comp-dtr:bar:two
5. phrase → head:X ∧ head-dtr:head:X
6. phrase → comp-dtr:X ∧ subcat:Y ∧ head-dtr:subcat:[X|Y]

Figure 8: X-bar theory, head feature principle and subcat principle

We may now ask the system for a solution to queries like wfs([arthur, sleeps], []). The solution in this case is the AVM in Fig. 9. We can also query for a term like word ∧ subcat:ne-list and check it against the implications alone, as it contains no relational goals. The result in Fig. 10 shows that our X-bar principles have applied: bar level two requires that the subcat list must be empty, and bar level one can only appear on phrases. The system thus correctly infers that only bar level zero is possible.

phrase
  BAR      two
  HEAD     [1] verb
  SUBCAT   e-list
  COMP-DTR [2] word
               PHON   arthur
               BAR    two
               HEAD   noun
               SUBCAT e-list
  HEAD-DTR word
               PHON   sleeps
               BAR    zero
               HEAD   [1]
               SUBCAT ne-list (HD [2], TL e-list)

Figure 9: Solution to the query wfs([arthur, sleeps], [])

word
  BAR    zero
  SUBCAT ne-list

Figure 10: Solution for the query word ∧ subcat:ne-list

The advantages of such a modular encoding of grammatical principles are obvious. The intuition behind the constraints is clear, and new rules are easily added, since the principles apply to any rule. On the other hand, one can experiment with individual principles without having to change the other principles or rules. Finally, the option of encoding grammatical constraints as either implicational constraints or relations opens the possibility to choose the encoding most naturally suited to the specific problem. We feel that this improves on earlier, purely definite-clause-based approaches.

Implementation

Compilation. Building on the compilation method described in Götz and Meurers (1995), our compiler collects the types for which principles are formulated, defines a relational encoding of the principles, and attaches calls to the relations at the places in the grammar where a constrained type can occur. We assume that the grammar writer guarantees that each type in the grammar is consistent (for a grammar G and every type t there is a model of G that satisfies t). One therefore does not need to attach calls to each possible occurrence of a constrained type, but only to those occurrences where the grammar contains additional specifications which might lead to an inconsistency (Götz and Meurers 1996). The interpretation of the resulting program is lazy in the sense that we do not enumerate fully specific solutions but compute more general answers for which a grammatical instantiation is guaranteed to exist. A good example for this behaviour was shown in Fig. 10: the system does not instantiate the PHON and the HEAD values of the solution, since the existence of grammatical values for these attributes is independent of the query.

A way in which we deviate from the compilation method of Götz and Meurers (1995) is that our system performs all constraint inheritance at compile-time. While inheriting all principles to the most specific types and transforming the resulting constraints to a disjunctive normal form can significantly slow down compile times, the advantage is that no inheritance needs to be done on-line. To influence this trade-off, the user can instruct the system to hide a disjunctive principle in an auxiliary relation in order to keep it from being multiplied out with the other constraints. Such auxiliary relations, which will be discussed further in connection with the delay mechanism, have turned out to be especially useful in conjunction with principles with complex antecedents. The reason is that our compiler transforms an implication with a complex antecedent into an implication with a type antecedent. The negation of the complex antecedent is added to the consequent, which can result in highly disjunctive specifications.
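The following Python fragment sketches this antecedent transformation for illustration only; the description language and normalisation machinery of the actual compiler are much richer, and the formula representation here is invented for the example. A principle t ∧ φ → ψ with a complex antecedent becomes the type constraint t → (¬φ) ∨ (φ ∧ ψ), and counting the disjuncts of the normalised consequent makes the potential blowup visible.

from itertools import product

# Formulas: ("lit", name) | ("neg", f) | ("and", [fs]) | ("or", [fs])

def compile_principle(ant_type, phi, psi):
    """t ∧ phi -> psi   becomes   t -> (¬phi) ∨ (phi ∧ psi)."""
    return ant_type, ("or", [("neg", phi), ("and", [phi, psi])])

def dnf(f):
    """Return a list of disjuncts, each a list of literals."""
    kind = f[0]
    if kind == "lit":
        return [[f]]
    if kind == "neg":
        g = f[1]
        if g[0] == "lit":
            return [[f]]
        if g[0] == "neg":
            return dnf(g[1])
        if g[0] == "and":    # ¬(a ∧ b) = ¬a ∨ ¬b
            return [d for c in g[1] for d in dnf(("neg", c))]
        if g[0] == "or":     # ¬(a ∨ b) = ¬a ∧ ¬b
            return dnf(("and", [("neg", c) for c in g[1]]))
    if kind == "or":
        return [d for c in f[1] for d in dnf(c)]
    if kind == "and":        # distribute conjunction over disjunctions
        return [sum(combo, []) for combo in product(*map(dnf, f[1]))]

# A conjunctive antecedent with n literals yields n + 1 disjuncts:
phi = ("and", [("lit", "a%d" % i) for i in range(5)])
psi = ("lit", "p")
t, consequent = compile_principle("t", phi, psi)
print(len(dnf(consequent)))    # 6 disjuncts for a 5-literal antecedent

Conjoining such a compiled consequent with a further disjunctive constraint on the same type multiplies the disjunct counts, which is the compile-time cross-multiplication addressed by the auxiliary relations below.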
Interpretation. As a guiding principle, the interpreter follows the ideas of the Andorra Model6 in that it always executes deterministic goals before non-deterministic ones. We consider determinacy only with respect to head unification: a goal is recognised to be determinate if there is at most one clause head that unifies with it. This evaluation strategy has two advantages: it reduces the number of choice points to a minimum, and it leads to early failure detection. In our implementation, the overhead of determining which goals are determinate has turned out to be by far outweighed by the reduction in search space for our linguistic applications. An additional speed-up can be expected from applying known pre-processing techniques (Santos Costa, Warren, and Yang 1991) to automatically extract so-called determinacy code.

6 Cf. Haridi and Janson (1990) and references cited therein.

The execution order of non-determinate goals can be influenced by the user with wait declarations (Naish 1985). The execution of some goal is postponed until the call is more specific than a user-specified term. Speculative computation may thus be reduced to a necessary minimum. For our previous example, we might define the delay statements in Fig. 11.

delay(wfs, arg1:list)
delay(phrase, subcat:list)
delay_deterministic(sign)

Figure 11: Control statement examples

The first statement says that calls to wfs must be delayed until the first argument is instantiated to some list value. Similarly, the second statement delays the principles on phrase until the subcat information is known. The third statement is of a slightly different form, based on the preferred treatment of determinate goals described above. Instead of specifying the instantiation state required for execution, the delay_deterministic statement specifies that the universal principles about signs can only be executed in case they are determinate.
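A schematic Python rendering of this control regime may help fix the ideas; it illustrates the selection policy only, since the real system is embedded in a Prolog interpreter, and the Goal class and its fields here are invented for the example.

class Failure(Exception):
    pass

class Goal:
    def __init__(self, functor, args=(), known=()):
        self.functor, self.args = functor, args
        self._known = set(known)
    def known(self, feature):           # is this feature instantiated yet?
        return feature in self._known

# Delay patterns in the spirit of Fig. 11: a goal may only run once the
# named argument or feature is sufficiently instantiated.
DELAYS = {
    "wfs":    lambda g: isinstance(g.args[0], (list, tuple)),
    "phrase": lambda g: g.known("subcat"),
}

def select_goal(agenda, matching_clauses):
    """Prefer runnable determinate goals (at most one unifying clause head);
    fall back to any runnable goal; None means everything is delayed."""
    runnable = [g for g in agenda if DELAYS.get(g.functor, lambda g: True)(g)]
    for g in runnable:
        n = len(matching_clauses(g))
        if n == 0:
            raise Failure(g.functor)    # early failure detection
        if n == 1:
            return g                    # determinate goals are executed first
    return runnable[0] if runnable else None

# wfs with an uninstantiated first argument stays delayed; the phrase
# principle, whose subcat value is known and which matches one clause, runs.
agenda = [Goal("wfs", args=(None, None)), Goal("phrase", known=("subcat",))]
print(select_goal(agenda, lambda g: ["clause1"]).functor)    # phrase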
The delay mechanism for relational goals is very close to the one used in CUF. We extended this mechanism to the universal principles: the constraints on a certain type are only checked once certain attributes are sufficiently instantiated (w.r.t. the delay statement). Our experience has shown, however, that delaying universal principles in such a way turns out to be too weak. Instead of delaying all constraints on a type until some condition is met, one wants to be able to postpone the application of some particular universal principle. A subcategorization principle applying to phrases, for example, should be delayed until the valence requirements of the mother or the daughters are known. We therefore allow the user to name a principle and supply it with a specific delay. Internally, this corresponds to introducing an auxiliary relation under the name supplied by the user and delaying it accordingly, so that the choice points introduced by the principle are hidden.

Let us illustrate the problem and its solution with a schematic example. Suppose the grammar writer writes a principle φ → ψ. Our compiler will generate from this a constraint t → (¬φ) ∨ (φ ∧ ψ), for some appropriate type t. If φ is a complex conjunctive description, then the result of normalising ¬φ might be highly disjunctive. This has two undesirable consequences. Firstly, if there is another constraint t → ξ with disjunctive ξ, then the compiler will need to normalise the expression ((¬φ) ∨ (φ ∧ ψ)) ∧ ξ. This is the appropriate thing to do in those cases where many of the generated disjuncts are inconsistent and the resulting disjunction thus turns out to be small. If, however, these constraints talk about different parts of t's structure, then the resulting disjunction will be big and the expansion at compile-time should be avoided. The other problem is that we can only specify delays on all constraints on t at once, and cannot delay individual principles. In other words, the control for the execution of principles is not fine-grained enough. We solved these problems by offering the user the possibility to name constraints, e.g., principle1: φ → ψ. This prohibits the compile-time cross-multiplication described above, and it allows the user to specify delays for such a principle, e.g. delay(principle1, ...) or even delay_deterministic(principle1), if that is appropriate.

Debugging. Having addressed the key issues behind compilation and interpretation, we now turn to a practical problem which quickly arises once one tries to implement larger grammars. On the one hand, the complex data structures of such grammars contain an overwhelming number of specifications which are difficult to present to the user. On the other hand, the interaction of universal principles and relations tends to get very complex for realistic linguistic theories. While a powerful graphical user interface7 solves the presentation problem, a sophisticated tracing and debugging tool was developed to allow stepwise inspection of the complex constraint resolution process. The debugger displays the feature structure(s) to be checked for grammaticality and marks the nodes on which constraints still have to be checked. As a result of the determinacy check, each such node can also be marked as failed, delayed or deterministic. Similar to standard Prolog debuggers, the user can step, skip, or fail a constraint on a node, or request all deterministic processing to be undertaken. An interesting additional possibility for non-deterministic goals is that the user can inspect the matching defining clauses and choose which one the system should try. Figure 12 below shows a screen shot of the debugger.

7 To view grammars and computations our system uses a GUI which allows the user to interactively view (parts of) AVMs, compare and search AVMs, etc. The GUI comes with a clean backend interface and has already been used as front-end for other natural language applications, e.g., in VERBMOBIL. The GUI was developed by Carsten Hess.

The debugger has turned out to be an indispensable tool for grammar development. As grammar size increases, it becomes very difficult to track down bugs or termination problems without it, since these problems are often the result of some global interaction and thus cannot be reduced to a manageable sub-part of the grammar. The reader interested in further practical aspects of our system is referred to Götz and Meurers (1997).
Comparison with previous work

There are quite a number of typed feature systems available today, among them ALE (Carpenter and Penn 1994), CUF (Dörre and Dorna 1993) and TFS (Emele and Zajac 1990; Emele 1994).

TFS also offered type constraints and relations and to our knowledge was the first working typed feature system. However, it had some serious drawbacks. TFS did not allow universal principles with complex antecedents, but only type constraints. And the system did not include a delay mechanism, so that it was often impossible to ensure termination or efficient processing. The addition of a delay mechanism as described in this paper would certainly increase the efficiency of TFS.

ALE provides relations and type constraints (i.e., only types as antecedents), but their unfolding is neither lazy, nor can it be controlled by the user in any way. This can lead to severe termination problems with recursive constraints. The ALE type constraints were designed to enhance the typing system, and not for recursive computation. This should be done in the phrase structure or procedural attachment part. However, we believe that the addition of delaying and an interpretation strategy as described in this paper would add to the attractiveness of ALE as a constraint-based grammar development platform.

The definite clause part of our system is very similar to the one of CUF: both use delay statements and preferred execution of deterministic goals. Although CUF does not offer universal principles, their addition should be relatively simple. Given that CUF already offers the control strategies required by our scheme, the changes to the run-time system would be minimal.

Conclusion and future research

We have presented an architecture that integrates relational and implicational constraints over typed feature logic. We showed how such an architecture facilitates the modular and compact encoding of principle based grammars. Our implementation has been tested with several smaller grammars and one large (> 5000 lines) grammar, a linearisation-based grammar of a sizeable fragment of German (Hinrichs et al. 1997). As the grammar constraints combine sub-strings in a non-concatenative fashion, we use a preprocessor that "chunks" the input string into linearisation domains, which are then fed to the constraint solver. With our Prolog based interpreter, parse times are around 1-5 sec. for 5 word sentences and 10-60 sec. for 12 word sentences. It should be pointed out that parsing with such a grammar would be difficult with any system, as it neither has nor allows the addition of a context-free backbone.

We are currently experimenting with a C based compiler (Zahnert 1997) using an abstract machine with a specialised set of instructions based on the WAM (Warren 1983; Aït-Kaci 1991). This compiler is still under development, but it is reasonable to expect speed improvements of at least an order of magnitude. Abstract-machine-based compilation of typed feature logic languages has recently received much attention (Carpenter and Qu 1995; Wintner 1997; Penn in prep.). True compilation is the logical development in a maturing field that has hitherto relied on interpreters in high-level programming languages such as Prolog and Lisp.

We also plan to investigate a specialised constraint language for linearisation grammars, to be able to optimise the processing of freer word order languages such as German.

References

Aït-Kaci, H. (1991). Warren's Abstract Machine. MIT Press.

Carpenter, B. (1992). The logic of typed feature structures, Volume 32 of Cambridge Tracts in Theoretical Computer Science. Cambridge University Press.

Carpenter, B. and G. Penn (1994). ALE - The Attribute Logic Engine, User's Guide, Version 2.0.1, December 1994. Technical report, Carnegie Mellon University.

Carpenter, B. and Y. Qu (1995). An abstract machine for attribute-value logics. In Proceedings of the Fourth International Workshop on Parsing Technology, Prague.
Dörre, J. and M. Dorna (1993, August). CUF - a formalism for linguistic knowledge representation. In J. Dörre (Ed.), Computational aspects of constraint based linguistic descriptions I, pp. 1-22. Universität Stuttgart: DYANA-2 Deliverable R1.2.A.

Emele, M. C. (1994). The typed feature structure representation formalism. In Proceedings of the International Workshop on Sharable Natural Language Resources, Ikoma, Nara, Japan.

Emele, M. C. and R. Zajac (1990). Typed unification grammars. In Proceedings of the 13th International Conference on Computational Linguistics.

Götz, T. and W. D. Meurers (1995). Compiling HPSG type constraints into definite clause programs. In Proceedings of the Thirty-Third Annual Meeting of the ACL, Boston. Association for Computational Linguistics.

Götz, T. and W. D. Meurers (1996). The importance of being lazy - using lazy evaluation to process queries to HPSG grammars. In P. Blache (Ed.), Actes de la troisième conférence annuelle sur le traitement automatique du langage naturel.

Götz, T. and W. D. Meurers (1997). The ConTroll system as large grammar development platform. In Proceedings of the ACL/EACL post-conference workshop on Computational Environments for Grammar Development and Linguistic Engineering, Madrid, Spain.

Haridi, S. and S. Janson (1990). Kernel Andorra Prolog and its computation model. In D. H. D. Warren and P. Szeredi (Eds.), Proceedings of the seventh international conference on logic programming, pp. 31-46. MIT Press.

Hinrichs, E., D. Meurers, F. Richter, M. Sailer, and H. Winhart (1997). Ein HPSG-Fragment des Deutschen, Teil 1: Theorie. Arbeitspapiere des SFB 340 Nr. 95, Universität Tübingen.

King, P. J. (1989). A logical formalism for head-driven phrase structure grammar. Ph.D. thesis, University of Manchester.

Naish, L. (1985). Negation and Control in Prolog. Springer-Verlag.

Penn, G. (in prep.). Statistical Optimizations in a Feature Structure Abstract Machine. Ph.D. thesis, Carnegie Mellon University.

Pollard, C. and I. A. Sag (1994). Head-Driven Phrase Structure Grammar. Chicago: University of Chicago Press.

Santos Costa, V., D. H. D. Warren, and R. Yang (1991). The Andorra-I preprocessor: Supporting full Prolog on the Basic Andorra model. In Proceedings of the Eighth International Conference on Logic Programming, pp. 443-456.

Shieber, S. M. (1986). An Introduction to Unification-Based Approaches to Grammar. Number 4 in CSLI Lecture Notes. Center for the Study of Language and Information.

Stabler, E. P. and M. Johnson (1993). Topics in principle based parsing. Course notes for the 1993 LSA Summer Institute.

Warren, D. H. D. (1983). An abstract Prolog instruction set. Technical note 309, SRI International.

Wintner, S. (1997). An Abstract Machine for Unification Grammars. Ph.D. thesis, Technion, Haifa, Israel.

Zahnert, A. (1997). fl2c - ein Compiler für CLP(TFS). Diplomarbeit, Fakultät für Informatik, Universität Tübingen.
Figure 12: A screen shot of the graphical debugger. (The screen image is not reproducible in this version.)
Homonymy and Polysemy in Information Retrieval

Robert Krovetz
NEC Research Institute*
4 Independence Way
Princeton, NJ 08540
[email protected]

* This paper is based on work that was done at the Center for Intelligent Information Retrieval at the University of Massachusetts. It was supported by the National Science Foundation, Library of Congress, and Department of Commerce under cooperative agreement number EEC-9209623. I am grateful for their support.

Abstract

This paper discusses research on distinguishing word meanings in the context of information retrieval systems. We conducted experiments with three sources of evidence for making these distinctions: morphology, part-of-speech, and phrases. We have focused on the distinction between homonymy and polysemy (unrelated vs. related meanings). Our results support the need to distinguish homonymy and polysemy. We found: 1) grouping morphological variants makes a significant improvement in retrieval performance, 2) that more than half of all words in a dictionary that differ in part-of-speech are related in meaning, and 3) that it is crucial to assign credit to the component words of a phrase. These experiments provide a better understanding of word-based methods, and suggest where natural language processing can provide further improvements in retrieval performance.

1 Introduction

Lexical ambiguity is a fundamental problem in natural language processing, but relatively little quantitative information is available about the extent of the problem, or about the impact that it has on specific applications. We report on our experiments to resolve lexical ambiguity in the context of information retrieval (IR). Our approach to disambiguation is to treat the information associated with dictionary senses (morphology, part of speech, and phrases) as multiple sources of evidence.1 Experiments were designed to test each source of evidence independently, and to identify areas of interaction. Our hypothesis is:

Hypothesis 1 Resolving lexical ambiguity will lead to an improvement in retrieval performance.

1 We used the Longman Dictionary as our source of information about word senses (Procter 78).

There are many issues involved in determining how word senses should be used in information retrieval. The most basic issue is one of identity -- what is a word sense? In previous work, researchers have usually made distinctions based on their intuition. This is not satisfactory for two reasons. First, it is difficult to scale up; researchers have generally focused on only two or three words. Second, they have used very coarse grained distinctions (e.g., 'river bank' vs. 'commercial bank'). In practice it is often difficult to determine how many senses a word should have, and meanings are often related (Kilgarriff 91).

A related issue is sense granularity. Dictionaries often make very fine distinctions between word meanings, and it isn't clear whether these distinctions are important in the context of a particular application. For example, the sentence They danced across the room is ambiguous with respect to the word dance. It can be paraphrased as They were across the room and they were dancing, or as They crossed the room as they danced. The sentence is not ambiguous in Romance languages, and can only have the former meaning. Machine translation systems therefore need to be aware of this ambiguity and translate the sentence appropriately. This is a systematic class of ambiguity, and applies to all "verbs of translatory motion" (e.g., The bottle floated under the bridge will exhibit the same distinction (Talmy 85)). Such distinctions are unlikely to have an impact on information retrieval. However, there are also distinctions that are important in information retrieval that are unlikely to be important in machine translation. For example, the word west can be used in the context the East versus the West, or in the context West Germany. These two senses were found to provide a good separation between relevant and non-relevant documents, but the distinction is probably not important for machine translation. It is likely that different applications will require different types of distinctions, and the type of distinctions required in information retrieval is an open question.

Finally, there are questions about how word senses should be used in a retrieval system. In general, word senses should be used to supplement word-based indexing rather than indexing on word senses alone. This is because of the uncertainty involved with sense representation, and the degree to which we can identify a particular sense with the use of a word in context. If we replace words with senses, we are making an assertion that we are very certain that the replacement does not lose any of the information important in making relevance judgments, and that the sense we are choosing for a word is in fact correct. Both of these are problematic. Until more is learned about sense distinctions, and until very accurate methods are developed for identifying senses, it is probably best to adopt a more conservative approach (i.e., use senses as a supplement to word-based indexing).

The following section will provide an overview of lexical ambiguity and information retrieval. This will be followed by a discussion of our experiments. The paper will conclude with a summary of what has been accomplished, and what work remains for the future.

2 Lexical Ambiguity and Information Retrieval

2.1 Background

Many retrieval systems represent documents and queries by the words they contain. There are two problems with using words to represent the content of documents. The first problem is that words are ambiguous, and this ambiguity can cause documents to be retrieved that are not relevant. Consider the following description of a search that was performed using the keyword "AIDS":
This is a sys- tematic class of ambiguity, and applies to all "verbs of translatory motion" (e.g., The bottle floated ~mder the bridge will exhibit the same distinction (Talmy 85)). Such distinctions are unlikely to have an im- pact on information retrieval. However, there are 1We used the Longman Dictionary as our source of information about word senses (Procter 78). "/2 also distinctions that are important in information retrieval that are unlikely to be important in ma- chine translation. For example, the word west can be used in the context the East versus the West, or in the context West Germany. These two senses were found to provide a good separation between relevant and non-relevant documents, but the distinction is probably not important for machine translation. It is likely that different applications will require differ- ent types of distinctions, and the type of distinctions required in information retrieval is an open question. Finally, there are questions about how word senses should be used in a retrieval system. In general, word senses should be used to supplement word- based indexing rather than indexing on word senses alone. This is because of the uncertainty involved with sense representation, and the degree to which we can identify a particular sense with the use of a word in context. If we replace words with senses, we are making an assertion that we are very certain that the replacement does not lose any of the information important in making relevance judgments, and that the sense we are choosing for a word is in fact cor- rect. Both of these are problematic. Until more is learned about sense distinctions, and until very ac- curate methods are developed for identifying senses, it is probably best to adopt a more conservative ap- proach (i.e., uses senses as a supplement to word- based indexing). The following section will provide an overview of lexical ambiguity and information retrieval. This will be followed by a discussion of our experiments. The paper will conclude with a summary of what has been accomplished, and what work remains for the future. 2 Lexical Ambiguity and Information Retrieval 2.1 Background Many retrieval systems represent documents and queries by the words they contain. There are two problems with using words to represent the content of documents. The first problem is that words are ambiguous, and this ambiguity can cause documents to be retrieved that are not relevant. Consider the following description of a search that was performed using the keyword "AIDS': Unfortunately, not all 34 [references] were about AIDS, the disease. The references included "two helpful aids during the first three months after total hip replacemenC, and "aids in diagnosing abnormal voiding patterns". (Helm 83) One response to this problem is to use phrases to reduce ambiguity (e.g., specifying "hearing aids" if that is the desired sense). It is not always pos- sible, however, to provide phrases in which the word occurs only with the desired sense. In addition, the requirement for phrases imposes a significant burden on the user. The second problem is that a document can be relevant even though it does not use the same words as those that are provided in the query. The user is generally not interested in retrieving documents with exactly the same words, but with the concepts that those words represent. Retrieval systems ad- dress this problem by expanding the query words us- ing related words from a thesaurus (Salton and Mc- Gill 83). 
The relationships described in a thesaurus, however, are really between word senses rather than words. For example, the word "term" could be synonymous with 'word' (as in a vocabulary term), 'sentence' (as in a prison term), or 'condition' (as in 'terms of agreement'). If we expand the query with words from a thesaurus, we must be careful to use the right senses of those words. We not only have to know the sense of the word in the query (in this example, the sense of the word 'term'), but the sense of the word that is being used to augment it (e.g., the appropriate sense of the word 'sentence') (Chodorow et al 88).

2.2 Types of Lexical Ambiguity

Lexical ambiguity can be divided into homonymy and polysemy, depending on whether or not the meanings are related. The bark of a dog versus the bark of a tree is an example of homonymy; review as a noun and as a verb is an example of polysemy. The distinction between homonymy and polysemy is central. Homonymy is important because it separates unrelated concepts. If we have a query about 'AIDS' (the disease), and a document contains 'aids' in the sense of a hearing aid, then the word aids should not contribute to our belief that the document is relevant to the query. Polysemy is important because the related senses constitute a partial representation of the overall concept. If we fail to group related senses, it is as if we are ignoring some of the occurrences of a query word in a document. So for example, if we are distinguishing words by part-of-speech, and the query contains 'diabetic' as a noun, the retrieval system will exclude instances in which 'diabetic' occurs as an adjective unless we recognize that the noun and adjective senses for that word are related and group them together.

Although there is a theoretical distinction between homonymy and polysemy, it is not always easy to tell them apart in practice. What determines whether the senses are related? Dictionaries group senses based on part-of-speech and etymology, but as illustrated by the word review, senses can be related even though they differ in syntactic category. Senses may also be related etymologically, but be perceived as distinct at the present time (e.g., the 'cardinal' of a church and 'cardinal' numbers are etymologically related). We investigated several methods to identify related senses both across part of speech and within a single homograph, and these will be described in more detail in Section 3.2.1.

3 Experiments on Word-Sense Disambiguation

3.1 Preliminary Experiments

Our initial experiments were designed to investigate the following two hypotheses:

Hypothesis 2 Word senses provide an effective separation between relevant and non-relevant documents.

As we saw earlier in the paper, it is possible for a query about 'AIDS' the disease to retrieve documents about 'hearing aids'. But to what extent are such inappropriate matches associated with relevance judgments? This hypothesis predicts that sense mismatches will be more likely to appear in documents that are not relevant than in those that are relevant.

Hypothesis 3 Even a small domain-specific collection of documents exhibits a significant degree of lexical ambiguity.

Little quantitative data is available about lexical ambiguity, and such data as is available is often confined to only a small number of words. In addition, it is generally assumed that lexical ambiguity does not occur very often in domain-specific text.
This hypothesis was tested by quantifying the ambiguity for a large number of words in such a collection, and challenging the assumption that ambiguity does not occur very often.

To investigate these hypotheses we conducted experiments with two standard test collections, one consisting of titles and abstracts in Computer Science, and the other consisting of short articles from Time magazine.

The first experiment was concerned with determining how often sense mismatches occur between a query and a document, and whether these mismatches indicate that the document is not relevant. To test this hypothesis we manually identified the senses of the words in the queries for two collections (Computer Science and Time). These words were then manually checked against the words they matched in the top ten ranked documents for each query (the ranking was produced using a probabilistic retrieval system). The number of sense mismatches was then computed, and the mismatches in the relevant documents were identified.

The second experiment involved quantifying the degree of ambiguity found in the test collections. We manually examined the word tokens in the corpus for each query word, and estimated the distribution of the senses. The number of word types with more than one meaning was determined. Because of the volume of data analysis, only one collection was examined (Computer Science), and the distribution of senses was only coarsely estimated; there were approximately 300 unique query words, and they constituted 35,000 tokens in the corpus.

These experiments provided strong support for Hypotheses 2 and 3. Word meanings are highly correlated with relevance judgements, and the corpus study showed that there is a high degree of lexical ambiguity even in a small collection of scientific text (over 40% of the query words were found to be ambiguous in the corpus). These experiments provided a clear indication of the potential of word meanings to improve the performance of a retrieval system. The experiments are described in more detail in (Krovetz and Croft 92).

3.2 Experiments with different sources of evidence

The next set of experiments was concerned with determining the effectiveness of different sources of evidence for distinguishing word senses. We were also interested in the extent to which a difference in form corresponded to a difference in meaning. For example, words can differ in morphology (authorize/authorized), or part-of-speech (diabetic [noun]/diabetic [adj]), or in their ability to appear in a phrase (database/data base). They can also exhibit such differences, but represent different concepts, such as author/authorize, sink [noun]/sink [verb], or stone wall/stonewall. Our default assumption was that a difference in form is associated with a difference in meaning unless we could establish that the different word forms were related.

3.2.1 Linking related word meanings

We investigated two approaches for relating senses with respect to morphology and part of speech: 1) exploiting the presence of a variant of a term within its dictionary definition, and 2) using the overlap of the words in the definitions of suspected variants. For example, liable appears within the definition of liability, and this is used as evidence that those words are related.
Similarly, flat as a noun is defined as 'a flat tire', and the presence of the word in its own definition, but with a different part of speech, is taken as evidence that the noun and adjective meanings are related. We can also compute the overlap between the definitions of liable and liability, and if they have a significant number of words in common then that is evidence that those meanings are related. These two strategies could potentially be used for phrases as well, but phrases are one of the areas where dictionaries are incomplete, and other methods are needed for determining when phrases are related. We will discuss this in Section 3.2.4.

We conducted experiments to determine the effectiveness of the two methods for linking word senses. In the first experiment we investigated the performance of a part-of-speech tagger for identifying the related forms. These related forms (e.g., flat as a noun and an adjective) are referred to as instances of zero-affix morphology, or functional shift (Marchand 63). We first tagged all definitions in the dictionary for words that began with the letter 'W'. This produced a list of 209 words that appeared in their own definitions with a different part of speech. However, we found that only 51 (24%) were actual cases of related meanings. This low success rate was almost entirely due to tagging error. That is, we had a false positive rate of 76% because the tagger indicated the wrong part of speech. We conducted a failure analysis and it indicated that 91% of the errors occurred in idiomatic expressions (45 instances) or example sentences associated with the definitions (98 instances). We therefore omitted idiomatic senses and example sentences from further processing and tagged the rest of the dictionary.2

2 Idiomatic senses were identified by the use of font codes.

The result of this experiment is that the dictionary contains at least 1726 senses in which the headword was mentioned, but with a different part of speech, of which 1566 were in fact related (90.7%). We analyzed the distribution of the connections, and this is given in Table 1 (n = 1566). However, Table 1 does not include cases in which the word appears in its definition, but in an inflected form. For example, 'cook' as a noun is defined as 'a person who prepares and cooks food'. Unless we recognize the inflected form, we will not capture all of the instances. We therefore repeated the procedure, but allowing for inflectional variants. The result is given in Table 2 (n = 1054).

We also conducted an experiment to determine the effectiveness of capturing related senses via word overlap. The result is that if the definitions for the root and variant had two or more words in common,3 93% of the pairs were semantically related. However, of the sense-pairs that were actually related, two-thirds had only one word in common. We found that 65% of the sense-pairs with one word in common were related. Having only one word in common between senses is very weak evidence that the senses are related, and it is not surprising that there is a greater degree of error.

3 Excluding closed class words, such as of and for.

The two experiments, tagging and word overlap, were found to be highly effective once the common causes of error were removed. In the case of tagging the error was due to idiomatic senses and example sentences, and in the case of word overlap the error was links due to a single word in common. Both methods have approximately a 90% success rate in pairing the senses of morphological variants if those problems are removed. The next section will discuss our experiments with morphology.
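For concreteness, the two linking heuristics can be rendered in Python roughly as follows. This is an illustrative sketch with invented data structures, not the code used in the experiments; STOPWORDS stands in for the closed class words excluded from the overlap test.

STOPWORDS = {"a", "an", "the", "of", "for", "in", "to", "and", "or", "who", "that"}

def content_words(definition):
    return {w for w in definition.lower().split() if w not in STOPWORDS}

def mentions_headword(sense):
    """Heuristic 1: the headword occurs in its own definition (e.g. flat,
    n.: 'a flat tire'), suggesting the senses across part of speech are
    related."""
    return sense["headword"].lower() in content_words(sense["definition"])

def overlap_related(sense_a, sense_b, threshold=2):
    """Heuristic 2: the definitions of suspected variants share at least
    `threshold` content words; two or more shared words was right about
    93% of the time, while a single shared word was much weaker evidence."""
    shared = content_words(sense_a["definition"]) & \
             content_words(sense_b["definition"])
    return len(shared) >= threshold, shared

flat_n = {"headword": "flat", "pos": "n", "definition": "a flat tire"}
print(mentions_headword(flat_n))    # True: noun sense linked to the adjective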
Both methods have approximately a 90% success rate in pairing the senses of morphological variants if those problems are removed. The next section will discuss our experiments with morphology. 3.2.2 Experiments with Morphology We conducted several experiments to determine the impact of grouping morphological variants on retrieval performance. These experiments are de- scribed in detail in (Krovetz 93), so we will only summarize them here. Our experiments compared a baseline (no stem- ming) against several different morphology routines: 1) a routine that grouped only inflectional variants (plurals and tensed verb forms), 2) a routine that grouped inflectional as well as derivational variants (e.g.,-ize,-ity), and 3) the Porter stemmer (Porter 80). These experiments were done with four different test collections which varied in both size and subject area. We found that there was a significant improve- ment over the baseline performance from grouping morphological variants. Earlier experiments with morphology in IR did not report improvements in performance (Harman 91). We attribute these differences to the use of different test collections, and in part to the use of different retrieval systems. We found that the improvement varies depending on the test collection, and that col- lections that were made up of shorter documents were more likely to improve. This is because morpholo- gical variants can occur within the same document, but they are less likely to do so in documents that are short. By grouping morphological variants, we are helping to improve access to the shorter docu- ments. However, we also found improvements even aExcluding closed class words, such as of and for. 75 in a collection of legal documents which had an av- erage length of more than 3000 words. We also found it was very difficult to improve retrieval performance over the performance of the Porter stemmer, which does not use a lexicon. The absence of a lexicon causes the Porter stemmer to make errors by grouping morphological "false friends" (e.g.. author/authority, or police/policy). We found that there were three reasons why the Porter stemmer improves performance despite such groupings. The first two reasons are associated with the heuristics used by the stemmer: 1) some word forms will be grouped when one of the forms has a combination of endings (e.g., -ization and -ize). We empirically found that the word forms in these groups are almost always related in meaning. 2) the stemmer uses a constraint on the form of the res- ulting stem based on a sequence of consonants and vowels; we found that this constraint is surprisingly effective at separating unrelated variants. The third reason has to do with the nature of morphological variants. We found that when a word form appears to be a variant, it often is a variant. For example, consider the grouping of police and policy. We ex- amined all words in the dictionary in which a word ended in 'y', and in which the 'y' could be replaced by 'e' and still yield a word in the dictionary. There were 175 such words, but only 39 were clearly un- related in meaning to the presumed root (i.e., cases like policy/police). Of the 39 unrelated word pairs, only 14 were grouped by the Porter stemmer because of the consonant/vowel constraints. We also identi- fied the morphological "'false friends" for the 10 most frequent suffixes. We found that out of 911 incorrect word pairs, only 303 were grouped by the Porter stemmer. 
Finally, we found that conflating inflectional variants harmed the performance of about a third of the queries. This is partially a result of the interaction between morphology and part-of-speech (e.g., a query that contains work in the sense of theoretical work will be grouped with all of the variants associated with the verb: worked, working, works); we note that some instances of works can be related to the singular form work (although not necessarily the right meaning of work), and some can be related to the untensed verb form. Grouping inflectional variants also harms retrieval performance because of an overlap between inflected forms and uninflected forms (e.g., arms can occur as a reference to weapons, or as an inflected form of arm). Conflating these forms has the effect of grouping unrelated concepts, and thus increases the net ambiguity.

Our experiments with morphology support our argument about distinguishing homonymy and polysemy. Grouping related morphological variants makes a significant improvement in retrieval performance. Morphological false friends (policy/police) often provide a strong separation between relevant and non-relevant documents (see (Krovetz and Croft 92)). There are no morphology routines that can currently handle the problems we encountered with inflectional variants, and it is likely that separating related from unrelated forms will make further improvements in performance.

3.2.3 Experiments with Part of Speech

Relatively little attention has been paid in IR to the differences in a word's part of speech. These differences have been used to help identify phrases (Dillon and Gray 83), and as a means of filtering for word sense disambiguation (to only consider the meanings of nouns (Voorhees 93)). To the best of our knowledge the differences have never been examined for distinguishing meanings within the context of IR.

The aim of our experiments was to determine how well part of speech differences correlate with differences in word meanings, and to what extent the use of meanings determined by these differences will affect the performance of a retrieval system. We conducted two sets of experiments, one concerned with homonymy, and one concerned with polysemy.

In the first experiment the Church tagger was used to identify the part-of-speech of the words in documents and queries. The collections were then indexed by the word tagged with the part of speech (i.e., instead of indexing 'book', we indexed 'book/noun' and 'book/verb').4 A baseline was established in which all variants of a word were present in the query, regardless of part of speech variation; the baseline did not include any morphological variants of the query words because we wanted to test the interaction between morphology and part-of-speech in a separate experiment. The baseline was then compared against a version of the query in which all variations were eliminated except for the part of speech that was correct (i.e., if the word was used as a noun in the original query, all other variants were eliminated). This constituted the experiment that tested homonymy. We then identified words that were related in spite of a difference in part of speech; this was based on the data that was produced by tagging the dictionary (see Section 3.2.1).

4 In actuality, we indexed it with whatever tags were used by the tagger; we are just using 'noun' and 'verb' for purposes of illustration.
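A minimal sketch of this tagged indexing scheme, covering both the homonymy run just described and the variant, described next, in which related part-of-speech senses are retained, might look as follows (illustrative Python with a toy collection; it is not the experimental system):

from collections import defaultdict

def index(docs):
    """docs: {doc_id: [(word, tag), ...]} -> inverted index on word/tag."""
    inv = defaultdict(set)
    for doc_id, tokens in docs.items():
        for word, tag in tokens:
            inv[f"{word}/{tag}"].add(doc_id)
    return inv

def retrieve(inv, word, tag, related_tags=()):
    """Homonymy run: related_tags = ().  Polysemy run: also admit tags
    whose senses are known to be related to the query sense."""
    docs = set(inv.get(f"{word}/{tag}", set()))
    for other in related_tags:
        docs |= inv.get(f"{word}/{other}", set())
    return docs

docs = {1: [("book", "noun")], 2: [("book", "verb")],
        3: [("diabetic", "adj")], 4: [("diabetic", "noun")]}
inv = index(docs)
print(retrieve(inv, "book", "noun"))                            # {1}
print(retrieve(inv, "diabetic", "noun", related_tags=["adj"]))  # {3, 4}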
Another version of the queries was constructed in which part of speech variants were retained if the meaning was related, and this was compared to the previous version.

When we ran the experiments, we found that performance decreased compared with the baseline. However, we found many cases where the tagger was incorrect.5 We were unable to determine whether the results of the experiment were due to the incorrectness of the hypothesis being tested (that distinctions in part of speech can lead to an improvement in performance), or to the errors made by the tagger. We also assumed that a difference in part-of-speech would correspond to a difference in meaning. The data in Table 1 and Table 2 shows that many words are related in meaning despite a difference in part-of-speech. Not all errors made by the tagger cause decreases in retrieval performance, and we are in the process of determining the error rate of the tagger on those words in which part-of-speech differences are also associated with a difference in concepts (e.g., novel as a noun and as an adjective).6

5 See (Krovetz 95) for more details about these errors.
6 There are approximately 4000 words in the Longman dictionary which have more than one part-of-speech. Less than half of those words will be like novel, and we are examining them by hand.

3.2.4 Experiments with Phrases

Phrases are an important and poorly understood area of IR. They generally improve retrieval performance, but the improvements are not consistent. Most research to date has focused on syntactic phrases, in which words are grouped together because they are in a specific syntactic relationship (Fagan 87), (Smeaton and Van Rijsbergen 88). The research in this section is concerned with a subset of these phrases, namely those that are lexical. A lexical phrase is a phrase that might be defined in a dictionary, such as hot line or back end. Lexical phrases can be distinguished from phrases such as sanctions against South Africa in that the meaning of a lexical phrase cannot necessarily be determined from the meaning of its parts.

Lexical phrases are generally made up of only two or three words (overwhelmingly just two), and they usually occur in a fixed order. The literature mentions examples such as blind venetians vs. venetian blinds, or science library vs. library science, but these are primarily just cute examples. It is very rare that the order could be reversed to produce a different concept.

Although dictionaries contain a large number of phrasal entries, there are many lexical phrases that are missing. These are typically proper nouns (United States, Great Britain, United Nations) or technical concepts (operating system, specific heat, due process, strict liability). We manually identified the lexical phrases in four different test collections (the phrases were based on our judgement), and we found that 92 out of 120 phrases (77%) were not found in the Longman dictionary. A breakdown of the phrases is given in (Krovetz 95).

For the phrase experiment we not only had to identify the lexical phrases, we also had to identify any related forms, such as database/data base. This was done via brute force -- a program simply concatenated every adjacent word in the database, and if it was also a single word in the collection it printed out the pair.
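A brute-force compound finder of the kind just described can be sketched in a few lines of Python (illustrative only; vocabulary handling and the exception list mentioned below are simplified away):

def find_open_closed_pairs(texts):
    vocabulary = {w for text in texts for w in text.lower().split()}
    pairs = set()
    for text in texts:
        words = text.lower().split()
        for first, second in zip(words, words[1:]):
            if first + second in vocabulary:
                pairs.add((first, second, first + second))
    return pairs

texts = ["the data base was slow", "a database administrator",
         "special ties were sold"]   # 'specialties' absent: no false pair
print(find_open_closed_pairs(texts))  # {('data', 'base', 'database')}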
We tested this with the Computer Science and Time collections, and used those results to develop an exception list for filtering the pairs (e.g., do not consider 'special ties/specialties'). We represented the phrases using a proximity operator,7 and tried several experiments to include the related form when it was found in the corpus.

7 The proximity operator specifies that the query words must be adjacent and in order, or occur within a specific number of words of each other.

We found that retrieval performance decreased for 118 out of 120 phrases. A failure analysis indicated that this was due to the need to assign partial credit to individual words of a phrase. The component words were always related to the meaning of the compound as a whole (e.g., Britain and Great Britain). We also found that most of the instances of open/closed compounds (e.g., database/data base) were related. Cases like 'stone wall/stonewall' or 'bottle neck/bottleneck' are infrequent. The effect on performance of grouping the compounds is related to the relative distribution of the open and closed forms. Database/data base occurred in about a 50/50 distribution, and the queries in which they occurred were significantly improved when the related form was included.

3.2.5 Interactions between Sources of Evidence

We found many interactions between the different sources of evidence. The most striking is the interaction between phrases and morphology. We found that the use of phrases acts as a filter for the grouping of morphological variants. Errors in morphology generally do not hurt performance within the restricted context. For example, the Porter stemmer will reduce department to depart, but this has no effect in the context of the phrase 'Justice department'.

4 Conclusion

Most of the research on lexical ambiguity has not been done in the context of an application. We have conducted experiments with hundreds of unique query words, and tens of thousands of word occurrences. The research described in this paper is one of the largest studies ever done. We have examined the lexicon as a whole, and focused on the distinction between homonymy and polysemy. Other research on resolving lexical ambiguity for IR (e.g., (Sanderson 94) and (Voorhees 93)) does not take this distinction into account.

Our research supports the argument that it is important to distinguish homonymy and polysemy. We have shown that natural language processing results in an improvement in retrieval performance (via grouping related morphological variants), and our experiments suggest where further improvements can be made. We have also provided an explanation for the performance of the Porter stemmer, and shown it is surprisingly effective at distinguishing variant word forms that are unrelated in meaning. The experiment with part-of-speech tagging also highlighted the importance of polysemy; more than half of all words in the dictionary that differ in part of speech are also related in meaning. Finally, our experiments with lexical phrases show that it is crucial to assign partial credit to the component words of a phrase. Our experiment with open/closed compounds indicated that these forms are almost always related in meaning.

The experiment with part-of-speech tagging indicated that taggers make a number of errors, and our current work is concerned with identifying those words in which a difference in part of speech is associated with a difference in meaning (e.g., train as a noun and as a verb).
The words that exhibit such differences are likely to affect retrieval performance. We are also examining lexical phrases to decide how to assign partial credit to the component words. This work will give us a better idea of how language processing can provide further improvements in IR, and a better understanding of language in general.

                            Part of speech of headword
Part of speech
within definition     V             N             Adj           Adv
V                                   63 (32.6%)    15 (15.2%)
N                     1167 (95%)                  82 (82.8%)    23 (41.8%)
Adj                   57 (4.6%)     126 (65.3%)                 31 (56.4%)
Adv                   3 (0.4%)      4 (2.0%)
Proportion            77.8%         12.2%         6.3%          3.3%

Table 1: Distribution of zero-affix morphology within dictionary definitions (n = 1566)

                            Part of speech of headword
Part of speech
within definition     V             N             Adj           Adv
V                                   486 (85%)     15 (14%)      2 (2%)
N                     239 (97%)                   87 (81%)      4 (3%)
Adj                   7 (3.0%)      87 (15%)                    119 (95%)
Adv                   1 (0.1%)                    4 (3.7%)
Proportion            23%           54%           10%           12%

Table 2: Distribution of zero-affix morphology (inflected) (n = 1054)

Acknowledgements

I am grateful to Dave Waltz for his comments and suggestions.

References

Chodorow M and Y Ravin and H Sachar, "Tool for Investigating the Synonymy Relation in a Sense Disambiguated Thesaurus", in Proceedings of the Second Conference on Applied Natural Language Processing, pp. 144-151, 1988.

Church K, "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text", in Proceedings of the Second Conference on Applied Natural Language Processing, pp. 136-143, 1988.

Dagan I and A Itai, "Word Sense Disambiguation Using a Second Language Monolingual Corpus", Computational Linguistics, Vol. 20, No. 4, 1994.

Dillon M and A Gray, "FASIT: a Fully Automatic Syntactically Based Indexing System", Journal of the American Society for Information Science, Vol. 34(2), 1983.

Fagan J, "Experiments in Automatic Phrase Indexing for Document Retrieval: A Comparison of Syntactic and Non-Syntactic Methods", PhD dissertation, Cornell University, 1987.

Grishman R and Kittredge R (eds), Analyzing Language in Restricted Domains, LEA Press, 1986.

Halliday M A K, "Lexis as a Linguistic Level", in In Memory of J. R. Firth, Bazell, Catford and Halliday (eds), Longman, pp. 148-162, 1966.

Harman D, "How Effective is Suffixing?", Journal of the American Society for Information Science, Vol. 42(1), pp. 7-15, 1991.

Helm S, "Closer Than You Think", Medicine and Computer, Vol. 1, No. 1, 1983.

Kilgarriff A, "Corpus Word Usages and Dictionary Word Senses: What is the Match? An Empirical Study", in Proceedings of the Seventh Annual Conference of the UW Centre for the New OED and Text Research: Using Corpora, pp. 23-39, 1991.

Krovetz R and W B Croft, "Lexical Ambiguity and Information Retrieval", ACM Transactions on Information Systems, pp. 145-161, 1992.

Krovetz R, "Viewing Morphology as an Inference Process", in Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 191-202, 1993.

Krovetz R, "Word Sense Disambiguation for Large Text Databases", PhD dissertation, University of Massachusetts, 1995.

Marchand H, "On a Question of Contrary Analysis with Derivational Connected but Morphologically Uncharacterized Words", English Studies, Vol. 44, pp. 176-187, 1963.

Popovic M and P Willett, "The Effectiveness of Stemming for Natural Language Access to Slovene Textual Data", Journal of the American Society for Information Science, Vol. 43(5), pp. 384-390, 1992.

Porter M, "An Algorithm for Suffix Stripping", Program, Vol. 14(3), pp. 130-137, 1980.

Proctor P, Longman Dictionary of Contemporary English, Longman, 1978.
Salton G, Automatic Information Organization and Retrieval, McGraw-Hill, 1968.

Salton G and McGill M, Introduction to Modern Information Retrieval, McGraw-Hill, 1983.

Sanderson M, "Word Sense Disambiguation and Information Retrieval", in Proceedings of the Seventeenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 142-151, 1994.

Small S, Cottrell G, and Tanenhaus M (eds), Lexical Ambiguity Resolution, Morgan Kaufmann, 1988.

Smeaton A and C J Van Rijsbergen, "Experiments on Incorporating Syntactic Processing of User Queries into a Document Retrieval Strategy", in Proceedings of the Eleventh Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 31-51, 1988.

Talmy L, "Lexicalization Patterns: Semantic Structure in Lexical Forms", in Language Typology and Syntactic Description. Volume III: Grammatical Categories and the Lexicon, T Shopen (ed), pp. 57-160, Cambridge University Press, 1985.

Van Rijsbergen C J, Information Retrieval, Butterworths, 1979.

Voorhees E, "Using WordNet to Disambiguate Word Senses for Text Retrieval", in Proceedings of the Sixteenth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 171-180, 1993.

Yarowsky D, "Word Sense Disambiguation Using Statistical Models of Roget's Categories Trained on Large Corpora", in Proceedings of the 14th Conference on Computational Linguistics, COLING-92, pp. 454-460, 1992.
Learning Features that Predict Cue Usage

Barbara Di Eugenio*   Johanna D. Moore†   Massimo Paolucci‡
University of Pittsburgh
Pittsburgh, PA 15260, USA
{dieugeni,jmoore,paolucci}@cs.pitt.edu

*Learning Research & Development Center
†Computer Science Department, and Learning Research & Development Center
‡Intelligent Systems Program

Abstract

Our goal is to identify the features that predict the occurrence and placement of discourse cues in tutorial explanations in order to aid in the automatic generation of explanations. Previous attempts to devise rules for text generation were based on intuition or small numbers of constructed examples. We apply a machine learning program, C4.5, to induce decision trees for cue occurrence and placement from a corpus of data coded for a variety of features previously thought to affect cue usage. Our experiments enable us to identify the features with the most predictive power, and show that machine learning can be used to induce decision trees useful for text generation.

1 Introduction

Discourse cues are words or phrases, such as because, first, and although, that mark structural and semantic relationships between discourse entities. They play a crucial role in many discourse processing tasks, including plan recognition (Litman and Allen, 1987), text comprehension (Cohen, 1984; Hobbs, 1985; Mann and Thompson, 1986; Reichman-Adar, 1984), and anaphora resolution (Grosz and Sidner, 1986). Moreover, research in reading comprehension indicates that felicitous use of cues improves comprehension and recall (Goldman, 1988), but that their indiscriminate use may have detrimental effects on recall (Millis, Graesser, and Haberlandt, 1993).

Our goal is to identify general strategies for cue usage that can be implemented for automatic text generation. From the generation perspective, cue usage consists of three distinct, but interrelated problems: (1) occurrence: whether or not to include a cue in the generated text, (2) placement: where the cue should be placed in the text, and (3) selection: what lexical item(s) should be used.

Prior work in text generation has focused on cue selection (McKeown and Elhadad, 1991; Elhadad and McKeown, 1990), or on the relation between cue occurrence and placement and specific rhetorical structures (Rösner and Stede, 1992; Scott and de Souza, 1990; Vander Linden and Martin, 1995). Other hypotheses about cue usage derive from work on discourse coherence and structure. Previous research (Hobbs, 1985; Grosz and Sidner, 1986; Schiffrin, 1987; Mann and Thompson, 1988; Elhadad and McKeown, 1990), which has been largely descriptive, suggests factors such as structural features of the discourse (e.g., level of embedding and segment complexity), intentional and informational relations in that structure, ordering of relata, and syntactic form of discourse constituents.

Moser and Moore (1995; 1997) coded a corpus of naturally occurring tutorial explanations for the range of features identified in prior work. Because they were also interested in the contrast between occurrence and non-occurrence of cues, they exhaustively coded for all of the factors thought to contribute to cue usage in all of the text. From their study, Moser and Moore identified several interesting correlations between particular features and specific aspects of cue usage, and were able to test specific hypotheses from the literature that were based on constructed examples.
In this paper, we focus on cue occurrence and placement, and present an empirical study of the hypotheses provided by previous research, which have never been systematically evaluated with naturally occurring data. We use a machine learning program, C4.5 (Quinlan, 1993), on the tagged corpus of Moser and Moore to induce decision trees. The number of coded features and their interactions makes the manual construction of rules that predict cue occurrence and placement an intractable task.

Our results largely confirm the suggestions from the literature, and clarify them by highlighting the most influential features for a particular task. Discourse structure, in terms of both segment structure and levels of embedding, affects cue occurrence the most; intentional relations also play an important role. For cue placement, the most important factors are syntactic structure and segment complexity.

The paper is organized as follows. In Section 2 we discuss previous research in more detail. Section 3 provides an overview of Moser and Moore's coding scheme. In Section 4 we present our learning experiments, and in Section 5 we discuss our results and conclude.

2 Related Work

McKeown and Elhadad (1991; 1990) studied several connectives (e.g., but, since, because), and include many insightful hypotheses about cue selection; their observation that the distinction between but and although depends on the point of the move is related to the notion of core discussed below. However, they do not address the problem of cue occurrence.

Other researchers (Rösner and Stede, 1992; Scott and de Souza, 1990) are concerned with generating text from "RST trees", hierarchical structures where leaf nodes contain content and internal nodes indicate the rhetorical relations, as defined in Rhetorical Structure Theory (RST) (Mann and Thompson, 1988), that exist between subtrees. They proposed heuristics for including and choosing cues based on the rhetorical relation between spans of text, the order of the relata, and the complexity of the related text spans. However, (Scott and de Souza, 1990) was based on a small number of constructed examples, and (Rösner and Stede, 1992) focused on a small number of RST relations.

(Litman, 1996) and (Siegel and McKeown, 1994) have applied machine learning to disambiguate between the discourse and sentential usages of cues; however, they do not consider the issues of occurrence and placement, and approach the problem from the point of view of interpretation. We closely follow the approach in (Litman, 1996) in two ways. First, we use C4.5. Second, we experiment first with each feature individually, and then with "interesting" subsets of features.

3 Relational Discourse Analysis

This section briefly describes Relational Discourse Analysis (RDA) (Moser, Moore, and Glendening, 1996), the coding scheme used to tag the data for our machine learning experiments.

RDA is a scheme devised for analyzing tutorial explanations in the domain of electronics troubleshooting. It synthesizes ideas from (Grosz and Sidner, 1986) and from RST (Mann and Thompson, 1988). Coders use RDA to exhaustively analyze each explanation in the corpus, i.e., every word in each explanation belongs to exactly one element in the analysis. An explanation may consist of multiple segments. Each segment originates with an intention of the speaker.
Segments are internally structured and consist of a core, i.e., that element that most directly expresses the segment purpose, and any number of contributors, i.e., the remaining constituents. For each contributor, one analyzes its relation to the core from an intentional perspective, i.e., how it is intended to support the core, and from an informational perspective, i.e., how its content relates to that of the core.¹ The set of intentional relations in RDA is a modification of the presentational relations of RST, while informational relations are similar to the subject matter relations in RST. Each segment constituent, both core and contributors, may itself be a segment with a core:contributor structure. In some cases the core is not explicit. This is often the case with the whole tutor's explanation, since its purpose is to answer the student's explicit question.

¹For more detail about the RDA coding scheme see (Moser and Moore, 1995; Moser and Moore, 1997).

As an example of the application of RDA, consider the partial tutor explanation in (1).² The purpose of this segment is to inform the student that she made the strategy error of testing inside part3 too soon.

(1) A. Although you know that part1 is good,
    B. you should eliminate part2 before troubleshooting inside part3.
    C. This is because 1. part2 is moved frequently and thus 2. is more susceptible to damage than part3.
    D. Also, it is more work to open up part3 for testing
    E. and the process of opening drawers and extending cards in part3 may induce problems which did not already exist.

²To make the example more intelligible, we replaced references to parts of the circuit with the labels part1, part2 and part3.

The constituent that makes the purpose obvious, in this case (1-B), is the core of the segment. The other constituents help to serve the segment purpose by contributing to it. (1-C) is an example of a subsegment with its own core:contributor structure; its purpose is to give a reason for testing part2 first. The RDA analysis of (1) is shown schematically in Figure 1. The core is depicted as the mother of all the relations it participates in. Each relation node is labeled with both its intentional and informational relation, with the order of relata in the label indicating the linear order in the discourse. Each relation node has up to two daughters: the cue, if any, and the contributor, in the order they appear in the discourse.

Figure 1: The RDA analysis of (1). [Tree diagram not reproduced; B is the mother node, related to A by concede (criterion:act) cued by "Although", and to C, D and E by convince (act:reason) relations, the first cued by "This is because"; within C, a convince (cause:effect) relation cued by "and thus" links C.1 and C.2.]

Coders analyze each explanation in the corpus and enter their analyses into a database. The corpus consists of 854 clauses comprising 668 segments, for a total of 780 relations. Table 1 summarizes the distribution of different relations, and the number of cued relations in each category. Joints are segments comprising more than one core, but no contributor; clusters are multiunit structures with no recognizable core:contributor relation. (1-B) is a cluster composed of two units (the two clauses), related only at the informational level by a temporal relation. Both clauses describe actions, with the first action description embedded in a matrix ("You should"). Cues are much more likely to occur in clusters, where only informational relations occur, than in core:contributor structures, where intentional and informational relations co-occur (χ² = 33.367, p < .001, df = 1). In the following, we will not discuss joints and clusters any further.

Type of relation     Total   # of cued relations
Core:Contributor     406     181
Joints                64      19
Clusters             310     276
Total                780     476

Table 1: Distributions of relations and cue occurrences

An important result pointed out by (Moser and Moore, 1995) is that cue placement depends on core position. When the core is first and a cue is associated with the relation, the cue never occurs with the core. In contrast, when the core is second, if a cue occurs, it can occur either on the core or on the contributor.

4 Learning from the corpus

4.1 The algorithm

We chose the C4.5 learning algorithm (Quinlan, 1993) because it is well suited to a domain such as ours with discrete valued attributes. Moreover, C4.5 produces decision trees and rule sets, both often used in text generation to implement mappings from function features to forms.³ Finally, C4.5 is both readily available, and is a benchmark learning algorithm that has been extensively used in NLP applications, e.g. (Litman, 1996; Mooney, 1996; Vander Linden and Di Eugenio, 1996).

³We will discuss only decision trees here.

As our dataset is small, the results we report are based on cross-validation, which (Weiss and Kulikowski, 1991) recommends as the best method to evaluate decision trees on datasets whose cardinality is in the hundreds. Data for learning should be divided into training and test sets; however, for small datasets this has the disadvantage that a sizable portion of the data is not available for learning. Cross-validation obviates this problem by running the algorithm N times (N=10 is a typical value): in each run, (N-1)/N of the data, randomly chosen, is used as the training set, and the remaining 1/N is used as the test set. The error rate of a tree obtained by using the whole dataset for training is then assumed to be the average error rate on the test set over the N runs. Further, as C4.5 prunes the initial tree it obtains to avoid overfitting, it computes both actual and estimated error rates for the pruned tree; see (Quinlan, 1993, Ch. 4) for details. Thus, below we will report the average estimated error rate on the test set, as computed by 10-fold cross-validation experiments.
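This protocol is easy to reproduce with modern tooling. The following sketch is ours, not the authors': C4.5 itself is not assumed to be available, so a CART-style tree from scikit-learn stands in for it, and the feature values and labels below are invented placeholders for the 406 coded relations.

    import random

    from sklearn.model_selection import cross_val_score
    from sklearn.preprocessing import OrdinalEncoder
    from sklearn.tree import DecisionTreeClassifier

    # Stand-in for the coded corpus: each row pairs categorical feature
    # values (Trib-pos, Inten-rel, Syn-rel) with a cue label (1 = cued).
    # All values here are synthetic, not the actual data.
    random.seed(0)
    trib_pos = ["B1A0-1pre", "B1A3-2after", "B2A1-1pre"]
    inten_rel = ["enable", "convince", "concede"]
    syn_rel = ["independent", "coordinate", "subordinate"]
    X_raw = [(random.choice(trib_pos), random.choice(inten_rel),
              random.choice(syn_rel)) for _ in range(406)]
    y = [random.randint(0, 1) for _ in X_raw]

    X = OrdinalEncoder().fit_transform(X_raw)  # categorical -> numeric codes
    tree = DecisionTreeClassifier()            # CART, standing in for C4.5

    # 10-fold cross-validation, as in Section 4.1: the mean test error over
    # the folds estimates the error of a tree trained on the whole dataset.
    scores = cross_val_score(tree, X, y, cv=10)
    print("estimated error rate: %.3f" % (1.0 - scores.mean()))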
4.2 The features

Each data point in our dataset corresponds to a core:contributor relation, and is characterized by the following features, summarized in Table 2.

Feature type        Feature            Description
Segment structure   Trib-pos           relative position of contrib in segment;
                                       number of contribs before and after core
                    Inten-structure    intentional structure of segment
                    Infor-structure    informational structure of segment
Core:contributor    Inten-rel          enable, convince, concede
relation            Info-rel           4 classes of about 30 distinct relations
                    Syn-rel            independent sentences / segments,
                                       coordinated clauses, subordinated clauses
                    Adjacency          are core and contributor adjacent?
Embedding           Core-type          segment, minimal unit
                    Trib-type          segment, minimal unit
                    Above / Below      number of relations hierarchically
                                       above / below current relation

Table 2: Features

Segment Structure. Three features capture the global structure of the segment in which the current core:contributor relation appears.

• (Con)Trib(utor)-pos(ition) captures the position of a particular contributor within the larger segment in which it occurs, and encodes the structure of the segment in terms of how many contributors precede and follow the core. For example, contributor (1-D) in Figure 1 is labeled as B1A3-2after, as it is the second contributor following the core in a segment with 1 contributor before and 3 after the core.

• Inten(tional)-structure indicates which contributors in the segment bear the same intentional relations to the core.

• Infor(mational)-structure. Similar to intentional structure, but applied to informational relations.

Core:contributor relation. These features more specifically characterize the current core:contributor relation.

• Inten(tional)-rel(ation). One of concede, convince, enable.

• Infor(mational)-rel(ation). About 30 informational relations have been coded for. However, as preliminary experiments showed that using them individually results in overfitting the data, we classify them according to the four classes proposed in (Moser, Moore, and Glendening, 1996): causality, similarity, elaboration, temporal. Temporal relations only appear in clusters, thus not in the data we discuss in this paper.

• Syn(tactic)-rel(ation). Captures whether the core and contributor are independent units (segments or sentences); whether they are coordinated clauses; or which of the two is subordinate to the other.

• Adjacency. Whether core and contributor are adjacent in linear order.

Embedding. These features capture segment embedding, Core-type and Trib-type qualitatively, and Above/Below quantitatively.

• Core-type/(Con)Trib(utor)-type. Whether the core/the contributor is a segment, or a minimal unit (further subdivided into action, state, matrix).

• Above/Below encode the number of relations hierarchically above and below the current relation.

4.3 The experiments

Initially, we performed learning on all 406 instances of core:contributor relations. We quickly determined that this approach would not lead to useful decision trees. First, the trees we obtained were extremely complex (at least 50 nodes). Second, some of the subtrees corresponded to clearly identifiable subclasses of the data, such as relations with an implicit core, which suggested that we should apply learning to these independently identifiable subclasses. Thus, we subdivided the data into three subsets:

• Core1: core:contributor relations with the core in first position
• Core2: core:contributor relations with the core in second position
• Impl(icit)-core: core:contributor relations with an implicit core

While this has the disadvantage of smaller training sets, the trees we obtain are more manageable and more meaningful. Table 3 summarizes the cardinality of these sets, and the frequencies of cue occurrence.

Dataset     # of relations   # of cued relations
Core1       127              52
Core2       155              100 (on Trib: 43; on Core: 57)
Impl-core   124              29
Total       406              181

Table 3: Distributions of relations and cue occurrences

We ran four sets of experiments. In three of them we predict cue occurrence and in one cue placement.⁴

4.3.1 Cue Occurrence

Table 4 summarizes our main results concerning cue occurrence, and includes the error rates associated with different feature sets. We adopt Litman's approach (1996) to determine whether two error rates E1 and E2 are significantly different: we compute 95% confidence intervals for the two error rates using a t-test, and E1 is significantly better than E2 if the upper bound of the 95% confidence interval for E1 is lower than the lower bound of the 95% confidence interval for E2. For each set of experiments, we report the following:

1. A baseline measure obtained by choosing the majority class.
E.g., for Core1 58.9% of the relations are not cued; thus, by deciding to never include a cue, one would be wrong 41.1% of the time.

2. The best individual features whose predictive power is better than the baseline: as Table 4 makes apparent, individual features do not have much predictive power. For neither Core1 nor Impl-core does any individual feature perform better than the baseline, and for Core2 only one feature is sufficiently predictive.

3. (One of) the best induced tree(s). For each tree, we list the number of nodes, and up to six of the features that appear highest in the tree, with their levels of embedding.⁵ Figure 2 shows the tree for Core2 (space constraints prevent us from including figures for each tree). In the figure, the numbers in parentheses indicate the number of cases correctly covered by the leaf, and the number of expected errors at that leaf.

⁴All our experiments are run with grouping turned on, so that C4.5 groups values together rather than creating a branch per value. The latter choice always results in trees overfitted to the data in our domain. Using classes of informational relations, rather than individual informational relations, constitutes a sort of a priori grouping.

⁵The trees that C4.5 generates are right-branching, so this description is fairly adequate.

Learning turns out to be most useful for Core1, where the error reduction (as a percentage) from baseline to the upper bound of the best result is 32%; error reduction is 19% for Core2 and only 3% for Impl-core.

The best tree was obtained partly by informed choice, partly by trial and error. Automatically trying out all the 2¹¹ = 2048 subsets of features would be possible, but it would require manual examination of about 2,000 sets of results, a daunting task. Thus, for each dataset we considered only the following subsets of features:

1. All features. This always results in C4.5 selecting a few features (from 3 to 7) for the final tree.

2. Subsets built out of the 2 to 4 attributes appearing highest in the tree obtained by running C4.5 on all features.

3. In Table 2, three features -- Trib-pos, Inten-struct, Infor-struct -- concern segment structure, eight do not. We constructed three subsets by always including the eight features that do not concern segment structure, and adding one of those that does. The trees obtained by including Trib-pos, Inten-struct and Infor-struct at the same time are in general more complex, and not significantly better than other trees obtained by including only one of these three features. We attribute this to the fact that these features encode partly overlapping information.

Finally, the best tree was obtained as follows. We build the set of trees that are statistically equivalent to the tree with the best error rate (i.e., with the lowest error rate upper bound). Among these trees, we choose the one that we deem the most perspicuous in terms of features and of complexity. Namely, we pick the simplest tree with Trib-Pos as the root if one exists, otherwise the simplest tree. Trees that have Trib-Pos as the root are the most useful for text generation, because, given a complex segment, Trib-Pos is the only attribute that unambiguously identifies a specific contributor.
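Statistical equivalence here is decided by the confidence-interval test introduced at the start of Section 4.3.1. A minimal sketch of that test, ours rather than the authors' code, assuming the ten per-fold error rates are available as plain lists:

    import math

    # t critical value for 95% confidence, 9 degrees of freedom (10 folds)
    T_95_DF9 = 2.262

    def confidence_interval(fold_error_rates):
        """95% CI for the mean error rate over cross-validation folds."""
        n = len(fold_error_rates)
        mean = sum(fold_error_rates) / n
        var = sum((e - mean) ** 2 for e in fold_error_rates) / (n - 1)
        half_width = T_95_DF9 * math.sqrt(var / n)
        return mean - half_width, mean + half_width

    def significantly_better(errors_a, errors_b):
        """True if tree A's error rate is significantly lower than tree B's:
        the upper bound of A's interval lies below the lower bound of B's."""
        _, upper_a = confidence_interval(errors_a)
        lower_b, _ = confidence_interval(errors_b)
        return upper_a < lower_b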
Our results make apparent that the structure of segments plays a fundamental role in determining cue occurrence. One of the three features concerning segment structure (Trib-Pos, Inten-Structure, Infor-Structure) appears as the root or just below the root in all trees in Table 4; more importantly, this same configuration occurs in all trees equivalent to the best tree (even if the specific feature encoding segment structure may change). The level of embedding in a segment, as encoded by Core-type, Trib-type, Above and Below, also figures prominently.

                 Core1              Core2                  Impl-core
Baseline         41.1               35.4                   23.5
Best features    (none)             Info-rel: 33.4±0.94    (none)
Best tree        25.6±1.24 (15)     27.4±1.28 (18)         22.1±0.57 (10)
                 0. Trib-pos        0. Trib-Pos            0. Core-type
                 1. Trib-type       1. Inten-rel           1. Infor-struct
                 2. Syn-rel         2. Info-rel            2. Inten-rel
                 3. Core-type       3. Above
                 4. Above           4. Core-type
                 5. Inten-rel       5. Below

Table 4: Summary of learning results

Figure 2: Decision tree for Core2 -- occurrence. [Diagram not reproduced: the root tests Trib-Pos; nodes below it test Inten-Rel, Info-Rel, Core-Type, Below and Trib-Pos again, and the leaves are labelled Cue or No-Cue, each with its case/error counts.]

Inten-rel appears in all trees, confirming the intuition that the speaker's purpose affects cue occurrence. More specifically, in Figure 2, Inten-rel distinguishes two different speaker purposes, convince and enable. The same split occurs in some of the best trees induced on Core1, with the same outcome: i.e., convince directly correlates with the occurrence of a cue, whereas for enable other features must be taken into account.⁶ Informational relations do not appear as often as intentional relations; their discriminatory power seems more relevant for clusters. Preliminary experiments show that cue occurrence in clusters depends only on informational and syntactic relations. Finally, Adjacency does not seem to play any substantial role.

⁶We can't draw any conclusions concerning concede, as there are only 24 occurrences of concede out of 406 core:contributor relations.

4.3.2 Cue Placement

While cue occurrence and placement are interrelated problems, we performed learning on them separately. First, the issue of placement arises only in the case of Core2; for Core1, cues only occur on the contributor. Second, we attempted experiments on Core2 that discriminated between occurrence and placement at the same time, and the derived trees were complex and not perspicuous. Thus, we ran an experiment on the 100 cued relations from Core2 to investigate which factors affect placing the cue on the contributor in first position or on the core in second; see Table 5.
I A 1.~ I Pro' ~ { B2AO-Iofe B2AI-Iprc Cue-on-Core Cue~on-Trib Figure 3: Decision tree for Core~-- placement see Table 5. We ran the same trials discussed above on this dataset. In this case, the best tree -- see Figure 3 -- results from combining the two best individual features, and reduces the error rate by 50%. The most discriminant feature turns out to be the syn- tactic relation between the contributor and the core. However, segment structure still plays an important role, via Trib-pos. While the importance of S~ln-rel for placement seems clear, its role concerning occurrence requires further exploration. It is interesting to note that the tree induced on Gorel -- the only case in which Syn- rel is relevant for occurrence -- indudes the same dis- tinction as in Figure 3: namely, if the contributor de- pends on the core, the contributor must be marked, otherwise other features have to be taken into ac- count. Scott and de Souza (1990) point out that "there is a strong correlation between the syntactic specification of a complex sentence and its perceived rhetorical structure." It seems that certain syntactic structures function as a cue. 5 Discussion and Conclusions We have presented the results of machine learning ex- periments concerning cue occurrence and placement. As (Litman, 1996) observes, this sort of empirical work supports the utility of machine learning tech- niques applied to coded corpora. As our study shows, individual features have no predictive power for cue occurrence. Moreover, it is hard to see how the best combination of individual features could be found by manual inspection. Our results also provide guidance for those build- ing text generation systems. This study clearly in- dicates that segment structure, most notably the ordering of core and contributor, is crucial for de- termining cuc occurrence. Recall that it was only by considering Corel and Core~ relations in distinct datasets that we were able to obtain perspicuous de- cision trees that signifcantly reduce the error rate. This indicates that the representations produced by discourse planners should distinguish those ele- ments that constitute the core of each discourse seg- ment, in addition to representing the hierarchical structure of segments. Note that the notion of core is related to the notions of nucleus in RST, intended effect in (Young and Moore, 1994), and of point of a move in (Elhadad and McKeown, 1990), and that text generators representing these notions exist. Moreover, in order to use the decision trees derived here, decisions about whether or not to make the core explicit and how to order the core and contributor(s) must be made before deciding cue occurrence, e.g., by exploiting other factors such as focus (McKeown, 1985) and a discourse history. Once decisions about core:contributor ordering and cuc occurrence have been made, a generator must still determine where to place cues and se- lect appropriate Icxical items. A major focus of our future research is to explore the relationship be- tween the selection and placement decisions. Else- where, we have found that particular lexical items tend to have a preferred location, defined in terms of functional (i.e., core or contributor) and linear (i.e., first or second relatum) criteria (Moser and Moore, 1997). 
Thus, if a generator uses decision trees such as the one shown in Figure 3 to determine where a cuc should bc placed, it can then select an appro- priate cue from those that can mark the given in- tentional / informational relations, and are usually placed in that functional-linear location. To evaluate this strategy, we must do further work to understand whether there are important distinctions among cues (e.g., so, because) apart from their different preferred locations. The work of Elhadad (1990) and Knott (1996) will help in answering this question. Future work comprises further probing into ma- chine learning techniques, in particular investigating whether other learning algorithms are more appro- priate for our problem (Mooney, 1996), especially al- gorithms that take into account some a priori knowl- edge about features and their dependencies. Acknowledgements This research is supported by the Office of Naval Research, Cognitive and Neural Sciences Division (Grants N00014-91-J-1694 and N00014-93-I-0812). Thanks to Megan Moser for her prior work on this project and for comments on this paper; to Erin Glendening and Liina Pylkkanen for their coding ef- forts; to Haiqin Wang for running many experiments; to Giuseppe Carenini and Stefll Briininghaus for dis- cussions about machine learning. 86 References Cohen, Robin. 1984. A computational theory of the function of clue words in argument understand- ing. In Proceedings of COLINGS~, pages 251-258, Stanford, CA. Elhadad, Michael and Kathleen McKeown. 1990. Generating connectives. In Proceedings of COL- INGgO, pages 97-101, Helsinki, Finland. Goldman, Susan R. 1988. The role of sequence markers in reading and recall: Comparison of na- tive and normative english speakers. Technical re- port, University of California, Santa Barbara. Grosz, Barbara J. and Candace L. Sidner. 1986. At- tention, intention, and the structure of discourse. Computational Linguistics, 12(3):175-204. Hobbs, Jerry R. 1985. On the coherence and struc- ture of discourse. Technical Report CSLI-85-37, Center for the Study of Language and Informa- tion, Stanford University. Knott, Alistair. 1996. A Data-Driver, methodology for motivating a set of coherence relations. Ph.D. thesis, University of Edinburgh. Litman, Diane J. 1996. Cue phrase classification using machine learning. Journal of Artificial In- telligence Research, 5:53-94. Litman, Diane J. and James F. Allen. 1987. A plan recognition model for subdialogues in conver- sations. Cognitive Science, 11:163-200. Mann, William C. and Sandra A. Thompson. 1986. Relational propositions in discourse. Discourse Processes, 9:57-90. Mann, William C. and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Towards a functional theory of text organization. TEXT, 8(3):243-281. McKeown, Kathleen R. 1985. Tezt Generation: Us- ing Discourse Strategies and Focus Constraints to Generate Natural Language Tezt. Cambridge Uni- versity Press, Cambridge, England. McKeown, Kathleen R. and Michael Elhadad. 1991. A contrastive evaluation of functional unification grammar for surface language generation: A case study in the choice of connectives. In C. L. Paris, W. R. Swartout, and W. C. Mann, eds., Natu- ral Language Generation in Artificial Intelligence and Computational Linguistics. Kluwer Academic Publishers, Boston, pages 351-396. Millis, Keith, Arthur Graesser, and Karl Haberlandt. 1993. The impact of connectives on the memory for expository text. Applied Cognitive Psychology, 7:317-339. Mooney, Raymond J. 1996. 
Comparative experiments on disambiguating word senses: An illustration of the role of bias in machine learning. In Conference on Empirical Methods in Natural Language Processing.

Moser, Megan and Johanna D. Moore. 1995. Investigating cue selection and placement in tutorial discourse. In Proceedings of ACL-95, pages 130-135, Boston, MA.

Moser, Megan and Johanna D. Moore. 1997. A corpus analysis of discourse cues and relational discourse structure. Submitted for publication.

Moser, Megan, Johanna D. Moore, and Erin Glendening. 1996. Instructions for Coding Explanations: Identifying Segments, Relations and Minimal Units. Technical Report 96-17, University of Pittsburgh, Department of Computer Science.

Quinlan, J. Ross. 1993. C4.5: Programs for Machine Learning. Morgan Kaufmann.

Reichman-Adar, Rachel. 1984. Extended person-machine interface. Artificial Intelligence, 22(2):157-218.

Rösner, Dietmar and Manfred Stede. 1992. Customizing RST for the automatic production of technical manuals. In R. Dale, E. Hovy, D. Rösner, and O. Stock, eds., 6th International Workshop on Natural Language Generation, Springer-Verlag, Berlin, pages 199-215.

Schiffrin, Deborah. 1987. Discourse Markers. Cambridge University Press, New York.

Scott, Donia and Clarisse Sieckenius de Souza. 1990. Getting the message across in RST-based text generation. In R. Dale, C. Mellish, and M. Zock, eds., Current Research in Natural Language Generation. Academic Press, New York, pages 47-73.

Siegel, Eric V. and Kathleen R. McKeown. 1994. Emergent linguistic rules from inducing decision trees: Disambiguating discourse clue words. In Proceedings of AAAI-94, pages 820-826.

Vander Linden, Keith and Barbara Di Eugenio. 1996. Learning micro-planning rules for preventative expressions. In 8th International Workshop on Natural Language Generation, Sussex, UK.

Vander Linden, Keith and James H. Martin. 1995. Expressing rhetorical relations in instructional text: A case study of the purpose relation. Computational Linguistics, 21(1):29-58.

Weiss, Sholom M. and Casimir Kulikowski. 1991. Computer Systems That Learn: Classification and Prediction Methods from Statistics, Neural Nets, Machine Learning, and Expert Systems. Morgan Kaufmann.

Young, R. Michael and Johanna D. Moore. 1994. DPOCL: A Principled Approach to Discourse Planning. In 7th International Workshop on Natural Language Generation, Kennebunkport, Maine.

| 1997 | 11 |
Expectations in Incremental Discourse Processing

Dan Cristea
Faculty of Computer Science
University "A.I. Cuza"
16, Berthelot Street
6600 - Iasi, Romania
dcristea@infoiasi.ro

Bonnie Webber
Dept. of Computer & Information Science
University of Pennsylvania
200 South 33rd Street
Philadelphia PA 19104-6389 USA
bonnie@central.cis.upenn.edu

Abstract

The way in which discourse features express connections back to the previous discourse has been described in the literature in terms of adjoining at the right frontier of discourse structure. But this does not allow for discourse features that express expectations about what is to come in the subsequent discourse. After characterizing these expectations and their distribution in text, we show how an approach that makes use of substitution as well as adjoining on a suitably defined right frontier can be used to both process expectations and constrain discourse processing in general.

1 Introduction

Discourse processing subsumes several distinguishable but interlinked processes. These include reference and ellipsis resolution, inference (e.g., inferential processes associated with focus particles such as, in English, "even" and "only"), and identification of those structures underlying a discourse that are associated with coherence relations between its units. In the course of developing an incremental approach to the latter, we noticed a variety of constructions in discourse that raise expectations about its future structural features. We found that we could represent such expectations by adopting a lexical variant of TAG -- LTAG (Schabes, 1990) -- and using its substitution operation as a complement to adjoining. Perhaps more interesting was that these expectations appeared to constrain the subsequent discourse until they were resolved. This we found we could model in terms of constraints on adjoining and substitution with respect to a suitably defined Right Frontier. This short paper focusses on the phenomenon of these expectations in discourse and their expression in a discourse-level LTAG. We conclude the paper with some thoughts on incremental discourse processing in light of these expectations.

The following examples illustrate the creation of expectations through discourse markers:

Example 1
a. On the one hand, John is very generous.
b. On the other, he is extremely difficult to find.

Example 2
a. On the one hand, John is very generous.
b. On the other, suppose you needed some money.
c. You'd see that he's very difficult to find.

Example 3
a. On the one hand, John is very generous.
b. For example, suppose you needed some money.
c. You would just have to ask him for it.
d. On the other hand, he is very difficult to find.

Example 1 illustrates the expectation that, following a clause marked "on the one hand", the discourse will express a contrasting situation (here marked by "on the other"). Examples 2 and 3 illustrate that such an expectation need not be satisfied immediately by the next clause: In Example 2, clause (b) partially resolves the expectation set up in (a), but introduces an expectation that the subsequent discourse will indicate what happens in such cases. That expectation is then resolved in clause (c). In Example 3, the next two clauses do nothing to satisfy the expectation raised in clause (a): rather, they give evidence for the claim made in (a). The expectation raised in (a) is not resolved until clause (d).
These examples show expectations raised by sentential adverbs and the imperative use of the verb "suppose". Subordinate conjunctions (e.g., "just as", "although", "when", etc.) can lead to similar expectations when they appear in a preposed subordinate clause -- e.g.

Example 4
a. Although John is very generous,
b. if you should need some money,
c. you'd see that he's difficult to find.

As in Example 2, clause 4(a) raises the expectation of learning what is nevertheless the case. Clause 4(b) partially satisfies that expectation by raising a hypothetical situation, along with the expectation of learning what is true in such a situation. This latter expectation is then satisfied in clause 4(c).

In summary, these expectations can be characterized as follows: (1) once raised, an expectation must be resolved, but its resolvant can be a clause that raises its own expectations; (2) a clause raising an expectation can itself be elaborated before that expectation is resolved, including elaboration by clauses that raise their own expectations; and (3) the most deeply "embedded" expectations must always be resolved first (see the sketch at the end of this section).

Now these are very likely not the only kinds of expectations to be found in discourse: Whenever events or behavior follow fairly regular patterns over time, observers develop expectations about what will come next or at least eventually. For example, a dialogue model may embody the expectation that a suggestion made by one dialogue participant would eventually be followed by an explicit or implicit rejection, acceptance or tabling by the other. Other dialogue actions such as clarifications or justifications may intervene, but there is a sense of an expectation being resolved when the suggestion is responded to. Here we are focussed on discourse at the level of individual monologue or turn within a larger discourse: what we show is that discourse manifests certain forward-looking patterns that have similar constraints to those of sentence-level syntax and can be handled by similar means. One possible reason that these particular kinds of expressions may not have been noticed before is that in non-incremental approaches to discourse processing (Mann and Thompson, 1988; Marcu, 1996), they don't stand out as obviously different.

The labels for discourse coherence relations used here are similar to those of RST (Mann and Thompson, 1988), but for simplicity, are treated as binary. Since any multi-branching tree can be converted to a binary tree, no representational power is lost. In doing this, we follow several recent converging computational approaches to discourse analysis, which are also couched in binary terms (Gardent, 1997; Marcu, 1996; Polanyi and van den Berg, 1996; Schilder, 1997; van den Berg, 1996).
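Properties (1)-(3) above amount to a stack discipline on open expectations: raising one pushes it, and only the most recently raised (most deeply embedded) one can be popped. The toy sketch below is our illustration, not part of the authors' proposal, and the clause encoding is deliberately schematic.

    class ExpectationTracker:
        """Track open expectations as a stack (property 3: innermost first)."""

        def __init__(self):
            self.open = []  # pending expectations, innermost last

        def process_clause(self, raises=None, resolves_innermost=False):
            # A clause may resolve the innermost open expectation ...
            if resolves_innermost:
                if not self.open:
                    raise ValueError("no open expectation to resolve")
                self.open.pop()
            # ... and may itself raise a new one (property 1); clauses that
            # merely elaborate leave the stack untouched (property 2).
            if raises is not None:
                self.open.append(raises)

    # Example 2: "On the one hand ..." / "On the other, suppose ..." / "You'd see ..."
    t = ExpectationTracker()
    t.process_clause(raises="contrast")                              # 2a
    t.process_clause(resolves_innermost=True, raises="consequence")  # 2b
    t.process_clause(resolves_innermost=True)                        # 2c
    assert t.open == []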
Implicit in our discussion is the view that in processing a discourse incrementally, its semantics and pragmatics are computed compositionally from the structure reflected in the coherence relations between its units. In the figures presented here, non-terminal nodes in a discourse structure are labelled with coherence relations merely to indicate the functions that project appropriate content, beliefs and other side effects into the recipient's discourse model. This view is, we believe, consistent with the more detailed formal interfaces to discourse semantics/pragmatics presented in (Gardent, 1997; Schilder, 1997; van den Berg, 1996), and also allows for multiple discourse relations (intentional and informational) to hold between discourse units (Moore and Pollack, 1992; Moser and Moore, 1995; Moser and Moore, 1996) and contribute to the semantic/pragmatic effects on the recipient's discourse model.

2 Expectations in Corpora

The examples given in the Introduction were all "minimal pairs" created to illustrate the relevant phenomenon as succinctly as possible. Empirical questions thus include: (1) the range of lexico-syntactic constructions that raise expectations with the specific properties mentioned above; (2) the frequency of expectation-raising constructions in text; (3) the frequency with which expectations are satisfied immediately, as opposed to being delayed by material that elaborates the unit raising the expectation; (4) the frequency of embedded expectations; and (5) features that provide evidence for an expectation being satisfied.

While we do not have answers to all these questions, a very preliminary analysis of the Brown Corpus, a corpus of approximately 1600 email messages, and a short Romanian text by T. Vianu (approx. 5000 words) has yielded some interesting results.

First, reviewing the 270 constructions that Knott has identified as potential cue phrases in the Brown Corpus¹, one finds 15 adverbial phrases (such as "initially", "at first", "to start with", etc.) whose presence in a clause would lead to an expectation being raised. All left-extraposed clauses in English raise expectations (as in Example 4), so all the subordinate conjunctions in Knott's list would be included as well. Outside of cue phrases, we have identified imperative forms of "suppose" and "consider" as raising expectations, but currently lack a more systematic procedure for identifying expectation-raising constructions in text than hand-combing text for them.

¹Personal communication, but also see (Knott, 1996).

With respect to how often expectation-raising constructions appear in text, we have Brown Corpus data on two specific types -- imperative "suppose" and adverbial "on the one hand" -- as well as a detailed analysis of the Romanian text by Vianu mentioned earlier.

There are approximately 54K sentences in the Brown Corpus. Of these, 37 contain imperative "suppose" or "let us suppose". Twelve of these correspond to "what if" questions or negotiation moves which do not raise expectations:

Suppose -- just suppose this guy was really what he said he was! A retired professional killer. If he was just a nut, no harm was done. But if he was the real thing, he could do something about Lolly. (cl23)

Alec leaned on the desk, holding the clerk's eyes with his. "Suppose you tell me the real reason", he drawled. "There might be a story in it". (cl21)

The remaining 25 sentences constitute only about 0.05% of the Brown Corpus. Of these, 22 have their expectations satisfied immediately (88%) -- for example,

Suppose John Jones, who, for 1960, filed on the basis of a calendar year, died June 20, 1961. His return for the period January 1 to June 20, 1961, is due April 16, 1962.

One is followed by a single sentence elaborating the original supposition (also flagged by "suppose") --

"Suppose it was not us that killed these aliens. Suppose it is something right on the planet, native to it. I just hope it doesn't work on Earthmen too.
These critters went real sudden". (cmO~)

while the remaining two contain multi-sentence elaborations of the original supposition. None of the examples in the Brown Corpus contains an embedded expectation.

The adverbial "on the one hand" is used to pose a contrast either phrasally --

Both plans also prohibited common directors, officers, or employees between Du Pont, Christiana, and Delaware, on the one hand, and General Motors on the other. (ch16)

You couldn't on the one hand decry the arts and at the same time practice them, could you? (ck08)

or clausally. It is only the latter that are of interest from the point of discourse expectations. The Brown Corpus contains only 7 examples of adverbial "on the one hand". In three cases, the expectation is satisfied immediately by a clause cued by "but" or "or" -- e.g.

On the one hand, the Public Health Service declared as recently as October 26 that present radiation levels resulting from the Soviet shots "do not warrant undue public concern" or any action to limit the intake of radioactive substances by individuals or large population groups anywhere in the U.S. But the PHS conceded that the new radioactive particles "will add to the risk of genetic effects in succeeding generations, and possibly to the risk of health damage to some people in the United States". (cb21)

In the remaining four cases, satisfaction of the expectation (the "target" contrast item) is delayed by 2-3 sentences elaborating the "source" contrast item -- e.g.

Brooklyn College students have an ambivalent attitude toward their school. On the one hand, there is a sense of not having moved beyond the ambiance of their high school. This is particularly acute for those who attended Midwood High School directly across the street from Brooklyn College. They have a sense of marginality at being denied that special badge of status, the out-of-town school. At the same time, there is a good deal of self-congratulation at attending a good college ... (cf25)

In these cases, the target contrast item is cued by "on the other hand" in three cases and "at the same time" in the case given above. Again, none of the examples contains an embedded expectation. (The much smaller email corpus contained six examples of clausal "on the one hand", with the target contrast cued by "on the other hand", "on the other" or "at the other extreme". In one case, there was no explicit target contrast and the expectation raised by "on the one hand" was never satisfied. We will continue to monitor for such examples.)

Before concluding with a close analysis of the Romanian text, we should note that in both the Brown Corpus and the email corpus, clausal adverbial "on the other hand" occurs more frequently without an expectation-raising "on the one hand" than it does with one. (Our attention was called to this by a frequency analysis of potential cue phrase instances in the Brown Corpus compiled for us by Alistair Knott and Andrei Mikheev, HCRC, University of Edinburgh.) We found 53 instances of clausal "on the other hand" occurring without an explicit source contrast cued earlier. Although one can only speculate now on the reason for this phenomenon, it does make a difference to incremental analysis, as we try to show in Section 3.3.

The Romanian text that has been closely analysed for explicit expectation-raising constructions is T. Vianu's Aesthetics. It contains 5160 words and 382 discourse units (primarily clauses).
Counting preposed gerunds as raising expectations as well as counting the constructions noted previously, 39 instances of expectation-raising discourse units were identified (10.2%). In 11 of these cases, 1-16 discourse units intervened before the raised expectation was satisfied. One example follows:

Dar deşi trebuie să-l parcurgem în întregime, pentru a orienta cercetarea este nevoie să încercăm încă de pe acum o precizare a obiectului lui.

(But although we must cover it entirely, in order to guide the research we need to try already an explanation of its subject matter.)

3 A Grammar for Discourse

The intuitive appeal of Tree-adjoining Grammar (TAG) (Joshi, 1987) for discourse processing (Gardent, 1997; Polanyi and van den Berg, 1996; Schilder, 1997; van den Berg, 1996; Webber, 1991) follows from the fact that TAG's adjoining operation allows one to directly analyse the current discourse unit as a sister to previous discourse material that it stands in a particular relation to. The new intuition presented here -- that expectations convey a dependency between the current discourse unit and future discourse material, a dependency that can be "stretched" long-distance by intervening material -- more fully exploits TAG's ability to express dependencies. By expressing in an elementary TAG tree a dependency between the current discourse unit and future discourse material, and using substitution (Schabes, 1990) when the expected material is found, our TAG-based approach to discourse processing allows expectations to be both raised and resolved.

3.1 Categories and Operations

The categories of our TAG-based approach consist of nodes and binary trees. We follow (Gardent, 1997) in associating nodes with feature structures that may hold various sorts of information, including information about the semantic interpretations projected through the nodes, constraints on the specific operations a node may participate in, etc. A non-terminal node represents a discourse relation holding between its two daughter nodes. A terminal node can be either non-empty (Figure 1a), corresponding to a basic discourse unit (usually a clause), or empty. A node is "empty" only in not having an associated discourse unit or relation: it can still have an associated feature structure. Empty nodes play a role in adjoining and substitution, as explained below, and hence in building the derived binary tree that represents the structure of the discourse.

Adjoining adds to the discourse structure an auxiliary tree consisting of a root labelled with a discourse relation, an empty foot node (labelled *), and at least one non-empty node (Figures 1c and 1d). In our approach, the foot node of an auxiliary tree must be its leftmost terminal because all adjoining operations take place on a suitably defined right frontier (i.e., the path from the root of a tree to its rightmost leaf node) -- such that all newly introduced material lies to the right of the adjunction site. (This is discussed in Section 3.2 in more detail.) Adjoining corresponds to identifying a discourse relation between the new material and material in the previous discourse that is still open for elaboration. Figure 2(a) illustrates adjoining midway down the RF of tree α, while Figure 2(b) illustrates adjoining at the root of α's RF. Figure 2(c) shows adjoining at the "degenerate" case of a tree that consists only of its root. Figure 2(d) will be explained shortly.
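As a concrete rendering of these categories, the sketch below is ours and deliberately simplified -- it ignores feature structures and cue daughters. It represents binary discourse trees and implements the right frontier and adjoining as just described: an auxiliary tree with a leftmost foot is spliced in at a node on the RF, so all new material lands to the right of the adjunction site.

    class Node:
        def __init__(self, label=None, left=None, right=None):
            self.label = label          # discourse relation, unit, or None (empty)
            self.left, self.right = left, right

        def is_leaf(self):
            return self.left is None and self.right is None

    def right_frontier(root):
        """Path from the root to the rightmost leaf."""
        path, node = [], root
        while node is not None:
            path.append(node)
            node = node.right
        return path

    def adjoin(tree_root, site, relation, new_unit):
        """Adjoin an auxiliary tree [relation -> (*foot, new_unit)] at `site`,
        which must lie on the right frontier. Returns the (possibly new) root."""
        rf = right_frontier(tree_root)
        assert site in rf, "adjoining is restricted to the right frontier"
        # the foot takes the old subtree; the new unit sits to its right
        aux = Node(relation, left=site, right=Node(new_unit))
        if site is tree_root:
            return aux
        parent = rf[rf.index(site) - 1]  # site was parent's rightmost child
        parent.right = aux
        return tree_root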
Substitution unifies the root of a substitution structure with an empty node in the discourse tree that serves as a substitution site. We currently use two kinds of substitution structures: non-empty nodes (Figure 1a) and elementary trees with substitution sites (Figure 1b). The latter are one way by which a substitution site may be introduced into a tree. As will be argued shortly, substitution sites can only appear on the right of an elementary tree, although any number of them may appear there (Figure 1b). Figure 2(e) illustrates substitution of a non-empty node at ↓, and Figure 2(f) illustrates substitution of an elementary tree with its own substitution site at ↓1.

Since in a clause with two discourse markers (as in Example 3b) one may look backwards ("for example") while the other looks forwards ("suppose"), we also need a way of introducing expectations in the context of adjoining. This we do by allowing an auxiliary tree to contain substitution sites (Figure 1d) which, as above, can only appear on its right.² Another term we use for auxiliary trees is adjoining structures.

²We currently have no linguistic evidence for the structure labelled β in Figure 1d, but are open to its possibility.

Figure 1: Grammatical Categories. (* marks the foot of an auxiliary tree, and ↓ a substitution site.) [Diagrams not reproduced: (a) one-node trees (non-empty nodes); (b) elementary trees with substitution sites; (c) auxiliary trees; (d) auxiliary trees with substitution sites.]

Figure 2: Examples of Adjoining and Substitution. [Diagrams not reproduced: (a) adjoining at Ri+2 on the RF of α; (b) adjoining at the root (R1) of α; (c) adjoining at the root of a single-node tree α; (d) adjoining at R3 on the right frontier of α; (e) substituting a material node at ↓1; (f) substituting an elementary tree with substitution site ↓2.]

3.2 Constraints

Earlier we noted that in a discourse structure with no substitution sites, adjoining is limited to the right frontier (RF). This is true of all existing TAG-based approaches to discourse processing (Gardent, 1997; Hinrichs and Polanyi, 1986; Polanyi and van den Berg, 1996; Schilder, 1997; Webber, 1991), whose structures correspond to trees that lack substitution sites. One reason for this RF restriction is to maintain a strict correspondence between a left-to-right reading of the terminal nodes of a discourse structure and the text it analyses -- i.e.,

Principle of Sequentiality: A left-to-right reading of the terminal frontier of the tree associated with a discourse must correspond to the span of text it analyses in that same left-to-right order.

Formal proof that this principle leads to the restriction of adjoining to the right frontier is given in (Cristea and Webber, June 1997).

The Principle of Sequentiality leads to additional constraints on where adjoining and substitution can occur in trees with substitution sites. Consider the tree in Figure 3(i), which has two such sites, and an adjoining operation on the right frontier at node Rj or above. Figure 3(ii) shows that this would introduce a non-empty node (uk) above and to the right of the substitution sites. This would mean that later substitution at either of them would lead to a violation of the Principle of Sequentiality, since the newly substituted node uk+1 would then appear to the left of uk in the terminal frontier, but to the right of it in the original discourse. Adjoining at any node above Rj+2 -- the left sister of the most deeply embedded substitution site -- leads to the same problem (Figure 3(iii)). Thus in a tree with substitution sites, adjoining must be limited to nodes on the path from the left sister of the most embedded site to that sister's rightmost descendent. But this is just a right frontier (RF) rooted at that left sister. Thus, adjoining is always limited to a RF: the presence of a substitution site just changes what node that RF is rooted at. We can call a RF rooted at the left sister of the most embedded substitution site the inner right frontier or "inner_RF". (In Figure 3(i), the inner_RF is indicated by a dashed arrow.) In contrast, we sometimes call the RF of a tree without substitution sites the outer right frontier or "outer_RF". Figure 2(d) illustrates adjoining on the inner_RF of α, a tree with a substitution site labelled ↓1.

Figure 3: Adjoining is constrained to nodes on the inner_RF, indicated by the dashed arrow. [Diagrams (i)-(iii) not reproduced.]

Another consequence of the Principle of Sequentiality is that the only node at which substitution is allowed in a tree with substitution sites is the most embedded one. Any other substitution would violate the principle. (Formal proofs of these claims are given in (Cristea and Webber, June 1997).)

3.3 Examples

Because we have not yet implemented a parser that embodies the ideas presented so far, we give here an idealized analysis of Examples 2 and 3, to show how an ideal incremental monotonic algorithm that admitted expectations would work.

Figure 4A illustrates the incremental analysis of Example 2. Figure 4A(i) shows the elementary tree corresponding to sentence 2a ("On the one hand ..."): the interpretation of "John is very generous" corresponds to the left daughter labelled "a". The adverbial "On the one hand" is taken as signalling a coherence relation of Contrast with something expected later in the discourse.

In sentence 2b ("On the other hand, suppose ..."), the adverbial "On the other hand" signals the expected contrast item. Because it is already expected, the adverbial does not lead to the creation of a separate elementary tree (but see the next example). The imperative verb "suppose", however, signals a coherence relation of antecedent/consequent (A/C) with a consequence expected later in the discourse. The elementary tree corresponding to "suppose ..." is shown in Figure 4A(ii), with the interpretation of "you need money" corresponding to the left daughter labelled "b". Figure 4A(iii) shows this elementary tree substituted at ↓1, satisfying that expectation. Figure 4A(iv) shows the interpretation of sentence 2c ("You'd see he's very difficult to find") substituted at ↓2, satisfying that remaining expectation.

Before moving on to Example 3, notice that if sentence 2a were not explicitly cued with "On the one hand", the analysis would proceed somewhat differently:

Example 5
a. John is very generous.
b. On the other hand, suppose you needed money.
c. You'd see that he's difficult to find.

Here, the interpretation of sentence 5(a) would correspond to the degenerate case of a tree consisting of a single non-empty node shown in Figure 4B(i). The contrast introduced by "On the other hand" in sentence 5(b) leads to the auxiliary tree shown in Figure 4B(ii), where T stands for the elementary tree corresponding to the interpretation of "suppose ...".
Figure 4: Analyses of Examples 2, 5 and 3. (Panel A: Example 2; panel B: Example 5; panel C: Example 3.)

The entire structure associated with sentence 5(b) is shown in Figure 4B(iii). This is adjoined to the single node tree in Figure 4B(i), yielding the tree shown in Figure 4B(iv). The analysis then continues exactly as in that of Example 2 above.

Moving on to Example 3, Figure 4C(i) shows the same elementary tree as in Figure 4A(i), corresponding to clause 3a. Next, Figure 4C(ii) shows the auxiliary tree with substitution site ↓2 corresponding to clause 3b being adjoined as a sister to the interpretation of clause 3a, as evidence for the claim made there. The right daughter of the node labelled "Evidence" is, as in Example 2b, an elementary tree expecting the consequence of the supposition "you need money". Figure 4C(iii) shows the interpretation of clause 3c substituted at ↓2, satisfying that expectation. Finally, Figure 4C(iv) shows the interpretation of clause 3d substituted at ↓1, satisfying the remaining expectation.

4 Sources of Uncertainty

The idealized analysis presented above could lead to a simple deterministic incremental algorithm, if there were no uncertainty due to local or global ambiguity. But there is. We can identify three separate sources of uncertainty that would affect incremental processing according to the grammar just presented:

• the identity of the discourse relation that is meant to hold between two discourse units;

• the operation (adjoining or substitution) to be used in adding one discourse unit onto another;

• if that operation is adjoining, the site in the target unit at which the operation should take place, that is, the other argument to the discourse relation associated with the root of the auxiliary tree.

It may not be obvious that there could be uncertainty as to whether the current discourse unit satisfies an expectation, and therefore substitutes into the discourse structure, or elaborates something in the previous discourse, and therefore adjoins into it.3 But the evidence clarifying this local ambiguity may not be available until later in the discourse. In the following variation of Example 4, the fact that clause (b) participates in elaborating the interpretation of clause (a) rather than in satisfying the expectation it raises (which it does in Example 4) may not be unambiguously clear until the discourse marker "for example" in clause (c) is processed.

Example 6
a. Because John is such a generous man -
b. whenever he is asked for money,
c. he will give whatever he has, for example -
d. he deserves the "Citizen of the Year" award.

3 This is not the same as shift-reduce uncertainty.

The other point is that, even if a forward-looking cue phrase signals only a substitution structure as in Figures 4A(i) and 4A(ii), if there are no pending substitution sites such as ↓1 in 4A(i) against which to unify such a structure, then the substitution structure must be coerced to an auxiliary tree as in Figure 1d (with some as yet unspecified cohesion relation) in order to adjoin it somewhere in the current discourse structure.
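Continuing the Python sketch from Section 3.2, the attachment choices just described can be enumerated explicitly. This is our own illustration, not part of the grammar: whenever more than one option is returned, the processor faces exactly the local ambiguity discussed in this section.

    def attachment_options(root, unit_is_substitution_structure):
        """Enumerate legal ways of adding the next discourse unit to `root`."""
        options = []
        site = find_deepest_site(root)
        if unit_is_substitution_structure and site is not None:
            _, parent, i = site
            # option 1: satisfy the pending expectation by substitution
            options.append(("substitute", parent.children[i]))
        for node in adjoinable_nodes(root):
            # a substitution structure with no matching site must first be
            # coerced to an auxiliary tree (Figure 1d); the cohesion relation
            # it then carries is left unspecified, as in the text
            op = "coerce+adjoin" if unit_is_substitution_structure else "adjoin"
            options.append((op, node))
        return options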
5 Speculations and Conclusions

In this paper, we have focussed on discourse expectations associated with forward-looking clausal connectives, sentential adverbs, and the imperative verbs "suppose" and "consider". There is clearly more to be done, including a more complete characterization of the phenomenon and the development of an incremental discourse processor based on the ideas presented above. The latter would, we believe, have to be coupled with incremental sentence-level processing. As the previous examples have shown, the same phenomenon that occurs inter-sententially in Examples 1-3 occurs intra-sententially in Examples 4 and 6, suggesting that the two processors may be based on identical principles. In addition, carrying out sentence-level processing in parallel with discourse processing and allowing each to inform the other would allow co-reference interpretation to follow from decisions about discourse relations and vice versa.

6 Acknowledgements

Support for this work has come from the Department of Computer Science, Universiti Sains Malaysia (Penang, Malaysia), the Department of Computer Science, University "A.I. Cuza" (Iasi, Romania), the Advanced Research Projects Agency (ARPA) under grant N66001-94-C-6043, and the Army Research Office (ARO) under grant DAAH04-94-G-0426. Thanks go to both the anonymous reviewers and the following colleagues for their helpful comments: Michael Collins, Claire Gardent, Udo Hahn, Joseph Rosenzweig, Donia Scott, Mark Steedman, Matthew Stone, Michael Strube, and Michael Zock. Thanks also to Alistair Knott and Andrei Mikheev for giving us a rough count of cue phrases in the Brown Corpus.

References

Cristea, Dan and Bonnie Webber. 1997. Expectations in incremental discourse processing. Technical report, University "A.I. Cuza", Iasi, Romania.

Gardent, Claire. 1997. Discourse tree adjoining grammars. CLAUS Report Nr. 89, University of the Saarland, Saarbrücken.

Hinrichs, Erhard and Livia Polanyi. 1986. Pointing the way: A unified treatment of referential gesture in interactive discourse. In CLS 22, Part 2: Papers from the Parasession on Pragmatics and Grammatical Theory, pages 298-314, Chicago Linguistic Society.

Joshi, Aravind. 1987. An introduction to Tree Adjoining Grammar. In Alexis Manaster-Ramer, editor, Mathematics of Language. John Benjamins, Amsterdam.

Knott, Alistair. 1996. A Data-driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, Department of Artificial Intelligence, University of Edinburgh.

Mann, William and Sandra Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.

Marcu, Daniel. 1996. Building up rhetorical structure trees. In Proceedings of AAAI-96, pages 1069-1074, Portland, OR.

Moore, Johanna and Martha Pollack. 1992. A problem for RST: The need for multi-level discourse analysis. Computational Linguistics, 18(4):537-544.

Moser, Megan and Johanna Moore. 1995. Investigating cue selection and placement in tutorial discourse. In Proc. 33rd Annual Meeting, Association for Computational Linguistics, pages 130-135, MIT, Boston, MA.

Moser, Megan and Johanna Moore. 1996. Toward a synthesis of two accounts of discourse structure. Computational Linguistics, 22(2):TBA.

Polanyi, Livia and Martin H. van den Berg. 1996. Discourse structure and discourse interpretation. In P. Dekker and M.
Stokhof, editors, Proceedings of the Tenth Amsterdam Colloquium, pages 113-131, ILLC/Department of Philosophy, University of Amsterdam.

Schabes, Yves. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, Department of Computer and Information Science, University of Pennsylvania. Technical Report MS-CIS-90-48, LINC Lab 179.

Schilder, Frank. 1997. Tree discourse grammar, or how to get attached to a discourse. In Proceedings of the Tilburg Conference on Formal Semantics, Tilburg, Netherlands, January.

van den Berg, Martin H. 1996. Discourse grammar and dynamic logic. In P. Dekker and M. Stokhof, editors, Proceedings of the Tenth Amsterdam Colloquium, pages 93-111, ILLC/Department of Philosophy, University of Amsterdam.

Webber, Bonnie. 1991. Structure and ostension in the interpretation of discourse deixis. Language and Cognitive Processes, 6(2):107-135.
The Rhetorical Parsing of Natural Language Texts

Daniel Marcu
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 3G4
marcu@cs.toronto.edu

Abstract

We derive the rhetorical structures of texts by means of two new, surface-form-based algorithms: one that identifies discourse usages of cue phrases and breaks sentences into clauses, and one that produces valid rhetorical structure trees for unrestricted natural language texts. The algorithms use information that was derived from a corpus analysis of cue phrases.

1 Introduction

Researchers of natural language have repeatedly acknowledged that texts are not just a sequence of words nor even a sequence of clauses and sentences. However, despite the impressive number of discourse-related theories that have been proposed so far, there have emerged no algorithms capable of deriving the discourse structure of an unrestricted text. On one hand, efforts such as those described by Asher (1993), Lascarides, Asher, and Oberlander (1992), Kamp and Reyle (1993), Grover et al. (1994), and Prüst, Scha, and van den Berg (1994) take the position that discourse structures can be built only in conjunction with fully specified clause and sentence structures. And Hobbs's theory (1990) assumes that sophisticated knowledge bases and inference mechanisms are needed for determining the relations between discourse units. Despite the formal elegance of these approaches, they are very domain dependent and, therefore, unable to handle more than a few restricted examples. On the other hand, although the theories described by Grosz and Sidner (1986), Polanyi (1988), and Mann and Thompson (1988) are successfully applied manually, they are too informal to support an automatic approach to discourse analysis.

In contrast with this previous work, the rhetorical parser that we present builds discourse trees for unrestricted texts. We first discuss the key concepts on which our approach relies (section 2) and the corpus analysis (section 3) that provides the empirical data for our rhetorical parsing algorithm. We then discuss an algorithm that recognizes discourse usages of cue phrases and that determines clause boundaries within sentences. Lastly, we present the rhetorical parser and an example of its operation (section 4).

2 Foundation

The mathematical foundations of the rhetorical parsing algorithm rely on a first-order formalization of valid text structures (Marcu, 1997). The assumptions of the formalization are the following. 1. The elementary units of complex text structures are non-overlapping spans of text. 2. Rhetorical, coherence, and cohesive relations hold between textual units of various sizes. 3. Relations can be partitioned into two classes: paratactic and hypotactic. Paratactic relations are those that hold between spans of equal importance. Hypotactic relations are those that hold between a span that is essential for the writer's purpose, i.e., a nucleus, and a span that increases the understanding of the nucleus but is not essential for the writer's purpose, i.e., a satellite. 4. The abstract structure of most texts is a binary, tree-like structure. 5. If a relation holds between two textual spans of the tree structure of a text, that relation also holds between the most important units of the constituent subspans.
The most important units of a textual span are determined recursively: they correspond to the most important units of the immediate subspans when the relation that holds between these subspans is paratactic, and to the most important units of the nucleus subspan when the relation that holds between the immediate subspans is hypotactic.

In our previous work (Marcu, 1996), we presented a complete axiomatization of these principles in the context of Rhetorical Structure Theory (Mann and Thompson, 1988) and we described an algorithm that, starting from the set of textual units that make up a text and the set of elementary rhetorical relations that hold between these units, can derive all the valid discourse trees of that text. Consequently, if one is to build discourse trees for unrestricted texts, the problems that remain to be solved are the automatic determination of the textual units and the rhetorical relations that hold between them. In this paper, we show how one can find and exploit approximate solutions for both of these problems by capitalizing on the occurrences of certain lexicogrammatical constructs. Such constructs can include tense and aspect (Moens and Steedman, 1988; Webber, 1988; Lascarides and Asher, 1993), certain patterns of pronominalization and anaphoric usages (Sidner, 1981; Grosz and Sidner, 1986; Sumita et al., 1992; Grosz, Joshi, and Weinstein, 1995), it-clefts (Delin and Oberlander, 1992), and discourse markers or cue phrases (Ballard, Conrad, and Longacre, 1971; Halliday and Hasan, 1976; Van Dijk, 1979; Longacre, 1983; Grosz and Sidner, 1986; Schiffrin, 1987; Cohen, 1987; Redeker, 1990; Sanders, Spooren, and Noordman, 1992; Hirschberg and Litman, 1993; Knott, 1995; Fraser, 1996; Moser and Moore, 1997).

In the work described here, we investigate how far we can get by focusing our attention only on discourse markers and lexicogrammatical constructs that can be detected by a shallow analysis of natural language texts. The intuition behind our choice relies on the following facts:

• Psycholinguistic and other empirical research (Kintsch, 1977; Schiffrin, 1987; Segal, Duchan, and Scott, 1991; Cahn, 1992; Sanders, Spooren, and Noordman, 1992; Hirschberg and Litman, 1993; Knott, 1995; Costermans and Fayol, 1997) has shown that discourse markers are consistently used by human subjects both as cohesive ties between adjacent clauses and as "macroconnectors" between larger textual units. Therefore, we can use them as rhetorical indicators at any of the following levels: clause, sentence, paragraph, and text.

• The number of discourse markers in a typical text, approximately one marker for every two clauses (Redeker, 1990), is sufficiently large to enable the derivation of rich rhetorical structures for texts.

• Discourse markers are used in a manner that is consistent with the semantics and pragmatics of the discourse segments that they relate. In other words, we assume that the texts that we process are well-formed from a discourse perspective, much as researchers in sentence parsing assume that they are well-formed from a syntactic perspective. As a consequence, we assume that one can bootstrap the full syntactic, semantic, and pragmatic analysis of the clauses that make up a text and still end up with a reliable discourse structure for that text.
Given the above discussion, the immediate objection that one can raise is that discourse markers are doubly ambiguous: in some cases, their use is only sentential, i.e., they make a semantic contribution to the interpretation of a clause; and even in the cases where markers have a discourse usage, they are ambiguous with respect to the rhetorical relations that they mark and the sizes of the textual spans that they connect. We address now each of these objections in turn.

Sentential and discourse usages of cue phrases. Empirical studies on the disambiguation of cue phrases (Hirschberg and Litman, 1993) have shown that just by considering the orthographic environment in which a discourse marker occurs, one can distinguish between sentential and discourse usages in about 80% of cases. We have taken Hirschberg and Litman's research one step further and designed a comprehensive corpus analysis that enabled us to improve their results and coverage. The method, procedure, and results of our corpus analysis are discussed in section 3.

Discourse markers are ambiguous with respect to the rhetorical relations that they mark and the sizes of the units that they connect. When we began this research, no empirical data supported the extent to which this ambiguity characterizes natural language texts. To better understand this problem, the corpus analysis described in section 3 was designed so as to also provide information about the types of rhetorical relations, rhetorical statuses (nucleus or satellite), and sizes of textual spans that each marker can indicate. We knew from the beginning that it would be impossible to predict exactly the types of relations and the sizes of the spans that a given cue marks. However, given that the structure that we are trying to build is highly constrained, such a prediction proved to be unnecessary: the overall constraints on the structure of discourse that we enumerated in the beginning of this section cancel out most of the configurations of elementary constraints that do not yield correct discourse trees.

Consider, for example, the following text:

(1) [Although discourse markers are ambiguous,1] [one can use them to build discourse trees for unrestricted texts:2] [this will lead to many new applications in natural language processing.3]

For the sake of the argument, assume that we are able to break text (1) into textual units as labelled above and that we are interested now in finding rhetorical relations between these units. Assume now that we can infer that Although marks a CONCESSIVE relation between satellite 1 and nucleus either 2 or 3, and the colon an ELABORATION between satellite 3 and nucleus either 1 or 2. If we use the convention that hypotactic relations are represented as first-order predicates having the form rhet_rel(NAME, satellite, nucleus) and that paratactic relations are represented as predicates having the form rhet_rel(NAME, nucleus1, nucleus2), a correct representation for text (1) is then the set of two disjunctions given in (2):

(2) rhet_rel(CONCESSION, 1, 2) ∨ rhet_rel(CONCESSION, 1, 3)
    rhet_rel(ELABORATION, 3, 1) ∨ rhet_rel(ELABORATION, 3, 2)

Despite the ambiguity of the relations, the overall rhetorical structure constraints will associate only one discourse tree with text (1), namely the tree given in Figure 1: any discourse tree configuration that uses relations rhet_rel(CONCESSION, 1, 3) and rhet_rel(ELABORATION, 3, 1) will be ruled out. For example, relation rhet_rel(ELABORATION, 3, 1) will be ruled out because unit 1 is not an important unit for span [1,2] and, as mentioned at the beginning of this section, a rhetorical relation that holds between two spans of a valid text structure must also hold between their most important units: the important unit of span [1,2] is unit 2, i.e., the nucleus of the relation rhet_rel(CONCESSION, 1, 2).

Figure 1: The discourse tree of text (1).
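The pruning effect of this importance-propagation constraint can be simulated in a few lines. The following toy checker is our own illustration (it implements only the nuclearity constraint stated above, not Marcu's full axiomatization): a hypotactic relation is admissible at a node only if its satellite and nucleus belong to the importance sets of the two subspans, and a span's importance set is inherited from the nucleus side.

    def prom(tree, rels):
        """Importance (promotion) set of `tree`; `rels` is the list of
        hypothesized (name, satellite, nucleus) triples, consumed bottom-up,
        left-to-right. Returns None if some relation is inadmissible."""
        if isinstance(tree, int):
            return {tree}
        left, right = tree
        pl, pr = prom(left, rels), prom(right, rels)
        if pl is None or pr is None:
            return None
        name, sat, nuc = rels.pop(0)
        if sat in pl and nuc in pr:
            return pr                  # importance propagates from the nucleus
        if sat in pr and nuc in pl:
            return pl
        return None

    # For the tree shape of Figure 1, ((1, 2), 3), only one combination of
    # the disjuncts in (2) survives:
    shape = ((1, 2), 3)
    assert prom(shape, [("CONCESSION", 1, 2), ("ELABORATION", 3, 2)]) == {2}
    assert prom(shape, [("CONCESSION", 1, 2), ("ELABORATION", 3, 1)]) is None
    assert prom(shape, [("CONCESSION", 1, 3), ("ELABORATION", 3, 2)]) is None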
3 A corpus analysis of discourse markers

3.1 Materials

We used previous work on cue phrases (Halliday and Hasan, 1976; Grosz and Sidner, 1986; Martin, 1992; Hirschberg and Litman, 1993; Knott, 1995; Fraser, 1996) to create an initial set of more than 450 potential discourse markers. For each potential discourse marker, we then used an automatic procedure that extracted from the Brown corpus a set of text fragments. Each text fragment contained a "window" of approximately 200 words and an emphasized occurrence of a marker. On average, we randomly selected approximately 19 text fragments per marker, having few texts for the markers that do not occur very often in the corpus and up to 60 text fragments for markers such as and, which we considered to be highly ambiguous. Overall, we randomly selected more than 7900 texts.

All the text fragments associated with a potential cue phrase were paired with a set of slots in which an analyst described the following. 1. The orthographic environment that characterizes the usage of the potential discourse marker. This included occurrences of periods, commas, colons, semicolons, etc. 2. The type of usage: Sentential, Discourse, or Both. 3. The position of the marker in the textual unit to which it belonged: Beginning, Medial, or End. 4. The right boundary of the textual unit associated with the marker. 5. The relative position of the textual unit that the unit containing the marker was connected to: Before or After. 6. The rhetorical relations that the cue phrase signaled. 7. The textual types of the units connected by the discourse marker: from Clause to Multiple_Paragraph. 8. The rhetorical status of each textual unit involved in the relation: Nucleus or Satellite. The algorithms described in this paper rely on the results derived from the analysis of 1600 of the 7900 text fragments.

3.2 Procedure

After the slots for each text fragment were filled, the results were automatically exported into a relational database. The database was then examined semi-automatically with the purpose of deriving procedures that a shallow analyzer could use to identify discourse usages of cue phrases, break sentences into clauses, and hypothesize rhetorical relations between textual units. For each discourse usage of a cue phrase, we derived the following:

• A regular expression that contains an unambiguous cue phrase instantiation and its orthographic environment. A cue phrase is assigned a regular expression if, in the corpus, it has a discourse usage in most of its occurrences and if a shallow analyzer can detect it and the boundaries of the textual units that it connects. For example, the regular expression "[,] although" identifies such a discourse usage.

• A procedure that can be used by a shallow analyzer to determine the boundaries of the textual unit to which the cue phrase belongs.
For example, the procedure associated with "[,] although" instructs the analyzer that the textual unit that pertains to this cue phrase starts at the marker and ends at the end of the sentence or at a position to be determined by the procedure associated with the subsequent discourse marker that occurs in that sentence.

• A procedure that can be used by a shallow analyzer to hypothesize the sizes of the textual units that the cue phrase relates and the rhetorical relations that may hold between these units. For example, the procedure associated with "[,] although" will hypothesize that there exists a CONCESSION between the clause to which it belongs and the clause(s) that went before in the same sentence. For most markers this procedure makes disjunctive hypotheses of the kind shown in (2) above.

3.3 Results

At the time of writing, we have identified 1253 occurrences of cue phrases that exhibit discourse usages and associated with each of them procedures that instruct a shallow analyzer how the surrounding text should be broken into textual units. This information is used by an algorithm that concurrently identifies discourse usages of cue phrases and determines the clauses that a text is made of. The algorithm examines a text sentence by sentence and determines a set of potential discourse markers that occur in each sentence. It then applies left to right the procedures that are associated with each potential marker. These procedures have the following possible effects:

• They can cause an immediate breaking of the current sentence into clauses (a sketch of this splitting step follows the list). For example, when an "[,] although" marker is found, a new clause, whose right boundary is just before the occurrence of the marker, is created. The algorithm is then recursively applied on the text that is found between the occurrence of "[,] although" and the end of the sentence.

• They can cause the setting of a flag. For example, when an "Although" marker is found, a flag is set to instruct the analyzer to break the current sentence at the first occurrence of a comma.

• They can cause a cue phrase to be identified as having a discourse usage. For example, when the cue phrase "Although" is identified, it is also assigned a discourse usage. The decision of whether a cue phrase is considered to have a discourse usage is sometimes based on the context in which that phrase occurs, i.e., it depends on the occurrence of other cue phrases. For example, an "and" will not be assigned a discourse usage in most of the cases; however, when it occurs in conjunction with "although", i.e., "and although", it will be assigned such a role.
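Here is a minimal Python sketch of this marker-driven clause splitting. The two patterns and their effects below merely illustrate the mechanism; the actual system applies the 1253 corpus-derived procedures, which are not reproduced here.

    import re

    CUE_PROCEDURES = [
        # (pattern, effect): 'break' splits immediately before the marker and
        # recurses on the remainder; 'flag' defers the split to the next comma.
        (re.compile(r", although\b"), "break"),
        (re.compile(r"^Although\b"), "flag"),
    ]

    def split_clauses(sentence):
        for pattern, effect in CUE_PROCEDURES:
            match = pattern.search(sentence)
            if not match:
                continue
            if effect == "break":
                left = sentence[:match.start()].strip()
                rest = sentence[match.start():].lstrip(", ")
                return [left] + split_clauses(rest)
            if effect == "flag":
                comma = sentence.find(",")
                if comma >= 0:
                    return ([sentence[:comma].strip()]
                            + split_clauses(sentence[comma + 1:].strip()))
        return [sentence.strip()]

    # split_clauses("He stayed calm, although the deadline was close.")
    # -> ['He stayed calm', 'although the deadline was close.']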
The most important criterion for using a cue phrase in the marker identification procedure is that the cue phrase (together with its orthographic neighborhood) is used as a discourse marker in at least 90% of the examples that were extracted from the corpus. The enforcement of this criterion reduces, on the one hand, the recall of the discourse markers that can be detected, but, on the other hand, increases significantly the precision. We chose this deliberately because, during the corpus analysis, we noticed that most of the markers that connect large textual units can be identified by a shallow analyzer. In fact, the discourse marker that is responsible for most of our algorithm's recall failures is and. Since a shallow analyzer cannot identify with sufficient precision whether an occurrence of and has a discourse or a sentential usage, most of its occurrences are ignored. It is true that, in this way, the discourse structures that we build lose some potential finer granularity, but fortunately, from a rhetorical analysis perspective, the loss has insignificant global repercussions: the vast majority of the relations that we miss due to recall failures of and are JOINT and SEQUENCE relations that hold between adjacent clauses.

Evaluation. To evaluate our algorithm, we randomly selected three texts, each belonging to a different genre: 1. an expository text of 5036 words from Scientific American; 2. a magazine article of 1588 words from Time; 3. a narration of 583 words from the Brown Corpus. Three independent judges, graduate students in computational linguistics, broke the texts into clauses. The judges were given no instructions about the criteria that they had to apply in order to determine the clause boundaries; rather, they were supposed to rely on their intuition and preferred definition of clause. The locations in texts that were labelled as clause boundaries by at least two of the three judges were considered to be "valid clause boundaries". We used the valid clause boundaries assigned by judges as indicators of discourse usages of cue phrases and we determined manually the cue phrases that signalled a discourse relation. For example, if an "and" was used in a sentence and if the judges agreed that a clause boundary existed just before the "and", we assigned that "and" a discourse usage. Otherwise, we assigned it a sentential usage. Hence, we manually determined all discourse usages of cue phrases and all discourse boundaries between elementary units.

We then applied our marker and clause identification algorithm on the same texts. Our algorithm found 80.8% of the discourse markers with a precision of 89.5% (see Table 1), a result that outperforms Hirschberg and Litman's (1993). The same algorithm identified correctly 81.3% of the clause boundaries, with a precision of 90.3% (see Table 2). We are not aware of any surface-form-based algorithms that achieve similar results.

Table 1: Evaluation of the marker identification procedure.
Text    No. of      Markers      Markers       Markers     Recall   Precision
        sentences   (manually)   (algorithm)   (correct)
1       242         174          169           150         86.2%    88.8%
2       80          63           55            49          77.8%    89.1%
3       19          38           24            23          63.2%    95.6%
Total   341         275          248           222         80.8%    89.5%

Table 2: Evaluation of the clause boundary identification procedure.
Text    Boundaries   Boundaries    Boundaries   Recall   Precision
        (manually)   (algorithm)   (correct)
1       428          416           371          86.7%    89.2%
2       151          123           113          74.8%    91.8%
3       61           37            36           59.0%    97.3%
Total   640          576           520          81.3%    90.3%
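Recall and precision here follow the usual definitions; as a quick check against the totals of Table 1:

    def recall_precision(correct, manual, found):
        """correct: items the algorithm got right; manual: gold-standard items;
        found: items the algorithm proposed."""
        return correct / manual, correct / found

    # Totals of Table 1: 222 markers correct, 275 in the gold standard,
    # 248 proposed by the algorithm.
    r, p = recall_precision(222, 275, 248)
    # r = 0.8072..., p = 0.8951...  (the 80.8% / 89.5% reported above)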
4 Building up discourse trees

4.1 The rhetorical parsing algorithm

The rhetorical parsing algorithm is outlined in Figure 2.

INPUT: a text T.
1. Determine the set D of all discourse markers and the set U_T of elementary textual units in T.
2. Hypothesize a set of relations R between the elements of U_T.
3. Use a constraint satisfaction procedure to determine all the discourse trees of T.
4. Assign a weight to each of the discourse trees and determine the tree(s) with maximal weight.

Figure 2: Outline of the rhetorical parsing algorithm

In the first step, the marker and clause identification algorithm is applied. Once the textual units are determined, the rhetorical parser uses the procedures derived from the corpus analysis to hypothesize rhetorical relations between the textual units. A constraint-satisfaction procedure similar to that described in (Marcu, 1996) then determines all the valid discourse trees (see (Marcu, 1997) for details). The rhetorical parsing algorithm has been fully implemented in C++.

Discourse is ambiguous the same way sentences are: more than one discourse structure is usually produced for a text. In our experiments, we noticed, at least for English, that the "best" discourse trees are usually those that are skewed to the right. We believe that the explanation of this observation is that text processing is, essentially, a left-to-right process. Usually, people write texts so that the most important ideas go first, both at the paragraph and at the text level.1 The more text writers add, the more they elaborate on the text that went before: as a consequence, incremental discourse building consists mostly of expansion of the right branches. In order to deal with the ambiguity of discourse, the rhetorical parser computes a weight for each valid discourse tree and retains only those that are maximal. The weight function reflects how skewed to the right a tree is (see the sketch below).

1 In fact, journalists are trained to employ this "pyramid" approach to writing consciously (Cumming and McKercher, 1994).

4.2 The rhetorical parser in operation

Consider the following text from the November 1996 issue of Scientific American (3). The words in italics denote the discourse markers, the square brackets denote the boundaries of elementary textual units, and the curly brackets denote the boundaries of parenthetical textual units that were determined by the rhetorical parser (see Marcu (1997) for details); the numbers associated with the square brackets are identification labels.

(3) [With its distant orbit {-- 50 percent farther from the sun than Earth --} and slim atmospheric blanket,1] [Mars experiences frigid weather conditions.2] [Surface temperatures typically average about -60 degrees Celsius (-76 degrees Fahrenheit) at the equator and can dip to -123 degrees C near the poles.3] [Only the midday sun at tropical latitudes is warm enough to thaw ice on occasion,4] [but any liquid water formed in this way would evaporate almost instantly5] [because of the low atmospheric pressure.6] [Although the atmosphere holds a small amount of water, and water-ice clouds sometimes develop,7] [most Martian weather involves blowing dust or carbon dioxide.8] [Each winter, for example, a blizzard of frozen carbon dioxide rages over one pole, and a few meters of this dry-ice snow accumulate as previously frozen carbon dioxide evaporates from the opposite polar cap.9] [Yet even on the summer pole, {where the sun remains in the sky all day long,} temperatures never warm enough to melt frozen water.10]

Since parenthetical information is related only to the elementary unit that it belongs to, we do not assign it an elementary textual unit status. Such an assignment would only create problems at the formal level as well, because then discourse structures could no longer be represented as binary trees.
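The paper does not spell out the weight function, so the following right-skewness score is purely our own stand-in: at every internal node it rewards a right subtree that is deeper than the left one.

    class DTree:
        """A binary discourse tree; leaves (elementary units) have no children."""
        def __init__(self, left=None, right=None):
            self.left, self.right = left, right

    def depth(t):
        if t is None or (t.left is None and t.right is None):
            return 0
        return 1 + max(depth(t.left), depth(t.right))

    def weight(t):
        """Larger for right-skewed trees: a fully left-branching tree scores
        negatively, a fully right-branching one positively."""
        if t is None or (t.left is None and t.right is None):
            return 0
        return (depth(t.right) - depth(t.left)) + weight(t.left) + weight(t.right)

    # right-branching (1 (2 3)) beats left-branching ((1 2) 3):
    leaf = DTree
    assert weight(DTree(leaf(), DTree(leaf(), leaf()))) == 1
    assert weight(DTree(DTree(leaf(), leaf()), leaf())) == -1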
On the basis of the data derived from the corpus analysis, the algorithm hypothesizes the following set of relations between the textual units:

(4) rhet_rel(JUSTIFICATION, 1, 2) ∨ rhet_rel(CONDITION, 1, 2)
    rhet_rel(ELABORATION, 3, [1,2]) ∨ rhet_rel(ELABORATION, [3,6], [1,2])
    rhet_rel(ELABORATION, [4,6], 3) ∨ rhet_rel(ELABORATION, [4,6], [1,3])
    rhet_rel(CONTRAST, 4, 5)
    rhet_rel(EVIDENCE, 6, 5)
    rhet_rel(ELABORATION, [7,10], [1,6])
    rhet_rel(CONCESSION, 7, 8)
    rhet_rel(EXAMPLE, 9, [7,8]) ∨ rhet_rel(EXAMPLE, [9,10], [7,8])
    rhet_rel(ANTITHESIS, 9, 10) ∨ rhet_rel(ANTITHESIS, [7,9], 10)

The algorithm then determines all the valid discourse trees that can be built for elementary units 1 to 10, given the constraints in (4). In this case, the algorithm constructs 8 different trees. The trees are ordered according to their weights. The "best" tree for text (3) has weight 3 and is fully represented in Figure 3. The PostScript file corresponding to Figure 3 was automatically generated by a back-end algorithm that uses "dot", a preprocessor for drawing directed graphs. The convention that we use is that nuclei are surrounded by solid boxes and satellites by dotted boxes; the links between a node and the subordinate nucleus or nuclei are represented by solid arrows, and the links between a node and the subordinate satellites by dotted lines. The occurrences of parenthetical information are marked in the text by a -P- and a unique subordinate satellite that contains the parenthetical information.

Figure 3: The discourse tree of maximal weight that can be associated with text (3).

4.3 Discussion and evaluation

We believe that there are two ways to evaluate the correctness of the discourse trees that an automatic process builds. One way is to compare the automatically derived trees with trees that have been built manually. Another way is to evaluate the impact that the discourse trees that we derive automatically have on the accuracy of other natural language processing tasks, such as anaphora resolution, intention recognition, or text summarization.
In this paper, we describe evaluations that follow both these avenues.

Unfortunately, the linguistic community has not yet built a corpus of discourse trees against which our rhetorical parser can be evaluated as rigorously as traditional parsers are. To circumvent this problem, two analysts manually built the discourse trees for five texts that ranged from 161 to 725 words. Although there were some differences with respect to the names of the relations that the analysts used, the agreement with respect to the status assigned to various units (nuclei and satellites) and the overall shapes of the trees was significant.

In order to measure this agreement we associated an importance score to each textual unit in a tree and computed the Spearman correlation coefficients between the importance scores derived from the discourse trees built by each analyst.2 The Spearman correlation coefficient between the ranks assigned to each textual unit on the basis of the discourse trees built by the two analysts was very high: 0.798, at the p < 0.0001 level of significance. The differences between the two analysts came mainly from their interpretations of two of the texts: the discourse trees of one analyst mirrored the paragraph structure of the texts, while the discourse trees of the other mirrored a logical organization of the text, which that analyst believed to be important.

The Spearman correlation coefficients with respect to the importance of textual units between the discourse trees built by our program and those built by each analyst were 0.480, p < 0.0001 and 0.449, p < 0.0001. These lower correlation values were due to the differences in the overall shape of the trees and to the fact that the granularity of the discourse trees built by the program was not as fine as that of the trees built by the analysts.

Besides directly comparing the trees built by the program with those built by analysts, we also evaluated the impact that our trees could have on the task of summarizing text. A summarization program that uses the rhetorical parser described here recalled 66% of the sentences considered important by 13 judges in the same five texts, with a precision of 68%. In contrast, a random procedure recalled, on average, only 38.4% of the sentences considered important by the judges, with a precision of 38.4%. And the Microsoft Office 97 summarizer recalled 41% of the important sentences with a precision of 39%. We discuss at length the experiments from which the data presented above was derived in (Marcu, 1997).

The rhetorical parser presented in this paper uses only the structural constraints that were enumerated in section 2. Co-relational constraints, focus, theme, anaphoric links, and other syntactic, semantic, and pragmatic factors do not yet play a role in our system, but we nevertheless expect them to reduce the number of valid discourse trees that can be associated with a text. We also expect that other robust methods for determining coherence relations between textual units, such as those described by Harabagiu and Moldovan (1995), will improve the accuracy of the routines that hypothesize the rhetorical relations that hold between adjacent units.

2 The Spearman rank correlation coefficient is an alternative to the usual correlation coefficient. It is based on the ranks of the data, and not on the data itself, and so is resistant to outliers. The null hypothesis tested by Spearman is that two variables are independent of each other, against the alternative hypothesis that the rank of a variable is correlated with the rank of another variable. The value of the statistic ranges from -1, indicating that high ranks of one variable occur with low ranks of the other variable, through 0, indicating no correlation between the variables, to +1, indicating that high ranks of one variable occur with high ranks of the other variable.
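This kind of rank agreement is straightforward to reproduce, e.g., with SciPy; the score vectors below are placeholders, not the data from the study:

    from scipy.stats import spearmanr

    # importance scores per textual unit, one vector per analyst (illustrative)
    analyst_a = [5, 4, 4, 2, 1, 3]
    analyst_b = [5, 3, 4, 2, 1, 2]
    rho, p_value = spearmanr(analyst_a, analyst_b)
    # rho near +1 indicates that the two analysts rank the units similarly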
We are not aware of the existence of any other rhetorical parser for English. However, Sumita et al. (1992) report on a discourse analyzer for Japanese. Even if one ignores some computational "bonuses" that can be easily exploited by a Japanese discourse analyzer (such as co-reference and topic identification), there are still some key differences between Sumita's work and ours. Particularly important is the fact that the theoretical foundations of Sumita et al.'s analyzer do not seem to be able to accommodate the ambiguity of discourse markers: in their system, discourse markers are considered unambiguous with respect to the relations that they signal. In contrast, our system uses a mathematical model in which this ambiguity is acknowledged and appropriately treated. Also, the discourse trees that we build are very constrained structures (see section 2): as a consequence, we do not overgenerate invalid trees as Sumita et al. do. Furthermore, we use only surface-based methods for determining the markers and textual units and use clauses as the minimal units of the discourse trees. In contrast, Sumita et al. use deep syntactic and semantic processing techniques for determining the markers and the textual units and use sentences as minimal units in the discourse structures that they build. A detailed comparison of our work with Sumita et al.'s and others' work is given in (Marcu, 1997).

5 Conclusion

We introduced the notion of rhetorical parsing, i.e., the process through which natural language texts are automatically mapped into discourse trees. In order to make rhetorical parsing work, we improved previous algorithms for cue phrase disambiguation, and proposed new algorithms for determining the elementary textual units and for computing the valid discourse trees of a text. The solution that we described is both general and robust.

Acknowledgements. This research would have not been possible without the help of Graeme Hirst; there are no right words to thank him for it. I am grateful to Melanie Baljko, Phil Edmonds, and Steve Green for their help with the corpus analysis. This research was supported by the Natural Sciences and Engineering Research Council of Canada.

References

Asher, Nicholas. 1993. Reference to Abstract Objects in Discourse. Kluwer Academic Publishers, Dordrecht.

Ballard, D. Lee, Robert Conrad, and Robert E. Longacre. 1971. The deep and surface grammar of interclausal relations. Foundations of Language, 4:70-118.

Cahn, Janet. 1992. An investigation into the correlation of cue phrases, unfilled pauses and the structuring of spoken discourse. In Proceedings of the IRCS Workshop on Prosody in Natural Speech, pages 19-30.

Cohen, Robin. 1987. Analyzing the structure of argumentative discourse. Computational Linguistics, 13(1-2):11-24, January-June.

Costermans, Jean and Michel Fayol. 1997. Processing Interclausal Relationships. Studies in the Production and Comprehension of Text. Lawrence Erlbaum Associates, Publishers.

Cumming, Carmen and Catherine McKercher. 1994. The Canadian Reporter: News Writing and Reporting. Harcourt Brace.
Delin, Judy L. and Jon Oberlander. 1992. Aspect-switching and subordination: the role of it-clefts in discourse. In Proceedings of the Fourteenth International Conference on Computational Linguistics (COLING-92), pages 281-287, Nantes, France, August 23-28.

Fraser, Bruce. 1996. Pragmatic markers. Pragmatics, 6(2):167-190.

Grosz, Barbara J., Aravind K. Joshi, and Scott Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-226, June.

Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204, July-September.

Grover, Claire, Chris Brew, Suresh Manandhar, and Marc Moens. 1994. Priority union and generalization in discourse grammars. In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL-94), pages 17-24, Las Cruces, June 27-30.

Halliday, Michael A.K. and Ruqaiya Hasan. 1976. Cohesion in English. Longman.

Harabagiu, Sanda M. and Dan I. Moldovan. 1995. A marker-propagation algorithm for text coherence. In Working Notes of the Workshop on Parallel Processing in Artificial Intelligence, pages 76-86, Montreal, Canada, August.

Hirschberg, Julia and Diane Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530.

Hobbs, Jerry R. 1990. Literature and Cognition. CSLI Lecture Notes Number 21.

Kamp, Hans and Uwe Reyle. 1993. From Discourse to Logic: Introduction to Model-theoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Kluwer Academic Publishers, London, Boston, Dordrecht. Studies in Linguistics and Philosophy, Volume 42.

Kintsch, Walter. 1977. On comprehending stories. In Marcel Just and Patricia Carpenter, editors, Cognitive Processes in Comprehension. Erlbaum, Hillsdale, New Jersey.

Knott, Alistair. 1995. A Data-Driven Methodology for Motivating a Set of Coherence Relations. Ph.D. thesis, University of Edinburgh.

Lascarides, Alex and Nicholas Asher. 1993. Temporal interpretation, discourse relations, and common sense entailment. Linguistics and Philosophy, 16(5):437-493.

Lascarides, Alex, Nicholas Asher, and Jon Oberlander. 1992. Inferring discourse relations in context. In Proceedings of the 30th Annual Meeting of the Association for Computational Linguistics (ACL-92), pages 1-8.

Longacre, Robert E. 1983. The Grammar of Discourse. Plenum Press, New York.

Mann, William C. and Sandra A. Thompson. 1988. Rhetorical structure theory: Toward a functional theory of text organization. Text, 8(3):243-281.

Marcu, Daniel. 1996. Building up rhetorical structure trees. In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), volume 2, pages 1069-1074, Portland, Oregon, August 4-8.

Marcu, Daniel. 1997. The rhetorical parsing, summarization, and generation of natural language texts. Ph.D. thesis, Department of Computer Science, University of Toronto. Forthcoming.

Martin, James R. 1992. English Text. System and Structure. John Benjamins Publishing Company, Philadelphia/Amsterdam.

Moens, Marc and Mark Steedman. 1988. Temporal ontology and temporal reference. Computational Linguistics, 14(2):15-28.

Moser, Megan and Johanna D. Moore. 1997. On the correlation of cues with discourse structure: Results from a corpus study. Submitted for publication.

Polanyi, Livia. 1988. A formal model of the structure of discourse. Journal of Pragmatics, 12:601-638.
Prüst, H., R. Scha, and M. van den Berg. 1994. Discourse grammar and verb phrase anaphora. Linguistics and Philosophy, 17(3):261-327, June.

Redeker, Gisela. 1990. Ideational and pragmatic markers of discourse structure. Journal of Pragmatics, 14:367-381.

Sanders, Ted J.M., Wilbert P.M. Spooren, and Leo G.M. Noordman. 1992. Toward a taxonomy of coherence relations. Discourse Processes, 15:1-35.

Schiffrin, Deborah. 1987. Discourse Markers. Cambridge University Press.

Segal, Erwin M., Judith F. Duchan, and Paula J. Scott. 1991. The role of interclausal connectives in narrative structuring: Evidence from adults' interpretations of simple stories. Discourse Processes, 14:27-54.

Sidner, Candace L. 1981. Focusing for interpretation of pronouns. Computational Linguistics, 7(4):217-231, October-December.

Sumita, K., K. Ono, T. Chino, T. Ukita, and S. Amano. 1992. A discourse structure analyzer for Japanese text. In Proceedings of the International Conference on Fifth Generation Computer Systems, volume 2, pages 1133-1140.

Van Dijk, Teun A. 1979. Pragmatic connectives. Journal of Pragmatics, 3:447-456.

Webber, Bonnie L. 1988. Tense as discourse anaphor. Computational Linguistics, 14(2):61-72, June.
Centering in-the-Large: Computing Referential Discourse Segments

Udo Hahn & Michael Strube
Computational Linguistics Research Group
Freiburg University, Werthmannplatz 1
D-79085 Freiburg, Germany
http://www.coling.uni-freiburg.de/

Abstract

We specify an algorithm that builds up a hierarchy of referential discourse segments from local centering data. The spatial extension and nesting of these discourse segments constrain the reachability of potential antecedents of an anaphoric expression beyond the local level of adjacent center pairs. Thus, the centering model is scaled up to the level of the global referential structure of discourse. An empirical evaluation of the algorithm is supplied.

1 Introduction

The centering model (Grosz et al., 1995) has evolved as a major methodology for the description and explanation of the local coherence of discourse in computational discourse analysis. It provides simple, yet powerful data structures, constraints and rules for the local coherence of discourse. As far as anaphora resolution is concerned, e.g., the model requires one to consider as potential antecedents for anaphoric expressions in the current utterance Ui only those discourse entities which are available in the forward-looking centers of the immediately preceding utterance Ui-1. No constraints or rules are formulated, however, that account for anaphoric relationships which spread out over non-adjacent utterances. Hence, it is unclear how discourse elements which appear in utterances preceding utterance Ui-1 are taken into consideration as potential antecedents for anaphoric expressions in Ui.

The extension of the search space for antecedents is by no means a trivial enterprise. A simple linear backward search of all preceding centering structures, e.g., would not only risk establishing illegal references but would also contradict the cognitive principles underlying the limited attention constraint (Walker, 1996b). The solution we propose starts from the observation that additional constraints on valid antecedents are placed by the global discourse structure in which previous utterances are embedded. We want to emphasize from the beginning that our proposal considers only the referential properties underlying the global discourse structure. Accordingly, we define the extension of referential discourse segments (over several utterances) and a hierarchy of referential discourse segments (structuring the entire discourse).1 The algorithmic procedure we propose for creating and managing such segments receives local centering data as input and generates a sort of superimposed index structure by which the reachability of potential antecedents, in particular those prior to the immediately preceding utterance, is made explicit. The adequacy of this definition is judged by the effects centered discourse segmentation has on the validity of anaphora resolution (cf. Section 5 for a discussion of evaluation results).

1 Our notion of referential discourse segment should not be confounded with the intentional one originating from Grosz & Sidner (1986), for reasons discussed in Section 2.

2 Global Discourse Structure

There have been only a few attempts at dealing with the recognition and incorporation of discourse structure beyond the level of immediately adjacent utterances within the centering framework. Two recent studies deal with this topic in order to relate attentional and intentional structures on a larger scale of global discourse coherence. Passonneau (1996) proposes an algorithm for the generation of referring expressions and Walker (1996a) integrates centering into a cache model of attentional state.
Both studies, among other things, examine whether a correlation exists between particular centering transitions (which were first introduced by Brennan et al. (1987); cf. Table 1) and intention-based discourse segments. In particular, the role of SHIFT-type transitions is examined from the perspective of whether they not only indicate a shift of the topic between two immediately successive utterances but also signal (intention-based) segment boundaries. The data in both studies reveal that only a weak correlation between the SHIFT transitions and segment boundaries can be observed. This finding precludes a reliable prediction of segment boundaries based on the occurrence of SHIFTs and vice versa. In order to accommodate these empirical results, divergent solutions are proposed. Passonneau suggests that the centering data structures need to be modified appropriately, while Walker concludes that the local centering data should be left as they are and further be complemented by a cache mechanism. She thus intends to extend the scope of centering in accordance with cognitively plausible limits of the attentional span. Walker, finally, claims that the content of the cache, rather than the intentional discourse segment structure, determines the accessibility of discourse entities for anaphora resolution.

                     Cb(Un) = Cb(Un-1)         Cb(Un) ≠ Cb(Un-1)
                     or Cb(Un-1) undefined
Cb(Un) = Cp(Un)      CONTINUE (C)              SMOOTH-SHIFT (SS)
Cb(Un) ≠ Cp(Un)      RETAIN (R)                ROUGH-SHIFT (RS)

Table 1: Transition Types

As a working hypothesis, for the purposes of anaphora resolution we subscribe to Walker's model, in particular to that part which casts doubt on the hypothesized dependency of the attentional on the intentional structure of discourse (Grosz & Sidner, 1986, p. 180). We diverge from Walker (1996a), however, in that we propose an alternative to the caching mechanism, one which we consider to be methodologically more parsimonious and, at least, equally effective (for an elaboration of this claim, cf. Section 6).

The proposed extension of the centering model builds on the methodological framework of functional centering (Strube & Hahn, 1996). This is an approach to centering in which issues such as thematicity or topicality are already inherent. Its linguistic foundations relate the ranking of the forward-looking centers and the functional information structure of the utterances, a notion originally developed by Daneš (1974). Strube & Hahn (1996) use the centering data structures to redefine Daneš's trichotomy between given information, theme and rheme in terms of the centering model. The Cb(Un), the most highly ranked element of Cf(Un-1) realized in Un, corresponds to the element which represents the given information. The theme of Un is represented by the preferred center Cp(Un), the most highly ranked element of Cf(Un). The theme/rheme hierarchy of Un corresponds to the ranking in the Cf s. As a consequence, utterances without any anaphoric expression do not have any given elements and, therefore, no Cb. But independent of the use of anaphoric expressions, each utterance must have a theme and a Cf as well. The identification of the preferred center with the theme implies that it is of major relevance for determining the thematic progression of a text.
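Table 1 translates directly into code; a minimal sketch (centers are represented simply as comparable values, with None for an undefined Cb):

    def transition(cb_cur, cp_cur, cb_prev):
        """Classify the transition into Un according to Table 1."""
        same_cb = cb_prev is None or cb_cur == cb_prev   # undefined Cb(Un-1) counts as equal
        if cb_cur == cp_cur:
            return "CONTINUE" if same_cb else "SMOOTH-SHIFT"
        return "RETAIN" if same_cb else "ROUGH-SHIFT"

    # transition("manual", "manual", "manual") -> 'CONTINUE'
    # transition("chapter", "contents", "manual") -> 'ROUGH-SHIFT'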
This is reflected in our reformulation of the two types of thematic progression (TP) which can be directly derived from centering data (the third one requires reference to conceptual generalization hierarchies and is therefore beyond the scope of this paper; cf. Daneš (1974) for the original statement):

1. TP with a constant theme: Successive utterances continuously share the same Cp.

2. TP with linear thematization of rhemes: An element of the Cf(Ui-1) which is not the Cp(Ui-1) appears in Ui and becomes the Cp(Ui) after the processing of this utterance.

Constant theme:                   Cf(Ui-1): [c1, ..., cj, ..., cs]
                                  Cf(Ui):   [c1, ..., ck, ..., ct]

Linear thematization of rhemes:   Cf(Ui-1): [c1, ..., cj, ..., cs]   1 < j ≤ s
                                  Cf(Ui):   [cj, ..., ck, ..., ct]

Table 2: Thematic Progression Patterns

Table 2 visualizes the abstract schemata of TP patterns. In our example (cf. Table 8 in Section 4), U1 to U3 illustrate the constant theme, while U7 to U10 illustrate the linear thematization of rhemes. In the latter case, the theme changes in each utterance, from "Handbuch" (manual) via "Inhaltsverzeichnis" (table of contents) to "Kapitel" (chapter), etc. Each of the new themes is introduced in the immediately preceding utterance, so that local coherence between these utterances is established.

Daneš (1974) also allows for the combination and recursion of these basic patterns; this way the global thematic coherence of a text can be described by recurrence to these structural patterns. These principles allow for a major extension of the original centering algorithm. Given a reformulation of the TP constraints in centering terms, it is possible to determine referential segment boundaries and to arrange these segments in a nested, i.e., hierarchical, manner on the basis of which reachability constraints for antecedents can be formulated. According to the segmentation strategy of our approach, the Cp of the end point (i.e., the last utterance) of a discourse segment provides the major theme of the whole segment, one which is particularly salient for anaphoric reference relations. Whenever a relevant new theme is established, however, it should reside in its own discourse segment, either embedded or in parallel to another one. Anaphora resolution can then be performed (a) with the forward-looking centers of the linearly immediately preceding utterance, (b) with the forward-looking centers of the end point of the hierarchically immediately reachable discourse segment, and (c) with the preferred center of the end point of any hierarchically reachable discourse segment (for a formalization of this constraint, cf. Table 4).

3 Computing Global Discourse Structure

Prior to a discussion of the algorithmic procedure for hypothesizing discourse segments based on evidence from local centering data, we will introduce its basic building blocks. Let x denote the anaphoric expression under consideration, which occurs in utterance Ui associated with segment level s. The function Resolved(x, s, Ui) (cf. Table 3) is evaluated in order to determine the proper antecedent ante for x. It consists of the evaluation of a reachability predicate for the antecedent, on which we will concentrate here, and of the evaluation of the predicate IsAnaphorFor, which contains the linguistic and conceptual constraints imposed on a (pro)nominal anaphor (viz. agreement, binding, and sortal constraints) or a textual ellipsis (Hahn et al., 1996), not an issue in this paper.
3 Computing Global Discourse Structure

Prior to a discussion of the algorithmic procedure for hypothesizing discourse segments based on evidence from local centering data, we will introduce its basic building blocks. Let x denote the anaphoric expression under consideration, which occurs in utterance Ui associated with segment level s. The function Resolved(x, s, Ui) (cf. Table 3) is evaluated in order to determine the proper antecedent ante for x. It consists of the evaluation of a reachability predicate for the antecedent, on which we will concentrate here, and of the evaluation of the predicate IsAnaphorFor, which contains the linguistic and conceptual constraints imposed on a (pro)nominal anaphor (viz. agreement, binding, and sortal constraints) or a textual ellipsis (Hahn et al., 1996), not an issue in this paper.

The predicate IsReachable (cf. Table 4) requires ante to be reachable from the utterance Ui associated with the segment level s.² Reachability is thus made dependent on the segment structure DS of the discourse as built up by the segmentation algorithm which is specified in Table 6. In Table 4, the symbol "=str" denotes string equality, N the natural numbers. We also introduce as a notational convention that a discourse segment is identified by its index s and its opening and closing utterance, viz. DS[s.beg] and DS[s.end], respectively. Hence, we may either identify an utterance Ui by its linear text index, i, or, if it is accessible, with respect to its hierarchical discourse segment index, s (e.g., cf. Table 8 where U3 = U_DS[1.end] or U13 = U_DS[3.end]). The discourse segment index is always identical to the currently valid segment level, since the algorithm in Table 6 implements a stack behavior. Note also that we attach the discourse segment index s to center expressions, e.g., Cb(s, Ui).

  Resolved(x, s, Ui) :=
    ante    if IsReachable(ante, s, Ui) ∧ IsAnaphorFor(x, ante)
    undef   else

              Table 3: Resolution of Anaphora

  IsReachable(ante, s, Ui) :=
    TRUE if ante ∈ Cf(s, Ui-1)
    else TRUE if ante ∈ Cf(s-1, U_DS[(s-1).end])
    else TRUE if (∃v ∈ N : ante =str Cp(v, U_DS[v.end]) ∧ v < (s-1))
               ∧ (¬∃v' ∈ N : ante =str Cp(v', U_DS[v'.end]) ∧ v < v')

              Table 4: Reachability of the Anaphoric Antecedent

² The Cf lists in the functional centering model are totally ordered (Strube & Hahn, 1996, p.272) and we here implicitly assume that they are accessed in the total order given.

Finally, the function Lift(s, i) (cf. Table 5) determines the appropriate discourse segment level, s, of an utterance Ui (selected by its linear text index, i). Lift only applies to structural configurations in the centering lists in which themes and associated preferred centers have continuously shifted at at least three different consecutive segment levels (cf. Table 2, lower box, for the basic pattern).

  Lift(s, i) :=
    Lift(s-1, i-1)  if s > 2 ∧ i > 3
                    ∧ Cp(s, Ui-1) ≠ Cp(s-1, Ui-2)
                    ∧ Cp(s-1, Ui-2) ≠ Cp(s-2, Ui-3)
                    ∧ Cp(s, Ui-1) ∈ Cf(s-1, Ui-2)
    s               else

              Table 5: Lifting to the Appropriate Discourse Segment

Whenever a discourse segment is created, its starting and closing utterances are initialized to the current position in the discourse. Its end point gets continuously incremented as the analysis proceeds until this discourse segment DS is ultimately closed, i.e., whenever another segment DS' exists at the same or a hierarchically higher level of embedding such that the end point of DS' exceeds that of the end point of DS. Closed segments are inaccessible for the antecedent search. In Table 8, e.g., the first two discourse segments at level 3 (ranging from U5 to U5 and U8 to U11) are closed, while those at level 1 (ranging from U1 to U3), level 2 (ranging from U4 to U7) and level 3 (ranging from U12 to U13) are open.
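A compact rendering of Tables 4 and 5 in the same illustrative Python style follows; the access functions cf and cp and the dictionary-based bookkeeping of open segments are our assumptions, not the original data structures.

def is_reachable(ante, s, i, cf, cp, ds):
    # Plain-Python paraphrase of Table 4. cf(v, j) returns Cf(v, Uj) as a
    # list, cp(v, j) its first element; ds maps each still-open segment
    # level v to its (beg, end) utterance indices.
    if ante in cf(s, i - 1):                               # first clause
        return True
    if s - 1 in ds and ante in cf(s - 1, ds[s - 1][1]):    # second clause
        return True
    # third clause (simplified: Table 4 additionally requires v to be the
    # highest level at whose end point ante is the preferred center)
    return any(v < s - 1 and ante == cp(v, ds[v][1]) for v in ds)

def lift(s, i, cf, cp):
    # Iterative version of the recursion in Table 5.
    while (s > 2 and i > 3
           and cp(s, i - 1) != cp(s - 1, i - 2)
           and cp(s - 1, i - 2) != cp(s - 2, i - 3)
           and cp(s, i - 1) in cf(s - 1, i - 2)):
        s, i = s - 1, i - 1
    return s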
The main algorithm (see Table 6) consists of three major logical blocks (s and Ui denote the current discourse segment level and utterance, respectively).

1. Continue Current Segment. The Cp(s, Ui-1) is taken over for Ui. If Ui-1 and Ui indicate the end of a sequence in which a series of thematizations of rhemes has occurred, all embedded segments are lifted by the function Lift to a higher level s'. As a result of lifting, the entire sequence (including the final two utterances) forms a single segment. This is trivially true for cases of a constant theme.

2. Close Embedded Segment(s).

(a) Close the embedded segment(s) and continue another, already existing segment: If Ui does not include any anaphoric expression which is an element of the Cf(s, Ui-1), then match the antecedent in the hierarchically reachable segments. Only the Cp of the utterance at the end point of any of these segments is considered a potential antecedent. Note that, as a side effect, hierarchically lower segments are ultimately closed when a match at higher segment levels succeeds.

(b) Close the embedded segment and open a new, parallel one: If none of the anaphoric expressions under consideration co-specify the Cp(s-1, U_DS[(s-1).end]), then the entire Cf at this segment level is checked for the given utterance. If an antecedent matches, the segment which contains Ui-1 is ultimately closed, since Ui opens a parallel segment at the same level of embedding. Subsequent anaphora checks exclude any of the preceding parallel segments from the search for a valid antecedent and just visit the currently open one.

(c) Open new, embedded segment: If there is no matching antecedent in hierarchically reachable segments, then for utterance Ui a new, embedded segment is opened.

3. Open New, Embedded Segment. If none of the above cases applies, then for utterance Ui a new, embedded segment is opened. In the course of processing the following utterances, this decision may be retracted by the function Lift. It serves as a kind of "garbage collector" for globally insignificant discourse segments which, nevertheless, were reasonable from a local perspective for reference resolution purposes. Hence, the centered discourse segmentation procedure works in an incremental way and revises only locally relevant, yet globally irrelevant segmentation decisions on the fly.

  s := 1
  i := 1
  DS[s.beg] := i
  DS[s.end] := i
  while ¬ end of text
    i := i + 1
    R := { Resolved(x, s, Ui) | x ∈ Ui }
    if ∃r ∈ R : r =str Cp(s, Ui-1)                          (1)
      then s' := s
           i' := i
           DS[Lift(s', i').end] := i
    else if ¬∃r ∈ R : r ∈ Cf(s, Ui-1)                       (2a)
      then found := FALSE
           k := s
           while ¬found ∧ (k > 1)
             k := k - 1
             if ∃r ∈ R : r =str Cp(k, U_DS[k.end])
               then s := k
                    DS[s.end] := i
                    found := TRUE
             else if k = s - 1                               (2b)
               then if ∃r ∈ R : r ∈ Cf(k, U_DS[k.end])
                      then DS[s.beg] := i
                           DS[s.end] := i
                           found := TRUE
           if ¬found                                         (2c)
             then s := s + 1
                  DS[s.beg] := i
                  DS[s.end] := i
    else s := s + 1                                          (3)
         DS[s.beg] := i
         DS[s.end] := i

              Table 6: Algorithm for Centered Segmentation
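Reformulated over the helper functions sketched above, the control structure of Table 6 looks roughly as follows. This is our paraphrase for illustration only (in particular, resolved(s, i) stands in for the set R of resolved anaphors of Ui), not the authors' implementation.

def segment(n_utterances, resolved, cf, cp):
    # ds maps each open segment level to its [beg, end] utterance indices;
    # a level disappears from ds once its segment is closed.
    s, ds = 1, {1: [1, 1]}
    for i in range(2, n_utterances + 1):
        r = resolved(s, i)
        if cp(s, i - 1) in r:                                # block (1)
            s = lift(s, i, cf, cp)
            for v in [v for v in ds if v > s]:               # lifting merges
                del ds[v]                                    # embedded levels
            ds[s][1] = i
        elif not any(a in cf(s, i - 1) for a in r):          # block (2)
            for k in range(s - 1, 0, -1):
                if any(a == cp(k, ds[k][1]) for a in r):     # (2a)
                    for v in [v for v in ds if v > k]:
                        del ds[v]
                    s = k
                    ds[s][1] = i
                    break
                if k == s - 1 and any(a in cf(k, ds[k][1]) for a in r):
                    ds[s] = [i, i]                           # (2b) parallel
                    break
            else:
                s += 1                                       # (2c) embedded
                ds[s] = [i, i]
        else:
            s += 1                                           # block (3)
            ds[s] = [i, i]
    return ds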
4 A Sample Text Segmentation

The text with respect to which we demonstrate the working of the algorithm (see Table 7) is taken from a German computer magazine (c't, 1995, No.4, p.209). For ease of presentation the text is somewhat shortened. Since the method for computing levels of discourse segments depends heavily on different kinds of anaphoric expressions, (pro)nominal anaphors and textual ellipses are marked by italics, and the (pro)nominal anaphors are underlined, in addition. In order to convey the influence of the German word order we provide a rough phrase-to-phrase translation of the entire text.

The centered segmentation analysis of the sample text is given in Table 8. The first column shows the linear text index of each utterance. The Cb and Cf columns contain the centering data as computed by functional centering (Strube & Hahn, 1996); the first element of the Cf is the preferred center, Cp. The transition column lists the centering transitions which are derived from the Cb/Cf data of immediately successive utterances (cf. Table 1 for the definitions). The level column depicts the levels of discourse segments which are computed by the algorithm in Table 6; in the algorithm, the beginning of a segment corresponds to a value assignment to DS[s.beg], its extension to assignments to DS[s.end]. The final column indicates which block of the algorithm applies to the current utterance (cf. the right margin in Table 6).

(1) Brother HL-1260

(2) Ein Detail fällt schon beim ersten Umgang mit dem großen Brother auf:
    One particular [detail] - is already noticed - in the first approach to - the big Brother:

(3) Im Betrieb macht er durch ein kräftiges Arbeitsgeräusch auf sich aufmerksam, das auch im Stand-by-Modus noch gut vernehmbar ist.
    In operation - draws - it - with a heavy noise level - attention to itself - which - also - in the stand-by mode - is still well audible.

(4) Für Standard-Installationen kommt man gut ohne Handbuch aus.
    As far as standard installations are concerned - gets - one - well - by - without any manual.

(5) Zwar erläutert das dünne Handbüchlein die Bedienung der Hardware anschaulich und gut illustriert.
    Admittedly, gives - the thin leaflet - the operation of the hardware - a clear description of - and - well illustrated.

(6) Die Software-Seite wurde im Handbuch dagegen stiefmütterlich behandelt:
    The software part - was - in the manual - however - like a stepmother - treated:

(7) bis auf eine karge Seite mit einem Inhaltsverzeichnis zum HP-Modus sucht man vergebens weitere Informationen.
    except for one meagre page - containing the table of contents for the HP mode - seeks - one - in vain - for further information.

(8) Kein Wunder: unter dem Inhaltsverzeichnis steht der lapidare Hinweis, man möge sich die Seiten dieses Kapitels doch bitte von Diskette ausdrucken - Frechheit.
    No wonder: beneath the table of contents - one finds the terse instruction, one should - oneself - the pages of this section - please - from disk - print out - - impertinence.

(9) Ohne diesen Ausdruck sucht man vergebens nach einem Hinweis darauf, warum die Auto-Continue-Funktion in der PostScript-Emulation nicht wirkt.
    Without this print-out, looks - one - in vain - for a hint - why - the auto-continue-function - in the PostScript emulation - does not work.

(10) Nach dem Einschalten zeigt das LC-Display an, daß diese praktische Hilfsfunktion nicht aktiv ist;
    After switching on - depicts - the LC display - that - this practical help function - not active - is;

(11) sie überwacht den Dateientransfer vom Computer.
    it monitors the file transfer from the computer.

(12) Viele der kleinen Macken verzeiht man dem HL-1260, wenn man erste Ausdrucke in Händen hält.
    Many of the minor defects - pardons - one - the HL-1260, when - one - the first print outs - holds in [one's] hands.

(13) Gerasterte Grauflächen erzeugt der Brother sehr homogen ...
    Raster-mode grey-scale areas - generates - the Brother - very homogeneously ...

              Table 7: Sample Text

The computation starts at U1, the headline. The Cf(U1) is set to "1260", which is meant as an abbreviation of "Brother HL-1260". Upon initialization, the beginning as well as the ending of the initial discourse segment are both set to "1". U2 and U3 simply continue this segment (block (1) of the algorithm), so Lift does not apply. The Cp is set to "1260" in all utterances of this segment. Since U4 neither contains an anaphoric expression which co-specifies the Cp(1, U3) (block (1)) nor any other element of the Cf(1, U3) (block (2a)), and as there is no hierarchically preceding segment, block (2c) applies. The segment counter s is incremented and a new segment at level 2 is opened, setting the beginning and the ending to "4". The phrase "das dünne Handbüchlein" (the thin leaflet) in U5 does not co-specify the Cp(2, U4) but co-specifies an element of the Cf(2, U4) instead (viz. "Handbuch" (manual)). Hence, block (3) of the algorithm applies, leading to the creation of a new segment at level 3. The anaphor "Handbuch" (manual) in U6 co-specifies the Cp(3, U5). Hence block (1) applies (the occurrence of "1260" in Cf(U5) is due to the assumptions specified by Strube & Hahn (1996)). Given this configuration, the function Lift lifts the embedded segment one level, so the segment which ended with U4 is now continued up to U6 at level 2. As a consequence, the centering data of U5 are excluded from further consideration as far as the co-specification by any subsequent anaphoric expression is concerned. U7 simply continues the same segment, since the textual ellipsis "Seite" (page) refers to "Handbuch" (manual). The utterances U8 to U10 exhibit a typical thematization-of-the-rhemes pattern which is quite common for the detailed description of objects. (Note the sequence of SHIFT transitions.) Hence, block (3) of the algorithm applies to each of these utterances and, correspondingly, new segments at the levels 3 to 5 are created. This behavior breaks down at the occurrence of the anaphoric expression "sie" (it) in U11, which co-specifies the Cp(5, U10), viz. the "auto-continue function", denoted by another anaphoric expression, namely "Hilfsfunktion" (help function), in U10. Hence, block (1) applies. The evaluation of Lift succeeds with respect to two levels of embedding. As a result, the whole sequence is lifted up to level 3 and continues the segment which started at the discourse element "Inhaltsverzeichnis" (table of contents). As a result of applying Lift, the whole sequence is captured in one segment. U12 does not contain any anaphoric expression which co-specifies an element of the Cf(3, U11); hence block (2) of the algorithm applies. The anaphor "HL-1260" does not co-specify the Cp of the utterance which represents the end of the hierarchically preceding discourse segment (U7), but it co-specifies an element of the Cf(2, U7). The immediately preceding segment is ultimately closed and a parallel segment is opened at U12 (cf. block (2b)). Note also that the algorithm does not check the Cf(3, U10), despite the fact that it contains the antecedent of "1260". However, the occurrences of "1260" in the Cfs of U9 and U10 are mediated by textual ellipses. If these utterances contained the expression "1260" itself, the algorithm would have built a different discourse structure and, therefore, "1260" in U10 would be reachable for the anaphor in U12. Segment 3, finally, is continued by U13.

  Ui   Cb                      Cf                                                                      Trans.  Level  Block
  U1   -                       [1260]                                                                  -       1      -
  U2   1260                    [1260, Umgang, Detail]                                                  C       1      1
  U3   1260                    [1260, Betrieb, Arbeitsgeräusch, Stand-by-Modus]                        C       1      1
  U4   -                       [Standard-Installation, Handbuch]                                       -       2      2c
  U5   Handbuch                [Handbuch, 1260, Hardware, Bedienung]                                   C       3      3
  U6   Handbuch                [Handbuch, 1260, Software]                                              C       2      1, Lift
  U7   Handbuch                [Handbuch, Seite, 1260, HP-Modus, Inhaltsverzeichnis, Informationen]    C       2      1
  U8   Inhaltsverzeichnis      [Inhaltsverzeichnis, Hinweis, Seiten, Kapitel, Diskette, Frechheit]     SS      3      3
  U9   Kapitel                 [Kapitel, Ausdruck, Hinweis, 1260, Auto-Continue-Funktion,              SS      4      3
                                PostScript-Emulation]
  U10  1260                    [Auto-Continue-Funktion, 1260, LC-Display]                              RS      5      3
  U11  Auto-Continue-Funktion  [Auto-Continue-Funktion, Dateientransfer, Computer]                     SS      3      1, Lift
  U12  -                       [1260, Macken, Ausdruck]                                                -       3      2b
  U13  1260                    [1260, Grauflächen]                                                     C       3      1

              Table 8: Sample of a Centered Text Segmentation Analysis
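The transition column of Table 8 can be checked mechanically against the Cb/Cp values listed there, using the transition function sketched earlier; the tuple encoding below merely restates the table.

# (Cb(Ui), Cp(Ui)) for U2..U13 of Table 8; None = Cb undefined.
DATA = [
    ("1260", "1260"), ("1260", "1260"), (None, "Standard-Installation"),
    ("Handbuch", "Handbuch"), ("Handbuch", "Handbuch"), ("Handbuch", "Handbuch"),
    ("Inhaltsverzeichnis", "Inhaltsverzeichnis"), ("Kapitel", "Kapitel"),
    ("1260", "Auto-Continue-Funktion"),
    ("Auto-Continue-Funktion", "Auto-Continue-Funktion"),
    (None, "1260"), ("1260", "1260"),
]

cb_prev = None                      # Cb(U1) is undefined
for cb, cp in DATA:
    print(transition(cb_prev, cb, cp) if cb is not None else "-")
    cb_prev = cb
# Expected output: C C - C C C SS SS RS SS - C, as in Table 8.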
5 Empirical Evaluation

In this section, we present some empirical data concerning the centered segmentation algorithm. Our study was based on the analysis of twelve texts from the information technology domain (IT), of one text from a German news magazine (Spiegel)³, and of two literary texts⁴ (Lit). Table 9 summarizes the total numbers of anaphors, textual ellipses, utterances, and words in the test set.

³ Japan - Der Neue der alten Garde. In Der Spiegel, Nr. 3, 1996.
⁴ The first two chapters of a short story by the German writer Heiner Müller (Liebesgeschichte. In Heiner Müller. Geschichten aus der Produktion 2. Berlin: Rotbuch Verlag, 1974, pp.57-63) and the first chapter of a novel by Uwe Johnson (Zwei Ansichten. Frankfurt/Main: Suhrkamp Verlag, 1965).

               IT     Spiegel   Lit    Σ
  anaphors     197    101       198    496
  ellipses     195    22        23     240
  utterances   336    84        127    547
  words        5241   1468      1610   8319

              Table 9: Test Set

Table 10 and Table 11 consider the number of anaphoric and text-elliptical expressions, respectively, and the linear distance they have to their corresponding antecedents. Note that common centering algorithms (e.g., the one by Brennan et al. (1987)) are specified only for the resolution of anaphors in Ui-1. They are specified neither for anaphoric antecedents in Ui (not an issue here) nor for anaphoric antecedents beyond Ui-1. In the test set, 139 anaphors (28%) and 116 textual ellipses (48.3%) fall out of the (intersentential) scope of those common algorithms. So, the problem we consider is not a marginal one.
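Distance distributions of this kind are easy to derive once each referring expression is annotated with the index of its own utterance and that of its antecedent's utterance; the following counter is a generic sketch with invented field conventions, not the annotation format actually used in the study.

from collections import Counter

def distance_bin(d):
    # d = i - j for an anaphor in Ui with its antecedent in Uj.
    if d <= 5:
        return f"Ui-{d}" if d > 0 else "Ui"
    lo = 5 * ((d - 1) // 5) + 1
    return f"Ui-{lo} to Ui-{lo + 4}"

def tally(mentions):
    # mentions: iterable of (utterance_index, antecedent_utterance_index) pairs.
    return Counter(distance_bin(i - j) for i, j in mentions)

print(tally([(2, 1), (5, 4), (12, 7)]))   # hypothetical examples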
                  IT    Spiegel   Lit    Σ
  Ui              10    7         32     49
  Ui-1            117   70        121    308
  Ui-2            28    14        24     66
  Ui-3            18    5         10     33
  Ui-4            6     1         5      12
  Ui-5            6     0         1      7
  Ui-6 to Ui-10   8     1         3      12
  Ui-11 to Ui-15  3     1         1      5
  Ui-16 to Ui-20  1     2         1      4

              Table 10: Anaphoric Antecedent in Utterance Un

                  IT    Spiegel   Lit    Σ
  Ui-1            94    15        15     124
  Ui-2            42    6         8      56
  Ui-3            16    0         0      16
  Ui-4            14    0         0      14
  Ui-5            8     0         0      8
  Ui-6 to Ui-10   14    1         0      15
  Ui-11 to Ui-15  7     0         0      7

              Table 11: Elliptical Antecedent in Utterance Un

Table 12 and Table 13 give the success rate of the centered segmentation algorithm for anaphors and textual ellipses, respectively. The numbers in these tables indicate at which segment level anaphors and textual ellipses were correctly resolved. The category of errors covers erroneous analyses the algorithm produces, while the one for false positives concerns those resolution results where a referential expression was resolved with the hierarchically most recent antecedent but not with the linearly most recent (obviously, the targeted) one (both of them denote the same discourse entity). The categories Cf(s, Ui-1) in Tables 12 and 13 contain more elements than the categories Ui-1 in Tables 10 and 11, respectively, due to the mediating property of textual ellipses in functional centering (Strube & Hahn, 1996).

                              IT    Spiegel   Lit    Σ
  Ui                          10    7         32     49
  Cf(s, Ui-1)                 161   78        125    364
  Cp(s-1, U_DS[(s-1).end])    14    9         24     47
  Cf(s-1, U_DS[(s-1).end])    7     5         9      21
  Cp(s-2, U_DS[(s-2).end])    1     0         1      2
  Cp(s-3, U_DS[(s-3).end])    1     0         1      2
  Cp(s-4, U_DS[(s-4).end])    0     0         1      1
  Cp(s-5, U_DS[(s-5).end])    0     1         0      1
  errors                      3     1         5      9
  false positives             (1)   (3)       (7)    (11)

              Table 12: Anaphoric Antecedent in Centers

                              IT    Spiegel   Lit    Σ
  Cf(s, Ui-1)                 156   18        17     191
  Cp(s-1, U_DS[(s-1).end])    18    0         4      22
  Cf(s-1, U_DS[(s-1).end])    10    1         2      13
  Cp(s-2, U_DS[(s-2).end])    7     1         0      8
  Cp(s-3, U_DS[(s-3).end])    3     0         0      3
  errors                      1     2         0      3
  false positives             (2)   (0)       (3)    (5)

              Table 13: Elliptical Antecedent in Centers

The centered segmentation algorithm reveals a pretty good performance. This is to some extent implied by the structural patterns we find in expository texts, viz. their single-theme property (e.g., "1260" in the sample text). In contrast, the literary texts in the test exhibited a much more difficult internal structure, which resembled the multiple thread structure of dialogues discussed by Rosé et al. (1995). The good news is that the segmentation procedure we propose is capable of dealing even with these more complicated structures. While only one antecedent of a pronoun was not reachable given the superimposed text structure, the remaining eight errors are characterized by full definite noun phrases or proper names. The vast majority of these phenomena can be considered informationally redundant utterances in the terminology of Walker (1996b), for which we currently have no solution at all. It seems to us that these kinds of phrases may override text-grammatical structures as evidenced by referential discourse segments and, rather, trigger other kinds of search strategies. Though we fed the centered segmentation algorithm with rather long texts (up to 84 utterances), the antecedents of only two anaphoric expressions had to bridge a hierarchical distance of more than 3 levels. This coincides with our supposition that the overall structure computed by the algorithm should be rather flat. We could not find an embedding of more than seven levels.

6 Related Work

There has always been an implicit relationship between the local perspective of centering and the global view of focusing on discourse structure (cf. the discussion in Grosz et al. (1995)). However, work establishing an explicit account of how both can be joined in a computational model has not been done so far. The efforts of Sidner (1983), e.g., have provided a variety of different focus data structures to be used for reference resolution. This multiplicity and the on-going growth of the number of different entities (cf. Suri & McCoy (1994)) mirror an increase in explanatory constructs that we consider a methodological drawback of this approach, because they can hardly be kept under control. Our model, due to its hierarchical nature, implements a stack behavior that is also inherent to the above-mentioned proposals. We refrain, however, from establishing a new data type (even worse, different types of stacks) that has to be managed on its own. There is no need for extra computations to determine the "segment focus", since that is implicitly given in the local centering data already available in our model.

A recent attempt at introducing global discourse notions into the centering framework considers the use of a cache model (Walker, 1996b). This introduces an additional data type with its own management principles for data storage, retrieval and update.
While our proposal for centered discourse segmentation also requires a data structure of its own, it is better integrated into centering than the caching model, since the cells of segment structures simply contain "pointers" that implement a direct link to the original centering data. Hence, we avoid extra operations related to feeding and updating the cache. The relation between our centered segmentation algorithm and Walker's (1996a) integration of centering into the cache model can be viewed from two different angles. On the one hand, centered segmentation may be a part of the cache model, since it provides an elaborate, non-linear ordering of the elements within the cache. Note, however, that our model does not require any prefixed size corresponding to the limited attention constraint. On the other hand, centered segmentation may replace the cache model entirely, since both are competing models of the attentional state. Centered segmentation also has the additional advantage of restricting the search space of anaphoric antecedents to those discourse entities actually referred to in the discourse, while the cache model allows unrestricted retrieval in the main or long-term memory.

Text segmentation procedures (more with an information retrieval motivation, rather than being related to reference resolution tasks) have also been proposed for a coarse-grained partitioning of texts into contiguous, non-overlapping blocks and for assigning content labels to these blocks (Hearst, 1994). The methodological basis of these studies are lexical cohesion indicators (Morris & Hirst, 1991) combined with word-level co-occurrence statistics. Since the labelling is one-dimensional, this approximates our use of preferred centers of discourse segments. These studies, however, lack the fine-grained information of the contents of Cf lists also needed for proper reference resolution.

Finally, many studies on discourse segmentation highlight the role of cue words for signaling segment boundaries (cf., e.g., the discussion in Passonneau & Litman (1993)). However useful this strategy might be, we see the danger that such a surface-level description may actually hide structural regularities at deeper levels of investigation, illustrated by access mechanisms for centering data at different levels of discourse segmentation.

7 Conclusions

We have developed a proposal for extending the centering model to incorporate the global referential structure of discourse for reference resolution. The hierarchy of discourse segments we compute realizes certain constraints on the reachability of antecedents. Moreover, the claim is made that the hierarchy of discourse segments implements an intuitive notion of the limited attention constraint, as we avoid a simplistic, cognitively implausible linear backward search for potential discourse referents. Since we operate within a functional framework, this study also presents one of the rare formal accounts of thematic progression patterns for full-fledged texts, which were informally introduced by Daneš (1974).

The model, nevertheless, still has several restrictions. First, it has been developed on the basis of a small corpus of written texts. Though these cover diverse text sorts (viz. technical product reviews, newspaper articles and literary narratives), we currently do not account for spoken monologues as modelled, e.g., by Passonneau & Litman (1993), or even the intricacies of the dyadic conversations Rosé et al. (1995) deal with.
Second, a thorough integration of the referential and intentional description of discourse segments still has to be worked out.

Acknowledgments. We would like to thank our colleagues in the CLIF group for fruitful discussions and instant support, Joe Bush who polished the text as a native speaker, the three anonymous reviewers for their critical comments, and, in particular, Bonnie Webber for supplying invaluable comments on an earlier draft of this paper. Michael Strube is supported by a postdoctoral grant from DFG (Str 545/1-1).

References

Brennan, S. E., M. W. Friedman & C. J. Pollard (1987). A centering approach to pronouns. In Proc. of the 25th Annual Meeting of the Association for Computational Linguistics; Stanford, Cal., 6-9 July 1987, pp. 155-162.

Daneš, F. (1974). Functional sentence perspective and the organization of the text. In F. Daneš (Ed.), Papers on Functional Sentence Perspective, pp. 106-128. Prague: Academia.

Grosz, B. J., A. K. Joshi & S. Weinstein (1995). Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.

Grosz, B. J. & C. L. Sidner (1986). Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.

Hahn, U., K. Markert & M. Strube (1996). A conceptual reasoning approach to textual ellipsis. In Proc. of the 12th European Conference on Artificial Intelligence (ECAI '96); Budapest, Hungary, 12-16 August 1996, pp. 572-576. Chichester: John Wiley.

Hearst, M. A. (1994). Multi-paragraph segmentation of expository text. In Proc. of the 32nd Annual Meeting of the Association for Computational Linguistics; Las Cruces, N.M., 27-30 June 1994, pp. 9-16.

Morris, J. & G. Hirst (1991). Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-48.

Passonneau, R. J. (1996). Interaction of discourse structure with explicitness of discourse anaphoric noun phrases. In M. Walker, A. Joshi & E. Prince (Eds.), Centering in Discourse. Preprint.

Passonneau, R. J. & D. J. Litman (1993). Intention based segmentation: Human reliability and correlation with linguistic cues. In Proc. of the 31st Annual Meeting of the Association for Computational Linguistics; Columbus, Ohio, 22-26 June 1993, pp. 148-155.

Rosé, C. P., B. Di Eugenio, L. S. Levin & C. Van Ess-Dykema (1995). Discourse processing of dialogues with multiple threads. In Proc. of the 33rd Annual Meeting of the Association for Computational Linguistics; Cambridge, Mass., 26-30 June 1995, pp. 31-38.

Sidner, C. L. (1983). Focusing in the comprehension of definite anaphora. In M. Brady & R. Berwick (Eds.), Computational Models of Discourse, pp. 267-330. Cambridge, Mass.: MIT Press.

Strube, M. & U. Hahn (1996). Functional centering. In Proc. of the 34th Annual Meeting of the Association for Computational Linguistics; Santa Cruz, Cal., 23-28 June 1996, pp. 270-277.

Suri, L. Z. & K. F. McCoy (1994). RAFT/RAPR and centering: A comparison and discussion of problems related to processing complex sentences. Computational Linguistics, 20(2):301-317.

Walker, M. A. (1996a). Centering, anaphora resolution, and discourse structure. In M. Walker, A. Joshi & E. Prince (Eds.), Centering in Discourse. Preprint.

Walker, M. A. (1996b). Limited attention and discourse structure. Computational Linguistics, 22(2):255-264.
Probing the lexicon in evaluating commercial MT systems

Martin Volk
University of Zurich
Department of Computer Science, Computational Linguistics Group
Winterthurerstr. 190, CH-8057 Zurich
volk@ifi.unizh.ch

Abstract

In the past the evaluation of machine translation systems has focused on single system evaluations because there were only a few systems available. But now there are several commercial systems for the same language pair. This requires new methods of comparative evaluation. In the paper we propose a black-box method for comparing the lexical coverage of MT systems. The method is based on lists of words from different frequency classes. It is shown how these word lists can be compiled and used for testing. We also present the results of using our method on 6 MT systems that translate between English and German.

1 Introduction

The evaluation of machine translation (MT) systems has been a central research topic in recent years (cp. (Sparck-Jones and Galliers, 1995; King, 1996)). Many suggestions have focussed on measuring the translation quality (e.g. error classification in (Flanagan, 1994) or post-editing time in (Minnis, 1994)). These measures are time-consuming and difficult to apply. But translation quality rests on the linguistic competence of the MT system, which again is based first and foremost on grammatical coverage and lexicon size. Testing grammatical coverage can be done by using a test suite (cp. (Nerbonne et al., 1993; Volk, 1995)). Here we will advocate a probing method for determining the lexical coverage of commercial MT systems.

We have evaluated 6 MT systems which translate between English and German and which are all positioned in the low price market (under US$ 1500):

• German Assistant in Accent Duo V2.0 (developer: MicroTac/Globalink; distributor: Accent)
• Langenscheidts T1 Standard V3.0 (developer: GMS; distributor: Langenscheidt)
• Personal Translator plus V2.0 (developer: IBM; distributor: von Rheinbaben & Busch)
• Power Translator Professional (developer/distributor: Globalink)¹
• Systran Professional for Windows (developer: Systran S.A.; distributor: Mysoft)
• Telegraph V1.0 (developer/distributor: Globalink)

¹ Recently a newer version has been announced as "Power Translator Pro 6.2".

The overall goal of our evaluation was a comparison of these systems resulting in recommendations on which system to apply for which purpose. The evaluation consisted of compiling a list of criteria for self evaluation and three experiments with external volunteers, mostly students from a local interpreter school. These experiments were performed to judge the information content of the translations, the translation quality, and the user-friendliness. The list of criteria for self evaluation consisted of technical, linguistic and ergonomic issues. As part of the linguistic evaluation we wanted to determine the lexical coverage of the MT systems, since only some of the systems provide figures on lexicon size in the documentation.

Many MT system evaluations in the past have been white-box evaluations performed by a testing team in cooperation with the developers (see (Falkedal, 1991) for a survey). But commercial MT systems can only be evaluated in a black-box setup, since the developer typically will not make the source code and even less likely the linguistic source data (lexicon and grammar) available.

Most of the evaluations described in the literature have centered around one MT system.
But there are hardly any reports on comparative evaluations. A noted exception is (Rinsche, 1993), which compares SYSTRAN², LOGOS and METAL for German-English translation³. She uses a test suite with 5000 words of authentic texts (from an introduction to Computer Science and from an official journal of the European Commission). The resulting translations are qualitatively evaluated for lexicon, syntax and semantics errors. The advantage of this approach is that words are evaluated in context. But the results of this study cannot be used for comparing the sizes of lexicons, since the number of error tokens is given rather than the number of error types. Furthermore, it is questionable if a running text of 5000 words says much about lexicon size, since most of this figure is usually taken up by frequent closed class words.

² SYSTRAN is not to be confused with Systran Professional for Windows. SYSTRAN is a system with a development history dating back to the seventies. It is well known for its long-standing employment with the European Commission.
³ Part of the study is also concerned with French-English translation.

If we are mainly interested in lexicon size this method has additional drawbacks. First, it is time-consuming to find out if a word is translated correctly within running text. Second, it takes a lot of redundant translating to find missing lexical items. So, if we want to compare the lexicon size of different MT systems, we have to find a way to determine the lexical coverage by executing the system with selected lexical items. We therefore propose to use a special word list with words in different frequency ranges to probe the lexicon efficiently.

2 Our method of probing the lexicon

Lexicon size is an important selling argument for print dictionaries and for MT systems. The counting methods, however, are not standardized and therefore the advertised numbers need to be taken with great care (for a discussion see (Landau, 1989)). In a similar manner the figures for lexicon size in MT systems ("a lexicon of more than 100.000 words", "more than 3.000 verbs") need to be critically examined. While we cannot determine the absolute lexicon size with a black-box test, we can determine the relative lexical coverage of systems dealing with the same language pair.

When selecting the word lists for our lexicon evaluation we concentrated on adjectives, nouns, and verbs. We assume that the relatively small number of closed class words like determiners, pronouns, prepositions, conjunctions, and adverbs must be exhaustively included in the lexicon. For each of the three word classes in question (Adj, N, V) we tested words with high, medium, and low absolute frequency. We expected that words with high frequency should all be included in the lexicon, whereas words with medium and low frequency should give us a comparative measure of lexicon size. With these word lists we computed:

1. What percentage of the test words is translated?
2. What percentage of the test words is correctly translated?

The difference between 1. and 2. stems mostly from the fact that the MT systems regard unknown words as compounds, split them up into known units, and translate these units. Obviously this results in sometimes bizarre word creations (see section 2.3).

Our evaluation consisted of three steps. First, we prepared the word lists. Second, we ran the tests on all systems. Finally, we evaluated the output.
These steps had to be done for both translation directions (German to English and vice versa), but here we concentrate on English to German.

2.1 Preparation of the word lists

We extracted the words for our test from the CELEX database. CELEX (Baayen, Piepenbrock, and van Rijn, 1995) is a lexical database for English, German and Dutch. It contains 51,728 stems for German (among them 9,855 adjectives; 30,715 nouns; 9,400 verbs) and 52,447 stems for English (among them 9,214 adjectives; 29,494 nouns; 8,504 verbs). This database also contains frequency data, which for German were derived from the Mannheim corpus of the "Institut für deutsche Sprache" and for English were computed from the Cobuild corpus of the University of Birmingham. Looking at the frequency figures we decided to take:

• The 100 most frequent adjectives, nouns, verbs.
• 100 adjectives, nouns, verbs with frequency 25 or less. Frequency 25 was chosen because it is a medium frequency for all three word classes.
• The first 100 adjectives, nouns, verbs with frequency 1.⁴

⁴ CELEX also contains entries with frequency 0, but we wanted to assure a minimal degree of commonness by selecting words with frequency 1. Still, many words with frequency 1 seem exotic or idiosyncratic uses.
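Given a part-of-speech classified stem list with frequency counts, the three frequency classes can be drawn mechanically; the tuple layout and the interpretation of "100 with frequency 25 or less" as the most frequent items at or below 25 are our own assumptions about the selection, used here only for illustration.

def frequency_classes(entries, n=100):
    # entries: list of (stem, pos, freq) tuples for one word class,
    # e.g. pos in {"A", "N", "V"}.
    by_freq = sorted(entries, key=lambda e: e[2], reverse=True)
    high = [e[0] for e in by_freq[:n]]
    medium = [e[0] for e in by_freq if e[2] <= 25][:n]
    # "first 100 with frequency 1" in list order, as in the database export:
    low = [e[0] for e in entries if e[2] == 1][:n]
    return high, medium, low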
With the sentence we were trying to get each system to translate a given word in the intended part of speech. For German we chose the sentence templates: (1) Es ist (adjective/. Ein (noun) ist gut. Wir mtissen (verb/. Adjectives were tested in predicative use since this is the only position where they appear uninflected. Nouns were embedded within a simple copula sen- tence. The indefinite article for a noun sentence was manually adjusted to 'eine' for female gender nouns. Nouns that occur only in a plural form also need special treatment, i.e. a plural determiner and a plu- ral copula form. Verbs come after the modal verb miissen because it requires an infinitive and it does not distinguish between separable prefix verbs and other verbs. On similar reasons we took for English: (2) This is (adjective). The (noun) can be nice. We (verb). The modal can was used in noun sentences to avoid number agreement problems for plural-only words like people. Our sentence list for English nouns thus looked like: (3) 1. The time can be nice. 2. The man can be nice. 3. The people can be nice. 300. The unlikelihood can be nice. 2.2 Running the tests The sentence lists for adjectives, nouns, and verbs were then loaded as source document in one MT sys- tem after the other. Each system translated the sen- tence lists and the target document was saved. Most systems allow to set a subject area parameter (for subjects such as finances, electrical engineering, or agriculture). This option is meant to disambiguate between different word senses. The German noun Bank is translated as English bank if the subject area is finances, otherwise it is translated as bench. No subject area lexicon was activated in our test runs. We concentrated on checking the general vocabulary. In addition Systran allows for the selection of doc- ument types (such as prose, user manuals, corre- spondence, or parts lists). Unfortunately the doc- umentation does not tell us about the effects of such a selection. No document type was selected for our tests. Running the tests takes some time since 900 sen- tences need to be translated by 6 systems. On our 486-PC the systems differ greatly in speed. The fastest system processes at about 500 words per minute whereas the slowest system reaches only 50 words per minute. 2.3 Evaluating the tests After all the systems had processed the sentence lists, the resulting documents were merged for ease 114 of inspection. Every source sentence was grouped together with all its translations. Example 4 shows the English adjective hard (frequency rank 41) with its translations. 41. This is hard. 41. G. Assistant Dieser ist hart. 41. Lang. T1 Dies ist schwierig. (4) 41. Personal Tr. dies ist schwer. 41. Power Tr. Dieses ist hart. 41. Systran Dieses ist hart. 41. Telegraph Dies ist hart. Note that the 6 MT systems give three different translations for hard all of which are correct given an appropriate context. It is also interesting to see that the demonstrative pronoun this is translated into dif- ferent forms of its equivalent pronoun in German. These sentence groups must then be checked man- ually to determine whether the given translation is correct. The translated sentences were annotated with one of the following tags: u (unknown word) The source word is unknown and is inserted into the translation. Seldom: The source word is a compound, part of which is unknown and inserted into the translation (the warm-heartedness : das warme heartedness). 
w (wrong translation)The source word is in- correctly translated either because of an in- correct segmentation of a compound (spot-on : erkennen-auf/Stelle-auf instead of haarge- nau/exakt) or (seldom) because of an incor- rect lexicon entry (would : wiirdelen instead of wiirden). m (missing word) The source word is not trans- lated at all and is missing in the target sentence. wf (wrong form) The source word was found in the lexicon, but it is translated in an inappro- priate form (e.g. it was translated as a verb al- though it must be a noun) or at least in an un- expected form (e.g. it appears with duplicated parts (windscreen-wiper : Windschutzscheiben- scheibenwischer) ). s (sense preservingly segmented) The source word was segmented and the units were translated. The translation is not correct but the meaning of the source word ~an be inferred (unreasonableness : Vernunfllos-heit instead of Vnvernunft). f (missing interfix (nouns only)) The source word was segmented into units and correctly translated. But the resulting German compound is missing an interfix (windscreen- wiper : Windschutzscheibe- Wischer). wd (wrong determiner (nouns only)) The source word was correctly translated but comes with an incorrect determiner (wristband : die Handgelenkband instead of das Handge- lenkband). c (correct) The translation is correct. Out of these tags only u can be inserted auto- matically when the target sentence word is identical with the source word. Some of the tested translation systems even mark an unknown word in the target sentence with special symbols. All other tags had to be manually inserted. Some of the low frequency items required extensive dictionary look-up to verify the decision. After all translations had been tagged, the tags were checked for consistency and automat- ically summed up. 3 Results of our evaluation The MT systems under investigation translate be- tween English and German and we employed our evaluation method for both translation directions. Here we will report on the results for translating from English to German. First, we will try to an- swer the question of what percentage of the test words was translated at all (correctly or incor- rectly). This figure is obtained by taking the un- known words as negative counts and all others as positive counts. We thus obtained the triples in ta- ble 1. The first number in a triple is the percentage of positive counts in the high frequency class, the second number is the percentage of positive counts in the medium frequency class, and the third num- ber is the percentage of positive counts in the low frequency class. In table 1 we see immediately that there were no unknown words in the high frequency class for any of the systems. The figures for the medium and low frequency classes require a closer look. Let us ex- plain what these figures mean, taking the German Assistant as an example: 14 adjectives (14 nouns, 21 verbs) of the medium frequency class were unknown, resulting in 86% adjectives (86% nouns, 79% verbs) getting a translation. In the low frequency class 49 adjectives, 53 nouns, and 61 verbs got a translation. The average is computed as the mean value over the three word classes. Comparing the systems' averages we can observe that Personal Translator scores highest for all frequency classes. Langenschei- dts T1 and Telegraph are second best with about the 115 G. Assistant Lang. T1 Personal Tr. Power Tr. 
Systran Telegraph adjectives 100/86/49 100/98/66 100/95/84 100/87/54 100/49/31 100/97/59 nouns 100/86/53 100/91/62 100/97/78 100/83/53 100/59/32 100/94/63 verbs 100/79/61 100/97/73 100/97/88 100/84/55 100/61/37 100/93/75 average 100/84/54 100/95/67 100/96/83 100/85/54 100/56/33 100/95/66 Table 1: Percentage of words translated correctly or incorrectly G. Assistant Lang. T1 Personal Tr. Power Tr. Systran Telegraph adjectives nouns verbs average 100/79/24 99/83/38 97/78/50 99/80/37 100/92/36 100/88/50 99/93/59 100/91/48 100/94/77 100/95/74 100/97/86 100/95/79 100/86/49 100/81/47 100/84/50 100/84/49 100/47/23 100/57/27 100/61/33 lOO/55/28 100/96/53 100/92/53 100/93/73 '[!I!/mt~ Table 2: Percentage of correctly translated words same scores. German Assistant and Power Transla- tor rank third while Systran clearly has the lowest scores. This picture becomes more detailed when we look at the second question. The second question is about the percentage of the test words that are correctly translated. For this, we took unknown words, wrong translations, and missing words as negative counts and all others as positive counts. Note that our judgement does not say that a word is translated correctly in a given context. It merely states that a word is translated in a way that is understandable in some context. Table 2 gives additional evidence that Personal Translator has the most elaborate lexicon for English to German translation while German Assistant and Systran have the least elaborate. Telegraph is on second position followed by Langenscheidts T1 and Power Translator. We can also observe that there are only small differences between the figures in ta- ble 1 and table 2 as far as the high and medium frequency classes are concerned. But there are dif- ferences of up to 30% for the low frequency class. This means that we will get many wrong transla- tions if a word is not included in the lexicon and has to be segmented for translation. While annotating sentences with the tags we ob- served that verbs obtained many 'wrong form' judge- ments (20% and more for the low frequency class). This is probably due to the fact that many English verbs in the low frequency class are rare uses of ho- mograph nouns (e.g. to keyboard, to pitchfork, to sec- tion). If we omit the 'wrong form' tags from the posi- tive count (i.e. we accept only words that are correct, sense preservingly segmented, or close to correct be- cause of minor orthographical mistakes) we obtain the figures in table 3. In this table we can see even clearer the wide cov- erage of the Personal Translator lexicon because the system correctly recognizes around 70% of all low frequency words while all the other systems figure around 40% or less. It is also noteworthy that the Systran results differ only slightly between table 2 and table 3. This is due to the fact that Systran does not give many wrong form (wf) translations. Systran does not offer a translation of a word if it is in the lexicon with an inappropriate part of speech. So, if we try to translate the sentence in example 5 Systran will not offer a translation although keyboard as a noun is in the lexicon. All the other systems give the noun reading in such cases. (5) We keyboard. So the difference between the figures in tables 2 and 3 gives an indication of the precision that we can expect when the translation system deals with infrequent words. The smaller the difference, the more often the system will provide the correct part of speech (if it translates at all). 
3.1 Some observations

NLP systems can widen the coverage of their lexicon considerably if they employ word-building processes like composition and derivation. Especially derivation seems a useful module for MT systems, since the meaning shift in derivation is relatively predictable and therefore the derivation process can be recreated in the target language in most cases.

It is therefore surprising to note that all systems in our test seem to lack an elaborate derivation module. All of them know the noun weapon but none is able to translate weaponless, although the English derivation suffix -less has an equivalent in German, -los.
But in particular if noun com- pounds are segmented and the translation is synthe- sized this operation sometimes fails. Personal Trans- lator does not give a determiner form in these cases. It simply gives the letter 'd' as the beginning letter of all three different forms (der, die, das). 3.2 Comparing translation directions Comparing the results for English to German trans- lation with German to English is difficult because of the different corpora used for the CELEX fre- quencies. Especially it is not evident whether our medium frequency (25 occurrences) leads to words of similar prominence in both languages. Neverthe- less our results indicate that some systems focus on either of the two translation directions and there- fore have a more elaborate lexicon in one direction. This can be concluded since these systems show big- ger differences than the others. For instance, Tele- graph, Systran and Langenscheidts T1 score much better for German to English. For Telegraph the rate of unknown words dropped by 2% for medium frequency and by 12% for low frequency, tbr Systran the same rate dropped by 36% for medium frequency and by 33% for low frequency words, and for Lan- genscheidts T1 the rate dropped by 1% for medium frequency and by 16% for low frequency. The latter 117 reflects the figures in the Langenscheidts T1 man- ual, where they report an inbalance in the lexicon of 230'000 entries for German to English and 90'000 entries for the opposite direction. Personal Transla- tor again ranks among the systems with the widest coverage while German Assistant shows the smallest coverage. 4 Conclusions As more translation systems become available there is an increasing demand for comparative evaluations. The method for checking lexical coverage as intro- duced in this paper is one step in this direction. Tak- ing the most frequent adjectives, nouns, and verbs is not very informative and mostly serves to anchor the method. But medium and low frequency words give a clear indication of the underlying relative lexicon size. Of course, the introduced method cannot claim that the relative lexicon sizes correspond exactly to the computed percentages. For this the test sample is too small. The method provides a plausible hy- pothesis but it cannot prove in a strict sense that one lexicon necessarily is bigger than another. A proof, however, cannot be expected from any black- box testing method. We mentioned above that some systems subclas- sify their lexical entries according to subject areas. They do this to a different extent. Langenscheidts T1 has a total of 55 subject ar- eas. They are sorted in a hierarchy which is three levels deep. An example is Technology with its subfields Space Technology, Food Tech- noloy, Technical Norms etc. Multiple ~ subject areas from different levels can be selected and prioritized. Personal Translator has 22 subject areas. They are all on the same level. Examples are: Biol- ogy, Computers, Law, Cooking. Multiple selec- tions can be made, but they cannot be priori- tized. Power Translator and Telegraph do not come with built-in subject dictionaries but these can be purchased separately and added to the sys- tem. Systran has 22 "Topical Glossaries", all on the same level. Examples are: Automotive, Avi- ation/Space, Chemistry. Multiple subject areas can be selected and prioritized. Our tests were run without any selection of a sub- ject area. We tried to check if a lexicon entry that is marked with a subject area will still be found if no subject area is selected. 
This check can only be performed reliably for Langenscheidt T1 since this is the only system that makes the lexicon transparent to the user to the point that one can access the sub- ject area of every entry. Personal Translator only allows to look at an entry and its translation op- tions, but not at its subject marker, and Systran does not allow any access to the built-in lexicon. For Langenscheidts T1 we tested the word compiler which is marked with data processing and computer software. This lexical entry does not have any read- ing without a subject area marker, but the word is still found at translation if no subject area is chosen. That means that a subject area, if chosen, is used as disambiguator, but if translating without a subject area the system has access to the complete lexicon. In this respect our tests have put Power Translator and Telegraph at a disadvantage since we did not extend their lexicons with any add-on lexicons. Only their built-in lexicons were evaluated here. Of course, lexical coverage by itself does not guar- antee a good translation. It is a necessary but not a sufficient condition. It must be complemented with lexical depth and grammatical coverage. Lexieal depth can be evaluated in two dimensions. The first dimension describes the number of readings avail- able for an entry. A look at some common nouns that received different translations from our test sys- tems reveals that there are big differences in this di- mension which are not reflected by our test results. Table 7 gives the number of readings for the word order ('N' standing for noun readings, 'V' for ver- bal, 'Prep' for prepositional, and 'Phr' for phrasal readings). G. Assistant 9 N 3 V Lang. T1 4 N 4 V Personal Tr. 6 N 5 V (7) Power Tr. 1 N 1 V Systran n.a. Telegraph 10 N 4 V 1 Prep 1 Prep 2 Phr There is no information for Systran since the built- in lexicon cannot be accessed. German Assistant contains a wide variety of readings although it scored badly in our tests. Power Translator on the contrary gives only the most likely readings. Still, there re- mains the question of whether a system is able to pick the most appropriate reading in a given con- text, which brings us to the second dimension. The second dimension of lexical depth is about the amount of syntactic and semantic knowledge at- tributed to every reading. This also varies a great deal. Telegraph offers 16 semantic features (ani- 118 mate, time, place etc.), German Assistant 9 and Langenscheidts T1 5. Power Translator offers few semantic features for verbs (movement, direction). The fact that these features are available does not entail that they are consistenly set at every appro- priate reading. And even if they are set, it does not follow that they are all optimally used during the translation process. To check these lexicon dimensions new tests need to be developped. We think that it is especially tricky to get to all the readings along the first di- mension. One idea is to use the example sentences listed with the different readings in a comprehen- siveprint dictionary. If these sentences are carefully designed they should guide an MT system to the respective translation alternatives. Our method for determining lexical coverage could be refined by looking at more frequency classes (e.g. an additional class between medium and low fre- quency). 
But since the results of working with one medium and one low frequency class show clear distinctions between the systems, it is doubtful that the additional cost of taking more classes will provide significantly better figures.

The method as introduced in this paper requires extensive manual labor in checking the translation results. Carefully going through 900 words each for 6 systems, including dictionary look-up for unclear cases, takes about 2 days time. This could be reduced by automatically accessing translation lists or reliable bilingual dictionaries. Judging sense-preserving segmentations or other close-to-correct translations must be left to the human expert.

A special purpose translation list could be incrementally built up in the following manner. For the first system all 900 words will be manually checked. All translations with their tags will be entered into the translation list. For the second system only those words will be checked where the translation differs from the translation saved in the translation list. Every new judgement will be added to the translation list for comparison with the next system's translations.

5 Acknowledgements
I would like to thank Dominic A. Merz for his help in performing the evaluation and for many helpful suggestions on earlier versions of the paper.

References
Baayen, R. H., R. Piepenbrock, and H. van Rijn. 1995. The CELEX lexical database (CD-ROM). Linguistic Data Consortium, University of Pennsylvania.
Falkedal, Kirsten. 1991. Evaluation Methods for Machine Translation Systems. An historical overview and a critical account. ISSCO, University of Geneva. Draft Report.
Flanagan, Mary A. 1994. Error classification for MT evaluation. In Technology partnerships for crossing the language barrier: Proceedings of the 1st Conference of the Association for Machine Translation in the Americas, pages 65-71, Washington, DC. Association for Machine Translation in the Americas.
King, Margaret. 1996. Evaluating natural language processing systems. CACM, 39(1):73-79.
Landau, Sidney I. 1989. Dictionaries. The art and craft of lexicography. Cambridge University Press, Cambridge. First published 1984.
Minnis, Stephen. 1994. A simple and practical method for evaluating machine translation quality. Machine Translation, 9(2):133-149.
Nerbonne, J., K. Netter, A. K. Diagne, L. Dickmann, and J. Klein. 1993. A diagnostic tool for German syntax. Machine Translation (Special Issue on Evaluation of MT Systems), (also as DFKI Research Report RR-91-18), 8(1-2):85-108.
Rinsche, Adriane. 1993. Evaluationsverfahren für maschinelle Übersetzungssysteme - zur Methodik und experimentellen Praxis. Kommission der Europäischen Gemeinschaften, Generaldirektion XIII; Informationstechnologien, Informationsindustrie und Telekommunikation, Luxemburg.
Sparck-Jones, K. and J. R. Galliers. 1995. Evaluating Natural Language Processing Systems. An Analysis and Review. Number 1083 in Lecture Notes in Artificial Intelligence. Springer Verlag, Berlin.
Volk, Martin. 1995. Einsatz einer Testsatzsammlung im Grammar Engineering, volume 30 of Sprache und Information. Niemeyer Verlag, Tübingen.
Ambiguity Resolution for Machine Translation of Telegraphic Messages 1

Young-Suk Lee, Lincoln Laboratory, MIT, Lexington, MA 02173 USA, ysl@sst.ll.mit.edu
Clifford Weinstein, Lincoln Laboratory, MIT, Lexington, MA 02173 USA, cjw@sst.ll.mit.edu
Stephanie Seneff, SLS, LCS, MIT, Cambridge, MA 02139 USA, seneff@lcs.mit.edu
Dinesh Tummala, Lincoln Laboratory, MIT, Lexington, MA 02173 USA, tummala@sst.ll.mit.edu

Abstract
Telegraphic messages with numerous instances of omission pose a new challenge to parsing in that a sentence with omission causes a higher degree of ambiguity than a sentence without omission. Misparsing induced by omissions has a far-reaching consequence in machine translation. Namely, a misparse of the input often leads to a translation into the target language which has incoherent meaning in the given context. This is more frequently the case if the structures of the source and target languages are quite different, as in English and Korean. Thus, the question of how we parse telegraphic messages accurately and efficiently becomes a critical issue in machine translation. In this paper we describe a technical solution for the issue, and present the performance evaluation of a machine translation system on telegraphic messages before and after adopting the proposed solution. The solution lies in a grammar design in which lexicalized grammar rules defined in terms of semantic categories and syntactic rules defined in terms of part-of-speech are utilized together. The proposed grammar achieves a higher parsing coverage without increasing the amount of ambiguity/misparsing when compared with a purely lexicalized semantic grammar, and achieves a lower degree of ambiguity/misparses without decreasing the parsing coverage when compared with a purely syntactic grammar.

1 Introduction
Achieving the goal of producing high quality machine translation output is hindered by lexical and syntactic ambiguity of the input sentences. Lexical ambiguity may be greatly reduced by limiting the domain to be translated. However, the same is not generally true for syntactic ambiguity. In particular, telegraphic messages, such as military operations reports, pose a new challenge to parsing in that frequently occurring ellipses in the corpus induce a higher degree of syntactic ambiguity than for text written in "grammatical" English. Misparsing triggered by the ambiguity of the input sentence often leads to a mistranslation in a machine translation system. Therefore, the issue becomes how to parse telegraphic messages accurately and efficiently to produce high quality translation output.

In general the syntactic ambiguity of an input text may be greatly reduced by introducing semantic categories in the grammar to capture the co-occurrence restrictions of the input string. In addition, ambiguity introduced by omission can be reduced by lexicalizing grammar rules to delimit the lexical items which typically occur in phrases with omission in the given domain. A drawback of this approach, however, is that the grammar coverage is quite low. On the other hand, grammar coverage may be maximized when we rely on syntactic rules defined in terms of part-of-speech at the cost of a high degree of ambiguity.

1 This work was sponsored by the Defense Advanced Research Projects Agency. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Air Force.
Thus, the goal of maximizing the parsing coverage while minimizing the ambiguity may be achieved by adequately combining lexicalized rules with semantic categories, and non-lexicalized rules with syntactic categories. The question is how much semantic and syntactic information is necessary to achieve such a goal.

In this paper we propose that an adequate amount of lexical information to reduce the ambiguity in general originates from verbs, which provide information on subcategorization, and prepositions, which are critical for PP-attachment ambiguity resolution. For the given domain, lexicalizing domain-specific expressions which typically occur in phrases with omission is adequate for ambiguity resolution. Our experimental results show that the mix of syntactic and semantic grammar as proposed here has advantages over either a syntactic grammar or a lexicalized semantic grammar. Compared with a syntactic grammar, the proposed grammar achieves a much lower degree of ambiguity without decreasing the grammar coverage. Compared with a lexicalized semantic grammar, the proposed grammar achieves a higher rate of parsing coverage without increasing the ambiguity. Furthermore, the generality introduced by the syntactic rules facilitates the porting of the system to other domains as well as enabling the system to handle unknown words efficiently.

This paper is organized as follows. In section 2 we discuss the motivation for lexicalizing grammar rules with semantic categories in the context of translating telegraphic messages, and its drawbacks with respect to parsing coverage. In section 3 we propose a grammar writing technique which minimizes the ambiguity of the input and maximizes the parsing coverage. In section 4 we give our experimental results of the technique on the basis of two sets of unseen test data. In section 5 we discuss system engineering issues to accommodate the proposed technique, i.e., integration of a part-of-speech tagger and the adaptation of the understanding system. Finally, section 6 provides a summary of the paper.

2 Translation of Telegraphic Messages
Telegraphic messages contain many instances of phrases with omission, cf. (Grishman, 1989), as in (1). This introduces a greater degree of syntactic ambiguity than for texts without any omitted element, thereby posing a new challenge to parsing.

(1) TU-95 destroyed 220 nm. (= An aircraft TU-95 was destroyed at 220 nautical miles)

Syntactic ambiguity and the resultant misparse induced by such an omission often leads to a mistranslation in a machine translation system, such as the one described in (Weinstein et al., 1996), which is depicted in Figure 1.

[Figure 1: An Interlingua-Based English-to-Korean Machine Translation System -- block diagram omitted]

The system depicted in Figure 1 has a language understanding module TINA, (Seneff, 1992), and a language generation module GENESIS, (Glass, Polifroni and Seneff, 1994), at the core. The semantic frame is an intermediate meaning representation which is directly derived from the parse tree and becomes the input to the generation system. The hierarchical structure of the parse tree is preserved in the semantic frame, and therefore a misparse of the input sentence leads to a mistranslation. Suppose that the sentence (1) is misparsed as an active rather than a passive sentence due to the omission of the verb was, and that the prepositional phrase 220 nm is misparsed as the direct object of the verb destroy.
These instances of misunderstanding are reflected in the semantic frame. Since the semantic frame becomes the input to the generation system, the generation system produces the nonsensical Korean translation output, as in (2), as opposed to the sensible one, as in (3). 3

(2) TU-95-ka 220 hayli-lul pakoy-hayssta
    TU-95-NOM 220 nautical mile-OBJ destroyed
(3) TU-95-ka 220 hayli-eyse pakoy-toyessta
    TU-95-NOM 220 nautical mile-LOC was destroyed

Given that the generation of the semantic frame from the parse tree, and the generation of the translation output from the semantic frame, are quite straightforward in such a system, and that the flexibility of the semantic frame representation is well suited for multilingual machine translation, it would be more desirable to find a way of reducing the ambiguity of the input text to produce high quality translation output, rather than adjusting the translation process. In the sections below we discuss one such method in terms of grammar design and some of its side effects.

2.1 Lexicalization of Grammar Rules with Semantic Categories
In the domain of naval operational report messages (MUC-II messages hereafter), 4 (Sundheim, 1989), we find two types of ellipsis. First, top level categories such as subjects and the copula verb be are often omitted, as in (4).

(4) Considered hostile act (= This was considered to be a hostile act).

Second, many function words like prepositions and articles are omitted. Instances of preposition omission are given in (5), where z stands for Greenwich Mean Time (GMT).

(5) a. Haylor hit by a torpedo and put out of action 8 hours (= for 8 hours)
    b. All hostile recon aircraft outbound 1300 z (= at 1300 z)

If we try to parse sentences containing such omissions with a grammar where the rules are defined in terms of syntactic categories (i.e. part-of-speech), the syntactic ambiguity multiplies. To accommodate sentences like (5)a-b, the grammar needs to allow all instances of noun phrases (NP hereafter) to be ambiguous between an NP and a prepositional phrase (PP hereafter) where the preposition is omitted. Allowing an input where the copula verb be is omitted in the grammar causes the past tense form of a verb to be interpreted either as the main verb with the appropriate form of be omitted, as in (6)a, or as a reduced relative clause modifying the preceding noun, as in (6)b.

(6) Aircraft launched at 1300 z ...
    a. Aircraft were launched at 1300 z ...
    b. Aircraft which were launched at 1300 z ...

Such instances of ambiguity are usually resolved on the basis of semantic information. However, relying on a semantic module for ambiguity resolution implies that the parser needs to produce all possible parses of the input text and carry them along, thereby requiring a more complex understanding process.

One way of reducing the ambiguity at an early stage of processing without relying on a semantic module is to incorporate domain/semantic knowledge into the grammar as follows (a toy illustration in code follows the list):
• Lexicalize grammar rules to delimit the lexical items which typically occur in phrases with omission;
• Introduce semantic categories to capture the co-occurrence restrictions of lexical items.

3 In the examples, NOM stands for the nominative case marker, OBJ the object case marker, and LOC the locative postposition.
4 MUC-II stands for the Second Message Understanding Conference. MUC-II messages were originally collected and prepared by NRaD (1989) to support DARPA-sponsored research in message understanding.
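To make the contrast concrete, the following sketch (ours, not part of the authors' TINA grammar; all rule and category names are invented for illustration) shows how a lexicalized rule with semantic categories admits fewer analyses than a part-of-speech rule for a phrase with an omitted preposition such as "220 nm":

```python
# Toy contrast between a POS-based rule and a lexicalized semantic rule.
# Category and rule names are illustrative only, not TINA's actual grammar.

POS_RULES = {
    # A bare "numeric noun" string is ambiguous under a POS grammar:
    # plain NP, or a PP whose preposition was omitted (telegraphic style).
    ("NUM", "NOUN"): ["NP", "PP_with_omitted_prep"],
}

SEMANTIC_RULES = {
    # Lexicalized: "nm" is classed as a distance unit, so "220 nm" can
    # only be a headless locative PP (cf. rules like (7) in the text).
    ("NUM", "nautical_mile"): ["headless_PP"],
}

def analyses(tokens, rules, lexicon):
    """Return the candidate analyses for a two-token phrase."""
    key = tuple(lexicon.get(tok, tok) for tok in tokens)
    return rules.get(key, [])

pos_lexicon = {"220": "NUM", "nm": "NOUN"}
sem_lexicon = {"220": "NUM", "nm": "nautical_mile"}

print(analyses(["220", "nm"], POS_RULES, pos_lexicon))
# ['NP', 'PP_with_omitted_prep']  -- two readings carried forward
print(analyses(["220", "nm"], SEMANTIC_RULES, sem_lexicon))
# ['headless_PP']                 -- ambiguity resolved early
```

The point of the sketch is only that word-class knowledge ("nm" is a distance unit) prunes analyses before any semantic module runs.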
Some example grammar rules instantiating these ideas are given in (7).

(7) a. locative_PP → {at | in | near | off | on | ...} NP | headless_PP
    b. headless_PP → [a] np_distance | a np_bearing
    c. np_distance → numeric nautical_mile | numeric yard
    d. temporal_PP → {during | after | prior_to | ...} NP | time_expression
    e. time_expression → [at] numeric gmt
    f. gmt → z

(7)a states that a locative prepositional phrase consists of a subset of prepositions and a noun phrase. In addition, there is a subcategory headless_PP which consists of a subset of noun phrases which typically occur in a locative prepositional phrase with the preposition omitted. The head nouns which typically occur in prepositional phrases with the preposition omitted are nautical miles and yard. The rest of the rules can be read in a similar manner. And it is clear how such lexicalized rules with the semantic categories reduce the syntactic ambiguity of the input text.

2.2 Drawbacks
Whereas the language processing is very efficient when a system relies on a lexicalized semantic grammar, there are some drawbacks as well.
• Since the grammar is domain and word specific, it is not easily ported to new constructions and new domains.
• Since the vocabulary items are entered in the grammar as part of lexicalized grammar rules, if an input sentence contains words unknown to the grammar, parsing fails.

These drawbacks are reflected in the performance evaluation of our machine translation system. After the system was developed on all the training data of the MUC-II corpus (640 sentences, 12 words/sentence average), the system was evaluated on the held-out test set of 111 sentences (hereafter TEST set). The results are shown in Table 1. The system was also evaluated on data which were collected from an in-house experiment. For this experiment, the subjects were asked to study a number of MUC-II sentences, and create about 20 MUC-II-like sentences. These MUC-II-like sentences form data set TEST'. The results of the system evaluation on the data set TEST' are given in Table 2.

Table 1: TEST Data Evaluation Results on the Lexicalized Semantic Grammar
Total no. of sentences: 111
No. of sentences with no unknown words: 66/111 (59.5%)
No. of parsed sentences: 23/66 (34.8%)
No. of misparsed sentences: 2/23 (8.7%)

Table 2: TEST' Data Evaluation Results on the Lexicalized Semantic Grammar
Total no. of sentences: 281
No. of sentences with no unknown words: 239/281 (85.1%)
No. of parsed sentences: 103/239 (43.1%)
No. of misparsed sentences: 15/103 (14.6%)

Table 1 shows that the grammar coverage for unseen data is about 35%, excluding the failures due to unknown words. Table 2 indicates that even for sentences constructed to be similar to the training data, the grammar coverage is about 43%, again excluding the parsing failures due to unknown words. The misparse 5 rate with respect to the total parsed sentences ranges between 8.7% and 14.6%, which is considered to be highly accurate.

3 Incorporation of Syntactic Knowledge
Considering the low parsing coverage of a semantic grammar which relies on domain specific knowledge, and the fact that the successful parsing of the input sentence is a prerequisite for producing translation output, it is critical to improve the parsing coverage. Such a goal may be achieved by incorporating syntactic rules into the grammar while retaining lexical/semantic information to minimize the ambiguity of the input text. The question is: how much semantic and syntactic information is necessary?
We propose a solution, as in (8):

(8) (a) Rules involving verbs and prepositions need to be lexicalized to resolve the prepositional phrase attachment ambiguity, cf. (Brill and Resnik, 1993).
    (b) Rules involving verbs need to be lexicalized to prevent misparsing due to an incorrect subcategorization.
    (c) Domain specific expressions (e.g. z, nm in the MUC-II corpus) which frequently occur in phrases with omitted elements need to be lexicalized.
    (d) Otherwise, rely on syntactic rules defined in terms of part-of-speech.

In this section, we discuss typical misparses for the syntactic grammar on experiments in the MUC-II corpus. We then illustrate how these misparses are corrected by lexicalizing the grammar rules for verbs, prepositions, and some domain-specific phrases.

3.1 Typical Misparses Caused by Syntactic Grammar
The misparses we find in the MUC-II corpus, when tested on a syntactic grammar, are largely due to the three factors specified in (9). 5

(9) i. Misparsing due to prepositional phrase attachment (hereafter PP-attachment) ambiguity
    ii. Misparsing due to incorrect verb subcategorizations
    iii. Misparsing due to the omission of a preposition, e.g. 1410 z instead of at 1410 z

Examples of misparses due to an incorrect verb subcategorization and a PP-attachment ambiguity are given in Figure 2 and Figure 3, respectively. An example of a misparse due to preposition omission is given in Figure 4. In Figure 2, the verb intercepted incorrectly subcategorizes for a finite complement clause. In Figure 3, the prepositional phrase with 12 rounds is wrongly attached to the noun phrase the contact, as opposed to the verb phrase vp_active, to which it properly belongs. Figure 4 shows that the prepositional phrase 1410 z with at omitted is misparsed as a part of the noun phrase expression hostile raid composition.

3.2 Correcting Misparses by Lexicalizing Verbs, Prepositions, and Domain Specific Phrases
Providing the accurate subcategorization frame for the verb intercept by lexicalizing the higher level category "vp" ensures that it never takes a finite clause as its complement, leading to the correct parse, as in Figure 5. As for PP-attachment ambiguity, lexicalization of verbs and prepositions helps in identifying the proper attachment site of the prepositional phrase, cf. (Brill and Resnik, 1993), as illustrated in Figure 6. Misparses due to omission are easily corrected by deploying lexicalized rules for the vocabulary items which occur in phrases with omitted elements. For the misparse illustrated in Figure 4, utilizing the lexicalized rules in (10) prevents 1410 z from being analyzed as part of the subsequent noun phrase, as in Figure 7.

(10) a. time_expression → [at] numeric gmt
     b. gmt → z

4 Experimental Results
In this section we report two types of experimental results. One is the parsing results on two sets of unseen data TEST and TEST' (discussed in Section 2) using the syntactic grammar defined purely in terms of part-of-speech. The other is the parsing results on the same sets of data using the grammar which combines lexicalized semantic grammar rules and syntactic grammar rules.

5 The term misparse in this paper should be interpreted with care. A number of the sentences we consider to be misparses are not syntactic misparses, but semantically anomalous. Since we are interested in getting the accurate interpretation in the given context at the parsing stage, we consider parses which are semantically anomalous to be misparses.
The results are compared with respect to the parsing coverage and the misparse rate. These experimental results are also compared with the parsing results with respect to the lexicalized semantic grammar discussed in Section 2.

4.1 Experimental Results on Data Set TEST

Table 3: TEST Data Evaluation Results on the Syntactic Grammar
Total no. of sentences: 111
No. of parsed sentences: 84/111 (75.7%)
No. of misparsed sentences: 24/84 (29%)

Table 4: TEST Data Evaluation Results on the Mixed Grammar
Total no. of sentences: 111
No. of parsed sentences: 86/111 (77%)
No. of misparsed sentences: 9/86 (10%)

In terms of parsing coverage, the two grammars perform equally well (around 76%). In terms of misparse rate, however, the grammar which utilizes only syntactic categories shows a much higher rate of misparse (i.e. 29%) than the grammar which utilizes both syntactic and semantic categories (i.e. 10%). Comparing the evaluation results on the mixed grammar with those on the lexicalized semantic grammar discussed in Section 2, the parsing coverage of the mixed grammar is much higher (77%) than that of the semantic grammar (59.5%). In terms of misparse rate, both grammars perform equally well, i.e. around 9%. 6

[Figure 2: Misparse due to incorrect verb subcategorization -- parse tree omitted]
[Figure 3: Misparse due to PP-attachment ambiguity -- parse tree omitted]
[Figure 4: Misparse due to omission of preposition -- parse tree omitted]
[Figure 5: Parse tree with correct verb subcategorization -- parse tree omitted]
[Figure 6: Parse tree with correct PP-attachment -- parse tree omitted]
[Figure 7: Corrected parse tree -- parse tree omitted]

4.2 Experimental Results on Data Set TEST'

Table 5: TEST' Data Evaluation Results on the Syntactic Grammar
Total no. of sentences: 281
No. of parsed sentences: 215/281 (76.5%)
No. of misparsed sentences: 60/215 (28%)

Table 6: TEST' Data Evaluation Results on the Mixed Grammar
Total no. of sentences: 289
No. of parsed sentences: 236/289 (82%)
No. of misparsed sentences: 23/236 (10%)

6 The parsing coverage of the semantic grammar, i.e. 34.8%, is after discounting the parsing failures due to words unknown to the grammar. The reason why we do not give the statistics of the parsing failures due to unknown words for the syntactic and the mixed grammar is that the part-of-speech tagging process, which will be discussed in detail in Section 5, has the effect of handling unknown words, and therefore the problem does not arise.
Evaluation results of the two types of grammar on the TEST' data, given in Table 5 and Table 6, are similar to those of the two types of grammar on the TEST data discussed above. To summarize, the grammar which combines syntactic rules and lexicalized semantic rules fares better than the syntactic grammar or the semantic grammar. Compared with a lexicalized semantic grammar, this grammar achieves a higher parsing coverage without increasing the amount of ambiguity/misparsing. When compared with a syntactic grammar, this grammar achieves a lower degree of ambiguity/misparsing without decreasing the parsing rate.

5 System Engineering
An input to the parser driven by a grammar which utilizes both syntactic and lexicalized semantic rules consists of words (to be covered by lexicalized semantic rules) and parts-of-speech (to be covered by syntactic rules). To accommodate the part-of-speech input to the parser, the input sentence has to be part-of-speech tagged before parsing. To produce an adequate translation output from the input containing parts-of-speech, there has to be a mechanism by which parts-of-speech are used for parsing purposes, and the corresponding lexical items are used for the semantic frame representation.

5.1 Integration of Rule-Based Part-of-Speech Tagger
To accommodate the part-of-speech input to the parser, we have integrated the rule-based part-of-speech tagger (Brill, 1992; Brill, 1995) as a preprocessor to the language understanding system TINA, as in Figure 8. An advantage of integrating a part-of-speech tagger over a lexicon containing part-of-speech information is that only the former can tag words which are new to the system, and this provides a way of handling unknown words.

[Figure 8: Integration of the Rule-Based Part-of-Speech Tagger as a Preprocessor to the Language Understanding System -- block diagram (rule-based part-of-speech tagger → language understanding TINA → language generation GENESIS → text output) omitted]

While most stochastic taggers require a large amount of training data to achieve high rates of tagging accuracy, the rule-based tagger achieves performance comparable to or higher than that of stochastic taggers, even with a training corpus of a modest size. Given that the size of our training corpus is fairly small (total 7716 words), a transformation-based tagger is well suited to our needs.

The transformation-based part-of-speech tagger operates in two stages. Each word in the tagged training corpus has an entry in the lexicon consisting of a partially ordered list of tags, indicating the most likely tag for that word, and all other tags seen with that word (in no particular order). Every word is first assigned its most likely tag in isolation. Unknown words are first assumed to be nouns, and then cues based upon prefixes, suffixes, infixes, and adjacent word co-occurrences are used to upgrade the most likely tag.
Secondly, after the most likely tag for each word is assigned, contextual transformations are used to improve the accuracy.

We have evaluated the tagger performance on the TEST data both before and after training on the MUC-II corpus. The results are given in Table 7. Tagging statistics 'before training' are based on the lexicon and rules acquired from the BROWN CORPUS and the WALL STREET JOURNAL CORPUS. Tagging statistics 'after training' are divided into two categories, both of which are based on the rules acquired from training data sets of the MUC-II corpus. The only difference between the two is that in one case (After Training I) we use a lexicon acquired from the MUC-II corpus, and in the other case (After Training II) we use a lexicon acquired from a combination of the BROWN CORPUS, the WALL STREET JOURNAL CORPUS, and the MUC-II database.

Table 7: Tagger Evaluation on Data Set TEST
Training Status       Tagging Accuracy
Before Training       1125/1287 (87.4%)
After Training I      1249/1287 (97%)
After Training II     1263/1287 (98%)

Table 7 shows that the tagger achieves a tagging accuracy of up to 98% after training and using the combined lexicon, with an accuracy for unknown words ranging from 82 to 87%. These high rates of tagging accuracy are largely due to two factors: (1) combination of domain-specific contextual rules obtained by training on the MUC-II corpus with general contextual rules obtained by training on the WSJ corpus; and (2) combination of the MUC-II lexicon with the lexicon for the WSJ corpus.

5.2 Adaptation of the Understanding System
The understanding system depicted in Figure 1 derives the semantic frame representation directly from the parse tree. The terminal symbols (i.e. words in general) in the parse tree are represented as vocabulary items in the semantic frame. Once we allow the parser to take part-of-speech as the input, the parts-of-speech (rather than actual words) will appear as the terminal symbols in the parse tree, and hence as the vocabulary items in the semantic frame representation. We adapted the system so that the part-of-speech tags are used for parsing, but are replaced with the original words in the final semantic frame. Generation can then proceed as usual. Figure 9 and (11) illustrate the parse tree and semantic frame produced by the adapted system for the input sentence 0819 z unknown contacts replied incorrectly.

[Figure 9: Parse Tree Based on the Mix of Word and Part-of-Speech Sequence -- parse tree omitted]

(11) {c statement
        :time_expression {p numeric_time
            :topic {q gmt :name "z" }
            :pred {p cardinal :topic "0819" } }
        :topic {q nn_head :name "contact"
            :pred {p unknown :global 1 } }
        :subject 1
        :pred {p reply_v :mode "past"
            :adverb {p incorrectly } } }

6 Summary
In this paper we have proposed a technique which maximizes the parsing coverage and minimizes the misparse rate for machine translation of telegraphic messages. The key to the technique is to adequately mix semantic and syntactic rules in the grammar. We have given experimental results of the proposed grammar, and compared them with the experimental results of a syntactic grammar and a semantic grammar with respect to parsing coverage and misparse rate, which are summarized in Table 8 and Table 9.
We have also discussed the system adaptation to accommodate the proposed technique.

Table 8: TEST Data Evaluation Results on the Three Types of Grammar
Grammar Type        Parsing Rate   Misparse Rate
Semantic Grammar    34.8%          8.7%
Syntactic Grammar   75.7%          29%
Mixed Grammar       77%            10%

Table 9: TEST' Data Evaluation Results on the Three Types of Grammar
Grammar Type        Parsing Rate   Misparse Rate
Semantic Grammar    43.1%          14.6%
Syntactic Grammar   76.5%          28%
Mixed Grammar       82%            10%

References
Eric Brill. 1992. A Simple Rule-Based Part of Speech Tagger. Proceedings of the Third Conference on Applied Natural Language Processing, ACL, Trento, Italy.
Eric Brill. 1995. Transformation-Based Error-Driven Learning and Natural Language Processing: A Case Study in Part-of-Speech Tagging. Computational Linguistics, 21(4), pages 543-565.
Eric Brill and Philip Resnik. 1993. A Rule-Based Approach to Prepositional Phrase Attachment Disambiguation. Technical report, Department of Computer and Information Science, University of Pennsylvania.
James Glass, Joseph Polifroni and Stephanie Seneff. 1994. Multilingual Language Generation Across Multiple Domains. Presented at the 1994 International Conference on Spoken Language Processing, Yokohama, Japan.
Ralph Grishman. 1989. Analyzing Telegraphic Messages. Proceedings of Speech and Natural Language Workshop, DARPA.
Stephanie Seneff. 1992. TINA: A Natural Language System for Spoken Language Applications. Computational Linguistics, 18(1), pages 61-88.
Beth M. Sundheim. 1989. Navy Tactical Incident Reporting in a Highly Constrained Sublanguage: Examples and Analysis. Technical Document 1477, Naval Ocean Systems Center, San Diego.
Clifford Weinstein, Dinesh Tummala, Young-Suk Lee, Stephanie Seneff. 1996. Automatic English-to-Korean Text Translation of Telegraphic Messages in a Limited Domain. To be presented at the International Conference on Computational Linguistics '96.
Machine Transliteration

Kevin Knight and Jonathan Graehl
Information Sciences Institute
University of Southern California
Marina del Rey, CA 90292
knight@isi.edu, graehl@isi.edu

Abstract
It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, computer in English comes out as コンピューター (konpyuutaa) in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.

1 Introduction
Translators must deal with many problems, and one of the most frequent is translating proper names and technical terms. For language pairs like Spanish/English, this presents no great challenge: a phrase like Antonio Gil usually gets translated as Antonio Gil. However, the situation is more complicated for language pairs that employ very different alphabets and sound systems, such as Japanese/English and Arabic/English. Phonetic translation across these pairs is called transliteration. We will look at Japanese/English transliteration in this paper.

Japanese frequently imports vocabulary from other languages, primarily (but not exclusively) from English. It has a special phonetic alphabet called katakana, which is used primarily (but not exclusively) to write down foreign names and loanwords. To write a word like golf bag in katakana, some compromises must be made. For example, Japanese has no distinct L and R sounds: the two English sounds collapse onto the same Japanese sound. A similar compromise must be struck for English H and F. Also, Japanese generally uses an alternating consonant-vowel structure, making it impossible to pronounce LFB without intervening vowels. Katakana writing is a syllabary rather than an alphabet--there is one symbol for ga (ガ), another for gi (ギ), another for gu (グ), etc. So the way to write golf bag in katakana is ゴルフバッグ, roughly pronounced goruhubaggu. Here are a few more examples:

Angela Johnson -- アンジラ・ジョンソン (anjira jyonson)
New York Times -- ニューヨーク・タイムズ (nyuuyooku taimuzu)
ice cream -- アイスクリーム (aisukuriimu)

Notice how the transliteration is more phonetic than orthographic; the letter h in Johnson does not produce any katakana. Also, a dot-separator (・) is used to separate words, but not consistently. And transliteration is clearly an information-losing operation: aisukuriimu loses the distinction between ice cream and I scream.

Transliteration is not trivial to automate, but we will be concerned with an even more challenging problem--going from katakana back to English, i.e., back-transliteration. Automating back-transliteration has great practical importance in Japanese/English machine translation. Katakana phrases are the largest source of text phrases that do not appear in bilingual dictionaries or training corpora (a.k.a. "not-found words"). However, very little computational work has been done in this area; (Yamron et al., 1994) briefly mentions a pattern-matching approach, while (Arbabi et al., 1994) discuss a hybrid neural-net/expert-system approach to (forward) transliteration.
The information-losing aspect of transliteration makes it hard to invert. Here are some problem instances, taken from actual newspaper articles: 1

アースデー (aasudee)
ロバート・ショーン・レナード (robaato shyoon renaado)
マスターズ・トーナメント (masutaazu toonamento)

English translations appear later in this paper.

Here are a few observations about back-transliteration:
• Back-transliteration is less forgiving than transliteration. There are many ways to write an English word like switch in katakana, all equally valid, but we do not have this flexibility in the reverse direction. For example, we cannot drop the t in switch, nor can we write arture when we mean archer.
• Back-transliteration is harder than romanization, which is a (frequently invertible) transformation of a non-roman alphabet into roman letters. There are several romanization schemes for katakana writing--we have already been using one in our examples. Katakana writing follows Japanese sound patterns closely, so katakana often doubles as a Japanese pronunciation guide. However, as we shall see, there are many spelling variations that complicate the mapping between Japanese sounds and katakana writing.
• Finally, not all katakana phrases can be "sounded out" by back-transliteration. Some phrases are shorthand, e.g., ワープロ (waapuro) should be translated as word processing. Others are onomatopoetic and difficult to translate. These cases must be solved by techniques other than those described here.

The most desirable feature of an automatic back-transliterator is accuracy. If possible, our techniques should also be:
• portable to new language pairs like Arabic/English with minimal effort, possibly reusing resources.
• robust against errors introduced by optical character recognition.
• relevant to speech recognition situations in which the speaker has a heavy foreign accent.
• able to take textual (topical/syntactic) context into account, or at least be able to return a ranked list of possible English translations.

Like most problems in computational linguistics, this one requires full world knowledge for a 100% solution. Choosing between Katarina and Catalina (both good guesses for カタリナ) might even require detailed knowledge of geography and figure skating. At that level, human translators find the problem quite difficult as well, so we only aim to match or possibly exceed their performance.

2 A Modular Learning Approach
Bilingual glossaries contain many entries mapping katakana phrases onto English phrases, e.g.: (aircraft carrier → エアクラフトキャリア). It is possible to automatically analyze such pairs to gain enough knowledge to accurately map new katakana phrases that come along, and this learning approach travels well to other language pairs. However, a naive approach to finding direct correspondences between English letters and katakana symbols suffers from a number of problems. One can easily wind up with a system that proposes iskrym as a back-transliteration of aisukuriimu. Taking letter frequencies into account improves this to a more plausible-looking isclim. Moving to real words may give is crime: the i corresponds to ai, the s corresponds to su, etc. Unfortunately, the correct answer here is ice cream. After initial experiments along these lines, we decided to step back and build a generative model of the transliteration process, which goes like this:

1. An English phrase is written.
2. A translator pronounces it in English.
3. The pronunciation is modified to fit the Japanese sound inventory.
4. The sounds are converted into katakana.
5. Katakana is written.

This divides our problem into five sub-problems. Fortunately, there are techniques for coordinating solutions to such sub-problems, and for using generative models in the reverse direction. These techniques rely on probabilities and Bayes' Rule. Suppose we build an English phrase generator that produces word sequences according to some probability distribution P(w). And suppose we build an English pronouncer that takes a word sequence and assigns it a set of pronunciations, again probabilistically, according to some P(p|w). Given a pronunciation p, we may want to search for the word sequence w that maximizes P(w|p). Bayes' Rule lets us equivalently maximize P(w) · P(p|w), exactly the two distributions we have modeled. Extending this notion, we settled down to build five probability distributions:

1. P(w) -- generates written English word sequences.
2. P(e|w) -- pronounces English word sequences.
3. P(j|e) -- converts English sounds into Japanese sounds.
4. P(k|j) -- converts Japanese sounds to katakana writing.
5. P(o|k) -- introduces misspellings caused by optical character recognition (OCR).

Given a katakana string o observed by OCR, we want to find the English word sequence w that maximizes the sum, over all e, j, and k, of

P(w) · P(e|w) · P(j|e) · P(k|j) · P(o|k)

Following (Pereira et al., 1994; Pereira and Riley, 1996), we implement P(w) in a weighted finite-state acceptor (WFSA) and we implement the other distributions in weighted finite-state transducers (WFSTs). A WFSA is a state/transition diagram with weights and symbols on the transitions, making some output sequences more likely than others. A WFST is a WFSA with a pair of symbols on each transition, one input and one output. Inputs and outputs may include the empty symbol ε. Also following (Pereira and Riley, 1996), we have implemented a general composition algorithm for constructing an integrated model P(x|z) from models P(x|y) and P(y|z), treating WFSAs as WFSTs with identical inputs and outputs. We use this to combine an observed katakana string with each of the models in turn. The result is a large WFSA containing all possible English translations. We use Dijkstra's shortest-path algorithm (Dijkstra, 1959) to extract the most probable one.

The approach is modular. We can test each engine independently and be confident that their results are combined correctly. We do no pruning, so the final WFSA contains every solution, however unlikely. The only approximation is the Viterbi one, which searches for the best path through a WFSA instead of the best sequence (i.e., the same sequence does not receive bonus points for appearing more than once).

3 Probabilistic Models
This section describes how we designed and built each of our five models. For consistency, we continue to print written English word sequences in italics (golf ball), English sound sequences in all capitals (G AA L F B AO L), Japanese sound sequences in lower case (g o r u h u b o o r u), and katakana sequences naturally (ゴルフボール).

3.1 Word Sequences
The first model generates scored word sequences, the idea being that ice cream should score higher than ice creme, which should score higher than nice kreem. We adopted a simple unigram scoring method that multiplies the scores of the known words and phrases in a sequence.
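A minimal sketch of this unigram scoring, under our own simplifications: the counts and word list below are invented for illustration, not the authors' actual frequency list, and we work in log space to avoid underflow.

```python
import math

# Toy unigram counts; the real model is built from a 262,000-entry list.
UNIGRAM_COUNTS = {"ice": 310, "cream": 240, "nice": 80, "masters": 45,
                  "tournament": 30}
TOTAL = sum(UNIGRAM_COUNTS.values())

def unigram_score(words):
    """Log P(w): sum of log unigram probabilities of the known words."""
    logp = 0.0
    for w in words:
        if w not in UNIGRAM_COUNTS:
            return float("-inf")  # unknown word: sequence is not scored
        logp += math.log(UNIGRAM_COUNTS[w] / TOTAL)
    return logp

print(unigram_score(["ice", "cream"]))   # finite score: both words known
print(unigram_score(["ice", "kreem"]))   # -inf: "kreem" is not in the list
```

A weighted finite-state acceptor realizes the same scoring by putting each word's negative log probability on its arc, so that path weights add up to the sequence score.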
Our 262,000-entry frequency list draws its words and phrases from the Wall Street Journal corpus, an online English name list, and an online gazetteer of place names. 2 A portion of the WFSA looks like this:

[WFSA fragment omitted; arcs include los / 0.000087, angeles, federal / 0.0013, month / 0.000992]

An ideal word sequence model would look a bit different. It would prefer exactly those strings which are actually grist for Japanese transliterators. For example, people rarely transliterate auxiliary verbs, but surnames are often transliterated. We have approximated such a model by removing high-frequency words like has, an, are, am, were, their, and does, plus unlikely words corresponding to Japanese sound bites, like coup and oh.

We also built a separate word sequence model containing only English first and last names. If we know (from context) that the transliterated phrase is a personal name, this model is more precise.

3.2 Words to English Sounds
The next WFST converts English word sequences into English sound sequences. We use the English phoneme inventory from the online CMU Pronunciation Dictionary, 3 minus the stress marks. This gives a total of 40 sounds, including 14 vowel sounds (e.g., AA, AE, UW), 25 consonant sounds (e.g., K, HH, R), plus our special symbol (PAUSE). The dictionary has pronunciations for 110,000 words, and we organized a phoneme-tree based WFST from it:

[WFST fragment omitted; arcs include E:E, E:IH, and letter-to-phoneme transitions]

Note that we insert an optional PAUSE between word pronunciations. Due to memory limitations, we only used the 50,000 most frequent words.

We originally thought to build a general letter-to-sound WFST, on the theory that while wrong (overgeneralized) pronunciations might occasionally be generated, Japanese transliterators also mispronounce words. However, our letter-to-sound WFST did not match the performance of Japanese transliterators,

2 Available from the ACL Data Collection Initiative.
3 http://www.speech.cs.cmu.edu/cgi-bin/cmudict
This scheme is attractive because Japanese sequences are almost always longer than English se- quences. Our WFST is learned automatically from 8,000 pairs of English/Japanese sound sequences, e.g., ( (s AA K ER) --* (s a kk a a)). We were able to pro- duce'these pairs by manipulating a small English- katakana glossary. For each glossary entry, we converted English words into English sounds us- ing the previous section's model, and we converted katakana words into Japanese sounds using the next section's model. We then applied the estimation- maximization (EM) algorithm (Baum, 1972) to gen- erate symbol-mapping probabilities, shown in Fig- ure 1. Our EM training goes like this: 1. For each English/Japanese sequence pair, com- pute all possible alignments between their ele- ments. In our case. an alignment is a drawing . that connects each English sound with one or more Japanese sounds, such that all Japanese sounds are covered and no lines cross. For ex- ample, there are two ways to align the pair ((L OW) <-> (r o o)): L OW L OW l /\ /\ I r o o r o o 2. For each pair, assign an equal weight to each of its alignments, such that those weights sum to 1. In the case above, each alignment gets a weight of 0.5. 3. For each of the 40 English sounds, count up in- stances of its different mappings, as observed in all alignments of all pairs. Each alignment con- tributes counts in proportion to its own weight. 4. For each of the 40 English sounds, normalize the scores of the Japanese sequences it maps to, so that the scores sum to 1. These are the symbol- mapping probabilities shown in Figure 1. 5. Recompute the alignment scores. Each align- ment is scored with the product of the scores of the symbol mappings it contains. 6. Normalize the alignment scores. Scores for each pair's alignments should sum to 1. 7. Repeat 3-6 until the symbol-mapping probabil- ities converge. We then build a WFST directly from the symbol- mapping probabilities: PAUSE:pause AA:a / 0 024 ~ AA:o / 0,018 o < --o Our WFST has 99 states and 283 arcs. We have also built models that allow individual English sounds to be "swallowed" (i.e., produce zero Japanese sounds). However, these models are ex- pensive to compute (many more alignments) and lead to a vast number of hypotheses during WFST composition. Furthermore, in disallowing "swallow- ing," we were able to automatically remove hun- dreds of potentially harmful pairs from our train- ing set, e.g., ((B AA R B ER SH AA P) -- (b a a b a a)). Because no alignments are possible, such pairs are skipped by the learning algorithm; cases like these must be solved by dictionary lookup any- way. Only two pairs failed to align when we wished they had--both involved turning English Y UW into Japanese u, as in ((Y UW K AH L EY L IY) ~ (u kurere)). Note also that our model translates each English sound without regard to context. We have built also context-based models, using decision trees receded as WFSTs. For example, at the end of a word, En- glish T is likely to come out as (= o) rather than (1;). 
Such context-based models, however, proved unnecessary for back-transliteration. 4 They are more useful for English-to-Japanese forward transliteration.

[Figure 1: English sounds (in capitals) with probabilistic mappings to Japanese sound sequences (in lower case), as learned by estimation-maximization. Only mappings with conditional probabilities greater than 1% are shown, so the figures may not sum to 1. Full table omitted; sample entries: AA → o (0.566), a (0.382); K → k (0.528), k u (0.238); L → r (0.621), r u (0.362).]

3.4 Japanese Sounds to Katakana
To map Japanese sound sequences like (m o o t a a) onto katakana sequences like (モーター), we manually constructed two WFSTs. Composed together, they yield an integrated WFST with 53 states and 303 arcs. The first WFST simply merges long Japanese vowel sounds into new symbols aa, ii, uu, ee, and oo. The second WFST maps Japanese sounds onto katakana symbols. The basic idea is to consume a whole syllable worth of sounds before producing any katakana, e.g.:

[WFST fragment omitted]

This fragment shows one kind of spelling variation in Japanese: long vowel sounds (oo) are usually written with a long vowel mark (ー) but are sometimes written with repeated katakana (オオ). We combined corpus analysis with guidelines from a Japanese textbook (Jorden and Chaplin, 1976) to turn up many spelling variations and unusual katakana symbols:
• the sound sequence (j i) is usually written ジ, but occasionally ヂ.
• (g u a) is usually グア, but occasionally グァ.
• (w o o) is variously ウォー, ウオー, or written with a special, old-style katakana for wo.
• (y e) may be イエ, イェ, or エ.
• (w i) is either ウィ or ウイ.
• (n y e) is a rare sound sequence, but is written ニェ when it occurs.
• (t y u) is rarer than (ch y u), but is written テュ when it occurs.
and so on.

4 And harmfully restrictive in their unsmoothed incarnations.
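The first of the two WFSTs above, the long-vowel merger, is simple enough to sketch directly. This is our own rendering as a plain function rather than a transducer; the input and output alphabets follow the sound inventory of Section 3.3.

```python
# Sketch (ours) of the long-vowel merging step: adjacent identical vowels
# become the long-vowel symbols aa, ii, uu, ee, oo before the second
# WFST converts sounds to katakana.
VOWELS = {"a", "i", "u", "e", "o"}

def merge_long_vowels(sounds):
    out, k = [], 0
    while k < len(sounds):
        if (k + 1 < len(sounds) and sounds[k] in VOWELS
                and sounds[k + 1] == sounds[k]):
            out.append(sounds[k] * 2)  # e.g. 'o','o' -> 'oo'
            k += 2
        else:
            out.append(sounds[k])
            k += 1
    return out

print(merge_long_vowels(["m", "o", "o", "t", "a", "a"]))
# ['m', 'oo', 't', 'aa']  -- ready for the sound-to-katakana WFST
```

Expressing this as a transducer rather than a function is what lets it compose with the neighboring models and run in either direction.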
Spelling variation is clearest in cases where an English word like switch shows up transliterated variously (スウィッチ, スイッチ, スィッチ) in different dictionaries. Treating these variations as an equivalence class enables us to learn general sound mappings even if our bilingual glossary adheres to a single narrow spelling convention. We do not, however, generate all katakana sequences with this model; for example, we do not output strings that begin with a subscripted vowel katakana. So this model also serves to filter out some ill-formed katakana sequences, possibly proposed by optical character recognition.

3.5 Katakana to OCR
Perhaps uncharitably, we can view optical character recognition (OCR) as a device that garbles perfectly good katakana sequences. Typical confusions made by our commercial OCR system involve several visually similar katakana pairs (the specific symbols are lost in reproduction). To generate pre-OCR text, we collected 19,500 characters worth of katakana words, stored them in a file, and printed them out. To generate post-OCR text, we OCR'd the printouts. We then ran the EM algorithm to determine symbol-mapping ("garbling") probabilities. Here is part of that table, with the katakana symbols themselves lost in reproduction: one symbol k is rendered correctly with probability P(o|k) = 0.492 and confused with similar symbols with probabilities 0.434, 0.042, and 0.011; another symbol is always read correctly (1.000); a third is read correctly with probability 0.964 and misread with probability 0.036. This model outputs a superset of the 81 katakana symbols, including spurious quote marks, alphabetic symbols, and the numeral 7.

4 Example
We can now use the models to do a sample back-transliteration. We start with a katakana phrase as observed by OCR. We then serially compose it with the models, in reverse order. Each intermediate stage is a WFSA that encodes many possibilities. The final stage contains all back-transliterations suggested by the models, and we finally extract the best one.

We start with the masutaazutoonamento problem from Section 1. Our OCR observes: マスクーズ・トーチメント. This string has two recognition errors: ク (ku) for タ (ta), and チ (chi) for ナ (na). We turn the string into a chained 12-state/11-arc WFSA and compose it with the P(k|o) model. This yields a fatter 12-state/15-arc WFSA, which accepts the correct spelling at a lower probability. Next comes the P(j|k) model, which produces a 28-state/31-arc WFSA whose highest-scoring sequence is:

m a s u t a a z u t o o ch i m e n t o

Next comes P(e|j), yielding a 62-state/241-arc WFSA whose best sequence is:

M AE S T AE AE DH UH T AO AO CH IH M EH N T AO
5 Experiments

We have performed two large-scale experiments, one using a full-language P(w) model, and one using a personal name language model.

In the first experiment, we extracted 1449 unique katakana phrases from a corpus of 100 short news articles. Of these, 222 were missing from an on-line 100,000-entry bilingual dictionary. We back-transliterated these 222 phrases. Many of the translations are perfect: technical program, sex scandal, omaha beach, new york times, ramon diaz. Others are close: tanya harding, nickel simpson, danger washington, world cap. Some miss the mark: nancy care again, plus occur, patriot miss real. While it is difficult to judge overall accuracy--some of the phrases are onomatopoetic, and others are simply too hard even for good human translators--it is easier to identify system weaknesses, and most of these lie in the P(w) model. For example, nancy kerrigan should be preferred over nancy care again.

In a second experiment, we took katakana versions of the names of 100 U.S. politicians, e.g.: ジョン・ブロー (jyon.buroo), アルフォンス・ダマト (a.ruhonsu.damato), and マイク・デワイン (maiku.dewain). We back-transliterated these by machine and asked four human subjects to do the same. These subjects were native English speakers and news-aware; we gave them brief instructions, examples, and hints. The results were as follows:

                                                            human   machine
    correct (e.g., spencer abraham / spencer abraham)        27%     64%
    phonetically equivalent, but misspelled
      (e.g., richard brian / richard bryan)                   7%     12%
    incorrect (e.g., olin hatch / omen hatch)                66%     24%

There is room for improvement on both sides. Being English speakers, the human subjects were good at English name spelling and U.S. politics, but not at Japanese phonetics. A native Japanese speaker might be expert at the latter but not the former. People who are expert in all of these areas, however, are rare.

On the automatic side, many errors can be corrected. A first-name/last-name model would rank richard bryan more highly than richard brian. A bigram model would prefer orren hatch over olin hatch. Other errors are due to unigram training problems. For example, "Long" occurs much more often than "Ron" in newspaper text, and our word selection does not exclude phrases like "Long Island." So we get long wyden instead of ron wyden. Rare errors are due to incorrect or brittle phonetic models.

Still the machine's performance is impressive. When word separators (・) are removed from the katakana phrases, rendering the task exceedingly difficult for people, the machine's performance is unchanged. When we use OCR, 7% of katakana tokens are mis-recognized, affecting 50% of test strings, but accuracy only drops from 64% to 52%.

6 Discussion

We have presented a method for automatic back-transliteration which, while far from perfect, is highly competitive. It also achieves the objectives outlined in Section 1. It ports easily to new language pairs; the P(w) and P(e|w) models are entirely reusable, while other models are learned automatically. It is robust against OCR noise, in a rare example of high-level language processing being useful (necessary, even) in improving low-level OCR.

We plan to replace our shortest-path extraction algorithm with one of the recently developed k-shortest path algorithms (Eppstein, 1994). We will then return a ranked list of the k best translations for subsequent contextual disambiguation, either by machine or as part of an interactive man-machine system. We also plan to explore probabilistic models for Arabic/English transliteration. Simply identifying which Arabic words to transliterate is a difficult task in itself; and while Japanese tends to insert extra vowel sounds, Arabic is usually written without any (short) vowels. Finally, it should also be possible to embed our phonetic shift model P(j|e) inside a speech recognizer, to help adjust for a heavy Japanese accent, although we have not experimented in this area.
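The planned k-best extraction can be approximated without Eppstein's machinery. The sketch below enumerates paths best-first from a priority queue; it is far less efficient than Eppstein's algorithm and restricts itself to simple paths, but it does return the k lowest-cost paths of a weighted graph, which is all a ranked list of translations requires. All names are illustrative, not from the paper:

```python
# Best-first enumeration of the k lowest-cost source->target paths.
# With non-negative edge costs, paths reach the target in
# nondecreasing cost order, so the first k completions are the answer.

import heapq
from itertools import count

def k_shortest_paths(graph, source, target, k):
    """graph: dict node -> list of (neighbor, cost); costs >= 0."""
    tie = count()                       # breaks ties without comparing paths
    frontier = [(0.0, next(tie), [source])]
    found = []
    while frontier and len(found) < k:
        cost, _, path = heapq.heappop(frontier)
        node = path[-1]
        if node == target:
            found.append((cost, path))
            continue
        for nxt, c in graph.get(node, []):
            if nxt not in path:         # keep paths simple (no revisits)
                heapq.heappush(frontier, (cost + c, next(tie), path + [nxt]))
    return found
```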
7 Acknowledgments

We would like to thank Alton Earl Ingram, Yolanda Gil, Bonnie Glover-Stalls, Richard Whitney, and Kenji Yamada for their helpful comments. We would also like to thank our sponsors at the Department of Defense.

References

M. Arbabi, S. M. Fischthal, V. C. Cheng, and E. Bart. 1994. Algorithms for Arabic name transliteration. IBM J. Res. Develop., 38(2).

L. E. Baum. 1972. An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process. Inequalities, 3.

E. W. Dijkstra. 1959. A note on two problems in connexion with graphs. Numerische Mathematik, 1.

David Eppstein. 1994. Finding the k shortest paths. In Proc. 35th Symp. Foundations of Computer Science. IEEE.

E. H. Jorden and H. I. Chaplin. 1976. Reading Japanese. Yale University Press, New Haven.

F. Pereira and M. Riley. 1996. Speech recognition by composition of weighted finite automata. In preprint, cmp-lg/9603001.

F. Pereira, M. Riley, and R. Sproat. 1994. Weighted rational transductions and their application to human language processing. In Proc. ARPA Human Language Technology Workshop.

J. Yamron, J. Cant, A. Demedts, T. Dietzel, and Y. Ito. 1994. The automatic component of the LINGSTAT machine-aided translation system. In Proc. ARPA Workshop on Human Language Technology.
Integrating Symbolic and Statistical Representations: The Lexicon Pragmatics Interface

Ann Copestake
Center for the Study of Language and Information, Stanford University, Ventura Hall, Stanford, CA 94305, USA
aac@csli.stanford.edu

Alex Lascarides
Centre for Cognitive Science and Human Communication Research Centre, University of Edinburgh, 2 Buccleuch Place, Edinburgh, EH8 9LW, Scotland, UK
alex@cogsci.ed.ac.uk

Abstract

We describe a formal framework for interpretation of words and compounds in a discourse context which integrates a symbolic lexicon/grammar, word-sense probabilities, and a pragmatic component. The approach is motivated by the need to handle productive word use. In this paper, we concentrate on compound nominals. We discuss the inadequacies of approaches which consider compound interpretation as either wholly lexico-grammatical or wholly pragmatic, and provide an alternative integrated account.

1 Introduction

When words have multiple senses, these may have very different frequencies. For example, the first two senses of the noun diet given in WordNet are:

1. (a prescribed selection of foods)
   => fare - (the food and drink that are regularly consumed)
2. => legislature, legislative assembly, general assembly, law-makers

Most English speakers will share the intuition that the first sense is much more common than the second, and that this is (partly) a property of the word and not its denotation, since near-synonyms occur with much greater frequency. Frequency differences are also found between senses of derived forms (including morphological derivation, zero-derivation and compounding). For example, canoe is less frequent as a verb than as a noun, and the induced action use (e.g., they canoed the kids across the lake) is much less frequent than the intransitive form (with location PP) (they canoed across the lake).¹ A derived form may become established with one meaning, but this does not preclude other uses in sufficiently marked contexts (e.g., Bauer's (1983) example of garbage man with an interpretation analogous to snowman).

Because of the difficulty of resolving lexical ambiguity, it is usual in NLP applications to exclude 'rare' senses from the lexicon, and to explicitly list frequent forms, rather than to derive them. But this increases errors due to unexpected vocabulary, especially for highly productive derivational processes. For this and other reasons it is preferable to assume some generative devices in the lexicon (Pustejovsky, 1995). Briscoe and Copestake (1996) argue that a differential estimation of the productivity of derivation processes allows an approximation of the probabilities of previously unseen derived uses. If more probable senses are preferred by the system, the proliferation of senses that results from unconstrained use of lexical rules or other generative devices is effectively controlled. An interacting issue is the granularity of meaning of derived forms. If the lexicon produces a small number of very underspecified senses for a wordform, the ambiguity problem is apparently reduced, but pragmatics may have insufficient information with which to resolve meanings, or may find impossible interpretations. We argue here that by utilising probabilities, a language-specific component can offer hints to a pragmatic module in order to prioritise and control the application of real-world reasoning to disambiguation.
The objective is an architecture utilising a general-purpose lexicon with domain-dependent probabilities. The particular issues we consider here are the integration of the statistical and symbolic components, and the division of labour between semantics and pragmatics in determining meaning. We concentrate on (right-headed) compound nouns, since these raise especially difficult problems for NLP system architecture (Sparck Jones, 1983).

¹Here and below we base our frequency judgements on semi-automatic analysis of the written portion of the tagged British National Corpus (BNC).

    Arzttermin            *doctor appointment    doctor's appointment
    Terminvorschlag       *date proposal         proposal for a date
    Terminvereinbarung    *date agreement        agreement on a date
    Januarhälfte          *January half          half of January
    Frühlingsanfang       *spring beginning      beginning of spring

    Figure 1: Some German compounds with non-compound translations

2 The grammar of compound nouns

Within linguistics, attempts to classify nominal compounds using a small fixed set of meaning relations (e.g., Levi (1978)) are usually thought to have failed, because there appear to be exceptions to any classification. Compounds are attested with meanings which can only be determined contextually. Downing (1977) discusses apple juice seat, uttered in a context in which it identifies a place-setting with a glass of apple juice. Even for compounds with established meanings, context can force an alternative interpretation (Bauer, 1983). These problems led to analyses in which the relationship between the parts of a compound is undetermined by the grammar, e.g., Dowty (1979), Bauer (1983). Schematically this is equivalent to the following rule, where R is undetermined (to simplify exposition, we ignore the quantifier for y):

(1)  N0 → N1 N2
     λx[P(x) ∧ Q(y) ∧ R(x, y)]    λy[Q(y)]    λx[P(x)]

Similar approaches have been adopted in NLP with further processing using domain restrictions to resolve the interpretation (e.g., Hobbs et al (1993)). However, this is also unsatisfactory, because (1) overgenerates and ignores systematic properties of various classes of compounds. Overgeneration is apparent when we consider translation of German compounds, since many do not correspond straightforwardly to English compounds (e.g., Figure 1). Since these exceptions are English-specific they cannot be explained via pragmatics. Furthermore they are not simply due to lexical idiosyncrasies: for instance, Arzttermin/*doctor appointment is representative of many compounds with human-denoting first elements, which require a possessive in English. So we get blacksmith's hammer and not *blacksmith hammer to mean 'hammer of a type conventionally associated with a blacksmith' (also driver's cab, widow's allowance etc). This is not the usual possessive: compare (((his blacksmith)'s) hammer) with (his (blacksmith's hammer)). Adjective placement is also restricted: three English blacksmith's hammers / *three blacksmith's English hammers. We treat these as a subtype of noun-noun compound with the possessive analysed as a case marker.

In another subcategory of compounds, the head provides the predicate (e.g., dog catcher, bottle crusher). Again, there are restrictions: it is not usually possible to form a compound with an agentive predicate taking an argument that normally requires a preposition (contrast water seeker with *water looker).
Stress assignment also demonstrates inadequacies in (1): compounds which have the interpretation 'Y made of X' (e.g., nylon rope, oak table) generally have main stress on the righthand noun, in contrast to most other compounds (Liberman and Sproat, 1992). Stress sometimes disambiguates meaning: e.g., with righthand stress cotton bag has the interpretation bag made of cotton while with leftmost stress an alternative reading, bag for cotton, is available. Furthermore, ordering of elements is restricted: e.g., cotton garment bag / *garment cotton bag.

The rule in (1) is therefore theoretically inadequate, because it predicts that all noun-noun compounds are acceptable. Furthermore, it gives no hint of likely interpretations, leaving an immense burden to pragmatics.

We therefore take a position which is intermediate between the two extremes outlined above. We assume that the grammar/lexicon delimits the range of compounds and indicates conventional interpretations, but that some compounds may only be resolved by pragmatics and that non-conventional contextual interpretations are always available. We define a number of schemata which encode conventional meanings. These cover the majority of compounds, but for the remainder the interpretation is left unspecified, to be resolved by pragmatics.

[Figure 2: Fragment of hierarchy of noun-noun compound schemata: general-nn subsumes possessive, made-of, purpose-patient and deverbal; purpose-patient in turn subsumes non-derived-pp (e.g. linen chest) and deverbal-pp (e.g. ice-cream container). The boxed nodes indicate actual schemata; other nodes are included for convenience in expressing generalisations.]

    All of the schemata below instantiate the rule
        N0 → N1 N2 : λx[P(x) ∧ Q(y) ∧ R(x, y)], with N1 : λy[Q(y)] and N2 : λx[P(x)],
    and differ in the relation R and the constraints on the daughters:

    general-nn:       R = /general-nn    N1: anything     N2: anything    /stressed
    made-of:          R = made-of        N1: substance    N2: physobj     /stressed
    purpose-patient:  R = TELIC(N2)      N1: anything     N2: artifact

    Figure 3: Details of some schemata for noun-noun compounds. / indicates that the value to its right is default information.

Space limitations preclude detailed discussion but Figures 2 and 3 show a partial default inheritance hierarchy of schemata (cf., Jones (1995)).² Multiple schemata may apply to a single compound: for example, cotton bag is an instantiation of the made-of schema, the non-derived-purpose-patient schema and also the general-nn schema. Each applicable schema corresponds to a different sense: so cotton bag is ambiguous rather than vague. The interpretation of the hierarchy is that the use of a more general schema implies that the meanings given by specific subschemata are excluded, and thus we have the following interpretations for cotton bag:

1. λx[cotton(y) ∧ bag(x) ∧ made-of(y, x)]
2. λx[cotton(y) ∧ bag(x) ∧ TELIC(bag)(y, x)] = λx[cotton(y) ∧ bag(x) ∧ contain(y, x)]
3. λx[R(y, x) ∧ ¬(made-of(y, x) ∨ contain(y, x) ∨ ...)]

²We formalise this with typed default feature structures (Lascarides et al, 1996). Schemata can be regarded formally as lexical/grammar rules (lexical rules and grammar rules being very similar in our framework) but inefficiency due to multiple interpretations is avoided in the implementation by using a form of packing.

The predicate made-of is to be interpreted as material constituency (e.g. Link (1983)). We follow Johnston and Busa (1996) in using Pustejovsky's (1995) concept of telic role to encode the purpose of an artifact. These schemata give minimal indications of compound semantics: it may be desirable to provide more information (Johnston et al, 1995), but we will not discuss that here.
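To make the schemata concrete, here is a small Python illustration of the made-of and purpose-patient schemata of Figure 3 applied to cotton bag. The lexical classes and the telic value of bag are stipulated for the example, and the underspecified general-nn reading (with its negated disjunction) is simplified to a plain relation; none of this is the authors' implementation:

```python
# Toy lexical entries; the semantic classes and telic roles are
# stipulated for this illustration.
LEX = {
    'cotton': {'class': 'substance', 'pred': 'cotton'},
    'bag':    {'class': 'physobj', 'artifact': True,
               'telic': 'contain', 'pred': 'bag'},
}

def interpretations(n1, n2):
    """Candidate relations R for the compound n1+n2 under the Figure 3
    schemata (general-nn simplified: the paper's version leaves R
    underspecified and excludes the more specific relations)."""
    e1, e2 = LEX[n1], LEX[n2]
    rels = []
    if e1['class'] == 'substance' and e2['class'] == 'physobj':
        rels.append('made-of')           # made-of schema
    if e2.get('artifact'):
        rels.append(e2['telic'])         # purpose-patient: R = TELIC(N2)
    rels.append('general-nn')            # always available
    return [f"λx[{e1['pred']}(y) ∧ {e2['pred']}(x) ∧ {r}(y, x)]"
            for r in rels]

print(interpretations('cotton', 'bag'))
# ['λx[cotton(y) ∧ bag(x) ∧ made-of(y, x)]',
#  'λx[cotton(y) ∧ bag(x) ∧ contain(y, x)]',
#  'λx[cotton(y) ∧ bag(x) ∧ general-nn(y, x)]']
```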
Established compounds may have idiosyncratic interpretations or inherit from one or more schemata (though compounds with multiple established senses due to ambiguity in the relationship between constituents rather than lexical ambiguity are fairly unusual). But established compounds may also have unestablished interpretations, although, as discussed in §3, these will have minimal probabilities. In contrast, an unusual compound, such as apple-juice seat, may only be compatible with general-nn, and would be assigned the most underspecified interpretation. As we will see in §4, this means pragmatics must find a contextual interpretation. Thus, for any compound there may be some context in which it can be interpreted, but in the absence of a marked context, only compounds which instantiate one of the subschemata are acceptable.

3 Encoding Lexical Preferences

In order to help pragmatics select between the multiple possible interpretations, we utilise probabilities. For an established form, derived or not, these depend straightforwardly on the frequency of a particular sense. For example, in the BNC, diet has probability of about 0.9 of occurring in the food sense and 0.005 in the legislature sense (the remainder are metaphorical extensions, e.g., diet of crime). Smoothing is necessary to avoid giving a zero probability to possible senses which are not found in a particular corpus. For derived forms, the applicable lexical rules or schemata determine possible senses (Briscoe and Copestake, 1996). Thus for known compounds, probabilities of established senses depend on corpus frequencies but a residual probability is distributed between unseen interpretations licensed by schemata, to allow for novel uses. This distribution is weighted to allow for productivity differences between schemata. For unseen compounds, all probabilities depend on schema productivity. Compound schemata range from the non-productive (e.g., the verb-noun pattern exemplified by pickpocket), to the almost fully productive (e.g., made-of), with many schemata being intermediate (e.g., has-part: 4-door car is acceptable but the apparently similar *sunroof car is not). We use the following estimate for productivity (adapted from Briscoe and Copestake (1996)):

    Prod(cmp-schema) = (M + 1) / N

(where N is the number of pairs of senses which match the schema input and M is the number of attested two-noun output forms -- we ignore compounds with more than two nouns for simplicity). Formulae for calculating the unseen probability mass and for allocating it differentially according to schema productivity are shown in Figure 4. Finer-grained, more accurate productivity estimates can be obtained by considering subsets of the possible inputs -- this allows for some real-world effects (e.g., the made-of schema is unlikely for liquid/physical-artifact compounds).

    Unseen-prob-mass(cmp-form) =
        number-of-applicable-schemata(cmp-form) / (freq(cmp-form) + number-of-applicable-schemata(cmp-form))

    Estimated-freq(interpretation_i with cmp-form_j) =
        Unseen-prob-mass(cmp-form_j) × Prod(cs_i) / (Prod(cs_1) + ... + Prod(cs_n))

    (where cs_1 ... cs_n are the compound schemata needed to derive the n unattested entries for the form j)

    Figure 4: Probabilities for unseen compounds: adapted from Briscoe and Copestake (1996)

Lexical probabilities should be combined to give an overall probability for a logical form (LF): see e.g., Resnik (1992).
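The productivity estimate and the Figure 4 formulae transcribe directly into code. The sketch below is a literal rendition with invented function names; for simplicity it conflates the number of applicable schemata with the number of unattested interpretations, a distinction the figure keeps separate:

```python
# Direct transcription (for illustration) of Prod and the Figure 4
# formulae.  Inputs are toy counts, not corpus figures.

def prod(m_attested, n_input_pairs):
    """Prod(cmp-schema) = (M + 1) / N."""
    return (m_attested + 1) / n_input_pairs

def unseen_prob_mass(freq, n_applicable_schemata):
    """Residual probability reserved for unattested interpretations."""
    return n_applicable_schemata / (freq + n_applicable_schemata)

def estimated_freqs(freq, schema_prods):
    """Split the unseen mass over the unattested interpretations in
    proportion to the productivity of the schemata deriving them
    (here the applicable schemata and the unattested-deriving
    schemata are assumed to coincide)."""
    mass = unseen_prob_mass(freq, len(schema_prods))
    total = sum(schema_prods)
    return [mass * p / total for p in schema_prods]
```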
But we will ignore this combination step here and assume pragmatics has to distinguish between alternatives which differ only in the sense assigned to one compound. (2) shows possible interpretations for cotton bag with associated probabilities. LFs are encoded in DRT. The probabilities given here are based on productivity figures for fabric/container compounds in the BNC, using WordNet as a source of semantic categories. Pragmatics screens the LFs for acceptability. If an LF contains an underspecified element (e.g., arising from general-nn), this must be instantiated by pragmatics from the discourse context.

(2) a. Mary put a skirt in a cotton bag.
    b. [e, x, y, z, w, t, now | mary(x), skirt(y), cotton(w), bag(z), put(e, x, y, z), hold(e, t), t ≺ now, made-of(z, w)]
       P = 0.84
    c. [e, x, y, z, w, t, now | mary(x), skirt(y), cotton(w), bag(z), put(e, x, y, z), hold(e, t), t ≺ now, contain(z, w)]
       P = 0.14
    d. [e, x, y, z, w, t, now | mary(x), skirt(y), cotton(w), bag(z), put(e, x, y, z), hold(e, t), t ≺ now, Rc(z, w), Rc = ?, ¬(made-of(z, w) ∨ contain(z, w) ∨ ...)]
       P = 0.02

4 SDRT and the Resolution of Underspecified Relations

The frequency information discussed in §3 is insufficient on its own for disambiguating compounds. Compounds like apple juice seat require marked contexts to be interpretable. And some discourse contexts favour interpretations associated with less frequent senses. In particular, if the context makes the usual meaning of a compound incoherent, then pragmatics should resolve the compound to a less frequent but conventionally licensed meaning, so long as this improves coherence. This underlies the distinct interpretations of cotton bag in (3) vs. (4):

(3) a. Mary sorted her clothes into various large bags.
    b. She put her skirt in the cotton bag.

(4) a. Mary sorted her clothes into various bags made from plastic.
    b. She put her skirt into the cotton bag.

If the bag in (4b) were interpreted as being made of cotton--in line with the (statistically) most frequent sense of the compound--then the discourse becomes incoherent because the definite description cannot be accommodated into the discourse context. Instead, it must be interpreted as having the (less frequent) sense given by purpose-patient; this allows the definite description to be accommodated and the discourse is coherent.

In this section, we'll give a brief overview of the theory of discourse and pragmatics that we'll use for modelling this interaction during disambiguation between discourse information and lexical frequencies. We'll use Segmented Discourse Representation Theory (SDRT) (e.g., Asher (1993)) and the accompanying pragmatic component Discourse in Commonsense Entailment (DICE) (Lascarides and Asher, 1993). This framework has already been successful in accounting for other phenomena on the interface between the lexicon and pragmatics, e.g., Asher and Lascarides (1995), Lascarides and Copestake (1995), Lascarides, Copestake and Briscoe (1996).

SDRT is an extension of DRT (Kamp and Reyle, 1993), where discourse is represented as a recursive set of DRSs representing the clauses, linked together with rhetorical relations such as Elaboration and Contrast, cf. Hobbs (1985), Polanyi (1985). Building an SDRS involves computing a rhetorical relation between the representation of the current clause and the SDRS built so far. DICE specifies how various background knowledge resources interact to provide clues about which rhetorical relation holds.
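Anticipating the rule Prefer Frequent Senses of Section 5, the intended interaction between sense frequencies and SDRS update has a simple procedural shape: try candidate DRSs in order of decreasing probability and keep the first well-defined update. A minimal sketch, assuming an update function that returns None when the result is not well-defined; the framing and all names here are illustrative, not the paper's:

```python
# 'update' stands in for the SDRT Update function computed via DICE;
# it is assumed to return None when the result is not well-defined
# (unresolved anaphora, or no rhetorical relation inferable).

def disambiguate(context, alpha, readings, update):
    """readings: list of (drs, probability) pairs for the new clause."""
    for drs, _p in sorted(readings, key=lambda r: -r[1]):
        result = update(context, alpha, drs)
        if result is not None:      # well-defined: accept this sense
            return result
    return None                     # incoherent discourse

# e.g. for (2): readings = [(drs_made_of, 0.84), (drs_contain, 0.14),
#                           (drs_general, 0.02)]
```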
The rules in DICE include default conditions of the form P > Q, which means If P, then normally Q. For example, Elaboration states: if β is to be attached to α with a rhetorical relation, where α is part of the discourse structure τ already (i.e., ⟨τ, α, β⟩ holds), and β is a subtype of α--which by Subtype means that β's event is a subtype of α's, and the individual filling some role θi in β is a subtype of the one filling the same role in α--then normally, α and β are attached together with Elaboration (Asher and Lascarides, 1995). The Coherence Constraint on Elaboration states that an elaborating event must be temporally included in the elaborated event.

• Subtype: (θα(eα, ξ1) ∧ θβ(eβ, ξ2) ∧ e-condnβ ⊑ e-condnα ∧ ξ2 ⊑ ξ1) → Subtype(β, α)
• Elaboration: (⟨τ, α, β⟩ ∧ Subtype(β, α)) > Elaboration(α, β)
• Coherence Constraint on Elaboration: Elaboration(α, β) → eβ ⊆ eα

Subtype and Elaboration encapsulate clues about rhetorical structure given by knowledge of subtype relations among events and objects. The Coherence Constraint on Elaboration constrains the semantic content of constituents connected by Elaboration in coherent discourse.

A distinctive feature of SDRT is that if the DICE axioms yield a nonmonotonic conclusion that the discourse relation is R, and information that's necessary for the coherence of R isn't already in the constituents connected with R (e.g., Elaboration(α, β) is nonmonotonically inferred, but eβ ⊆ eα is not in α or in β), then this content can be added to the constituents in a constrained manner through a process known as SDRS Update. Informally, Update(τ, α, β) is an SDRS which includes (a) the discourse context τ, plus (b) the new information β, and (c) an attachment of β to α (which is part of τ) with a rhetorical relation R that's computed via DICE, where (d) the content of τ, α and β are modified so that the coherence constraints on R are met.³ Note that this is more complex than DRT's notion of update. Update models how interpreters are allowed and expected to fill in certain gaps in what the speaker says: in essence affecting semantic content through context and pragmatics. We'll use this information flow between context and semantic content to reason about the semantic content of compounds in discourse: simply put, we will ensure that words are assigned the most frequent possible sense that produces a well-defined SDRS Update.

³If R's coherence constraints can't be inferred, then the logic underlying DICE guarantees that R won't be nonmonotonically inferred.

An SDRS S is well-defined (written ↓S) if there are no conditions of the form x = ? (i.e., there are no unresolved anaphoric elements), and every constituent is attached with a rhetorical relation. A discourse is incoherent if ¬↓Update(τ, α, β) holds for every available attachment point α in τ. That is, anaphora can't be resolved, or no rhetorical connection can be computed via DICE. For example, the representations of (5a,b) (in simplified form) are respectively α and β:

(5) a. Mary put her clothes into various large bags.
       α: [x, Y, Z, eα, tα, n | mary(x), clothes(Y), bag(Z), put(eα, x, Y, Z), hold(eα, tα), tα ≺ n]
    b. She put her skirt into the bag made out of cotton.
       β: [x, y, z, w, eβ, tβ, n, u, B | mary(x), skirt(y), bag(z), cotton(w), made-of(z, w), u = ?, B(u, z), B = ?, put(eβ, x, y, z), hold(eβ, tβ), tβ ≺ n]
B =?, put(e~,x,y,z), hold(e2,to), t~ -< n In words, the conditions in '3 require the object denoted by the definite description to be linked by some 'bridging' relation B (possibly identity, cf. van der Sandt (1992)) to an object v identi- fied in the discourse context (Asher and Lascarides. 1996). In SDRT. the values of u and B are com- puted as a byproduct of SDRT'5 Update function (cf. Hobbs (1979)); one specifies v and B by inferring the relevant new semantic content arising from R~s coherence constraints, where R is the rhetorical rela- tion inferred via the DICE axioms. If one cannot re- soh'e the conditions u =? or B =? through SDnS up- da~e. then by the above definition of well-definedness on SDRSS the discourse is incoherent (and we have presupposition failure). The detailed analysis of (3) and (52) involve rea- soning about the values of v and B. But for rea- sons of space, we gloss over the details given in Asher and Lascarides (1996) for specifying u and B through the SDRT update procedure. However. the axiom Assume Coherence below is derivable from the axioms given there. First some notation: let 3[C] mean that ~ contains condition C. a~d assume that 3[C/C'] stands for the SDRS which is the same as 3. save that the condition C in 3 is replaced by C'. Then in words, Assume Coherence stipulates that if the discourse can be coherent only if the anaphor u is resolved to x and B is resolved to the specific re- lation P, then one monotonically assumes that they are resoh,ed this way: • Assume Coherence: (J~ Update(z,a,B[u -.,-7 B =?/u = x.B, = P]) A (C' # (,~ = z ^ B = P) -~ $ Update( 7", a, ~[u =?.B =?/C']))) -~ ( Update(z, a, ~) Update( v, a, 3[u =?,B =?/u = x,B = P])) Intuitively, it should be clear that in (Sa.b) -, $ Update(a, a, 3) holds, unless the bag in (5b) is one of the bags mentioned in (5a)--i.e, u = Z and B = member-of For otherwise the events in (5) are too "disconnected" to support ant" rhetorical re- lation. On the other hand. assigning u and B these values allows us to use Subtype and Elaboration to infer Elaboration (because skirt is a kind of cloth- ing, and the bag in (Sb) is one of the bags in (5a)). So Assume Coherence, Subtype and Elaboration yield that (Sb) elaborates (Sa) and the bag in (5b) is one of the bags in (5a). Applying SDRT tO compounds encodes the ef- fects of pragmatics on the compounding relation. For example, to reflect the fact that compounds such as apple juice seat, which are compatible only with general-nn, are acceptable only when context resoh'es the compound relation, we as- sume that the DRS conditions produced by this schema are: Rc(y,x), Rc -.,-7 and -,(made-o/(y.x) V contain(y, x) V...). By the above definition of well- definedness on SDRSS, the compound is coherent only if we can resoh,e Rc to a particular relation via the SDRT Update function, which in turn is determined by DICE. Rules such as Assume Coherence serve to specify the necessary compound relation, so long as context provides enough information. 5 Integrating Lexical Preferences and Pragmatics \Ve now extend SDRT and DICE to handle the prob- abilistic information given in §3. We want the prag- matic component to utilise this knowledge, while still maintaining sufficient flexibility that less fre- quent senses are favoured in certain discourse con- texts. Suppose that the new information to be in- tegrated with the discourse context is ambigu- ous between ~1 .... ,Bn. Then we assume that exactly one of Update(z.a,~,). ] < i <_ n. holds. 
We gloss this complex disjunctive formula as ⋁1≤i≤n(Update(τ, α, βi)). Let βk ≻ βj mean that the probability of DRS βk is greater than that of βj. Then the rule schema below ensures that the most frequent possible sense that produces discourse coherence is (monotonically) favoured:

• Prefer Frequent Senses:
  (⋁1≤i≤n(Update(τ, α, βi)) ∧ ↓Update(τ, α, βj) ∧ (βk ≻ βj → ¬↓Update(τ, α, βk)))
  → Update(τ, α, βj)

Prefer Frequent Senses is a declarative rule for disambiguating constituents in a discourse context. But from a procedural perspective it captures: try to attach the DRS based on the most probable senses first; if it works you're done; if not, try the next most probable sense, and so on.

Let's examine the interpretation of compounds. Consider (3). Let's consider the representation β′ of (3b) with the highest probability: i.e., the one where cotton bag means bag made of cotton. Then similarly to (5), Assume Coherence, Subtype and Elaboration are used to infer that the cotton bag is one of the bags mentioned in (3a) and Elaboration holds. Since this updated SDRS is well-defined, Prefer Frequent Senses ensures that it's true. And so cotton bag means bag made from cotton in this context.

Contrast this with (4). Update(α, α, β′) is not well-defined because the cotton bag cannot be one of the bags in (4a). On the other hand, Update(α, α, β″) is well-defined, where β″ is the DRS where cotton bag means bag containing cotton. This is because one can now assume this bag is one of the bags mentioned in (4a), and therefore Elaboration can be inferred as before. So Prefer Frequent Senses ensures that Update(α, α, β″) holds but Update(α, α, β′) does not.

Prefer Frequent Senses is designed for reasoning about word senses in general, and not just the semantic content of compounds: it predicts diet has its food sense in (6b) in isolation of the discourse context (assuming Update(∅, ∅, β) = β), but it has the law-maker sense in (6), because SDRT's coherence constraints on Contrast (Asher, 1993)--which is the relation required for Update because of the cue word but--can't be met when diet means food.

(6) a. In theory, there should be cooperation between the different branches of government.
    b. But the president hates the diet.

In general, pragmatic reasoning is computationally expensive, even in very restricted domains. But the account of disambiguation we've offered circumscribes pragmatic reasoning as much as possible. All nonmonotonic reasoning remains packed into the definition of Update(τ, α, β), where one needs pragmatic reasoning anyway for inferring rhetorical relations. Prefer Frequent Senses is a monotonic rule, it doesn't increase the load on nonmonotonic reasoning, and it doesn't introduce extra pragmatic machinery peculiar to the task of disambiguating word senses. Indeed, this rule offers a way of checking whether fully specified relations between compounds are acceptable, rather than relying on (expensive) pragmatics to compute them.

We have mixed stochastic and symbolic reasoning. Hobbs et al (1993) also mix numbers and rules by means of weighted abduction. However, the theories differ in several important respects. First, our pragmatic component has no access to word forms and syntax (and so it's not language specific), whereas Hobbs et al's rules for pragmatic interpretation can access these knowledge sources. Second, our probabilities encode the frequency of word senses associated with word forms.
In contrast, the weights that guide abduction correspond to a wider variety of information, and do not necessarily correspond to word sense/form frequencies. Indeed, it is unclear what meaning is conveyed by the weights, and consequently the means by which they can be computed are not well understood.

6 Conclusion

We have demonstrated that compound noun interpretation requires the integration of the lexicon, probabilistic information and pragmatics. A similar case can be made for the interpretation of morphologically-derived forms and words in extended usages. We believe that the proposed architecture is theoretically well-motivated, but also practical, since large-scale semi-automatic acquisition of the required frequencies from corpora is feasible, though admittedly time-consuming. However, further work is required before we can demonstrate this, in particular to validate or revise the formulae in §3 and to further develop the compound schemata.

7 Acknowledgements

The authors would like to thank Ted Briscoe and three anonymous reviewers for comments on previous drafts. This material is in part based upon work supported by the National Science Foundation under grant number IRI-9612682 and ESRC (UK) grant number R000236052.
Copes- take (1996) 'Persistent and Order Independent Typed Default Unification', Linguistics and Phi- losophy, voi.19:1, 1-89. Lascarides, A. and A. Copestake (in press) 'Prag- matics and Word Meaning', Journal of Linguis- tics, Lascarides, A., A. Copestake and E. J. Briscoe (1996) 'Ambiguity and Coherence', Journal of Se- mantics, vol.13.1, 41-65. Levi, J. (1978) The syntax and semantics of complex nominals, Academic Press, New York. Liberman, M. and R. Sproat (1992) 'The stress and structure of modified noun phrases in English' in I.A. Sag and A. Szabolsci (eds.), Lexical matters, CSLI Publications, pp. 131-182. Link, G. (1983) 'The logical analysis of plurals and mass terms: a lattice-theoretical approach' in Bguerle, Schwarze and von Stechow (eds.), Meaning, use and interpretation of language, de Gruyter, Berlin, pp. 302-323. Polanyi, L. (1985) 'A Theory of Discourse Structure and Discourse Coherence', Proceedings of the Pa- pers from the General Session at the Twenty-First Regional Meeting of the Chicago Linguistics Soci- ety, Chicago, pp. 25-27. Pustejovsky, J. (1995) The Generative Lexicon, MIT Press, Cambridge, MA. Resnik, P. (1992) 'Probabilistic Lexicalised Tree Ad- joining Grammar', Proceedings of the Coling92, Nantes, France. van der Sandt, R. (1992) 'Presupposition Projection as Anaphora Resolution', Journal of Semantics, voi.19.4, Sparck Jones, K. (1983) 'So what about parsing com- pound nouns?' in K. Sparck Jones and Y. Wilks (eds.), Automatic natural language parsing, Ellis Horwood, Chichester, England, pp. 164-168. Webber, B. (1991) 'Structure and Ostension in the Interpretation of Discourse Deixis', Language and Cognitive Processes, vol. 6.2, 107-135. 143 | 1997 | 18 |
Negative Polarity Licensing at the Syntax-Semantics Interface

John Fry
Stanford University and Xerox PARC
Dept. of Linguistics, Stanford University, Stanford, CA 94305-2150, USA
fry@csli.stanford.edu

Abstract

Recent work on the syntax-semantics interface (see e.g. (Dalrymple et al., 1994)) uses a fragment of linear logic as a 'glue language' for assembling meanings compositionally. This paper presents a glue language account of how negative polarity items (e.g. ever, any) get licensed within the scope of negative or downward-entailing contexts (Ladusaw, 1979), e.g. Nobody ever left. This treatment of licensing operates precisely at the syntax-semantics interface, since it is carried out entirely within the interface glue language (linear logic). In addition to the account of negative polarity licensing, we show in detail how linear-logic proof nets (Girard, 1987; Gallier, 1992) can be used for efficient meaning deduction within this 'glue language' framework.

1 Background

A recent strain of research on the interface between syntax and semantics, starting with (Dalrymple et al., 1993), uses a fragment of linear logic as a 'glue language' for assembling the meaning of a sentence compositionally. In this approach, meaning assembly is guided not by a syntactic constituent tree but rather by the flatter functional structure (the LFG f-structure) of the sentence.

As a brief review of this approach, consider sentence (1):

(1) Everyone left.

    f: [PRED 'LEAVE', SUBJ g: [PRED 'EVERYONE']]

Each word in the sentence is associated with a 'meaning constructor' template, specified in the lexicon; these meaning constructors are then instantiated with values from the f-structure. For sentence (1), this produces two premises of the linear logic glue language:

    everyone: (gσ↝e x ⊸ Hσ↝t S(x)) ⊸ Hσ↝t every(person, S)
    left:     gσ↝e X ⊸ fσ↝t leave(X)

In the everyone premise the higher-order variable S ranges over the possible scope meanings of the quantifier, with lower-case x acting as a traditional first-order variable "placeholder" within the scope. H ranges over LFG structures corresponding to the meaning of the entire generalized quantifier.¹

A meaning for (1) can be derived by applying the linear version of modus ponens, during which (unlike classical logic) the first premise everyone "consumes" the second premise left. This deduction, along with the substitutions H ↦ fσ, X ↦ x and S ↦ λx.leave(x), produces the final meaning fσ↝t every(person, λx.leave(x)), which is in this simple case the only reading for the sentence. One advantage of this deductive style of meaning assembly is that it provides an elegant account of quantifier scoping: each possible scope has a corresponding proof, obviating the need for quantifier storage.

¹Here we have simplified the notation of Dalrymple et al. somewhat, for example by stripping away the universal quantifier operators from the variables. In this regard, note that the lower-case variables stand for arbitrary constants rather than particular terms, and generally are given limited scope within the antecedent of the premise. Upper-case variables are Prolog-like variables that become instantiated to specific terms within the proof, and generally their scope is the entire premise.
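The deduction can be written out as a single application of ⊸-elimination (linear modus ponens). The LaTeX fragment below is a sketch in ordinary natural-deduction format with the substitutions already applied; it is a rendering of the step described above, not notation taken from Dalrymple et al.:

```latex
% Requires amsmath and amssymb (\multimap, \leadsto).  The left-hand
% premise is left with X |-> x; the right-hand premise is everyone
% with H |-> f_sigma and S |-> \lambda x.leave(x).  One -oE step
% consumes both premises and yields the sentence meaning.
\[
\frac{g_\sigma\!\leadsto_e\! x \multimap f_\sigma\!\leadsto_t\! \mathit{leave}(x)
      \qquad
      (g_\sigma\!\leadsto_e\! x \multimap f_\sigma\!\leadsto_t\! \mathit{leave}(x))
      \multimap f_\sigma\!\leadsto_t\! \mathit{every}(\mathit{person},\lambda x.\mathit{leave}(x))}
     {f_\sigma\!\leadsto_t\! \mathit{every}(\mathit{person},\lambda x.\mathit{leave}(x))}
\;[\multimap E]
\]
```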
2 Meaning deduction via proof nets

A proof net (Girard, 1987) is an undirected, connected graph whose node labels are propositions. A theorem of multiplicative linear logic corresponds to only one proof net; thus the manipulation of proof nets is more efficient than sequent deduction, in which the same theorem might have different proofs corresponding to different orderings of the inference steps. A further advantage of proof nets for our purposes is that an invalid meaning deduction, e.g. one corresponding to some spurious scope reading of a particular sentence, can be illustrated by exhibiting its defective graph which demonstrates visually why no proof exists for it. Proof net techniques have also been exploited within the categorial grammar community, for example for reasons of efficiency (Morrill, 1996) and in order to give logical descriptions of certain syntactic phenomena (Lecomte and Retoré, 1995).

In this section we construct a proof net from the premises for sentence (1), showing how to apply higher-order unification to the meaning terms in the process. We then review the O(n²) algorithm of Gallier (1992) for propositional (multiplicative) linear logic which checks whether a given proof net is valid, i.e. corresponds to a proof. The complete process for assembling a meaning from its premises will be shown in four steps: (1) rewrite the premises in a normalized form, (2) assemble the premises into a graph, (3) connect together the positive ("producer") and negative ("consumer") meaning terms, unifying them in the process, and (4) test whether the resulting graph encodes a proof.

2.1 Step 1: set up the sequent

Since our goal is to derive, from the premises of sentence (1), a meaning M for the f-structure f of the entire sentence, what we seek is a proof of the form everyone ⊗ left ⊢ fσ↝t M.

Glue language semantics has so far been restricted to the multiplicative fragment of linear logic, which uses only the multiplicative conjunction operator ⊗ (tensor) and the linear implication operator ⊸. The same fragment is obtained by replacing ⊸ with the operators ⅋ and ⊥, where ⅋ (par) is the multiplicative 'or'² and ⊥ is linear negation and (A ⊸ B) ≡ (A⊥ ⅋ B). Using the version without ⊸, we normalize two-sided sequents of the form A1, ..., Am ⊢ B1, ..., Bn into right-sided sequents of the form ⊢ A1⊥, ..., Am⊥, B1, ..., Bn. (In sequent representations of this style, the comma represents ⊗ on the left side of the sequent and ⅋ on the right side.) In our new format, then, the proof takes the form ⊢ everyone⊥, left⊥, fσ↝t M.

The proof net further requires that sequents be in negation normal form, in which negation is applied only to atomic terms.³ Moving the negations inward (the usual double-negation and 'de Morgan' properties hold), and displaying the full premises, we obtain the normalized sequent

    ⊢ ((gσ↝e x)⊥ ⅋ Hσ↝t S(x)) ⊗ (Hσ↝t every(person, S))⊥, gσ↝e X ⊗ (fσ↝t leave(X))⊥, fσ↝t M.

2.2 Step 2: create the graph

The next step is to create a graph whose nodes consist of all the terms which occur in the sequent. That is, a node is created for each literal C and for each negated literal C⊥; a node is created for each compound term A ⊗ B or A ⅋ B; and nodes are also created for its subterms A and B. Then, for each node of the form A ⅋ B, we draw a soft edge in the form of a horizontal dashed line connecting it to nodes A and B.
For each node of the form A ⊗ B, we draw a hard edge (solid line) connecting it to nodes A and B. For the example at hand, this produces the graph in Figure 1 (ignoring the curved edges at the top).

[Figure 1: Proof net for Everyone left.]

²This notation is Gallier's (1992).
³Note that we refer to noncompound terms as 'literal' or 'atomic' terms because they are atomic from the point of view of the glue language, even though these terms are in fact of the form S↝M, where S is an expression over LFG structures and M is a type-τ expression in the meaning language.

2.3 Step 3: connect the literals

The final step in assembling the proof net is to connect together the literal nodes at the top of the graph. It is at this stage that unification is applied to the variables in order to assign them the values they will assume in the final meaning. Each different way of connecting the literals and instantiating their variables corresponds to a different reading for the sentence.

For each literal, we draw an edge connecting it to a matching literal of opposite sign; i.e. each literal A is connected to a literal B⊥ where A unifies with B. Every literal in the graph must be connected in this way. If for some literal A there exists no matching literal B of opposite sign then the graph does not encode a proof and the algorithm fails.

In this process the unifications apply to whole expressions of the form S↝M, including both variables over LFG structures and variables over meaning terms. For the meaning terms, this requires a limited higher-order unification scheme that produces the unifier λx.p(x) from a second-order term T and a first-order term p(x). As noted by Dalrymple et al. (to appear), all the apparatus that is required for their simple intensional meaning language falls within the decidable Lλ fragment of Miller (1990), and therefore can be implemented as an extension of a first-order unification scheme such as that of Prolog.

For the example at hand, there is only one way to connect the literals (and hence at most one reading for the sentence), as shown in Figure 1. At this stage, the unifications would bind the variables in Figure 1 as follows: X ↦ x, H ↦ fσ, S ↦ λx.leave(x), M ↦ every(person, λx.leave(x)).

2.4 Step 4: test the graph for validity

Finally, we apply Gallier's (1992) algorithm to the connected graph in order to check that it corresponds to a proof. This algorithm recursively decomposes the graph from the bottom up while checking for cycles. Here we present the algorithm informally; for proofs of its correctness and O(n²) time complexity see (Gallier, 1992).

Base case: If the graph consists of a single link between literals A and A⊥, the algorithm succeeds and the graph corresponds to a proof.

Recursive case 1: Begin the decomposition by deleting the bottom-level par nodes. If there is some terminal node A ⅋ B connected to higher nodes A and B, delete A ⅋ B. This of course eliminates the dashed edge from A ⅋ B to A and to B, but does not remove nodes A and B. Then run the algorithm on the resulting smaller (possibly unconnected) graph.

Recursive case 2: Otherwise, if no terminal par node is available, find a terminal tensor node to delete. This case is more complicated because not every way of deleting a tensor node necessarily leads to success, even for a valid proof net. Just choose some terminal tensor node A ⊗ B. If deleting that node results in a single, connected (i.e. cyclic) graph, then that node was not a valid splitting tensor and a different one must be chosen instead, or else halt with failure if none is available. Otherwise, delete A ⊗ B, which leaves nodes A and B belonging to two unconnected graphs G1 and G2. Then run the algorithm on G1 and G2.
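A compact Python rendition of this procedure is given below. The graph encoding (node kinds, a subterm table, and a single edge set covering axiom links, soft edges and hard edges) is invented for the sketch, and the code commits to the first splitting tensor it finds rather than backtracking, which suffices because in a valid net any splitting tensor will do:

```python
# Sketch of the validity check above.  kind[n] is 'lit', 'par' or
# 'tensor'; children[n] = (a, b) gives the two subterm nodes of a
# compound node; edges lists undirected pairs (axiom links plus soft
# and hard edges).  'nodes' must be a set.

def components(nodes, edges):
    """Connected components of the undirected graph (nodes, edges)."""
    remaining, comps = set(nodes), []
    while remaining:
        seen, todo = set(), [next(iter(remaining))]
        while todo:
            cur = todo.pop()
            if cur in seen:
                continue
            seen.add(cur)
            todo += [b if a == cur else a
                     for a, b in edges
                     if cur in (a, b) and (b if a == cur else a) in remaining]
        comps.append(seen)
        remaining -= seen
    return comps

def valid(nodes, kind, children, edges):
    subterms = {c for n in nodes if n in children
                  for c in children[n] if c in nodes}
    terminals = [n for n in nodes if kind[n] != 'lit' and n not in subterms]
    if not terminals:                    # base case: a single axiom link
        return len(nodes) == 2 and len(edges) == 1
    def without(n):
        return nodes - {n}, [e for e in edges if n not in e]
    def recurse(ns, es):
        return all(valid(c, kind, children,
                         [e for e in es if set(e) <= c])
                   for c in components(ns, es))
    for n in terminals:                  # recursive case 1: par node
        if kind[n] == 'par':
            return recurse(*without(n))
    for n in terminals:                  # recursive case 2: tensor node
        if kind[n] == 'tensor':
            ns, es = without(n)
            if len(components(ns, es)) == 2:   # a genuine splitting tensor
                return recurse(ns, es)
    return False                         # every deletion left a cycle
```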
However, for this to happen, the resource £ will have to be generated by some licenser whose scope includes the NPI, as we show below. If no outside £ resource is made available, then the extra- neous, unconsumed g material in the NPI guarantees that no proof will be generated. In proof net terms, 5Any also has another, so-called 'free choice' inter- pretation (as in e.g. Anyone will do) (Ladusaw, 1979; Kadmon and Landman, 1993), which we ignore here. the output £ cannot feed back into the input l with- out producing a cycle. We now demonstrate how the deduction is blocked for a sentence containing an unlicensed NPI such as (2). (2) ,AI sang yet. {[PR .o The relevant premises are AI: g~"* e AI sang: g~'~e Y ---o f,,"*t sing(Y) yet: (fa,~,t p ® £) --o (fa,x,+t yet(P) ® £) The graph of (2), shown in Figure 2, does not encode a proof. The reason is shown in Figure 3. At this point in the algorithm, we have deleted the leftmost terminal tensor node. However, the only remaining terminal tensor node cannot be deleted, since doing so would produce a single connected subgraph; the cycle is in the edge from £ to £±. At this point the algorithm fails and no meaning is derived. 3.2 Meaning constructors for NPI licensers It is clear from the proposal so far that lexical items which license NPI's must make available a £ resource within their scope which can be consumed by the NPI. However, that is not enough; a licenser can still occur inside a sentence without an NPI, as in e.g. No one left. The resource accounting of linear logic requires thatwe 'clean up' by consuming any excess £ resources in order for the meaning deduction to go through. Fortunately, we can solve this problem within the licenser's meaning constructor itself. For a lexical category whose meaning constructor is of the form A--®B, we assign to the NPI licensers of that cate- gory the meaning constructor (e -o (A ® t)) --o B. By its logical structure, being embedded inside an- other implication, the inner implication here serves 147 ~ . Y (9.~., At) ± (].~-'t P @ t) @ ((.f~-., yet(P)) x ~ l ~) J.~-*, M Figure 3: Point of failure. Bottom tensor node cannot be deleted. to introduce 'hypothetical' material. All of the NPI licensing occurs within the hypothetical (left) side of the outermost implication. Since the l resource is made available to the NPI only within this hypo- thetical, it is guaranteed that the NPI is assembled within, and therefore falls under, the scope of the li- censer. Furthermore, the formula is 'self cleaning', in that the £ resource, even if not used by an NPI, does not survive the hypothetical and so cannot affect the meaning of the licenser in some other way. That is, the licensing constructor (£ --o (A ® l)) --o B can derive all of the same meanings as the nonlicensing version A --o B. Fact 1 (g-o(A ® l))--oB F- A--oB Proof We construct the proof net of the equivalent right-sided sequent I- (g~ I~ (A ® g)) ® B ±, A ± , B and then test that it is valid. (£~I~(A®£))®B ± A 1B ==~ A ± B ::=$ £± A®~ A ± ~zg AA ± [] This self-cleaning property means that a licensing resource £ is exactly that--a license. Within the scope of the licenser, the g is available to be used once, several times (in a "chain" of NPI's which pass it along), or not at all, as required. 6 A simple example is provided by the NPIAicensing adverb rarely. We modify our sentential adverb template to create a meaning constructor for rarely which licenses an NPI within the sentence it modi- fies. 
rarely: (£ -.-o (fa,~t p ® £)) --.o fa,,~t rarely(P) The case of licensing quantifier phrases such as nobody and Jew students follows the same pattern. For example, nobody takes the form nobody: ((g#"*e x ® £) -o (H"-*t S(x) ® £)) --o H"~t no(person, S). We can now derive a meaning for sentence (3), in which nobody and anyone play the roles of licenser and NPI, respectively. (3) Nobody saw anyone. :[PREo ' OBODY'] h:[PRED 'ANYONE'] Normally, a sentence with two quantifiers would generate two different scope readings--in this case, (4) and (5). (4) f~"~t no(person, ~x.any(person, Ay.see(x, y) ) ) (5) f a"-* t any(person, Ay.no(person, Ax.see ( x, y ) ) ) However, Ladusaw's generalization is that NPI's are licensed within the scope of their licensers. In fact, the semantics of any prevent it from taking wide scope in such a case (Kadmon and Landman, 1993; Ladusaw, 1979, p. 96-101). Our analysis, then, should derive (4) but block (5). 6This multiple-use effect can be achieved more di- rectly using the exponential operator !; however this un- necessary step would take us outside of the multiplica- live fragment of linear logic and preclude the proof net techniques described earlier. 148 ~2 o ~o ~o f~ o ~9 ~D ~9 @ o The premises are nobody: saw: anyone: ((g,,"~ x ® £) .-o (H".*t S(x) ® ~)) --o H~-*t no(person, S) (ga',ze X ® ha'x~e Y) --o fa-,~t see(X, Y) (h~.% y --o I~.*, T(y) ® i) --o (I~.,t any(person, T) ® £) The proof net for reading (4) is shown in Figure 4. T As required, the net in Figure 4, corresponding to wide scope for no, is valid. The first step in the proof of Figure 4 is to delete the only available splitting tensor, which is boxed in the figure. A second way of linking the positive and negative literals in Fig- ure 4 produces a net which corresponds to (5), the spurious reading in which any has wide scope. In that graph, however, all three of the available termi- nal tensor nodes produce a single, connected (cyclic) graph if deleted, so decomposition cannot even be- gin and the algorithm fails. Once again, it is the licensing resources which are enforcing the desired constraint. 4 Categorial grammar approaches The £ atom used here is somewhat analogous to the (negative) lexical 'monotonicity markers' proposed by S~chez Valencia (1991; 1995) and Dowty (1994) for categorial grammar. In these approaches, cate- gories of the form A/B axe marked with monotonic- ity properties, i.e. as A+/B +, A+/B -, A-/B +, or A-/B-, and similarly for left-leaning categories of the form A\B. Then monotonicity constraints can be enforced using category assignments like the fol- lowing from (Dowty, 1994): no: { (S+/VP-)/CN- (S-/VP+)/CN + } any: (S-/VP-)/CN- ever: VP-/VP- S~chez Valencia and Dowty, however, are less concerned with the distribution of NPI's than they are with using monotonicity properties to character- ize valid inference patterns, an issue which we have ignored here. Hence their work emphasizes logical polarity, where an odd number of negative marks indicates negative polarity, and an even number of negatives cancel each other to produce positive po- larity. For example, the category of no above "flips" the polarity of its argument. By contrast, our sys- tem, like Ladusaw's (1979) original proposal, is what Dowty (1994, p. 134-137) would call "intuitionistic": ~The subscripts have been stripped from the formulas in order to save space in the diagram. 149 since multiple negative contexts do not cancel each other out, we permit doubly-licensed NPI's as in Nobody rarely sees anyone. 
To handle such doubly-licensed cases, while at the same time accounting for monotonic inference properties, Dowty (1994) proposes a double-marking framework whereby categories like A⁻/B⁺ are marked for both logical polarity and syntactic polarity.

5 Conclusion

We have elaborated on and extended slightly the 'glue language' approach to semantics of Dalrymple et al. It was shown how linear logic proof nets can be used for efficient natural-language meaning deductions in this framework. We then presented a glue language treatment of negative polarity licensing which ensures that NPI's are licensed within the semantic scope of their licensers, following (Ladusaw, 1979). This system uses no new global rules or features, nor ambiguous lexical entries, but only the addition of $\ell$'s to the relevant items within the lexicon. The licensing takes place precisely at the syntax-semantics interface, since it is implemented entirely in the interface glue language. Finally, we noted briefly some similarities and differences between this system and categorial grammar 'monotonicity marking' approaches.

6 Acknowledgements

I'm grateful to Mary Dalrymple, John Lamping and Stanley Peters for very helpful discussions of this material. Vineet Gupta, Martin Kay, Fernando Pereira and four anonymous reviewers also provided helpful comments on several points. All remaining errors are naturally my own.

References

Mary Dalrymple, John Lamping, and Vijay Saraswat. 1993. LFG semantics via constraints. In Proceedings of the 6th Meeting of the European Association for Computational Linguistics, University of Utrecht, April.

Mary Dalrymple, John Lamping, Fernando Pereira, and Vijay Saraswat. 1994. A deductive account of quantification in LFG. In Makoto Kanazawa, Christopher J. Piñón, and Henriette de Swart, editors, Quantifiers, Deduction, and Context. CSLI Publications, Stanford, CA.

Mary Dalrymple, John Lamping, Fernando Pereira, and Vijay Saraswat. To appear. Quantifiers, anaphora, and intensionality. Journal of Logic, Language and Information.

David Dowty. 1994. The role of negative polarity and concord marking in natural language reasoning. In Mandy Harvey and Lynn Santelmann, editors, Proceedings of SALT IV, pages 114-144, Ithaca, NY. Cornell University.

Jean Gallier. 1992. Constructive logics. Part II: Linear logic and proof nets. MS, Department of Computer and Information Science, University of Pennsylvania.

Jean-Yves Girard. 1987. Linear logic. Theoretical Computer Science, 50.

Nirit Kadmon and Fred Landman. 1993. Any. Linguistics and Philosophy, 16:353-422.

William A. Ladusaw. 1979. Polarity Sensitivity as Inherent Scope Relations. Ph.D. thesis, University of Texas, Austin. Reprinted in Jorge Hankamer, editor, Outstanding Dissertations in Linguistics. Garland, 1980.

Alain Lecomte and Christian Retoré. 1995. Pomset logic as an alternative categorial grammar. In Glyn V. Morrill and Richard T. Oehrle, editors, Formal Grammar. Proceedings of the Conference of the European Summer School in Logic, Language, and Information, Barcelona.

Dale A. Miller. 1990. A logic programming language with lambda abstraction, function variables and simple unification. In Peter Schroeder-Heister, editor, Extensions of Logic Programming, Lecture Notes in Artificial Intelligence. Springer-Verlag.

Glyn V. Morrill. 1996. Memoisation of categorial proof nets: parallelism in categorial processing. In V. Michele Abrusci and Claudia Casadio, editors, Proceedings of the Roma Workshop on Proofs and Linguistic Categories, Rome.
Victor Sánchez Valencia. 1991. Studies on Natural Logic and Categorial Grammar. Ph.D. thesis, University of Amsterdam.

Victor Sánchez Valencia. 1995. Parsing-driven inference: natural logic. Linguistic Analysis, 25(3-4):258-285.
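As a supplement to the proof-net decomposition procedure discussed above, the following toy Python sketch (ours; the paper gives no code) implements the one graph test that the algorithm relies on: a terminal tensor node may be deleted only if its removal splits the graph into two connected components, while a node that sits on a cycle leaves a single connected subgraph and blocks the proof.

```python
from collections import defaultdict

def components_without(edges, node):
    """Number of connected components after deleting `node`."""
    graph = defaultdict(set)
    vertices = set()
    for a, b in edges:
        if node not in (a, b):
            graph[a].add(b)
            graph[b].add(a)
        vertices.update(v for v in (a, b) if v != node)
    seen, count = set(), 0
    for v in vertices:
        if v in seen:
            continue
        count += 1
        stack = [v]
        while stack:
            u = stack.pop()
            if u not in seen:
                seen.add(u)
                stack.extend(graph[u])
    return count

# On a cycle a - t - b - a, deleting t leaves one connected piece:
print(components_without([('a', 't'), ('t', 'b'), ('b', 'a')], 't'))  # 1
# Off a cycle, deleting t splits the graph, so t may be removed:
print(components_without([('a', 't'), ('t', 'b')], 't'))              # 2
```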
Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication

Lillian Lee
Division of Engineering and Applied Sciences
Harvard University
33 Oxford Street
Cambridge, MA 02138
llee@eecs.harvard.edu

Abstract

Valiant showed that Boolean matrix multiplication (BMM) can be used for CFG parsing. We prove a dual result: CFG parsers running in time $O(|G|\,|w|^{3-\epsilon})$ on a grammar G and a string w can be used to multiply m x m Boolean matrices in time $O(m^{3-\epsilon/3})$. In the process we also provide a formal definition of parsing motivated by an informal notion due to Lang. Our result establishes one of the first limitations on general CFG parsing: a fast, practical CFG parser would yield a fast, practical BMM algorithm, which is not believed to exist.

1 Introduction

The context-free grammar (CFG) formalism was developed during the birth of the field of computational linguistics. The standard methods for CFG parsing are the CKY algorithm (Kasami, 1965; Younger, 1967) and Earley's algorithm (Earley, 1970), both of which have a worst-case running time of $O(gN^3)$ for a CFG (in Chomsky normal form) of size g and a string of length N. Graham et al. (1980) give a variant of Earley's algorithm which runs in time $O(gN^3/\log N)$. Valiant's parsing method is the asymptotically fastest known (Valiant, 1975). It uses Boolean matrix multiplication (BMM) to speed up the dynamic programming in the CKY algorithm: its worst-case running time is $O(g\,M(N))$, where $M(m)$ is the time it takes to multiply two m x m Boolean matrices together.

The standard method for multiplying matrices takes time $O(m^3)$. There exist matrix multiplication algorithms with time complexity $O(m^{3-\delta})$; for instance, Strassen's has a worst-case running time of $O(m^{2.81})$ (Strassen, 1969), and the fastest currently known has a worst-case running time of $O(m^{2.376})$ (Coppersmith and Winograd, 1990). Unfortunately, the constants involved are so large that these fast algorithms (with the possible exception of Strassen's) cannot be used in practice. As matrix multiplication is a very well-studied problem (see Strassen's historical account (Strassen, 1990, section 10)), it is highly unlikely that simple, practical fast matrix multiplication algorithms exist. Since the best BMM algorithms all rely on general matrix multiplication¹, it is widely believed that there are no practical $O(m^{3-\delta})$ BMM algorithms.

One might therefore hope to find a way to speed up CFG parsing without relying on matrix multiplication. However, we show in this paper that fast CFG parsing requires fast Boolean matrix multiplication in a precise sense: any parser running in time $O(gN^{3-\epsilon})$ that represents parse data in a retrieval-efficient way can be converted with little computational overhead into an $O(m^{3-\epsilon/3})$ BMM algorithm. Since it is very improbable that practical fast matrix multiplication algorithms exist, we thus establish one of the first nontrivial limitations on practical CFG parsing.

¹The "four Russians" algorithm (Arlazarov et al., 1970), the fastest BMM algorithm that does not simply use ordinary matrix multiplication, has worst-case running time $O(m^3/\log m)$.

Our technique, adapted from that used by Satta (1994) for tree-adjoining grammar (TAG) parsing, is to show that BMM can be efficiently reduced to CFG parsing. Satta's result does not apply to CFG parsing, since it explicitly relies on the properties of TAGs that allow them to generate non-context-free languages.

2 Definitions

A Boolean matrix is a matrix with entries from the set {0, 1}.
A Boolean matrix multiplication algorithm takes as input two m x m Boolean matrices A and B and returns their Boolean product A x B, which is the m x m Boolean matrix C whose entries $c_{ij}$ are defined by

$$c_{ij} = \bigvee_{k=1}^{m} (a_{ik} \wedge b_{kj}).$$

That is, $c_{ij} = 1$ if and only if there exists a number k, $1 \le k \le m$, such that $a_{ik} = b_{kj} = 1$.

We use the usual definition of a context-free grammar (CFG) as a 4-tuple $G = (\Sigma, V, R, S)$, where $\Sigma$ is the set of terminals, V is the set of nonterminals, R is the set of productions, and $S \in V$ is the start symbol. Given a string $w = w_1 w_2 \cdots w_N$ over $\Sigma^*$, where each $w_i$ is an element of $\Sigma$, we use the notation $w_i^j$ to denote the substring $w_i w_{i+1} \cdots w_{j-1} w_j$.

We will be concerned with the notion of c-derivations, which are substring derivations that are consistent with a derivation of an entire string. Intuitively, $A \Rightarrow^* w_i^j$ is a c-derivation if it is consistent with at least one parse of w.

Definition 1  Let $G = (\Sigma, V, R, S)$ be a CFG, and let $w = w_1 w_2 \cdots w_N$, $w_i \in \Sigma$. A nonterminal $A \in V$ c-derives (consistently derives) $w_i^j$ if and only if the following conditions hold:

- $A \Rightarrow^* w_i^j$, and
- $S \Rightarrow^* w_1^{i-1} A\, w_{j+1}^N$.

(These conditions together imply that $S \Rightarrow^* w$.)

We would like our results to apply to all "practical" parsers, but what does it mean for a parser to be practical? First, we would like to be able to retrieve constituent information for all possible parses of a string (after all, the recovery of structural information is what distinguishes parsing algorithms from recognition algorithms); such information is very useful for applications like natural language understanding, where multiple interpretations for a sentence may result from different constituent structures. Therefore, practical parsers should keep track of c-derivations. Secondly, a parser should create an output structure from which information about constituents can be retrieved in an efficient way; Satta (1994) points out an observation of Lang to the effect that one can consider the input string itself to be a retrieval-inefficient representation of parse information. In short, we require practical parsers to output a representation of the parse forest for a string that allows efficient retrieval of parse information. Lang in fact argues that parsing means exactly the production of a shared forest structure "from which any specific parse can be extracted in time linear with the size of the extracted parse tree" (Lang, 1994, pg. 487), and Satta (1994) makes this assumption as well.

These notions lead us to equate practical parsers with the class of c-parsers, which keep track of c-derivations and may also calculate general substring derivations as well.

Definition 2  A c-parser is an algorithm that takes a CFG grammar $G = (\Sigma, V, R, S)$ and string $w \in \Sigma^*$ as input and produces output $\mathcal{F}_{G,w}$; $\mathcal{F}_{G,w}$ acts as an oracle about parse information, as follows:

- If A c-derives $w_i^j$, then $\mathcal{F}_{G,w}(A, i, j) =$ "yes".
- If $A \not\Rightarrow^* w_i^j$ (which implies that A does not c-derive $w_i^j$), then $\mathcal{F}_{G,w}(A, i, j) =$ "no".
- $\mathcal{F}_{G,w}$ answers queries in constant time.

Note that the answer $\mathcal{F}_{G,w}$ gives can be arbitrary if $A \Rightarrow^* w_i^j$ but A does not c-derive $w_i^j$. The constant-time constraint encodes the notion that information extraction is efficient; observe that this is a stronger condition than that called for by Lang.

We define c-parsers in this way to make the class of c-parsers as broad as possible.
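For concreteness, the standard cubic-time method mentioned in the introduction realizes exactly the Boolean product defined above; the short Python sketch below is our own illustration, not code from the paper.

```python
def bmm(A, B):
    """Standard O(m^3) Boolean matrix multiplication:
    c_ij = OR over k of (a_ik AND b_kj)."""
    m = len(A)
    return [[int(any(A[i][k] and B[k][j] for k in range(m)))
             for j in range(m)]
            for i in range(m)]

print(bmm([[1, 0], [0, 1]], [[0, 1], [1, 0]]))   # [[0, 1], [1, 0]]
```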
If we had changed the first condition of Definition 2 to "If A derives ...", then Earley parsers would be excluded, since they do not keep track of all substring derivations. If we had written the second condition as "If A does not c-derive $w_i^j$, then ...", then CKY parsers would not be c-parsers, since they keep track of all substring derivations, not just c-derivations. So as it stands, the class of c-parsers includes tabular parsers (e.g. CKY), where $\mathcal{F}_{G,w}$ is the table of substring derivations, and Earley-type parsers, where $\mathcal{F}_{G,w}$ is the chart. Indeed, it includes all of the parsing algorithms mentioned in the introduction, and can be thought of as a formalization of Lang's informal definition of parsing.

3 The reduction

We will reduce BMM to c-parsing, thus proving that any c-parsing algorithm can be used as a Boolean matrix multiplication algorithm. Our method, adapted from that of Satta (1994) (who considered the problem of parsing with tree-adjoining grammars), is to encode information about Boolean matrices into a CFG. Thus, given two Boolean matrices, we need to produce a string and a grammar such that parsing the string with respect to the grammar yields output from which information about the product of the two matrices can be easily retrieved.

We can sketch the behavior of the grammar as follows. Suppose entries $a_{ik}$ in A and $b_{kj}$ in B are both 1. Assume we have some way to break up array indices into two parts so that i can be reconstructed from $i_1$ and $i_2$, j can be reconstructed from $j_1$ and $j_2$, and k can be reconstructed from $k_1$ and $k_2$. (We will describe a way to do this later.) Then, we will have the following derivation (for a quantity $\delta$ to be defined later):

$$C_{i_1,j_1} \Rightarrow A_{i_1,k_1} B_{k_1,j_1} \Rightarrow^* \underbrace{w_{i_2} \cdots w_{k_2+\delta}}_{\text{derived by } A_{i_1,k_1}}\ \underbrace{w_{k_2+1+\delta} \cdots w_{j_2+2\delta}}_{\text{derived by } B_{k_1,j_1}}$$

The key thing to observe is that $C_{i_1,j_1}$ generates two nonterminals whose "inner" indices match, and that these two nonterminals generate substrings that lie exactly next to each other. The "inner" indices constitute a check on $k_1$, and the substring adjacency constitutes a check on $k_2$.

Let A and B be two Boolean matrices, each of size m x m, and let C be their Boolean matrix product, C = A x B. In the rest of this section, we consider A, B, C, and m to be fixed. Set $n = \lceil m^{1/3} \rceil$, and set $\delta = n + 2$. We will be constructing a string of length $3\delta$; we choose $\delta$ slightly larger than n in order to avoid having epsilon-productions in our grammar.

Recall that $c_{ij}$ is non-zero if and only if we can find a non-zero $a_{ik}$ and a non-zero $b_{\bar{k}j}$ such that $k = \bar{k}$. In essence, we need simply check for the equality of indices k and $\bar{k}$. We will break matrix indices into two parts: our grammar will check whether the first parts of k and $\bar{k}$ are equal, and our string will check whether the second parts are also equal, as we sketched above. Encoding the indices ensures that the grammar is of as small a size as possible, which will be important for our time bound results.

Our index encoding function is as follows. Let i be a matrix index, $1 \le i \le m$. Then we define the function $f(i) = (f_1(i), f_2(i))$ by

$$f_1(i) = \lfloor i/n \rfloor \quad (0 \le f_1(i) \le n^2), \qquad f_2(i) = (i \bmod n) + 2 \quad (2 \le f_2(i) \le n+1).$$

Since $f_1$ and $f_2$ are essentially the quotient and remainder of integer division of i by n, we can retrieve i from $(f_1(i), f_2(i))$. We will use the notational shorthand of using subscripts instead of the functions $f_1$ and $f_2$; that is, we write $i_1$ and $i_2$ for $f_1(i)$ and $f_2(i)$.

It is now our job to create a CFG $G = (\Sigma, V, R, S)$ and a string w that encode information about A and B and express constraints about their product C.
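The index encoding just defined is easy to compute and to invert; a minimal sketch (ours, not the paper's code) follows.

```python
import math

def make_codec(m):
    n = math.ceil(m ** (1.0 / 3.0))    # any n with n**3 >= m also inverts
    def encode(i):                      # 1 <= i <= m
        return (i // n, (i % n) + 2)    # (f_1(i), f_2(i))
    def decode(i1, i2):
        return n * i1 + (i2 - 2)
    return n, encode, decode

n, encode, decode = make_codec(1000)    # here n = 10
assert all(decode(*encode(i)) == i for i in range(1, 1001))
```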
Our plan is to include a set of nonterminals $\{C_{p,q} : 1 \le p, q \le n^2\}$ in V so that $c_{ij} = 1$ if and only if $C_{i_1,j_1}$ c-derives $w_{i_2}^{j_2+2\delta}$. In section 3.1 we describe a version of G and prove it has this c-derivation property. Then, in section 3.2 we explain that G can easily be converted to Chomsky normal form in such a way as to preserve c-derivations.

We choose the set of terminals to be $\Sigma = \{w_\ell : 1 \le \ell \le 3n+6\}$, and choose the string to be parsed to be $w = w_1 w_2 \cdots w_{3n+6}$. We consider w to be made up of three parts, x, y, and z, each of size $\delta$:

$$w = \underbrace{w_1 w_2 \cdots w_{n+2}}_{x}\ \underbrace{w_{n+3} \cdots w_{2n+4}}_{y}\ \underbrace{w_{2n+5} \cdots w_{3n+6}}_{z}.$$

Observe that for any i, $1 \le i \le m$, $w_{i_2}$ lies within x, $w_{i_2+\delta}$ lies within y, and $w_{i_2+2\delta}$ lies within z, since $i_2 \in [2, n+1]$, $i_2 + \delta \in [n+4, 2n+3]$, and $i_2 + 2\delta \in [2n+6, 3n+5]$.

3.1 The grammar

Now we begin building the grammar $G = (\Sigma, V, R, S)$. We start with the nonterminals $V = \{S\}$ and the production set $R = \emptyset$. We add nonterminal W to V for generating arbitrary non-empty substrings of w; thus we need the productions

(W-rules)  $W \rightarrow w_\ell W \mid w_\ell$,  $1 \le \ell \le 3n+6$.

Next we encode the entries of the input matrices A and B in our grammar. We include sets of non-terminals $\{A_{p,q} : 1 \le p, q \le n^2\}$ and $\{B_{p,q} : 1 \le p, q \le n^2\}$. Then, for every non-zero entry $a_{ij}$ in A, we add the production

(A-rules)  $A_{i_1,j_1} \rightarrow w_{i_2}\, W\, w_{j_2+\delta}$.

For every non-zero entry $b_{ij}$ in B, we add the production

(B-rules)  $B_{i_1,j_1} \rightarrow w_{i_2+1+\delta}\, W\, w_{j_2+2\delta}$.

We need to represent entries of C, so we create nonterminals $\{C_{p,q} : 1 \le p, q \le n^2\}$ and productions

(C-rules)  $C_{p,q} \rightarrow A_{p,r} B_{r,q}$,  $1 \le p, q, r \le n^2$.

Finally, we complete the construction with productions for the start symbol S:

(S-rules)  $S \rightarrow W\, C_{p,q}\, W$,  $1 \le p, q \le n^2$.

We now prove the following result about the grammar and string we have just described.

Theorem 1  For $1 \le i, j \le m$, the entry $c_{ij}$ in C is non-zero if and only if $C_{i_1,j_1}$ c-derives $w_{i_2}^{j_2+2\delta}$.

Proof. Fix i and j. Let us prove the "only if" direction first. Thus, suppose $c_{ij} = 1$. Then there exists a k such that $a_{ik} = b_{kj} = 1$. Figure 1 sketches how $C_{i_1,j_1}$ c-derives $w_{i_2}^{j_2+2\delta}$.

Claim 1  $C_{i_1,j_1} \Rightarrow^* w_{i_2}^{j_2+2\delta}$.

The production $C_{i_1,j_1} \rightarrow A_{i_1,k_1} B_{k_1,j_1}$ is one of the C-rules in our grammar. Since $a_{ik} = 1$, $A_{i_1,k_1} \rightarrow w_{i_2} W w_{k_2+\delta}$ is one of our A-rules, and since $b_{kj} = 1$, $B_{k_1,j_1} \rightarrow w_{k_2+1+\delta} W w_{j_2+2\delta}$ is one of our B-rules. Finally, since $i_2 + 1 \le (k_2 + \delta) - 1$ and $(k_2 + 1 + \delta) + 1 \le (j_2 + 2\delta) - 1$, we have $W \Rightarrow^* w_{i_2+1}^{k_2+\delta-1}$ and $W \Rightarrow^* w_{k_2+2+\delta}^{j_2+2\delta-1}$, since both substrings are of length at least one. Therefore,

$$C_{i_1,j_1} \Rightarrow A_{i_1,k_1} B_{k_1,j_1} \Rightarrow^* \underbrace{w_{i_2}\, W\, w_{k_2+\delta}}_{\text{derived by } A_{i_1,k_1}}\ \underbrace{w_{k_2+1+\delta}\, W\, w_{j_2+2\delta}}_{\text{derived by } B_{k_1,j_1}} \Rightarrow^* w_{i_2}^{j_2+2\delta},$$

and Claim 1 follows.

Claim 2  $S \Rightarrow^* w_1^{i_2-1}\, C_{i_1,j_1}\, w_{j_2+2\delta+1}^{3n+6}$.

This claim is essentially trivial, since by the definition of the S-rules, we know that $S \Rightarrow^* W C_{i_1,j_1} W$. We need only show that neither $w_1^{i_2-1}$ nor $w_{j_2+2\delta+1}^{3n+6}$ is the empty string (and hence can be derived by W); since $1 \le i_2 - 1$ and $j_2 + 2\delta + 1 \le 3n+6$, the claim holds.

Claims 1 and 2 together prove that $C_{i_1,j_1}$ c-derives $w_{i_2}^{j_2+2\delta}$, as required.²

²This proof would have been simpler if we had allowed W to derive the empty string. However, we avoid epsilon-productions in order to facilitate the conversion to Chomsky normal form, discussed later.

Next we prove the "if" direction. Suppose $C_{i_1,j_1}$ c-derives $w_{i_2}^{j_2+2\delta}$, which by definition means $C_{i_1,j_1} \Rightarrow^* w_{i_2}^{j_2+2\delta}$. Then there must be a derivation resulting from the application of a C-rule as follows:

$$C_{i_1,j_1} \Rightarrow A_{i_1,k'} B_{k',j_1} \Rightarrow^* w_{i_2}^{j_2+2\delta}$$
W3n+6 x y z Figure 1: Schematic of the derivation process when aik -~ bkj ---- 1. The substrings derived by Ail,k~ and Bkl,jl lie right next to each other. for some k ~. It must be the case that for some ~, Ail,k' =:~* w ~. and Bk',jl 0" ~ j~+2~ But z2 ~£+1 " then we must have the productions Ail,k' wi2Wwt and Bk',jl > ?.l)£+lWWj2+2 5 with ~ = k" + ~ for some k". But we can only have such productions if there exists a number k such that kl = k t, k2 = k n, aik = 1, and bkj ---- 1; and this implies that cij = 1. • Examination of the proof reveals that we have also shown the following two corollaries. Corollary 1 For 1 < i,j < m, cij = 1 if and only if Cil,jl =:b* j2+2~ Wi 2 Corollary 2 S =~* w if and only if C is not the all-zeroes matrix. Let us now calculate the size of G. V consists of O((n2) 2) = O(m 4/3) nonterminals. R con- tains O(n) W-rules and O((n2) 2) = O(m 4/3) S-rules. There are at most m 2 A-rules, since we have an A-rule for each non-zero entry in A; similarly, there are at most m 2 B-rules. And lastly, there are (n2) 3 = O(m 2) C-rules. There- fore, our grammar is of size O(m2); since G en- codes matrices A and B, it is of optimal size. 3.2 Chomsky normal form We would like our results to be true for the largest class of parsers possible. Since some parsers require the input grammar to be in Chomsky normal form (CNF), we therefore wish to construct a CNF version G ~ of G. However, in order to preserve time bounds, we desire that O(IG'I) = O(]GI), and we also require that The- orem 1 holds for G ~ as well as G. The standard algorithm for converting CFGs to CNF can yield a quadratic blow-up in the size of the grammar and thus is clearly un- satisfactory for our purposes. However, since G contains no epsilon-productions or unit pro- ductions, it is easy to see that we can convert G simply by introducing a small (O(n)) num- ber of nonterminals without changing any c- derivations for the Cp,q. Thus, from now on we will simply assume that G is in CNF. 3.3 Time bounds We are now in a position to prove our relation between time bounds for Boolean matrix multi- plication and time bounds for CFG parsing. 13 Theorem 2 Any c-parser P with running time O(T(g)t(N)) on grammars of size g and strings of length N can be converted into a BMM algorithm Mp that runs in time O(max(m 2, T(m2)t(mU3))). In particular, if P takes time O(gN3-e), then l~/Ip runs in time 0(m3-~/3). Proof. Me acts as follows. Given two Boolean m x m matrices A and B, it constructs G and w as described above. It feeds G and w to P, which outputs $'c,w- To compute the prod- uct matrix C, Me queries for each i and j, 1 < i,j < m, whether Ci~,jl derives wJ ~+2~ -- -- 't 2 (we do not need to ask whether Cil,j~ c-derives w']J ~+26 because of corollary 1), setting cij appro- i2 priately. By definition of c-parsers, each such query takes constant time. Let us compute the running time of Me. It takes O(m 2) time to read the input matrices. Since G is of size O(rn 2) and Iwl = O(ml/3), it takes O(m 2) time to build the input to P, which then computes 5rG,w in time O(T(m2)t(ml/3)). Retrieving C takes O(m2). So the total time spent by Mp is O(max(m 2, T(m2)t(mU3))), as was claimed. In the case where T(g) = g and t(N) = N 3-e, Mp has a running time of O(m2(ml/3) a-e) = O(m 2+1-£/3) = O(m3-e'/3). II The case in which P takes time linear in the grammar size is of the most interest, since in natural language processing applications, the grammar tends to be far larger than the strings to be parsed. 
Observe that theorem 2 trans- lates the running time of the standard CFG parsers, O(gN3), into the running time of the standard BMM algorithm, O(m3). Also, a c- parser with running time O(gN 2"43) would yield a matrix multiplication algorithm rivalling that of Strassen's, and a c-parser with running time better than O(gN H2) could be converted into a BMM method faster than Coppersmith and Winograd. As per the discussion above, even if such parsers exist, they would in all likelihood not be very practical. Finally, we note that if a lower bound on BMM of the form f~(m 3-a) were found, then we would have an immediate lower bound of ~(N 3-3a) on c-parsers running in time linear in g. 4 Related results and conclusion We have shown that fast practical CFG parsing algorithms yield fast practical BMM algorithms. Given that fast practical BMM algorithms are unlikely to exist, we have established a limita- tion on practical CFG parsing. Valiant (personal communication) notes that there is a reduction of m × m Boolean matrix multiplication checking to context-free recog- nition of strings of length m2; this reduc- tion is alluded to in a footnote of a paper by Harrison and Havel (1974). However, this reduction converts a parser running in time O(Iwl 1"5) to a BMM checking algorithm run- ning in time O(m 3) (the running time of the standard multiplication method), whereas our result says that sub-cubic practical parsers are quite unlikely; thus, our result is quite a bit stronger. Seiferas (1986) gives a simple proof of N 2 an ~t(lo-Q-W) lower bound (originally due to Gallaire (1969)) for the problem of on-line lin- ear CFL recognition by multitape Turing ma- chines. However, his results concern on-line recognition, which is a harder problem than parsing, and so do not apply to the general off- line parsing case. Finally, we recall Valiant's reduction of CFG parsing to boolean matrix multiplication (Valiant, 1975); it is rather pleasing to have the reduction cycle completed. 5 Acknowledgments I thank Joshua Goodman, Rebecca Hwa, Jon Kleinberg, and Stuart Shieber for many helpful comments and conversations. Thanks to Les Valiant for pointing out the "folklore" reduc- tion. This material is based upon work sup- ported in part by the National Science Foun- dation under Grant No. IRI-9350192. I also gratefully acknowledge partial support from an NSF Graduate Fellowship and an AT&T GRPW/ALFP grant. Finally, thanks to Gior- gio Satta, who mailed me a preprint of his BMM/TAG paper several years ago. 14 References Arlazarov, V. L., E. A. Dinic, M. A. Kronrod, and I. A. Farad~ev. 1970. On economical construc- tion of the transitive closure of an oriented graph. Soviet Math. Dokl., 11:1209-1210. English trans- lation of the Russian article in Dokl. Akad. Nauk SSSR 194 (1970). Coppersmith, Don and Shmuel Winograd. 1990. Matrix multiplication via arithmetic progression. Journal of Symbolic Computation, 9(3):251-280. Special Issue on Computational Algebraic Com- plexity. Earley, Jay. 1970. ing algorithm. 13(2):94-102. An efficient context-free pars- Communications of the A CM, Gallaire, Herv& 1969. Recognition time of context- free languages by on-line turing machines. Infor- mation and Control, 15(3):288-295, September. Graham, Susan L., Michael A. Harrison, and Wal- ter L. Ruzzo. 1980. An improved context-free recognizer. A CM Transactions on Programming Languages and Systems, 2(3):415-462. Harrison, Michael and Ivan Havel. 1974. On the parsing of deterministic languages. 
Journal of the ACM, 21(4):525-548, October.

Kasami, Tadao. 1965. An efficient recognition and syntax algorithm for context-free languages. Scientific Report AFCRL-65-758, Air Force Cambridge Research Lab, Bedford, MA.

Lang, Bernard. 1994. Recognition can be harder than parsing. Computational Intelligence, 10(4):486-494, November.

Satta, Giorgio. 1994. Tree-adjoining grammar parsing and boolean matrix multiplication. Computational Linguistics, 20(2):173-191, June.

Seiferas, Joel. 1986. A simplified lower bound for context-free-language recognition. Information and Control, 69:255-260.

Strassen, Volker. 1969. Gaussian elimination is not optimal. Numerische Mathematik, 14(3):354-356.

Strassen, Volker. 1990. Algebraic complexity theory. In Jan van Leeuwen, editor, Handbook of Theoretical Computer Science, volume A. Elsevier Science Publishers, chapter 11, pages 633-672.

Valiant, Leslie G. 1975. General context-free recognition in less than cubic time. Journal of Computer and System Sciences, 10:308-315.

Younger, Daniel H. 1967. Recognition and parsing of context-free languages in time n³. Information and Control, 10(2):189-208.
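As a concrete companion to the proof of Theorem 2, the following hypothetical Python sketch assembles the grammar of Section 3.1 from the input matrices and fills in C by querying a c-parser oracle. The function names, the oracle interface `c_parse`, and the `encode` helper (as in the codec sketch earlier) are our own assumptions; index parts are allowed to range over 0..n² to cover all values of $f_1$.

```python
def multiply_via_parsing(A, B, c_parse, encode, n):
    """Sketch of M_P: c_parse(prods, 'S', w) stands in for any c-parser P
    and must return an oracle F with F(X, i, j) == 'yes' whenever
    nonterminal X derives the substring w_i..w_j."""
    m, delta = len(A), n + 2
    w = list(range(1, 3 * n + 7))                   # terminals w_1..w_{3n+6}
    prods = []
    for l in w:                                     # W-rules: W -> w_l W | w_l
        prods += [('W', (('t', l), 'W')), ('W', (('t', l),))]

    def add_rules(M, name, left_offset, right_offset):
        for i in range(1, m + 1):
            for j in range(1, m + 1):
                if M[i - 1][j - 1]:
                    (i1, i2), (j1, j2) = encode(i), encode(j)
                    prods.append(((name, i1, j1),
                                  (('t', i2 + left_offset), 'W',
                                   ('t', j2 + right_offset))))

    add_rules(A, 'A', 0, delta)              # A_{i1,j1} -> w_{i2} W w_{j2+d}
    add_rules(B, 'B', 1 + delta, 2 * delta)  # B_{i1,j1} -> w_{i2+1+d} W w_{j2+2d}
    for p in range(n * n + 1):               # C-rules and S-rules
        for q in range(n * n + 1):
            prods.append(('S', ('W', ('C', p, q), 'W')))
            for r in range(n * n + 1):
                prods.append((('C', p, q), (('A', p, r), ('B', r, q))))

    F = c_parse(prods, 'S', w)               # the c-parser's parse oracle
    C = [[0] * m for _ in range(m)]
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            (i1, i2), (j1, j2) = encode(i), encode(j)
            if F(('C', i1, j1), i2, j2 + 2 * delta) == 'yes':
                C[i - 1][j - 1] = 1
    return C
```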
Deriving Verbal and Compositional Lexical Aspect for NLP Applications

Bonnie J. Dorr and Mari Broman Olsen
University of Maryland Institute for Advanced Computer Studies
A.V. Williams Building
College Park, MD 20742, USA
{bonnie,molsen}@umiacs.umd.edu

Abstract

Verbal and compositional lexical aspect provide the underlying temporal structure of events. Knowledge of lexical aspect, e.g., (a)telicity, is therefore required for interpreting event sequences in discourse (Dowty, 1986; Moens and Steedman, 1988; Passoneau, 1988), interfacing to temporal databases (Androutsopoulos, 1996), processing temporal modifiers (Antonisse, 1994), describing allowable alternations and their semantic effects (Resnik, 1996; Tenny, 1994), and selecting tense and lexical items for natural language generation ((Dorr and Olsen, 1996; Klavans and Chodorow, 1992), cf. (Slobin and Bocaz, 1988)). We show that it is possible to represent lexical aspect, both verbal and compositional, on a large scale, using Lexical Conceptual Structure (LCS) representations of verbs in the classes cataloged by Levin (1993). We show how proper consideration of these universal pieces of verb meaning may be used to refine lexical representations and derive a range of meanings from combinations of LCS representations. A single algorithm may therefore be used to determine lexical aspect classes and features at both verbal and sentence levels. Finally, we illustrate how knowledge of lexical aspect facilitates the interpretation of events in NLP applications.

1 Introduction

Knowledge of lexical aspect, i.e., how verbs denote situations as developing or holding in time, is required for interpreting event sequences in discourse (Dowty, 1986; Moens and Steedman, 1988; Passoneau, 1988), interfacing to temporal databases (Androutsopoulos, 1996), processing temporal modifiers (Antonisse, 1994), describing allowable alternations and their semantic effects (Resnik, 1996; Tenny, 1994), and for selecting tense and lexical items for natural language generation ((Dorr and Olsen, 1996; Klavans and Chodorow, 1992), cf. (Slobin and Bocaz, 1988)). In addition, preliminary psycholinguistic experiments (Antonisse, 1994) indicate that subjects are sensitive to the presence or absence of aspectual features when processing temporal modifiers. Resnik (1996) showed that the strength of distributionally derived selectional constraints helps predict whether verbs can participate in a class of diathesis alternations, with aspectual properties of verbs clearly influencing the alternations of interest. He also points out that these properties are difficult to obtain directly from corpora.

The ability to determine lexical aspect, on a large scale and in the sentential context, therefore yields an important source of constraints for corpus analysis and psycholinguistic experimentation, as well as for NLP applications such as machine translation (Dorr et al., 1995b) and foreign language tutoring (Dorr et al., 1995a; Sams, 1995; Weinberg et al., 1995). Other researchers have proposed corpus-based approaches to acquiring lexical aspect information with varying data coverage: Klavans and Chodorow (1992) focus on the event-state distinction in verbs and predicates; Light (1996) considers the aspectual properties of verbs and affixes; and McKeown and Siegel (1996) describe an algorithm for classifying sentences according to lexical aspect properties. Conversely,
a number of works in the linguistics literature have proposed lexical semantic templates for representing the aspectual properties of verbs (Dowty, 1979; Hovav and Levin, 1995; Levin and Rappaport Hovav, To appear), although these have not been implemented and tested on a large scale.

We show that it is possible to represent the lexical aspect both of verbs alone and in sentential contexts using Lexical Conceptual Structure (LCS) representations of verbs in the classes cataloged by Levin (1993). We show how proper consideration of these universal pieces of verb meaning may be used to refine lexical representations and derive a range of meanings from combinations of LCS representations. A single algorithm may therefore be used to determine lexical aspect classes and features at both verbal and sentential levels. Finally, we illustrate how access to lexical aspect facilitates lexical selection and the interpretation of events in machine translation and foreign language tutoring applications, respectively.

2 Lexical Aspect

Following Olsen (To appear in 1997), we distinguish between lexical and grammatical aspect, roughly the situation and viewpoint aspect of Smith (1991). Lexical aspect refers to the type of situation denoted by the verb, alone or combined with other sentential constituents. Grammatical aspect takes these situation types and presents them as imperfective (John was winning the race/loving his job) or perfective (John had won/loved his job). Verbs are assigned to lexical aspect classes, as in Table 1 (cf. (Brinton, 1988, p. 57), (Smith, 1991)), based on their behavior in a variety of syntactic and semantic frames that focus on their features.¹

¹Two additional categories are identified by Olsen (To appear in 1997): Semelfactives (cough, tap) and Stage-level states (be pregnant). Since they are not assigned templates by either Dowty (1979) or Levin and Rappaport Hovav (To appear), we do not discuss them in this paper.

A major source of the difficulty in assigning lexical aspect features to verbs is the ability of verbs to appear in sentences denoting situations of multiple aspectual types. Such cases arise, e.g., in the context of foreign language tutoring (Dorr et al., 1995b; Sams, 1995; Weinberg et al., 1995), where a 'bounded' interpretation for an atelic verb, e.g., march, may be introduced by a path PP to the bridge or across the field or by an NP the length of the field:

(1) The soldier marched to the bridge.
    The soldier marched across the field.
    The soldier marched the length of the field.

Some have proposed, in fact, that aspectual classes are gradient categories (Klavans and Chodorow, 1992), or that aspect should be evaluated only at the clausal or sentential level (esp. (Verkuyl, 1993); see (Klavans and Chodorow, 1992) for NLP applications).

Olsen (To appear in 1997) showed that, although sentential and pragmatic context influence aspectual interpretation, input to the context is constrained in large part by verbs' aspectual information. In particular, she showed that the positively marked features did not vary: [+telic] verbs such as win were always bounded, for example. In contrast, the negatively marked features could be changed by other sentence constituents or pragmatic context: [-telic] verbs like march could therefore be made [+telic]. Similarly, stative verbs appeared with event interpretations, and punctiliar events as durative. Olsen
therefore proposed that aspectual interpretation be derived through monotonic composition of marked privative features [+/∅ dynamic], [+/∅ durative] and [+/∅ telic], as shown in Table 2 (Olsen, To appear in 1997, pp. 32-33).

Table 1: Featural Identification of Aspectual Classes

  Aspectual Class   Telic   Dynamic   Durative   Examples
  State               -        -         +       know, have
  Activity            -        +         +       march, paint
  Accomplishment      +        +         +       destroy
  Achievement         +        +         -       notice, win

Table 2: Privative Featural Identification of Aspectual Classes

  Aspectual Class   Telic   Dynamic   Durative   Examples
  State                                  +       know, have
  Activity                     +         +       march, paint
  Accomplishment      +        +         +       destroy
  Achievement         +        +                 notice, win

With privative features, other sentential constituents can add to features provided by the verb but not remove them. On this analysis, the activity features of march ([+durative, +dynamic]) propagate to the sentences in (1), with [+telic] added by the NP or PP, yielding an accomplishment interpretation. The feature specification of this compositionally derived accomplishment is therefore identical to that of a sentence containing a telic accomplishment verb, such as produce in (2).

(2) The commander produced the campaign plan.

Dowty (1979) explored the possibility that aspectual features in fact constrained possible units of meaning and ways in which they combine. In this spirit, Levin and Rappaport Hovav (To appear) demonstrate that limiting composition to aspectually described structures is an important part of an account of how verbal meanings are built up, and what semantic and syntactic combinations are possible.

We draw upon these insights in revising our LCS lexicon in order to encode the aspectual features of verbs. In the next section we describe the LCS representation used in a database of 9000 verbs in 191 major classes. We then describe the relationship of aspectual features to this representation and demonstrate that it is possible to determine aspectual features from LCS structures, with minimal modification. We demonstrate composition of the LCS and corresponding aspectual structures, by using examples from NLP applications that employ the LCS database.

3 Lexical Conceptual Structures

We adopt the hypothesis explored in Dorr and Olsen (1996) (cf. (Tenny, 1994)), that lexical aspect features are abstractions over other aspects of verb semantics, such as those reflected in the verb classes in Levin (1993). Specifically we show that a privative model of aspect provides an appropriate diagnostic for revising lexical representations: aspectual interpretations that arise only in the presence of other constituents may be removed from the lexicon and derived compositionally. Our modified LCS lexicon then allows aspect features to be determined algorithmically both from the verbal lexicon and from composed structures built from verbs and other sentence constituents, using uniform processes and representations.

This project on representing aspectual structure builds on previous work, in which verbs were grouped automatically into Levin's semantic classes (Dorr and Jones, 1996; Dorr, To appear) and assigned LCS templates from a database built as Lisp-like structures (Dorr, 1997). The assignment of aspectual features to the classes in Levin was done by hand inspection of the semantic effect of the alternations described in Part I of Levin (Olsen, 1996), with automatic coindexing to the verb classes (see (Dorr and Olsen, 1996)).
Although a number of Levin's verb classes were aspectually uniform, many required subdivisions by aspectual class; most of these divided atelic "manner" verbs from telic "result" verbs, a fundamental linguistic distinction (cf. (Levin and Rappaport Hovav, To appear) and references therein). Examples are discussed below.

Following Grimshaw (1993), Pinker (1989) and others, we distinguish between semantic structure and semantic content. Semantic structure is built up from linguistically relevant and universally accessible elements of verb meaning. Borrowing from Jackendoff (1990), we assume semantic structure to conform to wellformedness conditions based on Event and State types, further specialized into primitives such as GO, STAY, BE, GO-EXT, and ORIENT. We use Jackendoff's notion of field, which carries Loc(ational) semantic primitives into non-spatial domains such as Poss(essional), Temp(oral), Ident(ificational), Circ(umstantial), and Exist(ential). We adopt a new primitive, ACT, to characterize certain activities (such as march) which are not adequately distinguished from other event types by Jackendoff's GO primitive.² Finally, we add a manner component, to distinguish among verbs in a class, such as the motion verbs run, walk, and march. Consider march, one of Levin's Run verbs (51.3.2):³ we assign it the template in (3)(i), with the corresponding Lisp format shown in (3)(ii):

(3) (i)  [Event ACT_Loc ([Thing * 1], [Manner BY MARCH 26])]
    (ii) (act loc (* thing 1) (by march 26))

This list structure recursively associates arguments with their logical heads, represented as primitive/field combinations, e.g., ACT_Loc becomes (act loc ...) with a (thing 1) argument. Semantic content is represented by a constant in a semantic structure position, indicating the linguistically inert and non-universal aspects of verb meaning (cf. (Grimshaw, 1993; Pinker, 1989; Levin and Rappaport Hovav, To appear)), the manner component by march in this case. The numbers in the lexical entry are codes that map between LCS positions and their corresponding thematic roles (e.g., 1 = agent). The * marker indicates a variable position (i.e., a non-constant) that is potentially filled through composition with other constituents. In (3), (thing 1) is the only argument. However, other arguments may be instantiated compositionally by the end-NLP application, as in (4) below, for the sentence The soldier marched to the bridge:

(4) (i)  [Event CAUSE ([Event ACT_Loc ([Thing SOLDIER], [Manner BY MARCH])],
           [Path TO_Loc ([Thing SOLDIER],
             [Position AT_Loc ([Thing SOLDIER], [Thing BRIDGE])])])]
    (ii) (cause (act loc (soldier) (by march))
           (to loc (soldier) (at loc (soldier) (bridge))))

²Jackendoff (1990) augments the thematic tier of Jackendoff (1983) with an action tier, which serves to characterize activities using additional machinery. We choose to simplify this characterization by using the ACT primitive rather than introducing yet another level of representation.
³The numbers after the verb examples are verb class sections in Levin (1993).
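The Lisp-format LCSs in (3) and (4) translate naturally into nested data structures; the following Python sketch is our own rendering (the tuple encoding, the '*' convention, and the helper function are assumptions made for illustration, not the paper's format).

```python
# '*' marks an open (substitutable) argument position; the trailing
# integers are the thematic-role codes of the lexicon.
march_entry = ('act', 'loc', ('*', 'thing', 1), ('by', 'march', 26))

# Composed LCS for "The soldier marched to the bridge", as in (4)(ii):
marched_to_bridge = (
    'cause',
    ('act', 'loc', ('soldier',), ('by', 'march')),
    ('to', 'loc', ('soldier',),
     ('at', 'loc', ('soldier',), ('bridge',))))

def open_positions(lcs):
    """Collect the starred (still substitutable) argument positions."""
    found = []
    if not isinstance(lcs, tuple):
        return found
    if lcs and lcs[0] == '*':
        found.append(lcs)
    for part in lcs[1:]:
        found.extend(open_positions(part))
    return found

print(open_positions(march_entry))        # [('*', 'thing', 1)]
print(open_positions(marched_to_bridge))  # []  (fully instantiated)
```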
An exhaustive listing of aspectual types and their cor- responding LCS representations is given below. The ! ! notation is used as a wildcard which is filled in by the lexeme associated with the word defined in the lexical entry, thus producing a semantic constant. (5) (i) States: (be ident/perc/loc (thing 2) ... (by !! 26)) (ii) Activities: (act loc/perc (thing 1) (by !! 26)) or (act loc/perc (thing 1) (with instr ... (!!-er 20))) or (act loc/perc (thing 1) (on loc/perc (thing 2)) (by ~ 26)) or (act loc/perc (thing 1) (on loc/perc (thing 2)) (with instr ... (! !-er 20))) (iii) Accomplishments: (cause/let (thing 1) (go loc (thing 2) (toward/away_frora ... ) ) (by !! 26)) or (cause/let (thing 1) (go/be ident (thing 2) ... (!!-ed 9))) or (cause/let (thing 1) (go loc (thing 2) ... (!! 6))) or (cause/let (thing I) (go loc (thing 2) ... (!! 4))) or (cause/let (thing I) (go exist (thing 2) ... (exist 9)) (by !! 26)) (iv) Achievements: (go loc (thing 2) (toward/away_from ...) (by !! 26)) or (go loc (thing 2) ... (!! 6)) or (go loc (thing 2) .... (!! 4)) or (go exist (thing 2) ... (exist 9) (by ~ 26)) or (go ident (thing 2) ... (!!-ed 9)) The Lexical Semantic Templates (LSTs) of Levin and Rappaport-Hovav (To appear) and the decom- positions of Dowry (1979) also capture aspectual dis- tinctions, but are not articulated enough to capture other distinctions among verbs required by a large- scale application. Since the verb classes (state, activity, etc.) are ab- stractions over feature combinations, we now discuss each feature in turn. 4.1 Dynamicity The feature [+dynamic] encodes the distinction be- tween events ([+dynamic]) and states ([0dynamic]). Arguably "the most salient distinction" in an aspect taxonomy (Dahh 1985, p. 28), in the LCS dynamic- ity is encoded at the topmost level. Events are char- acterized by go, act, stay, cause, or let, whereas States are characterized by go-ext or be, as illus- trated in (6). (6) (i) Achievements: decay, rust, redden (45.5) (go ident (* thing 2) (toward ident (thing 2) (at ident (thing 2) (!!-ed 9)))) (ii) Accomplishments: dangle, suspend (9.2} (cause (* thing 1) (be ident (* thing 2) (at ident (thing 2) (!!-ed 9)))) (iii) States: contain, enclose (47.8) (be loc (* thing 2) (in loc (thing 2) (* thing 11)) (by ~ 26)) (iv} Activities: amble, run. zigzag (51.3.2) (act loc (* thing 1) (by !! 26)) 4.2 Durativity The [+durative] feature denotes situations that take time (states, activities and accomplishments). Situ- ations that may be punctiliar (achievements) are un- specified for durativity ((O[sen, To appear in 1997) following (Smith, 1991), inter alia). In the LCS, du- rativity may be identified by the presence of act, be, go-ext, cause, and let primitives, as in (7): these are lacking in the achievement template, shown in (8). (7) (i) States: adore, appreciate, trust (31,2) (be perc (* thing 2) (at perc (thing 2) (* thing 8)) (by !! 26)) (ii) Activities: amble, run, zigzag (51.3.2) (act loc (* thing 1) (by !! 26)) {iii) Accomplishments: destroy, obliterate (44) (cause (* thing 1) (go exist (* thing 2) (away_from exist (thing 2) (at exist (thing 2) (exist 9)))) (by !! 26)) (8) Achievements: crumple, ]old, wrinkle (45.2) (go ident (* thing 2) (toward ident (thing 2) (at ident (thing 2) (!!-ed 9)))) 4.3 Telicity Telic verbs denote a situation with an inherent end or goal. Atelic verbs lack an inherent end, though. as (1) shows, they may appear in telic sentences with other sentence constituents. 
In the LCS, [+telic] verbs contain a Path of a particular type or a con- stant (!!) in the right-most leaf-node argument. Some examples are shown below: 154 (9) (i) leave (... (thing 2) (toward/away_from ...) (by ! ! 26)) (ii) enter (... (thing 2) ... (!!-ed 9)) (iii) pocket (... (thing 2) ... (!! 6)) (iv) mine (... (thing 2) ... (!! 4)) (v) create, destroy (... (thing 2) ... (exist 9) (by !! 26)) In the first case the special path component. toward or away_from, is the telicity indicator, in the next three, the (uninstantiated) constant in the rightmost leaf-node argument, and, in the last case, the special (instantiated) constant exist. Telic verbs include: (10) (i) Accomplishments: mine, quarry (10.9) (cause (* thing 1) (go loc (* thing 2) ((* away from 3) loc (thing 2) (at loc (thing 2) (!! 4))))) (ii) Achievements: abandon, desert, leave(51.2) (go foe (* thing 2) (away_from loc (thing 2) (at loc (thing 2) (* thing 4)))) Examples of atelic verbs are given in (11). The (a)telic representations are especially in keeping with the privative feature characterization Olsen (1994; To appear in 1997): telic verb classes are ho- mogeneously represented: the LCS has a path of a particular type, i.e., a "reference object" at an end state. Atelic verbs, on the other hand. do not have homogeneous representations. (11) (i) Activities: appeal, matter (31.4) (act perc (* thing 1) (on pert (* thing 2)) (by !! 26)) (ii) States: wear (41.3.1) (be loc (* !! 2) (on loc (!! 2) (* thing 11))) 5 Modifying the Lexicon We have examined the LCS classes with respect to identifying aspectual categories and determined that minor changes to 101 of 191 LCS class structures (213/390 subclasses) are necessary, including sub- stituting act for go ill activities and removing Path constituents that need not be stated lexically. For example, the original database entry for class 51.3.2 is: (12) (go loc (* thing 2) ((* toward 5) loc (thing 2) (at loc (thing 2) (thing 6))) (by !! 26)) This is modified to yield the following new database entry: (13) (act loc (* thing 1) (by march 26)) The modified entry is created by changing go to act and removing the ((* toward 5) ...) constituent. Modification of the lexicon to conform to aspec- tual requirements took 3 person-weeks, requiring 1370 decision tasks at 4 minutes each: three passes through each of the 390 subclasses to compare the LCS structure with the templates for each feature (substantially complete) and one pass to change 200 LCS structures to conform with the templates. (Fewer than ten classes need to be changed for dura- tivity or dynamicity, and approximately 200 of the 390 subclasses for telicity.) With the changes we can automatically assign aspect to some 9000 verbs in existing classes. Furthermore. since 6000 of the verbs were classified by automatic means, new verbs would receive aspectual assignments automatically as a result of the classification algorithm. We are aware of no attempt in the literature to determine aspectual information on a similar scale, in part, we suspect, because of the difficulty of assigning features to verbs since they appear in sentences denoting situations of multiple aspectual types. Based on our experience handcoding small sets of verbs, we estimate generating aspectual fea- tures for 9000 entries would require 3.5 person- months (four minutes per entry), with 1 person- month for proofing and consistency checking, given unclassified verbs, organized, say, alphabetically. 
6 Aspectual Feature Determination for Composed LCS's Modifications described above reveal similarities be- tween verbs that carry a lexical aspect, feature as part of their lexical entry and sentences that have features as a result of LCS composition. Conse- quently, the algorithm that we developed for ver- ifying aspectual conformance of the LCS database is also directly applicable to aspectual feature de- termination in LCSs that have been composed from verbs and other relevant sentence constituents. LCS composition is a fundamental operation in two appli- cations for which the LCS serves as an interlingua: machine translation (Dorr et al.. 1993) and foreign language tutoring (Dorr et al., 1995b: Sams. I993: Weinberg et al., 1995). Aspectual feature determina- tion applies to the composed LCS by first, assigning unspecified feature values--atelic [@T], non-durative [@R], and stative [@D]--and then monotonically set- ting these to positive values according to the pres- ence of certain constituents. The formal specification of the aspectual feature determination algorithm is shown in Figure 1. The first step initializes all aspectual values to be un- specified. Next the top node is examined for mem- bership in a set of telicity indicators (CAUSE, LET, 155 Given an LCS representation L: I. Initialize: T(L):=[0T], D(L):=[0R], R(L):=[0D] 2. If Top node of L E {CAUSE, LET, GO} Then T(L):=[+T] If Top node of L E {CAUSE, LET} Then D(L):=[+D], R(L):=t+R] If Top node of L 6 {GO} Then D(L}:=[+D] 3. If Top node of L E {ACT, BE. STAY} Then If Internal node of L E {TO, TOWARD, FORTemp} Then T(L):=[+T] If Top node of L 6 {BE, STAY} Then R(L):=[+R] If Top node of L E {ACT} Then set D(L):=[+D], R(L):=[+R] 4. Return T(L), D(L), R(L). Figure 1: Algorithm for Aspectual Feature Determi- nation GO); if there is a match, the LCS is assumed to be [+T]. In this case, the top node is further checked for membership in sets that indicate dynamicity [+D] and durativity [+R]. Then the top node is exam- ined for membership in a set of atelicity indicators (ACT, BE, STAY); if there is a match, the LCS is further examined for inclusion of a telicizing com- ponent, i.e., TO, TOWARD, FORT¢~p. The LCS is assumed to be [@T] unless one of these telicizing components is present. In either case, the top node is further checked for membership in sets that indi- cate dynamicity [+D] and durativity [+R]. Finally, the results of telicity, dynamicity, and durativity as- signments are returned. The advantage of using this same algorithm for determination of both verbal and sentential aspect is that it is possible to use the same mechanism to perform two independent tasks: (1) Determine in- herent aspectual features associated with a lexical item; (2) Derive non-inherent aspectual features as- sociated with combinations of lexical items. Note, for example, that adding the path l0 the bridge to the [@relic] verb entry in (3) establishes a [+relic] value for the sentence as a whole, an in- terpretation available by the same algorithm that identifies verbs as telic in the LCS lexicon: (14) (i) [Otelic]: (act lee (* thing 1) (by march 26)) (ii) [+telic]: (cause (act loc (soldier) (by march)) (to loc (soldier) (at loc (soldier) (bridge)))) In our applications, access to both verbal and sen- tential lexical aspect features facilitates the task of lexieal choice in machine translation and interpreta- tion of students' answers in foreign language tutor- ing. 
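The following Python sketch renders the algorithm in Figure 1 over the tuple encoding of LCSs introduced earlier. It is our own illustration: feature values are '+' or None (for unspecified), and the representation of the verb entry and the composed sentence are assumptions.

```python
TELICIZERS = {'to', 'toward', 'for_temp'}

def nodes(lcs):
    """All primitive symbols occurring anywhere in an LCS tuple."""
    yield lcs[0]
    for part in lcs[1:]:
        if isinstance(part, tuple) and part:
            yield from nodes(part)

def aspect(lcs):
    telic = dynamic = durative = None         # step 1: [0T], [0D], [0R]
    top = lcs[0]
    if top in {'cause', 'let', 'go'}:         # step 2: telicity indicators
        telic = '+'
        if top in {'cause', 'let'}:
            dynamic = durative = '+'
        else:                                  # top is 'go'
            dynamic = '+'
    if top in {'act', 'be', 'stay'}:          # step 3: atelicity indicators
        if any(node in TELICIZERS for node in nodes(lcs)):
            telic = '+'
        if top in {'be', 'stay'}:
            durative = '+'
        else:                                  # top is 'act'
            dynamic = durative = '+'
    return {'telic': telic, 'dynamic': dynamic, 'durative': durative}

march = ('act', 'loc', ('*', 'thing', 1), ('by', 'march', 26))
composed = ('cause', ('act', 'loc', ('soldier',), ('by', 'march')),
            ('to', 'loc', ('soldier',),
             ('at', 'loc', ('soldier',), ('bridge',))))
print(aspect(march))      # telic None: the bare verb entry, as in (14)(i)
print(aspect(composed))   # telic '+':  the composed LCS, as in (14)(ii)
```

Running the same function on a lexical entry and on a composed LCS reproduces the two tasks described above: inherent feature assignment and compositional feature derivation.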
For example, our machine translation system selects appropriate translations based on the matching of telicity values for the output sentence, whether or not the verbs in the language match in telicity. The English atelic manner verb march and the telic PP across the field from (1) is best translated into Spanish as the telic verb cruzar with the manner marchando as an adjunct:

(15) (i)  E: The soldier marched across the field.
          S: El soldado cruzó el campo marchando.
     (ii) (cause (act loc (soldier) (by march))
            (to loc (soldier) (across loc (soldier) (field))))

Similarly, in changing the Weekend Verbs (i.e., December, holiday, summer, weekend, etc.) template to telic, we make use of the measure phrase (for temp ...) which was previously available, though not employed, as a mechanism in our database. Thus, we now have a lexicalized example of 'doing something for a certain time' that has a representation corresponding to the canonical telic frame V for an hour phrase, as in The soldier marched for an hour:

(16) (act loc (soldier) (by march) (for temp (*head*) (hour)))

This same telicizing constituent, which is compositionally derived in the crawl construction, is encoded directly in the lexical entry for a verb such as December:

(17) (stay loc (* thing 2) ((* [at] 5) loc (thing 2) (thing 6))
       (for temp (*head*) (december 31)))

This lexical entry is composed with other arguments to produce the LCS for John Decembered at the new cabin:

(18) (stay loc (john) (at loc (john) (cabin (new)))
       (for temp (*head*) (december)))

This same LCS would serve as the underlying representation for the equivalent Spanish sentence, which uses an atelic verb estar⁴ in combination with a temporal adjunct durante el mes de Diciembre: John estuvo en la cabaña nueva durante el mes de Diciembre (literally, John was in the new cabin during the month of December).

⁴Since estar may be used with both telic (estar alto) and atelic (estar contento) readings, we analyze it as atelic to permit appropriate composition.

The monotonic composition permitted by the LCS templates is slightly different than that permitted by the privative feature model of aspect (Olsen, 1994; Olsen, To appear in 1997). For example, in the LCS states may be composed into an achievement or accomplishment structure, because states are part
An important corollary to this investigation is that it is possible to refine the lexicon, because vari- able meaning may, in many cases, be attributed to lexical aspect variation predictable by composition rules. In addition, factoring out the structural re- quirements of specific lexical items from the pre- dictable variation that may be described by com- position provides information on the aspectual ef- fect of verbal modifiers and complements. We are therefore able to describe not only the lexical aspect at the sentential level, but also the set of aspectual variations available to a given verb type. References Androutsoponlos, Ioannis. 1996. A Principled Framework for Constructing Natural Language Interfaces to Temporal Databases. Ph.D. thesis, University of Edinburgh. Antonisse, Peggy. 1994. Processing Temporal and Locative Modifiers in a Licensing Model. Techni- cal Report 2:1-38, Working Papers in Linguistics, University of Maryland. Brinton, Laurel J. 1988. The Development of En- glish Aspectaal Systems: Aspectualizers and Post- Verbal Particles. Cambridge University Press, Cambridge. Dahl, ()sten. 1985. Tense and Aspect Systems. Basil Blackwell, Oxford. Dorr, Bonnie J. 1997. Large-Scale Acquisition of LCS-Based Lexicons for Foreign Language Tu- toring. In Proceedings of the Fifth Conference on Applied Natural Language Processing (.4 NLP). Washington, DC. Dorr, Bonnie J. To appear. Large-Scale Dictio- nary Construction for Foreign Language Tutoring and Interlingual Machine Translation. Machine Translation, 12(1). Dorr, Bonnie J., James Hendler, Scott Blanksteen. and Barrie Migdalof. 1993. Use of Lexical Con- ceptual Structure for Intelligent Tutoring. Tech- nical Report UMIACS TR 93-108, CS TR 3161. University of Maryland. Dorr, Bonnie J., Jim Hendler, Scott Blanksteen. and Barrie Migdalof. 1995a. Use of LCS and Dis- course for Intelligent Tutoring: On Beyond Syn- tax. In Melissa Holland, Jonathan Kaplan, and Michelle Sams, editors. Intelligent Language Tu- tors: Balancing Theory and Technology. Lawrence Erlbaum Associates. Hillsdale, N J, pages 289- 309. Dorr, Bonnie J. and Douglas Jones. 1996. Rote of Word Sense Disambiguation in Lexical Ac- quisition: Predicting Semantics from Syntactic Cues. In Proceedings of the International Col~- ference on Computational Linguistics, pages 322- 333, Copenhagen, Denmark. Dorr, Bonnie J., Dekang Lin, Jye-hoon Lee, and Sungki Suh. 1995b. Efficient Parsing for Korean and English: A Parameterized Message Passing Approach. Computational Linguistics, 21(2):255- 263. Doff, Bonnie J. and Mari Broman Olsen. 1996. Multilingual Generation: The Role of Telicity in Lexical Choice and Syntactic Realization. Ma- chine Translation, 11(1-3):37-74. Dowty, David. 1979. Word Meaning in MoT~tague Grammar. Reidel, Dordrecht. Dowty, David. 1986. The Effects of Aspectual Class on the Temporal Structure of Discourse: Seman- tics or Pragmatics? Linguistics and Philosophy. 9:37-61. Grimshaw, Jane. 1993. Semantic Structure and Semantic Content in Lexical Representa- tion. unpublished ms.. Rutgers University. Ne-w Brunswick, NJ. Hovav, Malka Rappaport and Beth Levin. 1995. The Elasticity of Verb .Meaning. In Processes in Argument Structure. pages 1-13, Germany. SfS- Report-06-95, Seminar fiir Sprachwissenschaft. Eberhard-Karls-Universit/it Tiibingen, Tiibingen. Jackendoff, Ray. 1983. Semantics and Cogldtiolt. The MIT Press, Cambridge, MA. Jackendoff, Ray. 1990. Semantic Structures. The MIT Press, Cambridge. MA. Klavans, Judith L. and M. Chodorow. 1992. 
Degrees of Stativity: The Lexical Representation of Verb Aspect. In Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France.

Levin, Beth. 1993. English Verb Classes and Alternations: A Preliminary Investigation. University of Chicago Press, Chicago, IL.

Levin, Beth and Malka Rappaport Hovav. To appear. Building Verb Meanings. In M. Butt and W. Geuder, editors, The Projection of Arguments: Lexical and Syntactic Constraints. CSLI.

Light, Marc. 1996. Morphological Cues for Lexical Semantics. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.

Moens, Marc and Mark Steedman. 1988. Temporal Ontology and Temporal Reference. Computational Linguistics: Special Issue on Tense and Aspect, 14(2):15-28.

Olsen, Mari Broman. 1994. The Semantics and Pragmatics of Lexical Aspect Features. In Proceedings of the Formal Linguistic Society of Midamerica V, pages 361-375, University of Illinois, Urbana-Champaign, May. In Studies in the Linguistic Sciences, Vol. 24.2, Fall 1994.

Olsen, Mari Broman. 1996. Telicity and English Verb Classes and Alternations: An Overview. UMIACS TR 96-15, CS TR 3607, University of Maryland, College Park, MD.

Olsen, Mari Broman. To appear in 1997. The Semantics and Pragmatics of Lexical and Grammatical Aspect. Garland, New York.

Passoneau, Rebecca. 1988. A Computational Model of the Semantics of Tense and Aspect. Computational Linguistics: Special Issue on Tense and Aspect, 14(2):44-60.

Pinker, Steven. 1989. Learnability and Cognition: The Acquisition of Argument Structure. The MIT Press, Cambridge, MA.

Resnik, Philip. 1996. Selectional Constraints: An Information-Theoretic Model and its Computational Realization. Cognition, 61:127-159.

Sams, Michelle. 1993. An Intelligent Foreign Language Tutor Incorporating Natural Language Processing. In Proceedings of Conference on Intelligent Computer-Aided Training and Virtual Environment Technology, NASA: Houston, TX.

Sams, Michelle. 1995. Advanced Technologies for Language Learning: The BRIDGE Project Within the ARI Language Tutor Program. In Melissa Holland, Jonathan Kaplan, and Michelle Sams, editors, Intelligent Language Tutors: Theory Shaping Technology. Lawrence Erlbaum Associates, Hillsdale, NJ, pages 7-21.

Siegel, Eric V. and Kathleen R. McKeown. 1996. Gathering Statistics to Aspectually Classify Sentences with a Genetic Algorithm. Unpublished ms. (cmp-lg/9610002), Columbia University, New York, NY.

Slobin, Dan I. and Aura Bocaz. 1988. Learning to Talk About Movement Through Time and Space: The Development of Narrative Abilities in Spanish and English. Lenguas Modernas, 15:5-24.

Smith, Carlota. 1991. The Parameter of Aspect. Kluwer, Dordrecht.

Tenny, Carol. 1994. Aspectual Roles and the Syntax-Semantics Interface. Kluwer, Dordrecht.

Verkuyl, Henk. 1993. A Theory of Aspectuality: The Interaction Between Temporal and Atemporal Structure. Cambridge University Press, Cambridge and New York.

Weinberg, Amy, Joseph Garman, Jeffery Martin, and Paola Merlo. 1995. Principle-Based Parser for Foreign Language Training in German and Arabic. In Melissa Holland, Jonathan Kaplan, and Michelle Sams, editors, Intelligent Language Tutors: Theory Shaping Technology. Lawrence Erlbaum Associates, Hillsdale, NJ, pages 23-44.
A DOP Model for Semantic Interpretation*

Remko Bonnema, Rens Bod and Remko Scha
Institute for Logic, Language and Computation
University of Amsterdam
Spuistraat 134, 1012 VB Amsterdam
Bonnema@mars.let.uva.nl, Rens.Bod@let.uva.nl, Remko.Scha@let.uva.nl

Abstract

In data-oriented language processing, an annotated language corpus is used as a stochastic grammar. The most probable analysis of a new sentence is constructed by combining fragments from the corpus in the most probable way. This approach has been successfully used for syntactic analysis, using corpora with syntactic annotations such as the Penn Tree-bank. If a corpus with semantically annotated sentences is used, the same approach can also generate the most probable semantic interpretation of an input sentence. The present paper explains this semantic interpretation method. A data-oriented semantic interpretation algorithm was tested on two semantically annotated corpora: the English ATIS corpus and the Dutch OVIS corpus. Experiments show an increase in semantic accuracy if larger corpus-fragments are taken into consideration.

1 Introduction

Data-oriented models of language processing embody the assumption that human language perception and production works with representations of concrete past language experiences, rather than with abstract grammar rules. Such models therefore maintain large corpora of linguistic representations of previously occurring utterances. When processing a new input utterance, analyses of this utterance are constructed by combining fragments from the corpus; the occurrence-frequencies of the fragments are used to estimate which analysis is the most probable one.

*This work was partially supported by NWO, the Netherlands Organization for Scientific Research (Priority Programme Language and Speech Technology).

For the syntactic dimension of language, various instantiations of this data-oriented processing or "DOP" approach have been worked out (e.g. Bod (1992-1995); Charniak (1996); Tugwell (1995); Sima'an et al. (1994); Sima'an (1995; 1996a); Goodman (1996); Rajman (1995a; 1995b); Kaplan (1996); Sekine and Grishman (1995)). A method for extending it to the semantic domain was first introduced by van den Berg et al. (1994). In the present paper we discuss a computationally effective version of that method, and an implemented system that uses it. We first summarize the first fully instantiated DOP model as presented in Bod (1992-1993). Then we show how this method can be straightforwardly extended into a semantic analysis method, if corpora are created in which the trees are enriched with semantic annotations. Finally, we discuss an implementation and report on experiments with two semantically analyzed corpora (ATIS and OVIS).

2 Data-Oriented Syntactic Analysis

So far, the data-oriented processing method has mainly been applied to corpora with simple syntactic annotations, consisting of labelled trees. Let us illustrate this with a very simple imaginary example. Suppose that a corpus consists of only two trees:

[Figure 1: Imaginary corpus of two trees:
[S [NP [Det every] [N woman]] [VP sings]] and [S [NP [Det a] [N man]] [VP whistles]]]

We employ one operation for combining subtrees, called composition, indicated as ∘; this operation identifies the leftmost nonterminal leaf node of one tree with the root node of a second tree (i.e., the second tree is substituted on the leftmost nonterminal leaf node of the first tree).
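A minimal sketch of this composition operation is given below; the (label, children) tuple encoding of trees is ours, chosen only for illustration, with children = None marking an open (substitutable) nonterminal leaf.

```python
# Sketch of the DOP composition operation: substitute a subtree at the
# leftmost open nonterminal leaf of another subtree.

def leftmost_open(tree, path=()):
    label, children = tree
    if children is None:
        return path
    for i, child in enumerate(children):
        if isinstance(child, tuple):
            found = leftmost_open(child, path + (i,))
            if found is not None:
                return found
    return None

def compose(t, u):
    """t o u: substitute u at the leftmost open nonterminal of t.
    The root label of u must match the label of that open node."""
    path = leftmost_open(t)
    assert path is not None, "no open node to substitute into"
    def rebuild(node, path):
        label, children = node
        if not path:
            assert u[0] == label, "root label must match substitution site"
            return u
        i = path[0]
        new_children = list(children)
        new_children[i] = rebuild(children[i], path[1:])
        return (label, new_children)
    return rebuild(t, path)

# [S [NP [Det a] [N .]] [VP whistles]] o [N woman]
t = ("S", [("NP", [("Det", ["a"]), ("N", None)]), ("VP", ["whistles"])])
u = ("N", ["woman"])
parse = compose(t, u)   # full parse of "a woman whistles"
```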
A new input sentence like "A woman whistles" can now be parsed by combining subtrees from this corpus. For instance:

[Figure 2: Derivation and parse for "A woman whistles": the subtree [S [NP [Det a] [N ]] [VP whistles]] is composed with [N woman], yielding the parse [S [NP [Det a] [N woman]] [VP whistles]]]

Other derivations may yield the same parse tree; for instance:¹

[Figure 3: A different derivation generating the same parse for "A woman whistles"]

or

[Figure 4: Another derivation generating the same parse for "A woman whistles": [S [NP Det N] [VP whistles]] ∘ [Det a] ∘ [N woman]]

¹Here t ∘ u ∘ v ∘ w should be read as ((t ∘ u) ∘ v) ∘ w.

Thus, a parse tree can have many derivations involving different corpus-subtrees. DOP estimates the probability of substituting a subtree t on a specific node as the probability of selecting t among all subtrees in the corpus that could be substituted on that node. This probability is equal to the number of occurrences of a subtree t, divided by the total number of occurrences of subtrees t' with the same root node label as t:

$$P(t) = \frac{|t|}{\sum_{t':\,\mathrm{root}(t') = \mathrm{root}(t)} |t'|}$$

The probability of a derivation $t_1 \circ \ldots \circ t_n$ can be computed as the product of the probabilities of the subtrees this derivation consists of: $P(t_1 \circ \ldots \circ t_n) = \prod_i P(t_i)$. The probability of a parse tree is equal to the probability that any of its distinct derivations is generated, which is the sum of the probabilities of all derivations of that parse tree. Let $t_{i,d}$ be the i-th subtree in the derivation d that yields tree T; then the probability of T is given by: $P(T) = \sum_d \prod_i P(t_{i,d})$.

The DOP method differs from other statistical approaches, such as Pereira and Schabes (1992), Black et al. (1993) and Briscoe (1994), in that it does not predefine or train a formal grammar; instead it takes subtrees directly from annotated sentences in a treebank, with a probability proportional to the number of occurrences of these subtrees in the treebank. Bod (1993b) shows that DOP can be implemented using context-free parsing techniques. To select the most probable parse, Bod (1993a) gives a Monte Carlo approximation algorithm. Sima'an (1995) gives an efficient polynomial algorithm for a sub-optimal solution.

The model was tested on the Air Travel Information System (ATIS) corpus as analyzed in the Penn Treebank (Marcus et al. (1993)), achieving better test results than other stochastic grammars (cf. Bod (1996), Sima'an (1996a), Goodman (1996)). On Penn's Wall Street Journal corpus, the data-oriented processing approach has been tested by Sekine and Grishman (1995) and by Charniak (1996). Though Charniak only uses corpus-subtrees smaller than depth 2 (which in our experience constitutes a less-than-optimal version of the data-oriented processing method), he reports that it "outperforms all other non-word-based statistical parsers/grammars on this corpus". For an overview of data-oriented language processing, we refer to (Bod and Scha, 1996).
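The probability definitions above translate directly into code. In the sketch below, the two counter tables stand in for the multiset of corpus subtrees; how they are populated from the treebank is assumed to happen elsewhere.

```python
from collections import Counter
from math import prod

subtree_counts = Counter()   # subtree -> occurrence count in the corpus
root_totals = Counter()      # root label -> total count of subtrees with that root

def root(t):
    return t[0]              # assuming (label, children) trees as above

def p_subtree(t):
    # P(t) = |t| / sum of |t'| over subtrees t' with root(t') = root(t)
    return subtree_counts[t] / root_totals[root(t)]

def p_derivation(ts):
    # P(t1 o ... o tn) = product of P(ti)
    return prod(p_subtree(t) for t in ts)

def p_parse(derivations_of_T):
    # P(T) = sum over all derivations d of T of the product of P(t_{i,d})
    return sum(p_derivation(d) for d in derivations_of_T)
```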
3 Data-Oriented Semantic Analysis

To use the DOP method not just for syntactic analysis, but also for semantic interpretation, four steps must be taken:

1. decide on a formalism for representing the meanings of sentences and surface-constituents.
2. annotate the corpus-sentences and their surface-constituents with such semantic representations.
3. establish a method for deriving the meaning representations associated with arbitrary corpus-subtrees and with compositions of such subtrees.
4. reconsider the probability calculations.

We now discuss these four steps.

3.1 Semantic formalism

The decision about the representational formalism is to some extent arbitrary, as long as it has a well-defined model-theory and is rich enough for representing the meanings of sentences and constituents that are relevant for the intended application domain. For our exposition in this paper we will use a well-known standard formalism: extensional type theory (see Gamut (1991)), i.e., a higher-order logical language that combines lambda-abstraction with connectives and quantifiers. The first implemented system for data-oriented semantic interpretation, presented in Bonnema (1996), used a different logical language, however. And in many application contexts it probably makes sense to use an A.I.-style language which highlights domain structure (frames, slots, and fillers), while limiting the use of quantification and negation (see section 5).

3.2 Semantic annotation

We assume a corpus that is already syntactically annotated as before: with labelled trees that indicate surface constituent structure. Now the basic idea, taken from van den Berg et al. (1994), is to augment this syntactic annotation with a semantic one: to every meaningful syntactic node, we add a type-logical formula that expresses the meaning of the corresponding surface-constituent. If we would carry out this idea in a completely direct way, the toy corpus of Figure 1 might, for instance, turn into the toy corpus of Figure 5.

[Figure 5: Imaginary corpus of two trees with syntactic and semantic labels:
[S:∀x(woman(x)→sing(x)) [NP:λY∀x(woman(x)→Y(x)) [Det:λXλY∀x(X(x)→Y(x)) every] [N:woman woman]] [VP:sing sings]]
[S:∃x(man(x)∧whistle(x)) [NP:λY∃x(man(x)∧Y(x)) [Det:λXλY∃x(X(x)∧Y(x)) a] [N:man man]] [VP:whistle whistles]]]

Van den Berg et al. indicate how a corpus of this sort may be used for data-oriented semantic interpretation. Their algorithm, however, requires a procedure which can inspect the semantic formula of a node and determine the contribution of the semantics of a lower node, in order to be able to "factor out" that contribution. The details of this procedure have not been specified. However, van den Berg et al. also propose a simpler annotation convention which avoids the need for this procedure, and which is computationally more effective: an annotation convention which indicates explicitly how the semantic formula for a node is built up on the basis of the semantic formulas of its daughter nodes. Using this convention, the semantic annotation of the corpus trees is indicated as follows:

• For every meaningful lexical node, a type-logical formula is specified that represents its meaning.
• For every meaningful non-lexical node, a formula schema is specified which indicates how its meaning representation may be put together out of the formulas assigned to its daughter nodes.

In the examples below, these schemata use the variable d1 to indicate the meaning of the leftmost daughter constituent, d2 to indicate the meaning of the second daughter constituent, etc. Using this notation, the semantically annotated version of the toy corpus of Figure 1 is the toy corpus rendered in Figure 6.

[Figure 6: Same imaginary corpus of two trees with syntactic and semantic labels using the daughter notation:
[S:d1(d2) [NP:d1(d2) [Det:λXλY∀x(X(x)→Y(x)) every] [N:woman woman]] [VP:sing sings]]
[S:d1(d2) [NP:d1(d2) [Det:λXλY∃x(X(x)∧Y(x)) a] [N:man man]] [VP:whistle whistles]]]
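To see how the daughter schemata drive interpretation, here is a toy evaluation of the second tree of Figure 6, with meanings modelled as Python functions over a two-element domain. The encoding is ours and only illustrates the mechanism.

```python
# Bottom-up evaluation of daughter-notation annotations: a lexical node
# carries its meaning directly; a nonterminal carries a schema such as
# 'd1(d2)', applied to the meanings of its daughters.

DOMAIN = {"w1", "m1"}
woman = lambda x: x == "w1"
man = lambda x: x == "m1"
whistle = lambda x: x == "m1"

apply_d1_d2 = lambda d1, d2: d1(d2)                      # the schema d1(d2)
det_a = lambda X: lambda Y: any(X(x) and Y(x) for x in DOMAIN)

def evaluate(node):
    schema, children = node
    if not children:                 # lexical node: the schema IS the meaning
        return schema
    return schema(*[evaluate(c) for c in children])

# [S:d1(d2) [NP:d1(d2) [Det:a] [N:man]] [VP:whistle]]
tree = (apply_d1_d2,
        [(apply_d1_d2, [(det_a, []), (man, [])]),
         (whistle, [])])
assert evaluate(tree)                # 'a man whistles' holds in this model
```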
This kind of semantic annotation is what will be used in the construction of the corpora described in section 5 of this paper. It may be noted that the rather oblique description of the semantics of the higher nodes in the tree would easily lead to mistakes, if annotation would be carried out completely manually. An annotation tool that makes the expanded versions of the formulas visible for the annotator is obviously called for. Such a tool was developed by Bonnema (1996); it will be briefly described in section 5.

This annotation convention obviously assumes that the meaning representation of a surface-constituent can in fact always be composed out of the meaning representations of its subconstituents. This assumption is not unproblematic. To maintain it in the face of phenomena such as non-standard quantifier scope or discontinuous constituents creates complications in the syntactic or semantic analyses assigned to certain sentences and their constituents. It is therefore not clear yet whether our current treatment ought to be viewed as completely general, or whether a treatment in the vein of van den Berg et al. (1994) should be worked out.

3.3 The meanings of subtrees and their compositions

As in the purely syntactic version of DOP, we now want to compute the probability of a (semantic) analysis by considering the most probable way in which it can be generated by combining subtrees from the corpus. We can do this in virtually the same way. The only novelty is a slight modification in the process by which a corpus tree is decomposed into subtrees, and a corresponding modification in the composition operation which combines subtrees. If we extract a subtree out of a tree, we replace the semantics of the new leaf node with a unification variable of the same type. Correspondingly, when the composition operation substitutes a subtree at this node, this unification variable is unified with the semantic formula on the substituting tree. (It is required that the semantic type of this formula matches the semantic type of the unification variable.)

A simple example will make this clear. First, let us consider what subtrees the corpus makes available now. As an example, Figure 7 shows one of the decompositions of the annotated corpus sentence "A man whistles". We see that by decomposing the tree into two subtrees, the semantics at the breakpoint node N:man is replaced by a variable.

[Figure 7: Decomposing a tree into subtrees with unification variables: the tree for "a man whistles" is split into [S:d1(d2) [NP:d1(d2) [Det:λXλY∃x(X(x)∧Y(x)) a] [N:U]] [VP:whistle whistles]] and [N:man man], where U is a unification variable]

Now an analysis for the sentence "A woman whistles" can, for instance, be generated in the way shown in Figure 8.

[Figure 8: Generating an analysis for "A woman whistles": composing [N:woman woman] with the first subtree of Figure 7 unifies U with woman, yielding [S:d1(d2) [NP:d1(d2) [Det:λXλY∃x(X(x)∧Y(x)) a] [N:woman woman]] [VP:whistle whistles]]]
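The decomposition and composition operations with typed unification variables can be sketched as follows; the dictionary node format is hypothetical.

```python
import copy

class Var:
    """A unification variable carrying only a semantic type."""
    def __init__(self, sem_type):
        self.sem_type = sem_type

def cut(tree, path):
    """Split `tree` at `path`: the lower part is the full subtree rooted
    there; in the upper part that node keeps its category and type, but its
    semantics becomes a unification variable of the same type."""
    upper = copy.deepcopy(tree)
    node = upper
    for i in path:
        node = node['kids'][i]
    lower = copy.deepcopy(node)
    node['sem'] = Var(node['type'])
    node['kids'] = []
    return upper, lower

def substitute(open_node, subtree):
    """Compose: unify the variable at `open_node` with the semantics of
    `subtree`, allowed only when the semantic types match."""
    assert isinstance(open_node['sem'], Var)
    if open_node['sem'].sem_type != subtree['type']:
        raise ValueError('semantic type clash: substitution blocked')
    open_node.update(copy.deepcopy(subtree))
```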
3.4 The Statistical Model of Data-Oriented Semantic Interpretation

We now define the probability of an interpretation of an input string. Given a partially annotated corpus as defined above, the multiset of corpus subtrees consists of all subtrees with a well-defined top-node semantics that are generated by applying to the trees of the corpus the decomposition mechanism described above. The probability of substituting a subtree t on a specific node is the probability of selecting t among all subtrees in the multiset that could be substituted on that node. This probability is equal to the number of occurrences of a subtree t, divided by the total number of occurrences of subtrees t' with the same root node label as t:

$$P(t) = \frac{|t|}{\sum_{t':\,\mathrm{root}(t') = \mathrm{root}(t)} |t'|} \qquad (1)$$

A derivation of a string is a tuple of subtrees, such that their composition results in a tree whose yield is the string. The probability of a derivation $t_1 \circ \ldots \circ t_n$ is the product of the probabilities of these subtrees:

$$P(t_1 \circ \ldots \circ t_n) = \prod_i P(t_i) \qquad (2)$$

A tree resulting from a derivation of a string is called a parse of this string. The probability of a parse is the probability that any of its derivations occurs; this is the sum of the probabilities of all its derivations. Let $t_{i,d}$ be the i-th subtree in the derivation d that yields tree T; then the probability of T is given by:

$$P(T) = \sum_d \prod_i P(t_{i,d}) \qquad (3)$$

An interpretation of a string is a formula which is provably equivalent to the semantic annotation of the top node of a parse of this string. The probability of an interpretation I of a string is the sum of the probabilities of the parses of this string with a top node annotated with a formula that is provably equivalent to I. Let $t_{i,d,p}$ be the i-th subtree in the derivation d that yields parse p with interpretation I; then the probability of I is given by:

$$P(I) = \sum_p \sum_d \prod_i P(t_{i,d,p}) \qquad (4)$$

We choose the most probable interpretation Î of a string s as the most appropriate interpretation of s.

In Bonnema (1996) a semantic extension of the DOP parser of Sima'an (1996a) is given. But instead of computing the most likely interpretation of a string, it computes the interpretation of the most likely combination of semantically annotated subtrees. As was shown in Sima'an (1996b), the most likely interpretation of a string cannot be computed in deterministic polynomial time. It is not yet known how often the most likely interpretation and the interpretation of the most likely combination of semantically enriched subtrees do actually coincide.

4 Implementations

The first implementation of a semantic DOP model yielded rather encouraging preliminary results on a semantically enriched part of the ATIS corpus. Implementation details and experimental results can be found in Bonnema (1996) and Bod et al. (1996). We repeat the most important observations:

• Data-oriented semantic interpretation seems to be robust; of the sentences that could be parsed, a significantly higher percentage received a correct semantic interpretation (88%) than an exactly correct syntactic analysis (62%).
• The coverage of the parser was rather low (72%), because of the sheer number of different semantic types and constructs in the trees.
• The parser was fast: on the average six times as fast as a parser trained on syntax alone.

The current implementation is again an extension of Sima'an (1996a), by Bonnema.² In our experiments, we notice a robustness and speed-up comparable to our experience with the previous implementation. Besides that, we observe higher accuracy and higher coverage, due to a new method of organizing the information in the tree-bank before it is used for building the actual parser.

²With thanks to Khalil Sima'an for fruitful discussions, and for the use of his parser.

A semantically enriched tree-bank will generally contain a wealth of detail. This makes it hard for a probabilistic model to estimate all parameters. In sections 4.1 and 4.2, we discuss a way of generalizing over semantic information in the tree-bank before a DOP parser is trained on the material. We automatically learn a simpler, less redundant representation of the same information. The method is employed in our current implementation.

4.1 Simplifying the tree-bank

A tree-bank annotated in the manner described above consists of tree-structures with syntactic and semantic attributes at every node. The semantic attributes are rules that indicate how the meaning-representation of the expression dominated by that node is built up out of its parts. Every instance of a semantic rule at a node has a semantic type associated with it. These types usually depend on the lexical instantiations of a syntactic-semantic structure. If we decide to view subtrees as identical iff their syntactic structure, the semantic rule at each node, and the semantic type of each node is identical, any fine-grained type-system will cause a huge increase in different instantiations of subtrees. In the two tree-banks we tested on, there are many subtrees that differ in semantic type, but otherwise share the same syntactic/semantic structure. Disregarding the semantic types completely, on the other hand, will cause syntactic constraints to govern both syntactic substitution and semantic unification. The semantic types of constituents often give rise to differences in semantic structure. If this type information is not available during parsing, important clues will be missing, and loss of accuracy will result.

Apparently, we do need some of the information present in the types of semantic expressions. Ignoring semantic types will result in loss of accuracy, but distinguishing all different semantic types will result in loss of coverage and generalizing power. With these observations in mind, we decided to group the types, and relax the constraints on semantic unification.
The method is employed in our current implementation. 4.1 Simplifying the tree-bank A tree-bank annotated in the manner described above, consists of tree-structures with syntactic and semantic attributes at every node. The semantic attributes are rules that indicate how the meaning- representation of the expression dominated by that node is built-up out of its parts. Every instance of a semantic rule at a node has a semantic type asso- ciated with it. These types usually depend on the lexical instantiations of a syntactic-semantic struc- ture. If we decide to view subtrees as identical iff their syntactic structure, the semantic rule at each node, and the semantic type of each node is identical, any fine-grained type-system will cause a huge in- crease in different instantiations of subtrees. In the two tree-banks we tested on, there are many sub- trees that differ in semantic type, hut otherwise share the same syntactic/semantic structure. Disre- garding the semantic types completely, on the other hand, will cause syntactic constraints to govern both syntactic substitution and semantic unification. The semantic types of constituents often give rise to dif- ferences in semantic structure. If this type informa- tion is not available during parsing, important clues will be missing, and loss of accuracy will result. Apparently, we do need some of the information present in the types of semantic expressions. Ignor- ing semantic types will result in loss of accuracy, but distinguishing all different semantic types will result in loss of coverage and generalizing power. With these observations in mind, we decided to group the types, and relax the constraints on semantic unifi- cation. In this approach, every semantic expression, 2With thanks to Khalil Sima'an for fruitful discus- sions, and for the use of his parser 163 and every variable, has a set of types associated with it. In our semantic DOP model, we modify the con- straints on semantic unification as follows: A vari- able can be unified with an expression, if the inter- section of their respective sets of types is not empty. The semantic types are classified into sets that can be distinguished on the basis of their behavior in the tree-bank. We let the tree-bank data decide which types can be grouped together, and which types should be distinguished. This way we can generalize over semantic types, and exploit relevant type-information in the parsing process at the same time. In learning the optimal grouping of types, we have two concerns: keeping the number of different sets of types to a minimum, and increasing the se- mantic determinacy of syntactic structures enhanced with type-information. We say that a subtree T, with type-information at every node, is semantically determinate, iff we can determine a unique, correct semantic rule for every CFG rule R 3 occurring in T. Semantic determinacy is very attractive from a com- putational point of view: if our processed tree-bank has semantic determinacy, we do not need to involve the semantic rules in the parsing process. Instead, the parser yields parses containing information re- garding syntax and semantic types, and the actual semantic rules can be determined on the basis of that information. In the next section we will elabo- rate on how we learn the grouping of semantic types from the data. 
4.2 Classification of semantic types The algorithm presented in this section proceeds by grouping semantic types occurring with the same syntactic label into mutually exclusive sets, and as- signing to every syntactic label an index that indi- cates to which set of types its corresponding seman- tic type belongs. It is an iterative, greedy algorithm. In every iteration a tuple, consisting of a syntactic category and a set of types, is selected. Distinguish- ing this tuple in the tree bank, leads to the great- est increase in semantic determinacy that could be found. Iteration continues until the increase in se- mantic determinacy is below a certain threshold. Before giving the algorithm, we need some defini- tions: 3By "CFG rule", we mean a subtree of depth 1, with- out a specified root-node semantics, but with the features relevant for substitution, i.e. syntactic category and se- mantic type. Since the subtree of depth 1 is the smallest structural building block of our DOP model, semantic determinacy of every CFG rule in a subtree, means the whole subtree is semantically determinate. tuplesO tuples(T) is the set of all pairs (c, s) in a tree- bank T, where c is a syntactic category, and s is the set of all semantic types that a constituent of category c in T can have. apply() if c is a category, s is a set of types, and T is a tree-bank then apply((c, s), T) yields a tree-bank T', by indexing each instance of category c in T, such that the c constituent is of semantic type t E s, with a unique index i. ambO if T is a tree-bank then arab(T) yields an n E N, such that n is the sum of the frequencies of all CFG rules R that occur in T with more than one corresponding semantic rule. The algorithm starts with a tree-bank To; in To, the cardinality of tuples(To) equals the number of different syntactic categories in To. 1. Ti=o repeat 2. 3. 4. until 5. Ti-1 D((c, s)) = amb(T/)- amb( apply( (c, s), Ti) ) = {(c,s')13(c, s) tuples(T~)& s' E 21sl) 7-/= argmax D(r') r'ET; i:=i+1 Ti := apply(ri, Ti-1) D(T~-I) <-- 5 (5) 21sl is the powerset of s. In the implementation, a limit can be set to the cardinality of s' E 21sl, to avoid excessively long processing time. Obviously, the iteration will always end, if we require 5 to be > 0. When the algorithm finishes, TO,... , Ti--1 con- tain the category/set-of-types pairs that took the largest steps towards semantic determinacy, and are therefore distinguished in the tree-bank. The se- mantic types not occurring in any of these pairs are grouped together, and treated as equivalent. Note that the algorithm cannot be guaranteed to achieve full semantic determinacy. The degree of se- mantic determinacy reached, depends on the consis- tency of annotation, annotation errors, the granular- ity of the type system, peculiarities of the language, in short: on the nature of the tree-bank. To force semantic determinacy, we assign a unique index to those rare instances of categories, i.e, left hand sides 164 PER USer I ik V wants I wi! ADV MP # today I I niet vandaag S dl.d2 VP dl.d2 MP MP MP dl.d2 MP CON MP P NP ! tomorrow destinatlon.place I I maar morgen naar NP NP town.almere suffix.buiten I I almere buiten Figure 9: A tree from the OVIS tree-bank of CFG-rules, that do not have any distinguishing features to account for their differing semantic rule. Now the resulting tree-bank embodies a function from CFG rules to semantic rules. We store this function in a table, and strip all semantic rules from the trees. 
As the experimental results in the next section show, using a tree-bank obtained in this way for data oriented semantic interpretation, results in high coverage, and good probability estimations. 5 Experiments on the OVIS tree-bank The NWO 4 Priority Programme "Language and Speech Technology" is a five year research pro- gramme aiming at the development of advanced telephone-based information systems. Within this programme, the OVIS 5 tree-bank is created. Using a pilot version of the OVIS system, a large number of human-machine dialogs were collected and tran- scribed. Currently, 10.000 user utterances have re- ceived a full syntactic and semantic analysis. Re- grettably, the tree-bank is not available (yet) to the public. More information on the tree-bank can be found on http : ~~grid. let. rug. nZ : 4321/. The se- mantic domain of all dialogs, is the Dutch railways schedule. The user utterances are mostly answers to questions, like: "From where to where do you want to travel?", "At what time do you want to arrive in Amsterdam?", "Could you please repeat your destination?". The annotation method is ro- bust and flexible, as we are dealing with real, spo- ken data, containing a lot of clearly ungrammatical utterances. For the annotation task, the annotation 4Netherlands Organization for Scientific Research 5Public Transport Information System workbench SEMTAGS is used. It is a graphical inter- face, written by Bonnema, offering all functionality needed for examining, evaluating, and editing syn- tactic and semantic analyses. SEMTAGS is mainly used for correcting the output of the DOP-parser. It incrementally builds a probabilistic model of cor- rected annotations, allowing it to quickly suggest al- ternative semantic analyses to the annotator. It took approximately 600 hours to annotate these 10.000 utterances (supervision included). Syntactic annotation of the tree-bank is conven- tional. There are 40 different syntactic categories in the OVIS tree-bank, that appear to cover the syn- tactic domain quite well. No grammar is used to determine the correct annotation; there is a small set of guidelines, that has the degree of detail nec- essary to avoid an "anything goes"-attitude in the annotator, but leaves room for his/her perception of the structure of an utterance. There is no concep- tual division in the tree-bank between POS-tags and nonterminal categories. Figure 9 shows an example tree from the tree- bank. It is an analysis of the Dutch sentence: "Ik(I) wil( want ) niet( not ) vandaag( today) maar( but ) mor- gen(tomorrow) naar(to) Almere Buiten(Almere Buiten)". The analysis uses the formula schemata discussed in section 3.2, but here the interpreta- tions of daughter-nodes are so-called "update" ex- pressions, conforming to a frame structure, that are combined into an update of an information state. The complete interpretation of this utterance is: user.wants.(([#today];[itomorrow]);destination.- place.(town.almere;suffix.buiten)). The semantic for- malism employed in the tree-bank is the topic of the next section. 5.1 The Semantic formalism The semantic formalism used in the OVIS tree-bank, is a frame semantics, defined in Veldhuijzen van Zanten (1996). In this section, we give a very short impression. The well-formedness and validity of an expression is decided on the ba- sis of a type-lattice, called a frame structure. The interpretation of an utterance, is an update of an information state. 
An information state is a repre- sentation of objects and the relations between them, that complies to the frame structure. For OVIS, the various objects are related to concepts in the train travel domain. In updating an information state, the notion of a slot-value assignment is used. Every object can be a slot or a value. The slot-value assign- ments are defined in a way that corresponds closely to the linguistic notion of a ground-focus structure. The slot is part of the common ground, the value 165 Interpretation: Exact Match 95 % 85 , , , 1 2 3 4 5 Max. subtree depth Figure 10: Size of training set: 8500 Sem./Synt. Analysis: Exact Match 90 % 87.69 88.21 85.64 -- 88.66 85 83.08-- 80" , I I I I 1 2 3 4 5 Max. subtree depth Figure 11: Size of training set: 8500 is new information. Added to the semantic formal- ism are pragmatic operators, corresponding to de- nial, confirmation , correction and assertion 6 that indicate the relation between the value in its scope, and the information state. An update expression is a set of paths through the frame structure, enhanced with pragmatic operators that have scope over a certain part of a path. For the semantic DOP model, the semantic type of an expression ¢ is a pair of types (tz,t2). Given the type-lattice "/-of the frame structure, tl is the lowest upper bound in T of the paths in ¢, and t2 is the greatest lower bound in Tof the paths in ¢. 5.2 Experimental results We performed a number of experiments, using a ran- dom division of the tree-bank data into test- and training-set. No provisions were taken for unknown words. The results reported here, are obtained by randomly selecting 300 trees from the tree-bank. All utterances of length greater than one in this selection are used as testing material. We varied the size of the training-set, and the maximal depth of the sub- trees. The average length of the test-sentences was 4.74 words. There was a constraint on the extrac- tion of subtrees from the training-set trees: subtrees could have a maximum of two substitution-sites, and no more than three contiguous lexical nodes (Expe- rience has shown that such limitations improve prob- 6In the example in figure 9, the pragmatic opera- tors #, denial, and !, correction, axe used Interpretation: Exact Match 90.76 92.31 /0 88.21~ 90 87.18 71.27 70" , ' , 1000 2500 40'00 5500 7000 85'00 Tralningset size Figure 12: Max. depth of subtrees = 4 Sem./Synt. Analysis: Exact Match 90 % 87.18 88.21 80 79 49 68 711 70 i , , 1000 2500 40'00 5500 7000 8500 Tralningset size Figure 13: Max. depth of subtrees = 4 ability estimations, while retaining the full power of DOP). Figures 10 and 11 show results using a train- ing set size of 8500 trees. The maximal depth of sub- trees involved in the parsing process was varied from 1 to 5. Results in figure 11 concern a match with the total analysis in the test-set, whereas Figure 10 shows success on just the resulting interpretation. Only exact matches with the trees and interpreta- tions in the test-set were counted as successes. The experiments show that involving larger fragments in the parsing process leads to higher accuracy. Appar- ently, for this domain fragments of depth 5 are too large, and deteriorate probability estimations 7. The results also confirm our earlier findings, that seman- tic parsing is robust. Quite a few analysis trees that did not exactly match with their counterparts in the test-set, yielded a semantic interpretation that did match. 
Finally, figures 12 and 13 show results for differing training-set sizes, using subtrees of maxi- mal depth 4. References M. van den Berg, R. Bod, and R. Scha. 1994. A Corpus-Based Approach to Semantic Interpre- tation. In Proceedings Ninth Amsterdam Collo- quium. ILLC,University of Amsterdam. 7Experiments using fragments of maximal depth 6 and maximal depth 7 yielded the same results as maxi- mal depth 5 166 E. Black, R. Garside, and G. Leech. 1993. Statistically-Driven Computer Grammars of En- glish: The IBM/Lancaster Approach. Rodopi, Amsterdam-Atlanta. R. Bod. 1992. A computational model of language performance: Data Oriented Parsing. In Proceed- ings COLING'92, Nantes. R. Bod. 1993a. Monte Carlo Parsing. In Proceedings Third International Workshop on Parsing Tech- nologies, Tilburg/Durbuy. R. Bod. 1993b. Using an Annotated Corpus as a Stochastic Grammar. In Proceedings EACL'93, Utrecht. R. Bod. 1995. Enriching Linguistics with Statistics: Performance models of Natural Language. Phd-thesis, ILLC-dissertation series 1995-14, University of Amsterdam. ftp ://ftp. fwi. uva. nl/pub/theory/illc/- dissert at ions/DS-95-14, text. ps. gz R Bod. 1996. Two Questions about Data-Oriented Parsing. In Proceedings Fourth Workshop on Very Large Corpora, Copenhagen, Denmark. (cmp- lg/9606022). R. Bod, R. Bonnema, and R. Scha. 1996. A data- oriented approach to semantic interpretation. In Proceedings Workshop on Corpus-Oriented Se- mantic Analysis, ECAI-96, Budapest, Hungary. (cmp-lg/9606024). R. Bod and R. Scha. 1996. Data-oriented lan- guage processing, an overview. Technical Re- port LP-96-13, Institute for Logic, Language and Computation, University of Amsterdam. (cmp- lg/9611003). R. Bonnema. 1996. Data oriented se- mantics. Master's thesis, Department of Computational Linguistics, University of Am- sterdam, http ://mars. let. uva. nl/remko_b/- dopsem/script ie. html T. Briscoe. 1994. Prospects for practical parsing of unrestricted text: Robust statistical parsing tech- niques. In N. Oostdijk and P de Haan, editors, Corpus-based Research into Language. Rodopi, Amsterdam. E. Charniak. 1996. Tree-bank grammars. In Pro- ceedings AAAI'96, Portland, Oregon. L. Gamut. 1991. Logic, Language and Meaning. Chicago University Press. J Goodman. 1996. Efficient Algorithms for Parsing the DOP Model. In Proceedings Empirical Meth- ods in Natural Language Processing, Philadelphia, Pennsylvania. R. Kaplan. 1996. A probabilistic approach to Lexical-Functional Grammar. Keynote pa- per held at the LFG-workshop 1996, Greno- ble, France. ftp ://ftp.parc. xerox, com/pub/- nl/slides/grenoble96/kaplan-dopt alk .ps. M. Marcus, B. Santorini, and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of En- glish: The Penn Treebank. Computational Lin- guistics, 19(2). F. Pereira and Y. Schabes. 1992. Inside-outside reestimation from partially bracketed corpora. In Proceedings of the 30th Annual Meeting of the ACL, Newark, De. M. Rajman. 1995a. Apports d'une approche a base de corpus aux techniques de traitement automa- tique du langage naturel. Ph.D. thesis, Ecole Na- tionale Superieure des Telecommunications, Paris. M. Rajman. 1995b. Approche probabiliste de l'analyse syntaxique. Traitement Automatique des Langues, 36:1-2. S. Sekine and R. Grishman. 1995. A corpus- based probabilistic grammar with only two non-terminals. In Proceedings Fourth Interna- tional Workshop on Parsing Technologies, Prague, Czech Republic. K. Sima'an, R. Bod, S. Krauwer, and R. Scha. 1994. 
Efficient Disambiguation by means of Stochastic Tree Substitution Grammars. In Proceedings International Conference on New Methods in Language Processing. CCL, UMIST, Manchester.

K. Sima'an. 1995. An optimized algorithm for Data Oriented Parsing. In Proceedings International Conference on Recent Advances in Natural Language Processing. Tzigov Chark, Bulgaria.

K. Sima'an. 1996a. An optimized algorithm for Data Oriented Parsing. In R. Mitkov and N. Nicolov, editors, Recent Advances in Natural Language Processing 1995, volume 136 of Current Issues in Linguistic Theory. John Benjamins, Amsterdam.

K. Sima'an. 1996b. Computational Complexity of Probabilistic Disambiguation by means of Tree-Grammars. In Proceedings COLING'96, Copenhagen, Denmark.

D. Tugwell. 1995. A state-transition grammar for data-oriented parsing. In Proceedings European Chapter of the ACL'95, Dublin, Ireland.

G. Veldhuijzen van Zanten. 1996. Semantics of update expressions. NWO Priority Programme Language and Speech Technology, http://grid.let.rug.nl:4321/.
Fertility Models for Statistical Natural Language Understanding

Stephen Della Pietra*, Mark Epstein, Salim Roukos, Todd Ward
IBM Thomas J. Watson Research Center
P.O. Box 218, Yorktown Heights, NY 10598, USA
(*Now with Renaissance Technologies, Stony Brook, NY, USA)
sdella@rentec.com, {meps,roukos,tward}@watson.ibm.com

Abstract

Several recent efforts in statistical natural language understanding (NLU) have focused on generating clumps of English words from semantic meaning concepts (Miller et al., 1995; Levin and Pieraccini, 1995; Epstein et al., 1996; Epstein, 1996). This paper extends the IBM Machine Translation Group's concept of fertility (Brown et al., 1993) to the generation of clumps for natural language understanding. The basic underlying intuition is that a single concept may be expressed in English as many disjoint clumps of words. We present two fertility models which attempt to capture this phenomenon. The first is a Poisson model which leads to appealing computational simplicity. The second is a general nonparametric fertility model. The general model's parameters are bootstrapped from the Poisson model and updated by the EM algorithm. These fertility models can be used to impose clump fertility structure on top of preexisting clump generation models. Here, we present results for adding fertility structure to unigram, bigram, and headword clump generation models on ARPA's Air Travel Information Service (ATIS) domain.

1 Introduction

The goal of a natural language understanding (NLU) system is to interpret a user's request and respond with an appropriate action. We view this interpretation as translation from a natural language expression, E, into an equivalent expression, F, in an unambiguous formal language. Typically, this formal language will be hand-crafted to enhance performance on some task-specific domain. A statistical NLU system translates a request E as the most likely formal expression $\hat{F}$ according to a probability model p:

$$\hat{F} = \operatorname*{arg\,max}_{F} p(F \mid E) = \operatorname*{arg\,max}_{F} p(F, E)$$

We have previously built a fully automatic statistical NLU system (Epstein et al., 1996) based on the source-channel factorization of the joint distribution p(F, E):

$$p(F, E) = p(F)\, p(E \mid F)$$

This factorization, which has proven effective in speech recognition (Bahl, Jelinek, and Mercer, 1983), partitions the joint probability into an a priori intention model p(F), and a translation model p(E|F) which models how a user might phrase a request F in English.

For the ATIS task, our formal language is a minor variant of the NL-Parse (Hemphill, Godfrey, and Doddington, 1990) used by ARPA to annotate the ATIS corpus. An example of a formal and natural language pair is:

• F: List flights from New Orleans to Memphis flying on Monday departing early_morning
• E: do you have any flights going to Memphis leaving New Orleans early Monday morning

Here, the evidence for the formal language concept 'early_morning' resides in the two disjoint clumps of English 'early' and 'morning'. In this paper, we introduce the notion of concept fertility into our translation models p(E|F) to capture this effect and the more general linguistic phenomenon of embedded clauses. Basically, this entails augmenting the translation model with terms of the form p(n|f), where n is the number of clumps generated by the formal language word f. The resulting model can be trained automatically from a bilingual corpus of English and formal language sentence pairs.
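Schematically, decoding under this factorization amounts to scoring every candidate formal expression and keeping the best; the fertility terms p(n|f) live inside the translation model developed in the following sections. The function names below are placeholders for the two trained models.

```python
# Source-channel decoding sketch: F-hat = argmax_F p(F) * p(E | F).
# p_intention and p_translation stand in for the intention and translation
# models; how candidates are proposed is assumed to happen elsewhere.

def decode(E, candidates, p_intention, p_translation):
    return max(candidates, key=lambda F: p_intention(F) * p_translation(E, F))
```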
Other attempts at statistical NLU systems have used various meaning representations such as con- cepts in the AT&T system (Levin and Pieraccini, 1995) or initial semantic structure in the BBN sys- tem (Miller et al., 1995). Both of these systems re- quire significant rule-based transformations to pro- duce disambiguated interpretations which are then 168 used to generate the SQL query for ATIS. More re- cently, BBN has replaced handwritten rules with de- cision trees (Miller et al., 1996). Moreover, both sys- tems were trained using English annotated by hand with segmentation and labeling, and both systems produce a semantic representation which is forced to preserve the time order expressed in the Eng- lish. Interestingly, both the AT&T and BBN sys- tems generate words within a clump according to bigram models. Other statistical approachs to NLU include decision trees (Kuhn and Mori, 1995) and neural nets (Gorin et al., 1991). In earlier IBM translation systems (Brown et al., 1993) each English word would be generated by, or "aligned to", exactly one formal language word. This mapping between the English and formal lan- guage expressions is called the "alignment". In the simplest case, the translation model is simply pro- portional to the product of word-pair translation probabilities, one per element in the alignment. In these models, the alignment provides all of the struc- ture in the translation model. The alignment is a "hidden" quantity which is not annotated in the training data and must be inferred indirectly. The EM algorithm (Dempster, Laird, and Rubin, 1977) used to train such "hidden" models requires us to sum an expression over all possible alignments. These early models were developed for French to English translation. However, in NLU there is a fun- damental asymmetry between the natural language and the unambiguous formal language. Most no- tably, one formal language word may frequently cor- respond to whole English phrases. We added the "clump", an extra layer of structure, to accomodate this phenomenon (Epstein et al., 1996). In this para- digm, formal language words first generate a clump- ing, or partition, of the word slots of the English expression. Then, each clump is filled in according to a translation model as before. The alignment is defined between the formal language words and the clumps. Then, both the alignment and the clumping are hidden structures which must be summed over to train the models. Already, these models represent significant progress. They learn automatically from a bilin- gual corpus of English and formal language sen- tences. They do not require linguistically knowl- edgeable experts to tediously annotate a training corpus. Rather, they rely upon a group of trans- lators with significantly less linguistic knowledge to produce a bilingual training corpus. The fertility models introduced below maintain these benefits while slightly improving performance. 2 Fertility Clumping Translation Models The rationale behind a clumping model is that the input English can be clumped or bracketed into phrases. Each clump is then generated from a sin- gle formal language word using a translation model. The notion of what constitutes a natural clumping depends on the formal language. For example, sup- pose the English sentence were: I want to fly to Memphis please. 
If the formal language for this sentence were: LIST FLIGHTS TO LOCATION, then the most plausible clumping would be:

[I want] [to fly] [to] [Memphis] [please],

for which we would expect "[I want]" and "[please]" to be generated from "LIST", "[to fly]" from "FLIGHTS", "[to]" from "TO", and "[Memphis]" from "LOCATION". Similarly, if the formal language were: LIST FLIGHTS DESTINATION_LOC, then the most natural clumping would be:

[I want] [to fly] [to Memphis] [please],

in which we would now expect "[to Memphis]" to be generated by "DESTINATION_LOC". Although these clumpings are perhaps the most natural, neither the clumping nor the alignment is annotated in our training data. Instead, both the alignment and the clumping are viewed as "hidden" quantities for which all values are possible with some probability. The EM algorithm is used to produce a maximum likelihood estimate of the model parameters, taking into account all possible alignments and clumpings.

In the discussion of fertility models we denote an English sentence by E, which consists of $\ell(E)$ words. Similarly, we denote the formal language by F, a tuple of order $\ell(F)$, whose individual elements are denoted by $f_i$. A clumping for a sentence partitions E into a tuple of clumps C. The number of clumps in C is denoted by $\ell(C)$, and is an integer in the range $1 \ldots \ell(E)$. A particular clump is denoted by $c_i$, where $i \in \{1, \ldots, \ell(C)\}$. The number of words in $c_i$ is denoted by $\ell(c_i)$; $c_1$ begins at the first word in the sentence, and $c_{\ell(C)}$ ends at the last word in the sentence. The clumps form a proper partition of E. All the words in a clump c must align to the same f. An alignment between E and F determines which f generates each clump of E in C. Similarly, A denotes the alignment, with $\ell(A) = \ell(C)$, and $a_i$ denotes the formal language word to which the words in $c_i$ align. The individual words in a clump c are represented by $e_1 \ldots e_{\ell(c)}$.

For all fertility models, the fundamental parameters are the joint probabilities p(E, C, A, F). Since the clumping and alignment are hidden, to compute the probability that E is generated by F, one calculates:

$$p(E \mid F) = \sum_{C,A} p(E, C, A \mid F)$$

3 General and Poisson Fertility

In the general fertility model, the translation probability with "revealed" alignment and clumping is

$$p(E, C, A \mid F) = \frac{1}{L!} \prod_{i=1}^{\ell(F)} p(n_i \mid f_i)\, n_i! \prod_{j=1}^{\ell(C)} p(c_j \mid f_{a_j}) \qquad (1)$$

$$p(c \mid f) = p(\ell(c) \mid f) \prod_{i=1}^{\ell(c)} p(e_i \mid f_c) \qquad (2)$$

where $p(n_i \mid f_i)$ is the fertility probability of generating $n_i$ clumps by formal word $f_i$. Note that $\sum_i n_i = L$, the number of clumps. The factorial terms combine to give an inverse multinomial coefficient which is the uniform probability distribution for the alignment A of F to C.

It appears that the computation of the likelihood, which sums such product terms over all clumpings and alignments, is exponential. Although dynamic programming can reduce the complexity, there remain an exponentially large number of terms to evaluate in each iteration of the EM algorithm. We resort to a top-N approximation to the EM sum for the general model, summing over candidate clumpings and alignments proposed by the Poisson fertility model developed below.

If one assumes that the fertility is modeled by the Poisson distribution with mean fertility $\lambda_f$:

$$p(n \mid f) = \frac{e^{-\lambda_f} \lambda_f^{n}}{n!} \qquad (3)$$

then a polynomial time training algorithm exists. The simplicity arises from the fortuitous cancellation of n! between the Poisson distribution and the uniform alignment probability.
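For very short sentences, equation (1) can be evaluated by brute force, which makes the exponential sum over hidden clumpings and alignments explicit. The sketch below assumes the two model distributions p_fert(n|f) and p_clump(c|f) are given; it is an illustration of the model, not a practical trainer.

```python
from itertools import combinations, product
from math import factorial, prod

def clumpings(words):
    # All contiguous partitions of the word sequence into clumps.
    n = len(words)
    for k in range(n):                                # k = number of cut points
        for cuts in combinations(range(1, n), k):
            bounds = (0,) + cuts + (n,)
            yield [tuple(words[a:b]) for a, b in zip(bounds, bounds[1:])]

def p_E_given_F(words, F, p_fert, p_clump):
    total = 0.0
    for C in clumpings(words):
        L = len(C)
        for A in product(range(len(F)), repeat=L):    # clump j aligned to F[A[j]]
            n = [A.count(i) for i in range(len(F))]   # fertility of each f in F
            term = 1.0 / factorial(L)                 # uniform alignment factor
            term *= prod(p_fert(n[i], F[i]) * factorial(n[i])
                         for i in range(len(F)))
            term *= prod(p_clump(C[j], F[A[j]]) for j in range(L))
            total += term
    return total
```

The number of terms grows with both the number of clumpings of E and the $\ell(F)^L$ alignments, which is exactly why the Poisson cancellation exploited next matters.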
Substituting equation 3 into equation 1 yields:

$$p(E, C, A \mid F) = \frac{1}{L!} \prod_{i=1}^{\ell(F)} e^{-\lambda_{f_i}} \lambda_{f_i}^{n_i} \prod_{j=1}^{\ell(C)} p(c_j \mid f_{a_j}) \qquad (4)$$

$$= \frac{1}{L!} \prod_{i=1}^{\ell(F)} e^{-\lambda_{f_i}} \prod_{j=1}^{\ell(C)} q(c_j \mid f_{a_j}) \qquad (5)$$

$$q(c \mid f) = \lambda_f\, p(c \mid f) \qquad (6)$$

where the $\lambda_{f_i}^{n_i}$ factors have been absorbed into the effective clump score q(c|f). In this form, it is particularly simple to explicitly sum over all alignments A to obtain p(E, C | F) by repeated application of the distributive law. The resulting polynomial time expressions are:

$$p(E, C \mid F) = \frac{1}{L!} \prod_{i=1}^{\ell(F)} e^{-\lambda_{f_i}} \prod_{j=1}^{\ell(C)} q(c_j \mid F) \qquad (7)$$

$$q(c \mid F) = \sum_{f \in F} q(c \mid f) \qquad (8)$$

The q(c|F) values for all possible clumpings can be calculated in $O(\ell(E)^2 \ell(F))$ time if the maximum clump size is unbounded, and in $O(\ell(E)\,\ell(F))$ if bounded. The Viterbi decoding algorithm (Forney, 1973) is used to calculate $p(E \mid L, F)$ from these expressions. The Viterbi algorithm produces a score which is the sum over all possible clumpings for a fixed L. This score must then be normalized by the factor $\exp(-\sum_{i=1}^{\ell(F)} \lambda_{f_i}) / L!$. The EM count accumulation is done using an adaptation of the Baum-Welch algorithm (Baum, 1972) which searches through the space of all possible clumpings, first considering 1 clump, then 2, and so forth.

Initial values for p(e|f) are bootstrapped from Model 1 (Epstein et al., 1996) with the initial mean fertilities $\lambda_f$ set to 1. We also fixed the maximum clump size at 5 words. Empirically, we found it beneficial to hold the p(e|f) parameters fixed for 20 iterations to allow the other parameters to train to reasonable values. After training, the translation probabilities and clump lengths are smoothed using deleted interpolation (Bahl, Jelinek, and Mercer, 1983).

Since we have been unable to find a polynomial time algorithm to train the general fertility model, we use the Poisson model to "expose" the hidden alignments. The Poisson fertility model gives the most likely 1000 clumpings and alignments, which are then rescored according to the current general fertility model parameters. This gives fractional counts for each of the 1000 alignments, which are then used to update the general fertility model parameters.

4 Improved Clump Modeling

In both the Poisson and general fertility models, the computation of p(c|f) in equation 2 uses a unigram model. Each English word $e_i$ is generated with probability $p(e_i \mid f_c)$. Two more powerful modeling techniques for modeling clump generation are n-gram language models (Miller et al., 1995; Levin and Pieraccini, 1995; Epstein, 1996) and headword language models (Epstein, 1996). A bigram language model uses:

$$p(c \mid f) = p(\ell(c) \mid f)\, p(e_1 \mid \mathrm{bdy}, f_c)\, p(\mathrm{bdy} \mid e_{\ell(c)}, f_c) \prod_{i=2}^{\ell(c)} p(e_i \mid e_{i-1}, f_c)$$

where bdy is a special marker to delimit the beginning and end of the clump.

A headword language model uses two unigram models, a headword model and a non-headword model. Each clump is required to have a headword; all other words are non-headwords. The identity of a clump's headword is hidden, hence it is necessary to sum over all possible headwords:

$$p(c \mid f) = p(\ell(c) \mid f) \sum_{i=1}^{\ell(c)} p_{\mathrm{head}}(e_i \mid f_c) \prod_{j \neq i} p_{\mathrm{nonhead}}(e_j \mid f_c)$$
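In code, the headword clump score is a sum over each choice of headword position; p_len, p_head and p_nonhead below are placeholders for the three trained distributions.

```python
from math import prod

def p_clump_headword(clump, f, p_len, p_head, p_nonhead):
    # Sum over every candidate headword position i in the clump.
    total = 0.0
    for i, head in enumerate(clump):
        others = prod(p_nonhead(e, f)
                      for j, e in enumerate(clump) if j != i)
        total += p_head(head, f) * others
    return p_len(len(clump), f) * total
```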
The identity of a clump's headword is hidden, hence it is necessary 170 Word ~ p (n = O) late 1.49 .00 early 1.55 .00 morning 1.40 .01 afternoon 1.62 .00 early_morning 2.50 .00 = i) .62 .89 .85 .85 .16 p = 2) p >= 3) .28 .10 .03 .08 .11 .03 .12 .03 .69 .15 Table 1: Trained Poisson and General Fertility Word early morning List Top p(elf) Score early .37 an .22 i .09 morning .63 in .12 leaving .05 the .21 me .19 show .18 what .17 please .04 Top ph,ad(elf ) Score early .68 i .23 day .06 morning .75 leaving .06 flights .05 show .49 what .17 give .07 you .06 list .05 Top p,~onhe~d(elf ) Score an .30 flight .29 would .10 the .43 in .37 of .08 me .45 the .19 all .12 are .05 please .05 Table 2: Trained Translation Probabilities using Poisson Fertility Table Model DEC93 DEC93a 1 Clump Clump-HW Clump-BG Poisson Poisson-HW Poisson-BG General General-HW General-BG 75.00 74.78 75.89 76.79 78.12 78.12 78.12 79.91 79.91 73.21 75.22 77.01 78.35 78.12 81.25 81.25 81.25 82.59 79.91 83.04 3: Class A CAS on Patterns for DEC93 171 to sum over all possible headwords: p(c If) = If) ~°~ i=1 j¢i 5 Example Fertilities To illustrate how well fertility captures simple cases of embedding, trained fertilities are shown in table 1 for several formal language words denoting time in- tervals. As expected, "early_morning" dominantly produces two clumps, but can produce either one or three clumps with reasonable probability. "morn- ing" and "afternoon" train to comparable fertilities and preferentially generate a single clump. Another interesting case is the formal language token "List" which trains to a A of 0.62 indicating that it fre- quently generates no English text. As a further check, the A values for "from", "to", and the two special classed words "CITY-l" and "CITY-2" are near 1, ranging between 0.96 and 1.17. Some trained translation probabilities are shown for the unigram and headword models in table 2. The formal language words have captured reason- able English words for their most likely transla- tion or headword translation. However, "early" and "morning" have fairly undesirable looking sec- ond and third choices. The reason for this is that these undesirable words are frequently adjacent to the English words "early" and "morning"; hence the training algorithm includes contributions with two word clumps containing these extraneous words. This is the price we pay for not using supervised training data. Intriguingly, the headword model is more strongly biased towards the likely translations and has a smoother tail than the unigram model. 6 Results The translation models were trained with 5627 context-independent ATIS sentences and smoothed with 600 sentences. In addition, 3567 training sen- tences were manually aligned and included in a sep- arate training experiment. This allows comparison between an unannotated corpus and a partially an- notated one. We employ a trivial decoder and language model since our emphasis is on evaluating the performance of different translation models. Our decoder is a sim- ple pattern matcher. That is, we accumulate the dif- ferent formal language patterns seen in the training set, and score each of them on the test set. The lan- guage model is just the unsmoothed unigram prob- ability distribution of the patterns. This LM has a 10% chance of not including a test pattern and its use leads to pessimistic performance estimates. A more general language model for ATIS is presented in (Koppelman et al., 1995). 
Answers are gener- ated by an SQL program which is a deterministically constructed from the formal language of our system. The accuracy of these database answers is measured using ARPA's Common Answer Specification (CAS) metric. The results are presented in table 3 for ARPA's December 1993 blind test set. The column headed DEC93 reports results on unsupervised training data, while the column entitled DEC93a contains the results from using models trained on the partially annotated corpus. The rows correspond to various translation models. Model 1 is the word-pair trans- lation model used in simple machine translation and understanding models (Brown et al., 1993; Epstein et al., 1996). The models labeled "Clump" use a basic clumped model without fertility. The mod- els labeled "Poisson" and "General" use the Poisson and general fertility models presented in this paper. The "HW" and "BG" suffixes indicate the results when p(e[f) is computed with a headword or bigram model. The partially annotated corpus provides an in- crease in performance of about 2-3% for most mod- els. For General-LM, results increased by 8-10%. The Poisson and general fertility models show a 2- 5% gain in performance over the basic clump model when using the partially annotated corpus. This is a reduction of the error rate by 10-20%. The unan- notated corpus also shows a comparable gain. Acknowledgement: This work was sponsored in part by ARPA and monitored by Fort Huachuca HJ1500-4309-0513. The views and conclusions con- tained in this document should not be interpreted as representing the official policies of the U.S. Gov- ernment. References Bahl, Lalit R., Frederick Jelinek, and Robert L. Mercer. 1983. A maximum likelihood approach to continuous speech recognition. IEEE Trans- actions on Pattern Analysis and Machine Intelli- gence, PAMI-5(2):179-190, March. Baum, L.E. 1972. An inequality and associated maximization technique in statistical estimation of probabilistic functions of a Markov process. In- equalities, 3:1-8. Brown, Peter F., Stephen A. DellaPietra, Vincent J. DellaPietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. In Computational Linguis- tics, pages 19(2):263-311, June. Dempster, A.P., N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38. 172 Epstein, M. 1996. Statistical Source Channel Mod- els for Natural Language Understanding. Ph.D. thesis, New York University, September. Epstein, M., K. Papineni, S. Roukos, T. Ward, and S. Della Pietra. 1996. Statistical natural lan- guage understanding using hidden clumpings. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, pages 176-179, Atlanta, Georgia, May. Forney, G. David. 1973. The viterbi algorithm. Pro- ceedings of the IEEE, 61:268-278, March. Gorin, A., S. Levinson, A. Gertner, and E. Goldman. 1991. Adaptive acquisition of language. Com- puter Speech and Language, 5:101-132. Hemphill, C., J. Godfrey, and G. Doddington. 1990. The ATIS spoken language systems pilot corpus. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 96-101, Hidden Valley, PA, June. Morgan Kaufmann Publishers, Inc. Koppelman, J., S. Della Pietra, M. Epstein, and S. Roukos. 1995. A statistical approach to lan- guage modeling for the ATIS task. In Proceedings of the Spoken Language Systems Workshop, pages 1785-1788, Madrid, Spain, September. Kuhn, R. and R. De Mori. 
1995. The application of semantic classification trees to natural language understanding. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(5):449-460, May.

Levin, E. and R. Pieraccini. 1995. Chronus, the next generation. In Proceedings of the Spoken Language Systems Workshop, pages 269-271, Austin, Texas, January.

Miller, S., M. Bates, R. Bobrow, R. Ingria, J. Makhoul, and R. Schwartz. 1995. Recent progress in hidden understanding models. In Proceedings of the Spoken Language Systems Workshop, pages 276-279, Austin, Texas, January.

Miller, S., D. Stallard, R. Bobrow, and R. Schwartz. 1996. A fully statistical approach to natural language interfaces. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 55-61, Santa Cruz, CA, June. Morgan Kaufmann Publishers, Inc.
Predicting the Semantic Orientation of Adjectives

Vasileios Hatzivassiloglou and Kathleen R. McKeown
Department of Computer Science
450 Computer Science Building
Columbia University
New York, N.Y. 10027, USA
{vh, kathy}@cs.columbia.edu

Abstract

We identify and validate from a large corpus constraints from conjunctions on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives are of same or different orientations, achieving 82% accuracy in this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. Evaluations on real data and simulation experiments indicate high levels of performance: classification precision is more than 90% for adjectives that occur in a modest number of conjunctions in the corpus.

1 Introduction

The semantic orientation or polarity of a word indicates the direction the word deviates from the norm for its semantic group or lexical field (Lehrer, 1974). It also constrains the word's usage in the language (Lyons, 1977), due to its evaluative characteristics (Battistella, 1990). For example, some nearly synonymous words differ in orientation because one implies desirability and the other does not (e.g., simple versus simplistic). In linguistic constructs such as conjunctions, which impose constraints on the semantic orientation of their arguments (Anscombre and Ducrot, 1983; Elhadad and McKeown, 1990), the choices of arguments and connective are mutually constrained, as illustrated by:

    The tax proposal was simple and well-received by the public.
    The tax proposal was simplistic but well-received by the public.
    *The tax proposal was simplistic and well-received by the public.

In addition, almost all antonyms have different semantic orientations.¹ If we know that two words relate to the same property (for example, members of the same scalar group such as hot and cold) but have different orientations, we can usually infer that they are antonyms. Given that semantically similar words can be identified automatically on the basis of distributional properties and linguistic cues (Brown et al., 1992; Pereira et al., 1993; Hatzivassiloglou and McKeown, 1993), identifying the semantic orientation of words would allow a system to further refine the retrieved semantic similarity relationships, extracting antonyms.

Unfortunately, dictionaries and similar sources (thesauri, WordNet (Miller et al., 1990)) do not include semantic orientation information.² Explicit links between antonyms and synonyms may also be lacking, particularly when they depend on the domain of discourse; for example, the opposition bear-bull appears only in stock market reports, where the two words take specialized meanings.

In this paper, we present and evaluate a method that automatically retrieves semantic orientation information using indirect information collected from a large corpus. Because the method relies on the corpus, it extracts domain-dependent information and automatically adapts to a new domain when the corpus is changed. Our method achieves high precision (more than 90%), and, while our focus to date has been on adjectives, it can be directly applied to other word classes. Ultimately, our goal is to use this method in a larger system to automatically identify antonyms and distinguish near synonyms.

¹ Exceptions include a small number of terms that are both negative from a pragmatic viewpoint and yet stand in an antonymic relationship; such terms frequently lexicalize two unwanted extremes, e.g., verbose-terse.
² Except implicitly, in the form of definitions and usage examples.
2 Overview of Our Approach

Our approach relies on an analysis of textual corpora that correlates linguistic features, or indicators, with semantic orientation. While no direct indicators of positive or negative semantic orientation have been proposed³, we demonstrate that conjunctions between adjectives provide indirect information about orientation. For most connectives, the conjoined adjectives usually are of the same orientation: compare fair and legitimate and corrupt and brutal, which actually occur in our corpus, with *fair and brutal and *corrupt and legitimate (or the other cross-products of the above conjunctions), which are semantically anomalous. The situation is reversed for but, which usually connects two adjectives of different orientations.

The system identifies and uses this indirect information in the following stages:

1. All conjunctions of adjectives are extracted from the corpus along with relevant morphological relations.
2. A log-linear regression model combines information from different conjunctions to determine if each two conjoined adjectives are of same or different orientation. The result is a graph with hypothesized same- or different-orientation links between adjectives.
3. A clustering algorithm separates the adjectives into two subsets of different orientation. It places as many words of same orientation as possible into the same subset.
4. The average frequencies in each group are compared and the group with the higher frequency is labeled as positive.

In the following sections, we first present the set of adjectives used for training and evaluation. We next validate our hypothesis that conjunctions constrain the orientation of conjoined adjectives and then describe the remaining three steps of the algorithm. After presenting our results and evaluation, we discuss simulation experiments that show how our method performs under different conditions of sparseness of data.

³ Certain words inflected with negative affixes (such as in- or un-) tend to be mostly negative, but this rule applies only to a fraction of the negative words. Furthermore, there are words so inflected which have positive orientation, e.g., independent and unbiased.

3 Data Collection

For our experiments, we use the 21 million word 1987 Wall Street Journal corpus⁴, automatically annotated with part-of-speech tags using the PARTS tagger (Church, 1988).

⁴ Available from the ACL Data Collection Initiative as CD ROM 1.

In order to verify our hypothesis about the orientations of conjoined adjectives, and also to train and evaluate our subsequent algorithms, we need a set of adjectives with predetermined orientation labels. We constructed this set by taking all adjectives appearing in our corpus 20 times or more, then removing adjectives that have no orientation. These are typically members of groups of complementary, qualitative terms (Lyons, 1977), e.g., domestic or medical.

We then assigned an orientation label (either + or -) to each adjective, using an evaluative approach. The criterion was whether the use of this adjective ascribes in general a positive or negative quality to the modified item, making it better or worse than a similar unmodified item. We were unable to reach a unique label out of context for several adjectives which we removed from consideration; for example, cheap is positive if it is used as a synonym of inexpensive, but negative if it implies inferior quality. The operations of selecting adjectives and assigning labels were performed before testing our conjunction hypothesis or implementing any other algorithms, to avoid any influence on our labels. The final set contained 1,336 adjectives (657 positive and 679 negative terms). Figure 1 shows randomly selected terms from this set.

    Positive: adequate central clever famous intelligent remarkable reputed sensitive slender thriving
    Negative: contagious drunken ignorant lanky listless primitive strident troublesome unresolved unsuspecting

    Figure 1: Randomly selected adjectives with positive and negative orientations.

To further validate our set of labeled adjectives, we subsequently asked four people to independently label a randomly drawn sample of 500 of these adjectives. They agreed with us that the positive/negative concept applies to 89.15% of these adjectives on average. For the adjectives where a positive or negative label was assigned by both us and the independent evaluators, the average agreement on the label was 97.38%. The average inter-reviewer agreement on labeled adjectives was 96.97%. These results are extremely significant statistically and compare favorably with validation studies performed for other tasks (e.g., sense disambiguation) in the past. They show that positive and negative orientation are objective properties that can be reliably determined by humans.

To extract conjunctions between adjectives, we used a two-level finite-state grammar, which covers complex modification patterns and noun-adjective apposition. Running this parser on the 21 million word corpus, we collected 13,426 conjunctions of adjectives, expanding to a total of 15,431 conjoined adjective pairs. After morphological transformations, the remaining 15,048 conjunction tokens involve 9,296 distinct pairs of conjoined adjectives (types). Each conjunction token is classified by the parser according to three variables: the conjunction used (and, or, but, either-or, or neither-nor), the type of modification (attributive, predicative, appositive, resultative), and the number of the modified noun (singular or plural).
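A toy version of the extraction step can be sketched as follows. The sketch matches only the simplest pattern of two tagged adjectives around a connective; the regular expression, tag names, and helper function are illustrative assumptions, since the actual system uses a richer two-level finite-state grammar and also records the modification type and noun number.

```python
import re

# Minimal sketch of conjunction extraction over "word/TAG" text
# (PARTS-style tags are assumed).
PATTERN = re.compile(r"(\w+)/JJ\s+(and|or|but)/\w+\s+(\w+)/JJ")

def extract_conjoined_pairs(tagged_text):
    """Return (adjective, connective, adjective) triples."""
    return [(m.group(1).lower(), m.group(2), m.group(3).lower())
            for m in PATTERN.finditer(tagged_text)]

print(extract_conjoined_pairs("a/DT fair/JJ and/CC legitimate/JJ claim/NN"))
# [('fair', 'and', 'legitimate')]
```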
4 Validation of the Conjunction Hypothesis

Using the three attributes extracted by the parser, we constructed a cross-classification of the conjunctions in a three-way table. We counted types and tokens of each conjoined pair that had both members in the set of pre-selected labeled adjectives discussed above; 2,748 (29.56%) of all conjoined pairs (types) and 4,024 (26.74%) of all conjunction occurrences (tokens) met this criterion. We augmented this table with marginal totals, arriving at 90 categories, each of which represents a triplet of attribute values, possibly with one or more "don't care" elements. We then measured the percentage of conjunctions in each category with adjectives of same or different orientations. Under the null hypothesis of same proportions of adjective pairs (types) of same and different orientation in a given category, the number of same- or different-orientation pairs follows a binomial distribution with p = 0.5 (Conover, 1980).
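As a concrete illustration, the significance computation for one category can be reproduced with an exact two-sided binomial test; the counts below are back-calculated from the percentages reported in Table 1.

```python
from math import comb

def binomial_two_sided_p(k, n):
    """Exact two-sided binomial P-value for k same-orientation pairs
    out of n, under the null hypothesis p = 0.5. With p = 0.5 the
    distribution is symmetric, so the two-sided value is twice the
    probability of the smaller tail."""
    tail = sum(comb(n, i) for i in range(min(k, n - k) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# 'but' conjunctions: 30.84% of 214 types = 66 same-orientation pairs.
print(binomial_two_sided_p(66, 214))  # ~2.09e-8, matching Table 1
# appositive 'and' conjunctions: 70.00% of 30 types = 21 pairs.
print(binomial_two_sided_p(21, 30))   # 0.04277, matching Table 1
```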
We show in Table 1 the results for several representative categories, and summarize all results below:

    Conjunction category                  Types     % same-       % same-       P-value
                                          analyzed  orientation   orientation   (for types)
                                                    (types)       (tokens)
    All conjunctions                      2,748     77.84%        72.39%        < 1 · 10^-16
    All and conjunctions                  2,294     81.73%        78.07%        < 1 · 10^-16
    All or conjunctions                     305     77.05%        60.97%        < 1 · 10^-16
    All but conjunctions                    214     30.84%        25.94%        2.09 · 10^-8
    All attributive and conjunctions      1,077     80.04%        76.82%        < 1 · 10^-16
    All predicative and conjunctions        860     84.77%        84.54%        < 1 · 10^-16
    All appositive and conjunctions          30     70.00%        63.64%        0.04277

    Table 1: Validation of our conjunction hypothesis. The P-value is the probability that similar extreme results would have been obtained if same- and different-orientation conjunction types were equally distributed.

• Our conjunction hypothesis is validated overall and for almost all individual cases. The results are extremely significant statistically, except for a few cases where the sample is small.
• Aside from the use of but with adjectives of different orientations, there are, rather surprisingly, small differences in the behavior of conjunctions between linguistic environments (as represented by the three attributes). There are a few exceptions, e.g., appositive and conjunctions modifying plural nouns are evenly split between same and different orientation. But in these exceptional cases the sample is very small, and the observed behavior may be due to chance.
• Further analysis of different-orientation pairs in conjunctions other than but shows that conjoined antonyms are far more frequent than expected by chance, in agreement with (Justeson and Katz, 1991).

5 Prediction of Link Type

The analysis in the previous section suggests a baseline method for classifying links between adjectives: since 77.84% of all links from conjunctions indicate same orientation, we can achieve this level of performance by always guessing that a link is of the same-orientation type. However, we can improve performance by noting that conjunctions using but exhibit the opposite pattern, usually involving adjectives of different orientations. Thus, a revised but still simple rule predicts a different-orientation link if the two adjectives have been seen in a but conjunction, and a same-orientation link otherwise, assuming the two adjectives were seen connected by at least one conjunction.

Morphological relationships between adjectives also play a role. Adjectives related in form (e.g., adequate-inadequate or thoughtful-thoughtless) almost always have different semantic orientations. We implemented a morphological analyzer which matches adjectives related in this manner. This process is highly accurate, but unfortunately does not apply to many of the possible pairs: in our set of 1,336 labeled adjectives (891,780 possible pairs), 102 pairs are morphologically related; among them, 99 are of different orientation, yielding 97.06% accuracy for the morphology method. This information is orthogonal to that extracted from conjunctions: only 12 of the 102 morphologically related pairs have been observed in conjunctions in our corpus. Thus, we add to the predictions made from conjunctions the different-orientation links suggested by morphological relationships.
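The combined baseline rule can be written down directly; the set-based data representation below is our own assumption about how the corpus observations might be stored.

```python
def predict_link(a, b, conjoined, but_pairs, morph_related):
    """Simple link classifier from Section 5: morphologically related
    pairs and pairs seen in a 'but' conjunction are predicted to be of
    different orientation; any other pair seen in at least one
    conjunction is predicted to be of the same orientation."""
    pair = frozenset((a, b))
    if pair in morph_related or pair in but_pairs:
        return "different-orientation"
    if pair in conjoined:
        return "same-orientation"
    return "unknown"

print(predict_link("simplistic", "well-received",
                   conjoined={frozenset(("simplistic", "well-received"))},
                   but_pairs={frozenset(("simplistic", "well-received"))},
                   morph_related=set()))
# different-orientation
```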
We improve the accuracy of classifying links derived from conjunctions as same or different orientation with a log-linear regression model (Santner and Duffy, 1989), exploiting the differences between the various conjunction categories. This is a generalized linear model (McCullagh and Nelder, 1989) with a linear predictor

    η = wᵀx

where x is the vector of the observed counts in the various conjunction categories for the particular adjective pair we try to classify and w is a vector of weights to be learned during training. The response y is non-linearly related to η through the inverse logit function,

    y = e^η / (1 + e^η)

Note that y ∈ (0, 1), with each of these endpoints associated with one of the possible outcomes.

We have 90 possible predictor variables, 42 of which are linearly independent. Since using all the 42 independent predictors invites overfitting (Duda and Hart, 1973), we have investigated subsets of the full log-linear model for our data using the method of iterative stepwise refinement: starting with an initial model, variables are added or dropped if their contribution to the reduction or increase of the residual deviance compares favorably to the resulting loss or gain of residual degrees of freedom. This process led to the selection of nine predictor variables.

We evaluated the three prediction models discussed above with and without the secondary source of morphology relations. For the log-linear model, we repeatedly partitioned our data into equally sized training and testing sets, estimated the weights on the training set, and scored the model's performance on the testing set, averaging the resulting scores.⁵ Table 2 shows the results of these analyses.

    Prediction method      Morphology  Accuracy on reported    Accuracy on reported         Overall
                           used?       same-orientation links  different-orientation links  accuracy
    Always predict same    No          77.84%                  -                            77.84%
      orientation          Yes         78.18%                  97.06%                       78.86%
    But rule               No          81.81%                  69.16%                       80.82%
                           Yes         82.20%                  78.16%                       81.75%
    Log-linear model       No          81.53%                  73.70%                       80.97%
                           Yes         82.00%                  82.44%                       82.05%

    Table 2: Accuracy of several link prediction models.

Although the log-linear model offers only a small improvement in pair classification over the simpler but prediction rule, it confers the important advantage of rating each prediction between 0 and 1. We make extensive use of this in the next phase of our algorithm.

⁵ When morphology is to be used as a supplementary predictor, we remove the morphologically related pairs from the training and testing sets.
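Applying a trained model to one adjective pair then amounts to a dot product and an inverse logit; the sketch below assumes the nine selected predictors have already been counted into a vector, and the numeric weights are purely hypothetical.

```python
import math

def link_probability(weights, counts):
    """Score one adjective pair with the fitted log-linear model:
    eta = w . x over the pair's conjunction-category counts, mapped
    through the inverse logit y = e^eta / (1 + e^eta). By construction,
    y near 1 corresponds to a same-orientation link, and 1 - y can be
    reused directly as the dissimilarity needed in the clustering phase."""
    eta = sum(w * x for w, x in zip(weights, counts))
    return math.exp(eta) / (1.0 + math.exp(eta))

# Hypothetical weights and counts for nine predictors:
print(link_probability([0.9, -1.7, 0.4, 0.0, 0.2, -0.3, 0.1, 0.5, -0.2],
                       [2, 0, 1, 0, 0, 0, 0, 1, 0]))
```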
6 Finding Groups of Same-Oriented Adjectives

The third phase of our method assigns the adjectives into groups, placing adjectives of the same (but unknown) orientation in the same group. Each pair of adjectives has an associated dissimilarity value between 0 and 1; adjectives connected by same-orientation links have low dissimilarities, and conversely, different-orientation links result in high dissimilarities. Adjective pairs with no connecting links are assigned the neutral dissimilarity 0.5.

The baseline and but methods make qualitative distinctions only (i.e., same-orientation, different-orientation, or unknown); for them, we define dissimilarity for same-orientation links as one minus the probability that such a classification link is correct and dissimilarity for different-orientation links as the probability that such a classification is correct. These probabilities are estimated from separate training data. Note that for these prediction models, dissimilarities are identical for similarly classified links. The log-linear model, on the other hand, offers an estimate of how good each prediction is, since it produces a value y between 0 and 1. We construct the model so that 1 corresponds to same-orientation, and define dissimilarity as one minus the produced value.

Same- and different-orientation links between adjectives form a graph. To partition the graph nodes into subsets of the same orientation, we employ an iterative optimization procedure on each connected component, based on the exchange method, a non-hierarchical clustering algorithm (Späth, 1985). We define an objective function Φ scoring each possible partition P of the adjectives into two subgroups C₁ and C₂ as

    Φ(P) = Σ_{i=1,2} (1/|Cᵢ|) Σ_{x,y ∈ Cᵢ} d(x, y)

where |Cᵢ| stands for the cardinality of cluster i, and d(x, y) is the dissimilarity between adjectives x and y. We want to select the partition P_min that minimizes Φ, subject to the additional constraint that for each adjective x in a cluster C,

    (1/(|C| − 1)) Σ_{y ∈ C} d(x, y)  <  (1/|C̄|) Σ_{y ∈ C̄} d(x, y)        (1)

where C̄ is the complement of cluster C, i.e., the other member of the partition. This constraint, based on Rousseeuw's (1987) silhouettes, helps correct wrong cluster assignments.

To find P_min, we first construct a random partition of the adjectives, then locate the adjective that will most reduce the objective function if it is moved from its current cluster. We move this adjective and proceed with the next iteration until no movements can improve the objective function. At the final iteration, the cluster assignment of any adjective that violates constraint (1) is changed. This is a steepest-descent hill-climbing method, and thus is guaranteed to converge. However, it will in general find a local minimum rather than the global one; the problem is NP-complete (Garey and Johnson, 1979). We can arbitrarily increase the probability of finding the globally optimal solution by repeatedly running the algorithm with different starting partitions.

7 Labeling the Clusters as Positive or Negative

The clustering algorithm separates each component of the graph into two groups of adjectives, but does not actually label the adjectives as positive or negative. To accomplish that, we use a simple criterion that applies only to pairs or groups of words of opposite orientation. We have previously shown (Hatzivassiloglou and McKeown, 1995) that in oppositions of gradable adjectives where one member is semantically unmarked, the unmarked member is the most frequent one about 81% of the time. This is relevant to our task because semantic markedness exhibits a strong correlation with orientation, the unmarked member almost always having positive orientation (Lehrer, 1985; Battistella, 1990).

We compute the average frequency of the words in each group, expecting the group with higher average frequency to contain the positive terms. This aggregation operation increases the precision of the labeling dramatically since indicators for many pairs of words are combined, even when some of the words are incorrectly assigned to their group.
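A compact sketch of the exchange method and the frequency-based labeling follows, under simplifying assumptions: dissimilarities are given as a symmetric nested dictionary, the silhouette-based correction of constraint (1) and the random restarts are omitted, and the greedy loop moves one adjective per iteration as described above.

```python
import random

def phi(partition, d):
    """Objective: per-cluster sums of pairwise dissimilarities,
    each scaled by the cluster's cardinality."""
    return sum(sum(d[x][y] for x in c for y in c if x != y) / len(c)
               for c in partition if c)

def cluster_and_label(adjectives, d, freq):
    """Greedy exchange-method partitioning plus Section 7 labeling."""
    parts = [set(), set()]
    for a in adjectives:                      # random starting partition
        parts[random.randrange(2)].add(a)
    while True:
        best_score, best_move = phi(parts, d), None
        for i in (0, 1):
            for a in list(parts[i]):          # try moving each adjective
                parts[i].remove(a); parts[1 - i].add(a)
                score = phi(parts, d)
                if score < best_score:
                    best_score, best_move = score, (a, i)
                parts[1 - i].remove(a); parts[i].add(a)
        if best_move is None:                 # no move improves Phi
            break
        a, i = best_move
        parts[i].remove(a); parts[1 - i].add(a)
    # Label the group with the higher average frequency as positive.
    pos = max(parts, key=lambda c: sum(freq[a] for a in c) / max(len(c), 1))
    neg = parts[0] if pos is parts[1] else parts[1]
    return pos, neg
```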
8 Results and Evaluation

Since graph connectivity affects performance, we devised a method of selecting test sets that makes this dependence explicit. Note that the graph density is largely a function of corpus size, and thus can be increased by adding more data. Nevertheless, we report results on sparser test sets to show how our algorithm scales up.

We separated our sets of adjectives A (containing 1,336 adjectives) and conjunction- and morphology-based links L (containing 2,838 links) into training and testing groups by selecting, for several values of the parameter α, the maximal subset of A, A_α, which includes an adjective x if and only if there exist at least α links from L between x and other elements of A_α. This operation in turn defines a subset of L, L_α, which includes all links between members of A_α. We train our log-linear model on L − L_α (excluding links between morphologically related adjectives), compute predictions and dissimilarities for the links in L_α, and use these to classify and label the adjectives in A_α. α must be at least 2, since we need to leave some links for training.
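The maximal subset A_α can be found by iteratively discarding adjectives with fewer than α links until none remain below the threshold (the standard k-core construction); the set-based representation is an assumption of this sketch.

```python
def select_test_set(adjectives, links, alpha):
    """Compute A_alpha, the maximal subset in which every adjective has
    at least alpha links to other members, and the induced link set
    L_alpha. `links` is assumed to be a set of frozensets {a, b}."""
    a_alpha = set(adjectives)
    while True:
        degree = {a: 0 for a in a_alpha}
        for link in links:
            if link <= a_alpha:               # both endpoints still in
                for a in link:
                    degree[a] += 1
        too_sparse = {a for a, n in degree.items() if n < alpha}
        if not too_sparse:
            break
        a_alpha -= too_sparse                 # removals may expose others
    l_alpha = {link for link in links if link <= a_alpha}
    return a_alpha, l_alpha
```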
Table 3 shows the results of these experiments for α = 2 to 5. Our method produced the correct classification between 78% of the time on the sparsest test set up to more than 92% of the time when a higher number of links was present. Moreover, in all cases, the ratio of the two group frequencies correctly identified the positive subgroup. These results are extremely significant statistically (P-value less than 10^-16) when compared with the baseline method of randomly assigning orientations to adjectives, or the baseline method of always predicting the most frequent (for types) category (50.82% of the adjectives in our collection are classified as negative). Figure 2 shows some of the adjectives in set A₄ and their classifications.

    α                                             2        3        4        5
    Number of adjectives in test set (|A_α|)      730      516      369      236
    Number of links in test set (|L_α|)           2,568    2,159    1,742    1,238
    Average number of links for each adjective    7.04     8.37     9.44     10.49
    Accuracy                                      78.08%   82.56%   87.26%   92.37%
    Ratio of average group frequencies            1.8699   1.9235   1.3486   1.4040

    Table 3: Evaluation of the adjective classification and labeling methods.

    Classified as positive: bold decisive disturbing generous good honest important large mature patient peaceful positive proud sound stimulating straightforward strange talented vigorous witty
    Classified as negative: ambiguous cautious cynical evasive harmful hypocritical inefficient insecure irrational irresponsible minor outspoken pleasant reckless risky selfish tedious unsupported vulnerable wasteful

    Figure 2: Sample retrieved classifications of adjectives from set A₄. Correctly matched adjectives are shown in bold.

9 Graph Connectivity and Performance

A strong point of our method is that decisions on individual words are aggregated to provide decisions on how to group words into a class and whether to label the class as positive or negative. Thus, the overall result can be much more accurate than the individual indicators. To verify this, we ran a series of simulation experiments. Each experiment measures how our algorithm performs for a given level of precision P for identifying links and a given average number of links k for each word. The goal is to show that even when P is low, given enough data (i.e., high k), we can achieve high performance for the grouping.

As we noted earlier, the corpus data is eventually represented in our system as a graph, with the nodes corresponding to adjectives and the links to predictions about whether the two connected adjectives have the same or different orientation. Thus the parameter P in the simulation experiments measures how well we are able to predict each link independently of the others, and the parameter k measures the number of distinct adjectives each adjective appears with in conjunctions. P therefore directly represents the precision of the link classification algorithm, while k indirectly represents the corpus size.

To measure the effect of P and k (which are reflected in the graph topology), we need to carry out a series of experiments where we systematically vary their values. For example, as k (or the amount of data) increases for a given level of precision P for individual links, we want to measure how this affects overall accuracy of the resulting groups of nodes. Thus, we need to construct a series of data sets, or graphs, which represent different scenarios corresponding to a given combination of values of P and k. To do this, we construct a random graph by randomly assigning 50 nodes to the two possible orientations. Because we don't have frequency and morphology information on these abstract nodes, we cannot predict whether two nodes are of the same or different orientation. Rather, we randomly assign links between nodes so that, on average, each node participates in k links and 100 × P% of all links connect nodes of the same orientation. Then we consider these links as identified by the link prediction algorithm as connecting two nodes with the same orientation (so that 100 × P% of these predictions will be correct). This is equivalent to the baseline link classification method, and provides a lower bound on the performance of the algorithm actually used in our system (Section 5).

Because of the lack of actual measurements such as frequency on these abstract nodes, we also decouple the partitioning and labeling components of our system and score the partition found under the best matching conditions for the actual labels. Thus the simulation measures only how well the system separates positive from negative adjectives, not how well it determines which is which. However, in all the experiments performed on real corpus data (Section 8), the system correctly found the labels of the groups; any misclassifications came from misplacing an adjective in the wrong group. The whole procedure of constructing the random graph and finding and scoring the groups is repeated 200 times for any given combination of P and k, and the results are averaged, thus avoiding accidentally evaluating our system on a graph that is not truly representative of graphs with the given P and k.

We observe (Figure 3) that even for relatively low P, our ability to correctly classify the nodes approaches very high levels with a modest number of links. For P = 0.8, we need only about 7 links per adjective for classification performance over 90% and only 12 links per adjective for performance over 99%.⁸ The difference between low and high values of P is in the rate at which increasing data increases overall precision. These results are somewhat more optimistic than those obtained with real data (Section 8), a difference which is probably due to the uniform distributional assumptions in the simulation. Nevertheless, we expect the trends to be similar to the ones shown in Figure 3 and the results of Table 3 on real data support this expectation.

    [Figure 3: Simulation results obtained on 50 nodes; panels (a)-(d) plot classification performance against average neighbors per node for P = 0.75, 0.8, 0.85, and 0.9. In each figure, the last x coordinate indicates the (average) maximum possible value of k for this P, and the dotted line shows the performance of a random classifier.]

⁸ 12 links per adjective for a set of n adjectives requires 6n conjunctions between the n adjectives in the corpus.
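One simulation trial can be generated as follows; the sampling details (exact link count, handling of duplicates) are our assumptions where the paper only specifies the averages.

```python
import random

def simulation_trial(n=50, k=7, p=0.8):
    """Build one random graph as in Section 9: n nodes with random
    orientations and about n*k/2 links, a fraction p of which connect
    same-orientation nodes. Every link is then treated as a
    same-orientation prediction, so 100*p% of the predictions are
    correct -- the lower bound corresponding to the baseline classifier."""
    orientation = {v: random.choice(("+", "-")) for v in range(n)}
    pairs = [(x, y) for x in range(n) for y in range(x + 1, n)]
    same = [pr for pr in pairs if orientation[pr[0]] == orientation[pr[1]]]
    diff = [pr for pr in pairs if orientation[pr[0]] != orientation[pr[1]]]
    m = n * k // 2
    n_same = min(len(same), round(p * m))
    links = (random.sample(same, n_same) +
             random.sample(diff, min(len(diff), m - n_same)))
    return orientation, links   # feed `links` to the clustering phase
```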
10 Conclusion and Future Work

We have proposed and verified from corpus data constraints on the semantic orientations of conjoined adjectives. We used these constraints to automatically construct a log-linear regression model, which, combined with supplementary morphology rules, predicts whether two conjoined adjectives are of same or different orientation with 82% accuracy. We then classified several sets of adjectives according to the links inferred in this way and labeled them as positive or negative, obtaining 92% accuracy on the classification task for reasonably dense graphs and 100% accuracy on the labeling task. Simulation experiments establish that very high levels of performance can be obtained with a modest number of links per word, even when the links themselves are not always correctly classified.

As part of our clustering algorithm's output, a "goodness-of-fit" measure for each word is computed, based on Rousseeuw's (1987) silhouettes. This measure ranks the words according to how well they fit in their group, and can thus be used as a quantitative measure of orientation, refining the binary positive-negative distinction. By restricting the labeling decisions to words with high values of this measure we can also increase the precision of our system, at the cost of sacrificing some coverage.

We are currently combining the output of this system with a semantic group finding system so that we can automatically identify antonyms from the corpus, without access to any semantic descriptions. The learned semantic categorization of the adjectives can also be used in the reverse direction, to help in interpreting the conjunctions they participate in. We will also extend our analyses to nouns and verbs.

Acknowledgements

This work was supported in part by the Office of Naval Research under grant N00014-95-1-0745, jointly by the Office of Naval Research and the Advanced Research Projects Agency under grant N00014-89-J-1782, by the National Science Foundation under grant GER-90-24069, and by the New York State Center for Advanced Technology under contracts NYSSTF-CAT(95)-013 and NYSSTF-CAT(96)-013. We thank Ken Church and the AT&T Bell Laboratories for making the PARTS part-of-speech tagger available to us. We also thank Dragomir Radev, Eric Siegel, and Gregory Sean McKinley who provided models for the categorization of the adjectives in our training and testing sets as positive and negative.

References

Jean-Claude Anscombre and Oswald Ducrot. 1983. L'Argumentation dans la Langue. Philosophie et Langage. Pierre Mardaga, Brussels, Belgium.

Edwin L. Battistella. 1990. Markedness: The Evaluative Superstructure of Language. State University of New York Press, Albany, New York.

Peter F. Brown, Vincent J. della Pietra, Peter V. de Souza, Jennifer C. Lai, and Robert L. Mercer. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.

Kenneth W. Church. 1988. A stochastic parts program and noun phrase parser for unrestricted text.
In Proceedings of the Second Conference on Applied Natural Language Processing (ANLP-88), pages 136-143, Austin, Texas, February. Association for Computational Linguistics.

W. J. Conover. 1980. Practical Nonparametric Statistics. Wiley, New York, 2nd edition.

Richard O. Duda and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. Wiley, New York.

Michael Elhadad and Kathleen R. McKeown. 1990. A procedure for generating connectives. In Proceedings of COLING, Helsinki, Finland, July.

Michael R. Garey and David S. Johnson. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman, San Francisco, California.

Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1993. Towards the automatic identification of adjectival scales: Clustering adjectives according to meaning. In Proceedings of the 31st Annual Meeting of the ACL, pages 172-182, Columbus, Ohio, June. Association for Computational Linguistics.

Vasileios Hatzivassiloglou and Kathleen R. McKeown. 1995. A quantitative evaluation of linguistic tests for the automatic prediction of semantic markedness. In Proceedings of the 33rd Annual Meeting of the ACL, pages 197-204, Boston, Massachusetts, June. Association for Computational Linguistics.

John S. Justeson and Slava M. Katz. 1991. Co-occurrences of antonymous adjectives and their contexts. Computational Linguistics, 17(1):1-19.

Adrienne Lehrer. 1974. Semantic Fields and Lexical Structure. North Holland, Amsterdam and New York.

Adrienne Lehrer. 1985. Markedness and antonymy. Journal of Linguistics, 31(3):397-429, September.

John Lyons. 1977. Semantics, volume 1. Cambridge University Press, Cambridge, England.

Peter McCullagh and John A. Nelder. 1989. Generalized Linear Models. Chapman and Hall, London, 2nd edition.

George A. Miller, Richard Beckwith, Christiane Fellbaum, Derek Gross, and Katherine J. Miller. 1990. Introduction to WordNet: An on-line lexical database. International Journal of Lexicography (special issue), 3(4):235-312.

Fernando Pereira, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words. In Proceedings of the 31st Annual Meeting of the ACL, pages 183-190, Columbus, Ohio, June. Association for Computational Linguistics.

Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53-65.

Thomas J. Santner and Diane E. Duffy. 1989. The Statistical Analysis of Discrete Data. Springer-Verlag, New York.

Helmuth Späth. 1985. Cluster Dissection and Analysis: Theory, FORTRAN Programs, Examples. Ellis Horwood, Chichester, West Sussex, England.
Independence Assumptions Considered Harmful

Alexander Franz
Sony Computer Science Laboratory & D21 Laboratory
Sony Corporation
6-7-35 Kitashinagawa, Shinagawa-ku, Tokyo 141, Japan
amf@csl.sony.co.jp

Abstract

Many current approaches to statistical language modeling rely on independence assumptions between the different explanatory variables. This results in models which are computationally simple, but which only model the main effects of the explanatory variables on the response variable. This paper presents an argument in favor of a statistical approach that also models the interactions between the explanatory variables. The argument rests on empirical evidence from two series of experiments concerning automatic ambiguity resolution.

1 Introduction

In this paper, we present an empirical argument in favor of a certain approach to statistical natural language modeling: we advocate statistical natural language models that account for the interactions between the explanatory statistical variables, rather than relying on independence assumptions. Such models are able to perform prediction on the basis of estimated probability distributions that are properly conditioned on the combinations of the individual values of the explanatory variables.

After describing one type of statistical model that is particularly well-suited to modeling natural language data, called a loglinear model, we present empirical evidence from a series of experiments on different ambiguity resolution tasks that show that the performance of the loglinear models outranks the performance of other models described in the literature that assume independence between the explanatory variables.

2 Statistical Language Modeling

By "statistical language model", we refer to a mathematical object that "imitates the properties" of some aspects of natural language, and in turn makes predictions that are useful from a scientific or engineering point of view. Much recent work in this framework has used written and spoken natural language data to estimate parameters for statistical models that were characterized by serious limitations: models were either limited to a single explanatory variable or, if more than one explanatory variable was considered, the variables were assumed to be independent. In this section, we describe a method for statistical language modeling that transcends these limitations.

2.1 Categorical Data Analysis

Categorical data analysis is the area of statistics that addresses categorical statistical variables: variables whose values are one of a set of categories. An example of such a linguistic variable is PART-OF-SPEECH, whose possible values might include noun, verb, determiner, preposition, etc.

We distinguish between a set of explanatory variables and one response variable. A statistical model can be used to perform prediction in the following manner: given the values of the explanatory variables, what is the probability distribution for the response variable, i.e., what are the probabilities for the different possible values of the response variable?

2.2 The Contingency Table

The basic tool used in categorical data analysis is the contingency table (sometimes called the "cross-classified table of counts"). A contingency table is a matrix with one dimension for each variable, including the response variable. Each cell in the contingency table records the frequency of data with the appropriate characteristics. Since each cell concerns a specific combination of features, this provides a way to estimate probabilities of specific feature combinations from the observed frequencies, as the cell counts can easily be converted to probabilities. Prediction is achieved by determining the value of the response variable given the values of the explanatory variables.
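A minimal sketch of this scheme follows, with cells keyed by feature combinations; the data layout and the toy feature values are our assumptions.

```python
from collections import Counter

table = Counter()   # cell counts keyed by (evidence tuple, response)

def observe(evidence, response):
    table[(evidence, response)] += 1

def predict(evidence, responses):
    """Conditional distribution over the response variable, read off
    the cells that match the observed combination of evidence values."""
    row = {r: table[(evidence, r)] for r in responses}
    total = sum(row.values())
    return {r: c / total for r, c in row.items()} if total else None

# Toy counts (hypothetical):
observe(("suffix=ed", "lowercase"), "verb")
observe(("suffix=ed", "lowercase"), "verb")
observe(("suffix=ed", "lowercase"), "adjective")
print(predict(("suffix=ed", "lowercase"), ["verb", "adjective"]))
# {'verb': 0.666..., 'adjective': 0.333...}
```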
2.3 The Loglinear Model

A loglinear model is a statistical model of the effect of a set of categorical variables and their combinations on the cell counts in a contingency table. It can be used to address the problem of sparse data, since it can act as a "smoothing device, used to obtain cell estimates for every cell in a sparse array, even if the observed count is zero" (Bishop, Fienberg, and Holland, 1975).

Marginal totals (sums for all values of some variables) of the observed counts are used to estimate the parameters of the loglinear model; the model in turn delivers estimated expected cell counts, which are smoother than the original cell counts.

The mathematical form of a loglinear model is as follows. Let m_ijk... be the expected cell count for cell (i, j, k, ...) in the contingency table. The general form of a loglinear model is as follows:

    log m_ijk... = u + u_1(i) + u_2(j) + u_3(k) + u_12(ij) + ...        (1)

In this formula, u denotes the mean of the logarithms of all the expected counts, u + u_1(i) denotes the mean of the logarithms of the expected counts with value i of the first variable, u + u_2(j) denotes the mean of the logarithms of the expected counts with value j of the second variable, u + u_12(ij) denotes the mean of the logarithms of the expected counts with value i of the first variable and value j of the second variable, and so on.

Thus, the term u_1(i) denotes the deviation of the mean of the expected cell counts with value i of the first variable from the grand mean u. Similarly, the term u_12(ij) denotes the deviation of the mean of the expected cell counts with value i of the first variable and value j of the second variable from the grand mean u. In other words, u_12(ij) represents the combined effect of the values i and j for the first and second variables on the logarithms of the expected cell counts. In this way, a loglinear model provides a way to estimate expected cell counts that depend not only on the main effects of the variables, but also on the interactions between variables. This is achieved by adding "interaction terms" such as u_12(ij) to the model. For further details, see (Fienberg, 1980).

2.4 The Iterative Estimation Procedure

For some loglinear models, it is possible to obtain closed forms for the expected cell counts. For more complicated models, the iterative proportional fitting algorithm for hierarchical loglinear models (Deming and Stephan, 1940) can be used. Briefly, this procedure works as follows.

Let the values for the expected cell counts that are estimated by the model be represented by the symbol m̂_ijk.... The interaction terms in the loglinear models represent constraints on the estimated expected marginal totals. Each of these marginal constraints translates into an adjustment scaling factor for the cell entries. The iterative procedure has the following steps:

1. Start with initial estimates for the estimated expected cell counts. For example, set all m̂_ijk... = 1.0.
2. Adjust each cell entry by multiplying it by the scaling factors. This moves the cell entries towards satisfaction of the marginal constraints specified by the model.
3. Iterate through the adjustment steps until the maximum difference ε between the marginal totals observed in the sample and the estimated marginal totals reaches a certain minimum threshold, e.g. ε = 0.1.
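For the simplest case of a two-way table with the two one-way margins as constraints (a main-effects-only model), the procedure reduces to alternately rescaling rows and columns; this sketch covers only that special case, not the general hierarchical fitter.

```python
def ipf_two_way(row_totals, col_totals, eps=0.1):
    """Iterative proportional fitting for a two-way table under the
    independence (main effects only) model. Estimates start at 1.0 and
    are alternately rescaled to match the observed row and column
    totals until every row margin agrees within eps."""
    r, c = len(row_totals), len(col_totals)
    m = [[1.0] * c for _ in range(r)]
    while True:
        for i in range(r):                       # fit the row margins
            scale = row_totals[i] / sum(m[i])
            m[i] = [v * scale for v in m[i]]
        for j in range(c):                       # fit the column margins
            scale = col_totals[j] / sum(m[i][j] for i in range(r))
            for i in range(r):
                m[i][j] *= scale
        if max(abs(sum(m[i]) - row_totals[i]) for i in range(r)) < eps:
            return m

print(ipf_two_way([30, 70], [40, 60]))
# converges to [[12.0, 18.0], [28.0, 42.0]]
```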
After each cycle, the estimates satisfy the constraints specified in the model, and the estimated expected marginal totals come closer to matching the observed totals. Thus, the process converges. This results in Maximum Likelihood estimates for both multinomial and independent Poisson sampling schemes (Agresti, 1990).

2.5 Modeling Interactions

For natural language classification and prediction tasks, the aim is to estimate a conditional probability distribution P(H|E) over the possible values of the hypothesis H, where the evidence E consists of a number of linguistic features e_1, e_2, .... Much of the previous work in this area assumes independence between the linguistic features:

    P(H|e_i, e_j, ...) ≈ P(H|e_i) × P(H|e_j) × ...        (2)

For example, a model to predict Part-of-Speech of a word on the basis of its morphological affix and its capitalization might assume independence between the two explanatory variables as follows:

    P(POS|AFFIX, CAPITALIZATION) ≈ P(POS|AFFIX) × P(POS|CAPITALIZATION)        (3)

This results in a considerable computational simplification of the model but, as we shall see below, leads to a considerable loss of information and concomitant decrease in prediction accuracy. With a loglinear model, on the other hand, such independence assumptions are not necessary. The loglinear model provides a posterior distribution that is properly conditioned on the evidence, and maximizing the conditional probability P(H|E) leads to minimum error rate classification (Duda and Hart, 1973).

3 Predicting Part-of-Speech

We will now turn to the empirical evidence supporting the argument against independence assumptions. In this section, we will compare two models for predicting the Part-of-Speech of an unknown word: a simple model that treats the various explanatory variables as independent, and a model using loglinear smoothing of a contingency table that takes into account the interactions between the explanatory variables.

3.1 Constructing the Model

The model was constructed in the following way. First, features that could be used to guess the POS of a word were determined by examining the training portion of a text corpus. The initial set of features consisted of the following:

• INCLUDES-NUMBER. Does the word include a number?
• CAPITALIZED. Is the word in sentence-initial position and capitalized, in any other position and capitalized, or in lower case?
• INCLUDES-PERIOD. Does the word include a period?
• INCLUDES-COMMA. Does the word include a comma?
• FINAL-PERIOD. Is the last character of the word a period?
• INCLUDES-HYPHEN. Does the word include a hyphen?
• ALL-UPPER-CASE. Is the word in all upper case?
• SHORT. Is the length of the word three characters or less?
• INFLECTION. Does the word carry one of the English inflectional suffixes?
• PREFIX. Does the word carry one of a list of frequently occurring prefixes?
• SUFFIX. Does the word carry one of a list of frequently occurring suffixes?

Next, exploratory data analysis was performed in order to determine relevant features and their values, and to approximate which features interact. Each word of the training data was then turned into a feature vector, and the feature vectors were cross-classified in a contingency table.
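The feature extraction step might look like the following sketch; the particular inflection, prefix, and suffix lists are placeholders, since the paper does not enumerate the full lists.

```python
def unknown_word_features(word, sentence_initial):
    """Turn a word into a feature vector over the surface features
    listed above. The inflection/prefix/suffix inventories below are
    illustrative stand-ins, not the lists used in the experiments."""
    capitalized = ("sentence-initial-cap" if sentence_initial and word[:1].isupper()
                   else "cap" if word[:1].isupper() else "lower")
    return {
        "INCLUDES-NUMBER": any(ch.isdigit() for ch in word),
        "CAPITALIZED": capitalized,
        "INCLUDES-PERIOD": "." in word,
        "INCLUDES-COMMA": "," in word,
        "FINAL-PERIOD": word.endswith("."),
        "INCLUDES-HYPHEN": "-" in word,
        "ALL-UPPER-CASE": word.isupper(),
        "SHORT": len(word) <= 3,
        "INFLECTION": next((s for s in ("ing", "ed", "est", "s")
                            if word.endswith(s)), None),
        "PREFIX": next((p for p in ("un", "re", "pre")
                        if word.lower().startswith(p)), None),
        "SUFFIX": next((s for s in ("tion", "ness", "ous", "ly")
                        if word.endswith(s)), None),
    }

print(unknown_word_features("Deregulated", sentence_initial=False))
```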
The contingency table was smoothed using a loglinear model.

3.2 Data

Training and evaluation data was obtained from the Penn Treebank Brown corpus (Marcus, Santorini, and Marcinkiewicz, 1993). The characteristics of "rare" words that might show up as unknown words differ from the characteristics of words in general, so a two-step procedure was employed: a first time to obtain a set of "rare" words as training data, and again a second time to obtain a separate set of "rare" words as evaluation data. There were 17,000 words in the training data, and 21,000 words in the evaluation data. Ambiguity resolution accuracy was evaluated for the "overall accuracy" (percentage that the most likely POS tag is correct), and "cutoff factor accuracy" (accuracy of the answer set consisting of all POS tags whose probability lies within a factor F of the most likely POS (de Marcken, 1990)).

3.3 Accuracy Results

(Weischedel et al., 1993) describe a model for unknown words that uses four features, but treats the features as independent. We reimplemented this model by using four features: POS, INFLECTION, CAPITALIZED, and HYPHENATED. In Figures 1-2, the results for this model are labeled 4 Independent Features. For comparison, we created a loglinear model with the same four features; the results for this model are labeled 4 Loglinear Features.

The highest accuracy was obtained by the loglinear model that includes all two-way interactions and consists of two contingency tables with the following features: POS, ALL-UPPER-CASE, HYPHENATED, INCLUDES-NUMBER, CAPITALIZED, INFLECTION, SHORT, PREFIX, and SUFFIX. The results for this model are labeled 9 Loglinear Features. The parameters for all three unknown word models were estimated from the training data, and the models were evaluated on the evaluation data.

The accuracy of the different models in assigning the most likely POSs to words is summarized in Figure 1. In the left diagram, the two barcharts show two different accuracy measures: percent correct (Overall Accuracy), and percent correct within the F=0.4 cutoff factor answer set (F=0.4 Set Accuracy). In both cases, the loglinear model with four features obtains higher accuracy than the method that assumes independence between the same four features. The loglinear model with nine features further improves this score.

    [Figure 1: Performance of Different Models — Overall Accuracy and F=0.4 Set Accuracy barcharts for the 4 Independent Features, 4 Loglinear Features, and 9 Loglinear Features models.]
This shows that the loglin- ear model is able to tolerate redundant features and use information from more features than the simpler method, and therefore achieves better results at am- biguity resolution. 3.5 Adding Context to the Model Next, we added of a stochastic POS tagger (Char- niak et al., 1993) to provide a model of context. A stochastic POS tagger assigns POS labels to words in a sentence by using two parameters: • Lexical Probabilities: P(wlt ) -- the proba- bility of observing word w given that the tag t occurred. • Contextual Probabilities: P(ti[ti-1, t~_2) -- the probability of observing tag ti given that the two previous tags ti-1, t,i--2 occurred. The tagger maximizes the probability of the tag se- quence T = t.l,t, 2 .... ,t.,, given the word sequence W = wz,w2,... ,w,,, which is approximated a.s fol- lows: I"L P(TIW) ~ II P(wdt~)P(tdt~_~, ti_=) (4) i= 1 The accuracy of the combination of the loglinear model for local features and the stochastic POS tag- ger for contextual features was evaluated empirically by comparing three methods of handling unknown words: • Unigram: Using the prior probability distri- bution P(t) of the POS tags for rare words. • ProbabUistic UWM: Using the probabilistic model that assumes independence between the features. • Classifier UWM: Using the loglinear model for unknown words. Separate sets of training and evaluation data for the tagger were obtained from from the Penn Treebank Wall Street corpus. Evaluation of the combined sys- t.em was performed on different configurations of the POS tagger on 30-40 different samples containing 4,000 words each. Since the tagger displays considerable variance in its accuracy in assigning POS to unknown words in context, we use boxplots to display the results. Fig- ure 3 compares the tagging error rate on unknown words for the unigram method (left) and the log- linear method with nine features (labeled statisti- cal classifier) at right. This shows that the Ioglin- ear model significantly improves the Part-of-Speech tagging accuracy of a stochastic tagger on unknown words. The median error rate is lowered consider- ably, and samples with error rates over 32% are elim- inated entirely. 185 o = == • PmO~¢ UWM • Logli~e= UWM o u , *=* • • • =a • o °° 08° 0 S tO 15 2Q 25 30 35 40 4S 50 SS 60 Peeclntage ol Unknown WO~= Figure 4: Effect of Proportion of Unknown Words on Overall Tagging Error Rate 3.6 Effect of Proportion of Unknown Words Since most of the lexical ambiguity resolution power of stochastic PUS tagging comes from the lexical probabilities, unknown words represent a significant source of error. Therefore, we investigated the effect of different types of models for unknown words on the error rate for tagging text with different propor- tions of unknown words. Samples of text that contained different propor- tions of unknown words were tagged using the three different methods for handling unknown words de- scribed above. The overall tagging error rate in- creases significantly as the proportion of new words increases. Figure 4 shows a graph of overall tagging accuracy versus percentage of unknown words in the text. The graph compares the three different meth- ods of handling unknown words. The diagram shows that the loglinear model leads to better overall tag- ging performance than the simpler methods, with a clear separation of all samples whose proportion of new words is above approximately 10%. 
4 Predicting PP Attachment In the second series of experiments, we compare the performance of different statistical models on the task of predicting Prepositional Phrase (PP) attach- ment. 4.1 Features for PP Attachment First, an initial set of linguistic features that could be useful for predicting PP attachment was deter- mined. The initial set included the following fea- tures: • PREPOSITION. Possible values of this feature in- clude one of the more frequent prepositions in the training set, or the value other-prep. * VERB-LEVEL. Lexical association strength be- tween the verb and the preposition. • NOUN-LEVEL. Lexical association strength be- tween the noun and the preposition. • NOUN-TAG. Part-of-Speech of the nominal at- tachment site. This is included to account for correlations between attachment and syntactic category of the nominal attachment site, such as "PPs disfavor attachment to proper nouns." • NOUN-DEFINITENESS. Does the nominal attach- ment site include a definite determiner? This feature is included to account for a possible cor- relation between PP attachment to the nom- inal site and definiteness, which was derived by (Hirst, 1986) from the principle of presup- position minimization of (Craln and Steedman, 1985). • PP-OBJECT-TAG. Part-of-speech of the object of the PP. Certain types of PP objects favor at- tachment to the verbal or nominal site. For ex- ample, temporal PPs, such as "in 1959", where the prepositional object is tagged CD (cardi- nal), favor attachment to the VP, because tile VP is more likely to have a temporal dimension. The association strengths for VERB-LEVEL and NOUN-LEVEL were measured using the Mutual In- formation between the noun or verb, and the prepo- sition. 1 The probabilities were derived ms Maximum Likelihood estimates from all PP cases in the train- ing data. The Mutual Information values were or- dered by rank. Then, the a~ssociation strengths were categorized into eight levels (A-H), depending on percentile in the ranked Mutual Information values. 4.2 Experimental Data and Evaluation Training and evaluation data was prepared from the Penn treebank. All 1.1 million words of parsed text in the Brown Corpus, and 2.6 million words of parsed WSJ articles, were used. All instances of PPs that are attached to VPs and NPs were extracted. This resulted in 82,000 PP cases from the Brown Corpus, and 89,000 PP cases from the WS.] articles. Verbs and nouns were lemmatized to their root forms if the root forms were attested in the corpus. If the root form did not occur in the corpus, then the inflected form was used. All the PP cases from the Brown Curl)us, and 50,000 of the WSJ cases, were reserved ms training data. The remaining 39,00 WSJ PP cases formed the evaluation pool. In each experiment, performance IMutu',d Information provides an estimate of the magnitude of the ratio t)ctw(.(-n the joint prol)ability P(verb/noun,1)reposition), and the joint probability a.~- suming indcpendcnce P(verb/noun)P(prcl)osition ) - s(:(, (Church and Hanks, 1990). 186 o 1 | u R~m A~jllon Hfr,3~ & Roolh kog~eaw ~ak~r 1 ! o o ol °t I i o! l l o Figure 5: Results for Two Attachment Sites Figure 6: Three Attachment Sites: Right Associa- tion and Lexical Association was evaluated oil a series of 25 random samples of 100 PP cases fi'om the evaluation pool. in order to provide a characterization of the error variance. 
4.3 Experimental Results: Two Attachments Sites Previous work oll automatic PP attachment disam- biguation has only considered the pattern of a verb phrase containing an object, and a final PP. This lends to two possible attachment sites, the verb and the object of the verb. The pattern is usually further simplified by considering only the heads of the possi- ble attachment sites, corresponding to the sequence "Verb Noun1 Preposition Noun2". The first set of experiments concerns this pattern. There are 53,000 such cases in the training data. and 16,000 such cases in the evaluation pool. A number of methods were evaluated on this pattern accord- ing to the 25-sample scheme described above. The results are shown in Figure 5. 4.3.1 Baseline: Right Association Prepositional phrases exhibit a tendency to attach to the most recent possible attachment site; this is referred to ms the principle of "'Right Association". For the "V NP PP'" pattern, this means preferring attachment to the noun phra~se. On the evaluation samples, a median of 65% of the PP cases were at- tached to the noun. 4.3.2 Results of Lexical Association (Hindle and R ooth. 1993) described a method for obtaining estimates of lexical a.ssociation strengths between nouns or verbs and prepositions, and then using lexical association strength to predict. PP at- tachment. In our reimplementation of this lnethod. the probabilities were estimated fi'om all the PP cases in the training set. Since our training data are bracketed, it was possible to estimate tile lexi- cal associations with much less noise than Hindle & R ooth, who were working with unparsed text. The median accuracy for our reimplementation of Hindle & Rooth's method was 81%. This is labeled "Hindle & Rooth'" in Figure 5. 4.3.3 Results of the Loglinear Model The loglinear model for this task used the features PREPOSITION. VERB-LEVEL, NOUN-LEVEL, and NOUN-DEFINITENESS, and it included all second- order interaction terms. This model achieved a me- dian accuracy of 82%. Hindle & Rooth's lexical association strategy only uses one feature (lexical aasociation) to predict PP attachment, but. ms the boxplot shows, the results from the loglinear model for the "V NP PP" pattern do not show any significant improvement. 4.4 Experimental Results: Three Attachment Sites As suggested by (Gibson and Pearlmutter. 1994), PP attachment for the "'Verb NP PP" pattern is relatively easy to predict because the two possible attachment sites differ in syntactic category, and therefore have very different kinds of lexical pref- erences. For example, most PPs with of attach to nouns, and most PPs with f,o and by attach to verbs. In actual texts, there are often more than two possi- ble attachment sites for a PP. Thus, a second, more realistic series of experiments was perforlned that investigated different PP attachment strategies for the pattern "'Verb Noun1 Noun2 Preposition Noun3"' that includes more than two possible attachment sites that are not syntactically heterogeneous. There were 28,000 such cases in the training data. and 8000 ca,~es in the evaluation pool. 187 "5 o RIgN AUCCUII~ Split HinOle & Rooln Lo~l~ur M0~el Figure 7: Summary of Results for Three Attachment Sites 4.4.1 Baseline: Right Association As in the first set of experiments, a number of methods were evaluated an the three attachment site pattern with 25 samples of 100 random PP cases. The results are shown in Figures 6-7. 
4.4 Experimental Results: Three Attachment Sites

As suggested by (Gibson and Pearlmutter, 1994), PP attachment for the "Verb NP PP" pattern is relatively easy to predict because the two possible attachment sites differ in syntactic category, and therefore have very different kinds of lexical preferences. For example, most PPs with of attach to nouns, and most PPs with to and by attach to verbs. In actual texts, there are often more than two possible attachment sites for a PP. Thus, a second, more realistic series of experiments was performed that investigated different PP attachment strategies for the pattern "Verb Noun1 Noun2 Preposition Noun3", which includes more than two possible attachment sites that are not syntactically heterogeneous. There were 28,000 such cases in the training data, and 8,000 cases in the evaluation pool.

[Figure 7: Summary of Results for Three Attachment Sites; boxplots omitted]

4.4.1 Baseline: Right Association

As in the first set of experiments, a number of methods were evaluated on the three attachment site pattern with 25 samples of 100 random PP cases. The results are shown in Figures 6-7. The baseline is again provided by attachment according to the principle of "Right Association", i.e., attachment to the most recent possible site, Noun2. A median of 69% of the PP cases were attached to Noun2.

4.4.2 Results of Lexical Association

Next, the lexical association method was evaluated on this pattern. First, the method described by Hindle & Rooth was reimplemented by using the lexical association strengths estimated from all PP cases. The results for this strategy are labeled "Basic Lexical Association" in Figure 6. This method only achieved a median accuracy of 59%, which is worse than always choosing the rightmost attachment site. These results suggest that Hindle & Rooth's scoring function worked well in the "Verb Noun1 Preposition Noun2" case not only because it was an accurate estimator of lexical associations between individual verbs/nouns and prepositions which determine PP attachment, but also because it accurately predicted the general verb-noun skew of prepositions.

4.4.3 Results of Enhanced Lexical Association

It seems natural that this pattern calls for a combination of a structural feature with lexical association strength. To implement this, we modified Hindle & Rooth's method to estimate attachments to the verb, first noun, and second noun separately. This resulted in estimates that combine the structural feature directly with the lexical association strength. The modified method performed better than the original lexical association scoring function, but it still only obtained a median accuracy of 72%. This is labeled "Split Hindle & Rooth" in Figure 7.

4.4.4 Results of the Loglinear Model

To create a model that combines various structural and lexical features without independence assumptions, we implemented a loglinear model that includes the variables VERB-LEVEL, FIRST-NOUN-LEVEL, and SECOND-NOUN-LEVEL. [Footnote 2: These features use the same Mutual Information-based measure of lexical association as the previous loglinear model for two possible attachment sites, estimated from all nominal and verbal PP attachments in the corpus. The features FIRST-NOUN-LEVEL and SECOND-NOUN-LEVEL use the same estimates; in other words, in contrast to the "Split Lexical Association" method, they were not estimated separately for the two different nominal attachment sites.] The loglinear model also includes the variables PREPOSITION and PP-OBJECT-TAG. It was smoothed with a loglinear model that includes all second-order interactions. This method obtained a median accuracy of 79%; this is labeled "Loglinear Model" in Figure 7. As the boxplot shows, it performs significantly better than the methods that only use estimates of lexical association. Compared with the "Split Hindle & Rooth" method, the samples are a little less spread out, and there is no overlap at all between the central 50% of the samples from the two methods.

4.5 Discussion

The simpler "V NP PP" pattern with two syntactically different attachment sites yielded a null result: the loglinear method did not perform significantly better than the lexical association method. This could mean that the results of the lexical association method cannot be improved by adding other features, but it is also possible that the features that could result in improved accuracy were not identified.

The lexical association strategy does not perform well on the more difficult pattern with three possible attachment sites. The loglinear model, on the other hand, predicts attachment with significantly higher accuracy, achieving a clear separation of the central 50% of the evaluation samples.
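The "split" estimation of Section 4.4.3 amounts to scoring each structural position with its own association table. A minimal sketch, under the assumption that p_attach is a smoothed table of attachment probabilities estimated separately per position (the table itself is not taken from the paper):

    def attach_three_sites(verb, noun1, noun2, prep, p_attach):
        """For 'Verb Noun1 Noun2 Preposition Noun3', pick the site whose
        position-specific association with the preposition is strongest.
        p_attach maps (site, head, prep) to a probability estimate."""
        candidates = [("verb", verb), ("noun1", noun1), ("noun2", noun2)]
        return max(candidates,
                   key=lambda c: p_attach.get((c[0], c[1], prep), 0.0))[0]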
5 Conclusions

We have contrasted two types of statistical language models: a model that derives a probability distribution over the response variable that is properly conditioned on the combination of the explanatory variables, and a simpler model that treats the explanatory variables as independent, and therefore models the response variable simply as the addition of the individual main effects of the explanatory variables.

The experimental results show that, with the same feature set, modeling feature interactions yields better performance: such models achieve higher accuracy, and their accuracy can be raised with additional features. It is interesting to note that modeling variable interactions yields a higher performance gain than including additional explanatory variables.

While these results do not prove that modeling feature interactions is necessary, we believe that they provide a strong indication. This suggests a number of avenues for further research. First, we could attempt to improve the specific models that were presented by incorporating additional features, and perhaps by taking into account higher-order features. This might help to address the performance gap between our models and human subjects that has been documented in the literature. [Footnote 3: For example, if random sentences with "Verb NP PP" cases from the Penn Treebank are taken as the gold standard, then (Hindle and Rooth, 1993) and (Ratnaparkhi, Reynar, and Roukos, 1994) report that human experts using only head words obtain 85%-88% accuracy. If the human experts are allowed to consult the whole sentence, their accuracy judged against random Treebank sentences rises to approximately 93%.] A more ambitious idea would be to use a statistical model to rank overall parse quality for entire sentences. This would be an improvement over schemes that assume independence between a number of individual scoring functions, such as (Alshawi and Carter, 1994). If such a model were to include only a few general variables to account for such features as lexical association and recency preference for syntactic attachment, it might even be worthwhile to investigate it as an approximation to the human parsing mechanism.
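The contrast drawn in these conclusions can be illustrated on a toy classification problem. The sketch below is purely illustrative and is not either of the paper's models: the "main effects" predictor treats the two features as independent (a naive-Bayes-like product), while the "interaction" predictor conditions the outcome on the full feature combination, which is what a loglinear model with interaction terms effectively captures.

    from collections import Counter

    def fit_models(cases):
        """cases: iterable of (feature1, feature2, outcome) triples.
        Returns two predictors over (feature1, feature2)."""
        joint = Counter((f1, f2, y) for f1, f2, y in cases)
        outcome = Counter(y for _, _, y in cases)
        f1_y = Counter((f1, y) for f1, _, y in cases)
        f2_y = Counter((f2, y) for _, f2, y in cases)

        def predict_main_effects(f1, f2):
            # Score proportional to P(y) * P(f1|y) * P(f2|y).
            def score(y):
                n = outcome[y]
                return n * (f1_y[(f1, y)] / n) * (f2_y[(f2, y)] / n) if n else 0
            return max(outcome, key=score)

        def predict_interaction(f1, f2):
            # Condition directly on the feature combination.
            return max(outcome, key=lambda y: joint[(f1, f2, y)])

        return predict_main_effects, predict_interaction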
References

Agresti, Alan. 1990. Categorical Data Analysis. John Wiley & Sons, New York.
Alshawi, Hiyan and David Carter. 1994. Training and scaling preference functions for disambiguation. Computational Linguistics, 20(4):635-648.
Bishop, Y. M., S. E. Fienberg, and P. W. Holland. 1975. Discrete Multivariate Analysis: Theory and Practice. MIT Press, Cambridge, MA.
Charniak, Eugene, Curtis Hendrickson, Neil Jacobson, and Mike Perkowitz. 1993. Equations for part-of-speech tagging. In AAAI-93, pages 784-789.
Church, Kenneth W. and Patrick Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.
Crain, Stephen and Mark J. Steedman. 1985. On not being led up the garden path: The use of context by the psychological syntax processor. In David R. Dowty, Lauri Karttunen, and Arnold M. Zwicky, editors, Natural Language Parsing, pages 320-358, Cambridge, UK. Cambridge University Press.
de Marcken, Carl G. 1990. Parsing the LOB corpus. In Proceedings of ACL-90, pages 243-251.
Deming, W. E. and F. F. Stephan. 1940. On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. Annals of Mathematical Statistics, 11:427-444.
Duda, Richard O. and Peter E. Hart. 1973. Pattern Classification and Scene Analysis. John Wiley & Sons, New York.
Fienberg, Stephen E. 1980. The Analysis of Cross-Classified Categorical Data. The MIT Press, Cambridge, MA, second edition.
Franz, Alexander. 1996. Automatic Ambiguity Resolution in Natural Language Processing, volume 1171 of Lecture Notes in Artificial Intelligence. Springer Verlag, Berlin.
Gibson, Ted and Neal Pearlmutter. 1994. A corpus-based analysis of psycholinguistic constraints on PP attachment. In Charles Clifton Jr., Lyn Frazier, and Keith Rayner, editors, Perspectives on Sentence Processing. Lawrence Erlbaum Associates.
Hindle, Donald and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.
Hirst, Graeme. 1986. Semantic Interpretation and the Resolution of Ambiguity. Cambridge University Press, Cambridge.
Marcus, Mitchell P., Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Ratnaparkhi, Adwait, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for Prepositional Phrase attachment. In ARPA Workshop on Human Language Technology, Plainsboro, NJ, March 8-11.
Weischedel, Ralph, Marie Meteer, Richard Schwartz, Lance Ramshaw, and Jeff Palmucci. 1993. Coping with ambiguity and unknown words through probabilistic models. Computational Linguistics, 19(2):359-382.

| 1997 | 24 |
Planning Reference Choices for Argumentative Texts

Xiaorong Huang
Techne Knowledge Systems
439 University Avenue
Toronto, Ontario M5S 3G4, Canada
xh@FormalSystems.ca

[Footnote *: Much of this research was carried out while the author was at the Dept. of CS, Univ. of the Saarland, supported by the DFG (German Research Council). This paper was written while the author was a visitor at the Dept. of CS, Univ. of Toronto, using facilities supported by a grant from the Natural Sciences and Engineering Research Council of Canada.]

Abstract

This paper deals with the reference choices involved in the generation of argumentative text. Since a natural segmentation of discourse into attentional spaces is needed to carry out this task, this paper first proposes an architecture for natural language generation that combines hierarchical planning and focus-guided navigation, a work in its own right. While hierarchical planning spans out an attentional hierarchy of the discourse produced, local navigation fills details into the primitive discourse spaces. The usefulness of this architecture actually goes beyond the particular domain of application for which it is developed.

A piece of argumentative text such as the proof of a mathematical theorem conveys a sequence of derivations. For each step of derivation, the premises derived in the previous context and the inference method (such as the application of a particular theorem or definition) must be made clear. Although not restricted to nominal phrases, our reference decisions are similar to those concerning nominal subsequent referring expressions. Based on the work of Reichman, this paper presents a discourse theory that handles reference choices by taking into account both textual distance as well as the attentional hierarchy.

1 Introduction

This paper describes how reference decisions are made in PROVERB, a system that verbalizes machine-found natural deduction (ND) proofs. A piece of argumentative text such as the proof of a mathematical theorem can be viewed as a sequence of derivations. Each such derivation is realized in PROVERB by a proof communicative act (PCA), following the viewpoint that language utterances are actions. PCAs involve referring phrases that should help a reader to unambiguously identify an object of a certain type from a pool of candidates. Concretely, such references must be made for previously derived conclusions used as premises and for the inference method used in the current step. As an example, let us look at the PCA with the name Derive below:

(Derive
  Derived-Formula: u * 1_U = u
  Reasons: (unit(1_U, U, *), u ∈ U)
  Method: Def-Semigroup*unit)

Here, the slot Derived-Formula is filled by a new conclusion which this PCA aims to convey. It can be inferred by applying the filler of Method to the filler of Reasons as premises. There are alternative ways of referring to both the Reasons and the Method. Depending on the discourse history, the following are two of the possible verbalizations:

1. (inference method omitted): "Since 1_U is the unit element of U, and u is an element of U, u * 1_U = u."
2. (reasons omitted): "According to the definition of unit element, u * 1_U = u."

An explicit reference to a premise or an inference method is not restricted to a nominal phrase, as opposed to many of the treatments of subsequent references found in the literature.
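A PCA is essentially a record with typed slots plus a choice among verbalization patterns. A minimal Python rendering of the Derive example above follows; the field names mirror the slots, but the rendering logic is invented for illustration and merely stands in for PROVERB's microplanner.

    from dataclasses import dataclass
    from typing import Sequence

    @dataclass
    class Derive:
        derived_formula: str
        reasons: Sequence[str]
        method: str

        def verbalize(self, omit_method=False, omit_reasons=False) -> str:
            if omit_method:
                return f"Since {' and '.join(self.reasons)}, {self.derived_formula}."
            if omit_reasons:
                return f"According to {self.method}, {self.derived_formula}."
            return (f"Since {' and '.join(self.reasons)}, "
                    f"{self.derived_formula} by {self.method}.")

    pca = Derive("u * 1_U = u",
                 ["1_U is the unit element of U", "u is an element of U"],
                 "the definition of unit element")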
Despite this difference, the choices to be made here have much in common with the choices of subsequent references discussed in more general frameworks (Reichman, 1985; Grosz and Sidner, 1986; Dale, 1992): they depend on the availability of the object to be referred to in the context, and are sensitive to the segmentation of a context into an attentional hierarchy. Therefore, we first have to devise an architecture for natural language generation that facilitates a natural and effective segmentation of discourse. The basic idea is to distinguish between language production activities that effect the global shift of attention, and language production activities that involve only local attentional movement. Concretely, PROVERB uses an architecture that models text generation as a combination of hierarchical planning and focus-guided navigation. Following (Grosz and Sidner, 1986), we further assume that every posting of a new task by the hierarchical planning mechanism creates new attentional spaces. Based on this segmentation, PROVERB makes reference choices according to a discourse theory adapted from Reichman (Reichman, 1985; Huang, 1990).

2 The System PROVERB

PROVERB is a text planner that verbalizes natural deduction (ND) style proofs (Gentzen, 1935). Several similar attempts can be found in previous work. The system EXPOUND (Chester, 1976) is an example of direct translation: although a sophisticated linearization is applied to the input ND proofs, the steps are then translated locally in a template-driven way. ND proofs were tested as inputs to an early version of MUMBLE (McDonald, 1983); the main aim, however, was to show the feasibility of the architecture. A more recent attempt can be found in THINKER (Edgar and Pelletier, 1993), which implements several interesting but isolated proof presentation strategies. PROVERB, however, can be seen as the first serious attempt at a comprehensive system that produces adequate argumentative texts from ND style proofs. Figure 1 shows the architecture of PROVERB (Huang, 1994a; Huang and Fiedler, 1997): the macroplanner produces a sequence of PCAs, and the DRCC (Derive Reference Choices Component) module of the microplanner enriches the PCAs with reference choices. The TSG (Text Structure Generator) module subsequently produces the text structures as the output of the microplanner. Finally, text structures are realized by TAG-GEN (Kilger and Finkler, 1995), our realization component. In this paper, we concentrate only on the macroplanner and the DRCC component.

[Figure 1: Architecture of PROVERB; a pipeline from the input Natural Deduction Proof through the Macroplanner (producing PCAs), the Microplanner (DRCC and TSG, producing the Text Structure), to the Realizer. Diagram omitted.]

2.1 Architecture of the Macroplanner

Most current text planners adopt a hierarchical planning approach (Hovy, 1988; Moore and Paris, 1989; Dale, 1992; Reithinger, 1991). Nevertheless there is psychological evidence that language has an unplanned, spontaneous aspect as well (Ochs, 1979). Based on this observation, Sibun (Sibun, 1990) implemented a system for generating descriptions of objects with a strong domain structure, such as houses, chips, and families. Her system produces text using a technique she called local organization. While a hierarchical planner recursively breaks generation tasks into subtasks, local organization navigates the domain-object following the local focus of attention.
PROVERB combines both of these approaches in a uniform planning framework (Huang, 1994a). Hierarchical planning splits the task of presenting a particular proof into subtasks of presenting subproofs. While the overall planning mechanism follows the RST-based planning approach (Hovy, 1988; Moore and Paris, 1989; Reithinger, 1991), the planning operators more resemble the schemata in schema-based planning (McKeown, 1985; Paris, 1988), since presentation patterns associated with specific proof patterns normally contain multiple RST relations. PROVERB's hierarchical planning is driven by proof patterns that entail or suggest established ways of presentation. For trivial proofs that demonstrate no characteristic patterns, however, this technology will fail. PROVERB navigates such relatively small parts of a proof and chooses the next conclusion to be presented under the guidance of a local focus mechanism.

While most existing systems follow one of the two approaches exclusively, PROVERB uses them as complementary techniques in an integrated framework. In this way, our architecture provides a clear way of factoring out domain-dependent presentation knowledge from more general NLG techniques. While PROVERB's hierarchical planning operators encode accepted formats for mathematical text, its local navigation embodies more generic principles of language production.

The two kinds of planning operators are treated accordingly. Since hierarchical planning operators embody explicit communicative norms, they are given a higher priority. Only when none of them is applicable will a local navigation operator be chosen.

2.2 Proof Communicative Acts

PCAs are the primitive actions planned by the macroplanner of PROVERB. Like speech acts, they can be defined in terms of the communicative goals they fulfill as well as their possible verbalizations. The simplest one, conveying the derivation of a new intermediate conclusion, is illustrated in the introduction. There are also PCAs that convey a partial plan for further presentation and thereby update the reader's global attentional structure. For instance, the PCA

(Begin-Cases
  Goal: Formula
  Assumptions: (A B))

creates two attentional spaces with A and B as the assumptions, and Formula as the goal, by producing the verbalization: "To prove Formula, let us consider the two cases by assuming A and B."

2.3 Hierarchical Planning

Hierarchical planning operators represent communicative norms concerning how a proof to be presented can be split into subproofs, how the subproofs can be mapped onto some linear order, and how primitive subproofs should be conveyed by PCAs. Let us look at one such operator, which handles proof by case analysis. The corresponding schema of such a proof tree is shown in Figure 2 [Footnote 1: We adopt for proof trees the notation of Gentzen. Each bar represents a step of derivation, where the formula beneath the bar is derived from the premises above the bar. For the convenience of discussion, some formulae are given an identifying label, such as ?L1.], where the subproof rooted at ?L4 leads to F ∨ G, while the subproofs rooted at ?L2 and ?L3 are the two cases proving Q by assuming F or G, respectively. The applicability condition encodes the two scenarios of case analysis; we do not go into details here. In both circumstances this operator first presents the part leading to F ∨ G, and then proceeds with the two cases. It also inserts certain PCAs to mediate between parts of proofs.

[Figure 2: Proof Schema Case; diagram omitted.]
This procedure is captured by the planning operator below.

Case-Implicit
• Applicability Condition: ((task ?L1) ∨ (local-focus ?L4)) ∧ (not-conveyed (?L2 ?L3))
• Acts:
  1. If ?L4 has not been conveyed, then present ?L4 (subgoal 1).
  2. A PCA with the verbalization: "First, let us consider the first case by assuming F."
  3. Present ?L2 (subgoal 2).
  4. A PCA with the verbalization: "Next, we consider the second case by assuming G."
  5. Present ?L3 (subgoal 3).
  6. Mark ?L1 as conveyed.
• Features: (hierarchical-planning compulsory implicit)

2.4 Planning as Navigation

The local navigation operators simulate the unplanned part of proof presentation. Instead of splitting presentation goals into subgoals, they follow the local derivation relation to find a proof step to be presented next.

2.4.1 The Local Focus

The node to be presented next is suggested by the mechanism of local focus. In PROVERB, our local focus is the last derived step, while focal centers are semantic objects mentioned in the local focus. Although logically any proof node that uses the local focus as a premise could be chosen for the next step, usually the one with the greatest semantic overlap with the focal centers is preferred. In other words, if one has proved a property about some semantic objects, one will tend to continue to talk about these particular objects before turning to new objects. Let us examine the situation when the proof below is awaiting presentation:

    [1]: P(a,b)        [1]: P(a,b)   [3]: S(c)
    -----------        -----------------------
    [2]: Q(a,b)             [4]: R(b,c)
    -----------------------------------
          [5]: Q(a,b) ∧ R(b,c)

Assume that node [1] is the local focus, {a, b} is the set of focal centers, [3] is a previously presented node, and node [5] is the root of the proof to be presented. [2] is chosen as the next node to be presented, since it does not introduce any new semantic object and its overlap with the focal centers ({a, b}) is larger than the overlap of [4] with the focal centers ({b}). For local focus mechanisms used in another domain of application, readers are referred to (McKeown, 1985).
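The selection heuristic just described can be sketched directly. Here semantic_objects is an assumed map from a proof node to the set of objects it mentions, and the exact tie-breaking between overlap and novelty is an assumption.

    def choose_next_node(local_focus, candidates, semantic_objects):
        """Among proof nodes that use the current focus as a premise,
        prefer the one whose objects overlap most with the focal
        centers, and penalize nodes that introduce new objects."""
        focal_centers = semantic_objects[local_focus]
        def score(node):
            mentioned = semantic_objects[node]
            return (len(mentioned & focal_centers),    # overlap first
                    -len(mentioned - focal_centers))   # fewer new objects
        return max(candidates, key=score)

On the example above, node [2] (mentioning {a, b}) scores (2, 0) while node [4] (mentioning {b, c}) scores (1, -1), so [2] is selected.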
3 The Attentional Hierarchy

The distinction between hierarchical planning and local navigation leads to a very natural segmentation of a discourse into an attentional hierarchy, since, following the theory of Grosz and Sidner (Grosz and Sidner, 1986), there is a one-to-one correspondence between the intentional hierarchy and the attentional hierarchy. In this section, we illustrate the attentional hierarchy with the help of an example, which will be used to discuss reference choices later. The input proof in Figure 3 is an ND style proof of the following theorem: [Footnote 2: The first 6 lines are definitions and theorems used in this proof, which are omitted.]

Theorem: Let F be a group and U a subgroup of F. If 1 and 1_U are unit elements of F and U respectively, then 1 = 1_U.

The definitions of semigroup, group, and unit are obvious. solution(a, b, c, F, *) stands for "c is a solution of the equation a * x = b in F." Each line in the proof is of the form:

    Label  Δ ⊢ Conclusion  (Justification reasons)

where Justification is either an ND inference rule, a definition or a theorem, which justifies the derivation of the Conclusion using as premises the formulas in the lines given as reasons. Δ can be ignored for our purposes.

Figure 3: Abstracted Proof about Unit Element of Subgroups

    No.  Δ     Formula                                                          Reason
    7.   7;    ⊢ group(F,*) ∧ subgroup(U,F,*) ∧ unit(F,1,*) ∧ unit(U,1_U,*)    (Hyp)
    8.   7;    ⊢ U ⊂ F                                                          (Def-subgroup 7)
    9.   7;    ⊢ 1_U ∈ U                                                        (Def-unit 7)
    10.  7;    ⊢ ∃x x ∈ U                                                       (∃ 9)
    11.  7;11  ⊢ u ∈ U                                                          (Hyp)
    12.  7;11  ⊢ u * 1_U = u                                                    (Def-unit 7 11)
    13.  7;11  ⊢ u ∈ F                                                          (Def-subset 8 11)
    14.  7;11  ⊢ 1_U ∈ F                                                        (Def-subset 8 9)
    15.  7;11  ⊢ semigroup(F,*)                                                 (Def-group 7)
    16.  7;11  ⊢ solution(u, u, 1_U, F, *)                                      (Def-solution 12 13 14 15)
    17.  7;11  ⊢ u * 1 = u                                                      (Def-unit 7 13)
    18.  7;11  ⊢ 1 ∈ F                                                          (Def-unit 7)
    19.  7;11  ⊢ solution(u, u, 1, F, *)                                        (Def-solution 13 17 18 15)
    20.  7;11  ⊢ 1 = 1_U                                                        (Th-solution 17 16 19)
    21.  7;    ⊢ 1 = 1_U                                                        (Choice 10 20)
    22.  ;     ⊢ group(F,*) ∧ subgroup(U,F,*) ∧ unit(F,1,*) ∧ unit(U,1_U,*) ⇒ 1 = 1_U   (Ded 7;21)

We assume a reader will build up a (partial) proof tree as his model of the ongoing discourse. The corresponding discourse model after the completion of the presentation of the proof in Figure 3 is the proof tree shown in Figure 4. Note that the bars in Gentzen's notation (Figure 2) are replaced by links for clarity. The numbers associated with nodes are the corresponding line numbers in Figure 3. Children of nodes are given in the order they have been presented. The circles denote nodes which are first derived at this place, and nodes in the form of small boxes are copies of some previously derived nodes (circled nodes), which are used as premises again. For nodes in a box, a referring expression must have been generated in the text. The big boxes represent attentional spaces (previously called proof units by the author), created during the presentation process.

[Figure 4: Proof Tree as Discourse Model; a hierarchy of attentional spaces U1-U6 over the proof nodes. Diagram omitted.]

The naturalness of this segmentation is largely due to the naturalness of the hierarchical planning operators. For example, attentional space U2 has two subordinate spaces U3 and U4. This reflects a natural shift of attention between a subproof that derives a formula of the pattern ∃x P(x) (node 10, ∃x x ∈ U), and the subproof that proceeds after assuming a new constant u satisfying P (node 11, u ∈ U). When PROVERB opens a new attentional space, the reader will be given information to post an open goal and the corresponding premises. Elementary attentional spaces are often composed of multiple PCAs produced by consecutive navigation steps, such as U5 and U6. It is interesting to note that an elementary attentional space cannot contain PCAs that are produced by consecutive planning operators in a pure hierarchical planning framework.

Adapting the theory of Reichman for our purposes (Reichman, 1985), we assume that each attentional space may have one of the following statuses:

• An attentional space is said to be open if its root is still an open goal.
  - The active attentional space is the innermost attentional space that contains the local focus.
  - The controlling attentional space is the innermost proof unit that contains the active attentional space.
  - Precontrol attentional spaces are attentional spaces that contain the controlling attentional space.
• Closed spaces are attentional spaces without open goals.
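These status assignments can be transcribed almost directly into code. In the sketch below, the helpers has_open_goal, parent, and contains are assumed interfaces onto the proof-unit hierarchy; the fall-through case for open but unrelated spaces is an assumption, since the definitions above only classify spaces on the path above the local focus.

    def space_status(space, active_space, tree):
        """Classify an attentional space relative to the space that
        currently contains the local focus (active_space)."""
        if not tree.has_open_goal(space):
            return "closed"
        if space is active_space:
            return "active"
        if space is tree.parent(active_space):
            return "controlling"
        if tree.contains(space, active_space):
            return "precontrol"
        return "open"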
4 A Classification of Reference Forms A referring expression should help a reader to iden- tify an object from a pool of candidates, This sec- tion presents a classification of the possible forms with which mathematicians refer to conclusions pre- viously proved (called reasons) or to methods of in- ference available in a domain. 4.1 Reference Forms for Reasons Three reference forms have been identified by the author for reasons in naturally occurring proofs (Huang, 1990): 1. The omit form: where a reason is not mentioned at all. 2. The explicit form: where a reason is literally re- peated. 3. The implicit form: By an implicit form we mean that although a reason is not verbalized directly, a hint is given in the verbalization of either the inference method, or of the conclusion. For in- stance, in the verbalization below "Since u is an element in U, u • 1u = u by the definition of unit." the first reason of the PCA in Section 1, "since 1v is the unit element of U" is hinted at by the inference method which reads "by the definition of unit". Although omit and implicit forms lead to the same surface structure, the existence of an implicit hint in the other part of the verbalization affects a reader's understanding. 4.2 Reference Forms for Methods PROVERB must select referring expressions for methods of inference in PCAs as well. Below are the three reference forms identified by the author, which are analogous to the corresponding cases for reasons: 1. the explicit form: this is the case where a writer may decide to indicate explicitly which inference rule he is using. For instance, explicit translations of a definition may have the pattern: "by the def- inition of unit element", or "by the uniqueness of solution." ND rules have usually standard verbal- izations. 2. the omit form: in this case a word such as "thus" or "therefore" will be used. 3. The implicit form: Similar to the implicit form for the expression of reasons, an implicit hint to a domain-specific inference method can be given either in the verbalization of the reasons, or in that of the conclusion. 5 Reference Choices in PROVERB 5.1 Referring to Reasons Because reasons are intermediate conclusions proved previously in context, their reference choices have much in common with the problem of choosing anaphoric referring expressions in general. To ac- count for this phenomenon , concepts like activat- 194 edness, foregroundness and consciousness have been introduced. More recently, the shift of focus has been further investigated in the light of a structured flow of discourse (Reichman, 1985; Grosz and Sid- net, 1986; Dale, 1992). The issue of salience is also studied in a broader framework in (Pattabhiraman and Cercone, 1993). Apart from salience, it is also shown that referring expressions are strongly influ- enced by other aspects of human preference. For ex- ample, easily perceivable attributes and basic-level attributes values are preferred (Dale and Haddock, 1991; Dale, 1992; Reiter and Dale, 1992). In all discourse-based theories, the update of the focus status is tightly coupled to the factoring of the flux of text into segments. With the segmenta- tion problem settled in section 3, the DRCC module makes reference choices following a discourse theory adapted from Reichman (Reichman, 1985). Based on empirical data, Reichman argues that the choice of referring expressions is constrained both by the status of the discourse space and by the object's level of focus within this space. 
In her theory, there are seven status assignments a discourse space may have. Within a discourse space, four levels of focus can be assigned to individual objects: high, medium, low, or zero, since there are four major ways of referring to an object using English, namely by using a pronoun, by name, by a description, or implicitly. Our theory uses the notions of structural closeness and textual closeness, and takes both of these factors into account for argumentative discourse.

5.1.1 Structural Closeness

The structural closeness of a reason reflects the foreground and background character of the innermost attentional space containing it. Reasons that may still remain in the focus of attention at the current point from the structural perspective are considered structurally close. Otherwise they are considered structurally distant. If a reason, for instance, is last mentioned or proved in the active attentional space (the subproof on which a reader is supposed to concentrate), it is likely that this reason still remains in his focus of attention. In contrast, if a reason is in a closed subproof, but is not its conclusion, it is likely that the reason has already been moved out of the reader's focus of attention. Although finer differentiation may be needed, our theory only distinguishes between reasons residing in attentional spaces that are structurally close or structurally distant. DRCC assigns the structural status by applying the following rules:

1. Reasons in the active attentional space are structurally close.
2. Reasons in the controlling attentional space are structurally close.
3. Reasons in closed attentional spaces:
   (a) Reasons that are the root of a closed attentional space immediately subordinate to the active attentional space are structurally close.
   (b) Other reasons in a closed attentional space are structurally distant.
4. Reasons in precontrol attentional spaces are structurally distant.

Note that the rules are specified with respect to the innermost proof unit containing a proof node. Rule 3 means that only the conclusions of closed subordinate subproofs still remain in the reader's focus of attention.

5.1.2 Textual Closeness

Textual closeness is used as a measure of the level of focus of an individual reason. In general, the level of focus of an object is established when it is activated, and decreases with the flow of discourse. In Reichman's theory, although four levels of focus can be established upon activation, only one is used in the formulation of the four reference rules. In other words, it suffices to track the status high alone. Therefore, we use only two values to denote the level of focus of individual intermediate conclusions, which is calculated from the textual distance between the last mention of a reason and the current sentence where the reason is referred to.

5.1.3 Reference Rules

We assume that each intermediate conclusion is put into high focus when it is presented as a newly derived conclusion or cited as a reason supporting the derivation of another intermediate result. This level of focus decreases, either when an attentional space is moved out of the foreground of discussion, or with the increase of textual distance. The DRCC component of PROVERB models this behavior with the following four reference rules.

Referring Expressions for Reasons
1. If a reason is both structurally and textually close, it will be omitted.
2. If a reason is structurally close but textually distant, first try to find an implicit form; if impossible, use an explicit form.
3. If a reason is structurally distant but textually close, first try to find an implicit form; if impossible, omit it.
4. An explicit form will be used for reasons that are both structurally and textually distant.
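The four rules form a small decision procedure, shown below as a direct transcription. The two boolean inputs are assumed to be computed from the structural closeness rules of Section 5.1.1 and the textual distance threshold of Section 5.1.2.

    def reason_reference_form(structurally_close, textually_close,
                              implicit_available):
        """DRCC's choice of reference form for a reason."""
        if structurally_close and textually_close:
            return "omit"                                        # Rule 1
        if structurally_close and not textually_close:
            return "implicit" if implicit_available else "explicit"  # Rule 2
        if not structurally_close and textually_close:
            return "implicit" if implicit_available else "omit"      # Rule 3
        return "explicit"                                        # Rule 4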
Note that the result of applying rule 2 and rule 3 depends on the availability of an implicit form, which often interacts with the verbalization of the rest of a PCA, in particular with that of the inference method. Since the reference choice for methods is handled independently of the discourse segmentation (Huang, 1996), however, it is not discussed in this paper.

Fourteen PCAs are generated by the macroplanner of PROVERB for our example in Figure 3. The microplanner and the realizer of PROVERB finally produce:

Proof: Let F be a group, U be a subgroup of F, and 1 and 1_U be unit elements of F and U, respectively. According to the definition of unit element, 1_U ∈ U. Therefore there is an X, X ∈ U. Now suppose that u is such an X. According to the definition of unit element, u * 1_U = u.

Since U is a subgroup of F, U ⊂ F. Therefore 1_U ∈ F. Similarly u ∈ F, since u ∈ U. Since F is a group, F is a semigroup. Because u * 1_U = u, 1_U is a solution of the equation u * X = u.

Since 1 is a unit element of F, u * 1 = u. Since 1 is a unit element of F, 1 ∈ F. Because u ∈ F, 1 is a solution of the equation u * X = u. Since F is a group, 1_U = 1 by the uniqueness of solution.

Some explanations are in order. PROVERB's microplanner cuts the entire text into three paragraphs, basically mirroring the larger attentional spaces U3, U5 and U6 in Figure 4. Since nodes 22 and 21 are omitted in this verbalization, node 20 (the last sentence) is merged into the paragraph for U6. Let us examine the reference choices in the second to last sentence:

Because u ∈ F, 1 is a solution of the equation u * X = u.

which is actually line 19 in Figure 3 and node 19 in Figure 4. Among the four reason nodes 13, 17, 18, 15, only node 13 is explicitly mentioned, since it is in a closed attentional space (U5) and was mentioned five sentences ago. Nodes 17 and 18 are in the current space (U6) and were activated only one or two sentences ago; they are therefore omitted. Node 15 is also omitted although it is in the same closed space U5, because it was mentioned one sentence after node 13 and is considered near in terms of textual distance.

6 Conclusion

This paper describes the way in which PROVERB refers to previously derived results while verbalizing machine-found proofs. By distinguishing between hierarchical planning and focus-guided navigation, PROVERB achieves a natural segmentation of context into an attentional hierarchy. Based on this segmentation, PROVERB makes reference decisions according to a discourse theory adapted from Reichman for this special application.

PROVERB works in a fully automatic way. The output texts are close to detailed proofs in textbooks and are basically accepted by the community of automated reasoning. With the increasing size of proofs which PROVERB is getting as input, investigation is needed both for longer proofs as well as for more concise styles.

Although developed for a specific application, we believe the main rationales behind our system architecture are useful for natural language generation in general.
Concerning segmentation of discourse, a natural segmentation can be easily achieved if we can distinguish between language generation activities affecting the global structure of attention and those only moving the local focus. We believe a global attentional hierarchy plays a crucial role in choosing referring expressions beyond this particular domain of application. Furthermore, it turned out to be important for other generation decisions as well, such as paragraph scoping and layout. Finally, the combination of hierarchical planning with local navigation needs more research as a topic in its own right. For many applications, these two techniques are a complementary pair.

Acknowledgment

Sincere thanks are due to all three anonymous reviewers of ACL/EACL'97, who provided valuable comments and constructive suggestions. I would like to thank Graeme Hirst as well, who carefully read the final version of this paper.

References

Chester, Daniel. 1976. The translation of formal proofs into English. Artificial Intelligence, 7:178-216.
Dale, Robert. 1992. Generating Referring Expressions. ACL-MIT Press Series in Natural Language Processing. MIT Press.
Dale, Robert and Nicholas Haddock. 1991. Content determination in the generation of referring expressions. Computational Intelligence, 7(4).
Edgar, Andrew and Francis Jeffry Pelletier. 1993. Natural language explanation of natural deduction proofs. In Proc. of the First Conference of the Pacific Association for Computational Linguistics, Vancouver, Canada. Centre for Systems Science, Simon Fraser University.
Gentzen, Gerhard. 1935. Untersuchungen über das logische Schließen I. Mathematische Zeitschrift, 39:176-210.
Grosz, Barbara J. and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
Hovy, Eduard H. 1988. Generating Natural Language under Pragmatic Constraints. Lawrence Erlbaum Associates, Hillsdale, New Jersey.
Huang, Xiaorong. 1990. Reference choices in mathematical proofs. In L. C. Aiello, editor, Proc. of the 9th European Conference on Artificial Intelligence, pages 720-725. Pitman Publishing.
Huang, Xiaorong. 1994. Planning argumentative texts. In Proc. of COLING-94, pages 329-333, Kyoto, Japan.
Huang, Xiaorong. 1996. Human Oriented Proof Presentation: A Reconstructive Approach. Infix, Sankt Augustin.
Huang, Xiaorong and Armin Fiedler. 1997. Proof verbalization as an application of NLG. In Proc. of IJCAI-97, Nagoya, Japan, forthcoming.
Kilger, Anne and Wolfgang Finkler. 1995. Incremental generation for real-time applications. Research Report RR-95-11, DFKI, Saarbrücken, Germany.
McDonald, David D. 1983. Natural language generation as a computational problem. In Brady and Berwick: Computational Models of Discourse. MIT Press.
McKeown, Kathleen. 1985. Text Generation. Cambridge University Press, Cambridge, UK.
Moore, Johanna and Cécile Paris. 1989. Planning text for advisory dialogues. In Proc. 27th Annual Meeting of the Association for Computational Linguistics, pages 203-211, Vancouver, British Columbia.
Ochs, Elinor. 1979. Planned and unplanned discourse. Syntax and Semantics, 12:51-80.
Paris, Cécile. 1988. Tailoring object descriptions to a user's level of expertise. Computational Linguistics, 14:64-78.
Pattabhiraman, T. and Nick Cercone. 1993. Decision-theoretic salience interactions in language generation. In Ruzena Bajcsy, editor, Proc. of IJCAI-93, volume 2, pages 1246-1252, Chambéry, France. Morgan Kaufmann.
Reichman, Rachel. 1985. Getting Computers to Talk Like You and Me. Discourse Context, Focus, and Semantics. MIT Press.
Reiter, Ehud and Robert Dale. 1992. A fast algorithm for the generation of referring expressions. In Proc. of COLING-92, volume 1, pages 232-238.
Reithinger, Norbert. 1991. Eine parallele Architektur zur inkrementellen Generierung multimodaler Dialogbeiträge. Ph.D. thesis, Universität des Saarlandes. Also available as book, Infix, Sankt Augustin, 1991.
Sibun, Penelope. 1990. The local organization of text. In K. McKeown, J. Moore, and S. Nirenburg, editors, Proc. of the Fifth International Natural Language Generation Workshop, pages 120-127, Dawson, Pennsylvania.

| 1997 | 25 |
Sentence Planning as Description Using Tree Adjoining Grammar

Matthew Stone
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
matthew@linc.cis.upenn.edu

Christine Doran
Department of Linguistics
University of Pennsylvania
Philadelphia, PA 19104
cdoran@linc.cis.upenn.edu

[Footnote *: The authors thank Aravind Joshi, Mark Steedman, Martha Palmer, Ellen Prince, Owen Rambow, Mike White, Betty Birner, and the participants of INLG96 for their helpful comments on various incarnations of this work. This work has been supported by NSF and IRCS graduate fellowships, NSF grant NSF-STC SBR 8920230, ARPA grant N00014-94 and ARO grant DAAH04-94-G0426.]

Abstract

We present an algorithm for simultaneously constructing both the syntax and semantics of a sentence using a Lexicalized Tree Adjoining Grammar (LTAG). This approach captures naturally and elegantly the interaction between pragmatic and syntactic constraints on descriptions in a sentence, and the inferential interactions between multiple descriptions in a sentence. At the same time, it exploits linguistically motivated, declarative specifications of the discourse functions of syntactic constructions to make contextually appropriate syntactic choices.

1 Introduction

Since (Meteer, 1991), researchers in natural language generation have recognized the need to refine and reorganize content after the rhetorical organization of arguments and before the syntactic realization of phrases. This process has been named sentence planning (Rambow and Korelsky, 1992). Broadly speaking, it involves aggregating content into sentence-sized units, and then selecting the lexical and syntactic elements that are used in realizing each sentence. Here, we consider this second process. The challenge lies in integrating constraints from syntax, semantics and pragmatics. Although most generation systems pipeline decisions (Reiter, 1994), we believe the most efficient and flexible way to integrate constraints in sentence planning is to synchronize the decisions. In this paper, we provide a natural framework for dealing with interactions and ensuring contextually appropriate output in a single pass. As in (Yang et al., 1991), Lexicalized Tree Adjoining Grammar (LTAG) provides an abstraction of the combinatorial properties of words. We combine LTAG syntax with declarative specifications of the semantics and pragmatics of words and constructions, so that we can build the syntax and semantics of sentences simultaneously. To drive this process, we take description as the paradigm for sentence planning. Our planner, SPUD (Sentence Planner Using Descriptions), takes in a collection of goals to achieve in describing an event or state in the world; SPUD incrementally and recursively applies lexical specifications to determine which entities to describe and what information to include about them. Our system is unique in the streamlined organization of the grammar, and in its evaluation both of the contextual appropriateness of pragmatics and of the descriptive adequacy of semantics.

The organization of the paper is as follows. In Section 2, we review research on generating referring expressions and motivate our treatment of sentences as referring expressions. Then, in Section 3, we present the linguistic underpinnings of our work. In Section 4, we describe our algorithm and its operation on an example. Finally, in Section 5 we compare our system with related approaches.

2 Sentences as referring expressions

Our proposal is to treat the realization of sentences as parallel to the construction of referring expressions, and thereby bring to bear modern discourse-oriented theories of semantics and the idea that language use is INTENTIONAL ACTION. Semantically, a DESCRIPTION D is just an open formula. D applies to a sequence of entities when substituting them for the variables in D yields a true formula. D REFERS to c just in case it distinguishes c from its DISTRACTORS; that is, D applies to c but to no other salient alternatives. Given a sufficiently rich logical language, the meaning of a natural language sentence can be represented as a description in this sense, by assuming sentences refer to entities in a DISCOURSE MODEL, cf. alternative semantics (Karttunen and Peters, 1979; Rooth, 1985).
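The definition of reference just given is directly executable. In the sketch below, applies is an assumed predicate implementing satisfaction of the open formula in the discourse model; everything else follows the definition.

    def refers(description, entity, salient_alternatives, applies):
        """A description D refers to c iff D applies to c and to none
        of c's distractors among the salient alternatives."""
        if not applies(description, entity):
            return False
        distractors = [a for a in salient_alternatives if a != entity]
        return not any(applies(description, a) for a in distractors)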
2 Sentences as referring expressions Our proposal is to treat the realization of sentences as parallel to the construction of referring expressions, and thereby bring to bear modern discourse-oriented theories of semantics and the idea that language use is INTEN- TIONAL ACTION. Semantically, a DESCRIPTION D is just an open formula. D applies to a sequence of entities when substituting them for the variables in D yields a true formula. D REFERS to C jUSt in case it distinguishes c from its DISTRACTORS-- that is D applies to c but to no other salient alternatives. Given a sufficiently rich logical language, the meaning of a natural language sentence can be represented as a description in this sense, by assuming sentences refer to entities in a DISCOURSE MODEL, cf. alternative semantics (Karttunen and Peters, 1979; Rooth, 1985). Pragmatic analyses of referring expressions model speakers as PLANNING those expressions to achieve sev- eral different kinds of intentions (Donellan, 1966; Appelt, 198 1985; Kronfeld, 1986). Given a set of entities to describe and a set of intentions to achieve in describing them, a plan is constructed by applying operators that enrich the content of the description until all intentions are satisfied. Recent work on generating definite referring NPs (Reiter, 1991 ; Dale and Haddock, 1991; Reiter and Dale, 1992; Horacek, 1995) has emphasized how circumscribed in- stantiations of this procedure can exploit linguistic con- text and convention to arrive quickly at short, unambigu- ous descriptions. For example, (Reiter and Dale, 1992) apply generalizations about the salience of properties of objects and conventions about what words make base- level attributions to incrementally select words for inclu- sion in a description. (Dale and Haddock, 1991) use a constraint network to represent the distractors described by a complex referring NP, and incrementally select a property or relation that rules out as many alternatives as possible. Our approach is to extend such NP planning procedures to apply to sentences, using TAG syntax and a rich semantics. Treating sentences as referring expressions allows us to encompass the strengths of many disparate proposals. Incorporating material into descriptions of a variety of entities until the addressee can infer desired conclusions allows the sentence planner to enrich input content, so that descriptions refer successfully (Dale and Haddock, 1991) or reduce it, to eliminate redundancy (McDonald, 1992). Moreover, selecting alternatives on the basis of their syntactic, semantic, and pragmatic contributions to the sentence using TAG allows the sentence planner to choose words in tandem with appropriate syntax (Yang et al., 1991), in a flexible order (Elhadad and Robin, 1992), and, if necessary, in conventional combinations (Smadja and McKeown, 1991; Wanner, 1994). 3 Linguistic Specifications Realizing this procedure requires a declarative specifica- tion of three kinds of information: first, what operators are available and how they may combine; second, how operators specify the content of a description; and third, how operators achieve pragmatic effects. We represent operators as elementary trees in LTAG, and use TAG op- erations to combine them; we give the meaning of each tree as a formula in an ontologically promiscuous rep- resentation language; and, we model the pragmatics of operators by associating with each tree a set of discourse constraints describing when that operator can and should be used. 
Other frameworks have the capability to make com- parable specifications; for example, HPSG (Pollard and Sag, 1994) feature structures describe syntax (SUBCAT), semantics (CONTI3NT) and pragmatics (CONTEXT). We choose TAG because it enables local specification of syn- tactic dependencies in explicit constructions and flexibil- ity in incorporating modifiers; further, it is a constrained grammar formalism with tractable computational proper- ties. 3.1 Syntactic specification TAG (Joshi et al., 1975) is a grammar formalism built around two operations that combine pairs of trees, SUB- STITUTION and ADJOINING. A TAG grammar consists of a finite set of ELEMENTARY trees, which can be combined by these substitution and adjoining operations to produce derived trees recognized by the grammar. In substitu- tion, the root of the first tree is identified with a leaf of the second tree, called the substitution site. Adjoining is a more complicated splicing operation, where the first tree replaces the subtree of the second tree rooted at a node called the adjunction site; that subtree is then substituted back into the first tree at a distinguished leaf called the FOOT node. Elementary trees without foot nodes are called INITIAL trees and can only substitute; trees with foot nodes are called AUXILIARY trees, and must adjoin. (The symbol $ marks substitution sites, and the symbol * marks the foot node.) Figure l(a) shows an initial tree representing the book. Figure l(b) shows an auxiliary tree representing the modifier syntax, which could adjoin into the tree for the book to give the syntax book. Our grammar incorporates two additional principles. First, the grammar is LEXICALIZED (Schabes, 1990): each elementary structure in the grammar contains at least one lexical item. Second, our trees include FEATURES, following (Vijay-Shanker, 1987). LTAG elementary trees abstract the combinatorial properties of words in a linguistically appealing way. All predicate-argument structures are localized within a sin- gle elementary tree, even in long-distance relationships, so elementary trees give a natural domain of locality over which to state semantic and pragmatic constraints. The LTAG formalism does not dictate particular syntactic analyses; ours follow basic GB conventions. 3.2 Semantics We specify the semantics of trees by applying two prin- ciples to the LTAG formalism. First, we adopt an ONTO- LOGICALLY PROMISCUOUS representation (Hobbs, 1985) that includes a wide variety of types of entities. On- tological promiscuity offers a simple syntax-semantics interface. The meaning of a tree is just the CONJUNCTION of the meanings of the elementary trees used to derive it, once appropriate parameters are recovered. Such fiat se- mantics is enjoying a resurgence in NLP; see (Copestake et al., 1997) for an overview and formalism. Second, we constrain these parameters syntactically, by labeling each syntactic node as supplying information about a par- ticular entity or collection of entities, as in Jackendoff's X-bar semantics (Jackendoff, 1990). A node X:X (about ×) can only substitute or adjoin into another node with the same label. These semantic parameters are instantiated using a knowledge base (cf. figure 7). For Jackendoff, noun phrases describe ordinary indi- viduals, while PPs describe PLACES or PATHS and VPs describe ACTIONS and EVENTUALITIES (in terms of a Re- ichenbachian reference point). 
Under these assumptions, the trees of figure 1, are elaborated for semantics as in 199 S N NP, s DetP N / ~ [ I N N* NPJ, VP D book / ~ V NP I I the (h) (a) have ¢ (c) Figure 1: Sample LTAG trees: (a) NP, (b) Noun-Noun Compound, (c) Topicalized Transitive NP : <l>x DetP N : <1> [ , Det book I the book(x) (a) N : <l>x N : syntax N* : <1> t syntax concerns(x, syntax) (b) S : <l><r,havlng> NPJ. : <2>havee S : <1> NP;. : hayer VP: <1> V NP : <2> I I /have/ t during(r, having) ^ have(having, hayer, havee) (c) Figure 2: LTAG trees with semantic specifications figure 2. Ontological promiscuity makes it possible to explore more complicated analyses in this general frame- work. For example, in (Stone and Doran, 1996), we use reference to properties, actions and belief contexts (Bal- lim et al., 1991) to describe semantic collocations (Puste- jovsky, 1991) and idiomatic composition (Nunberg et al., 1994). 3.3 Pragmatics Different constructions make different assumptions about the status of entities and propositions in the discourse, which we model by including in each tree a specification of the contextual conditions under which use of the tree is pragmatically licensed. We have selected four repre- sentative pragmatic distinctions for our implementation; however, the framework does not commit one to the use of particular theories. We use the following distinctions. First, entities differ in NEWNESS (Prince, 1981). At any point, an entity is ei- ther new or old to the HEARER and either new or old to the DISCOURSE. Second, entities differ in SALIENCE (Grosz and Sidner, 1986; Grosz et al., 1995). Salience assigns each entity a position in a partial order that indicates how accessible it is for reference in the current con- text. Third, entities are related by salient PARTIALLY- ORDERED SET (POSET) RELATIONS to other entities in the context (Hirschberg, 1985). These relations include part and whole, subset and superset, and membership in a common class. Finally, the discourse may distinguish some OPEN PROPOSITIONS (propositions containing free variables) as being under discussion (Prince, 1986). We assume that information of these four kinds is available in a model of the current discourse state. The applicability conditions of constructions can freely make reference to this information. In particular, NP trees include the determiner (the determiner does not have a separate tree), the head noun, and pragmatic conditions that match the determiner with the status of the entity in context, as in 3(a). Following (Gundel et al., 1993), the definite article the may be used when the entity is UNIQUELY IDENTIFIABLE in the discourse model, i.e. the hearer knows or can infer the existence of this entity and can distinguish it from any other hearer-old entity of equal or greater salience. (Note that this test only determines the status of the entity in context; we ensure separately that the sentence includes enough content to distinguish 200 NP : <l>x N : <I>x DetP N : ,:1> / ~ - \ N : syntax N* I Det book I syntax the concerns(x, syntax) [always applicable] book(x) (b) (unique-id(x)) (a) : <1> S : <l><r, having> NP~ :<2>havee S : <1> NP,L : hayer VP: <1> V NP : <2> /have/ E during(r, having) ^ have(having, hayer, havee) (in-poset(havee), in-op(have(having, haver, havee))) (c) Figure 3: LTAG trees with semantic and pragmatic specifications the entity from all its alternatives.) In contrast, the indef- inite articles, a, an, and 0, are used for entities that are NOT uniquely identifiable. 
S trees specify the main verb and the number and po- sition of its arguments. Our S trees specify the unmarked SVO order or one of a number of fancy variants: topical- ization (TOP), left-dislocation (LD), and locative inversion (INV). We follow the analysis of TOP in (Ward, 1985). For Ward, TOP is not a topic-marking construction at all. Rather, TOP is felicitous as long as (1) the fronted NP is in a salient poset relation to the previous discourse and (2) the utterance conveys a salient open proposition which is formed by replacing the tonically stressed constituent with a variable (3(c)). Likewise, we follow (Prince, 1993) and (Birner, 1992) for LD and INV respectively. 4 SPUD 4.1 The algorithm Our system takes two types of goals. First, goals of the form distinguish x as cat instruct the algorithm to construct a description of entity x using the syntactic category cat. Ifx is uniquely identifiable in the discourse model, then this goal is only satisfied when the meaning planned so far distinguishes x for the hearer. Ifx is hearer new, this goal is satisfied by including any constituent of type cat. Second, goals of the form communicate p instruct the algorithm to include the proposition p. This goal is satisfied as long as the sentence IMPLIES p given shared common-sense knowledge. In each iteration, our algorithm must determine the ap- propriate elementary tree to incorporate into the current description. It performs this task in two steps to take advantage of the regular associations between words and trees in the lexicon. Sample lexical entries are shown in figure 4. They associate a word with the semantics of the word, special pragmatic restrictions on the use of the word, and a set of trees that describe the combina- tory possibilities for realizing the word and may impose additional pragmatic restrictions. Tree types are shared between lexical items (figure 5). This allows us to spec- ify the pragmatic constraints associated with the tree type once, regardless of which verb selects it. Moreover, we can determine which tree to use by looking at each tree ONCE per instantiation of its arguments, even when the same tree is associated with multiple lexical items. Hence, the first step is to identify applicable lexical entries by meaning: these items must truly and appropri- ately describe some entity; they must anchor trees that can substitute or adjoin into a node that describes the entity; and they must distinguish entities from their distractors or entail required information. Then, the second step identifies which of the associated trees are applicable, by testing their pragmatic conditions against the current representation of discourse. The algorithm identifies the combinations of words and trees that satisfy the most communicate goals and eliminate the most distractors. From these, it selects the entry with the most specific se- mantic and pragmatic licensing conditions. This means that the algorithm generates the most marked licensed form. In (Stone and Doran, 1996) we explore the use of additional factors, such as attentional state and lexical preferences, in this step. The new tree is then substituted or adjoined into the existing tree at the appropriate node. The entry may specify additional goals, because it describes one entity in terms of a new one. These new goals are added to the current goals, and then the algorithm repeats. Note that this algorithm performs greedy search. To avoid backtracking, we choose uninflected forms. 
Morphological features are set wherever possible as a result of the general unification processes in the grammar; the inflected form is determined from the lemma and its associated features in a post-processing step. The specification of this algorithm is summarized in the following pseudocode:

  until goals are satisfied:
    determine which uninflected forms apply;
    determine which associated trees apply;
    evaluate progress towards goals;
    incorporate most specific, best (form, tree):
      perform adjunction or substitution;
      conjoin new semantics;
      add any additional goals;

  STEM        SEMANTICS  SYNTAX                                       PRAGMATICS
  /buy/       S          buyer /buy/ bought /from/ seller, etc.       register(informal)
  /sell/      S          seller /sell/ bought /to/ buyer, etc.        register(informal)
  /purchase/  S          buyer /purchase/ bought /from/ seller, etc.  register(formal)
  /book/      book(x)    /a/ /book/, etc.                             [always possible]
  where S = buy(buying, buyer, seller, bought)

Figure 4: Sample entries from the lexicon

  SUBCAT FRAME    TREES                        PRAGMATICS
  Intransitive    Active                       [always possible]
  Transitive      Active                       [always possible]
                  Topicalized Object           in-poset(obj), in-op(event)
                  Left-Dislocated Object       in-poset(obj)
  Ditransitive    Active                       [always possible]
                  Topicalized Dir Object       in-poset(dir obj), in-op(event)
                  Left-Dislocated Dir Object   in-poset(dir obj)
  PP Predicative  Active                       [always possible]
                  Locative Inversion           newer-than(subj, loc)
  etc.

Figure 5: Sample entries from the tree database

4.2 The system

SPUD's grammar currently includes a range of syntactic constructions, including adjective and PP modification, relative clauses, idioms and various verbal alternations. Each is associated with a semantic and pragmatic specification as discussed above and illustrated in figures 4 and 5. These linguistic specifications can apply across many domains.

In each domain, an extensive set of inferences, presumed known in common with the user, are required to ensure appropriate behavior. We use logic programming to capture these inferences. In our domain, the system has the role of a librarian answering patrons' queries. Our specifications define: the properties and identities of objects (e.g., attributes of books, parts of the library); the taxonomic relationships among terms (e.g., that a service desk is an area but not a room); and the typical span and course of events in the domain (e.g., rules about how to check out books). This information is complete and available for each lexical entry. Of course, SPUD also represents its private knowledge about the domain. This includes facts like the status of books in the library.

4.3 An example

Suppose our system is given the task of answering the following question:

(1) Do you have the books for Syntax 551 and Pragmatics 590?

Figure 6 shows part of the discourse model after processing the question. The two books, the set they comprise (introducing a poset relation), and the library are mentioned in (1). Hence, these entities must be both hearer-old and discourse-old. As in centering (Grosz et al., 1995), the subject is taken to be the most salient entity. Finally, the meaning of the question becomes a salient open proposition.

On the basis of the knowledge in figure 7, a rhetorical planner might decide to answer by describing state have27 as an S and lose5 likewise. To construct its reference to have27, SPUD first determines which lexical and syntactic options are available.
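Before stepping through the derivation, the figure 6 discourse state can be written out as data, reusing the illustrative DiscourseState sketch from section 3.3; the numeric ranks and tuple encodings are our assumptions, not the system's representation.

```python
figure6_state = DiscourseState(
    # NEWNESS: everything mentioned in (1) is hearer-old and discourse-old
    hearer_old={"book19", "book2", "books", "library", "patron"},
    discourse_old={"book19", "book2", "books", "library", "patron"},
    # SALIENCE: {library, patron} outrank {book19, book2, books}
    salience={"library": 2, "patron": 2, "book19": 1, "book2": 1, "books": 1},
    # POSET RELATIONS from the set the two books comprise
    posets={("member-of", "book19", "books"),
            ("member-of", "book2", "books")},
    # OPEN PROPOSITION: library X have Y, X in {does, doesn't}, Y in books
    open_props=[("have", "library", "?Y", {"?Y": "books"})],
)
```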
Using the lexicon and information about have27 available from figure 7(b), SPUD determines that, of lemmas that truthfully and appropriately describe have27 as an S, /have/ has the most specific licensing conditions. The tree set for /have/ includes unmarked, LD and TOP trees. All are licensed, because of the poset relation R between book19 and books and the salient open proposition O. We choose TOP, the tree with the most specific condition--TOP requires R and O, while LD requires only R and the unmarked form has no requirements.

Thus, a topicalized /have/ tree, appropriately instantiated as shown in figure 8, is added to the description. The tree refers to three new entities, the object book19, the subject library and the reference point r of the tense.

  STATUS             ENTITIES
  DISCOURSE OLD:     book19, book2, books, library, patron
  HEARER OLD:        book19, book2, books, library, patron
  SALIENCE:          {library, patron} > {book19, book2, books}
  POSET RELATIONS:   book19 MEMBER-OF books; book2 MEMBER-OF books
  OPEN PROPOSITION:  library X have Y: X = {does/doesn't}; Y ∈ books

Figure 6: Discourse model for the example

  (a) Common Knowledge: book(book19), book(book2),
      concerns(book19, syntax), concerns(book2, pragmatics)
  (b) Speaker's Knowledge: have(have27, library, book19), lost(lose5, book2),
      during(have27, now), during(lose5, now)

Figure 7: Knowledge bases for the example

[Figure 8: The first tree incorporated into the description of have27; its semantics is during(r, have27) ∧ have(have27, library, book19).]

Subgoals of distinguishing these entities are introduced. Other constructions that describe one entity in terms of another, such as complex NPs, relative clauses and semantic collocations, are also handled this way by SPUD.

The algorithm now selects the goal of describing book19 as an NP. Again, the knowledge base is consulted to select NP lemmas that truly describe book19. The best is /book/. (Note that SPUD would use it if either the verb or the discourse context ruled out all distractors.) The tree set for /book/ includes trees with definite and indefinite determiners; since the hearer can uniquely identify book19, the definite tree is selected and substituted as the leftmost NP, as in figure 9.

The goal of distinguishing book19 is still not satisfied, however; as far as the hearer knows, both book19 and book2 could match the current description. We consult the knowledge base for further information about book19. The modifier entry for the lexical item /syntax/ can apply to the new N node; its tree adjoins there, giving the tree in figure 10. Note that because trees are lexicalized and instantiated, and must unify with the existing derivation, SPUD can enforce collocations and idiomatic composition in steps like this one.

[Figure 9: The description of have27 after substituting the book; its semantics is during(r, have27) ∧ have(have27, library, book19) ∧ book(book19).]

[Figure 10: The tree with the complete description of book19; its semantics is during(r, have27) ∧ have(have27, library, book19) ∧ book(book19) ∧ concerns(book19, syntax).]

Now we describe the library. Since we ARE the library, the lexical item /we/ is chosen. Finally, we describe r.
Consulting the knowledge base, we determine that r is now, and that the present tense morpheme applies. For uniformity with auxiliary verbs, we represent it as a separate tree, in this case with a null head, which assigns a morphological feature to the main verb. This gives the tree in figure 11, representing the sentence:

(2) The syntax book, we have.

[Figure 11: The final tree; its semantics is during(r, have27) ∧ have(have27, library, book19) ∧ book(book19) ∧ concerns(book19, syntax) ∧ we(library) ∧ pres(r, have27).]

All goals are now satisfied. Note that the semantics has been accumulated incrementally and straightforwardly in parallel with the syntax.

To illustrate the role of inclusion goals, let us suppose that the system also knows that book19 is on reserve in the state have27. Given the additional input goal of communicating this fact, the algorithm would proceed as before, deriving The syntax book we have. However, the new goal would still be unsatisfied; in the next iteration, the PP on reserve would be adjoined into the tree to satisfy it: The syntax book, we have on reserve. Because TAG allows adjunction to apply at any time, flexible realization of content is facilitated without need for sophisticated back-tracking (Elhadad and Robin, 1992).

The processing of this example may seem simple, but it illustrates the way in which SPUD integrates syntactic, semantic and pragmatic knowledge in realizing sentences. We tackle additional examples in (Stone and Doran, 1996).

5 Comparison with related work

The strength of the present work is that it captures a number of phenomena discussed elsewhere separately, and does so within a unified framework. With its incremental choices and its emphasis on the consequences of functional choices in the grammar, our algorithm resembles the networks of systemic grammar (Matthiessen, 1983; Yang et al., 1991). However, unlike systemic networks, our system derives its functional choices dynamically using a simple declarative specification of function. Like many sentence planners, we assume that there is a flexible association between the content input to a sentence planner and the meaning that comes out. Other researchers (Nicolov et al., 1995; Rubinoff, 1992) have assumed that this flexibility comes from a mismatch between input content and grammatical options. In our system, such differences arise from the referential requirements and inferential opportunities that are encountered.

Previous authors (McDonald and Pustejovsky, 1985; Joshi, 1987) have noted that TAG has many advantages for generation as a syntactic formalism, because of its localization of argument structure. (Joshi, 1987) states that adjunction is a powerful tool for elaborating descriptions. These aspects of TAGs are crucial to SPUD, as they are to (McDonald and Pustejovsky, 1985; Joshi, 1987; Yang et al., 1991; Nicolov et al., 1995; Wahlster et al., 1991; Danlos, 1996). What sets SPUD apart is its simultaneous construction of syntax and semantics, and the tripartite, lexicalized, declarative grammatical specifications for constructions it uses. Two contrasts should be emphasized in this regard. (Shieber et al., 1990; Shieber and Schabes, 1991) construct a simultaneous derivation of syntax and semantics but they do not construct the semantics--it is an input to their system.
(Prevost and Steedman, 1993; Hoffman, 1994) represent syntax, semantics and pragmatics in a lexicalized framework, but concentrate on information structure rather than the pragmatics of particular constructions.

6 Conclusion

Most generation systems pipeline pragmatic, semantic, lexical and syntactic decisions (Reiter, 1994). With the right formalism, constructing pragmatics, semantics and syntax simultaneously is easier and better. The approach elegantly captures the interaction between pragmatic and syntactic constraints on descriptions in a sentence, and the inferential interactions between multiple descriptions in a sentence. At the same time, it exploits linguistically motivated, declarative specifications of the discourse functions of syntactic constructions to make contextually appropriate syntactic choices.

References

D. Appelt. 1985. Planning English Sentences. Cambridge University Press.
A. Ballim, Y. Wilks, and J. Barnden. 1991. Belief ascription, metaphor, and intensional identification. Cognitive Science, 15:133-171.
B. Birner. 1992. The Discourse Function of Inversion in English. Ph.D. thesis, Northwestern University.
A. Copestake, D. Flickinger, and I. A. Sag. 1997. Minimal Recursion Semantics: An Introduction. MS, CSLI, http://hpsg.stanford.edu/hpsg/sag.html.
R. Dale and N. Haddock. 1991. Content determination in the generation of referring expressions. Computational Intelligence, 7(4):252-265.
L. Danlos. 1996. G-TAG: A formalism for Text Generation inspired from Tree Adjoining Grammar: TAG issues. Unpublished manuscript, TALANA, Université Paris 7.
K. Donellan. 1966. Reference and definite description. Philosophical Review, 75:281-304.
M. Elhadad and J. Robin. 1992. Controlling content realization with functional unification grammars. In Dale, Hovy, Rösner, and Stock, editors, Aspects of Automated Natural Language Generation: 6th International Workshop on Natural Language Generation, pages 89-104. Springer Verlag.
B. Grosz and C. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12:175-204.
B. J. Grosz, A. K. Joshi, and S. Weinstein. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, 21(2):203-225.
J. K. Gundel, N. Hedberg, and R. Zacharski. 1993. Cognitive status and the form of referring expressions in discourse. Language, 69(2):274-307.
J. Hirschberg. 1985. A Theory of Scalar Implicature. Ph.D. thesis, University of Pennsylvania.
J. R. Hobbs. 1985. Ontological promiscuity. In ACL, pages 61-69.
B. Hoffman. 1994. Generating context-appropriate word orders in Turkish. In Seventh International Generation Workshop.
H. Horacek. 1995. More on generating referring expressions. In Fifth European Workshop on Natural Language Generation, pages 43-58, Leiden.
R. S. Jackendoff. 1990. Semantic Structures. MIT Press.
A. K. Joshi, L. Levy, and M. Takahashi. 1975. Tree adjunct grammars. Journal of the Computer and System Sciences, 10:136-163.
A. K. Joshi. 1987. The relevance of tree adjoining grammar to generation. In Kempen, editor, Natural Language Generation, pages 233-252. Martinus Nijhoff Press, Dordrecht, The Netherlands.
L. Karttunen and S. Peters. 1979. Conventional implicature. In Oh and Dineen, editors, Syntax and Semantics 11: Presupposition. Academic Press.
A. Kronfeld. 1986. Donellan's distinction and a computational model of reference. In ACL, pages 186-191.
C. M. I. M. Matthiessen. 1983. Systemic grammar in computation: the Nigel case.
In EACL, pages 155-164.
D. D. McDonald and J. Pustejovsky. 1985. TAG's as a grammatical formalism for generation. In ACL, pages 94-103.
D. McDonald. 1992. Type-driven suppression of redundancy in the generation of inference-rich reports. In Dale, Hovy, Rösner, and Stock, editors, Aspects of Automated Natural Language Generation: 6th International Workshop on Natural Language Generation, pages 73-88. Springer Verlag.
M. W. Meteer. 1991. Bridging the generation gap between text planning and linguistic realization. Computational Intelligence, 7(4):296-304.
N. Nicolov, C. Mellish, and G. Ritchie. 1995. Sentence generation from conceptual graphs. In G. Ellis, R. Levinson, W. Rich, and J. F. Sowa, editors, Conceptual Structures: Applications, Implementation and Theory, pages 74-88. Springer.
G. Nunberg, I. A. Sag, and T. Wasow. 1994. Idioms. Language, 70(3):491-538.
C. Pollard and I. A. Sag. 1994. Head-Driven Phrase Structure Grammar. CSLI.
S. Prevost and M. Steedman. 1993. Generating contextually appropriate intonation. In EACL.
E. Prince. 1981. Toward a taxonomy of given-new information. In P. Cole, editor, Radical Pragmatics. Academic Press.
E. Prince. 1986. On the syntactic marking of presupposed open propositions. In CLS, pages 208-222, Chicago. CLS.
E. Prince. 1993. On the functions of left dislocation. Manuscript, University of Pennsylvania.
J. Pustejovsky. 1991. The generative lexicon. Computational Linguistics, 17(3):409-441.
O. Rambow and T. Korelsky. 1992. Applied text generation. In ANLP, pages 40-47.
E. Reiter and R. Dale. 1992. A fast algorithm for the generation of referring expressions. In Proceedings of COLING, pages 232-238.
E. Reiter. 1991. A new model of lexical choice for nouns. Computational Intelligence, 7(4):240-251.
E. Reiter. 1994. Has a consensus NL generation architecture appeared, and is it psycholinguistically plausible? In Proceedings of the Seventh International Workshop on Natural Language Generation, pages 163-170.
M. Rooth. 1985. Association with focus. Ph.D. thesis, University of Massachusetts.
R. Rubinoff. 1992. Integrating text planning and linguistic choice by annotating linguistic structures. In Dale, Hovy, Rösner, and Stock, editors, Aspects of Automated Natural Language Generation: 6th International Workshop on Natural Language Generation, pages 45-56. Springer Verlag.
Y. Schabes. 1990. Mathematical and Computational Aspects of Lexicalized Grammars. Ph.D. thesis, University of Pennsylvania.
S. Shieber and Y. Schabes. 1991. Generation and synchronous tree adjoining grammars. Computational Intelligence, 4(7):220-228.
S. Shieber, G. van Noord, F. Pereira, and R. Moore. 1990. Semantic-head-driven generation. Computational Linguistics, 16:30-42.
F. Smadja and K. McKeown. 1991. Using collocations for language generation. Computational Intelligence, 7(4):229-239.
K. Vijay-Shanker. 1987. A Study of Tree Adjoining Grammars. Ph.D. thesis, University of Pennsylvania.
M. Stone and C. Doran. 1996. Paying heed to collocations. In Eighth International Workshop on Natural Language Generation, pages 91-100.
W. Wahlster, E. André, S. Bandyopadhyay, W. Graf, and T. Rist. 1991. WIP: The coordinated generation of multimodal presentations from a common representation. In Stock, Slack, and Ortony, editors, Computational Theories of Communication and their Applications. Springer Verlag.
L. Wanner. 1994. Building another bridge over the generation gap. In Seventh International Workshop on Natural Language Generation, pages 137-144, June.
G. Ward.
1985. The Semantics and Pragmatics of Preposing. Ph.D. thesis, University of Pennsylvania.
G. Yang, K. F. McCoy, and K. Vijay-Shanker. 1991. From functional specification to syntactic structures: systemic grammar and tree-adjoining grammar. Computational Intelligence, 7(4):207-219.
| 1997 | 26 |
An Algorithm For Generating Referential Descriptions With Flexible Interfaces

Helmut Horacek
Universität des Saarlandes
FB 14 Informatik
D-66041 Saarbrücken, Deutschland
[email protected]

Abstract

Most algorithms dedicated to the generation of referential descriptions widely suffer from a fundamental problem: they make too strong assumptions about adjacent processing components, resulting in a limited coordination with their perceptive and linguistic data, that is, the provider for object descriptors and the lexical expressions by which the chosen descriptors are ultimately realized. Motivated by this deficit, we present a new algorithm that (1) allows for a widely unconstrained, incremental, and goal-driven selection of descriptors, (2) integrates linguistic constraints to ensure the expressibility of the chosen descriptors, and (3) provides means to control the appearance of the created referring expression. Hence, the main achievement of our approach lies in providing a core algorithm that makes few assumptions about other processing components and improves the flow of control between modules.

1 Introduction

Generating referential descriptions¹ requires selecting a set of descriptors according to criteria which reflect human preferences and verbalizing these descriptors while meeting natural language constraints. Over the last decade, (Dale, 1989, Dale, Haddock, 1991, Reiter, 1990b, Dale, Reiter, 1995) and others² have contributed to this issue (see the systems NAOS (Novak, 1988), EPICURE (Dale, 1988), FN (Reiter, 1990a), and IDAS (Reiter, Dale, 1992)). Nevertheless, these approaches still suffer from some crucial deficits, including limited coverage (see (Horacek, 1995, Horacek, 1996) for an improved algorithm), and too strong assumptions about adjacent processing components, namely:

• the instant availability of all descriptors for an object to be described,
• the adequate expressibility of a chosen set of descriptors in terms of lexical items.

¹ The term 'referential description' is due to Donellan (Donellan, 1966). This notion signifies a referring expression that serves the purpose of letting the hearer identify a particular object out of a set of objects assumed to be in the current focus of attention.
² The approach undertaken by Appelt and Kronfeld (Appelt, 1985a, Appelt, 1985b, Kronfeld, 1986, Appelt, Kronfeld, 1987) is very elaborate, but it suffers from very limited coverage, missing assessments of the relative benefit of alternatives, and notorious inefficiency.

Motivated by the resulting deficits, we develop a new algorithm that does not rely on these assumptions. It (1) allows for a widely unconstrained, incremental, and goal-driven selection of descriptors, (2) integrates linguistic constraints to ensure the expressibility of the chosen descriptors, and (3) provides means to control the appearance of the created referring expression.

This paper is organized as follows. After having introduced some basic terminology, we elaborate interface deficits of existing algorithms, from which we derive desiderata for an improved algorithm. Then we describe concepts to meet these desiderata, and we illustrate their operationalization in a schematic and in a detailed version. Finally, we demonstrate the increased functionality of the new algorithm, and we evaluate the achievements.

2 Terminology Used

In the scope of this paper, we adopt the terminology originally formulated in (Dale, 1988) and also used by several successor approaches.
The referring expression to generate is required to be a distinguishing description, that is a description of the entity being referred to, but not to any other object in the current context set. A context set is defined as the set of entities the addressee is currently assumed to be attending to - this is similar to the set of entities in the focus spaces of the discourse focus stack in Grosz and Sidner's theory of discourse structure (Grosz, Sidner, 1986). Moreover, the contrast set (or, the set of potential distractors (McDonald, 1981)) is defined to entail all elements of the context set except the intended referent. In the scope of some context set, an attribute or a relation applicable to the intended referent can be assigned its discriminatory power,³ that is a measure similar to the number of potential distractors that can be removed from the contrast set with confidence, because this attribute or relation does not apply to them.

³ A precise definition based on numerical values assigned to attribute-value pairs is given in (Dale, 1988).

3 Previous Algorithms and Deficits

The existing algorithms attempt to identify the intended referent by determining a set of descriptors attributed to that referent or to another entity related to it, thereby keeping the set of descriptors as small as possible. This minimization issue can be interpreted in different degrees of specificity, which also has consequences for the associated computational complexity. Full brevity, the strongest interpretation, underlies Dale's algorithm (Dale, 1989), which produces a description entailing the minimal number of attributes possible, at the price of suffering NP-hard complexity. Two other interpretations, the Greedy heuristic interpretation (Dale, 1989) and the local brevity interpretation (Reiter, 1990a), lead to algorithms that have polynomial complexity in the same order of magnitude. The weakest interpretation, the incremental algorithm interpretation (Reiter, Dale, 1992), has still polynomial complexity but, unlike the last two interpretations, it is independent of the number of attributes available for building a description. Applying this interpretation may lead to the inclusion of globally redundant attributes in the final description, but this is justified by various results of psychological experiments (see the summary in (Levelt, 1989)). Because of these reasons, the incremental algorithm interpretation is generally considered best now, and we adopt it for our algorithm, too.

In the realization described in (Reiter, Dale, 1992), attributes are incrementally selected according to an a priori computed domain-dependent preference list, provided each attribute contributes to the exclusion of at least one potential distractor. However, there still remains the problem of meaningfully applying this criterion in the context of nested descriptions, when the intended referent is to be described not only by attributes such as color and shape, but also in terms of other referents related to it. Neither the psychological experiments nor the realization in (Reiter, Dale, 1992) can deal with this sort of recursion. In the generalization introduced in (Horacek, 1996), descriptors of the referents are incrementally selected according to domain-dependent preference lists in a limited depth-first fashion, which leads to some sort of inflexibility through restricting the set of locally applicable
Besides, the preference list needs to be fully instantiated for each referent to be described, which consti- tutes a significant overhead. An even more crucial problem lies in the fact that practically all algorithms proposed so far contend them- selves with producing a set of descriptors rather than natural language expressions. They more or less impli- citly assume that the set of descriptors represented as one- and two-place predicates can be expressed adequately in natural language terms. A few drastic examples should be sufficient to illustrate some of the problems that might occur due to ignoring these issues: (a) the bottle which is on a table on which there is a cup which is besides the bottle .... (a problem of organization) (b) the large, red, speedy, comfortable ..... car (a problem of complexity) (c) the cup which is besides a bottle which is on a table which is left to another table and which is empty (a scoping problem, in addition) Altogether, two strong assumptions influence existing algorithms, namely the instant availability of all descrip- tors of a referent and the satisfactory expressibility of the chosen set of descriptors. They are responsible for three serious deficits negatively influencing the quality of the expression (the first one primarily causing inefficiency): 1. Applicable processing strategies are restricted because all descriptors of some referent need to be evaluated before descriptors of other referents can be considered. 2. The linguistic aspects are largely simplified and even neglected in parts. Because of the 'generation gap' (Meteer, 1992), there is no guarantee that the set of descriptors chosen can be expressed at all in the target language, not to say adequately. 3. There is no control to assess the adequacy of a certain description, for instance, in terms of structural com- plexity, and no feedback from linguistic form production to property selection is provided. The first deficit restricts feasible architectures of a gener- ation system in which such an algorithm can reasonably be embedded because flexibility and incrementality of the descriptor selection task are limited. Moreover, the under- lying assumption is unrealistic in cognitive as well as in technical terms. From the perspective of human behavior, it would simply be unnecessary to determine all descrip- tors of a referent to be described beforehand without even attempting to generate a description; usually, just a few descriptors are sufficient for this purpose. The same consi- derations apply to the machine-oriented perspective: neither for a vision system nor for a knowledge-based system is it without costs to determine all descriptors of a certain object - especially for the vision system, the computational effort may be considerable. 207 The second deficit results from ignoring that the ulti- mate goal envisioned consists in producing a natural language expression that satisfies the discourse goal and not merely in choosing a set of descriptors by which this goal can in principle be achieved. In general, there is no guarantee that the set of descriptors chosen can be ade- quately expressed in the target language, given some repertoire of lexical operators: conceptual predicates cannot always be mapped straightforwardly onto lexemes and grammatical features so that the anticipation of their composability is limited. Even more importantly, matters of grammaticality are not taken into account at all by previous algorithms. 
Simple cases are not problematic, for instance, when two descriptors achieve unique identification and can be expressed by a simple noun phrase consisting of a head noun and an adjective. In more complex cases, however, considerations of grammaticality such as overloading and even interference due to scoping ambiguity may become a serious concern.

The third deficit concerns the lack of control that these algorithms suffer from when assessing the structural complexity of a certain description is required, which certainly influences its communicative adequacy, too. The lack of control over the appearance of the expression to be generated is further augmented by the fact that any kind of feedback is missing that puts the property selection facility in a position to take the needs of ultimately building a referring expression into account. Particular difficulties can be expected when a referential description needs to be produced in an incremental style, that is, portions of a surface expression are built and uttered once a further descriptor is selected, that is, prior to completion of the entire descriptor selection task.

4 Conception of a New Algorithm

Besides the primary goal of producing a distinguishing and cognitively adequate description of the intended referent, there are also the inherent secondary goals of verbally expressing the chosen descriptors in a natural way, and of applying a suitable processing strategy. In order to pursue these goals, we state the following desiderata:

1. The requirements on the descriptor providing component should widely be unconstrained, allowing for incremental and goal-driven processing.
2. A component that takes care of the expressibility of conceptual descriptors in terms of natural language expressions should be interfaced.
3. Adequate control should be provided over the complexity and components of the referring expression.

Several concepts are intended to meet these desiderata:

1. In the predecessor algorithms, attributes are taken from an a priori computed domain-dependent preference list in the indicated order, provided each attribute contributes to the exclusion of at least one potential distractor. Instead, we simply allow the responsible component to produce descriptors incrementally, even from varying referents, provided the selected descriptor is directly related to some referent already included in the expression built so far. While the precise form of this restriction is technically motivated - it guarantees that a description built this way is always connected - we believe that it is also cognitively plausible. In order to pursue the identification goal, the perception facilities preferably look for salient places in the vicinity of the object to be identified, rather than at distant places. The pre-selection obtained this way can be based on salience, possibly combined with some measure of computational effort. By applying this strategy, a best-first behavior is achieved instead of pure breadth-first (Reiter, Dale, 1992), depth-first (Dale, Haddock, 1991), and iterative deepening (Horacek, 1995, Horacek, 1996) strategies.
2. The algorithm interfaces a subprocess that incrementally attempts to build natural language expressions out of the descriptors selected.
Through taking grammatical and lexical constraints into account, this process is capable of exposing expressibility problems early: expressing a proposed descriptor may require refilling an already filled slot, or integrating the mapping result of a newly inserted descriptor may lead to a global conflict such as unintended scope relations. A goal-driven aspect is added by encouraging the selection of descriptors whose images are candidates for filling empty slots in the expression built so far.
3. The algorithm enables one to control the processing aspect of building the referential description and its complexity. A parameter is provided to specify the appearance of that expression in terms of slots that are allowed to be filled. In an incremental style, where parts of the referential description are uttered prior to its completion, the slots that can be filled by the descriptor selected are substantially influenced by precedence relations (in the ordinary compositional style, this is simply identical to the set of yet empty slots).

5 Operationalization in the Algorithm

The new algorithm designed to incorporate these concepts is built on the basis of some predecessor algorithms (Dale, Haddock, 1991, Reiter, Dale 1992, Horacek, 1995, Horacek, 1996), from which we also adopt the notation. The algorithm is shown in two different degrees of precision. An informal, schematic view in Figure 1 that abstracts from technical details is complemented by a detailed pseudo-code version in Figure 2. In both versions, the lines are marked, by [S#] in the schematic view and by [C#] in the pseudo-code version, to ease references from the text. In addition, the identifiers used in the pseudo-code version are explained in Table 1 (the variables) and in Table 2 (the functions).

  1 Check Success                                                                      [S1]
    if <the intended referent is identified uniquely>                                  [S2]
      then <exit with an identifying description>                                      [S3]
    if <the complexity limit of the expression is reached>                             [S4]
      then <exit with a non-identifying description>                                   [S5]
  2 Choose property                                                                    [S6]
    if <no further descriptors are available>                                          [S7]
      then <exit with a non-identifying description>                                   [S8]
      else <call the descriptor selection component to propose the next property>     [S9]
    if <the descriptor does not reduce the set of potential distractors>               [S10]
      or <the referent further described is already identified uniquely>               [S11]
      or <the descriptor is inferable from the description generated so far>           [S12]
      or <the descriptor cannot be lexicalized with the given linguistic resources>    [S13]
      or <lexicalizing the descriptor would cause a scoping problem>                   [S14]
      then <reject the proposed property> and goto 2                                   [S15]
  3 Extend description                                                                 [S16]
    <update the linguistic resources used>                                             [S17]
    <determine properties which, when being lexicalized, are likely to fill yet empty slots>  [S18]
    <update the constraints holding between referents and partial descriptions>        [S19]
    goto 1                                                                             [S20]

Figure 1: Schematic presentation of the algorithm, an abstraction from the detailed pseudo-code in Figure 2

We first illustrate the basic structure of the procedure from some sort of a bird's eye view. The algorithm consists of three major parts: Check success [S1], Choose property [S6], and Extend description [S16]; this organization stems from (Dale, Haddock, 1991) and is extended here. Basically, these parts are evaluated in sequence, which is repeated iteratively [S20].
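Read operationally, the figure 1 schema amounts to a loop like the sketch below. This is our reconstruction, not the paper's code; the externally supplied functions (next_property, lexicalizable, causes_scope_problem, slots_filled_by) and the property interface are assumptions standing in for the components described in the text, and slots and context_set are taken to be sets.

```python
def describe(referent, context_set, slots, next_property,
             lexicalizable, causes_scope_problem, slots_filled_by):
    """Greedy construction of a referential description (cf. figure 1)."""
    description = []                                  # descriptors chosen so far
    distractors = set(context_set) - {referent}
    while True:
        # 1 Check success [S1-S5]
        if not distractors:
            return description, "identifying"
        if not slots:                                 # complexity limit reached
            return description, "non-identifying"
        # 2 Choose property [S6-S15]
        prop = next_property(description)
        if prop is None:                              # no further descriptors
            return description, "non-identifying"
        excluded = prop.rules_out(distractors)
        if (not excluded                              # [S10]
                or prop.inferable_from(description)   # [S12]
                or not lexicalizable(prop, slots)     # [S13]
                or causes_scope_problem(prop, description)):  # [S14]
            continue                                  # reject, propose another
        # 3 Extend description [S16-S20]
        description.append(prop)
        slots = slots - slots_filled_by(prop)
        distractors -= excluded
```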
The first part merely comprises two of the algorithm's termination criteria: [S2], which constitutes the successful accomplishment of the whole task, and [S4], which reports the failure to do this within the given limits of the linguistic resources, and corresponding return statements [S3] and [S5]. [S4] and [S5] constitute an extension to previous approaches. The second part entails a call to an external descriptor selection component [S9]. In the unlikely case that no further descriptors are available [S7] the algorithm terminates without complete success [S8]. Various tests check the suitability of the descriptor proposed in the global context: the descriptor does not contribute further to the identification task (it must be an attribute) [S10], the need of further elaborating the description of that referent to which the proposed descriptor adds information [S11], the descriptor's effective contribution to the identification task, which may be nullified due to contextual effects [S12], unavailability of lexical material to express the proposed descriptor as an extension to the referring expression composed so far [S13], and scoping problems in the attempt at extending the referring expression composed so far [S14]. The last two criteria are additions introduced in the new algorithm. In the third part, some sort of bookkeeping is carried out: evidence about the used lexical resources is updated [S17], descriptors that are likely to be expressible by yet empty slots are determined [S18], and relations between the context sets of all referents considered and partial descriptions are maintained [S19].

After this overview, we explain the algorithm in detail. We describe the data structure that helps to keep track of whether or not a referent is identified and which the potential distractors are. Next, we illustrate the interfaces to the two major external modules. We conclude this presentation by explaining the pseudo code, thereby pointing to the corresponding parts in the schematic overview. In combination with the variables and functions explained in separate tables, this description should enable the reader to understand the functionality of the algorithm.

Throughout processing, the algorithm maintains a constraint network N which is a pair relating (a) a set of constraints, which correspond to predications over variables (properties abstracted from the individuals they apply to) to (b) sets of variables each of which fulfill these constraints in view of a given knowledge base (the context sets). The notation N ⊕ p is used to signify the result of adding the constraint p to the network N.
In addition, the notation [r\v]p is used to signify the result of replacing every occurrence of the constant r in p by variable v (for an algorithm to maintain consistency see AC-3 (Mackworth, 1977), as used in (Haddock, 1991)).

  Variable      Description
  r, gr, v, gv  local (r) and global referents (gr) and variables (v and gv) associated with them
  R             a specification of slots which the target referring expression may entail
  c             (contextually-motivated) expected category of the intended referent
  N             constraint network, a pair relating a set of constraints to sets of variables fulfilled by them
  C             context set, indexed by variables associated with referents (e.g., Cv, Cgv)
  L             list of attribute-value pairs which corresponds to the constraint part of N
  FD            functional description that is an appropriate lexical description expressing L
  DD            distinguishing description, appearing as a pair <L,FD>
  List          communicative goals to pursue, expressed by Describe(r,v)
  <p,r>         property p ascribed to referent r
  refs          referents already processed
  P-props       properties whose images on the lexical level are likely to fill empty slots in FD
  excluded      property-referent combinations that cannot be verbalized in the given context

Table 1: Variables used in the algorithm

According to our desiderata, the new algorithm interfaces two major external modules whose precise functionality is outside the scope of this paper: Next-Property and Insert-Unify. Next-Property [C19], [S9] selects a cognitively-motivated candidate property to be included next. Generally applicable psychological preferences, such as basic level categories, as well as special criteria, such as degrees of applicability of local relations (Gapp, 1995), may guide this selection. It is additionally influenced by two parameters: refs, which specifies those referents to which the chosen descriptor must be directly related, and P-props, which entails a list of properties whose lexical images are likely to fill yet empty slots.

Insert-Unify updates the data structure FD by incrementally inserting mappings of selected descriptors [C43], [S13], unless Check-Scope detects a global problem [C44], [S14]. This language-dependent procedure analyzes the functional description created so far for potential misinterpretations and scope ambiguities, which may occur in connection with nested postnominal modifiers or relative clauses that depend on an NP with a postnominal modifier. Examining these structures is much less expensive than a global anticipation-feedback loop, but it requires specialized grammatical knowledge. Whether the intended reading is also the preferred one depends on selectional restrictions, preference criteria, and morphological features.
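Before turning to the pseudo-code, here is a minimal sketch of the context-set bookkeeping just introduced (the N ⊕ p operation and the sets Cv). It is illustrative code, not the paper's implementation, and assumes a knowledge base object kb whose holds method can test a predication against a candidate binding.

```python
class ConstraintNetwork:
    """Pairs a set of constraints with the context sets they induce."""

    def __init__(self, kb, domain):
        self.kb = kb                        # knowledge base used to test constraints
        self.constraints = []               # predications over variables
        self.context = {}                   # variable -> set of remaining candidates
        self.domain = set(domain)

    def new_variable(self, v):
        self.context[v] = set(self.domain)  # initially unconstrained

    def add(self, p):
        """N <- N (+) p: record constraint p and filter the context sets."""
        self.constraints.append(p)
        for v in p.variables():
            self.context[v] = {x for x in self.context[v]
                               if self.kb.holds(p, v, x)}
        # A fuller implementation would now restore arc consistency between
        # variables (AC-3 style), as in (Mackworth, 1977; Haddock, 1991).

    def identified(self, v):
        """True once variable v's context set names a unique referent."""
        return len(self.context[v]) == 1
```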
  Function                   Description
  Next-Property(refs, ps)    selects a property, influenced by the connection to referents refs and by properties ps
  A(p)                       functor to provide access to the predicate of predication p
  find-best-value(A(p),V)    procedure to determine the value of property p that describes r according to (Dale, Reiter, 1992)
  basic-level-value(r,A(p))  yields the basic level value of property p for referent r
  rules-out(<A(p),V>)        yields the set of referents that are ruled out as distractors due to the value V of property A(p)
  Assoc-var(r)               function to get access to the variable associated with referent r
  Prototypical(p,r)          yields true if property p is prototypical for referent r and false otherwise
  Descriptors(r)             yields the set of predicates entailed in N and holding for referent r
  Map-to(Empty-Slots(FD))    yields properties which map onto the set of uninstantiated slots in FD
  Insert-Unify(FD,<v,p>)     inserts a lexical description of property p of the referent associated with variable v into FD
  Check-Scope(FD)            yields true if no scope problems are expected to occur and false otherwise
  Slots-of(mappings(p))      yields the slots of the set of lexical items by which predicate p can be expressed
  Rel(A(p))                  yields true if descriptor p is a relation and false otherwise
  Salient(A(p))              yields true if salience is assigned to property p and false otherwise

Table 2: Functions used in the algorithm

  Describe(r, v, N, R, c)
    DD ← nil, FD ← nil                                   [C1]
    unique ← false                                       [C2]
    gr ← r, gv ← v                                       [C3]
    excluded ← nil, P-props ← nil                        [C4]
    refs ← {r}                                           [C5]
    Cv ← Cv ∩ {x | c(x)}                                 [C6]
    List ← [Describe(r,v)]                               [C7]
  1 Check Success                                        [C8]
    if |Cgv| = 1 then                                    [C9]
      unique ← true                                      [C10]
      return <L,FD> (as a distinguishing description)    [C11]
    endif                                                [C12]
    if |R| = 0 then                                      [C13]
      return <L,FD>                                      [C14]
      (as a non-distinguishing description)              [C15]
    endif                                                [C16]
  2 Choose Property                                      [C17]
    repeat                                               [C18]
    <r,p> ← Next-Property(refs, P-props)                 [C19]
    if p = nil then                                      [C20]
      return <L,FD>                                      [C21]
      (as a non-distinguishing description)              [C22]
    endif                                                [C23]
    v ← Assoc-var(r)                                     [C24]
    if Prototypical(p,r) or                              [C25]
       ((Slots-of(Mappings(p)) ∩ R) = ∅)                 [C26]
      then excluded ← excluded ∪ {<r,p>}                 [C27]
      elseif (p in Taxonomic-Inferences                  [C28]
              (Descriptors(v))) or (|Cv| = 1) then       [C29]
        excluded ← excluded ∪ {<r,p>}                    [C30]
      endif                                              [C31]
    endif                                                [C32]
    if <r,p> ∈ excluded then                             [C33]
      goto 2                                             [C34]
    endif                                                [C35]
    V ← find-best-value(A(p),                            [C36]
          basic-level-value(r,A(p)))                     [C37]
    if not (((rules-out(<A(p),V>) ≠ nil) and (V ≠ nil))  [C38]
            or Rel(A(p))) or Salient(A(p)) then          [C39]
      excluded ← excluded ∪ {<r,p>}                      [C40]
      goto 2                                             [C41]
    endif                                                [C42]
    FDH ← Insert-Unify(FD, <v,p>)                        [C43]
    if not Check-Scope(FDH) then                         [C44]
      excluded ← excluded ∪ {<r,p>}                      [C45]
      goto 2                                             [C46]
    endif                                                [C47]
  3 Extend Description                                   [C48]
    FD ← FDH                                             [C49]
    R ← R \ slots(FD)                                    [C50]
    P-props ← Map-to(Empty-slots(FD))                    [C51]
    p ← [r\v]p                                           [C52]
    if Rel(A(p)) then                                    [C53]
      for every other constant r' in p do                [C54]
        if Assoc-var(r') = nil then                      [C55]
          associate r' with a new, unique variable v'    [C56]
          p ← [r'\v']p                                   [C57]
          refs ← refs ∪ {r'}                             [C58]
          List ← Append(List, Describe(r',v'))           [C59]
        endif                                            [C60]
      next                                               [C61]
    else set the value of attribute p to V               [C62]
    endif                                                [C63]
    N ← N ⊕ p                                            [C64]
    goto 1                                               [C65]

Figure 2: Detailed pseudo-code of the new algorithm

The first part of the algorithm, 'Check Success', comprises the algorithm's termination criteria:

1. A distinguishing description is completed [C9-C11], [S2-S3], the exit in case of full success.
2. No more descriptors are globally available [C19-C22], [S7-S8].
In the predecessor algorithms, this check is done for each referent separately.
3. All available slots are filled [C13-C14], [S4-S5] - this is a new criterion.

The second part, 'Choose Property', is dedicated to testing the contextual suitability of the candidate property proposed by Next-Property, which may be inappropriate for one of the following reasons (criteria 3. and 5. are new ones):

1. The property can be inferred from the description generated so far, or it is prototypical for the object to be identified and may thus yield a false implicature [C25, C28], [S12].
2. The object is already identified uniquely [C29], [S11].
3. The descriptor chosen cannot be mapped onto a slot of the description generated so far [C26], [S13].
4. The descriptor is an attribute, and it does not further reduce the set of potential distractors [C38], [S10].
5. Incorporating the descriptor into the functional description created so far leads to a global conflict [C43-C44], [S14].

The third part, 'Extend Description', takes care of updating some control variables. The descriptor p is fed into N [C64], goals to describe new referents reached via the relation p are put into List [C54-C60], [S19], all slots filled in FD are eliminated in R [C50], [S17], and the yet empty slots are fed into reversed lexicalization rules to yield properties collected in P-props [C51], [S18].

6 Effects the Algorithm Can Handle

Space restrictions do not permit a detailed presentation of the new algorithm at work. Therefore, we have confined ourselves to a sketchy description of the algorithm's behavior in a moderately complex situation. Let us assume an environment consisting of four tables (t1 to t4), roughly placed in a row, as depicted in Figure 3. The communicative goal is to distinguish one of the tables uniquely from the other three, by a referring expression entailing an adjective (a prenominal modifier), a category, an attribute (a postnominal modifier), and a relative clause, at most. The situation permits building a large variety of expressions for accomplishing this purpose. Some interesting cases are:

1) Achieving global rather than local goal satisfaction: If t3 is the intended referent, and on(b1,t3) is the descriptor selected next, adding the category of the entities on top of t3 (here, books) is sufficient to identify t3 uniquely. Some predecessor algorithms, for instance (Dale, Haddock 1991), would still attempt to distinguish b1 from b2.

[Figure 3: A scenery with tables, cups, glasses, and books]

2) Producing flat expressions instead of embedded ones: If t2 is the intended referent, and on(g1,t2) is the descriptor selected next, another descriptor must be selected to distinguish t2 from t4. The descriptor selection component is free to choose on(c3,t2), to yield the natural, flat expression 'the table on which there are a glass and a cup'. In (Horacek 1996), the same result can be obtained through an adequate selection of search parameters. The algorithm in (Dale, Haddock 1991) would produce the less natural, embedded expression 'the table on which there is a glass besides which there is a cup' instead.

3) Rejection of a descriptor because it can be inferred: If t1 is the intended referent, and size(t1,low) is the descriptor selected this time, another descriptor must be added, since t3 is also subsumed by this description.
If part-of(t1,l1) is chosen for that purpose (l1 being the legs of t1), the descriptor size(l1,short) to describe l1 further is rejected because it can be inferred from {size(t1,low), part-of(t1,l1)}.

4) Rejection of a descriptor because of a clash: Let t2 be the intended referent, and the descriptors left-of(t3,t2) and type(t3,table) expressed by 'the one which is to the left of a table'. If on(g1,t2) is selected next, the only way to link it to the partial expression generated so far is via a relative clause, but this slot is already filled.

5) Rejection of a descriptor because of a scope problem: However, if the local relation in the previous example is expressed by 'the one to the left of a table', adding a relative clause expressing the objects on t3 would still work badly because the addressee would interpret these objects to be placed on t2 - Check-Scope should recognize this reference problem.

7 Evaluating the Algorithm

The examples discussed in the previous section demonstrate that our procedure avoids many of the deficits previous algorithms suffer from. Therefore, it provides excellent prerequisites for producing natural referring expressions in terms of both the descriptors selected and the structural appearance. Whether this is actually the case depends primarily on the quality of the external components, the descriptor selection and the lexicalization component, and, to some minor extent, on the parameterization of the structural appearance of the referring expression to be produced.

As far as its complexity is concerned, the algorithm is in some sense even more efficient than its predecessors, because it does not require complete lists of descriptors to be produced for each referent. However, this saving is partially nullified by the additional operations incorporated, especially by the application of lexicalization operators and scoping verifications. Nevertheless, an overall analysis of the algorithm's complexity is hardly possible in a general sense because

• the operations in this algorithm are rather heterogeneous, and their relative costs are far from clear,
• the costs of individual operations, such as descriptor computation in the descriptor selection component and constraint network maintenance, may vary significantly depending on the underlying representation, especially if the primary representation is a pictorial rather than a propositional one.

8 Conclusion

In this paper, we have presented a new algorithm for generating referential descriptions which exhibits some extraordinary capabilities:

• Descriptors can be selected in a goal-driven and incremental fashion, with contributions from varying referents interleaving with one another.
• A component is interfaced which attempts to express the descriptors chosen on the lexical representation level to encounter expressibility problems.
• The structural appearance of the resulting referential description can be controlled.

Major problems for the future are an even tighter integration of the algorithm in the generation process as a whole and finding adequate concepts for dealing with negation and sets.

References

Doug Appelt. 1985a. Planning English Referring Expressions. Artificial Intelligence, 26:1-33.
Doug Appelt. 1985b. Some Pragmatic Issues in the Planning of Definite and Indefinite Referring Expressions. In 23rd Annual Meeting of the Association for Computational Linguistics, pages 198-203. Association for Computational Linguistics, Morristown, New Jersey.
Doug Appelt, and Amichai Kronfeld. 1987. A Computational Model of Referring. In Proceedings of the 10th International Joint Conference on Artificial Intelligence, pages 640-647, Milano, Italy.
Robert Dale. 1988. Generating Referring Expressions in a Domain of Objects and Processes. PhD Thesis, Centre for Cognitive Science, University of Edinburgh.
Robert Dale. 1989. Cooking Up Referring Expressions. In 27th Annual Meeting of the Association for Computational Linguistics, pages 68-75, Vancouver, Canada. Association for Computational Linguistics, Morristown, New Jersey.
Robert Dale, and Nick Haddock. 1991. Generating Referring Expressions Involving Relations. In Proceedings of the European Chapter of the Association for Computational Linguistics, pages 161-166, Berlin, Germany.
Robert Dale, and Ehud Reiter. 1995. Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions. Cognitive Science, 19:233-263.
K. Donellan. 1966. Reference and Definite Description. Philosophical Review, 75:281-304.
Klaus-Peter Gapp. 1995. Efficient Processing of Spatial Relations in General Object Localization Tasks. In Proceedings of the Eighth Australian Joint Conference on Artificial Intelligence, Canberra, Australia.
Barbara Grosz, and Candace Sidner. 1986. Attention, Intention, and the Structure of Discourse. Computational Linguistics, 12:175-206.
Nicolas Haddock. 1991. Linear-Time Reference Evaluation. Technical Report, Hewlett Packard Laboratories, Bristol.
Helmut Horacek. 1995. More on Generating Referring Expressions. In Proceedings of the 5th European Workshop on Natural Language Generation, pages 43-58, Leiden, The Netherlands.
Helmut Horacek. 1996. A New Algorithm for Generating Referring Expressions. In Proceedings of the 8th European Conference on Artificial Intelligence, pages 577-581, Budapest, Hungary.
Amichai Kronfeld. 1986. Donellan's Distinction and a Computational Model of Reference. In 24th Annual Meeting of the Association for Computational Linguistics, pages 186-191. Association for Computational Linguistics, Morristown, New Jersey.
William Levelt. 1989. Speaking: From Intention to Articulation. MIT Press.
Alan Mackworth. 1977. Consistency in Networks of Relations. Artificial Intelligence, 8:99-118.
David McDonald. 1981. Natural Language Generation as a Process of Decision Making Under Constraints. PhD Thesis, MIT, Cambridge, Massachusetts.
Marie Meteer. 1992. Expressibility and the Problem of Efficient Text Planning. Pinter Publishers, London.
Hans-Joachim Novak. 1988. Generating Referring Phrases in a Dynamic Environment. In M. Zock, G. Sabah, editors, Advances in Natural Language Generation, Vol. 2, pages 76-85, Pinter Publishers, London.
Ehud Reiter. 1990a. The Computational Complexity of Avoiding Conversational Implicatures. In 28th Annual Meeting of the Association for Computational Linguistics, pages 97-104, Pittsburgh, Pennsylvania. Association for Computational Linguistics, Morristown, New Jersey.
Ehud Reiter. 1990b. Generating Descriptions that Exploit a User's Domain Knowledge. In R. Dale, C. Mellish, M. Zock, editors, Current Issues in Natural Language Generation, pages 257-285, Academic Press, New York.
Ehud Reiter, and Robert Dale. 1992. Generating Definite NP Referring Expressions. In Proceedings of the International Conference on Computational Linguistics, Nantes, France.
| 1997 | 27 |
Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation

Günter Neumann
DFKI GmbH
Stuhlsatzenhausweg 3
66123 Saarbrücken, Germany
[email protected]

Abstract

This paper presents a method for the automatic extraction of subgrammars to control and speed up natural language generation (NLG). The method is based on explanation-based learning (EBL). The main advantage of the proposed new method for NLG is that the complexity of the grammatical decision making process during NLG can be vastly reduced, because the EBL method supports the adaptation of a NLG system to a particular use of a language.

1 Introduction

In recent years, a Machine Learning technique known as Explanation-based Learning (EBL) (Mitchell, Keller, and Kedar-Cabelli, 1986; van Harmelen and Bundy, 1988; Minton et al., 1989) has successfully been applied to control and speed up natural language parsing (Rayner, 1988; Samuelsson and Rayner, 1991; Neumann, 1994a; Samuelsson, 1994; Srinivas and Joshi, 1995; Rayner and Carter, 1996). The core idea of EBL is to transform the derivations (or explanations) computed by a problem solver (e.g., a parser) to some generalized and compact forms, which can be used very efficiently for solving similar problems in the future. EBL has primarily been used for parsing to automatically specialize a given source grammar to a specific domain. In that case, EBL is used as a method for adapting a general grammar and/or parser to the sub-language defined by a suitable training corpus (Rayner and Carter, 1996).

A specialized grammar can be seen as describing a domain-specific set of prototypical constructions. Therefore, the EBL approach is also very interesting for natural language generation (NLG). Informally, NLG is the production of a natural language text from a computer-internal representation of information, where NLG can be seen as a complex--potentially cascaded--decision making process. Commonly, a NLG system is decomposed into two major components, viz. the strategic component which decides 'what to say' and the tactical component which decides 'how to say' the result of the strategic component. The input of the tactical component is basically a semantic representation computed by the strategic component. Using a lexicon and a grammar, its main task is the computation of potentially all possible strings associated with a semantic input. Now, in the same sense as EBL is used in parsing as a means to control the range of possible strings as well as their degree of ambiguity, it can also be used for the tactical component to control the range of possible semantic input and their degree of paraphrases.

In this paper, we present a novel method for the automatic extraction of subgrammars for the control and speeding-up of natural language generation. Its main advantage for NLG is that the complexity of the (linguistically oriented) decision making process during natural language generation can be vastly reduced, because the EBL method supports adaptation of a NLG system to a particular language use. The core properties of this new method are:

• prototypically occurring grammatical constructions can automatically be extracted;
• generation of these constructions is vastly sped up using simple but efficient mechanisms;
• the new method supports partial matching, in the sense that new semantic input need not be completely covered by previously trained examples;
• it can easily be integrated with recently developed chart-based generators as described in,
The core properties of this new method are:

- prototypically occurring grammatical constructions can automatically be extracted;
- generation of these constructions is vastly sped up using simple but efficient mechanisms;
- the new method supports partial matching, in the sense that new semantic input need not be completely covered by previously trained examples;
- it can easily be integrated with recently developed chart-based generators as described in, e.g., (Neumann, 1994b; Kay, 1996; Shemtov, 1996).

The method has been completely implemented and tested with a broad-coverage HPSG-based grammar for English (see sec. 5 for more details).

2 Foundations
The main focus of this paper is tactical generation, i.e., the mapping of structures (usually representing semantic information, eventually decorated with some functional features) to strings using a lexicon and a grammar. Thus stated, we view tactical generation as the inverse process of parsing. Informally, EBL can be considered as an intelligent storage unit of example-based generalized parts of the grammatical search space determined via training by the tactical generator.[1] Processing of similar new input is then reduced to simple lookup and matching operations, which circumvent re-computation of this already known search space.

[1] In case a reversible grammar is used, the parser can even be used for processing the training corpus.

We concentrate on constraint-based grammar formalisms following a sign-based approach, considering linguistic objects (i.e., words and phrases) as utterance-meaning associations (Pollard and Sag, 1994). Thus viewed, a grammar is a formal statement of the relation between utterances in a natural language and representations of their meanings in some logical or other artificial language, where such representations are usually called logical forms (Shieber, 1993). The result of the tactical generator is a feature structure (or a set of such structures in the case of multiple paraphrases) containing among others the input logical form, the computed string, and a representation of the derivation.

In our current implementation we are using TDL, a typed feature-based language and inference system for constraint-based grammars (Krieger and Schäfer, 1994). TDL allows the user to define hierarchically ordered types consisting of type and feature constraints. As shown later, a systematic use of type information leads to a very compact representation of the extracted data and supports an elegant but efficient generalization step.

We are adapting a "flat" representation of logical forms as described in (Kay, 1996; Copestake et al., 1996). This is a minimally structured, but descriptively adequate means to represent semantic information, which allows for various types of under-/overspecification, facilitates generation, and supports the specification of semantic transfer equivalences used for machine translation (Copestake et al., 1996; Shemtov, 1996).[2] Informally, a flat representation is obtained by the use of extra variables which explicitly represent the relationship between the entities of a logical form and scope information. In our current system we are using the framework called minimal recursion semantics (MRS) described in (Copestake et al., 1996). Using their typed feature structure notation, Figure 1 displays a possible MRS of the string "Sandy gives a chair to Kim" (abbreviated where convenient).

[2] But note, our approach does not depend on a flat representation of logical forms. However, in the case of conventional representation forms, the mechanisms for indexing the trained structures would require more complex abstract data types (see sec. 4 for more details).
The value of the feature LISZT is actually treated like a set, i.e., the relative order of the elements is immaterial. The feature HANDEL is used to represent scope information, and INDEX plays much the same role as a lambda variable in conventional representations (for more details see (Copestake et al., 1996)).

Figure 1: The MRS of the string "Sandy gives a chair to Kim". [The typed feature structure is not cleanly recoverable from this extraction; its LISZT groups the relations SandyRel, GiveRel, TempOver, Some, ChairRel, To and KimRel under the top handle h1 and index e2.]

LISZT < SandyRel[HANDEL h4], GiveRel[HANDEL h1], TempOver[HANDEL h1], Some[HANDEL h9], ChairRel[HANDEL h10], To[HANDEL h12], KimRel[HANDEL h14] >

Figure 2: The generalized MRS of the string "Sandy gives a chair to Kim"

3 Overview of the method

Figure 3: A blueprint of the architecture. [Diagram not recoverable from this extraction.]

The above figure displays the overall architecture of the EBL learning method. The right-hand part of the diagram shows the linguistic competence base (LCB) and the left the EBL-based subgrammar processing component (SGP). LCB corresponds to the tactical component of a general natural language generation system (NLG). In this paper we assume that the strategic component of the NLG has already computed the MRS representation of the information of an underlying computer program. SGP consists of a training module TM, an application module AM, and the subgrammar, automatically determined by TM and applied by AM.

Briefly, the flow of control is as follows: During the training phase of the system, a new logical form mrs is given as input to the LCB. After grammatical processing, the resulting feature structure fs(mrs) (i.e., a feature structure that contains among others the input MRS, the computed string and a representation of the derivation tree) is passed to TM. TM extracts and generalizes the derivation tree of fs(mrs), which we call the template templ(mrs) of fs(mrs). templ(mrs) is then stored in a decision tree, where indices are computed from the MRS found under the root of templ(mrs). During the application phase, a new semantic input mrs' is used for the retrieval of the decision tree. If a candidate template can be found and successfully instantiated, the resulting feature structure fs(mrs') constitutes the generation result of mrs'.

Thus described, the approach seems to facilitate only exact retrieval and matching of a new semantic input. However, before we describe how partial matching is realized, we will demonstrate in more detail the exact matching strategy using the example MRS shown in Figure 1.

Training phase
The training module TM starts right after the resulting feature structure fs for the input MRS mrs has been computed. In the first phase, TM extracts and generalizes the derivation tree of fs, called the template of fs. Each node of the template contains the rule name used in the corresponding derivation step and a generalization of the local MRS.
A generalized MRS is the abstraction of the LISZT value of an MRS, where each element only contains the (lexical semantic) type and HANDEL information (the HANDEL information is used for directing lexical choice (see below)).

For our example MRS mrs, Figure 2 displays the generalized MRS mrs_g. For convenience, we will use the more compact notation:

  {(SandyRel h4), (GiveRel h1), (TempOver h1), (Some h9), (ChairRel h10), (To h12), (KimRel h14)}

Using this notation, Figure 4 displays the template templ(mrs) obtained from fs. Note that it memorizes not only the rule application structure of a successful process but also the way the grammar mutually relates the compositional parts of the input MRS.

Figure 4: The template templ(mrs). Rule names are in bold. [The tree diagram is not cleanly recoverable from this extraction. Its root, SubjhD, carries the full generalized MRS; its daughters are ProperLe {(SandyRel h4)} and HCompNc over the remaining relations, which decomposes via further HCompNc and DetN nodes down to the lexical types MvTo+DitransLe {(GiveRel h1), (TempOver h1)}, DetSgLe {(Some h9)}, IntrNLe {(ChairRel h10)}, PrepNoModLe {(To h12)} and ProperLe {(KimRel h14)}.]

In the next step of the training module TM, the generalized MRS mrs_g of the root node of templ(mrs) is used for building up an index in a decision tree. Remember that the relative order of the elements of an MRS is immaterial. For that reason, the elements of mrs_g are alphabetically ordered, so that we can treat mrs_g as a sequence when used as a new index in the decision tree. The alphabetic ordering has two advantages. Firstly, we can store different templates under a common prefix, which allows for efficient storage and retrieval. Secondly, it allows for a simple and efficient treatment of MRSs as sets during the retrieval phase of the application phase.
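To make the indexing step concrete, the following is a minimal Python sketch of the decision tree as a trie over the alphabetically sorted type names of a generalized MRS. It is our own illustration, not the TDL-based implementation described in this paper, and the class and function names are assumptions made for exposition.

```python
# Illustrative sketch of the decision tree: a trie keyed by the sorted
# type names of a generalized MRS (this is not the actual system).

class Node:
    def __init__(self):
        self.children = {}     # type name -> Node
        self.template = None   # template stored at this index end point

def insert(root, generalized_mrs, template):
    """generalized_mrs: the type names of the root MRS, e.g.
    ['SandyRel', 'GiveRel', 'TempOver', 'Some', 'ChairRel', 'To', 'KimRel']."""
    node = root
    for t in sorted(generalized_mrs):   # alphabetic ordering: sets as sequences
        node = node.children.setdefault(t, Node())
    node.template = template

def retrieve_exact(root, generalized_mrs):
    """Exact retrieval; the subsumption-directed traversal described in the
    application phase would replace the plain dictionary lookup by a walk
    over type-compatible keys."""
    node = root
    for t in sorted(generalized_mrs):
        if t not in node.children:
            return None
        node = node.children[t]
    return node.template
```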
Application phase
The application module AM basically performs the following steps:

1. Retrieval: For a new MRS mrs' we first construct the alphabetically sorted generalized MRS mrs'_g. mrs'_g is then used as a path description for traversing the decision tree. For reasons we will explain soon, traversal is directed by type subsumption. Traversal is successful if mrs'_g has been completely processed and if the end node in the decision tree contains a template. Note that because of the alphabetic ordering, the relative order of the elements of the new input mrs' is immaterial.

2. Expansion: A successfully retrieved template templ is expanded by deterministically applying the rules denoted by the non-terminal elements from the top downwards in the order specified by templ. In some sense, expansion just re-plays the derivation obtained in the past. This will result in a grammatically fully expanded feature structure, where only lexically specific information is still missing. But note that through structure sharing the terminal elements will already be constrained by syntactic information.[3]

[3] It is possible to perform the expansion step off-line as early as the training phase, in which case the application phase can be sped up, however at the price of more memory being taken up.

3. Lexical lookup: From each terminal element of the unexpanded template templ, the type and HANDEL information is used to select the corresponding element from the input MRS mrs' (note that in general the MRS elements of mrs' are much more constrained than their corresponding elements in the generalized MRS mrs'_g). The chosen input MRS element is then used for performing lexical lookup, where lexical elements are indexed by their relation name. In general this will lead to a set of lexical candidates.

4. Lexical instantiation: In the last step of the application phase, the set of selected lexical elements is unified with the constraints of the terminal elements in the order specified by the terminal yield. We also call this step terminal matching. In our current system terminal matching is performed from left to right. Since the ordering of the terminal yield is given by the template, it is also possible to follow other selection strategies, e.g., a semantic head-driven strategy, which could lead to more efficient terminal matching, because the head element is supposed to provide selectional restriction information for its dependents.

A template together with its corresponding index describes all sentences of the language that share the same derivation and whose MRSs are consistent with that of the index. Furthermore, the index and the MRS of a template together define a normalization for the permutation of the elements of a new input MRS. The proposed EBL method guarantees soundness because retaining and applying the original derivation in a template enforces the full constraints of the original grammar.

Achieving more generality
So far, the application phase will only be able to re-use templates for a semantic input which has the same semantic type information. However, it is possible to achieve more generality if we apply a further abstraction step on a generalized MRS. This is simply achieved by selecting a supertype of an MRS element instead of the given specialized type.

The type abstraction step is based on the standard assumption that the word-specific lexical semantic types can be grouped into classes representing morpho-syntactic paradigms. These classes define the upper bounds for the abstraction process. In our current system, these upper bounds are directly used as the supertypes to be considered during the type abstraction step. More precisely, for each element x of a generalized MRS mrs_g it is checked whether its type T_x is subsumed by an upper bound T_s (we assume disjoint sets). Only if this is the case, T_s replaces T_x in mrs_g.[4] Applying this type abstraction strategy on the MRS of Figure 1, we obtain:

  {(Named h4), (ActUndPrep h1), (TempOver h1), (Some h9), (RegNom h10), (To h12), (Named h14)}

where, e.g., NAMED is the common supertype of SANDYREL and KIMREL, and ACTUNDPREP is the supertype of GIVEREL.

[4] Of course, if a very fine-grained lexical semantic type hierarchy is defined, then a more careful selection would be possible to obtain different degrees of type abstraction and to achieve a more domain-sensitive determination of the subgrammars. However, more complex type abstraction strategies are then needed which would be able to find appropriate supertypes automatically.

Figure 5: The more generalized derivation tree dt_g of dt. [Tree diagram not cleanly recoverable from this extraction; it has the same shape as Figure 4, with the abstracted types Named, ActUndPrep and RegNom in place of the word-specific relation types.]
Figure 5 shows the template templ_g obtained from fs using the more general MRS information. Note that the MRS of the root node is used for building up an index in the decision tree. Now, if retrieval of the decision tree is directed by type subsumption, the same template can be retrieved and potentially instantiated for a wider range of new MRS input, namely for those inputs which are type compatible wrt. the subsumption relation. Thus, the template templ_g can now be used to generate, e.g., the string "Kim gives a table to Peter", as well as the string "Noam donates a book to Peter". However, it will not be able to generate a sentence like "A man gives a book to Kim", since the retrieval phase will already fail. In the next section, we will show how to overcome even this kind of restriction.

4 Partial Matching
The core idea behind partial matching is that in case an exact match of an input MRS fails, we want at least as many subparts as possible to be instantiated. Since the instantiated template of an MRS subpart corresponds to a phrasal sign, we also call it a phrasal template. For example, assuming that the training phase has only been performed for the example in Figure 1, then for the MRS of "A man gives a book to Kim", a partial match would generate the strings "a man" and "gives a book to Kim".[5] The instantiated phrasal templates are then combined by the tactical component to produce larger units (if possible, see below).

[5] If we allowed an exhaustive partial match (see below), then the strings "a book" and "Kim" would additionally be generated.

Extended training phase
The training module is adapted as follows: Starting from a template templ obtained for the training example in the manner described above, we extract recursively all possible subtrees templ_s, also called phrasal templates. Next, each phrasal template is inserted in the decision tree in the way described above.

It is possible to direct the subtree extraction process with the application of filters, which are applied to the whole remaining subtree in each recursive step. By using these filters it is possible to restrict the range of structural properties of candidate phrasal templates (e.g., extract only saturated NPs, or subtrees having at least two daughters, or subtrees which have no immediate recursive structures). These filters serve the same means as the "chunking criteria" described in (Rayner and Carter, 1996). During the training phase it is recognized for each phrasal template templ_s whether the decision tree already contains a path pointing to a previously extracted and already stored phrasal template templ'_s, such that templ_s = templ'_s. In that case, templ_s is not inserted and the recursion stops at that branch.

Extended application phase
For the application module, only the retrieval operation of the decision tree needs to be adapted. Remember that the input of the retrieval operation is the sorted generalized MRS mrs_g of the input MRS mrs. Therefore, mrs_g can be handled like a sequence. The task of the retrieval operation in the case of a partial match is now to potentially find all subsequences of mrs_g which lead to a template.
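A minimal sketch of this partial retrieval, reusing the Node trie from the earlier sketch: retrieval is restarted at every suffix of the sorted input, so a template is returned whenever one of its complete indices is traversed. This reading is our own interpretation, chosen so as to be consistent with the retrieval example given in the next paragraph; the exhaustive/non-exhaustive distinction is omitted.

```python
# Sketch of partial-match retrieval over the Node trie defined earlier.
# Restarting at every suffix mimics the implicit pointer from each index
# end point back to the root of the decision tree.

def retrieve_partial(root, generalized_mrs):
    seq = sorted(generalized_mrs)
    results = []
    for start in range(len(seq)):       # repeat retrieval on every suffix
        node = root
        for t in seq[start:]:
            node = node.children.get(t)
            if node is None:
                break
            if node.template is not None:   # a complete index was traversed
                results.append(node.template)
    return results
```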
In the case of the exact matching strategy, the decision tree must be visited only once for a new input. In the case of partial matching, however, the decision tree describes only possible prefixes for a new input. Hence, we have to recursively repeat retrieval of the decision tree as long as the remaining suffix is not empty. In other words, the decision tree is now a finite representation of an infinite structure, because implicitly, each endpoint of an index bears a pointer to the root of the decision tree. Assume that the following template/index pairs have been inserted into the decision tree: (ab, t1), (abcd, t2), (bcd, t3). Then retrieval using the path abcd will return all three templates, retrieval using aabbcd will return templates t1 and t3, and abc will only return t1.[6]

[6] It is possible to parameterize our system to perform an exhaustive or a non-exhaustive strategy. In the non-exhaustive mode, the longest matching prefixes are preferred.

Interleaving with normal processing
Our EBL method can easily be integrated with normal processing, because each instantiated template can be used directly as an already found sub-solution. In the case of an agenda-driven chart generator of the kind described in (Neumann, 1994a; Kay, 1996), an instantiated template can be directly added as a passive edge to the generator's agenda. If passive edges with a wider span are given higher priority than those with a smaller span, the tactical generator would try to combine the largest derivations before smaller ones, i.e., it would prefer those structures determined by EBL.

5 Implementation
The EBL method just described has been fully implemented and tested with a broad-coverage HPSG-based grammar for English including more than 2000 fully specified lexical entries.[7] The TDL grammar formalism is very powerful, supporting distributed disjunction, full negation, as well as full boolean type logic. In our current system, an efficient chart-based bidirectional parser is used for performing the training phase. During training, the user can interactively select which of the parser's readings should be considered by the EBL module. In this way the user can control which sort of structural ambiguities should be avoided because they are known to cause misunderstandings.

[7] This grammar has been developed at CSLI, Stanford, and has kindly been provided to the author.

For interleaving the EBL application phase with normal processing, a first prototype of a chart generator has been implemented using the same grammar as used for parsing. First tests have been carried out using a small test set of 179 sentences. Currently, a parser is used for processing the test set during training. Generation of the extracted templates is performed solely by the EBL application phase (i.e., we did not consider integration of EBL and chart generation). The application phase is very efficient. The average processing time for indexing and instantiation of a sentence-level template (determined through parsing) of an input MRS is approximately one second.[8] Compared to parsing the corresponding string, the factor of speed-up is between 10 and 20. A closer look at the four basic EBL-generation steps -- indexing, instantiation, lexical lookup, and terminal matching -- showed that the latter is the most expensive one (up to 70% of computing time).

[8] EBL-based generation of all possible templates of an input MRS takes less than 2 seconds. The tests have been performed using a Sun UltraSparc.
The main reasons are that (1) lexical lookup often returns several lexical readings for an MRS element (which introduces lexical non-determinism) and (2) the lexical elements introduce most of the disjunctive constraints, which makes unification very complex. Currently, terminal matching is performed left to right. However, we hope to increase the efficiency of this step by using head-oriented strategies, since this might help to resolve disjunctive constraints as early as possible.

6 Discussion
The only other approach I am aware of which also considers EBL for NLG is (Samuelsson, 1995a; Samuelsson, 1995b). However, he focuses on the compilation of a logic grammar using LR-compiling techniques, where EBL-related methods are used to optimize the compiled LR tables, in order to avoid spurious non-determinisms during normal generation. He considers neither the extraction of a specialized grammar for supporting controlled language generation, nor strong integration with the normal generator.

However, these properties are very important for achieving high applicability. Automatic grammar extraction is worthwhile because it can be used to support the definition of a controlled domain-specific language use on the basis of training with a general source grammar. Furthermore, in case exact matching is requested, only the application module is needed for processing the subgrammar. In case of normal processing, our EBL method serves as a speed-up mechanism for those structures which have "actually been used or uttered". However, completeness is preserved.

We view generation systems which are based on "canned text" and linguistically-based systems simply as two endpoints of a contiguous scale of possible system architectures (see also (Dale et al., 1994)). Thus viewed, our approach is directed towards the automatic creation of application-specific generation systems.

7 Conclusion and Future Directions
We have presented a method for the automatic extraction of subgrammars for controlling and speeding up natural language generation (NLG). The method is based on explanation-based learning (EBL), which has already been successfully applied to parsing. We showed how the method can be used to train a system to a specific grammatical and lexical usage. We have already implemented a similar EBL method for parsing, which supports on-line learning as well as statistics-based management of extracted data. In the future we plan to combine EBL-based generation and parsing into one uniform EBL approach usable for high-level performance strategies which are based on a strict interleaving of parsing and generation (cf. (Neumann and van Noord, 1994; Neumann, 1994a)).

8 Acknowledgement
The research underlying this paper was supported by a research grant from the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMB+F) to the DFKI project PARADIME, FKZ ITW 9704. I would like to thank the HPSG people from CSLI, Stanford for their kind support and for providing the HPSG-based English grammar. In particular I want to thank Dan Flickinger and Ivan Sag. Many thanks also to Walter Kasper for fruitful discussions.

References
Copestake, A., D. Flickinger, R. Malouf, S. Riehemann, and I. Sag. 1996. Translation using minimal recursion semantics. In Proceedings, 6th International Conference on Theoretical and Methodological Issues in Machine Translation.
Dale, R., W. Finkler, R. Kittredge, N. Lenke, G. Neumann, C. Peters, and M. Stede. 1994. Report from working group 2: Lexicalization and architecture. In W. Hoeppner, H. Horacek, and J. Moore, editors, Principles of Natural Language Generation, Dagstuhl-Seminar-Report 93. Schloß Dagstuhl, Saarland, Germany, Europe, pages 30-39.
Kay, M. 1996. Chart generation. In 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, Ca.
Krieger, Hans-Ulrich and Ulrich Schäfer. 1994. TDL--a type description language for constraint-based grammars. In Proceedings of the 15th International Conference on Computational Linguistics, COLING-94, pages 893-899.
Minton, S., J. G. Carbonell, C. A. Knoblock, D. R. Kuokka, O. Etzioni, and Y. Gil. 1989. Explanation-based learning: A problem solving perspective. Artificial Intelligence, 40:63-115.
Mitchell, T., R. Keller, and S. Kedar-Cabelli. 1986. Explanation-based generalization: a unifying view. Machine Learning, 1:47-80.
Neumann, G. 1994a. Application of explanation-based learning for efficient processing of constraint-based grammars. In Proceedings of the Tenth IEEE Conference on Artificial Intelligence for Applications, pages 208-215, San Antonio, Texas, March.
Neumann, G. 1994b. A Uniform Computational Model for Natural Language Parsing and Generation. Ph.D. thesis, Universität des Saarlandes, Germany, Europe, November.
Neumann, G. and G. van Noord. 1994. Reversibility and self-monitoring in natural language generation. In Tomek Strzalkowski, editor, Reversible Grammar in Natural Language Processing. Kluwer, pages 59-96.
Pollard, C. and I. M. Sag. 1994. Head-Driven Phrase Structure Grammar. Center for the Study of Language and Information, Stanford.
Rayner, M. 1988. Applying explanation-based generalization to natural language processing. In Proceedings of the International Conference on Fifth Generation Computer Systems, Tokyo.
Rayner, M. and D. Carter. 1996. Fast parsing using pruning and grammar specialization. In 34th Annual Meeting of the Association for Computational Linguistics, Morristown, New Jersey.
Samuelsson, C. 1994. Fast Natural-Language Parsing Using Explanation-Based Learning. Ph.D. thesis, Swedish Institute of Computer Science, Kista, Sweden, Europe.
Samuelsson, C. 1995a. An efficient algorithm for surface generation. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, pages 1414-1419, Montreal, Canada.
Samuelsson, C. 1995b. Example-based optimization of surface-generation tables. In Proceedings of Recent Advances in Natural Language Processing, Velingrad, Bulgaria, Europe.
Samuelsson, C. and M. Rayner. 1991. Quantitative evaluation of explanation-based learning as an optimization tool for a large-scale natural language system. In IJCAI-91, pages 609-615, Sydney, Australia.
Shemtov, H. 1996. Generation of paraphrases from ambiguous logical forms. In Proceedings of the 16th International Conference on Computational Linguistics (COLING), pages 919-924, Kopenhagen, Denmark, Europe.
Shieber, S. M. 1993. The problem of logical-form equivalence. Computational Linguistics, 19:179-190.
Srinivas, B. and A. Joshi. 1995. Some novel applications of explanation-based learning to parsing lexicalized tree-adjoining grammars. In 33rd Annual Meeting of the Association for Computational Linguistics, Cambridge, MA.
van Harmelen, F. and A. Bundy. 1988. Explanation-based generalization = partial evaluation. Artificial Intelligence, 36:401-412.
Morphological Disambiguation by Voting Constraints

Kemal Oflazer and Gökhan Tür
Department of Computer Engineering and Information Science, Bilkent University, Bilkent, TR-06533, Turkey
{ko,tur}@cs.bilkent.edu.tr

Abstract
We present a constraint-based morphological disambiguation system in which individual constraints vote on matching morphological parses, and disambiguation of all the tokens in a sentence is performed at the end by selecting parses that receive the highest votes. This constraint application paradigm makes the outcome of the disambiguation independent of the rule sequence, and hence relieves the rule developer from worrying about potentially conflicting rule sequencing. Our results for disambiguating Turkish indicate that using about 500 constraint rules and some additional simple statistics, we can attain a recall of 95-96% and a precision of 94-95% with about 1.01 parses per token. Our system is implemented in Prolog and we are currently investigating an efficient implementation based on finite state transducers.

1 Introduction
Automatic morphological disambiguation is an important component in higher level analysis of natural language text corpora. There has been a large number of studies in tagging and morphological disambiguation using various techniques such as statistical techniques, e.g., (Church, 1988; Cutting et al., 1992; DeRose, 1988), constraint-based techniques (Karlsson et al., 1995; Voutilainen, 1995b; Voutilainen, Heikkilä, and Anttila, 1992; Voutilainen and Tapanainen, 1993; Oflazer and Kuruöz, 1994; Oflazer and Tür, 1996) and transformation-based techniques (Brill, 1992; Brill, 1994; Brill, 1995). This paper presents a novel approach to constraint-based morphological disambiguation which relieves the rule developer from worrying about conflicting rule ordering requirements. The approach depends on assigning votes to constraints according to their complexity and specificity, and then letting constraints cast votes on matching parses of a given lexical item. This approach does not reflect the outcome of matching constraints to the set of morphological parses immediately. Only after all applicable rules have been applied to a sentence are all tokens disambiguated in parallel. Thus, the outcome of the rule applications is independent of the order of rule applications. The rule ordering issue has been discussed by Voutilainen (1994), but he has recently indicated[1] that insensitivity to rule ordering is not a property of their system (although Voutilainen (1995a) states that it is a very desirable property) but rather is achieved by extensively testing and tuning the rules.

[1] Voutilainen, private communication.

In the following sections, we present an overview of the morphological disambiguation problem, highlighted with examples from Turkish. We then present our approach and results. We finally conclude with a very brief outline of our investigation into efficient implementations of our approach.

2 Morphological Disambiguation
In all languages, words are usually ambiguous in their parts-of-speech or other morphological features, and may represent lexical items of different syntactic categories or morphological structures, depending on the syntactic and semantic context. In languages like English, there are a very small number of possible word forms that can be generated from a given root word, and a small number of part-of-speech tags associated with a given lexical form.
On the other hand, in languages like Turkish or Finnish with very productive agglutinative morphology, it is possible to produce thousands of forms (or even millions (Hankamer, 1989)) from a given root word, and the kinds of ambiguities one observes are quite different from what is observed in languages like English. In Turkish, there are ambiguities of the sort typically found in languages like English (e.g., the book/noun vs book/verb type). However, the agglutinative nature of the language usually helps the resolution of such ambiguities, due to the restrictions on the morphotactics of subsequent morphemes. On the other hand, this very nature introduces another kind of ambiguity, where a lexical form can be morphologically interpreted in many ways not usually predictable in advance. Furthermore, Turkish allows very productive derivational processes, and the information about the derivational structure of a word form is usually crucial for disambiguation (Oflazer and Tür, 1996).

Most kinds of morphological ambiguities that we have observed in Turkish typically fall into one of the following classes:[2]

[2] Output of the morphological analyzer is edited for clarity, and English glosses have been given. We have also provided the morpheme structure, where [...]s indicate elision. Glosses are given as linear feature-value sequences corresponding to the morphemes (which are not shown). The feature names are as follows: CAT - major category, TYPE - minor category, ROOT - main root form, AGR - number and person agreement, POSS - possessive agreement, CASE - surface case, CONV - conversion to the category following, with a certain suffix indicated by the argument after that, TAM1 - tense, aspect, mood marker 1, SENSE - verbal polarity. Upper case in the morphological output indicates one of the non-ASCII special Turkish characters: e.g., G denotes ğ, U denotes ü, etc.

1. The form is uninflected and assumes the default inflectional features, e.g.,
   1. taS (made of stone) [[CAT=ADJ][ROOT=taS]]
   2. taS (stone) [[CAT=NOUN][ROOT=taS][AGR=3SG][POSS=NONE][CASE=NOM]]
   3. taS (overflow!) [[CAT=VERB][ROOT=taS][SENSE=POS][TAM1=IMP][AGR=2SG]]

2. Lexically different affixes (conveying different morphological features) surface the same due to the morphographemic context, e.g.,
   1. ev+[n]in (of the house) [[CAT=NOUN][ROOT=ev][AGR=3SG][POSS=NONE][CASE=GEN]]
   2. ev+in (your house) [[CAT=NOUN][ROOT=ev][AGR=3SG][POSS=2SG][CASE=NOM]]

3. The root of one of the parses is a prefix string of the root of the other parse, and the parse with the shorter root word has a suffix which surfaces as the rest of the longer root word, e.g.,
   1. koyu+[u]n (your dark (thing)) [[CAT=ADJ][ROOT=koyu][CONV=NOUN=NONE][AGR=3SG][POSS=2SG][CASE=NOM]]
   2. koyun (sheep) [[CAT=NOUN][ROOT=koyun][AGR=3SG][POSS=NONE][CASE=NOM]]
   3. koy+[n]un (of the bay) [[CAT=NOUN][ROOT=koy][AGR=3SG][POSS=NONE][CASE=GEN]]
   4. koy+un (your bay) [[CAT=NOUN][ROOT=koy][AGR=3SG][POSS=2SG][CASE=NOM]]
   5. koy+[y]un (put!) [[CAT=VERB][ROOT=koy][SENSE=POS][TAM1=IMP][AGR=2PL]]

4. The roots take different numbers of unrelated inflectional and/or derivational suffixes which, when concatenated, turn out to have the same surface form, e.g.,
   1. yap+madan (without having done (it)) [[CAT=VERB][ROOT=yap][SENSE=POS][CONV=ADVERB=MADAN]]
   2. yap+ma+dan (from doing (it)) [[CAT=VERB][ROOT=yap][SENSE=POS][CONV=NOUN=MA][TYPE=INFINITIVE][AGR=3SG][POSS=NONE][CASE=ABL]]
5. One of the ambiguous parses is a lexicalized form while another is a form derived by a productive derivation, as in 1 and 2 below.

6. The same suffix appears in different positions in the morphotactic paradigm, conveying different information, as in 2 and 3 below.
   1. uygulama (application) [[CAT=NOUN][ROOT=uygulama][AGR=3SG][POSS=NONE][CASE=NOM]]
   2. uygula+ma ((the act of) applying) [[CAT=VERB][ROOT=uygula][SENSE=POS][CONV=NOUN=MA][TYPE=INFINITIVE][AGR=3SG][POSS=NONE][CASE=NOM]]
   3. uygula+ma (do not apply!) [[CAT=VERB][ROOT=uygula][SENSE=NEG][TAM1=IMP][AGR=2SG]]

The main intent of our system is to achieve morphological disambiguation by choosing for a given ambiguous token the correct parse in a given context. It is certainly possible that a given token may have multiple correct parses, usually with the same inflectional features, or with inflectional features not ruled out by the syntactic context, but one will be the "correct" parse, usually on semantic grounds. We consider a token fully disambiguated if it has only one morphological parse remaining after automatic disambiguation. We consider a token correctly disambiguated if one of the parses remaining for that token is the correct intended parse. We evaluate the resulting disambiguated text by a number of metrics defined as follows (Voutilainen, 1995a):

  Ambiguity = #Parses / #Tokens
  Recall = #Tokens Correctly Disambiguated / #Tokens
  Precision = #Tokens Correctly Disambiguated / #Parses

In the ideal case where each token is uniquely and correctly disambiguated with the correct parse, both recall and precision will be 1.0. On the other hand, in a text where each token is annotated with all possible parses,[3] the recall will be 1.0, but the precision will be low. The goal is to have both recall and precision as high as possible.

[3] Assuming no unknown words.

3 Constraint-based Morphological Disambiguation
This section outlines our approach to constraint-based morphological disambiguation, where constraints vote on matching parses of sequential tokens.

3.1 Constraints on morphological parses
We describe constraints on the morphological parses of tokens using rules with two components,

  R = (C1, C2, ..., Cn; V)

where the Ci are (possibly hierarchical) feature constraints on a sequence of the morphological parses, and V is an integer denoting the vote of the rule. To illustrate the flavor of our rules we can give the following examples:

1. The following rule with two constraints matches parses with case feature ablative, preceding a parse matching a postposition subcategorizing for an ablative nominal form:

  [[case:abl], [cat:postp, subcat:abl]]

2. The rule

  [[agr:'2SG', case:gen], [cat:noun, poss:'2SG']]

matches a nominal form with a possessive marker 2SG, following a pronoun with 2SG agreement and genitive case, enforcing the simplest form of noun phrase constraints.

3. In general, constraints can make references to the derivational structure of the lexical form and hence be hierarchical. For instance, the following rule is an example of a rule employing a hierarchical constraint:

  [[cat:adj, stem:[tam1:narr]], [cat:noun, stem:no]]

which matches the derived participle reading of a verb with narrative past tense, if it is followed by an underived noun parse. Such rules translate directly into nested data structures, as sketched below.
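The following Python sketch shows one possible encoding of such rules; the system itself is implemented in Prolog, so the tuple layout and variable names here are assumptions made purely for exposition.

```python
# Illustrative encoding of voting constraint rules (the actual system is in
# Prolog). A constraint is a list of (feature, value) pairs; a hierarchical
# constraint nests another constraint list under the 'stem' feature. A rule
# pairs a constraint sequence with its integer vote V.

ablative_postp = ([("case", "abl")],
                  [("cat", "postp"), ("subcat", "abl")])

genitive_np    = ([("agr", "2SG"), ("case", "gen")],
                  [("cat", "noun"), ("poss", "2SG")])

participle    = ([("cat", "adj"), ("stem", [("tam1", "narr")])],
                 [("cat", "noun"), ("stem", "no")])

# Rules as (constraint sequence, vote); the votes shown are placeholders --
# section 3.2 describes how they are actually computed.
rules = [(ablative_postp, 3), (genitive_np, 7), (participle, 4)]
```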
3.2 Determining the vote of a rule
There are a number of ways votes can be assigned to rules. For the purposes of this work the vote of a rule is determined by its static properties, but it is certainly conceivable that votes can be assigned or learned by using statistics from disambiguated corpora.[4] For static vote assignment, intuitively, we would like to give high votes to rules that are more specific, i.e., to rules that have:

- a higher number of constraints,
- a higher number of features in the constraints,
- constraints that make reference to nested stems (from which the current form is derived),
- constraints that make reference to very specific features or values.

[4] We have left this for future work.

Let R = (C1, C2, ..., Cn; V) be a constraint rule. The vote V is determined as

  V = Σ_{i=1..n} V(Ci)

where V(Ci) is the contribution of constraint Ci to the vote of the rule R. A (generic) constraint has the following form:

  C = [(f1:v1) & (f2:v2) & ... & (fm:vm)]

where fi is the name of a morphological feature, and vi is one of the possible values for that feature. The contribution of fi:vi to the vote of a constraint depends on a number of factors:

1. The value vi may be a distinguished value that has a more important function in disambiguation.[5] In this case, the weight of the feature constraint is w(vi) (> 1).
2. The feature itself may be a distinguished feature which has a more important function in disambiguation. In this case the weight of the feature constraint is w(fi) (> 1).
3. If the feature fi refers to the stem of a derived form and the value part of the feature constraint is a full-fledged constraint C' on the stem structure, the weight of the feature constraint is found by recursively computing the vote of C' and scaling the resulting value by a factor (2 in our current system) to improve its specificity.
4. Otherwise, the weight of the feature constraint is 1.

[5] For instance, for Turkish we have noted that the genitive case marker is usually very helpful in disambiguation.

For example, suppose we have the following constraint:

  [cat:noun, case:gen, stem:[cat:adj, stem:[cat:v], suffix=mis]]

Assuming the value gen is a distinguished value with weight 4 (cf. factor 1 above), the vote of this constraint is computed as follows:

1. cat:noun contributes 1,
2. case:gen contributes 4,
3. stem:[cat:adj, stem:[cat:v], suffix=mis] contributes 8, computed as follows:
   (a) cat:adj contributes 1,
   (b) suffix=mis contributes 1,
   (c) stem:[cat:v] contributes 2 = 2 * 1, the 1 being from cat:v,
   (d) the sum 4 is scaled by 2 to give 8.
4. Votes from steps 1, 2 and 3(d) are added up to give 13 as the constraint vote.
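This recursive computation translates directly into code. The sketch below is again our own illustration, using the rule encoding shown in section 3.1; the weight tables are assumptions, seeded only with the single distinguished value from the worked example.

```python
# Sketch of static vote computation. The weight tables are illustrative
# assumptions; the paper only fixes the stem scaling factor (2) and the
# example weight of 4 for the genitive value.

DISTINGUISHED_VALUE_WEIGHT = {"gen": 4}   # factor 1: e.g. genitive case
DISTINGUISHED_FEATURE_WEIGHT = {}         # factor 2: none in this example
STEM_SCALE = 2                            # factor 3: nested stem scaling

def constraint_vote(constraint):
    vote = 0
    for feature, value in constraint:
        if isinstance(value, list):                   # factor 3: nested stem
            vote += STEM_SCALE * constraint_vote(value)
        elif value in DISTINGUISHED_VALUE_WEIGHT:     # factor 1
            vote += DISTINGUISHED_VALUE_WEIGHT[value]
        elif feature in DISTINGUISHED_FEATURE_WEIGHT: # factor 2
            vote += DISTINGUISHED_FEATURE_WEIGHT[feature]
        else:                                         # factor 4
            vote += 1
    return vote

def rule_vote(constraints):
    return sum(constraint_vote(c) for c in constraints)

# The worked example above: 1 + 4 + 2 * (1 + 1 + 2 * 1) = 13
example = [("cat", "noun"), ("case", "gen"),
           ("stem", [("cat", "adj"), ("stem", [("cat", "v")]),
                     ("suffix", "mis")])]
assert constraint_vote(example) == 13
```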
We also employ a set of rules which express preferences among the parses of a single lexical form, independent of the context in which the form occurs. The weights for these rules are currently manually determined. These rules give negative votes to the parses which are not preferred, or high votes to certain parses which are always preferred. Our experience is that such preference rules depend on the kind of text one is disambiguating. For instance, if one is disambiguating a manual of some sort, imperative readings of verbs are certainly possible, whereas in normal plain text with no discourse, such readings are discouraged.

3.3 Voting and selecting parses
A rule R = (C1, C2, ..., Cn; V) will match a sequence of tokens w_i, w_{i+1}, ..., w_{i+n-1} within a sentence w_1 through w_s if some morphological parse of every token w_j, i ≤ j ≤ i+n-1, is subsumed by the corresponding constraint C_{j-i+1}. When all constraints match, the votes of all the matching parses are incremented by V. If a given constraint matches more than one parse of a token, then the votes of all such matching parses are incremented.

After all rules have been applied to all token positions in a sentence and votes are tallied, morphological parses are selected in the following manner. Let v_l and v_h be the votes of the lowest and highest scoring parses for a given token. All parses with votes equal to or higher than v_l + m * (v_h - v_l) are selected, with m (0 ≤ m ≤ 1) being a parameter; m = 1 selects the highest scoring parse(s).
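Matching, tallying and thresholding can be summarized in a short sketch. The `subsumes` helper below only checks flat feature constraints (nested stems are omitted for brevity), and parses are modelled as feature dictionaries; all of this is our own illustration of the scheme, not the paper's Prolog code.

```python
# Sketch of the voting and selection loop over one sentence.

def subsumes(constraint, parse):
    """Flat check only: every (feature, value) of the constraint must be
    present in the parse dict. Nested stem constraints are omitted."""
    return all(parse.get(f) == v for f, v in constraint)

def vote_and_select(sentence_parses, rules, m=1.0):
    """sentence_parses: per token, a list of candidate parses (dicts).
    rules: (constraint sequence, vote) pairs. Returns, per token, the
    parses whose tally is >= v_l + m * (v_h - v_l)."""
    votes = [[0] * len(ps) for ps in sentence_parses]
    for constraints, vote in rules:
        n = len(constraints)
        for i in range(len(sentence_parses) - n + 1):
            matches = [[j for j, p in enumerate(sentence_parses[i + k])
                        if subsumes(constraints[k], p)] for k in range(n)]
            if all(matches):                 # every constraint matched a parse
                for k, js in enumerate(matches):
                    for j in js:             # all matching parses get the vote
                        votes[i + k][j] += vote
    selected = []
    for ps, vs in zip(sentence_parses, votes):
        v_l, v_h = min(vs), max(vs)
        selected.append([p for p, v in zip(ps, vs)
                         if v >= v_l + m * (v_h - v_l)])
    return selected
```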
4 Results from Disambiguating Turkish Text
We have applied our approach to disambiguating Turkish text. Raw text is processed by a preprocessor which segments the text into sentences using various heuristics about punctuation, and then tokenizes it and runs it through a wide-coverage, high-performance morphological analyzer developed using two-level morphology tools by Xerox (Karttunen, 1993). The preprocessor module also performs a number of additional functions such as grouping of lexicalized and non-lexicalized collocations, compound verbs, etc. (Oflazer and Kuruöz, 1994; Oflazer and Tür, 1996). The preprocessor also uses a second morphological processor for dealing with unknown words, which recovers any derivational and inflectional information from a word even if the root word is not known. This unknown word processor has a (nominal) root lexicon which recognizes S+, where S is the Turkish surface alphabet (in the two-level morphology sense), but then tries to interpret an arbitrary postfix string of the unknown word as a sequence of Turkish suffixes, subject to all morphographemic constraints (Oflazer and Tür, 1996).

We have applied our approach to four texts labeled ARK, HIST, MAN, and EMB, with statistics given in Table 1. The tokens considered are those that are generated after morphological analysis, unknown word processing and any lexical coalescing is done. The words that are counted as unknown are those that could not even be processed by the unknown noun processor, as they violate Turkish morphographemic constraints.[6] Whenever an unknown word has more than one parse it is counted under the appropriate group. The fourth and fifth columns in this table give the average parses per token and the initial precision, assuming an initial recall of 100%.

[6] The reason for the (comparatively) high number of unknown words in MAN is that tokens found in such texts, like f10, denoting a function key on the computer, can not be parsed as a Turkish root word!

                                                 Distribution of Morphological Parses
Text  Sent.  Tokens  Parses/Token  Init. Prec.  0      1       2       3      4      >4
ARK   492    7928    1.823         0.55         0.15%  49.34%  30.93%  9.19%  8.46%  1.93%
HIST  270    5212    1.797         0.56         0.02%  50.63%  30.68%  8.62%  8.36%  1.69%
MAN   204    2756    1.840         0.54         0.65%  49.01%  31.70%  6.37%  8.91%  3.36%
EMB   198    5177    1.914         0.52         0.09%  43.94%  34.58%  9.60%  9.46%  2.33%

Table 1: Statistics on Texts

We have disambiguated these texts using a rule base of about 500 hand-crafted rules. Most of the rule crafting was done using the general linguistic constraints and constraints that we derived from the first text, ARK. In this sense, this text is our "training data", while the other three texts were not considered in rule crafting. Our results are summarized in Table 2. The last four columns in this table present results for different values of the parameter m mentioned above, m = 1 denoting the case when only the highest scoring parse(s) is (are) selected. The columns for m < 1 are presented in order to emphasize the drastic loss of precision in those cases. Even at m = 0.95 there is considerable loss of precision, and going up to m = 1 causes a dramatic increase in precision without a significant loss in recall. It can be seen that we can attain very good recall and quite acceptable precision with just voting constraint rules. Our experience is that we can in principle add highly specialized rules by covering a larger text base, to improve our recall and precision for the m = 1 case.

             Vote Range Selected (m)
TEXT         1.0     0.95    0.8     0.6
ARK   Rec.   98.05   98.47   98.69   98.77
      Prec.  94.13   87.65   84.41   82.43
      Amb.   1.042   1.123   1.169   1.200
HIST  Rec.   97.03   97.65   98.81   97.01
      Prec.  94.13   87.10   84.41   82.29
      Amb.   1.058   1.121   1.169   1.189
MAN   Rec.   97.03   97.92   97.81   98.77
      Prec.  91.05   83.51   79.85   77.34
      Amb.   1.068   1.172   1.237   1.277
EMB   Rec.   96.51   97.48   97.76   97.94
      Prec.  91.28   84.36   77.87   75.79
      Amb.   1.057   1.150   1.255   1.292

Table 2: Results with voting constraints

A post-mortem analysis has shown that the cases that have been missed are mostly due to morphosyntactic dependencies that span a context much wider than the 5 tokens that we currently employ.

4.1 Using root and contextual statistics
We have employed two additional sources of information: root word usage statistics, and contextual statistics. We have statistics, compiled from previously disambiguated text, on root frequencies. After the application of constraints as described above, for tokens which are still ambiguous with ambiguity resulting from different root words, we discard parses if the frequencies of the root words for those parses are considerably lower than the frequency of the root of the highest scoring parse. The results after applying this step on top of voting, with m = 1, are shown in the fourth column of Table 3 (labeled V+R).

On top of this, we use the following heuristic using context statistics to eliminate any further ambiguities. For every remaining ambiguous token with unambiguous immediate left and right contexts (i.e., the tokens in the immediate left and right are unambiguous), we perform the following, by ignoring the root/stem feature of the parses:

1. For every ambiguous parse in such an unambiguous context, we count how many times this parse occurs unambiguously in exactly the same unambiguous context in the rest of the text.
2. We then choose the parse whose count is substantially higher than the others'.

The results after applying this step on top of the previous two steps are shown in the last column of Table 3 (labeled V+R+C). One can see from the last three columns of this table the impact of each of the steps.

TEXT         V       V+R     V+R+C
ARK   Rec.   98.05   97.60   96.98
      Prec.  94.13   95.28   96.19
      Amb.   1.042   1.024   1.008
HIST  Rec.   97.03   96.52   95.62
      Prec.  94.13   92.59   94.33
      Amb.   1.058   1.042   1.013
MAN   Rec.   97.03   96.47   95.84
      Prec.  91.05   93.08   94.47
      Amb.   1.058   1.042   1.014
EMB   Rec.   96.51   96.47   95.37
      Prec.  91.28   93.08   94.45
      Amb.   1.057   1.036   1.009

Table 3: Results with voting constraints, root statistics, and context statistics

By ignoring root/stem features during this process, we essentially are considering just the top-level inflectional information of the parses. This is very similar to Brill's use of contexts to induce transformation rules for his tagger (Brill, 1992; Brill, 1995), but instead of generating transformation rules from a training text, we gather statistics and apply them to parses in the text being disambiguated.
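A sketch of this contextual heuristic follows. Parses are assumed to be hashable values with root/stem features already stripped, and the "substantially higher" test is realized with an illustrative margin factor, which is our assumption; the paper does not fix a threshold.

```python
# Sketch of the unambiguous-context heuristic (our illustration).

from collections import Counter

def disambiguate_by_context(tokens, counts, margin=2.0):
    """tokens: list of parse lists (singletons where already unambiguous).
    counts: Counter over (left, parse, right) triples collected from the
    unambiguous stretches of the rest of the text."""
    for i in range(1, len(tokens) - 1):
        left, mid, right = tokens[i - 1], tokens[i], tokens[i + 1]
        if len(mid) > 1 and len(left) == 1 and len(right) == 1:
            scored = sorted(mid, reverse=True,
                            key=lambda p: counts[(left[0], p, right[0])])
            best, second = scored[0], scored[1]
            b = counts[(left[0], best, right[0])]
            s = counts[(left[0], second, right[0])]
            if b > 0 and b >= margin * max(s, 1):
                tokens[i] = [best]   # otherwise leave the token ambiguous
    return tokens
```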
5 Efficient Implementation Techniques and Extensions
The current implementation of the voting approach is meant to be a proof-of-concept implementation and is rather inefficient. However, the use of regular relations and finite state transducers (Kaplan and Kay, 1994) provides a very efficient implementation method. For this, we view the parses of the tokens making up a sentence as making up an acyclic finite state recognizer, with the states marking word boundaries, the ambiguous interpretations of the tokens as the state transitions between states, and the rightmost node denoting the final state, as depicted in Figure 1 for a sentence with 5 tokens. In Figure 1, the transition labels are triples of the sort (w_i, p_j, 0) for the jth parse of token i, with the 0 indicating the initial vote of the parse. The rules imposing constraints can also be represented as transducers which increment the votes of the matching transition labels by an appropriate amount.[7] Such transducers ignore, and pass through unchanged, parses that they are not sensitive to.

Figure 1: Sentence as a finite state recognizer. [Diagram not recoverable from this extraction; it shows a row of states with parallel arcs between consecutive states, one arc per parse, labeled (w1,p1,0), (w1,p3,0), and so on.]

[7] Suggested by Lauri Karttunen (private communication).

When a finite state recognizer corresponding to the input sentence (which actually may be considered as an identity transducer) is composed with a constraint transducer, one gets a slightly modified version of the sentence transducer with possibly additional transitions and states, where the votes of some of the labels have been appropriately incremented. When the sentence transducer is composed with all the constraint transducers in sequence, all possible votes are cast and the final sentence transducer reflects all the votes. The parse corresponding to each token with the highest vote can then be selected. The key point here is that due to the nature of the composition operator, the constraint transducers can be composed off-line first, giving a single constraint transducer, and then this one is composed with every sentence transducer once (see Figure 2).

Figure 2: Sentence and Constraint Transducers. [Diagram not recoverable from this extraction; it shows the sentence transducer being composed with the single transducer obtained by composing all constraint transducers off-line, yielding a sentence transducer with incremented votes on its arcs.]

The idea of voting can further be extended to a path voting framework, where rules vote on paths containing sequences of matching parses, and the path from the start state to the final state with the highest votes received is then selected. This can be implemented again using finite state transducers as described above (except that the path vote is apportioned equally among the relevant parse votes), but instead of selecting the highest scoring parses, one selects the path from the start state to one of the final states where the sum of the parse votes is maximum. We have recently completed a prototype implementation of this approach (in C) for English (the Brown Corpus) and have obtained quite similar results (Tür, Oflazer, and Özkan, 1997).
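The selection criterion behind path voting can be conveyed without transducer machinery. The sketch below, our own stand-in for the composed-transducer implementation, restricts rules to spans of at most two tokens so that best-path selection reduces to a Viterbi pass over the sentence lattice; longer rule windows would enlarge the dynamic-programming state accordingly. The function names are assumptions for exposition.

```python
# Sketch: best path over a sentence lattice under path voting, with rule
# spans limited to two tokens (not the actual FST-based implementation).

def best_path(parses_per_token, unary_votes, binary_votes):
    """parses_per_token[i]: candidate parses of token i.
    unary_votes(i, p): votes cast on parse p alone at position i;
    binary_votes(i, p, q): votes cast on the pair (p at i, q at i+1)."""
    # best[p] = (score of best path ending in parse p, that path)
    best = {p: (unary_votes(0, p), [p]) for p in parses_per_token[0]}
    for i in range(1, len(parses_per_token)):
        new_best = {}
        for q in parses_per_token[i]:
            s, path = max(((s + binary_votes(i - 1, p, q), pth)
                           for p, (s, pth) in best.items()),
                          key=lambda t: t[0])
            new_best[q] = (s + unary_votes(i, q), path + [q])
        best = new_best
    return max(best.values(), key=lambda t: t[0])[1]
```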
We have applied this approach to Turkish, a language with complex agglutinative word forms exhibiting morphological ambiguity phenomena not usually found in languages like English and have ob- tained quite promising results. The convenience of adding new rules in without worrying about where exactly it goes in terms of rule ordering (some- thing that hampered our progress in our earlier work on disambiguating Turkish morphology (Oflazer and KuruSz, 1994; Oflazer and Tiir, 1996)), has also been a key positive point. Furthermore, it is also possible to use rules with negative votes to disallow impos- sible cases. This has been quite useful for our work on tagging English (Tfir, Oflazer, and 0z-kan, 1997) where such rules with negative weights were used to fine tune the behavior of the tagger in various prob- lematic cases. The proposed approach is also amenable to an efficient implementation by finite state transducers (Kaplan and Kay, 1994). By using finitestate trans- ducers, it is furthermore possible to use a bit more expressive rule formalism including for instance the Kleene * operator so that one can use a much smaller set of rules to cover the same set of local linguistic phenomena. Our current and future work in this framework involves the learning of constraints and their votes from corpora, and combining learned and hand- crafted rules. 7 Acknowledgments This research has been supported in part by a NATO Science for Stability Grant TU-LANGUAGE. We thank Lauri Karttunen of Rank Xerox Research Centre in Grenoble for providing the Xerox two-level morphology tools on which the Turkish morpholog- ical analyzer was built. References Brill, Eric. 1992. A simple-rule based part-of-speech tagger. In Proceedings of the Third Conference on Applied Natural Language Processing, Trento, Italy. Brill, Eric. 1994. Some advances in rule-based part of speech tagging. In Proceedings of the 227 (wl,pl, 0) (w2,pl, 0) (w3,pl, 0) (w4,pl, 0) (w5.pl, 0) (wl.p3,0) (w2,p5,0) (W3,p4.0) (w4,p3,0) (W5,p4.0) t Composition of the 0 sentence transducer with the constraint transducer I I I I t Isingle transducer Icomposed from all ]constraint ]transducers I I '1 I I I I Constraint Transducer 1 (+VI) 0 Constraint Transducer n (+Vn) Resulting sentence / transducer after composition (w2. pl. 8~.~ (w3 ,pl, 4 ! ~ (wl,p3,4) ~ (w2,p5,3) Figure 2: Sentence and Constraint Transducers 228 Twelfth National Conference on Artificial Intel- ligence (AAAI-94), Seattle, Washington. Brill, Eric. 1995. Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4):543-566, December. Church, Kenneth W. 1988. A stochastic parts pro- gram and a noun phrase parser for unrestricted text. In Proceedings of the Second Conference on Applied Natural Language Processing, Austin, Texas. Cutting, Doug, Julian Kupiec, Jan Pedersen, and Penelope Sibun. 1992. A practical part-of-speech tagger. In Proceedi~gs of the Third Conference on Applied Natural Language Processing, Trento, Italy. DeRose, Steven J. 1988. Grammatical category dis- ambiguation by statistical optimization. Compu- tational Linguistics, 14(1):31-39. Hankamer, Jorge. 1989. Morphological parsing and the lexicon. In W. Marslen-Wilson, editor, Lexical Representation and Process. MIT Press. Kaplan, Ronald M. and Martin Kay. 1994. Regular models of phonological rule systems. Computa- tional Linguistics, 20(3):331-378, September. Karlsson, Fred, Atro Voutilainen, Juha Heikkilii, and Arto Anttila. 1995. 
Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter.
Karttunen, Lauri. 1993. Finite-state lexicon compiler. XEROX, Palo Alto Research Center, Technical Report, April.
Oflazer, Kemal and İlker Kuruöz. 1994. Tagging and morphological disambiguation of Turkish text. In Proceedings of the 4th Applied Natural Language Processing Conference, pages 144-149. ACL, October.
Oflazer, Kemal and Gökhan Tür. 1996. Combining hand-crafted rules and unsupervised learning in constraint-based morphological disambiguation. In Eric Brill and Kenneth Church, editors, Proceedings of the ACL-SIGDAT Conference on Empirical Methods in Natural Language Processing.
Tür, Gökhan, Kemal Oflazer, and Nihat Özkan. 1997. Tagging English by path voting constraints. Technical Report BU-CEIS-9704, Bilkent University, Department of Computer Engineering and Information Science, Ankara, Turkey, March. Available as ftp://ftp.cs.bilkent.edu.tr/pub/tech-reports/1997/BU-CEIS-9704.ps.z.
Voutilainen, Atro. 1994. Three studies of grammar-based surface-syntactic parsing of unrestricted English text. Ph.D. thesis, Research Unit for Computational Linguistics, University of Helsinki.
Voutilainen, Atro. 1995a. Morphological disambiguation. In Fred Karlsson, Atro Voutilainen, Juha Heikkilä, and Arto Anttila, editors, Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, chapter 5.
Voutilainen, Atro. 1995b. A syntax-based part-of-speech analyzer. In Proceedings of the Seventh Conference of the European Chapter of the Association for Computational Linguistics, Dublin, Ireland.
Voutilainen, Atro, Juha Heikkilä, and Arto Anttila. 1992. Constraint Grammar of English. University of Helsinki.
Voutilainen, Atro and Pasi Tapanainen. 1993. Ambiguity resolution in a reductionistic parser. In Proceedings of EACL'93, Utrecht, Holland.
Three Generative, Lexicalised Models for Statistical Parsing

Michael Collins* Dept. of Computer and Information Science, University of Pennsylvania, Philadelphia, PA, 19104, U.S.A. mcollins@gradient.cis.upenn.edu

Abstract

In this paper we first propose a new statistical parsing model, which is a generative model of lexicalised context-free grammar. We then extend the model to include a probabilistic treatment of both subcategorisation and wh-movement. Results on Wall Street Journal text show that the parser performs at 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over (Collins 96).

1 Introduction

Generative models of syntax have been central in linguistics since they were introduced in (Chomsky 57). Each sentence-tree pair (S,T) in a language has an associated top-down derivation consisting of a sequence of rule applications of a grammar. These models can be extended to be statistical by defining probability distributions at points of non-determinism in the derivations, thereby assigning a probability P(S,T) to each (S,T) pair. Probabilistic context-free grammar (Booth and Thompson 73) was an early example of a statistical grammar. A PCFG can be lexicalised by associating a head-word with each non-terminal in a parse tree; thus far, (Magerman 95; Jelinek et al. 94) and (Collins 96), which both make heavy use of lexical information, have reported the best statistical parsing performance on Wall Street Journal text. Neither of these models is generative; instead they both estimate P(T | S) directly.

This paper proposes three new parsing models. Model 1 is essentially a generative version of the model described in (Collins 96). In Model 2, we extend the parser to make the complement/adjunct distinction by adding probabilities over subcategorisation frames for head-words. In Model 3 we give a probabilistic treatment of wh-movement, which is derived from the analysis given in Generalized Phrase Structure Grammar (Gazdar et al. 95). The work makes two advances over previous models: First, Model 1 performs significantly better than (Collins 96), and Models 2 and 3 give further improvements -- our final results are 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over (Collins 96). Second, the parsers in (Collins 96) and (Magerman 95; Jelinek et al. 94) produce trees without information about wh-movement or subcategorisation. Most NLP applications will need this information to extract predicate-argument structure from parse trees.

(*This research was supported by ARPA Grant N6600194-C6043.)

In the remainder of this paper we describe the 3 models in section 2, discuss practical issues in section 3, give results in section 4, and give conclusions in section 5.

2 The Three Parsing Models

2.1 Model 1

In general, a statistical parsing model defines the conditional probability, P(T | S), for each candidate parse tree T for a sentence S. The parser itself is an algorithm which searches for the tree, T_best, that maximises P(T | S). A generative model uses the observation that maximising P(T,S) is equivalent to maximising P(T | S)[1]:

    T_best = argmax_T P(T | S) = argmax_T P(T,S)/P(S) = argmax_T P(T,S)    (1)

P(T,S) is then estimated by attaching probabilities to a top-down derivation of the tree.
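To make the search in equation (1) concrete, the following is a minimal sketch (not from the paper) of how a generative model ranks candidate trees: each candidate derivation is scored by the product of its rule probabilities and the highest-scoring tree is returned. The Tree type, the rules() accessor and the rule_prob table are hypothetical names introduced for illustration.

```python
import math

def tree_score(tree, rule_prob):
    """Score a candidate tree by the product of its rule probabilities.

    tree.rules() is assumed to yield (LHS, RHS) pairs for every rule
    used in the top-down derivation; log-probabilities avoid underflow.
    """
    return sum(math.log(rule_prob[(lhs, rhs)]) for lhs, rhs in tree.rules())

def best_parse(candidate_trees, rule_prob):
    # argmax_T P(T, S): the denominator P(S) is constant across candidates,
    # so ranking by the joint probability also ranks by P(T | S).
    return max(candidate_trees, key=lambda t: tree_score(t, rule_prob))
```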
In a PCFG, for a tree derived by n applications of context-free re-write rules LHS_i -> RHS_i, 1 <= i <= n,

    P(T,S) = Π_{i=1..n} P(RHS_i | LHS_i)    (2)

The re-write rules are either internal to the tree, where LHS is a non-terminal and RHS is a string of one or more non-terminals; or lexical, where LHS is a part of speech tag and RHS is a word. (Footnote 1: P(S) is constant, hence maximising P(T,S)/P(S) is equivalent to maximising P(T,S).)

[Figure 1: A lexicalised parse tree for "Last week Marks bought Brooks", and a list of the rules it contains. For brevity we omit the POS tag associated with each word. The rules are: TOP -> S(bought); S(bought) -> NP(week) NP(Marks) VP(bought); NP(week) -> JJ(Last) NN(week); NP(Marks) -> NNP(Marks); VP(bought) -> VB(bought) NP(Brooks); NP(Brooks) -> NNP(Brooks).]

A PCFG can be lexicalised[2] by associating a word w and a part-of-speech (POS) tag t with each non-terminal X in the tree. Thus we write a non-terminal as X(x), where x = (w,t), and X is a constituent label. Each rule now has the form[3]:

    P(h) -> L_n(l_n)...L_1(l_1) H(h) R_1(r_1)...R_m(r_m)    (3)

H is the head-child of the phrase, which inherits the head-word h from its parent P. L_1...L_n and R_1...R_m are left and right modifiers of H. Either n or m may be zero, and n = m = 0 for unary rules. Figure 1 shows a tree which will be used as an example throughout this paper.

The addition of lexical heads leads to an enormous number of potential rules, making direct estimation of P(RHS | LHS) infeasible because of sparse data problems. We decompose the generation of the RHS of a rule such as (3), given the LHS, into three steps -- first generating the head, then making the independence assumptions that the left and right modifiers are generated by separate 0th-order markov processes[4]:

1. Generate the head constituent label of the phrase, with probability P_H(H | P, h).

2. Generate modifiers to the right of the head with probability Π_{i=1..m+1} P_R(R_i(r_i) | P, h, H). R_{m+1}(r_{m+1}) is defined as STOP -- the STOP symbol is added to the vocabulary of non-terminals, and the model stops generating right modifiers when it is generated.

3. Generate modifiers to the left of the head with probability Π_{i=1..n+1} P_L(L_i(l_i) | P, h, H), where L_{n+1}(l_{n+1}) = STOP.

(Footnote 2: We find lexical heads in Penn treebank data using rules which are similar to those used by (Magerman 95; Jelinek et al. 94). Footnote 3: With the exception of the top rule in the tree, which has the form TOP -> H(h). Footnote 4: An exception is the first rule in the tree, TOP -> H(h), which has probability P_TOP(H, h | TOP).)

For example, the probability of the rule S(bought) -> NP(week) NP(Marks) VP(bought) would be estimated as

    P_h(VP | S, bought) x P_l(NP(Marks) | S, VP, bought) x P_l(NP(week) | S, VP, bought) x P_l(STOP | S, VP, bought) x P_r(STOP | S, VP, bought)

We have made the 0th order markov assumptions

    P_l(L_i(l_i) | H, P, h, L_1(l_1)...L_{i-1}(l_{i-1})) = P_l(L_i(l_i) | H, P, h)    (4)
    P_r(R_i(r_i) | H, P, h, R_1(r_1)...R_{i-1}(r_{i-1})) = P_r(R_i(r_i) | H, P, h)    (5)

but in general the probabilities could be conditioned on any of the preceding modifiers. In fact, if the derivation order is fixed to be depth-first -- that is, each modifier recursively generates the sub-tree below it before the next modifier is generated -- then the model can also condition on any structure below the preceding modifiers.
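As a concrete illustration of this head-outward decomposition (a sketch under the paper's independence assumptions (4) and (5); p_head, p_left and p_right are hypothetical probability tables keyed on the conditioning context):

```python
import math

def rule_logprob(parent, head_word, head_label, left_mods, right_mods,
                 p_head, p_left, p_right):
    """Log-probability of one lexicalised rule, generated head-outward.

    left_mods / right_mods are sequences of (label, headword) pairs,
    ordered outward from the head; STOP is appended to each side.
    """
    ctx = (parent, head_label, head_word)
    lp = math.log(p_head[(head_label, parent, head_word)])
    for mod in list(left_mods) + ["STOP"]:
        lp += math.log(p_left[(mod,) + ctx])   # eq. (4): no dependence on earlier modifiers
    for mod in list(right_mods) + ["STOP"]:
        lp += math.log(p_right[(mod,) + ctx])  # eq. (5)
    return lp
```

Usage for the example rule S(bought) -> NP(week) NP(Marks) VP(bought) would pass left_mods = [("NP","Marks"), ("NP","week")] and right_mods = [], mirroring the product shown above.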
For the moment we exploit this by making the approximations

    P_l(L_i(l_i) | H, P, h, L_1(l_1)...L_{i-1}(l_{i-1})) = P_l(L_i(l_i) | H, P, h, distance_l(i-1))    (6)
    P_r(R_i(r_i) | H, P, h, R_1(r_1)...R_{i-1}(r_{i-1})) = P_r(R_i(r_i) | H, P, h, distance_r(i-1))    (7)

where distance_l and distance_r are functions of the surface string from the head word to the edge of the constituent (see figure 2). The distance measure is the same as in (Collins 96), a vector with the following 3 elements: (1) is the string of zero length? (allowing the model to learn a preference for right-branching structures); (2) does the string contain a verb? (allowing the model to learn a preference for modification of the most recent verb); (3) does the string contain 0, 1, 2 or > 2 commas? (where a comma is anything tagged as "," or ":").

[Figure 2: The next child, R_3(r_3), is generated with probability P(R_3(r_3) | P, H, h, distance_r(2)). The distance is a function of the surface string from the word after h to the last word of R_2, inclusive. In principle the model could condition on any structure dominated by H, R_1 or R_2.]

2.2 Model 2: The complement/adjunct distinction and subcategorisation

The tree in figure 1 is an example of the importance of the complement/adjunct distinction. It would be useful to identify "Marks" as a subject, and "Last week" as an adjunct (temporal modifier), but this distinction is not made in the tree, as both NPs are in the same position[5] (sisters to a VP under an S node). From here on we will identify complements by attaching a "-C" suffix to non-terminals -- figure 3 gives an example tree.

[Figure 3: A tree with the "-C" suffix used to identify complements: S(bought) -> NP(week) NP-C(Marks) VP(bought). "Marks" and "Brooks" are in subject and object position respectively. "Last week" is an adjunct.]

A post-processing stage could add this detail to the parser output, but we give two reasons for making the distinction while parsing: First, identifying complements is complex enough to warrant a probabilistic treatment. Lexical information is needed -- for example, knowledge that "week" is likely to be a temporal modifier. Knowledge about subcategorisation preferences -- for example that a verb takes exactly one subject -- is also required. These problems are not restricted to NPs; compare "The spokeswoman said (SBAR that the asbestos was dangerous)" vs. "Bonds beat short-term investments (SBAR because the market is down)", where an SBAR headed by "that" is a complement, but an SBAR headed by "because" is an adjunct. (Footnote 5: Except "Marks" is closer to the VP, but note that "Marks" is also the subject in "Marks last week bought Brooks".)

The second reason for making the complement/adjunct distinction while parsing is that it may help parsing accuracy. The assumption that complements are generated independently of each other often leads to incorrect parses -- see figure 4 for further explanation.

2.2.1 Identifying Complements and Adjuncts in the Penn Treebank

We add the "-C" suffix to all non-terminals in training data which satisfy the following conditions (a sketch of this test appears directly below the list):

1. The non-terminal must be: (1) an NP, SBAR, or S whose parent is an S; (2) an NP, SBAR, S, or VP whose parent is a VP; or (3) an S whose parent is an SBAR.

2. The non-terminal must not have one of the following semantic tags: ADV, VOC, BNF, DIR, EXT, LOC, MNR, TMP, CLR or PRP. See (Marcus et al. 94) for an explanation of what these tags signify.
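A minimal sketch of the test just listed (the function name and input representation are hypothetical; the paper's additional rule for the first child after a prepositional head is given in the text that follows):

```python
# Semantic tags that block the -C (complement) annotation.
NON_COMPLEMENT_TAGS = {"ADV", "VOC", "BNF", "DIR", "EXT",
                       "LOC", "MNR", "TMP", "CLR", "PRP"}

def is_complement(label, parent_label, semantic_tags):
    """Condition 1: allowed (child, parent) category pairs."""
    positional = ((label in {"NP", "SBAR", "S"} and parent_label == "S") or
                  (label in {"NP", "SBAR", "S", "VP"} and parent_label == "VP") or
                  (label == "S" and parent_label == "SBAR"))
    # Condition 2: none of the adjunct-like semantic tags may be present.
    return positional and not (set(semantic_tags) & NON_COMPLEMENT_TAGS)
```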
For example, the NP "Last week" in figure 1 would have the TMP (temporal) tag; and the SBAR in "(SBAR because the market is down)" would have the ADV (adverbial) tag. In addition, the first child following the head of a prepositional phrase is marked as a complement.

[Figure 4: Two examples where the assumption that modifiers are generated independently of each other leads to errors. In (1) the probability of generating both "Dreyfus" and "fund" as subjects, P(NP-C(Dreyfus) | S,VP,was) * P(NP-C(fund) | S,VP,was), is unreasonably high: the incorrect parse gives "was ADJP(low)" two subject NP-Cs ("Dreyfus", "the best fund"), where the correct parse has the single subject NP-C "Dreyfus the best fund". (2) is similar, with "The issue was a bill funding Congress": P(NP-C(bill), VP-C(funding) | VP,VB,was) = P(NP-C(bill) | VP,VB,was) * P(VP-C(funding) | VP,VB,was) is a bad independence assumption.]

2.2.2 Probabilities over Subcategorisation Frames

The model could be retrained on training data with the enhanced set of non-terminals, and it might learn the lexical properties which distinguish complements and adjuncts ("Marks" vs "week", or "that" vs. "because"). However, it would still suffer from the bad independence assumptions illustrated in figure 4. To solve these kinds of problems, the generative process is extended to include a probabilistic choice of left and right subcategorisation frames:

1. Choose a head H with probability P_H(H | P, h).

2. Choose left and right subcat frames, LC and RC, with probabilities P_lc(LC | P, H, h) and P_rc(RC | P, H, h). Each subcat frame is a multiset[6] specifying the complements which the head requires in its left or right modifiers.

3. Generate the left and right modifiers with probabilities P_l(L_i, l_i | H, P, h, distance_l(i-1), LC) and P_r(R_i, r_i | H, P, h, distance_r(i-1), RC) respectively. Thus the subcat requirements are added to the conditioning context. As complements are generated they are removed from the appropriate subcat multiset. Most importantly, the probability of generating the STOP symbol will be 0 when the subcat frame is non-empty, and the probability of generating a complement will be 0 when it is not in the subcat frame; thus all and only the required complements will be generated.

(Footnote 6: A multiset, or bag, is a set which may contain duplicate non-terminal labels.)

The probability of the phrase S(bought) -> NP(week) NP-C(Marks) VP(bought) is now:

    P_h(VP | S, bought) x P_lc({NP-C} | S, VP, bought) x P_rc({} | S, VP, bought) x P_l(NP-C(Marks) | S, VP, bought, {NP-C}) x P_l(NP(week) | S, VP, bought, {}) x P_l(STOP | S, VP, bought, {}) x P_r(STOP | S, VP, bought, {})

Here the head initially decides to take a single NP-C (subject) to its left, and no complements to its right. NP-C(Marks) is immediately generated as the required subject, and NP-C is removed from LC, leaving it empty when the next modifier, NP(week), is generated. The incorrect structures in figure 4 should now have low probability because P_lc({NP-C,NP-C} | S,VP,bought) and P_rc({NP-C,VP-C} | VP,VB,was) are small.

2.3 Model 3: Traces and Wh-Movement

Another obstacle to extracting predicate-argument structure from parse trees is wh-movement. This section describes a probabilistic treatment of extraction from relative clauses.
Noun phrases are most often extracted from subject position, object position, or from within PPs:

Example 1: The store (SBAR which TRACE bought Brooks Brothers)
Example 2: The store (SBAR which Marks bought TRACE)
Example 3: The store (SBAR which Marks bought Brooks Brothers from TRACE)

It might be possible to write rule-based patterns which identify traces in a parse tree. However, we argue again that this task is best integrated into the parser: the task is complex enough to warrant a probabilistic treatment, and integration may help parsing accuracy. A couple of complexities are that modification by an SBAR does not always involve extraction (e.g., "the fact (SBAR that besoboru is played with a ball and a bat)"), and it is not uncommon for extraction to occur through several constituents (e.g., "The changes (SBAR that he said the government was prepared to make TRACE)").

[Figure 5: A +gap feature can be added to non-terminals to describe NP extraction, here for "The store (SBAR that Marks bought TRACE last week)". The rules involved are: (1) NP -> NP SBAR(+gap); (2) SBAR(+gap) -> WHNP S-C(+gap); (3) S(+gap) -> NP-C VP(+gap); (4) VP(+gap) -> VB TRACE NP. The top-level NP initially generates an SBAR modifier, but specifies that it must contain an NP trace by adding the +gap feature. The gap is then passed down through the tree, until it is discharged as a TRACE complement to the right of bought.]

The second reason for an integrated treatment of traces is to improve the parameterisation of the model. In particular, the subcategorisation probabilities are smeared by extraction. In examples 1, 2 and 3 above 'bought' is a transitive verb, but without knowledge of traces example 2 in training data will contribute to the probability of 'bought' being an intransitive verb.

Formalisms similar to GPSG (Gazdar et al. 95) handle NP extraction by adding a gap feature to each non-terminal in the tree, and propagating gaps through the tree until they are finally discharged as a trace complement (see figure 5). In extraction cases the Penn treebank annotation co-indexes a TRACE with the WHNP head of the SBAR, so it is straightforward to add this information to trees in training data.

Given that the LHS of the rule has a gap, there are 3 ways that the gap can be passed down to the RHS:

Head: The gap is passed to the head of the phrase, as in rule (3) in figure 5.

Left, Right: The gap is passed on recursively to one of the left or right modifiers of the head, or is discharged as a trace argument to the left/right of the head. In rule (2) it is passed on to a right modifier, the S complement. In rule (4) a trace is generated to the right of the head VB.

We specify a parameter P_G(G | P, h, H) where G is either Head, Left or Right. The generative process is extended to choose between these cases after generating the head of the phrase. The rest of the phrase is then generated in different ways depending on how the gap is propagated: In the Head case the left and right modifiers are generated as normal. In the Left, Right cases a gap requirement is added to either the left or right SUBCAT variable. This requirement is fulfilled (and removed from the subcat list) when a trace or a modifier non-terminal which has the +gap feature is generated.
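The subcat-multiset bookkeeping described in sections 2.2.2 and 2.3 can be sketched as follows (hypothetical names; Counter stands in for the multisets, and "+gap" is treated as one more requirement that a TRACE or a +gap non-terminal can discharge). The worked examples that follow trace exactly this bookkeeping.

```python
from collections import Counter

def generate_side(draw_modifier, subcat, has_gap):
    """Generate one side's modifiers until STOP, honouring requirements.

    draw_modifier(need) is a hypothetical sampler returning
    (label, is_trace, child_has_gap); per the text, STOP must have
    probability 0 while any requirement is outstanding.
    """
    need = Counter(subcat)
    if has_gap:
        need["+gap"] += 1
    generated = []
    while True:
        label, is_trace, child_has_gap = draw_modifier(need)
        if label == "STOP":
            assert sum(need.values()) == 0  # STOP impossible with pending requirements
            return generated
        if is_trace:
            need["NP-C"] -= 1   # a TRACE satisfies both the NP-C requirement ...
            need["+gap"] -= 1   # ... and the +gap requirement
        else:
            if need[label] > 0:
                need[label] -= 1        # a required complement is discharged
            if child_has_gap:
                need["+gap"] -= 1       # a +gap child carries the gap downward
        generated.append(label)
```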
For example, Rule (2), SBAR(that)(+gap) -> WHNP(that) S-C(bought)(+gap), has probability

    P_h(WHNP | SBAR, that) x P_G(Right | SBAR, WHNP, that) x P_LC({} | SBAR, WHNP, that) x P_RC({S-C} | SBAR, WHNP, that) x P_R(S-C(bought)(+gap) | SBAR, WHNP, that, {S-C, +gap}) x P_R(STOP | SBAR, WHNP, that, {}) x P_L(STOP | SBAR, WHNP, that, {})

Rule (4), VP(bought)(+gap) -> VB(bought) TRACE NP(week), has probability

    P_h(VB | VP, bought) x P_G(Right | VP, bought, VB) x P_LC({} | VP, bought, VB) x P_RC({NP-C} | VP, bought, VB) x P_R(TRACE | VP, bought, VB, {NP-C, +gap}) x P_R(NP(week) | VP, bought, VB, {}) x P_L(STOP | VP, bought, VB, {}) x P_R(STOP | VP, bought, VB, {})

In rule (2) Right is chosen, so the +gap requirement is added to RC. Generation of S-C(bought)(+gap) fulfills both the S-C and +gap requirements in RC. In rule (4) Right is chosen again. Note that generation of the trace satisfies both the NP-C and +gap subcat requirements.

[Figure 6: The life of a constituent in the chart. (+) means a constituent is complete (i.e. it includes the stop probabilities), (-) means a constituent is incomplete. (a) a new constituent is started by projecting a complete rule upwards; (b) the constituent then takes left and right modifiers (or none if it is unary); (c) finally, STOP probabilities are added to complete the constituent.]

3 Practical Issues

3.1 Smoothing and Unknown Words

Table 1 shows the various levels of back-off for each type of parameter in the model.

Table 1: The conditioning variables for each level of back-off. Δ is the distance measure.

    Back-off | P_H(H | ...) | P_G(G | ...), P_LC(LC | ...), | P_L1(L_i(lt_i) | ...),  | P_L2(lw_i | ...),
    level    |              | P_RC(RC | ...)                | P_R1(R_i(rt_i) | ...)   | P_R2(rw_i | ...)
    1        | P, w, t      | P, H, w, t                    | P, H, w, t, Δ, LC       | L_i, lt_i, P, H, w, t, Δ, LC
    2        | P, t         | P, H, t                       | P, H, t, Δ, LC          | L_i, lt_i, P, H, t, Δ, LC
    3        | P            | P, H                          | P, H, Δ, LC             | L_i, lt_i
    4        | --           | --                            | --                      | lt_i

For example, P_H estimation interpolates e1 = P_H(H | P, w, t), e2 = P_H(H | P, t), and e3 = P_H(H | P).

Note that we decompose P_L(L_i(lw_i, lt_i) | P, H, w, t, Δ, LC) (where lw_i and lt_i are the word and POS tag generated with non-terminal L_i, and Δ is the distance measure) into the product P_L1(L_i(lt_i) | P, H, w, t, Δ, LC) x P_L2(lw_i | L_i, lt_i, P, H, w, t, Δ, LC), and then smooth these two probabilities separately (Jason Eisner, p.c.). In each case[7] the final estimate is

    e = λ1 e1 + (1 - λ1)(λ2 e2 + (1 - λ2) e3)

where e1, e2 and e3 are maximum likelihood estimates with the context at levels 1, 2 and 3 in the table, and λ1, λ2 and λ3 are smoothing parameters where 0 <= λ_i <= 1. (Footnote 7: Except cases L2 and R2, which have 4 levels, so that e = λ1 e1 + (1 - λ1)(λ2 e2 + (1 - λ2)(λ3 e3 + (1 - λ3) e4)).) All words occurring less than 5 times in training data, and words in test data which have never been seen in training, are replaced with the "UNKNOWN" token. This allows the model to robustly handle the statistics for rare or new words.

3.2 Part of Speech Tagging and Parsing

Part of speech tags are generated along with the words in this model. When parsing, the POS tags allowed for each word are limited to those which have been seen in training data for that word. For unknown words, the output from the tagger described in (Ratnaparkhi 96) is used as the single possible tag for that word.
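The interpolation just described can be written directly as code; this is a minimal sketch (the lambda weighting below is a simple placeholder, not the paper's actual smoothing weights, which the text does not spell out):

```python
def backoff_estimate(counts, events, contexts):
    """Deleted-interpolation estimate e = l1*e1 + (1-l1)(l2*e2 + (1-l2)*e3).

    events[i] / contexts[i] are (outcome+context, context) count keys from
    most to least specific; lam = count/(count+1) is a stand-in weight.
    """
    e, remaining = 0.0, 1.0
    for i, (ev, ctx) in enumerate(zip(events, contexts)):
        c_ctx = counts.get(ctx, 0)
        ml = counts.get(ev, 0) / c_ctx if c_ctx else 0.0  # maximum likelihood e_i
        lam = c_ctx / (c_ctx + 1.0)                       # placeholder smoothing weight
        if i == len(events) - 1:
            lam = 1.0                                     # back off no further
        e += remaining * lam * ml
        remaining *= (1.0 - lam)
    return e
```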
A CKY style dynamic programming chart parser is used to find the maximum probability tree for each sentence (see figure 6).

4 Results

The parser was trained on sections 02 - 21 of the Wall Street Journal portion of the Penn Treebank (Marcus et al. 93) (approximately 40,000 sentences), and tested on section 23 (2,416 sentences). We use the PARSEVAL measures (Black et al. 91) to compare performance:

Labeled Precision = number of correct constituents in proposed parse / number of constituents in proposed parse

Labeled Recall = number of correct constituents in proposed parse / number of constituents in treebank parse

Crossing Brackets = number of constituents which violate constituent boundaries with a constituent in the treebank parse.

For a constituent to be 'correct' it must span the same set of words (ignoring punctuation, i.e. all tokens tagged as commas, colons or quotes) and have the same label[8] as a constituent in the treebank parse.

Table 2: Results on Section 23 of the WSJ Treebank. LR/LP = labeled recall/precision. CBs is the average number of crossing brackets per sentence. 0 CBs, <= 2 CBs are the percentage of sentences with 0 or <= 2 crossing brackets respectively. The two column groups correspond to the two sentence-length conditions reported, <= 40 and <= 100 words.

    MODEL         | LR    LP    CBs  0 CBs  <=2 CBs | LR    LP    CBs  0 CBs  <=2 CBs
    (Magerman 95) | 84.6% 84.9% 1.26 56.6%  81.4%   | 84.0% 84.3% 1.46 54.0%  78.8%
    (Collins 96)  | 85.8% 86.3% 1.14 59.9%  83.6%   | 85.3% 85.7% 1.32 57.2%  80.8%
    Model 1       | 87.4% 88.1% 0.96 65.7%  86.3%   | 86.8% 87.6% 1.11 63.1%  84.1%
    Model 2       | 88.1% 88.6% 0.91 66.5%  86.9%   | 87.5% 88.1% 1.07 63.9%  84.6%
    Model 3       | 88.1% 88.6% 0.91 66.4%  86.9%   | 87.5% 88.1% 1.07 63.9%  84.6%

Table 2 shows the results for Models 1, 2 and 3. The precision/recall of the traces found by Model 3 was 93.3%/90.1% (out of 436 cases in section 23 of the treebank), where three criteria must be met for a trace to be "correct": (1) it must be an argument to the correct head-word; (2) it must be in the correct position in relation to that head word (preceding or following); (3) it must be dominated by the correct non-terminal label. For example, in figure 5 the trace is an argument to bought, which it follows, and it is dominated by a VP. Of the 436 cases, 342 were string-vacuous extraction from subject position, recovered with 97.1%/98.2% precision/recall; and 94 were longer distance cases, recovered with 76%/60.6% precision/recall[9].

(Footnote 8: (Magerman 95) collapses ADVP and PRT to the same label; for comparison we also removed this distinction when calculating scores. Footnote 9: We exclude infinitival relative clauses from these figures, for example "I called a plumber TRACE to fix the sink" where 'plumber' is co-indexed with the trace subject of the infinitival. The algorithm scored 41%/18% precision/recall on the 60 cases in section 23 -- but infinitival relatives are extremely difficult even for human annotators to distinguish from purpose clauses (in this case, the infinitival could be a purpose clause modifying 'called') (Ann Taylor, p.c.).)

4.1 Comparison to previous work

Model 1 is similar in structure to (Collins 96) -- the major differences being that the "score" for each bigram dependency is P_l(L_i, l_i | H, P, h, distance_l) rather than P_l(L_i, P, H | l_i, h, distance_l), and that there are the additional probabilities of generating the head and the STOP symbols for each constituent. However, Model 1 has some advantages which may account for the improved performance.
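For reference, the PARSEVAL precision/recall measures defined above reduce to a few lines of code; this sketch assumes a hypothetical representation of each constituent as a (label, start, end) triple, with punctuation already ignored:

```python
def parseval(proposed, gold):
    """Labeled precision/recall over (label, start, end) constituents."""
    proposed, gold = list(proposed), list(gold)
    correct, unmatched = 0, gold.copy()
    for c in proposed:            # count matches, consuming gold items
        if c in unmatched:
            correct += 1
            unmatched.remove(c)
    precision = correct / len(proposed) if proposed else 0.0
    recall = correct / len(gold) if gold else 0.0
    return precision, recall
```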
The model in (Collins 96) is deficient, that is, for most sentences S, Σ_T P(T | S) < 1, because probability mass is lost to dependency structures which violate the hard constraint that no links may cross. For reasons we do not have space to describe here, Model 1 has advantages in its treatment of unary rules and the distance measure. The generative model can condition on any structure that has been previously generated -- we exploit this in models 2 and 3 -- whereas (Collins 96) is restricted to conditioning on features of the surface string alone.

(Charniak 95) also uses a lexicalised generative model. In our notation, he decomposes P(RHS_i | LHS_i) as P(R_n...R_1 H L_1...L_m | P, h) x Π_{i=1..n} P(r_i | P, R_i, h) x Π_{i=1..m} P(l_i | P, L_i, h). The Penn treebank annotation style leads to a very large number of context-free rules, so that directly estimating P(R_n...R_1 H L_1...L_m | P, h) may lead to sparse data problems, or problems with coverage (a rule which has never been seen in training may be required for a test data sentence). The complement/adjunct distinction and traces increase the number of rules, compounding this problem.

(Eisner 96) proposes 3 dependency models, and gives results that show that a generative model similar to Model 1 performs best of the three. However, a pure dependency model omits non-terminal information, which is important. For example, "hope" is likely to generate a VP(TO) modifier (e.g., I hope [VP to sleep]) whereas "require" is likely to generate an S(TO) modifier (e.g., I require [S Jim to sleep]), but omitting non-terminals conflates these two cases, giving high probability to incorrect structures such as "I hope [Jim to sleep]" or "I require [to sleep]". (Alshawi 96) extends a generative dependency model to include an additional state variable which is equivalent to having non-terminals -- his suggestions may be close to our models 1 and 2, but he does not fully specify the details of his model, and doesn't give results for parsing accuracy. (Miller et al. 96) describe a model where the RHS of a rule is generated by a Markov process, although the process is not head-centered. They increase the set of non-terminals by adding semantic labels rather than by adding lexical head-words.

(Magerman 95; Jelinek et al. 94) describe a history-based approach which uses decision trees to estimate P(T | S). Our models use much less sophisticated n-gram estimation methods, and might well benefit from methods such as decision-tree estimation which could condition on richer history than just surface distance.

There has recently been interest in using dependency-based parsing models in speech recognition, for example (Stolcke 96). It is interesting to note that Models 1, 2 or 3 could be used as language models. The probability for any sentence can be estimated as P(S) = Σ_T P(T,S), or (making a Viterbi approximation for efficiency reasons) as P(S) ≈ P(T_best, S). We intend to perform experiments to compare the perplexity of the various models, and a structurally similar 'pure' PCFG[10]. (Footnote 10: Thanks to one of the anonymous reviewers for suggesting these experiments.)

5 Conclusions

This paper has proposed a generative, lexicalised, probabilistic parsing model. We have shown that linguistically fundamental ideas, namely subcategorisation and wh-movement, can be given a statistical interpretation. This improves parsing performance, and, more importantly, adds useful information to the parser's output.
6 Acknowledgements

I would like to thank Mitch Marcus, Jason Eisner, Dan Melamed and Adwait Ratnaparkhi for many useful discussions, and comments on earlier versions of this paper. This work has also benefited greatly from suggestions and advice from Scott Miller.

References

H. Alshawi. 1996. Head Automata and Bilingual Tiling: Translation with Minimal Representations. Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 167-176.
E. Black et al. 1991. A Procedure for Quantitatively Comparing the Syntactic Coverage of English Grammars. Proceedings of the February 1991 DARPA Speech and Natural Language Workshop.
T. L. Booth and R. A. Thompson. 1973. Applying Probability Measures to Abstract Languages. IEEE Transactions on Computers, C-22(5), pages 442-450.
E. Charniak. 1995. Parsing with Context-Free Grammars and Word Statistics. Technical Report CS-95-28, Dept. of Computer Science, Brown University.
N. Chomsky. 1957. Syntactic Structures, Mouton, The Hague.
M. J. Collins. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 184-191.
J. Eisner. 1996. Three New Probabilistic Models for Dependency Parsing: An Exploration. Proceedings of COLING-96, pages 340-345.
G. Gazdar, E.H. Klein, G.K. Pullum, I.A. Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press.
F. Jelinek, J. Lafferty, D. Magerman, R. Mercer, A. Ratnaparkhi, S. Roukos. 1994. Decision Tree Parsing using a Hidden Derivation Model. Proceedings of the 1994 Human Language Technology Workshop, pages 272-277.
D. Magerman. 1995. Statistical Decision-Tree Models for Parsing. Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 276-283.
M. Marcus, B. Santorini and M. Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
M. Marcus, G. Kim, M. A. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, B. Schasberger. 1994. The Penn Treebank: Annotating Predicate Argument Structure. Proceedings of the 1994 Human Language Technology Workshop, pages 110-115.
S. Miller, D. Stallard and R. Schwartz. 1996. A Fully Statistical Approach to Natural Language Interfaces. Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 55-61.
A. Ratnaparkhi. 1996. A Maximum Entropy Model for Part-Of-Speech Tagging. Conference on Empirical Methods in Natural Language Processing.
A. Stolcke. 1996. Linguistic Dependency Modeling. Proceedings of ICSLP 96, Fourth International Conference on Spoken Language Processing.
 | 1997 | 3 |
Mistake-Driven Mixture of Hierarchical Tag Context Trees

Masahiko Haruno, NTT Communication Science Laboratories, 1-1 Hikari-No-Oka Yokosuka-Shi Kanagawa 239, Japan, haruno@cslab.kecl.ntt.co.jp
Yuji Matsumoto, NAIST, 8916-5 Takayama-cho Ikoma-Shi Nara 630-01, Japan, matsu@is.aist-nara.ac.jp

Abstract

This paper proposes a mistake-driven mixture method for learning a tag model. The method iteratively performs two procedures: 1. constructing a tag model based on the current data distribution and 2. updating the distribution by focusing on data that are not well predicted by the constructed model. The final tag model is constructed by mixing all the models according to their performance. To well reflect the data distribution, we represent each tag model as a hierarchical tag (i.e., NTT[1] < proper noun < noun) context tree. By using the hierarchical tag context tree, the constituents of sequential tag models gradually change from broad coverage tags (e.g., noun) to specific exceptional words that cannot be captured by general tags. In other words, the method incorporates not only frequent connections but also infrequent ones that are often considered to be collocational. We evaluate several tag models by implementing Japanese part-of-speech taggers that share all other conditions (i.e., dictionary and word model) other than their tag models. The experimental results show the proposed method significantly outperforms both hand-crafted and conventional statistical methods.

(Footnote 1: NTT is an abbreviation of Nippon Telegraph and Telephone Corporation.)

1 Introduction

The last few years have seen the great success of stochastic part-of-speech (POS) taggers (Church, 1988; Kupiec, 1992; Charniak et al., 1993; Brill, 1992; Nagata, 1994). The stochastic approach generally attains 94 to 96% accuracy and replaces the labor-intensive compilation of linguistic rules by using an automated learning algorithm. However, practical systems require more accuracy because POS tagging is an inevitable pre-processing step for all practical systems.

To derive a new stochastic tagger, we have two options since stochastic taggers generally comprise two components: word model and tag model. The word model is a set of probabilities that a word occurs with a tag (part-of-speech) when given the preceding words and their tags in a sentence. On the contrary, the tag model is a set of probabilities that a tag appears after the preceding words and their tags.

The first option is to construct more sophisticated word models. (Charniak et al., 1993) reports that their model considers the roots and suffixes of words to greatly improve tagging accuracy for English corpora. However, the word model approach has the following shortcomings:

• For agglutinative languages such as Japanese and Chinese, the simple Bayes transfer rule is inapplicable because the word length of a sentence is not fixed in all possible segmentations[2]. We can only use simpler word models in these languages.

• Sophisticated word models largely depend on the target language. It is time-consuming to compile fine-grained word models for each language.

The second option is to devise a new tag model. (Schütze and Singer, 1994) have introduced a variable-memory-length tag model. Unlike conventional bi-gram and tri-gram models, the method selects the optimal length by using the context tree (Rissanen, 1983) which was originally introduced for use in data compression (Cover and Thomas, 1991).
Although the variable-memory-length approach remarkably reduces the number of parameters, tagging accuracy is only as good as conventional methods. Why didn't the method have higher accuracy? The crucial problem for current tag models is the set of collocational sequences of words that cannot be captured by just their tags. Because the maximal likelihood estimator (MLE) emphasizes the most frequent connections, an exceptional connection is placed in the same class as a frequent connection.

(Footnote 2: In P(w_i | t_i) = P(t_i | w_i) P(w_i) / P(t_i), P(w_i) cannot be considered to be identical for all segmentations.)

To tackle this problem, we introduce a new tag model based on the mistake-driven mixture of hierarchical tag context trees. Compared to Schütze and Singer's context tree (Schütze and Singer, 1994), the hierarchical tag context tree is extended in that the context is represented by a hierarchical tag set (i.e., NTT < proper noun < noun). This is extremely useful in capturing exceptional connections that can be detected only at the word level. To make the best use of the hierarchical context tree, the mistake-driven mixture method imitates the process in which linguists incorporate exceptional connections into hand-crafted rules: They first construct coarse rules which seem to cover a broad range of data. They then try to analyze data by using the rules and extract exceptions that the rules cannot handle. Next they generalize the exceptions and refine the previous rules. The following two steps abstract the human algorithm for incorporating exceptional connections.

1. construct temporary rules which seem to well generalize given data.

2. try to analyze data by using the constructed rules and extract the exceptions that cannot be correctly handled, then return to the first step and focus on the exceptions.

To put the above idea into our learning algorithm, the mistake-driven mixture method attaches a weight vector to each example and iteratively performs the following two procedures in the training phase (a minimal sketch of this loop follows this introduction):

1. constructing a context tree based on the current data distribution (weight vector)

2. updating the distribution (weight vector) by focusing on data not well predicted by the constructed tree. More precisely, the algorithm reduces the weight of examples that are correctly handled.

For the prediction phase, it then outputs a final tag model by mixing all the constructed models according to their performance. By using the hierarchical tag context tree, the constituents of a series of tag models gradually change from broad coverage tags (e.g., noun) to specific exceptional words that cannot be captured by general tags. In other words, the method incorporates not only frequent connections but also infrequent ones that are often considered to be exceptional.

The construction of the paper is as follows. Section 2 describes the stochastic POS tagging scheme and the hierarchical tag setting. Section 3 presents a new probability estimator that uses a hierarchical tag context tree and Section 4 explains the mistake-driven mixture method. Section 5 reports a preliminary evaluation using Japanese newspaper articles. We tested several tag models by keeping all other conditions (i.e., dictionary and word model) identical. The experimental results show that the proposed method significantly outperforms both hand-crafted and conventional statistical methods. Section 6 concerns related work and Section 7 concludes the paper.
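As forecast above, here is a minimal sketch of the training loop (not the paper's exact pseudocode, which appears later as Tables 1 and 2; build_tree, tag_with, gold_tags and the error-rate bookkeeping are schematic names):

```python
import math

def mistake_driven_mixture(examples, gold_tags, build_tree, tag_with, T=5):
    """Train T trees, down-weighting the examples each tree already handles."""
    weights = [1.0] * len(examples)
    models = []
    for _ in range(T):
        tree = build_tree(examples, weights)                 # step 1: fit current weights
        wrong = {i for i, ex in enumerate(examples)
                 if tag_with(tree, ex) != gold_tags[i]}
        eps = sum(weights[i] for i in wrong) / sum(weights)  # weighted error rate e_t
        beta = eps / (1.0 - eps)                             # assumes 0 < eps < 1
        for i in range(len(examples)):
            if i not in wrong:
                weights[i] *= beta                           # step 2: shrink correct examples
        models.append((tree, math.log(1.0 / beta)))          # mixing weight log(1/beta)
    return models                                            # final model: weighted mixture
```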
2 Preliminaries

2.1 Basic Equation

In this section, we will briefly review the basic equations for part-of-speech tagging and introduce the hierarchical-tag setting. The tagging problem is formally defined as finding a sequence of tags t_{1,n} that maximizes the probability of input string L:

    argmax_{t_{1,n}} P(w_{1,n}, t_{1,n} | L) = argmax_{t_{1,n}} P(w_{1,n}, t_{1,n}, L) / P(L)
    <=> argmax_{t_{1,n}, w_{1,n} in L} P(t_{1,n}, w_{1,n})

We break out P(t_{1,n}, w_{1,n}) as a sequence of the products of tag probability and word probability:

    P(t_{1,n}, w_{1,n}) = Π_{i=1}^{n} P(w_i | t_{1,i-1}, w_{1,i-1}) P(t_i | t_{1,i-1}, w_{1,i})

By approximating word probability as constrained only by its tag, we obtain equation (1):

    argmax_{t_{1,n}, w_{1,n} in L} Π_{i=1}^{n} P(t_i | t_{1,i-1}, w_{1,i}) P(w_i | t_i)    (1)

Equation (1) yields various types of stochastic taggers. For example, bi-gram and tri-gram models approximate their tag probability as P(t_i | t_{i-1}) and P(t_i | t_{i-1}, t_{i-2}), respectively. In the rest of the paper, we assume all tagging methods share the word model P(w_i | t_i) and differ only in the tag model P(t_i | t_{1,i-1}, w_{1,i}).

2.2 Hierarchical Tag Set

To construct a tag model that captures exceptional connections, we have to consider word-level context as well as tag-level. In a more general form, we introduce a tag set that has a hierarchical structure. Our tag set has a three-level structure as shown in Figure 1. The topmost and the second level of the hierarchy are part-of-speech level and part-of-speech subdivision level, respectively.

[Figure 1: Hierarchical Tag Set. From the root, the part-of-speech level (noun, ..., adverb) branches into the subdivision level (proper, numeral, ..., declarative, degree), which branches into the word level (NTT, AT&T, 1, 2, ...).]

Although stochastic taggers usually make use of subdivision level, part-of-speech level is remarkably robust against data sparseness. The bottom level is word level and is indispensable in coping with exceptional and collocational sequences of words. Our objective is to construct a tag model that precisely evaluates P(t_i | t_{1,i-1}, w_{1,i}) (in equation (1)) by using the three-level tag set. To construct this model, we have to answer the following questions.

1. Which level is appropriate for t_i?
2. Which length is to be considered for t_{1,i-1} and w_{1,i}?
3. Which level is appropriate for t_{1,i-1} and w_{1,i}?

To resolve the first question, we fix t_i at subdivision level as is done in other tag models. The second and third questions are resolved by introducing hierarchical tag context trees and the mistake-driven mixture method that are respectively described in Sections 3 and 4.

Before moving to the next section, let us define the basic tag set. If all words are considered context candidates, the search space will be enormous. Thus, it is reasonable for the tagger to constrain the candidates to frequent open class words and closed class words. The basic tag set is the set of the most detailed context elements, comprising the words selected above and the part-of-speech subdivision level.

3 Hierarchical Tag Context Tree

A hierarchical tag context tree is constructed by a two-step methodology. The first step produces a context tree by using the basic tag set. The second step then produces the hierarchical tag context tree. It generalizes the basic tag context tree and avoids over-fitting the data by replacing excessively specific context in the tree with more general tags. Finally, the generated tree is transformed into a finite automaton to improve tagging efficiency (Ron et al., 1997).

3.1 Constructing a Basic Tag Context Tree

In this section, we construct a basic tag context tree. Before going into detail of the algorithm, we briefly explain the context tree by using a simple binary case. The context tree was originally introduced in the field of data compression (Rissanen, 1983; Willems et al., 1995; Cover and Thomas, 1991) to represent how many times and in what context each symbol appeared in a sequence of symbols. Figure 2 exemplifies two context trees comprising binary symbols 'a' and 'b'. T(4) is constructed from the sequence 'baab' and T(6) from 'baabab'. The root node of T(4) explains that both 'a' and 'b' appeared twice in 'baab' when no consideration is taken of previous symbols. The nodes of depth 1 represent an order 1 (bi-gram) model. The left node of T(4) represents that both 'a' and 'b' appeared only once after symbol 'a', while the right node of T(4) represents that only 'a' occurred once after 'b'. In the same way, the nodes of depth 2 in T(6) represent an order 2 (tri-gram) context model.

[Figure 2: Context Trees for 'baab' and 'baabab'. T(4) has root count (2,2), with children (1,1) after 'a' and (1,0) after 'b'; T(6) has root count (3,3), children (1,2) and (2,0), and depth-2 counts (0,1), (1,0), (0,0).]

It is straightforward to extend this binary tree to a basic tag context tree. In this case, context symbols 'a' and 'b' are replaced by an element of the basic tag set and the frequency table of each node then consists of the part-of-speech subdivision set. The procedure construct-btree which constructs a basic tag context tree is given below. Let the set of subdivision tags be s1, ..., sn. Let weight[t] be a weight vector attached to the t-th example x(t). Initial values of weight[t] are set to 1.

1. The only node, the root, is marked with the count table (c(s1), ..., c(sn)) = (0, ..., 0).

2. Apply the following recursively. Let T(t-1) be the last constructed tree, with counts (c(s1,z), ..., c(sn,z)) at nodes z. After the next symbol whose subdivision is x(t) is observed, generate the next tree T(t) as follows: follow T(t-1), starting at the root and taking the branch indicated by each successive symbol in the past sequence by using basic tag level. For each node z visited, increment the component count c(x(t),z) by weight[t]. Continue until node w is a leaf node.

3. If w is a leaf, extend the tree by creating new leaves: c(x(t),ws1) = ... = c(x(t),wsn) = weight[t], and all other counts at the new leaves are 0. Define the resulting tree to be T(t).

3.2 Constructing a Hierarchical Tag Context Tree

This section delineates how a hierarchical tag context tree is constructed from a basic tag context tree. Before describing the algorithm, we prepare some definitions and notations.

Let A be a part-of-speech subdivision set. As described in the previous section, frequency tables of each node consist of the set A. At any node s of a context tree, let n(a|s) and p̂(a|s) be the count of element a and its probability, respectively:

    p̂(a|s) = n(a|s) / Σ_{b in A} n(b|s)

We introduce an information-theoretical criterion Δ(sb) (Weinberger et al., 1995) to evaluate the gain of expanding a node s by its daughter sb:

    Δ(sb) = Σ_{a in A} n(a|sb) log ( p̂(a|sb) / p̂(a|s) )    (2)

Δ(sb) is the difference in optimal code lengths when symbols at node sb are compressed by using probability distribution p̂(.|s) at node s and p̂(.|sb) at node sb. Thus, the larger Δ(sb) is, the more meaningful it is to expand the node by sb.

Now, we go back to the hierarchical tag context tree construction. As illustrated in Figure 3, the generation process amounts to the iterative selection of b out of word level, subdivision, part-of-speech and null (no expansion).

[Figure 3: Constructing a Hierarchical Tag Context Tree: at each candidate expansion sb, the question is which level is appropriate for b -- word, subdivision, part-of-speech, or null?]

Let us look at the procedure from the information-theoretical viewpoint. Breaking out equation (2) as (3), Δ(sb) is represented as the product of the frequency of all subdivision symbols at node sb and the Kullback-Leibler (KL) divergence:

    Δ(sb) = n(sb) Σ_{a in A} (n(a|sb) / n(sb)) log ( p̂(a|sb) / p̂(a|s) )
          = n(sb) Σ_{a in A} p̂(a|sb) log ( p̂(a|sb) / p̂(a|s) )
          = n(sb) D_KL( p̂(.|sb), p̂(.|s) )    (3)

Because the KL divergence defines a distance measure between the probability distributions p̂(.|sb) and p̂(.|s), there is the following trade-off between the two terms of equation (3).

• The more general b is, the more subdivision symbols appear at node sb.

• The more specific b is, the more p̂(.|s) and p̂(.|sb) differ.

By using the trade-off, the optimal level of b is selected. Table 1 summarizes the algorithm construct-htree that constructs the hierarchical tag context tree. First, construct-htree generates a basic tag context tree by calling construct-btree. Assume that the training examples consist of a sequence of triples <p_t, d_t, w_t>, in which p_t, d_t and w_t represent part-of-speech, subdivision and word, respectively. Each time the algorithm reads an example, it first reaches the current leaf node s by following the past sequence, computes Δ(sb), and then selects the optimal b. The initially constructed basic tag context tree is used to compute the Δ(sb)s.

Table 1: Algorithm construct-htree

    Initialize weight[t] = 1 for all examples t
    t = 1
    call construct-btree
    do
        Read t-th example x_t (<p_t, d_t, w_t>), in which p_t, d_t and w_t represent
            part-of-speech, subdivision and word, respectively.
        Follow x_{t-1}, x_{t-2}, ..., x_{t-(i-1)} and reach leaf node s
        low = s w_{t-i}, high = s d_{t-i}
        while (max(Δ(low), Δ(high)) >= Threshold) {
            if (Δ(low) > Δ(high))
                Expand the tree by the node low
            else if (high == s p_{t-i})
                Expand the tree by the node high
            else
                low = s d_{t-i}, high = s p_{t-i}
        }
        t = t + 1
    while (x_t is not empty)

Table 2: Algorithm mistake-driven mixture

    Input: sequence of N examples <p_1, d_1, w_1>, ..., <p_N, d_N, w_N>, in which p_i, d_i
        and w_i represent part-of-speech, subdivision and word, respectively.
    Initialize the weight vector weight[i] = 1 for i = 1, ..., N
    Do for t = 1, 2, ..., T
        Call construct-htree providing it with the weight vector weight[] and
            construct a part-of-speech tagger h_t
        Let Error be the set of examples that are not identified by h_t
        Compute the error rate of h_t: e_t = Σ_{i in Error} weight[i] / Σ_{i=1}^{N} weight[i]
        β_t = e_t / (1 - e_t)
        For examples correctly predicted by h_t, update the weight vector to be
            weight[i] = weight[i] β_t    (4)
    Output a final tag model h_f = Σ_{t=1}^{T} (log 1/β_t) h_t / Σ_{t=1}^{T} (log 1/β_t)    (5)

4 Mistake-Driven Mixture of Hierarchical Tag Context Trees

Up to this section, we introduced a new tag model that uses a single hierarchical tag context tree to cope with the exceptional connections that cannot be captured by just part-of-speech level. However, this approach has a clear limitation; the exceptional connections that do not occur so often cannot be detected by the single tree model. In such a case, the first term n(sb) in equation (3) is enormous for general b and the tree is expanded by using more general symbols. To overcome this limitation, we devised the mistake-driven mixture algorithm summarized in Table 2, which constructs T context trees and outputs the final tag model.

mistake-driven mixture sets the weights to 1 for all examples and repeats the following procedures T times. The algorithm first constructs a hierarchical context tree by using the current weight vector. Example data are then tagged by the tree and the weights of correctly handled examples are reduced by equation (4). Finally, the final tag model is constructed by mixing T trees according to equation (5).

By using the mistake-driven mixture method, the constituents of a series of hierarchical tag context trees gradually change from broad coverage tags (e.g., noun) to specific exceptional words that cannot be captured by part-of-speech and subdivisions. The method, by mixing different levels of trees, incorporates not only frequent connections but also infrequent ones that are often considered to be collocational, without over-fitting the data.

5 Preliminary Evaluation

We performed a preliminary evaluation using the first 8939 Japanese sentences in a year's volume of newspaper articles (Mainichi, 1993). We first automatically segmented and tagged these sentences and then revised them by hand. The total number of words in the hand-revised corpus was 226162. We trained our tag models on the corpora with every tenth sentence removed (starting with the first sentence) and then tested the removed sentences. There were 22937 words in the test corpus.

As the first milestone of performance, we tested the hand-crafted tag model of JUMAN (Kurohashi et al., 1994), the most widely used Japanese part-of-speech tagger. The tagging accuracy of JUMAN for the test corpus was only 92.0%. This shows that our corpus is difficult to tag because the corpus contains various genres of texts, from obituaries to poetry.

Next, we compared the mixture of bi-grams and the mixture of hierarchical tag context trees. In this experiment, only post-positional particles and auxiliaries were word-level elements of basic tags and all other elements were subdivision level. In contrast, the bi-gram was constructed by using subdivision level. We set the iteration number T to 5. The results of our experiments are summarized in Figure 4.

[Figure 4: Context Tree Mixture vs. Bi-gram Mixture: tagging accuracy (90-97%) plotted against the number of mixture components (1-5) for the mixture of bi-grams and the mixture of context trees.]

As a single tree estimator (Number of Mixture = 1), the hierarchical tag context tree attained 94.1% accuracy, while the bi-gram yielded 93.1%. A hierarchical tag context tree offers a slight improvement, but not a great deal. This conclusion agrees with Schütze and Singer's experiments that used a context tree of usual parts-of-speech.

When we turn to the mixture estimator, a great difference is seen between hierarchical tag context trees and bi-grams. The hierarchical tag context trees produced by the mistake-driven mixture method greatly improved the accuracy, and over-fitting the data was not serious. The best and worst performances were 96.1% (Number of Mixture = 3) and 94.1% (Number of Mixture = 1), respectively. On the other hand, the performance of the bi-gram mixture was not satisfactory. The best and worst performances were 93.8% (Number of Mixture = 2) and 90.8% (Number of Mixture = 5), respectively. From the result, we may say exceptional connections are well captured by hierarchical context trees but not by bi-grams. Bi-grams of subdivisions are too general to selectively detect exceptions.

6 Related Work

Although statistical natural language processing has mainly focused on Maximum Likelihood Estimators, (Pereira et al., 1995) proposed a mixture approach to predict next words by using the Context Tree Weighting (CTW) method (Willems et al., 1995). The CTW method computes probability by mixing subtrees in a single context tree in Bayesian fashion. Although the method is very efficient, it cannot be used to construct hierarchical tag context trees.

Various kinds of re-sampling techniques have been studied in statistics (Efron, 1979; Efron and Tibshirani, 1993) and machine learning (Breiman, 1996; Hull et al., 1996; Freund and Schapire, 1996a). In particular, the mistake-driven mixture algorithm was directly motivated by Adaboost (Freund and Schapire, 1996a). The Adaboost method was designed to construct a high-performance predictor by iteratively calling a weak learning algorithm (one that is slightly better than random guess). An empirical work reports that the method greatly improved the performance of decision-tree, k-nearest-neighbor, and other learning methods given relatively simple and sparse data (Freund and Schapire, 1996b). We borrowed the idea of re-sampling to detect exceptional connections and first proved that such a re-sampling method is also effective for a practical application using a large amount of data. The next step is to fill the gap between theory and practice. Most theoretical work on re-sampling assumes i.i.d. (identically, independently distributed) samples. This is not a realistic assumption in part-of-speech tagging and other NL applications. An interesting future research direction is to construct a theory that handles Markov processes.

7 Conclusion

We have described a new tag model that uses mistake-driven mixture to produce hierarchical tag context trees that can deal with exceptional connections whose detection is not possible at part-of-speech level. Our experimental results show that combining hierarchical tag context trees with the mistake-driven mixture method is extremely effective for 1. incorporating exceptional connections and 2. avoiding data over-fitting. Although we have focused on part-of-speech tagging in this paper, the mistake-driven mixture method should be useful for other applications because detecting and incorporating exceptions is a central problem in corpus-based NLP. We are now constructing a Japanese dependency parser that employs a mistake-driven mixture of decision trees.

References

Leo Breiman. 1996. Bagging predictors. Machine Learning, 24(2):123-140, August.
Eric Brill. 1992. A simple rule-based part of speech tagger. In Proc. Third Conference on Applied Natural Language Processing, pages 152-155.
Eugene Charniak, Curtis Hendrickson, Neil Jacobson, and Mike Perkowits. 1993. Equations for Part-of-Speech Tagging. In Proc. 11th AAAI, pages 784-789.
K. W. Church. 1988. A stochastic parts program and a noun phrase parser for unrestricted text. In Proc. ACL 2nd Conference on Applied Natural Language Processing, pages 126-143.
T.M. Cover and J.A. Thomas. 1991. Elements of Information Theory. John Wiley & Sons.
B. Efron and R. Tibshirani. 1993. An Introduction to the Bootstrap. Chapman and Hall.
B. Efron. 1979. Bootstrap: another look at the jackknife. The Annals of Statistics, 7(1):1-26.
Yoav Freund and Robert Schapire. 1996a. A decision-theoretic generalization of on-line learning and an application to boosting.
Yoav Freund and Robert Schapire. 1996b. Experiments with a New Boosting algorithm. In Proc. 13th International Conference on Machine Learning, pages 148-156.
David A. Hull, Jan O. Pedersen, and Hinrich Schütze. 1996. Method combination for document filtering. In Proc. ACM SIGIR 96, pages 279-287.
J. Kupiec. 1992. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6:225-242.
Sadao Kurohashi, Toshihisa Nakamura, Yuji Matsumoto, and Makoto Nagao. 1994. Improvements of Japanese morphological analyzer JUMAN. In Proc. International Workshop on Sharable Natural Language Resources, pages 22-28.
Mainichi. 1993. CD Mainichi Shinbun. Nichigai Associates Co.
Masaaki Nagata. 1994. A Stochastic Japanese Morphological Analyzer Using a Forward-DP Backward-A* N-Best Search Algorithm. In Proc. 15th COLING, pages 201-207.
Fernando C. Pereira, Yoram Singer, and Naftali Tishby. 1995. Beyond Word N-Grams. In Proc. Third Workshop on Very Large Corpora, pages 95-106.
Jorma Rissanen. 1983. A universal data compression system. IEEE Transaction on Information Theory, 29(5):656-664, September.
Dana Ron, Yoram Singer, and Naftali Tishby. 1997. The power of amnesia: Learning probabilistic automata with variable memory length. (to appear) Machine Learning Special Issue on COLT94.
H. Schütze and Y. Singer. 1994. Part-of-speech tagging using a variable markov model. In the 32nd Annual Meeting of ACL, pages 181-187.
M. J. Weinberger, J. J. Rissanen, and M. Feder. 1995. A universal finite memory source. IEEE Transaction on Information Theory, 41(3):643-652, May.
F. M. J. Willems, Y. M. Shtarkov, and T. J. Tjalkens. 1995. The context-tree weighting method: Basic properties. IEEE Transaction on Information Theory, 41(3):653-664, May.
 | 1997 | 30 |
A Flexible POS Tagger Using an Automatically Acquired Language Model*

Lluís Màrquez, LSI-UPC, c/Jordi Girona 1-3, 08034 Barcelona, Catalonia, lluism@lsi.upc.es
Lluís Padró, LSI-UPC, c/Jordi Girona 1-3, 08034 Barcelona, Catalonia, padro@lsi.upc.es

Abstract

We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.

1 Introduction

In NLP, it is necessary to model the language in a representation suitable for the task to be performed. The language models more commonly used are based on two main approaches: first, the linguistic approach, in which the model is written by a linguist, generally in the form of rules or constraints (Voutilainen and Järvinen, 1995). Second, the automatic approach, in which the model is automatically obtained from corpora (either raw or annotated)[1], and consists of n-grams (Garside et al., 1987; Cutting et al., 1992), rules (Hindle, 1989) or neural nets (Schmid, 1994). In the automatic approach we can distinguish two main trends: The low-level data trend collects statistics from the training corpora in the form of n-grams, probabilities, weights, etc. The high-level data trend acquires more sophisticated information, such as context rules, constraints, or decision trees (Daelemans et al., 1996; Màrquez and Rodríguez, 1995; Samuelsson et al., 1996). The acquisition methods range from supervised inductive-learning-from-examples algorithms (Quinlan, 1986; Aha et al., 1991) to genetic algorithm strategies (Losee, 1994), through the transformation-based error-driven algorithm used in (Brill, 1995). Still another possibility are the hybrid models, which try to join the advantages of both approaches (Voutilainen and Padró, 1997).

(*This research has been partially funded by the Spanish Research Department (CICYT) and inscribed as TIC96-1243-C03-02. Footnote 1: When the model is obtained from annotated corpora we talk about supervised learning; when it is obtained from raw corpora, training is considered unsupervised.)

We present in this paper a hybrid approach that puts together both trends in the automatic approach and the linguistic approach. We describe a POS tagger based on the work described in (Padró, 1996), that is able to use bi/trigram information, automatically learned context constraints and linguistically motivated manually written constraints. The sources and kinds of constraints are unrestricted, and the language model can be easily extended. The structure of the tagger is presented in figure 1.

[Figure 1: Tagger architecture. The language model (learned constraints, manually written constraints, n-grams) and the corpus feed the tagger.]

We also present a constraint-acquisition algorithm that uses statistical decision trees to learn context constraints from annotated corpora, and we use the acquired constraints to feed the POS tagger. The paper is organized as follows. In section 2 we describe our language model, in section 3 we describe the constraint acquisition algorithm, and in section 4 we expose the tagging algorithm.
Descriptions of the corpus used, the experiments performed and the results obtained can be found in sections 5 and 6.

2 Language Model

We will use a hybrid language model consisting of an automatically acquired part and a linguist-written part.

The automatically acquired part is divided in two kinds of information: on the one hand, we have bigrams and trigrams collected from the annotated training corpus (see section 5 for details); on the other hand, we have context constraints learned from the same training corpus using statistical decision trees, as described in section 3.

The linguistic part is very small (since there were no available resources to develop it further) and covers only very few cases, but it is included to illustrate the flexibility of the algorithm. A sample rule of the linguistic part:

10.0 (%vauxiliar%) (-[VBN IN , : JJ JJS JJR])+ <VBN>;

This rule states that the tag past participle (VBN) is very compatible (10.0) with a left context consisting of a %vauxiliar% (a previously defined macro which includes all forms of "have" and "be") provided that all the words in between don't have any of the tags in the set [VBN IN , : JJ JJS JJR]. That is, this rule raises the support for the tag past participle when there is an auxiliary verb to the left, but only if there is not another candidate to be a past participle or an adjective in between. The tags [IN , :] prevent the rule from being applied when the auxiliary verb and the participle are in two different phrases (a comma, a colon or a preposition are considered to mark the beginning of another phrase).

The constraint language is able to express the same kind of patterns as the Constraint Grammar formalism (Karlsson et al., 1995), although in a different formalism. In addition, each constraint has a compatibility value that indicates its strength. In the medium term, the system will be adapted to accept CGs.

3 Constraint Acquisition

Choosing, from a set of possible tags, the proper syntactic tag for a word in a particular context can be seen as a problem of classification. Decision trees, recently used in NLP basic tasks such as tagging and parsing (McCarthy and Lehnert, 1995; Daelemans et al., 1996; Magerman, 1996), are suitable for performing this task.

A decision tree is an n-ary branching tree that represents a classification rule for classifying the objects of a certain domain into a set of mutually exclusive classes. The domain objects are described as a set of attribute-value pairs, where each attribute measures a relevant feature of an object taking a (ideally small) set of discrete, mutually incompatible values. Each non-terminal node of a decision tree represents a question on (usually) one attribute. For each possible value of this attribute there is a branch to follow. Leaf nodes represent concrete classes. Classifying a new object with a decision tree simply consists of following the convenient path through the tree until a leaf is reached.

Statistical decision trees only differ from common decision trees in that leaf nodes define a conditional probability distribution on the set of classes.

It is important to note that decision trees can be directly translated to rules by considering, for each path from the root to a leaf, the conjunction of all questions involved in this path as a condition and the class assigned to the leaf as the consequence. Statistical decision trees would generate rules in the same manner, but assigning a certain degree of probability to each answer.
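Since this path-to-rule translation is central to the acquisition method, the following sketch makes it concrete. It is our own illustration, not the authors' code: the nested-dict tree representation and all names are assumptions, and the leaf probabilities are borrowed from the IN-RB example discussed later in this section.

    def tree_to_rules(node, conditions=()):
        # Yield one (conditions, tag, probability) rule per tag at each leaf.
        if "distribution" in node:            # leaf: P(tag | path conditions)
            for tag, prob in node["distribution"].items():
                yield conditions, tag, prob
        else:                                 # internal node: question on one attribute
            for value, child in node["branches"].items():
                yield from tree_to_rules(
                    child, conditions + ((node["attribute"], value),))

    # A one-question tree for the IN-RB ambiguity class:
    tree = {"attribute": "tag(word-1)",
            "branches": {"RB": {"distribution": {"IN": 0.013, "RB": 0.987}}}}
    for rule in tree_to_rules(tree):
        print(rule)   # e.g. ((('tag(word-1)', 'RB'),), 'IN', 0.013)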
So the learning process of contextual constraints is performed by means of learning one statistical decision tree for each class of POS ambiguity² and converting them to constraints (rules) expressing compatibility/incompatibility of concrete tags in certain contexts.

Learning Algorithm

The algorithm we used for constructing the statistical decision trees is a non-incremental supervised learning-from-examples algorithm of the TDIDT (Top Down Induction of Decision Trees) family. It constructs the trees in a top-down way, guided by the distributional information of the examples, but not by the order of the examples (Quinlan, 1986). Briefly, the algorithm works as a recursive process that departs from considering the whole set of examples at the root level and constructs the tree in a top-down way, branching at any non-terminal node according to a certain selected attribute. The different values of this attribute induce a partition of the set of examples into the corresponding subsets, in which the process is applied recursively in order to generate the different subtrees. The recursion ends, in a certain node, either when all (or almost all) the remaining examples belong to the same class, or when the number of examples is too small. These nodes are the leaves of the tree and contain the conditional probability distribution of their associated subset of examples over the possible classes.

The heuristic function for selecting the most useful attribute at each step is of crucial importance in order to obtain simple trees, since no backtracking is performed. There exist two main families of attribute-selecting functions: information-based (Quinlan, 1986; López, 1991) and statistically based (Breiman et al., 1984; Mingers, 1989).

Training Set

For each class of POS ambiguity, the initial example set is built by selecting from the training corpus all the occurrences of the words belonging to this ambiguity class. More particularly, the set of attributes that describe each example consists of the part-of-speech tags of the neighbour words, and the information about the word itself (orthography and the proper tag in its context). The window considered in the experiments reported in section 6 is 3 words to the left and 2 to the right. The following are two real examples from the training set for the words that can be preposition and adverb at the same time (IN-RB conflict):

VB DT NN <"as",IN> DT JJ NN
IN NN <"once",RB> VBN TO

Approximately 90% of this set of examples is used for the construction of the tree. The remaining 10% is used as fresh test corpus for the pruning process.

Attribute Selection Function

For the experiments reported in section 6 we used an attribute selection function due to López de Mántaras (López, 1991), which belongs to the information-based family. Roughly speaking, it defines a distance measure between partitions and selects for branching the attribute that generates the closest partition to the correct partition, namely the one that joins together all the examples of the same class.

Let X be a set of examples, C the set of classes and P_C(X) the partition of X according to the values of C. The selected attribute will be the one that generates the closest partition of X to P_C(X). For that we need to define a distance measure between partitions.

²Classes of ambiguity are determined by the groups of possible tags for the words in the corpus, i.e., noun-adjective, noun-adjective-verb, preposition-adverb, etc.
Let P_A(X) be the partition of X induced by the values of attribute A. The average information of such a partition is defined as follows:

I(P_A(X)) = - \sum_{a \in P_A(X)} p(X,a) \log_2 p(X,a)

where p(X,a) is the probability of an element of X belonging to the set a, which is the subset of X whose examples have a certain value for the attribute A, and it is estimated by the ratio |a| / |X|. This average information measure reflects the randomness of the distribution of the elements of X between the classes of the partition induced by A. If we consider now the intersection between two different partitions induced by attributes A and B, we obtain

I(P_A(X) \cap P_B(X)) = - \sum_{a \in P_A(X)} \sum_{b \in P_B(X)} p(X, a \cap b) \log_2 p(X, a \cap b).

The conditioned information of P_B(X) given P_A(X) is

I(P_B(X) | P_A(X)) = I(P_A(X) \cap P_B(X)) - I(P_A(X))
                   = - \sum_{a \in P_A(X)} \sum_{b \in P_B(X)} p(X, a \cap b) \log_2 [ p(X, a \cap b) / p(X,a) ].

It is easy to show that the measure

d(P_A(X), P_B(X)) = I(P_B(X) | P_A(X)) + I(P_A(X) | P_B(X))

is a distance. Normalizing, we obtain

d_N(P_A(X), P_B(X)) = d(P_A(X), P_B(X)) / I(P_A(X) \cap P_B(X))

with values in [0,1]. So the selected attribute will be the one that minimizes the measure d_N(P_C(X), P_A(X)).

Branching Strategy

Usual TDIDT algorithms consider a branch for each value of the selected attribute. This strategy is not feasible when the number of values is big (or even infinite). In our case the greatest number of values for an attribute is 45 (the tag set size), which is considerably big; this means that the branching factor could be 45 at every level of the tree³. Some systems perform a previous recasting of the attributes in order to have only binary-valued attributes and to deal with binary trees (Magerman, 1996). This can always be done, but the resulting features lose their intuition and direct interpretation, and explode in number. We have chosen a mixed approach which consists of splitting for all values and afterwards joining the resulting subsets into groups for which we have not enough statistical evidence of being different distributions. This statistical evidence is tested with a χ² test at a 5% level of significance. In order to avoid zero probabilities the following smoothing is performed: in a certain set of examples, the probability of a tag t_i is estimated by

\hat{p}(t_i) = (|t_i| + 1/m) / (n + 1)

where m is the number of possible tags and n the number of examples.

Additionally, all the subsets that don't imply a reduction in the classification error are joined together in order to have a bigger set of examples to be treated in the following step of the tree construction. The classification error of a certain node is simply 1 - max_{1<=i<=m} \hat{p}(t_i). Experiments reported in (Màrquez and Rodríguez, 1995) show that in this way more compact and predictive trees are obtained.

Pruning the Tree

Decision trees that correctly classify all examples of the training set are not always the most predictive ones. This is due to the phenomenon known as overfitting. It occurs when the training set has a certain amount of misclassified examples, which is obviously the case of our training corpus (see section 5). If we force the learning algorithm to completely classify the examples, then the resulting trees would fit also the noisy examples.

³In real cases the branching factor is much lower since not all tags appear always in all positions of the context.
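As a concrete reading of the attribute-selection function defined above, here is a minimal sketch of the normalized López de Mántaras distance. It is our own illustration under the assumption that partitions are represented as lists of disjoint sets of example indices; none of the names come from the paper, and a non-degenerate intersection partition (joint information > 0) is assumed.

    from math import log2

    def info(partition, n):
        # Average information I(P(X)) of a partition of n examples,
        # with p(X,a) estimated by |a| / |X|.
        return -sum(len(a)/n * log2(len(a)/n) for a in partition if a)

    def d_norm(pa, pb, n):
        # d_N(P_A,P_B) = (I(P_B|P_A) + I(P_A|P_B)) / I(P_A n P_B)
        #              = (2*I(P_A n P_B) - I(P_A) - I(P_B)) / I(P_A n P_B)
        joint = info([a & b for a in pa for b in pb], n)
        return (2*joint - info(pa, n) - info(pb, n)) / joint

    # The selected attribute is the one whose induced partition minimizes
    # d_norm(P_C(X), P_A(X)), where P_C(X) partitions the examples by class.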
The usual solutions to this problem are: 1) prune the tree, either during the construction process (Quinlan, 1993) or afterwards (Mingers, 1989); 2) smooth the conditional probability distributions using fresh corpus⁴ (Magerman, 1996).

Since another important requirement of our problem is to have small trees, we have implemented a post-pruning technique. In a first step the tree is completely expanded and afterwards it is pruned following a minimal cost-complexity criterion (Breiman et al., 1984). Roughly speaking, this is a process that iteratively cuts those subtrees producing only marginal benefits in accuracy, obtaining smaller trees at each step. The trees of this sequence are tested using a comparatively small, fresh part of the training set in order to decide which is the one with the highest degree of accuracy on new examples. Experimental tests (Màrquez and Rodríguez, 1995) have shown that the pruning process reduces tree sizes by about 50% and improves their accuracy by 2-5%.

An Example

Finally, we present a real example of the simple acquired contextual constraints for the conflict IN-RB (preposition-adverb).

[Figure 2: Example of a decision tree branch for the IN-RB conflict. At the root the prior probability distribution is P(IN)=0.81, P(RB)=0.19; at the leaf reached by the depicted branch the conditional probability distribution is P(IN)=0.013, P(RB)=0.987.]

The tree branch in figure 2 is translated into the following constraints:

-5.81 <["as","As"],IN> ([RB]) ([IN]);
 2.366 <["as","As"],RB> ([RB]) ([IN]);

which express the compatibility (either positive or negative) of the word-tag pair in angle brackets with the given context. The compatibility value for each constraint is the mutual information between the tag and the context (Cover and Thomas, 1991). It is directly computed from the probabilities in the tree.

4 Tagging Algorithm

Usual tagging algorithms are either n-gram oriented, such as the Viterbi algorithm (Viterbi, 1967), or ad hoc for every case when they must deal with more complex information.

We use relaxation labelling as a tagging algorithm. Relaxation labelling is a generic name for a family of iterative algorithms which perform function optimization based on local information. See (Torras, 1989) for a summary. Its most remarkable feature is that it can deal with any kind of constraints: the model can be improved by adding any constraints available, and the tagging algorithm is independent of the complexity of the model. The algorithm has been applied to part-of-speech tagging (Padró, 1996), and to shallow parsing (Voutilainen and Padró, 1997).

The algorithm is described as follows. Let V = {v_1, v_2, ..., v_n} be a set of variables (words). Let t_i = {t_i^1, t_i^2, ..., t_i^{k_i}} be the set of possible labels (POS tags) for variable v_i. Let CS be a set of constraints between the labels of the variables. Each constraint C in CS states a "compatibility value" C_r for a combination of variable-label pairs. Any number of variables may be involved in a constraint.

The aim of the algorithm is to find a weighted labelling⁵ such that "global consistency" is maximized. Maximizing "global consistency" is defined as maximizing, for all v_i, the sum over j of p_j^i × S_ij, where p_j^i is the weight for label j in variable v_i and S_ij the support received by the same combination. The support for a variable-label pair expresses how compatible that pair is with the labels of neighbouring variables, according to the constraint set. It is a vector optimization and doesn't maximize only the sum of the supports of all variables.

⁴Of course, this can be done only in the case of statistical decision trees.
⁵A weighted labelling is a weight assignment for each label of each variable such that the weights for the labels of the same variable add up to one.
It finds a weighted labelling such that any other choice wouldn't increase the support for any variable.

The support is defined as the sum of the influence of every constraint on a label:

S_ij = \sum_{r \in R_ij} Inf(r)

where R_ij is the set of constraints on label j for variable i, i.e., the constraints formed by any combination of variable-label pairs that includes the pair (v_i, t_j^i), and

Inf(r) = C_r × p_{k_1}^{r_1}(m) × ... × p_{k_d}^{r_d}(m)

is the product of the current weights⁶ for the labels appearing in the constraint except (v_i, t_j^i) (representing how applicable the constraint is in the current context), multiplied by C_r, which is the constraint compatibility value (stating how compatible the pair is with the context).

Briefly, what the algorithm does is:

1. Start with a random weight assignment⁷.
2. Compute the support value for each label of each variable.
3. Increase the weights of the labels more compatible with the context (support greater than 0) and decrease those of the less compatible labels (support less than 0)⁸, using the updating function:

   p_j^i(m+1) = p_j^i(m) × (1 + S_ij) / \sum_{k=1}^{k_i} p_k^i(m) × (1 + S_ik)

   where -1 <= S_ij <= +1.
4. If a stopping/convergence criterion⁹ is satisfied, stop; otherwise go to step 2.

⁶p_k^r(m) is the weight assigned to label k for variable r at time m.
⁷We use lexical probabilities as a starting point.
⁸Negative values for support indicate incompatibility.
⁹We use the criterion of stopping when there are no more changes, although more sophisticated heuristic procedures are also used to stop relaxation processes (Eklundh and Rosenfeld, 1978; Richards et al., 1981).

The cost of the algorithm is proportional to the product of the number of words by the number of constraints.
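The following sketch restates the update of step 3 in code. It is our own illustration, not the authors' implementation: weights[i][j] plays the role of p_j^i(m), and support(i, j, weights) is assumed to compute S_ij in [-1, +1] from the constraint set (not shown here).

    def relaxation_step(weights, support):
        new_weights = []
        for i, p in enumerate(weights):
            s = [support(i, j, weights) for j in range(len(p))]
            z = sum(p_k * (1 + s_k) for p_k, s_k in zip(p, s))  # normalization
            new_weights.append([p_j * (1 + s_j) / z for p_j, s_j in zip(p, s)])
        return new_weights

    def relax(weights, support, eps=1e-9):
        # Iterate until there are no more changes (the stopping criterion above).
        while True:
            new = relaxation_step(weights, support)
            if all(abs(a - b) < eps for p, q in zip(weights, new)
                                    for a, b in zip(p, q)):
                return new
            weights = new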
The resulting models were tested in the fresh test set. 6 Experiments and results The whole WSJ corpus contains 241 different classes of ambiguity. The 40 most representative classes t-" were selected for acquiring the corresponding deci- sion trees. That produced 40 trees totaling up to 2995 leaf nodes, and covering 83.95% of the ambigu- ous words. Given that each tree branch produces as many constraints as tags its leaf involves, these trees were translated into 8473 context constraints. We also extracted the 1404 bigram restrictions and the 17387 trigram restrictions appearing in the training corpus. Finally, the model-tuning set was tagged using a bigram model. The most common errors com- mited by the bigram tagger were selected for manu- ally writing the sample linguistic part of the model, consisting of a set of 20 hand-written constraints. From now on C will stands for the set of acquired context constraints. B for the bigram model, T for th.e trigram model, and H for the hand-written con- straints. Any combination of these letters will indi- cate the joining of the corresponding models (BT, BC, BTC, etc.). In addition, ML indicates a baseline model con- raining no constraints (this will result in a most- likely tagger) and HMM stands for a hidden Markov model bigram tagger (Elworthy, 1992). We tested the tagger on the 50 Kw test set using all the combinations of the language models. Results are reported below. The effect of the acquired rules on the number of errors for some of the most common cases is shown in table 1. XX/Y'Y stands for an error consisting of a word tagged ~t%_" when it should have been XX. Table 2 contains the meaning of all the involved tags. Figures in table 1 show that in all cases the learned constraints led to an improvement. It is remarkable that when using C alone, the number of errors is lower than with any bigram 12In terms of number of examples. 242 JJ/NN+NN/JJ VBD/VBN+VBN/VBD IN/RB+RB/IN VB/VBP+VBP/VB NN/NNP+NNP/NN NNP/NNPS+NNPS/NNP "'that" 187 Total ML C B 73+137 70+94 73+112 176+190 71+66 88+69 31+132 40+69 66+107 128+147 30+26 49+43 70+11 44+12 72+17 45+14 37+19 45+13 53 66 BC 69+102 63+56 43+17 32+27 45+16 46+15 45 T I TC 57+103 [ 61+95 56+57 55+57 77+68 47+67 31+32 32+18 69+27 50+18 54+12 51+12 60 I 40 BT[ BTC 67+101 t 62+93 65+60 59+61 65+98 46-z-83 28+32 ') ' ' '} .8,--3. 71+20 62+t.5 53+14 51+14 57 . 45 1341 it 631 II 82°1 630 II 7o3! 603 731 ~s51 i Table 1: Number of some common errors commited by each model NN JJ VBD VBN RB IN VB VBP NNP NNPS Noun [ I ambiguous Adjective B 91.35% Verb - past. tense T 91.82% 'verb - past participle BT 91.92% Adverb Preposition B C 91.96% Verb - base form C 92.72% Verb - personal form TC 92.82% Proper noun BTC 92.55% Plural proper noun Table 4: Results of our Table 2: Tag meanings of constraint kinds and/or trigram model, that is, the acquired model performs better than the others estimated from the same training corpus. We also find that the cooperation of a bigram or trigram model with the acquired one, produces even better results. This is not true in the cooperation of bigrams and trigrams with acquired constraints (BTC), in this case the synergy is not enough to get a better joint result. This might be due to the fact that the noise in B and T adds up and overwhelms the context constraints. The results obtained by the baseline taggers can be found in table 3 and the results obtained using all the learned constraints together with the bi/trigram models in table 4. 
      ambiguous   overall
ML     85.31%    94.66%
HMM    91.75%    97.00%

Table 3: Results of the baseline taggers

      ambiguous   overall
B      91.35%    96.86%
T      91.82%    97.03%
BT     91.92%    97.06%
BC     91.96%    97.08%
C      92.72%    97.36%
TC     92.82%    97.39%
BTC    92.55%    97.29%

Table 4: Results of our tagger using every combination of constraint kinds

On the one hand, the results in tables 3 and 4 show that our tagger performs slightly worse than a HMM tagger in the same conditions¹³, that is, when using only bigram information.

On the other hand, those results also show that since our tagger is more flexible than a HMM, it can easily accept more complex information to improve its results up to 97.39% without modifying the algorithm.

       ambiguous   overall
H       86.41%    95.06%
BH      91.88%    97.05%
TH      92.04%    97.11%
BTH     92.32%    97.21%
CH      91.97%    97.08%
BCH     92.76%    97.37%
TCH     92.98%    97.45%
BTCH    92.71%    97.35%

Table 5: Results of our tagger using every combination of constraint kinds and hand-written constraints

Table 5 shows the results when adding the hand-written constraints. The hand-written set is very small and only covers a few common error cases. That produces poor results when using them alone (H), but they are good enough to raise the results given by the automatically acquired models up to 97.45%.

Although the improvement obtained might seem small, it must be taken into account that we are moving very close to the best achievable result with these techniques.

First, some ambiguities can only be solved with semantic information, such as the Noun-Adjective ambiguity for the word principal in the phrase the principal office. It could be an adjective, meaning the main office, or a noun, meaning the school head office.

Second, the WSJ corpus contains noise (mistagged words) that affects both the training and the test sets. The noise in the training set produces noisy (and so less precise) models. In the test set, it produces a wrong estimation of accuracy, since correct answers are computed as wrong and vice versa. For instance, verb participle forms are sometimes tagged as such (VBN) and also as adjectives (JJ) in other sentences with no structural differences:

• ... failing_VBG to_TO voluntarily_RB submit_VB the_DT requested_VBN information_NN ...
• ... a_DT large_JJ sample_NN of_IN married_JJ women_NNS with_IN at_IN least_JJS one_CD child_NN ...

Another structure not coherently tagged are noun chains when the nouns are ambiguous and can also be adjectives:

• ... Mr._NNP Hahn_NNP ,_, the_DT 62-year-old_JJ chairman_NN and_CC chief_NN executive_JJ officer_NN of_IN Georgia-Pacific_NNP Corp._NNP ...
• ... Burger_NNP King_NNP 's_POS chief_JJ executive_NN officer_NN ,_, Barry_NNP Gibbons_NNP ,_, stars_VBZ in_IN ads_NNS saying_VBG ...
• ... and_CC Barrett_NNP B._NNP Weekes_NNP ,_, chairman_NN ,_, president_NN and_CC chief_JJ executive_JJ officer_NN ._.
• ... the_DT company_NN includes_VBZ Neil_NNP Davenport_NNP ,_, 47_CD ,_, president_NN and_CC chief_NN executive_NN officer_NN ;_;

All this means that the performance cannot reach 100%, and that an accurate analysis of the noise in the WSJ corpus should be performed to estimate the actual upper bound that a tagger can achieve on these data. This issue will be addressed in further work.

7 Conclusions

We have presented an automatic constraint learning algorithm based on statistical decision trees.

¹³Hand analysis of the errors committed by the algorithm suggests that the worse results may be due to noise in the training and test corpora, i.e., the relaxation algorithm seems to be more noise-sensitive than a Markov model. Further research is required on this point.
We have used the acquired constraints in a part-of-speech tagger that allows combining any kind of constraints in the language model.

The results obtained show a clear improvement in the performance when the automatically acquired constraints are added to the model. That indicates that relaxation labelling is a flexible algorithm able to properly combine different kinds of information, and that the constraints acquired by the learning algorithm capture relevant context information that was not included in the n-gram models.

It is difficult to compare the results to other works, since the accuracy varies greatly depending on the corpus, the tag set, and the lexicon or morphological analyzer used. The most similar conditions reported in previous work are those experiments performed on the WSJ corpus: (Brill, 1992) reports a 3-4% error rate, and (Daelemans et al., 1996) report 96.7% accuracy. We obtained 97.39% accuracy with trigrams plus automatically acquired constraints, and 97.45% when hand-written constraints were added.

8 Further Work

Further work is still to be done in the following directions:

• Perform a thorough analysis of the noise in the WSJ corpus to determine a realistic upper bound for the performance that can be expected from a POS tagger.

On the constraint learning algorithm:

• Consider more complex context features, such as non-limited distance or barrier rules in the style of (Samuelsson et al., 1996).
• Take into account morphological, semantic and other kinds of information.
• Perform a global smoothing to deal with low-frequency ambiguity classes.

On the tagging algorithm:

• Study the convergence properties of the algorithm to decide whether the lower results at convergence are produced by the noise in the corpus.
• Use back-off techniques to minimize interferences between statistical and learned constraints.
• Use the algorithm to perform simultaneously POS tagging and word sense disambiguation, to take advantage of cross influences between both kinds of information.

References

D.W. Aha, D. Kibler and M. Albert. 1991. Instance-based learning algorithms. Machine Learning, 7:37-66.

L. Breiman, J.H. Friedman, R.A. Olshen and C.J. Stone. 1984. Classification and Regression Trees. The Wadsworth Statistics/Probability Series. Wadsworth International Group, Belmont, California.

E. Brill. 1992. A Simple Rule-Based Part-of-Speech Tagger. In Proceedings of the Third Conference on Applied Natural Language Processing. ACL.

E. Brill. 1995. Unsupervised Learning of Disambiguation Rules for Part-of-Speech Tagging. In Proceedings of the 3rd Workshop on Very Large Corpora. Massachusetts.

T.M. Cover and J.A. Thomas (editors). 1991. Elements of Information Theory. John Wiley & Sons.

D. Cutting, J. Kupiec, J. Pedersen and P. Sibun. 1992. A Practical Part-of-Speech Tagger. In Proceedings of the Third Conference on Applied Natural Language Processing. ACL.

J. Eklundh and A. Rosenfeld. 1978. Convergence Properties of Relaxation Labelling. Technical Report no. 701. Computer Science Center, University of Maryland.

D. Elworthy. 1993. Part-of-Speech and Phrasal Tagging. Technical report, ESPRIT BRA-7315 Acquilex II, Working Paper WP #10.

W. Daelemans, J. Zavrel, P. Berck and S. Gillis. 1996. MBT: A Memory-Based Part-of-Speech Tagger Generator. In Proceedings of the 4th Workshop on Very Large Corpora. Copenhagen, Denmark.

R. Garside, G. Leech and G. Sampson (editors). 1987. The Computational Analysis of English. London and New York: Longman.

D. Hindle. 1989. Acquiring disambiguation rules from text. In Proceedings of ACL'89.
F. Karlsson. 1990. Constraint Grammar as a Framework for Parsing Running Text. In H. Karlgren (ed.), Papers presented to the 13th International Conference on Computational Linguistics, Vol. 3. Helsinki. 168-173.

F. Karlsson, A. Voutilainen, J. Heikkilä and A. Anttila (editors). 1995. Constraint Grammar: A Language-Independent System for Parsing Unrestricted Text. Mouton de Gruyter, Berlin and New York.

R. López. 1991. A Distance-Based Attribute Selection Measure for Decision Tree Induction. Machine Learning. Kluwer Academic.

R.M. Losee. 1994. Learning Syntactic Rules and Tags with Genetic Algorithms for Information Retrieval and Filtering: An Empirical Basis for Grammatical Rules. Information Processing & Management, May.

D. Magerman. 1996. Learning Grammatical Structure Using Statistical Decision-Trees. In Lecture Notes in Artificial Intelligence 1147. Grammatical Inference: Learning Syntax from Sentences. Proceedings ICGI-96. Springer.

L. Màrquez and H. Rodríguez. 1995. Towards Learning a Constraint Grammar from Annotated Corpora Using Decision Trees. ESPRIT BRA-7315 Acquilex II, Working Paper.

J.F. McCarthy and W.G. Lehnert. 1995. Using Decision Trees for Coreference Resolution. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95).

J. Mingers. 1989. An Empirical Comparison of Selection Measures for Decision-Tree Induction. Machine Learning, 3:319-342.

J. Mingers. 1989. An Empirical Comparison of Pruning Methods for Decision-Tree Induction. Machine Learning, 4:227-243.

L. Padró. 1996. POS Tagging Using Relaxation Labelling. In Proceedings of the 16th International Conference on Computational Linguistics. Copenhagen, Denmark.

J.R. Quinlan. 1986. Induction of Decision Trees. Machine Learning, 1:81-106.

J.R. Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA. Morgan Kaufmann.

J. Richards, D. Landgrebe and P. Swain. 1981. On the accuracy of pixel relaxation labelling. IEEE Transactions on Systems, Man and Cybernetics. Vol. SMC-11.

C. Samuelsson, P. Tapanainen and A. Voutilainen. 1996. Inducing Constraint Grammars. In Proceedings of the 3rd International Colloquium on Grammatical Inference.

H. Schmid. 1994. Part-of-speech tagging with neural networks. In Proceedings of the 15th International Conference on Computational Linguistics. Kyoto, Japan.

C. Torras. 1989. Relaxation and Neural Learning: Points of Convergence and Divergence. Journal of Parallel and Distributed Computing, 6:217-244.

A.J. Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimal decoding algorithm. IEEE Transactions on Information Theory, pp. 260-269, April.

A. Voutilainen and T. Järvinen. 1995. Specifying a shallow grammatical representation for parsing purposes. In Proceedings of the 7th Meeting of the European Association for Computational Linguistics. 210-214.

A. Voutilainen and L. Padró. 1997. Developing a Hybrid NP Parser. In Proceedings of ANLP'97.
Comparing a Linguistic and a Stochastic Tagger

Christer Samuelsson, Lucent Technologies Bell Laboratories, 600 Mountain Ave, Room 2D-339, Murray Hill, NJ 07974, USA. christer@research.bell-labs.com
Atro Voutilainen, Research Unit for Multilingual Language Technology, P.O. Box 4, FIN-00014 University of Helsinki, Finland. Atro.Voutilainen@Helsinki.FI

Abstract

Concerning different approaches to automatic PoS tagging: EngCG-2, a constraint-based morphological tagger, is compared in a double-blind test with a state-of-the-art statistical tagger on a common disambiguation task using a common tag set. The experiments show that for the same amount of remaining ambiguity, the error rate of the statistical tagger is one order of magnitude greater than that of the rule-based one. The two related issues of priming effects compromising the results and disagreement between human annotators are also addressed.

1 Introduction

There are currently two main methods for automatic part-of-speech tagging. The prevailing one uses essentially statistical language models automatically derived from usually hand-annotated corpora. These corpus-based models can be represented e.g. as collocational matrices (Garside et al. (eds.) 1987; Church 1988), Hidden Markov models (cf. Cutting et al. 1992), local rules (e.g. Hindle 1989) and neural networks (e.g. Schmid 1994). Taggers using these statistical language models are generally reported to assign the correct and unique tag to 95-97% of words in running text, using tag sets ranging from some dozens to about 130 tags.

The less popular approach is based on hand-coded linguistic rules. Pioneering work was done in the 1960's (e.g. Greene and Rubin 1971). Recently, new interest in the linguistic approach has been shown e.g. in the work of (Karlsson 1990; Voutilainen et al. 1992; Oflazer and Kuruöz 1994; Chanod and Tapanainen 1995; Karlsson et al. (eds.) 1995; Voutilainen 1995). The first serious linguistic competitor to data-driven statistical taggers is the English Constraint Grammar parser, EngCG (cf. Voutilainen et al. 1992; Karlsson et al. (eds.) 1995). The tagger consists of the following sequentially applied modules:

1. Tokenisation
2. Morphological analysis
   (a) Lexical component
   (b) Rule-based guesser for unknown words
3. Resolution of morphological ambiguities

The tagger uses a two-level morphological analyser with a large lexicon and a morphological description that introduces about 180 different ambiguity-forming morphological analyses, as a result of which each word gets 1.7-2.2 different analyses on an average. Morphological analyses are assigned to unknown words with an accurate rule-based 'guesser'. The morphological disambiguator uses constraint rules that discard illegitimate morphological analyses on the basis of local or global context conditions. The rules can be grouped as ordered subgrammars: e.g. heuristic subgrammar 2 can be applied for resolving ambiguities left pending by the more 'careful' subgrammar 1.

Older versions of EngCG (using about 1,150 constraints) are reported (Voutilainen et al. 1992; Voutilainen and Heikkilä 1994; Tapanainen and Voutilainen 1994; Voutilainen 1995) to assign a correct analysis to about 99.7% of all words, while each word in the output retains 1.04-1.09 alternative analyses on an average, i.e. some of the ambiguities remain unresolved.

These results have been seriously questioned. One doubt concerns the notion 'correct analysis'. For example Church (1992) argues that linguists who manually perform the tagging task using the double-blind method disagree about the correct analysis in at least 3% of all words even after they have negotiated about the initial disagreements. If this were the case, reporting accuracies above this 97% 'upper bound' would make no sense.

However, Voutilainen and Järvinen (1995) empirically show that an interjudge agreement virtually of 100% is possible, at least with the EngCG tag set if not with the original Brown Corpus tag set. This consistent applicability of the EngCG tag set is explained by characterising it as grammatically rather than semantically motivated.
For example Church (1992) argues that linguists who manually perform the tagging task using the double- blind method disagree about the correct analysis in at least 3% of all words even after they have nego- tiated about the initial disagreements. If this were the case, reporting accuracies above this 97% "upper bound' would make no sense. However, Voutilainen and J~rvinen (1995) empir- ically show that an interjudge agreement virtually of 1()0% is possible, at least with the EngCG tag set if not with the original Brown Corpus tag set. This consistent applicability of the EngCG tag set is ex- plained by characterising it as grammatically rather than semantically motivated. 246 Another main reservation about the EngCG fig- ures is the suspicion that, perhaps partly due to the somewhat underspecific nature of the EngCG tag set, it must be so easy to disambiguate that also a statistical tagger using the EngCG tags would reach at least as good results. This argument will be ex- amined in this paper. It will be empirically shown (i) that the EngCG tag set is about as difficult for a probabilistic tagger as more generally used tag sets and (ii) that the EngCG disambiguator has a clearly smaller error rate than the probabilistic tagger when a similar (small) amount of ambiguity is permitted in the output. A state-of-the-art statistical tagger is trained on a corpus of over 350,000 words hand-annotated with EngCG tags. then both taggers (a new version known as En~CG-21 with 3,600 constraints as five subgrammars-, and a statistical tagger) are applied to the same held-out benchmark corpus of 55,000 words, and their performances are compared. The results disconfirm the suspected 'easiness' of the EngCG tag set: the statistical tagger's performance figures are no better than is the case with better known tag sets. Two caveats are in order. What we are not ad- dressing in this paper is the work load required for making a rule-based or a data-driven tagger. The rules in EngCG certainly took a considerable effort to write, and though at the present state of knowl- edge rules could be written and tested with less ef- fort, it may well be the case that a tagger with an accuracy of 95-97% can be produced with less effort by using data-driven techniques. 3 Another caveat is that EngCG alone does not re- solve all ambiguities, so it cannot be compared to a typical statistical tagger if full disambiguation is re- quired. However, "~butilainen (1995) has shown that EngCG combined with a syntactic parser produces morphologically unambiguous output with an accu- racy of 99.3%, a figure clearly better than that of the statistical tagger in the experiments below (however. the test data was not the same). Before examining the statistical tagger, two prac- tical points are addressed: the annotation of tile cor- pora used. and the modification of the EngCG tag set for use in a statistical tagger. 1An online version of EngCG-2 can be found at, ht tp://www.ling.helsinki.fi/"avoutila/engcg-2.ht ml. :The first three subgrammars are generally highly re- liable and almost all of the total grammar development time was spent on them: the last two contain rather rough heuristic constraints. 3However, for an interesting experiment suggesting otherwise, see (Chanod and Tapanainen 1995). 
2 Preparation of Corpus Resources

2.1 Annotation of training corpus

The stochastic tagger was trained on a sample of 357,000 words from the Brown University Corpus of Present-Day English (Francis and Kučera 1982) that was annotated using the EngCG tags. The corpus was first analysed with the EngCG lexical analyser, and then it was fully disambiguated and, when necessary, corrected by a human expert. This annotation took place a few years ago. Since then, it has been used in the development of new EngCG constraints (the present version, EngCG-2, contains about 3,600 constraints): new constraints were applied to the training corpus, and whenever a reading marked as correct was discarded, either the analysis in the corpus, or the constraint itself, was corrected. In this way, the tagging quality of the corpus was continuously improved.

2.2 Annotation of benchmark corpus

Our comparisons use a held-out benchmark corpus of about 55,000 words of journalistic, scientific and manual texts, i.e., no training effects are expected for either system. The benchmark corpus was annotated by first applying the preprocessor and morphological analyser, but not the morphological disambiguator, to the text. This morphologically ambiguous text was then independently and fully disambiguated by two experts whose task was also to detect any errors potentially produced by the previously applied components. They worked independently, consulting written documentation of the tag set when necessary. Then these manually disambiguated versions were automatically compared with each other. At this stage, about 99.3% of all analyses were identical. When the differences were collectively examined, virtually all were agreed to be due to clerical mistakes. Only in the analysis of 21 words did different (meaning-level) interpretations persist, and even here both judges agreed the ambiguity to be genuine. One of these two corpus versions was modified to represent the consensus, and this 'consensus corpus' was used as a benchmark in the evaluations.

As explained in Voutilainen and Järvinen (1995), this high agreement rate is due to two main factors. Firstly, distinctions based on some kind of vague semantics are avoided, which is not always the case with better known tag sets. Secondly, the adopted analysis of most of the constructions where humans tend to be uncertain is documented as a collection of tag application principles in the form of a grammarian's manual (for further details, cf. Voutilainen and Järvinen 1995).

The corpus-annotation procedure allows us to perform a text-book statistical hypothesis test. Let the null hypothesis be that any two human evaluators will necessarily disagree in at least 3% of
< o.o3- 1.64%/-g,o-g6 ) - = P(A <__ 0.0288) < 0.05 We can thus discard the null hypothesis at signifi- cance level 5% if the observed disagreement is less than 2.88%. It was in fact 0.7% before error cor- .21) rection, and virtually zero ( ~ after negotia- tion. This means that we can actually discard the hypotheses that the human evaluators in average disagree in at least 0.8% of the cases before error correction, and in at least 0.1% of the cases after negotiations, at significance level 5%. 2.3 Tag set conversion The EugCG morphological analyser's output for- mally differs from most tagged corpora; consider the following 5-ways ambiguous analysis of "'walk": walk walk <SV> <SVO> V SUBJUNCTIVE VFIN walk <SV> <SVO> V IMP VFIN walk <SV> <SVG> V INF walk <SV> <SVO> V PRES -SG3 VFIN walk N NOM SG Statistical taggers usually employ single tags to indicate analyses (e.g. "'NN" for "'N NOM SG"). Therefore a simple conversion program was made for producing the following kind of output, where each reading is represented as a single tag: walk V-SUBJUNCTIVE V-IMP V-INF V-PRES-BASE N-NOM-SG The conversion program reduces the multipart EngCG tags into a set of 80 word tags and 17 punc- tuation tags (see Appendix) that retain the central linguistic characteristics of the original EngCG tag set. A reduced version of the benchmark corpus was prepared with this conversion program for the sta- tistical tagger's use. Also EngCG's output was con- verted into this format to enable direct comparison with the statistical tagger. 8 The Statistical Tagger The statistical tagger used in the experiments is a classical trigram-based HMM decoder of the kind described in e.g. (Church 1988), (DeRose 1988) and numerous other articles. Following conventional no- tation, e.g. (Rabiner 1989, pp. 272-274) and (Krenn and Samuelsson 1996, pp. 42-46), the tagger recur- sively calculates the ~, 3, 7 and 6 variables for each word string position t = 1 ..... T and each possible state 4 si : i = 1,...,n: a,(i) = P(W<,;S, = si) .'3,(i) = P(W>, IS, = s~) 7t{i) --- &(i) = Here W W5t W>t Sst P(W; & = si) P(&=siIW) = P(W) ~,(i). 3,(i) r6 y~o~,(i). 3,(i) i=l max P(S<t-l, S= = si; W<,) S<,_t = l/V1 = wlq,..., ~VT = Wkr -- ~'VI = wk~ , . . . , Wt = wk, "- l~Vt+l = wk,+ t, • •., I'VT = Wkr -= S1 = si~ ..... St = si, where St = si is the event of the tth word being emitted from state si and Wt = wk, is the event of the tth word being the particular word w~, that was actually observed in the word string. Note that for t = 1 ..... T-1 ; i,j- l ..... n at+~(j) 3,(0 = ~ 3,+1(j) "Pij .aj~,+~ j=l where pij = P(St+I = sj I St = si) are the transi- tion probabilities, encoding the tag N-gram proba- bilities, and ajk = = P(Wt=wkIS,=sj) = P(Wt=w~l,\'t=zj) 4The N-I th-order HMM corresponding to an N-gram tagger is encoded as a first-order HMM, where each state corresponds to a sequence of ,V-I tags, i.e., for a trigram tagger, each state corresponds to a tag pair. 248 are the lexical probabilities. Here X, is the random variable of assigning a tag to the tth word and xj is the last tag of the tag sequence encoded as state sj. Note that si # sj need not imply zi # zj. 
More precisely, the tagger employs the converse lexical probabilities P(Xt = zj I Wt = w,) ajk a~ k = P(X, = zj) P(W, = wk) This results in slight variants a', fl', 7' and 6' of the original quantities: ~,(i) 6,(i) ' = = I-[ P(Wu = o4(i ) 6;(i) .=1 ~,(i) r - H P(W~ =w~=) /3;(i) u=t+l and thus Vi, t 7~(i) = a;(i) ./3;(i) = ka;(i) ./3;(i1 i=1 ~,(i) .~,(i) and Vt ~e,(i) ./3t(i) i=1 = 7t(0 argmax6;(i) = argmax6t(i) l<i<n l<i<n The rationale behind this is to facilitate estimat- ing the model parameters from sparse data. In more detail, it is easy to estimate P(tag I word) for a pre- viously unseen word by backing off to statistics de- rived from words that end with the same sequence of letters (or based on other surface cues), whereas directly estimating P(word I tag) is more difficult. This is particularly useful for languages with a rich inflectional and derivational morphology, but also for English: for example, the suffix "-tion" is a strong indicator that the word in question is a noun; the suffix "-able" that it is an adjective. More technically, the lexicon is organised as a reverse-suffix tree, and smoothing the probability es- timates is accomplished by blending the distribution at the current node of the tree with that of higher- level nodes, corresponding to (shorter) suffixes of the current word (suffix). The scheme also incorporates probability distributions for the set of capitalized words, the set of all-caps words and the set of in- frequent words, all of which are used to improve the estimates for unknown words. Employing a small amount of back-off smoothing also for the known words is useful to reduce lexical tag omissions. Em- pirically, looking two branching points up the tree for known words, and all the way up to the root for unknown words, proved optimal. The method for blending the distributions applies equally well to smoothing the transition probabilities pij, i.e., the tag N-gram probabilities, and both the scheme and its application to these two tasks are described in de- tail in (Samuelsson 1996), where it was also shown to compare favourably to (deleted) interpolation, see (Jelinek and Mercer 1980), even when the back-off weights of the latter were optimal. The 6 variables enable finding the most probable state sequence under the HMM, from which the most likely assignment of tags to words can be directly es- tablished. This is the normal modus operandi of an HMM decoder. Using the 7 variables, we can calcu- late the probability of being in state si at string po- sition t, and thus having emitted wk, from this state, conditional on the entire word string. By summing over all states that would assign the same tag to this word, the individual probability of each tag being as- signed to any particular input word, conditional on the entire word string, can be calculated: P(X, = zilW) = = Z P(S,=sj t W) = E 7,(J) 8j:rj=r i $j:rj =~'= This allows retaining multiple tags for each word by simply discarding only low-probability tags; those whose probabilities are below some threshold value. Of course, the most probable tag is never discarded, even if its probability happens to be less than the threshold value. By varying the threshold, we can perform a recall-precision, or error-rate-ambiguity, tradeoff. A similar strategy is adopted in (de Mar- cken 1990). 4 Experiments The statistical tagger was trained on 357,000 words from the Brown corpus (Francis and Ku~era 1982), reannotated using the EngCG annotation scheme (see above). 
In a first set of experiments, a 35,000 word subset of this corpus was set aside and used to evaluate the tagger's performance when trained on successively larger portions of the remaining 322,000 words. The learning curve, showing the error rate al- ter full disambiguation as a function of the amount of training data used, see Figure 1, has levelled off at 322,000 words, indicating that little is to be gained from further training. We also note that the ab- solute value of the error rate is 3.51% -- a typi- cal state-of-the-art figure. Here, previously unseen words contribute 1.08% to the total error rate, while the contribution from lexical tag omissions is 0.08% 95% confidence intervals for the error rates would range from + 0.30% for 30,000 words to + 0.20~c at 322.000 words. The tagger was then trained on the entire set of 357,000 words and confronted with the separate 55,000-word benchmark corpus, and run both in full 249 8 v 6 .~ 5 ~ 4 ~ 3 o 2 1 0 Learning curve , I I I I I I 0 50 I00 150 200 250 300 Training set (kWords) Figure 1: Learning curve for the statistical tagger on the Brown corpus. Ambiguity (Tags/word) 1.000 1.012 1.025 1.026 1.035 1.038 1.048 1.051 1.059 1.065 1.070 1.078 1.093 Error rate (%) Statistical Tagger EngCG (~) (7) 4.72 4.68 4.20 3.75 (3.72) (3.48) 3.40 (3.20) 3.14 (2.99) 2.87 (2.80) 2.69 2.55 0.43 0.29 0.15 0.12 0.10 Table h Error-rate-ambiguity tradeoff for both tag- gets on the benchmark corpus. Parenthesized num- bers are interpolated. and partial disambiguation mode. Table 1 shows the error rate as a function of remaining ambiguity (tags/word) both for the statistical tagger, and for the EngCG-2 tagger. The error rate for full disana- biguation using the 6 variables is 4.72% and using the 7 variables is 4.68%, both -4-0.18% with confi- dence degree 95%. Note that the optimal tag se- quence obtained using the 7 variables need not equal the optimal tag sequence obtained using the 6 vari- ables. In fact, the former sequence may be assigned zero probability by the HMM, namely if one of its state transitions has zero probability. Previously unseen words account for 2.01%, and lexical tag omissions for 0.15% of the total error rate. These two error sources are together exactly 1.00% higher on the benchmark corpus than on the Brown corpus, and account for almost the entire difference in error rate. They stem from using less complete lexical information sources, and are most likely the effect of a larger vocabulary overlap between the test and training portions of the Brown corpus than be- tween the Brown and benchmark corpora. The ratio between the error rates of the two tag- gets with the same amount of remaining ambiguity ranges from 8.6 at 1.026 tags/word to 28,0 at 1.070 tags/word. The error rate of the statistical tagger can be further decreased, at the price of increased remaining ambiguity, see Figure 2. In the limit of retaining all possible tags, the residual error rate is entirely due to lexical tag omissions, i.e., it is 0.15%, with in average 14.24 tags per word. The reason that this figure is so high is that the unknown words, which comprise 10% of the corpus, are assigned all possible tags as they are backed off all the way to the root of the reverse-suffix tree. 5 v 4 3 2 O 0 Error-rate-ambiguity trade-off i ! i l i l i I I I I i I r- 2 4 6 8 i0 12 14 Remaining ambiguity (Tags/Word) Figure 2: Error-rate-ambiguity tradeoff for the sta- tistical tagger on the benchmark corpus. 
5 Discussion

Recently voiced scepticisms concerning the superior EngCG tagging results boil down to the following:

• The reported results are due to the simplicity of the tag set employed by the EngCG system.
• The reported results are an effect of trading high ambiguity resolution for lower error rate.
• The results are an effect of so-called priming of the human annotators when preparing the test corpora, compromising the integrity of the experimental evaluations.

In the current article, these points of criticism were investigated. A state-of-the-art statistical tagger, capable of performing error-rate-ambiguity tradeoff, was trained on a 357,000-word portion of the Brown corpus reannotated with the EngCG tag set, and both taggers were evaluated using a separate 55,000-word benchmark corpus new to both systems. This benchmark corpus was independently disambiguated by two linguists, without access to the results of the automatic taggers. The initial differences between the linguists' outputs (0.7% of all words) were jointly examined by the linguists; practically all of them turned out to be clerical errors (rather than the product of genuine difference of opinion).

In the experiments, the performance of the EngCG-2 tagger was radically better than that of the statistical tagger: at ambiguity levels common to both systems, the error rate of the statistical tagger was 8.6 to 28 times higher than that of EngCG-2. We conclude that neither the tag set used by EngCG-2, nor the error-rate-ambiguity tradeoff, nor any priming effects can possibly explain the observed difference in performance.

Instead we must conclude that the lexical and contextual information sources at the disposal of the EngCG system are superior. Investigating this empirically, by granting the statistical tagger access to the same information sources as those available in the Constraint Grammar framework, constitutes future work.

Acknowledgements

Though Voutilainen is the main author of the EngCG-2 tagger, the development of the system has benefited from several other contributions too. Fred Karlsson proposed the Constraint Grammar framework in the late 1980s. Juha Heikkilä and Timo Järvinen contributed with their work on English morphology and lexicon. Kimmo Koskenniemi wrote the software for morphological analysis. Pasi Tapanainen has written various implementations of the CG parser, including the recent CG-2 parser (Tapanainen 1996).

The quality of the investigation and presentation was boosted by a number of suggestions for improvements and (often sceptical) comments from numerous ACL reviewers and UPenn associates, in particular from Mark Liberman.

References

J-P Chanod and P. Tapanainen. 1995. Tagging French: comparing a statistical and a constraint-based method. In Procs. 7th Conference of the European Chapter of the Association for Computational Linguistics, pp. 149-157, ACL, 1995.

K. W. Church. 1988. "A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text." In Procs. 2nd Conference on Applied Natural Language Processing, pp. 136-143, ACL, 1988.

K. Church. 1992. Current Practice in Part of Speech Tagging and Suggestions for the Future. In Simmons (ed.), Sbornik praci: In Honor of Henry Kučera. Michigan Slavic Studies, 1992.

D. Cutting, J. Kupiec, J. Pedersen and P. Sibun. 1992. A Practical Part-of-Speech Tagger. In Procs. 3rd Conference on Applied Natural Language Processing, pp. 133-140, ACL, 1992.

S. J. DeRose. 1988.
"Grammatical Category Disambiguation by Statistical Optimization". In Computational Linguistics 14(1), pp. 31-39, ACL, 1988. N. W. Francis and H. Ku~era. 1982. Fre- quency Analysis of English Usage, Houghton Mif- flin, Boston, 1982. R. Garside, G. Leech and G. Sampson (eds.). 1987. The Computational Analysis of English. London and New York: Longman, 1987. B. Greene and G. Rubin. 1971. Automatic gram- matical tagging of English. Brown University, Providence, 1971. D. Hindle. 1989. Acquiring disambiguation rules from text. In Procs. 27th Annual Meeting of the Association for Computational Linguistics, pp. 118-125, ACL, 1989. F. Jelinek and R. L. Mercer. 1980. "Interpolated Estimation of Markov Source Paramenters from Sparse Data". Pattern Recognition in Practice: 381-397. North Holland, 1980. F. Karlsson. 1990. Constraint Grammar as a Framework for Parsing Running Text. In Procs. CoLing'90. In Procs. 14th International Confer- ence on Computational Linguistics, ICCL, 1990. F. Karlsson, A. Voutilainen, J. Heikkilii and A. Anttila (eds.). 1995. Constraint Grammar. A Language-Independent System for Parsing Unre- stricted Tezt. Berlin and New York: Mouton de Gruyter, 1995. B. Krenn and C. Samuelsson. The Linguist's Guide to Statistics. Version of April 23, 1996. http ://coli. uni-sb, de/~christ er. C. G. de Marcken. 1990. "Parsing the LOB Cor- pus". In Procs. 28th Annual Meeting of the As- sociation for Computational Linguistics, pp. 243- 251, ACL, 1990. K. Oflazer and I. KuruSz. 1994. Tagging and morphological disambiguation of Turkish text. In Procs. 4th Conference on Applied Natural La1~- guage Processing. ACL. 1994. L. R. Rabiner. 1989. "A Tutorial on Hid- den Markov Models and Selected Applications in Speech Recognition". In Readings in Speech Recognition, pp. 267-296. Alex Waibel and Kai- Fu Lee (eds), Morgan I<aufmann, 1990. G. Sampson. 1995. English for the Computer, Ox- ford University Press. 1995. 251 C. Samuelsson. 1996. "Handling Sparse Data by Successive Abstraction". In Procs. 16th Interna- tional Conference on Computational Linguistics, pp. 895-900, ICCL, 1996. H. Schmid. 1994. Part-of-speech tagging with neu- ral networks. In Procs. 15th International Confer- ence on Computational Linguistics, pp. 172-176, ICCL, 1994. P. Tapanainen. 1996. The Constraint Grammar Parser CG-2. Publ. 27, Dept. General Linguistics, University of Helsinki, 1996. P. Tapanainen and A. Voutilainen. 1994. Tagging accurately - don't guess if you know. In Procs. 4th Conference on Applied Natural Language Process- ing, ACL, 1994. A. Voutilainen. 1995. "A syntax-based part of speech analyser". In Procs. 7th Conference of the European Chapter of the Association for Compu- tational Linguistics, pp. 157-164, ACL, 1995. A. Voutilainen and J. Heikkil~. 1994. An English constraint grammar (EngCG): a surface-syntactic parser of English. In Fries, Tottie and Schneider (eds.), Creating and using English language cor- pora, Rodopi, 1994. A. Voutilainen, J. Heikkil~ and A. Anttila. 1992. Constraint Grammar of English. A Performance- Oriented Introduction. Publ. 21, Dept. General Linguistics, University of Helsinki, 1992. A. Voutilainen and T. J~irvinen. "Specifying a shal- low grammatical representation for parsing pur- poses". In Procs. 7th Conference of the Euro- pean Chapter of the Association for Computa- tional Linguistics, pp. 210-214, ACL, 1995. 
Appendix: Reduced EngCG tag set

Punctuation tags:
@colon, @comma, @dash, @dotdot, @dquote, @exclamation, @fullstop, @lparen, @rparen, @lquote, @rquote, @slash, @newlines, @question, @semicolon

Word tags:
A-ABS, A-CMP, A-SUP, ABBR-GEN-SG/PL, ABBR-GEN-PL, ABBR-GEN-SG, ABBR-NOM-SG/PL, ABBR-NOM-PL, ABBR-NOM-SG, ADV-ABS, ADV-CMP, ADV-SUP, ADV-WH, BE-EN, BE-IMP, BE-INF, BE-ING, BE-PAST-BASE, BE-PAST-WAS, BE-PRES-AM, BE-PRES-ARE, BE-PRES-IS, BE-SUBJUNCTIVE, CC, CCX, CS, DET-SG/PL, DET-SG, DET-WH, DO-EN, DO-IMP, DO-INF, DO-ING, DO-PAST, DO-PRES-BASE, DO-PRES-SG3, DO-SUBJUNCTIVE, EN, HAVE-EN, HAVE-IMP, HAVE-INF, HAVE-ING, HAVE-PAST, HAVE-PRES-BASE, HAVE-PRES-SG3, HAVE-SUBJUNCTIVE, I, INFMARK, ING, N-GEN-SG/PL, N-GEN-PL, N-GEN-SG, N-NOM-SG/PL, N-NOM-PL, N-NOM-SG, NEG, NUM-CARD, NUM-FRA-PL, NUM-FRA-SG, NUM-ORD, PREP, PRON, PRON-ACC, PRON-CMP, PRON-DEM-PL, PRON-DEM-SG, PRON-GEN, PRON-INTERR, PRON-NOM-SG/PL, PRON-NOM-PL, PRON-NOM-SG, PRON-REL, PRON-SUP, PRON-WH, V-AUXMOD, V-IMP, V-INF, V-PAST, V-PRES-BASE, V-PRES-SG1, V-PRES-SG2, V-PRES-SG3, V-SUBJUNCTIVE
Intonational Boundaries, Speech Repairs and Discourse Markers: Modeling Spoken Dialog

Peter A. Heeman and James F. Allen
Department of Computer Science
University of Rochester
Rochester NY 14627, USA
{heeman, james}@cs.rochester.edu

Abstract

To understand a speaker's turn of a conversation, one needs to segment it into intonational phrases, clean up any speech repairs that might have occurred, and identify discourse markers. In this paper, we argue that these problems must be resolved together, and that they must be resolved early in the processing stream. We put forward a statistical language model that resolves these problems, does POS tagging, and can be used as the language model of a speech recognizer. We find that by accounting for the interactions between these tasks the performance on each task improves, as does POS tagging and perplexity.

1 Introduction

Interactive spoken dialog provides many new challenges for natural language understanding systems. One of the most critical challenges is simply determining the speaker's intended utterances: both segmenting the speaker's turn into utterances and determining the intended words in each utterance. Since there is no well-agreed-to definition of what an utterance is, we instead focus on intonational phrases (Silverman et al., 1992), which end with an acoustically signaled boundary tone. Even assuming perfect word recognition, the problem of determining the intended words is complicated due to the occurrence of speech repairs, which occur where the speaker goes back and changes (or repeats) something she just said. The words that are replaced or repeated are no longer part of the intended utterance, and so need to be identified. The following example, from the Trains corpus (Heeman and Allen, 1995), gives an example of a speech repair with the words that the speaker intends to be replaced marked as the reparandum, the words that are the intended replacement marked as the alteration, and the cue phrases and filled pauses that tend to occur in between marked as the editing term.

Example 1 (d92a-5.2 utt34)
we'll pick up a tank of uh the tanker of oranges
[reparandum: "a tank of"; editing term: "uh"; alteration: "the tanker of"; the interruption point falls between the reparandum and the editing term]

Much work has been done on both detecting boundary tones (e.g. (Wang and Hirschberg, 1992; Wightman and Ostendorf, 1994; Stolcke and Shriberg, 1996a; Kompe et al., 1994; Mast et al., 1996)) and on speech repair detection and correction (e.g. (Hindle, 1983; Bear, Dowding, and Shriberg, 1992; Nakatani and Hirschberg, 1994; Heeman and Allen, 1994; Stolcke and Shriberg, 1996b)). This work has focused on one of the issues in isolation of the other. However, these two issues are intertwined. Cues such as the presence of silence, final syllable lengthening, and presence of filled pauses tend to mark both events. Even the presence of word correspondences, a traditional cue for detecting and correcting speech repairs, sometimes marks boundary tones as well, as illustrated by the following example where the intonational phrase boundary is marked with the ToBI symbol %.

Example 2 (d93-83.3 utt73)
that's all you need % you only need one boxcar

Intonational phrases and speech repairs also interact with the identification of discourse markers. Discourse markers (Schiffrin, 1987; Hirschberg and Litman, 1993; Byron and Heeman, 1997) are used to relate new speech to the current discourse state.
Lexical items that can function as discourse markers, such as "well" and "okay," are ambiguous as to whether they are being used as discourse markers or not. The complication is that discourse markers tend to be used to introduce a new utterance, or can be an utterance all to themselves (such as the acknowledgment "okay" or "alright"), or can be used as part of the editing term of a speech repair, or to begin the alteration. Hence, the problem of identifying discourse markers also needs to be addressed with the segmentation and speech repair problems.

These three phenomena of spoken dialog, however, cannot be resolved without recourse to syntactic information. Speech repairs, for example, are often signaled by syntactic anomalies. Furthermore, in order to determine the extent of the reparandum, one needs to take into account the parallel structure that typically exists between the reparandum and alteration, which relies on identifying the syntactic roles, or part-of-speech (POS) tags, of the words involved (Bear, Dowding, and Shriberg, 1992; Heeman and Allen, 1994). However, speech repairs disrupt the context that is needed to determine the POS tags (Hindle, 1983). Hence, speech repairs, as well as boundary tones and discourse markers, must be resolved during syntactic disambiguation.

Of course, when dealing with spoken dialogue, one cannot forget the initial problem of determining the actual words that the speaker is saying. Speech recognizers rely on being able to predict the probability of what word will be said next. Just as intonational phrases and speech repairs disrupt the local context that is needed for syntactic disambiguation, the same holds for predicting what word will come next. If a speech repair or intonational phrase occurs, this will alter the probability estimate. But more importantly, speech repairs and intonational phrases have acoustic correlates such as the presence of silence. Current speech recognition language models cannot account for the presence of silence, and tend to simply ignore it. By modeling speech repairs and intonational boundaries, we can take into account the acoustic correlates and hence use more of the available information.

From the above discussion, it is clear that we need to model these dialogue phenomena together and very early on in the speech processing stream, in fact, during speech recognition. Currently, the approaches that work best in speech recognition are statistical approaches that are able to assign probability estimates for what word will occur next given the previous words. Hence, in this paper, we introduce a statistical language model that can detect speech repairs, boundary tones, and discourse markers, can assign POS tags, and can use this information to better predict what word will occur next. In the rest of the paper, we first introduce the Trains corpus. We then introduce a statistical language model that incorporates POS tagging and the identification of discourse markers. We then augment this model with speech repair detection and correction and intonational boundary tone detection. We then present the results of this model on the Trains corpus and show that it can better account for these discourse events than can be achieved by modeling them individually. We also show that by modeling these two phenomena we can increase our POS tagging performance by 8.6%, and improve our ability to predict the next word.
Table 1: Frequency of Tones, Repairs and Editing Terms in the Trains Corpus

  Dialogs                          98
  Speakers                         34
  Words                         58298
  Turns                          6163
  Discourse Markers              8278
  Boundary Tones                10947
  Turn-Internal Boundary Tones   5535
  Abridged Repairs                423
  Modification Repairs           1302
  Fresh Starts                    671
  Editing Terms                  1128

2 Trains Corpus

As part of the TRAINS project (Allen et al., 1995), which is a long term research project to build a conversationally proficient planning assistant, we have collected a corpus of problem solving dialogs (Heeman and Allen, 1995). The dialogs involve two human participants, one who is playing the role of a user and has a certain task to accomplish, and another who is playing the role of the system by acting as a planning assistant. The collection methodology was designed to make the setting as close to human-computer interaction as possible, but was not a wizard scenario, where one person pretends to be a computer. Rather, the user knows that he is talking to another person.

The TRAINS corpus consists of about six and a half hours of speech. Table 1 gives some general statistics about the corpus, including the number of dialogs, speakers, words, speaker turns, and occurrences of discourse markers, boundary tones and speech repairs.

The speech repairs in the Trains corpus have been hand-annotated. We have divided the repairs into three types: fresh starts, modification repairs, and abridged repairs.1 A fresh start is where the speaker abandons the current utterance and starts again, where the abandonment seems acoustically signaled.

Example 3 (d93-12.1 utt30)
so it'll take um so you want to do what
[reparandum: "so it'll take"; editing term: "um"; the interruption point falls between them]

1 This classification is similar to that of Hindle (1983) and Levelt (1983).

The second type of repairs are the modification repairs. These include all other repairs in which the reparandum is not empty.

Example 4 (d92a-1.3 utt65)
so that will total will take seven hours to do that
[reparandum: "will total"; alteration: "will take"; the interruption point falls between them]

The third type of repairs are the abridged repairs, which consist solely of an editing term. Note that utterance-initial filled pauses are not treated as abridged repairs.

Example 5 (d93-14.3 utt42)
we need to um manage to get the bananas to Dansville
[editing term: "um"; the interruption point immediately precedes it]

There is typically a correspondence between the reparandum and the alteration, and following Bear et al. (1992), we annotate this using the labels m for word matching and r for word replacements (words of the same syntactic category). Each pair is given a unique index. Other words in the reparandum and alteration are annotated with an x. Also, editing terms (filled pauses and cue words) are labeled with et, and the interruption point with ip, which will occur before any editing terms associated with the repair, and after a word fragment, if present. The interruption point is also marked as to whether the repair is a fresh start, modification repair, or abridged repair, in which cases we use ip:can, ip:mod and ip:abr, respectively. The example below illustrates how a repair is annotated in this scheme.

Example 6 (d93-15.2 utt42)
engine two from Elmi(ra)-        or  engine three from Elmira
  m1    r2    m3   m4     ip:mod et    m1    r2    m3    m4
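To make the annotation scheme concrete, the following sketch (ours, not the authors' code; the Token class and all names are illustrative) represents Example 6 as data and checks the pairing of the reparandum with the alteration.

  # A minimal sketch of the repair annotation scheme: word correspondences
  # (m = match, r = replacement, x = other), editing terms (et), and the
  # interruption point (ip). Indices pair a reparandum word with its
  # alteration counterpart.
  from dataclasses import dataclass
  from typing import Optional

  @dataclass
  class Token:
      word: str
      label: str                   # 'm', 'r', 'x', or 'et'
      index: Optional[int] = None  # shared index for m/r correspondences

  # Example 6: "engine two from Elmi(ra)- or engine three from Elmira"
  annotated = [
      Token("engine", "m", 1), Token("two", "r", 2),
      Token("from", "m", 3), Token("Elmi(ra)-", "m", 4),
      # ip:mod falls here, after the word fragment
      Token("or", "et"),
      Token("engine", "m", 1), Token("three", "r", 2),
      Token("from", "m", 3), Token("Elmira", "m", 4),
  ]
  interruption_point = 4   # position just after "Elmi(ra)-"

  # The reparandum precedes the interruption point; the alteration is what
  # follows once editing terms are stripped out.
  reparandum = annotated[:interruption_point]
  alteration = [t for t in annotated[interruption_point:] if t.label != "et"]
  assert [t.index for t in reparandum] == [t.index for t in alteration]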
3 A POS-Based Language Model

The goal of a speech recognizer is to find the sequence of words W that is maximal given the acoustic signal A. However, for detecting and correcting speech repairs, and identifying boundary tones and discourse markers, we need to augment the model so that it incorporates shallow statistical analysis, in the form of POS tagging. The POS tagset, based on the Penn Treebank tagset (Marcus, Santorini, and Marcinkiewicz, 1993), includes special tags for denoting when a word is being used as a discourse marker. In this section, we give an overview of our basic language model that incorporates POS tagging. Full details can be found in (Heeman and Allen, 1997; Heeman, 1997).

To add in POS tagging, we change the goal of the speech recognition process to find the best word and POS tags given the acoustic signal. The derivation of the acoustic model and language model is now as follows.

  W P = argmax_{W,P} Pr(WP | A)
      = argmax_{W,P} Pr(A | WP) Pr(WP) / Pr(A)
      = argmax_{W,P} Pr(A | WP) Pr(WP)

The first term, Pr(A | WP), is the factor due to the acoustic model, which we can approximate by Pr(A | W). The second term, Pr(WP), is the factor due to the language model. We rewrite Pr(WP) as Pr(W1,N P1,N), where N is the number of words in the sequence. We now rewrite the language model probability as follows.

  Pr(W1,N P1,N) = Prod_{i=1,N} Pr(Wi Pi | W1,i-1 P1,i-1)
                = Prod_{i=1,N} Pr(Wi | W1,i-1 P1,i) Pr(Pi | W1,i-1 P1,i-1)

We now have two probability distributions that we need to estimate, which we do using decision trees (Breiman et al., 1984; Bahl et al., 1989). The decision tree algorithm has the advantage that it uses information-theoretic measures to construct equivalence classes of the context in order to cope with sparseness of data. The decision tree algorithm starts with all of the training data in a single leaf node. For each leaf node, it looks for the question to ask of the context such that splitting the node into two leaf nodes results in the biggest decrease in impurity, where the impurity measures how well each leaf predicts the events in the node. After the tree is grown, a heldout dataset is used to smooth the probabilities of each node with its parent (Bahl et al., 1989).

To allow the decision tree to ask about the words and POS tags in the context, we cluster the words and POS tags using the algorithm of Brown et al. (1992) into a binary classification tree. This gives an implicit binary encoding for each word and POS tag, thus allowing the decision tree to ask about the words and POS tags using simple binary questions, such as 'is the third bit of the POS tag encoding equal to one?' Figure 1 shows a POS classification tree. The binary encoding for a POS tag is determined by the sequence of top and bottom edges that leads from the root node to the node for the POS tag.

[Figure 1: POS Classification Tree]

Unlike other work (e.g. (Black et al., 1992; Magerman, 1995)), we treat the word identities as a further refinement of the POS tags; thus we build a word classification tree for each POS tag. This has the advantage of avoiding unnecessary data fragmentation, since the POS tags and word identities are no longer separate sources of information.
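The factorization above can be sketched directly in code. In the following illustration (ours, not the authors' implementation), the two decision-tree estimators are stood in by hypothetical placeholder functions; only the chain-rule structure is the point.

  # Sketch of Pr(W,P) = prod_i Pr(P_i | context) * Pr(W_i | context, P_i).
  import math

  def pos_prob(pos, words, tags):
      """Stand-in for the decision-tree estimate Pr(P_i | W_{1,i-1} P_{1,i-1})."""
      return 0.5   # placeholder value, for illustration only

  def word_prob(word, words, tags):
      """Stand-in for the decision-tree estimate Pr(W_i | W_{1,i-1} P_{1,i})."""
      return 0.1   # placeholder value, for illustration only

  def sequence_log_prob(words, tags):
      """Log Pr(W_{1,N}, P_{1,N}) under the factorization above."""
      logp = 0.0
      for i in range(len(words)):
          logp += math.log(pos_prob(tags[i], words[:i], tags[:i]))
          logp += math.log(word_prob(words[i], words[:i], tags[:i + 1]))
      return logp

  print(sequence_log_prob(["so", "we", "need"], ["CC_D", "PRP", "VBP"]))

In the real model, each call would query a smoothed decision tree over the binary encodings of the context words and tags.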
4 Augmenting the Model

Just as we redefined the speech recognition problem so as to account for POS tagging and identifying discourse markers, we do the same for modeling boundary tones and speech repairs. We introduce null tokens between each pair of consecutive words wi-1 and wi (Heeman and Allen, 1994), which will be tagged as to the occurrence of these events. The boundary tone tag Ti indicates if word wi-1 ends an intonational boundary (Ti=T), or not (Ti=null).

For detecting speech repairs, we have the problem that repairs are often accompanied by an editing term, such as "um", "uh", "okay", or "well", and these must be identified as such. Furthermore, an editing term might be composed of a number of words, such as "let's see" or "uh well". Hence we use two tags: an editing term tag Ei and a repair tag Ri. The editing term tag indicates if wi starts an editing term (Ei=Push), if wi continues an editing term (Ei=ET), if wi-1 ends an editing term (Ei=Pop), or otherwise (Ei=null). The repair tag Ri indicates whether word wi is the onset of the alteration of a fresh start (Ri=C), a modification repair (Ri=M), or an abridged repair (Ri=A), or there is not a repair (Ri=null). Note that for repairs with an editing term, the repair is tagged after the extent of the editing term has been determined. Below we give an example showing all non-null tone, editing term and repair tags.

Example 7 (d93-18.1 utt47)
it takes one Push you ET know Pop M two hours T

If a modification repair or fresh start occurs, we need to determine the extent (or the onset) of the reparandum, which we refer to as correcting the speech repair. Often, speech repairs have strong word correspondences between the reparandum and alteration, involving word matches and word replacements. Hence, knowing the extent of the reparandum means that we can use the reparandum to predict the words (and their POS tags) that make up the alteration. For Ri in {Mod, Can}, we define Oi to indicate the onset of the reparandum.2

2 Rather than estimate Oi directly, we instead query each potential onset to see how likely it is to be the actual onset of the reparandum.

If we are in the midst of processing a repair, we need to determine if there is a word correspondence from the reparandum to the current word wi. The tag Li is used to indicate which word in the reparandum is licensing the correspondence. Word correspondences tend to exhibit a cross-serial dependency; in other words, if we have a correspondence between wj in the reparandum and wk in the alteration, any correspondence with a word in the alteration after wk will be to a word that is after wj, as illustrated in Figure 2.

[Figure 2: Cross Serial Correspondences, illustrated on "we'll pick up a tank of uh the tanker of oranges"]

This means that if wi involves a word correspondence, it will most likely be with a word that follows the last word in the reparandum that has a word correspondence. Hence, we restrict Li to only those words that are after the last word in the reparandum that has a correspondence (or from the reparandum onset if there is not yet a correspondence). If there is no word correspondence for wi, we set Li to the first word after the last correspondence. The second tag involved in the correspondences is Ci, which indicates the type of correspondence between the word indicated by Li and the current word wi. We focus on word correspondences that involve either a word match (Ci=m), a word replacement (Ci=r), where both words are of the same POS tag, or no correspondence (Ci=x).

Now that we have defined these six additional tags for modeling boundary tones and speech repairs, we redefine the speech recognition problem so that its goal is to find the maximal assignment for the words as well as the POS, boundary tone, and speech repair tags.

  WPCLORET = argmax_{WPCLORET} Pr(WPCLORET | A)

The result is that we now have eight probability distributions that we need to estimate.

  Pr(Ti | W1,i-1 P1,i-1 C1,i-1 L1,i-1 O1,i-1 R1,i-1 E1,i-1 T1,i-1)
  Pr(Ei | W1,i-1 P1,i-1 C1,i-1 L1,i-1 O1,i-1 R1,i-1 E1,i-1 T1,i)
  Pr(Ri | W1,i-1 P1,i-1 C1,i-1 L1,i-1 O1,i-1 R1,i-1 E1,i T1,i)
  Pr(Oi | W1,i-1 P1,i-1 C1,i-1 L1,i-1 O1,i-1 R1,i E1,i T1,i)
  Pr(Li | W1,i-1 P1,i-1 C1,i-1 L1,i-1 O1,i R1,i E1,i T1,i)
WPCLORET = arg max Pr(WCLORET[A) WPCLOItET The result is that we now have eight probability dis- tributions that we need to estimate. Pr (Ti I Wl,i- 1Pl,i-1Cl,i-1Ll, i-101,1-1Rl,i-i El,i-1Tl,i-1 ) Pr( EilWl,i- 1Pl,i-1CI,i-1Ll,l-1 01,1-1Rl,i- 1 El,l-1Tl,i) Pr(Ri [WI,i-1Pl, i-1 el,i- 1 .LI,I- 10l~i-1 RI,I-1 El,iTl,i ) Pr (Oi [ Wl,i-1Pl,i-1Cl,i-1Ll,i-101,1-1Rl,iEl,iTl,i) Pr(Li [W1,,-1Pl,i-1Cl, i-1Ll, i-101,1Rl,i EI,,TI,i ) 2Rather than estimate Oi directly, we instead query each potential onset to see how likely it is to be the actual onset of the reparandum. 257 Pr(CiIW~,+-~ PJ,+-~ Ct,+-~ Ll,i Ol,i Rl,i El, i Zl,i ) Pr( Pi l Wl,i-1PI, i-1CI,i L I,i 01,i R I,i El,i Tl,i ) Pr(W, Pl,i Cl,i L l,i Ol,i Rl,i El,i Zl,i ) The context for each of the probability distribu- tions includes all of the previous context. In princi- pal, we could give all of this context to the decision tree algorithm and let it decide what information is relevant in constructing equivalence classes of the contexts. However, the amount of training data is limited (as are the learning techniques) and so we need to encode the context in order to simplify the task of constructing meaningflfl equivalence classes. We start with the words and their POS tags that are in the context and for each non-null tone, editing term (we also skip over E=ET), and repair tag, we insert it into the appropriate place, just as Kompe et al. (1994) do for boundary tones in their language model. Below we give the encoded context for the word "know" from Example 7 Example 8 (d93-18.1 utt47) it/PRP takes/VBP one/CD Push you/PRP The result of this is that the non-null tag values are treated just as if they were lexical items. 3 Further- more, if an editing term is completed, or the extent of a repair is known, we can also clean up the edit- ing term or reparandum, respectively, in the same way that Stolcke and Shriberg (1996b) clean up filled pauses, and simple repair patterns. This means that we can then generalize between fluent speech and instances that have a repair. For instance, in the two examples below, the context for the word "get" and its POS tag will be the same for both, namely "so/CC_D we/PRP need/VBP to/TO". Example 9 (d93-11.1 utt46) so we need to get the three tankers Example 10 (d92a-2.2 utt6) so we need to Push um Pop A get a tanker of OJ We also include other features of the context. For instance, we include a variable to indicate if we are currently processing an editing term, and whether a non-filled pause editing term was seen. For es- timating Ri, we include the editing terms as well. For estimating Oi, we include whether the proposed reparandum includes discourse markers, filled pauses that are not part of an editing term, boundary terms, and whether the proposed reparandum overlaps with any previous repair. 5 Silences Silence, as well as other acoustic information, can also give evidence as to whether an intonational phrase, speech repair, or editing term occurred. We 3Since we treat the non-null tags as lexical items, we associate a unique POS tag with each value. , , , , , , Fluant -- Tone .... Modification .... Fresh Starl ......... Push ..... . -.. .. .......... , Pop .... / '\ ,._..., ,,,, ...... ".+,,, ..... : .#'%-.:,..<+-.< ....................... t'" /- .......... '., it}',." "'...." ".. '~ ................................ -.::~ . L_, " ............................................ : .......... _ ......... 
5 Silences

Silence, as well as other acoustic information, can also give evidence as to whether an intonational phrase, speech repair, or editing term occurred. We include Si, the silence duration between word wi-1 and wi, as part of the context for conditioning the probability distributions for the tone Ti, editing term Ei, and repair Ri tags. Due to sparseness of data, we make several independence assumptions so that we can separate the silence information from the rest of the context. For example, for the tone tag, let Resti represent the rest of the context that is used to condition Ti. By assuming that Resti and Si are independent, and are independent given Ti, we can rewrite Pr(Ti | Si Resti) as follows.

  Pr(Ti | Si Resti) = Pr(Ti | Resti) * Pr(Ti | Si) / Pr(Ti)

We can now use Pr(Ti | Si) / Pr(Ti) as a factor to modify the tone probability in order to take into account the silence duration. In Figure 3, we give the factors by which we adjust the tag probabilities given the amount of silence. Again, due to sparseness of data, we collapse the values of the tone, editing term and repair tags into six classes: boundary tones, editing term pushes, editing term pops, modification repairs and fresh starts (without an editing term).

[Figure 3: Preference for tone, editing term, and repair tags given the length of silence. Curves for Fluent, Tone, Modification, Fresh Start, Push, and Pop, plotted against silence durations from 0.5 to 3.5.]

From the figure, we see that if there is no silence between wi-1 and wi, the null interpretation for the tone, repair and editing term tags is preferred. Since the independence assumptions that we have to make are too strong, we normalize the adjusted tone, editing term and repair tag probabilities to ensure that they sum to one over all of the values of the tags.
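The adjustment-and-renormalization step can be sketched as follows (ours, not the authors' code; the factor values are made up for illustration).

  # The tag distribution predicted from the lexical context is scaled by a
  # silence-conditioned factor Pr(tag | silence) / Pr(tag), then
  # renormalized because the independence assumptions are too strong.

  def apply_silence_factor(tag_probs, factors):
      """tag_probs: {tag: Pr(tag | rest of context)};
      factors: {tag: Pr(tag | silence) / Pr(tag)}, read off curves like Fig. 3."""
      adjusted = {tag: p * factors.get(tag, 1.0) for tag, p in tag_probs.items()}
      total = sum(adjusted.values())
      return {tag: p / total for tag, p in adjusted.items()}

  # Hypothetical numbers: a long pause favors a boundary tone over 'null'.
  tone_probs = {"T": 0.2, "null": 0.8}
  long_pause_factors = {"T": 2.5, "null": 0.6}   # illustrative only
  print(apply_silence_factor(tone_probs, long_pause_factors))
  # {'T': 0.5102..., 'null': 0.4897...}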
We start with the word transcriptions of the Trains corpus, thus allowing us to get a clearer indication of the performance of our model without having to take into account the poor performance of speech recognizers on spontaneous speech. All si- lence durations are automatically obtained from a word aligner (Ent, 1994). Table 2 shows how POS tagging, discourse marker identification and perplexity benefit by modeling the speaker's utterance. The POS tagging results are re- ported as the percentage of words that were assigned the wrong tag. The detection of discourse markers is reported using recall and precision. The recall rate of X is the number of X events that were correctly determined by the algorithm over the number of oc- currences of X. The precision rate is the number of X events that were correctly determined over the number of times that the algorithm guessed X. The error rate is the number of X events that the algo- rithm missed plus the number of X events that it incorrectly guessed as occurring over the number of X events. The last measure is perplexity, which is Base Model Tones Tones Repairs Repairs Corrections Corrections Silences POS Tagging Error Rate 2.95 2.86 2.69 Discourse Markers Recall 96.60 96.60 97.14 Precision 95.76 95.86 96.31 Error Rate 7.67 7.56 6.57 Perplexity 24.35 23.05 22.45 Table 2: POS Tagging and Perplexity Results Tones Repairs Tones Corrections Tones Silences Silences Within Turn Recall 64.9 70.2 70.5 Precision 67.4 68.7 69.4 Error Rate 66.5 61.9 60.5 All Tones Recall 80.9 83.5 83.9 Precision 81.0 81.3 81.8 Error Rate 38.0 35.7 34.8 Perplexity 24.12 23.78 22.45 Table 3: Detecting Intonational Phrases a way of measuring how well the language model is able to predict the next word. The perplexity of a test set of N words Wl,g is calculated as follows. 1 N 2-~ ~,=1 l°g2 Pr(wdwl, ~-') The second column of Table 2 gives the results of the POS-based model, the third column gives the results of incorporating the detection and cor- rection of speech repairs and detection of intona- tional phrase boundary tones, and the fourth col- umn gives the results of adding in silence informa- tion. As can be seen, modeling the user's utterances improves POS tagging, identification of discourse markers, and word perplexity; with the POS er- ror rate decreasing by 3.1% and perplexity by 5.3%. Furthermore, adding in silence information to help detect the boundary tones and speech repairs results in a further improvement, with the overall POS tag- ging error rate decreasing by 8.6% and reducing per- plexity by 7.8%. In contrast, a word-based trigram backoff model (Katz, 1987) built with the CMU sta- tistical language modeling toolkit (Rosenfeld, 1995) achieved a perplexity of 26.13. Thus our full lan- guage model results in 14.1% reduction in perplex- ity. Table 3 gives the results of detecting intonational boundaries. The second column gives the results of adding the boundary tone detection to the POS model, the third column adds silence information, 259 Repairs Repairs Corrections Repairs Silences Silences Detection Recall 67.9 72.7 Precision 80.6 77.9 Error Rate 48.5 47.9 Correction Recall Precision Error Rate Perplexity 24.11 23.72 Tones Repairs Corrections Silences 75.7 77.0 80.8 84.8 42.4 36.8 62.4 65.0 66.6 71.5 68.9 60.9 23.04 22.45 Table 4: Detecting and Correcting Speech Repairs and the fourth column adds speech repair detection and correction. We see that adding in silence infor- mation gives a noticeable improvement in detecting boundary tones. 
Furthermore, adding in the speech repair detection and correction further improves the results of identifying boundary tones. Hence to de- tect intonational phrase boundaries in spontaneous speech, one should also model speech repairs. Table-4 gives the results of detecting and correct- ing speech repairs. The detection results report the number of repairs that were detected, regardless of whether the type of repair (e.g. modification repair versus abridged repair) was properly determined. The second column gives the results of adding speech repair detection to the POS model. The third col- umn adds in silence information. Unlike the case for boundary tones, adding silence does not have much of an effect. 4 The fourth column adds in speech re- pair correction, and shows that taking into account the correction, gives better detection rates (Heeman, Loken-Kim, and Allen, 1996). The fifth column adds in boundary tone detection, which improves both the detection and correction of speech repairs. 8 Comparison to Other Work Comparing the performance of this model to oth- ers that have been proposed in the literature is very difficult, due to differences in corpora, and different input assumptions. However, it is useful to compare the different techniques that are used. Bear et al. (1992) used a simple pattern matching approach on ATIS word transcriptions. They ex- clude all turns that have a repair that just consists of a filled pause or word fragment. On this subset they obtained a correction recall rate of 43% and a precision of 50%. Nakatani and Hirschberg (1994) examined how speech repairs can be detected using a variety of information, including acoustic, presence of word 4Silence has a bigger effect on detection and correc- tion if boundary tones are modeled. matchings, and POS tags. Using these clues they were able to train a decision tree which achieved a recall rate of 86.1% and a precision of 92.1% on a set of turns in which each turn contained at least one speech repair. Stolcke and Shriberg (1996b) examined whether perplexity can be improved by modeling simple types of speech repairs in a language model. They find that doing so actually makes perplexity worse, and they attribute this to not having a linguistic seg- mentation available, which would help in modeling filled pauses. We feel that speech repair modeling must be combined with detecting utterance bound- aries and discourse markers, and should take advan- tage of acoustic information. For detecting boundary tones, the model of Wightman and Ostendorf (1994) achieves a recall rate of 78.1% and a precision of 76.8%. Their better performance is partly attributed to richer (speaker dependent) acoustic modeling, including phoneme duration, energy, and pitch. However, their model was trained and tested on professionally read speech, rather than spontaneous speech. Wang and Hirschberg (1992) did employ sponta- neous speech, namely, the ATIS corpus. For turn- internal boundary tones, they achieved a recall rate of 38.5% and a precision of 72.9% using a decision tree approach that combined both textual features, such as POS tags, and syntactic constituents with intonational features. One explanation for the differ- ence in performance was that our model was trained on approximately ten times as much data. Secondly, their decision trees are used to classify each data point independently of the next, whereas we find the best interpretation over the entire turn, and in- corporate speech repairs. The models of Kompe et al. 
The models of Kompe et al. (1994) and Mast et al. (1996) are the most similar to our model in terms of incorporating a language model. Mast et al. achieve a recall rate of 85.0% and a precision of 53.1% on identifying dialog acts in a German corpus. Their model employs richer acoustic modeling; however, it does not account for other aspects of utterance modeling, such as speech repairs.

9 Conclusion

In this paper, we have shown that the problems of identifying intonational boundaries and discourse markers, and resolving speech repairs, can be tackled by a statistical language model which uses local context. We have also shown that these tasks, along with POS tagging, should be resolved together. Since our model can give a probability estimate for the next word, it can be used as the language model for a speech recognizer. In terms of perplexity, our model gives a 14% improvement over word-based language models. Part of this improvement is due to being able to exploit silence durations, which traditional word-based language models tend to ignore.
Heeman, P. A. and J. F. Allen. 1997. Incorporating POS tagging into language modeling. In Proceedings of the 5 th European Conference on Speech Communication and Technology (Eurospeech), Rhodes, Greece. Heeman, P. A., K. Loken-Kim, and J. F. Allen. 1996. Combining the detection and correction of speech re- pairs. In Proceedings of the 4rd International Con- ference on Spoken Language Processing (ICSLP-96), pages 358-361, Philadephia, October. ttindle, D. 1983. Deterministic parsing of syntactic non- fluencies. In Proceedings of the 21 st Annual Meeting of the Association for Computational Linguistics, pages 123-128. Hirschberg, J. and D. Litman. 1993. Empirical studies on the disambiguation of cue phrases. Computational Linguistics, 19(3):501-530. Katz, S. M. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech, and Signal Processing, pages 400-401, March. Kompe, R., A. Battiner, A. Kiefling, U. Kilian, H. Nie- mann, E. NSth, and P. Regel-Brietzmann. 1994. Au- tomatic classification of prosodically marked phrase boundaries in german. In Proceedings of the Interna- tional Conference on Audio, Speech and Signal Pro- cessing (ICASSP), pages 173-176, Adelaide. Levelt, W. J. M. 1983. Monitoring and self-repair in speech. Cognition, 14:41-104. Magerman, D. M. 1995. Statistical decision trees fol parsing. In Proceedings of the 33 th Annual Meeting of the Association for Computational Linguistics, pages 7-14, Cambridge, MA, June. Marcus, M. P., B. Santorini, and M. A. Marcinkiewicz. 1993. Building a large annotated corpus of en- glish: The Penn Treebank. Computational Linguis- tics, 19(2):313-330. Mast, M., R. Kompe, S. Harbeck, A. Kieflling, H. Nie- mann, E. NSth, E. G. Schukat-Taiamazzini, and V. Warnke. 1996. Dialog act classification with the help of prosody. In Proceedings of the 4rd Inter- national Conference on Spoken Language Processing (ICSLP-96), Philadephia, October. Nakatani, C. H. and J. Hirschberg. 1994. A corpus-based study of repair cues in spontaneous speech. Journal of the Acoustical Society of America, 95(3):1603-1616. Rosenfeld, R. 1995. The CMU statistical language mod- eling toolkit and its use in the 1994 ARPA CSR evai- uation. In Proceedings of the ARPA Spoken Language Systems Technology Workshop, San Mateo, California, 1995. Morgan Kaufmann. Schiffrin, D. 1987. Discourse Markers. New York: Cam- bridge University Press. Silverman, K., M. Beckman, J. Pitrelli, M. Osten- dorf, C. Wightman, P. Price, J. Pierrehumbert, and J. Hirschberg. 1992. ToBI: A standard for labelling English prosody. In Proceedings of the 2nd Inter- national Conference on Spoken Language Processing (ICSLP-92), pages 867-870. Stolcke, A. and E. Shriberg. 1996a. Automatic linguistic segmentation of conversational speech. In Proceedings of the 4rd International Conference on Spoken Lan- guage Processing (1CSLP-96), October. Stolcke, A. and E. Shriberg. 1996b. Statistical language modeling for speech disfluencies. In Proceedings of the International Conference on Audio, Speech and Signal Processing (1CASSP), May. Wang, M. Q. and J. Hirschberg. 1992. Automatic classi- fication of intonational phrase boundaries. Computer Speech and Language, 6:175-196. Wightman, C. W. and M. Ostendorf. 1994. Automatic labeling of prosodic patterns. IEEE Transactions on speech and audio processing, October. 261 | 1997 | 33 |
Tracking Initiative in Collaborative Dialogue Interactions

Jennifer Chu-Carroll and Michael K. Brown
Bell Laboratories
Lucent Technologies
600 Mountain Avenue
Murray Hill, NJ 07974, U.S.A.
E-mail: {jencc,mkb}@bell-labs.com

Abstract

In this paper, we argue for the need to distinguish between task and dialogue initiatives, and present a model for tracking shifts in both types of initiatives in dialogue interactions. Our model predicts the initiative holders in the next dialogue turn based on the current initiative holders and the effect that observed cues have on changing them. Our evaluation across various corpora shows that the use of cues consistently improves the accuracy in the system's prediction of task and dialogue initiative holders by 2-4 and 8-13 percentage points, respectively, thus illustrating the generality of our model.

1 Introduction

Naturally-occurring collaborative dialogues are very rarely, if ever, one-sided. Instead, initiative of the interaction shifts among participants in a primarily principled fashion, signaled by features such as linguistic cues, prosodic cues and, in face-to-face interactions, eye gaze and gestures. Thus, for a dialogue system to interact with its user in a natural and coherent manner, it must recognize the user's cues for initiative shifts and provide appropriate cues in its responses to user utterances.

Previous work on mixed-initiative dialogues focused on tracking a single thread of control among participants. We argue that this view of initiative fails to distinguish between task initiative and dialogue initiative, which together determine when and how an agent will address an issue. Although physical cues, such as gestures and eye gaze, play an important role in coordinating initiative shifts in face-to-face interactions, a great deal of information regarding initiative shifts can be extracted from utterances based on linguistic and domain knowledge alone. By taking into account such cues during dialogue interactions, the system is better able to determine the task and dialogue initiative holders for each turn and to tailor its response to user utterances accordingly.

In this paper, we show how distinguishing between task and dialogue initiatives accounts for phenomena in collaborative dialogues that previous models were unable to explain. We show that a set of cues, which can be recognized based on linguistic and domain knowledge alone, can be utilized by a model for tracking initiative to predict the task and dialogue initiative holders with 99.1% and 87.8% accuracies, respectively, in collaborative planning dialogues. Furthermore, application of our model to dialogues in various other collaborative environments consistently increases the accuracies in the prediction of task and dialogue initiative holders by 2-4 and 8-13 percentage points, respectively, compared to a simple prediction method without the use of cues, thus illustrating the generality of our model.

2 Task Initiative vs. Dialogue Initiative

2.1 Motivation

Previous work on mixed-initiative dialogues focused on tracking and allocating a single thread of control, the conversational lead, among participants. Novick (1988) developed a computational model that utilizes meta-locutionary acts, such as repeat and give-turn, to capture mixed-initiative behavior in dialogues. Whittaker and Stenton (1988) devised rules for allocating dialogue control based on utterance types, and Walker and Whittaker (1990) utilized these rules for an analytical study on discourse segmentation. Kitano and Van Ess-Dykema (1991) developed a plan-based dialogue understanding model that tracks the conversational initiative based on the domain and discourse plans behind the utterances. Smith and Hipp (1994) developed a dialogue system that varies its responses to user utterances based on four dialogue modes which model different levels of initiative exhibited by dialogue participants. However, the dialogue mode is determined at the outset and cannot be changed during the dialogue. Guinn (1996) subsequently developed a system that allows change in the level of
Whittaker and Stenton (1988) devised rules for allocating dialogue control based on utterance types, and Walker and Whit- taker (1990) utilized these rules for an analytical study on discourse segmentation. Kitano and Van Ess-Dykema (1991) developed a plan-based dialogue understanding model that tracks the conversational initiative based on the domain and discourse plans behind the utterances. Smith and Hipp (1994) developed a dialogue system that varies its responses to user utterances based on four di= alogue modes which model different levels of initiative exhibited by dialogue participants. However, the dia- logue mode is determined at the outset and cannot be changed during the dialogue. Guinn (1996) subsequently developed a system that allows change in the level of ini- 262 tiative based on initiative-changing utterances and each agent's competency in completing the current subtask. However, we contend that merely maintaining the con- versational lead is insufficient for modeling complex be- havior commonly found in naturally-occurring collabo- rative dialogues (SRI Transcripts, 1992; Gross, Allen, and Tram, 1993; Heeman and Allen, 1995). For in- stance, consider the alternative responses in utterances (3a)-(3c), given by an advisor to a student's question: (1) S: I want to take NLP to satisfy my seminar course requirement. (2) Who is teaching NLP? (3a) A: Dr. Smith is teaching NLP. (3b) A: You can't take NLP because you haven't taken AI, which is a prerequisite for NLP (3c) A: You can't take NLP because you haven't taken AI, which is a prerequisite for NLP You should take distributed programming to satisfy your requirement, and sign up as a listener for NI.~. Suppose we adopt a model that maintains a single thread of control, such as that of (Whittaker and Stenton, 1988). In utterance (3a), A directly responds to S's ques- tion; thus the conversational lead remains with S. On the other hand, in (3b) and (3c), A takes the lead by initiating a subdialogue to correct S's invalid proposal. However, existing models cannot explain the difference in the two responses, namely that in (3c), A actively participates in the planning process by explicitly proposing domain ac- tions, whereas in (3b), she merely conveys the invalid- ity of S's proposal. Based on this observation, we argue that it is necessary to distinguish between task initiative, which tracks the lead in the development of the agents' plan, and dialogue initiative, which tracks the lead in de- termining the current discourse focus (Chu-Carroll and Brown, 1997). 1 This distinction then allows us to explain • ~/s behavior from a response generation point of view: in (3b), A responds to S's proposal by merely taking over the dialogue initiative, i.e., informing S of the invalidity of the proposal, while in (3c), A responds by taking over both the task and dialogue initiatives, i.e., informing S of the invalidity and suggesting a possible remedy. An agent is said to have the task initiative if she is directing how the agents' task should be accomplished, i.e., if her utterances directly propose actions that the 1Although independently conceived, this distinction be- tween task and dialogue initiatives is similar to the notion of choice of task and choice of speaker in initiative in (Novick and Sutton, 1997), and the distinction between control and ini- tiative in (Jordan and Di Eugenio, 1997). 
TI: system 37 (3.5%) TI: manager 274 (26.3%) 727 (69.8%) DI: system DI: manager 4 (0.4%) Table 1: Distribution of Task and Dialogue Initiatives agents should perform. The utterances may propose domain actions (Litman and Allen, 1987) that directly contribute to achieving the agents' goal, such as "Let's send engine E2 to Coming." On the other hand, they may propose problem-solving actions (Allen, 1991; Lambert and Carberry, 1991; Ramshaw, 1991) that con- tribute not directly to the agents' domain goal, but to how they would go about achieving this goal, such as "Let's look at the first [problem]first." An agent is said to have the dialogue initiative if she takes the conversational lead in order to establish mutual beliefs, such as mutual beliefs about a piece of domain knowledge or about the validity of a proposal, between the agents. For instance, in responding to agent Xs proposal of sending a boxcar to Coming via Dansville, agent B may take over the dia- logue initiative (but not the task initiative) by saying "We can't go by Dansville because we've got Engine I going on that track." Thus, when an agent takes over the task initiative, she also takes over the dialogue initiative, since a proposal of actions can be viewed as an attempt to es- tablish the mutual belief that a set of actions be adopted. On the other hand, an agent may take over the dialogue initiative but not the task initiative, as in (3b) above. 2.2 An Analysis of the TRAINS91 Dialogues To analyze the distribution of task/dialogue initiatives in collaborative planning dialogues, we annotated the TRAINS91 dialogues (Gross, Allen, and Traum, 1993) as follows: each dialogue turn is given two labels, task initiative (TI) and dialogue initiative (DI), each of which can be assigned one of two values, system or manager, depending on which agent holds the task/dialogue initia- tive during that turn. 2 Table 1 shows the distribution of task and dialogue ini- tiatives in the TRAINS91 dialogues. It shows that while in the majority of turns, the task and dialogue initiatives are held by the same agent, in approximately 1/4 of the turns, the agents' behavior can be better accounted forby tracking the two types of initiatives separately. To assess the reliability of our annotations, approxi- mately 10% of the dialogues were annotated by two ad- ditional coders. We then used the kappa statistic (Siegel and Castellan, 1988; Carletta, 1996) to assess the level of agreement between the three coders with respect to the 2 An agent holds the task initiative during a turn as long as some utterance during the turn directly proposes how the agents should accomplish their goal, as in utterance (3c). 263 task and dialogue initiative holders. In this experiment, K is 0,57 for the task initiative holder agreement and K is 0.69 for the dialogue initiative holder agreement. Carletta suggests that content analysis researchers consider K >.8 as good reliability, with .67< /~" <.8 allowing tentative conclusions to be drawn (Carletta, 1996). Strictly based on this metric, our results indicate that the three coders have a reasonable level of agree- ment with respect to the dialogue initiative holders, but do not have reliable agreement with respect to the task initiative holders. 
However, the kappa statistic is known to be highly problematic in measuring inter-coder reli- ability when the likelihood of one category being cho- sen overwhelms that of the other (Grove et al., 1981), which is the case for the task initiative distribution in the TRAINS91 corpus, as shown in Table 1. Furthermore, as will be shown in Table 4, Section 4, the task and dialogue initiative distributions in TRAINS91 are not at all repre- sentative of collaborative dialogues. We expect that by taking a sample of dialogues whose task/dialogue initia- tive distributions are more representative of all dialogues, we will lower the value of P(E), the probability of chance agreement, and thus obtain a higher kappa coefficient of agreement. However, we leave selecting and annotating such a subset of representative dialogues for future work. 3 A Model for Tracking Initiative Our analysis shows that the task and dialogue initiatives shift between the participants during the course of a di- alogue. We contend that it is important for the agents to take into account signals for such initiative shifts for two reasons. First, recognizing and providing signals for initiative shifts allow the agents to better coordinate their actions, thus leading to more coherent and cooper- ative dialogues. Second, by determining whether or not it should hold the task and/or dialogue initiatives when responding to user utterances, a dialogue system is able to tailor its responses based on the distribution of initia- tives, as illustrated by the previous dialogue (Chu-Carroll and Brown, 1997). This section describes our model for tracking initiative using cues identified from the user's utterances. Our model maintains, for each agent, a task initiative index and a dialogue initiative index which measure the amount of evidence available to support the agent hold- ing the task and dialogue initiatives, respectively. After each turn, new initiative indices are calculated based on the current indices and the effects of the cues observed during the turn. These cues may be explicit requests by the speaker to give up his initiative, or implicit cues such as ambiguous proposals. The new initiative indices then determine the initiative holders for the next turn. We adopt the Dempster-Shafer theory of evidence (Sharer, 1976; Gordon and Shortliffe, 1984) as our un- derlying model for inferring the accumulated effect of multiple cues on determining the initiative indices. The Dempster-Shafer theory is a mathematical theory for rea- soning under uncertainty which operates over a set of possible outcomes, O. Associated with each piece of evidence that may provide support for the possible out- comes is a basic probability assignment (bpa), a func- tion that represents the impact of the piece of evidence on the subsets of O. A bpa assigns a number in the range [0,1] to each subset of O such that the numbers sum to 1. The number assigned to the subset O1 then denotes the amount of support the evidence directly provides for the conclusions represented by O1. When multiple pieces of evidence are present, Dempster' s combination rule is used to compute a new bpa from the individual bpa' s to represent their cumulative effect. The reasons for selecting the Dempster-Shafer theory as the basis for our model are twofold. First, unlike the Bayesian model, it does not require a complete set of a priori and conditional probabilities, which is dif- ficult to obtain for sparse pieces of evidence. 
Second, the Dempster-Shafer theory distinguishes between situ- ations in which no evidence is available to support any conclusion and those in which equal evidence is avail- able to support each conclusion. Thus the outcome of the model more accurately represents the amount of ev- idence available to support a particular conclusion, i.e., the provability of the conclusion (Pearl, 1990). 3.1 Cues for Tracking Initiative In order to utilize the Dempster-Shafer theory for mod- eling initiative, we must first identify the cues that pro- vide evidence for initiative shifts. Whittaker, Stenton, and Walker (Whittaker and Stenton, 1988; Walker and Whittaker, 1990) have previously identified a set of ut- terance intentions that serve as cues to indicate shifts or lack of shifts in initiative, such as prompts and questions. We analyzed our annotated TRAINS91 corpus and iden- tified additional cues that may have contributed to the shift or lack of shift in task/dialogue initiatives during the interactions. This results in eight cue types, which are grouped into three classes, based on the kind of knowl- edge needed to recognize them. Table 2 shows the three classes, the eight cue types, their subtypes if any, whether a cue may affect merely the dialogue initiative or both the task and dialogue initiatives, and the agent expected to hold the initiative in the next turn. The first cue class, explicit cues, includes explicit re- quests by the speaker to give up or take over the initiative. For instance, the utterance "Any suggestions ?" indicates the speaker's intention for the hearer to take over both the task and dialogue initiatives. Such explicit cues can be recognized by inferring the discourse and/or problem- solving intentions conveyed by the speaker' s utterances. 264 Class Cue Type Subtype Explicit Explicit requests give up take over Discourse End silence No new info repetitions Effect both both both both Initiative Example hearer speaker hearer hearer prompts both hearer Questions domain DI speaker evaluation DI hearer Obligation task both hearer fulfilled discourse action belief DI Analytical Invalidity Suboptimahty "Any suggestions?" "Summarize the plan up to this point" "Let me handle this one." A: hearer A: B: A: Ambiguity action belief A: "Grab the tanker, pick up oranges, go to Elmira, make them into orange juice." B: "We go to Elmira, we make orange juice, okay.'" "Yeah ", "Ok", "Right" "How far is it from Bath to Coming?" "Can we do the route the banana guy isn't doing?" A: "Any suggestions ?" B: "Well, there's a boxcar at Dansville." "But you have to change your banana plan." "How long is it from Dansville to Coming ?" "Go ahead and fill up E1 with bananas." "Well, we have to get a boxcar." "Right. okay. It's shorter to Bath from Avon." both hearer DI hearer both hearer both hearer DI hearer A: "Let's get the tanker car to Elmira anaJill it with OJ. B: "You need to get oranges to the O J factory." A: "h' s shorter to Bath from Avon." B: "R's shorter to DansvUle.'" "The map is slightly misleading." A: "Using Saudi on Thursday the eleventh.'" B: "It's sold out." A: "Is Friday open?" B: "Economy on Pan Am is open on Thursday." A: "Take one of the engines from Coming." B: "Let's say engine E2." A: "We would get back to Coming at 4." B: "4PM? 4AM?" 
The second cue class, discourse cues, includes cues that can be recognized using linguistic and discourse information, such as from the surface form of an utterance, or from the discourse relationship between the current and prior utterances. It consists of four cue types. The first type is perceptible silence at the end of an utterance, which suggests that the speaker has nothing more to say and may intend to give up her initiative. The second type includes utterances that do not contribute information that has not been conveyed earlier in the dialogue. It can be further classified into two groups: repetitions, a subset of the informationally redundant utterances (Walker, 1992), in which the speaker paraphrases an utterance by the hearer or repeats the utterance verbatim, and prompts, in which the speaker merely acknowledges the hearer's previous utterance(s). Repetitions and prompts also suggest that the speaker has nothing more to say and indicate that the hearer should take over the initiative (Whittaker and Stenton, 1988). The third type includes questions which, based on anticipated responses, are divided into domain and evaluation questions. Domain questions are questions in which the speaker intends to obtain or verify a piece of domain knowledge. They usually merely require a direct response and thus typically do not result in an initiative shift. Evaluation questions, on the other hand, are questions in which the speaker intends to assess the quality of a proposed plan. They often require an analysis of the proposal, and thus frequently result in a shift in dialogue initiative. The final type includes utterances that satisfy an outstanding task or discourse obligation. Such obligations may have resulted from a prior request by the hearer, or from an interruption initiated by the speaker himself. In either case, when the task/dialogue obligation is fulfilled, the initiative may be reverted back to the hearer who held the initiative prior to the request or interruption.

The third cue class, analytical cues, includes cues that cannot be recognized without the hearer performing an evaluation on the speaker's proposal using the hearer's private knowledge (Chu-Carroll and Carberry, 1994; Chu-Carroll and Carberry, 1995). After the evaluation, the hearer may find the proposal invalid, suboptimal, or ambiguous. As a result, he may initiate a subdialogue to resolve the problem, resulting in a shift in task/dialogue initiatives.3

3 Whittaker, Stenton, and Walker treat subdialogues initiated as a result of these cues as interruptions, motivated by their collaborative planning principles (Whittaker and Stenton, 1988; Walker and Whittaker, 1990).
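The cue taxonomy lends itself to a simple data representation. The sketch below is ours (names are illustrative); the effect and holder values follow our reading of Table 2.

  # Each cue records which indices it affects ('both' = task and dialogue,
  # 'DI' = dialogue only) and which agent it predicts will hold the
  # initiative in the next turn.
  from dataclasses import dataclass

  @dataclass(frozen=True)
  class Cue:
      name: str
      effect: str      # 'both' or 'DI'
      holder: str      # 'speaker' or 'hearer'

  CUES = [
      Cue("explicit-give-up", "both", "hearer"),
      Cue("explicit-take-over", "both", "speaker"),
      Cue("end-silence", "both", "hearer"),
      Cue("repetition", "both", "hearer"),
      Cue("prompt", "both", "hearer"),
      Cue("domain-question", "DI", "speaker"),
      Cue("evaluation-question", "DI", "hearer"),
      Cue("task-obligation-fulfilled", "both", "hearer"),
      Cue("discourse-obligation-fulfilled", "DI", "hearer"),
      Cue("invalid-action", "both", "hearer"),
      Cue("invalid-belief", "DI", "hearer"),
      Cue("suboptimal-action", "both", "hearer"),
      Cue("ambiguous-action", "both", "hearer"),
      Cue("ambiguous-belief", "DI", "hearer"),
  ]

  def affects_task_initiative(cue):
      return cue.effect == "both"

  print([c.name for c in CUES if not affects_task_initiative(c)])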
Thus, we associate with each cue two bpa's to represent its effect on changing the current task and dialogue initiative indices, respectively. We extended our annotations of the TRAINS91 dialogues to include, in addition to the agent(s) holding the task and dialogue initiatives for each turn, a list of cues observed during that turn. Initially, each cue_i is assigned the following bpa's: m_t-i(Theta) = 1 and m_d-i(Theta) = 1, where Theta = {speaker, hearer}. In other words, we assume that the cue has no effect on changing the current initiative indices. We then developed a training algorithm (Train-bpa, Figure 1) and applied it to the annotated data to obtain the final bpa's.

For each turn, the task and dialogue bpa's for each observed cue are used, along with the current initiative indices, to determine the new initiative indices (step 2). The combine function utilizes Dempster's combination rule to combine pairs of bpa's until a final bpa is obtained to represent the cumulative effect of the given bpa's. The resulting bpa's are then used to predict the task/dialogue initiative holders for the next turn (step 3). If this prediction disagrees with the actual value in the annotated data, Adjust-bpa is invoked to alter the bpa's for the observed cues, and Reset-current-bpa is invoked to adjust the current bpa's to reflect the actual initiative holder (step 4). Adjust-bpa adjusts the bpa's for the observed cues in favor of the actual initiative holder.

We developed three adjustment methods by varying the effect that a disagreement between the actual and predicted initiative holders will have on changing the bpa's for the observed cues. The first is constant-increment, where each time a disagreement occurs, the value for the actual initiative holder in the bpa is incremented by a constant (Delta), while that for Theta is decremented by Delta.

[4] Bpa's are represented by functions whose names take the form m_sub. The subscript sub may be t-X or d-X, indicating that the function represents the task or dialogue bpa under scenario X.

[5] The initiative indices are represented as bpa's. For instance, the current task initiative indices take the following form: m_t-cur(speaker) = x and m_t-cur(hearer) = 1 - x.

Train-bpa(annotated-data):
1. m_t-cur <- default task initiative indices
   m_d-cur <- default dialogue initiative indices
   cur-data <- read(annotated-data)
   cue-set <- cues in cur-data
2. /* compute new initiative indices */
   m_t-obs <- task initiative bpa's for cues in cue-set
   m_d-obs <- dialogue initiative bpa's for cues in cue-set
   m_t-new <- combine(m_t-cur, m_t-obs)
   m_d-new <- combine(m_d-cur, m_d-obs)
3. /* determine predicted next initiative holders */
   If m_t-new(speaker) > m_t-new(hearer), t-predicted <- speaker
   Else, t-predicted <- hearer
   If m_d-new(speaker) > m_d-new(hearer), d-predicted <- speaker
   Else, d-predicted <- hearer
4. /* find actual initiative holders and compare */
   new-data <- read(annotated-data)
   t-actual <- actual task initiative holder in new-data
   d-actual <- actual dialogue initiative holder in new-data
   If t-predicted != t-actual,
      Adjust-bpa(cue-set, task)
      Reset-current-bpa(m_t-new)
   If d-predicted != d-actual,
      Adjust-bpa(cue-set, dialogue)
      Reset-current-bpa(m_d-new)
5. If end-of-dialogue, return
   Else, /* swap roles of speaker and hearer */
   m_t-cur(speaker) <- m_t-new(hearer)
   m_d-cur(speaker) <- m_d-new(hearer)
   m_t-cur(hearer) <- m_t-new(speaker)
   m_d-cur(hearer) <- m_d-new(speaker)
   cue-set <- cues in new-data
   Goto step 2.

Figure 1: Training Algorithm for Determining BPAs
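The following Python sketch renders the Figure 1 loop with the constant-increment adjustment; it reuses `combine` from the sketch above. The turn encoding, the cap on the increment (so no mass goes negative), and the reset to full mass on the actual holder are our assumptions; the paper does not spell these details out.

```python
# Sketch of the Train-bpa loop (Figure 1) with constant-increment adjustment.
# `combine` is the Dempster combination defined in the previous sketch.
DELTA = 0.35

def adjust_bpa(bpa, actual):
    # Move DELTA of mass from Theta to the actual holder, capped so the
    # Theta mass never goes negative (a safeguard we add).
    inc = min(DELTA, bpa["theta"])
    out = dict(bpa)
    out[actual] += inc
    out["theta"] -= inc
    return out

def train_bpa(turns, cue_bpas):
    """turns: list of (observed_cues, actual_next_holder) per turn;
    cue_bpas: cue -> bpa dict, initially all mass on 'theta'.
    Task and dialogue bpa's would be trained by two independent runs."""
    current = {"speaker": 0.5, "hearer": 0.5, "theta": 0.0}  # default indices
    for cues, actual in turns:
        new = current
        for cue in cues:                       # step 2: fold in observed cues
            new = combine(new, cue_bpas[cue])
        predicted = ("speaker" if new["speaker"] > new["hearer"]
                     else "hearer")            # step 3: predict next holder
        if predicted != actual:                # step 4: adjust and reset
            for cue in cues:
                cue_bpas[cue] = adjust_bpa(cue_bpas[cue], actual)
            new = {"speaker": float(actual == "speaker"),
                   "hearer": float(actual == "hearer"), "theta": 0.0}
        # step 5: swap speaker/hearer roles for the next turn
        current = {"speaker": new["hearer"], "hearer": new["speaker"],
                   "theta": new["theta"]}
    return cue_bpas
```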
The second method, constant-increment-with-counter, associates with each bpa for each cue a counter which is incremented when a correct prediction is made, and decremented when an incorrect prediction is made. If the counter is negative, the constant-increment method is invoked, and the counter is reset to 0. This method ensures that a bpa will only be adjusted if it has no "credit" for correct predictions in the past. The third method, variable-increment-with-counter, is a variation of constant-increment-with-counter. However, instead of determining whether an adjustment is needed, the counter determines the amount to be adjusted. Each time the system makes an incorrect prediction, the value for the actual initiative holder is incremented by Delta/2^(counter+1), and that for Theta decremented by the same amount.

[Figure 2: Comparison of Three Adjustment Methods. Two plots of prediction accuracy against Delta (0.05 to 0.5) for no-prediction, const-inc, const-inc-wc, and var-inc-wc: (a) Task Initiative Prediction; (b) Dialogue Initiative Prediction.]

In addition to experimenting with different adjustment methods, we also varied the increment constant, Delta. For each adjustment method, we ran 19 training sessions with Delta ranging from 0.025 to 0.475, incrementing by 0.025 between each session, and evaluated the system based on its accuracy in predicting the initiative holders for each turn. We divided the TRAINS91 corpus into eight sets based on speaker/hearer pairs. For each Delta, we cross-validated the results by applying the training algorithm to seven dialogue sets and testing the resulting bpa's on the remaining set. Figures 2(a) and 2(b) show our system's performance in predicting the task and dialogue initiative holders, respectively, using the three adjustment methods.[6]

[6] For comparison purposes, the straight lines show the system's performance without the use of cues, i.e., always predict that the initiative remains with the current holder.

3.3 Discussion

Figure 2 shows that in the vast majority of cases, our prediction methods yield better results than making predictions without cues. Furthermore, substantial improvement is gained by the use of counters since they prevent the effect of the "exceptions of the rules" from accumulating and resulting in erroneous predictions. By restricting the increment to be inversely exponentially related to the "credit" the bpa had in making correct predictions, variable-increment-with-counter obtains better and more consistent results than constant-increment. However, the exceptions of the rules still resulted in undesirable effects, thus the further improved performance by constant-increment-with-counter.

We analyzed the cases in which the system, using constant-increment-with-counter with Delta = .35,[7] made erroneous predictions. Tables 3(a) and 3(b) summarize the results of our analysis with respect to task and dialogue initiatives, respectively. For each cue type, we grouped the errors based on whether or not a shift occurred in the actual dialogue. For instance, the first row in Table 3(a) shows that when the cue invalid action is detected, the system failed to predict a task initiative shift in 2 out of 3 cases.
On the other hand, it correctly predicted all 11 cases where no shift in task initiative occurred. Table 3(a) also shows that when an analytical cue is detected, the system correctly predicted all but one case in which there was no shift in task initiative. However, 55% of the time, the system failed to predict a shift in task initiative.[8] This suggests that other features need to be taken into account when evaluating user proposals in order to more accurately model initiative shifts resulting from such cues. Similar observations can be made about the errors in predicting dialogue initiative shifts when analytical cues are observed (Table 3(b)).

[7] This is the value that yields the optimal results (Figure 2).

[8] In the case of suboptimal actions, we encounter the sparse data problem. Since there is only one instance of the cue in the set of dialogues, when the cue is present in the testing set, it is absent from the training set.

(a) Task Initiative Errors

Cue Type       Subtype  Shift error/total  No-Shift error/total
Invalidity     action   2 / 3              0 / 11
Suboptimality           1 / 1              0 / 0
Ambiguity      action   3 / 7              1 / 5

(b) Dialogue Initiative Errors

Cue Type              Subtype     Shift error/total  No-Shift error/total
End silence                       13 / 41            0 / 53
No new info           prompts     7 / 193            1 / 6
Questions             domain      13 / 31            0 / 98
                      evaluation  8 / 28             5 / 7
Obligation fulfilled  discourse   12 / 198           1 / 5
Invalidity                        11 / 34            0 / 0
Suboptimality                     1 / 1              0 / 0
Ambiguity                         9 / 24             0 / 0

Table 3: Summary of Prediction Errors

Table 3(b) shows that when a perceptible silence is detected at the end of an utterance, when the speaker utters a prompt, or when an outstanding discourse obligation is fulfilled (first rows in the table), the system correctly predicted the dialogue initiative holder in the vast majority of cases. However, for the cue class questions, when the actual initiative shift differs from the norm, i.e., speaker retaining initiative for evaluation questions and hearer taking over initiative for domain questions, the system's performance worsens. In the case of domain questions, errors occur when 1) the response requires more reasoning than do typical domain questions, causing the hearer to take over the dialogue initiative, or 2) the hearer, instead of merely responding to the question, offers additional helpful information. In the case of evaluation questions, errors occur when 1) the result of the evaluation is readily available to the hearer, thus eliminating the need for an initiative shift, or 2) the hearer provides extra information. We believe that although it is difficult to predict when an agent may include extra information in response to a question, taking into account the cognitive load that a question places on the hearer may allow us to more accurately predict dialogue initiative shifts.

4 Applications in Other Environments

To investigate the generality of our system, we applied our training algorithm, using the constant-increment-with-counter adjustment method with Delta = 0.35, on the TRAINS91 corpus to obtain a set of bpa's. We then evaluated the system on subsets of dialogues from four other corpora: the TRAINS93 dialogues (Heeman and Allen, 1995), airline reservation dialogues (SRI Transcripts, 1992), instruction-giving dialogues (Map Task Dialogues, 1996), and non-task-oriented dialogues (Switchboard Credit Card Corpus, 1992). In addition, we applied our baseline strategy, which makes predictions without the use of cues, to each corpus. Table 4 shows a comparison between the dialogues from the five corpora and the results of this evaluation.
Row 1 in the table shows the number of turns where the expert[9] holds the task/dialogue initiative, with percentages shown in parentheses. This analysis shows that the distribution of initiatives varies quite significantly across corpora, with the distribution biased toward one agent in the TRAINS and maptask corpora, and split fairly evenly in the airline and switchboard dialogues. Row 2 shows the results of applying our baseline prediction method to the various corpora. The numbers shown are correct predictions in each instance, with the corresponding percentages shown in parentheses. These results indicate the difficulty of the prediction problem in each corpus that the task/dialogue initiative distribution (row 1) fails to convey. For instance, although the dialogue initiative is distributed approximately 30/70% between the two agents in the TRAINS91 corpus and 40/60% in the airline dialogues, the prediction rates in row 2 show that in both cases the distribution is the result of shifts in dialogue initiative in approximately 25% of the dialogue turns. Row 3 in the table shows the prediction results when applying our training algorithm using the constant-increment-with-counter method. Finally, the last row shows the improvement in percentage points between our prediction method and the baseline prediction method.

[9] The expert is assigned as follows: in the TRAINS domain, the system; in the airline domain, the travel agent; in the maptask domain, the instruction giver; and in the switchboard dialogues, the agent who holds the dialogue initiative the majority of the time.

                    TRAINS91 (1042)     TRAINS93 (256)      Airline (332)       Maptask (320)       Switchboard (282)
Corpus (# turns)    task     dialogue   task     dialogue   task     dialogue   task     dialogue   task   dialogue
Expert control      41       311        37       101        194      193        320      277        N/A    166
                    (3.9%)   (29.8%)    (14.4%)  (39.5%)    (58.4%)  (58.1%)    (100%)   (86.6%)           (59.9%)
No cue              1009     780        239      189        308      247        320      270        N/A    193
                    (96.8%)  (74.9%)    (93.3%)  (73.8%)    (92.8%)  (74.4%)    (100%)   (84.4%)           (68.4%)
const-inc-w-count   1033     915        250      217        316      281        320      297        N/A    216
                    (99.1%)  (87.8%)    (97.7%)  (84.8%)    (95.2%)  (84.6%)    (100%)   (92.8%)           (76.6%)
Improvement         2.3%     12.9%      4.4%     11.0%      2.4%     10.2%      0.0%     8.4%       N/A    8.2%

Table 4: Comparison Across Different Application Environments

To test the statistical significance of the differences between the results obtained by the two prediction algorithms, for each corpus, we applied Cochran's Q test (Cochran, 1950) to the results in rows 2 and 3. The tests show that for all corpora, the differences between the two algorithms when predicting the task and dialogue initiative holders are statistically significant at the levels of p < 0.05 and p < 10^-5, respectively.
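Cochran's Q test compares matched binary outcomes across k treatments; here, rows are dialogue turns and the two columns record whether each algorithm predicted that turn's initiative holder correctly. The sketch below is ours (the test statistic follows Cochran, 1950); the example data are made up, and scipy is assumed only for the chi-square tail probability.

```python
# Cochran's Q over an n x k binary matrix; assumes the algorithms disagree
# on at least one turn (otherwise the denominator is zero).
from scipy.stats import chi2

def cochran_q(outcomes):
    k = len(outcomes[0])                                        # treatments
    col = [sum(row[j] for row in outcomes) for j in range(k)]   # G_j
    row_tot = [sum(row) for row in outcomes]                    # L_i
    T = sum(col)
    q = (k - 1) * (k * sum(g * g for g in col) - T * T) \
        / (k * T - sum(l * l for l in row_tot))
    return q, chi2.sf(q, k - 1)          # Q and its p-value (k - 1 df)

# e.g. per-turn correctness of (baseline, const-inc-with-counter)
q, p = cochran_q([(1, 1), (0, 1), (0, 1), (1, 1), (0, 0), (1, 1), (0, 1)])
```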
Based on the results of our evaluation, we make the following observations. First, Table 4 illustrates the generality of our prediction mechanism. Although the system's performance varies across environments, the use of cues consistently improves the system's accuracies in predicting the task and dialogue initiative holders by 2-4 percentage points (with the exception of the maptask corpus, in which there is no room for improvement)[10] and 8-13 percentage points, respectively. Second, Table 4 shows the specificity of the trained bpa's with respect to application environments. Using our prediction mechanism, the system's performances on the collaborative planning dialogues (TRAINS91, TRAINS93, and airline reservation) most closely resemble one another (last row in the table). This suggests that the bpa's may be somewhat sensitive to application environments since they may affect how agents interpret cues. Third, our prediction mechanism yields better results on task-oriented dialogues. This is because such dialogues are constrained by the goals; therefore, there are fewer digressions and offers of unsolicited opinion as compared to the switchboard corpus.

[10] In the maptask domain, the task initiative remains with one agent, the instruction giver, throughout the dialogue.

5 Conclusions

This paper discussed a model for tracking initiative between participants in mixed-initiative dialogue interactions. We showed that distinguishing between task and dialogue initiatives allows us to model phenomena in collaborative dialogues that existing systems are unable to explain. We presented eight types of cues that affect initiative shifts in dialogues, and showed how our model predicts initiative shifts based on the current initiative holders and the effects that observed cues have on changing them. Our experiments show that by utilizing the constant-increment-with-counter adjustment method in determining the basic probability assignments for each cue, the system can correctly predict the task and dialogue initiative holders 99.1% and 87.8% of the time, respectively, in the TRAINS91 corpus, compared to 96.8% and 74.9% without the use of cues. The differences between these results are shown to be statistically significant using Cochran's Q test. In addition, we demonstrated the generality of our model by applying it to dialogues in different application environments. The results indicate that although the basic probability assignments may be sensitive to application environments, the use of cues in the prediction process significantly improves the system's performance.

Acknowledgments

We would like to thank Lyn Walker, Diane Litman, Bob Carpenter, and Christer Samuelsson for their comments on earlier drafts of this paper, Bob Carpenter and Christer Samuelsson for participating in the coding reliability test, as well as Jan van Santen and Lyn Walker for discussions on statistical testing methods.

References

Allen, James. 1991. Discourse structure in the TRAINS project. In Darpa Speech and Natural Language Workshop.

Carletta, Jean. 1996. Assessing agreement on classification tasks: The kappa statistic. Computational Linguistics, 22:249-254.

Chu-Carroll, Jennifer and Michael K. Brown. 1997. Initiative in collaborative interactions -- its cues and effects. In Working Notes of the AAAI-97 Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 16-22.

Chu-Carroll, Jennifer and Sandra Carberry. 1994. A plan-based model for response generation in collaborative task-oriented dialogues. In Proceedings of the Twelfth National Conference on Artificial Intelligence, pages 799-805.

Chu-Carroll, Jennifer and Sandra Carberry. 1995. Response generation in collaborative negotiation. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 136-143.

Cochran, W. G. 1950. The comparison of percentages in matched samples. Biometrika, 37:256-266.

Gordon, Jean and Edward H. Shortliffe. 1984. The Dempster-Shafer theory of evidence. In Bruce Buchanan and Edward Shortliffe, editors, Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Addison-Wesley, chapter 13, pages 272-292.

Gross, Derek, James F. Allen, and David R. Traum. 1993.
The TRAINS 91 dialogues. Technical Report TN92-1, Department of Computer Science, University of Rochester.

Grove, William M., Nancy C. Andreasen, Patricia McDonald-Scott, Martin B. Keller, and Robert W. Shapiro. 1981. Reliability studies of psychiatric diagnosis. Archives of General Psychiatry, 38:408-413.

Guinn, Curry I. 1996. Mechanisms for mixed-initiative human-computer collaborative discourse. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 278-285.

Heeman, Peter A. and James F. Allen. 1995. The TRAINS 93 dialogues. Technical Report TN94-2, Department of Computer Science, University of Rochester.

Jordan, Pamela W. and Barbara Di Eugenio. 1997. Control and initiative in collaborative problem solving dialogues. In Working Notes of the AAAI-97 Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 81-84.

Kitano, Hiroaki and Carol Van Ess-Dykema. 1991. Toward a plan-based understanding model for mixed-initiative dialogues. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 25-32.

Lambert, Lynn and Sandra Carberry. 1991. A tripartite plan-based model of dialogue. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 47-54.

Litman, Diane and James Allen. 1987. A plan recognition model for subdialogues in conversation. Cognitive Science, 11:163-200.

Map Task Dialogues. 1996. Transcripts of DCIEM Sleep Deprivation Study, conducted by Defense and Civil Institute of Environmental Medicine, Canada, and Human Communication Research Centre, University of Edinburgh and University of Glasgow, UK. Distributed by HCRC and LDC.

Novick, David G. 1988. Control of Mixed-Initiative Discourse Through Meta-Locutionary Acts: A Computational Model. Ph.D. thesis, University of Oregon.

Novick, David G. and Stephen Sutton. 1997. What is mixed-initiative interaction? In Working Notes of the AAAI-97 Spring Symposium on Computational Models for Mixed Initiative Interaction, pages 114-116.

Pearl, Judea. 1990. Bayesian and belief-functions formalisms for evidential reasoning: A conceptual analysis. In Glenn Shafer and Judea Pearl, editors, Readings in Uncertain Reasoning. Morgan Kaufmann, pages 540-574.

Ramshaw, Lance A. 1991. A three-level model for plan exploration. In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, pages 36-46.

Shafer, Glenn. 1976. A Mathematical Theory of Evidence. Princeton University Press.

Siegel, Sidney and N. John Castellan, Jr. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw Hill.

Smith, Ronnie W. and D. Richard Hipp. 1994. Spoken Natural Language Dialog Systems -- A Practical Approach. Oxford University Press.

SRI Transcripts. 1992. Transcripts derived from audiotape conversations made at SRI International, Menlo Park, CA. Prepared by Jacqueline Kowtko under the direction of Patti Price.

Switchboard Credit Card Corpus. 1992. Transcripts of telephone conversations on the topic of credit card use, collected at Texas Instruments. Produced by NIST, available through LDC.

Walker, Marilyn and Steve Whittaker. 1990. Mixed initiative in dialogue: An investigation into discourse segmentation. In Proceedings of the 28th Annual Meeting of the Association for Computational Linguistics, pages 70-78.

Walker, Marilyn A. 1992. Redundancy in collaborative dialogue.
In Proceedings of the 15th International Conference on Computational Linguistics, pages 345-351.

Whittaker, Steve and Phil Stenton. 1988. Cues and control in expert-client dialogues. In Proceedings of the 26th Annual Meeting of the Association for Computational Linguistics, pages 123-130.
PARADISE: A Framework for Evaluating Spoken Dialogue Agents

Marilyn A. Walker, Diane J. Litman, Candace A. Kamm and Alicia Abella
AT&T Labs--Research
180 Park Avenue
Florham Park, NJ 07932-0971 USA
{walker,diane,cak,abella}@research.att.com

Abstract

This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.

1 Introduction

Recent advances in dialogue modeling, speech recognition, and natural language processing have made it possible to build spoken dialogue agents for a wide variety of applications.[1] Potential benefits of such agents include remote or hands-free access, ease of use, naturalness, and greater efficiency of interaction. However, a critical obstacle to progress in this area is the lack of a general framework for evaluating and comparing the performance of different dialogue agents.

One widely used approach to evaluation is based on the notion of a reference answer (Hirschman et al., 1990). An agent's responses to a query are compared with a predefined key of minimum and maximum reference answers; performance is the proportion of responses that match the key. This approach has many widely acknowledged limitations (Hirschman and Pao, 1993; Danieli et al., 1992; Bates and Ayuso, 1993), e.g., although there may be many potential dialogue strategies for carrying out a task, the key is tied to one particular dialogue strategy.

In contrast, agents using different dialogue strategies can be compared with measures such as inappropriate utterance ratio, turn correction ratio, concept accuracy, implicit recovery and transaction success (Danieli and Gerbino, 1995; Hirschman and Pao, 1993; Polifroni et al., 1992; Simpson and Fraser, 1993; Shriberg, Wade, and Price, 1992). Consider a comparison of two train timetable information agents (Danieli and Gerbino, 1995), where Agent A in Dialogue 1 uses an explicit confirmation strategy, while Agent B in Dialogue 2 uses an implicit confirmation strategy:

(1) User: I want to go from Torino to Milano.
    Agent A: Do you want to go from Trento to Milano? Yes or No?
    User: No.

(2) User: I want to travel from Torino to Milano.
    Agent B: At which time do you want to leave from Merano to Milano?
    User: No, I want to leave from Torino in the evening.

Danieli and Gerbino found that Agent A had a higher transaction success rate and produced less inappropriate and repair utterances than Agent B, and thus concluded that Agent A was more robust than Agent B.

However, one limitation of both this approach and the reference answer approach is the inability to generalize results to other tasks and environments (Fraser, 1995). Such generalization requires the identification of factors that affect performance (Cohen, 1995; Sparck-Jones and Galliers, 1996). For example, while Danieli and Gerbino found that Agent A's dialogue strategy produced dialogues that were approximately twice as long as Agent B's, they had no way of determining whether Agent A's higher transaction success or Agent B's efficiency was more critical to performance. In addition to agent factors such as dialogue strategy, task factors such as database size and environmental factors such as background noise may also be relevant predictors of performance.

These approaches are also limited in that they currently do not calculate performance over subdialogues as well as whole dialogues, correlate performance with an external validation criterion, or normalize performance for task complexity. This paper describes PARADISE, a general framework for evaluating spoken dialogue agents that addresses these limitations. PARADISE supports comparisons among dialogue strategies by providing a task representation that decouples what an agent needs to achieve in terms of the task requirements from how the agent carries out the task via dialogue.

[1] We use the term agent to emphasize the fact that we are evaluating a speaking entity that may have a personality. Readers who wish to may substitute the word "system" wherever "agent" is used.
For example, while Danieli and Gerbino found that Agent A's dialogue strategy produced dia- logues that were approximately twice as long as Agent B's, they had no way of determining whether Agent A's higher transaction success or Agent B's efficiency was more critical to performance. In addition to agent factors such as dialogue strategy, task factors such as database size and environmental factors such as background noise may also be relevant predictors of performance. These approaches are also limited in that they currently do not calculate performance over subdialogues as well as whole dialogues, correlate performance with an external validation criterion, or normalize performance for task complexity. This paper describes PARADISE, a general framework for evaluating spoken dialogue agents that addresses these limitations. PARADISE supports comparisons among di- alogue strategies by providing a task representation that decouples what an agent needs to achieve in terms of 271 I MAXIMIZE USER SATISFACTION[ l Figure 1: PARADISE's structure of objectives for spoken dialogue performance the task requirements from how the agent carries out the task via dialogue. PARADISE uses a decision-theoretic framework to specify the relative contribution of various factors to an agent's overall performance. Performance is modeled as a weighted function of a task-based suc- cess measure and dialogue-based cost measures, where weights are computed by correlating user satisfaction with performance. Also, performance can be calculated for subdialogues as well as whole dialogues. Since the goal of this paper is to explain and illustrate the appli- cation of the PARADISE framework, for expository pur- poses, the paper uses simplified domains with hypothet- ical data throughout. Section 2 describes PARADISE's performance model, and Section 3 discusses its general- ity, before concluding in Section 4. 2 A Performance Model for Dialogue PARADISE uses methods from decision theory (Keeney and Raiffa, 1976; Doyle, 1992) to combine a disparate set of performance measures (i.e., user satisfaction, task success, and dialogue cost, all of which have been pre- viously noted in the literature) into a single performance evaluation function. The use of decision theory requires a specification of both the objectives of the decision prob- lem and a set of measures (known as attributes in de- cision theory) for operationalizing the objectives. The PARADISE model is based on the structure of objectives (rectangles) shown in Figure 1. The PARADISE model posits that performance can be correlated with a mean- ingful external criterion such as usability, and thus that the overall goal of a spoken dialogue agent is to maxi- mize an objective related to usability. User satisfaction ratings (Kamm, 1995; Shriberg, Wade, and Price, 1992; Polifroni et al., 1992) have been frequently used in the literature as an external indicator of the usability of a di- alogue agent. The model further posits that two types of factors are potential relevant contributors to user satisfac- tion (namely task success and dialogue costs), and that two types of factors are potential relevant contributors to costs (Walker, 1996). 
In addition to the use of decision theory to create this objective structure, other novel aspects of PARADISE include the use of the Kappa coefficient (Carletta, 1996; Siegel and Castellan, 1988) to operationalize task success, and the use of linear regression to quantify the relative contribution of the success and cost factors to user satisfaction.

The remainder of this section explains the measures (ovals in Figure 1) used to operationalize the set of objectives, and the methodology for estimating a quantitative performance function that reflects the objective structure. Section 2.1 describes PARADISE's task representation, which is needed to calculate the task-based success measure described in Section 2.2. Section 2.3 describes the cost measures considered in PARADISE, which reflect both the efficiency and the naturalness of an agent's dialogue behaviors. Section 2.4 describes the use of linear regression and user satisfaction to estimate the relative contribution of the success and cost measures in a single performance function. Finally, Section 2.5 explains how performance can be calculated for subdialogues as well as whole dialogues, while Section 2.6 summarizes the method.

2.1 Tasks as Attribute Value Matrices

A general evaluation framework requires a task representation that decouples what an agent and user accomplish from how the task is accomplished using dialogue strategies. We propose that an attribute value matrix (AVM) can represent many dialogue tasks. This consists of the information that must be exchanged between the agent and the user during the dialogue, represented as a set of ordered pairs of attributes and their possible values.[2]

As a first illustrative example, consider a simplification of the train timetable domain of Dialogues 1 and 2, where the timetable only contains information about rush-hour trains between four cities, as shown in Table 1. This AVM consists of four attributes (abbreviations for each attribute name are also shown).[3] In Table 1, these attribute-value pairs are annotated with the direction of information flow to represent who acquires the information, although this information is not used for evaluation. During the dialogue the agent must acquire from the user the values of DC, AC, and DR, while the user must acquire DT.

Performance evaluation for an agent requires a corpus of dialogues between users and the agent, in which users execute a set of scenarios.

[2] For infinite sets of values, actual values found in the experimental data constitute the required finite set.

[3] The AVM serves as an evaluation mechanism only. We are not claiming that AVMs determine an agent's behavior or serve as an utterance's semantic representation.

attribute           possible values                 information flow
depart-city (DC)    Milano, Roma, Torino, Trento    to agent
arrival-city (AC)   Milano, Roma, Torino, Trento    to agent
depart-range (DR)   morning, evening                to agent
depart-time (DT)    6am, 8am, 6pm, 8pm              to user

Table 1: Attribute value matrix, simplified train timetable domain

A1: Hello, This is Train Enquiry Service. DC,AC,DR,DT
    Please speak after the tone. DC,AC,DR,DT
    Which information do you need? DC,AC,DR,DT
U1: I want to go from Torino to Milano. DC,AC
A2: Do you want to go from Trento to Milano? DC,AC
    Yes or No? DC,AC
U2: No. DC,AC
A3: Do you want to leave from Trento? DC
    Yes or No? DC
U3: No. DC
A4: Where do you want to leave from? DC
U4: I want to leave from Torino. DC
A5: Do you want to leave from Torino? DC
    Yes or No? DC
U5: Yes. DC
A6: Do you want to go to Milano? AC
U6: Yes. AC
A7: At which time do you want to leave? DR
U7: I want to travel in the evening. DR
A8: Do you want to leave between 6 and 9 p.m.? DR
    Yes or No? DR
U8: Yes. DR
A9: There is a train leaving at 8:00 p.m. DT

Figure 2: Agent A dialogue interaction (Danieli and Gerbino, 1995)

Each scenario execution has a corresponding AVM instantiation indicating the task information requirements for the scenario, where each attribute is paired with the attribute value obtained via the dialogue. For example, assume that a scenario requires the user to find a train from Torino to Milano that leaves in the evening, as in the longer versions of Dialogues 1 and 2 in Figures 2 and 3.[4] Table 2 contains an AVM corresponding to a "key" for this scenario. All dialogues resulting from execution of this scenario in which the agent and the user correctly convey all attribute values (as in Figures 2 and 3) would have the same AVM as the scenario key in Table 2. The AVMs of the remaining dialogues would differ from the key by at least one value. Thus, even though the dialogue strategies in Figures 2 and 3 are radically different, the AVM task representation for these dialogues is identical and the performance of the system for the same task can thus be assessed on the basis of the AVM representation.

[4] These dialogues have been slightly modified from (Danieli and Gerbino, 1995). The attribute names at the end of each utterance will be explained below.

B1: Hello, This is Train Enquiry Service. DC,AC,DR,DT
    Please speak after the tone. DC,AC,DR,DT
    Which information do you need? DC,AC,DR,DT
U1: I want to travel from Torino to Milano. DC,AC
B2: At which time do you want to leave from Merano to Milano? DC,AC,DR
U2: No, I want to leave from Torino in the evening. DC,DR
B3: Do you want to leave from Torino between 6 and 11 p.m.? DC,DR
    Please answer Yes or No. DC,DR
U3: Yes. DC,DR
B4: A train leaves at 8 p.m. DT

Figure 3: Agent B dialogue interaction (Danieli and Gerbino, 1995)

attribute       actual value
depart-city     Torino
arrival-city    Milano
depart-range    evening
depart-time     8pm

Table 2: Attribute value matrix instantiation, scenario key for Dialogues 1 and 2

2.2 Measuring Task Success

Success at the task for a whole dialogue (or subdialogue) is measured by how well the agent and user achieve the information requirements of the task by the end of the dialogue (or subdialogue). This section explains how PARADISE uses the Kappa coefficient (Carletta, 1996; Siegel and Castellan, 1988) to operationalize the task-based success measure in Figure 1.

The Kappa coefficient, kappa, is calculated from a confusion matrix that summarizes how well an agent achieves the information requirements of a particular task for a set of dialogues instantiating a set of scenarios.[5] For example, Tables 3 and 4 show two hypothetical confusion matrices that could have been generated in an evaluation of 100 complete dialogues with each of two train timetable agents A and B (perhaps using the confirmation strategies illustrated in Figures 2 and 3, respectively).[6] The values in the matrix cells are based on comparisons between the dialogue and scenario key AVMs. Whenever an attribute value in a dialogue (i.e., data) AVM matches the value in its scenario key, the number in the appropriate diagonal cell of the matrix is incremented by 1. The off-diagonal cells represent misunderstandings that are not corrected in the dialogue. Note that depending on the strategy that a spoken dialogue agent uses, confusions across attributes are possible, e.g., "Milano" could be confused with "morning." The effect of misunderstandings that are corrected during the course of the dialogue are reflected in the costs associated with the dialogue, as will be discussed below.

The first matrix summarizes how the 100 AVMs representing each dialogue with Agent A compare with the AVMs representing the relevant scenario keys, while the second matrix summarizes the information exchange with Agent B.

[5] Confusion matrices can be constructed to summarize the result of dialogues for any subset of the scenarios, attributes, users or dialogues.

[6] The distributions in the tables were roughly based on performance results in (Danieli and Gerbino, 1995).
Note that depending on the strategy that a spoken dialogue agent uses, confusions across attributes are possible, e.g., "Mi- lano " could be confused with "morning." The effect of misunderstandings that are corrected during the course of the dialogue are reflected in the costs associated with the dialogue, as will be discussed below. The first matrix summarizes how the 100 AVMs rep- resenting each dialogue with Agent A compare with the AVMs representing the relevant scenario keys, while the 5Confusion matrices can be constructed to summarize the result of dialogues for any subset of the scenarios, attributes, users or dialogues. ~The distributions in the tables were roughly based on per- formance results in (Danieli and Gerbino, 1995). 273 DATA vl v2 v3 v4 v5 v6 v7 v8 v9 vlO vii v12 v13 vl4 sum KEY DEPART.CITY ARRIVAL-CTrY DEPART-RANGE DEPART-TIME vl v2 v3 v4 v5 v6 v7 v8 v9 vl0 vii v12 v13 v14 22 1 3 29 4 16 4 I 1 1 5 11 1 3 20 22 2 1 1 20 5 1 1 2 8 15 45 10 5 40 oIBI~ 15 25 25 30 20 50 50 20 2 I 19 2 4 2 18 2 6 3 21 25 25 25 25 Table 3: Confusion matrix, Agent A DEPART-CITY DATA vl v2 v3 v4 v! 16 1 v2 1 20 1 v3 5 1 9 4 v4 1 2 6 6 v5 4 v6 1 6 v7 5 2 v8 1 3 3 v9 2 vl0 vii v12 v13 v14 sum 30 30 25 15 ARR2VAL-CITY v5 v6 v7 v8 4 3 2 4 2 15 19 1 1 15 1 2 9 25 25 30 DEPART-RANGE v9 vl0 3 2 2 3 2 3 4 11 39 10 6 35 20 5O 50 DEPAK'F-TIME I / E 20 5 5 4 10 5 5 5 5 10 5 5 5 11 25 25 25 25 Table 4: Confusion matrix, Agent B second matrix summarizes the information exchange with Agent B. Labels vl to v4 in each matrix represent the possible values of depart-city shown in Table 1; v5 to v8 are for arrival-city, etc. Columns represent the key, specifying which information values the agent and user were supposed to communicate to one another given a particular scenario. (The equivalent column sums in both tables reflects that users of both agents were assumed to have performed the same scenarios). Rows represent the data collected from the dialogue corpus, reflecting what attribute values were actually communicated between the agent and the user. Given a confusion matrix M, success at achieving the information requirements of the task is measured with the Kappa coefficient (Carletta, 1996; Siegel and Castellan, 1988): P(A) - P(E) K-- 1 - P(E) P(A) is the proportion of times that the AVMs for the actual set of dialogues agree with the AVMs for the sce- nario keys, and P(E) is the proportion of times that the AVMs for the dialogues and the keys are expected to agree by chance. 7 When there is no agreement other than that which would be expected by chance, ~ = 0. When there is total agreement, ~ = 1. n is superior to other measures of success such as transaction success (Danieli and Gerbino, 1995), concept accuracy (Simpson and Fraser, 1993), and percent agreement (Gale, Church, and Yarowsky, 1992) because n takes into account the inherent complexity of the task by correcting for chance expected agreement. Thus ~ provides a basis for comparisons across agents that are performing different tasks. When the prior distribution of the categories is un- known, P(E), the expected chance agreement between the data and the key, can be estimated from the distri- bution of the values in the keys. This can be calculated from confusion matrix M, since the columns represent the values in the keys. In particular: r~ P(E) = ~j--,ft_i ~2 L.~, T, i=l 7~ has been used to measure pairwise agreement among coders making category judgments (Carletta, 1996; Krippen- doff, 1980; Siegel and Castellan, 1988). 
Thus, the observed user/agent interactions are modeled as a coder, and the ideal interactions as an expert coder. 274 where ti is the sum of the frequencies in column i of M, and T is the sum of the frequencies in M (tl + • • • + tn). P(A), the actual agreement between the data and the key, is always computed from the confusion matrix M: P(A) - ~'~i~=l M(i, i) T Given the confusion matrices in Tables 3 and 4, P(E) = 0.079 for both agents, s For Agent A, P(A) = 0.795 and • = 0.777, while for Agent B, P(A) = 0.59 and a = 0.555, suggesting that Agent A is more successful than B in achieving the task goals. 2.3 Measuring Dialogue Costs As shown in Figure 1, performance is also a function of a combination of cost measures. Intuitively, cost measures should be calculated on the basis of any user or agent dialogue behaviors that should be minimized. A wide range of cost measures have been used in previous work; these include pure efficiency measures such as the num- ber of turns or elapsed time to complete the task (Abella, Brown, and Buntschuh, 1996; Hirschman et al., 1990; Smith and Gordon, 1997; Walker, 1996), as well as mea- sures of qualitative phenomena such as inappropriate or repair utterances (Danieli and Gerbino, 1995; Hirschman and Pao, 1993; Simpson and Fraser, 1993). PARADISE represents each cost measure as a function ci that can be applied to any (sub)dialogue. First, consider the simplest case of calculating efficiency measures over a whole dialogue. For example, let cl be the total number of utterances. For the whole dialogue D1 in Figure 2, el(D1) is 23 utterances. For the whole dialogue D2 in Figure 3, cl (D2) is 10 utterances. To calculate costs over subdialogues and for some of the qualitative measures, it is necessary to be able to spec- ify which information goals each utterance contributes to. PARADISE uses its AVM representation to link the information goals of the task to any arbitrary dialogue behavior, by tagging the dialogue with the attributes for the task. 9 This makes it possible to evaluate any potential dialogue strategies for achieving the task, as well as to evaluate dialogue strategies that operate at the level of dialogue subtasks (subdialogues). Consider the longer versions of Dialogues 1 and 2 in Figures 2 and 3. Each utterance in Figures 2 and 3 has been tagged using one or more of the attribute abbrevia- tions in Table 1, according to the subtask(s) the utterance contributes to. As a convention of this type of tagging, SUsing a single confusion matrix for all attributes as in Tables 3 and 4 inflates n when there are few cross-attribute confusions by making P(E) smaller. In some cases it might be desirable to calculate ~; first for identification of attributes and then for values within attributes, or to average ~ for each attribute to produce an overall t¢ for the task. 9This tagging can be hand generated, or system generated and hand corrected. Preliminary studies indicate that reliability for human tagging is higher for AVM attribute tagging than for other types of discourse segment tagging (Passonneau and Litman, 1997; Hirschberg and Nakatani, 1996). ~:E.AC, DR, D ~:AI..A9 SEG~cr: S3 S~Ml~Cr: S4 G0~: I£ GOALS: AC o'rr~cES: A3...u5 0TI/~ES: A6...U6 Figure 4: Task-defined discourse structure of Agent A dialogue interaction utterances that contribute to the success of the whole dia- logue, such as greetings, are tagged with all the attributes. 
Since the structure of a dialogue reflects the structure of the task (Carberry, 1989; Grosz and Sidner, 1986; Litman and Allen, 1990), the tagging of a dialogue by the AVM attributes can be used to generate a hierarchical discourse structure such as that shown in Figure 4 for Dialogue 1 (Figure 2). For example, segment (subdialogue) S2 in Figure 4 is about both depart-city (DC) and arrival-city (AC). It contains segments S3 and S4 within it, and consists of utterances U1...U6.

Tagging by AVM attributes is required to calculate costs over subdialogues, since for any subdialogue, task attributes define the subdialogue. For subdialogue S4 in Figure 4, which is about the attribute arrival-city and consists of utterances A6 and U6, c1(S4) is 2.

Tagging by AVM attributes is also required to calculate the cost of some of the qualitative measures, such as number of repair utterances. (Note that to calculate such costs, each utterance in the corpus of dialogues must also be tagged with respect to the qualitative phenomenon in question, e.g. whether the utterance is a repair.[10]) For example, let c2 be the number of repair utterances. The repair utterances in Figure 2 are A3 through U6, thus c2(D1) is 10 utterances and c2(S4) is 2 utterances. The repair utterance in Figure 3 is U2, but note that according to the AVM task tagging, U2 simultaneously addresses the information goals for depart-range. In general, if an utterance U contributes to the information goals of N different attributes, each attribute accounts for 1/N of any costs derivable from U. Thus, c2(D2) is .5.

[10] Previous work has shown that this can be done with high reliability (Hirschman and Pao, 1993).

Given a set of c_i, it is necessary to combine the different cost measures in order to determine their relative contribution to performance. The next section explains how to combine kappa with a set of c_i to yield an overall performance measure.

2.4 Estimating a Performance Function

Given the definition of success and costs above and the model in Figure 1, performance for any (sub)dialogue D is defined as follows:[11]

   Performance = (alpha * N(kappa)) - Sum over i=1..n of w_i * N(c_i)

Here alpha is a weight on kappa, the cost functions c_i are weighted by w_i, and N is a Z score normalization function (Cohen, 1995). The normalization function is used to overcome the problem that the values of c_i are not on the same scale as kappa, and that the cost measures c_i may also be calculated over widely varying scales (e.g. response delay could be measured using seconds while, in the example, costs were calculated in terms of number of utterances). This problem is easily solved by normalizing each factor x to its Z score:

   N(x) = (x - mean(x)) / sigma_x

where sigma_x is the standard deviation for x.

user     agent  US    kappa  c1 (#utt)  c2 (#rep)
1        A      1     1      46         30
2        A      2     1      50         30
3        A      2     1      52         30
4        A      3     1      40         20
5        A      4     1      23         10
6        A      2     1      50         36
7        A      1     0.46   75         30
8        A      1     0.19   60         30
9        B      6     1      8          0
10       B      5     1      15         1
11       B      6     1      10         0.5
12       B      5     1      20         3
13       B      1     0.19   45         18
14       B      1     0.46   50         22
15       B      2     0.19   34         18
16       B      2     0.46   40         18
Mean(A)  A      2     0.83   49.5       27
Mean(B)  B      3.5   0.66   27.8       10.1
Mean     NA     2.75  0.75   38.6       18.5

Table 5: Hypothetical performance data from users of Agents A and B

To illustrate the method for estimating a performance function, we will use a subset of the data from Tables 3 and 4, shown in Table 5.

[11] We assume an additive performance (utility) function because it appears that kappa and the various cost factors c_i are utility independent and additive independent (Keeney and Raiffa, 1976). It is possible however that user satisfaction data collected in future experiments (or other data such as willingness to pay or use) would indicate otherwise. If so, continuing use of an additive function might require a transformation of the data, a reworking of the model shown in Figure 1, or the inclusion of interaction terms in the model (Cohen, 1995).
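The normalization step is easy to check against the Table 5 data. The Python sketch below is ours; the worked values quoted next (mean 38.6, standard deviation 18.9) imply the sample standard deviation (n - 1 denominator), which the sketch uses.

```python
# Z-score normalization over the Table 5 #utt column, using the sample
# standard deviation; reproduces the worked values for users 5 and 11.
def z_score(x, values):
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / (len(values) - 1)) ** 0.5
    return (x - mean) / sd

utts = [46, 50, 52, 40, 23, 50, 75, 60, 8, 15, 10, 20, 45, 50, 34, 40]
print(round(z_score(23, utts), 2))   # user 5:  -0.83
print(round(z_score(10, utts), 2))   # user 11: -1.51
```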
Table 5 represents the results from a hypothetical experiment in which eight users were randomly assigned to communicate with Agent A and eight users were randomly assigned to communicate with Agent B. Table 5 shows user satisfaction (US) ratings (discussed below), kappa, number of utterances (#utt) and number of repair utterances (#rep) for each of these users. Users 5 and 11 correspond to the dialogues in Figures 2 and 3 respectively. To normalize c1 for user 5, we determine that the mean of c1 is 38.6 and sigma_c1 is 18.9. Thus, N(c1) is -0.83. Similarly N(c1) for user 11 is -1.51.

To estimate the performance function, the weights alpha and w_i must be solved for. Recall that the claim implicit in Figure 1 was that the relative contribution of task success and dialogue costs to performance should be calculated by considering their contribution to user satisfaction. User satisfaction is typically calculated with surveys that ask users to specify the degree to which they agree with one or more statements about the behavior or the performance of the system. A single user satisfaction measure can be calculated from a single question, or as the mean of a set of ratings. The hypothetical user satisfaction ratings shown in Table 5 range from a high of 6 to a low of 1.

Given a set of dialogues for which user satisfaction (US), kappa and the set of c_i have been collected experimentally, the weights alpha and w_i can be solved for using multiple linear regression. Multiple linear regression produces a set of coefficients (weights) describing the relative contribution of each predictor factor in accounting for the variance in a predicted factor. In this case, on the basis of the model in Figure 1, US is treated as the predicted factor. Normalization of the predictor factors (kappa and c_i) to their Z scores guarantees that the relative magnitude of the coefficients directly indicates the relative contribution of each factor. Regression on the Table 5 data for both sets of users tests which of the factors kappa, #utt, #rep most strongly predicts US.

In this illustrative example, the results of the regression with all factors included show that only kappa and #rep are significant (p < .02). In order to develop a performance function estimate that includes only significant factors and eliminates redundancies, a second regression including only significant factors must then be done. In this case, a second regression yields the predictive equation:

   Performance = .40 N(kappa) - .78 N(c2)

i.e., alpha is .40 and w2 is .78. The results also show kappa is significant at p < .0003, #rep significant at p < .0001, and the combination of kappa and #rep account for 92% of the variance in US, the external validation criterion. The factor #utt was not a significant predictor of performance, in part because #utt and #rep are highly redundant. (The correlation between #utt and #rep is 0.91.)
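The weight-estimation step can be sketched in a few lines. The Python below is ours, not the paper's procedure verbatim: numpy least squares stands in for the regression package, the intercept column is our addition, and on data like Table 5 the coefficient on N(kappa) plays the role of alpha while the magnitude of the (negative) coefficient on N(#rep) plays the role of w2.

```python
# Estimate alpha and w2 by regressing US on the Z-scored predictors.
import numpy as np

def estimate_weights(us, kappas, reps):
    z = lambda v: (np.asarray(v, float) - np.mean(v)) / np.std(v, ddof=1)
    X = np.column_stack([z(kappas), z(reps), np.ones(len(us))])
    (alpha, coef_rep, _), *_ = np.linalg.lstsq(X, np.asarray(us, float),
                                               rcond=None)
    return alpha, -coef_rep   # cost weight w2 is the negated coefficient
```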
Given values for and wi, performance can be calculated for both agents 276 using the equation above. The mean performance of A is -.44 and the mean performance of B is .44, suggesting that Agent B may perform better than Agent A overall. The evaluator must then however test these perfor- mance differences for statistical significance. In this case, a t test shows that differences are only significant at the p < .07 level, indicating a trend only. In this case, an eval- uation over a larger subset of the user population would probably show significant differences. 2.5 Application to Subdialogues Since both ~ and ei can be calculated over subdialogues, performance can also be calculated at the subdialogue level by using the values for c~ and wi as solved for above. This assumes that the factors that are predictive of global performance, based on US, generalize as predictors of local performance, i.e. within subdialogues defined by subtasks, as defined by the attribute tagging. 12 Consider calculating the performance of the dialogue strategies used by train timetable Agents A and B, over the subdialogues that repair the value of depart-city. Seg- ment $3 (Figure 4) is an example of such a subdialogue with Agent A. As in the initial estimation of a perfor- mance function, our analysis requires experimental data, namely a set of values for ~ and el, and the application of the Z score normalization function to this data. However, the values for ~ and ci are now calculated at the subdia- Iogue rather than the whole dialogue level. In addition, only data from comparable strategies can be used to cal- culate the mean and standard deviation for normalization. Informally, a comparable strategy is one which applies in the same state and has the same effects. For example, to calculate ~ for Agent A over the sub- dialogues that repair depart-city, P(A) and P(E) are com- puted using only the subpart of Table 3 concerned with depart-city. For Agent A, P(A) = .78, P(E) = .265, and = .70. Then, this value of~ is normalized using data from comparable subdialogues with both Agent A and Agent B. Based on the data in Tables 3 and 4, the mean ~ is .515 and ~r is .261, so that.M(~c) for Agent A is .71. To calculate c2 for Agent A, assume that the average number of repair utterances for Agent A's subdialogues that repair depart-city is 6, that the mean over all compa- rable repair subdialogues is 4, and the standard deviation is 2.79. Then A/'(cz) is .72. Let Agent A's repair dialogue strategy for subdialogues repairing depart-city be RA and Agent B's repair strat- egy for depart-city be RB. Then using the performance equation above, predicted performance for RA is: Performance(Ra) = .40 • .71 -- .78 • .72 = --0.28 For Agent B, using the appropriate subpart of Table 4 to calculate ~, assuming that the average number of depart-city repair utterances is 1.38, and using similar 12This assumption has a sound basis in theories of dialogue structure (Carberry, 1989; Grosz and Sidner, 1986; Litman and Allen, 1990), but should be tested empirically. calculations, yields Performance(RB) = .40. -.71 - .78 • -.94 = 0.45 Thus the results of these experiments predict that when an agent needs to choose between the repair strategy that Agent B uses and the repair strategy that Agent A uses for repairing depart-city, it should use Agent B's strategy RB, since the performance(RB) is predicted to be greater than the performance(Ra). 
Note that the ability to calculate performance over subdialogues allows us to conduct experiments that simultaneously test multiple dialogue strategies. For example, suppose Agents A and B had different strategies for presenting the value of depart-time (in addition to different confirmation strategies). Without the ability to calculate performance over subdialogues, it would be impossible to test the effect of the different presentation strategies independently of the different confirmation strategies.

2.6 Summary

We have presented the PARADISE framework, and have used it to evaluate two hypothetical dialogue agents in a simplified train timetable task domain. We used PARADISE to derive a performance function for this task, by estimating the relative contribution of a set of potential predictors to user satisfaction. The PARADISE methodology consists of the following steps:

• definition of a task and a set of scenarios;
• specification of the AVM task representation;
• experiments with alternate dialogue agents for the task;
• calculation of user satisfaction using surveys;
• calculation of task success using kappa;
• calculation of dialogue cost using efficiency and qualitative measures;
• estimation of a performance function using linear regression and values for user satisfaction, kappa and dialogue costs;
• comparison with other agents/tasks to determine which factors generalize;
• refinement of the performance model.

Note that all of these steps are required to develop the performance function. However once the weights in the performance function have been solved for, user satisfaction ratings no longer need to be collected. Instead, predictions about user satisfaction can be made on the basis of the predictor variables, as illustrated in the application of PARADISE to subdialogues.

Given the current state of knowledge, it is important to emphasize that researchers should be cautious about generalizing a derived performance function to other agents or tasks. Performance function estimation should be done iteratively over many different tasks and dialogue strategies to see which factors generalize. In this way, the field can make progress on identifying the relationship between various factors and can move towards more predictive models of spoken dialogue agent performance.
U 1: I want to go from Torino to Roma DC,AC C 1: Approximately what time of day would you like to travel? DR U2: What are the options? DR C2: Morning or evening. DR U3: Are those departure times? DR C3: Yes. DR U4: I'd like to leave in the morning. DR C4: Train 702 leaves Torino Porto at 8 a.m. DT U5: Please reserve me a seat on that train. RT Figure 5: Hypothetical Agent C dialogue interaction Figure 5 presents a hypothetical dialogue in this ex- tended task domain, and illustrates user utterance types and an agent dialogue strategy that are very different from those in Figures 2 and 3. First, Agent C in Figure 5 uses a "no confirmation" dialogue strategy, in contrast to the explicit and implicit confirmation strategies used in Fig- ures 2 and 3. Second, Figure 5 illustrates new types of user utterances that do not directly further the informa- tional goals of the task. In U2, the user asks the agent a wh-question about the DR attribute itself, rather than providing information about that attribute's value. Since U2 satisfies a knowledge precondition related to answer- ing C l, U2 contributes to the DR goal and is tagged as such. In U3, the user similarly asks a yes-no question that addresses a subgoal related to answering C1. Finally, U5 illustrates a user request for an agent action, and is tagged with the RT attribute. The value of RT in the AVM instantiation for the dialogue would be "reserve" Second, consider the very different domain and task of diagnosing a fault and repairing a circuit (Smith and Gordon, 1997). Figure 6 presents one dialogue from this domain. I. C: This is the circuit fix it shop. How may I help you? 2. U: I want to fix a circuit. 3. C: What is the ID of the circuit? 4. U: Rsll 1. 5. C: I am familiar with that circuit. The LED is supposed to be displaying alternately flashing one and seven. CB 6. U: The LED is off. RB 7. C: What is the switch at when the LED is off?. RB 8, U: The switch is down. RB 9. C: Put the switch up. RB 10. U: Okay. RB 11. C: Put the knob to one zero. RB 12. U: Okay. RB 13. C: What is the LED displaying? RB 14. U: LED is displaying only a flashing seven. RB 15. C: Is there a wire between connector eight four and connector nine nine? FT 16. U: No. FT 17. C: Add a wire between connector eight four and connector nine nine. FC 18. U: Done. FC 19. C: What is the LED displaying? T 20. U: Alternately flashing one and seven. T 2 I. C: Is the one on the LED displaying for a longer period of time? T 22. U: No. T 23. C: Is the seven on the LED displaying for a longer period of time? T 24. U: No. T 25. C: Put the knob to zero. T 26. U: Okay. T 27. C: What is the LED displaying? T 28. U: Alternately displaying one and seven. T 29. C: Is the one on the LED displaying for a longer period of time? T 30. U: Yes. T 31. C: Put the switch down. T 32. U: The switch is down. T 33. C: What is the LED displaying? T 34. U: Nothing. T 35. C: The circuit is working correctly. Good-bye. T ID,CB,RB,FT, FC,T ID,CB,RB,FT, FC,T ID,CB,RB,FT, FC,T ID ID CB Figure 6: A circuit domain dialogue (Smith and Gordon, 1997), with AVM tagging Smith and Gordon collected 144 dialogues for this task, in which agent initiative was varied by using different dialogue strategies, and tagged each dialogue according to the following subtask structure: 13 • Introduction (I)--establish the purpose of the task . 
• Assessment (A) -- establish the current behavior
• Diagnosis (D) -- establish the cause for the errant behavior
• Repair (R) -- establish that the correction for the errant behavior has been made
• Test (T) -- establish that the behavior is now correct

Our informational analysis of this task results in the AVM shown in Table 7. Note that the attributes are almost identical to Smith and Gordon's list of subtasks. Circuit-ID corresponds to Introduction, Correct-Circuit-Behavior and Current-Circuit-Behavior correspond to Assessment, Fault-Type corresponds to Diagnosis, Fault-Correction corresponds to Repair, and Test corresponds to Test. The attribute names emphasize information exchange, while the subtask names emphasize function.

[13] They report a κ of .82 for reliability of their tagging scheme.

Table 7: Attribute value matrix, circuit domain

  attribute                      possible values
  Circuit-ID (ID)                RS111, RS112, ...
  Correct-Circuit-Behavior (CB)  Flash-1-7, Flash-1, ...
  Current-Circuit-Behavior (RB)  Flash-7
  Fault-Type (FT)                MissingWire84-99, MissingWire88-99, ...
  Fault-Correction (FC)          yes, no
  Test (T)                       yes, no

Figure 6 is tagged with the attributes from Table 7. Smith and Gordon's tagging of this dialogue according to their subtask representation was as follows: turns 1-4 were I, turns 5-14 were A, turns 15-16 were D, turns 17-18 were R, and turns 19-35 were T. Note that there are only two differences between the dialogue structures yielded by the two tagging schemes. First, in our scheme (Figure 6), the greetings (turns 1 and 2) are tagged with all the attributes. Second, Smith and Gordon's single tag A corresponds to two attribute tags in Table 7, which in our scheme defines an extra level of structure within assessment subdialogues.
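The κ reliability figure quoted in footnote 13 is computed from observed and chance agreement. For concreteness, here is a minimal sketch of that computation, κ = (P(A) − P(E)) / (1 − P(E)), over a hypothetical confusion matrix; the matrix values are invented for illustration.

```python
def kappa(confusion):
    # kappa = (P(A) - P(E)) / (1 - P(E)), where P(A) is the observed
    # agreement with the key and P(E) is the agreement expected by
    # chance, from the marginal distribution of observed values.
    total = float(sum(sum(row) for row in confusion))
    p_a = sum(confusion[i][i] for i in range(len(confusion))) / total
    p_e = sum((sum(row[j] for row in confusion) / total) ** 2
              for j in range(len(confusion)))
    return (p_a - p_e) / (1 - p_e)

# Hypothetical confusion matrix for a single attribute with three
# possible values; rows are key values, columns are observed values.
M = [[10, 1, 0],
     [2, 8, 1],
     [0, 1, 12]]
print(round(kappa(M), 2))
```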
In addition, this approach is broadly integrative, incorporating aspects of transaction success, concept accuracy, multiple cost measures, and user satisfaction. In our framework, transaction success is reflected in κ, corresponding to dialogues with a P(A) of 1. Our performance measure also captures information similar to concept accuracy, where low concept accuracy scores translate into either higher costs for acquiring information from the user, or lower κ scores.

One limitation of the PARADISE approach is that the task-based success measure does not reflect that some solutions might be better than others. For example, in the train timetable domain, we might like our task-based success measure to give higher ratings to agents that suggest express over local trains, or that provide helpful information that was not explicitly requested, especially since the better solutions might occur in dialogues with higher costs. It might be possible to address this limitation by using the interval scaled data version of κ (Krippendorf, 1980). Another possibility is to simply substitute a domain-specific task-based success measure in the performance model for κ.

The evaluation model presented here has many applications in spoken dialogue processing. We believe that the framework is also applicable to other dialogue modalities, and to human-human task-oriented dialogues. In addition, while there are many proposals in the literature for algorithms for dialogue strategies that are cooperative, collaborative or helpful to the user (Webber and Joshi, 1982; Pollack, Hirschberg, and Webber, 1982; Joshi, Webber, and Weischedel, 1984; Chu-Carrol and Carberry, 1995), very few of these strategies have been evaluated as to whether they improve any measurable aspect of a dialogue interaction. As we have demonstrated here, any dialogue strategy can be evaluated, so it should be possible to show that a cooperative response, or other cooperative strategy, actually improves task performance by reducing costs or increasing task success. We hope that this framework will be broadly applied in future dialogue research.

5 Acknowledgments

We would like to thank James Allen, Jennifer Chu-Carroll, Morena Danieli, Wieland Eckert, Giuseppe Di Fabbrizio, Don Hindle, Julia Hirschberg, Shri Narayanan, Jay Wilpon, Steve Whittaker and three anonymous reviewers for helpful discussion and comments on earlier versions of this paper.

References

Abella, Alicia, Michael K Brown, and Bruce Buntschuh. 1996. Development principles for dialog-based interfaces. In ECAI-96 Spoken Dialog Processing Workshop, Budapest, Hungary.

Bates, Madeleine and Damaris Ayuso. 1993. A proposal for incremental dialogue evaluation. In Proceedings of the DARPA Speech and NL Workshop, pages 319-322.

Carberry, S. 1989. Plan recognition and its use in understanding dialogue. In A. Kobsa and W. Wahlster, editors, User Models in Dialogue Systems. Springer Verlag, Berlin, pages 133-162.

Carletta, Jean C. 1996. Assessing the reliability of subjective codings. Computational Linguistics, 22(2):249-254.

Chu-Carrol, Jennifer and Sandra Carberry. 1995. Response generation in collaborative negotiation. In Proceedings of the Conference of the 33rd Annual Meeting of the Association for Computational Linguistics, pages 136-143.

Cohen, Paul R. 1995. Empirical Methods for Artificial Intelligence. MIT Press, Boston.

Danieli, M., W. Eckert, N. Fraser, N. Gilbert, M. Guyomard, P. Heisterkamp, M. Kharoune, J. Magadur, S. McGlashan, D.
Sadek, J. Siroux, and N. Youd. 1992. Dialogue manager design evaluation. Technical Report Project Esprit 2218 SUNDIAL, WP6000-D3.

Danieli, Morena and Elisabetta Gerbino. 1995. Metrics for evaluating dialogue strategies in a spoken language system. In Proceedings of the 1995 AAAI Spring Symposium on Empirical Methods in Discourse Interpretation and Generation, pages 34-39.

Doyle, Jon. 1992. Rationality and its roles in reasoning. Computational Intelligence, 8(2):376-409.

Fraser, Norman M. 1995. Quality standards for spoken dialogue systems: a report on progress in EAGLES. In ESCA Workshop on Spoken Dialogue Systems, Vigso, Denmark, pages 157-160.

Gale, William, Ken W. Church, and David Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proc. of 30th ACL, pages 249-256, Newark, Delaware.

Grosz, Barbara J. and Candace L. Sidner. 1986. Attentions, intentions and the structure of discourse. Computational Linguistics, 12:175-204.

Hirschberg, Julia and Christine Nakatani. 1996. A prosodic analysis of discourse segments in direction-giving monologues. In 34th Annual Meeting of the Association for Computational Linguistics, pages 286-293.

Hirschman, Lynette, Deborah A. Dahl, Donald P. McKay, Lewis M. Norton, and Marcia C. Linebarger. 1990. Beyond class A: A proposal for automatic evaluation of discourse. In Proceedings of the Speech and Natural Language Workshop, pages 109-113.

Hirschman, Lynette and Christine Pao. 1993. The cost of errors in a spoken language system. In Proceedings of the Third European Conference on Speech Communication and Technology, pages 1419-1422.

Joshi, Aravind K., Bonnie L. Webber, and Ralph M. Weischedel. 1984. Preventing false inferences. In COLING84: Proc. 10th International Conference on Computational Linguistics, pages 134-138.

Kamm, Candace. 1995. User interfaces for voice applications. In David Roe and Jay Wilpon, editors, Voice Communication between Humans and Machines. National Academy Press, pages 422-442.

Keeney, Ralph and Howard Raiffa. 1976. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. John Wiley and Sons.

Krippendorf, Klaus. 1980. Content Analysis: An Introduction to its Methodology. Sage Publications, Beverly Hills, Ca.

Litman, Diane and James Allen. 1990. Recognizing and relating discourse intentions and task-oriented plans. In Philip Cohen, Jerry Morgan, and Martha Pollack, editors, Intentions in Communication. MIT Press.

Passonneau, Rebecca J. and Diane Litman. 1997. Discourse segmentation by human and automated means. Computational Linguistics, 23(1).

Polifroni, Joseph, Lynette Hirschman, Stephanie Seneff, and Victor Zue. 1992. Experiments in evaluating interactive spoken language systems. In Proceedings of the DARPA Speech and NL Workshop, pages 28-33.

Pollack, Martha, Julia Hirschberg, and Bonnie Webber. 1982. User participation in the reasoning process of expert systems. In Proceedings First National Conference on Artificial Intelligence, pages 358-361.

Shriberg, Elizabeth, Elizabeth Wade, and Patti Price. 1992. Human-machine problem solving using spoken language systems (SLS): Factors affecting performance and user satisfaction. In Proceedings of the DARPA Speech and NL Workshop, pages 49-54.

Siegel, Sidney and N. J. Castellan. 1988. Nonparametric Statistics for the Behavioral Sciences. McGraw Hill.

Simpson, A. and N. A. Fraser. 1993. Black box and glass box evaluation of the SUNDIAL system.
In Proceedings of the Third European Conference on Speech Communication and Technology, pages 1423-1426.

Smith, Ronnie W. and Steven A. Gordon. 1997. Effects of variable initiative on linguistic behavior in human-computer spoken natural language dialog. Computational Linguistics, 23(1).

Sparck-Jones, Karen and Julia R. Galliers. 1996. Evaluating Natural Language Processing Systems. Springer.

Walker, Marilyn A. 1996. The Effect of Resource Limits and Task Complexity on Collaborative Planning in Dialogue. Artificial Intelligence Journal, 85(1-2):181-243.

Webber, Bonnie and Aravind Joshi. 1982. Taking the initiative in natural language database interaction: Justifying why. In Coling 82, pages 413-419.
Unification-based Multimodal Integration

Michael Johnston, Philip R. Cohen, David McGee, Sharon L. Oviatt, James A. Pittman, Ira Smith
Center for Human Computer Communication
Department of Computer Science and Engineering
Oregon Graduate Institute, PO BOX 91000, Portland, OR 97291, USA.
{johnston, pcohen, dmcgee, oviatt, jay, ira}@cse.ogi.edu

Abstract

Recent empirical research has shown conclusive advantages of multimodal interaction over speech-only interaction for map-based tasks. This paper describes a multimodal language processing architecture which supports interfaces allowing simultaneous input from speech and gesture recognition. Integration of spoken and gestural input is driven by unification of typed feature structures representing the semantic contributions of the different modes. This integration method allows the component modalities to mutually compensate for each others' errors. It is implemented in QuickSet, a multimodal (pen/voice) system that enables users to set up and control distributed interactive simulations.

1 Introduction

By providing a number of channels through which information may pass between user and computer, multimodal interfaces promise to significantly increase the bandwidth and fluidity of the interface between humans and machines. In this work, we are concerned with the addition of multimodal input to the interface. In particular, we focus on interfaces which support simultaneous input from speech and pen, utilizing speech recognition and recognition of gestures and drawings made with a pen on a complex visual display, such as a map.

Our focus on multimodal interfaces is motivated, in part, by the trend toward portable computing devices for which complex graphical user interfaces are infeasible. For such devices, speech and gesture will be the primary means of user input. Recent empirical results (Oviatt 1996) demonstrate clear task performance and user preference advantages for multimodal interfaces over speech only interfaces, in particular for spatial tasks such as those involving maps. Specifically, in a within-subject experiment during which the same users performed the same tasks in various conditions using only speech, only pen, or both speech and pen-based input, users' multimodal input to maps resulted in 10% faster task completion time, 23% fewer words, 35% fewer spoken disfluencies, and 36% fewer task errors compared to unimodal spoken input. Of the user errors, 48% involved location errors on the map -- errors that were nearly eliminated by the simple ability to use pen-based input. Finally, 100% of users indicated a preference for multimodal interaction over speech-only interaction with maps. These results indicate that for map-based tasks, users would both perform better and be more satisfied when using a multimodal interface. As an illustrative example, in the distributed simulation application we describe in this paper, one user task is to add a "phase line" to a map. In the existing unimodal interface for this application (CommandTalk, Moore 1997), this is accomplished with a spoken utterance such as 'CREATE A LINE FROM COORDINATES NINE FOUR THREE NINE THREE ONE TO NINE EIGHT NINE NINE FIVE ZERO AND CALL IT PHASE LINE GREEN'. In contrast, the same task can be accomplished by saying 'PHASE LINE GREEN' and simultaneously drawing the gesture in Figure 1.
[Figure 1: Line gesture -- a hand-drawn curved line.]

The multimodal command involves speech recognition of only a three word phrase, while the equivalent unimodal speech command involves recognition of a complex twenty four word expression. Furthermore, using unimodal speech to indicate more complex spatial features such as routes and areas is practically infeasible if accuracy of shape is important.

Another significant advantage of multimodal over unimodal speech is that it allows the user to switch modes when environmental noise or security concerns make speech an unacceptable input medium, or for avoiding and repairing recognition errors (Oviatt and Van Gent 1996). Multimodality also offers the potential for input modes to mutually compensate for each others' errors. We will demonstrate that, in our system, multimodal integration allows speech input to compensate for errors in gesture recognition and vice versa.

Systems capable of integration of speech and gesture have existed since the early 80's. One of the first such systems was the "Put-That-There" system (Bolt 1980). However, in the sixteen years since then, research on multimodal integration has not yielded a reusable scalable architecture for the construction of multimodal systems that integrate gesture and voice. There are four major limiting factors in previous approaches to multimodal integration:

(i) The majority of approaches limit the bandwidth of the gestural mode to simple deictic pointing gestures made with a mouse (Neal and Shapiro 1991, Cohen 1991, Cohen 1992, Brison and Vigouroux (ms.), Wauchope 1994) or with the hand (Koons et al 1993[1]).

(ii) Most previous approaches have been primarily speech-driven[2], treating gesture as a secondary dependent mode (Neal and Shapiro 1991, Cohen 1991, Cohen 1992, Brison and Vigouroux (ms.), Koons et al 1993, Wauchope 1994). In these systems, integration of gesture is triggered by the appearance of expressions in the speech stream whose reference needs to be resolved, such as definite and deictic noun phrases (e.g. 'this one', 'the red cube').

(iii) None of the existing approaches provide a well-understood generally applicable common meaning representation for the different modes, or,

(iv) A general and formally well-defined mechanism for multimodal integration.

[1] Koons et al 1993 describe two different systems. The first uses input from hand gestures and eye gaze in order to aid in determining the reference of noun phrases in the speech stream. The second allows users to manipulate objects in a blocks world using iconic and pantomimic gestures in addition to deictic gestures.

[2] More precisely, they are 'verbal language'-driven. Either spoken or typed linguistic expressions are the driving force of interpretation.

We present an approach to multimodal integration which overcomes these limiting factors. A wide base of continuous gestural input is supported and integration may be driven by either mode. Typed feature structures (Carpenter 1992) are used to provide a clearly defined and well understood common meaning representation for the modes, and multimodal integration is accomplished through unification.

2 QuickSet: A Multimodal Interface for Distributed Interactive Simulation

The initial application of our multimodal interface architecture has been in the development of the QuickSet system, an interface for setting up and interacting with distributed interactive simulations.
QuickSet provides a portal into LeatherNet[3], a simulation system used for the training of US Marine Corps platoon leaders. LeatherNet simulates training exercises using the ModSAF simulator (Courtemanche and Ceranowicz 1995) and supports 3D visualization of the simulated exercises using CommandVu (Clarkson and Yi 1996). SRI International's CommandTalk provides a unimodal spoken interface to LeatherNet (Moore et al 1997).

QuickSet is a distributed system consisting of a collection of agents that communicate through the Open Agent Architecture[4] (Cohen et al 1994). It runs on both desktop and hand-held PCs under Windows 95, communicating over wired and wireless LANs (respectively), or modem links. The wireless hand-held unit is a 3-lb Fujitsu Stylistic 1000 (Figure 2). We have also developed a Java-based QuickSet agent that provides a portal to the simulation over the World Wide Web. The QuickSet user interface displays a map of the terrain on which the simulated military exercise is to take place (Figure 2). The user can gesture and draw directly on the map with the pen and simultaneously issue spoken commands. Units and objectives can be laid down on the map by speaking their name and gesturing on the desired location. The map can also be annotated with line features such as barbed wire and fortified lines, and area features such as minefields and landing zones. These are created by drawing the appropriate spatial feature on the map and speaking its name. Units, objectives, and lines can also be generated using unimodal gestures by drawing their map symbols in the desired location. Orders can be assigned to units; for example, in Figure 2 an M1A1 platoon on the bottom left has been assigned a route to follow. This order is created multimodally by drawing the curved route and saying 'WHISKEY FOUR SIX FOLLOW THIS ROUTE'. As entities are created and assigned orders they are displayed on the UI and automatically instantiated in a simulation database maintained by the ModSAF simulator.

[3] LeatherNet is currently being developed by the Naval Command, Control and Ocean Surveillance Center (NCCOSC) Research, Development, Test and Evaluation Division (NRaD) in coordination with a number of contractors.

[4] Open Agent Architecture is a trademark of SRI International.

[Figure 2: The QuickSet user interface.]

Speech recognition operates in either a click-to-speak mode, in which the microphone is activated when the pen is placed on the screen, or open microphone mode. The speech recognition agent is built using a continuous speaker-independent recognizer commercially available from IBM.

When the user draws or gestures on the map, the resulting electronic 'ink' is passed to a gesture recognition agent, which utilizes both a neural network and a set of hidden Markov models. The ink is size-normalized, centered in a 2D image, and fed into the neural network as pixels, as well as being smoothed, resampled, converted to deltas, and fed to the HMM recognizer. The gesture recognizer currently recognizes a total of twenty six different gestures, some of which are illustrated in Figure 3. They include various military map symbols such as platoon, mortar, and fortified line, editing gestures such as deletion, and spatial features such as routes and areas.

[Figure 3: Example symbols and gestures -- line, tank, mechanized platoon, company, fortified line, area, point, deletion, mortar, barbed wire.]

As with all recognition technologies, gesture recognition may result in errors.
One of the factors contributing to this is that routes and areas do not have signature shapes that can be used to identify them and are frequently confused (Figure 4).

[Figure 4: Pen drawings of routes and areas.]

Another contributing factor is that users' pen input is often sloppy (Figure 5) and map symbols can be confused among themselves and with route and area gestures.

[Figure 5: Typical pen input from real users -- mortar, tank, deletion, mechanized platoon, company.]

Given the potential for error, the gesture recognizer issues not just a single interpretation, but a series of potential interpretations ranked with respect to probability. The correct interpretation is frequently determined as a result of multimodal integration, as illustrated below[5].

3 A Unification-based Architecture for Multimodal Integration

One of the most significant challenges facing the development of effective multimodal interfaces concerns the integration of input from different modes. Input signals from each of the modes can be assigned meanings. The problem is to work out how to combine the meanings contributed by each of the modes in order to determine what the user actually intends to communicate.

To model this integration, we utilize a unification operation over typed feature structures (Carpenter 1990, 1992, Pollard and Sag 1987, Calder 1987, King 1989, Moshier 1988). Unification is an operation that determines the consistency of two pieces of partial information, and if they are consistent combines them into a single result. As such, it is ideally suited to the task at hand, in which we want to determine whether a given piece of gestural input is compatible with a given piece of spoken input, and if they are compatible, to combine the two inputs into a single result that can be interpreted by the system.

The use of feature structures as a semantic representation framework facilitates the specification of partial meanings. Spoken or gestural input which partially specifies a command can be represented as an underspecified feature structure in which certain features are not instantiated. The adoption of typed feature structures facilitates the statement of constraints on integration. For example, if a given speech input can be integrated with a line gesture, it can be assigned a feature structure with an underspecified location feature whose value is required to be of type line (see the sketch below).

[5] See Wahlster 1991 for discussion of the role of dialog in resolving ambiguous gestures.

[Figure 6: Multimodal integration architecture.]

Figure 6 presents the main agents involved in the QuickSet system. Spoken and gestural input originates in the user interface client agent and it is passed on to the speech recognition and gesture recognition agents respectively. The natural language agent uses a parser implemented in Prolog to parse strings that originate from the speech recognition agent and assign typed feature structures to them. The potential interpretations of gesture from the gesture recognition agent are also represented as typed feature structures. The multimodal integration agent determines and ranks potential unifications of spoken and gestural input and issues complete commands to the bridge agent. The bridge agent accepts commands in the form of typed feature structures and translates them into commands for whichever applications the system is providing an interface to.
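As a rough illustration of the unification operation described in this section, the following sketch implements unification over simplified feature structures represented as Python dicts with a reserved 'type' key. The type hierarchy is invented for the example; the actual typed feature structure logic used in QuickSet (Carpenter 1992) is considerably richer.

```python
# Invented, simplified type hierarchy: a 'command' unifies with any
# of its subtypes, while 'point' and 'line' are incompatible.
SUBTYPES = {
    'command': {'create_unit', 'create_line', 'point', 'line'},
    'point': set(), 'line': set(),
}

def type_unify(t1, t2):
    # A type unifies with itself or with any of its subtypes.
    if t1 == t2 or t2 in SUBTYPES.get(t1, ()):
        return t2
    if t1 in SUBTYPES.get(t2, ()):
        return t1
    return None  # incompatible types

def unify(fs1, fs2):
    """Return the unification of two feature structures, or None if
    they are inconsistent (e.g. a required 'point' location against
    a 'line' gesture interpretation)."""
    result = {}
    for key in set(fs1) | set(fs2):
        if key not in fs1:
            result[key] = fs2[key]
        elif key not in fs2:
            result[key] = fs1[key]
        elif key == 'type':
            t = type_unify(fs1['type'], fs2['type'])
            if t is None:
                return None
            result['type'] = t
        elif isinstance(fs1[key], dict) and isinstance(fs2[key], dict):
            sub = unify(fs1[key], fs2[key])
            if sub is None:
                return None
            result[key] = sub
        elif fs1[key] == fs2[key]:
            result[key] = fs1[key]
        else:
            return None  # atomic value clash
    return result
```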
For example, if the user utters 'M1A1 PLATOON', the name of a particular type of tank platoon, the natural language agent assigns this phrase the feature structure in Figure 7. The type of each feature structure is indicated at its corner.

[Figure 7: Feature structure for 'M1A1 PLATOON':]
  create_unit:
    object:   [ type: m1a1, echelon: platoon ]   (type: unit)
    location: [ ]                                (type: point)

Since QuickSet is a task-based system directed toward setting up a scenario for simulation, this phrase is interpreted as a partially specified unit creation command. Before it can be executed, it needs a location feature indicating where to create the unit, which is provided by the user's gesturing on the screen. The user's ink is likely to be assigned a number of interpretations, for example, both a point interpretation and a line interpretation, which the gesture recognition agent assigns typed feature structures (see Figures 8 and 9). Interpretations of gestures as location features are assigned a general command type which unifies with all of the commands taken by the system.

[Figure 8: Point interpretation of gesture:]
  command:
    location: [ xcoord: 95305, ycoord: 94365 ]   (type: point)

[Figure 9: Line interpretation of gesture:]
  command:
    location: [ coordlist: [(95301, 94360), (95305, 94365), (95310, 94380)] ]   (type: line)

The task of the integrator agent is to field incoming typed feature structures representing interpretations of speech and of gesture, identify the best potential interpretation, multimodal or unimodal, and issue a typed feature structure representing the preferred interpretation to the bridge agent, which will execute the command. This involves parsing of the speech and gesture streams in order to determine potential multimodal integrations. Two factors guide this: tagging of speech and gesture as either complete or partial, and examination of time stamps associated with speech and gesture.

Speech or gesture input is marked as complete if it provides a full command specification and therefore does not need to be integrated with another mode. Speech or gesture marked as partial needs to be integrated with another mode in order to derive an executable command.

Empirical study of the nature of multimodal interaction has shown that speech typically follows gesture within a window of three to four seconds, while gesture following speech is very uncommon (Oviatt et al 1997). Therefore, in our multimodal architecture, the integrator temporally licenses integration of speech and gesture if their time intervals overlap, or if the onset of the speech signal is within a brief time window following the end of gesture. Speech and gesture are integrated appropriately even if the integrator agent receives them in a different order from their actual order of occurrence. If speech is temporally compatible with gesture, in this respect, then the integrator takes the sets of interpretations for both speech and gesture, and for each pairing in the product set attempts to unify the two feature structures. The probability of each multimodal interpretation in the resulting set licensed by unification is determined by multiplying the probabilities assigned to the speech and gesture interpretations (this loop is sketched in code below).

In the example case above, both speech and gesture have only partial interpretations, one for speech, and two for gesture.
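Before continuing the example, here is a minimal sketch of the integrator's pairing-and-ranking loop under the assumptions above. The interval fields, the window constant, and the hypothesis representation are all invented for illustration; unify() is the routine sketched earlier, and the real agent additionally handles the complete/partial tagging described in the text.

```python
from collections import namedtuple

Interval = namedtuple('Interval', 'start end')  # signal time stamps
TIME_WINDOW = 4.0  # seconds; speech may follow gesture within ~3-4 s

def temporally_compatible(speech_iv, gesture_iv):
    # License integration if the intervals overlap, or if the speech
    # onset falls within the window after the end of the gesture.
    overlap = (speech_iv.start <= gesture_iv.end
               and gesture_iv.start <= speech_iv.end)
    follows = 0.0 <= speech_iv.start - gesture_iv.end <= TIME_WINDOW
    return overlap or follows

def integrate(speech_hyps, gesture_hyps):
    # Each hypothesis list holds (probability, feature_structure)
    # pairs. Unify every pairing in the product set and rank the
    # survivors by the product of the component probabilities.
    results = []
    for p_s, fs_s in speech_hyps:
        for p_g, fs_g in gesture_hyps:
            fs = unify(fs_s, fs_g)  # unify() as sketched earlier
            if fs is not None:
                results.append((p_s * p_g, fs))
    results.sort(key=lambda r: r[0], reverse=True)
    return results
```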
Since the speech interpretation (Figure 7) requires its location feature to be of type point, only unification with the point interpretation of the gesture will succeed and be passed on as a valid multimodal interpretation (Figure 10).

[Figure 10: Multimodal interpretation:]
  create_unit:
    object:   [ type: m1a1, echelon: platoon ]   (type: unit)
    location: [ xcoord: 95305, ycoord: 94365 ]   (type: point)

The ambiguity of interpretation of the gesture was resolved by integration with speech, which in this case required a location feature of type point. If the spoken command had instead been 'BARBED WIRE' it would have been assigned the feature structure in Figure 11. This structure would only unify with the line interpretation of gesture, resulting in the interpretation in Figure 12.

[Figure 11: Feature structure for 'BARBED WIRE':]
  create_line:
    object:   [ style: barbed_wire, color: red ]   (type: line_obj)
    location: [ ]                                  (type: line)

[Figure 12: Multimodal line creation:]
  create_line:
    object:   [ style: barbed_wire, color: red ]   (type: line_obj)
    location: [ coordlist: [(95301, 94360), (95305, 94365), (95310, 94380)] ]   (type: line)

Similarly, if the spoken command described an area, for example an 'ANTI TANK MINEFIELD', it would only unify with an interpretation of gesture as an area designation. In each case the unification-based integration strategy compensates for errors in gesture recognition through type constraints on the values of features.

Gesture also compensates for errors in speech recognition. In the open microphone mode, where the user does not have to gesture in order to speak, spurious speech recognition errors are more common than with click-to-speak, but are frequently rejected by the system because of the absence of a compatible gesture for integration. For example, if the system spuriously recognizes 'M1A1 PLATOON', but there is no overlapping or immediately preceding gesture to provide the location, the speech will be ignored. The architecture also supports selection among n-best speech recognition results on the basis of the preferred gesture recognition. In the future, n-best recognition results will be available from the recognizer, and we will further examine the potential for gesture to help select among speech recognition alternatives.

Since speech may follow gesture, and since even simultaneously produced speech and gesture are processed sequentially, the integrator cannot execute what appears to be a complete unimodal command on receiving it, in case it is immediately followed by input from the other mode suggesting a multimodal interpretation. If a given speech or gesture input has a set of interpretations including both partial and complete interpretations, the integrator agent waits for an incoming signal from the other mode. If no signal is forthcoming from the other mode within the time window, or if interpretations from the other mode do not integrate with any interpretations in the set, then the best of the complete unimodal interpretations from the original set is sent to the bridge agent.

For example, the gesture in Figure 13 is used for unimodal specification of the location of a fortified line. If recognition is successful the gesture agent would assign the gesture an interpretation like that in Figure 14.

[Figure 13: Fortified line gesture.]
[Figure 14: Unimodal fortified line feature structure:]
  create_line:
    object:   [ style: fortified_line, color: blue ]   (type: line_obj)
    location: [ coordlist: [(93000, 94360), (93025, 94365), ...] ]   (type: line)

However, it might also receive an additional potential interpretation as a location feature of a more general line type (Figure 15).

[Figure 15: Line feature structure:]
  command:
    location: [ coordlist: [(93000, 94360), (93025, 94365), ..., (93112, 94362)] ]   (type: line)

On receiving this set of interpretations, the integrator cannot immediately execute the complete interpretation to create a fortified line, even if it is assigned the highest probability by the recognizer, since speech contradicting this may immediately follow. For example, if overlapping with or just after the gesture, the user said 'BARBED WIRE' then the line feature interpretation would be preferred. If speech does not follow within the three to four second window, or following speech does not integrate with the gesture, then the unimodal interpretation is chosen. This approach embodies a preference for multimodal interpretations over unimodal ones, motivated by the possibility of unintended complete unimodal interpretations of gestures. After more detailed empirical investigation, this will be refined so that the possibility of integration weighs in favor of the multimodal interpretation, but it can still be beaten by a unimodal gestural interpretation with a significantly higher probability.

4 Conclusion

We have presented an architecture for multimodal interfaces in which integration of speech and gesture is mediated and constrained by a unification operation over typed feature structures. Our approach supports a full spectrum of gestural input, not just deixis. It also can be driven by either mode and enables a wide and flexible range of interactions. Complete commands can originate in a single mode, yielding unimodal spoken and gestural commands, or in a combination of modes, yielding multimodal commands, in which speech and gesture are able to contribute either the predicate or the arguments of the command. This architecture allows the modes to synergistically and mutually compensate for each others' errors. We have informally observed that integration with speech does succeed in resolving ambiguous gestures. In the majority of cases, gestures will have multiple interpretations, but this is rarely apparent to the user, because the erroneous interpretations of gesture are screened out by the unification process. We have also observed that in the open microphone mode multimodality allows erroneous speech recognition results to be screened out. For the application tasks described here, we have observed a reduction in the length and complexity of spoken input, compared to the unimodal spoken interface to LeatherNet, informally reconfirming the empirical results of Oviatt et al 1997. For this family of applications at least, it appears to be the case that as part of a multimodal architecture, current speech recognition technology is sufficiently robust to support easy-to-use interfaces.

Vo and Wood 1996 present an approach to multimodal integration similar in spirit to that presented here in that it accepts a variety of gestures and is not solely speech-driven. However, we believe that unification of typed feature structures provides a more general, formally well-understood, and reusable mechanism for multimodal integration than the frame merging strategy that they describe.
Cheyer and Julia (1995) sketch a system based on Oviatt's (1996) results but describe neither the integration strategy nor multimodal compensation.

QuickSet has undergone a form of pro-active evaluation in that its design is informed by detailed predictive modeling of how users interact multimodally and it incorporates the results of existing empirical studies of multimodal interaction (Oviatt 1996, Oviatt et al 1997). It has also undergone participatory design and user testing with the US Marine Corps at their training base at 29 Palms, California, with the US Army at the Royal Dragon exercise at Fort Bragg, North Carolina, and as part of the Command Center of the Future at NRaD.

Our initial application of this architecture has been to map-based tasks such as distributed simulation. It supports a fully-implemented usable system in which hundreds of different kinds of entities can be created and manipulated. We believe that the unification-based method described here will readily scale to larger tasks and is sufficiently general to support a wide variety of other application areas, including graphically-based information systems and editing of textual and graphical content. The architecture has already been successfully re-deployed in the construction of a multimodal interface to health care information.

We are actively pursuing incorporation of statistically-derived heuristics and a more sophisticated dialogue model into the integration architecture. We are also developing a capability for automatic logging of spoken and gestural input in order to collect more fine-grained empirical data on the nature of multimodal interaction.

5 Acknowledgments

This work is supported in part by the Information Technology and Information Systems offices of DARPA under contract number DABT63-95-C-007, in part by ONR grant number N00014-95-1-1164, and has been done in collaboration with the US Navy's NCCOSC RDT&E Division (NRaD), Ascent Technologies, Mitre Corp., MRJ Corp., and SRI International.

References

Bolt, R. A., 1980. "Put-That-There": Voice and gesture at the graphics interface. Computer Graphics, 14.3:262-270.

Brison, E., and N. Vigouroux. (unpublished ms.). Multimodal references: A generic fusion process. URIT-URA CNRS. Université Paul Sabatier, Toulouse, France.

Calder, J. 1987. Typed unification for natural language processing. In E. Klein and J. van Benthem,
Integrated interfaces for decision support with simulation. In B. Nelson, W. D. Kel- ton, and G. M. Clark, editors, Proceedings of the Winter Simulation Conference, pages 1066-1072. ACM, New York. Cohen, P. R. 1992. The role of natural language in a multimodal interface. In Proceedings of UIST'92, pages 143-149. ACM Press, New York. Cohen, P. R., A. Cheyer, M. Wang, and S. C. Baeg. 1994. An open agent architecture. In Working Notes of the AAA1 Spring Symposium on Soft- ware Agents (March 21-22, Stanford University, Stanford, California), pages 1-8. Courtemanche, A. J., and A. Ceranowicz. 1995. ModSAF development status. In Proceedings of the Fifth Conference on Computer Generated Forces and Behavioral Representation, pages 3-13, May 9-11, Orlando, Florida. University of Central Florida, Florida. King, P. 1989. A logical formalism for head-driven phrase structure grammar. Ph.D. Thesis, Univer- sity of Manchester, Manchester, England. Koons, D. B., C. J. Sparrell, and K. R. Thorisson. 1993. Integrating simultaneous input from speech, gaze, and hand gestures. In M. T. Maybury, edi- tor, Intelligent Multimedia Interfaces, pages 257- 276. AAAI Press/ MIT Press, Cambridge, Mas- sachusetts. Moore, R. C., J. Dowding, H. Bratt, J. M. Gawron, Y. Gorfu, and A. Cheyer 1997. CommandTalk: A Spoken-Language Interface for Battlefield Sim- ulations. In Proceedings of Fifth Conference on Applied Natural Language Processing, pages 1-7, Washington, D.C. Association for Computational Linguistics, Morristown, New Jersey. Moshier, D. 1988. Extensions to unification gram- mar for the description of programming languages. Ph.D. Thesis, University of Michigan, Ann Arbor, Michigan. Neal, J. G., and S. C. Shapiro. 1991. Intelligent multi-media interface technology. In J. W. Sul- livan and S. W. Tyler, editors, Intelligent User Interfaces, pages 45-68. ACM Press, Frontier Se- ries, Addison Wesley Publishing Co., New York, New York. Oviatt, S. L. 1996. Multimodal interfaces for dy- namic interactive maps. In Proceedings of Con- ference on Human Factors in Computing Systems: CHI '96, pages 95-102, Vancouver, Canada. ACM Press, New York. Oviatt, S. L., A. DeAngeli, and K. Kuhn. 1997. In- tegration and synchronization of input modes dur- ing multimodal human-computer interaction. In Proceedings of the Conference on Human Factors in Computing Systems: CH[ '97, pages 415-422, Atlanta, Georgia. ACM Press, New York. Oviatt, S. L., and R. van Gent. 1996. Error resolu- tion during multimodal human-computer interac- tion. In Proceedings of International Conference on Spoken Language Processing, vol 1, pages 204- 207, Philadelphia, Pennsylvania. Pollard, C. J., and I. A. Sag. 1987. Information- based syntax and semantics: Volume I, Funda- mentals., Volume 13 of CSLI Lecture Notes. Cen- ter for the Study of Language and Information, Stanford University, Stanford, California. Vo, M. T., and C. Wood. 1996. Building an appli- cation framework for speech and pen input inte- gration in multimodal learning interfaces. In Pro- ceedings of International Conference on Acoustics, Speech, and Signal Processing, Atlanta, GA. Wahlster, W. 1991. User and discourse models for multimodal communication. In J. Sullivan and S. Tyler, editors, Intelligent User Interfaces, ACM Press, Addison Wesley Publishing Co., New York, New York. Wauchope, K. 1994. Eucalyptus: Integrating natural language input with a graphical user interface. Naval Research Laboratory, Report NRL/FR/5510-94-9711. 288 | 1997 | 36 |
A DP based Search Using Monotone Alignments in Statistical Translation

C. Tillmann, S. Vogel, H. Ney, A. Zubiaga
Lehrstuhl für Informatik VI, RWTH Aachen
D-52056 Aachen, Germany
{tillmann, ney}@informatik.rwth-aachen.de

Abstract

In this paper, we describe a Dynamic Programming (DP) based search algorithm for statistical translation and present experimental results. The statistical translation uses two sources of information: a translation model and a language model. The language model used is a standard bigram model. For the translation model, the alignment probabilities are made dependent on the differences in the alignment positions rather than on the absolute positions. Thus, the approach amounts to a first-order Hidden Markov model (HMM) as they are used successfully in speech recognition for the time alignment problem. Under the assumption that the alignment is monotone with respect to the word order in both languages, an efficient search strategy for translation can be formulated. The details of the search algorithm are described. Experiments on the EuTrans corpus produced a word error rate of 5.1%.

1 Overview: The Statistical Approach to Translation

The goal is the translation of a text given in some source language into a target language. We are given a source ('French') string f_1^J = f_1 ... f_j ... f_J, which is to be translated into a target ('English') string e_1^I = e_1 ... e_i ... e_I. Among all possible target strings, we will choose the one with the highest probability which is given by Bayes' decision rule (Brown et al., 1993):

  ê_1^I = argmax_{e_1^I} { Pr(e_1^I | f_1^J) }
        = argmax_{e_1^I} { Pr(e_1^I) · Pr(f_1^J | e_1^I) }

Pr(e_1^I) is the language model of the target language, whereas Pr(f_1^J | e_1^I) is the string translation model. The argmax operation denotes the search problem (a toy sketch of this decision rule appears at the end of this section). In this paper, we address

• the problem of introducing structures into the probabilistic dependencies in order to model the string translation probability Pr(f_1^J | e_1^I),

• the search procedure, i.e. an algorithm to perform the argmax operation in an efficient way,

• transformation steps for both the source and the target languages in order to improve the translation process.

The transformations are very much dependent on the language pair and the specific translation task and are therefore discussed in the context of the task description. We have to keep in mind that in the search procedure both the language and the translation model are applied after the text transformation steps. However, to keep the notation simple we will not make this explicit distinction in the subsequent exposition. The overall architecture of the statistical translation approach is summarized in Figure 1.

2 Alignment Models

A key issue in modeling the string translation probability Pr(f_1^J | e_1^I) is the question of how we define the correspondence between the words of the target sentence and the words of the source sentence. In typical cases, we can assume a sort of pairwise dependence by considering all word pairs (f_j, e_i) for a given sentence pair [f_1^J; e_1^I]. We further constrain this model by assigning each source word to exactly one target word. Models describing these types of dependencies are referred to as alignment models (Brown et al., 1993), (Dagan et al., 1993), (Kay & Röscheisen, 1993), (Fung & Church, 1994), (Vogel et al., 1996).

In this section, we introduce a monotone HMM based alignment and an associated DP based search algorithm for translation. Another approach to statistical machine translation using DP was presented in (Wu, 1996).
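As a toy illustration of the Bayes decision rule above, the following sketch picks the candidate target string maximizing Pr(e) · Pr(f | e). The candidate list, the probability tables, and the smoothing floor are all made up for the example; a real system searches over the full target vocabulary as described later in the paper.

```python
import math

def best_translation(candidates, lm_prob, tm_prob, f):
    # Pick the target string e maximizing Pr(e) * Pr(f | e),
    # i.e. the argmax of the Bayes decision rule; logs are summed
    # for numerical stability.
    return max(candidates,
               key=lambda e: math.log(lm_prob(e)) + math.log(tm_prob(f, e)))

# Toy usage with made-up numbers:
lm = lambda e: {'how much is it': 0.02,
                'how many is it': 0.001}.get(e, 1e-9)
tm_table = {('cuanto cuesta', 'how much is it'): 0.1,
            ('cuanto cuesta', 'how many is it'): 0.05}
tm = lambda f, e: tm_table.get((f, e), 1e-9)
print(best_translation(['how much is it', 'how many is it'],
                       lm, tm, 'cuanto cuesta'))
```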
The notational convention will be as follows. We use the symbol Pr(·) to denote general probability distributions with (nearly) no specific assumptions. In contrast, for model-based probability distributions, we use the generic symbol p(·).

[Figure 1: Architecture of the translation approach based on Bayes decision rule -- the source language text is transformed; a global search maximizes Pr(e_1^I) · Pr(f_1^J | e_1^I) over e_1^I using the lexicon model, the alignment model and the language model; the result is transformed into the target language text.]

2.1 Alignment with HMM

When aligning the words in parallel texts (for Indo-European language pairs like Spanish-English, German-English, Italian-German, ...), we typically observe a strong localization effect. Figure 2 illustrates this effect for the language pair Spanish-to-English. In many cases, although not always, there is an even stronger restriction: the difference in the position index is smaller than 3 and the alignment is essentially monotone. To be more precise, the sentences can be partitioned into a small number of segments, within each of which the alignment is monotone with respect to word order in both languages.

To describe these word-by-word alignments, we introduce the mapping j → a_j, which assigns a position j (with source word f_j) to the position i = a_j (with target word e_i). The concept of these alignments is similar to the ones introduced by (Brown et al., 1993), but we will use another type of dependence in the probability distributions. Looking at such alignments produced by a human expert, it is evident that the mathematical model should try to capture the strong dependence of a_j on the preceding alignment a_{j-1}. Therefore the probability of alignment a_j for position j should have a dependence on the previous alignment position a_{j-1}:

  p(a_j | a_{j-1})

A similar approach has been chosen by (Dagan et al., 1993) and (Vogel et al., 1996). Thus the problem formulation is similar to that of the time alignment problem in speech recognition, where the so-called Hidden Markov models have been successfully used for a long time (Jelinek, 1976). Using the same basic principles, we can rewrite the probability by introducing the 'hidden' alignments a_1^J := a_1 ... a_j ... a_J for a sentence pair [f_1^J; e_1^I]:

  Pr(f_1^J | e_1^I) = Σ_{a_1^J} Π_{j=1}^{J} Pr(f_j, a_j | f_1^{j-1}, a_1^{j-1}, e_1^I)

To avoid any confusion with the term 'hidden' in comparison with speech recognition, we observe that the model states as such (representing words) are not hidden but the actual alignments, i.e. the sequence of position index pairs (j, i = a_j).

So far there has been no basic restriction of the approach. We now assume a first-order dependence on the alignments a_j only:

  Pr(f_j, a_j | f_1^{j-1}, a_1^{j-1}, e_1^I) = p(f_j, a_j | a_{j-1}, e_1^I) = p(a_j | a_{j-1}) · p(f_j | e_{a_j}),

where, in addition, we have assumed that the lexicon probability p(f | e) depends only on a_j and not on a_{j-1}. To reduce the number of alignment parameters, we assume that the HMM alignment probabilities p(i | i') depend only on the jump width (i − i'). The monotony condition can then be formulated as:

  p(i | i') = 0   for   i ∉ {i', i' + 1, i' + 2}.

This monotony requirement limits the applicability of our approach. However, by performing simple word reorderings, it is possible to approach this requirement (see Section 4.2). Additional countermeasures will be discussed later. Figure 3 gives an illustration of the possible alignments for the monotone hidden Markov model. To draw the analogy with speech recognition, we have to identify the states (along the vertical axis) with the positions i of the target words e_i and the time (along the horizontal axis) with the positions j of the source words f_j.

2.2 Training

To train the alignment and the lexicon model, we use the maximum likelihood criterion in the so-called maximum approximation, i.e. the likelihood criterion covers only the most likely alignment rather than the set of all alignments:

  Pr(f_1^J | e_1^I) = Σ_{a_1^J} Π_{j=1}^{J} [ p(a_j | a_{j-1}) · p(f_j | e_{a_j}) ]
                    ≅ max_{a_1^J} Π_{j=1}^{J} [ p(a_j | a_{j-1}) · p(f_j | e_{a_j}) ]
To draw the analogy with speech recognition, we have to identify the states (along the vertical axis) with the positions i of the target words ei and the time (along the horizont.al axis) with the positions j of the source words J). 2.2 Training To train the alignment and the lexicon model, we use the maximum likelihood criterion in the so-called maximum approximation, i.e. the likelihood criteri- on covers only the most likely alignment rather than the set of all alignments: J Pr(.f(leI) = ~ 1-i [P(aJlaJ-l'. I)" P(fJle°.i )] "i' j=i J -'= max1- ~ [p(ajla.o_~, I). p(.l)leo,)] j al j=l 290 days o two o for o room o double o a o is o much how Io I .... L___L___L___L ............... c v u h d p d d U a n a o a o ' ' I a b b r s i a e i i a a n t e s t a o c i 0 n roomJ, o the J. o in Jo cold[, o too I. o is I. it J. o J ........................ e I h h d f n a a a e r b c m ' i e a i t s o a i C a i d 0 0 n night a for tv a and safe a telephonel a J with J room J a I booked I have we 0 0 0 0 0 0 0 0 o I Io I .... ----'--------- ................................... t r u h c t c f y t p u n e e n a o e a u n s a b n 1 j e e e i ' a r m r t e t o v a f e s a c o d i n a ) o 0 n e a n o 1 r a c e a h v e i S i 0 n Figure 2: Word aligmnents for Spanish-English sentence pairs. 291 o*" Z r.~ © L5 iv, < F~ I I I I [ I 1 2 3 4 5 6 SOURCE POSITION Figure 3: Illustrat ion of alignments for the nlonotone HMM. To find the optimal alignment, we use dynamic programming for which we have the following typical recursion formula: Q(i, j) = p(fj ]ei)max [p(ili') . Q(i', j - 1)1 i' Here. Q(i. j) is a sort of partial probability as in t.ime alignment for speech recognit.ion (aelinek, 1976). As a result, the training procedure amounts to a se- quence of iterat.ions, each of which consists of two steps: • posilion alignm~TH: Given the model parame- t.ers, det.ermine the most likely position align- n-lent. • parame*e-r eslimalion: Given the position align- ment. i.e. going along the alignment paths for all sentence pairs, perform maximum likelihood estimation of the model parameters; for model- free distributions, these estimates result in rel- a.tive fi'equencies. The IBM model 1 (Brown et al., 1993) is used to find an initial estimate of the translation probabilities. 3 Search Algorithm for Translation For the translation operat.ion, we use a bigram lan- guage model, which is given in terms of the con- dit.ional probability of observing word ei given the predecessor word e.i- 1: p(~ilei-:) Using the conditional probability of the bigram lan- guage model, we have the overall search criterion in the maxinmm approximation: max p(eile;_:)lnax l'I [p(ajla~-:)P(fJlea,)] " ,,' ti=: ~i ~=: Here and in the following, we omit a special treat- ment of the start and end conditions like j = 1 or j = J in order to simplify the presentation and avoid confusing details. Having the above criterion in mind, we try t.o associate the language model prob- abilities with the aligmnents j ~ i - aj. To this purpose, we exploit the monotony property of our alignment model which allows only transitions from aj-i tO aj if the difference 6 = oj-aj-1 is 0,1,2. We define a modified probability p~(el#) for the lan- guage model depending on the alignment difference t~. 
We consider each of the three cases 5 = 0, 1,2 separately: • ~ = 0 (horizontal transition = alignment repe- tition): This case corresponds to a target word with two or more aligned source words and therefore requires ~ = # so that there is no contribution fl'om the language model: 1 for e=e' P~=°(ele') = 0 for e ee' • 6 = 1 (forward transition = regular alignment.): This case is the regular one, and we can use directly the probability of the bigram language model: p~=:(ele') = p(ele') • ~ = 2 (skip transition = non-aligned word): This case corresponds to skipping a word. i.e, there is a word in the target string with no aligned word in the source string. We have to find the highest probability of placing a non- aligned word e_- between a predecessor word e' and a successor word e. Thus we optimize the following product, over the non-aligned word g: p~=~(eJe') = maxb~(elg).p(gIe')] i This maximization is done beforehand and the result is stored in a table. Using this modified probability p~(ele'), we can rewrite the overall search criterion: aT l-I )]. The problem now is to find the unknown mapping: j -- (aj, ca.,) which defines a path through a network with a uni- form trellis structure. For this trellis, we can still use Figure 3. However. in each position i along the 292 Table h DP based search algorithm for the monotone translation model. !nput: source string/l...fj...fJ initialization for each position j = 1,2 ..... d in source sel'ltence do for each position i = 1,2, ...,/maz in target sentence do for each target word e do V Q(i, j, e) = p(fj le)' ma;x{p(i[i - 6). p~(e[e'). Q(i - 6. j - 1, e')} 6,e traceback: - find best end hypothesis: max Q(i, J, e) - recover optimal word sequence vertical axis. we have to allow all possible words e of the target vocabulary. Due to the monotony of our alignnaent model and the bigraln language mod- el. we have only first-order type dependencies such that the local probabilities (or costs when using the negative logarithms of the probabilities) depend on- I.q on the arcs (or transitions) in the lattice. Each possible index triple (i.j.e) defines a grid point in the lattice, and we have the following set of possi- ble transitions fi'om one grid point to another grid point : ~fi {0.1.2} : (i-6. j-l.e')--(i,j,e) Each of these transitions is assigned a local proba- bility: p(ili - 6). p,,(ele') . p(fj le) Using this formulation of the search task, we can now use the method of dynamic programming(DP) to find the best path through the lattice. To this purpose, we introduce the auxiliary quantity: Q(i.j.e): probability of the best. partial path which ends in the grid point (i, j, e). Since we have only first-order dependencies in our model, it is easy to see that the auxiliary quantity nmst satisfy the following DP recursion equation: Q(i.j.e) = p(fjle). max {p(ili- ~). maxp,,(ele'). Q(i- 6, j - 1,e')}. To explicitly construct the unknown word sequence ~. it is convenient to make use of so-called back- pointers which store for each grid point (i.j,e) the best predecessor grid point (Ney et al.. 1992). The DP equation is evaluated recursively to find the best partial path to each grid point (i, j, e). The resuhing algorithm is depicted in Table 1. The com- plexity of the algorithm is J. I,,,.,. • E'-'. where E is the size of t.he target language vocabulary and I,,,,~. is the n~aximum leng{'h of the target sentence con- sidered. It is possible to reduce this COml)utational complexity by using so-called pruning methods (Ney et al.. 
1992): due to space limitatiol~s, they are not discussed here. 4 Experimental Results 4.1 The Task and the Corpus The search algorithln proposed in this paper was tested on a subtask of the "'Traveler Task" (Vidal, 1997). The general domain of the task comprises typical situations a visitor to a foreign country is faced with. The chosen subtask corresponds to a sce- nario of the hulnan-to-human communication situ- ations at the registration desk in a hotel (see Table 4). The corpus was generated in a semi-automatic way. On the basis of examples from traveller book- lets, a prol)abilistic gralmnar for different language pairs has been constructed from which a large cor- pus of sentence pairs was generated. The vocabulary consisted of 692 Spanish and 518 English words (in- eluding punctuatioll marks). For the experiments, a trailfing corpus of 80,000 sentence pairs with 628,117 Spanish and 684.777 English words was used. In ad- dition, a test corpus with 2.730 sentence pairs differ- ent froln the training sentence pairs was construct- ed. This test corpus contained 28.642 Spanish a.nd 24.927 English words. For the English sentences, we used a bigram language model whose perplexity on the test corpus varied between 4.7 for the orig- inal text. and 3.5 when all transformation steps as described below had been applied. Table 2: Effect of the transformation steps on the vocabulary sizes in both languages. Transformation Step Spanish English Original (with punctuation) 692 518 + C.ategorization 416 227 + 'por_~avor' 417 + V~'ol'd Splkt.ing 374 + Word Joining 237 + 'Word Reordering 293 4.2 Text Tl-ansformations The purpose of the text transformations is to make the two languages resenable each other as closely as possible with respect, to sentence length and word or- der. In addition, the size of both vocabularies is re- duced by exploiting evident regularities; e.g. proper names and numbers are replaced by category mark- ers. We used different, preprocessing steps which were applied consecutively: • Original Corpus: Punctuation marks are treated like regular words. • Categorization: Some particular words or word groups are replaced by word categories. Seven non-overlapping categories are used: three categories for names (surnames, name and female names), two categories for numbers (reg- ular numbers and room numbers) and two cat- egories for date and time of day. • 'D_'eatment of 'pot :favor': The word 'pot :favor' is always moved to the end of the sentence and replaced by the one-word token ' pot_favor '. • Word Splitting: In Spanish, the personal pronouns (in subject case and in object, case) can be part of the inflected verb form. To coun- teract this phenomenon, we split the verb into a verb part and pronoun part, such as 'darnos" "dar _nos' and "pienso" -- '_yo pienso'. • Word Joining: Phrases in the English lan- guage such as "Would yogi mind doing ...' and '1 would like you to do ..." are difficult to han- dle by our alignment model. Therefore, we apply some word joining, such as 'would yo~t mi71d" -- 'wo~dd_yo',_mind" and ~would like ' -- "wotdd_like '. • Word Reordering: This step is applied to the Spanish text to take into account, cases like the position of the adjective in noun-adjective phrases and the position of object, pronouns. E.g. "habitacidT~ dobh'-- 'doble habitaci6~'. By this reordering, our assumption about the monotony of the alignment model is more often satisfied. The effect of these transformation steps on the sizes of both vocabularies is shown in Table 2. 
4.3 Translation Results

For each of the transformation steps described above, all probability models were trained anew, i.e. the lexicon probabilities p(f|e), the alignment probabilities p(i|i - δ) and the bigram language probabilities p(e|e'). To produce the translated sentence in normal language, the transformation steps in the target language were inverted.

The translation results are summarized in Table 3. As an automatic and easy-to-use measure of the translation errors, the Levenshtein distance between the automatic translation and the reference translation was calculated. Errors are reported at the word level and at the sentence level:

• word level: insertions (INS), deletions (DEL), and total number of word errors (WER);
• sentence level: a sentence is counted as correct only if it is identical to the reference sentence.

Admittedly, this is not a perfect measure. In particular, the effect of word ordering is not taken into account appropriately. Actually, the figures for sentence error rate are overly pessimistic: many sentences are acceptable and semantically correct translations (see the example translations in Table 4).

Table 3: Word error rates (INS/DEL, WER) and sentence error rates (SER) for different transformation steps.

  Transformation Step      INS/DEL   WER [%]  SER [%]
  Original Corpus          4.3/11.2  21.2     85.5
  + Categorization         2.5/8.6   16.1     81.0
  + 'por_favor'            2.6/8.3   14.3     75.6
  + Word Splitting         2.5/7.4   12.3     65.4
  + Word Joining           1.3/4.9    7.3     44.6
  + Word Reordering        0.9/3.4    5.1     30.1

As can be seen in Table 3, the translation errors can be reduced systematically by applying all transformation steps: the word error rate is reduced from 21.2% to 5.1%, and the sentence error rate from 85.5% to 30.1%. The two most important transformation steps are categorization and word joining. What is striking is the large fraction of deletion errors. These deletion errors are often caused by the omission of word groups like 'for me please' and 'could you'. Table 4 shows some example translations (for the best translation results). It can be seen that the semantic meaning of the sentence in the source language may be preserved even if there are three word errors according to our performance criterion.

To study the dependence on the amount of training data, we also performed a training with only 5,000 sentences out of the training corpus. For this training condition, the word error rate went up only slightly, from 5.1% (for 80,000 training sentences) to 5.3% (for 5,000 training sentences). To study the effect of the language model, we tested a zerogram, a unigram and a bigram language model using the standard set of 80,000 training sentences. The results are shown in Table 5.

Table 4: Examples from the EuTrans task: O = original sentence, R = reference translation, A = automatic translation.

O: He hecho la reserva de una habitación con televisión y teléfono a nombre del señor Morales.
R: I have made a reservation for a room with TV and telephone for Mr. Morales.
A: I have made a reservation for a room with TV and telephone for Mr. Morales.

O: Súbanme las maletas a mi habitación, por favor.
R: Send up my suitcases to my room, please.
A: Send up my suitcases to my room, please.

O: Por favor, querría que nos diese las llaves de la habitación.
R: I would like you to give us the keys to the room, please.
A: I would like you to give us the keys to the room, please.

O: Por favor, ¿me pide mi taxi para la habitación tres veintidós?
R: Could you ask for my taxi for room number three two two for me, please?
A: Could you ask for my taxi for room number three two two, please?

O: Por favor, reservamos dos habitaciones dobles con cuarto de baño.
R: We booked two double rooms with a bathroom.
A: We booked two double rooms with a bathroom, please.

O: Quisiera que nos despertaran mañana a las dos y cuarto, por favor.
R: I would like you to wake us up tomorrow at a quarter past two, please.
A: I want you to wake us up tomorrow at a quarter past two, please.

O: Repáseme la cuenta de la habitación ochocientos veintiuno.
R: Could you check the bill for room number eight two one for me, please?
A: Check the bill for room number eight two one.
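The word-level error measure above is a standard Levenshtein alignment between the automatic translation and the reference. The following generic implementation (our code, not the evaluation tool actually used in the experiments) splits the distance into insertion, deletion and substitution counts:

```python
def word_errors(hyp_words, ref_words):
    """Word-level Levenshtein alignment; returns (ins, dele, sub) counts.

    'ins' counts words the system inserted, 'dele' counts reference words
    it missed, 'sub' counts substitutions.
    """
    H, R = len(hyp_words), len(ref_words)
    # cost[i][j]: edit distance between hyp_words[:i] and ref_words[:j]
    cost = [[0] * (R + 1) for _ in range(H + 1)]
    for i in range(H + 1):
        cost[i][0] = i
    for j in range(R + 1):
        cost[0][j] = j
    for i in range(1, H + 1):
        for j in range(1, R + 1):
            same = hyp_words[i - 1] == ref_words[j - 1]
            cost[i][j] = min(cost[i - 1][j] + 1,              # insertion
                             cost[i][j - 1] + 1,              # deletion
                             cost[i - 1][j - 1] + (0 if same else 1))
    # backtrace one optimal alignment to split the distance into counts
    ins = dele = sub = 0
    i, j = H, R
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                cost[i][j] == cost[i - 1][j - 1] + (hyp_words[i - 1] != ref_words[j - 1])):
            sub += hyp_words[i - 1] != ref_words[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            ins += 1
            i -= 1
        else:
            dele += 1
            j -= 1
    return ins, dele, sub

hyp = "send up my suitcases".split()
ref = "send up my suitcases please".split()
ins, dele, sub = word_errors(hyp, ref)          # (0, 1, 0)
wer = (ins + dele + sub) / len(ref)             # WER relative to the reference
```

The sentence error rate then only checks whether the hypothesis string is identical to the reference.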
The WER decreases from 31.1% for the zerogram model to 5.1% for the bigram model. The results presented here can be compared with the results obtained by the finite-state transducer approach described in (Vidal, 1996; Vidal, 1997), where the same training and test conditions were used, but the only preprocessing step was categorization. In that work, a WER of 7.1% was obtained, as opposed to the 5.1% presented in this paper. For smaller amounts of training data (say 5,000 sentence pairs), the DP based search seems to be even more superior.

Table 5: Language model perplexity (PP), word error rates (INS/DEL, WER) and sentence error rates (SER) for different language models.

  Language Model  PP     INS/DEL   WER [%]  SER [%]
  Zerogram        237.0  0.6/18.6  31.1     98.1
  Unigram         74.4   0.9/12.4  20.4     94.8
  Bigram          4.1    0.9/3.4    5.1     30.1
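For reference, perplexities such as those in Table 5 can be computed in a few lines. The additive smoothing below is our own assumption, since the paper does not state how its bigram model was smoothed.

```python
import math
from collections import Counter

def bigram_perplexity(test_sents, train_sents, alpha=0.1):
    """Perplexity of an additively smoothed bigram model (a sketch; the
    smoothing scheme and alpha value are illustrative assumptions)."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for s in train_sents:
        toks = ["<s>"] + s.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks)
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)
    log_prob, n_words = 0.0, 0
    for s in test_sents:
        toks = ["<s>"] + s.split() + ["</s>"]
        for prev, cur in zip(toks, toks[1:]):
            p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * V)
            log_prob += math.log(p)
            n_words += 1
    return math.exp(-log_prob / n_words)
```

A unigram model drops the conditioning on the previous word, and a zerogram model assigns every word the uniform probability 1/V, which is how the three rows of Table 5 differ.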
4.4 Effect of the Word Reordering

In more general cases and applications, there will always be sentence pairs with word alignments for which the monotony constraint is not satisfied. However, even then, the monotony constraint is satisfied locally for the lion's share of all word alignments in such sentences. Therefore, we expect to extend the approach presented by the following methods:

• more systematic approaches to local and global word reorderings that try to produce the same word order in both languages;

• a multi-level approach that allows a small number (say 4) of large forward and backward transitions. Within each level, the monotone alignment model can still be applied, and only when moving from one level to the next do we have to handle the problem of different word orders.

To show the usefulness of global word reordering, we changed the word order of some sentences by hand. Table 6 shows the effect of the global reordering for two sentences. In the first example, we changed the order of two groups of consecutive words and placed an additional copy of the Spanish word 'cuesta' into the source sentence. In the second example, the personal pronoun 'me' was placed at the end of the source sentence. In both cases, we obtained a correct translation.

5 Conclusion

In this paper, we have presented an HMM based approach to handling word alignments and an associated search algorithm for automatic translation. The characteristic feature of this approach is to make the alignment probabilities explicitly dependent on the alignment position of the previous word and to assume a monotony constraint for the word order in both languages. Due to this monotony constraint, we are able to apply an efficient DP based search algorithm. We have tested the model successfully on the EuTrans traveller task, a limited domain task with a vocabulary of 200 to 500 words. The resulting word error rate was only 5.1%. To mitigate the monotony constraint, we plan to reorder the words in the source sentences to produce the same word order in both languages.

Table 6: Effect of the global word reordering: O = original sentence, R = reference translation, A = automatic translation, O' = original sentence reordered, A' = automatic translation after reordering.

O: ¿Cuánto cuesta una habitación doble para cinco noches incluyendo servicio de habitaciones?
R: How much does a double room including room service cost for five nights?
A: How much does a double room including room service?
O': ¿Cuánto cuesta una habitación doble incluyendo servicio de habitaciones cuesta para cinco noches?
A': How much does a double room including room service cost for five nights?

O: Explique _me la factura de la habitación tres dos cuatro.
R: Explain the bill for room number three two four for me.
A: Explain the bill for room number three two four.
O': Explique la factura de la habitación tres dos cuatro _me.
A': Explain the bill for room number three two four for me.

Acknowledgement

This work has been supported partly by the German Federal Ministry of Education, Science, Research and Technology under the contract number 01 IV 601 A (Verbmobil) and by the European Community under the ESPRIT project number 20268 (EuTrans).

References

A. L. Berger, P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, J. R. Gillett, J. D. Lafferty, R. L. Mercer, H. Printz, and L. Ures. 1994. "The Candide System for Machine Translation". In Proc. of the ARPA Human Language Technology Workshop, pp. 152-157, Plainsboro, NJ. Morgan Kaufmann Publishers, San Mateo, CA, March.

P. F. Brown, V. J. Della Pietra, S. A. Della Pietra, and R. L. Mercer. 1993. "The Mathematics of Statistical Machine Translation: Parameter Estimation". Computational Linguistics, Vol. 19, No. 2, pp. 263-311.

I. Dagan, K. W. Church, and W. A. Gale. 1993. "Robust Bilingual Word Alignment for Machine Aided Translation". In Proc. of the Workshop on Very Large Corpora, pp. 1-8, Columbus, OH.

P. Fung and K. W. Church. 1994. "K-vec: A New Approach for Aligning Parallel Texts". In Proc. of the 15th Int. Conf. on Computational Linguistics, pp. 1096-1102, Kyoto.

F. Jelinek. 1976. "Speech Recognition by Statistical Methods". Proc. of the IEEE, Vol. 64, pp. 532-556, April.

M. Kay and M. Röscheisen. 1993. "Text-Translation Alignment". Computational Linguistics, Vol. 19, No. 2, pp. 121-142.

H. Ney, D. Mergel, A. Noll, and A. Paeseler. 1992. "Data Driven Search Organization for Continuous Speech Recognition". IEEE Trans. on Signal Processing, Vol. SP-40, No. 2, pp. 272-281, February.

E. Vidal. 1996. "Final Report of Esprit Research Project 20268 (EuTrans): Example-Based Understanding and Translation Systems". Universidad Politécnica de Valencia, Instituto Tecnológico de Informática, October.

E. Vidal. 1997. "Finite-State Speech-to-Speech Translation". In Proc. of the Int. Conf. on Acoustics, Speech and Signal Processing, Munich, April.

S. Vogel, H. Ney, and C. Tillmann. 1996. "HMM Based Word Alignment in Statistical Translation". In Proc. of the 16th Int. Conf. on Computational Linguistics, pp. 836-841, Copenhagen, August.

D. Wu. 1996. "A Polynomial-Time Algorithm for Statistical Machine Translation". In Proc. of the 34th Annual Conf.
of the Association for Computational Linguistics, pp. 152-158, Santa Cruz, CA, June. | 1997 | 37 |
An Alignment Method for Noisy Parallel Corpora based on Image Processing Techniques

Jason S. Chang and Mathis H. Chen
Department of Computer Science, National Tsing Hua University, Taiwan
[email protected], [email protected]
Phone: +886-3-5731069  Fax: +886-3-5723694

Abstract

This paper presents a new approach to the bitext correspondence problem (BCP) of noisy bilingual corpora based on image processing (IP) techniques. By using one of several ways of estimating the lexical translation probability (LTP) between pairs of source and target words, we can turn a bitext into a discrete gray-level image. We contend that the BCP, when seen in this light, bears a striking resemblance to the line detection problem in IP. Therefore, BCPs, including sentence and word alignment, can benefit from a wealth of effective, well established IP techniques, including convolution-based filters, texture analysis and Hough transform. This paper describes a new program, PlotAlign, that produces a word-level bitext map for noisy or non-literal bitext, based on these techniques.

Keywords: alignment, bilingual corpus, image processing

1. Introduction

Aligned corpora have proved very useful in many tasks, including statistical machine translation, bilingual lexicography (Daille, Gaussier and Lange 1993), and word sense disambiguation (Gale, Church and Yarowsky 1992; Chen, Ker, Sheng, and Chang 1997). Several methods have recently been proposed for sentence alignment of the Hansards, an English-French corpus of Canadian parliamentary debates (Brown, Lai and Mercer 1991; Gale and Church 1991a; Simard, Foster and Isabelle 1992; Chen 1993), and for other language pairs such as English-German, English-Chinese, and English-Japanese (Church, Dagan, Gale, Fung, Helfman and Satish 1993; Kay and Röscheisen 1993; Wu 1994).

The statistical approach to machine translation (SMT) can be understood as a word-by-word model consisting of two sub-models: a language model for generating a source text segment S and a translation model for mapping S to its translation T. Brown et al. (1993) also recommend using a bilingual corpus to train the parameters of Pr(S|T), the translation probability (TP) in the translation model. In the context of SMT, Brown et al. (1993) present a series of five models of Pr(S|T) for word alignment. The authors propose using an adaptive Expectation and Maximization (EM) algorithm to estimate parameters for lexical translation probability (LTP) and distortion probability (DP), two factors in the TP, from an aligned bitext. The EM algorithm iterates between two phases to estimate LTP and DP until both functions converge.

Church (1993) observes that reliably distinguishing sentence boundaries for a noisy bitext obtained from an OCR device is quite difficult. Dagan, Church and Gale (1993) recommend aligning words directly without the preprocessing phase of sentence alignment. They propose using char_align to produce a rough character-level alignment first. The rough alignment provides a basis for estimating the translation probability based on position, as well as limits the range of target words being considered for each source word.

Figure 1. Dotplot. An example of a dotplot of alignment showing only likely dots which lie within a short distance from the diagonal.

Char_align (Church 1993) is based on the observation that there are many instances of cognates among the languages in the Indo-European family.
However, Fung and Church (1994) point out that such a constraint does not exist between languages across language groups, such as Chinese and English. The authors propose the K-vec approach, which is based on a k-way partition of the bilingual corpus. Fung and McKeown (1994) propose using a similar measure based on Dynamic Time Warping (DTW) between occurrence recency sequences to improve on the K-vec method.

The char_align, K-vec and DTW approaches rely on a dynamic programming strategy to reach a rough alignment. As Chen (1993) points out, dynamic programming is particularly susceptible to deletions occurring in one of the two languages. Thus, dynamic programming based sentence alignment algorithms rely on paragraph anchors (Brown et al. 1991) or lexical information, such as cognates (Simard 1992), to maintain a high accuracy rate. These methods are not robust with respect to non-literal translations and large deletions (Simard 1996). This paper presents a new approach based on image processing (IP) techniques, which is immune to such predicaments.

2. BCP as image processing

2.1 Estimation of LTP

A wide variety of ways of estimating LTP have been proposed in the literature of computational linguistics, including the Dice coefficient (Kay and Röscheisen 1993), mutual information, χ² (Gale and Church 1991b), dictionary and thesaurus information (Ker and Chang 1996), cognates (Simard 1992), K-vec (Fung and Church 1994), DTW (Fung and McKeown 1994), etc.

  Dice coefficient:  Dice(s, t) = 2 · prob(s, t) / (prob(s) + prob(t))

  Mutual information:  MI(s, t) = log [ prob(s, t) / (prob(s) · prob(t)) ]

Like the image of a natural scene, the linguistic or statistical estimate of LTP gives rise to signal as well as noise. This signal and noise can be viewed as a gray-level dotplot (Church and Gale 1991), as Figure 1 shows. We observe that the BCP, when cast as a gray-level image, bears a striking resemblance to IP problems, including edge detection, texture classification, and line detection. Therefore, the BCP can benefit from a wealth of effective, well established IP techniques, including convolution-based filtering, texture analysis, and Hough transform.
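Both estimators reduce to simple co-occurrence counting. The sketch below computes them from a hypothetical list of roughly aligned regions; note that the experiments in Section 3 instead estimate mutual information from bilingual dictionary examples.

```python
import math
from collections import Counter

def association_scores(aligned_regions):
    """Dice and (pointwise) mutual information for co-occurring word pairs,
    estimated from rough aligned regions (our illustration, not the paper's
    estimation procedure). Each region is a (source_words, target_words) pair."""
    s_count, t_count, pair_count = Counter(), Counter(), Counter()
    n = len(aligned_regions)
    for s_words, t_words in aligned_regions:
        for s in set(s_words):
            s_count[s] += 1
        for t in set(t_words):
            t_count[t] += 1
        for s in set(s_words):
            for t in set(t_words):
                pair_count[(s, t)] += 1
    scores = {}
    for (s, t), c in pair_count.items():
        p_s, p_t, p_st = s_count[s] / n, t_count[t] / n, c / n
        dice = 2 * p_st / (p_s + p_t)
        mi = math.log(p_st / (p_s * p_t))
        scores[(s, t)] = (dice, mi)
    return scores
```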
2.2 Properties of aligned corpora

The PlotAlign algorithms are based on three linguistic constraints that can be observed at different levels of alignment resolution, including phrase, sentence, and discourse:

1. Structure preserving constraint: The connection target of a word tends to be located next to those of its neighboring words.
2. One-to-one constraint: Each source word token connects to at most one target word token.
3. Non-crossing constraint: The connection target of a sentence does not come before that of its preceding sentence.

Table 1. Linguistic constraints. Linguistic constraints at various levels of alignment resolution give rise to different types of image pattern that are susceptible to well established IP techniques.

  Constraint            Image Pattern  IP technique        Alignment Resolution
  Structure preserving  Edge           Convolution         Phrase
  One-to-one            Texture        Feature extraction  Sentence
  Non-crossing          Line           Hough transform     Discourse

Figure 2. Short edges and textural pattern in a dotplot. The shaded cells are positions where a high LTP value is registered. The cells with dark dots in them are alignment connections.

Each of these constraints leads to a specific pattern in the dotplot. The structure preserving constraint means that the connections of adjacent words tend to form short, diagonal edges on the dotplot. For instance, Figure 2 shows that adjacent words such as "He hopes" and "achieve all" lead to diagonal edges. However, edges with a different orientation may also appear due to morphological constraints. For instance, the token "aims" connects to a Mandarin compound, thereby giving rise to a horizontal edge.

The one-to-one constraint leads to a textural pattern that can be categorized as a region of dense dots distributed much like the 1's in a permutation matrix. For instance, the vicinity of the connection dot for "end" in Figure 2 is denser than that of a non-connection involving the same word. Furthermore, the nearby connections form a texture much like a permutation matrix, with roughly one dot per row and per column.

The non-crossing constraint means that the connection target of a sentence will not come before that of its preceding sentence. For instance, Figure 1 shows that there are clearly two long lines representing sequences of sentences where this constraint holds. The gap between these two lines results from the deletion of several sentences in the translation process.

2.3 Convolution and local edge detection

Convolution is the method of choice for enhancing and detecting the edges in an image. For a noisy or incomplete image, as in the case of the LTP dotplot, a discrete convolution-based filter is effective in filling in a missing or under-estimated dot which is surrounded by neighboring dots with high LTP values, in accordance with the structure preserving constraint. A filtering mask stipulates the relative locations of these supporting dots. The filtering proceeds as follows to obtain Pr(s_x, t_y), the translation probability of the position (x, y), from t(s_{x+i}, t_{y+j}), the LTP values of the cell itself and its neighboring cells:

  Pr(s_x, t_y) = Σ_{j=-w..w} Σ_{i=-w..w} t(s_{x+i}, t_{y+j}) · mask(i, j)

where w is a pre-determined parameter specifying the size of the convolution filter. Connections that fall outside this window are assumed to have no effect on Pr(s_x, t_y). For simplicity, two 3×3 filters can be employed to detect and accentuate the signal:

  -1 -1 -1        2 -1 -1
   2  2  2       -1  2 -1
  -1 -1 -1       -1 -1  2

However, a 5×5 filter, empirically derived from the data, performs much better:

  -0.04 -0.11 -0.20 -0.15 -0.11
   0.08 -0.01 -0.25 -0.19 -0.15
  -0.13  0.27  1.00  0.27 -0.13
  -0.13 -0.16 -0.22  0.02  0.11
  -0.10 -0.14 -0.19 -0.10 -0.02

Figure 3. Convolution. (a) LTP dotplot before convolution; and (b) after convolution.
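The filtering step is a plain discrete 2-D convolution. The sketch below zero-pads the borders and shows the reconstructed diagonal 3×3 mask for concreteness:

```python
def convolve(dotplot, mask):
    """Apply a line-detection mask to a gray-level dotplot (a 2-D list of
    LTP values). Cells outside the plot are treated as 0 (zero padding)."""
    h, w = len(dotplot), len(dotplot[0])
    r = len(mask) // 2                      # mask radius, e.g. 1 for a 3x3 mask
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for j in range(-r, r + 1):
                for i in range(-r, r + 1):
                    yy, xx = y + j, x + i
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += dotplot[yy][xx] * mask[j + r][i + r]
            out[y][x] = acc
    return out

# One of the 3x3 masks above: a diagonal line detector.
DIAGONAL_MASK = [[ 2, -1, -1],
                 [-1,  2, -1],
                 [-1, -1,  2]]
```

The 5×5 filter is used the same way; only the mask table changes.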
2.4 Texture analysis

Following common practice in IP for texture analysis, we propose to extract features to discriminate a connection region in the dotplot from non-connection regions. First, the dotplot is normalized and binarized, leaving the expected number of dots, in order to reduce complexity and simplify computation. Then, a projectional transformation onto either or both axes of the languages involved compresses the data further without losing too much information. That further reduces the 2D texture discrimination task to a 1D problem.

For instance, Figure 4 shows that the vicinity of a connection is characterized by evenly distributed high LTP values, while that of a non-connection is not. According to the one-to-one constraint, we should be looking for dense and continuous 1D occurrences of dots. A cell with high density and high power density indicates that connections fall in the vicinity of the cell. With this in mind, we proceed as follows to extract features for textural discrimination:

1. Normalize the LTP values row-wise and column-wise.
2. For a window of n × m cells, set the t(s, t) values of the k cells with the highest LTP values to 1 and the rest to 0, k = max(n, m).
3. Compute the density and deviation features:

  projection:  p(x, y) = Σ_{j=-v..v} t(x, y+j)

  density:  d(x, y) = [ Σ_{i=-w..w} p(x+i, y) ] / (2w + 1)

  power density:  pd(x, y) = Σ_{i=1..c} Σ_{x'=x-w..x+w} p(x', y) · p(x'-i, y)

where w and v are the width and height of a window for feature extraction, and c is the bound on the resolution of the texture. The bound depends on the coverage rate of the LTP estimate; 2 or 3 seems to produce satisfactory results. Since the one-to-one constraint is a sentence-level phenomenon, the values for w and v should be chosen to correspond to the lengths of average sentences in each of the two languages.

Figure 4. Projection. The histogram of horizontal projection of the data in Figure 2.
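In code, the three features are a handful of window sums. The sketch below zero-pads at the borders and takes the binarized dotplot t as a 2-D list; parameter names follow the text:

```python
def texture_features(t, x, y, v, w, c):
    """Projection, density and power density around cell (x, y) of a
    binarized dotplot t, following the formulas above (a sketch; cells
    outside the plot contribute 0)."""
    h, width = len(t), len(t[0])

    def p(xx, yy):
        # projection: column sum over a vertical window of height 2v+1
        return sum(t[yy + j][xx]
                   for j in range(-v, v + 1)
                   if 0 <= yy + j < h and 0 <= xx < width)

    proj = [p(x + i, y) for i in range(-w, w + 1)]    # p(x-w..x+w, y)
    density = sum(proj) / (2 * w + 1)
    power = sum(proj[k] * p(x + k - w - i, y)          # p(x', y) * p(x'-i, y)
                for i in range(1, c + 1)
                for k in range(2 * w + 1))
    return density, power
```

A cell is then classified as lying in a connection region when both features exceed their thresholds, as in the experiments of Section 3.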
2.5 Hough transform and line detection

The purpose of the Hough transform (HT) algorithm, in short, is to map all points of a line in the original space to a single accumulative value in the parameter space. We can describe a line on the x-y plane in the form ρ = x·cos θ + y·sin θ. Therefore, a point (ρ, θ) on the ρ-θ plane describes a line on the x-y plane. Furthermore, HT is insensitive to perturbation, in the sense that the line of (ρ, θ) is very close to that of (ρ+Δρ, θ+Δθ). That enables an HT-based line detection algorithm to find high-resolution, one-pixel-wide lines, as well as lower-resolution lines.

As mentioned above, many alignment algorithms rely on anchors, such as cognates, to keep the alignment on track. However, that is only possible for bitexts of certain language pairs and text genres. For a clean bitext, such as the Hansards, most dynamic programming based algorithms perform well (Simard 1996). To the contrary, a noisy bitext with large deletions, inversions and non-literal translations will appear as disconnected segments on the dotplot. Gaps between these segments may overpower dynamic programming and lead to a low precision rate. Simard (1996) shows that for the Hansards corpus, most sentence-alignment algorithms yield a precision rate over 90%; for a noisy corpus, such as a literary bitext, the rate drops below 50%. Contrary to the dynamic programming based methods, the Hough transform always detects the most apparent line segments, even in a noisy dotplot.

Before applying the Hough transform, the same processes of normalization and thresholding are performed first. The algorithm is described as follows:

1. Normalize the LTP values row-wise and column-wise.
2. For a window of n × m cells, set the t(s, t) values of the k cells with the highest LTP values to 1 and the rest to 0, k = max(n, m).
3. Set incidence(ρ, θ) = 0, for all -k < ρ < k and -90° ≤ θ ≤ 0°.
4. For each cell (x, y) with t(x, y) = 1 and each -90° ≤ θ ≤ 0°, increment incidence(x·cos θ + y·sin θ, θ) by 1.
5. Keep the (ρ, θ) pairs that have a high incidence value, incidence(ρ, θ) > λ. Subsequently, filter out every dot (x, y) that does not lie on such a line (ρ, θ), or within a certain distance δ from it.

3. Experiments

To assess the effectiveness of the PlotAlign algorithms, we conducted a series of experiments. A novel and its translation were chosen as the test data. For simplicity, we selected mutual information to estimate LTP. Statistics of mutual information between source and target words were estimated using an outside source: the example sentences and their translations in the Longman English-Chinese Dictionary of Contemporary English (LecDOCE, Longman Group, 1992). An additional list of some 3,200 English person names and their Chinese translations was used to enhance the coverage of proper nouns in the bitext.

Figure 5. Alignment by a human judge.

Figure 6. LTP estimation of the test data.

Figure 5 displays the result of word alignment by a human judge. Only 40% of the English text and 70% of the Chinese text have a connection counterpart. This indicates that the translation is not literal and there are many deletions. For instance, the following sentences are freely translated:

1a. It was only a quarter to eleven.
1b. [Chinese translation] (10:45)

2a. She was tall, maybe five ten and a half, but she didn't stoop.
2b. [Chinese translation] (175 cm)

3a. Larry Cochran tried to keep a discreet distance away. He knew his quarry was elusive and self-protective: there were few candid pictures of her, which was what would make these valuable. He walked on the opposite side of the street from her; using a zoom lens, he had already shot a whole roll of film. When they came to Seventy-ninth Street, he caught a real break when she crossed over to him, and he realised he might be able to squeeze off full-face shots. Maybe, if it clouded over more, she might take off her dark glasses. That would be a real coup.

4. Result and Discussion

Figure 6 shows that the coverage and precision of the LTP estimate are not very high. That is to be expected, since the translation is not literal and the mutual information estimate based on an outside source might not be relevant. Nevertheless, the PlotAlign algorithms seem to be robust enough to produce reasonably high precision, as can be seen from Figure 3. Figure 3(a) shows that a normalization and thresholding process based on the one-to-one constraint does a good job of filtering out noise. Figure 3(b) shows that convolution-based filtering removes more noise, in accordance with the structure preserving constraint. Texture analysis does an even better job of noise suppression: Figures 7(a) and 7(b) show that the signal-to-noise ratio (SNR) is greatly improved. The filtering based on the Hough transform, contrary to the other two filtering methods, prefers connections that are consistent with other connections globally. It does a good job of identifying long line segments. However, isolated short segments, surrounded by deletions, are likely to be missed.
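Steps 3-5 of the procedure in Section 2.5 can be sketched as follows; the one-degree θ step and integer ρ rounding are our own choices, since the paper does not specify the discretization:

```python
import math
from collections import Counter

def hough_filter(dots, lam, dist=2.0):
    """Accumulate (rho, theta) incidence counts for binarized dots, then keep
    only the dots that lie within `dist` of a line whose incidence exceeds
    `lam` (a sketch of steps 3-5; thresholds are placeholders)."""
    incidence = Counter()
    for x, y in dots:
        for deg in range(-90, 1):                      # -90 deg <= theta <= 0 deg
            theta = math.radians(deg)
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            incidence[(rho, deg)] += 1
    strong = {k for k, n in incidence.items() if n > lam}
    kept = []
    for x, y in dots:
        for rho, deg in strong:
            theta = math.radians(deg)
            if abs(x * math.cos(theta) + y * math.sin(theta) - rho) <= dist:
                kept.append((x, y))
                break
    return kept
```

Because each dot votes for every line it could lie on, a genuine alignment line accumulates votes from all of its dots, while isolated noise dots never push any (ρ, θ) cell past the threshold.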
Figure 8(b) shows that the filtering based on HT missed the short line segment appearing near the center of the dotplot shown in Figure 6. Nevertheless, this short segment shows up most vividly in the result of the textural filter, shown in Figure 7(b). By combining filters on all three levels of resolution, we gather as much evidence as possible for an optimal result.

Figure 7. Texture Analysis. (a) Threshold = 3; (b) Threshold = 4.

Table 2. Hough Transform: the detected (ρ, θ) line parameters and their incidence counts N.

Figure 8. Hough transform of the test data: (a) Hough transform, threshold = 4; (b) Hough transform, threshold = 8; (c) the dots retained in the bitext space.

5. Conclusion

The performance of the algorithms discussed herein can definitely be improved by enhancing their various components, e.g. by introducing bilingual dictionaries and thesauri. However, the PlotAlign algorithms constitute a functional core for processing noisy bitext. While the evaluation is based on an English-Chinese bitext, the linguistic constraints motivating the algorithms seem to be quite general and, to a large extent, language independent. If that is the case, the algorithms should be effective for other language pairs. The prospects for English-Japanese or Chinese-Japanese, in particular, seem highly promising. Performing the alignment task as image processing proves to be an effective approach and sheds new light on the bitext correspondence problem. We are currently looking at the possibilities of exploiting powerful and well established IP techniques to attack other problems in natural language processing.

Acknowledgement

This work is supported by the National Science Council, Taiwan, under contracts NSC-862-745-E007-009 and NSC-862-213-E007-049. We would like to thank Ling-ling Wang and Jyh-shing Jang for their valuable comments and suggestions.

References

1. Brown, P. F., J. C. Lai and R. L. Mercer, (1991). Aligning Sentences in Parallel Corpora, In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 169-176, Berkeley, CA, USA.
2. Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer, (1993). The Mathematics of Statistical Machine Translation: Parameter Estimation, Computational Linguistics, 19:2, 263-311.
3. Chen, J. N., J. S. Chang, H. H. Sheng and S. J. Ker, (1997). Word Sense Disambiguation using a Bilingual Machine Readable Dictionary. To appear in Natural Language Engineering.
4. Chen, Stanley F., (1993).
Aligning Sentences in Bilingual Corpora Using Lexical Information, In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL-93), 9-16, Ohio, USA.
5. Church, K. W., I. Dagan, W. A. Gale, P. Fung, J. Helfman, and B. Satish, (1993). Aligning Parallel Texts: Do Methods Developed for English-French Generalize to Asian Languages? In Proceedings of the First Pacific Asia Conference on Formal and Computational Linguistics, 1-12.
6. Church, Kenneth W. (1993). Char_align: A Program for Aligning Parallel Texts at the Character Level, In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics (ACL-93), Columbus, OH, USA.
7. Dagan, I., K. W. Church and W. A. Gale, (1993). Robust Bilingual Word Alignment for Machine Aided Translation, In Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, 1-8, Columbus, Ohio, USA.
8. Daille, B., E. Gaussier and J.-M. Lange, (1994). Towards Automatic Extraction of Monolingual and Bilingual Terminology, In Proceedings of the 15th International Conference on Computational Linguistics, 515-521, Kyoto, Japan.
9. Fung, P. and K. McKeown, (1994). Aligning Noisy Parallel Corpora across Language Groups: Word Pair Feature Matching by Dynamic Time Warping, In Proceedings of the First Conference of the Association for Machine Translation in the Americas (AMTA-94), 81-88, Columbia, Maryland, USA.
10. Fung, Pascale and Kenneth W. Church (1994). K-vec: A New Approach for Aligning Parallel Texts, In Proceedings of the 15th International Conference on Computational Linguistics (COLING-94), 1096-1102, Kyoto, Japan.
11. Gale, W. A. and K. W. Church, (1991a). A Program for Aligning Sentences in Bilingual Corpora, In Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics (ACL-91), 177-184, Berkeley, CA, USA.
12. Gale, W. A. and K. W. Church, (1991b). Identifying Word Correspondences in Parallel Texts, In Proceedings of the Fourth DARPA Speech and Natural Language Workshop, 152-157, Pacific Grove, CA, USA.
13. Gale, W. A., K. W. Church and D. Yarowsky, (1992). Using Bilingual Materials to Develop Word Sense Disambiguation Methods, In Proceedings of the 4th International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92), 101-112, Montreal, Canada.
14. Kay, M. and M. Röscheisen, (1993). Text-Translation Alignment, Computational Linguistics, 19:1, 121-142.
15. Ker, Sur J. and Jason S. Chang (1997). Class-based Approach to Word Alignment, to appear in Computational Linguistics, 23:2.
16. Longman Group, (1992). Longman English-Chinese Dictionary of Contemporary English, Longman Group (Far East) Ltd., Hong Kong.
17. Simard, M., G. F. Foster, and P. Isabelle, (1992). Using Cognates to Align Sentences in Bilingual Corpora, In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation (TMI-92), 67-81, Montreal, Canada.
18. Simard, Michel and Pierre Plamondon (1996). Bilingual Sentence Alignment: Balancing Robustness and Accuracy, In Proceedings of the First Conference of the Association for Machine Translation in the Americas (AMTA-96), 135-144, Montreal, Quebec, Canada.
19. Wu, Dekai (1994). Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria, In Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL-94), 80-87, Las Cruces, New Mexico, USA.

| 1997 | 38 |
A Portable Algorithm for Mapping Bitext Correspondence

I. Dan Melamed
Dept. of Computer and Information Science
University of Pennsylvania
Philadelphia, PA, 19104, U.S.A.
[email protected]

Abstract

The first step in most empirical work in multilingual NLP is to construct maps of the correspondence between texts and their translations (bitext maps). The Smooth Injective Map Recognizer (SIMR) algorithm presented here is a generic pattern recognition algorithm that is particularly well-suited to mapping bitext correspondence. SIMR is faster and significantly more accurate than other algorithms in the literature. The algorithm is robust enough to use on noisy texts, such as those resulting from OCR input, and on translations that are not very literal. SIMR encapsulates its language-specific heuristics, so that it can be ported to any language pair with a minimal effort.

1 Introduction

Texts that are available in two languages (bitexts) are immensely valuable for many natural language processing applications.¹ Bitexts are the raw material from which translation models are built. In addition to their use in machine translation (Sato & Nagao, 1990; Brown et al., 1993; Melamed, 1997), translation models can be applied to machine-assisted translation (Sato, 1992; Foster et al., 1996), cross-lingual information retrieval (SIGIR, 1996), and gisting of World Wide Web pages (Resnik, 1997). Bitexts also play a role in less automated applications such as concordancing for bilingual lexicography (Catizone et al., 1993; Gale & Church, 1991b), computer-assisted language learning, and tools for translators (e.g. Macklovitch, 1995; Melamed, 1996b). However, bitexts are of little use without an automatic method for constructing bitext maps.

¹ "Multitexts" in more than two languages are even more valuable, but they are much more rare.

Bitext maps identify corresponding text units between the two halves of a bitext. The ideal bitext mapping algorithm should be fast and accurate, use little memory, and degrade gracefully when faced with translation irregularities like omissions and inversions. It should be applicable to any text genre in any pair of languages.

The Smooth Injective Map Recognizer (SIMR) algorithm presented in this paper is a bitext mapping algorithm that advances the state of the art on these criteria. The evaluation in Section 5 shows that SIMR's error rates are lower than those of other bitext mapping algorithms by an order of magnitude. At the same time, its expected running time and memory requirements are linear in the size of the input, better than any other published algorithm.

The paper begins by laying down SIMR's geometric foundations and describing the algorithm. Then, Section 4 explains how to port SIMR to arbitrary language pairs with minimal effort, without relying on genre-specific information such as sentence boundaries. The last section offers some insights about the optimal level of text analysis for mapping bitext correspondence.

2 Bitext Geometry

A bitext (Harris, 1988) comprises two versions of a text, such as a text in two different languages. Translators create a bitext each time they translate a text. Each bitext defines a rectangular bitext space, as illustrated in Figure 1. The width and height of the rectangle are the lengths of the two component texts, in characters. The lower left corner of the rectangle is the origin of the bitext space and represents the two texts' beginnings.
The upper right corner is the terminus and represents the texts' ends. The line between the origin and the terminus is the main diagonal. The slope of the main diagonal is the bitext slope.

Figure 1: a bitext space (x = character position in text 1; the origin, terminus and main diagonal are marked).

Each bitext space contains a number of true points of correspondence (TPCs), other than the origin and the terminus. For example, if a token at position p on the x-axis and a token at position q on the y-axis are translations of each other, then the coordinate (p, q) in the bitext space is a TPC.² TPCs also exist at corresponding boundaries of text units such as sentences, paragraphs, and chapters. Groups of TPCs with a roughly linear arrangement in the bitext space are called chains. Bitext maps are 1-to-1 functions in bitext spaces. A complete set of TPCs for a particular bitext is called a true bitext map (TBM). The purpose of a bitext mapping algorithm is to produce bitext maps that are the best possible approximations of each bitext's TBM.

² Since distances in the bitext space are measured in characters, the position of a token is defined as the mean position of its characters.

3 SIMR

SIMR builds bitext maps one chain at a time. The search for each chain alternates between a generation phase and a recognition phase. The generation phase begins in a small rectangular region of the bitext space, whose diagonal is parallel to the main diagonal. Within this search rectangle, SIMR generates all the points of correspondence that satisfy the supplied matching predicate, as explained in Section 3.1. In the recognition phase, SIMR calls the chain recognition heuristic to find suitable chains among the generated points. If no suitable chains are found, the search rectangle is proportionally expanded and the generation-recognition cycle is repeated. The rectangle keeps expanding until at least one acceptable chain is found. If more than one chain is found in the same cycle, SIMR accepts the one whose points are least dispersed around its least-squares line. Each time SIMR accepts a chain, it selects another region of the bitext space to search for the next chain.

SIMR employs a simple heuristic to select regions of the bitext space to search. To a first approximation, TBMs are monotonically increasing functions. This means that if SIMR finds one chain, it should look for others either above and to the right or below and to the left of the one it has just found. All SIMR needs is a place to start the trace. A good place to start is at the beginning: since the origin of the bitext space is always a TPC, the first search rectangle is anchored at the origin. Subsequent search rectangles are anchored at the top right corner of the previously found chain, as shown in Figure 2.

Figure 2: SIMR's "expanding rectangle" search strategy (legend: discovered TPCs, undiscovered TPCs, previous chain). The search rectangle is anchored at the top right corner of the previously found chain. Its diagonal remains parallel to the main diagonal.

The expanding-rectangle search strategy makes SIMR robust in the face of TBM discontinuities. Figure 2 shows a segment of the TBM that contains a vertical gap (an omission in the text on the x-axis). As the search rectangle grows, it will eventually intersect with the TBM, even if the discontinuity is quite large (Melamed, 1996b). The noise filter described in Section 3.3 prevents SIMR from being led astray by false points of correspondence.
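The control flow just described is easy to capture in a skeleton. Everything below is a reconstruction for illustration only: the generation phase, the recognition phase and the dispersal score are passed in as callables, and the initial rectangle size and growth factor are placeholders.

```python
def simr(origin, terminus, generate, recognize, dispersal, grow=1.25):
    """Skeleton of SIMR's expanding-rectangle search (our sketch of the
    control flow): `generate` applies the matching predicate inside a
    search rectangle, `recognize` is the chain recognition heuristic, and
    `dispersal` scores how far a chain's points stray from its
    least-squares line."""
    bitext_map = []
    anchor = origin                        # first rectangle anchored at origin
    while anchor[0] < terminus[0] and anchor[1] < terminus[1]:
        w = h = 100.0                      # initial rectangle size: placeholder
        chains = []
        while not chains:
            points = generate(anchor, w, h)     # generation phase
            chains = recognize(points)          # recognition phase
            if not chains:
                if w > terminus[0] and h > terminus[1]:
                    return bitext_map           # nothing left to find
                w *= grow                       # expand proportionally and
                h *= grow                       # repeat the cycle
        best = min(chains, key=dispersal)       # least-dispersed chain wins
        bitext_map.extend(best)
        # next rectangle: anchored at the accepted chain's top right corner
        # (for roughly monotonic chains, its lexicographically largest point)
        anchor = max(best)
    return bitext_map
```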
3.1 Point Generation

SIMR generates candidate points of correspondence in the search rectangle using one of its matching predicates. A matching predicate is a heuristic for deciding whether a given pair of tokens are likely to be mutual translations. Two kinds of information that a matching predicate can rely on most often are cognates and translation lexicons.

Two tokens in a bitext are cognates if they have the same meaning and similar spellings. In the non-technical Canadian Hansards (parliamentary debate transcripts available in English and in French), cognates can be found for roughly one quarter of all text tokens (Melamed, 1995). Even distantly related languages like English and Czech will share a large number of cognates in the form of proper nouns. Cognates are more common in bitexts from more similar language pairs, and from text genres where more word borrowing occurs, such as technical texts. When dealing with language pairs that have dissimilar alphabets, the matching predicate can employ phonetic cognates (Melamed, 1996a). When one or both of the languages involved is written in pictographs, cognates can still be found among punctuation and digit strings. However, cognates of this last kind are usually too sparse to suffice by themselves.

When the matching predicate cannot generate enough candidate correspondence points based on cognates, its signal can be strengthened by a translation lexicon. Translation lexicons can be extracted from machine-readable bilingual dictionaries (MRBDs), in the rare cases where MRBDs are available. In other cases, they can be constructed automatically or semi-automatically using any of several methods (Fung, 1995; Melamed, 1996c; Resnik & Melamed, 1997). Since the matching predicate need not be perfectly accurate, the translation lexicons need not be either.

Matching predicates can take advantage of other information besides cognates and translation lexicons. For example, a list of faux amis is a useful complement to a cognate matching strategy (Macklovitch, 1995). A stop list of function words is also helpful: function words are translated inconsistently and make unreliable points of correspondence (Melamed, 1996a).
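A common way to make the cognate criterion precise is the Longest Common Subsequence Ratio that Melamed uses elsewhere (and mentions in Section 6). The predicate below is a sketch along those lines; the 0.58 threshold, the lexicon and the stop list are all placeholders.

```python
def lcsr(a, b):
    """Longest Common Subsequence Ratio: LCS length over the longer length."""
    m, n = len(a), len(b)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                lcs[i][j] = lcs[i - 1][j - 1] + 1
            else:
                lcs[i][j] = max(lcs[i - 1][j], lcs[i][j - 1])
    return lcs[m][n] / max(m, n)

def match(s, t, lexicon=frozenset(), stop=frozenset(), threshold=0.58):
    """Matching predicate sketch: accept cognates or translation-lexicon
    entries, and reject function words on the stop list."""
    if s in stop or t in stop:
        return False
    return (s, t) in lexicon or lcsr(s, t) >= threshold
```

A faux amis list would slot in as one more rejection test before the cognate check.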
These filters can be effi- ciently combined so that SIMR's expected running time and memory requirements are linear in the size of the input bitext (Melamed, 1996a). The chain recognition heuristic pays no attention to whether chains are monotonic. Non-monotonic TPC chains are quite common, because even lan- guages with similar syntax like French and English have well-known differences in word order. For ex- ample, English (adjective, noun) pairs usually corre- spond to French (noun, adjective) pairs. Such inver- sions result in TPCs arranged like the middle two points in the "previous chain" of Figure 2. SIMR has no problem accepting the inverted points. If the order of words in a certain text passage is radically altered during translation, SIMR will sim- ply ignore the words that "move too much" and con- struct chains out of those that remain more station- ary. The maximum point dispersal parameter lim- its the width of accepted chains, but nothing lim- its their length. In practice, the chain recognition heuristic often accepts chains that span several sen- tences. The ability to analyze non-monotonic points of correspondence over variable-size areas of bitext space makes SIMR robust enough to use on transla- tions that are not very literal. 3.3 Noise Filter Points of correspondence among frequent token types often line up in rows and columns, as illus- trated in Figure 3. Token types like the English article "a" can produce one or more correspondence points for almost every sentence in the opposite text. Only one point of correspondence in each row and column can be correct; the rest are noise. A noise fil- ter can make it easier for SIMR to find TPC chains. Other bitext mapping algorithms mitigate this source of noise either by assigning lower weights to 307 a a a a "" a c- .ca a c-. I.U ql ii • qD • • q, Q • qD ~ 'a French text Figure 3: Frequent tokens cause false points of cor- respondence that line up in rows and columns. correspondence points associated with frequent to- ken types (Church, 1993) or by deleting frequent to- ken types from the bitext altogether (Dagan et al., 1993). However, a token type that is relatively fre- quent overall can be rare in some parts of the text. In those parts, the token type can provide valuable clues to correspondence. On the other hand, many tokens of a relatively rare type can be concentrated in a short segment of the text, resulting in many false correspondence points. The varying concentra- tion of identical tokens suggests that more localized noise filters would be more effective. SIMR's local- ized search strategy provides a vehicle for a localized noise filter. The filter is based on the maximum point am- biguity level parameter. For each point p = (x, y), lct X be the number of points in column x within the search rectangle, and let Y be the number of points in row y within the search rectangle. Then the ambiguity level of p is X + Y - 2. In partic- ular, if p is the only point in its row and column, then its ambiguity level is zero. The chain recogni- tion heuristic ignores points whose ambiguity level is too high. What makes this a localized filter is that only points within the search rectangle count toward each other's ambiguity level. The ambiguity level of a given point can change when the search rectangle expands or moves. The noise filter ensures that false points of corre- spondence are very sparse, as illustrated in Figure 4. 
Even if one chain of false points of correspondence slips by the chain recognition heuristic, the expand- ing rectangle will find its way back to the TBM be- fore the chain recognition heuristic accepts another false "" .,Z °• :~'~ anchor off track " Figure 4: SIMR's noise filter ensures that TPCs are much more dense than false points of correspon- dence A good signal-to-noise ratio prevents SIMR from getting lost. chain. If the matching predicate generates a reason- ably strong signal then the signal-to-noise ratio will be high and SIMR will not get lost, even though it is a greedy algorithm with no ability to look ahead. 4 Porting to New Language Pairs SIMR can be ported to a new language pair in three steps. 4.1 Step 1: Construct Matching Predicate The original SIMR implementation for French/English included matching predicates that could use cognates and/or translation lexicons. For language pairs in which lexical cognates are frequent, a cognate-based matching predicate should suffice. In other cases, a "seed" translation lexicon may be used to boost the number of candidate points pro- duced in the generation phase of the search. The SIMR implementation for Spanish/English uses only cognates. For Korean/English, SIMR takes advan- tage of punctuation and number cognates but sup- plements them with a small translation lexicon. 4.2 Step 2: Construct Axis Generators In order for SIMR to generate candidate points of correspondence, it needs to know what token pairs correspond to co-ordinates in the search rectangle. It is the axis generator's job to map the two halves of the bitext to positions on the x- and y-axes of the bitext space, before SIMR starts searching for chains. This mapping should be done with the matching predicate in mind. If the matching predicate uses cognates, then ev- ery word that might have a cognate in the other half of the bitext should be assigned its own axis 308 position. This rule applies to punctuation and num- bers as well as to "lexical" cognates. In the case of- lexical cognates, the axis generator typically needs to invoke a language-specific tokenization program to identify words in the text. Writing such a pro- gram may constitute a significant part of the port- ing effort, if no such program is available in advance. The effort may be lessened, however, by the realiza- tion that it is acceptable for the tokenization pro- gram to overgenerate just as it is acceptable for the matching predicate. For example, when tokenizing German text, it is not necessary for the tokenizer to know which words are compounds. A word that has another word as a substring should result in one axis position for the substring and one for the su- perstring. When lexical cognates are not being used, the axis generator only needs to identify punctuation, num- bers, and those character strings in the text which also appear on the relevant side of the translation lexicon 3. It would be pointless to plot other words on the axes because the matching predicate could never match them anyway. Therefore, for languages like Chinese and Japanese, which are written with- out spaces between words, tokenization boils down to string matching. In this manner, SIMR circum- vents the difficult problem of word identification in these languages. 4.3 Step 3: Re-optimize Parameters The last step in the porting process is to re-optimize SIMR's numerical parameters. The four parameters described in Section 3 interact in complicated ways, and it is impossible to find a good parameter set analytically. 
It is easier to optimize these parameters empirically, using simulated annealing (Vidal, 1993). Simulated annealing requires an objective func- tion to optimize. The objective function for bitext mapping should measure the difference between the TBM and maps produced with the current parame- ter set. In geometric terms, the difference is a dis- tance. The TBM consists of a set of TPCs. The error between a bitext map and each TPC can be defined as the horizontal distance, the vertical dis- tance, or the distance perpendicular to the main di- agonal. The first two alternatives would minimize the error with respect to only one language or the other. The perpendicular distance is a more robust average. In order to penalize large errors more heav- ily, root mean squared (RMS) distance is minimized instead of mean distance. 3Multi-word expressions in the translation lexicon are treated just like any other character string. The most tedious part of the porting process is the construction of TBMs against which SIMR's param- eters can be optimized and tested. The easiest way to construct these gold standards is to extract them from pairs of hand-aligned text segments: The final character positions of each segment in an aligned pair are the co-ordinates of a TPC. Over the course of two porting efforts, I have develol~ed and refined tools and methods that allow a bilingual annota- tor to construct the required TBMs very efficiently from a raw bitext. For example, a tool originally de- signed for automatic detection of omissions in trans- lations (Melamed, 1996b) was adopted to detect mis- alignments. 4.4 Porting Experience Summary Table 1 summarizes the amount of time invested in each new language pair. The estimated times for building axis generators do not include the time spent to build the English axis generator, which was part of the original implementation. Axis generators need to be built only once per language, rather than once per language pair. 5 Evaluation SIMR was evaluated on hand-aligned bitexts of vari- ous genres in three language pairs. None of these test bitexts were used anywhere in the training or port- ing procedures. Each test bitext was converted to a set of TPCs by noting the pair of character positions at the end of each aligned pair of text segments. The test metric was the root mean squared distance, in characters, between each TPC and the interpolated bitext map produced by SIMR, where the distance was measured perpendicular to the main diagonal. The results are presented in Table 2. The French/English part of the evaluation was performed on bitexts from the publicly available BAF corpus created at CITI (Simard & Plamon- don, 1996). SIMR's error distribution on the "parlia- mentary debates" bitext in this collection is given in Table 3. This distribution can be compared to error distributions reported in (Church, 1993) and in (Da- gan et al., 1993). SIMR's RMS error on this bitext was 5.7 characters. Church's char_align algorithm (Church, 1993) is the only algorithm that does not use sentence boundary information for which com- parable results have been reported, char_align's RMS error on this bitext was 57 characters, exactly ten times higher. Two teams of researchers have reported results on the same "parliamentary debates" bitext for al- gorithms that map correspondence at the sentence level (Gale & Church, 1991a; Simard et al., 1992). 309 Table 1: Time spent in constructing two "gold standard" TBMs. 
estimated time estimated time main informant for spent to build spent on language pair matching predicate new axis generator hand-alignment Spanish/English lexical cognates 8 h 5 h Korean/English translation lexicon 6 h 12 h number of segments aligned 1338 1224 Table 2: SIMR accuracy on different text genres in three language pairs. language number of number of RMS Error pair training TPCs genre test TPCs in characters French / English 598 parliamentary debates CITI technical reports other technical reports court transcripts U.N. annual report I.L.O. report 7123 365,305, 176 561, 1393 1377 2049 7129 5.7 4.4, 2.6, 9.9 20.6, 14.2 3.9 12.36 6.42 .... Spanish / English 562 software manuals 376, 151,100, 349 4.7, 1.3, 6.6, 4.9 Korean / English 615 military manuals 40, 88, 186, 299 2.6, 7.1, 25, 7.8 military messages 192 0.53 Table 3: SIMR 's error distribution on the French/English "parliamentary debates" bitext. number of error range fraction of test points in characters test points 1 2 1 5 4 6 9 29 3057 3902 43 28 17 5 8 1 1 1 1 1 1 -101 -80 to -70 -70 to -60 -60 to -50 -50 to -40 -40 to -30 -30 to -20 -20 to -10 -10 to 0 0 to 10 10 to 20 20 to 30 30 to 40 40 to 50 50 to 60 60 to 70 70 to 80 80 to 90 90 to 100 110 to 120 185 .0001 .0003 .0001 .0007 .0006 .0008 .0013 .0041 .4292 .5478 .0060 .0039 .0024 .0007 .0011 .0001 .0001 .0001 .0001 .0001 .0001 7123 1.000 Both of these algorithms use sentence boundary information. Melamed (1996a) showed that sen- tence boundary information can be used to convert SIMR's output into sentence alignments that are more accurate than those obtained by either of the other two approaches. The test bitexts in the other two language pairs were created when SIMR was being ported to those languages. The Spanish/English bitexts were drawn from the on-line Sun MicroSystems Solaris An- swerBooks. The Korean/English bitexts were pro- vided and hand-aligned by Young-Suk Lee of MIT's Lincoln Laboratories. Although it is not possible to compare SIMR's performance on these language pairs to the performance of other algorithms, Table 2 shows that the performance on other language pairs is no worse than performance on French/English. 6 Which Text Units to Map? Early bitext mapping algorithms focused on sen- tences (Kay & RSscheisen, 1993; Debili & Sam- mouda, 1992). Although sentence maps do not have sufficient resolution for some important bitext appli- cations (Melamed, 1996b; Macklovitch, 1995), sen- tences were an easy starting point, because their order rarely changes during translation. Therefore, sentence mapping algorithms need not worry about crossing correspondences. In 1991, two teams of re- searchers independently discovered that sentences can be accurately aligned by matching sequences 310 with similar lengths (Gale & Church, 1991a; Brown et al., 1991). Soon thereafter, Church (1993) found that bitext mapping at the sentence level is not an option for noisy bitexts found in the real world. Sentences are often difficult to detect, especially where punc- tuation is missing due to OCR errors. More im- portantly, bitexts often contain lists, tables, titles, footnotes, citations and/or mark-up codes that foil sentence alignment methods. Church's solution was to look at the smallest of text units -- characters and to use digital signal processing techniques to grapple with the much larger number of text units that might match between the two halves of a bitext. Characters match across languages only to the extent that they participate in cognates. 
7 Conclusion

The Smooth Injective Map Recognizer (SIMR) bitext mapping algorithm advances the state of the art on several frontiers. It is significantly more accurate than other algorithms in the literature. Its expected running time and memory requirements are linear in the size of the input, which makes it the algorithm of choice for very large bitexts. It is not fazed by word order differences. It does not rely on pre-segmented input and is portable to any pair of languages with a minimal effort. These features make SIMR the most widely applicable bitext mapping algorithm to date.

SIMR opens up several new avenues of research. One important application of bitext maps is the construction of translation lexicons (Dagan et al., 1993) and, as discussed, translation lexicons are an important information source for bitext mapping. It is likely that the accuracy of both kinds of algorithms can be improved by alternating between the two on the same bitext. There are also plans to build an automatic bitext-locating spider for the World Wide Web, so that SIMR can be applied to more new language pairs and bitext genres.

Acknowledgements

SIMR was ported to Spanish/English while I was visiting Sun MicroSystems Laboratories. Thanks to Gary Adams, Cookie Callahan, Bob Kuhns and Philip Resnik for their help with that project. Thanks also to Philip Resnik for writing the Spanish tokenizer, and hand-aligning the Spanish/English training bitexts. Porting SIMR to Korean/English would not have been possible without Young-Suk Lee of MIT's Lincoln Laboratories, who provided the seed translation lexicon, and aligned all the training and test bitexts. This paper was much improved by helpful comments from Mitch Marcus, Adwait Ratnaparkhi, Bonnie Webber and three anonymous reviewers. This research was supported by an equipment grant from Sun MicroSystems and by ARPA Contract #N66001-94C-6043.

References

P. F. Brown, J. C. Lai & R. L. Mercer, "Aligning Sentences in Parallel Corpora," Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA, 1991.

P. F. Brown, S. Della Pietra, V. Della Pietra & R. Mercer, "The Mathematics of Statistical Machine Translation: Parameter Estimation," Computational Linguistics 19:2, 1993.

R. Catizone, G. Russell & S. Warwick, "Deriving Translation Data from Bilingual Texts," Proceedings of the First International Lexical Acquisition Workshop, Detroit, MI, 1993.
S. Chen, "Aligning Sentences in Bilingual Corpora Using Lexical Information," Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Columbus, OH, 1993.

K. W. Church, "Char_align: A Program for Aligning Parallel Texts at the Character Level," Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, Columbus, OH, 1993.

I. Dagan, K. Church & W. Gale, "Robust Word Alignment for Machine Aided Translation," Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, Columbus, OH, 1993.

F. Debili & E. Sammouda, "Appariement des Phrases de Textes Bilingues," Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France, 1992.

G. Foster, P. Isabelle & P. Plamondon, "Word Completion: A First Step Toward Target-Text Mediated IMT," Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996.

P. Fung, "Compiling Bilingual Lexicon Entries from a Non-Parallel English-Chinese Corpus," Proceedings of the Third Workshop on Very Large Corpora, Boston, MA, 1995.

W. Gale & K. W. Church, "A Program for Aligning Sentences in Bilingual Corpora," Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA, 1991a.

W. Gale & K. W. Church, "Identifying Word Correspondences in Parallel Texts," Proceedings of the DARPA SNL Workshop, 1991b.

B. Harris, "Bi-Text, a New Concept in Translation Theory," Language Monthly #54, 1988.

M. Kay & M. Röscheisen, "Text-Translation Alignment," Computational Linguistics 19:1, 1993.

E. Macklovitch, "Peut-on vérifier automatiquement la cohérence terminologique?" Proceedings of the IVes Journées scientifiques, Lexicommatique et Dictionnairiques, organized by AUPELF-UREF, Lyon, France, 1995.

I. D. Melamed, "Automatic Evaluation and Uniform Filter Cascades for Inducing N-best Translation Lexicons," Proceedings of the Third Workshop on Very Large Corpora, Boston, MA, 1995.

I. D. Melamed, "A Geometric Approach to Mapping Bitext Correspondence," Proceedings of the First Conference on Empirical Methods in Natural Language Processing (EMNLP'96), Philadelphia, PA, 1996a.

I. D. Melamed, "Automatic Detection of Omissions in Translations," Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996b.

I. D. Melamed, "Automatic Construction of Clean Broad-Coverage Translation Lexicons," Proceedings of the Conference of the Association for Machine Translation in the Americas, Montreal, Canada, 1996c.

I. D. Melamed, "A Word-to-Word Model of Translational Equivalence," Proceedings of the 35th Conference of the Association for Computational Linguistics, Madrid, Spain, 1997. (in this volume)

P. Resnik & I. D. Melamed, "Semi-Automatic Acquisition of Domain-Specific Translation Lexicons," Proceedings of the 7th ACL Conference on Applied Natural Language Processing, Washington, DC, 1997.

P. Resnik, "Evaluating Multilingual Gisting of Web Pages," UMIACS-TR-97-39, University of Maryland, 1997.

S. Sato & M. Nagao, "Toward Memory-Based Translation," Proceedings of the 13th International Conference on Computational Linguistics, 1990.
S. Sato, "CTM: An Example-Based Translation Aid System," Proceedings of the 14th International Conference on Computational Linguistics, Nantes, France, 1992.

SIGIR Workshop on Cross-linguistic Multilingual Information Retrieval, Zurich, 1996.

M. Simard, G. F. Foster & P. Isabelle, "Using Cognates to Align Sentences in Bilingual Corpora," Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, Montreal, Canada, 1992.

M. Simard & P. Plamondon, "Bilingual Sentence Alignment: Balancing Robustness and Accuracy," Proceedings of the Conference of the Association for Machine Translation in the Americas, Montreal, Canada, 1996.

R. V. V. Vidal, Applied Simulated Annealing, Springer-Verlag, Heidelberg, Germany, 1993.
Expansion of Multi-Word Terms for Indexing and Retrieval Using Morphology and Syntax*

Christian Jacquemin, Institut de Recherche en Informatique de Nantes, BP 92208, 2, chemin de la Houssinière, 44322 Nantes Cedex 3, France, jacquemin@irin.univ-nantes.fr

Judith L. Klavans, Center for Research on Information Access, Columbia University, 535 W. 114th Street, MC 1101, New York, NY 10027, USA, klavans@cs.columbia.edu

Evelyne Tzoukermann, Bell Laboratories, Lucent Technologies, 700 Mountain Avenue, 2D-448, P.O. Box 636, Murray Hill, NJ 07974, USA, evelyne@research.bell-labs.com

Abstract

A system for the automatic production of controlled index terms is presented using linguistically-motivated techniques. This includes a finite-state part of speech tagger, a derivational morphological processor for analysis and generation, and a unification-based shallow-level parser using transformational rules over syntactic patterns. The contribution of this research is the successful combination of parsing over a seed term list coupled with derivational morphology to achieve greater coverage of multi-word terms for indexing and retrieval. Final results are evaluated for precision and recall, and implications for indexing and retrieval are discussed.

1 Motivation

Terms are known to be excellent descriptors of the informational content of textual documents (Srinivasan, 1996), but they are subject to numerous linguistic variations. Terms cannot be retrieved properly with coarse text simplification techniques (e.g. stemming); their identification requires precise and efficient NLP techniques. We have developed a domain-independent system for automatic term recognition from unrestricted text. The system presented in this paper takes as input a list of controlled terms and a corpus; it detects and marks occurrences of term variants within the corpus.* The system takes as input a precompiled (automatically or manually) term list, and transforms it dynamically into a more complete term list by adding automatically generated variants. This method extends the limits of term extraction as currently practiced in the IR community: it takes into account the multiple morphological and syntactic ways linguistic concepts are expressed within language. Our approach is a unique hybrid in allowing the use of manually produced precompiled data as input, combined with fully automatic computational methods for generating term expansions. Our results indicate that we can expand term variations at least 30% within a scientific corpus.

*We would like to thank the NLP Group of Columbia University, Bell Laboratories - Lucent Technologies, and the Institut Universitaire de Technologie de Nantes for their support of the exchange visitor program for the first author. We also thank the Institut de l'Information Scientifique et Technique (INIST-CNRS) for providing us with the agricultural corpus and the associated term list, and Didier Bourigault for providing us with terms extracted from the newspaper corpus through LEXTER (Bourigault, 1993).

2 Background and Introduction

NLP techniques have been applied to extraction of information from corpora for tasks such as free indexing (extraction of descriptors from corpora) (Metzler and Haas, 1989; Schwarz, 1990; Sheridan and Smeaton, 1992; Strzalkowski, 1996), term acquisition (Smadja and McKeown, 1991; Bourigault, 1993; Justeson and Katz, 1995; Daille, 1996), or extraction of linguistic information, e.g.
support verbs (Grefenstette and Teufel, 1995) and event structure of verbs (Klavans and Chodorow, 1992). Although useful, these approaches suffer from two weaknesses which we address. First is the issue of filtering term lists; this has been dealt with by constraints on processing and by post-processing of overgenerated lists. Second is the problem of difficulties in identifying related terms across parts of speech. We address these limitations through the use of controlled indexing, that is, indexing with reference to previously available authoritative term lists, such as (NLM, 1995). Our approach is fully automatic, but permits effective combination of available resources (such as thesauri) with language processing technology, i.e., morphology, part-of-speech tagging, and syntactic analysis.

Automatic controlled indexing is a more difficult task than it may seem at first glance:

• controlled indexing on single words must account for polysemy and word disambiguation (Krovetz and Croft, 1992; Klavans, 1995).

• controlled indexing on multi-word terms must consider the numerous forms of term variations (Dunham, Pacak, and Pratt, 1978; Sparck Jones and Tait, 1984; Jacquemin, 1996).

We focus here on the multi-word task. Our system exploits a morphological processor and a transformation-based parser for the extraction of multi-word controlled indexes. The action of the system is twofold. First, a corpus is enriched by tagging each word unambiguously, and then expanded by linking each word with all its possible derivatives. For example, for English, the word genes is tagged as a plural noun and morphologically connected to genic, genetic, genome, genotoxic, genetically, etc. Second, the term list is dynamically expanded through syntactic transformations which allow the retrieval of term variants. For example, genic expressions, genes were expressed, expression of this gene, etc. are extracted as variants of gene expression.

This system relies on a full-fledged unification formalism and thus is well adapted to a fine-grained identification of terms related in syntactically and morphologically complex ways. The same system has been effectively applied both to English and French, although this paper focuses on French (see (Jacquemin, 1994) for the case of syntactic variants in English). All evaluation experiments were performed on two corpora: a training corpus [ECI] (ECI, 1989 and 1990) used for the tuning of the metagrammar, and a test corpus [AGR] (AGR, 1995) used for evaluation. [ECI] is a subset of the European Corpus Initiative data composed of 1.3 million words of the French newspaper "Le Monde"; [AGR] is a set of abstracts of scientific papers in the agricultural domain from INIST/CNRS (1.1 million words). A list of terms is associated with each corpus: the terms corresponding to [ECI] were automatically extracted by LEXTER (Bourigault, 1993) and the terms corresponding to [AGR] were extracted from the AGROVOC term list owned by INIST/CNRS.

The following section describes methods for grouping multi-word term variants; Section 4 presents a linguistically-motivated method for lexical analysis (inflectional analysis, part of speech tagging, and derivational analysis); Section 5 explains term expansion methods: constructions with a local parse through syntactic transformations preserving dependency relations; Section 6 illustrates the empirical tuning of linguistic rules; Section 7 presents an evaluation of the results in terms of precision and recall.
3 Variation in Multi-Word Terms: A Description of the Problem

Linguistic variation is a major concern in the studies on automatic indexing. Variations can be classified into three major categories:

• Syntactic (Type 1): the content words of the original term are found in the variant but the syntactic structure of the term is modified, e.g. technique for performing volumetric measurements is a Type 1 variant of measurement technique.

• Morpho-syntactic (Type 2): the content words of the original term or one of their derivatives are found in the variant. The syntactic structure of the term is also modified, e.g. electrophoresed on a neutral polyacrylamide gel is a Type 2 variant of gel electrophoresis.

• Semantic (Type 3): synonyms are found in the variant; the structure may be modified, e.g. kidney function is a Type 3 variant of renal function.

This paper deals with Type 1 and Type 2 variations. The two main approaches to multi-word term conflation in IR are text simplification and structural similarity. Text simplification refers to traditional IR algorithms such as (1) deletion of stop words, (2) normalization of single words through stemming, and (3) phrase construction through dictionary matching. (See (Lewis, Croft, and Bhandaru, 1989; Smeaton, 1992) on the exploitation of NLP techniques in IR.) These methods are generally limited. The morphological complexity of the language seems to be a decisive argument for performing rich stemming (Popović and Willett, 1992). Since we focus on French -- a language with rich declensional, inflectional and derivational morphology -- we have chosen the richest and most precise morphological analysis. This is a key component in the recognition of Type 2 variants. For structural similarity, coarse dependency-based NLP methods do not account for the fine structural relations involved in Type 1 variants. For instance, properties of flour should be linked to flour properties and properties of wheat flour, but not to properties of flour starch (examples are from (Schwarz, 1990)). The last occurrence must be rejected because starch is the argument of the head noun properties, whereas flour is the argument of the head noun properties in the original term. Without careful structural disambiguation over internal phrase structure, these important syntactic distinctions would be incorrectly overlooked.

4 Part of Speech Disambiguation and Morphology

First, inflectional morphology is performed in order to get the different analyses of word forms. Inflectional morphology is implemented with finite-state transducers on the model used for Spanish (Tzoukermann and Liberman, 1990). The theoretical principles underlying this approach are based on generative morphology (Aronoff, 1976; Selkirk, 1982). The system consists of precomputing stems, extracted from a large dictionary of French (Boyer, 1993) enhanced with newspaper corpora, a total of over 85,000 entries.

Second, a finite-state part of speech tagger (Tzoukermann, Radev, and Gale, 1995; Tzoukermann and Radev, 1996) performs the morpho-syntactic disambiguation of words. The tagger takes the output of inflectional morphological analysis and, through a combination of linguistic and statistical techniques, outputs a unique part of speech for each word in context. Reducing the ambiguity of part of speech tags eliminates ambiguity in local parsing. Furthermore, part of speech ambiguity resolution permits construction of correct derivational links.
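The first step can be pictured with a toy sketch of stem-plus-ending lookup. This is purely illustrative -- the real system compiles an 85,000-entry dictionary into finite-state transducers, and the two-entry tables below are invented for exposition; the allomorphic stems fabriqu-/fabric- echo the example discussed in the next paragraph.

```python
# Toy inflectional analyzer: precomputed stems map to (lemma, category),
# and endings map categories to inflectional features. Real coverage
# would require transducers, not Python dicts.
STEMS = {"fabriqu": ("fabriquer", "V"), "fabric": ("fabriquer", "V")}
ENDINGS = {"e": {"V": "pres-3sg"}, "ons": {"V": "pres-1pl"}}

def analyses(form):
    """Yield every (lemma, category, features) analysis of a word form."""
    for i in range(1, len(form)):
        stem, ending = form[:i], form[i:]
        if stem in STEMS and ending in ENDINGS:
            lemma, cat = STEMS[stem]
            if cat in ENDINGS[ending]:
                yield (lemma, cat, ENDINGS[ending][cat])

# list(analyses("fabrique")) -> [("fabriquer", "V", "pres-3sg")]
```

Ambiguous forms yield several analyses; it is then the tagger's job, as described above, to pick a unique part of speech in context.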
Third, derivational morphology (Tzoukermann and Jacquemin, 1997) is applied to generate morphological variants of the disambiguated words. Derivational generation is performed on the lemmas produced by the inflectional analysis and the part of speech information. Productive stripping and concatenation rules are applied to lemmas. The derived forms are expressed as tokens with feature structures.¹ For instance, the following set of constraints expresses that the noun modernisateur is morphologically related to the word modernisation.² The <ON> metarule removes the -ion suffix, and the <EUR> rule adds the nominal suffix -eur.

¹In the remainder of the paper, N is Noun, A Adjective, C Coordinating conjunction, D Determiner, P Preposition, Av Adverb, Pu Punctuation, NP Noun Phrase, and AP Adjective Phrase.
²Each lemma has a unique numeric identifier <reference>.

<cat> = N
<lemma> = 'modernisation'
<reference> = 52663
<derivation cat> = N
<derivation lemma> = 'modernisateur'
<derivation reference> = 52662
<derivation history> = '<ON<>EUR>'.

The morphological analysis performed in this study is detailed in (Tzoukermann, Klavans, and Jacquemin, 1997). It is more complete and linguistically more accurate than simple stemming for the following reasons:

• Allomorphy is accounted for by listing, for each word, the set of its possible allomorphs. Allomorphies are obtained through multiple verb stems, e.g. fabriqu-, fabric- (fabricate), or additional allomorphic rules.

• Concatenation of several suffixes is accounted for by rule ordering mechanisms. Furthermore, we have devised a method for guessing possible suffix combinations from a lexicon and a corpus. This empirical method, reported in (Jacquemin, 1997), ensures that suffixes which are related within specific domains are considered.

• Derivational morphology is built with the perspective of overgeneration. The nature of the semantic links between a word and its derivational forms is not checked, and all allomorphic alternants are generated. Selection of the correct links occurs during the subsequent term expansion process, with collocational filtering. Although étable (cowshed) is incorrectly related to établir (to establish), it is very improbable to find a context where établir co-occurs with one of the three words found in the three multi-word terms containing étable: nettoyeur (cleaner), alimentation (feeding), and litière (litter). Since we focus on multi-word term variants, overgeneration does not present a problem in our system.

5 Transformation-Based Term Expansion

The extraction of terms and their variants from corpora is performed by a unification-based parser. The controlled terms are transformed into grammar rules whose syntax is similar to PATR-II.

5.1 A Corpus-Based Method for Discovering Syntactic Transformations

We present a method for inferring transformations from a corpus for the purpose of developing a grammar of syntactic transformations for term variants. To discover the families of term variants, we first consider a notion of collocation which is less restrictive than variation. Then, we refine this notion in order to filter out genuine variants and to reject spurious ones. A Type 1 collocation of a binary term is a text window containing its content words w1 and w2, without consideration of the syntactic structure. With such a definition, any Type 1 variant is a Type 1 collocation.
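The Type 1 collocation notion is simple enough to sketch directly. The fragment below is illustrative only (the function name and tokenized-corpus representation are assumptions); the window size of 10 words on each side is the value adopted for French in the next paragraph.

```python
def type1_collocations(tokens, w1, w2, window=10):
    """Yield (i, j) index pairs where the content words w1 and w2 of a
    binary term co-occur within `window` words, with no regard to
    syntactic structure."""
    positions = {w1: [], w2: []}
    for i, tok in enumerate(tokens):
        if tok in positions:
            positions[tok].append(i)
    for i in positions[w1]:
        for j in positions[w2]:
            if i != j and abs(i - j) <= window:
                yield (i, j)
```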
Similarly, a notion of Type 2 collocation is defined based on the co-occurrence of w1 and w2 including their derivational relatives. A ±5-word window is considered as sufficient for detecting collocations in English (Martin, Al, and Van Sterkenburg, 1983). We chose a window twice as large because French is a Romance language with longer syntactic structures, due to the absence of compounding, and because we want to be sure to observe structures spanning large textual sequences. For example, the term perte au stockage (storage loss) is encountered in the [AGR] corpus as: pertes occasionnées par les insectes au sorgho stocké (literally: loss of stored sorghum due to the insects).

A linguistic classification of the collocations which are correct variants brings up the following families of variations.³

³Variations are generic linguistic functions and variants are transformations of terms by these functions.

• Type 1 variations are classified according to their syntactic structure.

1. Coordination: a coordination is the combination of two terms with a common head word or a common argument. Thus, fruits et agrumes tropicaux (literally: tropical citrus fruits or fruits) is a coordination variant of the term fruits tropicaux (tropical fruits).

2. Substitution/Modification: a substitution is the replacement of a content word by a term; a modification is the insertion of a modifier without reference to another term. For example, activité thermodynamique de l'eau (thermodynamic activity of water) is a substitution variant of activité de l'eau (activity of water) if activité thermodynamique (thermodynamic activity) is a term; otherwise, it is a modification.

3. Compounding/Decompounding: in French, most terms have a compound noun structure, i.e. a noun phrase structure where determiners are omitted, such as consommation d'oxygène (oxygen consumption). The decompounding variation is the transformation of a term with a compound structure into a noun phrase structure, such as consommation de l'oxygène (consumption of the oxygen). Compounding is the reciprocal transformation.

• Type 2 variations are classified according to the nature of the morphological derivation. Often semantic shifts are involved as well (Viegas, Gonzalez, and Longwell, 1996).

1. Noun-Noun variations: relations such as result/agent (fixation de l'azote (nitrogen fixation) / fixateurs d'azote (nitrogen fixater)) or container/content (réservoir d'eau (water reservoir) / réserve en eau (water reserve)) are found in this family.

2. Noun-Verb variations: these variations often involve semantic shifts such as process/result: fixation de l'azote / fixer l'azote (to fix nitrogen).

3. Noun-Adjective variations: the two ways to modify a noun, a prepositional phrase or an adjectival phrase, are generally semantically equivalent, e.g. variation du climat (climate variation) is a synonym of variation climatique (climatic variation).

A method for term variant extraction based on morphology and simple co-occurrences would be very imprecise. A manual observation of collocations shows that only 55% of the Type 1 collocations are correct Type 1 variants and that only 52% of the Type 2 collocations are correct Type 2 variants. It is therefore necessary to conceive a filtering method for rejecting fortuitous co-occurrences. The following section proposes a filtering system based on syntactic patterns.
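Before turning to that filtering system, the Type 2 collocation notion can be sketched by extending the earlier Type 1 fragment with derivational families. The suffix pairs below are invented placeholders that merely mimic the stripping/concatenation rules of Section 4; the real families come from the full derivational morphology.

```python
# Illustrative suffix pairs in the spirit of the <ON>/<EUR> metarules;
# e.g. ("isation", "isateur") relates modernisation to modernisateur,
# and ("ation", "er") relates stabilisation to stabiliser.
SUFFIX_PAIRS = [("isation", "isateur"), ("ation", "er")]

def family(word):
    """Return a small set of candidate derivational relatives (overgenerated,
    as in the paper: bad candidates are filtered later by collocation)."""
    relatives = {word}
    for s1, s2 in SUFFIX_PAIRS:
        if word.endswith(s1):
            relatives.add(word[: -len(s1)] + s2)
        if word.endswith(s2):
            relatives.add(word[: -len(s2)] + s1)
    return relatives

def type2_collocations(tokens, w1, w2, window=10):
    """Like Type 1 collocations, but each content word may be replaced
    by any member of its derivational family."""
    fam1, fam2 = family(w1), family(w2)
    for i, t1 in enumerate(tokens):
        if t1 in fam1:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i and tokens[j] in fam2:
                    yield (i, j)
```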
6 Empirical Rule Tuning

6.1 Syntactic Transformations for Type 1 and Type 2 variants

The concept of a grammar of syntactic transformations is motivated by well-known observations on the behavior of collocations in context (e.g. (Harris et al., 1989)). Initial rules based on surface syntax are refined through incremental experimental tuning. We have devised a grammar of French to serve as a basis for the creation of metarules for term variants. For example, the noun phrase expansion rule is⁴:

(1) NP → D? AP* N (AP | PP)*

⁴We use UNIX regular expression symbols for rules and transformations.
Accounting for variants which are not considered in our framework would require the conception of a no- vel framework, probably in cooperation with a dee- per analyzer. It is unlikely that our transformatio- nal approach with regular expressions could do much better than the results presented here. Table 2 shows some variants of AGROVOC terms extracted from the [AGR] corpus. 7 Evaluation The precision and recall of the extraction of term va- riants are given in Table 4 where precision is the ra- tio of correct variants among the variants extracted and the recall is the ratio of variants retrieved among the collocates. Results were obtained through a ma- nual inspection of 1,579 Type 1 variants, 823 Type 2 variants, 3,509 Type 1 collocates, and 2,104 Type 2 collocates extracted from the [AGR] corpus and the AGROVOC term list. These results indicate a very high level of accu- racy: 89.4% of the variants extracted by the system are correct ones. Errors generally correspond to a se- mantic discrepancy between a word and its morpho- logically derived form. For example, dlevde pour un sol (literally: high for a soil) is not a correct variant of dlevage hors sol (off-soil breeding) because dlevde and dlevage are morphologically related to two dif- ferent senses of the verb dlever:, dlevde derives from the meaning to raise whereas dlevage derives from to breed. Recall is weaker than precision because only 75.2% of the possible variants are retrieved. Improvement of Indexing through Variant Extraction For a better understanding of the importance of term expansion, we now compare term indexing with 28 Table 1: Metarules of Type 1 (Coordination) and Type 2 (Noun to Verb) Variations. Variation Term and variant Coord(N1 P2 N3) = NI (((Pu: C Av T pT D ? A T NAT P) {(ACAv T P) I(pDT ATNA T CAv T pT))D T A T) Ns. teneur en protgine (protein content) -~ teneur en eau et en protdine (protein and water content) NtoV(Nx P2 N3) ---- Vl (Av T (pT D I P) AT) N3: stabilisation de prix (price stabilization) <Vx derivation reference> = <N1 reference>. --~ stabiliser leurs prix (stabilize their prices) Table 2: Examples of Variations from [AGR]. Term Variant Type Eehange d'ion (ion exchange) Culture de eellules (cell culture) Propridtd chimique (chemical property) Gestion d ' eau (water management) Eau de surface (surface water) Huile de palme (palm oil) Initiation de bourgeon (bud initiation) dchange ionique (ionic exchange) N to A cultures primaires de cellules (primary cell cultures) Modif. propridtds physiques et chimiques Coor. (chemical and physical properties) gestion de l'eau (management of the water) Comp. eau et de l'dvaporation de surface Coor. (water and of surface evaporation [incorrect variant]) palmier d huile (palm tree [yielding oil]) N to N initier des bourgeons N to V (initiate buds) and without variant expansion. The [AGR] corpus has been indexed with the AGROVOC thesaurus in two different ways: 1. Simple indexing: Extraction of occurrences of multi-word terms without considering variation. 2. Rich indexing: Simple indexing improved with the extraction of variants of multi-word terms. Both indexings have been manually checked. Simple indexing is almost error-free but does not cover term variants. On the contrary, rich indexing is slightly less accurate but recall is much higher. 
7 Evaluation

The precision and recall of the extraction of term variants are given in Table 4, where precision is the ratio of correct variants among the variants extracted and recall is the ratio of variants retrieved among the collocates. Results were obtained through a manual inspection of 1,579 Type 1 variants, 823 Type 2 variants, 3,509 Type 1 collocates, and 2,104 Type 2 collocates extracted from the [AGR] corpus and the AGROVOC term list.

Table 4: Precision and Recall of Term Variant Extraction on [AGR].

|            | Type 1 variants         | Type 2 variants                 | Total |
|            | Subst. | Coord. | Comp. | A to N | N to A | N to N | N to V |       |
| # correct  | 808    | 228    | 404   | 19     | 60     | 273    | 471    | 2263  |
| # rejected | 87     | 26     | 26    | 7      | 5      | 28     | 90     | 269   |
| Precision  | 90.3%  | 90.0%  | 94.0% | 73.1%  | 91.6%  | 93.0%  | 84.0%  |       |
|  (by type) |        91.2%            |             86.4%               | 89.4% |
| Recall     |        75.0%            |             75.6%               | 75.2% |

These results indicate a very high level of accuracy: 89.4% of the variants extracted by the system are correct. Errors generally correspond to a semantic discrepancy between a word and its morphologically derived form. For example, élevée pour un sol (literally: high for a soil) is not a correct variant of élevage hors sol (off-soil breeding), because élevée and élevage are morphologically related to two different senses of the verb élever: élevée derives from the meaning to raise whereas élevage derives from to breed. Recall is weaker than precision because only 75.2% of the possible variants are retrieved.

Improvement of Indexing through Variant Extraction

For a better understanding of the importance of term expansion, we now compare term indexing with and without variant expansion. The [AGR] corpus has been indexed with the AGROVOC thesaurus in two different ways:

1. Simple indexing: extraction of occurrences of multi-word terms without considering variation.

2. Rich indexing: simple indexing improved with the extraction of variants of multi-word terms.

Both indexings have been manually checked. Simple indexing is almost error-free but does not cover term variants. On the contrary, rich indexing is slightly less accurate but recall is much higher. Both methods are compared by calculating the effectiveness measure (Van Rijsbergen, 1975):

(4) E_α = 1 - 1 / (α(1/P) + (1-α)(1/R))   with 0 < α < 1

P and R are precision and recall, and α is a parameter which is close to 1 if precision is preferred to recall. The value of E_α varies from 0 to 1; E_α is close to 0 when all the relevant conflations are made and when no incorrect one is made.

Table 3: Evaluation of Simple vs. Rich Indexing.

|                 | Precision | Recall | E_0.5 |
| Simple indexing | 99.7%     | 72.4%  | 16.1% |
| Rich indexing   | 97.2%     | 93.4%  | 4.7%  |

The effectiveness of rich indexing is more than three times better than the effectiveness of simple indexing. Retrieved variants increase the number of indexing items by 28.8% (17.3% Type 1 variants and 11.5% Type 2 variants). Thus, term variant extraction is a significant expansion factor for identifying morphologically and syntactically related multi-word terms in a document without introducing undesirable noise.
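The effectiveness measure in (4) is easy to reproduce; the sketch below recomputes the E_0.5 column of Table 3 (lower is better).

```python
def effectiveness(p: float, r: float, alpha: float = 0.5) -> float:
    """Van Rijsbergen's (1975) effectiveness measure, equation (4)."""
    return 1.0 - 1.0 / (alpha / p + (1.0 - alpha) / r)

# effectiveness(0.997, 0.724) ~= 0.161   (simple indexing)
# effectiveness(0.972, 0.934) ~= 0.047   (rich indexing)
```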
As for performance, the parser is fast enough for processing large amounts of textual data due to the presence of several optimization devices. On a Pentium 133 with Linux, the parser processes 18,100 words/min from an initial list of 4,300 terms.

Conclusion

This paper has proposed a syntax-based approach via morphologically derived forms for the identification and extraction of multi-word term variants. In using a list of controlled terms coupled with a syntactic analyzer, the method is more precise than traditional text simplification methods. Iterative experimental tuning has resulted in wide-coverage linguistic description incorporating the most frequent linguistic phenomena.

Evaluations indicate that, by accounting for term variation using corpus tagging, morphological derivation, and transformation-based rules, 28.8% more can be identified than with a traditional indexer which cannot account for variation. Applications to be explored in future research involve the incorporation of the system as part of the indexing module of an IR system, to be able to accurately measure improvements in system coverage as well as areas of possible degradation. We also plan to explore analysis of semantic variants through a predicative representation of term semantics. Our results so far indicate that using computational linguistic techniques for carefully controlled term expansion will permit at least a three-fold expansion for coverage over traditional indexing, which should improve retrieval results accordingly.

References

AGR, Institut National de l'Information Scientifique et Technique, Vandœuvre, France, 1995. Corpus de l'Agriculture, first edition.

Aronoff, Mark. 1976. Word Formation in Generative Grammar. Linguistic Inquiry Monographs. MIT Press, Cambridge, MA.

Bourigault, Didier. 1993. An endogeneous corpus-based method for structural noun phrase disambiguation. In Proceedings, 6th Conference of the European Chapter of the Association for Computational Linguistics (EACL'93), pages 81-86, Utrecht.

Boyer, Martin. 1993. Dictionnaire du français. Hydro-Québec, GNU General Public License, Québec, Canada.

Daille, Béatrice. 1996. Study and implementation of combined techniques for automatic extraction of terminology. In Judith L. Klavans and Philip Resnik, editors, The Balancing Act: Combining Symbolic and Statistical Approaches to Language. MIT Press, Cambridge, MA.

Dunham, George S., Milos G. Pacak, and Arnold W. Pratt. 1978. Automatic indexing of pathology data. Journal of the American Society for Information Science, 29(2):81-90.

ECI, European Corpus Initiative, 1989 and 1990. "Le Monde" Newspaper.

Grefenstette, Gregory and Simone Teufel. 1995. Corpus-based method for automatic identification of support verbs for nominalizations. In Proceedings, 7th Conference of the European Chapter of the Association for Computational Linguistics (EACL'95), pages 98-103, Dublin.

Harris, Zellig S., Michael Gottfried, Thomas Ryckman, Paul Mattick Jr, Anne Daladier, T. N. Harris, and S. Harris. 1989. The Form of Information in Science, Analysis of Immunology Sublanguage, volume 104 of Boston Studies in the Philosophy of Science. Kluwer, Boston, MA.

Jacquemin, Christian. 1994. Recycling terms into a partial parser. In Proceedings, 4th Conference on Applied Natural Language Processing (ANLP'94), pages 113-118, Stuttgart.

Jacquemin, Christian. 1996. What is the tree that we see through the window: A linguistic approach to windowing and term variation. Information Processing & Management, 32(4):445-458.

Jacquemin, Christian. 1997. Guessing morphology from terms and corpora. In Proceedings, 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'97), Philadelphia, PA.

Justeson, John S. and Slava M. Katz. 1995. Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1(1):9-27.

Klavans, Judith L., editor. 1995. AAAI Symposium on Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity. American Association for Artificial Intelligence, March.

Klavans, Judith L. and Martin S. Chodorow. 1992. Degrees of stativity: The lexical representation of verb aspect. In Proceedings of the Fourteenth International Conference on Computational Linguistics, pages 1126-1131, Nantes, France.

Krovetz, Robert and W. Bruce Croft. 1992. Lexical ambiguity and information retrieval. ACM Transactions on Information Systems, 10(2):115-141.

Lewis, David D., W. Bruce Croft, and Nehru Bhandaru. 1989. Language-oriented information retrieval. International Journal of Intelligent Systems, 4:285-318.

Martin, W.J.F., B.P.F. Al, and P.J.G. Van Sterkenburg. 1983. On the processing of a text corpus: From textual data to lexicographical information. In R.R.K. Hartman, editor, Lexicography, Principles and Practice. Academic Press, London, pages 77-87.

Metzler, Douglas P. and Stephanie W. Haas. 1989. The Constituent Object Parser: Syntactic structure matching for information retrieval. ACM Transactions on Information Systems, 7(3):292-316.

NLM, National Library of Medicine, Bethesda, MD, 1995. Unified Medical Language System, sixth experimental edition.

Popović, Mirko and Peter Willett. 1992. The effectiveness of stemming for Natural-Language access to Slovene textual data. Journal of the American Society for Information Science, 43(5):384-390.

Schwarz, Christoph. 1990. Automatic syntactic analysis of free text. Journal of the American Society for Information Science, 41(6):408-417.

Selkirk, Elisabeth O. 1982. The Syntax of Words. MIT Press, Cambridge, MA.

Sheridan, Paraic and Alan F. Smeaton. 1992. The application of morpho-syntactic language processing to effective phrase matching. Information Processing & Management, 28(3):349-369.
Smadja, Frank and Kathleen R. McKeown. 1991. Using collocations for language generation. Computational Intelligence, 7(4), December.

Smeaton, Alan F. 1992. Progress in the application of natural language processing to information retrieval tasks. The Computer Journal, 35(3):268-278.

Sparck Jones, Karen and Joel I. Tait. 1984. Automatic search term variant generation. Journal of Documentation, 40(1):50-66.

Srinivasan, Padmini. 1996. Optimal document-indexing vocabulary for Medline. Information Processing & Management, 32(5):503-514.

Strzalkowski, Tomek. 1996. Natural language information retrieval. Information Processing & Management, 31(3):397-417.

Tzoukermann, Evelyne and Christian Jacquemin. 1997. Analyse automatique de la morphologie dérivationnelle et filtrage de mots possibles. Silexicales, 1:251-260. Colloque Mots possibles et mots existants, SILEX, University of Lille III.

Tzoukermann, Evelyne, Judith L. Klavans, and Christian Jacquemin. 1997. Effective use of natural language processing techniques for automatic conflation of multi-word terms: the role of derivational morphology, part of speech tagging, and shallow parsing. In Proceedings, 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'97), Philadelphia, PA.

Tzoukermann, Evelyne and Mark Y. Liberman. 1990. A finite-state morphological processor for Spanish. In Proceedings of the Thirteenth International Conference on Computational Linguistics, pages 277-281, Helsinki, Finland.

Tzoukermann, Evelyne and Dragomir R. Radev. 1996. Using word class for part-of-speech disambiguation. In SIGDAT Workshop, pages 1-13, Copenhagen, Denmark.

Tzoukermann, Evelyne, Dragomir R. Radev, and William A. Gale. 1995. Combining linguistic knowledge and statistical learning in French part-of-speech tagging. In EACL SIGDAT Workshop, pages 51-57, Dublin, Ireland.

Van Rijsbergen, C. J. 1975. Information Retrieval. Butterworth, London.

Viegas, Evelyne, Margarita Gonzalez, and Jeff Longwell. 1996. Morpho-semantics and constructive derivational morphology: A transcategorial approach. Technical Report MCCS-96-295, Computing Research Laboratory, New Mexico State University, Las Cruces, NM.
Efficient Generation in Primitive Optimality Theory

Jason Eisner
Dept. of Computer and Information Science, University of Pennsylvania
200 S. 33rd St., Philadelphia, PA 19104-6389, USA
jeisner@linc.cis.upenn.edu

Abstract

This paper introduces primitive Optimality Theory (OTP), a linguistically motivated formalization of OT. OTP specifies the class of autosegmental representations, the universal generator Gen, and the two simple families of permissible constraints. In contrast to less restricted theories using Generalized Alignment, OTP's optimal surface forms can be generated with finite-state methods adapted from (Ellison, 1994). Unfortunately these methods take time exponential on the size of the grammar. Indeed the generation problem is shown NP-complete in this sense. However, techniques are discussed for making Ellison's approach fast in the typical case, including a simple trick that alone provides a 100-fold speedup on a grammar fragment of moderate size. One avenue for future improvements is a new finite-state notion, "factored automata," where regular languages are represented compactly via formal intersections A1 ∩ A2 ∩ ... ∩ An of FSAs.

1 Why formalize OT?

Phonology has recently undergone a paradigm shift. Since the seminal work of (Prince & Smolensky, 1993), phonologists have published literally hundreds of analyses in the new constraint-based framework of Optimality Theory, or OT. Old-style derivational analyses have all but vanished from the linguistics conferences.

The price of this creative ferment has been a certain lack of rigor. The claim for OT as Universal Grammar is not substantive or falsifiable without formal definitions of the putative Universal Grammar objects Repns, Con, and Gen (see below). Formalizing OT is necessary not only to flesh it out as a linguistic theory, but also for the sake of computational phonology. Without knowing what classes of constraints may appear in grammars, we can say only so much about the properties of the system, or about algorithms for generation, comprehension, and learning.

The central claim of OT is that the phonology of any language can be naturally described as successive filtering. In OT, a phonological grammar for a language consists of a vector C1, C2, ... Cn of soft constraints drawn from a universal fixed set Con. Each constraint in the vector is a function that scores possible output representations (surface forms):

(1) C_i : Repns → {0, 1, 2, ...}   (C_i ∈ Con)

If C_i(R) = 0, the output representation R is said to satisfy the ith constraint of the language. Otherwise it is said to violate that constraint, where the value of C_i(R) specifies the degree of violation. Each constraint yields a filter that permits only minimal violation of the constraint:

(2) Filter_i(Set) = {R ∈ Set : C_i(R) is minimal}

Given an underlying phonological input, its set of legal surface forms under the grammar -- typically of size 1 -- is just

(3) Filter_n(··· Filter_2(Filter_1(Gen(input))))

where the function Gen is fixed across languages and Gen(input) ⊆ Repns is a potentially infinite set of candidate surface forms.

In practice, each surface form in Gen(input) must contain a silent copy of input, so the constraints can score it on how closely its pronounced material matches input. The constraints also score other criteria, such as how easy the material is to pronounce.
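The successive filtering in (1)-(3) is easy to render as a minimal sketch, assuming a finite candidate set (the whole difficulty, addressed by the automaton methods below, is that Gen(input) may be infinite). The names and types here are placeholders, not part of the formalism.

```python
from typing import Callable, Iterable, List

Constraint = Callable[[object], int]   # maps a representation to C_i(R)

def filter_step(candidates: List[object], c: Constraint) -> List[object]:
    """Equation (2): keep only candidates with minimal violation."""
    best = min(c(r) for r in candidates)
    return [r for r in candidates if c(r) == best]

def optimal_outputs(gen_output: Iterable[object],
                    constraints: List[Constraint]) -> List[object]:
    """Equation (3): apply the ranked filters in order, C1 first."""
    survivors = list(gen_output)
    for c in constraints:
        survivors = filter_step(survivors, c)
    return survivors
```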
If C1 in a given language is violated by just the forms with coda consonants, then Filter_1(Gen(input)) includes only coda-free candidates -- regardless of their other demerits, such as discrepancies from input or unusual syllable structure. The remaining constraints are satisfied only as well as they can be given this set of survivors. Thus, when it is impossible to satisfy all constraints at once, successive filtering means early constraints take priority.

Questions under the new paradigm include these:

• Generation. How to implement the input-output mapping in (3)? A brute-force approach fails to terminate if Gen produces infinitely many candidates. Speakers must solve this problem. So must linguists, if they are to know what their proposed grammars predict.

• Comprehension. How to invert the input-output mapping in (3)? Hearers must solve this.

• Learning. How to induce a lexicon and a phonology like (1) for a particular language, given the kind of evidence available to child language learners?

None of these questions is well-posed without restrictions on Gen and Con. In the absence of such restrictions, computational linguists have assumed convenient ones. Ellison (1994) solves the generation problem where Gen produces a regular set of strings and Con admits all finite-state transducers that can map a string to a number in unary notation. (Thus C_i(R) = 4 if the C_i transducer outputs the string 1111 on input R.) Tesar (1995; 1996) extends this result to the case where Gen(input) is the set of parse trees for input under some context-free grammar (CFG).¹ Tesar's constraints are functions on parse trees such that C_i([A [B1 ...] [B2 ...]]) can be computed from A, B1, B2, C_i(B1), and C_i(B2). The optimal tree can then be found with a standard dynamic-programming chart parser for weighted CFGs.

¹This extension is useful for OT syntax but may have little application to phonology, since the context-free case reduces to the regular case (i.e., Ellison) unless the CFG contains recursive productions.

It is an important question whether these formalisms are useful in practice. On the one hand, are they expressive enough to describe real languages? On the other, are they restrictive enough to admit good comprehension and unsupervised-learning algorithms?

The present paper sketches primitive Optimality Theory (OTP) -- a new formalization of OT that is explicitly proposed as a linguistic hypothesis. Representations are autosegmental, Gen is trivial, and only certain simple and phonologically local constraints are allowed. I then show the following:

1. Good news: Generation in OTP can be solved attractively with finite-state methods. The solution is given in some detail.

2. Good news: OTP usefully restricts the space of grammars to be learned. (In particular, Generalized Alignment is outside the scope of finite-state or indeed context-free methods.)

3. Bad news: While OTP generation is close to linear on the size of the input form, it is NP-hard on the size of the grammar, which for human languages is likely to be quite large.

4. Good news: Ellison's algorithm can be improved so that its exponential blowup is often avoided.

2 Primitive Optimality Theory

Primitive Optimality Theory, or OTP, is a formalization of OT featuring a homogeneous output representation, extremely local constraints, and a simple, unrestricted Gen. Linguistic arguments for OTP's constraints and representations are given in (Eisner, 1997),
whereas the present description focuses on its formal properties and suitability for computational work. An axiomatic treatment is omitted for reasons of space. Despite its simplicity, OTP appears capable of capturing virtually all analyses found in the (phonological) OT literature.

2.1 Repns: Representations in OTP

To represent [mp], OTP uses not the autosegmental representation in (4a) (Goldsmith, 1976; Goldsmith, 1990) but rather the simplified autosegmental representation in (4b), which has no association lines. Similarly, (5a) is replaced by (5b). The central representational notion is that of a constituent timeline: an infinitely divisible line on which constituents are laid out. Every constituent has width and edges.

(4) a. [conventional autosegmental diagram of [mp], with voi, nas, and lab autosegments linked by association lines to C slots; garbled in this copy]
    b. [the same material on a constituent timeline: voi[ ]voi and nas[ ]nas span the first consonant, the two abutting consonants appear as C[ | ]C, and lab[ ]lab spans both consonants]

For phonetic interpretation: ]voi says to end voicing (laryngeal vibration). At the same instant, ]nas says to end nasality (raise velum).

(5) a. [autosegmental diagram of two syllables σ linked to CVCV; garbled in this copy]
    b. [the timeline version, with σ[ ... ]σ brackets ordered among the C and V constituents]

A timeline can carry the full panoply of phonological and morphological constituents -- anything that phonological constraints might have to refer to. Thus, a timeline bears not only autosegmental features like nasal gestures [nas] and prosodic constituents such as syllables [σ], but also stress marks [x], feature domains such as [ATRdom] (Cole & Kisseberth, 1994) and morphemes such as [Stem]. All these constituents are formally identical: each marks off an interval on the timeline. Let Tiers denote the fixed finite set of constituent types, {nas, σ, x, ATRdom, Stem, ...}.

It is always possible to recover the old representation (4a) from the new one (4b), under the convention that two constituents on the timeline are linked if their interiors overlap (Bird & Ellison, 1994). The interior of a constituent is the open interval that excludes its edges. Thus, lab is linked to both consonants C in (4b), but the two consonants are not linked to each other, because their interiors do not overlap.
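The linking convention can be sketched concretely. Below, a timeline is modeled (my encoding, not the paper's) as a sequence of positions, each holding a set of labeled edge brackets such as ("nas", "["); brackets at the same position overlap, and "|" abbreviates a "]" and "[" at the same point.

```python
def interiors(timeline, tier):
    """Return (start, end) position pairs for each constituent on a tier;
    "|" closes the current constituent and opens the next."""
    spans, start = [], None
    for pos, brackets in enumerate(timeline):
        if (tier, "[") in brackets or (tier, "|") in brackets:
            if start is not None:
                spans.append((start, pos))
            start = pos
        elif (tier, "]") in brackets:
            spans.append((start, pos))
            start = None
    return spans

def linked(span_a, span_b):
    """Linked iff the OPEN intervals overlap: shared edges don't count."""
    return max(span_a[0], span_b[0]) < min(span_a[1], span_b[1])

# The relevant tiers of (4b): C[ | ]C under lab[ ]lab.
timeline = [{("C", "["), ("lab", "[")},
            {("C", "|")},
            {("C", "]"), ("lab", "]")}]
# interiors(timeline, "C")   == [(0, 1), (1, 2)]
# interiors(timeline, "lab") == [(0, 2)]
# Both C's are linked to lab, but not to each other:
# linked((0, 1), (0, 2)) and linked((1, 2), (0, 2)) and not linked((0, 1), (1, 2))
```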
Thus surface nasal autosegments are bracketed with ,~as[ and ],,a~, while underlying nasal autosegments are bracketed with ,as[ and ] .... The underlining is a notational convention to denote input material. No connection is required between [nas] and [nas! except as enforced by constraints that prefer [nas] and [nas] or their edges to overlap in some way. (6) shows a candidate in which underlying [nas] has sur- faced "in place" but with rightward spreading. (6) ~o,[ ]~o~ .o,[ ].o, Here the left edges and interiors overlap, but the right edges fail to. Such overlap of interiors may be regarded as featural Input-Output Correspondence in the sense of (McCarthy & Prince, 1995). The lexicon and morphology supply to Gen an underspecified timeline--a partially ordered col- lection of input edges. The use of a partial ordering allows the lexicon and morphology to supply float- ing tones, floating morphemes and templatic mor- phemes. Given such an underspecified timeline as lexical input, Gen outputs the set of all fully specified time- lines that are consistent with it. No new input con- stituents may be added. In essence, Gen generates every way of refining the partial order of input con- stituents into a total order and decorating it freely with output constituents. Conditions such as the prosodic hierarchy (Selkirk, 1980) are enforced by universally high-ranked constraints, not by Gen. -~ 2.3 Con: The primitive constraints Having described the representations used, it is now possible to describe the constraints that evaluate them. OTP claims that Con is restricted to the following two families of primitive constraints: (7) a --* /3 ("implication"): "Each ~ temporally overlaps some ~." Scoring: Constraint(R) = number of a's in R that do not overlap any 8. (8) a 3- /3 ("clash"): "Each cr temporally overlaps no/3." Scoring: Constraint(R) = number of (a, ';3) pairs in R such that the a overlaps the/3. That is, a --~ /3 says that a's attract /3's, while a 3_ /3 says that c~'s repel/3's. These are simple and arguably natural constraints; no others are used. In each primitive constraint, cr and /3 each spec- ify a phonological event. An event is defined to be either a type of labeled edge, written e.g. ~[, or the interior (excluding edges) of a type of labeled constituent, written e.g.a. To express some con- straints that appear in real phonologies, it is also necessary to allow, a and /3 to be non-empty con- junctions and disjunctions of events. However, it appears possible to limit these cases to the forms in (9)-(10). Note that other forms, such as those in (11), can be decomposed into a sequence of two or ~The formalism is complicated slightly by the pos- sibility of deleting segments (syncope) or inserting seg- ments (epenthesis), as illustrated by the candidates be- low. (i) Syncope (CVC ~ CC): the _V is crushed to zero width so the C's can be adjacent. c[ Ic ]c ~[ 1~_ ]~ vlv (ii) Epenthesis (CC ~ CVC): the C__'s are pushed apart. c[ ]~ ~[ ]~ ~_[ ]~_ ~[ ]~ In order to Mlow adjacency of the surface consonants in (i), as expected by assimilation processes (and encour- aged by a high-ranked constraint), note that the underly- ing vowel must be allowed to have zero width--an option available to to input but not output constituents. The input representation must specify only v[ "< Iv, not v[ ~ ]v. Similarly, to allow (ii), the input representa- tion must specify only ]c, __. c_~[, not ]o, ~ c2[. 315 more constraints. 3 (9) ( c~1 and a~ and ... ) ---* (/31 or/32 or ...) 
(9) (α1 and α2 and ...) → (β1 or β2 or ...)
    Scoring: Constraint(R) = number of sets of events {A1, A2, ...} of types α1, α2, ... respectively that all overlap on the timeline and whose intersection does not overlap any event of type β1, β2, ....

(10) (α1 and α2 and ...) ⊥ (β1 and β2 and ...)
    Scoring: Constraint(R) = number of sets of events {A1, A2, ..., B1, B2, ...} of types α1, α2, ..., β1, β2, ... respectively that all overlap on the timeline. (Could also be notated: α1 ⊥ α2 ⊥ ··· ⊥ β1 ⊥ β2 ⊥ ···.)

(11) α → (β1 and β2)   [cf. α → β1 >> α → β2]
     (α1 or α2) → β    [cf. α1 → β >> α2 → β]

The unifying theme is that each primitive constraint counts the number of times a candidate gets into some bad local configuration. This is an interval on the timeline throughout which certain events (one or more specified edges or interiors) are all present and certain other events (zero or more specified edges or interiors) are all absent.

Several examples of phonologically plausible constraints, with monikers and descriptions, are given below. (Eisner, 1997) shows how to rewrite hundreds of constraints from the literature in the primitive constraint notation, and discusses the problematic case of reduplication. (Eisner, in press) gives a detailed stress typology using only primitive constraints; in particular, non-local constraints such as FTBIN, FOOTFORM, and Generalized Alignment (McCarthy & Prince, 1993) are eliminated.

(12) a. ONSET: σ[ → C[ "Every syllable starts with a consonant."
b. NONFINALITY: ]Word ⊥ ]F "The end of a word may not be footed."
c. F[ → σ[ , ]F → ]σ "Feet start and end on syllable boundaries."
d. PACKFEET: ]F → F[ "Each foot is followed immediately by another foot; i.e., minimize the number of gaps between feet. Note that the final foot, if any, will always violate this constraint."
e. NOCLASH: ]x ⊥ x[ "Two stress marks may not be adjacent."
f. PROGRESSIVEVOICING: ]voi ⊥ C[ "If the segment preceding a consonant is voiced, voicing may not stop prior to the consonant but must be spread onto it."
g. NASVOI: nas → voi "Every nasal gesture must be at least partly voiced."
h. FULLNASVOI: nas ⊥ voi[ , nas ⊥ ]voi "A nasal gesture may not be only partly voiced."
i. MAX(voi) or PARSE(voi): _voi → voi "Underlying voicing features surface."
j. DEP(voi) or FILL(voi): voi → _voi "Voicing features appear on the surface only if they are also underlying."
k. NOSPREADRIGHT(voi): voi ⊥ ]_voi "Underlying voicing may not spread rightward as in (6)."
l. NONDEGENERATE: F → μ[ "Every foot must cross at least one mora boundary μ[."
m. TAUTOMORPHEMICFOOT: F ⊥ Morph[ "No foot may cross a morpheme boundary."

3 Finite-state generation in OTP

3.1 A simple generation algorithm

Recall that the generation problem is to find the output set Sn, where

(13) a. S0 = Gen(input) ⊆ Repns
     b. S(i+1) = Filter(i+1)(Si) ⊆ Si

Since in OTP the input is a partial order of edge brackets, and Sn is a set of one or more total orders (timelines), a natural approach is to successively refine a partial order. This has merit.
However, not every Si can be represented as a single partial order, so the approach is quickly complicated by the need to encode disjunction. A simpler approach is to represent Si (as well as input and Repns) as a finite-state automaton (FSA), denoting a regular set of strings that encode timelines. The idea is essentially due to (Ellison, 1994), and can be boiled down to two lines:

(14) Ellison's algorithm (variant).
S0 = input ∩ Repns = all conceivable outputs containing input
Si+1 = BestPaths(Si ∩ Ci+1)

Each constraint Ci must be formulated as an edge-weighted FSA that scores candidates: Ci accepts any string R, on a single path, of weight Ci(R).4 BestPaths is Dijkstra's "single-source shortest paths" algorithm, a dynamic-programming algorithm that prunes away all but the minimum-weight paths in an automaton, leaving an unweighted automaton. OTP is simple enough that it can be described in this way. The next section gives a nice encoding.

4Weighted versions of the state-labeled finite automata of (Bird & Ellison, 1994) could be used instead.

316

3.2 OTP with automata

We may encode each timeline as a string over an enormous alphabet Σ. If |Tiers| = k, then each symbol in Σ is a k-tuple, whose components describe what is happening on the various tiers at a given moment. The components are drawn from a smaller alphabet Δ = { [, ], |, +, - }. Thus at any time, the ith tier may be beginning or ending a constituent ( [, ] ) or both at once ( | ), or it may be in a steady state in the interior or exterior of a constituent (+, -). At a minimum, the string must record all moments where there is an edge on some tier. If all tiers are in a steady state, the string need not use any symbols to say so. Thus the string encoding is not unique.

(15) gives an expression for all strings that correctly describe the single tier shown. (16) describes a two-tier timeline consistent with (15). Note that the brackets on the two tiers are ordered with respect to each other. Timelines like these could be assembled morphologically from one or more lexical entries (Bird & Ellison, 1994), or produced in the course of algorithm (14).

(15) -* [ +* | +* ] -*
(16) (-,-)* ([,-) (+,-)* (+,[) (+,+)* (|,+) (+,+)* (+,]) (+,-)* (+,[) (+,+)* (],])

We store timeline expressions like (16) as deterministic FSAs. To reduce the size of these automata, it is convenient to label arcs not with individual elements of Σ (which is huge) but with subsets of Σ, denoted by predicates. We use conjunctive predicates where each conjunct lists the allowed symbols on a given tier:

(17) +F, ]σ, [|+-voi   (arc label w/ 3 conjuncts)

The arc label in (17) is said to mention the tiers F, σ, voi ∈ Tiers. Such a predicate allows any symbol from Δ on the tiers it does not mention. The input FSA constrains only the input tiers. In (14) we intersect it with Repns, which constrains only the output tiers. Repns is defined as the intersection of many automata exactly like (18), called tier rules, which ensure that brackets are properly paired on a given tier such as F (foot).

(18) [tier rule for F; figure lost: a two-state automaton whose "outside" state loops on -F and whose "inside" state loops on +F, with bracket transitions pairing F[ and ]F]

Like the tier rules, the constraint automata Ci are small and deterministic and can be built automatically. Every edge has weight 0 or 1. With some care it is possible to draw each Ci with two or fewer states, and with a number of arcs proportional to the number of tiers mentioned by the constraint. Keeping the constraints small is important for efficiency, since real languages have many constraints that must be intersected.
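Before turning to the construction of the constraint automata themselves, algorithm (14) is compact enough to sketch directly. The sketch below is mine, not the paper's code; it assumes a hypothetical weighted-FSA toolkit exposing intersect and best_paths (the Dijkstra-style pruning that returns an unweighted automaton), and it takes the input automaton, the Repns tier rules, and the ranked constraint automata as given.

    def ellison_generate(input_fsa, repns_fsa, constraints, fsa):
        # (14): S0 = input ∩ Repns = all conceivable outputs containing the input.
        s = fsa.intersect(input_fsa, repns_fsa)
        for c in constraints:      # ranked order: C1 >> C2 >> ...
            # Weight the surviving candidates by the current constraint,
            # then keep only the minimum-weight paths (BestPaths).
            s = fsa.best_paths(fsa.intersect(s, c))
        return s                   # unweighted FSA accepting the optimal timelines

The repeated pruning after every constraint, rather than once at the end, is exactly the variant discussed in Section 4.2 below.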
Let us do the hardest case first. An implication constraint has the general form (9). Suppose that all the αi are interiors, not edges. Then the constraint targets intervals of the form α = α1 ∩ α2 ∩ ···. Each time such an interval ends without any βj having occurred during it, one violation is counted:

(19) [figure: a two-state automaton; the left state loops while inside α with no βj yet, seeing some βj during α moves to the right state, and an arc on which α ends with no βj seen carries weight 1] Weight-1 arcs are shown in bold; others are weight-0.

A candidate that does see a βj during an α can go and rest in the right-hand state for the duration of the α. Let us fill in the details of (19). How do we detect the end of an α? Because one or more of the αi end (], |), while all the αi either end or continue (+), so that we know we are leaving an α.5 Thus:

(20) [figure: the arcs of (19) spelled out with predicates "in all αi", "(in all αi) and (some βj)", and "(in all αi) and (some αi ends) and (no βj)"]

An unusually complex example is shown in (21). Note that to preserve the form of the predicates in (17) and keep the automaton deterministic, we need to split some of the arcs above into multiple arcs. Each βj gets its own arc, and we must also expand set differences into multiple arcs, using the scheme W − (x ∧ y ∧ z) = W ∧ ¬(x ∧ y ∧ z) = (W ∧ ¬x) ∨ (W ∧ x ∧ ¬y) ∨ (W ∧ x ∧ y ∧ ¬z).

5It is important to take ], not +, as our indication that we have been inside a constituent. This means that the timeline ([,-)(+,-)*(+,[)(+,+)*(],+)(-,+)*(-,]) cannot avoid violating a clash constraint simply by instantiating the (+,+)* part as ε. Furthermore, the ] convention means that a zero-width input constituent (more precisely, a sequence of zero-width constituents, represented as a single | symbol) will often act as if it has an interior. Thus if _V syncopates as in footnote 2, it still violates the parse constraint _V → V. This is an explicit property of OTP: otherwise, nothing that failed to parse would ever violate PARSE, because it would be gone! On the other hand, ] does not have this special role on the right hand side of →, which does not quantify universally over an interval. The consequence for zero-width constituents is that even if a zero-width _V overlaps (at the edge, say) with a surface V, the latter cannot claim on this basis alone to satisfy FILL: V → _V. This too seems like the right move linguistically, although further study is needed.

317

(21) (p and q) → (b or c[)  [figure: the deterministic weighted automaton, with arc predicates built from +p, +q and from the allowed symbols []|+- on the b and c tiers]

How about other cases? If the antecedent of an implication is not an interval, then the constraint needs only one state, to penalize moments when the antecedent holds and the consequent does not. Finally, a clash constraint α1 ⊥ α2 ⊥ ··· is identical to the implication constraint (α1 and α2 and ···) → FALSE. Clash FSAs are therefore just degenerate versions of implication FSAs, where the arcs looking for βj do not exist because they would accept no symbol. (22) shows the constraints (p and ]q) → b and p ⊥ q.

(22) [figures: the corresponding one-state automata, with arc predicates over +p and +q]

4 Computational requirements

4.1 Generalized Alignment is not finite-state

Ellison's method can succeed only on a restricted formalism such as OTP, which does not admit such constraints as the popular Generalized Alignment (GA) family of (McCarthy & Prince, 1993). A typical GA constraint is ALIGN(F, L, Word, L), which sums the number of syllables between each left foot edge F[ and the left edge of the prosodic word. Minimizing this sum achieves a kind of left-to-right iterative footing. OTP argues that such non-local, arithmetic constraints can generally be eliminated in favor of simpler mechanisms (Eisner, in press).
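Stepping back for a moment to the constructions in (19)-(22): the behavior such an implication automaton implements can also be stated procedurally, which may make the violation count easier to see. The scorer below is my own rendering, not the paper's; the three predicates over a timeline symbol correspond to the arc labels of (20).

    def score_implication(timeline, in_alpha, ends_alpha, some_beta):
        # timeline: a list of k-tuple symbols over {[, ], |, +, -}.
        # Counts alpha-intervals that end without any beta overlapping them.
        violations = 0
        state = 0                    # 0 = no beta seen in the current alpha
        for sym in timeline:
            if not in_alpha(sym):    # outside any alpha-interval
                state = 0
                continue
            if some_beta(sym):
                state = 1            # rescued: rest here for the rest of alpha
            if ends_alpha(sym):      # some alpha_i shows ] or |: leaving alpha
                if state == 0:
                    violations += 1  # the bold weight-1 arc of (19)
                state = 0
        return violations

A clash constraint is the degenerate case in which some_beta always returns False, so every completed alpha-interval scores a violation.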
Ellison's method cannot directly express the above GA constraint, even outside OTP, because it cannot compute a quadratic function 0 + 2 + 4 + ··· on a string like [σσ]F [σσ]F [σσ]F ···. Path weights in an FSA cannot be more than linear on string length. Perhaps the filtering operation of any GA constraint can be simulated with a system of finite-state constraints? No: GA is simply too powerful. The proof is suppressed here for reasons of space, but it relies on a form of the pumping lemma for weighted FSAs. The key insight is that among candidates with a fixed number of syllables and a single (floating) tone, ALIGN(σ, L, H, L) prefers candidates where the tone docks at the center. A similar argument for weighted CFGs (using two tones) shows this constraint to be too hard even for (Tesar, 1996).

4.2 Generation is NP-complete even in OTP

When algorithm (14) is implemented literally and with moderate care, using an optimizing C compiler on a 167MHz UltraSPARC, it takes fully 3.5 minutes (real time) to discover a stress pattern for the given syllable sequence.6 The automata become impractically huge due to intersections. Much of the explosion in this case is introduced at the start and can be avoided. Because Repns has 2^|Tiers| = 512 states, S0, S1, and S2 each have about 5000 states and 500,000 to 775,000 arcs. Thereafter the Si automata become smaller, thanks to the pruning performed at each step by BestPaths. This repeated pruning is already an improvement over Ellison's original algorithm (which saves pruning till the end, and so continues to grow exponentially with every new constraint). If we modify (14) further, so that each tier rule from Repns is intersected with the candidate set only when its tier is first mentioned by a constraint, then the automata are pruned back as quickly as they grow. They have about 10 times fewer states and 100 times fewer arcs, and the generation time drops to 2.2 seconds. This is a key practical trick. But neither it nor any other trick can help for all grammars, for in the worst case, the OTP generation problem is NP-hard on the number of tiers used by the grammar.

The locality of constraints does not save us here. Many NP-complete problems, such as graph coloring or bin packing, attempt to minimize some global count subject to numerous local restrictions. In the case of OTP generation, the global count to minimize is the degree of violation of Ci, and the local restrictions are imposed by C1, C2, ... Ci−1.

Proof of NP-hardness (by polytime reduction from Hamilton Path). Given G = (V(G), E(G)), an n-vertex directed graph. Put Tiers = V(G) ∪ {Stem, S}. Consider the following vector of O(n²) primitive constraints (ordered as shown):

(23) a. ∀v ∈ V(G): v[ → S[
b. ∀v ∈ V(G): ]v → ]S
c. ∀v ∈ V(G): Stem → v
d. Stem ⊥ S
e. ∀u, v ∈ V(G) s.t. uv ∉ E(G): ]u ⊥ v[
f. ]S → S[

6The grammar is taken from the OTP stress typology proposed by (Eisner, in press). It has tier rules for 9 tiers, and then spends 26 constraints on obvious universal properties of moras and syllables, followed by 6 constraints for universal properties of feet and stress marks and finally 6 substantive constraints that can be freely reranked to yield different stress systems, such as left-to-right iambs with iambic lengthening.

318

Suppose the input is simply [Stem].
Filtering Gen(input) through constraints (23a-d), we are left with just those candidates where Stem bears n (disjoint) constituents of type S, each coextensive with a constituent bearing a different label v ∈ V(G). (These candidates satisfy (23a-c) but violate (23d) n times.) (23e) says that a chain of abutting constituents [u|v|w]··· is allowed only if it corresponds to a path in G. Finally, (23f) forces the grammar to minimize the number of such chains. If the minimum is 1 (i.e., an arbitrarily selected output candidate violates (23f) only once), then G has a Hamilton path.

When confronted with this pathological case, the finite-state methods respond essentially by enumerating all possible permutations of V(G) (though with sharing of prefixes). The machine state stores, among other things, the subset of V(G) that has already been seen; so there are at least 2^|Tiers| states.

It must be emphasized that if the grammar is fixed in advance, algorithm (14) is close to linear in the size of the input form: it is dominated by a constant number of calls to Dijkstra's BestPaths method, each taking time O(|input arcs| log |input states|). There are nonetheless three reasons why the above result is important. (a) It raises the practical specter of huge constant factors (> 2^40) for real grammars. Even if a fixed grammar can somehow be compiled into a fast form for use with many inputs, the compilation itself will have to deal with this constant factor. (b) The result has the interesting implication that candidate sets can arise that cannot be concisely represented with FSAs. For if all Si were polynomial-sized in (14), the algorithm would run in polynomial time. (c) Finally, the grammar is not fixed in all circumstances: both linguists and children crucially experiment with different theories.

4.3 Work in progress: Factored automata

The previous section gave a useful trick for speeding up Ellison's algorithm in the typical case. We are currently experimenting with additional improvements along the same lines, which attempt to defer intersection by keeping tiers separate as long as possible.

The idea is to represent the candidate set Si not as a large unweighted FSA, but rather as a collection A of preferably small unweighted FSAs, called factors, each of which mentions as few tiers as possible. This collection, called a factored automaton, serves as a compact representation of ∩A. It usually has far fewer states than ∩A would if the intersection were carried out. For instance, the natural factors of S0 are input and all the tier rules (see (18)). This requires only O(|Tiers| + |input|) states, not O(2^|Tiers| · |input|).

Using factored automata helps Ellison's algorithm (14) in several ways:
• The candidate sets Si tend to be represented more compactly.
• In (14), the constraint Ci+1 needs to be intersected with only certain factors of Si.
• Sometimes Ci+1 does not need to be intersected with the input, because they do not mention any of the same tiers. Then step i+1 can be performed in time independent of input length.

Example: the input is a 43-state automaton, and C1 is F → x, which says that every foot bears a stress mark. Then to find S1 = BestPaths(S0 ∩ C1), we need only consider S0's tier rules for F and x, which require well-formed feet and well-formed stress marks, and combine them with C1 to get a new factor that requires stressed feet. No other factors need be involved.
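A factored candidate set can be represented quite directly. The sketch below is my own framing, not the paper's: each factor is stored with the set of tiers it mentions, so that the factors plausibly relevant to a constraint can be pulled out without carrying out the full intersection.

    class FactoredAutomaton:
        # Represents ∩A without performing the intersection: each factor is
        # an unweighted FSA paired with the set of tiers it mentions.
        def __init__(self):
            self.factors = []                    # list of (fsa, tier set)

        def add(self, fsa, tiers):
            self.factors.append((fsa, frozenset(tiers)))

        def relevant_to(self, constraint_tiers):
            # First approximation: factors sharing a tier with the constraint.
            # As the text goes on to show, tiers can mediate restrictions
            # through other factors, so this can be under-inclusive; the
            # projection technique below is the fully general version.
            ct = frozenset(constraint_tiers)
            return [f for f, tiers in self.factors if tiers & ct]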
The key operation in (14) is to find BestPaths(A ∩ C), where A is an unweighted factored automaton and C is an ordinary weighted FSA (a constraint). This is the best intersection problem. For concreteness let us suppose that C encodes F → x, a two-state constraint.

A naive idea is simply to add F → x to A as a new factor. However, this ignores the BestPaths step: we wish to keep just the best paths in F → x that are compatible with A. Such paths might be long and include cycles in F → x. For example, a weight-1 path would describe a chain of optimal stressed feet interrupted by a single unstressed one where A happens to block stress.

A corrected variant is to put I = ∩A and run BestPaths on I ∩ C. Let the pruned result be B. We could add B directly back to A as a new factor, but it is large. We would rather add a smaller factor B′ that has the same effect, in that I ∩ B′ = I ∩ B. (B′ will look something like the original C, but with some paths missing, some states split, and some cycles unrolled.) Observe that each state of B has the form i × c for some i ∈ I and c ∈ C. We form B′ from B by "re-merging" states i × c and i′ × c where possible, using an approach similar to DFA minimization.

Of course, this variant is not very efficient, because it requires us to find and use I = ∩A. What we really want is to follow the above idea but use a smaller I, one that considers just the relevant factors in A. We need not consider factors that will not affect the choice of paths in C above. Various approaches are possible for choosing such an I. The following technique is completely general, though it may or may not be practical.

Observe that for BestPaths to do the correct thing, I needs to reflect the sum total of A's constraints on F and x, the tiers that C mentions. More formally, we want I to be the projection of the candidate set ∩A onto just the F and x tiers. Unfortunately, these constraints are not just reflected in the factors mentioning F or x, since the allowed configurations of F and x may be mediated through

319

additional factors. As an example, there may be a factor mentioning F and φ, some of whose paths are incompatible with the input factor, because the latter allows φ only in certain places or because it only allows paths of length 14.

1. Number the tiers such that F and x are numbered 0, and all other tiers have distinct positive numbers.
2. Partition the factors of A into lists L0, L1, L2, ... Lk, according to the highest-numbered tier they mention. (Any factor that mentions no tiers at all goes onto L0.)
3. If k = 0, then return ∩Lk as our desired I.
4. Otherwise, ∩Lk exhausts tier k's ability to mediate relations among the factors. Modify the arc labels of ∩Lk so that they no longer restrict (mention) k. Then add a determinized, minimized version of the result to Lj, where j is the highest-numbered tier it now mentions.
5. Decrement k and return to step 3.

If A has k factors, this technique must perform k−1 intersections, just as if we had put I = ∩A. However, it intersperses the intersections with determinization and minimization operations, so that the automata being intersected tend not to be large. In the best case, we will have k−1 intersection-determinization-minimizations that cost O(1) apiece, rather than k−1 intersections that cost up to O(2^k) apiece.
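Steps 1-5 translate into code fairly directly. The following is a minimal sketch under stated assumptions: intersect, hide_tier (rewriting arc labels so they no longer restrict a tier), and determinize_minimize are hypothetical helpers from some FSA toolkit, and tiers_of reports the tiers a factor mentions.

    def project_onto(factors, c_tiers, tiers_of, ops):
        # Returns I, the projection of ∩factors onto the tiers in c_tiers.
        number = {t: 0 for t in c_tiers}                      # step 1
        rest = sorted({t for f in factors for t in tiers_of(f)} - set(c_tiers))
        number.update({t: i for i, t in enumerate(rest, start=1)})
        tier_with_number = {i: t for i, t in enumerate(rest, start=1)}
        lists = {i: [] for i in range(len(rest) + 1)}         # step 2
        for f in factors:
            lists[max((number[t] for t in tiers_of(f)), default=0)].append(f)
        for k in range(len(rest), 0, -1):                     # steps 4-5
            if not lists[k]:
                continue
            m = lists[k][0]
            for f in lists[k][1:]:
                m = ops.intersect(m, f)                       # ∩Lk
            m = ops.determinize_minimize(ops.hide_tier(m, tier_with_number[k]))
            j = max((number[t] for t in tiers_of(m)), default=0)
            lists[j].append(m)
        m = lists[0][0]                                       # step 3: I = ∩L0
        for f in lists[0][1:]:
            m = ops.intersect(m, f)
        return m

The sketch assumes a non-empty factor collection; everything eventually cascades down to level 0 because hiding tier k strictly lowers a merged factor's highest-numbered tier.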
5 Conclusions

Primitive Optimality Theory, or OTP, is an attempt to produce a simple, rigorous, constraint-based model of phonology that is closely fitted to the needs of working linguists. I believe it is worth study both as a hypothesis about Universal Grammar and as a formal object.

The present paper introduces the OTP formalization to the computational linguistics community. We have seen two formal results of interest, both having to do with generation of surface forms:

• OTP's generative power is low: finite-state optimization. In particular it is more constrained than theories using Generalized Alignment. This is good news for comprehension and learning.
• OTP's computational complexity, for generation, is nonetheless high: NP-complete on the size of the grammar. This is mildly unfortunate for OTP and for the OT approach in general. It remains true that for a fixed grammar, the time to do generation is close to linear on the size of the input (Ellison, 1994), which is heartening if we intend to optimize long utterances with respect to a fixed phonology.

Finally, we have considered the prospect of building a practical tool to generate optimal outputs from OT theories. We saw above how to set up the representations and constraints efficiently using deterministic finite-state automata, and how to remedy some hidden inefficiencies in the seminal work of (Ellison, 1994), achieving at least a 100-fold observed speedup. Delayed intersection and aggressive pruning prove to be important. Aggressive minimization and a more compact "factored" representation of automata may also turn out to help.

References

Bird, Steven, & T. Mark Ellison. 1994. One Level Phonology: Autosegmental representations and rules as finite automata. Computational Linguistics 20:55-90.
Cole, Jennifer, & Charles Kisseberth. 1994. An optimal domains theory of harmony. Studies in the Linguistic Sciences 24:2.
Eisner, Jason. In press. Decomposing FootForm: Primitive constraints in OT. Proceedings of SCIL 8, NYU. Published by MIT Working Papers. (Available at http://ruccs.rutgers.edu/roa.html.)
Eisner, Jason. 1997. What constraints should OT allow? Handout for talk at LSA, Chicago. (Available at http://ruccs.rutgers.edu/roa.html.)
Ellison, T. Mark. 1994. Phonological derivation in optimality theory. COLING '94, 1007-1013.
Goldsmith, John. 1976. Autosegmental phonology. Cambridge, Mass.: MIT PhD dissertation. Published 1979 by New York: Garland Press.
Goldsmith, John. 1990. Autosegmental and metrical phonology. Oxford: Blackwell Publishers.
McCarthy, John, & Alan Prince. 1993. Generalized alignment. Yearbook of Morphology, ed. Geert Booij & Jaap van Marle, pp. 79-153. Kluwer.
McCarthy, John, & Alan Prince. 1995. Faithfulness and reduplicative identity. In Jill Beckman et al., eds., Papers in Optimality Theory. UMass. Amherst: GLSA. 259-384.
Prince, Alan, & Paul Smolensky. 1993. Optimality theory: constraint interaction in generative grammar. Technical Reports of the Rutgers University Center for Cognitive Science.
Selkirk, Elizabeth. 1980. Prosodic domains in phonology: Sanskrit revisited. In Mark Aranoff and Mary-Louise Kean, eds., Juncture, pp. 107-129. Anna Libri, Saratoga, CA.
Tesar, Bruce. 1995. Computational Optimality Theory. Ph.D. dissertation, U. of Colorado, Boulder.
Tesar, Bruce. 1996. Computing optimal descriptions for Optimality Theory: Grammars with context-free position structures. Proceedings of the 34th Annual Meeting of the ACL.

320 | 1997 | 40 |
A Trainable Rule-based Algorithm for Word Segmentation David D. Palmer The MITRE Corporation 202 Burlington Rd. Bedford, MA 01730, USA palmer@mitre, org Abstract This paper presents a trainable rule-based algorithm for performing word segmen- tation. The algorithm provides a sim- ple, language-independent alternative to large-scale lexicai-based segmenters requir- ing large amounts of knowledge engineer- ing. As a stand-alone segmenter, we show our algorithm to produce high performance Chinese segmentation. In addition, we show the transformation-based algorithm to be effective in improving the output of several existing word segmentation algo- rithms in three different languages. 1 Introduction This paper presents a trainable rule-based algorithm for performing word segmentation. Our algorithm is effective both as a high-accuracy stand-alone seg- menter and as a postprocessor that improves the output of existing word segmentation algorithms. In the writing systems of many languages, includ- ing Chinese, Japanese, and Thai, words are not de- limited by spaces. Determining the word bound- aries, thus tokenizing the text, is usually one of the first necessary processing steps, making tasks such as part-of-speech tagging and parsing possible. A vari- ety of methods have recently been developed to per- form word segmentation and the results have been published widely. 1 A major difficulty in evaluating segmentation al- gorithms is that there are no widely-accepted guide- lines as to what constitutes a word, and there is therefore no agreement on how to "correctly" seg- ment a text in an unsegmented language. It is 1Most published segmentation work has been done for Chinese. For a discussion of recent Chinese segmentation work, see Sproat et al. (1996). frequently mentioned in segmentation papers that native speakers of a language do not always agree about the "correct" segmentation and that the same text could be segmented into several very different (and equally correct) sets of words by different na- tive speakers. Sproat et a1.(1996) and Wu and Fung (1994) give empirical results showing that an agree- ment rate between native speakers as low as 75% is common. Consequently, an algorithm which scores extremely well compared to one native segmentation may score dismally compared to other, equally "cor- rect" segmentations. We will discuss some other is- sues in evaluating word segmentation in Section 3.1. One solution to the problem of multiple correct segmentations might be to establish specific guide- lines for what is and is not a word in unsegmented languages. Given these guidelines, all corpora could theoretically be uniformly segmented according to the same conventions, and we could directly compare existing methods on the same corpora. While this approach has been successful in driving progress in NLP tasks such as part-of-speech tagging and pars- ing, there are valid arguments against adopting it for word segmentation. For example, since word seg- mentation is merely a preprocessing task for a wide variety of further tasks such as parsing, information extraction, and information retrieval, different seg- mentations can be useful or even essential for the different tasks. In this sense, word segmentation is similar to speech recognition, in which a system must be robust enough to adapt to and recognize the mul- tiple speaker-dependent "correct" pronunciations of words. 
In some cases, it may also be necessary to allow multiple "correct" segmentations of the same text, depending on the requirements of further processing steps. However, many algorithms use extensive domain-specific word lists and intricate name recognition routines as well as hard-coded morphological analysis modules to produce a predetermined segmentation output. Modifying or retargeting an existing segmentation algorithm to produce a different segmentation can be difficult, especially if it is not clear what and where the systematic differences in segmentation are.

It is widely reported in word segmentation papers2 that the greatest barrier to accurate word segmentation is in recognizing words that are not in the lexicon of the segmenter. Such a problem is dependent both on the source of the lexicon as well as the correspondence (in vocabulary) between the text in question and the lexicon. Wu and Fung (1994) demonstrate that segmentation accuracy is significantly higher when the lexicon is constructed using the same type of corpus as the corpus on which it is tested. We argue that rather than attempting to construct a single exhaustive lexicon or even a series of domain-specific lexica, it is more practical to develop a robust trainable means of compensating for lexicon inadequacies. Furthermore, developing such an algorithm will allow us to perform segmentation in many different languages without requiring extensive morphological resources and domain-specific lexica in any single language.

For these reasons, we address the problem of word segmentation from a different direction. We introduce a rule-based algorithm which can produce an accurate segmentation of a text, given a rudimentary initial approximation to the segmentation. Recognizing the utility of multiple correct segmentations of the same text, our algorithm also allows the output of a wide variety of existing segmentation algorithms to be adapted to different segmentation schemes. In addition, our rule-based algorithm can also be used to supplement the segmentation of an existing algorithm in order to compensate for an incomplete lexicon. Our algorithm is trainable and language independent, so it can be used with any unsegmented language.

2 Transformation-based Segmentation

The key component of our trainable segmentation algorithm is Transformation-based Error-driven Learning, the corpus-based language processing method introduced by Brill (1993a). This technique provides a simple algorithm for learning a sequence of rules that can be applied to various NLP tasks. It differs from other common corpus-based methods in several ways. For one, it is weakly statistical, but not probabilistic; transformation-based approaches consequently require far less training data than most statistical approaches. It is rule-based, but relies on machine learning to acquire the rules, rather than expensive manual knowledge engineering. The rules produced can be inspected, which is useful for gaining insight into the nature of the rule sequence and for manual improvement and debugging of the sequence. The learning algorithm also considers the entire training set at all learning steps, rather than decreasing the size of the training data as learning progresses, such as is the case in decision-tree induction (Quinlan, 1986). For a thorough discussion of transformation-based learning, see Ramshaw and Marcus (1996).

2See, for example, Sproat et al. (1996).

321
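The greedy transformation-based loop just described is compact enough to sketch directly. The sketch below is mine, not the authors' code: it assumes a candidate-rule generator and a scoring function (for instance the F-measure defined in Section 3.1 below), and it grows the rule sequence until no rule improves the training score.

    def learn_rule_sequence(initial, gold, candidate_rules, apply_rule, score):
        # initial: segmentations produced by the initial algorithm
        # gold:    the corresponding hand-segmented "goal state"
        current, sequence = list(initial), []
        while True:
            best_rule, best_score = None, score(current, gold)
            for rule in candidate_rules(current, gold):
                trial = [apply_rule(rule, seg) for seg in current]
                s = score(trial, gold)
                if s > best_score:
                    best_rule, best_score = rule, s
            if best_rule is None:        # no rule improves the training score
                return sequence
            current = [apply_rule(best_rule, seg) for seg in current]
            sequence.append(best_rule)

At application time the learned sequence is simply replayed, in order, over the initial segmentation of new text.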
Brill's work provides a proof of viability of transformation-based techniques in the form of a number of processors, including a (widely- distributed) part-of-speech tagger (Brill, 1994), a procedure for prepositional phrase attachment (Brill and Resnik, 1994), and a bracketing parser (Brill, 1993b). All of these provided performance comparable to or better than previous attempts. Transformation-based learning has also been suc- cessfully applied to text chunking (Ramshaw and Marcus, 1995), morphological disambiguation (Oflazer and Tur, 1996), and phrase parsing (Vilain and Day, 1996). 2.1 Training Word segmentation can easily be cast as a transformation-based problem, which requires an initial model, a goal state into which we wish to transform the initial model (the "gold standard"), and a series of transformations to effect this improve- ment. The transformation-based algorithm involves applying and scoring all the possible rules to train- ing data and determining which rule improves the model the most. This rule is then applied to all ap- plicable sentences, and the process is repeated until no rule improves the score of the training data. In this manner a sequence of rules is built for iteratively improving the initial model. Evaluation of the rule sequence is carried out on a test set of data which is independent of the training data. If we treat the output of an existing segmentation algorithm 3 as the initial state and the desired seg- mentation as the goal state, we can perform a series of transformations on the initial state - removing ex- traneous boundaries and inserting new boundaries - to obtain a more accurate approximation of the goal state. We therefore need only define an appropriate rule syntax for transforming this initial approxima- 3The "existing" algorithm does not need to be a large or even accurate system; the algorithm can be arbi- trarily simple as long as it assigns some form of initial segmentation. 322 tion and prepare appropriate training data. For our experiments, we obtained corpora which had been manually segmented by native or near- native speakers of Chinese and Thai. We divided the hand-segmented data randomly into training and test sets. Roughly 80% of the data was used to train the segmentation algorithm, and 20% was used as a blind test set to score the rules learned from the training data. In addition to Chinese and Thai, we also performed segmentation experiments using a large corpus of English in which all the spaces had been removed from the texts. Most of our English experiments were performed using training and test sets with roughly the same 80-20 ratio, but in Sec- tion 3.4.3 we discuss results of English experiments with different amounts of training data. Unfortu- nately, we could not repeat these experiments with Chinese and Thai due to the small amount of hand- segmented data available. 2.2 Rule syntax There are three main types of transformations which can act on the current state of an imperfect segmen- tation: • Insert - place a new boundary between two char- acters • Delete - remove an existing boundary between two characters • Slide - move an existing boundary from its cur- rent location between two characters to a loca- tion 1, 2, or 3 characters to the left or right 4 In our syntax, Insert and Delete transformations can be triggered by any two adjacent characters (a bigram) and one character to the left or right of the bigram. 
Slide transformations can be triggered by a sequence of one, two, or three characters over which the boundary is to be moved. Figure 1 enumerates the 22 segmentation transformations we define.

3 Results

With the above algorithm in place, we can use the training data to produce a rule sequence to augment an initial segmentation approximation in order to obtain a better approximation of the desired segmentation. Furthermore, since all the rules are purely character-based, a sequence can be learned for any character set and thus any language. We used our rule-based algorithm to improve the word segmentation rate for several segmentation algorithms in three languages.

4Note that a Slide transformation is equivalent to a Delete plus an Insert.

3.1 Evaluation of segmentation

Despite the number of papers on the topic, the evaluation and comparison of existing segmentation algorithms is virtually impossible. In addition to the problem of multiple correct segmentations of the same texts, the comparison of algorithms is difficult because of the lack of a single metric for reporting scores. Two common measures of performance are recall and precision, where recall is defined as the percent of words in the hand-segmented text identified by the segmentation algorithm, and precision is defined as the percentage of words returned by the algorithm that also occurred in the hand-segmented text in the same position. The component recall and precision scores are then used to calculate an F-measure (Rijsbergen, 1979), where F = (1 + β)PR/(βP + R). In this paper we will report all scores as a balanced F-measure (precision and recall weighted equally) with β = 1, such that F = 2PR/(P + R).

3.2 Chinese

For our Chinese experiments, the training set consisted of 2000 sentences (60187 words) from a Xinhua news agency corpus; the test set was a separate set of 560 sentences (18783 words) from the same corpus.5 We ran four experiments using this corpus, with four different algorithms providing the starting point for the learning of the segmentation transformations. In each case, the rule sequence learned from the training set resulted in a significant improvement in the segmentation of the test set.

3.2.1 Character-as-word (CAW)

A very simple initial segmentation for Chinese is to consider each character a distinct word. Since the average word length is quite short in Chinese, with most words containing only 1 or 2 characters,6 this character-as-word segmentation correctly identified many one-character words and produced an initial segmentation score of F=40.3. While this is a low segmentation score, this segmentation algorithm identifies enough words to provide a reasonable initial segmentation approximation. In fact, the CAW algorithm alone has been shown (Buckley et al., 1996; Broglio et al., 1996) to be adequate to be used successfully in Chinese information retrieval. Our algorithm learned 5903 transformations from the 2000 sentence training set. The 5903 transformations applied to the test set improved the score from F=40.3 to 78.1, a 63.3% reduction in the error rate.

5The Chinese texts were prepared by Tom Keenan.
6The average length of a word in our Chinese data was 1.60 characters.
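The balanced F-measure used for all scores below is easy to restate as code. The helper is my own; it assumes words are compared by their character spans, since any positional encoding would do.

    def f_measure(proposed, gold, beta=1.0):
        # proposed, gold: lists of (start, end) character spans for the
        # words of one text; a word counts as correct only on an exact
        # span match, i.e., "in the same position".
        hits = len(set(proposed) & set(gold))
        r = hits / len(gold)           # recall
        p = hits / len(proposed)       # precision
        if p == 0 and r == 0:
            return 0.0
        return (1 + beta) * p * r / (beta * p + r)   # balanced F at beta = 1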
323

Rule                     Boundary Action                              Triggering Context
AB ⇔ A B                 Insert (delete) between A and B              any
xB ⇔ x B                 Insert (delete) before any B                 any
Ay ⇔ A y                 Insert (delete) after any A                  any
ABC ⇔ A B C              Insert (delete) between A and B              any
                         AND Insert (delete) between B and C
JAB ⇔ JA B               Insert (delete) between A and B              J to left of A
¬JAB ⇔ ¬JA B             Insert (delete) between A and B              no J to left of A
ABK ⇔ A BK               Insert (delete) between A and B              K to right of B
AB¬K ⇔ A B¬K             Insert (delete) between A and B              no K to right of B
xA y ⇔ x Ay              Move from after A to before A                any
xAB y ⇔ x ABy            Move from after bigram AB to before AB       any
xABC y ⇔ x ABCy          Move from after trigram ABC to before ABC    any

Figure 1: Possible transformations. A, B, C, J, and K are specific characters; x and y can be any character. ¬J and ¬K can be any character except J and K, respectively.

This is a very surprising and encouraging result, in that, from a very naive initial approximation using no lexicon except that implicit from the training data, our rule-based algorithm is able to produce a series of transformations with a high segmentation accuracy.

3.2.2 Maximum matching (greedy) algorithm

A common approach to word segmentation is to use a variation of the maximum matching algorithm, frequently referred to as the "greedy algorithm." The greedy algorithm starts at the first character in a text and, using a word list for the language being segmented, attempts to find the longest word in the list starting with that character. If a word is found, the maximum-matching algorithm marks a boundary at the end of the longest word, then begins the same longest match search starting at the character following the match. If no match is found in the word list, the greedy algorithm simply skips that character and begins the search starting at the next character. In this manner, an initial segmentation can be obtained that is more informed than a simple character-as-word approach. We applied the maximum matching algorithm to the test set using a list of 57472 Chinese words from the NMSU CHSEG segmenter (described in the next section). This greedy algorithm produced an initial score of F=64.4.

A sequence of 2897 transformations was learned from the training set; applied to the test set, they improved the score from F=64.4 to 84.9, a 57.8% error reduction. From a simple Chinese word list, the rule-based algorithm was thus able to produce a segmentation score comparable to segmentation algorithms developed with a large amount of domain knowledge (as we will see in the next section).

This score was improved further when combining the character-as-word (CAW) and the maximum matching algorithms. In the maximum matching algorithm described above, when a sequence of characters occurred in the text, and no subset of the sequence was present in the word list, the entire sequence was treated as a single word. This often resulted in words containing 10 or more characters, which is very unlikely in Chinese. In this experiment, when such a sequence of characters was encountered, each of the characters was treated as a separate word, as in the CAW algorithm above. This variation of the greedy algorithm, using the same list of 57472 words, produced an initial score of F=82.9. A sequence of 2450 transformations was learned from the training set; applied to the test set, they improved the score from F=82.9 to 87.7, a 28.1% error reduction.
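A minimal rendering of the greedy algorithm just described, including the character-as-word fallback used in the combined experiment, might look as follows. The set-based word list and the max_len cap are my assumptions, not details from the paper.

    def greedy_segment(text, words, max_len=10, caw_fallback=False):
        # words: a set of known words.  Scan left to right, always taking
        # the longest dictionary match starting at the current position.
        out, i = [], 0
        while i < len(text):
            for j in range(min(len(text), i + max_len), i, -1):
                if text[i:j] in words:
                    out.append(text[i:j])   # longest match found
                    i = j
                    break
            else:                           # no dictionary word starts here
                if caw_fallback:
                    out.append(text[i])     # treat the character as a word
                i += 1                      # otherwise simply skip it
        return out

With caw_fallback=False this is the plain maximum matching algorithm; with caw_fallback=True it is the maximum matching + CAW variation evaluated above.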
The score produced using this variation of the maximum matching algorithm combined with a rule sequence (87.7) is nearly equal to the score produced by the NMSU segmenter (87.9) discussed in the next section.

3.2.3 NMSU segmenter

The previous three experiments showed that our rule sequence algorithm can produce excellent segmentation results given very simple initial segmentation algorithms. However, assisting in the adaptation of an existing algorithm to different segmentation schemes, as discussed in Section 1, would most likely be performed with an already accurate, fully-developed algorithm. In this experiment we demonstrate that our algorithm can also improve the output of such a system.

324

The Chinese segmenter CHSEG developed at the Computing Research Laboratory at New Mexico State University is a complete system for high-accuracy Chinese segmentation (Jin, 1994). In addition to an initial segmentation module that finds words in a text based on a list of Chinese words, CHSEG additionally contains specific modules for recognizing idiomatic expressions, derived words, Chinese person names, and foreign proper names. The accuracy of CHSEG on an 8.6MB corpus has been independently reported as F=84.0 (Ponte and Croft, 1996). (For reference, Ponte and Croft report scores of F=86.1 and 83.6 for their probabilistic Chinese segmentation algorithms trained on over 100MB of data.) On our test set, CHSEG produced a segmentation score of F=87.9.

Our rule-based algorithm learned a sequence of 1755 transformations from the training set; applied to the test set, they improved the score from 87.9 to 89.6, a 14.0% reduction in the error rate. Our rule-based algorithm is thus able to produce an improvement to an existing high-performance system.

Table 1 shows a summary of the four Chinese experiments.

325

Initial algorithm         Initial score   Rules learned   Improved score   Error reduction
Character-as-word         40.3            5903            78.1             63.3%
Maximum matching          64.4            2897            84.9             57.8%
Maximum matching + CAW    82.9            2450            87.7             28.1%
NMSU segmenter            87.9            1755            89.6             14.0%
Table 1: Chinese results.

3.3 Thai

While Thai is also an unsegmented language, the Thai writing system is alphabetic and the average word length is greater than Chinese.7 We would therefore expect that our character-based transformations would not work as well with Thai, since a context of more than one character is necessary in many cases to make many segmentation decisions in alphabetic languages.

The Thai corpus consisted of texts8 from the Thai News Agency via NECTEC in Thailand. For our experiment, the training set consisted of 3367 sentences (40937 words); the test set was a separate set of 1245 sentences (13724 words) from the same corpus.

The initial segmentation was performed using the maximum matching algorithm, with a lexicon of 9933 Thai words from the word separation filter in cttex, a Thai language LaTeX package. This greedy algorithm gave an initial segmentation score of F=48.2 on the test set.

7The average length of a word in our Thai data was 5.01 characters.
8The Thai texts were manually segmented by Jo Tyler.

Our rule-based algorithm learned a sequence of 731 transformations which improved the score from 48.2 to 63.6, a 29.7% error reduction. While the alphabetic system is obviously harder to segment, we still see a significant reduction in the segmenter error rate using the transformation-based algorithm. Nevertheless, it is doubtful that a segmentation with a score of 63.6 would be useful in too many applications, and this result will need to be significantly improved.

3.4 De-segmented English

Although English is not an unsegmented language, the writing system is alphabetic like Thai and the average word length is similar.9 Since English language resources (e.g. word lists and morphological analyzers) are more readily available, it is instructive to experiment with a de-segmented English corpus, that is, English texts in which the spaces have been removed and word boundaries are not explicitly indicated. The following shows an example of an English sentence and its de-segmented version:

About 20,000 years ago the last ice age ended.
About20,000yearsagothelasticeageended.

The results of such experiments can help us determine which resources need to be compiled in order to develop a high-accuracy segmentation algorithm in unsegmented alphabetic languages such as Thai. In addition, we are also able to provide a more detailed error analysis of the English segmentation (since the author can read English but not Thai).

Our English experiments were performed using a corpus of texts from the Wall Street Journal (WSJ). The training set consisted of 2675 sentences (64632 words) in which all the spaces had been removed; the test set was a separate set of 700 sentences (16318 words) from the same corpus (also with all spaces removed).

3.4.1 Maximum matching experiment

For an initial experiment, segmentation was performed using the maximum matching algorithm, with a large lexicon of 34272 English words compiled from the WSJ.10 In contrast to the low initial Thai score, the greedy algorithm gave an initial English segmentation score of F=73.2. Our rule-based algorithm learned a sequence of 800 transformations, which improved the score from 73.2 to 79.0, a 21.6% error reduction.

The difference in the greedy scores for English and Thai demonstrates the dependence on the word list in the greedy algorithm. For example, an experiment in which we randomly removed half of the words from the English list reduced the performance of the greedy algorithm from 73.2 to 32.3; although this reduced English word list was nearly twice the size of the Thai word list (17136 vs. 9939), the longest match segmentation utilizing the list was much lower (32.3 vs. 48.2). Successive experiments in which we removed different random sets of half the words from the original list resulted in greedy algorithm performance of 39.2, 35.1, and 35.5. Yet, despite the disparity in initial segmentation scores, the transformation sequences effect a significant error reduction in all cases, which indicates that the transformation sequences are effectively able to compensate (to some extent) for weaknesses in the lexicon. Table 2 provides a summary of the results using the greedy algorithm for each of the three languages.

9The average length of a word in our English data was 4.46 characters, compared to 5.01 for Thai and 1.60 for Chinese.
10Note that the portion of the WSJ corpus used to compile the word list was independent of both the training and test sets used in the segmentation experiments.

3.4.2 Basic morphological segmentation experiment

As mentioned above, lexical resources are more readily available for English than for Thai. We can use these resources to provide an informed initial segmentation approximation separate from the greedy algorithm.
Using our native knowledge of English as well as a short list of common English prefixes and suffixes, we developed a simple algorithm for initial segmentation of English which placed boundaries after any of the suffixes and before any of the prefixes, as well as segmenting punctuation characters. In most cases, this simple approach was able to locate only one of the two necessary boundaries for recognizing full words, and the initial score was understandably low, F=29.8. Nevertheless, even from this flawed initial approximation, our rule-based algorithm learned a sequence of 632 transformations which nearly doubled the word recall, improving the score from 29.8 to 53.3, a 33.5% error reduction.

3.4.3 Amount of training data

Since we had a large amount of English data, we also performed a classic experiment to determine the effect the amount of training data had on the ability of the rule sequences to improve segmentation. We started with a training set only slightly larger than the test set, 872 sentences, and repeated the maximum matching experiment described in Section 3.4.1. We then incrementally increased the amount of training data and repeated the experiment. The results, summarized in Table 3, clearly indicate (not surprisingly) that more training sentences produce both a longer rule sequence and a larger error reduction in the test data.

Training sentences   Rules learned   Improved score   Error reduction
872                  436             78.2             18.9%
1731                 653             78.9             21.3%
2675                 800             79.0             21.6%
3572                 902             79.4             23.1%
4522                 1015            80.3             26.5%
Table 3: English training set sizes. Initial score of test data (700 sentences) was 73.2.

3.4.4 Error analysis

Upon inspection of the English segmentation errors produced by both the maximum matching algorithm and the learned transformation sequences, one major category of errors became clear. Most apparent was the fact that the limited context transformations were unable to recover from many errors introduced by the naive maximum matching algorithm. For example, because the greedy algorithm always looks for the longest string of characters which can be a word, given the character sequence "economicsituation", the greedy algorithm first recognized "economics" and several shorter words, segmenting the sequence as "economics it u at io n". Since our transformations consider only a single character of context, the learning algorithm was unable to patch the smaller segments back together to produce the desired output "economic situation". In some cases,

326

Language             Lexicon size   Initial score   Rules learned   Improved score   Error reduction
Chinese              57472          64.4            2897            84.9             57.8%
Chinese (with CAW)   57472          82.9            2450            87.7             28.1%
Thai                 9939           48.2            731             63.6             29.7%
English              34272          73.2            800             79.0             21.6%
Table 2: Summary of maximum matching results.
4 Discussion The results of these experiments demonstrate that a transformation-based rule sequence, supplement- ing a rudimentary initial approximation, can pro- duce accurate segmentation. In addition, they are able to improve the performance of a wide range of segmentation algorithms, without requiring expen- sive knowledge engineering. Learning the rule se- quences can be achieved in a few hours and requires no language-specific knowledge. As discussed in Sec- tion 1, this simple algorithm could be used to adapt the output of an existing segmentation algorithm to different segmentation schemes as well as compen- sating for incomplete segmenter lexica, without re- quiring modifications to segmenters themselves. The rule-based algorithm we developed to improve word segmentation is very effective for segment- ing Chinese; in fact, the rule sequences combined with a very simple initial segmentation, such as that from a maximum matching algorithm, produce performance comparable to manually-developed seg- menters. As demonstrated by the experiment with the NMSU segmenter, the rule sequence algorithm can also be used to improve the output of an already highly-accurate segmenter, thus producing one of the best segmentation results reported in the litera- ture. In addition to the excellent overall results in Chi- nese segmentation, we also showed the rule sequence algorithm to be very effective in improving segmen- tation in Thai, an alphabetic language. While the scores themselves were not as high as the Chinese performance, the error reduction was nevertheless very high, which is encouraging considering the sim- ple rule syntax used. The current state of our algo- rithm, in which only three characters are considered at a time, will understandably perform better with a language like Chinese than with an alphabetic lan- guage like Thai, where average word length is much greater. The simple syntax described in Section 2.2 can, however, be easily extended to consider larger contexts to the left and the right of boundaries; this extension would necessarily come at a corresponding cost in learning speed since the size of the rule space searched during training would grow accordingly. In the future, we plan to further investigate the ap- plication of our rule-based algorithm to alphabetic languages. Acknowledgements This work would not have been possible without the assistance and encour- agement of all the members of the MITRE Natural Language Group. This paper benefited greatly from discussions with and comments from Marc Vilain, Lynette Hirschman, Sam Bayer, and the anonymous reviewers. References Eric Brill and Philip Resnik. 1994. A rule-based ap- proach to prepositional phrase attachment disam- biguation. In Proceedings of the Fifteenth Interna- tional Conference on Computational Linguistics (COLING-1994). Eric Brill. 1993a. A corpus-based approach to lan- guage learning. Ph.D. Dissertation, University of Pennsylvania, Department of Computer and In- formation Science. Eric Brill. 1993b. Transformation-based error- driven parsing. In Proceedings of the Third In- ternational Workshop on Parsing Technologies. Eric Brill. 1994. Some advances in transformation- based part of speech tagging. In Proceedings of ~he Twelfth National Conference on Artificial In- telligence, pages 722-727. 327 John Broglio, Jamie Callan, and W. Bruce Croft. 1996. Technical issues in building an information retrieval system for chinese. CIIR Technical Re- port IR-86, University of Massachusetts, Amherst. 
Chris Buckley, Amit Singhal, and Mandar Mitra. 1996. Using query zoning and correlation within SMART: TREC 5. In Proceedings of the Fifth Text Retrieval Conference (TREC-5).
Wanying Jin. 1994. Chinese segmentation disambiguation. In Proceedings of the Fifteenth International Conference on Computational Linguistics (COLING-94), Japan.
Judith L. Klavans and Philip Resnik. 1996. The Balancing Act: Combining Symbolic and Statistical Approaches to Language. MIT Press, Cambridge, MA.
Kemal Oflazer and Gokhan Tur. 1996. Combining hand-crafted rules and unsupervised learning in constraint-based morphological disambiguation. In Proceedings of the Conference on Empirical Methods in Language Processing (EMNLP).
Jay M. Ponte and W. Bruce Croft. 1996. USeg: A retargetable word segmentation procedure for information retrieval. In Proceedings of SDAIR96, Las Vegas, Nevada.
J.R. Quinlan. 1986. Induction of decision trees. Machine Learning, 1(1):81-106.
Lance Ramshaw and Mitchell Marcus. 1995. Text chunking using transformation-based learning. In Proceedings of the Third Workshop on Very Large Corpora (WVLC-3), pages 82-94.
Lance A. Ramshaw and Mitchell P. Marcus. 1996. Exploring the nature of transformation-based learning. In Klavans and Resnik (1996).
C. J. Van Rijsbergen. 1979. Information Retrieval. Butterworths, London.
Giorgio Satta and Eric Brill. 1996. Efficient transformation-based parsing. In Proceedings of the Thirty-fourth Annual Meeting of the Association for Computational Linguistics (ACL-96).
Richard W. Sproat, Chilin Shih, William Gale, and Nancy Chang. 1996. A stochastic finite-state word-segmentation algorithm for Chinese. Computational Linguistics, 22(3):377-404.
Marc Vilain and David Day. 1996. Finite-state phrase parsing by rule sequences. In Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING-96).
Marc Vilain and David Palmer. 1996. Transformation-based bracketing: Fast algorithms and experimental results. In Proceedings of the Workshop on Robust Parsing, held at ESSLLI 1996.
Dekai Wu and Pascale Fung. 1994. Improving Chinese tokenization with linguistic filters on statistical lexical acquisition. In Proceedings of the Fourth ACL Conference on Applied Natural Language Processing (ANLP94), Stuttgart, Germany.
Zimin Wu and Gwyneth Tseng. 1993. Chinese text segmentation for text retrieval: Achievements and problems. Journal of the American Society for Information Science, 44(9):532-542.

328 | 1997 | 41 |
Compiling Regular Formalisms with Rule Features into Finite-State Automata George Anton Kiraz Bell Laboratories Lucent Technologies 700 Mountain Ave. Murray Hill, NJ 07974, USA gkiraz@research, bell-labs, tom Abstract This paper presents an algorithm for the compilation of regular formalisms with rule features into finite-state automata. Rule features are incorporated into the right context of rules. This general notion can also be applied to other algorithms which compile regular rewrite rules into au- tomata. 1 Introduction The past few years have witnessed an increased in- terest in applying finite-state methods to language and speech problems. This in turn generated inter- est in devising algorithms for compiling rules which describe regular languages/relations into finite-state automata. It has long been proposed that regular formalisms (e.g., rewrite rules, two-level formalisms) accom- modate rule features which provide for finer and more elegant descriptions (Bear, 1988). Without such a mechanism, writing complex grammars (say two-level grammars for Syriac or Arabic morphol- ogy) would be difficult, if not impossible. Algo- rithms which compile regular grammars into au- tomata (Kaplan and Kay, 1994; Mohri and Sproat, 1996; Grimley-Evans, Kiraz, and Pulman, 1996) do not make use of this important mechanism. This pa- per presents a method for incorporating rule features in the resulting automata. The following Syriac example is used here, with the infamous Semitic root {ktb} 'notion of writ- ing'. The verbal pa"el measure 1, /katteb/~ 'wrote CAUSATIVE ACTIVE', is derived from the following 1Syriac verbs are classified under various measures (i.e., forms), the basic ones being p'al, pa "el and 'a/'el. 2Spirantization is ignored here; for a discussion on Syriac spirantization, see (Kiraz, 1995). morphemes: the pattern {cvcvc} 'verbal pattern', the above mentioned root, and the voealism {ae} 'ACTIVE'. The morphemes produce the following un- derlying form: 3 a e [ [ */kateb/ C V C V C J I I k t b /katteb/is derived then by the gemination, implying CAUSATIVE, of the middle consonant, [t].4 The current work assumes knowledge of regular relations (Kaplan and Kay, 1994). The following convention has been adopted. Lexical forms (e.g., morphemes in morphology) appear in braces, { }, phonological segments in square brackets, [], and elements of tuples in angle brackets, (). Section 2 describes a regular formalism with rule features. Section 3 introduce a number of mathe- matical operators used in the compilation process. Sections 4 and 5 present our algorithm. Finally, sec- tion 6 provides an evaluation and some concluding remarks. 2 Regular Formalism with Rule Features This work adopts the following notation for regular formalisms, cf. (Kaplan and Kay, 1994): r ( =~, <=,<~ } A___p (1) where T, A and p are n-way regular expressions which describe same-length relations) (An n-way regu- lar expression is a regular expression whose terms 3This analysis is along the lines of (McCarthy, 1981) - based on autosegmental phonology (Goldsmith, 1976). 4This derivation is based on the linguistic model pro- posed by (Kiraz, 1996). ~More 'user-friendly' notations which allow mapping expressions of unequal length (e.g., (Grimley-Evans, Ki- raz, and Pulman, 1996)) are mathematically equivalent to the above notation after rules are converted into same- 329 R1 k:cl:k:0 ::¢, ___ R2 b:c3:b:0 =¢. 
A compound rule takes the form

    τ { ⇒, ⇐, ⇔ } λ1 ___ ρ1; λ2 ___ ρ2; ...    (2)

To accommodate rule features, each rule may be associated with an (n − j)-tuple of feature structures, each of the form

    [attribute1=val1, attribute2=val2, ...]    (3)

i.e., an unordered set of attribute=val pairs. An attribute is an atomic label. A val can be an atom or a variable drawn from a predefined finite set of possible values.7 The ith element in the tuple corresponds to the (j + i)th element in rule expressions.

7 It is also possible to extend the above formalism in order to allow val to be a category-feature structure, though that takes us beyond finite-state power.

As a way of illustration, consider the simplified grammar in Figure 1 with j = 1.

    R1  k:c1:k:0          ⇒  ___
    R2  b:c3:b:0          ⇒  ___
    R3  a:v:0:a           ⇒  ___
    R4  e:v:0:e           ⇒  ___
    R5  t:c2:t:0 t:0:0:0  ⇔  ___   ([cat=verb], [measure=pa"el], [])
    R6  t:c2:t:0          ⇔  ___   ([cat=verb], [measure=p'al], [])
    R7  0:v:0:a           ⇔  ___ t:c2:t:0 a:v:0:a

    Figure 1: Simple Syriac Grammar

The four elements of the tuples are: surface, pattern, root, and vocalism. R1 and R2 sanction the first and third consonants, respectively. R3 and R4 sanction vowels. R5 is the gemination rule; it is only triggered if the given rule features are satisfied: [cat=verb] for the first lexical element (i.e., the pattern) and [measure=pa"el] for the second element (i.e., the root). The rule also illustrates that τ can be a sequence of tuples.

    Sublexicon   Entry       Feature Structure
    Pattern      c1vc2vc3    [cat=verb]
    Root         ktb         [measure=(p'al,pa"el)†]
    Vocalism     ae          [voice=active, measure=pa"el]
                 aa          [voice=active, measure=p'al]

    † Parentheses denote disjunction over the given values.

    Figure 2: Simple Syriac Lexicon

The derivation of /katteb/ is illustrated below:

    0    a    0    0    e    0     vocalism
    k    0    t    0    0    b     root
    c1   v    c2   0    v    c3    pattern
      1    3       5        4    2
    k    a    t    t    e    b     surface

The numbers between the lexical expressions and the surface expression denote the rules in Figure 1 which sanction the given lexical-surface mappings.

Rule features play a role in the semantics of rules: a ⇒ states that if the contexts and rule features are satisfied, the rule is triggered; a ⇐ states that if the contexts, lexical expressions and rule features are satisfied, then the rule is applied. For example, although R5 is devoid of context expressions, the rule is composite, indicating that if the root measure is pa"el, then gemination must occur, and vice versa. Note that in a compound rule, each set of contexts is associated with a feature structure of its own.

What is meant by 'rule features are satisfied'? Regular grammars which make use of rule features normally interact with a lexicon. In our model, the lexicon consists of (n − j) sublexica corresponding to the lexical elements in the formalism. Each sublexical entry is associated with a feature structure. Rule features are satisfied if they match the feature structures of the lexical entries containing the lexical expressions in τ, respectively.
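The matching just described can be pictured with a small runnable sketch. This is our own encoding, not the paper's: feature structures become Python dictionaries, and a disjunctive lexicon value such as (p'al,pa"el) becomes a tuple of atoms.

```python
# A minimal sketch (our encoding, not the paper's): feature structures
# as dicts; a lexicon value may be a tuple of atoms, denoting the
# disjunction written with parentheses in Figure 2.

def matches(rule_fs, lex_fs):
    """A rule feature structure matches a lexical one if every
    attribute=val pair of the rule is compatible with the entry."""
    for attr, val in rule_fs.items():
        lex_val = lex_fs.get(attr)
        if lex_val is None:
            return False
        allowed = lex_val if isinstance(lex_val, tuple) else (lex_val,)
        if val not in allowed:
            return False
    return True

def satisfied(rule_features, entry_features):
    """rule_features: the (n-j)-tuple of the rule, one feature structure
    per lexical element; entry_features: the feature structures of the
    lexical entries containing tau."""
    return all(matches(r, e) for r, e in zip(rule_features, entry_features))

# R5 against the lexicon of Figure 2 (pattern {c1vc2vc3}, root {ktb},
# vocalism {ae}):
r5 = ({"cat": "verb"}, {"measure": 'pa"el'}, {})
entries = ({"cat": "verb"},
           {"measure": ("p'al", 'pa"el')},
           {"voice": "active", "measure": 'pa"el'})
assert satisfied(r5, entries)   # R5 is triggered

# R6 carries an empty third feature structure, which matches trivially;
# the disjunctive root entry allows the p'al measure as well.
r6 = ({"cat": "verb"}, {"measure": "p'al"}, {})
assert satisfied(r6, entries)
```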
Consider the lexicon in Figure 2 and rule R5 with τ = t:c2:t:0 t:0:0:0 and the rule features ([cat=verb], [measure=pa"el], []). The lexical entries containing τ are {c1vc2vc3} and {ktb}, respectively. For the rule to be triggered, [cat=verb] of the rule must match with [cat=verb] of the lexical entry {c1vc2vc3}, and [measure=pa"el] of the rule must match with [measure=(p'al,pa"el)] of the lexical entry {ktb}.

As a second illustration, R6 derives the simple p'al measure, /ktab/. Note that in R5 and R6,
1. the lexical expressions in both rules (ignoring 0s) are equivalent,
2. both rules are composite, and
3. they have different surface expressions in τ.
In a traditional rewrite formalism, such rules would contradict each other. However, this is not the case here, since R5 and R6 have different rule features. The derivation of this measure is shown below (R7 completes the derivation, deleting the first vowel on the surface8):

    0    a    0    a    0     vocalism
    k    0    t    0    b     root
    c1   v    c2   v    c3    pattern
      1    7    6    3    2
    k    0    t    a    b     surface

8 Short vowels in open unstressed syllables are deleted in Syriac.

Note that in order to remain within finite-state power, both the attributes and the values in feature structures must be atomic. The formalism allows a value to be a variable drawn from a predefined finite set of possible atomic values. In the compilation process, such variables are taken as the disjunction of all possible predefined values.

Additionally, this version of rule feature matching does not cater for rules whose τ spans over two lexical forms. It is possible, of course, to avoid this limitation by having rule features match the feature structures of both lexical entries in such cases.

3 Mathematical Preliminaries

We define here a number of operations which will be used in our compilation process. If an operator Op takes a number of arguments (a1, ..., ak), the arguments are shown as a subscript, e.g. Op_(a1,...,ak); the parentheses are ignored if there is only one argument. When the operator is mentioned without reference to arguments, it appears on its own, e.g. Op. Operations which are defined on tuples of strings can be extended to sets of tuples and relations. For example, if S is a tuple of strings and Op(S) is an operator defined on S, the operator can be extended to a relation R in the following manner:

    Op(R) = { Op(S) | S ∈ R }

Definition 3.1 (Identity) Let L be a regular language. Id_n(L) = { X | X is an n-tuple of the form <x, ..., x>, x ∈ L } is the n-way identity of L.9

9 This is a generalization of the operator Id in (Kaplan and Kay, 1994).

Remark 3.1 If Id is applied to a string s, we simply write Id_n(s) to denote the n-tuple <s, ..., s>.

Definition 3.2 (Insertion) Let R be a regular relation over the alphabet Σ and let m be a set of symbols not necessarily in Σ. Insert_m(R) inserts the relation Id_n(a), for all a ∈ m, freely throughout R. Insert_m^{-1} ∘ Insert_m(R) = R removes all such instances if m is disjoint from Σ.10

10 This is similar to the operator Intro in (Kaplan and Kay, 1994).

Remark 3.2 We can define another form of Insert where the elements in m are tuples of symbols, as follows: Let R be a regular relation over the alphabet Σ and let m be a set of tuples of symbols not necessarily in Σ. Insert_m(R) inserts a, for all a ∈ m, freely throughout R.

Definition 3.3 (Substitution) Let S and S' be same-length n-tuples of strings over the alphabet (Σ × ... × Σ), let I = Id_n(a) for some a ∈ Σ, and let S = S1 I S2 I ... Sk, k ≥ 1, such that Si does not contain I, i.e. Si ∈ ((Σ × ... × Σ) − {I})*. Substitute(S', I)(S) = S1 S' S2 S' ...
Sk substitutes every occurrence of I in S with S'.

Definition 3.4 (Projection) Let S = <s1, ..., sn> be a tuple of strings. Project_i(S), for some i ∈ {1, ..., n}, denotes the tuple element si. Project_i^{-1}(S), for some i ∈ {1, ..., n}, denotes the (n−1)-tuple <s1, ..., s(i−1), s(i+1), ..., sn>.

The symbol π denotes 'feasible tuples', similar to 'feasible pairs' in traditional two-level morphology. The number of surface expressions, j, is always 1. The operator ∘ represents mathematical composition, not necessarily the composition of transducers.

4 Compilation without Rule Features

The current algorithm is motivated by the work of (Grimley-Evans, Kiraz, and Pulman, 1996).11 Intuitively, the automaton is built by three approximations as follows:

1. Accepting τs irrespective of any context.
2. Adding context restriction (⇒) constraints, making the automaton accept only the sequences which appear in contexts described by the grammar.
3. Forcing surface coercion (⇐) constraints, making the automaton accept all and only the sequences described by the grammar.

11 The subtractive approach for compiling rules into FSAs was first suggested by Edmund Grimley-Evans.

4.1 Accepting τs

Let T be the set of all τs in a regular grammar, let p be an auxiliary boundary symbol (not in the grammar's alphabets), and let p' = Id_n(p). The first approximation is described by

    Centers = p' ( ∪_{τ∈T} τ p' )*    (4)

Centers accepts the symbol p', followed by zero or more τs, each (if any) followed by p'. In other words, the machine accepts all centers described by the grammar (each center surrounded by p'), irrespective of their contexts. It is implementation dependent as to whether T includes other correspondences which are not explicitly given in rules (e.g., a set of additional feasible centers).

4.2 Context Restriction Rules

For a given compound rule, the set of relations in which τ is invalid is

    Restrict(τ) = π* τ π* − ∪_k π* λ^k τ ρ^k π*    (5)

i.e., τ in any context minus τ in all valid contexts. However, since in §4.1 above the symbol p appears freely, we need to introduce it in the above expression. The result becomes

    Restrict(τ) = Insert_{p} ∘ ( π* τ π* − ∪_k π* λ^k τ ρ^k π* )    (6)

The above expression is only valid if τ consists of only one tuple. However, to allow it to be a sequence of such tuples, as in R5 in Figure 1, it must be
1. surrounded by p' on both sides, and
2. devoid of p'.
The first condition is accomplished by simply placing p' to the left and right of τ. As for the second condition, we use an auxiliary symbol, ω, as a place-holder representing τ, introduce p freely, then substitute τ in place of ω. Formally, let ω be an auxiliary symbol (not in the grammar's alphabet), and let ω' = Id_n(ω) be a place-holder representing τ. The above expression becomes

    Restrict(τ) = Substitute(τ, ω') ∘ Insert_{p} ∘ ( π* p'ω'p' π* − ∪_k π* λ^k p'ω'p' ρ^k π* )    (7)

For all τs, we subtract this expression from the automaton under construction, yielding

    CR = Centers − ∪_τ Restrict(τ)    (8)

CR now accepts only the sequences of tuples which appear in contexts in the grammar (but including the partitioning symbol p'); however, it does not force surface coercion constraints.
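The subtractive construction of eqs. 4-8 can be made concrete with a small runnable sketch. The code below is ours, not the paper's: rather than implementing automata, it only builds the regular expressions as nested tuples; compiling them to machines is left to any FSM toolkit that supports the operators of Section 3 (Insert, Substitute, subtraction).

```python
# Runnable sketch (our notation): regular expressions as nested tuples.
def union(*xs):              return ("union",) + xs
def concat(*xs):             return ("concat",) + xs
def star(x):                 return ("star", x)
def subtract(x, y):          return ("minus", x, y)
def insert(x, syms):         return ("insert", x, syms)
def substitute(x, old, new): return ("subst", x, old, new)

PI, P, OMEGA = "pi", "p'", "w'"   # feasible tuples, boundary, place-holder

def centers(taus):
    # Eq. 4: p' followed by zero or more taus, each followed by p'.
    return concat(P, star(union(*[concat(t, P) for t in taus])))

def restrict(tau, contexts):
    # Eq. 7: tau in any context minus tau in all valid contexts; p is
    # inserted freely, and omega keeps tau itself devoid of p.
    c = concat(P, OMEGA, P)                       # p' w' p'
    anywhere = concat(star(PI), c, star(PI))
    valid = union(*[concat(star(PI), lam, c, rho, star(PI))
                    for lam, rho in contexts])
    return substitute(insert(subtract(anywhere, valid), {P}), OMEGA, tau)

def context_restriction(rules, taus):
    # Eq. 8: CR = Centers minus the union of Restrict(tau) over all rules.
    cr = centers(taus)
    for tau, contexts in rules:
        cr = subtract(cr, restrict(tau, contexts))
    return cr

# e.g. rule R1 of Figure 1: tau = k:c1:k:0 with one (empty) context pair.
print(context_restriction([("k:c1:k:0", [("", "")])], ["k:c1:k:0"]))
```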
4.3 Surface Coercion Rules

Let τ' represent the center of the rule with the correct lexical expressions and the incorrect surface expressions with respect to π*:

    τ' = ¬Project_1(τ) × Project_1^{-1}(τ)    (9)

(where ¬ denotes complementation). The coerce relation for a compound rule can be simply expressed by12

    Coerce(τ') = Insert_{p} ∘ ∪_k π* λ^k p'τ'p' ρ^k π*    (10)

The two p's surrounding τ' ensure that coercion applies on at least one center of the rule. For all such expressions, we subtract Coerce from the automaton under construction, yielding

    SC = CR − ∪_τ Coerce(τ')    (11)

SC now accepts all and only the sequences of tuples described by the grammar (but including the partitioning symbol p'). It remains only to remove all instances of p from the final machine, and to determinize and minimize it.

12 A special case can be added for epenthetic rules.

There are two methods for interpreting transducers. When interpreted as acceptors with n-tuples of symbols on each transition, they can be determinized using standard algorithms (Hopcroft and Ullman, 1979). When interpreted as a transduction that maps an input to an output, they cannot always be turned into a deterministic form (see (Mohri, 1994; Roche and Schabes, 1995)).

5 Compilation with Rule Features

This section shows how feature structures which are associated with rules and lexical entries can be incorporated into FSAs.
This will succeed if only if F1 (of rule 1) and fl (of the lexical entry) were identical. The above analysis is repeated below with the feature structures incor- porated into p. lalblcldlblS~le fl~lS~lg hli!~!f~lL~ic~t 12345 675 89105 [alblcldlO!O!e flOlOlg hlilO!OiSuqace As indicated earlier, in order to remain within finite-state power, all values in a feature structure must be instantiated. Since the formalism allows values to be variables drawn from a predefined finite set of possible values, variables entered by the user are replaced by a disjunction over all the possible values. 5.2 Compiling the Lexicon Our aim is to construct a FSA which accepts any lexical entry from the ith sublexicon on its j " ith tape. A lexical entry # (e.g., morpheme) which is asso- ciated with a feature structure ¢ is simply expressed by/~¢, where k is a (morpheme) boundary symbol which is not in the alphabet of the lexicon. The expression of sublexicon i with r entries becomes, L, -- U#%¢ ~ (13) r We also compute the feasible feature structures of sublexicon i to be z, = U (14) r and the overall feasible feature structures on all sub- lexica to be • = O" x F1 x F~ x .-- (15) The first element deletes all such features on the surface. For convenience in later expressions, we in- corporate features with ~ as follows ~¢ - ,T U • (16) The overall lexicon can be expressed by, 14 Lexicon = LI × L~ × ... (17) 14To make the lexicon describe equal-length relations, a special symbol, say 0, is inserted throughout. 333 The operator × creates one large lexicon out of all the sublexica. This lexicon can be substantially reduced by intersecting it with Proj ect~'l (~0).. If a two-level grammar is compiled into an au- tomaton, denoted by Gram, and a lexicon is com- piled into an automaton, denoted by Lez, the au- tomaton which enforces lexical constraints on the language is expressed by L = (Proj,ctl(~)* × Lex) A Gram (18) The first component above is a relation which ac- cepts any surface symbol on its first tape and the lexicon on the remaining tapes. 5.3 Compiling Rules A compound regular rule with m context-pairs and m rule features takes the form v {==~,<==,¢~} kl___pl;k2--p2;...;Am---p m [¢1, ¢2,..., ¢-~] (19) where v, A ~, and pk, 1 < k < m are like before and ck is the tuple of feature structures associated with rule k. The following modifications to the procedure given in section 4 are required. Forgetting contexts for the moment, our basic ma- chine scans sequences of tuples (from "/-), but re- quires that any sequence representing a lexical entry be followed by the entry's feature structure (from • ). This is achieved by modifying eq. 4 as follows: Centers = [.J (20) vET The expression accepts the symbols, 9', followed by zero or more occurrences of the following: 1. one or more v, each followed by ~a', and 2. a feature tuple in • followed by p'. In the second and third phases of the compilation process, we need to incorporate members of ¢I, freely throughout the contexts. For each A k, we compute the new left context fk = Insert.(A ~) (21) The right context is more complicated. It requires that the first feature structure to appear to the right of v is Ck. This is achieved by the expression, 7"~ k = Inserto(p k) CI ~'*¢k~r~ (22) The intersection with a'*¢k,'r; ensures that the first feature structure to appear to the right of v is Ck: zero or more feasible tuples, followed by Ck, followed by zero or more feasible tuples or feature structures. Now we are ready to modify the Restrict relation. 
The operator × creates one large lexicon out of all the sublexica. This lexicon can be substantially reduced by intersecting it with Project_1^{-1}(π_Φ)*.

If a two-level grammar is compiled into an automaton, denoted by Gram, and a lexicon is compiled into an automaton, denoted by Lex, the automaton which enforces lexical constraints on the language is expressed by

    L = ( Project_1(π_Φ)* × Lex ) ∩ Gram    (18)

The first component above is a relation which accepts any surface symbol on its first tape and the lexicon on the remaining tapes.

5.3 Compiling Rules

A compound regular rule with m context-pairs and m rule features takes the form

    τ { ⇒, ⇐, ⇔ } λ1 ___ ρ1; λ2 ___ ρ2; ...; λm ___ ρm  [φ1, φ2, ..., φm]    (19)

where τ, λ^k and ρ^k, 1 ≤ k ≤ m, are as before, and φ^k is the tuple of feature structures associated with the kth context pair. The following modifications to the procedure given in Section 4 are required.

Forgetting contexts for the moment, our basic machine scans sequences of tuples (from T), but requires that any sequence representing a lexical entry be followed by the entry's feature structure (from Φ). This is achieved by modifying eq. 4 as follows:

    Centers = p' ( ( ∪_{τ∈T} τ p' )+ Φ p' )*    (20)

The expression accepts the symbol p', followed by zero or more occurrences of the following:
1. one or more τs, each followed by p', and
2. a feature tuple in Φ, followed by p'.

In the second and third phases of the compilation process, we need to incorporate members of Φ freely throughout the contexts. For each λ^k, we compute the new left context

    L^k = Insert_Φ(λ^k)    (21)

The right context is more complicated. It requires that the first feature structure to appear to the right of τ is φ^k. This is achieved by the expression

    R^k = Insert_Φ(ρ^k) ∩ π* φ^k π_Φ*    (22)

The intersection with π* φ^k π_Φ* ensures that the first feature structure to appear to the right of τ is φ^k: zero or more feasible tuples, followed by φ^k, followed by zero or more feasible tuples or feature structures.

Now we are ready to modify the Restrict relation. The first component in eq. 5 becomes

    A = ( π ∪ Φ )* τ π_Φ*    (23)

The expression allows Φ to appear in the left and right contexts of τ; however, at the left of τ, the expression (π ∪ Φ) puts the restriction that the first tuple at the left end must be in π, not in Φ. The second component in eq. 5 simply becomes

    B = ∪_k π_Φ* L^k τ R^k π_Φ*    (24)

Hence, Restrict becomes (after replacing τ with ω' in eq. 23 and eq. 24)

    Restrict(τ) = Substitute(τ, ω') ∘ Insert_{p} ∘ ( A − B )    (25)

In a similar manner, the Coerce relation becomes

    Coerce(τ') = Insert_{p} ∘ ∪_k π_Φ* L^k p'τ'p' R^k π_Φ*    (26)

6 Conclusion and Future Work

The above algorithm was implemented in Prolog and was tested successfully with a number of sample grammars. In every case, the automata produced by the compiler were manually checked for correctness, and the machines were executed in generation mode to ensure that they did not overgenerate.

It was mentioned that the algorithm presented here is based on the work of (Grimley-Evans, Kiraz, and Pulman, 1996) rather than (Kaplan and Kay, 1994). It must be stated, however, that the intuitive ideas behind our compilation of rule features, viz. the incorporation of rule features in contexts, are independent of the algorithm itself and can also be applied to (Kaplan and Kay, 1994) and (Mohri and Sproat, 1996).

One issue which remains to be resolved, however, is to determine which approach for compiling rules into automata is more efficient: the standard method of (Kaplan and Kay, 1994) (also (Mohri and Sproat, 1996), which follows the same philosophy) or the subtractive approach of (Grimley-Evans, Kiraz, and Pulman, 1996). The statistics of the usage of computationally expensive operations, viz. intersection (quadratic complexity) and determinization (exponential complexity), in both algorithms are summarized in Figure 4 (KK = Kaplan and Kay, EKP = Grimley-Evans, Kiraz and Pulman).

    Algorithm   Intersection (N^2)             Determinization (2^N)
    KK          (n − 1) + 3 Σ_{i=1..n} k_i     8 Σ_{i=1..n} k_i
    EKP         1 + Σ_{i=1..n} k_i             1 + Σ_{i=1..n} k_i

    where n = number of rules in a grammar, and k_i = number of
    contexts for rule i, 1 ≤ i ≤ n.

    Figure 4: Statistics of Complex Operations

Note that complementation requires determinization, and subtraction requires one intersection and one complementation, since

    A − B = A ∩ ¬B    (27)

Although statistically speaking the number of operations used in (Grimley-Evans, Kiraz, and Pulman, 1996) is less than the ones used in (Kaplan and Kay, 1994), only an empirical study can resolve the issue, as the following example illustrates. Consider the expression

    A = ¬( a1 ∪ a2 ∪ ... ∪ an )    (28)

and the De Morgan's law equivalent

    B = ¬a1 ∩ ¬a2 ∩ ... ∩ ¬an    (29)

The former requires only one complement, which results in one determinization (since an automaton must be determinized before its complement is computed). The latter not only requires n complements, but also n − 1 intersections. The worst-case analysis clearly indicates that computing A is much less expensive than computing B. Empirically, however, this is not the case when n is large and the ai are small, which is usually the case in rewrite rules. The reason lies in the fact that the determinization algorithm in the former expression applies to a machine which is by far larger than the small individual machines present in the latter expression.15

15 This important difference was pointed out by one of the anonymous reviewers, whom I thank.
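To make the comparison concrete, the following toy sketch (ours, not from the paper) merely tallies the operation counts stated above for the two sides of the De Morgan equivalence; it encodes the worst-case analysis, not any measured behaviour.

```python
# Our illustration of the counts just discussed: one side trades a
# single determinization of a large machine for many complements and
# intersections over small machines.
def cost_outer_complement(n):
    # A = not(a1 u ... u an): n-1 unions, one complement, hence one
    # determinization -- but of the big union machine.
    return {"unions": n - 1, "complements": 1,
            "determinizations": 1, "intersections": 0}

def cost_de_morgan(n):
    # B = not(a1) n ... n not(an): n complements (each with its own
    # determinization of a small machine) and n-1 intersections.
    return {"unions": 0, "complements": n,
            "determinizations": n, "intersections": n - 1}

print(cost_outer_complement(10))
print(cost_de_morgan(10))
```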
Another aspect of rule features concerns the morphotactic unification of lexical entries. This is best dealt with at the morphotactic level, using a unification-based formalism.

Acknowledgments

I would like to thank Richard Sproat for commenting on an earlier draft. Many of the anonymous reviewers' comments proved very useful. Mistakes, as always, remain mine.

References

Bear, J. 1988. Morphology with two-level rules and negative rule features. In COLING-88: Papers Presented to the 12th International Conference on Computational Linguistics, volume 1, pages 28-31.

Goldsmith, J. 1976. Autosegmental Phonology. Ph.D. thesis, MIT. Published as Autosegmental and Metrical Phonology, Oxford, 1990.

Grimley-Evans, E., G. Kiraz, and S. Pulman. 1996. Compiling a partition-based two-level formalism. In COLING-96: Papers Presented to the 16th International Conference on Computational Linguistics.

Hopcroft, J. and J. Ullman. 1979. Introduction to Automata Theory, Languages, and Computation. Addison-Wesley.

Kaplan, R. and M. Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-78.

Kiraz, G. 1995. Introduction to Syriac Spirantization. Bar Hebraeus Verlag, The Netherlands.

Kiraz, G. 1996. Syriac morphology: From a linguistic description to a computational implementation. In R. Lavenant, editor, VIItum Symposium Syriacum 1996, forthcoming in Orientalia Christiana Analecta. Pontificio Institutum Studiorum Orientalium.

Kiraz, G. Forthcoming. Computational Approach to Nonlinear Morphology: with Emphasis on Semitic Languages. Cambridge University Press.

McCarthy, J. 1981. A prosodic theory of nonconcatenative morphology. Linguistic Inquiry, 12(3):373-418.

Mohri, M. 1994. On some applications of finite-state automata theory to natural language processing. Technical report, Institut Gaspard Monge.

Mohri, M. and R. Sproat. 1996. An efficient compiler for weighted rewrite rules. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, pages 231-8.

Roche, E. and Y. Schabes. 1995. Deterministic part-of-speech tagging with finite-state transducers. Computational Linguistics, 21(2):227-53.
The Complexity of Recognition of Linguistically Adequate Dependency Grammars

Peter Neuhaus    Norbert Bröker
Computational Linguistics Research Group
Freiburg University, Friedrichstraße 50
D-79098 Freiburg, Germany
email: {neuhaus,nobi}@coling.uni-freiburg.de

Abstract

Results of computational complexity exist for a wide range of phrase structure-based grammar formalisms, while there is an apparent lack of such results for dependency-based formalisms. We here adapt a result on the complexity of ID/LP-grammars to the dependency framework. Contrary to previous studies on heavily restricted dependency grammars, we prove that recognition (and thus, parsing) of linguistically adequate dependency grammars is NP-complete.

1 Introduction

The introduction of dependency grammar (DG) into modern linguistics is marked by Tesnière (1959). His conception addressed didactic goals and, thus, did not aim at formal precision, but rather at an intuitive understanding of semantically motivated dependency relations. An early formalization was given by Gaifman (1965), who showed the generative capacity of DG to be (weakly) equivalent to standard context-free grammars. Given this equivalence, interest in DG as a linguistic framework diminished considerably, although many dependency grammarians view Gaifman's conception as an unfortunate one (cf. Section 2). To our knowledge, there has been no other formal study of DG. This is reflected by a recent study (Lombardo & Lesmo, 1996), which applies the Earley parsing technique (Earley, 1970) to DG, and thereby achieves cubic time complexity for the analysis of DG. In their discussion, Lombardo & Lesmo express their hope that slight increases in generative capacity will correspond to equally slight increases in computational complexity. It is this claim that we challenge here.

After motivating non-projective analyses for DG, we investigate various variants of DG and identify the separation of dominance and precedence as a major part of current DG theorizing. Thus, no current variant of DG (not even Tesnière's original formulation) is compatible with Gaifman's conception, which seems to be motivated by formal considerations only (viz., the proof of equivalence). Section 3 advances our proposal, which cleanly separates dominance and precedence relations. This is illustrated in the fourth section, where we give a simple encoding of an NP-complete problem in a discontinuous DG. Our proof of NP-completeness, however, does not rely on discontinuity, but only requires unordered trees. It is adapted from a similar proof for unordered context-free grammars (UCFGs) by Barton (1985).

2 Versions of Dependency Grammar

The growing interest in the dependency concept (which roughly corresponds to the θ-roles of GB, subcategorization in HPSG, and the so-called domain of locality of TAG) again raises the issue whether non-lexical categories are necessary for linguistic analysis. After reviewing several proposals in this section, we argue in the next section that word order, the description of which is the most prominent difference between PSGs and DGs, can adequately be described without reference to non-lexical categories.

Standard PSG trees are projective, i.e., no branches cross when the terminal nodes are projected onto the input string. In contrast to PSG approaches, DG requires non-projective analyses. As DGs are restricted to lexical nodes, one cannot, e.g., describe the so-called unbounded dependencies without giving up projectivity. First, the categorial approach employing partial constituents (Huck, 1988; Hepple, 1990) is not available, since there are no phrasal categories. Second, the coindexing (Haegeman, 1994) or structure-sharing (Pollard & Sag, 1994) approaches are not available, since there are no empty categories.

Consider the extracted NP in "Beans, I know John likes" (cf. also Fig. 1 in Section 3). A projective tree would require "Beans" to be connected to either "I" or "know", none of which is conceptually directly related to "Beans". It is "likes" that determines syntactic
As DGs are restricted to lexical nodes, one cannot, e.g., describe the so-called unbounded dependencies without giving up projectiv- ity. First, the categorial approach employing partial con- stituents (Huck, 1988; Hepple, 1990) is not available, since there are no phrasal categories. Second, the coin- dexing (Haegeman, 1994) or structure-sharing (Pollard & Sag, 1994) approaches are not available, since there are no empty categories. Consider the extracted NP in "Beans, I know John likes" (cf. also to Fig.1 in Section 3). A projective tree would require "Beans" to be connected to either "I" or "know" - none of which is conceptually directly related to "Beans". It is "likes" that determines syntactic fea- 337 tures of "Beans" and which provides a semantic role for it. The only connection between "know" and "Beans" is that the finite verb allows the extraction of "Beans", thus defining order restrictions for the NP. This has led some DG variants to adopt a general graph structure with mul- tiple heads instead of trees. We will refer to DGs allow- ing non-projective analyses as discontinuous DGs. Tesni~re (1959) devised a bipartite grammar theory which consists of a dependency component and a trans- lation component (' translation' used in a technical sense denoting a change of category and grammatical func- tion). The dependency component defines four main cat- egories and possible dependencies between them. What is of interest here is that there is no mentioning of order in TesniSre's work. Some practitioneers of DG have al- lowed word order as a marker for translation, but they do not prohibit non-projective trees. Gaifman (1965) designed his DG entirely analogous to context-free phrase structure grammars. Each word is associated with a category, which functions like the non-terminals in CFG. He then defines the following rule format for dependency grammars: (1) X(Y,,... , Y~, ,, Y~+I,..., Y,,) This rule states that a word of category X governs words of category Y1,... , Yn which occur in the given order. The head (the word of category X) must occur between the i-th and the (i + 1)-th modifier. The rule can be viewed as an ordered tree of depth one with node labels. Trees are combined through the identification of the root of one tree with a leaf of identical category of another tree. This formalization is restricted to projective trees with a completely specified order of sister nodes. As we have argued above, such a formulation cannot capture se- mantically motivated dependencies. 2.1 Current Dependency Grammars Today's DGs differ considerably from Gaifman's con- ception, and we will very briefly sketch various order de- scriptions, showing that DGs generally dissociate dom- inance and precedence by some mechanism. All vari- ants share, however, the rejection of phrasal nodes (al- though phrasal features are sometimes allowed) and the introduction of edge labels (to distinguish different de- pendency relations). Meaning-Text Theory (Mer 5uk, 1988) assumes seven strata of representation. The rules mapping from the un- ordered dependency trees of surface-syntactic represen- tations onto the annotated lexeme sequences of deep- morphological representations include global ordering rules which allow discontinuities. These rules have not yet been formally specified (Mel' 5uk & Pertsov, 1987, p. 187f), but see the proposal by Rambow & Joshi (1994). Word Grammar (Hudson, 1990) is based on general graphs. 
The ordering of two linked words is specified together with their dependency relation, as in the proposition "object of verb succeeds it". Extraction is analyzed by establishing another dependency, visitor, between the verb and the extractee, which is required to precede the verb, as in "visitor of verb precedes it". Resulting inconsistencies, e.g. in the case of an extracted object, are not resolved, however.

Lexicase (Starosta, 1988; 1992) employs complex feature structures to represent lexical and syntactic entities. Its word order description is much like that of Word Grammar (at least at some level of abstraction), and shares the above inconsistency.

Dependency Unification Grammar (Hellwig, 1988) defines a tree-like data structure for the representation of syntactic analyses. Using morphosyntactic features with special interpretations, a word defines abstract positions into which modifiers are mapped. Partial orderings and even discontinuities can thus be described by allowing a modifier to occupy a position defined by some transitive head. The approach cannot restrict discontinuities properly, however.

Slot Grammar (McCord, 1990) employs a number of rule types, some of which are exclusively concerned with precedence. So-called head/slot and slot/slot ordering rules describe the precedence in projective trees, referring to arbitrary predicates over heads and modifiers. Extractions (i.e., discontinuities) are merely handled by a mechanism built into the parser.

This brief overview of current DG flavors shows that various mechanisms (global rules, general graphs, procedural means) are generally employed to lift the limitation to projective trees. Our own approach presented below improves on these proposals because it allows the lexicalized and declarative formulation of precedence constraints. The necessity of non-projective analyses in DG results from examples like "Beans, I know John likes" and the restriction to lexical nodes, which prohibits gap-threading and other mechanisms tied to phrasal categories.

3 A Dependency Grammar with Word Order Domains

We now sketch a minimal DG that incorporates only word classes and word order as descriptional dimensions. The separation of dominance and precedence presented here grew out of our work on German, and retains the local flavor of dependency specification, while at the same time covering arbitrary discontinuities. It is based on a (modal) logic with model-theoretic interpretation, which is presented in more detail in (Bröker, 1997).
Second, the order domains must be hierarchically ordered by set inclusion, i.e., be projective. Third, a domain (e.g., dl in Fig.l) can be constrained to contain at most one partial depen- dency tree. l We will write singleton domains as "_", while other domains are represented by "-". The prece- dence of words within domains is described by binary precedence restrictions, which must be locally satisfied in the domain with which they are associated. Consid- ering Fig. 1 again, a precedence restriction for "likes" to precede its object has no effect, since the two are in dif- ferent domains. The precedence constraints are formu- lated as a binary relation "~" over dependency labels, including the special symbol "self" denoting the head. Discontinuities can easily be characterized, since a word may be contained in any domain of (nearly) any of its transitive heads. If a domain of its direct head contains the modifier, a continuous dependency results. If, how- ever, a modifier is placed in a domain of some transitive head (as "Beans" in Fig. 1), discontinuities occur. Bound- ing effects on discontinuities are described by specifying that certain dependencies may not be crossed. 2 For the tFor details, cf. (Br6ker, 1997). 2German data exist that cannot be captured by the (more common) bounding of discontinuities by nodes of a certain purpose of this paper, we need not formally introduce the bounding condition, though. A sample domain structure is given in Fig.l, with two domains dl and d2 associated with the governing verb "know" (solid) and one with the embedded verb "likes" (dashed). dl may contain only one partial dependency tree, the extracted phrase, d2 contains the rest of the sen- tence. Both domains are described by (2), where the do- main sequence is represented as "<<". d2 contains two precedence restrictions which require that "know" (rep- resented by self) must follow the subject (first precedence constraint) and precede the object (second precedence constraint). (2) __ { } << ----. { (subject -.< self), (self --< object)} 3.2 Formal Description The following notation is used in the proof. A lexicon Lez maps words from an alphabet E to word classes, which in turn are associated with valencies and domain sequences. The set C of word classes is hierarchically ordered by a subclass relation (3) isaccCxC A word w of class c inherits the valencies (and domain sequence) from c, which are accessed by (4) w.valencies A valency (b, d, c) describes a possible dependency re- lation by specifying a flag b indicating whether the de- pendency may be discontinuous, the dependency name d (a symbol), and the word class c E C of the modifier. A word h may govern a word m in dependency d if h de- fines a valency (b, d, c) such that (m isao c) and m can consistently be inserted into a domain of h (for b = -) or a domain of a transitive head of h (for b = +). This condition is written as (5) governs(h,d,m) A DG is thus characterized by (6) G = (Lex, C, isac, E) The language L(G) includes any sequence of words for which a dependency tree can be constructed such that for each word h governing a word m in dependency d, governs(h, d, m) holds. The modifier of h in dependency d is accessed by (7) h.mod(d) category. 339 4 The complexity of DG Recognition Lombardo & Lesmo (1996, p.728) convey their hope that increasing the flexibility of their conception of DG will " ... imply the restructuring of some parts of the rec- ognizer, with a plausible increment of the complexity". 
We will show that adding a little (linguistically required) flexibility might well render recognition A/P-complete. To prove this, we will encode the vertex cover problem, which is known to be A/P-complete, in a DG. 4.1 Encoding the Vertex Cover Problem in Discontinuous DG A vertex cover of a finite graph is a subset of its ver- tices such that (at least) one end point of every edge is a member of that set. The vertex cover problem is to decide whether for a given graph there exists a vertex cover with at most k elements. The problem is known to be A/7~-complete (Garey & Johnson, 1983, pp.53-56). Fig. 2 gives a simple example where {c, d} is a vertex cover. a b X d Figure 2: Simple graph with vertex cover {c, d}. A straightforward encoding of a solution in the DG formalism introduced in Section 3 defines a root word s of class S with k valencies for words of class O. O has IWl subclasses denoting the nodes of the graph. An edge is represented by two linked words (one for each end point) with the governing word corresponding to the node included in the vertex cover. The subordinated word is assigned the class R, while the governing word is assigned the subclass of O denoting the node it repre- sents. The latter word classes define a valency for words of class R (for the other end point) and a possibly discon- tinuous valency for another word of the identical class (representing the end point of another edge which is in- cluded in the vertex cover). This encoding is summarized in Table 1. The input string contains an initial s and for each edge the words representing its end points, e.g. "saccdadb- dcb" for our example. If the grammar allows the con- struction of a complete dependency tree (cf. Fig. 3 for one solution), this encodes a solution of the vertex cover problem. $ % I l l l l l l l l l b I l t l l l l l l l I I I I I I I I I I I I $ac c da dbdc b Figure 3: Encoding a solution to the vertex cover prob- lem from Fig. 2. 4.2 Formal Proof using Continuous DG The encoding outlined above uses non-projective trees, i.e., crossing dependencies. In anticipation of counter arguments such as that the presented dependency gram- mar was just too powerful, we will present the proof us- ing only one feature supplied by most DG formalisms, namely the free order of modifiers with respect to their head. Thus, modifiers must be inserted into an order do- main of their head (i.e., no + mark in valencies). This version of the proof uses a slightly more complicated en- coding of the vertex cover problem and resembles the proof by Barton (1985). Definition 1 (Measure) Let II • II be a measure for the encoded input length of a computational problem. We require that if S is a set or string and k E N then ISl > k implies IlSll ___ Ilkll and that for any tuple I1("" ,z,.. ")11 - Ilzll holds. < Definition 2 (Vertex Cover Problem) A possible instance of the vertex cover problem is a triple (V, E, k) where (V, E) is a finite graph and IvI > k N. The vertex cover problem is the set VC of all in- stances (V, E, k) for which there exists a subset V' C_ V and a function f : E ---> V I such that IV'l <_ k and V(Vm,Vn) E E: f((vm,Vn)) E {Vm,Vn}. <1 Definition 3 (DG recognition problem) A possible instance of the DG recognition problem is a tuple (G, a) where G = (Lex, C, isac, ~) is a depen- dency grammar as defined in Section 3 and a E E +. The DG recognition problem DGR consists of all instances (G, a) such that a E L(G). 
<1 For an algorithm to decide the VC problem consider a data structure representing the vertices of the graph (e.g., a set). We separate the elements of this data structure 340 classes valencies order domain S {(-, markl,O), (-, mark2,0)} --{(self-~ mark1), (mark1 -.< mark2)} A isac 0 {(-, unmrk, R), (+, same, A)} ={(unmrk -K same), (self -4 same)} B isac O {(-, unmrk, R), (+, same, B)} ={(unmrk --< same), (self -.< same)} (7 isac O {(-, unmrk, R), (+, same, C)} ~{(unmrk --4 same), (self -4 same)} D isac O {(-, unmrk, R), (+, same, D)} -{(unmrk --.< same), (self -~ same)} R {} --{} [ word [ classes I s {s} a {A,R} b {B,R} c {C,R} d {D,R} Table 1: Word classes and lexicon for vertex cover problem from Fig. 2 into the (maximal) vertex cover set and its complement set. Hence, one end point of every edge is assigned to the vertex cover (i.e., it is marked). Since (at most) all IEI edges might share a common vertex, the data struc- ture has to be a multiset which contains IEI copies of each vertex. Thus, marking the IVI - k complement ver- tices actually requires marking IVI - k times IE[ iden- tical vertices. This will leave (k - 1) * IEI unmarked vertices in the input structure. To achieve this algorithm through recognition of a dependency grammar, the mark- ing process will be encoded as the filling of appropriate valencies of a word s by words representing the vertices. Before we prove that this encoding can be generated in polynomial time we show that: Lemma 1 The DG recognition problem is in the complexity class Alp. [] Let G = (Lex, C, isac, Z) and a E ]E +. We give a nondeterministic algorithm for deciding whether a = (Sl-.- sn) is in L(G). Let H be an empty set initially: 1. Repeat until IHI = Iol (a) i. For every Si E O r choose a lexicon entry ci E Lex(si). ii. From the ci choose one word as the head h0. iii. Let H := {ho} and M := {cili E [1, IOrl]} \ H. (b) Repeat until M = 0: i. Choose a head h E H and a valency (b, d, c) E h.valencies and a modifier m E M. ii. If governs(h, d, m) holds then establish the dependency relation between h and the m, and add m to the set H. iii. Remove m from M. The algorithm obviously is (nondeterministically) polynomial in the length of the input. Given that (G, g) E DGR, a dependency tree covering the whole input exists and the algorithm will be able to guess the dependents of every head correctly. If, conversely, the algorithm halts for some input (G, or), then there neces- sarily must be a dependency tree rooted in ho completely covering a. Thus, (G, a) E DGR. [] Lemma 2 Let (V, E, k) be a possible instance of the vertex cover problem. Then a grammar G(V, E, k) and an input a(V, E, k) can be constructed in time polynomial in II (v, E, k)II such that (V, E, k) E VC ¢:::::v (G(V, E, k), a(V, E, k)) E DGR [] For the proof, we first define the encoding and show that it can be constructed in polynomial time. Then we proceed showing that the equivalence claim holds. The set of classes is G =aef {S, R, U} U {Hdi e [1, IEI]} U {U~, ¼1i e [1, IVI]}. In the isac hierarchy the classes Ui share the superclass U, the classes V~ the superclass R. Valencies are defined for the classes according to Table 2. Furthermore, we define E =dee {S} U {vii/ E [1, IVl]}. The lexicon Lex associates words with classes as given in Table 2. We set G(V, E, k) =clef ( Lex, C, isac, ~) and a(V, E, k) =def s Vl''" Vl"'" yIV[ " " " VlV ~ IEI IEI For an example, cf. Fig. 4 which shows a dependency tree for the instance of the vertex cover problem from Fig. 2. 
The two dependencies Ul and u2 represent the complement of the vertex cover. It is easily seen 3 that [[(G(V,E,k),a(V,E,k))[[ is polynomial in [[V[[, [[E[[ and k. From [El _> k and Def- inition 1 it follows that H(V,E,k)[I >_ [IE][ _> ][k[[ _> k. 3The construction requires 2 • [V[ + [El + 3 word classes, IV[ + 1 terminals in at most [El + 2 readings each. S defines IV[ + k • IE[ - k valencies, Ui defines [E[ - 1 valencies. The length of a is IV[ • [E[ + 1. 341 word class valencies Vvi • V Vi isac R { } Vvi • V Ui isac U {(-, rz, V/),--. , (-, rlEl_l, V/)} Vei E E Hi {} S {(-, u,, u),..., (-, u,v,_,, v), (-, hi, Hi),-'-, (-, hie I, HIEI), (-, n, R), • • • , (-, r(k-,)l~l, R)} I order I ={ } word ] ={ } "i -{} -{} word classes {U.~}U{Hjl3vm,v. • v: ej = (vm, v,,)^ s {s} Table 2: Word classes and lexicon to encode vertex cover problem $ aaaa bbbb Figure 4: Encoding a solution to the vertex cover prob- lem from Fig. 2. Hence, the construction of (G(V, E, k), a(V, E, k)) can be done in worst-case time polynomial in II(V,E,k)ll. We next show the equivalence of the two problems. Assume (V, E, k) • VC: Then there exists a subset V' C_ V and a function f : E --+ V' such that IV'l <_ k and V(vm,v,~) • E : f((vm,vn)) • {(vm,Vn)}. A dependency tree for a(V, E, k) is constructed by: 1. For every ei • E, one word f(ei) is assigned class Hi and governed by s in valency hi. 2. For each vi • V \ V', IEI - I words vi are assigned class R and governed by the remaining copy of vi in reading Ui through valencies rl to rlEl_l. 3. The vi in reading Ui are governed by s through the valencies uj (j • [1, IWl - k]). 4. (k - 1) • IEI words remain in a. These receive reading R and are governed by s in valencies r~ (j • [1, (k - 1)IEI]). The dependency tree rooted in s covers the whole in- put a(V, E, k). Since G(V, E, k) does not give any fur- ther restrictions this implies a( V, E, k) • L ( G ( V, E, k ) ) and, thus, (G(V, E, k), a(V, E, k)) • DGR. Conversely assume (G(V, E, k), a(V, E, k)) • DGR: Then a(V, E, k) • L(G(V, E, k)) holds, i.e., there ex- ists a dependency tree that covers the whole input. Since s cannot be governed in any valency, it follows that s must be the root. The instance s of S has IEI valencies of class H, (k- 1) * [E I valencies of class R, and IWl - k valencies of class U, whose instances in turn have I EI- 1 valencies of class R. This sums up to IEI * IVl potential dependents, which is the number of terminals in a be- sides s. Thus, all valencies are actually filled. We define a subset Vo C_ V by Vo =def {V E VI3i e [1, IYl - k] 8.mod(ul) = v}. I.e., (1) IVol = IVI- k The dependents of s in valencies hl are from the set V' Vo. We define a function f : E --+ V \ Vo by f(ei) =def s.mod(hi) for all ei E E. By construction f(ei) is an end point of edge ei, i.e. (2) V(v,,,,v,d e E: f((v,.,,,v,4,) e {v,,,,v,.,} We define a subset V' C V by V' =def {f(e)le • E}. Thus (3) Ve • E: f(e) • V' By construction of V' and by (1) it follows (4) IV'l < IYl- IVol = k From (2), (3), and (4) we induce (V, E, k) • VC. • Theorem 3 The DG recognition problem is in the complexity class Af l)C. [] The Af:P-completeness of the DG recognition problem follows directly from lemmata 1 and 2. • 5 Conclusion We have shown that current DG theorizing exhibits a feature not contained in previous formal studies of DG, namely the independent specification of dominance and precedence constraints. This feature leads to a A/'7% complete recognition problem. 
The necessity of this ex- tension approved by most current DGs relates to the fact that DG must directly characterize dependencies which in PSG are captured by a projective structure and addi- tional processes such as coindexing or structure sharing (most easily seen in treatments of so-called unbounded 342 dependencies). The dissociation of tree structure and linear order, as we have done in Section 3, nevertheless seems to be a promising approach for PSG as well; see a very similar proposal for HPSG (Reape, 1989). The .N'79-completeness result also holds for the dis- continuous DG presented in Section 3. This DG can characterize at least some context-sensitive languages such as anbnc n, i.e., the increase in complexity corre- sponds to an increase of generative capacity. We conjec- ture that, provided a proper formalization of the other DG versions presented in Section 2, their .A/P-completeness can be similarly shown. With respect to parser design, this result implies that the well known polynomial time complexity of chart- or tabular-based parsing techniques cannot be achieved for these DG formalisms in gen- eral. This is the reason why the PARSETALK text under- standing system (Neuhaus & Hahn, 1996) utilizes special heuristics in a heterogeneous chart- and backtracking- based parsing approach. References Barton, Jr., G. E. (1985). On the complexity of ID/LP parsing. Computational Linguistics, 11(4):205- 218. Br6ker, N. (1997). Eine Dependenzgrammatik zur Kopplung heterogener Wissenssysteme auf modaUogischer Basis, (Dissertation). Freiburg, DE: Philosophische Fakult~it, Albert-Ludwigs- Universit~it. Earley, J. (1970). An efficient context-free parsing algo- rithm. Communications of the ACM, 13(2):94-102. Gaifman, H. (1965). Dependency systems and phrase- structure systems. Information & Control, 8:304-- 337. Garey, M. R. & D. S. Johnson (1983). Computers and Intractability: A Guide to the Theory of NP- completeness (2. ed.). New York, NY: Freeman. Haegeman, L. (1994). Introduction to Government and Binding. Oxford, UK: Basil Blackwell. Hellwig, E (1988). Chart parsing according to the slot and filler principle. In Proc. of the 12 th Int. Conf. on Computational Linguistics. Budapest, HU, 22- 27Aug 1988, Vol. 1, pp. 242-244. Hepple, M. (1990). Word order and obliqueness in cat- egorial grammar. In G. Barry & G. Morill (Eds.), Studies in categorial grammar, pp. 47--64. Edin- burgh, UK: Edinburgh University Press. Huck, G. (1988). Phrasal verbs and the categories of postponement. In R. Oehrle, E. Bach & D. Wheeler (Eds.), Categorial Grammars and Natural Lan- guage Structures, pp. 249-263. Studies in Linguis- tics and Philosophy 32. Dordrecht, NL: D. Reidel. Hudson, R. (1990). English Word Grammar. Oxford, UK: Basil Blackwell. Lombardo, V. & L. Lesmo (1996). An earley-type recog- nizer for dependency grammar. In Proc. of the 16 th Int. Conf. on Computational Linguistics. Copen- hagen, DK, 5-9 Aug 1996, Vol. 2, pp. 723-728. McCord, M. (1990). Slot grammar: A system for simpler construction of practical natural language gram- mars. In R. Studer (Ed.), Natural Language and Logic, pp. 118-145. Berlin, Heidelberg: Springer. Mer ~uk, I. (1988). Dependency Syntax: Theory and Practice. New York, NY: SUNY State University Press of New York. Mel' 6uk, I. & N. Pertsov (1987). Surface Syntax of En- glish: A Formal Model within the MTT Framework. Amsterdam, NL: John Benjamins. Neuhaus, R & U. Hahn (1996). Restricted parallelism in object-oriented lexical parsing. In Proc. of the 16 th Int. Conf. 
on Computational Linguistics. Copenhagen, DK, 5-9 Aug 1996, pp. 502-507.

Pollard, C. & I. Sag (1994). Head-Driven Phrase Structure Grammar. Chicago, IL: University of Chicago Press.

Rambow, O. & A. Joshi (1994). A formal look at DGs and PSGs, with consideration of word-order phenomena. In L. Wanner (Ed.), Current Issues in Meaning-Text-Theory. London: Pinter.

Reape, M. (1989). A logical treatment of semi-free word order and discontinuous constituents. In Proc. of the 27th Annual Meeting of the Association for Computational Linguistics. Vancouver, BC, 1989, pp. 103-110.

Starosta, S. (1988). The Case for Lexicase. London: Pinter.

Starosta, S. (1992). Lexicase revisited. Department of Linguistics, University of Hawaii.

Tesnière, L. (1959). Eléments de Syntaxe Structurale (2nd ed. 1969). Paris, FR: Klincksieck.
Maximal Incrementality in Linear Categorial Deduction

Mark Hepple
Dept. of Computer Science
University of Sheffield
Regent Court, Portobello Street
Sheffield S1 4DP, UK
hepple@dcs.shef.ac.uk

Abstract

Recent work has seen the emergence of a common framework for parsing categorial grammar (CG) formalisms that fall within the 'type-logical' tradition (such as the Lambek calculus and related systems), whereby some method of linear logic theorem proving is used in combination with a system of labelling that ensures only deductions appropriate to the relevant grammatical logic are allowed. The approaches realising this framework, however, have not so far addressed the task of incremental parsing, a key issue in earlier work with 'flexible' categorial grammars. In this paper, the approach of (Hepple, 1996) is modified to yield a linear deduction system that does allow flexible deduction and hence incremental processing, but that hence also suffers the problem of 'spurious ambiguity'. This problem is avoided via normalisation.

1 Introduction

A key attraction of the class of formalisms known as 'flexible' categorial grammars is their compatibility with an incremental style of processing, in allowing sentences to be assigned analyses that are fully or primarily left-branching. Such analyses designate many initial substrings of a sentence as interpretable constituents, allowing its interpretation to be generated 'on-line' as it is presented. Incremental interpretation has been argued to provide for efficient language processing, by allowing early filtering of implausible readings.1

1 Within the categorial field, the significance of incrementality has been emphasised most notably in the work of Steedman, e.g. (Steedman, 1989).

This paper is concerned with the parsing of categorial formalisms that fall within the 'type-logical' tradition, whose most familiar representative is the associative Lambek calculus (Lambek, 1958). Recent work has seen proposals for a range of such systems, differing in their resource sensitivity (and hence, implicitly, their underlying notion of 'linguistic structure'), in some cases combining differing resource sensitivities in one system.2 Many of these proposals employ a 'labelled deductive system' methodology (Gabbay, 1996), whereby types in proofs are associated with labels which record proof information for use in ensuring correct inferencing.

2 See, for example, the formalisms developed in (Moortgat & Morrill, 1991), (Moortgat & Oehrle, 1994), (Morrill, 1994), (Hepple, 1995).

A common framework is emerging for parsing type-logical formalisms, which exploits the labelled deduction idea. Approaches within this framework employ a theorem proving method that is appropriate for use with linear logic, and combine it with a labelling system that restricts admitted deductions to be those of a weaker system. Crucially, linear logic stands above all of the type-logical formalisms proposed in the hierarchy of substructural logics, and hence linear logic deduction methods can provide a common basis for parsing all of these systems. For example, Moortgat (1992) combines a linear proof net method with labelling to provide deduction for several categorial systems. Morrill (1995) shows how types of the associative Lambek calculus may be translated to labelled implicational linear types, with deduction implemented via a version of SLD resolution. Hepple (1996) introduces a linear deduction method, involving compilation to first-order formulae, which can be combined with various labelling disciplines. These approaches, however, are not directed toward incremental processing.

In what follows, we show how the method of (Hepple, 1996) can be modified to allow processing which has a high degree of incrementality. These modifications, however, give a system which suffers
In what follows, we show how the method of (Hepple, 1996) can be modified to allow processing which has a high degree of incrementality. These modifications, however, give a system which suffers 2See, for example, the formalisms developed in (Moortgat & Morrill, 1991), (Moortgat & Oehrle, 1994), (Morrill, 1994), (Hepple, 1995). 344 the problem of 'derivational equivalence', also called 'spurious ambiguity', i.e. allowing multiple proofs which assign the same reading for some combina- tion, a fact which threatens processing efficiency. We show how this problem is solved via normalisation. 2 Implicational Linear Logic Linear logic is an example of a "resource-sensitive" logic, requiring that each assumption ('resource') is used precisely once in any deduction. For the implic- ational fragment, the set of formulae ~ are defined by 5 r ::= A [ ~'o-~- (with A a nonempty set of atomic types). A natural deduction formulation re- quires the elimination and introduction rules in (1), which correspond semantically to steps of functional application and abstraction, respectively. (1) Ao-B : a B: b IS: v] o-E A:a A: (ab) o-I Ao-B : Av.a The proof (2) (which omits lambda terms) illustrates that 'hypothetical reasoning' in proofs (i.e. the use of additional assumptions that are later discharged or cancelled, such as Z here) is driven by the presence of higher-order formulae (such as Xo-(yc-z) here). (2) Xo-(Yo--Z) Yo-W Wo--Z [Z] W Y Yo-Z X Various type-logical categorial formalisms (or strictly their implicational fragments) differ from the above system only in imposing further restric- tions on resource usage. For example, the associ- ative Lambek calculus imposes a linear order over formulae, in which context, implication divides into two cases, (usually written \ and /) depending on whether the argument type appears to the left or right of the functor. Then, formulae may combine only if they are adjacent and in the appropriate left-right order. The non-associative Lambek cal- culus (Lambek, 1961) sets the further requirement that types combine under some fixed initial brack- etting. Such weaker systems can be implemented by combining implicational linear logic with a la- belling system whose labels are structured objects that record relevant resource information, i.e. of se- quencing and/or bracketting, and then using this in- formation in restricting permitted inferences to only those that satisfy the resource requirements of the weaker logic. 3 First-order Compilation The first-order formulae are those with only atomic argument types (i.e. ~" ::= A I .~o-A). Hepple (1996) shows how deductions in implica- tional linear logic can be recast as deductions in- volving only first-order formulae. 3 The method in- volves compiling the original formulae to indexed first-order formulae, where a higher-order initial for- mula yields multiple compiled formulae, e.g. (omit- ting indices) Xo-(yo--Z) would yield Xo-Y and Z, i.e. with the subformula relevant to hypothetical reasoning (Z) effectively excised from the initial for- mulae, to be treated as a separate assumption, leav- ing a first-order residue. Indexing is used in ensuring general linear use of resources, but also notably to ensure proper use of excised subformulae, i.e. so that Z, in our example, must be used in deriving the argu- ment of Xo-Y, and not elsewhere (otherwise invalid deductions would be derivable). The approach is best explained by example. 
In proving Xo-(Yo--Z), Yo-W, Wo--Z =~ X, compila- tion of the premise formulae yields the indexed for- mulae that form the assumptions of (3), where for- mulae (i) and (iv) both derive from Xo--(Yo-Z). (Note in (3) that the lambda terms of assumptions are written below their indexed types, simply to help the proof fit in the column.) Combination is allowed by the single inference rule (4). (3) (i) (ii) (iii) (iv) {i}:Xo-(Y:{j}) {k}:Yo-(W:0) {l}:Wo--(Z:0) {j}:Z )~t.x( )tz.t ) )~u.yu Av.wv z {j,l} :W:wz {j, k, l}: Y: y(wz) {i, j, k, l}: X: x()tz.y(wz)) (4) ¢: Ao--(B:~) : Av.a ¢ : B : b lr = ¢t~¢ r: A: a[b//vl Each assumption in (3) is associated with a set con- taining a single index, which serves as the unique 3The point of this manoeuvre (i.e. compiling to first- order formulae) is to create a deduction method which, like chart parsing for phrase-structure grammar, avoids the need to recompute intermediate results when search- ing exhaustively for all possible analyses, i.e. where any combination of types contributes to more than one over- all analysis, it need only be computed once. The incre- mental system to be developed in this paper is similarly compatible with a 'chart-like' processing approach, al- though this issue will not be further addressed within this paper. For earlier work on chart-parsing type-logical formalisms, specifically the associative Lambek calculus, see KSnig (1990), Hepple (1992), K5nig (1994). 345 identifier for that assumption. The index sets of a derived formula identify precisely those assumptions from which it is derived. The rule (4) ensures appro- priate indexation, i.e. via the condition rr = ¢~¢, where t~ stands for disjoint union (ensuring linear usage). The common origin of assumptions (i) and (iv) (i.e. from Xo--(Yo-Z)) is recorded by the fact that (i)'s argument is marked with (iv)'s index (j). The condition a C ~b of (4) ensures that (iv) must contribute to the derivation of (i)'s argument (which is needed to ensure correct inferencing). Finally, ob- serve that the semantics of (4) is handled not by simple application, but rather by direct substitution for the variable of a lambda expression, employing a special variant of substitution, notated _[_//_] (e.g. t[s//v] to indicate substitution of s for v in t), which specifically does not act to avoid accidental binding. In the final inference of (3), this method allows the variable z to fall within the scope of an abstraction over z, and so become bound. Recall that introduc- tion inferences of the original formulation are associ- ated with abstraction steps. In this approach, these inferences are no longer required, their effects hav- ing been compiled into the semantics. See (Hepple, 1996) for more details, including a precise statement of the compilation procedure. 4 Flexible Deduction The approach just outlined is unsuited to incre- mental processing. Its single inference rule allows only a rigid style of combining formulae, where or- der of combination is completely determined by the argument order of functors. The formulae of (3), for example, must combine precisely as shown. It is not possible, say, to combine assumptions (i) and (if) to- gether first as part of a derivation. To overcome this limitation, we might generalise the combination rule to allow composition of functions, i.e. combinations akin to e.g. Xo-Y, Yo--W ==> Xo-W. However, the treatment of indexation in the above system is one that does not readily adapt to flexible combination. 
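Before moving on, a minimal Python sketch of how rule (4) can be checked mechanically. The representation (the Formula class, string-valued semantics) is our own rather than the paper's, and the semantics field only approximates the a[b//v] substitution by application; the two index-set tests are exactly the rule's side conditions, pi = phi disjoint-union psi (linear usage) and alpha a subset of psi (the excised hypothesis must be used inside the argument).

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Formula:
    """First-order indexed formula A o-(B1:alpha1) o-... with atomic arguments."""
    result: str                              # atomic result category, e.g. 'X'
    args: Tuple[Tuple[str, frozenset], ...]  # (atomic category, required index set);
                                             # args[0] is the next argument consumed
    indices: frozenset                       # indices of assumptions used so far
    sem: str                                 # semantic term, shown as a string here

def combine(fn: Formula, arg: Formula) -> Optional[Formula]:
    """Rule (4): fn = phi: A o-(B:alpha), arg = psi: B  ==>  pi: A.
    Returns None if any side condition fails."""
    if not fn.args or arg.args:              # fn must want an argument; arg must be
        return None                          # a completed atomic-category object
    b_cat, alpha = fn.args[0]
    if arg.result != b_cat:                  # category match on B
        return None
    if fn.indices & arg.indices:             # pi = phi (+) psi: disjoint union
        return None
    if not alpha <= arg.indices:             # excised hypothesis used in the argument
        return None
    return Formula(fn.result, fn.args[1:],
                   fn.indices | arg.indices,
                   f"({fn.sem} {arg.sem})")  # stands in for a[b//v] substitution
```

A chart-style parser would then try combine on all pairs of edges, keying results by category and index set so that, as footnote 3 notes, any intermediate combination is computed only once.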
We will transform these indexed formulae to an- other form which better suits our needs, using the compilation procedure (5). This procedure returns a modified formula plus a set of equations that spe- cify constraints on its indexation. For example, the assumptions (i-iv) of (3) yield the results (6) (ignor- ing semantic terms, which remain unchanged). Each atomic formula is partnered with an index set (or typically a variable over such), which corresponds to the full set of indices to be associated with the complete object of that category, e.g. in (i) we have (X+¢), plus the equation ¢ = {i}Wrr which tells us that X's index set ¢ includes the argument formula Y's index set rr plus its own index i. The further constraint equation ¢ = {i}t~rr indicates that the argument's index set should include j (c.f. the con- ditions for using the original indexed formula). (5) 0.(¢: x: t) = ((x+¢) : t,0) where X atomic 0.(¢: Xo-Y: t) = (Z: t,C) where 0.1(¢, Xo--Y) = (Z, C) 0.1(¢,x) = ((x+7), {7 = ¢}) where X atomic, 7 a fresh variable 0.1 (¢, Xl°-( Y: 7r)) = (X2o--(Y+7), C') where 6, 7 fresh variables, 6 := ¢~7 0"1(6, X 1) = (X2, C) C' = C u {~r c 7} (unless ~r = 0, when C = C') (6) i. old formula: {i}: Xo--(Y:{j}) new formula: (X+C)o-(Y+Tr) constraints: {¢ = {i}~rr, {j} C 7r} if. old formula: {k}:Yo-(W:O) new formula: (V+a)o-(W%3) constraints: {a = {k}~/~} iii. old formula: {l} :Wo-(Z:O) new formula: (W+7)o-(Z+~) constraints: {7 = {l}t~} iv. old formula: {j} :Z new formula: (Z+{j}) constraints: 0 (7) Ac--B : Av.a B : b A: a[bllv] The previous inference rule (4) modifies to (7), which is simpler since indexation constraints are now handled by the separate constraint equations. We leave implicit the fact that use of the rule involves unification of the index variables associated with the two occurrences of "B" (in the standard manner). The constraint equations for the result of the com- bination are simply the sum of those for the formulae combined (as affected by the unification step). For example, combination of the formulae from (iii) and (iv) of (6) requires unification of the index set expres- sions 6 and {j}, yielding the result formula (W+7) plus the single constraint equation V = {l}tg{j}, which is obviously satisfiable (with 3' = {j,l}). A combination is not allowed if it results in an unsat- isfiable set of constraints. The modified approach so neatly moves indexation requirements off into the constraint equation domain that we shall henceforth drop all consideration of them, assuming them to be appropriately managed in the background. 346 We can now state a generalised composition rule as in (8). The inference is marked as [m, n], where m is the argument position of the 'functor' (always the lefthand premise) that is involved in the com- bination, and n indicates the number of arguments inherited from the 'argument' (righthand premise). The notation "o--Zn...o--Zl" indicates a sequence of n arguments, where n may be zero, e.g. the case [1,0] corresponds precisely to the rule (7). Rule (8) allows the non-applicative derivation (9) over the formulae from (6) (c.f. the earlier derivation (3)). (8) Xo-Y .... o--Y1 Ymo-Z .... o'-Zl Ayl ...y,, .a Azl ...z~ .b [m, n] Xo- Z .... 
o- Zl o-Y,,_ 1-.o-Y1 Ayl ...ym- 1 Zl ...z,.a[b // ym ] (9) (i) (ii) (iii) (iv) Xc-Y Yo-W Wo-Z Z At.x(Az.t) Au.yu Av.wv z Xo-W: Au.x(kz.yu) [1,11 [1,1] xo-z: ~v.x(~z.y(wv)) x: x(,~z.y(wz) ) [1 21 5 Incremental Derivation As noted earlier, the relevance of flexible CGs to incremental processing relates to their ability to assign highly left-branching analyses to sentences, so that many initial substrings are treated as in- terpretable constituents. Although we have adap- ted the (Hepple, 1996) approach to allow flexibility in deduction, the applicability of the notion 'left- branching' is not clear since it describes the form of structures built in proof systems where formu- lae are placed in a linear order, with combination dependent on adjacency. Linear deduction meth- ods, on the other hand, work with unordered collec- tions of formulae. Of course, the system of labelling that is in use -- where the constraints of the 'real' grammatical logic reside -- may well import word order information that limits combination possibil- ities, but in designing a general parsing method for linear categorial formalisms, these constraints must remain with the labelling system. This is not to say that there is no order informa- tion available to be considered in distinguishing in- cremental and non-incremental analyses. In an in- cremental processing context, the words of a sen- tence are delivered to the parser one-by-one, in 'left- to-right' order. Given lexical look-up, there will then be an 'order of delivery' of lexical formulae to the parser. Consequently, we can characterise an incre- mental analysis as being one that at any stage in- cludes the maximal amount of 'contentful' combin- ation of the formulae (and hence also lexical mean- ings) so far delivered, within the limits of possible combination that the proof system allows. Note that we have not in these comments reintroduced an ordered proof system of the familiar kind by the back door. In particular, we do not require formu- lae to combine under any notion of 'adjacency', but simply 'as soon as possible'. For example, if the order of arrival of the formulae in (9) were (i,iv)-<(ii)-<(iii) (recall that (i,iv) origin- ate from the same initial formula, and so must ar- rive together), then the proof (9) would be an incre- mental analysis. However, if the order instead was (ii)-<(iii)-<(i,iv), then (9) would not be incremental, since at the stage when only (ii) and (iii) had ar- rived, they could combine (as part of an equivalent alternative analysis), but are not so combined in (9). 6 Derivational Equivalence, Dependency &: Normalisation It seems we have achieved our aim of a linear deduc- tion method that allows incremental analysis quite easily, i.e. simply by generalising the combina- tion rule as in (8), having modified indexed formu- lae using (5). However, without further work, this 'achievement' is of little value, because the result- ing system will be very computationally expensive due to the problem of 'derivational equivalence' or 'spurious ambiguity', i.e. the existence of multiple distinct proofs which assign the same reading. For example, in addition to the proof (9), we have also the equivalent proof (10). 
(10) (i) (ii) (iii) (iv) Xo--Y Yo-W Wo-Z Z At.x(Az.t) Au.yu Av.wv z Yo--Z : )~v.y(wv) [1,1] Y: y(wz) x: z( az y( wz ) ) [1,0] [1,0] The solution to this problem involves specifying a normal form for deductions, and allowing that only normal form proofs are constructed) Our route to specifying a normal form for proofs exploits a corres- pondence between proofs and dependency structures. Dependency grammar (DG) takes as fundamental ~This approach of 'normal form parsing' has been applied to the associative Lambek calculus in (K6nig, 1989), (Hepple, 1990), (Hendriks, 1992), and to Combin- atory Categorial Grammar in (Hepple & Morrill, 1989), (Eisner, 1996). 347 the notions of head and dependent. An analogy is often drawn between CG and DG based on equating categorial functors with heads, whereby the argu- ments sought by a functor are seen as its dependents. The two approaches have some obvious differences. Firstly, the argument requirements of a categorial functor are ordered. Secondly, arguments in CG are phrasal, whereas in DG dependencies are between words. However, to identify the dependency rela- tions entailed by a proof, we may simply ignore argu- ment ordering, and we can trace through the proof to identify those initial assumptions ('words') that are related as head and dependent by each combination of the proof. This simple idea unfortunately runs into complications, due to the presence of higher or- der functions. For example, in the proof (2), since the higher order functor's argument category (i.e. Yo--Z) has subformuiae corresponding to compon- ents of both of the other two assumptions, Yo-W and Wo--Z, it is not clear whether we should view the higher order functor as having a dependency re- lation only to the 'functionally dominant' assump- tion Yo-W, i.e. with dependencies as in (lla), or to both the assumptions Yo-W and Wo-Z, i.e. with dependencies as perhaps in either (llb) or (llc). The compilation approach, however, lacks this prob- lem, since we have only first order formulae, amongst which the dependencies are clear, e.g. as in (12). (11) (a) ~ f~ Xo-(Yo-Z) Yo-W Wo-Z • Xo- (Yo-Z) Yo-W Wo-Z Xo-(Yo-Z) Yo-W Wo-Z (12) #-5 Xo--Y Yo-W Wo-Z Z Some preliminaries. We assume that proof as- sumptions explicitly record 'order of delivery' in- formation, marked by a natural number, and so take the form: n x N Further, we require the ordering to go beyond simple 'order of delivery' in relatively ordering first order as- sumptions that derive from the same original higher- order formula. (This move simply introduces some extra arbitrary bias as a basis for distinguishing proofs.) It is convenient to have a 'linear' nota- tion for writing proofs. We will write (n/X [a]) for an assumption (such as that just shown), and (X Y / Z [m, n]) for a combination of subproofs X and Y to give result formula Z by inference [m, n]. (13) dep((X Y / Z [m,n])) = {(i,j,k)} where gov(m, X) = (i, k), fun(Y) = j (14) dep*((n/X [a])) -- 0 dep*((X Y / Z [re, n])) = {~} U dep*(X) U dep*(Y) where 5 = dep((X Y / Z [m, n])) The procedure dep, defined in (13), identifies the dependency relation established by any combina- tion, i.e. for any subproof P = (X Y / Z [m,n]), dep(P) returns a triple (i,j,k), where i,j identify the head and dependent assumptions for the com- bination, and k indicates the argument position of the head assumption that is involved (which has now been inherited to be argument m of the functor of the combination). 
The procedure dep*, defined in (14), returns the set of dependencies established within a subproof. Note that dep employs the pro- cedures gov (which traces the relevant argument back to its source assumption -- the head) and fun (which finds the functionally dominant assumption within the argument subproof-- the dependent). (15) gov(i, (n/x [a])) = (n, i) gov(i, (x Y / z [m, n])) = gov((i - m + 1), Y) whereto<i< (m+n) gov(i, (X Y / Z [m, n])) = gov(i, X) where i < m gov(i, (X Y / Z [m, n])) = gov((i - n + 1), X) where (m + n) < i (16) fun((n/X [a])) = n fun((X Y / Z [re, n])) = fun(X) From earlier discussion, it should be clear that an 'incremental analysis' is one in which any depend- ency to be established is established as soon as pos- sible in terms of the order of delivery of assumptions. The relation << of (17) orders dependencies in terms of which can be established earlier on, i.e. 6 << 7 if the later-arriving assumption of 6 arrives before the later-arriving assumption of 7- Note however that 6,7 may have the same later arriving assumption (i.e. if this assumption is involved in more than one dependency). In this case, << arbitrarily gives pre- cedence to the dependency whose two assumptions occur closer together in delivery order. 348 (17) 5<<7 (whereh=(i,j,k),7=(x,y,z)) if] (max(/,j) < max(x,y) V (max(/,j) = max(x, y) A min(i, ]1 > rain(x, y))) We can use << to define an incremental normal form for proofs, i.e. an incremental proof is one that is well-ordered with respect to << in the sense that every combination (X Y / Z [m, n]) within it establishes a dependency 5 which follows under << every dependency 5' established within the sub- proofs X and Y it combines, i.e. 5' << 5 for each 5' 6 dep*(X) tJ dep*(Y). This normal form is useful only if we can show that every proof has an equi- valent normal form. For present purposes, we can take two proofs to be equivalent if] they establish identical sets of dependency relations. 5 (18) trace(/,j, (i/X [a])) = j trace(/,j, (X Y / Z [m,n])) = (m + k- 1) where i 6 assure(Y) trace(i, j, Y) = k trace(i,j, (X Y / Z [m,n])) = k where i 6 assure(X) trace(i,j,X) = k, k < m trace(i, j, (X Y / Z [m, hi)) = (k + n - 1) where i 6 assure(X) trace(i, j, X) = k, k > m (19) assum((i/x [a])) = {i} assum((X Y / Z fro, n])) = assum(X) U assum(Y) We can specify a method such that given a set of dependency relations :D we can construct a cor- responding proof. The process works with a set of subproofs 7 ), which are initially just the set of as- sumptions (i.e. each of the form (n/F [a])), and proceeds by combining pairs of subproofs together, until finally just a single proof remains. Each step involves selecting a dependency 5 (5 = (i, j, k)) from /) (setting D := D - {5} for subsequent purposes), removing the subproofs P, Q from 7) which contain the assumptions i,j (respectively), combining P, Q (with P as functor) to give a new subproof R which 5This criterion turns out to be equivalent to one stated in terms of the lambda terms that proofs generate, i.e. two proofs will yield identical sets of dependency re- lations iff they yield proof terms that are fly-equivalent. This observation should not be surprising, since the set of 'dependency relations' returned for a proof is in es- sence just a rather unstructured summary of its func- tional relations. is added to 7) (i.e. P := (7) - {P, Q}) u {R}). 
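A Python sketch of the procedures (13)-(17) over the linear proof notation just introduced. Only assumption numbers matter for dependency extraction, so formulae, result types and semantic terms are dropped; the names Leaf, Comb, deps, earlier and well_ordered are ours, and the boundary cases of gov follow the reading that a combination [m, n] places the argument's n inherited arguments at positions m through m+n-1 of the result.

```python
from dataclasses import dataclass
from typing import Set, Tuple, Union

@dataclass(frozen=True)
class Leaf:            # (n / X [a]): assumption number n; formula omitted
    n: int

@dataclass(frozen=True)
class Comb:            # (X Y / Z [m, n]): result formula Z omitted, not needed here
    X: 'Proof'         # functor subproof
    Y: 'Proof'         # argument subproof
    m: int             # argument position of the functor involved
    n: int             # number of arguments inherited from the argument

Proof = Union[Leaf, Comb]

def fun(p: Proof) -> int:                     # (16): functionally dominant assumption
    return p.n if isinstance(p, Leaf) else fun(p.X)

def gov(i: int, p: Proof) -> Tuple[int, int]: # (15): trace argument i to its source
    if isinstance(p, Leaf):
        return (p.n, i)
    if p.m <= i < p.m + p.n:                  # inherited from the argument subproof
        return gov(i - p.m + 1, p.Y)
    if i < p.m:                               # functor argument below position m
        return gov(i, p.X)
    return gov(i - p.n + 1, p.X)              # functor argument above the insertion

def dep(p: Comb) -> Tuple[int, int, int]:     # (13): the dependency one step adds
    head, k = gov(p.m, p.X)
    return (head, fun(p.Y), k)

def deps(p: Proof) -> Set[Tuple[int, int, int]]:   # (14): dep* over a subproof
    if isinstance(p, Leaf):
        return set()
    return {dep(p)} | deps(p.X) | deps(p.Y)

def earlier(d, g) -> bool:                    # (17): d << g
    (i, j, _), (x, y, _) = d, g
    return max(i, j) < max(x, y) or \
           (max(i, j) == max(x, y) and min(i, j) > min(x, y))

def well_ordered(p: Proof) -> bool:           # the incremental normal-form property
    if isinstance(p, Leaf):
        return True
    d = dep(p)
    return (well_ordered(p.X) and well_ordered(p.Y)
            and all(earlier(e, d) for e in deps(p.X) | deps(p.Y)))
```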
It is important to get the right value for m in the combination [m, n] used to combine P, Q, so that the correct argument of the assumption i (as now inherited to the end-type of P) is involved. This value is given by m = trace(i, k, P) (with trace as defined in (18)). The process of proof construction is nondeterministic in the order of selection of dependencies for incorporation, and so a single set of dependencies can yield multiple distinct, but equivalent, proofs (as we would expect). To build normal form proofs, we only need to limit the order of selection of dependencies using <<, i.e. requiring that the minimal element under << is selected at each stage. Note that this ordering restriction makes the selection process deterministic, from which it follows that normal forms are unique. Putting the above methods together, we have a complete normal form method for proofs of the first-order linear deduction system, i.e. for any proof P, we can extract its dependency relations and use these to construct a unique, maximally incremental, alternative proof -- the normal form of P.
7 Proof Reduction and Normalisation
The above normalisation approach is somewhat non-standard. We shall next briefly sketch how normalisation could instead be handled via the standard method of proof reduction. This method involves defining a contraction relation (▷1) between proofs, which is typically stated as a number of contraction rules of the form X ▷1 Y, where X is termed a redex and Y its contractum. Each rule allows that a proof containing a redex be transformed into one where that occurrence is replaced by its contractum. A proof is in normal form iff it contains no redexes. The contraction relation generates a reduction relation (▷) such that X reduces to Y (X ▷ Y) iff Y is obtained from X by a finite series (possibly zero) of contractions. A term Y is a normal form of X iff Y is a normal form and X ▷ Y. We again require the ordering relation << defined in (17). A redex is any subproof whose final step is a combination of two well-ordered subproofs, which establishes a dependency that undermines well-orderedness. A contraction step modifies the proof to swap this final combination with the final one of an immediate subproof, so that the dependencies the two combinations establish are now appropriately ordered with respect to each other. The possibilities for reordering combination steps divide into four cases, which are shown in Figure 1.
[Figure 1: Local Reordering of Combination Steps: the four cases]
This reduction system can be shown to exhibit the property (called strong normalisation) that every reduction is finite, from which it follows that every proof has a normal form. 6
8 Normal form parsing
The technique of normal form parsing involves ensuring that only normal form proofs are constructed by the parser, avoiding the unnecessary work of building all the non-normal form proofs. At any stage, all subproofs so far constructed are in normal form, and the result of any combination is admitted only provided it is in normal form, otherwise it is discarded.
The result of a combination is recognised as non-normal form if it establishes a dependency that is out of order with respect to that of the fi- nal combination of at least one of the two subproofs combined (which is an adequate criterion since the subproofs are well-ordered). The procedures defined above can be used to identify these dependencies. 9 The Degree of Incrementality Let us next consider the degree of incrementality that the above system allows, and the sense in which 6To prove strong normalisation, it is sufficient to give a metric which assigns to each proof a finite non-negative integer score, and under which every contraction reduces a proof's score by a non-zero amount. The following metric tt can be shown to suffice: (a) for P = (nIX [a]), #(P) = 0, (b) for P=(XY / Z [m,n]), whose final step establishes a dependency a, #(P) = it(X) + ~u(Y) + D, where D is the number of dependencies 5' such that << a', which are established in X and Y, i.e. D = [A] whereA={5' ] 5'edep,(X) Udep,(Y) A 5<<5'}. it might be considered maximal. Clearly, the system does not allow full 'word-by-word' incrementality, i.e. where the words that have been delivered at any stage in incremental processing are combined to give a single result formula, with combinations to incor- porate each new lexical formula as it arrives/ For example, in incremental processing of Today John sang, the first two words might yield (after compil- ation) the first-order formulae so-s and np, which will not combine under the rule (8). s Instead, the above system will allow precisely those combinations that establish functional rela- tions that are marked out in lexical type structure (i.e. subcategorisation), which, given the parMlel- ism of syntax and semantics, corresponds to allow- ing those combinations that establish semantically relevant functional relations amongst lexical mean- ings. Thus, we believe the above system to exhibit maximal incrementality in relation to allowing 'se- mantically contentful' combinations. In dependency terms, the system allows any set of initial formulae to combine to a single result iff they form a con- nected graph under the dependency relations that obtain amongst them. Note that the extent of incrementality allowed by using 'generalised composition' in the compiled first- order system should not be equated with that which 7For an example of a system allowing word-by-word incrementality, see (Milward, 1995). SNote that this is not to say that the system is un- able to combine these two types, e.g. a combination so--s, np =~ so-(so-np) is derivable, with appropriate compilation. The point rather is that such a combina- tion will typically not happen as a component in a proof of some other overall deduction. 350 would be allowed by such a rule in the original (non- compiled) system. We can illustrate this point using the following type combination, which is not an in- stance of even 'generalised' composition. Xo-(Yo-Z), Yo--W =~ Xo-(Wo-Z) Compilation of the higher-order assumption would yield Xo--Y plus Z, of which the first formula can compose with the second assumption Yo-W to give Xo-W, thereby achieving some semantically con- tentful combination of their associated meanings, which would not be allowed by composition over the original formulae. 9 10 Conclusion We have shown how the linear categorial deduction method of (Hepple, 1996) can be modified to allow incremental derivation, and specified an incremental normal form for proofs of the system. 
These results provide for an efficient incremental linear deduction method that can be used with various labelling dis- ciplines as a basis for parsing a range of type-logical formalisms. References Jason Eisner 1996. 'Efficient Normal-Form Parsing for Combinatory Categorial Grammar.' Proc. o/ ACL-3~. Dov M. Gabbay. 1996. Labelled deductive systems. Volume 1. Oxford University Press. Herman Hendriks. 1992. 'Lambek Semantics: nor- malisation, spurious ambiguity, partial deduction and proof nets', Proc. of Eighth Amsterdam Col- loquium, ILLI, University of Amsterdam. Mark Hepple. 1990. 'Normal form theorem proving for the Lambek calculus'. Proc. of COLING-90. Mark Hepple. 1992. ' Chart Parsing Lambek Gram- mars: Modal Extensions and Incrementality', Proc. of COLING-92. Mark Hepple. 1995. 'Mixing Modes of Linguistic Description in Categorial Grammar'. Proceedings EA CL-7, Dublin. Mark Hepple. 1996. 'A Compilation-Chart Method for Linear Categorial Deduction'. Proc. of COLING-96, Copenhagen. 9This combination corresponds to what in a direc- tional system Wittenburg (1987) has termed a 'predict- ive combinator', e.g. such as X/(Y/Z), Y/W =v W/Z. Indeed, the semantic result for the combination in the first-order system corresponds closely to that which would be produced under Wittenburg's rule. Mark Hepple & Glyn Morrill. 1989. 'Parsing and derivational equivalence.' Proc. of EA CL-4. Esther KSnig. 1989. 'Parsing as natural deduction'. Proc. of ACL-2Z Esther KSnig. 1990. 'The complexity of pars- ing with extended categorial grammars' Proc. of COLING-90. Esther KSnig. 1994. 'A Hypothetical Reasoning Al- gorithm for Linguistic Analysis.' Journal of Logic and Computation, Vol. 4, No 1, ppl-19. Joachim Lambek. 1958. 'The mathematics of sentence structure.' American Mathematical Monthly, 65, pp154-170. Joachim Lambek. 1961. 'On the calculus of syn- tactic types.' R. Jakobson (Ed), Structure of Language and its Mathematical Aspects, Proceed- ings of the Symposia in Applied Mathematics XII, American Mathematical Society. David Milward. 1995. 'Incremental Interpretation of Categorial Grammar.' Proceedings EACL-7, Dublin. Michael Moortgat. 1992. 'Labelled deductive sys- tems for categorial theorem proving'. Proc. of Eighth Amsterdam Colloquium, ILLI, University of Amsterdam. Michael Moortgat & Richard T. Oehrle. 1994. 'Ad- jacency, dependency and order'. Proc. of Ninth Amsterdam Colloquium. Michael Moortgat & Glyn Morrill. 1991. 'Heads and Phrases: Type Calculus for Dependency and Constituency.' To appear: Journal of Language, Logic and Information. Glyn Morrill. 1994. Type Logical Grammar: Cat- egorial Logic of Signs. Kluwer Academic Publish- ers, Dordrecht. Glyn Morrill. 1995. 'Higher-order Linear Logic Programming of Categorial Deduction'. Proc. of EA CL- 7, Dublin. Mark J. Steedman. 1989. 'Grammar, interpreta- tion and processing from the lexicon.' In Marslen- Wilson, W. (Ed), Lexical Representation and Pro- cess, MIT Press, Cambridge, MA. Kent Wittenburg. 1987. 'Predictive Combinators: A method for efficient parsing of Combinatory Categorial Grammars.' Proc. of ACL-25. 351 | 1997 | 44 |
Automatic Extraction of Aspectual Information from a Monolingual Corpus Akira Oishi Yuji Matsumoto Graduate School of Information Science Nara Institute of Science and TechnologT 8916-5 Takayama, Ikoma, Nara 630-01 Japan {ryo-o, matsu}~is, aist-nara, ac. j p Abstract This paper describes an approach to ex- tract the aspectual information of Japanese verb phrases from a monolingual corpus. We classify Verbs into six categories by means of the aspectual features which are defined on the basis of the possibility of co-occurrence with aspectual forms and ad- verbs. A unique category could be identi- fied for 96% of the target verbs. To evalu- ate the result of the experiment, we exam- ined the meaning of -leiru which is one of the most fundamental aspectual markers in Japanese, and obtained the correct recog- nition score of 71% for the 200 sentences. 1 Introduction Aspect refers to the internal temporal structure of events and is distinguished from tense, which has a deictic element in it, of reference to a point of time anchored by the speaker's utterance. There is a vo- luminous literature on aspect within linguistics and philosophy. Recently, computational linguists also have joined in the act within the context of machine translation or text understanding etc. For example, consider the following Japanese sentences (quoted from (Gunji, 1992)). (a). Ken-wa ima tonarino heya-de kimono-wo ki-te-i-ru. Ken-TOP now next room-LOC kimono-ACC put- on-PRES 'Ken is now putting on kimono in the next room.' (b). Ken-wa kesa-kara zutto axto kimono-wo ki-te-i-ru. Ken-TOP this morning-since always that kimono- ACC weax-PRES 'Ken has been wearing that kimono since this morn- ing.' (e). Ken-wa ano kimono-wo san-nen maeni ki-te-i-ru. Ken-TOP that kimono-ACC three-year before wear-PRES 'Ken has the experience of wearing that kimono three years ago.' Notice that English translations use separate lex- ical items (put on for (a) and wear for (b), (c)) and different aspectual configurations (the progres- sive for (a), the perfect progressive for (b), and an- other for (c)), while all Japanese sentences contain the same verbal form ki-te-i-ru. Thus. when the sys- tem tries to translate these sentences, it must be aware of the difference among them. This paper describes an approach to extract the aspectual information of Japanese verb phrases from a monolingual corpus. In the next section, we will classify Japanese verbs into six categories by means of aspectual features following the framework of (Bennett et al., 1990). The aspectual forms land adverbs are defined as the functions which operate on verbs' aspectual features and changes their val- ues. By using the constraints of the applicability of the functions, we can identify a unique category for each verb automatically. If one can acquire aspec- tual properties of verbs properly and know how the other constituents in a sentence operate on them, then the aspectual meaning of the whole sentence will be determined monotonically. To evaluate the result of the experiment, we will examine the mean- ing of -teiru which is one of the most fundamental aspectual forms, since the classification itself is dif- ficult to evaluate objectively. 
2 Realization Process of Aspectual Meaning We consider that the whole aspectual meaning of verb phrases is determined in the following order: verbs ---, arguments ~ adverbs ~ aspectual forms, Adverbs and aspectual forms are defined as indicators of such cognitive processes as "zoom- ing" and "focusing" which operate on the time-line representation. They are sinfilar to the notions "as- pectual coercion" (Moens and Steedman, 1988) or I The term "form" refers to grammatical morphemes which axe defined in terms of derivation. In this paper, we refer to the aspectual morphemes which follow verbs as "aspectual forms", including compound verbs such as .hazimevu(begin), suffixes with epenthetic -re such as - teiru, and aspectual nominals such as -bakaviOust now) etc. 352 "views" (Gunji, 1992). We explain each in turn. 2.1 Aspectual Categories of Verbs A number of aspectually oriented lexical-semantic representations have been proposed. ~Ve adopt and extend the feature-based framework proposed by (Bennett et al., 1990) in the spirit of (Moens and Steedman, 1988). They uses three features: ±dynamic, ±telic, and ±atomic. We add two more features: ±process and ±gradual. The feature dynamicity distinguishes between states(-d) and events(+d), and atomicity dis- tinguishes between point events(+a) and extended events(-a). The duration described by verbs is twofold: an ongoing process and a consequent state. The feature process concerns an ongoing process and distinguishes whether events described by verbs have the duration for which some actions unfold. The feature telicity distinguishes between culmi- native events(+t) and nonculminative events(-t). It presupposes a process. The feature graduality characterizes events in which some kind of change is included and the change gradually develops. We can classify verbs by means of different com- binations of the five features. Since there are depen- dences between features, only subsets of the com- binatorially possible configurations of features are defined as shown in the Table 1. In the Table 1, 1.stative verbs are those that are not dynamic. 2.atomic verbs are those that express an atomic event. 3.resultative verbs ex- press a punctual event followed by a new state which holds over some interval of time. 4.process+result verbs are those that express a complex situation consisting of a process which culminates in a new state. 5.non-gradual process verbs are those that express only processes and not changes of state. 6.gradual process verbs are those that have grad- uality. Although the verbs of the categories 5 and 6 don't contain telicity, the arguments of the verbs or some kinds of adverbs can set up the endpoint of the process as discussed later. In Vendlerian classi- fication, states correspond to 1, achievements to 2 and 3, accomplishments to 4 and 6, activities to 5, respectively (Vendler, 1957). Table 1: Aspectual categories of verbs 2.2 Arguments Tenny points out that internal argument of a verb can be defined as that which temporally delimits or measures out the event (Tenny, 1994). The direct internal argument can aspectually • 'measure out the event" to which the verb refers. To clarify what is meant by "'mesuring-out", she gives examples of three kinds of measuring-out: incremen- tal theme verbs (eat an apple, build a house etc.), change-of-state verbs (ripen the fruit etc.) and path objects of route verbs (climbed the ladder, play a sonata etc.). 
On the other hand, the indirect internal argument can provide a temporal terminus for the event de- scribed by the verb. The terminus causes the event to be delimited as in push the car to a gas station. There is only one kind of internal argument, in terms of thematic roles, that does provide an event termi- nus, and that is a goal. In terms of the current framework, both of them add the telicity to the verb which does not inherently contain the telicity. They play a role of framing the interval on which the focus should be brought. 2.3 Adverbs In general, adverbs focus on the subpart of the event described by a verb and give a more detailed de- scription. According to the discussion in (Moriyama, 1988), adverbs can be classified as follows in terms of the subpart on which they focus. Processes modifiers modify verbs which have process (+p). This class includes reduplicative ono- matopoeia such as gasagasa, batabata, suisui, ses- seto, butubutu, etc., which are expressing sound or manner of directed motion, and rate adverbs such as yukkuri(slowly), tebayaku(quickly), etc., which ex- press the speed of motions. They focus on the on- going process of events described by verbs. Gradual change indicators express the progress of change of state, such as dandan (grad- ually), sukosizutu (little by little), jojom (gradually), dondon (constantly). sidaini (by degrees), etc.. which modify gradual process verbs (+g) and focus on the process. Continuous adverbs are those that can mod- ify both states verbs (-d) and process verbs (+p), such as zutto(for a long time), itumademo(forever), etc. They express a continuance of an event or a maintenance of a state. categories features examples 1.stative 2.atomic 3.resultative 4.process+result 5.non-gradual process 6.gradual process [-d] [+d,+a] [+d,-a,-p] [+d,-a,+p,+t] [+d,-a,+p,-t,-g] [+d,-a,+p,-t,+g] aru(be), sobieru( se), sonzaisuru( e=isO hirameku(flash), mikakeru(notice) suwaru(sit down), tatu(stand up) korosu(kill), Urn(put on~wear), ake,' (open) aruku(walk), in(say), utau(sing) kusaru(turn sour), takamaru(become high) 353 Atomic adverbs make any events instantaneous, such as satto, ponto, gatatto, potarito, syunkan, etc., which express instantaneous sound emission or an instant. When these adverbs co-occur with verbs, the events are understood as instantaneous. This doesn't necessarily imply that the verb itself is in- stantaneous. Quantity regulators measure out events, such as gokiro aruku(walk 5kin). gojikan seizasita(sit straight for 5 hours), etc. These include time, dis- tance, and any quantity of contents. End state modifiers express the consequent state of events, such as mapputatuni(into two ex- act halves), konagonani(into pieces), pechankoni(be fiat), barabarani(come apart), etc. They focus on the resultant state. So far we have described adverbs which concern a single event, but some adverbs regulate the multiple events which involves iteration of a single event. By iteration, the whole process of a collective event can be taken up regardless of the inherent features of verbs. There are two kinds of Repetition adverbs: one regulates the whole quantity of the iteration of events such as san-kai(three times) or nan- domo(many times) etc., and the other describes the habitual repetition of events such as itumo(always) or syottyuu(very often) etc. Both describe many events each of which involves one person's act. Finally, we shall mention Time in the past ad- verbs. 
There are cases where the form -teiru, which marks the present tense, can co-occur with temporal adverbs describing the past. (See the example (1c) in the introduction.) It describes the experiential fact of an event. Such adverbs as katute(once), mukasi(in the past) and izen(before) determine the temporal structure of the event related with tense.
2.4 Aspectual Forms
The ability of aspectual forms to follow verbs is constrained by the inherent features of verbs. We briefly describe some of the aspectual forms used in the experiment. The forms -you-to-suru(be going to) and -kakeru(be about to) take up the occurrence of events. They can follow the verbs which are dynamic(+d). The form -tuzukeru(continue) can follow the verbs which have duration(-a). It can take up either the ongoing process or the resultant state. The form -hajimeru(begin) can follow the verbs which have process(+p) and takes up the start time of the process. On the other hand, the forms -owaru(cease) and -oeru(finish) can follow the verbs which are telic(+t) and take up the end point of the process. However, these constraints on the inherent features of verbs are only concerned with a single event. By iteration, the whole process of a collective event can be taken up regardless of the inherent features of verbs, as mentioned above. The forms -tutuaru(be in progress), -tekuru(come into state) and -teiku(go into state) focus on the gradual process of change. -Tutuaru(be in progress) takes it up as a kind of state, -tekuru(come into state) views it from the end state of change while -teiku(go into state) views it from the initial state of change. Both of -tekuru and -teiku have usages other than aspect, as in mot-tekuru(bring) or mot-teiku(take).
3 Experiment
We carried out an experiment to classify Japanese verbs into the six categories of Table 1 by means of corpus data. As shown in Figure 1, each category is defined in terms of the ability to co-occur with aspectual forms. However, the discrimination of the categories needs negative evidence, which we cannot use by definition: a corpus only provides positive evidence. Furthermore, some forms can be used regardless of the features and have usages other than aspect, as discussed in the previous section. We must establish a method which takes these facts into account.
[Figure 1: The relation between categories of verbs and features -- a decision tree over the aspectual features of Table 1, discriminating the six verb categories]
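The feature assignments of Table 1 can be encoded directly; a small Python sketch (the naming is ours) that also makes the discrimination tree of Figure 1 executable. A category matches a set of observations only if every observed feature is defined for it with the same value, reflecting the dependencies between features noted above.

```python
# Feature values per category, transcribed from Table 1
# (d = dynamic, a = atomic, p = process, t = telic, g = gradual).
CATEGORIES = {
    "stative":             {"d": False},
    "atomic":              {"d": True, "a": True},
    "resultative":         {"d": True, "a": False, "p": False},
    "process+result":      {"d": True, "a": False, "p": True, "t": True},
    "non-gradual process": {"d": True, "a": False, "p": True, "t": False, "g": False},
    "gradual process":     {"d": True, "a": False, "p": True, "t": False, "g": True},
}

def matching_categories(observed: dict) -> list:
    """Categories whose defined features agree with all observed values."""
    return [name for name, feats in CATEGORIES.items()
            if all(f in feats and feats[f] == v for f, v in observed.items())]

# matching_categories({"p": True, "t": False}) -> the two process categories;
# matching_categories({"g": True})             -> ["gradual process"] only.
```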
STEP:4 For each sentence in the corpus, find a verb and if it is contained in VERBS, then: STEP:4°I If the form following the verb is con- tained in the predefined list (Table5), make an array FORMS[/,j] positive (where i is the position of the verb in the list VERBS and j is the position of the form in the Table 5, see Table6), provided that the verb is not modi- fied by repetition adverbs(R). When the form is -tekuru or -teiku, put it on record only if the verb is modified by gradual change indica- tors(G). STEP:4-2 If the verb is modified by the adverbs contained in the array ADVERBS, refer to the adverb class label and add 1 to an array MODIFIED[i, k] (where i is the position of the verb in the list VERBS and k is the position of the adverb class label in the Table4. When the adverb is continuous one(C), distinguish the cases where the verb is followed by -teiru(C1) from the other eases(C2), see Table7), pro- vided that the verb is not followed by negative forms such as -nai or -zu, nor the forms which change the voice such as -reru(the passivizer) or -seru(the causativizer), since they affect the aspeetual properties of the attached verb. STEP:5 For each verb in VERBS: STEP:5-1 Narrow down the candidates by means of the array FORMS (on the basis of possible categories shown in Table 5). STEP:5-2 In the ease where the category of the verb cannot be uniquely identified in STEP:5- 1, i.e., other than the category 6, determine it by means of the array MODIFIED as follows: the category 6 the category 5 the category 4 the category 3 the category 2 the category 1 ambiguous ture. That is. the constituents have a governing- dependent relation. It is these constituents that form the head phrases of the Japanese Co-occurrence Dictionary which describes collocational information in the form of binary relations. Each item in the Japanese Co-occurrence Dictionary consists of a gov- erning word. a dependent word, the relator between the words, and supplementary co-occurrence item information which is composed of the frequency of the co-occurrence relation and a portion of the ac- tual example sentence from which the co-occurrence relation was taken. The algorithm used for classifying verbs is shown in Figure 2. 
Table 2: A part of the array PAIRS I "d~b I verb I r~q • I an(like that) tu(say) 1 an(like that) suru( do ) 1 ai (mutually) au(rneet ) 1 a~kawarazu(as usual) iru(be) 1 aikawarazu(as usual) otituku(settle) 1 aituide(one after another) sannyuusuru(join) 3 aituide(one after another) seturitusuru(establish) 4 Table 3: A part of the array ADVERBS adverb I label ] aikawarazu(as usual) C aegiaegi(gasping) P akaakato(brightly) P akuseku(busily} P atafuta(in a hurry} P atafutato(in a hurry} P attoiuma(in an instance) A ikiiki(vividly) P (if the verb is modified by gradual change indicators(G)) (if modified by process modifiers(P) and not by end state modifiers(E)) (if modified by both process modifiers(P) and end state modifiers(E)) (if modified by end state modifiers(E) and not byprocess modifiers(P)) (if modified by only atomic adverbs(A)) (if modified by continuous adverbs without being followed by .teiru(C2) and not modified by process modifiers(P) nor gradual change indicators(G) nor end state modifiers(E)) (otherwise) Figure 2: The algorithm for classifying verbs 355 Table 4: Results of the classification of adverbs adverb class(label) process modifiers (P) gradual change indicators (G) continuous adverbs (C) atomic adverbs (A) quantity regulators (Q) end state modifiers (E) repetition adverbs (R) ' | U[|ll I| I[| . ~ l ~ I']~i .Ill| ] total I examples 470 yukkuri(slowly), gasagasa, batabata, sui~ui, sesseto, butubutu,... 52 sidaini(gradually), masumasu(increasingly), jojoni(gradually)... 78 sonomama(as it is), zutto(for a long time), itumademo(forever)... 294 satto, ponto, gatatto, potarito, syunkan(instantaneously)... 12 180-do(180 degree), ippo(a step), ippai(a cup)... 86 mapputatuni(into two exact halves), konagonani(into powder)... 122 nandomo(many times), itumo(always), syottyuu(very often)... Table 5: The aspectual forms used in the experiment forms followable verb categories -you-to-suru(be going to),-kakeru(be about to) - tuzukeru(continue) -hajimeru(begin) -owaru(end) and -oeru(finish) -tutuaru(be in progress), -tekuru(come into state), -teiku(go into state) 2, 3, 4, 5, 6 3, 4, 5, 6 4,5,6 5,6 verb akkasuru(become worse) nigiru(catch) anteisuru(become stable) isikisuru(become conscious) kotonaru(differ) idousuru(move) ijisuru(maintain) tigau( differ) sodatu(grow) sodateru(bring up) ittisuru(agree) Tabl 6: A part of the array FORMS -kakeru + - + + - + - + - + + - + forms I -tuzu er l -hajime l -owa, I -tutuaru - + - + - + - + - + m Table 7: A part of the array MODIFIED verb adverb class labels PIGICl C2]AIQIE akkasuru(become worse) 0 5 0 0 1 0 0 nigiru(eatch) 0 1 0 1 0 0 1 anteisuru(become stable) 0 1 1 1 0 0 1 isikisuru(become conscious) 0 1 0 1 0 0 0 kotonaru( differ) 0 1 0 0 0 0 1 idousuru(move) 1 1 0 1 1 0 0 ijisuru(maintain) 0 0 0 4 0 0 0 tigau(differ) 0 1 0 0 1 0 0 sodatu(grow) 5 3 0 0 0 1 1 sodateru(bring up) 3 1 0 1 0 0 0 ittisuru(agree) 0 0 0 0 3 0 2 356 The steps 1, 2 find 3 are the processes to deter- mine the target verbs. There are 431 verbs modified by the classified adverbs more than 4 times. In step 2, we classify adverbs on the basis of the discussion in the previous section. Although the classification has been done by hand, it is much easier than that of verbs, since adverbs are fewer than verbs in num- ber (2,563 vs. 12,766 in the corpus) and have higher "iconicity" -- the isomorphism between form and meaning -- than verbs. 
This classification of ad- verbs is used not only for determining the aspectual categories of verbs but also for examining the mean- ing of -teiru as mentioned later. The step 4 is a process to register the co-occurring forms and adverbs for each verb. By using these data, we identify the aspectual categories of verbs in the step 5. Since the categories cannot be uniquely identified by aspectual forms only, we use adverbs which can modify the only restricted set of verbs as shown in Table 8. Table 8: categories adverb cl ass (Ta-b'e--i~ Adverb classes and their modifiable verb verb cate~ process modifiers (P) gradual change indicators (G) continuous adverbs (C) atomic adverbs (A) quantity regulators (Q) end state modifiers (E) 4,5,6 6 1,3,4,5,6 2,3,4,5,6 1,3,4,5,6 3, 4, 6 3.2 Evaluation and Discussion Out of 431 target verbs, we could uniquely identify categories for 375 verbs. As for the rest 56 verbs, 37 verbs were identified in the step 5-2 as the category which was not included in the set of categories out- putted by the step 5-1. This seems to be due to the failure to detect the expression of repetition, there- fore, we chose the category determined in the step 5-2. Table 9 shows the results. We confirmed that more than 80% of verbs are correctly classified. However, this is a subjective judgement. To evaluate the results of the classifi- cation more objectively, we focus on one evaluation metric; namely the automatic examination of the meaning of -teiru which can represent several dis- tinct senses as described in the introduction. The form -teiru indicates "zoom in" operation: it is a function that takes an event as its input and returns a type of states, which refers to unbounded regions i.e., a part of the time-line with no distinct boundaries. Figure 3 shows the time-line representa- tion for each aspectual category of verbs. Aspectual distinctions correspond to how parts of the time-line are delineated. 1. staUvo verbs t ) t l (1) (2) 2. atomic verbs ......................... --£) .......................... ; ........ ; (3) 3. resultatlve verbs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . t ~' (4) t I, (5) 4. process+result verbs ............. © © t J ~__I 4 J (s) (7) (e) 5. non-gradual procese verbs ..... ---(3 t J ................... ; .... -i (9) (10) gradual process verbs t # t ) (12) (11) Figure 3: The time-line representation for aspectual categories of verbs In Figure3, thick line segments signify regions, dashed line segments signify unbounded ends of re- gions and large open dots signify points in time boundaries or punctate events). Table 9: The verb classification obtained by the ex- periment [ verb catel~ory [ no. I examples 1.stative 30 2.atomic 19 3.resultative 29 4.process+result 30 5.non-gradual process 94 6.gradual process 210 ambiguous 19 mitumeru(stare) ijisuru(maintatn) sumu(live) sonzaisuru(ezist) nagameru(view, damaru(be silent) kumkaesu(repeat) tukaeru(can be used) ... nageru(throw) haneagaru(leap up) kizuku(notiee) mikakeru(happen to see) gouisuru(arrive at an agreement) kireru(snap) furnikiru(launch out) ... nureru(become wet) turnaru(become packed) tunagaru(make a connection) au(meet) suwaru(sit down) tatamu(fold) kureru(get dark) atehamaru(fit) ... tateru(build) nobasu(lengthen) rnatomeru(put together) narabu(form a line) tutumu(wrap) majiwaru(associate) tiru(fall) torikakomu(surround) ... nomu(drink) hakobu(carry) tanosimu(enjoy) kansatusuru(observe) furueru(shake) hibiku(ring) tobimawaru(fly about) taberu(eat) sugosu(spend)... 
akkasuru(get worse) tuyornaru(get strong) takarnaru(become raised) sinkoukasuru(get more acute) seityousuru(grow up) kappatukasuru(make active) ... kuwawaru(join) tutomeru(be employed) tomonau(accompany) tazuneru(visit) rainitisuru(eome to Japan) uwamawaru(be more than) hokoru(boast) ... 357 Since -teiru cannot include a time instant at which a state is drastically changed, it must denote one of the intervals depicted below the lines. The interval (1) in Figure3 designates a state which is a part of the state described by a lex_ical stative verb. It means a state holding before a speaker's eyes. It has been stated from (Kindaichi, 1976) that the form -teiru has three distinct senses: "a simple state', 'a progressive state' and 'a consequent state'. (1) corresponds to a simple state. (4) and (7) to a con- sequent state, (6), (9) and (11) to a progressive state. respectively. Though not represented in Figure 3, a consequent state can be taken up with the verbs of categories 5 and 6 if the endpoints of the processes are set up by explicit expressions. Kudo (Kudo, 1982) has pointed out that there are inherent meaning and derivative meaning for both progressive and consequent states and has sorted out them as follows. (i) inherent meaning of 'a progressive state': an ongoing process (ii) derivative meaning of 'a progressive state': an iteration (iii) inherent meaning of 'a consequent state': a re- sultative state (iv) derivative meaning of 'a consequent state': an experiential state (v) otherwise: a simple state (ii) is the above-mentioned process of a collective event; "a line as a set of points", so to speak. (iv) is a state where a speaker has an experience of the event described by a verb and corresponds to the intervals (2), (3), (5), (8), (10), (12) in Figure3. These derivative meanings are conditioned syntacti- cally or contextually, that is, they are stipulated as derivative by explicit linguistic expressions such as adverbials etc., while not concerned with the inher- ent features of verbs -- they can appear with most of verbs regardless of their aspectual categories. We carried out an experiment to examine the meaning of -teiru automatically by means of the clas- sifications of verbs and adverbs obtained in the pre- vious experiment. Table 10 shows the determination process of the meaning of -teiru. We checked the cases in Table 10 downward from the top. Table 11 shows the results obtained from running the process of Table 10 on 200 sentences containing -teiru which are randomly selected from the EDR Japanese Corpus. The precision on the whole is 71%. Note that the sense (i) 'an ongoing process' has high recall but low precision, while (iii) 'a resultative state' and (iv) 'an experiential state' show the opposite. This is due to the fact that the test sentences contain many "speech-act" verbs such as syuchousuru(insist), se- tumeisuru(explain), hyoumeisuru( declare) etc. They are classified as 5.non-gradual process verbs, and by Table 10: The determination process of the meaning of -teiru case output (1).the verb is modified by repetition (ii) an iteration adverbs( R} (2).the verb is modified by time in the past adverbs(P) or its category is 2. atomic verbs (3).the category of the verb is 1. stative verbs (4).the category of the verb is 3. 
resultative verbs (5).the verb is modified by process modifiers(P} or gradual change indicators (G} (6).the endpoint of the process is explicitly set up (the verb is modified by end state modifiers(E) ot quantity regulators(Q) or it takes a goal arsument i.e., ni(~o)-case etc. (7).the process cannot be taken up (the verb is modified by atomic adverbs(A) or sudeni(already), etc.) (iv) an experiential state (v) a simple state (iii) a resultative state (i) an ongoing process (iii} a resultative state (iii) a resultative state (8).the category of the verb is (i} an ongoing process 5. non-gradual process or 6. gradual process verbs (9).the category of the verb is ambiguous: (i) or (iii) 4. process+result verbs the case 8 in Table 10, the senses of -teiru follow- ing them are determined as (i) 'an ongoing process'. However, they takes a quotative to-case that marks the content of the statement and this measures out the event described by verbs. Therefore the resulta- tive or experiential readings are preferred. The other errors are caused by polysemous verbs such as kakaru (hangflie//all...) or ataru (hit/strike~be exposed/shine...). Their aspectual properties are changed by the complements they take. The analysis of how complements influence the aspectual properties of their governing verbs is beyond the scope of this paper. It seems to be a mat- ter of pragmatic world knowledge rather than sense- semantics (but see (Verkuyl, 1993) for English). 4 Related Work The approach proposed here is similar to that of Dorr's (Dorr, 1992: Dorr. 1993), but different from it in scale and determinability of the categories. She adopts the four-way classification system following Vendler (Vendler, 1957) and utilizes Dowty's test (Dowty, 1991) for deternfining aspectual categories of English verbs. She reports the results obtained from running the program on 219 sentences of the LOB corpus. Although we cannot know how many verbs she tested because she has shown only a subset of the verbs, the program was not able to pare down the aspectual category to one in 18 cases out of 27 verbs. Brent (Brent, 1991) discusses an implemented program that automatically classifies verbs into two groups, stative vs. non-stative, on the basis of their syntactic contexts. He uses the progressive and rate- 358 Table 11: The restdts of the evaluation ex)eriment the sense of -te=ru judgement by human(a) output of program(b) number of agreements(c) recall(%) c/a x 100 precision(%) c/b x 100 (i) an ongoing process 95 137 88 93 64 (ii) an iteration 4 2 2 50 100 (iii) a resultative state (iv) an experiential state (v) a simple state 29 48 93 39 15 14 36 93 19 19 15 79 79 ambiguous 14 12 9 64 75 total 200 200 142 71 71 adverbs constructions in combination with some sort of statistical smoothing technique. He identified eleven verbs as purely stative, of the 204 distinct verbs occurring at least 100 times in the LOB cor- pus. We think that the extraction of aspectual infor- mation must be based on principles that are well- grounded in linguistic theory. However, some sort of noise reduction technique such as the confidence intervals used by Brent may be needed to detect the cue more accurately. 5 Conclusion In this paper, we have proposed a method for classi- fying Japanese verbs on the basis of surface evidence from a monolingual corpus, and examined the mean- ing of the form -teiru by means of the classifications of verbs and adverbs. 
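The cascade of Table 10 is easy to make executable. A Python sketch, assuming a verb's category label, the set of adverb-class labels modifying it, and a flag for an explicit goal (ni-case) argument; the 'PAST' label for time-in-the-past adverbs is ours, chosen to avoid a clash with the P of process modifiers.

```python
def teiru_sense(category: str, adv: set, has_goal_arg: bool) -> str:
    """Determine the sense of -teiru (Table 10), trying the cases top-down."""
    if "R" in adv:                                   # (1) repetition adverbs
        return "(ii) an iteration"
    if "PAST" in adv or category == "atomic":        # (2) time in the past / atomic
        return "(iv) an experiential state"
    if category == "stative":                        # (3)
        return "(v) a simple state"
    if category == "resultative":                    # (4)
        return "(iii) a resultative state"
    if "P" in adv or "G" in adv:                     # (5) process is taken up
        return "(i) an ongoing process"
    if "E" in adv or "Q" in adv or has_goal_arg:     # (6) endpoint explicitly set up
        return "(iii) a resultative state"
    if "A" in adv:                                   # (7) process cannot be taken up
        return "(iii) a resultative state"          #     (also sudeni(already) etc.)
    if category in ("non-gradual process", "gradual process"):  # (8)
        return "(i) an ongoing process"
    return "ambiguous: (i) or (iii)"                 # (9) process+result verbs
```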
The aspect of verb phrases provides not only the temporal configuration within a single event but also the information needed for processing temporal re- lation between multiple events (Dowty, 1986; Pas- sonneau, 1988; Webber, 1988). Furthermore, the lexical aspect of verbs is closely related with their deep complement structures which may not be directly reflected on the surface argu- ment structures. Therefore, by combining the aspec- tual categories of verbs and those that are defined in terms of their surface argument structures, we can obtain an elaborate classification based on seman- tic types of verbs. (Preliminary experiments on this issue can be seen in (Oishi and Matsumoto, 1996).) Thus, the information obtained here can be used for various applications. References S. W. Bennett, T. Herlick, K. Hoyt, J. Lifo, and A. Santisteban. 1990. A computational model of aspect and verb semantics. Mashine 7~ranslation, 4(4):247-280. M. It. Brent. 1991. Automatic semantic classification of verbs from their syntactic contexts: An implemented classifier for stativity. In Proceedings of the 5th ACL European Chapter, pages 222-226. B. J. Dorr. 1992. A parameterized approach to integrating aspect with lexical-semantic for machine translation. In Proceedings of the 30th Annual Meeting of ACL. pages 257-264. B. J. Dorr. 1993. Machine Translation -- A View from the Lezicon. The MIT Press. D. R. Dowty. 1986. The effects of aspectual class on the temporal structure of discourse. Linguistics and Philosophy, 9(1):37- 61. D. R. Dowty. 1991. Word Meaning and Montague Grammar : The Semantics of Verbs and Times in Generative Semantics and in Montague's PTQ, volume 7 of Studies in Linguistics and Philosophy(SLAP). Kluwer Academic Publishers. Japan Electronic Dictionary Research Institute Ltd. EDIt. 1995. the EDR Electronic Dictionary Technical Guide. (in Japanese). T. Gunji. 1992. A proto-lexical analysis of temporal properties of japanese verbs. In Linguistics Studies on Natural Language, Kyung Hee Language Institute Monograph One, pages 197- 217. Hanshin Publishing. H. Kindaichi. 1976. Nihongo Dousi-no Asupekuto ('Aspect of Japanese Verbs'). Mugi Shobo. {in Japanese). M. Kudo. 1982. Siteiru-kei-no imi-kijutu ('the description of the meaning of the form -teiru'). Muzashi University Jinbun Gakkai Zasshi, 13(4). M. Moens and M. Steedman. 1988. Temporal ontology and tem- poral reference. Computational Linguistics, 14(2):15-28. T. Moriyama. 1988. Nihongo Doushi Jutsugobun no Kenkyuu ('A Study of Japanese Verb-pradicate Sentences'). Meiji Shoin. (in Japanese). A. Oishi and Y. Matsumoto. 1996. Detecting the organization of semantic subclasses of Japanese verbs. Technical Iteport NAIST-IS-TIt96019, Nara Institute of Science and Technology. It. J. Passonneau. 1988. A computational model of the semantics of tense and aspect. Computational Linguistics, 14(2}:44-60. C. L. Tenny. 1994. Aspectual Roles and the Syntaz-Semantics Interface, volume 52 of Studies in Linguistics and Philoso- phy(SLAP). Kluwer Academic. Z. Vendler. 1957. Verbs and times. Philosophical Review, 66:143-160. H. Verkuyl. 1993. A Theory of Aspectuality. Cambridge Uni- versity Press. B. L. Webber. 1988. Tense as discourse anaphor. Computational Linguistics, 14(2):61-73. 359 | 1997 | 45 |
A Comparison of Head Transducers and Transfer for a Limited Domain Translation Application

Hiyan Alshawi and Adam L. Buchsbaum
AT&T Labs, 180 Park Avenue, Florham Park, NJ 07932-0971, USA
{hiyan,alb}@research.att.com

Fei Xia
Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
[email protected]

Abstract

We compare the effectiveness of two related machine translation models applied to the same limited-domain task. One is a transfer model with monolingual head automata for analysis and generation; the other is a direct transduction model based on bilingual head transducers. We conclude that the head transducer model is more effective according to measures of accuracy, computational requirements, model size, and development effort.

1 Introduction

In this paper we describe an experimental machine translation system based on head transducer models and compare it to a related transfer system, described in Alshawi 1996a, based on monolingual head automata. Head transducer models consist of collections of finite state machines that are associated with pairs of lexical items in a bilingual lexicon. The transfer system follows the familiar analysis-transfer-generation architecture (Isabelle and Macklovitch 1986), with mapping of dependency representations (Hudson 1984) in the transfer phase. In contrast, the head transducer approach is more closely aligned with earlier direct translation methods: no explicit representations of the source language (interlingua or otherwise) are created in the process of deriving the target string. Despite the simple direct architecture, the head transducer model does embody modern principles of lexicalized recursive grammars and statistical language processing. The context for evaluating both the transducer and transfer models was the development of experimental prototypes for speech-to-speech translation.

In the case of text translation for publishing, it is reasonable to adopt economic measures of the effectiveness of translation systems. This involves assessing the total cost of employing a translation system, including, for example, the cost of manual post-editing. Post-editing is not an option in speech translation systems for person-to-person communication, and real-time operation is important in this context, so in comparing the two translation models we looked at a variety of other measures, including translation accuracy, speed, and system complexity.

Both models underlying the translation systems can be characterized as statistical translation models, but unlike the models proposed by Brown et al. (1990, 1993), these models have non-uniform linguistically motivated structure, at present coded by hand. In fact, the original motivation for the head transducer models was that they are simpler and more amenable to automatic model structure acquisition, while the transfer component of the traditional system was designed with regard to allowing maximum flexibility in mapping between source and target representations to overcome translation divergences (Lindop and Tsujii 1991; Dorr 1994). In practice, it turned out that adopting the simpler transducer models did not involve sacrificing accuracy, at least for our limited domain application.

We first describe the transfer and head transducer approaches in Sections 2 and 3 and the method used to assign the numerical parameters of the models in Section 4. In Section 5,
we compare experimental systems, based on the two approaches, for English-to-Chinese translation of air travel enquiries, and we conclude in Section 6.

2 Monolingual Automata and Transfer

In this section we review the approach based on monolingual head automata together with transfer mapping. Further details of this approach, including the analysis, transfer, and generation algorithms, appear in Alshawi 1996a.

2.1 Monolingual Relational Models

We can characterize the language models used for analysis and generation in the transfer system as quantitative generative models of ordered dependency trees. In the dependency trees generated by these models, each node is labeled with a word w from the vocabulary V of the language in question; the nodes (and their word labels) immediately dominated by such a node are the dependents of w in the dependency derivation. Dependency tree arcs are labeled with symbols taken from a set R of dependency relations. These monolingual models are reversible, in the sense that they can be used for analysis or generation. The motivation for these models is similar to that for Probabilistic Link Grammar (Lafferty, Sleator, and Temperley 1992), one difference being that the head automata derivations are always trees.

The models are quantitative in that they assign a real-number cost to derivations. Various cost functions are possible, though in the experiments reported in this paper, a discriminative cost function is used, as discussed in Section 4. In the monolingual models, derivation events are actions performed by relational head acceptors, a particular type of finite state automata associated with each word in the language.

A relational head acceptor writes (or accepts) a pair of symbol sequences, a left sequence and a right sequence. The symbols in these sequences are taken from the set R of dependency relations. In a dependency derivation, an acceptor is associated with a node with word w, and the sequences written by the acceptor correspond to the relation labels of the arcs to the left and right of the node. In other words, they are the dependency relations between w and the dependents of w to its left and right. The possible actions taken by a relational head acceptor M in state q_i are:

• Left transition: write a symbol r onto the right end of the left sequence and enter state q_{i+1}.

• Right transition: write a symbol r onto the left end of the right sequence and enter state q_{i+1}.

• Stop: stop in state q_i, at which point the sequences are considered complete.

Derivation of ordered dependency trees proceeds recursively by generating the dependent relations for a node according to the word and acceptor at that node, and then generating the trees dominated by these relation edges. This process involves the following actions in addition to the acceptor actions above:

• Selection of a word and acceptor to start an entire derivation.

• Selection of a dependent word and acceptor given a head word and a dependency relation.

2.2 Transfer

Transfer in this model is a mapping between unordered dependency trees. Surface ordering of dependent phrases of either the source or target is not taken into account in the transfer mapping. This ordering is completely defined by the source and target monolingual models.

Our transfer model involves a bilingual lexicon specifying paired source-target fragments of dependency trees.
A bilingual lexical entry (see Alshawi 1996a for more details) includes a mapping function between the source and target nodes of the fragments. Valid transfer mappings are defined in terms of a tiling of the source dependency tree with source fragments from bilingual lexicon entries so that the partial mappings defined in entries are extended to a mapping for the entire source tree. This tiling process has the side effect of creating an unordered target dependency representation. The following non-deterministic actions are involved in the tiling process:

• Selection of a bilingual entry given a source language word, w.

• Matching the nodes and arcs of the source fragment of an entry against a local subgraph including a node labeled by w.

3 Bilingual Head Transduction

3.1 Bilingual Head Transducers

A head transducer is a transduction version of the finite state head acceptors employed in the transfer model. Such a transducer M is associated with a pair of words, a source word w and a target word v. In fact, w is taken from the set V1 consisting of the source language vocabulary augmented by the "empty word" e, and v is taken from V2, the target language vocabulary augmented with e. A head transducer reads from a pair of source sequences, a left source sequence L1 and a right source sequence R1; it writes to a pair of target sequences, a left target sequence L2 and a right target sequence R2 (Figure 1).

[Figure 1: Head transducer M converts the sequences of left and right relations (r1 ... rj) and (r_{j+1} ... rn) of w into left and right relations of v.]

Head transducers were introduced in Alshawi 1996b, where the symbols in the source and target sequences are source and target words respectively. In the experiment described in this paper the symbols written are dependency relation symbols or the empty symbol e. While it is possible to construct a translator based on head transduction models without relation symbols, using a version of head transducers with relation symbols allowed for a more direct comparison between the transfer and transducer systems, as discussed in Section 5.

We can think of the transducer as simultaneously deriving the source and target sequences through a series of transitions followed by a stop action. From a state q_i these actions are as follows:

• Left transition: write a symbol r1 onto the right end of L1, write symbol r2 to position a in the target sequences, and enter state q_{i+1}.

• Right transition: write a symbol r1 onto the left end of R1, write a symbol r2 to position a in the target sequences, and enter state q_{i+1}.

• Stop: stop in state q_i, at which point the sequences L1, R1, L2 and R2 are considered complete.

In simple head transducers, the target positions a can be restricted in a similar way to the source positions, i.e., the right end of L2 or the left end of R2. The version used in the experiment allows additional positions, including the left end of L2 and the right end of R2. Allowing additional target positions increases the flexibility of transducers in the translation application without an adverse effect on computational complexity. On the other hand, we restrict the source side positions as indicated above to keep the transduction search similar in nature to head-outward context free parsing.
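The transition actions above can be made concrete with a small sketch. The encoding below (tuples for transitions, deques for the four relation sequences) is our own illustration under the assumptions stated in the comments, not the authors' implementation; a state-to-transition table and cost bookkeeping would be layered on top in a full system.

```python
from collections import deque

def derive(path):
    """Apply a chosen sequence of head-transducer transitions.

    Each path element is (direction, r1, r2, target_pos):
      direction  - 'L' or 'R' (left or right source transition)
      r1         - source relation symbol written to L1 or R1
      r2         - target relation symbol, or None for an epsilon write
      target_pos - one of 'L2-left', 'L2-right', 'R2-left', 'R2-right'
    Returns the completed sequences (L1, R1, L2, R2).
    """
    L1, R1, L2, R2 = deque(), deque(), deque(), deque()
    for direction, r1, r2, target_pos in path:
        if direction == 'L':
            L1.append(r1)        # right end of the left source sequence
        else:
            R1.appendleft(r1)    # left end of the right source sequence
        if r2 is not None:       # epsilon relations write nothing
            {'L2-left': L2.appendleft, 'L2-right': L2.append,
             'R2-left': R2.appendleft, 'R2-right': R2.append}[target_pos](r2)
    return list(L1), list(R1), list(L2), list(R2)

# Example (hypothetical relations): a subject written on both sides,
# and a source-only adjunct relation with an epsilon target write.
print(derive([('L', 'subj', 'subj', 'L2-right'),
              ('R', 'adjunct', None, 'R2-left')]))
```

Note how the four writable target positions, rather than the two allowed in simple head transducers, fall out of the `target_pos` dispatch table.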
3.2 Recursive Head Transduction

We can apply a set of head transducers recursively to derive a pair of source-target ordered dependency trees. This is a recursive process in which the dependency relations for corresponding nodes in the two trees are derived by a head transducer. In addition to the actions performed by the head transducers, this derivation process involves the actions:

• Selection of a pair of words w0 in V1 and v0 in V2, and a head transducer M0 to start the entire derivation.

• Selection of a pair of dependent words w' and v' and transducer M' given head words w and v and source and target dependency relations r1 and r2 (w, w' in V1; v, v' in V2).

The recursion takes place by running a head transducer (M' in the second action above) to derive local dependency trees for corresponding pairs of dependent words (w', v').

4 Event Cost Assignment

The transfer and head transduction derivation models can be formulated as probabilistic generative models; such formulations were given in Alshawi 1996a and 1996b respectively. Under such a formulation, negated log probabilities can be used as the costs for the actions listed in Sections 2 and 3. However, experimentation reported in Alshawi and Buchsbaum 1997 suggests that improved translation accuracy can be achieved by adopting cost functions other than log probability. This is true in particular for a family of discriminative cost functions.

We define a cost function f as a real valued function taking two arguments, an event e and a context c. The context c is an equivalence class of states under which an action is taken, and the event e is an equivalence class of actions possible from that set of states. We write the value of the function as f(e|c), borrowing notation from the special case of conditional probabilities. The pair (e|c) is referred to as a choice. The cost of a solution (i.e., a possible translation of an input string) is the sum of costs for all choices in the derivation of that solution.

Discriminative cost functions, including likelihood ratios (cf. Dunning 1993), make use of both positive and negative instances of performing a task. Here we take a positive instance to be the derivation of a "correct" translation, and a negative instance the derivation of an "incorrect" translation, where correctness is judged by a speaker of both languages. Let n+(e|c) be the count of taking choice (e|c) in positive instances resulting from processing the source sentences in a training corpus. Similarly, let n-(e|c) be the count of taking (e|c) for negative instances.

The cost function used in the experiments is computed as:

    f(e|c) = log(n+(e|c) + n-(e|c)) - log(n+(e|c)).

(By comparison, the usual "logprob" cost function using only positive instances would be log(n+(c)) - log(n+(e|c)).) For unseen choices, we replace the context c and event e with larger equivalence classes.
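The cost function above transcribes directly into code. The sketch below is a minimal illustration: the count tables and the `generalize` backoff interface are our assumptions, since the paper does not spell out how the larger equivalence classes are organized.

```python
import math

def cost(n_pos, n_neg, event, context, generalize=None):
    """f(e|c) = log(n+(e|c) + n-(e|c)) - log(n+(e|c)).

    n_pos, n_neg: dicts mapping (event, context) pairs to counts.
    generalize: optional function mapping (event, context) to a coarser
    (event, context) pair; it is applied repeatedly for unseen choices
    and must eventually reach a class with a positive count.
    """
    key = (event, context)
    while n_pos.get(key, 0) == 0:
        if generalize is None:
            raise KeyError("unseen choice and no generalization supplied")
        key = generalize(*key)
    pos = n_pos[key]
    neg = n_neg.get(key, 0)
    return math.log(pos + neg) - math.log(pos)
```

A choice taken only in positive derivations gets cost log(pos) - log(pos) = 0, while one taken mostly in judged-incorrect derivations is penalized, which is exactly the discriminative behavior the text motivates.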
5 Effectiveness Comparison

5.1 English-Chinese ATIS Models

Both the transfer and transducer systems were trained and evaluated on English-to-Mandarin Chinese translation of transcribed utterances from the ATIS corpus (Hirschman et al. 1993). By training here we simply mean assignment of the cost functions for fixed model structures. These model structures were coded by hand as monolingual head acceptor and bilingual dependency lexicons for the transfer system and a head transducer lexicon for the transducer system.

Positive and negative counts for cost assignment were collected from two sources for both systems and an additional third source for the transfer system. The first set of counts was derived by processing traces using around 1200 sample utterances from the ATIS corpus. This involved running the systems on the sample utterances, starting initially with uniform costs, and presenting the resulting translations to a human judge for classification as correct or incorrect. The second source of counts was hand-tagging around 800 utterance transcriptions to identify correct and incorrect attachment points for prepositional phrases, PP-attachment being important for English-Chinese translation (Chen and Chen 1992). This attachment information was converted to corresponding counts for head-dependent choices involving prepositional phrase attachment. The additional source of counts used in the transfer system was an unsupervised training method in which 13000 training utterances were translated from English to Chinese, and then back again; the derivations were classified as positive (otherwise negative) if the resulting back-translation was sufficiently close to the original English, as described in Alshawi and Buchsbaum 1997.

There was a strong systematic relationship between the structure of the models used in the two systems in the following sense. The head transducers were built by modifying the English head acceptors defined for the transfer system. This involved the addition of target relations, including some epsilon relations, to automaton transitions. In some cases, the automata needed to be modified to include additional states, and also some transitions with epsilon relations on the English (source) side. Typically, such cases arise when an additional particle needs to be generated on the target side, for example the yes-no question particle in Chinese. The inclusion of such particles often depended on additional distinctions not present in the original English automata, hence the requirement for additional states in the bilingual transducer versions.

                     Transfer   Head Transducer
  Word error rate      16.2          11.7
  (per cent)
  Time                 1.09          0.17
  (seconds/sent.)
  Space                1.67          0.14
  (Mbytes/sent.)

Table 1: Accuracy, time, and space comparison

5.2 Performance

To evaluate the relative performance of the two translators, 200 utterances were chosen at random from a previously unseen test sample of ATIS utterances having no overlap with samples used in model building and cost assignment. There was no restriction on utterance length or ATIS "class" (dialogue or one-off queries, etc.) in making this selection. These English test utterances were processed by both systems, yielding lowest cost Chinese translations.

Three measures of performance--accuracy, computation time, and memory usage--were compared, with the results in Table 1, showing improvements by the transducer system for all three measures. The accuracy figures are given in terms of translation word error rate, a measure we believe to be somewhat less subjective than sentence level measures of grammaticality and meaning preservation. Translation word error rate is defined as the number of words in the source which are judged to have been mistranslated. For the purposes of this definition, mistranslation of a source word includes choice of the wrong target word (or words), the absence (or incorrect addition) of a particle related to the word, and the generation of a correct target word in the wrong position.
The improvement in word error rates of the transducer system was achieved without the benefit of the additional counts from unsupervised training, mentioned above, with 13,000 utterances. Earlier experiments (Alshawi and Buchsbaum 1997) show that the unsupervised training does lead to an improvement in the performance of the transfer system. However, this improvement is relatively small: around 2% reduction in the number of utterances containing translation errors. (Word error rates for direct comparison with the results above are not available.) We also know that some additional improvement of the transducer system can be achieved by increasing the amount of training data: with a further 600 supervised training samples (for a total of 1800), the error rate for the transducer system falls to 11.0%.

The processing times reported above are averages over the same 200 test utterances used in the accuracy evaluation. These timings are for an implementation of the search algorithms in Lisp on a Silicon Graphics machine with a 150MHz R4400 processor. The space figures give the average amount of memory allocated in processing each utterance.

5.3 Model Size and Development Effort

The performance comparison above is, of course, not the whole story, particularly since manual effort was required to build the model structures before training for cost assignment. However, we believe the conclusion for the improvement in performance of the transducer system is valid because the amount of effort in building and training the transfer models exceeded that for the transducer system. After construction of the English head acceptor models, common to both systems, a rough estimate of the effort required for completing the models for English to Chinese translation is 12 person-months for the transfer system and 3 person-months for the transducer system. With respect to training effort, as noted, the amount of supervised training effort in the main experiment was the same for both systems (supervised discriminative training for 1200 utterances plus tagging of prepositional attachments for 800 utterances), while the transfer system also benefited from unsupervised training with 13000 utterances.

In comparing models for language processing, or indeed other tasks, it is reasonable to ask if performance improvements by one model over another were achieved through an increase in model complexity. We looked at three measures of model complexity for the two systems, with the results shown in Table 2. The first was the number of lexical entries. For the transfer model this includes both monolingual entries and the bilingual entries required for the English to Chinese direction; there are only bilingual entries in the transducer model. Comparing the structural complexity of the two models is somewhat more difficult, but we can make a graph-theoretic abstraction and count the number of edges in model components. Both systems include edges for automaton state transitions. The edge count for the transfer system includes the number of dependency graph edges in bilingual entries. Finally, we also looked at the number of choices for which training counts were available, i.e., the number of model numerical parameters for which direct evidence was present in training data.

                    Transfer   Head Transducer
  Lexical entries     3,250         1,201
  Edges              72,180        47,910
  Choices           100,472        67,011

Table 2: Lexicon and model size comparison
As can be seen from Table 2, the transducer system has a lower model complexity according to all three measures.

6 Conclusion

There are many aspects to the effectiveness of the translation component of a speech translator, making comparisons between systems difficult. There is also an inherent difficulty in evaluating the translation task: a single source utterance has many valid translations and the validity of translations is a matter of degree. Despite this, we believe that in the comparison considered in this paper, it is reasonable to make an overall assessment that the head transducer system is more effective than the transfer-based system. One justification for this conclusion is that the systems were closely related, having identical sublanguage domain and test data, and using similar automata for analysis in the transfer system and transduction in the transducer system. Another justification is that it was not necessary to make difficult comparisons between different aspects of effectiveness: the transducer system performed better with respect to all the measures we looked at for accuracy, speed, memory, development effort, and model complexity. Looking forward, the relative simplicity of head transducer models makes them more promising for further automating the development of translation applications.

Acknowledgment

We are grateful to Jishen He for building the Chinese model and bilingual lexicon of the earlier transfer system that we used in this work for comparison with the head transducer system.

References

Alshawi, H. and A.L. Buchsbaum. 1997. "State-Transition Cost Functions and an Application to Language Translation". In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, IEEE, Munich, Germany.

Alshawi, H. 1996a. "Head Automata and Bilingual Tiling: Translation with Minimal Representations". In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, Santa Cruz, California, 167-176.

Alshawi, H. 1996b. "Head Automata for Speech Translation". In Proceedings of the International Conference on Spoken Language Processing, Philadelphia, Pennsylvania.

Brown, P., J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, J. Lafferty, R. Mercer and P. Rossin. 1990. "A Statistical Approach to Machine Translation". Computational Linguistics 16:79-85.

Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. "The Mathematics of Statistical Machine Translation: Parameter Estimation". Computational Linguistics 19:263-312.

Chen, K.H. and H.H. Chen. 1992. "Attachment and Transfer of Prepositional Phrases with Constraint Propagation". Computer Processing of Chinese and Oriental Languages, Vol. 6, No. 2, 123-142.

Dorr, B.J. 1994. "Machine Translation Divergences: A Formal Description and Proposed Solution". Computational Linguistics 20:597-634.

Dunning, T. 1993. "Accurate Methods for the Statistics of Surprise and Coincidence". Computational Linguistics 19:61-74.

Hudson, R.A. 1984. Word Grammar. Blackwell, Oxford.

Hirschman, L., M. Bates, D. Dahl, W. Fisher, J. Garofolo, D. Pallett, K. Hunicke-Smith, P. Price, A. Rudnicky, and E. Tzoukermann. 1993. "Multi-Site Data Collection and Evaluation in Spoken Language Understanding". In Proceedings of the Human Language Technology Workshop, Morgan Kaufmann, San Francisco, 19-24.

Isabelle, P. and E. Macklovitch. 1986.
"Transfer and MT Modularity", In Eleventh International Conference on Computational Linguistics, Bonn, Germany, 115-117. Jelinek, F., R.L. Mercer and S. Roukos. 1992. "Principles of Lexical Language Modeling for Speech Recognition". In S. Furui and M.M. Sondhi (eds.), Advances in Speech Signal Process- ing, Marcel Dekker, New York. Lafferty, J., D. Sleator and D. Temperley. 1992. "Grammatical Trigrams: A Probabilistic Model of Link Grammar". In Proceedings of the 1992 AAAI Fall Symposium on Probabilistic Approaches to Natural Language, 89-97. Kay, M. 1989. "Head Driven Parsing". In Pro- ceedings of the Workshop on Parsing Technolo- gies, Pittsburgh, 1989. Lindop, J, and J. Tsujii. 1991. "Complex Transfer in MT: A Survey of Examples". Technical Re- port 91/5, Centre for Computational Linguistics, UMIST, Manchester, UK. Sata, G. and O. Stock. 1989. "Heacl-Driven Bidirec- tional Parsing". In Proceedings of the Workshop on Parsing Technologies, Pittsburgh. Younger, D. 1967. Recognition and Parsing of Context-Free Languages in Time n 3. Information and Control, 10, 189-208. 365 | 1997 | 46 |
Decoding Algorithm in Statistical Machine Translation

Ye-Yi Wang and Alex Waibel
Language Technology Institute, School of Computer Science, Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213, USA
{yyw, waibel}@cs.cmu.edu

Abstract

The decoding algorithm is a crucial part in statistical machine translation. We describe a stack decoding algorithm in this paper. We present the hypothesis scoring method and the heuristics used in our algorithm. We report several techniques deployed to improve the performance of the decoder. We also introduce a simplified model to moderate the sparse data problem and to speed up the decoding process. We evaluate and compare these techniques/models in our statistical machine translation system.

1 Introduction

1.1 Statistical Machine Translation

Statistical machine translation is based on a channel model. Given a sentence T in one language (German) to be translated into another language (English), it considers T as the target of a communication channel, and its translation S as the source of the channel. Hence the machine translation task becomes to recover the source from the target. Basically every English sentence is a possible source for a German target sentence. If we assign a probability P(S|T) to each pair of sentences (S, T), then the problem of translation is to find the source S for a given target T such that P(S|T) is the maximum. According to Bayes rule,

    P(S|T) = P(S) P(T|S) / P(T)                              (1)

Since the denominator is independent of S, we have

    S^ = argmax_S P(S) P(T|S)                                (2)

Therefore a statistical machine translation system must deal with the following three problems:

• Modeling Problem: How to depict the process of generating a sentence in a source language, and the process used by a channel to generate a target sentence upon receiving a source sentence? The former is the problem of language modeling, and the latter is the problem of translation modeling. They provide a framework for calculating P(S) and P(T|S) in (2).

• Learning Problem: Given a statistical language model P(S) and a statistical translation model P(T|S), how to estimate the parameters in these models from a bilingual corpus of sentences?

• Decoding Problem: With a fully specified (framework and parameters) language and translation model, given a target sentence T, how to efficiently search for the source sentence that satisfies (2)?

The modeling and learning issues have been discussed in (Brown et al., 1993), where an ngram model was used for language modeling, and five different translation models were introduced for the translation process. We briefly introduce model 2 here, for which we built our decoder.

In model 2, upon receiving a source English sentence e = e1, ..., el, the channel generates a German sentence g = g1, ..., gm at the target end in the following way:

1. With a distribution P(m|e), randomly choose the length m of the German translation g. In model 2, the distribution is independent of m and e: P(m|e) = ε, where ε is a small, fixed number.

2. For each position i (0 < i <= m) in g, find the corresponding position a_i in e according to an alignment distribution P(a_i | i, a_1^{i-1}, m, e). In model 2, the distribution only depends on i, a_i and the lengths of the English and German sentences: P(a_i | i, a_1^{i-1}, m, e) = a(a_i | i, m, l).

3.
Generate the word g_i at the position i of the German sentence from the English word e_{a_i} at the aligned position a_i, according to a translation distribution P(g_i | g_1^{i-1}, a_1^i, e) = t(g_i | e_{a_i}). The distribution here only depends on g_i and e_{a_i}.

Therefore, P(g|e) is the sum of the probabilities of generating g from e over all possible alignments A, in which the position j in the target sentence g is aligned to the position a_j in the source sentence e:

    P(g|e) = ε Σ_{a_1=0}^{l} ... Σ_{a_m=0}^{l} Π_{j=1}^{m} t(g_j | e_{a_j}) a(a_j | j, l, m)
           = ε Π_{j=1}^{m} Σ_{i=0}^{l} t(g_j | e_i) a(i | j, l, m)          (3)

(Brown et al., 1993) also described how to use the EM algorithm to estimate the parameters a(i|j,l,m) and t(g|e) in the aforementioned model.

1.2 Decoding in Statistical Machine Translation

(Brown et al., 1993) and (Vogel, Ney, and Tillman, 1996) have discussed the first two of the three problems in statistical machine translation. Although the authors of (Brown et al., 1993) stated that they would discuss the search problem in a follow-up article, so far there have been no publications devoted to the decoding issue for statistical machine translation.

On the other side, the decoding algorithm is a crucial part in statistical machine translation. Its performance directly affects the quality and efficiency of translation. Without a good and efficient decoding algorithm, a statistical machine translation system may miss the best translation of an input sentence even if it is perfectly predicted by the model.

2 Stack Decoding Algorithm

Stack decoders are widely used in speech recognition systems. The basic algorithm can be described as follows:

1. Initialize the stack with a null hypothesis.
2. Pop the hypothesis with the highest score off the stack, name it as current-hypothesis.
3. If current-hypothesis is a complete sentence, output it and terminate.
4. Extend current-hypothesis by appending a word in the lexicon to its end. Compute the score of the new hypothesis and insert it into the stack. Do this for all the words in the lexicon.
5. Go to 2.

2.1 Scoring the hypotheses

In stack search for statistical machine translation, a hypothesis H includes (a) the length l of the source sentence, and (b) the prefix words in the sentence. Thus a hypothesis can be written as H = l : e1 e2 ... ek, which postulates a source sentence of length l and its first k words. The score of H, f_H, consists of two parts: the prefix score g_H for e1 e2 ... ek and the heuristic score h_H for the part e_{k+1} e_{k+2} ... e_l that is yet to be appended to H to complete the sentence.

2.1.1 Prefix score g_H

(3) can be used to assess a hypothesis. Although it was obtained from the alignment model, it would be easier for us to describe the scoring method if we interpret the last expression in the equation in the following way: each word e_i in the hypothesis contributes the amount ε t(g_j | e_i) a(i | j, l, m) to the probability of the target sentence word g_j. For each hypothesis H = l : e1, e2, ..., ek, we use S_H(j) to denote the probability mass for the target word g_j contributed by the words in the hypothesis:

    S_H(j) = ε Σ_{i=0}^{k} t(g_j | e_i) a(i | j, l, m)          (4)

Extending H with a new word will increase S_H(j), 1 <= j <= m.

To make the score additive, the logarithm of the probability in (3) was used. So the prefix score contributed by the translation model is Σ_{j=1}^{m} log S_H(j). Because our objective is to maximize P(e, g), we have to include as well the logarithm of the language model probability of the hypothesis in the score. Therefore we have
    g_H = Σ_{j=1}^{m} log S_H(j) + Σ_{i=1}^{k} log P(e_i | e_{i-N+1} ... e_{i-1}),

where N is the order of the ngram language model.

The above g-score g_H of a hypothesis H = l : e1 e2 ... ek can be calculated from the g-score of its parent hypothesis P = l : e1 e2 ... e_{k-1}:

    g_H = g_P + log P(e_k | e_{k-N+1} ... e_{k-1})
              + Σ_{j=1}^{m} log [ 1 + ε t(g_j | e_k) a(k | j, l, m) / S_P(j) ]

    S_H(j) = S_P(j) + ε t(g_j | e_k) a(k | j, l, m)              (5)

A practical problem arises here. For many early stage hypotheses P, S_P(j) is close to 0. This causes problems because it appears as a denominator in (5) and as the argument of the log function when calculating g_P. We dealt with this by either limiting the translation probability from the null word (Brown et al., 1993) at the hypothetical 0-position over a threshold during the EM training, or setting S_{H0}(j) to a small probability π instead of 0 for the initial null hypothesis H0. Our experiments show that π = 10^{-4} gives the best result.

2.1.2 Heuristics

To guarantee an optimal search result, the heuristic function must be an upper-bound of the score for all possible extensions e_{k+1} e_{k+2} ... e_l (Nilsson, 1971) of a hypothesis. In other words, the benefit of extending a hypothesis should never be under-estimated. Otherwise the search algorithm will conclude prematurely with a non-optimal hypothesis. On the other hand, if the heuristic function over-estimates the merit of extending a hypothesis too much, the search algorithm will waste a huge amount of time after it hits a correct result to safeguard the optimality.

To estimate the language model score h^LM of the unrealized part of a hypothesis, we used the negative of the language model perplexity PP_train on the training data as the logarithm of the average probability of predicting a new word in the extension from a history. So we have

    h^LM = -(l - k) PP_train + C.                                (6)

Here is the motivation behind this. We assume that the perplexity on training data overestimates the likelihood of the forthcoming word string on average. However, when there are only a few words to be extended (k is close to l), the language model probability for the words to be extended may be much higher than the average. This is why the constant term C was introduced in (6). When k << l, -(l-k) PP_train is the dominating term in (6), so the heuristic language model score is close to the average. This can avoid overestimating the score too much. As k is getting closer to l, the constant term C plays a more important role in (6) to avoid underestimating the language model score. In our experiments, we used C = PP_train + log(P_max), where P_max is the maximum ngram probability in the language model.

To estimate the translation model score, we introduce a variable v_{il}(j), the maximum contribution to the probability of the target sentence word g_j from any possible source language words at any position between i and l:

    v_{il}(j) = max_{i <= k <= l, e in L_E} t(g_j | e) a(k | j, l, m),          (7)

where L_E is the English lexicon.

Since v_{il}(j) is independent of hypotheses, it only needs to be calculated once for a given target sentence. When k < l, the heuristic function for the hypothesis H = l : e1 e2 ... ek is

    h_H = Σ_{j=1}^{m} max{ 0, log(v_{(k+1)l}(j)) - log S_H(j) } - (l - k) PP_train + C          (8)

where log(v_{(k+1)l}(j)) - log S_H(j) is the maximum increase that a new word can bring to the likelihood of the j-th target word. When k = l, since no words can be appended to the hypothesis, it is obvious that h_H = 0.

This heuristic function over-estimates the score of the upcoming words. Because of the constraints from the language model and from the fact that a position in a source sentence cannot be occupied by two different words, normally the placement of words in those unfilled positions cannot maximize the likelihood of all the target words simultaneously.
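The incremental update in equations (4) and (5) is the inner loop of the decoder, so it is worth making concrete. The sketch below is our own rendering under stated assumptions: `t` and `a` are assumed to be callables backed by the trained model tables, `lm_term` is the precomputed value of log P(e_k | e_{k-N+1} ... e_{k-1}), and the constants mirror ε and π from the text.

```python
import math

EPSILON = 0.5   # illustrative value for the length-distribution constant
PI = 1e-4       # floor for S_H0(j); the best value found in the experiments

def initial_S(m):
    """S for the null hypothesis: a small probability pi instead of 0."""
    return [PI] * (m + 1)            # indices 1..m are used

def extend(g_parent, S_parent, e_k, k, l, g, lm_term, t, a):
    """Extend parent hypothesis P with word e_k; return (g_H, S_H), eq. (5).

    g is the target sentence g_1..g_m (0-based Python list);
    t(g_j, e) and a(k, j, l, m) look up the model-2 parameters.
    """
    m = len(g)
    S = S_parent[:]                  # copy S_P so the parent stays reusable
    g_H = g_parent + lm_term         # + log P(e_k | history)
    for j in range(1, m + 1):
        delta = EPSILON * t(g[j - 1], e_k) * a(k, j, l, m)
        g_H += math.log(1.0 + delta / S[j])   # uses S_P(j) before updating
        S[j] += delta                # S_H(j) = S_P(j) + eps * t(.) * a(.)
    return g_H, S
```

Since log(S_P(j) + delta) = log S_P(j) + log(1 + delta/S_P(j)), summing these increments over j reproduces Σ_j log S_H(j) without recomputing equation (4) from scratch, which is the point of the recursion.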
2.2 Pruning and aborting search

Due to physical space limitation, we cannot keep all hypotheses alive. We set a constant M, and whenever the number of hypotheses exceeds M, the algorithm will prune the hypotheses with the lowest scores. In our experiments, M was set to 20,000. There is a time limitation too. It is of little practical interest to keep a seemingly endless search alive too long. So we set a constant T; whenever the decoder extends more than T hypotheses, it will abort the search and register a failure. In our experiments, T was set to 6000, which roughly corresponded to 2 and a half hours of search effort.

2.3 Multi-Stack Search

The above decoder has one problem: since the heuristic function overestimates the merit of extending a hypothesis, the decoder always prefers hypotheses of a long sentence, which have a better chance to maximize the likelihood of the target words. The decoder will extend the hypotheses with large l first, and their children will soon occupy the stack and push the hypotheses of a shorter source sentence out of the stack. If the source sentence is a short one, the decoder will never be able to find it, for the hypotheses leading to it have been pruned permanently.

This "incomparable" problem was solved with multi-stack search (Magerman, 1994). A separate stack was used for each hypothesized source sentence length l. We do compare hypotheses in different stacks in the following cases. First, we compare a complete sentence in a stack with the hypotheses in other stacks to safeguard the optimality of the search result. Second, the top hypothesis in a stack is compared with that of another stack. If the difference is greater than a constant δ, then the less probable one will not be extended. This is called soft-pruning, since whenever the scores of the hypotheses in other stacks go down, this hypothesis may revive.

[Figure 1: Sentence Length Distribution -- number of sentences (y-axis) against sentence length (x-axis) for English and German.]

3 Stack Search with a Simplified Model

In the IBM translation model 2, the alignment parameters depend on the source and target sentence lengths l and m. While this is an accurate model, it causes the following difficulties:

1. There are too many parameters and therefore too few training data per parameter. This may not be a problem when massive training data are available. However, in our application, this is a severe problem. Figure 1 plots the length distribution for the English and German sentences. When sentences get longer, there are fewer training data available.

2. The search algorithm has to make multiple hypotheses of different source sentence lengths. For each source sentence length, it searches through almost the same prefix words and finally settles on a sentence length. This is a very time consuming process and makes the decoder very inefficient.

To solve the first problem, we adjusted the count for the parameter a(i|j,l,m) in the EM parameter estimation by adding to it the counts for the parameters a(i|j,l',m'), assuming (l,m) and (l',m') are close enough. The closeness was measured in
Euclidean distance (Figure 2).

[Figure 2: Each x/y position represents a different source/target sentence length. The dark dot at the intersection (l, m) corresponds to the set of counts for the alignment parameters a(. | ., l, m) in the EM estimation. The adjusted counts are the sum of the counts in the neighboring sets residing inside the circle centered at (l, m) with radius r. We took r = 3 in our experiment.]

So we have

    c^(i | j, l, m) = Σ_{(l-l')^2 + (m-m')^2 < r^2; e,g} c(i | j, l', m'; e, g)          (9)

where c^(i|j,l,m) is the adjusted count for the parameter a(i|j,l,m), c(i|j,l,m; e,g) is the expected count for a(i|j,l,m) from a paired sentence (e, g), and c(i|j,l,m; e,g) = 0 when |e| != l, or |g| != m, or i > l, or j > m.

Although (9) can moderate the severity of the first data sparseness problem, it does not ease the second inefficiency problem at all. We thus made a radical change to (9) by removing the precondition that (l, m) and (l', m') must be close enough. This results in a simplified translation model, in which the alignment parameters are independent of the sentence lengths l and m:

    P(i | j, m, e) = P(i | j, l, m) = a(i | j)

where i, j <= L_m, and L_m is the maximum sentence length allowed in the translation system. A slight change to the EM algorithm was made to estimate the parameters.

There is a problem with this model: given a sentence pair g and e, when the length of e is smaller than L_m, then the alignment parameters do not sum to 1:

    Σ_{i=0}^{|e|} a(i | j) < 1.          (10)

We deal with this problem by padding e to length L_m with dummy words that never give rise to any word in the target of the channel.

Since the parameters are independent of the source sentence length, we do not have to make an
We used the Red-Black tree data structure (Cor- men, Leiserson, and Rivest, 1990) to implement the dynamic set, which guarantees that the above oper- ations take O(log n) time in the worst case, where n is the number of search states in the set. 5 Performance We tested the performance of the decoders with the scheduling corpus(Suhm et al., 1995). Around 30,000 parallel sentences (400,000 words altogether for both languages) were used to train the IBM model 2 and the simplified model with the EM algo- rithm. A larger English monolingual corpus with around 0.5 million words was used to train a bi- gram for language modelling. The lexicon contains 2,800 English and 4,800 German words in morpho- logically inflected form. We did not do any prepro- cessing/analysis of the data as reported in (Brown et al., 1992). 5.1 Decoder Success Rate Table 1 shows the success rate of three mod- els/decoders. As we mentioned before, the compari- son between hypotheses of different sentence length made the single stack search for the IBM model 2 fail (return without a result) on a majority of the test sentences. While the multi-stack decoder im- proved this, the simplified model/decoder produced an output for all the 120 test sentences. 5.2 Translation Accuracy Unlike the case in speech recognition, it is quite arguable what "accurate translations" means. In speech recognition an output can be compared with the sample transcript of the test data. In machine translation, a sentence may have several legitimate translations. It is difficult to compare an output from a decoder with a designated translation. In- stead, we used human subjects to judge the machine- made translations. The translations are classified into three categories 1. 1. Correct translations: translations that are grammatical and convey the same meaning as the inputs. 2. Okay translations: translations that convey the same meaning but with small grammatical mis- takes or translations that convey most but not the entire meaning of the input. 3. Incorrect translations: Translations that are ungrammatical or convey little meaningful in- formation or the information is different from the input. Examples of correct, okay, and incorrect transla- tions are shown in Table 2. Table 3 shows the statistics of the translation re- sults. The accuracy was calculate by crediting a cor- rect translation 1 point and an okay translation 1/2 point. There are two different kinds of errors in statis- tical machine translation. A modeling erivr occurs when the model assigns a higher score to an incor- rect translation than a correct one. We cannot do anything about this with the decoder. A decoding 1 This is roughly the same as the classification in IBM statistical translation, except we do not have "legitimate translation that conveys different meaning from the in- put" -- we did not observed this case in our outputs. 
5 Performance

We tested the performance of the decoders with the scheduling corpus (Suhm et al., 1995). Around 30,000 parallel sentences (400,000 words altogether for both languages) were used to train the IBM model 2 and the simplified model with the EM algorithm. A larger English monolingual corpus with around 0.5 million words was used to train a bigram for language modelling. The lexicon contains 2,800 English and 4,800 German words in morphologically inflected form. We did not do any preprocessing/analysis of the data as reported in (Brown et al., 1992).

5.1 Decoder Success Rate

Table 1 shows the success rate of three models/decoders. As we mentioned before, the comparison between hypotheses of different sentence lengths made the single stack search for the IBM model 2 fail (return without a result) on a majority of the test sentences. While the multi-stack decoder improved this, the simplified model/decoder produced an output for all the 120 test sentences.

                          Total Test Sentences   Decoded Sentences   Failed Sentences
  Model 2, Single Stack          120                    32                  88
  Model 2, Multi-Stack           120                    83                  37
  Simplified Model               120                   120                   0

Table 1: Decoder Success Rate

5.2 Translation Accuracy

Unlike the case in speech recognition, it is quite arguable what "accurate translations" means. In speech recognition an output can be compared with the sample transcript of the test data. In machine translation, a sentence may have several legitimate translations. It is difficult to compare an output from a decoder with a designated translation. Instead, we used human subjects to judge the machine-made translations. The translations are classified into three categories.¹

1. Correct translations: translations that are grammatical and convey the same meaning as the inputs.

2. Okay translations: translations that convey the same meaning but with small grammatical mistakes, or translations that convey most but not the entire meaning of the input.

3. Incorrect translations: translations that are ungrammatical, or convey little meaningful information, or the information is different from the input.

¹ This is roughly the same as the classification in IBM statistical translation, except we do not have "legitimate translation that conveys different meaning from the input"; we did not observe this case in our outputs.

Examples of correct, okay, and incorrect translations are shown in Table 2.

  Correct
    German:            ich habe ein Meeting von halb zehn bis um zwölf
    English (target):  I have a meeting from nine thirty to twelve
    English (output):  I have a meeting from nine thirty to twelve

    German:            versuchen wir sollten es vielleicht mit einem anderen Termin
    English (target):  we might want to try for some other time
    English (output):  we should try another time

  Okay
    German:            ich glaube nicht daß ich noch irgend etwas im Januar frei habe
    English (target):  I do not think I have got anything open in January
    English (output):  I think I will not free in January

    German:            ich glaube wir sollten ein weiteres Meeting vereinbaren
    English (target):  I think we have to have another meeting
    English (output):  I think we should fix a meeting

  Incorrect
    German:            schlagen Sie doch einen Termin vor
    English (target):  why don't you suggest a time
    English (output):  why you an appointment

    German:            ich habe Zeit für den Rest des Tages
    English (target):  I am free the rest of it
    English (output):  I have time for the rest of July

Table 2: Examples of Correct, Okay, and Incorrect Translations: for each translation, the first line is an input German sentence, the second line is the human-made (target) translation for that input sentence, and the third line is the output from the decoder.

Table 3 shows the statistics of the translation results. The accuracy was calculated by crediting a correct translation 1 point and an okay translation 1/2 point.

There are two different kinds of errors in statistical machine translation. A modeling error occurs when the model assigns a higher score to an incorrect translation than a correct one. We cannot do anything about this with the decoder. A decoding error or search error happens when the search algorithm misses a correct translation with a higher score. When evaluating a decoding algorithm, it would be attractive if we could tell how many errors are caused by the decoder. Unfortunately, this is not attainable. Suppose that we are going to translate a German sentence g, and we know from the sample that e is one of its possible English translations. The decoder outputs an incorrect e' as the translation of g. If the score of e' is lower than that of e, we know that a search error has occurred. On the other hand, if the score of e' is higher, we cannot decide if it is a modeling error or not, since there may still be other legitimate translations with a score higher than e'; we just do not know what they are.

Although we cannot distinguish a modeling error from a search error, the comparison between the decoder output's score and that of a sample translation can still reveal some information about the performance of the decoder. If we know that the decoder can find a sentence with a better score than a "correct" translation, we will be more confident that the decoder is less prone to cause errors. Table 4 shows the comparison between the score of the outputs from the decoder and the score of the sample translations when the outputs are incorrect. In most cases, the incorrect outputs have a higher score than the sample translations. Again, we consider an "okay" translation a half error here. This result hints that model deficiencies may be a major source of errors. The models we used here are very simple. With a more sophisticated model, more training data, and possibly some preprocessing, the total error rate is expected to decrease.

5.3 Decoding Speed

Another important issue is the efficiency of the decoder. Figure 3 plots the average number of states being extended by the decoders. It is grouped according to the input sentence length, and evaluated on those sentences on which the decoder succeeded.
The average number of states being extended in the model 2 single stack search is not available for long sentences, since the decoder failed on most of the long sentences. The figure shows that the simplified model/decoder works much more efficiently than the other models/decoders.

                        Total   Correct   Okay   Incorrect   Accuracy
  Model 2, Multi-Stack    83       39       12       32        54.2%
  Simplified Model       120       64       15       41        59.6%

Table 3: Translation Accuracy

                        Total Errors   Score_e > Score_e'   Score_e < Score_e'
  Model 2, Multi-Stack      38            3.5 (7.9%)           34.5 (92.1%)
  Simplified Model          48.5          4.5 (9.3%)           44 (90.7%)

Table 4: Sample Translations versus Machine-Made Translations

[Figure 3: Extended States versus Target Sentence Length -- average number of extended states for target sentence lengths 1-4, 5-8, 9-12, 13-16, 17-20, plotted for "Model2-Single-Stack", "Model2-Multi-Stack", and "Simplified-Model".]

6 Conclusions

We have reported a stack decoding algorithm for the IBM statistical translation model 2 and a simplified model. Because the simplified model has fewer parameters and does not have to posit hypotheses with the same prefixes but different lengths, it outperformed the IBM model 2 with regard to both accuracy and efficiency, especially in our application that lacks a massive amount of training data. In most cases, the erroneous outputs from the decoder have a higher score than the human-made translations. Therefore it is less likely that the decoder is a major contributor of translation errors.

7 Acknowledgements

We would like to thank John Lafferty for enlightening discussions on this work. We would also like to thank the anonymous ACL reviewers for valuable comments. This research was partly supported by ATR and the Verbmobil Project. The views and conclusions in this document are those of the authors.

References

Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263-311.

Brown, P. F., S. A. Della Pietra, V. J. Della Pietra, J. D. Lafferty, and R. L. Mercer. 1992. Analysis, Statistical Transfer, and Synthesis in Machine Translation. In Proceedings of the Fourth International Conference on Theoretical and Methodological Issues in Machine Translation, pages 83-100.

Cormen, Thomas H., Charles E. Leiserson, and Ronald L. Rivest. 1990. Introduction to Algorithms. The MIT Press, Cambridge, Massachusetts.

Magerman, D. 1994. Natural Language Parsing as Statistical Pattern Recognition. Ph.D. thesis, Stanford University.

Nilsson, N. 1971. Problem-Solving Methods in Artificial Intelligence. McGraw Hill, New York, New York.

Suhm, B., P. Geutner, T. Kemp, A. Lavie, L. Mayfield, A. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, and A. Waibel. 1995. JANUS: Towards multilingual spoken language translation. In Proceedings of the ARPA Speech Spoken Language Technology Workshop, Austin, TX, 1995.

Vogel, S., H. Ney, and C. Tillman. 1996. HMM-Based Word Alignment in Statistical Translation. In Proceedings of the Sixteenth International Conference on Computational Linguistics: COLING-96, pages 836-841, Copenhagen, Denmark.
A Model of Lexical Attraction and Repulsion*

Doug Beeferman, Adam Berger, John Lafferty
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213 USA
<dougb, aberger, lafferty>@cs.cmu.edu

Abstract

This paper introduces new methods based on exponential families for modeling the correlations between words in text and speech. While previous work assumed the effects of word co-occurrence statistics to be constant over a window of several hundred words, we show that their influence is nonstationary on a much smaller time scale. Empirical data drawn from English and Japanese text, as well as conversational speech, reveals that the "attraction" between words decays exponentially, while stylistic and syntactic constraints create a "repulsion" between words that discourages close co-occurrence. We show that these characteristics are well described by simple mixture models based on two-stage exponential distributions which can be trained using the EM algorithm. The resulting distance distributions can then be incorporated as penalizing features in an exponential language model.

1 Introduction

One of the fundamental characteristics of language, viewed as a stochastic process, is that it is highly nonstationary. Throughout a written document and during the course of spoken conversation, the topic evolves, affecting local statistics on word occurrences. The standard trigram model disregards this nonstationarity, as does any stochastic grammar which assigns probabilities to sentences in a context-independent fashion.

* Research supported in part by NSF grant IRI-9314969, DARPA AASERT award DAAH04-95-1-0475, and the ATR Interpreting Telecommunications Research Laboratories.

Stationary models are used to describe such a dynamic source for at least two reasons. The first is convenience: stationary models require a relatively small amount of computation to train and to apply. The second is ignorance: we know so little about how to model effectively the nonstationary characteristics of language that we have for the most part completely neglected the problem. From a theoretical standpoint, we appeal to the Shannon-McMillan-Breiman theorem (Cover and Thomas, 1991) whenever computing perplexities on test data; yet this result only rigorously applies to stationary and ergodic sources.

To allow a language model to adapt to its recent context, some researchers have used techniques to update trigram statistics in a dynamic fashion by creating a cache of the most recently seen n-grams which is smoothed together (typically by linear interpolation) with the static model; see for example (Jelinek et al., 1991; Kuhn and de Mori, 1990). Another approach, using maximum entropy methods similar to those that we present here, introduces a parameter for trigger pairs of mutually informative words, so that the occurrence of certain words in recent context boosts the probability of the words that they trigger (Rosenfeld, 1996). Triggers have also been incorporated through different methods (Kuhn and de Mori, 1990; Ney, Essen, and Kneser, 1994). All of these techniques treat the recent context as a "bag of words," so that a word that appears, say, five positions back makes the same contribution to prediction as words at distances of 50 or 500 positions back in the history.

In this paper we introduce new modeling techniques based on exponential families for capturing the long-range correlations between occurrences of words in text and speech.
We show how for both written text and conversational speech, the empirical distribution of the distance between trigger words exhibits a striking behavior in which the "attraction" between words decays exponentially, while stylistic and syntactic constraints create a "repulsion" between words that discourages close co-occurrence.

We have discovered that this observed behavior is well described by simple mixture models based on two-stage exponential distributions. Though in common use in queueing theory, such distributions have not, to our knowledge, been previously exploited in speech and language processing. It is remarkable that the behavior of a highly complex stochastic process such as the separation between word co-occurrences is well modeled by such a simple parametric family, just as it is surprising that Zipf's law can so simply capture the distribution of word frequencies in most languages.

In the following section we present examples of the empirical evidence for the effects of distance. In Section 3 we outline the class of statistical models that we propose to model this data. After completing this work we learned of a related paper (Niesler and Woodland, 1997) which constructs similar models. In Section 4 we present a parameter estimation algorithm, based on the EM algorithm, for determining the maximum likelihood estimates within the class. In Section 5 we explain how distance models can be incorporated into an exponential language model, and present sample perplexity results we have obtained using this class of models.

2 The Empirical Evidence

The work described in this paper began with the goal of building a statistical language model using a static trigram model as a "prior," or default distribution, and adding certain features to a family of conditional exponential models to capture some of the nonstationary features of text. The features we used were simple "trigger pairs" of words that were chosen on the basis of mutual information. Figure 1 provides a small sample of the 41,263 (s,t) trigger pairs used in most of the experiments we will describe.

  s            t
  Ms.          her
  changes      revisions
  energy       gas
  committee    representative
  board        board
  lieutenant   colonel
  AIDS         AIDS
  Soviet       missiles
  underwater   diving
  patients     drugs
  television   airwaves
  Voyager      Neptune
  medical      surgical
  I            me
  Gulf         Gulf

Figure 1: A sample of the 41,263 trigger pairs extracted from the 38 million word Wall Street Journal corpus.

  s           t
  UN          Security Council
  electricity kilowatt
  election    small electoral district
  silk        cocoon
  court       imprisonment
  Hungary     Bulgaria
  Japan Air   to fly
  sentence    proposed punishment
  transplant  organ
  forest      wastepaper
  computer    host
  [?]         cargo

Figure 2: A sample of triggers extracted from the 33 million word Nikkei corpus.

In earlier work, for example (Rosenfeld, 1996), the distance between the words of a trigger pair (s,t) plays no role in the model, meaning that the "boost" in probability which t receives following its trigger s is independent of how long ago s occurred, so long as s appeared somewhere in the history H, a fixed-length window of words preceding t. It is reasonable to expect, however, that the relevance of a word s to the identity of the next word should decay as s falls further and further back into the context. Indeed, there are tables in (Rosenfeld, 1996) which suggest that this is so, and distance-dependent "memory weights" are proposed in (Ney, Essen, and Kneser, 1994).

We decided to investigate the effect of distance in more detail, and were surprised by what we found.
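Before turning to the data, here is a sketch of how such empirical distance statistics can be collected. It is our own reading of the counting protocol described in the Figure 3 caption below (first co-occurrence within a 400-word window, excluding separations of one or two words); the function and variable names are hypothetical.

```python
from collections import defaultdict

WINDOW = 400   # window size after which trigger effects are neglected

def distance_counts(words, trigger_pairs):
    """Return {(s, t): {k: count}}: how often t first appears exactly k
    words after an occurrence of s, within the window."""
    counts = defaultdict(lambda: defaultdict(int))
    positions = defaultdict(list)
    for i, w in enumerate(words):
        positions[w].append(i)
    for s, t in trigger_pairs:
        for i in positions[s]:
            # first occurrence of t in (i+2, i+WINDOW]; separations of
            # 1 or 2 are excluded because trigrams already cover them
            later = [j for j in positions[t] if i + 2 < j <= i + WINDOW]
            if later:
                counts[(s, t)][later[0] - i] += 1
    return counts
```

Normalizing each count table over k gives the empirical curves plotted below; pooling the tables of many rare pairs gives the smoothed group curves discussed in connection with Figure 4.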
We decided to investigate the effect of distance in more detail, and were surprised by what we found.

Figure 3: The observed distance distributions--collected from five million words of the Wall Street Journal corpus--for one of the non-self trigger groups (left) and one of the self trigger groups (right). For a given distance 0 < k < 400 on the x-axis, the value on the y-axis is the empirical probability that two trigger words within the group are separated by exactly k + 2 words, conditional on the event that they co-occur within a 400 word window. (We exclude separation of one or two words because of our use of distance models to improve upon trigrams.)

The set of 41,263 trigger pairs was partitioned into 20 groups of non-self triggers (s, t), s != t, such as (Soviet, Kremlin's), and 20 groups of self triggers (s, s), such as (business, business). Figure 3 displays the empirical probability that a word t appears for the first time k words after the appearance of its mate s in a trigger pair (s, t), for two representative groups.

The curves are striking in both their similarities and their differences. Both curves seem to have more or less flattened out by N = 400, which allows us to make the approximating assumption (of great practical importance) that word-triggering effects may be neglected after several hundred words. The most prominent distinction between the two curves is the peak near k = 25 in the self trigger plots; the non-self trigger plots suggest a monotonic decay.

The shape of the self trigger curve, in particular the rise between k = 1 and k ~ 25, reflects the stylistic and syntactic injunctions against repeating a word too soon. This effect, which we term the lexical exclusion principle, does not appear for non-self triggers. In general, the lexical exclusion principle seems to be more in effect for uncommon words, and thus the peak for such words is shifted further to the right. While the details of the curves vary depending on the particular triggers, this behavior appears to be universal. For triggers that appear too few times in the data for this behavior to exhibit itself, the curves emerge when the counts are pooled with those from a collection of other rare words. An example of this law of large numbers is shown in Figure 4.

These empirical phenomena are not restricted to the Wall Street Journal corpus. In fact, we have observed similar behavior in conversational speech and Japanese text. The corresponding data for self triggers in the Switchboard data (Godfrey, Holliman, and McDaniel, 1992), for instance, exhibits the same bump in p(k) for small k, though the peak is closer to zero. The lexical exclusion principle, then, seems to be less applicable when two people are conversing, perhaps because the stylistic concerns of written communication are not as important in conversation. Several examples from the Switchboard and Nikkei corpora are shown in Figure 5.

3 Exponential Models of Distance

The empirical data presented in the previous section exhibits three salient characteristics. First is the decay of the probability of a word t as the distance k from the most recent occurrence of its mate s increases. The most important (continuous-time) distribution with this property is the single-parameter exponential family $p_\mu(x) = \mu e^{-\mu x}$.
(We'll begin by showing the continuous analogues of the discrete formulas we actually use, since they are simpler in appearance.) This family is uniquely characterized by the memoryless property: the probability of waiting an additional length of time $\Delta t$ is independent of the time elapsed so far, and the distribution $p_\mu$ has mean $1/\mu$ and variance $1/\mu^2$. This distribution is a good candidate for modeling non-self triggers.

Figure 4: The law of large numbers emerging for distance distributions. Each plot shows the empirical distance curve for a collection of self triggers, each of which appears fewer than 100 times in the entire 38 million word Wall Street Journal corpus. The plots include statistics for 10, 50, 500, and all 2779 of the self triggers which occurred no more than 100 times each.

Figure 5: Empirical distance distributions of triggers in the Japanese Nikkei corpus, and the Switchboard corpus of conversational speech. Upper row: all non-self (left) and self triggers (middle) appearing fewer than 100 times in the Nikkei corpus, and the curve for the possessive particle の (right). Bottom row: self trigger UH (left), YOU-KNOW (middle), and all self triggers appearing fewer than 100 times in the entire Switchboard corpus (right).

Figure 6: A two-stage queue

The second characteristic is the bump between 0 and 25 words for self triggers. This behavior appears when two exponential distributions are arranged in serial, and such distributions are an important tool in the "method of stages" in queueing theory (Kleinrock, 1975). The time it takes to travel through two service facilities arranged in serial, where the first provides exponential service with rate $\mu_1$ and the second provides exponential service with rate $\mu_2$, is simply the convolution of the two exponentials:

$$p_{\mu_1,\mu_2}(x) = \int_0^x \mu_1\mu_2\, e^{-\mu_1 t}\, e^{-\mu_2(x-t)}\, dt = \frac{\mu_1\mu_2}{\mu_2 - \mu_1}\left(e^{-\mu_1 x} - e^{-\mu_2 x}\right), \qquad \mu_1 \ne \mu_2.$$

The mean and variance of the two-stage exponential $p_{\mu_1,\mu_2}$ are $1/\mu_1 + 1/\mu_2$ and $1/\mu_1^2 + 1/\mu_2^2$ respectively. As $\mu_1$ (or, by symmetry, $\mu_2$) gets large, the peak shifts towards zero and the distribution approaches the single-parameter exponential $p_{\mu_2}$ (by symmetry, $p_{\mu_1}$). A sequence of two-stage models is shown in Figure 7.

Figure 7: A sequence of two-stage exponential models $p_{\mu_1,\mu_2}(x)$ with $\mu_1 = 0.01, 0.02, 0.06, 0.2, \infty$ and $\mu_2 = 0.01$.

The two-stage exponential is a good candidate for distance modeling because of its mathematical properties, but it is also well-motivated for linguistic reasons. The first queue in the two-stage model represents the stylistic and syntactic constraints that prevent a word from being repeated too soon. After this waiting period, the distribution falls off exponentially, with the memoryless property. For non-self triggers, the first queue has a waiting time of zero, corresponding to the absence of linguistic constraints against using t soon after s when the words s and t are different. Thus, we are directly modeling the "lexical exclusion" effect and long-distance decay that have been observed empirically.

The third artifact of the empirical data is the tendency of the curves to approach a constant, positive value for large distances. While the exponential distribution quickly approaches zero, the empirical data settles down to a nonzero steady-state value.
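Before moving to the discrete case, the closed form just given is easy to sanity-check numerically. The following is a minimal sketch of the continuous two-stage density; the example rates are the self-trigger values fitted later in the paper (Figure 8), and the mode expression follows from setting the derivative of the density to zero.

```python
import math

def two_stage_density(x, mu1, mu2):
    """Convolution of two exponential service times with rates mu1 and
    mu2 (mu1 != mu2): mu1*mu2/(mu2 - mu1) * (exp(-mu1*x) - exp(-mu2*x))."""
    return mu1 * mu2 / (mu2 - mu1) * (math.exp(-mu1 * x) - math.exp(-mu2 * x))

# The density peaks at x* = log(mu1/mu2) / (mu1 - mu2): as mu1 grows the
# bump slides toward zero and the curve tends to a single exponential.
# With the self-trigger fit reported in Figure 8 (mu1 = 0.29, mu2 = 0.0168):
mode = math.log(0.29 / 0.0168) / (0.29 - 0.0168)   # roughly 10 words
```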
Together these three features suggest modeling distance with a three-parameter family of distributions:

$$p_{\mu_1,\mu_2,c}(k) = \gamma\,\bigl(p_{\mu_1,\mu_2}(k) + c\bigr)$$

where $c > 0$ and $\gamma$ is a normalizing constant. Rather than a continuous-time exponential, we use the discrete-time analogue

$$p_\mu(k) = (1 - e^{-\mu})\, e^{-\mu k}.$$

In this case, the two-stage model becomes the discrete-time convolution

$$p_{\mu_1,\mu_2}(k) = \sum_{t=0}^{k} p_{\mu_1}(t)\, p_{\mu_2}(k - t).$$

Remark. It should be pointed out that there is another parametric family that is an excellent candidate for distance models, based on the first two features noted above: this is the Gamma distribution

$$p_{\alpha,\mu}(x) = \frac{\mu^\alpha x^{\alpha - 1} e^{-\mu x}}{\Gamma(\alpha)}.$$

This distribution has mean $\alpha/\mu$ and variance $\alpha/\mu^2$ and thus can afford greater flexibility in fitting the empirical data. For Bayesian analysis, this distribution is appropriate as the conjugate prior for the exponential parameter $\mu$ (Gelman et al., 1995). Using this family, however, sacrifices the linguistic interpretation of the two-stage model.

4 Estimating the Parameters

In this section we present a solution to the problem of estimating the parameters of the distance models introduced in the previous section. We use the maximum likelihood criterion to fit the curves. Thus, if $\theta \in \Theta$ represents the parameters of our model, and $\tilde p(k)$ is the empirical probability that two triggers appear a distance of $k$ words apart, then we seek to maximize the log-likelihood

$$\mathcal{L}(\theta) = \sum_{k>0} \tilde p(k) \log p_\theta(k).$$

First suppose that $\{p_\theta\}_{\theta\in\Theta}$ is the family of continuous one-stage exponential models $p_\mu(k) = \mu e^{-\mu k}$. In this case the maximum likelihood problem is straightforward: the mean is the sufficient statistic for this exponential family, and its maximum likelihood estimate is determined by

$$\hat\mu = \frac{1}{\sum_{k>0} k\,\tilde p(k)} = \frac{1}{E_{\tilde p}[k]}.$$

In the case where we instead use the discrete model $p_\mu(k) = (1 - e^{-\mu})\, e^{-\mu k}$, a little algebra shows that the maximum likelihood estimate is then

$$\hat\mu = \log\left(1 + \frac{1}{E_{\tilde p}[k]}\right).$$

Now suppose that our parametric family $\{p_\theta\}_{\theta\in\Theta}$ is the collection of two-stage exponential models; the log-likelihood in this case becomes

$$\mathcal{L}(\mu_1,\mu_2) = \sum_{k\ge 0} \tilde p(k)\, \log\left(\sum_{j=0}^{k} p_{\mu_1}(j)\, p_{\mu_2}(k - j)\right).$$

Here it is not obvious how to proceed to obtain the maximum likelihood estimates. The difficulty is that there is a sum inside the logarithm, and direct differentiation results in coupled equations for $\mu_1$ and $\mu_2$. Our solution to this problem is to view the convolving index $j$ as a hidden variable and apply the EM algorithm (Dempster, Laird, and Rubin, 1977). Recall that the interpretation of $j$ is the time used to pass through the first queue; that is, the number of words used to satisfy the linguistic constraints of lexical exclusion. This value is hidden given only the total time $k$ required to pass through both queues.

Applying the standard EM argument, the difference in log-likelihood for two parameter pairs $(\mu_1',\mu_2')$ and $(\mu_1,\mu_2)$ can be bounded from below as

$$\mathcal{L}(\mu') - \mathcal{L}(\mu) \;\ge\; \sum_{k\ge 0}\tilde p(k)\sum_{j=0}^{k} p_{\mu_1,\mu_2}(j \mid k)\, \log\frac{p_{\mu_1',\mu_2'}(k,j)}{p_{\mu_1,\mu_2}(k,j)} \;\equiv\; \mathcal{A}(\mu',\mu)$$

where

$$p_{\mu_1,\mu_2}(k,j) = p_{\mu_1}(j)\, p_{\mu_2}(k - j) \qquad\text{and}\qquad p_{\mu_1,\mu_2}(j \mid k) = \frac{p_{\mu_1,\mu_2}(k,j)}{p_{\mu_1,\mu_2}(k)}.$$

Thus, the auxiliary function $\mathcal{A}$ can be written as

$$\mathcal{A}(\mu',\mu) = -\mu_1' \sum_{k\ge 0}\tilde p(k)\sum_{j=0}^{k} j\, p_{\mu_1,\mu_2}(j \mid k) \;-\; \mu_2' \sum_{k\ge 0}\tilde p(k)\sum_{j=0}^{k} (k - j)\, p_{\mu_1,\mu_2}(j \mid k) \;+\; \log(1 - e^{-\mu_1'}) + \log(1 - e^{-\mu_2'}) + \mathrm{constant}(\mu).$$

Differentiating $\mathcal{A}(\mu',\mu)$ with respect to $\mu_1'$ and $\mu_2'$, we get the EM updates

$$\mu_1' = \log\left(1 + \frac{1}{\sum_{k>0}\tilde p(k)\sum_{j=0}^{k} j\, p_{\mu_1,\mu_2}(j \mid k)}\right)$$

$$\mu_2' = \log\left(1 + \frac{1}{\sum_{k>0}\tilde p(k)\sum_{j=0}^{k} (k - j)\, p_{\mu_1,\mu_2}(j \mid k)}\right).$$

Remark. It appears that the above updates require $O(N^2)$ operations if a window of $N$ words is maintained in the history. However, using formulas for the geometric series, such as $\sum_{k=0}^{\infty} k x^k = x/(1-x)^2$, we can write the expectation $\sum_{j=0}^{k} j\, p_{\mu_1,\mu_2}(j \mid k)$ in closed form.
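These updates translate directly into code. The following minimal sketch assumes the empirical distribution $\tilde p$ is given as a dictionary mapping distances to probabilities; it implements the naive $O(N^2)$ E-step rather than the closed-form speedup of the remark, and the initial rates and iteration count are arbitrary choices.

```python
import math

def p_exp(k, mu):
    """Discrete (geometric-style) exponential: (1 - e^-mu) * e^(-mu k)."""
    return (1.0 - math.exp(-mu)) * math.exp(-mu * k)

def em_two_stage(p_emp, mu1=0.05, mu2=0.01, iters=50):
    """EM for the two-stage model, treating the split point j (words spent
    in the first queue) as hidden. p_emp maps each distance k to its
    empirical probability."""
    for _ in range(iters):
        e_j = e_kj = 0.0                       # expected j and k - j
        for k, pk in p_emp.items():
            joint = [p_exp(j, mu1) * p_exp(k - j, mu2) for j in range(k + 1)]
            z = sum(joint)                     # proportional to p(k)
            e_j  += pk * sum(j * w for j, w in enumerate(joint)) / z
            e_kj += pk * sum((k - j) * w for j, w in enumerate(joint)) / z
        mu1 = math.log(1.0 + 1.0 / max(e_j, 1e-12))   # the EM updates above
        mu2 = math.log(1.0 + 1.0 / max(e_kj, 1e-12))
    return mu1, mu2
```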
Thus, the updates can be calculated in linear time.

Finally, suppose that our parametric family $\{p_\theta\}_{\theta\in\Theta}$ is the three-parameter collection of two-stage exponential models together with an additive constant:

$$p_{\mu_1,\mu_2,c}(k) = \gamma\,\bigl(p_{\mu_1,\mu_2}(k) + c\bigr).$$

Here again, the maximum likelihood problem can be solved by introducing a hidden variable. In particular, by a suitable choice of the mixing weight $\alpha$ (determined by $c$ and the normalizing constant $\gamma$), we can express this model as a mixture of a two-stage exponential and a uniform distribution:

$$p_{\mu_1,\mu_2,\alpha}(k) = (1 - \alpha)\, p_{\mu_1,\mu_2}(k) + \alpha\, u(k).$$

Thus, we can again apply the EM algorithm to determine the mixing parameter $\alpha$. This is a standard application of the EM algorithm, and the details are omitted.

In summary, we have shown how the EM algorithm can be applied to determine maximum likelihood estimates of the three-parameter family $\{p_{\mu_1,\mu_2,\alpha}\}$ of distance models. In Figure 8 we display typical examples of this training algorithm at work.

Figure 8: The same empirical distance distributions of Figure 3 fit to the three-parameter mixture model $p_{\mu_1,\mu_2,\alpha}$ using the EM algorithm. The dashed line is the fitted curve. For the non-self trigger plot $\mu_1 = 7$, $\mu_2 = 0.0148$, and $\alpha = 0.253$. For the self trigger plot $\mu_1 = 0.29$, $\mu_2 = 0.0168$, and $\alpha = 0.224$.

5 A Nonstationary Language Model

To incorporate triggers and distance models into a long-distance language model, we begin by constructing a standard, static backoff trigram model (Katz, 1987), which we will denote as $q(w_0 \mid w_{-1}, w_{-2})$. For the purposes of building a model for the Wall Street Journal data, this trigram model is quickly trained on the entire 38-million word corpus. We then build a family of conditional exponential models of the general form

$$p(w \mid H) = \frac{1}{Z(H)}\, \exp\left(\sum_i \lambda_i f_i(H, w)\right) q(w \mid w_{-1}, w_{-2})$$

where $H = w_{-1}, w_{-2}, \ldots, w_{-N}$ is the word history, and $Z(H)$ is the normalization constant

$$Z(H) = \sum_w \exp\left(\sum_i \lambda_i f_i(H, w)\right) q(w \mid w_{-1}, w_{-2}).$$

The functions $f_i$, which depend both on the word history $H$ and the word being predicted, are called features, and each feature $f_i$ is assigned a weight $\lambda_i$. In the models that we built, feature $f_i$ is an indicator function, testing for the occurrence of a trigger pair $(s_i, t_i)$:

$$f_i(H, w) = \begin{cases} 1 & \text{if } s_i \in H \text{ and } w = t_i \\ 0 & \text{otherwise.} \end{cases}$$

The use of the trigram model as a default distribution (Csiszár, 1996) in this manner is new in language modeling. (One might also use the term prior, although $q(w \mid H)$ is not a prior in the strict Bayesian sense.) Previous work using maximum entropy methods incorporated trigram constraints as explicit features (Rosenfeld, 1996), using the uniform distribution as the default model. There are several advantages to incorporating trigrams in this way. The trigram component can be efficiently constructed over a large volume of data, using standard software or including the various sophisticated techniques for smoothing that have been developed. Furthermore, the normalization $Z(H)$ can be computed more efficiently when trigrams appear in the default distribution. For example, in the case of trigger features, since

$$Z(H) = 1 + \sum_i \delta(s_i \in H)\,(e^{\lambda_i} - 1)\, q(t_i \mid w_{-1}, w_{-2}),$$

the normalization involves only a sum over those words that are actively triggered. Finally, assuming robust estimates for the parameters $\lambda_i$, the resulting model is essentially guaranteed to be superior to the trigram model. The training algorithm we use for estimating the parameters is the Improved Iterative Scaling (IIS) algorithm introduced in (Della Pietra, Della Pietra, and Lafferty, 1997).
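As an illustration of how cheaply this model can be evaluated, the sketch below computes $p(w \mid H)$ under two assumptions: `q` is a callable smoothed backoff trigram, and `triggers` maps each trigger word $s$ to its mates $(t, i)$, where $i$ indexes the weight $\lambda_i$. Like the closed form for $Z(H)$ above, it assumes each predicted word fires at most one active feature in a given context.

```python
import math

def trigger_lm_prob(w, history, q, lam, triggers):
    """p(w | H) for the conditional exponential model with a trigram
    default distribution. Normalization touches only triggered words,
    per Z(H) = 1 + sum_i delta(s_i in H) (e^lam_i - 1) q(t_i | w-1, w-2).
    The `q` and `triggers` interfaces are assumptions for illustration."""
    w1, w2 = history[-1], history[-2]
    active = set(history)                      # words present in the window
    expo = 0.0                                 # sum of weights of firing features
    z = 1.0
    for s in active:
        for t, i in triggers.get(s, ()):
            if t == w:
                expo += lam[i]                 # feature f_i(H, w) = 1
            z += (math.exp(lam[i]) - 1.0) * q(t, w1, w2)
    return math.exp(expo) * q(w, w1, w2) / z
```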
To include distance models in the word predictions, we treat the distribution on the separation $k$ between $s_i$ and $t_i$ in a trigger pair $(s_i, t_i)$ as a prior. Suppose first that our distance model is a simple one-parameter exponential, $p(k \mid s_i \in H, w = t_i) = \mu_i e^{-\mu_i k}$. Using Bayes' theorem, we can then write

$$p(w = t_i \mid s_i \in H, s_i = w_{-k}) = p(w = t_i \mid s_i \in H)\, \frac{p(k \mid s_i \in H, w = t_i)}{p(k \mid s_i \in H)} \;\propto\; e^{\lambda_i - \mu_i k}\, q(t_i \mid w_{-1}, w_{-2}).$$

Thus, the distance dependence is incorporated as a penalizing feature, the effect of which is to discourage a large separation between $s_i$ and $t_i$. A similar interpretation holds when the two-stage mixture models $p_{\mu_1,\mu_2,\alpha}$ are used to model distance, but the formulas are more complicated.

In this fashion, we first trained distance models using the algorithm outlined in Section 4. We then incorporated the distance models as penalizing features, whose parameters remained fixed, and proceeded to train the trigger parameters $\lambda_i$ using the IIS algorithm. Sample perplexity results are tabulated in Figure 9.

One important aspect of these results is that because a smoothed trigram model is used as a default distribution, we are able to bucket the trigger features and estimate their parameters on a modest amount of data. The resulting calculation takes only several hours on a standard workstation, in comparison to the machine-months of computation that previous language models of this type required.

The use of distance penalties gives only a small improvement, in terms of perplexity, over the baseline trigger model. However, we have found that the benefits of distance modeling can be sensitive to the configuration of the trigger model. For example, in the results reported in Figure 9, a trigger is only allowed to be active once in any given context. By instead allowing multiple occurrences of a trigger s to contribute to the prediction of its mate t, both the perplexity reduction over the baseline trigram and the relative improvements due to distance modeling are increased.

    Experiment                                Perplexity   Reduction
    Baseline: trigrams trained on 5M words    170
    Trigram prior + 41,263 triggers           145          14.7%
    Same as above + distance modeling         142          16.5%
    Baseline: trigrams trained on 38M words   107
    Trigram prior + 41,263 triggers           92           14.0%
    Same as above + distance modeling         90           15.9%

Figure 9: Models constructed using trigram priors. Training the larger model required about 10 hours on a DEC Alpha workstation.

6 Conclusions

We have presented empirical evidence showing that the distribution of the distance between word pairs that have high mutual information exhibits a striking behavior that is well modeled by a three-parameter family of exponential models. The properties of these co-occurrence statistics appear to be exhibited universally in both text and conversational speech. We presented a training algorithm for this class of distance models based on a novel application of the EM algorithm. Using a standard backoff trigram model as a default distribution, we built a class of exponential language models which use nonstationary features based on trigger words to allow the model to adapt to the recent context, and then incorporated the distance models as penalizing features. The use of distance modeling results in an improvement over the baseline trigger model.

Acknowledgement

We are grateful to Fujitsu Laboratories, and in particular to Akira Ushioda, for providing access to the Nikkei corpus within Fujitsu Laboratories, and assistance in extracting Japanese trigger pairs.
References

Berger, A., S. Della Pietra, and V. Della Pietra. 1996. A maximum entropy approach to natural language processing. Computational Linguistics, 22(1):39-71.
Cover, T. M. and J. A. Thomas. 1991. Elements of Information Theory. John Wiley.
Csiszár, I. 1996. Maxent, mathematics, and information theory. In K. Hanson and R. Silver, editors, Maximum Entropy and Bayesian Methods. Kluwer Academic Publishers.
Della Pietra, S., V. Della Pietra, and J. Lafferty. 1997. Inducing features of random fields. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(3), March.
Dempster, A. P., N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(B):1-38.
Gelman, A., J. Carlin, H. Stern, and D. Rubin. 1995. Bayesian Data Analysis. Chapman & Hall, London.
Godfrey, J., E. Holliman, and J. McDaniel. 1992. SWITCHBOARD: Telephone speech corpus for research and development. In Proc. ICASSP-92.
Jelinek, F., B. Merialdo, S. Roukos, and M. Strauss. 1991. A dynamic language model for speech recognition. In Proceedings of the DARPA Speech and Natural Language Workshop, pages 293-295, February.
Katz, S. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, ASSP-35(3):400-401, March.
Kleinrock, L. 1975. Queueing Systems. Volume I: Theory. Wiley, New York.
Kuhn, R. and R. de Mori. 1990. A cache-based natural language model for speech recognition. IEEE Trans. on Pattern Analysis and Machine Intelligence, 12:570-583.
Ney, H., U. Essen, and R. Kneser. 1994. On structuring probabilistic dependencies in stochastic language modeling. Computer Speech and Language, 8:1-38.
Niesler, T. and P. Woodland. 1997. Modelling word-pair relations in a category-based language model. In Proceedings of ICASSP-97, Munich, Germany, April.
Rosenfeld, R. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer Speech and Language, 10:187-228.
Hierarchical Non-Emitting Markov Models

Eric Sven Ristad and Robert G. Thomas
Department of Computer Science
Princeton University
Princeton, NJ 08544-2087
{ristad, rgt}@cs.princeton.edu

Abstract

We describe a simple variant of the interpolated Markov model with non-emitting state transitions and prove that it is strictly more powerful than any Markov model. Empirical results demonstrate that the non-emitting model outperforms the interpolated model on the Brown corpus and on the Wall Street Journal under a wide range of experimental conditions. The non-emitting model is also much less prone to overtraining.

1 Introduction

The Markov model has long been the core technology of statistical language modeling. Many other models have been proposed, but none has offered a better combination of predictive performance, computational efficiency, and ease of implementation.

Here we add hierarchical non-emitting state transitions to the Markov model. Although the states in our model remain Markovian, the model itself is no longer Markovian because it can represent unbounded dependencies in the state distribution. Consequently, the non-emitting Markov model is strictly more powerful than any Markov model, including the context model (Rissanen, 1983; Rissanen, 1986), the backoff model (Cleary and Witten, 1984; Katz, 1987), and the interpolated Markov model (Jelinek and Mercer, 1980; MacKay and Peto, 1994).

More importantly, the non-emitting model consistently outperforms the interpolated Markov model on natural language texts, under a wide range of experimental conditions. We believe that the superior performance of the non-emitting model is due to its ability to better model conditional independence. Thus, the non-emitting model is better able to represent both conditional independence and long-distance dependence, i.e., it is simply a better statistical model. The non-emitting model is also nearly as computationally efficient and easy to implement as the interpolated model.

The remainder of our article consists of four sections. In section 2, we review the interpolated Markov model and briefly demonstrate that all interpolated models are equivalent to some basic Markov model of the same model order. Next, we introduce the hierarchical non-emitting Markov model in section 3, and prove that even a lowly second order non-emitting model is strictly more powerful than any basic Markov model, of any model order. In section 4, we report empirical results for the interpolated model and the non-emitting model on the Brown corpus and Wall Street Journal. Finally, in section 5 we conjecture that the empirical success of the non-emitting model is due to its ability to better model a point of apparent independence, such as may occur at a sentence boundary.

Our notation is as follows. Let $A$ be a finite alphabet of distinct symbols, $|A| = k$, and let $x^T \in A^T$ denote an arbitrary string of length $T$ over the alphabet $A$. Then $x_i^j$ denotes the substring of $x^T$ that begins at position $i$ and ends at position $j$. For convenience, we abbreviate the unit length substring $x_i^i$ as $x_i$ and the length $t$ prefix of $x^T$ as $x^t$.

2 Background

Here we review the basic Markov model and the interpolated Markov model, and establish their equivalence. A basic Markov model $\phi = (A, n, \delta_n)$ consists of an alphabet $A$, a model order $n$, $n \ge 0$, and the state transition probabilities $\delta_n : A^n \times A \to [0, 1]$.
With probability $\delta_n(y \mid x^n)$, a Markov model in the state $x^n$ will emit the symbol $y$ and transition to the state $x_2^n y$. Therefore, the probability $p_m(x_t \mid x^{t-1}, \phi)$ assigned by an order $n$ basic Markov model $\phi$ to a symbol $x_t$ in the history $x^{t-1}$ depends only on the last $n$ symbols of the history:

$$p_m(x_t \mid x^{t-1}, \phi) = \delta_n(x_t \mid x_{t-n}^{t-1}) \qquad (1)$$

An interpolated Markov model $\phi = (A, n, \lambda, \delta)$ consists of a finite alphabet $A$, a maximal model order $n$, the state transition probabilities $\delta = \delta_0 \ldots \delta_n$, $\delta_i : A^i \times A \to [0, 1]$, and the state-conditional interpolation parameters $\lambda = \lambda_0 \ldots \lambda_n$, $\lambda_i : A^i \to [0, 1]$. The probability assigned by an interpolated model is a linear combination of the probabilities assigned by all the lower order Markov models:

$$p_\phi(y \mid x^i, \phi) = \lambda_i(x^i)\,\delta_i(y \mid x^i) + (1 - \lambda_i(x^i))\, p_\phi(y \mid x_2^i, \phi) \qquad (2)$$

where $\lambda_i(x^i) = 0$ for $i > n$ and $\lambda_0(\epsilon) = 1$, and therefore $p_\phi(x_t \mid x^{t-1}, \phi) = p_\phi(x_t \mid x_{t-n}^{t-1}, \phi)$, i.e., the prediction depends only on the last $n$ symbols of the history.

In the interpolated model, the interpolation parameters smooth the conditional probabilities estimated from longer histories with those estimated from shorter histories (Jelinek and Mercer, 1980). Longer histories support stronger predictions, while shorter histories have more accurate statistics. Interpolating the predictions from histories of different lengths results in more accurate predictions than can be obtained from any fixed history length.

A quick glance at the form of (2) and (1) reveals the fundamental simplicity of the interpolated Markov model. Every interpolated model $\phi$ is equivalent to some basic Markov model $\phi'$ (Lemma 2.1), and every basic Markov model $\phi$ is equivalent to some interpolated model $\phi'$ (Lemma 2.2).

Lemma 2.1 $\forall\phi\;\exists\phi'\;\forall x^T \in A^*\;[\,p_m(x^T \mid \phi', T) = p_\phi(x^T \mid \phi, T)\,]$

Proof. We may convert the interpolated model $\phi$ into a basic model $\phi'$ of the same model order $n$, simply by setting $\delta_n'(y \mid x^n)$ equal to $p_\phi(y \mid x^n, \phi)$ for all states $x^n \in A^n$ and symbols $y \in A$. □

Lemma 2.2 $\forall\phi\;\exists\phi'\;\forall x^T \in A^*\;[\,p_\phi(x^T \mid \phi', T) = p_m(x^T \mid \phi, T)\,]$

Proof. Every basic model is equivalent to an interpolated model whose interpolation values are unity for states of order $n$. □

The lemmas suffice to establish the following theorem.

Theorem 1 The class of interpolated Markov models is equivalent to the class of basic Markov models.

Proof. By lemmas 2.1 and 2.2. □

A similar argument applies to the backoff model. Every backoff model can be converted into an equivalent basic model, and every basic model is a backoff model.

3 Non-Emitting Markov Models

A hierarchical non-emitting Markov model $\phi = (A, n, \lambda, \delta)$ consists of an alphabet $A$, a maximal model order $n$, the state transition probabilities $\delta = \delta_0 \ldots \delta_n$, $\delta_i : A^i \times A \to [0, 1]$, and the non-emitting state transition probabilities $\lambda = \lambda_0 \ldots \lambda_n$, $\lambda_i : A^i \to [0, 1]$. With probability $1 - \lambda_i(x^i)$, a non-emitting model will transition from the state $x^i$ to the state $x_2^i$ without emitting a symbol. With probability $\lambda_i(x^i)\,\delta_i(y \mid x^i)$, a non-emitting model will transition from the state $x^i$ to the state $x^i y$ and emit the symbol $y$. Therefore, the probability $p_e(y^j \mid x^i, \phi)$ assigned to a string $y^j$ in the history $x^i$ by a non-emitting model $\phi$ has the recursive form (3):

$$p_e(y^j \mid x^i, \phi) = \lambda_i(x^i)\,\delta_i(y_1 \mid x^i)\, p_e(y_2^j \mid x^i y_1, \phi) + (1 - \lambda_i(x^i))\, p_e(y^j \mid x_2^i, \phi) \qquad (3)$$

where $\lambda_i(x^i) = 0$ for $i > n$ and $\lambda_0(\epsilon) = 1$. Note that, unlike the basic Markov model, $p_e(x_t \mid x^{t-1}, \phi) \ne p_e(x_t \mid x_{t-n}^{t-1}, \phi)$, because the state distribution of the non-emitting model depends on the prefix $x^{t-n}$. This simple fact will allow us to establish that there exists a non-emitting model that is not equivalent to any basic model.
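To make the recursion (3) concrete, the sketch below evaluates the probability a non-emitting model assigns to a string after a history. The dictionary-based parameterization is an illustrative assumption; the order-2 model used in the proof of Lemma 3.1 below is one instance of it.

```python
def p_nonemitting(y, x, lam, delta, n):
    """Probability assigned to string y after history x under a hierarchical
    non-emitting model, following equation (3): a state either emits the
    next symbol (prob. lam(state) * delta(y1 | state)) or drops silently to
    its suffix state. `lam` and `delta` are dicts keyed by state tuples
    (delta[()] holds the order-0 model); missing lam entries default to 0,
    the empty state always emits (lambda_0 = 1), and states are truncated
    to the maximal order n >= 1, which realizes lambda_i = 0 for i > n."""
    def rec(state, rest):
        if not rest:
            return 1.0
        l = 1.0 if not state else lam.get(state, 0.0)
        total = 0.0
        if l > 0.0:
            d = delta.get(state, {}).get(rest[0], 0.0)
            if d > 0.0:
                # emit rest[0], extend the state, keep the last n symbols
                total += l * d * rec((state + (rest[0],))[-n:], rest[1:])
        if state:
            # silent transition to the suffix state
            total += (1.0 - l) * rec(state[1:], rest)
        return total
    start = tuple(x)[-n:] if n > 0 else ()
    return rec(start, tuple(y))
```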
Lemma 3.1 states that there exists a non-emitting model $\phi$ that cannot be converted into an equivalent basic model of any order. There will always be a string $x^T$ that distinguishes the non-emitting model $\phi$ from any given basic model $\phi'$ because the non-emitting model can encode unbounded dependencies in its state distribution.

Lemma 3.1 $\exists\phi\;\forall\phi'\;\exists x^T \in A^*\;[\,p_e(x^T \mid \phi, T) \ne p_m(x^T \mid \phi', T)\,]$

Proof. The idea of the proof is that our non-emitting model will encode the first symbol $x_1$ of the string $x^T$ in its state distribution, for an unbounded distance. This will allow it to predict the last symbol $x_T$ using its knowledge of the first symbol $x_1$. The basic model will only be able to predict the last symbol $x_T$ using the preceding $n$ symbols, and therefore when $T$ is greater than $n$, we can arrange for $p_e(x_T \mid \phi, T)$ to differ from any $p_m(x_T \mid \phi', T)$, simply by our choice of $x_1$.

The smallest non-emitting model capable of exhibiting the required behavior has order 2. The non-emitting transition probabilities $\lambda$ and the interior of the string $x^{T-1}$ will be chosen so that the non-emitting model is either in an order 2 state or an order 0 state, with no way to transition from one to the other. The first symbol $x_1$ will determine whether the non-emitting model goes to the order 2 state or stays in the order 0 state. No matter what probability the basic model assigns to the final symbol $x_T$, the non-emitting model can assign a different probability by the appropriate choice of $x_1$, $\delta_0(x_T)$, and $\delta_2(x_T \mid x_{T-2}^{T-1})$.

Consider the second order non-emitting model over a binary alphabet with $\lambda(0) = 1$, $\lambda(1) = 0$, and $\lambda(11) = 1$ on strings in $A\,1^*A$. When $x_1 = 0$, then $x_2$ will be predicted using the 1st order model $\delta_1(x_2 \mid x_1)$, and all subsequent $x_t$ will be predicted by the second order model $\delta_2(x_t \mid x_{t-2}^{t-1})$. When $x_1 = 1$, then all subsequent $x_t$ will be predicted by the 0th order model $\delta_0(x_t)$. Thus for all $t > p$, $p_e(x_t \mid x^{t-1}) \ne p_e(x_t \mid x_{t-p}^{t-1})$ for any fixed $p$, and no basic model is equivalent to this simple non-emitting model. □

It is obvious that every basic model is also a non-emitting model, with the appropriate choice of non-emitting transition probabilities.

Lemma 3.2 $\forall\phi\;\exists\phi'\;\forall x^T \in A^*\;[\,p_e(x^T \mid \phi', T) = p_m(x^T \mid \phi, T)\,]$

These lemmas suffice to establish the following theorem.

Theorem 2 The class of non-emitting Markov models is strictly more powerful than the class of basic Markov models, because it is able to represent a larger class of probability distributions on strings.

Proof. By lemmas 3.1 and 3.2. □

Since interpolated models and backoff models are equivalent to basic Markov models, we have as a corollary that non-emitting Markov models are strictly more powerful than interpolated models and backoff models as well. Note that non-emitting Markov models are considerably less powerful than the full class of stochastic finite state automata (SFSA) because their states are Markovian. Non-emitting models are also less powerful than the full class of hidden Markov models.

Algorithms to evaluate the probability of a string according to a non-emitting model, and to optimize the non-emitting state transitions on a training corpus, are provided in related work (Ristad and Thomas, 1997).

4 Empirical Results

The ultimate measure of a statistical model is its predictive performance in the domain of interest. To take the true measure of non-emitting models for natural language texts, we evaluate their performance as character models on the Brown corpus (Francis and Kucera, 1982) and as word models on the Wall Street Journal.
Our results show that the non-emitting Markov model consistently gives better predictions than the traditional interpolated Markov model under equivalent experimental conditions. In all cases we compare non-emitting and interpolated models of identical model orders, with the same number of parameters. Note that the non-emitting bigram and the interpolated bigram are equivalent.

    Corpus        Size         Alphabet   Blocks
    Brown         6,004,032    90         21
    WSJ 1989      6,219,350    20,293     22
    WSJ 1987-89   42,373,513   20,092     152

All $\lambda$ values were initialized uniformly to 0.5 and then optimized using deleted estimation on the first 90% of each corpus (Jelinek and Mercer, 1980).

DELETED-ESTIMATION(B, φ)
1. Until convergence
2.   Initialize λ+, λ− to zero;
3.   For each block Bi in B
4.     Initialize δ using B − Bi;
5.     EXPECTATION-STEP(Bi, φ, λ+, λ−);
6.   MAXIMIZATION-STEP(φ, λ+, λ−);
7. Initialize δ using B;

Here $\lambda^+(x^i)$ accumulates the expectations of emitting a symbol from state $x^i$, while $\lambda^-(x^i)$ accumulates the expectations of transitioning to the state $x_2^i$ without emitting a symbol. The remaining 10% of each corpus was used to evaluate model performance. No parameter tying was performed.¹

¹ In forthcoming work, we compare the performance of the interpolated and non-emitting models on the Brown corpus and Wall Street Journal with ten different parameter tying schemes. Our experiments confirm that some parameter tying schemes improve model performance, although only slightly. The non-emitting model consistently outperformed the interpolated model on all the corpora for all the parameter tying schemes that we evaluated.

4.1 Brown Corpus

Our first set of experiments was with character models on the Brown corpus. The Brown corpus is an eclectic collection of English prose, containing 6,004,032 characters partitioned into 500 files. Deleted estimation used 21 blocks. Results are reported as per-character test message entropies (bits/char), $-\frac{1}{T}\log_2 p(y^T \mid T)$. The non-emitting model outperforms the interpolated model for all nontrivial model orders, particularly for larger model orders. The non-emitting model is considerably less prone to overtraining. After 10 EM iterations, the order 9 non-emitting model scores 2.0085 bits/char while the order 9 interpolated model scores 2.3338 bits/char after 10 EM iterations.

Figure 1: Test message entropies as a function of model order on the Brown corpus.

4.2 WSJ 1989

The second set of experiments was on the 1989 Wall Street Journal corpus, which contains 6,219,350 words. Our vocabulary consisted of the 20,293 words that occurred at least 10 times in the entire WSJ 1989 corpus. All out-of-vocabulary words were mapped to a unique OOV symbol. Deleted estimation used 22 blocks. Following standard practice in the speech recognition community, results are reported as per-word test message perplexities, $p(y^T \mid T)^{-1/T}$. Again, the non-emitting model outperforms the interpolated Markov model for all nontrivial model orders.
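The DELETED-ESTIMATION pseudocode above can be fleshed out as runnable scaffolding. The `model` interface assumed here (fit_delta, expectation, set_lambda) is hypothetical; `expectation` is where the real work happens, accumulating the expected emit (λ+) and non-emit (λ−) counts for each state on the held-out block.

```python
def deleted_estimation(blocks, model, iters=10):
    """A sketch of the deleted-estimation loop. Assumed interface:
    model.fit_delta(blocks) estimates the emission probabilities delta,
    model.expectation(block) returns per-state expected emit and non-emit
    counts, and model.set_lambda performs the M-step
    lambda(s) = lambda+(s) / (lambda+(s) + lambda-(s))."""
    for _ in range(iters):                    # "until convergence"
        plus, minus = {}, {}                  # lambda+ and lambda- accumulators
        for i, held_out in enumerate(blocks):
            model.fit_delta(blocks[:i] + blocks[i + 1:])   # delta from B - B_i
            p, m = model.expectation(held_out)
            for s, v in p.items():
                plus[s] = plus.get(s, 0.0) + v
            for s, v in m.items():
                minus[s] = minus.get(s, 0.0) + v
        model.set_lambda(plus, minus)         # M-step over all blocks
    model.fit_delta(blocks)                   # final delta from all of B
    return model
```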
Ioo i i " L i ,, 1 2 Model30;,der 4 Figure 2: Test message perplexities as a function of model order on WSJ 1989. 4.3 WSJ 1987-89 The third set of experiments was on the 1987-89 Wall Street Journal corpus, which contains 42,373,513 words. Our vocabulary consisted of the 20,092 words that occurred at least 63 times in the entire WSJ 1987-89 corpus. Again, all out-of-vocabulary words were mapped to a unique OOV symbol. Deleted es- timation used 152 blocks. Results are reported as test message perplexities. As with the WS3 1989 corpus, the non-emitting model outperforms the in- terpolated model for all nontrivial model orders. 5 Conclusion The power of the non-emitting model comes from its ability to represent additional information in its state distribution. In the proof of lemma 3.1 above, we used the state distribution to represent a long dis- tance dependency. We conjecture, however, that the empirical success of the non-emitting model is due to its ability to remember to ignore (ie., to forget) a misleading history at a point of apparent indepen- dence. A point of apparent independence occurs when we have adequate statistics for two strings z n-1 and yn but not yet for their concatenation z,,-lyn. In the most extreme case, the frequencies of z n-1 and yn are high, but the frequency of even the medial bigram zn-lyl is low. In such a situation, we would like to ignore the entire history z n-1 when predicting y'~, because all di(yjlxn-l~ -1) will be close to zero x J J ;SO 140 120 110 100 90 80 Non-4mitting Modot: Be=t EM #erat)o41 Lnterpolatod Moflel: Best EM Itorlt~on ~- Figure 3: Test message perplexities as a function of model order on WSJ 1987-89. for i < n. To simplify the example, we assume that 6(yjlz~-l~ -1) = 0 for j _> 1 and i < n. In such a situation, the interpolated model must repeatedly transition past some suffix of the history z ~-1 for each of the next n-1 predictions, and so the total probability assigned to pc(y nle) by the interpo- lated model is a product of n(n - 1)/2 probabilities. po(y~ I ~"-~ ) "-~ ))] = [i=~l(1-A(x~ *-1 P(Y~I~) n--1 ] ... (1 - a(~_~yi~-l))p(yn ly ~-~) F,,-I r'.--i ] :" [k~=li~= (1--A(X'~-ly~-I)) Pc(Yn'~) (4) In contrast, the non-emitting model will imme- diately transition to the empty context in order to predict the first symbol Yl, and then it need never again transition past any suffix of x n-]. Conse- quently, the total probability assigned to pe(yn[e) by the non-emitting model is a product of only n- 1 probabilities. n--1 ] Given the same state transition probabilities, note that (4) must be considerably less than (5) because probabilities lie in [0, 1]. Thus, we believe that the empirical success of the non-emitting model comes from its ability to effectively ignore a misleading his- tory rather than from its ability to remember distant events. 384 Finally, we note the use of hierarchical non- emitting transitions is a general technique that may be employed in any time series model, including con- text models and backoff models. Acknowledgments Both authors are partially supported by Young Investigator Award IRI-0258517 to Eric Ristad from the National Science Foundation. References Lalit R. Bahl, Peter F. Brown, Peter V. de Souza, Robert L. Mercer, and David Nahamoo. 1991. A fast algorithm for deleted interpolation. In Proc. EUROSPEECH '91, pages 1209-1212, Genoa. J.G. Cleary and I.H. Witten. 1984. Data com- pression using adaptive coding and partial string matching. IEEE Trans. Comm., COM-32(4):396- 402. W. Nelson Francis and Henry Kucera. 
W. Nelson Francis and Henry Kucera. 1982. Frequency analysis of English usage: lexicon and grammar. Houghton Mifflin, Boston.
Fred Jelinek and Robert L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. In Edzard S. Gelsema and Laveen N. Kanal, editors, Pattern Recognition in Practice, pages 381-397, Amsterdam, May 21-23. North Holland.
Slava Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Trans. ASSP, 35:400-401.
David J. C. MacKay and Linda C. Bauman Peto. 1994. A hierarchical Dirichlet language model. Natural Language Engineering, 1(1).
Jorma Rissanen. 1983. A universal data compression system. IEEE Trans. Information Theory, IT-29(5):656-664.
Jorma Rissanen. 1986. Complexity of strings in the class of Markov sources. IEEE Trans. Information Theory, IT-32(4):526-532.
Eric Sven Ristad and Robert G. Thomas. 1997. Hierarchical non-emitting Markov models. Technical Report CS-TR-544-96, Department of Computer Science, Princeton University, Princeton, NJ, March.
Frans M. J. Willems, Yuri M. Shtarkov, and Tjalling J. Tjalkens. 1995. The context-tree weighting method: basic properties. IEEE Trans. Inf. Theory, 41(3):653-664.
Automatic Detection of Text Genre

Brett Kessler  Geoffrey Nunberg  Hinrich Schütze
Xerox Palo Alto Research Center
3333 Coyote Hill Road
Palo Alto CA 94304 USA
Department of Linguistics
Stanford University
Stanford CA 94305-2150 USA
email: {bkessler,nunberg,schuetze}@parc.xerox.com
URL: ftp://parcftp.xerox.com/pub/qca/papers/genre

Abstract

As the text databases available to users become larger and more heterogeneous, genre becomes increasingly important for computational linguistics as a complement to topical and structural principles of classification. We propose a theory of genres as bundles of facets, which correlate with various surface cues, and argue that genre detection based on surface cues is as successful as detection based on deeper structural properties.

1 Introduction

Computational linguists have been concerned for the most part with two aspects of texts: their structure and their content. That is, we consider texts on the one hand as formal objects, and on the other as symbols with semantic or referential values. In this paper we want to consider texts from the point of view of genre: that is, according to the various functional roles they play.

Genre is necessarily a heterogeneous classificatory principle, which is based among other things on the way a text was created, the way it is distributed, the register of language it uses, and the kind of audience it is addressed to. For all its complexity, this attribute can be extremely important for many of the core problems that computational linguists are concerned with. Parsing accuracy could be increased by taking genre into account (for example, certain object-less constructions occur only in recipes in English). Similarly for POS-tagging (the frequency of uses of trend as a verb in the Journal of Commerce is 35 times higher than in Sociological Abstracts). In word-sense disambiguation, many senses are largely restricted to texts of a particular style, such as colloquial or formal (for example the word pretty is far more likely to have the meaning "rather" in informal genres than in formal ones). In information retrieval, genre classification could enable users to sort search results according to their immediate interests. People who go into a bookstore or library are not usually looking simply for information about a particular topic, but rather have requirements of genre as well: they are looking for scholarly articles about hypnotism, novels about the French Revolution, editorials about the supercollider, and so forth.

If genre classification is so useful, why hasn't it figured much in computational linguistics before now? One important reason is that, up to now, the digitized corpora and collections which are the subject of much CL research have been for the most part generically homogeneous (i.e., collections of scientific abstracts or newspaper articles, encyclopedias, and so on), so that the problem of genre identification could be set aside. To a large extent, the problems of genre classification don't become salient until we are confronted with large and heterogeneous search domains like the World-Wide Web. Another reason for the neglect of genre, though, is that it can be a difficult notion to get a conceptual handle on, particularly in contrast with properties of structure or topicality, which for all their complications involve well-explored territory. In order to do systematic work on automatic genre classification,
by contrast, we require the answers to some basic theoretical and methodological questions. Is genre a single property or attribute that can be neatly laid out in some hierarchical structure? Or are we really talking about a multidimensional space of properties that have little more in common than that they are more or less orthogonal to topicality? And once we have the theoretical prerequisites in place, we have to ask whether genre can be reliably identified by means of computationally tractable cues.

In a broad sense, the word "genre" is merely a literary substitute for "kind of text," and discussions of literary classification stretch back to Aristotle. We will use the term "genre" here to refer to any widely recognized class of texts defined by some common communicative purpose or other functional traits, provided the function is connected to some formal cues or commonalities and that the class is extensible. For example an editorial is a shortish prose argument expressing an opinion on some matter of immediate public concern, typically written in an impersonal and relatively formal style in which the author is denoted by the pronoun we. But we would probably not use the term "genre" to describe merely the class of texts that have the objective of persuading someone to do something, since that class -- which would include editorials, sermons, prayers, advertisements, and so forth -- has no distinguishing formal properties. At the other end of the scale, we would probably not use "genre" to describe the class of sermons by John Donne, since that class, while it has distinctive formal characteristics, is not extensible. Nothing hangs in the balance on this definition, but it seems to accord reasonably well with ordinary usage.

The traditional literature on genre is rich with classificatory schemes and systems, some of which might in retrospect be analyzed as simple attribute systems. (For general discussions of literary theories of genre, see, e.g., Butcher (1932), Dubrow (1982), Fowler (1982), Frye (1957), Hernadi (1972), Hobbes (1908), Staiger (1959), and Todorov (1978).) We will refer here to the attributes used in classifying genres as GENERIC FACETS. A facet is simply a property which distinguishes a class of texts that answers to certain practical interests, and which is moreover associated with a characteristic set of computable structural or linguistic properties, whether categorical or statistical, which we will describe as "generic cues." In principle, a given text can be described in terms of an indefinitely large number of facets. For example, a newspaper story about a Balkan peace initiative is an example of a BROADCAST as opposed to DIRECTED communication, a property that correlates formally with certain uses of the pronoun you. It is also an example of a NARRATIVE, as opposed to a DIRECTIVE (e.g., in a manual), SUASIVE (as in an editorial), or DESCRIPTIVE (as in a market survey) communication; and this facet correlates, among other things, with a high incidence of preterite verb forms.

Apart from giving us a theoretical framework for understanding genres, facets offer two practical advantages. First, some applications benefit from categorization according to facet, not genre. For example, in an information retrieval context, we will want to consider the OPINION feature most highly when we are searching for public reactions to the supercollider, where newspaper columns, editorials,
and letters to the editor will be of roughly equal interest. Secondly, we can extend our classification to genres not previously encountered. Suppose that we are presented with the unfamiliar category FINANCIAL ANALYSTS' REPORT. By analyzing genres as bundles of facets, we can categorize this genre as INSTITUTIONAL (because of the use of we as in editorials and annual reports) and as NON-SUASIVE or non-argumentative (because of the low incidence of question marks, among other things), whereas a system trained on genres as atomic entities would not be able to make sense of an unfamiliar category.

1.1 Previous Work on Genre Identification

The first linguistic research on genre that uses quantitative methods is that of Biber (1986; 1988; 1992; 1995), which draws on work on stylistic analysis, readability indexing, and differences between spoken and written language. Biber ranks genres along several textual "dimensions", which are constructed by applying factor analysis to a set of linguistic syntactic and lexical features. Those dimensions are then characterized in terms such as "informative vs. involved" or "narrative vs. non-narrative." Factors are not used for genre classification (the values of a text on the various dimensions are often not informative with respect to genre). Rather, factors are used to validate hypotheses about the functions of various linguistic features.

An important and more relevant set of experiments, which deserves careful attention, is presented in Karlgren and Cutting (1994). They too begin with a corpus of hand-classified texts, the Brown corpus. One difficulty here, however, is that it is not clear to what extent the Brown corpus classification used in this work is relevant for practical or theoretical purposes. For example, the category "Popular Lore" contains an article by the decidedly highbrow Harold Rosenberg from Commentary, and articles from Model Railroader and Gourmet, surely not a natural class by any reasonable standard. In addition, many of the text features in Karlgren and Cutting are structural cues that require tagging. We will replace these cues with two new classes of cues that are easily computable: character-level cues and deviation cues.

2 Identifying Genres: Generic Cues

This section discusses generic cues, the "observable" properties of a text that are associated with facets.

2.1 Structural Cues

Examples of structural cues are passives, nominalizations, topicalized sentences, and counts of the frequency of syntactic categories (e.g., part-of-speech tags). These cues are not much discussed in the traditional literature on genre, but have come to the fore in recent work (Biber, 1995; Karlgren and Cutting, 1994). For purposes of automatic classification they have the limitation that they require tagged or parsed texts.

2.2 Lexical Cues

Most facets are correlated with lexical cues. Examples of ones that we use are terms of address (e.g., Mr., Ms.), which predominate in papers like the New York Times; Latinate affixes, which signal certain highbrow registers like scientific articles or scholarly works; and words used in expressing dates, which are common in certain types of narrative such as news stories.
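A minimal sketch of counting such lexical cues follows; the particular address terms, suffixes, and date words are illustrative assumptions, since the paper does not enumerate its full 55-cue inventory.

```python
MONTHS = ("January", "February", "March", "April", "May", "June", "July",
          "August", "September", "October", "November", "December")

def lexical_cues(tokens):
    """Counts for a few lexical cues of the kind described above, given a
    list of word tokens. Word lists and suffixes are illustrative only."""
    return {
        "address_terms": sum(w in ("Mr.", "Ms.", "Mrs.", "Dr.")
                             for w in tokens),
        "latinate": sum(w.lower().endswith(("tion", "tions", "ity",
                                            "ities", "ment", "ments"))
                        for w in tokens),
        "date_words": sum(w.strip(",.") in MONTHS for w in tokens),
    }
```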
2.3 Character-Level Cues

Character-level cues are mainly punctuation cues and other separators and delimiters used to mark text categories like phrases, clauses, and sentences (Nunberg, 1990). Such features have not been used in previous work on genre recognition, but we believe they have an important role to play, being at once significant and very frequent. Examples include counts of question marks, exclamation marks, capitalized and hyphenated words, and acronyms.

2.4 Derivative Cues

Derivative cues are ratios and variation measures derived from measures of lexical and character-level features.

Ratios correlate in certain ways with genre, and have been widely used in previous work. We represent ratios implicitly as sums of other cues by transforming all counts into natural logarithms. For example, instead of estimating separate weights $\alpha$, $\beta$, and $\gamma$ for the ratios words per sentence (average sentence length), characters per word (average word length) and words per type (token/type ratio), respectively, we express this desired weighting:

$$\alpha \log\frac{W+1}{S+1} + \beta \log\frac{C+1}{W+1} + \gamma \log\frac{W+1}{T+1}$$

as follows:

$$(\alpha - \beta + \gamma)\log(W+1) - \alpha\log(S+1) + \beta\log(C+1) - \gamma\log(T+1)$$

(where $W$ = word tokens, $S$ = sentences, $C$ = characters, $T$ = word types). The 55 cues in our experiments can be combined to almost 3000 different ratios. The log representation ensures that all these ratios are available implicitly while avoiding overfitting and the high computational cost of training on a large set of cues.

Variation measures capture the amount of variation of a certain count cue in a text (e.g., the standard deviation in sentence length). This type of useful metric has not been used in previous work on genre.

The experiments in this paper are based on 55 cues from the last three groups: lexical, character-level and derivative cues. These cues are easily computable in contrast to the structural cues that have figured prominently in previous work on genre.

3 Method

3.1 Corpus

The corpus of texts used for this study was the Brown Corpus. For the reasons mentioned above, we used our own classification system, and eliminated texts that did not fall unequivocally into one of our categories. We ended up using 499 of the 802 texts in the Brown Corpus. (While the Corpus contains 500 samples, many of the samples contain several texts.)

For our experiments, we analyzed the texts in terms of three categorical facets: BROW, NARRATIVE, and GENRE. BROW characterizes a text in terms of the presumptions made with respect to the required intellectual background of the target audience. Its levels are POPULAR, MIDDLE, UPPERMIDDLE, and HIGH. For example, the mainstream American press is classified as MIDDLE and tabloid newspapers as POPULAR. The NARRATIVE facet is binary, telling whether a text is written in a narrative mode, primarily relating a sequence of events. The GENRE facet has the values REPORTAGE, EDITORIAL, SCITECH, LEGAL, NONFICTION, FICTION. The first two characterize two types of articles from the daily or weekly press: reportage and editorials. The level SCITECH denominates scientific or technical writings, and LEGAL characterizes various types of writings about law and government administration. Finally, NONFICTION is a fairly diverse category encompassing most other types of expository writing, and FICTION is used for works of fiction.

Our corpus of 499 texts was divided into a training subcorpus (402 texts) and an evaluation subcorpus (97).
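Before turning to the models, here is the log transformation of Section 2.4 in code form: with the four log counts below as features, a linear classifier weights any of the ratio cues implicitly, since, e.g., $\alpha\log\frac{W+1}{S+1}$ is exactly $\alpha\log(W+1) - \alpha\log(S+1)$. The crude sentence counter is an assumption for illustration.

```python
import math

def log_count_cues(text):
    """Log-transformed count cues, per Section 2.4. Real experiments would
    use a proper tokenizer and sentence splitter; text.split() and
    end-punctuation counting are rough stand-ins."""
    words = text.split()
    W = len(words)                                  # word tokens
    S = max(1, sum(text.count(p) for p in ".!?"))   # sentences (rough)
    C = len(text)                                   # characters
    T = len(set(words))                             # word types
    return [math.log(W + 1), math.log(S + 1),
            math.log(C + 1), math.log(T + 1)]
```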
The evaluation subcorpus was designed to have approximately equal numbers of all represented combinations of facet levels. Most such combinations have six texts in the evaluation corpus, but due to small numbers of some types of texts, some extant combinations are underrepresented. Within this stratified framework, texts were chosen by a pseudo random-number generator. This setup results in different quantitative compositions of training and evaluation set. For example, the most frequent genre level in the training subcorpus is REPORTAGE, but in the evaluation subcorpus NONFICTION predominates.

3.2 Logistic Regression

We chose logistic regression (LR) as our basic numerical method. Two informal pilot studies indicated that it gave better results than linear discrimination and linear regression.

LR is a statistical technique for modeling a binary response variable by a linear combination of one or more predictor variables, using a logit link function:

$$g(\pi) = \log\bigl(\pi/(1-\pi)\bigr)$$

and modeling variance with a binomial random variable, i.e., the dependent variable $\log(\pi/(1-\pi))$ is modeled as a linear combination of the independent variables. The model has the form $g(\pi) = x_i\beta$, where $\pi$ is the estimated response probability (in our case the probability of a particular facet value), $x_i$ is the feature vector for text $i$, and $\beta$ is the weight vector which is estimated from the matrix of feature vectors. The optimal value of $\beta$ is derived via maximum likelihood estimation (McCullagh and Nelder, 1989), using SPlus (Statistical Sciences, 1991).

For binary decisions, the application of LR was straightforward. For the polytomous facets GENRE and BROW, we computed a predictor function independently for each level of each facet and chose the category with the highest prediction.

The most discriminating of the 55 variables were selected using stepwise backward selection based on the AIC criterion (see documentation for STEP.GLM in Statistical Sciences (1991)). A separate set of variables was selected for each binary discrimination task.

3.2.1 Structural Cues

In order to see whether our easily-computable surface cues are comparable in power to the structural cues used in Karlgren and Cutting (1994), we also ran LR with the cues used in their experiment. Because we use individual texts in our experiments instead of the fixed-length conglomerate samples of Karlgren and Cutting, we averaged all count features over text length.

3.3 Neural Networks

Because of the high number of variables in our experiments, there is a danger that overfitting occurs. LR also forces us to simulate polytomous decisions by a series of binary decisions, instead of directly modeling a multinomial response. Finally, classical LR does not model variable interactions.

For these reasons, we ran a second set of experiments with neural networks, which generally do well with a high number of variables because they protect against overfitting. Neural nets also naturally model variable interactions. We used two architectures, a simple perceptron (a two-layer feed-forward network with all input units connected to all output units), and a multi-layer perceptron with all input units connected to all units of the hidden layer, and all units of the hidden layer connected to all output units.
For binary decisions, such as determining whether or not a text is NARRATIVE, the output layer consists of one sigmoidal output unit; for polytomous decisions, it consists of four (BROW) or six (GENRE) softmax units (which implement a multinomial response model) (Rumelhart et al., 1995). The size of the hidden layer was chosen to be three times as large as the size of the output layer (3 units for binary decisions, 12 units for BROW, 18 units for GENRE).

For binary decisions, the simple perceptron fits a logistic model just as LR does. However, it is less prone to overfitting because we train it using three-fold cross-validation. Variables are selected by summing the cross-entropy error over the three validation sets and eliminating the variable that if eliminated results in the lowest cross-entropy error. The elimination cycle is repeated until this summed cross-entropy error starts increasing. Because this selection technique is time-consuming, we only apply it to a subset of the discriminations.

4 Results

Table 1 gives the results of the experiments. For each genre facet, it compares our results using surface cues (both with logistic regression and neural nets) against results using Karlgren and Cutting's structural cues on the one hand (last pair of columns) and against a baseline on the other (first column). Each text in the evaluation suite was tested for each facet. Thus the number 78 for NARRATIVE under method "LR (Surf.) All" means that when all texts were subjected to the NARRATIVE test, 78% of them were classified correctly.

There are at least two major ways of conceiving what the baseline should be in this experiment. If the machine were to guess randomly among k categories, the probability of a correct guess would be 1/k, i.e., 1/2 for NARRATIVE, 1/6 for GENRE, and 1/4 for BROW. But one could get dramatic improvement just by building a machine that always guesses the most populated category: NONFICT for GENRE, MIDDLE for BROW, and No for NARRATIVE. The first approach would be fair, because our machines in fact have no prior knowledge of the distribution of genre facets in the evaluation suite, but we decided to be conservative and evaluate our methods against the latter baseline. No matter which approach one takes, however, each of the numbers in the table is significant at p < .05 by a binomial distribution. That is, there is less than a 5% chance that a machine guessing randomly could have come up with results so much better than the baseline.

It will be recalled that in the LR models, the facets with more than two levels were computed by means of binary decision machines for each level, then choosing the level with the most positive score. Therefore some feeling for the internal functioning of our algorithms can be obtained by seeing what the performance is for each of these binary machines, and for the sake of comparison this information is also given for some of the neural net models. Table 2 shows how often each of the binary machines correctly determined whether a text did or did not fall in a particular facet level. Here again the appropriate baseline could be determined two ways. In a machine that chooses randomly, performance would be 50%, and all of the numbers in the table would be significantly better than chance (p < .05, binomial distribution). But a simple machine that always guesses No would perform much better, and it is against this stricter standard that we computed the baseline in Table 2. Here, the binomial distribution shows that some numbers are not significantly better than the baseline. The numbers that are significantly better than chance at p < .05 by the binomial distribution are starred.

Tables 1 and 2 present aggregate results, when all texts are classified for each facet or level. Table 3, by contrast, shows which classifications are assigned for texts that actually belong to a specific known level. For example, the first row shows that of the 18 texts that really are of the REPORTAGE GENRE level, 83% were correctly classified as REPORTAGE, 6% were misclassified as EDITORIAL, and 11% as NONFICTION. Because of space constraints, we present this amount of detail only for the six GENRE levels, with logistic regression on selected surface variables.

5 Discussion

The experiments indicate that categorization decisions can be made with reasonable accuracy on the basis of surface cues. All of the facet level assignments are significantly better than a baseline of always choosing the most frequent level (Table 1), and the performance appears even better when one considers that the machines do not actually know what the most frequent level is.

When one takes a closer look at the performance of the component machines, it is clear that some facet levels are detected better than others. Table 2 shows that within the facet GENRE, our systems do a particularly good job on REPORTAGE and FICTION, trend correctly but not necessarily significantly for SCITECH and NONFICTION, but perform less well for EDITORIAL and LEGAL texts. We suspect that the indifferent performance on SCITECH and LEGAL texts may simply reflect the fact that these genre levels are fairly infrequent in the Brown corpus and hence in our training set. Table 3 sheds some light on the other cases. The lower performance on the EDITORIAL and NONFICTION tests stems mostly from misclassifying many NONFICTION texts as EDITORIAL. Such confusion suggests that these genre types are closely related to each other, as in fact they are. Editorials might best be treated in future experiments as a subtype of NONFICTION, perhaps distinguished by separate facets such as OPINION and INSTITUTIONAL AUTHORSHIP.

Although Table 1 shows that our methods predict BROW at above-baseline levels, further analysis (Table 2) indicates that most of this performance comes from accuracy in deciding whether or not a text is HIGH BROW. The other levels are identified at near baseline performance. This suggests problems with the labeling of the BROW feature in the training data. In particular, we had labeled journalistic texts on the basis of the overall brow of the host publication, a simplification that ignores variation among authors and the practice of printing features from other publications. We plan to improve those labelings in future experiments by classifying brow on an article-by-article basis.

The experiments suggest that there is only a small difference between surface and structural cues. Comparing LR with surface cues and LR with structural cues as input, we find that they yield about the same performance: averages of 77.0% (surface) vs. 77.5% (structural) for all variables and 78.4% (surface) vs. 78.9% (structural) for selected variables. Looking at the independent binary decisions on a task-by-task basis, surface cues are worse in 10 cases and better in 8 cases.
Here, the binomial distribution shows that some numbers are not significantly better than the baseline. The numbers that are significantly better than chance at p < .05 by the binomial distribution are starred.

Tables 1 and 2 present aggregate results, when all texts are classified for each facet or level. Table 3, by contrast, shows which classifications are assigned for texts that actually belong to a specific known level. For example, the first row shows that of the 18 texts that really are of the REPORTAGE genre level, 83% were correctly classified as REPORTAGE, 6% were misclassified as EDITORIAL, and 11% as NONFICTION. Because of space constraints, we present this amount of detail only for the six GENRE levels, with logistic regression on selected surface variables.

5 Discussion

The experiments indicate that categorization decisions can be made with reasonable accuracy on the basis of surface cues. All of the facet level assignments are significantly better than a baseline of always choosing the most frequent level (Table 1), and the performance appears even better when one considers that the machines do not actually know what the most frequent level is.

When one takes a closer look at the performance of the component machines, it is clear that some facet levels are detected better than others. Table 2 shows that within the facet GENRE, our systems do a particularly good job on REPORTAGE and FICTION, trend correctly but not necessarily significantly for SCITECH and NONFICTION, but perform less well for EDITORIAL and LEGAL texts. We suspect that the indifferent performance on SCITECH and LEGAL texts may simply reflect the fact that these genre levels are fairly infrequent in the Brown corpus and hence in our training set. Table 3 sheds some light on the other cases. The lower performance on the EDITORIAL and NONFICTION tests stems mostly from misclassifying many NONFICTION texts as EDITORIAL. Such confusion suggests that these genre types are closely related to each other, as in fact they are. Editorials might best be treated in future experiments as a subtype of NONFICTION, perhaps distinguished by separate facets such as OPINION and INSTITUTIONAL AUTHORSHIP.

Although Table 1 shows that our methods predict BROW at above-baseline levels, further analysis (Table 2) indicates that most of this performance comes from accuracy in deciding whether or not a text is HIGH BROW. The other levels are identified at near baseline performance. This suggests problems with the labeling of the BROW feature in the training data. In particular, we had labeled journalistic texts on the basis of the overall brow of the host publication, a simplification that ignores variation among authors and the practice of printing features from other publications. We plan to improve those labelings in future experiments by classifying brow on an article-by-article basis.

The experiments suggest that there is only a small difference between surface and structural cues. Comparing LR with surface cues and LR with structural cues as input, we find that they yield about the same performance: averages of 77.0% (surface) vs. 77.5% (structural) for all variables and 78.4% (surface) vs. 78.9% (structural) for selected variables. Looking at the independent binary decisions on a task-by-task basis, surface cues are worse in 10 cases and better in 8 cases.
Table 1: Classification Results for All Facets.

    Facet       Baseline   LR (Surf.)    2LP          3LP          LR (Struct.)
                           All    Sel.   All   Sel.   All   Sel.   All    Sel.
    Narrative   54         78     80     82    82     86    82     78     80
    Genre       33         61     66     75    79     71    74     66     62
    Brow        32         44     46     47    --     54    --     46     53

Note. Numbers are the percentage of the evaluation subcorpus (N = 97) which were correctly assigned to the appropriate facet level; the Baseline column tells what percentage would be correct if the machine always guessed the most frequent level. LR is Logistic Regression, over our surface cues (Surf.) or Karlgren and Cutting's structural cues (Struct.); 2LP and 3LP are 2- or 3-layer perceptrons using our surface cues. Under each experiment, All tells the results when all cues are used, and Sel. tells the results when for each level one selects the most discriminating cues. A dash indicates that an experiment was not run.

Table 2: Classification Results for Each Facet Level.

    Levels         Baseline   LR (Surf.)    2LP    3LP    LR (Struct.)
                              All    Sel.   All    All    All    Sel.
    Genre
     Rep           81         89*    88     94*    94*    90*    90*
     Edit          81         75     --     74     80     79     77
     Legal         95         96     96     95     95     93     93
     Scitech       94         100*   96     99*    94     93     96
     Nonfict       67         67     68     78*    67     73     74
     Fict          81         93*    96*    99*    81     96*    96*
    Brow
     Popular       74         74     75     74     74     72     73
     Middle        68         66     67     64     54     58     64
     Uppermiddle   88         74     78     86     88     79     82
     High          70         84*    88*    89*    90*    85*    86*

Note. Numbers are the percentage of the evaluation subcorpus (N = 97) which was correctly classified on a binary discrimination task. The Baseline column tells what percentage would be got correct by guessing No for each level. Headers have the same meaning as in Table 1. * means significantly better than Baseline at p < .05, using a binomial distribution (N = 97, p as per first column).

Table 3: Classification Results by Genre Level.

    Actual      Guess: Rep   Edit   Legal   Scitech   Nonfict   Fict     N
    Rep                83    6      0       0         11        0        18
    Edit               17    61     0       0         17        6        18
    Legal              20    0      20      0         60        0        5
    Scitech            0     0      0       83        17        0        6
    Nonfict            3     34     0       6         47        9        32
    Fict               0     6      0       0         0         94       18

Note. Numbers are the percentage of the texts actually belonging to the GENRE level indicated in the first column that were classified as belonging to each of the GENRE levels indicated in the column headers. Thus the diagonals are correct guesses, and each row would sum to 100%, but for rounding error.

Such a result is expected if we assume that either cue representation is equally likely to do better than the other (assuming a binomial model, the probability of getting this or a more extreme result is sum_{i=0..8} b(i; 18, 0.5) = 0.41). We conclude that there is at best a marginal advantage to using structural cues, an advantage that will not justify the additional computational cost in most cases.

Our goal in this paper has been to prepare the ground for using genre in a wide variety of areas in natural language processing. The main remaining technical challenge is to find an effective strategy for variable selection in order to avoid overfitting during training. The fact that the neural networks have a higher performance on average and a much higher performance for some discriminations (though at the price of higher variability of performance) indicates that overfitting and variable interactions are important problems to tackle.

On the theoretical side, we have developed a taxonomy of genres and facets. Genres are considered to be generally reducible to bundles of facets, though sometimes with some irreducible atomic residue. This way of looking at the problem allows us to define the relationships between different genres instead of regarding them as atomic entities. We also have a framework for accommodating new genres as yet unseen bundles of facets.
Finally, by decomposing genres into facets, we can concentrate on whatever generic aspect is important in a particular application (e.g., narrativity for one looking for accounts of the storming of the Bastille).

Further practical tests of our theory will come in applications of genre classification to tagging, summarization, and other tasks in computational linguistics. We are particularly interested in applications to information retrieval, where users are often looking for texts with particular, quite narrow generic properties: authoritatively written documents, opinion pieces, scientific articles, and so on. Sorting search results according to genre will gain importance as the typical data base becomes increasingly heterogeneous. We hope to show that the usefulness of retrieval tools can be dramatically improved if genre is one of the selection criteria that users can exploit.

References

Biber, Douglas. 1986. Spoken and written textual dimensions in English: Resolving the contradictory findings. Language, 62(2):384-413.
Biber, Douglas. 1988. Variation across Speech and Writing. Cambridge University Press, Cambridge, England.
Biber, Douglas. 1992. The multidimensional approach to linguistic analyses of genre variation: An overview of methodology and findings. Computers and the Humanities, 26(5-6):331-347.
Biber, Douglas. 1995. Dimensions of Register Variation: A Cross-Linguistic Comparison. Cambridge University Press, Cambridge, England.
Butcher, S. H., editor. 1932. Aristotle's Theory of Poetry and Fine Arts, with The Poetics. Macmillan, London, 4th edition.
Dubrow, Heather. 1982. Genre. Methuen, London and New York.
Fowler, Alistair. 1982. Kinds of Literature. Harvard University Press, Cambridge, Massachusetts.
Frye, Northrop. 1957. The Anatomy of Criticism. Princeton University Press, Princeton, New Jersey.
Hernadi, Paul. 1972. Beyond Genre. Cornell University Press, Ithaca, New York.
Hobbes, Thomas. 1908. The answer of Mr Hobbes to Sir William Davenant's preface before Gondibert. In J. E. Spingarn, editor, Critical Essays of the Seventeenth Century. The Clarendon Press, Oxford.
Karlgren, Jussi and Douglass Cutting. 1994. Recognizing text genres with simple metrics using discriminant analysis. In Proceedings of Coling 94, Kyoto.
McCullagh, P. and J. A. Nelder. 1989. Generalized Linear Models, chapter 4, pages 101-123. Chapman and Hall, 2nd edition.
Nunberg, Geoffrey. 1990. The Linguistics of Punctuation. CSLI Publications, Stanford, California.
Rumelhart, David E., Richard Durbin, Richard Golden, and Yves Chauvin. 1995. Backpropagation: The basic theory. In Yves Chauvin and David E. Rumelhart, editors, Backpropagation: Theory, Architectures, and Applications. Lawrence Erlbaum, Hillsdale, New Jersey, pages 1-34.
Staiger, Emil. 1959. Grundbegriffe der Poetik. Atlantis, Zurich.
Statistical Sciences. 1991. S-PLUS Reference Manual. Statistical Sciences, Seattle, Washington.
Todorov, Tsvetan. 1978. Les genres du discours. Seuil, Paris.
Efficient Construction of Underspecified Semantics under Massive Ambiguity

Jochen Dörre
Institut für maschinelle Sprachverarbeitung
University of Stuttgart

Abstract

We investigate the problem of determining a compact underspecified semantical representation for sentences that may be highly ambiguous. Due to combinatorial explosion, the naive method of building semantics for the different syntactic readings independently is prohibitive. We present a method that takes as input a syntactic parse forest with associated constraint-based semantic construction rules and directly builds a packed semantic structure. The algorithm is fully implemented and runs in O(n^4 log(n)) in sentence length, if the grammar meets some reasonable 'normality' restrictions.

1 Background

One of the most central problems that any NL system must face is the ubiquitous phenomenon of ambiguity. In the last few years a whole new branch developed in semantics that investigates underspecified semantic representations in order to cope with this phenomenon. Such representations do not stand for the real or intended meaning of sentences, but rather for the possible options of interpretation. Quantifier scope ambiguities are a semantic variety of ambiguity that is handled especially well by this approach. Pioneering work in that direction has been (Alshawi 92) and (Reyle 93).

More recently there has been growing interest in developing the underspecification approach to also cover syntactic ambiguities (cf. (Pinkal 95; EggLebeth 95; Schiehlen 96)). Schiehlen's approach is outstanding in that he fully takes into account syntactic constraints. In (Schiehlen 96) he presents an algorithm which directly constructs a single underspecified semantic structure from the ideal "underspecified" syntactic structure, a parse forest.

On the other hand, a method for producing "packed semantic structures", in that case "packed quasi-logical forms", has already been used in the Core Language Engine, informally described in (Alshawi 92, Chap. 7). However, this method only produces a structure that is virtually isomorphic to the parse forest, since it simply replaces parse forest nodes by their corresponding semantic operators. No attempt is made to actually apply semantic operators in the phase where those "packed QLFs" are constructed. Moreover, the packing of the QLFs seems to serve no purpose in the processing phases following semantic analysis. Already the immediately succeeding phase "sortal filtering" requires QLFs to be unpacked, i.e. enumerated.

Contrary to the CLE method, Schiehlen's method actively packs semantic structures, even when they result from distinct syntactic structures, extracting common parts. His method, however, may take time exponential w.r.t. sentence length. Already the semantic representations it produces can be exponentially large, because they grow linearly with the number of (syntactic) readings and that can be exponential, e.g., for sentences that exhibit the well-known attachment ambiguity of prepositional phrases.

*This research has been carried out while the author visited the Programming Systems Lab of Prof. Gert Smolka at the University of Saarland, Saarbrücken. Thanks to John Maxwell, Martin Müller, Joachim Niehren, Michael Schiehlen, and an anonymous reviewer for valuable feedback and to all at PS Lab for their helpful support with the OZ system.
It is therefore an interesting question to ask whether we can compute compact semantic representations from parse forests without falling prey to exponential explosion. The purpose of the present paper is to show that construction of compact semantic representations like in Schiehlen's approach from parse forests is not only possible, but also cheap, i.e., can be done in polynomial time.

To illustrate our method we use a simple DCG grammar for PP-attachment ambiguities, adapted from (Schiehlen 96), that yields semantic representations (called UDRSs) according to Underspecified Discourse Representation Theory (Reyle 93; KampReyle 93). The grammar is shown in Fig. 1.

    start(DRS) --> s([_,_,ltop],[],DRS).
    s([Event,VerbL,DomL],DRS_i,DRS_o) -->
        np([X,VerbL,DomL],DRS_i,DRS1),
        vp([Event,X,VerbL,DomL],DRS1,DRS_o).
    s([Event,VerbL,DomL],DRS_i,DRS_o) -->
        s([Event,VerbL,DomL],DRS_i,DRS1),
        pp([Event,VerbL,DomL],DRS1,DRS_o).
    vp([Ev,X,VerbL,DomL],DRS_i,DRS_o) -->
        vt([Ev,X,Y,VerbL,DomL],DRS_i,DRS1),
        np([Y,VerbL,DomL],DRS1,DRS_o).
    np([X,VbL,DomL],DRS_i,DRS_o) -->
        det([X,NounL,VbL,DomL],DRS_i,DRS1),
        n([X,NounL,DomL],DRS1,DRS_o).
    n([X,NounL,DomL],DRS_i,DRS_o) -->
        n([X,NounL,DomL],DRS_i,DRS1),
        pp([X,NounL,DomL],DRS1,DRS_o).
    pp([X,L,DomL],DRS_i,DRS_o) -->
        prep(Cond,X,Y),
        np([Y,L,DomL],[L:Cond|DRS_i],DRS_o).
    vt([Ev,X,Y,L,_DomL],DRS_i,DRS) --> [saw],
        {DRS=[L:see(Ev,X,Y)|DRS_i]}.
    det([X,Lab,VerbL,_],DRS_i,DRS) --> [a],
        {DRS=[lt(Lab,ltop),lt(VerbL,Lab),Lab:X|DRS_i],
         gensym(l,Lab), gensym(x,X)}.
    det([X,ResL,VbL,DomL],DRS_i,DRS) --> [every],
        {DRS=[lt(L,DomL),lt(VbL,ScpL),ResL:X,
              L:every(ResL,ScpL)|DRS_i],
         gensym(l,L), gensym(l,ResL),
         gensym(l,ScpL), gensym(x,X)}.
    np([X,_,_],DRS_i,DRS) --> [i],
        {DRS=[ltop:X,anchor(X,speaker)|DRS_i],
         gensym(x,X)}.
    n([X,L,_],DRS,[L:man(X)|DRS]) --> [man].
    n([X,L,_],DRS,[L:hill(X)|DRS]) --> [hill].
    prep(on(X,Y),X,Y) --> [on].
    prep(with(X,Y),X,Y) --> [with].

    Figure 1: Example DCG

The UDRSs constructed by the grammar are flat lists of the UDRS-constraints l ≤ l' (subordination (partial) ordering between labels; Prolog representation: lt(l,l')), l : Cond (condition introduction in subUDRS labeled l), l : X (referent introduction in l), l : GenQuant(l',l'') (generalised quantifier) and an anchoring function. The meaning of a UDRS as a set of denoted DRSs can be explained as follows.1 All conditions with the same label form a subUDRS and labels occurring in subUDRSs denote locations (holes) where other subUDRSs can be plugged into. The whole UDRS denotes the set of well-formed DRSs that can be formed by some plugging of the subUDRSs that does not violate the ordering ≤. Scope of quantifiers can be underspecified in UDRSs, because subordination can be left partial.

In our example grammar every nonterminal has three arguments. The 2nd and the 3rd argument represent a UDRS list as a difference list, i.e., the UDRS is "threaded through". The first argument is a list of objects occurring in the UDRS that play a specific role in syntactic combinations of the current node.2 An example of a UDRS, however a packed UDRS, is shown later on in §5.

1Readers unfamiliar with DRT should think of these structures as some Prolog terms, representing semantics, built by unifications according to the semantic rules.
It is only important to notice how we extract common parts of those structures, irrespective of the structures' meanings.
2E.g., for an NP its referent, as well as the upper and lower label for the current clause and the top label.

To avoid the dependence on a particular grammar formalism we present our method for a constraint-based grammar abstractly from the actual constraint system employed. We only require that semantic rules relate the semantic 'objects' or structures that are associated with the nodes of a local tree by employing constraints. E.g., we can view the DCG rule s → np vp as a relation between three 'semantic construction terms' or variables SemS, SemNP, SemVP equivalent to the constraints

    SemS  = [[Event,VerbL,DomL,TopL],DRS_i,DRS_o]
    SemNP = [[X,VerbL,DomL,TopL],DRS_i,DRS1]
    SemVP = [[Event,X,VerbL,DomL,TopL],DRS1,DRS_o]

Here is an overview of the paper. §2 gives the preliminaries and assumptions needed to precisely state the problem we want to solve. §3 presents the abstract algorithm. Complexity considerations follow in §4. Finally, we consider implementation issues, present results of an experiment in §5, and close with a discussion.

2 The Problem

As mentioned already, we aim at calculating from given parse forests the same compact semantic structures that have been proposed by (Schiehlen 96), i.e. structures that make explicit the common parts of different syntactic readings, so that subsequent semantic processes can use this generalised information. As he does, we assume a constraint-based grammar, e.g. a DCG (PereiraWarren 80) or HPSG (PollardSag 94), in which syntactic constraints and constraints that determine a resulting semantic representation can be separated and parsing can be performed using the syntactic constraints only.

Second, we assume that the set of syntax trees can be compactly represented as a parse forest (cf. (Earley 70; BillotLang 89; Tomita 86)). Parse forests are rooted labeled directed acyclic graphs with AND-nodes (standing for context-free branching) and OR-nodes (standing for alternative subtrees), that can be characterised as follows (cf. Fig. 2 for an example).3

    [Figure 2: Example of a parse forest — a forest for "I saw a man on the
    hill with the tele(scope)", with OR-nodes A, B, C and AND-nodes 1-32]

1. The terminal yield as well as the label of two AND-nodes are identical, if and only if they both are children of one OR-node.
2. Every tree reading is a valid parse tree.

Tree readings of such graphs are obtained by replacing any OR-node by one of its children. Parse forests can represent an exponential number of phrase structure alternatives in O(n^3) space, where n is the length of the sentence. The example uses the 3 OR-nodes (A, B, C) and the AND-nodes 1 through 32 to represent 5 complete parse trees, that would use 5 × 19 nodes.

Third, we assume the rule-to-rule hypothesis, i.e., that the grammar associates with each local tree a 'semantic rule' that specifies how to construct the mother node's semantics from those of its children. Hence, input to the algorithm is

• a parse forest
• an associated semantic rule for every local tree (AND-node together with its children) therein
• and a semantic representation for each leaf (coming from a semantic lexicon).

3The graphical representation of an OR-node is a box surrounding its children.
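The following Python sketch shows one way the input structures might be represented; the class and field names are our own illustration, not the paper's implementation.

    from dataclasses import dataclass, field
    from itertools import product

    @dataclass
    class AndNode:
        label: str                                    # grammar category, e.g. "np"
        rule: object = None                           # instantiated rule constraint
        children: list = field(default_factory=list)  # AndNode/OrNode; [] for leaves
        leaf: object = None                           # leaf constraint, if a leaf

    @dataclass
    class OrNode:
        label: str
        alternatives: list = field(default_factory=list)  # packed AndNodes

    def tree_readings(node):
        """Enumerate the (exponentially many) parse trees a forest stands for."""
        if isinstance(node, OrNode):
            for alt in node.alternatives:
                yield from tree_readings(alt)
        else:
            for combo in product(*(list(tree_readings(c)) for c in node.children)):
                yield (node.label, list(combo))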
To be more precise, we assume a constraint language C over a denumerable set of variables X, that is a sublanguage of Predicate Logic with equality and is closed under conjunction, disjunction, and variable renaming. Small Greek letters φ, ψ will henceforth denote constraints (open formulae) and letters X, Y, Z (possibly with indices) will denote variables. Writing φ(X1,...,Xk) shall indicate that X1,...,Xk are the free variables in the constraint φ. Frequently used examples for constraint languages are the language of equations over first-order terms for DCGs,4 PATR-style feature-path equations, or typed feature structure description languages (like the constraint languages of ALE (Carpenter 92) or CUF (DörreDorna 93)) for HPSG-style grammars.

Together with the constraint language we require a constraint solver, that checks constraints for satisfiability, usually by transforming them into a normal form (also called 'solved form'). Constraint solving in the DCG case is simply unification of terms.

The semantic representations mentioned before are actually not given directly, but rather as a constraint on some variable, thus allowing for partiality in the structural description. To that end we assume that every node ν in the parse forest has associated with it a variable X_ν that is used for constraining the (partial) semantic structure of ν. The semantics of a leaf node μ is hence given as a constraint φ_μ(X_μ), called a leaf constraint.

A final assumption that we adopt concerns the nature of the 'semantic rules'. The process of semantics construction shall be a completely monotonous process of gathering constraints that never leads to failure. We assume that any associated (instantiated) semantic rule r(ν) of a local tree (AND-branching) ν(ν1,...,νk) determines ν's semantics Σ(ν) as follows from those of its children:

    Σ(ν) = ∃X_ν1 ... ∃X_νk (φ_r(ν)(X_ν, X_ν1, ..., X_νk) ∧ Σ(ν1) ∧ ... ∧ Σ(νk)).

The constraint φ_r(ν)(X_ν, X_ν1, ..., X_νk) is called the rule constraint for ν. It is required to only depend on the variables X_ν, X_ν1, ..., X_νk.

Note that any Σ(ν) depends only on X_ν and can be thought of as a unary predicate. Now, let us consider semantics construction for a single parse tree for the moment. The leaf constraints together with the rules define a semantics constraint Σ(ν) for every node ν, and the semantics of the full sentence is described by the Σ-constraint of the root node, Σ(root). In the Σ-constraints, we actually can suppress the existential quantifiers by adopting the convention that any variable other than the one of the current node is implicitly existentially bound on the formula toplevel. Name conflicts, that would force variable renaming, cannot occur. Therefore Σ(root) is (equivalent to) just a big conjunction of all rule constraints for the inner nodes and all leaf constraints.

Moving to parse forests, the semantics of an OR-node ν(ν1,...,νk) is to be defined as

    Σ(ν) = ∃X_ν1 ... ∃X_νk (Σ(ν1) ∧ X_ν = X_ν1 ∨ ... ∨ Σ(νk) ∧ X_ν = X_νk),

specifying that the set of possible (partial) semantic representations for ν is the union of those of ν's children.

4DCG shall refer in this paper to a logically pure version, Definite Clause Grammars based on pure PROLOG, involving no nonlogical devices like Cut, var/1, etc.
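Reusing the AndNode/OrNode sketch above, the two defining equations can be read off the forest by a direct recursion. Constraints are kept as nested 'and'/'or' tuples purely for illustration (a real implementation would hand them to the constraint solver), and the simplification of the OR-node formula derived in the next paragraph is anticipated here.

    def sigma(node):
        """Sigma(v): conjunction at AND-nodes, disjunction at OR-nodes."""
        if isinstance(node, OrNode):
            # all children of an OR-node share the variable X_v, so the
            # equations X_v = X_vi are dropped and a plain disjunction remains
            return ('or',) + tuple(sigma(alt) for alt in node.alternatives)
        parts = (node.leaf,) if not node.children else (node.rule,)
        return ('and',) + parts + tuple(sigma(c) for c in node.children)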
However, we can simplify this formula once and for all by assuming that for every OR-node there is only one variable X_ν that is associated with it and all of its children. Using the same variable for ν1 ... νk is unproblematic, because no two of these nodes can ever occur in a tree reading. Hence, the definition we get is

    Σ(ν) = Σ(ν1) ∨ ... ∨ Σ(νk).

Now, in the same way as in the single-tree case, we can directly "read off" the Σ-constraint for the whole parse forest representing the semantics of all readings. Although this constraint is only half the way to the packed semantic representation we are aiming at, it is nevertheless worthwhile to consider its structure a little more closely. Fig. 3 shows the structure of the Σ-constraint for the OR-node B in the example parse forest. In a way the structure of this constraint directly mirrors the structure of the parse forest. However, by writing out the constraint, we lose the sharings present in the forest. A subformula coming from a shared subtree (as Σ(18) in Fig. 3) has to be stated as many times as the subtree appears in an unfolding of the forest graph. In our PP-attachment example the blowup caused by this is in fact exponential.

On the other hand, looking at a Σ-constraint as a piece of syntax, we can represent this piece of syntax in the same manner in which trees are represented in the parse forest, i.e. we can have a representation of Σ(root) with a structure isomorphic to the forest's graph structure.5 In practice this difference becomes a question of whether we have full control over the representations the constraint solver employs (or any other process that receives this constraint as input). If not, we cannot content ourselves with the possibility of compact representation of constraints, but rather need a means to enforce this compactness on the constraint level. This means that we have to introduce some form of functional abstraction into the constraint language (or anything equivalent that allows giving names to complex constraints and referencing them via their names). Therefore we enhance the constraint language as follows. We allow to our disposition a second set of variables, called names, and two special forms of constraints

    1. def(<name>, <constraint>)    name definition
    2. <name>                       name use

with the requirements, that a name may only be used if it is defined and that its definition is unique. Thus, the constraint Σ(B) above can be written as

    ( φ_r(6) ∧ ... ∧ φ26 ∧ N  ∨  φ_r(7) ∧ ... ∧ φ26 ∧ N )
      ∧ def(N, φ_r(18) ∧ φ27 ∧ φ_r(21) ∧ φ28 ∧ φ29)

5The packed QLFs in the Core Language Engine (Alshawi 92) are an example of such a representation.

    Σ(B) =  φ_r(6) ∧ φ23 ∧ φ_r(10) ∧ φ24 ∧ φ_r(12) ∧ φ25 ∧ φ_r(15) ∧ φ26
              ∧ [φ_r(18) ∧ φ27 ∧ φ_r(21) ∧ φ28 ∧ φ29]          (= Σ(6); bracketed part: Σ(18))
          ∨ φ_r(7) ∧ φ_r(14) ∧ φ23 ∧ φ_r(17) ∧ φ24 ∧ φ_r(20) ∧ φ25 ∧ φ26
              ∧ [φ_r(18) ∧ φ27 ∧ φ_r(21) ∧ φ28 ∧ φ29]          (= Σ(7); bracketed part: Σ(18))

    Figure 3: Constraint Σ(B) of example parse forest

The packed semantic representation as constructed by the method described so far still calls for an obvious improvement. Very often the different branches of disjunctions contain constraints that have large parts in common. However, although these overlaps are efficiently handled on the representational level, they are invisible at the logical level. Hence, what we need is an algorithm that factors out common parts of the constraints on the logical level, pushing disjunctions down.6 There are two routes that we can take to do this efficiently.
In the first we consider only the structure of the parse forest and ignore the content of (rule or leaf) constraints. I.e., we exploit the fact that the parts of the Σ-constraints in a disjunction that stem from nodes shared by all disjuncts must be identical, and hence can be factored out.7 More precisely, we can compute for every node ν the set must-occur(ν) of nodes (transitively) dominated by ν that must occur in a tree of the forest, whenever ν occurs. We can then use this information, when building the disjunction Σ(ν), to factor out the constraints introduced by nodes in must-occur(ν), i.e., we build the factor Φ = ∧_{ν'∈must-occur(ν)} Σ(ν') and a 'remainder' constraint Σ(νi)\Φ for each disjunct.

The other route goes one step further and takes into account the content of rule and leaf constraints. For it we need an operation generalise that can be characterised informally as follows. For two satisfiable constraints φ and ψ, generalise(φ, ψ) yields the triple χ, φ', ψ', such that χ contains the 'common part' of φ and ψ and φ' represents the 'remainder' φ\χ and likewise ψ' represents ψ\χ. The exact definition of what the 'common part' or the 'remainder' shall be naturally depends on the actual constraint system chosen. For our purpose it is sufficient to require the following properties:

    If generalise(φ, ψ) ↦ (χ, φ', ψ'), then φ ⊢ χ and ψ ⊢ χ
    and φ ≡ χ ∧ φ' and ψ ≡ χ ∧ ψ'.

We shall call such a generalisation operation simplifying if the normal form of χ is not larger than any of the input constraints' normal forms.

Example: An example for such a generalisation operation for PROLOG's constraint system (equations over first-order terms) is the so-called anti-unify operation, the dual of unification, that some PROLOG implementations provide as a library predicate.8 Two terms T1 and T2 'anti-unify' to T iff T is the (unique) most specific term that subsumes both T1 and T2. The 'remainder constraints' in this case are the residual substitutions σ1 and σ2 that transform T into T1 or T2, respectively.

Let us now state the method informally. We use generalise to factor out the common parts of disjunctions. This is, however, not as trivial as it might appear at first sight. Generalise should operate on solved forms, but when we try to eliminate the names introduced for subtree constraints in order to solve the corresponding constraints, we end up with constraints that are exponential in size. In the following section we describe an algorithm that circumvents this problem.

6Actually, in the Σ(B) example such a factoring makes the use of the name N superfluous. In general, however, use of names is actually necessary to avoid exponentially large constraints. Subtrees may be shared by quite different parts of the structure, not only by disjuncts of the same disjunction. In the PP-attachment example, a compression of the Σ-constraint to polynomial size cannot be achieved with factoring alone.
7(Maxwell IIIKaplan 93) exploit the same idea for efficiently solving the functional constraints that an LFG grammar associates with a parse forest.

3 The Algorithm

We call an order < on the nodes of a directed acyclic graph G = (N, E) with nodes N and edges E bottom-up iff whenever (i, j) ∈ E ("i is a predecessor to j"), then j < i. For the sake of simplicity let us assume that any nonterminal node in the parse forest is binary branching. Furthermore, we leave implicit when conjunctions of constraints are normalised by the constraint solver.
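Before giving the algorithm, here is a toy rendering (ours, not the paper's code) of the name/def device from the previous section: an environment maps names to constraint bodies, and expansion replaces names recursively, so a shared subconstraint is stated once however often it is referenced.

    def expand(constraint, env):
        """Replace names bound in env by their (recursively expanded) bodies."""
        if isinstance(constraint, str) and constraint in env:
            return expand(env[constraint], env)
        if isinstance(constraint, tuple):
            return tuple(expand(c, env) for c in constraint)
        return constraint

    # ENV = {'N': ('and', 'phi_r18', 'phi27', 'phi_r21', 'phi28', 'phi29')}
    # expand(('or', ('and', 'phi_r6', 'N'), ('and', 'phi_r7', 'N')), ENV)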
Recall that for the generalisation operation it is usually meaningful to operate on solved forms. However, at least the simplifications true ∧ φ ≡ φ and φ ∧ true ≡ φ should be assumed.
8anti_unify in Quintus Prolog, term_subsumer in SICStus Prolog.

The Packed Semantics Construction Algorithm is given in Fig. 4. It enforces the following invariants, which can easily be shown by induction.

    Input:
    • parse forest, leaf and rule constraints as described above
    • array of variables X_ν indexed by node s.t. if ν is a child of
      OR-node ν', then X_ν = X_ν'
    Data structures:
    • an array SEM of constraints and an array D of names, both indexed by node
    • a stack ENV of def constraints
    Output: a constraint representing a packed semantic representation
    Method:
    ENV := nil
    process nodes in a bottom-up order doing with node ν:
      if ν is a leaf then SEM[ν] := φ_ν ; D[ν] := true
      elseif ν is AND(ν1, ν2) then
        SEM[ν] := φ_r(ν) ∧ SEM[ν1] ∧ SEM[ν2]
        if D[ν1] = true then D[ν] := D[ν2]
        elseif D[ν2] = true then D[ν] := D[ν1]
        else
          D[ν] := newname
          push def(D[ν], D[ν1] ∧ D[ν2]) onto ENV
        end
      elseif ν is OR(ν1, ν2) then
        let GEN, REM1, REM2 such that
          generalise(SEM[ν1], SEM[ν2]) ↦ (GEN, REM1, REM2)
        SEM[ν] := GEN
        D[ν] := newname
        push def(D[ν], REM1 ∧ D[ν1] ∨ REM2 ∧ D[ν2]) onto ENV
      end
    return SEM[root] ∧ D[root] ∧ ENV

    Figure 4: Packed Semantics Construction Algorithm

1. Every name used has a unique definition.
2. For any node ν we have the equivalence Σ(ν) ≡ SEM[ν] ∧ [D[ν]], where [D[ν]] shall denote the constraint obtained from D[ν] when recursively replacing names by the constraints they are bound to in ENV.
3. For any node ν the constraint SEM[ν] is never larger than the Σ-constraint of any single tree in the forest originating in ν.

Hence, the returned constraint correctly represents the semantic representation for all readings.

4 Complexity

The complexity of this abstract algorithm depends primarily on the actual constraint system and generalisation operation employed. But note also that the influence of the actual semantic operations prescribed by the grammar can be vast, even for the simplest constraint systems. E.g., we can write a DCG that produces abnormally large "semantic structures" of sizes growing exponentially with sentence length (for a single reading). For meaningful grammars we expect this size function to be linear. Therefore, let us abstract away this size by employing a function f_G(n) that bounds the size of semantic structures (respectively the size of its describing constraint system in normal form) that grammar G assigns to sentences of length n. Finally, we want to assume that generalisation is simplifying and can be performed within a bound of g(m) steps, where m is the total size of the input constraint systems.

With these assumptions in place, the time complexity for the algorithm can be estimated to be (n = sentence length, N = number of forest nodes)

    O(g(f_G(n)) · N) ≤ O(g(f_G(n)) · n^3),

since every program step other than the generalisation operation can be done in constant time per node. Observe that because of Invariant 3 the input constraints to generalise are bounded by f_G as any constraint in SEM.

In the case of a DCG the generalisation operation is anti_unify, which can be performed in O(n · log(n)) time and space (for acyclic structures). Hence, together with the assumption that the semantic structures the DCG computes can be bounded linearly in sentence length (and are acyclic), we obtain a O(n · log(n) · N) ≤ O(n^4 log(n)) total time complexity.
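For the DCG instantiation, the generalisation operation can be pictured as follows. This is a freestanding sketch of first-order anti-unification over terms-as-tuples, written by us, not the Quintus/SICStus library code:

    from itertools import count

    def anti_unify(t1, t2, table=None, fresh=None):
        """Return (T, sigma1, sigma2): T subsumes t1 and t2; applying sigma1
        (resp. sigma2) to T's variables restores t1 (resp. t2)."""
        table = {} if table is None else table
        fresh = fresh if fresh is not None else count()
        if t1 == t2:
            return t1, {}, {}
        if (isinstance(t1, tuple) and isinstance(t2, tuple)
                and t1[:1] == t2[:1] and len(t1) == len(t2)):
            args, s1, s2 = [t1[0]], {}, {}
            for a, b in zip(t1[1:], t2[1:]):
                g, r1, r2 = anti_unify(a, b, table, fresh)
                args.append(g); s1.update(r1); s2.update(r2)
            return tuple(args), s1, s2
        if (t1, t2) not in table:                   # one variable per disagreement
            table[(t1, t2)] = '_G%d' % next(fresh)  # pair keeps the result most
        v = table[(t1, t2)]                         # specific
        return v, {v: t1}, {v: t2}

    # anti_unify(('revise', 'john', 'p1'), ('revise', 'teacher', 'p2'))
    # == (('revise', '_G0', '_G1'), {'_G0': 'john', '_G1': 'p1'},
    #                               {'_G0': 'teacher', '_G1': 'p2'})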
5 Implementation and Experimental Results

The algorithm has been implemented for the PROLOG (or DCG) constraint system, i.e., constraints are equations over first-order terms. Two implementations have been done, one in the concurrent constraint language OZ (SmolkaTreinen 96) and one in SICStus Prolog.9 The following results relate to the Prolog implementation.10

Fig. 5 shows the resulting packed UDRS for the example forest in Fig. 2. Fig. 6 displays the SEM part as a graph. The disjunctive binding environment only encodes what the variable referents B and D (in conjunction with the corresponding labels A and C) may be bound to: one of e1, x2, or x3 (and likewise the corresponding label).

    SEM[top]:
    [ltop : x1, anchor(x1, speaker),
     l1 : see(e1,x1,x2), lt(l2,ltop), lt(l1,l2),
     l2 : x2, l2 : man(x2),
     A : on(B,x3), lt(l3,ltop), lt(A,l5),
     l4 : x3, l3 : every(l4,l5), l4 : hill(x3),
     C : with(D,x4), lt(l6,ltop), lt(C,l6),
     l6 : x4, l6 : tele(x4)]

    D[top] (a Prolog goal):  dEnv(509,1,[B,A,D,C])

    ENV (as Prolog predicates):
    dEnv(506, 1, A) :- ( A=[e1,l1] ; A=[x2,l2] ).
    dEnv(339, 1, A) :- ( A=[C,B,C,B] ; A=[x3,l4,_,_] ).
    dEnv(509, 2, A) :- ( A=[e1,l1,x3,l4]
                       ; A=[x2,l2,C,B], dEnv(339,1,[C,B,x2,l2]) ).
    dEnv(509, 1, A) :- ( A=[G,F,e1,l1], dEnv(506,1,[G,F])
                       ; A=[E,D,C,B], dEnv(509,2,[E,D,C,B]) ).

    Figure 5: Packed UDRS: conjunctive part and disjunctive binding environment

Executing the goal dEnv(509,1,[B,A,D,C]) yields the five solutions:

    A = l1, B = e1, C = l1, D = e1 ? ;
    A = l2, B = x2, C = l1, D = e1 ? ;
    A = l1, B = e1, C = l4, D = x3 ? ;
    A = l2, B = x2, C = l2, D = x2 ? ;
    A = l2, B = x2, C = l4, D = x3 ? ;
    no

    [Figure 6: Conjunctive part of UDRS, graphically]

Table 1 gives execution times used for semantics construction of sentences of the form I saw a man (on a hill)^n for different n. The machine used for the experiment was a Sun Ultra-2 (168MHz), running SICStus 3.0#3.

    n    Readings     AND- + OR-nodes   Time
    2    5            35                4 msec
    4    42           91                16 msec
    6    429          183               48 msec
    8    4862         319               114 msec
    10   58786        507               220 msec
    12   742900       755               430 msec
    14   9694845      1071              730 msec
    16   129 Mio.     1463              1140 msec

    Table 1: Execution times

In a further experiment an n-ary anti_unify operation was implemented, which improved execution times for the larger sentences, e.g., the 16 PP sentence took 750 msec. These results approximately fit the expectations from the theoretical complexity bound.

9The OZ implementation has the advantage that feature structure constraint solving is built in. Our implementation actually represents the DCG terms as feature structures. Unfortunately it is an order of magnitude slower than the Prolog version. The reason for this presumably lies in the fact that meta-logical operations the algorithm needs, like generalise and copy_term, have been modeled in OZ and not on the logical level where they properly belong, namely the constraint solver.
10This implementation is available from http://www.ims.uni-stuttgart.de/~jochen/CBSem.

6 Discussion

Our algorithm and its implementation show that it is not only possible in theory, but also feasible in practice to construct packed semantic representations directly from parse forests for sentences that exhibit massive syntactic ambiguity.
The algorithm is both in asymptotic complexity and in real numbers dramatically faster than an earlier approach that also tries to provide an underspecified semantics for syntactic ambiguities. The algorithm has been presented abstractly from the actual constraint system and can be adapted to any constraint-based grammar formalism.

A critical assumption for the method has been that semantic rules never fail, i.e., no search is involved in semantics construction. This is required to guarantee that the resulting constraint is a kind of 'solved form' actually representing so-to-speak the free combination of choices it contains. Nevertheless, our method (modulo small changes to handle failure) may still prove useful when this restriction is not fulfilled, since it focuses on computing the common information of disjunctive branches. The conjunctive part of the output constraint of the algorithm can then be seen as an approximation of the actual result, if the output constraint is satisfiable. Moreover, the disjunctive parts are reduced, so that a subsequent full-fledged search will have considerably less work than when directly trying to solve the original constraint system.

References

H. Alshawi (Ed.). The Core Language Engine. ACL-MIT Press Series in Natural Language Processing. MIT Press, Cambridge, Mass., 1992.
S. Billot and B. Lang. The Structure of Shared Forests in Ambiguous Parsing. In Proceedings of the 27th Annual Meeting of the ACL, University of British Columbia, pp. 143-151, Vancouver, B.C., Canada, 1989.
B. Carpenter. ALE: The Attribute Logic Engine User's Guide. Laboratory for Computational Linguistics, Philosophy Department, Carnegie Mellon University, Pittsburgh PA 15213, December 1992.
J. Dörre and M. Dorna. CUF -- A Formalism for Linguistic Knowledge Representation. In J. Dörre (Ed.), Computational Aspects of Constraint-Based Linguistic Description I, DYANA-2 deliverable R1.2.A. ESPRIT, Basic Research Project 6852, July 1993.
J. Earley. An Efficient Context-Free Parsing Algorithm. Communications of the ACM, 13(2):94-102, 1970.
M. Egg and K. Lebeth. Semantic Underspecification and Modifier Attachment Ambiguities. In J. Kilbury and R. Wiese (Eds.), Integrative Ansätze in der Computerlinguistik. Beiträge zur 5. Fachtagung der Sektion Computerlinguistik der Deutschen Gesellschaft für Sprachwissenschaft (DGfS), pp. 19-24. Düsseldorf, Germany, 1995.
H. Kamp and U. Reyle. From Discourse to Logic. Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and Discourse Representation Theory. Studies in Linguistics and Philosophy 42. Kluwer Academic Publishers, Dordrecht, The Netherlands, 1993.
J. T. Maxwell III and R. M. Kaplan. The Interface between Phrasal and Functional Constraints. Computational Linguistics, 19(4):571-590, 1993.
F. C. Pereira and D. H. Warren. Definite Clause Grammars for Language Analysis--A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence, 13:231-278, 1980.
M. Pinkal. Radical Underspecification. In Proceedings of the 10th Amsterdam Colloquium, pp. 587-606, Amsterdam, Holland, December 1995. ILLC/Department of Philosophy, University of Amsterdam.
C. Pollard and I. A. Sag. Head-Driven Phrase Structure Grammar. University of Chicago Press, Chicago, 1994.
U. Reyle. Dealing with Ambiguities by Underspecification: Construction, Representation, and Deduction. Journal of Semantics, 10(2):123-179, 1993.
M. Schiehlen.
Semantic Construction from Parse Forests. In Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996.
G. Smolka and R. Treinen (Eds.). DFKI Oz Documentation Series. German Research Center for Artificial Intelligence (DFKI), Stuhlsatzenhausweg 3, D-66123 Saarbrücken, Germany, 1996. http://www.ps.uni-sb.de/oz.
M. Tomita. Efficient Parsing for Natural Languages. Kluwer Academic Publishers, Boston, 1986.
A Theory of Parallelism and the Case of VP Ellipsis

Jerry R. Hobbs and Andrew Kehler
Artificial Intelligence Center
SRI International
333 Ravenswood Avenue
Menlo Park, CA 94025
{hobbs,kehler}@ai.sri.com

Abstract

We provide a general account of parallelism in discourse, and apply it to the special case of resolving possible readings for instances of VP ellipsis. We show how several problematic examples are accounted for in a natural and straightforward fashion. The generality of the approach makes it directly applicable to a variety of other types of ellipsis and reference.

1 The Problem of VP Ellipsis

VP ellipsis has received a great deal of attention in theoretical and computational linguistics (Asher, 1993; Crouch, 1995; Dalrymple, Shieber, and Pereira, 1991; Fiengo and May, 1994; Gawron and Peters, 1990; Hardt, 1992; Kehler, 1993; Lappin and McCord, 1990; Prüst, 1992; Sag, 1976; Webber, 1978; Williams, 1977, inter alia). The area is a tangled thicket of examples in which readings are mysteriously missing and small changes reverse judgments. It is a prime example of a phenomenon at the boundary between syntax and pragmatics. VP ellipsis is exemplified in sentence (1).

(1) John revised his paper before the teacher did.

This sentence has two readings, one in which the teacher revised John's paper (the strict reading), and one in which the teacher revised his own paper (the sloppy reading). Obtaining an adequate account of strict/sloppy ambiguities has been a major focus of VP ellipsis research. This is challenging because not all examples are as simple as sentence (1). In fact, sentence (1) is the first main clause of one of the more problematic cases in the literature:

(2) John revised his paper before the teacher did, and Bill did too.

Whereas one might expect there to be as many as six readings for this sentence, Dalrymple et al. (1991, henceforth DSP) note that it has only five readings; the reading is absent in which

(3) John revised John's paper before the teacher revised John's paper, and Bill revised John's paper before the teacher revised Bill's paper.

Previous analyses have either generated too few or too many readings, or have required an appeal to additional processes or constraints external to the actual resolution process itself.

Examples like (2) test the adequacy of an analysis at a fine-grained level of detail. Other examples test the generality of an analysis, in terms of its ability to account for phenomena similar to VP ellipsis and to interact with other interpretation processes that may come into play. For instance, strict/sloppy ambiguities are not restricted to VP ellipsis, but are common to a wide range of constructions that rely on parallelism between two eventualities, some of which are listed in Table 1. Given the ubiquity of strict/sloppy ambiguities, one would expect these to be a by-product of general discourse resolution mechanisms and not mechanisms specific to VP ellipsis. Any account applying only to the latter would miss an important generalization.

In this paper, we give an account of resolution rooted in a general computational theory of parallelism. We demonstrate the depth of our approach by showing that unlike previous approaches, the algorithm generates the correct five readings for example (2) without appeal to additional mechanisms or constraints. We also discuss how other 'missing readings' cases are accounted for.
We show the generality of the approach by demonstrating its handling of several other examples that prove problematic for past approaches, including a source-of-ellipsis paradox, so-called extended parallelism cases, and sloppy readings with events cases. Of the phenomena in Table 1, we briefly discuss the algorithm's handling of lazy pronoun cases.

    Phenomenon              Example
    'Do It' Anaphora        John revised his paper before Bill did it.
    'Do So' Anaphora        John revised his paper and Bill did so too.
    Stripping               John revised his paper, and Bill too.
    Comparative Deletion    John revised his paper more quickly than Bill.
    'Same As' Reference     John revised his paper, and Bill did the same.
    'Me Too' Phenomena      John revised his paper, and the teacher followed suit.
                            A: John revised his paper. B: Me too./Ditto.
    'One' Anaphora          John revised a paper of his, and Bill revised one too.
    Lazy Pronouns           The student who revised his paper did better than the
                            student who handed it in as is.
    Anaphoric Deaccenting   John said he called his teacher an idiot, and Bill
                            said he insulted his teacher too.
    Focus Phenomena         Only John revised his paper.

    Table 1: Phenomena Giving Rise to Sloppy Interpretations

2 A Theory of Parallelism

The Theory. A clause conveys a property or eventuality, or describes a situation, or expresses a proposition. We use the term "property" to cover all of these cases. A property consists of a predicate applied to a number of arguments. We make use of a duality between properties having a number of arguments, and arguments having a number of properties. Parallelism is characterized in terms of a co-recursion in which the similarity of properties is defined in terms of the similarity of arguments, and the similarity of arguments is defined in terms of the similarity of properties.1

Two fragments of discourse stand in a parallel relation if they describe similar properties. Two properties are similar if two corresponding properties can be inferred from them in which the predicates are the same and the corresponding pairs of arguments are either coreferential or similar.

    Similar[p1(e1, x1, ..., z1), p2(e2, x2, ..., z2)]:
        p1(e1, x1, ..., z1) ⊃ p'(e1, x1, ..., z1) and
        p2(e2, x2, ..., z2) ⊃ p'(e2, x2, ..., z2),
        where Coref(x1, ..., x2, ...) or Similar[x1, x2],
              ...,
              Coref(z1, ..., z2, ...) or Similar[z1, z2]

Two arguments are similar if their other, "inferentially independent" properties are similar.

    Similar[x1, x2]:
        Similar[p1'(..., x1, ...), p2'(..., x2, ...)],
        ...,
        Similar[q1'(..., x1, ...), q2'(..., x2, ...)]

The constructed mapping between pairs of arguments must be preserved and remain one-to-one. There are three ways the recursion can bottom out. We can run out of new arguments in properties. We can run out of new, inferentially independent properties of arguments. And we can "bail out" of proving similarity by proving or assuming coreference between the two entities.

Two properties are inferentially independent if neither can be derived from the other. Given a knowledge base K representing the mutual knowledge of the participants in the discourse, properties P1 and P2 are inferentially independent if neither K, P1 ⊢ P2 nor K, P2 ⊢ P1. This rules out the case in which, for example, the fact that John and Bill are both persons would be used to establish their similarity when the fact that they are both men has already been used.

1This account is an elaboration of treatments of parallelism by Hobbs (1979; 1985) and Kehler (1995).
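Schematically, the co-recursion might be coded as below. The kb object with its three methods is an assumed stand-in for the knowledge-base reasoning that the theory is parametric on; this is a pseudo-interface sketch of ours, not a claim about the authors' system.

    def similar_props(p1, p2, kb, mapping):
        """Properties are similar if a common predicate can be inferred and
        corresponding arguments are coreferential or similar."""
        for q1, q2 in kb.common_inferences(p1, p2):      # assumed KB method
            pairs = zip(q1.args, q2.args)
            if all(coref_or_similar(a1, a2, kb, mapping) for a1, a2 in pairs):
                return True
        return False

    def coref_or_similar(x1, x2, kb, mapping):
        if mapping.setdefault(x1, x2) != x2:             # mapping stays one-to-one
            return False
        if kb.can_assume_coref(x1, x2):                  # "bail out" in coreference
            return True
        pairs = kb.independent_property_pairs(x1, x2)    # unused, indep. properties
        return all(similar_props(p1, p2, kb, mapping) for p1, p2 in pairs)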
Inferential independence is generally undecidable, but in practice this is not a problem. In discourse interpretation, all we usually know about an entity is the small set of properties presented explicitly in the text itself. We may take these to be inferentially independent and look for no further properties, once properties inferrable from these have been used in establishing the parallelism.

Similarity is a matter of degree. The more corresponding pairs of inferentially independent properties that are found, and the more contextually salient those properties are, the stronger the similarity. In a system which assigns different costs to proofs (e.g., Hobbs et al. (1993)), the more costly the proofs required to establish similarity are, the less similar the properties or arguments should seem. Interpretations should seek to maximize similarity.

This account of parallelism is semantic in the sense that it depends on the content of the discourse rather than directly on its form. But syntax plays an implicit role. When seeking to establish the parallelism between two clauses, we must begin with the "top-level" properties; this is generally determined by the syntactic structure of the clause. Then the co-recursion through the arguments and properties normally mirrors the syntactic structure of the sentence. However, features of syntax that are not manifested in logical form are not taken into account.

An Example. To illustrate that the theory has applicability well beyond the problem of VP ellipsis, we present an example of semantic parallelism in discourse. It comes from an elementary physics textbook, and is worked out in essentially the same manner in Hobbs (1979).

(4) A ladder weighs 100 lb with its center of gravity 10 ft from the foot, and a 150 lb man is 10 ft from the top.

We will assume "the foot" has been identified as the foot of the ladder. Because it is a physics problem, we must reduce the two clauses to statements about forces acting on objects with magnitudes in a direction at a point in the object:

    force(w1, L, d1, x1);  force(w2, y, d2, x2)

In the second clause we do not know that the man is standing on the ladder--he could be on the roof--and we do not know what "the top" is the top of. These facts fall out of recognizing the parallelism.

The procedure for establishing parallelism is illustrated in Figure 1, in which parallel elements are placed on the same line. The force predicates are the same so there is no need to infer further properties. The first pair of arguments, w1 and w2, are similar in that both are weights. To make the second pair of arguments similar, we can assume they are coreferential; as a by-product, this tells us that the object the man's weight is acting on is the ladder, and hence that the man is on the ladder. The third pair of arguments are both downward directions. The final pair of arguments, x1 and x2, are similar if their properties distance(x1, f, 20ft) and distance(x2, t, 10ft) are similar. These will be similar if their previously unmatched pair of arguments f and t are similar. This holds if their properties foot(f, L) and top(t, z) are similar. We infer end(f, L) and end(t, z), since feet and tops are ends. Finally, we have to show L and z are similar. We can do this by assuming they are coreferential. This, as a by-product, tells us that the top is the top of the ladder.

The use of inferences, such as "a foot is an end", means that this theory is parametric on a knowledge base.
Different sets of beliefs can yield different bases for parallelism and indeed different judgments about whether parallelism occurs at all.

    force(w1, L, d1, x1)                  force(w2, y, d2, x2)
    w1 : lb(w1, 100)                      w2 : lb(w2, 150)
    L : ladder(L)                         y :  ⇒ Coref(y, ..., L, ...)
    d1 : Down(d1)                         d2 : Down(d2)
    x1 : distance(x1, f, 20ft)            x2 : distance(x2, t, 10ft)
    f : foot(f, L) ⇒ end(f, L)            t : top(t, z) ⇒ end(t, z)
    L :                                   z :  ⇒ Coref(z, ..., L, ...)

    Figure 1: Example of Parallelism Establishment

The phenomenon of parallelism pervades discourse. In addition to straightforward examples of parallelism like the above, there are also contrasts, exemplifications, and generalizations, which are defined in a similar manner. The interpretation of a number of syntactic constructions depends on recognizing parallelism, including those cited in Table 1. In brief, our theory of parallelism is not something we have introduced merely for the purpose of handling VP ellipsis; it is needed for a wide range of sentential and discourse phenomena.

Other Approaches Based on Parallelism. Our aim in this paper is to present the theory of parallelism at an abstract enough level that it can be embedded in any sufficiently powerful framework. By "sufficiently powerful" we mean that there must be a formalization of the notion of inference, strength of inference, and inferential independence, and there must be a reasonable knowledge base. In Hobbs and Kehler (forthcoming), we show how our approach can be realized within the "Interpretation as Abduction" framework (Hobbs et al., 1993).

There are at least two other treatments in which VP ellipsis is resolved through a more general system of determining discourse parallelism, namely, those of Prüst (1992) and Asher (1993).

Prüst (1992) gives an account of parallelism developed within the context of the Linguistic Discourse Model theory (Scha and Polanyi, 1988). Parallelism is computed by determining the "Most Specific Common Denominator" of a set of representations, which results from unifying the unifiable aspects of those representations and generalizing over the others. VP ellipsis is resolved as a side effect of this unification. The representations assumed, called syntactic/semantic structures, incorporate both syntactic and semantic information about an utterance. One weakness of this approach is that it appears overly restrictive in the syntactic similarity that it requires.

Asher (1993) also provides an analysis of VP ellipsis in the context of a theory of discourse structure and coherence, using an extension of Discourse Representation Theory.
The resolution of VP ellipsis is driven by a need to maximize parallelism (or in some cases, contrast) that is very much in the spirit of what we present. Detailed comparisons with our approach are given with the examples below. In general, however, in neither of these approaches has enough attention been paid to other interacting phenomena to explain the facts at the level of detail that we do.

3 VP Ellipsis: A Simple Case

We first illustrate our approach on the simple case of VP ellipsis in sentence (1). The representation for the antecedent clause in our "logical form"2 appears on the left-hand side of Figure 2. Note that a Coref relation links x1, the variable corresponding to "he" (eventuality e13), to its antecedent j, the entity described by "John" (eventuality e11).

From the second clause we know there is an elided eventuality e22 of unknown type P, the logical subject of which is the teacher t:

    P(e22, t)
    t : teacher'(e21, t)

Because of the ellipsis, e22 must stand in a parallel relation to some previous eventuality; here the only candidate is John's revising his paper (e12). To establish Similar[e12, e22],3 we need to show that their corresponding arguments are similar. John j and the teacher t are similar by virtue of being persons. The corresponding objects p1 and p2 are similar if we take p2 to be a paper and to have a Poss property similar to that of p1. The latter is true if corresponding to the possessor x1, there is an x2 that is similar to x1.

In constructing the similarity between x2 and x1, we can either take them to be coreferential (case *a) or prove them to be similar by having similar properties, including having similar dependencies established by Coref (case *b). In the former case, x2 is coreferential with x1, which is coreferential with John j, giving us the strict reading. In the latter case, we must preserve the previously-constructed mapping between John j (on which x1 is dependent) and the teacher t; thus x2 is similar to x1 if taken to be coreferential with t, giving us the sloppy reading.4

4 A Missing Readings Paradox

Sentence (1) is the antecedent clause for example (2), one of the more problematic examples in the literature. Theoretically, this example could have as many as six readings, paraphrased as follows:

(5) John revised John's paper before the teacher revised John's paper, and Bill revised John's/Bill's paper before the teacher revised John's/Bill's paper.
(6) John revised John's paper before the teacher revised the teacher's paper, and Bill revised John's/Bill's paper before the teacher revised the teacher's paper.

2The normally controversial term "logical form" is used loosely here, simply to capture the information that the hearer must bear in mind, at least implicitly, in interpreting texts such as sentence (1).
3We cannot establish coreference between the events because their agents are distinct. In other cases, however, the process can bail out immediately in event coreference; consider the sentence "John revised his paper, smoking incessantly as he did." A Coref link is established between the elided and antecedent events in the same way as for pronouns. This symmetry accounts for another problematic case, discussed in Section 6.
4It is also possible to "bail out" in coreference between the papers p1 and p2; here we would get the strict reading again. However, consider if the example had said "a paper of his" rather than "his paper".
The resulting sentence has two strict readings, one in which both re- vised the same paper of John's (generated by assuming coreference between the papers), and one in which each revised a (possibly) different paper of John's (generated by assuming coreference between the pronouns). 397 before'(el2, e22) revise'(e12, j, Pl) j: John'(ell,j) Pl : paper'(els,pl) Poss'(e14, xl,pl) xl : he'(e13,xl) Coref(xl, el3, j, ell) revise'(e22, t, P2) t : teacher'(e21, t) P2 : papert(e25, P2) Poss' (e24, x2, P2) x2 : he'(e23,x2) [Co~ef(z~., e23, xl, e13) (*a)] [Corel(z2, e23, t, e..,~) (*b)] Figure 2: Representations for Simple Case We follow DSP in claiming that this example has five readings, in which the JJJB reading shown in (3) is missing. ~ DSP, who use this case as a benchmark for theories of VP ellipsis, note that the methods of Sag (1976) and Williams (1977) can be seen to derive two readings, namely JJJJ and JTBT. An analysis proposed by Gawron and Peters (1990), who first introduced this example, generates three readings (adding JJBB to the above two), as does the analysis of Fiengo and May (1994). A method that Gawron and Peters attribute to Hans Kamp generates either four readings, including the above three and JTJT, or all six readings. DSP's analysis strictly speak- ing generates all six readings; however, they appeal to anaphor/antecedent linking relationships to elim- inate the JJJB reading. However, these linking rela- tionships are not a by-product of the resolution pro- cess itself, but must be generated separately. Our approach derives exactly the correct five readings. 6 The antecedent clause is represented in Figure 2, and the expansion of the final VP ellipsis is shown in Figure 3. In proving similarity, each pronoun can be taken to be coreferential with its parallel element (cases *a, *c and *e), or proven similar to it (cases *b, *d, *f and *g). If choice *a is taken in the sec- ond clause, then the "similarity" choice in the fourth clause must be *f; if *b, then *g. If *a and *c are chosen, the JJJJ reading results. If *a, *d, and *e are chosen, the JJBJ reading results. If *a, *d, and *f are chosen, the JJBB reading results. If *b and *c are chosen, the JTJT reading results. If *b and *d are chosen, the JTBT reading results. Thus taking all possible choices gives us all acceptable readings. Now consider what it would take to obtain the *JJJB reading. The variable x3 would have to be 5Each reading for this example contains four descrip- tions of papers that were revised. We use the notation JJJB to represent the reading in which the first three papers are John's and fourth is Bill's, corresponding to reading (3). Other uses of such notation should be un- derstood analogously. 6The approach presented in Kehler (1993) also derives the correct five readings, however, our method has ad- vantages in its being more general and better motivated. coreferential with John and x4 with Bill. The for- mer requirement forces us to pick case *c. But then case *e makes x4 coreferential with either John or the teacher (depending on how the first ellipsis was resolved). Case *f makes x4 coreferential with John, and case *g makes it coreferential with the teacher. There is no way to get x4 coreferential with Bill once we have set x3 to something other than Bill. Neither Prtist (1992) nor Asher (1993) discuss this example. In extrapolating from the analyses Pr/ist gives, we find that his analysis generates only two of the five readings. 
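The combinatorics just described are small enough to check mechanically. The following Python sketch (our own illustration; the choice labels *a through *g are encoded as plain letters) enumerates the admissible choice combinations and confirms that exactly the five attested readings are generated, with JJJB underivable:

```python
from itertools import product

def readings():
    results = set()
    for c2, c3 in product("ab", "cd"):
        # Clause-4 options: *e (coreference with the parallel element
        # x2) is always available; the similarity option is *f after
        # choice *a in clause 2, and *g after choice *b.
        for c4 in ("e", "f" if c2 == "a" else "g"):
            o2 = "J" if c2 == "a" else "T"   # x2 -> John or the teacher
            o3 = "J" if c3 == "c" else "B"   # x3 -> John or Bill
            o4 = {"e": o2, "f": o3, "g": "T"}[c4]
            results.add("J" + o2 + o3 + o4)  # paper 1 is always John's
    return sorted(results)

print(readings())            # ['JJBB', 'JJBJ', 'JJJJ', 'JTBT', 'JTJT']
print("JJJB" in readings())  # False -- the missing reading
```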
Briefly, if the first ellipsis is resolved to the strict reading, then the JJJJ reading is possible. If the first ellipsis is resolved to the sloppy reading, then only the JTBT reading is possible. Asher's account, extrapolating from an example he discusses (p. 371), may generate as many as six readings, including the missing reading. This reading results from the manner in which the strict reading for the first ellipsis is generated: the final clause pronoun is resolved with the entity specified by the subject of the antecedent clause, whereas our algorithm creates a dependency between the pronoun and its parallel element in the antecedent clause. Our mechanism is more natural because of the alignment of parallel elements between clauses when establishing parallelism, and it is this property which results in the underivability of the missing reading.

    before'(e32, e42)
    revise'(e32, b, p3)                  revise'(e42, t, p4)
    b : Bill'(e31, b)                    t : teacher'(e41, t)
    p3 : paper'(e35, p3)                 p4 : paper'(e45, p4)
    Poss'(e34, x3, p3)                   Poss'(e44, x4, p4)
    x3 : he'(e33, x3)                    x4 : he'(e43, x4)
    [Coref(x3, e33, x1, e13) (*c)]       [Coref(x4, e43, x2, e23) (*e)]
    [Coref(x3, e33, b, e31) (*d)]        [Coref(x4, e43, x3, e33) (*f)]
                                         [Coref(x4, e43, t, e41) (*g)]

        Figure 3: Representations for Five Readings

5 A Source-of-Ellipsis Paradox

DSP identify two kinds of analysis in the VP ellipsis literature. In identity-of-relations analyses (Sag, 1976; Williams, 1977; Gawron and Peters, 1990; Fiengo and May, 1994, inter alia), strict/sloppy readings arise from an ambiguity in the antecedent VP derivation. The ambiguity in the ellipsis results from copying each possibility. In non-identity approaches (Dalrymple, Shieber, and Pereira, 1991; Kehler, 1993; Crouch, 1995, inter alia), strict/sloppy readings result from a choice point within the resolution algorithm. Our approach falls into this class. Non-identity approaches are supported by examples such as (7), which has reading (8).

(7) John realizes that he is a fool, but Bill does not, even though his wife does. (Dahl, 1972)

(8) John realizes that John is a fool, but Bill does not realize that Bill is a fool, even though Bill's wife realizes Bill is a fool.

Example (7) contains two ellipses. Reading (8) results from the second clause receiving a sloppy interpretation from the first, and the third clause receiving a strict interpretation from the second. An identity-of-relations analysis, however, predicts that this reading does not exist. Because the second clause will only have the sloppy derivation received from the first, the strict derivation that the third clause requires from the second will not be present.

However, in defending their identity-of-relations approach, Gawron and Peters (1990) note that a non-identity account predicts that sentence (9) has the (nonexistent) reading given in (10).

(9) John revised his paper before Bill did, but after the teacher did.

(10) John revised John's paper before Bill revised Bill's paper, but after the teacher revised John's paper.

In this case, the first clause is the antecedent for both ellipses. These two examples create a paradox; apparently neither type of analysis (nor any previous analyses we are aware of) can explain both.

Our analysis accounts for both examples through a mutually-constraining interaction of parallelisms. Example (7) is fairly straightforward, so we focus on example (9). Let us refer to the clauses as clauses 1, 2, and 3. Because clauses 2 and 3 are VP-elliptical, we must establish a parallelism between each of them and clause 1. In addition, the contrast relation signalled by "but" is justified by the contrasting predicates "before" and "after", provided their corresponding pairs of arguments are similar. Their first arguments are similar since they are identical: clause 1. Then we also have to establish the similarity of their second arguments: clause 2 and clause 3. Thus, three mutually constraining parallelisms must be established: 1-2, 1-3, and 2-3.

    before'(e12, e22)    after'(e12, e32)

    e12 : revise'(e12, j, p1)
    j : John'(e11, j)
    p1 : paper'(e15, p1)
    Poss'(e14, x1, p1)
    x1 : he'(e13, x1)
    Coref(x1, e13, j, e11)

    e22 : revise'(e22, b, p2)            e32 : revise'(e32, t, p3)
    b : Bill'(e21, b)                    t : teacher'(e31, t)
    p2 : paper'(e25, p2)                 p3 : paper'(e35, p3)
    Poss'(e24, x2, p2)                   Poss'(e34, x3, p3)
    x2 : he'(e23, x2)                    x3 : he'(e33, x3)
    [Coref(x2, e23, x1, e13) (*a)]       [Coref(x3, e33, x1, e13) (*c)]
    [Coref(x2, e23, b, e21) (*b)]        [Coref(x3, e33, t, e31) (*d)]

        Figure 4: Representations for the Source-of-Ellipsis Paradox

In Figure 4, cases *a and *b arise from the coreference and similarity options when establishing the parallelism between clauses 1 and 2, and cases *c and *d from the parallelism between clauses 1 and 3. However, because parallelism is also required between clauses 2 and 3, we cannot choose these options freely. If we choose case *a, then we must choose case *c, giving us the JJJ reading. If we choose case *b, then we must choose case *d, giving us the JBT reading. Because of the mutual constraints of the three parallelisms, no other readings are possible. This is exactly the right result.

Prüst (1992) essentially follows Sag's (1976) treatment of strict and sloppy readings, which, like other identity-of-relations analyses, will not generate the reading of the cascaded ellipsis sentence (7) shown in (8). While the approach will correctly predict the lack of reading (10) for sentence (9), it does so for the wrong reason. Whereas ellipsis resolution does not permit such readings in any circumstance in his account, we claim that the lack of such readings for sentence (9) is due to constraints imposed by multiple parallelisms, and not because of the correctness of identity-of-relations analyses. Asher's (1993) analysis falls into the non-identity class of analyses, and therefore makes the correct predictions for sentence (7). While he does not discuss the contrast between this case and sentence (9), we do not see any reason why his framework could not accommodate our solution.

6 Other Examples

Missing Readings with Multiple Pronouns. Dahl (1974) noticed that sentence (11) has only three readings instead of the four one might expect. The reading "Bill said that John revised Bill's paper" is missing.

(11) John said that he revised his paper, and Bill did too.

In contrast, the similar sentence given in (12) appears to have all four readings.

(12) John said that his teacher revised his paper, and Bill did too.

The readings derived by our analysis depend on the Coref relations that hold between the coreferring noun phrases in the antecedent clauses. For sentence (11), the correct readings result if his is linked to he and he to John; for sentence (12), the correct readings result if both pronouns are linked to John.
Other cases in the literature indicate that the situation is more complicated than might initially be evident. Handling these cases requires an account of how such dependencies are established, which we discuss in Hobbs and Kehler (forthcoming).

Extended Parallelism. In some cases, the elements involved in a sloppy reading may not be contained in the minimal clause containing the ellipsis.

(13) John told a man that Mary likes him, and Bill told a boy that Susan does. [Footnote 7: This example is due to Prüst (1992), whose approach successfully handles it.]

Although the antecedent clause for "Susan does" is "Mary likes him", there is a sloppy reading in which "Bill told a boy that Susan likes Bill". This fact is problematic for accounts of VP ellipsis that operate only within the minimal clauses. These readings are predicted by our account, as John and Bill are parallel in the main clauses.

Lazy Pronouns. "Lazy pronouns" can be accounted for similarly. In

(14) The man who gives his paycheck to his wife is wiser than the man who gives it to his mistress. (Karttunen, 1969)

the pronoun it does not refer to the first man's paycheck but the second's. In text, it normally requires an explicit, coreferring antecedent. However, the parallelism between the clauses licenses a sloppy reading via the similarity option. The real-world fact that to give something to someone, you first must have it, leads to a strong preference for the sloppy reading.

It is necessary to have parallelism in order to license the lazy pronoun reading. If we eliminate the possibility of parallelism, as in

(15) John revised his paper, and then Bill handed it in.

the lazy pronoun reading is not available, even though the have-before-give constraint is not satisfied. To interpret this sentence, we are more likely to assume an unmentioned transfer event between the two explicit events.

Sloppy Readings with Events. Sentence (16) has a "sloppy" reading in which the second main clause means "I will kiss you even if you don't want me to kiss you."

(16) I will help you if you want me to, but I will kiss you even if you don't. [Footnote 8: Mark Gawron, p.c., attributed to Carl Pollard.]

Deriving this reading requires a Coref relation between the elided event and its antecedent in the first main clause, which is obtained when our algorithm bails out in event coreference (see footnote 3). Then in expanding the VP ellipsis in the second main clause, taking the similarity option for the event generates the desired reading.

Inferentially-Determined Antecedents. Webber (1978) provides several examples in which the antecedent of an ellipsis is derived inferentially:

(17) Mary wants to go to Spain and Fred wants to go to Peru, but because of limited resources, only one of them can.

Our account of parallelism applies twice in handling this example, once in creating a complex antecedent from recognizing the parallelism between the first two clauses, and again in resolving the ellipsis against this antecedent. Hobbs and Kehler (forthcoming) describe the analysis of this case as well as others involving quantification.

7 Summary

We have given a general account of parallelism in discourse and applied it to the special case of resolving possible readings for instances of VP ellipsis. In doing so, we showed how a variety of examples that have been problematic for previous approaches are accounted for in a natural and straightforward fashion.
Furthermore, the generality of the approach makes it directly applicable to a variety of other types of ellipsis and reference in natural language.

Acknowledgements

The authors thank Mark Gawron, David Israel, and three anonymous reviewers for helpful comments. This research was supported by National Science Foundation/Advanced Research Projects Agency Grant IRI-9314961.

References

Asher, Nicholas. 1993. Reference to Abstract Objects in Discourse. SLAP 50, Dordrecht, Kluwer.

Crouch, Richard. 1995. Ellipsis and quantification: A substitutional approach. In Proceedings of EACL-95, pages 229-236, Dublin, Ireland, March.

Dahl, Osten. 1972. On so-called "sloppy" identity. Gothenburg Papers in Theoretical Linguistics, 11. University of Göteborg.

Dahl, Osten. 1974. How to open a sentence: Abstraction in natural language. In Logical Grammar Reports, No. 12. University of Göteborg.

Dalrymple, Mary, Stuart M. Shieber, and Fernando Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy, 14:399-452.

Fiengo, Robert and Robert May. 1994. Indices and Identity. MIT Press, Cambridge, MA.

Gawron, Mark and Stanley Peters. 1990. Anaphora and Quantification in Situation Semantics. CSLI/University of Chicago Press, Stanford University. CSLI Lecture Notes, Number 19.

Hardt, Daniel. 1992. VP ellipsis and contextual interpretation. In Proceedings of COLING-92, Nantes.

Hobbs, Jerry R. 1979. Coherence and coreference. Cognitive Science, 3:67-90.

Hobbs, Jerry R. 1985. On the coherence and structure of discourse. Technical Report CSLI-85-37, Center for the Study of Language and Information, Stanford University, October.

Hobbs, Jerry R. and Andrew Kehler. Forthcoming. A general theory of parallelism and the special case of VP ellipsis. Technical report, SRI International.

Hobbs, Jerry R., Mark E. Stickel, Douglas E. Appelt, and Paul Martin. 1993. Interpretation as abduction. Artificial Intelligence, 63:69-142.

Karttunen, Lauri. 1969. Pronouns and variables. In Papers from the Fifth Regional Meeting of the Chicago Linguistics Society.

Kehler, Andrew. 1993. A discourse copying algorithm for ellipsis and anaphora resolution. In Proceedings of EACL-93, pages 203-212, Utrecht, the Netherlands, April.

Kehler, Andrew. 1995. Interpreting Cohesive Forms in the Context of Discourse Inference. Ph.D. thesis, Harvard University.

Lappin, Shalom and Michael McCord. 1990. Anaphora resolution in slot grammar. Computational Linguistics, 16:197-212.

Prüst, Hub. 1992. On Discourse Structuring, VP Anaphora, and Gapping. Ph.D. thesis, University of Amsterdam.

Sag, Ivan. 1976. Deletion and Logical Form. Ph.D. thesis, MIT.

Scha, Remko and Livia Polanyi. 1988. An augmented context free grammar for discourse. In Proceedings of COLING-88, pages 573-577, Budapest, August.

Webber, Bonnie Lynn. 1978. A Formal Approach to Discourse Anaphora. Ph.D. thesis, Harvard University.

Williams, Edwin. 1977. Discourse and logical form. Linguistic Inquiry, 8(1).
On Interpreting F-Structures as UDRSs

Josef van Genabith
School of Computer Applications
Dublin City University
Dublin 9, Ireland
josef@compapp.dcu.ie

Richard Crouch
Department of Computer Science
University of Nottingham
University Park, Nottingham NG7 2RD, UK
rsc@cs.nott.ac.uk

Abstract

We describe a method for interpreting abstract flat syntactic representations, LFG f-structures, as underspecified semantic representations, here Underspecified Discourse Representation Structures (UDRSs). The method establishes a one-to-one correspondence between subsets of the LFG and UDRS formalisms. It provides a model-theoretic interpretation and an inferential component which operates directly on underspecified representations for f-structures through the translation images of f-structures as UDRSs.

1 Introduction

Lexical Functional Grammar (LFG) f-structures (Kaplan and Bresnan, 1982; Dalrymple et al., 1995a) are attribute-value matrices representing high-level syntactic information, abstracting away from the particulars of surface realization such as word order or inflection while capturing underlying generalizations. Although f-structures are first and foremost syntactic representations, they do encode some semantic information, namely basic predicate-argument structure in the semantic form value of the PRED attribute. Previous approaches to providing semantic components for LFGs concentrated on providing schemas for relating (or translating) f-structures (in)to sets of disambiguated semantic representations which are then interpreted model-theoretically (Halvorsen, 1983; Halvorsen and Kaplan, 1988; Fenstad et al., 1987; Wedekind and Kaplan, 1993; Dalrymple et al., 1996). More recently, (Genabith and Crouch, 1996) presented a method for providing a direct and underspecified interpretation of f-structures by interpreting them as quasi-logical forms (QLFs) (Alshawi and Crouch, 1992). The approach was prompted by striking structural similarities between f-structures

    [ PRED  'pick ⟨↑SUBJ, ↑OBJ⟩'
      SUBJ  [ PRED 'COACH', NUM SG, SPEC EVERY ]
      OBJ   [ PRED 'PLAYER', NUM SG, SPEC A ] ]

and QLF representations

    ?Scope : pick(term(+r, <num=sg, spec=every>, coach, ?Q, ?X),
                  term(+g, <num=sg, spec=a>, player, ?P, ?R))

both of which are flat representations which allow underspecification of e.g. the scope of quantificational NPs. In this companion paper we show that f-structures are just as easily interpretable as UDRSs (Reyle, 1993; Reyle, 1995):

    [UDRS box diagram: a top box dominating the duplex condition for
     every coach(x) and the box for player(y), both of which dominate
     the box containing pick(x, y).]

We do this in terms of a translation function τ from f-structures to UDRSs. The recursive part of the definition states that the translation of an f-structure is simply the union of the translations of its component parts:

    τ([Γ1 φ1, ..., Γn φn, PRED Π⟨↑Γ1, ..., ↑Γn⟩])
        := {l0 : Π(x1, ..., xn)} ∪ τ(φ1) ∪ ... ∪ τ(φn)

While there certainly is a difference in approach and emphasis between f-structures, QLFs and UDRSs, the motivation for flat (underspecified) representations in each case is computational. The details of the LFG and UDRT formalisms are described at length elsewhere: here we briefly present the very basics of the UDRS formalism; we define a language of wff-s (well-formed f-structures); we define a mapping τ from f-structures to UDRSs together with a reverse mapping τ⁻¹; and we show correctness with respect to an independent semantics (Dalrymple et al., 1996).
Finally, unlike QLF, the UDRS formalism comes equipped with an inference mechanism which operates directly on the underspecified representations without the need of considering cases. We illustrate our approach with a simple example involving the UDRS deduction component (see also (König and Reyle, 1996), where amongst other things the possibility of direct deductions on f-structures is discussed).

2 Underspecified Discourse Representation Structures

In standard DRT (Kamp and Reyle, 1993) scope relations between quantificational structures and operators are unambiguously specified in terms of the structure and nesting of boxes. UDRT (Reyle, 1993; Reyle, 1995) allows partial specifications of scope relations. Textual definitions of UDRSs are based on a labeling (indexing) of DRS conditions and a statement of a partial ordering relation between the labels. The language of UDRSs is based on a set L of labels, a set Ref of discourse referents and a set Rel of relation symbols. It features two types of conditions: [Footnote 1: The definition abstracts away from some of the complexities in the full definitions of the UDRS language (Reyle, 1993). The full language also contains type 1 conditions of the form l : α(l1,...,ln) indicating that (l1,...,ln) are contributed by a single sentence, etc.]

1. (a) if l ∈ L and x ∈ Ref then l : x is a condition
   (b) if l ∈ L, R ∈ Rel an n-place relation and x1,...,xn ∈ Ref then l : R(x1,...,xn) is a condition
   (c) if li, lj ∈ L then li : ¬lj is a condition
   (d) if li, lj, lk ∈ L then li : lj ⇒ lk is a condition
   (e) if l, l1,...,ln ∈ L then l : ∨(l1,...,ln) is a condition

2. if li, lj ∈ L then li ≤ lj is a condition, where ≤ is a partial ordering defining an upper semi-lattice with a top element.

UDRSs are pairs of a set of type 2 conditions with a set of type 1 conditions:

- A UDRS K is a pair ⟨L, C⟩ where L = ⟨L, ≤⟩ is an upper semi-lattice of labels and C a set of conditions of type 1 above such that if li : ¬lj ∈ C then lj ≤ li ∈ L, and if li : lj ⇒ lk ∈ C then lj ≤ li, lk ≤ li ∈ L. [Footnote 2: This closes L under the subordination relations induced by complex conditions of the form ¬K and Ki ⇒ Kj.]

The construction of UDRSs, in particular the specification of the partial ordering between labeled conditions in L, is constrained by a set of meta-level constraints (principles). They ensure, e.g., that verbs are subordinated with respect to their scope-inducing arguments, that scope-sensitive elements obey the restrictions postulated by whatever syntactic theory is adopted, that potential antecedents are scoped with respect to their anaphoric potential, etc. Below we list the basic cases:

- Clause Boundedness: the scope of genuinely quantificational structures is clause bounded. If lq and lcl are the labels associated with the quantificational structure and the containing clause, respectively, then the constraint lq ≤ lcl enforces clause boundedness.

- Scope of Indefinites: indefinites labeled li may take arbitrarily wide scope in the representation. They cannot exceed the top-level DRS lT, i.e. li ≤ lT.

- Proper Names: proper names π always end up in the top-level DRS lT. This is specified lexically by lT : π.

The semantics is defined in terms of disambiguations δ. It takes its cue from the definition of the consequence relation; in the most recent version (Reyle, 1995), with correlated disambiguations, from ∀δ(Γδ ⊨ ψδ), resulting in a conjunctive interpretation of a goal UDRS. [Footnote 3: δ is an operation mapping α into one of its disambiguations αδ. The original semantics in (Reyle, 1993) took its cue from ∀δi∃δj(Γδi ⊨ ψδj), resulting in a disjunctive semantics.]
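Before turning to the proof systems, the two kinds of UDRS conditions can be rendered concretely as a small set of datatypes. The following Python sketch is our own illustration and not part of the UDRT definition; it also checks the closure requirement on the label lattice stated above:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ref:                  # l : x             (type 1a)
    label: str; x: str

@dataclass(frozen=True)
class Pred:                 # l : R(x1,...,xn)  (type 1b)
    label: str; rel: str; args: tuple

@dataclass(frozen=True)
class Neg:                  # li : not lj       (type 1c)
    label: str; neg: str

@dataclass(frozen=True)
class Impl:                 # li : lj => lk     (type 1d)
    label: str; ante: str; cons: str

@dataclass(frozen=True)
class Leq:                  # li <= lj          (type 2, subordination)
    lower: str; upper: str

def closed(order, conds):
    """Every li : not lj must come with lj <= li, and every
    li : lj => lk with lj <= li and lk <= li (the semi-lattice and
    top-element requirements are not checked here)."""
    have = {(c.lower, c.upper) for c in order}
    for c in conds:
        if isinstance(c, Neg) and (c.neg, c.label) not in have:
            return False
        if isinstance(c, Impl) and not {(c.ante, c.label),
                                        (c.cons, c.label)} <= have:
            return False
    return True
```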
In contrast to other proof systems, the UDRS proof systems (Reyle, 1993; Reyle, 1995; König and Reyle, 1996) operate directly on underspecified representations, avoiding (whenever possible) the need to consider disambiguated cases. [Footnote 4: Soundness and completeness results are given for the system in (Reyle, 1993).]

3 A language of well-formed f-structures

The language of wff-s (well-formed f-structures) is defined below. The basic vocabulary consists of five disjoint sets: GFs (subcategorizable grammatical functions), GFn (non-subcategorizable grammatical functions), SF (semantic forms), ATR (attributes) and ATOM (atomic values):

- GFs = {SUBJ, OBJ, COMP, XCOMP, ...}
- GFn = {ADJUNCTS, RELMODS, ...}
- SF = {coach⟨⟩, support⟨↑SUBJ, ↑OBJ⟩, ...}
- ATR = {SPEC, NUM, PER, GEN, ...}
- ATOM = {a, some, every, most, ..., SG, PL, ...}

The formation rules pivot on the semantic form PRED values.

- If Π⟨⟩ ∈ SF then [PRED Π⟨⟩][i] ∈ wff-s.

- If φ1[1], ..., φn[n] ∈ wff-s and Π⟨↑Γ1,...,↑Γn⟩ ∈ SF then φ[i] ∈ wff-s, where φ is of the form

      [ Γ1 φ1[1], ..., Γn φn[n], PRED Π⟨↑Γ1,...,↑Γn⟩ ][i]

  and where for any two substructures ψ[l] and ψ'[m] occurring in φ[i], l ≠ m except possibly where ψ ≡ ψ'. [Footnote 5: Where ≡ denotes syntactic identity modulo permutation of attribute-value pairs.]

- If α ∈ ATR, v ∈ ATOM, φ[i] ∈ wff-s where φ[i] is of the form [PRED ... Π(...)][i] and α ∉ dom(φ[i]), then

      [ α v, PRED Π(...) ][i] ∈ wff-s

The side condition in the second clause ensures that only identical substructures can have identical tags. Tags are used to represent reentrancies and will often appear vacuously. The definition captures f-structures that are complete, coherent and consistent. [Footnote 6: Proof: simple induction on the formation rules for wff-s using the definitions of completeness, coherence and consistency (Kaplan and Bresnan, 1982). Because of lack of space we cannot here consider non-subcategorizable grammatical functions; for a treatment of those in a QLF-style interpretation see (Genabith and Crouch, 1996). The notion of a substructure occurring in an f-structure and dom(φ) can easily be spelled out formally. The definition given above uses textual representations of f-structures. It can easily be recast in terms of hierarchical sets, finite functions, directed graphs, etc.]

4 An f-structure - UDRS return trip

In order to illustrate the basic idea we will first give a simplified definition of the translation τ from f-structures to UDRSs. The full textual definitions are given in the appendix. The (U)DRT construction principles distinguish between genuinely quantificational NPs and indefinite NPs. [Footnote 7: Proper names are dealt with in the full definitions in the appendix.] Accordingly we have (in linearized form; the paper's original presentation uses DRS boxes):

- τ([Γ1 φ1, ..., Γn φn, PRED Π⟨↑Γ1,...,↑Γn⟩]) := τ1(φ1) ∪ τ2(φ2) ∪ ... ∪ τn(φn) ∪ {l0 : Π(x1, x2, ..., xn)}

- τi([SPEC EVERY, PRED Π⟨⟩]) := {li : li1 ∀xi li2, li1 : xi, li1 : Π(xi)}

- τi([SPEC A, PRED Π⟨⟩]) := {li : xi, li : Π(xi)}

The formulation of the reverse translation τ⁻¹ from UDRSs back into f-structures depends on a map between argument positions in UDRS predicates and grammatical functions in LFG semantic forms:

    Π( x1,  x2,  ..., xn )
       |    |          |
    Π⟨ ↑Γ1, ↑Γ2, ..., ↑Γn ⟩

This is, of course, the province of lexical mapping theories (LMTs). For our present purposes it will be sufficient to assume a lexically specified mapping.
- τ⁻¹ of a UDRS consisting of sub-UDRSs K1, ..., Kn over a condition Π(x1, x2, ..., xn) is (in linearized form):

      τ⁻¹(K1 ∪ ... ∪ Kn ∪ {l0 : Π(x1, x2, ..., xn)}) :=
          [ Γ1 τ⁻¹(K1), Γ2 τ⁻¹(K2), ..., PRED Π⟨↑Γ1, ↑Γ2, ..., ↑Γn⟩ ]

- τ⁻¹({li : li1 ∀xi li2, li1 : xi, li1 : Π(xi)}) := [SPEC EVERY, PRED Π⟨⟩]

- τ⁻¹({li : xi, li : Π(xi)}) := [SPEC A, PRED Π⟨⟩]

[Figure 1: The UDRS τ(φ[1]) = K[1] — a box diagram: the top box dominates the duplex condition for every coach(x) and the box for player(y), both of which dominate the box containing support(x, y).]

If the lexical map between argument positions in UDRS predicates and grammatical functions in LFG semantic forms is a function, it can be shown that for all φ ∈ wff-s:

    τ⁻¹(τ(φ)) = φ

Proof is by induction on the complexity of φ. This establishes a one-to-one correspondence between subsets of the UDRS and LFG formalisms. Note that τ⁻¹ is a partial function on UDRS representations. The reason is that in addition to full underspecification, UDRT allows partial underspecification of scope, for which there is no correlate in the original LFG f-structure formalism.

5 Correctness of the Translation

A correctness criterion for the translation can be defined in terms of preservation of truth with respect to an independent semantics. Here we show correctness with respect to the linear logic based LFG semantics of (Dalrymple et al., 1996): [Footnote 8: The notation σ(φ) is in analogy with the LFG σ-projection and here refers to the set of linear logic meaning constructors associated with φ.]

    ⟦τ(φ)⟧ = ⟦σ(φ)⟧

Correctness is with respect to (sets of) disambiguations and truth: [Footnote 9: This is because the original semantics in (Dalrymple et al., 1996) is neither underspecified nor dynamic. See e.g. (Genabith and Crouch, 1997) for a dynamic and underspecified version of a linear logic based semantics.]

    {u | u = δ(τ(φ))} ≡ {l | σ(φ) ⊢u l}

where δ is the UDRS disambiguation and ⊢u the linear logic consequence relation. Without going into details, δ works by adding subordination constraints turning partial into total orders. In the absence of scope constraints, [Footnote 10: Here we need to drop the clause boundedness constraint.] for a UDRS with n quantificational structures Q (that is, including indefinites) this results in n! scope readings, as required. Linear logic deductions ⊢u produce scopings in terms of the order in which premises are consumed in a proof. Again, in the absence of scope constraints this results in n! scopings for n quantifiers Q. Everything else being equal, this establishes correctness with respect to sets of disambiguations.

6 A Worked Example

We illustrate our approach in terms of a simple example inference.

[1] Every coach supported a player.
[2] Smith is a coach.
[3] Smith supported a player.

Premise [1] is ambiguous between a wide scope and a narrow scope reading of the indefinite NP. From [1] and [2] we can conclude [3], which is not ambiguous. Assume that the following (simplified) f-structures φ[1], φ[2] and φ[3] are associated with [1], [2] and [3], respectively:

    [ PRED  'SUPPORT ⟨↑SUBJ, ↑OBJ⟩'
      SUBJ  [ PRED 'COACH', SPEC EVERY ]
      OBJ   [ PRED 'PLAYER', SPEC A ] ][1]

    [ PRED  'COACH ⟨↑SUBJ⟩'
      SUBJ  [ PRED 'SMITH' ] ][2]

    [ PRED  'SUPPORT ⟨↑SUBJ, ↑OBJ⟩'
      SUBJ  [ PRED 'SMITH' ]
      OBJ   [ PRED 'PLAYER', SPEC A ] ][3]

We have that

    τ(φ[1]) = ⟨{ l1 : l11 ∀x l12, l11 : x, l11 : coach(x),
                 l2 : y, l2 : player(y), l0 : support(x, y) },
               { l1 ≤ lT, l0 ≤ l12, l2 ≤ lT, l0 ≤ l2 }⟩

the graphical representation of which is given in Figure 1 (on the previous page). For [2] we get

    τ(φ[2]) = ⟨{ lT : z, lT : smith(z), l0' : coach(z) }, { l0' ≤ lT }⟩

[UDRS box diagram for τ(φ[2]): smith(z) in the top box, dominating the box coach(z).]

In the calculus of (Reyle, 1995) we obtain the UDRS K[3] associated with the conclusion in terms of an application of the rule of detachment (DET):

    K[3] = ⟨{ lT : z, lT : smith(z), l2 : y, l2 : player(y),
              l0 : support(z, y) },
            { l2 ≤ lT, l0 ≤ l2 }⟩

[UDRS box diagram for K[3]: smith(z) in the top box, dominating the box for player(y), which dominates the box containing support(z, y).]

which turns out to be the translation image under τ of the f-structure φ[3] associated with the conclusion [3]. [Footnote 11: Note that the conclusion UDRS K[3] can be "collapsed" into the fully specified DRS with universe {z, y} and conditions smith(z), player(y), support(z, y).] Summarizing, we have that indeed K[3] = τ(φ[3]), which, given that τ is correct, does not come as too much of a surprise. The possibility of defining deduction rules directly on f-structures is discussed in (König and Reyle, 1996).

7 Conclusion and Further Work

In the present paper we have interpreted f-structures as UDRSs and illustrated with a simple example how the deductive mechanisms of UDRT can be exploited in the interpretation. (König and Reyle, 1996), amongst other things, further explores this issue and proposes direct deduction on LFG f-structures. We have formulated a reverse translation from UDRSs back into f-structures and established a one-to-one correspondence between subsets of the LFG and UDRT formalisms. As it stands, however, the level of f-structure representation does not express the full range of subordination constraints available in UDRT. In this paper we have covered the most basic parts, the easy bits. The method has to be extended to a more extensive fragment to prove (or disprove) its mettle.

The UDRT and QLF (Genabith and Crouch, 1996) interpretations of f-structures invite comparison of the two semantic formalisms. Without being able to go into any great detail, QLF and UDRT both provide underspecified semantics for ambiguous representations Δ in terms of sets {ω1, ..., ωn} of fully disambiguated representations ωi which can be obtained from Δ. For a simple core fragment (disregarding dynamic effects, wrinkles of the UDRS and QLF disambiguation operations Du and Dq, etc.), everything else being equal, for a given sentence S with associated QLF and UDRS representations Δq and Δu, respectively, we have that Dq(Δq) = {ω1q, ..., ωnq} and Du(Δu) = {ω1u, ..., ωnu} and pairwise ⟦ωiq⟧ = ⟦ωiu⟧ for 1 ≤ i ≤ n, ωiq ∈ Dq(Δq) and ωiu ∈ Du(Δu). That is, the QLF and UDRT semantics coincide with respect to the truth conditions of representations in corresponding sets of disambiguations. This said, however, they differ with respect to the semantics assigned to the underspecified representations Δq and Δu themselves. ⟦Δq⟧ is defined in terms of a supervaluation construction over {ω1q, ..., ωnq} (Alshawi and Crouch, 1992), resulting in the three-valued:

    ⟦Δq⟧ = 1          iff for all ωq ∈ Dq(Δq), ⟦ωq⟧ = 1
    ⟦Δq⟧ = 0          iff for no ωq ∈ Dq(Δq), ⟦ωq⟧ = 1
    ⟦Δq⟧ = undefined  otherwise

The UDRT semantics is defined classically and takes its cue from the definition of the semantic consequence relation for UDRSs. In (Reyle, 1995):

    Γ ⊨ ψ iff ∀δ(Γδ ⊨ ψδ)

(where Γδ and ψδ are correlated disambiguations), which implies that a goal UDRS is interpreted conjunctively:

    ⟦Δu⟧95 = 1 iff for all ωu ∈ Du(Δu), ⟦ωu⟧95 = 1
    ⟦Δu⟧95 = 0 otherwise

while the definition in (Reyle, 1993):

    Γ ⊨ ψ iff ∀δi∃δj(Γδi ⊨ ψδj)

results in a disjunctive interpretation:

    ⟦Δu⟧93 = 1 iff for some ωu ∈ Du(Δu), ⟦ωu⟧93 = 1
    ⟦Δu⟧93 = 0 otherwise

It is easy to see that the UDRS semantics ⟦·⟧95 and ⟦·⟧93 each cover the two opposite ends of the QLF semantics: ⟦·⟧95 covers definite truth while ⟦·⟧93 covers definite falsity. On a final note, the remarkable correspondence between LFG f-structure and UDRT and QLF representations (the latter two arguably being the major recent underspecified semantic representation formalisms) provides further independent motivation for a level of representation similar to LFG f-structure which antedates its underspecified semantic cousins by more than a decade.

8 Appendix

We now define a translation τ from f-structures to UDRSs. The (U)DRT construction principles distinguish between genuinely quantificational NPs, indefinite NPs and proper names. Accordingly we have:

- τ[i]([PRED Π⟨↑Γ1,...,↑Γn⟩, Γ1 φ1[1], ..., Γn φn[n]]) := τ[1](φ1) ∪ ... ∪ τ[n](φn) ∪ {l[i]0 : Π(γ[1], ..., γ[n])}

  where γ[j] := x[j] if Γj ∈ {SUBJ, OBJ, ...}, and γ[j] := l[j]0 if Γj ∈ {COMP, XCOMP}.

- τ[i]([SPEC EVERY, PRED Π⟨⟩]) := {l[i] : l[i]1 ∀x[i] l[i]2, l[i]1 : x[i], l[i]1 : Π(x[i]), l[i] ≤ l[k], l[k]0 ≤ l[i]2}

- τ[i]([SPEC A, PRED Π⟨⟩]) := {l[i] : x[i], l[i] : Π(x[i]), l[i] ≤ lT, l[k]0 ≤ l[i]}

- τ[i]([PRED Π⟨⟩]) := {lT : x[i], lT : Π(x[i]), l[k]0 ≤ lT}

(with [k] the tag of the clause containing the NP tagged [i]). The first clause defines the recursive part of the translation function and states that the translation of an f-structure is simply the union of the translations of its component parts. The base cases of the definition are provided by the three remaining clauses. They correspond directly to the construction principles discussed in Section 2. The first one deals with genuinely quantificational NPs, the second one with indefinites and the third one with proper names. Note that the definitions ensure clause boundedness of quantificational NPs (l[i] ≤ l[k]), allow indefinites to take arbitrarily wide scope (l[i] ≤ lT) and assign proper names to the top level of the resulting UDRS (lT : x[i], lT : Π(x[i])), as required. The indices are our book-keeping devices for label and variable management. F-structure reentrancies are handled correctly without further stipulation. Atomic attribute-value pairs can be included as unary definite relations.

For the reverse mapping, assume a consistent UDRS labeling (e.g. as provided by the τ mapping) and a lexically specified mapping between subcategorizable grammatical functions in LFG semantic forms and argument positions in the corresponding UDRT predicates:

    Π( x1,  x2,  ..., xn )
       |    |          |
    Π⟨ ↑Γ1, ↑Γ2, ..., ↑Γn ⟩

The scaffolding which allows us to (re)construct an f-structure from a UDRS is provided by UDRS subordination constraints and variables occurring in UDRS conditions. The translation recurses on the semantic contributions of verbs. To translate a UDRS K = ⟨L, C⟩, merge the structural with the content constraints into the equivalent K' = L ∪ C. Define a function D ("dependents") on referents, labels and merged UDRSs as in Figure 2. D is constrained to D(ηi, K) ⊆ K. Given a discourse referent x and a UDRS, D picks out the components of the UDRS corresponding to proper names, indefinite and genuinely quantificational NPs with x as implicit argument. Given a label l, D picks out the transitive closure over sentential complements and their dependents. Note that for simple, non-recursive UDRSs K, D defines a partition {{l : Π(x1,...,xn)}, D(x1, K), ..., D(xn, K)} of K.

- If K = {l[i]0 : Π(η1,...,ηn)} ⊎ D(η1, K) ⊎ ... ⊎ D(ηn, K) then

      τ⁻¹(K) := [ Γ1 τ⁻¹(D(η1, K)), ..., Γn τ⁻¹(D(ηn, K)), PRED Π⟨↑Γ1,...,↑Γn⟩ ][i]

- τ⁻¹({l : l1 ∀x l2, l1 : x, l1 : Π(x)} ⊎ Sub) := [SPEC EVERY, PRED Π⟨⟩][i]

[Footnote 12: The definition below ignores subordination constraints. It assumes proper
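To make the shape of this translation concrete, here is a toy Python rendering of the recursive clause and the three base cases. The dictionary encoding of f-structures, the label names and the flat condition strings are all our own illustrative choices, not part of the formal definition; sentential complements (COMP/XCOMP) are omitted:

```python
def tau(f, i, k, conds):
    """Toy translation of an f-structure (a dict) into UDRS conditions.
    f["PRED"] = (relation, [governed functions]); i tags this structure,
    k tags the clause whose verb condition is labelled l<k>0."""
    args = []
    for n, gf in enumerate(f["PRED"][1], 1):
        sub = f[gf]
        rel, x, l = sub["PRED"][0], f"x{i}{n}", f"l{i}{n}"
        spec = sub.get("SPEC")
        if spec == "EVERY":      # quantificational NP: clause bounded
            conds += [f"{l}: {l}1 EVERY {x} {l}2", f"{l}1: {x}",
                      f"{l}1: {rel}({x})", f"{l} <= l{k}",
                      f"l{k}0 <= {l}2"]
        elif spec == "A":        # indefinite: arbitrarily wide scope
            conds += [f"{l}: {x}", f"{l}: {rel}({x})",
                      f"{l} <= lT", f"l{k}0 <= {l}"]
        else:                    # proper name: lands in the top DRS
            conds += [f"lT: {x}", f"lT: {rel}({x})"]
        args.append(x)
    conds.append(f"l{k}0: {f['PRED'][0]}({', '.join(args)})")
    return conds

f1 = {"PRED": ("support", ["SUBJ", "OBJ"]),
      "SUBJ": {"PRED": ("coach", []), "SPEC": "EVERY"},
      "OBJ":  {"PRED": ("player", []), "SPEC": "A"}}
for c in tau(f1, "1", "1", []):
    print(c)
```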
UDRSs, i.e. UDRSs where all the discourse referents are properly bound. Thus the definition implements the "garbage in, garbage out" principle. It also assumes that discourse referents in "quantifier prefixes" are disjoint. It is straightforward to extend the definition to take account of subordination constraints if that is desired but, as we remarked above, the translation image (the resulting f-structures) cannot in all cases reflect the constraints.]

    D(ηi, K) := {lηi : ηi, lηi : Π(ηi)} ∪ {λ ≤ lηi | (λ ≤ lηi) ∈ K}
                    if ηi ∈ Ref (proper names and indefinites)
    D(ηi, K) := {lηi : lηi1 ∀ηi lηi2, lηi1 : ηi, lηi1 : Π(ηi)} ∪ {λ ≤ lηi2 | (λ ≤ lηi2) ∈ K}
                    if ηi ∈ Ref (genuinely quantificational NPs)
    D(ηi, K) := {lηi : Π(γ1,...,γn)} ∪ D(γ1, K) ∪ ... ∪ D(γn, K)
                    if ηi ∈ L

        Figure 2: The "dependents" function D (where D(ηi, K) ⊆ K)

- τ⁻¹({l : x, l : Π(x)} ⊎ Sub) := [SPEC A, PRED Π⟨⟩][i]

- τ⁻¹({lT : x, lT : Π(x)} ⊎ Sub) := [PRED Π⟨⟩][i]

Note that τ⁻¹ is a partial function from UDRSs to f-structures. The reason is that f-structures do not represent partial subordination constraints; in other words, they are fully underspecified. Finally, note that τ and τ⁻¹ are recursive (they allow for arbitrary embeddings of e.g. sentential complements). This may lead to structures outside the first-order UDRT fragment. As an example the reader may want to check the translation in Figure 3 and furthermore verify that the reverse translation does indeed take us back to the original (modulo renaming of variables and labels) UDRS.

9 Acknowledgements

Early versions of this have been presented at FraCaS workshops (Cooper et al., 1996) and at IMS, Stuttgart in 1995 and at the LFG96 in Grenoble. We thank our FraCaS colleagues and Anette Frank and Mary Dalrymple for discussion and support.

References

H. Alshawi and R. Crouch. 1992. Monotonic semantic interpretation. In Proceedings 30th Annual Meeting of the Association for Computational Linguistics, pages 32-38.

Cooper, R., Crouch, R., van Eijck, J., Fox, C., van Genabith, J., Jaspars, J., Kamp, H., Pinkal, M., Milward, D., Poesio, M., and Pulman, S. 1996. Building the Framework. FraCaS: A Framework for Computational Semantics. FraCaS deliverable D16. Also available by anonymous ftp from ftp.cogsci.ed.ac.uk, pub/FRACAS/del16.ps.gz.

M. Dalrymple, R.M. Kaplan, J.T. Maxwell, and A. Zaenen, editors. 1995a. Formal Issues in Lexical-Functional Grammar. CSLI lecture notes no. 47. CSLI Publications.

M. Dalrymple, J. Lamping, F.C.N. Pereira, and V. Saraswat. 1996. A deductive account of quantification in LFG. In M. Kanazawa, C. Pinon, and H. de Swart, editors, Quantifiers, Deduction and Context, pages 33-57. CSLI Publications, No. 57.

J.E. Fenstad, P.K. Halvorsen, T. Langholm, and J. van Benthem. 1987. Situations, Language and Logic. D. Reidel, Dordrecht.

J. van Genabith and R. Crouch. 1996. Direct and underspecified interpretations of LFG f-structures. In COLING 96, Copenhagen, Denmark, pages 262-267.

J. van Genabith and R. Crouch. 1997. How to glue a donkey to an f-structure, or porting a dynamic meaning representation language into LFG's linear logic based glue language semantics. In International Workshop for Computational Semantics, Tilburg, Proceedings, pages 52-65.

P.K. Halvorsen and R. Kaplan. 1988. Projections and semantic description in lexical-functional grammar. In Proceedings of the International Conference on Fifth Generation Computer Systems, pages 1116-1122, Tokyo: Institute for New Generation Computer Technology.

P.K. Halvorsen. 1983. Semantics for LFG. Linguistic Inquiry, 14:567-615.

H. Kamp and U. Reyle. 1993.
From Discourse to Logic. Kluwer, Dordrecht.

R.M. Kaplan and J. Bresnan. 1982. Lexical functional grammar. In J. Bresnan, editor, The Mental Representation of Grammatical Relations, pages 173-281. MIT Press, Cambridge, Mass.

Esther König and Uwe Reyle. 1996. A general reasoning scheme for underspecified representations. In Hans-Jürgen Ohlbach and Uwe Reyle, editors, Logic and its Applications. Festschrift for Dov Gabbay. Kluwer.

U. Reyle. 1993. Dealing with ambiguities by underspecification: Construction, representation and deduction. Journal of Semantics, 10:123-179.

Uwe Reyle. 1995. On reasoning with ambiguities. In Seventh Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, pages 1-8, Dublin. ACL.

J. Wedekind and R.M. Kaplan. 1993. Type-driven semantic interpretation of f-structures. In S. Krauwer, M. Moortgat, and Louis des Tombe, editors, Sixth Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference, pages 404-411. ACL.
A Uniform Approach to Underspecification and Parallelism Joachim Niehren Programming Systems Lab Universitgt des Saarlandes Saarbrficken, Germany niehren©ps, uni- sb. de Manfred Pinkal Department of Computational Linguistics UniversitS~t des Saarlandes Saarbrficken, Germany pinkal@coli, uni- sb. de Peter Ruhrberg Department of Computational Linguistics Universit/it des Saarlandes Saarbrficken, Germany peru@coli, uni-sb, de Abstract We propose a unified framework in which to treat semantic underspecification and parallelism phenomena in discourse. The framework employs a constraint language that can express equality and subtree rela- tions between finite trees. In addition, our constraint language can express the equal- ity up-to relation over trees which cap- tures parallelism between them. The con- straints are solved by context unification. We demonstrate the use of our framework at the examples of quantifier scope, ellipsis, and their interaction. 1 1 Introduction Traditional model-theoretic semantics of natural languages (Montague, 1974) has assumed that se- mantic information, processed by composition and reasoning processes, is available in a completely specified form. During the last few years, the phe- nomenon of semantic underspecification, i.e. the incomplete availability of semantic information in processing, has received increasing attention. Sev- eral aspects of underspecification have been fo- cussed upon, motivated mainly by computational considerations: the ambiguity and openness of lex- ical meaning (Pustejovsky, 1995; Copestake and Briscoe, 1995), referential underspecification (Asher, 1993), structural semantic underspecification caused by syntactic ambiguities (Egg and Lebeth, 1995), and by the underdetermination of scope relations (Alshawi and Crouch, 1992; Reyte, 1993). In ad- dition, external factors such as insufficient coverage 1The research reported in this paper has been sup- ported by the SFB 378 at the UniversitS.t des Saarlandes and the Esprit Working Group CCL II (EP 22457). of the grammar, time-constraints for parsing, and most importantly the kind of incompleteness, uncer- tainty, and inconsistency, coming with spoken input are coming more into the focus of semantic process- ing (Bos et al., 1996; Pinkal, 1995). The aim of semantic underspecification is to pro- duce compact representations of the set of possible readings of a discourse. While the readings of a dis- course may be only partially known, the interpre- tations of its components are often strongly corre- lated. In this paper, we are concerned with a uni- form treatment of underspecification and of phenom- ena of discourse-semantic parallelism. Some typical parallelism phenomena are ellipsis, corrections, and variations. We illustrate them here by some exam- ples (focus-bearing phrases are underlined): (1) John speaks Chinese. Bill too. (2) John speaks Japanese. - No, he speaks Chinese. (3) ??? - Bill speaks Chinese, too. Parallelism guides the interpretation process for the above discourses. This is most obvious in the case of ellipsis interpretation (1), but is also evident for the resolution of the anaphor in the correction in (2), and in the variation case (3) where the context is unknown and has to be inferred. The challenge is to integrate a treatment of paral- lelism with underspecification, such as in cases of the interaction of scope and ellipsis. Problematic examples like (4) have been brought to attention by (Hirschbuehler, 1982). 
The example demonstrated that earlier treatments of ellipsis based on copying of the content of constituents are insufficient for such kinds of parallelism. (4) Two European languages are spoken by many linguists, and two Asian ones (are spoken by many linguists), too. 410 The first clause of (4) is scope-ambiguous between two readings. The second, elliptic one, is too. Its interpretation is indicated by the part in parenthe- ses. The parallelism imposed by ellipsis requires the scope of the quantifiers in the elliptical clause to be analogous to the scope of the quantifiers in the antecedent clause. Thus, the conjunction of both clauses has only two readings: Either the interpre- tation is the wide scope existential one in both cases (two specific European languages as well as two spe- cific Asian languages are widely known among lin- guists), or it is the narrow scope existential one (many linguists speak two European languages, and many linguists speak two Asian languages). A natural approach for describing underspecified se- mantic information is to use an appropriate con- straint language. We use constraints interpreted over finite trees. A tree itself represents a formula of some semantic representation language. This ap- proach is very flexible in allowing various choices for the particular semantic representation language, such as first-order logic, intensional logic (Dowty, Wall, and Peters, 1981), or Discourse Representa- tion Theory, DRT, (Kamp and Reyle, 1993). The constraint approach contrasts with theories such as Reyles UDRT (1993) which stresses the integration of the levels of semantic representation language and underspecified descriptions. For a description language we propose the use of con- text constraints over finite trees which have been in- vestigated in (Niehren, Pinkal, and Ruhrberg, 1997). This constraint language can express equality and subtree relations between finite trees. More gen- erally it can express the "equality up-to" relation over trees, which captures (non-local) parallelism be- tween trees. The general case of equality up-to con- straints cannot be handled by a system using subtree plus equality constraints only. The problem of solv- ing context constraints is known as context unifica- tion, which is a subcase of linear second-order unifi- cation (L~vy, 1996; Pinkal, 1995). There is a com- plete and correct semi-decision procedure for solving context constraints. Context unification allows to treat the interaction of scope and ellipsis. Note that in example (4) the trees representing the semantics of the source and target clause must be equal up to the positions cor- responding to the contrasting elements (two Euro- pean languages / two Asian languages). Thus, this is a case where the additional expressive power of context constraints is crucial. In this paper, we elab- orate on the example of scope and ellipsis interac- tion. The framework appears to extend, however, to all kinds of cases where structural underspecification and discourse-semantic parallelism interact. In Section 2, we will describe context unification, and present some results about its formal proper- ties and its relation to other formalisms. Section 3 demonstrates the application to scope underspeci- fication, to ellipsis, and to the combined cases. In Section 4, the proposed treatment is compared to re- lated approaches in computational semantics. Sec- tion 5 gives an outlook on future work. 
2 Context Unification

Context unification is the problem of solving context constraints over finite trees. The notion of context unification stems from (Lévy, 1996), whereas the problem originates from (Comon, 1992) and (Schmidt-Schauß, 1994). Context unification has been formally defined and investigated by the authors in (Niehren, Pinkal, and Ruhrberg, 1997). Here, we select and summarize relevant results on context unification from the latter.

Context unification subsumes string unification (see (Baader and Siekmann, 1993) for an overview) and is subsumed by linear second-order unification, which has been independently proposed by (Lévy, 1996) and (Pinkal, 1995). The decidability of context unification is an open problem. String unification has been proved decidable by (Makanin, 1977). The decidability of linear second-order unification is an open problem too, whereas second-order unification is known to be undecidable (Goldfarb, 1981).

The syntax and semantics of context constraints are defined as follows. We assume an infinite set of first-order variables ranged over by X, Y, Z, an infinite set of second-order variables ranged over by C, and a set of function symbols ranged over by f, each equipped with an arity n ≥ 0. Nullary function symbols are called constants. Context constraints φ are defined by the following abstract syntax:

    t ::= X | f(t1, ..., tn) | C(t)
    φ ::= t = t' | φ ∧ φ'

A (second-order) term t is either a first-order variable X, a construction f(t1,...,tn) where the arity of f is n, or an application C(t). A context constraint is a conjunction of equations between second-order terms.

Semantically, we interpret first-order variables X as finite constructor trees, which are first-order terms without variables, and second-order variables C as context functions, which we define next. A context with hole X is a term t that does not contain any other variable than X and has exactly one occurrence of X. A context function γ is a function from trees to trees such that there exists a variable X and a context t with hole X satisfying the equation γ(σ) = t[σ/X] for all trees σ. Note that context functions can be described by linear second-order lambda terms of the form λX.t where X occurs exactly once in the second-order term t.

Let α be a variable assignment that maps first-order variables to finite trees and second-order variables to context functions. The interpretation α(t) of a term t under α is the finite tree defined as follows:

    α(f(t1, ..., tn)) = f(α(t1), ..., α(tn))
    α(C(t)) = α(C)(α(t))

A solution of a context constraint φ is a variable assignment α such that α(t) = α(t') for all equations t = t' in φ. A context constraint is called satisfiable if it has a solution. Context unification is the satisfiability problem of context constraints.

Context constraints (plus existential quantification) can express subtree constraints over finite trees. A subtree constraint has the form X≪X' and is interpreted with respect to the subtree relation on finite trees. A subtree relation σ≪σ' holds if σ is a subtree of σ', i.e. if there exists a context function γ such that σ' = γ(σ). Thus, the following equivalence is valid over finite trees:

    X≪X'  ↔  ∃C(X' = C(X))

Context constraints are also more general than equality up-to constraints over finite trees, which allow one to describe parallel tree structures.
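A context with a hole and its application to a tree can be encoded directly. The following Python sketch is our own illustration (the names Node, HOLE and apply_context are not from the paper); it represents trees as immutable records and a context as a tree containing one reserved hole node:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    sym: str
    kids: tuple = ()

HOLE = Node("<hole>")   # reserved symbol; ordinary trees never use it

def apply_context(ctx, arg):
    """Plug `arg` into the unique hole of `ctx`, i.e. compute the
    context function gamma applied to a tree."""
    if ctx == HOLE:
        return arg
    return Node(ctx.sym, tuple(apply_context(k, arg) for k in ctx.kids))

# The context lambda Y. @(@(s, c), Y), with @ left-associative:
at = lambda l, r: Node("@", (l, r))
C = at(at(Node("s"), Node("c")), HOLE)
print(apply_context(C, Node("j")))   # the tree @(@(s, c), j)
```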
An equality up-to constraint has the form X1/X1' = X2/X2' and is interpreted with respect to the equality up-to relation on finite trees. Given finite trees σ1, σ1', σ2, σ2', the equality up-to relation σ1/σ1' = σ2/σ2' holds if σ1 is equal to σ2 up to one position p, where σ1 has the subtree σ1' and σ2 the subtree σ2'. This is depicted in Figure 1. [Figure 1: The equality up-to relation — two trees sharing a common surrounding context, differing only in the subtrees at one position p.] In this case, there exists a context function γ such that σ1 = γ(σ1') and σ2 = γ(σ2'). In other words, the following equivalence holds:

    X/X' = Y/Y'  ↔  ∃C(X = C(X') ∧ Y = C(Y'))

Indeed, the satisfiability problems of context constraints and equality up-to constraints over finite trees are equivalent. In other words, context unification can be considered as the problem of solving equality up-to constraints over finite trees.

2.1 Solving Context Constraints

There exists a correct and complete semi-decision procedure for context unification. This algorithm computes a representation of all solutions of a context constraint, in case there are any. We illustrate the algorithm in Figure 2. There, we consider the constraint

    Xs = @(@(s, c), j)  ∧  Xs = C(Xcs)  ∧  Xcs = j

which is also discussed in example (11)(i) as part of an elliptical construction.

Our algorithm proceeds on pairs consisting of a constraint and a set of variable bindings. At the beginning the set of variable bindings is empty. In case of termination with an empty constraint, the set of variable bindings describes a set of solutions of the initial constraint. Consider the run of our algorithm in Figure 2. In the first step, Xs = @(@(s, c), j) is removed from the constraint and the variable binding Xs ↦ @(@(s, c), j) is added. This variable binding is applied to the remaining constraint, where Xs is substituted by @(@(s, c), j). The second computation step is similar. It replaces the constraint Xcs = j by a variable binding Xcs ↦ j and eliminates Xcs in the remaining constraint.

    Xs = @(@(s,c),j) ∧ Xs = C(Xcs) ∧ Xcs = j
        |  Xs ↦ @(@(s,c),j)
    @(@(s,c),j) = C(Xcs) ∧ Xcs = j
        |  Xcs ↦ j
    @(@(s,c),j) = C(j)
        |         (imitation)        |
    @(s,c) = C'(j)              j = C'(j)
        |                            |  (projection)
    false                       j = j
                                     |
                                true

        Figure 2: Solving the context constraints of example (11)(i)

The resulting constraint @(@(s,c),j) = C(j) presents an equation between a term with a constant @ as its ("rigid") head symbol and a term with a context variable C as its ("flexible") head symbol. In such a case one can either apply a projection rule that binds C to the identity context λY.Y or an imitation rule. Projection produces a clash of the two rigid head symbols @ and j. Imitation presents two possibilities for locating the argument j of the context variable C as a subtree of the two arguments of the rigid head symbol @. Both alternatives lead to new rigid-flexible situations. The first alternative leads to failure (via further projection or imitation), as @(s, c) does not contain j as a subtree. The second leads to success by another projection step. The unique solution of the constraint in Figure 2 can be described as follows:

    Xs ↦ @(@(s, c), j),   Xcs ↦ j,   C ↦ λY.@(@(s, c), Y)

The full version of (Niehren, Pinkal, and Ruhrberg, 1997) contains discussions of two algorithms for context unification. For a discussion of decidable fragments of context constraints, we also refer to this paper.

3 Underspecification and Parallelism

In this section, we discuss the use of context unification for treating underspecification and parallelism by some concrete examples. The set of solutions of a context constraint represents the set of possible readings of a given discourse. The trees assigned by the solutions represent expressions of some semantic representation language. Here, we choose (extensional) typed higher-order logic, HOL (Dowty, Wall, and Peters, 1981). However, any other logical language can be used in principle, so long as we can represent its syntax in terms of finite trees.

It is important to keep our semantic representation language (HOL) clearly separate from our description language (context constraints over finite trees). We assume an infinite set of HOL-variables ranged over by x and y. The signature of context constraints contains a unary function symbol lam_x and a constant var_x per HOL-variable x. Furthermore, we assume a binary function symbol @ that we write in left-associative infix notation, and constants like john, language, etc. For example, the tree

    (many@language)@(lam_x((spoken_by@john)@var_x))

represents the HOL formula many(language)(λx.spoken_by(john)(x)). Note that the function symbol @ represents application in HOL and the function symbols lam_x represent abstraction over x in HOL.

3.1 Scope

Scope underspecification for a sentence like (5) is expressed by the equations in (6):

(5) Two languages are spoken by many linguists.

(6) Xs = C1((two@language)@lam_x(C3(Xs'))) ∧
    Xs = C2((many@linguist)@lam_y(C4(Xs'))) ∧
    Xs' = spoken_by@var_y@var_x

The algorithm for context unification leads to a disjunction of two solved constraints, given in (7)(i) and (ii).

(7) (i)  Xs = C1((two@language)@lam_x(
              C5((many@linguist)@lam_y(
                C4(spoken_by@var_y@var_x)))))
    (ii) Xs = C2((many@linguist)@lam_y(
              C6((two@language)@lam_x(
                C3(spoken_by@var_y@var_x)))))

The algorithm does in fact compute a third kind of solved constraint for (6), where neither of the quantifiers two@language and many@linguist is required to be within the scope of the other. This possibility can be excluded within the given framework by using a stronger set of equations between second-order terms, as in (6'). Such equations can be reduced to context constraints via Skolemisation.

(6') Cs = λX.C1((two@language)@lam_x(C3(X))) ∧
     Cs = λX.C2((many@linguist)@lam_y(C4(X))) ∧
     Xs = Cs(spoken_by@var_y@var_x)

Both solved constraints in (7) describe infinite sets of solutions, which arise from freely instantiating the remaining context variables by arbitrary contexts. We need to apply a closure operation consisting in projecting the remaining free context variables to the identity context λX.X. This gives us in some sense the minimal solutions to the original constraint. It is clear that performing the closure operation must be based on the information that the semantic material assembled so far is complete. Phenomena of incomplete input, or coercion, require a withholding, or at least a delaying, of the closure operation. The closure operation on (7)(i) and (ii) leads to the two possible scope readings of (5), given in (8)(i) and (ii) respectively.

(8) (i)  Xs = (two@language)@lam_x(
               (many@linguist)@lam_y(
                 spoken_by@var_y@var_x))
    (ii) Xs = (many@linguist)@lam_y(
               (two@language)@lam_x(
                 spoken_by@var_y@var_x))

A constraint set specifying the scope-neutral meaning information as in (6') can be obtained in a rather simple compositional fashion. Let each node P in the syntactic structure be associated with three semantic meta-variables XP, XP', and CP, and let I(P) be the scope boundary for each node P. Rules for obtaining semantic constraints from binary syntax trees are:

(9) (i) For every S-node P add XP = CP(XP'); for any other node add XP = XP'.
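The success branch of Figure 2 can be reproduced mechanically: for an equation of the form tree = C(arg), every solution for C corresponds to replacing one occurrence of arg in the tree by the hole. The following Python sketch (our own illustration, assuming the Node/HOLE/at encoding from the previous sketch) enumerates these solutions:

```python
def contexts_for(tree, arg):
    """Yield every context C with C(arg) == tree, obtained by replacing
    one occurrence of `arg` in `tree` by the hole; these are the
    solutions the projection/imitation search of Figure 2 can reach."""
    if tree == arg:
        yield HOLE
    for i, kid in enumerate(tree.kids):
        for c in contexts_for(kid, arg):
            kids = list(tree.kids)
            kids[i] = c
            yield Node(tree.sym, tuple(kids))

target = at(at(Node("s"), Node("c")), Node("j"))
for c in contexts_for(target, Node("j")):
    print(c)   # exactly one solution: @(@(s, c), <hole>)
```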
3 Underspecification and Parallelism

In this section, we discuss the use of context unification for treating underspecification and parallelism by some concrete examples. The set of solutions of a context constraint represents the set of possible readings of a given discourse. The trees assigned by the solutions represent expressions of some semantic representation language. Here, we choose (extensional) typed higher-order logic, HOL (Dowty, Wall, and Peters, 1981). However, any other logical language can be used in principle, so long as we can represent its syntax in terms of finite trees. It is important to keep our semantic representation language (HOL) clearly separate from our description language (context constraints over finite trees).

We assume an infinite set of HOL-variables ranged over by x and y. The signature of context constraints contains a unary function symbol lamx and a constant varx per HOL-variable x. Furthermore, we assume a binary function symbol @ that we write in left-associative infix notation, and constants like john, language, etc. For example, the tree

(many@language)@(lamx((spoken_by@john)@varx))

represents the HOL formula many(language)(λx.spoken_by(john)(x)). Note that the function symbol @ represents application in HOL and the function symbols lamx the abstraction over x in HOL.

3.1 Scope

Scope underspecification for a sentence like (5) is expressed by the equations in (6):

(5) Two languages are spoken by many linguists.

(6) Xs = C1((two@language)@lamx(C3(Xs′))) ∧
    Xs = C2((many@linguist)@lamy(C4(Xs′))) ∧
    Xs′ = spoken_by@vary@varx

The algorithm for context unification leads to a disjunction of two solved constraints given in (7)(i) and (ii).

(7) (i)  Xs = C1((two@language)@lamx(C3((many@linguist)@lamy(C4(spoken_by@vary@varx)))))
    (ii) Xs = C2((many@linguist)@lamy(C4((two@language)@lamx(C3(spoken_by@vary@varx)))))

The algorithm does in fact compute a third kind of solved constraint for (6), where neither of the quantifiers two@language and many@linguist is required to be within the scope of the other. This possibility can be excluded within the given framework by using a stronger set of equations between second-order terms, as in (6′). Such equations can be reduced to context constraints via Skolemisation.

(6′) Cs = λX.C1((two@language)@lamx(C3(X))) ∧
     Cs = λX.C2((many@linguist)@lamy(C4(X))) ∧
     Xs = Cs(spoken_by@vary@varx)

Both solved constraints in (7) describe infinite sets of solutions, which arise from freely instantiating the remaining context variables by arbitrary contexts. We need to apply a closure operation consisting in projecting the remaining free context variables to the identity context λX.X. This gives us in some sense the minimal solutions to the original constraint. It is clear that performing the closure operation must be based on the information that the semantic material assembled so far is complete. Phenomena of incomplete input, or coercion, require a withholding, or at least a delaying, of the closure operation. The closure operation on (7)(i) and (ii) leads to the two possible scope readings of (5) given in (8)(i) and (ii) respectively.

(8) (i)  Xs ↦ (two@language)@lamx((many@linguist)@lamy(spoken_by@vary@varx))
    (ii) Xs ↦ (many@linguist)@lamy((two@language)@lamx(spoken_by@vary@varx))

A constraint set specifying the scope-neutral meaning information as in (6′) can be obtained in a rather simple compositional fashion. Let each node P in the syntactic structure be associated with three semantic meta-variables XP, XP′, and CP, and let I(P) be the scope boundary for each node P. Rules for obtaining semantic constraints from binary syntax trees are:

(9) (i) For every S-node P add XP = CP(XP′); for any other node add XP = XP′.
(ii) If [P Q R], and Q and R are not NP nodes, add XP′ = XQ@XR or XP′ = XR@XQ, according to HOL type.

(iii) If [P Q R] or [P R Q], and R is an NP node, then add XP′ = XQ@varx and CI(P) = λX.C′(XR@lamx(C″(X))), where C′ and C″ are fresh context variables.

For example, the first two constraints in example (6′) result from applying rule (iii), where the values for the quantifiers two@language and many@linguist are already substituted in for the variables XR in both cases. The quantifiers themselves are put together by rule (ii). The third constraint results from rule (i) when the semantics of Xs′ is filled in. The latter is a byproduct of the applications of rule (iii) to the two NPs.

3.2 Ellipsis

We now look into the interpretation of examples (1) to (4), which exhibit forms of parallelism. Let us take Xs and Xt to represent the semantics of the source and the target clause (i.e., the first and the second clause of a parallel construction; the terminology is taken over from the ellipsis literature), and Xcs and Xct to refer to the semantic values of the contrast pair. The constraint set of the whole construction is the union of the constraint sets obtained by interpreting source and target clause independently of each other, plus the pair of constraints given in (10).

(10) Xs = C(Xcs) ∧ Xt = C(Xct)

The equations in (10) determine that the semantics of the source clause and the semantics of the target clause are obtained by embedding the representations of the respective contrasting elements into the same context. In other words: source semantics and target semantics must be identical up to the positions of the contrasting elements.

As an example, consider the ellipsis construction of Sentence (1), where for simplicity we assume that proper names are interpreted by constants and not as quantifiers. It makes no difference for our treatment of parallelism.

(11) (i)  Xs = speak@chinese@john ∧ Xcs = john ∧ Xs = C(Xcs)
     (ii) Xct = bill ∧ Xt = C(Xct)

By applying the algorithm for context unification to this constraint, in particular to part (i) as demonstrated in Figure 2, we can compute the context C to be λY.(speak@chinese@Y). This yields the interpretation of the elliptical clause, which is given by Xt ↦ speak@chinese@bill.

Note that the treatment of parallelism refers to contrasted and non-contrasted portions of the clause pairs rather than to overt and phonetically unrealized elements. Thus it is not specific to the treatment of ellipsis, but can be applied to other kinds of parallel constructions as well. In the correction pair of Sentence (2), it provides a certain unambiguous reading for the pronoun; in (3), it gives Xs = speak@chinese@Xcs as a partial description of the (overheard or unuttered) source clause.
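The effect of the parallelism constraints (10) on example (11) can be replayed with the contexts function from the sketch in Section 2.1: solve Xs = C(Xcs) for C, then apply C to the target contrast element (a sketch under the same tuple encoding of trees; names are ours):

```python
# Semantic trees for example (11), with '@' for HOL application
# (left-associative: speak@chinese@john is (speak@chinese)@john).
speak, chinese, john, bill = ('speak',), ('chinese',), ('john',), ('bill',)
Xs  = ('@', ('@', speak, chinese), john)     # source: "John speaks Chinese"
Xcs = john                                   # source contrast element
Xct = bill                                   # target contrast element

# Constraint (10): Xs = C(Xcs) and Xt = C(Xct). Solving the first equation
# with the projection/imitation search yields a unique context here.
(C,) = contexts(Xs, Xcs)                     # C = lambda Y: speak@chinese@Y
Xt = C(Xct)                                  # the elided target semantics
assert Xt == ('@', ('@', speak, chinese), bill)   # speak@chinese@bill
```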
3.3 Scope and Ellipsis

Finally, let us look at the problem case of parallelism constraints for structurally underspecified clause pairs. We get a combination of constraints for a scope-underspecified source clause (12) and parallelism constraints between source and target (13).

(12) Cs = λX.C1((two@e_language)@lamx(C3(X))) ∧
     Cs = λX.C2((many@linguist)@lamy(C4(X))) ∧
     Xs = Cs(spoken_by@vary@varx)

(13) Xs = C(two@e_language) ∧ Xt = C(two@a_language)

The conjunction of the constraints in (12) and (13) correctly allows for the two solutions (14) and (15), with corresponding scopings in Xs and Xt after closure.²

(14) Xs ↦ (two@e_language)@lamx((many@linguist)@lamy(spoken_by@vary@varx)) ∧
     Xt ↦ (two@a_language)@lamx((many@linguist)@lamy(spoken_by@vary@varx)) ∧
     C ↦ λY. Y@lamx((many@linguist)@lamy(spoken_by@vary@varx))

(15) Xs ↦ (many@linguist)@lamy((two@e_language)@lamx(spoken_by@vary@varx)) ∧
     Xt ↦ (many@linguist)@lamy((two@a_language)@lamx(spoken_by@vary@varx)) ∧
     C ↦ λY. (many@linguist)@lamy(Y@lamx(spoken_by@vary@varx))

Mixed solutions, where the two quantifiers take different relative scope in the source and the target clause, are not permitted by our constraints. For example, (16) provides no solution to the above constraints.

(16) Xs ↦ (two@e_language)@lamx((many@linguist)@lamy(spoken_by@vary@varx))
     Xt ↦ (many@linguist)@lamy((two@a_language)@lamx(spoken_by@vary@varx))

²Notice that closure is applied to the solved form of the combined constraints (i.e. (14) and (15) respectively) of the two sentences here, rather than to solved forms of (12) and (13) separately. This reflects the dependency of the interpretation of the second sentence on material in the first one.

From the trees in (16) one cannot construct a context function to be assigned to C which solves the parallelism constraints in (13).

4 Comparison to other Theories

Standard theories for scope underspecification make use of subtree relations and equality relations only. Such relationships may be expressed on the level of a separate constraint language, as in our case, or be incorporated into the semantic formalism itself, as is done for DRT by the system of UDRT (Reyle, 1993). In UDRT one introduces "labels" that behave very much like variables for DRSes. These labels figure in equations as well as subordination constraints to express scope relations between quantifiers. Equations and subordination constraints alone do not provide us with a treatment of parallelism. An idea that seems to come close to our notion of equality up-to constraints is the co-indexing technique in (Reyle, 1995), where non-local forms of parallelism are treated by dependency marking on labels. We believe that our use of a separate constraint language is more transparent.

A treatment of ellipsis interpretation which uses a form of higher-order unification has been proposed in (Dalrymple, Shieber, and Pereira, 1991) and extended to other kinds of parallel constructions by (Gardent, Kohlhase, and van Leusen, 1996; Gardent and Kohlhase, 1996). Though related in some respects, there are formal differences and differences in coverage between this approach and the one we propose. They use an algorithm for higher-order matching rather than context unification, and they do not distinguish an object and a meta language level. As a consequence they need to resort to additional machinery for the treatment of scope relations, such as Pereira's scoping calculus, described in (Shieber, Pereira, and Dalrymple, 1996). On the other hand, their approach treats a large number of problems of the interaction of anaphora and ellipsis, especially strict/sloppy ambiguities. Our use of context unification does not allow us to adopt their strategy of capturing such ambiguities by admitting non-linear solutions to parallelism constraints.

5 Outlook

Extensions of context unification may be useful for our applications. For gapping constructions, contexts with multiple holes need to be considered. The algorithm for context unification described in the complete version of (Niehren, Pinkal, and Ruhrberg, 1997) makes use of contexts with multiple holes in any case.
So far our treatment of ellipsis does not capture strict/sloppy ambiguities if that ambiguity is not postulated for the source clause of the ellipsis construction. We believe that the ambiguity can be integrated into the framework of context unification without making such a problematic assumption. This requires modifying the parallelism requirements in an appropriate way. We hope that, while sticking to linear solutions only, one may be able to introduce such ambiguities in a very controlled way, thus avoiding the overgeneration problems that come from freely abstracting multiple variable occurrences. This work is currently in progress, and a deeper comparison between the approaches has yet to be carried out.

An implementation of a semi-decision procedure for context unification has been carried out by Jordi Lévy, and we applied it successfully to some simple ellipsis examples. Further experimentation is needed. Hopefully there are decidable fragments of the context unification problem that are empirically adequate for the phenomena we wish to model.

References

Alshawi, H. and D. Crouch. 1992. Monotonic semantic interpretation. In 30th Annual Meeting of the Association for Computational Linguistics, pages 32-38.

Asher, Nick. 1993. Reference to Abstract Objects in Discourse. Kluwer, Dordrecht.

Baader, F. and J. Siekmann. 1993. Unification theory. In D. Gabbay, C. J. Hogger, and J. A. Robinson, editors, Handbook of Logic in Artificial Intelligence and Logic Programming. Oxford University Press.

Bos, Johan, Björn Gambäck, Christian Lieske, Yoshiki Mori, Manfred Pinkal, and Karsten Worm. 1996. Compositional semantics in Verbmobil. In Proceedings of the 16th International Conference on Computational Linguistics, volume 1, pages 131-136, København, Denmark, August. ACL.

Comon, Hubert. 1992. Completion of rewrite systems with membership constraints. In W. Kuich, editor, Proc. 19th Int. Coll. on Automata, Languages and Programming, LNCS 623, Vienna. Springer-Verlag.

Copestake, A. and E. J. Briscoe. 1995. Semi-productive polysemy and sense extension. Journal of Semantics, 12:15-67.

Dalrymple, Mary, Stuart Shieber, and Fernando Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy, 14:399-452.

Dowty, D., R. Wall, and S. Peters. 1981. Introduction to Montague Semantics. Reidel, Dordrecht.

Egg, M. and K. Lebeth. 1995. Semantic underspecification and modifier attachment ambiguities. In J. Kilbury and R. Wiese, editors, Integrative Ansaetze in der Computerlinguistik. Duesseldorf, pages 19-24.

Gardent, Claire and Michael Kohlhase. 1996. Focus and higher-order unification. In Proceedings of COLING-96, Copenhagen.

Gardent, Claire, Michael Kohlhase, and Noor van Leusen. 1996. Corrections and higher-order unification. In Proceedings of KONVENS-96. De Gruyter, Bielefeld, Germany, pages 268-279.

Goldfarb, W. D. 1981. The undecidability of the second-order unification problem. Theoretical Computer Science, 13:225-230.

Hirschbuehler, Paul. 1982. VP deletion and across-the-board quantifier scope. In J. Pustejovsky and P. Sells, editors, NELS 12, University of Massachusetts, Amherst.

Kamp, H. and U. Reyle. 1993. From Discourse to Logic. Kluwer, Dordrecht.

Lévy, Jordi. 1996. Linear second-order unification. In Proceedings of the Conference on Rewriting Techniques and Applications. Springer-Verlag.

Makanin, G. S. 1977. The problem of solvability of equations in a free semigroup. Soviet Akad. Nauk SSSR, 223(2).

Montague, R. 1974. The proper treatment of quantification in ordinary English. In R. Thomason, editor, Formal Philosophy. Selected Papers of Richard Montague. Yale University Press, New Haven and London, pages 247-271.
Niehren, Joachim, Manfred Pinkal, and Peter Ruhrberg. 1997. On equality up-to constraints over finite trees, context unification and one-step rewriting. In Proceedings of the 14th International Conference on Automated Deduction. A complete version is available from http://www.ps.uni-sb.de/~niehren. In press.

Pinkal, Manfred. 1995. Radical underspecification. In Paul Dekker and Martin Stokhof, editors, Proceedings of the 10th Amsterdam Colloquium, University of Amsterdam.

Pustejovsky, J. 1995. The Generative Lexicon. MIT Press, Cambridge, MA.

Reyle, Uwe. 1993. Dealing with ambiguities by underspecification: construction, representation, and deduction. Journal of Semantics, 10:123-179.

Reyle, Uwe. 1995. Co-indexing labelled DRSs to represent and reason with ambiguities. In S. Peters and K. van Deemter, editors, Semantic Ambiguity and Underspecification. CSLI Publications, Stanford.

Schmidt-Schauß, Manfred. 1994. Unification of stratified second-order terms. Internal Report 12/94, J. W. Goethe Universität, Frankfurt, Germany.

Shieber, Stuart, Fernando Pereira, and Mary Dalrymple. 1996. Interactions of scope and ellipsis. Linguistics and Philosophy, 19:527-552.
Co-evolution of Language and of the Language Acquisition Device

Ted Briscoe
ejb@cl.cam.ac.uk
Computer Laboratory
University of Cambridge
Pembroke Street
Cambridge CB2 3QG, UK

Abstract

A new account of parameter setting during grammatical acquisition is presented in terms of Generalized Categorial Grammar embedded in a default inheritance hierarchy, providing a natural partial ordering on the setting of parameters. Experiments show that several experimentally effective learners can be defined in this framework. Evolutionary simulations suggest that a learner with default initial settings for parameters will emerge, provided that learning is memory limited and the environment of linguistic adaptation contains an appropriate language.

1 Theoretical Background

Grammatical acquisition proceeds on the basis of a partial genotypic specification of (universal) grammar (UG) complemented with a learning procedure enabling the child to complete this specification appropriately. The parameter setting framework of Chomsky (1981) claims that learning involves fixing the values of a finite set of finite-valued parameters to select a single fully-specified grammar from within the space defined by the genotypic specification of UG. Formal accounts of parameter setting have been developed for small fragments, but even these search spaces contain local maxima and subset-superset relations which may cause a learner to converge to an incorrect grammar (Clark, 1992; Gibson and Wexler, 1994; Niyogi and Berwick, 1995). The solution to these problems involves defining default, unmarked initial values for (some) parameters and/or ordering the setting of parameters during learning.

Bickerton (1984) argues for the Bioprogram Hypothesis as an explanation for universal similarities between historically unrelated creoles, and for the rapid increase in grammatical complexity accompanying the transition from pidgin to creole languages. From the perspective of the parameters framework, the Bioprogram Hypothesis claims that children are endowed genetically with a UG which, by default, specifies the stereotypical core creole grammar, with right-branching syntax and subject-verb-object order, as in Saramaccan. Others working within the parameters framework have proposed unmarked, default parameters (e.g. Lightfoot, 1991), but the Bioprogram Hypothesis can be interpreted as towards one end of a continuum of proposals ranging from all parameters initially unset to all set to default values.

2 The Language Acquisition Device

A model of the Language Acquisition Device (LAD) incorporates a UG with associated parameters, a parser, and an algorithm for updating initial parameter settings on parse failure during learning.

2.1 The Grammar (set)

Basic categorial grammar (CG) uses one rule of application which combines a functor category (containing a slash) with an argument category to form a derived category (with one less slashed argument category). Grammatical constraints of order and agreement are captured by only allowing directed application to adjacent matching categories. Generalized Categorial Grammar (GCG) extends CG with further rule schemata.¹ The rules of forward and backward application (FA, BA), generalized weak permutation (P) and forward and backward composition (FC, BC) are given in Figure 1 (where X, Y and Z are category variables, | is a variable over slash and backslash, and ... denotes zero or more further functor arguments).
¹Wood (1993) is a general introduction to Categorial Grammar and extensions to the basic theory. The most closely related theories to that presented here are those of Steedman (e.g. 1988) and Hoffman (1995).

X/Y Y ⇒ X         Forward Application (FA):  λy[X(y)] (y) ⇒ X(y)
Y X\Y ⇒ X         Backward Application (BA): λy[X(y)] (y) ⇒ X(y)
X/Y Y/Z ⇒ X/Z     Forward Composition (FC):  λy[X(y)] λz[Y(z)] ⇒ λz[X(Y(z))]
Y\Z X\Y ⇒ X\Z     Backward Composition (BC): λz[Y(z)] λy[X(y)] ⇒ λz[X(Y(z))]
(X|Y1)...|Yn ⇒ (X|Yn)|Y1...
                  (Generalized Weak) Permutation (P):
                  λyn,...,y1[X(y1,...,yn)] ⇒ λy1,yn,...[X(y1,...,yn)]

Figure 1: GCG Rule Schemata

Kim: NP: kim′    loves: (S\NP)/NP: λy,x[love′(x y)]    Sandy: NP: sandy′
loves                 ⇒ (P)  (S/NP)\NP: λx,y[love′(x y)]
Kim + (S/NP)\NP       ⇒ (BA) S/NP: λy[love′(kim′ y)]
S/NP + Sandy          ⇒ (FA) S: love′(kim′ sandy′)

Figure 2: GCG Derivation for Kim loves Sandy

Once permutation is included, several semantically equivalent derivations for Kim loves Sandy become available; Figure 2 shows the non-conventional left-branching one. Composition also allows alternative non-conventional semantically equivalent (left-branching) derivations.

GCG as presented is inadequate as an account of UG or of any individual grammar. In particular, the definition of atomic categories needs extending to deal with featural variation (e.g. Bouma and van Noord, 1994), and the rule schemata, especially composition and weak permutation, must be restricted in various parametric ways so that overgeneration is prevented for specific languages. Nevertheless, GCG does represent a plausible kernel of UG; Hoffman (1995, 1996) explores the descriptive power of a very similar system, in which generalized weak permutation is not required because functor arguments are interpreted as multisets. She demonstrates that this system can handle (long-distance) scrambling elegantly and generates mildly context-sensitive languages (Joshi et al., 1991).

The relationship between GCG as a theory of UG (GCUG) and as the specification of a particular grammar is captured by embedding the theory in a default inheritance hierarchy. This is represented as a lattice of typed default feature structures (TDFSs) representing subsumption and default inheritance relationships (Lascarides et al., 1996; Lascarides and Copestake, 1996). The lattice defines intensionally the set of possible categories and rule schemata via type declarations on nodes. For example, an intransitive verb might be treated as a subtype of verb, inheriting subject directionality by default from a type gendir (for general direction). For English, gendir is default right, but the node of the (intransitive) functor category, where the directionality of subject arguments is specified, overrides this to left, reflecting the fact that English is predominantly right-branching, though subjects appear to the left of the verb. A transitive verb would inherit structure from the type for intransitive verbs and an extra NP argument with default directionality specified by gendir, and so forth.²

For the purposes of the evolutionary simulation described in §3, GC(U)Gs are represented as a sequence of p-settings (where p denotes principles or parameters) based on a flat (ternary) sequential encoding of such default inheritance lattices.

²Bouma and van Noord (1994) and others demonstrate that CGs can be embedded in a constraint-based representation. Briscoe (1997a,b) gives further details of the encoding of GCG in TDFSs.
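The (syntactic side of the) rule schemata of Figure 1 can be rendered directly; the category encoding below (atoms as strings, functors as (result, slash, argument) triples) is our own choice, and the semantics is omitted for brevity:

```python
def fa(x, y):       # Forward Application:   X/Y  Y   =>  X
    if isinstance(x, tuple) and x[1] == '/' and x[2] == y:
        return x[0]

def ba(y, x):       # Backward Application:  Y  X\Y   =>  X
    if isinstance(x, tuple) and x[1] == '\\' and x[2] == y:
        return x[0]

def fc(x, y):       # Forward Composition:   X/Y  Y/Z =>  X/Z
    if (isinstance(x, tuple) and x[1] == '/'
            and isinstance(y, tuple) and y[1] == '/' and x[2] == y[0]):
        return (x[0], '/', y[2])

def bc(y, x):       # Backward Composition:  Y\Z  X\Y =>  X\Z
    if (isinstance(x, tuple) and x[1] == '\\'
            and isinstance(y, tuple) and y[1] == '\\' and x[2] == y[0]):
        return (x[0], '\\', y[2])

def permute(cat):
    """Generalized weak permutation: (X|Y1)...|Yn => (X|Yn)|Y1...
    Returns the successive rotations of cat's argument list."""
    args, result = [], cat
    while isinstance(result, tuple):
        args.append(result[1:])        # peel (slash, argument), outermost first
        result = result[0]
    rotations = []
    for k in range(1, len(args)):
        rotated = args[k:] + args[:k]
        c = result
        for slash, arg in reversed(rotated):   # rebuild innermost-first
            c = (c, slash, arg)
        rotations.append(c)
    return rotations

tv = (('S', '\\', 'NP'), '/', 'NP')            # loves: (S\NP)/NP
assert fa(tv, 'NP') == ('S', '\\', 'NP')       # loves Sandy  =>  S\NP
assert ba('NP', ('S', '\\', 'NP')) == 'S'      # Kim [loves Sandy]  =>  S
assert permute(tv) == [(('S', '/', 'NP'), '\\', 'NP')]   # as in Figure 2
```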
NP  N   S   gen-dir  subj-dir  applic
AT  AT  AT  DR       DL        DT

NP  gendir  applic  S   N   subj-dir
AT  DR      DT      AT  AT  DL

applic  NP  N   gen-dir  subj-dir  S
DT      AT  AT  DR       DL        AT

Figure 3: Sequential encodings of the grammar fragment

The inheritance hierarchy provides a partial ordering on parameters, which is exploited in the learning procedure. For example, the atomic categories N, NP and S are each represented by a parameter encoding the presence/absence or lack of specification (T/F/?) of the category in the (U)G. Since they will be unordered in the lattice, their ordering in the sequential coding is arbitrary. However, the ordering of the directional types gendir and subjdir (with values L/R) is significant, as the latter is a more specific type. The distinctions between absolute, default or unset specifications also form part of the encoding (A/D/?). Figure 3 shows several equivalent and equally correct sequential encodings of the fragment of the English type system outlined above.

A set of grammars based on typological distinctions defined by basic constituent order (e.g. Greenberg, 1966; Hawkins, 1994) was constructed as a (partial) GCUG with independently varying binary-valued parameters. The eight basic language families are defined in terms of the unmarked order of verb (V), subject (S) and objects (O) in clauses. Languages within families further specify the order of modifiers and specifiers in phrases, the order of adpositions, and further phrasal-level ordering parameters. Figure 4 lists the language-specific ordering parameters used to define the full set of grammars in (partial) order of generality, and gives examples of settings based on familiar languages such as "English", "German" and "Japanese".³ "English" defines an SVO language with prepositions, in which specifiers, complementizers and some modifiers precede heads of phrases. There are other grammars in the SVO family in which all modifiers follow heads, there are postpositions, and so forth. Not all combinations of parameter settings correspond to attested languages, and one entire language family (OVS) is unattested. "Japanese" is an SOV language with postpositions, in which specifiers and modifiers follow heads. There are other languages in the SOV family, with less consistent left-branching syntax, in which specifiers and/or modifiers precede phrasal heads, some of which are attested. "German" is a more complex SOV language in which the parameter verb-second (v2) ensures that the surface order in main clauses is usually SVO.⁴ There are 20 p-settings which determine the rule schemata available, the atomic category set, and so forth. In all, this CGUG defines just under 300 grammars.

³Throughout, double quotes around language names are used as convenient mnemonics for familiar combinations of parameters. Since not all aspects of these actual languages are represented in the grammars, conclusions about actual languages must be made with care.
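One way to render the Figure 3 encodings in code is sketched below; a p-setting is a (status, value) pair per type, and the parent table that mimics default inheritance in the lattice is our own simplification:

```python
# "English" fragment of Figure 3: status in {'A', 'D', '?'} for
# absolute/default/unset, value in {'T', 'F', 'L', 'R', '?'}.
english = {
    'NP':       ('A', 'T'),     # atomic categories present
    'N':        ('A', 'T'),
    'S':        ('A', 'T'),
    'gen-dir':  ('D', 'R'),     # arguments found to the right by default
    'subj-dir': ('D', 'L'),     # ...but subjects to the left
    'applic':   ('D', 'T'),     # application schemata switched on
}

# Generalisation links of the lattice (simplified): a more specific type
# falls back to its parent when its own value is unset.
PARENT = {'subj-dir': 'gen-dir'}

def effective(psettings, name):
    """Effective value of a type under default inheritance."""
    status, value = psettings.get(name, ('?', '?'))
    if value == '?' and name in PARENT:
        return effective(psettings, PARENT[name])
    return value

assert effective(english, 'subj-dir') == 'L'
partial = dict(english, **{'subj-dir': ('?', '?')})
assert effective(partial, 'subj-dir') == 'R'   # inherited from gen-dir
```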
Not all of the resulting languages are (stringset) distinct, and some are proper subsets of other languages. "English" without the rule of permutation results in a stringset-identical language, but the grammar assigns different derivations to some strings, though the associated logical forms are identical. "English" without composition results in a subset language. Some combinations of p-settings result in 'impossible' grammars (or UGs). Others yield equivalent grammars; for example, different combinations of default settings (for types and their subtypes) can define an identical category set.

      gen  v1  n  subj  obj  v2  mod  spec  relcl  adpos  compl
Engl  R    F   R  L     R    F   R    R     R      R      R
Ger   R    F   R  L     L    T   R    R     R      R      R
Jap   L    F   L  L     L    F   L    L     L      L      ?

Figure 4: The Grammar Set - Ordering Parameters

The grammars defined generate (usually infinite) stringsets of lexical syntactic categories. These strings are sentence types, since each is equivalent to a finite set of grammatical sentences formed by selecting a lexical instance of each lexical category. Languages are represented as a finite subset of sentence types generated by the associated grammar. These represent a sample of degree-1 learning triggers for the language (e.g. Lightfoot, 1991). Subset languages are represented by 3-9 sentence types and 'full' languages by 12 sentence types. The constructions exemplified by each sentence type and their length are equivalent across all the languages defined by the grammar set, but the sequences of lexical categories can differ. For example, two SOV language renditions of The man who Bill likes gave Fred a present, one with premodifying and the other postmodifying relative clauses, both with a relative pronoun at the right boundary of the relative clause, are shown below with the differing category highlighted.

Bill   likes        who         the-man  a-present  Fred  gave
NPs    (S\NPs)\NPo  Rc\(S\NPo)  NPs\Rc   NPo2       NPo1  ((S\NPs)\NPo2)\NPo1

The-man  Bill  likes        who         a-present  Fred  gave
NPs/Rc   NPs   (S\NPs)\NPo  Rc\(S\NPo)  NPo2       NPo1  ((S\NPs)\NPo2)\NPo1

⁴Representation of the v1/v2 parameter(s) in terms of a type constraint determining allowable functor categories is discussed in more detail in Briscoe (1997b).

2.2 The Parser

The parser is a deterministic, bounded-context, stack-based shift-reduce algorithm. The parser operates on two data structures, an input buffer or queue and a stack or push-down store. The algorithm for the parser working with a GCG which includes application, composition and permutation is given in Figure 5. This algorithm finds the most left-branching derivation for a sentence type because Reduce is ordered before Shift. The category sequences representing the sentence types in the data for the entire language set are designed to be unambiguous relative to this 'greedy', deterministic algorithm, so it will always assign the appropriate logical form to each sentence type. However, there are frequently alternative, less left-branching derivations of the same logical form.

The parser is augmented with an algorithm which computes working memory load during an analysis (e.g. Baddeley, 1992). Limitations of working memory are modelled in the parser by associating a cost with each stack cell occupied during each step of a derivation, and recency and depth of processing effects are modelled by resetting this cost each time a reduction occurs; the working memory load (WML) algorithm is given in Figure 6. Figure 7 gives the right-branching derivation for Kim loves Sandy, found by the parser utilising a grammar without permutation. The WML at each step is shown for this derivation. The overall WML (16) is higher than for the left-branching derivation (9).
1. The Reduce Step: if the top 2 cells of the stack are occupied, then try a) Application: if match, then apply and goto 1), else b); b) Combination: if match, then apply and goto 1), else c); c) Permutation: if match, then apply and goto 1), else goto 2).

2. The Shift Step: if the first cell of the input buffer is occupied, then pop it and move it onto the stack together with its associated lexical syntactic category and goto 1), else goto 3).

3. The Halt Step: if only the top cell of the stack is occupied by a constituent of category S, then return Success, else return Fail.

The Match and Apply operation: if a binary rule schema matches the categories of the top 2 cells of the stack, then they are popped from the stack and the new category formed by applying the rule schema is pushed onto the stack.

The Permutation operation: each time step 1c) is visited during the Reduce step, permutation is applied to one of the categories in the top 2 cells of the stack until all possible permutations of the 2 categories have been tried using the binary rules. The number of possible permutation operations is finite and bounded by the maximum number of arguments of any functor category in the grammar.

Figure 5: The Parsing Algorithm

Stack (top leftmost)                                Input Buffer     Operation   Step  WML
                                                    Kim loves Sandy              0     0
Kim:NP:kim′                                         loves Sandy      Shift       1     1
loves:(S\NP)/NP:λy,x[love′(x y)] · Kim:NP:kim′      Sandy            Shift       2     3
Sandy:NP:sandy′ · loves:(S\NP)/NP · Kim:NP:kim′                      Shift       3     6
loves Sandy:S\NP:λx[love′(x sandy′)] · Kim:NP:kim′                   Reduce (A)  4     5
Kim loves Sandy:S:love′(kim′ sandy′)                                 Reduce (A)  5     1

Figure 7: WML for Kim loves Sandy

After each parse step (Shift, Reduce, Halt; see Figure 5):

1. Assign any new stack entry in the top cell (introduced by Shift or Reduce) a WML value of 0.
2. Increment every stack cell's WML value by 1.
3. Push the sum of the WML values of each stack cell onto the WML-record.

When the parser halts, the sum of the WML-record gives the total WML for a derivation.

Figure 6: The WML Algorithm

The WML algorithm ranks sentence types, and thus indirectly languages, by parsing each sentence type from the exemplifying data with the associated grammar and then taking the mean of the WML obtained for these sentence types. "English" with Permutation has a lower mean WML than "English" without Permutation, though they are stringset-identical, whilst a hypothetical mixture of "Japanese" SOV clausal order with "English" phrasal syntax has a mean WML which is 25% worse than that for "English". The WML algorithm is in accord with existing (psycholinguistically-motivated) theories of parsing complexity (e.g. Gibson, 1991; Hawkins, 1994; Rambow and Joshi, 1994).
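A runnable sketch of Figures 5 and 6 follows; only the application case of the Reduce step is spelled out (composition and permutation would be tried next, in that order), and the category encoding follows the earlier categorial-rules sketch:

```python
def fa(x, y):   # X/Y Y => X
    return x[0] if isinstance(x, tuple) and x[1] == '/' and x[2] == y else None

def ba(y, x):   # Y X\Y => X
    return x[0] if isinstance(x, tuple) and x[1] == '\\' and x[2] == y else None

def parse_wml(cats):
    """Greedy shift-reduce parse of a lexical category string (Figure 5),
    recording working memory load as in Figure 6. A stack cell is
    [category, age]; a reduction resets the new cell's age to 0."""
    stack, record, buf = [], [], list(cats)

    def tick():                          # steps 1-3 of the WML algorithm
        for cell in stack:
            cell[1] += 1
        record.append(sum(cell[1] for cell in stack))

    tick()                               # step 0: empty stack, WML 0
    while True:
        if len(stack) >= 2:              # Reduce: application only, here
            left, right = stack[-2][0], stack[-1][0]
            new = fa(left, right) or ba(left, right)
            if new is not None:
                del stack[-2:]
                stack.append([new, 0])
                tick()
                continue
        if buf:                          # Shift
            stack.append([buf.pop(0), 0])
            tick()
            continue
        break                            # Halt
    success = len(stack) == 1 and stack[0][0] == 'S'
    return success, sum(record)

# "Kim loves Sandy" without permutation: NP, (S\NP)/NP, NP
cats = ['NP', (('S', '\\', 'NP'), '/', 'NP'), 'NP']
assert parse_wml(cats) == (True, 16)     # the right-branching total of Figure 7
```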
2.3 The Parameter Setting Algorithm

The parameter setting algorithm is an extension of Gibson and Wexler's (1994) Trigger Learning Algorithm (TLA) to take account of the inheritance-based partial ordering and the role of memory in learning. The TLA is error-driven: parameter settings are altered in constrained ways when a learner cannot parse trigger input. Trigger input is defined as primary linguistic data which, because of its structure or context of use, is determinately unparsable with the correct interpretation (e.g. Lightfoot, 1991). In this model, the issue of ambiguity and triggers does not arise because all sentence types are treated as triggers represented by p-setting schemata. The TLA is memoryless in the sense that a history of parameter (re)settings is not maintained, in principle allowing the learner to revisit previous hypotheses. This is what allows Niyogi and Berwick (1995) to formalize parameter setting as a Markov process. However, as Brent (1996) argues, the psychological plausibility of this algorithm is doubtful: there is no evidence that children (randomly) move between neighbouring grammars along paths that revisit previous hypotheses. Therefore, each parameter can only be reset once during the learning process.

Each step for a learner can be defined in terms of three functions, P-SETTING, GRAMMAR and PARSER, as:

PARSERi(GRAMMARi(P-SETTINGi(Sentencej)))

A p-setting defines a grammar, which in turn defines a parser (where the subscripts indicate the output of each function given the previous trigger). A parameter is updated on parse failure and, if this results in a parse, the new setting is retained. The algorithm is summarized in Figure 8. Working memory grows through childhood (e.g. Baddeley, 1992), and this may assist learning by ensuring that trigger sentences gradually increase in complexity through the acquisition period (e.g. Elman, 1993) by forcing the learner to ignore more complex potential triggers that occur early in the learning process. The WML of a sentence type can be used to determine whether it can function as a trigger at a particular stage in learning.

Data: {S1, S2, ... Sn}
unless PARSERi(GRAMMARi(P-SETTINGi(Sj))) = Success
then p-settingsj = UPDATE(p-settingsi)
  unless PARSERj(GRAMMARj(P-SETTINGj(Sj))) = Success
  then RETURN p-settingsi
  else RETURN p-settingsj

Update: reset the first (most general) default or unset parameter in a left-to-right search of the p-set according to the following table:

Input:  D1  D0  ?  ?
Output: R0  R1  ?  1/0 (random)

(where 1 = T/L and 0 = F/R)

Figure 8: The Learning Algorithm
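The UPDATE step of Figure 8, under one possible literal reading of its table (the coding '1' = T/L, '0' = F/R follows the figure; the parse argument is a hypothetical stand-in for PARSER over GRAMMAR):

```python
import random

def update(psettings):
    """Reset the first (most general) default or unset parameter in a
    left-to-right scan. An entry is [name, status, value] with status
    'A' (absolute: frozen), 'D' (default), '?' (unset) or 'R' (already
    reset once, frozen thereafter); values are '1' (= T/L) or '0' (= F/R)."""
    for p in psettings:
        if p[1] == 'D':                        # D1 -> R0, D0 -> R1
            p[1], p[2] = 'R', {'1': '0', '0': '1'}[p[2]]
            return True
        if p[1] == '?':                        # ? -> 1/0 at random
            p[1], p[2] = 'R', random.choice('10')
            return True
    return False                               # nothing left to reset

def observe(psettings, sentence, parse):
    """One error-driven step of Figure 8: keep the updated p-settings only
    if they make the trigger parsable, otherwise restore the old ones."""
    if parse(psettings, sentence):
        return
    saved = [list(p) for p in psettings]
    if not (update(psettings) and parse(psettings, sentence)):
        for p, old in zip(psettings, saved):   # revert the attempted resetting
            p[:] = old
```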
3 The Simulation Model

The computational simulation supports the evolution of a population of Language Agents (LAgts), similar to Holland's (1993) Echo agents. LAgts generate and parse sentences compatible with their current p-setting. They participate in linguistic interactions which are successful if their p-settings are compatible. The relative fitness of a LAgt is a function of the proportion of its linguistic interactions which have been successful, the expressivity of the language(s) spoken, and, optionally, of the mean WML for parsing during a cycle of interactions. An interaction cycle consists of a prespecified number of individual random interactions between LAgts, with generating and parsing agents also selected randomly. LAgts which have a history of mutually successful interaction and high fitness can 'reproduce'. A LAgt can 'live' for up to ten interaction cycles, but may 'die' earlier if its fitness is relatively low. It is possible for a population to become extinct (for example, if all the initial LAgts go through ten interaction cycles without any successful interaction occurring), and successful populations tend to grow at a modest rate (to ensure a reasonable proportion of adult speakers is always present). LAgts learn during a critical period from ages 1-3 and reproduce from 4-10, parsing and/or generating any language learnt throughout their life.

During learning a LAgt can reset genuine parameters which either were unset or had default settings 'at birth'. However, p-settings with an absolute value (principles) cannot be altered during the lifetime of an LAgt. Successful LAgts reproduce at the end of interaction cycles by one-point crossover of (and, optionally, single-point mutation of) their initial p-settings, ensuring neo-Darwinian rather than Lamarckian inheritance. The encoding of p-settings allows the deterministic recovery of the initial setting. Fitness-based reproduction ensures that successful and somewhat compatible p-settings are preserved in the population and randomly sampled in the search for better versions of universal grammar, including better initial settings of genuine parameters. Thus, although the learning algorithm per se is fixed, a range of alternative learning procedures can be explored based on the definition of the initial set of parameters and their initial settings. Figure 9 summarizes crucial options in the simulation, giving the values used in the experiments reported in §4, and Figure 10 shows the fitness functions.

Variables                Typical Values
Population Size          32
Interaction Cycle        2K Interactions
Simulation Run           50 Cycles
Crossover Probability    0.9
Mutation Probability     0
Learning                 memory limited: yes; critical period: yes

Figure 9: The Simulation Options

Costs/benefits per sentence (1-6), summed for each LAgt at the end of an interaction cycle and used to calculate the fitness functions (7-8):

1. Generate cost: 1 (GC)
2. Parse cost: 1 (PC)
3. Generate subset language cost: 1 (GSC)
4. Parse failure cost: 1 (PF)
5. Parse memory cost: WML(st)
6. Interaction success benefit: 1 (SI)
7. Fitness(WML): SI/(GC+PC) × GC/(GC+GSC), further discounted by the parse memory cost (5)
8. Fitness(¬WML): SI/(GC+PC) × GC/(GC+GSC)

Figure 10: Fitness Functions
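A sketch of the cycle-end fitness computation of Figure 10; the field names are ours, and since the exact form of the WML term in item 7 is not fully recoverable, a simple division by the mean per-sentence WML stands in for it here:

```python
def fitness(agent, memory_limited=True):
    """agent holds the costs/benefits summed over one interaction cycle:
    si = successful interactions, gc = generate cost, pc = parse cost,
    gsc = generate-subset cost, wml = summed parse memory cost (item 5)."""
    si, gc, pc, gsc = agent['si'], agent['gc'], agent['pc'], agent['gsc']
    base = (si / (gc + pc)) * (gc / (gc + gsc))    # item 8: Fitness(not-WML)
    if not memory_limited:
        return base
    mean_wml = agent['wml'] / max(agent['pc'], 1)  # assumed WML discount
    return base / mean_wml                         # item 7: Fitness(WML)

agent = {'si': 150, 'gc': 100, 'pc': 100, 'gsc': 10, 'wml': 900}
assert abs(fitness(agent, memory_limited=False) - 0.6818) < 1e-3
```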
4 Experimental Results

4.1 Effectiveness of Learning Procedures

Two learning procedures were predefined: a default learner and an unset learner. These LAgts were initialized with p-settings consistent with a minimal inherited CGUG consisting of application with NP and S atomic categories. All the remaining p-settings were genuine parameters for both learners. The unset learner was initialized with all of these unset, whilst the default learner had default settings for the parameters gendir, subjdir and argorder, which specify a minimal SVO right-branching grammar, as well as default (off) settings for comp and perm, which determine the availability of Composition and Permutation, respectively. The unset learner represents a 'pure' principles-and-parameters learner. The default learner is modelled on Bickerton's bioprogram learner.

Each learner was tested against an adult LAgt initialized to generate one of seven full languages in the set which are close to an attested language; namely, "English" (SVO, predominantly right-branching), "Welsh" (SVOv1, mixed order), "Malagasy" (VOS, right-branching), "Tagalog" (VSO, right-branching), "Japanese" (SOV, left-branching), "German" (SOVv2, predominantly right-branching), "Hixkaryana" (OVS, mixed order), and an unattested full OSV language with left-branching syntax. In these tests, a single learner interacted with a single adult. After every ten interactions, in which the adult randomly generated a sentence type and the learner attempted to parse and learn from it, the state of the learner's p-settings was examined to determine whether the learner had converged on the same grammar as the adult. Table 1 shows the number of such interaction cycles (i.e. the number of input sentences, to within ten) required by each type of learner to converge on each of the eight languages.

Learner   SVO  SVOv1  VOS  VSO  SOV  SOVv2  OVS  OSV
Unset     60   80     70   80   70   70     70   70
Default   60   60     60   60   60   60     80   70

Table 1: Effectiveness of Two Learning Procedures

These figures are each calculated from 100 trials to a 1% error rate; they suggest that, in general, the default learner is more effective than the unset learner. However, for the OVS language (OVS languages represent 1.24% of the world's languages, Tomlin, 1986), and for the unattested OSV language, the default (SVO) learner is less effective. So, there are at least two learning procedures in the space defined by the model which can converge, with some presentation orders, on some of the grammars in this set. Stronger conclusions require either exhaustive experimentation or theoretical analysis of the model of the type undertaken by Gibson and Wexler (1994) and Niyogi and Berwick (1995).

4.2 Evolution of Learning Procedures

In order to test the preference for default versus unset parameters under different conditions, the five parameters which define the difference between the two learning procedures were tracked through another series of 50-cycle runs, initialized with 16 default learning adult speakers and 16 unset learning adult speakers, with or without memory limitations during learning and parsing, speaking one of the eight languages described above. Each condition was run ten times. In the memory-limited runs, default parameters came to dominate some but not all populations. In a few runs all unset parameters disappeared altogether. In all runs with populations initialized to speak "English" (SVO) or "Malagasy" (VOS) the preference for default settings was 100%. In 8 runs with "Tagalog" (VSO) the same preference emerged; in one there was a preference for unset parameters, and in the other no clear preference. However, for the remaining five languages there was no strong preference.

The results for the runs without memory limitations are different, with an increased preference for unset parameters across all languages but no clear 100% preference for any individual language. Table 2 shows the pattern of preferences which emerged across 160 runs and how this was affected by the presence or absence of memory limitations.

       Unset  Default  None
WML    15     39       26
¬WML   34     17       29

Table 2: Overall preferences for parameter types

To test whether it was memory limitations during learning or during parsing which were affecting the results, another series of runs for "English" was performed with either memory limitations during learning but not parsing enabled, or vice versa. Memory limitations during learning create the bulk of the preference for a default learner, though there appears to be an additive effect. In seven of the ten runs with memory limitations only in learning, a clear preference for default learners emerged. In five of the runs with memory limitations only in parsing there appeared to be a slight preference for defaults emerging. Default learners may have a fitness advantage when the number of interactions required to learn successfully is greater, because they will tend to converge faster, at least to a subset language. This will tend to increase their fitness over unset learners, who do not speak any language until further into the learning period.
The precise linguistic environment of adaptation determines the initial values of default parameters which evolve. For example, in the runs initialized with 16 unset learning "Malagasy" VOS adults and 16 default (SVO) learning VOS adults, the learning procedure which dominated the population was a variant VOS default learner in which the value for subjdir was reversed to reflect the position of the subject in this language. In some of these runs, the entire population evolved a default subjdir 'right' setting, though some LAgts always retained unset settings for the other two ordering parameters, gendir and argorder, as is illustrated in Figure 11. This suggests that if the human language faculty has evolved to be a right-branching SVO default learner, then the environment of linguistic adaptation must have contained a dominant language fully compatible with this (minimal) grammar.

4.3 Emergence of Language and Learners

To explore the emergence and persistence of structured language, and consequently the emergence of effective learners, (pseudo-)random initialization was used. A series of simulation runs of 500 cycles were performed with random initialization of 32 LAgts' p-settings for any combination of p-setting values, with a probability of 0.25 that a setting would be an absolute principle, and 0.75 a parameter, with unbiased allocation for default or unset parameters and for values of all settings. All LAgts were initialized to be age 1, with a critical period of 3 interaction cycles of 2000 random interactions for learning, a maximum age of 10, and the ability to reproduce by crossover (0.9 probability) and mutation (0.01 probability) from ages 4-10. In around 5% of the runs, language(s) emerged and persisted to the end of the run. Languages with close to optimal WML scores typically came to dominate the population quite rapidly. However, sometimes sub-optimal languages were initially selected, and occasionally these persisted despite the later appearance of a more optimal language, but with few speakers. Typically, a minimal subset language dominated: although full and intermediate languages did appear briefly, they did not survive against less expressive subset languages with a lower mean WML. Figure 12 is a typical plot of the emergence (and extinction) of languages in one of these runs. In this run, around 10 of the initial population converged on a minimal OVS language and 3 others on a VOS language. The latter is more optimal with respect to WML, and both are of equal expressivity, so, as expected, the VOS language acquired more speakers over the next few cycles. A few speakers also converged on VOS-N, a more expressive but higher-WML extension of VSO-N-GWP-COMP. However, neither this nor the OVS language survived beyond cycle 14. Instead a VSO language emerged at cycle 10, which has the same minimal expressivity as the VOS language but a lower WML (by virtue of placing the subject before the object), and this language dominated rapidly and eclipsed all others by cycle 40.

In all these runs, the population settled on subset languages of low expressivity, whilst the percentage of absolute principles and default parameters increased relative to that of unset parameters (mean % change from beginning to end of runs: +4.7, +1.5 and -6.2, respectively). So a second identical set of ten runs was undertaken, except that the initial population now contained two SOV-V2 "German" speaking unset learner LAgts.
In seven of these runs, the population fixed on a full SOV-V2 language, in two on the intermediate subset language SOV-V2-N, and in one on the minimal subset language SOV-V2-N-GWP-COMP. These runs suggest that if a full language defines the environment of adaptation, then a population of randomly initialized LAgts is more likely to converge on a (related) full language. Thus, although the simulation does not model the development of expressivity well, it does appear that it can model the emergence of effective learning procedures for (some) full languages. The pattern of language emergence and extinction followed that of the previous series of runs: lower mean WML languages were selected from those that emerged during the run. However, often the initial optimal SOV-V2 language itself was lost before enough LAgts evolved capable of learning this language. In these runs, changes in the percentages of absolute, default or unset p-settings in the population show a marked difference: the mean number of absolute principles declined by 6.1% and unset parameters by 17.8%, so the number of default parameters rose by 23.9% on average between the beginning and end of the 10 runs. This may reflect the more complex linguistic environment, in which (incorrect) absolute settings are more likely to handicap, rather than simply be irrelevant to, the performance of the LAgt.

Figure 11: Percentage of each default ordering parameter (gendir, argorder and subjdir) plotted against interaction cycles.

Figure 12: Emergence of language(s): numbers of speakers of the competing languages (e.g. GB-VOS-N, GB-VOS-N-GWP-COMP, GB-VSO-N-GWP-COMP) plotted against interaction cycles.

5 Conclusions

Partially ordering the updating of parameters can result in (experimentally) effective learners with a more complex parameter system than that studied previously. Experimental comparison of the default (SVO) learner and the unset learner suggests that the default learner is more efficient on typologically more common constituent orders. Evolutionary simulation predicts that a learner with default parameters is likely to emerge, though this is dependent both on the type of language spoken and on the presence of memory limitations during learning and parsing. Moreover, an SVO bioprogram learner is only likely to evolve if the environment contains a dominant SVO language.

The evolution of a bioprogram learner is a manifestation of the Baldwin Effect (Baldwin, 1896): genetic assimilation of aspects of the linguistic environment during the period of evolutionary adaptation of the language learning procedure. In the case of grammar learning, this is a co-evolutionary process in which languages (and their associated grammars) are also undergoing selection. The WML account of parsing complexity predicts that a right-branching SVO language would be a near optimal selection at a stage in grammatical development when complex rules of reordering such as extraposition, scrambling or mixed order strategies such as v1 and v2 had not evolved.
Briscoe (1997a) reports further experiments which demonstrate language selection in the model.

Though simulation can expose likely evolutionary pathways under varying conditions, these might have been blocked by accidental factors, such as genetic drift or bottlenecks, causing premature fixation of alleles in the genotype (roughly corresponding to certain p-setting values). The value of the simulation is, firstly, to show that a bioprogram learner could have emerged via adaptation, and secondly, to clarify experimentally the precise conditions required for its emergence. Since in many cases these conditions will include the presence of constraints (working memory limitations, expressivity, the learning algorithm, etc.) which will remain causally manifest, further testing of any conclusions drawn must concentrate on demonstrating the accuracy of the assumptions made about such constraints. Briscoe (1997b) evaluates the psychological plausibility of the account of parsing and working memory.

References

Baddeley, A. (1992) 'Working memory: the interface between memory and cognition', Journal of Cognitive Neuroscience, vol. 4.3, 281-288.

Baldwin, J.M. (1896) 'A new factor in evolution', American Naturalist, vol. 30, 441-451.

Bickerton, D. (1984) 'The language bioprogram hypothesis', The Behavioral and Brain Sciences, vol. 7.2, 173-222.

Bouma, G. and van Noord, G. (1994) 'Constraint-based categorial grammar', Proceedings of the 32nd Assoc. for Computational Linguistics, Las Cruces, NM, pp. 147-154.

Brent, M. (1996) 'Advances in the computational study of language acquisition', Cognition, vol. 61, 1-38.

Briscoe, E.J. (1997a, submitted) 'Language acquisition: the Bioprogram Hypothesis and the Baldwin Effect', Language.

Briscoe, E.J. (1997b, in prep.) Working Memory and its Influence on the Development of Human Languages and the Human Language Faculty, University of Cambridge, Computer Laboratory, ms.

Chomsky, N. (1981) Government and Binding, Foris, Dordrecht.

Clark, R. (1992) 'The selection of syntactic knowledge', Language Acquisition, vol. 2.2, 83-149.

Elman, J. (1993) 'Learning and development in neural networks: the importance of starting small', Cognition, vol. 48, 71-99.

Gibson, E. (1991) A Computational Theory of Human Linguistic Processing: Memory Limitations and Processing Breakdown, Doctoral dissertation, Carnegie Mellon University.

Gibson, E. and Wexler, K. (1994) 'Triggers', Linguistic Inquiry, vol. 25.3, 407-454.

Greenberg, J. (1966) 'Some universals of grammar with particular reference to the order of meaningful elements' in J. Greenberg (ed.), Universals of Grammar, MIT Press, Cambridge, Ma., pp. 73-113.

Hawkins, J.A. (1994) A Performance Theory of Order and Constituency, Cambridge University Press, Cambridge.

Hoffman, B. (1995) The Computational Analysis of the Syntax and Interpretation of 'Free' Word Order in Turkish, PhD dissertation, University of Pennsylvania.

Hoffman, B. (1996) 'The formal properties of synchronous CCGs', Proceedings of the ESSLLI Formal Grammar Conference, Prague.

Holland, J.H. (1993) Echoing Emergence: Objectives, Rough Definitions and Speculations for Echo-class Models, Santa Fe Institute, Technical Report 93-04-023.

Joshi, A., Vijay-Shanker, K. and Weir, D. (1991) 'The convergence of mildly context-sensitive grammar formalisms' in Sells, P., Shieber, S. and Wasow, T. (eds), Foundational Issues in Natural Language Processing, MIT Press, pp. 31-82.

Lascarides, A., Briscoe, E.J., Copestake, A.A. and Asher, N. (1995) 'Order-independent and persistent default unification', Linguistics and Philosophy, vol. 19.1, 1-89.
Lascarides, A. and Copestake, A.A. (1996, submitted) 'Order-independent typed default unification', Computational Linguistics.

Lightfoot, D. (1991) How to Set Parameters: Arguments from Language Change, MIT Press, Cambridge, Ma.

Niyogi, P. and Berwick, R.C. (1995) 'A markov language learning model for finite parameter spaces', Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, MIT, Cambridge, Ma.

Rambow, O. and Joshi, A. (1994) 'A processing model of free word order languages' in C. Clifton, L. Frazier and K. Rayner (eds), Perspectives on Sentence Processing, Lawrence Erlbaum, Hillsdale, NJ., pp. 267-301.

Steedman, M. (1988) 'Combinators and grammars' in R. Oehrle, E. Bach and D. Wheeler (eds), Categorial Grammars and Natural Language Structures, Reidel, Dordrecht, pp. 417-442.

Tomlin, R. (1986) Basic Word Order: Functional Principles, Routledge, London.

Wood, M.M. (1993) Categorial Grammars, Routledge, London.
Paradigmatic Cascades: a Linguistically Sound Model of Pronunciation by Analogy

François Yvon
ENST and CNRS, URA 820
Computer Science Department
46 rue Barrault - F 75013 Paris
yvon@inf.enst.fr

Abstract

We present and experimentally evaluate a new model of pronunciation by analogy: the paradigmatic cascades model. Given a pronunciation lexicon, this algorithm first extracts the most productive paradigmatic mappings in the graphemic domain, and pairs them statistically with their correlate(s) in the phonemic domain. These mappings are used to search and retrieve in the lexical database the most promising analog of unseen words. We finally apply to the analog's pronunciation the correlated series of mappings in the phonemic domain to get the desired pronunciation.

1 Motivation

Psychological models of reading aloud traditionally assume the existence of two separate routes for converting print to sound: a direct lexical route, which is used to read familiar words, and a dual route relying upon abstract letter-to-sound rules to pronounce previously unseen words (Coltheart, 1978; Coltheart et al., 1993). This view has been challenged by a number of authors (e.g. (Glushko, 1981)), who claim that the pronunciation process of every word, familiar or unknown, could be accounted for in a unified framework. These single-route models crucially suggest that the pronunciation of unknown words results from the parallel activation of similar lexical items (the lexical neighbours). This idea has been tentatively implemented both in various symbolic analogy-based algorithms (e.g. (Dedina and Nusbaum, 1991; Sullivan and Damper, 1992)) and in connectionist pronunciation devices (e.g. (Seidenberg and McClelland, 1989)).

The basic idea of these analogy-based models is to pronounce an unknown word x by recombining pronunciations of lexical items sharing common subparts with x. To illustrate this strategy, Dedina and Nusbaum show how the pronunciation of the sequence lop in the pseudo-word blope is analogized with the pronunciation of the same sequence in sloping. As there exists more than one way to recombine segments of lexical items, Dedina and Nusbaum's algorithm favors recombinations including large substrings of existing words. In this model, the similarity between two words is thus implicitly defined as a function of the length of their common subparts: the longer the common part, the better the analogy.

This conception of analogical processes has an important consequence: it offers, as Damper and Eastmond ((Damper and Eastmond, 1996)) state it, "no principled way of deciding the orthographic neighbours of a novel word which are deemed to influence its pronunciation (...)". For example, in the model proposed by Dedina and Nusbaum, any word having a common orthographic substring with the unknown word is likely to contribute to its pronunciation, which increases the number of lexical neighbours far beyond acceptable limits (in the case of blope, this neighbourhood would contain every English word starting in bl, or ending in ope, etc).

From a computational standpoint, implementing the recombination strategy requires a one-to-one alignment between the lexical graphemic and phonemic representations, where each grapheme is matched with the corresponding phoneme (a null symbol is used to account for the cases where the lengths of these representations differ).
This alignment makes it possible to retrieve, for any graphemic substring of a given lexical item, the corresponding phonemic string, at the cost, however, of an unmotivated complexification of lexical representations.

In comparison, the paradigmatic cascades model (PCP for short) promotes an alternative view of analogical processes, which relies upon a linguistically motivated similarity measure between words. The basic idea of our model is to take advantage of the internal structure of "natural" lexicons. In fact, a lexicon is a very complex object, whose elements are intimately tied together by a number of fine-grained relationships (typically induced by morphological processes), and whose content is severely restricted, on a language-dependent basis, by a complex of graphotactic, phonotactic and morphotactic constraints. Following e.g. (Pirrelli and Federici, 1994), we assume that these constraints surface simultaneously in the orthographical and in the phonological domain in the recurring pattern of paradigmatically alternating pairs of lexical items. Extending the idea originally proposed in (Federici, Pirrelli, and Yvon, 1995), we show that it is possible to extract these alternation patterns, to associate alternations in one domain with the related alternation in the other domain, and to construct, using this pairing, a fairly reliable pronunciation procedure.

2 The Paradigmatic Cascades Model

In this section, we introduce the paradigmatic cascades model. We first formalize the concept of a paradigmatic relationship. We then go through the details of the learning procedure, which essentially consists in an extensive search for such relationships. We finally explain how these patterns are used in the pronunciation procedure.

2.1 Paradigmatic Relationships and Alternations

The paradigmatic cascades model crucially relies upon the existence of numerous paradigmatic relationships in lexical databases. A paradigmatic relationship involves four lexical entries a, b, c, d, and expresses that these forms are involved in an analogical (in the Saussurian (de Saussure, 1916) sense) proportion: a is to b as c is to d (further along abbreviated as a : b = c : d; see also (Lepage and Shin-Ichi, 1996) for another utilization of this kind of proportion). Morphologically related pairs provide us with numerous examples of orthographical proportions, as in:

reactor : reaction = factor : faction   (1)

Considering these proportions in terms of orthographical alternations, that is in terms of partial functions in the graphemic domain, we can see that each proportion involves two alternations. The first one transforms reactor into reaction (and factor into faction), and consists in exchanging the suffixes or and ion. The second one transforms reactor into factor (and reaction into faction), and consists in exchanging the prefixes re and f. These alternations are represented in Figure 1.

[Figure 1: An analogical proportion. The square reactor-reaction / factor-faction, with one alternation linking reactor to reaction (and factor to faction), and the other linking reactor to factor (and reaction to faction).]

Formally, we define the notion of a paradigmatic relationship as follows. Given Σ, a finite alphabet, and £, a finite subset of Σ*, we say that (a, b) ∈ £ × £ is paradigmatically related to (c, d) ∈ £ × £ iff there exist two partial functions f and g from Σ* to Σ*, where f exchanges prefixes and g exchanges suffixes, and:

f(a) = c and f(b) = d   (2)
g(a) = b and g(c) = d   (3)

f and g are termed the paradigmatic alternations associated with the relationship a : b =f,g c : d. The domain of an alternation f will be denoted by dom(f).
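To make these definitions concrete, the following is a minimal, runnable sketch in Python. It is our own illustration, not the authors' code, and all function names are ours; it checks a candidate proportion by verifying that one and the same suffix exchange maps a to b and c to d, and one and the same prefix exchange maps a to c and b to d.

```python
def common_prefix(x, y):
    """Longest common prefix of two strings."""
    i = 0
    while i < min(len(x), len(y)) and x[i] == y[i]:
        i += 1
    return x[:i]

def suffix_alternation(x, y):
    """The suffix exchange turning x into y: with x = u+v and y = u+t
    (u the longest common prefix), return the exchanged pair (v, t)."""
    u = common_prefix(x, y)
    return (x[len(u):], y[len(u):])

def is_proportion(a, b, c, d):
    """Check the analogical proportion a : b = c : d."""
    # the same suffix alternation g must map a -> b and c -> d
    if suffix_alternation(a, b) != suffix_alternation(c, d):
        return False
    # the same prefix alternation f must map a -> c and b -> d;
    # we reuse the suffix machinery on reversed strings
    rev = lambda s: s[::-1]
    return suffix_alternation(rev(a), rev(c)) == suffix_alternation(rev(b), rev(d))

assert is_proportion("reactor", "reaction", "factor", "faction")
```

This simplified check does not enforce that the shared prefix or suffix u is non-empty, as the paper's definitions do; adding that guard is straightforward.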
2.2 The Learning Procedure

The main purpose of the learning procedure is to extract from a pronunciation lexicon, presumably structured by multiple paradigmatic relationships, the most productive paradigmatic alternations. Let us start with some notations: given G a graphemic alphabet and P a phonetic alphabet, a pronunciation lexicon £ is a subset of G* × P*. The restriction of £ to G* (respectively P*) will be noted £G (resp. £P). Given two strings x and y, pref(x, y) (resp. suff(x, y)) denotes their longest common prefix (resp. suffix). For two strings x and y having a non-empty common prefix (resp. suffix) u, f(x,y) (resp. g(x,y)) denotes the function which transforms x into y: with x = uv and y = ut, f(x,y) substitutes a final v with a final t. ε denotes the empty string.

Given £, the learning procedure searches £G for every 4-tuple (a, b, c, d) of graphemic strings such that a : b =f,g c : d. Each match increments the productivity of the related alternations f and g. This search is performed using a slightly modified version of the algorithm presented in (Federici, Pirrelli, and Yvon, 1995), which applies to every word x in £G the procedure detailed in Table 1. In fact, the properties of paradigmatic relationships, notably their symmetry, allow us to reduce the cost of this procedure dramatically, since not all 4-tuples of strings in £G need to be examined during that stage.

GETALTERNATIONS(x)
1  D(x) ← {y ∈ £G | pref(x, y) ≠ ε}
2  for y ∈ D(x)
3  do
4    P(x, y) ← {(z, t) ∈ £G × £G | z = f(x,y)(t)}
5    if P(x, y) ≠ ∅
6    then
7      IncrementCount(f(x,y))
8      IncrementCount(g(x,y))

Table 1: The Learning Procedure

For each graphemic alternation, we also record their correlated alternation(s) in the phonological domain, and accordingly increment their productivity. For instance, assuming that factor and reactor respectively receive the pronunciations /fæktor/ and /ri:æktor/, the discovery of the relationship expressed in (1) will lead our algorithm to record that the graphemic alternation f → re correlates in the phonemic domain with the alternation /f/ → /ri:/. Note that the discovery of phonemic correlates does not require any sort of alignment between the orthographic and the phonemic representations: the procedure simply records the changes in the phonemic domain when the alternation applies in the graphemic domain.

At the end of the learning stage, we have in hand a set A = {Ai} of functions exchanging suffixes or prefixes in the graphemic domain, and for each Ai in A:

(i) a statistical measure pi of its productivity, defined as the likelihood that the transform of a lexical item is another lexical item:

pi = |{x ∈ dom(Ai) and Ai(x) ∈ £}| / |dom(Ai)|   (4)

(ii) a set {Bi,j}, j ∈ {1...ni}, of correlated functions in the phonemic domain, and a statistical measure pi,j of their conditional productivity, i.e. of the likelihood that the phonetic alternation Bi,j correlates with Ai.
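A minimal runnable version of the extraction loop, restricted for brevity to suffix alternations (the prefix case is symmetric), might look as follows. The toy lexicon and the one-match-per-pair simplification are our own assumptions, not the authors' implementation.

```python
from collections import Counter
from itertools import combinations

def split_on_prefix(x, y):
    """Return (u, v, t) with x = u+v, y = u+t, u the longest common prefix."""
    i = 0
    while i < min(len(x), len(y)) and x[i] == y[i]:
        i += 1
    return x[:i], x[i:], y[i:]

def get_alternations(lexicon_g):
    """Count suffix alternations (v -> t) attested by paradigmatic proportions."""
    counts = Counter()
    for x, y in combinations(sorted(lexicon_g), 2):
        u, v, t = split_on_prefix(x, y)
        if not u:
            continue                    # D(x): only pairs sharing a prefix
        # P(x, y): look for another pair (z, w) instantiating the same exchange
        for z in lexicon_g:
            if z != x and z.endswith(v):
                w = z[: len(z) - len(v)] + t
                if w in lexicon_g:
                    counts[(v, t)] += 1  # productivity count for the alternation
                    break
    return counts

lexicon = {"reactor", "reaction", "factor", "faction", "fact"}
print(get_alternations(lexicon))  # Counter({('ion', 'or'): 2})
```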
Table 2 gives the list of the phonological correlates of the alternation which consists in adding the suffix ly, corresponding to a productive rule for deriving adverbs from adjectives in English. If the first lines of Table 2 are indeed "true" phonemic correlates of the derivation, corresponding to various classes of adjectives, a careful examination of the last lines reveals that the extraction procedure is easily fooled by accidental pairs like imp-imply, on-only or ear-early.

[Table 2: Phonemic correlates of x → x-ly. Twelve alternations of the form x → x-/...li:/, each illustrated by an example: good, marked, equal, capable, cool, clean, id, live, loath, imp, ear, on.]

A simple pruning rule was used to get rid of these alternations on the basis of their productivity, and only alternations which were observed at least twice were retained.

It is important to realize that A allows us to specify lexical neighbourhoods in £G: given a lexical entry x, its nearest neighbour is simply f(x), where f is the most productive alternation applying to x. Lexical neighbourhoods in the paradigmatic cascades model are thus defined with respect to the locally most productive alternations. As a consequence, the definition of neighbourhoods implicitly incorporates a great deal of linguistic knowledge extracted from the lexicon, especially regarding morphological processes and phonotactic constraints, which makes it much more relevant for grounding the notion of analogy between lexical items than, say, any neighbourhood based on the string edit metric.

2.3 The Pronunciation of Unknown Words

Suppose now that we wish to infer the pronunciation of a word x which does not appear in the lexicon. This goal is achieved by exploring the neighbourhood of x defined by A, in order to find one or several analogous lexical entry(ies) y. The second stage of the pronunciation procedure is to adapt the known pronunciation of y and derive a suitable pronunciation for x: the idea here is to mirror in the phonemic domain the series of alternations which transform x into y in the graphemic domain, using the statistical pairing between alternations that is extracted during the learning stage. The complete pronunciation procedure is represented in Figure 2.

[Figure 2: The pronunciation of an unknown word. Paired derivation paths in the graphemic domain and in the phonemic domain.]

Let us examine carefully how these two aspects of the pronunciation procedure are implemented. The first stage is to find a lexical entry in the neighbourhood of x. The basic idea is to generate A(x), defined as {Ai(x), for Ai ∈ A, x ∈ dom(Ai)}, which contains all the words that can be derived from x using a function in A. This set, better viewed as a stack, is ordered according to the productivity of the Ai: the topmost element in the stack is the nearest neighbour of x, etc. The first lexical item found in A(x) is the analog of x. If A(x) does not contain any known word, we iterate the procedure, using x', the top-ranked element of A(x), instead of x. This expands the set of possible analogs, which is accordingly reordered, etc. This basic search strategy, which amounts to the exploration of a derivation tree, is extremely resource-consuming (every expansion stage typically adds about a hundred new virtual analogs), and is, in theory, not guaranteed to terminate. In fact, the search problem is equivalent to the problem of parsing with an unrestricted Phrase Structure Grammar, which is known to be undecidable.
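The stack-based exploration can be sketched as follows. This is our own simplified rendering under stated assumptions: only suffix alternations, scored by the product of productivities along the derivation path (anticipating Equation 5 below), with a hypothetical max_pops bound standing in for the real stopping criteria.

```python
import heapq

def find_analog(x, lexicon_g, alternations, max_pops=10_000):
    """Search the derivation tree rooted at x for the closest lexical analog.
    `alternations` maps suffix pairs (v, t) to productivities p in (0, 1);
    the stack is a heap keyed on the (negated) product of productivities,
    so the most promising virtual analog is always examined first."""
    stack = [(-1.0, x, ())]          # (negated score, word, alternation path)
    while stack and max_pops:
        neg, word, path = heapq.heappop(stack)
        max_pops -= 1
        if word != x and word in lexicon_g:
            return word, path        # matching stage: first known word wins
        for (v, t), p in alternations.items():   # expansion stage
            if v and word.endswith(v):
                derived = word[: len(word) - len(v)] + t
                heapq.heappush(stack, (neg * p, derived, path + ((v, t),)))
    return None, ()                  # the search may fail: the model stays silent
```

The real system additionally prunes derivation cycles and commuting alternations, as discussed below.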
We have evaluated two different search strategies, which implement various ways to alternate between expansion stages (the stack is expanded by generating the derivatives of the topmost element) and matching stages (elements in the stack are looked for in the lexicon).

The first strategy implements a depth-first search of the analog set: each time the topmost element of the stack is searched for, but not found, in the lexicon, its derivatives are immediately generated and added to the stack. In this approach, the position of an analog in the stack is assessed as a function of the "distance" between the original word x and the analog y = A(ik)(A(ik-1)(...A(i1)(x))), according to:

d(x, y) = p(i1) × p(i2) × ... × p(ik)   (5)

The search procedure is stopped as soon as an analog is found in £G, or else when the distance between x and the topmost element of the stack, which decreases monotonically (∀i, pi < 1), falls below a pre-defined threshold.

The second strategy implements a kind of compromise between depth-first and breadth-first exploration of the derivation tree, and is best understood if we first look at a concrete example. Most alternations substituting one initial consonant are very productive, in English as in many other languages. Therefore, a word starting with, say, a p is very likely to have a very close derivative where the initial p has been replaced by, say, an r. Now suppose that this word starts with pl: the alternation will derive an analog starting with rl, and will assess it with a very high score. This analog will, in turn, derive many more virtual analogs starting with rl, once its suffixes have been substituted during another expansion phase. This should be avoided, since there are in fact very few words starting with the prefix rl: we would therefore like these words to be very poorly ranked. The second search strategy has been devised precisely to cope with this problem. The idea is to rank the stack of analogs according to the expectation of the number of lexical derivatives a given analog may have. This expectation is computed by summing up the productivities of all the alternations that can be applied to an analog y, according to:

Σ (over i such that y ∈ dom(Ai)) pi   (6)

This ranking will necessarily assess any analog starting in rl with a low score, as very few alternations will substitute its prefix. However, the computation of (6) is much more complex than that of (5), since it requires examining a given derivative before it can be positioned in the stack. This led us to bring forward the lexical matching stage: during the expansion of the topmost stack element, all its derivatives are looked for in the lexicon. If several derivatives are simultaneously found, the search procedure halts and returns more than one analog.

The expectation (6) does not decrease as more derivatives are added to the stack; consequently, it cannot be used to define a stopping criterion. The search procedure is therefore stopped when all derivatives up to a given depth (2 in our experiments) have been generated and unsuccessfully looked for in the lexicon. This termination criterion is very restrictive in comparison to the one implemented in the depth-first strategy, since it makes it impossible to pronounce very long derivatives, for which a significant number of alternations need to be applied before an analog is found. An example is the word synergistically, for which the "breadth-first" search terminates unsuccessfully, whereas the depth-first search manages to retrieve the "analog" energy.
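The two ranking criteria translate directly into code. The following sketch (ours, reusing the suffix-only alternation representation from the earlier snippets) computes the distance of Equation 5 for a derivation path and the expectation of Equation 6 for a candidate analog.

```python
from math import prod

def distance(path, alternations):
    """Equation (5): d(x, y), the product of the productivities of the
    alternations applied along the derivation path x -> y."""
    return prod(alternations[step] for step in path)

def expectation(y, alternations):
    """Equation (6): expected number of lexical derivatives of analog y,
    i.e. the sum of productivities of all alternations applicable to y."""
    return sum(p for (v, t), p in alternations.items() if v and y.endswith(v))
```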
Nonetheless, the results reported hereafter have been obtained using this "breadth-first" strategy, mainly because this search was associated with a more efficient procedure for reconstructing pronunciations (see below).

Various pruning procedures have also been implemented in order to control the exponential growth of the stack. For example, one pruning procedure detects the most obvious derivation cycles, which generate the same derivatives in loops; another pruning procedure tries to detect commuting alternations: substituting the prefix p and then the suffix s often produces the same analog as when the alternations apply in the reverse order, etc. More details regarding implementational aspects are given in (Yvon, 1996b).

If the search procedure returns an analog y = A(ik)(A(ik-1)(...A(i1)(x))) in £, we can build a pronunciation for x using the known pronunciation φ(y) of y. For this purpose, we use our knowledge of the B(i,j), for i ∈ {i1...ik}, and generate every possible transform of φ(y) in the phonological domain:

{B(ik,jk)⁻¹(B(ik-1,jk-1)⁻¹(... (φ(y))))}, with jk in {1...n(ik)},

and order this set using some function of the p(i,j). The top-ranked element in this set is the pronunciation of x. Of course, when the search fails, this procedure fails to propose any pronunciation.

In fact, the results reported hereafter use a slightly extended version of this procedure, where the pronunciations of more than one analog are used for generating and selecting the pronunciation of the unknown word. The reason for using multiple analogs is twofold: first, it obviates the risk of being wrongly influenced by one very exceptional analog; second, it enables us to model conspiracy effects more accurately. Psychological models of reading aloud indeed assume that the pronunciation of an unknown word is not influenced by just one analog, but rather by its entire lexical neighbourhood.
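A sketch of the reconstruction step follows. It is our own illustration under simplifying assumptions: suffix-only alternations, a find_analog variant returning the derivation path (as in the earlier snippet), and a hypothetical `correlates` table mapping each graphemic alternation to its correlated phonemic alternations with their conditional productivities.

```python
def pronounce(x, lexicon, alternations, correlates, find_analog):
    """Mirror, in the phonemic domain, the graphemic alternations that
    turned x into its analog y. `lexicon` maps spellings to pronunciation
    strings; `correlates[(v, t)]` is a list of (pv, pt, p_ij) triples."""
    y, path = find_analog(x, set(lexicon), alternations)
    if y is None:
        return None                        # stay silent rather than guess
    candidates = [(lexicon[y], 1.0)]
    for step in reversed(path):            # undo each graphemic step in turn
        new = []
        for phon, score in candidates:
            for pv, pt, p_ij in correlates.get(step, ()):
                if phon.endswith(pt):      # invert the correlated alternation
                    new.append((phon[: len(phon) - len(pt)] + pv, score * p_ij))
        candidates = new
    return max(candidates, key=lambda c: c[1])[0] if candidates else None
```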
3 Experimental Results

3.1 Experimental Design

We have evaluated this algorithm on two different pronunciation tasks. The first experiment consists in inferring the pronunciation of the 70 pseudo-words originally used in Glushko's experiments, which have been used as a test-bed for various other pronunciation algorithms, and allow for a fair head-to-head comparison between the paradigmatic cascades model and other analogy-based procedures. For this experiment, we have used the entire NETtalk (Sejnowski and Rosenberg, 1987) database (about 20,000 words) as the learning set.

The second series of experiments is intended to provide a more realistic evaluation of our model in the task of pronouncing unknown words. We have used the following experimental design: 10 pairs of disjoint (learning set, test set) are randomly selected from the NETtalk database and evaluated. In each experiment, the test set contains about a tenth of the available data. A transcription is judged to be correct when it matches exactly the pronunciation listed in the database at the segmental level. The number of correct phonemes in a transcription is computed on the basis of the string-to-string edit distance with the target pronunciation. For each experiment, we measure the percentage of phonemes and words that are correctly predicted (referred to as correctness), and two additional figures which are usually not significant in the context of the evaluation of transcription systems. Recall that our algorithm, unlike many other pronunciation algorithms, is likely to remain silent. In order to take this aspect into account, we measure in each experiment the number of words that cannot be pronounced at all (the silence), and the percentage of phonemes and words that are correctly transcribed amongst those words that have been pronounced at all (the precision). The average values for these measures are reported hereafter.

3.2 Pseudo-words

All but one of the pseudo-words of Glushko's test set could be pronounced by the paradigmatic cascades algorithm, and amongst the 69 pronunciations suggested by our program, only 9 were incorrect (that is, were not proposed by human subjects in Glushko's experiments), yielding an overall correctness of 85.7% and a precision of 87.3%.

An important property of our algorithm is that it allows us to identify precisely, for each pseudo-word, the lexical entries that have been analogized, i.e. whose pronunciation was used in the inferential process. Looking at these analogs, it appears that three of our errors are grounded on very sensible analogies, and provide us with pronunciations that seem at least plausible, even if they were not suggested in Glushko's experiments. These were pild and bild, analogized with wild, and pomb, analogized with tomb.

These results compare favorably with the performances reported for other pronunciation by analogy algorithms ((Damper and Eastmond, 1996) report very similar correctness figures), especially if one remembers that our results have been obtained without resorting to any kind of pre-alignment between the graphemic and phonemic strings in the lexicons.

3.3 Lexical Entries

This second series of experiments is intended to provide us with more realistic evaluations of the paradigmatic cascades model: Glushko's pseudo-words have been built by substituting the initial consonant of existing monosyllabic words, and constitute therefore an over-simplistic test-bed. The NETtalk dataset contains plurisyllabic words, complex derivatives, loan words, etc., and allows us to test the ability of our model to learn complex morpho-phonological phenomena, notably vocalic alternations and other kinds of phonologically conditioned root allomorphy, which are very difficult to learn.

With this new test set, the overall performance of our algorithm averages at about 54.5% of entirely correct words, corresponding to 76% per-phoneme correctness. If we keep the words that could not be pronounced at all (about 15% of the test set) apart from the evaluation, the per-word and per-phoneme precision improve considerably, reaching respectively 65% and 93%. Again, these precision results compare relatively well with the results achieved on the same corpus using other self-learning algorithms for grapheme-to-phoneme transcription (e.g. (van den Bosch and Daelemans, 1993; Yvon, 1996a)), which, unlike ours, benefit from the knowledge of the alignment between graphemic and phonemic strings. Table 3 summarizes the performance (in terms of per-word correctness, silence, and precision) of various other pronunciation systems, namely PRONOUNCE (Dedina and Nusbaum, 1991), DEC (Torkolla, 1993), and SMPA (Yvon, 1996a). All these models have been tested using exactly the same evaluation procedure and data (see (Yvon, 1996b), which also contains an evaluation performed with a French database suggesting that this learning strategy effectively applies to other languages).
System      corr.   prec.   silence
DEC         56.67   56.67   0
SMPA        63.96   64.24   0.42
PRONOUNCE   56.56   56.75   0.32
PCP         54.49   63.95   14.80

Table 3: A Comparative Evaluation

Table 3 pinpoints the main weakness of our model, that is, its significant silence rate. The careful examination of the words that cannot be pronounced reveals that they are either loan words, which are very isolated in an English lexicon and for which no analog can be found, or complex morphological derivatives for which the search procedure is stopped before the existing analog(s) can be reached. Typical examples are synergistically, timpani, hangdog, oasis, pemmican, to list just a few. This suggests that the words which were not pronounced are not randomly distributed. Instead, they mostly belong to a linguistically homogeneous group, the group of foreign words, which, for lack of better evidence, should better be left silent, or processed by another pronunciation procedure (for example a rule-based system (Coker, Church, and Liberman, 1990)), than incorrectly analogized.

Some complementary results finally need to be mentioned here, in relation to the size of lexical neighbourhoods. In fact, one of our main goals was to define in a sensible way the concept of a lexical neighbourhood: it is therefore important to check that our model manages to keep this neighbourhood relatively small. Indeed, while this neighbourhood can be quite large (typically 50 analogs) for short words, the number of analogs used in a pronunciation averages at about 9.5, which proves that our definition of a lexical neighbourhood is sufficiently restrictive.

4 Discussion and Perspectives

4.1 Related works

A large number of procedures aiming at the automatic discovery of pronunciation "rules" have been proposed over the past few years: connectionist models (e.g. (Sejnowski and Rosenberg, 1987)), traditional symbolic machine learning techniques (induction of decision trees, k-nearest neighbours), e.g. (Torkolla, 1993; van den Bosch and Daelemans, 1993), as well as various recombination techniques (Yvon, 1996a). In these models, orthographical correspondences are primarily viewed as resulting from a strict underlying phonographical system, where each grapheme encodes exactly one phoneme. This assumption is reflected by the possibility of aligning graphemic and phonemic strings on a one-to-one basis, and these models indeed use this kind of alignment to initiate learning. Under this view, the orthographical representation of individual words is strongly subject to their phonological forms on a word-per-word basis. The main task of a machine-learning algorithm is thus mainly to retrieve, on a statistical basis, these grapheme-phoneme correspondences, which are, in languages like French or English, accidentally obscured by a multitude of exceptional and idiosyncratic correspondences. There exists undoubtedly strong historical evidence supporting the view that the orthographical systems of most European languages developed from such a phonographical system, and languages like Spanish or Italian still offer examples of that kind of very regular organization.

Our model, which extends the proposals of (Coker, Church, and Liberman, 1990) and, more recently, of (Federici, Pirrelli, and Yvon, 1995), entertains a different view of orthographical systems.
Even if we acknowledge the mostly phonographical organization of, say, French orthography, we believe that the multiple deviations from a strict grapheme-phoneme correspondence are best captured in a model which somehow weakens the assumption of a strong dependency between orthographical and phonological representations. In our model, each domain has its own organization, which is represented in the form of systematic (paradigmatic) sets of oppositions and alternations. In both domains, however, this organization is subject to the same paradigmatic principle, which makes it possible to represent the relationships between orthographical and phonological representations in the form of a statistical pairing between alternations. Using this model, it becomes possible to predict correctly the outcome in the phonological domain of a given derivation in the orthographic domain, including patterns of vocalic alternations, which are notoriously difficult to model using a "rule-based" approach.

4.2 Achievements

The paradigmatic cascades model offers an original and new framework for extracting information from large corpora. In the particular context of grapheme-to-phoneme transcription, it provides us with a more satisfying model of pronunciation by analogy, which:

• gives a principled way to automatically learn local similarities that implicitly incorporate a substantial knowledge of the morphological processes and of the phonotactic constraints, both in the graphemic and the phonemic domain. This has allowed us to precisely define and identify the content of lexical neighbourhoods;

• achieves a very high precision without resorting to pre-aligned data, and detects automatically those words that are potentially the most difficult to pronounce (especially foreign words). Interestingly, the ability of our model to process data which are not aligned makes it directly applicable to the reverse problem, i.e. phoneme-to-grapheme conversion;

• is computationally tractable, even if extremely resource-consuming in the current version of our algorithm. The main trouble here comes from isolated words: for these words, the search procedure wastes a lot of time examining a very large number of very unlikely analogs before realizing that there is no acceptable lexical neighbour. This aspect definitely needs to be improved. We intend to explore several directions to improve this search: one possibility is to use a graphotactical model (e.g. an n-gram model) in order to make the pruning of the derivation tree more effective. We expect such a model to bias the search in favor of short words, which are more represented than very long derivatives. Another possibility is to tag, during the learning stage, alternations with one or several morpho-syntactic labels expressing morphotactical restrictions: this would restrict the domain of an alternation to a certain class of words, and accordingly reduce the expansion of the analog set.

4.3 Perspectives

The paradigmatic cascades model achieves quite satisfactory generalization performance when evaluated in the task of pronouncing unknown words. Moreover, this model provides us with an effective way to define the lexical neighbourhood of a given word, on the basis of "surface" (orthographical) local similarities. It remains, however, to be seen how this model can be extended to take into account other factors which have been proven to influence analogical processes.
For instance, frequency effects, which tend to favor the more frequent lexical neighbours, need to be properly modeled if we wish to give a more realistic account of the human performance in the pronunciation task.

In a more general perspective, the notion of similarity between linguistic objects plays a central role in many corpus-based natural language processing applications. This is especially obvious in the context of example-based learning techniques, where the inference of some unknown linguistic property of a new object is performed on the basis of the most similar available example(s). The use of some kind of similarity measure has also demonstrated its effectiveness in circumventing the problem of data sparseness in the context of statistical language modeling. In this context, we believe that our model, which is precisely capable of detecting local similarities in lexicons, and of performing, on the basis of these similarities, a global inferential transfer of knowledge, is especially well suited for a large range of NLP tasks. Encouraging results on the task of learning the English past-tense forms have already been reported in (Yvon, 1996b), and we intend to continue to test this model on various other potentially relevant applications, such as morpho-syntactical "guessing", part-of-speech tagging, etc.

References

Coker, Cecil H., Kenneth W. Church, and Mark Y. Liberman. 1990. Morphology and rhyming: two powerful alternatives to letter-to-sound rules. In Proceedings of the ESCA Conference on Speech Synthesis, Autrans, France.

Coltheart, Max. 1978. Lexical access in simple reading tasks. In G. Underwood, editor, Strategies of Information Processing. Academic Press, New York, pages 151-216.

Coltheart, Max, Brent Curtis, Paul Atkins, and Michael Haller. 1993. Models of reading aloud: dual route and parallel distributed processing approaches. Psychological Review, 100:589-608.

Damper, Robert I. and John F. G. Eastmond. 1996. Pronouncing text by analogy. In Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING'96), pages 268-273, Copenhagen, Denmark.

de Saussure, Ferdinand. 1916. Cours de Linguistique Générale. Payot, Paris.

Dedina, Michael J. and Howard C. Nusbaum. 1991. PRONOUNCE: a program for pronunciation by analogy. Computer Speech and Language, 5:55-64.

Federici, Stefano, Vito Pirrelli, and François Yvon. 1995. Advances in analogy-based learning: false friends and exceptional items in pronunciation by paradigm-driven analogy. In Proceedings of the IJCAI'95 workshop on 'New Approaches to Learning for Natural Language Processing', pages 158-163, Montreal.

Glushko, R. J. 1981. Principles for pronouncing print: the psychology of phonography. In A. M. Lesgold and C. A. Perfetti, editors, Interactive Processes in Reading, pages 61-84, Hillsdale, New Jersey. Erlbaum.

Lepage, Yves and Ando Shin-Ichi. 1996. Saussurian analogy: a theoretical account and its application. In Proceedings of the Sixteenth International Conference on Computational Linguistics (COLING'96), pages 717-722, Copenhagen, Denmark.

Pirrelli, Vito and Stefano Federici. 1994. "Derivational" paradigms in morphonology. In Proceedings of the Fifteenth International Conference on Computational Linguistics (COLING'94), Kyoto, Japan.

Seidenberg, M. S. and James L. McClelland. 1989. A distributed, developmental model of word recognition and naming. Psychological Review, 96:523-568.

Sejnowski, Terrence J. and Charles R. Rosenberg. 1987.
Parallel networks that learn to pronounce English text. Complex Systems, 1:145-168.

Sullivan, K. P. H. and Robert I. Damper. 1992. Novel-word pronunciation within a text-to-speech system. In Gérard Bailly and Christian Benoît, editors, Talking Machines, pages 183-195. North Holland.

Torkolla, Kari. 1993. An efficient way to learn English grapheme-to-phoneme rules automatically. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), volume 2, pages 199-202, Minneapolis, April.

van den Bosch, Antal and Walter Daelemans. 1993. Data-oriented methods for grapheme-to-phoneme conversion. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL), pages 45-53, Utrecht.

Yvon, François. 1996a. Grapheme-to-phoneme conversion using multiple unbounded overlapping chunks. In Proceedings of the conference on New Methods in Natural Language Processing (NeMLaP II), pages 218-228, Ankara, Turkey.

Yvon, François. 1996b. Prononcer par analogie : motivations, formalisations et évaluations. Ph.D. thesis, École Nationale Supérieure des Télécommunications, Paris.
Memory-Based Learning: Using Similarity for Smoothing
Jakub Zavrel and Walter Daelemans
Computational Linguistics
Tilburg University
PO Box 90153, 5000 LE Tilburg, The Netherlands
{zavrel,walter}@kub.nl

Abstract

This paper analyses the relation between the use of similarity in Memory-Based Learning and the notion of backed-off smoothing in statistical language modeling. We show that the two approaches are closely related, and we argue that feature weighting methods in the Memory-Based paradigm can offer the advantage of automatically specifying a suitable domain-specific hierarchy between most specific and most general conditioning information without the need for a large number of parameters. We report two applications of this approach: PP-attachment and POS-tagging. Our method achieves state-of-the-art performance in both domains, and allows the easy integration of diverse information sources, such as rich lexical representations.

1 Introduction

Statistical approaches to disambiguation offer the advantage of making the most likely decision on the basis of available evidence. For this purpose a large number of probabilities has to be estimated from a training corpus. However, many possible conditioning events are not present in the training data, yielding zero Maximum Likelihood (ML) estimates. This motivates the need for smoothing methods, which re-estimate the probabilities of low-count events from more reliable estimates.

Inductive generalization from observed to new data lies at the heart of machine-learning approaches to disambiguation. In Memory-Based Learning¹ (MBL) induction is based on the use of similarity (Stanfill & Waltz, 1986; Aha et al., 1991; Cardie, 1994; Daelemans, 1995). In this paper we describe how the use of similarity between patterns embodies a solution to the sparse data problem, how it relates to backed-off smoothing methods and what advantages it offers when combining diverse and rich information sources. We illustrate the analysis by applying MBL to two tasks where the combination of information sources promises to bring improved performance: PP-attachment disambiguation and Part of Speech tagging.

¹The approach is also referred to as Case-based, Instance-based or Exemplar-based.

2 Memory-Based Language Processing

The basic idea in Memory-Based language processing is that processing and learning are fundamentally interwoven. Each language experience leaves a memory trace which can be used to guide later processing. When a new instance of a task is processed, a set of relevant instances is selected from memory, and the output is produced by analogy to that set.

The techniques that are used are variants and extensions of the classic k-nearest neighbor (k-NN) classifier algorithm. The instances of a task are stored in a table as patterns of feature-value pairs, together with the associated "correct" output. When a new pattern is processed, the k nearest neighbors of the pattern are retrieved from memory using some similarity metric. The output is then determined by extrapolation from the k nearest neighbors, i.e. the output is chosen that has the highest relative frequency among the nearest neighbors.

Note that no abstractions, such as grammatical rules, stochastic automata, or decision trees are extracted from the examples. Rule-like behavior results from the linguistic regularities that are present in the patterns of usage in memory, in combination with the use of an appropriate similarity metric. It is our experience that even limited forms of abstraction can harm performance on linguistic tasks, which often contain many subregularities and exceptions (Daelemans, 1996).
It is our experience that even limited forms of ab- straction can harm performance on linguistic tasks, which often contain many subregularities and excep- tions (Daelemans, 1996). 2.1 Similarity metrics The most basic metric for patterns with symbolic features is the Overlap metric given in equations 1 436 and 2; where A(X, Y) is the distance between pat- terns X and Y, represented by n features, wi is a weight for feature i, and 5 is the distance per fea- ture. The k-NN algorithm with this metric, and equal weighting for all features is called IB1 (Aha et al., 1991). Usually k is set to 1. where: A(X, Y) = ~ wi 6(xi, yi) (1) i=l tf(xi,yi) = 0 if xi = yi, else 1 (2) This metric simply counts the number of (mis)matching feature values in both patterns. If we do not have information about the importance of features, this is a reasonable choice. But if we do have some information about feature relevance one possibility would be to add linguistic bias to weight or select different features (Cardie, 1996). An alternative--more empiricist--approach, is to look at the behavior of features in the set of examples used for training. We can compute statistics about the relevance of features by looking at which fea- tures are good predictors of the class labels. Infor- mation Theory gives us a useful tool for measuring feature relevance in this way (Quinlan, 1986; Quin- lan, 1993). Information Gain (IG) weighting looks at each feature in isolation, and measures how much infor- mation it contributes to our knowledge of the cor- rect class label. The Information Gain of feature f is measured by computing the difference in uncer- tainty (i.e. entropy) between the situations with- out and with knowledge of the value of that feature (Equation 3). w] = H(C) - ~-]~ev, P(v) x H(Clv ) si(f) (3) si(f) = - Z P(v)log 2 P(v) (4) vEVs Where C is the set of class labels, V f is the set of values for feature f, and H(C) = - ~cec P(c) log 2 P(e) is the entropy of the class la- bels. The probabilities are estimated from relative frequencies in the training set. The normalizing fac- tor si(f) (split info) is included to avoid a bias in favor of features with more values. It represents the amount of information needed to represent all val- ues of the feature (Equation 4). The resulting IG values can then be used as weights in equation 1. The k-NN algorithm with this metric is called ml- IG (Daelemans & Van den Bosch, 1992). The possibility of automatically determining the relevance of features implies that many different and possibly irrelevant features can be added to the fea- ture set. This is a very convenient methodology if theory does not constrain the choice enough before- hand, or if we wish to measure the importance of various information sources experimentally. Finally, it should be mentioned that MB- classifiers, despite their description as table-lookup algorithms here, can be implemented to work fast, using e.g. tree-based indexing into the case- base (Daelemans et al., 1997). 3 Smoothing of Estimates The commonly used method for probabilistic clas- sification (the Bayesian classifier) chooses a class for a pattern X by picking the class that has the maximum conditional probability P(classlX ). This probability is estimated from the data set by looking at the relative joint frequency of occurrence of the classes and pattern X. If pattern X is described by a number of feature-values Xl,..., xn, we can write the conditional probability as P(classlxl,... , xn). 
3 Smoothing of Estimates

The commonly used method for probabilistic classification (the Bayesian classifier) chooses a class for a pattern X by picking the class that has the maximum conditional probability P(class|X). This probability is estimated from the data set by looking at the relative joint frequency of occurrence of the classes and pattern X. If pattern X is described by a number of feature values x1,...,xn, we can write the conditional probability as P(class|x1,...,xn). If a particular pattern x'1,...,x'n is not literally present among the examples, all classes have zero ML probability estimates. Smoothing methods are needed to avoid zeroes on events that could occur in the test material.

There are two main approaches to smoothing: count re-estimation smoothing such as the Add-One or Good-Turing methods (Church & Gale, 1991), and Back-off type methods (Bahl et al., 1983; Katz, 1987; Chen & Goodman, 1996; Samuelsson, 1996). We will focus here on a comparison with Back-off type methods, because an experimental comparison in Chen & Goodman (1996) shows the superiority of Back-off based methods over count re-estimation smoothing methods. With the Back-off method the probabilities of complex conditioning events are approximated by (a linear interpolation of) the probabilities of more general events:

p̂(class|X) = λX p̃(class|X) + λX¹ p̃(class|X¹) + ... + λXⁿ p̃(class|Xⁿ)   (5)

where p̂ stands for the smoothed estimate, p̃ for the relative frequency estimate, the λ are interpolation weights with Σ (i=0..n) λXⁱ = 1, and X ≺ Xⁱ for all i, where ≺ is a (partial) ordering from most specific to most general feature-sets² (e.g. the probabilities of trigrams (X) can be approximated by bigrams (X¹) and unigrams (X²)). The weights of the linear interpolation are estimated by maximizing the probability of held-out data (deleted interpolation) with the forward-backward algorithm. An alternative method to determine the interpolation weights without iterative training on held-out data is given in Samuelsson (1996).

²X ≺ X' can be read as X is more specific than X'.
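A minimal sketch of the interpolation of Equation 5 follows, under assumptions of our own: patterns are feature tuples, generalizations are formed by wildcarding feature subsets, and `counts[pattern]` is a hypothetical precomputed Counter of class frequencies for cases matching that pattern.

```python
from collections import Counter
from itertools import combinations

def generalizations(x):
    """All patterns obtained by wildcarding feature subsets of x,
    ordered from most specific (no wildcard) to most general."""
    n = len(x)
    for k in range(n + 1):
        for idxs in combinations(range(n), k):
            yield tuple('*' if i in idxs else v for i, v in enumerate(x))

def interpolated_estimate(x, c, counts, lambdas):
    """Equation (5): p(c | x) as a weighted sum of relative-frequency
    estimates over x and its generalizations; `lambdas` sum to one."""
    estimate = 0.0
    for lam, pattern in zip(lambdas, generalizations(x)):
        hits = counts.get(pattern)
        if hits and sum(hits.values()) > 0:
            estimate += lam * hits[c] / sum(hits.values())
    return estimate
```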
Each step consists of a back-off to a lower level of specificity. There are as many steps as features, and there are a total of 2 F terms, divided over all the steps. Because all features are considered of equal importance, we call this the Naive Back-off algorithm. Usually, not all features x are equally important, so that not all back-off terms are equally relevant for the re-estimation. Hence, the problem of fitting the Axe parameters is replaced by a term selection task. To optimize the term selection, an evaluation of the up to 2 F terms on held-out data is still neces- sary. In summary, the Back-off method does not pro- vide a principled and practical domain-independent method to adapt to the structure of a particular do- main by determining a suitable ordering -< between events. In the next section, we will argue that a for- mal operationalization of similarity between events, as provided by MBL, can be used for this purpose. In MBL the similarity metric and feature weighting scheme automatically determine the implicit back- If f(xl,..., xn) > 0: #(clzl ...,xn) = f(c,~l ..... ~.) ' f(~l ..... ~.) Else if f(xl, ...,Xn-1, *) "4- ... A- f(*,x2, ...,Xn) > O: ~(clzl,...,zn) = f(c,~l ..... ~,-1,,)+...+f(c,*,~2 ..... ~) f(zl,...,z.-1,*)+...+/(*,z2 ..... z.) Else if ... : ~(clzl, ..., z,~) = Else if f(xl, *, ..., *) + ... + f(*, ..., *, x,~) > O: ~(clzl,...,x~) = f(c'~l'* ...... )++f(c'*' ........ ) f(zl,*,...,*)+...+/(*,...,*,z~) Table 1: The Naive Back-off smoothing algorithm. f(X) stands for the frequency of pattern X in the training set. An asterix (*) stands for a wildcard in a pattern. The terms at a higher level in the back-off sequence are more specific (-<) than the lower levels. off ordering using a domain independent heuristic, with only a few parameters, in which there is no need for held-out data. 4 A Comparison If we classify pattern X by looking at its nearest neighbors, we are in fact estimating the probabil- ity P(classlX), by looking at the relative frequency of the class in the set defined by simk(X), where slink(X) is a function from X to the set of most sim- ilar patterns present in the training data 3. Although the name "k-nearest neighbor" might mislead us by suggesting that classification is based on exactly k training patterns, the sima(X) fimction given by the Overlap metric groups varying numbers of patterns into buckets of equal similarity. A bucket is defined by a particular number of mismatches with respect to pattern X. Each bucket can further be decom- posed into a number of schemata characterized by the position of a wildcard (i.e. a mismatch). Thus simk(X) specifies a ~ ordering in a Collins 8z Brooks style back-off sequence, where each bucket is a step in the sequence, and each schema is a term in the estimation formula at that step. In fact, the un- weighted overlap metric specifies exactly the same ordering as the Naive Back-off algorithm (table 1). In Figure 1 this is shown for a four-featured pat- tern. The most specific schema is the schema with zero mismatches, which corresponds to the retrieval of an identical pattern from memory, the most gen- eral schema (not shown in the Figure) has a mis- match on every feature, which corresponds to the 3Note that MBL is not limited to choosing the best class. It can also return the conditional distribution of all the classes. 
4 A Comparison

If we classify pattern X by looking at its nearest neighbors, we are in fact estimating the probability P(class|X) by looking at the relative frequency of the class in the set defined by simk(X), where simk(X) is a function from X to the set of most similar patterns present in the training data³. Although the name "k-nearest neighbor" might mislead us by suggesting that classification is based on exactly k training patterns, the simk(X) function given by the Overlap metric groups varying numbers of patterns into buckets of equal similarity. A bucket is defined by a particular number of mismatches with respect to pattern X. Each bucket can further be decomposed into a number of schemata characterized by the position of a wildcard (i.e. a mismatch). Thus simk(X) specifies a ≺ ordering in a Collins & Brooks style back-off sequence, where each bucket is a step in the sequence, and each schema is a term in the estimation formula at that step. In fact, the unweighted Overlap metric specifies exactly the same ordering as the Naive Back-off algorithm (Table 1). In Figure 1 this is shown for a four-featured pattern. The most specific schema is the schema with zero mismatches, which corresponds to the retrieval of an identical pattern from memory; the most general schema (not shown in the figure) has a mismatch on every feature, which corresponds to the entire memory being the best neighbor.

³Note that MBL is not limited to choosing the best class. It can also return the conditional distribution of all the classes.

[Figure 1: An analysis of nearest neighbor sets into buckets (from left to right: exact match, 1 mismatch, 2 mismatches, 3 mismatches) and schemata (stacked), for both the Overlap and the Overlap-IG metric. IG weights reorder the schemata. The grey schemata are not used if the third feature has a very high weight (see Section 5.1).]

If Information Gain weights are used in combination with the Overlap metric, individual schemata instead of buckets become the steps of the back-off sequence⁴. The ≺ ordering becomes slightly more complicated now, as it depends on the number of wildcards and on the magnitude of the weights attached to those wildcards. Let S be the most specific (zero mismatches) schema. We can then define the ordering between schemata in the following equation, where Δ(X, Y) is the distance as defined in equation 1:

S' ≺ S'' ⟺ Δ(S, S') < Δ(S, S'')   (6)

⁴Unless two schemata are exactly tied in their IG values.

Note that this approach represents a type of implicit parallelism. The importance of the 2^F back-off terms is specified using only F parameters (the IG weights), where F is the number of features. This advantage is not restricted to the use of IG weights; many other weighting schemes exist in the machine learning literature (see Wettschereck et al. (1997) for an overview).

Using the IG weights causes the algorithm to rely on the most specific schema only. Although in most applications this leads to a higher accuracy, because it rejects schemata which do not match the most important features, sometimes this constraint needs to be weakened. This is desirable when: (i) there are a number of schemata which are almost equally relevant, (ii) the top-ranked schema selects too few cases to make a reliable estimate, or (iii) the chance that the few items instantiating the schema are mislabeled in the training material is high. In such cases we wish to include some of the lower-ranked schemata. For case (i) this can be done by discretizing the IG weights into bins, so that minor differences will lose their significance, in effect merging some schemata back into buckets. For (ii) and (iii), and for continuous metrics (Stanfill & Waltz, 1986; Cost & Salzberg, 1993) which extrapolate from exactly k neighbors⁵, it might be necessary to choose a k parameter larger than 1. This introduces one additional parameter, which has to be tuned on held-out data. We can then use the distance between a pattern and a schema to weight its vote in the nearest neighbor extrapolation. This results in a back-off sequence in which the terms at each step in the sequence are weighted with respect to each other, but without the introduction of any additional weighting parameters. A weighted voting function that was found to work well is due to Dudani (1976): the nearest neighbor schema receives a weight of 1.0, the furthest schema a weight of 0.0, and the other neighbors are scaled linearly to the line between these two points.

⁵Note that the schema analysis does not apply to these metrics.
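Dudani's weighting scheme is simple to implement; a small sketch of our own:

```python
def dudani_weights(distances):
    """Dudani (1976) voting: the nearest neighbor gets weight 1.0, the
    furthest gets 0.0, intermediate neighbors are scaled linearly."""
    d_min, d_max = min(distances), max(distances)
    if d_max == d_min:
        return [1.0] * len(distances)
    return [(d_max - d) / (d_max - d_min) for d in distances]
```

Each neighbor's class then votes with its weight, and the class with the largest summed weight is returned.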
5 Applications 5.1 PP-attachment In this section we describe experiments with MBL on a data-set of Prepositional Phrase (PP) attach- ment disambiguation cases. The problem in this data-set is to disambiguate whether a PP attaches to the verb (as in I ate pizza with a fork) or to the noun (as in I ate pizza with cheese). This is a dif- ficult and important problem, because the semantic knowledge needed to solve the problem is very diffi- cult to model, and the ambiguity can lead to a very large number of interpretations for sentences. We used a data-set extracted from the Penn Treebank WSJ corpus by Ratnaparkhi et al. (1994). It consists of sentences containing the possibly ambiguous sequence verb noun-phrase PP. Cases were constructed from these sentences by record- ing the features: verb, head noun of the first noun phrase, preposition, and head noun of the noun phrase contained in the PP. The cases were la- beled with the attachment decision as made by the parse annotator of the corpus. So, for the two example sentences given above we would get the feature vectors ate,pizza,with,fork,V, and ate,pizza, with, cheese, N. The data-set contains 20801 training cases and 3097 separate test cases, and was also used in Collins & Brooks (1995). The IG weights for the four features (V,N,P,N) were respectively 0.03, 0.03, 0.10, 0.03. This identi- fies the preposition as the most important feature: its weight is higher than the sum of the other three weights. The composition of the back-off sequence following from this can be seen in the lower part of Figure 1. The grey-colored schemata were effec- tively left out, because they include a mismatch on the preposition. Table 2 shows a comparison of accuracy on the test-set of 3097 cases. We can see that Isl, which implicitly uses the same specificity ordering as the Naive Back-off algorithm already performs quite well in relation to other methods used in the literature. Collins & Brooks' (1995) Back-off model is more so- phisticated than the naive model, because they per- formed a number of validation experiments on held- out data to determine which terms to include and, more importantly, which to exclude from the back- off sequence. They excluded all terms which did not match in the preposition! Not surprisingly, the 84.1% accuracy they achieve is matched by the per- formance of IBI-IG. The two methods exactly mimic each others behavior, in spite of their huge differ- ence in design. It should however be noted that the computation of IG-weights is many orders of mag- nitude faster than the laborious evaluation of terms on held-out data. We also experimented with rich lexical represen- tations obtained in an unsupervised way from word co-occurrences in raw WSJ text (Zavrel & Veenstra, 1995; Schiitze, 1994). We call these representations Lexical Space vectors. Each word has a numeric 25 dimensional vector representation. Using these vec- tors, in combination with the IG weights mentioned above and a cosine metric, we got even slightly bet- ter results. Because the cosine metric fails to group the patterns into discrete schemata, it is necessary to use a larger number of neighbors (k = 50). The result in Table 2 is obtained using Dudani's weighted voting method. Note that to devise a back-off scheme on the basis of these high-dimensional representations (each pat- tern has 4 x 25 features) one would need to consider up to 2 l°° smoothing terms. The MBL framework is a convenient way to further experiment with even more complex conditioning events, e.g. 
5.2 POS-tagging

Another NLP problem where the combination of different sources of statistical information is an important issue is POS-tagging, especially for the guessing of the POS tag of words not present in the lexicon. Relevant information for guessing the tag of an unknown word includes contextual information (the words and tags in the context of the word), and word form information (prefixes and suffixes, first and last letters of the word as an approximation of affix information, presence or absence of capitalization, numbers, special characters, etc.). There is a large number of potentially informative features that could play a role in correctly predicting the tag of an unknown word (Ratnaparkhi, 1996; Weischedel et al., 1993; Daelemans et al., 1996). A priori, it is not clear what the relative importance of these features is. We compared Naive Back-off estimation and MBL with two sets of features:

• PDASS: the first letter of the unknown word (p), the tag of the word to the left of the unknown word (d), a tag representing the set of possible lexical categories of the word to the right of the unknown word (a), and the two last letters (s). The first letter provides information about capitalization and the prefix, the two last letters about suffixes.

• PDDDAAASSS: more left and right context features, and more suffix information.
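For illustration, a PDASS case can be built from an unknown word and its context as follows. This tiny sketch is our own reading of the feature description above; the example tags are hypothetical.

```python
def pdass_case(word, left_tag, right_ambitag):
    """Build a PDASS pattern for an unknown word (assumes len(word) >= 2):
    first letter (p), tag of the word to the left (d), ambiguous tag of
    the word to the right (a), and the two last letters (s, s)."""
    return (word[0], left_tag, right_ambitag, word[-2], word[-1])

# e.g. pdass_case("Grandiloquent", "DT", "NN-VB")
#      -> ('G', 'DT', 'NN-VB', 'n', 't')
```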
However, these works differ in their focus from our analysis in that the emphasis is put on similarity between values of a feature (e.g. words), instead of similarity between patterns that are a (possibly complex) combination of many fea- tures. The comparison of MBL and Back-off shows that the two approaches perform smoothing in a very sim- ilar way, i.e. by using estimates from more general patterns if specific patterns are absent in the train- ing data. The analysis shows that MBL and Back-off use exactly the same type of data and counts, and this implies that MBL can safely be incorporated into a system that is explicitly probabilistic. Since the underlying k-NN classifier is a method that does not necessitate any of the common independence or distribution assumptions, this promises to be a fruit- ful approach. A serious advantage of the described approach, is that in MBL the back-off sequence is specified by the used similarity metric, without manual in- tervention or the estimation of smoothing parame- ters on held-out data, and requires only one param- eter for each feature instead of an exponential num- ber of parameters. With a feature-weighting met- ric such as Information Gain, MBL is particularly at an advantage for NLP tasks where conditioning events are complex, where they consist of the fusion of different information sources, or when the data is noisy. This was illustrated by the experiments on PP-attachment and POS-tagging data-sets. 441 Acknowledgements This research was done in the context of the "Induc- tion of Linguistic Knowledge" research programme, partially supported by the Foundation for Lan- guage Speech and Logic (TSL), which is funded by the Netherlands Organization for Scientific Research (NWO). We would like to thank Peter Berck and Anders Green for their help with software for the experiments. References D. Aha, D. Kibler, and M. Albert. 1991. Instance- based Learning Algorithms. Machine Learning, Vol. 6, pp. 37-66. L.R. Bahl, F. Jelinek and R.L. Mercer. 1983. A Maximum Likelihood Approach to Continu- ous Speech Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-5 (2), pp. 179-190. Peter F. Brown, Vincent J. Della Pietra, Peter V. deSouza, Jennifer C. Lai, and Robert L. Mer- cer. 1992. Class-based N-gram Models of Natural Language. Computational Linguistics, Vol. 18(4), pp. 467-479. Claire Cardie. 1994. Domain Specific Knowl- edge Acquisition for Conceptual Sentence Anal- ysis, PhD Thesis, University of Massachusets, Amherst, MA. Claire Cardie. 1996. Automatic Feature Set Selec- tion for Case-Based Learning of Linguistic Knowl- edge. In Proc. of the Conference on Empirical Methods in Natural Language Processing, May 17- 18, 1996, University of Pennsylvania. Stanley F.Chen and Joshua Goodman. 1996. An Empirical Study of Smoothing Techniques for Language Modelling. In Proc. of the 34th Annual Meeting of the ACL, June 1996, Santa Cruz, CA, ACL. Kenneth W. Church and William A. Gale. 1991. A comparison of the enhanced Good-Turing and deleted estimation methods for estimating proba- bilities of English bigrams. Computer Speech and Language, Vol 19(5), pp. 19-54. M. Collins. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. In Proc. of the 34th Annual Meeting of the ACL, June 1996, Santa Cruz, CA, ACL. M. Collins and J. Brooks. 1995. Prepositional Phrase Attachment through a Backed-Off Model. In Proceedings of the Third Workshop on Very Large Corpora, Cambridge, MA. S. Cost and S. Salzberg. 1993. 
A weighted nearest neighbour algorithm for learning with symbolic features. Machine Learning, Vol. 10, pp. 57-78.

Walter Daelemans and Antal van den Bosch. 1992. Generalisation Performance of Backpropagation Learning on a Syllabification Task. In M. F. J. Drossaers & A. Nijholt (eds.), TWLT3: Connectionism and Natural Language Processing. Enschede: Twente University, pp. 27-37.

Walter Daelemans. 1995. Memory-based lexical acquisition and processing. In P. Steffens (ed.), Machine Translation and the Lexicon. Springer Lecture Notes in Artificial Intelligence, no. 898. Berlin: Springer Verlag, pp. 85-98.

Walter Daelemans. 1996. Abstraction Considered Harmful: Lazy Learning of Language Processing. In J. van den Herik and T. Weijters (eds.), Benelearn-96. Proceedings of the 6th Belgian-Dutch Conference on Machine Learning. MATRIKS: Maastricht, The Netherlands, pp. 3-12.

Walter Daelemans, Jakub Zavrel, Peter Berck, and Steven Gillis. 1996. MBT: A Memory-Based Part of Speech Tagger Generator. In E. Ejerhed and I. Dagan (eds.), Proc. of the Fourth Workshop on Very Large Corpora, Copenhagen: ACL SIGDAT, pp. 14-27.

Walter Daelemans, Antal van den Bosch, and Ton Weijters. 1997. IGTree: Using Trees for Compression and Classification in Lazy Learning Algorithms. In D. Aha (ed.), Artificial Intelligence Review, special issue on Lazy Learning, Vol. 11(1-5).

Ido Dagan, Fernando Pereira, and Lillian Lee. 1994. Similarity-Based Estimation of Word Cooccurrence Probabilities. In Proc. of the 32nd Annual Meeting of the ACL, June 1994, Las Cruces, New Mexico, ACL.

S. A. Dudani. 1981. The Distance-Weighted k-Nearest Neighbor Rule. IEEE Transactions on Systems, Man, and Cybernetics, Vol. SMC-6, pp. 325-327.

Ute Essen and Volker Steinbiss. 1992. Cooccurrence Smoothing for Stochastic Language Modeling. In Proc. of ICASSP, Vol. 1, pp. 161-164, IEEE.

Slava M. Katz. 1987. Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-35, pp. 400-401, March 1987.

David M. Magerman. 1994. Natural Language Parsing as Statistical Pattern Recognition. PhD Thesis, Department of Computer Science, Stanford University.

Hwee Tou Ng and Hian Beng Lee. 1996. Integrating Multiple Knowledge Sources to Disambiguate Word Sense: An Exemplar-Based Approach. In Proc. of the 34th Annual Meeting of the ACL, June 1996, Santa Cruz, CA, ACL.

J. R. Quinlan. 1986. Induction of Decision Trees. Machine Learning, Vol. 1, pp. 81-106.

J. R. Quinlan. 1993. C4.5: Programs for Machine Learning. San Mateo, CA: Morgan Kaufmann.

Adwait Ratnaparkhi. 1996. A Maximum Entropy Part-Of-Speech Tagger. In Proc. of the Conference on Empirical Methods in Natural Language Processing, May 17-18, 1996, University of Pennsylvania.

A. Ratnaparkhi, J. Reynar, and S. Roukos. 1994. A maximum entropy model for Prepositional Phrase Attachment. In ARPA Workshop on Human Language Technology, Plainsboro, NJ.

Christer Samuelsson. 1996. Handling Sparse Data by Successive Abstraction. In Proc. of the International Conference on Computational Linguistics (COLING'96), August 1996, Copenhagen, Denmark.

Hinrich Schütze. 1994. Distributional Part-of-Speech Tagging. In Proc. of the 7th Conference of the European Chapter of the Association for Computational Linguistics (EACL'95), Dublin, Ireland.

C. Stanfill and D. Waltz. 1986. Toward memory-based reasoning. Communications of the ACM, Vol. 29, pp. 1213-1228.
Ralph Weischedel, Marie Meteer, Richard Schwartz, Lance Ramshaw, and Jeff Palmucci. 1993. Coping with Ambiguity and Unknown Words through Probabilistic Models. Computational Linguistics, Vol. 19(2), pp. 359-382.

D. Wettschereck, D. W. Aha, and T. Mohri. 1997. A Review and Comparative Evaluation of Feature-Weighting Methods for Lazy Learning Algorithms. In D. Aha (ed.), Artificial Intelligence Review, special issue on Lazy Learning, Vol. 11(1-5).

Jakub Zavrel and Jorn B. Veenstra. 1995. The Language Environment and Syntactic Word-Class Acquisition. In C. Koster and F. Wijnen (eds.), Proc. of the Groningen Assembly on Language Acquisition (GALA95). Center for Language and Cognition, Groningen, pp. 365-374.
String Transformation Learning

Giorgio Satta
Dipartimento di Elettronica e Informatica
Università di Padova
via Gradenigo, 6/A
I-35131 Padova, Italy
satta@dei.unipd.it

John C. Henderson
Department of Computer Science
Johns Hopkins University
Baltimore, MD 21218-2694
jhndrsn@cs.jhu.edu

Abstract

String transformation systems have been introduced in (Brill, 1995) and have several applications in natural language processing. In this work we consider the computational problem of automatically learning from a given corpus the set of transformations presenting the best evidence. We introduce an original data structure and efficient algorithms that learn some families of transformations that are relevant for part-of-speech tagging and phonological rule systems. We also show that the same learning problem becomes NP-hard in cases of an unbounded use of don't care symbols in a transformation.

1 Introduction

Ordered sequences of rewriting rules are used in several applications in natural language processing, including phonological and morphological systems (Kaplan and Kay, 1994), morphological disambiguation, part-of-speech tagging and shallow syntactic parsing (Brill, 1995), (Karlsson et al., 1995). In (Brill, 1995) a learning paradigm, called error-driven learning, has been introduced for automatic induction of a specific kind of rewriting rules called transformations, and it has been shown that the achieved accuracy of the resulting transformation systems is competitive with that of existing systems.

In this work we further elaborate on the error-driven learning paradigm. Our main contribution is summarized in what follows. We consider some families of transformations and design efficient algorithms for the associated learning problem that improve existing methods. Our results are achieved by exploiting a data structure originally introduced in this work. This allows us to simultaneously represent and test the search space of all possible transformations. The transformations we investigate make use of classes of symbols, in order to generalize regularities in rule applications. We also show that when an unbounded number of these symbol classes are allowed within a transformation, then the associated learning problem becomes NP-hard.

The notation we use in the remainder of the paper is briefly introduced here. Σ denotes a fixed, finite alphabet and ε the null string. Σ* and Σ+ are the set of all strings and all non-null strings over Σ, respectively. Let w ∈ Σ*. We denote by |w| the length of w. Let w = uxv; u is a prefix and v is a suffix of w; when x is non-null, it is called a factor of w. The suffix of w of length i is denoted suff_i(w), for 0 ≤ i ≤ |w|. Assume that x is non-null, and w = u_i x suff_i(w) for ℓ > 0 different values of i but not for ℓ + 1, or x is not a factor of w and ℓ = 0. Then we say that ℓ is the statistic of factor x in w.

2 The learning paradigm

The learning paradigm we adopt is called error-driven learning and has been originally proposed in (Brill, 1995) for part of speech tagging applications. We briefly introduce here the basic assumptions of the approach.

A string transformation is a rewriting rule denoted as u → v, where u and v are strings such that |u| = |v|. This means that if u appears as a factor of some string w, then u should be replaced by v in w.
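As a concrete illustration (this sketch and the application regime are ours, not the paper's), a plain transformation u → v can be applied to a string in Python as follows, rewriting occurrences of u left to right:

  # Illustration (not from the paper): applying a plain transformation
  # u -> v, with |u| = |v|, at every occurrence of the factor u in w.
  # One simple left-to-right regime is assumed here.
  def apply_transformation(w, u, v):
      assert len(u) == len(v)
      out, i = [], 0
      while i < len(w):
          if w[i:i + len(u)] == u:
              out.append(v)
              i += len(u)
          else:
              out.append(w[i])
              i += 1
      return "".join(out)

  # e.g. apply_transformation("accbacac", "ac", "ab") == "abcbabab"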
The application of the transformation might be conditioned by the requirement that some additionally specified pattern matches some part of the string w to be rewritten.

We now describe how transformations can be automatically learned. A pair of strings (w, w') is an aligned pair if |w| = |w'|. When w = ux suff_i(w), w' = u'x' suff_i(w') and |x| = |x'|, we say that factors x and x' occur at aligned positions within (w, w'). A multi-set of aligned pairs is called an aligned corpus. Let (w, w') be an aligned pair and let τ be some transformation of the form u → v. The positive evidence of τ (w.r.t. (w, w')) is the number of different positions at which factors u and v are aligned within (w, w'). The negative evidence of τ (w.r.t. (w, w')) is the number of different positions at which factors u and u are aligned within (w, w'). Intuitively speaking, positive (negative) evidence is a count of how many times we will do well (badly, respectively) when using τ on w in trying to get w'. The score associated with τ is the difference between the positive evidence and the negative evidence of τ. This extends to an aligned corpus in the obvious way. We are interested in the set of transformations that are associated with the highest score in a given aligned corpus, and will develop algorithms to find such a set in the next sections.

Figure 1: Trie and suffix tree for string w = accbacac$. Pair [i, j] denotes the factor of w starting at position i and ending at position j (hence [1, 2] denotes ac).

3 Data Structures

This section introduces two data structures that are basic to the development of the algorithms presented in this paper.

3.1 Suffix trees

We briefly present here a data structure that is well known in the text processing literature; the reader is referred to (Crochemore and Rytter, 1994) and (Apostolico, 1985) for definitions and further references.

Let w be some non-null string. Throughout the paper we assume that the rightmost symbol of w is an end-marker not found at any other position in the string. The suffix tree associated with w is a "compressed" trie of all strings suff_i(w), 1 ≤ i ≤ |w|. Edges are labeled by factors of w which are encoded by means of two natural numbers denoting endpoints in the string. An example is reported in Figure 1.

An implicit node is a node not explicitly represented in the suffix tree, that splits the label of some edge at a given position. (Each implicit node corresponds to some node in the original trie having only one child.) We denote by parent(p) the parent node of (implicit) node p and by label(p, q) the label of the edge spanning (implicit) nodes p and q. Throughout the paper, we take the dominance relation between nodes to be reflexive, unless we write proper dominance. We also say that implicit node q immediately dominates node p if q splits the arc between parent(p) and p. Of main interest here are the following properties of suffix trees:

• if node p has children p_1, ..., p_d, then d ≥ 2 and strings label(p, p_i) differ one from the other at the leftmost symbol;

• all and only the factors of w are represented by paths from the root to some (implicit) node;

• the statistic of factor u of w is the number of leaves dominated by the (implicit) node ending the path representing u.

In the remainder of the paper, we sometimes identify an (implicit) node of a suffix tree with the factor represented by the path from the root to that node.
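The last property can also be checked naively: the statistic of a factor x equals the number of leaves below the node for x, i.e. the number of suffixes of w (with its end-marker) that begin with x. A quadratic Python sketch, for illustration only:

  # Quadratic-time illustration (not the paper's linear-time machinery):
  # the statistic of a factor x of w is the number of suffixes of w$
  # that have x as a prefix, i.e. the number of leaves below node x.
  def statistic(w, x):
      w = w + "$"  # end-marker, as assumed in the text
      return sum(1 for i in range(len(w)) if w[i:].startswith(x))

  # e.g. statistic("accbacac", "ac") == 3 for w = accbacac$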
The suffix tree and the statistics of all factors of w can be constructed/computed in time O(|w|), as reported in (Weiner, 1973) and (McCreight, 1976). McCreight's algorithm uses two basic functions to scan paths in the suffix tree under construction. These functions are briefly introduced here and will be exploited in the next subsection. Below, p is a node in a tree and u is a non-null string.

function Slow_scan(p, u): Starting at p, scan u symbol by symbol. Return the (implicit) node corresponding to the last matching symbol.

The next function runs faster than Slow_scan, and can be used whenever we already know that u is an (implicit) node in the tree (u completely matches some path in the tree).

function Fast_scan(p, u): Starting at p, scan u by iteratively (i) finding the edge between the current node and one of its children that has the same first symbol as the suffix of u yet to be scanned, and (ii) skipping a prefix of u equal to the length of the selected edge label. Return the (implicit) node u.

Figure 2: Suffix tree alignment for strings w = accbacac$, w' = acabacba$ and the identity homomorphism h(a) = a, h(b) = b, h(c) = c. Each a-link is denoted by indexing the incident nodes with the same integer number; if the incident node is an implicit node, then we add between parentheses the relative position w.r.t. the arc label.

From each node au in the suffix tree, au some factor, McCreight's algorithm creates a pointer, called an s-link, to node u which necessarily exists in the suffix tree. We write q = s-link(p) if there is an s-link from p to q.

3.2 Suffix tree alignment

In the next section each transformation will be associated with several strings. Given an input text, we will compute transformation scores by computing statistics of these strings. This can easily be done using suffix trees, and by pairing statistics corresponding to the same transformation. The latter task can be done using the data structure originally introduced here.

A total function h : Σ → Σ', Σ and Σ' two alphabets, is called a (restricted) homomorphism. We extend h to a string function in the usual way by posing h(ε) = ε and h(au) = h(a)h(u), a ∈ Σ and u ∈ Σ*. Given w, w' ∈ Σ+, we need to pair each factor u of w with factor h(u) possibly occurring in w'. To solve this problem, we construct the suffix trees T, T' for w, w', respectively. Then we establish an a-link (a pointer) from each node u of T, u some factor, to the (implicit) node h(u) of T', if h(u) exists. Furthermore, if factor ua with a ∈ Σ is an (implicit) node of T such that h(u) but not h(ua) are (implicit) nodes of T', we create node u in T (if u was an implicit node) and establish an a-link from u to (implicit) node h(u) of T'. Note that the total number of a-links is O(|w|). The resulting data structure is called here a suffix tree alignment. An example is reported in Figure 2.

We now specify a method to compute suffix tree alignments. In what follows p, p' are tree nodes and u is a non-null string. Crucially, we assume we can access the s-links of T and T'. Paths u and v in T and T', respectively, are aligned if v = h(u). The next two functions are used to move a-links up and down two aligned paths.

function Move_link_down(p, p', u): Starting at p and p', simultaneously scan u and h(u), respectively, using function Slow_scan. Stop as soon as a symbol is not matched.
At each encountered node of T and at the (implicit) node of T corresponding to the last successful match, create an a-link to the paired (implicit) node of T'. Return the pair of nodes in the last created a-link along with the length of the successfully matched prefix of u.

In the next function, we use function Fast_scan introduced in Section 3.1, but we run it upward the tree (with the obvious modifications).

function Move_link_up(p, p'): Starting at p and p', simultaneously scan the paths to the roots of T and T', respectively, using function Fast_scan. Stop as soon as a node of T is encountered that already has an a-link. At each encountered node of T create an a-link to the paired (implicit) node of T'.

We also need a function that "shifts" a-links to a new pair of aligned paths. This is done using s-links. The next auxiliary function takes care of those (implicit) nodes for which the s-link is missing. (This is the case for implicit nodes of T' and for some nodes of T that have been newly created.) We rest on the property that the parent node of any such (implicit) node always has an s-link, when it differs from the root.

function Up_link_down(p): If s-link(p) is defined then return s-link(p). Else, let p_1 = parent(p). If p_1 is not the root node, let p_2 = s-link(p_1) and return (implicit) node Fast_scan(p_2, label(p_1, p)). If p_1 is the root node, return (implicit) node Fast_scan(p_1, suff_{|label(p_1,p)|-1}(label(p_1, p))).

function Shift_link(p, p'): p_1 = Up_link_down(p), p'_1 = Up_link_down(p'). Return (p_1, p'_1).

We can now present the algorithm for the construction of suffix tree alignments.

Algorithm 1 Let T and T' be the suffix trees for strings w and w', respectively:

begin
  (b_{|w|}, b'_{|w|}, d) ← Move_link_down(root of T, root of T', w)
  for i from |w| - 1 downto 1 do
  begin
    (sb_i, sb'_i) ← Shift_link(b_{i+1}, b'_{i+1})
    Move_link_up(sb_i, sb'_i)
    (b_i, b'_i, dd) ← Move_link_down(sb_i, sb'_i, suff_{i+1-d}(w))
    d ← d - 1 + dd
  end
end

  i    sb_i   b_i    a-links
  9    -      ac     1, 2
  8    c      c      -
  7    ε      cba    4
  6    ba     bac    5
  5    ac     aca    6
  4    ca     ca     7
  3    a      ac     -
  2    c      c      8
  1    ε      $      -

Figure 3: The table reports the values of sb_i, b_i and the established a-links at each iteration of Algorithm 1, when constructing the suffix tree alignment in Figure 2. To denote a-links we use the same integer numbers as in Figure 2.

In the next section we use the following properties of Algorithm 1:

• after T and T' have been processed, for every node p of T representing factor u of w, (implicit) node a-link(p) of T' is defined if and only if a-link(p) represents factor h(u) of w';

• the algorithm can be executed in time O(|w| + |w'|).

The first property above can be proved as follows. For 1 ≤ i ≤ |w|, b_i in Algorithm 1 is (the node representing) the longest prefix of suff_i(w) such that h(b_i) is an (implicit) node of T' (is a factor of w'). This can be proved by induction on |w| - i, using the definition of Move_link_down and of s-link. We then observe that, if u is a node of T, then factor u is a prefix of some suff_i(w) and either u dominates b_i or b_i properly dominates u in T. If u dominates b_i, then h(u) must be an (implicit) node of T'. In this case an a-link is established from u to h(u) by Move_link_up or Move_link_down, depending on whether u dominates or is dominated by sb_i in T. If b_i properly dominates u, h(u) does not occur in w'. In this case, node u is never reached by the algorithm and no a-link is established for this node.
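The content of the a-links can also be stated without any tree machinery: each factor u of w is paired with h(u) exactly when h(u) is a factor of w'. The following brute-force Python sketch (cubic, for checking small examples only; Algorithm 1 computes the same relation in linear time) makes this explicit. The dictionary-based representation of h is an assumption made for illustration.

  # Brute-force illustration of what the a-links record: each factor u
  # of w is linked to h(u) whenever h(u) occurs as a factor of w'.
  def align_factors(w, wp, h):
      """h: dict mapping every symbol of w (incl. end-marker) to a symbol."""
      factors = {w[i:j] for i in range(len(w))
                        for j in range(i + 1, len(w) + 1)}
      links = {}
      for u in sorted(factors):
          hu = "".join(h[a] for a in u)
          if hu in wp:            # h(u) occurs as a factor of w'
              links[u] = hu
      return links

  # With the identity homomorphism on {a, b, c, $}, align_factors mirrors
  # the a-links of Figure 2 for w = accbacac$ and w' = acabacba$.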
The proof of the linear time result is rather long; we only give an outline here. The interesting case is the function Shift_link, which is executed |w| - 1 times by the algorithm. When executed once on nodes p and p', Shift_link uses time O(1) if s-link(p) and s-link(p') are both defined. In all other cases, it uses an amount of time proportional to the number of (implicit) nodes visited by function Fast_scan, which is called through function Up_link_down. We use an amortization technique and charge a constant amount of time to the symbols in w and w', for each node visited in this way. Consider the execution of Shift_link(b_{i+1}, b'_{i+1}) for some i, 1 ≤ i ≤ |w| - 1. Assume that, correspondingly, Fast_scan visits nodes u_1, ..., u_d of T in this order, with d ≥ 1 and each u_j some factor of w. Then we have that each u_j is a (proper) prefix of u_{j+1}, and u_d = sb_i. For each u_j, 1 ≤ j ≤ d - 1, we charge a constant amount of time to the symbol in w "corresponding" to the last symbol of u_j. The visit to u_d, on the other hand, is charged to the ith symbol of w. (Note that charging the visit to u_d to the symbol in w "corresponding" to the last symbol of u_d does not work, since in the case of sb_i = b_i the same symbol would be charged again at the next iteration of the for-cycle.) It is not difficult to see that, in this way, each symbol of w is charged at most once. A similar argument works for visits to nodes of T' by Fast_scan, which are charged to symbols of w'. This shows that the time used by all executions of Shift_link is O(|w| + |w'|).

Suffix trees and suffix tree alignments can be generalized to finite multi-sets of strings, each string ending with the same end-marker not found at any other position. In this case each leaf holds a record, called count, of the number of times the corresponding suffix appears in the entire multi-set, which will be propagated appropriately when computing factor statistics. Most important here, all of the above results still hold for these generalizations. In the next section, we will deal with the multi-set case.

4 Transformation learning

This section deals with the computational problem of learning string transformations from an aligned corpus. We show that some families of transformations can be efficiently learned exploiting the data structures of Section 3. We also consider more general kinds of transformations and show that for this class the learning problem is NP-hard.

4.1 Data representation

We introduce a representation of aligned corpora that reduces the problem of computing the positive/negative evidence of transformations to the problem of computing factor statistics.

Let (w, w') be an aligned pair, w = a_1 ··· a_n and w' = a'_1 ··· a'_n, with a_i, a'_i ∈ Σ for 1 ≤ i ≤ n, and n ≥ 1. We define

w × w' = (a_1, a'_1) ··· (a_n, a'_n).   (1)

Note that w × w' is a string over the new alphabet Σ × Σ. Let N ≥ 1 and let L = {(w_1, w'_1), ..., (w_N, w'_N)} be an aligned corpus. We represent L as a string multi-set over alphabet Σ × Σ:

L_× = {w × w' | (w, w') ∈ L},   (2)

where w × w' appears in L_× as many times as (w, w') appears in L.

4.2 Learning algorithms

Let L be an aligned corpus with N aligned pairs over a fixed alphabet Σ, and let n be the length of the longest string in a pair in L. We start by considering plain transformations of the form

u → v,   (3)

where u, v ∈ Σ+, |u| = |v|. We want to find all instances of strings u, v ∈ Σ* such that, in L, u → v has score greater than or equal to the score of any other transformation. Existing methods for this problem are data-driven. They consider all pairs of factors (with lengths bounded by n) occurring at aligned positions within some pair in L, and update the positive and the negative evidence of the associated transformations. They thus consider O(Nn²) factor pairs, where each pair takes time O(n) to be read/stored. We conclude that these methods use an amount of time O(Nn³).
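For reference, the naive O(Nn³) data-driven method just described can be written down directly; the following Python sketch (ours, not from the paper) scores every plain transformation and returns the best-scoring ones. Transformations of the form u → u are skipped, since their score is always 0, and the corpus is assumed to contain at least one mismatched position.

  # Direct implementation of the naive data-driven method described above
  # (illustrative only; Algorithm 2 below achieves O(Nn) instead).
  from collections import Counter

  def best_transformations(corpus):
      """corpus: list of aligned pairs (w, w') with |w| = |w'|."""
      pos, neg = Counter(), Counter()
      for w, wp in corpus:
          for i in range(len(w)):
              for j in range(i + 1, len(w) + 1):
                  u, v = w[i:j], wp[i:j]
                  if u != v:
                      pos[(u, v)] += 1   # u aligned with v: positive evidence
                  else:
                      neg[u] += 1        # u aligned with u: negative evidence
      scores = {(u, v): p - neg[u] for (u, v), p in pos.items()}
      best = max(scores.values())
      return [t for t, s in scores.items() if s == best]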
We can improve on this by using suffix tree alignments. Let L_× be defined as in (2) and let h_1 : (Σ × Σ) → (Σ × Σ) be the homomorphism specified as:

h_1((a, b)) = (a, a).

Recall that each suffix of a multi-set of strings is represented by a leaf in the associated suffix tree, because of the use of the end-marker, and that each leaf stores the count of the occurrences of the corresponding suffix in the source multi-set. We schematically specify our first learning algorithm below.

Algorithm 2
Step 1: construct two copies T_× and T'_× of the suffix tree associated with L_× and align them using h_1;
Step 2: visit trees T_× and T'_× in post-order, and annotate each node p with the number e(p) computed as the sum of the counts at leaves that p dominates;
Step 3: annotate each node p of T_× with the score e(p) - e(p'), where p' = a-link(p) if a-link(p) is an actual node, p' is the node immediately dominated by a-link(p) if a-link(p) is an implicit node, and e(p') = 0 if a-link(p) is undefined; make a list of the nodes with the highest annotated score.

Let p be a node of T_× associated with factor u × v. Integer e(p) computed at Step 2 is the number of times a suffix having u × v as a prefix appears in strings in L_×. Thus e(p) is the number of different positions at which factors u and v are aligned within L_× and hence the positive evidence of transformation u → v w.r.t. L, as defined in Section 2. Similarly, e(p') is the statistic of factor u × u and hence the negative evidence of u → v (as well as the negative evidence of all transformations having u as left-hand side). It follows that Algorithm 2 records, at Step 3, the transformations having the highest score in L among all transformations represented by nodes of T_×. It is not difficult to see that the remaining transformations, denoted by implicit nodes of T_×, do not have score greater than the one above. The latter transformations with highest score, if any, can be easily recovered by visiting the implicit nodes that immediately dominate the nodes of T_× recorded at Step 3.

A complexity analysis of Algorithm 2 is straightforward. Step 1 can be executed in time O(Nn), as discussed in Section 3. Since the size of T_× and T'_× is O(Nn), all other steps can be easily executed in linear time. Hence Algorithm 2 runs in time O(Nn).

We now turn to a more general kind of transformations. In several natural language processing applications it is useful to generalize over some transformations of the form in (3), by using classes of symbols in Σ. Let t ≥ 1 and let C_1, ..., C_t be a partition of Σ (each C_i ≠ ∅). Consider Γ = {C_1, ..., C_t} as an alphabet. We say that string a_1 ··· a_d ∈ Σ+ matches string C_{i_1} ··· C_{i_d} ∈ Γ+ if a_k ∈ C_{i_k} for 1 ≤ k ≤ d. We define transformations¹

uγ → v −,   (4)

u, v ∈ Σ+, |u| = |v|, γ ∈ Γ+, and assume the following interpretation. An occurrence of string u must be rewritten to v in a text whenever u is followed by a substring matching γ. String γ is called the right context of the transformation.
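To illustrate the interpretation of (4) (the sketch and its function names are ours, not the paper's), matching against a sequence of symbol classes and applying a contextual transformation can be written in Python as:

  # Illustration of the class-based right context: gamma is a list of
  # symbol classes (sets); x matches gamma iff each symbol of x belongs
  # to the corresponding class. Matches are tested on the original w,
  # i.e. the rule is applied simultaneously at all matching positions.
  def matches(x, gamma):
      return len(x) == len(gamma) and all(a in C for a, C in zip(x, gamma))

  def apply_contextual(w, u, v, gamma):
      """Rewrite u to v wherever u is followed by a factor matching gamma."""
      out = list(w)
      k, g = len(u), len(gamma)
      for i in range(len(w) - k - g + 1):
          if w[i:i + k] == u and matches(w[i + k:i + k + g], gamma):
              out[i:i + k] = v
      return "".join(out)

  # e.g. apply_contextual("bad", "a", "o", [set("bcd")]) == "bod"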
The positive evidence for such a transformation is the number of positions at which factors ux and vx' are aligned within the corpus, for all possible x, x' ∈ Σ+ with x matching γ. (We do not require x = x', since later transformations can change the right context.) The negative evidence for the transformation is the number of positions at which factors ux and ux' are aligned within the corpus, x, x' as above.

We are not aware of any learning method for transformations of the form in (4). A naive method for this task would consider all factor pairs appearing at aligned positions in some pair in L. The left component of each factor must then be split into a string in Σ+ and a string in Γ+, to represent a transformation in the desired form. Overall, there are O(Nn³) possible transformations, and we need time O(n) to read/store each transformation. Then the method uses an amount of time O(Nn⁴).

Again, we can improve on this. We need a representation for right context strings. Define homomorphism h_2 : (Σ × Σ) → Γ as

h_2((a, b)) = C,  a ∈ C.

(h_2 is well defined since Γ is a partition of Σ.) Let also L_Γ = {h_2(w × w') | w × w' ∈ L_×}, where h_2(w × w') appears in L_Γ as many times as w × w' appears in L_×.

¹In generative phonology (4) is usually written as u → v / _ γ. Our notation can more easily be generalized, as is needed in some transformation systems.

Figure 4: At Step 3 of Algorithm 3, triple (q, e, e') is inserted in τ(p) if the relations depicted above are realized, where dashed arrows denote a-links, black circles denote nodes, and white circles denote nodes that might be implicit. Integer e > 0 is a count of the paths from node q downward, having the form y × y' with a prefix of y matching γ. Similarly, e' is a count of the paths from node q' downward satisfying the same matching condition with γ. The matching condition is enforced by the fact that the above paths have their ending leaf nodes a-linked to a leaf node of T_Γ dominated by node p.

Below we link a suffix tree to more than one suffix tree. In the notation of a-links we then use a subscript indicating the suffix tree of the target node, in order to distinguish among different linkings. We now schematically specify the learning algorithm; additional computational details will be provided later in the discussion of the complexity.

Algorithm 3
Step 1: construct two copies T_× and T'_× of the suffix tree associated with L_× and construct the suffix tree T_Γ associated with L_Γ;
Step 2: align T_× with T'_× using h_1 and align the resulting suffix trees T_× and T'_× with T_Γ using h_2;
Step 3: for each node p of T_Γ, store a set τ(p) including all triples (q, e, e') such that (see Figure 4):
• q is a node of T_× such that a-link_{T_Γ}(q) properly dominates p;
• e > 0 is the sum of the counts at leaves of T_× dominated by q that have an a-link to a leaf of T_Γ dominated by p;
• if q' = a-link_{T'_×}(q) is defined, e' is the sum of the counts at leaves of T'_× dominated by q' that have an a-link to a leaf of T_Γ dominated by p; otherwise, e' = 0;
Step 4: find all pairs (p, q), p a node of T_Γ and (q, e, e') ∈ τ(p), such that e - e' is greater than or equal to any other e_1 - e'_1, (q_1, e_1, e'_1) in some τ(p_1).

We next show that if pair (p, q) is found at Step 4, then q represents a factor u × v, p represents a factor h_2(u × v)γ, and transformation uγ → v − has the highest score among all transformations represented by nodes of T_× and T_Γ.
Similarly to the case of Algorithm 2, this is the highest score achieved in L, and other transformations with the same score can be obtained from some of the implicit nodes immediately dominating p and q.

Let p and q be defined as in Step 3 above. Assume that q represents a factor u × v of some string in L_× and p represents a factor δγ ∈ Γ* of some string in L_Γ, where |δ| = |u|. Since a-link_{T_Γ}(q) dominates p, we must have h_2(u × v) = δ. Consider a suffix (u × v)(x × x')(y × y') appearing in ℓ > 0 strings in L_×, such that h_2(x × x') = γ. (This means that x matches γ, and there are at least ℓ positions at which u → v has been applied with a right context of γ.) We have that string h_2((u × v)(x × x')(y × y')) = δγ h_2(y × y') must be a suffix of some strings in L_Γ. It follows that (u × v)(x × x')(y × y') is a leaf of T_× with a count of ℓ, δγ h_2(y × y') is a leaf of T_Γ, and there is an a-link between these two nodes. Leaf (u × v)(x × x')(y × y') is dominated by q, and leaf δγ h_2(y × y') is dominated by p. Then, at Step 3, integer ℓ is added to e. Since no condition has been imposed above on string x' and on suffix (y × y'), we conclude that the final value of e must be the positive evidence of transformation uγ → v −. A similar argument shows that the negative evidence of this transformation is stored in e'. It then follows that, at Step 4, Algorithm 3 finds the transformations with the highest score among those represented by nodes of T_× and T_Γ.

Algorithm 3 can be executed in time O(Nn²). We only outline a proof of this property here, by focusing on Step 3. To execute this step we visit T_Γ in post-order. At leaf node p, we consider the set F(p) of all leaves q of T_× such that p = a-link_{T_Γ}(q), and the set F'(p) of all leaves q' of T'_× such that p = a-link_{T_Γ}(q'). For each (implicit) node of T'_× that dominates some node in F'(p) and that is the target of some a-link (from some source node of T_×), we record the sum of the counts of the dominated nodes in F'(p). This can be done in time O(|F'(p)| n). For each node q of T_× dominating some node in F(p), we store in τ(p) the triple (q, e, e'), since a-link_{T_Γ}(q) necessarily dominates p. We let e > 0 be the sum of the counts of the dominated nodes in F(p), and let e' be the value retrieved from the a-link to T'_×, if any. This takes time O(|F(p)| n). When p ranges over the leaves of T_Γ, we have Σ_p |F(p)| = Σ_p |F'(p)| = O(Nn). We then conclude that sets τ(p) for all leaves p of T_Γ can be computed in time O(Nn²). At internal node p with children p_i, 1 ≤ i ≤ d, d ≥ 1, we assume that sets τ(p_i) have already been computed. Assume that for some i we have (q, e_i, e'_i) ∈ τ(p_i) and a-link_{T_Γ}(q) does not immediately dominate p_i. If (q, e, e') ∈ τ(p), we add e_i, e'_i to e, e', respectively; otherwise, we insert (q, e_i, e'_i) in τ(p). We can then compute sets τ(p) for all internal nodes p of T_Γ using an amount of time Σ_p |τ(p)| = O(Nn²).

4.3 General transformations

We have mentioned that the introduction of classes of alphabet symbols allows abstraction over plain transformations that is of interest to natural language applications. We generalize here transformations in (4) by letting γ be a string over Σ ∪ Γ. More precisely, we assume γ has the form:

γ = u_0 γ_1 u_1 ··· u_{d-1} γ_d u_d,   (5)

where u_0, u_d ∈ Σ*, u_i ∈ Σ+ and γ_j ∈ Γ+ for 1 ≤ i ≤ d - 1 and 1 ≤ j ≤ d, and d ≥ 1. The notion of matching previously defined is now extended in such a way that, for a, b ∈ Σ, a matches b if a = b.
Then the interpretation of the resulting transformation is the usual one. The parameter d in (5) is called the number of alternations of the transformation.

We have established the following results:

• transformations with a bounded number of alternations can be learned in polynomial time;

• learning transformations with an unbounded number of alternations is NP-hard.

Again, we only give an outline of the proof below. The first result is easy to show, by observing that in an aligned corpus there are polynomially many occurrences of transformations with a bounded number of alternations. The second result holds even if we restrict ourselves to |Σ| = 2 and |Γ| = 1, that is, if we use a don't care symbol. Here we introduce a decision problem associated with the optimization problem of learning the transformations with the highest score, and outline an NP-completeness proof.

TRANSFORMATION SCORING (TS)
Instance: (L, K), with L an aligned corpus, K a positive integer.
Question: Is there a transformation that has score greater than or equal to K w.r.t. L?

Membership in NP is easy to establish for TS. To show NP-hardness, we consider the CLIQUE decision problem for undirected, simple, connected graphs and transform such a problem to the TS problem. (The NP-completeness for the used restriction of the CLIQUE problem (Garey and Johnson, 1979) is easy to establish.) Let (G, K') be an instance of the CLIQUE problem as above, G = (V, E) and K' > 0. Without loss of generality, we assume that V = {1, 2, ..., q}. Let Σ = {a, b}; we construct an instance of the TS problem (L, K) over Σ as follows. For each i, j ∈ V with i < j let

w_{i,j} = a^{i-1} b a^{j-i-1} b a^{q-j}.   (6)

We add to the aligned corpus L:

1. one instance of pair p_{i,j} = (a w_{i,j}, b w_{i,j}) for each i < j, {i, j} ∈ E;

2. q² instances of pair p_{i,j} = (a w_{i,j}, a w_{i,j}) for each i, j ∈ V with i < j and {i, j} ∉ E;

3. q² instances of pair p_a = (a a^q, b a^q).

Also, we set K = q² + K'(K' - 1)/2. The above instance of TS can easily be constructed in polynomial deterministic time with respect to the length of (G, K').

It is easy to show that when (G, K') is a positive instance of the source problem, then the corresponding instance of TS is satisfied by at least one transformation. Assume now that there exists a transformation τ having score greater than or equal to K > 0 w.r.t. L. Since the replacement of a with b is the only rewriting that appears in pairs of L, τ must have the form aγ → b −. If γ includes some occurrence of b, then τ cannot match p_a and the positive evidence of τ will not exceed |E| ≤ q(q - 1)/2 < K, contrary to our assumption. We then conclude that γ has the form (? denotes the don't care symbol):

a^{j_1 - 1} ? a^{j_2 - j_1 - 1} ? ··· ? a^{q' - j_d},

where V'' = {j_1, ..., j_d} ⊆ V, d ≥ 0 and q' ≤ q. If there exist i, j ∈ V'' such that {i, j} ∉ E, then τ would match some pair p_{i,j} ∈ L and it would have negative evidence greater than or equal to q². Since the positive evidence of τ cannot exceed q² + |E|, τ would have a score not exceeding |E| ≤ q(q - 1)/2 < K, contrary to our assumption. Then τ matches no pair p_{i,j} ∈ L with {i, j} ∉ E and, for each i, j ∈ V'', we have {i, j} ∈ E. Since K - q² = K'(K' - 1)/2, at least K'(K' - 1)/2 pairs p_{i,j} ∈ L are matched by τ. We therefore conclude that d ≥ K' and that V'' is a clique in G of size greater than or equal to K'. This concludes our outline of the proof.

5 Concluding remarks

With some minor technical changes to function Up_link_down, we can align a suffix tree with itself (w.r.t. a given homomorphism).
In this way we improve the space performance of Algorithms 2 and 3, avoiding the construction of two copies of the same suffix tree. Algorithm 3 can trivially be adapted to learn transformations in (4) where a left context is specified in place of a right context. The algorithm can also be used to learn traditional phonological rules of the form a → b / _γ, where a, b are single phonemes and γ is a sequence over {C, V}, the classes of consonants and vowels. In this case the algorithm runs in time O(Nn) (for fixed alphabet). We leave it as an open problem whether rules of the form in (4) can be learned in linear time.

We have been concerned with learning the best transformations that should be applied at a given step. An ordered sequence of transformations can be learned by iteratively learning a single transformation and by processing the aligned corpus with the transformation just learned (Brill, 1995). Dynamic techniques for processing the aligned corpus were first proposed in (Ramshaw and Marcus, 1996) to re-edit the corpus only where needed. Those authors report that this is not space efficient if transformation learning is done by independently testing all possible transformations in the search space (as in (Brill, 1995)). The suffix tree alignment data structure allows simultaneous scoring for all transformations. We can now take advantage of this and design dynamic algorithms that re-edit a suffix tree alignment only where needed, along the lines of a similar method for suffix trees in (McCreight, 1976).

An alternative data structure to suffix trees for the representation of string factors, called DAWG, has been presented in (Blumer et al., 1985). We point out here that, because a DAWG is an acyclic graph rather than a tree, straightforward ways of defining alignment between two DAWGs result in a quadratic number of a-links, making DAWGs much less attractive than suffix trees for factor alignment. We believe that suffix tree alignments are a very flexible data structure, and that other transformations could be efficiently learned using these structures.

We do not regard the result in Section 4.3 as a negative one, since general transformations specified as in (5) seem too powerful for the proposed applications in natural language processing, and learning might result in corpus overtraining.

Other than transformation based systems, the methods presented in this paper can be used for learning rules of constraint grammars (Karlsson et al., 1995), phonological rule systems as in (Kaplan and Kay, 1994), and in general those grammatical systems using constraints represented by means of rewriting rules. This is the case whenever we can encode the alphabet of the corpus in such a way that alignment is possible.

Acknowledgements

Part of the present research was done while the first author was visiting the Center for Language and Speech Processing, Johns Hopkins University, Baltimore, MD. The second author is a member of the Center for Language and Speech Processing. This work was funded in part by NSF grant IRI-9502312. The authors are indebted to Eric Brill for technical discussions on topics related to this paper.

References

Apostolico, A. 1985. The myriad virtues of suffix trees. In A. Apostolico and Z. Galil, editors, Combinatorial Algorithms on Words, volume 12, Springer-Verlag, Berlin, Germany, pages 85-96. NATO Advanced Science Institutes, Series F.

Blumer, A., J. Blumer, D. Haussler, A. Ehrenfeucht, M. Chen, and J. Seiferas. 1985.
The smallest automaton recognizing the subwords of a text. Theoretical Computer Science, 40:31-55.

Brill, E. 1995. Transformation-based error-driven learning and natural language processing: A case study in part of speech tagging. Computational Linguistics.

Crochemore, M. and W. Rytter. 1994. Text Algorithms. Oxford University Press, Oxford, UK.

Garey, M. R. and D. S. Johnson. 1979. Computers and Intractability. Freeman and Co., New York, NY.

Kaplan, R. M. and M. Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3):331-378.

Karlsson, F., A. Voutilainen, J. Heikkilä, and A. Anttila. 1995. Constraint Grammar. A Language Independent System for Parsing Unrestricted Text. Mouton de Gruyter.

McCreight, E. M. 1976. A space-economical suffix tree construction algorithm. Journal of the Association for Computing Machinery, 23(2):262-272.

Ramshaw, L. and M. P. Marcus. 1996. Exploring the nature of transformation-based learning. In J. Klavans and P. Resnik, editors, The Balancing Act--Combining Symbolic and Statistical Approaches to Language. The MIT Press, Cambridge, MA, pages 135-156.

Weiner, P. 1973. Linear pattern-matching algorithms. In Proceedings of the 14th IEEE Annual Symposium on Switching and Automata Theory, pages 1-11, New York, NY. Institute of Electrical and Electronics Engineers.
Approximating Context-Free Grammars with a Finite-State Calculus

Edmund GRIMLEY EVANS
Computer Laboratory
University of Cambridge
Cambridge, CB2 3QG, GB
Edmund.Grimley-Evans@cl.cam.ac.uk

Abstract

Although adequate models of human language for syntactic analysis and semantic interpretation are of at least context-free complexity, for applications such as speech processing in which speed is important finite-state models are often preferred. These requirements may be reconciled by using the more complex grammar to automatically derive a finite-state approximation which can then be used as a filter to guide speech recognition or to reject many hypotheses at an early stage of processing. A method is presented here for calculating such finite-state approximations from context-free grammars. It is essentially different from the algorithm introduced by Pereira and Wright (1991; 1996), is faster in some cases, and has the advantage of being open-ended and adaptable.

1 Finite-state approximations

Adequate models of human language for syntactic analysis and semantic interpretation are typically of context-free complexity or beyond. Indeed, Prolog-style definite clause grammars (DCGs) and formalisms such as PATR with feature structures and unification have the power of Turing machines to recognise arbitrary recursively enumerable sets. Since recognition and analysis using such models may be computationally expensive, for applications such as speech processing in which speed is important finite-state models are often preferred.

When natural language processing and speech recognition are integrated into a single system one may have the situation of a finite-state language model being used to guide speech recognition while a unification-based formalism is used for subsequent processing of the same sentences. Rather than write these two grammars separately, which is likely to lead to problems in maintaining consistency, it would be preferable to derive the finite-state grammar automatically from the (unification-based) analysis grammar.

The finite-state grammar derived in this way can not in general recognise the same language as the more powerful grammar used for analysis, but, since it is being used as a front-end or filter, one would like it not to reject any string that is accepted by the analysis grammar, so we are primarily interested in 'sound approximations' or 'approximations from above'.

Attention is restricted here to approximations of context-free grammars because context-free languages are the smallest class of formal language that can realistically be applied to the analysis of natural language. Techniques such as restriction (Shieber, 1985) can be used to construct context-free approximations of many unification-based formalisms, so techniques for constructing finite-state approximations of context-free grammars can then be applied to these formalisms too.

2 Finite-state calculus

A 'finite-state calculus' or 'finite automata toolkit' is a set of programs for manipulating finite-state automata and the regular languages and transducers that they describe. Standard operations include intersection, union, difference, determinisation and minimisation. Recently a number of automata toolkits have been made publicly available, such as FIRE Lite (Watson, 1996), Grail (Raymond and Wood, 1996), and FSA Utilities (van Noord, 1996).

Finite-state calculus has been successfully applied both to morphology (Kaplan and Kay, 1994; Kempe and Karttunen, 1996) and to syntax (constraint grammar, finite-state syntax).

The work described here used a finite-state calculus implemented by the author in SICStus Prolog.
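As an illustration of the kind of operation such a calculus provides (this is a generic sketch in Python, not the author's SICStus Prolog code), intersection of two deterministic automata can be computed by the standard product construction:

  # Sketch of DFA intersection by the product construction. A DFA is
  # represented here as (start state, set of accepting states,
  # transition dict mapping (state, symbol) -> state).
  def intersect(dfa1, dfa2):
      (s1, f1, d1), (s2, f2, d2) = dfa1, dfa2
      alphabet = {a for (_, a) in d1} | {a for (_, a) in d2}
      start = (s1, s2)
      trans, accept = {}, set()
      todo, seen = [start], {start}
      while todo:
          p, q = todo.pop()
          if p in f1 and q in f2:
              accept.add((p, q))
          for a in alphabet:
              if (p, a) in d1 and (q, a) in d2:
                  nxt = (d1[(p, a)], d2[(q, a)])
                  trans[((p, q), a)] = nxt
                  if nxt not in seen:
                      seen.add(nxt)
                      todo.append(nxt)
      return start, accept, trans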
Finite-state calculus has been successfully applied both to morphology (Kaplan and Kay, 1994; Kempe and Karttunen, 1996) and to syntax (constraint grammar, finite-state syntax). The work described here used a finite-state calcu- lus implemented by the author in SICStus Prolog. 452 The use of Prolog rather than C or C++ causes large overheads in the memory and time required. How- ever, careful account has been taken of the way Pro- log operates, its indexing in particular, in order to ensure that the asymptotic complexity is as good as that of the best published algorithms, with the result that for large problems the Prolog implementation outperforms some of the publicly available imple- mentations in C++. Some versions of the calculus allow transitions to be labelled with arbitrary Prolog terms, including variables, a feature that proved to be very convenient for prototyping although it does not essentially alter the power of the machinery. (It is assumed that the string being tested consists of ground terms so no unification is performed, just matching.) 3 An approximation algorithm There are two main ideas behind this algorithm. The first is to describe the finite-state approximation us- ing formulae with regular languages and finite-state operations and to evaluate the formulae directly us- ing the finite-state calculus. The second is to use, in intermediate stages of the calculation, additional, auxiliary symbols which do not appear in the final result. A similar approach has been used for compil- ing a two-level formalism for morphology (Grimley Evans et al., 1996). In this case the auxiliary symbols are dotted rules from the given context-free grammar. A dotted rule is a grammar rule with a dot inserted somewhere on the right-hand side, e.g. S -+ - NP VP S -+ NP • VP S --~ NP VP • However, since these dotted rules are to be used as terminal symbols of a regular language, it is con- venient to use a more compact notation: they can be replaced by a triple made out of the nonterminal symbol on the left-hand side, an integer to determine one of the productions for that nonterminal, and an integer to denote the position of the dot on the right- hand side by counting the number of symbols to the left of the dot. So, if 'S ~ NP VP' is the fourth production for S, the dotted rules given above may be denoted by (S, 4, 0}, (S, 4, 1) and (S, 4, 2}, respec- tively. It will turn out to be convenient to use a slightly more complicated notation: when the dot is located after the last symbol on the right-hand side we use z as the third element of the triple instead of the corre- sponding integer, so the last triple is (S, 4, z) instead of (S, 4,2). (Note that z is an additional symbol, not a variable.) Moreover, for epsilon-rules, where there are no symbols on the right-hand side, we treat the e as it were a real symbol and consider there to be two corresponding dotted rules, e.g. (MOD, 1, O) and (MOD, 1, z) corresponding to 'MOD --~ • e' and 'MOD --~ e -' for the rule 'MOD -+ e'. Using these dotted rules as auxiliary symbols we can work with regular languages over the alphabet E= TU{ (X,m,n) ]X E V Am= I,...,mxA n = O,...,max{nx,m - 1,O},z} where T is the set of terminal symbols, V is the set of nonterminals, mx is the number of productions for nonterminal X, and nx,m is the number of symbols on the right-hand side of the ruth production for X. It will be convenient to use the symbol * as a 'wildcard', so (s,*, O) means { (X,m,n} E E IX = s,n=O} and (*,*,z) means {(X,m,n) E Eln= z }. 
(This last example explains why we use z rather than nx,rn; it would otherwise not be possible to use the 'wildcard' notation to denote concisely the set { (X, m, n) I n = nx,m }.) We can now attempt to derive an expression for the set of strings over E that represent a valid parse tree for the given grammar: the tree is traversed in a top-down left-to-right fashion and the daughters of a node X expanded with the ruth production for X are separated by the symbols (X, m, .). (Equivalently, one can imagine the auxiliary symbols inserted in the appropriate places in the right-hand side of each production so that the grammar is then unambigu- ous.) Consider, for example, the following grammar: S--+ aSb S--+e Then the following is one of the strings over E that we would like to accept, corresponding to the string aabb accepted by the grammar: (s, 1, O)a(s, 1, 1}(s, 1, O}a(s, 1, 1)(s, 2, 0)(s, 2, z) (s, 1, 2)b(s, 1, z)(s, 1, 2)b(s, 1, z) Our first approximation to the set of acceptable strings is (S, *, 0)N*(S,*, z), i.e. strings that start with beginning to parse an S and end with having parsed an S. From this initial approximation we sub- tract (that is, we intersect with the complement of) a series of expressions representing restrictions on the set of acceptable strings: 1 1In these expressions over regular languages set union and set difference are denoted by + and -, respectively, while juxtaposition denotes concatenation and the bar denotes complementation (5 - E* - x). 453 (z*((,, ,, ,) - (,,,, z))) + (1) Formula 1 expresses the restriction that a dotted rule of the form (%., 0), which represents starting to parse the right-hand side of a rule, may be preceded only by nothing (the start of the string) or by a dotted rule that is not of the form (*, *, z) (which would represent the end of parsing the right-hand side of a rule). + ((,,,,,) - (,,,,0))z* (2) Formula 2 similarly expresses the restriction that a dotted rule of the form (*, *, z) may be followed only by nothing or by a dotted rule that is not of the form (*, *, 0). For each non-epsilon-rule with dotted rules (X,m,n), n = O,...,nx,m - 1,z, for each n = 0,...,nx,m- 1: E*(X,m,n)next(X,m,n + 1)E* (3) where next(X, m, n) = a(X,m,n) (rhs(X, m, n) = a, aCT, n<nx,m) a(X,m,z) (rhs(X, m, n) = a, aeT, n=nx,m) (A, *, 0) (rhs(X, m, n) = A, A e V) where rhs(X, m, n) is the nth symbol on the right- hand side of the ruth production for X. Formula 3 states that the dotted rule (X, m, n) must be followed by a(X, m, n + 1) (or a(X, m, z) when n+ 1 = nx,m) when the next item to be parsed is the terminal a, or by C A, *, 0) (starting to parse an A) when the next item is the nonterminal A. For each non-epsilon-rule with dotted rules (X,m,n), n = O,...,nx,,~ - 1,z, for each n = 1,..., n x , m - 1, z: E*prev(X, m, n)(X, m, n)E* (4) where prev(X, m, n) = iX, re, n- 1)a (rhs(X, m, n) = a, a C T, n ~ z) (X, m, nx,m - 1)a (rhs(X, m, n) = a, a • T, n = z) (A, *, z) (rhs(X, m, n) = A, A • V) Formula 4 similarly states that the dotted rule (X, m, n) must be preceded by i X, m, n - 1)a (or (X,m, nx,m - 1) when n = z) when the previous item was the terminal a, or by (A,*,z) when the previous item was the nonterminal A. For each epsilon-rule corresponding to dotted rules (X,m,O) and (X,m,z): E*(X,m,O)(X,m,z)E*, and (5) (x, m, 0)(x, m, (6) Formulae 5 and 6 state that the dotted rule (X, ra,0) must be followed by (X,m,z), and (X, m, z) must be preceded by iX, m, 0). 
For each non-epsilon rule with dotted rules iX, re, n), n : O,...,nx,m - 1,z, for each n : O,...,nx,m- 1: m,*))*(iX, m,0)+(X,m,n'))Z* (r) and m, z)+ (x, m, - (X, m, *))* iX, m, (S) where n' = ~ n + 1, if n < nx,ra -- 1; [ z, if n = nx,m - 1. Formula 7 states that the next instance of (X,m,*) that follows (X,m,n) must be either (X, m, 0) (a recursive application of the same rule) or (X,m,n') (the next stage in parsing the same rule), and there must be such an instance. Formula 8 states similarly that the closest instance of (X, m, *) that precedes (X, m, n') must be either (X, m, z) (a recursive application of the same rule) or (X, m, n) (the previous stage in parsing the same rule), and there must be such an instance. When each of these sets has been subtracted from the initial approximation we can remove the auxil- iary symbols (by applying the regular operator that replaces them with e) to give the final finite-state approximation to the context-free grammar. 4 A small example It may be admitted that the notation used for the dotted rules was partly motivated by the possibil- ity of immediately testing the algorithm using the finite-state calculus in Prolog: the regular expres- sions listed above can be evaluated directly using the 'wildcard' capabilities of the finite-state calculus. Figure 2 shows the sequence of calculations that corresponds to applying the algorithm to the follow- ing grammar: S-~aSb S-~e With the following notational explanations it should be possible to understand the code and compare it with the description of the algorithm. • The procedure r(RE,X) evaluates the regu- lar expression RE and puts the resulting (min- imised) automaton into a register with the name X. 454 • list_fsa(X)prints out the transition table for the automaton in register X. • Terminal symbols may be any Prolog terms, so the terminal alphabet is implicit. Here atoms are used for the terminal symbols of the gram- mar (a and b) and terms of the form _/_/_ are used for the triples representing dotted rules. The terms need not be ground, so the Prolog variable symbol _ is used instead of the 'wild- card' symbol • in the description of the algo- rithm. • In a regular expression: - #X refers to the contents of register X; - $ represents E, any single terminal symbol; - s represents a string of terminals with length equal to the number of arguments; so s with no arguments represents the empty string e, s(a) represents the single terminal a, and s(s/_/0) represents the dotted rules (s, *, 0); - Kleene star is * (redefined as a postfix op- erator), and concatenation and union are ^ and +, respectively; - other operators provided include ~ (inter- section) and - (difference); there is no oper- ator for complementation; instead subtrac- tion from E* may be used, e.g. ($ *)-(#1) instead of L; - rein(RE,L) denotes the result of removing from the language RE all terminals that match one of the expressions in the list L. The context-free language recognised by the origi- nal context-free grammar is { anb n [ n > 0 }. The re- sult of applying the approximation algorithm is a 3- state automaton recognising the language e + a+b +. 5 Computational complexity Applying the restrictions expressed by formulae 1-6 gives an automaton whose size is at most a small constant multiple of the size of the input grammar. 
This is because these restrictions apply locally: the state that the automaton is in after reading a dotted rule is a function of that dotted rule• When restrictions 7-8 are applied the final au- tomaton may have size exponential in the size of the input grammar. For example, exponential behaviour is exhibited by the following class of grammars: S --+ al S al S -+ an S an S-+e Here the final automaton has 3 n states. (It records, in effect, one of three possibilities for each terminal symbol: whether it has not yet appeared, has ap- peared and must appear again, or has appeared and need not appear again.) There is an important computational improve- ment that can be made to the algorithm as described above: instead of removing all the auxiliary symbols right at the end they can be removed progressively as soon as they are no longer required; after formulae 7-8 have been applied for each non-epsilon rule with dotted rules (X,m,*), those dotted rules may be removed from the finite-state language (which typi- cally makes the automaton smaller); and the dotted rules corresponding to an epsilon production may be removed before formulae 7-8 are applied. (To 'remove' a symbol means to substitute it by e: a regular operation.) With this important improvement the algorithm gives exact approximations for the left-linear gram- mars S-~ Sal S~San S--+e and the right-linear grammars S --+ al S S --+ an S S--+e in space bounded by n and time bounded by n 2. (It is easiest to test this empirically with an implemen- tation, though it is also possible to check the cal- culations by hand.) Pereira and Wright's algorithm gives an intermediate unfolded recogniser of size ex- ponential in n for these right-linear grammars. There are, however, both left-linear and right- linear grammars for which the number of states in the final automaton is not bounded by any polyno- mial function of the size of the grammar. An exam- ples is: S --~ al S S~al A1 S-+anS S-+anAn A~ -+ a~ X A2 ---+ al A2 An -~ al An X-+e A1 -+ a2 Az ... A1 ~ an A1 A2 -+ a2 X ... A2 --~ an A2 An -+ a2 A,~ ... An --~ an X Here the grammar has size O(n 2) and the final ap- proximation has 2 n+l -- 1 states. 455 MOD --+ MOD --+ p NP NOM --+ a NOM NOM --+ n NOM --+ NOM MOD NOM --+ NOM S NP --+ NP ~ d NOM VP --+ v NP VP-~ vS VP -~ v VP VP --+v VP --+ VP c VP VP ~ VP MOD S ~ MOD S S-+NP S S~ScS S ~ v NP VP Figure 1: An 18-rule CFG derived from a unification grammar. Pereira and Wright (1996) point out in the context of their algorithm that a grammar may be decom- posed into 'strongly connected' subgrammars, each of which may be approximated separately and the results composed. The same method can be used with the finite-state calculus approach: Define the relation 7~ over nonterminals of the grammar s.t. ATC.B iff B appears on the right-hand side of a pro- duction for A. Then the relation $ = 7~* A (7~*) -1, the reflexive transitive closure of 7~ intersected with its inverse, is an equivalence relation. A subgram- mar consists of all the productions for nonterminals in one of the equivalence classes of S. Calculate the approximations for each nonterminal by treating the nonterminals that belong to other equivalence classes as if they were terminals. Finally, combine the results from each subgrammar by starting with the approximation for the start symbol S and substi- tuting the approximations from the other subgram- mars in an order consistent with the partial ordering that is induced by 7~ on the subgrammars. 
6 Results with a larger grammar

When the algorithm was applied to the 18-rule grammar shown in figure 1 it was not possible to complete the calculations for any ordering of the rules, even with the improvement mentioned in the previous section, as the automata became too large for the finite-state calculus on the computer that was being used. (Note that the grammar forms a single strongly connected component.)

However, it was found possible to simplify the calculation by omitting the application of formulae 7-8 for some of the rules. (The auxiliary symbols not involved in those rules could then be removed before the application of 7-8.) In particular, when restrictions 7-8 were applied only for the S and VP rules the calculations could be completed relatively quickly, as the largest intermediate automaton had only 406 states. Yet the final result was still a useful approximation with 16 states.

Pereira and Wright's algorithm applied to the same problem gave an intermediate automaton (the 'unfolded recogniser') with 56272 states, and the final result (after flattening and minimisation) was a finite-state approximation with 13 states. The two approximations are shown for comparison in figure 3. Each has the property that the symbols d, a and n occur only in the combination d a* n. This fact has been used to simplify the state diagrams by treating this combination as a single terminal symbol dan; hence the approximations are drawn with 10 and 9 states, respectively.

Neither of the approximations is better than the other; their intersection (with 31 states) is a better approximation than either. The two approximations have therefore captured different aspects of the context-free language.

In general it appears that the approximations produced by the present algorithm tend to respect the necessity for certain constituents to be present, at whatever point in the string the symbols that 'trigger' them appear, without necessarily insisting on their order, while Pereira and Wright's approximation tends to take greater account of the constituents whose appearance is triggered early on in the string: most of the complexity in Pereira and Wright's approximation of the 18-rule grammar is concerned with what is possible before the first accepting state is encountered.

7 Comparison with previous work

Rimon and Herz (1991; 1991) approximate the recognition capacity of a context-free grammar by extracting 'local syntactic constraints' in the form of the Left or Right Short Context of length n of a terminal. When n = 1 this reduces to next(t), the set of terminals that may follow the terminal t. The effect of filtering with Rimon and Herz's next(t) is similar to applying conditions 1-6 from section 3, but the use of auxiliary symbols causes two differences which can both be illustrated with the following grammar:

S -> a X a | b X b
X -> ε

On the one hand, Rimon and Herz's 'next' does not distinguish between different instances of the same terminal symbol, so any a, and not just the first one, may be followed by another a. On the other hand, Rimon and Herz's 'next' looks beyond the empty constituent in a way that conditions 1-6 do not, so ab is disallowed. Thus an approximation based on Rimon and Herz's 'next' would be aa* + bb*, and an approximation based on conditions 1-6 would be (a + b)(a + b). (However, the approximation becomes exact when conditions 7-8 are added.)
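The two approximations of this example are small enough to compare exhaustively. A quick check (ours, using Python's re module purely for convenience) enumerates short strings over {a, b} and tests them against the context-free language {aa, bb} and the two regular approximations just given.

# A sketch (ours): comparing the two approximations of S -> a X a | b X b, X -> e.
import itertools, re

CF_LANGUAGE = {"aa", "bb"}
NEXT_APPROX = re.compile(r"(aa*|bb*)$")    # aa* + bb* (Rimon and Herz's next)
COND16_APPROX = re.compile(r"[ab][ab]$")   # (a + b)(a + b) (conditions 1-6)

for n in range(1, 4):
    for s in map("".join, itertools.product("ab", repeat=n)):
        print(s,
              "CF" if s in CF_LANGUAGE else "--",
              "next" if NEXT_APPROX.match(s) else "----",
              "1-6" if COND16_APPROX.match(s) else "---")
# Both approximations contain {aa, bb}; 'ab' is accepted by conditions 1-6 but
# rejected by the next-based filter, while 'aaa' is accepted by the next-based
# filter but rejected by conditions 1-6.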
r((#a) - ($ *)'s(s/1/2)^(($ *)-s(b)^s(s/1/z)^($ *)) , a). formula (4) for "S -> a S b": r((#a) - (($ *)-($ *)'s(s/1/0)^s(a))^s(s/1/1)^($ ,) , a). r((#a) - (($ *)-($ *)^s(s/_/z))'s(vp/2/1)'($ *) , a). r((#a) - (($ *)-($ *)^s(s/i/2)^s(b))^s(s/i/z)^($ *) , a). formulae (5)-(6) for "S -> "" r((#a) - ($ *)'s(s/2/O)^(($ *)-s(s/2/z)^($ *)) , a). r((#a) - (($ *)-($ *)^s(s/2/O))^s(s/2/z)'($ *) , a). formula (7) for "S -> a S b": r((#a)-($ *)^s(s/1/0)^(($ *)-(($ -s(s/1/_))*)^(s(s/1/O)+s(s/1/1))^($ *)),a). r((#a)-($ *)'s(s/1/1)^(($ *)-(($ -s(s/1/_))*)^(s(s/1/O)+s(s/1/2))^($ *)),a). r((#a)-($ *)'sCs/1/2)^(($ *)-(($ -sCs/1/_))*)^(s(s/1/O)+s(s/1/z))^($ *)),a). formula (8) for "S -> a S b": r((#a)-(($ *)-($ *)^(s(s/1/z)+s(s/1/O))^(($ -s(s/1/_)).))^s(s/1/1)'($ *),a). r((#a)-(($ *)-($ *)'(s(s/i/z)+s(s/i/l))^(($ -s(s/i/_))*))'s(s/i/2)^($ *),a). r((#a)-(($ *)-($ *)^(s(s/i/z)+s(s/I/2))^(($ -s(s/i/_)).))^s(s/i/z)^($ *),a). define the terminal alphabet: r(s(s/i/O)+s(s/i/l)+s(s/i/2)+s(s/i/z)+s(s/2/O)+s(s/2/z)+s(a)+s(b), sigma). remove the auxiliary symbols to give final result: r(rem((#a)a((#sigma) *),[_/_/_]) , f). list_fsa(f). Figure 2: The sequence of calculations for approximating S -+ a S b I e, coded for the finite-state calculus. vC p v dan p I c v//. I' v , d a n ~ ~ Figure 3: Finite-state approximations for the grammar in figure 1 calculated with the finite-state calculus (left) and by Pereira and Wright's algorithm (right). 457 ab is disallowed. Thus an approximation based on Rimon and Herz's 'next' would be aa* + bb*, and an approximation based on conditions 1-6 would be (a + b) (a + b). (However, the approximation becomes exact when conditions 7-8 are added.) Both Pereira and Wright (1991; 1996) and Rood (1996) start with the LR(0) characteristic machine, which they first 'unfold' (with respect to 'stacks' or 'paths', respectively) and then 'flatten'. The char- acteristic machine is defined in terms of dotted rules with transitions between them that are analagous to the conditions implied by formula 3 of section 3. When the machine is flattened, e-transitions are added in a way that is in effect simulated by condi- tions 2 and 4. (Condition 1 turns out to be implied by conditions 2-4.) It can be shown that the approx- imation L0 obtained by flattening the characteristic machine (without unfolding it) is as good as the ap- proximation L1-6 obtained by applying conditions 1-6 (L0 c L1-6). Moreover, if no nonterminal for which there is an e-production is used more than once in the grammar, then L0 = L1-6. (The gram- mar in figure 1 is an example for which Lo # L1-6; the approximation found in section 6 includes strings such as vvccvv which are not accepted by L0 for this grammar.) It can also be shown that LI-~ is the same as the result of flattening the character- istic machine for the same grammar modifed so as to fulfil the afore-mentioned condition by replacing the right-hand side of every e-production with a new nonterminal for which there is a single e-production. However, there does not seem to be a simple corre- spondence between conditions 7-8 and the 'unfold- ing' used by Pereira and Wright or Rood: even some simple grammars such as 'S ~ a S a [ b S b I e' are approximated differently by 1-8 than by Pereira and Wright's and Rood's methods. 8 Discussion and conclusions In the case of some simple examples (such as the grammar 'S --~ a S b I e' used earlier) the approxi- mation algorithm presented in this paper gives the same result as Pereira and Wright's algorithm. 
8 Discussion and conclusions

In the case of some simple examples (such as the grammar 'S -> a S b | ε' used earlier) the approximation algorithm presented in this paper gives the same result as Pereira and Wright's algorithm. However, in many other cases (such as the grammar 'S -> a S a | b S b | ε' or the 18-rule grammar in the previous section) the results are essentially different and neither of the approximations is better than the other.

The new algorithm does not share the problem of Pereira and Wright's algorithm that certain right-linear grammars give an intermediate automaton of exponential size, and it was possible to calculate a useful approximation fairly rapidly in the case of the 18-rule grammar in the previous section. However, it is not yet possible to draw general conclusions about the relative efficiency of the two procedures. Nevertheless, the new algorithm seems to have the advantage of being open-ended and adaptable: in the previous section it was possible to complete a difficult calculation by relaxing the conditions of formulae 7-8, and it is easy to see how those conditions might also be strengthened. For example, a more complicated version of formulae 7-8 might check two levels of recursive application of the same rule rather than just one level, and it might be useful to generalise this to n levels of recursion in a manner analogous to Rood's (1996) generalisation of Pereira and Wright's algorithm.

The algorithm also demonstrates how the general machinery of a finite-state calculus can be usefully applied as a framework for expressing and solving problems in natural language processing.

References

Grimley Evans, Edmund, George Kiraz, and Stephen Pulman. 1996. Compiling a Partition-Based Two-Level Formalism. COLING-96, 454-459.
Herz, Jacky, and Mori Rimon. 1991. Local Syntactic Constraints. Second International Workshop on Parsing Technology (IWPT-2).
Kaplan, Ronald, and Martin Kay. 1994. Regular models of phonological rule systems. Computational Linguistics, 20(3): 331-78.
Kempe, André, and Lauri Karttunen. 1996. Parallel Replacement in Finite State Calculus. COLING-96, 622.
Pereira, Fernando, and Rebecca Wright. 1991. Finite-state approximation of phrase structure grammars. Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, 246-255.
Pereira, Fernando, and Rebecca Wright. 1996. Finite-State Approximation of Phrase-Structure Grammars. cmp-lg/9603002.
Raymond, Darrell, and Derick Wood. March 1996. The Grail Papers. University of Western Ontario, Department of Computer Science, Technical Report TR-491.
Rimon, Mori, and Jacky Herz. 1991. The recognition capacity of local syntactic constraints. ACL Proceedings, 5th European Meeting.
Rood, Cathy. 1996. Efficient Finite-State Approximation of Context-Free Grammars. Proceedings of ECAI 96.
Shieber, Stuart. 1985. Using restriction to extend parsing algorithms for complex-feature-based formalisms. Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics, 145-152.
Van Noord, Gertjan. 1996. FSA Utilities: Manipulation of Finite-State Automata implemented in Prolog. First International Workshop on Implementing Automata, University of Western Ontario, London Ontario, 29-31 August 1996.
Watson, Bruce. 1996. Implementing and using finite automata toolkits. Proceedings of ECAI 96.
Finite State Transducers Approximating Hidden Markov Models

André Kempe
Rank Xerox Research Centre - Grenoble Laboratory
6, chemin de Maupertuis - 38240 Meylan - France
andre.kempe@grenoble.rxrc.xerox.com
http://www.rxrc.xerox.com/research/mltt

Abstract

This paper describes the conversion of a Hidden Markov Model into a sequential transducer that closely approximates the behavior of the stochastic model. This transformation is especially advantageous for part-of-speech tagging because the resulting transducer can be composed with other transducers that encode correction rules for the most frequent tagging errors. The speed of tagging is also improved. The described methods have been implemented and successfully tested on six languages.

1 Introduction

Finite-state automata have been successfully applied in many areas of computational linguistics.

This paper describes two algorithms which approximate a Hidden Markov Model (HMM) used for part-of-speech tagging by a finite-state transducer (FST). [Footnote 1: There is a different (unpublished) algorithm by Julian M. Kupiec and John T. Maxwell (p.c.).] These algorithms may be useful beyond the current description on any kind of analysis of written or spoken language based on both finite-state technology and HMMs, such as corpus analysis, speech recognition, etc. Both algorithms have been fully implemented.

An HMM used for tagging encodes, like a transducer, a relation between two languages. One language contains sequences of ambiguity classes obtained by looking up in a lexicon all words of a sentence. The other language contains sequences of tags obtained by statistically disambiguating the class sequences. From the outside, an HMM tagger behaves like a sequential transducer that deterministically maps every class sequence to a tag sequence, e.g.:

[DET,PRO] [ADJ,NOUN] [ADJ,NOUN] ...... [END]
DET       ADJ        NOUN       ...... END     (1)

The aim of the conversion is not to generate FSTs that behave in the same way, or in as similar a way as possible, as HMMs, but rather FSTs that perform tagging in as accurate a way as possible. The motivation to derive these FSTs from HMMs is that HMMs can be trained and converted with little manual effort.

The tagging speed when using transducers is up to five times higher than when using the underlying HMMs. The main advantage of transforming an HMM is that the resulting transducer can be handled by finite state calculus. Among others, it can be composed with transducers that encode:

• correction rules for the most frequent tagging errors which are automatically generated (Brill, 1992; Roche and Schabes, 1995) or manually written (Chanod and Tapanainen, 1995), in order to significantly improve tagging accuracy. [Footnote 2: Automatically derived rules require less work than manually written ones but are unlikely to yield better results because they would consider relatively limited context and simple relations only.] These rules may include long-distance dependencies not handled by HMM taggers, and can conveniently be expressed by the replace operator (Kaplan and Kay, 1994; Karttunen, 1995; Kempe and Karttunen, 1996).
• further steps of text analysis, e.g. light parsing or extraction of noun phrases or other phrases (Aït-Mokhtar and Chanod, 1997).

These compositions enable complex text analysis to be performed by a single transducer.

An HMM transducer builds on the data (probability matrices) of the underlying HMM.
The accuracy of this data has an impact on the tagging accuracy of both the HMM itself and the derived transducer. The training of the HMM can be done on either a tagged or untagged corpus, and is not a topic of this paper since it is exhaustively described in the literature (Bahl and Mercer, 1976; Church, 1988).

An HMM can be identically represented by a weighted FST in a straightforward way. We are, however, interested in non-weighted transducers.

2 n-Type Approximation

This section presents a method that approximates a (1st order) HMM by a transducer, called n-type approximation. [Footnote 3: Name given by the author.]

Like in an HMM, we take into account initial probabilities π, transition probabilities a and class (i.e., observation symbol) probabilities b. We do, however, not estimate probabilities over paths. The tag of the first word is selected based on its initial and class probability. The next tag is selected on its transition probability given the first tag, and its class probability, etc. Unlike in an HMM, once a decision on a tag has been made, it influences the following decisions but is itself irreversible.

A transducer encoding this behaviour can be generated as sketched in figure 1. In this example we have a set of three classes, c1 with the two tags t11 and t12, c2 with the three tags t21, t22 and t23, and c3 with one tag t31. Different classes may contain the same tag, e.g. t12 and t23 may refer to the same tag.

For every possible pair of a class and a tag (e.g. c1:t12 or [ADJ,NOUN]:NOUN) a state is created and labelled with this same pair (fig. 1). An initial state which does not correspond with any pair is also created. All states are final, marked by double circles. For every state, as many outgoing arcs are created as there are classes (three in fig. 1). Each such arc for a particular class points to the most probable pair of this same class. If the arc comes from the initial state, the most probable pair of a class and a tag (destination state) is estimated by:

argmax_k p1(ci, tik) = π(tik) b(ci|tik)   (2)

If the arc comes from a state other than the initial state, the most probable pair is estimated by:

argmax_k p2(ci, tik) = a(tik|t_previous) b(ci|tik)   (3)

In the example (fig. 1) c1:t12 is the most likely pair of class c1, and c2:t23 the most likely pair of class c2 when coming from the initial state, and c2:t21 the most likely pair of class c2 when coming from the state of c3:t31.

Every arc is labelled with the same symbol pair as its destination state, with the class symbol in the upper language and the tag symbol in the lower language. E.g. every arc leading to the state of c1:t12 is labelled with c1:t12.

Finally, all state labels can be deleted since the behaviour described above is encoded in the arc labels and the network structure. The network can be minimized and determinized.

We call the model an n1-type model, the resulting FST an n1-type transducer and the algorithm leading from the HMM to this transducer an n1-type approximation of a 1st order HMM. Adapted to a 2nd order HMM, this algorithm would give an n2-type approximation. Adapted to a zero order HMM, which means only to use class probabilities b, the algorithm would give an n0-type approximation. n-type transducers have deterministic states only.

[Figure 1: Generation of an n1-type transducer (classes, tags of classes, and the arcs between class-tag states; the diagram is not reproduced here).]
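The n1-type construction amounts to a pair of argmax computations, one per state and class. The following is a minimal sketch (ours, not the paper's implementation; all probability figures and names below are invented toy values) that builds the arcs of an n1-type transducer from the HMM tables and then tags a class sequence deterministically.

# A sketch (ours): n1-type arcs from pi, a, b (equations 2 and 3).
PI = {"t11": 0.5, "t12": 0.3, "t21": 0.2}
A = {("t11", "t12"): 0.4, ("t11", "t21"): 0.6,
     ("t12", "t11"): 0.7, ("t12", "t21"): 0.3,
     ("t21", "t11"): 0.5, ("t21", "t12"): 0.5}     # A[(previous tag, next tag)]
B = {("c1", "t11"): 0.6, ("c1", "t12"): 0.4, ("c2", "t21"): 1.0}
CLASSES = {"c1": ["t11", "t12"], "c2": ["t21"]}

def n1_transducer(classes, pi, a, b):
    arcs = {}  # arcs[state][class] = destination state (class, tag)
    states = ["INIT"] + [(c, t) for c, tags in classes.items() for t in tags]
    for state in states:
        for c, tags in classes.items():
            if state == "INIT":
                best = max(tags, key=lambda t: pi.get(t, 0) * b.get((c, t), 0))
            else:
                prev = state[1]
                best = max(tags, key=lambda t: a.get((prev, t), 0) * b.get((c, t), 0))
            arcs.setdefault(state, {})[c] = (c, best)
    return arcs

ARCS = n1_transducer(CLASSES, PI, A, B)
state, tags = "INIT", []
for c in ["c1", "c2", "c1"]:   # deterministic tagging: follow the arcs
    state = ARCS[state][c]
    tags.append(state[1])
print(tags)  # ['t11', 't21', 't11']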
3 s-Type Approximation

This section presents a method that approximates an HMM by a transducer, called s-type approximation. [Footnote 4: Name given by the author.]

Tagging a sentence based on a 1st order HMM includes finding the most probable tag sequence T given the class sequence C of the sentence. The joint probability of C and T can be estimated by:

p(C, T) = p(c1 ... cn, t1 ... tn) = π(t1) b(c1|t1) Π_{i=2..n} a(ti|ti-1) b(ci|ti)   (4)

The decision on a tag of a particular word cannot be made separately from the other tags. Tags can influence each other over a long distance via transition probabilities. Often, however, it is unnecessary to decide on the tags of the whole sentence at once. In the case of a 1st order HMM, unambiguous classes (containing one tag only), plus the sentence beginning and end positions, constitute barriers to the propagation of HMM probabilities. Two tags with one or more barriers in between do not influence each other's probability.

3.1 s-Type Sentence Model

To tag a sentence, one can split its class sequence at the barriers into subsequences, then tag them separately and concatenate them again. The result is equivalent to the one obtained by tagging the sentence as a whole.

We distinguish between initial and middle subsequences. The final subsequence of a sentence is equivalent to a middle one, if we assume that the sentence end symbol (. or ! or ?) always corresponds to an unambiguous class cu. This allows us to ignore the meaning of the sentence end position as an HMM barrier because this role is taken by the unambiguous class cu at the sentence end.

An initial subsequence Ci starts with the sentence initial position, has any number (incl. zero) of ambiguous classes ca and ends with the first unambiguous class cu of the sentence. It can be described by the regular expression [Footnote 5: Regular expression operators used in this section are explained in the annex.]:

Ci = ca* cu   (5)

The joint probability of an initial class subsequence Ci of length r, together with an initial tag subsequence Ti, can be estimated by:

p(Ci, Ti) = π(t1) b(c1|t1) Π_{j=2..r} a(tj|tj-1) b(cj|tj)   (6)

A middle subsequence Cm starts immediately after an unambiguous class cu, has any number (incl. zero) of ambiguous classes ca and ends with the following unambiguous class cu:

Cm = ca* cu   (7)

For correct probability estimation we have to include the immediately preceding unambiguous class cu, actually belonging to the preceding subsequence Ci or Cm. We thereby obtain an extended middle subsequence:

Cm^e = cu ca* cu   (8)

The joint probability of an extended middle class subsequence Cm^e of length s, together with a tag subsequence Tm^e, can be estimated by:

p(Cm^e, Tm^e) = b(c1|t1) Π_{j=2..s} a(tj|tj-1) b(cj|tj)   (9)

3.2 Construction of an s-Type Transducer

To build an s-type transducer, a large number of initial class subsequences Ci and extended middle class subsequences Cm^e are generated in one of the following two ways:

(a) Extraction from a corpus

Based on a lexicon and a guesser, we annotate an untagged training corpus with class labels. From every sentence, we extract the initial class subsequence Ci that ends with the first unambiguous class cu (eq. 5), and all extended middle subsequences Cm^e ranging from any unambiguous class cu (in the sentence) to the following unambiguous class (eq. 8). A frequency constraint (threshold) may be imposed on the subsequence selection, so that the only subsequences retained are those that occur at least a certain number of times in the training corpus. [Footnote 6: The frequency constraint may prevent the encoding of rare subsequences which would increase the size of the transducer without contributing much to the tagging accuracy.]

(b) Generation of possible subsequences

Based on the set of classes, we generate all possible initial and extended middle class subsequences, Ci and Cm^e (eq. 5, 8), up to a defined length.
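The splitting of step (a) is purely mechanical once the unambiguous classes are known. A minimal sketch (ours; a class is represented as a list of tags and is taken to be unambiguous when it contains exactly one tag) follows.

# A sketch (ours): splitting a class sequence at unambiguous classes into one
# initial subsequence Ci = ca* cu and extended middle subsequences Cm^e = cu ca* cu.
def split_subsequences(classes):
    """classes: list of tag lists, e.g. [["DET"], ["ADJ", "NOUN"], ...]."""
    barriers = [i for i, c in enumerate(classes) if len(c) == 1]
    initial = classes[: barriers[0] + 1]                    # up to the first cu
    middles = [classes[barriers[j]: barriers[j + 1] + 1]    # cu ... cu
               for j in range(len(barriers) - 1)]
    return initial, middles

CLASS_SEQ = [["DET"], ["ADJ", "NOUN"], ["ADJ", "NOUN"], ["NOUN"], ["IN"]]
init, mids = split_subsequences(CLASS_SEQ)
print(init)  # [['DET']]
print(mids)  # [[['DET'], ['ADJ','NOUN'], ['ADJ','NOUN'], ['NOUN']], [['NOUN'], ['IN']]]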
Every class subsequence Ci or Cm^e is first disambiguated based on a 1st order HMM, using the Viterbi algorithm (Viterbi, 1967; Rabiner, 1990) for efficiency, and then linked to its most probable tag subsequence Ti or Tm^e by means of the cross product operation:

Si = Ci .x. Ti = c1:t1 c2:t2 ...... cn:tn   (10)

Sm^e = Cm^e .x. Tm^e = c1:t1 c2:t2 ...... cn:tn   (11)

In all extended middle subsequences Sm^e, e.g.:

Sm^e = Cm^e / Tm^e = [DET] [ADJ,NOUN] [ADJ,NOUN] [NOUN] / DET ADJ ADJ NOUN   (12)

the first class symbol on the upper side and the first tag symbol on the lower side will be marked as an extension that does not really belong to the middle sequence but which is necessary to disambiguate it correctly. Example (12) becomes:

Sm^0 = Cm^0 / Tm^0 = 0.[DET] [ADJ,NOUN] [ADJ,NOUN] [NOUN] / 0.DET ADJ ADJ NOUN   (13)

We then build the union uSi of all initial subsequences Si and the union uSm^0 of all extended middle subsequences Sm^0, and formulate a preliminary sentence model:

uS^0 = uSi uSm^0*   (14)

in which all middle subsequences Sm^0 are still marked and extended in the sense that all occurrences of all unambiguous classes are mentioned twice: once unmarked as cu at the end of every sequence Ci or Cm^0, and the second time marked as cu^0 at the beginning of every following sequence Cm^0. The upper side of the sentence model uS^0 describes the complete (but extended) class sequences of possible sentences, and the lower side of uS^0 describes the corresponding (extended) tag sequences.

To ensure a correct concatenation of initial and middle subsequences, we formulate a concatenation constraint for the classes:

Rc = ∩_j ~[ ~[?* cuj] 0.cuj ?* ]   (15)

stating that every middle subsequence must begin with the same marked unambiguous class cu^0 (e.g. 0.[DET]) which occurs unmarked as cu (e.g. [DET]) at the end of the preceding subsequence, since both symbols refer to the same occurrence of this unambiguous class.

Having ensured correct concatenation, we delete all marked classes on the upper side of the relation by means of

Dc = 0.cu -> []   (16)

and all marked tags on the lower side by means of

Dt = 0.t -> []   (17)

By composing the above relations with the preliminary sentence model, we obtain the final sentence model:

S = Dc .o. Rc .o. uS^0 .o. Dt   (18)

We call the model an s-type model, the corresponding FST an s-type transducer, and the whole algorithm leading from the HMM to the transducer an s-type approximation of an HMM.

The s-type transducer tags any corpus which contains only known subsequences in exactly the same way, i.e. with the same errors, as the corresponding HMM tagger does. However, since an s-type transducer is incomplete, it cannot tag sentences with one or more class subsequences not contained in the union of the initial or middle subsequences.
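The Viterbi disambiguation used above is the textbook dynamic program. A minimal sketch (ours) in log space follows; it assumes the same toy dictionaries PI, A, B and CLASSES as in the n-type sketch earlier, and the 1e-12 floor for unseen events is our own smoothing convenience.

# A sketch (ours): Viterbi tagging of one class subsequence under a 1st order HMM.
import math

def viterbi(class_seq, classes, pi, a, b):
    """Most probable tag sequence for a list of classes."""
    delta, psi = [], []    # delta: best log score per tag; psi: backpointers
    for i, c in enumerate(class_seq):
        delta.append({})
        psi.append({})
        for t in classes[c]:
            emit = math.log(b.get((c, t), 1e-12))
            if i == 0:
                delta[0][t] = math.log(pi.get(t, 1e-12)) + emit
            else:
                prev, score = max(
                    ((p, delta[i - 1][p] + math.log(a.get((p, t), 1e-12)))
                     for p in delta[i - 1]), key=lambda x: x[1])
                delta[i][t] = score + emit
                psi[i][t] = prev
    tag = max(delta[-1], key=delta[-1].get)   # backtrace from the best final tag
    tags = [tag]
    for i in range(len(class_seq) - 1, 0, -1):
        tag = psi[i][tag]
        tags.append(tag)
    return tags[::-1]

# e.g. viterbi(["c1", "c2", "c1"], CLASSES, PI, A, B) == ["t11", "t21", "t11"]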
3.3 Completion of an s-Type Transducer

An incomplete s-type transducer S can be completed with subsequences from an auxiliary, complete n-type transducer N as follows: First, we extract the union of initial and the union of extended middle subsequences, uSi^s and uSm^e,s, from the primary s-type transducer S, and the unions uSi^n and uSm^e,n from the auxiliary n-type transducer N.

To extract the union uSi of initial subsequences we use the following filter:

FSi = [\(cu,t)]* (cu,t) [?:[]]*   (19)

where (cu,t) is the 1-level format [Footnote 7: 1-level and 2-level format are explained in the annex.] of the symbol pair cu:t. The extraction takes place by

uSi = [ N.1L .o. FSi ].l.2L   (20)

where the transducer N is first converted into 1-level format, then composed with the filter FSi (eq. 19). We extract the lower side of this composition, where every sequence of N.1L remains unchanged from the beginning up to the first occurrence of an unambiguous class cu. Every following symbol is mapped to the empty string by means of [?:[]]* (eq. 19). Finally, the extracted lower side is again converted into 2-level format.

The extraction of the union uSm^e of extended middle subsequences is performed in a similar way.

We then make the joint unions of initial and extended middle subsequences:

uSi = uSi^s | [ ~[uSi^s.u] .o. uSi^n ]   (21)

uSm^e = uSm^e,s | [ ~[uSm^e,s.u] .o. uSm^e,n ]   (22)

In both cases (eq. 21 and 22) we union all subsequences from the principal model S with all those subsequences from the auxiliary model N that are not in S. Finally, we generate the completed s+n-type transducer from the joint unions of subsequences uSi and uSm^e, as described above (eq. 14-18).

A transducer completed in this way disambiguates all subsequences known to the principal incomplete s-type model exactly as the underlying HMM does, and all other subsequences as the auxiliary n-type model does.

4 An Implemented Finite-State Tagger

The implemented tagger requires three transducers which represent a lexicon, a guesser and any above mentioned approximation of an HMM.

All three transducers are sequential, i.e. deterministic on the input side. Both the lexicon and guesser unambiguously map a surface form of any word that they accept to the corresponding class of tags (fig. 2, col. 1 and 2): First, the word is looked for in the lexicon. If this fails, it is looked for in the guesser. If this equally fails, it gets the label [UNKNOWN] which associates the word with the tag class of unknown words. Tag probabilities in this class are approximated by tags of words that appear only once in the training corpus.

As soon as an input token gets labelled with the tag class of sentence end symbols (fig. 2: [SENT]), the tagger stops reading words from the input. At this point, the tagger has read and stored the words of a whole sentence (fig. 2, col. 1) and generated the corresponding sequence of classes (fig. 2, col. 2). The class sequence is now deterministically mapped to a tag sequence (fig. 2, col. 3) by means of the HMM transducer. The tagger outputs the stored word and tag sequence of the sentence, and continues in the same way with the remaining sentences of the corpus.

The       [AT]            AT
share     [NN,VB]         NN
of        [IN]            IN
tripled   [VBD,VBN]       VBD
within    [IN,RB]         IN
that      [CS,DT,WPS]     DT
span      [NN,VB,VBD]     VBD
of        [IN]            IN
time      [NN,VB]         NN
          [SENT]          SENT

Figure 2: Tagging a sentence
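The control flow of this tagger is a simple buffered loop. The following is a minimal sketch (ours, not the actual implementation; the toy lexicon, the guesser heuristic and the stand-in mapping function are invented, and the real mapping is done by the HMM transducer).

# A sketch (ours) of the tagging loop of section 4.
LEXICON = {"the": "[AT]", "share": "[NN,VB]", "of": "[IN]", ".": "[SENT]"}

def guesser(word):
    return "[NP]" if word[0].isupper() else None   # toy heuristic

def tag_corpus(words, map_classes_to_tags):
    sentence, classes = [], []
    for word in words:
        cls = LEXICON.get(word.lower()) or guesser(word) or "[UNKNOWN]"
        sentence.append(word)
        classes.append(cls)
        if cls == "[SENT]":                       # end of sentence reached
            tags = map_classes_to_tags(classes)   # deterministic FST lookup
            yield list(zip(sentence, tags))
            sentence, classes = [], []

demo = tag_corpus(["The", "share", "of", "."],
                  lambda cs: [c.strip("[]").split(",")[0] for c in cs])
print(list(demo))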
5 Experiments and Results

This section compares different n-type and s-type transducers with each other and with the underlying HMM.

The FSTs perform tagging faster than the HMMs. Since all transducers are approximations of HMMs, they give a lower tagging accuracy than the corresponding HMMs. However, improvement in accuracy can be expected since these transducers can be composed with transducers encoding correction rules for frequent errors (sec. 1).

Table 1 compares different transducers on an English test case. The s+n1-type transducer containing all possible subsequences up to a length of three classes is the most accurate (table 1, last line, s+n1-FST (≤ 3): 95.95 %) but also the largest one. A similar rate of accuracy at a much lower size can be achieved with the s+n1-type, either with all subsequences up to a length of two classes (s+n1-FST (≤ 2): 95.06 %) or with subsequences occurring at least once in a training corpus of 100 000 words (s+n1-FST (100K, F1): 95.05 %).

Increasing the size of the training corpus and the frequency limit, i.e. the number of times that a subsequence must at least occur in the training corpus in order to be selected (sec. 3.2 a), improves the relation between tagging accuracy and the size of the transducer. E.g. the s+n1-type transducer that encodes subsequences from a training corpus of 20 000 words (table 1, s+n1-FST (20K, F1): 94.74 %, 927 states, 203 853 arcs) performs less accurate tagging and is bigger than the transducer that encodes subsequences occurring at least eight times in a corpus of 1 000 000 words (table 1, s+n1-FST (1M, F8): 95.09 %, 432 states, 96 712 arcs).

Most transducers in table 1 are faster than the underlying HMM; the n0-type transducer about five times. [Footnote 8: Since n0-type and n1-type transducers have deterministic states only, a particularly fast matching algorithm can be used for them.] There is a large variation in speed between the different transducers due to their structure and size.

Table 1: Accuracy, speed, size and creation time of some HMM transducers

                      accuracy  tagging speed   creation  transducer size
                      in %      in words/sec    time      # states  # arcs
HMM                   96.77      4 590          -         -         -
n0-FST                83.53     20 582          16 sec    1         297
n1-FST                94.19     17 244          17 sec    71        21 087
s+n1-FST (20K, F1)    94.74     13 575          3 min     927       203 853
s+n1-FST (50K, F1)    94.92     12 760          10 min    2 675     564 887
s+n1-FST (100K, F1)   95.05     12 038          23 min    4 709     976 785
s+n1-FST (100K, F2)   94.76     14 178          2 min     476       107 728
s+n1-FST (100K, F4)   94.60     14 178          76 sec    211       52 624
s+n1-FST (100K, F8)   94.49     13 870          62 sec    154       41 598
s+n1-FST (1M, F2)     95.67     11 393          7 min     2 049     418 536
s+n1-FST (1M, F4)     95.36     11 193          4 min     799       167 952
s+n1-FST (1M, F8)     95.09     13 575          3 min     432       96 712
s+n1-FST (≤ 2)        95.06      8 180          39 min    9 796     1 311 962
s+n1-FST (≤ 3)        95.95      4 870          47 h      92 463    13 681 113

Language: English. Corpora: 19 944 words for HMM training, 19 934 words for test. Tag set: 74 tags, 297 classes.
Types of FST (finite-state transducers): n0, n1: n0-type (with only lexical probabilities) or n1-type (sec. 2); s+n1 (100K, F2): s-type (sec. 3), with subsequences of frequency ≥ 2, from a training corpus of 100 000 words (sec. 3.2 a), completed with n1-type (sec. 3.3); s+n1 (≤ 2): s-type (sec. 3), with all possible subsequences of length ≤ 2 classes (sec. 3.2 b), completed with n1-type (sec. 3.3).
Computer: ultra2, 1 CPU, 512 MBytes physical RAM, 1.4 GBytes virtual RAM.

Table 2 compares the tagging accuracy of different transducers and the underlying HMM for different languages. In these tests the highest accuracy was always obtained by s-type transducers, either with all subsequences up to a length of two classes [Footnote 9: A maximal length of three classes is not considered here because of the high increase in size and a low increase in accuracy.] or with subsequences occurring at least once in a corpus of 100 000 words.

6 Conclusion and Future Research

The two methods described in this paper allow the approximation of an HMM used for part-of-speech tagging by a finite-state transducer. Both methods have been fully implemented.

The tagging speed of the transducers is up to five times higher than that of the underlying HMM.
The main advantage of transforming an HMM is that the resulting FST can be handled by finite state calculus [Footnote 10: A large library of finite-state functions is available at Xerox.] and thus be directly composed with other transducers which encode tag correction rules and/or perform further steps of text analysis.

Table 2: Accuracy of some HMM transducers for different languages (accuracy in %)

                      English  Dutch   French  German  Portug.  Spanish
HMM                   96.77    94.76   98.65   97.62   97.12    97.60
n0-FST                83.53    81.99   91.13   91.58   82.97    91.03
n1-FST                94.19
s+n1-FST (20K, F1)    94.74
s+n1-FST (50K, F1)    94.92
s+n1-FST (100K, F1)   95.05
s+n1-FST (100K, F2)   94.76
s+n1-FST (100K, F4)   94.60
s+n1-FST (100K, F8)   94.49
s+n1-FST (≤ 2)        95.06

[The non-English cells of the rows below n0-FST were scrambled in extraction and are not reproduced.]

HMM train. crp. (# wd)  19 944  26 386  22 622  91 060  20 956  16 221
test corpus (# words)   19 934  10 468   6 368  39 560  15 536  15 443
# tags                  74      47      45      66      67      55
# classes               297     230     287     389     303     254

Types of FST (finite-state transducers): cf. table 1.

Future research will mainly focus on this possibility and will include composition with, among others:

• Transducers that encode correction rules (possibly including long-distance dependencies) for the most frequent tagging errors, in order to significantly improve tagging accuracy. These rules can be either extracted automatically from a corpus (Brill, 1992) or written manually (Chanod and Tapanainen, 1995).
• Transducers for light parsing, phrase extraction and other analysis (Aït-Mokhtar and Chanod, 1997).

An HMM transducer can be composed with one or more of these transducers in order to perform complex text analysis using only a single transducer.

We also hope to improve the n-type model by using look-ahead to the following tags. [Footnote 11: Ongoing work has shown that looking ahead to just one tag is worthless because it makes tagging results highly ambiguous.]

Acknowledgements

I wish to thank the anonymous reviewers of my paper for their valuable comments and suggestions. I am grateful to Lauri Karttunen and Gregory Grefenstette (both RXRC Grenoble) for extensive and frequent discussion during the period of my work, as well as to Julian Kupiec (Xerox PARC) and Mehryar Mohri (AT&T Research) for sending me some interesting ideas before I started. Many thanks to all my colleagues at RXRC Grenoble who helped me in whatever respect, particularly to Anne Schiller, Marc Dymetman and Jean-Pierre Chanod for discussing parts of the work, and to Irene Maxwell for correcting various versions of the paper.

References

Aït-Mokhtar, Salah and Chanod, Jean-Pierre (1997). Incremental Finite-State Parsing. In the Proceedings of the 5th Conference of Applied Natural Language Processing. ACL, pp. 72-79. Washington, DC, USA.
Bahl, Lalit R. and Mercer, Robert L. (1976). Part of Speech Assignment by a Statistical Decision Algorithm. In IEEE International Symposium on Information Theory. pp. 88-89. Ronneby.
Brill, Eric (1992). A Simple Rule-Based Part-of-Speech Tagger. In the Proceedings of the 3rd Conference on Applied Natural Language Processing, pp. 152-155. Trento, Italy.
Chanod, Jean-Pierre and Tapanainen, Pasi (1995).
Tagging French - Comparing a Statistical and a Constraint Based Method. In the Proceedings of the 7th Conference of the EACL, pp. 149-156. ACL. Dublin, Ireland.
Church, Kenneth W. (1988). A Stochastic Parts Program and Noun Phrase Parser for Unrestricted Text. In Proceedings of the 2nd Conference on Applied Natural Language Processing. ACL, pp. 136-143.
Kaplan, Ronald M. and Kay, Martin (1994). Regular Models of Phonological Rule Systems. In Computational Linguistics. 20:3, pp. 331-378.
Karttunen, Lauri (1995). The Replace Operator. In the Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA, USA. cmp-lg/9504032.
Kempe, André and Karttunen, Lauri (1996). Parallel Replacement in Finite State Calculus. In the Proceedings of the 16th International Conference on Computational Linguistics, pp. 622-627. Copenhagen, Denmark. cmp-lg/9607007.
Rabiner, Lawrence R. (1990). A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition. In Readings in Speech Recognition (eds. A. Waibel, K.F. Lee). Morgan Kaufmann Publishers, Inc. San Mateo, CA., USA.
Roche, Emmanuel and Schabes, Yves (1995). Deterministic Part-of-Speech Tagging with Finite-State Transducers. In Computational Linguistics. Vol. 21, No. 2, pp. 227-253.
Viterbi, A.J. (1967). Error Bounds for Convolutional Codes and an Asymptotically Optimal Decoding Algorithm. In Proceedings of IEEE, vol. 61, pp. 268-278.

ANNEX: Regular Expression Operators

Below, a and b designate symbols, A and B designate languages, and R and Q designate relations between two languages. More details on the following operators and pointers to finite-state literature can be found in http://www.rxrc.xerox.com/research/mltt/fst

$A        Contains. Set of strings containing at least one occurrence of a string from A as a substring.
~A        Complement (negation). All strings except those from A.
\a        Term complement. Any symbol other than a.
A*        Kleene star. Zero or more times A concatenated with itself.
A+        Kleene plus. One or more times A concatenated with itself.
a -> b    Replace. Relation where every a on the upper side gets mapped to a b on the lower side.
a <- b    Inverse replace. Relation where every b on the lower side gets mapped to an a on the upper side.
a:b       Symbol pair with a on the upper and b on the lower side.
(a,b)     1-level symbol which is the 1-level form (.1L) of the symbol pair a:b.
R.u       Upper language of R.
R.l       Lower language of R.
A B       Concatenation of all strings of A with all strings of B.
A | B     Union of A and B.
A & B     Intersection of A and B.
A - B     Relative complement (minus). All strings of A that are not in B.
A .x. B   Cross product (Cartesian product) of the languages A and B.
R .o. Q   Composition of the relations R and Q.
R.1L      1-level form. Makes a language out of the relation R. Every symbol pair becomes a simple symbol (e.g. a:b becomes (a,b), and a, which means a:a, becomes (a,a)).
R.2L      2-level form. Inverse operation to .1L (R.1L.2L = R).
0 or []   Empty string (epsilon).
?         Any symbol in the known alphabet and its extensions.
Document Classification Using a Finite Mixture Model

Hang Li  Kenji Yamanishi
C&C Res. Labs., NEC
4-1-1 Miyazaki Miyamae-ku Kawasaki, 216, Japan
Email: {lihang,yamanisi}@sbl.cl.nec.co.jp

Abstract

We propose a new method of classifying documents into categories. We define for each category a finite mixture model based on soft clustering of words. We treat the problem of classifying documents as that of conducting statistical hypothesis testing over finite mixture models, and employ the EM algorithm to efficiently estimate parameters in a finite mixture model. Experimental results indicate that our method outperforms existing methods.

1 Introduction

We are concerned here with the issue of classifying documents into categories. More precisely, we begin with a number of categories (e.g., 'tennis, soccer, skiing'), each already containing certain documents. Our goal is to determine into which categories newly given documents ought to be assigned, and to do so on the basis of the distribution of each document's words. [Footnote 1: A related issue is the retrieval, from a data base, of documents which are relevant to a given query (pseudo-document) (e.g., (Deerwester et al., 1990; Fuhr, 1989; Robertson and Jones, 1976; Salton and McGill, 1983; Wong and Yao, 1989)).]

Many methods have been proposed to address this issue, and a number of them have proved to be quite effective (e.g., (Apte, Damerau, and Weiss, 1994; Cohen and Singer, 1996; Lewis, 1992; Lewis and Ringuette, 1994; Lewis et al., 1996; Schutze, Hull, and Pedersen, 1995; Yang and Chute, 1994)).

The simple method of conducting hypothesis testing over word-based distributions in categories (defined in Section 2) is not efficient in storage and suffers from the data sparseness problem, i.e., the number of parameters in the distributions is large and the data size is not sufficiently large for accurately estimating them. In order to address this difficulty, (Guthrie, Walker, and Guthrie, 1994) have proposed using distributions based on what we refer to as hard clustering of words, i.e., in which a word is assigned to a single cluster and words in the same cluster are treated uniformly. The use of hard clustering might, however, degrade classification results, since the distributions it employs are not always precise enough for representing the differences between categories.

We propose here to employ soft clustering [Footnote 2: We borrow from (Pereira, Tishby, and Lee, 1993) the terms hard clustering and soft clustering, which were used there in a different task.], i.e., a word can be assigned to several different clusters and each cluster is characterized by a specific word probability distribution. We define for each category a finite mixture model, which is a linear combination of the word probability distributions of the clusters. We thereby treat the problem of classifying documents as that of conducting statistical hypothesis testing over finite mixture models. In order to accomplish hypothesis testing, we employ the EM algorithm to efficiently and approximately calculate from training data the maximum likelihood estimates of parameters in a finite mixture model.

Our method overcomes the major drawbacks of the method using word-based distributions and the method based on hard clustering, while retaining their merits; it in fact includes those two methods as special cases. Experimental results indicate that our method outperforms them.

Although the finite mixture model has already been used elsewhere in natural language processing (e.g., (Jelinek and Mercer, 1980; Pereira, Tishby, and Lee, 1993)), this is the first work, to the best of our knowledge, that uses it in the context of document classification.
2 Previous Work

Word-based method

A simple approach to document classification is to view this problem as that of conducting hypothesis testing over word-based distributions. In this paper, we refer to this approach as the word-based method (hereafter, referred to as WBM).

Letting W denote a vocabulary (a set of words), and w denote a random variable representing any word in it, for each category ci (i = 1, ..., n), we define its word-based distribution P(w|ci) as a histogram type of distribution over W. (The number of free parameters of such a distribution is thus |W| - 1.) WBM then views a document as a sequence of words:

d = w1, ..., wN   (1)

and assumes that each word is generated independently according to a probability distribution of a category. It then calculates the probability of a document with respect to a category as

P(d|ci) = P(w1, ..., wN|ci) = Π_{t=1..N} P(wt|ci)   (2)

and classifies the document into that category for which the calculated probability is the largest. We should note here that a document's probability with respect to each category is equivalent to the likelihood of each category with respect to the document, and to classify the document into the category for which it has the largest probability is equivalent to classifying it into the category having the largest likelihood with respect to it. Hereafter, we will use only the term likelihood and denote it as L(d|ci).

Notice that in practice the parameters in a distribution must be estimated from training data. In the case of WBM, the number of parameters is large; the training data size, however, is usually not sufficiently large for accurately estimating them. This is the data sparseness problem that so often stands in the way of reliable statistical language processing (e.g., (Gale and Church, 1990)). Moreover, the number of parameters in word-based distributions is too large to be efficiently stored.

Method based on hard clustering

In order to address the above difficulty, Guthrie et al. have proposed a method based on hard clustering of words (Guthrie, Walker, and Guthrie, 1994) (hereafter we will refer to this method as HCM). Let c1, ..., cn be categories. HCM first conducts hard clustering of words. Specifically, it (a) defines a vocabulary as a set of words W and defines as clusters its subsets k1, ..., km satisfying ∪_{j=1..m} kj = W and ki ∩ kj = ∅ (i ≠ j) (i.e., each word is assigned only to a single cluster); and (b) treats uniformly all the words assigned to the same cluster. HCM then defines for each category ci a distribution of the clusters P(kj|ci) (j = 1, ..., m). It replaces each word wt in the document with the cluster kt to which it belongs (t = 1, ..., N). It assumes that a cluster kt is distributed according to P(kj|ci) and calculates the likelihood of each category ci with respect to the document by

L(d|ci) = L(k1, ..., kN|ci) = Π_{t=1..N} P(kt|ci)   (3)

There are any number of ways to create clusters in hard clustering, but the method employed is crucial to the accuracy of document classification. Guthrie et al. have devised a way suitable to document classification. Suppose that there are two categories c1 = 'tennis' and c2 = 'soccer,' and we obtain from the training data (previously classified documents) the frequencies of words in each category, such as those in Tab. 1.

Table 1: Frequencies of words

      racket  stroke  shot  goal  kick  ball
c1    4       1       2     1     0     2
c2    0       0       0     3     2     2

Letting L and M be given positive integers, HCM creates three clusters: k1, k2 and k3, in which k1 contains those words which are among the L most frequent words in c1, and not among the M most frequent in c2; k2 contains those words which are among the L most frequent words in c2, and not among the M most frequent in c1; and k3 contains all remaining words (see Tab. 2).

Table 2: Clusters and words (L = 5, M = 5)

k1    racket, stroke, shot
k2    kick
k3    goal, ball

HCM then counts the frequencies of clusters in each category (see Tab. 3) and estimates the probabilities of clusters being in each category (see Tab. 4). [Footnote 3: We calculate the probabilities here by using the so-called expected likelihood estimator (Gale and Church, 1990): P(kj|ci) = (f(kj|ci) + 0.5) / (f(ci) + 0.5 × m)   (4), where f(kj|ci) is the frequency of the cluster kj in ci, f(ci) is the total frequency of clusters in ci, and m is the total number of clusters.]

Table 3: Frequencies of clusters

      k1   k2   k3
c1    7    0    3
c2    0    2    5

Table 4: Probability distributions of clusters

      k1     k2     k3
c1    0.65   0.04   0.30
c2    0.06   0.29   0.65
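The whole HCM pipeline on this running example fits in a few lines. A minimal sketch (ours, not the authors' code; the data structures are our own arrangement of Tables 1-4) builds the clusters, applies the expected likelihood estimator of equation (4), and scores the example document used just below.

# A sketch (ours): HCM on the running example (Tables 1-4).
import math

FREQ = {"c1": {"racket": 4, "stroke": 1, "shot": 2, "goal": 1, "ball": 2},
        "c2": {"goal": 3, "kick": 2, "ball": 2}}
L = M = 5

def top(cat, k):
    return set(sorted(FREQ[cat], key=FREQ[cat].get, reverse=True)[:k])

WORDS = set(FREQ["c1"]) | set(FREQ["c2"])
K1 = top("c1", L) - top("c2", M)          # {'racket', 'stroke', 'shot'}
K2 = top("c2", L) - top("c1", M)          # {'kick'}
K3 = WORDS - K1 - K2                      # {'goal', 'ball'}
CLUSTER = {w: k for k, ws in (("k1", K1), ("k2", K2), ("k3", K3)) for w in ws}

def p_cluster(k, cat, m=3):
    f = sum(FREQ[cat].get(w, 0) for w in WORDS if CLUSTER[w] == k)
    total = sum(FREQ[cat].values())
    return (f + 0.5) / (total + 0.5 * m)  # expected likelihood estimator (eq. 4)

doc = ["kick", "goal", "goal", "ball"]
for cat in ("c1", "c2"):
    ll = sum(math.log2(p_cluster(CLUSTER[w], cat)) for w in doc)
    print(cat, round(ll, 2))
# c1 about -9.7, c2 about -3.65; Table 5 shows -9.85 / -3.65, computed from
# the rounded probabilities of Table 4.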
k 3 goal, ball Table 3: Frequencies of clusters kl ks k3 c 1 7 0 3 c2 0 2 5 There are any number of ways to create clusters in hard clustering, but the method employed is crucial to the accuracy of document classification. Guthrie et. al. have devised a way suitable to documentation classification. Suppose that there are two categories cl ='tennis' and c2='soccer,' and we obtain from the training data (previously classified documents) the frequencies of words in each category, such as those in Tab. 1. Letting L and M be given positive inte- gers, HCM creates three clusters: kl, k2 and k3, in which kl contains those words which are among the L most frequent words in cl, and not among the M most frequent in c2; k2 contains those words which are among the L most frequent words in cs, and not among the M most frequent in Cl; and k3 con- tains all remaining words (see Tab. 2). HCM then counts the frequencies of clusters in each category (see Tab. 3) and estimates the probabilities of clus- ters being in each category (see Tab. 4). 3 Suppose that a newly given document, like d in Fig. i, is to be classified. HCM cMculates the likelihood values 3We calculate the probabilities here by using the so- called expected likelihood estimator (Gale and Church, 1990): .f(kjlc, ) + 0.5 , P(k3lc~) = f-~--~-~ x m (4) where f(kjlci ) is the frequency of the cluster kj in ci, f(ci) is the total frequency of clusters in cl, and m is the total number of clusters. 40 Table 4: Probability distributions of clusters kl k2 k3 cl 0.65 0.04 0.30 cs 0.06 0.29 0.65 L(dlCl ) and L(dlc2) according to Eq. (3). (Tab. 5 shows the logarithms of the resulting likelihood val- ues.) It then classifies d into cs, as log s L(dlcs ) is larger than log s L(dlc 1). d = kick, goal, goal, ball Figure 1: Example document Table 5: Calculating log likelihood values log2 L(dlct ) = 1 x log s .04 + 3 × log s .30 = -9.85 log s L(d]cs) = 1 × log s .29 + 3 x log s .65 = -3.65 HCM can handle the data sparseness problem quite well. By assigning words to clusters, it can drastically reduce the number of parameters to be estimated. It can also save space for storing knowl- edge. We argue, however, that the use of hard clus- tering still has the following two problems: 1. HCM cannot assign a word ¢0 more than one cluster at a time. Suppose that there is another category c3 = 'skiing' in which the word 'ball' does not appear, i.e., 'ball' will be indicative of both cl and c2, but not cs. If we could assign 'ball' to both kt and k2, the likelihood value for classifying a document containing that word to cl or c2 would become larger, and that for clas- sifying it into c3 would become smaller. HCM, however, cannot do that. 2. HCM cannot make the best use of information about the differences among the frequencies of words assigned to an individual cluster. For ex- ample, it treats 'racket' and 'shot' uniformly be- cause they are assigned to the same cluster kt (see Tab. 5). 'Racket' may, however, be more indicative of Cl than 'shot,' because it appears more frequently in cl than 'shot.' HCM fails to utilize this information. This problem will become more serious when the values L and M in word clustering are large, which renders the clustering itself relatively meaningless. From the perspective of number of parameters, HCM employs models having very few parameters, and thus may not sometimes represent much useful information for classification. 3 Finite Mixture Model We propose a method of document classification based on soft clustering of words. 
Let c1, ..., cn be categories. We first conduct the soft clustering. Specifically, we (a) define a vocabulary as a set W of words and define as clusters a number of its subsets k1, ..., km satisfying ∪_{j=1..m} kj = W (notice that ki ∩ kj = ∅ (i ≠ j) does not necessarily hold here, i.e., a word can be assigned to several different clusters); and (b) define for each cluster kj (j = 1, ..., m) a distribution Q(w|kj) over its words (Σ_{w∈kj} Q(w|kj) = 1) and a distribution P(w|kj) satisfying:

P(w|kj) = Q(w|kj) if w ∈ kj;  0 if w ∉ kj   (5)

where w denotes a random variable representing any word in the vocabulary. We then define for each category ci (i = 1, ..., n) a distribution of the clusters P(kj|ci), and define for each category a linear combination of P(w|kj):

P(w|ci) = Σ_{j=1..m} P(kj|ci) × P(w|kj)   (6)

as the distribution over its words, which is referred to as a finite mixture model (e.g., (Everitt and Hand, 1981)).

We treat the problem of classifying a document as that of conducting the likelihood ratio test over finite mixture models. That is, we view a document as a sequence of words,

d = w1, ..., wN   (7)

where wt (t = 1, ..., N) represents a word. We assume that each word is independently generated according to an unknown probability distribution and determine which of the finite mixture models P(w|ci) (i = 1, ..., n) is more likely to be the probability distribution by observing the sequence of words. Specifically, we calculate the likelihood value for each category with respect to the document by:

L(d|ci) = L(w1, ..., wN|ci) = Π_{t=1..N} P(wt|ci) = Π_{t=1..N} (Σ_{j=1..m} P(kj|ci) × P(wt|kj))   (8)

We then classify it into the category having the largest likelihood value with respect to it. Hereafter, we will refer to this method as FMM.

FMM includes WBM and HCM as its special cases. If we consider the specific case (1) in which a word is assigned to a single cluster and P(w|kj) is given by

P(w|kj) = 1/|kj| if w ∈ kj;  0 if w ∉ kj   (9)

where |kj| denotes the number of elements belonging to kj, then we will get the same classification result as in HCM. In such a case, the likelihood value for each category ci becomes:

L(d|ci) = Π_{t=1..N} (P(kt|ci) × P(wt|kt)) = Π_{t=1..N} P(kt|ci) × Π_{t=1..N} P(wt|kt)   (10)

where kt is the cluster corresponding to wt. Since the probability P(wt|kt) does not depend on categories, we can ignore the second term Π_{t=1..N} P(wt|kt) in hypothesis testing, and thus our method essentially becomes equivalent to HCM (cf. Eq. (3)).

Further, in the specific case (2) in which m = n, for each j, P(w|kj) has |W| parameters: P(w|kj) = P(w|cj), and P(kj|ci) is given by

P(kj|ci) = 1 if i = j;  0 if i ≠ j   (11)

the likelihood used in hypothesis testing becomes the same as that in Eq. (2), and thus our method becomes equivalent to WBM.
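Classification with the finite mixture model is a direct computation of equations (6) and (8). A minimal sketch (ours) follows; the toy parameters are those of Tables 8 and 9 further below, arranged as dictionaries.

# A sketch (ours): FMM document likelihood (equations 6 and 8).
import math

P_W_GIVEN_K = {"k1": {"racket": 0.44, "stroke": 0.11, "shot": 0.22, "ball": 0.22},
               "k2": {"goal": 0.50, "kick": 0.25, "ball": 0.25}}
P_K_GIVEN_C = {"c1": {"k1": 0.86, "k2": 0.14},
               "c2": {"k1": 0.04, "k2": 0.96}}

def log_likelihood(doc, cat):
    total = 0.0
    for w in doc:
        p = sum(P_K_GIVEN_C[cat][k] * P_W_GIVEN_K[k].get(w, 0.0)
                for k in P_K_GIVEN_C[cat])          # equation (6)
        total += math.log2(p)
    return total

doc = ["kick", "goal", "goal", "ball"]
scores = {c: round(log_likelihood(doc, c), 2) for c in ("c1", "c2")}
print(scores)                      # {'c1': -14.67, 'c2': -6.18}, matching Table 10
print(max(scores, key=scores.get))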
4 Estimation and Hypothesis Testing

In this section, we describe how to implement our method.

Creating clusters

There are any number of ways to create clusters on a given set of words. As in the case of hard clustering, the way that clusters are created is crucial to the reliability of document classification. Here we give one example approach to cluster creation.

We let the number of clusters equal that of categories (i.e., m = n) [Footnote 4: One can certainly assume that m > n.] and relate each cluster ki to one category ci (i = 1, ..., n). We then assign individual words to those clusters in whose related categories they most frequently appear. Letting γ (0 ≤ γ < 1) be a predetermined threshold value, if the following inequality holds:

f(w|ci) / f(w) > γ   (12)

then we assign w to ki, the cluster related to ci, where f(w|ci) denotes the frequency of the word w in category ci, and f(w) denotes the total frequency of w. Using the data in Tab. 1, we create two clusters: k1 and k2, and relate them to c1 and c2, respectively. For example, when γ = 0.4, we assign 'goal' to k2 only, as the relative frequency of 'goal' in c2 is 0.75 and that in c1 is only 0.25. We ignore in document classification those words which cannot be assigned to any cluster using this method, because they are not indicative of any specific category. (For example, when γ ≥ 0.5, 'ball' will not be assigned into any cluster.) This helps to make classification efficient and accurate. Tab. 6 shows the results of creating clusters.

Table 6: Clusters and words

k1    racket, stroke, shot, ball
k2    kick, goal, ball

Estimating P(w|kj)

We then consider the frequency of a word in a cluster. If a word is assigned only to one cluster, we view its total frequency as its frequency within that cluster. For example, because 'goal' is assigned only to k2, we use as its frequency within that cluster the total count of its occurrence in all categories. If a word is assigned to several different clusters, we distribute its total frequency among those clusters in proportion to the frequency with which the word appears in each of their respective related categories. For example, because 'ball' is assigned to both k1 and k2, we distribute its total frequency among the two clusters in proportion to the frequency with which 'ball' appears in c1 and c2, respectively. After that, we obtain the frequencies of words in each cluster as shown in Tab. 7.

Table 7: Distributed frequencies of words

      racket  stroke  shot  goal  kick  ball
k1    4       1       2     0     0     2
k2    0       0       0     4     2     2

We then estimate the probabilities of words in each cluster, obtaining the results in Tab. 8. [Footnote 5: We calculate the probabilities by employing the maximum likelihood estimator: P(w|kj) = f(w|kj) / f(kj)   (13), where f(w|kj) is the (distributed) frequency of the word w in cluster kj, and f(kj) is the total frequency of words in kj.]

Table 8: Probability distributions of words

      racket  stroke  shot  goal  kick  ball
k1    0.44    0.11    0.22  0     0     0.22
k2    0       0       0     0.50  0.25  0.25
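The cluster creation and word-probability estimation just described are easy to reproduce. A minimal sketch (ours, reusing the Table 1 frequencies) applies the threshold test of equation (12) with γ = 0.4, distributes shared frequencies, and normalises, recovering Tables 6-8.

# A sketch (ours): soft cluster creation and P(w|kj) estimation (gamma = 0.4).
FREQ = {"c1": {"racket": 4, "stroke": 1, "shot": 2, "goal": 1, "ball": 2},
        "c2": {"goal": 3, "kick": 2, "ball": 2}}
GAMMA = 0.4
WORDS = set(FREQ["c1"]) | set(FREQ["c2"])

clusters = {c: {} for c in FREQ}   # cluster ki is related to category ci
for w in WORDS:
    total = sum(FREQ[c].get(w, 0) for c in FREQ)
    member = {c: FREQ[c].get(w, 0) for c in FREQ
              if FREQ[c].get(w, 0) / total > GAMMA}     # equation (12)
    for c, f in member.items():    # distribute f(w) in proportion to f(w|ci)
        clusters[c][w] = total * f / sum(member.values())

for c, dist in clusters.items():
    z = sum(dist.values())
    print(c, {w: round(f / z, 2) for w, f in dist.items()})
# k1 (for c1): racket 0.44, stroke 0.11, shot 0.22, ball 0.22
# k2 (for c2): goal 0.50, kick 0.25, ball 0.25   (Tables 7 and 8)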
(14) j=l For a given training sequence wl'"WN, the maxi- mum likelihood estimator of 0 is defined as the value which maximizes the following log likelihood func- tion ) L(O) = ~'log OjPj(wt) . (15) ~- \j=l The EM algorithm first arbitrarily sets the initial value of 0, which we denote as 0(0), and then suc- cessively calculates the values of 6 on the basis of its most recent values. Let s be a predetermined num- ber. At the lth iteration (l -: 1,..-, s), we calculate = by 0~ '): 0~ '-1) (~?(VL(00-1))j- 1)+ 1), (16) where ~ > 0 (when ~ = 1, Hembold et al. 's version simply becomes the standard EM algorithm), and 6We have confirmed in our preliminary experiment that MCMC performs slightly better than EM in docu- ment classification, but we omit the details here due to space limitations. ~TL(O) denotes v L(O) = ( 0L001 "'" O0,nOL ) . (17) After s numbers of calculations, the EM algorithm outputs 00) = (0~O,... ,0~ )) as an approximate of 0. It is theoretically guaranteed that the EM al- gorithm converges to a local minimum of the given likelihood (Dempster, Laird, and Rubin, 1977). For the example in Tab. 1, we obtain the results as shown in Tab. 9. Testing For the example in Tab. 1, we can calculate ac- cording to Eq. (8) the likelihood values of the two categories with respect to the document in Fig. 1 (Tab. 10 shows the logarithms of the likelihood val- ues). We then classify the document into category c2, as log 2 L(d]c2) is larger than log 2 L(dlcl). 5 Advantages of FMM For a probabilistic approach to document classifica- tion, the most important thing is to determine what kind of probability model (distribution) to employ as a representation of a category. It must (1) ap- propriately represent a category, as well as (2) have a proper preciseness in terms of number of param- eters. The goodness and badness of selection of a model directly affects classification results. The finite mixture model we propose is particu- larly well-suited to the representation of a category. Described in linguistic terms, a cluster corresponds to a topic and the words assigned to it are related to that topic. Though documents generally concen- trate on a single topic, they may sometimes refer for a time to others, and while a document is dis- cussing any one topic, it will naturally tend to use words strongly related to that topic. A document in the category of 'tennis' is more likely to discuss the topic of 'tennis,' i.e., to use words strongly related to 'tennis,' but it may sometimes briefly shift to the topic of 'soccer,' i.e., use words strongly related to 'soccer.' A human can follow the sequence of words in such a document, associate them with related top- ics, and use the distributions of topics to classify the document. Thus the use of the finite mixture model can be considered as a stochastic implementation of this process. The use of FMM is also appropriate from the viewpoint of number of parameters. Tab. 11 shows the numbers of parameters in our method (FMM), 43 Table 11: Num. of parameters WBM O(n. IWl) HCM O(n. m) FMM o(Ikl+n'm) HCM, and WBM, where IW] is the size of a vocab- ulary, Ikl is the sum of the sizes of word clusters m (i.e.,Ikl -- E~=I Ikil), n is the number of categories, and m is the number of clusters. The number of parameters in FMM is much smaller than that in WBM, which depends on IWl, a very large num- ber in practice (notice that Ikl is always smaller than IWI when we employ the clustering method (with 7 > 0.5) described in Section 4. 
5 Advantages of FMM

For a probabilistic approach to document classification, the most important thing is to determine what kind of probability model (distribution) to employ as a representation of a category. It must (1) appropriately represent a category, as well as (2) have a proper preciseness in terms of number of parameters. The goodness or badness of the selection of a model directly affects classification results.

The finite mixture model we propose is particularly well-suited to the representation of a category. Described in linguistic terms, a cluster corresponds to a topic and the words assigned to it are related to that topic. Though documents generally concentrate on a single topic, they may sometimes refer for a time to others, and while a document is discussing any one topic, it will naturally tend to use words strongly related to that topic. A document in the category of 'tennis' is more likely to discuss the topic of 'tennis,' i.e., to use words strongly related to 'tennis,' but it may sometimes briefly shift to the topic of 'soccer,' i.e., use words strongly related to 'soccer.' A human can follow the sequence of words in such a document, associate them with related topics, and use the distributions of topics to classify the document. Thus the use of the finite mixture model can be considered a stochastic implementation of this process.

The use of FMM is also appropriate from the viewpoint of number of parameters. Tab. 11 shows the numbers of parameters in our method (FMM), HCM, and WBM, where |W| is the size of the vocabulary, |k| is the sum of the sizes of the word clusters (i.e., |k| = Σ_{i=1}^{m} |k_i|), n is the number of categories, and m is the number of clusters.

Table 11: Numbers of parameters
  WBM   O(n · |W|)
  HCM   O(n · m)
  FMM   O(|k| + n · m)

The number of parameters in FMM is much smaller than that in WBM, which depends on |W|, a very large number in practice (notice that |k| is always smaller than |W| when we employ the clustering method (with γ > 0.5) described in Section 4). As a result, FMM requires less data for parameter estimation than WBM and thus can handle the data sparseness problem quite well. Furthermore, it can economize on the space necessary for storing knowledge. On the other hand, the number of parameters in FMM is larger than that in HCM. It is able to represent the differences between categories more precisely than HCM, and thus is able to resolve the two problems, described in Section 2, which plague HCM.

Another advantage of our method may be seen in contrast to the use of latent semantic analysis (Deerwester et al., 1990) in document classification and document retrieval. They claim that their method can solve the following problems:

synonymy problem: how to group synonyms, like 'stroke' and 'shot,' and make each relatively strongly indicative of a category even though some may individually appear in the category only very rarely;

polysemy problem: how to determine that a word like 'ball' in a document refers to a 'tennis ball' and not a 'soccer ball,' so as to classify the document more accurately;

dependence problem: how to use dependent words, like 'kick' and 'goal,' to make their combined appearance in a document more indicative of a category.

As seen in Tab. 6, our method also helps resolve all of these problems.

6 Preliminary Experimental Results

In this section, we describe the results of the experiments we have conducted to compare the performance of our method with that of HCM and others. As a first data set, we used a subset of the Reuters newswire data prepared by Lewis, called Reuters-21578 Distribution 1.0.⁷ We selected nine overlapping categories, i.e., categories in which a document may belong to several different ones at once. We adopted the Lewis Split in the corpus to obtain the training data and the test data. Tabs. 12 and 13 give the details. We did not conduct stemming, or use stop words.⁸ We then applied FMM, HCM, WBM, and a method based on cosine similarity, which we denote as COS,⁹ to conduct binary classification. In particular, we learn the distribution for each category and that for its complement category from the training data, and then determine whether or not to classify into each category the documents in the test data. When applying FMM, we used our proposed method of creating clusters in Section 4 and set γ to be 0, 0.4, 0.5, 0.7, because these are representative values. For HCM, we classified words in the same way as in FMM and set γ to be 0.5, 0.7, 0.9, 0.95. (Notice that in HCM, γ cannot be set less than 0.5.)

⁷Reuters-21578 is available at http://www.research.att.com/lewis.

⁸'Stop words' refers to a predetermined list of words containing those which are considered not useful for document classification, such as articles and prepositions.

⁹In this method, categories and documents to be classified are viewed as vectors of word frequencies, and the cosine value between the two vectors reflects similarity (Salton and McGill, 1983).

Table 12: The first data set
  Num. of doc. in training data      707
  Num. of doc. in test data          228
  Num. of (types of) words         10902
  Avg. num. of words per doc.      310.6

Table 13: Categories in the first data set
  wheat, corn, oilseed, sugar, coffee, soybean, cocoa, rice, cotton

As a second data set, we used the entire Reuters-21578 data with the Lewis Split. Tab. 14 gives the details. Again, we did not conduct stemming, or use stop words. We then applied FMM, HCM, WBM, and COS to conduct binary classification. When applying FMM, we used our proposed method of creating clusters and set γ to be 0, 0.4, 0.5, 0.7. For HCM, we classified words in the same way as in FMM and set γ to be 0.5, 0.7, 0.9, 0.95.

Table 14: The second data set
  Num. of doc. in training data    13625
  Num. of doc. in test data         6188
  Num. of (types of) words         50301
  Avg. num. of words per doc.      181.3
We have not fully completed these experiments, however, and here we only give the results of classifying into the ten categories having the greatest numbers of documents in the test data (see Tab. 15).

Table 15: Tested categories in the second data set
  earn, acq, crude, money-fx, grain, interest, trade, ship, wheat, corn

For both data sets, we evaluated each method in terms of precision and recall by means of so-called micro-averaging.¹⁰

¹⁰In micro-averaging (Lewis and Ringuette, 1994), precision is defined as the percentage of classified documents in all categories which are correctly classified. Recall is defined as the percentage of the total documents in all categories which are correctly classified.

When applying WBM, HCM, and FMM, rather than use the standard likelihood ratio testing, we used the following heuristics. For simplicity, suppose that there are only two categories c1 and c2. Letting ε be a given number larger than or equal to 0, we assign a new document d in the following way:

    (1/N)(log L(d|c1) - log L(d|c2)) > ε :  d → c1,
    (1/N)(log L(d|c2) - log L(d|c1)) > ε :  d → c2,
    otherwise:                              unclassify d,    (18)

where N is the size of document d. (One can easily extend the method to cases with a greater number of categories.)¹¹ For COS, we conducted classification in a similar way.

¹¹Notice that words which are discarded in the clustering process should not be counted in document size.

Figs. 2 and 3 show precision-recall curves for the first data set and those for the second data set, respectively. In these graphs, values given after FMM and HCM represent γ in our clustering method (e.g., FMM0.5, HCM0.5, etc.). We adopted the break-even point as a single measure for comparison, which is the point at which precision equals recall; a higher score for the break-even point indicates better performance. Tab. 16 shows the break-even point for each method for the first data set and Tab. 17 shows that for the second data set. For the first data set, FMM0 attains the highest score at break-even point; for the second data set, FMM0.5 attains the highest.

Figure 2: Precision-recall curve for the first data set (precision vs. recall; plot not reproduced)

Figure 3: Precision-recall curve for the second data set (precision vs. recall; plot not reproduced)
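The decision heuristic of Eq. (18) can be sketched as follows; the function and argument names are illustrative, not part of the authors' implementation.

    def classify(logL1, logL2, N, eps):
        # Decision heuristic of Eq. (18), with N the document size
        # and eps >= 0 the rejection margin.
        if (logL1 - logL2) / N > eps:
            return "c1"
        if (logL2 - logL1) / N > eps:
            return "c2"
        return None   # leave the document unclassified

    # With the Tab. 10 log likelihoods and document size N = 4:
    print(classify(-14.67, -6.18, 4, 0.5))   # c2
    print(classify(-14.67, -6.18, 4, 3.0))   # None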
"FMM0.5" -Q-- % " -,~ "FMM0.7 ~ "-... "... ~, ~ °0, 012 0:~ 01, 0:s 0:0 0:, 0:8 01, recall Figure 3: Precision-recall curve for the second data set as in WBM). This is because in this case (a) a word will be assigned into all of the clusters, (b) the dis- tribution of words in each cluster will approach that in the corresponding category in WBM, and (c) the likelihood value for each category will approach that in WBM (recall case (2) in Section 3). Since creating clusters in an optimal way is difficult, when cluster- ing does not improve performance we can at least make FMM perform as well as WBM by choosing 7 = 0. The question now is "does FMM perform better than WBM when 7 is 0?" In looking into these issues, we found the follow- ing: (1) When 3' >> 0, i.e., when we conduct clustering, FMM does not perform better than WBM for the first data set, but it performs better than WBM for the second data set. Evaluating classification results on the basis of each individual category, we have found that for three of the nine categories in the first data set, 45 Table 16: Break-even point COS WBM HCM0.5 HCM0.7 HCM0.9 HCM0.95 FMM0 FMM0.4 FMM0.5 FMM0.7 for thq first data set 0.60 0.62 0.32 0.42 0.54 0.51 0.66 0.54 0.52 0.42 Table 17: Break-even point for the COS 10.52 WBM !0.62 HCM0.5 10.47 HCM0.7 i0.51 HCM0.9 10.55 HCM0.95 0.31 FMM0 i0.62 FMM0.4 0.54 FMM0.5 0.67 FMM0.7 0.62 second data set FMM0.5 performs best, and that in two of the ten categories in the second data set FMM0.5 performs best. These results indicate that clustering some- times does improve classification results when we use our current way of creating clusters. (Fig. 4 shows the best result for each method for the cate- gory 'corn' in the first data set and Fig. 5 that for 'grain' in the second data set.) (2) When 3' >> 0, i.e., when we conduct clustering, the best of FMM almost always outperforms that of HCM. (3) When 7 = 0, FMM performs better than WBM for the first data set, and that it performs as well as WBM for the second data set. In summary, FMM always outperforms HCM; in some cases it performs better than WBM; and in general it performs at least as well as WBM. For both data sets, the best FMM results are supe- rior to those of COS throughout. This indicates that the probabilistic approach is more suitable than the cosine approach for document classification based on word distributions. Although we have not completed our experiments on the entire Reuters data set, we found that the re- sults with FMM on the second data set are almost as good as those obtained by the other approaches re- ported in (Lewis and Ringuette, 1994). (The results are not directly comparable, because (a) the results in (Lewis and Ringuette, 1994) were obtained from an older version of the Reuters data; and (b) they t 0,9 0.8 0.7 0.8 0.8 'COS" " ' ~ / , "HCMO.9" ~-. • ' ~ "~., "FMMO.8" , / "-~ o'., °'., o'.~ o'., o.~ oi° oi, o'.8 o'.8 ror,~ Figure 4: Precision-recall curve for category 'corn' 1 °.9 0.8 0.7 0,6 0.5 0.4 0.3 0.2 O.t "".. k~, • ... ~ "h~MO.7" "e-- ", FMI¢~.$ I 0'., 0'., 0'., 0'., 0'.8 0'., 0., 0.° 01, Figure 5: Precision-recall curve for category 'grain' used stop words, but we did not.) We have also conducted experiments on the Su- sanne corpus data t2 and confirmed the effectiveness of our method. We omit an explanation of this work here due to space limitations. 7 Conclusions Let us conclude this paper with the following re- marks: 1. 
1. The primary contribution of this research is that we have proposed the use of the finite mixture model in document classification.

2. Experimental results indicate that our method of using the finite mixture model outperforms the method based on hard clustering of words.

3. Experimental results also indicate that in some cases our method outperforms the word-based method when we use our current method of creating clusters.

¹²The Susanne corpus, which has four non-overlapping categories, is available at ftp://ota.ox.ac.uk.

Our future work includes:

1. comparing the various methods over the entire Reuters corpus and over other data bases,

2. developing better ways of creating clusters.

Our proposed method is not limited to document classification; it can also be applied to other natural language processing tasks, like word sense disambiguation, in which we can view the context surrounding an ambiguous target word as a document and the word senses to be resolved as categories.

Acknowledgements

We are grateful to Tomoyuki Fujita of NEC for his constant encouragement. We also thank Naoki Abe of NEC for his important suggestions, and Mark Petersen of Meiji Univ. for his help with the English of this text. We would like to express special appreciation to the six ACL anonymous reviewers who have provided many valuable comments and criticisms.

References

Apte, Chidanand, Fred Damerau, and Sholom M. Weiss. 1994. Automated learning of decision rules for text categorization. ACM Trans. on Information Systems, 12(3):233-251.

Cohen, William W. and Yoram Singer. 1996. Context-sensitive learning methods for text categorization. Proc. of SIGIR'96.

Deerwester, Scott, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journ. of the American Society for Information Science, 41(6):391-407.

Dempster, A.P., N.M. Laird, and D.B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journ. of the Royal Statistical Society, Series B, 39(1):1-38.

Everitt, B. and D. Hand. 1981. Finite Mixture Distributions. London: Chapman and Hall.

Fuhr, Norbert. 1989. Models for retrieval with probabilistic indexing. Information Processing and Management, 25(1):55-72.

Gale, William A. and Kenneth W. Church. 1990. Poor estimates of context are worse than none. Proc. of the DARPA Speech and Natural Language Workshop, pages 283-287.

Guthrie, Louise, Elbert Walker, and Joe Guthrie. 1994. Document classification by machine: Theory and practice. Proc. of COLING'94, pages 1059-1063.

Helmbold, D., R. Schapire, Y. Singer, and M. Warmuth. 1995. A comparison of new and old algorithms for a mixture estimation problem. Proc. of COLT'95, pages 61-68.

Jelinek, F. and R.L. Mercer. 1980. Interpolated estimation of Markov source parameters from sparse data. Proc. of Workshop on Pattern Recognition in Practice, pages 381-402.

Lewis, David D. 1992. An evaluation of phrasal and clustered representations on a text categorization task. Proc. of SIGIR'92, pages 37-50.

Lewis, David D. and Marc Ringuette. 1994. A comparison of two learning algorithms for text categorization. Proc. of 3rd Annual Symposium on Document Analysis and Information Retrieval, pages 81-93.

Lewis, David D., Robert E. Schapire, James P. Callan, and Ron Papka. 1996. Training algorithms for linear text classifiers. Proc. of SIGIR'96.

Pereira, Fernando, Naftali Tishby, and Lillian Lee. 1993. Distributional clustering of English words.
Proc. of ACL'93, pages 183-190.

Robertson, S.E. and K. Sparck Jones. 1976. Relevance weighting of search terms. Journ. of the American Society for Information Science, 27:129-146.

Salton, G. and M.J. McGill. 1983. Introduction to Modern Information Retrieval. New York: McGraw Hill.

Schutze, Hinrich, David A. Hull, and Jan O. Pedersen. 1995. A comparison of classifiers and document representations for the routing problem. Proc. of SIGIR'95.

Tanner, Martin A. and Wing Hung Wong. 1987. The calculation of posterior distributions by data augmentation. Journ. of the American Statistical Association, 82(398):528-540.

Wong, S.K.M. and Y.Y. Yao. 1989. A probability distribution model for information retrieval. Information Processing and Management, 25(1):39-53.

Yamanishi, Kenji. 1996. A randomized approximation of the MDL for stochastic models with hidden variables. Proc. of COLT'96, pages 99-109.

Yang, Yiming and Christopher G. Chute. 1994. An example-based mapping method for text categorization and retrieval. ACM Trans. on Information Systems, 12(3):252-277.
Representing Constraints with Automata

Frank Morawietz and Tom Cornell
Seminar für Sprachwissenschaft
Universität Tübingen
Wilhelmstr. 113
72074 Tübingen, Germany
{frank,cornell}@sfs.nphil.uni-tuebingen.de

Abstract

In this paper we describe an approach to constraint based syntactic theories in terms of finite tree automata. The solutions to constraints expressed in weak monadic second order (MSO) logic are represented by tree automata recognizing the assignments which make the formulas true. We show that this allows an efficient representation of knowledge about the content of constraints which can be used as a practical tool for grammatical theory verification. We achieve this by using the intertranslatability of formulae of MSO logic and tree automata and the embedding of MSO logic into a constraint logic programming scheme. The usefulness of the approach is discussed with examples from the realm of Principles-and-Parameters based parsing.

1 Introduction

In recent years there has been a continuing interest in computational linguistics in both model theoretic syntax and finite state techniques. In this paper we attempt to bridge the gap between the two by exploiting an old result in logic, that the weak monadic second order (MSO) theory of two successor functions (WS2S) is decidable (Thatcher and Wright 1968, Doner 1970). A "weak" second order theory is one in which the set variables are allowed to range only over finite sets. There is a more powerful result available: it has been shown (Rabin 1969) that the strong monadic second order theory (variables range over infinite sets) of even countably many successor functions is decidable. However, in our linguistic applications we only need to quantify over finite sets, so the weaker theory is enough, and the techniques correspondingly simpler.¹ The decidability proof works by showing a correspondence between formulas in the language of WS2S and tree automata, developed in such a way that the formula is satisfiable iff the set of trees accepted by the corresponding automaton is nonempty. While these results were well known, the (rather surprising) suitability of this formalism as a constraint language for Principles and Parameters (P&P) based linguistic theories has only recently been shown by Rogers (1994).

It should be pointed out immediately that the translation from formulas to automata, while effective, is just about as complex as it is possible to be. In the worst case, the number of states can be given as a function of the number of variables in the input formula with a stack of exponents as tall as the number of quantifier alternations in the formula. However, there is a growing body of work in the computer science literature, motivated by the success of the MONA decision procedure (Henriksen et al. 1995),² on the application of these techniques in computer science (Basin and Klarlund 1995, Kelb et al. 1997), which suggests that in practical cases the extreme explosiveness of this technique can be effectively controlled. It is one of our goals to show that this is the case in linguistic applications as well.

The decidability proof for WS2S is inductive on the structure of MSO formulas. Therefore we can choose our particular tree description language rather freely, knowing (a) that the resulting logic will be decidable and (b) that the translation to automata will go through as long as the atomic formulas of the language represent relations which can be translated (by hand if necessary) to tree automata.

¹All of these are generalizations to trees of results on strings and the monadic second order theory of one successor function originally due to Büchi (1960). The applications we mention here could be adapted to strings, with finite-state automata replacing tree automata. In general, all the techniques which apply to tree automata are straightforward generalizations of techniques for FSAs.

²The current version of the MONA tool works only on the MSO logic of strings. There is work in progress at the University of Aarhus to extend MONA to "MONA++", for trees (Biehl et al. 1996).
We will see how this is done in the next section, but the point can be appreciated immediately. For example, Niehren and Podelski (1992) and Ayari et al. (1997) have investigated the usefulness of these techniques in dealing with feature trees which unfold feature structures; there the attributes of an attribute-value term are translated to distinct successor functions. On the other hand, Rogers (1996) has developed a language rich in long-distance relations (dominance and precedence) which is more appropriate for work in Government-Binding (GB) theory. Compact automata can be easily constructed to represent dominance and precedence relations. One can imagine other possibilities as well: as we will see, the automaton for Kayne-style asymmetric, precedence-restricted c-command (Kayne 1994) is also very compact, and makes a suitable primitive for a description language along the lines developed by Frank and Vijay-Shanker (1995).

The paper is organized as follows. First we present some of the mathematical background, then we discuss (naïve) uses of the techniques, followed by the presentation of a constraint logic programming-based extension of MSO logic to avoid some of the problems of the naïve approach, concluding with a discussion of its strengths and weaknesses.

2 Defining Automata with Constraints

Tree automata. For completeness, we sketch the definitions of trees and tree automata here. An introduction to tree automata can be found in Gécseg and Steinby (1984), as well as in Thatcher and Wright (1968) and Doner (1970).

Assume an alphabet Σ = Σ0 ∪ Σ2 with Σ0 = {λ} and Σ2 being a set of binary operation symbols. We think of (binary) trees over Σ as just the set of terms TΣ constructed from this alphabet. That is, we let λ be the empty tree and let σ(t1, t2), for σ ∈ Σ2 and t1, t2 ∈ TΣ, denote the tree with label σ and subtrees t1, t2. Alternatively, we can think of a tree t as a function from the addresses in a binary tree domain T to labels in Σ.³

³The first approach is developed in Thatcher and Wright (1968), the second in Doner (1970). A tree domain is a subset of strings over a linearly ordered set which is closed under prefix and left sister.

A deterministic (bottom-up) tree automaton 𝒜 on binary trees is a tuple (A, Σ, a0, F, α) with A the set of states, a0 ∈ A the initial state, F ⊆ A the final states and α : (A × A × Σ) → A the transition function. The transition function can be thought of as a homomorphism on trees inductively defined as: hα(λ) = a0 and hα(σ(t1, t2)) = α(hα(t1), hα(t2), σ). An automaton 𝒜 accepts a tree t iff hα(t) ∈ F. The language recognized by 𝒜 is denoted by T(𝒜) = {t | hα(t) ∈ F}. Emptiness of the language T(𝒜) is decidable by a fixpoint construction computing the set of reachable states. The reachability algorithm is given in Figure 1.
1. R := {a0}, R′ := ∅.
2. For all (ai, aj) ∈ R × R, for all σ ∈ Σ, R′ := R′ ∪ {α(ai, aj, σ)}.
3. If R′ − R = ∅ then return R, else R := R ∪ R′, go to step 2.

Figure 1: Reachable states algorithm.

R contains the reachable states constructed so far, and R′ contains possibly new states constructed on the current pass through the loop. T(𝒜) is empty if and only if no final state is reachable. Naturally, if we want to test emptiness, we can stop the construction as soon as we encounter a final state in R′. Note that, given an automaton with k states, the algorithm must terminate after at most k passes through the loop, so the algorithm terminates after at most k³ searches through the transition table.

Sets of trees which are the language of some tree automaton are called recognizable.⁴ The recognizable sets are closed under the boolean operations of conjunction, disjunction and negation, and the automaton constructions which witness these closure results are absolutely straightforward generalizations of the corresponding better-known constructions for finite state automata. The recognizable sets are also closed under projections (mappings from one alphabet to another) and inverse projections, and again the construction is essentially that for finite state automata. The projection construction yields a nondeterministic automaton, but, again as for FSAs, bottom-up tree automata can be made deterministic by a straightforward generalization of the subset construction. (Note that top-down tree automata do not have this property: deterministic top-down tree automata recognize a strictly narrower family of tree sets.) Finally, tree automata can be minimized by a construction which is, yet again, a straightforward generalization of well known FSA techniques.

⁴The recognizable sets of trees yield the context free string languages, so MSO logics are limited to context free power. However, the CLP extension discussed below can be used to amplify the power of the formalism where necessary.
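A minimal Python rendering of the emptiness test of Figure 1 follows. The dictionary-based transition table is our own representation; missing entries play the role of a dead state.

    def is_empty(sigma, a0, final, delta):
        # Reachability algorithm of Figure 1.  delta maps
        # (state, state, symbol) -> state.  Returns True iff T(A) is empty.
        R = {a0}
        while True:
            new = {delta[(ai, aj, s)]
                   for ai in R for aj in R for s in sigma
                   if (ai, aj, s) in delta}
            if new <= R:                 # R' - R is empty: fixpoint
                return not (R & final)   # empty iff no final state reached
            R |= new

    # Toy automaton over sigma = {'f'} accepting every nonempty tree:
    delta = {(l, r, 'f'): 'a1' for l in ('a0', 'a1') for r in ('a0', 'a1')}
    print(is_empty({'f'}, 'a0', {'a1'}, delta))   # False: a1 is reachable

As the text notes, one can also return as soon as a final state turns up; the fixpoint version above stays closer to the figure.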
The weak second order theory of two successor functions. One attraction of monadic second order tree logics is that they give us a principled means of generating automata from a constraint-based theory. The connection allows the linguist to specify ideas about natural language in a concise manner in logic, while at the same time providing a way of "compiling" those constraints into a form which can be efficiently used in natural language processing applications. The translation is provided via the weak monadic second order theory of two successor functions (WS2S). The structure of two successor functions, N2, has for its domain (N2) the infinite binary branching tree. Standardly the language of WS2S is based on two successor functions (left-daughter and right-daughter), but, as Rogers (1994) shows, this is intertranslatable with a language based on dominance and precedence relations. Because we choose the monadic second order language over whichever of these two signatures is preferred, we can quantify over sets of nodes in N2. So we can use these sets to pick out arbitrarily large finite trees embedded in N2. Second order variables can also be used to pick out other properties of nodes, such as category or other node-labeling features, and they can be used to pick out higher order substructures such as X-bar projections or chains.

As usual, satisfiability of a formula in the language of WS2S by N2 is relative to an assignment function, mapping individual variables to members of N2 (as in first order logic) and mapping monadic predicate variables to subsets of N2. Following Büchi (1960), Doner (1970) and Thatcher and Wright (1968) show that assignment functions for such formulas can be coded by a labeling of the nodes in N2 in the following way. First, we treat individual variables as set variables which are constrained to be singleton sets (we can define the singletonhood property in MSO tree logic). So, without loss of generality, we can think of the domain of the assignment function as a sequence X1, ..., Xn of the variables occurring in the given formula. We choose our labeling alphabet to be the set of length n bit strings: {0, 1}ⁿ. Then, for every node n ∈ N2, if we intend to assign n to the denotation of Xi, we indicate this by labeling n with a bit string in which the ith bit is on. (In effect, we are labeling every node with a list of the sets to which it belongs.) Now every assignment function we might need corresponds uniquely to a labeling function over N2. What Doner, and Thatcher and Wright (and, for strong S2S, Rabin) show is that each formula in the language of WS2S corresponds to a tree automaton which recognizes just the satisfying "assignment labelings", and we can thereby define a notion of "recognizable relation". So the formula is satisfiable just in case the corresponding automaton recognizes a nonempty language. Note that any language whose formulas can be converted to automata in this way is therefore guaranteed to be decidable, though whether it is as strong as the language of WS2S must still be shown.

This approach to theorem-proving is rather different from more general techniques for higher-order theorem proving in ways that the formalizer must keep in mind. In particular, we are deciding membership in the theory of a fixed structure, N2, and not consequence of an explicit set of tree axioms. So, for example, the parse tree shows up in the formalization as a second order variable, rather than simply being a satisfying model (cf. Johnson (1994), on "satisfiability-based" grammar formalisms).

As an example, consider the following formula denoting the relation of directed asymmetric c-command in the sense of Kayne (1994).⁵ We use the tree logic signature of Rogers (1994), which, in a second order setting, is interdefinable with the language of multiple successor functions. Uppercase letters denote second order variables, lowercase ones first order variables; ◁* denotes reflexive domination, ◁⁺ proper domination and ≺ proper precedence:

AC-Com(x1, x2) ⇔
  % x1 c-commands x2:
  (∀z)[z ◁⁺ x1 ⇒ z ◁⁺ x2] ∧ ¬(x1 ◁* x2) ∧
  % x2 does not c-command x1:
  ¬((∀z)[z ◁⁺ x2 ⇒ z ◁⁺ x1] ∧ ¬(x2 ◁* x1)) ∧
  % x1 precedes x2:
  x1 ≺ x2

⁵This relation is not monadic, but reducible via syntactic substitution to an MSO signature. In fact, we can define relations of any arity as long as they are explicitly presentable in MSO logic.

The corresponding tree automaton is shown in Figure 2. On closer examination of the transitions, we note that we just percolate the initial state as long as we find only nodes which are neither x1 nor x2. From the initial state on both the left and the right subtree we can either go to the state denoting "found x1" (a1) if we read symbol 10 or to the state denoting "found x2" (a2) if we read symbol 01. We can then percolate a2 as long as the other branch does not immediately dominate x1.
𝒜 = (A, Σ, a0, F, α), A = {a0, a1, a2, a3, a4},
Σ = {11, 10, 01, 00}, F = {a3},
  α(a0, a0, 00) = a0
  α(a0, a0, 10) = a1    α(a0, a0, 01) = a2
  α(a0, a2, 00) = a2    α(a2, a0, 00) = a2
  α(a1, a2, 00) = a3
  α(a0, a3, 00) = a3    α(a3, a0, 00) = a3
  all other transitions are to a4

Figure 2: The automaton for AC-Com(x1, x2)

When we have a1 on the left subtree and a2 on the right one, we go to the final state a3, which again can be percolated as long as empty symbols are read. Clearly, the automaton recognizes all trees which have the desired c-command relation between the two nodes. It compactly represents the (infinite) number of possible satisfying assignments.

The proof of the decidability of WS2S furnishes a technique for deriving such automata for recognizable relations effectively. (In fact the above automaton was constructed by a simple implementation of such a compiler which we have running at the University of Tübingen. See Morawietz and Cornell (1997).) The proof is inductive. In the base case, relations defined by atomic formulas are shown to be recognizable by brute force. Then the induction is based on the closure properties of the recognizable sets, so that logical operators correspond to automaton constructions in the following way: conjunction and negation just use the obvious corresponding automaton operations, and existential quantification is implemented with the projection construction.

The inductive nature of the proof allows us a fairly free choice of signature, as long as our atomic relations are recognizable. We could, for example, investigate theories in which asymmetric c-command was the only primitive, or asymmetric c-command plus dominance.

The projection construction, as noted above, yields nondeterministic automata as output, and the negation construction requires deterministic automata as input, so the subset construction must be used every time a negated existential quantifier is encountered. The corresponding exponential blowup in the state space is the main cause of the non-elementary complexity of the construction. Since a quantifier prefix of the form ∃...∃∀...∀∃... is equivalent to ∃...∃¬∃...∃¬∃..., we see that the stack of exponents involved is determined by the number of quantifier alternations.

It is obviously desirable to keep the automata as small as possible. In our own prototype, we minimize the outputs of all of our automata constructions. Note that this gives us another way of determining satisfiability, since the minimal automaton recognizing the empty language is readily detectable: its only state is the initial state, and it is not final.
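As an illustration, the Figure 2 automaton (with the transition table as reconstructed above) can be run bottom-up on labeled binary trees. The encoding of trees as nested Python triples is our own convention for this sketch.

    def run(tree, a0, delta, sink):
        # Bottom-up run of a deterministic tree automaton; a tree is
        # None (the empty tree lambda, evaluated to a0) or a triple
        # (label, left, right).
        if tree is None:
            return a0
        label, left, right = tree
        return delta.get((run(left, a0, delta, sink),
                          run(right, a0, delta, sink), label), sink)

    # Transition table of Figure 2; labels are the bit strings (x1 x2).
    delta = {('a0','a0','00'): 'a0', ('a0','a0','10'): 'a1',
             ('a0','a0','01'): 'a2', ('a0','a2','00'): 'a2',
             ('a2','a0','00'): 'a2', ('a1','a2','00'): 'a3',
             ('a0','a3','00'): 'a3', ('a3','a0','00'): 'a3'}

    # x1 on a left branch, x2 properly embedded in its right sister:
    t = ('00', ('10', None, None),
               ('00', ('01', None, None), None))
    print(run(t, 'a0', delta, 'a4'))   # a3, a final state: tree accepted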
3 Defining Constraints with Automata

An obvious goal for the use of the discussed approach would be the (offline) generation of a tree automaton representing an entire grammar. That is, in principle, if we can formalize a grammar in an MSO tree logic, we can apply these compilation techniques to construct an automaton which recognizes all and only the valid parse trees.⁶ In this setting, the parsing problem becomes the problem of conjoining an automaton recognizing the input with the grammar automaton, with the result being an automaton which recognizes all and only the valid parse trees. For example, assume that we have an automaton Gram(X) such that X is a well-formed tree, and suppose we want to recognize the input John sees Mary. Then we conjoin a description of the input with the grammar automaton as given below:

  (∃x, y, z ∈ X)[x ∈ John ∧ y ∈ Sees ∧ z ∈ Mary ∧ x ≺ y ≺ z ∧ Gram(X)]

⁶This is reminiscent of approaches associated with Bernard Lang. See van Noord (1995) and references therein.

The recognition problem is just the problem of determining whether or not the resulting automaton recognizes a nonempty language. Since the automaton represents the parse forest, we can run it to generate parse trees for this particular input.

Unfortunately, as we have already noted, the problem of generating a tree automaton from an arbitrary MSO formula is of non-elementary complexity. Therefore, it seems unlikely that a formalization of a realistic principle-based grammar could be compiled into a tree automaton before the heat death of the universe. (The formalization of ideas from Relativized Minimality (Rizzi 1990) presented in Rogers (1994) fills an entire chapter without specifying even the beginning of a full lexicon, for example.) Nonetheless there are a number of ways in which these compilation techniques remain useful. First, though the construction of a grammar automaton is almost certainly infeasible for realistic grammars, the construction of a grammar-and-input automaton (which is a very much smaller machine) may not be. We discuss techniques based on constraint logic programming that are applicable to that problem in the next section.

Another use for such a compiler is suggested by the standard divide-and-conquer strategy for problem solving: instead of compiling an entire grammar formula, we isolate interesting subformulas, and attempt to compile them. Tree automata represent properties of trees and there are many such properties less complex than global well-formedness which are nonetheless important to establish for parse trees. In particular, where the definition of a property of parse trees involves negation or quantification, including quantification over sets of nodes, it may be easier to express this in an MSO tree logic, compile the resulting formula, and use the resulting automaton as a filter on parse trees originally generated by other means (e.g., by a covering phrase structure grammar).

At the moment, at least, the question of which grammatical properties can be compiled in a reasonable time is largely empirical. It is made even more difficult by the lack of high quality software tools. This situation should be alleviated in the near future when work on MONA++ at the University of Aarhus is completed; the usefulness of its older sister MONA (Henriksen et al. 1995), which works on strings and FSAs, has been well demonstrated in the computer science literature. In the meantime, for tests, we are using a comparatively simple implementation of our own.

Even with very low-power tools, however, we can construct automata for interesting grammatical constraints. For example, recall the definition of asymmetric c-command and its associated automaton in Figure 2. In linguistic applications, we generally use versions of c-command which are restricted to be local, in the sense that no element of a certain type is allowed to intervene. The general form of such a locality condition LC might then be formalized as follows:

LC(x, y) ⇔
  AC-Com(x, y) ∧
  % there does not exist z with property P:
  ¬(∃z)[z ∈ P ∧
    % such that it intervenes between x and y:
    (∃w)[w ◁ x ∧ w ◁⁺ z ∧ z ◁⁺ y]]

Here property P is meant to be the property identifying a relevant intervener for the relation meant to hold between x and y.
Note that this property could include that some other node be the left suc- cessor of z with certain properties, that is, this gen- eral scheme fits cases where the intervening item is not itself directly on the path between x and y. This formula was compiled by us and yields the automa- ton in Figure 3. Here the first bit position indicates membership in P, the second is for x and the third for y. A = (A,E, ao,F,a), A = {no, al, a2, a3, a4 }, F = {a3}, a(ao,ao,O00) = ao a(ao,ao, 100) = ao a(ao,ao,OlO) -- a2 (~(ao,ao,ll0) = a2 a(ao, ao, 001) = al a(ao, ao, 101) = al a(ao,al,000) -- al ~(ao,a3,000) = a3 a(ao,a3,100) = a3 ~(al,ao,000) = al Ol(a2, al, 000) = a3 a(a2, al, I00) = a3 o~(a3, ao, 000) = a3 a(a3, ao, 100) = a3 all other transitions are to at Figure 3: Automaton for local c-command. This automaton could in turn be implemented it- self as Prolog code, and considered to be an op- timized implementation of the given specification. Note in particular the role of the compiler as an op- timizer. It outputs a minimized automaton, and the minimal automaton is a unique (up to isomorphism) definition of the given relation. Consider again the definition of AC-Command in the previous section. It is far from the most compact and elegant formula defining that relation. There exist much smaller for- mulas equivalent to that definition, and indeed some are suggested by the very structure of the automa- ton. That formula was chosen because it is an ex- tremely straightforward formalization of the prose definition of the relation. Nonetheless, the automa- ton compiled from a much cleverer formalization would still be essentially the same. So no particular degree of cleverness is assumed on the part of the formalizer; optimization is done by the compiler. 7 4 MSO Logic and Constraint Logic Programming The automaton for a grammar formula is presum- ably quite a lot larger than the parse-forest automa- ton, that is, the automaton for the grammar con- joined with the input description. So it makes sense to search for ways to construct the parse-forest au- tomaton which do not require the prior construction of an entire grammar automaton. In this section we consider how we might do this by by the embedding 7The structure of the formula does often have an ef- fect on the time required by the compiler; in that sense writing MSO formalizations is still Logic Programming. 472 of the MSO constraint language into a constraint logic programming scheme. The constraint base is an automaton which represents the incremental ac- cumulation of knowledge about the possible valua- tions of variables. As discussed before, automata are a way to represent even infinite numbers of valu- ations with finite means, while still allowing for the efficient extraction of individual valuations. We in- crementally add information to this constraint base by applying and solving clauses with their associated constraints. That is, we actually use the compiler on line as the constraint solver. Some obvious advan- tages include that we can still use our succinct and flexible constraint language, but gain (a) a more ex- pressive language, since we now can include induc- tive definitions of relations, and (b) a way of guid- ing the compilation process by the specification of appropriate programs. We define a relational extension TC(WS2S) of our constraint language following the HShfeld and Smolka scheme (HShfeld and Smolka 1988). 
From the scheme we get a sound and complete, but now only semi-decidable, operational interpretation of a definite clause-based derivation process. The result- ing structure is an extension of the underlying con- straint structure with the new relations defined via fixpoints. As usual, a definite clause is an implication with an atom as the head and a body consisting of a sat- isfiable MSO constraint and a (possibly empty) con- junction of atoms. A derivation step consists of two parts: goal reduction, which substitutes the body of a goal for an appropriate head, and constraint solving, which means in our case that we have to check the satisfiability of the constraint associated with the clause in conjunction with the current con- straint store. For simplicity we assume a standard left-to-right, depth-first interpreter for the execution of the programs. The solution to a search branch of a program is a satisfiable constraint, represented in "solved form" as an automaton. Note that automata do make appropriate solved forms for systems of con- straints: minimized automata are normal forms, and they allow for the direct and efficient recovery of par- ticular solutions. Intuitively, we have a language which has an op- erational interpretation similar to Prolog with the differences that we interpret it not on the Herbrand universe but on N2, that we use MS0 constraint solving instead of unification and that we can use defined (linguistic) primitives directly. The resulting system is only semi-decidable, due to the fact that the extension permits monadic sec- ond order variables to appear in recursively defined clauses. So if we view the inductively defined rela- tions as part of an augmented signature, this sig- nature contains relations on sets. These allow the specification of undecidable relations; for example, Morawietz (1997) shows how to encode the PCP. If we limit ourselves to just singleton variables in any directly or indirectly recursive clause, every relation we define stays within the capacity of MSO logic, s since, if they are first order inductively definable, they are explicitly second order definable (Rogers 1994). Since this does not take us beyond the power of MSO logic and natural language is known not to be context-free, the extra power of TC(WS2S) offers a way to get past the context-free boundary. To demonstrate how we now split the work be- tween the compiler and the CLP interpreter, we present a simple example. Consider the following naive specification of a lexicon: 9 Lexicon(x) ~:~ (x E Sees A x E V A . . . ) V (xEJohnAxENA...) Y (xEMaryAxENA...) We have specified a set called Lexicon via a disjunc- tive specification of lexical labels, e.g. Sees, and the appropriate combination of features, e.g.V. Naively, at least, every feature we use must have its own bit position, since in the logic we treat features as set variables. So, the alphabet size with the encoding as bitstrings will be at least 2 IAlphabet[. It is immedi- ately clear that the compilation of such an automa- ton is extremely unattractive, if at all feasible. We can avoid having to compile the whole lexi- con by having separate clauses for each lexical en- try in the CLP extension. Notational conventions will be that constraints associated with clauses are written in curly brackets and subgoals in the body are separated by &'s. Note that relations defined in TC(WS2S) are written lowercase. lexicon(x) t--- {x E Sees A x E V A . . . } lexicon(x) +-- {x E John A x E N A . . . 
lexicon(x) ← {x ∈ Mary ∧ x ∈ N ∧ ...}

This shifts the burden of handling disjunctions to the interpreter. The intuitive point should be clear: it is not the case that every constraint in the grammar has to be expressed in one single tree automaton. We need only compile into the constraint store those which are really needed. Note that this is true even for variables appearing in the global table. In the CLP extension the appearance in the table is not coupled to the appearance in the constraint store. Only those variables are present in both which are part of the constraint in an applied clause.

We can also use offline compiled modules in an R(WS2S) parsing program. As a source of simple examples, we draw on the definitions from the lectures on P&P parsing presented in Johnson (1995). In implementing a program such as Johnson's simplified parse relation (see Figure 4), we can in principle define any of the subgoals in the body either via precompiled automata (so they are essentially treated as facts), or else by providing them with more standard definite clause definitions.

parse(Words, Tree) ←
  {Tree(Words)} &
  yield(Words, Tree) &
  xbar(Tree) &
  ecp(Tree)

Figure 4: parse as in Johnson (1995)

In more detail, Words denotes a set of nodes labeled according to the input description. Our initial constraint base, which can be automatically generated from a Prolog list of input words, is the corresponding tree automaton. The associated constraint Tree is easily compilable and serves as the initialization for our parse tree. The yield and ecp predicates can easily be explicitly defined and, if practically compilable (which is certainly the case for yield), could then be treated as facts. The xbar predicate, on the other hand, is a disjunctive specification of licensing conditions depending on different features and configurations, e.g., whether we are faced with a binary-, unary- or non-branching structure, which is better expressed as several separate rules. In fact, since we want the lexicon to be represented as several definite clauses, we cannot have xbar as a simple constraint. This is due to the limitation of the constraints which appear in the definite clauses to (pure) MSO constraints.

We now have another well-defined way of using the offline compiled modules. This, at least, separates the actual processing issues (e.g., parse) from the linguistically motivated modules (e.g., ecp). One can now see that with the relational extension we can not only use those modules which are compilable directly, but also guide the compilation procedure. In effect this means interleaving the intersection of the grammar and the input description such that only the minimal amount of information needed to determine the parse is incrementally stored in the constraint base.

Furthermore, the language of R(WS2S) is sufficiently close to standard Prolog-like programming languages to allow the transfer of techniques and approaches developed in the realm of P&P-based parsing. In other words, it needs only little effort to translate a Prolog program to an R(WS2S) one.
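The derivation regime just described can be summarized in a schematic interpreter. The sketch below is ours, not the authors' system: conjoin and satisfiable stand for the automaton intersection and the emptiness test of Figure 1, goals are represented by bare predicate names, and parameter passing between head and body atoms is glossed over.

    def solve(goals, store, program, conjoin, satisfiable):
        # Schematic left-to-right, depth-first interpreter: reduce the
        # first goal with each matching clause, conjoin the clause's MSO
        # constraint with the store, and prune unsatisfiable branches.
        if not goals:
            yield store                     # a solved-form automaton
            return
        first, rest = goals[0], goals[1:]
        for constraint, body in program.get(first, []):
            new_store = conjoin(store, constraint)
            if satisfiable(new_store):      # emptiness test, cf. Figure 1
                yield from solve(body + rest, new_store,
                                 program, conjoin, satisfiable)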
5 Conclusions and Outlook

In this paper we presented a first step towards the realization of a system using automata-based theorem-proving techniques to implement linguistic processing and theory verification. Despite the staggering complexity bound, the success of and the continuing work on these techniques in computer science promises a usable tool to test formalizations of grammars. The advantages are readily apparent. The direct use of a succinct and flexible description language, together with an environment to test the formalizations with the resulting finite, deterministic tree automata, offers a way of combining the needs of both formalization and processing. Furthermore, the CLP extension offers an even more powerful language which allows a clear separation of processing and specification issues while retaining the power and flexibility of the original. Since it allows the control of the generation process, the addition of information to the constraint base is dependent on the input, which keeps the number of variables smaller and by this the automata more compact.

Nevertheless it remains to be seen how far the system can be advanced with the use of an optimized theorem-prover. The number of variables our current prototype can handle lies between eight and eleven.¹⁰ This is not enough to compile or test all interesting aspects of a formalization. So further work will definitely involve the optimization of the prototype implementation, while we await the development of more sophisticated tools like MONA++. It seems promising to improve the (very basic) CLP interpreter, too. The Höhfeld and Smolka scheme allows the inclusion of existential quantification into the relational extension. We intend to use this to provide the theoretical background for the implementation of a garbage collection procedure which projects variables from the constraint store which are either local to a definite clause or explicitly marked for projection in the program, so that the constraint store can be kept as small as possible.

¹⁰Note that this corresponds to 256 to 2048 different bitstrings.

6 Acknowledgements

This work has been supported by the project A8 of the SFB 340 of the Deutsche Forschungsgemeinschaft. We wish especially to thank Uwe Mönnich and Jim Rogers for discussions and advice. Needless to say, any errors and infelicities which remain are ours alone.

References

Ayari, A., Basin, D. and Podelski, A. (1997). LISA: A specification language based on WS2S, Ms, Universität Freiburg. Submitted to CSL'97.

Basin, D. and Klarlund, N. (1995). Hardware verification using monadic second-order logic, Computer-Aided Verification (CAV '95), LNCS 939, Springer, pp. 31-41.

Biehl, M., Klarlund, N. and Rauhe, T. (1996). Algorithms for guided tree automata, Proc. WIA '96, LNCS, Springer-Verlag.

Büchi, J. R. (1960). Weak second-order arithmetic and finite automata, Zeitschrift für mathematische Logik und Grundlagen der Mathematik 6: 66-92.

Doner, J. (1970). Tree acceptors and some of their applications, Journal of Computer and System Sciences 4: 406-451.

Frank, R. and Vijay-Shanker, K. (1995). C-command and grammatical primitives, Presentation at the 18th GLOW Colloquium. University of Tromsø.

Gécseg, F. and Steinby, M. (1984). Tree Automata, Akadémiai Kiadó, Budapest.

Henriksen, J. G., Jensen, J., Jørgensen, M., Klarlund, N., Paige, R., Rauhe, T. and Sandholm, A. (1995).
MONA: Monadic second-order logic in practice, in Brinksma, Cleaveland, Larsen, Margaria and Steffen (eds), TACAS '95, LNCS 1019, Springer, pp. 89-110.

Höhfeld, M. and Smolka, G. (1988). Definite relations over constraint languages, LILOG Report 53, IBM Deutschland, Stuttgart, Germany.

Johnson, M. (1994). Two ways of formalizing grammars, Linguistics and Philosophy 17: 221-248.

Johnson, M. (1995). Constraint-based natural language parsing, ESSLLI '95, Barcelona, Course notes.

Kayne, R. S. (1994). The Antisymmetry of Syntax, MIT Press, Cambridge, Mass. and London, England.

Kelb, P., Margaria, T., Mendler, M. and Gsottberger, C. (1997). MOSEL: A flexible toolset for monadic second-order logic, in E. Brinksma (ed.), TACAS '97.

Morawietz, F. (1997). Monadic second order logic, tree automata and constraint logic programming, Arbeitspapiere des SFB 340 86, SFB 340, Universität Tübingen.

Morawietz, F. and Cornell, T. L. (1997). On the recognizability of relations over a tree definable in a monadic second order tree description language, Arbeitspapiere des SFB 340 85, SFB 340, Universität Tübingen.

Niehren, J. and Podelski, A. (1992). Feature automata and recognizable sets of feature trees, in M.-C. Gaudel and J.-P. Jouannaud (eds), Proceedings of the 4th International Joint Conference on Theory and Practice of Software Development, Springer, LNCS 668, pp. 356-375.

Rabin, M. O. (1969). Decidability of second-order theories and automata on infinite trees, Transactions of the AMS 141: 1-35.

Rizzi, L. (1990). Relativized Minimality, MIT Press.

Rogers, J. (1994). Studies in the Logic of Trees with Applications to Grammar Formalisms, PhD thesis, University of Delaware. CS Technical Report No. 95-04.

Rogers, J. (1996). A model-theoretic framework for theories of syntax, Proc. of the 34th Annual Meeting of the ACL, Santa Cruz, USA.

Thatcher, J. W. and Wright, J. B. (1968). Generalized finite automata theory with an application to a decision problem of second-order logic, Mathematical Systems Theory 2(1): 57-81.

van Noord, G. (1995). The intersection of finite state automata and definite clause grammars, Proc. of the 33rd Annual Meeting of the ACL, Boston.
Retrieving Collocations by Co-occurrences and Word Order Constraints

Sayori Shimohata, Toshiyuki Sugio and Junji Nagata
Kansai Laboratory, Research & Development Group
Oki Electric Industry Co., Ltd.
Crystal Tower 1-2-27, Shiromi, Chuo-ku, Osaka, 540, Japan
{sayori,sugio,nagata}@kansai.oki.co.jp

Abstract

In this paper, we describe a method for automatically retrieving collocations from large text corpora. This method retrieves collocations in the following stages: 1) extracting strings of characters as units of collocations, and 2) extracting recurrent combinations of strings in accordance with their word order in a corpus as collocations. Through the method, a wide range of collocations, especially domain specific collocations, is retrieved. The method is practical because it uses plain texts without any information dependent on a language, such as lexical knowledge and parts of speech.

1 Introduction

A collocation is a recurrent combination of words, ranging from word level to sentence level. In this paper, we classify collocations into two types according to their structures. One is an uninterrupted collocation, which consists of a sequence of words; the other is an interrupted collocation, which consists of words containing one or several gaps filled in by substitutable words or phrases which belong to the same category. The features of collocations are defined as follows:

• collocations are recurrent
• collocations consist of one or several lexical units
• the order of units is rigid in a collocation.

For language processing such as machine translation, a knowledge of domain specific collocations is indispensable, because what collocations mean differs from their literal meanings, and the usage and meaning of a collocation is totally dependent on each domain. In addition, new collocations are produced one after another and most of them are technical jargon.

There has been a growing interest in corpus-based approaches which retrieve collocations from large corpora (Nagao and Mori, 1994), (Ikehara et al., 1996), (Kupiec, 1993), (Fung, 1995), (Kitamura and Matsumoto, 1996), (Smadja, 1993), (Smadja et al., 1996), (Haruno et al., 1996). Although these approaches achieved good results for the task considered, most of them aim to extract fixed collocations, mainly noun phrases, and require information which is dependent on each language, such as dictionaries and parts of speech. From a practical point of view, however, a more robust and flexible approach is desirable.

We propose a method to retrieve interrupted and uninterrupted collocations by the frequencies of co-occurrences and word order constraints from a monolingual corpus. The method comprises two stages: the first stage extracts sequences of words (or characters)¹ from a corpus as units of collocations, and the second stage extracts recurrent combinations of units and constructs collocations by arranging them in accordance with word order in the corpus.

¹A word is recognized as a minimum unit in a language such as English, where whitespace is used to delimit words, while a character is recognized as such in languages like Japanese and Chinese, which have no word delimiters. Although the method described in this paper is applicable to both kinds of languages, we have taken English as an example.

2 Algorithm

2.1 Extracting units of collocation

(Nagao and Mori, 1994) developed a method to calculate the frequencies of strings composed of n characters (n-grams). Since this method generates all n-character strings appearing in a text, the output contains a lot of fragments and useless expressions. For example, even if "local", "area", and "network" always appear as substrings of "a local area network" in a corpus, this method generates redundant strings such as "a local", "a local area" and "area network".
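For concreteness, a word-level variant of this n-gram counting can be sketched as follows (Nagao and Mori's original method works on characters); the toy corpus is invented for illustration.

    from collections import Counter

    def ngram_counts(sentences, max_n):
        # Frequencies of all word n-grams up to length max_n, a
        # word-level analogue of the character n-gram counting.
        counts = Counter()
        for s in sentences:
            words = s.split()
            for n in range(1, max_n + 1):
                for i in range(len(words) - n + 1):
                    counts[" ".join(words[i:i + n])] += 1
        return counts

    corpus = ["a local area network is installed",
              "the local area network is down"]
    counts = ngram_counts(corpus, 4)
    print(counts["local area network"], counts["a local area"])   # 2 1

The second count illustrates the problem the next paragraph addresses: "a local area" is generated even though it is only a fragment of "a local area network".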
To filter out the fragments, we measure the distribution of adjacent words preceding and following the strings using an entropy threshold. This is based on the idea that adjacent words will be widely distributed if the string is meaningful, and they will be localized if the string is a substring of a meaningful string. Taking the example mentioned above, the words which follow "a local area" are practically identified as "network" because "a local area" is a substring of "a local area network" in the corpus. On the contrary, the words which follow "a local area network" are hardly identified, because "a local area network" is a unit of expression and innumerable words can follow the string. It means that the distribution of adjacent words is effective for judging whether the string is an appropriate unit or not.

We introduce an entropy value, which is a measure of disorder. Let the string be str, the adjacent words w1 ... wn, and the frequency of str freq(str). The probability of each possible adjacent word p(wi) is then:

    p(wi) = freq(wi) / freq(str).    (1)

The entropy of str, H(str), is then defined as:

    H(str) = Σ_{i=1}^{n} -p(wi) log p(wi).    (2)

H(str) takes the highest value if n = freq(str) and p(wi) = 1/n for all wi, and it takes the lowest value 0 if n = 1 and p(wi) = 1. Calculating the entropy of both sides of the string, we adopt the lower one as the entropy of the string. Str is accepted only if the following inequation is satisfied:

    H(str) > T_entropy.    (3)

Fragmental strings such as "a local" and "area network" are filtered out with these procedures because their entropy values are expected to be small. Most of the strings extracted in this stage are meaningful units such as compound words, prepositional phrases, and idiomatic expressions. These strings are uninterrupted collocations in themselves, and they are used in the next stage to construct collocations. This method is useful for languages without word delimiters, and for the other languages as well.

2.2 Extracting collocations

By the use of each string derived in the previous stage, this stage extracts strings which frequently co-occur with the string and constructs them as a collocation. It is based on the idea that there is a string which is used to induce a collocation. We call this string a "key string" hereafter. The following are the procedures to retrieve a collocation:

1. Take a key string strk from the strings stri (i = 1 ... n), and retrieve sentences containing strk from the corpus.

2. Examine how often each possible combination of strk and stri co-occurs, and extract stri if the frequency exceeds a given threshold T_freq.

3. Examine every two strings stri and strj and refine them by the following steps alternately:

   • Combine stri and strj when they overlap or adjoin each other and the following inequation is satisfied:

       freq(stri, strj) / freq(stri) > T_ratio    (4)

   • Filter out stri if strj subsumes stri and the following inequation is satisfied:

       freq(strj) / freq(stri) > T_ratio    (5)

4. Construct a collocation by arranging the strings stri in accordance with the word order in the corpus.
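Before walking through the example, the unit filter of Section 2.1, Eqs. (1)-(3), amounts to the following sketch. The logarithm base is not specified in the text; base 2 is assumed here, and the example counts are invented.

    import math

    def entropy(adjacent_counts, freq_str):
        # Eqs. (1) and (2): p(w_i) = freq(w_i) / freq(str).
        return sum(-(c / freq_str) * math.log2(c / freq_str)
                   for c in adjacent_counts.values())

    def accept(left_counts, right_counts, t_entropy):
        # Take the lower of the two sides' entropies (inequation (3)).
        freq_str = sum(left_counts.values())
        h = min(entropy(left_counts, freq_str),
                entropy(right_counts, freq_str))
        return h > t_entropy

    # "a local area" is always followed by "network", so its right-hand
    # entropy is 0 and the string is rejected as a fragment:
    print(accept({"install": 1, "use": 1, "on": 2}, {"network": 4}, 1.0))  # False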
The second step and the third step narrow down the strings to the units of the collocation. Through these steps, only the strings which significantly co-occur with the key string str_k are extracted.

The second step eliminates the strings that are not frequent enough. Consider the example of Figure 1. This is a list of retrieved sentences containing the key string "Refer to"; each underlined string corresponds to a string str_i.

  Refer to the appropriate manual for instructions on ...
  Refer to the manual for specific instructions.
  Refer to the installation manual for specific instructions for ...
  Refer to the manual for specific instructions ...

  Figure 1: Sentences containing "Refer to"

Assuming a frequency threshold T_freq of 2, the strings which co-occur with str_k more than twice are extracted in the second step. Table 1 shows the result of this step. Although it is a very simple technique, almost all the useless strings are excluded by this step.

  str_i                       freq(str_k, str_i)
  the                         4
  manual                      4
  for specific instructions   3
  on                          2

  Table 1: Result of the second step

The third step reorganizes the strings into optimum units in the specific context. This is based on the idea that a longer string is more significant as a unit of a collocation if it is frequent enough. Assuming that the threshold T_ratio is 0.75, first, the string "manual for specific instructions" is produced, as inequality (4) is satisfied. Next, "manual" and "for specific instructions" are deleted, as inequality (5) is satisfied. This process is repeated until no string satisfies the inequalities. Table 2 shows the result of this step.

  str_i                              freq(str_k, str_i)
  the                                4
  manual for specific instructions   3
  on                                 2

  Table 2: Result of the third step

The fourth step constructs a collocation by arranging the strings in accordance with the word order in the sentences retrieved in the first step. Taking each str_i in order of frequency, this step determines where str_i is placed in the collocation. In this example, the position of "the" is examined first. According to the sentences shown in Figure 1, "the" is always placed next to "Refer to". Its position is therefore determined to follow "Refer to". Next, the position of "manual for specific instructions" is examined, and it is determined to follow a gap placed after "Refer to the". Finally, the following collocation is produced:

  "Refer to the ... manual for specific instructions on ..."

The broken lines in the collocation indicate the gaps, which can be filled by any substitutable words or phrases. In the example, "appropriate" or "installation" fills the first gap. Thus, we retrieve an interrupted or uninterrupted collocation of arbitrary length induced by the key string. This procedure is performed for each string obtained in the previous stage. By changing the thresholds, various levels of collocations are retrieved.

3 Evaluation

We performed an experiment to evaluate the algorithm. The corpus used in the experiment is a computer manual written in English comprising 1,311,522 words (in 120,240 sentences).

In the first stage of the method, 167,387 strings are produced. Among them, 650, 1,950, and 6,774 strings are extracted with entropy thresholds of 2, 1.5, and 1, respectively. Of the 650 strings whose entropy is greater than 2, 162 strings (24.9%) are complete sentences, 297 strings (45.7%) are regarded as grammatically appropriate units, and 114 strings (17.5%) are regarded as meaningful units even though they are not grammatical.
This told us that the precision of the first stage is 88.1%.

Table 3 shows the top 20 strings in order of entropy value. They are quite representative of the given domain. Most of them are technical jargon related to computers and typical expressions used in manual descriptions, although they vary in their constructions. It is interesting to note that strings which are not grammatical units also take high entropy values. Some of them contain punctuation, and some of them terminate in articles. Punctuation marks and function words in the strings are useful for recognizing how the strings are used in a corpus.

Table 4 illustrates how the entropy changes with the length of the string. The third column in the table shows the number of kinds of adjacent words which follow the strings. The table shows that ungrammatical strings such as "For more information on" and "For more information, refer to" act more cohesively in the corpus than the grammatical string "For more information". Indeed, the former strings are more useful for constructing collocations in the second stage.

In the second stage, we extracted collocations from the 411 key strings retrieved in the first stage (297 grammatical units and 114 meaningful units). The necessary thresholds are given by the following equations:

  T_freq = freq(str_k) × 0.1
  T_ratio = 0.8

As a result, 269 combinations of units are retrieved as collocations. Note that collocations are not generated from all the key strings, because some of them are uninterrupted collocations in themselves, like No. 2 in Table 3. Evaluation is done by human inspection, and 180 collocations are regarded as meaningful. The precision is 43.8% when the number of meaningful collocations is divided by the number of key strings, and 66.9% when it is divided by the number of collocations retrieved in the second stage².

²Usually the latter ratio is adopted as precision.

Table 5 shows the collocations extracted with the underlined key strings. The table indicates that collocations of arbitrary length which are frequently used in computer manuals are retrieved by the method. As the method focuses on the co-occurrence of strings, most of the collocations are specific to the given domain. Common collocations tend to be ignored, because they are not used repeatedly in a single text. This is not a serious problem, however, because common collocations are limited in number and we can efficiently obtain them from dictionaries or by human reflection.

Nos. 7 and 8 in Table 5 are examples of invalid collocations. They contain unnecessary strings such as "to a" and ", the". The majority of invalid collocations are of this type. One possible solution is to eliminate unnecessary strings in the second stage. Most of the unnecessary strings consist only of punctuation marks and function words. Therefore, by filtering out these strings, the invalid collocations produced by the method should be reduced.

Figure 2 summarizes the result of the evaluation. In the experiment, 573 strings are retrieved as appropriate units of collocations and 180 combinations of units are retrieved as appropriate collocations. Precision is 88.1% in the first stage, and 66.9% in the second stage.
  1st stage: CS = 162 (24.9%)   GU = 297 (45.7%)   MU = 114 (17.5%)   F = 77 (11.9%)
  2nd stage: MC = 180 (43.8%)   F = 89 (21.7%)   NC = 142 (34.5%)

  CS: complete sentences   GU: grammatical units   MU: meaningful units
  MC: meaningful collocations   F: fragments   NC: not captured

  Figure 2: Summary of evaluation

Although the evaluation of retrieval systems is usually performed with precision and recall, we could not examine the recall rate in this experiment. It is difficult to determine how many collocations there are in a corpus, because the measure depends largely on the domain and the application considered. As an alternative way of evaluating the algorithm, we are planning to apply the retrieved collocations to a machine translation system and evaluate how they contribute to the quality of translation.

4 Related work

Algorithms for retrieving collocations have been described by (Smadja, 1993) and (Haruno et al., 1996).

(Smadja, 1993) proposed a method to retrieve collocations by combining bigrams whose co-occurrences are greater than a given threshold³. In this approach, the bigrams are valid only when there are fewer than five words between them. This is based on the assumption that "most of the lexical relations involving a word w can be retrieved by examining the neighborhood of w wherever it occurs, within a span of five (-5 and +5 around w) words." While the assumption is reasonable for some languages such as English, it cannot be applied to all languages, especially those without word delimiters.

(Haruno et al., 1996) constructed collocations by iteratively combining pairs of strings⁴ with high mutual information. But the mutual information is estimated inadequately low when the cohesiveness of the two strings differs greatly. Take "in spite (of)", for example. Despite the fact that "spite" is frequently used with "in", the mutual information between "in" and "spite" is small, because "in" is used in various ways. Thus, the method may miss significant collocations even though one of the strings has strong cohesiveness.

In contrast to these methods, our method focuses on the distribution of adjacent words (or characters) when retrieving units of collocations, and on the co-occurrence frequencies and word order between a key string and other strings when retrieving collocations. With this method, various kinds of collocations induced by key strings are retrieved, regardless of the number of units or the distance between the units in a collocation. Another distinction is that our method does not require any lexical knowledge or language-dependent information such as parts of speech. Owing to this, the method has good applicability to many languages.

5 Conclusion

In this paper, we described a robust and practical method for retrieving collocations by the co-occurrence of strings and word order constraints. With this method, a wide range of collocations which are frequently used in a specific domain is retrieved automatically. The method is applicable to various languages because it uses a plain textual corpus and requires only the general information appearing in the corpus. Although the collocations retrieved by the method are monolingual and are not yet directly usable by machine applications, the results can be extended in various ways. We plan to compile a knowledge base of bilingual collocations by incorporating the method into conventional bilingual approaches.
³This approach is similar to the string refinement process described in this paper.
⁴They call the strings word chunks.

  No.  str                                       H(str)  freq(str)
  1    the current functional area               3.8     45
  2    Before you install this device :          3.78    44
  3    This could introduce data corruption .    3.37    29
  4    All rights are reserved .                 3.37    29
  5    Note that the                             2.93    53
  6    , such as                                 2.91    87
  7    Information on minor numbers is in        2.45    20
  8    , for example ,                           2.44    23
  9    The default is                            2.44    52
  10   , you can use the                         2.26    25
  11   to see if the                             2.2     24
  12   stands for                                2.15    30
  13   system accounting :                       2.14    48
  14   These are                                 2.12    37
  15   allocation policy                         2.1     21
  16   For example , the                         2.1     97
  17   For more information on                   2.1     96
  18   permission bits                           2.07    26
  19   By default, the                           2.06    32
  20   The syntax for                            2.03    57

  Table 3: Top 20 strings extracted at the first stage

  str                               H(str)  n   freq(str)
  For more                          0.13    7   200
  For more information              0.33    3   168
  For more information ,            0.21    4   46
  For more information , see        1.03    8   25
  For more information , refer to   1.17    6   15
  For more information on           2.1     56  96
  For more information about        1.69    21  35

  Table 4: Strings including "For more"

  No.  collocation
  1    For more information on ..., refer to the ... manual.
  2    You can use the ... to help you.
  3    The syntax for ... is : ...
  4    output from the execution of ... commands.
  5    ..., use the ... command with the ... option
  6    ... have a special meaning in this manual.
  7    ... to a (such as ..., and ...).
  8    ... if the system ... or a ... for a ..., the ...

  Table 5: Examples of collocations extracted at the second stage

References

Pascale Fung. 1995. Compiling bilingual lexicon entries from a non-parallel English-Chinese corpus. In Proceedings of the 3rd Workshop on Very Large Corpora, pages 173-183.

Masahiko Haruno, Satoru Ikehara, and Takefumi Yamazaki. 1996. Learning bilingual collocations by word-level sorting. In Proceedings of the 16th COLING, pages 525-530.

Satoru Ikehara, Satoshi Shirai, and Hajime Uchino. 1996. A statistical method for extracting uninterrupted and interrupted collocations from very large corpora. In Proceedings of the 16th COLING, pages 574-579.

Mihoko Kitamura and Yuji Matsumoto. 1996. Automatic extraction of word sequence correspondences in parallel corpora. In Proceedings of the 4th Workshop on Very Large Corpora, pages 79-87.

Julian Kupiec. 1993. An algorithm for finding noun phrase correspondences in bilingual corpora. In Proceedings of the 31st Annual Meeting of the ACL, pages 17-22.

Makoto Nagao and Shinsuke Mori. 1994. A new method of n-gram statistics for large number of n and automatic extraction of words and phrases from large text data of Japanese. In Proceedings of the 15th COLING, pages 611-615.

Frank Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1), pages 143-177.

Frank Smadja, Kathleen McKeown, and Vasileios Hatzivassiloglou. 1996. Translating collocations for bilingual lexicons: A statistical approach. Computational Linguistics, 22(1), pages 1-38.
Learning Parse and Translation Decisions From Examples With Rich Context

Ulf Hermjakob and Raymond J. Mooney
Dept. of Computer Sciences
University of Texas at Austin
Austin, TX 78712, USA
ulf@cs.utexas.edu   mooney@cs.utexas.edu

Abstract

We present a knowledge and context-based system for parsing and translating natural language and evaluate it on sentences from the Wall Street Journal. Applying machine learning techniques, the system uses parse action examples acquired under supervision to generate a deterministic shift-reduce parser in the form of a decision structure. It relies heavily on context, as encoded in features which describe the morphological, syntactic, semantic and other aspects of a given parse state.

1 Introduction

The parsing of unrestricted text, with its enormous lexical and structural ambiguity, still poses a great challenge in natural language processing. The traditional approach of trying to master the complexity of parse grammars with hand-coded rules turned out to be much more difficult than expected, if not impossible. Newer statistical approaches with often only very limited context sensitivity seem to have hit a performance ceiling even when trained on very large corpora.

To cope with the complexity of unrestricted text, parse rules in any kind of formalism will have to consider a complex context with many different morphological, syntactic or semantic features. This can present a significant problem, because even linguistically trained natural language developers have great difficulties writing, and even more so extending, explicit parse grammars covering a wide range of natural language. On the other hand, it is much easier for humans to decide how specific sentences should be analyzed. We therefore propose an approach to parsing based on learning from examples with a very strong emphasis on context, integrating morphological, syntactic, semantic and other aspects relevant to making good parse decisions, thereby also allowing the parsing to be deterministic.

Applying machine learning techniques, the system uses parse action examples acquired under supervision to generate a deterministic shift-reduce type parser in the form of a decision structure. The generated parser transforms input sentences into an integrated phrase-structure and case-frame tree, powerful enough to be fed into a transfer and a generation module to complete the full process of machine translation.

Balanced by rich context and some background knowledge, our corpus-based approach relieves the NL-developer from the hard if not impossible task of writing explicit grammar rules and keeps grammar coverage increases very manageable. Compared with standard statistical methods, our system relies on deeper analysis and more supervision, but radically fewer examples.

2 Basic Parsing Paradigm

As the basic mechanism for parsing text into a shallow semantic representation, we choose a shift-reduce type parser (Marcus, 1980). It breaks parsing into an ordered sequence of small and manageable parse actions such as shift and reduce. This ordered 'left-to-right' parsing is much closer to how humans parse a sentence than, for example, chart oriented parsers; it allows a very transparent control structure and makes the parsing process relatively intuitive for humans. This is very important, because during the training phase, the system is guided by a human supervisor for whom the flow of control needs to be as transparent and intuitive as possible.
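As a minimal sketch of this control structure (our own simplification in Python, not the system's actual code; only the two most frequent actions are modeled and frames are reduced to nested tuples), the parse-action loop might look like this:

def apply_action(stack, buffer, action):
    # Execute one parse action.  ("SHIFT",) moves the next frame from
    # the input list onto the parse stack; ("REDUCE", n, label)
    # replaces the top n stack frames by one new frame.
    if action[0] == "SHIFT":
        stack.append(buffer.pop(0))
    elif action[0] == "REDUCE":
        _, n, label = action
        children = stack[-n:]
        del stack[-n:]
        stack.append((label, children))
    return stack, buffer

def parse(tokens, next_action):
    # Deterministic single-pass parsing: next_action is the learned
    # decision structure, viewed as a callable from the current parse
    # state to the next action.  During training, the supervisor plays
    # this role, confirming or overruling the system's proposal.
    stack, buffer = [], list(tokens)
    while buffer or len(stack) > 1:
        stack, buffer = apply_action(stack, buffer,
                                     next_action(stack, buffer))
    return stack[0]

For example, with tokens ["bought", "a", "book"], a next_action that shifts everything and then returns ("REDUCE", 3, "vp") yields ("vp", ["bought", "a", "book"]).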
The parsing does not have separate phases for part-of-speech selection and syntactic and semantic processing, but rather integrates all of them into a single parsing phase. Since the system has all morphological, syntactic and semantic context information available at all times, the system can make well-based decisions very early, allowing a single path, i.e. a deterministic parse, which eliminates wasting computation on 'dead end' alternatives.

Before the parsing itself starts, the input string is segmented into a list of words including punctuation marks, which are then sent through a morphological analyzer that, using a lexicon¹, produces primitive frames for the segmented words. A word gets a primitive frame for each possible part of speech. (Morphological ambiguity is captured within a frame.)

¹The lexicon provides part-of-speech information and links words to concepts, as used in the KB (see next section). Additional information includes irregular forms and grammatical gender etc. (in the German lexicon).

  parse stack (top of stack at right)              input list (top of list at left)
  ... | "bought" (synt: verb) | "a book" (synt: np)     "today" (synt: adv) | ...

  (R 2 TO S-VP AS PRED (OBJ PAT))
  "reduce the 2 top elements of the parse stack to a frame with syntax 'vp'
   and roles 'pred' and 'obj and pat'"

  ... | "bought a book" (synt: vp; subs: (PRED) "bought" (synt: verb),
                         (OBJ PAT) "a book")            "today" (synt: adv) | ...

  Figure 1: Example of a parse action (simplified); boxes represent frames

The central data structure for the parser consists of a parse stack and an input list. The parse stack and the input list contain trees of frames of words or phrases. Core slots of frames are surface and lexical form, syntactic and semantic category, subframes with syntactic and semantic roles, and form restrictions such as number, person, and tense. Optional slots include special information like the numerical value of number words.

  "John bought a new computer science book today."
    synt/sem: S-SNT / I-EV-BUY
    forms: (3rd_person sing past_tense)
    lex: "buy"
    subs:
      (SUBJ AGENT) "John": synt/sem: S-NP / I-EN-JOHN
        (PRED) "John": synt/sem: S-NOUN / I-EN-JOHN
      (OBJ THEME) "a new computer science book": synt/sem: S-NP / I-EN-BOOK
        (DET) "a"
        (MOD) "new"
        (PRED) "computer science book"
          (MOD) "computer science"
            (MOD) "computer"
            (PRED) "science"
          (PRED) "book"
      (TIME) "today": synt/sem: S-ADV / C-AT-TIME
        (PRED) "today": synt/sem: S-ADV / I-EADV-TODAY
      (DUMMY) ".": synt: D-PERIOD

  Figure 2: Example of a parse tree (simplified).

Initially, the parse stack is empty and the input list contains the primitive frames produced by the morphological analyzer. After initialization, the deterministic parser applies a sequence of parse actions to the parse structure. The most frequent parse actions are shift, which shifts a frame from the input list onto the parse stack or backwards, and reduce, which combines one or several frames on the parse stack into one new frame. The frames to be combined are typically, but not necessarily, next to each other at the top of the stack. As shown in Figure 1, the action (R 2 TO VP AS PRED (OBJ PAT)), for example, reduces the two top frames of the stack into a new frame that is marked as a verb phrase and contains the next-to-the-top frame as its predicate (or head) and the top frame of the stack as its object and patient.
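Continuing the earlier sketch with a slightly richer frame (again our own toy rendition; the slot inventory is truncated to the slots just mentioned), the reduction of Figure 1 could be written as:

from dataclasses import dataclass, field

@dataclass
class Frame:
    surface: str                              # surface form
    synt: str                                 # syntactic category
    sem: str = ""                             # semantic category
    subs: list = field(default_factory=list)  # (role, subframe) pairs

def reduce_frames(stack, n, synt, roles):
    # (R n TO synt AS role_1 ... role_n): combine the top n frames of
    # the parse stack into one new frame with the given roles.
    frames = stack[-n:]
    del stack[-n:]
    surface = " ".join(f.surface for f in frames)
    stack.append(Frame(surface, synt, subs=list(zip(roles, frames))))

stack = [Frame("bought", "verb"), Frame("a book", "np")]
reduce_frames(stack, 2, "vp", ["PRED", "OBJ PAT"])
# stack[0] is now the frame "bought a book" with synt 'vp' and
# subframes (PRED, "bought") and (OBJ PAT, "a book"), as in Figure 1.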
Other parse actions include add-into, which adds frames arbitrarily deep into an existing frame tree, mark, which can mark any slot of any frame with any value, and operations to introduce empty categories (i.e. traces and 'PRO', as in "She_i wanted PRO_i to win."). Parse actions can have numerous arguments, making the parse action language very powerful.

The parse action sequences needed for training the system are acquired interactively. For each training sentence, the system and the supervisor parse the sentence step by step, with the supervisor entering the next parse action, e.g. (R 2 TO VP AS PRED (OBJ PAT)), and the system executing it, repeating this sequence until the sentence is fully parsed. At least for the very first sentence, the supervisor actually has to type in the entire parse action sequence. With a growing number of parse action examples available, the system, as described below in more detail, can be trained using those previous examples. In such a partially trained system, the parse actions are then proposed by the system using a parse decision structure which "classifies" the current context. The proper classification is the specific action or sequence of actions that (the system believes) should be performed next. During further training, the supervisor then enters parse action commands by either confirming what the system proposes or overruling it by providing the proper action. As the corpus of parse examples grows and the system is trained on more and more data, the system becomes more refined, so that the supervisor has to overrule the system with decreasing frequency. The sequence of correct parse actions for a sentence is then recorded in a log file.

3 Features

To make good parse decisions, a wide range of features at various degrees of abstraction have to be considered. To express such a wide range of features, we defined a feature language. Parse features can be thought of as functions that map from partially parsed sentences to a value. Applied to the target parse state of Figure 1, the feature (SYNT OF OBJ OF -1 AT S-SYNT-ELEM), for example, designates the general syntactic class of the object of the first frame of the parse stack², in our example np³. So, features do not a priori operate on words or phrases, but only do so if their description references such words or phrases, as in our example through the path 'OBJ OF -1'.

²S-SYNT-ELEM designates the top syntactic level; since -1 is negative, the feature refers to the 1st frame of the parse stack. Note that the top of the stack is at the right end of the parse stack.

³If a feature is not defined in a specific parse state, the feature interpreter assigns the special value unavailable.

Given a particular parse state and a feature, the system can interpret the feature and compute its value for the given parse state, often using additional background knowledge such as

1. A knowledge base (KB), which currently consists of a directed acyclic graph of 4356 mostly semantic and syntactic concepts connected by 4518 is-a links, e.g. "book" (noun concept) is-a "tangible object" (noun concept). Most concepts representing words are at a fairly shallow level of the KB, e.g. under 'tangible object', 'abstract', 'process verb', or 'adjective', with more depth used only in concept areas more relevant for making parse and translation decisions, such as temporal, spatial and animate concepts.⁴

2.
A subcategorization table that describes the syntactic and semantic role structures for verbs, with currently 242 entries.

The following representative examples, for easier understanding rendered in English and not in feature language syntax, further illustrate the expressiveness of the feature language:

• the general syntactic class of frame_-3 (the third element of the parse stack): e.g. verb, adj, np,
• whether or not the adverbial alternative of frame_1 (the top element of the input list) is an adjectival degree adverb,
• the specific finite tense of frame_-1, e.g. present tense,
• whether or not frame_-1 contains an object,
• the semantic role of frame_-1 with respect to frame_-2: e.g. agent, time; this involves pattern matching with corresponding entries in the verb subcategorization table,
• whether or not frame_-2 and frame_-1 satisfy subject-verb agreement.

Features can in principle refer to any one or several elements on the parse stack or input list, and any of their subelements, at any depth. Since the currently 205 features are supposed to bear some linguistic relevance, none of them are unjustifiably remote from the current focus of a parse state.

The feature collection is basically independent of the supervised parse action acquisition. Before learning a decision structure for the first time, the supervisor has to provide an initial set of features that can be considered obviously relevant. Particularly during the early development of our system, this set was increased whenever parse examples had identical values for all current features but nevertheless demanded different parse actions. Given a specific conflict pair of partially parsed sentences, the supervisor would add a new relevant feature that discriminates the two examples. We expect our feature set to grow to eventually about 300 features when scaling up further within the Wall Street Journal domain, and quite possibly to a higher number when expanding into new domains. However, such feature set additions require fairly little supervisor effort.

⁴Supported by acquisition tools, word/concept pairs are typically entered into the lexicon and the KB at the same time, typically requiring less than a minute per word or group of closely related words.

  Figure 3: Example of a hybrid decision structure (a decision list leading from START through tests such as done-operation-p into subtrees for the shift and reduce operation groups)

Given (1) a log file with the correct parse action sequence of training sentences as acquired under supervision and (2) a set of features, the system revisits the training sentences and computes values for all features at each parse step. Together with the recorded parse actions, these feature vectors form parse examples that serve as input to the learning unit. Whenever the feature set is modified, this step must be repeated, but this is unproblematic, because this process is both fully automatic and fast.

4 Learning Decision Structures

Traditional statistical techniques also use features, but often have to sharply limit their number (for some trigram approaches, to three fairly simple features) to avoid the loss of statistical significance. In parsing, only a very small number of features are crucial over a wide range of examples, while most features are critical in only a few examples, being used to 'fine-tune' the decision structure for special cases.
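To make the learning input concrete: each parse example pairs a feature vector, computed at one parse state, with the logged parse action, and a tree learner splits on the most informative feature. The following is a bare-bones ID3 sketch in Python (ours; the system's hybrid structure of Figure 3 layers a decision list and operation-group subtrees on top of this, which we do not reproduce):

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def grow_tree(examples, features):
    # examples: (feature_vector, parse_action) pairs; features: indices
    # into the vectors.  Recursively split on the most discriminating
    # feature; locally irrelevant features are simply never selected.
    actions = [a for _, a in examples]
    if len(set(actions)) == 1 or not features:
        return Counter(actions).most_common(1)[0][0]      # leaf: an action
    def gain(i):
        groups = {}
        for vec, a in examples:
            groups.setdefault(vec[i], []).append(a)
        return entropy(actions) - sum(
            len(g) / len(actions) * entropy(g) for g in groups.values())
    best = max(features, key=gain)
    children = {}
    for vec, a in examples:
        children.setdefault(vec[best], []).append((vec, a))
    rest = [f for f in features if f != best]
    return (best, {v: grow_tree(sub, rest) for v, sub in children.items()})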
So in order to overcome the antagonism between the importance of having a large number of features and the need to control the number of examples required for learning, particularly when acquiring parse action sequences under supervision, we choose a decision-tree based learning algorithm, which recursively selects the most discriminating feature of the corresponding subset of training examples, eventually ignoring all locally irrelevant features, thereby tailoring the size of the final decision structure to the complexity of the training data.

While parse actions might be complex for the action interpreter, they are atomic with respect to the decision structure learner; e.g. "(R 2 TO VP AS PRED (OBJ PAT))" would be such an atomic classification. A set of parse examples, as already described in the previous section, is then fed into an ID3-based learning routine that generates a decision structure, which can then 'classify' any given parse state by proposing what parse action to perform next.

We extended the standard ID3 model (Quinlan, 1986) to more general hybrid decision structures. In our tests, the best performing structure was a decision list (Rivest, 1987) of hierarchical decision trees, whose simplified basic structure is illustrated in Figure 3. Note that in the 'reduce operation tree', the system first decides whether or not to perform a reduction before deciding on a specific reduction. Using our knowledge of the similarity of parse actions and the exceptionality vs. generality of parse action groups, we can provide an overhead structure that helps prevent data fragmentation.

5 Transfer and Generation

The output tree generated by the parser can be used for translation. A transfer module recursively maps the source language parse tree to an equivalent tree in the target language, reusing the methods developed for parsing with only minor adaptations. The main purpose of learning here is to resolve translation ambiguities, which arise for example when translating the English "to know" into German (wissen/kennen) or Spanish (saber/conocer).

Besides word pair entries, the bilingual dictionary also contains pairs of phrases and expressions in a format closely resembling traditional (paper) dictionaries, e.g. "to comment on SOMETHING_1" / "sich zu ETWAS_DAT_1 äußern". Even if a complex translation pair does not bridge a structural mismatch, it can make a valuable contribution to disambiguation. Consider for example the term "interest rate". Both element nouns are highly ambiguous with respect to German, but the English compound conclusively maps to the German compound "Zinssatz". We believe that an extensive collection of complex translation pairs in the bilingual dictionary is critical for translation quality, and we are confident that its acquisition can be at least partially automated by using techniques like those described in (Smadja et al., 1996). Complex translation entries are preprocessed using the same parser as for normal text. During the transfer process, the resulting parse tree pairs are then accessed using pattern matching.

The generation module orders the components of phrases, adds appropriate punctuation, and propagates morphologically relevant information in order to compute the proper form of surface words in the target language.
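The paper does not spell out the pattern matcher, but the tree-to-tree transfer it describes might be sketched as follows (entirely our own toy rendition in Python; Node, the '?' variable convention, and all function names are invented for illustration):

from dataclasses import dataclass

@dataclass
class Node:
    lex: str
    children: list

def match(node, pattern, bindings):
    # Match a parse (sub)tree against a dictionary pattern tree;
    # pattern leaves starting with '?' are variables.
    if pattern.lex.startswith("?"):
        bindings[pattern.lex] = node
        return True
    if pattern.lex != node.lex or len(pattern.children) != len(node.children):
        return False
    return all(match(n, p, bindings)
               for n, p in zip(node.children, pattern.children))

def substitute(pattern, bindings):
    if pattern.lex in bindings:
        return bindings[pattern.lex]
    return Node(pattern.lex,
                [substitute(c, bindings) for c in pattern.children])

def transfer(node, pairs, word_dict):
    # Try the complex translation pairs first; otherwise translate the
    # head word via the word-pair dictionary and recurse on children.
    for src, tgt in pairs:
        bindings = {}
        if match(node, src, bindings):
            return substitute(tgt, {v: transfer(n, pairs, word_dict)
                                    for v, n in bindings.items()})
    return Node(word_dict.get(node.lex, node.lex),
                [transfer(c, pairs, word_dict) for c in node.children])

A dictionary entry like "to comment on SOMETHING_1" / "sich zu ETWAS_DAT_1 äußern" would then be stored as a pair of such pattern trees, with a shared variable leaf standing for SOMETHING_1.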
6 Wall Street Journal Experiments

We now present intermediate results on training and testing a prototype implementation of the system with sentences from the Wall Street Journal, a prominent corpus of 'real' text, as collected on the ACL-CD. In order to limit the size of the required lexicon, we work on a reduced corpus of 105,356 sentences, a tenth of the full corpus, that includes all those sentences that are fully covered by the 3000 most frequently occurring words (ignoring numbers etc.) in the entire corpus. The first 272 sentences used in this experiment vary in length from 4 to 45 words, averaging 17.1 words and 43.5 parse actions per sentence. One of these sentences is "Canadian manufacturers' new orders fell to $20.80 billion (Canadian) in January, down 4% from December's $21.67 billion on a seasonally adjusted basis, Statistics Canada, a federal agency, said.".

  Table 1: Evaluation results with varying numbers of training sentences (16, 32, 64, 128 and 256); with all 205 features and hybrid decision structure. Train. = number of training sentences; pr/prec. = precision; rec. = recall; l. = labeled; Tagging = tagging accuracy; Cr/snt = crossings per sentence; Ops = correct operations; OpSeq = operation sequence.

  Figure 4: Learning curve for labeled precision in Table 1 (labeled precision from 75% to 95%, plotted against the number of training sentences from 16 to 1024).

For our parsing test series, we use 17-fold cross-validation. The corpus of 272 sentences that currently have parse action logs associated with them is divided into 17 blocks of 16 sentences each. The 17 blocks are then consecutively used for testing. For each of the 17 sub-tests, a varying number of sentences from the other blocks is used for training the parse decision structure, so that within a sub-test, none of the training sentences is ever used as a test sentence. The results of the 17 sub-tests of each series are then averaged.
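The cross-validation scheme can be written down directly (a small sketch under the stated block size of 16; the names are ours):

def cross_validation_splits(sentences, n_blocks=17):
    # Each block of 16 sentences is held out once for testing; a
    # varying number of sentences from the remaining blocks is used
    # for training, so no test sentence is ever trained on.
    size = len(sentences) // n_blocks          # 272 // 17 = 16
    blocks = [sentences[i * size:(i + 1) * size] for i in range(n_blocks)]
    for i in range(n_blocks):
        test = blocks[i]
        train = [s for j, b in enumerate(blocks) if j != i for s in b]
        yield train, test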
A sen- tence has a correct operating sequence (OpSeq), if the system fully predicts the logged parse action sequence, and a correct structure and labeling (Str~L), if the structure and syntactic labeling of the final system parse of a sentence is 100% correct, regardless of the operations leading to it. The current set of 205 features was sufficient to always discriminate examples with different parse actions, resulting in a 100% accuracy on sentences already seen during training. While that percentage is certainly less important than the accuracy figures for unseen sentences, it nevertheless represents an important upper ceiling. Many of the mistakes are due to encountering con- Type of deci- plain hier. plain sion structure list list tree Precision 87.8% 91.0% 87.6% Recall 89.9% 88.2% 89.7% Lab. precision 28.6% 87.4% 38.5% Lab. recall 86.1% 84.7% 85.6% Tagging ace. 97.9% 96.0% 97.9% Crossings/snt 1.2 1.3 1.3 0crossings 55.2% 52.9% 51.5% _< 1 crossings 72.8% 71.0% 65.8% _~ 2 crossings 82.7% 82.7% 81.6% < 3 crossings 89.0% 89.0% 90.1% _< 4 crossings 93.4% 93.4% 93.4% Ops 86.5% 90.3% 90.2% OpSeq 12.9% 11.8% 13.6% Str~L 22.4% 22.8% 21.7% Endless loops 26 23 32 hybrid tree 92.7% 92.8% 89.8% 89.6% 98.4% 1.0 56.3% 73.5% 84.9% 93 2% 94.9% 91.7% 16.5% 26.8% 1 Table 3: Evaluation results with varying types of decision structures; with 256 training sentences and 205 features structions that just have not been seen before at all, typically causing several erroneous parse decisions in a row. This observation further supports our expec- tation, based on the results shown in table 1 and fig- ure 4, that with more training sentences, the testing accuracy for unseen sentences will still rise signifi- cantly. Table 2 shows the impact of reducing the feature set to a set of N core features. While the loss of a few specialized features will not cause a major degrada- tion, the relatively high number of features used in our system finds a clear justification when evaluating compound test characteristics, such as the number of structurally completely correct sentences. When 25 or fewer features are used, all of them are syn- tactic. Therefore the 25 feature test is a relatively good indicator for the contribution of the semantic knowledge base. In another test, we deleted all 10 features relating to the subcategorization table and found that the only metrics with degrading values were those mea- suring semantic role assignment; in particular, none of the precision, recall and crossing bracket values changed significantly. This suggests that, at least in the presence of other semantic features, the subcat- egorization table does not play as critical a role in resolving structural ambiguity as might have been expected. Table 3 compares four different machine learning variants: plain decision lists, hierarchical decision 487 lists, plain decision trees and a hybrid structure, namely a decision list of hierarchical decision trees, as sketched in figure 3. The results show that ex- tensions to the basic decision tree model can signif- icantly improve learning results. System Human translation CONTEX on correct parse CONTEX (full translation) Logos SYSTR.AN Globalink Syntax Semantics 1.18 1.41 2.20 2.19 2.36 2.38 2.57 3.24 2.68 3.35 3.30 3.83 Table 4: Translation evaluation results (best possi- ble = 1.00, worst possible = 6.00) Table 4 summarizes the evaluation results of translating 32 randomly selected sentences from our Wall Street Journal corpus from English to German. 
Besides our system, CONTEX, we tested three com- mercial systems, Logos, SYSTR.AN, and Globalink. In order to better assess the contribution of the parser, we also added a version that let our system start with the correct parse, effectively just testing the transfer and generation module. The resulting translations, in randomized order and without iden- tification, were evaluated by ten bilingual graduate students, both native German speakers living in the U.S. and native English speakers teaching college level German. As a control, half of the evaluators were also given translations by a bilingual human. Note that the translation results using our parser are fairly close to those starting with a correct parse. This means that the errors made by the parser have had a relatively moderate impact on transla- tion quality. The transfer and generation modules were developed and trained based on only 48 sen- tences, so we expect a significant translation quality improvement by further development of those mod- ules. Our system performed better than the commercial systems, but this has to be interpreted with caution, since our system was trained and tested on sentences from the same lexically limited corpus (but of course without overlap), whereas the other systems were developed on and for texts from a larger variety of domains, making lexical choices more difficult in par- ticular. Table 5 shows the correlation between various parse and translation metrics. Labeled precision has the strongest correlation with both the syntactic and semantic translation evaluation grades. "Metric 'Precision Recall Labeled precision Labeled recall Tagging accuracy Number of crossing brackets J Operations Operation sequence Syntax Semantics -0.63 -0.63 -0.64 -0.66 -0.75 -0.78 -0.65 -0.65 -0.66 -0.56 0.58 0.54 -0.45 -0.41 -0.39 -0.36 Table 5: Correlation between various parse and translation metrics. Values near -1.0 or 1.0 indi- cate very strong correlation, whereas values near 0.0 indicate a weak or no correlation. Most correlation values, incl. for labeled precision are negative, be- cause a higher (better) labeled precision correlates with a numerically lower (better) translation score on the 1.0 (best) to 6.0 (worst) translation evalua- tion scale. 7 Related Work Our basic parsing and interactive training paradigm is based on (Simmons and Yu, 1992). We have extended their work by significantly increasing the expressiveness of the parse action and feature lan- guages, in particular by moving far beyond the few simple features that were limited to syntax only, by adding more background knowledge and by intro- ducing a sophisticated machine learning component. (Magerman, 1995) uses a decision tree model sim- ilar to ours, training his system SPATTER. with parse action sequences for 40,000 Wall Street Journal sen- tences derived from the Penn Treebank (Marcus et al., 1993). Questioning the traditional n-grams, Magerman already advocates a heavier reliance on contextual information. Going beyond Magerman's still relatively rigid set of 36 features, we propose a yet richer, basically unlimited feature language set. Our parse action sequences are too complex to be derived from a treebank like Penn's. Not only do our parse trees contain semantic annotations, roles and more syntactic detail, we also rely on the more informative parse action sequence. 
While this neces- sitates the involvement of a parsing supervisor for training, we are able to perform deterministic pars- ing and get already very good test results for only 256 training sentences. (Collins, 1996) focuses on bigram lexical depen- dencies (BLD). Trained on the same 40,000 sen- tences as Spatter, it relies on a much more limited type of context than our system and needs little background knowledge. 488 Model Labeled precision Labeled recall Crossings/sentence Sent. with 0 cr. Sent. with < 2 cr. I SPATTER, I BLD I CONTEX 84.9% 86.3% 89.8% 84.6% 85.8% 89.6% 1.26 1.14 1.02 56.6% 59.9% 56.3% 81.4% 83.6% 84.9% Table 6: Comparing our system CONTEX with Magerman's SPATTER, and Collins' BLD; results for SPATTER, and BLD are for sentences of up to 40 words. Table 6 compares our results with SPATTER, and BLD. The results have to be interpreted cautiously since they are not based on the exact same sentences and detail of bracketing. Due to lexical restrictions, our average sentence length (17.1) is below the one used in SPATTER and BLD (22.3), but some of our test sentences have more than 40 words; and while the Penn Treebank leaves many phrases such as "the New York Stock Exchange" without internal struc- ture, our system performs a complete bracketing, thereby increasing the risk of crossing brackets. 8 Conclusion We try to bridge the gap between the typically hard- to-scale hand-crafted approach and the typically large-scale but context-poor statistical approach for unrestricted text parsing. Using • a rich and unified context with 205 features, • a complex parse action language that allows in- tegrated part of speech tagging and syntactic and semantic processing, • a sophisticated decision structure that general- izes traditional decision trees and lists, • a balanced use of machine learning and micro- modular background knowledge, i.e. very small pieces of highly' independent information • a modest number of interactively acquired ex- amples from the Wall Street Journal, our system CONTEX • computes parse trees and translations fast, be- cause it uses a deterministic single-pass parser, • shows good robustness when encountering novel constructions, • produces good parsing results comparable to those of the leading statistical methods, and • delivers competitive results for machine trans- lations. While many limited-context statistical approaches have already reached a performance ceiling, we still expect to significantly improve our results when in- creasing our training base beyond the currently 256 sentences, because the learning curve hasn't flat- tened out yet and adding substantially more exam- ples is still very feasible. Even then the training size will compare favorably with the huge number of training sentences necessary for many statistical systems. References E. Black, J. Lafferty, and S. Roukos. 1992. Devel- opment and evaluation of a broad-coverage prob- abilistic grammar of English-language computer manuals. In 30th Proceedings of the A CL, pages 185-192. M. J. Collins. 1996. A New Statistical Parser Based on Bigram Lexical Dependencies. In 3~th Proceed- ings of the ACL, pages 184-191. U. Hermjakob. 1997. Learning Parse and Trans- lation Decisions From Examples With Rich Con- text. Ph.D. thesis, University of Texas at Austin, Dept. of Computer Sciences TR 97-12. file://ftp.cs.utexas.edu/pub/mooney/papers/herm jakob-dissertation-97.ps.Z D. M. Magerman. 1995. Statistical Decision-Tree Models for Parsing In 33rd Proceedings of the ACL, pages 276-283. M. P. Marcus. 
1980. A Theory of Syntactic Recog- nition for Natural Language. MIT Press. M. P. Marcus, B. Santorini, and M. A. Marcinkie- wicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. In Computa- tional Linguistics 19 (2), pages 184-191. S. Nirenburg, J. Carbonell, M. Tomita, and K. Goodman. 1992. Machine Translation: A Knowledge-Based Approach. San Mateo, CA: Morgan Kaufmann. J. R. Quinlan. 1986. Induction of decision trees. In Machine Learning I (I), pages 81-106. R. L. Rivest. 1987. Learning Decision Lists. In Machine Learning 2, pages 229-246. R. F. Simmons and Yeong-Ho Yu. 1992. The Acqui- sition and Use of Context-Dependent Grammars for English. In Computational Linguistics 18 (4), pages 391-418. F. Smadja, K. R. KcKeown and V. Hatzivassiloglou. 1996. Translating Collocations for Bilingual Lex- icons: A Statistical Approach. In Computational Linguistics 22 (I), pages 1-38. Globalink. http://www.globalink.com/home.html Oct. 1996. Logos. http://www.logos-ca.com/ Oct. 1996. SYSTRAN. http://systranmt.com/ Oct. 1996. 489 | 1997 | 62 |
A Word-to-Word Model of Translational Equivalence

I. Dan Melamed
Dept. of Computer and Information Science
University of Pennsylvania
Philadelphia, PA, 19104, U.S.A.
melamed@unagi.cis.upenn.edu

Abstract

Many multilingual NLP applications need to translate words between different languages, but cannot afford the computational expense of inducing or applying a full translation model. For these applications, we have designed a fast algorithm for estimating a partial translation model, which accounts for translational equivalence only at the word level. The model's precision/recall trade-off can be directly controlled via one threshold parameter. This feature makes the model more suitable for applications that are not fully statistical. The model's hidden parameters can be easily conditioned on information extrinsic to the model, providing an easy way to integrate pre-existing knowledge such as part-of-speech, dictionaries, word order, etc. Our model can link word tokens in parallel texts as well as other translation models in the literature. Unlike other translation models, it can automatically produce dictionary-sized translation lexicons, and it can do so with over 99% accuracy.

1 Introduction

Over the past decade, researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation (Brown et al., 1988; Brown et al., 1990; Brown et al., 1993a). However, the IBM models, which attempt to capture a broad range of translation phenomena, are computationally expensive to apply. Table look-up using an explicit translation lexicon is sufficient and preferable for many multilingual NLP applications, including "crummy" MT on the World Wide Web (Church & Hovy, 1993), certain machine-assisted translation tools (e.g. (Macklovitch, 1994; Melamed, 1996b)), concordancing for bilingual lexicography (Catizone et al., 1993; Gale & Church, 1991), computer-assisted language learning, corpus linguistics (Melby, 1981), and cross-lingual information retrieval (Oard & Dorr, 1996).

In this paper, we present a fast method for inducing accurate translation lexicons. The method assumes that words are translated one-to-one. This assumption reduces the explanatory power of our model in comparison to the IBM models, but, as shown in Section 3.1, it helps us to avoid what we call indirect associations, a major source of errors in other models. Section 3.1 also shows how the one-to-one assumption enables us to use a new greedy competitive linking algorithm for re-estimating the model's parameters, instead of more expensive algorithms that consider a much larger set of word correspondence possibilities. The model uses two hidden parameters to estimate the confidence of its own predictions. The confidence estimates enable direct control of the balance between the model's precision and recall via a simple threshold. The hidden parameters can be conditioned on prior knowledge about the bitext to improve the model's accuracy.

2 Co-occurrence

With the exception of (Fung, 1998b), previous methods for automatically constructing statistical translation models begin by looking at word co-occurrence frequencies in bitexts (Gale & Church, 1991; Kumano & Hirakawa, 1994; Fung, 1998a; Melamed, 1995). A bitext comprises a pair of texts in two languages, where each text is a translation of the other. Word co-occurrence can be defined in various ways. The most common way is to divide each half of the bitext into an equal number of segments and to align the segments so that each pair of segments S_i and T_i are translations of each other (Gale & Church, 1991; Melamed, 1996a). Then, two word tokens (u, v) are said to co-occur in the
The most common way is to divide each half of the bitext into an equal number of seg- ments and to align the segments so that each pair of segments Si and Ti are translations of each other (Gale & Church, 1991; Melamed, 1996a). Then, two word tokens (u, v) are said to co-occur in the 490 aligned segment pair i if u E Si and v E Ti. The co-occurrence relation can also be based on distance in a bitext space, which is a more general represen- tations of bitext correspondence (Dagan et al., 1993; Resnik & Melamed, 1997), or it can be restricted to words pairs that satisfy some matching predicate, which can be extrinsic to the model (Melamed, 1995; Melamed, 1997). 3 The Basic Word-to-Word Model Our translation model consists of the hidden param- eters A + and A-, and likelihood ratios L(u, v). The two hidden parameters are the probabilities of the model generating true and false positives in the data. L(u,v) represents the likelihood that u and v can be mutual translations. For each co-occurring pair of word types u and v, these likelihoods are initially set proportional to their co-occurrence frequency n(u,v) and inversely proportional to their marginal frequen- cies n(u) and n(v) z, following (Dunning, 1993) 2. When the L(u, v) are re-estimated, the model's hid- den parameters come into play. After initialization, the model induction algorithm iterates: 1. Find a set of "links" among word tokens in the bitext, using the likelihood ratios and the com- petitive linking algorithm. 2. Use the links to re-estimate A +, A-, and the likelihood ratios. 3. Repeat from Step 1 until the model converges to the desired degree. The competitive linking algorithm and its one-to-one assumption are detailed in Section 3.1. Section 3.1 explains how to re-estimate the model parameters. 3.1 Competitive Linking Algorithm The competitive linking algorithm is designed to overcome the problem of indirect associations, illus- trated in Figure 1. The sequences of u's and v's represent corresponding regions of a bitext. If uk and vk co-occur much more often than expected by chance, then any reasonable model will deem them likely to be mutual translations. If uk and Vk are indeed mutual translations, then their tendency to ZThe co-occurrence frequency of a word type pair is simply the number of times the pair co-occurs in the corpus. However, n(u) = ~-~v n(u.v), which is not the same as the frequency of u, because each token of u can co-occur with several differentv's. 2We could just as easily use other symmetric "asso- ciation" measures, such as ¢2 (Gale & Church, 1991) or the Dice coefficient (Smadja, 1992). • • • Uk. 1 tJk ~ = Uk+l • • • t • , • Vk. 1 Vk Vk+l • . . Figure 1: Uk and vk often co-occur, as do uk and uk+z. The direct association between uk and vk, and the direct association between uk and Uk+l give rise to an indirect association between v~ and uk+l. co-occur is called a direct association. Now, sup- pose that uk and Uk+z often co-occur within their language. Then vk and uk+l will also co-occur more often than expected by chance. The arrow connect- ing vk and u~+l in Figure 1 represents an indirect association, since the association between vk and Uk+z arises only by virtue of the association between each of them and uk. Models of translational equiv- alence that are ignorant of indirect associations have "a tendency ... to be confused by collocates" (Dagan et al., 1993). 
Fortunately, indirect associations are usually not difficult to identify, because they tend to be weaker than the direct associations on which they are based (Melamed, 1996c). The majority of indirect associ- ations can be filtered out by a simple competition heuristic: Whenever several word tokens ui in one half of the bitext co-occur with a particular word to- ken v in the other half of the bitext, the word that is most likely to be v's translation is the one for which the likelihood L(u, v) of translational equivalence is highest. The competitive linking algorithm imple- ments this heuristic: 1. Discard all likelihood scores for word types deemed unlikely to be mutual translations, i.e. all L(u,v) < 1. This step significantly reduces the computational burden of the algorithm. It is analogous to the step in other translation model induction algorithms that sets all prob- abilities below a certain threshold to negligible values (Brown et al., 1990; Dagan et al., 1993; Chen, 1996). To retain word type pairs that are at least twice as likely to be mutual transla- tions than not, the threshold can be raised to 2. Conversely, the threshold can be lowered to buy more coverage at the cost of a larger model that will converge more slowly. 2. Sort all remaining likelihood estimates L(u, v) from highest to lowest. 3. Find u and v such that the likelihood ratio L(u,v) is highest. Token pairs of these types 491 n(u,v) N k(u.v) K T k+ k- B(k{n,p) = frequency of co-occurrence between word types u and v = ~"].(u.,,) n(u.v) = total number of co-occurrences in the bitext = frequency of links between word types u and v = ~"].(u,v) k(u.,,) = total number of links in the bitext = Pr( mutual translations I co-occurrence ) = Pr( link I co-occurrence ) = Pr( link [ co-occurrence of mutual translations ) = Pr( link I co-occurrence of not mutual translations ) = Pr(kin,p), where k has a binomial distribution with parameters n and p N.B.: k + and )~- need not sum to 1, because they are conditioned on different events. Figure 2: Variables used to estimate the model parameters. would be the winners in any competitions in- volving u or v. 4. Link all token pairs (u, v) in the bitext. 5. The one-to-one assumption means that linked words cannot be linked again. Therefore, re- move all linked word tokens from their respec- tive texts. 6. If there is another co-occurring word token pair (u, v) such that L(u, v) exists, then repeat from Step 3. The competitive linking algorithm is more greedy than algorithms that try to find a set of link types that are jointly most probable over some segment of the bitext. In practice, our linking algorithm can be implemented so that its worst-case running time is O(lm), where l and m are the lengths of the aligned segments. The simplicity of the competitive linking algo- rithm depends on the one-to-one assumption: Each word translates to at most one other word. Certainly, there are cases where this assumption is false. We prefer not to model those cases, in order to achieve higher accuracy with less effort on the cases where the assumption is true. 3.2 Parameter Estimation The purpose of the competitive linking algorithm is to help us re-estimate the model parameters. The variables that we use in our estimation are summa- rized in Figure 2. The linking algorithm produces a set of links between word tokens in the bitext. We define a link token to be an ordered pair of word tokens, one from each half of the bitext. A link type is an ordered pair of word types. 
Let n(u.,,) be the co-occurrence frequency of u and v and k(~,,,) be the number of links between tokens of u and v 3. An 3Note that k(u,v) depends on the linking algorithm, but n(u.v) is a constant property of the bitext. important property of the competitive linking algo- rithm is that the ratio kiu.,,)/n(u,v ) tends to be very high if u and v are mutual translations, and quite low if they are not. The bimodality of this ratio for several values of n(u.,,i is illustrated in Figure 3. This figure was plotted after the model's first iter- ation over 300000 aligned sentence pairs from the I0(0) ,oo LI,. {0 } 0 ~ (u V)/n(u v) o~ , Figure 3: A fragment of the joint frequency (k(u.v)/n(u.v), n(u.v)). Note that the frequencies are plotted on a log scale -- the bimodality is quite sharp. Canadian Hansard bitext. Note that the frequencies are plotted on a log scale -- the bimodality is quite sharp. The linking algorithm creates all the links of a given type independently of each other, so the num- ber k(u,v ) of links connecting word types u and v has a binomial distribution with parameters n(u.,,l and P(u.,,)- If u and v are mutual translations, then P(u,,,) tends to a relatively high probability, which we will call A +. If u and v are not mutual translations, then P(u,v) tends to a very low probability, which we will call A-. A + and A- correspond to the two peaks in the frequency distribution of k(u.,,)/niu.v~ in Figure 2. The two parameters can also be inter- preted as the percentage of true and false positives. If the translation in the bitext is consistent and the 492 model is accurate, then A + should be near 1 and A- should be near 0. To find the most probable values of the hidden model parameters A + and A-, we adopt the standard method of maximum likelihood estimation, and find the values that maximize the probability of the link frequency distributions. The one-to-one assumption implies independence between different link types, so that Pr(linkslm°del) = H Vr(k(u,v)[n(u,v), A +, A-). R~V (1) The factors on the right-hand side of Equation 1 can be written explicitly with the help of a mixture co- efficient. Let r be the probability that an arbitrary co-occurring pair of word types are mutual transla- tions. Let B(kln,p ) denote the probability that k links are observed out of n co-occurrences, where k has a binomial distribution with parameters n and p. Then the probability that u and v are linked k(u,v) times out of n(u,v) co-occurrences is a mixture of two binomials: Pr(k(u,v) ln(u,v), A +, A-) = (2) = rB(k(u,v)ln(u,v), A +) ÷ (1-r)B(k(u,v)ln(u,v),A-) One more variable allows us to express r in terms of A + and A- : Let A be the probability that an arbi- trary co-occuring pair of word tokens will be linked, regardless of whether they are mutual translations. Since r is constant over all word types, it also repre- sents the probability that an arbitrary co-occurring pair of word tokens are mutual translations. There- fore, A = rA + + (1 - r)A-. (3) A can also be estimated empirically. Let K be the total number of links in the bitext and let N be the total number of co-occuring word token pairs: K = ~(u,v) k(u,v/, N = ~(~,v) n(u,v). By definition, A = KIN. (4) Equating the right-hand sides of Equations (3) and (4) and rearranging the terms, we get: KIN - ,X- - (5) A+ _ )~- Since r is now a function of A + and A-, only the latter two variables represent degrees of freedom in the model. The probability function expressed by Equations 1 and 2 has many local maxima. 
The probability function expressed by Equations 1 and 2 has many local maxima. In practice, these local maxima are like pebbles on a mountain, invisible at low resolution. We computed Equation 1 over various combinations of λ+ and λ− after the model's first iteration over 300,000 aligned sentence pairs from the Canadian Hansard bitext. Figure 4 shows that the region of interest in the parameter space, where 1 > λ+ > λ > λ− > 0, has only one clearly visible global maximum. This global maximum can be found by standard hill-climbing methods, as long as the step size is large enough to avoid getting stuck on the pebbles.

    Figure 4: Pr(links | model) has only one global maximum in the region of interest.

Given estimates for λ+ and λ−, we can compute B(k(u,v) | n(u,v), λ+) and B(k(u,v) | n(u,v), λ−). These are the probabilities that k(u,v) links were generated by an algorithm that generates correct links and by an algorithm that generates incorrect links, respectively, out of n(u,v) co-occurrences. The ratio of these probabilities is the likelihood ratio in favor of u and v being mutual translations, for all u and v:

    L(u,v) = B(k(u,v) | n(u,v), λ+) / B(k(u,v) | n(u,v), λ−).    (6)

4 Class-Based Word-to-Word Models

In the basic word-to-word model, the hidden parameters λ+ and λ− depend only on the distributions of link frequencies generated by the competitive linking algorithm. More accurate models can be induced by taking into account various features of the linked tokens. For example, frequent words are translated less consistently than rare words (Melamed, 1997). To account for this difference, we can estimate separate values of λ+ and λ− for different ranges of n(u,v). Similarly, the hidden parameters can be conditioned on the linked parts of speech. Word order can be taken into account by conditioning the hidden parameters on the relative positions of linked word tokens in their respective sentences. Just as easily, we can model links that coincide with entries in a pre-existing translation lexicon separately from those that do not. This method of incorporating dictionary information seems simpler than the method proposed by Brown et al. for their models (Brown et al., 1993b). When the hidden parameters are conditioned on different link classes, the estimation method does not change; it is just repeated for each link class.

5 Evaluation

A word-to-word model of translational equivalence can be evaluated either over types or over tokens. It is impossible to replicate the experiments used to evaluate other translation models in the literature, because neither the models nor the programs that induce them are generally available. For each kind of evaluation, we have found one case where we can come close. We induced a two-class word-to-word model of translational equivalence from 13 million words of the Canadian Hansards, aligned using the method in (Gale & Church, 1991). One class represented content-word links and the other represented function-word links. (Since function words can be identified by table look-up, no POS-tagger was involved.) Link types with negative log-likelihood were discarded after each iteration. Both classes' parameters converged after six iterations. The value of class-based models was demonstrated by the differences between the hidden parameters for the two classes. (λ+, λ−) converged at (.78, .00016) for content-class links and at (.43, .000094) for function-class links.
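Equation 6 then turns the fitted parameters into the scores that drive the next linking pass; a sketch, extended in the obvious way to the class-conditioned setting of Section 4. The link_class function (e.g. a content/function word table lookup) is an assumed input, not part of the original model description.

```python
from scipy.stats import binom

def likelihood_ratio(k, n, lam_plus, lam_minus):
    """Equation (6): evidence that a word-type pair are mutual translations."""
    denom = binom.pmf(k, n, lam_minus)
    return binom.pmf(k, n, lam_plus) / denom if denom > 0 else float('inf')

def class_based_ratios(counts, link_class, params):
    """counts: (u, v) -> (k, n); link_class: maps (u, v) to a class label;
    params: class label -> (lambda+, lambda-), estimated separately per class,
    e.g. {'content': (.78, .00016), 'function': (.43, .000094)}."""
    return {(u, v): likelihood_ratio(k, n, *params[link_class(u, v)])
            for (u, v), (k, n) in counts.items()}
```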
5.1 Link Types

The most direct way to evaluate the link types in a word-level model of translational equivalence is to treat each link type as a candidate translation lexicon entry, and to measure precision and recall. This evaluation criterion carries much practical import, because many of the applications mentioned in Section 1 depend on accurate broad-coverage translation lexicons. Machine-readable bilingual dictionaries, even when they are available, have only limited coverage and rarely include domain-specific terms (Resnik & Melamed, 1997).

We define the recall of a word-to-word translation model as the fraction of the bitext vocabulary represented in the model. Translation model precision is a more thorny issue, because people disagree about the degree to which context should play a role in judgements of translational equivalence. We hand-evaluated the precision of the link types in our model in the context of the bitext from which the model was induced, using a simple bilingual concordancer. A link type (u,v) was considered correct if u and v ever co-occurred as direct translations of each other. Where the one-to-one assumption failed, but a link type captured part of a correct translation, it was judged "incomplete." Whether incomplete links are correct or incorrect depends on the application.

    Figure 5: Link type precision with 95% confidence intervals at varying levels of recall (upper curve: incomplete links counted as correct; lower curve: incomplete links counted as incorrect).

We evaluated five random samples of 100 link types each at three levels of recall. For our bitext, recall of 36%, 46% and 90% corresponded to translation lexicons containing 32,274, 43,075 and 88,633 words, respectively. Figure 5 shows the precision of the model with 95% confidence intervals. The upper curve represents precision when incomplete links are considered correct, and the lower when they are considered incorrect. On the former metric, our model can generate translation lexicons with precision and recall both exceeding 90%, as well as dictionary-sized translation lexicons that are over 99% correct.

Though some have tried, it is not clear how to extract such accurate lexicons from other published translation models. Part of the difficulty stems from the implicit assumption in other models that each word has only one sense. Each word is assigned the same unit of probability mass, which the model distributes over all candidate translations. The correct translations of a word that has several correct translations will be assigned a lower probability than the correct translation of a word that has only one correct translation. This imbalance foils thresholding strategies, clever as they might be (Gale & Church, 1991; Wu & Xia, 1994; Chen, 1996). The likelihoods in the word-to-word model remain unnormalized, so they do not compete.

The word-to-word model maintains high precision even given much less training data. Resnik & Melamed (1997) report that the model produced translation lexicons with 94% precision and 30% recall, when trained on French/English software manuals totaling about 400,000 words. The model was also used to induce a translation lexicon from a 6200-word corpus of French/English weather reports. Nasr (1997) reported that the translation lexicon that our model induced from this tiny bitext accounted for 30% of the word types with precision between 84% and 90%. Recall drops when there is less training data, because the model refuses to make predictions that it cannot make with confidence. For many applications, this is the desired behavior.
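The precision/recall trade-off described here can be operationalized by thresholding the unnormalized likelihood ratios, as in Step 1 of the linking algorithm. A small illustrative sketch, where ratios is the hypothetical L(u,v) table from above:

```python
def lexicon_at_cutoff(ratios, vocab_size, cutoff):
    """Trade recall for precision by raising the likelihood cutoff.

    ratios: (u, v) -> L(u, v); vocab_size: size of the bitext vocabulary.
    Returns the lexicon entries and their recall (fraction of the
    vocabulary represented in the model)."""
    entries = {pair: s for pair, s in ratios.items() if s >= cutoff}
    covered = {w for pair in entries for w in pair}
    return entries, len(covered) / vocab_size
```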
5.2 Link Tokens

    type of error     errors made by    errors made
                      IBM Model 2       by our model
    ------------------------------------------------
    wrong link             32                7
    missing link           12               36
    partial link            7               10
    class conflict          0                5
    tokenization            3                2
    paraphrase             39               36
    ------------------------------------------------
    TOTAL                  93               96

    Table 1: Erroneous link tokens generated by two translation models.

The most detailed evaluation of link tokens to date was performed by Macklovitch & Hannan (1996), who trained Brown et al.'s Model 2 on 74 million words of the Canadian Hansards. These authors kindly provided us with the links generated by that model in 51 aligned sentences from a held-out test set. We generated links in the same 51 sentences using our two-class word-to-word model, and manually evaluated the content-word links from both models. The IBM models are directional; i.e. they posit the English words that gave rise to each French word, but ignore the distribution of the English words. Therefore, we ignored English words that were linked to nothing. The errors are classified in Table 1.

The "wrong link" and "missing link" error categories should be self-explanatory. "Partial links" are those where one French word resulted from multiple English words, but the model only links the French word to one of its English sources. "Class conflict" errors resulted from our model's refusal to link content words with function words. Usually, this is the desired behavior, but words like English auxiliary verbs are sometimes used as content words, giving rise to content words in French. Such errors could be overcome by a model that classifies each word token, for example using a part-of-speech tagger, instead of assigning the same class to all tokens of a given type. The bitext preprocessor for our word-to-word model split hyphenated words, but Macklovitch & Hannan's preprocessor did not. In some cases, hyphenated words were easier to link correctly; in other cases they were more difficult. Both models made some errors because of this tokenization problem, albeit in different places. The "paraphrase" category covers all link errors that resulted from paraphrases in the translation. Neither IBM's Model 2 nor our model is capable of linking multi-word sequences to multi-word sequences, and this was the biggest source of error for both models.

The test sample contained only about 400 content words (the exact number depends on the tokenization method), and the links for both models were evaluated post-hoc by only one evaluator. Nevertheless, it appears that our word-to-word model with only two link classes does not perform any worse than IBM's Model 2, even though the word-to-word model was trained on less than one fifth the amount of data that was used to train the IBM model. Since it doesn't store indirect associations, our word-to-word model contained an average of 4.5 French words for every English word. Such a compact model requires relatively little computational effort to induce and to apply.

    Figure 6: An example of the different sorts of errors made by the word-to-word model and the IBM Model 2, on the French phrase "des vents déchaînés et une mer démontée" aligned with "screaming winds and dangerous sea conditions". Solid lines are links made by both models; dashed lines are links made by the IBM model only. Only content-class links are shown. Neither model makes the correct links (déchaînés, screaming) and (démontée, dangerous).
In addition to the quantitative differences between the word-to-word model and the IBM model, there is an important qualitative difference, illustrated in Figure 6. As shown in Table 1, the most common kind of error for the word-to-word model was a missing link, whereas the most common error for IBM's Model 2 was a wrong link. Missing links are more informative: they indicate where the model has failed. The level at which the model trusts its own judgement can be varied directly by changing the likelihood cutoff in Step 1 of the competitive linking algorithm. Each application of the word-to-word model can choose its own balance between link token precision and recall. An application that calls on the word-to-word model to link words in a bitext could treat unlinked words differently from linked words, and avoid basing subsequent decisions on uncertain inputs. It is not clear how the precision/recall trade-off can be controlled in the IBM models.

One advantage that Brown et al.'s Model 1 has over our word-to-word model is that their objective function has no local maxima. By using the EM algorithm (Dempster et al., 1977), they can guarantee convergence towards the globally optimum parameter set. In contrast, the dynamic nature of the competitive linking algorithm changes Pr(data | model) in a non-monotonic fashion. We have adopted the simple heuristic that the model "has converged" when this probability stops increasing.
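Tying the earlier sketches together, the whole induction alternates linking and re-estimation until that convergence heuristic fires. This is again an illustrative sketch, not the released system: co-occurrence is counted at most once per word-type pair per segment pair for brevity, and it builds on the hypothetical helpers defined above.

```python
def induce_model(bitext, init_ratios, max_iters=10):
    """Alternate competitive linking and re-estimation; stop when
    Pr(links | model) stops increasing (the convergence heuristic above)."""
    ratios, prev_ll = dict(init_ratios), float('-inf')
    for _ in range(max_iters):
        counts = {}                                   # (u, v) -> (k, n)
        for src, trg in bitext:                       # aligned segment pairs
            linked = {(src[i], trg[j])
                      for i, j in competitive_link(src, trg, ratios)}
            for u in set(src):
                for v in set(trg):                    # co-occurrence tally
                    k, n = counts.get((u, v), (0, 0))
                    counts[(u, v)] = (k + ((u, v) in linked), n + 1)
        lam_plus, lam_minus = estimate_lambdas(counts)
        K = sum(k for k, _ in counts.values())
        N = sum(n for _, n in counts.values())
        ll = log_likelihood(counts, lam_plus, lam_minus, K, N)
        if ll <= prev_ll:
            break                                     # "has converged"
        prev_ll = ll
        ratios = {pair: likelihood_ratio(k, n, lam_plus, lam_minus)
                  for pair, (k, n) in counts.items()}
    return ratios
```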
6 Conclusion

Many multilingual NLP applications need to translate words between different languages, but cannot afford the computational expense of modeling the full range of translation phenomena. For these applications, we have designed a fast algorithm for estimating word-to-word models of translational equivalence. The estimation method uses a pair of hidden parameters to measure the model's uncertainty, and avoids making decisions that it's not likely to make correctly. The hidden parameters can be conditioned on information extrinsic to the model, providing an easy way to integrate pre-existing knowledge. So far we have only implemented a two-class model, to exploit the differences in translation consistency between content words and function words. This relatively simple two-class model linked word tokens in parallel texts as accurately as other translation models in the literature, despite being trained on only one fifth as much data. Unlike other translation models, the word-to-word model can automatically produce dictionary-sized translation lexicons, and it can do so with over 99% accuracy.

Even better accuracy can be achieved with a more fine-grained link class structure. Promising features for classification include part of speech, frequency of co-occurrence, relative word position, and translational entropy (Melamed, 1997). Another interesting extension is to broaden the definition of a "word" to include multi-word lexical units (Smadja, 1992). If such units can be identified a priori, their translations can be estimated without modifying the word-to-word model. In this manner, the model can account for a wider range of translation phenomena.

Acknowledgements

The French/English software manuals were provided by Gary Adams of Sun Microsystems Laboratories. The weather bitext was prepared at the University of Montreal, under the direction of Richard Kittredge. Thanks to Alexis Nasr for hand-evaluating the weather translation lexicon. Thanks also to Mike Collins, George Foster, Mitch Marcus, Lyle Ungar, and three anonymous reviewers for helpful comments. This research was supported by an equipment grant from Sun Microsystems and by ARPA Contract #N66001-94C-6043.

References

P. F. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, & P. Roossin, "A Statistical Approach to Language Translation," Proceedings of the 12th International Conference on Computational Linguistics, Budapest, Hungary, 1988.
P. F. Brown, J. Cocke, S. Della Pietra, V. Della Pietra, F. Jelinek, R. Mercer, & P. Roossin, "A Statistical Approach to Machine Translation," Computational Linguistics 16(2), 1990.
P. F. Brown, V. J. Della Pietra, S. A. Della Pietra & R. L. Mercer, "The Mathematics of Statistical Machine Translation: Parameter Estimation," Computational Linguistics 19(2), 1993.
P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, M. J. Goldsmith, J. Hajic, R. L. Mercer & S. Mohanty, "But Dictionaries are Data Too," Proceedings of the ARPA HLT Workshop, Princeton, NJ, 1993.
R. Catizone, G. Russell & S. Warwick, "Deriving Translation Data from Bilingual Texts," Proceedings of the First International Lexical Acquisition Workshop, Detroit, MI, 1993.
S. Chen, Building Probabilistic Models for Natural Language, Ph.D. Thesis, Harvard University, 1996.
K. W. Church & E. H. Hovy, "Good Applications for Crummy Machine Translation," Machine Translation 8, 1993.
I. Dagan, K. Church, & W. Gale, "Robust Word Alignment for Machine Aided Translation," Proceedings of the Workshop on Very Large Corpora: Academic and Industrial Perspectives, Columbus, OH, 1993.
A. P. Dempster, N. M. Laird & D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society 34(B), 1977.
T. Dunning, "Accurate Methods for the Statistics of Surprise and Coincidence," Computational Linguistics 19(1), 1993.
P. Fung, "Compiling Bilingual Lexicon Entries from a Non-Parallel English-Chinese Corpus," Proceedings of the Third Workshop on Very Large Corpora, Boston, MA, 1995a.
P. Fung, "A Pattern Matching Method for Finding Noun and Proper Noun Translations from Noisy Parallel Corpora," Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics, Boston, MA, 1995b.
W. Gale & K. W. Church, "A Program for Aligning Sentences in Bilingual Corpora," Proceedings of the 29th Annual Meeting of the Association for Computational Linguistics, Berkeley, CA, 1991.
W. Gale & K. W. Church, "Identifying Word Correspondences in Parallel Texts," Proceedings of the DARPA SNL Workshop, 1991.
A. Kumano & H. Hirakawa, "Building an MT Dictionary from Parallel Texts Based on Linguistic and Statistical Information," Proceedings of the 15th International Conference on Computational Linguistics, Kyoto, Japan, 1994.
E. Macklovitch, "Using Bi-textual Alignment for Translation Validation: The TransCheck System," Proceedings of the 1st Conference of the Association for Machine Translation in the Americas, Columbia, MD, 1994.
E. Macklovitch & M.-L. Hannan, "Line 'Em Up: Advances in Alignment Technology and their Impact on Translation Support Tools," 2nd Conference of the Association for Machine Translation in the Americas, Montreal, Canada, 1996.
I. D. Melamed, "Automatic Evaluation and Uniform Filter Cascades for Inducing N-best Translation Lexicons," Proceedings of the Third Workshop on Very Large Corpora, Boston, MA, 1995.
I. D. Melamed, "A Geometric Approach to Mapping Bitext Correspondence," Proceedings of the First Conference on Empirical Methods in Natural Language Processing, Philadelphia, PA, 1996a.
I. D. Melamed, "Automatic Detection of Omissions in Translations," Proceedings of the 16th International Conference on Computational Linguistics, Copenhagen, Denmark, 1996b.
I. D. Melamed, "Automatic Construction of Clean Broad-Coverage Translation Lexicons," 2nd Conference of the Association for Machine Translation in the Americas, Montreal, Canada, 1996c.
I. D. Melamed, "Measuring Semantic Entropy," Proceedings of the SIGLEX Workshop on Tagging Text with Lexical Semantics, Washington, DC, 1997.
I. D. Melamed, "A Portable Algorithm for Mapping Bitext Correspondence," Proceedings of the 35th Conference of the Association for Computational Linguistics, Madrid, Spain, 1997. (in this volume)
A. Melby, "A Bilingual Concordance System and its Use in Linguistic Studies," Proceedings of the Eighth LACUS Forum, Columbia, SC, 1981.
A. Nasr, personal communication, 1997.
P. Resnik & I. D. Melamed, "Semi-Automatic Acquisition of Domain-Specific Translation Lexicons," Proceedings of the 7th ACL Conference on Applied Natural Language Processing, Washington, DC, 1997.
D. W. Oard & B. J. Dorr, "A Survey of Multilingual Text Retrieval," UMIACS TR-96-19, University of Maryland, College Park, MD, 1996.
F. Smadja, "How to Compile a Bilingual Collocational Lexicon Automatically," Proceedings of the AAAI Workshop on Statistically-Based NLP Techniques, 1992.
D. Wu & X. Xia, "Learning an English-Chinese Lexicon from a Parallel Corpus," Proceedings of the First Conference of the Association for Machine Translation in the Americas, Columbia, MD, 1994.
A Structured Language Model

Ciprian Chelba
The Johns Hopkins University, CLSP, Barton Hall 320
3400 N. Charles Street, Baltimore, MD 21218
chelba@jhu.edu

Abstract

The paper presents a language model that develops syntactic structure and uses it to extract meaningful information from the word history, thus enabling the use of long distance dependencies. The model assigns probability to every joint sequence of words-binary-parse-structure with headword annotation. The model, its probabilistic parametrization, and a set of experiments meant to evaluate its predictive power are presented.

1 Introduction

The main goal of the proposed project is to develop a language model (LM) that uses syntactic structure. The principles that guided this proposal were:
• the model will develop syntactic knowledge as a built-in feature; it will assign a probability to every joint sequence of words-binary-parse-structure;
• the model should operate in a left-to-right manner so that it would be possible to decode word lattices provided by an automatic speech recognizer.
The model consists of two modules: a next word predictor which makes use of syntactic structure as developed by a parser. The operations of these two modules are intertwined.

2 The Basic Idea and Terminology

Consider predicting the word barked in the sentence: the dog I heard yesterday barked again. A 3-gram approach would predict barked from (heard, yesterday), whereas it is clear that the predictor should use the word dog, which is outside the reach of even 4-grams. Our assumption is that what enables us to make a good prediction of barked is the syntactic structure in the past. The correct partial parse of the word history when predicting barked is shown in Figure 1.

    Figure 1: Partial parse of "the dog I heard yesterday barked".

The word dog is called the headword of the constituent (the (dog (...))), and dog is an exposed headword when predicting barked -- the topmost headword in the largest constituent that contains it. The syntactic structure in the past filters out irrelevant words and points to the important ones, thus enabling the use of long distance information when predicting the next word. Our model will assign a probability P(W,T) to every sentence W with every possible binary branching parse T and every possible headword annotation for every constituent of T.

Let W be a sentence of length l words to which we have prepended <s> and appended </s> so that w_0 = <s> and w_{l+1} = </s>. Let W_k be the word k-prefix w_0 ... w_k of the sentence and W_k T_k the word-parse k-prefix. To stress this point, a word-parse k-prefix contains only those binary trees whose span is completely included in the word k-prefix, excluding w_0 = <s>. Single words can be regarded as root-only trees. Figure 2 shows a word-parse k-prefix; h_0 .. h_{-m} are the exposed headwords.

    Figure 2: A word-parse k-prefix, with exposed headwords h_{-m} ... h_{-1}, h_0 over w_1 ... w_k, followed by the yet-unparsed words w_{k+1} ... w_n </s>.

A complete parse -- Figure 3 -- is any binary parse of the w_1 ... w_l </s> sequence with the restriction that </s> is the only allowed headword.

    Figure 3: Complete parse.

Note that (w_1 ... w_l) needn't be a constituent, but for the parses where it is, there is no restriction on which of its words is the headword.
The model will operate by means of two modules:
• PREDICTOR predicts the next word w_{k+1} given the word-parse k-prefix and then passes control to the PARSER;
• PARSER grows the already existing binary branching structure by repeatedly generating the transitions adjoin-left or adjoin-right until it passes control to the PREDICTOR by taking a null transition.

The operations performed by the PARSER ensure that all possible binary branching parses with all possible headword assignments for the w_1 ... w_k word sequence can be generated. They are illustrated by Figures 4-6.

    Figure 4: Before an adjoin operation (exposed headwords ..., h_{-2}, h_{-1}, h_0).
    Figure 5: Result of adjoin-left (the new constituent's headword is the previous h_{-1}).
    Figure 6: Result of adjoin-right (the new constituent's headword is the previous h_0).

The following algorithm describes how the model generates a word sequence with a complete parse (see Figures 3-6 for notation):

    Transition t;   // a PARSER transition
    generate <s>;
    do {
        predict next_word;                        // PREDICTOR
        do {                                      // PARSER
            if (T_{-1} != <s>)
                if (h_0 == </s>) t = adjoin-right;
                else t = {adjoin-{left,right}, null};
            else t = null;
        } while (t != null)
    } while (!(h_0 == </s> && T_{-1} == <s>))
    t = adjoin-right;                             // adjoin <s>; DONE

It is easy to see that any given word sequence with a possible parse and headword annotation is generated by a unique sequence of model actions.
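A Python rendering of the adjoin operations and the generation loop above may clarify the control flow. The PREDICTOR and the PARSER's transition distribution, i.e. the probability models (1) and (2) introduced in the next section, are abstracted as the assumed callables predict_word and choose_transition; this is an illustrative sketch, not the author's implementation.

```python
def adjoin(stack, direction):
    """Combine the two topmost constituents; the new constituent's headword
    is inherited from the left child (adjoin-left) or right child (adjoin-right)."""
    right = stack.pop()
    left = stack.pop()
    head = left[1] if direction == 'left' else right[1]
    stack.append(((left, right), head))   # item = (subtree, headword)

def generate(predict_word, choose_transition):
    stack = [('<s>', '<s>')]              # sentence-start marker
    words = []
    while True:
        w = predict_word(stack, words)    # model (1)
        words.append(w)
        stack.append((w, w))              # a single word is a root-only tree
        while True:
            if stack[-2][1] == '<s>':     # T_{-1} is <s>: must take null
                t = 'null'
            elif stack[-1][1] == '</s>':  # h_0 is </s>: must adjoin-right
                t = 'right'
            else:
                t = choose_transition(stack)  # model (2): left / right / null
            if t == 'null':
                break
            adjoin(stack, t)
        if stack[-1][1] == '</s>' and stack[-2][1] == '<s>':
            break
    adjoin(stack, 'right')                # adjoin <s>; DONE
    return stack[-1], words
```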
3 Probabilistic Model

The probability P(W,T) can be broken into:

    P(W,T) = Π_{k=1}^{l+1} [ P(w_k | W_{k-1} T_{k-1}) · Π_{i=1}^{N_k} P(t_i^k | w_k, W_{k-1} T_{k-1}, t_1^k ... t_{i-1}^k) ]

where:
• W_{k-1} T_{k-1} is the word-parse (k-1)-prefix;
• w_k is the word predicted by PREDICTOR;
• N_k − 1 is the number of adjoin operations the PARSER executes before passing control to the PREDICTOR (the N_k-th operation at position k is the null transition); N_k is a function of T;
• t_i^k denotes the i-th PARSER operation carried out at position k in the word string; t_i^k ∈ {adjoin-left, adjoin-right} for i < N_k, and t_i^k = null for i = N_k.

Our model is based on two probabilities:

    P(w_k | W_{k-1} T_{k-1})    (1)
    P(t_i^k | w_k, W_{k-1} T_{k-1}, t_1^k ... t_{i-1}^k)    (2)

As can be seen, (w_k, W_{k-1} T_{k-1}, t_1^k ... t_{i-1}^k) is one of the N_k word-parse k-prefixes of W_k T_k, i = 1, N_k, at position k in the sentence.

To ensure a proper probabilistic model we have to make sure that (1) and (2) are well defined conditional probabilities and that the model halts with probability one. A few provisions need to be taken:
• P(null | W_k T_k) = 1, if T_{-1} == <s>, ensures that <s> is adjoined in the last step of the parsing process;
• P(adjoin-right | W_k T_k) = 1, if h_0 == </s>, ensures that the headword of a complete parse is </s>;
• ∃ ε > 0 s.t. P(w_k = </s> | W_{k-1} T_{k-1}) ≥ ε, ∀ W_{k-1} T_{k-1}, ensures that the model halts with probability one.

3.1 The first model

The first term (1) can be reduced to an n-gram LM, P(w_k | W_{k-1} T_{k-1}) = P(w_k | w_{k-1} ... w_{k-n+1}). A simple alternative to this degenerate approach would be to build a model which predicts the next word based on the preceding p−1 exposed headwords and n−1 words in the history, thus making the following equivalence classification:

    [W_k T_k] = {h_0 .. h_{-p+2}, w_{k-1} .. w_{k-n+1}}.

The approach is similar to the trigger LM (Lau93), the difference being that in the present work triggers are identified using the syntactic structure.

3.2 The second model

Model (2) assigns probability to different binary parses of the word k-prefix by chaining the elementary operations described above. The workings of the PARSER are very similar to those of Spatter (Jelinek94). It can be brought to the full power of Spatter by changing the action of the adjoin operation so that it takes into account the terminal/nonterminal labels of the constituent proposed by adjoin and it also predicts the nonterminal label of the newly created constituent; PREDICTOR will now predict the next word along with its POS tag. The best equivalence classification of the W_k T_k word-parse k-prefix is yet to be determined. The Collins parser (Collins96) shows that dependency-grammar-like bigram constraints may be the most adequate, so the equivalence classification [W_k T_k] should contain at least {h_0, h_{-1}}.

4 Preliminary Experiments

Assuming that the correct partial parse is a function of the word prefix, it makes sense to compare the word level perplexity (PP) of a standard n-gram LM with that of the P(w_k | W_{k-1} T_{k-1}) model. We developed and evaluated four LMs:
• 2 bigram LMs P(w_k | W_{k-1} T_{k-1}) = P(w_k | w_{k-1}), referred to as W and w, respectively; w_{k-1} is the previous (word, POStag) pair;
• 2 P(w_k | W_{k-1} T_{k-1}) = P(w_k | h_0) models, referred to as H and h, respectively; h_0 is the previous exposed (headword, POS/non-term tag) pair; the parses used in this model were those assigned manually in the Penn Treebank (Marcus95) after undergoing headword percolation and binarization.

All four LMs predict a word w_k and they were implemented using the Maximum Entropy Modeling Toolkit (Ristad97; available at ftp://ftp.cs.princeton.edu/pub/packages/memt). The constraint templates in the {W,H} models were:

    4 <= <*>_<*> <?>;  2 <= <?>_<*> <?>;
    2 <= <?>_<?> <?>;  8 <= <*>_<?> <?>;

and in the {w,h} models they were:

    4 <= <*>_<*> <?>;  2 <= <?>_<*> <?>;

where <*> denotes a don't care position and <?>_<?> a (word, tag) pair; for example, 4 <= <?>_<*> <?> will trigger on all ((word, any tag), predicted-word) pairs that occur more than 3 times in the training data. The sentence boundary is not included in the PP calculation. Table 1 shows the PP results along with the number of parameters for each of the 4 models described (only the H and h rows are recoverable here):

    LM    PP    param
    H     312   206540
    h     410   102437

    Table 1: Perplexity results.

5 Acknowledgements

The author thanks Frederick Jelinek, Sanjeev Khudanpur, Eric Ristad and all the other members of the Dependency Modeling Group (Stolcke97), WS96 DoD Workshop at the Johns Hopkins University.

References

Michael John Collins. 1996. A new statistical parser based on bigram lexical dependencies. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, 184-191, Santa Cruz, CA.
Frederick Jelinek. 1997. Information extraction from speech and text -- course notes. The Johns Hopkins University, Baltimore, MD.
Frederick Jelinek, John Lafferty, David M. Magerman, Robert Mercer, Adwait Ratnaparkhi, Salim Roukos. 1994. Decision Tree Parsing using a Hidden Derivational Model. In Proceedings of the Human Language Technology Workshop, 272-277. ARPA.
Raymond Lau, Ronald Rosenfeld, and Salim Roukos. 1993. Trigger-based language models: a maximum entropy approach. In Proceedings of the IEEE Conference on Acoustics, Speech, and Signal Processing, volume 2, 45-48, Minneapolis.
Mitchell P. Marcus, Beatrice Santorini, Mary Ann Marcinkiewicz. 1995. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330.
Eric Sven Ristad. 1997. Maximum entropy modeling toolkit. Technical report, Department of Computer Science, Princeton University, Princeton, NJ, January 1997, v. 1.4 Beta.
Andreas Stolcke, Ciprian Chelba, David Engle, Frederick Jelinek, Victor Jimenez, Sanjeev Khudanpur, Lidia Mangu, Harry Printz, Eric Sven Ristad, Roni Rosenfeld, Dekai Wu. 1997. Structure and Performance of a Dependency Language Model. In Proceedings of Eurospeech'97, Rhodes, Greece. To appear.
Incorporating Context Information for the Extraction of Terms

Katerina T. Frantzi
Dept. of Computing, Manchester Metropolitan University
Manchester, M1 5GD, U.K.
K.Frantzi@doc.mmu.ac.uk

Abstract

The information used for the extraction of terms can be considered as rather 'internal', i.e. coming from the candidate string itself. This paper presents the incorporation of 'external' information derived from the context of the candidate string. It is embedded in the C-value approach for automatic term recognition (ATR), in the form of weights constructed from statistical characteristics of the context words of the candidate string.

1 Introduction & Related Work

The applications of term recognition (specialised dictionary construction and maintenance, human and machine translation, text categorization, etc.), and the fact that new terms appear with high speed in some domains (e.g. in computer science), enforce the need for automating the extraction of terms. ATR also gives the potential to work with large amounts of real data that would not be possible to handle manually. We should note that by ATR we mean neither dictionary string matching nor term interpretation (which deals with the relations between terms and concepts).

Terms may consist of one or more words. When the aim is the extraction of single-word terms, domain-dependent linguistic information (i.e. morphology) is used (Ananiadou, 1994). Multi-word ATR usually uses linguistic information in the form of a grammar that mainly allows noun phrases or compounds to be extracted as candidate terms: (Bourigault, 1992) extracts maximal-length noun phrases and their subgroups (depending on their grammatical structure and position) as candidate terms. (Dagan and Church, 1994) accept sequences of nouns, which gives them high precision, but not as good a recall as that of (Justeson and Katz, 1995), who allow some prepositions (i.e. of) to be part of the extracted candidate terms. (Frantzi and Ananiadou, 1996) stand between these two approaches, allowing the extracted compounds to contain adjectives but no prepositions. (Daille et al., 1994) also allow adjectives to be part of the two-word English terms they extract.

From the above, only (Bourigault, 1992) does not use any statistical information. (Justeson and Katz, 1995) and (Dagan and Church, 1994) use the frequency of occurrence of the candidate string as a measure of its likelihood to be a term. (Daille et al., 1994) agree that frequency of occurrence "presents the best histogram", but also suggest the likelihood ratio for the extraction of two-word English terms. (Frantzi and Ananiadou, 1996), besides the frequency of occurrence, also consider the frequency of the candidate string as a part of longer candidate terms, as well as the number of these longer candidate terms it is found nested in.

In this paper, we extend C-value, the statistical measure proposed by (Frantzi and Ananiadou, 1996), incorporating information gained from the textual context of the candidate term.

2 Context information for terms

The idea of incorporating context information for term extraction came from the observation that "Extended term units are different in type from extended word units in that they cannot be freely modified" (Sager, 1978). Therefore, information from the modifiers of the candidate strings could be used in the procedure of their evaluation as candidate terms. This could be extended beyond adjective/noun modification, to verbs that belong to the candidate string's context. For example, the form shows of the verb to show in medical domains is very often followed by a term, e.g. shows a basal cell carcinoma. There are cases where the verbs that appear with terms can even be domain independent, like the form called of the verb to call, or the form known of the verb to know, which are often involved in definitions in various areas, e.g. is known as the singular existential quantifier, is called the Cartesian product.
This could be extended beyond adjective/noun modifica- tion, to verbs that belong to the candidate string's context. For example, the form shows of the verb to show in medical domains, is very often followed by a term, e.g. shows a basal cell carcinoma. There are cases where the verbs that appear with terms can even be domain independent, like the form called of 501 the verb to call, or the form known of the verb to know, which are often involved in definitions in var- ious areas, e.g. is known as the singular existential quantifier, is called the Cartesian product. Since context carries information about terms it should be involved in the procedure for their ex- traction. We incorporate context information in the form of weights constructed in a fully automatic way. 2.1 The Linguistic Part The corpus is tagged, and a linguistic filter will only accept specific part-of-speech sequencies. The choice of the linguistic filter affects the precision and re- call of the results: having a 'closed' filter, that is, a strict one regarding the part-of-speech sequencies it accepts, like the N + that (Dagan and Church, 1994) use, wilt improve the precision but have bad effect on the recall. On the other side, an 'open' filter, one that accepts more part-of-speech sequen- cies, like that of (Justeson and Katz, 1995) that ac- cepts prepositions as well as adjectives and nouns, will have the opposite result. In our choice of the linguistic filter, we lie some- where in the middle, accepting strings consisting of adjectives and nouns: ( N ounlAdjective) + Noun (1) However, we do not claim that this specific fil- ter should be used at all cases, but that its choice depends on the application: the construction of domain-specific dictionaries requires high coverage, and would therefore allow low precision in order to achieve high recall, while when speed is required, high quality would be better appreciated, so that the manual filtering of the extracted list of candidate terms can be as fast as possible. So, in the first case we could choose an 'open' linguistic filter (e.g. one that accepts prepositions), while in the second, a 'closed' one (e.g. one that only accepts nouns). The type of context involved on the extraction of candidate terms is also an issue. At this stage of this work, the adjectives, nouns and verbs are considered. However, further investigation is needed over the context used (as it is discussed in the future work). 2.2 The Statistical Part The procedure involves the following steps: Step 1: The raw corpus is tagged and from the tagged corpus the strings that obey the (NounlAdjective)+Noun expression are extracted. Step 2: For these strings, C-value is calculated resulting in a list of candidate terms (ranked by C- value as their likelihood of being terms). The length of the string is incorporated in the C-value measure resulting to C-value' C-value' (a) -=- I where log2 lalf(a) lal = max, ~,~, ~(b) log2 lal(f(a) - p(ro) ) otherwise (2) a is the examined string, lal the length of a in terms of number of words, f(a) the frequency of a in the corpus, Ta the set of candidate terms that contain a, P(T~) the number of these candidate terms. At this point the incorporation of the context in- formation will take place. Step 3: Since C-value is a measure for extract- ing terms, the top of the previously constructed list presents the higher density on terms among any other part of the list. 
This top of the list, or else, the 'first' of these ranked candidate terms will give the weights to the context. We take the top ranked candidate strings, and from the initial corpus we ex- tract their context which currently are the adjec- tives, nouns and verbs that surround the candidate term. For each of these adjectives, nouns and verbs, we consider three parameters: 1. its total frequency in the corpus, 2. its frequency as a context word (of the 'first' candidate terms), 3. the number of these 'first' candidate terms it appears with. These characteristics are combined in the following way to assign a weight to the context word ft(w) ) Weight(w) = 0.5(~ -~ + f(w) (3) where w is the noun/verb/adjective to be assigned a weight, n the number of the 'first' candidate terms consid- ered, t(w) the number of candidate terms the word w ap- pears with, ft(w) w's total frequency appearing with candidate terms, f(w) w's total frequency in the corpus. A variation to improve the results, that involves human interaction, is the following: the candidate terms involved for the extraction of context are firstly manually evaluated, and only the 'real terms' will proceed to the extraction of the context and as- signment of weights (as previously). 502 At this point a list of context words together with their weights has been created. Step 4: The previously created by C-value r list will now be re-ordered considering the weights obtained from step 3. For each of the candidate strings of the list. its context (adjectives, nouns and verbs that surround it) are extracted from the corpus. These context words have either been found at step 3 and therefore assigned a weight, or not. In the latter case, they are now assigned weight equal to 0. Each of these candidate strings is now ready to be assigned a context weight which would be the sum of the weights of its context words: wei(a) = Weight(b) + 1 (4) b~C° where a is the examined n-gram, Ca the context of a, Weight(b) the calculated (from step 3) weight for the word b. The candidate terms will be now re-ranked according to: 1 NC.value(a) = ~ C-value'(a) • wei(a) (5) tog(. r) where a is the examined n-gram, C-value'(a) calculated from step 2, wei(a), the calculated from step 4 sum of the context weights for a, N the size of the corpus in terms of number of words. 3 Future work Our future work involves 1. The investigation of the context used for the evaluation of the candidate string, and the amount of information that various context carries. We said that for this prototype we considered the adjectives, nouns and verbs that surround the candidate string. However, could ~something else' also carry useful in- formation? Should adjectives, nouns and verbs all be considered to carry the same amount of informa- tion, or should they be assigned different weights? 2. The investigation of the assignment of weights on the parameters used for the measures. Currently, the measures contain the parameters in a 'flat' way. That is, not really considering the 'weight' (the im- portance) of each of them. So, the measures are at this point a description of which parameters to be used, and not on the degree to which they should be used. 3. The comparison of this method with other ATR approaches. The experimentation on real data will show if this approach actually brings improvement to the results in comparison with previous approaches. Moreover, the application on real data should cover more than one domains. 4 Acknowledgement I thank my supervisors Dr. S. Ananiadou and Prof. 
3 Future work

Our future work involves:

1. The investigation of the context used for the evaluation of the candidate string, and the amount of information that various contexts carry. We said that for this prototype we considered the adjectives, nouns and verbs that surround the candidate string. However, could 'something else' also carry useful information? Should adjectives, nouns and verbs all be considered to carry the same amount of information, or should they be assigned different weights?

2. The investigation of the assignment of weights to the parameters used for the measures. Currently, the measures contain the parameters in a 'flat' way, that is, without really considering the 'weight' (the importance) of each of them. So, the measures are at this point a description of which parameters are to be used, and not of the degree to which they should be used.

3. The comparison of this method with other ATR approaches. Experimentation on real data will show whether this approach actually improves the results in comparison with previous approaches. Moreover, the application on real data should cover more than one domain.

4 Acknowledgement

I thank my supervisors Dr. S. Ananiadou and Prof. J. Tsujii. Also Dr. T. Sharpe from the Medical School of the University of Manchester for the eye-pathology corpus.

References

Sophia Ananiadou. 1988. A Methodology for Automatic Term Recognition. Ph.D. Thesis, University of Manchester Institute of Science and Technology.
Didier Bourigault. 1992. Surface Grammatical Analysis for the Extraction of Terminological Noun Phrases. In Proceedings of the International Conference on Computational Linguistics, COLING-92, pages 977-981.
Ido Dagan and Ken Church. 1994. Termight: Identifying and Translating Technical Terminology. In Proceedings of the European Chapter of the Association for Computational Linguistics, EACL-94, pages 34-40.
Béatrice Daille, Éric Gaussier and Jean-Marc Langé. 1994. Towards Automatic Extraction of Monolingual and Bilingual Terminology. In Proceedings of the International Conference on Computational Linguistics, COLING-94, pages 515-521.
Katerina T. Frantzi and Sophia Ananiadou. 1996. A Hybrid Approach to Term Recognition. In Proceedings of the International Conference on Natural Language Processing and Industrial Applications, NLP+IA-96, pages 93-98.
John S. Justeson and Slava M. Katz. 1995. Technical terminology: some linguistic properties and an algorithm for identification in text. In Natural Language Engineering, 1:9-27.
Juan C. Sager. 1978. Commentary in Table Ronde sur les Problèmes du Découpage du Terme. Service des Publications, Direction de la Langue Française, Montréal, 1979, pages 39-52.
Knowledge Acquisition from Texts: Using an Automatic Clustering Method Based on Noun-Modifier Relationship

Houssem Assadi
Électricité de France - DER/IMA and Paris 6 University - LAFORIA
1 avenue du Général de Gaulle, F-92141, Clamart, France
houssem.assadi@der.edfgdf.fr

Abstract

We describe the early stage of our methodology of knowledge acquisition from technical texts. First, a partial morpho-syntactic analysis is performed to extract "candidate terms". Then, the knowledge engineer, assisted by an automatic clustering tool, builds the "conceptual fields" of the domain. We focus on this conceptual analysis stage, describe the data prepared from the results of the morpho-syntactic analysis, and show the results of the clustering module and their interpretation. We found that syntactic links are good descriptors for candidate-term clustering, since the clusters are often easily interpreted as "conceptual fields".

1 Introduction

Knowledge Acquisition (KA) from technical texts is a growing research area in the Knowledge-Based Systems (KBS) research community, since documents containing a large amount of technical knowledge are available on electronic media.

We focus on the methodological aspects of KA from texts. In order to build up the model of the subject field, we need to perform a corpus-based semantic analysis. Prior to the semantic analysis, morpho-syntactic analysis is performed by LEXTER, a terminology extraction software (Bourigault et al., 1996): LEXTER gives a network of noun phrases which are likely to be terminological units and which are connected by syntactical links. When dealing with medium-sized corpora (a few hundred thousand words), the terminological network is too voluminous for analysis by hand, and it becomes necessary to use data analysis tools to process it. The main idea for making KA from medium-sized corpora a feasible and efficient task is to perform a robust syntactic analysis (using LEXTER, see section 2) followed by a semi-automatic semantic analysis where automatic clustering techniques are used interactively by the knowledge engineer (see sections 3 and 4).

We agree with the differential definition of semantics: the meaning of the morpho-lexical units is not defined by reference to a concept, but rather by contrast with other units (Rastier et al., 1994). In fact, we are considering "word usage rather than word meaning" (Zernik, 1990), following in this the distributional point of view; see (Harris, 1968), (Hindle, 1990).

Statistical or probabilistic methods are often used to extract semantic clusters from corpora in order to build lexical resources for ANLP tools (Hindle, 1990), (Zernik, 1990), (Resnik, 1993), or for automatic thesaurus generation (Grefenstette, 1994). We use similar techniques, enriched by a preliminary morpho-syntactic analysis, in order to perform knowledge acquisition and modeling for a specific task (e.g. electrical network planning). Moreover, we are dealing with language-for-specific-purpose texts and not with general texts.

2 The morpho-syntactic analysis: the LEXTER software

LEXTER is a terminology extraction software (Bourigault et al., 1996). A corpus of French texts on any technical subject can be fed into it. LEXTER performs a morpho-syntactic analysis of this corpus and gives a network of noun phrases which are likely to be terminological units. (All the examples given in this paper are translated from French.)

Any complex term is recursively broken up into two parts: head (e.g. PLANNING in the term REGIONAL NETWORK PLANNING) and expansion (e.g. REGIONAL in the term REGIONAL NETWORK).

This analysis allows the organisation of all the candidate terms in a network format, known as the "terminological network".
PLANNING in the term RE- GIONAL NETWORK PLANNING), and expansion (e.g. REGIONAL in the term REGIONAL NETWORK) 1 This analysis allows the organisation of all the candidate terms in a network format, known as the XAll the examples given in this paper are translated from French. 504 "terminological network". Each analysed complex candidate term is linked to both its head (H-link) and expansion (E-link). LEXTER alSO extracts phraseological units (PU) which are "informative collocations of the candidate terms". For instance, CONSTRUCTION OF THE HIGH- VOLTAGE LINE is a PU built with the candidate term HIGH-VOLTAGE LINE. PUs are recursively broken up into two parts, similarly to the candidate terms, and the links are called H'-link and E'-link. 3 The data for the clustering module The candidate terms extracted by LEXTER can be NPs or adjectives. In this paper, we focus on NP clustering. A NP is described by its "terminological context". The four syntactic links of LEXTER Can be used to define this terminological context. For in- stance, the "expansion terminological context" (E- terminological context) of a NP is the set of the can- didate terms appearing in the expansion of the more complex candidate term containing the current NP in head position. For example, the candidate terms (NATIONAL NETWORK, REGIONAL NETWORK, DIS- PATCHING NETWORK) give the context (NATIONAL, REGIONAL, DISPATCHING) for the noun NETWORK. If we suppose that the modifiers represent special- isations of a head NP by giving a specific attribute of it, NPs described by similar E-terminological con- texts will be semantically close. These semantic sim- ilarities allow the KE to build conceptual fields in the early stages of the KA process. The links around a NP within a PU are also inter- esting. Those candidate terms appearing in the head position in a PU containing a given NP could de- note properties or actions related to this NP. For in- stance, the PUs LENGTH OF THE LINE and NOMINAL POWER OF THE LINE show two properties (LENGTH and NOMINAL POWER) of the object LINE; the PU CONSTRUCTION OF THE LINE shows an action (CON- STRUCTION) which can be applied to the object LINE. This definition of the context is original compared to the classical context definitions used in Informa- tion Retrieval, where the context of a lexical unit is obtained by examining its neighbours (collocations) within a fixed-size window. Given that candidate terms extraction in LEXTER is based on a morpho- syntactical analysis, our definition allows us to group collocation information disseminated in the corpus under different inflections (the candidate terms of LEXTER are lemmatised) and takes into account the syntactical structure of the candidate terms. For in- stance, LEXTER extracts the complex candidate term BUILT DISPATCHING LINE, and analyses it in (BUILT (DISPATCHING LINE)); the adjective BUILT will ap- pear in the terminological context of DISPATCHING LINE and not in that of DISPATCHING. It is obvi- ous that only the first context is relevant given that BUILT characterises the DISPATCHING LINE and not the DISPATCHING. To perform NP clustering, we prepared two data sets : in the first, NPs are described by their E- terminological context; in the second one, both the E-terminological context and the H'- terminological context (obtained with the H'-link within PUs) are used. The same filtering method 2 and clustering algorithm are applied in both cases. Table 1 shows an extract from the first data set. 
The columns are labelled by the expansions (nominal or adjectival) of the NPs being clustered. Each line represents a NP (an individual, in statistical terms) : there is a '1' when the term built with the NP and the expansion exists (e.g. REGIONAL NETWORK is extracted by LEXTER), and a '0' otherwise ("national line" is not extracted by LEXTER). NATIONAL DISPATCHING REGIONAL LINE 0 1 0 NETWORK 1 1 1 Table 1: example of the data used for NP clustering In the remainder of this article, we describe the way a KE uses LEXICLASS to build "conceptual fields" and we also compare the clusterings obtained from the two different data sets. 4 The conceptual analysis : the LEXICLASS software LEXICLASS is a clustering tool written using C lan- guage and specialised data analysis functions from Splus TM software. Given the individuals-variables matrix above, a similarity measure between the individuals is calcu- lated 3 and a hierarchical clustering method is per- formed with, as input, a similarity matrix. This kind of methods gives, as a result, a classification tree (or dendrogram) which has to be cut at a given level in order to produce clusters. For example, this method, applied on a population of 221 NPs (data set 1) gives 2This filtering method is mandatory, given that the chosen clustering algorithm cannot be applied to the whole terminological network (several thousands of terms) and that the results have to be validated by hand. We have no space to give details about this method, but we must say that it is very important to obtain proper data for clustering 3similarity measures adapted to binary data are used - e.g. the Anderberg measure - see (Kotz et al., 1985) 505 21 clusters, figure 1 shows an example of such a clus- ter. i .................................... AN AUTOMATICALLY FOUND ~ OUTPOST NETWORK CLUSTER , BAR STANDBY ', CABLE PRIMARY ', LINK TRANFORMER UINE TRANSFORMATION LEVEL UNDERGROUND CABLE ', STRUCTURE PART INTERPRETATION BY TI~ KNOWLEDGE ENGINEER STRUCTUI~S und~g~Lmd ~1~ Figure 1: a cluster interpretation The interpretation, by the KE, of the results given by the clustering methods applied on the data of ta- ble 1 leads him to define conceptual fields. Figure 1 shows the transition from an automatically found cluster to a conceptual field : the KE constitutes the conceptual fields of "the structures". He puts some concepts in it by either validating a candidate term (e.g. LINE), or reformulating a candidate term (e.g. PRIMARY is an ellipsis and leads the KE to cre- ate the concept primary substation). The other candidate terms are not kept because they are con- sidered as non relevant by the KE. The conceptual fields have to be completed all along the KA pro- cess. At the end of this operation, the candidate terms appearing in a conceptual field are validated. This first stage of the KA process is also the oppor- tunity for the KE to constitute synonym sets : the synonym terms are grouped, one of them is chosen as a concept label, and the others are kept as the values of a generic attribute labels of the considered concept (see figure 2 for an example). l line //conceptual field// : structure //typell : object //labels// : LINE, ELECTRIC LINE, OVERHEAD LINE Figure 2: a partial description of the concept "line" 5 Discussion • Evaluation of the quality of the clustering pro- cedure • in the majority of the works using clus- tering methods, the evaluation of the quality of the method used is based on recall and preci- sion parameters. 
5 Discussion

• Evaluation of the quality of the clustering procedure: in the majority of works using clustering methods, the evaluation of the quality of the method is based on recall and precision parameters. In our case, it is not possible to have an a priori reference classification; the reference classification is highly domain- and task-dependent. The only criterion that we have at the present time is a qualitative one, namely the usefulness of the results of the clustering methods for a KE building a conceptual model. We asked the KE to evaluate the quality of the clusters by scoring each of them, assuming that there are three types of clusters:
1. Non-relevant clusters.
2. Relevant clusters that cannot be labelled.
3. Relevant clusters that can be labelled.
Then an overall clustering score is computed. This elementary qualitative scoring allowed the KE to say that the clustering obtained with the second data set is better than the one obtained with the first.

• LEXICLASS is a generic clustering module; it only needs nominal (or verbal) compounds described by dependency relationships. It may use the results of any morpho-syntactic analyzer which provides dependency relations (e.g. the verb-object relationship).

• The interactive conceptual analysis: in the present article, we only described the first step of the KA process (the construction of "conceptual fields"). Actually, this process continues in an interactive manner: the system uses the conceptual fields defined by the KE to compute new conceptual structures; these are accepted or rejected by the KE, and the exploration of both the terminological network and the documentation continues.

References

Bourigault D., Gonzalez-Mullier I., and Gros C. 1996. LEXTER, a Natural Language Processing Tool for Terminology Extraction. In Proceedings of the 7th EURALEX International Congress, Göteborg, Sweden.
Grefenstette G. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Boston.
Harris Z. 1968. Mathematical Structures of Language. Wiley, NY.
Hindle D. 1990. Noun classification from predicate-argument structures. In 28th Annual Meeting of the Association for Computational Linguistics, pages 268-275, Pittsburgh, Pennsylvania. Association for Computational Linguistics, Morristown, New Jersey.
Kotz S., Johnson N. L., and Read C. B. (Eds). 1985. Encyclopedia of Statistical Sciences. Vol. 5, Wiley-Interscience, NY.
Rastier F., Cavazza M., and Abeillé A. 1994. Sémantique pour l'analyse. Masson, Paris.
Resnik P. 1993. Selection and Information: A Class-Based Approach to Lexical Relationships. PhD Thesis, University of Pennsylvania.
Zernik U. 1993. Corpus-Based Thematic Analysis. In Jacobs P. S. (Ed.), Text-Based Intelligent Systems. Lawrence Erlbaum, Hillsdale, NJ.
Choosing the Word Most Typical in Context Using a Lexical Co-occurrence Network

Philip Edmonds
Department of Computer Science, University of Toronto
Toronto, Canada, M5S 3G4
pedmonds@cs.toronto.edu

Abstract

This paper presents a partial solution to a component of the problem of lexical choice: choosing the synonym most typical, or expected, in context. We apply a new statistical approach to representing the context of a word through lexical co-occurrence networks. The implementation was trained and evaluated on a large corpus, and results show that the inclusion of second-order co-occurrence relations improves the performance of our implemented lexical choice program.

1 Introduction

Recent work views lexical choice as the process of mapping from a set of concepts (in some representation of knowledge) to a word or phrase (Elhadad, 1992; Stede, 1996). When the same concept admits more than one lexicalization, it is often difficult to choose which of these 'synonyms' is the most appropriate for achieving the desired pragmatic goals, but this is necessary for high-quality machine translation and natural language generation.

Knowledge-based approaches to representing the potentially subtle differences between synonyms have suffered from a serious lexical acquisition bottleneck (DiMarco, Hirst, and Stede, 1993; Hirst, 1995). Statistical approaches, which have sought to explicitly represent differences between pairs of synonyms with respect to their occurrence with other specific words (Church et al., 1994), are inefficient in time and space.

This paper presents a new statistical approach to modeling context that provides a preliminary solution to an important sub-problem, that of determining the near-synonym that is most typical, or expected, if any, in a given context. Although weaker than full lexical choice, because it doesn't choose the 'best' word, we believe that it is a necessary first step, because it would allow one to determine the effects of choosing a non-typical word in place of the typical word. The approach relies on a generalization of lexical co-occurrence that allows for an implicit representation of the differences between two (or more) words with respect to any actual context.

For example, our implemented lexical choice program selects mistake as most typical for the 'gap' in sentence (1), and error in (2).

(1) However, such a move also would run the risk of cutting deeply into U.S. economic growth, which is why some economists think it would be a big {error | mistake | oversight}.

(2) The {error | mistake | oversight} was magnified when the Army failed to charge the standard percentage rate for packing and handling.

2 Generalizing Lexical Co-occurrence

2.1 Evidence-based Models of Context

Evidence-based models represent context as a set of features, say words, that are observed to co-occur with, and thereby predict, a word (Yarowsky, 1992; Golding and Schabes, 1996; Karow and Edelman, 1996; Ng and Lee, 1996). But if we use just the context surrounding a word, we might not be able to build up a representation satisfactory to uncover the subtle differences between synonyms, because of the massive volume of text that would be required.

Now, observe that even though a word might not co-occur significantly with another given word, it might nevertheless predict the use of that word if the two words are mutually related to a third word. That is, we can treat lexical co-occurrence as though it were moderately transitive.
For example, in (3), learn provides evidence for task because it co-occurs (in other contexts) with difficult, which in turn co-occurs with task (in other contexts), even though learn is not seen to co-occur significantly with task.
(3) The team's most urgent task was to learn whether Chernobyl would suggest any safety flaws at KWU-designed plants.
So, by augmenting the contextual representation of a word with such second-order (and higher) co-occurrence relations, we stand to have greater predictive power, assuming that we assign less weight to them in accordance with their lower information content. And as our results will show, this generalization of co-occurrence is necessary.
[Figure 1: A fragment of the lexical co-occurrence network for task. The dashed line is a second-order relation implied by the network.]
We can represent these relations in a lexical co-occurrence network, as in figure 1, that connects lexical items by just their first-order co-occurrence relations. Second-order and higher relations are then implied by transitivity.
2.2 Building Co-occurrence Networks
We build a lexical co-occurrence network as follows: Given a root word, connect it to all the words that significantly co-occur with it in the training corpus;¹ then, recursively connect these words to their significant co-occurring words up to some specified depth.
¹Our training corpus was the part-of-speech-tagged 1989 Wall Street Journal, which consists of N = 2,709,659 tokens. No lemmatization or sense disambiguation was done. Stop words were numbers, symbols, proper nouns, and any token with a raw frequency greater than F = 800.
We use the intersection of two well-known measures of significance, mutual information scores and t-scores (Church et al., 1994), to determine if a (first-order) co-occurrence relation should be included in the network; however, we use just the t-scores in computing significance scores for all the relations. Given two words, w_0 and w_d, in a co-occurrence relation of order d, and a shortest path P(w_0, w_d) = (w_0, ..., w_d) between them, the significance score is
sig(w_0, w_d) = \frac{1}{8^{d-1}} \sum_{w_i \in P(w_0, w_d)} \frac{t(w_{i-1}, w_i)}{i}
This formula ensures that significance is inversely proportional to the order of the relation. For example, in the network of figure 1, sig(task, learn) = [t(task, difficult) + ½ t(difficult, learn)]/8 = 0.41.
A single network can be quite large. For instance, the complete network for task (see figure 1) up to the third-order has 8998 nodes and 37,548 edges.
2.3 Choosing the Most Typical Word
The amount of evidence that a given sentence provides for choosing a candidate word is the sum of the significance scores of each co-occurrence of the candidate with a word in the sentence. So, given a gap in a sentence S, we find the candidate c for the gap that maximizes
M(c, S) = \sum_{w \in S} sig(c, w)
Set  POS  Synonyms (with training corpus frequency)
1    JJ   difficult (352), hard (348), tough (230)
2    NN   error (64), mistake (61), oversight (37)
3    NN   job (418), task (123), duty (48)
4    NN   responsibility (142), commitment (122), obligation (96), burden (81)
5    NN   material (177), stuff (79), substance (45)
6    VB   give (624), provide (501), offer (302)
7    VB   settle (126), resolve (79)
Table 1: The sets of synonyms for our experiment.
For example, given S as sentence (3), above, and the network of figure 1, M(task, S) = 4.40. However, job (using its own network) matches best with a score of 5.52; duty places third with a score of 2.21.
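A minimal sketch of the network construction and scoring just described may help. The t-score function `t` and the network `graph` are assumed to be computed from the training corpus beforehand (not shown), and the 8^(d-1) damping follows our reconstruction of the formula above:

# Sketch of the lexical co-occurrence network and typicality score.
from collections import deque

def shortest_path(graph, w0, wd, max_depth=3):
    """Breadth-first search for a shortest path w0 ... wd."""
    queue = deque([[w0]])
    while queue:
        path = queue.popleft()
        if path[-1] == wd:
            return path
        if len(path) <= max_depth:
            for nxt in graph.get(path[-1], ()):
                if nxt not in path:
                    queue.append(path + [nxt])
    return None

def sig(graph, t, w0, wd):
    """Significance of a co-occurrence relation of order d."""
    path = shortest_path(graph, w0, wd)
    if path is None:
        return 0.0
    d = len(path) - 1
    total = sum(t(path[i - 1], path[i]) / i for i in range(1, d + 1))
    return total / 8 ** (d - 1)

def most_typical(graph, t, candidates, sentence):
    """Return the candidate c maximizing M(c, S) = sum of sig(c, w)."""
    return max(candidates,
               key=lambda c: sum(sig(graph, t, c, w) for w in sentence))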
3 Results and Evaluation
To evaluate the lexical choice program, we selected several sets of near-synonyms, shown in table 1, that have low polysemy in the corpus, and that occur with similar frequencies. This is to reduce the confounding effects of lexical ambiguity.
For each set, we collected all sentences from the yet-unseen 1987 Wall Street Journal (part-of-speech-tagged) that contained any of the members of the set, ignoring word sense. We replaced each occurrence by a 'gap' that the program then had to fill. We compared the 'correctness' of the choices made by our program to the baseline of always choosing the most frequent synonym according to the training corpus.
But what are the 'correct' responses? Ideally, they should be chosen by a credible human informant. But regrettably, we are not in a position to undertake a study of how humans judge typical usage, so we will turn instead to a less ideal source: the authors of the Wall Street Journal. The problem is, of course, that authors aren't always typical. A particular word might occur in a 'pattern' in which another synonym was seen more often, making it the typical choice. Thus, we cannot expect perfect accuracy in this evaluation.
Table 2 shows the results for all seven sets of synonyms under different versions of the program. We varied two parameters: (1) the window size used during the construction of the network: either narrow (±4 words), medium (±10 words), or wide (±50 words); (2) the maximum order of co-occurrence relation allowed: 1, 2, or 3.
Set        1       2       3       4       5       6       7
Size       6665    1030    5402    3138    1828    10204   1568
Baseline   40.1%   33.5%   74.2%   36.6%   62.8%   45.7%   62.2%
Narrow 1   31.3%   18.7%   34.5%   27.7%   28.8%   33.2%   41.3%
Narrow 2   47.2%   44.5%   66.2%   43.9%   61.9%(a) 48.1%  62.8%(a)
Narrow 3   47.9%   48.9%   68.9%   44.3%   64.6%(a) 48.6%  65.9%
Medium 1   24.0%   25.0%   26.4%   29.3%   28.8%   20.6%   44.2%
Medium 2   42.5%   47.1%   55.3%   45.3%   61.5%(a) 44.3%  63.6%(a)
Medium 3   42.5%   47.0%   53.6%   -       -       -       -
Wide 1     9.2%    20.6%   17.5%   20.7%   21.2%   4.1%    26.5%
Wide 2     39.9%(a) 46.2%  47.1%   43.2%   52.7%   37.7%   58.6%
(a) Difference from baseline not significant.
Table 2: Accuracy of several different versions of the lexical choice program. The best score for each set is in boldface. Size refers to the size of the sample collection. All differences from baseline are significant at the 5% level according to Pearson's χ² test, unless indicated.
The results show that at least second-order co-occurrences are necessary to achieve better than baseline accuracy in this task; regular co-occurrence relations are insufficient. This justifies our assumption that we need more than the surrounding context to build adequate contextual representations.
Also, the narrow window gives consistently higher accuracy than the other sizes. This can be explained, perhaps, by the fact that differences between near-synonyms often involve differences in short-distance collocations with neighboring words, e.g., face the task.
There are two reasons why the approach doesn't do as well as an automatic approach ought to. First, as mentioned above, our method of evaluation is not ideal; it may make our results just seem poor. Perhaps our results actually show the level of 'typical usage' in the newspaper. Second, lexical ambiguity is a major problem, affecting both evaluation and the construction of the co-occurrence network. For example, in sentence (3), above, it turns out that the program uses safety as evidence for choosing job (because job safety is a frequent collocation), but this is the wrong sense of job. Syntactic and collocational red herrings can add noise too.
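The gap-filling evaluation itself is simple to state in code. A minimal sketch, assuming `most_typical` from the earlier sketch (or any `choose` function of the same shape); the baseline is obtained by passing a `choose` that ignores the context and always returns the training-corpus-most-frequent synonym:

# Sketch of the evaluation: every occurrence of a synonym is blanked
# out and the program must restore it.
def evaluate(sentences, synset, choose):
    correct = total = 0
    for words in sentences:                  # each sentence is a token list
        for i, w in enumerate(words):
            if w in synset:
                context = words[:i] + words[i + 1:]   # the 'gap'
                correct += (choose(synset, context) == w)
                total += 1
    return correct / total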
4 Conclusion
We introduced the problem of choosing the most typical synonym in context, and gave a solution that relies on a generalization of lexical co-occurrence. The results show that a narrow window of training context (±4 words) works best for this task, and that at least second-order co-occurrence relations are necessary. We are planning to extend the model to account for more structure in the narrow window of context.
Acknowledgements
For comments and advice, I thank Graeme Hirst, Eduard Hovy, and Stephen Green. This work is financially supported by the Natural Sciences and Engineering Research Council of Canada.
References
Church, Kenneth Ward, William Gale, Patrick Hanks, Donald Hindle, and Rosamund Moon. 1994. Lexical substitutability. In B.T.S. Atkins and A. Zampolli, editors, Computational Approaches to the Lexicon. Oxford University Press, pages 153-177.
DiMarco, Chrysanne, Graeme Hirst, and Manfred Stede. 1993. The semantic and stylistic differentiation of synonyms and near-synonyms. In AAAI Spring Symposium on Building Lexicons for Machine Translation, pages 114-121, Stanford, CA, March.
Elhadad, Michael. 1992. Using Argumentation to Control Lexical Choice: A Functional Unification Implementation. Ph.D. thesis, Columbia University.
Golding, Andrew R. and Yves Schabes. 1996. Combining trigram-based and feature-based methods for context-sensitive spelling correction. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.
Hirst, Graeme. 1995. Near-synonymy and the structure of lexical knowledge. In AAAI Symposium on Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity, and Generativity, pages 51-56, Stanford, CA, March.
Karov, Yael and Shimon Edelman. 1996. Learning similarity-based word sense disambiguation from sparse data. In Proceedings of the Fourth Workshop on Very Large Corpora, Copenhagen, August.
Ng, Hwee Tou and Hian Beng Lee. 1996. Integrating multiple sources to disambiguate word sense: An exemplar-based approach. In Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.
Stede, Manfred. 1996. Lexical Semantics and Knowledge Representation in Multilingual Sentence Generation. Ph.D. thesis, University of Toronto.
Yarowsky, David. 1992. Word-sense disambiguation using statistical models of Roget's categories trained on large corpora. In Proceedings of the 14th International Conference on Computational Linguistics (COLING-92), pages 454-460.
Improving Translation through Contextual Information
Maite Taboada*
Carnegie Mellon University
5000 Forbes Avenue
Pittsburgh, PA 15213
taboada+@cmu.edu
Abstract
This paper proposes a two-layered model of dialogue structure for task-oriented dialogues that processes contextual information and disambiguates speech acts. The final goal is to improve translation quality in a speech-to-speech translation system.
1 Ambiguity in Speech Translation
For any given utterance out of what we can loosely call context, there is usually more than one possible interpretation. A speaker's utterance of an elliptical expression, like the figure "twelve fifteen", might have a different meaning depending on the context of situation, the way the conversation has evolved until that point, and the previous speaker's utterance. "Twelve fifteen" could be the time "a quarter after twelve", the price "one thousand two hundred and fifteen", the room number "one two one five", and so on. Although English can conflate all those possible meanings into one expression, the translation into other languages usually requires more specificity. If this is a problem for any human listener, the problem grows considerably when it is a parser doing the disambiguation.
In this paper, I explain how we can use discourse knowledge in order to help a parser disambiguate among different possible parses for an input sentence, with the final goal of improving the translation in an end-to-end speech translation system. The work described was conducted within the JANUS multi-lingual speech-to-speech translation system designed to translate spontaneous dialogue in a limited domain (Lavie et al., 1996). The machine translation component of JANUS handles these problems using two different approaches: the Generalized Left-to-Right parser GLR* (Lavie and Tomita, 1993) and Phoenix, the latter being the focus of this paper.
*The author gratefully acknowledges support from the "la Caixa" Fellowship Program, ATR Interpreting Laboratories, and Project Enthusiast.
2 Disambiguation through Contextual Information
This project addresses the problem of choosing the most appropriate semantic parse for any given input. The approach is to combine discourse information with the set of possible parses provided by the Phoenix parser for an input string. The discourse module selects one of these possibilities. The decision is to be based on:
1. The domain of the dialogue. JANUS deals with dialogues restricted to a domain, such as scheduling an appointment or making travel arrangements. The general topic provides some information about what types of exchanges, and therefore speech acts, can be expected.
2. The macro-structure of the dialogue up to that point. We can divide a dialogue into smaller, self-contained units that provide information on what phases are over or yet to be covered: Are we past the greeting phase? If a flight was reserved, should we expect a payment phase at some point in the rest of the conversation?
3. The structure of adjacency pairs (Schegloff and Sacks, 1973), together with the responses to speech functions (Halliday, 1994; Martin, 1992). If one speaker has uttered a request for information, we expect some sort of response to that -- an answer, a disclaimer or a clarification.
The domain of the dialogues, named the travel planning domain, consists of dialogues where a customer makes travel arrangements with a travel agent or a hotel clerk to book hotel rooms, flights or other forms of transportation.
They are task-oriented dialogues, in which the speakers have specific goals of carrying out a task that involves the exchange of both information and services.
Discourse processing is structured in two different levels: the context module keeps a global history of the conversation, from which it will be able to estimate, for instance, the likelihood of a greeting once the opening phase of the conversation is over. A more local history predicts the expected response in any adjacency pair, such as a question-answer sequence. The model adopted here is that of a two-layered finite state machine (henceforth FSM), and the approach is that of late-stage disambiguation, where as much information as possible is collected before proceeding on to disambiguation, rather than restricting the parser's search earlier on.
3 Representation of Speech Acts in Phoenix
Writing the appropriate grammars and deciding on the set of speech acts for this domain is also an important part of this project. The selected speech acts are encoded in the grammar -- in the Phoenix case, a semantic grammar -- the tokens of which are concepts that the segment in question represents. Any utterance is divided into SDUs -- Semantic Dialogue Units -- which are fed to the parser one at a time. SDUs represent a full concept, expression, or thought, but not necessarily a complete grammatical sentence. Let us take an example input, and a possible parse for it:
(1) Could you tell me the prices at the Holiday Inn?
([request]
  (COULD YOU
    ([request-info]
      (TELL ME
        ([price-info]
          (THE PRICES
            ([establishment]
              (AT THE
                ([establishment-name]
                  (HOLIDAY INN))))))))))
The top-level concepts of the grammar are speech acts themselves, the ones immediately after are further refinements of the speech act, and the lower level concepts capture the specifics of the utterance, such as the name of the hotel in the above example.
4 The Discourse Processor
The discourse module processes the global and local structure of the dialogue in two different layers. The first one is a general organization of the dialogue's subparts; the layer under that processes the possible sequence of speech acts in a subpart. The assumption is that negotiation dialogues develop in a predictable way -- this assumption was also made for scheduling dialogues in the Verbmobil project (Maier, 1996) -- with three clear phases: initialization, negotiation, and closing. We will call the middle phase in our dialogues the task performance phase, since it is not always a negotiation per se. Within the task performance phase very many subdialogues can take place, such as information-seeking, decision-making, payment, clarification, etc.
Discourse processing has frequently made use of sequences of speech acts as they occur in the dialogue, through bigram probabilities of occurrences, or through modelling in a finite state machine (Maier, 1996; Reithinger et al., 1996; Iida and Yamaoka, 1990; Qu et al., 1996). However, taking into account only the speech act of the previous segment might leave us with insufficient information to decide, as is the case in some elliptical utterances which do not follow a strict adjacency pair sequence:
[Figure 1: The Discourse Module]
(2) (talking about flight times...)
S1: I can give you the arrival time. Do you have that information already?
S2: No, I don't.
S1: It's twelve fifteen.
If we are parsing the segment "It's twelve fifteen", and our only source of information is the previous segment, "No, I don't", we cannot possibly find the referent for "twelve fifteen", unless we know we are in a subdialogue discussing flight times, and arrival times have been previously mentioned.
Our approach aims at obtaining information both from the subdialogue structure and the speech act sequence by modelling the global structure of the dialogue with a FSM, with opening and closing as initial and final states, and other possible subdialogues in the intervening states. Each one of those states contains a FSM itself, which determines the allowed speech acts in a given subdialogue and their sequence. For a picture of the discourse component here proposed, see Figure 1.
Let us look at another example where the use of information on the previous context and on the speaker alternance will help choose the most appropriate parse and thus achieve a better translation. The expression "okay" can be a prompt for an answer (3), an acceptance of a previous offer (4), or a backchanneling element, i.e., an acknowledgement that the previous speaker's utterance has been understood (5).
(3) S1: So we'll switch you to a double room, okay?
(4) S1: So we'll switch you to a double room.
    S2: Okay.
(5) S1: The double room is $90 a night.
    S2: Okay, and how much is a single room?
In example (3), we will know that "okay" is a prompt, because it is uttered by the speaker after he or she has made a suggestion. In example (4), it will be an acceptance because it is uttered after the previous speaker's suggestion. And in (5) it is an acknowledgment of the information provided. The correct assignment of speech acts will provide a more accurate translation into other languages.
To summarize, the two-layered FSM models a conversation through transitions of speech acts that are included in subdialogues. When the parser returns an ambiguity in the form of two or more possible speech acts, the FSM will help decide which one is the most appropriate given the context.
There are situations where the path followed in the two layers of the structure does not match the parse possibility we are trying to accept or reject. One such situation is the presence of clarification and correction subdialogues at any point in the conversation. In that case, the processor will try to jump to the upper layer, in order to switch the subdialogue under consideration. We also take into account the situation where there is no possible choice, either because the FSM does not restrict the choice -- i.e., the FSM allows all the parses returned by the parser -- or because the model does not allow any of them. In either of those cases, the transition is determined by unigram probabilities of the speech act in isolation, and bigrams of the combination of the speech act we are trying to disambiguate plus its predecessor.
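A minimal sketch of how such a two-layered model might be realized follows. The subdialogue and speech-act inventories below are invented for illustration and are not the ones used in JANUS; the fallback mirrors the unigram/bigram strategy just described:

# Sketch of the two-layered dialogue model: an outer FSM over
# subdialogues, each state holding an inner FSM over speech acts.
# The inventories and transitions here are illustrative assumptions.
SUBDIALOGUES = {
    "opening": {"next": ["task"],
                "acts": {"greeting": ["greeting", "request-info"]}},
    "task":    {"next": ["task", "closing"],
                "acts": {"request-info": ["give-info"],
                         "give-info": ["accept", "request-info"],
                         "suggest": ["accept", "reject"]}},
    "closing": {"next": [],
                "acts": {"thank": ["goodbye"]}},
}

def filter_parses(subdialogue, prev_act, candidate_acts):
    """Keep only the candidate speech acts the inner FSM allows after
    prev_act; fall back to all candidates (to be ranked by unigram and
    bigram probabilities) when the model rejects all of them or
    restricts nothing."""
    allowed = set(SUBDIALOGUES[subdialogue]["acts"].get(prev_act, []))
    kept = [a for a in candidate_acts if a in allowed]
    return kept if 0 < len(kept) < len(candidate_acts) else candidate_acts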
5 Evaluation
The discourse module is being developed on a set of 29 dialogues, totalling 1,393 utterances. An evaluation will be performed on 10 dialogues, previously unseen by the discourse module. Since the module can be either incorporated into the system or turned off, the evaluation will be of the system's performance with and without the discourse module. Independent graders assign a grade to the quality of the translation¹. A secondary evaluation will be based on the quality of the speech act disambiguation itself, regardless of its contribution to translation quality.
¹The final results of this evaluation will be available at the time of the ACL conference.
6 Conclusion and Future Work
In this paper I have presented a model of dialogue structure in two layers, which processes the sequence of subdialogues and speech acts in task-oriented dialogues in order to select the most appropriate from the ambiguous parses returned by the Phoenix parser. The model structures dialogue in two levels of finite state machines, with the final goal of improving translation quality.
A possible extension to the work here described would be to generalize the two-layer model to other, less homogeneous domains. The use of statistical information in different parts of the processing, such as the arcs of the FSM, could enhance performance.
References
Michael A. K. Halliday. 1994. An Introduction to Functional Grammar. Edward Arnold, London (2nd edition).
Hitoshi Iida and Takayuki Yamaoka. 1990. Dialogue Structure Analysis Method and Its Application to Predicting the Next Utterance. Dialogue Structure Analysis. German-Japanese Workshop, Kyoto, Japan.
Alon Lavie, Donna Gates, Marsal Gavaldà, Laura Mayfield, Alex Waibel, Lori Levin. 1996. Multi-lingual Translation of Spontaneously Spoken Language in a Limited Domain. In Proceedings of COLING 96, Copenhagen.
Alon Lavie and Masaru Tomita. 1993. GLR*: An Efficient Noise Skipping Parsing Algorithm for Context Free Grammars. In Proceedings of the Third International Workshop on Parsing Technologies, IWPT 93, Tilburg, The Netherlands.
Elisabeth Maier. 1996. Context Construction as Subtask of Dialogue Processing: The Verbmobil Case. In Proceedings of the Eleventh Twente Workshop on Language Technology, TWLT 11.
James Martin. 1992. English Text: System and Structure. John Benjamins, Philadelphia/Amsterdam.
Yan Qu, Barbara Di Eugenio, Alon Lavie, Lori Levin. 1996. Minimizing Cumulative Error in Discourse Context. In Proceedings of ECAI 96, Budapest, Hungary.
Norbert Reithinger, Ralf Engel, Michael Kipp, Martin Klesen. 1996. Predicting Dialogue Acts for a Speech-to-Speech Translation System. In Proceedings of ICSLP 96, Philadelphia, USA.
Emmanuel Schegloff and Harvey Sacks. 1973. Opening up Closings. Semiotica 7, pages 289-327.
Wayne Ward. 1991. Understanding Spontaneous Speech: the Phoenix System. In Proceedings of ICASSP 91.
Generative Power of CCGs with Generalized Type-Raised Categories
Nobo Komagata
Department of Computer and Information Science
University of Pennsylvania
Philadelphia, PA 19104
komagata@linc.cis.upenn.edu
Abstract
This paper shows that a class of Combinatory Categorial Grammars (CCGs) augmented with a linguistically-motivated form of type raising involving variables is weakly equivalent to the standard CCGs not involving variables. The proof is based on the idea that any instance of such a grammar can be simulated by a standard CCG.
1 Introduction
The class of Combinatory Categorial Grammars (CCG-Std) was proved to be weakly equivalent to Linear Index Grammars and Tree Adjoining Grammars (Joshi, Vijay-Shanker, and Weir, 1991; Vijay-Shanker and Weir, 1994). But CCG-Std cannot handle the generalization of type raising that has been used in accounting for various linguistic phenomena including: coordination and extraction (Steedman, 1985; Dowty, 1988; Steedman, 1996), prosody (Prevost and Steedman, 1993), and quantifier scope (Park, 1995). Intuitively, all of these phenomena call for a non-traditional, more flexible notion of constituency capable of representing surface structures including "(Subj V) (Obj)" in English. Although lexical type raising involving variables can be introduced to derive such a constituent¹, unconstrained use of variables can increase the power. For example, a grammar involving (T\z)/(T\v) can generate a language A^n B^n C^n D^n E^n which CCG-Std cannot (Hoffman, 1993).
This paper argues that there is a class of grammars which allows the use of a linguistically-motivated form of type raising involving variables while it is still weakly equivalent to CCG-Std. A class of grammars, CCG-GTRC, is introduced in the next section as an extension to CCG-Std. Then we show that CCG-GTRC can actually be simulated by a CCG-Std, proving the equivalence.
*Thanks to Mark Steedman, Beryl Hoffman, Anoop Sarkar, and the reviewers. The research was supported in part by NSF Grant Nos. IRI95-04372, STC-SBR-8920230, ARPA Grant No. N66001-94-C6043, and ARO Grant No. DAAH04-94-G0426.
¹Our lexical rules to introduce type raising are non-recursive and thus do not suffer from the problem of the overgeneration discussed in (Carpenter, 1991).
2 CCGs with Generalized Type-Raised Categories
In languages like Japanese, multiple NPs can easily form a non-traditional constituent as in "[(Subj1 Obj1) & (Subj2 Obj2)] Verb". The proposed grammars (CCG-GTRC) admit lexical type-raised categories (LTRC) of the form T/(T\a) or T\(T/a), where T is a variable over categories and a is a constant category (Const).² Then, composition of LTRCs can give rise to a class of categories having the form T/(T\a_n ... \a_1) or T\(T/a_n ... /a_1), representing a multiple-NP constituent exemplified by "Subj1 Obj1". We call these categories generalized type-raised categories (GTRC) and each a_i of a GTRC an argument (of the GTRC).
²Categories are in the "result-leftmost" representation and associate left. Thus a/b/c should be read as (a/b)/c and returns a/b when an argument c is applied to its right. A, ..., Z stand for nonterminals and a, ..., z for complex, constant categories.
The introduction of GTRCs affects the use of combinatory rules: functional application ">: x/y + y → x" and generalized functional composition ">B^k(×): x/y + y|z_1 ... |z_k → x|z_1 ... |z_k", where k is bounded by a grammar-dependent k_max as in CCG-Std.³
³There are also backward rules (<) that are analogous to forward rules (>). Crossing rules where z_1 is found in the direction opposite to that of y are labelled with '×'. 'k' represents the number of arguments being passed. '|' stands for a directional meta-variable for {/, \}.
This paper assumes two constraints defined for the grammars and one condition stipulated to control the formal properties. The following order-preserving constraint, which follows more primitive directionality features (Steedman, 1991), limits the directions of the slashes in GTRCs:
(1) In a GTRC T|_0 (T|_n a_n ... |_1 a_1),
the direction of |_0 must be the opposite of any of |_n, ..., |_1.
This prohibits functional composition '>B×' on 'GTRC+GTRC' pairs, so that "T/(T\A\B) + U\(U/C/D)" does not result in T\(T\A\B/C/D) or U/(U/C/D\A\B). That is, no movement of arguments across the functor is allowed. The variable constraint states that:
(2) Variables are limited to the defined positions in GTRCs.
This prohibits '>B^k(×)' with k > 1 on the pair 'Const+GTRC'. For example, '>B²' on "A/B + T/(T\C)" cannot realize the unification of the form "A/B + T1|T2/(T1|T2\C)" (with T = T1|T2) resulting in "A|T2/(B|T2\C)".
In order to assure the expected generative capacity, we place a condition on the use of rules. The condition can be viewed in a way comparable to those on rewriting rules to define, say, context-free grammars. The bounded argument condition ensures that every argument category is bounded as follows:
(3) '>B(×)' should not apply to the pair 'Const+GTRC'.
For example, this prohibits "A/B + T/(T\C_k ... \C_1) → A/(B\C_k ... \C_1)", where the underlined argument can be unboundedly large.
These constraints and condition also tell us how we can implement a CCG-GTRC system without overgeneration. The possible cases of combinatory rule application are summarized as follows:
(4) a. For 'Const+Const', the same rules as in CCG-Std are applicable.
b. For 'GTRC+Const', the applicable rules are:
(i) >: e.g., "T/(T\A\B) + S\A\B → S"
(ii) >B^k(×): e.g., "T/(T\A\B) + S\A\B\C/D → S\C/D"
c. For 'Const+GTRC', only '>' is possible: e.g., "S/(S/(S\B)) + T/(T\B) → S"
d. For 'GTRC+GTRC', the possibilities are:
(i) >: e.g., "T/(T\(S/A/B)) + T\(T/A/B) → S"
(ii) >B: e.g., "T/(T\A\B) + T/(T\C\D) → T/(T\A\B\C\D)"
CCG-GTRC is defined below, where G_std and G_gtrc represent the classes of the instances of CCG-Std and CCG-GTRC, respectively:
Definition 1 G_gtrc is the collection of G''s (extensions of a G ∈ G_std) such that:
1. For the lexical function f of G (from terminals to sets of categories), if a ∈ f(a), f' may additionally include {(a, T/(T\a)), (a, T\(T/a))}.
2. G' may include the rule schemata in (4).
The main claim of the paper is the following:
Proposition 1 G_gtrc is weakly equivalent with G_std.
We show the non-trivial direction: for any G' ∈ G_gtrc, there is a G'' ∈ G_std such that L(G') = L(G''). As G' corresponds to a unique G ∈ G_std, we extend G'' from G to simulate G', then show that the languages are exactly the same.
3 Simulation of CCG-GTRC
Consider a fragment of CCG-GTRC with a lexical function f such that f(a) = {A, T/(T\A)}, f(b) = {B, T/(T\B)}, f(c) = {S\A\B}. This fragment can generate the following two permutations:
(5) a. a b c: T/(T\A) + T/(T\B) + S\A\B; > yields S\A, then > yields S.
b. b a c: T/(T\B) + T/(T\A) + S\A\B; >B× yields S\B, then > yields S.
Notice that (5b) cannot be generated by the original CCG-Std where the lexicon does not involve GTRCs. In order to (statically) simulate (5b) by a CCG-Std, we add S\B\A to the value of f''(c) in the lexicon of G''.
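To make the fragment concrete, here is a small sketch that checks the two derivations in (5) mechanically. The representation and the two forward rules are toy versions written for this illustration; they are not the construction used in the simulation proof:

# Toy check of the derivations in (5). A category is an atom ("A",
# "B", "S") or a triple (result, slash, argument); "T" is the GTRC
# variable. Only the two forward rules used in (5) are implemented.
def subst(cat, value):
    """Instantiate the variable T inside cat with value."""
    if cat == "T":
        return value
    if isinstance(cat, tuple):
        res, slash, arg = cat
        return (subst(res, value), slash, subst(arg, value))
    return cat

def fapply(left, right):
    """> : X/Y + Y -> X, where Y may be T\\a with T a variable."""
    if not (isinstance(left, tuple) and left[1] == "/"):
        return None
    res, _, arg = left
    if arg == right:
        return res
    if (isinstance(arg, tuple) and arg[0] == "T"
            and isinstance(right, tuple) and right[1:] == arg[1:]):
        return subst(res, right[0])   # bind T to the result of `right`
    return None

def fcompose_x(left, right):
    """>Bx : T/(T\\a) + (X\\a)\\b -> X\\b (one argument passed)."""
    if isinstance(right, tuple):
        head = fapply(left, right[0])
        if head is not None:
            return (head, right[1], right[2])
    return None

A, B, S = "A", "B", "S"
tr_a = ("T", "/", ("T", "\\", A))   # type-raised a
tr_b = ("T", "/", ("T", "\\", B))   # type-raised b
c = ((S, "\\", A), "\\", B)         # the verb-like category S\A\B

assert fapply(tr_a, fapply(tr_b, c)) == S       # (5a): a b c
assert fapply(tr_b, fcompose_x(tr_a, c)) == S   # (5b): b a c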
Let us call this type of relation between the original S\A\B and the S\B\A wrapping, due to its resemblance to the new operation of the same name in (Bach, 1979).
There are two potential problems with this simple augmentation. First, wrapping may affect unboundedly long chunks of categories, as exemplified in (6). Second, the simulation may overgenerate. We discuss these issues in turn.
(6) "T/(T\A) + T/(T\B) + ... + T/(T\A) + T/(T\B) + S\A\B...\A\B\C → S\C"
We need S\C\A\B...\A\B, which can be the result of unboundedly long compositions, to simulate (6) without depending on the GTRCs. Intuitively, this situation is analogous to long-distance movement of C from the position left of S\A\B...\C to the sentence-initial position. In order to deal with the first problem, the following key properties of CCG-GTRC must be observed:
(7) a. Any derived category is a combination of lexical categories. For example, S\A\B\A\B...\A\B\C may be derived from "S\A\B\C + ... + S\A\B\S + S\A\B\S" by '<B'.
b. Wrapping can occur only when GTRCs are involved in the use of '>B^k×' and can only cross at most k_max arguments. Since there are only finitely many argument categories, the argument(s) being passed can be encoded in a finite store.
For derivable categories bounded by the maximum number of arguments of a lexical category, we add all the instances of wrapping required for simulating the effect of GTRC into the lexicon of G''. For the unbounded case, we extend the lexicon as in the following example:
(8) a. For a category S\A\B\C, add S{\c}\A\B to the lexicon.
b. For S\A\B\S, add S{\c}\A\B\S{\c}, S\A\B\C\S{\c}, ..., S\C\S{\c}.
S{\c} is a new category representing the situation where \C is being passed across categories. Thus \C, which originated in S\A\B\C in (a), may be passed onto another category in (b), after a possibly unbounded number of compositions, as follows:
(9) S{\c}\A\B + S{\c}\A\B\S{\c} + ... + S\C\S{\c} → S\C\A\B...\A\B\A\B
Now, both of the permutations in (5) can be derived in this extension of CCG-Std. The finite lexicon with finite extension assures the termination of the process. This covers the case (4bii). Case (4c) can be characterized by a general pattern "c/(b/(b\a_k...\a_1)) + T/(T\a_k...\a_1) → c" where T = b. Since any argument category is bounded, we can add b/(b\a_k...\a_1) ∈ f'(a_1...a_k) in the lexicon as an idiom. The other cases do not require simulation, as the same string can be derived in the original grammar.
The second problem of overgeneration calls for another step. Suppose that the lexicon includes f(c) = {S\A\B}, f(d) = {S\B\A}, and f(e) = {E\(S\B\A)}, and that S\B\A is added to f(c) by wrapping. To avoid generating an illegal string "c e" (in addition to the legal "d e"), we label the state of wrapping as S\B[+wrap]\A[+wrap]. The original entries can be labelled as S\B[-wrap]\A[-wrap] and E\(S\B[-wrap]\A[-wrap]). The lexical, argument categories, e.g., A, are underspecified with respect to the feature. Since finite features can be folded into a category, this can be written as a CCG-Std without features.
4 Equivalence of the Two Languages
Proposition 1 can be proved by the following lemma (as a special case where c = S):
Lemma 1 For any G' ∈ G_gtrc (an extension of G), there is a G'' ∈ G_std such that a string w is derivable from a constant category c in G' iff w is derivable from c in G''.
The sketch of the proof goes as follows. First, we construct G'' from G' as in the previous section.
Both directions of the lemma can be proved by induction on the height of derivation. Consider the direction of '→'. The base (lexical) case holds by definition of the grammars. For the induction step, we consider each case of rule application in (4). Case (4a) allows direct application of the induction hypothesis for the substructure of smaller height starting with a constant category. Other cases involve GTRC and require sublemmas which can be proved by induction on the length of the GTRC. Cases (4bi, di) have a differently-branching derivation in G'' but can be derived without simulation. Cases (4bii, c) depend on the simulation of the previous section. Case (4dii) only appears in sublemmas, as the result category is a GTRC. In each sublemma, the induction hypothesis of Lemma 1 is applied (mutually recursively) to handle the derivations of the smaller substructures from a constant category.
A similar proof is applicable to the other direction. The special cases in this direction involve the feature [+wrap] and/or the new categories of the form 'x{...}' which record the argument(s) being passed. As before, we need sublemmas to handle each case. The proof of the sublemma involving the 'x{...}' form can be done by induction on the length of the category.
5 Conclusion
We have shown that CCG-GTRC as formulated above is weakly equivalent to CCG-Std. The results support the use of type raising involving variables in accounting for various linguistic phenomena. Other related results to be reported in the future include: (i) an extension of the polynomial parsing algorithm of (Vijay-Shanker and Weir, 1990) for CCG-Std to CCG-GTRC (Komagata, 1997), (ii) application to a Japanese parser which is capable of handling non-traditional constituents and information structure (roughly, topic/focus structure). An extension of the formalism is also being studied, to include lexical type raising of the form T/(T\c)|d_1...|d_n for English prepositions/articles and Japanese particles.
References
Bach, Emmon. 1979. Control in Montague grammar. Linguistic Inquiry, 10.
Carpenter, Bob. 1991. The generative power of Categorial Grammars and Head-driven Phrase Structure Grammars with lexical rules. Computational Linguistics, 17.
Dowty, David. 1988. Type raising, functional composition, and non-constituent conjunction. In Richard Oehrle et al., editors, Categorial Grammars and Natural Language Structures. D. Reidel.
Hoffman, Beryl. 1993. The formal consequences of using variables in CCG categories. In ACL31.
Joshi, Aravind, K. Vijay-Shanker, and David Weir. 1991. The convergence of mildly context-sensitive grammatical formalisms. In Peter Sells et al., editors, Foundational Issues in Natural Language Processing. MIT Press, pages 31-81.
Komagata, Nobo. 1997. Efficient parsing of CCGs with generalized type-raised categories. Ms. University of Pennsylvania.
Park, Jong C. 1995. Quantifier scope and constituency. In ACL33.
Prevost, Scott and Mark Steedman. 1993. Generating contextually appropriate intonation. In EACL6.
Steedman, Mark J. 1985. Dependency and coordination in the grammar of Dutch and English. Language, 61:523-568.
Steedman, Mark. 1991. Type-raising and directionality in Combinatory Grammar. In ACL29.
Steedman, Mark. 1996. Surface Structure and Interpretation. MIT Press.
Vijay-Shanker, K. and David J. Weir. 1990. Polynomial time parsing of Combinatory Categorial Grammars. In ACL28.
Vijay-Shanker, K. and D. J. Weir. 1994. The equivalence of four extensions of context-free grammars.
Mathematical Systems Theory, 27:511-546.
Combining Unsupervised Lexical Knowledge Methods for Word Sense Disambiguation*
German Rigau, Jordi Atserias
Dept. de Llenguatges i Sist. Informàtics
Universitat Politècnica de Catalunya
Barcelona, Catalonia
{g.rigau,batalla}@lsi.upc.es
Eneko Agirre
Lengoaia eta Sist. Informatikoak saila
Euskal Herriko Unibertsitatea
Donostia, Basque Country
jibagbee@si.ehu.es
Abstract
This paper presents a method to combine a set of unsupervised algorithms that can accurately disambiguate word senses in a large, completely untagged corpus. Although most of the techniques for word sense resolution have been presented as stand-alone, it is our belief that full-fledged lexical ambiguity resolution should combine several information sources and techniques. The set of techniques has been applied in a combined way to disambiguate the genus terms of two machine-readable dictionaries (MRD), enabling us to construct complete taxonomies for Spanish and French. Tested accuracy is above 80% overall and 95% for two-way ambiguous genus terms, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.
*This research has been partially funded by CICYT TIC96-1243-C03-02 (ITEM project) and the European Commission LE-4003 (EuroWordNet project).
1 Introduction
While in English the "lexical bottleneck" problem (Briscoe, 1991) seems to be softened (e.g. WordNet (Miller, 1990), Alvey Lexicon (Grover et al., 1993), COMLEX (Grishman et al., 1994), etc.), there are no wide-range lexicons for natural language processing (NLP) available for other languages. Manual construction of lexicons is the most reliable technique for obtaining structured lexicons, but it is costly and highly time-consuming. This is the reason why many researchers have focused on the massive acquisition of lexical knowledge and semantic information from pre-existing structured lexical resources, as automatically as possible.
As dictionaries are special texts whose subject matter is a language (or a pair of languages in the case of bilingual dictionaries), they provide a wide range of information about words by giving definitions of senses of words and, in doing that, supplying knowledge not just about language, but about the world itself.
One of the most important relations to be extracted from machine-readable dictionaries (MRDs) is the hyponym/hypernym relation among dictionary senses (e.g. (Amsler, 1981), (Vossen and Serail, 1990)), not only because of its own importance as the backbone of taxonomies, but also because this relation acts as the support of the main inheritance mechanisms, helping, thus, the acquisition of other relations and semantic features (Cohen and Loiselle, 1988), providing formal structure and avoiding redundancy in the lexicon (Briscoe et al., 1990). For instance, following the natural chain of dictionary senses described in the Diccionario General Ilustrado de la Lengua Española (DGILE, 1987) we can discover that a bonsai is a cultivated plant or bush.
bonsai_1_2 planta y arbusto así cultivado. (bonsai, plant and bush cultivated in that way)
The hyponym/hypernym relation appears between the entry word (e.g. bonsai) and the genus term, or the core of the phrase (e.g. planta and arbusto). Thus, usually a dictionary definition is written to employ a genus term combined with differentia which distinguishes the word being defined from other words with the same genus term¹.
As lexical ambiguity pervades language in texts, the words used in dictionaries are themselves lexically ambiguous. Thus, when constructing complete disambiguated taxonomies, the correct dictionary sense of the genus term must be selected in each dictionary definition, performing what is usually called Word Sense Disambiguation (WSD)². In the previous example planta has thirteen senses and arbusto only one.
¹For other kinds of definition patterns not based on a genus, a genus-like term was added after studying those patterns.
²Called also Lexical Ambiguity Resolution, Word Sense Discrimination, Word Sense Selection or Word Sense Identification.
                               DGILE                  LPPL
                               overall     nouns      overall   nouns
headwords                      93,484      53,799     15,953    10,506
senses                         168,779     93,275     22,899    13,740
total number of words          1,227,380   903,163    97,778    66,323
average length of definition   7.26        9.68       3.27      3.82
Table 1: Dictionary Data
Although a large set of dictionaries has been exploited as lexical resources, the most widely used monolingual MRD for NLP is LDOCE, which was designed for learners of English. It is clear that different dictionaries do not contain the same explicit information. The information placed in LDOCE has allowed other implicit information, e.g. taxonomies (Bruce et al., 1992), to be extracted easily. Does this mean that only highly structured dictionaries like LDOCE are suitable to be exploited to provide lexical resources for NLP systems?
We explored this question probing two disparate dictionaries: Diccionario General Ilustrado de la Lengua Española (DGILE, 1987) for Spanish, and Le Plus Petit Larousse (LPPL, 1980) for French. Both are substantially poorer in coded information than LDOCE (LDOCE, 1987)³. These dictionaries are very different in number of headwords, polysemy degree, size and length of definitions (c.f. table 1). While DGILE is a good example of a large-sized dictionary, LPPL shows to what extent the smallest dictionary is useful.
³In LDOCE, dictionary senses are explicitly ordered by frequency, 86% of dictionary senses have semantic codes and 44% of dictionary senses have pragmatic codes.
Even if most of the techniques for WSD are presented as stand-alone, it is our belief, following the ideas of (McRoy, 1992), that full-fledged lexical ambiguity resolution should combine several information sources and techniques. This work does not address all the heuristics cited in her paper, but profits from techniques that were at hand, without any claim of them being complete. In fact we use unsupervised techniques, i.e. those that do not require hand-coding of any kind, that draw knowledge from a variety of sources - the source dictionaries, bilingual dictionaries and WordNet - in diverse ways.
This paper tries to prove that, using an appropriate method to combine those heuristics, we can disambiguate the genus terms with reasonable precision, and thus construct complete taxonomies from any conventional dictionary in any language.
This paper is organized as follows. After this short introduction, section 2 shows the methods we have applied. Section 3 describes the test sets and shows the results. Section 4 explains the construction of the lexical knowledge resources used. Section 5 discusses previous work, and finally, section 6 presents some conclusions and comments on future work.
2 Heuristics for Genus Sense Disambiguation
As the methods described in this paper have been developed for being applied in a combined way, each one must be seen as a container of some part of the knowledge (or heuristic) needed to disambiguate the correct hypernym sense. Not all the heuristics are suitable to be applied to all definitions. For combining the heuristics, each heuristic assigns each candidate hypernym sense a normalized weight, i.e. a real number ranging from 0 to 1 (after a scaling process, where the maximum score is assigned 1, c.f. section 2.9). The heuristics applied range from the simplest (e.g. heuristics 1, 2, 3 and 4) to the most informed ones (e.g. heuristics 5, 6, 7 and 8), and use information present in the entries under study (e.g. heuristics 1, 2, 3 and 4), or extracted from the whole dictionary as a unique lexical knowledge resource (e.g. heuristics 5 and 6), or combining lexical knowledge from several heterogeneous lexical resources (e.g. heuristics 7 and 8).
2.1 Heuristic 1: Monosemous Genus Term
This heuristic is applied when the genus term is monosemous. As there is only one hypernym sense candidate, the hyponym sense is attached to it. Only 12% of noun dictionary senses have monosemous genus terms in DGILE, whereas the smaller LPPL reaches 40%.
2.2 Heuristic 2: Entry Sense Ordering
This heuristic assumes that senses are ordered in an entry by frequency of usage. That is, the most used and important senses are placed in the entry before less frequent or less important ones. This heuristic provides the maximum score to the first sense of the hypernym candidates and decreasing scores to the others.
2.3 Heuristic 3: Explicit Semantic Domain
This heuristic assigns the maximum score to the hypernym sense which has the same semantic domain tag as the hyponym. This heuristic is of limited application: LPPL lacks semantic tags, and less than 10% of the definitions in DGILE are marked with one of the 96 different semantic domain tags (e.g. med. for medicine, or der. for law, etc.).
2.4 Heuristic 4: Word Matching
This heuristic trusts that related concepts will be expressed using the same content words. Given two definitions - that of the hyponym and that of one candidate hypernym - this heuristic computes the total amount of content words shared (including headwords). Due to the morphological productivity of Spanish and French, we have considered different variants of this heuristic. For LPPL the match among lemmas proved most useful, while DGILE yielded better results when matching the first four characters of words.
2.5 Heuristic 5: Simple Cooccurrence
This heuristic uses cooccurrence data collected from the whole dictionary (see section 4.1 for more details). Thus, given a hyponym definition (O) and a set of candidate hypernym definitions, this method selects the candidate hypernym definition (E) which returns the maximum score given by formula (1):
SC(O, E) = \sum_{w_i \in O, w_j \in E} cw(w_i, w_j)    (1)
The cooccurrence weight (cw) between two words can be given by Cooccurrence Frequency, Mutual Information (Church and Hanks, 1990) or Association Ratio (Resnik, 1992). We tested them using different context window sizes. Best results were obtained in both dictionaries using the Association Ratio. In DGILE window size 7 proved the most suitable, whereas in LPPL whole definitions were used.
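A minimal sketch of heuristic 5 follows. The counts `cooc`, `freq` and `N` are assumed to be gathered from the whole dictionary beforehand (not shown), and the window handling is omitted:

import math

# Sketch of heuristic 5: cw(wi, wj) as the Association Ratio estimated
# from co-occurrence counts collected over all definitions.
def association_ratio(wi, wj, cooc, freq, N):
    p_ij = cooc.get((wi, wj), 0) / N
    if p_ij == 0.0:
        return 0.0
    return p_ij * math.log2(p_ij / ((freq[wi] / N) * (freq[wj] / N)))

def sc(hyponym_def, hypernym_def, cw):
    """Formula (1): total cooccurrence weight between two definitions."""
    return sum(cw(wi, wj) for wi in hyponym_def for wj in hypernym_def)

def best_hypernym(hyponym_def, candidates, cw):
    """Pick the candidate hypernym definition maximizing SC."""
    return max(candidates, key=lambda e: sc(hyponym_def, e, cw))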
2.6 Heuristic 6: Cooccurrence Vectors
This heuristic is based on the method presented in (Wilks et al., 1993), which also uses cooccurrence data collected from the whole dictionary (c.f. section 4.1). Given a hyponym definition (O) and a set of candidate hypernym definitions, this method selects the candidate hypernym (E) which returns the maximum score following formula (2):
CV(O, E) = sim(V_O, V_E)    (2)
The similarity (sim) between two definitions can be measured by the dot product, the cosine function or the Euclidean distance between two vectors (V_O and V_E) which represent the contexts of the words present in the respective definitions, following formula (3):
V_{Def} = \sum_{w_i \in Def} civ(w_i)    (3)
The vector for a definition (V_{Def}) is computed by adding the cooccurrence information vectors of the words in the definition (civ(w_i)). The cooccurrence information vector for a word is collected from the whole dictionary using Cooccurrence Frequency, Mutual Information or Association Ratio. The best combination for each dictionary varies: whereas the dot product, Association Ratio, and window size 7 proved best for DGILE, the cosine, Mutual Information and whole definitions were preferred for LPPL.
2.7 Heuristic 7: Semantic Vectors
Because both LPPL and DGILE are poorly semantically coded, we decided to enrich the dictionary by assigning automatically a semantic tag to each dictionary sense (see section 4.2 for more details). Instead of assigning only one tag, we can attach to each dictionary sense a vector with weights for each of the 25 semantic tags we considered (which correspond to the 25 lexicographer files of WordNet (Miller, 1990)). In this case, given a hyponym (O) and a set of possible hypernyms, we select the candidate hypernym (E) which yields maximum similarity among semantic vectors:
SV(O, E) = sim(V_O, V_E)    (4)
where sim can be the dot product, cosine or Euclidean distance, as before. Each dictionary sense has been semantically tagged with a vector of semantic weights following formula (5):
V_{Def} = \sum_{w_i \in Def} swv(w_i)    (5)
The salient word vector (swv) for a word contains a saliency weight (Yarowsky, 1992) for each of the 25 semantic tags of WordNet. Again, the best method differs from one dictionary to the other: each one prefers the method used in the previous section.
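A sketch of how heuristics 6 and 7 compare definitions follows. The per-word vectors (civ for heuristic 6, swv for heuristic 7) are assumed to be precomputed, and the dot product is shown as the similarity:

# Sketch of heuristics 6/7: a definition vector is the sum of the
# per-word vectors; candidates are ranked by vector similarity.
def definition_vector(definition, word_vector):
    vec = {}
    for w in definition:
        for key, weight in word_vector(w).items():
            vec[key] = vec.get(key, 0.0) + weight
    return vec

def dot(u, v):
    return sum(w * v.get(k, 0.0) for k, w in u.items())

def best_by_vectors(hyponym_def, candidates, word_vector, sim=dot):
    vo = definition_vector(hyponym_def, word_vector)
    return max(candidates,
               key=lambda e: sim(vo, definition_vector(e, word_vector)))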
2.8 Heuristic 8: Conceptual Distance
Conceptual distance provides a basis for determining closeness in meaning among words, taking as reference a structured hierarchical net. Conceptual distance between two concepts is essentially the length of the shortest path that connects the concepts in the hierarchy. In order to apply conceptual distance, WordNet was chosen as the hierarchical knowledge base, and bilingual dictionaries were used to link Spanish and French words to the English concepts. Given a hyponym definition (O) and a set of candidate hypernym definitions, this heuristic chooses the hypernym definition (E) which is closest according to the following formula:
CD(O, E) = dist(headword_O, genus_E)    (6)
That is, Conceptual Distance is measured between the headword of the hyponym definition and the genus of the candidate hypernym definitions using formula (7), c.f. (Agirre et al., 1994). To compute the distance between any two words (w_1, w_2), all the corresponding concepts in WordNet (c_{1i}, c_{2j}) are searched via a bilingual dictionary, and the minimum of the summation for each concept in the path between each possible combination of c_{1i} and c_{2j} is returned, as shown below:
dist(w_1, w_2) = \min_{c_{1i} \in w_1, c_{2j} \in w_2} \sum_{c_k \in path(c_{1i}, c_{2j})} \frac{1}{depth(c_k)}    (7)
Formulas (6) and (7) proved the most suitable of several other possibilities for this task, including those which included full definitions in (6) or those using other Conceptual Distance formulas, c.f. (Agirre and Rigau, 1996).
2.9 Combining the heuristics: Summing
As outlined in the beginning of this section, the way to combine all the heuristics in one single decision is simple. The weights each heuristic assigns to the rivaling senses of one genus are normalized to the interval between 1 (best weight) and 0. Formula (8) shows the normalized value a given heuristic will give to sense E of the genus, according to the weight assigned by the heuristic to sense E and the maximum weight over all the senses E_i of the genus:
vote(O, E) = \frac{weight(O, E)}{\max_{E_i} weight(O, E_i)}    (8)
The values thus collected from each heuristic are added up for each competing sense. The order in which the heuristics are applied has no relevance at all.
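The combination step of formula (8) reduces to a few lines. A minimal sketch, assuming each heuristic's raw weights arrive as a dictionary from senses to scores:

# Sketch of the combination: each heuristic's weights are normalized
# by the maximum weight it assigned to any sense of the genus, and the
# normalized votes are summed per candidate sense.
def combine(heuristic_weights, senses):
    """heuristic_weights: list of dicts mapping sense -> raw weight."""
    totals = {e: 0.0 for e in senses}
    for weights in heuristic_weights:
        top = max(weights.get(e, 0.0) for e in senses)
        if top > 0.0:
            for e in senses:
                totals[e] += weights.get(e, 0.0) / top
    return max(senses, key=lambda e: totals[e])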
3 Evaluation
3.1 Test Set
In order to test the performance of each heuristic and their combination, we selected two test sets at random (one per dictionary): 391 noun senses for DGILE and 115 noun senses for LPPL, which give confidence rates of 95% and 91% respectively. From these samples, we retained only those for which the automatic selection process selected the correct genus (more than 97% in both dictionaries). Both test sets were disambiguated by hand. Where necessary, multiple correct senses were allowed in both dictionaries. Table 2 shows the data for the test sets.
                               DGILE        LPPL
Test set size                  391          115
Correct genus selected         382 (98%)    111 (97%)
Monosemous                     61 (16%)     40 (36%)
Senses per genus               2.75         2.29
idem (polysemous only)         3.64         3.02
Correct senses per genus       1.38         1.05
idem (polysemous only)         1.51         -
Table 2: Test Sets
3.2 Results
Table 3 summarizes the results for polysemous genus. In general, the results obtained for each heuristic seem to be poor, but always over the random choice baseline (also shown in tables 3 and 4). The best heuristic according to the recall in both dictionaries is the sense ordering heuristic (2). For the rest, the difference in size of the dictionaries could explain the reason why cooccurrence-based heuristics (5 and 6) are the best for DGILE, and the worst for LPPL. Semantic distance gives the best precision for LPPL, but chooses an average of 1.25 senses for each genus.
          random  (1)  (2)   (3)   (4)  (5)   (6)   (7)  (8)  Sum
LPPL
recall     36%    -    66%   -     8%   11%   22%   11%  50%  73%
precision  36%    -    66%   -     66%  44%   61%   57%  76%  73%
coverage   100%   -    100%  -     12%  25%   36%   19%  66%  100%
DGILE
recall     30%    -    70%   1%    44%  57%   60%   57%  47%  79%
precision  30%    -    70%   100%  72%  57%   60%   58%  49%  79%
coverage   100%   -    100%  1%    61%  100%  100%  99%  95%  100%
Table 3: Results for polysemous genus.
          random  (1)   (2)   (3)   (4)  (5)  (6)  (7)  (8)  Sum
LPPL
recall     59%    35%   78%   -     40%  42%  50%  42%  68%  82%
precision  59%    100%  78%   -     93%  82%  84%  88%  87%  82%
coverage   100%   35%   100%  -     43%  51%  59%  48%  78%  100%
DGILE
recall     41%    16%   75%   2%    41%  59%  63%  59%  48%  83%
precision  41%    100%  75%   100%  79%  65%  66%  63%  57%  83%
coverage   100%   16%   100%  2%    56%  95%  97%  94%  89%  100%
Table 4: Overall results.
With the combination of the heuristics (Sum) we obtained an improvement over sense ordering (heuristic 2) of 9% (from 70% to 79%) in DGILE, and of 7% (from 66% to 73%) in LPPL, maintaining in both cases a coverage of 100%. Including monosemous genus in the results (c.f. table 4), the sum is able to correctly disambiguate 83% of the genus in DGILE (8% improvement over sense ordering) and 82% of the genus in LPPL (4% improvement). Note that we are adding the results of eight different heuristics with eight different performances, improving the individual performance of each one.
In order to test the contribution of each heuristic to the total knowledge, we tested the sum of all the heuristics, eliminating one of them in turn. The results are provided in table 5.
          Sum   -(1)  -(2)  -(3)  -(4)  -(5)  -(6)  -(7)  -(8)
LPPL
recall     82%   73%   74%   -     73%   76%   77%   77%   78%
precision  82%   73%   75%   -     73%   76%   77%   77%   78%
coverage   100%  100%  99%   -     100%  100%  100%  100%  100%
DGILE
recall     83%   79%   72%   81%   81%   81%   81%   81%   77%
precision  83%   79%   72%   82%   81%   81%   81%   81%   77%
coverage   100%  100%  100%  98%   100%  100%  100%  100%  100%
Table 5: Knowledge provided by each heuristic (overall results).
(Gale et al., 1993) estimate that any sense-identification system that does not give the correct sense of polysemous words more than 75% of the time would not be worth serious consideration. As table 5 shows, this is not the case in our system. For instance, in DGILE heuristic 8 has the worst performance (see table 4, precision 57%), but it has the second largest contribution (see table 5, precision decreases from 83% to 77%). That is, even those heuristics with poor performance can contribute knowledge that other heuristics do not provide.
3.3 Evaluation
The difference in performance between the two dictionaries shows that quality and size of resources is a key issue. Apparently the task of disambiguating LPPL seems easier: less polysemy, more monosemous genus and high precision of the sense ordering heuristic. However, the heuristics that depend only on the size of the data (5, 6) perform poorly on LPPL, while they are powerful methods for DGILE.
The results show that the combination of heuristics is useful, even if the performance of some of the heuristics is low. The combination performs better than isolated heuristics, and allows us to disambiguate all the genus of the test set with a success rate of 83% in DGILE and 82% in LPPL.
All the heuristics except heuristic 3 can readily be applied to any other dictionary. Minimal parameter adjustment (window size, cooccurrence weight formula and vector similarity function) should be done to fit the characteristics of the dictionary, but according to our results it does not alter significantly the results after combining the heuristics.
4 Derived Lexical Knowledge Resources
4.1 Cooccurrence Data
Following (Wilks et al., 1993), two words cooccur if they appear in the same definition (word order in definitions is not taken into account). For instance, for DGILE, a lexicon of 300,062 cooccurrence pairs among 40,193 word forms was derived (stop words were not taken into account). Table 6 shows the first eleven words out of the 360 which cooccur with vino (wine), ordered by Association Ratio. From left to right, Association Ratio and number of occurrences.
11.1655 15 tinto (red) 10.0162 23 beber (to drink) 9.6627 14 mos¢o (must) 8.6633 9 jerez (sherry) 8.1051 9 cubas (cask, barrel) 8.0551 16 licor (liquor) 7.2127 17 bebida (drink) 6.9338 12 uva (grape) 6.8436 9 trago (drink, swig) 6.6221 12 sabot (taste) 6.4506 15 pan (bread) Table 6: Example of (wine). association ratio for vino MTD) thus produced from the dictionary is used by heuristics 5 and 6. 4.2 Multilingual Data Heuristics 7 and 8 need external knowledge, not present in the dictionaries themselves. This knowl- edge is composed of semantic field tags and hier- archical structures, and both were extracted from WordNet. In order to do this, the gap between our working languages and English was filled with two bilingual dictionaries. For this purpose, we derived a list of links for each word in Spanish and French as follows. Firstly, each Spanish or French word was looked up in the bilingual dictionary, and its English trans- lation was found. For each translation WordNet yielded its senses, in the form of WordNet concepts (synsets). The pair made of the original word and each of the concepts linked to it, was included in a file, thus producing a MTD with links between Span- ish or French words and WordNet concepts. Obvi- ously some of this links are not correct, as the trans- lation in the bilingual dictionary may not necessarily be understood in its senses (as listed in WordNet). The heuristics using these MTDs are aware of this. For instance when accessing the semantic fields for vin (French) we get a unique translation, wine, which has two senses in WordNet: <wine,vino> as a beverage, and <wine, wine-coloured> as a kind of color. In this example two links would be produced (vin, <wine,vino>) and (vin, <wine, wine-coloured>). This link allows us to get two possible semantic fields for vin (noun.food, file 13, and noun.attribute, file 7) and the whole structure of the hierarchy in Word- Net for each of the concepts. 5 Comparison with Previous Work Several approaches have been proposed for attaching the correct sense (from a set of prescribed ones) of a word in context. Some of them have been fully tested in real size texts (e.g. statistical methods (Yarowsky, 1992), (Yarowsky, 1994), (Miller and Teibel, 1991), knowledge based methods (Sussna, 1993), (Agirre and Rigau, 1996), or mixed methods (Richardson et al., 1994), (Resnik, 1995)). The performance of WSD is reaching a high stance, although usually only small sets of words with clear sense distinctions are selected for disambiguation (e.g. (Yarowsky, 1995) reports a success rate of 96% disambiguating twelve words with two clear sense distinctions each one). This paper has presented a general technique for WSD which is a combination of statistical and knowledge based methods, and which has been ap- plied to disambiguate all the genus terms in two dic- tionaries. Although this latter task could be seen easier than general WSD 4, genus are usually frequent and gen- eral words with high ambiguity ~. While the average of senses per noun in DGILE is 1.8 the average of senses per noun genus is 2.75 (1.30 and 2.29 respec- tively for LPPL). Furthermore, it is not possible to apply the powerful "one sense per discourse" prop- erty (Yarowsky, 1995) because there is no discourse in dictionaries. 
WSD is a very difficult task even for humans 6, but semiautomatic techniques to disambiguate genus have been broadly used (Amsler, 1981) (Vossen and Serail, 1990) (Ageno et ah, 1992) (Artola, 1993) and some attempts to do automatic genus disam- biguation have been performed using the semantic codes of the dictionary (Bruce et al., 1992) or us- ing cooccurrence data extracted from the dictionary itself (Wilks et al., 1993). Selecting the correct sense for LDOCE genus terms, (Bruce et al., 1992)) report a success rate of 80% (90% after hand coding of ten genus). This impressive rate is achieved using the intrinsic char- 4In contrast to other sense distinctions Dictionary word senses frequently differ in subtle distinctions (only some of which have to do with meaning (Gale et ah, 1993)) producing a large set of closely related dictionary senses (Jacobs, 1991). 5However, in dictionary definitions the headword and the genus term have to be the same part of speech. 6(Wilks et al., 1993) disambiguating 197 occurrences of the word bank in LDOCE say "was not an easy task, as some of the usages of bank did not seem to fit any of the definitions very well". Also (Miller et al., 1994) tagging semantically SemCor by hand, measure an error rate around 10% for polysemous words. 53 acteristics of LDOCE. Yhrthermore, using only the implicit information contained into the dictionary definitions of LDOCE (Cowie et al., 1992) report a success rate of 47% at a sense level. (Wilks et al., 1993) reports a success rate of 45% disambiguat- ing the word bank (thirteen senses LDOCE) using a technique similar to heuristic 6. In our case, combin- ing informed heuristics and without explicit seman- tic tags, the success rates are 83% and 82% over- all, and 95% and 75% for two-way ambiguous genus (DGILE and LPPL data, respectively). Moreover, 93% and 92% of times the real solution is between the first and second proposed solution. 6 Conclusion and Future Work The results show that computer aided construction of taxonomies using lexical resources is not limited to highly-structured dictionaries as LDOCE, but has been succesfully achieved with two very different dic- tionaries. All the heuristics used are unsupervised, in the sense that they do not need hand-codding of any kind, and the proposed method can be adapted to any dictionary with minimal parameter setting. Nevertheless, quality and size of the lexical knowl- edge resources are important. As the results for LPPL show, small dictionaries with short definitions can not profit from raw corpus techniques (heuristics 5, 6), and consequently the improvement of preci- sion over the random baseline or first-sense heuristic is lower than in DGILE. We have also shown that such a simple technique as just summing is a useful way to combine knowl- edge from several unsupervised WSD methods, al- lowing to raise the performance of each one in isola- tion (coverage and/or precision). Furthermore, even those heuristics with apparently poor results provide knowledge to the final result not provided by the rest of heuristics. Thus, adding new heuristics with dif- ferent methodologies and different knowledge (e.g. from corpora) as they become available will certainly improve the results. Needless to say, several improvements can be done both in individual heuristic and also in the method to combine them. For instance, the cooccur- fence heuristics have been applied quite indiscrim- inately, even in low frequency conditions. 
Signifi- cance tests or association coefficients could be used in order to discard low confidence decisions. Also, instead of just summing, more clever combinations can be tried, such as training classifiers which use the heuristics as predictor variables. Although we used these techniques for genus dis- ambiguation we expect similar results (or even bet- ter taken the "one sense per discourse" property and lexical knowledge acquired from corpora) for the WSD problem. 7 Acknowledgments This work would not be possible without the col- laboration of our colleagues, specially Jose Mari Ar- riola, Xabier Artola, Arantza Diaz de Ilarraza, Kepa Sarasola and Aitor Soroa in the Basque Country and Horacio Rodr~guez in Catalonia. References Alicia Ageno, Irene CastellSn, Maria Antonia Marti, Francesc Ribas, German Rigau, Horacio Rodriguez, Mariona Taul@ and Felisa Verdejo. 1992. SEISD: An environment for extraction of Semantic information from on-line dictionaries. In Proceedings of the 3th Conference on Applied Natural Language Processing (ANLP'92), Trento, Italy. Eneko Agirre, Xabier Arregi, Xabier Artola, Arantza Diaz de Ilarraza and Kepa Sarasola. 1994. Con- ceptual Distance and Automatic Spelling Correc- tion. In Proceedings of the workshop on Compu- tational Linguistics /or Speech and Handwriting Recognition, Leeds, United Kingdom. Eneko Agirre and German Rigau. 1996. Word Sense Disambiguation using Conceptual Density. In Proceedings of the 16th International Confer- ence on Computational Linguistics (Coling'96), pages 16-22. Copenhagen, Denmark. Robert Amsler. 1981. A Taxonomy for English Nouns and Verbs. In Proceedings of the 19th Annual Meeting of the Association for Computa- tional Linguistics, pages 133-138. Stanford, Cali- fornia. Xabier Artola. 1993. Conception et construc- cion d'un systeme intelligent d'aide diccionariale (SIAL)). PhD. Thesis, Euskal Herriko Unibertsi- tatea, Donostia, Basque Country. Eduard Briscoe, Ann Copestake and Branimir Bogu- raev. 1990. Enjoy the paper: Lexical Semantics via lexicology. In Proceedings of the 13th Inter'na- tional Conference on Computational Linguistics (Coling'90), pages 42-47. Eduard Briscoe. 1991. Lexical Issues in Natural Language Processing. In Klein E. and Veltman F. eds. Natural Language and Speech. pages 39-68, Springer-Verlag. Rebecca Bruce, Yorick Wilks, Louise Guthrie, Brian Slator and Ted Dunning. 1992. NounSense - A Disambiguated Noun Taxonomy with a Sense of 54 Humour. Research Report MCCS-92-2~6. Com- puting Research Laboratory, New Mexico State University. Las Cruces. Kenneth Church and Patrick Hanks. 1990. Word Association Norms, Mutual Information, and Lex- icography. Computational Linguistics, vol. 16, ns. 1, 22-29. P. Cohen and C. Loiselle. 1988. Beyond ISA: Struc- tures for Plausible Inference in Semantic Data. In Proceedings of 7th Natural Language Conference AAAI'88. Jim Cowie, Joe Guthrie and Louise Guthrie. 1992. Lexical Disambiguation using Simulated Anneal- ing. In Proceedings of DARPA WorkShop on Speech and Natural Language, pages 238-242, New York. DGILE 1987. Diccionario General Ilustrado de la Lengua Espa~ola VOX. Alvar M.ed. Biblograf S.A. Barcelona, Spain. William Gale, Kenneth Church and David Yarowsky. 1993. A Method for Disambiguating Word Senses in a Large Corpus. Computers and the Humanities 26, pages 415-439. Ralph Grishman, Catherine Macleod and Adam Meyers. 1994.. Comlex syntax: building a com- putational lexicon. 
In Proceedings of the 15th Annual Meeting of the Association for Compu- tational Linguistics, (Coling'9~). 268-272. Kyoto, Japan. Claire Grover, John Carroll and John Reckers. 1993. The Alvey Natural Language Tools grammar (4th realese). Technical Report 284. Computer Labo- ratory, Cambridge University, UK. Paul Jacobs. 1991. Making Sense of Lexical Ac- quisition. In Zernik U. ed., Lexical Acquisition: Exploiting On-line Resources to Build a Lexicon, Lawrence Erlbaum Associates, publishers. Hills- dale, New Jersey. LDOCE 1987. Longman Dictionary of Contempo- rary English. Procter, P. ed. Longman, Harlow and London. LPPL 1980. Le Plus Petit Larousse. Gougenheim, G. ed. Librairie Larousse. Sussan McRoy. 1992. Using Multiple Knowledge Sources for Word Sense Discrimination. Compu- tational Linguistics 18(1). George Miller. 1990. Five papers on WordNet. Spe- cial Issue of International Journal of Lexicography 3(4). George Miller and David Teibel. 1991. A pro- posal for Lexical Disambiguation. In Proceedings of DARPA Speech and Natural Language Work- shop, 395-399, Pacific Grave, California. George Miller, Martin Chodorow, Shari Landes, Claudia Leacock and Robert Thomas. 1994. Us- ing a Semantic Concordance for sense Identifica- tion. In Proceedings of ARPA Workshop on Hu- man Language Technology. Philip Resnik. 1992. WordNet and Distributional analysis: A class-based approach to lexical dis- covery. In Proceedings of AAAI Symposyum on Probabilistic Approaches to NL, San Jose, Califor- nia. Philip Resnik. 1995. Disambiguating Noun Group- ings with Respect to WordNet Senses. In Proceed- ings of the Third Workshop on Very Large Cor- pora, MIT. R. Richardson, A.F. Smeaton and J. Murphy. 1994. Using WordNet as a Knowledge Base for Measur- ing Semantic Similarity between Words. Work- ing Paper CA-129~, School of Computer Applica- tions, Dublin City University. Dublin, Ireland. Michael Sussna. 1993. Word Sense Disambiguation for Free-text Indexing Using a Massive Semantic Network. In Proceedings of the Second Interna- tional Conference on Information and knowledge Management. Arlington, Virginia. Piek Vossen and Iskander Serail. 1992. Word-Devil, a Taxonomy-Browser for Lexical Decomposition via the Lexicon. Esprit BRA-3030 Acquilex Work- ing Paper n. 009. Yorick Wilks, Dam Fass, Cheng-Ming Guo, James McDonald, Tony Plate and Brian Slator. 1993. Providing Machine Tractable Dictionary Tools. In Pustejowsky J. ed. Semantics and the Lexicon, pages 341-401. David Yarowsky. 1992. Word-Sense Disambigua- tion Using Statistical Models of Rogets Categories Trained on Large Corpora. In Proceedings of the l~th International Conference on Computational Linguistics (Coling'92), pages 454-460. Nantes, France. David Yarowsky. 1994. Decision Lists for Lexical Ambiguity Resolution. In Proceedings of the 32th Annual Meeting of the Association for Compu- tational Linguistics, (ACL'9~). Las Cruces, New Mexico. David Yarowsky. 1995. Unsupervised Word Sense Disambiguation Rivaling Supervised Meth- ods. In Proceedings of the 33th Annual Meeting of the Association for Computational Linguistics, (ACL'95). Cambridge, Massachussets. 55 | 1997 | 7 |
Representing Paraphrases Using Synchronous TAGs Mark Dras Microsoft Research Institute, Macquarie University NSW Australia 2109 markd~mpce, mq. edu. au Abstract This paper looks at representing para- phrases using the formalism of Syn- chronous TAGs; it looks particularly at comparisons with machine translation and the modifications it is necessary to make to Synchronous TAGs for paraphrasing. A more detailed version is in Dras (1997a). 1 Introduction The context of the paraphrasing in this work is that of Reluctant Paraphrase (Dras, 1997b). In this framework, a paraphrase is a tool for modify- ing a text to fit a set of constraints like length or lexical density. As such, generally applicable para- phrases are appropriate, so syntactic paraphrases-- paraphrases that can be represented in terms of a mapping between syntax trees describing each of the paraphrase alternatives--have been chosen for their general applicability. Three examples are: (1) a. The salesman made an attempt to wear Steven down. b. The salesman attempted to wear Steven down. (2) a. The compere who put the contestant to the lie detector gained the cheers of the audience. b. The compere put the contestant to the lie detector test. He gained the cheers of the audience. (3) a. The smile broke his composure. b. His composure was broken by the smile. A possible approach for representing paraphrases is that of Chandrasekar et al (1996) in the context of text simplification. This involves a fairly straightfor- ward representation, as the focus is on paraphrases which simplify sentences by breaking them apart. However, for purposes other than sentence simplifi- cation, where paraphrases like (1) are used, a more complex representation is needed. A paraphrase representation can be thought of as comprising two parts--a representation for each of the source and target texts, and a representation for mapping between them. Tree Adjoining Gram- mars (TAGs) cover the first part: as a formalism for describing the syntactic aspects of text, they have a number of desirable features. The proper- ties of the formalism are well established (Joshi et al, 1975), and the research has also led to the de- velopment of a large standard grammar (XTAG Re- search Group, 1995), and a parser XTAG (Doran et al, 1994). Mapping between source and target texts is achieved by an extension to the TAG formalism known as Synchronous TAG, introduced by Shieber and Schabes (1990). Synchronous TAGs (STAGs) comprise a pair of trees plus links between nodes of the trees. The original paper of Shieber and Schabes proposed using STAGs to map from a syntactic to a semantic representation, while another paper by Abeill@ (1990) proposed their use in machine trans- lation. The use in machine translation is quite close to the use proposed here, hence the comparison in the following section; instead of mapping between possibly different trees in different languages, there is a mapping between trees in the same language with very different syntactic properties. 2 Paraphrasing with STAGs Abeill~ notes that the STAG formalism allows an explicit semantic representation to be avoided, map- ping from syntax to syntax directly. This fits well with the syntactic paraphrases described in this pa- per; but it does not, as Abeill@ also notes, pre- clude semantic-based mappings, with Shieber and Schabes constructing syntax-to-semantics mappings as the first demonstration of STAGs. 
Similarly, more semantically-based paraphrases are possible through an indirect application of STAGs to a semantic rep- resentation, and then back to the syntax. One major difference between use in MT and paraphrase is in lexicalisation. The sorts of map- pings that Abeill~ deals with are lexically idiosyn- cratic: the English sentences Kim likes Dale and Kim misses Dale, while syntactically parallel and semantically fairly dose, are translated to different 516 S Figure 1: STAGs: miss-manquer d syntactic structures in French; see Figure 1. The actual mappings depend on the properties of words, so any TAGs used in this synchronous manner will necessarily be lexicaiised. Here, however, the sorts of paraphrases which are used are lexically general: splitting off a relative clause, as in (2), is not depen- dent on any lexical attribute of the sentence. Related to this is that, at least between English and French, extensive syntactic mismatch is un- usual, much of the difficulty in translation coming from lexical idiosyncrasies. A consequence for ma- chine translation is that much of the synchronis- ing of TAGs is between elementary trees. So, even with a more complex syntactic structure than the translation examples above, the changes can be de- scribed by composing mappings between elementary trees, or just in the transfer lexicon. Abeill~ notes that there are occasions where it is necessary to re- place an elementary tree by a derived tree; for exam- ple, in Hopefully, John will work becomes On esp~re que Jean travaillera, hopefully (an elementary tree) matches on esp~re que (derived). , ~ v,o N~ P.Po Figure 2: Relative clause paraphrase The situation is more complex in paraphrasing: by definition, the mappings are between units of text with differing syntactic properties. For exam- ple, the mapping of examples (2a) and (2b) involves the pairing of two derived trees, as in Figure 2. In this case, both trees are derived ones. A problem with the STAG formalism in this situation is that it doesn't capture the generality of the mapping be- tween (2a) and (2b); separate tree pairings will have to be made for verbs in the matrix clause which have complementation patterns different from that of the above examples; the same is true for verbs in the sub- ordinate clause. For more complex matchings, the making and pairing of derived trees becomes combi- natorially large. A more compact definition is to have links, of a kind different from the standard STAG links, be- tween nodes higher in the tree. In STAG, a link between two nodes specifies that any substitution or adjunction occurring at one node must be repli- cated at the other. This new proposed link would be a summary link indicating the synchronisation of an entire subtree: more precisely, each subnode of the node with the summary link is mapped to the cor- responding node in the paired tree in a synchronous depth-first traversal of the subtree. Naturally, this can only be defined for pairs of nodes which have the same structure 1 ; that is, in the context of para- phrasing, it is effectively a statement that the paired subtrees are identical. So, for example, a mapping between the nodes labelled VP1 in each of the trees of the example described above would be an appro- priate place to have such a summary link: by es- tablishing a mapping between each subnode of VP1, this covers different types of matrix clauses. Another feature of using STAGs for paraphras- ing is that the links are not necessarily one-to-one. 
In the right-hand tree of the Figure 2 pairing, the subject NPs of both sentences are linked to NP1 of the left-hand tree; this is a statement that both re- sulting sentences have the same subject. This does not, however, change the properties in any signifi- cant way. 2 It is also useful to add another type of link which is non-standard, in that it is not just a link between nodes at which adjunction and substitution occur, but which represents shared attributes. It connects nodes such as the main verb of each tree, and indi- cates that particular attributes are held in common. For example, mapping between active and passive voice versions of a sentence is represented by the tree in Figure 3. The verb in the active version of (3) (broke) shares the attribute of tense with the auxiliary verb \be\, and the lexical component is shared with the main verb of the passive tree (bro- 1More precisely, they need only have the same num- ber and type of argument slots. 2This is equivalent to there being m dummy child nodes of the node at the multiple end of an m:l link, each child node being exactly the same as the parent with fully re-entrant feature structures, with one link being systematically allocated to each child. 517 ken), which takes the past participle form. This sort of link is unnecessary when STAGs are used in MT, as the trees are lexicalised, and the information is shared in the transfer lexicon. Since, with para- phrasing, the transfer lexicon does not play such a role, the shared information is represented by this new type of link between the trees, where the links are labelled according to the information shared. Hence, node 1/1 in the active tree has a TENSE link with node Vo in the passive tree, where tense is the attribute in common; and a LEX link with node I/1 in the passive tree, where the lexeme is shared. 3 3 Notation In paraphrasing, the tree notation thus becomes fairly clumsy: as well as consuming a large amount of space (given the large derived trees), it fails to reflect the generality provided by the summary links. That is, it is not possible to define a mapping between two structures reflecting their common features if the structures are not, as is standard in STAG, en- tire elementary or derived trees. Therefore, a new and more compact notation is proposed to overcome these two disadvantages. The new notation has three parts: the first part uniquely defines each tree of a synchronous tree pair; the second part describes, also uniquely, the nodes that will be part of the links; the third part links the trees via these nodes. So, let variables X and Y stand for any string of argument types accept- able in tree names; for example, X could be nxlnx2 and Y nl. Then, for example, the tree for (2a) can be defined as the adjunction of a flN0nx0VX tree (generic relative clause tree, standing for, e.g., ~N0nx0Vnxlnx2) into an an0VY tree; the tree for (2b) can be defined as a conjoined S tree, having a parent Sm node and 2 child nodes an0VX and an0VY. s, s, Figure 3: Paraphrase with partial links The second part of the notation requires pick- ing out important nodes. The identification scheme ~The determination of a precise set of link labels is future work. proposed here has a string comprising node labels with relations between them, signifying a relation- ship taken from the set {parent, child, left-sibling, right-sibling}, abbreviated {p, c, ls, rs}. 
The node NP1 of the left-hand tree of Figure 2 can then be described by the string NPpNPpSrpNIL; an asso- ciated mnemonic nickname might be T1 subjNP. The third part of the representation is then link- ing the nodes. Standard links are represented by an equal sign; other links are represented with the link type subscripted to the equal sign. Thus, for Figure 2, TlsubjNP=TfleftsubjNP, where T21eftsubjNP is NPpSrpSmpNIL for the right- hand tree. For a tabular representation using this notation, see Dras (1997a). 4 Conclusion Synchronous TAGs are a useful representation for paraphrasing, the mapping between parallel texts of the same language which have different syntac- tic structure. A number of modifications need to be made, however, to properly capture the nature of paraphrases: the creation of a new type of summary link, to compensate for the increased importance of derived trees; the allowing of many-to-many links between trees; the creation of partial links, which allow some information to be shared; and a new no- tation which expresses the generality of paraphras- ing. References Abeill~, Anne, Y. Schabes and A. Joshi. 1990. Using Lexicalised Tags for Machine Translation. Proc. of COLING90, 1-6. Chandrasekar, R., C. Doran, B. Srinivas. 1996. Moti- vations and Methods for Text Simplification. Proc. of COLING96, 1041-1044. Doran, Christy, D. Egedi, B.A. Hockey, B. Srinivas and M. Zaidel. 1994. XTAG System - A Wide Coverage Grammar of English. Proc. o/COLING94, 922-928. Dras, Mark. 1997a. Representing Paraphrases Using Synchronous Tree Adjoining Grammars. 1997 Aus- tralasian NLP Summer Workshop, 17-24. Dras, Mark. 1997b. Reluctant Paraphrase: Textual Re- structuring under an Optimisation Model. Submitted to PACLING97. Joshi, Aravind, L. Levy and M. Takahashi. 1975. Tree Adjunct Grammars. J. of Computer and System Sci- ences, 10(1). Shieber, Stuart and Y. Schabes. 1990. Synchronous Tree Adjoining Grammars. Proc. of COLINGgo, 253-258. XTAG Research Group. 1995. A Lexicalised Tree Ad- joining Grammar for English. Univ. of Pennsylvania Technical Report IRCS 95-03. 518 | 1997 | 70 |
Contrastive accent in a data-to-speech system Mari~t Theune IPO, Center for Research on User-System Interaction P.O. Box 513 5600 MB Eindhoven The Netherlands theune@ipo, tue. nl Abstract Being able to predict the placement of con- trastive accent is essential for the assign- ment of correct accentuation patterns in spoken language generation. I discuss two approaches to the generation of contrastive accent and propose an alternative method that is feasible and computationally at- tractive in data-to-speech systems. 1 Motivation The placement of pitch accent plays an important role in the interpretation of spoken messages. Utter- antes having the same surface structure but a differ- ent accentuation pattern may express very different meanings. A generation system for spoken language should therefore be able to produce appropriate ac- centuation patterns for its output messages. One of the factors determining accentuation is contrast. Its importance canbe illustrated with all example from GoalGetter, a data-to-speech sys- teln which generates spoken soccer reports in Dutch (Klabbers et al., 1997). The input of the system is a typed data structure containing data on a soccer match. So-called syntactic templates (van Deemter and Odijk, 1995) are used to express parts of this data structure. In GoalGetter, only 'new' inform- ation is accented; 'given' ('old') information is not (Chafe, 1976), (Brown, 1983), (Hirschberg, 1992). However, this strategy does not always lead to a cor- rect accentuation pattern if contrastive information is not taken into account, as shown in example (1). t (1) a Ill the 16th minute, the Ajax player Kluivert kicked the ball into the wrong goal. b Ten minutes later, Wooter scored for Ajax. 1 All GoalGetter examples are translated from Dutch. Accented words are given in italics; deaccented words are underlined. This is only done where relevant. The word Ajax in (1)b is not accented by the sys- tem, because it is mentioned for the second time and therefore regarded as 'given'. However, this lack of accent creates the impression that Kluivert scored for Ajax too, whereas in fact he scored for the op- posing team through an own goal. This undesirable effect could be avoided by accenting the second oc- currence of Ajax in spite of its givenness, to indicate that it constitutes contrastive information. 2 Predicting contrastive accent In this section I discuss two approaches to predicting contrastive accent, which were put forward by Scott Prevost (1995) and Stephen Pulinan (1997). In the theory of contrast proposed in (Prevost, 1995), an item receives contrastive accent if it co- occurs with another item that belongs to its 'set of alternatives', i.e. a set of different items of the same type. There are two main problems with this ap- proach. First, as Prevost himself notes, it is very difficult to define exactly which items count as be- ing of 'the same type'. If the definition is too strict, not all cases of contrast will be accounted for. On the other hand, if it is too broad, then anything will be predicted to contrast with anything. A second problem is that there are cases where co-occurrence of two items of the same type does not trigger con- trast, as in the following soccer example: (2) a b c After six minutes Nilis scored a goal for PSV. This caused Ajax to fall behind. Twenty minutes later Cocu scored for PSV. 
According to Prevost's theory, PSVin (2)c should have a contrastive accent, because the two teams Ajax and PSV are obviously in each other's altern- ative set. In fact, though, there is no contrast and PSV should be normally deaccented due to given- ness. This shows that the presence of an alternative item is not sufficient to trigger contrast accent. 519 Another approach to contrastive accent is advoc- ated by Pulman (1997), who proposes to use higher order unification (HOU) for both interpretation and prediction of focus. Described informally, Pulman's focus assignment algorithm takes the semantic rep- resentation of a sentence which has just been gener- ated, looks in the context for another sentence rep- resentation containing parallel items, and abstracts over these items in both representations. If the resulting representations are unifiable, the two sen- tences stand in a contrast relation and the parallel elements from the most recent one receive a pitch accent (or another focus marker). Pulman does not give a full definition of parallel- ism, but states that "to be parallel, two items need to be at least of the same type and have the same sortal properties" ((Pulman, 1997), p. 90). This is rather similar to Prevost's conditions on alternative sets. Consequently, Pulman's theory also faces the problem of determining when two items are of the same type. Still, contrary to Prevost, Pulman can explain the lack of contrast accent in (2)c, because obviously the representations of sentences (2)b and (2)c will not unify. Another advantage, pointed out in (Gardent et al., 1996), is that a HOU algorithm can take world know- ledge into account, which is sometimes necessary for determining contrast. For instance, the contrast in (1) is based on the knowledge that kicking the ball into the wrong goal implies scoring a goal for the opposing team. In a HOU approach, the contrast in this example might be predicted by unifying the representation of the second sentence with the entail- ment of the first. However, such a strategy would require the explicit enumeration of all possible se- mantic equivalences and entalhnents in the relevant domain, which seems hardly feasible. Also, imple- mentation of higher order unification can be quite inefficient. This means that although theoretically appealing, the HOU approach to contrastive accent is less attractive from a computational viewpoint. 3 An alternative solution Fortunately, in data-to-speech systems like GoalGet- ter, the input of which is formed by typed and struc- tured data, a simple principle can be used for de- termining contrast. If two subsequent sentences are generated from the same type of data structure they express similar information and should therefore be regarded as potentially contrastive, even if their sur- face forms are different. Pitch accent should be as- signed to those parts of the second sentence that ex- press data which differ from those in the data struc- ture expressed by the first sentence. Example (1) can be used as illustration. The the- ory of Prevost will not predict contrastive accent on Ajax in (1)b, because (1)a does not contain a mem- ber of its alternative set. In Pulman's approach, the contrast can only be predicted if the system uses the world knowledge that scoring an own goal means scoring for the opposing team. In the approach that I propose, the contrast between (1)a and b can be de- rived directly from the data structures they express. 
Figure 1 shows these structures, A and B, which are both of the type goaLevent: a record with fields spe- cifying the team for which a goal was scored, the player who scored, the time and the kind of goal: normal, own goal or penalty. A: goaLevent team: PSV player: Kluivert minute: 16 goaltype: own B: goaLevent team: Ajax player: Wooter minute: 26 goaltype: normal Figure 1: Data structures expressed by (1)a and b. Since A and B are of the same type, the values of their fields can be compared, showing which pieces of information are contrastive. Figure 1 shows that all the fields of B have different values from those of A. This means that each phrase in (1)b which ex- presses the value of one of those fields should receive contrastive accent, 2 even if the corresponding field value of A was not mentioned in (1)a. This guar- antees that in (1)b the proper name Ajax, which expresses the value of the team field of B, is accen- ted despite the fact that the contrasting team was not explicitly mentioned in (1)a. The discussion of example (1) shows that in the approach proposed here no world knowledge is needed to determine contrast; it is only necessary to compare the data structures that are expressed by the generated sentences. The fact that the input data structures of the system are organized in such a way that identical data types express semantically parallel information allows us to make use of the world (or domain) knowledge incorporated in the design of these data structures, without having to separately encode this knowledge. This also means 2Sentence (1)b happens not to express the goaltype value of B, but if it did, this phrase should also receive contrastive accent (e.g., 'Twenty minutes later, Over- mars scored a normal goal'). 520 that the prediction of contrast does not depend on the linguistic expressions which are chosen to ex- press the input data; the data can be expressed in an indirect way, as in (1)a, without influencing the prediction of contrast. The approach sketched above will also give the de- sired result for example (2): sentence (2)c will not be regarded as contrastive with (2)b, since (2)c ex- presses a goal event but (2)b does not. 4 Future directions An open question which still remains, is at which level data structures should be compared. In other words, how do we deal with sub- and supertypes? For example, apart from the goal_event data type the GoalGetter system also has a card_event type, which specifies at what time which player received a card of which color. Since goal_event and card_event are different types, they are not expected to be con- trastible. However, both are subtypes of a more gen- eral event type, and if regarded at this higher event level, the structures might be considered as contrast- ible after all. Examples like (3) seem to suggest that this is possible. (3) a In the 11th minute, Ajax took the lead through a goal by Kluivert. b Shortly after the break, the referee handed Nilis a yellow card. c Ten minutes later, Kluivert scored for the second time. The fact that it is not inappropriate to accent Klu- ivert in (3)c, shows that (3)c may be regarded as contrastive to (3)b; otherwise, it would be obligat- ory to deaccent the second mention of Kluivert due to givenness, like PSV in (2)c. Cases like this might be accounted for by assuming that there can be con- trast between fields that are shared by data types having the same supertype. In (3), these would be the player and the minute fields of structures C and D, shown in Figure 2. 
This is a tentative solu- tion which requires further research. player: Nilis ] C: card_event minute: 11 cardtype: yellow team: Ajax D: goal_event player: Kluivert minute: 21 goaltype: normal Figure 2: Data structures expressed by (3)b and c. 5 Conclusion I have sketched a practical approach to the assign- ment of contrastive accent in data-to-speech sys- tems, which does not need a universal definition of alternative or parallel items. Because the determin- ation of contrast is based on the data expressed by generated sentences, instead of their syntactic struc- tures or semantic reprentations, there is no need for separately encoding world knowledge. The proposed approach is domain-specific in that it relies heavily on the data structures that form the input from gen- eration. On the other hand it is based on a general principle, which should be applicable in any system where typed data structures form the input for lin- guistic generation. In the near future, the proposed approach will be implemented in GoalGetter. Acknowledgements: This research was carried out within the Priority Programme Language and Speech Technology (TST), sponsored by NWO (the Netherlands Organization for Scientific Research). References Gillian Brown. 1983. Prosodic structure and the given/new distinction. In D.R. Ladd and A. Cutler (Eds.): Prosody: Models and Measurements. Springer Verlag, Berlin. Wallace Chafe. 1976. Givenness, contrastiveness, defin- iteness, subjects, topics and points of view. In C.N. Li (Ed): Subject and Topic. Academic Press, New York. Kees van Deemter and Jan Odijk. 1995. Context modeling and the generation of spoken discourse. Manuscript 1125, IPO, Eindhoven, October 1995. Philips Research Manuscript NL-MS 18 728. To ap- pear in Speech Communication, 21 (1/2). Claire Gardent, Michael Kohlhase and Noor van Leusen. 1996. Corrections and higher-order unification. To appear in Proceedings of KONVENS, Bielefeld. Julia Hirschberg. 1992. Using discourse context to guide pitch accent decisions in synthetic speech. In G. Bailly, C. Benoit and T.R. Sawallis (Eds) Talking Ma- chines: Theories, Models, and Designs. Elsevier Sci- ence Publishers, Amsterdam, The Netherlands. Esther Klabbers, Jan Odijk, Jan Roelof de Pijper and Mari~t Theune. 1997. GoalGetter: from Teletext to speech. To appear in IPO Annual Progress Report 31. Eindhoven, The Netherlands. Scott Prevost. 1995. A semantics of contrast and in- formation structure for specifying intonation in spoken language generation. PhD-dissertation, University of Pennsylvania. Stephen Pulman. 1997. Higher Order Unification and the interpretation of focus. In Linguistics and Philo- sophy 20. 521 | 1997 | 71 |
Towards resolution of bridging descriptions Renata Vieira and Simone Teufel Centre for Cognitive Science - University of Edinburgh 2, Buccleuch Place EH8 9LW Edinburgh UK {renat a, simone}©cogsci, ed. ac. uk Abstract We present preliminary results concern- ing robust techniques for resolving bridging definite descriptions. We report our anal- ysis of a collection of 20 Wall Street Jour- nal articles from the Penn Treebank Cor- pus and our experiments with WordNet to identify relations between bridging descrip- tions and their antecedents. 1 Background As part of our research on definite description (DD) interpretation, we asked 3 subjects to classify the uses of DDs in a corpus using a taxonomy related to the proposals of (Hawkins, 1978) (Prince, 1981) and (Prince, 1992). Of the 1040 DDs in our corpus, 312 (30%) were identified as anaphoric (same head), 492 (47%) as larger situation/unfamiliar (Prince's discourse new), and 204 (20%) as bridging refer- ences, defined as uses of DDs whose antecedents-- coreferential or not--have a different head noun; the remaining were classified as idioms or were cases for which the subjects expressed doubt--see (Poesio and Vieira, 1997) for a description of the experiments. In previous work we implemented a system ca- pable of interpreting DDs in a parsed corpus (Vieira and Poesio, 1997). Our implementation employed fairly simple techniques; we concentrated on anaphoric (same head) descriptions (resolved by matching the head nouns of DDs with those of their antecedents) and larger situation/unfamiliar descriptions (identified by certain syntactic struc- tures, as suggested in (Hawkins, 1978)). In this paper we describe our subsequent work on bridging DDs, which involve more complex forms of common- sense reasoning. 2 Bridging descriptions: a corpus study Linguistic and computational theories of bridg- ing references acknowledge two main problems in their resolution: first, to find their antecedents (ANCHORS) and second, to find the relations (LINKS) holding between the descriptions and their anchors (Clark, 1977; Sidner, 1979; Heim, 1982; Carter, 1987; Fraurud, 1990; Chinchor and Sundheim, 1995; Strand, 1997). A speaker is licensed in using a bridg- ing DD when he/she can assume that the common- sense knowledge required to identify the relation is shared by the listener (Hawkins, 1978; Clark and Marshall, 1981; Prince, 1981). This reliance on shared knowledge means that, in general, a system could only resolve bridging references when supplied with an adequate lexicon; the best results have been obtained by restricting the domain and feeding the system with specific knowledge (Carter, 1987). We used the publicly available lexical database Word- Net (WN) (Miller, 1993) as an approximation of a knowledge basis containing generic information. Bridging DDs and WordNet As a first experi- ment, we used WN to automatically find the anchor of a bridging DD, among the NPs contained in the previous five sentences. The system reports a se- mantic link between the DD and the NP if one of the following is true: • The NP and the DD are synonyms of each other, as in the suit -- the lawsuit. • The NP and the DD are in direct hyponymy relation with each other, for instance, dollar -- the currency. • There is a direct or indirect meronymy (part- of relation) between the NP and the DD. Indirect meronymy holds when a concept inherits parts from its hypernyms, like car inherits the part wheel from its hypernym wheeled_vehicle. 
• Due to WN's idiosyncratic encoding, it is often 522 necessary to look for a semantic relation between sisters, i.e. hyponyms of the same hypernym, such as home -- the house. An automatic search for a semantic relation in 5481 possible anchor/DD pairs (relative to 204 bridging DDs) found a total of 240 relations, dis- tributed over 107 cases of DDs. There were 54 cor- rect resolutions (distributed over 34 DDs) and 186 false positives. Types of bridging definite descriptions A closer analysis revealed one reason for the poor results: anchors and descriptions are often linked by other means than direct lexico-semantic rela- tions. According to different anchor/link types and their processing requirements, we observed six ma- jor classes of bridging DDs in our corpus: Synonymy/Hyponymy/Meronymy These DDs are in a semantic relation with their anchors that might be encoded in WN. Examples are: a) Syn- onymy: new album -- the record, three bills -- the legislation; b) Hypernymy-Hyponymy: rice -- the plant, the television show -- the program; c) Meronymy: plants -- the pollen, the house -- the chimney. Names Definite descriptions may be anchored to proper names, as in: Mrs. Park -- the housewife and Pinkerton's Inc -- the company. Events There are cases where the anchor of a bridg- ing DD is not an NP but a VP or a sentence. Ex- amples are: ...individual investors contend. -- They make the argument in letters...; Kadane Oil Co. is currently drilling two wells... -- The activity ... Compound Nouns This class of DDs requires con- sidering not only the head nouns of a DD and its anchor for its resolution but also the premodifiers. Examples include: stock market crash -- the mar- kets, and discount packages -- the discounts. Discourse Topic There are some cases of DDs which are anchored to an implicit discourse topic rather than to some specific NP or VP. For instance, the industry (the topic being oil companies) and the first half (the topic being a concert). Inference One other class of bridging DDs includes cases based on a relation of reason, cause, conse- quence, or set-members between an anchor (previous NP) and the DD (as in Republicans/Democratics -- the two sides, and last week's earthquake -- the suf- fering people are going through). The relative importance of these classes in our corpus is shown in Table 1. These results explain in part the poor results obtained in our first experi- ment: only 19% of the cases of bridging DDs fall into the category which we might expect WN to handle. Class # % Class # % S/H/M 38 19% C.Nouns 25 12% Names 49 24% D.Topic 15 07% Events 40 20% Inference 37 18% Table 1: Distribution of types of bridging DDs 3 Other experiments with WordNet Cases that WN could handle Next, we consid- ered only the 38 cases of syn/hyp/mer relations and tested whether WN encoded a semantic relation be- tween them and their (manually identified) anchors. The results for these 38 DDs are summarized in Ta- ble 2. Overall recall was 39% (15/38). 1 Class Total Found in WN Not Found Syn 12 4 8 Hyp 14 8 6 Mer 12 3 9 Table 2: Search for semantic relations in WN Problems with WordNet Some of the missing relations are due to the unexpected way in which knowledge is organized in WN. For example, our artifact I structure/1 construction/4 . 
part of housing building ~ lodging edifice " all /\ house dwelling, home /~ part_of specific houses blood family Figure 1: Part of WN's semantic net for buildings method could not find an association between house and walls, because house was not entered as a hy- ponym of building but of housing, and housing does 1 Our previous experiment found correct relations for 34 DDs, from which only 18 were in the syn/hyp/mer class. Among these 18, 8 were based on different anchors from the ones we identified manually (for instance, we identified pound -- the currency, whereas our automatic search found sterling -- the currency). Other 16 correct relations resulting from the automatic search were found for DDs which we have ascribed manually to other classes than syn/hyp/mer, for instance, a relation was found for the pair Bach -- the composer, in which the anchor is a name. Also, whereas we identified the pair Koreans -- the population, the search found a WN relation for nation -- the population. 523 not have a meronymy link to wall whereas building does. On the other hand, specific houses (school- house, smoke house, tavern) were encoded in WN as hyponyms of building rather than hyponyms of house (Fig. 1). Discourse structure Another problem found in our first test with WN was the large number of false positives. Ideally, we should have a mechanism for focus tracking to reduce the number of false posi- tives- (Sidner. 1979), (Grosz, 1977). We repeated our first experiment using a simpler heuristic: con- sidering only the closest anchor found in a five sen- tence window (instead of all possible anchors). By adopting this heuristic we found the correct anchors for 30 DDs (instead of 34) and reduced the number of false positives from 186 to 77. 4 Future work We are currently working on a revised version of the system that takes the problems just discussed into account. A few names are available in WN, such as famous people, countries, cities and languages. For other names, if we can infer their entity type we could resolve them using WN. Entity types can be identified by complements like Mr., Co., Inc. etc. An initial implementation of this idea resulted in the resolution of .53% (26/49) of the cases based on names. Some relations are not found in WN, for instance, Mr. Morishita (type person)-- the 57 year-old. To process DDs based on events we could try first to transform verbs into their nominalisa- tions, and then looking for a relation between nouns in a semantic net. Some rule based heuristics or a stochastic method are required to 'guess' the form of a nominalisation. We propose to use WN's mor- phology component as a stemmer, and to augment the verbal stems with the most common suffixes for nominalisations, like -ment, -ion. In our corpus, 16% (7/43) of the cases based on events are direct nom- inalisations (for instance, changes were proposed -- the proposals), and another 16% were based on se- mantic relations holding between nouns and verbs (such as borrou~,ed -- the loan). The other 29 cases (68%) of DDs based on events require inference rea- soning based on the compositional meaning of the phrases (as in It u~ent looking for a partner -- the prospect); these cases are out of reach just now, as well as the cases listed under "'discourse topic" and "inference". We still have to look in more detail at compound nouns. References Carter, D. M. 1987. Interpreting Anaphors in .Vat- ural Language Tezts. Ellis Horwood, Chichester. UK. Chinchor, N. A. and B. Sundheim. 1995. 
(MUC) tests of discourse processing. In Proc. AAA[ SS on Empirical Methods in Discourse Interpretation and Generation. pages 21-26, Stanford. Clark, H. H. 1977. Bridging. In Johnson-Laird and Wason, eds.. Thinking: Readings in Cognitive Science. Cambridge University Press, Cambridge. Clark, H. H. and C. P~. Marshall. 1981. Definite ref- erence and mutual knowledge. In Joshi, Webber and Sag, eds.,Elements of Discourse Understand- ing. Cambridge University Press, Cambridge. Fraurud, K. 1990. Definiteness and the Processing of Noun Phrases in Natural Discourse. Journal of Semantics, 7, pages 39.5-433. Grosz, B. J. 1977. The Representation and Use of Focus in Dialogue Understanding. Ph.D. thesis, Stanford University. Hawkins, J. A. 1978. Definiteness and Indefinite- ness. Croom Helm, London. Helm, I. 1982. The Semantics of Definite and In- definite Noun Phrases. Ph.D. thesis, University of Massachusetts at Amherst. Miller, G. et al. 1993. Five papers in WordNet. Technical Report CSL Report ~3, Cognitive Sci- ence Laboratory, Princeton University. Poesio, M. and Vieira. R. 1997. A Corpus based investigation of definite description use. Manuscript, Centre for Cognitive Science, Univer- sity of Edinburgh. Prince, E. 1981. Toward a taxonomy of given/new information. In Cole. ed., Radical Pragmatics. Academic Press. New York, pages '223-255. Prince, E. 1992. The ZPG letter: subjects, definete- ness, and information-status. In Thompson and Mann, eds., Discourse description: diverse analy- ses of a fund raising text. Benjamins. Amsterdam, pages 295-325. Sidner, C. L. 1979. Towards a computational the- ory of definite anaphora comprehension in English discourse. Ph.D. thesis. MIT. Strand, K. 1997. A Taxonomy of Linking Relations. Journal of Semantics, forthcoming. Vieira, R. and M. Poesio. 1997. Corpus-based processing of definite descriptions. In Botley and McEnery eds., Corpus-based and computational approaches to anaphora. UCL Press. London. 524 | 1997 | 72 |
Compositional Semantics of German Prefix Verbs Maria Wolters Institut fiir Kommunikationsforschung und Phonetik University of Bonn Poppelsdorfer Allee 47, D-53115 Bonn mwo©asll, ikp. uni-bonn, de Abstract A compositional account of the semantics of German prefix verbs in HPSG is out- lined. We consider only those verbs that are formed by productive synchronic rules. Rules are fully productive if they apply to all base verbs which satisfy a common de- scription. Prefixes can be polysemous and have separate, highly underspecified lexical entries. Adequate bases are determined via selection restrictions. 1 The Problem Determining the semantics of unknown words which can be derived from lexicon entries is highly de- sirable for natural language understanding (Light, 1996). In this paper, I sketch a compositional ac- count of the semantics of German prefix verbs de- rived from a verbal base, concentrating on those verbs that can be generated by a productive word formation rule. Like (Witte, 1997), I assume that the meaning of most of these verbs can be derived compositionally by uni~'ing the semantic represen- tations of its constituents. Example: (1) durch + laufen ('through + to run') =~ durchlaufen ('to run through') This is an instance of a common rule which can be summarized informally ms (2) 'durch' + VERB[+motion,+agentive] ::~ VERB through a space When a prefix verb is lexicalized, its meaning fre- quently shifts due to language change and metaphor- ical usage (Mayo et al., 1995). For example, 'durch- laufen' is mostly associated with the meaning "pass- ing through all stages of a process": (3) Er durchl£uft die Schulung. He passes through the training. 2 The Semantics of Prefix Verbs Frequently, the prefix modifies features of the base verb such as valency or aspect 1. For example, while 'eilen' ('to haste') is an activity, 'etw. dureheilen' ('to haste through sth.') is an accomplishment. I assume that the prefix entry provides a highly un- derspecified blueprint of the structure of the prefix verb; therefore, I regard the prefix as the head of the prefix verb (but see (Bauer, 1990)). The values for all features of the prefix verb are obtained from the base verb via structure sharing, except for basic morphological information and the information to be modified. In other words, the val- ues of all unmodified features of the prefix verb are token identical with the corresponding values of the base verb. Most prefixes appear in distinct but semantically related rules, resulting in polysemou,s prefixes. For example, combined with some stative verbs, 'durch' signifies "'VERB during a certain period of time", as in (4) durch + leben ('through' + 'live') =~ durchleben ('live through:) Specifying the set of adequate bases implicitly by selection restrictions allows to elegantly capture gen- eralizations. For example, we can specify at the feature structure for verbs of motion that they can only combine with the instance of "durch' denoting "VERB through a space". The productivity of a word formation rule is a complex notion (Kastovsky, 1986; Bauer, 1988; Mayo et al., 1995). For our purposes, a rule is pro- ductive if it applies to all bases which satisfy a com- mon description such as "'state" or "transitive verb". A rule only provides patterns for analogical forma- 1Here. aspect denotes certain general verb classes (Binnick. 1992; Comrie. 1992) such as state, activity, accomplishment, and achievement (Vendler, 1957). 
525 tions; the frequency of application and acceptability of results also indicate its degree of productivity. 3 Prefix Semantics in HPSG The main advantage of HPSG (Head Driven Phrase Structure Grammar, (Pollard and Sag, 1994), for German see e.g.(Kathol, 1995)) is that it is both a formalism with strong ties to logic and knowledge representation and a linguistic theory. Much re- search in HPSG focuses on the structure of the lexi- con, e.g. (Davis, 1997). However, work on semantics and morphology in HPSG is relatively scarce. 3.1 Previous Work lVlost HPSG work on German affixation focuses on the suffix -bar, which can combine with verbs, most of them transitive, to form an adjective. (Krieger and Nerbonne, 1992) (KN) assign sepa- rate lexical entries to affixes and express selection restrictions by typing and subcategorization frames. In their model, -bar is of sort bar-surf and subcate- gorizes for verbs of sort bar-verb to form adjectives of sort bar-comp-adj. Complex words have a headed binary structure, with the affix as head. In keeping with the I-IPSG Semant, ics Principle, the semantics of the complex word is structure shared with the semantics of the head. (Riehemann, 1993) found that subcategorization frames were incompatible with her data. Instead of a word syntactic approach with separate lexical entries for affixes, she describes the formation of bar- adjectives via a lexical inheritance hierarchy of sorts. Different sorts correspond to different types of verbal bases (transitive, dative, etc.). New adjectives are formed in analogy to existing ones. Although Riehemann's approach is very elegant, it is not adequate for verb prefixes. Most prefixes can be separated from the verb depending on their phonological level, e Example: (5) Ich mache die Tiir zu. ('I close the door'; zumachen = 'to close') Therefore, a word syntactic approach and separate lexicM entries for verb prefixes may well be adequate. (Witte, 1997) also advocates a word syntactic ap- proach. His semantic representation relies on (Davis, 1997). (Light, 1996) bases his semantic representa- tions on first order logic, but he does not use HPSG. 3.2 Verb Prefixes Fig. 1 presents the prefix-related part of the sort hierarchy. The sort verb-prefix specifies typical lea- 2 Le.,dcM Phonology (Mohanan, 1987) assumes several levels of rules. verb-prefix durch durch_l dutch2 Figure 1: Part of the sort hierarchy for verb prefixes tures of verb prefixes. Each prefix p is assigned a sort p with subsorts Pl ..... Pn for each potential mean- ing. Relevant verb classes, such as semantic fields or Vendler classes, are also specified using sorts. Following KN, I assume that the prefix is the head of complex affix words, but like Riehemann, I do not assume a binary structure. The internal structure of a complex derived word is given in Fig. 2. Morpho- logical information is given at the feature MORPII. MORPtIILEVEL specifies separability (1 - unsepara- ble, 2 - separable). MORPHIDTRS the internal struc- ture, and MORPHIB.-kSE the base form. Each verb has a complex feature PREFIX located at SYNSEMILOCICAT. FOr each prefix p, the value of the subfeature PREFIXIp points to the adequate prefix meaning. For example, if the instance of 'dutch' corresponding to (2) is labelled dutch_l, we get PREFIX]DURCtI: 1 in the lexical entry for 'eilen'. A verb can only combine with prefixes for which an instance is specified at PREFIX. Regarding se- mantics, we focus on aspectual classes. 
Regarding semantics, we focus on aspectual classes. The semantic framework chosen here is Lexical Conceptual Structure, which has been applied successfully to the interface between morphology and lexical semantics by e.g. (Rappaport Hovav and Levin, in press). The representation of Vendler classes is adapted from (Van Valin, 1990). Class is specified at SYNSEM|LOC|CONTENT|CLASS.

Prefix entries are heavily underspecified. For example, the entry for 'durch' can be derived from Fig. 2 by deleting all information specific to the COMPlement 'eilen', except for the value of PREFIX|DURCH. The semantics of the complex word is composed at the head and then structure shared with the whole word, in accordance with the Semantics Principle. A prefix can only be combined with verbs with an adequate feature value at PREFIX.

[Figure 2: Partial lexical entry for 'durcheilen' as an attribute-value matrix, with MORPH (base built by concatenation), SYNSEM|LOC|CONT (CLASS with CAUSE/BECOME, nucleus RELN eilen with an AGENT NP), and DTRS for the prefix and the base. Tag 4 refers to the direct object, tag 3 to the subject.]
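To make the feature geometry of Fig. 2 more tangible, here is a rough rendering of a 'durcheilen'-style entry as nested Python dictionaries (a hypothetical sketch of what the figure conveys, not the paper's actual AVM; HPSG tags are simulated by reusing the same objects):

```python
# Hypothetical sketch of a 'durcheilen'-style lexical entry. Shared
# objects stand in for AVM tags (structure sharing).

agent = {"role": "agent", "cat": "NP"}      # tag [3], the subject
path_obj = {"role": "theme", "cat": "NP"}   # tag [4], the direct object

durcheilen = {
    "morph": {
        "base": "durch" + "eilen",          # base built by concatenation
        "level": 1,                         # unseparable prefix
        "dtrs": ["durch", "eilen"],         # internal structure
    },
    "synsem": {
        "loc": {
            "cat": {"comps": [path_obj]},   # 'durch' contributes the object
            "cont": {
                # assumed aspectual class, roughly CAUSE/BECOME in LCS terms
                "class": "accomplishment",
                "nuc": {"reln": "eilen", "agent": agent},
            },
        }
    },
}

# Structure sharing: the complement and the semantic argument are the
# very same object, as a shared tag would indicate in the AVM.
assert durcheilen["synsem"]["loc"]["cat"]["comps"][0] is path_obj
```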
4 Conclusion and Further Work

The representation of the relevant semantics will be formalized more rigorously. Hypotheses will be checked against the data, using a more refined, statistically motivated notion of productivity. The theory will also be implemented in an adequate lexical knowledge representation language.

Acknowledgements

Thanks to Bernhard Schröder and three anonymous reviewers for their valuable comments. This research was partially supported by the Studienstiftung des deutschen Volkes and ERASMUS.

References

L. Bauer. 1988. Introducing Linguistic Morphology. Edinburgh University Press, Edinburgh.
L. Bauer. 1990. Be-heading the word. J. Linguistics, 26:1-31.
R. Binnick. 1992. Time and the Verb. Oxford University Press, Oxford.
B. Comrie. 1992. Aspect. Cambridge University Press, Cambridge.
A. Davis. 1997. Lexical Semantics and Linking and the Hierarchical Lexicon. Ph.D. thesis, Department of Linguistics, Stanford University.
D. Kastovsky. 1986. The problem of productivity in word-formation. Linguistics, 24:585-600.
A. Kathol. 1995. Linearization-Based German Syntax. Ph.D. thesis, Department of Linguistics, Stanford University.
H.-U. Krieger and J. Nerbonne. 1992. Feature-based inheritance networks for computational lexicons. In Ted Briscoe, Valeria de Paiva, and Ann Copestake, editors, Inheritance, Defaults and the Lexicon, chapter 7, pages 90-136. Cambridge University Press.
M. Light. 1996. Morphological Cues for Lexical Semantics. Ph.D. thesis, Department of Computer Science, University of Rochester.
B. Mayo, M.-T. Schepping, C. Schwarze, and A. Zaffanella. 1995. Semantics in the derivational morphology of Italian: implications for the structure of the lexicon. Linguistics, 33:583-638.
K.P. Mohanan. 1987. The Theory of Lexical Phonology. Reidel, Dordrecht.
C. Pollard and I. Sag. 1994. Head-Driven Phrase Structure Grammar. University of Chicago Press.
M. Rappaport Hovav and B. Levin. in press. Morphology and lexical semantics. In A. Zwicky and A. Spencer, editors, Handbook of Morphology. Blackwell, Oxford.
S. Riehemann. 1993. Word formation in lexical type hierarchies: a case study of bar-adjectives in German. Master's thesis, Universität Tübingen. SfS-Report-02-93.
R.D. Van Valin. 1990. Semantic parameters of split intransitivity. Language, 66:221-260.
Z. Vendler. 1957. Verbs and times. Philosophical Review, 66:143-160.
J. Witte. 1997. Compositional semantics for resultative separable prefix constructions in German. In Proc. HPSG 4.